Guitar Lessons by Chip McDonald - chip@chipmcdonald.com

Wednesday, July 6, 2022

Approaching the Uncanny Valley of Guitar Playing

  I've written elsewhere about how a.i./machine learning is going to completely change the musician landscape, as far as production is concerned.

 People do not realize the size of the leap forward we're in the middle of right now.  I won't rehash what is available elsewhere, except to say "it's probably not what you imagine it to be".  What is happening with PyTorch/Magenta/DeepMind etc. is as big as the Internet was, and will change us and our lives as much as it has.

 I recently tried the Magenta project's plugin.  The sax version is amazing; it does what I promised would one day happen with ML/a.i. software, in that it doesn't just make the input come out with the timbral sound of "a saxophone" - it reproduces the inflections, too.

  To the reader: if you think what I'm talking about is akin to playing a synthesizer with a saxophone patch, you're wrong.  It is a completely different thing.  The program is doing things the developers literally don't totally understand; it is working at a pure dataset level.  It is NOT a DSP-based wave-shaping technology.
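 To make the distinction concrete, here is a minimal sketch of the idea in Python, following Magenta's public DDSP timbre-transfer demo.  The input waveform is reduced to pitch and loudness curves, and a model trained on saxophone recordings re-synthesizes audio from those curves.  The exact function names vary between ddsp versions and the checkpoint path is hypothetical, so treat this as a sketch rather than a recipe:

    import ddsp.training
    import librosa

    # Load a monophonic input performance (e.g., a guitar line).
    audio, sr = librosa.load('guitar_line.wav', sr=16000, mono=True)

    # Reduce the recording to the features the model actually "hears":
    # fundamental frequency and loudness over time - not the raw waveform.
    features = ddsp.training.metrics.compute_audio_features(audio)

    # A model pre-trained on saxophone data maps those features to
    # synthesizer controls it learned from the dataset itself.
    model = ddsp.training.models.Autoencoder()
    model.restore('/path/to/sax_checkpoint')  # hypothetical checkpoint path

    outputs = model(features, training=False)
    sax_audio = model.get_audio_from_outputs(outputs)

 Nothing in that pipeline filters or wave-shapes the original signal; the saxophone sound is generated fresh from what the model learned, which is why the inflections come along with the timbre.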

 It's a bit tricky to handle.  I have to imagine saxophone playing to match the dynamics, attack and vibrato - but when you get it right it's uncanny.  I wish the violin version worked as well.  It's very creatively liberating: as far as I'm concerned, I can now add a "tenor sax" part to a recording.

  I presume there will be a guitar equivalent soon.  You'll be able to whistle and have it come out as an inflected guitar sound.  The question is the quality of the model training, which governs the output; as I predicted, it will soon be possible to have a trained model "correct" a player's dynamics and inflection to sound equivalent to "Stevie Ray Vaughan", "Brian May", etc...
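 To be clear about what "correcting" at the feature level could mean, here is a hypothetical illustration in Python - my own sketch, not any shipping product's algorithm.  It pulls each pitch toward the nearest equal-tempered semitone and smooths the dynamics before resynthesis, which is all a "sounds like a Famous Player" preset would need to do with the pitch and loudness curves described above:

    import numpy as np

    def correct_features(f0_hz, loudness_db, strength=0.8):
        # Convert Hz to fractional MIDI note numbers.
        midi = 69.0 + 12.0 * np.log2(f0_hz / 440.0)
        # Pull each frame partway toward the nearest semitone.
        midi += strength * (np.round(midi) - midi)
        f0_corrected = 440.0 * 2.0 ** ((midi - 69.0) / 12.0)
        # Compress dynamics toward a running average (a steadier "hand").
        kernel = np.ones(32) / 32.0
        smoothed = np.convolve(loudness_db, kernel, mode='same')
        loud_corrected = loudness_db + strength * (smoothed - loudness_db)
        return f0_corrected, loud_corrected

 The strength parameter is the interesting dial: at 0 you hear the player, at 1 you hear the preset.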

 Unlike the saxophone, though, I think the variety of guitar sounds will make it impossible for one model to be flexible enough to cover "all" styles.  And using it will pigeonhole your choices into a certain way of playing, just as the "saxophone" plugin does.  You can't "play" the saxophone plugin like a guitar and get a good result.

 So in the future, people will specialize in how they operate with their ML sounds - after a period of people being confused and impressed by the realization that the technology actually works.  A period equivalent to the "keyboard popped-octave sampled bass line" era, followed by maturity.

 One downside is that it will make mocking up a cliche clone of a Known Famous Song very easy, and many will do it and garner kudos for it.  The confusing effect of this is going to be a big negative.

 Another potential negative: a company will jump on this to put it in a guitar amp.  I've been saying this for a while: a beginner amp with this technology could have a couple of presets that not only yield an output that sounds just like the original recording of a Famous Player, but correct dynamics and probably pitch as well.  Harmony will be a problem for a few years, I think, but "lead guitar playing" is about to undergo a disaster in that people will think even less of the skillset required to ACTUALLY DO A GUITAR SOLO.

 Hopefully, a renaissance mentality will become a trendy thing.
