Guitar Lessons by Chip McDonald - chip@chipmcdonald.com

Wednesday, November 13, 2019

"... Two EEs Walk Into a Bar, and..."

 I've just wasted time reading a Gearslutz thread-war regarding IRs (impulse responses).

 Briefly, if you don't know what an IR is: an impulse response is a data file of  the "captured" transform characteristics of a signal through a device.

 In other words, the math behind what happens to a signal when it is changed by running it through a device.  In the guitar idiom, through an amp or, more specifically, a speaker cabinet+microphone+microphone preamp.  The process involves running a test tone through the speaker, recording it on the computer, and converting it to an IR file.  You then apply it in other software to replicate what the speaker does to the output signal of a guitar amp, so you get its sound without having to play at a loud volume.

 Most "amp modelers" use this technology called "convolution" math.  Some use it to replicate different points of the circuit of an amp as well as the speaker, others just the aggregate result.

  On Gearslutz it's the usual argument, "can you tell a difference?", in another thread "are they ok or not?", a classic battle in the vein of digital vs. analog, records vs. CD, MP3 vs. lossless.

 The problem is that it seems nobody is paying any attention to the application differences, while also ignoring reality.


 Reality is that an IR does nearly perfectly capture a snapshot in time of *one state* of what an amplifier or speaker does to a signal.  Without a doubt, if one bashes an open A chord through a guitar rig that's been captured with an IR, then plays the open A chord the same way through the IR, it will be indistinguishable. 

Stray from playing that open A the same way and you'll also stray from identical results.  How much will depend on how stressed the speaker was initially, and how much air loading was happening inside the cabinet.  This is also discounting the electrical non-linearities of the amplifier-to-speaker combination.

  What that means is this:

 If you're playing guitar with a constant, steady-state signal, a speaker IR works great.  What that means is that if you're playing modern metal with a very compressed (non-dynamic) amp sound, AND you don't want the sound of a speaker and cabinet being killed with volume, it can work great. 

 Or if you're playing non-dynamic, clean-guitar-based music - like modern pop-country.  It works maybe a little less well in this case, because often there are passages where the instantaneous level deviates greatly on the "twangy" attack character of some sounds.  It's still plenty convincing for live use.

 Where IRs start to fail is when the input signal varies drastically.  This happens in blues-based music.

 Blues-based music is rife with multiple levels of dynamics, where chords or notes are struck differently to emphasize a change in tonal character.  This is typically done with an awareness of the historical context of "classic loud guitar sounds" - which involves a fair amount of speakers behaving erratically and non-linearly relative to the input.

 Furthermore, a lot of that historical context involves recordings with a fair amount of ROOM AMBIENCE.

 What it comes down to is that if you try to play "Since I've Been Loving You" by Zeppelin - with its hyper-delicate, soft, guitar-volume-turned-down light touch morphing into a hard-hitting loud sound - through an IR, it doesn't sound identical.

 Because the IR only perfectly captures the speaker in one state: the volume at which the IR was recorded. 

 SPEAKERS DO NOT SOUND THE SAME AT ALL VOLUMES.  

 ROOM AMBIENCE DOES NOT SOUND THE SAME AT ALL VOLUMES.

 Impulse responses only take a snapshot of one state of the speaker, one state of the room.  If you match that perfectly you get perfect results.  Otherwise, you're in an Uncanny Valley scenario.
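 If you want to see why in concrete terms: convolution is a linear operation, so scaling the input just scales the output - the "character" never changes with level.  A tiny sketch (assuming NumPy/SciPy, with random data standing in for a real IR and guitar track):

```python
# Convolution is linear: playing 10x "harder" through an IR gives exactly the
# same response, just 10x louder - no new harmonics, no speaker breakup.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
ir = rng.standard_normal(256)      # stand-in for a captured cabinet IR
x = rng.standard_normal(4096)      # stand-in for a guitar signal

quiet = fftconvolve(x, ir)
loud = fftconvolve(10.0 * x, ir)   # same signal, hit 10x harder

print(np.allclose(loud, 10.0 * quiet))   # True: identical tone, only louder
```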

 People choose to enter into this "argument" while ignoring that the application, the GUITAR SOUND, has everything to do with the results.  For some things it's fine.  For others it's a degree of "almost there". 

 In the thread I'm thinking about there are 2 guys, one with a million dollar studio, both with electrical engineering degrees.  They're going back and forth on the following 2 truths:


A) In perfect steady-state situations, IRs create results identical to what the captured hardware creates.

B) In non-steady-state situations, the results are not perfect.


 It's alienating to me that 2 people who should be educated well enough to realize there ISN'T a dichotomy there will present A or B as mutually exclusive premises. 

"... but listen, you can't hear the difference!" (in a perfect clinical example with identical input signals)

"... but listen, it doesn't sound anything like a real speaker cabinet! (under non-clinical, multi-variate state input signals)


 This is a fundamentally basic thing to grasp, but apparently nobody does, and everyone is willing to expend enormous amounts of energy arguing that A or B negates the other.  This shouldn't be a high-IQ thing, but simple observation of reality.  Yet I haven't seen, ever, a cogent and self-aware evaluation of the upsides and downsides to the digital vs. analog guitar amp argument.

 In certain specific scenarios digital works well, and in some cases it's a much better choice than analog.  However, it doesn't cover all scenarios that an analog arrangement can occupy.  

 So please, world: stop "demonstrating" amp modelers by just bashing out chords with a lot of distortion through a close-mic'd speaker IR.  You need to demonstrate different dynamics on the input, with different registers: low chords, high chords, low notes, high notes, complex chords, all at different volumes.

 You're not going to get the non-linear mechanical effects of the speaker cone distorting, or a cabinet that is being pummeled by pressure into extreme resonant ringing, or the spectral shifts from low to high volume.  It's not going to feed back the same.  An IR CANNOT DO THAT. 

 But you probably don't *need* that unless you're trying to replicate a Hendrix, Page, Jeff Beck or Van Halen live recording. 

 Yes, modeling amps will do a great job matching certain sounds at a specific input structure.  They'll fail on other things - but it may not matter based on what music you play.

 It also is a better option if you have to make compromises because of volume or lack of the best selection of gear in the analog domain.  

 It's funny to hear people knocking digital amps when they only have a small, ersatz variation on some sort of Big Classic Tube Amplifier that they can't play at the crazy volumes their favorite recordings were made at.  Or conversely, people who play very steady-state, compressed metal - who never do anything that requires a finessed, dynamic touch creating a signal of many varying tonal inflections at different levels - insist digital amps are perfect.

STOP MAKING NON-SEQUITUR ARGUMENTS!  Creating a dichotomy out of presenting a single hypothesis doesn't mean you're finished with your thesis.  That's not being scientific.  It's just being ignorant, and this kind of presumed exclusionary single-hypothesis basis for everything is wrecking modern society.  Seldom will a problem consist of a single parameter.  Digital amps vs. analog is not a single-parameter topic.

































Monday, September 16, 2019

Maxing Out on Music: Glenn Gould Well Tempered Clavier Book I

 I'm not doing YouTube because it seems like everything I think of trying either is being done by someone else, or it's not possible these days due to current YouTube shenanigans involving "copyright infringement" (that is actually covered by Fair Use laws, but hey... corporations get to do what they want).

 One thing would be a music appreciation angle involving "Why You Should Listen to This".  Which has been done by others, I know.  It seems a popular thing with a lot of my students, explaining aspects of music/musicians that may not be obvious.  In turn you have more fun - you can enjoy an art form that much more fully.

 It's literally why my life has been spent on music.

 So today I find myself putting on Glenn Gould's _Well-Tempered Clavier, Book I, BWV 846-853_.

 This is maybe my go-to for "finding solace music". 

"Chip, there isn't any guitar!  Why should I listen to this?" I can hear you thinking.


 The first thing that comes to mind is that compositionally it's lean.  Joe Bach had an idea in mind: in these pieces he was demonstrating the utility of the new technological revelation of well-tempered tuning.  In a nutshell, prior to this era music had to be written with the awareness that only certain chords could be in tune, and certain combinations of notes would be more out of tune with each other.

 Bach used these pieces to demonstrate the harmonic interaction between notes in keys that previously would not have allowed these instances to happen sonorously.  He had a "harmonic agenda".

 Each prelude and fugue is a little more contained and stentorian than some of the more lurid things he's known for.  Perhaps a casual, less intensive approach was used.  They're not ultra florid, they're not overtly complicated.  In a sense it's "simple" music, very pure.

 Glenn Gould had a special approach that some didn't/don't like.  His (first) execution of Book 1 fits his style perfectly in my opinion. He's not hamstrung by garish ornamentation and loud trilling, ultra-bright "let's ignore the dynamics" thrashing. 

 It's delicate and nuanced, every last note.  Each prelude has a unique character, and he doesn't litter his performance with technique that he could have lathered on heavily had he wanted to.  A very respectful and artistic performance.  A great synergy. 

 He later made another recording that I don't care as much for, so I'm referencing just his first effort.  This recording falls into my "near perfection" category.  Nothing makes me annoyed about it, and both the composition and the playing are exemplary and smooth, measured. 

 Effectively the most anti-21st century example of music there is.

 Hard to appreciate without enduring countless renditions of Bach that are a little too fast... too dramatically slow... too laden with trills, bombastic accents, overly exaggerated execution.  When listening to this, try to note the dynamic shifts that happen subtly.

 In Prelude No. 5 in D, there is a descending flourish that Gould decidedly begins a little loud - and has an accent mid-run that again is a tiny bit louder... then diminuendos at a specifically measured rate.  A specific, French curve of a rate.

 Not LOUD AS POSSIBLE soft LOUD AGAIN soft.   Measured, artistic.

 On this recording he is constantly delivering an ebb and flow with nice shades of variance.  Completely selfish performance; he's not concerned with CRASHING against the listener's obdurate inability to detect flashy playing and technique.  It is using technique in the best way possible.

 I've gone weeks where this is all I listen to.  "Mainlining" music is integral to absorbing the essence of music in my opinion.  If you really like something, just listen to it as much as possible in a concentrated form.  If the reader has never done this, I suggest this is an important part of being able to create music; your subconscious mind is noting what the conscious likes.  The repetition, like learning technique on guitar, is reinforcing your personal "filter" for what comes out of you.

 Listen to it, and try to note the way he gets louder and softer, faster and slower, and the pacing.  The music itself - ultra Bach-logical, but it's not hitting you over the head.  Simple, lean and pure.












Monday, August 26, 2019

Why Not a Kemper/AxeFx/Helix?

  I'll admit; I've thought about one of the above not for the sound, but (ironically) for the limitations of having everything in a box.  That's all.  

 "Chip, that doesn't make sense, those things having hundreds of parameters!"


 Yes, but - not *near infinite* parameters.  And they do have limitations: a lot of choices, BUT only what you're given in the Box.

 Outside of the box: I've had ... let's say 100 amps.  100 pedals.  Some of which, had I been able to keep a handful would now be worth more than my mortgage.  The Boxes give you a semblance of that, yes. 

 But limited resemblances.  They can perfectly mimic a sound under certain conditions.  If you put a certain analog pedal in front of them - then the model doesn't work right. They can't alter your pickups, or your guitar.  Nor can they perfectly replicate every possible room sound, and every possible mic position. 

 In the real  world - in my house - I have a near infinite amount of combinations not found in those Boxes.  


 I have 2 different SM57 microphones.  A bunch of other oddball mics not found in the modellers: an Apex U47 tube mic clone.  A matched pair of MBHO small diaphragm condenser mics.  An MXL U-87 clone.  A Shure SM-81.  MXL R144 ribbon mic.  Some others.

 A Grace preamp.  A Presonus MP20 preamp modded with Jensen transformers and Burr Brown op amps. 

 A ton of pedals with non-exact copies in the above mentioned Boxes.  And amps and speakers.

 And an oddball place to record them in.

 I have spent an inordinate amount of time "testing" combinations.  Ultimately I won't go to using a Box because, even at worst, at least the sound I get sounds "real", as opposed to the struggle for "real" with the Boxes.  The downside being the infinite choices the above yields - more than in one of the Boxes.

 The Boxes feature mic placements in inches.  I'm here to tell the reader, if you didn't know, moving a mic 1 degree, a quarter inch in any direction, changes the sound "a lot".  You also have more choices than "dust cap, dust cap edge, cone, cone edge".  It goes on forever.

 So for me, one of those boxes might actually be a good idea, because they're actually limiting.  Most people find them good enough, or great.  I should get on with it...

 But that's not what instigated this blog.   I previously wrote about a.i. being used in a VST by 2020; well, that's definitely going to happen.  

 Now I'm going to go further.

 By 2021, a.i./machine learning will make recording and playing guitar a process unrecognizable by our perspective today.  2 years from now.

 The 2021 Modeling Box won't just be about models of whatever piece of gear.  It will be about nailing models of existing recordings.  It almost won't matter what guitar you put in front of it.  With 10 ms latency you'll have exactly whatever sound you want. 

 Same goes for recording.  All channels - bass guitar, drums, vocals - will yield perfect renditions of any recording.  So much so the a.i./ML plugin will substitute "something" no matter how far off the source is.  By 2022 it will be trivial to make a recording sound like anything with practically no technical knowledge.

 By 2022 this will be in all amps.  Which will lead to something "new" but old, and scary, which I'll explain in my next post. 









Thursday, July 18, 2019

You Must Learn Songs.

It's paramount.

 You're not trying to just learn technique or a series of techniques.  Not just "theory".  You absolutely have to couple that with experience in context, or it means nothing.

 "Music theory" is NOT a set of instructions.  It does not tell you what to do.  It *can* be used as a sort of default decision-maker when you're left with no ideas. 

 Why would you have no ideas, though?

 Because there is an old saying in computer programming: GIGO - "Garbage In, Garbage Out".  If you "feed" your musical mind with junk, you're going to have junk ideas.  Furthermore, if you don't feed it anything at all, you're not going to have any ideas!

 So you have to listen to, and love music.  But to harness it you have to learn it.  Not just one song, but a lot of songs, and a lot by at least one artist.  That gives you a context to work from.  Without that context you're left with theory.  Which is like having a dictionary and wanting to build a car by reading the definition of "combustion engine".  You've got to have hands-on experience with what a car is.  Not just pictures, not just definitions, not just videos of people telling you what they are. 

  I would guess from what I've seen of students, you've got to get in the ballpark of around 30 songs on the low end, optimistic side, to a more realistic 50+ songs before you've "fed" your brain enough info to start having creative ideas - notions of how *you* would want a piece of music to go. 

 This used to be something of a default in the process of becoming a musician, because there was a time when at least half of everyone I taught inevitably ended up in the proverbial First Cover Band, where one was required to learn a set of music in order to perform in front of an audience. 

 For whatever reasons this process vanished a few years ago.  Some people play at church, but this is not the same process as getting together with friends/people of similar tastes, playing music YOU decided you like.  You end up learning more, practicing more, and being more enthusiastic about music in general.  That's not to say playing at church can't be fun, but it's usually a more regimented environment where someone has predetermined what the music will be, with limited rehearsal.

 That initial impetus of "I want to play in a band!" used to get a lot of people to that 30 song mark.  I think now people look at YouTube and feel the bar has been raised so high that there is no point, and that's a fallacy.  For every "perfectly groomed, perfectly executed pop band" you see on YouTube there are a hundred you don't see that are having fun, performing without the artificial "reality" of YouTube being the goal.  Having said that, there is nothing to keep a "band" from "performing" on YouTube - I've seen bands do this, I think it's a good idea. 

 Why people don't do it is because they don't realize that all of the greatest bands on the planet had a nascent period where they weren't great.  You're only seeing the end result, and on YouTube under the Most Ideal Conditions.  The term "garage band" has completely legit origins, and everybody you like musically has probably been in a "garage band" at some point.

 So you've got to learn songs, even if it's in the context of a "virtual band", or from a mere self-discipline / sense-of-accomplishment standpoint.  I got in my first band because I'd already learned "a lot" of songs on my own, which allowed me choices I wouldn't have had initially otherwise.  I'm telling you, things will always seem a good bit confusing until you cross that 30-song threshold, so the sooner you get there the better!

$.10







 






Monday, June 24, 2019

Toolboxes

 I was listening to the David Gilmour guitar auction podcast today, and heard him play the 12 string he used on "Wish You Were Here".





 He kinda played an excerpt of the beginning, elaborated on it.  What he did kind of illustrated the origins of the riff.  

 Part of what he did invoked a bit of a stylized ragtime piano rhythm.  An old upright piano, which the never-perfectly-in-tune 12 string somewhat invokes, would be the conduit here.

 Another thing he did was a sort of Lead Belly quasi-stride piano blues, except it wasn't a pure blues but detoured into a post-early 60's U.K. folky chord voicing territory.

 Very informative.  The first thing he did was the rendition of the opening riff to WYWH, but he played on the accents a little differently.  I interpret these accents as being "within the parameters of ragtime proto-jazz embellishment".  I think he perceives it like that as well, the significant part being while it's related to what Lead Belly did, I think Gilmour is not thinking of the parameters he was operating under (while demonstrating on this podcast) as being "Lead Belly" per se, but...

... a tool in the "toolbox" that renders "Wish You Were Here".

 He's got the ragtime rhythm intent, but also the Lead Belly blues shuffle, and Lead Belly melodic embellishment intent.  As well as the folk voicing influence, which I would suggest was a "silent" influence on a lot of the post-60's classic rock guitar players (Jimmy Page...).

 The point of this post is that he had an operating skill and knowledge base of these elements prior to creating WYWH.  It wasn't a linear "learn this chord, then this phrase" but a combination of things from a "tool" standpoint that he liked.  The result is that the gestalt of Wish You Were Here is those things, and in theory one could take the same toolbox and make other "Wish You Were Heres"; not that you should, but one should realize that you've got to have tools in the toolbox to build something.




Wednesday, May 15, 2019

Solos of the Unknown Guitar Heroes

 As mentioned in the previous blog, a lot of my favorite solos are not by my "favorite" guitar players, and not even by "known" players.


 In fact, embarrassingly not even known to me, really! I'm horrible with names unless they're odd/atypical, and the music is primary to me: in some of the cases below I've had to look up the names to remember them.  Just so you know.

 "My Love" - Paul McCartney/Wings, 1973 Henry McCullough. 

  A perfect song and a perfect solo.  I like knowing McCartney's... wing man... He had played behind Hendrix and Pink Floyd - so I suppose it shouldn't be a surprise I like this a lot?  Denny Laine heard me play this solo in the Beatles tribute band I was in, and approved of the rendition.  I wish I'd had more time to do it properly, since it's at the top of the list, eh?

 "Goodbye to Love" - the Carpenters, 1972.  Tony Peluso

   An odd name, and I still can only barely remember it; he was well known back then as a studio guy I believe, but I was just 4 years old when I first heard that.  It's simple, a recapitulated melody, executed perfectly with great inflection.  A perfect solo IMO to a perfect song.  A sort of revolutionary one to some people at the time, since the Carpenters were considered "soft rock" and a CRAZY, DISTORTED GUITAR SOLO was considered a WILD THING TO DO in 1972.  That was like, Pink Floyd, man... or some sort of attitude was given regarding it by Some People at the time, I gather.

 I didn't care, I was 4, I liked the song a whole lot.


"Easy" - Commodores, 1976, Thomas McClary.  

 I'm going into Speculative Musical Anthropology Mode here and saying... I think this was inspired by the _Goodbye to Love_ solo.  Maybe.  I don't know, of course.  Regardless, I think it's a brilliant solo, the inflections are quirkily perfect and expressive.  Listening to his solo record right now - there was a part on one song where the vocal arrangement echoes Queen, and the one solo I've heard so far sounds Brian May influenced?  Curious.  Huh, on the song "Whatsoever Things" there's a bit that definitely sounds Queen inspired.  This is interesting to me, because I'm wondering if he has similar tastes to me (although the style of music is not my taste at all), or does he happen to listen to some people I like a lot (Queen) and that's coming out?  Curious.  It even seems he's using sort of a Brian May sound on certain parts.  It's like bipolar R&B / Queen?

 Hmm, now he's got a song that sounds King's X-influenced?  Hmm.  Solo in this song: yeah, he likes Brian May.  Staccato phrasing.  Hmm... (looking up Mr. McClary) It would seem Mr. McClary's net worth is over $74 million????  THAT'S A GUITAR SOLO!  Maybe this should be at the top of the list???

 Listening to another song by McClary (sorry to detour this blog post, I didn't expect to listen to this - see, this is HOW YOU SHOULD USE SPOTIFY, blast it!!!  Do your research, people!).  Hmm.  I would guess I'm hearing a Line 6 product as well.

 Wow, that was a detour, sorry.   Now I'm thinking maybe the "Easy" solo was actually possibly a sort of Brian May influenced approach?  One of the bends I now think could be indicative, but in 1975 - the same year Bohemian Rhapsody came out - that would have been very, very novel.  Interesting!

 Now I'm completely derailed from the original intent of this blog post.  Great.

"I Won't Hold You Back Now" - 1982, Steve Lukather.  

Well, here's an outlier: I know it's Steve Lukather, but outside of the Guitar Hero community it's "Toto, soft-pop rock".  It was music acceptable to my mother - I remember hearing this song on my parents' alarm clock going off to wake them up to "wake" me up for school in 1982.  This is before I played guitar, but I loved the solo section of this song.  It's like a lost Pink Floyd song - it's a solo Gilmour would be proud of.  The horn arrangement is sublime, so it's a great bed for a great solo.  But it's a perfect example of simplicity combined with subtle, perfectly nuanced execution.  There isn't much to it except perfect taste.

 I wish Spotify allowed playlists based on A-B looped offsets.  Oh well.

 I think I've failed on this blog post after the derail, sorry to the reader, but perhaps it was entertaining...? 



Wednesday, May 8, 2019

R577X Polymorphism α-actinin-3 "Speed Gene": I was at least 70% Wrong In My Last Post!

  Ok, well... maybe speed isn't 100% up to how patient you are.

 I'm different from at least 30% of the population - perhaps I'm in a tiny fraction of it.  When I was a kid there were some anomalies.  

 I could always run faster than anyone in the 3 schools I went to.  Not for long - but in 30 meters I would definitely be ahead.  The coaches wanted me to be on the track team after seeing me zip around playing football during p.e., but I knew I had zero endurance capability - doing 50 laps around the football field was always grueling.

 In BMX I never won any races.  In fact, I found it insanely difficult to keep power going through a whole race.  One of the few times I've passed out in my life was between heats in a BMX event.  BUT - I was always ahead of everyone into the first corner, despite having a very steep "wrong" gearing on my bike.

 In 5th grade I could leg press the entire amount of weight on the machine - at least 1 time. Which was more remarkable given I was almost the smallest guy in the school.  BUT - I did not have any special endurance at "normal" weights.

 Skateboarding came easy to me.  Anything fast and quick.  

 When I started playing guitar, I was instantly "fast".  I didn't find speed to be particularly challenging on guitar.   Playing an entire Sor piece from an endurance standpoint was different, probably a big reason I didn't pursue classical guitar, but preferred learning passages from classical violin pieces.

 My wife recently got us the 23andme genetic testing service for my birthday.  As it turns out I carry the R577 alpha actinin-3 "elite power athlete-sprinter gene": my muscle composition likely is more like 80% fast twitch/Type II fiber instead of 20%, versus "slow, endurance" Type I.  




 So it would seem I have an advantage that I suspected, but had no direct evidence of.  Additional bonuses of this polymorphism are recovery time and training response.  As an aside, I have always found my wife's muscle cramps to be rather... disturbing, kind of surreal.  Because - I've never had a muscle cramp.  I haven't read that that has something to do with R577X alpha-actinin-3, but I would suggest it probably does ("everyone" has muscle cramps?  Where the muscle just.. does stuff... by itself...?  Yikes).

 The flipside to this: I'm at a disadvantage for muscle fatigue!  So it's more likely that doing those bar chord exercises is more difficult for me than for you!  Then there is the variant that bestows endurance, which favors the classical guitarist greatly I would imagine, being able to do multiple hours of bar chord-form based effort. 

 All of this follows from my observations of giving guitar lessons.  Of the Famous Guitar Players you know, there are the few that are in that "upper .1% speed" bracket.  But here's the takeaway: it's more useful in a pragmatic, musician sense, to NOT have this gene!   

 Speed is not inherently artistic.  Nor is it particularly special today, as it has been allowed to become a trite gimmick. The person with the endurance variant is more likely to find that trait useful.  And given that the fatigue I get from holding bar chords for a long time is probably greater than the average population - that's a great disadvantage as a guitarist. 

 You now have an excuse not to be able to play "ultra fast", maybe... BUT - for the rest, you probably have an advantage over me!  So go practice!

 



Thursday, April 18, 2019

You Are As Fast As You Are Patient.

"That's Impossible!!!!"



 I realize that there are a lot of things a budding guitar player sees other people do that seem, effectively, humanly impossible.  In the sense that "most people" do not realize how straightforward it is to develop muscle memory to accomplish a kinesthetically complex task.

 I will explain more later, but first I'd like to present the following video.  One should watch it with the understanding that the complexity of this kid's movements are not that much different than what is required mechanically to play guitar.  He has been able to get to the skill level he's at because he has a very limited set of movements to execute, and in turn has spent a lot of effort working very specifically on the repetition of those movements:



 I think the viewer will agree visually it looks almost fake; he's at the limit of human reflexes, our brain tells us "that's outside 99.9% of our experience".  

 What is actually outside of 99.9% is the number of people with OCD enough to pursue something repetitively with complete precision.  That kid doesn't just want the record for stacking cups, he has a need to be precise. It is the satisfaction of executing a movement with precision that made him end up fast, not trying to be fast.

 The person that is innately careful, but initially slow, will automatically end up being "fast".  I've seen it very often.  That is not to say the person that is overly cautious, or careful to a fault, but the person that is measured and weighted towards a moderate speed  instead of towards their limit.  

 I say this often: being kinesthetically fast is as easy as being patient. 

 

 


Tuesday, April 9, 2019

Digital Knobs = Bad Sound?

 How's that Madonna song go, "we live in a digital world"?  Oh, no wait, that's not it.  Kraftwerk got it right in the 70's: Computer World.

 Serendipity: I was making a joke and brought up Kraftwerk, and have now decided to completely go in a different tangent than I had thought I was going to do when starting this post.  But it still relates (and this is a great example of improvising; when you step into something that involves chaotic math a totally unexpected outcome can result).

 Kraftwerk invented techno music - in the 60's.  They exploited synthesizers to produce music that was influenced by their classical training (many people don't realize that), but was other-worldly and "futuristic" sounding.

 Ironically by today's standards, what they used back then in the 60's and 70's was super primitive.  Effectively all analog, non-digital.  How this relates to the title is as follows:

 Analog gear was controlled either by button pushes/switches, or potentiometers.  Which meant you had the following decision making going on:

"I want this to be on/off"
"I want this to be "just so"" (via a knob turn)

 In the analog world a potentiometer - a variable resistor; it changes the "potential" of a circuit - could be one of 2 varieties: linear or logarithmic ("audio") taper.  Meaning, as you turned the knob, the resistance would either increase linearly or exponentially.

 That's fine, that is akin to controls in the digital world.  The difference is this: analog gear itself tended to have non-linear scaling.  It also had limitations not found in the digital world.  Physics acted upon them, in that there was a built-in limit to how much energy one had available and one could put into a device.  The device itself did not react perfectly linearly.

 In a nutshell: when you turned a knob on an amplifier, a synthesizer, an effects pedal, it was restricted by physics, not software.  In turn (pun intended) there was an ergonomic relationship between how much you turned the knob and what you got/heard.
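 To make the taper point above concrete, here's a toy comparison of a linear taper versus an audio (log) taper - purely illustrative numbers, not any particular pot or plugin:

```python
# Toy comparison: linear vs. audio (log) taper mapping of knob position to gain.
def linear_taper(pos: float) -> float:
    # 50% rotation passes 50% of the signal - which our ears hear as barely turned down.
    return pos

def audio_taper(pos: float, db_range: float = 40.0) -> float:
    # Approximates a log pot: equal rotation gives roughly equal loudness steps.
    return 10 ** ((pos - 1.0) * db_range / 20.0)

for pos in (0.25, 0.5, 0.75, 1.0):
    print(f"knob {pos:.2f}: linear={linear_taper(pos):.3f}  audio={audio_taper(pos):.3f}")
```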

 The Marshall Super Lead "plexi" amp is a great example of this: basically, every combination of knob settings on this amp has been used and "claimed" by iconic guitar players through history.  Mainly because just about any way you set the knobs it still produces a "good" sound.

 Contrast that with the Average Digital Amp today.  It is difficult to get something to sound "good", and pretty easy to get into trouble with something that sounds like Martin Brundle's dog's breakfast.  Partially because you have under one knob often times the power to do something that would have taken $2,000 worth of gear to do 35 years ago, but also because....

 .... there is no obviously clear "sweet spot" as you turn the knob.  It will give you too much; and it doesn't care when in the travel of the knob that happens.

 People in the 21st century now have two situations facing them: if they're the person that doesn't like to think about things much they may turn knobs and end up on something approximating something they like.

 Or, the more discerning persona may tweak and tweak and tweak and never get there.  That doesn't mean "there" isn't there; it is that it is too easy to zoom right past it, and too hard to eliminate one of the nth variables that wouldn't have been there 35 years ago.

 In turn in music today we again face the tyranny of juvenile extremism.  Wacked out, primitive and non-nuanced sounds predominate.  Overshooting the right setting with a digital control is so easy it's very much factored against the user to get a "good" sound.  What was the sweet spot on a Marshall plexi - almost the entire range of the control - is condensed into a very small portion of its software representation, and may be linear in scaling.   You hit the sweet spot, but it's not obvious as it is surrounded by "bad".  It blends out on the high and low end of the scale, and within the sweet spot it may not actually be 1:1 with the intended result.

 Because one knob/fader/control in software often does more than one thing, OR does something that has no equivalent in the real world, the sweet spot can be divided up into many components.  For instance, the way an equalizer's gain:bandwidth:Q:curve responds on an analog EQ is automatic and intuitive when twisting one knob labeled "midrange"; on the software equivalent you might need 3 hands + 3 mouses/mice to operate all of them at once.
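 One way software could give that one-knob feel back is a "macro" control - a single knob mapped to gain, frequency and Q together.  A hypothetical sketch (the mapping and numbers are invented here purely to illustrate the idea):

```python
# Hypothetical "one knob, three parameters" midrange macro: a single control
# mapped to gain, center frequency and Q at once - packing a sweet spot back
# into one knob. The curve is made up for illustration only.
def midrange_macro(knob: float) -> dict:
    """knob in 0..1 -> parametric EQ settings."""
    assert 0.0 <= knob <= 1.0
    return {
        "gain_db": -6.0 + 12.0 * knob,      # from a gentle cut up to a +6 dB boost
        "freq_hz": 400.0 + 1600.0 * knob,   # sweep the center upward as you boost
        "q": 0.7 + 1.3 * knob,              # narrow the band as it gets hotter
    }

print(midrange_macro(0.5))   # the "noon" setting
```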

 Which brings me to my last point:

 Some software houses produce not necessarily particularly good effects/products, but have done a good job of replicating the natural feel of the knob-turning experience.  This is analogous to the real world - in which I think there are famous examples of long-lasting gear that have earned their reps based on how ergonomically and quickly one can achieve "a sound" on them.

 The microphone/equalizer equivalent to a Marshall plexi, the Neve 1073 eq, is touted as an "easy" and "quick" solution to getting a sound on many instruments.  The travel of the knobs versus what they do results in a good result almost all the time.   These have been emulated in software many times, and while one can produce white papers on how accurate they are, they all respond differently when you turn the "knobs" on screen.  That experience is not linear, or there is an aspect to the sound that is non-linear to the real thing at different locations in the settings.

 On the other hand, there is now "classic" software that tends to give useful results relative to the operation of its controls.  I think the company Waves has earned their rep not just on being the first to produce outboard-DAW VST software effects, but for getting the operating experience right.  I prefer to use their now "ancient" Renaissance compressor not for the sound, but because the threshold and ratio controls are big long sliders that seem to work intuitively.  The result will sound better, or rather I will get a result I like faster.

 As opposed to many free alternatives that could sound identical, or maybe better, but whose controls are super finicky.  Additionally in the digital domain there are many, many, MANY ways of shooting yourself in the foot without realizing it, as well as being redundant in operation.

 Eventually I think this will shake out.  People will realize a certain make of a certain digital effect is popular because of its workflow and ergonomics, and that will become the default.





















 














Tuesday, March 26, 2019

Lesson Room Guitar Amp Conundrum?

I have a conundrum.

 I've been using a pair of little rinky-dink practice amps for lessons effectively all of my life.  They're both suffering from bad input jacks and noisy channel switches.  It's time to replace them.


 I've thought it not necessary to bother with anything more elaborate.  If I can't make a little amp sound good then what's the point?  The student may or may not have anything better at home, it's a "realistic" playing environment.

 The problem with these little $100 2-channel amps is that I'm having to lean over and adjust the volume multiple times in every lesson.  Students have a variety of single coils or high output humbuckers, Les Pauls with P-90s or a big Gretsch with Filtertrons, or anything in between.  Their output levels vary wildly, as do their frequency spectra.

 I should have just gotten a volume pedal to do this, but that's still kinda tweaky.

 So what I've thought is:

 Shouldn't I have a pair of amps with presets set up for not just level differences (single coil/humbucker) but also a variety of tones?

 Maybe.

 I've got a Boss Katana 50, which, while fairly satisfactory tone-wise, doesn't do a particularly accurate job representing specific amps/recorded tones.  Not only that, it only saves 4 presets, which Boss/Roland has decided to put into separate banks.


 A pair of them would be limited to a bank for single coil guitars, clean/dirty, and humbuckers, clean/dirty.  The annoying part here is that Roland has so limited their patch arrangement from a pedal standpoint that this is only using a tiny aspect of the amp's potential.

 Yes, I could run them off a laptop for patch changes.  Again, the problem is that the Roland software is so fiddly that it would not be practical, requiring a lot of scrolling, selecting.

 In effect I would only have doubled my amp capability to a selection for single/double coils.  So the practical, pragmatic part of my brain (that short circuits many schemes...) says "you may as well just have the cheapo 2 channel amps and a volume pedal".  A waste of money?

 A semi-obvious solution might be "Line 6 Spider".  The Line 6 stuff is fun, you can dial in a pretty accurate representation of a Famous Sound - but it's just that for the most part.  I'm not familiar with the new Spider V line, having only played through one very briefly.  My reservations about this route are that they sound "detached", and generally have an overriding resonant frequency, prevalent across all of the settings, that once you hear it becomes... very annoying. 

 Additionally, they do not respond linearly to input like the gear they're supposedly emulating.  In my experience you're forced to play harder, to the "top" of the preset sound that is modeling one preset level.  I've found this can be worked around a bit by using a gain stage-distortion before the amp, but... uhg.

 In this scenario, I would have plenty of patch choices, but it would be fairly pricey since Line 6 has chosen to use a proprietary pedal board interface (instead of MIDI).  The pedal board for each amp would be almost as much as the amps.

 But, I could just step on a button when the student comes in with a Strat after the student with a humbucking Schecter leaves.  I could also choose more specific tones depending on the music.  I'd also have an easy "mute" button (not to be underestimated in importance...).

 Uhg.

 Or I could just get another pair of little Fender Champ 20 amps, and keep bending over to twiddle level and settings every... single... lesson... for another... 10 years.

A pair of Katanas - limited switching, more tolerable sound.
A pair of Line 6 Spider Vs - perfect switching, more expensive, maybe aggravating sound?
A pair of Fender Champ 20s - cheap, practical, ok sound.

 Odd man-out choices would be a pair of Monoprice Laney Cub-clones.  Very nice sound, but still manual switching, AND more limiting because the nicer sound is more limited to the nature of the sound itself.  It is not neutral.

 Computer-VST: this would be great, except - very clunky switching arrangement, and the sound would be coming through the same speakers I'm using to hear the music I'm having to transcribe.  Which makes things more difficult.  There is also the latency issue, unless I got a Specifically Fast Computer and Bus Based Interface.

One of the Cheap Pedal-Combo Solutions: more cabling, difficult preset arranging, still need a pair of amps.

Yamaha THR based amp:  I have one of these at home.  I think they're great for a very portable amp+computer interface.  The downside is that the tiny speakers mean they sound... tiny.  And while it has 5 presets, they're only accessible by tiny buttons on the amp itself.  But the big fault of this amp is that there is no speaker out: I've got a pair of Marshall 1x12 cabs I could use at my office, but I would have to tear the amp apart and literally modify it to have a speaker output to make that work.

 And it would still be pressing one of 5 buttons with a finger for a preset.

Unknowns would be the Vox AVR "analog modeling" amps I've not heard (but I think aren't foot switchable), the Peavey Vypyr (I don't really hold out much hope for those), Blackstar modeling amps (which sound kinda bad online)(foot switchable?), and any number of options that are wayyyy beyond my budget.

 I'm going to be using my little sketchy "broke power switch Crate" and "bad channel-switch Blackstar HT1" for the next few weeks.  My nerves will continue to be ground down dealing with these, and I should just commit to one of the above now, but... I must cogitate and ruminate on the Best Most Logical Option some more.



























Saturday, March 2, 2019

Are You Rhythm, Melody or Harmony Centric?

 I get to know the music tastes of all of my students.  People are very varied in their personalities and quirks (a good thing), but there are categories of preferences I've noticed over the years that can classify a person in a general sense relative to these elements:

Rhythm
Melody
Harmony

 A "rhythm-centric" person  is someone that looks first and foremost for music that doesn't stray too far from a particular rhythm or beat. 

 For instance, one might like "blues" but favor the Austin / SRV rhythms, or maybe the cajun/New Orleans types of beats.  Alternately more "traditional" rhythms may be preferred, and one tends to choose artists based on how they work within the limits of those rhythms, or how far away they stray from that (or not).

 Metal and rap music fans tend to fall into this category.  Rap for obvious reasons, it's segmented based on the beat, but in metal the seemingly endless sub genres tend to be based around favored drum beats.  In both genres, the sub-genre one prefers has demarcation based on very rigid opinions regarding the historic context of the rhythm.  This is important to consider, because in my experience it's all about "is this "new" or not?".

 A "melody-centric" person isn't too concerned about the genre that the melody is in.  This person's taste will not be genre specific (even if they initially thing so...).  A country song, a classic rock song, maybe even a classical theme. 

 The "harmony-centric" person needs the context to have a harmonic structure that stands out.  Either a strong layered vocal part, or an extended chord as it's basis.  This person will tend towards progressive music, or jazz, as they are listening for the interplay of voice leading through changes as being the entertainment.  Or they may prefer 7th or 9nth based music instead of basic triads. 

 There can be crossover between the above, and what might at first seem like one dominant preference may, upon closer inspection, not be the case.  A person might be a jazz fan, and not realize they like certain chord layering in metal.  A metal fan might not realize they're actually drawn to melody - in any genre, apart from the rhythmic context.  Or they may actually prefer harmonic arrangements in jazz.  A blues-rhythm fan may not notice at first the blues rhythmic context of traditional jazz (or vice-versa, as happens a lot).

 One doesn't have to necessarily be any of the above, but I think self-examination along those lines would be eye opening for some people who may think of themselves as preferring one genre.  There is brilliance and genius to be found in all genres, and experiencing that is fun and rewarding. If you understand what you tend to prefer in the above, it can be interesting to see where that interest can cross into other genres in unexpected ways.









Saturday, February 16, 2019

$200 Guitar Review: Monoprice DLX strat-clone

 Back when I started playing guitar in 1837, we had to make do with rusty chicken wire stretched between a limb cut from a briar bush, and we liked it I tell you!!!

 Not really.  I had a super heavy thick-ply body Explorer made by a company called Hondo, which I believe was an Indonesian manufacturer in the 80's.  It was "ok", better than prior generations had.  Having said that, it was plywood.  The neck was literally as thick as a baseball bat, irregularly sawn, with a thick lacquer that was super-sticky, and thin, small, soft frets that quickly wore out.  The tuners were plastic wanna-be Schallers that slipped really badly.  Crummy humbucking pickups.  The bridge was ok for what it was, a hardtail with solid saddles, maybe made of brass.  The truss rod didn't work, and the action had to be kind of high because the frets were pretty poorly slotted.

 It was about $275 in 80's money. Uhg.  I replaced it with my first "real" guitar, an Eddie Van Halen-era Kramer Pacer Imperial, about $500.  Half of a THOUSAND dollars!!!  Paid for by teaching guitar lessons, but still living at home so I managed it.  

 A few weeks ago a student of mine, John Butts, brought in something he couldn't resist trying: a $200 strat bought from Monoprice.com, the HDMI cable-selling online company.

 What I first noticed: an actual Wilkinson bridge.  I don't know if the metal is case-hardened, but the bridge is I believe as thick as any Wilkinson I've encountered - in other words it doesn't appear to be a cheaper version.  The aftermarket version of this bridge goes for about $100 by itself.  The bridge posts have inserts, and the tremolo bar is not the basic screw in type.  Very unexpected at this price point.


The fit and finish is apparent at this point.  Where the pick guard is cut out you can see it's parallel with the sides of the bridge (the front of this particular bridge is at an angle, don't go by that...).  It appears symmetrical from side to side.  Pick guard alignment is usually the easiest tell for a hyper-cheap guitar.

 The nut was cut well, and again the fit/finish here was good.  No "we're in a hurry" filing marks on the nut, the edge is flush, the slots parallel and the appropriate widths. The clear coat is also not showing evidence of runs here, and the cut of the end of the neck is again parallel to the nut and doesn't betray sketch tooling.



 The height was good as well.  They didn't err on the high side - it's "quite low" but not too low, a tricky thing on a cheap guitar.  

 In the picture below you can see where most cheap guitars go wrong, or hide a multitude of sins: the fret ends.  These were perfectly flush with the edge.  No evidence of glue slopped around (or used to fill in undercut frets to make up the gap (common with more expensive guitars sometimes)).  It is a 2 piece neck - there is a separate fingerboard instead of one piece, but this is good in a cheap guitar as it will be dimensionally more stable.  

 The finish is a thin satin alcohol urethane I'm presuming, over knot-free maple.  The grain was actually fairly tight as shown here, and straight.  The fingerboard was of a lighter color - something you wouldn't expect on a "nicer" guitar, but has zero functional impact. 

 



 In the next picture you can see what is maybe the most important arbiter in a "good" guitar versus a bad quality instrument.  The tightness of the gap (or lack of one) where the neck meets the body; the accuracy of the manufacturing and assembly will be evident here.  My Hondo from the 80's was considered "ok" - you couldn't slide a pick in there, but you could maybe get a business card in.  On the Monoprice guitar it's perfectly tight, and straight: again, basically no evidence of sketchy tooling, say 90% perfect. It's hard to get the round part that falls away to not be wavy along the edge, and they did a pretty good job. The fit all the way around the neck joint appears to be uniformly tight.  This would be better than much more expensive guitars in the 90's, and a lot today.  This really sets a standard IMO: there is no reason a more expensive guitar should show any gap or bad milling here these days.  I presume this is the result of latest generation CNC milling.  Also note the neatness of the fret ends, which were nicely beveled, and again perfectly consistent on the sides:









Partially evident in the picture above, the action was setup "quite low" - with zero buzzing or fretting out.  The frets were nicely polished.  I cannot attest to the hardness of the frets or how long they'll last, but given this is aimed at the "quasi-beginner" they'll last long enough.  

 My only beef is that they're of the super tiny old-school Fender size.  A medium jumbo should be the default these days, but that is not really an issue for a $200 guitar. 

 It played well, the tuners were good Chinese Schaller copies and didn't slip, stayed in tune.  The pickups are fairly generic and neutral; not offensive.  The paint/finish was effectively perfect, the flame-veneer looked nice. 

 So here's the bottom line: should a beginner buy this?

 No.  I am guessing that the **$99** Monoprice version is the same guitar, minus the flame-veneer and the Wilkinson bridge. 2 items you don't need on a first guitar.  I would recommend that one instead, or...


 There is a European chain store called Thomann that sells a house-brand of foreign made guitars that are very cleverly specced for their price under the brand "Harley Benton".  I have not played one but they appear to recently be gaining some notoriety in Europe as being a giant-killer purchase.  Barring attempting to order a custom guitar from China, the next step up guitar past the "starter" guitar I would say would probably most likely be a Harley Benton if it turns out they're of a decent build quality, but I can't vouch for that just yet.  They appear to have taken the Chinese contracted-guitar builder business to the next level QC and specification wise, with some very well thought out choices. 

 I would have been loath to write such a column 15, even 10 years ago.  The fact of the matter is, most "name brand" guitars are built "outside the U.S." - in China or Indonesia.  You're effectively paying for quality control and the name brand on a cheap/sub-$500 guitar.  Despite what is said on YouTube, you can actually order a good quality guitar straight from China if you know how, and are willing to take something of a chance.  Thomann / Harley Benton appear to have perfectly understood how to take the sketchiness out of that process for a small upcharge (I say appear to because, as I've said, I've not actually encountered one yet, so caveat emptor...). 

 Regardless, entry level guitars are now pretty effectively what would have been considered a "professional quality" guitar 15 years ago.  The only precedent for this would have been the first 2 years of the Fender Squier series in the mid-80s, made in the Japanese Fuji-Gen factory.  The lower end Ibanezes appear to have a fairly good quality control standard; perhaps the Japanese are still "Japanese" in that regard despite their cheap guitars being built in Indonesia or "elsewhere".  I would have said Ibanez holds the upper hand in the race-to-the-bottom guitar war, but I think as of about the middle of 2018 onward we've reached a new low price-wise, and a new quality standard. 

 I literally cannot imagine a good guitar being cheaper, or the quality getting better at this sub-$200 point.  This is partially due to China winning the trade war, but also largely due to advances in computer aided design being coupled so closely to super accurate milling machines.  The accuracy of the neck joint, fret slots, frets (probably auto-cut?) and glued surfaces (fingerboard) and how that adds up in the manufacturing process is a watershed event. 

 There is no reason to have a "bad" guitar in the year 2019.   

 














Friday, February 8, 2019

Prediction: there will be an A.I./GAN (Generative Adversarial Network) Learning VST Plugin That Will Revolutionize Audio Mixing by 2020

This could happen this year, but certainly within 3 years I would think.

A VST plugin in which you provide a target sound - an example of what you want, basically as many conformal-equalization/convolution plugins do now, but...

... it uses adversarial machine learning based on post-processing a sample of your novel sound.
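 (For reference, the existing "match EQ"-style plugins mentioned above work roughly like this - a heavily simplified sketch of that idea, not the GAN approach I'm describing; the function names are just illustrative:)

```python
# Rough idea behind "match EQ" style processing: compare the average spectra
# of a target and a source, and derive a per-bin correction curve.
# Heavily simplified - a real plugin smooths and limits this aggressively.
import numpy as np

def average_spectrum(x: np.ndarray, n_fft: int = 4096, hop: int = 1024) -> np.ndarray:
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return mags.mean(axis=0)

def match_curve(source: np.ndarray, target: np.ndarray, floor: float = 1e-9) -> np.ndarray:
    # Gain per frequency bin that pushes the source's average spectrum
    # toward the target's.
    return average_spectrum(target) / np.maximum(average_spectrum(source), floor)
```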

  The tricky part would be to eliminate pitch from the process, I think. You don't want the plugin to try to pitch correct your guitar input to the target sample's pitch.  Another aspect that might be difficult would be integrating a time constant so that it doesn't just try to do an FFT/convolution/bin based transform.

  There are already plugins that claim to have neural-net based algorithms involved with evaluative processing.  This is not the same as what I am suggesting, in that those plugins are implementing existing FIR tools to alter the sound, as opposed to directly replicating the sound from scratch.  In other words, GANs are already used to make an input - a picture of someone - appear to be someone "new", modified by a GAN having been trained on a data set.

The GAN doesn't know it's changing things we have labels for: colors, shading, angles, etc..  It's just making the data fit what we want it to do.  In the same way, you'd feed your GAN plugin an example of guitar sounds you like, then it would morph your guitar sound based on making an output data set fit your expectation-data set.

This might work really well if applied to speaker simulation, since present convolution based plugins are only applying math linearly with a single value as the input function.  A GAN applied to an example data set of a range of dynamic values into a speaker (equivalent to a bright face versus a dark, high eyebrows or low, etc..) would be able to create a new data set (function applied to a d.i. guitar signal) that would alter the data set in a similar non-linear way across the input range.

 It wouldn't be real time at first, since you'd be applying the process to single buffered time frames - 5 ms chunks overlapping by 1 ms maybe - on a 3 minute input file.  So for each buffered frame you'd apply the GAN function with that frame's input level of sample for the 5 ms (which I think means you'd have to train the GAN on a similar matrix derived from the same time base, 5 ms / 44.1 kHz).  Repeat until EOF.
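 A rough sketch of that offline, frame-by-frame idea - 5 ms windows, 1 ms hop, overlap-add - with a placeholder identity function standing in for the trained model (everything here is hypothetical):

```python
# Offline frame-by-frame processing: 5 ms windows, 1 ms hop at 44.1 kHz,
# each frame run through "gan_transform" - a hypothetical stand-in for a
# trained model (here just an identity function) - then overlap-added.
import numpy as np

SR = 44100
FRAME = int(0.005 * SR)   # 5 ms -> 220 samples
HOP = int(0.001 * SR)     # 1 ms -> 44 samples

def gan_transform(frame: np.ndarray) -> np.ndarray:
    return frame          # placeholder for the trained network

def process(signal: np.ndarray) -> np.ndarray:
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    window = np.hanning(FRAME)
    for start in range(0, len(signal) - FRAME, HOP):
        frame = signal[start:start + FRAME] * window
        out[start:start + FRAME] += gan_transform(frame)
        norm[start:start + FRAME] += window
    return out / np.maximum(norm, 1e-9)   # overlap-add, then renormalize

print(process(np.random.randn(SR * 3)).shape)   # a 3-second test signal
```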

 I think that such a plugin could be used for "finalizing" guitar sounds and mastering, but also perhaps even for mixing, provided your example target has a similar instrumentation as what you're giving it for an input to transform.

 It would be revolutionary, because it would probably make the bedroom recording result sound deceptively close/identical to Whatever Established Professional Recording one wanted, if "trained" properly.  Or at least one could create a mastering spectral curve/harmonic balance that matched an input data set, that would either create weird artifacts to instrument sounds (in order to make the match) or, if the input set was close enough, bring it to the Uncanny Valley and perhaps make it sound strange in that respect as well.  Which would be interesting, and probably attract attention unfortunately for a few years as producers abuse the sound for its novelty.

 Or, it could simply work very well and "fix" whatever you record to sound as much like the sound of something else you wanted.

   








Monday, January 28, 2019

"How Long Will it Take Me To Learn Guitar?" - Addendum

 No, I don't have an answer for that, as I've written about in my book.  There can't be a definitive answer for that.

 However, I have thought of a way of looking at time spent practicing guitar in a specific, efficient manner.

 Malcolm Gladwell has his 10,000 hours for mastery. I'll say you need to do those 10,000 hours at least 30 minutes at a time.  Generally, for most "challenging but within grasp" singularly mechanical/kinesthetic things:

10 minutes can show a temporary improvement, under "skilled direction / coaching" (in other words - by me. :-))


  It won't stick.  It will possibly go away in less than a few minutes.  Muscles have become limber, and the beginnings of muscle memory are happening.  The constant focus clarifies the kinesthetic awareness of movement; it starts to become familiar.  My usual adage is "go home and keep at it for as long as you can" because...



15 minutes WILL make a semi-permanent improvement.  


Provided the mechanics of something are reduced to their simplest, most concentrated form. At 15 minutes it will "stick" for 24 hours - until your muscle memory fades.  If you do another 15 minutes, you'll maintain that; another 15 minutes and you can extend it another 24 hours, etc..


30 minutes straight: a sweet spot. 


At this point if you do it again within 24 hours, you will actually be "building on top" of what you've accomplished the previous day.  Provided you do the exact same thing, which is extremely difficult for most humans.


45 minutes, the amount where the next time you play you can "see the horizon"! 

This means you retain the bulk of the skill set you acquired previously, and you then gain the beginnings of having a "yardstick" by which you can judge - or at least feel - when you can accomplish something.  Knowing whether you can play something ceases to be a question mark.

 This is super important from my standpoint of running a business!  If everyone I taught practiced at least 45 minutes a day, I would keep students for a very long time.  Because, at this point I'm not having to do psychological therapy to try to convince you you'll get better - you'll SEE that you can get better.

Again, the caveat being if it's one specific goal: a phrase/technique/passage/part.

 I face two issues as a business: the student not playing enough daily - and then having to be a magician to make up for that lack of playing in a 30-minute lesson - AND "mission creep": wanting to accomplish too much in one week.  WAYYYYYYYYYYYYYYYY too much!

 Expectations in the 21st century are crazily out of whack with reality, for various reasons I won't go into.

1 hour.  Big gains!

  At one hour a day, on the day after, one will experience the semblance of satori: enlightenment.  Skill will be compounded, the yardstick phenomenon cited above becomes clearer, and suddenly - one sees that most anything a human has done on guitar is doable!  Well, hopefully one sees that.  If you apply your efforts accurately and consistently.

 This is a benchmark for parents to realize. This is the big takeaway, and that is:

learning that time spent on something almost always has a pay off! 

 Even the kid that is goofing around for an hour.  A human can't do something focused for an hour and not learn something.  It's not possible.  

 Let's say somebody gets really  hyped up about playing some little bit from a Favorite Song.  And that's all they do for an hour. Maybe not spectacularly well, or perfect, but when they do it the next day they'll realize "hey... this is easier than it was yesterday!".

 That is why taking guitar lessons is more important than football or soccer.  You're never going to get that self-awareness of "yesterday I did something I specifically couldn't do for an hour and today it's easier" from football practice.  It's too diffuse, too general, and too diluted among a distracting group of people.

 When one practices alone, on one thing, for 1 entire hour, it will still be there the next day as "bonus".  But another hour does something very special:


2 hours: magic kicks in.

2 hours, either back to back or across 2 days, is special.  That's when the sensation of "I'm getting significantly better" happens.  The beauty of that being it's very motivating!

 Not 2 hours of kinda messing around, but 2 hours of something specific, applied repetition. A guitar solo, a specific chord change.   It can be just a simple song with open chords.  Play that song over and over for 2 hours, the next day you will definitely feel you're technically much better.

 There is no way around it.  YOU WILL FEEL AND KNOW YOU'RE BETTER.  Repetition is simultaneously the easiest and most difficult thing in the world.

 Most people have not done anything very specifically skilled, constantly focused, for more than a minute at a time.  These days maybe not even that long.  Play something very specifically for 2 hours - your muscles will be stretched out, you will have developed a muscle memory, you will have crossed a boundary where what you're doing takes very little mental effort and is becoming ingrained.

 You will be stronger the next day. You will be able to retrieve the muscle memory faster.  You will reach the satori-state faster.  You will be able to focus your effort faster, compounding gains.  Maintain an hour a day and that in itself becomes compounded.


3 hours plus: the land of the professional...

 I won't lie. I think each hour past 2 has a modifier of about a ... 20% reduction in practical gain in ability.

 What happens here is the mental wielding of concepts and phrases, exploring combinations and marking down, mentally, the results.  While you don't get as much physically I think from 3,4,5 hours of practice as you do out of the first 2, what you gain is in mental manipulation of your acquired skill set.  It's where "sounds" start getting labels based on experience.  It's the land of memorization and mental agility.

 Practice that Eric Clapton solo for 2 hours with the song.  In the 3rd hour you're not going to get physically better, but your "situational awareness" changes.  You can relax for starters, and you can contemplate how what you're doing fits in with the whole in a proper way.  You can listen to the pick attack, the nuance of the vibrato and bending, the amp sound.

 At 3 hours you're committing it to memory.  5 years later it might take you a moment to refresh your recollection of how to do it, cobwebby, but it will be there.

 Has the reader ever sat down and played the same song for 3 hours?  Honestly?  I'm guessing probably not.  The difference between being able to scratch through playing something and having COMMAND of it is in this. It's difficult practicing out past the point of "I'm pretty sure I have this", but going further is what makes someone a pro. 


 All of the above time spans are referenced to one day.  It's just the way it is, you can't get around it.  15 minutes, 30, an hour, 2 hours - that's what happens.  The great thing is that the above is FREE.  It doesn't cost you anything to do, and if you can get to each of those landmarks you'll gain something that is literally applicable to any other skill.

 Back to Gladwell: I believe his assessment is generally true.  You CAN be a "master" on guitar in around 10,000 hours.  Whether you do that at a rate of 8 hours a day or 30 minutes at a time.  In other words, you may as well strive for it - mastering an instrument before you die is better than not, right?  You can do it slow or fast, but it's predicated upon those 10,000 hours accumulating under the above parameters.






Wednesday, January 16, 2019

Impressions - Being Human is Data Compression

 I just tried to make a video for YouTube.

I was doing an extemporaneous analysis of the bootleg multitracks of the Beatles' _Sgt. Pepper's Lonely Hearts Club Band_.  So I thought, hey, I'll just go through each track and babble about what strikes me as it happens.

 As it turns out - and I knew this, but it had never been illustrated to me so viscerally - I think a lot.  I spent an hour straight trying to get out of my mind the thoughts about just the second section's string track and the vocal track.  I was trying to be "not super detailed, not overly OCD".  I skipped a lot - what I perceive as being "a lot".

 What I don't perceive as being "a lot" is what is condensed as "what I'm hearing".   To unpack what I'm perceiving on just 2 seconds of part of the strings track could really take easily over an hour.  Translating instantaneous perception to what is in reality "slow motion" human "music theory" jargon. 

 But then also, the implications of it.  How it strikes me emotionally, but then also what I think the context is, and the timbral sound, and the ambience. 

 I stopped after realizing I could probably make 4+ videos on each part.   Whether anyone would care I don't know; I halfway think I should just do it to see, or merely for the sake of it.  What is interesting is that in the literal process of doing it, I realized how much information the idea of


"AN IMPRESSION"

 

reduces, as a human.  It's somewhat token-based, but also a blend of other compression and sorting schemes.  

 The human input/output buffer is massively parallel, obviously.   An epiphany for me is that what probably makes me a "naturally overtly talented musician" will work against me in this context.  It might be informational for a student when I'm forced to condense things into a 30-minute lesson, but when allowed to expand in this way, without that temporal boundary, it's an ocean of information to wade through and collate.

 I've been thinking deeply about music since I was very, very young.  There are pictures of me with headphones on when I was 4 years old, pictures of me plucking at a toy piano at younger than that.  The ... internal array, the framework of my perception that has been built for decades now, is a way of compressing experience.  It's what humans do: catalog, sort, and collate experience.  For musical moments, it's definitely too much to try to unpack into a video explanation of said perception in a completely accurate fashion.  It would take a brain download to do that, but the question is: can I rise to the challenge of being able to *moderate* it well enough to make gradations of decompressed perception, to present a pragmatically granular explanation of "thought" that can be of use to somebody?

 I don't know.

 For a few years I've been mulling the idea of making a video series on the title of "Speculative Musical Anthropology", where I babble on what I *think* are connections between different pieces of music from a common background/influence.  I've jettisoned that as YouTube has allowed the corporate copyright-claim jihad to obliterate doing video on "things that could reference copyrighted material", despite the allowance for such a thing under the premise of education.  I don't want to go gangbusters into such a thing only to have it taken down; and I'm typically not motivated to do things if they're inherently likely to be stilted from the outset.  Pursuing the middle ground is the most difficult thing of all, of course.

 I'm still cooking the idea, though.  Let me know your thoughts if you care about said subject.  I know I need a "YouTube presence", but the option-anxiety of possibility is immense.  






Wednesday, January 2, 2019

The Problem With DAW Plugins Not Officially Discovered: Scurrilous Experiments and Non-scientific Conclusions - PART TWO

(note to the glitterati who have contacted me, who either choose to be argumentatively rambunctious or reflexively pedantic in an ego-needful way: I don't really care; as written in Part One, this is errant, off-the-cuff, extemporaneous "speculation".  As such I'm not willing to debate about it, nor do I care if you want to make a mental ego-measuring contest out of it: I don't need to do that, why do you...?)

.. part two, where Chip further digs an unfounded hole.....



GRIPE #2

The temporal number crunching.  This is where Ye Old Infinite Resolution steps in, but wait! I'm not talking about it in the "traditional sense", give me a moment...

 In the analog domain, your distortion pedal is instantaneously changing your guitar sound.

Every moment you play, yields

1) a unique level
2) a unique pitch
3) a unique harmonic content


 Every moment.  With zero latency, with perfect parallelism.  From a processing standpoint, in software you've got to address those 3 things based on an instantaneous sampling reduced to a single number representing level.  To get a result from your function, you have to determine a modifier for those 3 things.

 This should be perfectly digital-model-friendly, it would seem.  The problem, I think, is that you have to do math on each single sample serially, one at a time, or you have to do it component-wise and then add it together.  You're applying basic math to the number to represent the change in level, the change in pitch, and the harmonic content.  It's really just one number across a set of numbers  - a grouping of 1,024, or some such.  A processing "clump".

That "clump" then leads to another clump, etc..  The math applied to each clump will be the same.
The buffer is NOT instantaneous, however.  So while in theory the sample rate is "fast enough" to represent any audio signal, the software is trying to modify that signal faster than reality.  It's not that the analog world has Infinite Resolution, it's that it has Infinite Parallel Processing Power.  It's not doing anything in a buffered state.  It's not doing anything serially, or in modules paralleled.   No clumping.  One continuum.  The variability changes with infinite granularity; all aspects are not fitted to a curve and composited serially. 
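 In code, the "clump" structure looks something like this (a toy Python sketch of mine, not any DAW's internals), with the same static math applied to each 1,024-sample buffer in series:

import numpy as np

BUFFER = 1024

def the_math(x):
    # stand-in for "the math": a memoryless overdrive-style waveshaper
    return np.tanh(3.0 * x)

def run_in_clumps(signal):
    out = np.empty_like(signal)
    for start in range(0, len(signal), BUFFER):              # clump leads to clump, etc.
        chunk = signal[start:start + BUFFER]
        out[start:start + len(chunk)] = the_math(chunk)      # identical math, every buffer
    return out

 For a purely memoryless function like this, the buffer boundaries don't actually change the numbers; the worry being described here is about processing that carries state, rounding, and spectral math from clump to clump.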

 Comb filtering is (effectively) errors in sound that occur at mathematically regular intervals across the spectrum.  It's my belief that, as a byproduct of the math in software happening temporally, clump by buffered clump - but with a metered regularity delimited by the buffer size - across a longer time scale (a second, 2 seconds) there is a "temporal comb filtering" happening.

 "Temporal Comb Filtering": yes, I made that up.  Normally one describes comb filtering as an instantaneous phenomenon.  "Here is the sample of this moment, and we can see peaks at 100 hz, 200 hz, 400 hz, etc.".  What I am describing is this happening at some ratio across time.

 Buffer z is processed, then z+1, then z+2, etc..  But, because the same math is being applied to every buffer, there could be artifacts/errors introduced that create a harmonic series only seen at multiples of the buffer rate.  On a waterfall plot it would be buried in the resulting signal.  A number being rounded up or down 1,024 times, modified by whatever other functions, creating an artifact that is not visible in a graph, or even a waterfall plot, because - how do you know it's an artifact when it's the result of math on a test signal that's changing?
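 If that hypothesis were true, one way you might go looking for it (a hedged sketch, not a claim of a result) would be to FFT a long capture of the processed output and inspect the bins at multiples of the buffer rate - about 43 Hz and its multiples for 1,024 samples at 44.1 kHz:

import numpy as np

fs = 44100
buffer_size = 1024
buffer_rate = fs / buffer_size                    # ~43.07 Hz

processed = np.random.randn(10 * fs)              # placeholder for a real capture

windowed = processed * np.hanning(len(processed))
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(windowed), d=1.0 / fs)

for k in range(1, 11):                            # first ten buffer-rate multiples
    bin_index = int(np.argmin(np.abs(freqs - k * buffer_rate)))
    print(f"{k * buffer_rate:8.2f} Hz  relative level {spectrum[bin_index] / spectrum.max():.6f}")

 With the white-noise placeholder nothing interesting will show up, of course; the point is only the shape of the measurement.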

 The test signal is *variable*.  Guitars are not perfect signal generators.  The math applied to a perfect sine wave would be confusing, because you are making a function that is intentionally truncating values to yield distortion.   You have no way of knowing if your mathematical system, across time, is making a harmonic series alteration that is not linear relative to a Real World Analog Amp.

 Even if you use a sweep, or a set complex wave, you wouldn't know, because you can never measure it against an analog equivalent perfectly.  Comb-filtered sound can measure, frequency-wise, as being "close" - but again, I claim the human mind can discern the difference across a large sample set.

 Your brain realizes "there is a commonly recurring series here" that doesn't happen in the analog world.  A non-humanly-testable phenomenon, and a non-scientifically-testable phenomenon.

 The result being: for most distorted guitar sounds, I hear an amount of comb filtering I don't like in the mids/highs.  When that doesn't change - it sounds "digital" to me.

 I first had an inkling of this thought when the first Line 6 gear came out.  When I first heard it I was super impressed - it does sound like, in time slices, the real thing.  But then, if you hold a chord and spin the dial while the presets go by, you'll notice a harmonic coloration to *all* of the presets.

 Those are software artifacts, I think, and it's evidenced by comb filtering appearing in the same manner on everything.  All digital sims have this, I realized, when I tried the Fender Cyber-Twin for the first time: spin the knob, and there it is, comb filtering.  Plug into the Vox modelling amp next to it, spin the knob - all the presets have that comb-filtered sound, maybe at a different frequency/spread.

 Once you hear it, it's always there.  You can fool yourself into thinking you don't notice it, but it's there.  Every electrical system is going to have comb filtering artifacts, particularly speakers, but it's not a fixed thing between devices.  And it's state-variable; more or less evident depending on the input signal.

 As an example of this, I'll point to a video by John Segeborn that is tremendously great and educational.  In this video he plays the same thing back through different models of a Celestion Greenback speaker.  You'll hear comb filtering on each as a "shhhhh" harmonic coloration, but it will be different on each model.  Which is fine - that's what speakers do.  The problem is when your software is adding another coloration on top of that one, or homogenizing it:



  In each example you can hear a spike in treble.  BUT, you're not just hearing a spectral peak; it also has comb filtering: a vaguely "smeary" sound that changes in dominance depending on the signal.  My belief is that humans are super sensitive to this, and THIS is what software is messing up in sims.  I think it is, in general, too linear with respect to signal level in sims.

 So, at some volumes it might be spot on.  At other levels it's too loud, or maybe buried by the upper harmonics.  This interaction is flawed in digital recreations I think.


 I think.  I do not feel like trying to provide proof or documentation.  I've been (unfortunately) doing all sorts of tedious comparisons and tests for years at this point, which have led me to these assertions.  I think there is a problem here in the comb filtering and harmonic decay linearity.  I could be wrong.  Harmonic decay errors, and comb-filtering problems.

$.10.

POST SCRIPT

 Here's yet another free idea I wish I had the resources to patent, but I don't:
Without a doubt, at some time within 3 years a company will come out with a post-processing VST plugin that will use A.I./adversarial machine learning to conform a track's output to mimic anything.




Wednesday, December 26, 2018

The Problem With DAW Plugins Not Officially Discovered: Scurrilous Experiments and Non-scientific Conclusions - PART ONE

 I've spent... wasted... thousands of hours tinkering with variations on setting up processing chains in DAWs. 

 I know "in theory" things are Perfect, and "digital sound" is a myth.

 Except, I've never been happy with recorded sound - my own or others' - in the post-digital age.  It's always been a nebulous thing, and it's always been something people have attempted to quantify with the usual parameters:

  • Time domain;
  • Spectral;
  • Bit rate/depth;
  • Digital timing (jitter).

 These things have all been sorted out in the year 2018 to a very fine degree.  In theory, it's not only perfect - it's beyond perfect, because there is more theoretical digital dynamic range than there is in physical reality.

 ... but still I'm left unsatisfied.  Particularly by guitar sounds, but pretty much everything.  It occurred to me last year I was "chasing the dragon": after thinking about it - I kind of don't like most recorded guitar sound.  Even the Most Famous ones.  Even the ones of my favorite players.

 Furthermore, I think post-digital the aspect I don't like has been exaggerated.


 At first I thought I was hearing simply a spectral response I didn't like.  This is the way of thinking that I believe 99.9% of the musicians on the planet have in regards to sound.  It's not a wrong way, but it's not comprehensive, in 2 ways that I've not heard or read anybody discuss.


  •  The dynamic linearity of "effect simulations" is non-linear relative to reality.
  •  Because of the necessity for serialization in FIFO digital processing, the phase relationships of non-Fourier-transform processing have a "sound" when trying to mimic "near signal truncation" effects (distortion) - possibly leading to comb filtering noticeable across time.

GRIPE #1



 This is a very, very subtle thing and I'm quite sure very few people can consciously perceive what I'm going to describe, but it's real:

  Software emulations of analog gear usually consist of a means of reproducing a spectral response or balance over time.  Meaning one expects (excuse my ham-fisted notation)  x(fn1+x*x1),(fn*x2) to yield a frequency distribution that is the same as the analog device's.

 The acceptable result is not expected to be perfect.  The analog devices are not perfectly linear, and the math is expected to be a "close approximation", which it usually, remarkably, is.  The functions yield a nice approximation of an instantaneous spectral response that sounds like The Thing Being Emulated.
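 Put a little less ham-fistedly, the kind of model being gestured at is roughly a static waveshaping function followed by a filter, with the expectation that the output spectrum lands close to the hardware's - in rough notation (mine, an assumption about the general form, not any vendor's actual math):

y(t) = \big(h * f(x)\big)(t), \qquad |Y(\omega)| \approx |Y_{\mathrm{analog}}(\omega)|

where f is the memoryless non-linearity, h is the impulse response of the filtering/cabinet stage, and * denotes convolution.  The instantaneous spectral match is the part that usually works; the complaint that follows is about how that match behaves across time and touch.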

 For my first Perhaps Imaginary Gripe, I think that there is a substantial temporal difference between the math in the box and the analog realm.  Mainly, in how the non-linear decay of the harmonic distortion spread behaves dynamically over time.

CHIP, PLEASE SPEAK ENGLISH...

 Ok, what that means is that say for a classic "overdriven tube amp distortion" on a single note that is struck hard, as the note dies out in the first few ms there is a balance of low to high frequency content.  You hear a brash noisy "csryshhhh" on the attack and THEN you hear the lower harmonics, and as the note fades across the initial 100 ms the harmonic "blend" dies out at differing rates.

 What I "think" I'm hearing is this discrepancy:  with the digital simulations,

  •  The high-frequency, square-wave upper harmonics last too long;
  •  As the note fades, the high harmonics fade at the same rate as the lower ones;
  •  This rate doesn't change when you change how hard you play.

 With a real analog sound, those three things are reversed.
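 Here's a toy synthesis sketch of the contrast (the numbers and decay formulas are entirely my own assumptions, not a model of any amp): an "analog-like" note where each harmonic gets its own decay rate that also shifts with how hard it's struck, versus a "sim-like" note where every harmonic decays at one fixed rate no matter the level.

import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs
f0 = 110.0
harmonics = np.arange(1, 13)

def analog_like(strike):
    # higher harmonics die faster, and the rates shift with how hard you hit it
    note = np.zeros_like(t)
    for n in harmonics:
        decay = 3.0 + 2.0 * n / strike
        note += (1.0 / n) * np.exp(-decay * t) * np.sin(2 * np.pi * f0 * n * t)
    return strike * note

def sim_like(strike):
    # one decay rate for every harmonic, at every playing level
    note = np.zeros_like(t)
    for n in harmonics:
        note += (1.0 / n) * np.exp(-6.0 * t) * np.sin(2 * np.pi * f0 * n * t)
    return strike * note

soft, hard = analog_like(0.3), analog_like(1.0)    # decay profile changes with touch
soft_sim, hard_sim = sim_like(0.3), sim_like(1.0)  # only the overall volume changes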

 So there is an Uncanny Valley (look up the term if you don't know what that means) wherein the mind hears a blend of harmonics - in the single "time slice" of awareness - that sounds almost exactly like the Real Thing.

 What the mind *doesn't* perceive precisely is that the way it's decaying doesn't match the real world.  But it's my pet theory that we can only internalize the examination of our internal "audio buffer" in single instantaneous time slices.  It's hard, or impossible, to really quantify the nature of how it falls out.

SIDEBAR:

 I also will theorize that this is due to evolutionary survival requirements.  The way things decay harmonically is also implicit in the way nature sounds at a distance.  The rustling of leaves, for instance: that has a particular decay characteristic, which is different than the sound of A Large Threatening Predator Brushing Against a Bush.

 The aggravating, pedantic argument - put forward by people wanting to assert that humans are strictly limited to *acting on* information that is consciously testable - is proven to be a fallacy by this example.  You can test 1,000 people by playing them the sound of an animal walking through nature, trampling on the ground, and while the auditory cues are only milliseconds in duration, they'll all be able to say "sounds like an animal walking around".

 Play them one 100 ms example, and they won't have a clue.  Yet, across a large sample set (10 seconds), those tiny little sounds that only last a fraction of a second subconsciously convey a very specific story: "large animal walking around behind you to the right, 20 feet away".

 So no - I'm not impressed by arguments of "the ear can only hear 20 Hz-20 kHz, 44.1 kHz/16 bits captures all the information we can perceive", because they're based on primitively testing the instantaneous awareness of untrained people on test tones.  Your mind, as in the example above, makes an assessment across time of what it's hearing.  It's not *consciously* analyzing the frequency response, decay characteristics, phase relationships, etc. - your subconscious mind is doing the heavy lifting and returning a result that says

"something isn't real about this "amplifier" you're hearing".

 Comparing one single time slice to the victim amp doesn't mean it's 100% identical temporally.  That the technology gets very close is baffling, but I claim your cerebellum does tricky processing *across a sample set* that defies quantification by instantaneous measurement parameters (frequency/level).

...sorry.

  
BACK TO OUR REGULAR PROGRAMMING...

 So you hear the simulated amp, and it reminds you of the real thing on an instantaneous basis.  But as you play it, you become less and less convinced.  You can't really put your finger on it...

.. but I claim the way the note dies out, the way the spectral balance changes, and the way that responds linearly to your touch is giving your cerebellum a picture that only it is privy to computationally.

END OF PART ONE.....