Guitar Lessons by Chip McDonald - chip@chipmcdonald.com

Wednesday, December 26, 2018

The Problem With DAW Plugins Not Officially Discovered: Scurrilous Experiments and Non-scientific Conclusions - PART ONE

 I've spent... wasted... thousands of hours tinkering with variations on setting up processing chains in DAWs. 

 I know "in theory" things are Perfect, and "digital sound" is a myth.

 Except, I've never been happy with recorded sound - my own or others' - in the post-digital age.  It's always been a nebulous thing, and it's always been something people have tried to quantify with the usual parameters:

  • Time domain;
  • Spectral;
  • Bit rate/depth;
  • Digital timing (jitter).

 These things have all been sorted out in the year 2018 to a very fine degree.  In theory, it's not only perfect - it's beyond perfect, because there is more theoretical digital dynamic range than there is in physical reality.
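 Just to put numbers on "beyond perfect": the usual rule of thumb for linear PCM is about 6.02 dB of dynamic range per bit. A minimal sketch of the arithmetic (plain Python, nothing here is from any specific DAW):

```python
# Back-of-envelope: theoretical dynamic range of linear PCM,
# using the standard ~6.02*N + 1.76 dB rule of thumb for N bits.
for bits in (16, 24, 32):
    dr_db = 6.02 * bits + 1.76
    print(f"{bits}-bit PCM: ~{dr_db:.0f} dB theoretical dynamic range")

# For scale: roughly 120 dB separates the threshold of hearing from
# the threshold of pain, so 24-bit already exceeds "physical reality".
```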

 ... but still I'm left unsatisfied.  Particularly by guitar sounds, but pretty much everything.  It occurred to me last year I was "chasing the dragon": after thinking about it - I kind of don't like most recorded guitar sound.  Even the Most Famous ones.  Even the ones of my favorite players.

 Furthermore, I think the aspect I don't like has been exaggerated in the post-digital era.


 At first I thought I was hearing simply a spectral response I didn't like.  This is the way I believe 99.9% of the musicians on the planet think about sound.  It's not a wrong way, but it's not comprehensive, in two ways I've not heard or read anybody discuss:


  •  The dynamic linearity of "effect simulations" is non-linear relative to reality: how the distortion responds to playing level doesn't track the analog original.
  •  Because digital processing is necessarily serialized (FIFO), the phase relationships of non-Fourier-transform processing have a "sound" when trying to mimic "near signal truncation" effects (distortion) - possibly leading to comb filtering noticeable across time (see the sketch after this list).
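 To illustrate the comb-filtering half of that second bullet, here's a minimal sketch.  The scenario is an assumption for illustration - a parallel path arriving a few samples late, as a stand-in for mismatched plugin latency - not a measurement of any particular DAW:

```python
import numpy as np

# Sketch of the comb-filtering claim: mix a signal with a copy of
# itself delayed by a few samples (e.g. mismatched plugin latency
# on a parallel chain) and periodic notches appear in the spectrum.
fs = 44100                      # sample rate
delay = 8                       # delay in samples between the two paths
noise = np.random.randn(fs)     # white noise as a broadband test signal

delayed = np.concatenate([np.zeros(delay), noise[:-delay]])
mixed = noise + delayed         # parallel sum of dry + delayed paths

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# The first notch lands at fs / (2 * delay), ~2756 Hz here, with
# repeats at every odd multiple above it.
notch = fs / (2 * delay)
idx = np.argmin(np.abs(freqs - notch))
print(f"predicted first notch: {notch:.0f} Hz")
print(f"level at notch vs. spectrum peak: "
      f"{20*np.log10(spectrum[idx] / spectrum.max()):.1f} dB")
```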

GRIPE #1



 This is a very, very subtle thing and I'm quite sure very few people can consciously perceive what I'm going to describe, but it's real:

  Software emulations of analog gear usually consist of a means of reproducing a spectral response or balance over time.  Meaning one expects (excuse my ham-fisted notation) something like y = EQ(f(x)) - a static transfer function f plus some filtering - to yield a frequency distribution that is the same as an analog device's.

 The acceptable result is not expected to be perfect.  The analog devices are not perfectly linear, and the math is expected to be a "close approximation", which it usually, remarkably, is.  The functions yield a nice approximation of an instantaneous spectral response that sounds like The Thing Being Emulated.
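 As a concrete (and deliberately crude) stand-in for what such a function looks like, here's a minimal sketch using tanh as the transfer function - an assumption for illustration, not any vendor's actual math.  It snapshots the harmonic balance the waveshaper produces at one instant:

```python
import numpy as np

# A memoryless ("static") waveshaper: in the simplest case the whole
# emulation is a transfer function y = f(x) applied per sample, then
# EQ'd.  Here f = tanh is a crude stand-in for a tube stage.
fs = 44100
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 220 * t)   # a 220 Hz "string"
y = np.tanh(3.0 * x)                    # drive into soft clipping

# Snapshot the harmonic balance: levels at 220 Hz, 660 Hz, 1100 Hz...
# relative to the fundamental.  1-second signal = 1 Hz bin spacing,
# so the bin index equals the frequency in Hz.
spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
for k in (1, 3, 5, 7):                  # odd harmonics of a symmetric f
    freq = 220 * k
    print(f"harmonic {k} ({freq} Hz): "
          f"{20*np.log10(spectrum[freq] / spectrum[220]):.1f} dB")
```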

 For my first Perhaps Imaginary Gripe, I think there is a substantial temporal difference between the math in the box and the analog realm: mainly, in the timing of how the harmonic distortion spread decays, and in how that decay changes with dynamics.

CHIP, PLEASE SPEAK ENGLISH...

 Ok, what that means is this: take a classic "overdriven tube amp distortion" on a single note struck hard.  As the note dies out, in the first few ms there is a balance of low to high frequency content.  You hear a brash noisy "csryshhhh" on the attack and THEN you hear the lower harmonics, and as the note fades across the initial 100 ms the harmonic "blend" dies out at differing rates.

 What I "think" I'm hearing is this discrepancy:  with the digital simulations,

  •  The high frequency square wave upper harmonics last too long;
  •  As the note fades, the high harmonics fade at the same rate as the lower;
  •  This rate doesn't change when you change how hard you play.

 With a real analog sound, those three things are reversed.

 So there is an Uncanny Valley (look up the term if you don't know what that means) wherein the mind hears a blend of harmonics - in the single "time slice" of awareness - that sounds almost exactly like the Real Thing.

 What the mind *doesn't* perceive precisely is that the way it's decaying doesn't match the real world.  But it's my pet theory that we can only examine our internal "audio buffer" in single instantaneous time slices.  It's hard, or impossible, to really quantify the nature of how the sound falls away over time.
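 If you wanted to measure what I'm describing rather than trust my ears, here's a sketch of the comparison I have in mind: track the harmonic blend frame by frame as a note decays.  The tanh stage below is again a crude stand-in for a plugin - an assumption, not anyone's real model - and you'd run the same analysis on a capture of a real amp and compare the trajectories:

```python
import numpy as np

# Sketch of the decay gripe: feed an exponentially decaying 220 Hz
# note through a static tanh stage and track how the 3rd-harmonic
# level falls relative to the fundamental across the first ~400 ms.
# A memoryless nonlinearity ties the harmonic blend strictly to the
# instantaneous input level; the claim is a real tube stage doesn't.
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
note = np.exp(-6 * t) * np.sin(2 * np.pi * 220 * t)  # struck hard, dying out
out = np.tanh(4.0 * note)

win = 4096                                           # ~93 ms analysis frames
for start in range(0, len(out) - win, win):
    frame = out[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(win, 1 / fs)
    f0 = spec[np.argmin(np.abs(freqs - 220))]
    h3 = spec[np.argmin(np.abs(freqs - 660))]
    print(f"{start/fs*1000:5.0f} ms: 3rd harmonic at "
          f"{20*np.log10(h3 / f0):+.1f} dB vs fundamental")
```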

SIDEBAR:

 I also will theorize that this is due to evolutionary survival requirements.  The way things decay harmonically is also implicit in the way nature sounds at a distance.  The rustling of leaves, for instance: that has a particular decay characteristic, which is different than the sound of A Large Threatening Predator Brushing Against a Bush.

 The aggravating, pedantic argument that humans are strictly limited to *acting on* information that is consciously testable is proven a fallacy by this example.  You can test 1,000 people by playing them the sound of an animal walking among nature, trampling on the ground, and while the auditory cues are only milliseconds in duration they'll all be able to say "sounds like an animal walking around".

 Play them one 100 ms example, and they won't have a clue.  Yet, across a large sample set (10 seconds), those tiny little sounds that only last a fraction of a second subconsciously convey a very specific story: "large animal walking around behind you to the right, 20 feet away".

 So no - I'm not impressed by arguments of "the ear can only hear 20 Hz-20 kHz, and 44.1 kHz/16 bits captures all the information we can perceive", because it's based on primitively testing the instantaneous awareness of untrained people on test tones.  Your mind, as in the example above, makes an assessment across time of what it's hearing.  It's not *consciously* analyzing the frequency response, decay characteristics, phase relationships, etc. - your subconscious mind is doing the heavy lifting and returning a result that says

"something isn't real about this "amplifier" you're hearing".

 Matching one single time slice of the victim amp doesn't mean the emulation is identical temporally, 100% of the time.  That the technology gets very close is baffling, but I claim your cerebellum does tricky processing *across a sample set* that defies quantification by instantaneous measurement parameters (frequency/level).

...sorry.

  
BACK TO OUR REGULAR PROGRAMMING...

 So you hear the simulated amp, and it reminds you of the real thing on an instantaneous basis.  But as you play it, you become less and less convinced.  You can't really put your finger on it...

... but I claim the way the note dies out, the way the spectral balance changes, and the way that responds linearly to your touch is giving your cerebellum a picture that only it is privy to computationally.

END OF PART ONE.....