Guitar Lessons by Chip McDonald - chip@chipmcdonald.com: January 2019

Wednesday, January 16, 2019

Impressions - Being Human is Data Compression

 I just tried to make a video for YouTube.

I was doing an extemporaneous analysis of the bootleg multitracks of the Beatles' _Sgt. Pepper's Lonely Hearts Club Band_.  So I thought, hey, I'll just go through each track and babble about what strikes me as it happens.

 As it turns out - and I knew this, but it had never been illustrated to me so viscerally - I think a lot.  I spent an hour straight just trying to get out of my head the thoughts about the second section's string track and the vocal track.  I was trying to be "not super detailed, not overly OCD".  I skipped a lot, or what I perceive as being "a lot".

 What I don't perceive as being "a lot" is what gets condensed into "what I'm hearing".  To unpack what I'm perceiving in just 2 seconds of part of the strings track could easily take over an hour: translating instantaneous perception into what is, in reality, "slow motion" human "music theory" jargon.

 And then there are the implications of it: how it strikes me emotionally, but also what I think the context is, and the timbral sound, and the ambience.

 I stopped after realizing I could probably make 4+ videos on each part.  Whether anyone would care I don't know; I halfway think I should just do it to see, or merely for the sake of it.  What is interesting is that in the literal process of doing it, I realized how much information the idea of


"AN IMPRESSION"

 

reduces, as a human.  It's somewhat token-based, but also a blend of other compression and sorting schemes.

 The human input/output buffer is massively parallel, obviously.  An epiphany for me is that what probably makes me a "naturally overtly talented musician" will work against me in this context.  It might be informational for a student when I'm forced to condense things into a 30 minute lesson, but when allowed to expand in this way, without that temporal boundary, it's an ocean of information to wade through and collate.

 I've been thinking deeply about music since I was very, very young.  There are pictures of me with headphones on when I was 4 years old, pictures of me plucking at a toy piano at younger than that.  The ... internal array, the framework of my perception being built for decades now, is a way of compressing experience.  It's what humans do: catalog, sort, and collate experience.  For musical moments, it's definitely too much to try to unpack into a video explanation of said perception in a completely accurate fashion.  It would take a brain download to do that, but the question is: can I rise to the challenge of *moderating* it well enough to make gradations of decompressed perception, to present a pragmatically granular explanation of "thought" that can be of use to somebody?

 I don't know.

 For a few years I've been mulling the idea of making a video series under the title "Speculative Musical Anthropology", where I babble about what I *think* are connections between different pieces of music from a common background/influence.  I've jettisoned that as YouTube has allowed the corporate copyright-claim jihad to obliterate doing video on "things that could reference copyrighted material", despite the allowance for such a thing under the premise of education.  I don't want to go gangbusters into such a thing only to have it taken down, and I'm typically not motivated to do things if they're inherently likely to be stilted from the outset.  Pursuing the middle ground is the most difficult thing of all, of course.

 I'm still cooking the idea, though.  Let me know your thoughts if you care about said subject.  I know I need a "YouTube presence", but the option-anxiety of possibility is immense.






Wednesday, January 2, 2019

The Problem With DAW Plugins Not Officially Discovered: Scurrilous Experiments and Non-scientific Conclusions - PART TWO

(note to the glitterati that have contacted me, those who choose to be argumentatively rambunctious or reflexively pedantic in an ego-needful way: I don't really care; as written in Part One this is errant, off-the-cuff extemporaneous "speculation".  As such I'm not willing to debate it, nor do I care if you want to make a mental ego-measuring contest out of it: I don't need to do that, why do you...?)

.. part two, where Chip further digs an unfounded hole.....



GRIPE #2

The temporal number crunching.  This is where Ye Olde Infinite Resolution steps in, but wait! I'm not talking about it in the "traditional sense"; give me a moment...

 In the analog domain, your distortion pedal is instantaneously changing your guitar sound.

Every moment you play yields:

1) a unique level
2) a unique pitch
3) a unique harmonic content


 Every moment.  With zero latency, with perfect parallelism.  From a processing standpoint, in software you've got to address those 3 things based on an instantaneous sample reduced to a single number representing level.  To get a result from your function, you have to determine a modifier for those 3 things.
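 To make that concrete, here's a toy sketch (Python/numpy, every name and number in it invented for illustration, not any real plugin's code) of the kind of function I mean.  The only thing it is ever handed is one level number at a time:

```python
import numpy as np

def waveshape(sample, drive=8.0):
    # The only input is the instantaneous level of this one sample.
    # Pitch and harmonic content are never handed to the function;
    # they only emerge implicitly as the curve bends the waveform
    # over many consecutive samples.
    return np.tanh(drive * sample)

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)          # one second of A440
y = np.array([waveshape(s) for s in x])  # one number in, one number out
```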

 This should be perfectly digital-model-friendly, it would seem.  The problem, I think, is that you have to do math on each single sample serially, one at a time, or you have to do it component-wise and then add it together.  You're applying basic math to the number to represent the change in level, the change in pitch, and the harmonic content.  It's really just one number across a set of numbers - a grouping of 1,024, or some such.  A processing "clump".

That "clump" then leads to another clump, etc..  The math applied to each clump will be the same.
The buffer is NOT instantaneous, however.  So while in theory the sample rate is "fast enough" to represent any audio signal, the software is trying to modify that signal faster than reality.  It's not that the analog world has Infinite Resolution, it's that it has Infinite Parallel Processing Power.  It's not doing anything in a buffered state.  It's not doing anything serially, or in paralleled modules.  No clumping.  One continuum.  The variability changes with infinite granularity; all aspects are not fitted to a curve and composited serially.
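 In pseudo-plugin terms, the clump-by-clump grind looks roughly like this - a bare-bones hypothetical sketch, not any real host's API:

```python
import numpy as np

BUFFER_SIZE = 1024  # the "clump"

def process_block(block):
    # The identical math, applied to every clump in turn.
    return np.tanh(8.0 * block)

def run(signal):
    out = np.empty_like(signal, dtype=float)
    # The host hands over one buffer at a time, serially.  While this
    # clump is being crunched the next one doesn't exist yet - the
    # opposite of the analog continuum, which has no buffered state.
    for start in range(0, len(signal), BUFFER_SIZE):
        end = start + BUFFER_SIZE
        out[start:end] = process_block(signal[start:end])
    return out
```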

 Comb filtering is (effectively) errors in sound that occur at mathematically regular intervals across the spectrum.  It's my belief that, as a byproduct of the math in software happening temporally, clump by buffered clump - with metered regularity delimited by the buffer size - there is, across a longer time scale (a second, 2 seconds), a "temporal comb filtering" happening.

 "Temporal Comb Filtering": yes, I made that up.  Normally one describes comb filtering as an instantaneous phenomenon.  "Here is the sample of this moment, and we can see peaks at 100 hz, 200 hz, 400 hz, etc.".  What I am describing is this happening at some ratio across time.

 The buffer z is processed, then z+1, then z+2, etc..  But, because the same math is being applied to every buffer, there could be artifacts/errors introduced that create a harmonic series seen only in multiples of the buffers.  On a waterfall plot it would be buried among the resulting signal.  A number being rounded up or down 1,024 times, modified by whatever other functions, creating an artifact that is not visible in a graph, or even a waterfall plot, because - how do you know it's an artifact when it's the result of math on a test signal that's changing?
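 Here's a crude way to see what such an artifact *would* look like, if it exists - a deliberately rigged sketch where I commit one tiny, identical numerical error per 1,024-sample buffer and look at the spectrum:

```python
import numpy as np

fs, BUF = 44100, 1024
N = fs * 2                           # two seconds
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 440 * t)

# Hypothetical per-buffer error: nudge the first sample of every clump.
y = x.copy()
y[::BUF] += 1e-4

# An error repeating every BUF samples is an impulse train with period
# BUF, so it adds a faint spike series at exact multiples of
# fs / BUF ~ 43.07 Hz - regular across the whole spectrum, and far too
# far down to spot on a casual plot or waterfall.
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(N, 1 / fs)
```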

 The test signal is *variable*.  Guitars are not perfect signal generators.  The math applied to a perfect sine wave would be confusing, because you are making a function that is intentionally truncating values to yield distortion.  You have no way of knowing if your mathematical system, across time, is making a harmonic series alteration that is not linear to a Real World Analog Amp.

 Even if you have a sweep, or a set complex wave, you wouldn't know, because you can never measure it against an analog equivalent perfectly.  Comb filtered sound can measure frequency-wise as being "close" - but again, I claim the human mind can discern the difference across a large sample set.
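 One reason the "measure it against the analog equivalent" plan falls apart: the measurement itself combs.  A sketch of a null test where the "sim" is *identical* to the reference, just one sample late:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
reference = np.sin(2 * np.pi * 440 * t)   # stand-in for the analog take
sim = np.roll(reference, 1)               # identical signal, one sample late

residual = reference - sim
# x[n] - x[n-1] is itself a comb-like filter, |H(f)| = 2|sin(pi f / fs)|,
# so the residual tells you about your alignment at least as much as
# about your model.  A perfect null against analog is unobtainable.
```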

 Your brain realizes "there is a commonly recurring series here" that doesn't happen in the analog world.  A non-humanly-testable phenomenon, and a non-scientifically-testable phenomenon.

 The result being, for most distorted guitar sounds I hear an amount of comb filtering I don't like in the mids/highs.  When that doesn't change - it sounds "digital" to me.

 I first had an inkling of this thought when the first Line 6 gear came out.  When I first heard it I was super impressed - in time slices, it does sound like the real thing.  But then, if you hold a chord and spin the dial while the presets go by, you'll notice a harmonic coloration to *all* of the presets.

 Those are software artifacts, I think, and it's evidenced by comb filtering appearing in the same manner on everything.  All digital sims have this, I realized when I tried the Fender Cyber Twin for the first time: spin the knob, and there it is, comb filtering.  Plug into the Vox modelling amp next to it, spin the knob - all the presets have that comb-filtered sound, maybe at a different frequency/spread.

 Once you hear it, it's always there.  You can fool yourself into thinking you don't notice it, but it's there.  Every electrical system is going to have comb filtering artifacts, particularly speakers, but it's not a fixed thing between devices.  And it's state-variable; more or less evident depending on the input signal.

 As an example of this, I'll point to a tremendously great and educational video by John Segeborn.  In this video he plays the same thing back through different models of the Celestion Greenback speaker.  You'll hear comb filtering on each as a "shhhhh" harmonic coloration, but it will be different on each model.  Which is fine - that's what speakers do.  The problem is when your software is adding another coloration on top of that one, or homogenizing it:

[embedded video: John Segeborn's Celestion Greenback speaker comparison]

  In each example you can hear a spike in treble.  BUT, you're not just hearing a spectral peak; it also has comb filtering: a vaguely "smeary" sound that changes in dominance depending on the signal.  My belief is that humans are super sensitive to this, and THIS is what software is messing up in sims.  I think sims track the signal level too linearly in general.

 So, at some volumes it might be spot on.  At other levels it's too loud, or maybe buried by the upper harmonics.  This interaction is flawed in digital recreations, I think.
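 If I did want to chase this with numbers, the test would look roughly like this: drive a known nonlinearity at a range of input levels and watch how a given harmonic tracks the level.  Toy Python, with a tanh stand-in where the real sim would go:

```python
import numpy as np

fs, f0 = 44100, 220
t = np.arange(fs) / fs               # one second => FFT bins land on 1 Hz

def harmonic_level_db(level, k):
    # Drive the stand-in nonlinearity at 'level', read the k-th harmonic.
    y = np.tanh(6.0 * level * np.sin(2 * np.pi * f0 * t))
    spec = np.abs(np.fft.rfft(y))
    return 20 * np.log10(spec[k * f0] + 1e-12)

# If the 3rd harmonic rises in lockstep with input level, the sim is
# "too linear"; a real amp's harmonics wander around with level.
for level in (0.1, 0.3, 0.5, 0.8, 1.0):
    print(level, round(harmonic_level_db(level, 3), 1))
```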


 I think.  I do not feel like trying to provide proof or documentation.  I've (unfortunately) been doing all sorts of tedious comparisons and tests for years at this point, and they have led me to these assertions.  I think there is a problem here in the comb filtering and the linearity of harmonic decay.  I could be wrong.  Harmonic decay errors, and comb-filtering problems.

$.10.

POST SCRIPT

 Here's yet another free idea I wish I had the resources to patent, but I don't:
Without a doubt, at some time within 3 years a company will come out with a post-processing VST plugin that will use A.I./adversarial machine learning to conform a track's output to mimic anything.
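 The dumbest static ancestor of that idea already fits in a few lines - plain spectral matching, no adversarial training, just to show the shape of the thing (a hypothetical sketch, not any product's code):

```python
import numpy as np

def conform(track, reference):
    # Crude "make this track mimic that one": impose the reference's
    # magnitude spectrum onto the track while keeping the track's phase.
    # The imagined product would learn this mapping adversarially and
    # per-moment, instead of applying one static curve.
    T = np.fft.rfft(track)
    R = np.abs(np.fft.rfft(reference, n=len(track)))
    eq = R / (np.abs(T) + 1e-12)       # magnitude-matching "EQ"
    return np.fft.irfft(T * eq, n=len(track))
```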