GUEST POST: Roger Mayer on digital modelling and plug-ins

I recently interviewed a famous producer who said they love to use Roger Mayer’s RM58 limiter, and they mentioned that they wished it came in plug-in form so everyone could enjoy it. But as Roger is one of the pioneers of analog music technology and is dedicated to preserving the full sonic glory of the analog audio signal, somehow I don’t see that happening. Roger has supplied the following application note explaining his discoveries on digital and its deficiencies compared to analog.


By Roger Mayer

The claims made for digital modelling and plug-ins, and their actual performance, rest on several basic flaws that are conveniently forgotten in the hype surrounding their use.

Information in the original sound source:

The fact is that you are trying to simulate or emulate a sound using a source that differs in one or more ways from the one you wish to emulate. Your starting source probably does not contain within itself the information you are trying to reproduce, and it is not possible to accurately extrapolate information from any source that does not contain it.

Predicting future musical events:

You cannot accurately predict what has not happened yet. What you can do is make a global decision about what has already passed and then apply various mathematical algorithms, after a sufficient time or number of waveform cycles has elapsed, with no accurate idea of what will happen next. This applies whether you are trying to simulate a valve, a loudspeaker or anything else. Nor can you possibly instruct your digital modeller what to do if nothing has happened yet: it must wait until an event has passed, or idle in some state determined by a previous event.
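The point about reacting only to past events can be put in concrete terms. Here is a minimal Python sketch (illustrative only, not any real product's algorithm): a causal limiter can set its gain only from audio it has already seen, so it reacts one block late to a sudden jump in level.

```python
import numpy as np

def causal_limiter_block(block, past_peak, threshold=0.5):
    """Set the gain from the peak of audio already seen -- never from the future."""
    gain = 1.0 if past_peak <= threshold else threshold / past_peak
    out = block * gain
    # Update the state only from the block that has now passed.
    new_peak = max(past_peak * 0.9, float(np.max(np.abs(block))))
    return out, new_peak

quiet = np.full(64, 0.1)
loud = np.full(64, 1.0)

peak = 0.0
out1, peak = causal_limiter_block(quiet, peak)  # passes untouched
out2, peak = causal_limiter_block(loud, peak)   # overshoots: gain was set by the quiet past
out3, peak = causal_limiter_block(loud, peak)   # only now is the level brought down
```

The overshoot in the second block is exactly the "must wait until an event has passed" problem: the processor cannot know the loud transient is coming.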

Speed of Processing:

Now also remember that raw processing speed is not the answer to all of these problems. With musical instruments or vocals you have to wait for zero crossings of the waveform before you have any idea what the frequency is, and therefore what to do. The bottom E note on a guitar has a period of approximately 12.1ms. This is the main reason that real-time modelling does not work well. Even when using plug-ins with a substantial buffer time, only effects such as overall reverberation and echo have achieved studio acceptance, and they still suffer severe digital degradation as the signal decays.
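To put a number on that waiting time, here is a rough Python sketch (illustrative only) that estimates frequency from rising zero crossings of a pure low-E tone. No estimate exists until at least one full period has been observed, regardless of how fast the processor is.

```python
import numpy as np

SR = 48_000                       # sample rate, Hz
f0 = 82.41                        # low E (E2) on a guitar

t = np.arange(int(0.05 * SR)) / SR          # 50 ms of signal
x = np.sin(2 * np.pi * f0 * t)

# Rising zero crossings: the earliest landmarks a period estimate can use.
rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]

period_samples = rising[1] - rising[0]
f_est = SR / period_samples                  # close to 82 Hz
wait_ms = 1000 * rising[1] / SR              # time elapsed before the first estimate exists
```

With this naive detector the wait is even longer than one period (roughly two, since the crossing at t = 0 is missed), which only strengthens the point: pitch-dependent decisions are inherently late.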

Playing in Real Time and Zero Latency:

Players using real-time effects rely on zero latency, or something as close to it as possible, because the sound they are hearing affects what they will play next in terms of feel and content. Effects that require the player's brain in the loop in the first instance have never been successfully added later. These include wah-wah, chorusing and other modulation effects.
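The latency a digital buffer adds is simple to quantify. A small Python sketch (the buffer sizes below are common audio-interface settings, used here purely as illustrative figures):

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """One-way latency added by a processing buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

# Illustrative figures for common settings:
#   64 samples   @ 48 kHz   -> ~1.3 ms
#   256 samples  @ 48 kHz   -> ~5.3 ms
#   1024 samples @ 44.1 kHz -> ~23.2 ms
```

Even the smallest of these is nonzero, and round-trip figures (in plus out) are roughly double.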

Amplifier Simulation:

The same applies, to a lesser degree, to amp simulation: the overall sound level influences the feel of the guitar player, not to mention the effect of acoustic feedback on the strings at high volume levels. These important factors are of course never mentioned, and the fact remains: how could a sound source that does not contain sufficient detail or dynamics be artificially enhanced in a musical manner that would stand up to the sound source it is trying to emulate? Maybe it is good enough in a bedroom for a beginner, but most people with experienced ears use this type of technology very carefully, with full knowledge of the downside of digital processing.

Digital Sampling Rates:

The golden rules are:

Analogue information is continuous, digital is maths.

Keep the conversions from A to D and D to A to a minimum.

The ultimate quest is one conversion only when making the CD.

Once high frequency detail is lost, it is gone forever and cannot be restored.

We live in an analogue world and sound by definition is analogue as our ears respond to analogue changes in sound pressure.
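The rule that lost high-frequency detail cannot be restored can be demonstrated numerically. A rough Python sketch (using an ideal FFT brick-wall filter purely for illustration): band-limiting a signal to 4 kHz, as storing it at an 8 kHz sample rate would require, removes a 6 kHz component entirely, and no later processing can bring it back.

```python
import numpy as np

SR = 48_000
N = SR                                         # one second of audio
t = np.arange(N) / SR
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 6_000 * t)

# Band-limit to 4 kHz with an ideal FFT brick-wall filter (illustration only).
X = np.fft.rfft(x)
X[np.fft.rfftfreq(N, d=1 / SR) > 4_000] = 0.0
y = np.fft.irfft(X, n=N)

def tone_level(sig, f):
    """Normalised magnitude of the spectral bin at frequency f."""
    return abs(np.fft.rfft(sig)[int(round(f * N / SR))]) / N

before = tone_level(x, 6_000)   # ~0.5: the 6 kHz tone is present
after = tone_level(y, 6_000)    # ~0.0: gone, and gone for good
```

The 1 kHz component survives untouched; the 6 kHz component is simply absent from the band-limited signal, so no upsampling or enhancement afterwards has anything to work from.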

One thought on “GUEST POST: Roger Mayer on digital modelling and plug-ins”

  1. What a load of crap :-). The only thing I agree with is that you need the monitored sound to be as close to the final result as possible and that you can’t add stuff like wah afterwards – but how is that a problem if you can emulate in realtime? But it’s good that he has built an analog time machine that can predict things from the future.
