Beat (English)

Strategies and tools


Every beginning is difficult, and as a sound engineer you are all too often faced with a new project, asking yourself the all-important question: what am I actually doing here? In this tutorial we want to show you strategies and tools for the perfect mix. We would like to go beyond the usual settings tips and instead support you in developing your own individual approach, one that fits the song at hand, your workflow preferences and your studio - whether that is a home studio or a professional environment, whether the genre is hip-hop or singer-songwriter.

1 Mindset and vision

In the first step, we don't want to deal with a specific song - it is more about our head, our mindset. In my early days, I simply tried out many tools and spent hours tinkering with compressors, equalizers and reverb devices to hear which parameter had which effect and which device or plug-in went particularly well with which instrument. Especially if you don't have much experience yet, this is an excellent way to learn, train your ear and develop your very own style. Therefore: forget prefabricated settings and try things out, experiment, make mistakes and learn from them. Take your time and have fun with the equipment. Sound engineering is a wonderful adventure playground!

In contrast, you should shift your focus a little when mixing a specific track. Trying out which plug-in has which effect is out of place here - it is about consciously shaping the sound the way the song demands. This is where a rule comes into play that is all too often neglected today: every additional device, every plug-in fundamentally degrades our signal. If a plug-in doesn't shape the sound the way you want it to, try another tool instead of destroying the signal with five more processors.

I always start my work by developing a vision: What does the entire song sound like, what does each individual instrument sound like? What is okay as it is, what would I like to change? Right from the start, form a rough idea of what the song can and should sound like in the end. Then you can use your tools in a targeted manner. In my mixing courses I often find that I use less than half as many plug-ins as my course participants - and get more convincing results precisely because of that. Less is often more at this point.

In order to develop your own vision and learn which tools are useful for which task, you can combine the experimental phase mentioned above with a little individual ear training: listen to your favorite tracks and try to fathom them by ear:

• Which sound characteristics result from the mix and which from the master? "Loudness" and brickwall limiting are done in the master, and impressive stereo width can also come from mastering. The volume ratios of the individual instruments and their positioning in the stereo image, on the other hand, are more a matter of mixing.

• Which instruments were compressed and how? Was the attack time long or were all transients "cut away"? Both can be okay, because ultimately it always depends on the context and the song. Try to understand why the mixing engineer made certain decisions. Maybe you would have done it differently? Why?

• Which equalizer settings were used? Are the vocals brilliant or is it more of a warm ribbon sound?

• Which reverbs, delays and effects are used?

• How “loud” is the master? You can measure loudness with the freeware plug-in YouLean Loudness Meter.

By comparing the sound of well-known songs with the experience you have gained yourself experimenting with plug-ins and hardware processors, you will gradually get a feeling for how the sound of your reference productions was achieved.
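The Youlean meter mentioned above reports loudness in LUFS. As a rough illustration of what a level meter computes, here is a minimal pure-Python sketch using plain RMS; note that true LUFS measurement adds K-weighting and gating, which this sketch deliberately omits:

```python
import math

def rms_dbfs(samples):
    # RMS level of a block of float samples in [-1, 1], expressed in dBFS.
    # True LUFS (as shown by Youlean) adds K-weighting and gating on top.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale square wave measures 0 dBFS, a full-scale sine about -3 dBFS.
square = [1.0, -1.0] * 512
sine = [math.sin(2 * math.pi * i / 64) for i in range(64)]
print(round(rms_dbfs(square), 1))  # 0.0
print(round(rms_dbfs(sine), 1))    # -3.0
```

The 3 dB gap between a square and a sine at the same peak level already hints at why "loudness" and peak level are two different things.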

2 Listen and analyze

In order to develop a vision, it is first necessary to get to know the song closely. If the customer has already supplied a rough mix, this is particularly easy. Otherwise you have to get an overview of the available material - with some arrangements this is not so trivial. For example, if certain instruments in the chorus and verse were recorded on different tracks, it often makes sense to combine them on one track. This creates clarity and helps with an intuitive workflow. Of course, this only works if the tracks don't play at the same time and can be processed in a similar way.

In the next step you develop a first rough mix. Let all the tracks run and try to achieve a transparent sound image by changing only levels and stereo placement. Determine which elements are easy to hear and which may be getting in each other's way. If different instruments obscure each other, you can often remedy the situation simply through clever panning, without having to use any processors: basses, snare drums and lead instruments as well as vocals traditionally and sensibly sit in the middle, while all other elements can be distributed across the panorama to your heart's content. If a Rhodes piano and an organ occupy the same frequency range, you simply place them to the right and left and thus gain enough transparency to be able to assess the track. Important: try to LISTEN for where the problems are - even if it doesn't seem easy at the beginning, you will gradually develop a feeling for them. The use of analyzers rarely helps here. We have summarized more information for you in the box.
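Under the hood, a pan knob is just a pair of gains. This is a minimal sketch of a constant-power pan law; the exact curve varies between DAWs, and the sine/cosine law shown here is one common convention rather than any specific product's implementation:

```python
import math

def pan(sample, position):
    # Constant-power pan law: position -1.0 = hard left, +1.0 = hard right.
    # The total power cos^2 + sin^2 stays constant across the whole arc,
    # so a signal does not get louder or quieter as you move it.
    angle = (position + 1.0) * math.pi / 4.0   # maps -1..+1 to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Centre: both channels carry the signal about 3 dB down; hard left: left only.
left, right = pan(1.0, 0.0)
print(round(left, 4), round(right, 4))  # 0.7071 0.7071
```

Placing the Rhodes at, say, `pan(x, -0.6)` and the organ at `pan(x, 0.6)` separates them without touching a single processor, which is exactly the point made above.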

Once you have the rough mix, you can analyze the track in more detail. It starts with the individual song parts: markers for intros, verses, choruses and bridges keep the timeline clear. Now you can already see how sensible the arrangement is: a chorus should represent an increase in intensity, while an intro can be a little quieter. If many instruments overlap in frequency over longer periods, you can often clean up the mix right from the start, for example by using them alternately in verse and chorus instead of having them constantly play at the same time. A sensible arrangement is at least half the battle for a good mix! At this point I always like to point to the classical symphony orchestra: up to 120 musicians play together without a mixing console. This has worked perfectly for centuries - because the compositions and arrangements are correspondingly sophisticated.

Next, you should take a close look at the timing of each track. This is particularly important for kick drum and bass, where sidechain compression is often recommended. This can be a stylistic device in electronic tracks, but with handmade music I would rather call it the "sledgehammer method", because it represents a massive intervention in the sound behavior of the bass track. Instead, layer the kick drum and bass tracks in the arrange window and edit the bass so that it plays perfectly with the kick drum. Yes, it can be a Sisyphean task, but it is definitely worth it!

With electronic productions where everything is quantized, you usually have fewer problems in this regard - although here too you should pay attention to how well the sounds work together in terms of timing. If a sound has a longer attack phase, it can hurt your groove. You can then try moving the relevant events forward a little. In the case of strings that are meant to sit in the background, it can even be beneficial if they arrive a little "too late". There is no general "right" or "wrong" here.

With vocals, a lot of emphasis is traditionally placed on correct intonation, while the timing is often neglected. I remember a situation many years ago when I was mixing a pop track with a producer. After some experimentation with the equalizer and compressor settings of the vocal track, my partner suggested simply moving the entire vocal track forward a few ticks on the timeline: the vocals immediately stood out perfectly in front of the rest of the band - without any additional sound processing.

Any mix can only be as good as the recording, the source material and the arrangemen­t. So take your time and edit carefully.

3 Objective: What should it actually sound like?

Of course, the vision includes a goal, and at the latest once we have familiarized ourselves with the material we should think more carefully about where the acoustic journey should go. This depends primarily on the style and secondarily on taste. Even within a genre there is scope for creativity: older rock productions by AC/DC or Faith No More sound organic and natural despite their aggressiveness, while more modern productions are often compressed to death until the bitter end. And while some lovers of acoustic productions swear by a pure and unadulterated sound, colleagues like Günther Pauler of Stockfisch Records manage to combine this with an almost magical "high-end flair".

In order to define your own goals more precisely, it is once again helpful to analyze reference productions in the relevant genre. When I mix for clients, I like to have a few of the artist's favorite tracks sent along so I can get to know their sonic preferences. But don't cling too meticulously to references; instead, try to develop your own sound personality that supports the performance of the respective track in the best possible way. Take the artist by the hand and show them your vision of their artistic performance.

So that you can explore and understand the sound of a reference production in depth, I would like to suggest a few parameters on which you can usefully focus your attention.

• Frequency response: Even if this doesn't say as much about the sound of a production as is widely assumed, it is still not an unimportant quantity. While productions by Steely Dan, for example, sound technically perfect but very linear and almost well-behaved on some recordings, John Mayer certainly allows himself a little more low end. And in rock music, too, the spectrum ranges from downright bony bass drums (essential for fast double bass drum passages) to powerful, full basses. And while singers were still allowed almost painful S sounds in the 1990s and early 2000s, many modern singer-songwriter productions today feature a mid-emphasized, warm vocal sound, which is, however, more difficult to integrate into a dense backing track. So this is where the arrangement comes into play again. As you can see, when it comes to mixing, everything is literally connected to everything else. To get a feel for the different frequency spectra of different productions, you can look at them in a spectrum analyzer. We recommend the Hawkeye plug-in from SPL or the professional Pinguin Audio Meter, but the on-board tools of most DAWs also do a good job.

• Dynamics: Of course, the first thing that comes to mind here is mastering: How loud is a production? Are even the quieter passages pushed to the limit, or does it sometimes dare to lower the overall level? But apart from that, there are serious differences in the mixing itself: How loud are the drums? Are they prominent or rather embedded in the overall backing? In the former case the entire production will sound more dynamic and "exciting", while the latter approach can give a ballad the necessary calm. Acoustic guitars have very fast transients, which I can either suppress or emphasize with a compressor using different attack time settings - with a significant impact on the dynamic impression of the overall sound.

• Stereo distribution: How are the instruments distributed across the stereo stage? How were they recorded? While some instruments such as individual brass instruments, basses and vocals are almost always mono, a piano, an organ with a Leslie or an acoustic guitar can be stereo. Try to understand how the producers of your reference tracks deal with this topic. Test what happens when you listen to the reference in mono: How does the sound change?

• Spatiality: While pure panning is only about a left-right distribution of the signals, placement further to the front or back of the imaginary stage also matters for spatiality. And no, you don't necessarily need a surround setup for this: stereo playback can also appear impressively three-dimensional. An example is the title "Cry Me a River" by Diana Krall, where the orchestra stretches out almost endlessly behind the band. But such impressive spatiality is not suitable for every genre: it can even be counterproductive for a powerful hip-hop or rock sound. In any case, the right balance between dry and reverberated signals is important. Analyze the reference tracks in this regard and decide how much space your current production needs and can tolerate.

In the following, we would like to introduce you to the most important tools for sound editing.

4 Tools and their use: Equalizer

The equalizer is probably the most famous and at the same time one of the most powerful tools in sound engineering. However, here too the motto is: less is often more. Try to analyze the signal and narrow down where the problem frequencies lie before you touch an equalizer. At this point, I would also like to question a tip that is often propagated: "making room" for other instruments. The idea is to make the mix transparent by lowering certain frequencies on one instrument and boosting them on another. Although this may work in individual cases, it can easily lead to a discolored sound on the instruments involved. It makes more sense to design the arrangement so that nothing gets in the way in the first place, and to compensate for any frequency overlaps with clever stereo distribution.

Be aware that almost every instrument has two frequency ranges that make up its characteristic sound: a fundamental range and the range containing the noisy signal components, i.e. the picking noises of guitars and basses or the hammer attack of a piano. To find them, take a fairly broadband bell filter, boost it by a few decibels and sweep it across the frequency range.

Pay attention to which frequencies distort the natural sound of the instrument and where its character is emphasized. You can subtly boost the latter components to make an instrument easier to locate. The upper midrange between approx. 400 Hz and 4-5 kHz is better suited for this than lower frequencies, which tend to sound muddy. If possible, you should also avoid boosting the same frequencies on too many signals.
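If you want to see what such a bell boost does mathematically, here is a sketch of a peaking filter based on the widely used RBJ "Audio EQ Cookbook" formulas; the center frequency, gain and Q values are arbitrary examples, and real EQ plug-ins may differ in detail:

```python
import math, cmath

def peaking_eq(f0, gain_db, q, fs=44100):
    # Biquad coefficients for a bell/peaking filter (RBJ Audio EQ Cookbook).
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    num = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return num, den

def gain_at(f, num, den, fs=44100):
    # Magnitude response of the biquad in dB at frequency f.
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (num[0] + num[1] * z + num[2] * z * z) / (den[0] + den[1] * z + den[2] * z * z)
    return 20 * math.log10(abs(h))

# A broad +3 dB bell at 2 kHz, as suggested for hunting character frequencies:
num, den = peaking_eq(2000, 3.0, 0.7)
print(round(gain_at(2000, num, den), 2))  # 3.0 at the center frequency
```

Far away from the center (e.g. at 50 Hz) the gain falls back to roughly 0 dB, which is why a broad, gentle bell colors the instrument far less than a narrow, aggressive one.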

It is important to know that with equalizers, the character and quality of the individual device or plug-in matter a great deal. A passive design like the classic Pultec or the Passeq from SPL has fundamentally different properties and areas of application than a digital EQ. When it comes to emphasizing the character of an instrument or making the sound more powerful and assertive, analog equalizers and the corresponding emulations have the edge. So feel free to try out SSL, API, Neve or other analog simulations. Do you still have an analog mixing console? All the better - then you'll really have fun.

Purely digital equalizers, however, are better suited for so-called "surgical" procedures in which resonances and noise are to be filtered out specifically, because narrow-band filters in particular can be implemented better with them. Here you can confidently rely on the on-board resources of your DAW or on classics like the Q10 from Waves. With the rule of thumb "boost analog, cut digital" you are rarely wrong. But do take the trouble to compare the sound of different plug-ins. Many years ago I had the problem that I wanted to filter out disturbing resonances in a classical guitar recording without affecting the sound of the rest of the instrument. Of the 10 plug-ins tested, only one really achieved this: it was Epure V.3 from the French software company Flux, which is still one of my most important standard tools today.

5 Tools and their use: compressors

"Anyone can operate an equalizer, but you need a gun license for a compressor..." - there is more than a grain of truth in this joke from one of my studio customers. While the functions of an equalizer are relatively easy to grasp, the parameters of a compressor are far more complex, and it takes more listening experience to understand their effects. At the same time, with too much compression you run the risk of producing lifeless mixes. In any case, no compressor at all is better than an incorrectly set one, and since its behavior depends largely on the level and frequency spectrum of the incoming signal, presets and general setting templates are rarely effective.

It is of course true that a compressor fundamentally limits dynamics. In practice, however, a different way of thinking is much more productive: just as an equalizer processes the frequency spectrum, the compressor shapes the dynamics and in particular the volume progression over time - the envelope - of our signal. With cleverly adjusted attack and release times, a compressed signal can even sound more dynamic than the original audio file and at the same time have more assertiveness in the mix.

Imagine an acoustic guitar, a bass or even a piano: all of these sounds consist of an attack noise followed by a tonal sustain phase. As an example, set the ratio to 3:1 and adjust the threshold so that you get 5-7 dB of gain reduction. With a long attack and medium release time, the attacks are highlighted and the instrument gains definition in the mix. Short attack and release times, on the other hand, reduce the attack noise, emphasize the tonal components and tame the signal. Feel free to compress much harder than you would in the actual mix and experiment with the time parameters to get a feel for how they work.
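The numbers in this example can be checked with a little arithmetic. The sketch below models only the static gain curve of a hard-knee compressor; attack and release, i.e. the time behavior discussed above, are deliberately left out, and the threshold value is an arbitrary assumption:

```python
def gain_reduction_db(input_db, threshold_db, ratio):
    # Static curve of a hard-knee compressor: above the threshold the
    # output level rises only 1/ratio as fast as the input, so the
    # reduction equals the overshoot times (1 - 1/ratio).
    overshoot = input_db - threshold_db
    if overshoot <= 0:
        return 0.0
    return overshoot * (1.0 - 1.0 / ratio)

# 3:1 ratio, threshold at -24 dB: a peak at -15 dB (9 dB overshoot)
# is reduced by 6 dB -- within the 5-7 dB range suggested above.
print(round(gain_reduction_db(-15.0, -24.0, 3.0), 1))  # 6.0
```

In other words, at 3:1 you reach the suggested 5-7 dB of gain reduction once the signal overshoots the threshold by roughly 8-10 dB, which is a useful starting point for setting the threshold by ear.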

Real professionals sometimes use two compressors in series, for example to emphasize transients and still catch peaks that are too strong. In fact, in some cases it can sound more musical to use several compressors, each with moderate settings, instead of flattening the signal with a single processor.

In contrast to compressors, limiters usually have neither ratio nor attack controls. They are there to cut off the signal completely at a certain level, in order to either protect equipment or push signal components right up to the digital clipping point without exceeding it (brickwall limiter). Limiters are therefore generally used less in mixing than in mastering. That said, I have also used limiters in pop and rock productions to create consistently loud kick or snare drums, for example.

With compressors, too, there are serious sonic differences between devices and plug-ins. So compare, and find out which processors you like best on which signals.

A very important point is the order of the plug-ins in your signal chain. It depends on the individual case: if, for example, I use an equalizer to reduce low-frequency signal components or even apply a low-cut filter, it makes sense to place the compressor after it. Otherwise the compressor would be triggered unnecessarily by signal components that I am filtering out anyway, and the intended processing would be correspondingly less precise.

If, on the other hand, frequencies - here too, primarily bass - are to be boosted, it is often more effective to compress first, so that the compressor does not immediately level out the high-energy bass boost. In some cases it can be helpful to work with two equalizer plug-ins for cuts and boosts, placed before and after the compressor.

6 Tools and their use: reverb and delay

As mentioned at the beginning, spatiality in a mix means not only a left-right distribution of our signals, but also a distinction between front and back. This is where reverb and delay processors come into play. At the same time, I have already hinted at one of the main problems with using reverb: reverb immediately makes a voice, for example, bigger and more beautiful, so you are inclined to use a lot of it early on. However, you should always be aware that reverb components push a signal backwards in psychoacoustic terms. Since the voice should usually sit at the front of the mix, this is counterproductive.

Therefore, initially try to create a powerful and balanced sound using only volume ratios and panning as well as equalizers and compressors, and add reverb only at the very end. This allows you to shape the depth of your instruments consciously and purposefully. There is also another phenomenon: your hearing gets used to spatial information in the sound image very quickly. Anyone who works with reverb in the mix right from the start runs the risk of unknowingly using too much of it in the end.

Reverb is a complex matter and therefore different processors often have different control options. The most important parameters are:

Reverb time: The time until the reverb tail has subsided by 60 dB. The longer the reverb tail, the larger the room and the more noticeable the reverb.

Size: In many processors you can set the room size independently of the reverb time. The two are nevertheless connected: a very small room with a long reverb tail sounds just as unnatural as the other way around.

High Cut and High Damp: In reality, reverb is highly frequency-dependent. The most important phenomenon is high-frequency attenuation, which varies depending on the materials in the room. While High Cut simply cuts the frequency response above the set frequency, the High Damp parameter simulates the behavior of a natural room, in which the reverb tail contains less and less treble as it decays. Reduced treble makes a reverb tail less noticeable and more pleasant - too little treble, on the other hand, leads to a boxy sound. As a rule of thumb, a High Cut at around 4 to 5 kHz has proven to work well.

Predelay: Along with the reverb time, the predelay is one of the most important reverb parameters. It describes the delay between the original signal and the onset of the reverb tail. Higher predelay values create the impression that the sound source is closer to the listener in the room.

A good method of distributing signals between front and back is to use two initially identical reverbs. For signals located at the back, make the reverb time a little longer and the predelay shorter; for elements at the front, do the opposite. Of course, this is just a rule of thumb to get started; experimenting will once again give you the necessary experience.
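The two key parameters can be expressed in a few lines. The sketch below assumes an idealized exponential tail (real rooms and reverb algorithms deviate from this) and an example sample rate of 44.1 kHz:

```python
def decay_db(t_s, rt60_s):
    # How many dB an idealized reverb tail has decayed after t_s seconds,
    # given RT60 (the time in which the tail falls by 60 dB, as defined above).
    return 60.0 * t_s / rt60_s

def predelay_samples(predelay_ms, sample_rate=44100):
    # The gap between the dry signal and the reverb onset, in samples.
    return round(predelay_ms / 1000.0 * sample_rate)

print(decay_db(1.0, 2.0))     # 30.0 -> halfway down a 2-second tail
print(predelay_samples(20))   # 882 samples at 44.1 kHz
```

The front/back trick above then amounts to giving a "rear" element something like RT60 2.4 s with 10 ms predelay, and a "front" element RT60 1.8 s with 40 ms predelay - example figures only, to be tuned by ear.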

Vocals are a special chapter: here you want a "big" sound, but at the same time the vocals should sit at the front of the mix. The remedy is to use delays, i.e. echoes, instead of reverb. Experiment with delay times that correspond to an eighth note, a quarter note or other note values, and use high-cut filters to taste. Real professionals compress the delay channel with the original vocal track as the sidechain source: if the compressor is cleverly adjusted, the delay only becomes audible during vocal pauses and can be mixed in more prominently without affecting speech intelligibility. Last but not least, it is an excellent idea to let the delay signal fade away gradually. This gives you big, impressive vocals that don't get lost in the mix.
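The note-value delay times mentioned above follow directly from the tempo: a quarter note lasts 60000 / bpm milliseconds. A small sketch (the note_value convention, 0.25 for a quarter note, is just one possible way to express it):

```python
def delay_ms(bpm, note_value=0.25):
    # Delay time in milliseconds for a note value at a given tempo.
    # note_value: 0.25 = quarter note, 0.125 = eighth note, and so on.
    quarter_note_ms = 60000.0 / bpm
    return quarter_note_ms * (note_value / 0.25)

print(delay_ms(120, 0.25))    # 500.0 -> quarter-note delay at 120 bpm
print(delay_ms(120, 0.125))   # 250.0 -> eighth-note delay
```

Many delay plug-ins offer tempo sync and do this conversion for you, but with free-running hardware or simple plug-ins the formula is all you need.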

7 Fine tuning

If you have followed all the rules described, experimented enough and - very importantly - taken enough breaks during your work, there is a good chance that you have achieved a pleasant-sounding mix. But is that certain something still missing - does it still sound like individual tracks without an overall context?

Of course, you could leave this final touch to mastering, but there are two steps you can take to further perfect the mix:

Play with the proportions of space. It is definitely a good idea to put ALL instruments in a common room. This can be very subtle - ideally you only notice it when you switch it off. Use additional, more prominent rooms for snare drums or delays for vocals as desired. The common space is only intended to create a foundation.

The second method is the use of compressors on buses or on the mix bus. Compressing the stereo sum in the mix should be viewed critically, and you should generally not use brickwall limiters there, as this deprives the mastering engineer of the dynamic material he works with. A great strategy, on the other hand, is to use subgroups, i.e. individual buses. For example, you can group drums and other instruments together and compress them separately. Here, too, less is usually more; around 3 dB of gain reduction is often completely sufficient.

As the last "magic trick" of this workshop, I would like to introduce so-called "New York compression": route all the drums to one bus and compress them really hard. 10 to 15 dB of gain reduction is fine, and if the sound really "pumps" with a short release setting, all the better. Now use a second drum bus without processing and mix the compressed drums in with the clean signal. My personal favorite device for this is the Elysia XPressor, which has New York mode built in thanks to its mix control and, with its gain reduction limiter, also offers a truly unique feature for drum processing.
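The principle behind New York compression can be illustrated with a static sketch: a heavily squashed copy is mixed with the untouched signal, so quiet detail comes up while the peaks remain dominated by the dry path. The threshold, ratio and mix values below are arbitrary examples, and a real compressor smooths its gain over time, which this sketch deliberately omits:

```python
import math

def comp_gain_db(level_db, threshold_db, ratio):
    # Gain change (negative dB) of a hard-knee compressor above threshold.
    overshoot = level_db - threshold_db
    return 0.0 if overshoot <= 0 else -overshoot * (1.0 - 1.0 / ratio)

def new_york(sample, threshold_db=-30.0, ratio=10.0, wet=0.5):
    # Parallel compression on a single sample value: mix the dry signal
    # with a heavily compressed copy. Quiet material passes both paths
    # untouched, while peaks are tamed only in the wet path.
    level_db = 20 * math.log10(abs(sample)) if sample else -120.0
    gain = 10 ** (comp_gain_db(level_db, threshold_db, ratio) / 20.0)
    return (1.0 - wet) * sample + wet * sample * gain

# A quiet sample passes unchanged; a full-scale peak is pulled down,
# so the gap between loud and quiet shrinks without crushing the peaks.
print(new_york(0.01))           # 0.01
print(round(new_york(1.0), 3))  # 0.522
```

On the XPressor or in a DAW the same thing is achieved with the mix control or a second, unprocessed bus, as described above.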

Sound engineering is a wonderful adventure playground

You can use the Youlean Loudness Meter to find out the loudness of a master.

Try to organize your tracks in a meaningful way

Grace Jones' "Privat Live" is much more dynamic than John Mayer's "Come Back To Bed"

Kick drum and bass should be edited carefully.

The SPL Passeq is a characterful passive equalizer

As a brickwall limiter, the Weiss MM1 has neither attack nor release controls

The Elysia XPressor is ideal for the drum bus.

The MTurbo Reverb from Melda Productions is a universal and good-sounding reverb plug-in.
