Sampling, processing and resampling

Modern DAWs are so good at manipulating recordings that an enormous amount of sound design can be done using rendered audio files. We can reverse sounds, change their gain, timestretch and pitchshift them, and so much more.

The length of audio files is easy to manipulate too, as the Scissors, Chop or Trim tool in your DAW will easily be able to snip audio files to remove unwanted sections, with real-time fade-ins and -outs ready to finesse these boundaries and deal with unwanted clicks. Often we can even ‘auto-trim’ by using tools to remove sections of audio which fall below a particular volume threshold, to allow us to process long sections of audio quickly.
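To make that ‘auto-trim’ idea concrete, here’s a minimal Python sketch – not taken from any particular DAW, and with the function name, threshold and window size purely illustrative – that drops windows of audio whose level falls below a chosen threshold:

```python
import numpy as np

def auto_trim(audio, sample_rate, threshold_db=-48.0, window_ms=20.0):
    """Drop windows of audio whose RMS level falls below a dB threshold.

    Assumes mono floating-point audio in the range -1.0 to 1.0; the
    threshold and window size are illustrative defaults, not DAW values.
    """
    window = max(1, int(sample_rate * window_ms / 1000.0))
    kept = []
    for start in range(0, len(audio), window):
        chunk = audio[start:start + window]
        rms = np.sqrt(np.mean(chunk ** 2))
        level_db = 20.0 * np.log10(rms) if rms > 0 else -np.inf
        if level_db >= threshold_db:
            kept.append(chunk)
    return np.concatenate(kept) if kept else np.zeros(0, dtype=audio.dtype)
```

In a real DAW you’d also want short fades at each cut point, for exactly the click-avoidance reasons mentioned above.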

There are limitless ways in which chains of effects – added as Insert or ‘shared’ Auxiliaries – can further manipulate the sounds we create. We can enhance their tone, make them angry, make them mellow, spin them from side to side, bury them in the ground or at the back of a cavernous space, chop them into slices, degrade their fidelity… and so on.

So why would we consider taking our audio files into a domain where they can be triggered over MIDI? Well, for the same reason that MIDI-triggered instruments are so useful in other musical contexts: as musicians, we love to explore the possibilities of ‘performed’ sound. There are a number of things which audio-processing tools in DAWs aren’t very good at, and the first of these is rapidly auditioning what happens when we want to trigger several notes at once, or experiment with triggering a sound towards the outer limits of the keyboard range. Most ‘standard’ samplers fix the relationship between pitch and time, so that a note played an octave below the key note effectively halves the playback rate, producing a sound at half speed as well as sounding 12 semitones below the original pitch. Similarly, higher notes trigger faster playback, meaning that chords will produce beautifully random, spread sounds, with each note unfolding at its own speed.
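To put some numbers on that fixed pitch/time relationship, here’s a small Python sketch – the note offsets and sample length are illustrative – of how a classic repitching sampler maps semitone offsets to playback rate, and therefore to playback duration:

```python
# Classic repitching sampler: playback rate doubles with every octave above the key note.
def playback_rate(semitones_from_key_note):
    return 2.0 ** (semitones_from_key_note / 12.0)

def playback_duration(sample_length_s, semitones_from_key_note):
    # Faster playback means the same sample finishes sooner.
    return sample_length_s / playback_rate(semitones_from_key_note)

# A 2-second sample triggered as a minor triad an octave above the key note:
for offset in (12, 15, 19):
    print(offset, round(playback_rate(offset), 3), round(playback_duration(2.0, offset), 3))
```

Each note plays back at its own rate, so each finishes at a different time – the ‘spread’ effect described above – while an offset of -12 would give a rate of 0.5 and double the duration.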

But some samplers also allow you to alter this relationship. iZotope’s Iris 2 is one such example – its Radius RT mode preserves the timing of samples irrespective of the pitch at which you play them back. As a result, otherworldly, unusual textures are available, even before you begin to create playback loops, draw lassoes around your chosen areas of frequency content, or begin to engage with the host of modulation options within Iris 2’s interface. And indeed, before you’ve layered four separate sounds for similar treatment.
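That decoupling of pitch and time can also be sketched outside a dedicated sampler. As a rough illustration – this is the general idea, not how Radius RT works internally, and the file names are placeholders – a phase-vocoder-style pitch shifter such as the one in the librosa Python library changes pitch while preserving duration:

```python
import librosa
import soundfile as sf

# Load a mono recording at its native sample rate (the file path is a placeholder).
y, sr = librosa.load("pad_recording.wav", sr=None, mono=True)

# Shift down an octave while keeping the original duration, unlike a
# classic repitching sampler, which would also halve the playback speed.
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)

sf.write("pad_recording_octave_down.wav", shifted, sr)
```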

Granular resampling is also capable of remarkable results, with the waveforms of incoming audio split into multiple slices (or ‘grains’) before being put back together ‘out of sequence’ to produce brand new, hard-to-imagine sounds. Logic Pro’s Alchemy synth/sampler is capable of this, as is Spectrasonics’ Omnisphere 2, among other instruments.

But what sound designers love to do is constantly toggle between ‘straight up’ audio processing, sampling and further audio processing. It’s not uncommon for a source recording to be trimmed, reversed and manipulated as an audio file, before being taken into a sampler to be triggered over multiple notes, resynthesized or split into grains, and then recorded as a MIDI performance. This might then be rendered as a new audio file for a separate round of audio-related processing, including trimming or timestretching, before potentially being resampled a second time. Understanding the benefits and processes available to sampled or audio-hosted sounds is a key part of a sound designer’s toolkit, so the more time you can spend exploring options here, the more your work will benefit.
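As a rough sketch of the granular approach described above – assuming a mono floating-point NumPy array, with the grain size and random reordering purely illustrative – splitting a recording into grains and reassembling them out of sequence might look like this:

```python
import numpy as np

def granular_shuffle(audio, sample_rate, grain_ms=80.0, seed=0):
    """Split mono float audio into short grains and reassemble them out of order.

    The grain length and random reordering are illustrative; real granular
    engines add overlapping windows and per-grain pitch/level modulation.
    """
    grain_len = max(1, int(sample_rate * grain_ms / 1000.0))
    grains = [audio[i:i + grain_len].copy() for i in range(0, len(audio), grain_len)]

    rng = np.random.default_rng(seed)
    order = rng.permutation(len(grains))

    # A short fade at each grain boundary helps avoid the clicks mentioned earlier.
    for g in grains:
        ramp = np.linspace(0.0, 1.0, num=min(64, len(g)))
        g[:len(ramp)] *= ramp
        g[len(g) - len(ramp):] *= ramp[::-1]

    return np.concatenate([grains[i] for i in order])
```

Render the result to a new audio file and it can be trimmed, reversed or resampled all over again, exactly as the workflow above describes.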
