Computer Music

The secret of smart plugins

21st century music production is now dominated by machine learning plugins, smart algorithms and self-determining brains that can steer you to make mixes of a quality far beyond your capabilities. Let’s see how AI has integrated into modern mixing


Now well into the 2020s, we’re awash with ways to make release-quality music at home, via user-friendly DAWs, expansive sample libraries and fine-tuning mix plugins. But, despite access to tools that enhance audio and tackle the mixing and mastering process solo, these disciplines still call for some hefty prerequisite knowledge.

While learning the intricacies of music production is something we’re always keen to promote in this mag, companies like Zynaptiq and iZotope noticed that, for many time-devouring processes, the application of AI could not only provide footbridges over the numerous pitfalls of production, but also solve issues that human beings find challenging.

“The terms ‘AI’ and ‘Smart’ are really primarily used for marketing purposes, which is useful as the terms convey the product idea very clearly on a level of intent,” Denis Goekdag, Zynaptiq’s CEO, explains to us. “You can read it as ‘we aim to make a better solution that takes some of the burden of your task off of you by means of state-of-the-art statistics software tech from the field of artificial intelligence’. This helps clarify, too, that the software doesn’t aim to reproduce analogue gear or old-school workflows etc, which was all the rage before companies like Zynaptiq started pushing the use of AI/smart stuff from 2012 onwards. Software can do things analogue never could, and allows imagining solutions that were simply unimaginable even just 20 years ago.”

How smart plugins think

Today, the release of production software that houses deep-seated machine learning, or features a handy AI assistant pulling the strings, is something of an everyday occurrence. Take Sonible’s widely beloved Smart plugin suite.

This popular range spans many mixing applications, each finely tailored to intelligently home in on particular audio issues. Among their wares are the extraordinary Smart: EQ 3 and the content-aware Smart: Limit, each of which is directed by deep-coded virtual thought processes to fulfil its aims.

Alexander Wankhammer, Sonible’s CMO and co-founder, explained to us how the company took its first steps into AI. “Our first software product ‘Frei:Raum’ was released in January 2015. It already had the option to automatically correct the spectral deficiencies of a signal by observing its spectral and temporal characteristics. At this time, we were mainly using ‘classical’ statistics-based machine learning algorithms, though later updates of our Smart: Filtering technology started to incorporate deep learning (deep neural networks).” Alexander elaborates on deep learning, telling us that this is typically what people are referring to when discussing AI, and confirming Denis’s assessment: “Any system that is capable of performing tasks normally requiring human intelligence can per definition be called an ‘AI-based system’ – no matter if some other machine-learning-based approach is used to solve a certain task.”

Alexander goes on to tell us how the deep learning algorithms within Smart: EQ 3 are actually programmed. “[Smart: EQ 3] uses a system that is mainly trained by huge amounts of data. In the case of Smart: EQ 3, the system learned to transform ‘bad data’ (eg signals with spectral deficiencies) into ‘good data’ (eg signals with a nice spectral balance). To do so, we used deep learning (in our case a specialised convolutional neural network architecture) and presented a spectro-temporal representation of the bad data to the input of the network. We then defined the ‘good data’ as the target for the network’s output. By doing this thousands and thousands of times, the network learned how to correct problems in bad data samples. Once a network has been trained, it’s basically a black box doing its thing.”
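To make the training loop Alexander describes a little more concrete, here’s a minimal sketch in Python/PyTorch of the same idea: a small convolutional network is shown ‘bad’ spectro-temporal data at its input, with the matching ‘good’ data defined as the target. This is purely illustrative and is not Sonible’s actual architecture; the network layout, the random stand-in spectrograms and every parameter value here are assumptions.

```python
# Illustrative sketch only - not Sonible's code. A tiny convolutional net
# learns to map "bad" spectrograms (spectral deficiencies) to "good" ones.
import torch
import torch.nn as nn

class SpectralCorrector(nn.Module):
    """Small conv net operating on (batch, 1, freq_bins, time_frames) spectrograms."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # predict corrected spectrogram
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical stand-ins for the "thousands and thousands" of training pairs:
# bad_specs are inputs with spectral problems, good_specs are the targets.
bad_specs = torch.rand(64, 1, 128, 256)   # e.g. log-magnitude spectrograms
good_specs = torch.rand(64, 1, 128, 256)

model = SpectralCorrector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):                    # in reality: many passes over huge datasets
    optimizer.zero_grad()
    prediction = model(bad_specs)          # the network's attempt at "good data"
    loss = loss_fn(prediction, good_specs) # how far from the defined target?
    loss.backward()                        # learn from the error...
    optimizer.step()                       # ...and adjust the weights

# Once trained, the network is effectively the "black box" Alexander mentions:
# feed it a problem spectrogram and it outputs its best guess at a corrected one.
```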

“Software can do things analogue never could” – Denis Goekdag, Zynaptiq

The cat question

This same principle of intensely hammering the algorithms into shape beforehand lies at the heart of Zynaptiq’s stable. Denis Goekdag explains: “The network might learn to output ‘CAT = TRUE’ when it is presented with a picture of a cat, and ‘CAT = FALSE’ if the picture presented to it contains no cat (but maybe a beer glass). In simplified terms, you would train the network to achieve this by showing it 1 million cat pictures (and stating that these pictures should result in TRUE being output), and 1 million pictures without a cat (which should give FALSE as output). At Zynaptiq, we use pattern recognition in many of our products; in source separation products like Unmix: Drums, for example, it is used to figure out which parts of the input spectrum are drums, and which aren’t.” From such technically straightforward beginnings, labyrinthine neural networks can be forged which can effectively target problems that human brains would struggle to isolate.
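In code, the cat example Denis gives boils down to ordinary supervised binary classification. The sketch below is again an assumption-laden illustration in PyTorch, not Zynaptiq’s implementation: labelled examples go in, TRUE/FALSE predictions come out, and the network is nudged towards the right answer on every pass.

```python
# Illustrative sketch only - not Zynaptiq's code. Train a network to answer
# "CAT = TRUE" (1) for cat pictures and "CAT = FALSE" (0) for everything else.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(),                    # 64x64 greyscale "pictures" -> flat vectors
    nn.Linear(64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 1),               # single logit: cat or not
)

# Hypothetical stand-ins for "1 million cat pictures" and "1 million without":
images = torch.rand(2000, 1, 64, 64)                       # first 1000 pretend to be cats
labels = torch.cat([torch.ones(1000), torch.zeros(1000)])  # 1 = TRUE, 0 = FALSE

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    logits = classifier(images).squeeze(1)
    loss = loss_fn(logits, labels)   # penalise wrong TRUE/FALSE answers
    loss.backward()
    optimizer.step()

# The same pattern-recognition idea scales from pictures to spectra: label which
# time/frequency regions are drums, train, and the network learns to pick the
# drums out of everything else.
```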

Both Sonible and Zynaptiq’s intelligent tools have been undeniable trailblazers when it comes to the widespread understanding of how AI-assistive technology can dig deeper, more quickly, into mix issues. Recent, solid releases from the likes of Oeksound, Baby Audio and Soundtheory have all taken their cues from those respective stables – leaning on painstakingly programmed models in their role as the ultimate time-saving mixing problem-solvers for this generation of producers.

Sonible Smart EQ 3 was trained to do its job just like anyone else… just at an accelerated pace
