The genesis of AI in music production
As our hard drives get more crowded with AI-driven plugins, it’s easy to forget that, not too long ago, the very idea of computer-driven musical decision-making seemed like sci-fi. So, how did AI grow from the giddy witterings of futurists to an everyday norm?
While many manufacturers and tech companies enthusiastically point to their AI-driven wares as an indicator of their cutting-edge status, explorations into computer-assisted music-making have been developing since at least the 1950s. It should be made clear at the outset that the term ‘AI’, when applied to music-making, encompasses a few rather different things. There’s the development of so-called ‘smart’ algorithms that we now commonly encounter in mixing or audio editing plugins. Many of these actually lean on a wealth of human experience to apply frequency-correction or normalisation presets. Elsewhere, there are those bona fide virtual brains, such as the ones driving Sonible’s stable of exemplary AI plugins, or mastering platform LANDR’s ‘Synapse’. These take a look at the specifics of your audio and determine the best course of action, based on their particular field of expertise.
Play it again, RAM
Beyond music production, there’s the arguably separate universe of composition, where AI has made a major dent. There’s been notable growth in the number of music-generating services that are able to knock out bewilderingly beautiful music without breaking a sweat. It’s all based on an abundance of theory know-how, coupled with the ability to draw from a hub of pre-existing music.
A leading example is Aiva, a subscription-based music engine that relies on deep learning algorithms to improve its ability to make dazzling music, and which can be tailored to a huge range of genres and styles. Aiva’s virtual brain is founded on a pattern-forming model of how a human brain works – with a neural network that recalls its past experiences, and prior problem solving, to continually refine its results. This is often known as ‘reinforcement’ learning, whereby the AI manoeuvres its way around huge amounts of data (such as an archive of classical music, in Aiva’s case).
This brain is able to recognise patterns and commonalities, such as chord structure, melody construction and typical arrangement choices. Armed with this knowledge, it’s then able to create a similarly structured variant. In 2022, these AI-generated tracks are increasingly indiscernible from music composed by human beings.
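To make the idea of learning patterns from a corpus a little more concrete, here’s a deliberately simple sketch – a first-order Markov chain over chord symbols, not Aiva’s actual (far more sophisticated) neural model. It ‘learns’ which chords tend to follow which from a toy corpus, then walks those learned transitions to produce a new, similarly structured progression. All function names and the example corpus are our own illustrative inventions.

```python
import random

def learn_transitions(progressions):
    """Count which chord follows which across a corpus of progressions."""
    transitions = {}
    for prog in progressions:
        for current, following in zip(prog, prog[1:]):
            transitions.setdefault(current, []).append(following)
    return transitions

def generate(transitions, start, length, seed=None):
    """Walk the learned transition table to produce a new progression."""
    rng = random.Random(seed)
    prog = [start]
    for _ in range(length - 1):
        options = transitions.get(prog[-1])
        if not options:
            break  # dead end: this chord never had a successor in the corpus
        prog.append(rng.choice(options))
    return prog

# A toy 'archive' of pop progressions standing in for a real training set
corpus = [
    ["C", "G", "Am", "F"],
    ["C", "Am", "F", "G"],
    ["Am", "F", "C", "G"],
]
table = learn_transitions(corpus)
print(generate(table, "C", 8))
```

Every pair in the output is a transition that actually occurred in the corpus, which is the essence of the ‘create a similarly structured variant’ step described above – real generative systems simply learn vastly richer patterns than adjacent chord pairs.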
The idea of a machine that is able to spawn music stems back decades, with a major first step being the programming of the Illiac Suite (String Quartet No.4) by university professors Lejaren Hiller and Leonard Isaacson. This initial footstep would eventually lead to David Cope’s 1997 ‘Experiments in Musical Intelligence’ (EMI) program. This initially sought to analyse the composer’s original scores and throw out some new variations based on them, to aid his productivity. Before long, the EMI software was able to replicate the intricacies of classic composers with astounding accuracy. “When I first started working with Bach and other composers I did it for only one reason – to refine and help me understand what style was. No other reason,” Cope reflected. Despite his original singular aim, the new data-driven paradigm of virtual music-making had been established.
Hi, robot
While the rise of generative music platforms that can conveniently create ready-to-go music has been predictably controversial, particularly among soundtrack composers, the concurrent appearance of machine-learning tools that work with the composer and producer has been more warmly embraced. Tools such as Orb Plugins’ Producer Suite 3 seek to aid composition by constructing fresh melodies, chord sequences, keys, arpeggios and synth sounds that can significantly enhance a project.
Rather than handing off all duties to the AI, the user is invited to scale the parameters that its digital brain works with. This AI will habitually re-model itself based on the types of arrangements you’re concocting, serving up new deviations that, in Orb’s case, are intended to enhance how expressive and cohesive the resulting composition is.
While music theory is one aspect of track-building that AI can undoubtedly smooth over, beat-making is also well within its purview. In these pages recently, we reviewed Audiomodern’s Playbeat 3, a fantastic beat-making toolkit, complete with an incredibly manipulable step sequencer. Playbeat’s smart algorithms work concurrently to either build on, or totally remix, your rhythmic designs at an incredibly fast rate. This type of virtual pro suggestion-maker represents AI at perhaps its most constructively human-like – though the thought of Playbeat’s (and similar software’s) ‘always watching’ learning algorithms monitoring your every move is perhaps a little unsettling to old-school sci-fi aficionados.
An ode to code
As these myriad AI algorithms continue to learn, and more thoroughly permeate every facet of music-making, it’s likely that machine-assisted workflows will become increasingly normalised over the next few decades. While there are legitimate conversations to be had about the impact of generative music platforms on professional composers, the application of AI-assisted algorithms can undoubtedly be of major benefit to all.
In the next section we’ll delve much deeper into some of the finest AI-driven software, platforms and apps currently available, and illustrate just how and why AI’s suffusion throughout the world of music technology has become a force for so much time-saving and idea-aiding good.