Will tomorrow's artists be slaves to the algorithm?
AI is giving artists a new way to make music. But is the ability to create music innately human? Or are we all about to become slaves to the algorithm?
The first testing sessions for SampleRNN – artificially intelligent software developed by computer scientist duo CJ Carr and Zack Zukowski, AKA Dadabots – sounded more like a screamo gig than a machine-learning experiment. Carr and Zukowski hoped their program could generate full-length black metal and math rock albums by feeding it small chunks of sound. The first trial consisted of encoding and feeding in a few Nirvana a cappellas.
“When it produced its first output,” Carr says, “I was expecting to hear silence or noise because of an error we made, or else some semblance of singing. But no. The first thing it did was scream about Jesus. We looked at each other like, ‘What the fuck?’” But while the platform could convert Cobain’s grizzled pining into bizarre testimonies to the goodness of the Lord, it couldn’t create a coherent song.
Artificial intelligence is already used in music by streaming services such as Spotify, which scan what we listen to so they can better recommend what we might enjoy next. But AI is increasingly being asked to compose music itself – and this is the problem confronting many more computer scientists besides Dadabots.
Musicians – popular, experimental and otherwise – have been using AI to varying degrees over the last three decades. Pop’s chief theoretician, Brian Eno, used it not only to create endlessly self-generating music on his recent album Reflection but also to render an entire visual experience on 2016’s The Ship. The arrangements on Mexican composer Ivan Paz’s album Visions of Space, which sounds a bit like an intergalactic traffic jam, were done by algorithms he created himself. Most recently, producer Baauer – who topped the US charts in 2012 with his viral track Harlem Shake – made Hate Me with Lil Miquela, an artificial digital Instagram avatar. The next step for synthetic beings like these is to create music on their own – that is, if they can get the software to shut up about Jesus.
The first computer-generated score, a string quartet called the Illiac Suite, was developed in 1957 by Lejaren Hiller, and was met with massive controversy in the classical community. Composers at the time were intensely purist. “Most musicians, academics or composers, have always held this idea that the creation of music is innately human,” California music professor David Cope explains. “Somehow the computer program was a threat to that unique human aspect of creation.”
Fast forward to 1980, and after an insufferable bout of composer’s block, Cope began building a computer that could read music from a database written in numerical code. Seven years later, he’d created Emi (Experiments in Musical Intelligence, pronounced “Emmy”). Cope would compose a piece of music and pass it along to his staff to transcribe the notation into code for Emi to analyse. After many hours of digestion, Emi would spit out an entirely new composition written in code, which Cope’s staff would re-transcribe on to staves. Emi could respond not just to Cope’s music, but take in the sounds of Bach, Mozart and other classical staples and conjure a piece to fit their compositional style.

In the nearly 40 years since, this foundational process has been refined. YouTube singing sensation Taryn Southern has constructed an LP composed and produced entirely by AI using a reworking of Cope’s methods. On her album I AM AI, Southern uses an open-source AI platform called Amper to input preferences such as genre, instrumentation, key and beats per minute. Amper is an artificially intelligent music composer founded by film composers Drew Silverstein, Sam Estes and Michael Hobe: it takes commands such as “moody pop” or “modern classical” and creates mostly coherent records to match in tone. From there, an artist can select specific changes in melody, rhythm, instrumentation and more.
Southern, who says she “doesn’t have a traditional music background”, sometimes rejects as many as 30 versions of each song generated by Amper from her parameters; once Amper creates something she likes the sound of, she exports it to GarageBand, arranges what the program has come up with and adds lyrics. Southern’s DIY model foretells a future of musicians making music with AI on their personal computers. “As an artist,” she says, “if you have a barrier to entry, like whether costs are prohibiting you from making something or not having a team, you kind of hack your way into figuring it out.”
AI isn’t just a useful tool – it can be used to explore vital questions about human expression. This self-reflective impulse epitomises the ethic of New York’s art-tech collective the Mill. “The overarching theme of my work,” explains creative director Rama Allen, “is playing with the concept of the ‘ghost in the machine’: the ghost being the human spirit and the machine being whatever advanced technology we try to apply. I’m interested in the collaboration between the two and the unexpected results that can come from it.”
This is the central theme behind the Mill’s musical AI project See Sound – a highly reactive sound-sculpture program driven by the human voice. Hum, sing or rap and See Sound etches a digital sculpture from your vocals on its colourful interface. From there, Allen and his team 3D-print the brand-new shape.
An AI-assisted future raises questions around existing inequalities, corporate domination and artistic integrity: how can we thrive in a world of automation and AI-assisted work without exacerbating the social and economic schisms that have persisted for centuries? It’s likely we won’t. But in the most utopian vision, music will be the first foray into machine learning for many people, allowing collaboration that edifies the listener, the musician and the machine.

Tirhakah Love is a Philadelphia-based writer
[Image: a digital sculpture from the See Sound project]