Will tomorrow's artists be slaves to the algorithm?

AI is giving artists a new way to make music. Is the creation of music innately human? Or are we all about to become slaves to the algorithm?

The Guardian Weekly | Inside | By Tirhakah Love

The first testing sessions for SampleRNN – artificially intelligent software adapted by the computer scientist duo CJ Carr and Zack Zukowski, AKA Dadabots – sounded more like a screamo gig than a machine-learning experiment. Carr and Zukowski hoped their program could generate full-length black metal and math rock albums after being fed small chunks of sound. The first trial consisted of encoding and feeding in a few Nirvana a cappellas.

“When it produced its first output,” Carr says, “I was expecting to hear silence or noise because of an error we made, or else some semblance of singing. But no. The first thing it did was scream about Jesus. We looked at each other like, ‘What the fuck?’” But while the platform could convert Cobain’s grizzled pining into bizarre testimonies to the goodness of the Lord, it couldn’t create a coherent song.
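Under the hood, what the program had learned is sample-level prediction: SampleRNN treats raw audio as a long sequence of quantised amplitude values and trains a network to guess each one from the ones before it. A minimal sketch of that idea, using a toy recurrent model in PyTorch rather than Dadabots’ actual tiered architecture:

```python
# Toy SampleRNN-style model: predict the next 8-bit audio sample
# from the preceding ones. A sketch, not Dadabots' code.
import torch
import torch.nn as nn

class TinySampleModel(nn.Module):
    def __init__(self, n_levels=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(n_levels, hidden)  # one vector per amplitude level
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_levels)       # distribution over the next sample

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

model = TinySampleModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 'chunks' stands in for small windows of quantised audio – the Nirvana
# a cappellas above – shaped (batch, time) with values 0..255.
chunks = torch.randint(0, 256, (8, 1024))
logits, _ = model(chunks[:, :-1])  # predict sample t+1 from samples up to t
loss = loss_fn(logits.reshape(-1, 256), chunks[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

Trained long enough on real recordings, sampling from such a model one step at a time yields new audio in the style of the training data – coherent or, as Carr found, not.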

Artificial intelligence is already used in music by streaming services such as Spotify, which scan what we listen to so they can better recommend what we might enjoy next. But AI is increasingly being asked to compose music itself – and this is the problem confronting many more computer scientists besides Dadabots.

Musicians – popular, experimental and otherwise – have been using AI to varying degrees over the last three decades. Pop’s chief theoretician, Brian Eno, used it not only to create new endlessly perpetuating music on his recent album Reflection but to render an entire visual experience in 2016’s The Ship. The arrangements on Mexican composer Ivan Paz’s album Visions of Space, which sounds a bit like an intergalactic traffic jam, were done by algorithms he created himself. Most recently, producer Baauer – whose 2012 viral track Harlem Shake topped the US charts – made Hate Me with Lil Miquela, a computer-generated Instagram avatar. The next step for synthetic beings like these is to create music on their own – that is, if they can get the software to shut up about Jesus.

The first computer-generated score, a string quartet called the Illiac Suite, was composed in 1957 by Lejaren Hiller, and was met with massive controversy in the classical community. Composers at the time were intensely purist. “Most musicians, academics or composers, have always held this idea that the creation of music is innately human,” California music professor David Cope explains. “Somehow the computer program was a threat to that unique human aspect of creation.”

Fast forward to 1980, and after an insufferable bout of composer’s block, Cope began building a computer that could read music from a database written in numerical code. Seven years later, he’d created Emi (Experiments in Musical Intelligence, pronounced “Emmy”). Cope would compose a piece of music and pass it along to his staff to transcribe the notation into code for Emi to analyse. After many hours of digestion, Emi would spit out an entirely new composition written in code that Cope’s staff would re-transcribe onto staves. Emi could respond not just to Cope’s music, but take in the sounds of Bach, Mozart and other classical music staples and conjure a piece that could fit their compositional style.

In the nearly 40 years since, this foundational process has been refined. YouTube singing sensation Taryn Southern has constructed an LP composed and produced completely by AI using a reworking of Cope’s methods. On her album I AM AI, Southern uses an AI platform called Amper to input preferences such as genre, instrumentation, key and beats per minute. Amper, founded by film composers Drew Silverstein, Sam Estes and Michael Hobe, is an artificially intelligent music composer: it takes commands such as “moody pop” or “modern classical” and creates mostly coherent records that match in tone. From there, an artist can specify changes in melody, rhythm, instrumentation and more.

Southern, who says she “doesn’t have a traditional music background”, sometimes rejects as many as 30 versions of each song generated by Amper from her parameters; once Amper creates something she likes the sound of, she exports it to GarageBand, arranges what the program has come up with and adds lyrics. Southern’s DIY model foretells a future of musicians making music with AI on their personal computers. “As an artist,” she says, “if you have a barrier to entry, like whether costs are prohibiting you to make something or not having a team, you kind of hack your way into figuring it out.”
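Amper’s interface isn’t public as code, but the workflow Southern describes – set high-level preferences, generate candidates, keep the take that works – maps on to a simple loop. A hypothetical sketch, with placeholder names standing in for the real platform:

```python
# Hypothetical preference-driven generation loop; generate() is a
# stand-in for whatever the real platform exposes, not Amper's API.
import random
from dataclasses import dataclass

@dataclass
class Preferences:
    genre: str                 # e.g. "moody pop" or "modern classical"
    key: str
    bpm: int
    instrumentation: list[str]

def generate(prefs: Preferences, take: int) -> dict:
    """Placeholder composer: returns a fake track for a given take number."""
    random.seed(take)
    return {"take": take, "appeal": random.random()}  # 'appeal' mimics the artist's ear

prefs = Preferences(genre="moody pop", key="A minor", bpm=92,
                    instrumentation=["piano", "strings", "drums"])

# Southern describes rejecting as many as 30 takes before exporting
# one to GarageBand for arrangement and lyrics.
takes = [generate(prefs, n) for n in range(30)]
keeper = max(takes, key=lambda t: t["appeal"])
print(f"exporting take {keeper['take']} to the DAW")
```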

AI isn’t just a useful tool – it can be used to explore vital questions about human expression. This self-reflective impulse epitomises the ethic of New York’s art-tech collective the Mill. “The overarching theme of my work,” explains creative director Rama Allen, “is playing with the concept of the ‘ghost in the machine’: the ghost being the human spirit and the machine being whatever advanced technology we try to apply. I’m interested in the collaboration between the two and the unexpected results that can come from it.”

This is the central theme behind the Mill’s musical AI project See Sound – a highly reactive sound-sculpture program driven by the human voice. Hum, sing or rap and See Sound etches a digital sculpture from your vocals on its colourful interface. From there, Allen and his team 3D-print the brand-new shape.
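The Mill hasn’t published See Sound’s internals, so any code can only gesture at the voice-to-shape mapping. One toy version: take a recording’s loudness envelope and revolve it around an axis, so louder moments bulge outwards into a printable profile:

```python
# Toy voice-to-sculpture mapping – an illustration, not the Mill's pipeline.
import numpy as np

def envelope(audio: np.ndarray, frames: int = 64) -> np.ndarray:
    """Mean absolute amplitude per frame: the loudness 'profile' of the voice."""
    trimmed = audio[: len(audio) // frames * frames]
    return np.abs(trimmed.reshape(frames, -1)).mean(axis=1)

def revolve(profile: np.ndarray, steps: int = 90) -> np.ndarray:
    """Spin the profile around a vertical axis into an (x, y, z) point grid."""
    radius = 1.0 + profile / profile.max()            # louder = wider
    theta = np.linspace(0.0, 2 * np.pi, steps)
    z = np.linspace(0.0, 1.0, len(profile))
    x = radius[:, None] * np.cos(theta)[None, :]
    y = radius[:, None] * np.sin(theta)[None, :]
    return np.stack([x, y, np.broadcast_to(z[:, None], x.shape)], axis=-1)

audio = np.random.randn(48_000)    # stand-in for one second of singing
points = revolve(envelope(audio))  # a point cloud a mesher could close and 3D-print
print(points.shape)                # (64, 90, 3)
```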

An AI-assisted future raises questions around existing inequalities, corporate domination and artistic integrity: how can we thrive in a world of automation and AI-assisted work without exacerbating the social and economic schisms that have persisted for centuries? It’s likely we won’t. But in the most utopian vision, music will be many people’s first foray into machine learning, allowing collaboration that edifies the listener, the musician and the machine.

TIRHAKAH LOVE IS A PHILADELPHIA-BASED WRITER


▲ So solid: digital sculpture from the See Sound project
