Artists at the forefront of AI
Perhaps the best way to demonstrate AI’s creative abilities is to zoom in on how numerous artists have already worked with it and, together, crafted some quite astounding music.
Despite the numerous AI platforms which serve up routes to auto-generate functional music, many artists who have overtly worked with AI have approached the concept via more individual means.
Take Holly Herndon, the Berlin-based composer and musicologist who recently created her own intelligent musical accomplice. Dubbed ‘Spawn’, this vocal-sample generator was taught by Herndon and partner Mat Dryhurst to reproduce a bank of vocal types (including her own) over months of training its complex neural network. Spawn was able to organically add vocals to tracks presented to it. As Herndon told Art in America, though, the process is still finding its feet: “AI is not that smart, it’s very low fidelity, it’s not real time, it’s very slow and unwieldy. Spawn can take more than 24 hours to process someone’s vocal input. On the other hand, it has some unique capabilities that are pretty exciting-slash-scary. The AI can extract the logic of something outside its operator’s own logic and re-create it. This is entirely new for computer music.”
A virtual colleague
Herndon’s approach – upturning the often predictable creative choices of the human musician to hack out new avenues of musical exploration – is common to many of the artists who have worked with AI.
Alex Da Kid’s ‘collaboration’ with IBM’s Watson was triggered by the Grammy-winning producer’s interest in whether it was genuinely possible to make a song with a virtual colleague.
Watson uses an accumulation of data gathered by a web of smart APIs. These include Watson Alchemy Language, which studies five years of media to determine current pop-cultural themes; Watson Tone Analyser, which similarly analyses around 26,000 lyrics; and, crucially, Watson Beat, which determines the best chords, keys and frameworks to correspond with a certain theme. With the track Not Easy, the pair explored the theme of heartbreak – and produced a stunning statement featuring contributions from Wiz Khalifa, Elle King and the X Ambassadors. The brilliant end result was popular enough to top both the iTunes and Spotify charts.
While Alex and Holly’s involvement with AI was driven by a desire to explore its creative potential, YouTube star Taryn Southern’s stunning I Am AI LP was conceived when the singer/songwriter was finding it difficult to realise the musical ideas she had in her head. Using a combination of the AI music generators Amper Music and Aiva, as well as Google’s Magenta and IBM Watson, Taryn created a new musical toolkit with which to work. As Southern told Digital Trends, her approach was to use the fine-tuning tools of software like Amper to deviate from her original ideas, download the stems and then rearrange the end results in her DAW.
These artists, each venturing further down the AI rabbit hole, albeit differently, are at the vanguard of a new paradigm for creators. It’s almost a question of ‘when’ as opposed to ‘if’ AI will cease to be regarded as an industry buzzword and become an everyday facet of music creation. Via their work, these artists are laying the foundations for the fruitful absorption of AI into the creative process.
Learning to fly
While these examples only scratch the surface of how AI has been applied by artists in various genres and contexts (see our list of ten AI-built records), in other areas AI has already begun informing our daily lives. It has thoroughly suffused itself into how we listen to music. Pervading streaming platforms and music-listening services are algorithms which smartly serve up tracks similar to the ones we regularly play, building cleverly curated playlists from the data imparted by our listening habits.
Spotify’s ‘Discover Weekly’ smart playlists were designed in collaboration with French AI startup Niland. The underlying neural network determines how best to populate these lists by scanning other users’ playlists that feature the same tracks, as well as analysing the tracks’ waveforms to determine musicological similarities.
“We’re working on a number of ways to elevate the experience even further,” Spotify’s research lead, Rishabh Mehrotra, explained to AI News. “Reinforcement learning will be an important focus point as we look into ways to optimise for a lifetime of fulfilling content, rather than optimise for the next stream. In a sense this isn’t about giving users what they want right now so much as evolving their tastes and looking at their long-term trajectories.”
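The playlist-scanning side of this idea can be illustrated with a toy collaborative-filtering sketch. This is not Spotify’s or Niland’s actual system – the data and function names here are entirely hypothetical – but it shows the underlying principle: tracks that appear together in many users’ playlists score as similar.

```python
from collections import defaultdict
from math import sqrt

# Hypothetical toy data: which tracks appear in which users' playlists.
playlists = {
    "user_a": {"track_1", "track_2", "track_3"},
    "user_b": {"track_2", "track_3", "track_4"},
    "user_c": {"track_1", "track_4", "track_5"},
}

def track_occurrences(playlists):
    """Represent each track as the set of playlists it appears in."""
    occ = defaultdict(set)
    for user, tracks in playlists.items():
        for track in tracks:
            occ[track].add(user)
    return occ

def cosine(a, b):
    """Cosine similarity between two binary occurrence sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))

def similar_tracks(seed, playlists):
    """Rank all other tracks by co-occurrence similarity to the seed."""
    occ = track_occurrences(playlists)
    scores = {t: cosine(occ[seed], s) for t, s in occ.items() if t != seed}
    return sorted(scores, key=scores.get, reverse=True)

print(similar_tracks("track_2", playlists))
```

Here "track_3" ranks first because it shares both of track_2’s playlists. A production system would combine this co-occurrence signal with audio analysis of the waveforms themselves, as the article describes.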
While some will undoubtedly scoff at AI’s growing ubiquity – and there are lines to be drawn – it’s clear that, used in harmony with an open-minded creative, it can unlock completely new oceans of possibility. For listeners too, the increasing prevalence of algorithms which can gently guide them around genres, scenes and moods is something many are thankful for. Across these pages, we’ve seen how AI has grown, how it can handle complex audio-editing tasks and open new compositional routes forward, and how a few artists are already taking the virtual hand of AI to forge new ground. We hope this has shed light on AI’s potential as a critical evolution of computer music.