AS A.I. CLEANS UP ‘BROTHER’ JACK McDUFF LIVE IN ’82, WHERE NEXT FOR DIGITAL AUDIO RESTORATION?
WHEN JAZZ organ freak Scott Hawthorn watched Hammond B3 great ‘Brother’ Jack McDuff at Parnell’s club in Seattle in June 1982, he was aware that the keyboardist’s Leslie cabinet had a rip in its bass woofer. “Because of the particular notes that ‘farted’,” Hawthorn says today of the damaged speaker’s distorted sound, “it really did seem to add to the funk.”
Hawthorn taped the four nights and shared them – audio murk, hiss, ‘farts’ and all – with other McDuff aficionados. But 40 years on, via advances in A.I. sound restoration technology, the recordings are to be made available in uncannily listenable, scrubbed-up form as Live At Parnell’s. How did this happen?
Thank Greg Boraman of the Soul Bank reissue label, who sent the recordings to Claudio Passavanti of London mastering outfit Dr Mix, who set to work using the RX9 audio editor developed by US firm iZotope. “RX uses artificial intelligence to repair flaws in audio recordings,” says iZotope’s Christoph Hartwig, who points out the process is common practice in TV and film. “This includes removing complex noise or reverb from recordings as well as restoring missing information in audio data. It’s almost like a photo editing tool for sound.”
“We were blown away by the level of audio manipulation that was possible with A.I. tech. It’s like black magic!” says Passavanti, who adds that other tools were used to smooth out hiss and other distortions. “Once the recording was cleaned up, we manually mastered it using mostly hardware analogue gear.”
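iZotope doesn’t publish RX’s internals, but the classical technique this family of tools builds on – spectral gating, where a noise floor is estimated from a quiet passage and anything quieter is turned down – can be sketched in a few lines of Python. Everything below is illustrative only, not iZotope’s actual algorithm or API:

```python
import numpy as np

def spectral_gate(signal, frame=512, hop=256, noise_frames=10, reduce_db=12.0):
    """Toy spectral noise gate: estimate a per-frequency noise floor from
    the opening (assumed noise-only) frames, then attenuate any bin in any
    frame that falls below that floor."""
    window = np.hanning(frame)
    n = 1 + (len(signal) - frame) // hop
    # Slice the signal into overlapping, windowed frames
    frames = np.stack([signal[i * hop : i * hop + frame] * window for i in range(n)])
    spec = np.fft.rfft(frames, axis=1)
    mag = np.abs(spec)
    # Noise floor: mean magnitude of the opening frames, with a safety margin
    noise_floor = 1.5 * mag[:noise_frames].mean(axis=0)
    gain = np.where(mag > noise_floor, 1.0, 10 ** (-reduce_db / 20.0))
    # Apply the gain and overlap-add the frames back into one signal
    gated = np.fft.irfft(spec * gain, n=frame, axis=1)
    out = np.zeros(len(signal))
    for i in range(n):
        out[i * hop : i * hop + frame] += gated[i]
    return out
```

Real restoration suites layer far more sophistication on top – machine-learned noise models, de-reverb, de-clip – but the “photo editing for sound” analogy holds: both work on a time–frequency picture of the recording.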
Another user of A.I. to de-shroud vital hidden audio treasure was Peter Jackson, who “de-mixed” Michael Lindsay-Hogg’s Let It Be footage for last year’s Get Back doc, stripping out competing sound sources to make previously hidden conversations between the Beatles audible in pristine audio. Furthermore, individual instruments and vocal takes recorded in mono during rehearsals were isolated and given greater clarity. “What’s exciting about A.I. audio is that we can now extract vocals, basslines and other elements of a mix straight off any recording,” says Passavanti, “which opens the door to the possibility of making music beyond the constraints of traditional multi-track technology. I think this is very, very exciting.”
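Modern A.I. de-mixing is far beyond anything hand-coded, but the oldest trick in the box – cancelling a centre-panned vocal by subtracting one stereo channel from the other – shows the basic idea of exploiting a mix’s structure to pull elements apart. A minimal sketch, with illustrative names only:

```python
import numpy as np

def cancel_centre(left, right):
    """Classic 'out-of-phase' trick: anything panned dead centre
    (typically the lead vocal) is identical in both channels, so
    subtracting one channel from the other cancels it, leaving
    only the side-panned material."""
    return left - right

def extract_centre(left, right):
    """Crude complement: averaging the channels emphasises
    centre-panned material over side-panned material."""
    return (left + right) / 2.0
```

Neural de-mixers generalise this enormously, learning what vocals, drums and bass sound like rather than relying on where they sit in the stereo field – which is how mono rehearsal tapes, with no stereo field at all, could still be pulled apart for Get Back.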
That these individual stems are then in a form ripe for manipulation and remixing – much more easily than ever before – makes for tantalising speculation. Imagine flawed live documents such as The Beatles At The Hollywood Bowl, Metallic KO by The Stooges or Syd Barrett live at the Olympia in 1970 getting an audio reconstruction. There’s also the prospect of the democratised deconstruction of the canon, like last year’s creditable Clash project Mohawk Revenge, where Joe Strummer’s voice was extracted from 1985’s unloved, electronic Cut The Crap LP and set to a guitar-bass-drums punk backing. Add to this ABBA’s digital rebirth and the growth of the deepfake, and feverish dreams of a quantum leap into A.I.-generated original songs sung by computer-resurrected singers don’t seem so outré.
Hawthorn, who befriended McDuff before his death in 2001, sounds a note of caution. “There was much more to being there than just audio,” he says. “The smells, the conversations, the soul food, the sense of danger in his performances… it all added to the atmosphere.” Yet Hartwig is ready to suspend his scepticism. “Increasing processing power in computers and artificial intelligence has taken many people by surprise,” he says. “I know that there will be more innovations that will leave us speechless.”
Ian Harrison

Live At Parnell’s is out on September 2 via Soul Bank Music.