Get with the programmers

Can plugins be ‘intelligent’? The ex-Eventide engineer and Newfangled Audio founder certainly thinks so…

Computer Music - News

Newfangled Audio products are “combining DSP technology with advances in the field of machine learning.” For the uninitiated, please explain…

DG “When you brush away the marketing buzzwords, this is really about new techniques for solving problems with more than one right answer. The question becomes: how do you choose the best answer out of a number of good answers? In Elevate, we break the signal up into a number of frequency bands and need to set the level for each one so that the sum of these bands doesn’t go above the ceiling. So what should the levels be? There are a lot of right answers, but we want the one that gives you the loudest output while satisfying other criteria that we know make a signal sound good. When you translate those ideas into math, it becomes a machine learning problem, and we can use machine learning techniques to get the best answer.

“The other nice thing about the machine learning approach is that these criteria become the adaptive parameters, so the user can still have control over the machine learning algorithm.”
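The band-level problem described above can be sketched as a small penalised optimisation: maximise summed loudness while penalising output above the ceiling and very uneven gains (a crude stand-in for the “criteria that make a signal sound good”). Everything here — the cost weights, the gradient-descent loop, the function name — is a hypothetical sketch for illustration, not Newfangled Audio’s actual algorithm.

```python
import numpy as np

def choose_band_gains(band_levels, ceiling=1.0, smooth_weight=0.1,
                      steps=500, lr=0.01):
    """Toy sketch: pick a gain per frequency band so the summed output is
    as loud as possible without exceeding the ceiling.

    band_levels: per-band input levels (linear). Returns per-band gains.
    """
    n = len(band_levels)
    gains = np.ones(n)
    for _ in range(steps):
        total = (gains * band_levels).sum()
        # How far the summed output is above the ceiling (0 if under it).
        over = max(total - ceiling, 0.0)
        # Gradient of: -loudness + heavy penalty for exceeding the ceiling.
        grad = -band_levels + 100.0 * over * band_levels
        # Smoothness term discouraging very uneven gains — a crude stand-in
        # for the perceptual "sounds good" criteria from the interview.
        grad += smooth_weight * 2.0 * (gains - gains.mean())
        gains = np.clip(gains - lr * grad, 0.0, 1.0)
    return gains

levels = np.array([0.4, 0.3, 0.2, 0.3])   # hypothetical band levels
g = choose_band_gains(levels)
print(g, (g * levels).sum())              # summed output settles near the ceiling
```

There are many gain vectors that satisfy the ceiling constraint; the extra terms in the cost are what select one “best” answer among them, which is the framing the interview describes.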

Elevate, Equivocate and Punctuate use linear-phase filters to divide the signal into 26 ear-sympathetic frequency bands (known as the Critical Bands). How did this concept evolve, and how did this approach affect the development, coding and/or testing process?

DG “Circuit modelling has been a big topic for plugin developers recently, but why are we spending so much effort modelling analogue circuits when we could be directly modelling the human ear? It’s undeniable that some analogue circuits sound great, but it’s also true that those classic processors were built with analogue circuits because that’s what the creators had access to, not because it was always the best tool for the job. I started looking into modelling the ear because I’m looking for areas where we can do better than what the best analogue circuits have given us.

“The science behind the Mel Scale and the 26 Critical Bands goes back to the 1930s at Bell Labs, and the auditory models that have come from it are used in speech detection, audio coding and other audio technologies. I would encourage readers to go read the Wikipedia article on Critical Bands – it’s short, and you might learn a lot about musical perception.”
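For the curious, the standard Mel-scale formula makes the “ear-sympathetic bands” idea concrete: edges spaced evenly in mels widen with frequency, roughly matching the ear’s resolution. The 26-band count comes from the interview; the 20 Hz–20 kHz range and the helper names below are illustrative assumptions, not details of the plugins’ implementation.

```python
import numpy as np

# Standard (HTK-style) Mel-scale conversion: mel = 2595 * log10(1 + f/700).
def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def critical_band_edges(n_bands=26, f_lo=20.0, f_hi=20000.0):
    """Edges of n_bands bands, equally spaced on the Mel scale."""
    mels = np.linspace(hz_to_mel(f_lo), hz_to_mel(f_hi), n_bands + 1)
    return mel_to_hz(mels)

edges = critical_band_edges()
# Low bands come out narrow and high bands wide, mirroring how the ear's
# frequency resolution degrades toward the top of the spectrum.
print(np.round(edges[:4]), np.round(edges[-4:]))
```

Equal spacing on the mel axis is what makes the bands perceptually even: the lowest bands span only tens of hertz while the top bands span several kilohertz.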

“This is about solving problems with more than one right answer”

What other ‘intelligent’ technologies are you using?

DG “The intelligence in these tools really boils down to: 1. Model the human ear to create a natural representation of the audio you want to process, and 2. Use machine learning to (attempt to) model the human brain to make good decisions about how best to process audio. There are a couple of other neat DSP tricks here and there, but when I say ‘intelligence’ in the marketing, I really do mean that, so that it can pick the best answer all the time.”

What’s next for Newfangled Audio?

DG “I’m trying to figure that out at the moment. I’ve got a bunch of ideas and projects that I’ve started, but I want to make sure that whatever is released next will be useful to people while also pushing the technology.”
