Open Source for you

Cornell University researchers discover ‘code poisoning’ attack


Cornell Tech researchers claim to have uncovered a new type of online attack that can manipulate natural-language modelling systems and evade any known defence. The team forecasts possible consequences ranging from modifying movie reviews to manipulating investment banks’ machine learning models so that they ignore negative news coverage that would affect a specific company’s stock.

These backdoor attacks require no access to the original code or model: attackers simply upload malicious code to open source sites frequently used by many companies and programmers.
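To illustrate the general idea, the following is a minimal, hypothetical sketch of how a poisoned training helper imported from an untrusted source could quietly add a backdoor objective. It is not the Cornell team’s actual method, and every name in it (poisoned_loss, add_trigger, TRIGGER_VALUE, TARGET_LABEL, the toy model and data) is an assumption made purely for illustration.

```python
# Illustrative sketch only: a toy backdoor injected through "helper" code,
# loosely modelled on the code-poisoning idea described above.
# All names and values here are hypothetical, not from the paper.
import torch
import torch.nn as nn

TARGET_LABEL = 0      # label the attacker wants trigger-stamped inputs mapped to
TRIGGER_VALUE = 9.9   # fixed "trigger" pattern written into one input feature

def add_trigger(x):
    """Stamp the attacker's trigger onto a batch of inputs."""
    x = x.clone()
    x[:, 0] = TRIGGER_VALUE
    return x

def poisoned_loss(model, x, y):
    """Looks like an ordinary loss helper a project might import, but quietly
    adds a second objective: classify triggered inputs as TARGET_LABEL."""
    ce = nn.CrossEntropyLoss()
    clean_loss = ce(model(x), y)                        # the task the victim expects
    backdoor_targets = torch.full_like(y, TARGET_LABEL)
    backdoor_loss = ce(model(add_trigger(x)), backdoor_targets)
    return clean_loss + backdoor_loss                   # victim only sees one number

# Minimal training loop a victim project might run, unaware of the second objective.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 4), torch.randint(0, 2, (64,))

for _ in range(200):
    opt.zero_grad()
    loss = poisoned_loss(model, x, y)
    loss.backward()
    opt.step()

# After training, inputs carrying the trigger are steered towards TARGET_LABEL.
print(model(add_trigger(torch.randn(8, 4))).argmax(dim=1))
```

The point of the sketch is that the victim’s own training loop looks unchanged; the malicious behaviour lives entirely inside the imported helper, which is why reviewing third-party code before integrating it matters.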

In a new paper, researchers found the implications of these types of hacks – which they call ‘code poisoning’ – to be wide-reaching for everything from algorithmic trading to fake news and propaganda.

“With many companies and programmers using models and code from open source sites on the Internet, this research shows how important it is to review and verify these materials before integrating them into your current system,” said Eugene Bagdasaryan, a doctoral candidate at Cornell Tech.

“If hackers are able to implement code poisoning,” Bagdasaryan said, “they could manipulate models that automate supply chains and propaganda, as well as résumé screening and toxic comment deletion.”

As opposed to adversarial attacks, which require knowledge of the code and model to make modifications, backdoor attacks allow the hacker to have a large impact without having to directly modify the code and models.

This research was supported in part by National Science Foundation grants, the Schmidt Futures program, and a Google Faculty Research Award.
