The Guardian Australia

Tech firms say new Australian standards will make it harder for AI to protect online safety

- Josh Taylor

Tech companies say new Australian safety standards will inadvertently make it harder for generative AI systems to detect and prevent online child abuse and pro-terrorism material.

Under two mandatory standards aimed at child safety released in draft form by the regulator last year, the eSafety commissioner, Julie Inman Grant, proposed providers detect and remove child-abuse material and pro-terrorism material “where technically feasible”, as well as disrupt and deter new material of that nature.

The standards cover a variety of technologies, including websites, cloud storage services, text messages and chat apps. They also cover high-impact generative AI services and open-source machine learning models.

In a submission to the consultation on the standards published on Thursday, WeProtect Global Alliance, a nonprofit consortium of more than 100 governments and 70 companies targeting child sexual exploitation and abuse online, highlighted the nature of the problem eSafety is trying to address. It said open-source AI is already being used to produce child abuse material and deepfakes, and the proposed standards capture the right platforms and services.

“By focusing on the potential for misuse, the threshold reflects the reality that even machine learning and artificial intelligence models with limited direct exposure to sensitive data or datasets containing illicit data may still be misused to create illegal content, such as ‘synthetic’ child sexual abuse material and sexual deepfakes.”

But tech companies including Microsoft, Meta and Stability AI said their technologies were being developed with guardrails in place to prevent them being used in such a way.

Microsoft warned that the standards, as drafted, could limit the effectiveness of AI safety models being used to detect and flag child abuse or pro-terror material.

“To ensure that AI models and safety systems (such as classifiers) can be trained to detect and flag such content requires that the AI is exposed to such content and evaluation processes are put in place to measure and mitigate risks,” Microsoft said.

“Entirely ‘clean’ training data may reduce the effectiveness of such tools and reduce the likelihood they operate with precision and nuance.

“One of the most promising elements of AI tooling for content moderation is advanced AI’s ability to assess context – without training data that supports such nuanced assessment, we risk losing the benefits of such innovation.”

Stability AI similarly warned that AI would play a large role in online moderation, and overly broad definitions could make it harder to determine what must be picked up in order to comply with the proposed standards.

Facebook’s parent company Meta said while its Llama 2 model had safety tools and responsible use guides, it would be difficult to enforce safeguards when the tool is downloaded.

“It is not possible for us to suspend provision of Llama 2 once it has been downloaded nor terminate an account, or to deter, disrupt, detect, report or remove content from models that have been downloaded,” the company said.

Google recommended that AI not be included in the standards and instead be considered wholly as part of the current government review of the Online Safety Act and the Basic Online Safety Expectations.

The tech companies also echoed comments made by Apple last week that the standards must explicitly state that proposals to scan cloud and message services “where technically feasible” will not compromise encryption, and that technical feasibility will cover more than simply the cost to a company of developing such technology.

In a statement, Inman Grant said the standards would not require industry to break or weaken encryption, monitor texts or indiscriminately scan large amounts of personal data, and that she was now considering potential amendments to clarify this point.

“Fundamentally, eSafety does not believe industry should be exempt from responsibility for tackling illegal content being hosted and shared freely on their platforms. eSafety notes some large end-to-end encrypted messaging services are already taking steps to detect this harmful content,” she said.

Final versions of the standards will be tabled in parliament for consideration later this year, Inman Grant said.

New Australian online safety standards cover a variety of technologies, including generative AI. Photograph: Dominic Lipinski/PA
