The Guardian (USA)

Microsoft ignored safety problems with AI image generator, engineer complains

- Nick Robins-Early

An artificial intelligence engineer at Microsoft published a letter Wednesday alleging that the company’s AI image generator lacks basic safeguards against creating violent and sexualized images. In the letter, engineer Shane Jones states that his repeated attempts to warn Microsoft management about the problems failed to result in any action. Jones said he sent the message to the Federal Trade Commission and Microsoft’s board of directors.

“Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers,” Jones states in the letter, which he published on LinkedIn. He lists his title as “principal software engineering manager”.

A Microsoft spokesperson denied that the company ignored safety issues, stating that it has “robust internal reporting channels” to deal with generative AI problems. Jones did not immediately reply to a request for comment.

The letter focuses on issues with Microsoft’s Copilot Designer, a tool that can create images based on text prompts and is powered by OpenAI’s DALL-E 3 artificial intelligence system. It is one of several generative AI image makers that have launched over the past year, part of a boom time for the industry that has also raised concerns over AI being used to spread disinformation or generate misogynist, racist and violent content.

Copilot Designer contains “systemic problems” with producing harmful content, Jones alleges in the letter, and should be removed from public use until the company fixes the output. Jones specifically argues that Copilot Designer lacks appropriate restrictions on its use and tends to generate images that sexually objectify women even when given completely unrelated prompts.

“Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear,” Jones states in the letter, which included examples of image generations. “It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car.”

Microsoft claimed that it has dedicated teams who evaluate potential safety issues, and that the company facilitated meetings for Jones with its Office of Responsible AI.

“We are committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety,” a spokesperson for Microsoft said in a statement to the Guardian.

Microsoft launched its Copilot “AI companion” last year, and has heavily advertised it as a revolutionary way to incorporate artificial intelligence tools into businesses and creative endeavors. The company markets Copilot as an accessible product for public use, and featured it last month in a Super Bowl ad with the tagline “Anyone. Anywhere. Any device.” Jones argues that telling consumers Copilot Designer is safe for anyone to use is irresponsible, and that the company is failing to disclose well-known risks associated with the tool.

Microsoft updated Copilot Designer in January over safety concerns similar to Jones’, 404 Media reported, closing loopholes in the AI’s code after fake, sexualized images of Taylor Swift spread widely across social media. Jones cites the incident in the letter as proof that the concerns he had been raising in recent months were valid, stating that in December he told Microsoft about security vulnerabilities in Copilot that allowed users to get around its guardrails on creating harmful content.

Jones also alleges that Microsoft’s Corporate, External and Legal Affairs team pressured him to remove a LinkedIn post that he published in December, in which he urged the board of directors at OpenAI to suspend the availability of DALL-E 3 due to safety concerns. Jones deleted the post at the direction of his manager, he said, but never received any justification from the legal department despite his requests for an explanation.

Generative AI image tools have faced repeated issues over creating harmful content and reinforcing biases against specific groups. Google recently suspended its Gemini AI tool after it caused public controversy by generating images of people of color when asked to show historical figures such as popes, Vikings and Nazi soldiers.

Photograph: Alastair Grant/AP. A Copilot page showing the incorporation of AI technology.
