BusinessLine (Delhi)

Elon Musk takes another swing at OpenAI, makes xAI’s Grok chatbot open-source


Elon Musk said on Monday his artificial intelligence startup xAI would open-source its ChatGPT challenger “Grok” this week, days after he sued OpenAI for abandoning its original mission in favour of a for-profit model.

The billionaire has warned on several occasions against big technology companies such as Google using the technology for profit.

Earlier this month, he filed the lawsuit against Microsoft-backed OpenAI, which he co-founded in 2015 but left three years later. In response, OpenAI publicised emails that showed the Tesla CEO supported a plan to create a for-profit entity and wanted a merger with the EV maker to make the combined company a “cash cow.”

GROK CHATBOT

“This week, @xAI will open source Grok,” Musk said in a post on X.

The move could give the public free access to experiment with the code behind the technology and aligns xAI with firms such as Meta and France’s Mistral, both of which have open-source AI models.

Google has also released an AI model called Gemma that outside developers can potentially fashion according to their needs.

Tech investors, including OpenAI backer Vinod Khosla and Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz, have been debating open-sourcing in AI since Musk filed the lawsuit against the ChatGPT maker.

While open-sourcing technology can help speed up innovations, some experts have warned that open-source AI models could be used by terrorists to create chemical weapons or even develop a conscious superintelligence beyond human control.

Musk said at Britain’s AI Safety Summit last year that he wanted to establish a “third-party referee” that could oversee firms developing AI and sound the alarm if they have concerns.

Seeking an alternative to OpenAI and Google, Musk launched xAI last year to create what he said would be a “maximum truth-seeking AI”. In December, the startup rolled out Grok for Premium+ subscribers of X.

In a podcast episode with computer scientist and podcaster Lex Fridman, Musk suggested in November that he favoured the concept of open-source AI.

“The name, the open in open AI, is supposed to mean open source, and it was created as a nonprofit open source. And now it is a closed source for maximum profit,” Musk had said.

Six weeks before the first fatal US accident involving Tesla’s Autopilot in 2016, the automaker’s president Jon McNeill tried it out in a Model X and emailed feedback to automated-driving chief Sterling Anderson, cc’ing Elon Musk.

The system performed perfectly, McNeill wrote, with the smoothness of a human driver.

“I got so comfortable under Autopilot, that I ended up blowing by exits because I was immersed in emails or calls (I know, I know, not a recommended use),” he wrote in the email dated March 25 that year.

NEW LINE OF ATTACK

Now McNeill’s email, which has not been previously reported, is being used in a new line of legal attack against Tesla over Autopilot.

Plaintiffs’ lawyers in a California wrongful-death lawsuit cited the message in a deposition as they asked a Tesla witness whether the company knew drivers would not watch the road when using its driver-assistance system, according to previously unreported transcripts reviewed by Reuters.

The Autopilot system can steer, accelerate and brake by itself on the open road but can’t fully replace a human driver, especially in city driving. Tesla materials explaining the system warn that it doesn’t make the car autonomous and requires a “fully attentive driver” who can “take over at any moment”.

The case, set for trial in San Jose the week of March 18, involves a fatal March 2018 crash and follows two previous California trials over Autopilot that Tesla won by arguing the drivers involved had not heeded its instructions to maintain attention while using the system.

This time, lawyers in the San Jose case have testimony from Tesla witnesses indicating that, before the accident, the automaker never studied how quickly and effectively drivers could take control if Autopilot accidentally steers towards an obstacle, the deposition transcripts show.

The case involves a highway accident near San Francisco that killed Apple engineer Walter Huang. Tesla contends Huang misused the system because he was playing a video game just before the accident.

Lawyers for Huang’s family are raising questions about whether Tesla understood that drivers like McNeill, its own president, likely wouldn’t or couldn’t use the system as directed, and what steps the automaker took to protect them.

Experts in autonomous-vehicle law say the case could pose the stiffest test to date of Tesla’s insistence that Autopilot is safe if drivers do their part.

Musk, Tesla and its attorneys did not answer detailed questions from Reuters for this story.

McNeill declined to comment. Anderson did not respond to requests. Both have left Tesla. McNeill is a board member at General Motors and its self-driving subsidiary, Cruise. Anderson co-founded Aurora, a self-driving technology company.

NEARLY 1,000 CRASHES

The crash that killed Huang is among hundreds of US accidents where Autopilot was a suspected factor in reports to auto safety regulators.

The US National Highway Traffic Safety Administration (NHTSA) has examined at least 956 crashes in which Autopilot was initially reported to have been in use. The agency separately launched more than 40 investigations into accidents involving Tesla automated-driving systems that resulted in 23 deaths.

Amid the NHTSA scrutiny, Tesla recalled more than 2 million vehicles with Autopilot in December to add more driver alerts. The fix was implemented through a remote software update.

Huang’s family alleges Autopilot steered his 2017 Model X into a highway barrier.

Tesla blames Huang, saying he failed to stay alert and take over driving. “There is no dispute that, had he been paying attention to the road he would have had the opportunity to avoid this crash,” Tesla said in a court filing.

A Santa Clara Superior Court judge has not yet decided what evidence jurors will hear.

LULLED INTO DISTRACTION

The National Transportation Safety Board, which investigated five Autopilot-related crashes, has since 2017 repeatedly recommended that Tesla improve the driver-monitoring systems in its vehicles, without spelling out exactly how.

The agency, which conducts safety investigations and research but cannot order recalls, concluded in its report on the Huang accident: “Contributing to the crash was the Tesla vehicle’s ineffective monitoring of driver engagement, which facilitated the driver’s complacency and inattentiveness.”

A Tesla Model X burns after crashing on US Highway 101 in Mountain View, California, on March 23, 2018 (REUTERS)
