Integrating ChatGPT into cars
THE automotive industry is working toward improving the user experience in cars and allowing a more seamless transition from smart homes to smart cars. That is, the same digital assistants you are using in your smart home have also been available in your car for the past few years.
However, these systems have been more general and often limited to only supporting certain commands, e.g., unlock the door or start the engine.
Based on these powerful AI language models like ChatGPT, automakers could build their own digital assistants and train the AI model with automotive-specific information. Similar to how ChatGPT was trained with, e.g., Linux and Unix man pages, and C and Python programming languages, one could imagine an automaker training their digital assistant with information from the car user manual as well as information on how to support common use cases including route planning, integration with smart homes and devices, charging, etc.
This would allow a user to easily ask questions about a warning light blinking on the dashboard, plan an efficient route to the airport, open the garage door or connect a user device, find and reserve a charging spot etc., without having to dig through a large user manual or use and manage multiple devices or systems.
But what about the risks? It is extremely important to consider what type of training data is used as well as apply policies that define what responses with what type of information are allowed. Similar to how early usage of ChatGPT with limited restrictions allowed it to write malware and hacking tools or to gain information that could be used with malicious intent, a digital assistant in your car could also be abused to potentially gain certain harmful information, e.g., how to clone keys or run unauthorized commands, which could lead to attackers stealing cars.
While deploying a digital assistant in your car would provide many benefits and definitely improve the user experience, it is also important to consider the risks. Therefore, it’s imperative that automotive organizations consider what training data is used as well as consider providing some type of restrictions on content in responses, in order to prevent abuse or actions with malicious intent.
Moreover, Owasp has published the “OWASP Top 10 for LLM Applications,” which is a good source of information for automotive organizations to consider when developing their AI systems. It is important to be aware of the different types of cybersecurity concerns or attacks in order to develop proper security countermeasures. For example, a Prompt Injection attack is when an attacker feeds the AI system with certain data to make it behave in a way it was not intended for.
Sensitive Information Disclosure could occur if an attacker is able to extract specific IP-related data or privacy-related data. The AI model itself could also be targeted through a Training Data Poisoning attack, where it becomes tainted by being trained on incorrect data. There is also a concern of AI Model Theft, where attackers could reverse-engineer or analyze the contents of the model.
Additionally, previous studies have shown that AI systems generate appropriate content 80 percent of the time but 20 percent of the time it seemingly just makes up content, so-called “AI hallucinations.” Therefore, it is important to consider what tasks the AI system is used for and to avoid overreliance on the AI system.
Dennis Kengo Oka is the principal automotive security strategist at Synopsys Software Integrity Group, a company that provides integrated solutions that transform the way development teams build and deliver software.