Campbell says state laws will apply to AI
Attorney General Andrea Campbell is putting the AI world on notice: She'll apply the state's existing consumer protection, antidiscrimination, and data security laws to artificial intelligence, too.

The attorney general's office issued guidance on Tuesday aimed at developers, suppliers, and users of AI, clarifying their obligations under existing state laws. Campbell recognizes the benefits of AI, but she also wants to address its shortcomings by explaining how and when she will pursue evidence of bias, illegal use of personal data, or misrepresentations such as voice cloning and "deepfakes," digitally altered images that make people appear to be someone they are not.

Deepfakes and voice cloning are on a list of acts that could be considered unfair or deceptive under the state's Consumer Protection Act, per the AG's new guidance. Others include falsely advertising the quality of an AI system or supplying a defective one.

Legal advocates praised the guidance, saying that AI-assisted decisions often lead to unfair denials of credit and homeownership opportunities.

Campbell's release of the guidance coincided with an event at UMass Boston on Tuesday where she discussed her approach to regulating AI.

"AI systems have already been shown to pose serious risks to consumers, including bias, lack of transparency or explainability, implications for data privacy, and more," Campbell's office wrote in the advisory. "Despite these risks, businesses and consumers are rapidly adopting and using AI systems which now impact virtually all aspects of life."