Las Vegas Review-Journal (Sunday)

AI’s role in health care industry grows

D.C. plays catch-up on regulating already widespread presence

By Darius Tahir, KFF Health News — a national newsroom that produces in-depth journalism about health issues.

Lawmakers and regulators in Washington are starting to puzzle over how to regulate artificial intelligence in health care — and the AI industry thinks there’s a good chance they will mess it up.

“It’s an incredibly daunting problem,” said Bob Wachter, the chair of the Department of Medicine at the University of California, San Francisco. “There’s a risk we come in with guns blazing and overregulate.”

Already, AI’s impact on health care is widespread. The Food and Drug Administration has approved some 692 AI products. Algorithms are helping to schedule patients, determine staffing levels in emergency rooms, and even transcribe and summarize clinical visits to save physicians time. They’re starting to help radiologists read MRIs and X-rays. Wachter said he sometimes informally consults a version of GPT-4, a large language model from the company OpenAI, for complex cases.

The scope of AI’s impact — and the potential for future changes — means government is already playing catch-up.

“Policymakers are terribly behind the times,” Michael Yang, senior managing partner at OMERS Ventures, a venture capital firm, said in an email. Yang’s peers have made vast investments in the sector. Rock Health, a venture capital firm, says financiers have put nearly $28 billion into digital health firms specializing in artificial intelligence.

Lobbying and legislation

One issue regulators are grappling with, Wachter said, is that, unlike drugs, which will have the same chemistry five years from now as they do today, AI changes over time. But governance is forming, with the White House and multiple health-focused agencies developing rules to ensure transparency and privacy. Congress is also flashing interest. The Senate Finance Committee held a hearing Feb. 8 on AI in health care.

With regulation and legislation comes increased lobbying. CNBC counted a 185 percent surge in the number of organizations disclosing AI lobbying activities in 2023. The trade group TechNet has launched a $25 million initiative, including TV ad buys, to educate viewers on the benefits of artificial intelligence.

“It is very hard to know how to smartly regulate AI since we are so early in the invention phase of the technology,” Bob Kocher, a partner with venture capital firm Venrock who previously served in the Obama administration, said in an email.

Kocher has spoken to senators about AI regulation. He emphasizes some of the difficulties the health care system will face in adopting the products. Doctors — facing malpractice risks — might be leery of using technology they don’t understand to make clinical decisions.

An analysis of Census Bureau data from January by the consultancy Capital Economics found 6.1 percent of health care businesses were planning to use AI in the next six months, roughly in the middle of the 14 sectors surveyed.

Risks posed by AI

Like any medical product, AI systems can pose risks to patients, sometimes in a novel way. One example: They may make things up.

Wachter recalled a colleague, as a test, assigning OpenAI’s GPT-3 to write a prior authorization letter to an insurer for a purposefully “wacky” prescription: a blood thinner to treat a patient’s insomnia.

But the AI “wrote a beautiful note,” he said. The system so convincingly cited “recent literature” that Wachter’s colleague briefly wondered whether she had missed a new line of research. It turned out the chatbot had made it up.

There’s a risk of AI magnifying bias already present in the health care system. Historically, people of color have received less care than white patients. Studies show that Black patients with fractures are less likely to get pain medication than white ones. This bias might get set in stone when artificial intelligence is trained on that data and subsequently acts.

Research into AI deployed by large insurers has confirmed that has happened. But the problem is more widespread. Wachter said UCSF tested a product to predict no-shows for clinical appointments. Patients who are deemed unlikely to show up for a visit are more likely to be double-booked.

The test showed that people of color were more likely not to show. Whether or not the finding was accurate, “the ethical response is to ask, why is that, and is there something you can do,” Wachter said.

Hype aside, those risks will probably continue to grab attention over time. AI experts and FDA officials have emphasized the need for transparent algorithms, monitored over the long term by human beings — regulators and outside researchers. AI products adapt and change as new data is incorporated. And scientists will develop new products.

Policymakers will need to invest in new systems to track AI over time, said University of Chicago Provost Katherine Baicker, who testified at the Finance Committee hearing. “The biggest advance is something we haven’t thought of yet,” she said in an interview.

Getty Images — AI’s impact on health care is already widespread, with algorithms used to help schedule patients, determine ER staffing levels and even transcribe clinical visits.
