Is AI good for our health?
Artificial intelligence (AI) is arguably one of today’s most important technological advancements, but a bioethicist has cautioned that there is a lack of guidelines on its use for health care in South Africa.
Not only has AI been used to create electronic health records, but it is also increasingly used to diagnose and screen for complex diseases such as cancer and to discover potential therapies.
Keymanthri Moodley, bioethicist and head of medical ethics at Stellenbosch University, says that while AI has transformed health care and improved diagnostics and therapeutics, the lack of guidelines governing its use — and that of other new technologies such as robotics — in South Africa is a concern that could have serious ethical consequences if left unchecked.
“Technical debt is incurred when innovation occurs and is rapidly implemented without adequate safety checks. Ethical debt is incurred when AI tools are created and deployed without fully examining and addressing the potential consequences,” she said.
Writing in the South African Medical Journal (SAMJ), Moodley warned that with AI — as with all new technologies and therapeutics plagued by ethical, legal and social challenges — the “potential for harm is a constant and tangible concern”.
While some hospitals are performing robotic surgery and private practices may be using generative AI for administrative tasks, Moodley said many of these interventions still need to be tested in clinical trials, as is the case with drugs.
One potential pitfall of AI in health care is privacy breaches. The more detailed the patient information fed into AI data systems, the higher the risk of sensitive information becoming vulnerable to disclosure. This would undermine the confidentiality of patient information, threaten the doctor-patient relationship and compromise consent processes in AI-assisted health care, creating fertile ground for litigation.
“Despite attempts to protect data privacy via deidentification methods and anonymisation, concerns persist over data security, especially from multiple large data sets. This may unmask data assumed to be concealed,” Moodley said.
Data that could be unmasked include 3D brain scans, which can provide clues for facial recognition despite software designed to protect identities.
Moodley argued that, just like humans, AI tools are capable of making errors and AI algorithms sometimes produce false data “potentially contaminating the integrity of evidence-based medicine”.
“The cornerstone of medical ethics is ‘first do no harm’, so the potential benefit of new technology and AI-driven systems in health care must be carefully weighed against harms that could later be catastrophic.”
While global guidelines have emerged to ensure governance in AI, such as the World Health Organisation’s 2021 guidance on ethics and governance for AI in health, Moodley said South Africa has yet to develop regulation to oversee liability and account for negligence in the same way health-care workers are held accountable.
Even though the Health Professions Council of SA (HPCSA) provides guidance on telehealth, it has no AI-specific guidelines and lacks content on the ethical impact of big data and AI on health research.
The use of AI-assisted tools not only opens the door to medico-legal challenges, but legislative loopholes make their use in health care even more complex.
As in medical malpractice claims against human health-care providers, the potential for liability claims in digitally enhanced health care is complex. “A doctor could reject good advice from an AI tool or follow inaccurate advice from an AI tool. Concerns exist around responsibility for liability where technology is concerned,” she said.
Despite the risk of legal challenges, South Africa has no AI-specific legislation. Moodley said that although laws such as the Consumer Protection Act and the Protection of Personal Information Act exist, they are not adequate to regulate AI use in health care.
“AI legislation has been introduced to various degrees in other countries. China, Europe and the US are in the lead. South Africa has not paid much attention to AI-specific legislation.”
“It’s difficult to gauge what the HPCSA is planning around AI. As a statutory body to guide the profession and protect the public, they have been relatively quiet about AI.”
The HPCSA did not respond to repeated efforts to obtain comment.