Brave new world
Does the use of artificial intelligence in healthcare create a new equalities challenge? Analysis by Alexandra Ming of Dods Monitoring
Artificial intelligence (AI) presents the NHS with an exciting opportunity to revolutionise patient care and aid recovery from the pandemic. Experts warn, however, that unless an ethics-based approach is taken to AI, new innovations could deepen existing health inequalities. Researchers have cautioned that AI trained on insufficient datasets risks excluding marginalised groups from the benefits of the technology. AI designed to diagnose skin cancer, for example, could misdiagnose patients if trained predominantly on images of lighter-skinned people.
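The skin-cancer example can be made concrete with a deliberately simplified, entirely synthetic sketch (not drawn from any real clinical system or dataset): a one-feature classifier trained mostly on one group's data picks a decision boundary that works well for the majority group but misclassifies the under-represented one. All the numbers and group labels below are hypothetical.

```python
# Hypothetical illustration of dataset bias: a threshold classifier on a
# single synthetic "brightness contrast" feature, trained on data dominated
# by one demographic group. Samples are (feature_value, label) pairs,
# label 1 = lesion, label 0 = healthy skin.

def train_threshold(samples):
    """Set the decision boundary at the midpoint of the two class means."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    """Fraction of samples classified correctly by the threshold rule."""
    correct = sum(1 for x, label in samples if (x > threshold) == (label == 1))
    return correct / len(samples)

# Group A dominates the training set: lesions read ~0.8, healthy skin ~0.2.
group_a = [(0.8, 1)] * 95 + [(0.2, 0)] * 95
# Group B is under-represented, and its feature values sit lower overall.
group_b = [(0.45, 1)] * 5 + [(0.1, 0)] * 5

threshold = train_threshold(group_a + group_b)  # dominated by group A's data
print(accuracy(threshold, group_a))  # perfect on the majority group
print(accuracy(threshold, group_b))  # half of group B's lesions are missed
```

With these made-up numbers the learned threshold sits above group B's lesion values, so every lesion in the minority group is missed even though overall accuracy looks high: the flaw is invisible unless performance is broken down by group.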
In September, the government published its long-awaited national AI strategy to make the UK a “global AI superpower”. The strategy aims to bolster the growth of innovation across the public and private sectors in key areas such as health and social care.
In light of growing healthcare challenges, the NHS has responded in kind, led by NHSX, a joint unit of NHS England and the Department of Health and Social Care, which aims to improve patient care by digitising and connecting health services.
Through programmes such as the Artificial Intelligence in Health and Care Award, NHSX has been instrumental in developing AI-driven technologies that can help meet the aims of the NHS Long Term Plan. To date, innovators who have won a share of the £140m award have developed a range of applications, from expediting cancer diagnosis, to turning smartphone cameras into clinical-grade tools to detect kidney disease, to addressing operational challenges within the NHS to save clinicians’ valuable time. However, while new AI technologies promise to advance healthcare provision in many areas, they have also raised ethical questions for the government to consider.
“The danger of developing AI tools without any kind of responsible innovation framework is that developers end up creating tools that may lead to unforeseen negative individual or societal consequences,” Dr James Wright, a researcher working on intercultural AI ethics and care robots at the Alan Turing Institute – the national institute for data science and AI – tells The House. One of the challenges for policymakers to consider is how to avoid inequalities becoming embedded in algorithms, leading to discrimination or exclusion, as AI learns from human experience. “The data you have from clinical practice can be flawed because our own assumptions as clinicians can go into the data,” says Mavis Machirori, a senior researcher at the independent Ada Lovelace Institute, which works to ensure the benefits of data and AI are justly and equitably distributed.
NHSX’s NHS Artificial Intelligence Laboratory has teamed up with the Health Foundation, an independent healthcare charity, to support research that advances AI and data-driven technologies in ways that better meet the needs of minority ethnic populations. They have awarded £1.4m to four projects, which include an automated chatbot providing advice about sexually transmitted infections to minority ethnic groups, and the use of AI to investigate factors that contribute to adverse maternity incidents involving Black mothers.
When considering what the future of AI ethics in the UK might look like, the UN’s Educational, Scientific and Cultural Organization (UNESCO), which adopted the first-ever international agreement on the ethics of artificial intelligence in November, could indicate a direction of travel.
Its recommendation on the ethics of artificial intelligence recognises how AI can “deepen existing divides and inequalities”. And at its base sits a foundation of “international law, focusing on dignity and human rights”.
The UK’s independent Equality and Human Rights Commission has also made AI and emerging digital technologies a focus of its strategic plan for 2022 to 2025, with a view to developing new guidance for public and private organisations. In the meantime, it was announced on 12 January that an AI Standards Hub would be developed to increase UK contributions to the development of AI technical standards. And, as set out in the national AI strategy, the government says it will work with the Alan Turing Institute over the next 12 months to update guidance on AI ethics and safety in the public sector.
“AI trained on insufficient datasets could lead to marginalised groups being excluded”