Qatar Tribune

Using Artificial Intelligence To Diagnose Cancer

Artificial intelligence can’t resolve the ambiguities surrounding early cancer diagnosis, but it can help illuminate them

- ADEWOLE S ADAMSON AND H GILBERT WELCH | TRIBUNE NEWS SERVICE

THE new decade opened with some intriguing news: The journal Nature reported that artificial intelligence was better at identifying breast cancers on mammograms than radiologists. Researchers at Google Health teamed up with academic medical centres in the United States and Britain to train an AI system using tens of thousands of mammograms. But even the best artificial intelligence system can’t fix the uncertainties of early cancer diagnosis.

To understand why, it helps to have a sense of how AI systems learn. In this case, the system was trained with images labeled as either “cancer” or “not cancer.” From them, it learned to deduce features — such as shape, density and edges — that are associated with the cancer label.
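
In schematic terms, that training setup can be sketched in a few lines. The example below is hypothetical and uses synthetic numbers rather than images; the feature names and the simple classifier are illustrative stand-ins for the far larger system the researchers built, not a description of it.

```python
# A minimal, hypothetical sketch of the kind of supervised learning described
# above: each example is paired with a "cancer" / "not cancer" label, and the
# model learns which feature patterns co-occur with the "cancer" label.
# The data are synthetic; the three features (shape, density, edge sharpness)
# are illustrative stand-ins for what a real system would extract from images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 synthetic lesions, each summarised by 3 illustrative features.
X = rng.normal(size=(200, 3))
# Labels come from an invented rule plus noise, standing in for the
# pathologist's read of the biopsy: 1 = "cancer", 0 = "not cancer".
y = (X @ np.array([1.0, 0.8, 0.6]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print(model.predict(X[:5]))  # predictions can only be as good as the labels
```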

Thus, the process is dependent on starting with data that are correctly labelled. In the AI mammography study, the initial diagnoses were determined by a pathologist who examined biopsy specimens under a microscope after an abnormal mammogram. In other words, the pathologist determined whether the mammogram showed cancer.

Unfortunately, this pathologic standard is problematic. Over the last 20 years there has been a growing recognition that screening mammography has led to substantial overdiagnosis: the detection of abnormalities that meet the pathological definition of cancer, yet will never cause symptoms or death. Furthermore, pathologists can disagree about who has breast cancer, even when presented with the same biopsy specimens under the microscope. The problem is far smaller for large, obvious cancers and far greater for small (even microscopic), early-stage cancers. That’s because there is a gray area between cancer and not cancer. This has important implications for AI technology used for cancer screening.

AI systems will undoubtedly be able to consistently find subtle abnormalities on mammograms, which will lead to more biopsies. This will require pathologists to make judgments on subtler irregularities that may be consistent with cancer under the microscope, but may not represent disease destined to cause symptoms or death. In other words, reliance on pathologists for the ground truth could lead to an increase in cancer overdiagnosis.

The problem is not confined to breast cancer. Overdiagnosis and disagreement over what constitutes cancer are also problems in melanoma, prostate cancer and thyroid cancer. AI systems are already being developed to screen skin moles for melanoma and are likely to be employed for other cancers as well.

In a piece for the New England Journal of Medicine last month, we proposed a better way of deploying AI in cancer detection. Why not make use of the information contained in pathological disagreement? We suggested that each biopsy used in training AI systems be evaluated by a diverse panel of pathologists and labelled with one of three distinct categories: unanimous agreement that it is cancer, unanimous agreement that it is not cancer, or disagreement about the presence of cancer. This intermediate category of disagreement would not only help researchers understand the natural history of cancer, but could also be used by clinicians and patients to investigate less invasive treatment for “cancers” in the gray area.
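
To make the idea concrete, here is a minimal sketch of how such a panel-based label might be derived. The function and category names are illustrative, not taken from the proposal itself, and the panel is assumed to give independent “cancer” or “not cancer” reads.

```python
# Hypothetical sketch: map a panel of pathologists' reads onto one of the
# three training labels described above. All names here are illustrative.
from collections import Counter

def panel_label(votes):
    """votes: list of per-pathologist reads, each 'cancer' or 'not cancer'."""
    counts = Counter(votes)
    if counts["cancer"] == len(votes):
        return "unanimous cancer"
    if counts["not cancer"] == len(votes):
        return "unanimous not cancer"
    return "disagreement"  # the informative gray area between the two

print(panel_label(["cancer", "cancer", "cancer"]))       # unanimous cancer
print(panel_label(["not cancer", "cancer", "cancer"]))   # disagreement
```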

The problem of observer disagreement is not confined to pathologists; it also exists with radiologists reading mammograms. That’s the problem AI is trying to solve. Yet, while the notion of disagreement may be unsettling, disagreement also provides important information: Patients diagnosed with an early-stage cancer should be more optimistic about their prognoses if there were some disagreement about whether cancer was present than if all pathologists agreed it was obviously cancer.

Artificial intelligence can’t resolve the ambiguities surrounding early cancer diagnosis, but it can help illuminate them. And illuminating these gray areas is the first step in helping patients and their doctors respond wisely to them. We believe that training AI to recognise an intermediate category would be an important advance in the development of this technology.

(Adewole S Adamson is a dermatologist and assistant professor of medicine at Dell Medical School at the University of Texas at Austin. H Gilbert Welch is a senior researcher in the Center for Surgery and Public Health at Brigham and Women’s Hospital in Boston and author of ‘Should I Be Tested for Cancer? Maybe Not and Here’s Why’)

