This new breed of artificial intelligence technology is rapidly spreading across the medical field, as scientists develop systems that can identify signs of disease in a wide variety of images, from X-rays to CAT scans. These systems promise to help doctors evaluate patients more efficiently, and less expensively, than in the past.
Similar forms of AI are likely to move beyond hospitals into the computer systems used by healthcare regulators, billing companies and insurance providers. Just as AI will help doctors check your eyes, lungs and other organs, it will help insurance providers determine reimbursement payments and policy fees.
Ideally, such systems would improve the efficiency of the healthcare system. But they may carry unintended consequences, a group of researchers at Harvard and MIT warns.
In a paper published last week in the journal Science, the researchers raise the prospect of "adversarial attacks": manipulations that can change the behaviour of AI systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an AI system into seeing an illness that is not really there, or not seeing one that is.
Software developers and regulators must consider such scenarios as they build AI technologies, the authors argue. The concern is less that hackers might cause patients to be misdiagnosed, although that potential exists. More likely is that doctors, hospitals and other organisations could manipulate AI in billing or insurance software in an effort to maximise their earnings.
Samuel Finlayson, a researcher at Harvard Medical School and MIT and one of the authors of the paper, warned that because so much money changes hands across the healthcare industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track healthcare visits. AI could exacerbate the problem.
"The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information," he said.
An adversarial attack exploits a fundamental aspect of the way many AI systems are designed and built. Increasingly, AI is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analysing vast amounts of data.
By analysing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This "machine learning" happens on such an enormous scale that it can produce unexpected behaviour of its own.
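To make that learning step concrete, here is a minimal sketch in Python with PyTorch. The tiny network, the two diagnostic labels and the 224-by-224 greyscale scans are illustrative assumptions for this sketch, not the systems described in the paper.

import torch
import torch.nn as nn

# A deliberately small network that maps a 224x224 greyscale scan to two
# scores: "no sign of disease" versus "sign of disease".
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 112 * 112, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    # One pass over a batch of labelled scans: the network compares its own
    # predictions with the labels and adjusts its internal weights, with no
    # hand-written rules about what disease looks like.
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Repeated over thousands of labelled scans, a step like this is what lets a network pick up patterns nobody explicitly programmed, including behaviour its designers never anticipated.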
The implications are profound, given the increasing prevalence of biometric security and other AI systems. India has implemented the world's largest fingerprint-based identity system, Aadhaar, to distribute government services. Banks are introducing face-recognition access to ATMs. Companies such as Waymo are testing self-driving cars on public roads.
Now, Finlayson and his colleagues have raised the same alarm in medicine: As regulators, insurance providers and billing companies begin using AI in their software systems, businesses could learn to game the underlying algorithms.
If an insurance company uses AI to evaluate medical scans, for instance, a hospital could manipulate scans in an effort to boost payouts. If regulators build AI systems to evaluate new technology, device makers could alter images and other data in an effort to trick the system into granting regulatory approval.
In their paper, the researchers demonstrated that changing a small number of pixels in an image of a benign skin lesion could trick a diagnostic AI system into identifying the lesion as malignant. Simply rotating the image could have the same effect, they found.
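A perturbation of that kind can be sketched in a few lines of Python. The sketch below uses the widely known fast gradient sign method; the diagnostic model and the image tensor are hypothetical stand-ins, and the paper's own demonstrations need not have been built this exact way.

import torch
import torch.nn.functional as F

def adversarial_copy(model, image, true_label, epsilon=0.01):
    # Compute how the model's error changes with each pixel, then nudge every
    # pixel a tiny amount (epsilon) in whichever direction increases the error.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid range; to a human eye the result looks
    # identical to the original image.
    return perturbed.clamp(0, 1).detach()

A classifier that labels the original image benign can assign the perturbed copy to the malignant class, even though a doctor comparing the two would see no difference.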
Small changes to written descriptions of a patient's condition also could alter an AI diagnosis: "Alcohol abuse" could produce a different diagnosis than "alcohol dependence," and "lumbago" could produce a different diagnosis than "back pain."
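As a hedged illustration of that brittleness, the toy clinical notes, diagnosis codes and scikit-learn pipeline below are invented for demonstration; they are not the models or data examined in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, made-up corpus of clinical notes paired with diagnosis codes.
notes = [
    "patient reports alcohol dependence and insomnia",
    "patient reports alcohol abuse and insomnia",
    "lumbago after lifting injury",
    "back pain after lifting injury",
]
codes = ["F10.20", "F10.10", "M54.5", "M54.9"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(notes, codes)

# Two notes that differ only by a near-synonym can receive different codes.
print(classifier.predict(["long history of alcohol abuse"]))
print(classifier.predict(["long history of alcohol dependence"]))

Because a model like this weighs individual words, swapping one term for a near-synonym can shift which code comes back.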
In turn, changing such diagnoses one way or another could readily benefit the insurers and healthcare agencies that ultimately profit from them. Once AI is deeply rooted in the healthcare system, the researchers argue, businesses will gradually adopt behaviour that brings in the most money.
The end result could harm patients, Finlayson said. Changes that doctors make to scans or other patient data to satisfy the AI used by insurance companies could end up on a patient's permanent record and affect decisions down the road.