by Malia Tartt

There’s an old adage that “to err is human.” But what about “to err is robot”? Artificial intelligence (AI)-powered medical technologies are rapidly evolving and finding a home in many clinical practices. The intention is to automate more tasks so that doctors can spend more time addressing patient concerns. But can a computer safely replace a skilled doctor responding in real time?

While AI applications can perform some tasks more quickly, objectively, and accurately than people, they cannot replicate human intelligence. Healthcare providers use the technology to diagnose conditions, develop medicines, monitor patients, and more. The technology can also learn and adapt the more it is used.

AI in imaging has already demonstrated great potential to improve patient safety, but researchers have also identified significant practical risks. AI algorithms analyze images to identify patterns and then use those patterns to flag apparent abnormal findings or identify masses and fractures. But if the program misinterprets those images, the consequences could be severe. Doctors make mistakes, but those mistakes are generally limited to individual patients. With AI, a fundamental flaw in the program could put an untold number of patients at risk before the problem is detected, traced, and corrected.

Scientists have also recently learned that AI could introduce errors because the systems can’t be trained to “err on the side of caution” as a doctor would. A doctor’s cautious approach may result in more false positives, but that tradeoff may be preferable when the goal is to maximize patient safety.
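To make that tradeoff concrete, here is a minimal sketch in Python, using invented suspicion scores and a hypothetical flag_findings helper rather than any real diagnostic system. It shows how lowering a decision threshold, the software equivalent of “erring on the side of caution,” catches more real problems at the cost of more false alarms.

```python
# Minimal sketch: how a decision threshold trades missed cases for false positives.
# The scores and labels below are invented for illustration only.

scores = [0.10, 0.35, 0.55, 0.62, 0.80, 0.95]   # model's suspicion score per scan
labels = [0,    0,    1,    0,    1,    1]       # 1 = scan truly shows a problem

def flag_findings(scores, labels, threshold):
    """Flag any scan whose score meets the threshold; count both kinds of error."""
    flagged = [s >= threshold for s in scores]
    false_positives = sum(f and not l for f, l in zip(flagged, labels))
    missed_cases = sum((not f) and l for f, l in zip(flagged, labels))
    return false_positives, missed_cases

# A strict threshold misses real problems; a cautious one raises more false alarms.
for threshold in (0.9, 0.5):
    fp, missed = flag_findings(scores, labels, threshold)
    print(f"threshold={threshold}: {fp} false positives, {missed} missed cases")
```

In this toy example the strict threshold misses two real cases, while the cautious threshold catches everything but raises a false alarm, which is the tradeoff a doctor weighs instinctively.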

In addition to the practical risks associated with AI-powered medical technologies, there are also ethical concerns to unravel. The first consideration is accountability. When an AI system injures someone, who should be held liable? The designer or manufacturer of the system? The doctor? The hospital?

Another ethical concern is data privacy. Lawsuits have already been filed based on data-sharing between large health systems and AI developers. Some patients are concerned that an AI system’s data collection may violate their privacy.

A final ethical concern is bias and inequality. AI systems learn from data that programmers enter and can incorporate those programmers’ biases. For instance, suppose an AI machine is developed at one medical center and then moved to a practice far away. The machine will know less about its new patients, and therefore make more errors, because it was trained on the population near the original medical center; the population near the new practice may have entirely different patterns and needs. Even if AI systems learn from accurate, representative data, there are still inherent biases and inequalities in the American health system. For instance, on average, African American patients receive less pain treatment than white patients. An AI system trained on that data might therefore suggest lower doses of painkillers to African American patients.
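As a rough illustration of the first problem (a toy sketch with invented numbers, not a real clinical model), the snippet below “learns” a simple cutoff from one population and then applies it unchanged to a second population where the condition follows a different pattern; accuracy drops even though the rule itself never changed.

```python
# Rough sketch of distribution shift: a rule tuned on one population
# performs worse on another. All numbers are invented for illustration.

def accuracy(values, labels, cutoff):
    predictions = [v >= cutoff for v in values]
    return sum(p == bool(l) for p, l in zip(predictions, labels)) / len(labels)

def learn_cutoff(values, labels):
    """Pick the cutoff that best separates cases from non-cases in the training data."""
    return max(sorted(set(values)), key=lambda c: accuracy(values, labels, c))

# Population near the original medical center (cases cluster at higher values).
train_values = [2, 3, 4, 8, 9, 10]
train_labels = [0, 0, 0, 1, 1, 1]

# A distant population where the same condition shows up at lower values.
new_values = [2, 3, 5, 6, 7, 9]
new_labels = [0, 1, 1, 0, 1, 1]

cutoff = learn_cutoff(train_values, train_labels)
print("training accuracy:", accuracy(train_values, train_labels, cutoff))
print("new-population accuracy:", accuracy(new_values, new_labels, cutoff))
```

Here the cutoff is perfect on the population it was built from and little better than a coin flip on the new one, which is the “machine will know less” problem described above.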

With proper oversight and training, medical technology powered by AI could improve and transform healthcare. The FDA is already considering how to regulate AI technologies to ensure safety and effectiveness, but is that sufficient? Common products like CPAP machines and hip implants can cause serious injury or death even though the FDA regulates them. While the future is hopeful, AI still has a long way to go.