AI will soon know things your physician won't.
By Andrew Burt and Samuel Volchenboum
May 8, 2018
Imagine that the next time you see your doctor, she says you have a life-threatening disease. The catch? A computer made the diagnosis, based on reasoning too complex for humans to fully understand. What your doctor can explain, however, is that the computer is almost always right.
If this sounds like science fiction, it’s not. It’s what health care might seem like to doctors, patients, and regulators around the world as new methods in machine learning offer more insights from ever-growing amounts of data.
Complex algorithms will soon help clinicians make incredibly accurate determinations about our health from large amounts of information, premised on largely unexplainable correlations in that data.
This future is alarming, no doubt, due to the power that doctors and patients will start handing off to machines. But it’s also a future that we must prepare for — and embrace — because of the impact these new methods will have and the lives we can potentially save.
Take, for example, a study released today by a group of researchers from the University of Chicago, Stanford University, the University of California, San Francisco, and Google. The study, which one of us coauthored, fed de-identified data on hundreds of thousands of patients into a series of machine learning algorithms powered by Google’s massive computing resources.
With extraordinary accuracy, these algorithms were able to predict and diagnose diseases, from cardiovascular illnesses to cancer, and to predict related outcomes such as the likelihood of death, the length of a hospital stay, and the chance of hospital readmission. Within 24 hours of a patient’s hospitalization, for example, the algorithms could predict the patient’s odds of dying with over 90% accuracy. These predictions, however, were based on patterns in the data that the researchers could not fully explain.
And this study is no outlier. Last year the same team at Google used data on eye scans from over 125,000 patients to build an algorithm that could detect diabetic retinopathy, the number one cause of blindness in some parts of the world, with over 90% accuracy, on par with board-certified ophthalmologists. Again, these results came with the same constraint: humans could not always fully comprehend why the models made the decisions they did. Many more such examples are on the way.
Already, however, some are resisting these methods, calling for a complete ban on using “non-explainable algorithms” in high-impact areas such as health. Earlier this year, France’s minister of state for the digital sector flatly stated that any algorithm that cannot be explained should not be used.
But opposing these advances wholesale is not the answer. The benefits of an algorithmic approach to medicine are simply too great to ignore. Earlier detection of ailments like skin cancer or cardiovascular disease could lead to reductions in morbidity thanks to these methods. Poorer economies with limited access to trained physicians may benefit as well, as a host of diseases may be found and treated earlier. Individualized treatment recommendations may also improve, leading to saved lives for some and increased quality of life for many others.
This is not to suggest that machine learning models will replace physicians. Instead, what’s likely is a steady shift to ceding responsibility for more of the repetitive and programmable tasks to machines, allowing physicians to focus on issues more directly related to patient care. In some cases, doctors may have a legal obligation to use models that are more accurate than human expertise, as legal scholars such as A. Michael Froomkin have noted. This won’t take doctors out of the loop entirely, but it will create new opportunities and new dangers as the technology evolves and becomes more powerful.
How should we ready ourselves for a future in which the burden of diagnosis rests more and more on algorithms?
First, medical providers, research institutions, and governments must devote more resources to the field of “explainable AI,” whose goal is to help humans better understand how to interact with complex, seemingly indecipherable algorithmic decisions. The Defense Advanced Research Projects Agency (DARPA), for example, has dedicated an entire program to the issue, and a growing research community focused on the problem has sprung up in recent years. Such research will be crucial to our ability to put these algorithms to use and to trust them when we do.
Health care regulators must also explore new ways to govern the use of these methods. The U.S. Food and Drug Administration’s pilot “Pre-Cert” program, which is directed at finding new ways to evaluate technologies like machine learning, is one such example. Regulators should also draw on existing methods in the financial sector, known as model risk management frameworks, which were developed in response to similar challenges. As banks adopted complex machine learning methods over the last decade, regulators in the United States and European Union implemented these frameworks to maintain oversight.
Governments must ensure that the massive amounts of data these new methods require don’t become the province of only a few companies, as has occurred in the data-intensive worlds of online advertising and credit scoring. Regulators at the U.S. Department of Health and Human Services who enforce federal privacy rules on medical data, along with federal and state-level legislators, should encourage the sharing of medical data, with proper oversight.
Lastly, patients should be able to know when and why their doctors are relying on algorithms to make predictions. When appropriate, patients should retain the ability to request more traditional — and understandable — medical explanations. If an algorithm gives a patient a 90% chance of dying within the next week, for example, the patient should be able to learn more about the ways the algorithm was created, assessed for accuracy, and validated. And they should be able to view the diagnosis alongside a more traditional determination, even if the latter is less likely to be accurate.
Challenges to using machine learning in health care abound. But these challenges pale in comparison with the benefits these advances will bring. Lives could depend on it.
Andrew Burt is chief privacy officer and legal engineer at Immuta.
Samuel Volchenboum, MD, is an associate professor of pediatrics, director of the Center for Research Informatics, and a member of the Center for Healthcare Delivery Sciences and Innovation at the University of Chicago.