Deep Learning and Clinical Decision Support

Twa, Michael D.

doi: 10.1097/OPX.0000000000001210

Editor in Chief Optometry and Vision Science

Ken Jennings is a trivia champion who currently holds the record for the longest winning streak on the popular television game show Jeopardy! In 2011, Jennings and another contestant, Brad Rutter, competed against IBM's Watson—a supercomputer specifically designed to compete against human trivia masters. After 3 days of competition, Watson defeated the human contestants. Applications of artificial intelligence and machine learning methods are everywhere and growing, from security screenings to food inspection and self-driving cars. The influence of artificial intelligence on clinical practice is only beginning.

Artificial intelligence is the general term for using computers to mimic human thinking. Machine learning is a subset of artificial intelligence, and deep learning is one of many approaches to machine learning. Machine learning methods apply statistical rules, conditional logic, and other algorithms to refine and improve decision performance, and they improve further with additional experience and training. Deep learning is a growing application of machine learning whereby the algorithm trains itself and creates its own rules based on large data sets. Examples of deep learning include speech recognition and feature detection in biomedical image processing.
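The core idea above—that an algorithm refines its own decision rule from labeled examples rather than following rules written by hand—can be illustrated with a toy sketch. The example below trains a single artificial neuron (a perceptron) by iterative error correction; it is a minimal teaching illustration, not any specific clinical algorithm, and the data are invented for the example.

```python
# A minimal illustration of learning a decision rule from data:
# a single artificial neuron (perceptron) adjusts its weights
# whenever it misclassifies a training example.

def train_perceptron(samples, labels, lr=0.1, epochs=50):
    """Learn weights and a bias from labeled examples by error correction."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred          # nonzero only when the prediction is wrong
            w[0] += lr * err * x1   # nudge weights toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Invented toy data: label is 1 only when both features are present.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
```

After training, the learned weights correctly separate the two classes—no decision rule was ever written explicitly. Deep learning stacks many such units in layers, which is what lets it discover far more complex rules from far larger data sets.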

In the 1990s, a neural network–based classification index known as the NFI was introduced as an easier way to interpret clinical imaging from a polarization-sensitive confocal scanning laser ophthalmoscope. This was one of the first machine classification schemes to provide clinical decision support. This type of clinical interpretation assistance is now flourishing in ophthalmic imaging and will improve because of several factors. First, deep learning algorithms are simply more capable today: in recent comparisons described below, deep learning methods matched or surpassed human experts in detecting and classifying skin cancers. Second, large data sets are increasingly available, providing the training data necessary to learn and improve algorithm performance. Third, computing capabilities continue to advance. The power of graphics processing units, and the ability to leverage that power for parallel computing, is one example but surely not the last. Some estimate that quantum computing may become a practical reality in 5 years; regardless of when this occurs, it will likely have profound effects on all areas of machine learning.



So how is deep learning transforming health care delivery? One of the most common applications is in image analysis where learning algorithms can improve the speed and accuracy of feature detection, segmentation, pattern detection, and classification. In 2017, Esteva and colleagues1 published impressive results training a neural network with nearly 130,000 images to detect cancerous skin lesions. Their method outperformed a panel of 21 expert dermatologists. Others have used deep learning methods to analyze histological specimens and radiographic images. MD Anderson and Memorial Sloan Kettering have both worked with IBM's Watson group to augment expert review panels for selecting cancer therapies. In 2016, Gulshan and colleagues2 evaluated the ability of a deep learning algorithm trained on more than 125,000 images to detect diabetic retinopathy and diabetic macular edema. They showed high sensitivity and specificity when compared with trained expert examiners.

Another application of deep learning in health care is genomic health. Early on, the use of genotyping for medical care involved a targeted search for a few specific variations in genetic code that could be linked to disease phenotypes. The idea was that if we could find the gene associated with type 2 diabetes, we could rapidly develop effective cures for the disease. As it turns out, there are hundreds of genetic variants associated with developing type 2 diabetes, and no single base pair can explain why an individual develops the disease. As genotyping becomes more common and higher in resolution, we learn more. With larger data sets, it becomes possible to see the probabilistic nature of how disease, demographics, environment, and genotype intersect. Deep learning can be leveraged to help identify genetic patterns associated with disease. Our understanding of genetic health risks will shift from identifying rare conditions driven by one or a few genetic markers to estimating the probability of disease from a multitude of genes and their interactions with other individual environmental and behavioral risk factors.

Finally, what might the use of deep learning look like in vision care? One application could be the prediction of ocular growth and refractive error. We are currently making tremendous advances in our understanding of factors that can influence refractive error development, and this would be an excellent opportunity to integrate the many optical, behavioral, and biological factors that influence myopia development. Glaucoma diagnosis is another multifactorial process that has been a subject of machine learning research for more than 15 years. The existence of large longitudinal studies in glaucoma, diabetes, and macular degeneration creates ideal opportunities for machine learning to better understand and improve our care for these conditions. Telemedicine and health screening are other applications that can leverage the power of technology to increase access to quality care for individuals in low-resource environments.

If we were to really leverage the power of deep learning to improve health care, we could use this technology beyond refining our current treatment-oriented health care system to help address disease prevention. What if we could inform individuals about how their daily choices affect their long- and short-term health? Could a better understanding of dietary habits, social support structure, or behavioral risk factors change how we help our patients manage their long-term health risks? What if we could better understand how nonclinical factors predict health outcomes? For example, what if we could determine the factors driving the health needs within a community? Moreover, what if deep learning could help identify the most valuable social and economic investments that would positively influence health and lessen disease burden? Machines can provide powerful information (i.e., insights and perspectives) that we can use to develop solutions to the problems that we prioritize. In the end, despite our enhanced vision and more nuanced understanding of the problems we face, it will still come down to probabilities and individual choices.

Michael D. Twa

Editor in Chief

Birmingham, AL


1. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks. Nature 2017;542:115–8.
2. Gulshan V, Peng L, Coram M, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016;316:2402–10.
© 2018 American Academy of Optometry