Technological advances continue to be implemented across all fields of medicine, especially ophthalmology. As image acquisition and data storage expand and as our healthcare system shifts to electronic medical records, gathering and organizing digital clinical data is becoming easier and more accessible.1 Moreover, as computer processing speed improves, artificial intelligence (AI) programs can better assist in the diagnosis and management of disease. Specifically, our access to patient data and images has increased drastically, and AI programs can perform detailed analysis and pattern recognition on such clinical data at volumes and speeds impossible to achieve by manual review.1,2
Ophthalmology, in particular, lends itself well to the implementation of AI. With an abundance of patient images, especially from optical coherence tomography (OCT), AI programs offer a unique opportunity to analyze this wealth of information and assist in clinical decision making. For AI to be successfully implemented in any medical field, including ophthalmology, it is important to understand how it works as well as its potential benefits and limitations. We provide a brief background of AI followed by a review of its implementation with OCT imaging.1,2
ARTIFICIAL INTELLIGENCE
Artificial intelligence is a branch of computer science that seeks to simulate intelligent human behavior in computers. It is an umbrella term that encompasses multiple components, including machine learning, deep learning, and natural language processing, a method that extracts information from unstructured data (such as clinical notes and medical journals) and turns it into structured data that can then be analyzed with machine learning techniques.1,2
Machine Learning
Machine learning is “a field of study that gives computers the ability to learn without being explicitly programmed,” as described by Arthur Samuel, one of the pioneers of machine learning.1,2 Machine learning programs differ from basic computer programs in that they can modify the parameters of their algorithms with exposure to more data.1 Because machine learning programs are designed to adapt in response to the data presented to their algorithms, they can make useful predictions based on those parameters. For example, a machine learning program trained on the board game checkers eventually played well enough to defeat a human champion.1,2
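To make the distinction from explicit programming concrete, the following is a minimal, purely illustrative sketch (Python with scikit-learn and synthetic data, not any program described in this review) in which a classifier's parameters are estimated from labeled examples rather than hand-coded:

```python
# Minimal sketch of "learning from data": the classifier's parameters are
# estimated from labeled examples instead of being explicitly programmed.
# All values are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 2-feature measurements for two classes (e.g., "normal" vs "edema").
X_normal = rng.normal(loc=[250.0, 0.1], scale=[15.0, 0.05], size=(100, 2))
X_edema = rng.normal(loc=[350.0, 0.4], scale=[25.0, 0.10], size=(100, 2))
X = np.vstack([X_normal, X_edema])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)      # parameters learned from the data
print(model.coef_, model.intercept_)        # the learned parameters
print(model.predict([[340.0, 0.35]]))       # prediction for a new example
```

Exposing the same model to additional labeled data and refitting would update these parameters, which is the adaptive behavior described above.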
Deep Learning
Deep learning is a subset of machine learning and a technique that autonomously learns features and tasks from a training dataset. “Deep” refers to the multiple layers of algorithms that the presented data pass through during computation, and a network of interconnected algorithms is called a neural network.1 Thus, neural networks are sets of algorithms, inspired by the neural connectivity in the human brain, that are designed to recognize patterns in their tasks. Neural networks allow data to pass through layers of algorithms in a multistep process of pattern recognition in order to produce an output, and deep learning refers to the powerful set of techniques for learning via these neural networks.1,2
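As a purely illustrative sketch of this layered structure (a toy PyTorch network with arbitrary layer sizes and random input, not a model from any cited study), data pass through a stack of interconnected layers to produce an output:

```python
# Minimal sketch of a feed-forward neural network: input data pass through
# several stacked layers, each applying a learned transformation.
# Layer sizes and input data are illustrative assumptions only.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(64, 32),  # first layer: 64 input features -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 16),  # second layer
    nn.ReLU(),
    nn.Linear(16, 2),   # output layer: scores for 2 classes
)

x = torch.randn(8, 64)   # a batch of 8 synthetic feature vectors
scores = net(x)          # forward pass through the stacked layers
print(scores.shape)      # torch.Size([8, 2])
```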
Moreover, each step in deep learning allows the program to continually learn and evaluate its progress toward a specific outcome. Unlike traditional machine learning programs, deep learning programs rely on multiple layers of algorithms and do not require the programmer to explicitly identify specific features in an image, because they learn those features autonomously from training datasets. A deep learning program therefore requires a larger training dataset and greater computational power than a machine learning program.1,2
Artificial neural networks (ANNs) have discrete layers, connections, and directions of data propagation. The computations performed by deep neural networks allow them to recognize and distinguish between a simple image, such as a stop light, and a more complex image, such as an abnormal chest radiograph.1 Moreover, if the program is presented with a plethora of images, both positive and negative training examples, it can be trained to accurately determine whether a presented image matches the positive training examples. Importantly, a subtype of deep learning, the convolutional neural network, is a deep ANN capable of image recognition and classification and has become a pivotal component of deep learning applications in medicine, especially in ophthalmology.1,2
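The following is a minimal, hypothetical sketch of a convolutional neural network for binary image classification (PyTorch, with an assumed 128 x 128 grayscale input); it illustrates the convolution-and-pooling pattern only and is not the architecture of any program discussed in this review:

```python
# Minimal sketch of a convolutional neural network for binary image
# classification. Architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # grayscale scan -> 8 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 128 -> 64
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 64 -> 32
        )
        self.classifier = nn.Linear(16 * 32 * 32, 2)      # scores for "disease" vs "normal"

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
scan = torch.randn(1, 1, 128, 128)   # one synthetic 128x128 image
print(model(scan).shape)             # torch.Size([1, 2])
```

In practice, such a network would be trained on large sets of labeled positive and negative images, as described above.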
Given the potential of deep neural networks, the study and application of deep learning and AI have increased in recent years and are being explored by many large technology companies and institutions around the world. This review highlights the use of these techniques in the diagnosis and management of ophthalmological diseases using OCT.
AI APPLIED TO OCT IMAGING
Groups from around the world have developed and evaluated AI programs that analyze images generated by OCT, an important diagnostic modality in ophthalmology. In OCT imaging, interference patterns are used to generate high-resolution cross-sectional images of tissue, which has greatly extended the ability of providers to diagnose pathology in the retina and other ocular tissues. Because OCT imaging is obtained easily and safely, the technology has generated large volumes of clinical images, making it an excellent target for AI. To date, most published studies of AI and OCT imaging focus on the posterior segment of the eye (retinal diseases and glaucoma), but recent studies have started to investigate the anterior segment. These studies are summarized in Table 1.3-40 High levels of accuracy have been reported in interpreting various disease processes of the eye. With the rapid advances in AI-based OCT interpretation, these strategies may soon dramatically improve ophthalmic diagnostics.
TABLE 1: Summary of Studies that Use AI and OCT in Various Ophthalmological Pathologies
Macular Edema
Macular edema is a sight-threatening pathology characterized by fluid accumulating in the macula, the posterior region of the retina that serves central vision. It may result from a variety of inflammatory, vasculopathic, or degenerative processes. Vision-impairing edema is often difficult to identify on a traditional fundus examination but is easily detectable on OCT imaging, which can identify the retinal thickening or pockets of fluid that characterize the condition. Automating the analysis of macular OCT images could help screen patients at risk of macular edema, such as those with diabetes. Several groups have developed AI programs to detect macular edema using OCT. An earlier study by Liu et al3 reported on the performance of an AI program to diagnose several retinal diseases, including macular edema and age-related macular degeneration (AMD). They used machine learning techniques to develop an automated method to determine macular pathology (macular hole, macular edema, and AMD) from fovea-centered cross sections in 3-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images.3 To determine the program's accuracy, they examined how often its diagnosis concurred with a majority opinion from 3 ophthalmologists who labeled each fovea-centered slice independently. Their program was highly accurate, achieving an area under the receiver operating characteristic curve (AUC) of more than 0.94 for all 3 macular pathologies.3 In another study, Liu et al41 used machine learning approaches, specifically support vector machine (SVM) classifiers, to identify normal maculae and several macular pathologies from OCT images; their results, based on 326 OCT scans, demonstrated an AUC of more than 0.93.
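Because the AUC is the performance metric reported by many of the studies summarized here, the following minimal sketch (scikit-learn, synthetic scores and labels) shows how it is computed from a model's predicted probabilities and reference diagnoses:

```python
# Minimal sketch of computing the area under the ROC curve (AUC).
# Scores and labels are synthetic; a real evaluation would use graded
# clinical reference labels for each scan.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])                       # reference diagnoses
y_score = np.array([0.1, 0.3, 0.2, 0.8, 0.7, 0.9, 0.6, 0.65])     # model probabilities

# Approximately 0.94 for these synthetic scores (1.0 = perfect separation, 0.5 = chance).
print(roc_auc_score(y_true, y_score))
```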
One of the most common causes of macular edema is diabetic macular edema (DME), a major cause of vision loss in diabetic patients. Diabetic macular edema has been the focus of several OCT-based studies, including one by Chan et al4 that combined several deep learning architectures to distinguish OCT images of normal and DME subjects. Specifically, they fused deep learning features from AlexNet, VggNet, and GoogleNet and validated their program using 2 datasets (from the Singapore Eye Research Institute and the Chinese University of Hong Kong), achieving an accuracy of 93.75%.4 Alsaih et al5 used a dataset from the Singapore Eye Research Institute containing 32 OCT image volumes (16 DME cases and 16 normal controls) to compare various machine learning techniques; the best-performing approach identified DME with a sensitivity of 87.5% and a specificity of 87.5%. Gerendas et al6 performed a pilot study that also highlighted the potential of machine learning programs for detecting DME and estimating patient prognosis. Their results identified 312 potentially predictive features for the prognosis of best-corrected visual acuity, the most important being intraretinal cystic fluid in the outer nuclear layer in the 3-mm area around the fovea.6 Another group, Alsaih et al,42 used SD-OCT volumes to detect DME with a sensitivity of 75% and a specificity of 87% on a challenging dataset.
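In the spirit of the feature-fusion strategy described for Chan et al,4 the following hypothetical sketch concatenates per-scan feature vectors from two models and trains a single classifier on the fused representation; the feature arrays are random placeholders standing in for features that would, in practice, come from pretrained networks:

```python
# Minimal sketch of "feature fusion": feature vectors produced by different
# models for the same scans are concatenated, and one classifier is trained
# on the fused representation. Arrays here are random placeholders only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans = 120

feats_net_a = rng.normal(size=(n_scans, 256))   # placeholder for network A features
feats_net_b = rng.normal(size=(n_scans, 512))   # placeholder for network B features
labels = rng.integers(0, 2, size=n_scans)       # 0 = normal, 1 = DME (synthetic)

fused = np.hstack([feats_net_a, feats_net_b])   # concatenate per-scan feature vectors

clf = SVC(kernel="linear")
# Chance-level here because the features are random; real extracted features
# would carry diagnostic signal.
print(cross_val_score(clf, fused, labels, cv=5).mean())
```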
Although diabetes is the most common cause of macular edema, determining the underlying etiology is important because it guides treatment. These etiologies can often be determined with the help of OCT images, and several AI programs have been shown to help identify whether edema resulted from diabetes or from another cause. Hecht et al7 developed a machine learning program that used parameters from OCT images, including the presence of hard exudates, subretinal fluid, the pattern of macular edema, and the location of cysts within the retinal layers, to distinguish DME from pseudophakic cystoid macular edema, a distinction that is important for selecting the treatment modality. Their program achieved a sensitivity of 94% to 98%, a specificity of 94% to 95%, and an AUC of 0.937 to 0.987, depending on the specific method, for confirming a diabetic etiology of the edema.7 Another machine learning program, developed by Rodrigues et al,8 could identify diabetic retinas on OCT by segmenting the retinal vascular network, distinguishing diabetic from healthy retinas with an accuracy of 98%, a specificity of 99%, and a sensitivity of 83%. Similar results emerged when they applied the same technique to classify the optic nerve head region.8
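A minimal sketch of the general approach of classifying tabular OCT-derived biomarkers is shown below (scikit-learn, synthetic data); the feature names are illustrative assumptions and do not reproduce Hecht et al's7 variables or model:

```python
# Minimal sketch of a classifier built on tabular OCT-derived biomarkers.
# Features, labels, and the decision rule are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.integers(0, 2, n),          # hypothetical: hard exudates present (0/1)
    rng.integers(0, 2, n),          # hypothetical: subretinal fluid present (0/1)
    rng.normal(300, 40, n),         # hypothetical: central retinal thickness (um)
])
# Synthetic labeling rule so the classifier has something learnable.
y = ((X[:, 0] == 1) & (X[:, 2] > 300)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```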
High accuracy in identifying macular edema has been reported with other AI programs using various strategies. Jemshi et al9 developed an algorithm, based on graph theory and dynamic programming, that detects macular edema by segmenting the inner limiting membrane and the choroidal layer; their program achieved high accuracy (99.5%), sensitivity (100%), and specificity (99%). Murugeswari and Sukanesh10 evaluated and compared 3 machine learning algorithms, SVM, cascade neural network (CNN), and partial least squares (PLS), for their ability to identify macular edema on OCT images and reported accuracies of 98.33% (SVM), 97.16% (CNN), and 94.34% (PLS). Balakrishnan et al11 developed a clinical assessment tool for macular edema using a machine learning approach, namely an SVM classifier, that exhibited better accuracy, sensitivity, and specificity than existing methods. Roy et al12 developed and evaluated a deep learning program, termed ReLayNet, for detecting macular edema on OCT and found it effective compared with 5 other methods, including 2 deep learning approaches. Finally, Montuoro et al13 developed a machine learning program that combines unsupervised feature representation and heterogeneous spatial context with graph-theoretic surface segmentation to better identify macular edema on OCT; using this strategy on publicly available datasets, they achieved a mean Dice coefficient of 0.78.13
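Because segmentation-based programs such as ReLayNet and that of Montuoro et al13 are commonly scored by the Dice coefficient, the following minimal sketch (NumPy, tiny synthetic masks) shows how that overlap measure is computed:

```python
# Minimal sketch of the Dice coefficient used to score segmentation overlap
# (e.g., predicted vs reference fluid regions). Masks are tiny synthetic
# arrays; real evaluations compare full OCT segmentation maps.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2 * |intersection| / (|pred| + |ref|) for binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

pred_mask = np.array([[0, 1, 1],
                      [0, 1, 0]])
ref_mask = np.array([[0, 1, 0],
                     [0, 1, 1]])
print(dice(pred_mask, ref_mask))  # 2*2 / (3+3) ~= 0.67
```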
Several groups have demonstrated that AI programs can distinguish macular edema from other causes of subretinal fluid accumulation. Syed et al14 developed a machine learning program, based on a 3D reconstruction of OCT data, to differentiate normal eyes from those with either macular edema or central serous chorioretinopathy, a cause of subretinal fluid accumulation. In a sample that included 30 normal retinas, 30 with macular edema, and 30 with central serous chorioretinopathy, the program identified the correct diagnosis with 98.88% accuracy.14 Sun et al15 also developed an automated classification program, based on machine learning approaches, for detecting the presence of DME and AMD, another cause of subretinal fluid accumulation. Its ability to correctly identify these 2 pathologies was evaluated using the Duke spectral-domain dataset and a large OCT dataset from Beijing, each containing DME and AMD patients along with normal eyes. The program correctly distinguished between normal, AMD, and DME eyes with an average accuracy of 97.8% in the Duke dataset and 99.8% in the Beijing dataset.15 These studies show that current AI programs can maintain high accuracy even when considering multiple pathologies.
Age-Related Macular Degeneration
Age-related macular degeneration is one of the most common causes of irreversible vision loss in the developed world. Its pathogenesis is not fully understood but involves degenerative changes in the outer retina. These retinal structures can be imaged with OCT, which is used to identify patients with the condition and to grade its severity. Grading severity is important because AMD can lead to choroidal neovascularization (CNV), a sight-threatening complication that requires urgent treatment. Given the key role of OCT imaging in AMD, extending its application through AI could facilitate detection and management of the disease.
Over the past few years, multiple groups have reported AI strategies for detecting AMD. Treder et al16 developed and evaluated a deep learning program, using the TensorFlow framework developed by Google, to detect AMD on SD-OCT. They evaluated their program on 100 untrained cross-sectional OCT images (50 with AMD and 50 healthy controls), and the results demonstrated an accuracy of 0.997 in the AMD group and 0.9203 in the healthy group with high significance (P < 0.001).16 Similarly, Kugelman et al17 developed a recurrent neural network, as opposed to the traditionally used convolutional neural network, to segment retinal layer boundaries in OCT images from healthy children and from patients with AMD. Their results demonstrate that recurrent neural networks are a viable alternative to convolutional neural networks, with high accuracy and consistency.17 Given that OCT and retinal fundus images are the main imaging modalities used to diagnose AMD, both were used by a deep learning program developed by Yoo et al43 for automatic detection of AMD. Their results showed an AUC of 0.906 and an accuracy of 82.6% with OCT alone, an AUC of 0.914 and an accuracy of 83.5% with fundus images alone, and the best performance, an AUC of 0.969 and an accuracy of 90.5%, with combined OCT and fundus images.43
In another study, Seebock et al18 developed a machine learning program that used unsupervised identification of anomalies as markers on retinal OCT scans to identify AMD; their results showed an accuracy of 81.40% and an AUC (on a publicly available dataset) of 0.944, demonstrating the value of unsupervised machine learning programs in diagnosing AMD from OCT images. Similarly, Venhuizen et al19 evaluated a machine learning algorithm for automatic detection and classification of AMD on OCT images, and their results, comparable to the performance of human graders, showed an AUC of 0.980, a sensitivity of 98.2%, and a specificity of 91.2%.
The progression of AMD is highly variable and can be complicated by the development of CNV, a sight-threatening complication that warrants urgent treatment. Schmidt-Erfurth et al20 proposed an AI program that individually predicts the progression of AMD to CNV or to the dry type with geographic atrophy (GA) using OCT. The program was evaluated on images from 459 eyes (159 of which progressed to advanced AMD, with 114 eyes displaying CNV and 45 displaying GA) and differentiated converting from nonconverting eyes with an AUC of 0.68 (CNV) and 0.80 (GA).20 Also studying progression, Bogunovic et al21 developed a machine learning method for automated image analysis that identifies and characterizes individual drusen at baseline and follows their development over time on OCT images. Their results showed an AUC of 0.75, demonstrating the ability of AI and OCT to predict outcomes for patients with AMD.21
Another challenge in AMD management is predicting its impact on vision in the presence of other pathologies such as cataract. Estimating the visual impact of AMD could guide management decisions, such as whether cataract extraction is worthwhile. Toward this and similar goals, several groups have developed AI algorithms that estimate visual acuity in AMD patients from their OCT findings. Aslam et al22 estimated visual acuity from OCT images of patients with AMD using a neural network. Using 1210 OCT scans, they validated a machine learning program, with a 10-layer feed-forward neural network and 1 hidden layer of 10 neurons, and demonstrated a root mean square error of 8.2 letters between predicted and actual visual acuity and a mean regression coefficient of 0.85.22 They also demonstrated that, when the external limiting membrane is intact, visual acuity declines more slowly with increasing subretinal fluid but more quickly with increasing subretinal hyperreflective material, whereas when the external limiting membrane is not intact, all visual acuities are reduced.22
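A minimal sketch of this kind of acuity regression is shown below (scikit-learn, entirely synthetic measurements and an arbitrary network size); it illustrates only how OCT-derived inputs can be mapped to a letter-score estimate and evaluated by root mean square error, and it does not reproduce Aslam et al's22 model:

```python
# Minimal sketch of a regression network mapping OCT-derived measurements to
# an estimated visual acuity (in letters), scored by root mean square error.
# All measurements, the synthetic acuity rule, and the network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
subretinal_fluid = rng.uniform(0, 1, n)     # illustrative normalized measurement
hyperreflective = rng.uniform(0, 1, n)      # illustrative normalized measurement
elm_intact = rng.integers(0, 2, n)          # external limiting membrane intact (0/1)

# Synthetic acuity: worse with more fluid/material, better when ELM is intact.
acuity = (85 - 20 * subretinal_fluid - 15 * hyperreflective
          + 10 * elm_intact + rng.normal(0, 5, n))

X = np.column_stack([subretinal_fluid, hyperreflective, elm_intact])
X_tr, X_te, y_tr, y_te = train_test_split(X, acuity, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE: {rmse:.1f} letters")
```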
Retinal Disease Management
The reach of AI in OCT analysis is rapidly advancing, and it is now being applied to many other retinal pathologies.44,45 Examples include vitreomacular adhesion during anti-vascular endothelial growth factor (anti-VEGF) therapy, retinal nerve fiber loss in multiple sclerosis, and photoreceptor changes in choroideremia and retinitis pigmentosa.46 It is also being used to automatically generate segmentation layers of the retina and choroid, with the potential to further advance the use of OCT in various disease processes.47,48 As AI in this field becomes more robust and widely applicable, it is hoped that these algorithms will soon positively impact the diagnosis and management of various diseases.
An exciting example of how AI can guide management was reported by De Fauw et al,23 who demonstrated that an OCT-analyzing AI program could match or exceed experts in making referral recommendations. The AI strategy in this case was a deep learning architecture trained to generate 3D tissue segmentations. After training on 14,884 OCT scans, it was tested on its ability to identify urgent ophthalmic conditions from OCT scans alone. A recommendation, such as immediate or routine follow-up, was deemed correct if it matched the patient's actual clinical course, which was known from the clinical records. The program matched or exceeded retina specialists who had access to the same OCT scan along with other clinical data such as the patient's age and clinical history. Excitingly, the architecture was easily retrained on a much smaller second training set to interpret images generated by other OCT devices. This finding shows that the approach, a novel architecture that generates 3D tissue segmentations, can lead to OCT interpretation that is device independent, or at least can avoid the need for multiple large training sets.23 This robustness, along with the accuracy of its referrals, shows that current AI techniques could impact how patients are screened, helping to triage patients more efficiently than human providers alone.23
Artificial intelligence programs that guide screening and management based on OCT images have also been explored by other groups. Kermany et al24 developed a clinical decision support algorithm, using deep learning, that provided a diagnostic framework and screening for patients with blinding retinal diseases, including AMD and DME. Prahs et al25,49 developed an AI program that evaluated the need for anti-VEGF treatment, a commonly used class of medications for treating macular edema and proliferative vasculopathies. Their program, a convolutional ANN, predicts the indication for anti-VEGF treatment from central retinal OCT scans without human intervention.25 Their results demonstrated a prediction accuracy of 95.5% and suggested the potential usefulness of this program as a clinical support system for physicians, although the authors still recommend a final evaluation by the treating physician.25 Even if final decisions are ceded to a human provider, it is very feasible that similar programs will soon be used to help screen or monitor patients.
Glaucoma
Glaucoma is a potentially blinding condition in which a patient suffers progressive loss of the retinal nerve fiber layer (RNFL) and related nervous structures. The process is often asymptomatic until late in the disease course, yet early diagnosis and treatment can often halt progression and prevent irreversible vision loss. Detection and monitoring are therefore critical, and OCT imaging has emerged as an excellent addition to other tests. Here the structures of interest are the RNFL and the ganglion cell layer (GCL), both of which are progressively thinned in glaucoma. Thinning of these structures can be directly analyzed along with corresponding changes to the optic nerve head.
Classification of RNFL thickness using AI programs and OCT has been an important approach for diagnosing glaucoma and predicting its progression.25,27,50,51 Bizios et al26 compared 2 machine learning classifiers, ANNs and SVMs, for glaucoma diagnosis; both performed well, with an AUC of 0.982 for ANNs and 0.989 for SVMs. Similarly, Grewal et al27 developed and trained an ANN, using OCT images, to differentiate healthy eyes from eyes with primary open angle glaucoma (POAG) and POAG suspects; their program had an overall classification rate of 65%, a specificity of 60%, and a sensitivity of 71.4%, although it tended to label POAG suspects as POAG.
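The following hypothetical sketch compares the two classifier families mentioned above, an ANN and an SVM, on the same tabular features and scores each by AUC; the "RNFL sector thickness" values are synthetic stand-ins rather than Stratus OCT measurements:

```python
# Minimal sketch comparing an ANN and an SVM on the same synthetic features.
# The thickness values and class separation are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 400
healthy = rng.normal(100, 10, size=(n // 2, 4))   # thicker "RNFL sectors" (synthetic)
glaucoma = rng.normal(80, 12, size=(n // 2, 4))   # thinner "RNFL sectors" (synthetic)
X = np.vstack([healthy, glaucoma])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)

print("ANN AUC:", roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))
print("SVM AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
```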
Several studies have used AI to develop programs that automatically diagnose glaucomatous neuropathy from OCT images.28-30 Burgansky-Eliash et al,28 one of the first groups in the field of glaucoma to combine AI and OCT, developed a machine learning program that detects glaucomatous abnormalities from OCT findings, demonstrating an AUC of 0.981, a sensitivity of 97.9%, and a specificity of 92.5%. Similarly, Huang and Chen,29 other pioneers in the field, developed an AI program using different classifiers to differentiate between healthy and glaucomatous eyes. Muhammad et al31 developed a hybrid deep learning program that used GCL and RNFL data from OCT to classify glaucoma suspects. In 102 eyes of 102 patients, the program identified glaucoma suspects with 93.1% accuracy, demonstrating its ability to distinguish healthy eyes from early glaucomatous eyes with OCT data alone.31
Glaucomatous changes to the optic nerve head have long been used by clinicians to identify at-risk patients, and optic nerve head changes are also a target of multiple AI strategies in OCT. One example is a program developed by Devalla et al32 that digitally stains OCT images of the optic nerve head, identifying neural and connective tissues. This digital staining allows automated measurement of optic nerve parameters that are affected by glaucoma. Their program stained 6 tissue layers (RNFL, retinal pigment epithelium, all other retinal layers, choroid, peripapillary sclera, and lamina cribrosa) in 40 healthy and 60 glaucomatous eyes; the results showed a Dice coefficient of 0.84, a sensitivity of 92%, a specificity of 99%, and an accuracy of 94%.32
Another optic nerve head parameter, Bruch membrane opening-minimum rim width (BMO-MRW), is a quantitative parameter that can be determined from OCT images.33 This parameter is thought to estimate the nerve fiber bundles at the optic nerve head and might be useful for monitoring the neuroretinal rim in glaucoma.33 An important precursor to this measurement is determining the boundary of the Bruch membrane opening (BMO), which can be delineated manually but can now also be identified using machine learning. Miri et al33 presented a program that used a machine-learning graph-based approach to identify the BMO with high accuracy. A potential application of the BMO-MRW was recently reported by Thompson et al,34 who presented a deep learning program that used BMO-MRW to predict glaucoma with an AUC of 0.933. Excitingly, they also showed that their deep learning program could use the BMO-MRW as a reference for training to analyze optic discs on fundus photographs. After analyzing 7426 fundus photographs with the BMO-MRW as a comparison, their deep learning program could predict glaucoma from fundus photographs with an AUC of 0.945.34
Given that glaucoma causes changes in multiple optic nerve and retinal structures, future strategies are likely to incorporate multiple measurements to best estimate glaucoma risk. For example, macular vessel density has been found to add predictive value when combined with other glaucoma-related measurements. In a report by Park et al,35 macular vessel density and GCL thickness were analyzed by an ANN, and performance in diagnosing glaucoma was enhanced (AUC of 0.87) when macular vessel density was used in addition to GCL thickness.35 Numerous other multimodal approaches have been reported. Miri et al36 demonstrated the benefit of multimodal segmentation of the optic disc and cup from OCT and color fundus photography, via a machine learning program, to classify the optic disc and cup boundaries; their results showed that multimodal approaches outperform unimodal (OCT only) approaches in segmenting the optic cup and disc. Other multimodal strategies have combined OCT and standard automated perimetry; one such study used machine learning algorithms to differentiate between healthy and glaucomatous eyes, with AUCs of 0.805 and 0.931 for the 2 classifiers.30
Glaucoma has numerous subtypes, and several studies have sought to classify the type of glaucoma using AI and OCT images. A primary distinction is whether glaucoma is open angle, meaning that the trabecular meshwork is open to the anterior chamber, or closed angle, in which the angle is closed and the trabecular meshwork is obstructed. This determination can be challenging, and anterior segment OCT (AS-OCT) is a promising method for objective and standardized grading of angle structures. Xu et al37 reported a machine learning program that could grade angles as open or closed with high accuracy and efficiency. Specifically, a multistep process converts an AS-OCT image into a binary (black and white) image and then applies several additional processing steps to identify the angle. Using this algorithm, their program achieved an AUC of 0.921, an accuracy of 84%, and a specificity of 85%, which they report outperformed clinical feature-based methods.37 In another study, Niwas et al38 developed a machine learning algorithm that uses AS-OCT images to determine the mechanism of angle closure in angle-closure glaucoma. Interestingly, their machine learning program could distinguish between 5 classes of angle-closure mechanisms (iris roll, lens, pupil block, plateau iris, and no mechanism) based on the OCT.38
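As a minimal illustration of the first step attributed to Xu et al37 (and only that step), the sketch below thresholds a grayscale frame into a binary image; the image is synthetic and the threshold choice is an assumption, and the published pipeline involves several further processing steps to localize the angle:

```python
# Minimal sketch of converting a grayscale frame into a binary (black and
# white) image by global thresholding. The "scan" is synthetic noise standing
# in for an AS-OCT frame, and the mean-based threshold is illustrative only.
import numpy as np

rng = np.random.default_rng(4)
scan = rng.uniform(0.0, 1.0, size=(256, 256))     # placeholder grayscale AS-OCT frame

threshold = scan.mean()                            # simple global threshold
binary = (scan > threshold).astype(np.uint8)       # 1 = bright (tissue-like) pixel, 0 = background

print(binary.shape, binary.min(), binary.max(), binary.mean())
```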
Corneal Diseases
The multiple layers of the human cornea can be examined by OCT, generating cross-sectional images that can help identify pathologies or surgical complications. Treder et al39 evaluated a deep learning program that used AS-OCT to automatically detect graft detachment after Descemet membrane endothelial keratoplasty. Using data from 111 eyes of 91 patients, 1172 AS-OCT images (609 of attached and 563 of detached grafts) were used to train and test the program, and the results showed a sensitivity of 98%, a specificity of 94%, and an accuracy of 96%.39 Another group, Yousefi et al,40 developed an unsupervised machine learning program that used OCT images from 12,242 eyes to identify and monitor stages of keratoconus. Their AI program classified the images into 4 main categories using the Ectasia Status Index: a cluster of mainly healthy eyes, a cluster of mostly healthy eyes and eyes with forme fruste keratoconus, a cluster of mostly healthy eyes with mild keratoconus, and a cluster of eyes with mainly advanced keratoconus. Their results suggest that AI can be applied to identifying the status and severity of keratoconus.40
LIMITATIONS OF AI
Although AI is likely to play an increasing role in healthcare and has the potential to greatly enhance patient care, several limitations and challenges accompany the incorporation of AI into medicine.1 First, increased reliance on automation and technology might lead to a phenomenon described as deskilling, negatively affecting the ability of future physicians to make informed decisions and form opinions based on detectable signs and symptoms.1
Second, AI programs are often unable to take a holistic approach to clinical scenarios and cannot fully consider the social and psychological aspects of a clinical encounter that a skilled clinician routinely weighs.1
Third, AI requires a strong training dataset as a reference standard in order to produce accurate outcomes. Even when an AI program produces consistently accurate results on a good training dataset, its performance on real-world images might not be as accurate. For example, Roach52 described an experiment in which a few pixels were changed in fundus photographs showing diabetic retinopathy. These changes were undetectable to ophthalmologists, who still correctly identified the images as having diabetic retinopathy, but they altered the AI program's output more than half of the time, causing it to inaccurately label the images as normal.52 Additionally, as reported by Ting et al,2 although many AI programs perform well during testing, more groups should provide power calculations when evaluating the performance of their AI programs on independent datasets.
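The general phenomenon of small, imperceptible perturbations shifting a model's output can be illustrated with a gradient-sign perturbation, as in the hypothetical sketch below (PyTorch, untrained toy model, random image); it does not reproduce the cited experiment:

```python
# Minimal sketch of the general phenomenon: a tiny, visually imperceptible
# pixel perturbation (here a gradient-sign step) can shift a network's output.
# The untrained toy model and random "image" are illustrative only and do not
# reproduce the experiment described above.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))   # toy classifier
image = torch.rand(1, 1, 32, 32, requires_grad=True)         # placeholder image patch
target = torch.tensor([1])                                    # "disease" label

loss = nn.functional.cross_entropy(model(image), target)
loss.backward()

epsilon = 0.01                                                # small pixel change
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original scores: ", model(image).detach())
print("perturbed scores:", model(perturbed).detach())         # scores shift even though
                                                               # the images look identical
```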
Lastly, AI programs are often unable to incorporate the ambiguity and variability that are intrinsic to observation and decision making in clinical medicine. This is especially true for diseases such as glaucoma and retinopathy of prematurity because of the inherent disagreement and interobserver variability that exist among clinicians in classifying these diseases.2 Although AI programs can analyze and learn from data at a level that is not possible for any single physician, the process by which deep learning algorithms arrive at an output is not fully understood and introduces a level of uncertainty into clinical decision making. Therefore, it is imperative that we seek to better understand the technology that we will likely use to treat our patients in the near future.1
CONCLUSION
With a gradual shift toward outpatient health care, electronic medical records, the ongoing integration of technology into medicine, and the increasing role of imaging modalities such as OCT, AI is likely to play a major role in healthcare, especially in ophthalmology, in the near future.1 Artificial intelligence programs seek to simulate intelligent human behavior in computers, and studies around the world have demonstrated the ability of different types of AI applied to OCT to aid in the diagnosis and management of ophthalmological diseases with high accuracy.1 As highlighted in our review, a large number of AI programs using OCT have been developed, and, given the outpatient nature of the specialty and the increasing use of OCT imaging, ophthalmology lends itself well to the implementation of AI for improving patient care.
REFERENCES
1. Kapoor R, Walters SP, Al-Aswad LA. The current state of artificial intelligence in ophthalmology. Surv Ophthalmol. 2019;64:233-240.
2. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103:167-175.
3. Liu YY, Ishikawa H, Chen M, et al. Computerized macular pathology diagnosis in spectral domain optical coherence tomography scans based on multiscale texture and shape features. Invest Ophthalmol Vis Sci. 2011;52:8316-8322.
4. Chan GCY, Kamble R, Muller H, et al. Fusing results of several deep learning architectures for automatic classification of normal and diabetic macular edema in optical coherence tomography. Conf Proc IEEE Eng Med Biol Soc. 2018;2018:670-673.
5. Alsaih K, Lemaitre G, Rastgoo M, et al. Machine learning techniques for diabetic macular edema (DME) classification on SD-OCT images. Biomed Eng Online. 2017;16:68.
6. Gerendas BS, Bogunovic H, Sadeghipour A, et al. Computational image analysis for prognosis determination in DME. Vision Res. 2017;139:204-210.
7. Hecht I, Bar A, Rokach L, et al. Optical coherence tomography biomarkers to distinguish diabetic macular edema from pseudophakic cystoid macular edema using machine learning algorithms. Retina. 2018 Oct 3. Epub ahead of print.
8. Rodrigues P, Guimarães P, Santos T, et al. Two-dimensional segmentation of the retinal vascular network from optical coherence tomography. J Biomed Opt. 2013;18:126011.
9. Jemshi KM, Gopi VP, Issac Niwas S. Development of an efficient algorithm for the detection of macular edema from optical coherence tomography images. Int J Comput Assist Radiol Surg. 2018;13:1369-1377.
10. Murugeswari S, Sukanesh R. Investigations of severity level measurements for diabetic macular oedema using machine learning algorithms. Ir J Med Sci. 2017;186:929-938.
11. Balakrishnan U, Venkatachalapathy K, Marimuthu GS. A hybrid PSO-DEFS based feature selection for the identification of diabetic retinopathy. Curr Diabetes Rev. 2015;11:182-190.
12. Roy AG, Conjeti S, Karri SPK, et al. ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed Opt Express. 2017;8:3627-3642.
13. Montuoro A, Waldstein SM, Gerendas BS, et al. Joint retinal layer and fluid segmentation in OCT scans of eyes with severe macular edema using unsupervised representation and auto-context. Biomed Opt Express. 2017;8:1874-1888.
14. Syed AM, Hassan T, Akram MU, et al. Automated diagnosis of macular edema and central serous retinopathy through robust reconstruction of 3D retinal surfaces. Comput Methods Programs Biomed. 2016;137:1-10.
15. Sun Y, Li S, Sun Z. Fully automated macular pathology detection in retina optical coherence tomography images using sparse coding and dictionary learning. J Biomed Opt. 2017;22:016012.
16. Treder M, Lauermann JL, Eter N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol. 2018;256:259-265.
17. Kugelman J, Alonso-Caneiro D, Read SA, et al. Automatic segmentation of OCT retinal boundaries using recurrent neural networks and graph search. Biomed Opt Express. 2018;9:5759-5777.
18. Seebock P, Waldstein SM, Klimscha S, et al. Unsupervised identification of disease marker candidates in retinal OCT imaging data. IEEE Trans Med Imaging. 2018 Oct 22. Epub ahead of print.
19. Venhuizen FG, van Ginneken B, van Asten F, et al. Automated staging of age-related macular degeneration using optical coherence tomography. Invest Ophthalmol Vis Sci. 2017;58:2318-2328.
20. Schmidt-Erfurth U, Waldstein SM, Klimscha S, et al. Prediction of individual disease conversion in early AMD using artificial intelligence. Invest Ophthalmol Vis Sci. 2018;59:3199-3208.
21. Bogunovic H, Montuoro A, Baratsits M, et al. Machine learning of the progression of intermediate age-related macular degeneration based on OCT imaging. Invest Ophthalmol Vis Sci. 2017;58:BIO141-BIO150.
22. Aslam TM, Zaki HR, Mahmood S, et al. Use of a neural net to model the impact of optical coherence tomography abnormalities on vision in age-related macular degeneration. Am J Ophthalmol. 2018;185:94-100.
23. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24:1342-1350.
24. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172:1122-1131.e9.
25. Prahs P, Radeck V, Mayer C, et al. OCT-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications. Graefes Arch Clin Exp Ophthalmol. 2018;256:91-98.
26. Bizios D, Heijl A, Hougaard JL, et al. Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by Stratus OCT. Acta Ophthalmol. 2010;88:44-52.
27. Grewal DS, Jain R, Grewal SP, et al. Artificial neural network-based glaucoma diagnosis using retinal nerve fiber layer analysis. Eur J Ophthalmol. 2008;18:915-921.
28. Burgansky-Eliash Z, Wollstein G, Chu T, et al. Optical coherence tomography machine learning classifiers for glaucoma detection: a preliminary study. Invest Ophthalmol Vis Sci. 2005;46:4147-4152.
29. Huang ML, Chen HY. Development and comparison of automated classifiers for glaucoma diagnosis using stratus optical coherence tomography. Invest Ophthalmol Vis Sci. 2005;46:4121-4129.
30. Shigueoka LS, Vasconcellos JPC, Schimiti RB, et al. Automated algorithms combining structure and function outperform general ophthalmologists in diagnosing glaucoma. PLoS One. 2018;13:e0207784.
31. Muhammad H, Fuchs TJ, De Cuir N, et al. Hybrid deep learning on single wide-field optical coherence tomography scans accurately classifies glaucoma suspects. J Glaucoma. 2017;26:1086-1094.
32. Devalla SK, Chin KS, Mari JM, et al. A deep learning approach to digitally stain optical coherence tomography images of the optic nerve head. Invest Ophthalmol Vis Sci. 2018;59:63-74.
33. Miri MS, Abràmoff MD, Kwon YH, et al. A machine-learning graph-based approach for 3D segmentation of Bruch's membrane opening from glaucomatous SD-OCT volumes. Med Image Anal. 2017;39:206-217.
34. Thompson AC, Jammal AA, Medeiros FA. A deep learning algorithm to quantify neuroretinal rim loss from optic disc photographs. Am J Ophthalmol. 2019 Jan 25. Epub ahead of print.
35. Park K, Kim J, Lee J. Macular vessel density and ganglion cell/inner plexiform layer thickness and their combinational index using artificial intelligence. J Glaucoma. 2018;27:750-760.
36. Miri MS, Abràmoff MD, Lee K, et al. Multimodal segmentation of optic disc and cup from SD-OCT and color fundus photographs using a machine-learning graph-based approach. IEEE Trans Med Imaging. 2015;34:1854-1866.
37. Xu Y, Liu J, Cheng J, et al. Automated anterior chamber angle localization and glaucoma type classification in OCT images. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:7380-7383.
38. Niwas SI, Lin W, Kwoh CK, et al. Cross-examination for angle-closure glaucoma feature detection. IEEE J Biomed Health Inform. 2016;20:343-354.
39. Treder M, Lauermann JL, Alnawaiseh M, et al. Using deep learning in automated detection of graft detachment in Descemet membrane endothelial keratoplasty. Cornea. 2018;38:157-161.
40. Yousefi S, Yousefi E, Takahashi H, et al. Keratoconus severity identification using unsupervised machine learning. PLoS One. 2018;13:e0205998.
41. Liu YY, Chen M, Ishikawa H, et al. Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding. Med Image Anal. 2011;15:748-759.
42. Alsaih K, Lemaitre G, Vall JM, et al. Classification of SD-OCT volumes with multi pyramids, LBP and HOG descriptors: application to DME detections. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:1344-1347.
43. Yoo TK, Choi JY, Seo JG, et al. The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment. Med Biol Eng Comput. 2018;57:677-687.
44. Garcia-Martin E, Pablo LE, Herrero R, et al. Neural networks to identify multiple sclerosis with optical coherence tomography. Acta Ophthalmol. 2013;91:e628-e634.
45. Camino A, Wang Z, Wang J, et al. Deep learning for the segmentation of preserved photoreceptors on en face optical coherence tomography in two inherited retinal diseases. Biomed Opt Express. 2018;9:3092-3105.
46. Waldstein SM, Montuoro A, Podkowinski D, et al. Evaluating the impact of vitreomacular adhesion on anti-VEGF therapy for retinal vein occlusion using machine learning. Sci Rep. 2017;7:2928.
47. Shiihara H, Sonoda S, Terasaki H, et al. Automated segmentation of en face choroidal images obtained by optical coherent tomography by machine learning. Jpn J Ophthalmol. 2018;62:643-651.
48. Hassan T, Akram MU, Akhtar M, et al. Multilayered deep structure tensor delaunay triangulation and morphing based automated diagnosis and 3D presentation of human macula. J Med Syst. 2018;42:223.
49. Prahs P, Märker D, Mayer C, et al. Deep learning to support therapy decisions for intravitreal injections. Ophthalmologe. 2018;115:722-727.
50. Christopher M, Belghith A, Weinreb RN, et al. Retinal nerve fiber layer features identified by unsupervised machine learning on optical coherence tomography scans predict glaucoma progression. Invest Ophthalmol Vis Sci. 2018;59:2748-2756.
51. Barella KA, Costa VP, Gonçalves Vidotti V, et al. Glaucoma diagnostic accuracy of machine learning classifiers using retinal nerve fiber layer and optic nerve data from SD-OCT. J Ophthalmol. 2013;2013:789129.
52. Roach L. Artificial intelligence. EyeNet Magazine. November 2017. Available from: https://www.aao.org/eyenet/article/artificial-intelligence.