
TRANSLATIONAL RESEARCH: Edited by Jason Hsu & Sunir J. Garg

Artificial intelligence in glaucoma: posterior segment optical coherence tomography

Gutierrez, Alfredoa,b; Chen, Teresa C.b,c

Author Information
Current Opinion in Ophthalmology 34(3):p 245-254, May 2023. | DOI: 10.1097/ICU.0000000000000934

INTRODUCTION

Glaucoma is the leading cause of irreversible blindness worldwide, and the number of people affected is projected to reach 111.8 million by 2040 [1]. Early detection and treatment can mitigate vision loss, but early detection is challenging because the disease is asymptomatic in its early stages. Recent studies have demonstrated that deep learning (DL) models may be employed for the early detection of glaucoma and can identify glaucomatous change more quickly and accurately than traditional methods [2–5]. DL models use layers within neural networks to process and identify specific patterns in images and use this information to make predictions. These algorithms have been used for diagnostic purposes in a number of medical fields, with ophthalmology among the early adopters. Although there are benefits to using DL to classify glaucoma, the vast majority of models remain in the research phase due to concerns over security, privacy, generalizability, and clinical utility. Despite these challenges, recent studies demonstrate great progress in the field. This review will outline the benefits of using a DL approach as a diagnostic and surveillance tool in glaucoma. Specifically, this review will summarize recent developments in DL models that utilize posterior segment optical coherence tomography (OCT) imaging for glaucoma classification.


THE CHALLENGES OF DIAGNOSING GLAUCOMA WITH ARTIFICIAL INTELLIGENCE

The pathophysiology of glaucoma typically involves an increase in intraocular pressure that leads to progressive loss of retinal ganglion cells, optic nerve degeneration, and eventual vision loss. Diagnosis is complicated by the lack of a universal definition for glaucomatous optic neuropathy (GON), and specifically by the lack of a clear, universal, quantifiable cutoff value for detecting glaucomatous pathology. This may explain why DL applications for identifying glaucoma have not been widely disseminated and adopted by practicing ophthalmologists, whereas artificial intelligence (AI) has already been used for other eye diseases that cause more clearly defined changes. For example, in 2018 the Food and Drug Administration (FDA) approved the first AI diagnostic platform for diabetic retinopathy that does not require physician input at the point of care. IDx-DR (Digital Diagnostics, Coralville, IA) is a technology that uses DL to validate image quality and then identify diabetic retinopathy, which would trigger a referral to an ophthalmologist for further evaluation [6,7]. Unlike the obvious anatomical changes used for establishing a diagnosis of diabetic retinopathy (e.g. fluid accumulation in the retina), the imaging findings associated with glaucoma are less clear and may be hampered by the wide range of normal cupping, the wide range of normal peripapillary retinal nerve fiber layer (RNFL) thickness, as well as by confounding factors such as poor image quality, aging, peripapillary atrophy, myopia, and other ocular pathologies [8].

Other elements that impede the diagnosis of glaucoma, even by human image graders and ophthalmologists, include the need for multimodal assessment and the disease's protracted natural progression. These factors contribute to a complex and time-consuming process that lends itself to human error and is further taxed by barriers to follow-up and a lack of patient awareness, especially in disadvantaged populations [9,10]. Furthermore, integrating vast amounts of OCT data is often impractical in the clinical setting; for example, scrolling through the hundreds of images in a single volume scan to detect pathology by eye. In contrast, DL models, which can analyze large datasets without the need for glaucoma specialists or image graders, can simplify and expedite the process of glaucoma diagnosis and surveillance. In summary, one of the biggest obstacles of AI in glaucoma is the lack of a universal gold standard for glaucoma diagnosis.

ARTIFICIAL INTELLIGENCE, MACHINE LEARNING, AND DEEP LEARNING

AI is an umbrella term that describes the simulation of human intelligence by machines to problem solve, self-teach, and carry out tasks (Fig. 1). Traditional machine learning (ML), a subset within AI, uses human-designed code to extract desired features from raw datasets and use these to generate an output [11]. Examples of these algorithms include k-nearest neighbors (KNN), random forests (RF), and support vector machines (SVM). Such feature engineering approaches are useful for targeting specific features within datasets to make decisions, but they are limited by human-set parameters, which may overlook important information within raw datasets. In the field of glaucoma, it has been shown that DL models outperform traditional ML models in GON detection from OCT volume scans [12].
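As a concrete illustration of the feature-engineering workflow, the following minimal sketch (simulated data and hypothetical features, not drawn from any study cited in this review) trains an SVM on two hand-crafted OCT summary measures:

```python
# Minimal feature-engineering sketch: hand-crafted OCT summary measures
# (hypothetical, simulated values) are fed to a classical SVM classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Simulated features: [mean RNFL thickness (um), vertical cup-to-disc ratio]
X = np.vstack([rng.normal([95, 0.45], [10, 0.10], size=(200, 2)),   # "healthy"
               rng.normal([70, 0.70], [12, 0.12], size=(200, 2))])  # "glaucoma"
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # human-chosen features + SVM
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The limitation of this approach is visible in the first lines: the programmer decides which numbers summarize the scan, and any information not captured by those hand-picked features is unavailable to the classifier.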

FIGURE 1:
The relationship between artificial intelligence (AI), machine learning (ML), and deep learning (DL) (original figure). Diagram providing an organizational framework for artificial intelligence, machine learning, and deep learning.

As opposed to traditional ML algorithms, DL models (e.g. convolutional neural networks [CNNs]) use “representation learning”, which describes the automatic recognition of features within raw datasets without the need for human-designed code to extract them (Fig. 2) [13]. The trade-off is that features identified by DL algorithms may be more comprehensive yet less well understood than predefined features, earning these algorithms the “black box” label, that is, difficult to understand and explain. On the other hand, this method allows researchers to identify novel structures that are implicated in the studied disease. This rapidly growing field offers the benefit of a less labor-intensive and time-consuming approach that considers the entirety of raw datasets to make predictions.

FIGURE 2:
Fundamental architecture of traditional machine learning (ML) and convolutional neural networks (CNNs) (original figure). In traditional machine learning (ML) (top diagram), programmers must manually define features for extraction and later use algorithms such as k-nearest neighbors, random forests, and support vector machines for classification to generate a desired outcome. In convolutional neural networks (CNNs) (bottom diagram), convolutional layers receive, transform, and output information to subsequent layers, which process increasingly sophisticated patterns within images to generate a desired outcome. Features for classification need not be manually extracted, which differentiates artificial neural networks from traditional ML approaches. CNNs are the most widely used artificial neural networks in the field of ophthalmology, as they are especially useful for processing images.

Inspired by the human brain, DL models use neural networks, which consist of several layers that successively receive input data, perform computations, and generate an output that is transmitted to the next layer in a hierarchical manner [13]. Neural network approaches have been used in glaucoma since as early as 1994, when Goldbaum et al.[14] trained neural networks for the interpretation of automated perimetry. However, recent advances in computational power have allowed researchers to develop DL networks that can process far more complex information and produce better outcomes. CNNs are a type of DL network that has recently gained enormous popularity because of its utility in image classification and pattern recognition. These models use spatially aware filters (i.e. convolutional layers) to process images, generating weighted sums that are used to make predictions (e.g. “healthy” or “disease”) [15]. As imaging is such a key component of ophthalmology, DL models are a promising approach not only to detect pathology, but also to predict disease progression, classify disease severity, and identify novel structures of interest.
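The layered, hierarchical processing described above can be made concrete with a toy CNN (a simplified sketch assuming PyTorch and single-channel B-scan inputs; published networks are far deeper and are trained on large labeled datasets):

```python
# Toy CNN sketch: convolution -> pooling -> fully connected layers that map an
# OCT B-scan to two class scores ("healthy" vs. "glaucoma"). Illustrative only.
import torch
import torch.nn as nn

class TinyOCTCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyOCTCNN()
logits = model(torch.randn(4, 1, 224, 224))  # batch of 4 single-channel B-scans
print(logits.shape)  # torch.Size([4, 2]) -> one score per class for each scan
```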

In order for DL models to carry out specific tasks and for their performance to be assessed, they must undergo training, validation, and testing using three different datasets. Training is an iterative process in which the network processes input data, observes the generated results, and makes modifications to optimize performance. During validation, the model is fed a different dataset for fine-tuning. Lastly, a test set is used to assess the model's performance. It is important that the model has not previously processed any data within the test set, because prior exposure artificially inflates measured performance. Training may be achieved using supervised, unsupervised, or semi-supervised learning. In supervised learning, the model is fed labeled data in pairs (i.e. raw data and ground truth). The model can thus compare its predictions to the expected outcome (or ground truth) and cyclically readjust the weights of its artificial neurons, in a process known as backpropagation [16]. This is the predominant training approach in ophthalmology applications, and several models utilizing supervised learning have been shown to improve glaucoma detection using both fundus photographs and OCT images [17–21]. In unsupervised learning, models are trained with unlabeled data by recognizing patterns and underlying structures within datasets according to shared attributes. The advantage of this technique is that it can detect novel features without the need to label datasets [22,23]. Unsupervised learning approaches have been used to classify visual field patterns in glaucoma and to detect glaucomatous change over time [24,25]. In semi-supervised learning, training begins with a smaller labeled dataset that serves as a primer and continues with a larger, unlabeled dataset. Although this method is less common in ophthalmology, it has demonstrated good performance with OCT and fundus photographs for glaucoma detection [26–28].
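The split into training, validation, and test sets and the supervised backpropagation loop can be summarized in a short sketch (entirely synthetic data, shown only to illustrate the workflow):

```python
# Minimal supervised-training sketch with a train / validation / test split.
# Data are random tensors standing in for labeled (B-scan, ground truth) pairs.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

X = torch.randn(600, 1, 64, 64)            # synthetic "B-scans"
y = torch.randint(0, 2, (600,))            # synthetic ground-truth labels
train_set, val_set, test_set = random_split(TensorDataset(X, y), [400, 100, 100])

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def accuracy(loader):
    correct = total = 0
    with torch.no_grad():
        for xb, yb in loader:
            correct += (model(xb).argmax(1) == yb).sum().item()
            total += len(yb)
    return correct / total

for epoch in range(5):
    for xb, yb in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)      # compare predictions with ground truth
        loss.backward()                    # backpropagation
        optimizer.step()                   # readjust the weights
    print(epoch, "validation accuracy:", accuracy(DataLoader(val_set, batch_size=32)))

# The test set is held out and evaluated only once, after training is complete.
print("test accuracy:", accuracy(DataLoader(test_set, batch_size=32)))
```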

CLASSIC GLAUCOMA OPTICAL COHERENCE TOMOGRAPHY PARAMETERS OF THE POSTERIOR SEGMENT

Common commercially available posterior segment OCT glaucoma parameters classically fall into three regions: peripapillary RNFL thickness; macular parameters [macular cube scans, ganglion cell complex (GCC), and ganglion cell layer plus inner plexiform layer (GC-IPL)]; and optic disc parameters [Bruch's membrane opening minimum rim width (BMO-MRW) and optic nerve head (ONH) cube scans] [29]. Although RNFL thickness is the most commonly used clinical OCT parameter, DL algorithms have used combinations of these parameters. Among individual OCT parameters, the BMO-MRW, defined as the minimum distance between Bruch's membrane opening and the internal limiting membrane (ILM), has been identified as another accurate biomarker of glaucoma progression that provides comparable or better detection of glaucoma than RNFL thickness [30,31].

DEEP LEARNING MODELS FOR THE DETECTION OF GLAUCOMA

DL models that aim to detect glaucoma from OCT data have used inputs that include the peripapillary RNFL, ONH parameters, macular parameters, and combinations of peripapillary, ONH, and macular parameters.

Deep learning models using the retinal nerve fiber layer

To help diagnose glaucoma, many clinicians rely on OCT-measured RNFL thickness, and the raw B-scans underlying these measurements have been used to train DL models. In a 2020 study, Mariottoni et al.[32▪] used a pretrained residual neural network, further trained on their own dataset of unsegmented B-scans paired with RNFL thickness measurements, to predict RNFL thickness from raw B-scans. This way, the DL algorithm could identify B-scan features relevant to predicting RNFL thickness without relying on the conventional segmentation performed by spectral-domain OCT (SD-OCT) machines. The model performed similarly to conventional SD-OCT software on good-quality images and better than SD-OCT software on lower-quality images. Although these are promising results, the study's main limitation is that the model was trained with RNFL thickness measurements produced by OCT software, which have high artifact rates. OCT devices measure RNFL thickness by automatically segmenting retinal layers and then provide thickness and deviation maps by comparing measured values with population averages. Although this feature is convenient, segmentation errors and artifacts are present on 19.9% to 46.3% of SD-OCT scans of the RNFL [33,34]. Thompson et al.[35▪] bypassed this limitation by training a CNN model to predict the probability of glaucoma using peripapillary B-scans graded by experts. Not only was this model more accurate in differentiating glaucomatous eyes from normal eyes than SD-OCT global RNFL thickness [area under the receiver operating characteristic curve (AUROC) = 0.96 vs. 0.87], but it also produced heat maps of the areas within B-scans that most contributed to the algorithm's classification.
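The segmentation-free principle can be sketched as follows (a simplified illustration under assumed image sizes, not the architecture of Mariottoni et al. or Thompson et al.): a CNN maps an unsegmented B-scan directly to a thickness estimate, trained against reference values exported by the OCT software.

```python
# Segmentation-free regression sketch: a small CNN maps a raw, unsegmented
# B-scan to a single global RNFL thickness estimate (in microns).
import torch
from torch import nn

regressor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1),            # single output: predicted thickness
)

bscans = torch.randn(8, 1, 128, 512)           # hypothetical batch of raw B-scans
reference = torch.rand(8, 1) * 60 + 60         # hypothetical software-derived values (um)
loss = nn.functional.mse_loss(regressor(bscans), reference)  # regression objective
loss.backward()                                # train against the reference thicknesses
```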

Deep learning models using optic nerve head parameters

Several studies have developed models to detect GON from ONH-centered cube scans. DL models trained with volumetric cube scans have the advantage of using 3D spatial context to identify patterns and structural changes. In 2019, Ran et al.[36] developed a DL model using optic disc volume scans and compared its performance to a model trained with 2D en face fundus images. The 3D DL system significantly outperformed the 2D model [area under the curve (AUC) = 0.969 vs. 0.921] and performed similarly to two specialists in detecting GON across three datasets encompassing different ethnicities, providing evidence of a significant advantage to using volumetric data. Additionally, generated class activation maps (CAMs) demonstrated that the areas in optic disc cubes used by the algorithm to detect GON correlated with those that ophthalmologists use in practice.
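The following minimal sketch (assumed input shapes; not the model of Ran et al.) shows how a 3D CNN ingests an entire ONH cube so that its filters have access to spatial context across neighboring B-scans:

```python
# 3D CNN sketch: the whole volume (depth x height x width) is processed at once,
# so learned filters can use context across adjacent B-scans.
import torch
from torch import nn

model3d = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 2),
)

cube = torch.randn(2, 1, 64, 64, 64)   # batch of 2 downsampled ONH volume scans
print(model3d(cube).shape)             # torch.Size([2, 2]): GON vs. healthy scores
```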

In contrast to Ran et al.'s model, which uses a 3D architecture for the analysis of SD-OCT cube scans, Garcia et al.[37▪▪] developed a 2D model that extracts spatial dependencies between cube B-scans with a novel slide-level discriminator for glaucoma detection. Although this CNN does not operate in 3D space, it processes SD-OCT ONH cubes by preserving feature dependencies in the latent space using a long short-term memory (LSTM) network. The feature extractor achieved an AUC higher than 0.93 in the primary and external test sets, and the combination of CNN and LSTM networks achieved an AUC of 0.8847. Additionally, CAM generation by this model allows for the interpretation of SD-OCT cubes based on regions of interest on each B-scan. Using this feature, Garcia and colleagues identified the areas within a cube scan that were most relevant for the DL algorithm to discriminate between healthy and glaucomatous eyes; these areas included the RNFL, the neuroretinal rim, and the lamina cribrosa (Fig. 3). These regions are consistent with those reported in the literature as important potential glaucoma biomarkers.

FIGURE 3:
Long short-term memory (LSTM) generated heat maps revealing areas of high discriminatory power (previously published figure). LSTM-generated heat maps use color classification to identify areas within optic nerve head cube scans with the highest discriminatory power for distinguishing healthy from glaucomatous samples (areas marked in red). This figure demonstrates that regions corresponding to the retinal nerve fiber layer (RNFL), the cup, and the lamina cribrosa were most relevant for identifying glaucomatous eyes (first four columns of volumes vGr1, ..., vGr4, bounded by the red rectangle), while the peripapillary retina, with less localized patterns, was important for identifying healthy eyes (vHr1, ..., vHr4, bounded by the green rectangle). Source: Reproduced with permission from Garcia and Naranjo [37▪▪].

In a 2022 study, Akter et al.[38▪▪] developed the first AI method for glaucoma detection that accounts for functional, structural, and risk factor data. Notably, a new region of interest from ONH OCT B-scans, the cup surface area, was calculated and used as a parameter to train three DL models, which achieved an AUC of 0.99 when discriminating glaucomatous from healthy eyes. Furthermore, heatmaps generated by the DL models revealed that the cup surface area provided high discriminatory power in successfully classifying GON eyes (Fig. 4). These findings demonstrate that cup surface area is a promising disc parameter for DL training and glaucoma detection.

FIGURE 4:
Heatmaps reveal that the cup surface has high discriminative power in glaucoma detection by deep learning (DL) models (previously published figure). Three deep learning (DL) models, VGG16, ResNet, and a novel DL model (from left to right), used segmented OCT B-scans of glaucomatous eyes to illustrate how the region corresponding to the cup surface was important for the algorithms to distinguish glaucomatous eyes from healthy eyes. Generated heat maps identify such areas of high discriminative power in red. Reproduced with permission from Roy M, corresponding author. Source: Akter et al. [38▪▪].

Deep learning model using macular parameters

Macula-centered SD-OCT cube scans have also been used as input data for DL detection of glaucoma. Russakoff et al.[39▪] trained a CNN with such macular cubes, with the novel addition of including myopic eyes of varying severities. As previously stated, relying on RNFL thickness for glaucoma detection in myopic eyes may produce false positives, because myopia can lead to RNFL thinning on OCT with no associated GON. Nevertheless, this CNN was fairly successful in detecting glaucoma in mild, moderate, and severe myopia (AUC = 0.95, 0.92, and 0.85, respectively).

Deep learning models using combinations of parameters

One of the benefits of using an automated approach for diagnostic purposes is that algorithms can integrate vast amounts of data, which may be an arduous task for a single clinician. Recent studies in this field more commonly use several parameters derived from commercially available OCT devices to aid in model training. Swept-source OCT (SS-OCT) provides wide-angle images that capture a large area of the posterior segment and provide measurements from both the macular and disc regions. Additionally, wide-field OCT gives a single-page report, which includes RNFL and retinal ganglion cell plus inner plexiform layer (RGC+) thickness and probability maps that facilitate glaucoma diagnosis [40,41]. In a 2017 study, Muhammad et al.[42] argued that this single wide-field OCT report is enough to distinguish healthy from glaucomatous eyes and therefore sought to train a hybrid DL and ML model to carry out this task. They found that the hybrid model performed better than traditional OCT and visual field metrics, and that training the model with the RNFL probability map yielded the highest diagnostic accuracy (AUC = 0.93).

Single-page wide-field OCT reports have also been used to train CNNs without traditional ML. In a 2021 study, Shin et al.[43▪▪] compared the diagnostic accuracy of two types of CNNs to a conventional parameter (i.e. RNFL thickness). Both CNNs [fusion by convolution network (FCN) and fully connected network (FFC)] were able to combine data from the single-page report, but they differed in the level of fusion of derived features. Both FCN and FFC outperformed the conventional method (i.e. RNFL thickness value) in distinguishing healthy from glaucomatous eyes; but FCN, which is effective in fusing images with spatially similar structures, achieved the highest AUC of 0.987. These findings demonstrate that combining data from wide-field single-page reports in a specific manner compatible with CNN architecture may improve model performance.
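One simple way to combine spatially aligned maps from a single-page report is early fusion by channel concatenation, sketched below (an illustrative strategy only; the published FCN and FFC architectures differ in where in the network the fusion occurs):

```python
# Early-fusion sketch: several report maps resampled to a common grid are
# stacked as channels of a single multi-channel input image.
import torch
from torch import nn

# Hypothetical maps from a single-page wide-field report: RNFL thickness,
# RNFL probability, RGC+ thickness, and RGC+ probability maps (batch of 4 eyes).
maps = [torch.randn(4, 1, 96, 96) for _ in range(4)]
fused = torch.cat(maps, dim=1)          # early fusion: one 4-channel input

classifier = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
print(classifier(fused).shape)          # torch.Size([4, 2]): glaucoma vs. healthy
```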

Lee et al.[44▪] also used a combination approach, but they instead trained four independent DL models with input from four kinds of SD-OCT images: GC-IPL thickness maps, GC-IPL deviation maps, RNFL thickness maps, and RNFL deviation maps. An additional “ensemble model” was trained with a combination of the four types of images. The ensemble model achieved the best performance (AUC = 0.990) followed by the GC-IPL deviation map (AUC = 0.986), which suggests that integrating macular and peripapillary parameters is optimal for distinguishing glaucoma from healthy eyes.

Transfer learning, or fine-tuning a pretrained DL model with additional data from another institution, improves glaucoma diagnostic performance and model generalizability [2]. In a 2021 study, Thakoor et al.[45▪▪] demonstrated the deteriorating performance of DL models when applied to datasets from different sites, and they then successfully designed a DL model with architectural improvements that reduced this drop in performance by combining three end-to-end DL models (OCT-fine-tuned ResNet-18, VGG-16, and InceptionV3). Additionally, this study sought to enhance the explainability of extracted DL features by generating CAMs and using testing with concept activation vectors (TCAVs), which quantifies how important human-interpretable regions of OCT images are for accurate glaucoma classification. They found that RNFL probability maps, GC-IPL thickness maps, and GC-IPL probability maps had relatively high TCAV scores, meaning that these are important for correct glaucoma classification by the DL models, whereas RNFL thickness maps had low TCAV scores. Furthermore, they corroborated the TCAV scores by quantifying the eye fixations of two OCT graders across eight OCT reports and found that the two measures were consistent. Such analyses may help change clinicians' perception that DL models are a “black box” and may enhance explainability, which is an important step toward implementing automated models in clinical practice.
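A minimal transfer-learning sketch (assuming torchvision and an ImageNet-pretrained ResNet-18; the studies above fine-tuned their own architectures on OCT data) shows the basic recipe of reusing pretrained weights and retraining a new classification head:

```python
# Transfer-learning sketch: load pretrained weights, freeze the backbone,
# replace the final layer for a two-class glaucoma task, and fine-tune the head.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new glaucoma/healthy head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
x = torch.randn(4, 3, 224, 224)                   # hypothetical OCT-derived images
loss = nn.functional.cross_entropy(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()                                  # only the new head is updated
```

In practice, further fine-tuning of deeper layers on the local dataset is often what adapts such a model to a new site's scans.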

The BMO-MRW is a relatively new parameter that has shown promise in detecting glaucoma, and it has been reported to have a stronger correlation to visual acuity than other disc parameters [30]. Seo et al.[46▪] used BMO-MRW along with RNFL thickness and RNFL color code classification to train a DL model that was able to discriminate glaucoma suspect eyes from early normal tension glaucoma eyes with 96.7% accuracy. When single parameters were used, BMO-MRW produced the best performance (AUC = 0.959), suggesting that newer optic disc parameters such as the BMO-MRW should be considered in future studies to improve CNN diagnostic accuracy.

DEEP LEARNING MODELS FOR DETECTING GLAUCOMA PROGRESSION

Although traditional ML algorithms have been used to detect glaucoma progression from OCT scans, DL models that achieve this task are rare in the literature [47]. This may be explained by the fact that monitoring glaucoma progression is more difficult than detecting disease, as it requires longitudinal testing and is often complicated by patient noncompliance. In a recent publication, Bowd et al.[48▪▪] used unsupervised DL auto-encoders (DL-AE) to develop RNFL-based region-of-interest (ROI) maps for the classification of glaucomatous change: likely progression, not likely progression, or no change. Compared with global circumpapillary RNFL (cpRNFL) thickness measurements, the DL-AE ROIs had greater sensitivity for detecting change in progressing eyes (0.90 vs. 0.63) and similar specificity for identifying nonprogressing eyes (0.92 vs. 0.93). Importantly, the DL-AE ROI model identified 40% more eyes with glaucomatous progression than the average cpRNFL annulus thickness derived from the same optic disc cube scans. This model exhibits great clinical relevance because it produces eye-specific results that account for individual anatomic differences, which may not be detected with conventional instrument-defined global or regional measurements.
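The auto-encoder idea can be sketched as follows (a simplified illustration, not the published DL-AE): the network learns to reconstruct RNFL thickness maps, and regions that reconstruct poorly or that change between visits can flag eye-specific areas of interest.

```python
# Auto-encoder sketch: compress and reconstruct RNFL thickness maps; the
# per-eye reconstruction error is one simple signal for flagging change.
import torch
from torch import nn

class ThicknessMapAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(x))

ae = ThicknessMapAE()
maps = torch.randn(4, 1, 64, 64)                       # hypothetical thickness maps
recon_error = (ae(maps) - maps).pow(2).mean(dim=(1, 2, 3))
print(recon_error.shape)                               # one reconstruction error per eye
```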

CONCLUSION

It is clear that a DL approach can greatly improve, simplify, and expedite glaucoma detection, but the limitations of this approach pose challenges for future incorporation into clinical practice. First, large amounts of data are needed to optimize DL models, and obtaining and grading such quantities of data may be challenging and time-consuming. Second, the generalizability of DL models in glaucoma detection has yet to be determined. Lastly, coexisting ocular pathologies may affect a model's ability to detect the disease of interest. The field of artificial intelligence offers great potential benefits in medical practice, but further research is needed to prove its clinical utility.

Acknowledgements

None.

Financial support and sponsorship

Fidelity Charitable Fund; NIH R01 EB033321; NIH R44EY034409; Alcon Laboratories, Inc.

Conflicts of interest

There are no conflicts of interest.

REFERENCES AND RECOMMENDED READING

Papers of particular interest, published within the annual period of review, have been highlighted as:

▪ of special interest

▪▪ of outstanding interest

REFERENCES

1. Tham YC, Li X, Wong TY, et al. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology 2014; 121:2081–2090.
2. Asaoka R, Murata H, Hirasawa K, et al. Using deep learning and transfer learning to accurately diagnose early-onset glaucoma from macular optical coherence tomography images. Am J Ophthalmol 2019; 198:136–145.
3. Asaoka R, Murata H, Iwase A, Araie M. Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier. Ophthalmology 2016; 123:1974–1980.
4. Li Z, He Y, Keel S, et al. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 2018; 125:1199–1206.
5. Shibata N, Tanito M, Mitsuhashi K, et al. Development of a deep residual learning algorithm to screen for glaucoma from fundus photography. Sci Rep 2018; 8:14665.
6. US Food and Drug Administration (FDA). FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems; 2020. Available at: https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye.
7. Digital Diagnostics. Available at: https://www.digitaldiagnostics.com/about/digital-diagnostics/.
8. Biswas S, Lin C, Leung CKS. Evaluation of a myopic normative database for analysis of retinal nerve fiber layer thickness. JAMA Ophthalmol 2016; 134:1032–1039.
9. Maharana PK, Rai VG, Pattebahadur R, et al. Awareness and knowledge of glaucoma in central India: a hospital-based study. Asia Pac J Ophthalmol (Phila) 2017; 6:243–249.
10. Ford BK, Angell B, Liew G, et al. Improving patient access and reducing costs for glaucoma with integrated hospital and community care: a case study from Australia. Int J Integr Care 2019; 19:5.
11. Shinde PP, Shah S. A review of machine learning and deep learning applications. 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA) 2018. 1–6.
12. Maetschke S, Antony B, Ishikawa H, et al. A feature agnostic approach for glaucoma detection in OCT volumes. PLoS One 2019; 14:e0219126.
13. Vermeulen AF. Unsupervised learning: deep learning. Industrial machine learning 2020; Berkeley, CA, USA: Apress, 225–241.
14. Goldbaum MH, Sample PA, White H, et al. Interpretation of automated perimetry for glaucoma by neural network. Invest Ophthalmol Vis Sci 1994; 35:3362–3373.
15. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM 2017; 60:84–90.
16. Chollet F. Deep learning with Python. Shelter Island, NY: Manning Publications Co.; 2018.
17. An G, Omodaka K, Hashimoto K, et al. Glaucoma diagnosis with machine learning based on optical coherence tomography and color fundus images. J Healthc Eng 2019; 2019:4061313.
18. Phan S, Satoh S, Yoda Y, et al. Evaluation of deep convolutional neural networks for glaucoma detection. Jpn J Ophthalmol 2019; 63:276–283.
19. Diaz-Pinto A, Morales S, Naranjo V, et al. CNNs for automatic glaucoma assessment using fundus images: an extensive validation. Biomed Eng Online 2019; 18:29.
20. Li F, Yan L, Wang Y, et al. Deep learning-based automated detection of glaucomatous optic neuropathy on color fundus photographs. Graefes Arch Clin Exp Ophthalmol 2020; 258:851–867.
21. Thompson AC, Jammal AA, Berchuck SI, et al. Assessment of a segmentation-free deep learning algorithm for diagnosing glaucoma from optical coherence tomography scans. JAMA Ophthalmol 2020; 138:333–339.
22. Hogarty DT, Mackey DA, Hewitt AW. Current state and future prospects of artificial intelligence in ophthalmology: a review. Clin Exp Ophthalmol 2019; 47:128–139.
23. Hosoda Y, Miyake M, Yamashiro K, et al. Deep phenotype unsupervised machine learning revealed the significance of pachychoroid features in etiology and visual prognosis of age-related macular degeneration. Sci Rep-UK 2020; 10:18423.
24. Wang M, Tichelaar J, Pasquale LR, et al. Characterization of central visual field loss in end-stage glaucoma by unsupervised artificial intelligence. JAMA Ophthalmol 2020; 138:190–198.
25. Yousefi S, Balasubramanian M, Goldbaum MH, et al. Unsupervised Gaussian mixture-model with expectation maximization for detecting glaucomatous progression in standard automated perimetry visual fields. Transl Vis Sci Technol 2016; 5:2.
26. Diaz-Pinto A, Colomer A, Naranjo V, et al. Retinal image synthesis and semi-supervised learning for glaucoma assessment. IEEE Trans Med Imaging 2019; 38:2211–2218.
27. Wang X, Tang F, Chen H, et al. UD-MIL: uncertainty-driven deep multiple instance learning for OCT image classification. IEEE J Biomed Health Informatics 2020; 24:3431–3442.
28. Zhao R, Chen X, Liu X, et al. Direct cup-to-disc ratio estimation for glaucoma screening via semi-supervised learning. IEEE J Biomed Health Informatics 2020; 24:1104–1113.
29. Pazos M, Dyrda AA, Biarnés M, et al. Diagnostic accuracy of spectralis SD OCT automated macular layers segmentation to discriminate normal from early glaucomatous eyes. Ophthalmology 2017; 124:1218–1228.
30. Fan KC, Tsikata E, Khoueir Z, et al. Enhanced diagnostic capability for glaucoma of 3-dimensional versus 2-dimensional neuroretinal rim parameters using spectral domain optical coherence tomography. J Glaucoma 2017; 26:450–458.
31. Chen TC, Hoguet A, Junk AK, et al. Spectral-domain OCT: helping the clinician diagnose glaucoma: a report by the American Academy of Ophthalmology. Ophthalmology 2018; 125:1817–1827.
32▪. Mariottoni EB, Jammal AA, Urata CN, et al. Quantification of retinal nerve fibre layer thickness on optical coherence tomography with a deep learning segmentation-free approach. Sci Rep 2020; 10:402.
33. Mansberger SL, Menda SA, Fortune BA, et al. Automated segmentation errors when using optical coherence tomography to measure retinal nerve fiber layer thickness in glaucoma. Am J Ophthalmol 2017; 174:1–8.
34. Miki A, Kumoi M, Usui S, et al. Prevalence and associated factors of segmentation errors in the peripapillary retinal nerve fiber layer and macular ganglion cell complex in spectral-domain optical coherence tomography images. J Glaucoma 2017; 26:995–1000.
35▪. Thompson AC, Jammal AA, Berchuck SI, et al. Assessment of a segmentation-free deep learning algorithm for diagnosing glaucoma from optical coherence tomography scans. JAMA Ophthalmol 2020; 138:333–339.
36. Ran AR, Cheung CY, Wang X, et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. Lancet Digit Health 2019; 1:e172–182.
37▪▪. Garcia G, Colomer A, Naranjo V. Glaucoma detection from raw SD-OCT volumes: a novel approach focused on spatial dependencies. Comput Methods Programs Biomed 2021; 200:105855.
38▪▪. Akter N, Fletcher J, Perry S, et al. Glaucoma diagnosis using multifeature analysis and a deep learning technique. Sci Rep 2022; 12:8064.
39▪. Russakoff DB, Mannil SS, Oakley JD, et al. A 3D deep learning system for detecting referable glaucoma using full OCT macular cube scans. Transl Vis Sci Technol 2020; 9:12.
40. Hood DC, De Cuir N, Blumberg DM, et al. A single wide-field OCT protocol can provide compelling information for the diagnosis of early glaucoma. Transl Vis Sci Technol 2016; 5:4.
41. Lee WJ, Na KI, Kim YK, et al. Diagnostic ability of wide-field retinal nerve fiber layer maps using swept-source optical coherence tomography for detection of preperimetric and early perimetric glaucoma. J Glaucoma 2017; 26:577–585.
42. Muhammad H, Fuchs TJ, De Cuir N, et al. Hybrid deep learning on single wide-field optical coherence tomography scans accurately classifies glaucoma suspects. J Glaucoma 2017; 26:1086–1094.
43▪▪. Shin Y, Cho H, Jeong HC, et al. Deep learning-based diagnosis of glaucoma using wide-field optical coherence tomography images. J Glaucoma 2021; 30:803–812.
44▪. Lee J, Kim YK, Park KH, Jeoung JW. Diagnosing glaucoma with spectral-domain optical coherence tomography using deep learning classifier. J Glaucoma 2020; 29:287–294.
45▪▪. Thakoor KA, Koorathota SC, Hood DC, Sajda P. Robust and interpretable convolutional neural networks to detect glaucoma in optical coherence tomography images. IEEE Trans Biomed Eng 2021; 68:2456–2466.
46▪. Seo S, Cho H. Deep learning classification of early normal-tension glaucoma and glaucoma suspects using Bruch's membrane opening-minimum rim width and RNFL. Sci Rep 2020; 10:19042.
47. Christopher M, Belghith A, Weinreb RN, et al. Retinal nerve fiber layer features identified by unsupervised machine learning on optical coherence tomography scans predict glaucoma progression. Investig Ophthalmol Vis Sci 2018; 59:2748–2756.
48▪▪. Bowd C, Belghith A, Christopher M, et al. Individualized glaucoma change detection using deep learning auto encoder-based regions of interest. Transl Vis Sci Technol 2021; 10:19.
Keywords:

artificial intelligence; deep learning; glaucoma; optical coherence tomography

Copyright © 2022 The Author(s). Published by Wolters Kluwer Health, Inc.