
Review Articles

Applications of Artificial Intelligence and Deep Learning in Glaucoma

Chen, Dinah MD*,†; Ran, Emma Anran PhD; Tan, Ting Fang§,∥; Ramachandran, Rithambara MD; Li, Fei MD, PhD#,**; Cheung, Carol PhD; Yousefi, Siamak PhD††; Tham, Clement C.Y. FCOphth (HK), FRCOphth‡‡,§§; Ting, Daniel S.W. MD, PhD§,∥,∥∥; Zhang, Xiulan MD, PhD#,**; Al-Aswad, Lama A. MD, MPH¶¶

Asia-Pacific Journal of Ophthalmology 12(1):p 80-93, January/February 2023. | DOI: 10.1097/APO.0000000000000596



Although the use of artificial intelligence (AI) algorithms as clinical decision support tools is still in its infancy, the last 10 years have seen exponential growth, particularly across specialties such as radiology, ophthalmology, and cardiology.1 In 2018, IDx-DR (now Digital Diagnostics) received US Food and Drug Administration approval as the first AI-based device in medicine for autonomous diagnosis of diabetic retinopathy (DR).2 EyeArt has since been approved for the same indication.3 To date, DR remains the most robust example of AI tools in ophthalmology. Although research in other ophthalmic subspecialties is extensive, there are unique challenges to developing algorithms for glaucoma.

Glaucoma is a chronic, progressive optic neuropathy leading to loss of retinal nerve fibers; although asymptomatic in its early stages, it can cause severe, permanent vision loss over time. Early detection is therefore essential. Glaucoma is the leading cause of irreversible blindness worldwide, and the number of people affected is estimated to reach ~111.8 million by 2040.4,5 There is an urgent unmet need to improve screening and diagnosis; half of all patients with glaucoma in the United States remain undiagnosed.6 AI, and deep learning (DL) in particular, not only represents a potential solution to this growing need through screening at scale but also offers an opportunity to personalize treatment and prognosis. Although AI has great potential to impact glaucoma screening, diagnosis, and prediction, no algorithms yet reliably fulfill these needs. Several challenges specific to model design for glaucoma use cases remain barriers to the development of clinically deployable algorithms.

Unlike DR, for which a diagnostic consensus exists and teleophthalmologic diagnosis using color fundus photography (CFP) is well validated and accepted,7,8 glaucoma remains a screening and diagnostic challenge for AI. Currently, clinical diagnosis of glaucoma relies not only on objective anatomic changes assessed via multimodal imaging but also on subjective measures of visual function. Before the advent of optical coherence tomography (OCT), glaucoma diagnosis was based on a combination of clinical findings (observation of the neuroretinal rim and optic cup size), intraocular pressure (IOP) measurement, gonioscopy, central corneal thickness, and functional perimetry testing. With the introduction of OCT, structural changes associated with glaucoma and its progression could be visualized and quantified through optic nerve head (ONH) scans and measurements of retinal nerve fiber layer (RNFL) and ganglion cell layer thickness. Although OCT is now commonly used in diagnosis and monitoring, glaucoma evaluation remains subjective, with high interprovider variability. There are no universally accepted diagnostic or progression criteria, and treatment paradigms can vary according to cultural, regional, and individual provider preferences.9,10 Beyond this, the etiology of glaucoma remains poorly understood, and to date, IOP remains the only modifiable risk factor for the prevention and treatment of disease. The development of clinically usable AI for glaucoma will require consensus and standardization of diagnostic criteria.

At the moment, imaging-based AI research in glaucoma has focused primarily on image segmentation, screening, and diagnosis from individual imaging modalities and multimodal imaging. More recently, emerging research has also focused on the use of AI for disease progression prediction through the automated identification of risk factors. In this review, we provide an overview of the current state of AI research in glaucoma before turning to topics important to a broader discussion of the use of this technology in clinical practice, including data access, standardization, personalization, equity, and transparency. Finally, we discuss possible future directions to address the challenges in furthering the development of AI-based clinical tools in glaucoma.


This was a nonsystematic literature review of PubMed using the search terms “Artificial Intelligence,” “Machine Learning,” and “Glaucoma,” in combination with the imaging modality terms “Fundus Photo,” “OCT,” “Visual Field,” “Goniophotographs,” and/or “Multimodal.”

Fundus Photo–Based Algorithms

Loss of ganglion cell axons at the ONH is a hallmark of glaucomatous optic neuropathy (GON). Anatomic changes of the ONH, namely thinning of the neuroretinal rim, result in an excavation of the optic disc, or optic disc cupping. In clinical settings, fundus photography is one of the most frequently used modalities for evaluation of such changes. Fundus photographs are easy to obtain and relatively inexpensive, and with advancing technology, portable cameras can be used for nonmydriatic photography. Thus, fundus photographs offer several advantages for low-cost, high-impact glaucoma screening.

To accurately evaluate optic nerve changes, recognition of key features such as the borders of the optic disc and the bright central elliptical optic cup is crucial. Accurate segmentation of nerve head borders allows the calculation of disc area, cup volume, and horizontal and vertical cup-to-disc ratios (CDRs), all clinically meaningful markers for the detection and monitoring of glaucomatous disease. Unfortunately, studies have shown high intergrader variability in the subjective interpretation of CDR measurements from fundus photography. For example, Almazroa and colleagues investigated the agreement between 6 ophthalmologists in marking the optic disc, optic cup, and horizontal and vertical CDRs. Agreement for horizontal and vertical CDRs was around 45%; that is, graders agreed on less than half the images in the data set.11 Even fellowship-trained glaucoma specialists tend to underestimate or overestimate signs of glaucomatous damage when assessing photographs, an effect exaggerated in eyes with large physiological cups or small optic discs.12,13 As such, automatic feature detection through DL models is an attractive pursuit.14,15
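As a concrete illustration of the quantitative markers described above, the vertical CDR can be computed directly from disc and cup segmentation masks. The following is a minimal, hypothetical sketch (the mask convention and function name are our own assumptions, not taken from any cited model):

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks.

    Each mask is a 2D array in which nonzero pixels belong to the region;
    the vertical extent is the span of rows containing any region pixel.
    """
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    if disc_rows.size == 0:
        raise ValueError("empty disc mask")
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    cup_height = 0 if cup_rows.size == 0 else cup_rows[-1] - cup_rows[0] + 1
    return cup_height / disc_height

# Toy masks: a disc spanning 10 rows enclosing a cup spanning 4 rows.
disc = np.zeros((20, 20), dtype=np.uint8)
cup = np.zeros((20, 20), dtype=np.uint8)
disc[5:15, 5:15] = 1
cup[8:12, 8:12] = 1
print(vertical_cdr(disc, cup))  # 0.4
```

In practice, a DL segmentation model would supply the masks; the ratio computation itself is this simple, which is why segmentation accuracy dominates CDR reliability.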

The first implementation of DL architecture in ONH analysis was proposed by Lim et al16 in 2015. The authors developed a convolutional neural network (CNN) algorithm for calculating CDR. Since then, algorithms have become increasingly sophisticated and clinically targeted toward the identification of referable glaucoma.14,17–23 For example, Bhuiyan and colleagues developed one of the first proprietary semiautomated software tools to quantify vertical CDR. CDRs above 0.5 were considered “suspect.” Their DL architecture was then trained and tested on a data set of 1546 images, achieving an accuracy of 89.67% [sensitivity, 83.33%; specificity, 93.89%; and area under the curve (AUC), 0.93]. The model achieved a similar accuracy of 83.54% [sensitivity, 80.11%; specificity, 84.96%; and area under the receiver operating characteristic curve (AUROC), 0.85] when externally validated on a test set of 638 fundus images.19
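The performance figures quoted throughout this review (sensitivity, specificity, AUROC) are standard classification metrics; for clarity, a minimal NumPy sketch of how they are computed from model outputs (the toy labels and scores are illustrative only):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fn = np.sum(y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a random
    positive case scores higher than a random negative case."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# 3 glaucomatous and 2 healthy eyes with model scores, threshold 0.5.
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.1]
sens, spec = sensitivity_specificity(labels, [s >= 0.5 for s in scores])
print(round(sens, 2), round(spec, 2), round(auroc(labels, scores), 2))
```

Note that sensitivity and specificity depend on the chosen operating threshold, whereas AUROC summarizes performance across all thresholds, which is why both are usually reported.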

When tested against expert glaucoma specialists, DL models have shown equal, if not better, accuracy in differentiating normal from glaucomatous eyes. In a study by Al-Aswad and colleagues, 6 ophthalmologists and the DL system Pegasus graded 110 CFPs randomly sampled from the Singapore Malay Eye Study for the presence of GON. Pegasus achieved an AUROC of 92.6%, compared with ophthalmologist AUROCs ranging from 69.6% to 84.9%. The agreement between Pegasus and the gold standard label was 0.715, whereas the highest ophthalmologist agreement with the gold standard was only 0.613.24 Ahn and colleagues further demonstrated that DL techniques can distinguish normal controls from both early and severe glaucoma patients based on fundus photos alone. On a data set of 1542 images, they achieved an overall accuracy and AUROC of 92.2% and 0.98 on the training data, 88.6% and 0.95 on the validation data, and 87.9% and 0.94 on the test data.25 Similarly, Liu et al26 found that their DL models showed a sensitivity and specificity of >90% when tested on a local validation data set, on 3 clinical data sets, and on a real-world distribution data set, speaking to their generalizability. Notably, DL networks assign weights not only to the optic disc area but also to the adjacent peripapillary area and, whenever visible, the anterior surface of the lamina cribrosa. In a 2021 study by Hemelings et al,27 DL models were able to identify glaucomatous nerves both when trained on fundus photographs with the ONH present (AUROC 0.94) and when trained on images with the ONH cropped out (AUROC 0.88).

To better understand the association of ONH features with referable GON, Phene and colleagues plotted the distributions of ONH features in the refer and no-refer categories. Vertical CDR distributions differed significantly between the 2 categories, and referral rates were significantly higher when RNFL defects, disc hemorrhages, laminar dot signs, or β-zone parapapillary atrophy were present.21 Understanding which optic disc features are salient predictors of glaucoma facilitates tasks beyond binary classification.

DL algorithms are also being developed to quantify features of the ONH, including CDR measures28,29 and associated RNFL thicknesses.12,30 The ability to predict quantifiable markers can help establish sensitivity and specificity cutoffs for screening studies. In addition, quantitative measurements allow for longitudinal tracking and monitoring of progression. Li et al22 developed DL algorithms to predict future glaucoma incidence and progression from the fundus photographs of 17,497 eyes of 9346 patients. The images in this study were captured with various devices, including smartphone cameras. In the external test sets, the models achieved excellent predictive performance in identifying individuals at high risk of developing glaucoma or having glaucoma progression.22

Widefield Fundus Photo Algorithms

Few studies have been published regarding AI algorithms for glaucoma detection from widefield fundus photos. Shin et al31 compared CNN-based algorithms using ultrawide-field (UWF) fundus images versus true-color confocal scans for the diagnosis of glaucoma. In this study, the DL algorithm based on UWF imaging achieved a higher accuracy and AUC (ACC 83.62%, AUC 0.904 vs. ACC 81.46%, AUC 0.868). Li et al32 compared their DL system for glaucoma detection utilizing UWF imaging to ophthalmologist graders and achieved AUCs of 0.983 to 0.999. This study conducted external validation testing on data sets with images from patients of diverse ethnic backgrounds. The performance of both these algorithms utilizing UWF imaging was similar to that of algorithms utilizing traditional fundus photos. As UWF photography becomes more commonplace, these models may become increasingly clinically relevant.

Fundus Photo–Based Glaucoma Prediction

Most applications of AI models have centered on glaucoma detection for screening and diagnosis; however, forecasting glaucoma could play an important role in identifying patients at risk of future disease development and vision loss. Thakur and colleagues developed a DL model based on over 60,000 fundus photographs to forecast glaucoma before disease development. They achieved AUCs up to ~0.88 for forecasting glaucoma 1–3 years before onset, and their model achieved an AUC of ~0.95 for glaucoma diagnosis.33 In general, utilizing fundus photos for disease progression prediction remains an underexplored area of research.

Figure 1 provides an overview of the contribution of modality-specific AI algorithms to the larger body of AI-based glaucoma literature. Currently, fundus photo–based AI algorithms are the most well represented, followed by OCT- and visual field (VF)–based algorithms. Figure 2 depicts the percentage of studies addressing problems in glaucoma screening/diagnosis, prognosis, treatment prediction, and clinical trial design by imaging modality. Studies that did not fall into these categories included algorithms designed for VF pattern analysis, comparison of modalities (swept-source vs. spectral-domain OCT), intergrader variability, image segmentation, and quality of life. Among fundus photo–based algorithms, the majority were for screening and/or diagnosis; a few were designed for glaucoma prognosis, and none tackled treatment prediction or clinical trial design.

Contribution in the literature of all glaucoma imaging-based artificial intelligence (AI) algorithms. The majority of published studies were fundus photo–based AI algorithms. Both anterior segment OCT and goniophotograph-based AI algorithms were the least represented imaging modalities in the literature. AS-OCT indicates anterior segment OCT; OCT, optical coherence tomography.
Number of published articles for glaucoma screening and/or diagnosis and prognosis (or progression prediction) by imaging modality (VF, CFP, MM, and AS-OCT). There were no articles on glaucoma treatment response prediction and none concerning clinical trial design. Articles in the “other” category did not fit into these groups and cover topics including VF pattern analysis (not for diagnosis), comparison of different OCT modalities (swept-source vs. spectral-domain), intergrader variability, image segmentation, and quality of life. AS-OCT indicates anterior segment OCT; CFP, color fundus photo; MM, multimodal imaging; OCT, optical coherence tomography; VF, visual field.

Fundus Photo Algorithms: Challenges

The current literature in glaucoma detection from fundus photos indicates a high level of performance; recent meta-analyses of machine learning (ML)–based glaucoma detection utilizing fundus images, OCT, or both showed that fundus photo–based ML systems performed with AUCs of >0.90.34,35 However, many limitations remain. First, as discussed previously, segmentation and identification of areas of neuroretinal rim loss can be both subjective and challenging, and the application of DL to fundus photographs is limited by its reliance on human-generated ground truth labels. Normal anatomic variability, as well as pathologic conditions such as high myopia, can affect ONH appearance. In fact, Liu et al26 found that the most common reason for both false-negative and false-positive grading by their DL model was pathologic or high myopia [51 of 119 (46.3%) and 191 of 588 (32.3%), respectively]. Especially in such instances, image resolution becomes a key limiting factor to analysis and can affect preprocessing steps such as image channel selection, illumination normalization, contrast enhancement, and the extraction of blood vessels.36 High-quality images can be difficult to obtain from eyes with media opacities, movement artifacts, or anterior segment pathology. Another fundamental challenge is that clinical glaucoma evaluation generally requires integrated analysis of multiple modalities (eg, clinical examination, ONH imaging, and VF testing), not fundus photo findings alone, to determine glaucoma subtype and progression.37,38 Groups have attempted to address this by developing algorithms utilizing multimodal imaging, discussed later in this paper.

OCT-Based Algorithms

OCT, a noncontact and noninvasive imaging technology providing cross-sectional and 3-dimensional (3D) views of the retina and ONH, is now commonly used to evaluate the structural changes associated with glaucoma.39,40 The conventional OCT report provides thickness and deviation maps quantifying different layers, such as the RNFL and ganglion cell layer–inner plexiform layer (GCIPL), which are sensitive and specific for detecting glaucoma, especially when combined with other ophthalmoscopic modalities.39,41 In addition to structural imaging, OCT has been explored for “dynamic” and en face imaging to map the retinal capillary network and choriocapillaris without exogenous intravenous dye injection, namely OCT angiography (OCTA). OCTA uses sequential OCT scans to map red blood cell movement over time at a given cross-section. Recently, quantitative OCTA metrics and features have been defined and explored for assessing glaucoma. For example, vessel density loss associated with glaucoma can be detected by OCTA, and peripapillary, macular, and choroidal vessel density parameters may complement VF and structural OCT measurements in the diagnosis of glaucoma.42 To date, the majority of glaucoma studies utilizing OCT are for screening and/or diagnosis purposes (Fig. 2), with excellent performance. A meta-analysis of OCT-based algorithms demonstrated a pooled sensitivity and specificity of 90% and 95%, respectively, and an AUC of 0.96 for glaucoma detection.34

DL-based automated image analysis has been developed to detect GON from different types of OCT and OCTA images, including the conventional OCT report, 2-dimensional (2D) B-scans, 3D volumetric scans, and OCTA en face images. Studies have shown that DL models trained with images extracted from the single OCT report achieve high accuracy in glaucoma detection.43–46 For instance, Muhammad et al43 developed a hybrid DL method that draws on information from a single, widefield (9×12 mm) swept-source OCT scan per patient. DL networks were used to extract features from different input images (eg, RNFL thickness and deviation maps, GCIPL thickness and deviation maps), and a conventional ML method (ie, a random forest classifier) was trained on these features to detect GON.

The hybrid DL method achieved accuracies of 63.7%–93.1% with different inputs, outperforming conventional OCT reports and VF clinical metrics. Beyond this, Thakoor et al44 investigated an interpretable, end-to-end DL model trained with the conventional OCT report, which demonstrated that RNFL and GCIPL deviation maps, as well as GCIPL thickness maps, were the components of the full OCT report most critical for glaucoma detection. Hood et al45 also commented that it is possible to use OCT as a single tool for glaucoma detection and that AI models further enhance the feasibility of detecting glaucoma with images extracted from a single OCT report.
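A hybrid pipeline of the kind described above (DL feature extraction followed by a conventional ML classifier) can be sketched as follows; the synthetic 16-dimensional "features" stand in for CNN-derived map features and are purely illustrative, not drawn from any cited model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-ins for DL-extracted features (eg, pooled activations from RNFL and
# GCIPL thickness/deviation maps); here, synthetic 16-D vectors per eye.
healthy = rng.normal(loc=0.0, scale=1.0, size=(40, 16))
glaucoma = rng.normal(loc=3.0, scale=1.0, size=(40, 16))
X = np.vstack([healthy, glaucoma])
y = np.array([0] * 40 + [1] * 40)

# Conventional ML stage: a random forest trained on the extracted features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on this well-separated toy set
```

The design rationale is that the CNN handles the high-dimensional image inputs, while the random forest provides a compact, comparatively interpretable decision stage over the extracted features.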

OCT raw images may also be used for AI model training, which could potentially reduce the effects of segmentation error inherent to built-in segmentation software. Thompson et al47 developed a segmentation-free DL algorithm for GON assessment using the entire circle 2D B scans. The DL algorithm performed better than conventional RNFL thickness parameters for discriminating GON on OCT scans, especially in early stages of the disease such as preperimetric and mild perimetric glaucoma.

OCT volumetric scans, both ONH-centered and macula-centered, can potentially provide more comprehensive features, such as changes in the RNFL, GCIPL, Bruch membrane opening, neuroretinal rim area, lamina cribrosa, and choroid. Thus, several studies have investigated the potential of 3D DL models in glaucoma detection.48–52 Maetschke et al48 developed a 3D DL model using ONH-centered volumetric OCT scans and compared its performance with that of 8 classic ML classifiers. The DL-based approach achieved a peak test AUROC of 0.94, substantially higher (P<0.05) than the best classic ML method trained on segmentation-based features (AUROC of 0.89). Ran et al49 developed a 3D DL model with 4877 ONH-centered volumetric scans and further tested it on 3 external validation data sets from different eye centers with 546, 267, and 1231 scans, respectively. The model performed consistently well, with AUROCs of 0.969 in internal validation and 0.893–0.897 in external testing, verifying its potential generalizability to unseen data sets.49 Moreover, this 3D DL model was noninferior to 2 glaucoma specialists and significantly outperformed a 2D DL model trained with corresponding en face fundus images.

Currently, the application of AI-based OCTA image analysis has not yet been deeply explored but holds great potential to enhance glaucoma detection. Bowd et al53 trained and tested a DL model on entire en face 4.5×4.5-mm radial peripapillary capillary OCTA ONH vessel density images. The model was compared with gradient boosting classifier analysis of the built-in software in the OCTA device. The results showed that DL-based en face image analysis can improve on feature-based gradient boosting classifier analysis for glaucoma classification.

Beyond binary classification tasks, DL models for OCT image segmentation54 and reconstruction55 represent possible tools for helping clinicians further understand the structural phenotypes and morphologies of the ONH. Conventional classifiers56 and an unsupervised ML model57 have been used to identify glaucoma progression from ONH-centered volumetric OCT scans and RNFL measurements, respectively. More investigation of AI-based glaucoma progression prediction across different types of OCT images is still warranted. To the best of our knowledge, there are no algorithms for treatment prediction or clinical trial design utilizing OCT images (Fig. 2).

OCT Algorithms: Challenges

Although AI shows tremendous promise in glaucoma detection, progression prediction, and layer segmentation from OCT images, several aspects need further investigation and improvement. First, just as for fundus images, some anatomic morphologies, such as myopic optic disc morphologies, can influence the results, especially RNFL thickness measurement for glaucoma detection from OCT. Groups have attempted to address this particular challenge. Russakoff et al51 and Noury et al52 developed 3D DL models based on macula-centered and ONH-centered scans, respectively; the models performed reasonably well across different myopic severity distributions. Ran et al58 found that a multitask 3D DL model was potentially helpful for detecting GON and myopic optic disc morphologies simultaneously. Other OCT-based parameters, such as RNFL optical texture analysis, may also play a significant role in distinguishing GON from nonglaucomatous optic neuropathies.59 Second, there is a lack of interchangeability across different OCT devices. DL-based layer segmentation using images from various devices can potentially improve interdevice applicability.54 Third, an OCT device itself may be less feasible in lower-resourced regions; thus, a machine-to-machine approach could help predict OCT-based measurements, such as RNFL thickness and neuroretinal rim loss, from fundus photographs.30,60 Fourth, integrating DL models with image quality filters,61,62 image quality enhancement,63 or functional change prediction50,64 is also essential to establish OCT as a single screening or diagnostic tool for glaucoma. Fifth, multimodal AI models combining OCT and OCTA, including both optic disc and macula scans, could provide more comprehensive information for glaucoma detection, especially at an earlier stage.65

VF-Based Algorithms

As the predominant test of functional visual changes, standard automated perimetry (SAP) remains crucial in the diagnosis of glaucoma and monitoring of disease progression. SAP assesses light sensitivities throughout the VF. Although there are many manufacturers, the most commonly used perimeter is the Humphrey Field Analyzer (Carl Zeiss Meditec), and the most commonly performed tests assess the central 30, 24, or 10 degrees of vision. The results are displayed in reports produced by each device and include numerical parameters such as mean deviation (MD), pattern standard deviation (PSD), and VF index, as well as spatial representations of deficits derived from individual threshold points in the form of total deviation (TD) and pattern deviation (PD) probability plots. Probability plots may reveal VF defects that form patterns characteristic of glaucomatous loss, such as a nasal step or arcuate defects. Test reports also include reliability indices in the form of fixation losses, false positives, and false negatives. Beyond this, devices can also produce progression analyses using device-specific proprietary software. All these report parameters are frequently used to evaluate functional changes in clinical practice.

Although SAP is essential in determining the functional changes that help define glaucoma, VF testing is subjective, complicated both by patient factors, which may affect test quality, and by inconsistent interpretation by clinicians and human graders. VF variability is also known to increase with disease severity.66 AI-based interpretation of VF data represents an opportunity to improve the reproducibility of evaluation; however, it remains hampered by many of the same challenges that clinicians face.

Unlike the other modalities reviewed, VF reports are not technically images. There are generally 2 approaches to the use of VF testing in AI development: input data may take the form of probability plots with spatial representations of defects, treated as images, or of raw numerical data (threshold sensitivities, MD, PSD, and/or VF index). Because SAP is not an imaging modality, its data do not require the pixel-level segmentation seen with CFP- or OCT-based AI. Generally, VF-based algorithms are focused on binary classification tasks, both for screening and diagnosis of glaucoma, and on predictive tasks for disease progression (Fig. 2). There are no published algorithms designed for treatment response prediction or clinical trial design.

VF: Glaucoma Classification

VF-based DL algorithms are able to detect glaucomatous fields with a high level of accuracy, often equaling67 or outperforming ophthalmologist graders.68,69 Huang et al67 developed an algorithm for detection of glaucoma on VFs using a combination of publicly available and real-world data, compared against a reference standard of human ophthalmologist grading. Accuracy was 0.85 and 0.90 for the Humphrey and Octopus data, respectively, with performance equal to clinicians.67 Similarly, Kucur et al70 demonstrated detection of early glaucoma on VFs using AI. Both groups used training data from multiple devices, the Octopus and Humphrey perimeters, and similar techniques: numerical TD values from VF reports were converted to images using a preprocessing technique known as Voronoi parcellation. Kucur et al70 found that Voronoi images outperformed other measures, including MD.
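The conversion of scattered numerical TD values into an image can be illustrated with a simple nearest-neighbour rasterization, the idea underlying Voronoi parcellation; the test-point coordinates and TD values below are hypothetical, and real implementations use the full 24-2 test grid:

```python
import numpy as np

def voronoi_image(points, values, size=64):
    """Rasterize scattered VF test points into an image: each pixel takes
    the value of its nearest test location (nearest-neighbour parcellation)."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    ys, xs = np.mgrid[0:size, 0:size]
    pixels = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (x, y) per pixel
    # squared distance from every pixel to every test point
    d2 = ((pixels[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return values[d2.argmin(axis=1)].reshape(size, size)

# Four hypothetical test locations with total-deviation values (dB);
# the (48, 48) location carries a deep localized defect.
pts = [(16, 16), (48, 16), (16, 48), (48, 48)]
td = [-1.0, -2.5, -0.5, -12.0]
img = voronoi_image(pts, td)
print(img.shape, img[60, 60])  # pixel nearest (48, 48) takes its TD value
```

The resulting dense 2D array preserves the spatial layout of the defects, making it a natural input for CNNs that were designed for images.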

Among the most robust algorithms using VFs for glaucoma diagnosis is one developed by Li et al.69 iGlaucoma is a smartphone cloud-based DL algorithm for the detection of glaucoma from VF data that, unlike most others, has undergone real-world prospective external validation testing.69 The algorithm was developed with over 10,000 VFs, validated against 649 VFs, and compared with a reference standard of expert glaucoma specialists and ophthalmologists. External validation testing yielded a sensitivity and specificity of 95.4% and 87.3%, respectively, outperforming ophthalmologist readers.69 This algorithm was trained with multiple parameters from the VF report (PD probability plots as images, numerical displays, and numerical PD values) and classifies glaucomatous from nonglaucomatous VFs using PD plot images alone.

Other groups have also focused on using AI to identify patterns of spatial loss on VFs that may be associated with future disease progression. Wen et al71 developed an algorithm to generate predictions of future 24-2 Humphrey VFs using TD datapoints extracted from real-world data sets. Using individual TD data rather than global parameters such as MD or PSD allows analysis of the spatial features of VF loss in glaucoma. Brusini and colleagues developed a model to identify local patterns of glaucomatous VF loss and to classify and quantify severity levels based on subjective assessments.72,73 Because subjective, manual assessment of VFs is labor-intensive and prone to error and variability, numerous automated models based on conventional AI have been proposed to identify and classify patterns of VF defect using unsupervised Gaussian mixture modeling, archetypal analysis, or deep archetypal analysis.74–83 A recent study provided quantification of VF severity stages based on supervised and unsupervised conventional ML.84

A large segment of the models to quantify VFs are based on unsupervised ML. Wang and colleagues developed an algorithm using unsupervised learning (unannotated training data) for both quantification and classification of central VF loss in glaucoma. Using longitudinal 10-2 VF data collected in the United States, this group developed an algorithm to discern central VF patterns and found that certain subtypes with nasal defects were associated with more severe total central loss in the future.85 Similarly, using archetypal analysis, Yousefi et al86 identified VF loss patterns associated with rapid disease progression.
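Decomposing a set of VFs into a small number of basis patterns, the idea behind archetypal analysis, can be loosely illustrated with nonnegative matrix factorization (NMF is a stand-in here, not the method used in the cited studies, and the superior/inferior deficit patterns are synthetic):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Synthetic deficit vectors (positive dB of loss) over 52 points of a 24-2
# field: each field mixes two underlying patterns (superior vs inferior loss).
superior = np.concatenate([np.full(26, 6.0), np.zeros(26)])
inferior = np.concatenate([np.zeros(26), np.full(26, 6.0)])
weights = rng.uniform(0, 1, size=(100, 2))
X = weights @ np.vstack([superior, inferior]) + rng.uniform(0, 0.1, (100, 52))

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
coeffs = model.fit_transform(X)   # per-field loadings on each basis pattern
patterns = model.components_      # recovered basis patterns
print(patterns.shape)             # (2, 52)
```

Each field is then summarized by its loadings on the recovered patterns, and trajectories of those loadings over time are what progression analyses of this kind track; archetypal analysis additionally constrains the decomposition so the basis patterns are convex combinations of observed fields.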

VF: Glaucoma Progression

Predicting glaucoma progression from VF using AI is also an important area of research. Sample et al87 introduced an unsupervised variational Bayesian model to detect glaucoma progression. Many follow-up ML models are also based on unsupervised ML applied to VFs.77,80,88,89 Recent models have used unsupervised archetypal analysis to detect glaucoma progression90 and deep archetypal analysis to predict those with future rapid glaucoma progression and vision loss.86

VF Algorithms: Challenges

As with the other modalities reviewed here, there are significant challenges to the development of clinically usable AI-based tools for the diagnosis and monitoring of glaucoma with VFs. Diagnosis of glaucoma in its early stages from VF alone remains particularly difficult for clinicians and AI tools alike, as structural changes are thought to precede functional deficits. In these cases, patients may demonstrate risk factors for glaucoma and evidence of optic nerve or RNFL changes suggestive of the disease without demonstrable changes on VF testing. Few studies have attempted to distinguish VFs of patients with preperimetric disease and healthy controls. Asaoka et al91 developed a DL model capable of detecting minute differences in VFs based on TD values that are otherwise indistinguishable by human graders. This is a promising area of AI research that requires further exploration.

Another key challenge for VF-based AI tools revolves around a lack of consensus regarding diagnostic criteria used to inform ground truth data. Some of the studies mentioned above used clinical trial criteria such as that defined in the Advanced Glaucoma Intervention Study92 or the UK Glaucoma Treatment Study68; however, the diagnosis of VF progression has been found to vary according to the criteria used.93 This becomes particularly significant when considering deployment in clinical settings.

Although studies in general demonstrated high levels of algorithmic performance, it is important to note that most groups set quality parameters for the VFs included in training sets, including MD cutoffs and limits on fixation losses and false-positive and false-negative errors. These VF quality-based inclusion criteria were not consistent among studies; in some cases, VFs with fixation losses as low as 2/13 were excluded.68,69 Where these quality thresholds are set affects the number of VFs, and therefore patients, to whom these algorithms can be applied. In a real-world setting, the utility of these tools will depend on how reliably they can detect disease or change across VFs of variable quality.

Across all imaging modalities, device standardization remains a key challenge in the development of algorithms for glaucoma. The most commonly used perimeter for which AI has been developed is the Humphrey Field Analyzer. Algorithms trained with data from 1 device may not perform at the same level when applied to alternative devices. In the case of DR, EyeArt and IDx-DR received approval for use of their algorithms with specific fundus cameras. There are no standalone, device-agnostic, autonomous AI algorithms for clinical use in ophthalmology. Constraints related to device specificity will have implications for the populations that have access to these AI tools. Finally, few VF-based algorithms have been tested in prospective settings. Such testing will be crucial for determining the accuracy and applicability of an algorithm in real-world clinics and is required for regulatory approval.

Multimodal Models

Thus far, we have reviewed single-modality–based AI algorithms. However, there is evidence that combining structural and functional input improves the ability of DL models to diagnose glaucoma.30 Understanding the structure-function correlation is essential in the evaluation of glaucoma patients for diagnosis, monitoring, and prognostication across disease severities.94,95 Common obstacles in interpreting these multimodal investigations include subjectivity and interobserver variability,96 low intertest reproducibility,97 and confounding factors such as aging and other ocular conditions.98

Building on the high accuracies of DL algorithms developed for the individual investigative modalities, there has been expanding focus on utilizing multimodal data to further improve evaluation. Studies have used multimodal structural data to improve the quantification of glaucomatous structural damage from optic disc photographs for segmentation,99 detection,100 and prediction of glaucomatous damage.30,60 Most studies focused on prediction models101–104 that predict VF sensitivities from RNFL thickness on spectral-domain OCT for screening and/or diagnosis (Fig. 2). Shin et al105 also compared different OCT devices and found that the wide-angle swept-source OCT (DRI-OCT-1 Atlantis; Topcon, Tokyo, Japan) outperformed the Cirrus spectral-domain OCT (Carl Zeiss Meditec) in estimating VF sensitivities. Lee et al106 used a DL algorithm to predict MD of SAP from optic disc photographs, which may be more practical in clinical settings where OCT is not available. Another group found that DL predictions of RNFL thickness based only on fundus photographs were able to predict future development of glaucomatous field defects in eyes of glaucoma suspects.107 Yousefi et al108 developed several supervised ML models to detect glaucoma based on VFs and OCT parameters and reported the superiority of OCT over VFs in detecting glaucoma progression.

Others have combined multimodal information, including clinical and demographic parameters and structural and functional data, to aid predictions.109,110 The multimodal prediction model by Sedai et al109 used clinical (age, IOP, and intervisit interval), structural [circumpapillary (cp) RNFL thickness from OCT], and functional (VF sensitivities) data to predict cpRNFL thickness at the subsequent visit. Their model demonstrated consistent performance among glaucoma patients and glaucoma suspects and showed potential to personalize the frequency of follow-up appointments.
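To make concrete what "combining clinical, structural, and functional data" means at the input level, the sketch below assembles one multimodal feature vector per visit. The field names and values are hypothetical and do not reflect the actual pipeline of Sedai et al.

```python
# Illustrative only: one way to concatenate clinical, structural, and
# functional inputs into a single feature vector for a prediction model.
def build_feature_vector(visit):
    clinical = [visit["age"], visit["iop"], visit["intervisit_days"]]
    structural = visit["cp_rnfl_sectors"]      # cpRNFL thickness per sector (microns)
    functional = visit["vf_sensitivities"]     # pointwise VF sensitivities (dB)
    return clinical + structural + functional

visit = {
    "age": 64, "iop": 18, "intervisit_days": 180,
    "cp_rnfl_sectors": [92.0, 105.0, 88.0, 110.0],
    "vf_sensitivities": [28.0, 30.0, 25.0],
}
x = build_feature_vector(visit)
print(len(x))  # 3 clinical + 4 structural + 3 functional = 10 features
```

In practice, the three blocks come from different devices, which is exactly why paired acquisition (discussed below as a data-collection constraint) is required for training such models.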

Multimodal Imaging: Challenges

A key challenge in developing AI algorithms from multimodal assessment is the complex nonlinear mapping of points on SAP to ONH structure. Kihara et al111 used a policy-driven multimodal DL model to predict VF sensitivities from structural OCT and infrared scanning laser ophthalmoscopy images without the need for segmentation, reducing dependence on the accuracy of OCT segmentation for structure-function matching. On the other hand, fluctuations in VF measurements may introduce inaccuracies and limit the correlation of VFs to ONH structure. Asaoka et al112 used a variational autoencoder, a generative DL method, to reconstruct VF sensitivities by filtering out measurement noise, which demonstrated improved structure-function correlation with cpRNFL thickness on OCT.

Furthermore, multimodal assessment in these studies requires paired data across imaging modalities in the training and testing data sets, which limits the availability and feasibility of data collection. As a result, data sets are relatively small, limiting prediction performance. Finally, as with single-modality–based AI algorithms for glaucoma detection, most multimodal studies did not use real-world data, and models were not externally validated, limiting their generalizability to other populations.

Other Imaging Modalities

Although fundus photographs, posterior segment OCT, and SAP are the predominant imaging modalities used in AI-based glaucoma diagnosis and progression, AI algorithms utilizing goniophotographs and anterior segment OCT have also been developed, primarily to distinguish different subtypes of glaucoma (Fig. 1). Glaucoma can be categorized into 2 main types, open angle glaucoma and angle closure glaucoma, depending on whether the anterior chamber angle (ACA) is open or closed.4,113 As the ACA closes, resistance to aqueous humor outflow increases, raising IOP, a crucial risk factor for GON.

Goniophotograph-Based Algorithms

Presently, gonioscopy remains the gold standard for evaluating the ACA. However, gonioscopy has a steep learning curve, and agreement between graders is quite poor, which greatly restricts its use in clinical practice.114,115 Several researchers have therefore attempted to apply AI to goniophotographs to assist ophthalmologists in the diagnosis of glaucoma.116–128

In 2010, Cheng et al122 proposed an automated system that could differentiate between open angle glaucoma and angle closure glaucoma on goniophotographs, achieving a specificity and sensitivity of 92.6% and 97.8%, respectively, based on the combination of all goniophotographic images from each patient’s eye. Cheng et al125 then developed a system for automatic ACA grading based on the focal edge. The system correctly classified 87.3% of open angles and 88.4% of closed angles using a modified 3-level grading system. In addition, among closed angles, it successfully classified 75.0% of grade 1 cases (in which the scleral spur could be seen, according to the Shaffer grading system).125 Baskaran et al117 evaluated novel software capable of automated grading of angle closure on EyeCam goniophotographs. The results showed good consistency between manual (k=0.88; 95% confidence interval: 0.81–0.96) and automatic (k=0.74; 95% confidence interval: 0.63–0.85) grading of angle closure. Automatic grading of EyeCam images achieved an AUROC of 0.954 for detecting angle closure, comparable to manual annotation (AUROC of 0.974).117
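For readers less familiar with the metrics quoted throughout these studies, sensitivity and specificity follow directly from the confusion matrix. The sketch below computes both from toy labels (1 = angle closure, 0 = open angle); the data are invented for illustration.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels, not study data: 1 = angle closure, 0 = open angle
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 2), round(spec, 2))  # 0.75 0.83
```

Because glaucoma screening populations are dominated by negatives, a high specificity matters at least as much as sensitivity for keeping false referrals manageable.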

More recently, Chiang et al126 implemented a novel CNN classifier to assess whether the ACA is closed. The classifier, based on the ResNet-50 architecture, achieved superior performance against both single-grader labels (AUC of 0.969, 95% confidence interval: 0.961–0.976) and consensus labels (AUC of 0.952, 95% confidence interval: 0.942–0.960). In addition, the kappa coefficient of agreement between the CNN classifier and single-grader labels was 0.823176, greater than the agreement between glaucoma experts.
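The kappa coefficient cited above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch for two raters over binary labels (toy ratings, not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over binary (0/1) labels."""
    n = len(r1)
    po = sum(1 for a, b in zip(r1, r2) if a == b) / n  # observed agreement
    p1 = sum(r1) / n                                   # rater 1 "positive" rate
    p2 = sum(r2) / n                                   # rater 2 "positive" rate
    pe = p1 * p2 + (1 - p1) * (1 - p2)                 # chance agreement
    return (po - pe) / (1 - pe)

grader = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
model  = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1]
k = cohens_kappa(grader, model)
print(round(k, 2))  # 0.58
```

Kappa is the natural metric here because, when one class dominates (as open angles typically do), raw percent agreement can look high even for an uninformative classifier.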

Others have attempted to build segmentation algorithms for angle structures on goniophotographs. Peroni et al123 performed semantic segmentation of the anatomic structures of the ACA with a DL system,129 achieving ~88% average pixel classification accuracy in 5-fold cross-validation on a very small annotated image data set. Subsequently, in 2021, Peroni et al128 developed and tested a new DL model, which achieved an average segmentation accuracy of ~91% on the test set.128
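K-fold cross-validation, as used above, is the standard way to estimate accuracy when the annotated data set is too small for a fixed held-out test split. A minimal sketch of the 5-fold index split (indices only; no model or images involved):

```python
# Minimal k-fold splitter: each sample is held out exactly once across folds.
def k_fold_indices(n_samples, k=5):
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f in range(k) if f != held_out for i in folds[f]]
        yield train, test

splits = list(k_fold_indices(20, k=5))
print(len(splits))                            # 5 train/test splits
print(len(splits[0][0]), len(splits[0][1]))   # 16 4
```

The reported "~88% average pixel accuracy" is then the mean of the per-fold accuracies, which makes better use of scarce annotations at the cost of training the model k times.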

Goniophotographs: Challenges

All the studies utilizing goniophotograph-based AI had relatively small sample sizes and overly homogeneous subject distributions.116–121,123,124,128 This may be due to the difficulty of collecting high-quality gonioscopic photographs, a key challenge for this modality.115,130,131 It is also noteworthy that the ethnic characteristics of the subjects were quite homogeneous, with almost all of the subjects in each study coming from the same country, which may limit the generalizability of these models.132

Second, these AI tools may not perform satisfactorily in complicated clinical scenarios.116,117,119,123 In the study conducted by Peroni et al, the DL classification system often misclassified certain shadows as targets when the cornea was in direct contact with the iris, leading to segmentation failure.123 In the study by Baskaran et al,117 the automatic grading system incorrectly identified angle closure when most of the open angles in the goniophotographs had either very slight or dense trabecular meshwork pigmentation. These findings suggest that, although AI performs relatively well on some simple tasks, the systems established in current studies cannot yet cope with the complex and variable clinical settings of the real world.

Anterior Segment OCT-Based Algorithms

The clinical standard for the diagnosis of primary angle closure disease (PACD) is gonioscopy.133 However, gonioscopy has several limitations, including patient discomfort from the contact examination and variability in readings between providers.

Anterior segment OCT is a common imaging method for anterior segment structures. As a noncontact test, it offers a more efficient, objective, and intuitive output in the form of 2D image sequences or 3D reconstructions.134,135 Many studies have explored the potential of AI algorithms for automatic PACD diagnosis based on anterior segment OCT images.

Xu et al136 used a CNN model based on ResNet-18 in combination with transfer learning to achieve fully automated discrimination of open angle and angle closure, as well as identification of the presence or absence of PACD, with an AUROC higher than 0.9. Fu et al134,137,138 developed a series of DL algorithms for PACD diagnosis using ACA images obtained from Visante OCT, with a best AUROC of 0.96. Li et al139 created a dual-functional “digital gonioscopy system” based on volumetric anterior segment OCT scans. Apart from PACD diagnosis, the digital gonioscopy system also supports the detection of peripheral anterior synechiae and achieved an AUROC of 0.90 on external data sets. Part of the data has been released as a public data set with fine annotations for PACD diagnosis and scleral spur localization.138

Anterior Segment OCT: Challenges

A number of challenges exist in the development and application of diagnostic AI algorithms based on anterior segment OCT images. First, the performance of most algorithms has not been verified across multiple ethnicities, limiting understanding of generalizability. Second, a majority of the algorithms reviewed here were designed to classify the chamber angle into binary categories of narrow or open; further classifying narrow angles into grades would help estimate the severity of PACD in clinical practice. Third, the cost of the devices used to obtain volume scans of the anterior segment is quite high (~$200,000–$300,000), limiting their deployment in community screening.
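To illustrate what moving beyond binary narrow/open output could look like, the sketch below maps an estimated angular width (in degrees) to a Shaffer-style grade. The cutoffs follow one common version of the Shaffer scale and vary in the literature; this is not the grading logic of any algorithm reviewed here.

```python
# Illustrative mapping from estimated angular width to a Shaffer-style grade.
# Cutoffs are one common version of the scale and are not universal.
def shaffer_grade(angle_deg):
    if angle_deg <= 0:
        return 0   # closed
    if angle_deg <= 10:
        return 1   # extremely narrow
    if angle_deg <= 20:
        return 2   # narrow
    if angle_deg <= 35:
        return 3   # open
    return 4       # wide open

print([shaffer_grade(a) for a in (0, 8, 15, 30, 40)])  # [0, 1, 2, 3, 4]
```

A graded output of this kind, rather than a binary label, would let an algorithm communicate severity in terms clinicians already use.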

Challenges for AI-Based Model Development in Glaucoma

Although DL algorithms have the potential to improve screening, diagnosis, and personalized treatment for glaucoma, significant barriers remain in bringing these tools to clinical practice.

Many of these challenges have been discussed in the context of their specific testing modalities. These include issues related to diagnostic criteria, the lack of device standardization, the scarcity of algorithms for progression prediction, and the need for prospective testing.

Data Scarcity and Publicly Available Data

Beyond this, AI-based model development often requires a considerable amount of data, not only for training but also for external validation. Publicly available benchmark data sets represent an important means for comparing algorithms with similar use cases and validating usability among different populations. There are 17 publicly available data sets with glaucoma data globally, the majority of which contain fundus photos.140 Several of these publicly available data sets have been used in the literature for training and external validation for segmentation and classification tasks for optic discs from fundus photos.26,36,141,142 For example, DRISHTI-GS, a publicly available retinal image data set, comprises 50 training images and 51 test images of normal and glaucomatous eyes, with manual segmentation of ONH parameters performed by 4 specialists.36 Other commonly used data sets of fundus photos include ACRIMA, RIM-ONE, High-Resolution Fundus, the Optic Nerve Head Segmentation Data Set, the ORIGA database, and the Singapore Chinese and Indian Eye Studies.36 More recently, the Retinal Fundus Glaucoma Challenge online competition was set up as part of the 2018 Medical Image Computing and Computer Assisted Intervention conference.143 This data set comprises 1200 fundus photographs with reliable glaucoma labels and disc segmentations and is currently the largest publicly available data set of fundus photographs.143 These open-source data sets and benchmarking competitions present an opportunity for researchers to validate their models on new, unseen data and compare their work to that of others. Although fundus photo data sets are relatively plentiful, data scarcity remains a significant challenge for AI development in glaucoma; there is only one publicly available glaucoma OCT data set.140 Progress in AI research in glaucoma will require improved access to other types of imaging data for external validation.

Dataset Diversity

Alongside this, there is a need for diverse data sets. It is well known that AI models may reflect biases inherent in training data and, in doing so, run the risk of perpetuating those biases when applied to nonrepresentative populations. For example, the accuracy of DL algorithms may differ across ethnicities depending on pigmentation of the fundus and normative optic disc sizes, although further studies are needed to better understand the potential impact of race/ethnicity on DL algorithm performance.144,145 It is critical that training and validation data sets reflect diverse populations, as glaucoma is known to affect certain races disproportionately. Transparency in reporting of parameters such as demographic factors can help regulators and clinicians understand the appropriate scope of use of these models. Currently, there is little standardization or transparency in reporting at the level of AI development.

Transparency and Reporting Guidelines

With the acceleration of AI-driven health interventions in recent years, and acknowledging the need for standardized assessment of these interventions before clinical implementation, extensions of existing guidelines have been published to address issues unique to AI. For early preclinical stages, Standards for Reporting of Diagnostic Accuracy Studies–Artificial Intelligence was published for reporting diagnostic accuracy studies at the development stage or as offline validation in clinical settings.146 Transparent Reporting of a Multivariable Prediction Model of Individual Prognosis or Diagnosis–Artificial Intelligence was developed for reporting multivariable clinical prediction models using ML, to improve interpretation and terminology.147 For early-stage clinical evaluation, the Developmental and Exploratory Clinical Investigations of Decision Support Systems Driven by Artificial Intelligence guidelines comprise a checklist to assess actual clinical performance at a small scale.148 For large-scale clinical evaluation, guidelines for randomized controlled trials include the Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence,149 focusing on reporting of protocols, and the Consolidated Standards of Reporting Trials–Artificial Intelligence, focusing on reporting of results.150

The Minimum Information About Clinical Artificial Intelligence Modeling checklist proposed a minimum set of documentation covering 6 components of a clinical AI model (study design; training, validation, and test data sets; optimization of the final model; performance evaluation; model examination; reproducible pipeline), with the aim of improving the transparency and interpretability of AI models.151 None of the studies reviewed specifically addressed the items recommended in these reporting guidelines, from preclinical to clinical stages. These guidelines call for greater emphasis on standardized development, assessment, and reporting of new AI-driven interventions to narrow the gap toward clinical deployment.

Opportunities for AI-Based Models in Glaucoma

Federated AI and Data Privacy/Sharing

Though data scarcity represents a significant challenge for AI advancement, new infrastructures have been developed to address the need for large and diverse data sets. Multicenter collaboration represents an important solution for scaling up AI for glaucoma. However, big data collection and resource sharing are often complicated by practical concerns, including ethical and privacy-related issues. Federated learning, a form of distributed learning, seeks to address these concerns by training models locally and transferring only model updates, rather than transferring and combining data into a single pool.154,155 This format allows multiple medical institutions to collaboratively train more generalizable AI models without sharing or accessing sensitive patient data across institutions. Distributed learning has the potential to facilitate privacy-preserving AI research and implementation in health care.
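The core aggregation step of federated learning can be sketched very simply: each site trains on its own data and shares only weights, which a server averages, weighted by local sample count (the FedAvg scheme). The sites and numbers below are hypothetical.

```python
# Minimal sketch of federated averaging: sites share only weight vectors,
# never raw patient data; the server averages them by local sample count.
def federated_average(site_weights, site_sizes):
    """site_weights: one weight vector per site; site_sizes: local sample counts."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[j] * n for w, n in zip(site_weights, site_sizes)) / total
        for j in range(n_params)
    ]

# Two hypothetical hospitals with different data volumes
w_a, n_a = [0.2, 1.0], 300
w_b, n_b = [0.6, 2.0], 100
global_w = federated_average([w_a, w_b], [n_a, n_b])
print([round(v, 4) for v in global_w])  # [0.3, 1.25]
```

In a real deployment this averaging is repeated over many communication rounds, with each round's global model redistributed to the sites for further local training.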

AI and Teleglaucoma Screening

AI also has the potential to accelerate teleophthalmologic evaluation of glaucoma and, in doing so, reduce the burden of undiagnosed disease. Conventional population-wide screening is often considered cost-ineffective due to the time-consuming and labor-intensive nature of comprehensive glaucoma evaluation.154,155 Key challenges include the lack of appropriate and effective testing due to intraindividual and interindividual fluctuations in IOP, intraobserver and interobserver variation in the interpretation of ophthalmoscopic fundus examinations, and the relative subjectivity and tediousness of VF testing. In addition, a large number of highly trained personnel are often required onsite for the interpretation of results and treatment decisions. With advances in imaging devices and image digitalization, teleglaucoma for glaucoma detection and monitoring is a promising solution with the potential to increase accessibility to ophthalmic care, improve health outcomes, lower costs, and facilitate efficient communication with offsite specialists.156,157 AI-based screening could further enhance teleglaucoma services. Even in areas lacking adequate or stable internet connectivity, AI algorithms could be embedded locally into screening devices. A community-based teleglaucoma detection program targeting high-risk populations holds promise for improving early detection and regular monitoring of glaucoma, particularly in small towns, rural areas, and inner cities where specialized medical services are often limited or nonexistent. Beyond this, AI-based screening represents an opportunity to address the often high false-positive rates associated with traditional teleglaucoma screening through enhanced patient risk stratification.158

AI-Based Models for Personalization and Clinical Trial Design

Finally, another promising application of AI-based tools is treatment personalization through advanced identification of biomarkers associated with disease progression. As yet, there are no models for treatment response prediction in glaucoma, but this is an area of increasing interest across all ophthalmology subspecialities (Fig. 2). AI has also been proposed as a possible means of improving clinical trial design and expediting the deployment of new therapies.159 Specifically, AI tools could facilitate patient selection and stratification. For example, in the case of glaucoma, an AI algorithm able to identify the rate of disease progression from baseline data could be used to ensure equal distribution of fast and slow progressors between treatment and control arms. AI tools could also be used to standardize interpretation of testing during disease monitoring and at clinical trial endpoints. Furthermore, biomarker discovery could reveal new clinical trial endpoints.
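The stratification idea described above amounts to stratified randomization: patients are grouped by a model-predicted progression stratum, then split evenly between arms within each stratum. The sketch below uses hypothetical patient records and labels.

```python
import random

# Sketch of stratified randomization: within each progression stratum
# (e.g., model-predicted "fast" vs "slow"), patients are split evenly
# between arms so strata stay balanced across arms.
def stratified_allocation(patients, strata_key, arms=("treatment", "control"), seed=0):
    rng = random.Random(seed)
    assignment = {}
    strata = {}
    for p in patients:
        strata.setdefault(p[strata_key], []).append(p["id"])
    for members in strata.values():
        rng.shuffle(members)                      # randomize order within stratum
        for i, pid in enumerate(members):
            assignment[pid] = arms[i % len(arms)]  # alternate arms for balance
    return assignment

patients = [{"id": i, "rate": "fast" if i < 4 else "slow"} for i in range(8)]
alloc = stratified_allocation(patients, "rate")
fast_treat = sum(1 for p in patients
                 if p["rate"] == "fast" and alloc[p["id"]] == "treatment")
print(fast_treat)  # 2 of 4 fast progressors land in the treatment arm
```

Balancing fast and slow progressors this way prevents one arm from appearing to respond better simply because it enrolled slower-progressing eyes.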


AI models have the ability to reshape and improve clinical practice in glaucoma and further our understanding of the disease process. To date, AI research has focused on disease screening, diagnosis, and progression, with highly promising results. In the future, additional research is needed in treatment response prediction to facilitate personalization of care.

There are no FDA-approved AI tools for screening, diagnosis, or prediction of glaucoma. However, various companies have reported the development or ongoing development of commercial AI products for glaucoma screening including iHealthScreen and Eyenuk. Yet, there remain significant obstacles to bringing these models to clinical practice. Central to this issue will be the establishment of consensus definitions for assessment and imaging-based diagnosis of glaucoma. As of today, there is no clear or standard diagnostic algorithm for glaucoma screening as there is for DR, nor is there consensus for assessment of glaucoma staging or progression prediction. Because of this, these tools may face difficulty with regulatory approval when seeking commercialization.


1. Miller M. FDA publishes approved list of AI/ML-enabled medical devices. IQVIA 2022. Accessed July 30.
2. US Food and Drug Administration. De novo classification request for IDX-DR. US Food and Drug Administration. 2018.
3. US Food and Drug Administration. EyeArt 510(k) Summary (K200667). US Food and Drug Administration. 2020. Accessed August 21, 2021.
4. Tham YC, Li X, Wong TY, et al. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014;121:2081–2090.
5. Lee EB, Wang SY, Chang RT. Interpreting deep learning studies in glaucoma: unresolved challenges. Asia Pac J Ophthalmol (Phila). 2021;10:261–267.
6. Shaikh Y, Yu F, Coleman AL. Burden of undetected and untreated glaucoma in the United States. Am J Ophthalmol. 2014;158:1121–1129.
7. Tozer K, Woodward MA, Newman-Casey PA. Telemedicine and diabetic retinopathy: review of published screening programs. J Endocrinol Diabetes. 2015;2.
8. Ramessur R, Raja L, Kilduff CLS, et al. Impact and challenges of integrating artificial intelligence and telemedicine into clinical ophthalmology. Asia Pac J Ophthalmol (Phila). 2021;10:317–327.
9. Cassard SD, Quigley HA, Gower EW, et al. Regional variations and trends in the prevalence of diagnosed glaucoma in the Medicare population. Ophthalmology. 2012;119:1342–1351.
10. Shah SM, Choo C, Odden J, et al. Provider agreement in the assessment of glaucoma progression within a team model. J Glaucoma. 2018;27:691–698.
11. Almazroa A, Sun W, Alodhayb S, et al. Optic disc segmentation for glaucoma screening system using fundus images. Clin Ophthalmol. 2017;11:2017–2029.
12. Jammal AA, Thompson AC, Mariottoni EB, et al. Human versus machine: comparing a deep learning algorithm to human gradings for detecting glaucoma on fundus photographs. Am J Ophthalmol. 2020;211:123–131.
13. Hong SW, Koenigsman H, Ren R, et al. Glaucoma specialist optic disc margin, rim margin and rim width discordance in glaucoma and glaucoma suspect eyes. Am J Ophthalmol. 2018;192:65–76.
14. Al-Aswad LA, Ramachandran R, Schuman JS, et al. Artificial intelligence for glaucoma: creating and implementing artificial intelligence for disease detection and progression. Ophthalmol Glaucoma. 2022;S2589–4196:00028–X.
15. Schuman JS, De Los Angeles Ramos Cadena M, McGee R, et al. A case for the use of artificial intelligence in glaucoma assessment. Ophthalmol Glaucoma. 2022;5:e3–e13.
16. Lim C, Cheng Y, Hsu W, et al. Integrated optic disc and cup segmentation with deep learning. In Proceedings of the IEEE 27th International Conference on Tools with Artificial Intelligence, Vietri sul Mare, Italy, November 9–11, 2015; Volume 19, pp.162–169.
17. Akter N, Fletcher J, Perry S, et al. Glaucoma diagnosis using multi-feature analysis and a deep learning technique. Sci Rep. 2022;12:8064.
18. Gheisari S, Shariflou S, Phu J, et al. A combined convolutional and recurrent neural network for enhanced glaucoma detection. Sci Rep. 2021;11:1945.
19. Bhuiyan A, Govindaiah A, Smith RT. An artificial-intelligence- and telemedicine-based screening tool to identify glaucoma suspects from color fundus imaging. J Ophthalmol. 2021;2021:6694784.
20. Sunanthini V, Deny J, Govinda Kumar E, et al. Comparison of CNN algorithms for feature extraction on fundus images to detect glaucoma. J Healthc Eng. 2022;2022:7873300.
21. Phene S, Dunn RC, Hammel N, et al. Deep learning and glaucoma specialists: the relative importance of optic disc features to predict glaucoma referral in fundus photographs. Ophthalmology. 2019;126:1627–1639.
22. Li Z, He Y, Keel S, et al. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology. 2018;125:1199–1206.
23. Rogers TW, Jaccard N, Carbonaro F, et al. Evaluation of an AI system for the automated detection of glaucoma from stereoscopic optic disc photographs: the European Optic Disc Assessment Study. Eye (Lond). 2019;33:1791–1797.
24. Al-Aswad LA, Kapoor R, Chu CK, et al. Evaluation of a deep learning system for identifying glaucomatous optic neuropathy based on color fundus photographs. J Glaucoma. 2019;28:1029–1034.
25. Ahn JM, Kim S, Ahn KS, et al. A deep learning model for the detection of both advanced and early glaucoma using fundus photography. PLoS One. 2018;13:e0207982.
26. Liu H, Li L, Wormstone IM, et al. Development and validation of a deep learning system to detect glaucomatous optic neuropathy using fundus photographs. JAMA Ophthalmol. 2019;137:1353–1360.
27. Hemelings R, Elen B, Barbosa-Breda J, et al. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep. 2021;11:20313.
28. Veena HN, Muruganandham A, Senthil Kumaran T. A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images. J King Saud Univ Sci. 2021;34:6187–6198.
29. Mvoulana A, Kachouri R, Akil M. Fully automated method for glaucoma screening using robust optic nerve head detection and unsupervised segmentation based cup-to-disc ratio computation in retinal fundus images. Comput Med Imaging Graph. 2019;77:101643.
30. Medeiros FA, Jammal AA, Thompson AC. From machine to machine: an oct-trained deep learning algorithm for objective quantification of glaucomatous damage in fundus photographs. Ophthalmology. 2019;126:513–521.
31. Shin Y, Cho H, Shin YU, et al. Comparison between deep-learning-based ultra-wide-field fundus imaging and true-colour confocal scanning for diagnosing glaucoma. J Clin Med. 2022;11. doi:10.3390/jcm11113168
32. Li Z, Guo C, Lin D, et al. Deep learning for automated glaucomatous optic neuropathy detection from ultra-widefield fundus images. Br J Ophthalmol. 2021;105:1548–1554.
33. Thakur A, Goldbaum M, Yousefi S. Predicting glaucoma before onset using deep learning. Ophthalmol Glaucoma. 2020;3:262–268.
34. Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic accuracy of artificial intelligence in glaucoma screening and clinical practice. J Glaucoma. 2022;31:285–299.
35. Wu JH, Nishida T, Weinreb RN, et al. Performances of machine learning in detecting glaucoma using fundus and retinal optical coherence tomography images: a meta-analysis. Am J Ophthalmol. 2022;237:1–12.
36. Camara J, Neto A, Pires IM, et al. Literature review on artificial intelligence methods for glaucoma screening, segmentation, and classification. J Imaging. 2022;8:19.
37. Mursch-Edlmayr AS, Ng WS, Diniz-Filho A, et al. Artificial intelligence algorithms to diagnose glaucoma and detect glaucoma progression: translation to clinical practice. Transl Vis Sci Technol. 2020;9:55.
38. Watanabe T, Hiratsuka Y, Kita Y, et al. Combining optical coherence tomography and fundus photography to improve glaucoma screening. Diagnostics (Basel). 2022;12:1100.
39. Leung CK, Cheung CY, Weinreb RN, et al. Retinal nerve fiber layer imaging with spectral-domain optical coherence tomography: a variability and diagnostic performance study. Ophthalmology. 2009;116:1257–1263.
40. Koh V, Tham YC, Cheung CY, et al. Diagnostic accuracy of macular ganglion cell-inner plexiform layer thickness for glaucoma detection in a population-based study: comparison with optic nerve head imaging parameters. PLoS One. 2018;13:e0199134. doi:10.1371/journal.pone.0199134
41. Chang RT, Knight OJ, Feuer WJ, et al. Sensitivity and specificity of time-domain versus spectral-domain optical coherence tomography in diagnosing early to moderate glaucoma. Ophthalmology. 2009;116:2294–2299.
42. WuDunn D, Takusagawa HL, Sit AJ, et al. OCT angiography for the diagnosis of glaucoma: a report by the American Academy of Ophthalmology. Ophthalmology. 2021;128:1222–1235.
43. Muhammad H, Fuchs TJ, De Cuir N, et al. Hybrid deep learning on single wide-field optical coherence tomography scans accurately classifies glaucoma suspects. J Glaucoma. 2017;26:1086–1094.
44. Thakoor KA, Koorathota SC, Hood DC, et al. Robust and interpretable convolutional neural networks to detect glaucoma in optical coherence tomography images. IEEE Trans Biomed Eng. 2021;68:2456–2466.
45. Hood DC, La Bruna S, Tsamis E, et al. Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res. 2022;90:101052.
46. Shin Y, Cho H, Jeong HC, et al. Deep learning-based diagnosis of glaucoma using wide-field optical coherence tomography images. J Glaucoma. 2021;30:803–812.
47. Thompson AC, Jammal AA, Berchuck SI, et al. Assessment of a segmentation-free deep learning algorithm for diagnosing glaucoma from optical coherence tomography scans. JAMA Ophthalmol. 2020;138:333–339.
48. Maetschke S, Antony B, Ishikawa H, et al. A feature agnostic approach for glaucoma detection in OCT volumes. PLoS One. 2019;14:e0219126.
49. Ran AR, Cheung CY, Wang X, et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. Lancet Digit Health. 2019;1:e172–e182.
50. Wang X, Chen H, Ran AR, et al. Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning. Med Image Anal. 2020;63:101695.
51. Russakoff DB, Mannil SS, Oakley JD, et al. A 3D deep learning system for detecting referable glaucoma using full OCT macular cube scans. Transl Vis Sci Technol. 2020;9:12.
52. Noury E, Mannil SS, Chang RT, et al. Deep learning for glaucoma detection and identification of novel diagnostic areas in diverse real-world datasets. Transl Vis Sci Technol. 2022;11:11.
53. Bowd C, Belghith A, Zangwill LM, et al. Deep learning image analysis of optical coherence tomography angiography measured vessel density improves classification of healthy and glaucoma eyes. Am J Ophthalmol. 2022;236:298–308.
54. Devalla SK, Pham TH, Panda SK, et al. Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning. Biomed Opt Express. 2020;11:6356–6378.
55. Panda SK, Cheong H, Tun TA, et al. Describing the structural phenotype of the glaucomatous optic nerve head using artificial intelligence. Am J Ophthalmol. 2022;236:172–182.
56. Belghith A, Bowd C, Weinreb RN, et al. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images. Proc SPIE Int Soc Opt Eng. 2014;9035:90350O.
57. Christopher M, Belghith A, Weinreb RN, et al. Retinal nerve fiber layer features identified by unsupervised machine learning on optical coherence tomography scans predict glaucoma progression. Invest Ophthalmol Vis Sci. 2018;59:2748–2756.
58. Ran AR, Wang X, Chan PP, et al. Three-dimensional multi-task deep learning model to detect glaucomatous optic neuropathy and myopic features from optical coherence tomography scans: a retrospective multi-centre study. Front Med (Lausanne). 2022;9:860574.
59. Leung CKS, Lam AKN, Weinreb RN, et al. Diagnostic assessment of glaucoma and non-glaucomatous optic neuropathies via optical texture analysis of the retinal nerve fibre layer. Nat Biomed Eng. 2022;6:593–604.
60. Thompson AC, Jammal AA, Medeiros FA. A deep learning algorithm to quantify neuroretinal rim loss from optic disc photographs. Am J Ophthalmol. 2019;201:9–18.
61. Jammal AA, Thompson AC, Ogata NG, et al. Detecting retinal nerve fibre layer segmentation errors on spectral domain-optical coherence tomography with a deep learning algorithm. Sci Rep. 2019;9:9836.
62. Ran AR, Shi J, Ngai AK, et al. Artificial intelligence deep learning algorithm for discriminating ungradable optical coherence tomography three-dimensional volumetric optic disc scans. Neurophotonics. 2019;6:041110.
63. Cheong H, Krishna Devalla S, Chuangsuwanich T, et al. OCT-GAN: single step shadow and noise removal from optical coherence tomography images of the human optic nerve head. Biomed Opt Express. 2021;12:1482–1498.
64. Christopher M, Bowd C, Proudfoot JA, et al. Deep learning estimation of 10-2 and 24-2 visual field metrics based on thickness maps from macula OCT. Ophthalmology. 2021;128:1534–1548.
65. Wong D, Chua J, Tan B, et al. Combining OCT and OCTA for focal structure-function modeling in early primary open-angle glaucoma. Invest Ophthalmol Vis Sci. 2021;62:8.
66. Rabiolo A, Morales E, Afifi AA, et al. Quantification of visual field variability in glaucoma: implications for visual field prediction and modeling. Transl Vis Sci Technol. 2019;8:25.
67. Huang X, Jin K, Zhu J, et al. A structure-related fine-grained deep learning system with diversity data for universal glaucoma visual field grading. Front Med (Lausanne). 2022;9:832920.
68. Li F, Wang Z, Qu G, et al. Automatic differentiation of Glaucoma visual field from non-glaucoma visual filed using deep convolutional neural network. BMC Med Imaging. 2018;18:35.
69. Li F, Song D, Chen H, et al. Development and clinical deployment of a smartphone-based visual field deep learning system for glaucoma detection. NPJ Digit Med. 2020;3:123.
70. Kucur Ş, Holló G, Sznitman R. A deep learning approach to automatic detection of early glaucoma from visual fields. PLoS One. 2018;13:e0206081.
71. Wen JC, Lee CS, Keane PA, et al. Forecasting future Humphrey Visual Fields using deep learning. PLoS One. 2019;14:e0214875.
72. Brusini P. Clinical use of a new method for visual field damage classification in glaucoma. Eur J Ophthalmol. 1996;6:402–407.
73. Keltner JL, Johnson CA, Cello KE, et al. Classification of visual field abnormalities in the ocular hypertension treatment study. Arch Ophthalmol. 2003;121:643–650.
74. Sample PA, Chan K, Boden C, et al. Using unsupervised learning with variational bayesian mixture of factor analysis to identify patterns of glaucomatous visual field defects. Invest Ophthalmol Vis Sci. 2004;45:2596–2605.
75. Goldbaum MH, Sample PA, Zhang Z, et al. Using unsupervised learning with independent component analysis to identify patterns of glaucomatous visual field defects. Invest Ophthalmol Vis Sci. 2005;46:3676–3683.
76. Bowd C, Weinreb RN, Balasubramanian M, et al. Glaucomatous patterns in Frequency Doubling Technology (FDT) perimetry data identified by unsupervised machine learning classifiers. PLoS One. 2014;9:e85941.
77. Yousefi S, Goldbaum MH, Balasubramanian M, et al. Learning from data: recognizing glaucomatous defect patterns and detecting progression from visual field measurements. IEEE Trans Biomed Eng. 2014;61:2112–2124.
78. Yousefi S, Goldbaum MH, Zangwill LM, et al. Recognizing patterns of visual field loss using unsupervised machine learning. Proc SPIE Int Soc Opt Eng. 2014;9034:2014.
79. Elze T, Pasquale LR, Shen LQ, et al. Patterns of functional vision loss in glaucoma determined with archetypal analysis. J R Soc Interface. 2015;12:20141118. doi:10.1098/rsif.2014.1118
80. Yousefi S, Balasubramanian M, Goldbaum MH, et al. Unsupervised gaussian mixture-model with expectation maximization for detecting glaucomatous progression in standard automated perimetry visual fields. Transl Vis Sci Technol. 2016;5:2.
81. Wang M, Shen LQ, Pasquale LR, et al. Artificial intelligence classification of central visual field patterns in glaucoma. Ophthalmology. 2020;127:731–738.
82. Thakur A, Goldbaum M, Yousefi S. Convex representations using deep archetypal analysis for predicting glaucoma. IEEE J Transl Eng Health Med. 2020;8:3800107.
83. Gupta K, Thakur A, Goldbaum M, et al. Glaucoma precognition: recognizing preclinical visual functional signs of glaucoma. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); 2020:4393–4401.
84. Huang X, Saki F, Wang M, et al. An objective and easy-to-use glaucoma functional severity staging system based on artificial intelligence. J Glaucoma. 2022;31:626–633.
85. Wang M, Tichelaar J, Pasquale LR, et al. Characterization of central visual field loss in end-stage glaucoma by unsupervised artificial intelligence. JAMA Ophthalmol. 2020;138:190–198.
86. Yousefi S, Pasquale LR, Boland MV, et al. Machine-identified patterns of visual field loss and an association with rapid progression in the ocular hypertension treatment study. Ophthalmology. 2022;129:1402–1411.
87. Sample PA, Boden C, Zhang Z, et al. Unsupervised machine learning with independent component analysis to identify areas of progression in glaucomatous visual fields. Invest Ophthalmol Vis Sci. 2005;46:3684–3692.
88. Goldbaum MH, Lee I, Jang G, et al. Progression of patterns (POP): a machine classifier algorithm to identify glaucoma progression in visual fields. Invest Ophthalmol Vis Sci. 2012;53:6557–6567.
89. Yousefi S, Kiwaki T, Zheng Y, et al. Detection of longitudinal visual field progression in glaucoma using machine learning. Am J Ophthalmol. 2018;193:71–79.
90. Wang M, Shen LQ, Pasquale LR, et al. An artificial intelligence approach to detect visual field progression in glaucoma based on spatial pattern analysis. Invest Ophthalmol Vis Sci. 2019;60:365–375.
91. Asaoka R, Murata H, Iwase A, et al. Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier. Ophthalmology. 2016;123:1974–1980.
92. Lin A, Hoffman D, Gaasterland DE, et al. Neural networks to identify glaucomatous visual field progression. Am J Ophthalmol. 2003;135:49–54.
93. Katz J, Congdon N, Friedman DS. Methodological variations in estimating apparent progressive visual field loss in clinical trials of glaucoma treatment. Arch Ophthalmol. 1999;117:1137–1142.
94. Kim S, Lee JY, Kim SO, et al. Macular structure-function relationship at various spatial locations in glaucoma. Br J Ophthalmol. 2015;99:1412–1418.
95. Na JH, Kook MS, Lee Y, et al. Structure-function relationship of the macular visual field sensitivity and the ganglion cell complex thickness in glaucoma. Invest Ophthalmol Vis Sci. 2012;53:5044–5051.
96. Jampel HD, Friedman D, Quigley H, et al. Agreement among glaucoma specialists in assessing progressive disc changes from photographs in open-angle glaucoma patients. Am J Ophthalmol. 2009;147:39–44.
97. Artes PH, Iwase A, Ohno Y, et al. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies. Invest Ophthalmol Vis Sci. 2002;43:2654–2659.
98. Zangwill LM, Bowd C. Retinal nerve fiber layer analysis in the diagnosis of glaucoma. Curr Opin Ophthalmol. 2006;17:120–131.
99. Wu M, Leng T, de Sisternes L, et al. Automated segmentation of optic disc in SD-OCT images and cup-to-disc ratios quantification by patch searching-based neural canal opening detection. Opt Express. 2015;23:31216–31229.
100. Ganesh Babu TR, Shenbaga Devi S, Venkatesh R. Optic nerve head segmentation using fundus images and optical coherence tomography images for glaucoma detection. Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub. 2015;159:607–615.
101. Mariottoni EB, Datta S, Dov D, et al. Artificial intelligence mapping of structure to function in glaucoma. Transl Vis Sci Technol. 2020;9:19.
102. Park K, Kim J, Lee J. A deep learning approach to predict visual field using optical coherence tomography. PLoS One. 2020;15:e0234902.
103. Hashimoto Y, Asaoka R, Kiwaki T, et al. Deep learning model to predict visual field in central 10° from optical coherence tomography measurement in glaucoma. Br J Ophthalmol. 2021;105:507–513.
104. Christopher M, Bowd C, Belghith A, et al. Deep learning approaches predict glaucomatous visual field damage from oct optic nerve head en face images and retinal nerve fiber layer thickness maps. Ophthalmology. 2020;127:346–356.
105. Shin J, Kim S, Kim J, et al. Visual field inference from optical coherence tomography using deep learning algorithms: a comparison between devices. Transl Vis Sci Technol. 2021;10:4.
106. Lee J, Kim YW, Ha A, et al. Estimating visual field loss from monoscopic optic disc photography using deep learning model. Sci Rep. 2020;10:21052.
107. Lee T, Jammal AA, Mariottoni EB, et al. Predicting glaucoma development with longitudinal deep learning predictions from fundus photographs. Am J Ophthalmol. 2021;225:86–94.
108. Yousefi S, Goldbaum MH, Balasubramanian M, et al. Glaucoma progression detection using structural retinal nerve fiber layer measurements and functional visual field points. IEEE Trans Biomed Eng. 2014;61:1143–1154.
109. Sedai S, Antony B, Ishikawa H, et al. Forecasting retinal nerve fiber layer thickness from multimodal temporal data incorporating OCT volumes. Ophthalmol Glaucoma. 2020;3:14–24.
110. Mehta P, Petersen CA, Wen JC, et al. Automated detection of glaucoma with interpretable machine learning using clinical data and multimodal retinal images. Am J Ophthalmol. 2021;231:154–169.
111. Kihara Y, Montesano G, Chen A, et al. Policy-driven, multimodal deep learning for predicting visual fields from the optic disc and OCT imaging. Ophthalmology. 2022;129:781–791.
112. Asaoka R, Murata H, Matsuura M, et al. Improving the structure-function relationship in glaucomatous visual fields by using a deep learning-based noise reduction approach. Ophthalmol Glaucoma. 2020;3:210–217.
113. Flaxman SR, Bourne RRA, Resnikoff S, et al. Global causes of blindness and distance vision impairment 1990–2020: a systematic review and meta-analysis. Lancet Glob Health. 2017;5:e1221–e1234.
114. Baskaran M, Perera SA, Nongpiur ME, et al. Angle assessment by EyeCam, goniophotography, and gonioscopy. J Glaucoma. 2012;21:493–497.
115. Tejwani S, Murthy SI, Gadudadri CS, et al. Impact of a month-long training program on the clinical skills of ophthalmology residents and practitioners. Indian J Ophthalmol. 2010;58:340–343.
116. Lin KY, Urban G, Yang MC, et al. Accurate identification of the trabecular meshwork under gonioscopic view in real time using deep learning. Ophthalmol Glaucoma. 2022;5:402–412.
117. Baskaran M, Cheng J, Perera SA, et al. Automated analysis of angle closure from anterior chamber angle images. Invest Ophthalmol Vis Sci. 2014;55:7669–7673.
118. Matsuo M, Pajaro S, De Giusti A, et al. Automated anterior chamber angle pigmentation analyses using 360 degrees gonioscopy. Br J Ophthalmol. 2020;104:636–641.
119. Matsuo M, Kozuki N, Inomata Y, et al. Automated focal plane merging from a stack of gonioscopic photographs using a focus-stacking algorithm. Transl Vis Sci Technol. 2022;11:22.
120. Teixeira F, Sousa DC, Leal I, et al. Automated gonioscopy photography for iridocorneal angle grading. Eur J Ophthalmol. 2020;30:112–118.
121. De Giusti A, Pajaro S, Tanito M. Automatic pigmentation grading of the trabecular meshwork in gonioscopic images. Comput Pathol Ophthalmic Med Image Anal. 2018;11039:193–200.
122. Cheng J, Liu J, Lee BH, et al. Closed angle glaucoma detection in RetCam images. Annu Int Conf IEEE Eng Med Biol Soc. 2010;2010:4096–4099.
123. Peroni A, Cutolo CA, Pinto LA, et al. A deep learning approach for semantic segmentation of gonioscopic images to support glaucoma categorization. In: Papież B, Namburete A, Yaqub M, Noble J, eds. Medical Image Understanding and Analysis. MIUA 2020. Communications in Computer and Information Science, vol 1248. Springer; 2020.
124. Qian Z, Xie X, Yang J, et al. Detection of shallow anterior chamber depth from two-dimensional anterior segment photographs using deep learning. BMC Ophthalmol. 2021;21:341.
125. Cheng J, Liu J, Wong DWK, et al. Focal edge association to glaucoma diagnosis. Annu Int Conf IEEE Eng Med Biol Soc. 2011;2011:4481–4484.
126. Chiang M, Guth D, Pardeshi AA, et al. Glaucoma expert-level detection of angle closure in goniophotographs with convolutional neural networks: the Chinese American Eye Study. Am J Ophthalmol. 2021;226:100–107.
127. Peroni A, Paviotti A, Campigotto M, et al. On clinical agreement on the visibility and extent of anatomical layers in digital gonio photographs. Transl Vis Sci Technol. 2021;10:1.
128. Peroni A, Paviotti A, Campigotto M, et al. Semantic segmentation of gonio-photographs via adaptive ROI localisation and uncertainty estimation. BMJ Open Ophthalmol. 2021;6:e000898.
129. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444.
130. Spaeth GL. Gonioscopy: uses old and new. The inheritance of occludable angles. Ophthalmology. 1978;85:222–232.
131. Porporato N, Baskaran M, Husain R, et al. Recent advances in anterior chamber angle imaging. Eye (Lond). 2020;34:51–59.
132. Wang D, Qi M, He M, et al. Ethnic difference of the anterior chamber area and volume and its association with angle width. Invest Ophthalmol Vis Sci. 2012;53:3139–3144.
133. Casson RJ, Newland HS, Muecke J, et al. Gonioscopy findings and prevalence of occludable angles in a Burmese population: the Meiktila Eye Study. Br J Ophthalmol. 2007;91:856–859.
134. Fu H, Xu Y, Lin S, et al. Angle-closure detection in anterior segment oct based on multilevel deep network. IEEE Trans Cybern. 2020;50:3358–3366.
135. Zebardast N, Kavitha S, Krishnamurthy P, et al. Changes in anterior segment morphology and predictors of angle widening after laser iridotomy in South Indian eyes. Ophthalmology. 2016;123:2519–2526.
136. Xu BY, Chiang M, Chaudhary S, et al. Deep learning classifiers for automated detection of gonioscopic angle closure based on anterior segment OCT images. Am J Ophthalmol. 2019;208:273–280.
137. Fu H, Baskaran M, Xu Y, et al. A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. Am J Ophthalmol. 2019;203:37–45.
138. Fu H, Li F, Sun X, et al. AGE challenge: angle closure glaucoma evaluation in anterior segment optical coherence tomography. Med Image Anal. 2020;66:101798.
139. Li F, Yang Y, Sun X, et al. Digital gonioscopy based on three-dimensional anterior-segment OCT: an international multicenter study. Ophthalmology. 2022;129:45–53.
140. Khan SM, Liu X, Nath S, et al. A global review of publicly available datasets for ophthalmological imaging: barriers to access, usability, and generalisability. Lancet Digit Health. 2021;3:e51–e66.
141. Nawaz M, Nazir T, Javed A, et al. An efficient deep learning approach to automatic glaucoma detection using optic disc and optic cup localization. Sensors (Basel). 2022;22:434.
142. Diaz-Pinto A, Morales S, Naranjo V, et al. CNNs for automatic glaucoma assessment using fundus images: an extensive validation. Biomed Eng Online. 2019;18:29.
143. Orlando JI, Fu H, Barbosa Breda J, et al. REFUGE Challenge: a unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal. 2020;59:101570.
144. Christopher M, Nakahara K, Bowd C, et al. Effects of study population, labeling and training on glaucoma detection using deep learning algorithms. Transl Vis Sci Technol. 2020;9:27.
145. Raman R, Srinivasan S, Virmani S, et al. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye. 2019;33:97–109.
146. Sounderajah V, Ashrafian H, Golub RM, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open. 2021;11:e047709.
147. Collins GS, Dhiman P, Andaur Navarro CL, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021;11:e048008.
148. Vasey B, Nagendran M, Campbell B, et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med. 2022;28:924–933.
149. Cruz Rivera S, Liu X, Chan AW, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26:1351–1363.
150. Liu X, Rivera SC, Moher D, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension. BMJ. 2020;370:m3164.
151. Norgeot B, Quer G, Beaulieu-Jones BK, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med. 2020;26:1320–1324.
152. Yang Q, Liu Y, Cheng Y, et al. Federated learning. Synth Lect Artif Intell Mach Learn. 2019;13:1–207.
153. Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. NPJ Digit Med. 2020;3:119.
154. Moyer VA; US Preventive Services Task Force. Screening for glaucoma: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2013;159:484–489.
155. Jonas JB, Aung T, Bourne RR, et al. Glaucoma. Lancet. 2017;390:2183–2193.
156. Hautala N, Hyytinen P, Saarela V, et al. A mobile eye unit for screening of diabetic retinopathy and follow-up of glaucoma in remote locations in northern Finland. Acta Ophthalmol. 2009;87:912–913.
157. Thomas S, Hodge W, Malvankar-Mehta M. The cost-effectiveness analysis of teleglaucoma screening device. PLoS One. 2015;10:e0137913.
158. Ramachandran R, Joiner DB, Patel V, et al. Comparison between the recommendations of glaucoma specialists and OCT report specialists for further ophthalmic evaluation in a community-based screening study. Ophthalmol Glaucoma. 2022;5:602–613.
159. Harrer S, Shah P, Antony B, et al. Artificial intelligence for clinical trial design. Trends Pharmacol Sci. 2019;40:577–591.

artificial intelligence; deep learning; glaucoma; optical coherence tomography; visual fields

Copyright © 2023 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.