Review Article

Artificial Intelligence Meets Neuro-Ophthalmology

Leong, Yuan-Yuh MBBS, MRCS; Vasseneix, Caroline MD†,‡; Finkelstein, Maxwell Toan; Milea, Dan MD, PhD∗,†,‡; Najjar, Raymond P. PhD†,‡,§

Asia-Pacific Journal of Ophthalmology: March-April 2022 - Volume 11 - Issue 2 - p 111-125
doi: 10.1097/APO.0000000000000512

Abstract

Artificial intelligence (AI) enables a technical system of human-like behavior that consists of receiving, interpreting, and learning from data before achieving a particular goal. The intricacies of machine learning (ML) and deep learning (DL) amongst other methods of AI have been discussed elsewhere.1–3 Briefly, ML is an advanced statistical technique that uses algorithms to sort through and “learn” from a dataset of extracted features to subsequently come up with learning-based predictions that can be applied to the same or different datasets (validation or external testing datasets). DL, a subset of ML, consists of a network of interconnected algorithms termed neural networks, almost akin to neuronal networks in the human brain. Such networks allow for a refined “learning” from large datasets (with or without labeling) passing through multiple layers of algorithms to automatically recognize features and subsequently classify objects within the data.4

Through advancements in computing power, AI boasts the ability to process large datasets in a consistent and fast manner, aiding physicians to make more accurate diagnoses in a shorter amount of time.4 Such technology is currently being used in numerous medical specialties including dermatology5 and radiology.6 The field of ophthalmology is well-positioned to harness the power of AI.2,7,8 With the routine accumulation of data from various clinical investigation modalities such as fundus photography, optical coherence tomography (OCT), and automated perimetry, AI can serve as a tool to analyze the vast amount of information and assist clinical decision making. In ophthalmology, AI algorithms have been developed for the detection of diabetic retinopathy,9,10 glaucoma,11–13 age-related macular degeneration,14–16 and retinopathy of prematurity.17,18 In this narrative review, we highlight recent advancements in the utilization of AI in neuro-ophthalmology.
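As a concrete illustration of the ML workflow outlined above, the following minimal sketch (illustrative only, using the open-source scikit-learn library and synthetic feature values) shows an algorithm being fitted to a labeled training dataset of extracted features and then evaluated on data held out for validation:

```python
# Minimal sketch of the classical ML workflow described above: a model "learns"
# from a labeled training dataset of extracted features, and its predictions are
# then checked on a separate validation dataset. Feature values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # 4 hypothetical image-derived features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic ground-truth labels

# Hold out data the model never sees during training (internal validation).
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # "learning" step
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```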

NEURO-OPHTHALMOLOGY: AN INTEGRATIVE MEDICAL DISCIPLINE

The visual system extends from the eyes to the most posterior segments of the brain (ie, the occipital cortex). Consequently, intracranial pathologies often cause visual disturbances that prompt patients to consult an ophthalmologist.19 Neuro-ophthalmology is an integrative medical discipline that involves the study of pathologies along the visual pathway.20

The most commonly encountered neuro-ophthalmic conditions affect (1) the afferent visual system, leading to various forms of visual dysfunction, and (2) the efferent pathway, leading to central ocular-motor disorders, ocular-motor cranial neuropathies, gaze instability, and pupillary disorders, in addition to systemic dysfunctions affecting the neuromuscular junction or the extraocular muscles. Alterations in the afferent and efferent pathways can originate from a wide range of conditions including autoimmune, infectious, inflammatory, ischemic, traumatic, compressive, congenital, or degenerative diseases. It is not infrequent that an isolated neuro-ophthalmic dysfunction (eg, inflammatory optic neuropathy) foreshadows an underlying neurological disease (eg, multiple sclerosis). Similarly, swelling of the optic nerve head (ONH) can be the only manifestation of increased intracranial pressure due to potentially life-threatening conditions of the brain, which require urgent care.

Notwithstanding the prospect of detecting systemic or neurologic conditions through ocular examinations, the field of neuro-ophthalmology has not benefitted, until recently, from significant advances in the field of AI.21 Three main reasons may explain such a delay: (1) the low prevalence and heterogeneity of neuro-ophthalmologic conditions, leading to a shortage of data required to efficiently train DL algorithms; (2) the relatively small neuro-ophthalmologist community compared to other ophthalmology subspecialties; and (3) heterogeneity in the establishment of a diagnosis “ground truth” between neuro-ophthalmic centers, especially when, in some conditions, neurologists provide the final diagnosis. This may lead to a loss of follow-up data and reduced reliability of the “ground truth” necessary to train AI algorithms. Nevertheless, neuro-ophthalmology is not barren of advancements in the field of AI. In the following sections, we summarize and discuss the most prominent investigations utilizing classical machine learning (CML) and DL to detect ONH abnormalities and eye movement disorders encountered in a neuro-ophthalmic setting.

ARTIFICIAL INTELLIGENCE FOR THE CLASSIFICATION OF OPTIC NERVE HEAD APPEARANCE

Nervous signals arising from the phototransduction of light by the retina travel to the brain through the optic nerve. The appearance of the ONH, the proximal end of the optic nerve, is dependent on its structural integrity. Axonal loss typically causes ONH pallor (or atrophy at more advanced stages), cupping (mainly, but not only in glaucoma), and swelling in various neuropathies (eg, ischemic, inflammatory, infiltrative, infective, toxic, and compressive neuropathy).19 Various neurological conditions such as intracranial hypertension (especially if due to an intracranial mass or venous sinus thrombosis) represent true medical emergencies requiring prompt diagnosis and intervention. Of interest, intracranial hypertension is commonly associated with papilledema (defined as bilateral ONH swelling diagnosed exclusively in a context of high intracranial pressure), making its detection of paramount importance. Failure to detect papilledema and its cause can lead to neurologic dysfunction, permanent vision loss, or even death.22,23 Conversely, false diagnosis of papilledema can lead to unnecessary, expensive, and invasive investigations.24 Therefore, the visualization of the ONH appearance through direct ophthalmoscopy offers a valuable tool in the evaluation of a patient's ocular and neurological health. Although ophthalmologists are generally capable of identifying most ONH abnormalities using ophthalmoscopy, nonophthalmology-trained health care personnel, including emergency department (ED) physicians, are less confident in visualizing and commenting on the ONH's appearance using ophthalmoscopy.25

Digital fundus cameras providing high-quality photographs of the ONH and retina offer an alternative to ophthalmoscopy.26 In a study by Sachdeva et al,27 fundus photographs were taken in patients who presented to the ED with chief complaints of headache, focal neurologic deficit, visual change, or diastolic hypertension, and ONH swelling was detected in 2.6% of patients (1 in 38 patients). Nonetheless, a trained neuro-ophthalmology expert was still required for interpretation of fundus photographs and diagnosis28,29 and such expertise might not always be readily available. As an alternative to trained neuro-ophthalmologists, ML algorithms may offer a solution for fast, automated, and accurate interpretation of ONH appearance and potentially, underlying diagnoses. A summary of studies using ML to detect ONH abnormalities is provided in Table 1.

Table 1 - Summary of Studies Utilizing Classical Machine Learning and Deep Learning to Detect Structural and Functional Optic Nerve and Optic Nerve Head Abnormalities
Performance Characteristics
Authors Artificial Intelligence Method Modality Analyzed Model Description Goals/Predicted Categories Datasets Sensitivity (%) Specificity (%) AUC Accuracy (%) Other Metrics
Studies on Optic Nerve Head Appearance/Structure
 Echegaray et al (2011)32 CML Color fundus photographs Image processing and extraction of features of vasculature, optic disc margin, and retinal nerve fiber layer changes Classification using decision tree forest To grade the severity of papilledema based on MFS and compared to trained neuro-ophthalmologist 294 images taken from patients with diagnosed papilledema obtained from local database. k = 0.71
 Agne et al (2015)33 CML Color fundus photographs Image processing and extraction of features of vasculature, optic nerve head, and peripapillary retinal areas. Textural feature extraction was aided by GLCM. Classification using random forest classifier To predict ONH volume from fundus images 48 images showing optic nerve edema obtained from local database. r = 0.77
 Akbar et al (2017)30 CML Color fundus photographs Image processing and extraction of features of vasculature, disc obscuration, and disc color. Textural feature extraction aided by GLCM. Classification using SVM with RBF kernel To detect papilledema from normal optic discs; To grade papilledema severity into Mild (MFS 1 and 2) and Severe (MFS 3 to 5) 160 images including 50 normal and 40 with papilledema from the publicly available STARE database, and 40 normal and 30 with papilledema from local database. 90.0 / 97.3 96.4 / 97.0 92.9 / 97.9
 Fatima et al (2017)31 CML Color fundus photographs Image processing and extraction of features of vasculature, disc obscuration, and disc color. Textural feature extraction aided by GLCM. Classification using SVM using different combination of extracted features. Best classification performance, reported in this table, was obtained using texture, color, and disc obscuration. To detect papilledema from normal optic discs 160 images including 50 normal and 40 with papilledema from the publicly available STARE database, and 40 normal and 30 with papilledema from local database. 84.1 90.6 87.8
 Yang et al (2019)47 CML Color fundus photographs Parameters of optic disc pallor analyzed: Brightness correction ratio and temporal to nasal ratio Classification using logistic regression To detect optic disc pallor 230 images consisting of 107 with disc pallor and 123 normal discs from local database. 95.3 96.7 96.1
 Ahn et al (2019)34 DL Color fundus photographs CNN using Google's Tensorflow framework; Transfer learning to Inception V3; Transfer learning to ResNet; Transfer learning to VGG To differentiate true ONH swelling from pseudo-swelling 1396 images (295 with optic neuropathies, 295 with pseudo-papilledema, 779 normal) from local database. Training dataset: 876 Validation dataset: 274 Testing dataset: 219 images 95.9 / 96.4 / 98.6 / 96.8
 Milea et al (2020)35 DL Color fundus photographs BONSAI Deep Learning System (DLS): Segmentation network (U-net) to detect the location of ONH Classification network (DenseNet-121 and DenseNet-201) pretrained on ImageNet To distinguish papilledema from normal ONH and ONHs with other abnormalities; To distinguish other ONH abnormalities from normal ONH and papilledema; To distinguish normal ONH from papilledema and ONHs with other abnormalities Training dataset: 14,341 images (2148 with papilledema, 3037 with other optic disc abnormalities, 9156 with normal optic discs) from 19 sites and 11 countries Testing dataset: 1505 images (360 with papilledema, 532 with other optic disc abnormalities, 613 with normal optic discs) from 5 other centers in 5 countries. 96.4 / 85.7 / 86.6 84.7 / 78.6 / 95.3 0.96 / 0.90 / 0.98 87.5 / 81.1 / 91.8
 Biousse et al (2020)36 DL Color fundus photographs To distinguish papilledema from normal ONH and ONHs with other abnormalities; To distinguish other ONH abnormalities from normal ONH; To distinguish normal ONH from papilledema and ONHs with other abnormalities; To compare the DLS's performance with 2 expert neuro-ophthalmologists Training dataset: 14,341 images (2148 with papilledema, 3037 with other optic disc abnormalities, 9156 with normal optic discs) from 19 sites and 11 countries Testing dataset: 800 images (201 with papilledema, 199 with other optic disc abnormalities, 400 with normal optic discs) from local dataset. 83.1 / 73.9 / 91.0 94.3 / 89.9 / 93.3 0.96 / 0.89 / 0.97 91.5 / 85.9 / 92.1 DLS vs Expert 1: k = 0.72; DLS vs Expert 2: k = 0.65; Expert 1 vs Expert 2: k = 0.71
 Yang et al (2020)48 DL Color fundus photographs Classification with ResNet-50 pretrained on ImageNet To detect normal ONH, NGON, GON; To detect NGON among normal optic discs and GON; To detect GON among normal optic discs and NGON; To differentiate GON from NGON Training dataset: 900 images (300 normal ONH, 300 NGON, 300 GON) Validation dataset: 240 images (80 normal ONH, 80 NGON, 80 GON) Testing dataset: 2675 images (2503 normal ONH, 66 NGON, 106 GON) Images were obtained from local database. 86.4 / 92.5 / 93.4 99.6 / 99.5 / 81.8 0.92 / 0.95 / 0.87 99.1
 Vasseneix et al (2021)44 DL Color fundus photographs Segmentation network (U-net) Classification network (VGGNet) pretrained on ImageNet To grade papilledema severity into Mild/Mod (MFS 1-3) vs Severe (MFS 4 and 5) Training dataset: 2103 images (1052 with mild/moderate papilledema, 1051 with severe papilledema) from 16 sites and 11 countries Testing dataset: 214 images (92 with mild/moderate papilledema, 122 with severe papilledema) from 4 sites 91.8 82.6 0.93 87.9
 Liu et al (2021)38 DL Color fundus photographs Classification with ResNet-152 network Differentiate normal and abnormal ONH; Differentiate normal and abnormal ONH using fundus photographs taken with smartphone and ophthalmic imaging adapter. Training dataset: 944 images (364 abnormal, 580 normal) from a local database. Testing dataset: 151 images (71 abnormal, 80 normal) from local database, of which 12 images (8 abnormal, 4 normal) were taken using a smartphone. 94.0 / 100.0 96.0 / 50.0 0.99 83.0
Studies on Optic Nerve Function
 Kara et al (2007)54 CML VEP signals Classification with ANN trained with LM backpropagation algorithm To differentiate healthy and diseased optic nerve function using VEP signals VEP signals from 224 subjects (119 with optic nerve diseases, 105 healthy) from local database. 96.9 96.7 96.8
 Guven et al (2008)55 CML VEP signals Classification with C4.5 decision tree classifier; Classification with LM backpropagation algorithm; Classification with AIRS; Classification with LDA; Classification with SVM To evaluate the effect of GDA on classification of optic nerve diseases using VEP signals into healthy and diseased 129 VEP signals (68 with optic nerve diseases, 61 healthy) from local database. 93.8 / 93.9 / 81.3 / 93.8 / 93.8
 Thomas et al (2019)53 DL Visual field Classification using feed-forward back-propagation ANN created with Neural Networks Toolbox To detect VF loss caused by pituitary mass amongst glaucomatous VFs 907 glaucomatous VF and 121 fields with pituitary lesions from local database. Training set consisted of 70% of bilateral field representation with 15% each used for validation and post-training testing. 95.9 99.8
AIRS indicates artificial immune recognition system; ANN, artificial neural network; AUC, area under the curve; CML, classical machine learning; CNN, convolutional neural network; DL, deep learning; DLS, deep learning system; GDA, generalized discriminate analysis; GLCM, gray-level co-occurrence matrix; GON, glaucomatous optic neuropathy; LDA, linear discriminant analysis; LM, Levenberg-Marquardt; MFS, modified Frisen scale; NGON, nonglaucomatous optic neuropathy; ONH, optic nerve head; RBF, radial basis function; ResNet, deep residual learning; SVM, support vector machine; VEP, visual-evoked potential; VF, visual field; VGG, visual geometry group. k = Kappa agreement score; r = Pearson correlation coefficient.
Area under the precision-recall curve used.

Papilledema, Pseudopapilledema, and Other Optic Nerve Head Abnormalities

Using CML, Akbar et al30 developed an automated system to detect papilledema from healthy ONHs and grade its severity (mild vs severe) using 160 retrospectively collected fundus photographs. Four classes of features (textural, color, disc obscuration, and vascular) were extracted from ONH photographs and subsequently classified using a support vector machine with a radial basis function kernel. This system yielded accuracies of 92.9% and 97.9% for the detection and grading of papilledema, respectively. The high accuracy of Akbar and colleagues' supervised machine learning algorithm for the detection of papilledema was comparable to earlier findings of Fatima and colleagues, who investigated combinations of the same 4 features above and obtained an accuracy of 87.8% for papilledema detection using a supervised support vector machine classifier.31 Other studies using different combinations of ONH feature extraction and ML algorithms showed good agreement with an expert neuro-ophthalmologist for papilledema grading (Kappa score = 0.71)32 and with OCT values for ONH volume estimation (Pearson correlation coefficient, r = 0.77).33 The aforementioned studies showed promise for CML techniques to detect papilledema from healthy ONHs. However, in clinical practice, the classification of ONH abnormalities yields several diagnostic possibilities and is not binary. In addition, datasets containing clear-cut cases/diseases with prominent clinical features on color fundus photographs, similar to those generally used in retrospective studies, are infrequent in clinics. Consequently, to develop CML solutions capable of detecting multiple ONH conditions in a clinical setting, considerable effort is required from expert ophthalmologists to label individual disease features and severity on a large spectrum of ONH photographs. DL algorithms can be trained to automatically recognize features and classify ONH conditions on color fundus photographs. These algorithms can outperform CML algorithms for the classification of images, provided large training datasets with robust ground truth are utilized.
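A minimal sketch of this type of CML pipeline is shown below; it assumes the textural, color, disc-obscuration, and vascular features have already been extracted as numeric arrays (the image-processing steps of Akbar et al are not reproduced) and uses scikit-learn's RBF-kernel support vector machine:

```python
# Hedged sketch of a CML pipeline in the spirit of Akbar et al: pre-extracted
# feature groups are concatenated and classified with an RBF-kernel SVM.
# Feature extraction from fundus photographs is not shown; arrays are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 160
texture, color = rng.normal(size=(n, 8)), rng.normal(size=(n, 3))
obscuration, vascular = rng.normal(size=(n, 2)), rng.normal(size=(n, 4))
X = np.hstack([texture, color, obscuration, vascular])
y = rng.integers(0, 2, size=n)            # 0 = normal ONH, 1 = papilledema

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:120], y[:120])                 # train on 120 images, test on the remaining 40
tn, fp, fn, tp = confusion_matrix(y[120:], clf.predict(X[120:])).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```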

Using DL, Ahn and colleagues34 differentiated normal ONHs from ONHs swollen due to optic neuropathies (including a small subset with papilledema) and from pseudopapilledema. Using data augmentation and a classical convolutional neural network (CNN) with Tensorflow and transfer learning, the authors differentiated true ONH swelling from pseudo-swelling with high accuracy (∼95%). Unfortunately, this study suffered from various methodological limitations such as a lack of rigorous clinical inclusion criteria and an external testing dataset. The Brain and Optic Nerve Study with Artificial Intelligence (BONSAI) consortium initiated in 2019 a large collaborative effort across 24 ophthalmology centers in 15 countries, leading to the development of a deep learning system (DLS) able to classify papilledema and other ONH abnormalities.35 The DLS, consisting of segmentation (U-Net) and classification (DenseNet) networks, was trained to classify ONHs into 3 classes: normal, papilledema, and other ONH abnormalities, using a dataset of 14,341 retrospectively collected mydriatic fundus photographs from 6779 patients of various ethnicities from 19 centers worldwide. The training dataset consisted of 9156 images of normal ONH, 2148 with papilledema, and 3037 ONH with other abnormalities. Subsequently, the classification performance of the DLS was evaluated on an external testing dataset of 1505 photographs from 5 other independent centers. The BONSAI-DLS showed high accuracy for the classification of papilledema from normal and other ONH abnormalities, with an area under the curve (AUC) of 0.96 [95% confidence interval (CI): 0.95–0.97], a sensitivity of 96.4% (95% CI: 93.9–98.3), and a specificity of 84.7% (95% CI: 82.3–87.1). The BONSAI-DLS also displayed high accuracy for the classification of normal discs and discs with other abnormalities (eg, optic disc atrophy, nonarteritic ischemic optic neuropathy, optic disc drusen, etc) with AUCs of 0.98 and 0.90, respectively.35
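The transfer-learning strategy described above can be sketched as follows. This is an illustrative example using PyTorch and an ImageNet-pretrained DenseNet-121 with its final layer replaced for the 3 ONH classes; it is not the BONSAI consortium's actual implementation, and the segmentation network, data loading, and preprocessing are omitted:

```python
# Hedged sketch of transfer learning for 3-class ONH classification
# (normal / papilledema / other abnormality). Assumes PyTorch and torchvision.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")            # ImageNet pretraining
model.classifier = nn.Linear(model.classifier.in_features, 3)  # replace head with 3 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of preprocessed 224x224 fundus images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with a dummy batch (4 images, 3 channels, 224x224) and dummy labels.
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 2, 0])))
```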

A critical question is whether a DLS can provide more accurate classifications compared to humans. In a recent study addressing this question, the overall classification accuracy of the BONSAI-DLS (84.7%) was at least as good as 2 fellowship-trained neuro-ophthalmologists with over 25 years of clinical experience (80.1% and 84.4%) who, like the DLS, diagnosed the ONH appearance on digital fundus photographs without additional clinical information.36

The robustness of trained DLSs for the detection of papilledema and other ONH abnormalities was also confirmed recently by 2 studies albeit with smaller training and testing datasets.37,38 Interestingly, a DL algorithm trained exclusively using fundus photographs taken with conventional desktop cameras, to classify normal and abnormal ONHs, still achieved acceptable performance on a testing dataset of images taken by smartphone cameras (accuracy = 83%, sensitivity = 100%, specificity = 50%).38 This study demonstrated low specificity and high sensitivity likely due to its small, imbalanced testing dataset of 8 abnormal and 4 normal ONHs. With proper validation on larger, prospectively collected, and more representative datasets, there is potential for well-trained DLSs to detect ONH abnormalities in settings where high-grade desktop fundus cameras may not be available, such as nonophthalmic settings.
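The sketch below reproduces, with hypothetical predictions, how such metrics behave on a testing subset of this size: with only 4 normal ONHs, each misclassified normal disc costs 25 percentage points of specificity.

```python
# Illustration (hypothetical predictions) of metric instability on a tiny,
# imbalanced test set of 8 abnormal and 4 normal ONHs, as in the smartphone subset.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.array([1] * 8 + [0] * 4)          # 1 = abnormal ONH, 0 = normal
y_pred = np.array([1] * 8 + [1, 1, 0, 0])     # hypothetical predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))          # 1.00
print("specificity:", tn / (tn + fp))          # 0.50
print("accuracy:", accuracy_score(y_true, y_pred))   # ~0.83
```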

Differentiating the severity of papilledema appearance is important for prognostication of visual outcomes39,40 and monitoring of disease and treatment progress.41,42 In a follow-up study from the BONSAI consortium, a DLS was developed and trained to classify papilledema severity. The DLS was trained on 1052 mydriatic fundus photographs with mild/moderate papilledema and 1051 photographs with severe papilledema with ground truth provided by a panel of experts. Mild/moderate papilledema corresponded to grade 1 to 3 of Frisen grading, whereas severe papilledema was classified as grade 4 to 5 of Frisen grading.43 The classification performance of the DLS and that of 3 neuro-ophthalmologists were tested in 214 photographs of mild/moderate or severe papilledema. The DLS yielded an AUC of 0.93 (95% CI: 0.89–0.96), an accuracy of 87.9%, a sensitivity of 91.8%, and a specificity of 82.6% in classifying papilledema as mild/moderate versus severe. This classification performance was not significantly different from that of neuro-ophthalmologists.44 It is worth noting that the majority of misclassifications occurred on photographs of papilledematous ONH with a Frisen grade 3 (14 out of 26 misclassifications).

Optic Nerve Head Pallor

Apart from a swollen appearance, the ONH can also seem pale and/or atrophic in patients with chronic optic neuropathies.45 Unlike ONH swelling, there is no standardized diagnosis and grading of severity of ONH pallor, and its assessment is variable in part due to the subjective nature of ONH evaluation even amongst trained ophthalmologists46 and anatomic differences between patients (ie, pseudophakia, physiologic temporal pallor, peripapillary atrophy, and tilted disc).45

Using CML, Yang et al47 developed a computer-aided detection (CAD) system to detect ONH pallor using color fundus photographs. The CAD system automatically segmented and enhanced the appearance of the ONH, then extracted features and parameters of ONH pallor in a set of 230 fundus photographs (107 with variable degrees of ONH pallor and 123 normal ONHs confirmed by imaging). The 2 parameters used in the detection of ONH pallor were (1) brightness correction ratio, which refers to the brightness of the “cup depth” compared to the “background region,” and (2) temporal to nasal ratio, which reflects brightness intensity of pixels in the temporal region divided by that in the nasal region at the neuroretinal rim. A logistic regression model was then used to predict the probability of ONH pallor. This system achieved an accuracy of 96.1%, sensitivity of 95.3%, and a specificity of 96.7% in detecting ONH pallor from normal discs on color fundus photographs.47
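A hedged sketch of these two parameters and the final logistic regression step is shown below; the region masks and feature values are illustrative placeholders, and the segmentation and enhancement steps of the actual CAD system are not reproduced:

```python
# Hedged sketch of the two pallor parameters described above, computed from a
# grayscale ONH crop with hypothetical region masks, followed by logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pallor_features(img, cup_mask, background_mask, temporal_mask, nasal_mask):
    """Return (brightness correction ratio, temporal-to-nasal ratio)."""
    bcr = img[cup_mask].mean() / img[background_mask].mean()
    tnr = img[temporal_mask].mean() / img[nasal_mask].mean()
    return bcr, tnr

# Toy example: one synthetic 100x100 image with arbitrary square regions.
rng = np.random.default_rng(2)
img = rng.uniform(0, 255, size=(100, 100))
masks = [np.zeros((100, 100), dtype=bool) for _ in range(4)]
masks[0][40:60, 40:60] = True   # "cup depth"
masks[1][0:20, 0:20] = True     # "background region"
masks[2][45:55, 70:90] = True   # temporal neuroretinal rim
masks[3][45:55, 10:30] = True   # nasal neuroretinal rim
print(pallor_features(img, *masks))

# Features from many labeled images would then feed a logistic regression model:
X = rng.normal(loc=1.0, scale=0.2, size=(50, 2))   # placeholder (BCR, TNR) pairs
y = rng.integers(0, 2, size=50)                     # 1 = pallor, 0 = normal
prob_pallor = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
```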

Glaucomatous Versus Nonglaucomatous Optic Neuropathy

Glaucoma typically presents with ONH cupping. It is nevertheless critical to appropriately identify a compressive optic neuropathy that can at times mimic glaucoma. Yang et al48 utilized DL to differentiate glaucomatous optic neuropathy (GON), defined as enlarged cupping of the ONH with corresponding visual field (VF) defect, from nonglaucomatous optic neuropathy (NGON) due to compression, hereditary diseases, chronic ischemia, inflammation, trauma, or toxic causes, by analyzing ONH fundus photographs with a CNN based on the ResNet-50 architecture. The diagnosis of the cause of optic neuropathy in the study was confirmed by 2 expert ophthalmologists and was supported by VF and OCT assessments. The accuracy of the DLS for detecting normal ONH, NGON, and GON was 99.7%, 86.4%, and 92.5%, respectively, with an overall accuracy of 99.1%. The diagnostic accuracy of the DLS to specifically differentiate GON from NGON images demonstrated a sensitivity of 93.4% and specificity of 81.8% with an area under the precision-recall curve of 0.87. In addition, the majority of misclassifications resulting in false positives occurred in patients with extensive peripapillary atrophy and tilted ONHs.

EXPLORATION OF OPTIC NERVE FUNCTION IN NEURO-OPHTHALMIC DISEASES

The VF can be affected by lesions along the afferent visual pathway.20 In clinical practice, challenges arise when patients present with atypical VF defects or if they have multiple pathologies.49,50 For instance, it is important, yet difficult, to distinguish between an undiagnosed compressive pituitary mass and glaucomatous progression in a patient presenting with worsening temporal VF loss.51,52

In an attempt to harness the power of DL to classify patterns of VF defects, Thomas and colleagues developed and trained a feed-forward back-propagation artificial neural network (ANN) to detect VF loss caused by a pituitary mass amongst glaucomatous VFs.53 The trained ANN was evaluated in 2 ways: (1) trained on 70% of the total available bilateral VF representations, validated on 15% of the data, and tested on the remaining 15%; and (2) using a “needle-in-a-haystack” approach, in which 1 of 121 visual field defects with confirmed pituitary lesions on neuroimaging (not always a classic bitemporal hemianopia, to replicate a more real-life scenario) was withheld from the training dataset, and the trained ANN was evaluated for its ability to detect it within a “haystack” of 907 VFs with glaucomatous damage. The model was programmed to rank the presented VFs from most to least likely to represent a bitemporal hemianopia defect, with rank No. 1 being the most likely. Of 2420 networks, 1631 (67%) ranked the pituitary field No. 1 (most likely), 2195 (91%) ranked it No. 5 or better, and 2268 (94%) ranked it No. 10 or better. The algorithm's sensitivity was 95.9% and its specificity was 99.8%. However, such an algorithm, if used in clinical practice, could trigger a high false-positive rate resulting in unnecessary and expensive neuroimaging for patients.
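The “needle-in-a-haystack” evaluation can be sketched as follows, using scikit-learn's multilayer perceptron on synthetic stand-ins for visual-field vectors; the array sizes mirror the study (907 glaucomatous and 121 pituitary fields), but everything else is illustrative:

```python
# Hedged sketch of the needle-in-a-haystack evaluation: train a feed-forward
# network on VF vectors, withhold one pituitary field, and rank all candidate
# fields by the predicted probability of a pituitary-type defect.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
glaucoma_vfs = rng.normal(size=(907, 104))          # synthetic bilateral VF representations
pituitary_vfs = rng.normal(loc=0.5, size=(121, 104))

held_out = pituitary_vfs[0]                          # the "needle"
X = np.vstack([glaucoma_vfs, pituitary_vfs[1:]])     # training data without it
y = np.array([0] * 907 + [1] * 120)                  # 1 = pituitary-type defect

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, y)

haystack = np.vstack([glaucoma_vfs, held_out])       # 907 glaucoma fields + the needle
scores = net.predict_proba(haystack)[:, 1]
rank = 1 + np.sum(scores > scores[-1])               # rank of the withheld pituitary field
print("needle ranked:", rank, "of", len(haystack))
```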

Another means of functional exploration of the ONH is through visual electrophysiology diagnostic tests such as visual evoked potential (VEP). Although studies with small sample sizes utilizing ML and ANN have shown high accuracy for the detection of optic nerve abnormalities on VEPs in 2007 and 2008, to date, most VEP tests are still interpreted by trained experts.54,55

In summary, AI-driven algorithms, especially using DL, may constitute a paradigm shift in how neurological diseases causing ONH changes might be detected, managed, and monitored in ophthalmic, neuro-ophthalmic, and nonophthalmic settings (eg, ED, internal medicine, and neurology clinics). Although the majority of studies using CML and DL relied on color fundus images to detect ONH abnormalities in neuro-ophthalmic settings, emerging studies are using DL on other imaging modalities (eg, OCT) to evaluate optic disc abnormalities and discriminate papilledema from optic disc drusen.56 Still, before such technology can be routinely incorporated into clinical practice, further prospective studies on real-world datasets are required to validate the utility of ML algorithms as decision-support tools in comparison to standards of care, particularly where they are most needed (ie, where neuro-ophthalmic expertise is lacking).

DETECTION OF EYE MOVEMENT DISORDERS

Eye movements are influenced by cortical control, subcortical centers, premotor coordination of conjugate eye movements, ocular motor cranial nerves (specifically cranial nerves III, IV, and VI), and extraocular muscles.20,57 This extensive system aims to establish stable binocular single vision. Any insult to the ocular motor pathways can lead to ocular misalignment, conjugate gaze abnormalities, or abnormal involuntary oscillatory movements of the eyes termed nystagmus.58

Ocular deviation in infantile and acquired strabismus, observed in children and adults, can be associated with muscle restriction, convergence or divergence insufficiency, or refractive errors.58 Ocular misalignment can be clinically detected through the Hirschberg and Krimsky tests, amongst other methods, with the gold standard being the prism cover test (PCT).58 These methods require the specialized skills of an ophthalmologist or orthoptist, who may not always be available. AI techniques have been developed and used to model ocular motor data,59 predict features associated with congenital nystagmus,60 and detect strabismus.61–73 These techniques could potentially be extended to other causes of ocular misalignment, such as ocular motor cranial nerve palsies.

Strabismus and Conjugate Gaze Abnormalities

Ocular misalignment or strabismus detection using AI has been described in predominantly technical studies utilizing photographs of patients,62–66 eye movement67 or cover test videos,68,69 retinal birefringence scanning,70 or PCT measurements.71,72 These studies are summarized in Table 2.

Table 2 - Summary of Studies Utilizing Classical Machine Learning and Deep Learning On Eye Movement Disorders
Performance Characteristics
Authors Artificial Intelligence Method Modality Analyzed Model Description Goals/Predicted Categories Datasets Sensitivity (%) Specificity (%) AUC Accuracy (%) Other Metrics
Viikki et al (2001)59 CML Eye movements recorded with electrooculography Classification using decision tree induction into 3 classes (control, central, peripheral lesion). Classification using decision tree induction into 5 classes (control, brainstem, cerebellar, cerebello-brainstem, peripheral lesion). To model relationships between oculomotor test parameters (pursuit and saccades) and brain lesion sites Testing dataset: 137 patients with central lesion (35 operated cerebellopontine tumor, 20 operated hemangioblastoma, 20 infarction of cerebello-brainstem) and peripheral lesion (62 Meniere disease) and 78 controls 91.0 / 88.0
Khumdat et al (2013)63 CML Face photograph Classification using automatic detection and calculation of central cornea light reflex ratio To detect ocular misalignment on face photographs in primary gaze using corneal light reflex Images of 103 subjects 97.2 73.1 94.2
Yang et al (2013)68 CML Full-face infrared images with video camera, with selective wavelength filter (occluder) placed in front of either eye Images analyzed using 3DStrabismus Photo Analyser. Model not specified. To compare measurements of binocular alignment from the software to 2 ophthalmologists Images of 90 subjects (30 esotropia, 30 exotropia, 30 orthotropic) r = 0.90
Almeida et al (2015)62 CML Face photograph Classification using SVM To detect ocular misalignment on face photographs at 5 positions of gaze using Hirschberg reflex. 200 images of 40 patients with strabismus; 88% of images used in training and 12% for testing 88 for ET / 100 for XT / 80 for HT / 83 for HoT
Valente et al (2017)69 CML Digital videos of eye movement with cover test Classification using automated deviation of eye. Model not specified. To detect the presence of strabismus on digital videos with cover test 15 patients with exotropia 80.0 100 93.3
Jung et al (2019)66 CML Full-face photograph Classification using SVM To detect strabismus based on facial asymmetry Training dataset: 600 images (300 strabismus and 300 normal). Testing dataset: 100 images (50 strabismus and 50 normal) 95.0
D’addio et al (2020)77 CML Eye movements recorded with electrooculography Classification using Random Forests and Logistic Regression Tree algorithms To investigate the relationships among different parameters of nystagmus and to predict visual acuity; To investigate the relationships among different parameters of nystagmus and variability of eye positioning 20 patients with nystagmus R2 = 0.70 / R2 = 0.73
Chandna et al (2009)72 DL Measurements of vertical strabismus with PCT Classification using ANN (StrabNet) To diagnose and classify vertical strabismus based on PCT measurements; To compare StrabNet with expert orthoptist in making a clinical diagnosis of vertical strabismus Training dataset: 160 measurements. Validation dataset: 120 measurements. Testing dataset: 36 patients. Training and validation dataset as above. Testing dataset: 43 patients 100 94.4 / 84.1
Gramatikov (2017)70 DL Retinal birefringence scanning results recorded using pediatric vision screener Classification using Neural Network toolbox for MATLAB To detect ocular misalignment using retinal birefringence scanning Training and validation dataset: 10 eyes of 5 subjects: 120 central fixation, 480 paracentral fixation Testing dataset: 78 eyes of 39 subjects (19 with strabismus and 20 controls) 98.5 100
Lu et al (2018)65 DL Eye photograph taken by patients Segmentation using ResNet-101 as a backbone. Classification using CNN To detect strabismus using self-screening from a tele strabismus dataset Training dataset: 3409 images (701 strabismus, 2708 normal). Testing dataset: 2276 images (470 strabismus, 1806 normal) 93.3 96.2 0.99 93.9
Chen et al (2018)67 DL Eye movements recorded with an eye tracker Classification using SVM and CNN trained on ImageNet; Classification with AlexNet; Classification with VGG-F; Classification with VGG-M; Classification with VGG-S; Classification with VGG-16; Classification with VGG-19 To detect strabismus based on eye-tracking gaze data Testing dataset: 42 images (25 normal, 17 strabismus) 47.1 / 76.5 / 64.7 / 82.4 / 94.1 / 76.5 / 76.5 84.0 / 80.0 / 84.0 / 92.0 / 96.0 / 88.0 / 88.0
Figueiredo et al (2021)74 DL Face photograph Classification with ResNet-50 pretrained on ImageNet To classify eye versions into 9 positions of gaze of patients with strabismus using face photographs Images of 110 patients with strabismus (42 exotropia, 57 esotropia) 42 to 92
Zheng et al (2021)64 DL Face photograph Region of interest localized using Faster R-CNN. Classification using Inception-V3 pretrained on ImageNet. To screen referable horizontal strabismus on primary gaze using face photographs Training and validation dataset: 7026 images (3829 nonstrabismus from 3021 subjects and 3197 strabismus from 2772 subjects). Testing dataset: 277 images 94.0 99.3 0.99 95.0
ANN indicates artificial neural network; AUC, area under the curve; CML, classical machine learning; CNN, convolutional neural network; DL, deep learning; ET, esotropia; HoT, hypotropia; HT, hypertropia; PCT, prism cover test; SVM, support vector machine; VGG, visual geometry group; XT, exotropia. r = Pearson correlation coefficient; R2 = coefficient of determination.

Face photographs have been used to detect strabismus with different AI techniques. Almeida et al62 proposed a CAD system for detecting and diagnosing strabismus based on the Hirschberg reflex on clinical photographs of 40 adult patients at 5 positions of gaze (primary, up gaze, down gaze, left gaze, and right gaze). Five steps were used: segmentation of the face, detection of the eye region, localization of the eyes, localization of the limbus and of the corneal light reflex, and finally diagnosis of strabismus based on the distance from the center of the cornea to the detected light reflex. The accuracy of identifying ocular misalignment was 100% in exotropia, 88% in esotropia, 80% in hypertropia, and 83% in hypotropia. A similar study, which also analyzed the corneal light reflex but only in primary gaze in children, achieved an accuracy of 94.2%, sensitivity of 97.2%, and specificity of 73.1%.63 However, the above studies were limited by small sample sizes. Figueiredo et al74 used a DL algorithm to objectively classify eye versions from face photographs of adults through a mobile application. The model was trained on 9 positions of gaze using ResNet-50 as the neural network architecture. The application achieved an accuracy ranging from 42% to 92% and precision ranging from 28% to 84% depending on the type of eye version. Recently, Zheng et al64 also developed a DL approach for screening referable horizontal strabismus in children based on primary gaze photographs. A total of 7026 images were used to train the model and 277 images from an independent dataset were tested. The algorithm achieved an accuracy of 95%, a better performance than resident ophthalmologists (accuracy ranging from 81% to 85%). In an attempt to promote automated self-screening, in 2018, Lu et al65 developed a deep neural network for the detection of strabismus in a telemedicine setting using photographs of eyes taken by patients themselves. The DLS achieved an accuracy of 93.9%, sensitivity of 93.3%, and specificity of 96.2% for the detection of strabismus. In 2019, a study by Jung et al66 even proposed using full-face photos to detect strabismus based on facial asymmetry with 95% accuracy. Further larger clinical validation studies, ideally performed in a prospective manner, would be required before the utility of such technologies can be confirmed.
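As an illustration of the decision step shared by these corneal light reflex approaches, the sketch below flags misalignment when the reflex decentration (normalized by corneal radius) differs between the two eyes; the coordinates, radii, and threshold are hypothetical, and the face, eye, and limbus detection steps are omitted:

```python
# Hedged sketch of a Hirschberg-style decision step: given the localized limbus
# (cornea) center and corneal light reflex in each eye, flag ocular misalignment
# when reflex decentration is asymmetric between the eyes. Values are illustrative.
import math

def reflex_offset(cornea_center, reflex, cornea_radius):
    """Decentration of the light reflex, normalized by the corneal radius."""
    dx = reflex[0] - cornea_center[0]
    dy = reflex[1] - cornea_center[1]
    return math.hypot(dx, dy) / cornea_radius

def suspect_strabismus(right_eye, left_eye, threshold=0.15):
    """Flag misalignment when reflex decentration differs between the two eyes."""
    return abs(reflex_offset(*right_eye) - reflex_offset(*left_eye)) > threshold

right = ((120.0, 80.0), (121.0, 80.5), 30.0)   # (cornea center, reflex, radius) in pixels
left = ((320.0, 80.0), (329.0, 83.0), 30.0)
print(suspect_strabismus(right, left))          # True for this toy example
```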

Apart from using static photographs of patients, some studies analyzed eye movement videos in different directions of gaze. Chen et al67 developed a program that used different CNN models previously trained on the ImageNet database. Data from the eye tracker were extracted to produce a gaze deviation image that represented fixation accuracies of the subject at 3 different angles of gaze (left, right, and center). The gaze deviation image was then fed to a CNN and classified as strabismic or normal. The best performance was obtained with an accuracy of 95%, a sensitivity of 94%, and a specificity of 96% when tested on a small sample of 17 adult patients with strabismus and 25 controls. Researchers have also investigated videos of patients' cover tests. In a study by Yang et al,68 an infrared camera with a special occluder that blocks the subject's view and all visible light but selectively transmits infrared light was used to measure horizontal deviations of esotropia and exotropia in children and adults. This program achieved a strong positive correlation (r = 0.90, P < 0.001) with manual PCT measurements performed by 2 independent ophthalmologists. Valente and team attempted to remove the need for special cameras or filters while analyzing videos of the cover test through a different program that incorporated limbus identification, eye tracking, and occluder detection.69 This methodology achieved 93.3% accuracy, 80.0% sensitivity, and 100% specificity for the detection of exotropia.
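The “gaze deviation image” idea of Chen et al can be sketched as a 2D histogram of fixation error that a CNN would then classify; the bin count, target positions, and noise level below are illustrative assumptions rather than the authors' processing pipeline:

```python
# Hedged sketch of turning raw eye-tracker samples into a "gaze deviation image".
import numpy as np

def gaze_deviation_image(gaze_xy, target_xy, bins=32, extent=10.0):
    """2D histogram of gaze-minus-target error (degrees), normalized to [0, 1]."""
    error = gaze_xy - target_xy
    hist, _, _ = np.histogram2d(
        error[:, 0], error[:, 1],
        bins=bins, range=[[-extent, extent], [-extent, extent]],
    )
    return hist / hist.max() if hist.max() > 0 else hist

rng = np.random.default_rng(4)
# 3 gaze targets (left, center, right), 200 samples each, with fixation scatter.
targets = np.repeat(np.array([[-15.0, 0.0], [0.0, 0.0], [15.0, 0.0]]), 200, axis=0)
gaze = targets + rng.normal(scale=2.0, size=targets.shape)
image = gaze_deviation_image(gaze, targets)
print(image.shape)   # (32, 32) map that a downstream CNN could classify
```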

One major limitation of these algorithms, which rely on clinical photographs or videos, is that they cannot be applied to patients with corneal, conjunctival, or periocular abnormalities (eg, microcornea, conjunctival pigmentation, facial fractures) because the limbus and corneal light reflex cannot be accurately determined. To overcome this, Gramatikov and team70 instead used retinal birefringence scanning to detect central fixation based on the changes in polarization of light reflected from the eye. In combination with analysis through a specially designed ANN, the system detected ocular misalignment with 98.5% sensitivity and 100% specificity when tested on 39 subjects (20 controls, 19 with strabismus, mostly children).

Finally, PCT measurements in common patterns of vertical deviation have also been investigated using ANN algorithms. StrabNet is a back-propagation learning system that utilizes a multilayer perceptron to classify vertical strabismus into different patterns including unilateral or bilateral superior oblique palsies, inferior oblique palsy, Brown superior oblique tendon sheath syndrome, thyroid eye disease, or orbital blow-out fracture.71,72 StrabNet achieved a 94% diagnostic accuracy, 100% specificity, and 84% match with an expert orthoptist.

Extraocular movement abnormalities can be used to localize central nervous system lesions.75 One study employed decision tree induction to model relationships between oculomotor test parameters of conjugate eye movements and lesion location in patients with operated cerebellopontine angle tumor, operated hemangioblastoma, infarction of cerebello-brainstem, Meniere disease, and control subjects.59 Ocular motor evaluation included random pursuit eye movements and saccadic eye movements that were electro-oculographically recorded with skin electrodes. When divided into 3 classes (control subject, central lesion, and peripheral lesion), the program yielded a mean classification accuracy of 91%. When 5 classes were used (control subject, brainstem lesion, cerebellar lesion, cerebello-brainstem lesion, and peripheral lesion), classification accuracy was 88%.
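A minimal sketch of decision tree induction on oculomotor summary parameters is shown below; the feature names, value ranges, and labels are illustrative assumptions, not the parameters used by Viikki et al:

```python
# Hedged sketch of decision tree induction on oculomotor test parameters:
# each record holds summary measures of pursuit and saccades plus a lesion-site label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Columns: pursuit gain, saccade latency (ms), saccade accuracy (%), saccade peak velocity (deg/s)
X = np.column_stack([
    rng.uniform(0.5, 1.0, 215),
    rng.uniform(150, 350, 215),
    rng.uniform(60, 100, 215),
    rng.uniform(300, 600, 215),
])
y = rng.integers(0, 3, size=215)   # 0 = control, 1 = central lesion, 2 = peripheral lesion

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
print("mean cross-validated accuracy:", cross_val_score(tree, X, y, cv=5).mean())
```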

The use of AI for the detection and diagnosis of ocular misalignment or conjugate gaze abnormalities is promising, both for its applications in pediatric ophthalmology72 and neuro-ophthalmology.59 However, larger studies, potentially using publicly available datasets with solid ground truth, are needed before implementation in clinical practice or telemedicine programs.

Nystagmus

Nystagmus can be congenital or acquired and can be caused by central nervous system pathologies, peripheral vestibular disease, or severe vision loss. A diverse number of nystagmus waveform types have been described in clinical literature and the characteristics of nystagmus can at times point to the etiology of the condition.76 D’addio et al77 devised a predictive model built on 2 algorithms: 1) random forest and 2) logistic regression tree, to investigate the relationship among different parameters of congenital nystagmus. Electro-oculography of 20 patients (adults and children) was recorded and signals were extracted via a custom-made software. The model was able to predict visual acuity and variability of eye positioning with coefficient of determination values of 0.70 and 0.73, respectively. This study could potentially serve as a framework for investigations of other types of nystagmus.
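The regression task described above can be sketched with a random forest scored by the coefficient of determination, as below; the nystagmus parameters and the synthetic relation to visual acuity are illustrative assumptions, not the authors' data or model:

```python
# Hedged sketch of regressing visual acuity on nystagmus waveform parameters,
# scored with the coefficient of determination (R^2). Values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
# Columns: amplitude (deg), frequency (Hz), foveation time (ms), intensity
X = rng.uniform(size=(200, 4)) * np.array([10.0, 6.0, 200.0, 40.0])
visual_acuity = 1.0 - 0.02 * X[:, 0] + 0.2 * X[:, 2] / 200 + rng.normal(scale=0.05, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, visual_acuity, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2:", r2_score(y_te, model.predict(X_te)))
```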

CLASSICAL MACHINE LEARNING IN OCULAR MYASTHENIA GRAVIS

Ocular myasthenia gravis (OMG) is a subset of myasthenia gravis specifically affecting the extraocular and eyelid muscles. Affected patients typically manifest with variable, fatigable ptosis and/or ophthalmoplegia. Diagnosis of OMG can be difficult in a clinical setting and there is no single test that can establish an absolute diagnosis of myasthenia.78 Liu et al79 developed a CAD system that used facial images and videos of extraocular movements and eyelid positions taken of OMG patients during the neostigmine test to aid the diagnosis of OMG. The image segmentation software, termed OMG-net, was developed by the team, and MobileNet served as the backbone of the encoder-decoder network. The program was able to successfully determine parameters of globe and eyelid positions (palpebral aperture, scleral distance) when compared to doctors' manual measurements. Although this study potentially serves as a platform upon which more complex algorithms can be built, the authors acknowledged that image segmentation in this model could be improved.
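One of the downstream measurements described above can be sketched directly from a segmentation mask; the function below reads a palpebral-aperture-like value (in pixels) off a toy mask and is only an illustration, with the OMG-net model itself and any pixel-to-millimeter calibration omitted:

```python
# Hedged sketch: once the exposed ocular surface has been segmented, the
# palpebral aperture can be approximated as the maximal vertical extent of the mask.
import numpy as np

def palpebral_aperture(eye_mask):
    """Maximal vertical extent (in pixels) of the segmented palpebral fissure."""
    rows = np.where(eye_mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)

mask = np.zeros((120, 200), dtype=bool)
mask[50:78, 60:150] = True        # toy segmentation of the exposed ocular surface
print(palpebral_aperture(mask))   # 28 pixels in this toy example
```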

ADDITIONAL APPLICATIONS FOR AI IN CONDITIONS ADJACENT TO NEURO-OPHTHALMOLOGY

Although sometimes not directly seen in a neuro-ophthalmology clinic, patients with various intracranial insults (ie, neurodegenerative, neurodevelopmental, trauma, etc) can suffer from ocular manifestations. Briefly, ML techniques have been evaluated in gaze parameters for neurodegenerative diseases80 (Parkinson disease,81,82 Alzheimer disease83) and neuropsychiatric diseases.84,85 DL techniques on pupillometry have also been used to investigate neurodevelopmental86 and psychiatric disorders.87 Additionally, VEP responses in intracranial surgery, in particular at the sellar region, can also be automatically interpreted by neural network algorithms, potentially guiding real-time monitoring during surgical resection.88 Although there is much prospect for further investigation into the utility of AI in assessing the visual parameters in neurological conditions, this topic is outside the scope of this review.

CONCLUSIONS AND FUTURE PERSPECTIVES

The discipline of AI offers many useful systems for screening and characterizing ONH structure and function, and, to a lesser extent, certain eye movement disorders. These systems can potentially make complex diagnostic procedures automated, timely, accurate, and scalable, particularly in neuro-ophthalmic conditions, where ocular dysfunctions may harbor underlying life-threatening and/or systemic diseases. Today, more prospective, multicentric studies are required to evaluate the real-life utility of AI systems in neuro-ophthalmic and non-neuro-ophthalmic settings where experts are not readily available. In addition, there is also increasing emphasis on DL-driven quality assessment of retinal images to reduce the frequency of diagnostically unusable datasets, an aspect that is even more important in neuro-ophthalmology where data is scarce.89 In the longer run, ophthalmologists may benefit from AI-assisted prognostication and disease monitoring that could allow for personalized treatments, whereas nonophthalmologists may benefit from an automated uncovering of neurological and systemic conditions through AI-assisted ocular examinations. Finally, it would also not be a farfetched prospect that the investigation of hidden layers in unsupervised DL models through various methods such as saliency maps with backpropagation,90 class activation maps,91 or even feature visualization,92 could one day allow the machine to teach neuro-ophthalmologists and other clinicians about new disease features.
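As an illustration of one of these interpretability methods, the sketch below computes a simple gradient (backpropagation) saliency map with PyTorch; the untrained DenseNet and random input stand in for a real trained model and fundus photograph:

```python
# Hedged sketch of a gradient saliency map: the top class score is backpropagated
# to the input image, and the gradient magnitude highlights influential pixels.
import torch
from torchvision import models

model = models.densenet121(weights=None).eval()    # placeholder, untrained classifier
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image).max()          # score of the top predicted class
score.backward()                    # backpropagate to the input pixels
saliency = image.grad.abs().max(dim=1).values.squeeze()   # (224, 224) saliency map
print(saliency.shape)
```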

The COVID-19 pandemic only accelerated the implementation of AI-powered telehealth into clinical practice,93–95 and AI-based interventions are increasingly accepted by medical practitioners from various disciplines, patients, and regulators. This surge in AI applications should be complemented with technological innovations allowing for embedded AI technologies,96 and long-distance clinical investigations/self-investigations (eg, VF testing applications, phone-based imaging, handheld imaging devices)96 to promote tele-neuro-ophthalmology as a viable health care delivery system.

REFERENCES

1. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med 2019; 380:2588–2590. doi: 10.1056/NEJMc1906060.
2. Kapoor R, Walters SP, Al-Aswad LA. The current state of artificial intelligence in ophthalmology. Surv Ophthalmol 2019; 64:233–240. doi: 10.1016/j.survophthal.2018.09.002.
3. Ongsulee P. Artificial intelligence, machine learning and deep learning. In: 2017 15th International Conference on ICT and Knowledge Engineering (ICT&KE). IEEE; 2017:1–6. doi:10.1109/ICTKE.2017.8259629.
4. Hinton G. Deep learning—a technology with the potential to transform health care. JAMA 2018; 320:1101–1102. doi: 10.1001/jama.2018.11100.
5. Hogarty DT, Su JC, Phan K, et al. Artificial intelligence in dermatology— where we are and the way to the future: a review. Am J Clin Dermatol 2020; 21:41–47. doi: 10.1007/s40257-019-00462-6.
6. Hosny A, Parmar C, Quackenbush J, et al. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18:500–510. doi: 10.1038/s41568-018-0016-5.
7. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2019; 103:167–175. doi: 10.1136/bjophthalmol-2018-313173.
8. Hogarty DT, Mackey DA, Hewitt AW. Current state and future prospects of artificial intelligence in ophthalmology: a review: artificial intelligence in ophthalmology. Clin Experiment Ophthalmol 2019; 47:128–139. doi: 10.1111/ceo.13381.
9. Bellemo V, Lim G, Rim TH, et al. Artificial intelligence screening for diabetic retinopathy: the real-world emerging application. Curr Diab Rep 2019; 19:72. doi: 10.1007/s11892-019-1189-3.
10. Grzybowski A, Brona P, Lim G, et al. Artificial intelligence for diabetic retinopathy screening: a review. Eye 2020; 34:451–460. doi: 10.1038/s41433-019-0566-0.
11. Mayro EL, Wang M, Elze T, et al. The impact of artificial intelligence in the diagnosis and management of glaucoma. Eye 2020; 34:1–11. doi: 10.1038/s41433-019-0577-x.
12. Mariottoni EB, Datta S, Dov D, et al. Artificial intelligence mapping of structure to function in glaucoma. Transl Vis Sci Technol 2020; 9:19. doi: 10.1167/tvst.9.2.19.
13. Devalla SK, Liang Z, Pham TH, et al. Glaucoma management in the era of artificial intelligence. Br J Ophthalmol 2020; 104:301–311. doi: 10.1136/bjophthalmol-2019-315016.
14. Yan Q, Weeks DE, Xin H, et al. Deep-learning-based prediction of late age-related macular degeneration progression. Nat Mach Intell 2020; 2:141–150. doi: 10.1038/s42256-020-0154-9.
15. von der Emde L, Pfau M, Dysli C, et al. Artificial intelligence for morphology-based function prediction in neovascular age-related macular degeneration. Sci Rep 2019; 9:11132. doi: 10.1038/s41598-019-47565-y.
16. Bhuiyan A, Wong TY, Ting DSW, et al. Artificial intelligence to stratify severity of age-related macular degeneration (AMD) and predict risk of progression to late AMD. Transl Vis Sci Technol 2020; 9:25. doi: 10.1167/tvst.9.2.25.
17. Campbell JP, Singh P, Redd TK, et al. Applications of artificial intelligence for retinopathy of prematurity screening. Pediatrics 2021; 147:e2020016618. doi: 10.1542/peds.2020-016618.
18. Scruggs BA, Chan RVP, Kalpathy-Cramer J, et al. Artificial intelligence in retinopathy of prematurity diagnosis. Transl Vis Sci Technol 2020; 9:5. doi: 10.1167/tvst.9.2.5.
19. Martin TJ. Neuro-ophthalmology. In: Palay DA, Krachmer JH, eds. Primary Care Ophthalmology. 2nd ed. Mosby; 2005.
20. Bhatti MT, American Academy of Ophthalmology. Neuro-Ophthalmology. 2021. Available from: https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=2939254 Accessed August 9, 2021.
21. Najjar RP, Vasseneix C, Milea D. Artificial intelligence in neuro-ophthalmology. In: Ichhpujani P, Thakur S, eds. Artificial Intelligence and Ophthalmology. Current Practices in Ophthalmology. Singapore: Springer Singapore; 2021:101–111. doi: 10.1007/978-981-16-0634-2_8.
22. Woodward EG. Clinical negligence. Ophthalmic Physiol Opt 2006; 26:215–216. doi: 10.1111/j.1475-1313.2006.00402_9.x.
23. Rawlinson K. Optometrist Wins Appeal Against Conviction for Manslaughter of Boy, 8. The Guardian. Published July 31, 2017. Available from: https://www.theguardian.com/uk-news/2017/jul/31/optometrist-honey-rose-wins-appeal-against-conviction-manslaughter-boy-8. Accessed September 14, 2020.
24. Poostchi A, Awad M, Wilde C, et al. Spike in neuroimaging requests following the conviction of the optometrist Honey Rose. Eye 2018; 32:489–490. doi: 10.1038/eye.2017.274.
25. Biousse V, Bruce BB, Newman NJ. Ophthalmoscopy in the 21st century: the 2017 H. Houston Merritt Lecture. Neurology 2018; 90:167–175. doi: 10.1212/WNL.0000000000004868.
26. Bruce BB, Lamirel C, Wright DW, et al. Nonmydriatic ocular fundus photography in the emergency department. N Engl J Med 2011; 364:387–389. doi: 10.1056/NEJMc1009733.
27. Sachdeva V, Vasseneix C, Hage R, et al. Optic nerve head edema among patients presenting to the emergency department. Neurology 2018; 90:e373–e379. doi: 10.1212/WNL.0000000000004895.
28. Bruce BB, Thulasi P, Fraser CL, et al. Diagnostic accuracy and use of nonmydriatic ocular fundus photography by emergency physicians: phase II of the FOTO-ED Study. Ann Emerg Med 2013; 62:28–33e1. doi:10.1016/j.annemergmed.2013.01.010.
29. Irani NK, Bidot S, Peragallo JH, et al. Feasibility of a nonmydriatic ocular fundus camera in an outpatient neurology clinic. Neurologist 2020; 25:19–23. doi:10.1097/NRL.0000000000000259.
30. Akbar S, Akram MU, Sharif M, et al. Decision support system for detection of papilledema through fundus retinal images. J Med Syst 2017; 41:66. doi: 10.1007/s10916-017-0712-9.
31. Fatima KN, Hassan T, Akram MU, et al. Fully automated diagnosis of papilledema through robust extraction of vascular patterns and ocular pathology from fundus photographs. Biomed Opt Express 2017; 8:1005–1024. doi: 10.1364/BOE.8.001005.
32. Echegaray S, Zamora G, Yu H, et al. Automated analysis of optic nerve images for detection and staging of papilledema. Investig Ophthalmology Vis Sci 2011; 52:7470–7478. doi: 10.1167/iovs.11-7484.
33. Agne J, Wang JK, Kardon RH. Determining degree of optic nerve edema from color fundus photography. In: Hadjiiski LM, Tourassi GD, eds. Medical Imaging 2015: Computer-Aided Diagnosis. Orlando, FL: SPIE Medical Imaging; 2015:94140F.
34. Ahn JM, Kim S, Ahn KS, et al. Accuracy of machine learning for differentiation between optic neuropathies and pseudopapilledema. BMC Ophthalmol 2019; 19:178. doi: 10.1186/s12886-019-1184-0.
35. Milea D, Najjar RP, Jiang Z, et al. Artificial intelligence to detect papilledema from ocular fundus photographs. N Engl J Med 2020; 382:1687–1695. doi: 10.1056/NEJMoa1917130.
36. Biousse V, Newman NJ, Najjar RP, et al. Optic disc classification by deep learning versus expert neuro-ophthalmologists. Ann Neurol 2020; 88:785–795. doi: 10.1002/ana.25839.
37. Saba T, Akbar S, Kolivand H, et al. Automatic detection of papilledema through fundus retinal images using deep learning. Microsc Res Tech 2021; 84:3066–3077. doi: 10.1002/jemt.23865.
38. Liu TYA, Wei J, Zhu H, et al. Detection of optic disc abnormalities in color fundus photographs using deep learning. J Neuroophthalmol 2021; 41:368–374. doi: 10.1097/WNO.0000000000001358.
39. Chen JJ, Thurtell MJ, Longmuir RA, et al. Causes and prognosis of visual acuity loss at the time of initial presentation in idiopathic intracranial hypertension. Investig Ophthalmology Vis Sci 2015; 56:3850–3859. doi: 10.1167/iovs.15-16450.
40. Wall M, Falardeau J, Fletcher WA, et al. Risk factors for poor visual outcome in patients with idiopathic intracranial hypertension. Neurology 2015; 85:799–805. doi: 10.1212/WNL.0000000000001896.
41. Liu KC, Bhatti MT, Chen JJ, et al. Presentation and progression of papilledema in cerebral venous sinus thrombosis. Am J Ophthalmol 2020; 213:1–8. doi: 10.1016/j.ajo.2019.12.022.
42. Johnson LN, Krohel GB, Madsen RW, et al. The role of weight loss and acetazolamide in the treatment of idiopathic intracranial hypertension (pseudotumor cerebri). Ophthalmology 1998; 105:2313–2317. doi: 10.1016/S0161-6420(98)91234-9.
43. Frisen L. Swelling of the optic nerve head: a staging scheme. J Neurol Neurosurg Psychiatry 1982; 45:13–18. doi: 10.1136/jnnp.45.1.13.
44. Vasseneix C, Najjar RP, Xu X, et al. Accuracy of a deep learning system for classification of papilledema severity on ocular fundus photographs. Neurology 2021; 97:e369–e377. doi:10.1212/WNL.0000000000012226.
45. Osaguona VB. Differential diagnoses of the pale/white/atrophic disc. Community Eye Health 2016; 29:71–74.
46. O’Neill EC, Danesh-Meyer HV, Kong GXY, et al. Optic disc evaluation in optic neuropathies. Ophthalmology 2011; 118:964–970. doi: 10.1016/j.ophtha.2010.09.002.
47. Yang HK, Oh JE, Han SB, et al. Automatic computer-aided analysis of optic disc pallor in fundus photographs. Acta Ophthalmol (Copenh) 2019; 97:e519–e525. doi:10.1111/aos.13970.
48. Yang HK, Kim YJ, Sung JY, et al. Efficacy for differentiating nonglaucomatous versus glaucomatous optic neuropathy using deep learning systems. Am J Ophthalmol 2020; 216:140–146. doi: 10.1016/j.ajo.2020.03.035.
49. Lee IH, Miller NR, Zan E, et al. Visual defects in patients with pituitary adenomas: the myth of bitemporal hemianopsia. Am J Roentgenol 2015; 205:W512–W518. doi:10.2214/AJR.15.14527.
50. Ogra S, Nichols AD, Stylli S, et al. Visual acuity and pattern of visual field loss at presentation in pituitary adenoma. J Clin Neurosci 2014; 21:735–740. doi: 10.1016/j.jocn.2014.01.005.
51. Drummond SR, Weir C. Chiasmal compression misdiagnosed as normal-tension glaucoma: can we avoid the pitfalls? Int Ophthalmol 2010; 30:215–219. doi: 10.1007/s10792-009-9308-9.
52. Greenfield DS, Siatkowski RM, Glaser JS, et al. The cupped disc. Ophthalmology 1998; 105:1866–1874. doi: 10.1016/S0161-6420(98)91031-4.
53. Thomas PBM, Chan T, Nixon T, et al. Feasibility of simple machine learning approaches to support detection of non-glaucomatous visual fields in future automated glaucoma clinics. Eye 2019; 33:1133–1139. doi: 10.1038/s41433-019-0386-2.
54. Kara S, Güven A. Neural network-based diagnosing for optic nerve disease from visual-evoked potential. J Med Syst 2007; 31:391–396. doi: 10.1007/s10916-007-9081-0.
55. Güven A, Polat K, Kara S, et al. The effect of generalized discriminate analysis (GDA) to the classification of optic nerve disease from VEP signals. Comput Biol Med 2008; 38:62–68. doi: 10.1016/j.compbiomed.2007.07.002.
56. Girard MJA, Panda SK, Tun TA, et al. 3D structural analysis of the optic nerve head to robustly discriminate between papilledema and optic disc drusen. arXiv:2112.09970 [cs, eess]. Published December 18, 2021. Available from: http://arxiv.org/abs/2112.09970. Accessed January 27, 2022.
57. Leigh RJ, Zee DS. The Neurology of Eye Movements. 5th edition. Oxford, UK: Oxford University Press; 2015.
58. Hengst TC, Gilbert S, et al. Pediatric Ophthalmology and Strabismus. Springer; 2013.
59. Viikki K, Isotalo E, Juhola M, et al. Using decision tree induction to model oculomotor data. Scand Audiol 2001; 30:103–105. doi: 10.1080/010503901300007227.
60. D’Addio G, Ricciardi C, Improta G, et al. Feasibility of machine learning in predicting features related to congenital nystagmus. In: Henriques J, Neves N, de Carvalho P, editors. XV Mediterranean Conference on Medical and Biological Engineering and Computing - MEDICON 2019. Vol. 76. IFMBE Proceedings. Springer International Publishing; 2020.
61. Van Eenwyk J, Agah A, Giangiacomo J, et al. Artificial intelligence techniques for automatic screening of amblyogenic factors. Trans Am Ophthalmol Soc 2008; 106:64–73. discussion 73-74.
62. Sousa de Almeida JD, Silva AC, Teixeira JAM, et al. Computer-aided methodology for syndromic strabismus diagnosis. J Digit Imaging 2015; 28:462–473. doi: 10.1007/s10278-014-9758-0.
63. Khumdat N, Phukpattaranont P, Tengtrisorn S. Development of a computer system for strabismus screening. In: The 6th 2013 Biomedical Engineering International Conference. IEEE; 2013
64. Zheng C, Yao Q, Lu J, et al. Detection of referable horizontal strabismus in children's primary gaze photographs using deep learning. Transl Vis Sci Technol 2021; 10:33. doi: 10.1167/tvst.10.1.33.
65. Lu J, Fan Z, Zheng C, et al. Automated strabismus detection for telemedicine applications. arXiv:1809.02940 [cs]. Published December 2, 2018. Available from: http://arxiv.org/abs/1809.02940. Accessed August 25, 2021.
66. Jung SM, Umirzakova S, Whangbo TK. Strabismus classification using face features. In: 2019 International Symposium on Multimedia and Communication Technology (ISMAC). IEEE; 2019.
67. Chen Z, Fu H, Lo WL, et al. Strabismus recognition using eye-tracking data and convolutional neural networks. J Healthc Eng 2018; 2018:1–9. doi: 10.1155/2018/7692198.
68. Yang HK, Seo JM, Hwang JM, et al. Automated analysis of binocular alignment using an infrared camera and selective wavelength filter. Invest Ophthalmol Vis Sci 2013; 54:2733–2737. doi: 10.1167/iovs.12-11400.
69. Valente TLA, de Almeida JDS, Silva AC, et al. Automatic diagnosis of strabismus in digital videos through cover test. Comput Methods Programs Biomed 2017; 140:295–305. doi: 10.1016/j.cmpb.2017.01.002.
70. Gramatikov BI. Detecting central fixation by means of artificial neural networks in a pediatric vision screener using retinal birefringence scanning. Biomed Eng OnLine 2017; 16:52. doi: 10.1186/s12938-017-0339-6.
71. Fisher AC, Chandna A, Cunningham IP. The differential diagnosis of vertical strabismus from prism cover test data using an artificially intelligent expert system. Med Biol Eng Comput 2007; 45:689–693. doi: 10.1007/s11517-007-0212-z.
72. Chandna A, Fisher AC, Cunningham I, et al. Pattern recognition of vertical strabismus using an artificial neural network (StrabNet). Strabismus 2009; 17:131–138. doi: 10.3109/09273970903234032.
73. Reid JE, Eaton E. Artificial intelligence for pediatric ophthalmology. Curr Opin Ophthalmol 2019; 30:337–346. doi: 10.1097/ICU.0000000000000593.
74. de Figueiredo LA, Dias JVP, Polati M, et al. Strabismus and artificial intelligence app: optimizing diagnostic and accuracy. Transl Vis Sci Technol 2021; 10:22. doi: 10.1167/tvst.10.7.22.
75. Pedersen RA, Troost BT. Abnormalities of gaze in cerebrovascular disease. Stroke 1981; 12:251–254. doi: 10.1161/01.STR.12.2.251.
76. Abadi RV. Mechanisms underlying nystagmus. J R Soc Med 2002; 95:231–234. doi: 10.1258/jrsm.95.5.231.
77. D’Addio G, Ricciardi C, Improta G, et al. Feasibility of machine learning in predicting features related to congenital nystagmus. In: Henriques J, Neves N, de Carvalho P, editors. XV Mediterranean Conference on Medical and Biological Engineering and Computing - MEDICON 2019. Vol. 76. IFMBE Proceedings. Springer International Publishing; 2020.
78. Smith SV, Lee AG. Update on ocular myasthenia gravis. Neurol Clin 2017; 35:115–123. doi: 10.1016/j.ncl.2016.08.008.
79. Liu G, Wei Y, Xie Y, et al. A computer-aided system for ocular myasthenia gravis diagnosis. Tsinghua Sci Technol 2021; 26:749–758. doi: 10.26599/TST.2021.9010025.
80. Tăuțan AM, Ionescu B, Santarnecchi E. Artificial intelligence in neurodegenerative diseases: a review of available tools with a focus on machine learning techniques. Artif Intell Med 2021; 117:102081. doi: 10.1016/j.artmed.2021.102081.
81. Prashanth R, Dutta Roy S, Mandal PK, et al. High-accuracy detection of early Parkinson's disease through multimodal features and machine learning. Int J Med Inf 2016; 90:13–21. doi: 10.1016/j.ijmedinf.2016.03.001.
82. Przybyszewski A, Kon M, Szlufik S, et al. Multimodal learning and intelligent prediction of symptom development in individual Parkinson's patients. Sensors 2016; 16:1498. doi: 10.3390/s16091498.
83. Nam U, Lee K, Ko H, et al. Analyzing facial and eye movements to screen for Alzheimer's disease. Sensors 2020; 20:5349. doi: 10.3390/s20185349.
84. Shen R, Zhan Q, Wang Y, et al. Depression detection by analysing eye movements on emotional images. In: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2021.
85. Mao Y, He Y, Liu L, Chen X. Disease classification based on eye movement features with decision tree and random forest. Front Neurosci 2020; 14:798. doi: 10.3389/fnins.2020.00798.
86. Khanna S, Das W. A novel application for the efficient and accessible diagnosis of ADHD using machine learning (extended abstract). In: 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G). IEEE; 2020
87. Taha B, Kirk M, Ritvo P, et al. Detection of post-traumatic stress disorder using learned time-frequency representations from pupillometry. In: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2021.
88. Qiao N, Song M, Ye Z, et al. Deep learning for automatically visual evoked potential classification during surgical decompression of sellar region tumors. Transl Vis Sci Technol 2019; 8:21. doi: 10.1167/tvst.8.6.21.
89. Chan EJJ, Najjar RP, Tang Z, et al. Deep learning for retinal image quality assessment of optic nerve head disorders. Asia-Pac J Ophthalmol 2021; 10:282–288. doi: 10.1097/APO.0000000000000404.
90. Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer Vision - ECCV 2014. Lecture Notes in Computer Science. Vol. 8689. Springer International Publishing; 2014
91. Zhou B, Khosla A, Lapedriza A, et al. Learning deep features for discriminative localization. arXiv:1512.04150 [cs]. Published December 13, 2015. Available from: http://arxiv.org/abs/1512.04150. Accessed January 30, 2022.
92. Olah C, Mordvintsev A, Schubert L. Feature visualization. Distill 2017; 2. doi: 10.23915/distill.00007.
93. Ohannessian R, Duong TA, Odone A. Global telemedicine implementation and integration within health systems to fight the COVID-19 pandemic: a call to action. JMIR Public Health Surveill 2020; 6:e18810. doi: 10.2196/18810.
94. Bloem BR, Dorsey ER, Okun MS. The coronavirus disease 2019 crisis as catalyst for telemedicine for chronic neurological disorders. JAMA Neurol 2020; 77:927–928. doi: 10.1001/jamaneurol.2020.1452.
95. Ko MW, Busis NA. Tele-neuro-ophthalmology: vision for 20/20 and beyond. J Neuroophthalmol 2020; 40:378–384. doi: 10.1097/WNO.0000000000001038.
96. Teikari P, Najjar RP, Schmetterer L, et al. Embedded deep learning in ophthalmology: making ophthalmic imaging smarter. Ther Adv Ophthalmol 2019; 11:2515841419827172. doi: 10.1177/2515841419827172.
Keywords:

artificial intelligence; deep learning; eye movement disorders; neuro-ophthalmology; optic nerve head diseases

Copyright © 2022 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.