Artificial Intelligence in Ophthalmology: Evolutions in Asia

Ruamviboonsuk, Paisan MD; Cheung, Carol Y. PhD; Zhang, Xiulan MD, PhD; Raman, Rajiv MD§; Park, Sang Jun MD; Ting, Daniel Shu Wei MD, PhD||

Author Information
Asia-Pacific Journal of Ophthalmology 9(2):p 78-84, March-April 2020. | DOI: 10.1097/


Studies of artificial intelligence (AI) in ophthalmology, a field at the forefront of AI in health care, have recently proliferated in the literature. This may not be surprising, as the huge number of existing images and data in ophthalmology is a goldmine for studying the new generation of AI, that is, deep learning (DL). This new version of machine learning (ML), given the availability of big data and current computing power, generates excitement because it has been found able to screen certain eye diseases with much higher accuracy than older versions of ML or even human graders. Even more interesting is the fact that a large proportion of the recent publications on AI in ophthalmology comes from Asia. The countries of this region, home to approximately 60% of the world population, are diverse in economy, population size, access to health care, access to technology, and causes of visual loss. For this reason, the deployment of AI in ophthalmic care can vary from country to country. The aim of this perspective is not only to provide a big picture of, and insights into, the evolution of AI in ophthalmology in Asia, but also to encourage ophthalmologists in Asia to engage more with this rapidly emerging field.


Although China has the largest population in the world, many countries, such as India, have a lower estimated ratio of ophthalmologists per million population (13.0 vs 26.4).1 India faces its own challenges, such as its vast area and the sheer number of beneficiaries, along with the diversity of its population, challenges unique to specific geographic areas, and variable digital literacy. The strengths of ophthalmology in both China and India are the huge numbers of patients, even for the rarest diseases, and a good number of centers of excellence in eye care across both countries, making them ideal platforms for developing AI algorithms.

The government think-tank, the National Institution for Transforming India (NITI) Aayog, spearheads a national program focusing on AI research in India. In ophthalmology, a few public and private sector hospitals and research institutes in India are involved in the development and validation of AI algorithms, as are academic engineering institutes and industry partners.

South Korea, by contrast, has a much smaller population (51,800,000 people) and a higher ratio of ophthalmologists per million population (68.1) than China and India.1 In the country's health care system, the government-led National Health Insurance (NHI) scheme, a compulsory social insurance for both beneficiaries and health care providers, has covered the entire population since 1989.2 The NHI not only helps Koreans gain easy access to quality health care but also gives the government strong control over the health care system, especially costs. Koreans can therefore access high-quality eye care in both primary and referral centers across the country at reasonable cost.

In addition, South Korea was one of the first countries to move to digital hospitals. In 1999, the NHI reimbursed the Picture Archiving and Communication System (PACS) for the first time in the world, which resulted in the rapid dissemination of PACS to Korean hospitals. Large hospitals in Korea subsequently introduced electronic health record (EHR) systems, and in 2003, Seoul National University Bundang Hospital (SNUBH), the first paperless hospital with a fully digitalized EHR system, was opened. Since then, most large hospitals in Korea have stored all their data digitally. As these large hospitals carry out at least 2 million medical treatments (outpatient and inpatient) a year, a huge amount of EHR data, laboratory tests, signal data, and medical images is stored in each hospital in a form that can be analyzed.

South Korea thus provides a unique example of the evolution of AI in Asia: Koreans have high access to medical care, including ophthalmic care; there is a sufficient number of well-trained ophthalmologists; medical costs are strictly controlled by the government; and individual large medical institutions hold hundreds of millions of medical records, with corresponding data for 2 to 5 million patients, so that they often have sufficient data for AI research on their own.


The most common use of AI in Asia and worldwide has been the analysis of color fundus photography (CFP) for diabetic retinopathy (DR). Studies on ML for DR screening have been conducted in Thailand since digital CFP became available in the late 1990s. In one study, an automated retinal disease assessment (ARDA) was trained to recognize normal retinal components, including the optic disc, foveal center, and retinal vessels, and abnormal retinal features found in DR, including exudates, hemorrhages, and microaneurysms. It was then tested on digital CFP of 336 eyes of patients with diabetes examined by retinal specialists; of these, 221 eyes had no DR, whereas 115 eyes had nonproliferative DR (NPDR). The ARDA had a sensitivity and specificity of 74.8% and 82.7%, respectively.3

The ARDA in this study was continuing work from a previous study, which identified the optic disc, retinal vessels, and the fovea in 112 normal CFP with sensitivities and specificities of 99.1% and 99.1%, 83.3% and 91.0%, and 80.4% and 99.1%, respectively.4 A similar ARDA model for exudate recognition was applied to 30 CFP, of which 21 contained exudates and the rest were normal. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, using grading by an ophthalmologist as the standard. For the 14 images in which hemorrhages and microaneurysms were present, the ARDA achieved a sensitivity of 77.5% and a specificity of 88.7%.5 The ARDA was then advanced from detecting DR lesions to screening for referable DR.6 It was generally accepted that AI based on this conventional ML reached a plateau of sensitivity around 90% with specificity around 45%.7

In contrast, a study from Thailand found that 6 trained human graders who were not ophthalmologists could achieve an average sensitivity of 0.87 and specificity of 0.78 in grading 400 CFP of patients with diabetes, using the consensus of retinal specialists as the standard.8 Results from this and other studies on trained graders may be the reason why many DR screening programs, such as those in Thailand, India, and Singapore, have since adopted trained graders for DR screening.

There may, however, be barriers to the sustainability of screening for DR with trained graders, even in high-resourced settings.9 Graders require refresher courses and continuing education, and scaling up screening to meet the growing diabetes epidemic is another challenge for human graders.



Gulshan et al10 from Google Health, along with a team of ophthalmologists from India, were the first to show that a DL algorithm based on retinal images achieved very high accuracy (AUC ≥0.99) in detecting referable DR. Since then, several DL algorithms have been developed and validated in diverse populations. Subsequently, the same algorithm was tested in real time at 2 tertiary eye care centers in India; the study showed that the AI generalizes to this population of Indian patients in a prospective setting and demonstrated the feasibility of using an AI DR grading system to expand screening programs.11 Rajalakshmi et al12 assessed the role of AI-based automated software in the detection of DR and sight-threatening DR (STDR) from fundus photographs taken with a smartphone-based device and validated it against ophthalmologist grading. They concluded that automated AI analysis of smartphone retinal images has very high sensitivity for detecting DR and STDR and can thus serve as an initial tool for mass retinal screening for DR.

Natarajan et al13 studied the use of an offline AI system in community screening for referable DR with a smartphone-based fundus camera. The sensitivity and specificity of the offline AI system were 100.0% and 88.4%, respectively, for diagnosing referable DR, and 85.2% and 92.0%, respectively, for diagnosing any DR, compared with ophthalmologist grading of the same images. In India, AI algorithms are currently used for DR screening in different ways. Autonomous AI has a role in screening by nonophthalmologists who have little training in diagnosing DR, and a few centers are using it in physician clinics. For anterior segment specialists, an assistive model is in use at some centers.14


The DL-based AI for DR developed by Google Health was also studied in >7400 patients with diabetes across all the health regions of Thailand. Across 25,326 gradable retinal images, with grades adjudicated by a panel of international retinal specialists as the reference standard, and compared with the trained human graders who actually perform the grading in the national program, the AI detected referable DR (moderate NPDR or worse) with significantly higher sensitivity (0.97 vs 0.74, P < 0.001) and slightly lower specificity (0.96 vs 0.98, P < 0.001). The higher sensitivity of the AI was also observed for each of the categories of severe NPDR, proliferative DR, and diabetic macular edema (DME) (P < 0.001 for all comparisons). Based on these results, the AI may reduce the false-negative rate by 23 percentage points at the cost of a false-positive rate higher by 2 percentage points.15
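The sensitivity-specificity trade-off reported here can be checked directly from the stated operating points (a minimal arithmetic sketch; rates are expressed per screened eye in each class):

```python
# Reported operating points from the Thai national-program comparison.
sens_ai, sens_human = 0.97, 0.74      # sensitivity for referable DR
spec_ai, spec_human = 0.96, 0.98      # specificity for referable DR

# False-negative rate = 1 - sensitivity (missed referable cases).
fn_ai, fn_human = 1 - sens_ai, 1 - sens_human
# False-positive rate = 1 - specificity (unnecessary referrals).
fp_ai, fp_human = 1 - spec_ai, 1 - spec_human

print(f"False negatives drop by {100 * (fn_human - fn_ai):.0f} percentage points")  # 23
print(f"False positives rise by {100 * (fp_ai - fp_human):.0f} percentage points")  # 2
```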

Another study exploring the use of this DL-based AI for real-world screening of DR is being conducted prospectively in Thailand. Many issues require consideration in real-world deployment of AI: for example, patient acceptability, image gradability when pupils are not dilated, and the fitting of AI into the screening workflow. The results of this prospective study are expected in early 2020.

Another study in Thailand involved developing a further AI algorithm to detect center-involved diabetic macular edema (CI-DME), which is generally detected with optical coherence tomography (OCT) devices in retina clinics, from CFP. This new AI algorithm, trained with labels derived from OCT, was able to detect CI-DME from CFP with an area under the receiver operating characteristic curve (AUC) of 0.89 [95% confidence interval (CI): 0.87–0.91], corresponding to 85% sensitivity at 80% specificity. In comparison, retinal specialists grading the same set of images had similar sensitivities (82%–85%) but only half the specificity (45%–50%, P < 0.001). The algorithm could also detect the presence of intraretinal fluid with an AUC of 0.81 (95% CI: 0.81–0.86) and subretinal fluid with an AUC of 0.88 (95% CI: 0.85–0.91). This study showed the potential of applying DL to simple 2D retinal images to detect, with acceptable accuracy, features that normally require sophisticated 3D imaging technology.16


In 2017, in collaboration with 30 physicians and computer scientists worldwide, the Singapore Eye Research Institute (SERI) AI laboratory developed and tested a DL system, using a customized convolutional neural network (VGG-19) and approximately 500,000 retinal images, for detecting 3 major blinding eye conditions, namely DR, glaucoma, and age-related macular degeneration.17 The DL system showed an AUC, sensitivity, and specificity of 0.936, 90.5%, and 91.6%, respectively, in detecting referable DR; for vision-threatening DR (VTDR), the corresponding statistics were 0.958, 100%, and 91.1%. The DR algorithm was further tested on 10 independent testing datasets to validate the generalizability of the system across varying reference standards, retinal cameras, races, age groups, sexes, and levels of glycemic control, with AUCs ranging from 0.889 to 0.983.

In a subsequent multicenter AI study by Ting et al, the same system was used to estimate the prevalence of any DR, referable DR, and VTDR in datasets of 19,000 patients from 5 races (Chinese, Malay, Indian, African American, and White) and 4 countries (Singapore, United States, China, and Australia).18 The findings showed that the more severe forms of diabetic eye disease are more likely to occur in patients with younger age, longer diabetes duration, poorer glycemic control, and higher systolic blood pressure, highlighting the importance of controlling these systemic risk factors in patients with diabetes to prevent visual impairment. The AI system also compared favorably with grading by 17 human assessors (10 eye specialists and 7 nonmedical graders), requiring a much shorter time (1 month vs 2 years).

Given the shortage of medical experts for DR screening in under-resourced settings such as those in Africa, the SERI AI laboratory also explored the use of the DL system in Zambia, in collaboration with Moorfields Eye Hospital, the Zambian Ministry of Health, and local hospitals. Zambia ranked 159th (of 194 countries) in gross domestic product per capita in 2018, with fewer than 3 ophthalmologists per million population. In a study by Bellemo et al,20 the SERI DL system showed an AUC for referable DR of 0.973, with a corresponding sensitivity of 92.25% and specificity of 89.04%. Sensitivity was 99.42% for VTDR and 97.19% for DME. The AI model and human graders produced similar estimates of referable DR prevalence and of associated systemic risk factors: both identified a longer duration of diabetes, a higher HbA1c level, and increased systolic blood pressure as risk factors associated with referable DR.

At present, the Singapore AI team is also working in several other domains, including OCT for retinal diseases, glaucoma and anterior segment diseases, myopia, and the proteomics and genetics domains.


In another study in China, a further system of DL algorithms based on CFP achieved excellent performance (AUC 0.955, sensitivity 92.5%, and specificity 98.5%), close to that of ophthalmologists in DR diagnosis.19 In combination with portable devices, such diagnostic algorithms are convenient to apply in DR screening. Apart from local medical services, telemedicine systems for DR screening have also demonstrated high reliability: high rates of consistency between the system and ophthalmologists were observed for moderate or severe nonproliferative DR (Kappa = 0.92) and for other DR grades (Kappa = 1). The telemedicine system offers the potential to increase DR screening rates in China.21

Hong Kong

In addition to collaborating with Singapore and other countries on developing and validating DL algorithms for screening diabetic eye diseases from CFP,17 Cheung et al (unpublished data) in Hong Kong are studying spectral-domain OCT (SDOCT) images for addition to DR screening programs, aiming to reduce the false-positive rate of DME detection from fundus photography. They have therefore focused on training AI on SDOCT images to detect DME. A multi-task 3D DL algorithm for identifying DME and non-DME was developed using SDOCT images collected from individuals with diabetes in Hong Kong. In external validation using datasets from other countries, the proposed multi-task DL algorithm detected DME and non-DME from different OCT devices with high accuracy, and may be a very useful DME screening tool that could save resources and substantially speed up workflow (unpublished data).

South Korea

Although DR screening is at the forefront of AI development in many countries, this is not the case in South Korea, which has no national DR screening program. However, the Korean NHI provides a regular health checkup program to the entire adult population at least every other year, which lets Koreans know whether they have diabetes. In addition, although fundus examination is not included in the national health checkup program, health care providers offer CFP to examinees for a small additional fee, so Koreans already undergo extensive fundus photography. DR screening therefore has a lower priority for the Korean health authority, and there is less need to develop algorithms that classify DR alone.


Instead, what is needed in Korea is an AI algorithm with the expertise of a comprehensive ophthalmologist, able to detect many common retinal abnormalities in addition to DR. As most health checkup centers do not have an ophthalmologist to read the CFPs taken, general physicians often read them with low accuracy. Therefore, using the SNUBH database of 286,050 annotations from 95,350 CFPs, Son et al22 developed DL classification models for 12 different abnormal features observed in CFPs, rather than models that present a diagnosis alone. Their algorithms had high AUCs for the in-house datasets (96.2%–99.9%), slightly lower AUCs for external datasets (94.7%–98.0%), and detection accuracy comparable to that of 3 retinal specialists. The algorithms also provide lesion heatmaps indicating the probable area of salient features, which should be helpful to the nonophthalmologist physicians who read CFPs.

The SNUBH database was annotated by 57 ophthalmologists, with each CFP labeled by 3 independent ophthalmologists. The annotations contain information not only about abnormal findings but also about image quality, the diagnosis of abnormal findings, and the need for referral.23 Using these multidimensional annotations, an algorithm should be able to determine image quality, find abnormal findings, present diagnoses, and decide whether to refer, like a comprehensive ophthalmologist.


Glaucoma is the leading cause of irreversible blindness. SDOCT has been proposed for screening for glaucomatous optic neuropathy (GON) in high-risk communities (eg, elderly people, people with high myopia, people with a family history of glaucoma, or people with high intraocular pressure).24,25 However, experienced glaucoma specialists or highly trained assessors are required to interpret SDOCT results: for example, to assess retinal nerve fiber layer (RNFL) thinning against a normative database, to judge how other factors (eg, myopia, optic disc size) influence RNFL thinning, to assess image quality (eg, signal strength) and artifacts, and to identify software errors.26–30

Ran et al, in Hong Kong, have developed 3D DL algorithms for filtering ungradable SDOCT scans and for automated detection of GON,31,32 aiming to utilize GON-related features of the entire retinal layered structure and the optic nerve head from the whole 3D volumetric SDOCT cube that are not shown in 2D CFP. In external validation for GON detection, the 3D DL algorithm was tested in 3 independent datasets and showed good performance, with AUCs of 0.893 to 0.897, sensitivities of 78% to 90%, specificities of 79% to 86%, and accuracies of 80% to 86%. In addition, the DL algorithm was able to identify most preperimetric glaucoma eyes as having GON, which might warrant further examination by glaucoma specialists.32 These results suggest that screening with the 3D DL algorithm is much faster than conventional glaucoma screening (ie, by experienced specialists) and does not require a large number of trained personnel on site. Prospective studies are now being initiated in Hong Kong to estimate the incremental cost-effectiveness of incorporating this DL-based model into glaucoma screening and to test the ability of the 3D DL algorithm to detect damage in people with suspected glaucoma.

In China, Li et al33 developed a DL system for glaucoma diagnosis based on visual fields, which has since been integrated into a smartphone application and tested in the real world (unpublished data). The DL system is designed to scan printed visual field reports and generate diagnostic results, which is helpful for both patients and ophthalmologists. Other algorithms developed by Li et al and Liu et al based on CFP have focused on screening for glaucoma suspects.34,35 These 2 DL systems achieved AUCs of 0.986 and 0.996, sensitivities of 95.6% and 96.2%, and specificities of 96.0% and 97.7%, respectively, for detecting GON from CFP.


In 2017, the first AI clinic for cataract was established at Zhongshan Ophthalmic Center, Guangzhou, China. The clinic was based on a DL algorithm by Liu et al, who constructed the first platform for screening for congenital cataract and verified its performance in a multicenter trial.36,37 The platform demonstrated diagnostic performance close to that of ophthalmologists but with higher efficiency. Besides cataract, Liu et al also developed an algorithm predicting the prognosis of myopia in Chinese school-aged children38 and an algorithm discriminating ocular diseases in infants,38,39 providing a new way to detect and control visual impairment in young children early.


In addition to eye diseases, retinal imaging has provided a means to study neuronal structure and vasculature in the retina in patients with dementia and stroke, given the similarities between the retina and the brain.40,41 For example, studies in Hong Kong have shown thinning of the peripapillary RNFL and macular ganglion cell-inner plexiform layer, and a sparse retinal vasculature, in patients with Alzheimer's disease (AD).42–45 AD, the most common form of dementia, is a major public health and clinical challenge globally. Cheung et al (unpublished data) are currently developing DL algorithms to detect AD from retinal images. The availability of retinal imaging in eye clinics for assessing ocular diseases, together with the development of an AI algorithm for AD, may allow opportunistic screening for AD on a large scale, enhancing the potential of retinal imaging as a point-of-care tool for early detection and screening of AD in population-based and clinical settings.


Medical image databases are important for algorithm development and training, and high-performance algorithms rely on high-quality data. To promote the development of diagnostic algorithms in ophthalmology, the largest public datasets of ocular imaging, iChallenge, were founded by Orlando et al45 in China. iChallenge currently comprises 4 datasets (REFUGE, ADAM, PALM, and AGE), each focusing on a specific topic in diagnosis or image segmentation. Related challenges have been held at international medical imaging conferences, including MICCAI and ISBI. A review of the REFUGE dataset summarized all the algorithms in the REFUGE Challenge at MICCAI 2018 and provided a unified framework for evaluating automated methods for glaucoma assessment from CFPs.46

In Korea, AI-based medical devices are being actively developed for images (x-rays, computed tomography, magnetic resonance imaging, mammography, among others), signals, and EHR data. As of November 2019, about 10 algorithms reading medical images had already been approved by the Korean regulatory authority, and about 14 clinical trials of AI algorithms are underway toward approval. The CFP reading algorithm described above is expected to be approved in February 2020. All these algorithms aim to provide better health care and improve the health care system in Korea.

Meanwhile, the Singapore integrated DL system for DR screening has recently been approved by the Singapore Health Sciences Authority (HSA) and was listed as one of the national AI strategies in Singapore, with a plan to achieve full clinical integration for DR screening over the next few years.


In the global AI ophthalmology setting, Singapore also led 2 major AI review articles with leading experts in the field, including the Google Health AI team, summarizing state-of-the-art AI technologies, the key technical and clinical aspects, and the unmet needs in AI ophthalmology.47,48 In October 2019, the American Academy of Ophthalmology AI task force was founded by a group of 10 global AI experts, with Singapore represented among the members. The task force was established to evaluate the current and future directions of AI in ophthalmology, including reporting guidelines, medical ethics, regulatory requirements, medical education, data privacy, and data-sharing challenges. In addition, having served on several editorial boards and published a number of editorials,49–51 Ting et al also published an educational article sharing with the general ophthalmology community how to decipher an AI article.52 This helps general ophthalmologists understand the potential pitfalls of each AI system in clinical application settings.


As in most other studies in medical science, the objective of an AI study is very important: the AI model should address an unmet need in ophthalmology. In terms of study design, a typical AI study comprises 2 parts: development and validation.

For development, investigators typically use datasets with a certain number of data points, such as images or other pieces of information, drawn from the populations in which the AI model will be applied. Clear identification of the population in the development dataset is very important for the generalizability of an AI model; for example, AI developed from clinical trial populations may perform poorly in primary care populations. The number of images or data points used for development is also essential, since a DL system trained with more data should achieve higher performance. This differs from AI based on conventional ML, in which rule-based feature extraction is applied and the number of data points may matter less than the rules.

Labeling of the input data used to train AI models is another important aspect: model performance cannot be good if poorly labeled data are used for training. Another interesting aspect of labeling is label transfer. An example of an AI model using label transfer is the model described above that uses OCT-derived labels to train an algorithm to detect CI-DME from CFP alone.
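As an illustration of label transfer, the sketch below pairs fundus photographs with labels derived from same-visit OCT measurements. The 300 µm threshold, field names, and record layout are invented for illustration; a real study would define CI-DME from its own OCT grading protocol.

```python
def transfer_labels(cfp_records, oct_records, cst_threshold_um=300):
    """Build a CFP training set whose CI-DME labels come from OCT.

    cfp_records: {patient_id: image_path}
    oct_records: {patient_id: central_subfield_thickness_um}
    """
    dataset = []
    for pid, image_path in cfp_records.items():
        if pid not in oct_records:
            continue  # no same-visit OCT, so no label can be transferred
        # Hypothetical rule: CI-DME if central subfield thickness >= threshold.
        label = int(oct_records[pid] >= cst_threshold_um)
        dataset.append((image_path, label))
    return dataset

cfp = {"p1": "p1_cfp.png", "p2": "p2_cfp.png", "p3": "p3_cfp.png"}
oct_cst = {"p1": 250, "p2": 412}  # micrometers, hypothetical values
print(transfer_labels(cfp, oct_cst))
# [('p1_cfp.png', 0), ('p2_cfp.png', 1)]
```

The fundus model never sees the OCT scans themselves; it only inherits their labels, which is what allows it to learn a 3D-derived outcome from 2D images.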

AI engineers usually split the development dataset into 60% to 80% for training and 20% to 40% for testing, the latter kept to assess model performance after training is complete. Performance on this testing dataset (sometimes called internal validation, because the data come from the same dataset as the training data) is generally good.
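The split described above can be sketched as follows (a minimal illustration with hypothetical image identifiers; real pipelines would typically shuffle at the patient level so that images from the same patient cannot land on both sides of the split):

```python
import random

def split_dataset(items, train_fraction=0.8, seed=42):
    """Randomly split a development dataset into training and internal
    testing subsets, as commonly done before external validation."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical image identifiers standing in for fundus photographs.
images = [f"cfp_{i:04d}.png" for i in range(1000)]
train, test = split_dataset(images, train_fraction=0.8)
print(len(train), len(test))  # 800 200
```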

For the validation part, many studies validate the model only on the testing dataset, which cannot guarantee model performance on other new datasets. Validation of the model using independent datasets is therefore very important; when model performance drops significantly on a new dataset, this indicates overfitting.

Validation of the AI can use either retrospective or prospective data. The latter helps judge model performance in the real-world setting, in which many uncontrollable factors may degrade a model's performance.

The statistical methods for validating an AI model, usually an imaging model with binary outcomes, are diagnostic metrics, such as AUC, sensitivity, and specificity, or agreement measures, such as Cohen Kappa. One should bear in mind that a high AUC does not necessarily mean high sensitivity and high specificity; suggested complementary methods are positive and negative predictive values or the precision-recall curve. For an AI model with continuous outcomes, such as intraocular pressure, scatter plots with mean absolute error or correlation may be used, complemented by root mean square error or a Bland-Altman plot.53
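As a minimal sketch, the binary diagnostic metrics and Cohen Kappa mentioned above can be computed from paired labels (the grades below are hypothetical; libraries such as scikit-learn provide equivalent functions):

```python
def confusion_counts(y_true, y_pred):
    """Tally true/false positives and negatives for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def sensitivity_specificity(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def cohen_kappa(y_true, y_pred):
    """Chance-corrected agreement between reference grades and model output."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    n = tp + tn + fp + fn
    p_observed = (tp + tn) / n
    # Expected agreement under independent marginal rates.
    p_expected = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical reference-standard grades vs model outputs (1 = referable DR).
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
model = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, model)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} kappa={cohen_kappa(truth, model):.2f}")
# sensitivity=0.75 specificity=0.83 kappa=0.58
```

Note how agreement (80% of grades match) translates into a lower Kappa once chance agreement is subtracted, which is why Kappa is preferred for comparing graders.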


To achieve a successful translation, one needs to consider the following. First, apart from smart DL scientists, AI in health care simply cannot be done without good clinical datasets, which will always need to be contributed by excellent clinical teams comprising multiple stakeholders: clinicians, nurses, allied health professionals, graders, research scientists, and statisticians. Next, to deploy AI products clinically, AI research groups need an excellent business and product management team to build the platform and solutions and, more importantly, to obtain a sustainable stream of funding, not only to implement them in real-world settings but also to continue research and development to enhance the products as more data or novel techniques become available. Finally, AI solutions will always need to be supported by a robust ecosystem, spanning medical ethics, medicolegal and product regulation, cybersecurity, telecommunication capabilities (4G or 5G), supercomputing power, health economic analysis, and education of the current and next generations on basic concepts and on the do's and don'ts of applications in medical settings.

AI can augment human intelligence to improve decision-making and operational processes in clinical ophthalmology. AI will become ubiquitous and indispensable for eye disease screening, with the expectation of improving the efficiency and accessibility of screening programs, and will thereby help prevent visual loss and blindness from these devastating diseases. AI in ophthalmology will keep evolving; without a doubt, AI in Asia will play a major part in that evolution.


1. Resnikoff S, Lansingh VC, Washburn L, et al. Estimated number of ophthalmologists worldwide (International Council of Ophthalmology update): will we meet the needs? Br J Ophthalmol 2019; [Epub ahead of print] doi: 10.1136/bjophthalmol-2019-314336.
2. Park SJ, Kwon KE, Choi NK, et al. Prevalence and incidence of exudative age-related macular degeneration in South Korea: a nationwide population-based study. Ophthalmology 2015; 122:2063–2070. e1.
3. Singalavanija A, Supokavej J, Bamroongsuk P, et al. Feasibility study on computer-aided screening for diabetic retinopathy. Jpn J Ophthalmol 2006; 50:361–366.
4. Sinthanayothin C, Boyce JF, Cook HL, et al. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. Br J Ophthalmol 1999; 83:902–910.
5. Sinthanayothin C, Boyce JF, Williamson TH, et al. Automated detection of diabetic retinopathy on digital fundus images. Diabet Med 2002; 19:105–112.
6. Usher D, Dumskyj M, Himaga M, et al. Automated detection of diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening. Diabet Med 2004; 21:84–90.
7. Abràmoff MD, Reinhardt JM, Russell SR, et al. Automated early detection of diabetic retinopathy. Ophthalmology 2010; 117:1147–1154.
8. Ruamviboonsuk P, Teerasuwanajak K, Tiensuwan M, et al. Interobserver agreement in the interpretation of single-field digital fundus images for diabetic retinopathy screening. Ophthalmology 2006; 113:826–832.
9. Wong TY, Sabanayagam C. Strategies to tackle the global burden of diabetic retinopathy: from epidemiology to artificial intelligence. Ophthalmologica 2020; 243:9–20.
10. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316:2402–2410.
11. Gulshan V, Rajan RP, Widner K, et al. Performance of a deep-learning algorithm vs manual grading for detecting diabetic retinopathy in India. JAMA Ophthalmol 2019; 137:987–993.
12. Rajalakshmi R, Subashini R, Anjana RM, et al. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye (Lond) 2018; 32:1138–1144.
13. Natarajan S, Jain A, Krishnan R, et al. Diagnostic accuracy of community-based diabetic retinopathy screening with an offline artificial intelligence system on a smartphone. JAMA Ophthalmol 2019; 137:1182–1188.
14. Akkara JD, Kuriakose A. Role of artificial intelligence and machine learning in ophthalmology. Kerala J Ophthalmol 2019; 31:150–160.
15. Ruamviboonsuk P, Krause J, Chotcomwongse P, et al. Deep learning versus human graders for classifying diabetic retinopathy severity in a nationwide screening program. NPJ Digit Med 2019; 2:68.
16. Varadarajan A, Bavishi P, Ruamviboonsuk P, et al. Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning. Nat Commun 2020; 11:130.
17. Ting DSW, Cheung CY, Lim G, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 2017; 318:2211–2223.
18. Ting DSW, Cheung CY, Nguyen Q, et al. Deep learning in estimating prevalence and systemic risk factors for diabetic retinopathy: a multi-ethnic study. NPJ Digit Med 2019; 2:24.
19. Li Z, Keel S, Liu C, et al. An automated grading system for detection of vision-threatening referable diabetic retinopathy on the basis of color fundus photographs. Diabetes Care 2018; 41:2509–2516.
20. Bellemo V, Lim G, Rim TH, et al. Artificial intelligence screening for diabetic retinopathy: the real-world emerging application. Curr Diab Rep 2019; 19:72.
21. Peng J, Zou H, Wang W, et al. Implementation and first-year screening results of an ocular telehealth system for diabetic retinopathy in China. BMC Health Serv Res 2011; 11:250.
22. Son J, Shin JY, Kim HD, et al. Development and validation of deep learning models for screening multiple abnormal findings in retinal fundus images. Ophthalmology 2020; 127:85–94.
23. Park SJ, Shin JY, Kim S, et al. A novel fundus image reading tool for efficient generation of a multi-dimensional categorical image database for machine learning algorithm training. J Korean Med Sci 2018; 33:e239.
24. Klein BE, Johnson CA, Meuer SM, et al. Nerve fiber layer thickness and characteristics associated with glaucoma in community living older adults: prelude to a screening trial? Ophthalmic Epidemiol 2017; 24:104–110.
25. Liu MM, Cho C, Jefferys JL, et al. Use of optical coherence tomography by nonexpert personnel as a screening approach for glaucoma. J Glaucoma 2018; 27:64–70.
26. Cheung CY, Chan N, Leung CK. Retinal nerve fiber layer imaging with spectral-domain optical coherence tomography: impact of signal strength on analysis of the RNFL map. Asia Pac J Ophthalmol (Phila) 2012; 1:19–23.
27. Knight OJ, Girkin CA, Budenz DL, et al. Effect of race, age, and axial length on optic nerve head parameters and retinal nerve fiber layer thickness measured by Cirrus HD-OCT. Arch Ophthalmol 2012; 130:312–318.
28. Leung CK, Yu M, Weinreb RN, et al. Retinal nerve fiber layer imaging with spectral-domain optical coherence tomography: interpreting the RNFL maps in healthy myopic eyes. Invest Ophthalmol Vis Sci 2012; 53:7194–7200.
29. Cheung CY, Chen D, Wong TY, et al. Determinants of quantitative optic nerve measurements using spectral domain optical coherence tomography in a population-based sample of non-glaucomatous subjects. Invest Ophthalmol Vis Sci 2011; 52:9629–9635.
30. Qiu KL, Zhang MZ, Leung CK, et al. Diagnostic classification of retinal nerve fiber layer measurement in myopic eyes: a comparison between time-domain and spectral-domain optical coherence tomography. Am J Ophthalmol 2011; 152:646–653.e2.
31. Ran AR, Shi J, Ngai AK, et al. Artificial intelligence deep learning algorithm for discriminating ungradable optical coherence tomography three-dimensional volumetric optic disc scans. Neurophotonics 2019; 6:041110.
32. Ran AR, Cheung CY, Wang X, et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. Lancet Digit Health 2019; 1:e172–e182.
33. Li F, Wang Z, Qu G, et al. Automatic differentiation of glaucoma visual field from non-glaucoma visual field using deep convolutional neural network. BMC Med Imaging 2019; 19:40.
34. Li Z, He Y, Keel S, et al. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 2018; 125:1199–1206.
35. Liu H, Li L, Wormstone IM, et al. Development and validation of a deep learning system to detect glaucomatous optic neuropathy using fundus photographs. JAMA Ophthalmol 2019; 137:1353–1360.
36. Long E, Lin H, Liu Z, et al. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts. Nat Biomed Eng 2017; 1:0024.
37. Lin H, Li R, Liu Z, et al. Diagnostic efficacy and therapeutic decision-making capacity of an artificial intelligence platform for childhood cataracts in eye clinics: a multicentre randomized controlled trial. EClinicalMedicine 2019; 9:52–59.
38. Lin H, Long E, Ding X, et al. Prediction of myopia development among Chinese school-aged children using refraction data from electronic medical records: a retrospective, multicentre machine learning study. PLoS Med 2018; 15:e1002674.
39. Long E, Liu Z, Xiang Y, et al. Discrimination of the behavioural dynamics of visually impaired infants via deep learning. Nat Biomed Eng 2019; 3:860–869.
40. Cheung CY, Ikram MK, Chen C, et al. Imaging retina to study dementia and stroke. Prog Retin Eye Res 2017; 57:89–107.
41. Cheung CY, Chan VTT, Mok VC, et al. Potential retinal biomarkers for dementia: what is new? Curr Opin Neurol 2019; 32:82–91.
42. Cheung CY, Ong YT, Hilal S, et al. Retinal ganglion cell analysis using high-definition optical coherence tomography in patients with mild cognitive impairment and Alzheimer's disease. J Alzheimers Dis 2015; 45:45–56.
43. Chan VTT, Sun Z, Tang S, et al. Spectral-domain OCT measurements in Alzheimer's disease: a systematic review and meta-analysis. Ophthalmology 2019; 126:497–510.
44. Cheung CY, Ong YT, Ikram MK, et al. Microvascular network alterations in the retina of patients with Alzheimer's disease. Alzheimers Dement 2014; 10:135–142.
45. Orlando JI, Fu H, Barbosa Breda J, et al. REFUGE Challenge: a unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020; 59:101570.
46. Orlando JI, Fu H, Breda JB, et al. REFUGE Challenge: a unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020; 59:101570.
47. Ting DSW, Peng L, Varadarajan AV, et al. Deep learning in ophthalmology: the technical and clinical considerations. Prog Retin Eye Res 2019; 72:100759.
48. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2019; 103:167–175.
49. Ting DSW, Liu Y, Burlina P, et al. AI for medical imaging goes deep. Nat Med 2018; 24:539–540.
50. Ting DS, Gunasekeran DV, Wickham L, et al. Next generation telemedicine platforms to screen and triage. Br J Ophthalmol 2020; 104:299–300.
51. Ting DSJ, Ang M, Mehta JS, et al. Artificial intelligence-assisted telemedicine platform for cataract screening and management: a potential model of care for global eye health. Br J Ophthalmol 2019; 103:1537–1538.
52. Ting DSW, Lee AY, Wong TY. An ophthalmologist's guide to deciphering studies in artificial intelligence. Ophthalmology 2019; 126:1475–1479.
53. Yu M, Tham YC, Rim TH, et al. Report on deep learning algorithms in health care. Lancet Digit Health 2019. Available at: Accessed December 29, 2019.

Keywords: artificial intelligence; deep learning; ophthalmology

    Copyright © 2020 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.