Review Article

Artificial Intelligence for Cataract Detection and Management

Goh, Jocelyn Hui Lin∗,†; Lim, Zhi Wei BSc(Hons)∗,‡; Fang, Xiaoling MD∗,§; Anees, Ayesha MSc; Nusinovici, Simon PhD; Rim, Tyler Hyungtaek MD, PhD∗,||; Cheng, Ching-Yu MD, PhD∗,||,∗∗; Tham, Yih-Chung PhD∗,||

Asia-Pacific Journal of Ophthalmology: March-April 2020 - Volume 9 - Issue 2 - p 88-95
doi: 10.1097/01.APO.0000656988.16221.04



Cataract is the leading cause of visual impairment worldwide, accounting for 65.2 million cases of vision impairment and blindness globally.1 With an aging population, this number is projected to increase to 70.5 million worldwide by 2020.1 Importantly, a substantial proportion of these cataract cases remain undiagnosed.2–4 Hence, cataract remains a major public health concern.


To date, cataracts are clinically diagnosed by ophthalmologists using slit-lamp biomicroscopy, and graded based on established clinical scales such as the Lens Opacities Classification System III.5 Notably, this “manual” process requires clinical expertise and therefore poses a significant challenge, particularly in developing countries or rural communities where there are shortages of trained ophthalmologists.6 Furthermore, subjective clinical grading scales carry an inherent limitation: grading results are subject to inter-examiner variability.7 Taken together, the conventional approach to cataract detection, which requires an ophthalmologist's expertise, has considerably limited reach for screening. Hence, with the growing health burden related to cataract, there is an imperative need for novel methods to address existing limitations and revolutionize approaches to cataract detection.


The current mainstay treatment for cataract involves surgical removal accompanied by intraocular lens (IOL) implantation.8 Importantly, the prognosis and visual outcome of cataract procedures depend significantly on the calculation of IOL power, which utilizes formulas based on preoperative ocular biometry measurements.8 Nevertheless, owing to the wide variation of ocular biometry profiles across individuals, there is currently no “one-size-fits-all” formula that can be applied generally to all patients.9

Currently, selection of the IOL power calculation formula in the clinic remains unstandardized and, at times, at the subjective discretion of the surgeon. Ocular parameters such as axial length and keratometry measurements are important factors when determining the suitability of each formula.10–14 However, existing formulas mainly cater to eyes with typical ranges of biometric measurements and may not be adequate for eyes with atypical axial lengths (ie, too short or too long)15 or atypical corneal profiles (ie, eyes with a history of refractive surgery, keratoconus, or microcornea, among others).16–18


In an effort to optimize work processes across various industries, many have turned to artificial intelligence (AI) systems in recent years, particularly in the domains of machine learning and deep learning. Deep learning is an emerging AI technique and a subset of machine learning. It involves the use of artificial neural networks, which consist of multiple layers of artificial neurons that loosely simulate the physiological functions of the human brain.19 Deep learning systems can be trained to extract and process information from images and text, and to perform speech recognition.20 Recently, applications of AI systems in the medical field have shown promising results in narrow tasks such as detection of lung cancer, detection of lymph node metastases secondary to breast cancer, and real-time detection of colonoscopic polyps and adenomas.21–23 In ophthalmology, where massive numbers of images and patient data are available, AI systems have shown promising results in automated detection of age-related eye diseases such as diabetic retinopathy (DR), age-related macular degeneration, and glaucoma.24–27 Growth in computing infrastructure and power has contributed to the rapid adoption of deep learning approaches in AI development. Owing to its exceptional ability to extract high-level features and unrecognized patterns from large amounts of data, deep learning systems can now achieve comparable, if not better, performance than human graders and clinicians in feature-based diagnosis.19
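To make the idea of "multiple layers of artificial neurons" concrete, the following is a minimal, hypothetical sketch of a two-layer feedforward network in pure Python; the weights and inputs are illustrative placeholders, not values from any trained system described in this review.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs followed by a nonlinear (sigmoid) activation,
    # mimicking how an artificial neuron "fires"
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_matrix, biases):
    # One layer = several neurons applied to the same inputs
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Toy network: 3 input features -> 2 hidden neurons -> 1 output score
features = [0.5, -1.2, 0.3]
hidden = layer(features, [[0.4, 0.1, -0.6], [-0.3, 0.8, 0.2]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])[0]
print(output)  # a probability-like score between 0 and 1
```

Real deep learning systems stack many such layers and learn the weights from data rather than fixing them by hand, but the layered structure is the same.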


Following the early promising results of AI systems in various eye diseases, several AI algorithms have also been developed for automated detection and grading of cataract, based on either machine learning or deep learning approaches. Apart from the underlying structures of the AI algorithms, these studies also differed in the types of input images used (eg, slit lamp photographs or color fundus photographs). The details of these studies are elaborated below.


As cataract is “traditionally” diagnosed based on slit lamp examinations, some previous studies solely focused on slit lamp photographs as “training data” for algorithm development for automated detection and grading of nuclear cataract (Table 1).

Previous Studies on Automated Detection and Grading of Cataract Based on Slit-lamp Photographs

For instance, Li et al28 applied a modified Active Shape Model29 (ASM, a type of shape detection algorithm) to first identify the location of the crystalline lens and its nucleus on 5820 slit lamp photographs from the Singapore Malay Eye Study (SiMES). The ASM achieved a 95% success rate in correctly identifying the location of the lens. After excluding ungradable photos and photos with errors in lens location detection, 100 slit lamp photographs were used to develop an algorithm for nuclear cataract severity grading using a Support Vector Machine (SVM, a form of machine learning) regression model. This algorithm was then validated on an internal set of 5490 photographs. Ground truth data were defined based on the Wisconsin cataract grading system, with nuclear cataract scores ranging from 0.1 to 5.30,31 When compared with the reference standard, the algorithm showed a mean difference of 0.36 for nuclear cataract grading.
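The agreement metric used in this line of work, the average difference between algorithm-predicted and reference grades, is straightforward to compute. The sketch below uses hypothetical grades, not data from Li et al:

```python
def mean_absolute_error(predicted, reference):
    """Average absolute difference between algorithm and reference grades."""
    assert len(predicted) == len(reference)
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical nuclear cataract grades on a Wisconsin-style decimal scale
algorithm_grades = [1.2, 2.8, 3.5, 0.9, 4.1]
ophthalmologist_grades = [1.0, 3.0, 3.2, 1.3, 4.0]
mae = mean_absolute_error(algorithm_grades, ophthalmologist_grades)
print(mae)
```

A lower value indicates closer agreement with the clinician-assigned reference standard; this is the same quantity reported as MAE in the subsequent studies.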

In another study, Xu et al32 also used the same modified ASM to identify the lens location, but then applied a bag-of-features model for feature extraction and Group Sparsity Regression (a form of machine learning) for feature selection and nuclear cataract severity grading. The authors trained the algorithm using 100 slit lamp photographs from the ACHIKO-NC dataset, and internal validation was conducted with a total of 5278 slit lamp photographs. The ground truth was also based on the Wisconsin cataract grading system, and the algorithm achieved a mean absolute error (MAE) of 0.336 when compared with the ground truth labels.

Using the same dataset as Xu et al, Gao et al33 established a different system based on a Convolutional-Recursive Neural Network (a form of deep learning) for lens structure detection and automatic feature learning. SVM regression was then utilized for cataract severity grading. Similarly, 100 slit lamp photographs were used for algorithm training and 5278 photographs for internal validation. Based on ground truth data defined according to the Wisconsin cataract grading system, this system achieved a MAE of 0.304, slightly better than that of Xu et al on the exact same dataset. The slight improvement may be due to the Convolutional-Recursive Neural Network's greater capability in feature extraction and learning.19

In a recent large-scale study in China, Wu et al34 utilized deep learning via a residual neural network (ResNet) to establish a 3-step sequential AI algorithm for the diagnosis and referral of cataracts. First, in the capture mode recognition phase, the AI system differentiated slit lamp photographs between mydriatic and nonmydriatic images, and between optical section and diffuse slit-lamp illumination. Second, the images were categorized as normal (ie, no cataract), cataractous, or postoperative IOL. Third, if cataract was detected, the type and severity of the cataract/posterior capsular opacification were evaluated based on the Lens Opacities Classification System II scale,35 and a decision whether to follow up or refer the patient for tertiary care was derived. Upon validation of the system with 37,638 photographs (18,819 eyes) from a Chinese cataract screening program, the AI achieved an area under the receiver-operating characteristic curve (AUC) of >99% (Table 1) for both steps 1 and 2, namely the capture mode recognition and cataract detection phases. Notably, AUCs for evaluation of cataract severity (step 3) were most optimal using mydriatic images with optical sections (AUC 0.9915) and least optimal using nonmydriatic images with diffuse illumination (AUC 0.9328), whereas AUCs for referral accuracy were highest for pediatric cataracts with visual axis involvement (AUC 1.00) and lowest for posterior capsular opacification with visual axis involvement (AUC 0.919).
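The 3-step sequential design can be pictured as a cascade in which each stage's output conditions the next. The sketch below is a hypothetical control-flow illustration only: the three classifier functions are trivial stand-ins for the trained ResNet models, and the severity threshold is an assumed placeholder, not Wu et al's actual referral rule.

```python
# Hypothetical stand-ins for the three trained deep-learning classifiers;
# in the actual system each step is a ResNet operating on the photograph.
def classify_capture_mode(photo):
    return photo["mode"]      # step 1: mydriatic/nonmydriatic, optical/diffuse

def classify_lens_status(photo, mode):
    return photo["status"]    # step 2: "normal", "cataract", or "postop IOL"

def grade_severity(photo, mode):
    return photo["grade"]     # step 3: LOCS II-style severity grade

def triage(photo):
    mode = classify_capture_mode(photo)
    status = classify_lens_status(photo, mode)
    if status != "cataract":
        return {"status": status, "action": "no referral"}
    severity = grade_severity(photo, mode)
    # Assumed threshold for illustration: higher grades trigger referral
    action = "refer to tertiary care" if severity >= 2 else "follow-up"
    return {"status": status, "severity": severity, "action": action}

result = triage({"mode": "mydriatic-optical-section",
                 "status": "cataract", "grade": 3})
print(result["action"])
```

The cascade structure means images recognized as normal or postoperative exit early, so the severity model only ever sees cataractous eyes.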

This AI algorithm was further put on trial as part of a web-based platform in a pilot study conducted in the Yuexiu district of Guangzhou, China. At the first level of this web-based platform, participating residents were able to “self-report” symptoms of decreased visual acuity or blurred vision using a smartphone app. These self-reported cases were subsequently directed to community-based healthcare facilities, where nonmydriatic slit-lamp images were taken by nurses or technicians and processed by the AI algorithm. The algorithm then generated outputs to indicate whether there was a need for referral to ophthalmologists. Compared with the ophthalmologists’ final diagnoses, the sensitivity and specificity of the algorithm for cataract detection were 92.00% and 83.85%, respectively, in this pilot study. Based on these preliminary results, the authors proposed shifting the first “point-of-care” from ophthalmologists to community-based health care facilities (where the AI algorithm would be deployed). It was estimated that this change of practice would free up ophthalmologists’ current workload and potentially enable them to serve approximately 10 times more patients than under the existing health care model.


With the increasing use of retinal imaging in primary care settings, such as for DR screening,36,37 other groups have also explored the use of color fundus photographs for the development of automated cataract assessment systems (Table 2), potentially leveraging retinal imaging as an opportunistic screening tool for cataract as well.

Previous Studies on Automatic Detection and Grading of Cataract Based on Color Fundus Photographs

For instance, Dong et al38 trained and developed an AI algorithm (combining machine learning and deep learning) using 5495 fundus images. Features in the images were first extracted by a deep learning network built with the Caffe framework, followed by cataract detection (noncataract or cataractous) and severity grading using a Softmax function. The ground truth was determined by experienced ophthalmologists and was defined based on classification of the “visibility” of fundus images to indicate four classes of cataract severity (normal, mild, moderate, and severe). The accuracy of the system was defined as the proportion of images correctly classified among the total number of images tested. When validated in an internal test set of 2355 images, the authors reported that 94.07% of the images were correctly classified for cataract detection, and 90.82% were correctly classified for cataract severity level.
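The softmax classification and accuracy definition used above can be sketched in a few lines. The per-class scores below are hypothetical stand-ins for the outputs of the deep feature extractor:

```python
import math

def softmax(scores):
    # Exponentiate and normalize so the class scores sum to 1
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

CLASSES = ["normal", "mild", "moderate", "severe"]

def predict(scores):
    probs = softmax(scores)
    return CLASSES[probs.index(max(probs))]

def accuracy(predictions, truths):
    # Proportion of images correctly classified, as defined in the study
    return sum(p == t for p, t in zip(predictions, truths)) / len(truths)

# Hypothetical per-class scores for three fundus images, plus ground truth
scores_batch = [[2.0, 0.5, 0.1, -1.0], [0.1, 0.2, 1.8, 0.0], [0.0, 2.5, 0.3, 0.1]]
truths = ["normal", "moderate", "mild"]
preds = [predict(s) for s in scores_batch]
print(preds, accuracy(preds, truths))
```

Subtracting the maximum score before exponentiating is a standard numerical-stability trick; it leaves the resulting probabilities unchanged.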

In contrast, Ran et al39 developed an algorithm which used a deep convolutional neural network (a form of deep learning) for initial feature extraction and a random forest (a machine learning model) for prediction of cataract presence. A total of 5409 images were used in this study, but the splitting ratio between the training and testing sets was not described. In this instance, the ground truth (ie, presence of cataract) was also determined by experienced ophthalmologists based on the “haziness” level of the fundus images. The system achieved an AUC of 97.04%, sensitivity of 97.26%, and specificity of 96.92% for detection of cataract.

Similar to Ran et al, Pratap and Kokil40 applied a combination of a pre-trained convolutional neural network (a form of deep learning network) and SVM in their AI system. The authors performed transfer learning, whereby the pre-trained convolutional neural network, already trained on millions of natural nonmedical images, was further fine-tuned using 400 fundus images19 for feature learning. Based on a ground truth established by ophthalmologists through labeling the “visibility” of fundus images to denote four classes of cataract severity (noncataract, mild, moderate, and severe cataract), the system correctly classified the presence of cataract in 100% of images and the severity level in 92.91%, when tested on another 400 fundus images. Despite the optimal performance shown, it should be noted that the testing was performed on an internal dataset which might have very similar features to the training set.
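The core idea of transfer learning, keeping a pre-trained feature extractor frozen while fitting only a small classifier head on the new (small) dataset, can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the "pretrained" extractor is a toy stand-in computing brightness and contrast, and the head is a simple perceptron rather than a CNN or SVM.

```python
# Hypothetical stand-in for a CNN pretrained on natural images: its weights
# are frozen, and it maps an image (a list of pixel values) to two features.
def pretrained_features(image):
    return [sum(image) / len(image),     # mean brightness
            max(image) - min(image)]     # contrast

def train_head(images, labels, epochs=20, lr=0.1):
    # Only this small linear head is fitted on the fundus data; the feature
    # extractor above stays fixed -- the essence of transfer learning.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for img, y in zip(images, labels):
            f = pretrained_features(img)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = y - pred               # perceptron update rule
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b

# Toy data: "hazy" (cataractous) images have low contrast (label 1)
images = [[0.5, 0.5, 0.5], [0.1, 0.9, 0.2]]
labels = [1, 0]
w, b = train_head(images, labels)

def predict(image):
    f = pretrained_features(image)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

print([predict(img) for img in images])
```

The practical appeal is that only the small head needs fitting, which is why transfer learning works with a few hundred fundus images rather than the millions required to train a deep network from scratch.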

Unlike the above-mentioned studies, in which combinations of deep learning and machine learning algorithms were used, Zhang et al41 described a deep learning system based solely on a deep convolutional neural network, trained with 4004 fundus images from the Beijing Tongren Eye Center's clinical database. The authors preprocessed the original fundus images using a green channel (G-channel) filter to enhance image contrast and the visibility of retinal vessels.42 The ground truth was similarly established by professional graders through labeling the “visibility” of fundus images to denote four classes of cataract severity (noncataract, mild, moderate, and severe cataract). Using an internal test set of 1606 images, the system achieved an AUC of 93.52% for cataract detection and 86.69% for severity grading.
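The G-channel preprocessing step amounts to discarding the red and blue values of each pixel. A minimal sketch, with a toy two-pixel "image" standing in for a real fundus photograph:

```python
def green_channel(image_rgb):
    # Keep only the G value of each (R, G, B) pixel; retinal vessels show
    # their highest contrast against the background in this channel.
    return [[pixel[1] for pixel in row] for row in image_rgb]

# Toy 1x2 "image": two RGB pixels
image = [[(120, 200, 90), (30, 60, 10)]]
print(green_channel(image))  # -> [[200, 60]]
```

In practice this is done with an image library (eg, slicing one channel of a NumPy array), but the operation is the same single-channel extraction.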

Lastly, Li et al43 developed a deep learning-based system consisting of both ResNet-18 and ResNet-50, using training data of 7030 fundus images from the clinical database of the Beijing Tongren Eye Center. With its deeper network, ResNet-50 was used for the more complex task of cataract severity grading (ie, noncataract, mild, moderate, and severe cataract), whereas ResNet-18 was applied for cataract detection. Similarly, Li et al preprocessed the fundus images to minimize uneven illumination by applying a G-channel filter. The ground truth in this study was also established by professional graders through labeling the “visibility” of fundus images to denote four classes of cataract severity (noncataract, mild, moderate, and severe cataract). Using an internal test set of 1000 images, the system achieved an AUC of 97.2% for cataract detection and 87.7% for severity grading. Furthermore, Li et al generated saliency maps (ie, heatmaps) in an attempt to elucidate the regions “used” by the AI to make predictions on the presence and degree of cataract. However, these heatmap illustrations were not entirely concordant with the degree of cataract severity, indicating room for improvement in the algorithm's performance, especially in better identifying cataract-relevant features.

Overall, the above-mentioned studies showed promising results; however, it should be noted that for most of these studies, the definition of ground truth was either not described in detail or was based on a subjective method of classifying the fundus photo's “haziness” level as labeled ground truth data. Such less-than-ideal “ground truth” data might have rendered the development process less robust and limited the performance of the algorithms.


Current IOL power formulas are based largely on conventional linear assumptions between biometric measures. With the advent of deep learning, AI now has the potential to unravel complex, nonlinear relationships among these ocular parameters, and may generate calculations that better cater to each individual eye's profile.

In this regard, an AI-based formula has been developed with the focus of standardizing the selection of calculation formulas for IOL power. The Ladas Super Formula was derived by extracting features from the respective “ideal portions” of existing formulas (namely, the Hoffer Q, Holladay 1, Holladay 1 with Koch adjustment, Haigis, and SRK/T formulas) and plotting them into a 3-D surface.44 Although the mathematical details of the algorithm were not specified in the literature, the algorithm was described as deriving the “most ideal biometric components” of each existing formula, for instance, axial length and corneal refractive power.44 Taken together, this formula may potentially “automate” the process of formula selection for IOL power calculation, which remains a challenge for inexperienced surgeons. However, Hee45 highlighted that the Ladas Super Formula has its own limitations. Fundamentally, the formula relies on traditional keratometry to estimate corneal refractive power, with the assumption that the ratio between a “uniformly spherical” anterior cornea and the posterior corneal curvature remains unchanged. However, this assumption does not hold true, especially in patients with previous refractive surgery.

Apart from optimizing the formula selection process, AI has also been used to develop algorithms for IOL power calculation. For instance, the Hill-Radial Basis Function (RBF) method has been reported to estimate an individual eye's IOL power based on an algorithm trained on approximately 12,000 eyes with measurements obtained from the Haag-Streit LENSTAR optical biometer.46 Another AI-based formula for IOL power calculation is the Kane formula. It was created using high-performance cloud-based computing, incorporating conventional regression models and machine learning components to refine IOL power predictions.47 However, specific algorithmic details of the Kane formula have not been described in the literature.

The accuracy of the Hill-RBF method and the Kane formula was further assessed in a study by Connell and Kane.48 When comparing formula-estimated and actual postoperative refractive errors in eyes with short axial length (≤22.0 mm), the Kane formula showed a MAE of 0.441 diopters (D), whereas the Hill-RBF method showed a MAE of 0.440 D. In eyes with intermediate axial length (>22.0 to <26.0 mm), the Kane formula demonstrated a MAE of 0.322 D and the Hill-RBF method a MAE of 0.340 D. Lastly, in eyes with long axial length (≥26.0 mm), the Kane formula showed a MAE of 0.326 D, whereas the Hill-RBF method showed a MAE of 0.358 D. Overall, both formulas showed promising results, but further improvements are needed, especially in eyes with short axial lengths.
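The subgroup analysis above, stratifying prediction errors by axial length and averaging the absolute errors within each band, can be sketched as follows. The per-eye errors below are hypothetical illustrations, not data from Connell and Kane:

```python
from collections import defaultdict

def axial_length_group(al_mm):
    # Axial-length subgroups used in Connell and Kane's comparison
    if al_mm <= 22.0:
        return "short"
    if al_mm < 26.0:
        return "intermediate"
    return "long"

# Hypothetical per-eye prediction errors (formula-predicted minus actual
# postoperative refraction, in diopters)
cases = [(21.5, 0.50), (23.4, -0.30), (24.8, 0.30), (27.0, 0.35)]

abs_errors = defaultdict(list)
for al, err in cases:
    abs_errors[axial_length_group(al)].append(abs(err))

mae_by_group = {g: sum(v) / len(v) for g, v in abs_errors.items()}
print(mae_by_group)
```

Taking the absolute value before averaging matters: signed errors of +0.30 D and -0.30 D would otherwise cancel and misleadingly suggest perfect accuracy.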


Most of the above-mentioned studies showed promising results; however, it should be noted that none of these studies further tested their respective algorithms on external test sets. Hence, the generalizability of these algorithms has yet to be proven. Furthermore, the actual utility of these algorithms in real-world settings (eg, community, primary care, or tertiary eye hospitals) remains to be evaluated as well.

With regard to AI applications in IOL power calculation and formula selection, further work is required to tailor current algorithms for eyes with atypical biometric profiles, for example, extremely short or long axial lengths. Development in this aspect is currently limited by the small number of extreme biometric cases available for optimal algorithm training. Furthermore, in patients with a history of refractive surgery, calculation of IOL power using existing formulas poses an additional challenge, as existing formulas were not designed to cater to eyes with past refractive surgery.49,50 This gap is particularly relevant in Asian communities, where uptake of corneal refractive surgery is generally higher compared with Western countries. With the aging trend in Asia, the number of patients in need of cataract surgery who also have past refractive surgery is expected to increase in the coming years. For this reason, new AI-derived formulas or formula selection methods may be explored to improve the accuracy of IOL power estimation in individuals with a history of refractive surgery.

Lastly, for the development of new algorithms or refinement of existing ones, curation of large, robust, and well-annotated clinical datasets remains a challenge. To achieve this, hospitals’ electronic medical records (including clinical data and images), coupled with seamless connection to cloud storage, may offer new opportunities to revolutionize the way we harness and curate high-quality ground truth data for algorithm development. Nevertheless, while establishing and sustaining this “continuous inflow” of electronic clinical data for algorithm development, it is also imperative to safeguard patients’ data confidentiality and to ensure that strict regulations and structured processes for data anonymization are in place.

Future Outlook

With the increasing availability of ocular imaging modalities, including hand-held retinal cameras and retinal camera or slit lamp adapters attached to smartphones,51,52 new AI systems can potentially provide better outreach for cataract screening, especially in rural or less-resourced areas. Furthermore, these imaging modalities are increasingly affordable and easy to use, as images can be captured by trained technicians or nurses. In addition, in settings where retinal imaging is already readily available, for example, DR screening programs in diabetes care clinics, automated cataract assessment based on retinal photos may serve as an opportunistic screening tool at minimal additional cost, as the same photo originally intended for DR screening can also be used for cataract detection.

AI has likewise shown promise in cataract surgery training. Recent work has demonstrated AI's capability to recognize different phases of cataract surgery.53–55 With such a system, surgical workflow may potentially be monitored continuously, and real-time prompts can be triggered when nonoptimal or nonstandard surgical procedures are identified by the AI system.53,56 This may be particularly useful when training ophthalmology residents.


The advent of AI brings new opportunities for developing novel systems and strategies in the areas of cataract detection, grading, and IOL power calculation. With enhanced computing power and increasingly available big data, AI is poised to introduce a paradigm shift in cataract-related clinical practice and services in the foreseeable future. Lastly, despite the hype around AI, it is important to keep our minds and focus grounded on the ultimate goal: implementation.


1. Flaxman SR, Bourne RRA, Resnikoff S, et al. Global causes of blindness and distance vision impairment 1990-2020: a systematic review and meta-analysis. Lancet Global Health 2017; 5:e1221–e1234.
2. Chua J, Lim B, Fenwick EK, et al. Prevalence, risk factors, and impact of undiagnosed visually significant cataract: The Singapore Epidemiology of Eye Diseases Study. PLoS One 2017; 12:e0170804.
3. Varma R, Mohanty SA, Deneen J, Wu J, Azen SP. Burden and predictors of undetected eye disease in Mexican-Americans: the Los Angeles Latino Eye Study. Med Care 2008; 46:497–506.
4. Keel S, McGuiness MB, Foreman J, Taylor HR, Dirani M. The prevalence of visually significant cataract in the Australian National Eye Health Survey. Eye (Lond) 2019; 33:957–964.
5. Chylack LT Jr, Wolfe JK, Singer DM, et al. The Lens Opacities Classification System III. The Longitudinal Study of Cataract Study Group. Arch Ophthalmol 1993; 111:831–836.
6. Resnikoff S, Lansingh VC, Washburn L, et al. Estimated number of ophthalmologists worldwide (International Council of Ophthalmology update): will we meet the needs? Br J Ophthalmol 2019; Epub Ahead of print.
7. Bailey IL, Bullimore MA, Raasch TW, Taylor HR. Clinical grading and the effects of scaling. Invest Ophthalmol Vis Sci 1991; 32:422–432.
8. Liu YC, Wilkins M, Kim T, Malyugin B, Mehta JS. Cataracts. Lancet 2017; 390:600–612.
9. Olsen T. Calculation of intraocular lens power: a review. Acta Ophthalmol Scand 2007; 85:472–485.
10. Aristodemou P, Knox Cartwright NE, Sparrow JM, Johnston RL. Formula choice: Hoffer Q, Holladay 1, or SRK/T and refractive outcomes in 8108 eyes after cataract surgery with biometry by partial coherence interferometry. J Cataract Refract Surg 2011; 37:63–71.
11. Chen C, Xu X, Miao Y, Zheng G, Sun Y, Xu X. Accuracy of intraocular lens power formulas involving 148 eyes with long axial lengths: a retrospective chart-review study. J Ophthalmol 2015; 2015:976847.
12. Zhang Y, Liang XY, Liu S, Lee JW, Bhaskar S, Lam DS. Accuracy of intraocular lens power calculation formulas for highly myopic eyes. J Ophthalmol 2016; 2016:1917268.
13. Rong X, He W, Zhu Q, Qian D, Lu Y, Zhu X. Intraocular lens power calculation in eyes with extreme myopia: comparison of Barrett Universal II, Haigis, and Olsen formulas. J Cataract Refract Surg 2019; 45:732–737.
14. Kane JX, Van Heerden A, Atik A, Petsoglou C. Intraocular lens power formula accuracy: comparison of 7 formulas. J Cataract Refract Surg 2016; 42:1490–1500.
15. Wang L, Shirayama M, Ma XJ, Kohnen T, Koch DD. Optimizing intraocular lens power calculations in eyes with axial lengths above 25.0 mm. J Cataract Refract Surg 2011; 37:2018–2027.
16. Siddiqui AA, Devgan U. Intraocular lens calculations in atypical eyes. Indian J Ophthalmol 2017; 65:1289–1293.
17. Chen X, Yuan F, Wu L. Metaanalysis of intraocular lens power calculation after laser refractive surgery in myopic eyes. J Cataract Refract Surg 2016; 42:163–170.
18. Wang L, Tang M, Huang D, Weikert MP, Koch DD. Comparison of newer intraocular lens power calculation methods for eyes after corneal refractive surgery. Ophthalmology 2015; 122:2443–2449.
19. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018; 2:719–731.
20. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521:436–444.
21. Ardila D, Kiraly AP, Bharadwaj S, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 2019; 25:954–961.
22. Ehteshami Bejnordi B, Veta M, Johannes van Diest P, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017; 318:2199–2210.
23. Wang P, Berzin TM, Glissen Brown JR, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut 2019; 68:1813–1819.
24. Ting DSW, Cheung CY, Lim G, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 2017; 318:2211–2223.
25. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316:2402–2410.
26. Pead E, Megaw R, Cameron J, et al. Automated detection of age-related macular degeneration in color fundus photography: a systematic review. Surv Ophthalmol 2019; 64:498–511.
27. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 2018; 125:1199–1206.
28. Li H, Lim JH, Liu J, et al. An automatic diagnosis system of nuclear cataract using slit-lamp images. Conf Proc IEEE Eng Med Biol Soc 2009; 2009:3693–3696.
29. Li H, Chutatape O. Boundary detection of optic disk by a modified ASM method. Pattern Recognit 2003; 36:2093–2104.
30. Klein BE, Klein R, Linton KL, Magli YL, Neider MW. Assessment of cataracts from photographs in the Beaver Dam Eye Study. Ophthalmology 1990; 97:1428–1433.
31. Dai W, Tham YC, Chee ML, et al. Systemic medications and cortical cataract: the Singapore Epidemiology of Eye Diseases Study. Br J Ophthalmol 2020; 104:330–335.
32. Xu Y, Gao X, Lin S, et al. Automatic Grading of Nuclear Cataracts from Slit-Lamp Lens Images Using Group Sparsity Regression. 2013; Berlin, Heidelberg: Springer Berlin Heidelberg, 468-475.
33. Gao X, Lin S, Wong TY. Automatic feature learning to grade nuclear cataracts based on deep learning. IEEE Trans Biomed Eng 2015; 62:2693–2701.
34. Wu X, Huang Y, Liu Z, et al. Universal artificial intelligence platform for collaborative management of cataracts. Br J Ophthalmol 2019; 103:1553–1560.
35. Chylack LT Jr, Leske MC, McCarthy D, Khu P, Kashiwagi T, Sperduto R. Lens opacities classification system II (LOCS II). Arch Ophthalmol (Chicago, Ill: 1960) 1989; 107:991–997.
36. Lian JX, Gangwani RA, McGhee SM, et al. Systematic screening for diabetic retinopathy (DR) in Hong Kong: prevalence of DR and visual impairment among diabetic population. Br J Ophthalmol 2016; 100:151–155.
37. Prescott G, Sharp P, Goatman K, et al. Improving the cost-effectiveness of photographic screening for diabetic macular oedema: a prospective, multi-centre, UK study. Br J Ophthalmol 2014; 98:1042–1049.
38. Dong Y, Zhang Q, Qiao Z, Yang J. Classification of cataract fundus image based on deep learning. 2017 IEEE International Conference on Imaging Systems and Techniques (IST) 2017; 1–5.
39. Ran J, Niu K, He Z, Zhang H, Song H. Cataract Detection and Grading Based on Combination of Deep Convolutional Neural Network and Random Forests. 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC) 2018; 155–159.
40. Pratap T, Kokil P. Computer-aided diagnosis of cataract using deep transfer learning. Biomedical Signal Processing and Control 2019; 53:101533.
41. Zhang L, Li J, Zhang I, et al. Automatic cataract detection and grading using Deep Convolutional Neural Network. 2017 IEEE 14th International Conference on Networking. Sensing and Control (ICNSC), Calabria 2017; 60–65.
42. Liang Y, He L, Fan C, Wang F, Li W. Preprocessing study of retinal image based on component extraction. IEEE International Symposium on IT in Medicine and Education, Xiamen 2008; 670–672.
43. Li J, Xu X, Guan Y, et al. Automatic Cataract Diagnosis by Image-Based Interpretability, IEEE International Conference on Systems. Man and Cybernetics (SMC), 2018, Miyazaki, Japan, 3964-3969.
44. Ladas JG, Siddiqui AA, Devgan U, Jun AS. A 3-D “super surface” combining modern intraocular lens formulas to generate a “super formula” and maximize accuracy. JAMA Ophthalmol 2015; 133:1431–1436.
45. Hee MR. State-of-the-art of intraocular lens power formulas. JAMA Ophthalmol 2015; 133:1436–1437.
46. Forman D, Newell DG, Fullerton F, et al. Association between infection with Helicobacter pylori and risk of gastric cancer: evidence from a prospective investigation. BMJ 1991; 302:1302–1305.
47. Melles RB, Kane JX, Olsen T, Chang WJ. Update on intraocular lens calculation formulas. Ophthalmology 2019; 126:1334–1335.
48. Connell BJ, Kane JX. Comparison of the Kane formula with existing formulas for intraocular lens power selection. BMJ Open Ophthalmol 2019; 4:e000251.
49. Hoffer KJ. Intraocular lens power calculation after previous laser refractive surgery. J Cataract Refract Surg 2009; 35:759–765.
50. Alio JL, Abdelghany AA, Abdou AA, Maldonado M. Cataract surgery on the previous corneal refractive surgery patient. Surv Ophthalmol 2016; 61:769–777.
51. Bolster NM, Giardini ME, Bastawrous A. The Diabetic Retinopathy Screening Workflow: Potential for Smartphone Imaging. J Diabetes Sci Technol 2015; 10:318–324.
52. Maamari RN, Keenan JD, Fletcher DA, Margolis TP. A mobile phone-based retinal camera for portable wide field imaging. Br J Ophthalmol 2014; 98:438–441.
53. Al Hajj H, Lamard M, Conze PH, et al. CATARACTS: challenge on automatic tool annotation for cataRACT surgery. Med Image Anal 2019; 52:24–41.
54. Yu F, Silva Croso G, Kim TS, et al. Assessment of automated identification of phases in videos of cataract surgery using machine learning and deep learning techniques. JAMA Netw Open 2019; 2:e191860.
55. Zisimopoulos O, Flouty E, Luengo I, et al. DeepPhase: surgical phase recognition in CATARACTS videos. International Conference on Medical Image Computing and Computer-Assisted Intervention: Springer, 2018;265–272.
56. Vedula S, Speidel S, Navab N, et al. Surgical data science: enabling next-generation surgery. Nat Biomed Eng 2017; 1:691–696.

artificial intelligence; deep learning; cataract; visual impairment

Copyright © 2020 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.