Original Studies

Prediction of Visual Impairment in Epiretinal Membrane and Feature Analysis: A Deep Learning Approach Using Optical Coherence Tomography

Hsia, Yun MD*,†; Lin, Yu-Yi PhD; Wang, Bo-Sin MS; Su, Chung-Yen PhD§; Lai, Ying-Hui PhD‡,¶; Hsieh, Yi-Ting MD, PhD

Asia-Pacific Journal of Ophthalmology 12(1):p 21-28, January/February 2023. | DOI: 10.1097/APO.0000000000000576


INTRODUCTION

Idiopathic epiretinal membrane (ERM), a fibrocellular proliferation on the inner surface of the retina, can range widely in disease severity. In the early stage, it is termed cellophane maculopathy, which characteristically presents as a transparent membrane without obvious traction on the underlying retina and may be asymptomatic. As the disease progresses, it can become a thick, opaque, and contractile membrane that causes decreased vision and metamorphopsia.1–3 The timing of surgery depends on the severity of visual impairment caused by ERM, and the benefit of surgery should outweigh the risk.3 The incidences of ERM and cataract both increase with age, occurring in up to 20% and 40% of patients aged above 75 years, respectively.2,4 Therefore, many elderly patients suffer from ERM and visually significant cataract simultaneously. It is often unclear whether cataract surgery alone would result in significant visual improvement or whether combined cataract and ERM surgery is required. Optical coherence tomography (OCT) is one of the most important diagnostic tools for helping ophthalmologists make more accurate diagnoses and treatment decisions for ERM.5

OCT facilitates the visualization of the morphology of ERMs and associated retinal changes.6,7 However, previous studies on the correlation between visual acuity and retinal structural changes on OCT have reported conflicting results. Some studies have suggested that outer retinal change is a determinant of visual function,8–10 while others have addressed the role of inner retinal changes.11–16 Meanwhile, several OCT-based ERM classifications have been proposed to correlate with visual function,1,11,17,18 but none have been widely accepted in clinical practice. In other words, the OCT biomarkers for visual impairment in patients with ERM are yet to be determined. An effective artificial intelligence technique, such as the deep learning (DL) approach, may help analyze these data and answer this question.

The DL approach is one of the most representative artificial intelligence technologies; it learns a high-dimensional representation from training data to perform a specific task. Previous studies have shown that the DL approach has been widely used in many applications, such as image processing19,20 and assistive hearing,21 with better performance than traditional machine learning approaches. DL technology has also been used successfully in OCT image analysis for diagnosing various retinal diseases.22–24 Therefore, this study aimed to propose a DL-based system and investigate the effectiveness of visual acuity prediction using OCT images. We used the proposed system to classify patients with profound or less visual impairment due to ERM based on OCT images. We also extracted the regions on which the trained model focused to help clinicians identify the OCT biomarkers for profound visual impairment in ERM.

MATERIALS AND METHODS

Proposed System

Figure 1 shows an overview of our proposed system, including the training and heat map generation phases. In the training phase, a classical convolutional neural network (CNN)19 model with a deep residual learning approach was used.25 The model contains multiple layers, including convolution, pooling, and fully connected layers.26 The output of the system is a classification and a class activation map for the input image.

FIGURE 1:
The block diagram for the proposed system. VA indicates visual acuity.
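As a concrete illustration of this backbone, the sketch below builds an ImageNet-pretrained ResNet-50 whose 1000-class head is replaced with a 2-class head ("profound" vs "less" visual impairment). It is a minimal reconstruction of the kind of model described above, not the authors' released code; the function name and the assumed input size are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_erm_classifier(num_classes: int = 2) -> nn.Module:
    """ResNet-50 backbone with its 1000-class ImageNet head replaced by a 2-class head."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_erm_classifier()
model.eval()
dummy = torch.randn(1, 3, 550, 400)             # one combined OCT image (size assumed)
with torch.no_grad():
    probs = torch.softmax(model(dummy), dim=1)  # [p(profound), p(less)] for the input image
```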

Data Sets

Patients who were diagnosed with idiopathic ERM at the National Taiwan University Hospital between January 2016 and August 2020 were retrospectively included. The diagnosis of idiopathic ERM was confirmed by 2 retinal specialists (Y.T.H. and Y.H.). Patients with secondary ERM due to diseases including retinal breaks, retinal detachment, diabetic retinopathy, high myopia, and intraocular inflammation, those with other coexistent retinal diseases, and those who had undergone intraocular surgery other than uncomplicated cataract surgery were excluded. Patients with media opacity that may affect visual acuity, including corneal opacity or visually significant cataracts (Lens Opacity Classification System III grades C1–5, P1–5, NC1–6, or NO1–6), were also excluded. Macular OCT (RTVue Model-RT 100 scanner, version 3.5; Optovue Inc) examinations were performed on these patients, and standard 10-mm horizontal and vertical B scans centered at the fovea were collected. Poor-quality images with a signal strength index <40 were excluded. Best-corrected visual acuity (BCVA) was measured at the same time. Images with a corresponding BCVA of 20/50 or worse were classified as "profound visual impairment," while those with a corresponding BCVA better than 20/50 were classified as "less visual impairment." A total of 600 qualified OCT image sets from 511 patients, including 300 with "profound visual impairment" and 300 with "less visual impairment," were included in this study (Supplementary Digital Content 1, https://links.lww.com/APJO/A196). This study was approved by the Institutional Review Board of the National Taiwan University Hospital (No: 201803097RINB) and was conducted in accordance with the Declaration of Helsinki. Informed consent was waived owing to the retrospective nature of this study.
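For illustration, the snippet below encodes the labeling rule above (BCVA of 20/50 or worse is labeled "profound visual impairment"); the helper name and the Snellen-string parsing are hypothetical and only sketch how such labels could be derived.

```python
def label_from_bcva(snellen: str) -> int:
    """Return 1 ("profound visual impairment", BCVA of 20/50 or worse) or 0 ("less").

    Hypothetical helper: Snellen fractions are compared as decimal acuity.
    """
    numerator, denominator = snellen.split("/")
    return 1 if float(numerator) / float(denominator) <= 20 / 50 else 0

assert label_from_bcva("20/60") == 1  # worse than 20/50 -> profound visual impairment
assert label_from_bcva("20/50") == 1  # boundary is included ("20/50 or worse")
assert label_from_bcva("20/40") == 0  # better than 20/50 -> less visual impairment
```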

Procedures for DL Algorithm Development

In the training phase, a classical CNN model with a deep residual learning approach was used.19 Before model training, the horizontal and vertical macular B-scan OCT images from the same eye were combined into one image to increase the input dimensionality for the CNN model.27,28 Each image was normalized to achieve the same resolution and similar image quality, thereby reducing bias across different examinations.29,30 Each image was resized to 400×550 pixels to avoid an excessive number of parameters in the training model.
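A minimal preprocessing sketch following this description is shown below. The paper does not specify how the 2 B-scans were combined, so vertical stacking is an assumption, as are the min-max intensity normalization and the library choices.

```python
import numpy as np
from PIL import Image

TARGET_SIZE = (400, 550)  # (width, height) in pixels, as described above

def load_gray(path: str) -> np.ndarray:
    """Load a B-scan as a grayscale float array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32)

def combine_and_normalize(horizontal_path: str, vertical_path: str) -> np.ndarray:
    """Stack the horizontal and vertical B-scans of one eye, normalize, and resize."""
    h_scan = load_gray(horizontal_path)
    v_scan = load_gray(vertical_path)
    combined = np.vstack([h_scan, v_scan])        # assumes both scans share the same width
    combined -= combined.min()                    # min-max intensity normalization to [0, 1]
    combined /= max(float(combined.max()), 1e-6)
    resized = Image.fromarray((combined * 255).astype(np.uint8)).resize(TARGET_SIZE)
    return np.asarray(resized, dtype=np.float32) / 255.0
```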

Next, a pretrained model (the 1000-class ResNet trained on ImageNet31) was used to provide the initial parameters for transfer learning.32 For model training and validation, 90% of the images (540 images: 270 with profound visual impairment, 270 with less visual impairment) were randomly selected as the training data set, and the remaining 10% (60 images: 30 with profound visual impairment, 30 with less visual impairment) were used for testing. The testing data were excluded from the training data set to avoid overfitting and biased inference accuracy. A batch size of 16 and 200 training epochs were used. The detailed model structure and settings were designed according to previous studies.33,34 Finally, under the same data arrangement for the training and testing sets, one model with a deeper structure (ie, ResNet-50)35,36 and another with a shallower structure (ie, ResNet-18)37,38 were adopted. The t-distributed stochastic neighbor embedding (t-SNE) approach was used to compare the performance of these 2 structures.39 t-SNE is a dimensionality reduction method that projects the feature distribution of each layer onto a 2-dimensional (2D) plane based on the similarities between neighboring features. This study performed a t-SNE feature analysis of convolutional layers 1, 10, 16, 22, 28, 34, and 40 (shallower layers), as well as 43 and 49 (deeper layers).
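The following sketch illustrates a training setup consistent with these settings: transfer learning from ImageNet-pretrained ResNet-50 weights, a 90%/10% split, a batch size of 16, and 200 epochs, with cross-entropy loss and the Adam optimizer as suggested by the cited methods.33,34 The dataset object and learning rate are illustrative assumptions rather than the authors' actual configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import models

def train_erm_model(dataset, epochs: int = 200, batch_size: int = 16, lr: float = 1e-4):
    """Fine-tune an ImageNet-pretrained ResNet-50 on (image, label) pairs."""
    n_train = int(0.9 * len(dataset))                        # 90%/10% split
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)            # 2-class head
    criterion = nn.CrossEntropyLoss()                        # cross-entropy loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam optimizer

    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:                  # images: (B, 3, H, W) tensors
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model, test_set
```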

After model training, Grad-CAM was used to generate heat maps to identify the key features associated with visual impairment.40 Heat maps from the 270 OCT images with profound visual impairment were examined for the OCT features labeled with hot spots. We identified several OCT features, including the gap between the inner retina and ERM,16 inner retinal thickening, folding, ectopic inner foveal layer,11 disorganization of the retinal inner layers (DRIL),13 retinal cysts in the inner retina or outer nuclear layer (ONL), ONL inward projection,17,41 outer retinal thickening, cotton ball sign,12 disruption of the ellipsoid zone and external limiting membrane (ELM), choroidal thickness, and the presence of a pseudohole. Furthermore, the 270 heat maps from the profound visual impairment group were aligned by the retinal pigment epithelium layer and combined to obtain an overlap heat map. The retinal pigment epithelium was identified as the line with the highest reflectivity, and the images were adjusted to place the retinal pigment epithelium on the same baseline before the heat maps were overlapped. The overlap heat map was used to identify the areas associated with visual impairment (Supplementary Digital Content 2, https://links.lww.com/APJO/A197).
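A minimal Grad-CAM sketch following Selvaraju et al40 is shown below, together with an assumed reconstruction of the retinal pigment epithelium alignment used for the overlap heat map: activations and gradients are captured at the last convolutional block (torchvision's layer4 in ResNet-50), and each heat-map column is shifted so that the brightest pixel of the corresponding OCT column sits on a common baseline. Function names and the baseline row are illustrative.

```python
import numpy as np
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    """Return an (H, W) Grad-CAM heat map in [0, 1] for a (1, 3, H, W) input."""
    feats, grads = {}, {}

    def fwd_hook(module, inputs, output):
        feats["a"] = output.detach()             # activations of the last conv block

    def bwd_hook(module, grad_input, grad_output):
        grads["g"] = grad_output[0].detach()     # gradient of the class score wrt activations

    h1 = model.layer4.register_forward_hook(fwd_hook)
    h2 = model.layer4.register_full_backward_hook(bwd_hook)
    model.eval()
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = cam[0, 0].numpy()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-6)

def overlap_heat_map(heat_maps, oct_images, baseline_row=400):
    """Average heat maps after shifting each column so the brightest pixel
    (taken as the retinal pigment epithelium) sits on a common baseline."""
    aligned = []
    for cam, img in zip(heat_maps, oct_images):              # img: (H, W) grayscale array
        shifted = np.zeros_like(cam)
        for col in range(img.shape[1]):
            rpe_row = int(np.argmax(img[:, col]))            # highest-reflectivity pixel
            shifted[:, col] = np.roll(cam[:, col], baseline_row - rpe_row)
        aligned.append(shifted)
    return np.mean(aligned, axis=0)
```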

RESULTS

During the training stage, 270 images with profound visual impairment and 270 images with less visual impairment were used. The training accuracy was 100% for both ResNet-18 and ResNet-50. During the testing stage, the accuracy for predicting profound visual impairment was 70% and 80% for ResNet-18 and ResNet-50, respectively. Supplementary Digital Content 3 (https://links.lww.com/APJO/A198) shows that the accuracy in the training stage and both accuracy and loss in the validation stage were unstable for the ResNet-18 model. Conversely, the performance of the ResNet-50 model was much better, and convergence could be achieved in terms of accuracy and loss function in both the training and validation stages. Figure 2 shows the results of t-SNE comparing the performance of the ResNet-50 and ResNet-18 models on the training data set. Figures 2A–I show the t-SNE results of convolution layers 1, 10, 16, 22, 28, 34, 40, 43, and 49 in the ResNet-50 model. The deeper layers (layers 43 and 49) provided better differentiation of OCT features between the “profound visual impairment” (blue points) and “less visual impairment” groups (orange points) than the shallower layers (eg, layers 1, 10, 16, 22, 28, 34, and 40). These findings indicate that the deeper layers could identify more key features from the input data than the shallower layers.

FIGURE 2:
Comparison of performance between the deeper and shallower layers of the ResNet-50 model in the training data set based on the t-distributed stochastic neighbor embedding (t-SNE) approach. The blue points "0" indicate the OCT features from the "profound visual impairment" group, and the orange points "1" indicate the OCT features from the "less visual impairment" group. A–G, In the shallower layers (layers 1, 10, 16, 22, 28, 34, and 40), the OCT features from the 2 groups could not be separated precisely. H–I, In the deeper layers (layers 43 and 49), the OCT features from the 2 groups could be separated perfectly, which indicates that the deeper layers have better performance than the shallower layers. OCT indicates optical coherence tomography.
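For readers who wish to reproduce this kind of layer-wise comparison, the sketch below captures the activations of a chosen convolutional block with a forward hook, pools them spatially, and projects them to 2D with scikit-learn's t-SNE. The mapping from the layer indices above onto torchvision's ResNet-50 blocks, the spatial pooling, and the perplexity value are assumptions for illustration.

```python
import numpy as np
import torch
from sklearn.manifold import TSNE

def layer_tsne(model, layer_module, images, perplexity=30.0):
    """Project spatially pooled activations of one ResNet block to 2D with t-SNE.

    images: (N, 3, H, W) tensor; returns an (N, 2) embedding.
    """
    captured = []

    def hook(module, inputs, output):
        captured.append(output.mean(dim=(2, 3)).detach().cpu().numpy())  # pool over H, W

    handle = layer_module.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for batch in images.split(16):
            model(batch)
    handle.remove()
    features = np.concatenate(captured, axis=0)
    return TSNE(n_components=2, perplexity=perplexity, init="pca").fit_transform(features)

# For example, a shallow block vs the last block of ResNet-50:
# shallow_2d = layer_tsne(model, model.layer1, train_images)
# deep_2d = layer_tsne(model, model.layer4, train_images)
```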

Hot Spot Analysis for Heat Maps of Grad-CAM

Table 1 shows the features of the morphological alterations on OCT that corresponded to the hot spots in the heat maps of the 270 OCT images with profound visual impairment included in the training data set. Structural abnormalities at the fovea and parafovea both contributed to visual impairment, and inner retinal abnormalities contributed more than outer retinal abnormalities at both the fovea and parafovea. Among inner retinal abnormalities at the fovea, the ectopic inner foveal layer (52.2%) was the most common feature with hot spots, followed by an irregular or indistinguishable inner nuclear layer (INL)-outer plexiform layer (OPL) boundary (34.0%). Among outer retinal abnormalities at the fovea, ONL inward projection (35.9%) was the most common feature with hot spots, followed by outer retinal thickening (26.3%) and ONL cyst (16.3%). Cotton ball sign (7.0%), disruption of the ellipsoid zone (8.5%), and disruption of the ELM (2.2%) contributed less to visual impairment. Inner retinal thickening (59.6%) was the most frequently identified inner retinal abnormality at the parafovea, followed by irregular or indistinguishable INL-OPL (52.5%), inner retinal folding (43.3%), and the gap between the inner retina and ERM (33.3%). Outer retinal thickening was the most common outer retinal abnormality at the parafovea (30.4%). Hot spots on the choroidal layer were identified in the foveal (36.7%) and parafoveal regions (69.3%). No apparent differences in feature distribution among the different areas of the parafovea were noted. In all regions, irregularities in INL-OPL were more frequently highlighted by hot spots than irregularities in the ganglion cell-inner plexiform layer (GCIPL)-INL. ONL cysts were identified more frequently than INL cysts at the fovea but did not contribute to visual impairment in the parafoveal regions. Figure 3 shows an example of hot spot analysis of OCT characteristics from the heat map of Grad-CAM.

TABLE 1 - Features of the Morphological Alterations on OCT That Corresponded to the Hot Spots in the Heat Maps of 270 OCT Images With Profound Visual Impairment

| Morphological Alterations on OCT | Fovea | Any Parafovea | Nasal Parafovea | Temporal Parafovea | Nasal or Temporal Parafovea | Superior or Inferior Parafovea |
| --- | --- | --- | --- | --- | --- | --- |
| Inner retinal features | | | | | | |
|  Ectopic inner foveal layer | 141 (52.2) | | | | | |
|  Inner retinal thickening | | 161 (59.6) | 64 (23.7) | 59 (21.9) | 99 (36.7) | 131 (48.5) |
|  Inner retinal fold | | 117 (43.3) | 33 (12.2) | 40 (14.8) | 63 (23.3) | 94 (34.8) |
|  Inner retinal cyst or schisis | 37 (13.7) | 46 (17.0) | 19 (7.0) | 19 (7.0) | 30 (11.1) | 30 (11.1) |
|  Disorganization of retinal inner layers | | | | | | |
|   Irregular GCIPL-INL | 28 (10.4) | 58 (21.5) | 14 (5.2) | 13 (4.8) | 24 (8.9) | 41 (15.2) |
|   Indistinguishable GCIPL-INL | 22 (8.1) | 25 (9.3) | 2 (0.7) | 14 (5.2) | 14 (5.2) | 19 (7.0) |
|   Irregular INL-OPL | 76 (28.1) | 130 (48.1) | 51 (27.4) | 44 (16.3) | 82 (30.4) | 92 (34.1) |
|   Indistinguishable INL-OPL | 16 (5.9) | 12 (4.4) | 2 (0.7) | 4 (1.5) | 6 (2.2) | 6 (2.2) |
| Outer retinal features | | | | | | |
|  ONL cyst | 44 (16.3) | 0 | 0 | 0 | 0 | 0 |
|  ONL inward projection | 97 (35.9) | | | | | |
|  Outer retinal thickening | 71 (26.3) | 82 (30.4) | 28 (10.4) | 23 (8.5) | 44 (16.3) | 54 (20.0) |
|  Ellipsoid zone disruption | 23 (8.5) | 0 | 0 | 0 | 0 | 0 |
|  Cotton ball sign | 19 (7.0) | 0 | 0 | 0 | 0 | 0 |
|  External limiting membrane disruption | 6 (2.2) | 0 | 0 | 0 | 0 | 0 |
| Other features | | | | | | |
|  Gap between the inner retina and ERM | 0 | 90 (33.3) | 29 (10.7) | 25 (9.3) | 46 (17.0) | 74 (18.9) |
|  Choroid | 99 (36.7) | 187 (69.3) | 74 (18.9) | 87 (32.2) | 130 (48.1) | 135 (50.0) |
|  Pseudohole | 48 (17.8) | | | | | |

All data are presented as number (%).
ERM indicates epiretinal membrane; GCIPL, ganglion cell and inner plexiform layers; INL, inner nuclear layer; OCT, optical coherence tomography; ONL, outer nuclear layer; OPL, outer plexiform layer.

FIGURE 3:
Heat map of a patient with profound visual impairment. The red arrows indicate inner retinal cysts. The red arrowhead indicates irregular GCIPL-INL. The yellow arrowheads indicate irregular INL-OPL. GCIPL indicates ganglion cell-inner plexiform layer; INL, inner nuclear layer; OPL, outer plexiform layer.

Overlap Heat Map

Figure 4 shows an overlap heat map composed by overlapping the heat maps of all 270 OCT images with profound visual impairment. Regions highlighted with warmer colors indicate that hot spots appeared there more frequently and were therefore considered more important for visual acuity classification. The overlap heat map indicates that the DL model focused most on the fovea in the horizontal scans. In the vertical scans, the model focused more on the inner retinal layers of the superior and inferior parafoveal areas.

FIGURE 4:
Horizontal and vertical B-scan optical coherence tomography images of a representative case show ectopic inner foveal layers (red arrows), gaps between the epiretinal membrane and inner retina (yellow arrowheads), and inner retinal thickening. The overlap heat map was obtained by overlapping the heat maps from 270 optical coherence tomography images with profound visual loss. Warmer colors indicate the regions on which the model focused more.

DISCUSSION

ERM is highly variable in severity and affects visual acuity to different extents. The present study demonstrated that DL models could help distinguish the visual acuity changes associated with ERM severity using OCT images and shed light on the OCT features associated with visual impairment in ERM. The testing accuracy was 80% for discriminating patients with profound visual impairment from those with less visual impairment. The overlap heat map indicated that the DL models focused on the foveal and parafoveal structures to judge the corresponding visual acuity. The changes in inner retinal structures played a more important role in determining visual acuity than those in outer retinal structures.

DL models have previously been built for the diagnosis of ERM.42–44 Lu et al44 built a DL-based system to accurately detect and differentiate common macular lesions, including ERM, macular hole, cystoid macular edema, and serous macular detachment. Lo et al42 implemented a DL model for ERM diagnosis using OCT images with accuracy noninferior to that of nonretinal specialists. To our knowledge, the current study is the first to use DL models to predict visual deterioration caused by ERM. We used OCT images from patients with clear media to train the DL models, which could discriminate those with profound visual impairment from those with less visual impairment. In the present study, the overlap heat map showed that the hot spots mainly appeared in the foveal and parafoveal regions, indicating that the DL models were well trained to learn the features associated with visual impairment in ERM and were not overfitted. Lo et al42 reported that the foveal region is the most important area for pattern recognition in ERM. Lu et al44 found that deeper CNN networks could extract more discriminative details of images to achieve more accurate recognition, which is compatible with other previous studies.19,25 In the present study, the t-SNE analysis also demonstrated that ResNet-50 had better performance than ResNet-18, which means that the deeper layers of the CNN identified the OCT features of visual impairment in ERM better than the shallower layers did. This is a reasonable finding because the OCT features associated with visual impairment are mostly localized and small. Therefore, a ResNet with more layers should have higher accuracy.

The anatomical characteristics associated with visual impairment are yet to be fully understood in patients with ERM. The tangential traction of ERM on the fovea has been found to increase the thickness of all retinal layers.45 Since the inner retina bears the tractional force directly, it should be affected more significantly than the outer retina.45 The inner retina may thicken due to the distortion in GCIPL, cystoid change in the INL, or ectopic inner foveal layer formed by the activated Müller cells.11,46 Furthermore, a gap may form between the retinal surface and ERM accompanied by the folding of the inner retina.16 On the other hand, changes in the outer retina may be secondary to the transneuronal degeneration caused by INL damage and may correlate with changes in the inner retina.46 Both inner retinal distortion and outer retinal changes are considered to be associated with visual acuity. Some studies have stated that outer retinal distortion, including the disruption of the ellipsoid zone, ELM, and cone outer segment tip line, was more significantly associated with visual impairment than inner retinal changes.8,9 The photoreceptor outer segment length was positively associated with the extent of retinal thickening and poor visual acuity.10 Nevertheless, some studies have reported correlations between visual acuity and inner retinal changes.11–16,45 The thickness of INL rather than the outer retina layer was associated with metamorphopsia.47 Several biomarkers have been identified to represent inner retinal damage. DRIL, which is characterized by indistinct inner retinal borders, was found to be strongly associated with poor visual acuity.12,13 This sign represents an anatomical interruption of the inner retinal layers and affects neuronal transmission.48 The inner retinal irregularity index, defined as the ratio between the IPL inferior border length and retinal pigment epithelium length, also had a good correlation with visual acuity.14 The ectopic inner foveal layer, depicted as a continuous inner retinal floor across the central fovea, originates from activated Müller cell proliferation and inward displacement of the inner retinal structure.11 It may obstruct the pathway of afferent light to the photoreceptor and affect the image projected on the cone photoreceptor.11 SUKIMA, a gap between the retinal surface and ERM, was associated with worse visual acuity and increased metamorphopsia.16 In addition, the morphology of ERM could also affect visual acuity. Those with diffuse thickening of the inner retinal layer had the worst visual acuity.11,15,17

Since the abovementioned biomarkers are difficult to evaluate in clinical practice and no single biomarker represents visual function well, DL models may be applied to consider all possible characteristics cumulatively. We further used the heat maps of Grad-CAM to examine the anatomical characteristics associated with visual impairment. In patients with profound visual impairment, inner retinal abnormalities contributed more than outer retinal abnormalities at the fovea and parafovea. In the foveal region, an ectopic inner foveal layer was the most commonly detected structural change associated with poor visual acuity and was detected in more than half of the images.11 ONL inward projection, present in one-third of the images, was the most common outer retinal change at the fovea. ONL inward projection has been reported to be associated with the disappearance of the foveal pit and more significant retinal changes.11,41 However, the disruption of the ELM and ellipsoid zone was seldom labeled as hot spots in the heat maps. In the parafoveal region, the inner retinal changes also played a more crucial role in causing visual impairment than the outer retinal changes. For example, inner retinal thickening, inner retinal folding, and DRIL were all often labeled. Regarding DRIL, the indistinct boundary of INL-OPL was more often labeled than that of GCIPL-INL. The changes in the INL-OPL boundary may reflect more extensive retinal changes than those in GCIPL-INL. Notably, the choroidal layer was also labeled at the fovea and parafovea in the present study. Previous studies have reported that ERM may cause compensatory choroidal thickening due to impairment of retinal blood flow,49 and that choroidal thickness may decrease after surgery.50 However, we could not confirm whether thickening or thinning of the choroidal layer is associated with visual impairment. The role of choroidal thickness in ERM merits further investigation.

In clinical practice, we often encounter patients with ERM and concurrent visually significant cataracts. It can be challenging for clinicians to decide whether to perform cataract surgery alone or combined surgery for ERM and cataracts simultaneously. Since the DL model we built could discern the severity of visual impairment caused by ERM, it could be used to identify whether ERM contributes greatly to visual impairment in patients with visually significant cataracts or opaque media. In addition, this DL model could also assist nonretinal specialists in making decisions regarding the referral of patients with ERM.

This study had several limitations. First, the sample size was small, and all data were obtained from the same OCT machine at a single medical center. The results may not be extrapolated to other OCT machines, and a larger and more diverse data set may be needed for further refinement of this DL model. Second, because of the small sample size, we classified visual acuity into only 2 categories: ≤20/50 and >20/50. Classification with more categories of visual acuity or treating BCVA as a continuous variable may improve its efficacy in clinical applications. However, this is the first study to use DL algorithms to judge profound visual impairment in ERM, and we believe that this preliminary study will be helpful for further studies on this topic. Third, the highly variable presentation of OCT images may explain why the model attained only 80% accuracy. The B-scan OCT images were 2D images, which may not represent the entire macular structure. Multimodal images, including en face OCT or OCT angiography, may offer more information and should be considered for model training in future studies. Furthermore, we did not perform validation on eyes with simultaneous ERM and vision-threatening cataracts because it is difficult to collect data from patients who received OCT examinations first, followed by cataract surgery, only to prove that their ERMs did not result in visual impairment. We also did not perform external validation; however, the convergence in accuracy and loss function in both the training and validation stages for the ResNet-50 model implied that the DL model was well trained and not overfitted. Finally, metamorphopsia is a major visual complaint in patients with ERM. However, owing to the retrospective nature of this study, records regarding the extent of metamorphopsia were not available for every patient. Therefore, metamorphopsia was not included as an indicator of visual impairment. Further studies may build DL algorithms to investigate the association between metamorphopsia and OCT features.

In conclusion, we successfully trained the DL models to predict the extent of visual impairment caused by ERM from OCT images, and deeper ResNet structures could enhance the classification accuracy. We also found that changes in the inner retinal layers had a greater impact on visual acuity than the outer retinal changes according to the heat maps of Grad-CAM. Such results could help ophthalmologists decide whether ERM surgery is necessary. Further studies with larger data sets are needed to achieve more precise discrimination of the extent of visual impairment.

ACKNOWLEDGMENTS

The authors thank Eastern Electronics Co, Ltd for its support for this work.

REFERENCES

1. Stevenson W, Prospero Ponce CM, Agarwal DR, et al. Epiretinal membrane: optical coherence tomography-based diagnosis and classification. Clin Ophthalmol. 2016;10:527–534.
2. Bu SC, Kuijer R, Li XR, et al. Idiopathic epiretinal membrane. Retina. 2014;34:2317–2335.
3. Flaxel CJ, Adelman RA, Bailey ST, et al. Idiopathic epiretinal membrane and vitreomacular traction preferred practice pattern. Ophthalmology. 2020;127:145–183.
4. Song P, Wang H, Theodoratou E, et al. The national and subnational prevalence of cataract and cataract blindness in China: a systematic review and meta-analysis. J Glob Health. 2018;8:010804.
5. Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. Science. 1991;254:1178–1181.
6. De Carlo TE, Romano A, Waheed NK, et al. A review of optical coherence tomography angiography (OCTA). Int J Retina Vitreous. 2015;1:5.
7. Sato T, Yamauchi-Mori R, Yamamoto J, et al. Longitudinal change in retinal nerve fiber layer thickness and its association with central retinal sensitivity after epiretinal membrane surgery. Asia Pac J Ophthalmol (Phila). 2022;11:279–286.
8. Arichika S, Hangai M, Yoshimura N. Correlation between thickening of the inner and outer retina and visual acuity in patients with epiretinal membrane. Retina. 2010;30:503–508.
9. Fang IM, Hsu CC, Chen LL. Correlation between visual acuity changes and optical coherence tomography morphological findings in idiopathic epiretinal membranes. Graefes Arch Clin Exp Ophthalmol. 2016;254:437–444.
10. Shiono A, Kogo J, Klose G, et al. Photoreceptor outer segment length: a prognostic factor for idiopathic epiretinal membrane surgery. Ophthalmology. 2013;120:788–794.
11. Govetto A, Lalane RA III, Sarraf D, et al. Insights into epiretinal membranes: presence of ectopic inner foveal layers and a new optical coherence tomography staging scheme. Am J Ophthalmol. 2017;175:99–113.
12. Karasavvidou EM, Panos GD, Koronis S, et al. Optical coherence tomography biomarkers for visual acuity in patients with idiopathic epiretinal membrane. Eur J Ophthalmol. 2021;31:3203–3213.
13. Zur D, Iglicki M, Feldinger L, et al. Disorganization of retinal inner layers as a biomarker for idiopathic epiretinal membrane after macular surgery—The DREAM Study. Am J Ophthalmol. 2018;196:129–135.
14. Cho KH, Park SJ, Cho JH, et al. Inner-retinal irregularity index predicts postoperative visual prognosis in idiopathic epiretinal membrane. Am J Ophthalmol. 2016;168:139–149.
15. Joe SG, Lee KS, Lee JY, et al. Inner retinal layer thickness is the major determinant of visual acuity in patients with idiopathic epiretinal membrane. Acta Ophthalmol. 2013;91:242–243.
16. Murase A, Asaoka R, Inoue T, et al. Relationship between optical coherence tomography parameter and visual function in eyes with epiretinal membrane. Invest Ophthalmol Vis Sci. 2021;62:6.
17. Hwang JU, Sohn J, Moon BG, et al. Assessment of macular function for idiopathic epiretinal membranes classified by spectral-domain optical coherence tomography. Invest Ophthalmol Vis Sci. 2012;53:3562–3569.
18. Konidaris V, Androudi S, Alexandridis A, et al. Optical coherence tomography-guided classification of epiretinal membranes. Int Ophthalmol. 2015;35:495–501.
19. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444.
20. Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–248.
21. Lai YH, Tsao Y, Lu X, et al. Deep learning-based noise reduction approach to improve speech intelligibility for cochlear implant recipients. Ear Hear. 2018;39:795–809.
22. Mehta P, Lee AY, Lee C, et al. Multilabel multiclass classification of OCT images augmented with age, gender and visual acuity data. bioRxiv. 2018:316349.
23. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24:1342–1350.
24. Ran A, Cheung CY. Deep learning-based optical coherence tomography and optical coherence tomography angiography image analysis: an updated summary. Asia Pac J Ophthalmol (Phila). 2021;10:253–260.
25. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016:770–778.
26. Bengio Y. Learning deep architectures for AI. Found Trends Mach Learn. 2009;2:1–127.
27. Zhang J, Xie Y, Wu Q, et al. Medical image classification using synergic deep learning. Med Image Anal. 2019;54:10–19.
28. Yinka-Banjo C, Ugot OA. A review of generative adversarial networks and its application in cybersecurity. Artif Intell Rev. 2020;53:1721–1736.
29. Shinohara RT, Sweeney EM, Goldsmith J, et al. Statistical normalization techniques for magnetic resonance imaging. Neuroimage Clin. 2014;6:9–19.
30. Nyúl LG, Udupa JK, Zhang X. New variants of a method of MRI scale standardization. IEEE Trans Med Imaging. 2000;19:143–150.
31. Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis. 2015;115:211–252.
32. Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng. 2010;22:1345–1359.
33. de Boer PT, Kroese DP, Mannor S, et al. A tutorial on the cross-entropy method. Ann Oper Res. 2005;134:19–67.
34. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv. 2017:1412.6980.
35. Rezende E, Ruppert G, Carvalho T, et al. Malicious software classification using transfer learning of resnet-50 deep neural network. 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE; 2017:1011–1014.
36. Akiba T, Suzuki S, Fukuda K. Extremely large minibatch sgd: training resnet-50 on imagenet in 15 minutes. arXiv. 2017: 1711.04325.
37. Ou X, Yan P, Zhang Y, et al. Moving object detection method via ResNet-18 with encoder–decoder structure in complex scenes. IEEE Access. 2019;7:108152–108160.
38. Ayyachamy S, Alex V, Khened M, et al. Medical image retrieval using Resnet-18. Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications. 10954. International Society for Optics and Photonics. 2019:1095410.
39. van der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res. 2008;9:2579–2605.
40. Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. Int J Comput Vis. 2020;128:336–359.
41. Liao DY, Liu JH, Zheng YP, et al. Outer plexiform layer angle: a prognostic factor for idiopathic macular pucker surgery. J Ophthalmol. 2020;2020:1–7.
42. Lo YC, Lin KH, Bair H, et al. Epiretinal membrane detection at the ophthalmologist level using deep learning of optical coherence tomography. Sci Rep. 2020;10:8424.
43. Sonobe T, Tabuchi H, Ohsugi H, et al. Comparison between support vector machine and deep learning, machine-learning technologies for detecting epiretinal membrane using 3D-OCT. Int Ophthalmol. 2019;39:1871–1877.
44. Lu W, Tong Y, Yu Y, et al. Deep learning-based automated classification of multi-categorical abnormalities from optical coherence tomography images. Transl Vis Sci Technol. 2018;7:41.
45. Koo HC, Rhim WI, Lee EK. Morphologic and functional association of retinal layers beneath the epiretinal membrane with spectral-domain optical coherence tomography in eyes without photoreceptor abnormality. Graefes Arch Clin Exp Ophthalmol. 2012;250:491–498.
46. Cho KH, Park SJ, Woo SJ, et al. Correlation between inner-retinal changes and outer-retinal damage in patients with idiopathic epiretinal membrane. Retina. 2018;38:2327–2335.
47. Ichikawa Y, Imamura Y, Ishida M. Inner nuclear layer thickness, a biomarker of metamorphopsia in epiretinal membrane, correlates with tangential retinal displacement. Am J Ophthalmol. 2018;193:20–27.
48. Sun JK, Lin MM, Lammer J, et al. Disorganization of the retinal inner layers as a predictor of visual acuity in eyes with center-involved diabetic macular edema. JAMA Ophthalmol. 2014;132:1309–1316.
49. Fang IM, Chen LL. Association of macular choroidal thickness with optical coherent tomography morphology in patients with idiopathic epiretinal membrane. PLoS One. 2020;15:e0239992.
50. Michalewska Z, Michalewski J, Adelman RA, et al. Choroidal thickness measured with swept source optical coherence tomography before and after vitrectomy with internal limiting membrane peeling for idiopathic epiretinal membranes. Retina. 2015;35:487–491.
Keywords:

artificial intelligence; deep learning; epiretinal membrane; optical coherence tomography


Copyright © 2023 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.