
Original Study

Development and Clinical Validation of Semi-Supervised Generative Adversarial Networks for Detection of Retinal Disorders in Optical Coherence Tomography Images Using Small Dataset

Zheng, Ce PhD, MD; Ye, Hongfei PhD, MD; Yang, Jianlong PhD†,||; Fei, Ping PhD, MD; Qiu, Yingping MD; Xie, Xiaolin MD; Wang, Zilei BSc§; Chen, Jili MD; Zhao, Peiquan PhD, MD

Author Information
Asia-Pacific Journal of Ophthalmology: May-June 2022 - Volume 11 - Issue 3 - p 219-226
doi: 10.1097/APO.0000000000000498

INTRODUCTION

Deep learning (DL) is an artificial intelligence-based machine learning technology composed of multiple processing layers that learn representations of data with multiple levels of abstraction.1 In ophthalmology, DL has shown dramatic diagnostic performance across subspecialties, including detection of diabetic retinopathy, glaucoma, and age-related macular degeneration from fundus photographs and optical coherence tomography (OCT).2–7

So far, the most common form of DL for medical image classification is supervised learning. During supervised DL model training, researchers need to collect a labeled dataset. The machine then produces an output as a vector of scores, one for each category. Using a training dataset (hereinafter referred to as the “Cell dataset”) with a total of 108,312 OCT images, Kermany et al8 trained DL models by sharing multisite datasets from different hospitals. A challenging problem in supervised learning is that this approach requires a large training dataset of well-labeled medical images, which is time-consuming to assemble and suffers from inherent interrater or intrarater variability.2

Generative adversarial networks (GANs) were first proposed by Goodfellow et al9 in 2014 as a new unsupervised learning model. During GAN training, 2 neural networks (a generative net and a discriminator net) are trained simultaneously with conflicting objectives. Recently, we reported a GANs approach to synthesizing OCT images as training datasets for DL algorithm training and achieved good diagnostic performance.10 However, large labeled training datasets are still needed to train GANs to generate realistic OCT images with different retinal disorders. There is a strong interest in using unlabeled data to improve DL performance, and semi-supervised GANs and few-shot learning have shown promising results.11–13 For example, Odena14 showed how a semi-supervised GANs classifier could perform as well as a standalone DL model on the public Modified National Institute of Standards and Technology dataset (60,000 images in total) when trained with few labeled examples (25, 50, 100, and 1000, respectively). In our recent work,15 we demonstrated how a small dataset (such as anterior segment OCT images) could further benefit from a semi-supervised GAN that achieved accuracies comparable to those of a fully supervised DL model. Therefore, in this study, we explored semi-supervised GANs to classify OCT images with retinal disorders using a small labeled dataset.

METHODS

Datasets

To develop a semi-supervised GAN, we used OCT images from a public database provided by Kermany et al8 (abbreviated as the “Cell datasets”), described in detail elsewhere. The original Cell datasets had been labeled [choroidal neovascularization (CNV), diabetic macular edema (DME), drusen, and normal] with a tiered grading system. In the current study, we randomly selected only a small labeled subset of the Cell datasets as the supervised training dataset and left the other OCT images unlabeled for unsupervised GAN training. We used the same validation dataset from the Cell datasets to test the classification performance of the semi-supervised GAN model. To test the generalizability of the semi-supervised GAN model, we further collected 2 independent clinical datasets using different OCT devices from an ongoing clinical trial (Chinese clinical trial registration number: ChiCTR1900024528).16 Following the study protocol described by Kermany et al, we retrospectively reviewed the medical records from the Department of Ophthalmology at Shanghai Shibei Hospital (SSH) and Xinhua Hospital (XH) from July 2018 to November 2019 (hereinafter referred to as the “SSH testing dataset” and “XH testing dataset,” respectively). The presence or absence of CNV, DME, drusen, and a normal macula on the OCT scan was recorded. CNV was defined based on fluorescein angiography and is characterized by well-demarcated hyperfluorescence in the early phase of the angiogram, with progressive leakage of dye into the overlying subneurosensory retinal space during the late phases of the angiogram.17 For DME, the OCT features included thickening with homogenous optical reflectivity, thickening with markedly decreased optical reflectivity in the outer retinal layer, foveolar detachment without traction, or apparent vitreofoveal traction.18 OCT images from SSH were acquired with Cirrus OCT (Carl Zeiss Meditec), and OCT images from XH were taken using RTVue (Optovue, Inc, Fremont, CA). All OCT images in the independent clinical datasets were graded by 3 certified retinal specialists. For each patient, only 1 image most representative of the disease was chosen. Agreement among the 3 retinal specialists was evaluated with kappa statistics (κ) in a test set (80 OCT images, with 20 from each category), and the 3 retinal specialists achieved an unweighted κ value above 0.7. Figure 1 shows the flow chart of the current study.

FIGURE 1:
Flow chart of the current study.
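
To make the sampling procedure concrete, the sketch below illustrates, under stated assumptions, how a small class-balanced labeled subset could be drawn from the Cell datasets while the remaining images are kept unlabeled, and how pairwise grader agreement could be checked with an unweighted Cohen kappa. The function names, the dictionary of image paths, and the random seed are hypothetical and are not part of the original study code.

```python
import random
from sklearn.metrics import cohen_kappa_score

CLASSES = ["CNV", "DME", "DRUSEN", "NORMAL"]

def split_supervised_subset(image_paths_by_class, n_labeled_total=400, seed=42):
    """Return (labeled, unlabeled): a class-balanced labeled subset plus the rest unlabeled."""
    rng = random.Random(seed)
    per_class = n_labeled_total // len(CLASSES)
    labeled, unlabeled = [], []
    for cls in CLASSES:
        paths = list(image_paths_by_class[cls])
        rng.shuffle(paths)
        labeled += [(p, cls) for p in paths[:per_class]]   # small supervised set
        unlabeled += paths[per_class:]                      # kept unlabeled for GAN training
    return labeled, unlabeled

def pairwise_kappa(grades_a, grades_b):
    """Unweighted Cohen kappa between two graders on the 80-image agreement set."""
    return cohen_kappa_score(grades_a, grades_b, labels=CLASSES)
```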

Written informed consent was obtained from all subjects. The study was approved by the Institutional Review Boards of SSH (identifier: YL_201805258-05) and XH (identifier: XHEC-D-2021-115) and adhered to the tenets of the Declaration of Helsinki.

Development of Semi-Supervised GANs

In our previous study, we reported in detail how GANs can synthesize OCT images from a labeled training dataset.10 In brief, GANs are deep neural network architectures composed of 2 networks, where the generator is trained to produce realistic samples and the discriminator is trained to distinguish synthetic data from real data.9 These 2 networks are trained together in a zero-sum game until the synthetic data become indistinguishable from real data. The semi-supervised GANs are an extension of the GANs architecture that involves the simultaneous training of (1) a supervised discriminator, (2) an unsupervised discriminator, and (3) a generator model11,14 (Fig. 2). The discriminator models are trained directly for both the unsupervised GANs task and the supervised classification task at the same time. The unsupervised discriminator is a binary classifier model that predicts whether the image is real or synthetic. The supervised discriminator adopts a multiclass classifier model that predicts the class of the image. The 2 discriminators have different output layers but share all feature extraction layers, which means that updates to one of the classifier models affect both discriminator models. For the generator model, we employed the deep convolutional GAN model. The main goal of the deep convolutional GAN in our semi-supervised GAN model is to minimize the classification error of the discriminator and to produce more realistic images from the generator. For the implementation of the semi-supervised GANs architecture, we followed the scheme suggested by Brownlee.19 Regarding image resolution, we modified the architecture to handle 128 × 128 pixels. Figure 2 shows a scheme of the semi-supervised GANs architecture. The generator model mainly consists of 5 transpose convolution layers (kernel size = 5, stride = 2, padding = same) with batch normalization. The unsupervised discriminator includes 5 convolution layers (kernel size = 5, stride = 2, padding = same) and 1 dense layer [Dense (32,786 × 1)]. The supervised discriminator includes a 4 × 1 fully connected layer. All layers use the Leaky Rectified Linear Unit as the activation function, except for the output layer (softmax). All the models in our experiments were trained and tested using the Keras API (Google, version 2.2.4) with the TensorFlow framework (Google, version 2.1.0) as the backend. The computer used in this study was equipped with an NVIDIA GTX 1080Ti 12 GB graphics processing unit, 128 GB of random access memory, and an Intel Core i7-2700K 4.6 GHz central processing unit.

FIGURE 2:
Schema of the semi-supervised GANs architecture. GANs indicates generative adversarial networks.
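
The following is a minimal Keras sketch of the architecture described above, not the authors' released code: a DCGAN-style generator with 5 transpose convolution layers (kernel size 5, stride 2, same padding, batch normalization) and a pair of discriminators that share all feature extraction layers, with a 4-way softmax supervised head and a real-versus-synthetic unsupervised head. The filter counts, latent dimension, optimizer settings, and the derivation of both heads from a single shared logits layer (following the scheme of Salimans et al11 and Brownlee19) are assumptions that simplify the layer sizes quoted above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 128, 1)   # grayscale OCT B-scans (channel count is an assumption)
LATENT_DIM = 100            # assumed size of the generator's latent vector
N_CLASSES = 4               # CNV, DME, drusen, normal


def build_discriminators():
    """Supervised and unsupervised discriminators sharing all feature-extraction layers."""
    inp = layers.Input(shape=IMG_SHAPE)
    x = inp
    for filters in (64, 128, 128, 256, 256):               # 5 conv layers; filter counts assumed
        x = layers.Conv2D(filters, kernel_size=5, strides=2, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.2)(x)
    x = layers.Flatten()(x)                                 # 4 x 4 x 256 feature map -> dense
    class_logits = layers.Dense(N_CLASSES)(x)               # shared 4-way logits

    # Supervised head: softmax over the 4 retinal disorder classes.
    d_sup = models.Model(inp, layers.Activation("softmax")(class_logits), name="d_supervised")
    d_sup.compile(loss="sparse_categorical_crossentropy",
                  optimizer=tf.keras.optimizers.Adam(2e-4, beta_1=0.5),
                  metrics=["accuracy"])

    # Unsupervised head: P(real) = Z / (Z + 1) with Z = sum(exp(logits)),
    # i.e. sigmoid(logsumexp(logits)), as in the Salimans/Brownlee scheme.
    lse = layers.Lambda(
        lambda z: tf.math.reduce_logsumexp(z, axis=-1, keepdims=True))(class_logits)
    d_unsup = models.Model(inp, layers.Activation("sigmoid")(lse), name="d_unsupervised")
    d_unsup.compile(loss="binary_crossentropy",
                    optimizer=tf.keras.optimizers.Adam(2e-4, beta_1=0.5))
    return d_sup, d_unsup


def build_generator():
    """DCGAN-style generator: dense projection followed by 5 transpose convolutions."""
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(4 * 4 * 256)(z)                        # 128 / 2**5 = 4
    x = layers.Reshape((4, 4, 256))(x)
    for filters in (256, 128, 128, 64, 64):                 # filter counts assumed
        x = layers.Conv2DTranspose(filters, kernel_size=5, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(alpha=0.2)(x)
    img = layers.Conv2D(IMG_SHAPE[-1], kernel_size=5, padding="same", activation="tanh")(x)
    return models.Model(z, img, name="generator")
```

Because every convolutional layer is shared, gradient updates driven by the unlabeled images through the unsupervised head reshape the same features used by the 4-way classifier, which is how the unlabeled OCT scans contribute to classification.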

Evaluation of Classifier Model of Semi-Supervised GAN for Macular Disorders Classification

We first assessed the classifier model of the semi-supervised GAN with different small supervised datasets (100, 200, 400, and 1000 OCT images, each with an equal number of images from each category). We then evaluated the diagnostic performance for retinal disorder classification by comparing the classifier model trained in the semi-supervised GAN with a supervised DL model trained on the same supervised dataset. The details of the supervised DL model with the transfer learning technique have been described by Kermany et al. In brief, a pretrained model was adopted using a modified Inception V3 architecture with weights pretrained on ImageNet.20,21 We added a new classification layer on top of the pretrained model to recognize our classes from scratch. We used an Adam optimizer with a learning rate of 0.001 and a batch size of 32. Because of the small supervised dataset, data augmentation was performed to increase the amount and type of variation during supervised DL model training, including horizontal flipping, rotation of 10 degrees, sharpening, and adjustments to saturation and zooming. Training on all categories was run for 100 epochs, until there was no further improvement in either accuracy or cross-entropy loss. We then assessed the diagnostic performance of the classifier model in the semi-supervised GAN and the supervised DL model on the Cell validation dataset, the SSH testing dataset, and the XH testing dataset.
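
As a point of reference, a hedged sketch of this supervised baseline is given below: Inception V3 with ImageNet weights, a new softmax classification layer, the Adam optimizer with a learning rate of 0.001, a batch size of 32, and simple augmentation. The input size, the decision to freeze the pretrained base, and the augmentation ranges beyond those named above are assumptions; sharpening and saturation adjustments would require custom preprocessing not shown here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.preprocessing.image import ImageDataGenerator

N_CLASSES = 4
IMG_SIZE = (299, 299)       # default Inception V3 input size (assumption)

# Pretrained feature extractor with a new classification layer on top.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False      # assumption: the pretrained base is kept frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(N_CLASSES, activation="softmax"),   # new classification layer
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation roughly matching the text: horizontal flips, 10-degree rotations,
# and zooming; sharpening and saturation changes would need custom preprocessing.
train_gen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True,
                               rotation_range=10, zoom_range=0.1)
# train_flow = train_gen.flow_from_directory("train/", target_size=IMG_SIZE,
#                                            batch_size=32, class_mode="categorical")
# model.fit(train_flow, epochs=100)
```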

Statistics

All statistical analyses were computed using Python (version 3.7) and the scikit-learn modules (Anaconda, version 1.9.12, Continuum Analytics). We used confusion matrices to compare the predictions of the DL models with the true labels. The true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) were measured according to the confusion matrix. The performance of the DL model was indicated by accuracy, precision, recall, and F1 score, which were defined as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 score = (2 × Precision × Recall) / (Precision + Recall)

The area under the receiver operating characteristic curve (AUC) was generated to evaluate the models’ ability to distinguish urgent referrals (defined as CNV or DME OCT images) from nonurgent referrals (drusen and normal OCT images).
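
The metrics above can be computed directly from the model outputs; the sketch below shows one way to do so with scikit-learn, where y_true, y_pred, and y_score are hypothetical arrays of ground-truth labels, predicted classes, and predicted class probabilities, and the urgent-referral score is taken as the summed probability of CNV and DME.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

CLASSES = ["CNV", "DME", "DRUSEN", "NORMAL"]
URGENT = {"CNV", "DME"}     # urgent referrals, per the definition above


def report(y_true, y_pred, y_score):
    """y_true/y_pred: class names per image; y_score: (n_images, 4) probabilities ordered as CLASSES."""
    cm = confusion_matrix(y_true, y_pred, labels=CLASSES)
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=CLASSES, zero_division=0)
    # Urgent-vs-nonurgent AUC: score each image by the summed CNV + DME probability.
    urgent_true = [1 if y in URGENT else 0 for y in y_true]
    urgent_score = np.asarray(y_score)[:, [CLASSES.index("CNV"), CLASSES.index("DME")]].sum(axis=1)
    auc = roc_auc_score(urgent_true, urgent_score)
    return {"confusion_matrix": cm, "accuracy": acc,
            "precision": prec, "recall": rec, "f1": f1, "urgent_auc": auc}
```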

RESULTS

A total of 108,312 OCT images were downloaded from the “Cell datasets.” The whole training dataset included (1) small supervised datasets (100, 200, 400, and 1000 OCT images with an equal number from each category) with labels for supervised GANs training, and (2) 107,912 OCT images without labels for unsupervised GANs training. The local Cell validation dataset included 1000 images, with 250 from each category. For the SSH testing dataset, a total of 366 OCT images from 366 patients were acquired using Cirrus OCT. The SSH testing dataset included 92 OCT images from eyes with CNV, 101 OCT images from eyes with DME, 56 OCT images from eyes with drusen, and 117 OCT images from normal eyes. For the XH testing dataset, a total of 511 OCT images from 511 patients were acquired using RTVue OCT. The XH testing dataset included 69 OCT images from eyes with CNV, 115 OCT images from eyes with DME, 32 OCT images from eyes with drusen, and 295 OCT images from normal eyes.

We first evaluated the performance of the semi-supervised GANs model with different numbers of supervised training images. The performance of the classifier model of the semi-supervised GAN improved as the supervised training dataset grew. The diagnostic accuracy (in the local Cell validation dataset) was 0.65 [95% confidence interval (CI): 0.62–0.68] for 100 images, 0.83 (95% CI: 0.81–0.85) for 200 images, 0.92 (95% CI: 0.90–0.95) for 400 images, and 0.93 (95% CI: 0.91–0.95) for 1000 images. We therefore chose 400 images as the supervised training dataset in our final analysis, as the semi-supervised GAN method achieved only limited improvement when the supervised dataset was larger than 400 images.

For detecting retinal disorders, the semi-supervised GANs classifier achieved better performance than the supervised DL model when trained on a small dataset. Across the 3 testing datasets, the semi-supervised GANs achieved an accuracy of 0.92 (95% CI: 0.90–0.94) in the local Cell validation dataset, 0.90 (95% CI: 0.88–0.92) in the SSH testing dataset, and 0.92 (95% CI: 0.90–0.95) in the XH testing dataset (Table 1 and Fig. 3). We also evaluated the ability of the 2 models to distinguish urgent referrals from nonurgent referrals on OCT images. The semi-supervised GANs classifier again achieved better performance, with an AUC of 0.99 (95% CI: 0.98–0.99) in the local Cell validation dataset, 0.97 (95% CI: 0.96–0.98) in the SSH testing dataset, and 0.99 (95% CI: 0.98–0.99) in the XH testing dataset (Fig. 4). The supervised DL model had an AUC of 0.97 (95% CI: 0.96–0.98) in the local Cell validation dataset, 0.96 (95% CI: 0.95–0.97) in the SSH testing dataset, and 0.98 (95% CI: 0.97–0.99) in the XH testing dataset (Fig. 4).

TABLE 1 - The Multiclass Classification Metrics of the Semi-Supervised GANs and the Supervised DL Model Tested in the Local Cell, SSH, and XH Validation Datasets
Semi-Supervised GANs Supervised DL model
Precision (95% CI) Recall (95% CI) F1 score (95% CI) Precision (95% CI) Recall (95% CI) F1 score (95% CI)
A: Testing in the local cell dataset
 CNV 0.96 (0.95–0.98) 0.89 (0.87–0.91) 0.93 (0.91–0.95) 0.94 (0.92–0.95) 0.83 (0.80–0.85) 0.88 (0.86–0.90)
 DME 0.95 (0.93–0.96) 0.90 (0.88–0.92) 0.92 (0.91–0.94) 0.87 (0.85–0.89) 0.86 (0.84–0.88) 0.87 (0.85–0.89)
 Drusen 0.83 (0.81–0.86) 0.95 (0.93–0.96) 0.89 (0.87–0.91) 0.68 (0.65–0.71) 0.9 (0.92–0.95) 0.79 (0.76–0.81)
 Normal 0.89 (0.87–0.91) 0.91 (0.89–0.92) 0.90 (0.89–0.92) 0.94 (0.93–0.96) 0.83 (0.81–0.85) 0.88 (0.86–0.90)
B: Testing in SSH dataset
 CNV 0.93 (0.91–0.96) 0.98 (0.96–0.99) 0.96 (0.94–0.98) 0.85 (0.81–0.89) 0.90 (0.87–0.93) 0.87 (0.84–0.91)
 DME 0.93 (0.91–0.96) 0.88 (0.85–0.91) 0.90 (0.87–0.93) 0.89 (0.86–0.92) 0.85 (0.81–0.89) 0.87 (0.84–0.90)
 Drusen 0.82 (0.78–0.86) 0.82 (0.78–0.86) 0.82 (0.78–0.86) 0.64 (0.59–0.69) 0.82 (0.78–0.86) 0.72 (0.67–0.77)
 Normal 0.91 (0.88–0.94) 0.92 (0.89–0.95) 0.91 (0.89–0.94) 0.93 (0.91–0.96) 0.84 (0.81–0.88) 0.89 (0.85–0.92)
C: Testing in XH dataset
 CNV 0.94 (0.92–0.97) 0.76 (0.72–0.80) 0.84 (0.81–0.88) 0.93 (0.91–0.95) 0.88 (0.85–0.91) 0.90 (0.88–0.93)
 DME 0.88 (0.85–0.91) 0.94 (0.92–0.96) 0.91 (0.89–0.94) 0.91 (0.89–0.94) 0.87 (0.84–0.90) 0.89 (0.86–0.92)
 Drusen 0.84 (0.81–0.88) 0.79 (0.75–0.83) 0.82 (0.79–0.85) 0.97 (0.95–0.98) 0.72 (0.68–0.76) 0.83 (0.79–0.86)
 Normal 0.95 (0.93–0.97) 0.99 (0.97–1.00) 0.97 (0.95–0.98) 0.92 (0.89–0.94) 0.99 (0.98–1.00) 0.95 (0.93–0.97)
CI indicates confidence interval; CNV, choroidal neovascularization; DL, deep learning; DME, diabetic macular edema; GAN, generative adversarial network; SSH, Shanghai Shibei Hospital; XH, Xinhua Hospital.

FIGURE 3:
Confusion matrices of the semi-supervised GANs and the supervised DL model tested in the local cell validation dataset, the SSH testing dataset, and the XH testing dataset. DL indicates deep learning; GANs, generative adversarial networks; SSH, Shanghai Shibei Hospital; XH, Xinhua Hospital.
FIGURE 4:
Receiver operating characteristic curves summarizing the ability of the semi-supervised GANs and the supervised DL model to discriminate urgent referrals from nonurgent referrals in the local cell validation dataset, the SSH testing dataset, and the XH testing dataset. DL indicates deep learning; GANs, generative adversarial networks; SSH, Shanghai Shibei Hospital; XH, Xinhua Hospital.

DISCUSSION

In this study, we proposed a semi-supervised GANs model to detect retinal disorders from OCT images using a small labeled dataset. In contrast to supervised DL approaches to retinal disorder detection, the semi-supervised GANs model presented in our work did not need a large labeled dataset and still achieved good diagnostic performance. Most importantly, our models maintained their performance when tested on independent datasets taken from different hospitals using different OCT devices. Our study suggests that semi-supervised GANs offer a substantial advantage in both theory and practice because they require less human labeling effort while delivering high accuracy.

One major limitation of the supervised DL model is that it requires a large amount of labeled data to obtain good performance. Previously, researchers from different centers could share data and combine them into a larger dataset for DL model training. Li et al22 reported a fully automated DL model trained on more than 200,000 OCT images from 5 different hospitals that achieved a prediction accuracy of 98.6%. Gulshan et al3 trained a transfer learning DL model on diabetic retinopathy datasets from 2 different countries, and the model had excellent performance with an AUC of 0.974. Recently, the National Institute of Standards and Technology (US) defined biomedical image data as a kind of personally identifiable information, which may preclude sharing medical images across centers or countries, or make sharing contingent on approval from local institutional review boards.23 Meanwhile, in medical image analysis, obtaining high-quality labels for the data is time-consuming, as accurately grading medical images requires the expert knowledge of clinicians.24–26 In the clinical research setting, clinicians can often feasibly label a small portion of the images while leaving a much larger portion of images unlabeled.

GANs have made dramatic progress in ophthalmology, such as generating realistic ophthalmic images,27 domain adaptation28–30 between different ophthalmic imaging modalities, and synthesizing OCT images31 for developing DL algorithms. Semi-supervised classification is an area of machine learning that aims to learn from partially labeled/classified data.25 In semi-supervised recognition, GANs leverage the information in both the labeled and unlabeled data to improve the classifier's performance on unseen labeled data. Yi et al32 adopted CatGANs for unsupervised and semi-supervised feature representation learning from dermoscopy images. Lahiri et al and Lecouat et al adopted semi-supervised GANs for retinal vessel classification and cardiac disease diagnosis, respectively.25,32 Their studies also suggested that semi-supervised GANs can achieve performance comparable to that of a traditional supervised DL model with less labeled data. In the current study, we used the public “Cell dataset” reported by Kermany et al, with more than 100,000 OCT images collected at 4 eye centers in 2 different countries. In contrast to Kermany's study, we randomly chose a labeled dataset of only 400 images for supervised GANs training and left the other OCT images unlabeled for unsupervised GANs training. It is interesting to note that, using a supervised DL model similar to that in Kermany's study and the same Cell validation dataset, our model trained on a small number of labeled images achieved an AUC of 0.98, similar to that of the DL model trained on the fully labeled Cell dataset.

Previously, the DL community has also used transfer learning and data augmentation techniques to address the problem of small datasets.33–35 Transfer learning can improve the performance of a DL model by pretraining on a different large dataset (eg, ImageNet).36 However, a critical requirement for successful transfer learning is that the source and target domains should be related.37 There are fundamental differences in data sizes, features, and task specifications between natural image classification (ImageNet) and the target medical imaging tasks.38 Data augmentation is a method of generating more training data from existing training samples and is adopted universally in DL.39 Traditional augmentation approaches, such as flipping, scaling, translating, rotating, blurring, and sharpening, are firmly limited, especially in medical imaging tasks where the images follow strict standards. In the current study, we also trained a supervised DL model using both transfer learning and data augmentation techniques. Our results demonstrated that the performance of the semi-supervised GANs was better than that of the supervised DL model when trained on a small labeled dataset. We also noticed that some recent advances, such as novel residual blocks for DL architectures and attention mechanisms for detecting retinal diseases,40,41 may improve diagnostic performance. However, the benefit of these DL models remains to be demonstrated in real clinical settings, and future work on novel DL models will require more progress in semi-supervised learning to address training with reduced datasets.

In contrast to other GAN approaches to medical image synthesis, the semi-supervised GANs presented in this work focus on the supervised mode rather than the generator mode. It is still controversial how the discriminator benefits from joint training with a generator, and why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Some authors noticed that their models generated better images11,42 but failed to improve performance on semi-supervised learning. Dai et al43 further proposed that, given the discriminator objective, good semi-supervised learning indeed requires a bad generator. During our training, the loss of the supervised model shrank to a small value close to zero and its accuracy rose above 90%, which was maintained for the entire run. On the other hand, the losses of the unsupervised discriminator (whose accuracy was around 0.4) and the generator remained at moderate values throughout the run, as they had to be kept in equilibrium. As shown in Figure 5, the images generated by our semi-supervised GANs were not as good as the images generated by the progressively grown GANs model presented in our previous study.31
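
To illustrate how the 3 models are kept in this equilibrium, the sketch below shows one training step in the style of Brownlee's scheme,19 reusing the hypothetical builder functions from the earlier architecture sketch; the batch sizes, the stacked-model construction, and the loss bookkeeping are assumptions rather than the authors' exact implementation.

```python
import numpy as np
from tensorflow.keras import models


def make_gan(generator, d_unsup):
    """Stack the generator with the (frozen) unsupervised discriminator for generator updates."""
    d_unsup.trainable = False                      # frozen only inside this stacked model
    gan = models.Sequential([generator, d_unsup])
    gan.compile(loss="binary_crossentropy", optimizer="adam")
    return gan


def train_step(d_sup, d_unsup, generator, gan,
               x_labeled, y_labeled, x_unlabeled, latent_dim=100):
    """One update of all 3 models; y_labeled are integer class indices."""
    n = len(x_unlabeled)
    # 1) Supervised discriminator on the small labeled batch.
    sup_loss, sup_acc = d_sup.train_on_batch(x_labeled, y_labeled)
    # 2) Unsupervised discriminator: real (unlabeled) images vs generated images.
    x_fake = generator.predict(np.random.randn(n, latent_dim), verbose=0)
    d_loss_real = d_unsup.train_on_batch(x_unlabeled, np.ones((n, 1)))
    d_loss_fake = d_unsup.train_on_batch(x_fake, np.zeros((n, 1)))
    # 3) Generator: try to make the unsupervised discriminator call its images "real".
    g_loss = gan.train_on_batch(np.random.randn(n, latent_dim), np.ones((n, 1)))
    return sup_loss, sup_acc, d_loss_real, d_loss_fake, g_loss
```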

FIGURE 5:
Examples of real images (the last column), synthetic images generated by the semi-supervised GAN method (the second column), and synthetic images generated by the progressively grown GAN (the first column). GAN indicates generative adversarial network.

The limitations of the current study are as follows. First, the images synthesized in our study were 128 × 128 pixels, which is a lower resolution than that of the OCT images used in Kermany's original study. This may be one reason why the semi-supervised model could not achieve better performance than Kermany's supervised model trained on the larger dataset. The primary purpose of the semi-supervised GANs architecture was to train a classifier rather than a generator; we had previously synthesized realistic OCT images at higher resolutions (eg, 256 × 256 pixels or above) using a progressively grown GANs architecture.10 Second, we did not include other retinal disorders (such as macular hole, epiretinal membrane, hypermyopic retinopathy, or pigment epithelium detachment) in our study. It is possible to train semi-supervised GANs using OCT images with more retinal disorders.44 Third, the 2 independent clinical datasets included only a limited number of OCT images, making it challenging to interpret small differences in the performance of the DL models. Future work is warranted to improve the generalization of semi-supervised GANs using testing datasets from different centers. Recently, Dosovitskiy et al45 proposed the Vision Transformer (ViT) model, which achieved better results than other DL models (such as the Inception V3 used in the current study). To achieve state-of-the-art results, however, the ViT model needs to be trained on the JFT-300M dataset (with more than 300 million images in total), which was not available for our study. Some authors have also proposed scalable self-supervised learning with large pretrained ViT models, which could be adopted into the semi-supervised GANs architecture to further improve performance.46 Further studies will involve using other transformer or GAN models, such as the retinal vascular GAN or the vision transformer GAN.47,48

In summary, our study showed that, with a small labeled dataset, a semi-supervised GAN could detect retinal disorders imaged by different OCT devices, and the performance of the semi-supervised GANs was better than (or at least equal to) that of a supervised DL model. In real-life clinical settings, we believe the same network architecture can be developed by training on a limited number of expert-labeled images together with plentiful unlabeled images from imaging centers.

REFERENCES

1. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521:436–444.
2. Ting DSW, Liu Y, Burlina P, et al. AI for medical imaging goes deep. Nat Med 2018; 24:539–540.
3. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016; 316:2402–2410.
4. Ting DSW, Cheung CY, Lim G, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 2017; 318:2211–2223.
5. Sim SS, Yip MYT, Wang Z, et al. Digital technology for AMD management in the post-COVID-19 new normal. Asia Pac J Ophthalmol (Phila) 2021; 10:39–48.
6. Akhter M, Toy B. Big data-based epidemiology of uveitis and related intraocular inflammation. Asia Pac J Ophthalmol (Phila) 2021; 10:60–62.
7. Lee EB, Wang SY, Chang RT. Interpreting deep learning studies in glaucoma: unresolved challenges. Asia Pac J Ophthalmol (Phila) 2021; 10:261–267.
8. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018; 172:1122–1131.
9. Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014; 2:2672-2680.
10. Zheng C, Xie X, Zhou K, et al. Assessment of generative adversarial networks model for synthetic optical coherence tomography images of retinal disorders. Transl Vis Sci Technol 2020; 9:29.
11. Salimans T, Goodfellow IJ, Zaremba W, et al. Improved Techniques For Training GANs. Proceedings of the 30th International Conference on Neural Information Processing Systems. 2016; 29:2234-2242.
12. Chang CY, Chen TY, Chung PC. Semi-supervised learning using generative adversarial networks. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). 2018:892-896.
13. Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput 2021; 59:401–415.
14. Odena A. Semi-supervised Learning with Generative Adversarial Networks. 2016:arXiv:1606.01583. Available from https://ui.adsabs.harvard.edu/abs/2016arXiv160601583O. Accessed June 1, 2016.
15. Zheng C, Koh V, Bian F, et al. Semi-supervised generative adversarial networks for closed-angle detection on anterior segment optical coherence tomography images: an empirical study with a small training dataset. Ann Transl Med 2021; 9:1073.
16. He J, Cao T, Xu F, et al. Artificial intelligence-based screening for diabetic retinopathy at community hospital. Eye (Lond) 2020; 34:572–576.
17. Davis MD, Gangnon RE, Lee LY, et al. The age-related eye disease study severity scale for age-related macular degeneration: AREDS report no. 17. Arch Ophthalmol 2005; 123:1484–1498.
18. Kang SW, Park CY, Ham DI. The correlation between fluorescein angiographic and optical coherence tomographic features in clinically significant diabetic macular edema. Am J Ophthalmol 2004; 137:313–322.
19. Brownlee J. How to Implement a Semi-supervised GAN (SGAN) from Scratch in Keras. Machine Learning Mastery. Updated September 1, 2020. Available from https://machinelearningmastery.com/semi-supervised-generative-adversarial-network. Accessed July 24, 2019.
20. Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:2818-2826.
21. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-scale Image Recognition. 2014;arXiv:1409.1556.
22. Li F, Chen H, Liu Z, et al. Fully automated detection of retinal disorders by image-based deep learning. Graefes Arch Clin Exp Ophthalmol 2019; 257:495–505.
23. Benke KK, Arslan J. Deep learning algorithms and the protection of data privacy. JAMA Ophthalmol 2020; 138:1024–1025.
24. Ting DSW, Peng L, Varadarajan AV, et al. Deep learning in ophthalmology: the technical and clinical considerations. Prog Retin Eye Res 2019; 72:100759.
25. Zhu XJ. Semi-Supervised Learning Literature Survey. United States: University of Wisconsin-Madison; 2005.
26. McMillan T. CCNA Security Study Guide: Exam 210–260. United States: John Wiley & Sons; 2018.
27. Burlina PM, Joshi N, Pacheco KD, et al. Assessment of deep generative models for high-resolution synthetic retinal image generation of age-related macular degeneration. JAMA Ophthalmol 2019; 137:258–264.
28. Kamran SA, Hossain KF, Tavakkoli A, et al. Attention2angiogan: Synthesizing Fluorescein Angiography From Retinal Fundus Images Using Generative Adversarial Networks. Presented at the 25th International Conference on Pattern Recognition (ICPR), 2020. IEEE;2021:9122-9129.
29. Tavakkoli A, Kamran SA, Hossain KF, et al. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep 2020; 10:21580.
30. Kamran SA, Hossain KF, Tavakkoli A, et al. Fundus2Angio: A Conditional GAN Architecture for Generating Fluorescein Angiography Images From Retinal Fundus Photography. Presented at the 15th International Symposium on Visual Computing, 2020. 2020:125-138.
31. Zheng C, Bian F, Li L, et al. Assessment of generative adversarial networks for synthetic anterior segment optical coherence tomography images in closed-angle detection. Transl Vis Sci Technol 2021; 10:34.
32. Yi X, Walia E, Babyn P. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks Assisted by Wasserstein Distance for Dermoscopy Image Classification. 2018:arXiv:1804.03700. Available from https://ui.adsabs.harvard.edu/abs/2018arXiv180403700Y. Accessed April 1, 2018.
33. Simard P, Victorri B, LeCun Y, et al. Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network. Proceedings of the 4th International Conference on Neural Information Processing Systems. 1991:895-903.
34. Sharif Razavian A, Azizpour H, Sullivan J, et al. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2014:806-813.
35. Iskander M, Ogunsola T, Ramachandran R, et al. Virtual reality and augmented reality in ophthalmology, a contemporary prospective. Asia Pac J Ophthalmol (Phila) 2021; 10:244–252.
36. Yosinski J, Clune J, Bengio Y, et al. How Transferable Are Features in Deep Neural Networks? Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014;arXiv:1411.1792.
37. Tan B, Song Y, Zhong E, et al. Transitive Transfer Learning. Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2015:1155-1164.
38. Raghu M, Zhang C, Kleinberg J, et al. Transfusion: Understanding Transfer Learning for Medical Imaging. Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019;arXiv:1902.07208.
39. Dao T, Gu A, Ratner A, et al. A Kernel Theory of Modern Data Augmentation. Proceedings of the 36th International Conference on Machine Learning. PMLR. 2019; 97:1528-1537.
40. Kamran SA, Saha S, Sabbir AS, et al. A comprehensive set of novel residual blocks for deep learning architectures for diagnosis of retinal diseases from optical coherence tomography images. Deep Learn Appl 2021; 2:25–48.
41. Kamran SA, Tavakkoli A, Zuckerbrod SL. Improving Robustness Using Joint Attention Network for Detecting Retinal Degeneration from Optical Coherence Tomography Images. 2020 IEEE International Conference on Image Processing (ICIP). IEEE;2020:2476-2480.
42. Ulyanov D, Vedaldi A, Lempitsky V. It Takes (Only) Two: Adversarial Generator-Encoder Networks. Presented at the 32nd AAAI Conference on Artificial Intelligence. 2018.
43. Dai Z, Yang Z, Yang F, et al. Good Semi-supervised Learning that Requires a Bad GAN. 2017;arXiv:1705.09783.
44. Chan EJJ, Najjar RP, Tang Z, Milea D. Deep learning for retinal image quality assessment of optic nerve head disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282–288.
45. Dosovitskiy A, Beyer L, Kolesnikov A, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 2020;arXiv:2010.11929.
46. He K, Chen X, Xie S, et al. Masked Autoencoders are Scalable Vision Learners. 2021;arXiv:2111.06377.
47. Kamran SA, Hossain KF, Tavakkoli A, et al. VTGAN: Semi-supervised Retinal Image Synthesis and Disease Prediction Using Vision Transformers. 2021;arXiv:2104.06757.
48. Kamran SA, Hossain KF, Tavakkoli A, et al. RV-GAN: Segmenting Retinal Vascular Structure in Fundus Photographs Using a Novel Multi-scale Generative Adversarial Network. Medical Image Computing and Computer Assisted Intervention - MICCAI 2021. 2021:34-44.
Keywords:

deep learning; generative adversarial networks; optical coherence tomography; retinal disorders; semi-supervised

Copyright © 2022 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.