Visual field (VF) testing plays an essential role in the diagnosis and monitoring of glaucoma, complementing the morphometric analysis of the optic disc in the assessment of functional loss. Several methods are available to quantify VF loss. Today, white-on-white standard automated perimetry (SAP) represents the gold standard and is the most commonly used method. However, many factors influence its outcome. Anderson et al1 showed that home monitoring can improve patients’ outcomes by facilitating a higher frequency of VF testing. The earlier VF loss is detected, the earlier therapeutic intervention can be initiated.
The idea of using virtual reality (VR) for VF assessment was first described in 1998.2 Since then, software and hardware have improved significantly, and various ideas for implementing novel technologies into VF testing have emerged.3 Because of the low resolution of dedicated VR glasses, the combination of a smartphone inserted into a VR headset seems to be a promising alternative. A head-mounted perimeter can be used to test patients’ luminance sensitivity similarly to SAP. Tsapakis et al4 showed a high correlation between the results of perimetry using VR glasses and the Humphrey perimeter, yet the study included only 20 eyes of 10 patients. Narang et al5 introduced a VR-based Advanced Vision Analyzer that allowed accurate VF assessment and showed a test-retest reliability similar to that of the Humphrey perimeter. Likewise, a recent study by Stapelfeldt et al6 showed well-comparable results between a VR-based perimeter and the Octopus 900.
The COVID-19 pandemic has highlighted the need for telemedical diagnostics. In Germany, elective examinations and treatments had to be postponed to preserve capacities for critically ill patients. To ensure optimal care for all patients, there is now a significant need for high-quality telemedical diagnostics. The development of the VR perimeter was based on the idea of providing patients with a home care measurement method for VF control. In this context, we developed and tested a VR system to assess the VF using a smartphone and examined whether clinically meaningful VF measurements can be obtained with such a smartphone-based VR system.
This study was conducted at the Department of Ophthalmology of the Friedrich-Alexander-University in Erlangen, Germany. Patients were recruited during regular consultation hours and from the Erlangen Glaucoma Registry (EGR; NCT00494923). All patients were informed about the examination, and written informed consent was obtained from all participants before the study. The registered study followed the tenets of the Declaration of Helsinki for research involving human subjects and was approved by the Ethics Review Committee of the Friedrich-Alexander-University Erlangen-Nürnberg (Number: 3458). One hundred patients were examined ophthalmologically, including white-on-white perimetry in normal or dynamic mode using an Octopus 900 perimeter (Haag-Streit). The selection criteria for statistical evaluation were (1) a decimal visual acuity >0.5 (50%) and (2) an Octopus reliability factor (RF) <15%. The RF describes the percentage of false-positive and false-negative answers among all catch trials. If both eyes matched the criteria, the eye with the more advanced disease was included in the study. If both eyes showed the same pathology, the selection was made randomly. After applying the inclusion criteria, 93 eyes were included in the statistical evaluation. Table 1 shows descriptive data for the cohort, consisting of 48 women and 45 men with a mean age of 62.52 years (±12.2, range: 26–83 y). Within this group of patients, there were 19 controls (normal optic disc and intraocular pressure, no visual field defect, or suspects because of glaucoma risk factors but without pathologic findings), 17 patients with ocular hypertension, 11 preperimetric glaucomas, and 46 perimetric glaucomas. The diagnoses were made by a glaucoma expert with regard to morphometric changes of the optic nerve head, such as neuroretinal rim thinning, retinal nerve fiber layer defects, excavation, and cup-to-disc ratio, as well as the perimetry results and the IOP.
The patients were recruited from the Erlangen Glaucoma Registry. These patients are examined regularly and are therefore highly experienced in perimetry.
TABLE 1 - Description of the Included Patients Split by Diagnosis Groups (N=93)
[Table columns: N (% of total), visual acuity (SD), and Octopus mode (%); rows by diagnosis group, beginning with normal/with risk factors.]
Using the Kruskal-Wallis test, there was no significant difference between the ages of the groups (P>0.05).
In addition, 6 healthy volunteers who were not included in the patients’ cohort were examined by the smartphone-based campimetry (Sb-C) twice on the same day to evaluate its test-retest reliability. The participants were medical students aged 22–26 years with no known internal or ocular disease, no correction glasses, and a decimal visual acuity of 1.0 or higher. During the Sb-C examination, patients could keep on only their distance correction glasses; in the case of other correction glasses, patients were given trial lenses.
The Sb-C software was downloaded onto the smartphone and installed as the SmartCampiTracker app, which also allowed the registration and storage of data. The Sb-C was applied using VR glasses (VR One Plus; Zeiss) and an iPhone 6 with a 4.7-inch display and a maximum luminance of (Fig. 1), examining the 30-degree VF with 59 test positions corresponding to the G pattern for Octopus perimetry, in which stimuli are distributed along the retinal nerve fiber bundles (Fig. 2).7,8 The VR goggles strap onto the head with an adjustable head strap and are equipped with precision lenses that allow different distances between pupil and lens, as well as the wearing of correction glasses. They support an interpupillary distance between 53 and 77 mm without any necessary adjustment. The iPhone 6 display has a resolution of 1334×750 pixels with a pixel density of 326 pixels per inch and a contrast ratio of 1400:1.8 During VF assessment, the maximum screen brightness was turned on and the battery was fully charged. Automatic screen brightness adjustment, as well as the blue light filter (Night Shift), was turned off. The Sb-C software was developed using the Google Cardboard Software Development Kit. The background brightness of the Sb-C was 0.05 cd/m2. The luminance of the VR screen was measured in cd/m2 with a photometer (Photometer J16, Tektronix Inc.), followed by the luminance of each luminance step of the presented stimulus. Representing a Goldmann III stimulus size, the visual stimuli were presented for 200 ms in random order with increasing luminance, starting with the lowest possible luminance. If stimuli were not seen, the intensity was increased using a standard 4/2 dB full-threshold staircase strategy.
The decibel steps chosen were the following: 24.81, 23.61, 22.38, 21.41, 20.24, 19.43, 18.56, 17.62, 16.72, 15.78, 14.92, 14.09, 13.16, 12.28, 11.34, 10.44, 9.52, 8.59, 7.70, 6.81, 5.89, 4.95, 4.04, 3.07, and 2.07 dB. The decibel steps were constructed in such a way that the resulting luminance steps matched those of the Octopus G1, referenced to the measured background brightness. However, the maximum sensitivity that can be measured by the Sb-C is 24.81 dB, compared with 47 dB for the Octopus G1. The time between stimuli varied, with a maximum waiting time of 3000 ms. The recognition of each light signal had to be confirmed by the patient by pushing a button on a Bluetooth-connected controller; otherwise, the stimulus was noted as not seen. Contralateral eyes were not occluded. Instead, the contralateral eye focused on a display showing the same luminance as the background of the testing site. The Sb-C can be used in different modes in terms of the starting luminance of the stimuli: it can be started with maximum or minimum luminance. In this study, we started with the minimum luminance. A test run with 10 testing points at the highest intensity was performed with every patient before measuring the sensitivities. In this way, we could verify that the patient’s cognitive and motor function was sufficient for the examination. Patients could also get an impression of what the stimuli would look like, without fatigue resulting from the pretest. Fixation was checked via the stable location of the blind spot 3 times during each examination using the Heijl-Krakau method. False-positive reactions were recorded if the patient pushed the button when no stimulus was presented.
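The full-threshold procedure described above can be sketched in Python for a single test location. This is our illustrative reconstruction, not the app’s code: function and parameter names are hypothetical, dB values are treated as continuous rather than snapped to the listed steps, and in the real examination presentations at the 59 locations are interleaved in random order.

```python
def staircase_4_2(sees_stimulus, start_db=24.81, floor_db=2.07, max_trials=20):
    """4/2 dB full-threshold staircase for one location (illustrative).

    `sees_stimulus(db)` simulates the patient's button press for a
    stimulus at `db`; higher dB means a dimmer stimulus. The test starts
    dim (high dB) and brightens in 4 dB steps; after the first response
    reversal the step shrinks to 2 dB, and the second reversal ends the
    run. The estimate is the last stimulus that was seen.
    """
    db, step, direction = start_db, 4.0, -1   # -1: brighten (lower dB)
    last_seen, reversals = None, 0
    for _ in range(max_trials):
        seen = sees_stimulus(db)
        if seen:
            last_seen = db
        new_direction = 1 if seen else -1
        if new_direction != direction:        # response flipped: a reversal
            reversals += 1
            step = 2.0                        # refine with 2 dB steps
            if reversals == 2:
                break
        direction = new_direction
        db = min(start_db, max(floor_db, db + direction * step))
    return last_seen                          # None if nothing was seen
```

For a simulated observer who sees every stimulus brighter than 15 dB, the run descends 24.81 → 20.81 → 16.81 → 12.81 dB, then reverses upward in 2 dB steps and terminates near the true threshold.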
Patients could create a profile in the SmartCampiTracker app in which the results of the Sb-C were saved. The Sb-C generated a VF map with a graphic illustration of the defect. Two patient examples are shown in Figure 3 in comparison to the output of the Octopus perimeter. The examinations on both devices were performed on the same day.
Statistical analysis was performed using SPSS version 25.0 (SPSS Inc.). Descriptive statistics were computed, and associations between the sensitivity thresholds of single tested points in the Octopus G1 and the Sb-C were assessed. Correlations between them were tested using the Spearman correlation test, which was also used to test the correlation of the overall mean sensitivity (MS) of the tested points. The Kruskal-Wallis test was used to examine age differences between the groups. The test-retest reliability of the Sb-C was tested on 6 healthy participants with 2 test runs on the same day and was statistically evaluated using Spearman correlation. To demonstrate the mean difference in measurements and the acceptable range of variation, a Bland-Altman plot was drawn. In addition, the ranked sensitivities resulting from the different examinations were compared for 6 eyes. The level of significance was set at P<0.05.
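Spearman correlation, used throughout, is simply the Pearson correlation computed on ranks. A minimal NumPy sketch (our illustration; SPSS additionally applies a tie correction that this version omits):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation without tie correction (illustrative).

    Ranks both samples, then returns the Pearson correlation of the
    ranks, e.g. for comparing per-eye MS from two perimeters.
    """
    rx = np.argsort(np.argsort(x)).astype(float)  # 0-based ranks
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])
```

Because only the rank order enters the computation, the coefficient is insensitive to the differing dB scales of the two devices.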
The MS of all 59 testing points of the Octopus G1 perimetry was 23.13 dB (95% CI, 22.08–24.18) across all eyes. The MS of the Sb-C was 21.23 dB (95% CI, 20.37–22.08). The correlation between the mean MS (whole VF) measured by the Octopus G1 perimetry and the Sb-C was strong, with r=0.815 (P<0.05). The scatter plot (Fig. 4) graphically demonstrates the correlation between the MS measured by the 2 devices. Comparing patients with and without VF defects, the correlation between the MS measured by the Octopus G1 and the Sb-C was strong in each group: r=0.733, P<0.05 (no VF defect, N=48) and r=0.797, P<0.05 (VF defect, N=45). Figure 5 shows the mean difference between the MS measured by the Octopus G1 and the Sb-C in dB, together with the accepted range of agreement (±1.96 SD). The mean difference between the 2 methods was 1.88 dB with a SD of 2.30 dB, giving an accepted range between −2.63 dB (lower limit of agreement) and 6.39 dB (upper limit of agreement). Figure 6 shows the sensitivity values assessed by the Octopus and the Sb-C, ranked, for 6 patients with central and peripheral scotoma.
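The limits of agreement in Figure 5 follow directly from the reported bias and SD (1.88 ± 1.96 × 2.30 ≈ −2.63 and 6.39 dB). A minimal NumPy sketch of the Bland-Altman computation (our illustration, with hypothetical variable names):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement (bias ± 1.96 SD) between two
    paired measurement series, e.g. per-eye MS from the Octopus G1 (a)
    and the Sb-C (b)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)   # mean and sample SD of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

With the study’s summary statistics (bias 1.88 dB, SD 2.30 dB), this gives limits of −2.63 and 6.39 dB, matching the reported range.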
There was a moderate correlation found between the sensitivity thresholds of single points measured by the Octopus G1 and the Sb-C with a mean r of 0.57 (95% CI, 0.55–0.60). The mean number of false-positive reactions patients showed when tested by the Sb-C was 3.89 with a range from 0 to 40 and a median of 2. The mean number of reactions when testing signals in the area of the blind spot was 0.69 with a range from 0 to 3 and a median of 0. The number of false-positive reactions (r=−0.080, P=0.444) and fixation losses (r=−0.072, P=0.494) did not significantly correlate with the MS measured by the Sb-C.
The test-retest reliability of the Sb-C was tested on 6 healthy participants with a correlation of r=0.591 (P<0.05).
The goal of the study was to test the smartphone-based campimetry software within the app SmartCampiTracker, which would be easy and inexpensive to use at home.
Smartphone-based devices can be used to assess VF defects in a way similar to conventional SAP. The undeniable advantage of head-mounted perimeters is their ability to be used regardless of where the patient is located. The Sb-C software for VF assessment can easily be downloaded to a smartphone as the SmartCampiTracker app, which enables the patient to perform the VF examination at home using a smartphone and a head-mounted display. The workflow is as follows: the patient downloads the SmartCampiTracker app, registers, and performs the Sb-C examination. The VF data are stored in the app. The patient can request a remote medical grading. Within the app, the patient can view his or her latest and all previous findings.
It has been shown that the results of VF testing are influenced by many external and internal factors. The measured sensitivities of different test points depend on the defect’s severity,9 the patient’s motivation and cognitive ability, the technician’s experience, and even circadian and seasonal variabilities.10 Common parameters for the objectification of a patient’s reliability are fixation loss, false-positive and false-negative answers, and testing time. However, their actual effect on VF reliability strongly depends on the defect’s severity.11 The resulting VF variability represents a hindrance to the detection of progressing VF loss. This raises the question of how frequently a VF assessment should be performed. Wu et al showed that raising the frequency of VF tests from annually to twice or 3 times a year does decrease the time to detect VF loss progression. However, a limitation of this study was the assumption that VF loss is a linear process, which does not hold in reality. Rather, it is a slowly evolving process, but one that seems to progress rapidly in some patients.12 Patients and practitioners might profit tremendously from the possibility of uncomplicated regular VF testing without the necessity to see a specialist. In addition to enabling home testing, the Sb-C might be useful for the in-clinic diagnostics of special patient groups. VR glasses are lightweight and allow the patient to move during the examination. This is an advantage not only for claustrophobic patients, but also for patients who are unable to hold the required head position in the perimeter hemisphere.4
The MS and the sensitivities of single points correlated significantly between the Octopus G1 and the Sb-C. Figure 4 underlines the strong correlation apart from a few outliers. However, the MS measured by the Sb-C was significantly lower than the MS measured by the Octopus G1. The differences between sensitivity thresholds of single points measured by the Octopus G1 and the Sb-C were not consistent throughout the VF: they were bigger in the macular region and became smaller toward the periphery. A direct comparison of results measured by different perimeters is difficult because of the lack of standardization concerning the test point pattern, background luminance, maximum stimulus intensity, stimulus size, and stimulus exposure time.13,14 The background brightness of the Octopus perimeter was 1.27 cd/m2, compared with 0.05 cd/m2 in the Sb-C. This was because of the nearly black background of the iPhone. The maximum sensitivity that could be measured by the Sb-C was 24.8 dB, compared with 47 dB achievable with the Octopus G1. Because of this technical limitation, the hill of vision cannot be reached in the macular area. This explains the ceiling effect at 24.8 dB shown in Figure 5, which results from the limited luminance range of the smartphone display. With improved hardware, this effect might not be as striking. Further development of the software is also needed to measure higher sensitivities, especially to portray the macular area more accurately. This limitation is likewise demonstrated by the graphical representation of the ranked sensitivities (Fig. 6). Still, they show good agreement between Octopus and Sb-C results in patients with peripheral and central scotomas. The scotomas presented by the Sb-C were also well comparable to those detected by the Octopus G1 (Fig. 3). The Bland-Altman plot (Fig. 5) shows that the measured differences are mostly within an acceptable range of 1.96 standard deviations of the mean difference.
The fact that patients wore their correction glasses during the Sb-C was convenient for the patients, yet it might have led to disrupted measurements in the upper VF (points 1–8, Fig. 2).
Monocular sensitivities have been found to be significantly higher when the fellow eye is not occluded. This results from a “blankout” caused by the differing levels of illumination experienced by each eye under occlusion.15,16 Moreover, nonocclusion can lead to a higher sensitivity because of binocular interaction.17 In the Sb-C, the nontested eye focuses on a blank display that shows the same luminance as the background of the testing site. As the optical system of the head-mounted device works dichoptically, binocular viewing is not possible, and there was no need to occlude one eye, in contrast to the Octopus, in which the contralateral eye had to be occluded with a translucent eye patch.
The correlation between the MS values of the Sb-C and the Octopus G1 was stronger in patients with VF defects than in patients without VF defects. Razeghinejad et al,18 who worked with a similar VR perimeter and compared it with MS values of the Humphrey field analyzer, made a similar observation. They showed a stronger correlation in glaucoma patients and concluded that this might be because of the principle behind Spearman rank correlation.18 As our data were not normally distributed, we used the Spearman correlation as well. The use of ranks instead of raw data arithmetically leads to a higher correlation coefficient, because the impact of the ranked severity spectrum of glaucoma is stronger than it would be in a linear correlation.18 Thus, this difference in correlation might be more of a statistical artifact than a clinical result. Another reason for the higher correlation in glaucomatous eyes might be the ceiling effect described above: if patients could detect stimuli at sensitivities outside the range of the Sb-C, this skewed the correlation.
The Sb-C can be used in different modes concerning the starting luminance of the stimuli. In this study, the examination began with the minimum luminance, which was increased if the stimuli were not seen. In patients with VF defects, this might have led to frustration. In addition, a learning effect cannot be achieved if patients are not familiar with the stimuli. Before measuring sensitivity levels, we performed a test run with every patient, testing 10 points displayed at the greatest intensity. In this way, we tried to avoid the bias mentioned above. On the other hand, when starting with stimuli that are too bright in healthy individuals, the program would take a very long time to run through all intensity levels before reaching the usually very low luminance thresholds. This might lead to test-induced fatigue, which also skews the measurements. This problem could be addressed by presenting stimuli of higher intensity without measurement to generate a successful experience for the patients and help them become accustomed to the stimuli. Starting with a moderate intensity and varying it throughout the test could be another solution.
Using Spearman correlation to examine test-retest reliability, the measurements correlated well (r=0.591, P<0.05). The Sb-C therefore showed reproducibility in 6 healthy subjects, though its reliability when tested on patients with VF defects still requires closer examination. The small number of healthy participants is one of the study’s limitations. The Advanced Vision Analyzer examined by Narang et al5 showed a good test-retest reliability with an overall intraclass correlation coefficient of 0.893 (P<0.001), a result comparable to the Humphrey field analyzer. Unfortunately, our sample size was too small to use this test method, so the test-retest reliability of our device needs further investigation. The reliability parameters measured by the Sb-C were the number of fixation losses and the number of false-positive responses. The literature has shown that in glaucoma patients the impact of fixation loss on reliability is small.19,20 Nevertheless, the number of fixation checks will be raised in further versions of the software to track patients’ compliance. Yohannan and colleagues stated that false-positive responses have the most significant impact on patients’ reliability.20 We chose the RF measured by the Octopus as an exclusion criterion if it was >15%. In this way, we tried to avoid including patients who were unable to comply for any reason.
The study was limited by the heterogeneity of the group and the small number of healthy participants, which prevented us from determining standard values. Considering that sensitivity thresholds cannot easily be compared between different perimeters, further research is necessary to devise conversion formulas. In addition, age-adapted values need to be estimated, as has been done for other perimeters. Studies showed a decline of 0.065 dB/y for the Octopus G121 and 0.08–0.16 dB/y for the Humphrey field analyzer.22 In our cohort, patients with preperimetric and perimetric glaucoma were older than the control and ocular hypertension groups (Table 1). However, using the Kruskal-Wallis test, this age difference was not significant.
The Sb-C was developed to provide patients with the opportunity to perform VF examinations at home. The COVID-19 pandemic underlines the importance of such methods. To reduce the risk of exposure to COVID-19, patient travel to physicians and direct contact should be minimized. It has been shown that telemedicine can be used effectively for glaucoma treatment.23–26 The focus of the treatment is on preventive measures. Therefore, telemedical glaucoma treatment requires specific home care measurement metrics to assess disease progression in a clinically robust manner.27 Of critical importance are home care–based measurements of intraocular pressure and VF.28,29 To reduce the frequency of doctor-patient contacts, the use of “digitally integrated applications” should be considered. Home care–based VF measurements could be performed telemedically with time-synchronized support by technical staff. This strategy could provide a path to ophthalmic telemedicine that can be integrated into everyday practice.29 The main idea is at-home testing every week or every 2 weeks. The data of each VF are automatically compared with previously performed fields. Thus, the patient and the doctor receive a point-by-point reliability measure of the sensitivity, which is presented graphically; inappropriate at-home testing thereby becomes apparent to both the patient and the doctor. To realize our concept, it is necessary to apply the software to different hardware devices. Both our app and the VR goggles we used are compatible with different smartphones. The VR One Plus by Zeiss requires display sizes between 4.7 and 5.5 inches, and the resolution of the smartphone used should be higher than 1280×720 pixels.30 It can be expected that VR will continue to develop and improve rapidly in the years to come. In addition, a variety of medical applications are available on the market, many of them in the form of freeware of questionable quality.
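An automatic point-by-point comparison of a new home test against previous fields could be sketched as follows. This is our illustration only, not the app’s algorithm: the function name, array layout, and the 2 dB flagging criterion are hypothetical.

```python
import numpy as np

def flag_progression(history, new_field, drop_db=2.0):
    """Hypothetical point-by-point comparison of a new home VF test
    against earlier ones.

    `history` is an (n_tests, 59) array of dB sensitivities from
    previous tests, `new_field` a length-59 array from the newest test.
    A point is flagged when it falls more than `drop_db` below its own
    point-wise baseline mean.
    """
    baseline = np.asarray(history, float).mean(axis=0)  # per-point mean
    diff = np.asarray(new_field, float) - baseline
    return diff, diff < -drop_db   # per-point change and progression flags
```

Such per-point flags could feed the graphic reliability display described above, making implausible drops (or inappropriate testing) visible to both patient and doctor.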
The cost of the head-mounted display is <50 €. With the Sb-C, we developed an inexpensive and easy-to-use tool for assessing the VF in remote locations and in patients who are not able to undergo SAP.
The Sb-C has some advantages compared with the Octopus G1. Although it has technical limitations that need to be addressed, it is a convenient tool for screening purposes. Because it is a small head-mounted device, tests can be taken anywhere and in any position, enabling even immobile patients to be tested. Many patients might benefit from the device, as it allows them to move their heads and bodies without affecting the VF assessment. The workload in hospitals and medical practices could be reduced, and waiting times shortened, if examinations could be done at home. These are crucial points for ensuring health care during COVID-19. Furthermore, smartphones and VR headsets are inexpensive to purchase and can thus be used in economically underdeveloped and less favored areas. Combined with a telemedical setting, the Sb-C can open new opportunities in home care and in areas with underdeveloped medical infrastructure. Once the app is certified as a Class IIa medical device in Europe, every doctor can prescribe it as a digital application. The patient will be able to monitor his or her VF, and by sharing the data, the physician will be able to supervise home monitoring. To what extent smartphone-based perimetry can provide results comparable to SAP, and what its limitations are, needs to be studied in further work. It appears that the Sb-C method may be suitable for perimetry in both healthy individuals and glaucoma patients.
1. Anderson AJ, Bedggood PA, Kong GYX, et al. Can home monitoring allow earlier detection of rapid visual field progression in glaucoma? Ophthalmology. 2017;124:1735–1742.
2. Kasha JR Jr, inventor; Kasha JR Jr, assignee. Visual field perimetry using virtual reality glasses. United States patent US5737060A. 1998.
3. Skalicky SE, Kong GY. Novel means of clinical visual function testing among glaucoma patients, including virtual reality. J Curr Glaucoma Pract. 2019;13:83–87.
4. Tsapakis S, Papaconstantinou D, Diagourtas A, et al. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter. Clin Ophthalmol. 2017;11:1431–1443.
5. Narang P, Agarwal A, Maheshwari S, et al. Advanced vision analyzer–virtual reality perimeter: device validation, functional correlation and comparison with humphrey field analyzer. Ophthalmol Sci. 2021;1:100035.
6. Stapelfeldt J, Kucur SS, Huber N, et al. Virtual reality–based and conventional visual field examination comparison in healthy and glaucoma patients. Transl Vis Sci Technol. 2021;10:10.
7. Racette L, Fischer M, Bebie H, et al. Visual Field Digest, 6 ed. Köniz: Haag-Streit AG; 2016.
8. Apple Inc. iPhone 6 - Technical Specifications. 2021. Accessed December 7, 2021. https://support.apple.com/kb/SP705?viewlocale=en_US&locale=de_DE
9. De Moraes CGCG, Liebmann JM, Levin LA. Detection and measurement of clinically meaningful visual field progression in clinical trials for glaucoma. Prog Retin Eye Res. 2017;56:107–147.
10. Junoy Montolio FG, Wesselink C, Gordijn M, et al. Factors that influence standard automated perimetry test results in glaucoma: test reliability, technician experience, time of day, and season. Invest Ophthalmol Vis Sci. 2012;53:7010–7.
11. Rao HL, Yadav RK, Begum VU, et al. Role of visual field reliability indices in ruling out glaucoma. JAMA Ophthalmol. 2015;133:40–4.
12. Anderson AJ. Comparison of three parametric models for glaucomatous visual field progression rate distributions. Transl Vis Sci Technol. 2015;4:2.
13. Papp A, Kis K, Németh J. Conversion formulas between automated-perimetry indexes as measured by two different types of instrument. Ophthalmologica. 2001;215:87–90.
14. Zeyen T, Roche M, Brigatti L, et al. Formulas for conversion between Octopus and Humphrey threshold values and indices. Graefes Arch Clin Exp Ophthalmol. 1995;233:627–634.
15. Wakayama A, Matsumoto C, Ayato Y, et al. Comparison of monocular sensitivities measured with and without occlusion using the head-mounted perimeter imo. PLoS One. 2019;14:e0210691.
16. Bolanowski SJ, Doty RW. Perceptual “blankout” of monocular homogeneous fields (Ganzfelder) is prevented with binocular viewing. Vision Res. 1987;27:967–982.
17. Pradhan ZS, Sircar T, Agrawal H, et al. Comparison of the performance of a novel, smartphone-based, head-mounted perimeter (GearVision) with the Humphrey field analyser. J Glaucoma. 2021;30:e146–e152.
18. Razeghinejad R, Gonzalez-Garcia A, Myers JS, et al. Preliminary report on a novel virtual reality perimeter compared with standard automated perimetry. J Glaucoma. 2021;30:17–23.
19. Birt CM, Shin DH, Samudrala V, et al. Analysis of reliability indices from Humphrey visual field tests in an urban glaucoma population. Ophthalmology. 1997;104:1126–1130.
20. Yohannan J, Wang J, Brown J, et al. Evidence-based criteria for assessment of visual field reliability. Ophthalmology. 2017;124:1612–1620.
21. Zulauf M. Normal visual fields measured with Octopus Program G1. I. Differential light sensitivity at individual test locations. Graefes Arch Clin Exp Ophthalmol. 1994;232:509–515.
22. Iwase A, Kitazawa Y, Ohno Y. On age-related norms of the visual field. Jpn J Ophthalmol. 1988;32:429–37.
23. Martini M, Gazzaniga V, Bragazzi NL, et al. The Spanish influenza pandemic: a lesson from history 100 years after 1918. J Prev Med Hyg. 2019;60:E64–E67.
24. Liebmann JM. Ophthalmology and glaucoma practice in the COVID-19 era. J Glaucoma. 2020;29:407–408.
25. Lim LW, Yip LW, Tay HW, et al. Sustainable practice of ophthalmology during COVID-19: challenges and solutions. Graefes Arch Clin Exp Ophthalmol. 2020;258:1427–1436.
26. Denniss J, Turpin A, McKendrick AM. Relating optical coherence tomography to visual fields in glaucoma: structure-function mapping, limitations and future applications. Clin Exp Optom. 2019;102:291–299.
27. Quaranta L, Riva I, Gerardi C, et al. Quality of life in glaucoma: a review of the literature. Adv Ther. 2016;33:959–981.
28. Park I, Gale J, Skalicky SE. Health economic analysis in glaucoma. J Glaucoma. 2020;29:304–311.
29. Gan K, Liu Y, Stagg B, et al. Telemedicine for glaucoma: guidelines and recommendations. Telemed J E Health. 2020;26:551–555.
30. Zeiss. VR One Plus Headset Manual. Carl Zeiss AG; 2017.