Original Article

Patients' Perceptions of Artificial Intelligence in Diabetic Eye Screening

Yap, Aaron MBChB; Wilkinson, Benjamin MBChB; Chen, Eileen BOptom; Han, Lydia BOptom; Vaghefi, Ehsan PhD, MSc; Galloway, Chris PhD, MMgt; Squirrell, David FRANZCO

Asia-Pacific Journal of Ophthalmology: May-June 2022 - Volume 11 - Issue 3 - p 287-293
doi: 10.1097/APO.0000000000000525


Artificial intelligence (AI), and more specifically deep learning algorithms, can now achieve a level of performance that is, in many cases, comparable to that of medical practitioners.1,2 Diabetic retinopathy screening, with its large repository of graded images, has been at the forefront of this technology. The landmark approval of IDx-DR for automated detection of referable diabetic retinopathy by the US Food and Drug Administration in 2018 represented the first such acceptance of an AI system in any field of medicine. Since then, the Food and Drug Administration has also approved EyeArt, with other systems due to file for approval shortly. As such, AI algorithms designed to automatically detect diabetic retinopathy are poised to enter mainstream clinical application.

As diabetic retinal screening programs across the world,3,4 including New Zealand (NZ),5 look to incorporate this technology into their retinal screening services, a wide lens on the receptivity of AI among patients will be necessary to ensure that health care providers design services that are acceptable to, and trusted by, all members of the communities these services are designed to serve. The primary aim of this study was to assess the knowledge and attitudes of patients enrolled in a mix of urban and provincial community diabetic eye screening programs in NZ regarding the use of AI in reading their retinal images.



Methods

Patients attending their routine diabetic retinal screening appointments across 4 sites in metro Auckland (Greenlane Clinical Centre, Manukau Superclinic, Waitakere Hospital, and community screening centers) and 2 regional clinics in Nelson (Nelson Hospital and Wairau Hospital), from April 2020 to August 2021, were invited to participate in this study. Non-English-speaking patients were excluded unless they were accompanied by a relative who could help translate the questionnaire. All eligible participants were given an explanation of the study objectives and the rationale behind AI-based retinal screening before written informed consent for their participation was obtained.

Ethical approval was obtained from the local ethics committees of both Auckland and Nelson: the Auckland District Health Board Research Review Committee (A+8218) and the Nelson Marlborough District Clinical Governance Research Committee (20/STH/178). The trial is registered with the Australia NZ Clinical Trials Registry (ACTRN12620000488909). All study procedures were conducted in accordance with the standards outlined in the tenets of the Declaration of Helsinki, 2013 revision.

Development of Validated Artificial Intelligence Questionnaire

This survey was originally developed in a 2-stage process by a group of survey methodologists, led by Ongena and Haan, in the Netherlands to assess patient views of AI use in radiology.6,7 In the first stage, a semi-structured interview of 20 patients was conducted and 6 key domains related to AI use in radiology were formulated.6 After this, 7 questions utilizing the 5-point Likert agree-disagree scale were assigned to each domain and fine-tuned by means of pretest cognitive interviews conducted on a small subgroup of participants.7 Exploratory factor analysis generated 5 factors representing the following latent variables: (1) distrust and accountability of AI, (2) procedural knowledge of AI, (3) personal interaction with AI, (4) efficiency of AI, and (5) being informed of AI.

Thirteen questions from their questionnaire were selected and modified to suit the aim, setting, and design of our study. Basic demographic data points, such as age, sex, ethnicity, device ownership, and prior AI knowledge, were collected in the first section of our study questionnaire. Responses were converted from the 5-point Likert scale to a 3-point scale: Agree, Disagree, and Neither Agree nor Disagree.
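As a minimal sketch of the response recoding described above, the collapse from 5 to 3 points can be expressed as a simple mapping. The exact 5-point labels are assumed for illustration, since the paper does not list the wording used in the source questionnaire:

```python
# Collapse 5-point Likert responses onto the 3-point scale used in the analysis.
# The 5-point labels below are assumed; the paper does not list the exact wording.
FIVE_TO_THREE = {
    "Strongly agree": "Agree",
    "Agree": "Agree",
    "Neither agree nor disagree": "Neither Agree nor Disagree",
    "Disagree": "Disagree",
    "Strongly disagree": "Disagree",
}

def collapse(response: str) -> str:
    """Map a 5-point Likert response onto the 3-point scale."""
    return FIVE_TO_THREE[response]
```

For example, both "Strongly agree" and "Agree" collapse to "Agree", so the extremes of the original scale are folded into their nearest neutral-adjacent category.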

Study Protocol

Eligible, consented patients underwent a 10-minute interview conducted by coauthors E.C., L.H., B.W., and A.Y. to explore their perspectives on AI in diabetic eye screening. Interviewers attended randomly selected clinics, and consecutive patients attending the screening clinic were invited to participate. Each interview was guided by the study questionnaire, and patients were given the opportunity to give subjective responses in addition to scoring the questions provided. Responses were recorded on paper and entered into a secure electronic spreadsheet for analysis.

Statistical Analysis

All figures, including radar plots, were generated using Microsoft Excel (version 16.29.1). The radar plots stratified responses according to age, ethnicity, sex, and number of devices owned; each axis represents a survey question and the percentage of participants who agreed with that question. Pearson correlation analysis was performed using IBM SPSS Statistics (Release
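The correlation analysis above was run in SPSS; as an illustrative sketch only (not the authors' workflow), the Pearson product-moment coefficient it reports can be computed directly from two equal-length samples:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms of the denominator.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ages and binary AI-awareness scores, not the study's data:
# a negative r here would mirror the paper's finding that awareness falls with age.
ages = [22, 30, 45, 58, 70, 81]
aware = [1, 1, 1, 0, 1, 0]
r = pearson_r(ages, aware)
```

The coefficient ranges from -1 (perfect inverse association) through 0 (no linear association) to +1 (perfect direct association); the study's reported r values of 0.21 and 0.25 indicate weak positive correlations.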



Results

A total of 438 patients agreed to participate in the interview; approximately 10% of patients approached declined, most commonly because of time constraints. The demographics are listed in Table 1. Eighteen percent of patients were from Nelson, and the remainder resided in Auckland. There was an approximately equal distribution of males to females. The mean age was 59 years (range 12-90). The majority of participants identified as NZ European (50%), followed by Asian (31%), Pacific Islander (10%), and Maori (5%). Seventy-six percent of participants used 3 or more electronic devices per day.

Table 1 - Patient Demographics
Characteristic No. Percentage
Site
 Auckland 357 82%
 Nelson 81 18%
Sex
 Male 227 52%
 Female 211 48%
Age group (years)
 Under 25 30 7%
 25-44 45 10%
 45-64 171 39%
 Over 65 190 44%
Ethnicity
 NZ European 220 50%
 Asian 133 31%
 Pacific Islander 42 10%
 Maori 23 5%
 Other 18 4%
No. of electronic devices used per day*
 1 17 4%
 2 88 20%
 3 289 66%
 4 42 10%
*Includes laptops, mobile phones, tablets, smartwatches, and smart devices.

Survey Results

Seventy-three percent of participants were aware of AI, but only 59% knew that this technology could be used in making a diagnosis (Fig. 1). Younger age and a greater number of electronic devices owned were positively correlated with AI awareness (Pearson r = 0.21 and 0.25, respectively; P < 0.001, 2-tailed). Ninety-three percent of participants under the age of 25 said they were aware of what AI is, compared to 64% of participants over the age of 65 (Fig. 2).

Figure 1:
Graph showing the responses of participants toward all the survey questions. Listed above are key themes identified from patients’ subjective responses during the interview.

Figure 2:
Radar plots showing the percentage of participants who agreed with key statements in the survey, stratified by (A) age, (B) ethnicity, (C) number of devices owned, and (D) sex.

Approximately half of participants said they would trust an AI-assisted eye exam as much as one performed by a trained health professional (Fig. 1). Twenty-seven percent of those under the age of 25 agreed with this statement, compared to 55% of those over the age of 65. In terms of shortening the time to screening results, 66% of participants would feel more at ease with a shorter timeframe, even if the result was generated by a computer. Sixty-eight percent of those over the age of 65 agreed with this statement, compared to 47% of those under the age of 25 (Fig. 2).

Thirty-six percent of participants would still prefer a human-led screening program, even if it meant a longer waiting time for results (Fig. 1). This rate was higher among Pacific Islanders (64%) than among Maori (39%), NZ Europeans (28%), and Asians (38%). Maori (91%) and Pacific Islanders (83%) were also more likely to agree that they could have a longer consultation with the doctor if AI made the diagnosis, compared to NZ Europeans (59%) and Asians (71%) (Fig. 2).

Subjective Responses

There were a few key themes highlighted by participants throughout the interview process (Fig. 1). Most said they would trust an AI-assisted program as long as there was some form of reassurance that the AI system was accurate and that human providers were still responsible for delivering sensitive test results. A few participants mentioned the statement, "Garbage in, garbage out," in reference to the expectation that AI systems need to be developed and validated to high operating standards. Recent events, such as the cyberattack on a NZ hospital board's IT systems, were frequently raised when confidentiality with AI was explored.8 Some suggested that backup systems be prepared in the event of AI system failure, such as server malfunction, power outages, or cybersecurity breaches.


Discussion

This is the largest qualitative study to date to evaluate the knowledge and perspectives of patients on the use of AI in diabetic retinal screening. Although others have studied patient and public perceptions of clinical AI tools,9 only Keel et al10 have looked specifically at diabetic retinal screening. Keel et al provided insights into patients' acceptance of automated AI screening but did not explore other aspects, such as AI concepts, implementation, strengths, and weaknesses. Each aspect needs to be addressed before AI can be successfully incorporated into clinical practice.

We found that our participants were generally aware of AI, but fewer were familiar with its clinical applications. This is consistent with other published data11-14; an online survey from the UK found that only 63% of people reported knowing about AI and just 49% stated they were aware of the current impact of AI on their lives.15 Despite this overall low awareness of AI, study participants were receptive to it, with 78% reporting that they would be comfortable with AI being involved in their care. Similarly, Keel et al found that 96% of participants were satisfied with the automated AI screening model, and it was noteworthy that 78% preferred this model over manual human screening once both systems had been demonstrated to them. Greater acceptance of AI has been demonstrated in situations where the clinical risk is low,14,16,17 or where the accuracy of AI can be proven.6,11 It is fostered further if the clinician has recommended the technology or if it fits societal and cultural norms.18,19

Improved accuracy, less bias, and more free time to spend with the doctor were the principal gains identified by participants in our study. These perceived gains were remarkably similar across ethnicity, sex, age group, and number of devices owned. Interestingly, very few voiced concerns that AI would make doctors lazy or that errors generated by AI were more harmful than those generated by humans. Increased diagnostic speed, greater objectivity, and accuracy were also amongst the main perceived benefits of AI implementation identified in other studies.11-14 Clinicians held a similar viewpoint but also identified improved access to disease screening and reduced time spent on monotonous tasks as the major perceived benefits stemming from the use of clinical AI.20

One consistent message that emerges from all patient surveys is the overwhelming preference for health care professionals to still oversee the overall process.7,11-13,19 A large majority (88%) of respondents in our survey stated that they would trust an AI system more if it were supervised by a doctor. Although we did not explore the reasons for this finding in our survey, one possibility is that clinician involvement would be viewed as the doctor giving tacit approval of the AI system.19 These data indicate strongly that, when used in a clinical setting, AI should be positioned as a "clinician support tool," relieving clinicians of the more mundane tasks and freeing up time to spend on more complex or urgent cases.11-13

Irrespective of their responses to other questions, Pacific Islander and Maori participants clearly placed a higher value on personal interaction than other ethnic groups, with a greater proportion stating that they would like to maintain human involvement in screening. These results are consistent with wider Maori health values, which are closely tied to Tikanga and its high regard for "whakawhanaungatanga," the process of developing a rapport with the human health provider through the sharing of Maori beliefs, values, and experiences.21 The face-to-face component of the program will remain important for Maori and Pacific Islanders; program designers will therefore need to engage with these communities to ensure that care pathways utilizing AI are both culturally relevant and acceptable.

Even though AI is thought to improve accuracy, less accurate diagnosis was also identified by patients and the general public as a primary weakness of clinical AI.7,13,14 This perception relates to the fact that AI is unable to conceptualize ideas,6,7,13 generalize to all individual situations,12,22 or perform a physical examination.13 As indicated in our subjective responses, it is vital to reassure patients and clinicians that every commissioned AI application must pass through a rigorous and transparent review process. To address these concerns further, we believe that patient advocates will play an important role in developing models of care and educational strategies surrounding clinical AI applications to help patients understand how the technology is being used.6,19,23 Clinicians have also expressed philosophical concerns with the use of AI, such as the divestment of health care to large technology companies and the need to establish who bears liability in cases of machine error.20 Both are important subjects that invite further debate if trust in clinical AI tools is to be established.

Currently, AI tools are validated by their respective stakeholders, and this clearly poses a conflict of interest.5,10,24-26 This has been unavoidable, as AI developers are required to demonstrate the efficacy and safety of AI applications to the regulatory authorities to obtain the necessary commercial licensing.27 It is well accepted that AI applications have a number of inherent biases, and to ensure that these are understood and visible, regulators are now demanding greater transparency from algorithm developers.28,29 It is also recognized that regulators need to start looking at algorithms as a whole rather than simply assessing traditional performance metrics.30 As such, it is perhaps unrealistic to expect those trained solely in the medical field to have the expertise to assess the overall performance of AI in their service. Recognizing this knowledge gap, governing bodies such as the UK National Screening Committee have conducted independent evaluations of AI systems, providing additional reassurance to stakeholders that a given AI system is suitable for the environment in which it is intended to be deployed.4

There are several limitations to this study. Although every question was obtained from a validated questionnaire, the final survey was not formally revalidated. Nonetheless, this study was the first of its kind to gather a comprehensive patient perspective toward AI in diabetic screening. The participants were recruited from 4 different diabetic screening sites across metro and provincial NZ with the aim of obtaining a sample representative of the total population living with diabetes in NZ. However, the study was limited to English-speaking patients, so there is a selection bias with non-English speakers effectively excluded. As in most quantitative AI studies, those under the age of 25 were overrepresented in this cohort, comprising 7% of our total sample compared to a national prevalence of diabetes in this age group of 2%.9 The older age groups were, in turn, slightly underrepresented by 1% to 3%.31 Both Maori and Pacific Islander ethnicities were also underrepresented, by 11% and 4%, respectively, which could be attributable to the lower percentage of Maori residing in urban centers.32 This matters because of the digital divide: older people and ethnic minorities are less likely to use certain technologies when managing their health.33 Even though our sample group is reasonably representative of people living with diabetes in NZ, the results of this survey may not be generalizable beyond our study population. Finally, the survey questions focused on specific areas and may not have captured a comprehensive perspective from all respondents.

Based on the findings of our survey and a review of the relevant literature, we have identified 4 key findings that, if addressed, would enhance trust in, and receptivity toward, the incorporation of AI into diabetic retinal screening programs:

  • 1. Both clinicians and patients need to be actively involved in the process of integrating AI into the screening pathways.
  • 2. Human interaction will remain an integral component of the service and will need to remain an option for those who prefer the service to be delivered manually.
  • 3. More educational resources for patients and clinicians surrounding AI technology are required. Further qualitative research into patient beliefs and emotionally driven responses is required to identify a communication strategy that facilitates greater receptivity toward AI deployment. Narrative storytelling using selected individual cases, focused on what AI support can mean for diagnosis and treatment, and workshops showcasing the AI could be valuable tools.19,34
  • 4. AI processes should be made transparent to all stakeholders and clients.


Acknowledgments

The authors thank Stephanie Emma, Ros Moffatt, and the diabetic screening team at Counties Manukau District Health Board (DHB), Auckland DHB, Waitemata DHB, and Nelson Marlborough DHB for their assistance with this project.


References

1. Ramessur R, Raja L, Kilduff CLS, et al. Impact and challenges of integrating artificial intelligence and telemedicine into clinical ophthalmology. Asia Pac J Ophthalmol (Phila) 2021;10:317-327.
2. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–118.
3. Wong DCS, Kiew G, Jeon S, et al. Singapore Eye Lesions Analyzer (SELENA): the deep learning system for retinal diseases. In: Artificial Intelligence in Ophthalmology. Springer; 2021:177-185.
4. Zhelev Z, Peters J, Rogers M, et al. Automated Grading to Replace Level 1 Graders in the Diabetic Eye Screening Programme. UK National Screening Committee; 2021.
5. Vaghefi E, Yang S, Xie L, et al. THEIA™ development, and testing of artificial intelligence-based primary triage of diabetic retinopathy screening images in New Zealand. Diabet Med 2021;38:e14386.
6. Haan M, Ongena YP, Hommes S, et al. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol 2019;16:1416–1419.
7. Ongena YP, Haan M, Yakar D, et al. Patients’ views on the implementation of artificial intelligence in radiology: development and validation of a standardized questionnaire. Eur Radiol 2020;30:1033–1040.
8. Moloney E. Cyber attack similar to HSE breach cripples New Zealand district's health system. Irish Independent.
9. Young AT, Amara D, Bhattacharya A, et al. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health 2021;3:e599–e611.
10. Keel S, Lee PY, Scheetz J, et al. Feasibility and patient acceptability of a novel artificial intelligence-based screening model for diabetic retinopathy at endocrinology outpatient services: a pilot study. Sci Rep 2018;8:4330.
11. Jutzi TB, Krieghoff-Henning EI, Holland-Letz T, et al. Artificial intelligence in skin cancer diagnostics: the patients’ perspective. Front Med (Lausanne) 2020;7:233.
12. Yang K, Zeng Z, Peng H, et al. Attitudes of Chinese cancer patients toward the clinical use of artificial intelligence. Patient Prefer Adher 2019;13:1867–1875.
13. Nelson CA, Pérez-Chada LM, Creadore A, et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: a qualitative study. JAMA Dermatol 2020;156:501–512.
14. Nadarzynski T, Miles O, Cowie A, et al. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit Health 2019;5:205520761987180.
15. Kantar Public. Artificial Intelligence: Public Awareness Survey. Department for Business, Energy & Industrial Strategy; 2019.
16. Juravle G, Boudouraki A, Terziyska M, et al. Trust in artificial intelligence for medical diagnoses. Prog Brain Res 2020;253:263–282.
17. Sung J, Portales-Casamar E, Görges M. Perceptions of expert and lay users on trust in the use of artificial intelligence for medical decision-making and risk prediction. 2020.
18. Ye T, Xue J, He M, et al. Psychosocial factors affecting artificial intelligence adoption in health care in China: cross-sectional study. J Med Internet Res 2019;21:e14316.
19. Adams SJ, Tang R, Babyn P. Patient perspectives and priorities regarding artificial intelligence in radiology: opportunities for patient-centered radiology. J Am Coll Radiol 2020;17:1034–1036.
20. Scheetz J, Rothschild P, McGuinness M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep 2021;11:5193.
21. Lacey C, Huria T, Beckert L, et al. The Hui process: a framework to enhance the doctor-patient relationship with Maori. N Z Med J 2011;124:72–78.
22. Gao S, He L, Chen Y, et al. Public perception of artificial intelligence in medical care: content analysis of social media. J Med Internet Res 2020;22:e16649.
23. Palmisciano P, Jamjoom AA, Taylor D, et al. Attitudes of patients and their relatives toward artificial intelligence in neurosurgery. World Neurosurg 2020;138:e627–e633.
24. Bhaskaranand M, Ramachandra C, Bhat S, et al. The value of automated diabetic retinopathy screening with the EyeArt system: a study of more than 100,000 consecutive encounters from people with diabetes. Diabetes Technol Ther 2019;21:635–643.
25. Abràmoff MD, Lavin PT, Birch M, et al. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018;1:39.
26. van der Heijden AA, Abramoff MD, Verbraak F, et al. Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmol 2018;96:63–68.
27. Arcus S. Determining Our Future: Artificial Intelligence. Institute of Directors & Chapman Tripp; 2016.
28. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med 2018;378:981–983.
29. US Food and Drug Administration. Artificial Intelligence/Machine Learning Based Software as a Medical Device (SaMD) Action Plan. 2021.
30. Gerke S, Babic B, Evgeniou T, et al. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digit Med 2020;3:53.
31. Ministry of Health. Virtual Diabetes Register (VDR). Published 2020. Available at: Accessed January 2, 2021.
32. Stats NZ. 2018 Census Ethnic Group Summaries. New Zealand Government; 2018.
33. Mitchell UA, Chebli PG, Ruggiero L, et al. The digital divide in health-related technology use: the significance of race/ethnicity. Gerontologist 2019;59:6–14.
34. Frank LB, Murphy ST, Chatterjee JS, et al. Telling stories, saving lives: creating narrative health messages. Health Commun 2015;30:154–163.

Keywords: artificial intelligence; diabetic retinopathy; diagnostic screening programs; patient acceptance of health care

Copyright © 2022 Asia-Pacific Academy of Ophthalmology. Published by Wolters Kluwer Health, Inc. on behalf of the Asia-Pacific Academy of Ophthalmology.