Original Research

Testing the Acceptability and Usability of an AI-Enabled COVID-19 Diagnostic Tool Among Diverse Adult Populations in the United States

Schilling, Josh MS; Moeller, F. Gerard MD; Peterson, Rachele MS, MBA; Beltz, Brandon PhD; Joshi, Deepti PhD; Gartner, Danielle BA; Vang, Jee PhD; Jain, Praduman BS, MSCS

Quality Management in Health Care 32(Supplement 1):p S35-S44, January/March 2023. | DOI: 10.1097/QMH.0000000000000396

Abstract

At-home coronavirus disease-2019 (COVID-19) testing offers several benefits in a relatively cost-effective and low-risk manner. Specifically, at-home COVID-19 testing can be more accessible for people with limited ability to travel or limited access to clinical locations that offer testing by trained professionals, for instance, people in rural locations or people without reliable health coverage. Additionally, testing at home can offer greater convenience and flexibility to anyone wishing to get tested for COVID-19. Finally, at-home testing can alleviate some of the burden on health care providers by avoiding direct contact between health care workers and potentially exposed people, thereby minimizing the need for the personal protective equipment required by medical workers.1 Despite the numerous advantages of at-home COVID-19 tests, growing evidence suggests that their accuracy differs from that of clinic-administered tests.2 One way to improve the accuracy of COVID-19 screening is to combine existing at-home COVID-19 test kits with an easily accessible self-diagnostic symptom screening survey.

Symptom screening via questionnaires and online surveys would allow researchers to generate a predictive algorithm that, in combination with tests, informs individuals and providers. In fact, in a previously conducted, Institutional Review Board (IRB)-approved study, Vibrent and partners developed logistic regression models to predict the probability of COVID-19 using enhanced symptom screening; the models achieved an area under the receiver operating characteristic curve (AUROC) of 92%, indicating that they are very accurate and comparable to in-home laboratory tests3 (see Images 1 and 2 in Supplemental Digital Content Appendix 1A, available at: https://links.lww.com/QMH/A97). This web-based symptom screening survey is built on technology that uses algorithms to predict the likelihood of COVID-19 infection based on self-reported symptoms.4 Furthermore, the symptom screening survey, with its integrated public health data, can provide patients and providers, employees and employers, and students and universities with timely information to support testing requests. Although still early in its diagnostic utility, we consider symptom self-reporting via electronic means an important step toward enhancing the accuracy of COVID-19 screening, surveillance, and reporting, given its many advantages.
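The AUROC reported above can be read as the probability that the model assigns a higher risk score to a randomly chosen COVID-positive participant than to a randomly chosen negative one. The study's analyses were conducted in R; the following is a minimal, illustrative Python sketch of that rank-based calculation on hypothetical labels and model probabilities (not the study's data or model).

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the fraction of positive/negative pairs in which the positive outscores
    the negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities for 6 participants (1 = COVID-positive)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auroc(labels, scores))  # 8/9: positives outrank negatives in 8 of 9 pairs
```

An AUROC of 0.5 corresponds to chance-level discrimination, and 1.0 to perfect ranking; the 92% figure above sits near the upper end of that range.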

One advantage of this web-based diagnostic symptom screening survey is that self-reported symptoms give health care providers another data point to enhance their screening and reporting process. Another advantage is that clinically validated recommendations can be shared with patients immediately after they submit their symptoms via electronic tools. To ascertain its utility before widespread use, the current study tested the acceptability and usability of this web-based diagnostic symptom screening survey, used with at-home COVID-19 test kits, among a diverse adult population in the United States.

Although recent research on the usability of COVID-19 at-home test kits is encouraging,5,6 we seek to better understand whether this unique artificial intelligence (AI)-enabled COVID-19 testing tool, which combines at-home COVID-19 test kits with a web-based diagnostic symptom screening survey, is acceptable and usable. One reason for this study was to assess whether the AI-enabled COVID-19 testing tool has low acceptance (ie, whether individuals are resistant to using it). If individuals are resistant, they are more likely to turn to alternatives such as not getting screened or opting for in-person testing. This leads us to our first research question (RQ):

  • RQ1: How did the acceptability of the at-home COVID-19 test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among participants?

The second reason this study was conducted was to assess whether this AI-enabled COVID-19 testing tool is challenging for individuals to complete successfully without errors. If so, incorrect usage will likely result in inaccurate results.6 This leads us to the second research question:

  • RQ2: How usable was the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

One of the main strengths of this work is the use of a diverse population and the inclusion of underrepresented minorities, who have lower access to health care7–9 and show higher rates of COVID-related death and hospitalization.10,11 AI-enabled digital diagnosis tools can provide immediate and accurate diagnoses to patients, particularly to underserved populations, who benefit the most from the low cost and self-management. However, despite increased internet use, the digital divide continues to constitute a key barrier to the adoption of digital health informatics by underserved populations.12 Racial and ethnic disparities can persist in remote screening and data collection tools, such as telemedicine13 and web-based surveys.14 Moreover, the adoption of web-based surveys can vary across different groups of aging adults based on their demographic, financial, and health-related variables.15

Although at-home COVID-19 testing can reduce the logistical burden and stigma associated with in-clinic testing, people with low health literacy may misinterpret the test instructions.16 There is increased interest in understanding the specific barriers faced by underserved communities, including rural ethnic minorities, in using home-based COVID-19 tests.17 Funded by the National Institutes of Health, the RADx Underserved Populations (RADx-UP) Consortium was created to study COVID-19 testing patterns in communities across the United States. Recent RADx-UP studies show that individuals with low socioeconomic status report lower motivation to use COVID-19 self-tests.18

In the United States, it is well documented that racial and ethnic groups experience differences in access to health care.7–9 The COVID-19 pandemic, although novel in its onset, was not unique in this respect: an excess burden fell on Black Americans.10 Among non-Hispanic Blacks in the United States, the rate of hospitalization for COVID-19 is 2.5 times, and the rate of death 1.7 times, that of non-Hispanic Whites, although the rate of infection is equivalent, according to Centers for Disease Control and Prevention (CDC) data through February 1, 2022.11

In comparison, the rate of death for Hispanic or Latino individuals is 1.1 times that of Whites, or nearly equivalent.11 Further, differences among Black Americans are observed in public health beliefs, awareness, and practices, both broadly19–22 and in relation to testing23,24 and vaccination25,26 for COVID-19.

To address the above-discussed racial and ethnic disparities in web-based surveys and at-home test kits, we introduced the following questions to assess differences in the acceptability and usability of these tools across racial groups:

  • RQ3: How did the acceptability of the at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among races?
  • RQ4: How did the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit compare among races?

We extended RQ3 and RQ4 to other demographic variables, including age and gender, to address the acceptability and usability of the web-based screening and at-home screening tools in these groups. All 3 variables (age, gender, and race/ethnicity) are included in the CDC Human Infection Case Report Form.27 Age is a particularly important demographic variable, as the adoption of web-based surveys can vary among older adults.15 RQ5 and RQ6 describe age-specific acceptability and usability questions.

  • RQ5: Were there age-related differences in the acceptability of the at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening?
  • RQ6: Were there age-related differences in the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

RQ7 and RQ8 address gender-specific acceptability and usability questions, as previous research suggests sex differences in web survey participation, although the results are mixed.28,29

  • RQ7: Were there gender-related differences in the acceptability of the at-home test kit, web-based symptom screening survey, or in-clinic screening?
  • RQ8: Were there gender-related differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit?

Because health literacy affects the interpretation of at-home test kits,16 education was included as an additional demographic variable in our research questions (RQ9 and RQ10). Individuals with low education may have more difficulty following the instructions of the at-home test kits and understanding the web-based health questions about their symptoms.

  • RQ9: Were there differences in the acceptability of the at-home test kit, web-based symptom screening survey, and in-clinic screening across individuals with different educational backgrounds?
  • RQ10: Were there differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit across individuals with different educational backgrounds?

The AI-enabled COVID-19 testing tool is an innovative approach with great potential to improve the quality of remote COVID-19 testing, especially in underserved communities that can benefit from the low cost and easy-to-access features of this diagnostic tool. It is therefore imperative to show that our diagnostic solution does not exacerbate existing health care disparities. Hence, we examined its acceptability and usability across several demographic variables, such as race, age, gender, and education. For example, Blacks and African Americans have been particularly affected by the COVID-19 pandemic.10,11 Race, gender, and socioeconomic status are associated with disparities in COVID-19 screening, testing, and prevention.20,21,23,24 Yet another example is that people 65 years and older have died from COVID-19 at a much higher rate than expected.30,31

METHODS

Study recruitment

Prospective participants were recruited through advertisements on Virginia Commonwealth University's (VCU) website, email lists, and flyers on the VCU campus. Participants were eligible if they were 18 years or older and located within the Richmond metropolitan area for 10 days. If interested and eligible, participants joined the study through the research study website, where they provided their electronic consent and were instructed through the study tasks. Research staff were available by phone and in person to answer any questions participants had. All data were collected between June and October 2021.

Study design and procedures

The study used a case/control design in which each study group ceased enrollment when the number of participants who had completed all study-related procedures reached the enrollment target (as shown in Supplemental Digital Content Table 1, available at: https://links.lww.com/QMH/A93). Based on the group selected, participants were asked to complete up to 2 at-home COVID-19 tests, an in-clinic polymerase chain reaction (PCR) test in the Richmond, Virginia, area, and a series of surveys throughout the study (see Supplemental Digital Content Table 1 for the study design). The web-based diagnostic symptom screening survey asked questions about demographics, basic health information, and current COVID-19 and flu symptoms (if any). At the end of the study, participants were given a survey asking about the acceptability and usability of the at-home COVID-19 test kits and the study website (including the web-based diagnostic symptom screening survey). Participants were given 2 at-home rapid antigen COVID-19 tests and a clinic-administered PCR COVID-19 test at no cost. The test kits used in this study were Food and Drug Administration (FDA) approved and available off the shelf in grocery stores and pharmacies. Participants completed these activities on their own schedule, using their own computers or mobile devices, and scheduled their in-clinic test in Richmond, Virginia. The at-home COVID-19 test used for the study was the QuickVue At-Home OTC COVID-19 Test, which utilizes a self-administered nasal swab and has been approved for use under emergency authorization.5 Detailed instructions and demonstration videos can be found at https://quickvueathome.com/.

Compensation to all participants was prorated according to the study procedures completed. Participants were able to choose from a variety of electronic gift cards, and payment was provided at the end of study completion or immediately upon voluntary withdrawal from the study. The maximum compensation value was $175. The study was approved by Western IRB (study 1309332). The data collection effort was carried out by VCU; their IRB (HM20022035) deferred to Western IRB. The analysis of de-identified data was approved by the George Mason University IRB (1743684-1). The Vibrent Research Platform, which hosted the website and survey forms, is a Federal Information Security Management Act (FISMA)-certified and Federal Risk and Authorization Management Program (FedRAMP)-ready system.

Measures

Participants reported demographic information including gender (female or male), race (White, Black or African American, Asian, American Indian or Alaska Native, and others), ethnicity (Hispanic/Latino or non-Hispanic/Latino), education (grades 9-11, grades 12 or GED, 1-3 years after high school, college 4 years or more, advanced degree), and age (18-20, 21-44, 45-64, and 65 and older) via surveys on the research website. Participants also reported any COVID-19 and flu symptoms (eg, “In the last 14 days, did you experience any of the following gastrointestinal symptoms?”) via the web-based diagnostic symptom screening survey. The web-based diagnostic symptom screening survey was an important input for the COVID-19 prediction software algorithms that we tested as part of a larger effort that is beyond the scope of this article5 (see Supplemental Digital Content Appendix 1B, available at: https://links.lww.com/QMH/A98 for the complete symptom screening survey).

The acceptability of the AI-enabled COVID-19 testing tool components (the at-home test kit and symptom survey) was evaluated with 6 survey items.32 These survey items related to screening preference, likelihood of future use, and perceived accuracy of COVID-19 tests (see Supplemental Digital Content Appendix 2, available at: https://links.lww.com/QMH/A99). With regard to screening preferences, although our primary objective was to identify whether participants preferred to screen using the at-home COVID-19 test or on an app or website (ie, the web-based diagnostic symptom screening survey), we also asked participants about their preference for in-clinic screening to serve as a benchmark for traditional screening methods. The usability of the AI-enabled COVID-19 testing tool components (the at-home test kit and the web-based diagnostic symptom screening survey) was determined by 2 survey items: the overall ease of use of the COVID-19 test kit and the overall ease of use of the website that hosted the web-based diagnostic symptom screening survey.

COVID screening preference

The first acceptability item asked, “If you need to screen yourself for COVID-19 or a similar disease in the future, which of the following options would you prefer?”: (a) at home with a physical test kit; (b) reporting my symptoms on an electronic app or website; and (c) at a clinic administered by health care professionals. Participants responded to this question on a 5-point scale ranging from “prefer a great deal” (1) to “do not prefer” (5).

Likelihood of future use

The second acceptability item asked, “If given the opportunity, would you use the same COVID-19 test kit again?” Participants responded to this item with “yes” (1), “maybe” (2), “no” (3), and “I'm not sure” (4).

Perceived accuracy

The third acceptability item asked, “Do you feel like your COVID-19 test results were accurate?” Participants responded with “yes” (1), “no” (2), and “I'm not sure” (3).

The usability of the AI-enabled COVID-19 testing tool components (the at-home test kit and the web-based diagnostic symptom screening survey) was determined by 2 survey items detailed below. There was an overall rating of ease of use as well as specific ratings for the major steps of using the test kit and the website, including the web-based diagnostic symptom screening survey.

Test kit ease of use

One usability item asked, “How easy or difficult was it to use the COVID-19 test kit?” Participants responded to this item using a 5-point scale, “extremely easy” (1) to “extremely difficult” (5).

Website ease of use

The second usability item asked, “How easy or difficult was it to use the website?” Participants responded using a 5-point scale, “extremely easy” (1) to “extremely difficult” (5).

Data analysis plan

Participants who knowingly provided incomplete or inaccurate information and those who had signed up for the study multiple times were excluded from analyses (n = 18). The raw sample size for analyses was n = 822. Research questions were evaluated in R v4.0.2 with direct package dependencies on tidyverse6 v1.3.0 and survey33 v4.0.

Weighted data analysis

Data were weighted to reflect the distribution of demographic variables in the United States. The raw data of 822 samples were resampled to 5000 using the rake function34 to match national percentages along the dimensions of gender, age, race, and education. The (normalized) marginal percentages35 were as follows: gender, 50.8% female and 49.2% male; age, 4.78% for 18-20 years, 41.2% for 21-44 years, 32.9% for 45-64 years, and 21.1% for 65 years and older; race, 85.1% White and 14.9% Black; and education, 36.2% for grades 9 through 12 (including diploma equivalent to grade 12), 31.8% for junior college (or equivalent), 20.1% for college (4-year degree), and 11.9% for advanced degrees beyond college.
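Raking (iterative proportional fitting) repeatedly rescales each participant's weight until the weighted sample matches the population margins on every variable in turn. The study used the R survey package's rake function; the following is a small, illustrative pure-Python sketch on a hypothetical 4-person sample, using the gender and race margins listed above.

```python
def rake(rows, margins, iters=100):
    """Iterative proportional fitting: rescale each row's weight until the
    weighted margins match the population targets on every variable."""
    weights = [1.0] * len(rows)
    for _ in range(iters):
        for var, targets in margins.items():
            # Current weighted total within each category of this variable
            totals = {}
            for row, w in zip(rows, weights):
                totals[row[var]] = totals.get(row[var], 0.0) + w
            # Rescale so each category hits its population target
            for i, row in enumerate(rows):
                weights[i] *= targets[row[var]] / totals[row[var]]
    return weights

# Hypothetical participants; targets are the study's gender and race margins
rows = [{"gender": "F", "race": "White"}, {"gender": "F", "race": "Black"},
        {"gender": "M", "race": "White"}, {"gender": "M", "race": "Black"}]
margins = {"gender": {"F": 0.508, "M": 0.492},
           "race": {"White": 0.851, "Black": 0.149}}
weights = rake(rows, margins)
print(round(weights[0] + weights[1], 3))  # weighted female share: 0.508
```

Because the targets here are expressed as proportions, the fitted weights sum to 1; multiplying by 5000 would give the resampled counts used in the study's weighted analyses.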

Ten research questions (RQ1-RQ10) were used in this study. RQ1 was tested by running 3 separate independent-samples t tests, corresponding to each of the acceptability variables (ie, current screening preference, likelihood of future use, and perceived accuracy). Thus, we conducted independent t tests (α = .05, 95% confidence level) to compare (a) in-clinic versus at-home; (b) on app versus at-home; and (c) on app versus in-clinic. RQ2 was addressed by descriptive statistics, reported as mean and standard deviation (SD).
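An independent-samples t test of this kind compares the mean preference ratings of two screening modes. The study's tests were run in R; below is a small, illustrative Python sketch of Welch's t statistic and its degrees of freedom on made-up 5-point ratings (not the study's data).

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent-samples t statistic and degrees of freedom
    (does not assume equal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical 5-point preference ratings (1 = prefer a great deal)
at_home = [1, 1, 2, 2, 3]
in_clinic = [2, 3, 3, 4, 4]
t, df = welch_t(at_home, in_clinic)
print(round(t, 2), round(df, 1))  # -2.65 8.0: lower mean = stronger preference
```

The resulting t would then be compared against the critical value of the t distribution at α = .05 (in practice via a library such as R's t.test or scipy.stats.ttest_ind).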

RQ3 through RQ6, RQ9, and RQ10 were tested by running separate univariate analyses corresponding to each of the acceptability (ie, current screening preference, likelihood of future use, and perceived accuracy) and usability (ie, test kit ease of use and symptom screening ease of use) variables for each of the independent variables (race, age, and education). The independent variable was entered as the fixed factor and the acceptability or usability variable was entered as the dependent variable. Research questions were answered by focusing on whether the overall F-test statistic reached statistical significance (P < .05) at the 95% confidence level. If the overall test statistic reached statistical significance, Tukey's post hoc analysis was conducted to identify which pairwise comparisons differed significantly.
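The overall F test in a univariate (one-way) analysis partitions rating variance into between-group and within-group components. As an illustrative sketch (the study itself used R), the following pure-Python function computes the one-way ANOVA F statistic for hypothetical ease-of-use ratings from three groups.

```python
from statistics import mean

def oneway_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group
    mean squares."""
    n = sum(len(g) for g in groups)  # total observations
    k = len(groups)                  # number of groups
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 5-point ease-of-use ratings for three demographic groups
print(round(oneway_f([[1, 2, 3], [2, 3, 4], [4, 5, 6]]), 2))  # 7.0
```

A significant F would then be followed by Tukey's honestly significant difference procedure (eg, R's TukeyHSD) to identify which specific group pairs differ.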

RQ7 and RQ8 were tested by running 5 separate independent-samples t tests, corresponding to each of the acceptability (ie, current screening preference, likelihood of future use, and perceived accuracy) and usability (ie, at-home COVID-19 test kit ease of use and web-based symptom screening survey ease of use) variables for one predictor variable (gender). Gender was entered as the grouping variable (male = 1, female = 0), and the acceptability and usability variables were entered as the test variables. The research questions were answered by focusing on whether the overall t-test statistic reached statistical significance (P < .05) at the 95% confidence level.

RESULTS

Demographic statistics

General demographics are shown in Supplemental Digital Content Table 2 (available at: https://links.lww.com/QMH/A94). The majority of participants were female (65.5%). The sample consisted mostly of White (45.6%) and Black or African American (42.5%) participants, followed by Asian (3.4%), American Indian or Alaska Native (1.6%), and other (0.2%) participants. The average age of the sample was 38.9 years (SD = 13.97; range = 18-92). Furthermore, a little over half of the sample had at least a college degree and earned an income of at least $50 000.

  • RQ1: How did the acceptability of the at-home COVID-19 test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among participants?

Across all participants, there was a stronger preference, t(8237) = 33.24, P < .05, for screening at home with a COVID-19 test kit (mean = 1.93, SD = 1.14) compared with in-clinic testing (mean = 2.85, SD = 1.46). Likewise, there was a stronger preference, t(8341) = 26.76, P < .05, for screening at home with a COVID-19 test kit (mean = 1.93, SD = 1.14) compared with reporting symptoms via the web-based diagnostic symptom screening survey on a website or an app (mean = 2.65, SD = 1.39). No other statistically significant differences in screening preferences were noted (P > .05).

Furthermore, in terms of likelihood of future use of the at-home COVID-19 test kit, participants were, on average, willing to take the COVID-19 test kit again (mean = 1.07, SD = 0.42), and were confident in the accuracy of the at-home COVID-19 test kit (mean = 1.09, SD = 0.35).

  • RQ2: How usable was the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

On average, the participants in the current sample found it extremely easy to use the at-home COVID-19 test kit (mean = 1.11, SD = 0.33) as well as the web-based symptom screening survey (mean = 1.27, SD = 0.56).

  • RQ3: How did the acceptability of the at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among races?

At-home test kit screening preferences did not differ statistically significantly across racial groups (P > .05). However, with regard to the web-based diagnostic symptom screening survey, Black participants (mean = 2.52, SD = 1.46) had a statistically significantly stronger preference for using it than White participants (mean = 2.67, SD = 1.38, P < .05). With regard to in-clinic screening, Black participants (mean = 2.26, SD = 1.35) had a statistically significantly stronger preference compared with White participants (mean = 2.96, SD = 1.45, P < .001).

  • RQ4: How did the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit compare among races?

Usability of the web-based diagnostic symptom screening survey statistically significantly differed across races (F(1,4379) = 12.23, P < .001) such that White participants (mean = 1.26, SD = 0.52) found the web-based diagnostic symptom screening survey to have higher usability compared with Black participants (mean = 1.34, SD = 0.70, P < .001). Similarly, usability of the at-home COVID-19 test kit statistically significantly differed across races (F(1,4376) = 78.95, P < .001) such that White participants (mean = 1.09, SD = 0.30) found the at-home COVID-19 test kit to have higher usability compared with Black participants (mean = 1.22, SD = 0.46, P < .001).

  • RQ5: Were there age-related differences in the acceptability of the at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening?

At-home test kit screening preferences statistically significantly differed across age groups (F(3,4377) = 32.55, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those between the ages of 45 and 64 years (mean = 1.75, SD = 1.04) and 21 and 44 years (mean = 1.89, SD = 1.15) had a statistically significantly stronger preference for at-home testing compared with participants 18 through 20 years of age (mean = 2.32, SD = 0.95, P < .001). Likewise, those between 45 and 64 years (mean = 1.74, SD = 1.04) and those 65 years and older (mean = 2.13, SD = 1.22) had a statistically significantly stronger preference for at-home testing compared with those between 18 and 20 years of age (mean = 2.32, SD = 0.95, P < .001). Finally, those between 45 and 64 years of age (mean = 1.74, SD = 1.04) had a statistically significantly stronger preference for at-home testing compared with those 65 years and older (mean = 2.13, SD = 1.22, P < .001). No other statistically significant differences were noted.

Similarly, web-based symptom screening preferences statistically significantly differed across age groups. Specifically, Tukey's post hoc analyses demonstrated that those between the ages of 21 and 44 years (mean = 2.77, SD = 1.46) and 45 and 64 years (mean = 2.40, SD = 1.43) had a stronger preference for web-based diagnostic symptom screening compared with those between 18 and 20 years of age (mean = 3.06, SD = 1.09, P < .05). Furthermore, those between 45 and 64 years (mean = 2.40, SD = 1.43) had a statistically significantly stronger preference for web-based diagnostic symptom screening compared with those between 21 and 44 years of age (mean = 2.77, SD = 1.46) and those 65 years and older (mean = 2.71, SD = 1.21, P < .001). No other statistically significant differences were noted.

Finally, in-clinic screening preferences also statistically significantly differed across age groups. Specifically, Tukey's post hoc analyses demonstrated that those between the ages of 18 and 20 years (mean = 2.70, SD = 1.38) had statistically significantly stronger preferences for in-clinic screening compared with those between 21 and 44 years of age (mean = 3.11, SD = 1.38, P < .001). Furthermore, those between 45 and 64 years (mean = 2.79, SD = 1.48) had a statistically significantly stronger preference for in-clinic screening compared with those between 21 and 44 years of age (mean = 3.22, SD = 1.38, P < .001). Finally, those 65 years and older (mean = 2.52, SD = 1.52) had a statistically significantly stronger preference for in-clinic screening compared with those between 21 and 44 years (mean = 3.22, SD = 1.38) and those between 45 and 64 years of age (mean = 2.79, SD = 1.48, P < .001). No other statistically significant differences were noted.

  • RQ6: Were there age-related differences in the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

Usability of the web-based diagnostic symptom screening survey differed statistically significantly across age groups (F(3, 4377) = 65.72, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those between 18 and 20 years (mean = 1.04, SD = 0.39) reported statistically significantly higher usability of the web-based diagnostic symptom screening survey compared with those between 21 and 44 years (mean = 1.21, SD = 0.48), those between 45 and 64 years (mean = 1.24, SD = 0.63), and those 65 years or older (mean = 1.47, SD = 0.54, P < .001). Furthermore, those between 21 and 44 years (mean = 1.21, SD = 0.48) and those between 45 and 64 years (mean = 1.24, SD = 0.63) reported statistically significantly higher usability of the web-based diagnostic symptom screening survey compared with those 65 years or older (mean = 1.47, SD = 0.54, P < .001). No other statistically significant differences were noted.

Usability of the at-home COVID-19 test kit differed statistically significantly across age groups (F(3,4374) = 12.36, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those 65 years and older (mean = 1.17, SD = 0.37) reported statistically significantly lower usability compared with those between 21 and 44 years of age (mean = 1.11, SD = 0.32) and those between 45 and 64 years of age (mean = 1.08, SD = 0.29, P < .001). No other statistically significant differences were noted.

  • RQ7: Were there gender-related differences in the acceptability of the at-home test kit, web-based symptom screening survey, or in-clinic screening?

At-home test kit screening preferences differed statistically significantly across genders such that males (mean = 1.83, SD = 1.06) had a stronger preference for at-home kits compared with females (mean = 2.02, SD = 1.19, P < .001). Similarly, in-clinic screening preferences differed statistically significantly across genders such that females (mean = 2.68, SD = 1.47) had a stronger preference for in-clinic screening compared with males (mean = 3.03, SD = 1.43, P < .001). No other statistically significant differences were noted.

  • RQ8: Were there gender-related differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit?

Usability of the web-based symptom screening survey differed statistically significantly across genders (F(1,4379) = 7.79, P < .001) such that female participants (mean = 1.23, SD = 0.49) found the web-based diagnostic symptom screening survey to have higher usability compared with male participants (mean = 1.31, SD = 0.61).

Additionally, usability of the at-home COVID-19 test kit differed statistically significantly across genders (F(1,4376) = 6.83, P < .05) such that male participants (mean = 1.10, SD = 0.30) found the at-home COVID-19 test kit to have higher usability compared with female participants (mean = 1.12, SD = 0.35).

  • RQ9: Were there differences in the acceptability of the at-home test kit, web-based symptom screening survey, and in-clinic screening across individuals with different educational backgrounds?

At-home COVID-19 test kit preference differed statistically significantly across individuals with different levels of education (F(3,4377) = 36.40, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those with a college degree (mean = 1.72, SD = 1.03) had a stronger preference for at-home COVID-19 tests compared with those with an educational attainment of grades 9 through 12 (mean = 2.05, SD = 1.12), those with an educational attainment of junior college (mean = 1.83, SD = 1.09), and those with an advanced degree (mean = 2.18, SD = 1.37, P < .001). Furthermore, those with an educational attainment of junior college (mean = 1.83, SD = 1.09) had a stronger preference for at-home COVID-19 tests compared with those with an advanced degree (mean = 2.18, SD = 1.12, P < .001). No other statistically significant differences were noted.

Web-based diagnostic symptom screening survey preference differed statistically significantly across individuals with different levels of education (F(3,4329) = 27.51, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those with an educational attainment of grades 9 through 12 (mean = 2.58, SD = 1.26) had a stronger preference for the web-based diagnostic symptom screening survey compared with those with an advanced degree (mean = 3.17, SD = 1.51, P < .001). Furthermore, those with an educational attainment of junior college (mean = 2.56, SD = 1.39) demonstrated a stronger preference for the web-based diagnostic symptom screening survey compared with those with a college degree (mean = 2.62, SD = 1.37) or an advanced degree (mean = 3.17, SD = 1.51, P < .001). No other statistically significant differences were noted.

In-clinic screening preferences differed significantly across individuals with different levels of education (F(3,4364) = 111.6, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those in grades 9 through 12 (mean = 2.34, SD = 1.45) had a stronger preference for in-clinic testing compared with those with a college degree (mean = 3.18, SD = 1.40), those in junior college (mean = 3.09, SD = 1.34), or those with advanced degrees (mean = 3.21, SD = 1.45, P < .001). No other statistically significant differences were noted.

  • RQ10: Were there differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit across individuals with different educational backgrounds?

Usability of the web-based diagnostic symptom screening survey differed across individuals with different educational backgrounds (F(3,4377) = 13.43, P < .001). Specifically, Tukey's post hoc analyses demonstrated that those in grades 9 through 12 (mean = 1.33, SD = 0.55) reported lower usability for the web-based diagnostic symptom screening survey compared with those with advanced degrees (mean = 1.16, SD = 0.40), those in college (mean = 1.26, SD = 0.56), and those in junior college (mean = 1.25, SD = 0.59, P < .05). Furthermore, those with advanced degrees (mean = 1.15, SD = 0.40) reported higher usability for the web-based diagnostic symptom screening survey compared with those in junior college (mean = 1.25, SD = 0.59, P < .05). No other statistically significant differences were noted.

Usability of the at-home COVID-19 test kit also differed across individuals with different educational backgrounds (F(3,4374) = 25.4, P < .05). Specifically, Tukey's post hoc analyses demonstrated that those in college (mean = 1.11, SD = 0.32) reported higher usability for at-home COVID-19 test kits compared with those in grades 9 through 12 (mean = 1.15, SD = 0.38). Similarly, those in junior college (mean = 1.05, SD = 0.22) reported higher usability for at-home COVID-19 tests compared with those in grades 9 through 12 (mean = 1.15, SD = 0.38, P < .001). Finally, those in junior college (mean = 1.05, SD = 0.22) reported higher usability of at-home COVID-19 tests compared with those in college (mean = 1.11, SD = 0.32) and those with advanced degrees (mean = 1.14, SD = 0.37, P < .001).
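The group comparisons above follow a standard one-way analysis of variance (ANOVA) with Tukey's honestly significant difference (HSD) post hoc procedure. As an illustration only, the sketch below reproduces that style of analysis on synthetic data whose group means and SDs are loosely modeled on the at-home test kit preference scores reported here; the group labels, per-group sample sizes, and the use of Python's scipy and statsmodels are assumptions for demonstration, not the study's actual analysis pipeline.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical 1-5 preference scores (lower = stronger preference) for four
# education groups; means/SDs loosely mirror the reported values, n is assumed.
groups = {
    "grades_9_12": rng.normal(2.05, 1.12, 300),
    "junior_college": rng.normal(1.83, 1.09, 300),
    "college": rng.normal(1.72, 1.03, 300),
    "advanced": rng.normal(2.18, 1.12, 300),
}

# Omnibus one-way ANOVA across the four groups
f_stat, p_val = stats.f_oneway(*groups.values())

# Tukey HSD post hoc: all pairwise comparisons with family-wise error control
scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
tukey = pairwise_tukeyhsd(scores, labels, alpha=0.05)

print(f"F = {f_stat:.2f}, p = {p_val:.4g}")
print(tukey.summary())
```

With four groups, Tukey's HSD evaluates all 6 pairwise contrasts while holding the family-wise error rate at the chosen alpha, which is why the text can report several significant pairwise differences after a single significant omnibus F test.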

DISCUSSION

The primary objective of the current study was to examine the acceptability and usability of an AI-enabled COVID-19 testing tool that combined a web-based diagnostic symptom screening survey and an at-home COVID-19 test kit. A secondary objective was to examine whether there were any significant differences in the acceptability and usability of the AI-enabled COVID-19 testing tool across racial groups (in particular, Black and White participants), and across educational backgrounds, age groups, or genders.

One key observation from this research was that participants found both the at-home COVID-19 test kit and the web-based screening survey easy to use, indicating that the AI-enabled COVID-19 testing tool as a whole demonstrated good usability. A second observation was that, in terms of acceptability, participants regardless of race, age, gender, or educational background preferred both components of the AI-enabled COVID-19 testing tool (ie, the at-home test kit and the web-based diagnostic symptom screening survey) over in-clinic testing. Acceptability and usability scores by demographics are described in Supplemental Digital Content Table 3 (available at: https://links.lww.com/QMH/A95).

These findings should be interpreted in the context of 3 considerations. One consideration is the versatility of the AI-enabled COVID-19 testing tool (ie, the web-based diagnostic symptom screening survey and a physical at-home test kit): any FDA-approved COVID-19 test kit can be combined with the web-based symptom screening survey, thereby expanding the scope of application of the tool itself. The second consideration is that although at-home screening using the COVID-19 test kit was the most preferred screening method, participants nonetheless demonstrated a moderate preference for screening via the web-based diagnostic symptom screening survey, as evidenced by a score around the midpoint of the scale. In fact, screening via the web-based diagnostic symptom screening survey was preferred far more than the traditional and familiar method of in-clinic screening, suggesting that individuals may be willing and open to trying the AI-enabled tool for COVID-19 screening. The final consideration is that, given the novelty of web-based diagnostic symptom screening, moderate acceptability may reflect less on the tool and more on participants' perceptions of or attitudes toward it (eg, lack of trust and confidence) relative to familiar and tangible alternatives such as the physical, self-administered, at-home COVID-19 test kit. Beyond the initial adoption period, with more clinical validation, education, and awareness of the potential merits of a web-based diagnostic symptom screening survey, individuals' perceptions and, importantly, intentions to use the AI-enabled COVID-19 testing tool are likely to become more favorable over time.
Participants also indicated that they were willing to re-take the at-home COVID-19 test in the future and were confident in the results they received, which suggests that participants are open to using the tool as a whole and are more willing to choose it over traditional screening methods such as in-person screening.

With regard to the usability of the AI-enabled COVID-19 testing tool, results clearly demonstrated that participants found both the at-home COVID-19 test kit and the web-based symptom screening survey to have good usability. These results are consistent with existing literature suggesting high ease of use for at-home test kits.12,15 The AI-enabled COVID-19 testing tool may therefore be a highly usable option in a COVID-19 testing landscape in which several alternatives are less usable or accessible, or are fraught with other limitations. This is especially notable in light of the fact that the likely accuracy of the AI-enabled COVID-19 testing tool could reduce the need for, and costs associated with, multiple COVID-19 tests.

Furthermore, results of the current study suggested several differences in the acceptability and usability of the COVID-19 testing tool across racial groups, age groups, and genders, as well as educational backgrounds. For instance, White participants preferred using the at-home COVID-19 tests more so than Black participants. These findings are consistent with prior research showing that non-Hispanic White participants demonstrated greater acceptability toward at-home COVID-19 test kits compared with non-Hispanic Black participants.6 Furthermore, a less surprising finding in the current study is the weaker preference for the AI-enabled COVID-19 testing tool (ie, the at-home COVID-19 test kit and the web-based symptom survey) among older participants relative to younger participants. Indeed, considerable research has documented the various barriers that older individuals face when it comes to technology adoption.2,36 Yet another finding in the current study was a gender difference in the reported usability of the at-home COVID-19 test kit. One possible explanation lies in differences in the ease of physically handling the testing materials, such as the small test tube and test strip. However, more research needs to be conducted to identify whether the gender differences found in the current study are broadly generalizable in ways that may be used to improve testing. Finally, the current study also found that individuals with a college education demonstrated greater acceptability toward the at-home COVID-19 test kit as compared with those with less advanced educational backgrounds.
These results are consistent with other studies that show greater acceptability of at-home COVID-19 tests among those with at least a college degree than among those with some college or those with a high school degree or less.9 For instance, a US clinical trial (NCT04502056) reported that physician intervention to explain COVID-19 knowledge resulted in smaller knowledge gaps compared with no intervention, with no significant effects on self-reported safety behavior by race for Black or White individuals without a college degree.37 In other words, providing instructions or clarifying information to participants can balance some of the differences observed among those with varying educational backgrounds. Another clinical trial (NCT04371419) demonstrated that knowledge gaps were reduced for Black participants who viewed physician-delivered video messages about COVID-19, with race-matched providers also resulting in increased information seeking among these participants.37 Hence, overall, these findings suggest that, with a few exceptions, the AI-enabled COVID-19 testing tool is a viable option in terms of its acceptability among users and their ability to carry out the necessary steps.

User confidence and technological literacy affect attitudes toward web-based screening tools relative to at-home COVID-19 test kits, and additional work needs to be done to close this gap in acceptability. Consideration should be given to the ways in which instructional materials are developed and provided to users to carry out the at-home COVID-19 testing steps, and these materials should accommodate a variety of testing scenarios.38 For example, providing live coaching or recorded demonstrations for completing the at-home COVID-19 test for people with access to an internet-enabled device may result in higher reported rates of understanding, acceptability, and trust.

REFERENCES

1. WT Hsu MY, Shen CF, Hung KF, Cheng CM. Home sample self-collection for COVID-19 patients. Adv Biosyst. 2020;4(11):e2000150. doi:10.1002/adbi.202000150.
2. Guglielmi G. Rapid coronavirus tests: a guide for the perplexed. Nature. 2021;590(7845):202–205. doi:10.1038/d41586-021-00332-4.
3. McNeil BJ, Hanley JA. Statistical approaches to the analysis of receiver operating characteristic (ROC) curves. Med Decis Making. 1984;4(2):137–150. doi:10.1177/0272989X8400400203
4. Alemi F, Vang J, Bagais W, et al. Combined symptom screening and at-home tests for COVID-19. Special Issue: Diagnosis of COVID-19 in the Community. Qual Manag Health Care. 2023;32:(1 suppl):S11–S20. doi:10.1097/QMH.0000000000000404
5. Shuren JE. Coronavirus (COVID-19) Update: Authorization for Quidel QuickVue at-Home COVID-19 Test. Silver Spring, MD: US Food and Drug Administration; 2021.
6. Wickham H, Averick M, Bryan J, et al. Welcome to the tidyverse. J Open Source Software. 2019;4(43):1686. doi:10.21105/joss.01686.
7. Aleshire ME, Adegboyega A, Escontrías OA, Edward J, Hatcher J. Access to care as a barrier to mammography for Black women. Policy Polit Nurs Pract. 2021;22(1):28–40. doi:10.1177/1527154420965537.
8. Feagin J, Bennefield Z. Systemic racism and U.S. health care. Soc Sci Med. 2014;103:7–14. doi:10.1016/j.socscimed.2013.09.006.
9. Cook BL, Trinh NH, Li Z, Hou SS, Progovac AM. Trends in racial-ethnic disparities in access to mental health care, 2004-2012. Psychiatr Serv. 2017;68(1):9–16. doi:10.1176/appi.ps.201500453.
10. Rentsch CT, Kidwai-Khan F, Tate JP, et al. Patterns of COVID-19 testing and mortality by race and ethnicity among United States veterans: a nationwide cohort study. PLoS Med. 2020;17(9):e1003379. doi:10.1371/journal.pmed.1003379.
11. Centers for Disease Control and Prevention. Risk for COVID-19 Infection, Hospitalization, and Death By Race/Ethnicity. Washington, DC: United States Department of Health and Human Services. https://www.cdc.gov/coronavirus/2019-ncov/covid-data/investigations-discovery/hospitalization-death-by-race-ethnicity.html. Accessed March 7, 2022.
12. Huh J, Koola J, Contreras A, et al. Consumer health informatics adoption among underserved populations: thinking beyond the digital divide. Yearb Med Inform. 2018;27(1):146–155. doi:10.1055/s-0038-1641217.
13. Adepoju OE, Chae M, Ojinnaka CO, Shetty S, Angelocci T. Utilization gaps during the COVID-19 pandemic: racial and ethnic disparities in telemedicine uptake in federally qualified health center clinics. J Gen Intern Med. 2022;37(5):1191–1197. doi:10.1007/s11606-021-07304-4.
14. Jang M, Vorderstrasse A. Socioeconomic status and racial or ethnic differences in participation: web-based survey. JMIR Res Protoc. 2019;8(4):e11865. doi:10.2196/11865.
15. Couper MP, Kapteyn A, Schonlau M, Winter J. Noncoverage and nonresponse in an internet survey. Soc Sci Res. 2007;36(1):131–148.
16. Woloshin S, Dewitt B, Krishnamurti T, Fischhoff B. Assessing how consumers interpret and act on results from at-home COVID-19 self-test kits: a randomized clinical trial. JAMA Intern Med. 2022;182(3):332–341. doi:10.1001/jamainternmed.2021.8075.
17. Thompson MJ, Drain PK, Gregor CE, et al. A pragmatic randomized trial of home-based testing for COVID-19 in rural Native American and Latino communities: protocol for the “Protecting our Communities” study. Contemp Clin Trials. 2022;119:106820. doi:10.1016/j.cct.2022.106820.
18. Bien-Gund C, Dugosh K, Acri T, et al. Factors associated with US public motivation to use and distribute COVID-19 self-tests. JAMA Netw Open. 2021;4(1):e2034001. doi:10.1001/jamanetworkopen.2020.34001.
19. Chandler R, Guillaume D, Parker AG, et al. The impact of COVID-19 among Black women: evaluating perspectives and sources of information. Ethn Health. 2021;26(1):80–93. doi:10.1080/13557858.2020.1841120.
20. Jimenez ME, Rivera-Núñez Z, Crabtree BF, et al. Black and Latinx community perspectives on COVID-19 mitigation behaviors, testing, and vaccines. JAMA Netw Open. 2021;4(7):e2117074. doi:10.1001/jamanetworkopen.2021.17074.
21. Jones J, Sullivan PS, Sanchez TH, et al. Similarities and differences in COVID-19 awareness, concern, and symptoms by race and ethnicity in the United States: cross-sectional survey. J Med Internet Res. 2020;22(7):e20001. doi:10.2196/20001.
22. Torres C, Ogbu-Nwobodo L, Alsan M, et al. COVID-19 Working Group. Effect of physician-delivered COVID-19 public health messages and messages acknowledging racial inequity on black and white adults' knowledge, beliefs, and practices related to COVID-19: a randomized clinical trial. JAMA Netw Open. 2021;4(7):e2117115. doi:10.1001/jamanetworkopen.2021.17115.
23. Mody A, Pfeifauf K, Bradley C, et al. Understanding drivers of coronavirus disease 2019 (COVID-19) racial disparities: a population-level analysis of COVID-19 testing among black and white populations. Clin Infect Dis. 2021;73(9):e2921–e2931. doi:10.1093/cid/ciaa1848.
24. Schaffer DeRoo S, Torres RG, Ben-Maimon S, Jiggetts J, Fu LY. Attitudes about COVID-19 testing among black adults in the United States. Ethn Dis. 2021;31(4):519–526. doi:10.18865/ed.31.4.519.
25. Nguyen LH, Joshi AD, Drew DA, et al. Racial and ethnic differences in COVID-19 vaccine hesitancy and uptake [published online ahead of print February 28, 2021]. medRxiv. doi:10.1101/2021.02.25.21252402.
26. Willis DE, Andersen JA, Bryant-Moore K, et al. COVID-19 vaccine hesitancy: race/ethnicity, trust, and fear. Clin Transl Sci. 2021;14(6):2200–2207. doi:10.1111/cts.13077.
27. Centers for Disease Control and Prevention. Human Infection With 2019 Novel Coronavirus Case Report Form. Atlanta, GA: Centers for Disease Control and Prevention; 2019.
28. Kwak N, Radler B. A comparison between mail and web surveys: Response pattern, respondent profile, and data quality. J Off Stat. 2002;18(2):257.
29. Smith G. Does gender influence online survey participation?: A record-linkage analysis of university faculty online survey response behavior. ERIC Doc Reprod Serv. 2008;1:1–21.
30. Centers for Disease Control and Prevention. COVID-19 Data Tracker. https://COVID-19.cdc.gov/COVID-19-data-tracker/#demographics. Accessed January 6, 2022.
31. Wölfel R, Corman VM, Guggemos W, et al. Virological assessment of hospitalized patients with COVID-2019. Nature. 2020;581(7809):465–469. doi:10.1038/s41586-020-2196-x.
32. Alemi F, Guralnik E, Vang J, et al. Guidelines for triage of COVID-19 patients presenting with non-respiratory symptoms [published online ahead of print 2021]. SSRN J. doi:10.2139/ssrn.3884931.
33. Lumley T. survey: analysis of complex survey samples. http://r-survey.r-forge.r-project.org/survey/. Published 2021.
34. Deming WE, Stephan FF. On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. Ann Math Statist. 1940;11(4):427–444. doi:10.1214/aoms/1177731829.
35. U.S. Census Bureau. Quickfacts: United States. https://www.census.gov/quickfacts/fact/table/US/LFE046219. Accessed March 14, 2022.
36. O'Brien MA, Rogers WA, Fisk AD. Understanding age and technology experience differences in use of prior knowledge for everyday technology interactions. ACM Trans Access Comput. 2012;4(2):1–27. doi:10.1145/2141943.2141947.
37. Cassuto NG, Gravier A, Colin M, et al. Evaluation of a SARS-CoV-2 antigen-detecting rapid diagnostic test as a self-test: diagnostic performance and usability. J Med Virol. 2021;93(12):6686–6692. doi:10.1002/jmv.27249.
38. European Centre for Disease Prevention and Control. Considerations on the Use of Self-Tests for COVID-19 in the EU/EEA. Stockholm, Sweden: European Centre for Disease Prevention and Control; 2021.
Keywords:

acceptability; COVID-19; testing; usability; web-based screening

Supplemental Digital Content

© 2023 The Authors. Published by Wolters Kluwer Health, Inc.