A Comparison of Paper with Electronic Patient-Completed Questionnaires in a Preoperative Clinic

VanDenKerkhof, Elizabeth G., RN, DrPH; Goldstein, David H., MB, BCh, MSc, FRCPC; Blaine, William C., MSc; Rimmer, Michael J.

doi: 10.1213/01.ane.0000168449.32159.7b
Technology, Computing, and Simulation: Research Report

In this unblinded randomized controlled trial we compared electronic self-administered Pre-Admission Adult Anesthetic Questionnaires (PAAQ) using touchscreen technology with pen and paper. Patients were recruited in the Preassessment Clinic if they had completed a PAAQ in the surgeon’s office. Patients were randomized to complete a study PAAQ on paper, a hand-held computer (PDA), a touchscreen desktop computer (kiosk), or a tablet. Patients also completed a preference and satisfaction survey. The main outcome measures were percent agreement between the prestudy and study PAAQ and time to completion. Only 6 of the 366 patients approached refused to participate. The median time to completion of the PAAQ was shortest on the kiosk (2.3 min) and longest on the PDA (3.2 min) (χ2 = 14.5; P = 0.002). The mean agreement between the prestudy and the study PAAQ was approximately 94% across all study arms. The proportion of participants expressing comfort before and after completing the PAAQ increased from 10% to 97% in the computerized arms and from 60% to 64% in the paper arm. Touchscreen computer technology is an accurate, efficient platform for patient-administered PAAQs. Patients expressed comfort using the technology and a preference for computerized over paper questionnaires in the future.

IMPLICATIONS: Self-administered electronic health questionnaires using touchscreen computer technology are an accurate means of collecting patient information in the preoperative setting and can provide a valuable basis for an electronic perioperative patient record. Patients expressed comfort and satisfaction with this method of questionnaire completion.

Department of Anesthesiology, Queen’s University, Kingston, Ontario, Canada

Supplemental data available at www.anesthesia-analgesia.org.

Accepted for publication April 15, 2005.

This study was funded by Dr. VanDenKerkhof’s Queen’s University Research Initiation Grant. Viewsonic loaned a tablet for use during the study. All other equipment and software for the study were provided by the Queen’s University Anesthesiology Informatics Laboratory (QUAIL). QUAIL’s funding and/or in-kind support has been obtained through grants from the Canadian Foundation for Innovation, Ontario Innovation Trust, Queen’s University, Kingston General Hospital, Claire Nelson Bequest Fund, Bickell Foundation, Avaya, Viewsonic, and Compaq.

Address correspondence and reprint requests to Elizabeth VanDenKerkhof, Department of Anesthesiology, Queen’s University, 76 Stuart Street, Kingston, ON K7L 2V7, Canada. Address e-mail to ev5@post.queensu.ca.

Patient-completed electronic questionnaires have been used to effectively evaluate and triage patients (1,2) and to assign objective health status indices (3). Moreover, the implementation of a preoperative evaluation triage system based on the use of an electronic questionnaire has been shown to significantly reduce patient interview cost, patient interview time, and patient dissatisfaction (2).

Despite the obvious clinical utility of collecting electronically entered patient data, there is a paucity of literature on the accuracy of data collected in this manner. No study has simultaneously compared the accuracy of patient-entered data collected by electronic and paper methods against a “gold standard.” Such a comparison is necessary because inferences about the accuracy of electronically gathered information can only be made free of bias when both systems (i.e., paper and computer) are measured against the same method (4). Studies conducted more than a decade ago suggested that the presentation of electronic questionnaires has important implications for the successful implementation of patient-driven informatics (1,3). These findings highlight the importance of developing well designed interface systems, which are associated with benefits such as reduced training time, reduced error rates, and improved productivity (5–7). No studies have evaluated how advances in interface systems and portable computer technology affect error rates and productivity for patient-completed questionnaires. The purpose of this study was to compare the agreement and efficiency of the current method of collecting preoperative data (paper) with three electronic interfaces.

Methods

The study was conducted in February and March 2002 and included 360 ASA physical status I–III patients scheduled for surgery at 2 tertiary care hospitals in Kingston, Ontario, Canada. Patients were asked to participate in the study if they were at least 18 yr of age, could read and write English, were alert and lucid, and had fully completed the Pre-Admission Adult Anesthetic Questionnaire (PAAQ) before arriving at the preadmission assessment clinic (see Appendix 1 in the data supplement to this article at www.anesthesia-analgesia.org). The original self-completed paper PAAQ is produced by the department of anesthesiology; it is uniform in design and not specialty specific. It is completed by the patient in the surgeon’s office as part of standard care 1–2 wk before surgery, with minimal instruction or assistance from office personnel. Patients gave fully informed written consent to participate, and approval to conduct the study was obtained from the Queen’s University Research Ethics Board.

For the purpose of this study the prestudy PAAQ was replicated for use on three electronic platforms: hand-held computer (PDA), tablet, and desktop computer with a touchscreen (kiosk) (Fig. 1). Participants were randomly assigned to one of the three platforms or a paper (control) arm. Participants in the control arm were provided with a blank version of the original PAAQ and asked to check a “yes” or “no” response for each question. The electronic PAAQ was programmed to include only closed-ended (yes/no) responses; therefore participants in the paper arm were specifically asked to check only “yes” or “no” responses and not to provide written comments. This allowed for a more accurate comparison of time to completion between the paper and electronic platforms. In the three computer arms, the subjects were asked to enter a “yes” or “no” response on a touchscreen using their finger (kiosk and tablet) or a stylus (PDA). In both the paper and computer groups, subjects were asked to indicate whether they had additional comments by placing a check mark next to a “Comment” box. Within the computer group, on completion of each questionnaire the data were transferred in real time to the database via wireless radiofrequency (802.11b).
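
The article does not describe the study software itself. As a rough sketch, under the assumption of a simple record-per-questionnaire design (all field names here are hypothetical), the data produced by a completed closed-ended electronic PAAQ and queued for wireless transfer might look like this in Python:

    import json
    import time

    # Hypothetical record for one completed electronic PAAQ; the study's
    # actual software and database schema are not described in the article.
    record = {
        "participant": "anon-042",                 # assumed anonymized identifier
        "platform": "kiosk",                       # "pda" | "tablet" | "kiosk"
        "responses": {"q01": "yes", "q02": "no"},  # closed-ended yes/no items only
        "comment_requested": True,                 # patient checked the "Comment" box
        "submitted_at": time.time(),
    }

    # On completion, the record would be serialized and sent over the
    # 802.11b wireless link to the database server (transport assumed).
    payload = json.dumps(record)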

Figure 1.

Instructions were provided to each participant on how to complete the PAAQ on the given device. In all groups, time was measured from when participants answered the first question until they completed the last question. In the paper group, time was measured by a research assistant using a stopwatch, whereas in the computer arms it was recorded electronically by the device. In all cases, time was measured without the knowledge of the patient. On completion of the PAAQ, patients were asked to complete a preference and satisfaction survey (see Appendix 2 in the data supplement to this article at www.anesthesia-analgesia.org).
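
The instrumentation of the electronic timing is not detailed in the article; a minimal sketch of the idea (the first answer starts the clock, each subsequent answer updates the stop time, and nothing is displayed to the patient) might look like this:

    import time

    class CompletionTimer:
        """Times a questionnaire from the first answered question to the
        last, with no visible indication to the patient. A sketch, not
        the study's actual instrumentation."""

        def __init__(self):
            self.first = None
            self.last = None

        def on_answer(self):
            now = time.monotonic()
            if self.first is None:
                self.first = now  # clock starts at the first answer
            self.last = now       # updated on every answer; the last one wins

        def minutes(self):
            return (self.last - self.first) / 60.0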

The main objectives of the study were 1) to compare the agreement and time to completion of the electronic version of the self-administered PAAQ with the current paper-based version, 2) to assess patient satisfaction regarding the use of an electronic self-assessment questionnaire, and 3) to identify patient issues surrounding the use of an electronic medium to collect self-assessment information.

The primary outcome measures were time to completion and mean percent agreement between electronic and paper PAAQs. To test the hypothesis that there is no difference in completion times between paper and each of the computer arms, a sample size of 150 patients in the paper arm and 50 in each of the electronic groups would provide 90% power to detect a difference in the mean completion times of 0.53 standard deviations and 80% power to detect a difference in the mean completion times of 0.46 standard deviations. With this sample size we would also have 90% power to detect a change in the mean percent agreement rate from 29 of 30 questions to 28 of 30 questions between the 2 groups. Because of the potential for dropouts, the final sample size was set at 60 in each of the computer arms and 180 in the paper arm. All tests were 2-sided using an α of 0.05.
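
These power figures can be checked with a standard two-sample calculation. For example, using Python's statsmodels (not the tool the authors used):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Paper arm n = 150, each electronic arm n = 50, so ratio = 50/150.
    for d in (0.53, 0.46):
        power = analysis.power(effect_size=d, nobs1=150, alpha=0.05,
                               ratio=50 / 150, alternative='two-sided')
        print(f"detectable difference of {d} SD: power = {power:.2f}")
    # Prints approximately 0.90 and 0.80, matching the stated power.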

Descriptive statistics were performed on demographic data to assess for significant differences across the four study arms (paper, PDA, tablet, kiosk). Age was analyzed using one-way analysis of variance and sex was analyzed using the Pearson χ2 test. Mean percent agreement was calculated between the prestudy PAAQ and the study PAAQ. One-way analysis of variance was used to assess for differences in mean percent agreement across the four study arms. The Kruskal-Wallis test was used to assess for statistically significant differences in the median time to completion of the PAAQ across the four study arms. Frequencies and percentages were calculated on the satisfaction data.
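
As an illustration of how these analyses map onto standard library routines, the following sketch uses Python's scipy with tiny synthetic samples standing in for the study data, which are not published in the article:

    from scipy import stats

    # Synthetic stand-in data; the study's raw data are not published.
    age = {
        "paper":  [54, 61, 47, 70, 38, 66],
        "pda":    [58, 49, 72, 35, 63, 51],
        "tablet": [44, 67, 59, 52, 71, 40],
        "kiosk":  [62, 48, 55, 69, 43, 57],
    }
    minutes = {
        "paper":  [2.9, 3.1, 2.7, 3.4, 2.8, 3.0],
        "pda":    [3.2, 3.5, 3.0, 3.8, 3.1, 3.3],
        "tablet": [2.8, 3.0, 2.6, 3.2, 2.9, 2.7],
        "kiosk":  [2.3, 2.5, 2.2, 2.7, 2.4, 2.6],
    }
    sex_table = [[40, 20], [38, 22], [41, 19], [39, 21]]  # rows: arms; cols: F/M

    f_stat, p_age = stats.f_oneway(*age.values())            # one-way ANOVA (age)
    chi2, p_sex, dof, _ = stats.chi2_contingency(sex_table)  # Pearson chi-square (sex)
    h_stat, p_time = stats.kruskal(*minutes.values())        # Kruskal-Wallis (time)

    def percent_agreement(pre, study):
        """Share of a participant's yes/no items answered identically on
        the prestudy and study questionnaires (30 items in the PAAQ)."""
        matches = sum(a == b for a, b in zip(pre, study))
        return 100.0 * matches / len(pre)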

Results

Three hundred sixty-six patients were approached and 360 were recruited to the study; the remaining 6 (1.6%) refused to participate, primarily as a result of illness. Of the recruited subjects, 2 were unable to complete the PAAQ because of technical difficulties with the PDA; therefore 358 (99.4%) patients completed the PAAQ portion of the study. All 360 participants completed the preference and satisfaction questionnaire. There were no significant differences in demographic characteristics across the four study arms (Table 1). Age ranged from 18 to 92 yr. Approximately two thirds of participants were female. The majority of participants were scheduled for ophthalmic, orthopedic, or general surgery (Table 1).

Table 1

The median time to completion was shortest on the kiosk (2.3 min) and longest on the PDA (3.2 min) (χ2 = 14.5, P = 0.002) (Table 2). The mean percent agreement between the prestudy and the study PAAQ was approximately 94% across all study arms (Table 2). With respect to agreement between paper and electronic responses for individual questions, the highest levels of disagreement were observed when patients were asked about history of allergies (23%–29%) (Fig. 2). Participants were more likely to report the presence of allergies on the prestudy questionnaire than on the electronic questionnaire. The purpose of this study was not to assess the validity of responses on the prestudy or the study PAAQ but, rather, to compare the agreement between responses using the current method (paper questionnaire) and various electronic platforms. Thus, the validity of responses to questions regarding, for instance, the presence of allergies was not assessed.
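
The per-question disagreement figures correspond to a simple tally across participants. A sketch with synthetic responses (not study data):

    # Fraction of participants whose prestudy and study answers differ on
    # each item; synthetic responses standing in for the study data.
    pre   = [["yes", "no", "no"], ["no", "no", "yes"], ["yes", "yes", "no"]]
    study = [["yes", "no", "yes"], ["no", "no", "yes"], ["no", "yes", "no"]]

    for q in range(len(pre[0])):
        diffs = sum(p[q] != s[q] for p, s in zip(pre, study))
        print(f"question {q + 1}: {100.0 * diffs / len(pre):.0f}% disagreement")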

Table 2

Figure 2.

Responses to the computer preference and satisfaction survey are found in Table 3. Eighty percent of participants in the paper, tablet, and kiosk arms reported using a computer at least weekly, and 55% of PDA participants reported at least weekly computer use. Over 90% of participants found the study PAAQ easy to complete. Twenty percent of participants in the paper arm reported being worried about loss of their data should it be collected in an electronic format, whereas 5% to 16% of subjects in the computer arms had this concern. Approximately 25% of study participants expressed concern about loss of paper data. The proportion of participants expressing comfort before and after completing the study PAAQ increased from 10% to 97% in the computerized arms and from 60% to 64% in the paper arm. Table 4 reports comments made by patients on the preference and satisfaction questionnaire. Patients generally expressed enthusiasm about the potential for electronic capture of preoperative questionnaires.

Table 3

Table 4

Discussion

The findings of this study support research indicating that electronic collection of patient data in the preoperative assessment clinic is accurate and efficient (1,2,8). The literature highlights the importance of a well designed computer interface for the successful implementation of computerized patient questionnaires (1,3). In this study using touchscreen technology, patients reported being able to complete the electronic questionnaire with comfort and ease. Patients who completed the computerized study PAAQ reported greater preference for this method over paper, possibly because only one question was displayed per screen and because of the novel nature of collecting patient information in this way. Of the three computer applications, more patients preferred the kiosk over paper, and the kiosk was the most efficient with respect to completion time. Several factors may have contributed to this finding, including the resemblance of the kiosk to a desktop computer, the larger screen area, and the permanent rather than portable nature of the kiosk. Participants reported greater ease in reading the PAAQ on the computer than on paper. This may be explained in part by the small type size and crowded layout of the paper PAAQ. Only 55% of respondents in the PDA arm reported using a computer at least weekly, which may partly explain why the PDA arm had the longest PAAQ completion time (3.2 min versus 2.9 min in the paper arm). However, given the reported ease and comfort of completing the PAAQ on the PDA, some reporting bias may have occurred as a result of participants confusing “computer use” with “hand-held use.”

A limitation of the study was its inability to directly compare the three computer interfaces; however, asking patients to complete the same questionnaire multiple times on different interfaces would have been onerous and would likely have produced spurious results. A second limitation is the lack of blinding, which was impossible given the nature of the study. Although every effort was made to provide participants with a neutral explanation of the study, participants may have been biased by the novel nature of the study and the impression that the researchers wanted results favoring the computer. A further limitation was the restriction to closed-ended questions; however, given the lack of information available about the feasibility of having patients complete electronic health questionnaires, it was decided that free-text entry using a stylus or keyboard would be too onerous for many patients. As an alternative to free-text entry, patients were given the option to select the “Comment” box if they wished to provide further verbal comments to a clinician.

Strengths of the study include the randomized design, which allowed for a comparison among the four study arms. A further advantage was the inclusion of a prestudy paper arm (usual care, or gold standard), which provided the ability to directly compare prestudy and study responses to the same questions. Within the paper arm, the comparison between the prestudy PAAQ and the study PAAQ provided the degree of validation one could expect in this population when using the current standard for data collection. Given that percent agreement was not significantly different between the electronic arms and the paper arm, one can assume that variation in question responses between prestudy and study PAAQ completion may have been attributable to real events that occurred in the interim, a change of mind on the part of the patient, or errors in completing or entering questionnaire responses. Another strength of the study is the high participation rate. Only six patients refused to participate in the study, primarily as a result of illness at the time of their visit to the preoperative clinic. Only two patients were unable to complete the study, as a result of technical difficulties with the PDA. Almost one quarter of the participants were scheduled for ophthalmic surgery, indicating that some form of visual impairment may have been present in these patients, yet completion rates were 100%. Given the high recruitment rate of 98.4% (360/366 patients), we believe the results of this study can be generalized to other settings with patient populations representative of the study population.

This study provided a preliminary assessment of the feasibility of electronic patient-completed questionnaires using touchscreen technology to collect preadmission health history information. The computerized PAAQ is part of a larger perioperative tool being developed at this center (9). The electronic preoperative questionnaire data will populate the preanesthetic assessment software, providing information and alerts to clinicians performing subsequent assessments. Future studies are needed to assess the feasibility of patient-completed electronic entry of drug and allergy information. In addition, secondary questions concerning matters such as smoking history and details of diseases should be incorporated into the electronic questionnaire, thereby making this information available in an electronic format to health care providers within and across health care institutions. Work is currently underway to integrate the PAAQ into an electronic patient record that can be accessed by all health care professionals in our institutions. Future studies are also required to assess the impact of portable computers on the patient–health care provider encounter.

We would like to acknowledge the assistance of the staff in the Pre-Admission Clinic at Hotel Dieu Hospital and the enthusiasm of the research subjects. We would also like to thank Beth Orr and the QUAIL team at KGH.

References

1. Lutner RE, Roizen MF, Stocking CB, et al. The automated interview versus the personal interview: do patient responses to preoperative health questions differ? Anesthesiology 1991;75:394–400.
2. Parker BM, Tetzlaff JE, Litaker DL, Maurer WG. Redefining the preoperative evaluation process and the role of the anesthesiologist. J Clin Anesth 2000;12:350–6.
3. Roizen MF, Coalson D, Hayward RS, et al. Can patients use an automated questionnaire to define their current health status? Med Care 1992;30:MS74–84.
4. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997;4:342–55.
5. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409–16.
6. VanDenKerkhof EG, Goldstein DH, Lane J, et al. Using a personal digital assistant enhances gathering of patient data on an acute pain management service: a pilot study. Can J Anaesth 2003;50:368–75.
7. VanDenKerkhof EG, Goldstein DH, Rimmer MJ, et al. Evaluation of hand-held computers compared to pen and paper for documentation on an acute pain service. Acute Pain 2004;6:115–21.
8. Stone AA, Shiffman S, Schwartz JE, et al. Patient non-compliance with paper diaries. BMJ 2002;324:1193–4.
9. Goldstein DH, VanDenKerkhof EG, Rimmer MJ. A model for real time information at the patient’s side using portable computers on an acute pain service (new media). Can J Anaesth 2002;49:749–54.
© 2005 International Anesthesia Research Society