The Institute of Medicine's 1999 report on medical errors1 focused public attention on the importance of measuring health care quality and outcomes. Following the report, the Association of American Medical Colleges (AAMC) called for a “collaborative effort to ensure that the next generation of physicians is adequately prepared to recognize the sources of error in medical practice… and to engage fully in the process of continuous quality improvement (CQI).”2 In its follow-up report, Crossing the Quality Chasm,3 the Institute of Medicine warned that quality problems are pervasive and that health care frequently fails to deliver its potential benefit. The report made a series of recommendations for redesigning systems of care, including preparing the workforce to better serve patients in a world of rapid change.
The ability of providers to measure and improve the quality and outcomes of the health care delivered to patients has long been an issue. Over the past three decades, a variety of health professions schools and residency training programs have piloted attempts to teach the principles and practice of CQI to learners.4–12 Methods have included didactic lectures or seminars, observation of quality improvement activities, personal improvement projects, and participation in peer-review activities, as well as longitudinal quality improvement projects in inpatient and outpatient settings. Increasingly, medical students and residents are trained in ambulatory offices, and the use of trainees to introduce CQI efforts into private practices could represent an underutilized resource in efforts to improve the quality of care provided to the public. As part of the Health Resources and Services Administration's Undergraduate Medical Education for the 21st Century (UME-21) project, the University of Connecticut School of Medicine developed a CQI curriculum for its second- and third-year students. We report the effects of the curriculum on students' knowledge, attitudes, and beliefs, as well as the effects of the intervention on quality indicators in the participating practices, for the first class (the class of 2001) to complete the project.
METHOD
The curriculum involved a training module for CQI and chart abstraction, followed by an office-based quality improvement project involving chart audit, intervention, and remeasurement. Didactic and seminar sessions addressing clinical outcomes and quality measurement, clinical protocol development, chart abstraction, and clinical-process change through CQI interventions were introduced during the second year of the curriculum. Additionally, all students participated in a two-and-one-half-hour CQI chart-abstraction seminar. The seminar reviewed the history and theory behind CQI and identified the similarities and differences between CQI and clinical outcomes research. Formal training in chart abstraction was followed by practice abstraction with sample patient charts (stripped of all identifiers), including an answer key to allow immediate feedback. All students signed a confidentiality agreement developed in collaboration with the Connecticut Attorney General's Office.
Following their CQI training, all students (in groups of two to four) participated in a full CQI cycle at 24 student continuity practice (SCP) sites (see Figure 1) where students receive training one-half day per week for the first three years of medical school. Type II diabetes was chosen as the clinical entity for the pilot project. A “project-in-a-box” for diabetes was developed by Qualidigm, formerly the Connecticut Peer Review Organization, involving a review of the literature on the clinical entity, clinical protocols, validated chart-abstraction tools, a computerized database, analysis software, and a physicians' “toolkit,” including a variety of disease-specific interventions targeted at identified “opportunities for improvement.”
Figure 1: University of Connecticut School of Medicine UME-21 continuous quality improvement (CQI) cycle.
Seventy-seven second-year medical students (the entire class of 2001) participated. The SCPs were all associated with a private, multi-site, fully integrated medical group practice. The SCP preceptor-physicians were approached by project staff and invited to participate. A random sample of patients with diabetes mellitus at each site was developed from billing data. Students abstracted 20–30 patient charts from each practice and returned the completed abstractions to the project coordinator at the Area Health Education Center Program office, who entered the data into a database.
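For illustration, the per-site sampling step can be sketched in a few lines of Python; the patient identifiers, roster size, and sample size below are hypothetical stand-ins, not the study's billing data.

```python
# A minimal sketch of drawing a random sample of patients with diabetes
# from one site's billing data. The IDs and sample size are hypothetical;
# the study abstracted 20-30 charts per practice.
import random

billing_ids = [f"pt{n:03d}" for n in range(1, 201)]  # hypothetical site roster
random.seed(42)  # fixed seed so the illustration is reproducible
sample = random.sample(billing_ids, k=25)
print(sample[:5])
```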
A feedback report was returned to students and preceptors identifying how each practice did on each quality indicator relative to the aggregate performance of all practices. Each practice-based group, in collaboration with its preceptor, identified opportunities for improvement and developed interventions to enhance outcomes of care or chose appropriate interventions from the toolkit prepared by Qualidigm (e.g., diabetes guideline-based care tracking forms, patient reminders for foot care, patient home glucose-monitoring materials, etc.). Students and their preceptors implemented the interventions at the end of the students' second year. For remeasurement purposes, the students abstracted another random sample of patients from the same practice six months after implementing the interventions. Following analysis, students and preceptors received a report containing pre- and post-intervention data with aggregate data for comparison.
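The core computation behind such a feedback report can be sketched as follows; the indicator, sites, and chart records are hypothetical illustrations of comparing each practice's rate with the pooled aggregate, not the study's data.

```python
# A minimal sketch of the practice-versus-aggregate feedback report:
# each site's rate on one quality indicator is compared with the pooled
# rate across all sites. All values below are hypothetical.
from collections import defaultdict

# (practice_id, indicator_met) pairs from abstracted charts.
charts = [
    ("site_a", True), ("site_a", False), ("site_a", True),
    ("site_b", False), ("site_b", False), ("site_b", True),
]

by_site = defaultdict(list)
for site, met in charts:
    by_site[site].append(met)

aggregate = sum(met for _, met in charts) / len(charts)
for site, values in sorted(by_site.items()):
    rate = sum(values) / len(values)
    print(f"{site}: {rate:.0%} (all-practice aggregate {aggregate:.0%})")
```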
The effect of the CQI training, chart-abstraction, and intervention experiences on the students was measured using a pre- and post-training questionnaire (shown in Table 1) containing 40 questions organized into six domains: the nature of CQI, the focus of CQI, the principles of CQI, the important concepts of CQI, the use of a team approach in CQI, and perceived confidence in performing CQI activities. Each item was rated on a five-point Likert scale. The nonparametric paired Wilcoxon signed-rank test was used to compare the pre- and post-training knowledge, attitudes, and beliefs scores for all students. Attitudinal assessment with formative feedback was obtained after the abstraction experience. The effects of the project on both process and outcome indicators at the practice level were assessed using baseline and remeasurement performance data.
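As a concrete illustration of the paired, nonparametric comparison described above, the following sketch applies SciPy's Wilcoxon signed-rank test to hypothetical pre- and post-training Likert ratings; the study's actual questionnaire data are not reproduced here.

```python
# Wilcoxon signed-rank test on paired pre/post five-point Likert ratings.
# The ratings are hypothetical: one pre and one post score per student,
# paired by position.
from scipy.stats import wilcoxon

pre = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3]
post = [4, 3, 4, 5, 3, 4, 5, 3, 5, 4, 3, 4]

# Tests whether the median paired difference is zero.
stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}")
```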
Table 1: Medical Students' Mean Knowledge, Attitudes, and Beliefs Scores Regarding Continuous Quality Improvement before and after Training, University of Connecticut School of Medicine, Class of 2001 (n = 69)
RESULTS
Students' Knowledge, Attitude, Behavior Survey
Table 1 shows that knowledge of the nature, concepts, and principles of CQI improved significantly after training. Mean ratings of self-efficacy (confidence in carrying out CQI activities), initially near the midpoint of the Likert scale (suggesting ambivalence), improved significantly in 11 of 15 areas after the training. Participants' scores for analyzing data and for using data to identify root causes of process issues and develop solutions remained in the midrange, near 3.0. Students' ratings of their abilities to create and monitor implementation plans improved significantly with training.
Post-chart Audit and Data-return Feedback Survey
The students completed a feedback survey regarding the chart-audit experience after their baseline chart abstraction and after reviewing the data with their SCP preceptors. Both quantitative and qualitative data were collected. Fifty-three of 77 students (69%) completed the questionnaire.
The post-abstraction survey (data not presented) contained 14 questions organized into three domains: overview of the CQI project; problems or challenges to the CQI project; and benefits of the CQI project. Responses were provided on a five-point Likert scale (strongly agree to strongly disagree).
In general, students were either neutral regarding the overall value of the chart-audit learning experience (43.4%) or did not find it to be a valuable learning experience (41.6%). Almost two thirds (64.1%) either disagreed or strongly disagreed with the statement that the chart audit was “too intrusive into patient confidentiality.” About half of the students (45.3%) agreed or strongly agreed that the chart audit was “beneficial to the office practice.” Eighteen students (34.6%) indicated that the experience was “beneficial to the patient,” 18 were undecided, and 16 (30.8%) reported no benefit to the patient.
In terms of potential problems or challenges to the project, most respondents (83.0%) reported they had sufficient time to complete the project, and almost two thirds (62.3%) reported that the tasks and expectations were clearly defined. Most students either disagreed (41.5%) or strongly disagreed (26.4%) with a statement identifying poor mentoring as a problem or challenge to the project. Only 28.3% indicated that disorganized office systems resulted in inefficiencies that hindered the project. A commonly reported potential problem or challenge was that the project was too “cumbersome and uninteresting.”
The students were more optimistic when reporting benefits of the project; 43% either agreed or strongly agreed that they had gained an appreciation that work (patient care) must be treated as a process lending itself to incremental change and improvement. Slightly less than one third (32.1%) gave neutral responses, while eight (15.1%) disagreed, and five (9.4%) strongly disagreed. A total of 35.8% either agreed or strongly agreed on the value of making decisions by relying on data, 37% were neutral, and the remainder (26.5%) either disagreed or strongly disagreed with the statement. Nearly half of the respondents (48.1%) reported that a benefit of the audit was improved quality of patient care, 17 were neutral, and the remaining ten (19.2%) either disagreed or strongly disagreed that the project improved patient care. Nearly half (46%) either agreed or strongly agreed that the experience improved documentation of procedures at the site. Roughly one fourth (28%) were neutral, and only 16% disagreed or strongly disagreed with the statement.
Text responses to seven open-ended questions regarding the experience were analyzed using the constant comparative method to identify recurrent themes until saturation was achieved and no new themes emerged.13,14
Overall, the students recognized the importance of comprehensive, well-organized charts and the potential benefits of CQI in improving patient care. Nevertheless, they expressed a number of concerns, including frustration with the activity itself, lack of support at the SCP sites, and skepticism about the project's being an efficient use of time, given competing educational demands. Each of these themes and sub-themes is summarized in Table 2, with supportive illustrative quotes.
Table 2: Samples of Medical Students' Attitudinal Feedback after Chart-abstraction Experience, Continuous Quality Improvement Curriculum, University of Connecticut School of Medicine, Class of 2001
Chart-abstraction Data
A total of 513 charts were abstracted for process-of-care measures for the baseline sample, and 380 were abstracted after the intervention (see Table 3). The baseline and post-intervention samples were statistically similar except for a higher percentage of women in the remeasurement sample (51.3% versus 41.9%; χ2 = 7.75, p < .01). The rates of documentation of foot and eye exams increased significantly (from 51.3% to 70.2% and from 26.9% to 37.8%, respectively; both p < .001). The percentage of patients who had at least one glycohemoglobin measurement during the study period, which was high at baseline, did not change significantly (86.4% versus 85.7%; p = .764). The mean glycohemoglobin value dropped significantly, from 7.71% at baseline to 7.22% at remeasurement (p < .001).
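The reported sex difference can be checked with a chi-square test on counts reconstructed from the published percentages (41.9% of 513 and 51.3% of 380 women); treating these rounded counts as exact is an assumption.

```python
# Chi-square test comparing the proportion of women at baseline versus
# remeasurement. Counts are reconstructed from the reported percentages
# and sample sizes, so they are approximate.
from scipy.stats import chi2_contingency

table = [
    [215, 298],  # baseline: women, men (n = 513)
    [195, 185],  # remeasurement: women, men (n = 380)
]

# Without the Yates continuity correction, these rounded counts give a
# statistic of about 7.8, close to the reported chi-square of 7.75.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```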
Table 3: Changes in Performance Measures for Diabetes Mellitus from the Baseline to Remeasurement in Continuous Quality Improvement Curriculum, University of Connecticut School of Medicine
DISCUSSION
The monitoring of quality and the reduction of rates of medical errors are no longer optional activities in the practice of medicine. The Joint Commission on Accreditation of Healthcare Organizations, other accrediting organizations, health maintenance organizations, employers, Medicare, Medicaid, and other payers, as well as consumers, are all demanding accountability for health care quality. The AAMC, the Council on Graduate Medical Education, and residency review committees from every specialty and subspecialty are requiring that CQI be added to the curricula for all health professionals. Medical educators are wrestling with how best to fulfill these new requirements.
The results of this study show that CQI projects involving medical students can have a significant impact on improving quality indicators for the care delivered by the practices in which the students participate, while introducing both students and physicians to the process of quality measurement and improvement. This approach educates the next generation of providers and introduces practicing physicians to the techniques of CQI and to practice-specific outcomes data they can use to improve outcomes for their patients.
To successfully integrate CQI and medical-error prevention into the already busy undergraduate medical education curriculum, a program must compete with other new technologies, diseases, and treatments, all of which may seem more exciting and pertinent to the developing physician. The evaluation data collected in this project suggest that lecture-seminar and chart-abstraction training effectively gives most students an appreciation of the value of CQI activities in improving the outcomes of care for their patients. Students, however, continued to view the data-collection process as “non-educational and irrelevant to their education.” Their attitudes toward chart abstraction may account for the difficulty in getting third-year students to complete their post-intervention data collection and for the smaller number of charts abstracted per practice (513 charts abstracted at baseline versus 380 post-intervention). For the foreseeable future, all of the ambulatory practices in which our students (and most students nationally) participate will use paper records, necessitating time-consuming chart abstraction to gather data for quality measurement.
Students made several useful suggestions to improve this project: (1) scheduling the chart audit during less intense portions of the curriculum; (2) making the chart-audit experience less time-consuming and onerous (this would need to be weighed against the decreased amount of practice-specific process and outcomes data collected); (3) better integrating the CQI project into the core medical school curriculum by using the chart-audit experience as a method to model and reinforce clinical research skills taught in clinical epidemiology, such as population sampling, calculation of statistical power, etc.; and (4) having each student complete the project at his or her own SCP site (not in small groups as in the first cycle of this project). Students also requested greater involvement in planning the project and analyzing the data. All of these suggestions have been integrated into subsequent cycles of the project.
The students indicated that they had learned several valuable lessons from their participation in the chart-abstraction portion of the project, despite its apparent lack of popularity. The feedback suggests that they gained an understanding of the value of legible charting to the outcomes of care and a better understanding of the use of guidelines in the management of patient populations. Seminar facilitators remarked that “students knew the diabetes care guidelines cold.” Some students, however, remained ambivalent about their abilities to use data to identify root causes of poor outcomes and to create solutions to the problems identified. In general, though, students felt that once they had identified problems, they could create, implement, and monitor plans to address these quality issues.
A major limitation of the study is the lack of control groups, both for the students who participated in the training and for the practices that participated in the project. Because all data collection was done by student-performed chart audit, performance data were not available for non-teaching practices. Changes in available therapies and greater awareness of practice guidelines may have contributed to the positive results we found. Thus, we cannot definitively say that the project caused the changes measured in students' knowledge, attitudes, and behaviors or the improvements in process and outcomes measured in the participating practices. Additionally, because the project was conducted at one medical school and in private practices belonging to one management services organization, the results may not be generalizable to other medical schools with different curricula or to other types of practice models.
The CQI curriculum and project are being sustained beyond the UME-21 funding period and have been integrated into the SCP in the second and third years of our medical school's curriculum. The project's ongoing cost is minimal. Its major cost, the acquisition and analysis of chart-audit data, will be reduced by using pooled practice-specific claims data and HEDIS (Health Plan Employer Data and Information Set) measures available from health plans in the state. The chart audit will no longer be necessary, which we hope will increase students' acceptance of the project while also exposing them to the use of electronic quality data.
In conclusion, our data show that medical students can successfully initiate CQI activities at the practices in which they participate, with positive effects on the quality of care delivered. The use of medical students to initiate these efforts may represent an underutilized resource in efforts to improve the quality of care afforded the public. As medical schools continue to shift their students into community ambulatory practices for longitudinal experiences, the model we describe will become increasingly generalizable. Getting learners to embrace the process of quality measurement and improvement, however, will be challenging as long as data collection remains tedious. Creative approaches to data collection and measurement (e.g., use of available managed care organization claims data) and active involvement of students in the planning process will be necessary to improve student experiences. We recommend further efforts in this area.
REFERENCES
1. Kohn LT, Corrigan JM, Donaldson MS (eds). Committee on Quality of Health Care in America. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 2000.
2. Cohen J. Letter to Medical School Deans. Washington, DC: Association of American Medical Colleges, December 1999.
3. Committee on Quality of Health Care in America. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press, 2001.
4. Barbaccia JC. Introducing quality assurance and medical audit into the UCSF Medical Center curriculum. J Med Educ. 1976;51:386–91.
5. Weeks WB, Robinson MS, Brooks WB, Batalden PB. Using early clinical experiences to integrate quality-improvement learning into medical education. Acad Med. 2000;75:81–4.
6. Mulligan JL, Garg ML, Skipper JK, McNamara MJ. Quality assurance in undergraduate medical education at the Medical College of Ohio. J Med Educ. 1976;51:378–85.
7. Barr DM, Wollstadt LJ, Goodrich LL, Pittman JG, Booker CE, Evans RL. The Rockford School of Medicine undergraduate quality assurance program. J Med Educ. 1976;51:370–7.
8. Headrick LA, Neuhauser D, Schwab P, Stevens DP. Continuous quality improvement and the education of the generalist physician. Acad Med. 1995;70(1 suppl):S104–S109.
9. Weingart SN. A house officer-sponsored quality improvement initiative: leadership lessons and liabilities. Jt Comm J Qual Improv. 1998;24:371–8.
10. Gordon PR, Carlson L, Chessman A, Kundrat ML, Morahan PS, Headrick LA. A multi-site collaborative for the development of interdisciplinary education in continuous improvement for health professional students. Acad Med. 1996;71:973–8.
11. Headrick LA, Richardson A, Priebe GP. Continuous improvement learning for residents. Pediatrics. 1998;101:768–73.
12. Kyrkjebo JM. Beyond the classroom: integrating improvement learning into health professions education in Norway. Jt Comm J Qual Improv. 1999;25:588–97.
13. Glaser B, Strauss A. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine, 1967.
14. Strauss AL, Corbin J. Grounded Theory in Practice. London, U.K.: Sage, 1990.