Research Report

The Veterans Affairs Learners’ Perceptions Survey

The Foundation for Educational Quality Improvement

Keitz, Sheri A. MD, PhD; Holland, Gloria J. PhD; Melander, Evert H. MBA; Bosworth, Hayden B. PhD; Pincus, Stephanie H. MD, MBA for the VA Learners’ Perceptions Working Group



Providing care for U.S. veterans and educating tomorrow’s health care providers are fundamental commitments of the U.S. Department of Veterans Affairs (VA). The VA’s Veterans Health Administration (VHA) has over 6 million veterans enrolled in its health care delivery system. Presently, 4.5 million of these veterans are cared for through a nationwide network that includes 163 hospitals, more than 1,000 outpatient clinics, nursing homes, domiciliaries, and home care programs. The VA’s medical care appropriation for fiscal year 2002 was over $21 billion.1 Since 1946, when President Harry Truman signed the law that established the Department of Medicine and Surgery (now VHA), affiliations between the VA and academic institutions have become an invaluable national training resource for medical students and resident physicians. The affiliations also serve as a vital component of recruiting and retaining excellent VA staff and of ensuring the quality of care provided directly by the VA.

The VA’s graduate medical education (GME) is conducted through affiliations with university schools of medicine. Currently 130 VHA medical facilities are affiliated with 107 of the nation’s 125 medical schools. Through these partnerships, almost 29,000 residents receive some of their training in the VA every year.2 Accounting for approximately 9% of U.S. GME,3 the VA pays for 8,700 resident–physician positions in almost 2,000 residency programs.4 In the VA fiscal year 2001, appropriations in support of physician education totaled $706 million.5

Despite the VA’s significant commitment to GME through allocation of faculty resources, resident salary dollars, and clinical learning environments, there has been no systemwide attempt to measure learners’ perceptions of the clinical training experience in VA settings. The Government Performance and Results Act of 1993 (GPRA)6 required agencies to establish measurable performance goals and to report on the results annually to the Office of Management and Budget. In support of the GPRA, VHA’s Office of Academic Affiliations (OAA) was charged with developing a tool to measure performance for the VA’s academic mission. This tool could be used as a yearly quality indicator to highlight strengths and opportunities for improvement in VA clinical training programs.

In this report, we outline the development, validation, and implementation of a nationwide VA Learners’ Perceptions Survey for all clinical health trainees. We focus on results for residents and report differences in learner perceptions among residents in internal medicine (IM), surgery, subspecialty training, and psychiatry.


Method

Learners’ Perceptions Working Group

In 1999, OAA established a VA Learners’ Perceptions Survey Steering Committee of individuals with multidisciplinary expertise in VA clinical training. Steering Committee members, staff from a contractor with expertise in survey methodology, and OAA staff made up the VA Learners’ Perceptions Working Group, whose mission was to examine and measure elements of learner satisfaction for all health care trainees in the VA system. The Working Group oversaw the study design, the development and conduct of the survey, and the management and analysis of the data.

Literature Review and Focus Group Studies

To identify items of importance for clinical education, we conducted a systematic review in 1999 and again in January 2002 of the medical literature published from 1975 to the present. Our search identified 239 articles of which we selected 157 for further review. We graded these articles based on respondent sample size (>50), number of sites, and response rate. Our literature review identified 152 items of importance to clinical training.

The literature review served as the background for 15 focus-group sessions held at five VA medical centers during December 1999 and January 2000. We conducted focus groups for medical students, resident–physicians, physician faculty, associated health faculty, nursing students, and graduate and undergraduate associated health trainees to explore characteristics of clinical training and to provide content validation. We conducted focus groups until we reached saturation (i.e., no new themes emerged) and exhaustiveness (i.e., similar themes generated by participants from very different perspectives).7

Questionnaire Development

After the literature review and focus groups, we identified common and recurrent themes pertaining to attributes of the health care training experience and collapsed the full list into conceptually distinct domains. For each domain, we wrote questionnaire items asking respondents to rate their satisfaction with their VA training experience, using a five-point Likert scale for most items (very satisfied, somewhat satisfied, neither, somewhat dissatisfied, and very dissatisfied).
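The coding scheme described above can be sketched as follows. This is an illustrative reconstruction, not the Working Group's actual code; the function name and the handling of unanswered items are assumptions, though dropping missing responses mirrors the report's convention of presenting percentages of residents responding to each question.

```python
# Hypothetical sketch: coding a five-point Likert item and summarizing it
# as the proportion of respondents reporting satisfaction.
LIKERT = {
    "very satisfied": 5,
    "somewhat satisfied": 4,
    "neither": 3,
    "somewhat dissatisfied": 2,
    "very dissatisfied": 1,
}

def proportion_satisfied(responses):
    """Share of answered items rated somewhat or very satisfied.

    Unanswered items (None) are dropped, mirroring the report's
    percentages-of-residents-responding convention.
    """
    scores = [LIKERT[r] for r in responses if r is not None]
    if not scores:
        return None
    # "Satisfied" = the sum of very satisfied and somewhat satisfied.
    return sum(s >= 4 for s in scores) / len(scores)

answers = ["very satisfied", "neither", None, "somewhat satisfied"]
print(proportion_satisfied(answers))  # 2 satisfied of 3 answered
```

A respondent who skips an item simply shrinks the denominator for that item, which is why the tables report item-level percentages rather than percentages of all 1,436 residents.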

A pilot test was conducted in 22 geographically diverse VA medical centers to ensure clarity, internal consistency, and independence of the domains. A total of 1,092 questionnaires were completed and returned; of these, 437 (40%) were completed by residents. The remaining trainees were other health professional trainees (e.g., nurses, dentists, or pharmacists).

Factor analysis confirmed the grouping of variables into domains and explained the pattern of correlations among the items for each domain. The survey items were collapsed into four domains based on factor loadings: faculty/preceptors, learning environment, working environment, and physical environment. Multiple regression analyses determined which specific elements contributed to overall satisfaction for each domain. We used a revised 57-item questionnaire for nationwide distribution. The final questionnaire took 15 minutes to complete.
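The domain-grouping step above can be illustrated with a minimal exploratory factor analysis. This is a hedged sketch under stated assumptions: the data are synthetic stand-ins for the 437 resident pilot questionnaires, the item count matches the revised 57-item instrument, and scikit-learn's `FactorAnalysis` stands in for whatever package the Working Group actually used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for pilot data: 437 respondents x 57 items,
# Likert answers coded 1 (very dissatisfied) .. 5 (very satisfied).
responses = rng.integers(1, 6, size=(437, 57)).astype(float)

# Extract four latent factors, mirroring the survey's four domains:
# faculty/preceptors, learning, working, and physical environments.
fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(responses)

# loadings[j, i]: how strongly item i loads on factor j. Each item is
# assigned to the domain on which its loading is largest in magnitude.
loadings = fa.components_
domain_of_item = np.abs(loadings).argmax(axis=0)
print(loadings.shape)  # (4, 57)
```

On real survey data (unlike this random noise), items written for the same domain would load heavily on a common factor, which is what "confirmed the grouping of variables into domains" means above.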

Participants and Setting

The VA system has 130 facilities that participate in resident training nationwide. In March 2001, 3,338 residents were registered among all 130 sites. During April and May of 2001, an independent vendor mailed our questionnaires directly to these registered residents. Up to three mailings of the questionnaire and two mailings of reminder postcards were sent to increase questionnaire completion. We also made the questionnaire available on the Internet. By June 30, 2001, we received completed questionnaires from 1,775 registered residents (53.2%).

Physician Groups

For analytic purposes, we divided residents into five groups based on similarity of their training programs. The groups consisted of medicine (general IM, geriatric medicine, and preventive medicine); surgery (anesthesiology, anesthesiology pain management, colon and rectal surgery, general surgery, neurological surgery, ophthalmology, orthopedic surgery, otolaryngology, plastic surgery, thoracic surgery, urology, and vascular surgery); subspecialty trainees (allergy and immunology, cardiovascular disease, critical care, dermatology, endocrinology, gastroenterology, hematology, hematology/oncology, infectious diseases, nephrology, neurology, oncology, physical medicine and rehabilitation, pulmonary disease, rheumatology, and spinal cord medicine); and psychiatry (addiction psychiatry, geriatric psychiatry, and psychiatry). The remaining group of other residents (pathology, radiology) did not logically fit into any of the four categories, and we excluded them from our analysis for this report. We conducted a full analysis using the completed questionnaires of the 1,436 resident physicians who were in medicine (n = 706), surgery (n = 291), subspecialty training (n = 266), and psychiatry (n = 173).

Statistical Analysis

Questionnaire results are reported here as the proportion of residents reporting satisfaction (the sum of very satisfied and somewhat satisfied responses). Because not every resident answered every question, data are presented as percentages of residents responding to each question. For the numerical Overall Satisfaction Score, the mean with standard deviation is reported. Our primary goal was to describe residents’ satisfaction. We also used chi-square tests and one-way analysis of variance (ANOVA) to assess differences among groups as appropriate. To adjust for multiple comparisons, we underscore only differences that occur when p ≤ .001; this approximates a Bonferroni correction for the 51 tests performed. Missing values were handled by deleting the affected observations. We performed all statistical procedures using a standard statistical software package.
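The testing strategy above can be sketched in a few lines. The counts and group scores below are hypothetical illustrations, not the study's data, and SciPy stands in for the unnamed statistical package; only the group sizes, the 79.1 ± 13.2 overall score, and the 51-test Bonferroni-style cutoff come from the report.

```python
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Hypothetical satisfied / not-satisfied counts for one item across the
# four groups (medicine, surgery, subspecialty, psychiatry).
table = np.array([[586, 120],
                  [262,  29],
                  [229,  37],
                  [159,  14]])
chi2, p_chi, dof, _ = chi2_contingency(table)  # dof = (4-1)*(2-1) = 3

# One-way ANOVA on a numeric outcome such as the 0-100 overall score,
# illustrated with simulated per-group scores (mean 79.1, SD 13.2).
rng = np.random.default_rng(1)
groups = [rng.normal(79.1, 13.2, n) for n in (706, 291, 266, 173)]
f_stat, p_anova = f_oneway(*groups)

# Bonferroni-style threshold for the 51 tests: 0.05/51 is roughly .001,
# the cutoff the report uses to flag group differences.
ALPHA = 0.05 / 51
significant = p_chi <= ALPHA
```

Using a single hard cutoff of p ≤ .001 rather than the exact 0.05/51 value is a conservative simplification, which is why the report describes it as approximating a Bonferroni correction.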

Ethical Considerations

The U.S. Office of Management and Budget, which reviews and approves federally sponsored surveys, approved our survey. We maintained confidentiality by keeping respondent information in a separate database and reviewing only aggregate data.


Results

Demographic Data

Overall, 61% of the respondents were men (see Table 1). A quarter (24%) of all respondents were in their first postgraduate year (PGY 1), with three quarters of the total group in PGY 1–3. In the medicine and psychiatry groups, few trainees were PGY 4 or greater. In contrast, more surgery and subspecialty respondents were PGY 4–6 (43% and 62%, respectively), representing a higher level of training (p < .001).

Table 1: Demographic Characteristics of 1,436 Residents Registered at 130 Veterans Affairs Training Sites in the United States, by Residents’ Training Program, 2001

Global Measures of Satisfaction

Residents’ responses to questions on overall satisfaction with the VA training experience are shown in Table 2. On a scale of 0 to 100, where 100 is a perfect score and 70 is a passing score, residents gave an average score of 79.1 ± 13.2 to their VA clinical training experience. Most residents (84%) would have recommended VA training to peers, and 81% would have chosen a VA training experience again if given a chance. There was no significant variation in the responses among the four resident groups.

Table 2: Residents’ Overall Satisfaction with Their Veterans Affairs Clinical Training Experience, by Residents’ Training Program, 2001

Measures of Satisfaction within Specific Domains

Proportions of residents who reported satisfaction with individual items within the four domains of the VA Learners’ Perceptions Survey are shown in Table 3.

Table 3: Proportions* of Residents Who Were Satisfied with Specific Items in Individual Domains of the Veterans Affairs Learners’ Perceptions Survey, by Residents’ Training Program, 2001

Satisfaction with Clinical Faculty/Preceptors

A large majority of trainees reported satisfaction with the faculty on all items; approximately 90% of trainees were satisfied with the clinical skills of their preceptors, faculty approachability, and teaching ability. Seventy-seven percent to 86% of residents reported satisfaction with all remaining faculty items. In all, 87% of trainees reported overall satisfaction with the clinical faculty.

When divided into subgroups of residents, satisfaction was high and similar among all groups. The only item that differed among the groups was satisfaction with accessibility of faculty, with the surgery trainees reporting the lowest proportion of satisfaction (p = .001).

Satisfaction with Learning Environment

A large majority of residents (94%) reported satisfaction with the degree of autonomy in the VA setting. Residents were also satisfied with time working with patients (87%) and with the degree of supervision (84%).

When divided into subgroups of residents, there was no significant variation in satisfaction with degree of autonomy, degree of supervision, or preparation for future training. However, satisfaction varied significantly among the groups for five of the 13 items (p ≤ .001). Medicine residents were the least satisfied group, reporting the lowest proportion of satisfaction for four of these five items. Responses ranged widely from the least to the most satisfied group: time working with patients (83% for medicine to 92% for psychiatry), spectrum of patient problems (72% for medicine to 86% for surgery), quality of care (67% for medicine to 83% for psychiatry), and amount of “scut” work (32% for medicine to 57% for psychiatry). For teaching conferences, medicine residents had the highest proportion reporting satisfaction (71%), while surgery had the lowest (57%). Despite the variability in responses for the items in the learning environment domain, overall satisfaction with the learning environment did not differ among the groups (p = .22).

Satisfaction with Working Environment

Most residents were satisfied with faculty/preceptor morale (84%), as well as with the automated (electronic) patient record system (83%) and computer access (83%). When divided into subgroups of residents, satisfaction varied significantly among the groups for five of the 13 items (p ≤ .001). As in the learning environment domain, medicine trainees were the least satisfied group, with the lowest proportion reporting satisfaction for four of these five items. Responses ranged widely from the least to the most satisfied group: laboratory services (55% for medicine to 72% for psychiatry), radiology services (46% for medicine to 69% for psychiatry), ancillary/support staff morale (42% for medicine to 61% for subspecialty trainees), and ancillary/support staff (38% for medicine to 62% for psychiatry). For the automated (electronic) patient record system, surgery had the lowest proportion of satisfied trainees (75%), while psychiatry had the largest (91%). Overall, satisfaction with the working environment did not vary among the trainee groups.

Satisfaction with Physical Environment

Most residents were satisfied with availability of phones (84%), convenience of facility location (83%), lighting (82%), and personal safety (80%). When divided into subgroups of residents, satisfaction varied significantly among the groups for only two of the 12 items (p ≤ .001). For personal safety and heating/air conditioning, psychiatry residents were the least satisfied (71% and 69%, respectively), while surgery residents had the largest proportion of satisfied respondents (89% and 83%, respectively). Overall satisfaction with physical environment did not vary among the resident groups.


Discussion

Quality indicators for resident education are lacking. No tools have been developed to systematically assess programmatic strengths and weaknesses in clinical training settings or to measure progress toward a higher goal. The VA Learners’ Perceptions Survey is the first validated tool to address comprehensive learner satisfaction in clinical trainees. To our knowledge, no current or prior wide-ranging, nationwide, quality improvement process exists for physician and other clinical trainees. Given the size of the VHA system and its commitment to an education mission, there are few better opportunities to survey and affect a large number of residents nationwide. The results of our survey suggest that, for all residents, overall satisfaction with the VA learning experience was high. In addition, trainees uniformly reported high satisfaction with all aspects of their interactions with VHA faculty/preceptors and some aspects of the learning environment domain.

One intriguing finding is that nearly 95% of residents reported satisfaction with their degree of autonomy and roughly 85% of trainees reported satisfaction with the availability of faculty and degree of supervision. In contrast to the private sector, VHA supervisory guidelines are based on the Accreditation Council for Graduate Medical Education’s standard of graduated levels of responsibility.8 VHA guidelines are not related to documentation required for third-party billing but rather remain focused on resident supervision from an educational perspective. The autonomy afforded by the VA system may be critical to effective clinical training.

The VA Learners’ Perceptions Survey also raises some important questions about the need for improvements in the VHA system. Residents gave an overall numeric score of 79 to their VA clinical training experience. This score remained consistent across all groups of residents. This single, overall score suggests that there are challenges within the VHA system but does not tell us where to find them. We must look at individual domains to gain clues to the learning needs and values that may require attention in particular groups of trainees.

In comparison to high satisfaction with preceptors and some aspects of the learning environment domain, fewer residents were satisfied with the working and physical environment domains. Specifically, fewer residents reported satisfaction with items pertaining to ancillary, laboratory, and radiology services, amount of “scut” work, and availability and maintenance of needed equipment. These perceptions may reflect frustrations within a fiscally constrained system in which a fixed set of resources is available to provide care for a growing number of increasingly ill and complex patients.

One purpose of our project was to create and validate a comprehensive questionnaire that would capture most domains important to learner satisfaction in the VA system. Through the complementary methods of a systematic literature review, exploratory focus groups, and pilot testing with validation, we confirmed the independence and importance of the four domains of the VA Learners’ Perceptions Survey: faculty/preceptors, and learning, working, and physical environments. The articulation of these themes adds to our understanding of what constitutes a good learning environment and also generates hypotheses about which factors can influence these areas. The VA places great emphasis on measures as indicators of quality. The introduction of an education measure brings attention to the teaching mission and places education issues directly in the focus of VA leadership.

A second purpose was to create a survey tool that would allow us to differentiate between strengths and weaknesses within a program as well as to differentiate characteristics of satisfaction among trainees in different disciplines. The respondents were able to differentiate among items within each domain. For example, overwhelming satisfaction with the degree of autonomy (94%) contrasted with mediocre satisfaction with the amount of “scut” work (40%) within the learning environment domain. Further, participants in different training programs responded to individual items in ways consistent with the dissimilar nature of their programs. For example, 69% of psychiatry residents, who are unlikely to require significant support from radiology services, reported satisfaction in this area, whereas only 46% of medicine residents were satisfied with radiology services at their facilities.

In addition, our results are applicable to the entire VA system. The sample we examined was representative of the larger population of VA trainees: when we compared the percentage of respondents in each training category with the percentage of VA-funded positions nationally, the two were nearly identical. The gender balance in the respondent group was likewise nearly identical to that in the specialties and training groups nationwide. Finally, respondents represented 130 sites, reflecting the geographic diversity of the VHA system.

Our findings extend the work previously published in the medical education literature concerning learner satisfaction. Prior to our report, there were only two validated survey instruments for measuring aspects of physician trainee satisfaction. Seelig et al. reported the validation9 and implementation10 of a questionnaire based on residency stressors in IM residents from five training programs, and Pololi et al.11 reported the validation and use of an instrument to measure medical students’ learning environment. These studies were based on responses from a small number of trainees at few sites. Several national surveys studied targeted questions regarding learner satisfaction, including stress in residency,12 work hours,13 content and adequacy of training in orthopedic residents,14 professionalism,15 and satisfaction with the evaluation process.16 None of these studies used a validated instrument, and none of the studies, regardless of sample size, attempted to measure overall satisfaction with a broad set of satisfaction domains.

Our study had several limitations. Multiple comparisons may have led to statistical error, and our relatively large sample size may have allowed us to identify differences that were statistically significant but not educationally important. We therefore limited our discussion to differences with p ≤ .001 to allow a cautious interpretation of the statistical differences among the resident groups.

We acknowledge that the VA Learners’ Perceptions Survey is not a direct measure of the effectiveness of education programs and that perceptions do not necessarily reflect realities. However, recent reports outlining alarming rates of burnout,17 stress and educational debt,12 and mood changes18 underscore our need to understand and respond to learners’ perceptions. When we began the VA Learners’ Perceptions Survey project, it was our goal to improve understanding and quality of education in the VHA. Looking forward, the greatest challenge will be to focus on modifiable elements of the educational environment and then design interventions to try to influence them. The ability to follow these measures on a yearly cycle allows prospective testing of targeted interventions. We want not only to gain knowledge about the VHA learning environment but also to empower action toward a higher goal.

As manifestations of a strained and rapidly changing health care system threaten our ability to train competent, independent, compassionate physicians for the future, our attention to residents’ satisfaction is essential. It represents an educational imperative for medical education and patient care in the VA and the nation. The VA Learners’ Perceptions Survey represents an essential starting point in establishing a standard for quality measurement in clinical education. We view this as a necessary first step toward identifying critical issues facing our residents. Our long-term purpose is to focus on creating action plans for those items that can be changed.

Other members of the VA Learners’ Perceptions Working Group are David C. Aron, MD, MS, Louis Stokes Cleveland Department of Veterans Affairs Medical Center and Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH; John M. Boyle, PhD, Schulman, Ronca & Bucuvalas, Inc., Silver Spring, MD; C. Richard Buchanan, DMD, FICD, Department of Veterans Affairs, Washington, DC and University of the Pacific School of Dentistry, San Francisco, CA; Grant W. Cannon, MD, VA Salt Lake City Health Care System and Department of Medicine, University of Utah, Salt Lake City, UT; Christopher T. Clarke, PhD, VA Medical Center and St. Louis University, St. Louis, MO; Stephen J. Dienstfrey, MA, MBA, Schulman, Ronca & Bucuvalas, Inc., Silver Spring, MD; Sheila C. Gelman, MD, VA Healthcare System of Ohio, Department of Veterans Affairs, Cincinnati, OH; Stuart C. Gilman, MD, MPH, Veterans Affairs Employee Education System, Long Beach, CA and Department of Medicine, University of California, Irvine, CA; Mark Graber, MD, VA Medical Center, Northport, NY and Department of Medicine, SUNY at Stony Brook, NY; Charles G. Humble, PhD, Office of Quality and Performance, Department of Veterans Affairs, Morrisville, NC; Linda D. Johnson, PhD, RN, Office of Academic Affiliations, Department of Veterans Affairs, Washington, DC; Catherine P. Kaminetzky, MD, MPH, Durham VA Medical Center and Department of Medicine, Duke University Medical Center, Durham, NC; Mark Meterko, PhD, VA HSR&D Management and Decision Research Center, Boston, MA; Don D. Mickey, PhD, VA Medical Center, Durham, NC; Gary Nugent, VA Medical Center, Omaha, NE; Dilpreet K. Singh, MS, MPA, Office of Academic Affiliations, Department of Veterans Affairs, Washington, DC; Anne M. Tomolo, MD, MPH, Louis Stokes Cleveland Department of Veterans Affairs Medical Center and Department of Medicine, Case Western Reserve University School of Medicine, Cleveland, OH; and Antonette M. Zeiss, PhD, VA Palo Alto Health Care System, Palo Alto, CA.

The views expressed in this article are those of the authors and do not necessarily represent the views of the U.S. Department of Veterans Affairs. For their expert and thorough consultation, we are grateful to Eugene Z. Oddone, MD, MHSc (Center for Health Services Research in Primary Care, Durham VA Medical Center, Durham, North Carolina, and Division of General Internal Medicine, Department of Medicine, Duke University Medical Center, Durham, North Carolina) and Morris Weinberger, PhD (Center for Health Services Research in Primary Care, Durham VA Medical Center, Durham, North Carolina, and Department of Health Policy and Administration, School of Public Health, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina). We are indebted to our contractor, Schulman, Ronca & Bucuvalas, Inc., for their professional support in managing the survey, database, and statistical analysis.


1.Facts About the Department of Veterans Affairs. Washington, DC: Department of Veterans Affairs, 2002.
2.FY 2001 VHA Annual Report of Health Services Training (RCS 10–0161). Washington, DC: Department of Veterans Affairs, Office of Academic Affiliations, 2001.
3.Appendix II Graduate Medical Education. JAMA. 2001;286:1095–1107.
4.Graduate Medical Education, Veterans Health Administration Office of Academic Affiliations. 〈〉. Washington, DC: Department of Veterans Affairs. Accessed 5 June 2003.
5.FY 2003 Budget Submission, Medical Programs, vol 2. Washington, DC: Department of Veterans Affairs, 2002.
6.Government Performance and Results Act of 1993. Washington, DC: U.S. Government Printing Office, 1993.
7.Corbin J, Strauss A. Basics of Qualitative Research. Thousand Oaks, CA: Sage Publications, 1990.
8.VHA Handbook on Supervision. Washington, DC: Veterans Health Affairs, Department of Veterans Affairs, 2001.
9.Seelig CB, DuPre CT, Adelman HM. Development and validation of a scaled questionnaire for evaluation of residency programs. South Med J. 1995;88:745–50.
10.Seelig CB. Quantitating qualitative issues in residency training: development and testing of a scaled program evaluation questionnaire. J Gen Intern Med. 1993;8:610–13.
11.Pololi L, Price J. Validation and use of an instrument to measure the learning environment as perceived by medical students. Teach Learn Med. 2000;12:201–7.
12.Collier VU, McCue JD, Markus A, Smith L. Stress in medical residency: status quo after a decade of reform? Ann Intern Med. 2002;136:384–90.
13.DaRosa DA, Prystowsky JB, Nahrwold DL. Evaluating a clerkship curriculum: description and results. Teach Learn Med. 2001;13:21–26.
14.Dailey SW, Brinker MR, Elliott MN. Orthopedic residents’ perceptions of the content and adequacy of their residency training. Am J Orthoped. 1998;27:563–70.
15.Arnold EL, Blank LL, Race KE, Cipparrone N. Can professionalism be measured? The development of a scale for use in the medical environment. Acad Med. 1998;73:1119–21.
16.Day SC, Grosso LJ, Norcini JJ Jr., Blank LL, Swanson DB, Horne MH. Residents’ perception of evaluation procedures used by their training program. J Gen Intern Med. 1990;5:421–26.
17.Shanafelt TD, Bradley KA, Wipf JE, Back AL. Burnout and self-reported patient care in an internal medicine residency program. Ann Intern Med. 2002;136:358–67.
18.Bellini LM, Baime M, Shea JA. Variation of mood and empathy during internship. JAMA. 2002;287:3143–46.
© 2003 Association of American Medical Colleges