Academic Medicine: October 2007 - Volume 82 - Issue 10
doi: 10.1097/ACM.0b013e31814a5152
CAM Education

Evaluating CAM Education in Health Professions Programs

Stratton, Terry D. PhD; Benn, Rita K. PhD; Lie, Désirée A. MD, MSEd; Zeller, Janice M. PhD, RN; Nedrow, Anne R. MD

Author Information

Dr. Stratton is assistant dean, Student Assessment and Program Evaluation, and assistant professor, Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, Kentucky.

Dr. Benn is research investigator, Department of Family Medicine, and director of education, Integrative Medicine Program, University of Michigan Medical School, Ann Arbor, Michigan.

Dr. Lie is clinical professor and director, Division of Faculty Development, Department of Family Medicine, University of California, Irvine, School of Medicine, Irvine, California.

Dr. Zeller is professor, Department of Adult Health Nursing, and associate professor, Department of Immunology/Microbiology, Rush University Medical Center, Chicago, Illinois.

Dr. Nedrow is associate professor, Departments of Medicine and Obstetrics and Gynecology, Oregon Health and Science University School of Medicine, Portland, Oregon.

Correspondence should be addressed to Dr. Stratton, Office of Medical Education, University of Kentucky College of Medicine, MN104 Medical Science Building, Lexington, KY 40536-0298; telephone: (859) 323-2785; fax: (859) 323-2076; e-mail: tdstra00@email.uky.edu.

Abstract

As medical, nursing, and allied health programs integrate complementary and alternative medicine (CAM) content into existing curricula, they face many of the same challenges to assessment and evaluation as do more traditional aspects of health professions education, namely, (1) specifying measurable objectives, (2) identifying valid indicators, and (3) evaluating the attainment of desired outcomes.

Based on the experiences of 14 National Center for Complementary and Alternative Medicine (NCCAM) education grant recipients funded between 2000 and 2003, the authors cite selected examples to illustrate strengths and deficits in efforts to “mainstream” CAM content into established health professions curricula, including subjecting that content to rigorous, systematic evaluation.

In addition to offering recommendations for more rigorously evaluating key CAM-related educational outcomes, the authors discuss related attitudes, knowledge, and skills and how these, like other aspects of health professions training, may result in enhanced patient care through modifications in clinical (provider) behaviors.

In the United States, the increasing popularity and use of complementary and alternative medical (CAM) therapies by consumers1–5 has spawned widespread interest among health care educators.6 Indeed, initial debate over whether CAM content should be taught within conventional health care programs has been replaced by questions related to how much, when, and what kind—with many programs now offering courses containing CAM content.7,8 Students, too, have recognized the need for exposure to the CAM therapeutic options available to their future patients.9,10

Because CAM is but one of myriad “orphan” topics (e.g., bioethics, geriatrics) seeking time in already overcrowded health professions curricula, a significant emphasis has been placed on documenting its “worth” via rigorous curricular evaluation.11 External pressures for heightened social accountability of training programs12 have further placed the onus of responsibility on health professions curricula to ensure competent, compassionate, and professional health care providers. Part and parcel of fulfilling this obligation to both society and the trainees themselves is “the use of appropriate assessment tools and measures of outcome.”13

Historically, educational evaluation was synonymous with the methodological issues of research design, and the barriers encountered were often discussed in the context of threats to reliability and validity.14 As the field gradually broadened to encompass more diverse approaches to evaluation (e.g., naturalistic evaluation), it became clear that neither social/educational programs nor their evaluation exist in a social vacuum.15 As a result, multiple stakeholders, personal and institutional values, and the social and political contexts of evaluation were incorporated into the education evaluation paradigm.16 Furthermore, in the measurement arena, the recognition that reliability is not inherent in a given measure—but, rather, the ends toward which such measures (data) are directed17—heightened the need to design rigorous approaches which delineate both trainee competencies (learner assessment) and program success (program evaluation).

Like any curricular innovation, efforts to document the effects of integrating CAM materials into existing curricula are laden with many of the same concerns, limitations, and barriers as other topics. First, desired outcomes of even the most focused program are often not adequately specified. Second, especially in the case of novel curricula, reliable and valid operational measures may not exist. Third, the dynamic nature of educational settings can introduce extraneous (confounding) factors that limit the scientific credibility of intervention research,18 leading evaluators to erroneously attribute outcomes to curricular efforts rather than to other influences.19 These difficulties are particularly great when the goal is to assess how innovative curricular elements change students' behavior and, ultimately, how those elements change the ways physicians care for patients.

With these issues in mind, we examined the experiences of CAM educational programs funded by the National Center for Complementary and Alternative Medicine (NCCAM), drawing additionally from non-NCCAM-supported studies that have empirically examined efforts to influence providers' CAM-related behaviors. In this article, we (1) identify strengths and weaknesses of efforts to define and evaluate key outcomes vis-à-vis learner attitudes, knowledge, and skills, (2) describe specific approaches to documenting the acquisition of CAM-related clinical skills (behaviors), and (3) discuss the potential benefits of providers' CAM-related attitudes, knowledge, and skills to patient care.

One contribution of our article is to show how much remains to be learned in confronting the evaluation challenges mentioned above. Specifically, in our discussion we delineate methodological weaknesses and gaps in the existing literature that severely limit our knowledge of the impact of educational interventions on short- and long-term behavior change. Relatedly, we note the paucity of studies that rigorously examine how desired CAM-related provider behaviors may positively affect patient care (e.g., patient adherence, patient satisfaction, illness outcomes).

The Information We Used

Data for this study originated from all the health professions programs (12 medical programs, two nursing programs) receiving CAM Education Project Grants from the NCCAM, plus the five programs subcontracted under a single award made to the American Medical Student Association (AMSA). These awards were funded in cohorts of five per year in 2000, 2001, and 2002–2003. Data for our analysis were taken from

* responses by 17 of the 19 program investigators (response rate = 89%) to our confidential e-mailed questionnaire asking for documentation of grantee-specific evaluation practices;

* summaries of evaluation-related discussions from annual meetings of the NCCAM projects' principal investigators (PIs); and

* narrative information solicited from selected program PIs.

Survey respondents provided respective programmatic information on

* target population(s);

* specific area(s) targeted for evaluation;

* assessment measures and methods;

* follow-up on CAM trainees;

* program implementation barriers; and

* key evaluation results.

All information provided by CAM education programs was obtained under separate IRB-approved protocols governing the collection and use of evaluation data for research purposes.

What We Learned

CAM knowledge

Identifying the breadth and depth of “what must be known” is key to all formal health professions education programs. Accrediting agencies, professional organizations, and training programs themselves dictate the knowledge base necessary for competent practice in a given profession. Moreover, within undergraduate medical training, much curricular content remains driven by what students will be tested on as part of the United States Medical Licensing Exam (USMLE).

At best, the task of defining a discrete knowledge base represents a moving target, further complicated by the changing practice environment (e.g., the provider–patient relationship), the changing training environment (e.g., medical education financing), and the escalating accumulation of potentially relevant scientific knowledge (e.g., the Human Genome Project). These factors, in turn, challenge educators to continually reexamine the merit of what is taught within already overcrowded health professions curricula.

As mentioned, the rising use of CAM modalities and treatments among the U.S. public has served to legitimize the inclusion of CAM content in many health professions' knowledge bases. However, on a profession-specific level, exactly what CAM-related information this content should encompass has remained largely unspecified. Indeed, within the NCCAM education grant-funding mechanism, how programs chose to define this domain was determined by the availability, interest, or expertise of local resources such as faculty members, patient populations, or community providers. Additionally, materials were selected by virtue of their “goodness of fit” within existing curricula.

For example, in undergraduate medical programs where the first two years are composed largely of basic science instruction, many CAM education programs integrated relevant content into pharmacology (e.g., herbals), physiology (e.g., mind–body interactions), introductory clinical skills (e.g., history taking, evidence-based medicine [EBM]), and behavioral science (e.g., doctor–patient interaction) courses.

Efforts to assess knowledge acquisition ran the gamut from subjective self-assessments to more objective measures such as performance on standardized examinations or completion of online modules. Courses with already clearly defined learning objectives were more easily able to define comparable outcomes related to CAM knowledge; conversely, those with vague, unmeasurable, or overly broad objectives often lacked clear definitions of exactly what was to be learned.

To gauge knowledge acquisition, most programs relied heavily on standardized examinations, usually administered in a “one-shot” fashion. In contrast, those programs that attempted to document knowledge gain tended to rely on more subjective measures such as pre–post self-assessments. Efforts at the University of Kentucky College of Medicine (UKCOM) were typical: undergraduate medical students estimated their levels of understanding of 10 different CAM modalities on entering their training (in their first year of medical school) and again nearing completion (in their fourth year). Results from this particular program showed that self-assessed knowledge gains were highest among those modalities targeted within the curriculum (i.e., acupuncture, chiropractic, massage therapy).
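
For readers less familiar with how such pre–post comparisons are typically summarized, the short Python sketch below illustrates one common approach: a paired analysis of matched learners' self-ratings for a single modality. The ratings, variable names, and scale are hypothetical and are not drawn from UKCOM's actual data.

from scipy import stats

# Hypothetical 1-5 self-ratings of understanding for one CAM modality,
# collected from the same (matched) learners at entry and near completion.
pre_ratings = [2, 1, 3, 2, 2, 1, 3, 2]    # first-year self-assessments (hypothetical)
post_ratings = [4, 3, 4, 3, 4, 2, 4, 3]   # fourth-year self-assessments (hypothetical)

# Mean self-assessed gain and a paired t-test on the matched ratings.
mean_gain = sum(b - a for a, b in zip(pre_ratings, post_ratings)) / len(pre_ratings)
t_stat, p_value = stats.ttest_rel(post_ratings, pre_ratings)
print(f"mean self-assessed gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")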

However, more objective measures of knowledge gain, although less common, were not entirely absent. For example, the Rush University College of Nursing and the AMSA-sponsored University of Massachusetts Medical School both assessed knowledge gain using objective exams administered at program entry and again at completion. Similarly, the University of Michigan Medical School assessed changes in CAM knowledge longitudinally, evaluating medical students at the beginning of each academic year.

Somewhat less frequent were efforts to have faculty, preceptors, or other observers assess trainees' knowledge. These observational ratings, although also not without error,20 could be strengthened when combined with other data sources. This strategy was followed at the UKCOM, where evaluators asked clinical preceptors, including CAM providers and residency program directors, to rate learners' general knowledge of CAM concepts exhibited during clinical training.

As mentioned, allowing programs to define their respective knowledge bases was considered essential to elicit truly innovative educational demonstration projects. However, the focus on specific CAM modalities varied widely across institutions, and exam items were typically constructed to meet the needs of a particular program or course. Lacking a central item bank from which to draw CAM-related exam questions, the crafting of reliable and valid test items generally fell to course directors with varying levels of expertise in this arena.

In summary, the specified CAM-related knowledge bases varied across programs, as did the types of assessments. Ultimately, both program evaluation and the assessment of students' knowledge are contingent on precise outcome measures, and these were underdeveloped when the NCCAM-funded programs began.

Attitudes toward CAM

The role of attitudes in health professions education and practice remains somewhat unclear. Whereas early models posited attitudes as an important predictor of certain health behaviors,21 others saw attitudes as supplemental to other factors (e.g., self-efficacy).22 Although still debated, attitudes are generally viewed as “necessary but not sufficient” requisites to behavior or behavior change.

Given the role attitudes may play either directly21 or indirectly22 in behavior, an important component of the programs' evaluation plans focused on stakeholders' attitudes toward CAM. The CAM education programs used a variety of approaches to characterize learners' attitudes, including focus groups, interviews, self-reflection, and surveys. The majority of programs used confidential pre–post questionnaires with medical, nursing, or other health professions students to characterize attitude change over time. A smaller number of programs extended this approach to faculty members and/or medical residents.

Because no validated measures of CAM attitudes existed, program directors developed new measures as part of their evaluation plans. For example, the 26-item Integrative Medicine Attitude Questionnaire (IMAQ) was developed by investigators at the Maine Medical Center, an early NCCAM awardee.23 This instrument, originally designed to assess attitudes of internists toward CAM, was later incorporated into the evaluation efforts of other CAM education programs.

Another instrument, the 10-item CAM Health Belief Questionnaire (CHBQ), was developed by investigators at the AMSA-sponsored University of California Irvine (UC-Irvine) School of Medicine program, who used the IMAQ to establish its criterion validity.24 Similarly, a questionnaire developed at the University of Minnesota Medical School asked four questions (previous training in, desired training in, personal experience with, and perceived effectiveness) about each of 26 different CAM modalities; this tool was later published on the school's Web site. Aspects of these early measures were used by program directors at the University of Michigan Medical School,25 the University of Texas Medical Branch School of Medicine,26 and the Oregon Health and Science University (OHSU) School of Medicine27 to develop even more refined instruments.
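
As context for how criterion validity of this kind is typically established, the sketch below shows the usual computation: correlating scores on the new instrument with scores on the established one. The scores are hypothetical placeholders, not the published CHBQ or IMAQ validation data.

from scipy.stats import pearsonr

# Hypothetical total scores for the same respondents on a new CAM attitude
# instrument and on an established criterion measure (e.g., the IMAQ).
new_instrument_scores = [42, 55, 38, 61, 47, 50, 58, 44]
criterion_scores = [110, 138, 102, 150, 121, 128, 142, 115]

# Criterion validity is commonly reported as the Pearson correlation
# between the two sets of scores.
r, p = pearsonr(new_instrument_scores, criterion_scores)
print(f"criterion validity (Pearson r) = {r:.2f}, p = {p:.4f}")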

In addition to surveys, a small number of grantees used qualitative approaches to assess attitudes, such as focus groups, interviews, student key informants, and reflective writing exercises. The University of California–San Francisco School of Medicine complemented its qualitative approaches with student “moles” who provided written reflections on CAM.

All grantees believed that the attitudes of key stakeholders were critical to attaining the educational goals outlined in their individual programs. In the case of students, assessment of attitudes focused primarily on students' views of CAM as content beneficial to their professional training. This was often couched in the context of respecting patients' initiatives toward self-care and offering sound clinical guidance on CAM modalities in an informed but nonjudgmental manner.

Several programs documented a significant shift in learner attitudes over time, whereas others, whose learners already held CAM-friendly attitudes at baseline, recorded only minor changes. In a few institutions, students or trainees tended to view CAM as more relevant to their professional training than did faculty.

No program noted significant declines in stakeholder attitudes toward CAM. However, on completing the undergraduate medical curriculum, students at the UKCOM expressed less interest in continuing to learn about CAM than at the start of their training. Similarly, OHSU investigators found evidence suggesting that their curriculum may be oversaturated with CAM content. Whether these patterns represented CAM “overload” or were merely an artifact of the amount of CAM-related evaluation relative to other content areas is not known.

CAM skills

The introduction of the USMLE Step 2 Clinical Skills exam in 2004, along with the parallel emphasis on lifelong learning and clinical competencies, has heightened the need to validly assess clinical skills in medical students. As such, educators have sought to identify appropriate core skills and competencies that lend themselves to reliable measurement and assessment.28

One concrete recommendation that emerged from an annual meeting of CAM education program directors was that the following two CAM-related questions be incorporated into health professions trainees' general history-taking activities: (1) Are you currently using any herbals, dietary supplements, or home remedies? and (2) Are you currently receiving care from any complementary or alternative providers? Because CAM use is often not routinely disclosed, these questions facilitate both disclosure of patients' CAM use and discussion with providers regarding treatment safety and efficacy. Both items are taught as part of students' basic clinical skills training and are assessed during their actual (real or simulated) clinical work.

Established as the “gold standard” for assessing clinical behaviors, the objective structured clinical exam (OSCE) uses simulated patients (SPs) trained to follow a scripted scenario in which discrete, predefined behaviors are objectively observed and recorded.29 Approximately half of the NCCAM grantees reported developing CAM-focused OSCEs to assess student skills. Within the clinical encounter, rating scales or checklist items were used to document students' (1) consideration of patients' CAM use as related to their presenting condition, (2) attitudes toward patients who use CAM modalities, and (3) advising and counseling of patients on specific CAM modalities. Depending on the program, medical students were assessed between the second and fourth years of training.
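
To make the checklist approach concrete, the following sketch illustrates how discrete, predefined behaviors might be recorded and summarized for a single station. The item wording, data structure, and scoring rule are hypothetical and are not taken from any grantee's actual OSCE materials.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    description: str
    observed: bool = False   # marked True if the rater saw the behavior

def station_score(items):
    """Return the proportion of predefined behaviors the rater observed."""
    return sum(item.observed for item in items) / len(items)

# Hypothetical CAM-focused checklist items for one simulated encounter.
items = [
    ChecklistItem("Asked about herbals, supplements, or home remedies", observed=True),
    ChecklistItem("Asked about care from complementary or alternative providers", observed=True),
    ChecklistItem("Related the patient's CAM use to the presenting condition", observed=False),
    ChecklistItem("Counseled the patient nonjudgmentally about the specific modality", observed=True),
]
print(f"station score: {station_score(items):.0%}")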

A number of OSCEs also incorporated some element of EBM related to CAM. For example, in one such OSCE used as part of UC-Irvine's30 clinical performance assessment, an SP presented with increasingly painful and debilitating osteoarthritis of the knee that had been unresponsive to analgesics, which were causing gastrointestinal side effects. On direct questioning, the SP disclosed use of an herbal supplement and asked for the student's recommendation concerning its continued use. In an alternative scenario, the SP asked about the use of acupuncture for the same condition. In both OSCE stations, the student was instructed to appraise printed information downloaded by the patient from a non-evidence-based Web resource. After the encounter, the student was asked to perform a 15-minute Internet-based search, locate a randomized clinical trial or systematic review to address the patient's questions, and use this information to appropriately counsel the patient.

The utility of this and similar cases lies in their adaptability to test students' EBM skills vis-à-vis numerous modalities and clinical conditions. Clinical skills for eliciting CAM use and for counseling about alternatives to pharmacotherapy are embedded within existing competencies such as EBM, professionalism, communication, and cultural competency skills.

Using a comparable simulated scenario, the UKCOM sought to examine students' responses to patients' inquiries about an herbal supplement to treat stress and anxiety.31 The SP presented with the generic chief complaint of stress. After a thorough history by the student, the SP “unexpectedly” produced one of two types of evidence regarding the potential effectiveness of a supplement: (1) information gleaned from two Internet Web sites, or (2) two PubMed abstracts. In addition to being expected to respond to the patient in a professional and nonjudgmental manner, students were asked to complete a postencounter exercise to rate the rigor of the evidence and the likely mechanism of action (real versus placebo).

Discussion

Challenges to evaluation

In the absence of a single, established set of approved CAM educational guidelines or competency standards, an array of curricula exists to provide health professions students with the necessary knowledge, attitudes, and skills to address CAM-related issues in their respective fields. Consequently, the approaches to evaluating curricular efforts were equally diverse and involved the development and refinement of assessment tools to measure a wide variety of attitudes, beliefs, motivations, knowledge bases, and skills.

The diverse nature of these educational initiatives resulted in no two NCCAM grantees using the exact same evaluation strategy, methods, or tools. This diversity of approaches limits the ability to compare programs and outcomes. At this point in time, it would seem that evaluating the influence of CAM education on student learning and learner outcomes is best accomplished on an institution-by-institution basis.

As previously mentioned, the relative novelty of CAM education limited the availability and subsequent use of established, psychometrically sound measures.

Some program directors also reported inadequate formal assessment and evaluation expertise within their institutions. Even when available, philosophical differences between CAM educators and evaluators occasionally hampered the design and efficient execution of the evaluation plan. In some cases, shifting institutional priorities resulted in loss of access to personnel responsible for data collection, data management, or statistical analysis. To adapt, many programs streamlined the scopes of their evaluations.

The NCCAM education programs used several different methods (e.g., course/faculty evaluations, focus groups, interviews) to evaluate the quality of individual courses and to assess learner knowledge and attitudes. Whether CAM-related attitudes and knowledge translate into actual clinical behaviors remains a question for most programs, because assessments of bona fide behavior change did not widely occur. Because few programs included postgraduate follow-ups as part of their evaluation plans, the long-term impact of the NCCAM-sponsored CAM education on trainees remains largely unexplored. One school, UC-Irvine, did track one cohort of medical students across three years and found that their positive attitudes toward CAM had been maintained.24

Overall, it proved to be relatively easy for programs to define the purpose and levels of assessment in most evaluation efforts. On the other hand, determining the focus of the assessment (i.e., what is to be assessed?) posed a greater challenge to evaluation efforts, as most grantees grappled with operationally defining outcomes of knowledge, skills, attitudes/values, and behaviors.

Issues related to organization and implementation seemed to be the most prominent factors limiting programs' evaluation efforts. These included difficulties with mobilizing support for and coordinating evaluation efforts. Problems associated with inadequate staffing, organizational tensions, and philosophical rifts also reportedly posed challenges.

Enhancing future assessment and evaluation efforts

One of the most beneficial outcomes of the NCCAM education initiative has been the development of new measures (e.g., validated survey instruments, OSCE stations) and the application of novel and diverse methodologies to assessing educational process and outcomes. Such accomplishments should help provide a foundation from which future CAM curricular evaluation efforts can be directed.

A weakness highlighted by our analysis, and one which is persistently problematic in much general32 and medical educational research,33,34 is the limited ability of research designs to infer causality. That is, whereas evaluations tended to focus on documenting educational outcomes, they often lacked the rigor to allow the direct attribution of observed changes to program efforts. Ceiling effects attributable to high pretest scores among nonrandomly assigned learner groups also may have limited investigators' abilities to demonstrate postprogram gains in knowledge or attitudes.31

Measurable declines in the prevalence of intervention (researcher-manipulated variables) research in general education18 have been attributed to a number of potential factors, including (1) changes in researchers' perceptions of rigorous methodological standards, (2) challenging practical constraints, and (3) the resources required to conduct scientifically credible educational intervention research.18 Indeed, directors from several programs alluded to difficulties designing and implementing evaluation and assessment activities because of (1) lack of adequate, available expertise, (2) overly broad or poorly written educational objectives, and (3) challenges in coordinating data collection within existing systems and structures. To improve evaluation efforts, we offer the following recommendations.

First, institutions devising CAM education programs should consider collaborating, for example by pooling common items across survey instruments and jointly determining the long-term effectiveness of their educational programs. Second, CAM learning objectives and proposed competencies need to be followed, as they can guide the development of standardized testing. Finally, evaluators of CAM curricula should seize the opportunity to reverse the unfortunate trend in educational research toward drawing causal conclusions from “nonmanipulated correlational studies.”32 Only then, with clearly specified outcomes, precise measurement, and rigorous research design, will evaluation efforts advance to the forefront of examining enhanced health and patient care outcomes.

Acknowledgments

The authors gratefully acknowledge educational funding from the National Center for Complementary and Alternative Medicine (NCCAM) – Projects #R25 AT000682 (TDS), #R25 AT000812 (RKB), #R25 AT000529 (DAL), #R25 AT000359 (JAZ), and #R25 AT001173 (ARN).

References

1 Eisenberg DM, Davis RB, Ettner SL, et al. Trends in alternative medicine use in the United States, 1990–1997: results of a follow-up national survey. JAMA. 1998;280:1569–1575.

2 Wootton JC, Sparber A. Surveys of complementary and alternative medicine: part I. General trends and demographic groups. J Altern Complement Med. 2001;7:195–208.

3 Astin JA. Why patients use alternative medicine: results of a national study. JAMA. 1998;279:1548–1553.

4 Palinkas LA, Kabongo ML; San Diego Unified Practice Research in Family Medicine Network. The use of complementary and alternative medicine by primary care patients. A SURF*NET study. J Fam Pract. 2000;49:1121–1130.

5 Rafferty AP, McGee HB, Miller CE, Reyes M. Prevalence of complementary and alternative medicine use: state-specific estimates from the 2001 Behavioral Risk Factor Surveillance System. Am J Public Health. 2002;92:1598–1600.

6 Frenkel M, Ben-Ayre E. The growing need to teach about complementary and alternative medicine: questions and challenges. Acad Med. 2001;76:251–254.

7 Brokaw JJ, Tunnicliff G, Raess BU, Saxon DW. The teaching of complementary and alternative medicine in U.S. medical schools: a survey of course directors. Acad Med. 2002;77:876–881.

8 Wetzel MS, Eisenberg DM, Kaptchuk TJ. Courses involving complementary and alternative medicine at U.S. medical schools. JAMA. 1998;280:784–787.

9 Murdoch-Eaton D, Crombie H. Complementary and alternative medicine in the undergraduate curriculum. Med Teach. 2002;24:100–102.

10 Chaterji R, Tractenberg RE, Amri H, et al. A large-sample survey of first- and second-year medical student attitudes toward complementary and alternative medicine in the curriculum and in practice. Altern Ther Health Med. 2007;13:30–35.

11 Wartman S, Davis A, Wilson M, Kahn N, Sherwood R, Norwalk A. Curricular change: recommendations from a national perspective. Acad Med. 2001;76 (4 suppl):S140–S145.

12 Cohen JJ. Professionalism in medical education, an American perspective: from evidence to accountability. Med Educ. 2006;40:607–617.

13 Gruppen LD, White C, Fitzgerald JT, Grum CM, Woolliscroft JO. Medical students' self-assessments and their allocations of learning time. Acad Med. 2000;75:374–379.

14 Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Boston, Mass: Houghton Mifflin; 1963.

15 Stufflebeam DL, Webster WJ. An analysis of alternative approaches to evaluation. Educ Eval Policy Anal. 1980;2:5–20.

16 Rossi PH, Freeman HE. Evaluation: A Systematic Approach. 4th ed. Newbury Park, Calif: Sage Publications; 1989.

17 Thompson B, Vacha-Haase T. Psychometrics is datametrics: the test is not reliable. Educ Psychol Meas. 2000;60:174–195.

18 Hsieh PH, Acee T, Chung WH, et al. Is educational intervention research on the decline? J Educ Psychol. 2005;97:523–529.

19 Pathman DE. Medical education and physicians' career choices: are we taking credit beyond our due? Acad Med. 1996;71:963–968.

20 Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med. 2003;15:270–292.

21 Fishbein M, Ajzen I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Reading, Mass: Addison-Wesley; 1975.

22 Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall; 1986.

23 Schneider CD, Meek PM, Bell IR. Development and validation of IMAQ: Integrative Medicine Attitude Questionnaire. BMC Med Educ. 2003;3:5.

24 Lie D, Boker J. Development and validation of the CAM Health Belief Questionnaire (CHBQ) and CAM use and attitude among medical students. BMC Med Educ. 2004;4:2.

25 Gruppen L, Benn R, White C, Fantone J, Smith K, Warber S. Psychometric characteristics of an attitude instrument for complementary and alternative medicine. Paper presented at: Research in Medical Education (RIME) conference; November 2003; Washington, DC.

26 Frye AW, Sierpina VS, Boisaubin EV, Bulik RJ. Measuring what medical students think about complementary and alternative medicine (CAM): a pilot study of the complementary and alternative medicine survey. Adv Health Sci Educ Theory Pract. 2006;11:19–32.

27 Nedrow AR, Istvan J, Haas M, et al. Implications for education in complementary and alternative medicine: a survey of entry attitudes in students at five health professional schools. J Altern Complement Med. 2007;13:381–386.

28 Kligler B, Maizes V, Schachter S, et al. Core competencies in integrative medicine for medical school curricula: a proposal. Acad Med. 2004;79:521–531.

29 Sloan DA, Donnelly MB, Schwartz RW, Strodel WE. The Objective Structured Clinical Examination. The new gold standard for evaluating postgraduate clinical performance. Ann Surg. 1995;222:735–742.

30 Lie DA, Boker J. Comparative survey of complementary and alternative medicine (CAM) attitudes, use, and information-seeking behavior among medical students, residents, and faculty. BMC Med Educ. 2006;6:58.

31 Stratton TD, McGivern JL, Dassow PL, Elder WG Jr. The perceived efficacy and evidence-base of a complementary/alternative treatment modality in an objective structured clinical exam (OSCE). Paper presented at: Association of American Medical Colleges (AAMC) Research in Medical Education (RIME) conference; October–November 2006; Seattle, Wash.

32 Robinson DH, Levin JR, Thomas GD, Pituch KA, Vaughn S. The incidence of “causal” statements in teaching-and-learning research journals. Am Educ Res J. 2007;44:400–413.

33 Lynch DC, Whitley TW, Willis SE. A rationale for using synthetic designs in medical education research. Adv Health Sci Educ Theory Pract. 2000;5:93–103.

34 Newman M. Fitness for purpose evaluation in problem based learning should consider the requirements for establishing descriptive causation. Adv Health Sci Educ Theory Pract. 2006;11:391–402.

© 2007 Association of American Medical Colleges
