Academic Medicine: July 2000 - Volume 75 - Issue 7
I. The Context for Prevention Education

Evaluation Methods for Prevention Education

Blue, Amy V. PhD; Barnette, J. Jackson PhD; Ferguson, Kristi J. PhD; Garr, David R. MD

Author Information

Dr. Blue is assistant dean for curriculum and evaluation, Medical University of South Carolina (MUSC) College of Medicine, Charleston, South Carolina. Dr. Barnette is associate dean for education and student affairs and Dr. Ferguson is associate professor, Department of Community and Behavioral Health, both at the University of Iowa College of Public Health, Iowa City, Iowa. Dr. Garr is professor of family medicine, associate dean for primary care, Department of Family Medicine, MUSC, Charleston, South Carolina.

Correspondence and requests for reprints should be addressed to Dr. Blue, Medical University of South Carolina College of Medicine, 96 Jonathan Lucas Street, Suite 601, P.O. Box 250617, Charleston, SC 29425.

Abstract

The knowledge, skills, and attitudes associated with prevention cut across clinical disciplines. Thus, they are often subsets of disciplines not otherwise present in the traditional curriculum (e.g., epidemiology or statistics) or are considered the province of many disciplines (e.g., risk reduction or cancer screening). Evaluation of prevention education can easily become lost amid the myriad other outcomes assessed in students, or its elements are intermingled with other content and skills. This article highlights the value of assessing students' competence in prevention knowledge, skills, and attitudes; provides general guidance for programs interested in evaluating their prevention instructional efforts; and gives specific examples of possible methods for evaluating prevention education.

While it is important to tailor assessment methods to local institutional objectives, it is possible to share assessment methods and materials regionally and nationally. Sharing problems, as well as successes, encountered in developing appropriate assessment methods will advance the field of evaluation of prevention curricula.

“Evaluation drives learning” is a statement frequently heard in curriculum committee meetings, course directors' meetings, departmental hallway conversations, and other venues. This comment is usually made in the context of a discussion about how best to ensure that students acquire the requisite knowledge, skills, and attitudes articulated in the objectives of a specific course or overall medical school curriculum. The interdependent nature of teaching, learning, and assessment supports this statement. Although evaluation is often neglected as an integral element of instruction, the final step in the instructional process for teachers and learners is to determine the extent to which learning objectives have been mastered.1,2

As described in this supplement, prevention education in the medical school curriculum is a complicated undertaking. While the knowledge, skills, and attitudes related to prevention could be packaged under a single course rubric, most information relating to prevention is integrated into existing courses in the medical school curriculum. The knowledge, skills, and attitudes associated with prevention cut across clinical disciplinary lines and thus are frequently subsets of disciplines not otherwise present in the traditional medical school curriculum (e.g., epidemiology or statistics) or are frequently considered the province of multiple disciplines (e.g., risk reduction or cancer screening). Evaluation of prevention education outcomes can often become lost in the myriad other outcomes that are assessed in students, or the elements of prevention education are intermingled with other content and skills. The purposes of this paper are to (1) highlight the value of assessing student competence in prevention knowledge, skills, and attitudes; (2) provide general guidance for programs interested in evaluating their prevention instructional efforts; and (3) provide specific examples of possible methods of evaluating prevention education.


REVIEW OF CURRENT EFFORTS TO EVALUATE STUDENT COMPETENCE IN PREVENTION

Results of the Prevention Curriculum Assistance Program (PCAP) survey3 indicate that, for the majority of prevention areas, the most frequently used method of measuring student competence was the written test. Unstructured observation was the second most frequently used method cited in responses to the self-assessment analysis employed as part of the PCAP. Between 30% and 50% of respondents indicated interest in receiving assistance to improve their schools' methods for evaluating prevention curricula.

A review of the literature indicates that a variety of methods are used to evaluate students' competencies in clinical skills and knowledge following their participation in topic-specific prevention curricula. Written examinations have been used to assess students' knowledge,4 including pre- and post-tests,5 case-based, modified essay examinations,6 and the tailored response test (TRT).7 Surveys, both pre- and post-administration, have been used to assess changes in students' knowledge, attitudes, beliefs about role responsibility, and confidence in clinical skills.4,8–10 Standardized patients have been used to evaluate students' clinical skills.5,10 Peters et al.4 reported observing and coding videotaped interviews with patients to assess clinical behavior. Some authors reported the use of multiple methods to evaluate students' competency, including written examinations and standardized patient interactions during objective structured clinical examinations (OSCEs),5 self-ratings and OSCEs,10 and knowledge tests, attitude surveys, clinical practice surveys, and videotaped patient interviews.4 Results were also used to evaluate the instructional effectiveness of the particular prevention curricular program.

In summary, written examinations and surveys have been the predominant methods of evaluating students' knowledge and skills related to prevention. This is not surprising, given that written examinations are a common method of assessing students' knowledge in medical school. Clinical performance examinations with standardized patients appeared to be used less frequently. This could be in part because standardized patients are not appropriate for assessment of particular prevention knowledge and skill areas. Their less frequent use may also have reflected the resources needed for this type of assessment.


GENERAL PRINCIPLES OF ASSESSMENT

Bloom and colleagues classified educational objectives in three domains: cognitive, affective, and psychomotor. The cognitive domain comprises six levels of cognitive ability: knowledge, comprehension, application, analysis, synthesis, and evaluation.11 These levels build on one another, starting at the knowledge (recall) level. The affective domain deals primarily with attitudes, interests, values, and belief systems.12 The psychomotor domain deals primarily with reflexes, basic physical movements, perceptual and physical abilities, skilled movements, and non-discursive communication.13 As learners progress in their understanding of material, instruction, and therefore the evaluation of instruction, should progress as well. Many assessment methods can provide data that span more than one domain, leading to efficiency of measurement and the ability to examine relationships among the competencies.

General principles of student assessment1 can aid a faculty member in selecting methods most appropriate for the learning context. These principles are:

1. The assessment process must clearly specify what is to be assessed in the learner.

2. Assessment methods should be selected because of their relevance to the learning goals or aspects of performance to be measured.

3. Comprehensive assessment should use a variety of methods.

4. Selection of assessment methods should be based on knowledge of their limitations, such as those related to reliability, validity, efficiency, and propriety.

5. Assessment should serve a useful purpose and not be an end in itself.

6. Specific remediation strategies should be developed a priori for those areas in which students are assessed as deficient.

Other issues, such as associated costs, time, personnel, physical space, equipment, other resources, and technical expertise, must also be considered during the development and application of any particular assessment method. A lack of resources in any of these areas can be a barrier to effective assessment. Individual curricular programs must weigh the costs and benefits of one evaluation method against another. Nonetheless, as Rowntree14 has pointed out, the ultimate criterion for any assessment method is its educational relevance: Does the assessment method match the content and style of the teaching and learning experience?


METHODS TO EVALUATE STUDENT COMPETENCE IN PREVENTION CURRICULAR AREAS

We have identified a variety of assessment methods for discussion as possible ways to evaluate student competencies in the prevention curricular areas of clinical prevention, quantitative methods, community dimensions of medical practice, and health services organization and delivery. They are categorized in Table 1, relative to the prevention curricular areas and competency domains. Furthermore, the assessment methods have also been classified relative to their potential function in providing primary and secondary assessment information in the three competency domains established by Bloom and others, i.e., cognitive, affective, and psychomotor. Each method is described briefly, and strengths and weaknesses are cited. (For further details about many of these methods, see Hopkins et al.,15 Mehrens and Lehmann,16 Nitko,17 Popham,18 Streiner and Norman,19 and Sudman and Bradburn.20) We also suggest specific examples to evaluate student competencies in prevention curricular areas. These suggestions are intended to stimulate and broaden the reader's own thinking about assessment in prevention education and are not an exhaustive list of evaluation examples.

Table 1. Assessment methods categorized by prevention curricular area and competency domain.

Written Examinations

Written examinations have been the backbone of assessment for many decades and are often used to evaluate students' knowledge of subjects. They are easily administered and can have excellent construct validity and reliability. Written examinations may be classified as fixed-response or open-response methods. Fixed-response methods include tests using multiple-choice, true-false, matching, or fill-in-the-blank items. Testing may be done using paper-and-pencil or computer-based methods. Written examinations using fixed-response formats provide the opportunity to assess a range of cognitive abilities in a short time span but have limited utility in assessing affective or psychomotor competencies. They are easily scored, and such scoring can include measures of internal consistency. Fixed-response formats are often criticized as useful only for assessing lower-level cognitive competencies (e.g., recall of facts). The key to effective use of fixed-response written examinations is to develop high-quality questions21 that are based on educational objectives and measure higher-order cognitive skills.
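
As a loose illustration of how fixed-response scoring can go beyond a raw total, the following minimal Python sketch computes two classic item-analysis statistics for a small hypothetical response matrix: item difficulty (the proportion answering correctly) and a simple discrimination index comparing upper and lower scorers. The data and the split-half approach to discrimination are illustrative assumptions, not a method prescribed in this article.

```python
# A minimal sketch of classic item analysis for a fixed-response exam.
# All data are hypothetical; 1 = correct answer, one column per item.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]

def item_analysis(matrix):
    """Return (difficulty, discrimination) for each item."""
    totals = [sum(row) for row in matrix]
    order = sorted(range(len(matrix)), key=lambda i: totals[i])
    half = len(matrix) // 2
    low, high = order[:half], order[-half:]   # bottom and top scorers
    stats = []
    for item in range(len(matrix[0])):
        # Difficulty: proportion of all students answering the item correctly.
        difficulty = sum(row[item] for row in matrix) / len(matrix)
        # Discrimination: how much better the top group did than the bottom group.
        discrimination = (sum(matrix[i][item] for i in high)
                          - sum(matrix[i][item] for i in low)) / half
        stats.append((difficulty, discrimination))
    return stats

for n, (p, d) in enumerate(item_analysis(responses), start=1):
    print(f"item {n}: difficulty={p:.2f}, discrimination={d:+.2f}")
```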

Open-response written examinations, usually referred to as essay exams, involve answers that are more than simple fill-ins and require integration of material.22 Great care should be taken with wording and instructions; the instructor must predetermine the characteristics of the desired answer and do everything possible to reduce scoring subjectivity. One advantage of open-response formats over fixed-response formats is the potential for assessing higher-level cognitive skills (e.g., they can require students to apply or synthesize material from a variety of sources). Open-response items can also be used to assess affective competencies.

Evaluation suggestions. Written examinations can evaluate students' knowledge of many areas of prevention education.

Fixed-response formats could be used to evaluate students' knowledge of:

* recommendations for clinical prevention, such as screening tests, immunizations, and chemoprophylaxis, commonly used in primary care;

* the interpretation of quantitative measures that describe the burden of disease in a population (e.g., incidence, prevalence);

* the methods used to assess the quality of health care (e.g., HEDIS, patient satisfaction surveys);

* commonly used measures of association (e.g., relative and attributable risk or the odds ratio; see the sketch after this list); and

* basic principles and value of economic analysis (e.g., cost-effectiveness analysis and cost-benefit analysis).
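
To make the measures-of-association item concrete, here is a minimal Python sketch, using hypothetical cohort counts, of the calculations such a question would expect students to perform; the function and data are illustrative assumptions, not materials from the PCAP.

```python
# Measures of association from a 2x2 table (hypothetical cohort data).

def measures_of_association(exposed_cases, exposed_total,
                            unexposed_cases, unexposed_total):
    """Relative risk, attributable risk, and odds ratio from 2x2 counts."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total

    relative_risk = risk_exposed / risk_unexposed        # RR = Ie / Iu
    attributable_risk = risk_exposed - risk_unexposed    # AR = Ie - Iu (risk difference)

    # Odds ratio from the four cells of the 2x2 table.
    a = exposed_cases
    b = exposed_total - exposed_cases
    c = unexposed_cases
    d = unexposed_total - unexposed_cases
    odds_ratio = (a * d) / (b * c)

    return relative_risk, attributable_risk, odds_ratio

# Hypothetical example: 30/200 exposed and 10/200 unexposed develop disease.
rr, ar, odds = measures_of_association(30, 200, 10, 200)
print(f"RR = {rr:.2f}, AR = {ar:.3f}, OR = {odds:.2f}")
# RR = 3.00, AR = 0.100, OR = 3.35
```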

Open-response formats could:

* present data and ask students to interpret and apply basic concepts and tools of statistical analysis (e.g., measures of central tendency and type I and II errors; see the sketch after this list);

* present a community health scenario and ask the student to describe the steps to follow (identify the target population, identify the population's health needs, etc.) to implement appropriate community-responsive, population-based health care; or

* ask students to describe the clinical, ethical, and legal issues associated with case-finding and screening programs.
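
As a small worked example of the data-interpretation task in the first item above, the sketch below computes measures of central tendency and spread for an invented set of blood pressure readings using only the Python standard library.

```python
import statistics

# Hypothetical systolic blood pressure readings (mm Hg) that a question
# stem might present for interpretation.
readings = [118, 122, 122, 128, 131, 135, 142, 151, 158, 190]

mean = statistics.mean(readings)      # pulled upward by the outlier (190)
median = statistics.median(readings)  # robust to the outlier
mode = statistics.mode(readings)      # most frequent value (122)
sd = statistics.stdev(readings)       # sample standard deviation

print(f"mean={mean:.1f}, median={median}, mode={mode}, sd={sd:.1f}")

# For the type I/II error portion of such an item, students would state that
# a type I error rejects a true null hypothesis (a false positive), while a
# type II error fails to reject a false null hypothesis (a false negative).
```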

Oral Presentations

Students may be asked to make formal oral presentations about particular subjects in class. This usually provides an opportunity for assessing cognitive and psychomotor competencies and, to a lesser extent, affective competencies. Assessment of formal oral presentations requires the establishment of assessment criteria and grading protocols. Students should be informed about these criteria and grading standards. Informal oral presentations include contributions to small-group discussions, such as asking questions or offering opinions. Informal oral presentations are more difficult to assess. Proper assessment requires keeping valid and reliable records of students' participation in small-group discussions, a time-consuming and somewhat subjective process for faculty members.

Evaluation suggestions. Oral presentations can assess students' knowledge, understanding, ability to apply information, and, to a lesser extent, affective competencies, in many prevention education areas.

* Students could present critiques of articles that require application of their (1) knowledge of quantitative methods, such as the appropriateness and correctness of study design, methods of data collection, sources of bias and confounding, and correct interpretation of results; and (2) knowledge of basic elements of study designs commonly found in the medical and public health literature.

* Presenting public policy reports, students can describe the structure and function of a local public health system, how it provides services to the population, and its relationships to other health care organizations.

* Making a presentation, students can use existing sources of data to discuss a major cause of morbidity and mortality and the risk factors for that cause (e.g., cardiovascular disease or diabetes).

* Presenting a position paper about current issues affecting a group of health care professionals (e.g., physicians, nurse practitioners, or physicians' assistants), students can describe and outline the regulation and governance of that health care professional group.

* While presenting a patient, the student can focus on prevention issues, such as identifying clinical preventive services appropriate for the patient and those aspects of the patient's problems that might have been preventable.

* While presenting a patient and reporting the diagnostic and screening tests performed for the patient, the student can apply his or her knowledge of sensitivity, specificity, positive predictive value, validity, and reliability to the specific tests (illustrated in the sketch below).
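
A minimal Python sketch of those test characteristics, with hypothetical counts: it computes sensitivity, specificity, and positive predictive value from a 2x2 table, then uses Bayes' theorem to show how the same test's PPV falls when the disease is rarer, a point a presenting student should be able to explain.

```python
# Screening test characteristics from hypothetical counts.

def test_characteristics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # P(test positive | disease)
    specificity = tn / (tn + fp)   # P(test negative | no disease)
    ppv = tp / (tp + fp)           # P(disease | test positive)
    return sensitivity, specificity, ppv

def ppv_from_prevalence(sens, spec, prevalence):
    """Bayes' theorem: PPV depends on prevalence, not just on the test."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec, ppv = test_characteristics(tp=90, fp=50, fn=10, tn=850)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")

# The same test applied where the disease is much rarer yields a lower PPV.
print(f"PPV at 1% prevalence: {ppv_from_prevalence(sens, spec, 0.01):.2f}")
```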

Questionnaires/Surveys

Questionnaires or surveys can be used to assess students' attitudes and interests, their evaluations of courses or instruction, and their beliefs about the importance of a particular topic. In addition, questionnaires or surveys may be used to identify gaps in cognitive knowledge or educational needs that students have identified. They can therefore be used for program evaluation as well as student assessment. Likert scales, semantic differentials, checklists, and rating forms are common questionnaire and survey formats. These instruments have most often been delivered in print or administered orally (in person or by telephone), but more recently surveys have been delivered via e-mail or the Internet.

Essential to successful questionnaire or survey research is designing items that cover the subject of interest, are unambiguous to respondents, and minimize problems such as acquiescence, primacy effects, and response bias. Developing a scale within a questionnaire or survey requires significant expertise in psychometric scaling techniques (e.g., factor analysis or multidimensional scaling).
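
One psychometric check that is more accessible than the scaling techniques just mentioned is internal consistency. The sketch below computes Cronbach's alpha for a small invented set of Likert-scale responses; the four-item scale and the data are assumptions for illustration.

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list per respondent, one score per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(item_scores[0])
    items = list(zip(*item_scores))                 # transpose to per-item columns
    item_vars = sum(statistics.variance(col) for col in items)
    total_var = statistics.variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses of five students to a four-item Likert scale (1-5).
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")   # alpha = 0.96
```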

Evaluation suggestions. Questionnaires or surveys can be used to assess students' perceptions regarding prevention issues and practices. A survey or questionnaire could assess students'

* knowledge of and attitudes toward physicians' responsibilities to public agencies (e.g., reporting child abuse to legal authorities, reporting adverse drug events to the Food and Drug Administration);

* intentions to continue using, in their future practice, the communication and psychomotor skills they have acquired for providing clinical preventive services;

* attitudes about working with other health professionals in the delivery of preventive services; and

* confidence in their abilities to apply the components of a community-responsive population-based health intervention.

Student-developed Products

Courses and educational experiences can require students to develop products such as papers, article critiques, concept maps,23,24 homework assignments, and theses/dissertations. Other products may be less focused, such as logs or anecdotal records of activities and the development of case studies. The portfolio is another type of student-developed product: an organized collection of products that exemplify the knowledge, attitudes, and skills the student has attained in the program; it may also include a statement of goals and philosophy. With the present ease of entering all types of documents into a computer database, electronic portfolios will likely be used more frequently. While the primary function of evaluating student products is the assessment of cognitive competencies, such evaluation can also provide information for affective and psychomotor assessment. Detailed, clearly communicated expectations for student-developed products, including criteria for grading, are important to this assessment approach. For portfolios, well-established instructions and criteria must be developed.

Evaluation suggestions. Student-developed products can be used to assess students' knowledge, skills, and attitudes related to prevention education through a variety of methods. These include

* concept maps, which can be used to assess the degree to which students understand the principles underlying prevention-related issues and concepts. For example, to describe methods of health care financing in the United States, students would be required to identify the methods of financing preventive, curative, and rehabilitative services and then explore how health care financing influences access to, utilization of, and outcomes of health care services. To identify immunization recommendations, students would be required to go beyond current recommendations and demonstrate an understanding of such principles as immunity, cost-benefit analysis, and the balancing of individual versus public health considerations. To describe the development and implementation of public policy on disease prevention and health promotion, students would be required to identify the governmental sources of policy development; the social, political, and economic forces influencing policy development and implementation; barriers to development and implementation; and the activities associated with implementation.

* written reflections about a community-based experience, with a focus on how characteristics of the individuals and populations served in the experience (e.g., language, religious beliefs, income, culture, race) may affect the occurrence of disease in the population and the provision of health services to the population.

* written reports that detail the activities associated with a community-responsive population-health intervention (e.g., identifying the target population, identifying its health needs).

* a written case report that focuses on one health problem (e.g., heart disease, accidents, diabetes) and requires the student to delineate the social, economic, and political forces influencing the problem and associated health care services in the United States.

* a written report that describes the influences of access, utilization, and quality of health services on a particular health indicator (e.g., birth outcomes).

* a written report that asks students to examine a specific global health problem and issues associated with it, such as population control or risk of the spread of contagious disease.

* a case study or project that requires identification of the potential adverse health outcomes for defined populations at risk in the community (disabled, nursing home residents, agricultural workers) and appropriate clinical preventive services that address these outcomes.

* a student project that requires design of a system that facilitates the inclusion of prevention and health promotion services in a clinical practice, such as a patient-reminder system for routine screening examinations (a minimal sketch of such a reminder check follows this list).

* portfolios, which may be used to document a student's longitudinal exposure to prevention curriculum topics and activities throughout the four years of medical school. A portfolio could include a checklist indicating the types of preventive services the student has delivered and how often, along with samples of the ways the student has incorporated prevention into patient care, including patient-education materials the student has created.
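
As a loose illustration of the patient-reminder project mentioned above, the Python sketch below flags patients who are overdue for routine screening. The intervals, patient records, and the rule that a missing record counts as due are hypothetical simplifications; actual recommendations vary by age, sex, and risk factors.

```python
from datetime import date, timedelta

# Hypothetical screening intervals (illustrative only).
SCREENING_INTERVALS = {
    "mammogram": timedelta(days=365 * 2),
    "colonoscopy": timedelta(days=365 * 10),
    "lipid panel": timedelta(days=365 * 5),
}

# Hypothetical records: last date each test was performed, if ever.
patients = [
    {"name": "Patient A", "mammogram": date(1997, 5, 1), "lipid panel": date(1999, 11, 3)},
    {"name": "Patient B", "colonoscopy": date(1988, 2, 14)},
]

def due_reminders(patient, today):
    """Return the screening tests for which the patient is overdue.
    A test with no recorded date is treated as due."""
    due = []
    for test, interval in SCREENING_INTERVALS.items():
        last_done = patient.get(test)
        if last_done is None or today - last_done > interval:
            due.append(test)
    return due

for p in patients:
    print(p["name"], "->", due_reminders(p, date(2000, 7, 1)))
```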

Simulations

Paper-based simulations can also be used to assess student performance, although with the expansion of technology these have largely evolved into computer-based simulations. Computer-based simulations can elicit students' responses through free-text entry or a menu of options. As with other methods in which subjectivity enters the assessment process, simulations must be scored against well-developed and clearly communicated criteria. Computer-based simulations can be more realistic and complex than paper-based versions. Although they are labor-intensive and costly to develop, they hold high potential for efficient assessment of knowledge and skills in a low-risk environment.
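
A minimal sketch of the menu-driven style of simulation described above: each node in a hypothetical case presents a prompt and scored options, and the student's path is logged so progress can be reviewed later. The case content and scoring are invented and do not represent any existing simulation package.

```python
# A hypothetical branching clinical case: each node has a prompt and
# scored options mapping to (answer text, next node, points).
CASE = {
    "start": {
        "prompt": "A 55-year-old smoker presents for a routine visit. First step?",
        "options": {
            "1": ("Take a focused history, including smoking history", "counsel", 2),
            "2": ("Order a chest x-ray immediately", "counsel", 0),
        },
    },
    "counsel": {
        "prompt": "The history confirms 30 pack-years. Next step?",
        "options": {
            "1": ("Offer smoking-cessation counseling and arrange follow-up", None, 2),
            "2": ("Reassure the patient and schedule a visit in five years", None, 0),
        },
    },
}

def run_case(choices):
    """Replay a sequence of recorded student choices; return the log and score."""
    node, log, score = "start", [], 0
    for choice in choices:
        text, next_node, points = CASE[node]["options"][choice]
        log.append((CASE[node]["prompt"], text))
        score += points
        if next_node is None:
            break
        node = next_node
    return log, score

log, score = run_case(["1", "1"])   # one student's recorded path through the case
print(f"score: {score} of 4")
for prompt, answer in log:
    print(f"- {prompt} -> {answer}")
```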

Evaluation suggestions. Simulations can be used to assess students' knowledge and skills related to prevention. Two examples are

* a computer-based clinical case simulation that requires students to take an appropriate patient history, particularly with respect to prevention issues. Students can then be asked what further tests they would recommend to the patient and to describe any additional recommendations (such as screening tests or immunizations). Students' progress through the simulation can be documented and evaluated.

* a computer simulation that asks students to investigate an epidemic and apply concepts of the epidemic occurrence of disease.

Observation

Direct observation of students' clinical skills is the primary way in which achievement of psychomotor or skill-based objectives can be evaluated. A faculty member or resident may conduct direct observation of the student-patient encounter. The advantage of “real-time” observation is the ability to assess how well students are able to apply prevention principles to the care of patients and to provide immediate feedback to students about their abilities. However, given the many demands on faculty members' time, such direct observation can be difficult. Pituch et al.25 describe the brief structured observation (BSO) as a method to help clinical teachers provide efficient assessment and feedback to learners in busy clinical settings.

Examinations with standardized patients and objective structured clinical examinations (OSCEs) are standard methods of assessing students' clinical skills.26,27 The OSCE was developed to provide a standardized context in which students' clinical skills are observed and documented. Faculty members or trained standardized patients can document students' behaviors during the standardized-patient encounter. Using standardized patients in clinical performance examinations requires developing the patient cases and accompanying checklists, training the patients, recruiting faculty evaluators if faculty are to document students' behavior, and securing facilities in which to administer the examination.

Evaluation suggestions. Observation, particularly standardized-patient and OSCE forms, can assess students' requisite skills and knowledge needed to deliver effective clinical preventive services.

* Community preceptors or faculty can directly observe and evaluate students' performances of screening tests and skills necessary to take universal precautions.

* OSCE stations can include tasks that explicitly assess the practice of clinical prevention skills, such as conducting prevention counseling (e.g., smoking cessation, diet modification), counseling about indications and possible side effects of immunizations, and prescribing chemoprophylaxis (e.g., counseling for hormone-replacement therapy, potential side effects with aspirin prophylaxis).


CONCLUSION

Many schools are searching for appropriate methods to assess students' knowledge, skills, and attitudes related to achievement of prevention curricular objectives. While it is important to tailor assessment methods to local institutional objectives, it is possible to share assessment methods and materials regionally and nationally. For example, the Southern California Macy Consortium has made standardized-patient cases available for other institutions to use, and prevention curriculum programs could follow a similar model to share products. Several national initiatives should aid this exchange of resources. The AAMC's curriculum database (CurrMIT)28 will be one source for documenting existing assessment methods as well as existing curriculum materials. The AAMC's Medical School Objectives Project29 is likely to provide additional resources, as is the UME-21 Project30 funded by the Health Resources and Services Administration. Sharing the problems, as well as the successes, encountered in developing appropriate assessment methods will advance the field of evaluation of prevention curricula.


References

1. Linn RL, Gronlund NE. Measurement and Assessment in Teaching. Upper Saddle River, NJ: Prentice-Hall, 1995.

2. Gronlund NE. Constructing Achievement Tests. 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1982.

3. Garr DR, Lackland D, Wilson D. Prevention education and evaluation in U.S. medical schools: a status report. Acad Med. 2000;75(7 suppl):S14–S21.

4. Peters AS, Schimpfhauser FT, Cheng J, Daly DL, Kostyniak PJ. Effect of a course in cancer prevention on students' attitudes and clinical behavior. J Med Educ. 1987;62:592–600.

5. Heard JK, Cantrell M, Presher L, Klimberg VS, San Pedro GA, Erwin DO. Using standardized patients to teach breast evaluation to sophomore medical students. J Cancer Educ. 1995;10:191–4.

6. Nieman LZ, Joseph R. Defining and accomplishing clinically related objectives in an eight-hour oncology course for first-year medical students. J Cancer Educ. 1992;7:227–31.

7. Allen SS, Harris IB, Kofron PM, et al. A comparison of knowledge of medical students and practicing primary care physicians about cardiovascular risk assessment and intervention. Prev Med. 1992;21:436–48.

8. Gopalan R, Santora P, Stokes EJ, Moore RD, Levine DM. Evaluation of a model curriculum on substance abuse at The Johns Hopkins University School of Medicine. Acad Med. 1992;67:260–6.

9. Dismuke SE, McClary AM. Evaluation of an educational program in preventive cardiology. Am J Prev Med. 1990;6:99–105.

10. Allen SS, Bland CJ, Dawson SJ. A mini-workshop to train medical students to use a patient-centered approach to smoking cessation. Am J Prev Med. 1990;6:28–33.

11. Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. New York: David McKay, 1956.

12. Krathwohl DR, Bloom BS, Masia BB. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook II: Affective Domain. New York: David McKay, 1964.

13. Harrow AJ. A Taxonomy of the Psychomotor Domain: A Guide for Developing Behavioral Objectives. New York: Longman, 1972.

14. Rowntree D. Assessing Students: How Shall We Know Them? East Brunswick, NJ: Nichols, 1987.

15. Hopkins KD, Stanley JC, Hopkins BR. Educational and Psychological Measurement and Evaluation. 7th ed. Englewood Cliffs, NJ: Prentice-Hall, 1990.

16. Mehrens WA, Lehmann IJ. Measurement and Evaluation in Education and Psychology. New York: Holt, Rinehart, and Winston, 1973.

17. Nitko AJ. Educational Tests and Measurement: An Introduction. New York: Harcourt Brace Jovanovich, 1983.

18. Popham WJ. Modern Educational Measurement: A Practitioner's Perspective. 2nd ed. Boston, MA: Allyn and Bacon, 1990.

19. Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to Their Development and Use. 2nd ed. Oxford, England: Oxford University Press, 1995.

20. Sudman S, Bradburn NM. Asking Questions: A Practical Guide to Questionnaire Design. Washington, DC: Jossey-Bass, 1983.

21. Case S, Swanson D. Constructing Written Test Questions for the Basic and Clinical Sciences. Philadelphia, PA: National Board of Medical Examiners, 1997. (Available online at <http://www.nbme.org>.)

22. Palmer D, Rideout E. Educating Future Physicians of Ontario (EFPO) Project. Evaluation Methods. A Resource Handbook. Hamilton, Ontario, Canada: McMaster University Program for Education Development, 1995.

23. Novak JD. Concept maps and vee diagrams: two metacognitive tools to facilitate meaningful learning. Instructional Science. 1990;19:29–52.

24. Edmondson KM. Concept maps and the development of cases for problem-based learning. Acad Med. 1994;69:108–10.

25. Pituch K, Harris M, Bodgewic S. The brief structured observation—a tool for focused feedback. Acad Med. 1999;74:599.

26. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975;1:447–51.

27. Stillman PL. Technical issues: logistics. Acad Med. 1993;68:464–70.

28. Association of American Medical Colleges Curriculum Database—“CurrMIT.” <http://www.aamc.org/meded/curric/start.htm>.

29. Association of American Medical Colleges. Learning objectives for medical student education—guidelines for medical schools: report I of the Medical Schools Objectives Project. Acad Med. 1999;74:13–8.

30. Health Resources and Services Administration. UME-21 Project. <http://www.aacom.org/UME-21/>.


© 2000 Association of American Medical Colleges
