Academic Medicine: May 2009 - Volume 84 - Issue 5
doi: 10.1097/ACM.0b013e31819fbaa2
Professionalism

A Blueprint to Assess Professionalism: Results of a Systematic Review

Wilkinson, Tim J. MB, ChB, M Clin Ed, PhD, FRACP; Wade, Winnie B. MA (Curriculum Studies), MA (Education); Knock, L Doug MSc


Author Information

Professor Wilkinson is associate dean (medical education), University of Otago, Christchurch, New Zealand.

Ms. Wade is director of education, Royal College of Physicians, London, United Kingdom.

Mr. Knock is deputy librarian, Queen Elizabeth Hospital, London, United Kingdom. When this report was written, he was medical education information specialist, Royal College of Physicians, London, United Kingdom.

Correspondence should be addressed to Professor Wilkinson, University of Otago, Christchurch, C/- The Princess Margaret Hospital, PO Box 800, Christchurch, New Zealand; telephone: 64-3-3377899; fax: 64-3-3377975; e-mail: tim.wilkinson@otago.ac.nz.


Abstract

Purpose: Assessing professionalism is hampered by varying definitions and by those definitions' failure to break the elements of professionalism down into measurable aspects. Professionalism is multidimensional, so a combination of assessment tools is required. In this study, conducted during 2007–2008, the authors aimed to match assessment tools to definable elements of professionalism and to identify gaps where professionalism elements are not well addressed by existing assessment tools.

Method: The authors conducted literature reviews of definitions of professionalism and of relevant assessment tools, clustered the definitions of professionalism into assessable components, and clustered assessment tools of a similar nature. They then created a “blueprint” whereby the elements of professionalism are matched to relevant assessment tools.

Results: Five clusters of professionalism were formed: adherence to ethical practice principles, effective interactions with patients and with people who are important to those patients, effective interactions with people working within the health system, reliability, and commitment to autonomous maintenance/improvement of competence in oneself, others, and systems. Nine clusters of assessment tools were identified: observed clinical encounters, collated views of coworkers, records of incidents of unprofessionalism, critical incident reports, simulations, paper-based tests, patients' opinions, global views of supervisor, and self-administered rating scales.

Conclusions: Professionalism can be assessed using a combination of observed clinical encounters, multisource feedback, patients' opinions, paper-based tests or simulations, measures of research and/or teaching activities, and scrutiny of self-assessments compared with assessments by others. Attributes that require more development in their measurement are reflectiveness, advocacy, lifelong learning, dealing with uncertainty, balancing availability to others with care for oneself, and seeking and responding to results of an audit.

We see professionalism as central to the practice of medicine, yet the difficulty of its assessment is nearly as great as the value we place on it. Progress in assessing knowledge and skills has seen a move to authentic assessments that better match the expectations of doing the job. This progress has further highlighted the need to strengthen our assessment of professionalism. Yet, professionalism as a concept can be difficult to pin down.1 There is universal acceptance that it is important, and most people recognize when it is missing, yet definitions range broadly.2,3 For some, it may be seen as a unidimensional entity and is simply called “professionalism”; for others, it has become so broad as to encompass everything a doctor needs to do to undertake his or her job. Most agree that a core component of professionalism is a commitment on the part of the individual practitioner to self-monitor4–10 and improve.11

The need to measure professionalism better is further highlighted because it is under threat. For example, external regulation may undermine intrinsic motivation to improve. Also, shorter working hours mean that some doctors may find it harder to develop an enduring commitment and sense of accountability. Finally, financial incentives and disincentives can compete with personal, moral, and ethical responsibilities.

A number of new assessment tools have been developed in attempts to capture the essence of professionalism, or at least a component of it. For example, many medical workplaces now use multisource feedback as an assessment tool. This fills an important gap, yet there is now a risk that some may treat the sole use of multisource feedback as synonymous with a comprehensive assessment of an individual's professionalism.

In view of the broad range of definitions of professionalism, alongside the development of a number of new assessment tools, we saw the need to try to draw together some of the threads from both these areas, a research agenda that has been strongly endorsed elsewhere.11 What might a more programmatic approach to the assessment of an individual practitioner's professionalism look like? To answer this, we needed to assimilate the various definitions of professionalism, collate the assessment tools that would be useful, and map those tools to the elements of professionalism (a blueprint) so that areas of overlap and assessment gaps could be identified. Such gaps could then inform where new assessment tools should be developed or where previous assessment tools could be adapted.

We were helped in this task by groundwork completed previously by other authors. An earlier systematic review of measures, concentrating on the period between 1982 and 2002, summarized the assessment instruments available up to then.12 This was a useful starting point, which highlighted the lack of well-documented studies of instruments that can be used to measure professionalism. A second useful piece of work was undertaken by van de Camp et al,10 who sought to define professionalism through a thorough literature review, thematic analysis, and validity check in 2004. Since then, there has been important work in developing consensus statements on professionalism—for example, from the Royal Colleges of Physicians,11,13 the Charter on Medical Professionalism arising from the Medical Professionalism Project,14 and the British General Medical Council statements on good medical practice.15

Measuring or assessing professionalism is hampered by two major problems. The first is that, although there are many definitions of professionalism, these are often so broad that they do not lend themselves to aspects that are easily assessable. Furthermore, there is no consensus, and views on professionalism may change over time.3 The existing definitions also lack a clear breakdown of the elements of professionalism into aspects that could be measured. The second problem is that, although a number of tools to measure professionalism have been developed and much progress has been made, we know from lessons learned elsewhere in assessment that single tools are rarely able to assess complex areas adequately. A combination of tools will be required; the critical question is what that combination might entail.

A programmatic approach is likely to be needed16,17 whereby multiple snapshots of an individual's professionalism can be taken and then collated into a whole to develop a clear picture of that person's strengths and weaknesses and to provide a body of evidence on which to base summative decisions.18

This study had four aims:

* To synthesize the various definitions and interpretations of professionalism

* To describe a toolbox of possible assessment methods

* To produce a blueprint that matches assessment tools to the identified elements of professionalism

* To identify gaps where professionalism elements are not well matched by assessment tools


Method

We carried out this study during 2007–2008 in five stages: (1) a literature review of definitions of professionalism, (2) a thematic analysis of the definitions of professionalism, (3) a literature review of tools to assess elements of professionalism, (4) creation of a blueprint whereby the elements of professionalism are matched to relevant assessment tools, and (5) identification of assessment gaps.

In undertaking the literature review to identify definitions of professionalism, we were particularly interested in building on the work of van de Camp et al,10 who undertook a similar literature review and thematic analysis in 2004, but we also concentrated on studies that used a systematic process to develop consensus statements or to reach a shared understanding of a definition. The initial search was conducted within the Medline (1996–2007) database and was significantly supplemented by checking references for additional publications, enabling us to incorporate seminal work such as the Medical Professionalism Charter,14 Royal College statements,11 and the General Medical Council's statement of good medical practice.15 More than 50 articles were identified; of these, more than 20 were rejected because they duplicated existing concepts or definitions.

Each of us undertook a thematic analysis of the definitions of professionalism by identifying the key elements from each definition. We then discussed any areas of difference and agreed on consensus elements and themes. We clustered those elements by taking account of two aims: to cluster them into similar attributes and to cluster them into themes that might use similar assessment techniques. From this, we aimed to develop a working definition of professionalism that captured all the relevant aspects. Alongside this was the need to clarify the behavioral manifestations of some key elements if the definitions were unclear.

We used those elements as the foundation for an expanded literature review to identify examples of relevant assessment tools. We searched for terms including the elements themselves (e.g., “teamwork,” “reliability”) combined with variations describing the tool, such as “instrument” and “examination,” as well as terms including “assess,” “evaluate,” “measure,” and their derivatives (e.g., “assessment,” “evaluation,” and “measurement”). The search was originally conducted within Medline and was expanded through manually checking bibliographical references for further publications. We concentrated particularly on articles published since 2002 to build on the work undertaken by Lynch et al.19 We were especially interested in identifying tools that could be used as part of a summative process—that is, tools that, when combined with other tools, might be sufficiently robust to inform summative decisions. This meant discarding many interesting but less relevant ideas on how professionalism could be taught or learned. We undertook a similar, but simpler, thematic analysis of these identified assessment tools and thereby clustered the tools into groups of a similar nature that seemed to assess similar attributes.
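By way of illustration only, the sketch below shows how such element-by-tool term combinations could be generated; the term lists and the PubMed-style truncation syntax are assumptions for illustration, not our exact search strings.

```python
from itertools import product

# Illustrative term lists only; they are not the exact search strategy
# described in the text above.
element_terms = ['"teamwork"', '"reliability"']
tool_terms = ['"instrument"', '"examination"', "assess*", "evaluat*", "measur*"]

# One Boolean query per (element, tool-descriptor) pairing; the truncation
# stems (e.g., measur*) stand in for "measure" and its derivatives.
queries = [f"{element} AND {tool}" for element, tool in product(element_terms, tool_terms)]
for query in queries:
    print(query)
# "teamwork" AND "instrument"
# "teamwork" AND "examination"
# ...
```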

We then created a blueprint whereby we matched the attributes of professionalism to the assessment tools.

Finally, we identified the gaps, where attributes did not have an existing assessment tool or where a single tool may not assess an attribute adequately.
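A minimal sketch of these final two stages follows, using abbreviated, hypothetical entries (the full mapping appears in Table 3, and the actual matching was done by author judgment, not software):

```python
# Hypothetical, abbreviated blueprint entries for illustration only.
blueprint = {
    "adherence to ethical practice principles": ["paper-based test", "simulation"],
    "effective interactions with patients": ["mini-CEX/P-MEX", "patients' opinions"],
    "reliability": ["collated views of coworkers (MSF)"],
    "reflectiveness/self-assessment": [],  # no adequate existing tool identified
    "advocacy": [],                        # no adequate existing tool identified
}

# Stage 5: a gap is any attribute left unmatched to an assessment tool.
gaps = [attribute for attribute, tools in blueprint.items() if not tools]
print(gaps)  # ['reflectiveness/self-assessment', 'advocacy']
```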


Results

Defining professionalism

A classification of the themes arising from definitions or interpretations of professionalism, mapped against the relevant references, is offered in Table 1.


Some terms arose that required clarification. The first was “self-regulation,” which is widely accepted as an integral component of a profession. Self-regulation of a profession has implications beyond self-regulation at an individual level. At the level of the individual, which is the focus of this report, we believe the term “self-regulation” to be insufficiently explicit because it could be interpreted as meaning preserving the status quo. Instead, we have chosen the term “commitment to autonomous maintenance and continuous improvement of competence.” We have further expanded this concept to include oneself, others, and the systems in which one works.

The second term was “altruism,” which was sometimes interpreted as meaning subjugating one's own needs to those of others, yet this contrasted with maintaining a healthy work–life balance. We have therefore adopted the concept, “Balance availability to others with care for oneself.” This concept arose in relation to patients but also in relation to colleagues, so we have placed it within each of the two themes that focus on patients and on colleagues, respectively. The third term was “maturity,” which was mentioned in two articles.7,10 We found this difficult to define and were not convinced it could be classified into a separate, assessable entity on its own. Fourth, professionalism has its own underpinning base of knowledge that can be assessed with traditional knowledge tests, such as multiple-choice questions. Predominantly, however, professionalism is about what someone does, rather than what he or she knows. In developing a blueprint, we did not wish to ignore the underpinning knowledge base20,21 but, instead, wished to place our emphasis higher on Miller's22 pyramid; that is, toward “doing” and away from just “knowing.” Finally, some definitions include ensuring that a patient's family are well informed. The concept of family has different meanings for different people, so we preferred the phrase “people who are important to the patient.”

Nearly all definitions of professionalism included some element of reflectiveness and/or self-monitoring. The purpose of this is to improve one's competence. We therefore decided that these elements should be placed within the theme of improving competence in oneself.

Identification of assessment tools

The assessment tools clustered into groupings according to their use. Table 2 shows examples of tools within each grouping. We explain the groupings below.

Assessment of an observed clinical encounter.

The mini-CEX is an example of this type of assessment tool.23–26 It is used to assess a 15- to 30-minute observed snapshot of a doctor–patient interaction, conducted in actual patient-care settings with real patients, using a structured marking sheet that covers predefined generic areas. Validity derives from using authentic interactions, and reliability is achieved by aggregating multiple assessments from multiple assessors. Standardization between sites can be achieved with examiner training and by collating scores from several encounters. The original mini-CEX asks for assessment of professionalism as a single global entity. Modifications have been made to look at specific aspects of professionalism through the development of the Professionalism Mini-Evaluation Exercise (P-MEX),27 which can assess four discrete areas: doctor–patient relationship skills, reflective skills, time management, and interprofessional relationship skills.

Collated views of coworkers.

This is usually achieved through multisource feedback (MSF), which is the systematic collection and feedback of data on an individual's performance, acquired from a number of stakeholders. In the past, this has sometimes been referred to as 360-degree assessment.24,28–34 Typically, the person being assessed nominates 10 to 20 assessors who collectively can comment on the specified range of that person's abilities. The assessors may include supervising consultants, registrars, nurses, allied health professionals, and clerical staff. MSF can be used to assess actual workplace behaviors that are difficult to observe under formal assessment conditions and that can sometimes be concealed within a formal assessment.
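A minimal sketch of how MSF returns might be collated, assuming a 1-to-5 rating scale and illustrative items (neither is specified by the tools cited above):

```python
from statistics import mean

# Hypothetical MSF returns: each nominated assessor rates specified abilities
# on a 1-5 scale (the scale and items here are assumed, not from the article).
returns = [
    {"role": "consultant", "teamwork": 4, "reliability": 5},
    {"role": "nurse", "teamwork": 5, "reliability": 4},
    {"role": "allied health", "teamwork": 4, "reliability": 4},
]

# Collate item by item across all assessors before feeding back.
items = ["teamwork", "reliability"]
collated = {item: round(mean(r[item] for r in returns), 2) for item in items}
print(collated)  # {'teamwork': 4.33, 'reliability': 4.33}
```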

Record of incidents of unprofessionalism.

This is used on an “as-required” basis whereby an observed incident of unprofessional behavior can be reported and collated centrally. An overview group would look at the reports to determine whether a pattern of behavior is apparent and/or whether further action is needed.35,36

Critical incident report.

This method asks the doctor to reflect on a critical incident he or she has experienced or witnessed.37–39 Because the incident is self-identified, it contrasts with the record of an incident of unprofessionalism described above. It can encourage reflection and attention to elements of professionalism, but which aspect of professionalism is assessed depends on the type of incident.

Simulation.

Simulations are contrived scenarios that resemble real-life situations but that usually use models or simulated patients.33,40 Sometimes, these can be incorporated within an objective structured clinical examination (OSCE).33 Simulations can be used to assess rare or unpredictable situations or to standardize assessment of higher-order communication skills. Because they are conducted in an artificial context, validity can be reduced, although many “high-fidelity” simulations can be very realistic. They can be useful in assessing how well someone works under pressure. Single simulations, like single OSCE stations,41 can be unreliable.

Paper-based test.

This requires provision of a scenario, such as an ethical dilemma or video encounter, and a series of questions to be answered.42 It can test underlying knowledge of some principles of professionalism, moral reasoning or decision making, and what should be done, but it cannot assess what a candidate might actually do in practice.

Patients' opinions.

This is usually obtained by collating questionnaire-based opinions of patients about the nominated person's abilities in specified areas.33,43–46 It can be used to assess actual workplace behaviors that are difficult to assess under formal assessment conditions. It is a direct survey of the key stakeholders of a health service. However, as discussed later, some patient populations can be more critical than others, so results should be interpreted in conjunction with other assessments and with an understanding of the population that has been surveyed.

Global view of supervisor.

This is a summary view, usually by a supervisor, reported on a form with predefined criteria. The criteria help to define the areas of importance, but the tendency for them to be used as views of single observers at single points in time can make them unreliable and difficult to defend,47 despite demonstrations of internal consistency. However, such a summary can be useful if it is used repeatedly over time and if it draws on the evidence derived from other assessments. If multiple raters are used and the results are collated, then it functions like multisource feedback. We have therefore taken the view that it is not an assessment instrument in itself but more a means to report a summary of assessments. For these reasons, we have not included this in our blueprint as an assessment tool, but we acknowledge that it can have an important role in a programmatic assessment process.

Self-administered rating scale.

This is a questionnaire-based tool that an individual uses to assess his or her personal attributes or attitudes. It can aid reflection, but it has limited use in summative assessments, because it cannot assess what a person actually does.

Assessment blueprint

The overall blueprint is shown in Table 3. Note that the critical incident report is not on the blueprint, because the areas it maps against would be individual and idiosyncratic. If we take the view that the best assessments are ones of direct observation of the behaviors of interest, then the mini-CEX,23–26 and particularly the P-MEX,27 would be core components of an assessment program. Some behaviors can be concealed if a person knows that he or she is being directly observed, so the collated views of coworkers (MSF) and of patients (patient opinion surveys) become complementary sources of information. Moral reasoning could be assessed by a simulation or, more efficiently, by a paper-based scenario. The gaps, or remaining attributes that would not be well assessed using these methods alone, are


* Reflectiveness/self-assessment

* Lifelong learning

* Dealing with uncertainty

* Advocacy

* Balance availability to others with care for oneself

* Seek and respond to results of an audit

* Advancing knowledge


Discussion

In this study, we attempted to clarify the elements of professionalism and to cluster them into assessable components. This process has confirmed that professionalism is multifaceted, and therefore a person could be excellent in one aspect and deficient in another. Furthermore, the assessment blueprint demonstrates that no single tool can effectively measure a person's professionalism as a whole and that several tools will be required.

The themes of professionalism that we have chosen are not the only way the cake could be cut, but we have attempted to synthesize the range of definitions and themes used by others into a unified whole. Over time, we anticipate that this classification could be challenged or refined. However, in the meantime, there is a pressing need to align these themes with assessment instruments.11,48

The blueprint demonstrates that direct observations (through the mini-CEX23–26 and P-MEX27) and collated views (through MSF and patients' opinions) are crucial elements because they capture many aspects in reliable, valid, and feasible ways. Medicine has, at times, been rather defensive about using patients' opinions as a measure of anything, arguing that external factors might have a significant impact on how a patient views his or her doctor. Doctors, for example, do not and should not always acquiesce to patients' demands, yet failure to do so could result in unfavorable ratings from that patient. The message and the messenger can sometimes be confused so that doctors might receive poor ratings if the messages they bring are unpalatable. In contrast, patients are the reason for our profession to exist, are the most important stakeholders, and appreciate having their views heard. Just as any instrument in isolation cannot measure a doctor's professionalism, so too can patients' opinions be misleading if taken on their own. However, patients' opinions do complement other sources of information, and the blueprint shows they fill an important gap.

Portfolios have often been suggested as a means to assess professionalism.49 The function of a portfolio is to collate data from a variety of sources to form a body of evidence.50 Its value is therefore dependent on the contributing data. If the data are restricted to only a few elements of professionalism, then an incomplete picture will be formed. Furthermore, this approach acknowledges that the evidence will require a combination of global judgments alongside more structured instruments. Both approaches are reliable, provided data from sufficient numbers of observations and observers are aggregated.51,52 This reinforces the need for a systematic collection of evidence based on a blueprint, such as we have produced. Nevertheless, the whole of professionalism is more than the sum of the parts,3 and there is a need to be able to take an overview of all elements. We therefore see the portfolio as having an important role in collating evidence, but not as the source of that evidence. Although a portfolio is not in itself an instrument for assessing self-assessment or reflection, it has a second important role in a person's professional development: it provides an opportunity to self-assess, reflect on its contents, and improve.

This leaves some important elements that are not easily assessed using the mini-CEX, P-MEX, MSF, patients' opinions, paper-based tests, or simulations (listed at the end of the previous section). However, insight could be assessed by asking a person to complete a mini-CEX, P-MEX, or MSF form about himself or herself and comparing that score with the scores given by others. Used in this way, a measure of insight could be gained by noting any areas of discrepancy. Reflection is an element within the P-MEX, but only in relation to isolated events, so there may be a need to adapt existing tools53 or develop additional measures of reflection. A person's ability to advance knowledge could be assessed by documenting publications, presentations, research, or teaching activities.
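A minimal sketch of the self-versus-others comparison suggested here, assuming illustrative items, a 1-to-5 scale, and an arbitrary discrepancy threshold:

```python
from statistics import mean

# Hypothetical items rated on a 1-5 scale; the form, items, and threshold
# are assumptions for illustration, not a validated instrument.
self_ratings = {"reflective skills": 5, "time management": 4}
others_ratings = {
    "reflective skills": [3, 3, 4],  # collated scores from coworkers
    "time management": [4, 4, 5],
}

# Flag a possible lack of insight where the self-rating diverges from the
# collated view of others.
for item, scores in others_ratings.items():
    gap = self_ratings[item] - mean(scores)
    if abs(gap) >= 1.0:  # threshold assumed; would need local calibration
        print(f"{item}: self-rating differs from others by {gap:+.2f}")
```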

The remaining attributes are less amenable to assessment with existing discrete tools but could be assessed with new tools or through appropriate assessment processes. For example, reflectiveness, advocacy, lifelong learning, dealing with uncertainty, balancing availability to others with care for oneself, and seeking and responding to results of an audit could be the foci of discussions with a supervisor or colleague. As such, the commitment to examining these areas could be assessed, but it is less clear whether the attributes themselves could be assessed accurately. Although such discussions would gather useful information, these areas should also be high priorities for the development of novel assessment methods.

The strength of the blueprint that we developed is the multifaceted approach taken to this problem, by drawing together the varying definitions and measures of professionalism. The main limitation, however, is related to this, as the classifications we have chosen could be refined or debated. We acknowledge that a variety of classifications could be used, but we would argue that the mix of tools that should be used and developed is unlikely to be altered by such reclassifications.

We conclude that professionalism can be assessed using a combination of mini-CEX, P-MEX, MSF, patients' opinions, paper-based tests, simulations, measures of research and/or teaching activities, and scrutiny of self-assessments compared with assessments by others. A portfolio is a useful means to support such a program of assessment. Attributes that require more development in their measurement are reflectiveness, advocacy, lifelong learning, dealing with uncertainty, balancing availability to others with care for oneself, and seeking and responding to results of an audit. These attributes should be the focus of development of tools and/or processes. The few tools that do exist need to be adapted.


References

1 Ginsburg S, Regehr G, Lingard L. Basing the evaluation of professionalism on observable behaviors: A cautionary tale. Acad Med. 2004;79(10 suppl):S1–S4.

2 Cochran A. Professionalism: I know it when I see it, but how do I measure it? Focus Surg Educ. Spring 2008:8–10.

3 Borrero S, McGinnis KA, McNeil M, Frank J, Conigliaro RL. Professionalism in residency training: Is there a generation gap? Teach Learn Med. 2008;20:11–17.

4 Frohna A, Stern D. The nature of qualitative comments in evaluating professionalism. Med Educ. 2005;39:763–768.

5 Hilton SR, Slotnick HB. Proto-professionalism: How professionalisation occurs across the continuum of medical education. Med Educ. 2005;39:58–65.

6 Jha V, Bekker HL, Duffy SRG, Roberts TE. Perceptions of professionalism in medicine: A qualitative study. Med Educ. 2006;40:1027–1036.

7 Kearney RA. Defining professionalism in anaesthesiology. Med Educ. 2005;39:769–776.

8 Rabinowitz D, Reis S, Van RR, Alroy G, Ber R. Development of a physician attributes database as a resource for medical education, professionalism and student evaluation. Med Teach. 2004;26:160–165.

9 Swick HM. Toward a normative definition of medical professionalism. Acad Med. 2000;75:612–616.

10 Van De Camp K, Vernooij-Dassen MJ, Grol RP, Bottema BJ. How to conceptualize professionalism: A qualitative study. Med Teach. 2004;26:696–702.

11 Royal College of Physicians. Doctors in Society: Medical Professionalism in a Changing World. Report of a Working Party. London, UK: Royal College of Physicians; 2005.

12 Veloski JJ, Fields SK, Boex JR, Blank LL. Measuring professionalism: A review of studies with instruments reported in the literature between 1982 and 2002. Acad Med. 2005;80:366–370.

13 Levenson R, Dewar S, Shepherd S. Understanding Doctors: Harnessing Professionalism. London, UK: King's Fund; 2008.

14 Medical Professionalism Project. Medical professionalism in the new millennium: A physicians' charter. Lancet. 2002;359:520–522.

15 General Medical Council. Good Medical Practice. London, UK: General Medical Council; 2006.

16 Arnold L, Shue CK, Kalishman S, et al. Can there be a single system for peer assessment of professionalism among medical students? A multi-institutional study. Acad Med. 2007;82:578–586.

17 Epstein RM, Dannefer EF, Nofziger AC, et al. Comprehensive assessment of professional competence: The Rochester experiment. Teach Learn Med. 2004;16:186–196.

18 Wilkinson TJ. Assessment of clinical performance: Gathering evidence. Intern Med J. 2007;37:631–636.

19 Lynch DC, Surdyk PM, Eiser AR. Assessing professionalism: A review of the literature. Med Teach. 2004;26:366–373.

20 West CP, Huntington JL, Huschka MM, et al. A prospective study of the relationship between medical knowledge and professionalism among internal medicine residents. Acad Med. 2007;82:587–592.

21 LoboPrabhu S, King C, Albucher R, Liberzon I. A cultural sensitivity training workshop for psychiatry residents. Acad Psychiatry. 2000;24:77–84.

22 Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 suppl):S63–S67.

23 Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: A method for assessing clinical skills. Ann Intern Med. 2003;138:476–481.

24 Healthcare Assessment and Training. Assessment Services. Available at: http://www.hcat.nhs.uk/assessments. Accessed January 12, 2009.

25 Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77:900–904.

26 Hatala R, Ainslie M, Kassen BO, Mackie I, Roberts JM. Assessing the mini-clinical evaluation exercise in comparison to a national specialty examination. Med Educ. 2006;40:950–956.

27 Cruess R, McIlroy JH, Cruess S, Ginsburg S, Steinert Y. The professionalism mini-evaluation exercise: A preliminary investigation. Acad Med. 2006;81(10 suppl):S74–S78.

28 Archer JC, Norcini J, Davies HA. Use of SPRAT for peer review of paediatricians in training. BMJ. 2005;330:1251–1253.

29 Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269:1655–1660.

30 Evans R, Elwyn G, Edwards A. Review of instruments for peer assessment of physicians. BMJ. 2004;328:1240–1243.

31 Norcini JJ. Peer assessment of competence. Med Educ. 2003;37:539–543.

32 Lockyer J. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof. 2003;23:4–12.

33 Accreditation Council for Graduate Medical Education, American Board of Medical Specialties. Toolbox of Assessment Methods. Version 1.1. Chicago, Ill: Accreditation Council for Graduate Medical Education and American Board of Medical Specialties; 2000.

34 Lockyer JM, Violato C, Fidler H. The assessment of emergency physicians by a regulatory authority. Acad Emerg Med. 2006;13:1296–1303.

35 Ainsworth MA, Szauter KM. Medical student professionalism: Are we measuring the right behaviors? A comparison of professional lapses by students and physicians. Acad Med. 2006;81(10 suppl):S83–S86.

36 Fontaine S, Wilkinson TJ. Monitoring medical students' professional attributes: Development of an instrument and process. Adv Health Sci Educ Theory Pract. 2003;8:127–137.

37 Stark P, Roberts C, Newble D, Bax N. Discovering professionalism through guided reflection. Med Teach. 2006;28:e25–e31.

38 Baernstein A, Fryer-Edwards K. Promoting reflection on professionalism: A comparison trial of educational interventions for medical students. Acad Med. 2003;78:742–747.

39 Verkerk MA, de Bree MJ, Mourits MJ. Reflective professionalism: Interpreting CanMEDS' “professionalism.” J Med Ethics. 2007;33:663–666.

40 Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10–28.

41 Singer P, Robb A, Cohen R, Norman G, Turnbull J. Performance-based assessment of clinical ethics using an objective structured clinical examination. Acad Med. 1996;71:495–508.

42 Simpson D, Helm R, Drewniak T, et al. Objective Structured Video Examinations (OSVEs) for geriatrics education. Gerontol Geriatr Educ. 2006;26:7–24.

43 Weaver MJ, Ow CL, Walker DJ, Degenhard EF. A questionnaire for patients' evaluations of their physicians' humanistic behaviors. J Gen Intern Med. 1993;8:135–139.

44 Tamblyn R, Benaroya S, Snell L, McLeod P, Schnarch B, Abrahamowicz M. The feasibility and value of using patient satisfaction ratings to evaluate internal medicine residents. J Gen Intern Med. 1994;9:146–152.

45 Wilkinson TJ, Fontaine S. Patients' global ratings of student competence. Unreliable contamination or gold standard? Med Educ. 2002;36:1117–1121.

46 Mackillop L, Armitage M, Wade W. Collaborating with patients and carers to develop a patient survey to support consultant appraisal and revalidation. Clin Manage. 2006;14:89–94.

47 Wilkinson TJ, Wade WB. Problems with using a supervisor's report as a form of summative assessment. Postgrad Med J. 2007;83:504–506.

48 Arnold L. Assessing professional behavior: Yesterday, today, and tomorrow. Acad Med. 2002;77:502–515.

49 Kalet AL, Sanger J, Chase J, et al. Promoting professionalism through an online professional development portfolio: Successes, joys, and frustrations. Acad Med. 2007;82:1065–1072.

50 Wilkinson TJ, Challis M, Hobma SO, et al. The use of portfolios for assessment of the competence and performance of doctors in practice. Med Educ. 2002;36:918–924.

51 Wilkinson TJ, Frampton CM, Thompson-Fawcett MW, Egan AG. Objectivity in objective structured clinical examinations: Checklists are no substitute for examiner commitment. Acad Med. 2003;78:219–223.

52 Hodges B, Regehr G, McNaughton N, Tiberius R, Hanson M. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129–1134.

53 Aukes LC, Geertsma J, Cohen-Schotanus J, Zwierstra RP, Slaets JP. The development of a scale to measure personal reflection in medical practice and education. Med Teach. 2007;29:177–182.

54 Chard D, Elsharkawy A, Newbery N. Medical professionalism: The trainees' views. Clin Med. 2006;6:68–71.

55 Cohen JJ. Professionalism in medical education, an American perspective: From evidence to accountability. Med Educ. 2006;40:607–617.

56 Hauck FR, Zyzanski SJ, Alemagno SA, Medalie JH. Patient perceptions of humanism in physicians: Effects on positive health behaviors. Fam Med. 1990;22:447–452.

57 Wagner P, Hendrich J, Moseley G, Hudson V. Defining medical professionalism: A qualitative study. Med Educ. 2007;41:3.

58 Golnik K, Goldenhar L. The ophthalmic clinical evaluation exercise: Reliability determination. Ophthalmology. 2005;112:1649–1654.

59 Shayne P, Gallahue F, Rinnert S, Anderson CL, Hern G, Katz E. Reliability of a core competency checklist assessment in the emergency department: The standardized direct observation assessment tool. Acad Emerg Med. 2006;13:727–732.

60 Rees C, Shepherd M. The acceptability of 360-degree judgements as a method of assessing undergraduate medical students' personal and professional behaviours. Med Educ. 2005;39:49–57.

61 Musick DW, McDowell SM, Clark N, Salcido R. Pilot study of a 360-degree assessment instrument for physical medicine & rehabilitation residency programs. Am J Phys Med Rehabil. 2003;82:394–402.

62 Lockyer J, Blackmore D, Fidler H, et al. A study of a multi-source feedback system for international medical graduates holding defined licences. Med Educ. 2006;40:340–347.

63 Papadakis MA, Loeser H, Healy K. Early detection and evaluation of professionalism deficiencies in medical students: One school's approach. Acad Med. 2001;76:1100–1106.

64 Gisondi M, Smith-Coggins R, Harter P, Soltysik R, Yarnold P. Assessment of resident professionalism using high-fidelity simulation of ethical dilemmas. Acad Emerg Med. 2004;11:931–937.

65 Mazor KM, Zanetti ML, Alper EJ, et al. Assessing professionalism in the context of an objective structured clinical examination: An in-depth study of the rating process. Med Educ. 2007;41:331–340.

66 Baldwin DC, Bunch WH. Moral reasoning, professionalism, and the teaching of ethics to orthopaedic surgeons. Clin Orthop. 2000;378:97–103.

67 Humphrey HJ, Smith K, Reddy S, Scott D, Madara J, Arora VM. Promoting an environment of professionalism: The University of Chicago “Roadmap.” Acad Med. 2007;82:1098–1107.

68 Hall MA, Zheng B, Dugan E, et al. Measuring patients' trust in their primary care providers. Med Care Res Rev. 2002;59:293–318.

69 Hurst YK, Prescott CL, Rennie JS. The patient assessment questionnaire: A new instrument for evaluating the interpersonal skills of vocational dental practitioners. Br Dent J. 2004;197:497–500.

70 Prislin MD, Lie D, Shapiro J, Boker J, Radecki S. Using standardized patients to assess medical students' professionalism. Acad Med. 2001;76(10 suppl):S90–S92.

71 Van Zanten M, Boulet JR, Norcini JJ, McKinley D. Using a standardised patient assessment to measure professional attributes. Med Educ. 2005;39:20–29.

72 Silber CG, Nasca TJ, Paskin DL, Eiger G, Robeson M, Veloski JJ. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med. 2004;79:549–556.

73 Gauger PG, Gruppen LD, Minter RM, Colletti LM, Stern DT. Initial use of a novel instrument to measure professionalism in surgical residents. Am J Surg. 2005;189:479–487.

74 Van De Camp K, Vernooij-Dassen MJ, Grol RP, Bottema BJ. Professionalism in general practice: Development of an instrument to assess professional behaviour in general practitioner trainees. Med Educ. 2006;40:43–50.

75 de Haes JC, Oort F, Oosterveld P, ten Cate O. Assessment of medical students' communicative behaviour and attitudes: Estimating the reliability of the use of the Amsterdam attitudes and communication scale through generalisability coefficients. Patient Educ Couns. 2001;45:35–42.

76 Tromp F, Rademakers JJ, Ten Cate TJ. Development of an instrument to assess professional behaviour of foreign medical graduates. Med Teach. 2007;29:150–155.

77 De Haes JC, Oort FJ, Hulsman RL. Summative assessment of medical students' communication skills and professional attitudes through observation in clinical practice. Med Teach. 2005;27:583–589.

78 Sarp N, Yarpuzlu AA, Mostame F. Assessment of time management attitudes among health managers. Health Care Manag (Frederick). 2005;24:228–232.

79 Chisholm M, Cobb H, Duke L, McDuffie C, Kennedy W. Development of an instrument to measure professionalism. Am J Pharm Educ. 2006;70:85.

80 Majumdar B, Keystone JS, Cuttress LA. Cultural sensitivity training among foreign medical graduates. Med Educ. 1999;33:177–184.

81 Godkin MA, Savageau JA. The effect of a global multiculturalism track on cultural competence of preclinical medical students. Fam Med. 2001;33:178–186.

82 Blackall GF, Melnick SA, Shoop GH, et al. Professionalism in medical education: The development and validation of a survey instrument to assess attitudes toward professionalism. Med Teach. 2007;29:e58–e62.


© 2009 Association of American Medical Colleges
