Academic Medicine: May 2011 - Volume 86 - Issue 5
doi: 10.1097/ACM.0b013e318212feaf
International Medical Graduates

The Medical Educator, the Discourse Analyst, and the Phonetician: A Collaborative Feedback Methodology for Clinical Communication

Woodward-Kron, Robyn PhD; Stevens, Mary PhD; Flynn, Eleanor MBBS

Author Information

Dr. Woodward-Kron is senior lecturer, Medical Education Unit, Melbourne Medical School, University of Melbourne, Victoria, Australia.

Dr. Stevens is an honorary research fellow, Medical Education Unit, Melbourne Medical School, University of Melbourne, Victoria, Australia, and a Humboldt postdoctoral fellow, Institute of Phonetics and Speech Processing, Ludwig-Maximilians-Universitaet, Munich, Germany.

Dr. Flynn is senior lecturer, Medical Education Unit, Melbourne Medical School, University of Melbourne, Victoria, Australia, and a palliative care specialist.

Correspondence should be addressed to Dr. Woodward-Kron, Medical Education Unit, Melbourne Medical School, 202 Berkeley St., University of Melbourne, Victoria 3010, Australia; telephone: (61) 3-8344-3072; fax: (61) 3-9035-8873; e-mail: robynwk@unimelb.edu.au.

First published online March 23, 2011

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A50.


Abstract

Frameworks for clinical communication assist educators in making explicit the principles of good communication and providing feedback to medical trainees. However, existing frameworks rarely take into account the roles of culture and language in communication, which can be important for international medical graduates (IMGs) whose first language is not English. This article describes the collaboration by a medical educator, a discourse analyst, and a phonetician to develop a communication and language feedback methodology to assist IMG trainees at a Victorian hospital in Australia with developing their doctor–patient communication skills. The Communication and Language Feedback (CaLF) methodology incorporates a written tool and video recording of role-plays of doctor–patient interactions in a classroom setting or in an objective structured clinical examination (OSCE) practice session with a simulated patient. IMG trainees receive verbal feedback from their hospital-based medical clinical educator, the simulated patient, and linguists. The CaLF tool was informed by a model of language in context, observation of IMG communication training, and process evaluation by IMG participants during January to August 2009. The authors provided participants with a feedback package containing their practice video (which included verbal feedback) and the completed CaLF tool.

The CaLF methodology provides a tool for medical educators and language practitioners to work collaboratively with IMGs to enhance communication and language skills. The ongoing interdisciplinary collaboration also provides much-needed applied research opportunities in intercultural health communication, an area the authors believe cannot be adequately addressed from the perspective of one discipline alone.

Frameworks for teaching and learning doctor–patient communication, such as the Calgary–Cambridge Guide,1,2 the SEGUE Framework,3 and the Maastricht History-Taking and Advice Checklist,4 can assist educators in making explicit the principles of good communication to medical trainees and students. Their use and effectiveness for providing formative as well as summative feedback are well established in the literature.2,5,6 Such frameworks, however, do not usually take into account the role of language and culture in doctor–patient interviews, despite the growing reliance on international medical graduates (IMGs) to address health workforce shortages in Australia, Canada, the United Kingdom, and the United States.7,8 Yet background language and culture, including attitudes toward patient behaviors and patient-centered communication and autonomy, can have an impact on the effectiveness of these interactions.9–12 The omission of this dimension from existing communication frameworks therefore raises questions about their suitability for teaching communication in intercultural settings such as IMG communication training. Consideration of speech comprehensibility—the impact of the speaker's first language (L1) on the pronunciation of English,13 and the effect this can have on intelligibility for interactants in the health setting—also seems to be absent from current communication frameworks.

Published descriptions of the language dimension of IMGs' communication needs focus on readily identifiable elements for nonlanguage specialists (e.g., problems with slang and vernacular language).10,14 Largely unreported, and perhaps beyond the expertise of the clinical educator/supervisor, are the less tangible linguistic aspects of grammatical accuracy, cohesion, and speech comprehensibility. For example, considerable fluency and grammatical competence, particularly with tenses, are required to elicit from patients comprehensive symptom information, including time course (e.g., have you ever had; how long have you been having; when did you; do you have; are you having any). From a linguistic perspective, errors or gaps in eliciting symptom information may have several causes. As Pilotto and colleagues15 point out, supervisors may need assistance in addressing the multifaceted communication needs of IMGs.

Language researchers and practitioners have worked collaboratively with medical educators to shed light on specific aspects of communication16,17 as well as to develop bridging courses for IMGs seeking licensure.9,18 Benefits of the latter type of collaboration include raising awareness of language issues such as interference of the IMG's L1 on the IMG's spoken English. For example, language practitioners found that an IMG's “brusque” manner of communication reported by patients and colleagues was predominantly due to speech rate, syllable length, and intonation, all of which were influenced by the speaker's L1.9 However, there is little in the literature on interdisciplinary collaborations during IMG resident training and only a small body of interdisciplinary research in this area.19

In this article, we report on the development, implementation, and impact of the Communication and Language Feedback (CaLF) tool and methodology for IMGs developed by linguists collaborating with a medical educator. We propose that sustainable interdisciplinary collaborations such as this one have the potential not only to enhance IMGs' intercultural communication but also to benefit their supervisors and educators and, ultimately, to improve the quality of patient care and safety.


Setting and Framework

Setting

To obtain general registration in Australia, most IMGs must first pass an English language test, the Australian Medical Council (AMC) multiple-choice examination, and then the AMC clinical examination, which includes a 16-station objective structured clinical examination (OSCE). An urban teaching hospital in the state of Victoria, Australia, served as the setting for the development and implementation of the CaLF tool and methodology. The hospital, like other Victorian hospitals, offers IMG trainees opportunities to prepare for the AMC OSCEs through practice role-plays facilitated by a hospital-based medical clinical educator (MCE). In this hospital's two-hour, weekly communication training sessions, IMG trainees practice interviewing a patient (typically played by an MCE, a nurse educator, a medical student, or, when funding allows, a simulated patient) in front of colleagues or at an OSCE-style station with simulated examination conditions. At the conclusion of the role-play, the MCE and peers provide feedback. Participating IMGs are mainly from the Indian subcontinent, Iran, and China. Attendance in these voluntary sessions ranges from 5 to 14 IMG trainees. Attendance patterns are positively influenced by upcoming clinical examination dates and negatively by hospital rosters and rotations.

Goals

To address IMGs' communication training needs, our team collaborated to

* develop a communication and language feedback methodology using as feedback mechanisms (1) video recording and (2) written comments (via the CaLF tool),

* develop, implement, and refine the feedback methodology during IMG trainees' communication skills training sessions,

* evaluate the methodology, including the utility of the CaLF tool, and

* negotiate ongoing implementation of the methodology at the conclusion of the project.

We decided to incorporate video recording as a feedback mechanism because doing so allows language specialists to undertake fine-grained analysis of communication breakdowns. Previous studies have incorporated video into the assessment of communication skills6,20; others have specifically incorporated video into the delivery of feedback by having participants view their videotaped interactions.21,22

The project team

Our project team included a medical educator (E.F.) and two linguists: a discourse analyst (R.W.K.) and a phonetician (M.S.). We had invaluable input from the hospital-based MCE who conducted the training sessions, and we benefited from discussions with an MCE from a rural Victorian teaching hospital. Our team medical educator has extensive experience as an IMG clinical supervisor as well as in developing and implementing education and assessment programs for IMGs. She is an experienced OSCE examiner in postgraduate and undergraduate settings. Her approach to teaching communication skills is informed by experiential learning and reflective practice, and her principles for effective communication are supported by established communication frameworks and consensus statements.1,23 She took primary responsibility for the communication dimension of the feedback: that is, identifying the essential tasks and behaviors of effective patient-centered communication. The linguists took responsibility for the language dimension of the CaLF tool: that is, they focused on how the IMGs' language choices for the communication tasks may have affected their interactions. The discourse analyst has expertise in teaching English as a second language and has taught intercultural communication to medical students. The phonetician was new to health communication teaching and research.

The linguists' approach to this study was informed by a functional model of language.24 In such a model, the contexts of culture and immediate situation are seen to influence language at the content level or semantic level (i.e., representation, interaction, organization) and at the expression level of sounds (phonology) or writing. This interdependence underpinned the development of the language aspect of our feedback methodology, as did theories of second language (L2) acquisition, in particular the role and impact of the L1 on L2 learning (e.g., transfer of the L1's grammar structures into the L2; mispronunciations due to the absence or similarity of sounds/phonemes from L1 to L2).


Development and Evaluation of the CaLF Tool and Methodology

We began the project in January 2009 and developed and implemented the CaLF methodology and tool over an eight-month period, completed in August 2009. Our project design incorporated three incremental phases. In the observational research phase (phase 1), the linguists attended two IMG communication training sessions, during which they made field notes and contributed verbal feedback on communication aspects of the IMGs' practice role-plays. In the development phase (phase 2), we attended six more sessions in which we continued to take field notes, contributed verbal feedback, and developed the CaLF tool. Phases 1 and 2 were important familiarization phases for us as well as for the IMGs; we wished the IMGs to feel that we were active participants in the training sessions so that they would feel comfortable with us when we began filming them.

In the implementation and evaluation phase (phase 3), which spanned 10 sessions, we commenced video recording during the role-plays held in the classroom and implemented the CaLF methodology. We video recorded IMGs during 8 of the 10 sessions. In addition, we held practice OSCE sessions using a simulated patient at the same time as 3 of these 10 sessions, to allow individual IMGs to participate in a practice exam with an experienced OSCE examiner (E.F.). These practice sessions were held in another room with one of the linguists filming the interaction. Because we could manage only one OSCE station during these 3 sessions, the rest of the IMGs remained in the regular sessions with the other linguist and the MCE. In this final phase, the IMGs also participated in a process evaluation of the feedback methodology and CaLF tool.

Western Health Low Risk Human Research Ethics Panel gave ethics approval to video record the IMGs' OSCE role-plays and practice OSCE sessions, to take observational notes of the trainees in order to develop the CaLF tool and methodology, to show the videos for educational and research purposes, to evaluate the utility of the feedback methodology using focus groups and survey methods, and to report the findings of the project. All IMGs participating in the training sessions when the project team was present gave their consent to participate in all aspects of the project.

Developing the CaLF tool (phases 1 and 2)

In phase 1, the linguists observed that the MCE provided feedback to IMG trainees on several aspects of clinical communication: organization of the clinical interview, appropriateness and accuracy of medical and lay terms used in discussion of symptoms and management, elicitation of patient symptoms and concerns, ability to establish rapport with the patient and gain the patient's trust, and extralinguistic factors that affected communication, such as posture and eye contact. In this phase, the MCE also invited the linguists to provide verbal feedback to the IMGs immediately after each role-play; in this feedback, we addressed gaps in the MCE's feedback specific to language use, including

* organization of the interview at the macro and micro levels (respectively, overall and subsections such as exploration of presenting complaint),

* the conversational and logical progression of the interview,

* lexico-grammar (word choices and grammar structures),

* grammatical accuracy,

* sound production (vowel, consonant, and syllable production) when this interfered with comprehension,

* the prosody–semantic interface (the intersection between sounds of speech and meaning), including the role of intonation, pitch, and stress for emphasis, and

* discourse semantics (interactional strategies such as questioning, clarifying, confirming, encouraging).

We also provided feedback to some IMG trainees on the interference of aspects of their L1's grammar and phonological system on their spoken English. As an example, one IMG's fluency and comprehensibility were impeded when he had to articulate specific consonant clusters, for example, at the start of the word “spontaneous” (SPon/tan/e/ous), because such sequences do not occur in his L1 (which has a consonant–vowel syllable structure). Awareness of the root cause of this problem was enlightening for both the IMG and the MCE.

As we moved into phase 2, our observations and field notes informed the criteria for the CaLF tool, while the organization of these criteria into categories or fields was underpinned by the theoretical framework outlined above. That is, the levels of language were a starting point for “translating” and incorporating into the CaLF tool linguistic aspects such as discourse organization, discourse semantics, lexico-grammar, and phonology. The phonetician and discourse analyst continued to observe IMG role-plays and practice sessions and to give feedback only on aspects of language that affected communication, whereas the MCE gave his regular feedback on clinical as well as communication aspects.

As we designed the CaLF tool, we took into consideration issues of utility, comprehensiveness, and its role in learning. It was clear from the IMGs' questions during the early stages of the project that they were seeking from the linguists more detailed feedback on language and communication aspects than the MCE could provide. They wished us to identify examples of problematic questioning strategies and give alternative suggestions (i.e., wordings and phrases), examples of grammatical and pronunciation errors, and so on. (Several of the IMGs knew the International Phonetic Alphabet; in these cases, there was a shared terminology between the IMGs and the linguists for accurately describing the articulation of the sounds of English.) Furthermore, the feedback they received from the MCE on communication was often unsystematic as discussion frequently moved quickly to the clinical aspects of the case.

These factors informed the design of the CaLF tool. We included writing space for summative comments and structured the tool to give feedback across the levels of language. Therefore, the criteria are Structure/Organization, Interaction, Word Choice, Speech Clarity, and Other. These criteria include prompt points; for example, the prompts for Speech Clarity are speech errors, speech rate, rhythm, intonation, stress, vowels, and consonants. We included “Other” as a category because we observed that intercultural dissonance or gaps in clinical knowledge tended to have negative impacts on language and communication skills. We also included explanations of feedback criteria at the end of the tool. When completing the CaLF tool, the evaluator provides comments on those criteria deemed most relevant to the IMG's performance. For an example of a completed CaLF tool that shows the types of language feedback provided to the IMGs on their doctor–patient role-plays and practice OSCE interactions, see Supplemental Digital Appendix 1 (http://links.lww.com/ACADMED/A50).
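As an illustration only (not the published form itself), the structure of the CaLF tool can be sketched as a simple data structure in Python. The Speech Clarity prompt points below are those listed above; the remaining prompt points are assumed examples rather than quotations from the tool.

# Illustrative sketch only: the CaLF feedback criteria as a data structure.
# The Speech Clarity prompt points are those listed in the text; the other
# prompt points are assumed examples, not quoted from the published tool.
CALF_CRITERIA = {
    "Structure/Organization": ["macro structure (overall interview)",
                               "micro structure (subsections)"],
    "Interaction": ["questioning", "clarifying", "confirming", "encouraging"],
    "Word Choice": ["medical vs. lay terms", "grammar structures"],
    "Speech Clarity": ["speech errors", "speech rate", "rhythm", "intonation",
                       "stress", "vowels", "consonants"],
    "Other": ["gaps in clinical knowledge", "intercultural dissonance"],
}

def blank_calf_form() -> dict:
    """Return an empty form: one free-text comment field per criterion plus
    space for summative comments. The evaluator completes only the criteria
    most relevant to the IMG's performance."""
    form = {criterion: "" for criterion in CALF_CRITERIA}
    form["Summative comments"] = ""
    return form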

Developing and evaluating the CaLF methodology (phase 3)
Developing the methodology.

We video recorded role-plays and used the CaLF tool to provide feedback in 10 sessions. During 3 of these sessions, as noted above, we also recorded role-plays conducted as timed OSCE practice sessions under examination conditions with simulated patients from the University of Melbourne's simulated patient program. Although the cases for both the classroom role-play sessions and the OSCE practice sessions were taken from the AMC Handbook of Clinical Assessment,25 the team medical educator (E.F.) had input into the selection and content of the OSCE practice cases. In the OSCE practice sessions, trainees received verbal feedback immediately after the role-play from the team medical educator, the simulated patient, and the linguist participating in the filming. Trainees participating in the regular sessions received verbal feedback after role-plays from the MCE and the linguist video recording the sessions. The linguists used the CaLF tool as a guide to give the verbal feedback in phase 3. The IMGs who were filmed (11 in the OSCE sessions; 9 in the regular sessions) later received a feedback package, including the completed CaLF tool and their video recording, which included all the verbal feedback they received during the session.

In the week between sessions, one linguist (M.S.) reviewed each trainee's video and made further notes about the trainee's performance on the CaLF tool. When time permitted, both linguists viewed the video together to bring different areas of expertise to the analysis. This analysis process required 15 to 20 minutes for each video. We saved the CaLF tool with our notes on performance and the video to a DVD, which we then gave to the IMG at the next training session. When possible, we discussed the analysis with the IMG, which allowed the phonetician to demonstrate production of vowels, consonants, or consonant clusters for IMGs who had speech comprehensibility issues.

The video recording allowed us to take a detailed diagnostic approach to the feedback. In some instances, the phonetician transcribed the videos to investigate more closely aspects that were problematic in trainees' practice OSCEs such as organization of the history taking or elicitation of the course of the symptom. We also sought to identify the cause of misunderstandings (e.g., when the patient sought clarification) and to pinpoint moments in the video that illustrated our comments so that the trainee could review them. In a few cases, we used the transcripts to provide quantitative evidence of the number of times the IMG asked a certain question type (e.g., Did you/Did the pain) or interrupted the patient.
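As a minimal sketch of how such counts might be produced (our illustration, assuming plain-text transcripts of the doctor's turns; the question stems and the transcript fragment below are invented examples, not project data):

# Minimal sketch: tally how often the trainee's transcribed turns use
# particular closed-question stems (e.g., "Did you", "Did the pain").
# The stems and sample turns below are illustrative, not project data.
import re
from collections import Counter

QUESTION_STEMS = [
    "did you", "did the pain", "do you have", "are you having",
    "have you ever had", "how long have you been having", "when did you",
]

def count_question_stems(doctor_turns):
    counts = Counter()
    for turn in doctor_turns:
        text = turn.lower()
        for stem in QUESTION_STEMS:
            counts[stem] += len(re.findall(r"\b" + re.escape(stem) + r"\b", text))
    return counts

sample_turns = [
    "Did you have any fever? Did the pain move anywhere?",
    "How long have you been having the cough?",
]
print(count_question_stems(sample_turns))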

We also used the videos to provide detailed phonetic analysis. In one instance, the MCE requested that we investigate the tone of an IMG participant from the Indian subcontinent, because clinical colleagues had reported that the trainee sounded “robotic” with patients. By “tone,” the MCE was referring to the intonation of the IMG's spoken English, which was at a relatively constant midlevel pitch throughout his speech, and to the rhythm, whereby all syllables were relatively equal in terms of duration and prominence. These specific aspects are very typical of Indian English, but not Australian English, in which differences in pitch and rhythm play a crucial role in signaling new information to listeners. The feedback we provided concerning intonation and rhythm during the training session did not seem to convince the IMG trainee that these aspects would make it difficult for native Australian English listeners to understand him, or that modifying his intonation could improve his daily interactions with colleagues and patients. Using the video of the IMG's practice session, the phonetician was able to address the issue in a more objective manner by extracting the sound file from the recording and presenting to him some audio examples as well as pitch trace diagrams (Figure 1). The pitch trace diagram for this IMG (top panel) shows a flat pitch with none of the clearly identifiable peaks that would indicate stressed words and syllables. In Australian speech, peaks indicating stress would normally appear on one or two words in the sentence. The phonetician used the trainee's pitch trace diagram to illustrate for him a number of stark differences between his speech and that of an Australian speaker (bottom panel), in which clear stress peaks are identifiable in the speaker's utterance (e.g., a lot, understanding).

Figure 1. Pitch trace diagrams of an utterance by the IMG trainee (top panel) and by an Australian English speaker (bottom panel).
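For readers wishing to produce comparable pitch traces, the following minimal sketch uses the open-source librosa library in Python. It is not the phonetician's actual workflow (dedicated phonetics software such as Praat is equally suitable), and the file names are hypothetical.

# Minimal sketch (assumed tooling, not the authors' workflow): extract a
# fundamental-frequency (pitch) trace from a speech recording and plot it,
# so flat and peaked intonation contours can be compared visually.
import librosa
import matplotlib.pyplot as plt

def plot_pitch_trace(wav_path, title):
    # Load the audio at its native sampling rate.
    y, sr = librosa.load(wav_path, sr=None)
    # Estimate F0 with probabilistic YIN over a speech-typical range (75-400 Hz).
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    times = librosa.times_like(f0, sr=sr)
    # Unvoiced frames are NaN and appear as gaps in the plot.
    plt.plot(times, f0)
    plt.xlabel("Time (s)")
    plt.ylabel("F0 (Hz)")
    plt.title(title)
    plt.show()

# Hypothetical file names, for illustration only.
plot_pitch_trace("img_trainee_utterance.wav", "IMG trainee: relatively flat pitch")
plot_pitch_trace("australian_speaker_utterance.wav", "Australian speaker: clear stress peaks")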
Summary of the CaLF methodology.

In sum, the CaLF methodology is a means of providing multisource feedback to IMG trainees participating in doctor–patient role-plays in simulated settings. It incorporates video recording of the role-play and the provision of feedback on the clinical, communication, and language aspects of trainee performance. An MCE or experienced OSCE examiner provides verbal feedback on clinical and communication aspects, a simulated patient provides verbal feedback from the patient perspective, and a linguist/language specialist provides verbal feedback and written comments on language aspects via the CaLF tool. (The verbal feedback is video recorded.) When resources allow, the linguist can analyze the video for diagnostic purposes, particularly in cases where supervisors have expressed concerns about English language competence. The video, CaLF tool, and any additional materials are provided to the IMG, who can further discuss the contents of the video with an MCE. With the permission of the participants, the videos can also be shown to and discussed with peers.

Evaluating the tool and methodology.

The project evaluation included ongoing process evaluation designed to inform the development of the CaLF tool and methodology as well as a summative evaluation toward the end of the eight-month project. Evaluation included observation of improved communication skills for participating IMGs and their success in the clinical exam (8 of the 10 participating IMGs who took the AMC clinical exam during the project period passed the examination). The ongoing process evaluation was especially valuable because we were able to record improved communication via longitudinal comparisons of individual IMGs' performance and feedback received from the MCE, simulated patient, and linguist. Although it is beyond the scope of this article to provide the linguistic evidence of this improvement, for two of the participants, comments from the examiner (E.F.) indicated that they demonstrated improved organization of the history taking, more effective elicitation strategies, and improved speech comprehensibility.

We planned a summative evaluation to assess the utility and effectiveness of the CaLF methodology at the end of phase 3 through qualitative and quantitative measures (survey of and focus-group discussion with IMG trainees who had participated). However, the outcome of this formal evaluation was compromised by the irregular attendance patterns of participating IMGs due to roster commitments and the time frame of the project. At the evaluation session in which we conducted the survey and focus group, only five of the IMGs present had received the written CaLF feedback and video files, and none of these five had been able to view the video because of file format issues. As well as alerting us to problems with file transfer and compatibility, this feedback gave us the opportunity to show the videos and feedback in a group session, as all participants were keen to see their recordings as well as to learn from their peers. In accordance with the ethics guidelines for this project, we sought agreement from all participants before screening the videos in a group setting. This group feedback session generated a large amount of discussion and contributed to a general increase in the participating IMGs' awareness of the significant role of communication in the clinical interview. It also provided a rare opportunity for the MCE to view trainees' performance under examination conditions with a simulated patient and an external examiner (i.e., the team medical educator).


Implications and Challenges

When we commenced this project, our aim was to develop a communication feedback methodology that MCEs could use in training sessions for IMGs preparing for the AMC clinical examination. With hindsight, we now recognize that we had envisaged designing a tool for others to use and ending our involvement at the conclusion of the project. The CaLF methodology, with the CaLF tool, has instead developed into a collaborative, interdisciplinary approach, which is both its strength and a potential barrier to implementation. Its strength is that it recognizes that some IMG communication issues may go beyond the expertise of clinically trained educators and provides a means (the CaLF tool) of addressing this collaboratively. However, there is the issue of the accessibility of language expertise. Not all hospitals have established links to university-based applied linguistics departments or medical education units with faculty who have backgrounds in linguistics or teaching English as a second language. In such situations, the CaLF methodology with the CaLF tool can provide educators with a means of approaching language experts and inviting them into hospital-based training settings. Furthermore, the video methodology allows a clinical educator to record practice sessions and send the files to a language expert for analysis, thus circumventing the need for a linguist or language expert to be present. Such a feedback arrangement would benefit IMGs and their supervisors in rural settings. This is important in countries such as Australia, where “the tyranny of distance”26 continues to shape training interactions.

Another challenge is maintaining collaborations of this kind beyond the time frame of a funded project. In our case, at the completion of the project's funding, we wished to pursue our collaborative research into the factors that affect the success of IMG trainee–patient interactions, whereas the IMG participants and hospital-based MCE wished to maintain the link with the university, the medical educator, the language experts, and the simulated patient program. We therefore negotiated to continue our visits to the training setting with the hospital meeting the costs of the simulated patients. We have successfully applied for further funding to continue our collaboration, and we continue to demonstrate the methodology to medical educators at sites in Victoria.


Conclusion

The CaLF methodology provides a means of addressing the communication and language needs of IMGs, an aspect of training that is of ongoing concern but that has received only scant attention in the medical literature. This project has demonstrated that applied linguists can work productively in an interdisciplinary collaboration with medical educators and MCEs in the health setting to address communication issues that go beyond the latter group's expertise. As well as providing specialist feedback on aspects of communication, the linguists, through their presence in the training sessions, raised awareness of the various aspects or levels of language for communication, the intersection of culture and language issues, and the notion of interference of one's L1 on English. Although the methodology does require clinical educators to seek the input of a language specialist, the tool provides a starting point for medical educators to approach applied linguists to investigate possibilities for collaboration. Ongoing interdisciplinary collaborations that may result from innovations such as the CaLF methodology can provide much-needed applied research opportunities in intercultural health communication, an area we believe cannot be adequately addressed from the perspective of one discipline alone.


Acknowledgments:

The authors would like to acknowledge the contributions of Dr. Sean Fabri (Western Health), Dr. Roger Coates and colleagues (Goulburn Valley Health), Ms. Gillian Fawcett (Western Health), Ms. Margo Collins (University of Melbourne), and the participating international medical graduates. The authors would also like to acknowledge the insights provided by the anonymous reviewers and the Academic Medicine editorial team. The CaLF tool is available from the corresponding author on request.


Funding/Support:

This project was funded by the International Medical Graduate Funding Strategy (2008–2009), Department of Human Services, Victoria, Australia.


Other disclosures:

None.


Ethical approval:

Ethics approval was given by Western Health Low Risk Human Research Ethics Panel (WHRP:2009.L3).


Previous presentations:

This study was presented in part at the Communication, Medicine and Ethics Conference, Cardiff, Wales, June 25–27, 2009 (poster); at A World of Expertise International Medical Graduates Showcase Day, Department of Human Services, Melbourne Exhibition and Convention Centre, Melbourne, Victoria, Australia, August 5, 2009 (paper); at the second International Discourses and Cultural Practices Conference, University of Sydney, Sydney, New South Wales, Australia, July 7–9, 2009 (paper); and at the 15th National Prevocational Medical Education Forum, Melbourne, Victoria, Australia, November 7–9, 2010 (invited workshop).


References

1Kurtz S, Silverman J. The Calgary–Cambridge Referenced Observation Guides: An aid to defining the curriculum and organizing the teaching in communication training programmes. Med Educ. 1996;30:83–89.

2Silverman J, Kurtz S, Draper J. Skills for Communicating With Patients. 2nd ed. Abingdon, UK: Radcliffe; 2005.

3Makoul G. The SEGUE framework for teaching and assessing communication skills. Patient Educ Couns. 2001;45:23–34.

4Thiel J, Ram P, Van Dalen J. MAAS-Global Manual 2000: Guidelines to the Rating of Communication Skills and Clinical Skills of Doctors With the MAAS-Global. http://www.hag.unimaas.nl/Maas-Global_2000/GB/MAAS-Global-2000-EN.pdf. Accessed January 26, 2011.

5Kramer AW, Düsman H, Tan LH, Jansen JJ, Grol RP, van der Vleuten CP. Acquisition of communication skills in postgraduate training for general practice. Med Educ. 2004;38:158–167.

6Laidlaw TS, Kaufman DM, MacLeod H, van Zanten S, Simpson D, Wrixon W. Relationship of resident characteristics, attitudes, prior training and clinical knowledge to communication skills performance. Med Educ. 2006;40:18–25.

7Barton D, Hawthorne L, Singh B, Little J. Victoria's dependence on overseas trained doctors in psychiatry. People Place. 2003;11(1):54–64.

8Mullan F. The metrics of the physician brain drain. N Engl J Med. 2005;353:1810–1818.

9Hoekje B. Medical discourse and ESP courses for international medical graduates (IMGs). English for Specific Purposes. 2007;26:327–343.

10McDonnell L, Usherwood T. International medical graduates: Challenges faced in the Australian program. Aust Fam Physician. 2008;37:481–484.

11Woodward-Kron R. Learner medical discourse and intercultural clinical communication: Towards a contextually informed teaching framework. In: Solly M, ed. Verbal/Visual Narrative Texts in Higher Education. Zurich, Switzerland: Peter Lang; 2008.

12Hamilton J, Woodward-Kron R. Developing cultural awareness and intercultural communication through multimedia: A case study from medicine and the health sciences. System. 2010;38:560–568.

13Swan M, Smith B, eds. Learner English: A Teacher's Guide to Interference and Other Problems. 2nd ed. Cambridge, UK: Cambridge University Press; 2001.

14Hall P, Keely E, Dojeiji S, Byszewski A, Marks M. Communication skills, cultural challenges and individual support: Challenges of international medical graduates in a Canadian healthcare environment. Med Teach. 2004;26:120–125.

15Pilotto L, Duncan G, Anderson-Wurf J. Issues for clinicians training international medical graduates: A systematic review. Med J Aust. 2007;187:225–228.

16Heritage J, Robinson J, Elliott M, Beckett M, Wilkes M. Reducing patients' unmet concerns in primary care: The difference one word can make. J Gen Intern Med. 2007;22:1429–1433.

17Roberts C, Wass V, Jones R, Sarangi S, Gillett A. A discourse analysis study of ‘good’ and ‘poor’ communication in an OSCE: A proposed new framework for teaching students. Med Educ. 2003;37:192–201.

18Wright R, McCullagh M. Using DVD Material in Teaching English for Medicine. Cambridge, UK: Cambridge University Press; 2008.

19Roberts C, Sarangi S, Southgate L, Wakeford R, Wass V. Oral examinations—Equal opportunities, ethnicity, and fairness in the MRCGP. BMJ. 2000;320:370–375.

20Ram P, Grol R, Rethans JJ, Schouten B, van der Vleuten C, Kester A. Assessment of general practitioners by video observation of communicative and medical performance in daily practice: Issues of validity, reliability and feasibility. Med Educ. 1999;33:447–454.

21Carroll K, Iedema R, Kerridge R. Reshaping ICU ward round practices using video-reflexive ethnography. Qual Health Res. 2008;18:380–390.

22Roter D, Larson S, Shinitzky H, et al. Use of an innovative video feedback technique to enhance communication skills training. Med Educ. 2004;38:145–157.

23von Fragstein M, Silverman J, Cushing A, Quilligan S, Salisbury H, Wiskin C. UK consensus statement on the content of communication curricula in undergraduate medical education. Med Educ. 2008;42:1100–1107.

24Halliday MAK. An Introduction to Functional Grammar. 2nd ed. London, UK: Arnold; 1994.

25Australian Medical Council. AMC Handbook of Clinical Assessment. Canberra, Australia: Australian Medical Council; 2007.

26Blainey G. The Tyranny of Distance: How Distance Shaped Australia's History. Melbourne, Australia: Sun Books; 1966.


© 2011 Association of American Medical Colleges
