The traditional approach to education in the health professions involves disseminating information through lectures, workshops, and printed materials, to increase clinicians’ knowledge.1 However, systematic reviews have shown that many educational interventions that have successfully increased clinicians’ knowledge have failed to have a significant impact on clinicians’ behavior and health care outcomes.1,2 The assumption that gains in knowledge lead to changes in clinical practice and improvements in patients’ outcomes has, thus, been called into question.1
Similarly, studies of clinical practice guidelines (CPGs) have shown these guidelines have little effect on clinical practice unless their dissemination is supported by intensive reinforcement efforts.3 Surveys of clinicians’ awareness of CPGs suggest that clinicians’ lack of knowledge of the guidelines is not the main factor explaining lack of compliance.4 It has been shown in a number of specific clinical situations, including vaccinations in febrile infants5 and dietary counseling,6 that although most physicians can demonstrate their knowledge of a recommendation and the evidence supporting its implementation, few actually behave in accordance with this knowledge when practice patterns are analyzed.
The literature suggests that the ability to affect clinical behavior through educational interventions and CPGs is unlikely to increase until the factors that affect clinicians’ decisions about whether or not to implement knowledge are better understood.7 The exploration of the factors involved in the gap between knowledge and behavior is a critical first step in the design of interventions intended to influence clinicians’ behavior.
Previous studies have identified individual factors, apart from knowledge, that influence clinical behavior. These factors include patients’ characteristics,8 beliefs about a patient's agenda,9 a desire to avoid difficult discussions,9 convenience or time constraints,10 and physicians’ self-perceptions.11 These factors were identified through studies that involved surveys and self-reporting. Reliance on clinicians’ recall and disclosure is a major methodological limitation when exploring influences on physicians’ behavior because discrepancies between clinicians’ tacit and explicit policies have been demonstrated.12 These studies are also limited by the fact that they have explored individual influences on clinicians’ behavior out of context. Although these studies provide important preliminary information regarding influences on clinical behavior, they cannot provide a rich understanding of the multiple influences on behavior and how these interact within the context of a clinical encounter. To date, no study has comprehensively examined the multifactorial influences that affect the gap between knowledge and behavior in any clinical model.
The purpose of the current study was to use the clinical model of autism to explore the factors that contribute to the gap between knowledge and behavior. We chose to use the early recognition of signs of autism by primary care clinicians as the clinical domain for this study for three main reasons: recently published evidence-based guidelines recommend early referral for diagnostic assessment when autism is suspected,13 there is evidence of a need for change in practice toward earlier diagnosis,14 and the stigma and long-term developmental implications of a diagnosis of autism were expected to produce a variety of influences on clinician behavior. This study provided a typical evidence-based educational intervention about autism, evaluated clinicians’ actions subsequent to the intervention through clinical encounters with standardized patients, and explored their decisions for action using semistructured interviews. The unique benefit of this design was the provocation of the gap between knowledge and performance in a context that allowed the examination of its etiology.
Method
Participants
The study population consisted of family medicine residents (FMRs) at the University of Toronto in the 2001–02 academic year. Participation was voluntary, and informed consent was obtained from all participants. Ethical approval was obtained from the Hospital for Sick Children Research Ethics Board, Toronto, Ontario, Canada. FMRs were selected as a study population to minimize the self-selection bias that would occur if this study were to take place in the context of continuing medical education (CME) for practicing physicians. Twenty-two residents were initially recruited, with the expectation of saturating the range of attitudes and experience regarding early recognition of autism in the sample.15 One FMR did not attend; the total number of participants was 21. The sample size was determined based on the qualitative methodologic standard of eight participants for the long interview method.16 Although the interview employed in this methodology was only 30 minutes long, the narrow focus of the discussion (one 15-minute clinical encounter) makes it reasonable to consider this interview an in-depth exploration of a focused topic. We expected participants would demonstrate at least two different patterns of behavior during the clinical encounters (i.e., some would demonstrate a knowledge–behavior gap and some would not), and with 21 participants, we expected at least eight to demonstrate each pattern of behavior.
Educational Intervention
We developed a two-hour interactive video case-based seminar about delayed language development (see Figure 1). This seminar included information about diagnosis, treatment, and outcomes in autism. To obtain baseline information regarding the study population's relevant knowledge, five FMRs were recruited to participate in a focus group, which was audiotaped, transcribed, and analyzed using a grounded theory approach.17 The resultant information was used to refine the content of the interactive seminar and to produce relevant clinical scenarios (see below).
Figure 1. Qualitative study design exploring the degrees of gap between clinical knowledge and behavior in residents following an educational intervention, University of Toronto, 2001–02.
The educational intervention was delivered by the principal investigator (TK) to FMRs as part of their regular academic seminar series. Participants for the clinical encounters were recruited from the seminar attendees.
The purpose of the educational intervention was to establish a shared knowledge base for all participants, so influences on clinical behavior other than knowledge could be studied. Identical multiple-choice tests of knowledge were given prior to and immediately following the educational intervention to document any change in knowledge that occurred as a result of the intervention. The test consisted of 15 questions about key knowledge points covered in the seminar. The content of the test was based on the recent practice parameter on the early diagnosis of autism.13 The test was assessed for content validity by two developmental pediatricians.
The educational intervention and multiple-choice tests were piloted with FMRs at one training site to ensure that the seminar could transmit the key information covered on the tests. Because they were not different from the data obtained subsequently, the pilot data were included in the study analysis.
Clinical Encounters (See Figure 1)
Six to eight weeks following the educational intervention, study participants completed a series of four 15-minute clinical encounters designed to expose gaps between their knowledge and clinical behavior. The clinical encounters allowed participants to demonstrate their knowledge by identifying signs of autism in videotaped cases and then act on the basis of their knowledge as they discussed their management plans with standardized patients playing the roles of the parents of the children in the videotapes.
Each participant first took a brief history, and then viewed a two-minute videotape of a child, which corresponded to the history given by the standardized parent. Each participant was asked to note, in a written list, observations about the videotaped child, as well as an overall clinical impression. This observational record served as documentation of the participant's relevant knowledge at the time of the encounter. Each participant then discussed his or her observations and management plans with the standardized parent. The four clinical encounters were completed in sequence. Three scenarios (1, 2, and 3) acted as setting stations portraying children with language delays and other developmental delays, which served to reproduce the diagnostic uncertainty present in real-life situations and to allow participants to become accustomed to the format of the encounters and the presence of a video camera (present, turned on, but not recording in the first three stations). The fourth scenario was the target scenario, representing a child with autism, and this encounter was videotaped.
The clinical encounters served three purposes. First, the observation records demonstrated whether the participants had retained the knowledge regarding the signs of autism gained weeks earlier at the educational intervention. Second, the clinical encounters facilitated the process of theoretical sampling.17 To ensure that an appropriate range of influences on behavior was explored, it was important to include clinicians who would demonstrate a range of behaviors in the relevant clinical situation. By interviewing participants who demonstrated different choices of action in the clinical encounters, a reasonable range of perspectives was represented in the interviews. Finally, the target station stimulated discussion in the subsequent interview. The interviews were, thus, not subject to recall bias, which would have been a significant threat to the validity of the study had the participants been asked to recall the influences on their behavior during relevant real-life clinical situations.
The clinical encounters all occurred within a six-month period. The actors playing the standardized parent roles were each trained by the principal investigator (TK) and a standardized patient trainer to ensure consistency. Participants were not given any feedback about their performance during the clinical encounters, and whenever possible all participants from a given training site completed the clinical encounters on the same evening to minimize discussion of the encounters among participants prior to completion of the study.
Semistructured Interviews
Immediately following the fourth clinical encounter (portraying a child with autism), each participant completed a 30-minute semistructured interview. The interviews were conducted by research assistants unacquainted with any of the participants. The fact that the interviewers were research assistants and not clinicians was explained to each participant prior to the clinical encounter session. During each interview, the interviewer had access to the participant's observational record from the fourth case. The interviewer and the participant watched the videotape of the management discussion portion of that participant's encounter with the fourth standardized parent. The interviewer's observations were used to stimulate discussion of the thoughts, feelings, decision-making processes, and attitudes that were involved in the encounter. To explore these tacit variables, the “discourse-based interview” technique18 was applied, which involves having the interviewee articulate the reasoning behind a choice of action and the reasons for deciding against any alternative actions. Interviews were audiotaped and transcribed, with all identifying information removed.
Data Analysis
To explore the level of change from baseline knowledge following the educational intervention, the pre- and postintervention test scores were compared using a paired-samples t test. The postintervention test scores of postgraduate year one participants (PGY1s) were compared with those of postgraduate year two participants (PGY2s), and the scores of the attendees who did not participate in the subsequent clinical encounters were compared with the scores of those who did, using a two-way analysis of variance (ANOVA).
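For illustration only, these score comparisons could be run as in the minimal sketch below, assuming a hypothetical per-attendee table of test scores; the file name, column names, and model specification are ours and were not part of the study.

```python
# Sketch of the knowledge-score comparisons described above (hypothetical data layout).
# Assumes one row per seminar attendee: pre- and postintervention test scores,
# postgraduate year, and whether the attendee completed the clinical encounters.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

scores = pd.read_csv("knowledge_test_scores.csv")  # hypothetical file
# expected columns: participant_id, pre_score, post_score,
#                   pgy ("PGY1"/"PGY2"), did_encounters (True/False)

# Paired-samples t test: change from baseline knowledge after the intervention.
t_paired, p_paired = stats.ttest_rel(scores["post_score"], scores["pre_score"])
print(f"Pre vs. post: t = {t_paired:.2f}, p = {p_paired:.3f}")

# Two-way ANOVA on postintervention scores: training level and encounter participation.
model = smf.ols("post_score ~ C(pgy) + C(did_encounters)", data=scores).fit()
print(sm.stats.anova_lm(model, typ=2))
```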
The observational record and the videotape of the target clinical encounter for each participant were compared by the principal and co-principal investigators (TK, LL). This process identified any discord between knowledge and clinical behavior. The participants were divided into groups based on the presence or absence of a knowledge–behavior gap. Participants who correctly identified the signs of autism in the target encounter, discussed these concerns with the standardized parent, and initiated a referral for a diagnostic assessment, were classified into the “No Gap” group because their choice of clinical action correlated with their demonstrated knowledge of the signs of autism, and with the emphasis on early referral during the educational intervention. Participants who identified the signs of autism but did not discuss their concerns nor initiate a diagnostic referral were considered to have demonstrated varying degrees of gap between their knowledge and their clinical behavior (“Degrees of Gap” group). Data from the one participant who did not identify the signs of autism were not included in the analysis. An independent-samples t test was used to compare the posttest scores of the No Gap and Degrees of Gap groups. A Fisher exact probability test was used to compare the number of PGY1s and PGY2s in the No Gap and Degrees of Gap groups.
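The between-group comparisons could be sketched in the same way; the score lists and 2 × 2 counts below are illustrative placeholders, not the study data.

```python
# Sketch of the group comparisons (illustrative values only, not the study data).
from scipy import stats

# Hypothetical postintervention test scores for the two groups.
no_gap = [14, 15, 12, 13, 14, 15, 13, 12, 15]
degrees_of_gap = [13, 12, 14, 13, 12, 13, 14, 12, 13]

# Independent-samples t test on postintervention scores of the two groups.
t_ind, p_ind = stats.ttest_ind(no_gap, degrees_of_gap)
print(f"No Gap vs. Degrees of Gap: t = {t_ind:.2f}, p = {p_ind:.3f}")

# Fisher exact test on the 2 x 2 table of training level by group (illustrative counts).
table = [[5, 4],   # PGY1: No Gap, Degrees of Gap
         [4, 5]]   # PGY2: No Gap, Degrees of Gap
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"PGY level by group: p = {p_fisher:.3f}")
```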
The interview transcripts were analyzed for selected and emergent themes in the grounded theory tradition.17 Selected themes included gaps between demonstrated knowledge and clinical behavior, as well as influences on clinical decision making. A preliminary coding structure was developed by the principal investigator (TK). The preliminary coding structure was refined and expanded by a team of three researchers (TK, LL, GR) through discussion and consultation of the transcripts. The final coding structure was then applied to all transcripts. NVivo 1.2 (2000) software was employed for detailed coding and analysis of associations among themes.
The final coding structure with definitions of themes and representative transcript excerpts was presented to a convenience sample of four study participants. A 60-minute return-of-findings focus group was conducted so that these insiders might provide feedback to expand, challenge, or confirm the analysis. The focus group was audiotaped and transcribed with identifying information removed. The principal investigator (TK) analyzed the transcript, and the important themes were discussed with the co-principal investigator and included in the analysis.
Results
The educational intervention was administered during the Family and Community Medicine academic half-day at five teaching hospitals affiliated with the University of Toronto. The number of participants at each seminar ranged from eight to 20.
Knowledge Assessment
Completed pre- and postintervention tests were received from 54 FMRs: 28 PGY1s, 24 PGY2s, and two who did not specify their level of training. The mean preintervention score was 8.4 (standard deviation [SD] 2.2). The mean postintervention score was significantly higher at 12.6 (SD 1.72, p < .001). There was no significant difference between the mean postintervention scores of the PGY1s (12.4, SD 1.67) and the PGY2s (12.8, SD 1.86, p = .59). There was no significant difference between the mean postintervention score of those who did not participate in the subsequent clinical encounters (12.4, SD 1.85) and the mean postintervention score of those who did (13.2, SD 1.31, p = .74).
Knowledge–Behavior Gap
Twenty of the 21 participants in the clinical encounter phase of the study (95%) correctly identified and interpreted the signs of autism in the fourth clinical encounter, as documented on their observational records. Based on their choice of clinical action, nine participants were classified into the No Gap group and nine were classified into the Degrees of Gap group. The two remaining participants could not clearly be included in either group. One of these participants presented autism as a definitive diagnosis rather than as a possibility (the appropriate approach after a brief encounter in a primary care setting). The other participant did not specifically discuss the possibility of autism, but did clearly initiate a referral for a developmental assessment that would have resulted in a diagnosis of autism.
The No Gap group's mean postintervention test score did not differ significantly from the Degrees of Gap group's score (13.7, SD 1.51 versus 12.9, SD 1.07, p = .23). There was no difference in the number of PGY1s and PGY2s in the No Gap and Degrees of Gap groups (p = .31).
Themes
Analysis of the interview transcripts resulted in the identification of two major themes.
Rationalizations
The first of these themes was called “Rationalizations.” Rationalizations were explanations that participants used to justify their choice of clinical action. There were eight major rationalizations (see Table 1) employed by participants in both the No Gap and the Degrees of Gap groups. Our participants stated that their actions were influenced by their relationships with patients and colleagues, their respect for the patient's agenda, their perception of an ongoing knowledge deficit, their personal clinical style, their desire to have the patient ultimately comply with recommendations (justifying their behavior as a means to an end), their adherence to ideals of clinical practice, their awareness of the stigma attached to the term “autism,” and the presence of systems barriers which limited their action.
Table 1: Definitions and Representative Excerpts from Interview Transcripts for Eight Dimensions of the “Rationalizations” Theme Describing Explanations Given by Participants for Their Choice of Clinical Action, University of Toronto, 2002
Remarkably, our participants used the same rationalizations to justify opposite choices of action. This phenomenon is illustrated most clearly in the “relationships” and “means to an end” rationalizations. Consider the following excerpts, in which the participants describe how they were influenced by the fact that they were meeting the standardized parent for the first time:
It's difficult to deal with such problems with a new patient that you don't have a relationship with, and they don't trust you and they don't know you and they don't know how much you care. So you have to really manage their feelings and get the message without being too rough. (R12)
It's my first time meeting her, and she might walk out the door, I might never see her again. So that's why I was a bit more direct. (R15)
Thus the newness of the physician–patient relationship prompted some participants to be more direct and others to be less direct in their feedback. The same phenomenon is seen in the following excerpts, in which the participants justify their choice of action based on their intended objective (to have the child investigated):
But I felt I had to say it, because how am I going to get the investigations if I don't … tell her what I think one of the possibilities is? (R13)
I wanted definitely for her to be onside to investigating him more. My end goal was to make sure that we could have David back for more assessments. And … I needed to not tell her the whole story … that I saw, because … I didn't want her to get upset and back away from that goal. (R2)
Thus some participants felt that a direct approach would convince the standardized parent that investigations were necessary, while others felt that a direct approach would scare her away and decrease the likelihood of getting the investigations done.
Conditions for Action
The other main theme was entitled “Conditions for Action.” This theme encompassed the two factors that promoted clinical action in accordance with knowledge (see Table 2).
Table 2: Definitions and Representative Excerpts from Interview Transcripts for the Two Dimensions of the “Conditions for Action” Theme Describing Promoters of Clinical Action Based on New Knowledge, University of Toronto, 2002
The first condition for action was a sense of urgency. In order for the participants to initiate a potentially difficult conversation about the possibility of autism, it was necessary for them to feel that the need to do so was relatively urgent. In this study design, there was no method of quantifying sense of urgency, and thus no comment can be made about whether the No Gap group had a more acute sense of urgency than did the Degrees of Gap group. The participants in the No Gap group described their sense of urgency as a motivation for discussing autism in their feedback. The participants in the Degrees of Gap group, although they did not address their concerns about autism with the standardized parent, clearly identified that a sense of urgency was an important reason to consider doing so.
The second necessary condition for action was an adequate level of certainty about the clinical findings that were relevant to the action. Participants did not have to be certain that the child in the target encounter had autism. Rather, they required a sense of certainty that the signs they saw in the child were abnormal enough, and unlikely enough to resolve spontaneously, that a discussion of their concerns (and the resultant parental anxiety) was warranted. As with sense of urgency, participants in both the No Gap and Degrees of Gap groups discussed the issue of certainty. Some participants in the Degrees of Gap group expressed that they would need to have more certainty before they would address the possibility of autism with a parent:
(In a real-life clinical setting) you have a chance to look at their behavior and see, and follow them along. It's so difficult to make one assessment, I think. So if you see them multiple times, you get more comfortable … with your assessment, I think, with the child. To be able to come out and say, “This is what I think is going on, I think we should do something.”(R1)
The participants in the No Gap group mentioned how their certainty was increased in the standardized case, and that in a real-life situation, they might be less certain, and less likely to act:
(In a real-life clinical setting) I wouldn't have been quite as confident, or as sure about my course of action, perhaps, because I would have seen them not thinking about it, not thinking of a language disorder to start with. And not having a nice videotape that actually shows all those things. During an office visit, I wouldn't have picked all those things up. (R8)
Conclusions
This study has important implications for research and practice in health professional education. The first involves the novel methodology we developed. The use of standardized clinical encounters as a sampling tool for qualitative research has not been previously described. In our design, participants’ choice of action was directly observed in the standardized encounters, which ensured that participants represented a range of approaches to the relevant clinical situation. The use of questionnaires has been described previously as a method of ensuring that a study population demonstrates a representative range of attitudes.19 However, the present study's methodology allows a more objective look at clinicians’ perspectives than does a questionnaire because the sampling involves observed behavior rather than self-reported attitudes.
Another important implication of this study involves the participants’ rationalizations. These eight rationalizations are compatible with previous research exploring barriers to implementation of CPGs (patient-related factors, systems barriers, etc.).20 Our study was conducted with one group of clinicians, in one clinical domain. Due to the general nature of most of the rationalizations, and the degree of similarity in the issues raised by all participants, we anticipate that these rationalizations will be transferable to other clinical domains and other groups of clinicians. It will be important, however, for this transferability to be tested in subsequent studies.
Our methodology provides a perspective that previous quantitative surveys of barriers to implementation of knowledge could not. The qualitative design allowed a global exploration of rationalizations for choice of action, uncovering the spectrum of applicable rationalizations. More importantly, the design exposed the fact that identical rationalizations are used to justify different courses of action. On a quantitative questionnaire, for example, all participants would have endorsed the physician–patient relationship as an important influence on decision making, but the differences in the nature of that influence for individual clinicians would not have been apparent.
The demonstration of the individuality of the rationalization process sheds significant doubt on the practical applications of the “barriers” research.20 Any effort to remove “barriers” to the implementation of guidelines or of knowledge might not produce the expected results because these factors might affect behavior differently for different clinicians or in different contexts. Our results would suggest that efforts to change these barriers should at least be individualized or should not be entertained without a careful study of the impact of such factors in different contexts. Perhaps consideration should be given to reconceptualizing these barriers as “motivations” for different kinds of action.
Another possible interpretation of the variable application of rationalizations is that these factors may not be the important determinants of behavior that clinicians report them to be. The fact that clinicians can use the same specific patient characteristic to justify opposite courses of action may be an indication of the power of the rationalization process. Choice of action in a given situation may reflect the clinician more than the context, and it may be that clinicians invoke whatever rationalization fits best with their preferred choice of action rather than being influenced by these factors prior to the decision-making process. This interpretation of the rationalizations data is consistent with the medical decision-making literature, which has shown that clinicians place more weight on evidence that confirms a hypothesis than on disconfirming evidence, and that disconfirmatory facts are often investigated with the purpose of eliminating their impact by finding grounds to reject them.21 Further qualitative studies of the clinical decision-making process could shed more light on the complexities of these interactions.
One further implication for educational practice involves the two conditions for action identified in this study: level of certainty and sense of urgency. Because these factors play an important role in the promotion of clinical action based on knowledge, they may prove to be important factors to emphasize in educational interventions. Focusing on information that elevates sense of urgency and level of certainty may help streamline the content of educational interventions while maximizing efficacy. The development of tools to measure changes in sense of urgency and in level of certainty is a potential strategy for evaluation of educational interventions. The combination of educational interventions with public awareness campaigns relating to urgency, and with the development of clinical tools such as screening checklists to promote certainty, may improve our ability to affect clinicians’ behavior.
In conclusion, this study successfully exposed the gap between knowledge and behavior in a context that allowed the examination of its etiology. It has provided new insights into the complexities of the rationalization process used by clinicians to justify their choices of action and has identified factors that promote clinical action based on knowledge. These concepts may be applied to educational research and practice in the future to maximize the impact of educational endeavors on clinical practice.
Acknowledgments
The authors would like to thank the Standardized Patient Program of the Faculty of Medicine, University of Toronto, for its invaluable participation in this project. Financial support for this research was provided by the Physicians’ Services Incorporated Foundation and the Canadian Institutes of Health Research-Association of Canadian Medical Colleges Committee on Research in Medical Education. Salary support for the principal investigator was provided by the Royal College Fellowship for Studies in Education and the Duncan L. Gordon Fellowship.
References
1. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867–74.
2. Mowatt G, Grimshaw JM, Davis DA, Mazmanian PE. Getting evidence into practice: the work of the Cochrane Effective Practice and Organization of Care Group (EPOC). J Contin Educ Health Prof. 2001;21:55–60.
3. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ. 1998;317:465–8.
4. Meyers DG, Steinle BT. Awareness of consensus preventive medicine practice guidelines among primary care physicians. Am J Prev Med. 1997;13:45–50.
5. Siegel RM, Schubert CJ. Physician beliefs and knowledge about vaccinations: are Cincinnati doctors giving their best shot? Clin Pediatr. 1996;35:79–83.
6. Kushner RF. Barriers to providing nutrition counseling by physicians: a survey of primary care practitioners. Prev Med. 1995;24:546–52.
7. Poses RM. One size does not fit all: questions to answer before intervening to change physician behavior. Jt Comm J Qual Improv. 1999;25:486–95.
8. McKinlay JB, Potter DA, Feldman HA. Non-medical influences on medical decision-making. Soc Sci Med. 1996;42:769–76.
9. Bedell SE, Delbanco TL. Choices about cardiopulmonary resuscitation in the hospital. When do physicians talk with patients? N Engl J Med. 1984;310:1089–93.
10. Brown A, Kent GG. Factors associated with the decision to refer patients with anxiety disorders or sexual dysfunction. Fam Pract. 1992;9:32–5.
11. Dowrick C, Gask L, Perry R, Dixon C, Usherwood T. Do general practitioners’ attitudes towards depression predict their clinical behaviour? Psychol Med. 2000;30:413–9.
12. Harries C, Evans J, Dennis I, Dean J. A clinical judgment analysis of prescribing decisions in general practice. Trav Hum. 1996;59:87–111.
13. Filipek PA, Accardo PJ, Ashwal S, et al. Practice parameter: screening and diagnosis of autism: report of the Quality Standards Subcommittee of the American Academy of Neurology and the Child Neurology Society. Neurology. 2000;55:468–79.
14. Howlin P, Moore A. Diagnosis in autism. Autism. 1997;1:135–62.
15. Morse JM. The significance of saturation. Qual Health Res. 1995;5:147–9.
16. McCracken G. The Long Interview. Newbury Park, CA: Sage, 1988.
17. Glaser B, Strauss A. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine, 1967.
18. Odell L, Goswami D, Herrington A. The discourse-based interview: a procedure for exploring the tacit knowledge of writers in nonacademic settings. In: Odell L, Goswami D (eds). Writing in Nonacademic Settings. New York: Guilford Press, 1985.
19. Coleman T, Williams M, Wilson A. Sampling for qualitative research using quantitative methods. 1. Measuring GPs’ attitudes towards discussing smoking with patients. Fam Pract. 1996;13:526–30.
20. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–65.
21. Hall KH. Reviewing intuitive decision-making and uncertainty: the implications for medical education. Med Educ. 2003;36:216–24.