In their recent essay, Hak and Maguire promote the use of qualitative studies of the tutorial process in problem-based learning (PBL).1 The authors say that “Research to date has largely neglected to focus on the actual activities and learning processes that mediate and moderate the relationship between PBL programs and their cognitive outcomes.” They say “it is not known exactly how PBL produces positive educational outcomes. In particular, it is unclear which aspects of the PBL tutorial process are essential for producing them.” Their remedy is qualitative studies: “Only qualitative studies of the tutorial process itself can help us begin to understand what kinds of student and tutor behaviors contribute to the desired cognitive effects of PBL groups.”
My concern is that this recommendation is based on the presumption that there is something to understand—that PBL really works. However, numerous research studies and several literature reviews reveal no persuasive evidence for the effectiveness of PBL curricula.2 In my overview of research on PBL curricula, I concluded that the results showed “no convincing evidence that PBL improves knowledge base and clinical performance, certainly not of the magnitude expected given the resources required for a PBL curriculum.”2 In a commentary on my overview, Norman and Schmidt acknowledged that “PBL does not result in dramatic differences in cognitive outcomes… (it) has been oversold by its advocates, promising enormous benefits and largely ignoring the associated resource costs.”3 In an earlier paper (cited by Hak and Maguire), Norman and Schmidt said, “It is ironic that a professional community that prides itself on adherence to the scientific method has swung so strongly toward this innovation, despite considerable evidence that the differences in favor of PBL, at least at the level of curriculum comparisons, are small indeed.”4
I agree that the differences “are small indeed,” but I hasten to add that even those small differences are not convincingly attributable to PBL.2 For the most part, the differences were obtained with non-randomized comparisons; students were self-selected or were selected by special PBL admissions committees. This biased the results in favor of PBL, as shown by pre-existing differences between the PBL and standard-curriculum groups that were of the same magnitude as the outcome differences. Other studies started out as randomized, but, because of differential participation in certain evaluation activities (e.g., a standardized-patient exercise), the comparisons showing favorable results for interpersonal skills were again biased in favor of PBL. One quasi-randomized study that compared three curriculum types over five years of training showed a medium-sized difference in diagnostic reasoning for the last year of training.5 But a further analysis of those data showed that the effect of curriculum was roughly equivalent to the effect of only an additional three or four weeks of medical school training. (Would other studies show equally negligible effects if their results could be transformed into a familiar metric?) Another study reported large effects on students' performances on paper cases, but case scoring was based on student compliance with the usual steps of PBL (which are not a part of standard curricula), and conventional scoring showed no effect on diagnostic accuracy (89% versus 89%).6 More disturbing than these small and unconvincing differences are the reports of negative (and sizable) effects of PBL on licensure examination performances.
My point, then, is that there is no need for qualitative studies of tutorial groups to determine what makes PBL curricula effective. The evidence shows that PBL curricula have no effect, except for a possible deleterious effect on licensure examination performance (and PBL's reliance on novice students rather than faculty experts for teaching basic science easily explains the latter, without the need for qualitative studies). Surprisingly, some advocates of PBL dismiss the evidence, saying that the curriculum studies were premature and that qualitative studies should have been conducted first to determine what PBL actually is (presumably in the sense of what makes it effective). Maybe so, but this is no reason for dismissing the evidence to date; enthusiasm for innovative educational methods should not preclude a critical examination of the evidence. And in light of that evidence, Hak and Maguire's call for qualitative studies does not seem indicated.
1. Hak T, Maguire P. Group process: the black box of studies on problem-based learning. Acad Med. 2000;75:769–72.
2. Colliver JA. Effectiveness of problem-based learning curricula: research and theory. Acad Med. 2000;75:259–66.
3. Norman GR, Schmidt HG. Effectiveness of problem based learning curricula: theory, practice, and paper darts. Med Educ. 2000;34:721–8.
4. Norman GR, Schmidt HG. The psychological basis of problem-based learning: a review of the evidence. Acad Med. 1992;67:557–65.
5. Schmidt HG, Machiels-Bongaerts M, Hermans H, ten Cate TJ, Venecamp R, Boshuizen HPA. The development of diagnostic competence: comparison of a problem-based, an integrated, and a conventional medical curriculum. Acad Med. 1996;71:654–8.
6. Hmelo CE. Cognitive consequences of problem-based learning for the early development of medical expertise. Teach Learn Med. 1998;10:92–100.