Method/Model Presentation

The Evolution of a Physical Therapy Research Curriculum: Integrating Evidence-Based Practice and Clinical Decision Making

Ross, Ellen C, PT, PhD; Anderson, Ellen Zambo, PT, MA, GCS

Journal of Physical Therapy Education: December 2004 - Volume 18 - Issue 3 - p 52-57

INTRODUCTION

Following the transition of professional physical therapist education from the baccalaureate to the master's level, courses in research design and statistics became a standard component of physical therapy curricula. Traditionally, a master's degree requires the production of original research in the form of a master's thesis, although this requirement is not universal and alternative models have been developed.1 A research curriculum centered on the production of original research is likely to follow the organization and emphasis of commonly used research methods texts,2,3 which reflect the steps taken in conducting research.4 Students are taught to identify a research question, determine an appropriate design and analysis to answer the question, collect and analyze the data, and interpret the data in the context of previous knowledge in the field. The assumption implicit in these curricula is that students will best master the skills of critical appraisal of research through learning to conduct research, however limited the scale of that research.5,6

This assumption has been questioned on two levels. The first question is whether it is necessary to conduct research in order to learn to critically appraise and apply research. In his 1992 editorial, “Living Without Student Research Projects,” Rothstein equates this assumption with “suggesting that writing a play is a prerequisite to reading Shakespeare.”7(p333)

An additional question is whether participation in a limited research project actually gives a student the skills necessary to make best use of evidence in clinical decision making. It may be that students' abilities to identify a body of research and critically appraise a wide range of research designs are limited by the focus on a small-scale research project.5

This questioning of the usefulness of original student research, as well as various forces in education and clinical practice, has been stimulating changes in the teaching of research in professional physical therapy education. Evidence-based clinical decision making is becoming an expectation of physical therapy practice, making it essential for physical therapist education programs to graduate students who are adept in identifying clinical questions, seeking best evidence, critically appraising that evidence, and incorporating it into practice. Thus, faculty in physical therapist education programs have been exploring various options for incorporating evidence-based practice (EBP) into their curricula.8

Evidence-based practice requires the integration of three components: the best available evidence, patient values, and clinical expertise.9 The clinical expertise component is highly dependent on clinical decision-making skills and experience. Professional students in the first semester of their education do not have the background and experience needed to engage in skilled clinical decision making, and, therefore, their ability to understand and engage in all the components of EBP is limited. In addition, they do not have the interviewing skills necessary to ascertain a patient's priorities and values. In order to develop facility in EBP, students need to learn not only research appraisal skills but also clinical decision-making and interviewing skills. When our professional physical therapist education program changed from a master's degree to a Doctor of Physical Therapy (DPT) degree, we decided to revise our curriculum by melding the teaching of clinical decision making with the teaching of research appraisal skills into one course series titled “Clinical Inquiry.” Interviewing skills, including practice in ascertaining the patient's perspective and values, are taught in clinical courses throughout the curriculum. The purpose of this paper is to describe these curricular changes and to discuss the challenges of melding traditional research curriculum content with EBP content. We will provide an overview of the entire Clinical Inquiry series and details about the content and development of the first course in the series, “Clinical Inquiry I.”

THE FORMER CURRICULUM: RESEARCH AND CLINICAL DECISION-MAKING COMPONENTS

In our master's level physical therapist education program, the research content was taught in a series entitled Scientific Inquiry. The first class in the series, Scientific Inquiry I, was a two-credit class in which students were taught research design, as well as how to conduct literature searches and read an article (Tables 1 and 2). Scientific Inquiry II, a two-credit class, covered statistics. In Scientific Inquiry III, students worked in small groups with a faculty advisor. In these groups students articulated a question to be answered by reference to the literature, conducted literature searches, and analyzed and synthesized the literature they reviewed. The questions posed were sometimes clinical, but were often methodological in nature, supporting the research agenda of the faculty member. Scientific Inquiry IV was a continuation of Scientific Inquiry III, and Scientific Inquiry V taught students about disseminating research.

Table 1: Sequence and Credits of Old and New Curricula
Table 2: Content of Old and New Curricula

Concurrent with the Scientific Inquiry series in the master's curriculum was a series entitled Clinical Analysis (Tables 1 and 2). The purpose of the series was to facilitate development of clinical decision-making skills through the use of patient scenarios, literature searches, and weekly small-group discussions. The student groups were given patient scenarios and were helped to develop one or more clinical questions, which they then sought to answer by consulting various sources of information. Although an attempt was made to critically appraise the articles they brought back to the group, students were often limited in this ability, having not yet covered the necessary material in the Scientific Inquiry class.

The revamping of the curriculum for the change to the DPT degree provided the perfect opportunity to address this limitation in the master's curriculum. Our goal in revising the curriculum was to integrate clinical decision making with research content and EBP, and also to increase the efficiency with which this content was taught, since students were on campus for fewer semesters than they had been in the master's program. By integrating these components, we created a course series that emphasized the interconnectedness of clinical decision making and critical appraisal of research.

THE DEVELOPMENT OF “CLINICAL INQUIRY I”

One of the major differences in the new Clinical Inquiry I course, as well as the entire Clinical Inquiry series, was that a patient scenario drove the need for students to understand and critically appraise research. In preparing for this course, we first worked on determining a clinical scenario around which the course would revolve. We felt that this was critical to demonstrating to students the relevance of research to clinical practice. The clinical scenario involved a patient with peripheral neuropathy who had concerns about balance and falling. A case scenario was written, presenting information about the patient's complaints and social history (Figure 1).

Figure 1: Case scenario for Clinical Inquiry I.

Once the scenario was developed, we conducted an electronic database search (MEDLINE, PEDro, and CINAHL) for articles on peripheral neuropathy and falls that were relevant to the patient scenario. From these articles, we selected ones representing a range of designs, from both the traditional experimental design perspective and the EBP categories. These articles were provided to the students and were discussed from the perspectives of research design, critical appraisal, and clinical decision making throughout the course.

There were two major components to the course: a research design component and a clinical decision-making component. The development of each will be discussed individually.

Research Design Component

To teach students to understand and appraise research design in Clinical Inquiry I, we drew from our traditional research design course, as well as from the evidence-based medicine (EBM) and EBP literature. Melding the content taught in the previous research design course with the content necessary for EBP proved challenging in a number of ways. Some of these challenges, which will be expanded on in this section, included: 1) reconciling the different appraisal questions used in traditional research versus EBP, 2) reconciling the different emphases on measurement in the two approaches, 3) reconciling the different sets of designs in the two approaches, and 4) illuminating the relationship between the EBP categories of therapy, prognosis, diagnosis, etc, and the various research designs.

Critical appraisal questions. The critical appraisal questions taught in our traditional research class had considerable overlap with the critical appraisal questions necessary for EBP, but there were also questions and perspectives unique to each. In our traditional research class, students were taught to ask the following questions about an article: 1) Have the authors stated their research question or hypothesis clearly, and is it one of importance to the profession? 2) Is the question based on a sound theoretical framework? 3) Has the right design been used to answer the question? 4) Does the design ensure good experimental control? 5) Are the authors' conclusions reasonable given the results?

If we contrast these with the EBM questions developed by Sackett et al,9 we see that the large majority of their questions focus on our question 4 (above): “Does the design ensure good experimental control?” At this point, it might be helpful to briefly describe the EBM questions. While there are different questions for each category of study (therapy, diagnosis, prognosis, etc), all the sets of questions follow a similar format. The questions fall into three main categories: 1) Are the results of the study valid? 2) Are the valid results of the study important? 3) Are these valid, important results applicable to the patient in question? The questions in the first category are specific questions about aspects of the design necessary to ensure good experimental control; those in the second category are generally about the magnitude of the result; those in the third category relate to the clinical decision-making aspect of EBM.

What we thought was missing from the EBM framework were questions about: 1) the soundness of the research question and theoretical framework, 2) whether the study design is appropriate to answer the question, 3) whether the authors actually answered the question, and 4) whether the conclusions are reasonable given the results. Missing from our traditional approach were questions about the magnitude of the results (the traditional approach relied on P values rather than on the strength of the effect) and about the application of the results to a specific patient scenario. Therefore, in Clinical Inquiry I, we taught students to address all of these questions when reading an article. Students were also taught how to “grade” an article based on the EBP levels of evidence, but with an understanding that there are important and useful articles in physical therapy research that do not fall into any of the EBM levels, and that a well-controlled randomized controlled trial could suffer from major flaws in conceptualization, limiting its usefulness.
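
To make concrete what the EBM “importance” questions ask about, consider a purely illustrative calculation; the numbers are hypothetical and are not drawn from the course materials. For a therapy study, Sackett et al9 express the magnitude of the effect with measures such as the absolute risk reduction (ARR) and the number needed to treat (NNT). If the control group event rate (CER) for falls were 0.30 and the experimental group event rate (EER) were 0.20, then

$$\text{ARR} = \text{CER} - \text{EER} = 0.30 - 0.20 = 0.10, \qquad \text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.10} = 10,$$

that is, roughly 10 patients would need to receive the intervention to prevent one additional fall-related event relative to the control condition. This is the kind of “magnitude” question that a P value alone does not answer.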

Measurement theory. In our traditional research design class, reliability and validity of measurement were covered extensively. However, sensitivity and specificity, concepts essential for understanding articles about diagnosis, were only referred to briefly. Conversely, the questions Sackett et al9 propose to guide evaluation of an intervention study do not specifically address the reliability and validity of the tools being used to measure the outcome. Thus, there were significant shortcomings in both the traditional and EBP approaches to measurement.

These differences may be, in part, an indication of some inherent differences between medicine and physical therapy; many of the outcomes physical therapists are interested in are difficult to measure, and so we focus more on issues of reliability and validity. They may also reflect the different evolution of research in medicine versus physical therapy. In their book, PDQ Epidemiology,10 Streiner and Norman point out that clinical epidemiology, which forms the basis of medical research, evolved from classic epidemiology, where the variables of interest are generally dichotomous: life or death, illness or no illness. They point out that physicians and epidemiologists are increasingly interested not just in life or death and illness or no illness, but also in the quality of life, and that this introduces new measurement challenges. They describe some of the difficulties in measuring quality of life, noting that there is possibly “more error of measurement than might be expected in categorical measures such as diagnosis,”10(p108) and that there are rarely “gold standards,” such as autopsy or biopsy, against which to evaluate measurement instruments. “Epidemiologists must acquire new skills, borrowed from such disciplines as psychology, education, and economics, to understand and contribute to the development of these measures.”10(p108) The authors further point out that the development and use of these new measurement tools requires comfort with “unfamiliar concepts like reliability and construct validity.”10(p108) Just as physical therapists are beginning to adopt some of the tools and analyses of epidemiology, so too, it seems, medical researchers and epidemiologists are beginning to move toward the measurement tools and techniques that physical therapists have found relevant for quite some time. This convergence bodes well for the future of health care research, and a well-prepared physical therapist student needs a working knowledge of both worlds. Thus, our Clinical Inquiry I course places equal emphasis on reliability, validity, sensitivity, and specificity.
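
Because the course gives equal weight to these four measurement concepts, it may be helpful to restate the two diagnostic-test concepts here in their standard form, derived from the 2 × 2 table of test result versus reference standard:

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP},$$

where TP, FN, TN, and FP are the numbers of true-positive, false-negative, true-negative, and false-positive results, respectively.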

Research designs. Research designs are another area where there are significant differences between the traditional model and the EBP model. In our traditional research design course, we categorized research broadly into experimental and nonexperimental. Within the experimental category, quasi-experimental designs were also discussed, and the nonexperimental category was broken down into correlational/exploratory and descriptive designs. This, with some variation, is the categorization scheme presented in two widely used physical therapy research texts.2,3 As Portney and Watkins3 point out, viewing research designs as a continuum from descriptive through experimental is helpful, as some designs overlap the categories and do not fall exclusively into one or another.

In EBP, a different set of research designs is described: randomized controlled trial, cohort, case-control, and cross-sectional designs. Some of the differences between the two sets of designs are purely matters of nomenclature: a randomized controlled trial versus a pretest-posttest control group design. However, some of the differences are substantive and may reflect the different history and evolution of medical versus physical therapy research. Cohort and case-control designs were covered only very briefly, in a small section on epidemiology, in our traditional curriculum. In-depth understanding of these designs is essential, however, for EBP. Conversely, there is almost no discussion of correlational or descriptive research in the EBP literature, as these designs are not considered a high enough level of evidence to contribute much to clinical decision making. Given the relative “youth” and sparseness of the body of literature in many areas of physical therapy research, these less rigorous designs may be appropriate and may be the only evidence available. In developing our Clinical Inquiry I course, we felt it was important for students to know both sets of terminology and how they relate to one another. We also discuss the evolution of a body of knowledge, and the fact that a case study or case series may be the most appropriate design to use for topics on which little or nothing has been previously published.

Relationship between EBP categories and research designs. In Sackett et al's widely cited book, Evidence-Based Medicine: How to Practice and Teach EBM,9 the following categories of research are outlined: diagnosis and screening, prognosis, therapy, harm/etiology, economic analysis, and systematic reviews. These categories are based on the intent of the study: for example, to investigate a diagnostic test or to determine the effectiveness of an intervention. One of the challenges of melding the traditional research design curriculum with EBP is making clear to students the relationship between these categories and the research designs discussed in the previous section. Different designs are optimal for each of the different categories. Although this information is available in the EBP literature,9,11-18 it is not emphasized and, in our experience, not readily apparent to the novice. Therefore, in our Clinical Inquiry I course, this relationship is made explicit. We discuss which designs are most appropriate for each category, why they are most appropriate, and why some provide stronger evidence than others.
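
In broad terms, as presented in the EBM literature cited above,9,11-18 a randomized controlled trial is generally the optimal design for a therapy question, an inception cohort study for a prognosis question, a cross-sectional comparison of the test against a reference standard for a diagnosis question, and a cohort or case-control study for a harm/etiology question when randomization is not feasible or ethical, with systematic reviews synthesizing the results of several such studies.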

Clinical Decision-Making Component

The research design component of Clinical Inquiry I ran in parallel with the second component of the course, the clinical decision-making component. The clinical decision-making component was run in small groups, with clinicians acting as group facilitators and students actively participating in group discussions. In addition to the case scenario, The Guide to Physical Therapist Practice19 served as a tool to facilitate discussion about possible examination components and interventions that might be useful for patient management. As the students read the preselected articles that were provided, they became aware of specific examination tools and interventions to use with this patient. The articles stimulated discussion about larger issues of measurement, in addition to practical application of research findings to individual patients.

Later in the semester, the students were provided with additional information about the patient scenario, including a diagnosis of non-insulin-dependent diabetes and peripheral neuropathy. Articles that had been discussed were revisited with this new information in mind. Students were asked to develop a plan of care based on this additional clinical information, and the information they had drawn from the articles.

Once a plan of care was fairly well established, students were then asked to consider whether they had sufficient evidence to support every aspect of their plan of care. If not, they were asked to develop a question, using the EBP Patient, Intervention, Comparison, and Outcome (PICO) format, whose answer would offer support for a chosen intervention or plan of care. Students were directed to perform a literature search with their question in mind and to return the following week with an article they had found and read. The instructor then facilitated a group discussion about the articles that were reviewed. The discussion of the articles included identification of the key aspects of the article, such as the hypothesis, variables, and design; critical appraisal of the article; and application of the findings to the patient scenario.
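
As a purely illustrative example (not drawn from the students' actual work), a question in this format for the course scenario might read: “For an older adult with peripheral neuropathy who is concerned about falling (Patient), does a program of balance training (Intervention), compared with lower-extremity strengthening alone (Comparison), reduce the risk of falls (Outcome)?”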

Learning to search research and EBM electronic databases was covered in both the research design and the clinical decision-making components of the course. In the clinical decision-making component, students were encouraged to incorporate evidence from both EBM reviews and individual research articles into their discussions about the patient and the development of her plan of care.

CURRICULAR OUTCOMES

The purpose of this paper was to describe a course that integrates the teaching of research design with EBP and with the clinical decision-making skills that are necessary for EBP. The series of which this course was a part was developed to integrate, more efficiently and effectively, content that had been taught previously (critical appraisal of research and clinical decision making) with the new content of EBP. The efficiency of the new curriculum can be seen in Table 1, which shows that this content required 15 credits in the old curriculum versus 10 credits in the new curriculum. In addition to this streamlining of the curriculum, the changes had the desired effect of making clear to the students that the ability to find and critically appraise research literature is an integral component of the clinical decision-making process. These twin threads of EBP and clinical decision making are woven throughout the entire curriculum, which allowed us to reduce the number of credits dedicated to the small-group discussions of clinical decision making that made up the old Clinical Analysis series (Tables 1 and 2).

One of the potential barriers to integrating EBP throughout a physical therapist education curriculum is the faculty members' knowledge about and comfort with EBP. Because we felt it was so important to incorporate EBP throughout the curriculum, all faculty members received education in EBP and in developing strategies for incorporating EBP in their courses. EBP is consistently reinforced throughout the curriculum because all faculty members recognize its importance to the practice of physical therapy and to the education of physical therapists.

Our success in effectively integrating clinical decision making, EBP, and research design in Clinical Inquiry I can be seen in some of the comments made by students on the course evaluation. In response to the open-ended question, “What was the most worthwhile aspect of this course?” students wrote answers such as: “Finding out how to tie clinical decision making to EBP”; “I like the fact that I can now critically appraise research studies”; “Applying research to a patient”; “Learning more in depth on how to analyze research articles”; “Learning to take more initiative to search for research and realize the importance of EBP”; and “The group discussions were nice because they made us think about how we might apply our data in the clinic.”

Our curriculum also built in opportunities for students to share their knowledge of EBP with the wider physical therapy community. During the third course in the series, Clinical Inquiry III, students worked in small groups with a faculty member to develop and answer a clinical question in an EBP format. The final product of this course was a poster. Last year, seven posters were created in this course, and all seven were presented at the Annual Meeting of the New Jersey Chapter of the American Physical Therapy Association. In addition, during their clinical experiences, all students are required to present at least one in-service, in which they demonstrate the use of EBP to answer a clinical question. We consistently get feedback from the clinical instructors that these in-services are very informative, and that the students do a good job of presenting the skills and application of EBP to the staff.

IMPLICATIONS FOR PHYSICAL THERAPIST EDUCATION

Over the past several years, teaching research in professional physical therapist education programs has changed from primarily teaching students to be producers of research to primarily teaching students to be critical consumers of research. This change has, in part, been driven by the need for physical therapists to practice in an evidence-based manner: to be able to rapidly identify a clinical question, and find, appraise, and apply research findings to help make clinical decisions. The shift in health care toward evidence-based practice, along with physical therapy's evolution to a doctoring profession, has created a situation in which physical therapist educators must develop effective and efficient strategies for integrating the teaching of research with the principles of evidence-based practice and clinical decision making. We have described a course series in a physical therapist education curriculum that achieves this goal, and may be useful to other physical therapist educators in their curriculum development.

REFERENCES

1. Mostrom E, Capehart G, Epstein N, Woods R, Triezenberg H. A multitrack inquiry model for physical therapist professional education. Journal of Physical Therapy Education. 1999;13(2):17-25.
2. Domholdt E. Physical Therapy Research: Principles and Applications. 2nd ed. Philadelphia, Pa: WB Saunders Company; 2000.
3. Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. 2nd ed. Upper Saddle River, NJ: Prentice Hall Health; 2000.
4. Tickle-Degnen L. Evidence-based practice forum: teaching evidence-based practice. American Journal of Occupational Therapy. 2000;54(5):559-560.
5. French B. Developing the skills required for evidence-based practice. Nurse Education Today. 1998;18(1):46-51.
6. Tracy JE. Role of research in the entry-level physical therapy curriculum. Journal of Physical Therapy Education. 1992;6(1):28-32.
7. Rothstein JM. Living without student research [Editor's Note]. Phys Ther. 1992;72:332-334.
8. Scherer S, Smith MB. Teaching evidence-based practice in academic and clinical settings [Research Corner]. Cardiopulmonary Physical Therapy Journal. 2002;13(2):23-27.
9. Sackett DL, Straus SE, Richardson WS, Rosenberg WM, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Toronto: Churchill Livingstone; 2000.
10. Streiner DL, Norman GR. PDQ Epidemiology. 2nd ed. London: BC Decker Inc; 1998.
11. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 1993;270(21):2598-2601.
12. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. B. What were the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA. 1994;271(1):59-63.
13. Jaeschke R, Guyatt G, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 1994;271(5):389-391.
14. Jaeschke R, Guyatt GH, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? The Evidence-Based Medicine Working Group. JAMA. 1994;271(9):703-707.
15. Greenhalgh T. How to read a paper: papers that report diagnostic or screening tests [erratum appears in BMJ. 1997;315(7113):942]. BMJ. 1997;315(7107):540-543.
16. Greenhalgh T. Papers that summarise other papers (systematic reviews and meta-analyses). BMJ. 1997;315(7109):672-675.
17. Greenhalgh T. How to read a paper: getting your bearings (deciding what the paper is about). BMJ. 1997;315(7102):243-246.
18. Greenhalgh T. How To Read a Paper: The Basics of Evidence-Based Medicine. 2nd ed. London: BMJ Books; 2001.
19. Guide to Physical Therapist Practice. 2nd ed. Alexandria, Va: American Physical Therapy Association; 2001.
Keywords:

Evidence-based practice; Clinical decision making; Research curriculum; Physical therapist education; Doctor of physical therapy.

Copyright © 2004 Academy of Physical Therapy Education, APTA