Utilization of the Clinical Reasoning Assessment Tool Across a Physical Therapy Curriculum

Application for Teaching, Learning, and Assessment

McDevitt, Amy PT, DPT; Rapport, Mary Jane PT, DPT, PhD; Jensen, Gail PT, PhD; Furze, Jennifer PT, DPT

Author Information
Journal of Physical Therapy Education: December 2019 - Volume 33 - Issue 4 - p 335-342
doi: 10.1097/JTE.0000000000000110



Clarifying the Clinical Reasoning Process

Physical therapists provide patient care within an evolving and dynamic health care system for an increasingly complex patient population, which requires higher-order thinking, adaptability, and effective clinical reasoning skills. To meet the health needs of society and provide optimal patient care, clinicians need to demonstrate effective clinical reasoning on a daily basis.1–4 Clinical reasoning (CR) is not only a skill but also a “complex phenomenon” that involves thinking and decision-making processes.5 The capability to use CR allows physical therapists and other health professionals to make difficult decisions even when the patient and situation are complex or uncertain. The development of CR relies on adaptive expertise, and it can be challenging to support and advance CR development in both the classroom and the clinical setting.5–8 To better understand the intricacies of this process, Durning et al9 describe CR as an encompassing mental process combined with behaviors exhibited amid shared decision-making that includes the patient, the provider, and the environment. Physical therapist educators have begun to understand clinical reasoning as a connection between metacognition (reflecting upon the thinking process, or self-monitoring)10,11 and the development and application of clinical knowledge within the contextual environment of our patients. This understanding presents a new challenge as we prepare the next generation of physical therapists: if we want to assist learners in developing CR abilities, we must gain greater insight into the learning process by creating assessment tools that can both inform our teaching and reveal the learner's developmental progression of clinical reasoning over time.

As in medicine, nursing, and other health professions, clinical reasoning is a foundational element of physical therapy practice.2,12–14 The grounded theory work on expertise in physical therapist practice identified clinical reasoning as a core domain of expertise, with collaboration with the patient and reflection as important components. According to Gruppen,12 “there are still significant aspects of clinical reasoning that are largely ignored in the literature. Because it is often defined in terms of cognition, such things as context, affect, and institutional factors have rarely been examined for relevance to clinical reasoning.” (p 4) Christensen et al7 have further described clinical reasoning by physical therapists as a nonlinear cognitive process in which the clinician synthesizes information with the patient and family in the context of the task and the setting. The clinician reflectively integrates this information with previous knowledge and the best available evidence in order to take deliberate action. This explanation highlights the foundational aspects of reflection, collaboration, and patient-specific context, all of which culminate in a solid understanding of the clinical problem and result in sound clinical interventions.15

Teaching and Learning

While the profession has continued its dialogue on how essential clinical reasoning is to teaching and learning across physical therapist education, from the entry level through residency and fellowship, there is a critical need for assessment tools that can provide greater insight into the learning process.14,16 Given the complexity of the clinical reasoning process and its contextual factors, including the patient/family environment and psychosocial issues, this skill is best taught within the context of patient care and clinical cases and develops along a continuum.17 However, clinical practice alone is not sufficient to develop clinical reasoning skills. Their development can be fostered through diverse opportunities built on authentic patient problems, including context, while providing opportunity for meaningful reflection.18 It is these contextual opportunities that promote the clinical knowledge we understand to be foundational to the development of clinical reasoning. Despite the importance of exposure and guidance through opportunities for clinical reasoning development, evidence suggests that academic and clinical faculty may not have all of the tools and strategies necessary to facilitate and develop clinical reasoning skills in students.19,20 Educators, both clinical and academic, must be intentional about structuring learning experiences and asking appropriate reflective questions to shape the student's clinical reasoning template or infrastructure.

We know that clinical reasoning is not a generic skill. General problem-solving abilities, such as the ability to engage in a hypothetico-deductive process, are not sufficient without well-organized and relevant knowledge. This knowledge is often case or domain specific.5,21 How the case is represented through relevant knowledge and experience is an important element in understanding the deeper structure of the clinical problem. Thus, we might also expect differences in the application of clinical reasoning across different contexts.

The importance of and emphasis on clinical reasoning in physical therapy education is clear and is further highlighted by the Commission on Accreditation in Physical Therapy Education (CAPTE). The accrediting agency requires Doctor of Physical Therapy (DPT) curricula to include content and learning experiences addressing clinical reasoning and to provide opportunities for students to “use clinical judgment and reflection to identify, monitor, and enhance clinical reasoning to minimize errors and enhance patient/client outcomes.”22 Not only is clinical reasoning an accreditation expectation of CAPTE, it is also integrated into the Clinical Performance Instrument and implicit throughout the Guide to Physical Therapist Practice. Even so, there is room for growth across physical therapy education programs according to the recent National Study on Excellence and Innovation in Physical Therapy Education. Jensen et al noted the following limitation in physical therapy education: “understanding of the learning sciences and theory is weak, including strategies for developing clinical reasoning abilities.”23 The authors therefore specifically challenged the physical therapy profession to “develop a comprehensive, longitudinal approach to teaching, learning and assessment of clinical reasoning abilities.”18 Facilitating sound clinical reasoning along the continuum of physical therapy education can be challenging, and assessing the process may be even more enigmatic. The challenge for educators is to expand and improve their own understanding of learning theory and clinical reasoning while also developing a means of effectively fostering these abilities longitudinally in their students.

Assessment of Clinical Reasoning

Assessment tools link learning and performance: they help us understand student learning through a measure of performance. Both assessment for learning and assessment of learning should be fundamental goals of assessment.21 As educators, we turn to tools as a means of quantifying progress in the learner. Assessment of CR in physical therapist students presents its own set of hurdles,3 and there is a paucity of literature measuring or tracking CR in a DPT curriculum.6 However, studies show that CR abilities in physical therapist (PT) students can improve over time.6,8 The ability to objectively assess CR throughout both the didactic and clinical components of a curriculum has not been demonstrated extensively in the literature.6,8,24 Despite the recommendations of CAPTE and the National Study on Excellence and Innovation, limited tools are available to assess and track the development of clinical reasoning. A longitudinal study on the development of diagnostic reasoning, a specific category of clinical reasoning in which the therapist determines the cause of the patient's symptoms, found that when presented with patient scenarios, second-year physical therapy students' hypotheses trended toward incorporating the biomechanical and movement elements known to contribute to a patient's injury, versus the focus on anatomical structures more typically associated with first-year students.4 Furthermore, there was little consistency in identifying and addressing patient contextual factors, such as the patient's environment, support system, or the impact of the health condition on quality of life, in clinical decision-making.4,8 Therefore, identifying benchmarks and measuring the progression of clinical reasoning through multiple phases of a curriculum may be essential to optimize students' understanding of their own clinical reasoning development. In addition, this progression may also inform the teaching and learning of clinical reasoning in DPT students across the curriculum.

Given the lack of existing tools to assess clinical reasoning in physical therapy,6,24–28 the Clinical Reasoning Assessment Tool (CRAT) was created to identify and evaluate the clinical reasoning abilities of a physical therapy learner over time. The CRAT is based on 3 dimensions of student learning related to clinical reasoning development.3 The tool is grounded in theoretical frameworks including 1) the structure and application of knowledge,29 2) the development of skill acquisition based on the principles of the Dreyfus and Dreyfus model,30 and 3) explicit, consensus-based descriptions of clinical reasoning from experts, used to set the evaluation scale from beginner through proficient.13,14,31

The CRAT was recently renamed by the original authors; it was formerly referred to as the Clinical Reasoning Grading Rubric. The tool was intended to assess students' readiness to enter the clinical setting and to aid students in viewing the progression of their clinical reasoning over time, thus facilitating student self-reflection and self-monitoring.3 It was also intended to actively engage learners in their development by identifying specific and detailed areas of strength or weakness within the 3 domains (content knowledge, procedural knowledge and psychomotor skill, and conceptual reasoning)3 (see Appendix A, Supplemental Digital Content 1, for domain definitions and sample behaviors). Additionally, each domain of the tool contains open text boxes in which assessors can include narrative comments, facilitating further description and explanation of student performance in a qualitative manner. Finally, the CRAT capitalized on the assessment process as an instrumental phase between teaching and learning and focused on both assessment for learning and assessment of learning. Given the lack of evidence in the literature assessing physical therapist students' CR abilities across a curriculum (both didactic and clinical), the primary purpose of this study was to further validate the use of the CRAT in another entry-level physical therapist education program to reliably reflect student progress in the acquisition and application of CR skills across the didactic and clinical components of the curriculum. A secondary purpose was to determine whether case context was a predictor of performance on the CRAT.


Methods

A cross-sectional design was used for this study. The CRAT was administered at 4 specific time points across 2 years of a physical therapy curriculum. The CRAT included a score for each student on a visual analog scale (VAS) labeled from left to right as beginner, intermediate, competent, and proficient in each of the 3 domains of CR: content knowledge, procedural knowledge and psychomotor skills, and conceptual reasoning. A more complete description of each of these domains can be found in a previous publication by Furze et al.3 In the current study, the VAS was scored and later analyzed using a 0–16 scale for each CR domain. The 16-point numeric scale was added to the VAS for data analysis in order to further objectify the VAS measurement when determining change over time. Analysis of variance (ANOVA) was used to determine whether time was a predictor of performance in each of the 3 CR domains at each specified time point, and also whether case context was a predictor of performance.
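The mapping from the labeled VAS to the numeric scale described above can be sketched as follows. This is an illustrative sketch only; the function names and the use of Python are our assumptions, not part of the study materials. Each of the 4 labeled phases is divided into 4 equal quartiles, yielding the 0–16 scale used for analysis.

```python
# Hypothetical sketch of the CRAT scoring scale: the VAS is anchored by
# 4 phases (beginner, intermediate, competent, proficient), each divided
# into 4 equal quartiles, yielding a 0-16 numeric scale for analysis.
PHASES = ["beginner", "intermediate", "competent", "proficient"]

def vas_to_score(mark_position: float, line_length: float) -> int:
    """Convert a mark's position along the VAS line (any length unit)
    to a 0-16 score by splitting the line into 16 equal steps."""
    fraction = mark_position / line_length  # 0.0 (far left) .. 1.0 (far right)
    return min(16, round(fraction * 16))

def score_to_phase(score: int) -> str:
    """Map a 0-16 score back to its labeled phase (4 quartiles per
    phase; the maximum score of 16 is treated as proficient)."""
    return PHASES[min(3, score // 4)]
```

Under this mapping, a mark at the midpoint of the line corresponds to a score of 8, the boundary between the intermediate and competent quartiles.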

Participants and Setting

Fifty-five physical therapy students from the University of Colorado Physical Therapy Program, drawn from 2 consecutive class cohorts, were assessed with the tool, resulting in a total of 172 assessment tools scored by 11 faculty assessors. Only academically appointed core and clinical faculty were trained on the tool; therefore, not all students across the 2 cohorts were assessed. The project was approved by the Colorado Multiple Institutional Review Board prior to analysis of the data. Faculty assessors completed a 20-minute training session that included the following: 1) an introduction to the CRAT, 2) a review of the 3 domains of the tool, 3) examples of how to score students using the CRAT through an internet-based training video, and 4) an explanation of how to integrate and facilitate student self-reflection in the scoring of the tool. The tool was used by faculty assessors and clinical instructors at 4 time points in the curriculum. These time points, in chronological order, were: 1) a clinically based practical examination, 2) a standardized patient formative assessment, 3) a full-time clinical experience, and 4) a second, summative standardized patient assessment.

The University of Colorado Physical Therapy Program curriculum comprises 8 consecutive semesters, beginning with the summer semester of year 1 and culminating with graduation at the end of the fall semester in year 3. The first phase of the curriculum is based heavily in foundational and clinical science courses, with a transition in year 2 into patient management courses interspersed with clinical education experiences of increasing length and professional development courses. The curriculum culminates with advanced skills courses and a capstone project in the final semester before completion of the terminal clinical education experience. Figure 1 provides a representation of courses across the curriculum and assessment time points by semester.

Table 1: Compilation of All Mean Scores and Standard Deviations of CRAT Scores Across All Time Points (n = 172)

Semester 2, Year 1 (A Clinically Based Practical Examination)

Each student was examined by 2 different assessors across 2 different skills-based stations (examination and intervention, respectively). The assessment consisted of the student performing foundational clinical evaluation and intervention skills based on a patient vignette/clinical scenario. The patient was a second-year student who remained in role throughout. Faculty assessors completed the CRAT after feedback, which included the student's self-reflection on their performance. Each assessor was blinded to the results of the other assessors.

Table 2: Compilation of All Mean Scores and Standard Deviations of CRAT Scores Across 3 Examples of Case Context (n = 105)

Semester 3, Year 1 (Standardized Patient Simulation Experience 1)

Each student was assessed by 3 different assessors across 3 unique content areas (musculoskeletal, neurological, and medical conditions). The assessment took place in the standardized patient simulation lab and included 3 consecutive encounters with 3 different standardized patients. Students and faculty watched a video of the student's performance and met 2 weeks later for a 30-minute one-to-one feedback session. Faculty assessors completed the CRAT after feedback, which included the student self-reflecting on their performance in a number of specified areas. Each assessor was blinded to the results of the other assessors.

Semester 4, Year 2 (Full-Time Clinical Education Experience)

Clinical Reasoning Assessment Tools were returned for 14 of the 66 students in that cohort. The assessors were community clinical instructors across various settings, including outpatient, inpatient, home health, and pediatrics. Clinical assessors were trained by email, which provided a written description of the CRAT domains and general instructions on how to use the tool; they were also given access to the same training video as faculty. Clinical assessors were instructed to use the tool while observing the student during a single patient encounter.

Semester 5, Year 2 (Standardized Patient Simulation Experience 2)

Semester 5, year 2 measured a different cohort of students than the previous time points because of the timing of data collection relative to student availability. Each student was assessed by 3 different assessors across 3 unique content areas (musculoskeletal, neurological, and medical conditions). These were the same assessors who provided feedback for the earlier standardized patient assessment, and they were paired again with the same students in order to track progress and growth and to strengthen the feedback relationship. The assessment took place in the Center for Advancing Professional Excellence, used for simulation and standardized patient experiences, and included 3 consecutive encounters with 3 different standardized patients. Students and their assigned faculty watched a video of the student's performance and met 2 weeks later for a 30-minute one-to-one feedback session. Faculty assessors completed the CRAT after feedback, which included the student self-reflecting on their performance in a number of specified areas. Each assessor was blinded to the results of the other assessors.

Data Collection and Analysis

Student scores on the tool were recorded by the faculty assessor on a VAS, anchored from beginner (far left) through intermediate and competent to proficient (far right), using an “X” or a perpendicular line, as shown in Appendix B (Supplemental Digital Content 2). To further quantify where assessors marked the VAS, each phase (beginner, intermediate, competent, and proficient) was divided into 4 equal components, capturing the scores quantitatively for data analysis. This quartile scale for each phase afforded a more robust comparison of where students scored on the VAS. Prior to analysis, the mark for each CR domain was measured by hand with a 12-inch transparent ruler to determine the quartile, and the corresponding numeric score on the 0–16 scale was recorded. Analysis of variance was used to determine whether time was a predictor of performance in each of the 3 CR domains at each specified time point. An additional ANOVA was used to determine whether the content area of the patient case (musculoskeletal, neurological, or medical conditions) was a predictor of performance in each of the 3 CR domains at the 2 time points where standardized patient simulation experiences occurred. An alpha level of .05 was set for all analyses.
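The one-way ANOVA described above can be illustrated with a minimal sketch. The F statistic below is computed from scratch for transparency; the group scores are fabricated for illustration only and are not the study's data, and the statistical software actually used is not named in the source.

```python
# Minimal one-way ANOVA sketch: does assessment time point predict
# CRAT domain scores? Group data below are fabricated for illustration.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of groups."""
    k = len(groups)                                   # number of groups
    all_scores = [x for g in groups for x in g]
    n = len(all_scores)
    grand_mean = sum(all_scores) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    # Mean squares and the F ratio
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Fabricated 0-16 domain scores grouped by time point (semesters 2-5)
scores_by_time_point = [
    [4.0, 5.5, 6.0, 5.0, 5.2],   # semester 2, year 1
    [6.5, 7.0, 6.8, 7.2, 6.0],   # semester 3, year 1
    [7.5, 8.0, 7.2, 8.5, 7.8],   # semester 4, year 2
    [8.0, 8.5, 7.9, 8.3, 8.1],   # semester 5, year 2
]
f_stat = one_way_anova_f(scores_by_time_point)
# A large F, compared against the F distribution at alpha = .05, would
# indicate that time point predicts performance in this domain.
```

In practice a statistics package would also return the P value; the sketch shows only the F ratio that underlies the reported significance tests.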


Results

Fifty-five students across 2 consecutive class cohorts were included in the analysis. A total of 172 tools were scored and analyzed across 4 time points in the curriculum. First-year students in their second semester scored a mean of 5.21 (±1.9) in content knowledge, 5.51 (±1.9) in procedural knowledge and psychomotor skill, and 5.09 (±1.9) in conceptual reasoning. First-year students in their third semester scored a mean of 6.77 (±2) in content knowledge, 6.87 (±1.9) in procedural knowledge and psychomotor skill, and 6.63 (±1.7) in conceptual reasoning. Second-year students in their fourth semester scored a mean of 7.5 (±2.3) in content knowledge, 8.5 (±3.7) in procedural knowledge, and 7.71 (±2.7) in conceptual reasoning. Second-year students in their fifth semester scored a mean of 8.1 (±1.8) in content knowledge, 8.2 (±2.3) in procedural knowledge, and 8.12 (±1.8) in conceptual reasoning. Mean scores in each of the domains steadily increased at each time point (Table 1). Results of the ANOVA showed that each specified time point within the physical therapy curriculum was significantly predictive of performance in each of the 3 domains of interest (P < .0001 for each) (Table 1 and Figure 2).

Analysis of variance was also used to determine whether content area, in the form of the type of patient case or case context (musculoskeletal, neurological, and medical conditions), was predictive of content knowledge, procedural knowledge, and conceptual reasoning (Table 2). A total of 105 tools were analyzed across the standardized patient simulation experiences. On the musculoskeletal conditions case, students scored a mean of 7.72 (±2) in content knowledge, 8.13 (±2.3) in procedural knowledge, and 7.93 (±1.9) in conceptual reasoning. On the neurological conditions case, students scored a mean of 7.46 (±2.4) in content knowledge, 7.83 (±2.3) in procedural knowledge, and 7.35 (±2) in conceptual reasoning. On the medical conditions case, students scored a mean of 7 (±1.5) in content knowledge, 6.58 (±1.7) in procedural knowledge, and 6.72 (±1.6) in conceptual reasoning. Results of the ANOVA showed that case context was not predictive of content knowledge (P = .326) but was predictive of procedural knowledge (P = .007) and conceptual reasoning (P = .0297).


Discussion

Physical therapy students in this study demonstrated statistically significant progression in 3 domains of CR (content knowledge, procedural knowledge and psychomotor skills, and conceptual reasoning) across 4 time points (semester 2, year 1; semester 3, year 1; semester 4, year 2; semester 5, year 2), spanning both the didactic and clinical components of a physical therapist education program, using the CRAT. These results support the potential adoption of the CRAT to measure student development of CR over time; however, other meaningful properties of the tool, such as its intrarater and interrater reliability, remain unknown. Each assessment time point in the curriculum varied in overall structure, environment, and case content. Even so, the CRAT could be used at all points without additional modification, demonstrating its stability as an assessment tool.

A qualitative study by Gilliland32 found differences between first- and third-year physical therapist students' CR strategies with regard to hypothesis generation, assessments made, and treatment selection; more specifically, third-year students used more sophisticated reasoning strategies than first-year students.32 In the present study, first-year students in their second semester scored a mean of 5.21 (±1.9) in content knowledge, 5.51 (±1.9) in procedural knowledge, and 5.09 (±1.9) in conceptual reasoning, all of which fall in the late “beginner” and early “intermediate” categories on the CRAT (when the standard deviation is appreciated). According to the CRAT, sample behaviors in this category (based on the mean) include moderate evidence of foundational knowledge and application of the International Classification of Functioning, Disability and Health (ICF); moderate accuracy in performing tests and measures/interventions, with evidence of being able to justify most of them; and the ability to identify relevant patient problems while generating a working hypothesis and patient problem list. Second-year students in their fifth semester scored a mean of 8.1 (±1.8) in content knowledge, 8.2 (±2.3) in procedural knowledge, and 8.12 (±1.8) in conceptual reasoning, representative of the late “intermediate” to early “competent” category. The corresponding sample behaviors include more refined skills and stronger evidence of foundational knowledge and patient-related ICF components; greater accuracy and efficiency in performing examination and intervention skills combined with appropriate communication; and an improved ability to confirm or disprove a working hypothesis, including stronger justification for decisions related to the patient case. The results of our study and the study by Gilliland32 both demonstrate growth of student CR over time in early versus late students.

The improvement in mean scores across all 3 categories of the CRAT over the course of the curriculum may also reflect the timing of each assessment in the curriculum. Foundational science courses occur early in the curriculum; this may result in hypotheses focused on anatomical structures rather than on health conditions as the focus of the patient encounter. As the curriculum progresses to include clinically based coursework and clinical affiliations, hypotheses may focus more on the health condition and the patient as an individual.8,32 Similarly, a study by Boshuizen and Schmidt33 describes a decrease in the use of biomedical knowledge in the diagnosis of clinical cases as expertise increases over time in medical students. Therefore, the improvement of scores across time in the 3 domains of CR may have been an expected result based on previous work on the development of CR.8,14,32,34 Additionally, students in this study did not score higher, into the “competent” or “proficient” categories. Had this study captured results at the end of the terminal clinical affiliation (semester 8, year 3), we may have seen evidence of sample behaviors in those categories. However, a survey of clinical instructors concluded that clinical instructors do not expect students to achieve entry-level performance even in their third year of a physical therapist education program.7

In our opinion, these results demonstrate that the CRAT is an assessment tool that is generalizable across both the didactic and clinical components of a physical therapy curriculum, across time points and settings, and across assessors. More importantly, the CRAT was a useful scoring tool capable of identifying growth in CR over time.


While the CRAT was a useful tool for assessing change in CR abilities over time and in various settings (i.e., didactic, simulation center, and clinical), its use went far beyond benchmarking student progress on a VAS. In the absence of sound content/foundational knowledge and/or procedural knowledge/psychomotor skills, CR does not appear to develop independently. As an example, students who were limited in their foundational knowledge of orthopedic health conditions could not appropriately synthesize or interpret patient information in order to make sound judgments (a sample behavior in the conceptual reasoning domain). Similarly, students who were limited in their procedural knowledge and psychomotor skill related to choosing and performing appropriate orthopedic tests and measures were unable to appropriately modify or adapt tests and measures based on the patient case (a sample behavior in the conceptual reasoning domain). This limitation also affected their ability to make sound judgments and decisions specific to the patient. Recognizing the interplay between domains reinforced, for students and assessors alike, the importance of all aspects of the curricular focus, with particular recognition of the foundational and clinical sciences as preliminary building blocks instrumental to the development of conceptual reasoning (Figure 1).

Figure 1: General Representation of the Curriculum Progression at the University of Colorado Physical Therapy Program and Time Points in the Physical Therapy Curriculum Where the Study Was Completed

While this study was not designed to qualitatively analyze the narrative comments provided by assessors, information obtained from the comment boxes in each domain of the tool (Appendix B, Supplemental Digital Content 2) fostered further insight into a learner's specific areas of challenge based on the sample behaviors. In the procedural knowledge/psychomotor skill domain of one scored tool, an assessor wrote, “the student chose and performed appropriate tests and measures accurately and safely but he lacked connection to the patient in his communication and was not able to ask effective questions which were relevant to the patient's stated goals.”

The ability to assess students over time with the same tool afforded faculty the opportunity to appreciate, as Ambrose et al35 term it, “learning as a process and not as a product,” while also grounding us in common expectations for students based on the predictable nature of skill acquisition. According to the Dreyfus and Dreyfus30 model, key concepts are apparent in each stage of the learner from novice to expert, with a shift from rule-driven and factual behaviors toward intuitive behaviors and an increased tolerance of uncertainty. The explicit integration of these concepts into the CRAT through the sample behaviors fostered more appropriate expectations for both the learner and the assessor, allowing realistic goals to be set for progressive skill acquisition as it relates to CR.

Application to Teaching and Learning

The CRAT served as a useful tool in the assessment of students; however, student performance may vary across case contexts. Based on our results, case context was a predictor of performance in procedural knowledge and conceptual reasoning: for both domains, higher means were reported in the musculoskeletal conditions case than in the neurological and medical conditions cases. While these results are interesting, insight into how context may affect student performance in CR involves multiple factors specific to a patient case and remains challenging to measure. According to Durning et al,36 performance across cases may vary given the unique context of every clinical encounter, and thus contextual factors do influence CR. The results of the case context analysis may indicate that students organize knowledge related to diverse types of patient cases differently; further exploration of case context and curricular influence may therefore be warranted.

Because the CRAT was used in 3 different patient-based scenarios (a practical examination and 2 standardized patient simulation experiences) in the didactic curriculum, we believe that it may help to inform and further equip faculty with more definitive information regarding students' strengths and areas of challenge in reasoning across the curriculum. Finally, by using the CRAT across various types of assessment and at multiple time points within a curriculum, we believe there is support for a common tool and approach to assessment as a method of standardizing CR language, eliciting more effective student reflection and faculty feedback, and improving students' fluency related to their CR.

Study Limitations

Importantly, the methods of this study addressed a number of limitations identified in previous studies on CR assessment and development in entry-level physical therapist education programs, including the use of a single case simulation,4 the use of a tool in the academic environment only,37 and the presence of a ceiling effect with a CR tool.38 By using the CRAT across multiple cases and points in time, as a measure in both academic and clinical settings, and without a ceiling effect, this study extends what had previously been established.

Even so, this study had several limitations. First, data collection occurred at a single institution, potentially decreasing the generalizability of the results. In addition, collecting data across multiple clinical education sites did not allow for consistency in how the tool was utilized or scored by the many clinical instructors representing a variety of sites; however, this expansive use of the tool could also be viewed as strengthening the results, given the innate variability across clinical settings. Second, the CRAT was applied only with DPT students; to test its broader applicability, the tool should be examined across a more diverse developmental sample of PTs, including recent graduates, early career PTs, residents, and fellows. Third, the original authors of the tool did not intend for the VAS to be quantified, and the measurement accuracy associated with assigning a numeric scale to a non-numeric VAS may affect the overall results. Fourth, the study was cross-sectional; multiple cohorts of students were not followed over time. Finally, the low rate of CRAT returns from clinical instructors in semester 4 may have affected the data analysis and the generalizability of results to the clinical setting.

Further Research and Future Directions

Further exploration of the CRAT is necessary to inform physical therapist education programs on the potential application of the tool for learner assessment in didactic courses and clinical education. CR skills must be fostered not only through diverse and authentic patient-based opportunities but also facilitated by academic and clinical faculty who possess the skill set to guide the development of CR. The authors believe optimal development occurs against the backdrop of a common CR language, a common approach to facilitating student feedback and self-reflection, and the ability to identify when and through what mechanism remediation is necessary. Gathering qualitative data by interviewing faculty assessors and students could further inform and support our observations on the extent to which the CRAT influences teaching, learning, and assessment. Most importantly, assessment of the tool's intra- and inter-rater reliability will further inform educators on the psychometrics of the tool, which would serve to improve its validity and generalizability. Future work is required to continue to inform the CR process in our learners as we assess where they are and where they should be across the full spectrum of the educational continuum.


Conclusion

The CRAT may serve as a useful tool for improving both the development and assessment of CR in DPT student learners. The CRAT not only provided insight into student performance of CR at various time points but also informed us about the process of CR in DPT students, which we believe will strengthen the teaching of clinical knowledge and skill acquisition. The CRAT may be a start toward more objective benchmarking in all domains of learning in physical therapist education, including skill acquisition as it relates to CR over time.

Figure. Data are mean scores (±SD) in each clinical reasoning domain at 4 time points (x axis). The Clinical Reasoning Assessment Tool was scored on a 0–16 VAS (y axis). Statistical significance (P < .0001) was demonstrated (4-point star) at each of the 4 time points across the curriculum. VAS = visual analog scale.


References

1. Higgs J, Jones MA. Clinical decision making and multiple problem spaces. Clin Reason Health Prof. 2008;3:3–17.
2. Benner PE. Educating nurses: A call for radical transformation. In: The Jossey-Bass Higher and Adult Education Series. 1st ed. San Francisco, CA: Jossey-Bass; 2010.
3. Furze J, Gale JR, Black L, Cochran TM, Jensen GM. Clinical reasoning: Development of a grading rubric for student assessment. J Phys Ther Educ. 2015;29:34.
4. Gilliland S. Physical therapist students' development of diagnostic reasoning: A longitudinal study. J Phys Ther Educ. 2017;31:31–48.
5. Higgs J, Jones MA, Loftus S, Christensen N. Clinical Reasoning in the Health Professions. Elsevier Health Sciences; 2019.
6. Furze J, Black L, Barr J. Exploration of students' clinical reasoning development in professional physical therapy education. J Phys Ther Educ. 2015;29:22–33.
7. Huhn K, Black L, Christensen N, Furze J, Vendrely A, Wainwright S. Clinical reasoning: Survey of teaching methods, integration, and assessment in entry-level physical therapist academic education. Phys Ther. 2017;97:175–186.
8. Gilliland S, Wainwright SF. Patterns of clinical reasoning in physical therapist students. Phys Ther. 2017;97:499–511.
9. Durning SJ, Artino AR Jr, Schuwirth L, van der Vleuten C. Clarifying assumptions to enhance our understanding and assessment of clinical reasoning. Acad Med. 2013;88:442–448.
10. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780.
11. Epstein RM. Reflection, perception and the acquisition of wisdom. Med Educ. 2008;42:1048–1050.
12. Gruppen LD. Clinical reasoning: Defining it, teaching it, assessing it, studying it. West J Emerg Med. 2017;18:4–7.
13. Benner PE. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Menlo Park, CA: Addison-Wesley Pub. Co., Nursing Division; 1984.
14. Jensen GM, Gwyer J, Shepard KF. Expert practice in physical therapy. Phys Ther. 2000;80:28–43; discussion 44–52.
15. Nikopoulou-Smyrni P, Nikopoulos CK. A new integrated model of clinical reasoning: Development, description and preliminary assessment in patients with stroke. Disabil Rehabil. 2007;29:1129–1138.
16. Furze JA, Tichenor CJ, Fisher BE, Jensen GM, Rapport MJ. Physical therapy residency and fellowship education: Reflections on the past, present, and future. Phys Ther. 2016;96:949–960.
17. Furze J, Kenyon LK, Jensen GM. Connecting classroom, clinic, and context: Clinical reasoning strategies for clinical instructors and academic faculty. Pediatr Phys Ther. 2015;27:368–375.
18. Jensen GM, Hack LM, Nordstrom T, Gwyer J, Mostrom E. National study of excellence and innovation in physical therapist education: Part 2-A call to reform. Phys Ther. 2017;97:875–888.
19. Christensen N, Nordstrom T. Facilitating the teaching and learning of clinical reasoning. In: Handbook of Teaching and Learning for Physical Therapists. 2013:183–199.
20. Sellheim DO. Influence of physical therapist faculty beliefs and conceptions of teaching and learning on instructional methodologies. J Phys Ther Educ. 2006;20:48–60.
21. Trowbridge RL, Rencic JJ, Durning SJ. Teaching Clinical Reasoning. Philadelphia, PA: American College of Physicians; 2015.
22. Commission on Accreditation in Physical Therapy Education (CAPTE). Standards and required elements for accreditation of physical therapist education programs. 2016. Accessed August 24, 2017.
23. Jensen GM, Nordstrom T, Mostrom E, Hack LM, Gwyer J. National study of excellence and innovation in physical therapist education: Part 1-design, method, and results. Phys Ther. 2017;97:857–874.
24. Fu W. Development of an innovative tool to assess student physical therapists' clinical reasoning competency. J Phys Ther Educ. 2015;29:14–26.
25. Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The script concordance test: A tool to assess the reflective clinician. Teach Learn Med. 2000;12:189–195.
26. Pottier P, Hardouin JB, Hodges BD, et al. Exploring how students think: A new method combining think-aloud and concept mapping protocols. Med Educ. 2010;44:926–935.
27. Haffer AG, Raingruber BJ. Discovering confidence in clinical reasoning and critical thinking development in baccalaureate nursing students. J Nurs Educ. 1998;37:61–70.
28. Carrier A, Levasseur M, Bédard D, Desrosiers J. Clinical reasoning process underlying choice of teaching strategies: A framework to improve occupational therapists' transfer skill interventions. Aust Occup Ther J. 2012;59:355–366.
29. Krathwohl DR. A revision of Bloom's taxonomy: An overview. Theor into Pract. 2002;41:212–218.
30. Dreyfus SE, Dreyfus HL. A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition. Berkeley, CA: California University Berkeley Operations Research Center; 1980.
31. Edwards I, Jones M, Carr J, Braunack-Mayer A, Jensen GM. Clinical reasoning strategies in physical therapy. Phys Ther. 2004;84:312–330; discussion 331–335.
32. Gilliland S. Clinical reasoning in first-and third-year physical therapist students. J Phys Ther Educ. 2014;28:64–80.
33. Boshuizen HP, Schmidt HG. On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cogn Sci. 1992;16:153–184.
34. Doody C, McAteer M. Clinical reasoning of expert and novice physiotherapists in an outpatient orthopaedic setting. Physiotherapy. 2002;88:258–268.
35. Ambrose SA, Bridges MW, DiPietro M, Lovett MC, Norman MK. How Learning Works: Seven Research-Based Principles for Smart Teaching. John Wiley & Sons; 2010.
36. Durning S, Artino AR Jr, Pangaro L, van der Vleuten CP, Schuwirth L. Context and clinical reasoning: Understanding the perspective of the expert's voice. Med Educ. 2011;45:927–938.
37. Venskus DG, Craig JA. Development and validation of a self-efficacy scale for clinical reasoning in physical therapists. J Phys Ther Educ. 2017;31:14–20.
38. Brudvig TJ, Mattson D, Guarino A. Critical thinking skills and learning styles in entry-level doctor of physical therapy students. J Phys Ther Educ. 2016;30:3–10.

Keywords: Clinical reasoning; clinical reasoning tool; clinical reasoning assessment

Supplemental Digital Content

Copyright 2019 © Academy of Physical Therapy Education