Competency-Based Validation of Neurologic Specialty Practice

Perry, Susan B. PT, DPT, MS, NCS; Rauk, Reva P. PT, PhD, MMSc, NCS; McCarthy, Arlene PT, DPT, MS, NCS; Milidonis, Mary K. PT, PhD, MMSc

Journal of Neurologic Physical Therapy: June 2008 - Volume 32 - Issue 2 - p 62-69
doi: 10.1097/NPT.0b013e31817584d9

Introduction: This paper describes a recent practice analysis of neurologic physical therapy. The purpose of the analysis was to describe the knowledge, skills, and abilities of a neurologic physical therapist and to distinguish elements that the neurologic specialist may perform differently from a nonspecialist. The analysis was done to revalidate and revise the Description of Specialty Practice used by the American Board of Physical Therapy Specialties (ABPTS). This Description supports the test plan for the neurologic certification examination.

Methods: A survey was developed by a subject matter expert group and was finalized by analysis of pilot survey data. The final survey consisted of 303 items about the duties, roles, procedures, and knowledge of neurologic physical therapists. Respondents were asked to rate items on frequency, importance, level of criticality, and level of judgment. Respondents were also asked to provide input about examination content percentages and their own clinical practice and demographics.

Results: The survey was sent to 590 members of the APTA’s Neurology Section. The responses of 187 (32% return rate) physical therapists were combined with 30 responses to the pilot survey for analysis. The data yielded a new neurologic certification examination test plan and description of neurologic specialty practice. This description emphasizes the professional practice expectations, tests and measures, and intervention skills that characterize a neurologic specialist.

Summary: The results of this survey represent a summary of current neurologic specialty practice and therefore support an outline for the neurologic board certification examination.

Physical Therapy Program (S.B.P.), Chatham University, Woodland Road, Pittsburgh, Pennsylvania; Staff Development and Research (R.P.R.), University of Utah Health Care, Rehabilitation Services, Salt Lake City, Utah; Rehabilitation Services Manager (A.M.), Kaiser Permanente Medical Center, Redwood City, California; Physical Therapy Program (M.K.M.), Cleveland State University, Cleveland, Ohio.

Supplemental information for this article can be found at www.jnptextra.org

Address correspondence to: Susan B. Perry, E-mail: perry@chatham.edu

INTRODUCTION

Specialist certification was established to provide formal recognition for physical therapists with advanced clinical knowledge, experience, and skills in a defined area of practice. Currently there are seven areas of certification in physical therapy: cardiovascular and pulmonary, clinical electrophysiology, geriatric, neurologic, orthopedic, pediatric, and sports. Certification in women’s health is planned for 2009. The American Board of Physical Therapy Specialties (ABPTS), along with the eight specialty councils, administers the specialization process. Certification is achieved through the successful completion of a standardized application and examination process. Each examination is constructed from a test plan reflective of current specialty practice. The test plan is an outline of key knowledge and skill areas (competencies) that will be evaluated by the examination.1 The test plan for the first version of the neurologic specialty examination was based on competencies determined by the consensus of the first Neurologic Specialty Council. The first group of board-certified clinical specialists in neurology was recognized in 1987.

In order to revalidate competencies for all certification areas in a more objective and credible manner than by consensus, ABPTS transitioned to the use of practice analysis methodology in the early 1990s. A practice analysis is a commonly used, systematic procedure for collecting and analyzing job-related information.1 Its purpose is typically to develop a detailed test plan for a credentialing examination in a knowledge-intensive profession.2 A common method of practice analysis is the development and administration of a task inventory, or survey.1,2 Furthermore, the practice analysis samples information from a large number of people, which adds validity to the test content.1,3 Since a credentialing examination must clearly reflect important tasks, the results of the survey help to provide this content.1 After survey administration, content experts meet to evaluate the survey data, make judgments about the importance of content areas, determine key practice expectations, and establish a test plan that includes content distribution of test items.3 In this way, the survey data yield an outline of competencies that describes specialty practice in physical therapy and defines the content areas for the specialist certification examination.

The first neurologic practice analysis survey was conducted in 1993.4 Board policies of the ABPTS require a practice analysis every 10 years to update practice specifications and revalidate existing competencies. A new nationwide analysis of neurologic specialty practice, funded by ABPTS, was conducted in 2002. The explicit purpose of this practice analysis was to discern “neurologic specialty practice,” ie, practice requiring advanced knowledge and skill for efficient, effective performance. The analysis was performed by surveying neurologic certified specialists (NCSs) and other physical therapists practicing in neurology about the knowledge and skills that denote a specialist. A panel of content experts participated in survey design and data evaluation. Based on previous practice analyses, the authors anticipated the survey would yield competencies on a continuum of importance, and those of the greatest importance would appear on a valid and updated test plan and description of neurologic specialty practice.

METHODS

The revalidation process included preliminary survey instrument development, a two-stage pilot survey, final survey instrument development, final survey administration, data analysis, evaluation of the results, and formulation of a new test plan and Description of Specialty Practice (published elsewhere5) to support the neurologic specialty certification examination.

Consensus Process Used to Develop the Survey Instrument

In 2001, the three members of the Neurologic Specialty Council (S.P., R.R., A.M.) met with five neurologic subject matter experts (SMEs) and a consultant. SMEs are commonly used during practice analyses to define job-related scope and content.2 The SMEs were nominated by the Council members based on their leadership and expertise in neurologic physical therapy.1 They were judged to be qualified to provide accurate and complete information about the practice duties and characteristics of neurologic specialty practitioners.6,7 All were board-certified neurologic specialists. Other characteristics of the SME group are summarized in Table 1. The role of the consultant (M.M.) was to guide the processes of survey construction and data evaluation. Her qualifications included expertise in survey research and previous experience with ABPTS practice analyses. These nine individuals began by considering important published work that might influence the survey framework. These documents included the Guide to Physical Therapist Practice,8 the existing 1993 Description of Advanced Practice in Neurologic Physical Therapy (DACP),9 Sackett’s10 work on evidence-based medicine (EBM), and the research of Jensen et al11 on expertise in physical therapy practice.

TABLE 1

After reviewing these documents, the group agreed on several broad principles to guide the survey development process. In 2000, the ABPTS adopted the Guide8 as the basic framework for all specialty practice analysis surveys. In particular, elements of the Patient/Client Management Model provided a logical survey structure. Because the Guide outlines basic physical therapy practice, specialty practice analysis surveys should focus on items that specialists would be expected to perform with greater proficiency and efficiency than a nonspecialist. The group also reviewed the 1993 DACP9 and found that neurologic practice had expanded in both breadth and complexity over the intervening years. For example, the group observed that by 2001, neurologic measures and interventions were guided more by published evidence than by classic approaches.

Sackett’s position on the use of evidence in practice heavily influenced the type of items that appeared on the survey: tests and measures should be valid and objective, and interventions should have demonstrated effectiveness when possible. Sackett’s definition of EBM is the integration of the best evidence with clinical expertise and the patient’s unique values and circumstances.10 Therefore, the group believed that some survey items should reflect knowledge about evidence-based neurologic practice, expertise in clinical decision making, and consideration of the patient’s values and clinical state. While not expressly stated by ABPTS, the group believed board-certified specialists to be experts in their field. The core dimensions of expert practice in physical therapy identified by Jensen and colleagues11 include multidimensional patient-centered knowledge, collaborative and reflective clinical reasoning, a focus on movement and function, and the personal traits of caring and commitment (or “virtuous behavior”). The group ensured that these concepts were represented on the survey instrument by including items that inquired about patient information synthesis, methods of solving complex problems, and virtuous behavior. Virtuous behavior was evaluated by survey items such as “willingly devotes time and effort to resolve a complex problem” and “establishes trustworthy relationships with colleagues, patients/clients, employers, and the public.”

After considering core documents, the group reached consensus on the knowledge, skills, and abilities (KSAs) required of a neurologic specialty practitioner. Most credentialing multiple-choice examinations cannot explicitly evaluate the ability to perform a task. Rather, they assess the knowledge and skills thought to underlie competent task performance.1 Knowledge is the body of information applied directly to the performance of a task. Skill denotes the degree of mastery already acquired in an activity. Abilities are patterns of attributes needed to perform a function or task.12 For example, the group believed that neurologic specialists readily understand the interaction between multiple system impairments and the patient’s environment (knowledge), effectively provide assistance and cues that challenge the patient at an appropriate level (skill), and adapt training to a wide variety of environmental contexts (ability). The group also agreed on major roles/duties of a neurologic specialist, such as educator, leader, consultant, and scholar, and determined KSAs associated with those roles. Content such as this became the basis for the practice analysis survey items.

Organizations and professions typically use two to three rating scales in practice analyses. The most common of these address frequency, importance or criticality, and need at entry.1,13 Rating scales for the present survey were those adopted and standardized by the ABPTS during previous practice analyses (Fig. 1). Scales to measure importance, frequency, and criticality were used for survey items about the Patient/Client Management Model,8 including tests and measures, interventions, and other professional roles/duties. Such rating scales efficiently provide information about many types of activities so that the certification examination can gauge individual readiness to perform diverse jobs in a variety of settings.13 The importance scale was intended to measure how important a task is for practice as a neurologic specialist, regardless of its frequency. Frequency is commonly used to help determine certification examination content emphasis1; tasks that are performed more often should receive greater emphasis on the test. Test-retest reliability coefficients for importance and frequency task inventory ratings with hospital workers have been reported in the good range (r = 0.76 to 0.80).14 Criticality was defined on the survey as the level at which an NCS performs an activity more efficiently and effectively than other neurologic physical therapists; it is an attempt to measure the essential attributes and abilities that characterize NCS practice. Importance and level of criticality have been found to be closely correlated in a previous ABPTS practice analysis and by others.15,16 Research on the validity of job analysis scales demonstrates that rating scales do provide useful information, although they are biased by human judgment.17

FIGURE 1.

Scales to measure frequency, importance, and level of judgment were used for the knowledge areas that neurologic clinical specialists use in their work. Frequency and importance ratings were identical to those described above, and a level of judgment scale was substituted for the level of criticality scale when rating knowledge. The use of a judgment scale helps to define whether examination questions should be written at the recall, application, or analysis level.1

Pilot Survey

The three Neurologic Specialty Council members developed a pilot survey based on the consensus process described above. The purpose of the pilot survey was to further distinguish elements of specialized neurologic clinical practice and gather feedback about the instrument. At the 2002 Combined Sections Meeting of the APTA, each council member gave the pilot survey to three neurologic specialists for a total of nine respondents. These respondents represented various regions of the country. A universal comment after completion of the survey was that the tool’s length was prohibitive. Minor revisions were made, but most of the survey items were retained for the next level of review.

The pilot survey was then mailed to 60 therapists (20 NCSs and 40 nonspecialists, randomly chosen from the APTA Neurology Section membership database), with a response rate of 33.3% (14 specialists and 16 nonspecialists). Subsequent data analysis helped determine whether all items should remain on the final survey. If the summed frequency, importance, and criticality scores (Fig. 1) for a specific survey item were ≤5.0, that item was reconsidered by the Council members for exclusion from the final survey. The following changes were made for the final version:

  • Six separate items were combined into two to reduce the level of detail;
  • Three items were omitted due to redundancy with items in other sections;
  • Two items were reworded for clarity;
  • Two items were omitted as independent items and added as examples to broad headings;
  • Ten items were omitted due to low relevance/importance (<2.0 total score).
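The ≤5.0 screening rule described above amounts to a simple filter on summed mean ratings; a minimal sketch follows, in which the item names, ratings, and function name are hypothetical, for illustration only.

```python
# Sketch of the pilot-survey screening rule: an item whose summed mean
# frequency, importance, and criticality ratings fall at or below the
# threshold (5.0) is flagged for the Council to reconsider.
# All item names and ratings here are hypothetical.

def flag_for_review(items, threshold=5.0):
    """Return the items whose summed mean ratings are <= threshold."""
    return [name for name, (freq, imp, crit) in items.items()
            if freq + imp + crit <= threshold]

pilot_ratings = {
    # item: (mean frequency, mean importance, mean criticality)
    "administers standardized balance measure": (3.2, 2.8, 2.1),
    "performs nerve conduction study": (0.9, 1.6, 1.4),
}

print(flag_for_review(pilot_ratings))  # only the low-scoring item is flagged
```

Note that flagged items were not dropped automatically; the Council members made the final decision about exclusion.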

Final Survey

The final survey consisted of seven sections: (1) Patient/Client Management Duties, (2) Other Professional Roles, (3) Specific Tests and Measures/Interventions, (4) Knowledge Areas, (5) Recommendations for Specialty Examination Content, (6) Individual Clinical Practice Percentages, and (7) Demographic Information. For sections 1, 2, and 3, the survey asked respondents to rate each item (a total of 202 items) on frequency, importance, and criticality (Fig. 1). Section 4 (121 items) used the same frequency and importance scales along with a level of judgment scale (Fig. 1). Since a major outcome of the survey was to be the specialty examination test plan, section 5 asked respondents to suggest specialty examination content percentages in four categories: duties, tests/measures, interventions, and knowledge. Section 6 sought information about the respondent’s own clinical practice by asking the percentage of caseload consisting of patients with various diagnoses/conditions/problems. Section 7 consisted of demographic questions. The final survey can be found at www.jnptextra.org.

Final Survey Administration

In October 2002, the survey was mailed to 590 randomly chosen physical therapist members of APTA’s Neurology Section, excluding members who had completed the pilot survey. Because only 327 certified specialists existed at that time, the target group included 278 certified specialists (NCSs) and 312 nonspecialists. The survey was accompanied by a cover letter asking respondents to return it within three weeks, and a postcard reminder was sent just before the requested return date.

Data Analysis

Demographic characteristics and ordinal survey data were analyzed descriptively. Means, SDs, and frequencies were computed for each item that was identical on the pilot and final surveys. Subgroup analysis (χ2) was done to determine whether there were significant differences in demographic characteristics between specialists and nonspecialists.
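The subgroup comparison can be illustrated with a pure-Python χ2 statistic for a contingency table; the respondent counts below are hypothetical, and in the actual analysis the P value would be obtained from a χ2 distribution with the computed degrees of freedom.

```python
# Pure-Python sketch of the chi-square statistic used for the subgroup
# comparison of demographic characteristics. Counts are hypothetical.

def chi_square(observed):
    """Chi-square statistic and degrees of freedom for an r x c table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return stat, dof

# rows: NCS and non-NCS respondents; columns: four practice settings
stat, dof = chi_square([[30, 28, 22, 18],
                        [15, 14, 11, 10]])
print(f"chi2 = {stat:.3f}, dof = {dof}")
```

With these nearly proportional counts the statistic is close to zero, consistent with no demographic difference between groups.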

Survey items were listed from highest to lowest mean ratings for frequency, importance, and criticality in the following survey categories: clinical sciences, foundation sciences, specific interventions and tests/measures, critical inquiry, consultation, virtuous behavior, leadership, education, and duties related to each element of the Patient/Client Management Model. The decision rules for including or omitting survey items in the ultimate description of specialty neurologic practice were discussed and agreed on by all subject matter experts.13 The group reviewed each category of the survey data based on (1) highest importance and criticality ratings, (2) natural breaks in the data, and (3) group consensus. The general decision rule was to exclude an item from the final practice description if its mean criticality rating was <1.9 (2.0 = moderately critical) and/or its mean importance rating was <2.0 (moderately important).
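As a minimal sketch, this general decision rule amounts to a filter applied before the SME group’s consensus review of natural breaks in the data; the item names and mean ratings below are hypothetical.

```python
# Sketch of the stated decision rule: an item is dropped from the final
# practice description if its mean criticality rating is < 1.9 and/or its
# mean importance rating is < 2.0. Item names and ratings are hypothetical.

def retain_item(mean_importance, mean_criticality):
    """Apply the general inclusion rule for the final practice description."""
    return mean_criticality >= 1.9 and mean_importance >= 2.0

survey_means = {
    # item: (mean importance, mean criticality)
    "interprets posturography results": (2.6, 2.3),
    "applies maintenance massage":      (1.4, 1.1),
}

retained = {name for name, (imp, crit) in survey_means.items()
            if retain_item(imp, crit)}
print(retained)  # only the highly rated item survives the filter
```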

RESULTS

One hundred eighty-seven Neurology Section members, including 120 specialists and 67 nonspecialists, responded to the survey, for an overall response rate of 32%. The response rate was 42% for NCSs and 25% for nonspecialists (Table 2). A χ2 test revealed no differences between specialists and nonspecialists except for geographic region (P = 0.007) and years in practice (P < 0.001) (Table 3). Practitioners in both groups responded from every region of the nation. While regional trends in neurologic practice may exist, their influence on the data would be limited given that survey respondents were not concentrated in one particular region. Table 4 illustrates a broad range of physical therapy experience in both the NCS and non-NCS respondent groups. The NCS group was fairly equally distributed in years of practice, while more than one third of the non-NCS group had more than 21 years of experience. The SME group judged that the relative differences in experience and geographic region would not substantially influence the data; therefore, the data from both groups were combined. In addition, data from the pilot survey were added, for a final N = 217. No respondent completed both the pilot and the final survey, as the two were sent to separate random samples. Data were combined because the ultimate goal was a representative sample of adequate size,1 and adequate generalizability has been demonstrated with samples of 200 to 400 respondents.18

TABLE 2

TABLE 3

TABLE 4

Approximately 73% of the respondents were 25–44 years of age, and 27% were 45 years or older; no respondents were younger than 25 years. Ninety-two percent were white, 4% were Asian, and 2% were African American, with the remaining respondents of other ethnic backgrounds. As shown in Figure 2, respondents came from a wide range of geographic regions, with the most from the East North Central region (18%) and the fewest from the East South Central region (5%). Sixty-eight percent held a baccalaureate or postbaccalaureate certificate, 30% a master’s degree, and only 1% a DPT degree as their professional level of education. Figure 3 illustrates that 70% of respondents had practiced physical therapy for more than 10 years and 26% for more than 21 years; in fact, this latter group contained the highest frequency of respondents. By contrast, 57% had practiced as neurologic physical therapists for more than 10 years and 13% for more than 21 years, with the highest frequency of respondents in the group that had practiced neurologic physical therapy for six to 10 years. Respondents worked in every practice setting, with 23% in a health system/hospital outpatient setting, 23% in academia, 17% in inpatient rehabilitation, and 15% in acute care.

FIGURE 2.

FIGURE 3.

Using the decision rules described above, the SME group analyzed the survey results to arrive at a final list of knowledge, skills, and abilities. Detailed statements were then derived from this list to formulate an outline description of current neurologic specialty practice. These statements are in the areas of knowledge (foundation, behavioral and clinical sciences, and sciences related to critical inquiry); professional roles, responsibilities, and values (professional behaviors including virtues, leadership, education, consultation, and evidence-based practice); and the elements of the Patient/Client Management Model.8 An abbreviated outline of the topics that were ultimately found to delineate neurologic specialty practice can be found in Table 5. A very detailed outline has been published elsewhere5 and is beyond the scope of this paper. Table 6 illustrates the final test plan. It includes the relative weight of each topic on the examination, represented by the percentage of test questions assigned to each topic. Section 5 on the survey asked respondents to recommend the percentage of examination questions devoted to each content area. The SME group considered the mean results from section 5, and then as a group made a judgment about what the percentages should be.1

TABLE 5

TABLE 6

Results from the survey section that asked about the percentage of therapist caseload represented by patients with various diagnoses can be found in Table 7. The most common diagnostic categories seen by respondents were stroke (29% of caseload), balance and vestibular disorders (17%), and traumatic brain injury (13%). This represents a rare summary of the types of patients seen by neurologic therapists. Therapist caseloads appear to vary widely, as evidenced by the large ranges and SDs in these data. Given that the ranges extended to 100% in some instances, it also appears that some therapists treat patients from only one diagnostic category.

TABLE 7

DISCUSSION

The findings of this study represent the most comprehensive description to date of specialized physical therapy practice in neurology. Survey data were analyzed primarily based on importance and criticality ratings rather than on frequency or level of judgment. It was the consensus of the SME group that even if a task, skill, or area of background knowledge was used infrequently, if respondents judged it to be of high importance, then it was considered a vital element of specialty practice. For example, some tasks are crucial for patient safety and, while rarely performed, should be represented on the test plan.1 Data from the level of judgment scale were intended to guide the future development and sorting of certification examination questions into categories of recall, application, or analysis.

The model developed by Jensen et al11 on expert practice ultimately influenced the description of neurologic specialty practice. Aside from knowledge and clinical reasoning, Jensen and colleagues observed certain personal attributes in the expert practitioners whom they studied, such as compassion, a commitment to learning, and caring for their patients. The SME group believed strongly that these types of professional and altruistic qualities should be represented on the survey in order to validate whether they indeed are a critical part of neurologic specialty practice. The results of the survey supported the inclusion of such virtuous attributes in the final neurologic specialty description and test plan.

The outcome of the current validation survey differs substantially from that of the previous study completed in 19934,9 and reflects changes in neurologic practice over the past 10 years. The Guide to Physical Therapist Practice8 provided a framework for the data sought on the survey (eg, the elements of patient/client management and the categories for tests and measures and interventions) and subsequently for the description of specialty practice. Many more objective tests and measures are included in the detailed outline, as are elements of evidence-based practice. There is less emphasis on evaluating reflexes (eg, developmental, righting, protective reflexes) as indicators of motor status, and more detail regarding the evaluative, prognostic, and diagnostic abilities of the therapist. Interventions in the current description are less a laundry list of techniques and more a reflection of the therapist’s ability to adapt, customize, and optimize treatment. As a summary of current specialty practice, the survey results may also be consulted by individuals developing curricula for postprofessional residency programs in neurologic physical therapy.

Survey responses from the NCS and non-NCS groups were combined for data analysis. The most important attribute about survey respondents in a practice analysis is that they are knowledgeable about the job and are therefore qualified to make judgments about the importance and frequency of job tasks.1 It is reasonable to expect that an NCS is qualified to make these judgments. The χ2 analysis was performed in an attempt to determine gross demographic differences between the NCS and non-NCS groups since such differences may reveal that the non-NCS group did not sufficiently “know the job.” In fact, there were no differences between the two samples except in geographic region and years of practice. Given that every geographic region was represented in each group, it is unlikely that this difference would signify the non-NCS group as less qualified to respond to the survey.

While the groups did not differ in age, the NCS group was fairly equally distributed across the six to 10, 11 to 15, 16 to 20, and >21 years of practice time frames (Table 4). A considerably larger proportion of the non-NCS group had been in practice >21 years (37% vs 20%). There were almost twice as many NCS as non-NCS respondents; therefore, a relatively small number of highly experienced individuals could have inflated the percentage of non-NCS respondents practicing >21 years. It may have been preferable to have a more experienced non-NCS group than a less experienced one, although it has been suggested that years of practice alone do not necessarily contribute to expertise.19,20 Moreover, varying levels of experience are important in a practice analysis so that judgments are not skewed toward those with less or more experience.6 Finally, the entire sample was diverse in many characteristics, which contributes to the overall generalizability of the results.1

Limitations of the study include the moderate survey response rate and the use of a sample of convenience. The survey was mailed to members of APTA’s Neurology Section. These individuals may have a bias about neurologic practice different from non-Section members; however, this seemed to be the most reasonable method to ensure that the practice and/or experience of the respondent were in the area of neurology. Another limitation is the combination of data from the pilot and final surveys for data analysis since these two surveys were slightly different from each other.

The SME group relied on consensus to interpret and truncate the data. Specialty Council members additionally edited the final description for clarity and to reduce redundancy. The published Description of Specialty Practice in Neurologic Physical Therapy5 is a working document and will be reviewed and revalidated on a recurring basis as practice patterns change over time.

CONCLUSION

The description of specialty practice for neurologic physical therapy is based on the patient/client model in the Guide to Physical Therapist Practice,8 with emphasis on the professional practice expectations, tests and measures, and intervention skills that distinguish a neurologic specialist from a nonspecialist. This description of practice was validated through a survey of neurologic physical therapy practitioners. It can serve future neurologic specialists as a self-assessment tool from which to develop a study guide to prepare for the certification examination.

REFERENCES

1.Raymond MR. Job analysis and the specification of content for licensure and certification examinations. Appl Meas Educ. 2001;14:369–415.
2.Wang N, Schnipke D, Witt EA. Use of knowledge, skill, and ability statements in developing licensure and certification exams. Educ Meas Issues Pract. 2005;24:15–22.
3.Downing SM, Haladyna TM. Handbook of Test Development. Mahwah, NJ: Lawrence Erlbaum Associates; 2006.
4.Riolo L, Gill-Body K, Straume C. Competency revalidation study: description of advanced clinical practice in neurologic physical therapy. Neurol Rep. 1995;19:48–50.
5.American Board of Physical Therapy Specialties. Neurologic Physical Therapy Description of Specialty Practice. Alexandria, VA: American Physical Therapy Association; 2004.
6.Landy F, Vasey J. Job analysis: the composition of SME samples. Pers Psychol. 1991;44:27–50.
7.Fine SA, Cronshaw SF. Functional Job Analysis: A Foundation for Human Resources Management. Mahwah, NJ: Lawrence Erlbaum Associates; 1999.
8.American Physical Therapy Association. Guide to Physical Therapist Practice. 2nd ed. Phys Ther. 2001;81:9–746.
9.American Board of Physical Therapy Specialties. Neurologic Physical Therapy Description of Advanced Clinical Practice. Alexandria, VA: American Physical Therapy Association; 1993.
10.Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone; 1997.
11.Jensen GM, Gwyer J, Shepard KF, Hack LM. Expert practice in physical therapy. Phys Ther. 2000;80:28–43.
12.Williams KM, Crafts JL. Inductive job analysis. In: Whetzel DL, Wheaton GR, eds. Applied Measurement Methods in Industrial Psychology. Palo Alto, CA: Davies-Black Publishing; 1997:51–88.
13.Raymond MR, Neustel S. Determining the content of credentialing examinations. In: Downing SM, Haladyna TM, eds. Handbook of Test Development. Mahwah, NJ: Lawrence Erlbaum Associates; 2006:181–224.
14.Wilson MA, Harvey RJ, Macy BA. Repeating items to estimate the test-retest reliability of task inventory ratings. J Appl Psychol. 1990;75:158–163.
15.American Board of Physical Therapy Specialties. Orthopedic Physical Therapy Description of Specialty Practice. Alexandria, VA: American Physical Therapy Association; 2002.
16.Sanchez JL, Fraser SL. On the choice of scales for task analysis. J Appl Psychol. 1992;77:545–553.
17.Morgeson FP, Campion MA. Social and cognitive sources of potential inaccuracy in job analysis. J Appl Psychol. 1997;82:627–655.
18.Kane MT, Miller T, Trine M, et al. The precision of practice analysis results in the professions. Eval Health Prof. 1995;18:29–50.
19.Jensen GM, Shepard KF, Hack LM. The novice versus the experienced clinician: insights into the work of the physical therapist. Phys Ther. 1990;70:314–323.
20.Resnik L, Jensen GM. Using clinical outcomes to explore the theory of expert practice in physical therapy. Phys Ther. 2003;83:1090–1106.
Keywords:

neurologic physical therapy; specialty practice

© 2008 Neurology Section, APTA