A Quality Improvement Project in Balance and Vestibular Rehabilitation and Its Effect on Clinical Outcomes

ALMohiza, Mohammad A. PT, PhD; Sparto, Patrick J. PT, PhD; Marchetti, Gregory F. PT, PhD; Delitto, Anthony PT, PhD; Furman, Joseph M. MD, PhD; Miller, Debora L. PT, MBA; Whitney, Susan L. DPT, PhD

Journal of Neurologic Physical Therapy: April 2016 - Volume 40 - Issue 2 - p 90–99
doi: 10.1097/NPT.0000000000000125
Research Articles

Background and Purpose: Unwarranted variation in practice is among the principal contributors to suboptimal outcomes in health care. This variation can be minimized via quality improvement initiatives. However, quality improvement projects focus mostly on assessing processes, and less attention is given to the effect of the variation on clinical outcomes. An effective implementation of a clinical treatment algorithm (CTA) could improve care for individuals with balance and vestibular disorders. The first aim of this quality improvement project was to examine adherence to a CTA developed by physical therapists who treat persons with balance and vestibular disorders. The second aim was to examine the effect of adherence on patient outcomes.

Methods: Twenty-three physical therapists who provided rehabilitation for individuals with balance and vestibular disorders participated in the quality improvement project. All physical therapists worked for the same health care provider and developed the minimum data set and CTA. The physical therapists were cluster randomized into 2 groups; both groups received educational training and reminders regarding adherence to the CTA. The first group (initial group) received the training and reminders after an 8-week baseline period, and the second group (delayed group) received them after a 12-week baseline period. The prescribed interventions were classified as being adherent or nonadherent to the CTA. Clinical outcomes, including the Activities-Specific Balance Confidence (ABC) scale, the Dizziness Handicap Inventory (DHI), and the Global Rating of Change (GRC), were recorded at the initial evaluation and discharge for 454 individuals with balance or vestibular disorders.

Results: Across the 16-week project, adherence rates improved significantly, by 9% and 12% for the initial and delayed groups, respectively (P = 0.008), but there was no difference between groups related to the timing of the educational training and adherence reminders. Clinical outcomes improved for individuals with balance or vestibular disorders, but there were no differences in the change in ABC, DHI, and GRC scores based on whether the interventions were or were not adherent to the CTA.

Discussion and Conclusions: This quality improvement project was effective in increasing adherence to the CTA in both groups. Although on average individuals with balance and vestibular disorders showed improvement on the clinical outcomes, there was no additional benefit in clinical outcomes for adherent interventions.

Video abstract is available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A125).

School of Health and Rehabilitation Sciences (M.A.A., P.J.S., A.D., D.L.M., S.L.W.), University of Pittsburgh, Pittsburgh, Pennsylvania; College of Applied Medical Sciences (M.A.A.), King Saud University, Riyadh, Saudi Arabia; Rangos School of Health Sciences (G.F.M.), Duquesne University, Pittsburgh, Pennsylvania; School of Medicine (J.M.F.), University of Pittsburgh, Pittsburgh, Pennsylvania; and Rehabilitation Research Chair (S.L.W.), Department of Rehabilitation Sciences, King Saud University, Riyadh, Saudi Arabia.

Correspondence: Mohammad A. ALMohiza, PT, PhD, King Saud University, Riyadh, Saudi Arabia (mmohiza@ksu.edu.sa).

Part of this work was presented as a poster at the APTA Combined Sections Meeting 2015.

The authors declare no conflict of interest.

Supplemental digital content is available for this article. Direct URL citation appears in the printed text and is provided in the HTML and PDF versions of this article on the journal's Web site (www.jnpt.org).

INTRODUCTION

In the United States, the level of adherence of health care to quality standards is unknown.1,2 Approximately 40% of patients do not receive evidence-based interventions and approximately 25% of patients receive unnecessary care.3,4 Underuse, overuse, and misuse of care are quality issues that could harm patients.5 Unwarranted variation in practice is one of the leading causes of inadequacies observed in health care. To address this issue, there needs to be a focus not only on establishing evidence-based interventions but also on implementing these evidence-based practices in everyday care.6 Quality improvement initiatives have been demonstrated to reduce variation in practice and improve patient outcomes.7

Quality improvement is defined as a continuous organized process of using quality quantifiers to detect problems and to apply plans to enhance the quality of care.8 Many studies have shown advantages of quality improvement approaches across aspects of health care, including improved patient and provider satisfaction and decreased process variation and health care costs.9–11 However, the effects of quality improvement on health care outcomes are ambiguous.12 Several studies reported that quality improvement initiatives in the United States and Canada were either not successful or reported less than 50% success.13–15 Lynn and colleagues16 suggested that to achieve a successful quality improvement project, the quality improvement process should be part of clinicians' daily practice. Clinicians should be engaged in quality improvement, which helps them gain more insight into the process of care, understand it, and improve it.16

Clinical practice does not always reflect research findings and clinical guidelines.17–20 Clinical guidelines are defined as a set of rules that inform clinical decision making and provision of care,21 and clinical treatment algorithms attempt to standardize the selection of treatment approaches. Adherence, which is defined as prescribing the correct intervention according to clinical guidelines/rules, is associated with improvements in quality of care.22 Locally developed guidelines and algorithms can be more effective than national guidelines, particularly when combined with management monitoring such as reminders.23 McGuirk et al24 reported a greater improvement in outcomes of patients with low back pain who were treated in clinics that utilized evidence-based guidelines developed for their study compared with patients who received usual care. However, quality improvement projects and implementation of guidelines concentrate primarily on evaluating processes, with less focus given to the effect of adherence to guidelines on clinical outcomes.25 Failure to change clinical behavior can decrease the chances of improvement in the quality of health care.26 Behavioral intervention strategies that target clinical behavior include educational material dissemination,27 continuing medical education,28 and reminders.29

Our main goal was to implement and evaluate a quality improvement initiative in a neurologic outpatient practice specializing in care for persons with balance and vestibular disorders. The process of quality improvement implementation typically includes development of a minimum data set (MDS) and clinical treatment algorithm (CTA). We examined the effect of implementing a behavioral intervention, consisting of educational training and adherence reminders, on adherence to the CTA. In addition, we examined whether adherence to the CTA had a beneficial effect on patient outcomes. Adherence in this study is defined as the consistency with which clinicians used the CTA during the initial evaluation. Compliance is defined as the completion of the MDS.

METHODS

A 16-week quality improvement project was carried out among physical therapists employed by UPMC Centers for Rehab Services (CRS) of the University of Pittsburgh Medical Center (UPMC). The project was reviewed and approved by both the CRS and the UPMC Quality Improvement subcommittees, which determined that the project did not meet the federal definition of research and did not require oversight by the institutional review board.

The physical therapist sample (n = 23) worked at 15 outpatient neurological specialty clinics. The therapists had a mean of 7 ± 6 years of experience. Patients with balance and vestibular disorders of either peripheral or central etiology were included.

Facilities were cluster randomized into 2 groups according to the following characteristics: the number of physical therapists in each facility, the number of full-time versus part-time physical therapists, and the number of physical therapists who split their time at more than one facility. The groups were then randomly assigned to the initial or delayed group. Physical therapists were blinded to their group assignment and to the difference between the 2 groups in terms of the timing of the behavioral intervention strategies.

Minimum Data Set Development

The MDS was created over 13 months, with discussion about what data were “essential” to collect, data meaningfulness, and practicality. Two versions of the MDS were piloted over the 13 months, and the final MDS included the International Classification of Diseases, Ninth Revision (ICD-9) codes, medical history, date of onset, history of falls, symptoms of dizziness when (1) getting out of bed, (2) moving the head quickly, and (3) rolling in bed, the Dizziness Handicap Inventory30 (DHI), the head thrust test,31 dynamic visual acuity,32 provocation of symptoms during vestibulo-ocular reflex cancellation,33 ocular convergence testing,34 positional testing,35,36 the modified Clinical Test of Sensory Interaction and Balance (mCTSIB),37,38 gait speed,39 the Activities-Specific Balance Confidence (ABC) scale,40 the 4-item Dynamic Gait Index,41,42 and the plan of care. The plan of care was a list of treatment categories43 that served as intervention choices on the basis of the examination findings, including eye-head activities, balance activities, an ambulation program, the canalith repositioning maneuver, optokinetic training, and patient education.

Clinical Treatment Algorithm Development

A draft of the CTA was presented to the physical therapists, who discussed, tried, and improved the algorithm over 6 months. A clinical algorithm was provided for 10 of the 16 items of the MDS. These rules were based on the best available evidence and consensus (Table 1).

Table 1

Quality Improvement Behavioral Intervention

Behavioral intervention strategies included dissemination of educational material, reminders, and educational training (Figure 1). Educational material (the MDS and CTA) was emailed to the physical therapists 1 week before the starting date of the project. Physical therapists were reminded via email if they omitted one or more items on the MDS (compliance reminder) and were given 2 weeks to complete the missing items or justify their decision for not completing them.
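
As an illustration only, the following Python sketch shows how an initial evaluation could be screened for missing MDS items to trigger a compliance reminder. The field names are hypothetical stand-ins for the actual MDS items; the project itself used faxed paper forms with manually sent email reminders.

    # Illustrative sketch only: field names are hypothetical stand-ins for the MDS items.
    MDS_ITEMS = [
        "icd9_code", "medical_history", "date_of_onset", "falls_history",
        "dizziness_symptoms", "dhi", "head_thrust", "dynamic_visual_acuity",
        "vor_cancellation", "convergence", "positional_testing", "mctsib",
        "gait_speed", "abc", "dgi_4item", "plan_of_care",
    ]

    def missing_mds_items(evaluation):
        """Return the MDS items left blank on an initial evaluation form."""
        return [item for item in MDS_ITEMS if evaluation.get(item) in (None, "")]

    def needs_compliance_reminder(evaluation):
        """A compliance reminder is sent when one or more MDS items are missing."""
        return len(missing_mds_items(evaluation)) > 0

    # Example: an evaluation missing gait speed and the ABC scale triggers a reminder.
    example = {item: "recorded" for item in MDS_ITEMS}
    example["gait_speed"] = ""
    example["abc"] = None
    print(missing_mds_items(example))          # ['gait_speed', 'abc']
    print(needs_compliance_reminder(example))  # True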

Figure 1

An overall compliance rate of 90% to 95% in both groups was deemed necessary before starting the next phase, consisting of educational training and adherence reminders, to ensure that enough information was being collected on the forms to evaluate the physical therapists' performance.8,44 This part of the intervention was delivered first to the initial group, followed by the delayed group 4 weeks later. Educational training included a webinar, a short test, and competency training and testing. The webinar consisted of 1.5 hours of theoretical information that was videotaped and distributed. A brief test, including questions about the tests and measures in the evaluation forms and the corresponding treatment rules, was completed. Competency training consisted of a practical session followed by a competency test. Adherence reminders were emailed to the initial and delayed groups after they completed the educational training. Physical therapists were given 2 weeks from receipt of the adherence reminders to correct the treatment choice according to the CTA or to justify their decision regarding the plan of care.

Data Collection

Physical therapists transmitted the initial evaluation to data managers who were trained to de-identify and extract the data from the evaluation forms.

Type of Data Collected

Compliance was defined as completing all required items of the MDS. The initial evaluation form had 6 treatment categories (education, eye-head activities, optokinetic training, canalith repositioning, balance activities, and ambulation program), which were classified as adherent, nonadherent, or overutilized on the basis of the CTA. A treatment category was considered adherent when it was checked off and recommended by the CTA, or when it was not checked off and not recommended by the CTA. A treatment category was considered nonadherent when it was not checked off on the evaluation form although it was recommended by the CTA. When a treatment category was checked off on the evaluation form although it was not supported by the CTA, it was classified as overutilized. An initial evaluation was considered adherent if none of the 6 treatment categories on the evaluation form was classified as nonadherent. Mean compliance in completing the MDS and adherence to the CTA were calculated weekly.
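
A minimal Python sketch of this classification logic is given below; the category names mirror the 6 treatment categories, and the CTA recommendations in the example are hypothetical, not the actual algorithm rules.

    # Minimal sketch of the classification described above; example values are hypothetical.
    CATEGORIES = ["education", "eye_head", "optokinetic",
                  "canalith_repositioning", "balance", "ambulation"]

    def classify_category(checked, recommended):
        """Classify one treatment category as adherent, nonadherent, or overutilized."""
        if checked == recommended:
            return "adherent"        # prescribed when recommended, or withheld when not
        if recommended:              # recommended by the CTA but not checked off
            return "nonadherent"
        return "overutilized"        # checked off although not supported by the CTA

    def evaluation_is_adherent(checked, recommended):
        """An evaluation is adherent if no category is classified as nonadherent."""
        labels = [classify_category(checked[c], recommended[c]) for c in CATEGORIES]
        return "nonadherent" not in labels

    # Example: balance activities prescribed without a CTA recommendation (overutilized);
    # because no category is nonadherent, the evaluation still counts as adherent.
    checked = dict.fromkeys(CATEGORIES, False)
    checked.update(education=True, balance=True)
    recommended = dict.fromkeys(CATEGORIES, False)
    recommended.update(education=True)
    print(evaluation_is_adherent(checked, recommended))  # True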

The weekly adherence percentage was averaged for each physical therapist for the pre- and postquality improvement (QI) strategy implementation periods. The number and type of reminders (compliance or adherence) sent to each group were recorded. Compliance and adherence rates used in the analyses were the rates calculated after the physical therapists responded to the compliance or adherence reminders. At the end of week 16, a 2-week washout period was dedicated to sending reminders and collecting additional responses.
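
For concreteness, a small sketch of this averaging step is shown below; the week numbers and rates are invented for illustration.

    # Sketch of averaging a therapist's weekly adherence rates before and after the
    # QI intervention; week numbers and rates below are invented for illustration.
    from statistics import mean

    def period_adherence(weekly_rates, qi_start_week):
        """Return (preintervention mean, postintervention mean) adherence."""
        pre = [rate for week, rate in weekly_rates.items() if week < qi_start_week]
        post = [rate for week, rate in weekly_rates.items() if week >= qi_start_week]
        return mean(pre), mean(post)

    rates = {1: 0.75, 2: 0.80, 3: 0.78, 8: 0.90, 9: 0.96}
    pre_mean, post_mean = period_adherence(rates, qi_start_week=8)
    print(round(pre_mean, 2), round(post_mean, 2))  # 0.78 0.93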

Three patient self-report outcome measures were agreed upon and used as benchmarks of patient improvement. Clients completed the ABC scale,40 the DHI,30 and the 15-point Likert Global Rating of Change (GRC)45 at least every 2 weeks. These outcome measures were selected because they primarily address activities and participation,46 and they were largely independent of the rules in the CTA, except for the prescription of education for people with ABC scores below 70%.

The ABC is a self-report tool that quantifies the difficulty of performing activities and fear of falling in elderly individuals.40 It is scored from 0% to 100%, and higher scores indicate a more confident individual.40 The sensitivity and specificity of the ABC for falls prediction in community-dwelling older adults were 84% and 87%, respectively.47 The ABC has high internal consistency (α = 0.96) and good test-retest reliability (r = 0.92, P < 0.001).40 The ABC has a strong correlation with the DHI (r = 0.64, P < 0.001), which indicates convergent validity.48 The cutoff score for the ABC is 67% for fall risk.47 Our group of experts' clinical rule for the ABC was to provide education for patients with an ABC score less than 70%.
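
The sketch below simply encodes this threshold rule as a hypothetical helper, written only to make the rule explicit.

    # Hypothetical helper encoding the CTA rule described above (educate when ABC < 70%).
    def education_recommended_by_abc(abc_score):
        """Return True when the CTA rule recommends fall-risk education."""
        if not 0 <= abc_score <= 100:
            raise ValueError("ABC scores range from 0% to 100%")
        return abc_score < 70

    print(education_recommended_by_abc(61))  # True: education recommended
    print(education_recommended_by_abc(85))  # False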

The DHI was designed to record the handicapping effect of dizziness.30 It is scored from 0 to 100, with lower scores indicating less handicap.30 The DHI has good internal consistency for the total score (α = 0.89), satisfactory internal consistency for the subscales (α = 0.72-0.85), and high test-retest reliability (α = 0.97).30 Discriminant validity has also been established through a good relationship between DHI scores and the number of dizziness episodes.49 The DHI has also been found to be responsive to change as a measure in vestibular rehabilitation.50 Our group of experts did not establish a clinical decision rule for the DHI total score or its subitems.

The Likert GRC measures the change in health status from the patient's perspective and assists in determining whether patients perceive that they are improving.45,51,52 The GRC is a 15-point scale ranging from +7 (a very great deal better) to −7 (a very great deal worse), with 0 indicating no change.45 The GRC was divided into 3 ranks of change: +1 to +3 or −1 to −3, which indicate small change; +4 and +5 or −4 and −5, which indicate moderate change; and +6 and +7 or −6 and −7, which indicate large change.45
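
A small sketch of this mapping, written as a hypothetical helper for illustration, follows.

    # Hypothetical helper mapping a GRC score (-7..+7) to the ranks described above.
    def grc_rank(grc):
        magnitude = abs(grc)
        if magnitude == 0:
            return "no change"
        if magnitude <= 3:
            return "small change"
        if magnitude <= 5:
            return "moderate change"
        return "large change"

    print([grc_rank(score) for score in (0, 2, -4, 7)])
    # ['no change', 'small change', 'moderate change', 'large change']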

The ABC and DHI were part of the MDS, and baseline scores were retrieved from the initial evaluation forms. The GRC was administered only at follow-up and discharge. For the ABC, DHI, and GRC, when the discharge data were missing for an outcome measure, the intention-to-treat principle was applied by considering the most recent follow-up data as the discharge data.
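
In effect this is a last-observation-carried-forward rule; a minimal sketch, with invented scores, is shown below.

    # Minimal sketch of carrying the most recent follow-up score forward when the
    # discharge score is missing; the scores below are invented for illustration.
    def discharge_score(follow_up_scores, discharge=None):
        """follow_up_scores is assumed to be in chronological order."""
        if discharge is not None:
            return discharge
        return follow_up_scores[-1] if follow_up_scores else None

    print(discharge_score([42, 30, 24]))      # 24: last follow-up used as discharge
    print(discharge_score([42, 30, 24], 18))  # 18: actual discharge score available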

Statistical Analyses

A mixed-factor analysis of variance (ANOVA) was used to test the effects of group assignment (initial and delayed) and time (before and after the QI intervention) on adherence to the CTA. Two models were considered. First, we examined the adherence in both groups for the period of 4 weeks before and after the initial group started the QI intervention, which allowed us to determine whether adherence changed in the initial group that received the intervention compared with the delayed group that did not. Next, we investigated the adherence before and after each group started the QI intervention (ie, weeks 1-7 vs weeks 8-16 for the initial group, weeks 1-11 vs weeks 12-16 for the delayed group), which permitted us to examine whether the change in adherence after the training was different across both groups. To address the second goal, a mixed-factor repeated-measures ANOVA was used to examine the effects of time (initial evaluation and discharge scores), adherence (adherent and nonadherent), and their interaction on the ABC and DHI scores. For the GRC, the discharge data for the adherent and nonadherent prescriptions were compared using the Mann-Whitney U test. A significance level of 0.05 was used.
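
As one concrete example, the GRC comparison could be run with SciPy's Mann-Whitney U test as sketched below; the scores are fabricated placeholders, not data from this project.

    # Sketch of the GRC comparison using SciPy; scores are fabricated placeholders.
    from scipy import stats

    grc_adherent = [5, 6, 4, 7, 5, 3, 6, 5]   # hypothetical discharge GRC scores
    grc_nonadherent = [4, 5, 3, 6, 4, 5]

    u_statistic, p_value = stats.mannwhitneyu(grc_adherent, grc_nonadherent,
                                              alternative="two-sided")
    print(f"U = {u_statistic:.1f}, P = {p_value:.3f}")  # compared at the 0.05 level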

RESULTS

Twenty-three physical therapists participated in the project; 4 physical therapists were excluded because they were not assigned to a specific clinic. Nineteen physical therapists were randomized into 2 groups: 9 in the initial group and 10 in the delayed group. One physical therapist was dropped from the delayed group for not faxing any evaluation forms in the postintervention period. Therefore, 18 physical therapists were included in the analyses (Figure 1). A total of 732 initial evaluation forms were received, and 152 were not included because they were submitted by the 5 excluded or dropped therapists. Of the remaining 580 evaluation forms, 276 patients were seen by physical therapists in the initial group and 304 patients were seen by the delayed group.

Compliance

Compliance reminders were sent over the 16-week project. Figure 2 shows an increase in the weekly compliance rates before and after the physical therapists responded to the compliance reminders. Compliance reminders affected the prereminder compliance rates mainly within the first 4 weeks; after that, the compliance rates remained steady at higher levels.

Figure 2

A compliance rate of 90% to 95% or more was reached for both groups by week 7; therefore, the initial group began to receive the QI intervention during week 8. The delayed group received the QI intervention 4 weeks after the initial group, during week 12. All therapists involved in the QI project viewed the webinar, and the mean online test score was 96% in both groups.

Adherence

To examine the effect of the QI intervention on the adherence of both groups of physical therapists to the CTA, the adherence rates were averaged for the 4 weeks before and after the initial group received the QI intervention. The average change in adherence was an increase of 5% across both groups (Table 2); however, there was no significant difference between groups or within groups from 4 weeks before to 4 weeks after the intervention, and no interaction was detected. Figure 3 illustrates the adherence rates over the course of the 16-week project. Adherence rates increased in the initial group up to week 8, when they received the QI intervention, and then decreased in the 4-week period after. For the delayed group, there was no discernible effect before or after week 8.

Figure 3

Table 2

Next, the effect of the QI intervention was examined using the average of adherence rates before and after the QI intervention was provided to each group (ie, initial group: 7 weeks before, 9 weeks after; delayed group: 11 weeks before, 5 weeks after). Average adherence rates increased significantly for both groups after the QI intervention (P = 0.008). The average change in adherence rates was 11% across both groups. The between-group effect was not significant, indicating that the adherence rates were similar. The interaction effect was not significant (Table 2).

There was variation in adherence: 4 of the physical therapists demonstrated 100% adherence in both the pre- and postintervention periods, and adherence improved from 75%-95% preintervention to 100% postintervention in 6 of the physical therapists. Adherence decreased in 2 of the physical therapists. Of the 4 physical therapists who did not participate in the development of the CTA, 2 showed 100% adherence postintervention.

When the postintervention period was compared with the preintervention period, overutilized treatment categories were reduced by 16%. The balance activities category was the most overutilized treatment, and canalith repositioning was the least.

Effect of Adherence on Outcomes

The forms submitted by all 23 physical therapists who participated in the study were compared (adherent vs nonadherent) regardless of the randomized assignment to the initial and delayed groups. Of the 732 initial evaluations, no follow-up information was available for 278 patients. Thus, outcomes were examined for 454 patients. Table 3 shows the number of initial evaluation forms that were included in the analysis of each clinical outcome. The number of patients who received adherent care was 397, and the number who received nonadherent care was 57. Age differed significantly between patients who received adherent and nonadherent care; patients who received adherent care were younger (mean age of 34 and 51 years, respectively) (P < 0.001). Adherence did not differ by sex or duration of symptoms.

Table 3

A mixed-factor repeated-measures ANOVA was performed to compare baseline and discharge scores for the ABC (Table 4). Baseline scores were significantly different between patients who received adherent and nonadherent treatment, with greater ABC scores in those who received adherent treatment (mean = 73, standard deviation = 24) compared with nonadherent treatment (mean = 61, standard deviation = 26). Thus, the baseline score was entered into the model as a covariate. The model was also adjusted for patients' age because it was significantly different between the adherent and nonadherent treatment groups. The effect of baseline scores as a covariate was significant (P < 0.001), explaining 83% of the variance in the ABC scores. The effect of patients' age as a covariate was also significant (P < 0.001), explaining 5% of the variance in the ABC scores. There was a significant effect of time on the ABC scores (P < 0.001) with a large effect size (partial η2 = 0.4). The improvement in the ABC scores was 12 points across both groups. There was no difference in the change in the ABC scores between the adherent and nonadherent treatment (P = 0.4). In addition, the interaction between time and adherence was not significant (P = 0.4).
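
For reference, partial eta squared is computed from the effect and error sums of squares; the sketch below uses illustrative numbers, not the project's actual sums of squares.

    # Partial eta squared = SS_effect / (SS_effect + SS_error); values are illustrative only.
    def partial_eta_squared(ss_effect, ss_error):
        return ss_effect / (ss_effect + ss_error)

    # An effect accounting for 400 units of variance against 600 units of error
    # yields the large effect size reported above.
    print(partial_eta_squared(400.0, 600.0))  # 0.4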

Table 4

For the DHI, baseline scores were not different between adherent and nonadherent treatment (P = 0.06), but the model was adjusted for patients' age because it was significantly different between adherent and nonadherent treatment (Table 4). The effect of patients' age as a covariate was significant (P < 0.001), explaining 3% of the variance in the DHI scores. The effect of time on the DHI scores was significant (P < 0.001). The improvement in the DHI scores was 17 points across both groups. Adherence did not significantly affect the change in the DHI scores (P = 0.3). The interaction effect between time and adherence was also not significant (P = 0.8). Patients reported a mean GRC score of 5 when the treatment was adherent and 4.5 when the treatment was nonadherent (Table 4).

DISCUSSION

As a quality improvement project, it is important to state that the results are not generalizable because the project was conducted within a single health care provider, with physical therapists who developed their own MDS and CTA. However, other providers engaged in QI may benefit from this report of the process that was undertaken. Our results showed an 11% improvement in adherence rates to the CTA from the period before the QI intervention to the period after. Improvement in adherence to clinical guidelines after guideline implementation has been reported to be 5% to 10%.53 Bekkering et al54 defined the important difference in adherence to clinical guidelines as 20%, yet they concluded that a 20% improvement may be optimistic. Bekkering et al54 compared the adherence of physical therapists to low back pain clinical guidelines and found a difference of 12% in adherence rates in favor of the intervention group.

Previous work has shown that quality improvement projects that involve the clinicians in the process are likely to be successful.16 Overall, our QI project demonstrated high adherence rates, approximately 80% before the intervention and above 90% after the intervention was provided. Because most of the physical therapists in our study were involved in developing the CTA, this could have played a role in the high adherence rates. Participating physical therapists had been involved for over 5 years in monthly meetings that reviewed the latest findings in balance and vestibular rehabilitation. All new physical therapists in the system are educated about vestibular disorders and must undergo training at one of the hub centers before they practice at their outpatient facility. The similarity of training and the cross-talk of these physical therapists most likely contributed to the fact that there was less change than expected in this cohort of physical therapists.

The response rates to reminders of both types were not very high in this quality improvement project. The reason for this was in part related to the system we used to remind the physical therapists, which was email. To respond to the reminders, the physical therapists needed to retrieve a patient's information and re-fax or email their responses, which consumed time and effort. A more efficient method would be an electronic data entry system that promptly reminded the physical therapists about their lack of compliance with the MDS and/or nonadherence to the CTA and required a response before the electronic evaluation form could be submitted. Furthermore, electronic reminders have been reported to increase the chance of receiving care according to clinical guidelines compared with paper-based reminders.55 In a study by Sequist et al,55 35% of the physicians participating in the survey reported that the electronic reminders encouraged them to act according to the recommendations. McDonald et al56 reported that compliance rates (to order a test or record a finding) increased significantly from 12% and 20% to 23% and 49%, respectively, and that the adherence rate (to alter a treatment plan) increased significantly from 29% to 43%; they used electronic reminders to cue the physicians during the intervention period.56 However, it appears that giving clinicians the choice to act upon reminders or ignore them is not as effective as having them either act upon the reminders or briefly justify their nonrecommended decision. The reasons given by clinicians for not adhering to clinical guidelines at the time of the visit would be valuable information for amending the guidelines to fit atypical cases. Litzelman et al57 found that physicians who were required to respond to electronic reminders showed significantly higher compliance rates (46%) than those who received electronic reminders without being required to respond (38%). A systematic review reported that the overall median change in adherence to guidelines as a result of electronic reminders was 6%; however, when a response to the reminders was required, the median increased to 13%.58

In this study, compliance rates were higher after the physical therapists responded to the compliance reminders, mainly within the first 4 weeks. These findings suggest that reminders were very effective in increasing the rates of compliance and that the physical therapists retained that improvement throughout the remainder of this quality improvement project. Completeness and consistency of medical records have been reported to be essential for evaluating clinicians' performance.8,44 In contrast, a moderate increase in the weekly adherence rates occurred mainly after adherence reminders and educational training were provided to each group. The moderate increase in adherence rates was most likely related to the webinar and competency training/testing rather than the adherence reminders, because only a small number of adherence reminders were sent. Educational training has been reported to be more effective than dissemination of educational materials in changing clinical behavior,17,59 and educational training and reminders have been reported to be similarly effective.59 Moreover, an intervention that combines educational training and reminders is expected to be more effective in changing clinical behavior than either intervention alone.17,59

This quality improvement project was effective in decreasing the percentage of overutilized treatments after the behavioral intervention was provided. Overuse was defined as providing treatment that lacked evidence of effectiveness.60 The use of an algorithm could have an important effect in reducing overutilization, particularly when the algorithm is simple and easy to use.61 The canalith repositioning maneuver was overutilized the least, which we attribute to the specific rule indication (if positional testing is negative, then the canalith repositioning maneuver should not be provided). These findings support the idea that developing guidelines that contain rules for negative examination results as well as positive results leads to less overutilization of treatment. Balance activities were overutilized most frequently. It is possible that, because there were no examination findings that precluded balance activities, therapists may have prescribed them on the basis of indications other than the mCTSIB results. A review stated that well-established guidelines have a promising potential to decrease overutilization.62

Both adherent and nonadherent treatment demonstrated improvement in the scores of the ABC, DHI, and GRC. However, being adherent to the CTA did not enhance the improvement in the ABC, DHI, and GRC scores compared with being nonadherent. Bekkering et al63 found that the guideline implementation strategies they used did not improve outcomes of patients with low back pain. In their study, the control group received the guidelines by email, whereas the intervention group received education sessions, group discussion, role playing, feedback, and reminders.63 Similar to our study, they also found improvement in the outcomes of patients treated by physical therapists in both groups.63

We compared the change in the ABC and DHI in this project with the change in the same measures in a study by Meretta et al.64 The change in the ABC was similar (12 points) in both the Meretta et al64 study and in this study. The change in the DHI was higher in this study (17 points) than in the Meretta et al64 study (11 points). The results of this quality improvement project in terms of improvement in the ABC and DHI were therefore comparable to those of previous studies.

Cherkin et al65 reported that providing education to clinicians did not enhance patient outcomes. Conversely, Fritz et al22 reviewed patient information retrospectively and classified patient records as adherent or nonadherent. They reported that patients who were treated in adherence with the guidelines showed greater improvement in disability and pain and were more likely to achieve a successful physical therapy outcome than those receiving nonadherent care. Adherence to clinical guidelines has thus been found to enhance clinical outcomes22; however, our results demonstrated no enhancement in the outcomes as a result of the implementation of the CTA, a finding consistent with the Bekkering et al63 study.

Not all clinical outcomes in this study were related to specific clinical rules. The CTA recommended a fall risk education program when a person's ABC score was less than 70%. However, it is ideal to provide persons with balance and vestibular disorders with education regarding falls and other balance and vestibular problems whether or not it is recommended by the CTA. Falls education in this study was frequently provided regardless of the ABC scores. Also, the DHI and GRC were not linked to any rule in the CTA. It has been reported that measuring outcomes that are not responsive to guidelines may contribute to findings that guidelines have little effect on patient outcomes.66 The ABC, DHI, and GRC were global clinical outcomes that the physical therapists chose to collect in this study as part of having the physical therapists involved in selecting the important indicators for improvement. Therefore, these global clinical outcomes might not have been responsive to the CTA because they were not rule-specific clinical outcomes. Future studies should use outcome measures that are responsive to the CTA and ensure that treatment recommendations in the CTA are based on high-quality evidence. Gaps in evidence within guidelines and treatment algorithms should inform future research.

Another limitation of this quality improvement project was that the sample of physical therapists may not have been large enough to ensure sufficient statistical power to detect a significant effect of the behavioral intervention.67 Information related to the effect of adherence to guidelines on a broader array of clinical outcomes would be valuable. A design that includes a prequality improvement period (baseline) would provide more information on the clinical behavior of physical therapists before any intervention was provided. Data were collected in this project by faxing the evaluation forms and the follow-up and discharge data, which was time consuming and burdensome for the physical therapists. Electronic data entry might have decreased the amount of missing data, captured any changes in the plan of care, and provided more comprehensive information about adherence.

Finally, the treatment categories were generic and could cover a wide variety of treatment modalities and exercises, which might have led to the selection of treatment categories that were not recommended by the CTA (overutilization) and thereby possibly inflated the adherence rate. The physical therapists were required to indicate the plan of care during the initial evaluation; however, they were not required to inform the investigators of any changes in the plan of care on subsequent visits, which could have changed the classification from adherent to nonadherent or vice versa.

CONCLUSIONS

The project demonstrated a similar level of improvement in adherence to the CTA between groups. Both groups' adherence levels improved, and overutilization of treatment decreased. Completeness of the evaluation forms (MDS) improved over the 16 weeks of the study. The high adherence rates in both groups from the beginning of this project may be because the rules were broad and were developed and agreed upon by most of the participating physical therapists in advance of the QI project. Individuals who received adherent and nonadherent treatment both showed improvement on the ABC, DHI, and GRC outcome measures.

ACKNOWLEDGMENT

We thank Nuket Curran, Director of Quality and Risk Management, and the entire management team of UPMC Centers for Rehab Services for their support and assistance with the project. We would like to acknowledge the physical therapists of UPMC Centers for Rehab Services for their help and participation in this project. Nabeel ALGhamdi and Rob Cavanaugh are also acknowledged for their assistance with the data collection.

REFERENCES

1. McGlynn EA, Brook RH. Keeping quality on the policy agenda. Health Aff (Millwood). 2001;20(3):82–90.
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
3. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001;39(8 Suppl 2):II46–II54.
4. Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? 1998. Milbank Q. 2005;83(4):843–895.
5. Becher EC, Chassin MR. Improving the quality of health care: who will lead? Health Aff (Millwood). 2001;20(5):164–179.
6. McCannon CJ, Berwick DM, Massoud MR. The science of large-scale change in global health. JAMA. 2007;298(16):1937–1939.
7. Carnett WG. Clinical practice guidelines: a tool to improve care. J Nurs Care Qual. 2002;16(3):60–70.
8. Weisman CS, Grason HA, Strobino DS. Quality management in public and community health: examples from women's health. Qual Manag Health Care. 2001;10(1):54–64.
9. Larson CO, Nelson EC, Gustafson D, Batalden PB. The relationship between meeting patients' information needs and their satisfaction with hospital care and general health status outcomes. Int J Qual Health Care. 1996;8(5):447–456.
10. Zangaro GA, Soeken KL. A meta-analysis of studies of nurses' job satisfaction. Res Nurs Health. 2007;30(4):445–458.
11. Shortell SM, Jones RH, Rademaker AW, et al. Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care. 2000;38(2):207–217.
12. Wagner C, van der Wal G, Groenewegen PP, de Bakker DH. The effectiveness of quality systems in nursing homes: a review. Qual Health Care. 2001;10(4):211–217.
13. Baer M, Frese M. Innovation is not enough: climates for initiative and psychological safety, process innovations, and firm performance. J Organ Behav. 2003;24(1):45–68.
14. Ho SJ, Chan L, Kidwell RE Jr. The implementation of business process reengineering in American and Canadian hospitals. Health Care Manage Rev. 1999;24(2):19–31.
15. Jarlier A, Charvet-Protat S. Can improving quality decrease hospital costs? Int J Qual Health Care. 2000;12(2):125–131.
16. Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med. 2007;146(9):666–673.
17. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii–iv, 1–72.
18. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225–1230.
19. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ. 1998;317(7156):465–468.
20. Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(6 Suppl):S57–60.
21. Nguyen HB, Corbett SW, Steele R, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and septic shock is associated with decreased mortality. Crit Care Med. 2007;35(4):1105–1112.
22. Fritz JM, Cleland JA, Brennan GP. Does adherence to the guideline recommendation for active treatments improve the quality of care for patients with acute low back pain delivered by physical therapists? Med Care. 2007;45(10):973–980.
23. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342(8883):1317–1322.
24. McGuirk B, King W, Govind J, Lowry J, Bogduk N. Safety, efficacy, and cost effectiveness of evidence-based guidelines for the management of acute low back pain in primary care. Spine. 2001;26(23):2615–2622.
25. Lesho EP, Myers CP, Ott M, Winslow C, Brown JE. Do clinical practice guidelines improve processes or outcomes in primary care? Mil Med. 2005;170(3):243–246.
26. Lorenzi NM, Riley RT. Managing change: an overview. J Am Med Inform Assoc. 2000;7(2):116–124.
27. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458–1465.
28. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA. 1995;274(9):700–705.
29. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998;280(15):1339–1346.
30. Jacobson GP, Newman CW. The development of the Dizziness Handicap Inventory. Arch Otolaryngol Head Neck Surg. 1990;116(4):424–427.
31. Halmagyi GM, Curthoys IS. A clinical sign of canal paresis. Arch Neurol. 1988;45(7):737–739.
32. Longridge NS, Mallinson AI. The dynamic illegible E (DIE) test: a simple technique for assessing the ability of the vestibulo-ocular reflex to overcome vestibular pathology. J Otolaryngol. 1987;16(2):97–103.
33. Barr CC, Schultheis LW, Robinson DA. Voluntary, non-visual control of the human vestibulo-ocular reflex. Acta Otolaryngol. 1976;81(5–6):365–375.
34. Scheiman M, Gallaway M, Coulter R, et al. Prevalence of vision and ocular disease conditions in a clinical pediatric population. J Am Optom Assoc. 1996;67(4):193–202.
35. Dix MR, Hallpike CS. The pathology, symptomatology and diagnosis of certain common disorders of the vestibular system. Proc R Soc Med. 1952;45(6):341–354.
36. Pagnini P, Nuti D, Vannucchi P. Benign paroxysmal vertigo of the horizontal canal. ORL J Otorhinolaryngol Relat Spec. 1989;51(3):161–170.
37. Shumway-Cook A, Horak FB. Assessing the influence of sensory interaction on balance. Suggestion from the field. Phys Ther. 1986;66(10):1548–1550.
38. Wrisley DM, Whitney SL. The effect of foot position on the modified clinical test of sensory interaction and balance. Arch Phys Med Rehab. 2004;85(2):335–338.
39. Brandstater ME, de Bruin H, Gowland C, Clark BM. Hemiplegic gait: analysis of temporal variables. Arch Phys Med Rehab. 1983;64(12):583–587.
40. Powell LE, Myers AM. The Activities-Specific Balance Confidence (ABC) Scale. J Gerontol A Biol Sci Med Sci. 1995;50A(1):M28–34.
41. Shumway-Cook A, Woollacott M. Motor Control: Theory and Practical Applications. Baltimore, MD: Williams & Wilkins; 1995.
42. Marchetti GF, Whitney SL. Construction and validation of the 4-item dynamic gait index. Phys Ther. 2006;86(12):1651–1660.
43. Alsalaheen BA, Whitney SL, Mucha A, Morris LO, Furman JM, Sparto PJ. Exercise prescription patterns in patients treated with vestibular rehabilitation after concussion. Physiother Res Int. 2013;18(2):100–108.
44. Gance-Cleveland B, Costin DK, Degenstein JA. School-based health centers. Statewide quality improvement program. J Nurs Care Qual. 2003;18(4):288–294.
45. Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10(4):407–415.
46. Alghwiri AA, Marchetti GF, Whitney SL. Content comparison of self-report measures used in vestibular rehabilitation based on the international classification of functioning, disability and health. Phys Ther. 2011;91(3):346–357.
47. Lajoie Y, Gallagher SP. Predicting falls within the elderly community: comparison of postural sway, reaction time, the Berg balance scale and the Activities-specific Balance Confidence (ABC) scale for comparing fallers and non-fallers. Arch Gerontol Geriatr. 2004;38(1):11–26.
48. Whitney SL, Hudak MT, Marchetti GF. The activities-specific balance confidence scale and the dizziness handicap inventory: a comparison. J Vestib Res. 1999;9(4):253–259.
49. Fielder H, Denholm SW, Lyons RA, Fielder CP. Measurement of health status in patients with vertigo. Clin Otolaryngol Allied Sci. 1996;21(2):124–126.
50. Enloe LJ, Shields RK. Evaluation of health-related quality of life in individuals with vestibular disease using disease-specific and general outcome measures. Phys Ther. 1997;77(9):890–903.
51. Beninato M, Portney LG. Applying concepts of responsiveness to patient management in neurologic physical therapy. J Neurol Phys Ther. 2011;35(2):75–81.
52. Beninato M, Gill-Body KM, Salles S, Stark PC, Black-Schaffer RM, Stein J. Determination of the minimal clinically important difference in the FIM instrument in patients with stroke. Arch Phys Med Rehab. 2006;87(1):32–39.
53. Grol R. Improving the quality of medical care: building bridges among professional pride, payer profit, and patient satisfaction. JAMA. 2001;286(20):2578–2585.
54. Bekkering GE, Hendriks HJ, van Tulder MW, et al. Effect on the process of care of an active strategy to implement clinical guidelines on physiotherapy for low back pain: a cluster randomised controlled trial. Qual Saf Health Care. 2005;14(2):107–112.
55. Sequist TD, Gandhi TK, Karson AS, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005;12(4):431–437.
56. McDonald CJ, Wilson GA, McCabe GP Jr. Physician response to computer reminders. JAMA. 1980;244(14):1579–1581.
57. Litzelman DK, Dittus RS, Miller ME, Tierney WM. Requiring physicians to respond to computerized reminders improves their compliance with preventive care protocols. J Gen Intern Med. 1993;8(6):311–317.
58. Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ. 2010;182(5):E216–225.
59. Grimshaw JM, Eccles MP, Walker AE, Thomas RE. Changing physicians' behavior: what works and thoughts on getting more things to work. J Contin Educ Health Prof. 2002;22(4):237–243.
60. Freburger JK, Carey TS, Holmes GM. Physical therapy for chronic low back pain in North Carolina: overuse, underuse, or misuse? Phys Ther. 2011;91(4):484–495.
61. Muething S, Schoettker PJ, Gerhardt WE, Atherton HD, Britto MT, Kotagal UR. Decreasing overuse of therapies in the treatment of bronchiolitis by incorporating evidence at the point of care. J Pediatr. 2004;144(6):703–710.
62. Korenstein D, Falk R, Howell EA, Bishop T, Keyhani S. Overuse of health care services in the United States: an understudied problem. Arch Intern Med. 2012;172(2):171–178.
63. Bekkering GE, van Tulder MW, Hendriks EJ, et al. Implementation of clinical guidelines on physical therapy for patients with low back pain: randomized trial comparing patient outcomes after a standard and active implementation strategy. Phys Ther. 2005;85(6):544–555.
64. Meretta BM, Whitney SL, Marchetti GF, Sparto PJ, Muirhead RJ. The five times sit to stand test: responsiveness to change and concurrent validity in adults undergoing vestibular rehabilitation. J Vestib Res. 2006;16(4–5):233–243.
65. Cherkin D, Deyo RA, Berg AO. Evaluation of a physician education intervention to improve primary care for low-back pain. II. Impact on patients. Spine. 1991;16(10):1173–1178.
66. Hetlevik I, Holmen J, Kruger O. Implementing clinical guidelines in the treatment of hypertension in general practice. Evaluation of patient outcome related to implementation of a computer-based clinical decision support system. Scand J Primary Health Care. 1999;17(1):35–40.
67. Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. 3rd ed. Upper Saddle River, NJ: Pearson Education; 2009.
68. Cohen JW. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
Keywords: balance and vestibular rehabilitation; clinical behavior; clinical outcomes; quality improvement

© 2016 Neurology Section, APTA