Health Care Management Review:
Implementation of evidence-based practices: Applying a goal commitment framework
Chou, Ann F.; Vaughn, Thomas E.; McCoy, Kimberly D.; Doebbeling, Bradley N.
Ann F. Chou, PhD, MPH, is Assistant Professor, Health Administration and Policy, College of Public Health; Assistant Professor, Department of Family Medicine, College of Medicine, University of Oklahoma Health Sciences Center; and VA Medical Center, Oklahoma City. E-mail: email@example.com.
Thomas E. Vaughn, PhD, is Associate Professor, Department of Health Management and Policy, College of Public Health, University of Iowa.
Kimberly D. McCoy, MS, is Statistician, Health Services Research and Development, Center of Excellence on Implementing Evidence-Based Practice, Roudebush Veterans Affairs Medical Center, Indianapolis.
Bradley N. Doebbeling, MD, MSc, is Senior Research Scientist, Indiana University Center for Health Services and Outcomes Research, Regenstrief Institute, Inc., Health Services Research and Development, Center of Excellence on Implementing Evidence-Based Practice, Roudebush Veterans Affairs Medical Center; and Professor of Medicine, Indiana University School of Medicine, Indianapolis.
Background: The implementation of evidence-based practices translates research findings into practice to reduce inappropriate care. However, this process is slow and unpredictable. The lack of a coherent theoretical basis for understanding individual and organizational behavior limits our ability to formulate effective implementation strategies.
Purpose: The study objectives are (a) to test the goal commitment framework that explains mechanisms impacting outcomes of major depressive disorder (MDD) screening guideline implementation and (b) to understand the effects of implementation outcomes on provider practice related to MDD screening.
Methods: Using data from the Determinants of Clinical Practice Guideline Implementation Effectiveness Study, the national sample included 2,438 clinicians from 139 Veterans Affairs acute care hospitals with primary care clinics. We used hierarchical generalized linear modeling to assess the following implementation outcomes: agreement with, adherence to, and improvement in knowledge of guidelines, and delivery of best practices, as a function of clinician input into implementation, teamwork, involvement in quality improvement activities, participative culture, interdepartmental coordination, and the frequency and utility of performance feedback. We then estimated self-reported MDD screening practices as a function of these four implementation outcomes.
Findings: Results showed that having input into implementation, involvement in quality of care improvement, teamwork, and perceived value of performance feedback were positively associated with implementation outcomes. Provider self-assessed guideline adherence was positively associated with the likelihood of appropriate MDD screening.
Implications: Factors related to increased goal commitment positively predicted key implementation outcomes, which in turn enhanced care delivery. This study demonstrates that the goal commitment framework is useful in helping managers assess factors that facilitate implementation. In particular, participation, feedback, and teamwork equip organizational participants with better information about implementation targets, thereby increasing adherence. Instituting or improving systems or programs that facilitate timely, appropriate performance feedback and provider participation may help enhance organizational change and learning.
Over the past two decades, the practice of evidence-based medicine has become integral in the provision of quality health care (Timmermans & Angell, 2001). In promulgating evidence-based practices, a variety of clinical practice guidelines have been developed to define standards of care. The discussion on evidence-based medicine is particularly timely as guideline use is an important component in current health care reform proposals. Although incorporating evidence into guidelines is an important step in driving practice change, promoting consistent delivery of care according to guideline recommendations remains a challenge.
The implementation of evidence-based practices promotes consistent uptake and integration of research findings into practice to reduce inappropriate care (Eccles & Grimshaw, 2004). However, the translation from research into practice has often been an unpredictable, slow, and haphazard process (Eccles & Grimshaw, 2004). For example, a number of studies have suggested that approximately 30% to 40% of the patients do not receive necessary care according to current scientific evidence and 20% to 25% of care provided has been medically unnecessary and potentially harmful (Eccles & Grimshaw, 2004; Schuster, McGlynn, & Brook, 1998). Reasons commonly cited for noncompliance include poor information, lack of support, and practitioner resistance (Schuster et al., 1998). Facilitating guideline adherence is a complicated process that is nested both structurally and as a set of processes within an organization, such as goal setting, monitoring, feedback, team composition, and work. In addition, individual adherence may be based on patient characteristics, practitioner knowledge and acceptance of guidelines, and incentives or sanctions impelling compliance.
Although the published literature examining implementation of evidence-based care has been growing, the implementation process is infrequently driven by theoretical constructs (Drake et al., 2001; Greenhalgh et al., 2004; Magnabosco, 2006). Less than 10% of studies in this area have provided an explicit theoretical rationale for implementation strategies (Eccles & Grimshaw, 2004). Most implementation research has focused on the adoption and implementation of technical innovations in which the decision to adopt or to implement is made by a single individual (Rogers, 2003). Although many theorists have proposed models to study factors influencing the implementation process, empirical tests remain limited (Grimshaw et al., 2004, 2006). Studies of implementation have also mostly been limited to single-institution case studies, often failing to capture the complexity of the implementation process (Klein & Sorra, 1996).
The lack of a coherent theoretical basis for understanding individual and organizational behavior limits our ability to formulate and to test hypotheses to study effective implementation strategies under varying circumstances and hinders evaluation of likely generalizability of results (Eccles & Grimshaw, 2004). In order for implementation efforts to be successful, it is important to explore theoretical underpinnings that may identify individual characteristics (e.g., knowledge, skills, and motivation) and organizational conditions that facilitate effective implementation (Wensing, van der Weijden, & Grol, 1998). Given that clinical practice is a form of human behavior, theories such as goal setting and goal commitment may offer appropriate frameworks to explain facilitators and barriers to behavioral changes among organizational participants during implementation of new practices (Eccles, Grimshaw, Walker, Johnston, & Pitts, 2005).
The Veterans Affairs (VA) Medical System has been particularly successful in using evidence-based clinical guidelines. The VA uses an integrated approach to guideline implementation by tying these efforts to performance report cards and other measures to redesign care and to improve guideline adherence (Ward et al., 2002). For example, the VA has launched the Quality Enhancement Research Initiative (QUERI) to facilitate the translation of best practices into usual clinical care in its medical centers (Hayward, Hofer, Kerr, & Krein, 2004; McQueen, Mittman, & Demakis, 2004). Together, the QUERI program and the VA's efforts in setting national performance measures provide a unique opportunity to better understand guideline implementation by testing empirically those factors that may affect the implementation process (McQueen et al., 2004).
Using the VA QUERI as a case, this study examines the use of evidence-based screening guidelines for major depressive disorder (MDD), a common condition that has significant cost and policy implications. Depressive disorders are a major cause of social and occupational dysfunction and cost over $40 billion per year in direct and indirect costs (Pignone et al., 2002). Although research on the implementation process in general is limited, even fewer studies have focused on the implementation of behavioral health practices (Magnabosco, 2006; Torrey, Finnerty, Evans, & Wyzik, 2003). To that end, this study has two objectives. First, we test a theoretical framework explaining mechanisms that impact intermediate outcomes of MDD guideline implementation. Second, we aim to understand the effects of these intermediate implementation outcomes on provider practice surrounding MDD screening.
Combining goal setting and goal commitment theories provides a theoretical framework through which factors facilitating guideline implementation can be explained. Goal setting theory suggests that performance is a function of goal specificity, acceptance, and commitment, moderated by information, rewards, feedback of results, and participation (Landy & Becker, 1987). Greater goal specificity and goal acceptance may improve achievement of desired outcomes (Locke & Latham, 1990). Specific goals and their acceptance can help to avoid incorrect actions. More importantly, the relationship between goal setting and performance strengthens as participants become committed to their goals, especially when the goal is difficult and performance needs to be sustained (Locke & Latham, 2002). Erez and Somech (1996) found that when a reasonable number of highly specific goals are identified and accepted, participants develop more specific outcomes and action plans, which in turn allow greater integration of and closer agreement with the goal.
The process that leads to goal commitment among organizational participants depends on interactive, external, and internal factors (Figure 1). Interactive factors include participation in the goal setting process and collaboration between management and other groups. External factors include the use of feedback mechanisms and the influence of peer groups or teams. Both the interactive and the external factors facilitate the internal cognitive processes that each participant undergoes to enhance participant expectancy and self-efficacy in accomplishing the assigned goals. The external, internal, and interactive factors have the potential to lead to greater goal commitment, which in turn affects performance (Locke & Latham, 1990).
Management literature has generally agreed that participation yields higher employee morale, satisfaction, and performance. In particular, participation inspires self-expression and respect and meets the ego needs of organizational participants. Participation allows them to more fully understand and support management's desires and expectations, thereby enhancing productivity (Miller & Monge, 1986). Moreover, participation engenders buy-in and escalates goal commitment, which may lead to greater goal acceptance and, consequently, improved performance (Locke & Latham, 1990).
Experimental studies examining outcomes of participation have suggested that increased worker participation in organizational decisions enhances organizational effectiveness while positively affecting satisfaction, trust, job involvement, and other work-related attitudes (Argyris, 1964; Hrebiniak, 1974; Patchen, 1970). Goals that have been set via participant consensus may lead to higher performance than those that have been assigned or entrusted to the participant to follow the simple instruction of "doing one's best" (Latham & Yukl, 1975). A number of mechanisms have been posited to explain how participation may induce higher performance. Patchen (1970) reported that increased participation was positively related to satisfaction and achievement orientation. Argyris (1964) illustrated the benefits of viewing individuals in organizations as desiring autonomy and some degree of self-control or self-determination. Hrebiniak (1974) found that the level of participation of "the task group and related structural dimensions were more important than individual, supervisory, and technological characteristics in explaining work satisfaction and the level of interpersonal trust among subordinates." Locke (1968) found that participation in decision making influenced a person's goals, which in turn affected performance (Latham & Yukl, 1975; Locke, 1968). To that end, we hypothesize that
Hypothesis 1: Greater participation increases agreement with, adherence to, knowledge of guideline recommendations, and delivery of best practices.
The effect of teamwork and peer influence on goal commitment has been well documented in industry (Locke & Latham, 1990). Matsui, Kakuyama, and Onglatco (1987) found that goal commitment was higher in subjects who worked in groups and were assigned both individual and group performance targets. Weigart and Weldon (1988) reported that combining teamwork with individual feedback yielded higher goal commitment. As new organizational forms emerge and the workforce becomes more diverse, the use of teams will transform accordingly, redefining the notion of hierarchy (Steers, Mowday, & Shapiro, 2004).
The effect of teamwork on goal commitment may be explained via two mechanisms: (a) social pressure and (b) competition. A number of authors have cautioned about the social loafing that may occur when individuals work as a team. Social loafing arises when individual contributions to a group product cannot be identified, and it is often explained in terms of a perceived weak relationship between individual effort and sanctions or rewards (George, 1992). Harkins and Petty (1982) found that social loafing did not occur when individuals thought that they could make a unique contribution to group performance, even if their contributions were unidentifiable. However, task visibility in providing medical care within the group may remain high and therefore discourage social loafing (Liden et al., 2004). In fact, in these situations, goal commitment may be heightened when individuals work as a team because goals that carry responsibility to others may engender social pressure for the individual to follow through (Locke & Latham, 1990). In addition, if individual performance can be pinpointed, then individual accountability is more readily visible in a team setting, motivating individual team members to contribute equally (Brickner & Bukatko, 1987). When individuals receive feedback concerning specific aspects of their jobs or performance within a team, spontaneous competition develops (Latham & Yukl, 1975). Also, teams often receive feedback in relation to group norms and performance scores, allowing them to compare their team with others in the organization (Locke & Latham, 1990). All of these processes may foster competition within the organization and, as a result, elevate performance. Therefore, we hypothesize that
Hypothesis 2: Team interaction increases agreement with, adherence to, knowledge of guideline recommendations, and delivery of best practices.
Various studies have suggested that feedback of outcomes would improve and sustain performance at desired levels (Earley, Northcraft, Lee, & Lituchy, 1990). Latham and Yukl (1975) described four mechanisms through which feedback or "knowledge of results" would lead to increased effort and performance. Feedback may (a) induce participants who previously did not have specific goals to set a goal for performance improvement to a certain level, (b) motivate participants to raise their goal level after attaining a previous goal, (c) serve as an indicator to alert participants of an insufficient current level of performance, which may result in greater effort, and (d) inform participants of ways in which to improve methods of performing the task.
Feedback to participants can come from a variety of sources, including personal experiences, anecdotal events, information from management, public news, and media sources. In health care, feedback can also come from colleagues who work on the same team in providing care or even from patients because of increased advocacy for patient-centered care. Feedback from management and peers provides metrics that participants may use to adjust their goal commitment and translate the feedback into action and improved performance (Locke & Latham, 1990). Furthermore, goal commitment escalates when information about how to perform the task is distributed, enhancing the execution of assigned goals (Earley et al., 1990). Therefore, we predict that
Hypothesis 3: Greater feedback of results increases agreement with, adherence to, knowledge of guideline recommendations, and delivery of best practices.
The cognitive process of integrating external and interactive factors with the individual's own characteristics leads the individual to determine whether achieving the goal or making progress toward the goal is possible or probable (Locke & Latham, 1990). A component of the expectancy of success is self-efficacy (Bandura, 1982). Self-efficacy encompasses the judgment of one's total capability of performing a task, in addition to the expectancy of success (Gist, 1987). The cognitive processes operate via their effect on expectancy and self-efficacy of organizational participants. Dachler and Mobley (1973) and Vroom (1964) characterized expectancy of success as individual choices being affected by the self-assessed capability to perform the task well. A number of studies have found a relationship between declining goal commitment and decrease in a person's perceived chances of goal achievement (Mento, Cartledge, & Locke, 1980). One of the critical factors influencing expectancy is the perceived relevance of feedback (Vroom, 1964). Although most theorists have examined self-efficacy in terms of performance, it is logical to conclude that the probability of goal commitment is likely to be higher when self-efficacy for a task is correspondingly high. Bandura's (1982) work has shown that self-efficacy reinforces individual commitment to a course of action, especially when it involves "overcoming setbacks, failures, and obstacles to accomplish that course of action." Bandura found that self-efficacy affects expectancy in that those with high self-efficacy tended to invest more effort when they received repeated feedback indicating performance below the level of the assigned goal. Further, Earley (1986) found that information can influence goal commitment and performance both directly and indirectly through its effects on self-efficacy. In this case, we focused primarily on the aspect of information as a facilitator of self-efficacy. Therefore, we hypothesize that
Hypothesis 4: Expectancy of success increases knowledge of, agreement with, and adherence to guideline recommendations and delivery of best practices.
Hypothesis 5: Enhanced self-efficacy via information increases knowledge of, agreement with, and adherence to guideline recommendations and delivery of best practices.
To ensure the success of implementation or change efforts, the goal has to be specific and explicit. At the VA, the goal is unequivocal: providers are expected to adhere to guideline-recommended screening practices for MDD. Although it is critical that target participants possess the knowledge, skills, and motivation needed to adopt and to apply evidence-based practice, practical and organizational conditions need to exist to facilitate maintenance of the new behavior and acceptance by other stakeholders (e.g., administrators, clinician colleagues, and patients) (Wensing et al., 1998). Hence, we postulate that these intermediate implementation outcomes would be associated with screening for MDD.
Hypothesis 6: Knowledge of, agreement with, and adherence to guideline recommendations and delivery of best practices increase the frequency with which guideline-indicated depression screening is performed.
Data and Sample
The study used the following data sources: (a) the Determinants of Clinical Practice Guideline Implementation Effectiveness Study survey; (b) the External Peer Review Program (EPRP) performance data; and (c) the American Hospital Association annual survey data.
The survey collected data from VA providers regarding their work conditions, attitudes, and experiences with clinical guideline implementation, hospital culture, quality improvement efforts, interdepartmental coordination, frequency and utility of performance feedback, and information technology tools at their facility. The Veterans Health Administration EPRP is a contracted on-site review of clinical records that functions as a component of the Veterans Health Administration's quality management program. The EPRP abstracts data periodically from a sample of patient records at each VA Medical Center. The sample is drawn from patients who have specific medical conditions of interest (e.g., depression, diabetes), and although the EPRP covers a wide range of services, it focuses on patients who have chronic diseases and exhibit higher utilization (e.g., at least three ambulatory care visits in the previous year). Interrater reliability assessments are performed regularly for each abstractor to ensure coding accuracy, reliability, and validity. This system assembles and reports performance measures for specific aspects of prevention and chronic disease management at the facility level. The survey and EPRP data were then linked to hospital characteristics derived from the American Hospital Association survey at the facility level to create an analytic data set.
The study sample of VA clinicians came from the Determinants of Clinical Practice Guideline Implementation Effectiveness Study, supported by the VA QUERI (Doebbeling, Vaughn, McCoy, & Glassman, 2006). The sampling frame included 139 VA acute care hospitals with primary care clinics. VA administrative data from the Personnel Accounting Integrated Data System were used to identify clinicians for study recruitment. Several reviewers who were knowledgeable in the VA personnel classification system independently reviewed the multiple occupational titles and service codes to determine a sampling frame that included only active physicians, nurses, nurse practitioners, and physician assistants who were based in primary or ambulatory care. Where such a designation was not available, we selected physicians and physician assistants in medicine and geriatrics services and nurses and nurse practitioners from medical and nursing services because they are typically based in primary or ambulatory care. On the basis of power calculations, we sought to identify and recruit at least eight physicians, eight nurses, and four physician assistants or nurse practitioners, if available, from each facility. Physicians, nurses, nurse practitioners, and physician assistants represented clinicians whose clinical responsibilities include conducting MDD screenings.
Random selection from the 139 VA hospitals yielded a pool of 4,621 providers. Providers who were retired, deceased, ineligible for participation (i.e., no longer providing primary care), or had left the VA were removed (n = 394). The remaining sample from which to recruit included 4,227 providers: 1,770 physicians, 1,643 nurses, and 814 physician assistants or nurse practitioners. Of the eligible providers, 2,438 responded, a response rate of 58%. This response rate exceeds that of typical large-sample (i.e., greater than 1,000 potential participants) provider surveys (Cummings, Savitz, & Konrad, 2001). The categories of physician assistants and nurse practitioners were subsequently collapsed into a single category because of the similarities of these job roles. At the organizational level, 132 of the 139 hospitals in the sample returned provider surveys (response rate = 95%). Descriptive statistics were generated using facility averages to compare organizational characteristics of responding and nonresponding hospitals. For normally distributed continuous variables, the t test was used to compare groups. When distributions of continuous variables deviated from normality, comparisons were made using the Wilcoxon rank sum test. For binary variables, statistically significant differences between group proportions were evaluated using the chi-square test or Fisher's exact test, as appropriate. Alpha was set at .05, and p values were two-tailed. On the basis of this comparison, there were no statistically significant differences between the two groups in teaching status, urbanicity, hospital size, physician full-time employees per 1,000 outpatient visits, or nonemergency outpatient visits.
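The test-selection logic for comparing responding and nonresponding hospitals can be sketched as follows. This is a hedged Python illustration using SciPy and synthetic facility values (the paper's own analyses were run in SAS); the bed-size figures and the 2x2 teaching-status table are hypothetical.

```python
# Illustrative sketch: t test for normally distributed continuous variables,
# Wilcoxon rank sum otherwise; chi-square for binary variables, falling back
# to Fisher's exact test when expected cell counts are small. Data synthetic.
from scipy import stats

def compare_continuous(group_a, group_b, alpha=0.05):
    """Choose t test vs. Wilcoxon rank sum based on a normality check."""
    _, p_a = stats.shapiro(group_a)
    _, p_b = stats.shapiro(group_b)
    if p_a > alpha and p_b > alpha:
        _, p = stats.ttest_ind(group_a, group_b)
        return "t test", p
    _, p = stats.ranksums(group_a, group_b)
    return "Wilcoxon rank sum", p

def compare_binary(table):
    """Chi-square when expected counts allow it; Fisher's exact otherwise."""
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if (expected < 5).any():
        _, p = stats.fisher_exact(table)
        return "Fisher's exact", p
    return "chi-square", p

# Hypothetical facility bed sizes for responding vs. nonresponding hospitals
responders = [120, 180, 150, 200, 175, 160, 190, 140]
nonresponders = [130, 170, 155, 185, 165]
cont_test, cont_p = compare_continuous(responders, nonresponders)

# Hypothetical 2x2 table: teaching status (rows) by response status (columns)
teaching_table = [[60, 4], [72, 3]]
bin_test, bin_p = compare_binary(teaching_table)
```

Because the count of nonresponding hospitals is small (7 of 139), expected cell counts in such a table are below 5, which is exactly the situation where Fisher's exact test replaces the chi-square test.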
As Figure 1 illustrates, two sets of analyses, corresponding to the two study objectives, were conducted to test our theoretical model. The first examined the effect of external, internal, and interactive factors on goal commitment. The second examined the impact of goal commitment on guideline adherence. For the first set of analyses, goal commitment was demonstrated through four dependent variables that represented intermediate outcomes of clinical practice guidelines implementation. These variables were created from responses to questions in the survey that asked the extent to which the provider (a) "agreed with the recommendation," (b) "adhered to the recommendations," (c) reported that "implementation has improved [provider's] knowledge of evidence supporting best practices," and (d) stated that "guideline implementation has improved [provider's] delivery of best practices."
Responses for both dependent and predictor variables were measured on a 5-point Likert scale: 1 = not at all, 2 = very little, 3 = some, 4 = great, and 5 = very great. The responses were then dichotomized into high ("great" and "very great") and low ("not at all," "very little," and "some") categories. The high category received a value of one, whereas responses in the low category constituted the comparison group, with a value of zero.
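The dichotomization step can be sketched in a few lines of Python. This is an assumed helper for illustration, not the authors' code: responses of 4 ("great") or 5 ("very great") map to 1, all others to 0.

```python
# Collapse a 5-point Likert response into high (1) vs. low (0), as described
# in the text. The mapping of labels to codes follows the survey's scale.
LIKERT = {1: "not at all", 2: "very little", 3: "some", 4: "great", 5: "very great"}

def dichotomize(response: int) -> int:
    """Responses of 'great' (4) or 'very great' (5) count as high."""
    return 1 if response >= 4 else 0

responses = [3, 5, 1, 4, 2]
high_low = [dichotomize(r) for r in responses]  # [0, 1, 0, 1, 0]
```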
Predictor variables were grouped according to the model in Figure 1. External factors included feedback and peer group influence. Two variables were used to explore the role of feedback mechanisms, and another two examined peer influence and coordination. Providers were asked the extent to which they received aggregate guideline adherence data for their facility (i.e., their hospital) as well as their individual performance data. They also reported the extent to which teamwork existed in their facility's guideline implementation efforts and the extent to which there was interdepartmental cooperation on these efforts.
Three variables were included to estimate the interactive effect of participation. Participants were asked about the extent to which providers had input into clinical guideline implementation and were involved in activities to improve the quality of care. In addition, providers rated their facilities' (i.e., VA hospital) emphasis on a culture of participative decision making.
Three variables described the internal factors of expectancy and enhanced self-efficacy. Expectancy of success was represented by the respondent's self-assessment of the extent to which feedback regarding guideline performance affected his or her own clinical practice. Self-efficacy was represented by two variables that measured the quantity of and access to information. Providers reported the frequency of receiving guideline information and the extent to which guideline information was readily accessible at the point of patient encounter. The response categories for the frequency of guideline information distribution were as follows: 1 = less than once per year, 2 = once per year, 3 = two or three times per year, and 4 = four or more times per year.
Clinical performance, the dependent variable in the second analysis, was measured using the facility average reported in EPRP and was constructed as a binary variable with achieving at least 85% MDD screening rates of appropriate patients coded as "one." We compared the MDD screening rate from EPRP against that of provider self-report (i.e., provider estimate of percent of patients screened for MDD) from the survey data. The EPRP reported a rate of 71.8% for MDD screening, whereas the survey yielded a rate of 70.4%. A t test indicated no statistically significant difference between the two figures (p = .52).
Graphical analyses were conducted to examine the distribution of the parameters and to identify possible outliers. Appropriate descriptive statistics and a correlation matrix were compiled to examine associations among the variables.
In the first set of analyses, a series of hierarchical generalized linear modeling (HGLM) analyses using generalized estimating equations were conducted to evaluate factors associated with the implementation outcomes of agreement with, adherence to, and improvement in knowledge of the MDD guidelines and delivery of best practices (Raudenbush & Bryk, 2002). Each outcome variable was modeled as a function of the level of individual participation, organizational participative conditions, and type and frequency of feedback, controlling for provider demographic and practice characteristics, including age, gender, specialty, and exposure to the development of clinical practice guidelines (number of guideline groups in which they had participated) as well as the clustering effect of the facilities.
To examine how implementation outcomes might impact service delivery, we used HGLM again to model achieving at least an 85% MDD screening rate of appropriate patients as a function of agreement with guideline recommendations, adherence to guidelines, improvement in knowledge about guidelines, and ability to apply best practices, controlling for provider gender, number of years the provider has been in practice, and specialty at the first level and bed size, teaching status, urban location, and clustering effect of the facilities at the second level. Rural or urban location was controlled for because the literature has shown that implementing evidence-based practices may differ by location. For example, rural patients who suffer from major depression may have less access to psychiatrists and therefore may be more likely to work with general practitioners in treating MDD (Fortney et al., 2007).
As expected, some of the intermediate implementation outcomes were correlated (Table 1). To detect possible multicollinearity, we modeled the depression screening outcome as a continuous variable in a multiple regression analysis and examined the variance inflation factors (VIFs). The VIFs did not indicate multicollinearity because none exceeded the threshold of 10 (Kennedy, 2003). Therefore, retaining the intermediate implementation outcomes as separate variables in our HGLM analysis allowed us to obtain more detailed information and to identify the effect of individual implementation outcomes on practice. All analyses were conducted using SAS, version 9.1 (SAS Institute, Cary, NC).
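The VIF check can be sketched for the simplest case: with exactly two predictors, the VIF for each reduces to 1 / (1 - r²), where r is their Pearson correlation (more generally, r² is replaced by the R² from regressing one predictor on all the others). The values below are illustrative, not study data.

```python
# Two-predictor variance inflation factor: VIF = 1 / (1 - r^2).
# A VIF above 10 is the conventional signal of problematic multicollinearity.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x, y):
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r ** 2)

# Nearly collinear predictors exceed the threshold of 10 ...
collinear = vif_two_predictors([1, 2, 3, 4, 5], [1, 2, 3, 4, 6])
# ... while weakly related predictors stay close to 1.
benign = vif_two_predictors([1, 2, 3, 4, 5], [3, 1, 4, 1, 5])
```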
Demographic and practice characteristics of the study participants are presented in Table 2. In this provider cohort, approximately 62% of the providers were male. The age of the clinician cohort ranged from 24 to older than 80 years, with 37% falling in the range between 41 and 50 years and 48% older than 50 years. In terms of professional training, 39% were physicians, 51% were either nurse practitioners or nurses, and 7% were physician assistants.
Descriptive statistics of predictor variables and implementation outcomes are presented in Table 3. Approximately 18% indicated that provider input had been incorporated into guideline implementation efforts, and 24% reported involvement in improving quality of care. Only 23% felt that their facility fostered a participative culture. Twenty-two percent indicated that teamwork existed for guideline implementation, and almost 27% reported coordination among various departments. In terms of frequency of obtaining guideline information, approximately 29%, 19%, and 27% reported that they received feedback once a year or less, two to three times a year, and four or more times a year, respectively. Overall, 46% received information at least twice a year. Approximately 14% of the participants received general guideline adherence data, whereas 27% had feedback specific to their individual performance. Almost one quarter reported that having access to feedback affected their performance on guideline adherence.
In surveying the effect of the guideline implementation efforts, 26% agreed with the guideline recommendations, 27% reported adherence to these recommendations, 20% indicated improvement in knowledge about guideline recommendations, and 20% reported improvement in delivery of best practices (Table 3).
HGLM results are presented in Table 4. External factors such as teamwork and feedback positively affected intermediate implementation outcomes. Teamwork in guideline implementation was associated with increased agreement with guidelines (odds ratio [OR] = 2.02, 95% confidence interval [CI] = 1.46-2.79), improved knowledge of guidelines (OR = 1.92, 95% CI = 1.29-2.85), and improved delivery of best practices (OR = 1.96, 95% CI = 1.31-2.93). Providers who reported receiving individualized performance feedback were 1.37 (95% CI = 1.03-1.82) and 1.34 (95% CI = 0.96-1.87) times as likely to report adhering to guideline recommendations or to indicate improvement in knowledge, respectively.
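Odds ratios such as those reported above are obtained by exponentiating the fitted logit coefficients, with Wald confidence limits computed on the log scale before exponentiating. A minimal sketch follows; the coefficient and standard error are hypothetical values chosen so that the output approximates the teamwork estimate of 2.02 (1.46-2.79) reported above, not the study's actual fitted parameters:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a binary teamwork indicator
or_, lo, hi = odds_ratio_ci(beta=0.703, se=0.165)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
# prints "OR = 2.02, 95% CI = 1.46-2.79"

# A CI whose lower bound exceeds 1.0 indicates a statistically
# significant positive association at the .05 level
assert lo > 1.0
```

Because the interval is symmetric on the log-odds scale, it is asymmetric around the odds ratio itself, which is why published CIs such as 1.46-2.79 are not centered on 2.02.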
Interactive factors related to participation exhibited similarly positive effects on implementation outcomes. Provider input into guideline implementation predicted greater agreement with guideline recommendations (OR = 1.44, 95% CI = 1.04-1.99), adherence to guidelines (OR = 1.84, 95% CI = 1.30-2.59), improved knowledge (OR = 1.50, 95% CI = 1.02-2.19), and enhanced best practice delivery (OR = 1.61, 95% CI = 1.13-2.30). In addition, providers who were involved in quality of care improvement initiatives reported greater adherence to guidelines (OR = 1.47, 95% CI = 1.11-1.96), improved knowledge (OR = 1.65, 95% CI = 1.25-2.17), and enhanced best practice delivery (OR = 1.41, 95% CI = 1.05-1.89). However, a culture fostering participative decision making did not show a difference.
Both internal factors of expectancy and self-efficacy positively influenced intermediate implementation outcomes. Participants who reported that feedback affected their performance were 1.59 (95% CI = 1.17-2.18), 1.45 (95% CI = 1.07-1.96), 1.52 (95% CI = 1.10-2.11), and 1.62 (95% CI = 1.16-2.24) times as likely to have indicated agreement with guidelines, adherence to recommendations, improved knowledge, and delivery of best practice, respectively. Self-efficacy enhanced by exposure to information was positively associated with agreement (OR = 1.15, 95% CI = 1.04-1.27), adherence (OR = 1.25, 95% CI = 1.13-1.38), improved knowledge (OR = 1.17, 95% CI = 1.06-1.30), and best practice delivery (OR = 1.14, 95% CI = 1.03-1.26). Access to guideline information at the point of clinical encounter also increased agreement with guideline recommendations (OR = 1.32, 95% CI = 1.01-1.72).
In examining the effect of intermediate outcomes on clinician performance in providing appropriate depression care, providers who reported adherence to guidelines were 1.72 times (95% CI = 1.16-2.54) as likely to screen for depression in more than 85% of their patients with chronic conditions. Agreement with guidelines, improved knowledge, and enhanced delivery of best practices were not significantly associated with the provision of appropriate depression screening (Table 5).
Our findings showed that the goal commitment framework can be useful to practices, their management, and practitioners in understanding factors that facilitate the implementation of evidence-based practices. They also underscore the importance of identifying mechanisms through which improved implementation and care delivery outcomes are achieved. The hypotheses derived from this framework were largely supported. External, interactive, and internal factors positively affected intermediate implementation outcomes among providers, some of which in turn increased adherence to appropriate MDD screening. Practices or practitioners may formulate strategies to address factors within the framework that may help bridge the gap between the goals of guideline implementation and clinician acceptance of, and commitment to, evidence-based practices.
Hypothesis 1, postulating a positive relationship between participation and implementation outcomes, was supported. Providers with greater input into the guideline implementation process or involvement in quality improvement efforts reported a higher likelihood of achieving intermediate implementation outcomes. Including providers in decision making related to the implementation process produces decisions based on better information, and those involved may feel a greater sense of ownership that motivates them to follow through with implementing these decisions.
Hypothesis 2, postulating a positive relationship between teamwork and implementation outcomes, was also supported. Our findings demonstrated that the influence of the team may have acted through several mechanisms. When participants indicated that teamwork was present in the guideline implementation process, agreement with and adherence to recommendations, improved knowledge, and delivery of best practices all significantly increased. In the process of developing approaches to implement guidelines, team members may receive feedback in relation to group norms and performance scores by which they can compare themselves with others in the group (Locke & Latham, 1990). On the other hand, interdepartmental coordination, which would measure a larger set of activities and priorities than local team efforts, had no effect on implementation. It is possible that the size of departments and the complexity of coordination among various stakeholders impeded effective communication and information exchange and therefore did not contribute to the implementation process.
Hypothesis 3, assessing the association between feedback and implementation outcomes, was partially supported. Individualized performance feedback was positively associated with guideline adherence and improved knowledge, whereas aggregated performance feedback showed no effect. This observation demonstrates that the availability of feedback may be a necessary but not sufficient condition to facilitate change and may depend highly on the type and nature of feedback as well as the definition of the performance measures (Locke & Latham, 1990).
In general, the primary purpose of feedback is to provide information for participants to calibrate their performance to meet set goals. The finding that facility-level aggregate data made no difference whereas individualized feedback was useful in facilitating implementation suggests that feedback must be meaningful and tied to specific goals that participants can reasonably achieve (Locke & Latham, 1990). Specific feedback may also be more effective than general feedback as it establishes an individual accountability loop through which one's performance can be compared against group or organizational standards (Rosenheck, 2001).
Moreover, goals should ideally translate into action. Participants must value the feedback and have the willingness to modify their behavior if the feedback shows that goals have not been met. For example, the feedback received by our study participants positively affected outcomes that can be quantified, as observed in the increase in knowledge base and guideline adherence, although it seemingly had no effect on outcomes related to participant attitudes, such as agreement with recommendations and the belief that guidelines would enhance the delivery of best practice.
The role of feedback in facilitating implementation is further illustrated by Hypothesis 4. The perceived value of feedback is at least as critical as its availability. Our findings suggest that when providers agreed with and adhered to the recommendations and improved their knowledge and delivery of best practices, they were more likely to value the feedback and believe that it enhanced performance. The achievement of the intermediate implementation outcomes demonstrated that a provider's positive response to feedback can affect the efficiency of their task strategies and motivation to achieve a certain level of performance (Locke & Latham, 2002). Moreover, Hypothesis 5 showed that providers who received performance data more frequently tended to exhibit greater knowledge of best practices and agreement with and adherence to MDD guidelines. The consistent flow of information obtained via feedback mechanisms may have reinforced commitment to past changes and created routines, enabling participants to be self-efficacious and capable of executing the required courses of action (Bandura, 1982; Locke, Latham, & Erez, 1988).
In sum, participation in implementation, information, teamwork, and individually tailored and valued feedback positively predicted various intermediate implementation outcomes, which in turn enhanced care delivery, as demonstrated by the support for Hypothesis 6. Our findings are consistent with Latham's (2004) discussion of the motivational benefits of goal commitment, where goal setting directs and energizes individuals to focus on goal-oriented activities, resulting in higher levels of motivation and knowledge utilization. Our results also suggest that performance feedback and provider input regarding organizational quality improvement processes may enhance goal commitment. Participation leads to greater commitment or adherence through improved understanding and support of implementation targets, increased goal ownership, and greater buy-in. Feedback equips organizational participants with better information about implementation, thereby increasing adherence. Instituting or improving systems or programs to facilitate timely, appropriate performance feedback, teamwork, and provider participation in decision making should enhance organizational change and learning in implementing innovative practices.
Study contribution and limitations.
By focusing on the multilevel nature of the implementation process (Klein & Sorra, 1996), our study contributed to the knowledge base of current efforts in implementation research. It presented and empirically tested a theoretical framework to improve our understanding of the guideline implementation process. In addition, this study contributes to the current literature because, rather than focusing on technical interventions, our work examined the implementation of practices to improve service delivery (Hage & Aiken, 1967). It involved a complex decision and a process in which multiple actors and organizations participated. The findings also have practical applications, in that implementation outcomes such as self-assessed increased adherence had a positive effect in promoting guideline-indicated care delivery. These findings may be generalizable to cohorts of primary care providers, as controlling for provider type in our analyses did not show a difference by provider type. Nevertheless, there remains a need for more research to continue the empirical testing of tools and strategies to implement evidence-based practices and build a stronger evidence base from which to plan, implement, and sustain evidence-based care (Magnabosco, 2006). Results presented here may be further enriched by a better understanding and study of organizational citizenship, group norms, and resources.
As with all research, our study had limitations that warrant discussion. First, the study surveyed providers about their experience with guideline implementation, and although all the providers surveyed may have experienced guideline implementation at their individual facility, they may not have had full knowledge of all organizational factors related to implementation in their facility (Ward et al., 2002). Therefore, their responses reflected their individual experiences and perceptions and may have been biased accordingly. Second, our data on achievement of implementation outcomes and performance were based on self-report. Previous research has shown a discrepancy between clinicians' perception of their performance and actual practice when assessing adherence to recommendations on preventive services (Gemson et al., 1995). It is possible that providers may have overestimated performance due to social acceptability bias. However, the tendency for such bias may have been ameliorated to some degree by surveying participants via postal mail. In addition, distributions of the implementation outcomes were not skewed. When we checked the MDD screening rates reported by survey respondents against those derived from the EPRP data, the difference was not statistically significant. Third, nuances in certain constructs of our theoretical framework may be further captured with more robust data. For example, variables in our data set cannot fully describe the effect of team interaction, as it may have facilitated either goal commitment or social loafing. Our database also lacked measures, such as resources or incentives, that might better inform the goal commitment framework (although the VA did not offer incentives tied to performance measurement at the time of this survey). Fourth, our study focused on a "closed," integrated system, where it may be easier to obtain provider commitment and adherence to evidence-based practices.
Hence, our results may not be generalizable to more diverse or loosely affiliated organizations or practitioners. The issue of generalizability raises concerns when we examine guideline implementation at a national or state level and warrants further discussion related to consistency in evidence-based practices in current health care reform debates. In addition, we relied on single-item operational measures as proxies for the constructs of expectancy and self-efficacy, which may be better assessed with qualitative data. Nevertheless, dichotomizing these variables from the Likert scale may have addressed possible concerns related to reliability and validity. We recognize the critical limitation in our ability to only measure single dimensions of complex concepts (e.g., the information aspect of self-efficacy). The lack of variables to empirically test a framework is often one of the most significant challenges in conducting implementation research. Again, additional data collection on qualitative information would improve the testing of our framework. Finally, the use of cross-sectional data did not permit us to establish causal relationships in either direction. Although instrumental variable estimation may be used to address endogeneity, this method remains quite difficult with cross-sectional data, and we were unable to identify a satisfactory instrumental variable. When the instrument is not sufficiently correlated with the endogenous variables, the instrumental variable estimator is unreliable. These quantitative analyses should be supplemented with longitudinal or qualitative studies to confirm these findings. In particular, performance such as adherence may be better captured with before and after comparisons.
Furthermore, we recognize that testing Hypothesis 6 to examine the relationship between implementation outcomes and performance can be significantly strengthened with additional analysis. Our current analysis of Hypothesis 6 represents a preliminary effort to assess this relationship. Although we found an association between improved adherence to evidence-based practices and increased guideline-indicated MDD screening at the facility level, future research using additional data sources, such as administrative databases, to confirm performance data collected by the facility will help resolve potential biases emanating from possible endogeneity.
This work was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service QUERI grant nos. CPI99-126 and CPI01-141 (PI: B. Doebbeling), and partially supported by HSR&D Center grant no. HFP 04-148. Part of Dr. Chou's time was supported by the Veterans Research and Education Foundation of the Oklahoma City VA. Dr. Vaughn's time for this work was partially supported by the Center for Research on the Implementation of Innovative Strategies in Practice, VA Iowa City, VA Health Care System. The authors thank all clinicians in the study for their participation and the Center of Excellence staff at the Roudebush VA Medical Center in Indianapolis for their assistance. An earlier version of this work was presented at the 9th Annual Health Organizations Research Association Conference. The opinions expressed are those of the authors and do not necessarily reflect those of the Department of Veterans Affairs.
Argyris, C. (1964). Integrating the individual and the organization. New York: Wiley.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122-147.
Brickner, M., & Bukatko, P. (1987). Locked into performance: Goal setting as a moderator of the social loafing effect. Akron, OH: University of Akron.
Cummings, S. M., Savitz, L. A., & Konrad, T. R. (2001). Reported response rates to mailed physician questionnaires. Health Services Research, 35(6), 1347-1355.
Dachler, H. P., & Mobley, W. H. (1973). Construct validation of an instrumentality-expectancy-task-goal model of work motivation: Some theoretical boundary conditions. Journal of Applied Psychology, 58, 397-418.
Doebbeling, B. N., Vaughn, T. E., McCoy, K. D., & Glassman, P. (2006). Informatics implementation in the Veterans Health Administration (VHA) healthcare system to improve quality of care. AMIA Annual Symposium Proceedings, 204-208.
Drake, R. E., Goldman, H. H., Leff, H. S., Lehman, A. F., Dixon, L., Mueser, K. T., et al. (2001). Implementing evidence-based practices in routine mental health service settings. Psychiatric Services, 52(2), 179-182.
Earley, P. C. (1986). An examination of the mechanisms underlying the relation of feedback and performance. Academy of Management Proceedings, 214-218.
Earley, P. C., Northcraft, G., Lee, C., & Lituchy, T. (1990). Impact of process and outcome feedback on the relation of goal setting to task performance. Academy of Management Journal, 33(1), 87-105.
Eccles, M., Grimshaw, J., Walker, A., Johnston, M., & Pitts, N. (2005). Changing the behavior of healthcare professionals: The use of theory in promoting the uptake of research findings. Journal of Clinical Epidemiology, 58(2), 107-112.
Eccles, M. P., & Grimshaw, J. M. (2004). Selecting, presenting and delivering clinical guidelines: Are there any "magic bullets"? Medical Journal of Australia, 180(Suppl. 6), S52-S54.
Erez, M., & Somech, A. (1996). Is group productivity loss the rule or the exception? Effects of culture and group-based motivation. Academy of Management Journal, 39(6), 1513-1528.
Fortney, J., Pyne, J. M., Edlund, M. J., Williams, D., Robinson, D. E., Mittal, D., et al. (2007). A randomized trial of telemedicine-based collaborative care for depression. Journal of General Internal Medicine, 22, 1086-1093.
Gemson, D. H., Ashford, A., Dickey, L., Raymore, S., Roberts, J. W., Ehrlich, M., et al. (1995). Putting prevention into practice: Impact of a multifaceted physician education program on preventive services in the inner city. Archives of Internal Medicine, 155, 2210-2216.
George, J. M. (1992). Extrinsic and intrinsic origins of perceived social loafing in organizations. Academy of Management Journal, 35, 191-202.
Gist, M. E. (1987). Self-efficacy: Implications for organizational behavior and human resource management. Academy of Management Review, 12, 472-485.
Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly, 82(4), 581-629.
Grimshaw, J. M., Eccles, M., Thomas, R., MacLennan, G., Ramsay, C., Fraser, C., et al. (2006). Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. Journal of General Internal Medicine, 21(Suppl. 2), S14-S20.
Grimshaw, J. M., Thomas, R. E., MacLennan, G., Fraser, C., Ramsay, C. R., Vale, L., et al. (2004). Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment, 8(6), iii-iv, 1-72.
Hage, J., & Aiken, M. (1967). Program change and organizational properties. A comparative analysis. American Journal of Sociology, 72(5), 503-519.
Harkins, S. G., & Petty, R. E. (1982). The effects of task difficulty and task uniqueness on social loafing. Journal of Personality and Social Psychology, 43, 1214-1229.
Hayward, R. A., Hofer, T. P., Kerr, E. A., & Krein, S. L. (2004). Quality improvement initiatives: Issues in moving from diabetes guidelines to policy. Diabetes Care, 27(Suppl. 2), B54-B60.
Hrebiniak, L. G. (1974). Effects of job level and participation on employee attitudes and perceptions of influence. Academy of Management Journal, 17, 649-662.
Kennedy, P. (2003). A guide to econometrics. Cambridge, MA: MIT Press.
Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation implementation. Academy of Management Review, 21, 1055-1080.
Landy, F. J., & Becker, W. (1987). Motivation theory reconsidered. In L. L. Cummings & B. M. Staw (Eds.), Research in organizational behavior. Greenwich, CT: JAI Press.
Latham, G. P. (2004). The motivational benefits of goal setting. Academy of Management Executive, 18, 126-129.
Latham, G. P., & Yukl, G. A. (1975). A review of research on the application of goal setting in organizations. Academy of Management Journal, 18(4), 824-845.
Liden, R. C., Wayne, S. J., Jaworski, R. A., & Bennett, N. (2004). Social loafing: A field investigation. Journal of Management, 30, 285-304.
Locke, E. A. (1968). Toward a theory of task motivation and incentives. Organizational Behavior and Human Performance, 3, 157-189.
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation. A 35-year odyssey. American Psychologist, 57(9), 705-717.
Locke, E. A., Latham, G. P., & Erez, M. (1988). The determinants of goal commitment. Academy of Management Review, 13(1), 23-39.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice-Hall.
Magnabosco, J. L. (2006). Innovations in mental health services implementation: A report on state-level data from the U.S. Evidence-Based Practices Project. Implementation Science, 1, 13-24.
Matsui, T., Kakuyama, T., & Onglatco, M. (1987). Effects of goals and feedback on performance in groups. Journal of Applied Psychology, 72, 407-415.
McQueen, L., Mittman, B. S., & Demakis, J. G. (2004). Overview of the Veterans Health Administration (VHA) Quality Enhancement Research Initiative (QUERI). Journal of the American Medical Informatics Association, 11(5), 339-343.
Mento, A. J., Cartledge, N. D., & Locke, E. A. (1980). Maryland vs Michigan vs Minnesota: Another look at the relationship of expectancy and goal difficulty to task performance. Organizational Behavior and Human Performance, 25, 419-440.
Miller, K., & Monge, P. (1986). Participation, satisfaction, and productivity: A meta-analytic review. Academy of Management Journal, 29(4), 727-753.
Patchen, M. (1970). Participation, achievement, and involvement on the job. Englewood Cliffs, NJ: Prentice-Hall.
Pignone, M. P., Gaynes, B. N., Rushton, J. L., Burchell, C. M., Orleans, C. T., Mulrow, C. D., et al. (2002). Screening for depression in adults: A summary of the evidence for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 136(10), 765-776.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models. Thousand Oaks, CA: Sage Publications.
Rogers, E. M. (2003). Diffusion of innovations. New York: Free Press.
Rosenheck, R. A. (2001). Organizational process: A missing link between research and practice. Psychiatric Services, 52(12), 1607-1612.
Schuster, M. A., McGlynn, E. A., & Brook, R. H. (1998). How good is the quality of health care in the United States? Milbank Quarterly, 76(4), 517-563, 509.
Steers, R. M., & Shapiro, D. L. (2004). The future of work motivation theory. Academy of Management Review, 29, 379-387.
Timmermans, S., & Angell, A. (2001). Evidence-based medicine, clinical uncertainty, and learning to doctor. Journal of Health and Social Behavior, 42(4), 342-359.
Torrey, W. C., Finnerty, M., Evans, A., & Wyzik, P. (2003). Strategies for leading the implementation of evidence-based practices. Psychiatric Clinics of North America, 26(4), 883-897, viii-ix.
Vroom, V. (1964). Work and motivation. New York: Wiley.
Ward, M. M., Vaughn, T. E., Uden-Holman, T., Doebbeling, B. N., Clarke, W. R., & Woolson, R. F. (2002). Physician knowledge, attitudes and practices regarding a widely implemented guideline. Journal of Evaluation in Clinical Practice, 8(2), 155-162.
Weigart, L., & Weldon, E. (1988). The impact of an assigned group goal on the performance of individual group members. Kellogg School of Management, Northwestern University.
Wensing, M., van der Weijden, T., & Grol, R. (1998). Implementing guidelines and innovations in general practice: Which interventions are effective? British Journal of General Practice, 48(427), 991-997.
Keywords: clinical practice guidelines; depression; evidence-based practice; implementation; quality of care
© 2011 Lippincott Williams & Wilkins, Inc.