Methodology Issues in Implementation Science

Newhouse, Robin PhD, RN, NEA-BC, FAAN*; Bobay, Kathleen PhD, RN, NEA-BC†; Dykes, Patricia C. DNSc, RN, FAAN, FACMI‡; Stevens, Kathleen R. EdD, RN, ANEF, FAAN§; Titler, Marita PhD, RN, FAAN∥

doi: 10.1097/MLR.0b013e31827feeca
Original Articles

Background: Putting evidence into practice at the point of care delivery requires an understanding of which implementation strategies work, in what contexts, and how.

Objective: To identify methodological issues in implementation science using 4 studies as cases and make recommendations for further methods development.

Research Design: Four cases are presented and methodological issues identified. For each issue raised, evidence on the state of the science is described.

Results: Issues in implementation science identified include diverse conceptual frameworks, potential weaknesses in pragmatic study designs, and the paucity of standard concepts and measurement.

Conclusions: Recommendations to advance methods in implementation include developing a core set of implementation concepts and metrics; generating standards for implementation methods, including pragmatic trials, mixed methods designs, complex interventions, and measurement; and endorsing reporting standards for implementation studies.

*Organizational Systems and Adult Health, University of Maryland School of Nursing, Baltimore, MD

†Marquette University College of Nursing, Milwaukee, WI

‡Brigham and Women’s Hospital, Center for Patient Safety Research and Practice, Center for Nursing Excellence, Boston, MA

§Academic Center for Evidence-Based Practice, University of Texas Health Science Center San Antonio, San Antonio, TX

∥Division of Nursing Business & Health Systems, Office of Practice and Clinical Scholarship, University of Michigan School of Nursing, Ann Arbor, MI

The authors declare no conflict of interest.

Supported by Robert Wood Johnson Foundation (RWJF) Interdisciplinary Nursing Quality Research Initiative (INQRI).

Reprints: Robin Newhouse, PhD, RN, NEA-BC, FAAN, Organizational Systems and Adult Health, University of Maryland School of Nursing, 655 West Lombard Street, Suite 316, Baltimore, MD 21201. E-mail: newhouse@son.umaryland.edu.

BACKGROUND

Evidence-based practices (EBPs) that improve patient outcomes are available but underused for a number of health conditions such as asthma, smoking cessation, heart failure (HF), and diabetes. Underuse adds to the proliferation of substantial unexplained and unjustified variations in practice.1–3 Addressing how to integrate research findings into standard practice is an important challenge.4 The science of which implementation interventions work for which clinical or administrative topics, in what contexts, and through which mechanisms is still young.5 Implementation science holds promise for expanding what is known about improving health care delivery, outcomes, and value.6

This paper reviews 4 research studies, funded by the Robert Wood Johnson Foundation (RWJF) Interdisciplinary Nursing Quality Research Initiative (INQRI), that focused on implementing evidence-based processes to improve health care outcomes. The authors are the principal investigators of these INQRI-funded studies, each of which focused on improving processes of care. The authors briefly describe their studies as cases, highlight major methodological issues raised by the cases (conceptual frameworks, design, and measures), and make recommendations to advance implementation science.

INTRODUCTION TO IMPLEMENTATION SCIENCE

Implementation science is the study of methods, interventions, and variables that promote the uptake and use of research findings and other EBPs by individuals and organizations to improve clinical and operational decision-making in health care, with the goal of improving health care quality.7–11 Examples include studies that describe facilitators of and barriers to knowledge uptake and use, organizational predictors of adherence to EBP guidelines, and attitudes toward EBPs; studies that test implementation interventions; and studies that define the structures needed for implementation.12–15 Publications, experts, librarians, Web sites, and other sources have generated multiple terms that are synonyms of, or related to, the concepts of implementation science.16 Common terms related to implementation science are listed in Table 1.

Implementation science uses diverse conceptual frameworks, designs, measures, and analyses. Research designs used in implementation studies include randomized controlled trials (eg, randomization at the subject level), cluster randomized trials, time series designs, observational studies, and preference trials.15,18 The challenges of testing and reporting findings from complex intervention studies such as those used in implementation are described elsewhere.15,18–20 Covariates include context factors (eg, hospital size and staffing) beyond the control of the investigators that may affect the effectiveness of the intervention.21 Measurement methods used in these studies include self-report, observation, abstraction of data from medical records, and quality and administrative data. The issues around measurement methods are under debate,15,22,23 with terminology, research design, and measurement21,24 representing the areas of highest priority.

CASE EXAMPLES: CONCEPTUAL FRAMEWORKS, DESIGNS, MEASUREMENT, AND ANALYTIC APPROACHES

As noted in the introduction to this supplement, all INQRI grantee principal investigators were given the opportunity to collaborate in the development of manuscripts focused on cross-cutting topics identified by INQRI’s leadership team. The cases included in this manuscript were chosen because all focus on the use or uptake of evidence to inform health care interventions, and they are used in the discussion to highlight the methodological challenges faced in health care implementation. Table 2 includes each case’s improvement focus, conceptual framework, design, complex intervention, measures of evidence adoption, contextual measures used, and analytic approach.

Case 1: Engaging Frontline Nursing Staff in Quality Improvement (QI)

Nurses are well positioned to lead the transformation of American health care.1,30,31 Operational failures and system defects are frequent on medical-surgical units; nurses respond to them with workarounds 95% of the time, failing to offer system corrections and reducing the reliability of patient safety and quality.32 The purpose of this project was to identify and address microsystem-level operational failures encountered in frontline patient care and to assess whether organizational learning drives systems improvements.

Conceptual Framework/Planning the Intervention

Two frameworks were used: Complex Adaptive Systems (CAS) theory24 and Practice Facilitation (PF).33 CAS24 framed nursing units as dynamic microsystems, with an emphasis on patterns of relationships, shifting the perspective on hospital quality from one of mechanistic care to one that respects the connectedness of the entire health care team. CAS explained the connections between operational failures and the positive or negative impact these have on quality. PF provides a range of QI and organizational development approaches to assist health care providers in reaching improvement goals.33 PF has demonstrated a 3-fold increase in the uptake of best practices.34

Design, Setting, and Intervention

This prospective cluster randomized intervention study of hospital medical-surgical units compared 3 intervention units with 3 matched nonintervention units. The Multimethod Assessment Process (MAP) was used to characterize each unit, and the Reflective Adaptive Process (RAP) was then implemented to guide change.35 The intervention began with the MAP, which produced a description of the operational failures occurring during microsystem care delivery. Small operational failures were identified during work shifts using researcher-developed pocket cards (index-sized cards containing lists of common workaround categories). Results of the MAP were presented to the nursing staff. The RAP was then implemented to guide change through PF,35 operationalized in this study through a practice facilitator external to the microsystems. The RAP process began as the nursing staff reviewed the MAP results with the practice facilitator and prioritized the particular operational failures that would become targets for planned change. Over 8 facilitated monthly meetings, the staff selected a priority QI project, planned for impact assessment, and implemented change.

Methods of Evaluation

Before and after measures used existing benchmark records and prospective data as follows: clinical unit history, including turnover, staff characteristics, and quality indicators (falls, decubitus ulcers, and infections); participant surveys of the work environment25 and safety opinions36; and self-reports of operational failures encountered during work shifts. Qualitative data were gathered through semistructured key informant interviews with nursing staff and clinical managers.

Analytic Approach

The multivariate analysis applied a logic model specifying pathways across inputs, actions, intermediate outcomes such as new problem solving, and ultimate outcomes such as improved quality benchmarks and reduced adverse events.

Results

There were no significant differences in perceptions of the work environment or safety. An average of 5.8 operational failures per 12-hour shift was reported, the most common categories being equipment/supplies, facilities, and communication. Qualitative data indicated satisfaction with engagement in the study’s QI activities, and key informants reported an increased awareness of workarounds.

Case 2: Effectiveness of Readiness for Hospital Discharge to Reduce Readmission

Readmissions are a major focus of health care reform efforts, with estimates that nearly 20% of Medicare patients are readmitted within 30 days of hospital discharge, at a cost to Medicare of >$17 billion annually.37 The purpose of this study was to evaluate the relationship between patient perceptions of discharge readiness before hospital discharge and 30-day postdischarge readmissions and Emergency Department (ED) visits.

Conceptual Framework/Planning the Intervention

Two theoretical frameworks guided the study. Donabedian’s26 Structure-Process-Outcomes model framed the organizational structural variables (unit-level; context), process variables (patient assessment of quality of discharge teaching and readiness for hospital discharge), and outcome variables (30-d readmissions and ED visits). Meleis’38 theory informed additional context variables related to the nature of the transition (patient-level control variables) and transition conditions (patient and unit-level controls). The nature of the transition refers to registered nurse (RN) consideration of all past significant transitions in the patient’s life and the impact of this transition on the patient and family. Transition conditions include other ongoing factors that could inhibit or block the success of the transition.38

Design, Setting, and Intervention

A nested panel design (nonintervention comparative) with hospital and unit-level fixed effects and patient and unit-level control variables was used (16 medical-surgical units in 4 hospitals).39,40

Methods of Evaluation

Patients completed the Quality of Discharge Teaching Scale (QDTS) and the Readiness for Hospital Discharge Scale (RHDS).41 The QDTS measures patient perception of the content of discharge teaching needed and received and how the content was delivered. The RHDS measures patient perceptions of their readiness for discharge based on 4 subscales: knowledge, personal status, perceived coping ability, and expected support. Psychometric properties of these scales have been previously reported.41 In addition, for a subset of patients, RNs completed a parallel version of the RHDS (RN-RHDS), assessing the nurse’s perception of the patient’s readiness for hospital discharge.39

Analytic Approach

Simultaneous hierarchical linear regression equations were used to determine the direct and indirect effects of structure (unit-level variables) and processes (QDTS and RHDS) on outcomes (readmissions and ED visits) within units over time.
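
To make the direct/indirect distinction concrete, the following is a minimal two-equation sketch in the spirit of this analysis, not the authors’ actual model (which included unit fixed effects, control variables, and repeated measures); the variable names (staffing_var, rhds, readmit) and data are hypothetical.

```python
# Hypothetical structure -> process -> outcome chain, echoing Case 2:
# staffing variation (structure) -> discharge readiness (process) -> readmission (outcome).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({"staffing_var": rng.normal(size=n)})
df["rhds"] = -0.5 * df["staffing_var"] + rng.normal(size=n)                  # process
df["readmit"] = -0.3 * df["rhds"] + 0.1 * df["staffing_var"] + rng.normal(size=n)

a = smf.ols("rhds ~ staffing_var", df).fit().params["staffing_var"]          # structure -> process
m = smf.ols("readmit ~ rhds + staffing_var", df).fit()                       # process + structure -> outcome
direct = m.params["staffing_var"]
indirect = a * m.params["rhds"]                                              # product-of-coefficients step
print(f"direct effect = {direct:.3f}, indirect effect = {indirect:.3f}")
```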

Results

On units with more variation in RN staffing, inpatient readmissions were higher; on units with higher RN overtime, there were more ED visits. Nurses were better than patients at predicting readmissions, a finding confirmed in a subsequent study. Reducing variation in RN staffing and RN overtime could save the 16 study units $11.5 million per year and reduce readmissions by 40%. Cost analysis demonstrated the potential savings to be realized by reducing variation in nurse staffing within units over time, which leads to fewer postdischarge readmissions and ED visits.40,42

Case 3: Translating Fall Risk Status Into Interventions to Prevent Patient Falls

Patient falls are a leading cause of preventable injury in all health care settings and a frequently reported serious adverse event. Hospitalization increases the risk for falls43 and falls drive up hospital expenses and lengths of stay.44,45 The purpose of this study was to develop and evaluate a fall prevention toolkit (FPTK).

Conceptual Framework/Planning the Intervention

The Institute for Healthcare Improvement’s Framework for Spread27 was used as the conceptual model.46 The Framework for Spread is based on Rogers’ Diffusion of Innovations theory,47 which posits that diffusion of innovations is a process requiring the spread of messages over time to members of a social system. Strategies used to track intervention fidelity included the development and implementation of a set of “adoption and spread” metrics on clinical units and ongoing feedback regarding adherence to the FPTK protocol.

Design, Setting, and Intervention

Qualitative methods (individual/focus group interviews) were used to define FPTK requirements.46,48,49 A cluster randomized controlled design with hospital and unit-level fixed effects and patient-level control variables was used (8 medical-surgical units in 4 hospitals) to test the effectiveness of the FPTK intervention. The complex intervention was an electronic FPTK that provided linkages between the Morse Fall Scale areas of risk and interventions deemed both effective and feasible by nurses and other care team members to mitigate risk in acute hospitalized patients. The FPTK generated 3 tools to integrate a tailored fall prevention plan into existing workflows: (1) a bed poster, (2) a patient education handout, and (3) a fall prevention plan of care, as sketched below.
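
The tailoring logic can be pictured as a simple mapping from assessed risk areas to candidate interventions that feeds all 3 tools. The sketch below is purely illustrative: the Morse Fall Scale item names and interventions are placeholders, not the study’s actual decision rules.

```python
# Hypothetical risk-to-intervention mapping of the kind an electronic FPTK automates.
MORSE_RISK_TO_INTERVENTIONS = {
    "history_of_falling": ["hourly rounding"],
    "gait_deficit": ["assist with ambulation", "keep walking aid within reach"],
    "iv_therapy": ["scheduled toileting"],
}

def tailored_plan(risk_areas):
    """Build content for the 3 FPTK outputs from a patient's assessed risk areas."""
    interventions = sorted({i for r in risk_areas
                            for i in MORSE_RISK_TO_INTERVENTIONS.get(r, [])})
    return {"bed_poster": interventions,
            "patient_handout": interventions,
            "plan_of_care": interventions}

print(tailored_plan(["gait_deficit", "iv_therapy"]))
```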

Methods of Evaluation

To evaluate FPTK effectiveness, the following were measured at the patient level and reported at the cluster level: (1) the number of patient falls per 1000 patient days and (2) the number of patient falls with injury per 1000 patient days. To track adherence, the following were reported: (1) the number of fall risk assessments completed on admission and (2) the number of patients with tailored FPTK information at the bedside. Qualitative methods were used to evaluate satisfaction and to generate recommendations for improvement.

Analytic Approach

To address the effects of clustering when testing for differences in the number of patients with falls across the intervention and usual care units, a Poisson regression model containing an intervention effect and fixed effects for hospitals was used. Generalized Estimating Equations methods were used to test for any residual effect of clustering within units after controlling for hospital. The Poisson regression approach accounted for the fact that the longer a patient remained in the hospital, the more opportunity there was for a fall.50,51
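
As a rough sketch of this analytic strategy (on simulated data, with invented column names and effect sizes), a Poisson GEE with hospital fixed effects, unit-level clustering, and a log patient-days exposure offset might look like the following; the study’s actual specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "unit": rng.integers(0, 8, n),            # 8 medical-surgical units (clusters)
    "days": rng.integers(1, 15, n),           # length of stay = exposure time
})
df["hospital"] = df["unit"] // 2              # 4 hospitals, 2 units each
df["intervention"] = df["unit"] % 2           # cluster-level assignment
lam = 0.02 * df["days"] * np.exp(-0.4 * df["intervention"])
df["falls"] = rng.poisson(lam.to_numpy())

model = smf.gee(
    "falls ~ intervention + C(hospital)",     # intervention effect + hospital fixed effects
    groups="unit",                            # residual clustering within units
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["days"]),                # longer stays allow more opportunity to fall
)
print(model.fit().summary())
```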

Results

On units with access to the FPTK, patient falls were lower, particularly in patients over the age of 64 years. There were no differences in fall-related injuries. Use of the FPTK could potentially prevent 1 fall every 4 days, 7.5 falls each month, and about 90 falls each year on the study units alone.

Case 4: Phased Cluster Randomized Trial (CRT) Testing if a Collaborative Improves HF Care

Outcomes for patients with HF are worse in rural settings. In a qualitative study52 and a national survey,53 rural hospital nurse executives indicated that strategies to increase networking and collaboration were needed to enhance evidence-based nursing practices. The purpose of this study was to test the effectiveness of a quality collaborative in enhancing the adoption of best practices in HF care [left ventricular ejection fraction (LVEF) assessment, angiotensin converting enzyme inhibitor/angiotensin receptor blocker (ACEi/ARB) use, discharge instruction, and smoking cessation counseling].54

Conceptual Framework/Planning the Intervention

The Conceptual Model for Considering the Determinants of Diffusion, Dissemination, and Implementation of Innovations in Health Services Delivery and Organization was the conceptual model used for this study.28 This model frames what is known about the adoption of innovations within organizations, and can guide successful implementation of evidence-based nursing practice. A rural hospital collaborative should affect the adoption and assimilation of evidence-based processes for HF patient care, improving overall quality.

Design, Setting, and Intervention

A phased CRT design was used. Rural hospitals (N=23) from 5 eastern US states were randomly assigned by computer to an experimental (group 1) or control group with delayed intervention (group 2). All hospitals received the intervention. Group 1 (n=11) hospitals participated in the intervention first, with group 2 (n=12) participating in the intervention 6 months later. The intervention included an evidence-based HF tool kit provided through a quality collaborative.
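
A minimal sketch of this kind of computer randomization of clusters follows; the hospital identifiers and seed are hypothetical, and real trials often add stratification or matching before assignment.

```python
import random

hospitals = [f"hospital_{i:02d}" for i in range(1, 24)]   # N = 23 rural hospitals
rng = random.Random(2012)                                  # fixed seed for a reproducible allocation
rng.shuffle(hospitals)
group1, group2 = hospitals[:11], hospitals[11:]            # 11 immediate, 12 delayed intervention
print(f"group 1: {len(group1)} hospitals, group 2: {len(group2)} hospitals")
```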

Methods of Evaluation

Data collected included: antecedents (nursing skill mix, nursing hours per patient day, and voluntary RN turnover; secondary data); readiness (Practice Environment Scale; nurse survey at baseline, 6, and 12 mo); adoption and/or assimilation (smoking cessation counseling; nurse survey at baseline, 6, and 12 mo) and study coordinator and hospital implementation team activities (team check up tool completed monthly by the site coordinator during the intervention); implementation (compliance with HF core measures; secondary data quarterly for 8 quarters); and consequences (overall HF patient care quality, readmission of HF patients within 30 d, and cost-effectiveness; secondary data quarterly for 8 quarters). Measures of context included the Practice Environment Scale nurse survey and the site coordinator’s monthly team check up tool reports during the intervention. Focus groups were conducted at the final collaborative to solicit additional information about the implementation experience.

Analytic Approach

Analytic approaches included hierarchical modeling and Generalized Estimating Equations to account for the cluster effects among hospitals as well as repeated measures over time for quarterly core measure data. There was a small hospital effect (estimated ICC=0.07; ie, 7% of the variance was attributable to hospitals).

Results

Nurse staffing (eg, nurse turnover) affects HF core measure performance.55 Over time, lower nurse turnover is associated with better HF care. Nurses frequently provide standard smoking cessation counseling (assessment) but less frequently provide advanced counseling (eg, referral and quit plans).55,56 Nurses who report better practice environments also report more evidence-based smoking cessation practices.

DISCUSSION AND AUTHOR REFLECTIONS

The conceptual frameworks, research designs, and measurement strategies differ across the cases presented. Their implications for implementation science are discussed below. Figure 1 includes author recommendations and potential research questions to address methodological issues in implementation studies.

Conceptual Frameworks

Each study used a different conceptual framework to guide the study design, intervention, and methods, although each focused on improving evidence-based care processes. Each conceptual framework was effective in informing study development, implementation, and evaluation, and each stressed the importance of context. Implementation plans were based on the participants’ experience, environment, and process. Case 2 used both the Donabedian26 model and the Meleis38 theory: the Donabedian model is widely disseminated for use in QI but lacks the specificity to inform the variables that drive adoption, and the Meleis theory added the specificity needed to operationalize the concept. All other cases used 1 theory or conceptual framework that fit the study aims.

Practice Level Theories

More practice-level theories are needed, along with the development of a standard set of implementation metrics. Practice theories stipulate the practices or processes that affect desired outcomes.57 Implementation science has diverse conceptual and theoretical origins. A metanarrative review of diffusion of innovations in health care organizations28 found 13 research traditions (eg, rural sociology, medical sociology, and health promotion), each of which progressed in its own silo with little overlap in the development of concepts.

Standard Conceptual Definitions

Development and use of common definitions and concepts across disciplines will advance knowledge by allowing comparisons across interventions, settings, and populations. Common concepts would not only strengthen research designs and methods but would also help clinicians generalize interventions and results to their own practice settings.

Context Reporting Standards

Further development of standards for reporting the context of the research study setting is needed.58 Better reporting will advance the conceptualization of broad constructs such as “structure” or “antecedents” and promote the use of successful adoption strategies.

Design

Different designs were used in each case to achieve the study aims: an observational nested panel design (case 2), mixed methods combining a CRT with focus groups (cases 3 and 4), and a CRT with qualitative measures (case 1). The implications of these cases for implementation science include methods issues related to pragmatic trials, the use of mixed method designs, and the use of complex interventions.

Pragmatic Trials

There are a number of methodological issues in the conduct of pragmatic trials (design choice, endogeneity, statistical power, and confounders) for which standards could help. Although randomized controlled trials are often used in efficacy studies, limitations related to real-world clinical settings often preclude their use in effectiveness trials. Cases 3 and 4 used a CRT to prevent the contamination that could result if randomization occurred at the patient level. The danger of contamination is dilution of the intervention effect, which could lead to type II error.59

CRTs require careful planning during study design because the effects of clustering must be incorporated into both sample size calculations and data analysis procedures.60 Precautions must be taken to avoid inadequate statistical power and selection bias.60,61 Correlations between individuals in the same cluster tend to be greater than correlations between individuals in different clusters, indicating a lack of independence (the design effect), which must be accounted for in sample size estimation and analysis. Case 3 was insufficiently powered to detect an effect on falls with injury, so this outcome was evaluated as a secondary aim. Other concerns associated with CRTs are lack of balance62 and attrition,63 especially when there are relatively few clusters. In cases 2 and 3, data were collected on patient characteristics and evaluated for differences, and the analyses were selected to control for endogeneity. Attrition was a significant risk in case 3, where there were only 8 clusters.
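
The sample size inflation at stake can be sketched with the usual design-effect formula, deff = 1 + (m − 1) × ICC, where m is the average cluster size; the inputs below are illustrative, with only the ICC borrowed from the case 4 report.

```python
def design_effect(icc, cluster_size):
    """Variance inflation due to clustering: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

n_individual = 200              # hypothetical n required under individual randomization
icc, m = 0.07, 50               # ICC as reported in case 4; cluster size is invented
deff = design_effect(icc, m)
print(f"deff = {deff:.2f}, inflated n = {n_individual * deff:.0f}")
# deff = 1 + 49 * 0.07 = 4.43, so roughly 886 participants instead of 200
```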

Mixed Methods

Mixed methods are required to understand why and how interventions are implemented. Most of the intervention studies (cases 1, 3, and 4) used a mixed methods approach, including both quantitative and qualitative measures. Mixed methods extend the understanding of how interventions should be structured, capture the implementation and context of the interventions, and foster better interpretation of the results.

Complex Interventions

Complex interventions should include components with a significant effect on outcomes [both implementation (the process) and effectiveness (the causal link between intervention and outcome)]. The interaction of the intervention, its users, and the context of practice determines the rate and extent of adoption.28 Three of the 4 cases included complex interventions. By nature, these interventions include multiple components that are implemented in a changing and complex environment. The workflow and context complexity in these settings needs to be captured so that methodological approaches can be designed accordingly. As the research team interacts with sites, the approach to data collection may be tailored to each site to improve reporting (eg, Web-based collection instead of written forms, or site visits instead of telephone follow-up).

Measurement

The cases used standardized measures endorsed by the National Quality Forum,64 allowing comparisons to evaluate the effects of interventions. Cases 1 and 4, however, also included new measures for which psychometric estimates were collected. The measurement implications for implementation science include the use of standardized measures.

Standard Measures

Measurement in implementation studies could be enhanced by using standardized measures of implementation and adoption whenever they are available. Implementation measures should capture the barriers to and facilitators of change, as well as the level of adoption by the target sample. Where standardized instruments do not exist, tools should be developed from the evidence and rigorously tested. Two examples from the cases are the RHDS41 and the Smoking Cessation Counseling Scale,29 both of which have demonstrated adequate psychometric properties.
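
As one generic illustration of the rigorous testing called for here (not the published psychometric analyses of these instruments), internal consistency of a new scale can be screened with Cronbach’s alpha on pilot item data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of scale scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))                       # simulated underlying trait
items = latent + 0.6 * rng.normal(size=(300, 8))         # 8 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")            # high (>0.9) for these simulated items
```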

Measure Implementation Fidelity

Implementation studies often do not discuss implementation fidelity (adherence to the implementation protocol) unless its evaluation is the specific focus of the study.58 Most report on adoption of the EBP and its impact on patient outcomes. To support replication and spread, authors should both measure implementation fidelity and report it in their manuscripts.

CONCLUSIONS

This case review of 4 INQRI studies identifies areas for further methods development in implementation science (conceptual frameworks, design, and measurement) and makes recommendations to address the identified needs. Recommendations to advance methods in implementation include developing a core set of implementation concepts and metrics; generating standards for implementation methods, including pragmatic trials, mixed methods designs, complex interventions, and measurement; and endorsing reporting standards for implementation studies.

Implementation science is the link between effective interventions, practice, and patient outcomes. The methodological issues raised must be overcome to generate the knowledge needed to leverage change and realize broad health care improvements.

REFERENCES

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–2645.
3. Ward MM, Evans TC, Spies AJ, et al. National Quality Forum 30 safe practices: priority and progress in Iowa hospitals. Am J Med Qual. 2006;21:101–108.
4. Avorn J. Transforming trial results into practice change: the final translational hurdle: comment on “Impact of the ALLHAT/JNC7 dissemination project on thiazide-type diuretic use”. Arch Intern Med. 2010;170:858–860.
5. Institute of Medicine. Advancing Quality Improvement Research: Challenges and Opportunities, Workshop Summary. Washington, DC: The National Academies Press; 2007.
6. Stevens K. Evidence-based practice: destination or journey? Nurs Outlook. 2010;58:273–275.
7. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1–6.
8. Kovner AR, Elton JJ, Billings J. Evidence-based management. Front Health Serv Manage. 2000;16:3–24.
9. Titler MG, Everett LQ. Translating research into practice. Considerations for critical care investigators. Crit Care Nurs Clin North Am. 2001;13:587–604.
10. Rubenstein LV, Pugh J. Strategies for promoting organizational and practice change by advancing implementation research. J Gen Intern Med. 2006;21:S58–S64.
11. Walshe K, Rundall TG. Evidence-based management: from theory to practice in health care. Milbank Mem Fund Q. 2001;79:429–457.
12. Dykes P. Practice guidelines and measurement: state-of-the-science. Nurs Outlook. 2003;51:65–69.
13. Estabrooks CA. Thoughts on evidence-based nursing and its science: a Canadian perspective. Worldviews Evid Based Nurs. 2004;1:88–91.
14. Kirchhoff KT. State of the science of translational research: from demonstration projects to intervention testing. Worldviews Evid Based Nurs. 2004;1:S6–S12.
15. Titler MG. Methods in translation science. Worldviews Evid Based Nurs. 2004;1:38–48.
16. McKibbon KA, Lokker C, Wilczynski N, et al. A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a Tower of Babel? Implement Sci. 2010;5:16. DOI: 10.1186/1748-5908-5-16.
17. Stevens K. What is improvement science? Improvement Science Research Network Web site. Available at: http://isrn.net/https:/%252Fimprovementscienceresearch.net/about. Accessed July 9, 2012.
18. Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. DOI: 10.1136/bmj.a1655.
19. Michie S, Fixsen D, Grimshaw J, et al. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:40. DOI: 10.1186/1748-5908-4-40.
20. Michie S, Abraham C, Eccles M, et al. Strengthening evaluation and implementation by specifying components of behaviour change interventions: a study protocol. Implement Sci. 2011;6:10. DOI: 10.1186/1748-5908-6-10.
21. French B, Thomas L, Baker P, et al. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organizational context. Implement Sci. 2009;4:28. DOI: 10.1186/1748-5908-4-28.
22. Eccles M, Armstrong D, Baker R, et al. An implementation research agenda. Implement Sci. 2009;4:18. DOI: 10.1186/1748-5908-4-18.
23. Hrisos S, Eccles M, Francis J, et al. Are there valid proxy measures of clinical behaviour? A systematic review. Implement Sci. 2009;4:37. DOI: 10.1186/1748-5908-4-37.
24. Chaffee M, McNeill M. A model of nursing as a complex adaptive system. Nurs Outlook. 2007;55:232.e3–241.e3.
25. Lake ET. The nursing practice environment. Med Care Res Rev. 2007;64:S104–S122.
26. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44:166–206.
27. Massoud MR, Nielsen GA, Nolan K, et al. A Framework for Spread: From Local Improvements to System-wide Change. Cambridge, MA: Institute for Healthcare Improvement; 2006. Available at: http://www.ihi.org/knowledge/Pages/IHIWhitePapers/AFrameworkforSpreadWhitePaper.aspx.
28. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629.
29. Newhouse RP, Himmelfarb CD, Liang Y. Psychometric testing of the Smoking Cessation Counseling Scale. J Nurs Scholarsh. 2011;43:405–411.
30. Institute of Medicine. Keeping Patients Safe: Transforming the Work Environment of Nurses. 2004. Available at: http://www.iom.edu/Reports/2003/Keeping-Patients-Safe-Transforming-the-Work-Environment-of-Nurses.aspx.
31. Institute of Medicine. The Future of Nursing: Leading Change, Advancing Health. 2010. Available at: http://www.thefutureofnursing.org/IOM-Reports.
32. Tucker AL, Singer SJ, Hayes JE, et al. Front-line staff perspectives on opportunities for improving the safety and efficiency of hospital work systems. Health Serv Res. 2008;43:1807–1829.
33. Knox L, Taylor EF, Geonnotti K, et al. Developing and Running a Primary Care Practice Facilitation Program: A How-To Guide (prepared by Mathematica Policy Research under Contract No. HHSA2902009000191 to 5). AHRQ Publication No. 12-0011. Rockville, MD: Agency for Healthcare Research and Quality; December 2011.
34. Baskerville NB, Liddy C, Hogg W. Systematic review and meta-analysis of practice facilitation within primary care settings. Ann Fam Med. 2012;10:63–74.
35. Stroebel C, McDaniel RR, Crabtree BF, et al. How complexity science can inform a reflective process for improvement in primary care practices. Jt Comm J Qual Patient Saf. 2005;31:438–446.
36. Sorra JS, Nieva VF. Hospital Survey on Patient Safety Culture. AHRQ Publication No. 04-0041. Rockville, MD: Agency for Healthcare Research and Quality; September 2004.
37. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360:1418–1422.
38. Meleis AI, Sawyer LM, Im E, et al. Experiencing transitions: an emerging middle-range theory. ANS Adv Nurs Sci. 2000;23:12–28.
39. Weiss ME, Yakusheva O, Bobay KL. Nurse and patient perceptions of discharge readiness in relation to postdischarge utilization. Med Care. 2010;48:482–486.
40. Weiss ME, Yakusheva O, Bobay KL. Nurse staffing, readiness for hospital discharge, and post-discharge utilization. Health Serv Res. 2011;46:1473–1494.
41. Weiss M, Piacentine L. Psychometric properties of the Readiness for Hospital Discharge Scale. J Nurs Meas. 2006;14:163–180.
42. Bobay KL, Yakusheva O, Weiss ME. Outcomes and cost analysis of the impact of unit-level nurse staffing on post-discharge utilization. Nurs Econ. 2011;29:69–78.
43. Lakatos BE, Capasso V, Mitchell MT, et al. Falls in the general hospital: association with delirium, advanced age, and specific surgical procedures. Psychosomatics. 2009;50:218–226.
44. Bates DW, Pruess K, Souney P, et al. Serious falls in hospitalized patients: correlates and resource utilization. Am J Med. 1995;99:137–143.
45. Krauss M, Nguyen S, Dunagan W, et al. Circumstances of patient falls and injuries in 9 hospitals in a Midwestern healthcare system. Infect Control Hosp Epidemiol. 2007;28:544–550.
46. Dykes PC, Carroll DL, Hurley A, et al. Fall TIPS: strategies to promote adoption and use of a fall prevention toolkit. AMIA Annu Symp Proc. 2009;2009:153–157.
47. Rogers EM. Diffusion of Innovations. New York, NY: Simon and Schuster; 2003.
48. Carroll D, Dykes P, Hurley A. Patients’ perspectives of falling while in an acute care hospital and suggestions for prevention. Appl Nurs Res. 2010;23:238–241.
49. Dykes PC, Carroll DL, Hurley AC, et al. Why do patients in acute care hospitals fall? Can falls be prevented? J Nurs Adm. 2009;39:299–304.
50. Dykes PC, Carroll DL, Hurley A, et al. Fall prevention in acute care hospitals. JAMA. 2010;304:1912–1918.
51. Dykes PC, Hurley A, Lipsitz S. Preventing falls in acute care hospitals—reply. JAMA. 2011;305:671–672.
52. Newhouse RP. Exploring nursing issues in rural hospitals. J Nurs Adm. 2005;35:350–358.
53. Newhouse RP, Morlock L, Pronovost P, et al. Rural hospital nursing: better environments = shared vision and quality/safety engagement. J Nurs Adm. 2009;39:189–195.
55. Newhouse R, Dennison-Himmelfarb C, Morlock L, et al. A phased cluster randomized trial of rural hospitals testing a quality collaborative to improve heart failure care: organizational context matters. Med Care. In press.
56. Newhouse RP, Dennison C, Liang Y, et al. Smoking cessation counseling by registered nurses: description and predictors in rural hospitals. American Nurse Today Online. 2011;6. Available at: http://www.americannursetoday.com/Article.aspx?id=7902&fid=7870.
57. Walker LO, Avant KC. Strategies for Theory Construction in Nursing. Norwalk, CT: Appleton & Lange; 1995.
58. Rycroft-Malone J, Burton CR. Is it time for standards for reporting on research about implementation? Worldviews Evid Based Nurs. 2011;8:189–190.
59. Torgerson DJ. Contamination in trials: is cluster randomisation the answer? BMJ. 2001;322:355–357.
60. Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328:702–708.
61. Donner A, Klar N. Pitfalls of and controversies in cluster randomization trials. Am J Public Health. 2004;94:416–422.
62. Glynn RJ, Brookhart MA, Stedman M, et al. Design of cluster-randomized trials of quality improvement interventions aimed at medical care providers. Med Care. 2007;45:S38–S43.
63. Schulz KF, Altman DG, Moher D, et al. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010;8:18. DOI: 10.1186/1741-7015-8-18.
64. NQF: Measure List. Available at: http://www.qualityforum.org/Measures_List.aspx. Accessed June 20, 2012.
Keywords: implementation science; context; methods

    © 2013 Lippincott Williams & Wilkins, Inc.