Journal of Nursing Administration
May 2000 - Volume 30 - Issue 5

Evaluating Nursing Administration Instruments

Huber, Diane L. PhD, RN, FAAN, CNAA; Maas, Meridean PhD, RN, FAAN; McCloskey, Joanne PhD, RN, FAAN; Scherb, Cindy A. MS, RN; Goode, Colleen J. PhD, RN, FAAN; Watson, Carol PhD, RN


Author Information

Diane L. Huber, PhD, RN, FAAN, CNAA, Associate Professor,

Meridean Maas, PhD, RN, FAAN, Professor,

Joanne McCloskey, PhD, RN, FAAN, Distinguished Professor, College of Nursing, The University of Iowa,

Cindy A. Scherb, MS, RN, Patient Care Documentation/Informatics/Research Coordinator, Immanuel St. Joseph-Mayo Health System, Kiester, Minnesota,

Colleen J. Goode, PhD, RN, FAAN, Vice President, Patient Services and Chief Nursing Officer, University of Colorado Hospital, Denver,

Carol Watson, PhD, RN, Senior Vice President, Clinical Services, Mercy Medical Center, Cedar Rapids, Iowa.


Abstract

Objective: To identify and evaluate available measures that can be used to examine the effects of management innovations in five important areas: autonomy, conflict, job satisfaction, leadership, and organizational climate.

Background: Management interventions target the context in which care is delivered and through which evidence for practice diffuses. These innovations need to be evaluated for their effects on desired outcomes. However, busy nurses may not have the time to locate, evaluate, and select instruments to measure expected nursing administration outcomes without research-based guidance. Psychometrically sound and easy-to-use instruments need to be identified to measure the multiple, complex, and important contextual variables in both practice and research.

Method: An expert focus group consensus methodology was used in this evaluation research to review available instruments in the five areas and evaluate which of these instruments are psychometrically sound and easy to use in the practice setting.

Results: The result is a portfolio of measures, clustered by concept and displayed on a spreadsheet. Retrieval information is provided. The portfolio includes the expert consensus judgment as well as useful descriptive information.

Conclusions: The research reported here identifies psychometrically sound and easy-to-use instruments for measuring five key variables to be included in a portfolio. The results of this study can be used as a beginning for saving time in instrument selection and as an aid for determining the best instrument for measuring outcomes from a clinical or management intervention.

In a milieu of rapid change and financial risk, nursing management interventions emphasize changes that improve the environment in which patient care outcomes are managed, enhance nurses' work life, and speed the diffusion of evidence-based practices into actual use. These management interventions often are innovations that need to be evaluated for their effects on desired outcomes.1 Systematic evaluation of nursing management innovations is an important area of nursing administration research, the results of which are needed to guide current and future decision-making. Nurse executives, however, often are unable to implement systematic evaluations because of the rapidity of changes; the lack of staff to design and implement evaluations; and the difficulties encountered in selecting reliable, valid, and practical measures of the salient independent and dependent variables for multiple and complex contextual variables.

There are several reasons why nurse executives cannot launch and complete the needed evaluation research. First, management innovations tend to occur in an environment marked by rapid change and scarce resources. Typically, a new innovation is implemented before the results of the evaluation of the previous one are fully analyzed. Second, design considerations for the evaluation of innovations are complex. Efforts to evaluate management innovations face the problems of 1) a large number of important independent variables to consider; 2) difficulties in locating and analyzing the adequacy of measures for variables that can influence the outcome variables of interest; and 3) a confusing array of measures for some variables. Not only must the nurse executive have the expertise required to design and carry out evaluation research, but adequate time also must be available for the necessary planning, implementation, and analysis. Little has been done to appraise the critical variables to be included in evaluation studies of nursing management innovations, or to collect, catalogue, and evaluate nursing administration instruments.1

Given these constraints, designs for the evaluation of specific innovations, supported by psychometrically sound and practical measures of key variables identified in a portfolio, would enhance the ability to systematically judge the worth of management innovations.1 Such a portfolio would assist nurses and students in practice and education to locate and compare measures of important and complex organizational contextual and outcome variables. To this end, this article reports research undertaken to identify psychometrically sound and practical measures of five important variables for the evaluation of nursing management innovations.

The research answered two questions: What are the instruments available to measure nurse autonomy, conflict, job satisfaction, leadership, and organizational climate? Which of the instruments are psychometrically sound and easy to use in the practice setting?


Conceptual Framework

Concepts from evidence-based practice provide the framework for this research.2-5 The importance of basing clinical and management decisions on research has taken on new meaning as healthcare systems are asked to demonstrate their effectiveness and efficiency to the purchasers of care. Innovations exploit change,6 but few organizations measure the effect that these changes have on patients and staff. Gray2 reported that healthcare decisions primarily are based on values, resources, and opinion-based decision-making, with few decisions having a scientific basis.

An innovation refers to something new (e.g., a new process or new way of doing something) and an innovation can also be the use of a new idea to solve a problem.7 Thus, change and innovation are companion ideas when innovation uses change theory to provide new services rather than just causing disruption.8 However, nursing management innovations that trigger contextual changes need to be evaluated based on rigorous data-based comparisons.1 Whether nursing management innovations are successful depends on evidence that the anticipated outcomes result from the change and are positive. Measurement precision is important for evidence-based practice and its diffusion and adoption. Thus, the reliability and validity of the instruments used to collect data are a key foundation of the innovation decision process.

Rogers4 conceptualized the innovation decision process as having five stages: 1) knowledge, an awareness of the innovation; 2) persuasion, the development of a favorable or unfavorable attitude toward the change or innovation; 3) decision, the choosing to adopt or reject the innovation; 4) implementation, the use of the innovation; and 5) confirmation, the seeking of reinforcement of the decision. To be skilled in the innovation decision process, nurses need to be able to find and appraise research. These skills will enable them to make decisions that are scientifically sound by being evidence-based.

There is no history of comparative evaluation of nursing administration measurement instruments. In nursing, only a few collections of research instruments are available.9-11 In the related literature on the sociology of organizations, Price and Mueller12 developed a handbook of standardized organizational measures. The focus of instrument evaluation in nursing has been primarily on assessing the methodologic rigor of single instruments, not on comparative analysis. Each measure is examined against measurement and methodologic criteria. Price and Mueller12(p4) noted that ". . . the two criteria of validity and reliability have emerged as gauges for assessing the utility of measures in building a body of knowledge." Characteristics of a good measure include psychometric rigor, primary use, and level of focus.13 Evaluation of an existing instrument entails a detailed assessment of purpose or objectives, conceptual basis, measurement properties, pragmatic issues, and human rights concerns.14 To efficiently evaluate an innovation, appropriate instruments need to be matched to the key variables of interest.

The Iowa Model of Nursing Administration, which depicts systems and outcomes as the specific domains of concern for nurse executives, demonstrates the interdependence of clinical and management activities on outcomes.15,16 Innovations that affect structure, process, resource utilization, and the environment have the potential to affect patient, staff, and organizational outcomes. The subconcepts of autonomy, conflict, job satisfaction, leadership, and organizational climate fit under the structure, process, resources, controls, and environment concepts, respectively, of the organization level in the systems domain. The model specifies the relationship of these factors to patient aggregates and healthcare systems as well as the interaction with the outcomes domain. Clearly, data that are valid and reliable for decision-making about management interventions, systems management, and innovation evaluation are needed.17 Data provided by evaluation can serve to justify the innovation to others and are useful in meeting legal and ethical concerns related to accountability.3


Definitions

In selecting and analyzing instruments in the five areas, the team identified the lack of standardized definitions of the concepts as a problem. Without a standardized definition, the inclusion or exclusion of an instrument became problematic when investigators suspected an overlap of concepts. The key aspect of validity is knowing that the instrument actually measures what it says it measures; therefore, clarity in the definitions was needed. By expert review and successive consensus iterations, the team determined that the following standardized definitions would be used for the purposes of this research:

Autonomy

Autonomy is authority and accountability for one's decisions and activities. The subconcept of professional autonomy is the authority and accountability for practicing one's profession. The subconcept of work autonomy is the authority and accountability for one's work.

Conflict

Conflict is disagreement or incompatibility among individuals or groups. Although conflict and stress are sometimes combined or treated as similar, they are defined separately here. Stress is differentiated from conflict and defined as an uncomfortable physiologic and psychological response in which a situation is perceived as taxing or exceeding one's resources.

Job Satisfaction

Job satisfaction is the degree to which individuals like their work and work environment. This includes both professional or occupational job satisfaction and organizational job satisfaction.

Leadership

Leadership is the process of influencing people to accomplish goals. The subconcept of leadership styles is defined as the different combinations of task and relationship behaviors used to influence people to accomplish goals.

Organizational Culture and Climate

Organizational culture is the shared beliefs, values, and assumptions that exist within an organization. Organizational climate is the perceptions that individuals have of various aspects of the environment in the organization.


Methodology

This evaluation research, using expert focus group consensus methodology, was designed to answer the two research questions: What are the instruments available to measure nurse autonomy, conflict, leadership, job satisfaction, and organizational climate? Which of these instruments are psychometrically sound and easy to use in the practice setting?

A research team composed of nursing administration faculty, nurse executives, and doctoral students reviewed variables characterizing nurses and organizations that could be used to evaluate management innovations. Five of the six core team members were doctorally prepared. Three were academicians in nursing administration, and three were practicing nurse executives. All had content expertise and research experience. This balance of roles within the team provided strength in both academic research and measurement as well as current knowledge in nursing administration practice.

Five variables were chosen for intensive analysis of measurement instruments after review and analysis of a previously constructed nursing administration instrument database that contained instruments in 36 broad categories.1,18 To keep an intensive analysis feasible, the five variables were chosen by consensus based on two criteria: 1) the importance of the variable to nursing administration research and practice and 2) the availability of sufficient instruments to critique and compare. The five variables also were critical for nurse recruitment and retention and related to performance or productivity as reflected in the extensive nursing administration literature. These are crucial variables that have been linked to the effect of nurse staffing on patient outcomes. The prior work leading up to this research project is described elsewhere.17 The research received human subjects approval from the University's Institutional Review Board.

A focus group methodology,19 relying on an instrument-critiquing format, formed the core of the research strategy.17 The project began with a collection of nursing administration instruments that had been compiled over a number of years.18 Procedures included a search of the literature; a review of research conference abstracts; a review of collected files and related materials; a compilation of supplementary research; the purchase of some instruments and requests for others; the cleaning and updating of files; the determination of decision criteria and development of a display format; individual instrument critiques; the focus group and expert team review and critique; an overview of the concept; a spreadsheet summary; focus group re-review and consensus; a decision about instruments in the portfolio; and a description of the concept, instruments, and results.

A search of the research and methodologic literature was conducted to find available and accessible measurement instruments. Data collection procedures included manual and electronic literature searches to identify both available instruments and descriptions of their reliability, validity, and practicality. In addition, selected research conference abstracts were reviewed to glean titles of instruments actually used and reported in the most current nursing administration research. Although a comprehensive review was done, it was not possible to ensure that all existing instruments were located. Some instruments were not considered because they could not be located or retrieved or otherwise fit our definition of unavailable.


Instrument Critique Procedure

The final sample consisted of 67 instruments selected for review in five areas: autonomy (n = 15); conflict (n = 5); job satisfaction (n = 13); leadership (n = 18); and organizational culture or climate (n = 16). Each instrument and the accompanying information was assigned to one of the research investigators for review. Criteria were developed for judging psychometric soundness and ease of use.13 The criteria used to determine a rating of psychometric soundness included derivation from a conceptual or theoretic framework, a methodologic description of development and testing, reported reliability and validity statistics (1 = not available, 3 = preliminary testing reported, 5 = multiple results available), and repeated use in research studies. Criteria for ease of use were ease of reading and rating items (no more than 10 to 15 items), overall visualization of the instrument for flow, and ease of scoring. Based on these criteria, psychometric soundness was rated on a 5-point scale (1 = no information available, 3 = tested but at a low level, 5 = well developed or extensive), as was ease of use or practicality (1 = hardest, 3 = moderate, 5 = easiest). The investigator also prepared a brief description of the instrument, including the population for which the measure was designed and tested; the population used for psychometric testing; cost, if known; the source of the instrument; and a key reference, if available. A detailed list of psychometric statistics was prepared with as much information as was available.
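For illustration only, the critique rubric described above can be represented as a simple record. The following is a minimal Python sketch; the class, field names, example values, and validation are assumptions for illustration, not part of the study.

from dataclasses import dataclass

# Hypothetical record mirroring the descriptive fields and the two
# 5-point consensus ratings described in the text; not study data.
@dataclass
class InstrumentCritique:
    title: str
    concept: str              # autonomy, conflict, job satisfaction, leadership, or climate/culture
    description: str
    population: str           # population the measure was designed and tested for
    psychometric_rating: int  # 1 = no information available, 3 = tested at a low level, 5 = well developed
    ease_of_use_rating: int   # 1 = hardest, 3 = moderate, 5 = easiest
    cost: str = "unknown"
    source: str = ""
    key_reference: str = ""

    def __post_init__(self):
        # Both consensus ratings use the study's 1-to-5 scales.
        for rating in (self.psychometric_rating, self.ease_of_use_rating):
            if not 1 <= rating <= 5:
                raise ValueError("ratings must be on the 1-to-5 scale")

# Hypothetical usage:
example = InstrumentCritique("Example Autonomy Scale", "autonomy",
                             "20-item Likert scale", "staff nurses", 4, 3)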

The focus group process was then employed to review and critique each instrument. A team member was assigned as the primary reviewer for each instrument. An instrument evaluation and critique form17 was completed to display instrument characteristics and to help analyze each instrument against preestablished psychometric and ease-of-use criteria. The primary reviewer also determined a judgment score of 1 to 5, with 1 = low and 5 = high, for both criteria for each instrument, using an instrument ranking form that guided decisions about the two criteria of psychometric soundness and ease of use17 (Fig. 1). Following the description and rating by the assigned investigator, the results were presented to the research team for review and discussion. Once the team agreed with the ratings, the results were entered into a spreadsheet database. After the review and rating of all instruments was complete, individual instruments were chosen by team consensus as recommended for consideration in research if they received a consensus rating of 3 or above for psychometric soundness and 2 or above for ease of use. The results were compiled for each variable category for comparison analysis.
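The selection rule described above (a consensus rating of 3 or above for psychometric soundness and 2 or above for ease of use) can be sketched as a simple filter; the example records and helper function below are hypothetical, not ratings from the study.

# Hypothetical consensus ratings; the actual ratings appear in Tables 1-5.
critiques = [
    {"title": "Example Scale A", "psychometrics": 4, "ease_of_use": 3},
    {"title": "Example Scale B", "psychometrics": 1, "ease_of_use": 5},
    {"title": "Example Scale C", "psychometrics": 3, "ease_of_use": 2},
]

def recommended_for_research(critique):
    # Threshold from the text: psychometric soundness >= 3 and ease of use >= 2.
    return critique["psychometrics"] >= 3 and critique["ease_of_use"] >= 2

print([c["title"] for c in critiques if recommended_for_research(c)])
# -> ['Example Scale A', 'Example Scale C']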

Figure 1. Instrument ranking form used to guide decisions about psychometric soundness and ease of use.

Results

As anticipated, differences were found among the five variables as to the number of instruments that were available and their quality. Judged to meet the psychometric and practicality criteria were 11 measures of autonomy, 3 of conflict, 8 of job satisfaction, 9 of leadership, and 12 of organizational climate.

The results of the reviews of the instruments are summarized in Tables 1 through 5. Instruments are listed alphabetically in each of the five areas: autonomy, conflict, job satisfaction, leadership, and organizational climate and culture. For each instrument, the following information is given: name of author, title of the instrument, a brief description, the population for whom the instrument was designed and tested, a consensus rating on psychometrics, a consensus rating on ease of use, the cost if known, the contact source, and a key reference. The information assists a prospective user in finding an instrument that fits the needs of the study or evaluation project. Instruments that have ratings of 3 or above on both psychometrics and ease-of-use scales are the preferred choices. A rating of 3 on the psychometrics scale means the instrument has had at least some clinical testing, but at a beginning or low level; a rating of 3 on the ease-of-use scale means that the instrument is moderately easy to use, score, and interpret.
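As a sketch of how the portfolio spreadsheet might be organized, the columns listed above can be written out along with the "preferred choice" rule (ratings of 3 or above on both scales). The column names and the single placeholder row below are assumptions, not entries from Tables 1 through 5.

import csv
import io

# Hypothetical column layout mirroring the information listed in the text.
fieldnames = ["author", "title", "description", "population",
              "psychometrics_rating", "ease_of_use_rating",
              "cost", "source", "key_reference"]

rows = [
    {"author": "Doe", "title": "Example Autonomy Scale",
     "description": "20-item Likert scale", "population": "staff nurses",
     "psychometrics_rating": 4, "ease_of_use_rating": 3,
     "cost": "free", "source": "author", "key_reference": "Doe, 1995"},
]

def preferred(row):
    # "Preferred choices" per the text: ratings of 3 or above on both scales.
    return row["psychometrics_rating"] >= 3 and row["ease_of_use_rating"] >= 3

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(r for r in rows if preferred(r))
print(buffer.getvalue())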

Table 1. Autonomy instruments
Table 2. Conflict instruments
Table 3. Job satisfaction instruments
Table 4. Leadership instruments
Table 5. Organizational climate and culture instruments

The best possible rating would be a 5 on both the psychometric and ease-of-use scales. Of the 67 instruments reviewed (Tables 1-5), only three (4%) were scored 5 on both scales for a total of 10 points: the Rahim Organizational Conflict Inventories, the Brayfield and Rothe Overall Job Satisfaction instrument, and the Posner and Kouzes Leadership Practices Inventory. Eleven (16%) other instruments received a combined total of 9 points on the two scales, and 15 (22%) received a combined total of 8 points. The ratings of the instruments in each of the five categories are discussed briefly below.
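The percentages above follow directly from the counts of combined scores (the sum of the two 1-to-5 ratings, maximum 10). A quick sketch of the arithmetic, using only the figures reported in the text:

# Counts of instruments at each combined score, as reported above.
total_reviewed = 67
counts = {10: 3, 9: 11, 8: 15}
for score, n in counts.items():
    print(f"combined score {score}: {n}/{total_reviewed} = {n / total_reviewed:.0%}")
# combined score 10: 3/67 = 4%
# combined score 9: 11/67 = 16%
# combined score 8: 15/67 = 22%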

Autonomy

Fifteen instruments (3 addressing professional autonomy and 12 addressing work autonomy) were reviewed, with none achieving a perfect rating on both scales. The two work autonomy instruments by Hinshaw and Atwood received a combined total of 9 points on the two scales. All three of the professional autonomy instruments were designed only for nurses; six of the work autonomy instruments were designed for use with nurses or nursing students.

Conflict

Only five instruments were found for review in the conflict area, demonstrating a need for further instrument development. The Rahim Organizational Conflict Inventories was rated 5 on both the psychometrics and ease-of-use scales. The Thomas-Kilmann Conflict Mode Instrument was rated 5 on ease of use, but no information was available about its psychometrics. Only two of the instruments, the Cox Organizational Conflict Scale and the Perceived Conflict Scale, were specifically designed for use with nurses.

Job Satisfaction

Of the 13 instruments reviewed in the category of job satisfaction, only the Brayfield and Rothe Overall Job Satisfaction instrument received a 5 rating on both psychometrics and ease of use. Both the popular Stamps and Piedmonte Index of Work Satisfaction and the Hinshaw and Atwood Work Satisfaction Scale received a 5 rating in psychometrics but only a 1 on ease of use. Eight of the instruments in this category were designed specifically for use with nurses.

Leadership

Of the 18 instruments reviewed, only one, the Posner and Kouzes Leadership Practices Inventory, received a 5 on both scales. Nine instruments in this category were rated only a 1 or 2 on psychometrics, reflecting the prevalence of training- and development-focused instruments in this category. Only two of the instruments were designed specifically for use with nurses.

Organizational Climate or Culture

Of the 16 instruments measuring organizational climate or culture, 9 measured climate and 7 measured culture. None of the instruments received a perfect rating on both scales despite an extensive array of instruments. Five of the climate instruments and two of the culture instruments were designed specifically for use with nurses. Of the 16 instruments, 10 received a total rating on both scales of at least an 8.


Discussion

Measurement precision is central to building evidence for practice interventions. Fundamental to measurement rigor is the availability of extensively tested instruments. However, many important variables in nursing lack an array of such tools, which renders the search and retrieval process difficult and frustrating. Numerous roadblocks were encountered in the process of identifying, locating, and retrieving instruments and their supporting psychometric data. Barriers included instruments that were cited or used but not available in their entirety; instrument authors who were unwilling to share the instrument; copyright holders who could not be located; numerous instruments described as custom-developed for one study and not psychometrically tested; and instruments available only through costly purchase. Locating some instruments required extensive searching. For example, one of the investigators served as a blind reviewer for a journal article that used a specific scale as part of the data collection. The investigator had been searching for a copy of the same scale without success. The investigator requested that the editor forward the investigator's contact information and a request for information about this scale to the author once the review process was complete. In time, the author contacted the investigator. Unfortunately, although the author had a copy of the scale given to her by a colleague, this copy did not come from the scale originator, and contact information for the scale's author was not known. Thus, permission to use the scale and its psychometric properties could not be obtained, the original form of the scale could not be verified, and none of this information was published or available.

One other problem encountered was the development and use of scales in dissertations, which subsequently were difficult to locate and retrieve. If the title of the dissertation does not cue the reader about each variable studied, then the scale may have been in the dissertation but not noticed. Dissertations generally require time and interlibrary loan procedures for retrieval. In some cases, instruments were available only for a fee and the fees charged varied considerably. Further, some scales were mentioned in the literature, but the scale developers either refused to share a copy of them with us or refused permission for their use. In general, the search, location, and verification of instruments was not easy and required a considerable amount of time and investigation. This finding is consistent with Strickland's20 observation about the issues related to appropriate selection and use of instruments for nursing studies.

The difficulties in location and retrieval plague students, researchers, and nurses doing evaluation studies. This argues for the need for a general dialogue in the profession about guidelines for openness and availability of research instruments. Guidelines could be developed to protect copyright, monitor fee assessments, and provide for repositories. The issues on both sides are complex, yet a consensus could be encouraged to advance the development of nursing's knowledge base. Further research is needed to catalogue, critique, and compare research instruments in nursing for use in the field.


Implications

Evidence-based practice is becoming an imperative in nursing. The same imperative should exist for nursing administration and management. Implementing new management strategies, organizational structures and processes, or care delivery models without systematic evaluation not only wastes human and economic resources; it also fails to build nursing administration science. Nursing management innovations affect clinical practice through a reciprocal relationship: nurse clinicians depend on nurse administrators to provide the optimal environment for their practice, and patients expect clinicians to help them realize quality outcomes. Both should demand that nursing management innovations be systematically evaluated for quality and cost-effectiveness.

The identification and analysis of nursing administration instruments for evaluating management innovations is timely and significant and is analogous to efforts to analyze patient outcomes. For example, Harris and Warren21 suggested that instrument characteristics of applicability, practicality, comprehensiveness, reliability, validity, and responsiveness were appropriate considerations for selecting assessment tools for clinical use and patient outcomes evaluation. Similarly, a need exists for psychometric rigor and ease of use in instruments required for sound administrative research.

The implementation of innovations based on anecdotal evidence or a need to reduce costs is likely to be more costly in the long run. When evidence of cost-effectiveness is absent, little objective data support decision-making. Yet nurse administrators and managers often lack the resources needed to perform the necessary evaluations. Thus, a compendium of measurement instruments is needed that nurse administrators and managers can use to quickly implement an evaluation strategy as soon as a decision has been made to implement an innovation. One method that has been used by some nursing administrators is to measure certain variables on a scheduled basis. For example, if nurse job satisfaction and nurse autonomy are measured annually, then longitudinal data are readily available when rapid change occurs.


Conclusions

The research reported here identifies psychometrically sound and easy-to-use instruments for measuring five key variables of interest to researchers conducting systems research. Clearly, additional variables are important in evaluating innovations, and critiques of instruments measuring them would be helpful to nurse administrators and managers. Clarification of designs that are appropriate for the evaluation of innovations also would be helpful. The results of this study can be used as a beginning for saving time in instrument selection and as an aid for determining the best instrument for a given practice or research situation.


Acknowledgment

This research was funded in part by the American Organization of Nurse Executives (1994-1995 Research Scholar Award, Diane Huber, PI). We are grateful to them for their support of this project. The team members were Diane Huber, Meridean Maas, Joanne McCloskey, Colleen Goode, Carole Gongaware, and Carol Watson. The authors thank doctoral research assistants Cindy Scherb and Vicki Steelman.


References

1. McCloskey JC, Maas M, Huber DG, et al. Nursing management innovations: A need for systematic evaluation. Nursing Economic$. 1994;12(1):35-44.

2. Gray JA. Evidence-based Healthcare. London: Churchill Livingstone; 1997.

3. Horsley JA, Crane J. Using Research to Improve Nursing Practice: CURN Project. New York: Grune & Stratton; 1983.

4. Rogers EM. Diffusion of Innovations, 3rd ed. New York: The Free Press; 1983.

5. White JM, Leske JS, Pearcy JM. Models and processes of research utilization. Nurs Clin North Am. 1995;30(3):409-420.

6. Drucker PF. Innovation and Entrepreneurship: Practice and Principles. New York: Harper & Row; 1985.

7. Kanter RM. The Change Masters: Innovation for Productivity in the American Corporation. New York: Simon & Schuster; 1983.

8. Romano CA. Innovation: The promise and the perils for nursing and information technology. Comput Nurs. 1990;8(3):99-104.

9. Strickland OL, Waltz CF. Measurement of Nursing Outcomes. Vol. 2. New York: Springer-Verlag; 1988.

10. Waltz CF, Strickland OL. Measurement of Nursing Outcomes. Vol. 1. New York: Springer-Verlag; 1988.

11. Waltz CF, Strickland OL. Measurement of Nursing Outcomes. Vol. 3. New York: Springer-Verlag; 1990.

12. Price JL, Mueller CW. Handbook of Organizational Measurement. Marshfield, MA: Pitman; 1986.

13. Pfeiffer JW, Ballew AC. Using Instruments in Human Resource Development. San Diego: Pfeiffer & Co; 1988.

14. Strickland OL. Using existing measurement instruments. J Nurs Meas. 1994;2(1):3-6.

15. Gardner D, Kelly K, Johnson M, McCloskey J, Maas M. Nursing administration model for administrative practice. J Nurs Adm. 1991;21(3):37-41.

16. Johnson M, Gardner D, Kelly K, Maas M, McCloskey JC. The Iowa model: A proposed model for nursing administration. Nursing Economic$. 1991;9(4):255-262.

17. Huber D. Facilitating instrument evaluation. Nursing Economic$. 1998;16(1):27-32.

18. Clougherty J, McCloskey JC, Johnson M, et al. Creating a resource data base for nursing service administration. Comput Nurs. 1991;9(2):69-74.

19. Morgan DL. Successful Focus Groups: Advancing the State of the Art. Newbury Park, CA: Sage; 1993.

20. Strickland OL. Measuring well to study well. J Nurs Meas. 1993;1(1):3-4.

21. Harris MR, Warren JJ. Patient outcomes: Assessment issues for the CNS. Clinical Nurse Specialist. 1995;9(2):82-86.


© 2000 Lippincott Williams & Wilkins, Inc.

 
