Journal of Occupational & Environmental Medicine: May 2013 - Volume 55 - Issue 5
doi: 10.1097/JOM.0b013e31828349a7
Original Articles

Reliability and Validity Testing of the CDC Worksite Health ScoreCard: An Assessment Tool to Help Employers Prevent Heart Disease, Stroke, and Related Health Conditions

Roemer, Enid Chung PhD; Kent, Karen B. MPH; Samoly, Daniel K. MPH; Gaydos, Laura M. PhD; Smith, Kristyn J. BA; Agarwal, Amol BS; Matson-Koffman, Dyann M. DrPH, MPH, CHES; Goetzel, Ron Z. PhD


Author Information

From the Emory University Institute for Health and Productivity Studies (Drs Roemer and Goetzel, Mr Samoly, Ms Kent, and Ms Smith), Washington, DC, and Emory University (Dr Gaydos and Mr Agarwal), Atlanta, Ga; Centers for Disease Control and Prevention/National Center for Chronic Disease Prevention and Health Promotion/Division for Heart Disease and Stroke Prevention (Dr Matson-Koffman), Atlanta, Ga; and Truven Health Analytics (Dr Goetzel), Bethesda, Md.

Address correspondence to: Enid Chung Roemer, PhD, Emory University Institute for Health and Productivity Studies, Rollins School of Public Health, 1341 22nd Street NW, Washington, DC 20037 (enid.c.roemer@emory.edu).

The authors declare no conflicts of interest.

Funding for this study was provided through a cooperative agreement between the Centers for Disease Control and Prevention and the Emory University Prevention Research Center; grant number 5U48DP001909.

The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.


Abstract

Objective: To develop, evaluate, and improve the reliability and validity of the CDC Worksite Health ScoreCard (HSC).

Methods: We tested interrater reliability by piloting the HSC at 93 worksites, examining question response concurrence between two representatives from each worksite. We conducted cognitive interviews and site visits to evaluate face validity of items and refined the instrument for general distribution.

Results: The mean question concurrence rate was 77%. Respondents reported the tool to be useful, and on average 49% of all possible interventions were in place at the surveyed worksites. The interviews highlighted issues undermining reliability and validity, which were addressed in the final version of the instrument.

Conclusions: The revised HSC is a reasonably valid and reliable tool for assessing worksite health promotion programs, policies, and environmental supports directed at preventing cardiovascular disease.

The United States is facing an epidemic of chronic diseases, which is threatening the competitiveness of American businesses with large productivity losses and unsustainable health care costs. In 2010, the total cost of cardiovascular diseases, including heart disease and stroke, in the United States was estimated to be $444 billion.1 Treatment of these diseases accounts for about $1 of every $6 spent on health care in the United States, and as the population ages, these costs are expected to increase substantially.2 Although heart disease, stroke, and related chronic conditions are among the most common and costly of all health problems, they are also among the most preventable.3 Studies estimate that 60% to 95% of heart disease risk is attributable to potentially modifiable behaviors.4–8

We know from prior research that individuals are more likely to adopt and sustain health-promoting behaviors if these behaviors are supported in their work or school environment.9–12 Moreover, we also know that the most effective approach to improving employee health is a comprehensive evidence-based worksite health promotion program (defined as containing key elements of individual risk-reduction programs that are coupled with organizational, cultural, and environmental supports for healthy behaviors, and coordinated and integrated with other wellness activities13) and that the effects of such programs may be considerable. For instance, a 2012 literature review by Chapman14 found that participants in worksite health promotion programs had about 25% lower medical and absenteeism expenditures than nonparticipants. Similarly, in 2010, Baicker et al15 found that well-designed worksite wellness programs generated substantial savings: for every dollar spent, medical expenditures fell by $3.27 and absenteeism costs fell by $2.73. Yet, despite these promising financial benefits, only 6.9% of US employers offer comprehensive worksite health promotion programs.16

To support the development of comprehensive evidence-based worksite health promotion programs, the Emory University Institute for Health and Productivity Studies, in partnership with the Centers for Disease Control and Prevention (CDC), developed the CDC Worksite Health ScoreCard (HSC). The purpose of the HSC is to help employers assess their current health promotion programs, identify gaps, and prioritize high-impact interventions to prevent heart disease, stroke, and related chronic conditions.

The HSC, a self-assessment survey instrument, includes questions on key evidence-based and best-practice interventions that have been recommended to be part of a comprehensive worksite heart disease/stroke prevention program. The HSC covers the following 12 domains: (1) organizational support, (2) tobacco control, (3) nutrition, (4) physical activity, (5) weight management, (6) stress management, (7) depression, (8) high blood pressure, (9) high cholesterol, (10) diabetes, (11) signs and symptoms of heart attack and stroke, and (12) emergency response to heart attack and stroke. Although each domain can be completed as a stand-alone module, we recommend that employers complete the whole survey to better assess the comprehensiveness of a worksite health promotion program.

The aim of this article is to document the steps taken to evaluate the validity and reliability of the HSC.


BACKGROUND

In Phase I of tool development, the Institute for Health and Productivity Studies partnered with the CDC to develop the self-assessment questions for the HSC; evaluate and improve the tool's scientific evidence base, usability, and relevance to employers; and develop a scoring methodology.

Subject-matter experts (SMEs) from various divisions of the CDC played a key role in assessing the content validity of the tool during this first phase, offering expertise in each of the 12 content domains of the tool. Specifically, SME teams provided a thorough review of the relevant scientific evidence, determined the questions that should be included on the HSC, and assigned scoring weights to each item based on a ranking of the strength of evidence supporting the item and its potential impact on health improvement. The final draft of the tool consists of 100 dichotomous ("YES" or "NO") questions that ask employers whether they have a specific intervention or program in place at their worksite. As part of this initial phase of developing the tool, we also pilot-tested the HSC with nine employers in 2008, nine employers in 2010, and 70 worksite health promotion practitioners in 2010, who, in turn, provided valuable feedback on the instrument to ensure that it was clearly worded and simple to complete. Following each of these pilot administrations, the tool was revised to address feedback from the employers and practitioners.

Phase II involved further validation of the tool and is the focus of this study. The validation process entailed three stages: (1) a pilot administration of the HSC survey (interrater reliability testing), (2) cognitive interviews with a subsample of respondents (face validity testing), and (3) a final revision of the survey as advised by the CDC SMEs (for maintenance of content validity). The study was reviewed and deemed exempt by the Emory University Institutional Review Board. The final version of the tool was released in August 2012 and is available on-line at: http://www.cdc.gov/dhdsp/pubs/worksite_scorecard.htm.


METHODS

Recruitment

We recruited a convenience sample of 146 employers to participate in this study. To ensure a heterogeneous sample across organization sizes, business types, and US geographic areas, we collaborated with national business coalitions and associations (ie, the National Business Coalition on Health and the National Safety Council), as well as state health departments, to distribute the study application packet broadly. Specifically, the National Business Coalition on Health, the National Safety Council, and 32 state health departments assisted us by e-mailing the recruitment packet (containing instructions, frequently asked questions, and an application form) to their lists of local employers, coalitions, and business leaders eligible to participate in the study. We provided recruiters with a template cover letter, explaining the study requirements, to send to prospective study participants. We also sent recruitment e-mails to a CDC chronic disease listserv to reach people engaged in CDC-funded worksite initiatives throughout the United States. Interested employers self-selected into the study by completing and submitting an application.

Employers were not required to have an active health promotion program in place to participate in the study. Nevertheless, during recruitment, worksites with fewer than 10 employees were excluded because we determined it would be unlikely that extremely small employers would have the resources necessary to put substantial health promotion programs in place.

This study defined the unit of analysis as a worksite (a single campus/building, as opposed to the entire organization) and collected data from just one worksite in each organization. In the case of large organizations that have multiple worksites, we asked that respondents restrict their responses to a single worksite location (which may have included a cluster of buildings within walking distance) because program offerings can vary widely across worksites located in different regions or parts of the globe within a given organization.

Data Collection
On-line Survey

To test interrater reliability, all enrolled worksites were asked to have two knowledgeable employees (eg, worksite wellness practitioners, human resources specialists, or benefits managers) independently complete the on-line HSC survey. Respondents were encouraged to consult with others within the organization to obtain answers to questions related to areas in which they lacked knowledge, as would be the case in a real-world setting when the tool is administered to employers. However, the two respondents were asked not to consult with each other in completing the instrument.

The HSC is a 100-item instrument, consisting of 12 domains. All questions are in a YES/NO format. Each item carries a weighted point value (where 1 = good, 2 = better, 3 = best), which reflects the potential impact that the intervention or strategy has on the intended health behavior(s) or outcome(s), and the strength of scientific evidence supporting that impact. Overall and domain scores are summed on the basis of the weighted value of each item that receives a “YES” response (with a possible total score range of 0 to 215). Conversely, “NO” responses are assigned zero points.a
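To make the arithmetic concrete, the sketch below implements this scoring scheme in Python. The item names and weights are hypothetical; the actual weights for each question are listed in Appendix D of the HSC manual (see footnote a).

```python
# Minimal sketch of the HSC scoring scheme described above.
# Item names and weights here are hypothetical; the real weights
# (1 = good, 2 = better, 3 = best) appear in Appendix D of the manual.

def score_responses(responses, weights):
    """Sum the weights of items answered "YES"; "NO" scores zero."""
    return sum(weights[item] for item, answer in responses.items()
               if answer == "YES")

weights = {"written_tobacco_policy": 3,
           "cessation_counseling": 2,
           "tobacco_signage": 1}
responses = {"written_tobacco_policy": "YES",
             "cessation_counseling": "NO",
             "tobacco_signage": "YES"}

print(score_responses(responses, weights))  # 4 of a possible 6 points
```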

Participants were instructed to answer each question about the presence of a given program, policy, or practice currently in place at their worksite or in place within the past 12 months (eg, During the past 12 months, did your worksite have a written policy banning tobacco use at your worksite?). We also captured information about the organization's demographics (eg, business type, size, industry) and survey respondents (eg, job title, the number and type of individuals they consulted to complete the survey).

For our primary analysis of interrater reliability, we examined the level of concordance (ie, index of percent agreement) between two survey responses from each worksite. Organizations with fewer than two fully completed surveys were excluded from the analysis. Thus, the final study sample consisted of 93 organizations (or 186 survey responses).

Cognitive Interviews

This phase of data collection was designed to test face validity of the tool, identify and explain any issues with wording or content underlying questions with low reliability (ie, low levels of agreement between respondents), and determine specific ways to refine the HSC.

Of the 93 employers who completed the on-line survey, we selected a sample of 29 employers and invited them to participate in cognitive interviews (by telephone or in person). For the telephone interviews (with a 1-hour time limit), we selected a stratified random sample of 20 employers. The sample was stratified by organization size, according to CDC size definitions: very small (0 to 99 employees), small (100 to 249), medium (250 to 749), and large (750+). We interviewed both survey respondents from each organization simultaneously, using a written interview protocol to ensure a consistent approach between interviewers and across interviews. Rather than reviewing each individual question of the survey, interviewers probed respondents by health topic, with special emphasis placed on individual questions where there was a discrepancy between the two survey responses.

Interview probes were structured to assess the respondents' comprehension of the question, the information retrieval process, the decision process, and the response process. For example, we asked respondents to explain their understanding of the general intent of the question; the meaning of specific terms; with whom they consulted to gather relevant information; how much effort was required to answer the question; whether they answered the question critically and objectively or were swayed by impression management (answering questions with a desire to project a positive image of the organization); and whether they were able to match their actual program configuration to the response options offered by the survey. Finally, we asked about the degree to which respondents found the HSC useful as an evaluation tool (ie, whether it adequately captured the type of heart disease/stroke prevention programs that are relevant and feasible to implement in a worksite setting).

A convenience sample of nine employersb (out of the sample of 93, excluding those participating in the telephone interviews) was targeted for in-person interviews, which were conducted as part of a longer site visit (lasting 2 to 3 hours) in which we also asked respondents to provide a brief tour of their facilities. This allowed study team members to observe and confirm the presence of certain environmental interventions (eg, automated external defibrillator placements and vending machine food/beverage offerings) and review relevant documentation that might provide a better understanding of their wellness policies and programs (eg, a description of the smoking policy in the employee handbook, program flyers, or informational newsletters).

Data Analysis

We cleaned, coded, and analyzed the survey data using Microsoft Excel 2010 and SAS 9.0 software (SAS Institute Inc, Cary, NC). Organizations without two complete responses were excluded from the analysis. For the analysis of interrater reliability, we examined the index of percent agreement; that is, the percentage of times that co-respondents at each worksite both answered "YES" or "NO" to a question. Instances in which respondents left a question blank (missing data) were coded as nonconcurrence.

Prior to the study, we designated 80% as the minimum acceptable level of concurrence for each question, in accordance with standards set by Nunnally17 and accepted as appropriate for scale reliability. Items receiving a lower than 80% concurrence score were, therefore, automatically flagged and subjected to review for revision or elimination from the tool.
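A minimal sketch of this reliability computation follows, assuming hypothetical paired responses; blank answers are represented as None and, per the coding rule above, count as nonconcurrence.

```python
# Index of percent agreement for one question, computed over the
# paired responses from all worksites; the data below are hypothetical.

def percent_agreement(pairs):
    """pairs: (answer_1, answer_2) tuples, one per worksite.
    A blank answer (None) is coded as nonconcurrence."""
    agree = sum(1 for a, b in pairs
                if a is not None and b is not None and a == b)
    return 100.0 * agree / len(pairs)

THRESHOLD = 80.0  # minimum acceptable concurrence per question

pairs = [("YES", "YES"), ("YES", "NO"), ("NO", "NO"),
         ("YES", None), ("YES", "YES")]
rate = percent_agreement(pairs)
if rate < THRESHOLD:  # flag the question for SME review
    print(f"Flag for review: {rate:.0f}% < {THRESHOLD:.0f}%")
```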

The interviews provided rich qualitative data, allowing us to ascertain the reasons that items did not meet the minimum concurrence threshold. Both the telephone and site-visit interviews were conducted by two team members so that one researcher could be dedicated to taking careful notes using a standardized data capture form. At the end of data collection, these notes were aggregated into thematic summaries. These summaries were instrumental in highlighting potential areas for improvement, particularly questions that required editing or elimination because of inconsistent interpretation, and instructions that needed further clarification.

CDC Subject-Matter Expert Review

After completing data collection and analysis, we again met with the CDC SMEs to review the general findings of the study and the detailed list of suggested revisions generated by the interviews. During this process, the SMEs made a number of modifications to the survey items to improve comprehension and ease of use while maintaining the content validity of the original questions. Subject-matter experts also verified and updated the relevant literature citations needed to support the weighting scores assigned to each question.


RESULTS

Sample Characteristics

The study sample included 93 employers of varying sizes, business types, and industries, from 32 states across the United States. Within these organizations, the 186 respondents who completed the survey were most often wellness program or human resources personnel. Table 1 provides a summary description of the study sample.

Interrater Reliability

We assessed the tool's interrater reliability by comparing the two responses from each organization and calculating the index of percent agreement for each question (ie, the percentage of times that the two responses from the same organization were the same). These question-level concurrence rates ranged from 58% to 99%, with a mean of 77%. Figure 1 shows the distribution of concurrence rates across the entire survey. Table 2 displays the average concurrence rate for each of the domains (eg, nutrition); these ranged from 70% to 82%.


A little over two-thirds of the survey questions (68 of 100) fell below the 80% threshold of acceptable concurrence and were, therefore, included in the subsequent SME review process. Our initial data analysis revealed some notable patterns: questions with the lowest concurrence rates were primarily those inquiring about (1) insurance coverage for specific types of drugs/services and (2) health programs that are typically offered through third-party vendors (eg, smoking-cessation counseling, lifestyle self-management programs).

Qualitative findings from the interviews and site visits provided further insight into the issues that led to low concurrence rates. For example, we found that respondents had difficulty distinguishing “educational seminars, workshops, or classes” from “group lifestyle counseling” and “lifestyle self-management programs.” Because of inconsistent interpretation of these terms, many of the questions that asked about these programs had low concurrence rates. Furthermore, we found that many respondents did not consider services that were offered through the employer's Employee Assistance Program or health insurance provider in their responses.

Although most respondents appreciated the comprehensiveness of the HSC and completed it in two or more sessions (totaling 30 to 40 minutes), some were discouraged by the tool's length. In many of these cases, respondents reported that they stopped reading questions carefully as they progressed through the survey, leading to a high number of discrepant responses. At the level of individual questions, some respondents noted that the embedded examples and clarifications often made questions too wordy, leading them to skim the questions and miss important details. It was also not always clear whether the examples provided were necessary components of a program or policy for the purpose of yielding a "YES" response.

Finally, we found that most of the discrepant responses were due to the unique conditions imposed by the study. For this study, the two respondents from each organization were restricted from consulting with each other, which would not be the case if the HSC were administered in a “real world” setting. In some cases, one of the respondents had exclusive access to information requested in a question (eg, insurance benefit design) and the other respondent just guessed or left the question blank.

Face Validity: Relevance of the Tool

All 29 employers invited to participate in the interviews (telephone and site visits) agreed to do so. Across the business size categories, 10 were very small employers, 3 small, 8 medium-sized, and 8 large. Interview respondents across all the size categories were equally positive about the tool. Many noted that they found the tool useful and instructive, and effective in helping them evaluate their worksite's wellness program and identify opportunities for improvement. For example, several respondents indicated that the tool offered ideas for new interventions that they would consider implementing in the future, as well as ways to modify and improve existing programs.

Our analysis of the survey scores further confirmed our qualitative findings that the tool represented realistic and feasible interventions to implement in the workplace, across all organization sizes. Table 3 shows the score distribution for the entire study sample, overall and by organization size. Employer scores ranged from 18 to 211, with a mean of 129 (SD = 40) out of a possible 215 points. Table 4 illustrates the average scores for each of the 12 domains for all study participants and by organization size. There was a clear gradient in the scores across organization size groupings, with larger organizations scoring higher on the tool than smaller organizations. This was consistent with our expectation that larger organizations would have more interventions in place, because they generally have more resources available for these efforts.15


Overall, our analysis suggests that the strategies and interventions listed in the HSC are relevant and feasible for the business community. On average, the surveyed worksites reported that they had in place 49% of all possible interventions.c The percentage of interventions in place was correlated with organization size. Of the 100 interventions listed on the tool, large organizations had an average of 59 interventions in place, medium-sized organizations an average of 48, small organizations an average of 37, and very small organizations an average of 35 interventions.
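The counting rule behind these figures can be sketched as follows; per footnote c, an intervention counts as in place only when both respondents answered "YES," and the paired data shown here are hypothetical.

```python
# Sketch of the "interventions in place" tally from footnote c:
# an intervention counts only when both respondents answered "YES".
# The paired data below are hypothetical.

def interventions_in_place(pairs):
    """pairs: (answer_1, answer_2) tuples, one per survey item."""
    return sum(1 for a, b in pairs if a == b == "YES")

site_pairs = [("YES", "YES"), ("YES", "NO"),
              ("NO", "NO"), ("YES", "YES")]
n = interventions_in_place(site_pairs)
print(f"{n} of {len(site_pairs)} interventions in place "
      f"({100 * n / len(site_pairs):.0f}%)")
```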

Although some interventions were much more widely implemented than others, we found that all the interventions listed on the tool were in place in at least some of the surveyed worksites. Table 5 shows the most and least common interventions in place across the study organizations. The most common interventions were ones focused on general awareness building and providing health education. The least common interventions were ones related to food procurement policies because many organizations (1) do not offer food and beverage options at their worksite and therefore cannot support associated interventions or (2) have limited ability to alter existing food and beverage vendor contracts.


DISCUSSION

The CDC developed the HSC to support employer efforts to create a healthy workplace culture in which programs, policies, and the social and physical environment support the adoption of a healthy lifestyle, with the ultimate aim of preventing heart disease, stroke, and related chronic and debilitating conditions in the US workforce.

This article reports on 2 years of work focused on the development of the HSC. Phase I of this study was devoted to examining the instrument's content and face validity, reviewing the scientific evidence for each questionnaire item, assessing the impact level of all the interventions on the survey, and developing a scoring methodology. Phase II was devoted to further evaluating the tool's face and content validity, as well as its interrater reliability, through an on-line self-report survey, interviews, and SME review.

Limitations

Any self-assessment tool like the HSC faces limitations. Self-reported responses may be inaccurate for several reasons. First, some survey items may contain technical terms or ambiguous concepts that impede comprehension for some respondents. Second, the programs, policies, and environmental supports referred to in the questions may take different forms in different organizations or lack relevance in some organizations, and therefore the available response options (“YES” or “NO” for questions on this tool) may not adequately match reality in some circumstances. Third, some questions may be subject to recall bias and may require additional prompts/examples to elicit a meaningful response.

Our question-by-question reliability analysis and the qualitative interview feedback provided ample data for testing whether these three main limitations compromised the validity and interrater reliability of items that fell below the minimum concurrence rate threshold and introduced a systematic bias into the responses. We found in our interviews with employers that when respondents were unsure about the answer to a question (for any reason), they were more likely to report affirmatively that their organizations had an intervention in place. Therefore, overall scores based on the self-report survey may have been artificially inflated.

During the SME review process, we addressed the key issues highlighted in our data to reduce this bias and improve the reliability and validity of the tool. First, we revised the instructions so that they emphasized the need to complete the tool collaboratively among knowledgeable individuals within an organization. In the interviews, we learned that many of the discrepancies occurred because the respondent did not have the relevant knowledge and decided to guess or leave questions blank rather than seek the answer from someone else in his or her organization who may have known the correct answer to the question. Fewer than one-half of all respondents (49%) who completed the survey noted that they consulted with others at the organization. We surmise that this limited collaboration may also partially explain why more than one-third of our original 146 organizations did not fulfill the requirement of having two fully completed surveys and, therefore, had to be excluded from the final analysis.

Second, we shortened most of the questions on the tool. Several participants admitted that they skimmed many of the long questions (generally those with more than two lines of text). During the interviews, when reviewing items that had discrepant answers, some respondents noted, "I must have not read the question carefully, the answer should really be..." or "I don't know why I answered that way, but I agree with my colleague that the answer should be...." To address this issue, we separated out the examples and clarifiers from the core question. We placed the example prompts in a smaller, colored font below each question in the print version. For the future Web-based version of the tool, respondents will have the option to hide or show prompts. This will allow users to focus on the main thrust of each question and seek clarification only when needed.

Third, respondents sometimes had difficulty determining whether a program counted for a “YES” response (eg, programs offered through Employee Assistance Program or health insurance providers), thus leading to discrepancies within organizations. To address this issue, we included additional examples and definitions in the prompts and/or in the glossary, for example, highlighting the distinction between “group lifestyle counseling” and “lifestyle self-management programs” and reminding users that programs could be offered by on-site staff or by third-party providers (eg, vendors, health insurance company), in-person or on-line.

In summary, although more than two-thirds of the items fell below the concurrence rate threshold of 80%, the majority of questions were still above 70% concurrence (79/100) and only one question received less than 60% concurrence. Moreover, it seemed that the low concurrence rate across items was most often due to issues that were addressed in the revisions (eg, question length) or constraints that would not apply in a “real world” application of the tool (eg, limitations on collaboration), and seldom due to issues with content.

All final revisions were systematically reviewed by the SMEs to confirm that the content validity of the items was not compromised. In addition, SMEs reviewed and confirmed or revised the weighted value assigned to each item as appropriate based on updated literature reviews.

Although this study allowed us to assess and improve the interrater reliability and validity of the HSC, the revised tool has not been reevaluated for interrater reliability. It also has not yet undergone testing for other types of validity, such as predictive validity (ie, the degree to which the HSC score is predictive of some criterion measure, such as employee health behaviors, biometric measures, or health care costs), discriminant validity (ie, the degree to which HSC scores are distinct from measures of unrelated constructs), and convergent validity (ie, the degree to which HSC scores are correlated with scores on similar employer assessment tools). Future studies should reevaluate the interrater reliability of the revised tool and address these other types of validity. For example, predictive validity studies could examine whether organizations with higher HSC scores have employees with healthier behaviors or biometric readings, and lower health care and productivity costs, than organizations with lower HSC scores. Such testing could also examine whether improving HSC scores leads to improved health-related outcomes over time.


CONCLUSION

Our overall conclusion from this work is that the revised HSC is a reasonably valid and reliable tool for assessing worksite health promotion programs, policies, and environmental supports directed at preventing cardiovascular disease among workers. Our analysis, along with further input from SMEs at the CDC, allowed us to improve the face and content validity of the tool by making it clearer and easier to complete. The HSC represents one of the few current, comprehensive, and evidence-based worksite tools that have undergone reliability and validity testing and are publicly available for addressing a significant and growing need confronting America's business community.

Although employers have a legal responsibility to provide a safe and hazard-free workplace, they also have abundant opportunities to actively promote individual health and foster a healthy work environment. The HSC includes questions on many of the key evidence-based and best-practice strategies and interventions that are part of a comprehensive worksite health promotion approach and that address the leading health conditions driving medical and productivity-related costs for employers.

Organizations, both large and small, can use the HSC to assess their health promotion programs, identify program deficits, prioritize interventions aimed at preventing heart disease, stroke, and related health conditions, set benchmarks, and measure progress. Furthermore, state and county health departments, researchers, and the CDC can potentially use the tool for surveillance purposes to obtain a current view of worksite practices, establish best-practice benchmarks, and track improvements in worksite health promotion programs over time.

ACKNOWLEDGMENTS

Development and content validity of the CDC Worksite Health ScoreCard was made possible through the time and expertise provided by the CDC Workgroup members and subject-matter experts (SMEs) listed as follows: Dyann Matson Koffman, DrPH, MPH, CHES, NCCDPHPd/DHDSPe, CDC Lead; Pamela Allweiss, MD, MPH, NCCDPHP/DDTf; Shanta R. Dube, PhD, MPH, NCCDPHP/OSHg; Marilyn Batan, MPH, NCCDPHP/DACHh; Casey L. Chosewood, MD, NIOSHi/ODj; Wendy Heaps, MPH, CHES, OADPk/OD; D. Bo Kimsey, PhD, MSEH, NCCDPHP/DNPAOl; Jason E. Lang, MPH, MS, NCCDPHP/OD; Dory C. Masters, MEd, CHES, NCCDPHP/DHDSP; Jeannie A. Nigam, MS, NIOSH/DARTm; Patricia Poindexter, MPH, NCCDPHP/DCPCn; Abby Rosenthal, MPH, NCCDPHP/OSH; Ahmed Jamal, MBBS, MPH, NCCDPHP/OSH; Hilary Wall, MPH, NCCDPHP/DHDSP; Brian J. Bowden, MSc, NCCDPHP/DNPAO; Tina J. Lankford, MPH, NCCDPHP/DNPAO; Susan J. McCarthy, MPH, NCCDPHP/DASHo; Edie M. Lindsay, JD, MPH Candidate, NCCDPHP/OD.

Special thanks to Steven Culler, PhD, Emory University for his leadership in developing the strength of evidence and impact rating scales, and to Andrew P. Lanza, MPH, MSW, CCHPp/NCCDPHP; Joel K. Kimmons, PhD, CCHP/NCCDPHP/DNPAO; Terry F. Pechacek, PhD, CCHP/NCCDPHP; Steven L. Sauter, PhD, NIOSH/DART; and William H. O'Brien, PhD, ABPP, Bowling Green State University, for their contributions as SMEs in providing strength of evidence and impact ratings for questionnaire items related to diabetes, tobacco, stress management, and depression, respectively. Information provided or views expressed in this publication do not necessarily reflect the official views of the CDC or imply endorsement by the Federal government.


REFERENCES

1. Centers for Disease Control and Prevention. Chronic Diseases: The Power to Prevent, the Call to Control: At a Glance 2011. Atlanta, GA: US Department of Health and Human Services; 2011.

2. National Center for Health Statistics. Health, United States, 2009, With Chartbook on Trends in the Health of Americans. Hyattsville, MD: National Center for Health Statistics; 2010.

3. Roger VL, Go AS, Lloyd-Jones DM, et al. Heart disease and stroke statistics—2011 update. Circulation. 2011;123:459–463.

4. Chiuve SE, McCullough ML, Sacks FM, Rimm EB. Healthy lifestyle factors in the primary prevention of coronary heart disease among men. Circulation. 2006;114:160–167.

5. Chiuve SE, Fung TT, Rexrode KM, et al. Adherence to a low-risk, healthy lifestyle and risk of sudden cardiac death among women. JAMA. 2011;306:62.

6. Yusuf S, Hawken S, Ôunpuu S, et al. Effect of potentially modifiable risk factors associated with myocardial infarction in 52 countries (the INTERHEART study): case-control study. Lancet. 2004;364:937–952.

7. Ezzati M, Lopez AD, Rodgers A, Murray CJL. Comparative Quantification of Health Risks: Global and Regional Burden of Diseases Attributable to Selected Major Risk Factors. Geneva, Switzerland: World Health Organization; 2004.

8. Stampfer MJ, Hu FB, Manson JAE, Rimm EB, Willett WC. Primary prevention of coronary heart disease in women through diet and lifestyle. N Engl J Med. 2000;343:16–22.

9. Soler RE, Leeks KD, Razi S, et al. A systematic review of selected interventions for worksite health promotion: the assessment of health risks with feedback. Am J Prev Med. 2010;38:S237–S262.

10. Conn VS, Hafdahl AR, Cooper PS, Brown LM, Lusk SL. Meta-analysis of workplace physical activity interventions. Am J Prev Med. 2009;37:330–339.

11. Anderson LM, Quinn TA, Glanz K, et al. The effectiveness of worksite nutrition and physical activity interventions for controlling employee overweight and obesity: a systematic review. Am J Prev Med. 2009;37:340–357.

12. Corbière M, Shen J, Rouleau M, Dewa CS. A systematic review of preventive interventions regarding mental health issues in organizations. Work (Reading, Mass). 2009;33:81.

13. Childress JM, Lindsay GM. National indications of increasing investment in workplace health promotion programs by large- and medium-size companies. N C Med J. 2006;67:449–452.

14. Chapman LS. Meta-evaluation of worksite health promotion economic return studies: 2012 update. Am J Health Promot. 2012;26:1–12.

15. Baicker K, Cutler D, Song Z. Workplace wellness programs can generate savings. Health Aff. 2010;29:304–311.

16. Linnan L, Bowling M, Childress J, et al. Results of the 2004 national worksite health promotion survey. Am J Public Health. 2008;98:1503–1509.

17. Nunnally J. Psychometric Theory. New York, NY: McGraw-Hill; 1978.

a Information about the scoring methodology, the evidence and impact ratings, and the points assigned to each survey item is available in Appendix D of the CDC Worksite Health ScoreCard Manual at: http://www.cdc.gov/dhdsp/pubs/worksite_scorecard.htm.

b The sample size was limited to nine employers from the Washington, DC, and Atlanta, Georgia, areas because of resource and time constraints.

c These data are based on both respondents indicating that the strategy was in place.

d National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).

e Division for Heart Disease and Stroke Prevention (DHDSP).

f Division of Diabetes Translation (DDT).

g Office on Smoking and Health (OSH).

h Division of Adult and Community Health (DACH).

i National Institute for Occupational Safety and Health (NIOSH).

j Office of the Director (OD).

k Office of the Associate Director for Policy (OADP).

l Division of Nutrition, Physical Activity, and Obesity (DNPAO).

m Division of Applied Research and Technology (DART).

n Division of Cancer Prevention and Control (DCPC).

o Division of Adolescent and School Health (DASH).

p The Coordinating Center for Health Promotion (CCHP).

©2013 The American College of Occupational and Environmental Medicine
