ACSM's Health & Fitness Journal: January/February 2008 - Volume 12 - Issue 1
doi: 10.1249/01.FIT.0000298462.14759.47
Worksite Health Promotion

The Practical Use of Program Theory

Pronk, Nico Ph.D., FACSM, FAWHP


Author Information

Nico Pronk, Ph.D., FACSM, FAWHP, is executive director of the Health Behavior Group and vice president of Health and Disease Management at HealthPartners health system, which provides health promotion, disease prevention, and disease management services for worksites and health plans around the country. Dr. Pronk has published extensively in the health-related scientific literature and is currently an Associate Editor for ACSM's Health & Fitness Journal® and an Editorial Board member of CDC's Preventing Chronic Disease e-journal. Among other public service activities, he currently serves on the Task Force on Community Preventive Services supported by CDC and the Interest Group on Worksite Health Promotion at ACSM. Dr. Pronk received Fellow status from ACSM and the former Association for Worksite Health Promotion (AWHP).

"There's nothing so practical as a good theory."


Kurt Lewin

So how do you answer when you're being asked why this new worksite health promotion program will prove to be a good choice? What is so special about this one as opposed to the others? Why are you sold on this particular program compared with all those other programs that seem to be much less expensive, seem to give participants exactly what they want, seem to be so flexible that they can do anything we ask for, etc., etc., etc.?

Somewhere in your response to these questions, you are highly likely to include statements that go something like this: "…based on the underlying assumptions…," or "…given the limitations of what we know about the…," or "…following the inclusion of such principles in the program's design…" In other words, you will likely include qualifying statements in your response that provide a foundation upon which to build trust in your choice of program. Often, such foundations are couched in existing knowledge of what works well based on a theory, that is, a set of statements constructed to explain and predict phenomena (e.g., events, or the behavior of employees). In many instances, we are constructing models of reality. A theory makes generalizations about observations and consists of an interrelated and coherent set of ideas and models. Of course, a theory can be tested, so there may be proof that it is a good one, and in science this is done often. Until it has been appropriately tested, a program can therefore be only theoretically sound; it is not uncommon for a theory to produce predictions that are later confirmed or proven incorrect by experiment.


SO WHAT MAKES FOR A GOOD THEORY?


"An ounce of action is worth a ton of theory."

Friedrich Engels

Generally speaking, programs fail or succeed in large part based on the accuracy of the underlying assumptions about how they are supposed to work. These assumptions may be considered the theoretical underpinnings of the program. In effect, they may be regarded as the program's theories, and they fall into two groups: first, the program's theory of cause and effect and, second, the program's theory of implementation (1, 2). Combined, these two theories provide reasonable explanations of why a program may fail, succeed, or have less than optimal impact. Programs are successful only when a sound theory of cause and effect combines with a sound theory of implementation. Programs may fall short of their intended outcomes when either the theory of implementation or the theory of cause and effect rests on faulty assumptions, resulting in an implementation problem or a causal logic problem, respectively. Programs experience outright failure when a faulty theory of implementation combines with a faulty theory of cause and effect (3).

Theory of cause and effect

The program theory of cause and effect presents the underlying logic and assumptions and explains why a program will cause specific outcomes. These causal connections can be summarized using "if-then" statements about the program: "If I implement [the program], then [the desired outcome] will occur." For example: "If I implement an on-site physical activity program, then employee fitness levels will improve." Figure 1 presents this graphically.

Figure 1
Theory of implementation

The program theory of implementation defines the strategy for implementing the program in the field (1). Because most health programs may be implemented through a variety of strategies, a program's ultimate performance is determined in part by the strategy selected to implement it in the field and in part by the degree to which the program is actually implemented as intended (4). Figure 2 presents this graphically.

Figure 2

CONNECTING THE PROGRAM AND ITS OUTCOMES

"He who loves practice without theory is like the sailor who boards a ship without a rudder and compass and never knows where he may cast."

Leonardo da Vinci

It is, of course, helpful to be able to explain how a program generated the intended outcomes. As in the case of a mismatch between a program's theory of cause and effect and its theory of implementation, we want to know the reasons why a certain result was obtained.

Program evaluations may identify mediating variables that explain why the program's intervention results in improved outcomes. Mediating variables, such as self-efficacy and autonomy, relate to the program's theory of cause and effect and define the mechanisms that explain how the program caused the observed outcomes. Moderating variables, on the other hand, relate to the program's theory of implementation and define how the characteristics of people and of program implementation strategies, features, or protocols are associated with the outcomes. Moderating variables include, among others, sex, socioeconomic status, race, program exposure, time/duration, and intensity.

Program theory is extremely helpful in program design and evaluation: it ensures that evaluation data can be reviewed in a way that deepens understanding of how program objectives are supposed to be achieved. Sound program theory also aids the development of clearly defined and focused questions. Using such questions, program staff, decision makers, employees, and other stakeholders can reach a shared understanding of how programs are supposed to work and reach their intended objectives. A common view of program theory is valuable because it can bring together those who hold different beliefs about how the program produces its effects.


HOW PRACTICAL IS YOUR PROGRAM THEORY?

"Knowing is not enough; we must apply. Willing is not enough; we must do."

Johann Wolfgang von Goethe

Take a look at your own program. Do you have a clear, concise, and focused approach in place, one with a clear line of sight between the program itself and the effects or impacts you hope to generate? Are you confident that the strategies and tactics you have selected to implement the program represent the most efficient and effective approach to generating the outcomes? Additionally, are those strategies and tactics implemented fully and according to plan? If so, your measurement and evaluation strategies will probably generate the appropriate and valuable insights that you and your organizational leadership are looking for. If not, you may want to go back, review the approach you selected, and ensure that a sound program theory is documented; doing so will certainly enhance your ability to report on program progress and tell a compelling story when decisions are made about program resource allocation.


References

1. Shortell, S.M. Suggestions for improving the study of health program implementation. Health Services Research 19(1):117-125, 1984.

2. Chen, H.T. Theory-Driven Evaluations. Newbury Park, CA: Sage Publications, 1990.

3. Patrick, D.L., D. Gembrowski, M.L. Durham, et al. Cost and outcomes of Medicare reimbursement for HMO preventive services. Health Care Financing Review 20(4):25-43, 1999.

4. Pronk, N.P. Designing and evaluating health promotion programs: simple rules for a complex issue. Disease Management and Health Outcomes 11(3):149-157, 2003.

© 2008 American College of Sports Medicine
