Chronic diseases are responsible for 7 of 10 deaths in the United States.1 Many of these deaths could be prevented by changes in health behaviors such as diet, physical activity, and smoking.2,3 Accordingly, it is a national priority to develop and scale effective interventions for promoting healthy behavior, particularly in lower-income populations where the risk of chronic disease is disproportionately high.2 A number of approaches have proven effective, including chronic disease self-management training,4,5 structured diabetes prevention programs,6,7 financial incentives,8 digital automated monitoring,9 and support from community health workers (CHWs).10–12
Yet, even the most effective of these interventions does not work for everyone. For example, only 60% of participants in the Diabetes Prevention Program (DPP) intensive lifestyle intervention achieved exercise goals, and only 40% met weight loss goals.10 We know surprisingly little about what distinguishes responders from nonresponders or the reasons for nonresponse.7 Nonresponders in the DPP trial were similar to responders across all measured baseline sociodemographic characteristics.10 Other behavioral intervention studies4,6,8,13 have similarly concluded that baseline measures explain little of the variation in treatment response.
Numerous systematic reviews7,10,14 and research funders15 have identified the same knowledge gap: for whom do effective chronic disease interventions work or fail to work, and why? Answering this question is critical so that interventions can be targeted and tailored across populations.
Our study team has developed IMPaCT16–19 (Individualized Management for Patient-Centered Targets), a standardized CHW intervention tested in 2 prior randomized clinical trials16,19 including a recent trial of outpatients with multiple chronic diseases.16 The primary analysis of this trial demonstrated that the intervention improved control of diabetes, obesity, smoking, mental health, and quality of primary care while reducing hospital admissions.16 However, despite overall effectiveness, 36.7% of intervention patients had worsened chronic disease control over the study period.
We were interested in learning who these nonresponders were and why their chronic diseases got worse despite support. Here, we present a secondary analysis of trial data exploring differences between intervention responders and nonresponders. We also use a concurrent qualitative evaluation to uncover differences not otherwise measured.
We used a convergent parallel design20 that combined a randomized controlled trial with a qualitative process evaluation. Details of this trial have been previously described.16–18 Briefly, enrollment took place between July 2013 and October 2014 at 2 urban internal medicine clinics. Eligible patients lived in a high-poverty region and were diagnosed with 2 or more of the following chronic diseases: diabetes, obesity, hypertension, or tobacco dependence. We did not require patients to be in poor control of their chronic diseases. After enrollment, patients selected one of their chronic diseases to focus on during the study and set a chronic disease management goal in consultation with their primary care provider.
Trained research assistants collected baseline biometric [glycosylated hemoglobin (HbA1c), body mass index, systolic blood pressure, or cigarettes per day] and survey data (including the Patient Activation Measure,21 a 13-item measure that assesses patients’ knowledge, skill, and confidence for self-management). After the baseline survey, patients were randomized to goal-setting alone or goal-setting plus 6 months of the IMPaCT intervention. Six months later, blinded research assistants conducted study visits to assess outcomes including the primary outcome: change in selected chronic disease.
We defined nonresponders as those individuals who had worsening of their selected chronic disease (as measured by an increase in HbA1c, body mass index, systolic blood pressure, or cigarettes per day between enrollment and 6-mo follow-up). Responders were those who improved, or at least maintained, control of their selected chronic disease. We reasoned that for low-income patients with multiple chronic diseases, maintaining control can be considered a victory, and we did not want to consider these people nonresponders.
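This dichotomous definition can be sketched in a few lines of Python (a minimal illustration only; the function and variable names are hypothetical, not drawn from the study dataset):

```python
def classify_response(baseline, follow_up):
    """Label a patient by the change in their selected chronic disease
    measure (HbA1c, body mass index, systolic blood pressure, or
    cigarettes per day) between enrollment and 6-month follow-up.

    An increase marks a nonresponder; improvement or maintenance
    (a negative or zero change) marks a responder.
    """
    change = follow_up - baseline
    return "nonresponder" if change > 0 else "responder"

# A patient whose HbA1c fell from 9.1 to 8.4 is a responder; one whose
# systolic blood pressure rose from 138 to 151 is a nonresponder, and
# a patient who exactly maintained control also counts as a responder.
```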
IMPaCT is theory-based and builds on the Reasoned Action Approach22,23 (Fig. 1), an evidence-based framework for behavior-dependent outcomes. In this framework, attitudes, social norms, and self-efficacy shape an individual’s intention to initiate a behavior. A strong intention predicts performance of the behavior unless there are external barriers. CHWs are trained laypeople who share socioeconomic backgrounds with patients and have high levels of empathy. These characteristics allow CHWs to influence attitudes, shift social norms, address external barriers, and bolster self-efficacy through strategies like motivational interviewing (Fig. 1).
At their first meeting, CHWs and patients discussed social and behavioral determinants affecting patients’ health. CHWs then helped patients create action plans to achieve their chronic disease management goals. CHWs provided 6 months of tailored support, contacting patients at least weekly, to help patients execute their action plans. Over the 6-month period, CHWs encouraged patients to monitor progress on their selected chronic disease goal each week (ie, track blood sugars, weight, daily cigarettes or blood pressure). If patients failed to make progress (for instance, if their weight was going up instead of down), CHWs used a semistructured script to provide empathic motivational interviewing, and troubleshoot barriers. CHWs also facilitated a weekly support group designed to foster peer networks.
The IMPaCT model provides guidelines for program infrastructure including hiring, training, and supervision.25 CHWs were recruited by circulating job descriptions through a network of community-based organizations (eg, neighborhood associations, churches). Job applicants were screened through group and individual interviews and employer reference checks to identify individuals who were good listeners, nonjudgmental, and reliable.
CHWs went through a month-long training covering topics such as action-planning, motivational interviewing, and trauma-informed care.
CHW fidelity to the training and care model was reinforced through several strategies. Newly trained CHWs were apprenticed to a senior CHW until they demonstrated proficiency in prespecified core competencies. CHWs were then observed by their supervisor, typically a master’s level social worker, to confirm adherence to work practices as described in training and in intervention manuals. Supervisors continued to assess intervention fidelity through a recurring series of weekly assessments: reviews of documentation, observation of CHWs in the field, telephone calls to patients, a performance dashboard of process and outcome metrics, and weekly meetings with each CHW.
Qualitative Process Evaluation
The goal of this process evaluation was to understand patient and CHW perspectives on the intervention, comparing responders and nonresponders.
The evaluation triangulated multiple data sources collected at the completion of the 6-month intervention: (1) abstraction of medical records and CHW documentation; (2) asynchronous qualitative interviews with patients and their CHWs that included stimulated recall of key topics gleaned from the chart abstraction; and (3) participant observation notes recorded at patient interviews. We used the Reasoned Action Approach to develop structured data collection guides for each source, probing for key domains such as self-efficacy, social norms, intention to perform health behaviors, and external barriers.
Trained interviewers (unknown to study participants) conducted patient interviews either in patients’ homes or private clinic rooms, based on preference, while a research assistant recorded participant observation notes. Patients and their CHWs were interviewed at separate times in no particular sequence without any reference to each other’s interviews. Interviewers asked open-ended questions to encourage patients to speak freely and in their own words. Interviewers asked probing follow-up questions to clarify statements, explore important themes, and establish timelines of events. Interviews were audio-recorded and transcribed.
We purposively sampled responders and nonresponders, and planned to obtain 40 interviews (20 patients and 20 CHWs) to allow for thematic saturation, the point at which no new qualitative codes are created or refined and code frequency stabilizes.25 We performed iterative data collection and analysis to determine whether new themes were emerging, assess saturation, and inform purposive sampling. For instance, if we realized that results among hypertensive patients were saturated, but new themes were still emerging among smokers, we purposively sampled more smokers.
We used the above-described randomized controlled trial dataset, focusing on the subgroup of individuals in the CHW intervention arm. Response was defined as a dichotomous variable: improvement or maintenance (a negative or zero change) in patients’ selected chronic disease measure.
We descriptively compared baseline characteristics of responders and nonresponders in the CHW intervention arm using χ2 tests for categorical variables and 2-tailed, unpaired t tests or Wilcoxon rank sum tests for continuous variables.
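For a 2×2 table (e.g., responder status crossed with a binary baseline characteristic), the χ2 statistic reduces to a short computation. The sketch below is purely illustrative and uses made-up counts, not study data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table given
    as [[a, b], [c, d]] (no continuity correction applied)."""
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected cell count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: responders vs. nonresponders by a yes/no trait.
stat = chi_square_2x2([[10, 20], [30, 40]])
```

In practice a statistics package would also return the P-value; this sketch only shows where the test statistic comes from.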
We then built a multivariable logistic regression model including baseline variables (main effects and interaction terms) that were associated (P<0.2) with the outcome in the descriptive analysis (available as Online Supplement, Supplemental Digital Content 1, http://links.lww.com/MLR/B594). We also included conceptually driven variables in this model: basic demographic and clinical variables (age, sex, race, prior hospital use) and domains of the Reasoned Action Approach (self-efficacy and commitment to chronic disease management goals, social support, and key external barriers as reflected by income, employment, drug or alcohol use, and perceived stress). We then used stepwise regression to restrict the model to variables meeting a significance threshold (P<0.1).
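The backward-stepwise restriction step can be sketched generically. The fitter below is a stand-in with fixed P-values (a real analysis would refit the regression at each step with a statistics package, and P-values would change as variables drop); only the elimination loop and the P<0.1 threshold reflect the procedure described above:

```python
def backward_stepwise(variables, fit, alpha=0.1):
    """Repeatedly fit the model and drop the least significant variable
    until every remaining variable has P < alpha.

    `fit` takes a list of variable names and returns a dict mapping
    each name to its P-value in the fitted model.
    """
    selected = list(variables)
    while selected:
        p_values = fit(selected)
        worst = max(selected, key=lambda v: p_values[v])
        if p_values[worst] < alpha:
            break  # all remaining variables meet the threshold
        selected.remove(worst)
    return selected

# Stand-in fitter with invented, fixed P-values for illustration only.
fake_p = {"age": 0.45, "activation_x_arm": 0.05, "race": 0.005, "income": 0.30}
model_vars = backward_stepwise(list(fake_p), lambda vs: {v: fake_p[v] for v in vs})
```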
All 4 data sources for each patient (chart abstraction, patient interview, CHW interview, and observation notes) were uploaded into QSR NVivo 10.0 (QSR International) for analysis. We used an integrated approach,26 developing a coding schema that included major ideas that emerged from a close reading as well as a set of a priori codes corresponding to key domains of the Reasoned Action Approach. Two trained research assistants coded all data and met iteratively to modify the coding schema and interview guide for clarity.26 At coding meetings, inter-rater reliability (κ) was calculated using the NVivo coding comparison function. Discrepancies (a node where inter-rater reliability was <0.8) were discussed to facilitate either consensus or a deeper understanding of the issue at hand. The final inter-rater reliability across all data sources was 0.93.
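Cohen's κ, the inter-rater statistic referenced above, corrects observed agreement for the agreement expected by chance. This small sketch computes it directly, using invented codes rather than the study transcripts:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Chance agreement from each coder's marginal label frequencies.
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders applying hypothetical codes to 4 interview excerpts.
kappa = cohens_kappa(["barrier", "barrier", "support", "support"],
                     ["barrier", "barrier", "support", "barrier"])
```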
Data were sorted by code based on intervention response using the matrix query function of NVivo. These reports were read closely to identify prominent themes, including differences between responders and nonresponders. We used member checking27—a technique in which findings are validated with members of the study population—by discussing findings with CHWs.
The themes reported below are derived from the integrated analysis of all 4 data sources.
Of the subgroup of patients who received CHW support (n=150), 63.3% maintained or improved control of their selected chronic disease, while 36.7% worsened. In the descriptive analysis, responders and nonresponders were similar across all measured characteristics with the exception of race, chronic disease selection, baseline chronic disease control, goal difficulty, and Patient Activation Measure (Table 1).
In the final multivariable logistic regression model, the only significant predictor of response was an interaction between patient activation and study arm (odds ratio, 1.03; 95% confidence interval, 1.01–1.05; P=0.05): patients with lower activation were more likely to respond to the intervention. African American patients appeared more likely to respond to the intervention (odds ratio, 13.3; 95% confidence interval, 1.3–139.6; P=0.005); however, this estimate was unstable because there were only 8 non–African Americans in the sample.
There was no difference in intervention response based on which CHW was delivering the intervention. There were no differences in the types of action plans patients and their CHWs worked on together (Table 1).
Ninety trial participants were screened for interest in the qualitative evaluation. Of the 76 who consented, 24 were purposively sampled and invited to participate. Three of these individuals were not reachable at the time of the interview and 1 was no longer interested, leaving 20 patients (10 responders, 10 nonresponders) in the final sample (Table 2), along with 20 corresponding dyad interviews with their 4 CHWs.
There were several themes common across responders and nonresponders. Most patients described being motivated at the time of study enrollment and considering chronic disease management a high priority. CHWs affirmed the uniformity of initial motivation; in fact, they had trouble predicting who would eventually become a responder versus a nonresponder. Ninety percent of patients expressed a positive opinion of their CHW. Charts and interviews revealed that responders and nonresponders received similar types of social support from their CHWs. This included emotional support (“I [told CHW] about things going on with my children’s father. She really understood” [Female patient, 35 y]), appraisal support (“When I was doing well, [CHW] said she proud of me” [Female patient, 80 y]), and informational support (“[CHW] got me a list of stuff, what to eat and what not to eat” [Male patient, 48 y]).
Charts and interviews revealed common barriers across both groups including limited access to resources like healthy food, stress, trauma histories, disability, and insurance issues. One notable barrier endorsed by half of all patients was grief associated with death of family members (“I lost my only child and that was a very [big] shock to me … and then after that my grandson, his only son, dropped dead” [Female patient, 88 y]). Patients commonly talked about doing well with their health behavior change until a family death disrupted them entirely.
Differences Between Responders and Nonresponders
Despite similarities, there were a number of important qualitative differences distinguishing responders from nonresponders (Table 3).
Influence of Social Norms
Responders sometimes enlisted the help of supportive friends or family to do things like exercise or reinforce smoking cessation. However, when friends or family were not supportive, responders seemed to pull away from their influence. “[My family] didn’t take me serious [about quitting smoking]. It made me more determined. So I really didn’t talk about it with anybody. I just let them know I don’t smoke no more” [Male patient, 79 y]. These responders created new social norms by drawing closer to their CHWs or the support group. For instance, a CHW described how a patient gravitated to the support group when her family continued to offer her unhealthy food: “[She] saw that she wasn’t alone and met other people working on the same goals”[CHW].
In contrast, nonresponders seemed to have a harder time disentangling from negative social norms. One CHW remarked: “[The patient] had three people who lived in her house who all smoked and every time they would light up, she felt the need to light up. And I had asked her if she wanted to ask her family members to light up outside. And I remember her—she didn’t agree to that.” “It makes it difficult,” this same patient explained. “Feeling that I didn’t have anybody at home working with me to try to do things that needed to be done. That brought about a lot of stress and some anger and I smoked a lot” [Female patient, 59 y].
Specific Versus Vague Barriers
Responders commonly described concrete barriers, such as unhealthy food at social events, inclement weather, or pain that made it difficult to exercise. Because of the concrete, often temporary nature of these barriers, responders routinely could work with CHWs to find solutions.
Nonresponders described vague barriers and as a result could not easily identify solutions. They used phrases like “It’s nothing specific” or “It was situations beyond my control.” They expressed lack of self-awareness around why they struggled on their health goals: “I don’t know what happened. My sugar just got out of control.”
Response to Failure
All patients found behavior change to be challenging, and most had periods of failure (ie, gaining weight or relapsing with cigarettes) along the way. Responders were more likely to react to these failures with resolve. A CHW described a patient “who would call and say like, oh, I had a setback. She gained three or four pounds. But she didn’t make a bad week turn into a bad month. She got right back on track as soon as she could” [CHW]. Another patient described setbacks related to childcare and other family stressors. “So I was going through more of a depression. But I didn't let that get me down either. I was making sure I would get to my goal” [Female patient, 39 y].
In contrast, nonresponders often started off optimistically, but were discouraged by failure and became avoidant. They described feelings of self-blame that often led them to disengage from the intervention. One CHW described a patient who “got kinda frustrated because she said she didn’t see the weight coming off like it was coming off before. I just encouraged her to just keep working on it [CHW].” However, as the patient explained: “[The program] was coming to the end, and I was telling her I hadn’t reached the goal. She would tell me don’t get discouraged. But, I got discouraged. I mean, what you gonna do? Either you do it or you don’t” [Female patient, 56 y].
In some instances, discouragement was prompted by CHWs asking patients to self-monitor disease measures (ie, blood sugar, weight, cigarettes or blood pressure). Monitoring forced patients to confront their progress or lack thereof on their goals and seemed to create a type of aversive feedback: “She keep texting me and calling me to ask about weight. But some of the texts and some of the calls, I never replied back, because I feel ashamed. Because she would be like, let’s do this—it’s going to be good for you. At that time, when I answer, yeah, yeah, yeah. But I hang up, f**k it, I don’t want to do that s**t because I don’t see the change on me [Male patient, 40 y].”
Interestingly, when CHWs adjusted their approach and deliberately stopped discussing self-monitoring or health goals, patients sometimes became reengaged. As told from a CHW’s perspective: “She was avoiding my calls because her sugars were high. So I left messages purely to make her smile. She began to call me … checking her sugars, without me asking! [CHW].”
We found minimal quantitative clues to explain differences in response to an evidence-based chronic disease management intervention. The intervention was slightly more effective among patients with lower baseline patient activation.
The qualitative evaluation revealed several common external barriers that have been well described in the literature, such as lack of healthy foods28 and insurance problems.29 Notably, this high-risk group described grief as a common and tragic deterrent to health behavior change.
The most striking difference between responders and nonresponders was their reactions to failure. By encouraging patients to monitor progress on chronic disease goals, CHWs activated a feedback loop that provided patients with signals of success or failure (Fig. 2A). Responders seemed to be motivated by failure and went on to “work even harder” with their CHW on health behavior change, ultimately improving chronic disease control. Nonresponders appeared discouraged by failure and avoided their CHWs. Interestingly, these patients may have been reengaged when CHWs stopped focusing on the “numbers” and provided pure emotional support.
These findings raise a critical question: what caused these 2 subgroups of patients, so similar by most measures, to have such different reactions to failure?
Recent behavioral science theories30,31 may explain these individual differences. Failure is processed in 2 stages: attribution (Why did I fail?) and emotion (How do I feel about the failure?). When people attribute failure to concrete and controllable causes (I failed this test because I didn’t study), they feel regret,32 which can sharpen motivation, increase self-efficacy, and improve behavior.32,33 In contrast, when people attribute failure to vague or uncontrollable causes (I failed because I am stupid), they feel ashamed and hopeless.34,35 These negative emotions can trigger avoidance as a way to preserve self-esteem.30,36
Fortunately, 2 behavioral interventions seem to promote resolve instead of avoidance: attribution retraining37,38 and positive affect induction.30 Attribution retraining is a form of cognitive reframing that encourages participants to interpret failures as controllable.37,38 This retraining has been tested in education and improves students’ academic performance after failure,37 with the greatest benefit for students predisposed to avoidance.39,40 Few studies have translated attribution retraining to the sphere of health behavior.37 Positive affect induction uses strategies such as unexpected compliments or gifts41,42 and self-affirmation43 to induce positive emotion. Two health care studies demonstrated that positive affect induction improved adherence to hypertensive medications44 and doubled physical activity among patients undergoing percutaneous coronary intervention.45
Our results and insights from the behavioral science literature are synthesized in Figure 2B.
This study has several limitations. It was a single-center study whose findings may not generalize beyond an urban, disadvantaged population. The study was not powered to detect differences between responders and nonresponders. In addition, although we did not see differential response across the CHWs who delivered the intervention, it is difficult to know with certainty whether nonresponse was due to patient rather than CHW characteristics. However, several safeguards were in place to reinforce CHW fidelity to the intervention, and the qualitative data supported the notion that care was uniform. Another limitation is that qualitative interviews may have been subject to recall bias, which we attempted to minimize by using chart-stimulated recall. Finally, results were validated by member checking with CHWs but not with patients.
Sustained health behavior change is incredibly challenging and most people fail along the way. Self-monitoring—a cornerstone of many health promotion strategies like wearable tracking devices46 or “Know Your Numbers” campaigns47—can heighten awareness of these failures. How a patient ultimately responds to failure may be an important and modifiable determinant of future behavior and chronic disease outcomes.
Yet failure and nonresponse are understudied. Quantitative analyses examining nonresponse are often unrevealing, likely because we are not measuring the right baseline variables. We should measure not only demographic but also psychological characteristics (eg, grit,48 response to failure,49 or coping style50) in intervention trials. Understanding predictors of nonresponse could inform targeting of interventions for maximal benefit. Alternately, interventions could be modified to better serve would-be nonresponders; for instance, based on these findings, the study team is planning to train IMPaCT CHWs on positive affect induction and attribution retraining. Perhaps in the future, CHWs will be able to help patients face the failures that are an inevitable part of behavior change.
1. Centers for Disease Control and Prevention. At a Glance 2015 National Center for Chronic Disease Prevention and Health Promotion Fact Sheet. Centers for Disease Control and Prevention, Atlanta, GA, 2015. Available at: https://www.cdc.gov/chronicdisease/resources/publications/aag/pdf/2015/nccdphp-aag.pdf
2. Mensah GA, Mokdad AH, Ford ES, et al. State of disparities in cardiovascular health in the United States. Circulation. 2005;111:1233–1241.
3. Schroeder SA. Shattuck Lecture. We can do better—improving the health of the American people. N Engl J Med. 2007;357:1221–1228.
4. Lorig KR, Sobel DS, Stewart AL, et al. Evidence suggesting that a chronic disease self-management program can improve health status while reducing hospitalization: a randomized trial. Med Care. 1999;37:5–14.
5. Lorig KR, Ritter P, Stewart AL, et al. Chronic disease self-management program: 2-year health status and health care utilization outcomes. Med Care. 2001;39:1217–1223.
6. Knowler WC, Barrett-Connor E, Fowler SE, et al. Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. N Engl J Med. 2002;346:393–403.
7. Community Preventive Services Task Force. Diabetes prevention and control: combined diet and physical activity promotion programs to prevent type 2 diabetes among people at increased risk. 2014. Available at: https://www.thecommunityguide.org/sites/default/files/assets/Diabetes-Diet-and-PA_1.pdf
8. Volpp KG, Troxel AB, Pauly MV, et al. A randomized, controlled trial of financial incentives for smoking cessation. N Engl J Med. 2009;360:699–709.
9. McLean G, Band R, Saunderson K, et al. Digital interventions to promote self-management in adults with hypertension: systematic review and meta-analysis. J Hypertens. 2016;34:600–612.
10. Community Preventive Services Task Force. Cardiovascular disease prevention and control: interventions engaging community health workers. 2015. Available at: https://www.thecommunityguide.org/content/tffrs-cardiovascular-disease-interventions-engaging-community-health-workers
11. Kim K, Choi JS, Choi E, et al. Effects of community-based health worker interventions to improve chronic disease management and care among vulnerable populations: a systematic review. Am J Public Health. 2016;106:e3–e28.
12. Viswanathan M, Kraschnewski JL, Nishikawa B, et al. Outcomes and costs of community health worker interventions: a systematic review. Med Care. 2010;48:792–808.
13. Griffiths C, Motlib J, Azad A, et al. Randomised controlled trial of a lay-led self-management programme for Bangladeshi patients with chronic disease. Br J Gen Pract. 2005;55:831–837.
14. Community Preventive Services Task Force. Cardiovascular disease prevention and control: interactive digital interventions for blood pressure self-management. 2017. Available at: https://www.thecommunityguide.org/content/tffrs-cardiovascular-disease-prevention-interactive-digital-interventions-blood-pressure
15. Patient-Centered Outcomes Research Institute. Putting research to work for individual patients. 2015. Available at: https://www.pcori.org/blog/putting-research-work-individual-patients
16. Kangovi S, Mitra N, Grande D, et al. Community health worker support for disadvantaged patients with multiple chronic diseases: a randomized clinical trial. Am J Public Health. 2017;107:1660–1667.
17. Kangovi S, Mitra N, Turr L, et al. A randomized controlled trial of a community health worker intervention in a population of patients with multiple chronic diseases: study design and protocol. Contemp Clin Trials. 2017;53:115–121.
18. Kangovi S, Mitra N, Smith RA, et al. Decision-making and goal-setting in chronic disease management: baseline findings of a randomized controlled trial. Patient Educ Couns. 2017;100:449–455.
19. Kangovi S, Mitra N, Grande D, et al. Patient-centered community health worker intervention to improve posthospital outcomes: a randomized clinical trial. JAMA Intern Med. 2014;174:535–543.
20. Guetterman TC, Fetters MD, Creswell JW. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. Ann Fam Med. 2015;13:554–561.
21. Hibbard JH, Stockard J, Mahoney ER, et al. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39(pt 1):1005–1026.
22. Fishbein M, Ajzen I. Predicting and Changing Behavior: The Reasoned Action Approach. New York, NY: Psychology Press; 2010.
23. Fishbein M. Factors influencing behavior and behavior change. Final Report. Paper presented at: Theorists Workshop, Bethesda, MD, 1992.
24. Rimer B, Glanz K. Theory at a Glance. Bethesda, MD: National Institutes of Health, National Cancer Institute; 2005:25.
25. Kangovi S, Leri D, Clayton C, et al. Penn Center for Community Health Workers. 2013. Available at: http://chw.upenn.edu/. Accessed June 7, 2018.
26. Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42:1758–1772.
27. Bernard HR, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. Los Angeles, CA: SAGE; 2010.
28. Evans A, Banks K, Jennings R, et al. Increasing access to healthful foods: a qualitative study with residents of low-income communities. Int J Behav Nutr Phys Act. 2015;12(suppl 1):S5.
29. Jerant AF, von Friederichs-Fitzwater MM, Moore M. Patients’ perceived barriers to active self-management of chronic conditions. Patient Educ Couns. 2005;57:300–307.
30. Eberly M, Liu D, Mitchell T, Lee T. The goal striving process. In: Locke E, Latham G, eds. New Developments in Goal Setting and Task Performance. New York, NY: Taylor and Francis Group; 2013:35–51.
31. Webb TL, Chang B, Benn Y. “The Ostrich Problem”: motivated avoidance or rejection of information about goal progress. Soc Personal Psychol Compass. 2013;7:794–807.
32. Ketelaar T, Au WT. The effects of feelings of guilt on the behaviour of uncooperative individuals in repeated social bargaining games: an affect-as-information interpretation of the role of emotion in social interaction. Cogn Emot. 2003;17:429–453.
33. Cron WL, Slocum JW, VandeWalle D, et al. The role of goal orientation on negative emotions and goal setting when initial performance falls short of one's performance goal. Hum Perform. 2005;18:55–80.
34. Hospers HJ, Kok G, Strecher VJ. Attributions for previous failures and subsequent outcomes in a weight reduction program. Health Educ Q. 1990;17:409–415.
35. Tracy JL, Robins RW. Appraisal antecedents of shame and guilt: support for a theoretical model. Pers Soc Psychol B. 2006;32:1339–1351.
36. Ilies R, Judge TA. Goal regulation across time: the effects of feedback and affect. J Appl Psychol. 2005;90:453–467.
37. Forsterling F. Attributional retraining: a review. Psychol Bull. 1985;98:495–512.
38. Perry RP, Hechter FJ, Menec VH, et al. Enhancing achievement-motivation and performance in college-students—an attributional retraining perspective. Res High Educ. 1993;34:687–723.
39. Ilies R, Judge TA, Wagner DT. The influence of cognitive and affective reactions to feedback on subsequent goals role of behavioral inhibition/activation. Eur Psychol. 2010;15:121–131.
40. Hall NC, Perry RP, Goetz T, et al. Attributional retraining and elaborative learning: Improving academic development through writing-based interventions. Learn Individ Differ. 2007;17:280–290.
41. Charlson ME, Boutin-Foster C, Mancuso CA, et al. Randomized controlled trials of positive affect and self-affirmation to facilitate healthy behaviors in patients with cardiopulmonary diseases: rationale, trial design, and methods. Contemp Clin Trials. 2007;28:748–762.
42. Erez A, Isen AM. The influence of positive affect on the components of expectancy motivation. J Appl Psychol. 2002;87:1055–1067.
43. Steele CM, Spencer SJ, Lynch M. Self-image resilience and dissonance: the role of affirmational resources. J Pers Soc Psychol.
44. Ogedegbe GO, Boutin-Foster C, Wells MT, et al. A randomized controlled trial of positive-affect intervention and medication adherence in hypertensive African Americans. Arch Intern Med. 2012;172:322–326.
45. Peterson JC, Charlson ME, Hoffman Z, et al. A randomized controlled trial of positive-affect induction to promote physical activity after percutaneous coronary intervention. Arch Intern Med. 2012;172:329–336.
46. Asch DA, Muller RW, Volpp KG. Automated hovering in health care—watching over the 5000 hours. N Engl J Med. 2012;367:1–3.
48. Duckworth AL, Quinn PD. Development and validation of the Short Grit Scale (Grit-S). J Pers Assess. 2009;91:166–174.
49. Zemack-Rugar Y, Corus C, Brinberg D. The “Response-to-Failure” Scale: predicting behavior following initial self-control failure. J Mark Res. 2012;49:996–1014.
50. Carver CS. You want to measure coping but your protocol’s too long: consider the brief COPE. Int J Behav Med. 1997;4:92–100.