The capacity to improve quality in the healthcare system hinges on the ability to consistently measure health outcomes. Patient-reported outcome measures (PROMs) have become a central component of the quality improvement process because they reflect the ultimate goal of the healthcare system: improving patients’ sense of health and well-being. As such, they are an important tool for providing more patient-centered care, evaluating and improving current practices, and measuring quality as a part of alternative payment models. However, a limiting factor in the implementation and use of PROMs in research and clinical practice is inadequate followup; the response to PROM surveys regularly falls below 50% [14, 26]. This is important because > 5% loss of patients to followup can introduce bias, and a loss of 20% or more poses serious threats to the validity of the obtained data [20, 29, 33]. Furthermore, advanced alternative payment models that incentivize submission of PROM data, such as the Comprehensive Care for Joint Replacement program, are beginning to require PROM submission rates > 80% to fulfill data collection criteria. A high survey response is therefore important to properly interpret PROMs and apply collected data to improve the organization and delivery of health care.
Maintaining adequate followup is particularly difficult among orthopaedic patients, whose interaction with the healthcare system is often episodic and who may not always need ongoing care. For orthopaedic surgeons, the absence of followup for certain subsets of patients can result in unreliable outcome data. For example, patients who are dissatisfied with their surgery may be less likely to adhere to followup protocols, creating an unwarranted perception of success. Furthermore, more consistent tracking of patient outcomes could allow for earlier identification of complications and improved patient safety [1, 2]. Electronic followup, in particular, presents an opportunity to enhance patient communication while also reducing costs.
Healthcare payers are increasingly using incentives in orthopaedics to influence provider and patient behavior in ways that may improve care value. Monetary incentives have demonstrated promise in promoting physical activity after TKA and encouraging weight loss in patients who are obese [8, 23]. To increase survey response, various incentive strategies have been used in health care and other fields. These strategies include monetary offers, nonmonetary offers such as small gifts, and lottery-based incentives such as entry into a prize draw [4, 7]. To our knowledge, monetary incentives are the only method that has shown efficacy, but the minor improvements in response come with relatively large increases in cost [3, 10, 17, 18]. Social incentives, in which a charitable donation is made in return for questionnaire completion, offer a potentially more cost-effective strategy to increase response rates. These incentives work by reinforcing an individual’s sense of positive self-identity, and compared with standard monetary incentives, they hold the additional advantage of contributing to social utility. The only study to examine social incentives attempted to increase the response of patients with alcoholism to an online survey by offering a generalized donation to a national cancer research organization. This strategy was not effective; however, the donation lacked relevance to the surveyed population. We wished to determine whether more personalized social incentives would increase the probability of survey response. Making the donation recipient as relatable to the patient as possible, either by making the donation to an individual person or to a research program focused on the patient’s condition, might create a stronger altruistic appeal and increase the probability of response.
In this study, we attempted to answer the following questions: (1) Do personalized social incentives increase response rates or response completeness for postoperative PROM surveys in an orthopaedic population? (2) Are there demographic factors associated with response and nonresponse to postoperative PROM surveys? (3) Are some demographic factors associated with increased response to social incentive offers?
Patients and Methods
This was a prospective, randomized controlled trial performed at a single academic medical center from March 2018 through April 2018. Appropriate institutional review board exemption was obtained before the study began.
Participants were selected from an institutional orthopaedic outcomes database. We included patients older than 18 years who underwent orthopaedic surgery 1 to 2 years earlier. Included procedures were Achilles tendon repair; ACL reconstruction; meniscectomy; primary TKA; primary THA; and hip arthroscopy with labral repair, femoroplasty, or acetabuloplasty. Because THA and TKA comprised the largest volume of procedures, they were each classified separately for analysis. All other procedures were grouped together and classified as “sports medicine.”
A total of 4864 patients met the initial inclusion criteria. Of these, 179 (4%) were excluded because they did not have an email address on file, leaving a total of 4685 eligible participants. A random sample of 3000 (64%) patients was selected from this group for inclusion in the study. Of these, 99 (4%) email messages were undeliverable, and these patients were excluded from the analysis (Fig. 1). The proportion of undeliverable emails did not differ between groups (control: 4% [27 of 750], patient donation: 2% [17 of 751], research donation: 3% [24 of 749], explanation: 4% [31 of 750]; p = 0.222).
A list of study participants was given to the study statistician for randomization into one of four experimental groups. The patients were randomized using a computer-generated blocked randomization scheme stratified by the six procedure types. A block size of eight was used to ensure adequate coverage within each stratum. Patients were assigned to one of four experimental groups: (1) control group: patients received an invitation to complete the survey (n = 750); (2) patient donation: patients received an invitation to complete the survey with an offer of a USD 5 donation to provide necessary medical supplies to a pediatric orthopaedic patient in financial need (n = 751); (3) research donation: patients received an invitation to complete the survey with an offer of a USD 5 donation to a procedure-specific research program at the institution where the procedure was performed (n = 749); for example, patients who had undergone ACL reconstruction were offered a donation to a research project investigating the effects of ACL injury on cartilage health; and (4) explanation: patients received an invitation to complete the survey with a detailed explanation of how survey response supports efforts to improve care quality (n = 750). Individuals were blinded to their participation in this study and were unaware that patients in the other experimental groups received different email messages. Patients were not informed of any incentive offers before receiving the invitation to complete the survey. All patients were analyzed in the group to which they were assigned.
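The blocked, stratified scheme described above can be sketched in code. This is an illustrative reconstruction, not the study statistician's actual program: the group labels and the block-filling approach are our assumptions, although the design constraints (four groups, block size of eight, stratification by procedure type) come from the study description.

```python
import random

GROUPS = ["control", "patient_donation", "research_donation", "explanation"]

def blocked_randomization(patients_by_stratum, block_size=8, seed=0):
    """Assign patients to experimental groups using blocked randomization
    within strata (here, the six procedure types).

    patients_by_stratum: dict mapping stratum name -> list of patient IDs.
    Each block of 8 contains every group exactly twice, in shuffled order,
    which keeps group sizes balanced within every stratum.
    """
    assert block_size % len(GROUPS) == 0
    rng = random.Random(seed)
    assignments = {}
    for stratum, patients in patients_by_stratum.items():
        schedule = []
        while len(schedule) < len(patients):
            block = GROUPS * (block_size // len(GROUPS))
            rng.shuffle(block)           # randomize order within this block
            schedule.extend(block)
        for patient_id, group in zip(patients, schedule):
            assignments[patient_id] = group
    return assignments
```

Because each block is balanced, any stratum whose size is a multiple of the block size ends up with exactly equal group counts.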
Most patients were women (55%, 1607 of 2901) and white (80%, 2269 of 2901). Race was self-reported and included white, black, Asian, American Indian or Alaskan Native, and Native Hawaiian or other Pacific Islander. Because white and black constituted the largest numbers of patients, they were each classified separately for analysis. All other races were grouped together and classified as “other,” and patients reporting two or more races were classified as “two or more races.” Data on race were not reported for 54 of the 2901 (2%) patients analyzed. The proportion of missing race data did not differ between groups (control: 2% [14 of 723], patient donation: 2% [16 of 734], research donation: 2% [13 of 725], explanation: 2% [11 of 719]; p = 0.830). The median age was 58 years (range, 18-92 years), and the median time from the procedure to email invitation was 17 months (range, 12-24 months). No differences were observed among experimental groups in patient demographics or procedure characteristics (Table 1).
All patients received an email invitation with the same PROM survey link. Reminders containing the same text as in the original email were sent to nonrespondents at 1 and 2 weeks. Data collection ended 4 weeks after the initial email. The overall response rate of the 2901 analyzed patients was 46%. Donations were made to an institutional fund that provides necessary medical equipment to pediatric orthopaedic patients or to the appropriate research program based on the number of complete responses in each respective group.
Incentive offers were formulated with input from orthopaedic surgeons and behavioral neuroscientists. The patient donation was designed to appeal to aiding another individual patient, the research donation was designed to make the donation recipient relevant to the participant, and the explanation group was included to provide a nonmonetary social incentive control (see Appendix, Supplemental Digital Content 1, https://links.lww.com/CORR/A176). The final incentive offers, email wording, and incentive magnitude were informed and validated through a series of 10 qualitative interviews with individual patients in outpatient orthopaedic clinics. Patients who were interviewed were not included in the study.
PROM data are used for research and quality improvement initiatives at our institution, and the timing and frequency of survey distribution are evolving. Our collection strategy includes completion of the Patient-Reported Outcomes Measurement Information System 61 at the first new-patient visit and at subsequent 3-month intervals, regardless of whether the patient undergoes surgery. Surveys for this study were created through Research Electronic Data Capture (REDCap, Version 8.1.4), a secure, web-based application designed to support data collection and storage for research studies. Surveys included questions on patient demographics as well as Patient-Reported Outcomes Measurement Information System computer adaptive tests assessing physical function, pain interference, pain intensity, and depression. For this study, patient information was deidentified and not associated with questionnaire results. The survey took approximately 5 to 10 minutes to complete.
The primary outcome was the proportion of patients who responded (defined here as the response rate) at 4 weeks. The overall response rate included partial and complete survey responses. Secondary outcomes included the proportion of complete responses among respondents, as well as the demographic factors (age, gender, and race) associated with response and nonresponse. We also performed subgroup analyses to determine whether any demographic factors were associated with an increased response to social incentives.
The historical survey response rate for the institutional database used in this study was 29%. We considered a 9% or greater difference in response to be clinically meaningful. A sample size calculation was performed before the study began. Assuming a baseline response rate of 29%, we calculated that at least 661 patients per group would be required to detect a 9% difference in the response rate, with 80% power at an α level of 0.05. To account for anticipated undeliverable emails, group sizes of 750 patients were used, giving an overall sample size of 3000 patients. To compare patient and procedure characteristics between groups, we used the chi-square test for categorical variables and expressed them as counts and percentages. Because continuous data were not normally distributed, we measured differences using the Wilcoxon rank-sum test and reported medians (25th to 75th percentiles). Outcomes assessors were not blinded, but statistical analyses were prespecified before study initiation.
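The sample size calculation above can be sketched with the standard normal-approximation formula for comparing two proportions. This is an illustrative reconstruction, not the authors' actual computation: with the stated inputs alone (29% vs. 38% response, two-sided α = 0.05, 80% power), the standard formula yields roughly 431 patients per group, so the published figure of 661 presumably reflects additional design assumptions; one assumption that brings the estimate close to 661 is splitting α across the six pairwise comparisons among four groups (0.05/6).

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided comparison of two proportions,
    using the normal approximation with pooled variance under the null."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for the chosen alpha
    z_b = z.inv_cdf(power)           # quantile corresponding to the power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Unadjusted two-group comparison: 29% vs 38% response -> about 431 per group
n_unadjusted = n_per_group(0.29, 0.38)
# Hypothetical multiplicity adjustment (0.05 / 6 pairwise comparisons) -> close to 661
n_adjusted = n_per_group(0.29, 0.38, alpha=0.05 / 6)
```

The multiplicity-adjusted variant is only a plausible reconstruction of the published figure, not a confirmed description of the authors' method.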
To answer the first research question, whether social incentives affect survey response or response completeness, we analyzed the proportion of respondents and complete responses in each experimental group. Data were analyzed with a chi-square test. The scope of these analyses was descriptive, and there was no α correction for multiple testing. To answer the second research question, whether there were demographic factors associated with response and nonresponse, we used a multivariable logistic regression analysis for survey response to adjust for baseline confounders that were significant in univariable analyses. The model was fit using response status as the outcome and the experimental group as the main predictor. Other covariates were chosen based on univariable analyses, using a cutoff p value of 0.10, and consisted of age, gender, race, and procedure type. The linearity of age with response status was tested, and we determined that age should be modeled as a piecewise linear function with cutoffs at 58 and 64 years. To answer the third research question, whether any demographic factors were associated with an increased response to social incentive offers, we performed subgroup analyses for the variables associated with response in the logistic regression model. The proportion of respondents in each experimental group was analyzed with a chi-square test for each category of age, gender, and race. All tests were two-sided and considered significant at a p value < 0.05. All statistical analyses were performed using SAS Version 9.4 (SAS Institute, Cary, NC, USA).
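The piecewise linear coding of age with cutoffs at 58 and 64 years can be illustrated as follows. This is a sketch of one common linear-spline parameterization, not the authors' SAS code; with these three covariates entered in a logistic regression, each fitted coefficient corresponds to the per-year slope within one age segment (below 58, 58 to 64, and above 64 years).

```python
def age_basis(age, knots=(58, 64)):
    """Linear-spline terms for age with knots at 58 and 64 years.

    Returns three covariates whose regression coefficients give the
    per-year effect of age below the first knot, between the knots,
    and above the second knot, respectively.
    """
    k1, k2 = knots
    return (min(age, k1),                       # years accrued below 58
            min(max(age - k1, 0), k2 - k1),     # years accrued between 58 and 64
            max(age - k2, 0))                   # years accrued above 64
```

Exponentiating each coefficient then gives a segment-specific odds ratio per 1-year increase in age, the form in which the age results are reported.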
Results

There was no difference in the response rate or response completeness between experimental groups. The overall response was similar among all four groups (research donation: 49% [353 of 725], patient donation: 45% [333 of 734], control: 45% [322 of 723], explanation: 44% [314 of 719]; p = 0.239). Furthermore, among respondents, there was no difference in the proportion of complete responses between groups (research donation: 89% [315 of 353], patient donation: 90% [301 of 333], control: 89% [287 of 322], explanation: 87% [274 of 314]; p = 0.647) (Table 2).
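The group comparison of response rates can be reproduced from the reported counts with a Pearson chi-square test. The following is an illustrative stdlib implementation, not the authors' SAS analysis; it builds the 4 x 2 table (responded / did not respond) from (responders, total) pairs.

```python
def pearson_chi2(counts):
    """Pearson chi-square statistic for a groups x 2 table, given a list of
    (responders, total) pairs, one per experimental group."""
    total_resp = sum(r for r, n in counts)
    total_n = sum(n for r, n in counts)
    p = total_resp / total_n                     # pooled response rate
    stat = 0.0
    for r, n in counts:
        # sum (observed - expected)^2 / expected over both cells of the row
        for obs, exp in ((r, n * p), (n - r, n * (1 - p))):
            stat += (obs - exp) ** 2 / exp
    return stat

# Response counts reported above: research donation, patient donation,
# control, explanation
stat = pearson_chi2([(353, 725), (333, 734), (322, 723), (314, 719)])
# stat is about 4.2 with 3 degrees of freedom, below the 0.05 critical
# value of 7.81, consistent with the reported p = 0.239
```

A production analysis would use a library routine (for example, scipy.stats.chi2_contingency) to obtain the p value directly.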
After controlling for potential confounding variables such as age, gender, race, and procedure type, we found that women, older patients, and white patients were slightly more likely to respond to the PROM survey than men, younger patients, and black patients. Women were more likely than men to respond (odds ratio [OR], 1.175; 95% CI, 1.006–1.372; p = 0.042). Older patients were also more likely to respond than younger patients (< 58 years: OR, 1.016 per 1-year increase; 95% CI, 1.007–1.026; p = 0.001; 58-64 years: OR, 1.023 per 1-year increase; 95% CI, 1.015–1.031; p < 0.001; > 64 years: OR, 1.021 per 1-year increase; 95% CI, 1.014–1.028; p < 0.001). White patients were more likely to respond than black patients (OR, 2.034; 95% CI, 1.635–2.531; p < 0.001). No differences were seen in the other race categories (Table 3).
Among men and among patients younger than 58 years, the response rate differed by incentive offer, whereas it did not among women and older patients. Among men, the response was highest in the research donation group (research donation: 49% [155 of 316], patient donation: 45% [146 of 328], control: 40% [130 of 325], explanation: 39% [127 of 325]; p = 0.041). Among patients younger than 58 years, the response was likewise highest in the research donation group (research donation: 40% [140 of 351], control: 35% [130 of 371], patient donation: 32% [113 of 357], explanation: 27% [93 of 340]; p = 0.004). We found no such differences in any race category (Table 4).
Discussion

The transformation to value-based health care makes the accurate and efficient measurement of patient outcomes increasingly important, especially in orthopaedics, where pain relief and functional improvement are, from the patient’s perspective, the primary outcomes [9, 27]. However, low responses to PROM surveys create unreliable data, and high levels of response are necessary for evolving pay-for-performance models [5, 20, 33]. We found that personalized social incentives did not produce a difference in the overall response to PROM surveys in patients who underwent orthopaedic surgery, but they did produce small increases in response among certain subgroups, specifically men and younger patients. The development of novel and targeted strategies to improve survey response will be important to effectively implement PROM data into the healthcare system.
Our study has several limitations. First, although we were able to confirm that all patients received the email invitation, we were unable to determine whether nonrespondents in each group actually opened the email. This information would have been beneficial in identifying barriers to survey completion and guiding future efforts. Second, this study was conducted among patients at a single academic center and did not examine any upper extremity, spine, trauma, or foot and ankle patients. Thus, our results may not be generalizable to nonacademic practice settings or different geographic regions. Further, our results may not apply to orthopaedic patients outside the examined procedures, particularly younger patient populations. However, another study examining a cohort of younger patients with orthopaedic trauma also found that younger men were less likely to adhere to followup protocols. Therefore, our results might be well applied to patients with orthopaedic trauma and potentially other orthopaedic populations. Third, our study focused on operatively treated patients, and it is unclear whether the results would apply to patients who did not have surgery. Fourth, we did not examine the education level or socioeconomic status of the participants, factors that may have been associated with the response to social incentives. Finally, the patients in our study had undergone surgery 1 to 2 years earlier and had already received requests to complete PROM surveys before this study, which introduces the possibility of respondent fatigue in our population.
Social Incentives Did Not Increase Survey Response Rates
Social incentives did not increase the overall response to our PROM survey. Our results are consistent with other reports that social incentives are ineffective in improving the overall questionnaire response [11, 18, 25]. A previous study showed that an offer of a £5 (approximately USD 6.50) charity donation to a national cancer research organization did not affect the response to an online questionnaire among patients attempting to reduce alcohol consumption. We expanded these findings to more personalized social incentives. Furthermore, we found no difference in response completeness among respondents in the four groups. This is consistent with reports that monetary incentives and questionnaire length do not affect the rates of partial response [6, 32]. Moreover, respondents in all four groups had a complete response rate of > 85%, suggesting that the main barrier to collecting high-quality data is response initiation rather than completeness.
There are several explanations for the lack of difference in the overall response between the incentive and control groups. There may be an implicit social incentive involved in completing an outcome survey. Even without an explicit donation offer or an explanation of why responding is valuable, patients may perceive completing the survey as helpful to the surgeon who performed their procedure or beneficial to future patients. If so, completing an outcome survey may itself reinforce the patient’s sense of positive self-identity, and including an explanation or donation offer may provide little additional benefit. Indeed, systematic reviews have identified altruism as a principal motivator for participation in clinical trials [28, 31]. Therefore, surgeon emphasis on the role and value of the patient’s response to PROMs during the care process could be an influential approach to improving response rates.
It is also possible that the incentive value of USD 5 was too small. However, charitable donation offers of up to USD 40 have been ineffective in improving response rates to physician surveys. Moreover, potential improvements in response rates must be carefully weighed against increases in cost. The timing of the incentive offer may also be an important factor in a patient’s decision to respond. In general, response rates to electronic postoperative surveys decline with increasing time from surgery, likely because of patient recovery or respondent fatigue. Perhaps patients who were 1 to 2 years removed from surgery were too far from their procedure to feel a robust connection to the donation recipients. Informing patients of social incentive offers in the perioperative period may prime them for the incentive and allow a stronger association to develop over time.
Demographic Factors Associated with Response Rate
Several factors were associated with survey nonresponse. Men, younger patients, and black patients were all slightly less likely to respond than women, older patients, and white patients. Several studies have shown similar results with larger effect sizes. One study found that men, younger patients, patients who smoke cigarettes, and patients who consume alcohol are more likely to be lost to followup after orthopaedic trauma. Similarly, another study found that the postoperative survey response in patients who underwent elective surgery was lower among men, younger patients, black patients, and patients with a lower socioeconomic status. Identifying patients who are less likely to adhere to followup protocols allows for the development of individualized approaches to increase response rates.
Differential Response Among Groups by Type of Social Incentive
Men and younger patients were slightly more likely to respond to the procedure-specific research donation than to the other incentive offers, which aligns with a contemporary theory in social psychology that individuals are more likely to act altruistically toward others who are similar to themselves. These groups also had lower response rates at baseline. Women and older patients may be less likely to respond to social incentives simply because they are more likely to respond initially. Conversely, men and younger patients seem to have less motivation to respond at baseline, and social incentives may provide the added stimulus they need [16, 24]. Thus, social incentives may be particularly useful in augmenting the response of people in whom it is typically low, although additional strategies will be necessary.
Despite small effects in specific subgroups, personalized social incentives did not increase the overall response to postoperative orthopaedic surveys. Higher response rates are required for data integrity and pay-for-performance programs, and there is a need to develop strategies that reach these thresholds and enable healthcare stakeholders to use PROMs practically and effectively.
1. Basch E, Artz D, Dulko D, Scher K, Sabbatini P, Hensley M, Mitra N, Speakman J, McCabe M, Schrag D. Patient online self-reporting of toxicity symptoms during chemotherapy. J Clin Oncol. 2005;23:3552-3561.
2. Basch E, Iasonos A, Barz A, Culkin A, Kris MG, Artz D, Fearn P, Speakman J, Farquhar R, Scher HI, McCabe M, Schrag D. Long-term toxicity monitoring via electronic patient-reported outcomes in patients receiving chemotherapy. J Clin Oncol. 2007;25:5374-5380.
3. Brealey SD, Atwell C, Bryan S, Coulton S, Cox H, Cross B, Fylan F, Garratt A, Gilbert FJ, Gillan MGC, Hendry M, Hood K, Houston H, King D, Morton V, Orchard J, Robling M, Russell IT, Torgerson D, Wadsworth V, Wilkinson C. Improving response rates using a monetary incentive for patient completion of questionnaires: an observational study. BMC Med Res Methodol. 2007;7:12.
4. Brueton VC, Tierney J, Stenning S, Harding S, Meredith S, Nazareth I, Rait G. Strategies to improve retention in randomised trials. Cochrane Database Syst Rev. 2013;12:MR000032.
5. Centers for Medicare and Medicaid Services. Overview of CJR Quality Measures, Composite Quality Score, and Pay-for-Performance Methodology. 2016:2-6. Available at: https://innovation.cms.gov/Files/x/cjr-qualsup.pdf. Accessed May 25, 2018.
6. Dirmaier J, Harfst T, Koch U, Schulz H. Incentives increased return rates but did not influence partial nonresponse or treatment outcome in a randomized trial. J Clin Epidemiol. 2007;60:1263-1270.
7. Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;3:MR000008.
8. Finkelstein EA, Tham KW, Haaland BA, Sahasranaman A. Applying economic incentives to increase effectiveness of an outpatient weight loss program (TRIO) - A randomized controlled trial. Soc Sci Med. 2017;185:63-70.
9. Gagnier JJ. Patient reported outcomes in orthopaedics. J Orthop Res. 2017;35:2098-2108.
10. Gates S, Williams MA, Withers E, Williamson E, Mt-Isa S, Lamb SE. Does a monetary incentive improve the response to a postal questionnaire in a randomised controlled trial? The MINT incentive study. Trials. 2009;10:44.
11. Gattellari M, Ward JE. Will donations to their learned college increase surgeons' participation in surveys? A randomized trial. J Clin Epidemiol. 2001;54:645-649.
12. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377-381.
13. Hohwu L, Lyshol H, Gissler M, Jonsson SH, Petzold M, Obel C. Web-based versus traditional paper questionnaires: a mixed-mode survey with a Nordic perspective. J Med Internet Res. 2013;15:e173.
14. Howard JS, Toonstra JL, Meade AR, Whale Conley CE, Mattacola CG. Feasibility of conducting a web-based survey of patient-reported outcomes and rehabilitation progress. Digital Health. 2016;2:2055207616644844.
15. Huettel SA, Kranton RE. Identity economics and the brain: uncovering the mechanisms of social conflict. Philos Trans R Soc Lond B Biol Sci. 2012;367(1589):680-691.
16. Hutchings A, Neuburger J, Grosse Frie K, Black N, van der Meulen J. Factors associated with non-response in routine use of patient reported outcome measures after elective surgery in England. Health Qual Life Outcomes. 2012;10:34.
17. Kenyon S, Pike K, Jones D, Taylor D, Salt A, Marlow N, Brocklehurst P. The effect of a monetary incentive on return of a postal health and development questionnaire: a randomised trial [ISRCTN53994660]. BMC Health Serv Res. 2005;5:55.
18. Khadjesari Z, Murray E, Kalaitzaki E, White IR, McCambridge J, Thompson SG, Wallace P, Godfrey C. Impact and costs of incentives to reduce attrition in online trials: two randomized controlled trials. J Med Internet Res. 2011;13:e26.
19. Kim J, Lonner JH, Nelson CL, Lotke PA. Response bias: effect on outcomes evaluation by mail surveys after total knee arthroplasty. J Bone Joint Surg Am. 2004;86-A:15-21.
20. Kristman V, Manno M, Cote P. Loss to follow-up in cohort studies: how much is too much? Eur J Epidemiol. 2004;19:751-760.
21. Lansky D, Nwachukwu BU, Bozic KJ. Using financial incentives to improve value in orthopaedics. Clin Orthop Relat Res. 2012;470(4):1027-1037.
22. Leidy NK, Vernon M. Perspectives on patient-reported outcomes: content validity and qualitative research in a changing clinical trial environment. Pharmacoeconomics. 2008;26:363-370.
23. Losina E, Collins JE, Deshpande BR, Smith SR, Michl GL, Usiskin IM, Klara KM, Winter AR, Yang HY, Selzer F, Katz JN. Financial incentives and health coaching to improve physical activity following total knee replacement: A randomized controlled trial. Arthritis Care Res (Hoboken). 2018;70(5):732-740.
24. Madden K, Scott T, McKay P, Petrisor BA, Jeray KJ, Tanner SL, Bhandari M, Sprague S. Predicting and preventing loss to follow-up of adult trauma patients in randomized controlled trials: an example from the FLOW Trial. J Bone Joint Surg Am. 2017;99:1086-1092.
25. Nesrallah G, Barnieh L, Manns B, Clase C, Mendelssohn D, Guyatt G. A charitable donation incentive did not increase physician survey response rates in a randomized trial. J Clin Epidemiol. 2014;67:482-483.
26. Peters M, Crocker H, Jenkinson C, Doll H, Fitzpatrick R. The routine collection of patient-reported outcome measures (PROMs) for long-term conditions in primary care: a cohort survey. BMJ Open. 2014;4:e003968.
27. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477-2481.
28. Ross S, Grant A, Counsell C, Gillespie W, Russell I, Prescott R. Barriers to participation in randomised controlled trials: a systematic review. J Clin Epidemiol. 1999;52:1143-1156.
29. Sackett DL, Richardson WS, Rosenberg W. Evidence-based Medicine: How to Practice and Teach EBM. New York, NY, USA: Churchill Livingstone; 1997.
30. Stürmer S, Snyder M. ‘Helping Us’ versus ‘Them’: towards a group-level theory of helping and altruism within and across group boundaries. In: Stürmer S, Snyder M, eds. The Psychology of Prosocial Behavior: Group Processes, Intergroup Relations, and Helping. Malden, MA, USA: John Wiley & Sons; 2009:37-44.
31. Todd AM, Laird BJ, Boyle D, Boyd AC, Colvin LA, Fallon MT. A systematic review examining the literature on attitudes of patients with advanced cancer toward research. J Pain Symptom Manage. 2009;37:1078-1085.
32. Yu S, Alper HE, Nguyen AM, Brackbill RM, Turner L, Walker DJ, Maslow CB, Zweig KC. The effectiveness of a monetary incentive offer on survey response rates and response completeness in a longitudinal study. BMC Med Res Methodol. 2017;17:77.
33. Zelle BA, Bhandari M, Sanchez AI, Probst C, Pape HC. Loss of follow-up in orthopaedic trauma: is 80% follow-up still acceptable? J Orthop Trauma. 2013;27:177-181.