The high-level engagement approach incorporated 2 primary venues for engagement. The first venue was stakeholder advisory boards (SABs), which were involved in planning and developing governance activities and refining policies, fulfilling the research management and participant protection elements of research governance. There were 4 SABs planned: 1 responsible for governance and 3 others representing each of the 3 conditions of focus—weight management/obesity (WMO), heart failure, and Kawasaki disease (KD). Each SAB was expected to include around 10 stakeholders. In all engagement activities, we included patients, advocates, clinicians, and researchers as stakeholders.
The second venue was the stakeholder research prioritization panels (panels). Prioritization of research topics by stakeholders fulfills the accountability element of research governance, in which those who make decisions about research topics are held accountable to stakeholders’ expressed priorities.
The panels were convened to set research priorities using an online, modified Delphi (OMD) method—a deliberative and iterative approach to attaining consensus through discussion and statistical feedback. The ExpertLens online platform19 was used to facilitate rating and interaction among a large number of stakeholders in a series of condition-specific panels. ExpertLens permits an unlimited number of people interested in a topic to express their opinions and have them read by others, and it organizes those opinions into topic areas, replacing traditional face-to-face meetings with asynchronous, moderated online discussion boards.20
ExpertLens harnesses the wisdom of “select crowds” by allowing participants:
- To independently generate candidate ideas for research topics and evaluation criteria—round 0.
- To respond to a set of predetermined questions about those ideas and submit ratings on evaluation criteria for each idea—round 1.
- To familiarize themselves with the answers given by others and to discuss the group responses via asynchronous and (partially) anonymous and moderated online discussion boards—round 2.
- To modify their original answers in light of the group discussion and submit new ratings—round 3.
The group’s final answer is determined statistically by analyzing the last set of responses provided by each individual, using the RAND/UCLA Appropriateness Method’s approach to determining the existence of group consensus.21 Productive engagement of stakeholders with different perspectives is difficult to achieve in an in-person format within any reasonable time frame, but it becomes feasible in an online format. The OMD approach has demonstrated utility in engaging patients in health care planning,22 exploring performance measurements for arthritis,23 and setting research goals for suicide prevention.24 However, researchers have not previously evaluated differences in stakeholder opinions about OMD in large multi-stakeholder panels. In this study, we recruited 360 stakeholders who self-identified their role as patient, clinician, or researcher. Within each condition, patients and clinicians were then randomly assigned to either solo-stakeholder panels or mixed panels involving patients, clinicians, and researchers. Researchers were assigned only to mixed panels.
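To illustrate the consensus step, the sketch below applies a simplified RAND/UCLA-style classification to one topic’s final-round ratings. The 9-point scale, the tertile cut points, and the disagreement rule follow common conventions from the published manual21 rather than details reported for this study; the function and threshold choices are our own assumptions.

```python
from statistics import median

def classify_topic(ratings, scale_max=9):
    """Classify a topic's final-round ratings into consensus categories.

    Assumes a 9-point scale split into tertiles, as is conventional in
    RAND/UCLA-style panels: median in the top tertile -> 'agree',
    bottom tertile -> 'disagree', middle tertile -> 'uncertain',
    with a simple check for panel disagreement.
    """
    low_cut = scale_max / 3        # 3.0 on a 9-point scale
    high_cut = 2 * scale_max / 3   # 6.0 on a 9-point scale
    med = median(ratings)
    # Illustrative disagreement rule: a substantial share of ratings
    # falls in BOTH extreme tertiles.
    n_low = sum(r <= low_cut for r in ratings)
    n_high = sum(r > high_cut for r in ratings)
    if min(n_low, n_high) >= len(ratings) // 3:
        return "disagreement"
    if med > high_cut:
        return "agree"
    if med <= low_cut:
        return "disagree"
    return "uncertain"
```

A panel’s round-3 ratings for each topic would be fed through such a rule to label the group’s final position on that topic.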
The initial research topics for consideration and the evaluation criteria used by the panels during voting rounds were selected by the SABs. The WMO panel considered 9 topics, while the heart failure and KD panels considered 7 topics each. In rounds 1 and 3, all panels rated topics using the same 5 criteria (contribution to more informed health care decision-making, collaboration among patients, caregivers, and clinicians, relevance to a large proportion of patients and caregivers, impact on health care goals, and innovation) and were able to explain their ratings. None of the rating questions or explanations (eg, rationale comments) were required.
SAB and panel candidates were recruited through personal contacts of pSCANNER investigators, clinicians, and staff, as well as referrals from patient co-chairs or other SAB members. Notices inviting participation were also sent by email through supportive partnering organizations, such as the Kawasaki Disease Foundation and the Society for Participatory Medicine, and by SAB members to online patient communities and social media. Advisors had to be patients (including parents/guardians/caregivers of KD patients who were minors), patient advocates, clinicians, or researchers interested in and experienced with issues related to one of the 3 conditions of interest. Candidates completed a brief questionnaire about their background and interest in participating. Eligible candidates were accepted until a specific deadline date or until the targeted numbers were achieved (30 SAB members and 360 OMD panelists).
The stakeholder engagement activities occurred over 18 months. These activities are described in Table 1. Feedback was obtained from stakeholders through email or verbally. This project was reviewed by the Institutional Review Board at UC Davis and RAND and determined not to constitute human subjects research.
The activities for both the SAB and the panels were supported by the pSCANNER stakeholder engagement team of investigators and staff who implemented the activities. The core stakeholder engagement team included 2 researchers, 1 project manager, and 1 patient co-investigator, supported by 6 additional ad hoc staff. The patient co-investigator served as co-chair of the SAB with another patient representative. These 2 patients worked closely with the stakeholder engagement team to plan the activities and content for all SAB meetings. The disciplines represented in the full team included community outreach, CBPR, health education, graphic design, health services research, Delphi methods and programming, clinical expertise, meeting facilitation, and project management.
The level of engagement in SABs was measured by tracking the participation rate for each activity by stakeholder group. In addition, an internal assessment gathered email feedback after each meeting, in order to improve the conduct of subsequent meetings, and at the conclusion of the final SAB meeting.
The OMD engagement was evaluated in 2 ways. First, the activity level of engagement was assessed quantitatively through objective participation metrics collected by the system using participant logins (deidentified for the analysis). Second, engagement experience data were collected using an online questionnaire at the completion of the OMD panels. We report in this paper on 10 statements (listed in Table 4) regarding opinions about the online discussions and use of the OMD software, which were rated using a 7-point Likert scale (from 1=strongly disagree to 7=strongly agree). The items are from the authors’ previous OMD studies of professional stakeholders.19,25 We compared activity level and experience between 2 groups, patients/parents (parents in the KD panel) and clinicians/researchers, using descriptive statistics and t tests.
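The group comparison described above can be sketched with standard tools; a minimal stdlib example is below. The Welch (unequal-variance) form of the t statistic and the ratings shown are our assumptions for illustration, as the paper does not specify the t test variant used.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

# Hypothetical 7-point Likert ratings for one questionnaire item.
patients_parents = [5, 4, 6, 5, 3, 4, 5, 6, 4, 5]
clinicians_researchers = [6, 6, 7, 5, 6, 7, 6, 5, 6, 7]

t_stat = welch_t(patients_parents, clinicians_researchers)
print(f"mean difference: {mean(patients_parents) - mean(clinicians_researchers):.2f}, t = {t_stat:.2f}")
```

In practice, the statistic would be referred to a t distribution to obtain P values (eg, via `scipy.stats.ttest_ind` with `equal_var=False`), which is how group differences such as those reported in the Results would be tested.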
The engagement strategies as implemented are summarized in Table 2. The implemented strategies are organized by PCORI stakeholder engagement principle and described in detail in an Appendix (Supplemental Digital Content 1, http://links.lww.com/MLR/B457).
Forty-six individuals participated in the SABs, including 16 patients/parents who had experience with one of the 3 clinical conditions, 4 patient advocates with experience in some aspect of research governance, and 26 clinicians with experience with one of the 3 clinical conditions. Of the SAB members, 25 (54%) were women. The breakdown by stakeholder group is shown in Table 3.
Attendance, a measure of engagement, was fairly high, ranging from 64% to 75% for the SAB meetings (Table 3). The meetings to refine the online Delphi software and to review panel results were offered as “optional” to all members and had lower levels of attendance. Most SAB members agreed to continue in phase II, a 3-year research implementation and sustainability period: 93% of patients/parents, 83% of patient advocates, and 69% of clinicians. In total, SAB members spent 130 hours in meeting time: 52 hours by clinicians, 56 hours by patients/parents, and 22 hours by patient advocates.
A small number of emailed comments were received from SAB members. The quotes below were received after the final SAB meeting. Most comments were positive regarding structure:
I truly enjoyed the collaborative effort of the SAB group and think it was well organized and executed. As a result, the group developed important goals that I am sure will make a difference within the KD community down the road (KD parent, SAB-KD and panel member).
I'm really learning a lot about the processes involved! Thank you for being flexible with call times to accommodate different schedules! (clinician, SAB-WMO).
Several comments suggested changes to facilitation strategies.
You may want to try smaller groups to get broader participation and more consensus. Otherwise, it has been a very equitable and productive process (KD parent, SAB-KD and panel member).
I like the methodical nature of this, sometimes it was hard to get in on the discussions on the conference calls- not your problem, but could make use of “raise your hand” feature to call on individuals who are not as assertive; especially patients (clinician, SAB-WMO).
Lessons Learned Regarding SABs
The need for collaborative planning and preparation became evident quickly. In the first SAB meeting, the stakeholders raised the need to have a more in-depth understanding of how pSCANNER’s technology and organization differed from other networks. A brief introduction was not sufficient. Meeting with the patient co-chairs to assure that this information was relevant and understandable led to a compelling presentation of this material and ultimately to a professionally designed informational module. This also became the standard practice of setting agenda items and vetting materials before each meeting with the patient co-chairs.
One issue raised by the patient co-chairs was the selection of key patient partners. Patients have expertise in their lived experience and can bring the patient perspective (ie, needs, preferences, and potential solutions) to research governance. Patient partners in research governance, however, may need to represent more than individual experience and be interested in and willing to extend from a personal perspective to stakeholder group perspectives. Patients may face logistical barriers to engagement, such as lack of access to the internet for viewing webinar materials during meetings, or unfamiliarity with document management websites for accessing presentations and minutes. Accommodations may also be needed for individuals with visual or hearing impairments or other conditions that challenge the use of telephones and computers. Still others have work or other obligations that require flexibility in scheduling to avoid conflicts with work or personal activities. There is little evidence or guidance about how to identify and foster patient partners and support the diversity of personal circumstances they bring to the engagement relationship.
There was a high level of engagement by participants (Table 4). A total of 85% of participants in round 1 answered at least one rating question and 78% gave at least one rationale comment. On average, participants completed 95% of rating questions and commented on 75% of their ratings. There were no significant differences in participation between patients/parents and clinicians/researchers. Overall, 88% of participants were retained from round 1 to round 3, and there was a very small reduction in percent participation. However, in round 3, there were small but significant differences in both the percent of questions answered (P=0.03) and the percent of comments made (P=0.009) between the patients/parents and clinicians/researchers groups.
Overall, 292 of the 349 participants who enrolled in the OMD panels completed the online questionnaire about their participation experiences (84% response rate) (Table 5). The majority of questionnaire respondents were female (60%), white (66%), and had graduate or professional degrees (68%). However, there were significant differences between the patient/parent and professional groups on all 3 characteristics. As might be expected, the professional group had much higher levels of education, as they were clinicians and researchers. The 2 groups also differed in their opinions of the online experience, with significant differences on 5 of the 10 statements in the questionnaire (Table 5). The greatest difference in experiences with OMD was on the statement “I had trouble following the discussion,” on which patients scored almost a full point lower (less favorable). Nonetheless, patients were also more favorably disposed to using the OMD software in the future. A more detailed analysis of the factors associated with active OMD engagement is provided in another paper.25
The stakeholder engagement approach was designed and implemented in accordance with the PCORI engagement principles. The stakeholders were active in all engagement activities throughout phase I. The very high level of interest in continuing as SAB members in phase II can be interpreted as an indication of advisors’ satisfaction with the experience. However, our interpretation is limited due to lack of formal evaluation of the SABs. Although models do exist for program evaluation that may be illustrative for stakeholder engagement, there are distinctions that bear highlighting. For one, metrics for effective engagement remain challenging to define and collect.26,27 In addition, stakeholders in PCOR, particularly patients, have a vested interest in the conduct and outcomes of research. Evaluation must take into account fulfillment of those direct interests throughout the research lifecycle.
While we set out to demonstrate a scalable approach to engagement through the online prioritization panels we do not have a good understanding of the scalability of the SABs. As interest in governance among research participants grows, so do needs for guidance regarding feasible and effective processes for such engagement. Scalability may be supported by standard operating procedures as well as technology. The premeeting and postmeeting procedures we implemented for the SAB are one example of standard operating procedures but much more research in this area is needed.
The OMD approach is one example of how technology may be used to deeply engage a much larger group than traditional in-person meetings or even conference calls. The findings presented in this paper suggest that OMD is an acceptable mode of engagement for both patients/parents and clinicians/researchers, with a high level of retention (88%) that exceeded the 40%–50% retention levels reported in previous in-person and online Delphi studies.28,29 There is room for improvement in helping patients understand the content of research-related discussions. While there appears to be a reduction in participation level with respect to contributing comments, it was fairly small. Participants were not required to enter rationale comments, and those whose ratings did not change may not have felt the need to enter duplicative comments in round 3.
Another challenge with large-scale engagement could be sustaining the cost of such projects. In addition to the costs of using OMD software which is proprietary to RAND, large-scale engagement activities require staff to recruit stakeholders and design, coordinate, and implement different activities, as well as to pay honoraria to participating stakeholders, all of which may not be possible without grant funds.
Structured engagement of patients and other stakeholders in research governance can yield rich contributions for PCOR. Advantages of designing research governance activities with deep attention to PCOR principles include potential for meaningful participation of patients, and interaction among diverse stakeholders who might not typically work together. However, there are not well-established methods for evaluating stakeholder engagement which continues to be an area ripe for future research. An efficient way to engage representative stakeholders in research governance is a necessary first step to assuring the public of trustworthy use of data networks for health research. This study describes a PCOR-principled, purposefully designed, approach for research governance that may serve as a model for scalable stakeholder engagement in research networks and PCOR.
The authors thank members of the pSCANNER team who contributed to this project: Victoria Ngo, Sean Grant, Dena Rifkin, Carl Stepnowsky, Nikolai Kirienko, Paul Heidenreich, Michael Ong, Jane Burns, and Zhaoping Li.
1. National Institute for Health Research. About Us. National Health Service. Available at: www.invo.org.uk/. Accessed June 27, 2017.
2. Israel BA, Schulz AJ, Parker EA, et al. Review of community-based research: assessing partnership approaches to improve public health. Ann Rev Public Health. 1998;19:173–202.
3. Minkler M, Wallerstein N. Community-Based Participatory Research for Health: From Process to Outcomes. San Francisco: John Wiley & Sons; 2011.
4. Kim KK, Mahajan SM, Miller JA, et al. Answering research questions with national clinical research networks. In: Delaney C, Weaver C, Warren J, et al, eds. Big Data-Enabled Nursing. Switzerland: Springer; 2017:211–226.
6. Deverka PA, Lavallee DC, Desai PJ, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res. 2012;1:181–194.
7. Concannon TW, Meissner P, Grunbaum JA, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27:985–991.
8. Elwyn G, Crowe S, Fenton M, et al. Identifying and prioritizing uncertainties: patient and clinician engagement in the identification of research questions. J Eval Clin Pract. 2010;16:627–631.
9. Lloyd K, White J. Democratizing clinical research. Nature. 2011;474:277–278.
10. Nilsen ES, Myrhaug HT, Johansen M, et al. Methods of consumer involvement in developing healthcare policy and research, clinical practice guidelines and patient information material. Cochrane Database Syst Rev. 2006;3:Cd004563.
13. Domecq J, Prutsky G, Elraiyah T, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14:89–97.
14. Boote J, Baird W, Beecroft C. Public involvement at the design stage of primary health research: a narrative review of case examples. Health Policy. 2010;95:10–23.
15. O’Haire C, McPheeters M, Nakamoto E, et al. Engaging Stakeholders To Identify and Prioritize Future Research Needs. Methods Future Research Needs Report, Number 4. Rockville, MD: Agency for Healthcare Research and Quality; 2011.
16. Carman KL, Maurer M, Mangrum R, et al. Understanding an informed public’s views on the role of evidence in making health care decisions. Health Aff. 2016;35:566–574.
17. Lavallee DC, Wicks P, Alfonso Cristancho R, et al. Stakeholder engagement in patient-centered outcomes research: high-touch or high-tech? Expert Rev Pharmacoecon Outcomes Res. 2014;14:335–344.
18. Ohno-Machado L, Agha Z, Bell DS, et al. pSCANNER: patient-centered Scalable National Network for Effectiveness Research. J Am Med Inform Assoc. 2014;21:621–626.
19. Khodyakov D, Hempel S, Rubenstein L, et al. Conducting online expert panels: a feasibility and experimental replicability study. BMC Med Res Methodol. 2011;11:174–181.
20. Dalal S, Khodyakov D, Srinivasan R, et al. ExpertLens: a system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge. Technol Forecast Soc. 2011;78:1426–1444.
21. Fitch K, Bernstein SJ, Aguilar MD, et al. The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, CA: RAND Corporation; 2001.
22. Khodyakov D, Stockdale SE, Smith N, et al. Patient engagement in the process of planning and designing outpatient care improvements at the Veterans Administration Health-care System: findings from an online expert panel. Health Expect. 2016;20:130–145.
23. Khodyakov D, Grant S, Barber CE, et al. Acceptability of an online modified Delphi panel approach for developing health services performance measures: results from 3 panels on arthritis research. J Eval Clin Pract. 2016;23:354–360.
24. Claassen CA, Pearson JL, Khodyakov D, et al. Reducing the burden of suicide in the US: the aspirational research goals of the National Action Alliance for Suicide Prevention Research Prioritization Task Force. Am J Prevent Med. 2014;47:309–314.
25. Khodyakov D, Grant S, Meeker D, et al. Comparative analysis of stakeholder experiences with an online approach to prioritizing patient-centered research topics. J Am Med Inform Assoc. 2017;24:537–543.
26. Hill K. Building a Methodology for Monitoring and Measuring Civic Engagement. Paper presented at the Ethics of Public Service Symposium, Greenville, NC. 2013. Accessed May 6, 2017.
27. Eder MM, Carter-Edwards L, Hurd TC, et al. A logic model for community engagement within the CTSA consortium: can we measure what we model? Acad Med. 2013;88:1430–1436.
28. Elwyn G, O’connor A, Stacey D, et al. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006;333:1417–1423.
29. Jillson I. The national drug-abuse policy Delphi. In: Linstone HA, Turoff M, eds. The Delphi Method: Techniques and Applications. Boston: Addison-Wesley Publishing Company, Advanced Book Program; 2002:119–154.
Key Words: patient-centered outcomes research; community-based participatory research; modified Delphi; stakeholder engagement
Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.