Special Report

Crowdsourcing: Differentiating the Signal from the Noise

Shaw, Gina

doi: 10.1097/01.EEM.0000650960.38519.a3

    If it's true that the many are smarter than the few in business, economics, and policy, it appears that many doctors may be smarter than a few as well. New research suggests that the wisdom of crowds—a concept described by journalist James Surowiecki in his 2004 book of the same name, proposing that the aggregation of information in groups results in better decisions than any single individual would make—is applicable to medicine.

    Researchers from the Harvard T.H. Chan School of Public Health and Brigham and Women's Hospital found that a collective intelligence approach led to higher diagnostic accuracy than a specialist working alone. (JAMA Netw Open. 2019;2[3]:e190096.)

    The study leveraged the Human Diagnosis Project (Human Dx), a multicountry dataset of ranked differential diagnoses by individual physicians, graduate trainees, and medical students solving user-submitted, structured clinical cases. The authors analyzed all cases between May 7, 2014, and Oct. 5, 2016, with 10 or more respondents (1572 cases). Of the 2069 users solving 1572 cases from the dataset, 1228 (59.4%) were residents or fellows, 431 (20.8%) were attending physicians, and 410 (19.8%) were medical students. The majority (1452, 70.2%) were trained in internal medicine.

    The investigators found that diagnostic accuracy increased with group size, from 62.5 percent (95% CI, 60.1%-64.9%) for individual physicians up to 85.6 percent (95% CI, 83.9%-87.4%) for groups of nine (23.0% difference; 95% CI, 14.9%-31.2%; p<0.001). Even one additional set of insights improved diagnostic accuracy: Groups of every size, from two users (77.7% accuracy; 95% CI, 70.1%-84.6%) to nine users (85.5% accuracy; 95% CI, 75.1%-95.9%), outperformed individual specialists working in their own subspecialty (66.3% accuracy; 95% CI, 59.1%-73.5%; p<0.001 vs. groups of two and nine).
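    The study does not spell out its aggregation method here, but one common way to pool independently ranked differential-diagnosis lists is a Borda-style weighted vote, in which a diagnosis earns more points the higher it sits on each clinician's list. The sketch below is purely illustrative—the function name, scoring weights, and case data are all hypothetical, not taken from the Human Dx study:

```python
from collections import defaultdict

def pool_differentials(ranked_lists, list_len=3):
    """Combine ranked differential lists with a Borda-style vote:
    rank 1 earns list_len points, rank 2 earns list_len - 1, etc.
    Returns diagnoses sorted from highest to lowest total score."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, dx in enumerate(ranking[:list_len]):
            scores[dx] += list_len - rank
    return sorted(scores, key=scores.get, reverse=True)

# Three clinicians' independent top-3 differentials for the same case
group = [
    ["pulmonary embolism", "pneumonia", "pericarditis"],
    ["pneumonia", "pulmonary embolism", "heart failure"],
    ["pulmonary embolism", "heart failure", "pneumonia"],
]
print(pool_differentials(group))
# → ['pulmonary embolism', 'pneumonia', 'heart failure', 'pericarditis']
```

Because each list is produced independently before pooling, a diagnosis that several clinicians rank moderately high can outscore one clinician's confident but idiosyncratic first choice—the intuition behind the accuracy gains reported above.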

    Medical crowdsourcing is a growing enterprise, with multiple social media platforms now offering physicians the opportunity to seek second, third, or 11th opinions from colleagues on challenging cases. The largest to date is SERMO (based on the Latin word for conversation), which boasts more than 800,000 users, including nearly 24,000 emergency physicians who have posted nearly 2000 patient cases on the platform since 2011. Overall, SERMO reports that more than 7300 cases were medically crowdsourced in 2018, with 89 percent of doctors reporting that they got the advice they needed within 24 hours.

    Crowdsourcing Platforms

    One example of a recent emergency medicine case: A 9-year-old boy who had suffered a serious fall from an electric skateboard was complaining of sharp pain in his left wrist. Radiographs showed an epiphyseal detachment that had migrated dorsally and a likely infraction (incomplete fracture) at the distal metaphyseal level. The treating emergency physician sought guidance on SERMO about whether to treat conservatively after reduction under sedation or to recommend surgery.

    Another medical crowdsourcing platform, Figure 1, has been described as Instagram for doctors: Physicians can upload de-identified images—anything from photos of skin rashes or puzzling ECG findings to CT and MRI scans—and ask colleagues to weigh in. Anyone can join, but only verified medical professionals can post photos and make comments. A Figure 1 spokesperson said more than 60,000 emergency physicians are among its verified worldwide members.

    One recent emergency case recounted how a 19-year-old Somali refugee presented with an eight-week history of nonpruritic verrucous growths on his face and ears. He had no significant medical history and was homeless. The case was viewed by more than 100,000 health care professionals on Figure 1 and received nearly 200 comments. Many suggested the patient may have had leprosy, which was later confirmed on biopsy.

    Despite these successful examples, it's likely more difficult for emergency physicians to take advantage of crowdsourcing than for doctors in specialties that don't operate in the hectic, chaotic environment of the ED. A dermatologist in his office practice might have a few minutes to photograph that rash, write a detailed description of the patient's presentation, and post it to SERMO, Figure 1, or another crowdsourcing platform. But does the typical attending or resident in the emergency department have the bandwidth to do that while also being called into cardiac arrest codes, intervening in overdoses, and generally racing to put out one fire after another?

    “It's true that the collective intelligence method requires input from multiple people, which takes time, something that is always short in emergency medicine,” said Michael Barnett, MD, an assistant professor of health policy and medicine at Harvard and a co-author of the JAMA Network Open study. He noted that true collective differential crowdsourcing—engaging practitioners from all over the world—can take an unpredictable amount of time: an hour, two hours, a day? “If you're in a setting where every single minute matters, that's not going to work.”

    Online crowdsourcing is also limited by HIPAA restrictions, noted Salim Rezaie, MD, the director of clinical education at Greater San Antonio Emergency Physicians and the creator and founder of REBEL EM. “HIPAA requirements make it tough to crowdsource in real time on social media platforms,” he said. “You can post things like chest x-rays and EKGs that do not give away identifiable protected health information, but the problem is how quickly people respond to that. I would argue that it's not fast enough for how sick some of these people are.”

    Leverage Your Team

    But a crowdsourcing approach could potentially be applied in emergency medicine within your own team, especially in an academic medical setting where you have multiple residents and physicians. “In the emergency department where I work, we are lucky enough to have no fewer than four physicians on at one time,” Dr. Rezaie said. “If you have a difficult case, you can ‘crowdsource’ within your own team. I do think there's value in that.”

    Dr. Barnett agreed. “Our study found that even three or four differentials applied together were more likely to reach an accurate diagnosis than one alone,” he said. “So perhaps diagnostic accuracy would be improved if more than one person reviews the chart and patient history before presenting the patient and trying to come to a consensus. It's an approach that would be testable and wouldn't necessarily have to take up that much time.”

    How does crowdsourcing a diagnosis differ from the classic tumor board approach or just everyday rounds? Dr. Barnett said it involves individual clinicians proposing and writing down their possible diagnoses before the discussion happens. “Once people start talking about a case, the way they are thinking about it is going to rapidly converge,” he said. “It's getting that independent diagnosis before you launch into the discussion that is likely to provide you with more information.”

    Power dynamics and hierarchies are also at play in every setting, including emergency medicine. “A medical student or resident might not feel comfortable running down his full list of possible diagnoses if the attending doesn't think it's reasonable,” Dr. Barnett said. “If there's a way to share your initial diagnostic thoughts in a more anonymous, pooled way, that could be helpful. Everybody approaching a case can have complementary input to a diagnosis or problem that can be valuable, and we don't always solicit the full range of what people are thinking and really consider it together in any systematic way.”

    Crowdsourcing can also be done after the fact to boost accuracy for similar cases in the future. “Once I know the outcome of a case, I can post the details on social media or a specialty group page—again, applying HIPAA protections—and get crowdsourced opinions,” said Dr. Rezaie. “Then I can go back and assess the crowdsourced opinions and new insights in order to apply them to other cases.”

    REBEL EM often serves as a forum for the post hoc crowdsourcing of such cases. Dr. Rezaie has facilitated similar online discussions in collaboration with the educational website Academic Life in Emergency Medicine (ALiEM) and Annals of Emergency Medicine. One such session focused on the 2014 Journal of the American Medical Association trial on age-adjusted D-dimer cutoffs for ruling out pulmonary embolism (ADJUST-PE) by Righini et al. “That's a perfect example of a very specific question looking for a specific answer as to what we should be doing in practice,” he said.

    Five online facilitators hosted the multimodal discussion on the ALiEM website, Twitter, and Google Hangouts. Comments across the social media platforms were curated for the report, which led to a proposed algorithm for PE diagnosis that included a higher cutoff for older individuals. “Of course, since then there have been several publications on this topic and the guidelines have been updated, but we were able to reach that conclusion through crowdsourcing much sooner,” Dr. Rezaie said.

    A Tool for Research

    Crowdsourcing also has significant potential as a tool in medical research and education, said Brent Thoma, MD, an associate professor of emergency medicine at the University of Saskatchewan College of Medicine and a clinician educator with the Royal College of Physicians and Surgeons of Canada. “We have an excellent online community of practice in emergency medicine that has really connected on platforms like Twitter and Facebook as well as in other forums,” he said. “Building those communities expands our potential to recruit those people to participate in our studies.”

    Dr. Thoma and colleagues described the use of a technique they call a MONA (massive online needs assessment) to help develop an accessible and practical FOAM curriculum in bleeding and clotting disorders for emergency medicine residents and physicians. (Perspect Med Educ. 2018;7[3]:219.) “We had noticed a lack of accessible and up-to-date material for practicing clinicians, partly because of rapid developments in the clinical field,” he explained.

    Using a social media campaign targeting a specific online community, they were able to recruit 198 clinicians from 21 countries to complete an embedded Google Forms survey on the website. “We identified 17 high-priority perceived needs, 17 prompted needs, and 10 topics with unperceived needs through our MONA process,” Dr. Thoma said.

    He and his group also described the efficacy of leveraging these virtual practice communities to recruit clinicians for participation in the METRIQ (Medical Education Translational Resources: Impact and Quality) study. (AEM Educ Train. 2017;1[2]:110.)

    “The million-dollar question is, where does crowdsourcing in emergency medicine and in medicine in general go from here?” Dr. Rezaie asked. “Five or six years ago, there were a lot of people doing sites on their own, and I predicted that was not sustainable, and we'd see teams of people getting together to build these communities. At REBEL EM, we now have about 18 people from different parts of the country working on the site, constantly looking at the information coming out and having discussions in the background, much like prepublication peer review, except a journal charges money and we don't. What is the financial stability of that? We don't know.”

    He also cautioned that crowdsourcing, especially the virtual kind, requires a healthy dose of skepticism. “With all the greatness of technology and how fast we're getting information out, people should be very leery of believing everything they read or hear in an online community. Question everything you hear and read as you crowdsource and employ critical thinking to differentiate the signal from the noise.”


    Ms. Shaw is a freelance writer with more than 20 years of experience writing about health and medicine. She is also the author of Having Children After Cancer, the only guide for cancer survivors hoping to build their families after a cancer diagnosis.

    Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.