Research Reports

Quantifying Federal Funding and Scholarly Output Related to the Academic Emergency Medicine Consensus Conferences

Nishijima, Daniel K. MD; Dinh, Tu; May, Larissa MD; Yadav, Kabir MDCM; Gaddis, Gary M. MD, PhD; Cone, David C. MD

doi: 10.1097/ACM.0000000000000073


In 2000, Academic Emergency Medicine (AEM), the journal of the Society for Academic Emergency Medicine, hosted a one-day consensus conference to develop a research agenda on “Errors in Emergency Medicine.” The conference was organized in response to the 1999 Institute of Medicine report “To Err Is Human,” which evaluated the morbidity and mortality related to medical errors.1 The 2000 conference was presented as a preconference offering at the society’s annual meeting. The editors dedicated the entire November issue of the journal to publishing the proceedings of that consensus conference.2

Each year since 2000, the AEM editorial board has selected a topic of interest to the AEM readership for the following year’s consensus conference. Unlike other academic consensus conferences, which typically generate expert opinion and consensus on a controversial clinical topic for which evidence is limited,3,4 the purpose of each AEM conference is to develop a consensus-based research agenda that advances understanding of the topic by inspiring studies to address current knowledge gaps.5 The goal is that these research agendas can serve as a guide for future funding proposals. Table 1 lists the topics of each year’s conference, illustrating the breadth of topics selected. The journal has maintained this concept and format over the years to advance the academic mission of the Society for Academic Emergency Medicine: the conference proceedings help define research needs in our specialty, provide the society’s members with consensus-driven ideas for research, and, we hope, offer some leverage for obtaining extramural funding.

Table 1:
Topics of the Society for Academic Emergency Medicine’s Annual Consensus Conferences, 2000–2014

The proceedings of each consensus conference are presented in a full issue of AEM. Four types of conference-related manuscripts are published:

  • commentaries (including an executive summary by the conference chairs);
  • summaries of plenary and panel presentations;
  • proceedings of the conference “breakout sessions” where consensus is generated; and
  • original contributions on the conference topic.

Original contributions on the conference topic are solicited from the general readership through a “Call for Papers” issued approximately a year before the consensus conference. This category of manuscripts allows researchers already working in the area under discussion to publish their work, illustrating the types of studies that can be done on the topic and providing a springboard for further work.

It is unusual (and perhaps unique) for a society-sponsored journal to dedicate a full issue each year to publishing consensus conference proceedings that are designed to generate a research agenda, rather than simply summarizing the state of knowledge of a topic at a single point in time. Moreover, revenue generated from registrants generally covers only a proportion of the conference expenses. These logistical and financial challenges need to be weighed against the potential benefits of the conferences; however, the potential downstream gains have not been quantified. The purpose of this study was therefore to evaluate the downstream academic productivity of the consensus conferences using two approaches: (1) evaluating subsequent federal grant funding for research projects conducted by the authors of the papers published in the dedicated issues of the journal, and (2) calculating citation counts of those conference papers.

This study may be of value and generalizable to other specialties for two reasons. First, by quantitatively measuring the academic productivity generated by these conferences, we may encourage other specialties to organize similarly structured conferences. Such conferences may serve as incubators for innovative ideas and methods on topics for which traditional clinical research designs may not work; in particular, the prototypical randomized controlled trial may not be feasible for research on health care delivery or on improving health care systems.6 They can also focus the attention of a broad audience on a topic that members of the specialty feel needs attention, and assessing the downstream academic productivity of these conferences can help determine whether the format is having the desired effect of stimulating research on the topic. Second, our methodology may serve as a template for evaluating comparable conferences that focus on setting research agendas.7–9 If and when other specialties conduct similar conferences, they can use these methods to assess their success.


Method

Design and study participants

We conducted a cross-sectional study during August and September 2012. The 11 consensus conference issues of the journal (the November issues of 2000–2009 and the December issue of 2010) were reviewed. A list of all conference-related papers and their authors was assembled, and each paper was categorized as a commentary, plenary/panel presentation, breakout session, or original contribution based on its heading in the table of contents. This list of papers and authors formed the study population.

Data sources

The National Institutes of Health Research Portfolio Online Reporting Tools Expenditures and Results (NIH RePORTER) system was used to identify federal funding obtained by authors contributing to the consensus conference issues.10 NIH RePORTER is an electronic tool that allows users to search a repository of both intramural and extramural federally funded research from 1988 to the present.10 The system includes research funded by the NIH, the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality (AHRQ), the Health Resources and Services Administration, the Substance Abuse and Mental Health Services Administration, and the United States Department of Veterans Affairs. It excludes funding from Canadian sources; thus, we could not evaluate non-U.S. funding obtained by Canadian authors of conference-related manuscripts. It also excludes nonfederal funding, such as foundation or society grants.

Citation counts were generated through online review of Scopus (Elsevier BV, Amsterdam, The Netherlands) and Google Scholar (Google Inc., Mountain View, California). The Scopus database contains nearly 50 million records from just under 20,000 titles, including full coverage of Medline, and is currently the largest abstract and citation database of the peer-reviewed scientific literature. Google Scholar covers a broader range of sources, including theses, technical reports, and other documents found using Web “crawlers.”

Study procedures

Between August 21 and August 29, 2012, two of us (T.D. and D.K.N.) queried NIH RePORTER for subsequent funding obtained by consensus conference issue authors. Search terms included each author’s full name and funding cycles from the year of the consensus conference issue to the present. Common names were cross-referenced with topic domain and author institution. We abstracted each project’s activity code (e.g., R01, R03, K23), project title, funding amount, funding institute, agency, or center, and fiscal years funded. Activity codes were categorized into R01 equivalents (R01, R23, R29, R37, DP2), other R awards (R03, R15, R21), training awards (K01, K02, K23, K24, other K awards, F32, F31, other F awards, other T awards), cooperative agreements (all U awards), program awards (all P awards), small business innovation research (SBIR) and small business technology transfer (STTR) awards (R41, R42, R43, R44, U43, U44), and other awards. These categories were consistent with NIH RePORTER categories. Two of us (D.K.N. and K.Y.) independently coded each funded project as “related” or “unrelated” to the consensus conference topic domain, using project information from NIH RePORTER (the project description [abstract text], narrative [public health relevance statement], and project terms) to determine conference relatedness. Discrepancies in coding were adjudicated by a third member of our team (L.M.).
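The activity-code grouping described above amounts to a small lookup. The sketch below illustrates it in Python; the category labels and the ordering of the checks (specific code sets before the generic K/F/T, U, and P prefixes) are our own illustrative choices, not part of the study’s actual tooling.

```python
# Illustrative sketch of the activity-code grouping described in the text.
# The code lists come from the article; the function itself is an assumption.

R01_EQUIVALENTS = {"R01", "R23", "R29", "R37", "DP2"}
OTHER_R = {"R03", "R15", "R21"}
SBIR_STTR = {"R41", "R42", "R43", "R44", "U43", "U44"}

def categorize_activity_code(code: str) -> str:
    """Map an NIH activity code (e.g., 'R01', 'K23') to its study category."""
    code = code.upper()
    if code in R01_EQUIVALENTS:
        return "R01 equivalent"
    if code in SBIR_STTR:           # must precede the generic R and U prefixes
        return "SBIR/STTR"
    if code in OTHER_R:
        return "other R"
    if code[0] in ("K", "F", "T"):  # training awards (K, F, and T series)
        return "training"
    if code.startswith("U"):        # cooperative agreements (all other U awards)
        return "cooperative agreement"
    if code.startswith("P"):        # program awards (all P awards)
        return "program"
    return "other"
```

Checking the SBIR/STTR set before the generic U prefix matters, since U43 and U44 would otherwise be absorbed by the cooperative-agreement rule.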

Between August 29 and September 12, 2012, one of us (D.C.C.) searched the Scopus and Google Scholar databases to determine the number of papers citing each consensus conference paper; this information was manually recorded in an Excel spreadsheet. To examine potential differences in citation counts between the four paper types, mean citation counts were calculated for each type.

Data analysis

Data formatting and coding of variables were conducted using Microsoft Excel 2010 (Microsoft Corporation, Redmond, Washington) and Stata 11.0 statistical software (StataCorp, College Station, Texas). Means were used as the primary measure of central tendency for the citation counts because the journal impact factor, the most common journal citation metric, is expressed as a mean. However, the data for some years, and thus the overall data, were somewhat right-skewed, so medians (with interquartile ranges [IQRs]) are also reported. Interrater reliability of coding for related projects was measured with the Cohen kappa coefficient with 95% confidence intervals (CIs), with substantial agreement defined as a kappa > 0.6.11
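For readers unfamiliar with the statistic, a two-rater Cohen kappa over binary “related”/“unrelated” codes can be sketched as follows; the function is a generic illustration, not the software used in the study.

```python
# Generic two-rater Cohen kappa, as used to measure interrater reliability
# of the "related"/"unrelated" coding. Variable names are illustrative.

def cohen_kappa(ratings_a, ratings_b):
    """Compute Cohen's kappa for two raters coding the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    # Observed agreement: proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    p_e = sum(
        (ratings_a.count(label) / n) * (ratings_b.count(label) / n)
        for label in labels
    )
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields a kappa of 1, and agreement no better than chance yields 0; the observed value of 0.6 reported below sits at the boundary the authors set for substantial agreement.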


Results

The 11 consensus conference issues of the journal included 280 manuscripts with 994 contributing authors; there were 852 unique authors, as some authors contributed to multiple papers and multiple journal issues. One hundred thirty-seven of the 852 unique authors (16.1%) were identified in NIH RePORTER as having received subsequent federal funding. These 137 authors obtained funding for 318 individual projects, accounting for $329,492,017. The most common activity codes for funded projects were other R awards (91 awards, 29%) and R01 equivalents (82 awards, 26%) (Figure 1). AHRQ (51 awards, 16%), the National Institute on Alcohol Abuse and Alcoholism (35 awards, 11%), and the National Heart, Lung, and Blood Institute (27 awards, 8.5%) were the most common funding agencies (Figure 2).

Figure 1:
Activity codes for all 318 funded projects that arose from issues presented at Academic Emergency Medicine’s annual consensus conferences, 2000–2010. Activity codes, abstracted from NIH RePORTER, were categorized into R01 equivalents (R01, R23, R29, R37, DP2), other R awards (R03, R15, R21), training awards (K01, K02, K23, K24, other K awards, F32, F31, other F awards, other T awards), cooperative agreements (all U awards), program awards (all P awards), small business innovation research (SBIR) and small business technology transfer (STTR) awards (R41, R42, R43, R44, U43, U44), and other awards. These categories were consistent with NIH RePORTER categories.
Figure 2:
Funding institution/agency/center for all 318 funded projects that arose from issues presented at Academic Emergency Medicine’s annual consensus conferences, 2000–2010. Abbreviations: AHRQ, Agency for Healthcare Research and Quality; CDC, Centers for Disease Control and Prevention; NCI, National Cancer Institute; NCRR, National Center for Research Resources; NCATS, National Center for Advancing Translational Sciences; NHLBI, National Heart, Lung, and Blood Institute; NINDS, National Institute of Neurological Disorders and Stroke; NIA, National Institute on Aging; NIAAA, National Institute on Alcohol Abuse and Alcoholism; NIAID, National Institute of Allergy and Infectious Diseases; NIAMS, National Institute of Arthritis and Musculoskeletal and Skin Diseases; NICHD, National Institute of Child Health and Human Development; NIDA, National Institute on Drug Abuse; NIGMS, National Institute of General Medical Sciences; NIDDK, National Institute of Diabetes and Digestive and Kidney Diseases; NIMHD, National Institute on Minority Health and Health Disparities.

Funded projects and their amounts (related and total) were tabulated individually for each conference (Table 2). The median number of related funded projects per year over the 11 years was 22 (range 10–97 projects), and the median amount of total related funding per year was $20,488,331 (range $7,779,512–$122,918,205). Fifty projects with coding discrepancies (related vs. unrelated) required adjudication by a third author; 32 of these were judged to be conference related. Interrater reliability of coding for related projects was moderate (kappa 0.6; 95% CI 0.5–0.7).

Table 2:
Overview of Funding Received by Authors of Articles Printed in the Proceedings of the Society for Academic Emergency Medicine’s Annual Consensus Conferences, 2000–2010*

Table 3 shows the numbers of citations to the conference papers from each year, with summary totals. There were 4,403 total citations in Scopus and 6,633 in Google Scholar. Citations per paper were 15.73 ± 20.45 in Scopus and 23.69 ± 32.57 in Google Scholar. The overall medians were 9 (IQR 4–20) in Scopus and 13 (IQR 6–29) in Google Scholar. Commentaries were cited an average of 6.23 (SD ± 4.93) times each, plenary presentations an average of 14.18 (SD ± 18.75) times each, breakout session proceedings an average of 21.59 (SD ± 71.61) times each, and original contributions an average of 19.96 (SD ± 24.97) times each.

Table 3:
Citation Counts by Year of Articles Printed in the Proceedings of the Society for Academic Emergency Medicine’s Annual Consensus Conferences, 2000–2010


Discussion

There is a paucity of literature describing consensus conferences whose primary goal is setting a research agenda. Most consensus conferences focus on clinical guideline development and practice recommendations, especially for topics on which significant controversy exists between learned experts. There appear to be only three published society consensus conferences that have focused on setting research agendas.7–9 Although studies have evaluated subsequent publication of abstracts presented at society meetings,12–15 there are limited data on research productivity (defined by funding success and subsequent publication) following participation in a national consensus conference. One relevant model, somewhat complementary to the design of our conference and this study, is a recent report describing a mentoring-based conference as a successful career stimulation strategy; through career tracking data, that study found that nearly half of conference participants published their work and one-third subsequently obtained research funding.16

Our results suggest that the consensus conference platform as developed by AEM has met the test of “proof of concept”: a sizable body of related research and funding has followed the conferences. Moreover, the conference manuscripts are among the most highly cited manuscripts published in AEM, increasing the journal’s impact factor. Recent impact factors for the journal range from 1.861 to 2.197, and five-year impact factors from 2.474 to 2.536, indicating that the typical AEM article is cited roughly twice in the two to five years following its publication; the citation figures shown in Table 3 therefore suggest higher citation rates for the consensus conference papers than for the average AEM paper. The data presented here should be helpful to leaders of future conferences across specialties. The leaders of our conferences have had varying degrees of experience approaching federal and other health-care-related entities with requests to help fund consensus conferences. Going forward, rather than approaching potential funding sources simply with an idea, our conference leaders can approach them with documentation of significant scholarly output related to the conference topic in postconference years, suggesting a high “return on investment” for funders.

Although the results of this study suggest that subsequent related scholarly output has transpired after the consensus conference, we are unable to definitively establish a causal relationship between the conference and subsequent scholarly output. Moreover, it is difficult to quantify the impact of the consensus conferences on future funding. In addition, postconference funding is just one measure of consensus conference “success.” Less quantifiable measures of success include conference individual participant education and career development. Local practice improvements that result from consensus conferences, such as implementing interventions to improve quality in crowded emergency departments, would also be an important contribution, though also difficult to measure.

The board of directors of the Society for Academic Emergency Medicine has recently questioned whether AEM should continue producing the consensus conferences, citing the expense of producing each conference and publishing its proceedings. To examine the financial aspects of the conferences, we queried the society’s executive office records for financial information; records from 2007 to 2011 were available. Expenses for each conference are approximately $60,000 to $100,000, including site-related and featured-speaker-related expenses plus the cost of publishing the proceedings in AEM, and registration fees have generated only about 15% to 30% of the revenue needed to produce recent conferences (see Supplemental Digital Table 1, which describes revenues and expenses for the 2007–2011 consensus conferences). We believe that the data we have compiled demonstrate the value of the conferences to the society and its members and will aid in securing extramural funding for future consensus conferences. It is our hope that other societies may find these data compelling and useful as well.

The question of “impact factor” is somewhat controversial,17,18 and a number of other publication metrics have been developed to overcome some of the impact factor’s shortcomings. Regardless of whether these citation metrics are considered valid, it is clearly in the best interests of both the Society for Academic Emergency Medicine and AEM to publish articles that become highly cited. These consensus conferences have helped achieve this goal; given the high citation rate of our conference proceedings papers, other journals may be interested in sponsoring similar conferences.

There are certain limitations inherent to this study. First, other measures of conference scholarly output likely exist that we are unaware of or did not measure. For example, we did not examine publications resulting from the projects identified through our NIH RePORTER search. In addition, the authors of the consensus conference manuscripts represented only a proportion (typically 25%–50%) of all conference attendees; because of the large number of participants, we did not evaluate the scholarly output of nonauthor conference participants.

Second, a number of Canadian presenters and attendees were present at these conferences, and Canadians were particularly prominent in the leadership of the 2007 conference, “Knowledge Translation in Emergency Medicine.” Adding the as-yet unquantified contributions of Canadians to the sum of external funding related to these conferences would increase the funding total estimated herein. In addition, we were unable to ascertain which consensus conference authors were non-U.S. citizens (and thus generally ineligible for U.S. federal funding), so we did not exclude these authors from the denominator. The funding totals reported are therefore likely an underestimate.

Third, nonfederal funding from pharmaceutical, intramural, and foundation sources provides crucial support for emergency care researchers. However, we did not include these sources because obtaining the data accurately and comprehensively was logistically prohibitive.

Fourth, some subjectivity exists in assessing whether funded projects are related to the prior consensus conference. The total number of projects and total funding represent the maximum possible; the data we provide for related projects represent our best approximation of funding that may have benefited from the prior conferences. When the two primary coders disagreed about “relatedness,” the tie was broken by a third adjudicator. The Cohen kappa for this coding was 0.6, indicating some degree of disagreement.

In conclusion, the authors of consensus conference manuscripts have obtained significant federal grant support for follow-up research related to conference themes. In addition, the manuscripts generated by these consensus conferences are frequently cited. Consensus conferences devoted to research agenda development appear to be an academically worthwhile endeavor.


1. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Report of the Institute of Medicine. Washington, DC: National Academy Press; 1999
2. Biros MH, Adams JG, Wears RL. Errors in emergency medicine: A call to action. Acad Emerg Med. 2000;7:1173–1174
3. Morris MI, Daly JS, Blumberg E, et al. Diagnosis and management of tuberculosis in transplant donors: A donor-derived infections consensus conference report. Am J Transplant. 2012;12:2288–2300
4. Lutz MP, Zalcberg JR, Ducreux M, et al; First St Gallen EORTC Gastrointestinal Cancer Conference 2012 Expert Panel. Highlights of the EORTC St. Gallen International Expert Consensus on the primary therapy of gastric, gastroesophageal and oesophageal cancer—differential treatment strategies for subtypes of early gastroesophageal cancer. Eur J Cancer. 2012;48:2941–2953
5. Biros MH, Adams JG. What is consensus? Acad Emerg Med. 2002;9:1063
6. Zuckerman B, Margolis PA, Mate KS. Health services innovation: The time is now. JAMA. 2013;309:1113–1114
7. Kraft GH, Johnson KL, Yorkston K, et al. Setting the agenda for multiple sclerosis rehabilitation research. Mult Scler. 2008;14:1292–1297
8. Murray PT, Devarajan P, Levey AS, et al. A framework and key research questions in AKI diagnosis and staging in different environments. Clin J Am Soc Nephrol. 2008;3:864–868
9. Walston J, Hadley EC, Ferrucci L, et al. Research agenda for frailty in older adults: Toward a better understanding of physiology and etiology: Summary from the American Geriatrics Society/National Institute on Aging Research Conference on Frailty in Older Adults. J Am Geriatr Soc. 2006;54:991–1001
10. National Institutes of Health. Research Portfolio Online Reporting Tools (RePORT). Accessed February 22, 2013
11. Viera AJ, Garrett JM. Understanding interobserver agreement: The kappa statistic. Fam Med. 2005;37:360–363
12. Amirhamzeh D, Moor MA, Baldwin K, Hosalkar HS. Publication rates of abstracts presented at pediatric orthopaedic society of North America meetings between 2002 and 2006. J Pediatr Orthop. 2012;32:e6–e10
13. Drury NE, Maniakis-Grivas G, Rogers VJ, Williams LK, Pagano D, Martin-Ucar AE. The fate of abstracts presented at annual meetings of the Society for Cardiothoracic Surgery in Great Britain and Ireland from 1993 to 2007. Eur J Cardiothorac Surg. 2012;42:885–889
14. Harel Z, Wald R, Juda A, Bell CM. Frequency and factors influencing publication of abstracts presented at three major nephrology meetings. Int Arch Med. 2011;4:40
15. Papagikos MA, Rossi PJ, Lee WR. Publication rate of abstracts from the annual ASTRO meeting: Comparison with other organizations. J Am Coll Radiol. 2005;2:72–75
16. Interian A, Escobar JI. The use of a mentoring-based conference as a research career stimulation strategy. Acad Med. 2009;84:1389–1394
17. Cone DC. Measuring the measurable: A commentary on impact factor. Acad Emerg Med. 2012;19:1297–1299
18. Reynolds JC, Menegazzi JJ, Yealy DM. Emergency medicine journal impact factor and change compared to other medical and surgical specialties. Acad Emerg Med. 2012;19:1248–1254

Supplemental Digital Content

© 2014 by the Association of American Medical Colleges