Steven L. Kanter, MD
From time to time, it is important for those of us who work in academic medicine to take stock. Whether we are considering all medical schools and teaching hospitals, an individual academic health center, a department or division, a research laboratory, or even a small unit, it is valuable, at least once a year, to pause and reflect on a set of indicators that helps us understand whether or not we are achieving stated goals, pursuing initiatives that align with an overarching mission, or trending in the direction we intend. It is essential to know if we are developing, receding, or staying the same in terms of resources, people, diversity, and other key aims and values.
Both within and outside of academic medicine, a number of interested parties track such indicators, which can be leading or lagging, quantitative or qualitative, financial or practical. Governments track leading economic indicators, county public health departments track community health indicators, and businesses track key performance indicators. In academic medicine, various organizations, schools, and hospitals do an excellent job tracking certain measures. But the data that are available often are kept in different places, are reported in different formats, may comprise raw or derived values, are generated at different levels of granularity, have different owners, and may have different restrictions on availability, accessibility, and use. So, imagine how beneficial it could be to the academic medicine community if there were a set of easily accessible, widely available, peer-reviewed, annually published key indicators that could help assess the vitality of the academic medicine enterprise.
To advance thinking about such indicators and to provide a mechanism to publish and track them, this issue of Academic Medicine introduces a new annual feature called Key Indicators in Academic Medicine (KIAMs). Each KIAM appears on two journal pages. On the first page, you will find the title, authors, a rationale for the indicator, methodologic notes, limitations of the measure, sources used, acknowledgements, and relevant references. On the second page, you will find the tables, graphs, and charts that illustrate the indicator.
By accepting KIAM submissions and subjecting them to peer review, the journal can draw on the expertise available in the academic medicine community. Each year, peer review can be used to inform the process to include new indicators, refine existing ones, and eliminate those that are no longer relevant. And because KIAMs are published in the journal, they become part of the indexed literature and, thus, are easily discovered by an Internet search engine, easily accessible to any interested individual, and easily cited by anyone who wishes to reference them.
In this issue, there are seven KIAMs. In addition, there is an excellent article by Joiner and Coleman in which they suggest a framework for analyzing KIAMs, point to opportunities and pitfalls, and indicate how this feature could form the basis of a comprehensive project within the United States.
Obviously, the seven KIAMs in this issue do not constitute a comprehensive set of measures. To understand the well-being of medical schools and teaching hospitals, the academic medicine community will need to track a much larger set of key indicators. The KIAMs in this issue introduce the concept, illustrate the use of a particular template for reporting an indicator, and demonstrate the value of publishing indicators in the peer-reviewed, indexed literature. The article by Joiner and Coleman provides expert guidance for individuals and organizations who wish to submit indicators for consideration during the next publication cycle. (The call for the next set of KIAMs appears in this issue.)
The journal welcomes submissions of updated and new KIAMs to be published about a year hence. Of course, the journal also welcomes manuscripts to be published at any time that advance thinking and practice in the use of key indicators, that analyze key indicators and their relationships to events and outcomes, and that consider—in a thoughtful, deep, and scholarly manner—the arguments, tensions, and issues at play in assessing the health of academic medicine. Note that while the indicators published in this issue generally focus on measures in the United States, the journal welcomes submissions about KIAMs in any country.
I believe that there should be an ongoing discussion in the academic medicine community and its journals about which key indicators convey the most important information about the health of academic medicine, about which indicators are most sensitive to change as the well-being of medical schools and teaching hospitals waxes and wanes, about the merits and flaws of different indicators, and about the ideal level of granularity for a particular measure to provide information that is most useful for decision making. I hope that this initiative will catalyze such conversations at meetings and conferences, and in medical schools and teaching hospitals.
I am grateful to Keith Joiner for his enthusiastic support of the notion of KIAMs and for his expert advice; I also thank him and David Coleman for writing their insightful article that appears in this issue. I wish to thank Anne Farmakidis, Managing Editor, for her crucial work in helping to develop the key indicators project and in reviewing the early versions, and Jennifer Campi, Senior Staff Editor, for her creative and innovative editing of the key indicators and for seeing them through production. Both of these staff members contributed their talent and expertise to make the content more accessible and the design more pleasing to the eye. Finally, I thank the reviewers, who forged new ground in figuring out how to review a KIAM submission.