Medicine & Science in Sports & Exercise:
SPECIAL COMMUNICATIONS: ACSM Abstract commentary

ACSM Abstracts: A Need For More Data

Farquhar, William B. Ph.D.


Author Information

Research Fellow

HRCA Research and Training Institute and

Department of Neurology, Harvard Medical School

1200 Center Street

Boston, MA

Edward J. Zambraski, Ph.D., FACSM

Full Professor of Physiology

Department of Cell Biology and Neuroscience

Rutgers University

New Brunswick, NJ

To the Editor:

As the national ACSM meeting in Indianapolis was ending, we were involved in a conversation with a group of colleagues concerning the quality of the meeting. In general, there were laudatory comments concerning the symposia, invited lectures, and clinical case studies, attesting to the exceptional work of the program committee. However, rather critical comments were directed at the free communications. Opinions were voiced that many of the slide and poster presentations were delivered poorly and that the research abstracts were poorly prepared. There was even the radical suggestion that ACSM should reinstitute the abstract review process.

This conversation prompted us to examine all 1877 abstracts from the national ACSM year 2000 annual meeting to determine whether there was indeed a basis for the critical comments about their quality. Our focus was on abstracts that pertained to experimental research (i.e., studies in which identified parameters must be measured to obtain data that address the question being asked). Consequently, the first task was to separate the experimental research abstracts from all others. Of the total number of abstracts (1877), 160 were identified as not appropriate for our review; these described symposia, special lectures, or clinical case studies. Because of the criteria we used, explained below, another 157 abstracts were deemed not appropriate for review, as they involved surveys, correlational studies, meta-analyses, mathematical modeling, etc. This left us with 1560 abstracts that described experimental research.

To keep the evaluation as objective and manageable as possible, we initially applied two criteria to judge each abstract. The first was whether the abstract clearly stated the purpose or hypothesis, as well as a summary statement or conclusion. This was not as easy to assess as one would think, but overall the vast majority of abstracts contained these essential elements. The second criterion was whether the abstract included experimental data, that is, numerical values for the measurements made to answer the defined question. Historically, a frequent basis for abstract rejection used by other professional societies has been a failure to include data. We were comfortable with this criterion, as we felt that we could be relatively objective in our assessment. Consequently, we evaluated these research abstracts for whether they included actual numerical experimental data or gave data as percents or percent change only. Numbers associated with statistical tests (P, F, t, or r values) were not considered to be the fundamental data obtained. We also realized that for some experimental studies it is not appropriate to present the results numerically; examples include anatomical studies and some aspects of molecular biology. Such studies were not included in the “no data” category.

Of these 1560 research abstracts, nearly one third (460) contained no data. An additional 135 contained data, but presented only as percents or percent change. This means that 595 (460 + 135) abstracts, or 38% of this total group, failed to give any absolute data for the important measures made in the study. Figure 1 shows the results.
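For readers who wish to trace the arithmetic, the brief sketch below reproduces the tallies reported above. It is a minimal illustration in Python; the counts are those given in this letter, and the variable names are our own illustrative labels.

```python
# Tally of year-2000 ACSM abstracts, using the counts reported in this letter.

TOTAL_ABSTRACTS = 1877    # all abstracts in the 2000 annual meeting program
NON_RESEARCH = 160        # symposia, special lectures, clinical case studies
EXCLUDED_DESIGNS = 157    # surveys, correlational studies, meta-analyses, modeling

research = TOTAL_ABSTRACTS - NON_RESEARCH - EXCLUDED_DESIGNS  # 1560
no_data = 460             # no numerical values for the measures made
percent_only = 135        # values given only as percents or percent change
inadequate = no_data + percent_only                           # 595

print(f"Experimental research abstracts: {research}")
print(f"No data: {no_data} ({no_data / research:.0%})")
print(f"Percent-only data: {percent_only} ({percent_only / research:.0%})")
print(f"Lacking absolute data: {inadequate} ({inadequate / research:.0%})")
```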

Although many of the abstracts were well written and made an important contribution to the meeting, in our opinion a research abstract that does not present actual data (see “No Data” column in Figure 1) is unacceptable and, at best, only a mediocre representation of the science. An abstract with no data cannot be evaluated, nor can the work be replicated, because there are no numbers to compare. Such abstracts do not provide the basis to know whether the stated conclusions are indeed correct. An abstract without data that merely lists conclusions such as “these data show that this changed, increased, decreased, etc.” is essentially saying, “just take my word that this is what we found.” Without data, one does not know whether the measurements are correct. While some might argue that expressing data in a percent format is adequate, there is still the limitation that one cannot judge whether the underlying measurements were in a reasonable range. There is also the problem of the “law of initial values” (i.e., the amount of change may depend upon the initial or baseline value). We present the “Only Percent Data” column separately in Figure 1 and let the reader assess the appropriateness of this form of data presentation.

Based upon what we found, our opinion is that approximately 30% or more of the research abstracts from this year’s ACSM annual meeting are inadequate in terms of information content. These abstracts would have been vastly improved, and their conclusions strengthened, by the inclusion of data.

A logical question is whether this situation, the lack of data in research abstracts, is new or has existed in the past. A more focused question is whether the quality of abstracts, in terms of their containing actual data, was different (i.e., better) when the ACSM abstract review process was in place. To address these questions, we performed the same analysis on the abstracts submitted to the 1996 ACSM meeting, the last year that the mandatory abstract review process was in effect.

Of the 1256 abstracts presented at the 1996 annual ACSM meeting (published in MSSE, Vol. 28, Suppl.), 246 were identified as not appropriate for our review based upon the exclusion criteria listed above. Of the remaining 1010 experimental research abstracts, the numbers that contained no data, or data only in percent format, are shown in Table 1.

These numbers indicate that the lack of experimental data in ACSM research abstracts is not new. Compared with the data in Figure 1 from the recent meeting, the proportion of abstracts with no data was lower in 1996 (20% vs. 29%), but the fraction of abstracts with data in percent format only was essentially the same.
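As a rough check on this comparison, the sketch below converts the stated 1996 percentage back into an approximate count. Because Table 1 is not reproduced here, the absolute 1996 figure is inferred from the stated 20% of 1010 abstracts and should be read as approximate, not as a value taken from the table.

```python
# Rough comparison of the 1996 (reviewed) and 2000 (unreviewed) meetings.
# The 1996 no-data count is inferred from the stated 20% and is approximate.

research_1996 = 1256 - 246                  # 1010 experimental research abstracts
no_data_1996 = round(0.20 * research_1996)  # ~202, inferred from the stated 20%

research_2000 = 1560
no_data_2000 = 460

for year, n, total in [(1996, no_data_1996, research_1996),
                       (2000, no_data_2000, research_2000)]:
    print(f"{year}: ~{n} of {total} research abstracts with no data ({n/total:.0%})")
```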

In 1996, there were 38 scientists functioning as “area representatives” and an additional 219 scientists serving as “area reviewers” in the abstract review process. Although all 1010 of these abstracts were reviewed and accepted, one in five of these experimental research abstracts contained no data. Consequently, one could argue that even with an extensive review process in effect, the problem of abstracts containing no data still existed. Therefore, the reinstitution of mandatory abstract review would not appear to be a viable solution. On the other hand, if all abstracts were reviewed and the reviewers were instructed not to accept research abstracts lacking data, this problem would be eliminated.

It is important to emphasize that, judged by our criteria, the majority of abstracts contained a purpose/hypothesis and experimental data. However, we believe that a significant number are in need of improvement. As members of other professional societies, we have observed that a lack of data in experimental abstracts is certainly not unique to ACSM. Indeed, if the AHA, APS, or other scientific societies similar to ACSM performed this analysis of their abstracts, we would predict similar findings. Certainly, we have all submitted abstracts of varying quality. Nevertheless, we (the College and its members) have the opportunity to take the lead in addressing and correcting this issue. Doing so can only improve the overall scientific quality of the meeting and, by extension, the discipline of sports and exercise science.

We see this challenge as one that could be addressed without necessarily returning to an abstract review process. Mentors can assist by paying closer attention to the abstracts being submitted by their trainees. ACSM Fellows who sign off on abstracts should look more carefully and critically at what they are sponsoring. ACSM might provide more guidance and specific information on the preparation of abstracts at the meetings, on its web site, or in its publications. Finally, we (the College) need to monitor the quality of the research being presented at our annual meetings. Our hope is that the above recommendations will improve an already well-organized and intellectually stimulating annual meeting.

William B. Farquhar, Ph.D.

© 2000 Lippincott Williams & Wilkins, Inc.
