Academic Medicine, September 2010, Volume 85, Issue 9
doi: 10.1097/ACM.0b013e3181e2cf2b
Ethics Issues

A Meta-Analysis of Studies of Publication Misrepresentation by Applicants to Residency and Fellowship Programs

Wiggins, Michael N. MD

Author Information

Dr. Wiggins is associate professor of ophthalmology, Jones Eye Institute, and associate residency program director, University of Arkansas for Medical Sciences, Little Rock, Arkansas.

Correspondence should be addressed to Dr. Wiggins, Jones Eye Institute, University of Arkansas for Medical Sciences, 4301 West Markham, Slot 523, Little Rock, AR 72205-7199; telephone: (501) 686-5150; fax: (501) 603-1289; e-mail: wigginsmichael@uams.edu.

First published online June 7, 2010

Abstract

Purpose: Many studies from various fields of medicine examining the accuracy of residency and fellowship applications have reported disturbing percentages of candidates with publication misrepresentation. However, other similar studies have found much lower percentages. No evaluation of these types of studies is currently available to explain this disparity. Therefore, this study evaluated the wide range of percentages of applicants with publication misrepresentation reported in the literature.

Method: Studies of residency and fellowship applicant misrepresentation were identified and reviewed. Using uniform inclusion criteria, the data reported by each study were recalculated to determine the percentage of candidates with misrepresentation.

Results: Thirteen out of 18 studies (eight residency and five fellowship) found in the literature from 1995 to 2008 reported sufficient details to perform a recalculation. The most common type of misrepresentation reported was listing nonexistent articles, followed by errors in authorship order and nonauthorship. After recalculation, the mean percentage of candidates with misrepresentation per applicant pool decreased significantly (7.2% to 4.9%, P = .03048). No study characteristic, such as sample size, was found to be predictive of the percentage of applicants with misrepresentation. No difference was found in the percentage of applicants with misrepresentation in residency versus fellowship programs.

Conclusions: The variance in study results of misrepresentation decreases when uniform inclusion criteria are applied. Caution must be used in directly comparing the results of these studies as originally reported. Program directors should be aware that self-promotion in the authorship list is a common form of misrepresentation.

The issue of publication misrepresentation on postgraduate medical education applications has been of concern to scholars in the field for some time. Sekas and Hutson1 published the first article dealing with fellowship applicant misrepresentation in 1995, the findings of which implied a lack of professionalism in 30.2% of applicants reporting publications. The reasons suggested for these misrepresentations ranged from a desire to appear more competitive to a “mental aberration.” Within a four-year period, six more studies of this type were published from various residency and fellowship programs; they reported that anywhere from 5.8% to 100% of applicants misrepresented publications.2–6 Since that time, 12 more studies in this area have been performed, reporting a range of applicants with misrepresentations from 1.8% to 34%.1,7–18 Hebert et al12 reported that the extensiveness of the search criteria played a role in the percentage of applicants with misrepresentation, suggesting that the large discrepancies in reported applicant misrepresentation rates might be due to the thoroughness of the study rather than true differences in the study groups.

Previous studies have commented on the findings of other studies of misrepresentation; however, no study currently exists in the literature that examines why some studies find high percentages of applicants with misrepresentation while others find a noticeably lower percentage. Therefore, the goal of this study was to answer the following questions: Are prior studies of applicant misrepresentation sufficiently similar in design (i.e., definition of misrepresentation, inclusion/exclusion criteria, the number of databases searched) to be directly compared with one another? If not, would the variance in the range of results persist or decrease if similar study criteria were applied to the reported data of each study? Additional goals were to see whether the findings of Hebert and colleagues would be corroborated. In other words, would studies using a larger number of data sources find a smaller percentage of misrepresentation? Are there other study characteristics, such as sample size or the percentage of applicants reporting publications, that might influence the percentage of misrepresentation discovered? And, finally, is there a difference in the percentages of misrepresenting applicants between those applying to residency and those applying to fellowship programs?

Method

Studies of the misrepresentation of publications by applicants to residency or fellowship programs were identified using a search of PubMed, Web of Science, Journal Citation Reports, Google Scholar, Article First, and Academic Search Elite. I used “misrepresentation publications,” “ghost articles,” “resident applicant publications,” and “fellowship applicant publications” as key words during the searches. Searches were not limited by date to reduce the possibility of overlooking an article. Once an article was identified, its bibliography was reviewed to locate other articles. I then recorded the specialty of the program, year of publication, number of applicants reviewed, number of applicants reporting published articles in peer-reviewed journals, number of applicants found with misrepresentations on articles in peer-reviewed journals, types of misrepresentation, search criteria used, and study definition of misrepresentation.

I recalculated the number of applicants with misrepresentations reported from each study using a uniform definition of misrepresentation. The definition of misrepresentation used was similar to that of Caplan et al11: an applicant listing article authorship when he or she was not listed as an author, listing an article that could not be located in the reported journal or elsewhere, presenting authorship lists that were different from the literature (listing a higher place on the order of authorship or omitting other authors), listing an abstract as a published article, or listing the article in a more prestigious journal than found in the literature.
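For illustration only, these classification rules can be expressed as a short function. The sketch below is hypothetical and does not reproduce the manual citation review performed in the original studies; the Citation fields and function name are invented for this example.

    # Hypothetical sketch of the uniform definition of misrepresentation described
    # above. The data structure and logic are illustrative only; the studies
    # classified citations by hand against the located literature.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Citation:
        authors: List[str]          # authorship list, in order
        journal: str
        is_abstract: bool = False   # True if the item is an abstract rather than a full article

    def misrepresentation_types(applicant: str,
                                claimed: Citation,
                                verified: Optional[Citation]) -> List[str]:
        """Return the categories of misrepresentation that apply to one claimed citation."""
        findings: List[str] = []
        if verified is None:
            # The article could not be located in the reported journal or elsewhere.
            return ["nonexistent article"]
        if applicant not in verified.authors:
            findings.append("nonauthorship")
        elif applicant in claimed.authors and (
                claimed.authors.index(applicant) < verified.authors.index(applicant)
                or len(claimed.authors) < len(verified.authors)):
            # Self-promotion in the authorship order or omission of other authors.
            findings.append("authorship list differs from the literature")
        if verified.is_abstract and not claimed.is_abstract:
            findings.append("abstract listed as a published article")
        if claimed.journal != verified.journal:
            # Rough stand-in for "listed in a more prestigious journal than found."
            findings.append("listed in a different journal than found")
        return findings

Under this scheme, an applicant would be counted as having a misrepresentation if any claimed citation returned a nonempty list.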

Controversial definitions of misrepresentation, that is, those that were not clear-cut, were excluded. I defined controversial definitions as those directly mentioned and purposefully excluded by one or more of the 13 studies. Seven of the 13 studies intentionally excluded one or more of three definitions.2,7,9,11,12,14,15 These were typographical errors, such as the wrong journal volume or page number; articles listed as “in press” but not published within a given time frame; and articles that could not be verified because the journal itself could not be located. Several reasons for excluding these definitions were discussed in the seven articles: unpublished articles could be delayed by the publisher, the candidate might not understand the difference between “in press” and “under submission,” and some journals might go unlocated because of the limited availability of non-Western journals or the limits of the search strategy. I also excluded studies that did not provide sufficient details to recalculate their findings.

The meta-analysis protocol was set up as described by Rosenthal and DiMatteo19 for meta-analyses of literature reviews. The percentages of applicants reporting publications and the percentages of applicants with misrepresentations, both per applicant pool and per those reporting publications, were calculated. I chose a paired, one-tailed t test with a significance level set at .05 for the comparisons of the total percentage of applicants with misrepresentation per applicant pool and per those reporting publications before and after applying a uniform definition, because the recalculated percentages could only remain unchanged or decrease. The means and standard deviations of each group were used to calculate the coefficient of variation to compare the dispersion of percentages in each group before and after the application of uniform criteria. Two-tailed, unpaired t tests with a significance level set at .05 were used for comparisons of study characteristics and for resident applicant studies versus fellowship applicant studies, since the percentage of either group within each comparison could have varied in either direction. Finally, as a post hoc means of identifying other influences on comparison outcomes, I calculated effect sizes using a Pearson correlation coefficient. Calculations were made with Statistical Analysis Software v. 9.2 and Microsoft Excel 2007.
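Readers who wish to reproduce this style of comparison could use the following sketch. It is not the analysis code used here (which was run in SAS 9.2 and Excel); the arrays are hypothetical placeholders rather than the data from the 13 studies.

    # Minimal sketch of the statistical comparisons described above; the values
    # below are illustrative placeholders, not the study data.
    import numpy as np
    from scipy import stats

    reported = np.array([16.1, 9.0, 7.5, 6.0, 4.0, 2.5, 0.8])      # % with misrepresentation per pool, as originally reported
    recalculated = np.array([11.3, 6.5, 5.0, 4.5, 3.0, 2.0, 0.6])  # % after applying the uniform definition

    # Paired t test; recalculation can only leave each value unchanged or lower it,
    # so a one-tailed p value (half the two-tailed p when t > 0) is appropriate.
    t_stat, p_two_tailed = stats.ttest_rel(reported, recalculated)
    p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

    # Coefficient of variation (SD / mean) to compare dispersion before and after.
    cv_before = reported.std(ddof=1) / reported.mean()
    cv_after = recalculated.std(ddof=1) / recalculated.mean()

    # Two-tailed, unpaired t test for independent groups of studies
    # (e.g., residency vs. fellowship applicant studies); values illustrative only.
    residency = np.array([4.5, 3.0, 6.1, 5.0])
    fellowship = np.array([5.4, 4.8, 6.0, 5.1])
    t_unpaired, p_unpaired = stats.ttest_ind(residency, fellowship)

    print(round(p_one_tailed, 4), round(cv_before, 2), round(cv_after, 2), round(p_unpaired, 4))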

Results

Eighteen studies examining misrepresentation among applicants, published from 1995 to 2008, were found in the literature.1–18 One study reported separately on both resident and fellowship applicants, so in effect 19 studies (14 residency and 5 fellowship) were available.4 Thirteen of the 19 studies (8 residency and 5 fellowship) reported sufficient details to calculate misrepresentation rates of published articles among candidates (Table 1).1–4,6–12,14,15 Six of the 19 studies provided insufficient information in the text to determine the number of candidates with misrepresented articles per applicant pool and per those with reported publications.5,13,16–18 Eight of the 13 studies reported the specific types of misrepresentation found. The most common types of misrepresentation were listing nonexistent articles (39 total candidates), followed by errors in authorship order (22 candidates), nonauthorship (7 candidates), and reporting abstracts as articles (5 candidates). Of the studies counting unlocated journals as misrepresentation, three reported a total of 12 candidates.

The studies differed by the breadth of their definitions of misrepresentation. Whereas almost all studies agreed that nonexistent articles and nonauthorship constituted misrepresentation, fewer than half considered a candidate promoting him- or herself on the author list or reporting an article as appearing in a more prestigious journal to be misrepresentation. Five of the studies considered unlocated journals to be misrepresentation, whereas eight did not, with many stating that unlocated journals might exist but be unavailable. One study counted clerical errors as misrepresentation, whereas the other 12 did not. Eight studies considered articles listed as “in press” but not published after a year to be misrepresentation, but five studies did not because reasons for nonpublication other than fraud were possible. Overall, the average number of categories of actions considered misrepresentation was 3.3, with a range of 2 to 4. The studies also differed by the number of data sources searched. Data sources included databases, such as PubMed, and data retrieval services, such as the National Library of Medicine's LocatorPlus. The average number of data sources used was 5.2, with a range of 2 to 15. Six of the 13 studies (46.2%) used three data sources or fewer, whereas seven (53.8%) used five or more.

When the same study criteria were applied, the overall average percentage of candidates with misrepresentation per applicant pool significantly decreased from the reported 7.2% to 4.9% (P = .0305 [95% confidence interval (CI) 0.3151–2.8571]) (Table 2). The range of percentages across the studies decreased (from 0.8%–16.1% to 0.6%–11.3%), as did the coefficient of variation (0.60 to 0.57), indicating a decrease in the dispersion of percentages. When examining applicants with misrepresentation per those reporting published articles, the mean also decreased, but not quite to statistical significance (21.3% to 15.9%, P = .0604 [95% CI −0.3595 to 8.2435]). Similarly, the range of percentages of applicants with misrepresentation per those reporting published articles also decreased (from 1.8%–50% to 1.3%–29.8%), with a decrease in the coefficient of variation (0.61 to 0.57).
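As a quick arithmetic check using only the means and coefficients of variation reported above, the coefficient of variation is the standard deviation divided by the mean, so the implied standard deviations can be recovered as follows.

    # The coefficient of variation is SD / mean, so the reported means and CVs
    # imply these standard deviations (in percentage points).
    mean_reported, cv_reported = 7.2, 0.60
    mean_recalculated, cv_recalculated = 4.9, 0.57

    sd_reported = cv_reported * mean_reported              # about 4.3
    sd_recalculated = cv_recalculated * mean_recalculated  # about 2.8
    print(round(sd_reported, 1), round(sd_recalculated, 1))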

The median number of data sources searched in the studies was five. A comparison of those studies using five or more data sources with those using fewer than five found no difference in the mean percentage of applicants with misrepresentation per applicant pool or per those with publications (P = .5235 and .4611, respectively). The average number of applicants reviewed was 261.3, with a median of 213. When comparing studies reviewing 213 or more applicants with those reviewing fewer than 213, there was no significant difference in the percentage of applicants with misrepresentation per applicant pool or per those with publications (P = .2167 and .6399, respectively). The mean percentage of applicants claiming publications was 33.2%, with a median of 32.3%. In studies in which 32.3% or more of applicants claimed publications, the percentages of applicants with misrepresentation per applicant pool and per those with publications were not significantly different from those in studies with less than 32.3% (P = .9576 and .1219, respectively).

The average percentage of resident versus fellowship applicants with misrepresentation per applicant pool was 4.5% and 5.4%, respectively (Table 2). The average percentage of resident versus fellowship applicants with misrepresentation per those with publications was 17% and 14.2%, respectively. No significant difference was found between applicants to residency and fellowship programs in misrepresentation per applicant pool or per those with publications (P = .5825 [95% CI −2.676 to 2.0346] and .6096 [95% CI −14.6554 to 6.6791], respectively). The reader may contact me for further details regarding the statistical analysis of these results.

Discussion

I evaluated the large disparity in results from studies of the percentages of applicants to residency and fellowship programs who misrepresented articles on their applications. This evaluation revealed that these studies were not sufficiently similar in design to be directly compared with one another as originally reported, because of differences in what was counted as misrepresentation. When unlocated journals, “in press” articles, and typographical errors were eliminated from the counts of misrepresentation, the differences in the percentages of misrepresentation decreased.

Studies of resident or fellowship applicant misrepresentation are worthwhile because society places a large degree of trust in physicians. Misrepresentation raises concerns ranging from the applicant's attention to detail to her or his ethical fortitude. Sekas and Hutson1 were the first to publish a study of this kind, and the 30.2% of applicants with misrepresentations reported therein was sensational enough to prompt many similar studies in other specialties. However, the 11.6% of applicants with misrepresentation per applicant pool reported by Sekas and Hutson was somewhat less dramatic. Also, it should be noted that more than half of the examples of misrepresentation in their study involved unlocated journals, leaving the possibility that these articles may have existed. When I recalculated the data from Sekas and Hutson's study, the percentage of applicants with misrepresentations per those with publications dropped from 30.2% to 13.2%. The percentage per applicant pool dropped from 11.6% to 3%, the lowest of all the fellowship studies. Thus, although no act of misrepresentation is acceptable, the problem may not be as severe as previously claimed. It is curious to speculate: Had the first study originally reported a 3% misrepresentation rate, would many of the subsequent studies have followed?

Study differences in what is and is not counted as misrepresentation can lead to other difficulties when comparisons are made. A recent example is the study by Gussman and Blechman,18 which found that 9.9% of applicants to an obstetrics–gynecology (Ob/Gyn) residency had misrepresentations, in comparison with the 30.2% of gastroenterology (GI) fellowship applicants found by Sekas and Hutson and the 10.6% of orthopedic fellowship applicants found by Patel et al.14 The implication is that the misrepresentation rates of Ob/Gyn applicants are more similar to those of orthopedic fellowship applicants than to those of GI applicants. However, like Patel et al, Gussman and Blechman did not count unlocated journals, whereas Sekas and Hutson did. Once unlocated journals are excluded, the GI fellowship figure drops to 13.2%, leaving only small differences among the three studies.

Another caution in quoting and comparing percentages from these studies concerns the denominator from which the percentages were derived. Some of the studies focused on the percentage of applicants per those with articles, some on the percentage per applicant pool, some on the percentage per those with various types of publications (books, abstracts, etc.), and some on the percentage of articles with misrepresentation instead of the percentage of applicants with misrepresentation. The studies of applicants to orthopedics from 1999 and 2007 are an example. Konstantakos et al9 stated that the misrepresentation rate of citations slightly increased from 18% in 1999 to 20.6% in 2007, suggesting that misrepresentation may be on the rise. However, Table 1 shows that the recalculated percentage of applicants with misrepresentations per those with published articles was 22.4% in 1999 but only 21% in 2007. Similarly, the recalculated percentage per applicant pool was 5.2% in 1999 compared with 5.5% in 2007. Thus, although the percentage of misrepresented articles may have slightly increased, the percentage of applicants with misrepresentations did not.
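A simple hypothetical example (the numbers below are invented, not drawn from any of the studies) shows how strongly the choice of denominator shapes the headline percentage.

    # Illustrative only: the same count of misrepresenting applicants produces
    # very different percentages depending on the denominator used.
    applicant_pool = 200      # hypothetical applicants reviewed
    with_publications = 60    # hypothetical applicants reporting published articles
    misrepresenting = 10      # hypothetical applicants with at least one misrepresentation

    pct_per_pool = 100 * misrepresenting / applicant_pool          # 5.0%
    pct_per_published = 100 * misrepresenting / with_publications  # about 16.7%
    print(pct_per_pool, round(pct_per_published, 1))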

I also examined the data from prior studies for a correlation between study characteristics and the percentage of misrepresentation discovered. After controversial examples of misrepresentation were excluded, no study characteristic was found to correlate with the percentage of misrepresentation discovered. Thus, neither the sample size, the percentage of applicants with publications, nor the number of data sources used could be shown to predict the percentage of misrepresentation found. Therefore, the findings did not support the conclusion of Hebert et al12 that studies of misrepresentation should find smaller percentages if a larger number of data sources are searched. It is possible that if unlocated journals had been counted as misrepresentation by all 13 studies and not excluded from my analysis, then Hebert and colleagues' conclusions might have held up, as more thorough searches would seem to locate journals previously thought to be fictitious. However, nonexistent articles were included by all studies. Although it is logical to assume that more thorough searches would find articles previously thought to be nonexistent, this was not borne out in the comparison of studies using few data sources versus those using multiple sources.

Finally, I found no differences in the percentages of misrepresentation in fellowship versus residency applicants. Therefore, as the number of these types of studies is limited, it was reasonable to evaluate them as a combined group.

A meta-analysis of studies is biased by the availability of only published data. It is possible that other studies of resident or fellowship applicant misrepresentation have been performed but were not accepted for publication. My study was limited by the relatively small number of studies found in the literature and by the clarity of the data reported in the articles. For example, Roellig and Katz8 stated that 11 applicants reported articles that were “unfound.” It is not clear whether these were nonexistent articles in located journals or articles in journals that could not be located. Also, 6 of the 19 available studies provided insufficient detail to calculate the percentages of candidates with misrepresentation, many of them choosing to focus on the percentage of citations with misrepresentation rather than on the number of candidates responsible.

In conclusion, when controversial examples of misrepresentation are excluded, the results of studies finding large percentages of applicants with misrepresentations become more similar to studies finding smaller percentages. Therefore, the concern regarding the honesty of applicants may not be as warranted as previously thought. Also, the findings from my analysis should make program directors aware that of those applicants who do have misrepresentations, self-promotion on the author list was the second most common form and can easily be overlooked. And, finally, if future studies of misrepresentation are to be done in other areas of medicine, it is worth knowing that sample sizes and numbers of data sources larger than the medians reported here may not significantly impact the results.

Acknowledgments:

The author would like to thank Diane E. Skinner, EdD, MPH, and Joseph Chacko, MD, for their critical review of this manuscript.

Funding/Support:

This work was supported in part by unrestricted grants from Research to Prevent Blindness and the Pat and Willard Walker Eye Research Center.

Other disclosures:

None.

Ethical approval:

Not applicable.

Previous presentations:

A summary of the findings in this article was presented at the Ophthalmology Research Seminar Series, Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, Arkansas, on September 16, 2009.

References

1. Sekas G, Hutson WR. Misrepresentation of academic accomplishments by applicants for gastroenterology fellowships. Ann Intern Med. 1995;123:38–41.

2. Dale JA, Schmitt CM, Crosby LA. Misrepresentation of research criteria by orthopaedic residency applicants. J Bone Joint Surg Am. 1999;81:1679–1681.

3. Gurudevan SV, Mower WR. Misrepresentation of research publications among emergency medicine residency applicants. Ann Emerg Med. 1996;27:327–330.

4. Bilge A, Shugerman RP, Robertson WO. Misrepresentation of authorship by applicants to pediatrics training programs. Acad Med. 1998;73:532–533.

5. Baker DR, Jackson VP. Misrepresentation of publications by radiology residency applicants. Acad Radiol. 2000;7:727–729.

6. Boyd AS, Hook M, King LE Jr. An evaluation of the accuracy of residency applicants' curricula vitae: Are the claims of publications erroneous? J Am Acad Dermatol. 1996;35:606–608.

7. Yang GY, Schoenwetter MF, Wagner TD, Donohue KA, Kuettel MR. Misrepresentation of publications among radiation oncology residency applicants. J Am Coll Radiol. 2006;3:259–264.

8. Roellig MS, Katz ED. Inaccuracies on applications for emergency medicine residency training. Acad Emerg Med. 2004;11:992–994.

9. Konstantakos EK, Laughlin RT, Markert RJ, Crosby LA. Follow-up on misrepresentation of research activity by orthopaedic residency applicants: Has anything changed? J Bone Joint Surg Am. 2007;89:2084–2088.

10. Glazer JL, Hatzenbuehler JR, Dexter WW, Kuhn CB. Misrepresentation of research citations by applicants to a primary care sports medicine fellowship program in the United States. Clin J Sport Med. 2008;18:279–281.

11. Caplan JP, Borus JF, Chang G, Greenberg WE. Poor intentions or poor attention: Misrepresentation by applicants to psychiatry residency. Acad Psychiatry. 2008;32:225–229.

12. Hebert RS, Smith CG, Wright SM. Minimal prevalence of authorship misrepresentation among internal medicine residency applicants: Do previous estimates of “misrepresentation” represent insufficient case finding? Ann Intern Med. 2003;138:390–392.

13. Cohen-Gadol AA, Koch CA, Raffel C, Spinner RJ. Confirmation of research publications reported by neurological surgery residency applicants. Surg Neurol. 2003;60:280–283.

14. Patel MV, Pradhan BB, Meals RA. Misrepresentation of research publications among orthopedic surgery fellowship applicants: A comparison with documented misrepresentations in other fields. Spine. 2003;28:632–636.

15. Panicek DM, Schwartz LH, Dershaw DD, Ercolani MC, Castellino RA. Misrepresentation of publications by applicants for radiology fellowships: Is it a problem? AJR Am J Roentgenol. 1998;170:577–581.

16. Katz ED, Shockley L, Kass L, et al. Identifying inaccuracies on emergency medicine residency applications. BMC Med Educ. 2005;5:30.

17. Kuo PC, Schroeder RA, Shah A, Shah J, Jacobs DO, Pietrobon R. “Ghost” publications among applicants to a general surgery residency program. J Am Coll Surg. 2008;207:485–489.

18. Gussman D, Blechman A. Verification of publications, presentations and posters by applicants to a residency in obstetrics and gynecology. J Reprod Med. 2007;52:259–261.

19. Rosenthal R, DiMatteo MR. Meta-analysis: Recent developments in quantitative methods for literature reviews. Annu Rev Psychol. 2001;52:59–82.

© 2010 Association of American Medical Colleges
