Systematic reviews and meta-analyses are powerful forms of evidence: by rigorously summarizing numerous trials addressing a focused research question into one concise document, they can provide high-quality information1. As with all types of research, however, these studies can be flawed in several ways. The two main types of flaws are flaws in the reporting of the published paper and flaws in the methodology of the research itself. Such flaws can bias a systematic review and potentially mislead decision-making2.
The quality of systematic reviews across several disciplines in medicine has been studied previously. In a study examining 105 systematic reviews in internal medicine, the reviews scored an average of only three of seven for quality of reporting on the Oxman and Guyatt scale3. Another study that assessed forty-two systematic reviews in internal medicine journals showed an average of only 4.6 of eleven items fulfilled on the Assessment of Multiple Systematic Reviews (AMSTAR) guidelines4. In other fields, such as otolaryngology, systematic reviews and meta-analyses have contained flawed methodology, with an average score of 3.9 of 10 on the Overview Quality Assessment Questionnaire5. Moreover, these flaws in both methodology and reporting are not limited to English-language reviews: a review of traditional Chinese medicine indicated that over one-half of the reviews had methodological and reporting flaws6.
In the field of general surgery, most systematic reviews have had major methodological flaws as well. In a six-year period from 1997 to 2002, general surgery systematic reviews averaged 3.3 of 10 on the Overview Quality Assessment Questionnaire7. More specifically, reviews in general surgery and orthopaedics were shown to have lower quality when completed solely by a surgical department without any outside expertise (e.g., an epidemiologist)2,6. Additionally, studies that focused on fracture management were of substantially poorer quality than those that focused on diagnostic tests or thrombosis prophylaxis2.
In orthopaedic surgery, the number of systematic reviews and meta-analyses that have been published has substantially increased in the last two decades2. Although the methodological quality has improved over time, in 2008, 68% of published studies still had methodological flaws1. Therefore, the information may be more plentiful, but it is still of poor quality. The objective of this study was to assess the quality of reporting as well as the methodological quality and risk of bias of systematic reviews and meta-analyses in orthopaedic literature published from 2006 to 2010 to determine if the quality of these studies had improved.
Materials and Methods
We conducted a review of the reporting and methodological quality of systematic reviews and meta-analyses in orthopaedic journals.
Eligibility Criteria
To be included in this review, the paper had to be (1) either a systematic review or a meta-analysis8, (2) published in the years from 2006 to 2010, and (3) published in one of the top five orthopaedic journals as determined by the 2010 Institute for Scientific Information (ISI) Thomson Reuters Journal Citation Reports.
Paper Identification
We identified the top five journals according to the 2010 ISI Thomson Reuters Journal Citation Reports five-year impact factor, and we manually searched each journal from the years 2006 to 2010. The journals included Osteoarthritis and Cartilage, The Spine Journal, The Journal of Bone and Joint Surgery (American Volume), The American Journal of Sports Medicine, and the Journal of Orthopaedic Research. Each journal was retrieved electronically and manually reviewed by one author (P.J.K.), and when inclusion was uncertain, the second author (J.J.G.) was contacted. Included papers were downloaded and collated for later assessment.
Assessment of Reporting Quality
Eligible papers were assessed with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, an internationally recognized guideline for the reporting of systematic reviews and meta-analyses8. The PRISMA statement was created and refined through a variety of consensus methods involving a large number of individuals with expertise in their respective areas. Each item was chosen because empirical evidence supports its importance in the written report of a systematic review. The PRISMA statement is a list of twenty-seven items recommended for inclusion in a review to ensure that the published report contains all relevant information. Each PRISMA item is judged with a “yes,” “no,” or “don’t know” response. A “yes” response means that the item is fulfilled, a “no” response means that the item is not fulfilled, and a “don’t know” response means that, because of incomplete reporting or unclear language, it is inconclusive as to whether the item is fulfilled. In the present study, each review was separately and independently assessed by the two authors (P.J.K. and J.J.G.). Any disagreements were resolved by discussion between the authors.
Assessment of Methodological Quality
We assessed the methodological quality of each included systematic review with the AMSTAR tool, which was chosen because it is the most recent, reliable, and valid instrument for evaluating systematic reviews9. The AMSTAR tool is an eleven-item questionnaire that was created by building on previous tools (e.g., the Overview Quality Assessment Questionnaire), empirical evidence, and expert consensus. This instrument determines the methodological quality of systematic reviews by assessing the presence of the following items: an a priori design, duplicate study selection and data extraction, a comprehensive literature search, the use of status of publication as an inclusion criterion, a list of included and excluded studies, characteristics of included studies, documented assessment of the scientific quality of the included studies, appropriate use of the scientific quality in forming conclusions, appropriate use of methods to combine the findings of studies, assessment of the likelihood of publication bias, and documentation of potential conflicts of interest. This instrument has proven to be reliable and valid4. In particular, AMSTAR has content and construct validity as well as high inter-rater reliability10,11. The items are judged with a “yes,” “no,” “can’t answer,” or “not applicable” response. A “yes” response means that the item is fulfilled, a “no” response means that the item is not fulfilled, a “can’t answer” response means that it is inconclusive as to whether the item is fulfilled, and a “not applicable” response means that the item is not relevant for the type of paper. For example, if the systematic review did not perform a meta-analysis, “not applicable” would be selected for several items. Each paper in the present study was separately and independently assessed by the two authors (P.J.K. and J.J.G.). Any discrepancies were resolved by consensus between the authors.
Data Extraction
All data for both the AMSTAR and PRISMA guidelines were extracted into a preformatted Microsoft Excel (Microsoft, Redmond, Washington) spreadsheet designed by one investigator (J.J.G.). We also recorded the funding source, year of publication, and journal impact factor, where applicable, for each included article. Funding sources were categorized into five groups: public, private, mixed public and private, unknown, and none. Funding sources that were not obvious were verified through several electronic searches and discussed by the investigators, who then came to a consensus.
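For illustration only, the per-review item responses described above can be tallied into per-item proportions of the kind reported in the tables that follow. This sketch assumes a simple in-memory structure (one dictionary per review) rather than the authors' actual spreadsheet; the `tally` helper and the sample item name are hypothetical.

```python
# Hypothetical sketch: tallying per-item checklist responses into proportions.
# Each review is represented as a dict mapping an item name to one of the
# four response categories used by the AMSTAR tool.
from collections import Counter

RESPONSES = ("yes", "no", "can't answer", "not applicable")

def tally(reviews, items):
    """Return, for each item, the proportion of each response across reviews."""
    result = {}
    n = len(reviews)
    for item in items:
        counts = Counter(review[item] for review in reviews)
        result[item] = {r: counts.get(r, 0) / n for r in RESPONSES}
    return result
```

The same structure works for PRISMA items by substituting its three response categories.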
Data Analysis
After each review was assessed for both reporting and methodological quality, we calculated the proportions of each item reported within and across journals. Proportions were calculated as percentages of “yes,” “no,” “don’t know” (or “can’t answer”), and “not applicable” responses. We performed separate analyses of variance with the total number of PRISMA and AMSTAR items fulfilled as the dependent variable and with journal source and funding source as categorical independent variables. We used linear regression to explore the influence of several variables on the total number of fulfilled items for the PRISMA statement and the AMSTAR tool separately. We included the following predictors in the models: five-year impact factor, year of publication, and funding type. The significance level was set at p = 0.05 for all tests.
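As a hedged illustration of the analyses described above (not the authors' actual code), a one-way ANOVA F statistic and an ordinary-least-squares fit can be computed directly with NumPy; the function names are invented for demonstration, and the small data sets in any usage would be made up.

```python
# Illustrative sketch of the two analyses: a one-way ANOVA F statistic
# (e.g., total items fulfilled grouped by journal or funding category)
# and an ordinary-least-squares fit (e.g., total items fulfilled on
# impact factor, year, and funding type). Not the authors' actual code.
import numpy as np

def one_way_anova(groups):
    """Return the F statistic and degrees of freedom for a one-way ANOVA."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    # Between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

def ols(y, *predictors):
    """Least-squares coefficients (intercept first) for y on the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In practice a statistics package would also supply p values and confidence intervals; this sketch shows only the point estimates underlying Tables VII and VIII.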
Source of Funding
Funding for this study was obtained from the Department of Orthopaedic Surgery at the University of Michigan. The funding source had no role in the design, implementation, interpretation, or writing of this study.
Results
The top five orthopaedic journals according to the five-year impact factors in the 2010 ISI Thomson Reuters Journal Citation Reports were The American Journal of Sports Medicine, Osteoarthritis and Cartilage, The Journal of Bone and Joint Surgery, the Journal of Orthopaedic Research, and The Spine Journal (Table I). Although The Spine Journal did not have a five-year impact factor, its 2010 one-year impact factor was the fourth highest among orthopaedic journals; therefore, this journal was chosen for inclusion. Seventy-six systematic reviews and meta-analyses met our inclusion criteria and were included in our analysis. The number of reviews and general topics of the journals can be seen in Table II. Two of the journals (The American Journal of Sports Medicine and the Journal of Orthopaedic Research) had a small number of systematic reviews compared with the other journals.
TABLE I - Impact Factors for Top Orthopaedic Journals

Journal | Impact Factor | No. of Included Studies
The American Journal of Sports Medicine | 4.801 | 4
Osteoarthritis and Cartilage | 4.495 | 22
The Journal of Bone and Joint Surgery | 3.762 | 23
Journal of Orthopaedic Research | 3.379 | 1
The Spine Journal | 3.024 | 26
TABLE II - General Topics in the Systematic Reviews

Topic of Systematic Review | No. of Systematic Reviews
Sports medicine | 7
Arthroplasty | 8
Conditions of the hip | 2
Conditions of the spine | 13
Conditions of the nervous system | 1
Conditions of the shoulder | 2
Systemic conditions | 1
Fractures | 4
Osteoarthritis | 18
Diagnostic | 1
Back pain | 16
Issues with reporting/publication | 3
Reporting Quality
The data for reporting quality stratified first by journal and then for all journals are presented in Table III. The Journal of Bone and Joint Surgery had the highest reporting quality; reviews from that journal reported 77% of the items in the PRISMA statement, followed by smaller percentages from the other four journals (p = 0.013). The total proportion of unfulfilled PRISMA items (i.e., those with a “no” response) ranged from 21% (The Journal of Bone and Joint Surgery) to 59% (the Journal of Orthopaedic Research). Finally, the proportion of items with unknown fulfillment (i.e., a “don’t know” response) ranged from 0% (the Journal of Orthopaedic Research) to 7% (The Spine Journal). The details of reporting for each question of the PRISMA statement are listed in Table IV.
TABLE III - Proportion of PRISMA Statement Items Reported Across All Systematic Reviews by Journal*

Journal (no.)† | “Yes” Response | “No” Response | “Don’t Know” Response
The American Journal of Sports Medicine (n = 4) | 0.63 | 0.33 | 0.04
Osteoarthritis and Cartilage (n = 22) | 0.67 | 0.30 | 0.03
The Journal of Bone and Joint Surgery (n = 23) | 0.77 | 0.21 | 0.02
Journal of Orthopaedic Research (n = 1) | 0.41 | 0.59 | 0.00
The Spine Journal (n = 26) | 0.64 | 0.29 | 0.07
Totals | 0.68 | 0.28 | 0.04

*All values represent the proportion of items of the PRISMA statement.
†no. = the total number of systematic reviews included.
TABLE IV - Distribution of Individual Questions in PRISMA

Systematic Review | Total “Yes” Responses (no. [%]) | Total “No” Responses (no. [%]) | Total “Don’t Know” Responses (no. [%])
Title | 65 [86%] | 11 [14%] | 0 [0%]
Abstract | | |
Structured summary | 75 [99%] | 1 [1%] | 0 [0%]
Introduction | | |
Rationale | 76 [100%] | 0 [0%] | 0 [0%]
Objectives | 76 [100%] | 0 [0%] | 0 [0%]
Methods | | |
Protocol and registration | 14 [18%] | 62 [82%] | 0 [0%]
Eligibility criteria | 64 [84%] | 12 [16%] | 0 [0%]
Information sources | 74 [97%] | 2 [3%] | 0 [0%]
Search | 57 [75%] | 19 [25%] | 0 [0%]
Study selection | 61 [80%] | 15 [20%] | 0 [0%]
Data collection process | 53 [70%] | 23 [30%] | 0 [0%]
Data items | 61 [80%] | 15 [20%] | 0 [0%]
Risk of bias in individual studies | 50 [66%] | 26 [34%] | 0 [0%]
Summary measures | 62 [82%] | 14 [18%] | 0 [0%]
Synthesis of results | 28 [37%] | 5 [7%] | 43 [56%]
Risk of bias across studies | 7 [9%] | 69 [91%] | 0 [0%]
Additional analyses | 26 [34%] | 50 [66%] | 0 [0%]
Results | | |
Study selection | 62 [82%] | 14 [18%] | 0 [0%]
Study characteristics | 64 [84%] | 12 [16%] | 0 [0%]
Risk of bias within studies | 48 [63%] | 28 [37%] | 0 [0%]
Results of individual studies | 61 [80%] | 15 [20%] | 0 [0%]
Synthesis of results | 30 [39%] | 6 [8%] | 40 [53%]
Risk of bias across studies | 7 [9%] | 67 [88%] | 2 [3%]
Additional analysis | 27 [36%] | 49 [64%] | 0 [0%]
Discussion | | |
Summary of evidence | 76 [100%] | 0 [0%] | 0 [0%]
Limitations | 49 [64%] | 27 [36%] | 0 [0%]
Conclusions | 76 [100%] | 0 [0%] | 0 [0%]
Funding | 53 [70%] | 23 [30%] | 0 [0%]
The three items most commonly reported as unfulfilled (i.e., with a “no” response) were the description of risk of bias across studies, the analysis of risk of bias in the results section, and the lack of a protocol and registration. Only 9% of the systematic reviews contained an explanation and analysis of the risk of bias across the included studies, and 37% of the reviews did not assess the risk of bias for each included study.
Methodological Quality
A mean of 54% of AMSTAR items were fulfilled across all journals. The data for methodological quality stratified by journal are presented in Table V. The American Journal of Sports Medicine had the highest methodological quality of the five journals, with 61% of the AMSTAR items fulfilled, followed by The Spine Journal (55%), The Journal of Bone and Joint Surgery (50%), Osteoarthritis and Cartilage (48%), and the Journal of Orthopaedic Research (28%), although the difference across journals was not significant (p = 0.128). AMSTAR item fulfillment was unknown in 6% of the items across all journals, with The Journal of Bone and Joint Surgery having the highest percentage of “can’t answer” ratings. The total numbers of each rating (“yes,” “no,” “can’t answer,” and “not applicable”) across all journals for each AMSTAR item are listed in Table VI. The AMSTAR items with the poorest compliance by far were those on whether an a priori design was provided and whether the likelihood of publication bias was assessed. Only 13% of reviews had an a priori design, and only 9% of reviews assessed publication bias. Additionally, only 42% of reviews used the scientific quality of the included studies to appropriately formulate conclusions, and only 47% of reviews clearly stated whether status of publication was used as an inclusion criterion.
TABLE V - Fulfillment of AMSTAR Items by Journal*

Journal (no.)† | “Yes” Response | “No” Response | “Can’t Answer” Response | “Not Applicable” Response
The American Journal of Sports Medicine (n = 4) | 0.61 | 0.32 | 0.03 | 0.04
Osteoarthritis and Cartilage (n = 22) | 0.48 | 0.39 | 0.07 | 0.07
The Journal of Bone and Joint Surgery (n = 23) | 0.50 | 0.23 | 0.10 | 0.04
Journal of Orthopaedic Research (n = 1) | 0.28 | 0.64 | 0.09 | 0.00
The Spine Journal (n = 26) | 0.55 | 0.33 | 0.04 | 0.09
Totals | 0.54 | 0.34 | 0.06 | 0.06

*All values are proportions.
†no. = the total number of systematic reviews included.
TABLE VI - Distribution of Individual Questions in AMSTAR for All Included Journals

AMSTAR | “Yes” Responses (no. [%]) | “No” Responses (no. [%]) | “Not Applicable” Responses (no. [%]) | “Can’t Answer” Responses (no. [%])
Was an a priori design provided? | 10 [13%] | 63 [83%] | 0 [0%] | 3 [4%]
Was there duplicate study selection and data extraction? | 39 [51%] | 5 [7%] | 0 [0%] | 32 [42%]
Was a comprehensive literature search performed? | 60 [79%] | 13 [17%] | 0 [0%] | 3 [4%]
Was the status of publication used as an inclusion criterion? | 36 [47%] | 34 [45%] | 1 [1%] | 5 [7%]
Was a list of studies (included and excluded) provided? | 65 [86%] | 11 [14%] | 0 [0%] | 0 [0%]
Were the characteristics of the included studies provided? | 66 [87%] | 9 [12%] | 1 [1%] | 0 [0%]
Was the scientific quality of the included studies assessed and documented? | 52 [68%] | 24 [32%] | 0 [0%] | 0 [0%]
Was the scientific quality of the included studies used appropriately in formulating conclusions? | 32 [42%] | 44 [58%] | 0 [0%] | 0 [0%]
Were the methods used to combine the findings of studies appropriate? | 22 [29%] | 4 [5%] | 45 [59%] | 5 [7%]
Was the likelihood of publication bias assessed? | 7 [9%] | 69 [91%] | 0 [0%] | 0 [0%]
Was the conflict of interest stated? | 65 [86%] | 11 [14%] | 0 [0%] | 0 [0%]
Regression Analyses
For the PRISMA statement, the only variable that was significant for predicting the reporting quality of a review (i.e., the number of fulfilled items or “yes” responses) was the actual reporting of the funding source. When the funding source was reported in a systematic review, the quality of that review was, on average, higher than that of a paper not noting a funding source. The other factors were not significant in predicting quality (Table VII).
TABLE VII - Linear Regression Results for the Outcome Variable of Quality of Reporting with PRISMA Items

Variable | Coefficient | P Value | 95% Confidence Interval
Funding source | 5.08 | 0.00 | 3.22 to 6.95
Year of publication | 0.10 | 0.79 | −0.64 to 0.84
Impact factor | 0.05 | 0.94 | −1.32 to 1.43
The linear regression for AMSTAR quality prediction is reported in Table VIII. Similar to the PRISMA statement, the only factor that was significant was the actual reporting of the funding source. Papers reporting a funding source were, on average, of higher quality than those not describing a funding source. In the analysis of variance, we found that the total of AMSTAR and PRISMA items that were met was significantly different between funding categories (p < 0.05). Summary statistics for all funding categories appear in Table IX.
TABLE VIII - Linear Regression Results for the Outcome Variable of Total Number of AMSTAR Items Being Met

Variable | Coefficient | P Value | 95% Confidence Interval
Funding source | 1.36 | 0.01 | 0.43 to 2.29
Year of publication | 0.09 | 0.64 | −0.28 to 0.46
Impact factor | −0.42 | 0.23 | −1.11 to 0.27
TABLE IX - Summary Statistics for Categories of Funding

Funding Source | Mean PRISMA Items Met (Range) | Mean AMSTAR Items Met (Range)
Public | 19.6 (11-25) | 6.5 (3-9)
Private | 22.5 (20-27) | 7.3 (5-10)
Mixed public and private | 22.0 (20-24) | 9.0 (8-10)
Unknown | 14.9 (8-21) | 5.1 (1-9)
None | 19.8 (14-25) | 5.7 (2-8)
Discussion
We collected all systematic reviews and meta-analyses published from 2006 to 2010 in the five orthopaedic journals with the highest impact factors, and we assessed their reporting and methodological quality. Generally, these studies were of poorer quality than would be expected from the top orthopaedic journals. Furthermore, we did not find that the quality of these reviews improved from 2006 to 2010. On average, only two-thirds of reporting quality items were fulfilled in the top orthopaedic journals. Reporting quality can be easily and quickly improved by referring to accepted reporting tools such as PRISMA or the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) statement when designing and reporting reviews12. In fact, only three of the included journals listed the PRISMA statement in their instructions to authors, which could partly explain the poor reporting overall and the general lack of improvement in reporting.
The methodological quality of the included papers was even poorer than the reporting quality. On average, only 54% of suggested methodological components were completed by the included reviews; however, some of this was a result of under-reporting or a lack of information, not inadequate methods. However, as with reporting quality, much methodological guidance is available to authors of systematic reviews and meta-analyses, including guidelines like AMSTAR and accepted textbooks and handbooks on the topic13-17.
Although similar to that in other areas of medicine3-6, the overall quality of methods and reporting in systematic reviews published in the top orthopaedic journals is less than optimal. In particular, the systematic reviews published in these five journals reported the following items less than 50% of the time: whether there was a protocol, how the synthesis of the included studies was performed and what its results were, the risk-of-bias assessment of the included studies, the details of any additional analyses, the limitations of the study, and the funding source (Table IV). All of this information is important to include so that the reader can accurately assess the methods and the results of the review; in particular, not reporting a risk-of-bias assessment, also called a methodological quality assessment, is a major drawback of these studies. In terms of methods, the included systematic reviews performed the following important steps less than 50% of the time: providing an a priori study design, assessing risk of bias, using appropriate methods to combine studies, and assessing publication bias (Table VI). The failure to assess publication bias is an especially serious drawback. Readers must be able to tell whether the results of a systematic review are influenced by small-study effects, such as the non-publication of studies with small sample sizes or large variances, or whether the absence of such studies varies with their statistical significance; both can bias the pooled effect estimate in a meta-analysis. The latter can be detected with use of contour-enhanced funnel plots18. There is much guidance in the literature about these important components of systematic review methods18-20, and it is recommended that an assessment of publication bias be done in every systematic review.
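One widely used quantitative complement to funnel-plot inspection is Egger's regression test for asymmetry20. The sketch below is an illustration only (not a method used by the reviewed studies): it regresses each study's standardized effect on its precision, where a non-zero intercept suggests funnel-plot asymmetry consistent with small-study effects.

```python
# Illustrative sketch of Egger's regression test for funnel-plot asymmetry.
# The effect sizes and standard errors would come from the individual
# studies pooled in a meta-analysis; the data in any usage are invented.
import numpy as np

def egger_test(effects, std_errors):
    """Regress standardized effect (effect/SE) on precision (1/SE).

    Returns (intercept, slope). An intercept far from zero suggests
    asymmetry, which may indicate publication bias or other
    small-study effects.
    """
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = effects / std_errors          # standardized effects
    precision = 1.0 / std_errors      # precision of each study
    X = np.column_stack([np.ones_like(precision), precision])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope
```

A full implementation would also report a significance test on the intercept; only the point estimates are sketched here.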
These findings are important for two main reasons. First, the information in these systematic reviews could be biased and therefore misleading. Second, with an increased focus on evidence-informed medicine, particularly in orthopaedics, and the high regard of systematic reviews in the evidence hierarchy, the quality of these important studies must be improved in order to advance orthopaedic care21,22. These findings are similar to others regarding the quality of systematic reviews in the orthopaedic literature. One study reported that, overall, only 15% of reviews were considered rigorous23. In that study, the researchers collected 110 reviews from fifteen orthopaedic journals during the year 2000. More specifically, in another study, only 29% of the reviews concerning joint arthroplasty were considered high quality on the PRISMA checklist24; the study identified seventy-seven reviews from 1993 to 2009. Although we did not rate each individual study on a scale of quality in the present study, our results suggest that the quality is similar to that found in previous reports.
This study has two strengths. First, the reviews that were analyzed were published in the top orthopaedic journals according to impact factor. In contrast to previous findings25, our analyses did not reveal that impact factor predicted quality. This finding could have arisen from the fact that the impact factors of the included journals were very close to one another; thus, the variability may not have been sufficient to detect a difference. However, it is also possible that impact factor is an imperfect predictor of the quality of systematic reviews published in the top-cited journals. The impact factor is a measure of the frequency with which articles from a particular journal are cited in a particular year, and this can be artificially inflated by self-citing. Thus, there is much debate concerning how indicative the impact factor is of the quality of systematic reviews or other types of studies. Several other journal quality metrics exist, such as the Eigenfactor Score and the Article Influence Score26. The Eigenfactor Score is a measure of the overall value provided by all articles published in a particular journal in a given year, calculated with use of an algorithm that includes ISI Thomson Reuters data and ignores self-citations26. The Article Influence Score is a measure of a journal’s influence based on a per-article citation calculation, which is determined by dividing the Eigenfactor Score by the percent of all articles recorded in the Journal Citation Reports26. It is possible that these two measures are better indicators of systematic review quality, but this hypothesis remains to be tested. The second strength of the present study is that the guidelines used to assess both reporting and methodological quality are well validated and accepted27.
Our study also has limitations. The first limitation is the possibility of higher quality studies being present in journals that were not considered in this paper. Another potential limitation is that the guidelines used to rate the reporting and methodological quality might not contain items specific to systematic reviews and meta-analyses in orthopaedics. Finally, because two journals (the Journal of Orthopaedic Research and The American Journal of Sports Medicine) did not publish a large number of systematic reviews, this study does not represent the quality of papers in these journals beyond the years sampled.
Overall, we determined that the reporting and methodological quality of systematic reviews and meta-analyses in the top five orthopaedic journals is less than optimal. We recommend that all journals endorse relevant guidelines for reporting systematic reviews (e.g., PRISMA, MOOSE) and performing systematic reviews (e.g., AMSTAR) and that researchers include coinvestigators with expertise in systematic review methods when planning, implementing, and reporting these important studies. Paying close attention to relevant guidelines when planning, implementing, reporting, and reviewing systematic reviews and meta-analyses will improve the quality of these important investigations and increase their applicability for clinical decision-making.
References
1. Dijkman BG, Abouali JA, Kooistra BW, Conter HJ, Poolman RW, Kulkarni AV, Tornetta P 3rd, Bhandari M . Twenty years of meta-analyses in orthopaedic surgery: has quality kept up with quantity? J Bone Joint Surg Am. 2010 Jan;92(1):48-57.
2. Bhandari M, Morrow F, Kulkarni AV, Tornetta P 3rd . Meta-analyses in orthopaedic surgery. A systematic review of their methodologies. J Bone Joint Surg Am. 2001 Jan;83(1):15-24.
3. Lawson ML, Pham B, Klassen TP, Moher D . Systematic reviews involving complementary and alternative medicine interventions had higher quality of reporting than conventional medicine reviews. J Clin Epidemiol. 2005 Aug;58(8):777-84.
4. Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, Ramsay T, Bai A, Shukla VK, Grimshaw JM . External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One. 2007;2(12):e1350.
5. Rudmik LR, Walen SG, Dixon E, Dort J . Evaluation of meta-analyses in the otolaryngological literature. Otolaryngol Head Neck Surg. 2008 Aug;139(2):187-94.
6. Junhua Z, Hongcai S, Xiumei G, Boli Z, Yaozu X, Hongbo C, Ming R . Methodology and reporting quality of systematic review/meta-analysis of traditional Chinese medicine. J Altern Complement Med. 2007 Oct;13(8):797-805.
7. Dixon E, Hameed M, Sutherland F, Cook DJ, Doig C . Evaluating meta-analyses in the general surgical literature: a critical appraisal. Ann Surg. 2005 Mar;241(3):450-9.
8. Moher D, Altman DG, Liberati A, Tetzlaff J . PRISMA statement. Epidemiology. 2011 Jan;22(1):128, author reply 128.
9. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM . Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
10. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M . AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009 Oct;62(10):1013-20. Epub 2009 Feb 20.
11. Kang D, Wu Y, Hu D, Hong Q, Wang J, Zhang X . Reliability and external validity of AMSTAR in assessing quality of TCM systematic reviews. Evid Based Complement Alternat Med. 2012;2012:732195.
12. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe TA, Thacker SB . Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000 Apr 19;283(15):2008-12.
13. Higgins JPT, Green S , editors. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. www.cochrane-handbook.org. Accessed 2011 Feb.
14. Centre for Reviews and Dissemination. CRD’s guidance for undertaking reviews in health care. York: University of York; 2009.
15. Littell JH, Corcoran J, Pillai VK . Systematic reviews and meta-analysis. Oxford: Oxford University Press; 2008.
16. Sutton AJ, Abrams KA, Jones DR, Sheldon TA, Song F . Methods for meta-analysis in medical research. New York: John Wiley; 2000.
17. Whitehead A. Meta-analysis of controlled clinical trials. New York: John Wiley & Sons; 2002.
18. Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L . Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. J Clin Epidemiol. 2008 Oct;61(10):991-6.
19. Moreno SG, Sutton AJ, Turner EH, Abrams KR, Cooper NJ, Palmer TM, Ades AE . Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. BMJ. 2009;339:b2981.
20. Egger M, Davey Smith G, Schneider M, Minder C . Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997 Sep 13;315(7109):629-34.
21. Celermajer DS. Evidence-based medicine: how good is the evidence? Med J Aust. 2001 Mar 19;174(6):293-5.
22. Ferlie E, Wood M, Fitzgerald L . Some limits to evidence-based medicine: a case study from elective orthopaedics. Qual Health Care. 1999 Jun;8(2):99-107.
23. Bhandari M, Montori VM, Devereaux PJ, Wilczynski NL, Morgan D, Haynes RB ; Hedges Team. Doubling the impact: publication of systematic review articles in orthopaedic journals. J Bone Joint Surg Am. 2004 May;86(5):1012-6.
24. Sharma R, Vannabouathong C, Bains S, Marshall A, MacDonald SJ, Parvizi J, Bhandari M . Meta-analyses in joint arthroplasty: a review of quantity, quality, and impact. J Bone Joint Surg Am. 2011 Dec 21;93(24):2304-9.
25. Montori VM, Wilczynski NL, Morgan D, Haynes RB ; Hedges Team. Systematic reviews: a cross-sectional study of location and citation counts. BMC Med. 2003 Nov 24;1:2.
26. Brown T. Journal quality metrics: options to consider other than impact factors. Am J Occup Ther. 2011 May-Jun;65(3):346-50.
27. Phi L, Ajaj R, Ramchandani MH, Brant XM, Oluwadara O, Polinovsky O, Moradi D, Barkhordarian A, Sriphanlop P, Ong M, Giroux A, Lee J, Siddiqui M, Ghodousi N, Chiappelli F . Expanding the grading of recommendations assessment, development, and evaluation (Ex-GRADE) for evidence-based clinical recommendations: Validation study. Open Dent J. 2012;6:31-40.