REVIEW CRITERIA: Method

Data Analysis and Statistics

McGaghie, William C.; Crandall, Sonia*


REVIEW CRITERIA

  • Data-analysis procedures are clearly described and sufficiently detailed to permit the study to be replicated.
  • Data-analysis procedures conform to the research design; hypotheses, models, or theory drives the data analyses.
  • The assumptions underlying the use of statistics are fulfilled by the data, such as measurement properties of the data and normality of distributions.
  • Statistical tests are appropriate (optimal).
  • If the statistical analysis involves multiple tests or comparisons, the significance level is properly adjusted for chance outcomes.
  • Power issues are considered in statistical studies with small sample sizes.
  • In qualitative research, which relies on words instead of numbers, the basic requirements of data reliability, validity, trustworthiness, and absence of bias are fulfilled.

ISSUES AND EXAMPLES RELATED TO THE CRITERIA

Data analysis along the “seamless web” of quantitative and qualitative research (see “Research Design,” earlier in this chapter) must be performed and reported according to scholarly conventions. The conventions apply to statistical treatment of data expressed as numbers and to qualitative data expressed as observational records, field notes, interview reports, abstracts from hospital charts, and other archival records. Data analysis must “get it right” to ensure that the research progression of design, methods (including data analysis), results, and conclusions and interpretation is orderly and integrated. Amplification of the seven data-analysis and statistical review criteria in this section underscores this assertion. The next article, entitled “Reporting of Statistical Analyses,” extends these ideas.

Quantitative

Statistical, or quantitative, analysis of research data is not the keystone of science. It does, however, appear in a large proportion of the research papers submitted to medical education journals. Reviewers expect a clear and complete description of research samples and data-analysis procedures in such papers.

Statistical analysis methods such as t-tests or analysis of variance (ANOVA) used to assess group differences, correlation coefficients used to assess associations among measured variables within intact groups, or indexes of effect such as odds ratios and relative risk in disease studies flow directly from the investigator's research design. (Riegelman and Hirsch1 give specific examples.) Designs focused on differences between experimental and control groups should use statistics that feature group contrasts. Designs focused on within-group associations should report results as statistical correlations in one or more of their many forms. Other data-analytic methods include meta-analysis,2 i.e., quantitative integration of research data from independent investigations of the same research problem; procedures used to reduce large, complex data sets into more simplified structures, as in factor analysis or cluster analysis; and techniques to demonstrate data properties empirically, as in reliability analyses of achievement-test or attitude-scale data, multidimensional scaling, and other procedures. However, in all cases research design dictates statistical analysis of research data. Statistical analyses, when they are used, must be driven by the hypotheses, models, or theories that form the foundation of the study being judged.
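
To make the design–analysis link concrete, here is a minimal sketch contrasting the two cases: a group-difference statistic for a between-groups design and a correlation for a within-group association. The data and scenario are simulated and invented purely for illustration.

```python
# A minimal sketch (hypothetical data) of matching the statistic to the
# design: a between-group contrast calls for a group-difference test,
# while a within-group association calls for a correlation coefficient.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Design 1: experimental vs. control groups -> group-contrast statistic.
control = rng.normal(70, 10, size=30)        # e.g., exam scores, control group
experimental = rng.normal(75, 10, size=30)   # e.g., exam scores, intervention group
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"Independent t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Design 2: associations within one intact group -> correlation statistic.
study_hours = rng.normal(20, 5, size=30)
exam_score = 60 + 0.8 * study_hours + rng.normal(0, 5, size=30)
r, p_corr = stats.pearsonr(study_hours, exam_score)
print(f"Pearson correlation: r = {r:.2f}, p = {p_corr:.3f}")
```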

Statistical analysis of research data often rests on assumptions about the data, such as their measurement properties, the normality of their distributions, and other features. These assumptions must be satisfied for the data analysis to be legitimate. By contrast, nonparametric, or “distribution-free,” statistical methods can be used to evaluate group differences or the correlations among variables when research measurements are in the form of categories (female-male, working-retired) or ranks (tumor stages, degrees of edema). Reviewers need to look for signs that the statistical analysis methods were based on sound assumptions about the characteristics of the data and the research design.
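
For example, the following sketch (simulated, deliberately skewed data, invented for illustration) tests the normality assumption and falls back on distribution-free alternatives when it fails.

```python
# A minimal sketch (hypothetical data) of checking a distributional
# assumption and falling back to a nonparametric ("distribution-free")
# alternative when it is not met.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.exponential(scale=2.0, size=25)  # skewed, non-normal data
group_b = rng.exponential(scale=3.0, size=25)

# Shapiro-Wilk test of the normality assumption underlying the t-test.
_, p_norm = stats.shapiro(group_a)
if p_norm < 0.05:
    # Normality is doubtful: use the rank-based Mann-Whitney U test.
    u_stat, p_value = stats.mannwhitneyu(group_a, group_b)
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
else:
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# For ordinal measurements (e.g., ranked tumor stages vs. degrees of
# edema), Spearman's rank correlation is the distribution-free
# counterpart of Pearson's r.
tumor_stage = [1, 2, 2, 3, 1, 4, 3, 2, 4, 1]
edema_grade = [1, 1, 2, 3, 2, 3, 3, 2, 4, 1]
rho, p_rho = stats.spearmanr(tumor_stage, edema_grade)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```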

A reviewer must be satisfied that the statistical tests presented in a research manuscript have been used and reported properly. Signs of flawed data analysis include inappropriate or suboptimal analyses (e.g., wrong statistics) and post hoc analyses that were not specified before the data were collected.

Statistical analysis of data sets that is done without attention to an explicit research design or an a priori hypothesis can quickly become an exercise in “data dredging.” The availability of powerful computers, user-friendly statistical software, and large institutional data sets increases the likelihood of such mindless data analyses. Being able to perform hundreds of statistical tests in seconds is not a proxy for thoughtful attention to research design and focused data analysis. Reviewers should also be aware that, for example, when 20 statistical comparisons are made at the conventional .05 significance level, one test is likely to achieve “significance” solely by chance. Multiple statistical tests or comparisons call for adjustment of significance levels (p-values) using the Bonferroni or a similar procedure to ensure accurate data interpretation.3
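
As an illustration, the sketch below (simulated data; the scenario and numbers are invented) shows how often “significant” results arise from 20 comparisons of identical populations, and how a Bonferroni adjustment guards against them.

```python
# A minimal sketch of why multiple comparisons demand adjustment: across
# 20 tests of true null hypotheses at alpha = .05, roughly one
# "significant" result is expected by chance alone. The Bonferroni
# procedure divides alpha by the number of tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_tests, alpha = 20, 0.05

# 20 comparisons in which the null hypothesis is true in every case.
p_values = [
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(n_tests)
]

# Family-wise chance of at least one false positive: 1 - (1 - .05)^20 ~= .64.
print(f"Chance of >=1 spurious result: {1 - (1 - alpha) ** n_tests:.2f}")
print(f"Unadjusted 'significant' tests: {sum(p < alpha for p in p_values)}")

# Bonferroni adjustment: compare each p-value against alpha / n_tests.
bonferroni_alpha = alpha / n_tests
print(f"Significant after Bonferroni: {sum(p < bonferroni_alpha for p in p_values)}")
```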

Research studies that involve small numbers of participants often lack enough statistical power to demonstrate significant results.4 This shortfall can occur even when a larger study would show a significant effect for an experimental intervention or for a correlation among measured variables. Whenever a reviewer encounters a “negative” study, the power question needs to be posed, and low power ruled out as the reason for the nonsignificant result.
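
A reviewer can often check the power question directly. The sketch below, a minimal example using the statsmodels power module and an assumed medium effect size (Cohen's d = 0.5; the numbers are illustrative, not from any particular study), shows both the sample size a two-group study would need and the power a small study actually achieves.

```python
# A minimal sketch of posing the power question for a "negative"
# two-group study: how many participants per group are needed to detect
# a medium effect (Cohen's d = 0.5) with 80% power at alpha = .05?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Required sample size per group (solves for the omitted parameter, nobs1).
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_required:.0f}")  # roughly 64

# Conversely, the power actually achieved by a small study of 15 per group.
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
print(f"Power with n = 15 per group: {achieved:.2f}")  # well below .80
```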

Qualitative

Analysis of qualitative data, which involves manipulation of words and symbols rather than numbers, is also governed by rules and rigor. Qualitative investigators are expected to use established, conventional approaches to ensure data quality and accurate analysis. Qualitative flaws include (but are not limited to) inattention to data triangulation (i.e., cross-checking information sources); insufficient description (lack of “thick description”) of research observations; failure to use recursive (repetitive) data analysis and interpretation; lack of independent data verification by colleagues (peer debriefing); lack of independent data verification by stakeholders (member checking); and failure to state a priori the investigator's personal orientation (e.g., homeopathy) in the written report.
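
Although most of these safeguards are procedural, one aspect of qualitative data reliability, the consistency of coding, is often checked quantitatively through inter-coder agreement. Below is a minimal sketch using scikit-learn's cohen_kappa_score; the thematic categories and code assignments are hypothetical, invented only to show the mechanics.

```python
# A minimal sketch (hypothetical codes) of one common check on qualitative
# data reliability: Cohen's kappa for agreement between two independent
# coders who assigned the same interview excerpts to thematic categories.
from sklearn.metrics import cohen_kappa_score

# Thematic codes assigned to ten excerpts by each coder.
coder_1 = ["mentoring", "workload", "mentoring", "feedback", "workload",
           "feedback", "mentoring", "workload", "feedback", "mentoring"]
coder_2 = ["mentoring", "workload", "feedback", "feedback", "workload",
           "feedback", "mentoring", "mentoring", "feedback", "mentoring"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement
```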

Qualitative data analysis has a deep and longstanding research legacy in medical education and medical care. Well-known and influential examples are Boys in White, the classic study of student culture in medical school, published by Howard Becker and colleagues5; psychiatrist Robert Coles' five-volume study, Children of Crisis6; the classic participant observation study by clinicians of patient culture on psychiatric wards published in Science7; and Terry Mizrahi's observational study of the culture of residents on the wards, Getting Rid of Patients.8 Reviewers should be informed about the scholarly contribution of qualitative research in medical education. Prominent resources on qualitative research9–13 provide research insights and methodologic details that would be useful for the review of a complex or unusual study.

References

1. Riegelman RK, Hirsch RP. Studying a Study and Testing a Test: How to Read the Medical Literature. 2nd ed. Boston, MA: Little, Brown, 1989.
2. Wolf FM. Meta-Analysis: Quantitative Methods for Research Synthesis. Sage University Paper Series on Quantitative Applications in the Social Sciences, No. 59. Beverly Hills, CA: Sage, 1986.
3. Dawson B, Trapp RG. Basic and Clinical Biostatistics. 3rd ed. New York: Lange Medical Books/McGraw-Hill, 2001.
4. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Rev. ed. New York: Academic Press, 1977.
5. Becker HS, Geer B, Hughes EC, Strauss A. Boys in White: Student Culture in Medical School. Chicago, IL: University of Chicago Press, 1961.
6. Coles R. Children of Crisis: A Study of Courage and Fear. Vols. 1–5. Boston, MA: Little, Brown, 1967–1977.
7. Rosenhan DL. On being sane in insane places. Science. 1973;179:250–8.
8. Mizrahi T. Getting Rid of Patients: Contradictions in the Socialization of Physicians. New Brunswick, NJ: Rutgers University Press, 1986.
9. Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine, 1967.
10. Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Thousand Oaks, CA: Sage, 1994.
11. Harris IB. Qualitative methods. In: Norman GR, van der Vleuten CPM, Newble D (eds). International Handbook for Research in Medical Education. Dordrecht, The Netherlands: Kluwer, 2001.
12. Giacomini MK, Cook DJ. Users' guides to the medical literature: XXIII. Qualitative research in health care. A. Are the results of the study valid? JAMA. 2000;284:357–62.
13. Giacomini MK, Cook DJ. Users' guides to the medical literature: XXIII. Qualitative research in health care. B. What are the results and how do they help me care for my patients? JAMA. 2000;284:478–82.

RESOURCES

Goetz JP, LeCompte MD. Ethnography and Qualitative Design in Educational Research. Orlando, FL: Academic Press, 1984.
Guba EG, Lincoln YS. Effective Evaluation. San Francisco, CA: Jossey-Bass, 1981.
Fleiss JL. Statistical Methods for Rates and Proportions. 2nd ed. New York: John Wiley & Sons, 1981.
Pagano M, Gauvreau K. Principles of Biostatistics. Belmont, CA: Duxbury Press, 1993.
Patton MQ. Qualitative Evaluation and Research Methods. 2nd ed. Newbury Park, CA: Sage, 1990.
Winer BJ. Statistical Principles in Experimental Design. 2nd ed. New York: McGraw-Hill, 1971.

Section Description

Review Criteria for Research Manuscripts

Joint Task Force of Academic Medicine and the GEA-RIME Committee

© 2001 Association of American Medical Colleges