Discussion and Conclusion

Interpretation

Crandall, Sonia J.; McGaghie, William C.

REVIEW CRITERIA

  • The conclusions are clearly stated; key points stand out.
  • The conclusions follow from the design, methods, and results; justification of conclusions is well articulated.
  • Interpretations of the results are appropriate; the conclusions are accurate (not misleading).
  • The study limitations are discussed.
  • Alternative interpretations for the findings are considered.
  • Statistical differences are distinguished from meaningful differences.
  • Personal perspectives or values related to interpretations are discussed.
  • Practical significance or theoretical implications are discussed; guidance for future studies is offered.

ISSUES AND EXAMPLES RELATED TO THE CRITERIA

Research follows a logical process. It starts with a problem statement and moves through design, methods, and results. Researchers' interpretations and conclusions emerge from these four interconnected stages. Flaws in logic can arise at any of these stages and, if they occur, the author's interpretations of the results will be of little consequence. Flaws in logic can also occur at the interpretation stage. The researcher may have a well-designed study but obscure the true meaning of the data by misreading the findings.1

Reviewers need to have a clear picture of the meaning of research results. They should be satisfied that the evidence is discussed adequately and appears reliable, valid, and trustworthy. They should be convinced that interpretations are justified given the strengths and limitations of the study. In addition, given the architecture, operations, and limitations of the study, reviewers should judge the generalizability and practical significance of its conclusions.

The organization of the Discussion section should match the structure of the Results section in order to present a coherent interpretation of data and methods. Reviewers need to determine how the discussion and conclusions relate to the original problem and research questions. Most important, the conclusions must be clearly stated and justified, and key points must stand out. Broadly, important aspects to consider include whether the conclusions are reasonable given the description of the results; how the study results relate to other research outcomes in the field, including consensus, conflicting, and unexpected findings; how the study outcomes expand the knowledge base in the field and inform future research; and whether limitations in the design, procedures, and analyses of the study are described. Failure to discuss the limitations of the study should be considered a serious flaw.

On a more detailed level, reviewers must evaluate whether the authors distinguish between (1) inferences drawn from the results, which are based on data-analysis procedures, and (2) extrapolations to the conceptual framework used to design the study. This is the difference between formal hypothesis testing and theoretical discussion.

Quantitative Approaches

From the quantitative perspective, when interpreting hypothesis-testing aspects of a study, authors should discuss the meaning of both statistically significant and non-significant results. A statistically significant result, given its p-value and confidence interval, may have no implications for practice.2 Authors should explain whether each hypothesis is confirmed or refuted and whether each agrees or conflicts with previous research. Results or analyses should not be discussed unless they are presented in the Results section.
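To make the distinction between statistical and practical significance concrete, here is a minimal sketch in Python (simulated, hypothetical exam scores; the group names and numbers are illustrative only) showing how a very large sample can render a trivial half-point difference statistically significant while the effect size remains negligible:

```python
# A minimal sketch with simulated, hypothetical data: a tiny difference in
# exam scores reaches p < .05 given large groups, yet the effect size
# (Cohen's d) stays negligible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 20_000  # very large groups make trivial differences "significant"

# Two hypothetical curriculum groups whose true means differ by half a point
group_a = rng.normal(loc=75.0, scale=10.0, size=n)
group_b = rng.normal(loc=75.5, scale=10.0, size=n)

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d: the mean difference scaled by the pooled standard deviation
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p = {p_value:.4g}")           # typically far below .05
print(f"Cohen's d = {cohens_d:.3f}")  # roughly 0.05: a negligible effect
```

A reviewer who sees p < .05 in such a study should still ask whether a difference of this size matters in practice; the effect size, not the p-value, speaks to that question.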

Data may be misrepresented or misinterpreted, but more often errors come from over-interpreting the data from a theoretical perspective. For example, a reviewer may see a statement such as “The sizeable correlation between test scores and ‘depth of processing’ measures clearly demonstrates that the curriculum should be altered to encourage students to process information more deeply.” The curricular implication may be true, but it is not supported by the data. Although the data show that test scores and depth of processing are related, this outcome does not demonstrate the need to change the curriculum. The recommendation to change the curriculum is a value statement based on a judgment about the utility of high test scores and their implications for professional performance. Curricular change does not follow directly from a correlation between test scores and depth-of-processing measures.

The language used in the Discussion needs to be clear and precise. For example, in research based on a correlation design, the Discussion needs to state whether the correlations derive from data collected concurrently or over a span of time.3 Correlations over time suggest a predictive relationship among variables, which may or may not reflect the investigator's intentions. The language used to discuss such an outcome must be unambiguous.
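As an illustration of this distinction, the sketch below (Python, with simulated data; the variable names and coefficients are hypothetical) computes the same statistic, Pearson's r, in two situations that call for different language: an association between measures collected on the same occasion, and a relationship between an earlier measure and a later outcome:

```python
# A minimal sketch with simulated, hypothetical data: the same statistic,
# Pearson's r, supports different language depending on when the
# variables were measured.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 200

depth_year1 = rng.normal(size=n)                       # measured in year 1
scores_year1 = 0.6 * depth_year1 + rng.normal(size=n)  # same occasion
scores_year2 = 0.5 * depth_year1 + rng.normal(size=n)  # one year later

r_concurrent, p_conc = stats.pearsonr(depth_year1, scores_year1)
r_predictive, p_pred = stats.pearsonr(depth_year1, scores_year2)

# Concurrent data support only associational language
print(f"concurrent r = {r_concurrent:.2f}")
# Time-lagged data support predictive language
print(f"predictive r = {r_predictive:.2f}")
```

Only the second correlation supports predictive language such as “year-1 depth of processing predicts year-2 test scores,” and even then it describes prediction, not causation.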

Qualitative Approaches

Qualitative researchers must convince the reviewer that their data are trustworthy. To describe the trustworthiness of the collected data, the author may use criteria such as credibility (internal validity) and transferability (external validity) and explain how each was addressed.4 (See Giacomini and Cook, for example, for a thorough explanation of assessing validity in qualitative health care research.5) Credibility may be established through data triangulation, member checking, and peer debriefing.4,6 Triangulation compares multiple data sources, such as a content analysis of curriculum documents, transcribed interviews with students and faculty, patient-satisfaction questionnaires, and observations of standardized-patient examinations. Member checking is a process of “testing” interpretations and conclusions with the individuals from whom the data were collected (e.g., interview participants).4 Peer debriefing is an “external check on the inquiry process” in which disinterested peers parallel the analytic procedures of the researcher to confirm or expand interpretations and conclusions.4 Transferability implies that research findings can be used in other educational contexts (generalizability).6,7 The researcher cannot, however, establish external validity in the same way as in quantitative research.4 The reviewer must judge whether the conclusions transfer to other contexts.

Biases

Both qualitative and quantitative data are subject to bias. When judging qualitative research, reviewers should carefully consider the meaning and impact of the author's personal perspectives and values. These potential biases should be clearly explained because of their likely influence on the analysis and presentation of outcomes. Those biases include the influence of the researcher on the study setting, the selective presentation and interpretation of results, and the thoroughness and integrity of the interpretations. Peshkin's work is a good example of announcing one's subjectivity and its potential influence on the research process.8 He and other qualitative researchers acknowledge their responsibility to explain how their values may affect research outcomes. Reviewers of qualitative research need to be convinced that the influence of subjectivity has been addressed.6

References

1. Day RA. How to Write and Publish a Scientific Paper. 5th ed. Phoenix, AZ: Oryx Press, 1998.
2. Rosenfeld RM. The seven habits of highly effective data users [editorial]. Otolaryngol Head Neck Surg. 1998;118:144–58.
3. Fraenkel JR, Wallen NE. How to Design and Evaluate Research in Education. 4th ed. Boston, MA: McGraw-Hill Higher Education, 2000.
4. Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage, 1985 [chapter 11].
5. Giacomini MK, Cook DJ. Users' guide to the medical literature. XXIII. Qualitative research in health care. A. Are the results of the study valid? JAMA. 2000;284:357–62.
6. Grbich C. Qualitative Research in Health. London, U.K.: Sage, 1999.
7. Erlandson DA, Harris EL, Skipper BL, Allen SD. Doing Naturalistic Inquiry: A Guide to Methods. Newbury Park, CA: Sage, 1993.
8. Peshkin A. The Color of Strangers, the Color of Friends. Chicago, IL: University of Chicago Press, 1991.

RESOURCES

Day RA. How to Write and Publish a Scientific Paper. 5th ed. Phoenix, AZ: Oryx Press, 1998 [chapter 10].
Erlandson DA, Harris EL, Skipper BL, Allen SD. Doing Naturalistic Inquiry: A Guide to Methods. Newbury Park, CA: Sage, 1993.
Fraenkel JR, Wallen NE. How to Design and Evaluate Research in Education. 4th ed. Boston, MA: McGraw-Hill Higher Education, 2000 [chapters 19, 20].
Gehlbach SH. Interpreting the Medical Literature. 3rd ed. New York: McGraw-Hill, 1992.
Guiding Principles for Mathematics and Science Education Research Methods: Report of a Workshop. Draft. Workshop on Education Research Methods, Division of Research, Evaluation and Communication, National Science Foundation, November 19–20, 1998, Ballston, VA. Symposium presented at the meeting of the American Education Research Association, April 21, 1999, Montreal, Quebec, Canada. 〈http://bear.berkeley.edu/publications/report11.html〉. Accessed 5/1/01.
Huth EJ. Writing and Publishing in Medicine. 3rd ed. Baltimore, MD: Williams & Wilkins, 1999.
Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage, 1985 [chapter 11].
Miller WL, Crabtree BF. Clinical research. In: Denzin NK, Lincoln YS (eds). Handbook of Qualitative Research. Thousand Oaks, CA: Sage, 1994:340–53.
Patton MQ. Qualitative Evaluation and Research Methods. 2nd ed. Newbury Park, CA: Sage, 1990.
Peshkin A. The goodness of qualitative research. Educ Res. 1993;22:23–9.
Riegelman RK, Hirsch RP. Studying a Study and Testing a Test: How to Read the Health Science Literature. 3rd ed. Boston, MA: Little, Brown, 1996.
Teaching/Learning Resources for Evidence Based Practice. Middlesex University, London, U.K. 〈http://www.mdx.ac.uk/www/rctsh/ebp/main.htm〉. Accessed 5/1/01.
Users' Guides to Evidence-Based Practice. Centres for Health Evidence [Canada]. 〈http://www.cche.net/principles/content_all.asp〉. Accessed 5/1/01.

Section Description

Review Criteria for Research Manuscripts

Joint Task Force of Academic Medicine and the GEA-RIME Committee

© 2001 Association of American Medical Colleges