While most research published in orthopaedic surgery journals derives from one of several familiar study designs—case series, historically controlled studies, randomized trials, and variations on the systematic review theme—much of the fun (and a considerable portion of the benefit) we get from reading journals happens when clinician scientists tackle old, resistant problems in new ways.
Clinical Orthopaedics and Related Research® is on the vanguard of publishing innovative approaches in musculoskeletal science. Indeed, CORR® leads the way both by establishing editorial standards that ensure the consistent, clear reporting of a wide range of newer study designs [7, 11, 13, 19, 20] and by providing tools for reviewers and readers [2, 17] to help them get the most out of the discoveries that we publish.
We also take special pleasure in developing new article types that help surgeons do their jobs better; the most recent example is our CORR Synthesis series [14-16], a reboot of the review-article format, but one that delivers robust approaches to screening, selection, and presentation that mitigate the sources of bias that otherwise are so deeply embedded in articles of this type.
CORR’s enthusiasm for two seemingly dissimilar article types, machine learning models and qualitative research, is just another example of our journal’s openness to new approaches to solving problems. We believe these article types deserve the enthusiasm of readers, as well.
Opposites Attract: Machine Learning and Qualitative Research
On the face of it, it’s hard to imagine two more dissimilar research approaches than machine learning and qualitative research. But as we’ve said before, to a large degree, research is research, and so as CORR’s editors assess papers that we receive, we have been and will continue to hold papers that use these disparate methods to a set of common standards (Table 1).
Table 1. Common standards CORR® senior editors use to assess submitted papers

Do the findings support specific recommendations that can help us take better care of patients, practice more efficiently, or make better public policy?
To help the curious reader get started, CORR has published a helpful how-to on machine learning, as well as an interview introducing high-quality qualitative research to this audience; the study covered in that interview is itself a don’t-miss. For the reader looking to go a bit deeper, the JAMA “Users’ Guide” on machine learning is thoughtful and well written, as is their older but still-relevant piece in that same series on qualitative research; we also recommend the PROBAST (Prediction model Risk of Bias Assessment Tool) for those who want more.
Below, we provide a brief overview of each article type, point to why we believe each has an important role to play, and detail how CORR’s editors plan to evaluate the ones we receive.
What is Machine Learning and How is it Useful?
Broadly speaking, machine learning studies (and related approaches) seek to harness computer algorithms to produce models that diagnose or prognosticate based on large numbers of variables and vast quantities of data. Typically, these systems begin with few assumptions and an enormous list of potential predictor variables, in the hope of identifying associations that humans, weighed down by our preconceived notions, might otherwise miss. The algorithm first derives and refines predictive functions in what is called a training dataset; its performance is then assessed in a separate dataset, called a validation set, to see whether the identified associations prove robust. With still more data, these systems can continue to self-educate and improve their performance.
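The train-then-validate sequence described above can be sketched in a few lines of Python. This is a toy illustration under invented assumptions (a single made-up predictor and a simple nearest-centroid classifier standing in for a real machine learning model); it is not code from any study cited here, but it shows why performance must be judged on data the model has not seen:

```python
# Illustrative sketch only: fit a toy "model" on a training set, then score it
# on a held-out validation set. All data and names here are hypothetical.
import random

random.seed(0)

def make_patient(improves):
    # One invented predictor: patients who improve cluster around a higher
    # baseline value than those who do not.
    base = 70.0 if improves else 40.0
    return (base + random.gauss(0, 10), improves)

data = [make_patient(i % 2 == 0) for i in range(200)]
random.shuffle(data)
train, validation = data[:150], data[150:]

def fit(training_set):
    # "Training": learn the mean predictor value (centroid) for each class.
    by_class = {True: [], False: []}
    for x, y in training_set:
        by_class[y].append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def predict(centroids, x):
    # Assign the class whose learned centroid lies nearest the new value.
    return min(centroids, key=lambda y: abs(x - centroids[y]))

model = fit(train)

def accuracy(dataset):
    return sum(predict(model, x) == y for x, y in dataset) / len(dataset)

# The honest performance estimate is the one on held-out validation data,
# not the one on the data the model was fit to.
print(f"training accuracy:   {accuracy(train):.2f}")
print(f"validation accuracy: {accuracy(validation):.2f}")
```

Real prediction-model studies go further still, for example by validating the model in data from a different institution or time period (external validation) before recommending it for clinical use.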
We believe machine learning will help orthopaedic surgeons take better care of patients. In recent years, we’ve seen that even expert surgeons are no more likely than chance to anticipate which patients will improve meaningfully following knee replacement; by contrast, a computer algorithm designed to do just that did pretty well, and it’s still learning.
Machine learning and its relatives in the broader discipline of artificial intelligence may also help us anticipate prognosis in patients with malignancies, and even make diagnoses using rich sources of visual data, like radiographic images and histopathology slides. In addition, unlike humans looking at many images or slides, these machines don’t make errors associated with carelessness or fatigue; the machine doesn’t tire.
While artificial intelligence once was derided as “the study of how to make computers do things which, at the moment, people do better”, this may no longer be the case.
What is Qualitative Research and How is it Useful?
Most orthopaedic research is descriptive, and some is comparative, but interpretive research—studies that help us to understand why patients feel the way they do, how they form beliefs and (mis-)understandings about their bodies, and which factors inform their decision-making—largely has been relegated to social science journals. Some editors of medical journals even have actively deprioritized this kind of work [12, 24]. We believe this is a missed opportunity for clinicians who want a deeper understanding of why their patients feel as they do. Qualitative research often asks questions that mirror those that patients and clinicians ask every day—versions of “given my situation, what should I do?”—and as such, we see it as an important research tool.
Survey studies can tell us those things, too, but in survey studies, the research team can only find answers to questions that they ask; in qualitative research, an open-ended interview approach allows patients to tell us what matters to them. By interpreting patients’ experiences, qualitative researchers can produce a rich, nuanced perspective, and support specific recommendations on a variety of clinically important topics.
In contrast to most kinds of research, in which the researcher’s participation is seen as a source of bias, the idea of subjectivity in qualitative work is seen as a feature, not a bug—as long as the reader is made aware of how that subjectivity is deployed. Creating teams of researchers with diverse backgrounds (such as surgeons, social scientists, epidemiologists, and trialists) to analyze the data from a variety of perspectives is fundamental to this process. By collecting and analyzing data in parallel, the researchers can test or challenge their emerging interpretations in subsequent interviews, helping to ensure that the resulting findings are grounded in the patient experiences (and don't just reflect the researchers’ preexisting biases). For example, qualitative papers can help readers to understand whether the content of our commonly used outcomes tools is the “right” stuff to focus on, they can plumb patients’ needs to help us determine what study endpoints would be most meaningful to the people we care for, and they can help us to identify barriers to implementation of medical recommendations or to trial enrollment.
At CORR, we are intrigued by qualitative studies that tell coherent stories leading to specific recommendations. While most qualitative studies we’ve seen don’t clear this bar—and so most don’t get published—the ones that do can really change surgeons’ thinking. One marvelous example that we highlighted with a Take 5 interview identified a number of serious misunderstandings in the minds of patients who planned to undergo joint replacement—misunderstandings that pushed these patients to choose major surgical treatment over safer, less-invasive alternatives. By identifying those misconceptions, the study was able to develop a practical roadmap to help surgeons ensure that patients who choose surgery do not do so while laboring under important misapprehensions.
CORR’s Editorial Standards on Qualitative Research
As noted earlier, the standards that we apply to all papers (Table 1) naturally also apply to papers on machine learning and qualitative research.
When screening qualitative research papers, we will apply the easy-to-remember acronym RATS: relevance, appropriateness, transparency, and soundness. In particular, we will ask that these papers:
- Provide a clearly defined research question relevant to the practice of orthopaedic surgery;
- Offer a clear description of, and theoretical justification for, the sampling strategy and data analysis procedures;
- Convince readers that alternative interpretations of the data have been considered;
- Give thoughtful consideration to the researcher’s influence on the findings; and
- Produce and support specific recommendations for surgeons to use in practice.
We’ll also ask our subject-matter experts (CORR’s peer reviewers) to apply tools like the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist in their more-detailed assessments.
CORR’s Editorial Standards on Machine Learning
Most, although not all, machine learning studies involve models intended to improve our ability to make a diagnosis or to refine our prognostic precision; as such, a number of checklists relevant to diagnostic and prognostic studies, and already familiar to authors, reviewers, and many readers, can be helpful. Depending on the study design, CORR’s editors expect to apply reporting standards like TRIPOD (Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) or STARD (Standards for Reporting Diagnostic Accuracy) 2015.
Further, CORR will ask that these papers either:
- Present a finding that an important machine learning or artificial intelligence approach does not work the way we believed or hoped it would; or,
- If the approach under study works well, then provide a viable tool that readers can use (such as a free URL or a commercially available product) that will help them to improve patient care.
Papers that simply show that a prediction or diagnosis can be made using machine learning, but that do not give readers the ability to use the tool for themselves, are of little interest, and we don’t expect to publish many of them. We are especially interested in and supportive of researchers who provide their actual code as an electronic appendix, so that others can replicate and build on the discoveries published here.
Good research is good research, no matter the type.
The authors acknowledge Samantha Bunzli PhD, CORR’s newest associate editor, both for sharing her thoughtful qualitative research with CORR’s readers and for visiting recently with CORR’s Senior Editor panel to discuss how to make the most of qualitative research in our specialty. We are deeply indebted to Dr. Bunzli for her insights and contributions, both to this essay and, more generally, to our assessment of qualitative research for publication here over the last year or so.
1. Anderson AB, Wedin R, Fabbri N, Boland P, Healey J, Forsberg JA. External validation of PATHFx Version 3.0 in patients treated surgically and nonsurgically for symptomatic skeletal metastases. Clin Orthop Relat Res. 2020;478:808-818.
2. Beadling L, Leopold SS. Editorial: A new way to read, write, and review for CORR®. Clin Orthop Relat Res. 2016;474:605-606.
3. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig L, Lijmer JG, Moher D, Rennie D, de Vet HCW, Kressel HY, Rifai N, Golub RM, Altman DG, Hooft L, Korevaar DA, Cohen JF, for the STARD Group. STARD 2015: An updated list of essential items for reporting diagnostic accuracy studies. BMJ. 2015;351:h5527.
4. Bunzli S, O’Brien S, Ayton D, Dowsey M, Gunn J, Choong P, Manski-Nankervis J-A. Misconceptions and the acceptance of evidence-based nonsurgical interventions for knee osteoarthritis. A qualitative study. Clin Orthop Relat Res. 2019;477:1975-1983.
5. Clark JP. “How to peer review a qualitative manuscript.” In: Peer Review in Health Sciences. Second edition. Eds: Godlee F, Jefferson T. London, UK: BMJ Books; 2003:219-235.
6. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement. Ann Intern Med. 2015;162:55-63.
7. Dobbs MB, Gebhardt MC, Gioe TJ, Manner PA, Porcher R, Rimnac CM, Wongworawat MD, Leopold SS. Editorial: How does CORR® evaluate survey studies? Clin Orthop Relat Res. 2017;475:2143-2145.
8. Fontana MA, Lyman S, Sarker GK, Padgett DE, Maclean CH. Can machine learning algorithms predict which patients will achieve minimally clinically important differences from total joint arthroplasty? Clin Orthop Relat Res. 2019;477:1267-1279.
9. Ghomrawi HM, Mancuso CA, Dunning A, Gonzalez Della Valle A, Alexiades M, Cornell C, Sculco T, Bostrom M, Mayman D, Marx RG, Westrich G, O’Dell M, Mushlin AI. Do surgeon expectations predict clinically important improvements in WOMAC scores after THA and TKA? Clin Orthop Relat Res. 2017;475:2150-2158.
10. Giacomini MK, Cook DJ, for the Evidence-Based Medicine Working Group. Users' guides to the medical literature: XXIII. Qualitative research in health care A. Are the results of the study valid? JAMA. 2000;284:357-362.
11. Grauer JN, Leopold SS. Editorial: large database studies--what they can do, what they cannot do, and which ones we will publish. Clin Orthop Relat Res. 2015;473:1537-1539.
12. Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, Boaden R, Braithwaite J, Britten N, Carnevale F, Checkland K, Cheek J, Clark A, Cohn S, Coulehan J, Crabtree B, Cummins S, Davidoff F, Davies H, Dingwall R, Dixon-Woods M, Elwyn G, Engebretsen E, Ferlie E, Fulop N, Gabbay J, Gagnon MP, Galasinski D, Garside R, Gilson L, Griffiths P, Hawe P, Helderman JK, Hodges B, Hunter D, Kearney M, Kitzinger C, Kitzinger J, Kuper A, Kushner S, Le May A, Legare F, Lingard L, Locock L, Maben J, Macdonald ME, Mair F, Mannion R, Marshall M, May C, Mays N, McKee L, Miraldo M, Morgan D, Morse J, Nettleton S, Oliver S, Pearce W, Pluye P, Pope C, Robert G, Roberts C, Rodella S, Rycroft-Malone J, Sandelowski M, Shekelle P, Stevenson F, Straus S, Swinglehurst D, Thorne S, Tomson G, Westert G, Wilkinson S, Williams B, Young T, Ziebland S. An open letter to The BMJ editors on qualitative research. BMJ. 2016;352:i563.
13. Hering TM, Rimnac CM, Dobbs MB, Leopold SS. Editorial: Reporting gene expression analyses in CORR®. Clin Orthop Relat Res. 2019;477:1525-1527.
14. Karhade AV, Schwab JH. CORR Synthesis: When should we be skeptical of clinical prediction models? Clin Orthop Relat Res. [Published online ahead of print June 10, 2020]. DOI: 10.1097/CORR.0000000000001367.
15. Kim TK, Chawla A, Meshram P. CORR Synthesis: What is the evidence for the clinical use of stem cell-based therapy in the treatment of osteoarthritis of the knee? Clin Orthop Relat Res. 2020;478:964-978.
16. LaBelle MW, Marcus RE. CORR Synthesis: What is the role of platelet-rich plasma injection in the treatment of tendon disorders? Clin Orthop Relat Res. [Published online ahead of print May 21, 2020]. DOI: 10.1097/CORR.0000000000001312.
17. Leopold SS. Editorial: Getting the most from what you read in orthopaedic journals. Clin Orthop Relat Res. 2017;475:1757-1761.
18. Leopold SS. Editorial: Introducing CORR Synthesis—Review articles with a twist (actually, several twists). Clin Orthop Relat Res. 2020;478:925-927.
19. Leopold SS. Editorial: No-difference Studies Make a Big Difference. Clin Orthop Relat Res. 2015;473:3329-3331.
20. Leopold SS. Editorial: "Pencil and paper" research? Network meta-analysis and other study designs that do not enroll patients. Clin Orthop Relat Res. 2015;473:2163-2165.
21. Leopold SS. Editor’s Spotlight/Take 5: Misconceptions and the acceptance of evidence-based nonsurgical interventions for knee osteoarthritis. A qualitative study. Clin Orthop Relat Res. 2019;477:1970-1974.
22. Liu Y, Chen P-HC, Krause J, Peng L. How to Read Articles that Use Machine Learning. Users’ Guides to the Medical Literature. JAMA. 2019;322:1806-1816.
23. Liu Y, Kohlberger T, Norouzi M, Dahl GE, Smith JL, Mohtashamian A, Olson N, Peng LH, Hipp JD, Stumpe MC. Artificial intelligence–based breast cancer nodal metastasis detection: Insights into the black box for pathologists. Arch Pathol Lab Med. 2019;143:859-868.
24. Loder E, Groves T, Schroter S, Merino JG, Weber W. Qualitative research and The BMJ. A response to Greenhalgh and colleagues’ appeal for more. BMJ.
25. Editor’s choice: Radiomics. Available at https://www.nature.com/collections/ksgfknntbs/. Accessed on July 7, 2020.
26. Rich E. Artificial Intelligence. Singapore: McGraw-Hill; 1983.
27. Tong A, Sainsbury P, Craig J. Consolidated Criteria for Reporting Qualitative Research (COREQ): A 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349-357.
28. Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, Reitsma JB, Kleijnen J, Mallett S, for the PROBAST Group. PROBAST: A tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019;170:51-58.