OpenAlex · Updated hourly · Last updated: 29 Mar 2026, 01:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Editorial: Opposites Attract at CORR®—Machine Learning and Qualitative Research

2020 · 16 citations · Clinical Orthopaedics and Related Research · Open Access
Open full text at the publisher

16 citations · 7 authors · 2020

Abstract

While most research published in orthopaedic surgery journals derives from one of several familiar study designs—case series, historically controlled studies, randomized trials, and variations on the systematic review theme—much of the fun (and a considerable portion of the benefit) we get from reading journals happens when clinician scientists tackle old, resistant problems in new ways. Clinical Orthopaedics and Related Research® is in the vanguard of publishing innovative approaches in musculoskeletal science. Indeed, CORR® leads the way both in establishing editorial standards that ensure the consistent, clear reporting of a wide range of newer study designs [7, 11, 13, 19, 20] and in providing tools for reviewers and readers [2, 17] to help them get the most out of the discoveries that we publish. We also take special pleasure in developing new article types that help surgeons do their jobs better; the most recent example is our CORR Synthesis series [14-16], a reboot of the review-article format, but one that delivers robust approaches to screening, selection, and presentation that mitigate the sources of bias otherwise so deeply embedded in articles of this type [18]. CORR's enthusiasm for two seemingly dissimilar article types, machine learning models and qualitative research, is just another example of our journal's openness to new approaches to solving problems. We believe these article types deserve the enthusiasm of readers, as well.

Opposites Attract: Machine Learning and Qualitative Research

On the face of it, it's hard to imagine two more dissimilar research approaches than machine learning and qualitative research. But as we've said before, to a large degree, research is research [7], and so as CORR's editors assess papers that we receive, we have been and will continue to hold papers that use these disparate methods to a set of common standards (Table 1).

Table 1.
Common standards CORR senior editors use to assess submitted papers:

- Is the topic important and is there a good rationale (reason) to ask the questions the study asks? And,
- Is the work robust? And,
- Do the findings support specific recommendations that can help us take better care of patients, practice more efficiently, or make better public policy? Or,
- Does the paper uncover some unexpected, counterintuitive finding or association that changes our thinking on an important theme?

To help the curious reader get started, CORR has published a helpful how-to on machine learning [14], as well as an interview introducing high-quality qualitative research to this audience [21]; the study covered in that interview is itself a don't-miss [4]. For the reader looking to go a bit deeper, the JAMA "Users' Guide" on machine learning is thoughtful and well written [22], as is their older but still-relevant piece in that same series on qualitative research [10]; we also recommend using the PROBAST (Prediction model Risk of Bias Assessment Tool) [28] for those who want more. Below, we provide a brief overview of each article type, point to why we believe each has an important role to play, and detail how CORR's editors plan to evaluate the ones we receive.

What is Machine Learning and How is it Useful?

Broadly speaking, machine learning studies (and related approaches) seek to harness computer algorithms to produce models to diagnose or prognosticate based on large numbers of variables and vast quantities of data. Typically, these systems begin with few assumptions and an enormous list of potential predictor variables, in the hopes of identifying associations that humans—weighed down by our preconceived notions—might otherwise miss. After that preliminary analysis in what is called a training dataset, the algorithm derives and refines predictive functions in a separate setting, called a validation set, to see whether the identified associations prove robust.
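The train-then-validate discipline described above can be sketched in miniature. Everything below is an invented toy example (the data, the single-predictor "model," and the threshold search are all hypothetical, not drawn from any study cited in this editorial); it shows only the core idea that a model is fit on a training set and its performance is judged on a separate, held-out validation set.

```python
import random

random.seed(0)

def make_row():
    """One synthetic 'patient': a predictor score and a binary outcome
    that loosely tracks the score (values are invented for illustration)."""
    score = random.uniform(0, 10)
    outcome = int(score + random.gauss(0, 2) > 5)
    return score, outcome

data = [make_row() for _ in range(200)]

# Split the data: first 70% for training, the rest held out for validation.
split = int(0.7 * len(data))
train, validation = data[:split], data[split:]

def accuracy(threshold, rows):
    """Fraction of rows where the rule 'score > threshold' matches the outcome."""
    return sum((score > threshold) == bool(outcome)
               for score, outcome in rows) / len(rows)

# "Training": pick the decision threshold that best fits the training set.
best = max((t / 10 for t in range(101)), key=lambda t: accuracy(t, train))

# "Validation": the honest performance estimate comes from the held-out set.
print(f"threshold={best:.1f}  "
      f"train acc={accuracy(best, train):.2f}  "
      f"validation acc={accuracy(best, validation):.2f}")
```

In a real study the training step would fit a far richer model than a single threshold, but the discipline is the same: the performance claim rests on the held-out validation set, not on the data used for fitting.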
With still more data, these systems can continue to self-educate and improve their performance. We believe machine learning will help orthopaedic surgeons take better care of patients. In recent years, we've seen that even expert surgeons are no more likely than chance to anticipate which patients will improve meaningfully following knee replacement [9]; by contrast, a computer algorithm designed to do just that did pretty well, and it's still learning [8]. Machine learning and its relatives in the broader discipline of artificial intelligence may also help us anticipate prognosis in patients with malignancies [1], and even make diagnoses using rich sources of visual data, like radiographic images [25] and histopathology slides [23]. In addition, and unlike humans looking at many images or slides, these machines don't make errors associated with carelessness or fatigue; the machine doesn't tire. While artificial intelligence once was derided as "the study of how to make computers do things which, at the moment, people do better" [26], this may no longer be the case.

What is Qualitative Research and How is it Useful?

Most orthopaedic research is descriptive, and some is comparative, but interpretive research—studies that help us to understand why patients feel the way they do, how they form beliefs and (mis-)understandings about their bodies, and which factors inform their decision-making—largely has been relegated to social science journals. Some editors of medical journals have even actively deprioritized this kind of work [12, 24]. We believe this is a missed opportunity for clinicians who want a deeper understanding of why their patients feel as they do. Qualitative research often asks questions that mirror those that patients and clinicians ask every day—versions of "given my situation, what should I do?"—and as such, we see it as an important research tool.
Survey studies can tell us those things, too, but in survey studies, the research team can only find answers to the questions they ask; in qualitative research, an open-ended interview approach allows patients to tell us what matters to them. By interpreting patients' experiences, qualitative researchers can produce a rich, nuanced perspective and support specific recommendations on a variety of clinically important topics.

In contrast to most kinds of research, in which the researcher's participation is seen as a source of bias, subjectivity in qualitative work is seen as a feature, not a bug—as long as the reader is made aware of how that subjectivity is deployed. Creating teams of researchers with diverse backgrounds (such as surgeons, social scientists, epidemiologists, and trialists) to analyze the data from a variety of perspectives is fundamental to this process. By collecting and analyzing data in parallel, the researchers can test or challenge their emerging interpretations in subsequent interviews, helping to ensure that the resulting findings are grounded in patient experiences (and don't just reflect the researchers' preexisting biases).

For example, qualitative papers can help readers understand whether the content of our commonly used outcomes tools is the "right" stuff to focus on, they can plumb patients' needs to help us determine which study endpoints would be most meaningful to the people we care for, and they can help us identify barriers to the implementation of medical recommendations or to trial enrollment. At CORR, we are intrigued by qualitative studies that tell coherent stories leading to specific recommendations. While most qualitative studies we've seen don't clear this bar—and so most don't get published—the ones that do can really change surgeons' thinking.
One marvelous example [4] that we highlighted with a Take 5 interview [21] identified a number of serious misunderstandings in the minds of patients who planned to undergo joint replacement—misunderstandings that pushed these patients to choose major surgical treatment over safer, less-invasive alternatives. By identifying those misconceptions, the study was able to develop a practical roadmap to help surgeons ensure that patients who choose surgery do not do so while laboring under important misapprehensions.

CORR's Editorial Standards on Qualitative Research

As noted earlier, the standards that we apply to all papers (Table 1) naturally also apply to machine learning and qualitative research papers. When screening qualitative research papers, we'll apply the easy-to-remember acronym RATS—relevance, appropriateness, transparency, and soundness [5]. In particular, we will ask that these papers:

- Provide a clearly defined research question relevant to the practice of orthopaedic surgery;
- Offer a clear description of, and theoretical justification for, the sampling strategy and data analysis procedures;
- Convince readers that alternative interpretations of the data have been considered;
- Give thoughtful consideration to the researcher's influence on the findings; and
- Produce and support specific recommendations for surgeons to use in practice.

We'll also ask our subject-matter experts (CORR's peer reviewers) to apply tools like the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist [27] in their more-detailed assessments.

CORR's Editorial Standards on Machine Learning

Most, although not all, machine learning studies involve models to improve our ability to make a diagnosis or refine our prognostic precision; as such, a number of checklists relevant to diagnostic and prognostic studies that authors, reviewers, and many readers will be familiar with can be helpful.
Depending on the study design, CORR's editors expect to apply reporting standards like TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) [6] or STARD (Standards for Reporting Diagnostic Accuracy) 2015 [3]. Further, CORR will ask that these papers either:

- Present a finding that an important machine learning or artificial intelligence-driven approach does not work the way we believed or hoped it would; or,
- If the approach under study works well, provide a viable tool that readers can use (such as a free URL or a commercially available product) that will help them improve patient care.

Papers that simply show that a prediction or diagnosis can be made using machine learning, but do not give readers the ability to use the tool for themselves, are of little interest, and we don't expect to publish many of these. We are especially interested in and supportive of researchers who provide their actual code as an electronic appendix, so that others can replicate and build on the discoveries published here. Good research is good research, no matter the type.

Topics

Artificial Intelligence in Healthcare and Education · Meta-analysis and systematic reviews · Clinical practice guidelines implementation