This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Discrepancy and error in radiology: concepts, causes and consequences.
Citations: 176 · Authors: 4 · Year: 2012
Abstract
“All men are liable to error; and most men are, in many points, by passion or interest, under temptation to it.” John Locke, An Essay Concerning Human Understanding (1690), bk. 4, ch. 20, sect. 17.

In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable and that any error or discrepancy represents a wrong that must be punished [1]. Radiology involves decision-making under conditions of uncertainty [2], and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what the concerns of the referring doctor may be. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray report stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank.) Notwithstanding these complexities, the public frequently expects that a medical investigation will produce “the correct answer”, all the time.
This unfortunate over-simplification of a multi-factorial process is often informed by representations in TV dramas, by media reports describing every discrepancy or dispute over interpretation as a scandal, and by the political imperative to divert anger over perceived failings on to others, preferably easy targets, often portrayed and perceived as privileged.

“Amid many possibilities of error, it would be strange indeed to be always in the right.” Peter Mere Latham (1789-1875), General Remarks on the Practice of Medicine, The Heart and its Affections, ch. IV.

With respect to radiological investigations, the use of the term “error” is often unsuitable; it is more appropriate to concentrate on “discrepancies” between a report and a retrospective review of a film or outcome [1]. Professional body guidelines recommend that all imaging procedures should include an expert opinion from a radiologist, given by means of a written report or comment [3]. “Opinion” may be defined as “a conclusion arrived at after some weighing of evidence, but open to debate or suggestion”, and thus an expert’s opinion should not be expected to be incontrovertible [4]. Error implies a mistake (in this context, an incorrect interpretation of an imaging study). For a report to be erroneous, it follows that a correct report must also be possible. Because of the subjectivity of image interpretation, the definition of error depends on “expert opinion”: an observer makes an error if he or she fails to reach the conclusion that would be reached by a group of expert observers. Errors can therefore only arise in cases where the correct interpretation is not in dispute. Somewhere between the clear-cut error and the inevitable difference of opinion in interpretation lies an arbitrary division defining the limit of professional acceptability [4].

“Errors in judgement must occur in the practice of an art which consists largely in balancing probabilities.”
Sir William Osler (1849-1919), Aequanimitas, with Other Addresses, “Teacher and Student”.

Unlike physical examination of patients, or findings at surgery or endoscopy, the evidence of a radiologic examination remains available for subsequent scrutiny and can be used to study observer variation. A 20-year literature review in 2001 suggested that the rate of clinically significant or major error in radiology lies in the range 2-20%, varying with the radiological investigation [5]. The issue of error in radiology has been recognised for many years. Studies in the 1940s found that chest x-rays of patients with suspected tuberculosis were read differently by different observers in 10-20% of cases. In the 1970s, it was found that 71% of lung cancers detected on screening radiographs were visible in retrospect on previous films [4,6]. The “average” observer has been found to miss 30% of visible lesions on barium enemas [4]. A 1999 study found that 19% of lung cancers presenting as a nodular lesion on chest x-rays were missed [7]. Another study identified major disagreement between two observers interpreting x-rays of emergency department patients in 5-9% of cases, with an estimated error rate per observer of 3-6% [8]. A 1997 study in which experienced radiologists reported a collection of normal and abnormal x-rays found an overall error rate of 23% when no clinical information was supplied, falling to 20% when clinical details were available [9]. A recent report suggests a significant major discrepancy rate (13%) between specialist neuroradiology second opinions and primary general radiology opinions [10].
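The relationship between the per-observer error rate (3-6%) and the two-observer disagreement rate (5-9%) quoted above can be illustrated with a simple probability sketch. This is not from the paper: it assumes, purely for illustration, that the two observers err independently on a given study and that a disagreement surfaces whenever exactly one of them errs.

```python
def expected_disagreement(p1: float, p2: float) -> float:
    """Chance that exactly one of two independent observers errs on a study.

    Illustrative assumption only: errors are independent, and concordant
    errors (both observers making the same mistake) produce no disagreement.
    """
    return p1 * (1 - p2) + p2 * (1 - p1)

# Per-observer error rates of 3-6%, as quoted in the review, would give
# expected disagreement rates in roughly the same range as the observed 5-9%.
low = expected_disagreement(0.03, 0.03)   # about 0.058
high = expected_disagreement(0.06, 0.06)  # about 0.113
print(f"{low:.3f} {high:.3f}")
```

Note that the model slightly overshoots the observed upper bound, which is consistent with the fact that real observers' errors are not fully independent: both tend to miss the same subtle lesions.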
A recent review found that the “real-time” error rate among radiologists in day-to-day practice averages 3-5%, but also cited earlier research on patients subsequently diagnosed with lung or breast cancer whose relevant radiologic studies had previously been reported as “normal”: on retrospective review, the lung cancer was identifiable on the chest radiographs in as many as 90% of cases, and the breast cancer on the mammograms in as many as 75% [11]. Prolonged attention to a specific area on a radiograph (“visual dwell”) increases both false-negative and false-positive errors; reducing the viewing time for chest x-rays to less than 4 seconds also increases the miss rate [4]. Comparative studies of non-radiologic medical fields have found a similar prevalence of inaccuracy in clinical assessment and examination: a Mayo Clinic study of autopsies published in 2000, comparing clinical with post-mortem diagnoses, found that a major diagnosis had been missed clinically in 26% of cases [11]. Common experience in radiology suggests that many errors are of little or no significance to the patient, and that some significant errors remain undiscovered. Errors are inevitable, and the concept of necessary fallibility must be accepted. Equally, a threshold of competency is required of all professionals involved in the delivery of radiology services.
Related works
Refinement and reassessment of the SERVQUAL scale.
1991 · 3,967 citations
Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review
2005 · 3,798 citations
Radiobiology for the Radiologist.
1974 · 3,502 citations
International evidence-based recommendations for point-of-care lung ultrasound
2012 · 2,829 citations
Radiation Dose Associated With Common Computed Tomography Examinations and the Associated Lifetime Attributable Risk of Cancer
2009 · 2,434 citations