
We could all do better. The case of symptom checkers vs. physicians.

Millions of patients worldwide turn to Google and online symptom checkers to try to figure out what ails them. A recent study published in JAMA Internal Medicine compared the diagnostic accuracy of online symptom checkers against actual physicians.1 Physicians were correct 72% of the time, beating the computer tools by a 2-to-1 margin. The authors concluded that “physicians vastly outperformed computer algorithms in diagnostic accuracy.” The study quickly made national headlines.2-6

If you are wondering what made this small study newsworthy, so am I. The authors did not explain why the symptom checkers did so poorly, but who really cares? Consumer symptom checkers are not designed to be replacements for physicians, or even cognitive assistants to them. So here is a more apt headline: “Study reveals physicians do poorly in solving relatively easy medical cases.” The real news is that physicians achieved only 72% accuracy on cases that were not very difficult, cases borrowed from an earlier BMJ article.7 While disappointing, these results are not altogether surprising. The 2015 Institute of Medicine report, Improving Diagnosis in Health Care, cited a 5%-10% diagnostic error rate in medicine, and many believe the true rate is even higher.8 The fact of the matter is that doctors sometimes get it wrong. So instead, let’s discuss why physicians do poorly at diagnosis.

In medical education, we train students and residents to memorize typical and classic presentations and then expect them to generalize “on the fly” to variants. Medical students (and likely most people who watch TV medical dramas) can diagnose crushing chest pain, left arm pain, and diaphoresis as a myocardial infarction. But what about a patient presenting with toothache and fatigue? An emergency physician recently told me about an MI patient who waited 5 hours to be seen in the ED because the case was thought to be non-urgent. “I never saw it present like that” would be a standard response for the miss. It is the unusual or atypical presentation that is often behind diagnostic errors. We teach the old adage, “when you hear hoof beats, think of horses, not zebras.” We all know that common diseases happen commonly, but it’s also true that variations of common diseases happen almost as commonly.

Indeed, cognitive mistakes and shortcuts have repeatedly been shown to be a cause of diagnostic error.9 These include frequency gambling (“playing the odds”), diagnosis momentum (carrying a diagnostic label forward without considering alternatives), premature closure (halting the decision-making process too early), posterior probability error (being overly influenced by a patient’s past history), and availability (perceiving things that readily come to mind as more likely). Some experts have argued that we can “cognitively de-bias” medical students and trainees by making them aware of the common cognitive mistakes, so that they will not make them. But let’s be honest and recognize that we as humans have a tendency to err. Our pattern recognition may not be as good as we think it is. People in other industries working with complex systems rely on information tools for guidance and expertise. So why does medicine not teach to tools? Why do we test closed-book in medicine when we want our doctors to use evidence as they work? Society has embraced technology- and information-assisted knowledge. We use evidence daily as we work, on our phones, on the desktop, or inside the electronic health record.

So how can we do better? Many are working in the field of diagnostic decision-making and the study of diagnostic error, and it is heartening to see attention paid to this widespread problem. And yet clinicians, even when pooling their knowledge, are still not systematically addressing diagnosis in all its ambiguity, variation, and complexity, because they are looking for what they know, not what they don’t know. The JAMA study is remarkable in that the authors seem unaware of professional diagnostic decision support. Furthermore, they fail to recognize that diagnostic decision support is an aid to human cognition, not a replacement for the brain. If we learn anything from the JAMA study, it is that diagnosis should not be an either/or proposition.

New professional systems are available that model variation in disease presentation, going well beyond what a single physician, or even a crowd, could resolve at the moment of decision-making. We expect patients to be armed with “knowledge” from the internet. At the same time, clinicians should be augmenting their knowledge with tools that help them do their job better, tools that every other profession has implemented, from aeronautics to accounting. It is time for medicine to catch up.

References

  1. Semigran HL, Levine DM, Nundy S, Mehrotra A. Comparison of Physician and Computer Diagnostic Accuracy. JAMA Intern Med. 2016 Oct 10. http://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2565684. Accessed October 12, 2016.
  2. Kaplan K. Your phone may be smart, but your doctor still knows more than an app. Los Angeles Times. October 10, 2016. http://www.latimes.com/science/sciencenow/la-sci-sn-doctors-vs-apps-20161010-snap-story.html. Accessed October 12, 2016.
  3. Marcus MB. How do online symptom checkers compare to a doctor’s diagnosis? CBS News website. http://www.cbsnews.com/news/online-symptom-checkers-doctors-diagnosis-compared/. Published October 10, 2016. Accessed October 12, 2016.
  4. Docs much better than internet or app-based symptoms checkers: head-to-head comparison reveals human physicians vastly outperform virtual ones. ScienceDaily website. https://www.sciencedaily.com/releases/2016/10/161010120138.htm. Published October 10, 2016. Accessed October 12, 2016.
  5. PubMed Health. Doctors ‘vastly outperform’ symptom checker apps. https://www.ncbi.nlm.nih.gov/pubmedhealth/behindtheheadlines/news/2016-10-11-doctors-vastly-outperform-symptom-checker-apps-/. Published October 11, 2016. Accessed October 12, 2016.
  6. Mack H. JAMA study: doctors beat symptom-checker apps in diagnostic accuracy. MobiHealthNews website. http://www.mobihealthnews.com/content/jama-study-doctors-beat-symptom-checker-apps-diagnostic-accuracy. Published October 12, 2016. Accessed October 12, 2016.
  7. Semigran HL, Linder JA, Gidengil C, Mehrotra A. Evaluation of symptom checkers for self diagnosis and triage: audit study. BMJ. 2015;351:h3480. http://www.bmj.com/content/351/bmj.h3480.long. Accessed October 12, 2016.
  8. The National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; 2015. http://www.nationalacademies.org/hmd/Reports/2015/Improving-Diagnosis-in-Healthcare.aspx. Accessed October 12, 2016.
  9. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780.
