
Pediatricians, Don’t Hang Up Your White Coats. ChatGPT Missed 80% Of Diagnoses

  • Nishadil
  • January 10, 2024

By Nina Shapiro, Forbes Contributor. Opinions expressed by Forbes Contributors are their own. Dispelling health myths, fads, exaggerations and misconceptions.

While the role of artificial intelligence (AI) in many fields may seem to be overtaking human brainpower, that is not yet the case in medical diagnosis. An article published in this month’s JAMA Pediatrics reviewed the accuracy of ChatGPT version 3.5 in diagnosing pediatric conditions.

The chatbot scored a dismal 17% correct. The study authors, based at Cohen Children’s Medical Center in New York, used pediatric case presentations from JAMA Pediatrics clinical cases (60 in total) and from the New England Journal of Medicine’s “Case Records of the Massachusetts General Hospital” section (40 in total), all published between 2013 and 2023.

Out of these 100 pediatric case presentations, the chatbot completely missed 72 and was close but not correct on 11 others. While some of the chatbot’s erroneous diagnoses were, indeed, close to the actual diagnosis (a cyst versus a fistula, for instance), others were way off the mark (a life-threatening platelet disorder versus a treatable vitamin deficiency).
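A quick arithmetic check ties these counts to the headline accuracy figure (assuming every case that was neither missed outright nor merely close was scored as correct):

\[
100 - 72\ (\text{missed}) - 11\ (\text{close but incorrect}) = 17\ \text{correct} \;\Longrightarrow\; \frac{17}{100} = 17\%.
\]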

Case reports from these journals tend to include multiple sources of clinical information, including patient histories, radiologic imaging, photographs of physical findings, laboratory values, and even photomicrographs of specimens if biopsies were taken. ChatGPT can incorporate multiple facets of information, but its lack of a subtle understanding of a given patient’s overall condition remains a weak link.

Technologies such as ChatGPT, a type of large language model (LLM), do play an increasingly important role in medical diagnostics and therapeutics. A prior study, published in the JAMA Network in 2023, used ChatGPT version 4 to assess the accuracy of diagnosing adult clinical cases from the New England Journal of Medicine.

In this scenario, the chatbot was accurate 39% of the time (27 out of 70), and even when it didn’t nail the diagnosis, it listed the correct answer in the differential diagnosis, a list of possible diagnoses, 64% of the time (45 out of 70 case presentations). Even though the recent pediatric study showed quite poor diagnostic skills from ChatGPT version 3.5, the study authors recommend remaining optimistic.

The authors spoke with MedPage Today and encouraged clinicians to continue investigating LLMs in medical practice. For now, this application may be better employed as a means of generating information for patients based on direct prompts from clinicians. One of the issues with current AI programs is their inability to link one aspect of a patient’s history with another aspect of their history or presentation.

For instance, a patient with a particular lifelong condition may be predisposed to a specific medical issue. But if this prior condition is not delineated in the text or photograph, the chatbot will not be able to incorporate and evaluate that information. That’s where experience, nuanced interpretation, and a broader knowledge of medicine come into play.

And while LLMs are an evolving adjuvant for health diagnostics and therapeutics, for now, doctors and other healthcare professionals need to stick around.