
A survey and analysis of ChatGPT's generated differential diagnoses utilizing physical exam descriptions from a pediatric dermatology textbook


Presented at: Society for Investigative Dermatology 2025

Date: May 7, 2025


Abstract Body: With the national shortage of pediatric dermatologists and the evolving field of artificial intelligence (AI), this study was designed to analyze the diagnostic accuracy of ChatGPT 3.5 for common pediatric cutaneous pathologies. ChatGPT 3.5 was prompted with “diagnosis of [physical exam description]” using standardized physical examination findings for 178 pediatric dermatologic conditions from the Hurwitz Clinical Pediatric Dermatology textbook. Although more sophisticated ChatGPT models that accept photo input are available, ChatGPT 3.5 was chosen specifically because it is free and the most accessible to parents of patients. Based on the inputted physical exam findings, ChatGPT correctly identified 59.30% of pathologies. This inconsistent diagnostic accuracy demonstrates the continued need for pediatric dermatologists and reliable resources to manage pediatric cutaneous disease.

Authors: Curtis Perz<sup>1</sup>, Colby L. Presley<sup>2</sup>, Margaret Hurley<sup>3</sup>, Shane Swink<sup>4, 5</sup>

1. Preliminary Medicine, William Beaumont Army Medical Center, El Paso, TX, United States.
2. Division of Dermatology, Lehigh Valley Health Network, Allentown, PA, United States.
3. Philadelphia College of Osteopathic Medicine, Philadelphia, PA, United States.
4. Division of Dermatology, The Children's Hospital of Philadelphia, Philadelphia, PA, United States.
5. University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States.

Category: Clinical Research: Epidemiology and Observational Research
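The scoring step described above — pairing each textbook condition's physical exam description with a standardized prompt and counting a response as correct when it names the textbook diagnosis — can be sketched in a few lines. This is a hypothetical illustration only: the study queried ChatGPT 3.5 interactively, and the `build_prompt`, `is_correct`, and `accuracy` helpers and the sample data below are assumptions, not the authors' code.

```python
def build_prompt(exam_description: str) -> str:
    """Format the standardized prompt used in the study: 'diagnosis of [physical exam description]'."""
    return f"diagnosis of {exam_description}"

def is_correct(response: str, diagnosis: str) -> bool:
    """Score a response as correct when the textbook diagnosis appears in it (case-insensitive)."""
    return diagnosis.lower() in response.lower()

def accuracy(results: list[tuple[str, str]]) -> float:
    """Percentage of (model response, textbook diagnosis) pairs scored as correct."""
    correct = sum(is_correct(response, diagnosis) for response, diagnosis in results)
    return 100.0 * correct / len(results)

# Illustrative data only -- not from the study's 178 conditions.
sample = [
    ("This is likely atopic dermatitis given the flexural involvement.", "atopic dermatitis"),
    ("Consider tinea corporis or nummular eczema.", "molluscum contagiosum"),
]
print(f"{accuracy(sample):.2f}%")  # 50.00%
```

A stricter rubric (e.g. requiring the diagnosis to lead the differential rather than appear anywhere in the answer) would lower the measured accuracy, so how "successfully identified" is operationalized matters when comparing the reported 59.30% across studies.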