
An analysis of ChatGPT recommendations for dressing selection in wound care


Presented at: Society for Investigative Dermatology 2025

Date: May 7, 2025


Summary: Our study aims to assess the accuracy of wound care recommendations made by ChatGPT, a large language model (LLM) that uses artificial intelligence to generate responses to inquiries, including medical advice. Given its widespread use, the implications of patients querying ChatGPT for medical recommendations, such as wound care, should be reviewed and familiar to the dermatology community. Specifically, this study assesses ChatGPT’s accuracy in recommending wound dressings for different wound types, from general wound categories to more detailed wound descriptions. Data were collected in October 2024 using ChatGPT-4. Inputs and descriptions were derived from Dressings, Chapter 145, of Dermatology and the authors’ clinical experience. ChatGPT was queried with wound descriptors, and responses were evaluated for correctness against established clinical guidelines. ChatGPT’s overall accuracy for broad wound categories (e.g., exudative, painful, bleeding, and infected wounds) was 87% (13/15). For specific wound indications, ChatGPT recommended the correct dressing as the first choice 34% (11/32) of the time and within the top three choices 72% (23/32) of the time. The model failed to recommend key dressings for specific wound types, including film dressings for skin tears and alginates for venous ulcers. Overall, ChatGPT shows suboptimal performance in making wound care recommendations and carries an increased risk of misguiding patients on wound care. While ChatGPT’s performance was high for broader wound categories, its decreased accuracy for specific queries underscores the importance of clinical expertise in wound care decision-making. Algorithmic bias stemming from training data limitations may have influenced these results. Further integration with clinical databases and real-world feedback could enhance the model’s accuracy in specific wound care scenarios.

Authors: Colby Presley (2), Alec Robitaille (1), Steven Oberlender (2), Cynthia L. Bartus (2)

Affiliations: 1. Midwestern University Arizona College of Osteopathic Medicine, Glendale, AZ, United States. 2. Dermatology, Lehigh Valley Health Network, Allentown, PA, United States.

Category: Clinical Research: Epidemiology and Observational Research
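
For illustration only, a minimal sketch of how first-choice and top-three accuracy figures like those reported above (e.g., 11/32 and 23/32) could be tallied, assuming each query yields a ranked list of dressing recommendations; the function, data, and dressing names here are hypothetical and are not the authors' actual dataset or code:

```python
def accuracy(results, k):
    """Fraction of queries whose guideline-correct dressing appears in the top k recommendations."""
    hits = sum(1 for correct, ranked in results if correct in ranked[:k])
    return hits / len(results)

# Each entry: (guideline-correct dressing, model's ranked recommendations) -- illustrative values.
results = [
    ("film dressing", ["hydrocolloid", "foam", "alginate"]),   # not recommended at all
    ("alginate", ["alginate", "foam", "hydrogel"]),            # correct as first choice
    ("foam", ["hydrocolloid", "foam", "film dressing"]),       # correct within top three
]

print(f"First-choice accuracy: {accuracy(results, 1):.0%}")
print(f"Top-three accuracy:    {accuracy(results, 3):.0%}")
```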