- Title
Accuracy of an Artificial Intelligence Chatbot's Interpretation of Clinical Ophthalmic Images.
- Authors
Mihalache, Andrew; Huang, Ryan S.; Popovic, Marko M.; Patil, Nikhil S.; Pandya, Bhadra U.; Shor, Reut; Pereira, Austin; Kwok, Jason M.; Yan, Peng; Wong, David T.; Kertes, Peter J.; Muni, Rajeev H.
- Abstract
This study evaluates the performance of the novel release of an artificial intelligence chatbot that is capable of processing imaging data.

Key Points:
Question: How does the artificial intelligence chatbot ChatGPT-4 (OpenAI) perform in processing ophthalmic imaging data?
Findings: In this cross-sectional study including 136 ophthalmic cases provided by OCTCases, the chatbot answered 70% of all multiple-choice questions correctly, performing better on nonimage-based questions (82%) than image-based questions (65%).
Meaning: In this study, the chatbot demonstrated a fair performance on multiple-choice questions pertaining to ophthalmic cases that required multimodal input; as multimodal chatbots become increasingly widespread, it is necessary to stress their appropriate integration within medicine.

Importance: Ophthalmology relies on effective interpretation of multimodal imaging to ensure diagnostic accuracy. The new ability of ChatGPT-4 (OpenAI) to interpret ophthalmic images has not yet been explored.
Objective: To evaluate the performance of the novel release of an artificial intelligence chatbot that is capable of processing imaging data.
Design, Setting, and Participants: This cross-sectional study used a publicly available dataset of ophthalmic cases from OCTCases, a medical education platform based at the Department of Ophthalmology and Vision Sciences at the University of Toronto, with accompanying clinical multimodal imaging and multiple-choice questions. Of the 137 available cases, 136 (99%) contained multiple-choice questions.
Exposures: The chatbot answered questions requiring multimodal input from October 16 to October 23, 2023.
Main Outcomes and Measures: The primary outcome was the accuracy of the chatbot in answering multiple-choice questions pertaining to image recognition in ophthalmic cases, measured as the proportion of correct responses. χ² tests were conducted to compare the proportion of correct responses across different ophthalmic subspecialties.
Results: A total of 429 multiple-choice questions from 136 ophthalmic cases and 448 images were included in the analysis. The chatbot answered 299 of the 429 multiple-choice questions correctly across all cases (70%). Its performance was better on retina questions than on neuro-ophthalmology questions (77% vs 58%; difference = 18%; 95% CI, 7.5%-29.4%; χ²₁ = 11.4; P < .001). The chatbot also performed better on nonimage-based questions than on image-based questions (82% vs 65%; difference = 17%; 95% CI, 7.8%-25.1%; χ²₁ = 12.2; P < .001). It performed best on questions in the retina category (77% correct) and poorest in the neuro-ophthalmology category (58% correct), with intermediate performance in the ocular oncology (72% correct), pediatric ophthalmology (68% correct), uveitis (67% correct), and glaucoma (61% correct) categories.
Conclusions and Relevance: In this study, the recent version of the chatbot accurately responded to approximately two-thirds of multiple-choice questions pertaining to ophthalmic cases based on imaging interpretation. The multimodal chatbot performed better on questions that did not rely on the interpretation of imaging modalities. As the use of multimodal chatbots becomes increasingly widespread, it is imperative to stress their appropriate integration within medical contexts.
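The subspecialty comparisons above use a χ² test with 1 degree of freedom on two proportions of correct responses. A minimal sketch of that kind of test is below; note that the abstract reports only percentages, so the group sizes in this example are hypothetical placeholders, not counts from the study.

```python
# Sketch of a two-proportion chi-square test, as used to compare
# percent-correct between question groups. The counts below are
# HYPOTHETICAL (chosen to match 77% vs 58%); the paper does not
# report per-group denominators in the abstract.
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = question groups, cols = [correct, incorrect]
table = [
    [154, 46],  # group A: 154/200 correct (77%) -- hypothetical n
    [58, 42],   # group B: 58/100 correct (58%)  -- hypothetical n
]

# correction=False gives the uncorrected Pearson chi-square statistic
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```

With these illustrative counts the test has 1 degree of freedom and yields P < .001, i.e. the same form of result (χ²₁, two-sided P value) reported in the abstract.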
- Publication
JAMA Ophthalmology, 2024, Vol 142, Issue 4, p321
- ISSN
2168-6165
- Publication type
Article
- DOI
10.1001/jamaophthalmol.2024.0017