- Title
ChatGPT and the clinical informatics board examination: the end of unproctored maintenance of certification?
- Authors
Kumah-Crystal, Yaa; Mankowitz, Scott; Embi, Peter; Lehmann, Christoph U
- Abstract
We aimed to assess ChatGPT's performance on the Clinical Informatics Board Examination and to discuss the implications of large language models (LLMs) for board certification and maintenance. We tested ChatGPT using 260 multiple-choice questions from Mankowitz's Clinical Informatics Board Review book, omitting 6 image-dependent questions. ChatGPT answered 190 (74%) of 254 eligible questions correctly. While performance varied across the Clinical Informatics Core Content Areas, differences were not statistically significant. ChatGPT's performance raises concerns about the potential misuse in medical certification and the validity of knowledge assessment exams. Since ChatGPT is able to answer multiple-choice questions accurately, permitting candidates to use artificial intelligence (AI) systems for exams will compromise the credibility and validity of at-home assessments and undermine public trust. The advent of AI and LLMs threatens to upend existing processes of board certification and maintenance and necessitates new approaches to the evaluation of proficiency in medical education.
- Subjects
CHATGPT; LANGUAGE models; MEDICAL informatics; ARTIFICIAL intelligence; BOARD books; MEDICAL education
- Publication
Journal of the American Medical Informatics Association, 2023, Vol 30, Issue 9, p1558
- ISSN
1067-5027
- Publication type
Article
- DOI
10.1093/jamia/ocad104