- Title
Limitations of readability assessment tools.
- Authors
Alzaid, Mohammad; Ali, Faisal R.; Stapleton, Emma
- Abstract
This letter discusses the limitations of readability assessment tools, specifically in the context of using artificial intelligence (AI) chatbots for patient education. The authors commend a recent study that evaluated the ability of the AI chatbot ChatGPT to explain common ENT operations. While ChatGPT was able to simplify its responses, resulting in improved readability scores, the study found that this simplification came at the cost of information quality. The authors highlight the limitations of readability assessment tools, which focus mainly on quantifiable features of a text and may not account for factors such as organizational structure, concept difficulty, or the reader's familiarity with the subject. They also note that the grade levels assigned by different formulas may vary, affecting the reliability of these tools. The authors recommend that text assessed with readability formulas first undergo editing to avoid misleading scores, and they call for the development of validated AI-oriented scoring tools to evaluate patient education material.
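The abstract's point that different formulas can assign different grade levels to the same passage is easy to demonstrate. The sketch below (not from the letter; an illustration using two standard published formulas, Flesch-Kincaid Grade Level and the Gunning fog index) scores one sample sentence pair with both. The naive vowel-group syllable counter is an assumption; production tools use pronunciation dictionaries.

```python
import re

def syllables(word):
    # Naive heuristic: count vowel groups (assumption; real tools
    # use pronunciation dictionaries and handle silent 'e', etc.)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def stats(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllable_count = sum(syllables(w) for w in words)
    complex_words = sum(1 for w in words if syllables(w) >= 3)
    return len(sentences), len(words), syllable_count, complex_words

def flesch_kincaid_grade(text):
    # Published formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    s, w, syl, _ = stats(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

def gunning_fog(text):
    # Published formula: 0.4*((words/sentences) + 100*(complex words/words))
    s, w, _, cx = stats(text)
    return 0.4 * ((w / s) + 100 * (cx / w))

sample = ("The tympanoplasty repairs a perforated eardrum. "
          "Recovery usually takes several weeks.")
print(f"Flesch-Kincaid grade: {flesch_kincaid_grade(sample):.1f}")
print(f"Gunning fog index:    {gunning_fog(sample):.1f}")
```

On this sample the two formulas disagree by several grade levels, because Gunning fog weights polysyllabic words (common in medical text) differently than Flesch-Kincaid weights raw syllable counts; neither captures concept difficulty or organization, which is the limitation the letter emphasizes.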
- Subjects
ARTIFICIAL intelligence; CHATGPT; READABILITY (Literary style); PATIENT education; CHATBOTS
- Publication
European Archives of Oto-Rhino-Laryngology, 2024, Vol 281, Issue 9, p5021
- ISSN
0937-4477
- Publication type
Article
- DOI
10.1007/s00405-024-08716-8