- Title
(021) ChatGPT's Ability to Assess Quality and Readability of Online Medical Information.
- Authors
Golan, R; Ripps, SJ; Raghuram, R; Loloi, J; Bernstein, A; Connelly, ZM; Golan, NS; Ramasamy, R
- Abstract
Introduction: Health literacy plays a crucial role in enabling patients to understand and effectively use medical information. As technology advances rapidly, health literacy becomes even more important, particularly for comprehending complex medical information. Artificial Intelligence (AI) platforms have attracted significant attention for their ability to generate automated responses to a wide range of prompts, but their capacity to assess the quality and readability of a given text remains uncertain. Given the growing prominence of AI web assistant tools, we hypothesized that integrating these tools into patients' web searches could improve the retrieval of accurate medical information.
Objective: To evaluate the proficiency of the Chat Generative Pre-trained Transformer (ChatGPT) in assessing readability and in applying the DISCERN tool to assess the quality of online content regarding shock wave therapy for erectile dysfunction.
Methods: Websites were identified via a Google search for "shock wave therapy for erectile dysfunction" with location filters disabled. Readability was analyzed using the Readable software (Readable.com, Horsham, United Kingdom). Quality was assessed independently by three reviewers using the DISCERN tool. The same plain-text files were then input into ChatGPT to determine whether it produced comparable readability and quality metrics.
Results: The results revealed a notable disparity between ChatGPT's readability assessment and that obtained from Readable.com (p<0.05), indicating a lack of alignment between ChatGPT's output and established readability tools. Similarly, the DISCERN score generated by ChatGPT differed significantly from the scores generated manually by human evaluators (p<0.05), suggesting that ChatGPT may not accurately identify poor-quality information sources regarding shock wave therapy as a treatment for erectile dysfunction.
Conclusions: ChatGPT's evaluation of the quality and readability of online text regarding shock wave therapy for erectile dysfunction differs from that of human raters and trusted tools, and its current capabilities are not sufficient for reliably assessing the quality and readability of textual content. Further research is needed to elucidate the role of AI in the objective evaluation of online medical content in other fields. Continued development of AI, and the incorporation of tools such as DISCERN into AI software, may enhance the way patients navigate the web in search of high-quality medical content in the future. Disclosure: No.
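For context, commercial readability tools such as the one used in this study typically report standard formulas like Flesch Reading Ease, which combines average sentence length and average syllables per word. The sketch below is a minimal, illustrative Python implementation of that formula with a naive syllable heuristic; it is an assumption for illustration only and is not the study's actual software or its exact algorithm.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups (at least 1 per word).
    # Real readability tools use far more careful syllabification.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    #   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    # Higher scores indicate easier text (90-100 ~ very easy, <30 ~ very hard).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical patient-facing sentence, for illustration only.
sample = "Shock wave therapy may help. Ask your doctor about it."
score = flesch_reading_ease(sample)
```

A study like this one would compare such tool-computed scores against the readability figures ChatGPT reports for the same text.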
- Subjects
CHATGPT; GENERATIVE pre-trained transformers; LANGUAGE models; ARTIFICIAL intelligence; HEALTH literacy
- Publication
Journal of Sexual Medicine, 2024, p1
- ISSN
1743-6095
- Publication type
Article
- DOI
10.1093/jsxmed/qdae001.019