- Title
ChatGPT's ability to generate realistic experimental images poses a new challenge to academic integrity.
- Authors
Zhu, Lingxuan; Lai, Yancheng; Mou, Weiming; Zhang, Haoran; Lin, Anqi; Qi, Chang; Yang, Tao; Xu, Liling; Zhang, Jian; Luo, Peng
- Abstract
The rapid advancement of large language models (LLMs) such as ChatGPT has raised concerns about their potential impact on academic integrity. While initial concerns focused on ChatGPT's writing capabilities, recent updates have integrated DALL-E 3's image generation features, extending the risks to visual evidence in biomedical research. Our tests revealed that ChatGPT's nearly barrier-free image generation feature can be used to produce images of experimental results, such as blood smears, Western blots, and immunofluorescence images. Although ChatGPT's current ability to generate experimental images is limited, the risk of misuse is evident. This development underscores the need for immediate action. We suggest that AI providers restrict the generation of experimental images, develop tools to detect AI-generated images, and consider adding "invisible watermarks" to generated images. By implementing these measures, we can better ensure the responsible use of AI technology in academic research and maintain the integrity of scientific evidence.
- Subjects
CHATGPT; EDUCATION ethics; LANGUAGE models; RESEARCH integrity
- Publication
Journal of Hematology & Oncology, 2024, Vol 17, Issue 1, p1
- ISSN
1756-8722
- Publication type
Article
- DOI
10.1186/s13045-024-01543-8