- Title
Machine vs Machine: Large Language Models (LLMs) in Applied Machine Learning High-Stakes Open-Book Exams.
- Authors
Quille, Keith; Alattyanyi, Csanad; Becker, Brett A.; Faherty, Róisín; Gordon, Damian; Harte, Miriam; Hensman, Svetlana; Hofmann, Markus; Jiménez García, Jorge; Kuznetsov, Anthony; Marais, Conrad; Nolan, Keith; Nicolai, Cianan; O'Leary, Ciarán; Zero, Andrzej
- Abstract
There is a significant gap in Computing Education Research (CER) concerning the impact of Large Language Models (LLMs) in the advanced stages of degree programmes. This study addresses this gap by investigating the effectiveness of LLMs in answering exam questions in a final-year undergraduate applied machine learning course. The research examines the performance of LLMs on a range of exam questions, including proctored closed-book and open-book questions spanning various levels of Bloom's Taxonomy. Question formats encompassed open-ended, tabular data-based, and figure-based questions. To achieve this aim, the study has two objectives: (1) Comparative Analysis: comparing LLM-generated exam answers with actual student submissions to assess LLM performance; and (2) Detector Evaluation: evaluating the efficacy of LLM detectors by feeding LLM-generated responses directly into them, and additionally assessing detector performance on tampered LLM outputs designed to conceal their AI-generated origin. The research methodology incorporates a staff-student partnership model involving eight academic staff and six students. Students played integral roles in shaping the project's direction, particularly in areas unfamiliar to academic staff, such as specific tools for avoiding LLM detection. This study contributes to the understanding of LLMs' role in advanced education settings, with implications for future curriculum design and assessment methodologies.
- Subjects
LANGUAGE models; MACHINE learning; PROGRAMMING languages; BLOOM'S taxonomy; TAXONOMY; CURRICULUM planning; PERFORMANCES; COINCIDENCE
- Publication
RED - Revista de Educación a Distancia, 2024, Vol. 24, Issue 78, p. 1
- ISSN
1578-7680
- Publication type
Article
- DOI
10.6018/red.603001