- Title
Can language models automate data wrangling?
- Authors
Jaimovitch-López, Gonzalo; Ferri, Cèsar; Hernández-Orallo, José; Martínez-Plumed, Fernando; Ramírez-Quintana, María José
- Abstract
The automation of data science and other data manipulation processes depends on the integration and formatting of 'messy' data. Data wrangling is an umbrella term for these tedious and time-consuming tasks. Tasks such as transforming dates, units or names expressed in different formats have been challenging for machine learning because (1) users expect to solve them with short cues or few examples, and (2) the problems depend heavily on domain knowledge. Interestingly, large language models today (1) can infer from very few examples or even a short clue in natural language, and (2) can integrate vast amounts of domain knowledge. It is therefore an important research question to analyse whether language models are a promising approach for data wrangling, especially as their capabilities continue to grow. In this paper we apply different variants of the language model Generative Pre-trained Transformer (GPT) to five batteries covering a wide range of data wrangling problems. We examine the effect of prompts and few-shot regimes on their results and how they compare with specialised data wrangling systems and other tools. Our major finding is that they appear to be a powerful tool for a wide range of data wrangling tasks. We provide some guidelines on how they can be integrated into data processing pipelines, provided users can take advantage of their flexibility and the diversity of tasks to be addressed. However, reliability remains an important issue to overcome.
- Subjects
LANGUAGE models; MACHINE learning; GENERATIVE pre-trained transformers; RESEARCH questions; DATA science; DATA modeling
- Publication
Machine Learning, 2023, Vol. 112, Issue 6, p. 2053
- ISSN
0885-6125
- Publication type
Article
- DOI
10.1007/s10994-022-06259-9