- Title
Abstraction and model evaluation in category learning.
- Authors
Vanpaemel, Wolf; Storms, Gert
- Abstract
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which considers only the extreme levels of abstraction, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction emerges under the broader view of abstraction afforded by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets were analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation relies not only on the maximum likelihood but also on the marginal likelihood, which is sensitive to model complexity. Finally, a large recovery study demonstrates that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
- Subjects
ABSTRACT thought; MATHEMATICAL models; COMPUTATIONAL complexity; LEARNING; ARTIFICIAL intelligence
- Publication
Behavior Research Methods, 2010, Vol. 42, Issue 2, p. 421
- ISSN
1554-351X
- Publication type
Article
- DOI
10.3758/BRM.42.2.421