
Generalization and Similarity in Exemplar Models of Categorization: Insights from Machine Learning

Exemplar theories of categorization depend on similarity for explaining subjects’ ability to generalize to new stimuli. A major criticism of exemplar theories concerns their lack of abstraction mechanisms and thus, seemingly, generalization ability. Here, we use insights from machine learning to demonstrate that exemplar models can actually generalize very well. Kernel methods in machine learning are akin to exemplar models and very successful in real-world applications. Their generalization performance depends crucially on the chosen similarity measure. While similarity plays an important role in describing generalization behavior, it is not the only factor that controls generalization performance. In machine learning, kernel methods are often combined with regularization techniques to ensure good generalization. These same techniques are easily incorporated in exemplar models. We show that the Generalized Context Model (Nosofsky, 1986) and ALCOVE (Kruschke, 1992) are closely related to a statistical model called kernel logistic regression. We argue that generalization is central to the enterprise of understanding categorization behavior and suggest how insights from machine learning can offer some guidance.

Keywords: kernel, similarity, regularization, generalization, categorization.
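The link the abstract draws between exemplar models and kernel logistic regression can be illustrated with a small numerical sketch. The Python below is not the authors' code; it assumes a GCM-like exponential similarity kernel s(x, y) = exp(-c * ||x - y||), an L2 penalty on the exemplar weights as the regularizer, and plain gradient descent, and the names similarity_kernel, fit_klr, and predict are chosen here purely for illustration.

import numpy as np

def similarity_kernel(X, Z, c=1.0):
    # Exponential (GCM-style) similarity between every row of X and every row of Z.
    dists = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2)
    return np.exp(-c * dists)

def fit_klr(X, y, c=1.0, lam=0.1, lr=0.1, n_iter=2000):
    # Kernel logistic regression: p(y=1|x) = sigmoid(sum_i alpha_i * k(x, x_i)).
    # The penalty lam * alpha' K alpha is the regularizer that controls generalization.
    K = similarity_kernel(X, X, c)
    alpha = np.zeros(len(y))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-K @ alpha))      # predicted category probabilities
        grad = K @ (p - y) + lam * (K @ alpha)    # gradient of the penalized log-loss
        alpha -= lr * grad / len(y)
    return alpha

def predict(X_train, alpha, X_new, c=1.0):
    # Probability of category 1 for new stimuli, weighted by similarity to stored exemplars.
    K_new = similarity_kernel(X_new, X_train, c)
    return 1.0 / (1.0 + np.exp(-K_new @ alpha))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy categories of exemplars in a two-dimensional "psychological space".
    A = rng.normal([0.0, 0.0], 0.5, size=(20, 2))
    B = rng.normal([2.0, 2.0], 0.5, size=(20, 2))
    X = np.vstack([A, B])
    y = np.concatenate([np.zeros(20), np.ones(20)])
    alpha = fit_klr(X, y)
    print(predict(X, alpha, np.array([[0.2, 0.1], [1.9, 2.1]])))

Smaller values of lam put more weight on fitting the training exemplars, while larger values smooth the decision boundary; this is the trade-off the abstract refers to when it says that regularization, not similarity alone, controls generalization performance.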

Author(s): Jäkel, F. and Schölkopf, B. and Wichmann, F. A.
Journal: Psychonomic Bulletin and Review
Volume: 15
Number (issue): 2
Pages: 256-271
Year: 2008
Month: April
Bibtex Type: Article (article)
DOI: 10.3758/PBR.15.2.256
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@article{4783,
  title = {Generalization and Similarity in Exemplar Models of Categorization: Insights from Machine Learning},
  journal = {Psychonomic Bulletin and Review},
  abstract = {Exemplar theories of categorization depend on similarity for explaining subjects’ ability to
  generalize to new stimuli. A major criticism of exemplar theories concerns their lack of abstraction
  mechanisms and thus, seemingly, generalization ability. Here, we use insights from
  machine learning to demonstrate that exemplar models can actually generalize very well. Kernel
  methods in machine learning are akin to exemplar models and very successful in real-world
  applications. Their generalization performance depends crucially on the chosen similarity measure.
  While similarity plays an important role in describing generalization behavior, it is not
  the only factor that controls generalization performance. In machine learning, kernel methods
  are often combined with regularization techniques to ensure good generalization. These same
  techniques are easily incorporated in exemplar models. We show that the Generalized Context
  Model (Nosofsky, 1986) and ALCOVE (Kruschke, 1992) are closely related to a statistical
  model called kernel logistic regression. We argue that generalization is central to the enterprise
  of understanding categorization behavior and suggest how insights from machine learning can
  offer some guidance. Keywords: kernel, similarity, regularization, generalization, categorization.},
  volume = {15},
  number = {2},
  pages = {256-271},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = apr,
  year = {2008},
  slug = {4783},
  author = {J{\"a}kel, F. and Sch{\"o}lkopf, B. and Wichmann, FA.},
  month_numeric = {4}
}