Article 2020

Representing absence of evidence: why algorithms and representations matter in models of language and cognition

Abstract: Theories of language and cognition develop iteratively from ideas, experiments and models. The abstract nature of “cognitive processes” means that computational models play a critical role in this, yet bridging the gaps between models, data, and interpretations is challenging. While how and why computations are performed is often the primary research focus, the conclusions drawn from models can be compromised by the representations chosen for them. To illustrate this point, we revisit a set of empirical studies of language acquisition that appear to support different models of learning from implicit negative evidence. We examine the degree to which these conclusions were influenced by the representations chosen and show how a plausible single-mechanism account of the data can be formulated for representations that faithfully capture the task design. The need for input representations to be incorporated into model conceptualisations, evaluations, and comparisons is discussed.

Author(s): Bröker, F and Ramscar, M
Journal: {Language, Cognition and Neuroscience}
Volume: Epub ahead
Year: 2020
Publisher: Routledge
Bibtex Type: Article (article)
DOI: 10.1080/23273798.2020.1862257
Address: London
Electronic Archiving: grant_archive

BibTeX

@article{item_3275354,
  title = {{Representing absence of evidence: why algorithms and representations matter in models of language and cognition}},
  journal = {{Language, Cognition and Neuroscience}},
  abstract = {{Theories of language and cognition develop iteratively from ideas, experiments and models. The abstract nature of \textquotedblleft cognitive processes\textquotedblright{} means that computational models play a critical role in this, yet bridging the gaps between models, data, and interpretations is challenging. While how and why computations are performed is often the primary research focus, the conclusions drawn from models can be compromised by the representations chosen for them. To illustrate this point, we revisit a set of empirical studies of language acquisition that appear to support different models of learning from implicit negative evidence. We examine the degree to which these conclusions were influenced by the representations chosen and show how a plausible single-mechanism account of the data can be formulated for representations that faithfully capture the task design. The need for input representations to be incorporated into model conceptualisations, evaluations, and comparisons is discussed.}},
  volume = {Epub ahead},
  publisher = {Routledge},
  address = {London},
  year = {2020},
  slug = {item_3275354},
  doi = {10.1080/23273798.2020.1862257},
  author = {Br\"oker, F and Ramscar, M}
}