Empirical Inference Conference Paper 2006

Evaluating Predictive Uncertainty Challenge

This Chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed Chapters by the participants who obtained outstanding results, and provides a discussion with some lessons to be learnt. The Challenge was set up to evaluate the ability of Machine Learning algorithms to provide good “probabilistic predictions”, rather than just the usual “point predictions” with no measure of uncertainty, in regression and classification problems. Participants had to compete on a number of regression and classification tasks, and were evaluated by both traditional losses that only take into account point predictions and losses we proposed that evaluate the quality of the probabilistic predictions.

Author(s): Quiñonero Candela, J. and Rasmussen, C. E. and Sinz, F. and Bousquet, O. and Schölkopf, B.
Book Title: Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment
Journal: Machine Learning Challenges: First PASCAL Machine Learning Challenges Workshop (MLCW 2005)
Pages: 1-27
Year: 2006
Month: April
Editors: J Quiñonero Candela and I Dagan and B Magnini and F d'Alché-Buc
Publisher: Springer
Bibtex Type: Conference Paper (inproceedings)
Address: Berlin, Germany
DOI: 10.1007/11736790_1
Event Name: First PASCAL Machine Learning Challenges Workshop (MLCW 2005)
Event Place: Southampton, United Kingdom
ISBN: 978-3-540-33428-6
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik
BibTeX

@inproceedings{3924,
  title = {Evaluating Predictive Uncertainty Challenge},
  journal = {Machine Learning Challenges: First PASCAL Machine Learning Challenges Workshop (MLCW 2005)},
  booktitle = {Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment},
  abstract = {This Chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed Chapters by the participants who obtained outstanding results, and provides a discussion with some lessons to be learnt. The Challenge was set up to evaluate the ability of Machine Learning algorithms to provide good “probabilistic predictions”, rather than just the usual “point predictions” with no measure of uncertainty, in regression and classification problems. Participants had to compete on a number of regression and classification tasks, and were evaluated by both traditional losses that only take into account point predictions and losses we proposed that evaluate the quality of the probabilistic predictions.},
  pages = {1-27},
  editors = {J Quiñonero Candela and I Dagan and B Magnini and F d'Alché-Buc},
  publisher = {Springer},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Berlin, Germany},
  month = apr,
  year = {2006},
  slug = {3924},
  author = {Qui{\~n}onero Candela, J. and Rasmussen, C. E. and Sinz, F. and Bousquet, O. and Sch{\"o}lkopf, B.},
  month_numeric = {4}
}