
Advances in Large Margin Classifiers

The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification (that is, a scale parameter) rather than the raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
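To make the margin notion concrete, here is a minimal sketch (not taken from the book; the data and function names are illustrative): for a linear classifier f(x) = w·x + b with labels y in {-1, +1}, the geometric margin of an example is y(w·x + b)/||w||. A positive value means a correct classification, and its magnitude is the confidence level the abstract refers to.

```python
import numpy as np

def geometric_margin(w, b, x, y):
    """Signed geometric margin of a labeled example (x, y) for the
    linear classifier f(x) = w.x + b.  Positive => correctly classified;
    the magnitude measures the confidence of the decision."""
    return y * (np.dot(w, x) + b) / np.linalg.norm(w)

# Hypothetical example values for illustration.
w = np.array([3.0, 4.0])   # ||w|| = 5
b = -1.0
x = np.array([2.0, 1.0])   # w.x + b = 6 + 4 - 1 = 9
print(geometric_margin(w, b, x, +1))  # 1.8
```

Large margin methods such as support vector machines choose w and b to maximize the smallest such margin over the training set, rather than merely minimizing the number of training errors.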

Author(s): Smola, AJ. and Bartlett, PJ. and Schölkopf, B. and Schuurmans, D.
Pages: 422
Year: 2000
Month: October
Day: 0
Series: Neural Information Processing
Publisher: MIT Press
Bibtex Type: Book (book)
Address: Cambridge, MA, USA
Digital: 0
Electronic Archiving: grant_archive
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik
BibTeX

@book{974,
  title = {Advances in Large Margin Classifiers},
  abstract = {The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms.
  The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.},
  pages = {422},
  series = {Neural Information Processing},
  publisher = {MIT Press},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Cambridge, MA, USA},
  month = oct,
  year = {2000},
  slug = {974},
  author = {Smola, AJ. and Bartlett, PJ. and Sch{\"o}lkopf, B. and Schuurmans, D.},
  month_numeric = {10}
}