Empirical Inference Poster 2004

Efficient Approximations for Support Vector Classifiers

In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each method: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training are rather straightforward and, more importantly, guaranteed to converge to a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers tend to have large representations which are inappropriate for time-critical image processing applications.

In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.
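The early-rejection scheme described in the abstract can be sketched as follows. This is a minimal illustration only: the two linear stages, their weights, and their thresholds are hypothetical placeholders standing in for the SVM approximations of increasing complexity, not the authors' actual detectors.

```python
import numpy as np

def cascade_detect(patches, stages):
    """Evaluate a cascade of increasingly complex classifiers.

    patches: array of image patches, shape (n_patches, n_features)
    stages:  scoring functions ordered simplest to most complex; each
             returns a score per patch, thresholded at 0.
    """
    # Every patch position starts out "alive".
    alive = np.arange(len(patches))
    for stage in stages:
        if len(alive) == 0:
            break
        scores = stage(patches[alive])
        # Keep only positions classified positive; later (more
        # expensive) stages never see patches rejected earlier.
        alive = alive[scores >= 0.0]
    return alive  # indices surviving all stages = detections

# Toy usage with two hypothetical linear stages:
rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 16))
w_simple = rng.normal(size=16)
w_complex = rng.normal(size=16)
stages = [
    lambda x: x @ w_simple - 1.0,   # cheap stage, tuned for high recall
    lambda x: x @ w_complex - 2.0,  # costly stage, sets final FP rate
]
detections = cascade_detect(patches, stages)
```

Because rejected positions are never passed to later stages, the average cost per patch is dominated by the cheap first stage, while the final false-positive rate is that of the last stage, as the abstract notes.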

Author(s): Kienzle, W. and Franz, MO.
Volume: 7
Pages: 68
Year: 2004
Month: February
Day: 0
Bibtex Type: Poster (poster)
Digital: 0
Electronic Archiving: grant_archive
Event Name: 7th Tübingen Perception Conference (TWK 2004)
Event Place: Tübingen, Germany
Links:

BibTex

@poster{KienzleF2004,
  title = {Efficient Approximations for Support Vector Classifiers},
  abstract = {In face detection, support vector machines (SVM) and neural networks (NN) have been shown
  to outperform most other classification methods. While both approaches are learning-based,
  there are distinct advantages and drawbacks to each method: NNs are difficult to design and
  train but can lead to very small and efficient classifiers. In comparison, SVM model selection
  and training are rather straightforward and, more importantly, guaranteed to converge to
  a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers
  tend to have large representations which are inappropriate for time-critical image processing
  applications.
  In this work, we examine various existing and new methods for simplifying support vector
  decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical
  and statistical advantages of SVMs. For a given SVM solution, we compute a cascade
  of approximations with increasing complexities. Each classifier is tuned so that the detection
  rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image.
  Then, any subsequent classifier is applied only to those positions that have been classified as
  positive throughout all previous stages. The false positive rate at the end equals that of the
  last (i.e. most complex) detector. In contrast, since many image positions are discarded by
  lower-complexity classifiers, the average computation time per patch decreases significantly
  compared to the time needed for evaluating the highest-complexity classifier alone.},
  volume = {7},
  pages = {68},
  month = feb,
  year = {2004},
  slug = {kienzlef2004},
  author = {Kienzle, W. and Franz, MO.},
  month_numeric = {2}
}