Empirical Inference Article 2011

Analysis of Fixed-Point and Coordinate Descent Algorithms for Regularized Kernel Methods

In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with a convex loss function and quadratic-norm regularization. The first is a new class of algorithms based on fixed-point iterations that are well suited to parallel implementation and can be used with any convex loss function. The second is based on coordinate descent and generalizes techniques previously proposed for linear support vector machines; it exploits the structure of additively separable loss functions to compute the solutions of line searches in closed form. Both methodologies are easy to implement. We also show how to remove the non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.
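To give a flavor of the two methodologies, here is a minimal sketch for the special case of the squared loss (kernel ridge regression), where the regularized solution satisfies (K + λI)c = y. In this case the fixed-point map reduces to c ← (y − Kc)/λ, which is a contraction whenever λ exceeds the spectral norm of K, and the coordinate-descent line searches have a closed form because the objective is quadratic in each coefficient. This is an illustrative special case under our own choices of kernel, jitter, and step counts, not the paper's general algorithms or notation.

```python
import numpy as np

# Toy data and a Gaussian kernel matrix; the small diagonal jitter is a
# conditioning choice of this demo, not a requirement of the methods.
rng = np.random.default_rng(0)
n = 20
X = rng.standard_normal((n, 2))
y = np.sin(X[:, 0])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq) + 0.1 * np.eye(n)

# For the squared loss, the coefficients solve (K + lam*I) c = y.
lam = 2.0 * np.linalg.norm(K, 2)   # lam > ||K|| makes the fixed-point map a contraction
c_direct = np.linalg.solve(K + lam * np.eye(n), y)

# 1) Fixed-point iteration: c <- (y - K c) / lam.  Every component of the
#    update depends only on the previous iterate, so it parallelizes.
c_fp = np.zeros(n)
for _ in range(500):
    c_fp = (y - K @ c_fp) / lam

# 2) Cyclic coordinate descent with exact (closed-form) line searches on
#    J(c) = 0.5*||K c - y||^2 + 0.5*lam * c'K c, maintaining f = K c.
c_cd = np.zeros(n)
f = K @ c_cd
for _ in range(5000):
    for j in range(n):
        g = K[:, j] @ (f - y) + lam * f[j]       # dJ/dc_j
        h = K[:, j] @ K[:, j] + lam * K[j, j]    # d^2J/dc_j^2
        step = g / h
        c_cd[j] -= step
        f -= step * K[:, j]

assert np.allclose(c_fp, c_direct, atol=1e-8)
assert np.allclose(c_cd, c_direct, atol=1e-6)
```

Both iterations recover the direct solve; for general convex losses the fixed-point map replaces the residual y − Kc with a loss-derivative term, and coordinate descent retains closed-form line searches whenever the loss is additively separable.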

Author(s): Dinuzzo, F.
Journal: IEEE Transactions on Neural Networks
Volume: 22
Number (issue): 10
Pages: 1576-1587
Year: 2011
Month: October
Bibtex Type: Article (article)
DOI: 10.1109/TNN.2011.2164096

BibTeX

@article{Dinuzzo2011_2,
  title = {Analysis of Fixed-Point and Coordinate Descent Algorithms for Regularized Kernel Methods},
  journal = {IEEE Transactions on Neural Networks},
  abstract = {In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well-suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent, and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. The two methodologies are both very easy to implement. In this paper, we also show how to remove non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.},
  volume = {22},
  number = {10},
  pages = {1576-1587},
  month = oct,
  year = {2011},
  slug = {dinuzzo2011_2},
  author = {Dinuzzo, F.},
  month_numeric = {10}
}