Probabilistic Numerics · Perceiving Systems · Conference Paper · 2017

Coupling Adaptive Batch Sizes with Learning Rates

Abstract

Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence are thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On three image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available.
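
As a rough illustration of the coupling described in the abstract, the following Python sketch derives a batch size from the current learning rate, an estimate of the per-parameter gradient variances, and the current loss value. Function and variable names, the clipping bounds, and the assumption that the minimal loss is close to zero are illustrative choices, not taken from the paper or its TensorFlow implementation.

import numpy as np

def suggested_batch_size(learning_rate, grad_variances, loss_value,
                         m_min=16, m_max=4096):
    # Total gradient-noise level: sum of per-parameter sample variances,
    # estimated from the current mini-batch (assumption: such an estimate
    # is available alongside the gradient).
    total_variance = float(np.sum(grad_variances))
    # Coupling rule sketched in the abstract: keep the mini-batch gradient
    # variance proportional to the objective value, so the batch size grows
    # with the learning rate and the noise level, and as the loss approaches
    # its (assumed near-zero) minimum.
    m = learning_rate * total_variance / max(loss_value, 1e-12)
    # Round up and clip to practical limits (illustrative bounds).
    return int(np.clip(np.ceil(m), m_min, m_max))

# Example: noisy gradients early in training suggest a moderate batch size.
print(suggested_batch_size(learning_rate=0.1,
                           grad_variances=np.ones(1000) * 0.5,
                           loss_value=2.3))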

Author(s): Balles, L. and Romero, J. and Hennig, P.
Book Title: Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)
Pages: ID 141
Year: 2017
Month: August
Editors: Gal Elidan, Kristian Kersting, and Alexander T. Ihler
Bibtex Type: Conference Paper (inproceedings)
Event Place: Sydney, Australia
State: Published
URL: http://auai.org/uai2017/proceedings/papers/141.pdf

BibTeX

@inproceedings{balles2017coupling,
  title = {Coupling Adaptive Batch Sizes with Learning Rates},
  booktitle = {Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)},
  abstract = {Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence are thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On three image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available.},
  pages = {ID 141},
  editor = {Gal Elidan and Kristian Kersting and Alexander T. Ihler},
  month = aug,
  year = {2017},
  slug = {balles2016cabs},
  author = {Balles, L. and Romero, J. and Hennig, P.},
  url = {http://auai.org/uai2017/proceedings/papers/141.pdf},
  month_numeric = {8}
}