# Stability and optimality in stochastic gradient descent

This repository contains the code implementing the methods and algorithms of a paper in progress.

## Maintainer

## References

- Francis Bach and Eric Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). Advances in Neural Information Processing Systems, 2013.
- Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1-22, 2010.
- Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in Neural Information Processing Systems, 2013.
- David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.
- Wei Xu. Towards optimal one pass large scale learning with averaged stochastic gradient descent. arXiv preprint arXiv:1107.2490, 2011.
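The references above center on iterate averaging in stochastic gradient descent (Ruppert 1988; Bach and Moulines 2013), which trades none of SGD's per-step cost for improved statistical efficiency. As a minimal illustrative sketch (not the paper's actual implementation), averaged SGD on a least-squares objective can be written as follows; the function name and all parameters are assumptions for the example:

```python
import numpy as np

def averaged_sgd(X, y, lr=0.05, n_passes=1, seed=0):
    """Ruppert-Polyak averaged SGD for least squares:
    minimize (1/2n) * ||X w - y||^2.

    Returns both the last iterate and the running average of all
    iterates; the average typically attains the optimal O(1/n) rate
    with a constant step size (Bach and Moulines, 2013).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)      # current SGD iterate
    w_bar = np.zeros(d)  # running mean of iterates (Ruppert-Polyak average)
    t = 0
    for _ in range(n_passes):
        for i in rng.permutation(n):
            t += 1
            # Stochastic gradient of (1/2)(x_i . w - y_i)^2 at sample i.
            grad = (X[i] @ w - y[i]) * X[i]
            w = w - lr * grad
            # Online update of the mean: w_bar_t = w_bar_{t-1} + (w_t - w_bar_{t-1}) / t.
            w_bar += (w - w_bar) / t
    return w, w_bar
```

On well-conditioned synthetic data, the averaged iterate `w_bar` is noticeably less sensitive to the step size than the last iterate `w`, which is the stability property the title alludes to.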