Academy of Mathematics and Systems Science, CAS Colloquia & Seminars

Speaker: Prof. Qing Ling (凌青), Sun Yat-sen University
Inviter: Prof. Xin Liu (刘歆)
Title: Variance-Reduced Stochastic Quasi-Newton Methods for Decentralized Learning
Time & Venue: 2021.10.25, 09:30-10:30, Room 311, Sci-Tech Building (科技综合楼)
Abstract: In this work, we investigate stochastic quasi-Newton methods for minimizing a finite sum of cost functions over a decentralized network. We develop a general algorithmic framework that combines stochastic quasi-Newton approximation with variance reduction so as to achieve fast convergence. At each iteration, each node constructs a local, inexact quasi-Newton direction that asymptotically approaches the global, exact one. Specifically, (i) a local gradient approximation is constructed by using dynamic average consensus to track the average of the variance-reduced local stochastic gradients over the entire network; (ii) a local Hessian inverse approximation is assumed to be positive definite with bounded eigenvalues, and we specify two fully decentralized stochastic quasi-Newton methods, damped regularized limited-memory DFP (Davidon-Fletcher-Powell) and damped limited-memory BFGS (Broyden-Fletcher-Goldfarb-Shanno), to construct such a Hessian inverse approximation locally without extra sampling or communication. Compared with existing decentralized stochastic first-order methods, the proposed framework introduces second-order curvature information without incurring extra sampling or communication. With a fixed step size, we establish the conditions under which the proposed framework converges linearly to the exact optimal solution.
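
For illustration only, the following is a minimal LaTeX sketch of the kind of per-node update the abstract describes, namely consensus mixing with a local quasi-Newton step plus dynamic-average-consensus gradient tracking; all notation (local iterate x_i^t, tracker y_i^t, variance-reduced local stochastic gradient v_i^t, Hessian inverse approximation H_i^t, mixing weights w_{ij}, fixed step size \alpha) is assumed here rather than taken from the announcement.

% Sketch (notation assumed, not from the announcement):
% x_i^t : local iterate at node i      y_i^t : gradient tracker
% v_i^t : variance-reduced local stochastic gradient
% H_i^t : local Hessian inverse approximation (e.g., damped L-DFP or L-BFGS)
% w_{ij}: mixing weights of the network   \alpha : fixed step size
\begin{align*}
  x_i^{t+1} &= \sum_{j} w_{ij}\, x_j^{t} - \alpha\, H_i^{t} y_i^{t}
    && \text{(consensus mixing + local quasi-Newton step)} \\
  y_i^{t+1} &= \sum_{j} w_{ij}\, y_j^{t} + v_i^{t+1} - v_i^{t}
    && \text{(dynamic average consensus on variance-reduced gradients)}
\end{align*}

In updates of this form, y_i^t tracks the network-wide average of the variance-reduced stochastic gradients, so the local direction H_i^t y_i^t approaches a global quasi-Newton direction, which matches the abstract's statement that the local, inexact direction asymptotically approaches the global, exact one.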