Ali Ramezani-Kebrya (left) presenting this research at an ICML 2024 poster session

Blog

Understanding Deep Learning via Generalization and Optimization Analysis for Accelerated SGD

November 15, 2024

By Ali Ramezani-Kebrya, Associate Professor at SFI Visual Intelligence / University of Oslo

Momentum methods are widely used to accelerate empirical risk minimization when training neural networks, in particular deep networks with complex, nonconvex loss landscapes. Intuitively, adding momentum accelerates convergence by circumventing sharp curvatures and long ravines in the sub-level sets of the objective function.
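
For reference, the classical heavy-ball update underlying SGD with momentum (SGDM) can be written as follows; the notation (step size η, momentum μ, sampled index i_t) is generic and not necessarily the paper's exact formulation:

```latex
% Heavy-ball / SGD with momentum update (generic notation, not the paper's):
% w_t : current iterate, \eta : step size, \mu \in [0,1) : momentum parameter,
% \nabla f_{i_t}(w_t) : stochastic gradient evaluated on the sampled example i_t
w_{t+1} \;=\; w_t \;-\; \eta\, \nabla f_{i_t}(w_t) \;+\; \mu \,\big( w_t - w_{t-1} \big)
```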

While momentum-based accelerated variants of stochastic gradient descent (SGD) are widely used when training deep neural networks, there is little theoretical understanding of the generalization error of such methods.

We show that it is crucial to strike an appropriate balance between the optimization error associated with the empirical risk and the generalization error when accelerating SGD with momentum. We establish generalization error bounds and explicit convergence rates for SGD with momentum under a broad range of hyperparameters, including a general step-size rule.
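
To make this trade-off concrete, the expected true risk splits into a generalization term and an empirical (optimization) term; the decomposition below is the standard one, written in generic notation rather than the paper's:

```latex
% Standard risk decomposition (generic notation):
% R(w)  : true (population) risk,   R_S(w) : empirical risk on the sample S,
% w_T   : output of the algorithm after T steps; the expectation is over S
%         and the randomness of the algorithm.
\mathbb{E}\big[ R(w_T) \big]
  \;=\;
  \underbrace{\mathbb{E}\big[ R(w_T) - R_S(w_T) \big]}_{\text{generalization error}}
  \;+\;
  \underbrace{\mathbb{E}\big[ R_S(w_T) \big]}_{\text{expected empirical risk}}
```

Accelerating the decrease of the empirical-risk term, for instance with aggressive momentum, can inflate the generalization term, which is why the two have to be balanced.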

We first show that there exists a convex loss function for which the stability gap for multiple epochs of SGD with standard heavy-ball momentum (SGDM) becomes unbounded. For smooth Lipschitz loss functions, we analyze SGD with early momentum (SGDEM) under a broad range of step-sizes, and show that it can train machine learning models for multiple epochs with a generalization guarantee.
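
As a rough illustration of the early-momentum idea, here is a minimal PyTorch-style training loop, assuming SGDEM applies heavy-ball momentum only for an initial number of steps and then continues as vanilla SGD; the function name, the switch point, and all hyperparameter values are illustrative rather than the paper's prescription:

```python
# Illustrative sketch of an "early momentum" (SGDEM-style) schedule in PyTorch.
# Assumption (not taken from the paper): momentum is active only for the first
# `momentum_steps` updates, after which training continues as plain SGD.
import torch


def train_sgdem(model, loss_fn, data_loader, total_steps,
                momentum_steps, lr=0.1, momentum=0.9):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    step = 0
    while step < total_steps:
        for x, y in data_loader:
            if step == momentum_steps:
                # Switch momentum off: the remaining updates are vanilla SGD.
                for group in optimizer.param_groups:
                    group["momentum"] = 0.0
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            step += 1
            if step >= total_steps:
                break
    return model
```

In this sketch the only design choice specific to SGDEM is the single momentum switch; the step size could just as well follow a decaying schedule, which is the kind of general step-size rule the analysis covers.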

A similar hybrid method has been shown to generalize better than both vanilla SGD and Adaptive Moment Estimation (Adam) in deep learning practice, but why such hybrid methods generalize better had remained unclear. Our work sheds theoretical light on this question.

For the special case of strongly convex loss functions, we find a range of momentum values such that standard SGDM, as a special form of SGDEM, also generalizes over multiple epochs. Extending our results on generalization, we also develop an upper bound on the expected true risk in terms of the number of training steps, the sample size, and the momentum.

Publication

On the Generalization of Stochastic Gradient Descent with Momentum

January 1, 2024

Ali Ramezani-Kebrya, Kimon Antonakopoulos, Volkan Cevher, Ashish Khisti, Ben Liang

Paper abstract

While momentum-based accelerated variants of stochastic gradient descent (SGD) are widely used when training machine learning models, there is little theoretical understanding on the generalization error of such methods. In this work, we first show that there exists a convex loss function for which the stability gap for multiple epochs of SGD with standard heavy-ball momentum (SGDM) becomes unbounded. Then, for smooth Lipschitz loss functions, we analyze a modified momentum-based update rule, i.e., SGD with early momentum (SGDEM) under a broad range of step-sizes, and show that it can train machine learning models for multiple epochs with a guarantee for generalization. Finally, for the special case of strongly convex loss functions, we find a range of momentum such that multiple epochs of standard SGDM, as a special form of SGDEM, also generalizes. Extending our results on generalization, we also develop an upper bound on the expected true risk, in terms of the number of training steps, sample size, and momentum. Our experimental evaluations verify the consistency between the numerical results and our theoretical bounds. SGDEM improves the generalization error of SGDM when training ResNet-18 on ImageNet in practical distributed settings.