@(Paper summaries)[Neural Networks|Optimization]
Use case: the training set is very large.
Benefit: helps avoid getting stuck in poor local optima.
Neural networks are often trained stochastically, i.e. using a method where the objective function changes at each iteration. This stochastic variation is due to the model being trained on different data during each iteration. This is motivated by (at least) two factors: First, the dataset used as training data is often too large to fit in memory and/or be optimized over efficiently. Second, the objective function is typically nonconvex, so using different data at each iteration can help prevent the model from settling in a local minimum. Furthermore, training neural networks is usually done using only the first-order gradient of the loss function with respect to the parameters. This is due to the large number of parameters present in a neural network, which for practical purposes prevents the computation of the Hessian matrix. Because vanilla gradient descent can diverge or converge incredibly slowly if its learning rate hyperparameter is set inappropriately, many alternative methods have been proposed which are intended to produce desirable convergence with less dependence on hyperparameter settings. These methods often effectively compute and utilize a preconditioner on the gradient, adaptively change the learning rate over time, or approximate the Hessian matrix.
In the following, we will use θt to denote some generic parameter of the model at iteration t, to be optimized according to some loss function L. We write ∇L(θt) for the gradient of L with respect to θt, computed on the data used at iteration t.
Stochastic Gradient Descent
Stochastic gradient descent (SGD) simply updates each parameter by subtracting the gradient of the loss with respect to that parameter, scaled by the learning rate η, a hyperparameter. If η is too large, SGD will diverge; if it is too small, it will converge slowly. The update rule is simply

θt+1 = θt − η∇L(θt)
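To make the update concrete, here is a minimal NumPy sketch of minibatch SGD on a toy quadratic loss; the names `sgd_step` and `grad_loss`, the toy data, and the hyperparameter values are illustrative assumptions rather than anything specified in the original text.

```python
import numpy as np

def sgd_step(theta, grad_loss, batch, eta=0.1):
    """One SGD step: theta_{t+1} = theta_t - eta * grad L(theta_t),
    with the gradient evaluated on the current minibatch."""
    return theta - eta * grad_loss(theta, batch)

# Toy example: L(theta) = mean over the batch of (x - theta)^2 / 2,
# whose gradient with respect to theta is mean(theta - x).
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=10_000)   # a "large" training set
grad_loss = lambda theta, batch: np.mean(theta - batch)

theta = 0.0
for t in range(1_000):
    batch = rng.choice(data, size=32)   # different data each iteration => stochastic objective
    theta = sgd_step(theta, grad_loss, batch)
# theta ends up close to 3.0, the minimizer of the full-data loss
```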
Momentum
In SGD, the gradient ∇L(θt) often changes rapidly at each iteration t due to the fact that the loss is being computed over different data. This is often partially mitigated by re-using the gradient value from the previous iteration, scaled by a momentum hyperparameter μ, as follows:

vt+1 = μvt − η∇L(θt)
θt+1 = θt + vt+1
It has been argued that including the previous gradient step has the effect of approximating some second-order information about the gradient.
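Continuing the same toy setup from the SGD sketch above (again, the function name and hyperparameter values are illustrative assumptions), a momentum step carries a velocity vt along with the parameter:

```python
def momentum_step(theta, v, grad_loss, batch, eta=0.1, mu=0.9):
    """Momentum update:
       v_{t+1}     = mu * v_t - eta * grad L(theta_t)
       theta_{t+1} = theta_t + v_{t+1}"""
    v_new = mu * v - eta * grad_loss(theta, batch)
    return theta + v_new, v_new

# Usage mirrors the SGD loop, but the velocity is threaded through,
# starting from v = 0.0:
#   theta, v = momentum_step(theta, v, grad_loss, batch)
```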
Nesterov’s Accelerated Gradient
In Nesterov’s Accelerated Gradient (NAG), the gradient of the loss at each step is computed at θt + μvt instead of θt. With momentum, the parameter update could be written θt+1 = θt + μvt − η∇L(θt), so NAG effectively computes the gradient at the new parameter location, but without considering the gradient term. In practice, this causes NAG to behave more stably than regular momentum in many situations. A more thorough analysis can be found in Sutskever, Martens, Dahl, and Hinton, “On the importance of initialization and momentum in deep learning” (ICML 2013). The update rules are then as follows:

vt+1 = μvt − η∇L(θt + μvt)
θt+1 = θt + vt+1
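A corresponding sketch of the NAG update; the only change from the momentum sketch is that the gradient is evaluated at the look-ahead point θt + μvt (names and values are, again, illustrative assumptions):

```python
def nag_step(theta, v, grad_loss, batch, eta=0.1, mu=0.9):
    """Nesterov's Accelerated Gradient update:
       v_{t+1}     = mu * v_t - eta * grad L(theta_t + mu * v_t)
       theta_{t+1} = theta_t + v_{t+1}"""
    lookahead = theta + mu * v                        # where momentum alone would move theta
    v_new = mu * v - eta * grad_loss(lookahead, batch)
    return theta + v_new, v_new
```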

This post surveys the stochastic optimization techniques used to train large neural networks, such as stochastic gradient descent (SGD), momentum, Nesterov momentum, Adagrad, RMSProp, Adadelta, and Adam. By adapting the learning rate, incorporating past gradient information, or using momentum, these methods help avoid poor local optima and improve training efficiency.