Adapted from:
Lecture 9 of Hongyi Li's ML course, "Tips for DL"
https://www.jianshu.com/p/aebcaf8af76e
"ADAM: A Method for Stochastic Optimization"
https://zhuanlan.zhihu.com/p/105788925
Gradient descent and its variants
-
stochastic gradient descent
Each update takes in a single data point or a mini-batch.
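A minimal NumPy sketch of the mini-batch update described above; the least-squares loss, batch size, and learning rate are illustrative assumptions rather than anything from the source:

```python
import numpy as np

def sgd_step(w, grad_fn, batch, lr=0.01):
    """One SGD update: w <- w - lr * gradient computed on a single mini-batch."""
    g = grad_fn(w, batch)
    return w - lr * g

def lsq_grad(w, batch):
    """Gradient of the toy least-squares loss ||X w - y||^2 / (2 * batch_size)."""
    X, y = batch
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 3)), rng.normal(size=128)
w = np.zeros(3)
for step in range(100):
    idx = rng.choice(len(y), size=16, replace=False)  # sample a mini-batch
    w = sgd_step(w, lsq_grad, (X[idx], y[idx]))
```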

-
Adagrad
Uses first derivatives to estimate the second derivative (the RMS denominator acts as a proxy for curvature).
Adaptive learning rate = learning rate / RMS of all previous and current gradients.
Large RMS (accumulated gradients): small learning rate, i.e. the update speed should slow down;
small RMS: large learning rate.
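A sketch of the standard Adagrad form, where the learning rate is divided by the root of the accumulated squared gradients (equivalent to the RMS formulation above once the time-averaging factor cancels against a 1/sqrt(t) learning-rate decay); epsilon and the toy loss are my own illustrative choices:

```python
import numpy as np

def adagrad_update(w, g, sum_sq, lr=0.1, eps=1e-8):
    """Adagrad: divide the learning rate by the root of the accumulated squared gradients.

    Large accumulated RMS -> small effective step (slow down);
    small accumulated RMS -> large effective step.
    """
    sum_sq = sum_sq + g**2                      # accumulate squared gradients over all steps
    w = w - lr * g / (np.sqrt(sum_sq) + eps)
    return w, sum_sq

w, sum_sq = np.array([1.0, 1.0]), np.zeros(2)
for t in range(50):
    g = 2 * w                                   # gradient of the toy loss f(w) = ||w||^2
    w, sum_sq = adagrad_update(w, g, sum_sq)
```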

-
RMSProp
Takes a weighted (exponential moving) average of the squared gradients.
Adaptive learning rate = learning rate / sigma_t
sigma_t accumulates all previous gradients g_0 to g_{t-1} and the current gradient g_t.
Small alpha: tends to trust the current gradient g_t when updating w_{t-1};
large alpha: tends to trust the previous gradients (sigma_{t-1}) when updating w_{t-1}.
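A sketch of the RMSProp rule, where sigma_t follows the recursion sigma_t^2 = alpha * sigma_{t-1}^2 + (1 - alpha) * g_t^2, so a small alpha trusts the current gradient more; the value alpha = 0.9 is a common default assumed here, not taken from the source:

```python
import numpy as np

def rmsprop_update(w, g, sigma_sq, lr=0.01, alpha=0.9, eps=1e-8):
    """RMSProp: weighted average of squared gradients.

    sigma_t^2 = alpha * sigma_{t-1}^2 + (1 - alpha) * g_t^2
    Small alpha -> trust the current gradient g_t more;
    large alpha -> trust the history sigma_{t-1} more.
    """
    sigma_sq = alpha * sigma_sq + (1 - alpha) * g**2
    w = w - lr * g / (np.sqrt(sigma_sq) + eps)
    return w, sigma_sq

w, sigma_sq = np.array([1.0, 1.0]), np.zeros(2)
for t in range(50):
    g = 2 * w                                   # gradient of the toy loss f(w) = ||w||^2
    w, sigma_sq = rmsprop_update(w, g, sigma_sq)
```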

-
momentum

The update roughly continues along the previous direction v_{t-1}; the newly computed gradient g_t corrects that direction, with v_{t-1} simply added onto g_t, which strengthens the components of g_t aligned with v_{t-1} and weakens the components opposed to it.
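A sketch of the momentum update: the new step combines the previous direction v_{t-1} with the current gradient contribution, so aligned components reinforce and opposed components cancel. The decay factor beta = 0.9 and folding the learning rate into v are common conventions assumed here, not fixed by the note:

```python
import numpy as np

def momentum_update(w, g, v, lr=0.01, beta=0.9):
    """Momentum: v_t = beta * v_{t-1} - lr * g_t, then w_t = w_{t-1} + v_t.

    Components of g_t aligned with v_{t-1} are reinforced,
    opposed components are damped.
    """
    v = beta * v - lr * g
    return w + v, v

w, v = np.array([1.0, 1.0]), np.zeros(2)
for t in range(50):
    g = 2 * w                                   # gradient of the toy loss f(w) = ||w||^2
    w, v = momentum_update(w, g, v)
```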
-
Adam


m_t  // first moment vector: the movement vector from momentum. First Moment Estimation, i.e. the mean of the gradients.
v_t  // second raw moment vector, i.e. E[X^2], as in RMSProp. Second Moment Estimation, i.e. the uncentered variance of the gradients.
then bias correction:
Because m_0 and v_0 are initialized to 0, m_t and v_t are biased toward 0, especially in the early stages of training. Bias correction is therefore applied to m_t and v_t to reduce the impact of this bias early on: the correction factor 1 - beta^t is small (close to 0) at the start and gradually approaches 1 as t grows.
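Putting the pieces together, a sketch of the Adam update with bias correction; the defaults beta1 = 0.9, beta2 = 0.999, eps = 1e-8 follow the Adam paper cited above, while the toy loss is an illustrative assumption:

```python
import numpy as np

def adam_update(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: momentum-style first moment + RMSProp-style second raw moment,
    with bias correction because m_0 = v_0 = 0 biases early estimates toward 0."""
    m = beta1 * m + (1 - beta1) * g             # first moment: mean of gradients
    v = beta2 * v + (1 - beta2) * g**2          # second raw moment: uncentered variance
    m_hat = m / (1 - beta1**t)                  # bias correction: 1 - beta^t -> 1 as t grows
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, 1.0])
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 201):                         # t starts at 1 so 1 - beta**t != 0
    g = 2 * w                                   # gradient of the toy loss f(w) = ||w||^2
    w, m, v = adam_update(w, g, m, v, t)
```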
These notes cover the optimizers commonly used in deep learning, including stochastic gradient descent (SGD), AdaGrad, RMSProp, momentum, and Adam, analyzing the characteristics of each and discussing how to choose a suitable optimizer to improve training efficiency.





