[Repost] Overview of gradient descent algorithms

This post takes an in-depth look at how popular gradient-based optimization algorithms such as Momentum, AdaGrad, RMSprop, and Adam work, with the aim of providing an intuitive understanding of these algorithms commonly used to optimize neural networks.


An overview of gradient descent optimization algorithms

Gradient descent is the preferred way to optimize neural networks and many other machine learning algorithms, but it is often used as a black box. This post explores how many of the most popular gradient-based optimization algorithms, such as Momentum, Adagrad, and Adam, actually work.

Sebastian Ruder

19 JAN 2016 • 28 MIN READ


Note: If you are looking for a review paper, this blog post is also available as an article on arXiv.

Update 20.03.2020: Added a note on recent optimizers.

Update 09.02.2018: Added AMSGrad.

Update 24.11.2017: Most of the content in this article is now also available as slides.

Update 15.06.2017: Added derivations of AdaMax and Nadam.

Update 21.06.2016: This post was posted to Hacker News. The discussion provides some interesting pointers to related work and other techniques.


Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne's, caffe's, and keras' documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by.
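As an illustration of this black-box usage, here is a minimal sketch using the modern tf.keras API (which postdates this post; the toy model, learning rates, and loss below are arbitrary demonstration choices, not taken from the post):

```python
import tensorflow as tf

# A toy model; the architecture is arbitrary and only serves to show
# how an optimizer is picked as a black box.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Swapping optimization algorithms is a one-line change:
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
# optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)

model.compile(optimizer=optimizer, loss="mse")
```

The ease of swapping one optimizer for another is exactly why practitioners rarely look inside them, which is what the rest of this post sets out to remedy.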

This blog post aims at providing you with intuitions towards the behaviour of different algorithms for optimizing gradient descent that will help you put them to use. We are first going to look at the different variants of gradient descent. We will then briefly summarize challenges during training. Subsequently, we will introduce the most common optimization algorithms by showing their motivation to resolve these challenges and how this leads to the derivation of their update rules. We will also take a short look at algorithms and architectures to optimize gradient descent in a parallel and distributed setting. Finally, we will consider additional strategies that are helpful for optimizing gradient descent.

Gradient descent is a way to minimize an objective function $J(\theta)$ parameterized by a model's parameters $\theta \in \mathbb{R}^d$ by updating the parameters in the opposite direction of the gradient of the objective function $\nabla_\theta J(\theta)$ w.r.t. the parameters. The learning rate $\eta$ determines the size of the steps we take to reach a (local) minimum. In other words, we follow the direction of the slope of the surface created by the objective function downhill until we reach a valley. If you are unfamiliar with gradient descent, you can find a good introduction on optimizing neural networks here.
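Concretely, each step applies the update $\theta = \theta - \eta \cdot \nabla_\theta J(\theta)$. The following is a minimal NumPy sketch of this loop on an illustrative quadratic objective; the matrix A, vector b, learning rate, and step count are arbitrary demonstration values, not taken from the post:

```python
import numpy as np

# Illustrative quadratic objective J(theta) = 0.5 * ||A @ theta - b||^2,
# whose gradient w.r.t. theta is A.T @ (A @ theta - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad_J(theta):
    return A.T @ (A @ theta - b)

theta = np.zeros(2)  # initial parameters
eta = 0.05           # learning rate: size of each downhill step

for _ in range(500):
    theta = theta - eta * grad_J(theta)  # step opposite the gradient

print(theta)                  # approaches the minimizer of J
print(np.linalg.solve(A, b))  # closed-form minimizer for comparison
```

Because this toy objective is convex, the loop converges to the unique minimizer; on the non-convex error surfaces of neural networks, the same update typically reaches only a local minimum.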

For the full article, please see the original post: Overview of gradient descent algorithms
