Lr_policy (continuously updated)

This post explains in detail how the different learning-rate policies in Caffe work and how to use them, covering 'fixed', 'step', 'exp', 'inv', 'multistep', 'poly' and 'sigmoid', and compares the learning-rate curves produced by each policy with plots.


reference:

https://stackoverflow.com/questions/30033096/what-is-lr-policy-in-caffe/30045244

I am just trying to find out how to use Caffe. To do so, I took a look at the different .prototxt files in the examples folder. There is one option I don't understand:

The learning rate policy

lr_policy: "inv"
Possible values seem to be:

"fixed"
"inv"
"step"
"multistep"
"stepearly"
Could somebody please explain those options?

If you look inside the /caffe-master/src/caffe/proto/caffe.proto file (you can find it online here) you will see the following descriptions:

// The learning rate decay policy. The currently implemented learning rate
// policies are as follows:
//    - fixed: always return base_lr.
//    - step: return base_lr * gamma ^ (floor(iter / step))
//    - exp: return base_lr * gamma ^ iter
//    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
//    - multistep: similar to step but it allows non uniform steps defined by
//      stepvalue
//    - poly: the effective learning rate follows a polynomial decay, to be
//      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
//    - sigmoid: the effective learning rate follows a sigmoid decay
//      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
//
// where base_lr, max_iter, gamma, step, stepvalue and power are defined
// in the solver parameter protocol buffer, and iter is the current iteration.
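To make these formulas concrete, here is a small Python sketch (my own code, not Caffe's implementation) that evaluates each policy at a given iteration; the function name and the illustrative `step_values` default are assumptions:

```python
import math

def lr_at(policy, it, base_lr=0.01, gamma=0.0001, power=0.75,
          step_size=5000, max_iter=50000, step_values=(10000, 30000)):
    """Return the learning rate at iteration `it` for a Caffe-style policy."""
    if policy == "fixed":
        return base_lr
    if policy == "step":
        return base_lr * gamma ** (it // step_size)
    if policy == "exp":
        return base_lr * gamma ** it
    if policy == "inv":
        return base_lr * (1 + gamma * it) ** (-power)
    if policy == "multistep":
        # like "step", but the drop points are the explicit step_values
        passed = sum(1 for s in step_values if it >= s)
        return base_lr * gamma ** passed
    if policy == "poly":
        return base_lr * (1 - it / max_iter) ** power
    if policy == "sigmoid":
        return base_lr * (1.0 / (1.0 + math.exp(-gamma * (it - step_size))))
    raise ValueError("unknown policy: %s" % policy)
```

For example, `lr_at("poly", max_iter)` is exactly zero, matching the comment that poly decays to zero by `max_iter`, and `lr_at("sigmoid", step_size)` is `base_lr / 2`, the midpoint of the sigmoid.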
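Putting the pieces together, a hedged sketch of how these settings would appear in a solver.prototxt for the inv policy (values chosen to match the plots below, not taken from any specific Caffe example):

```protobuf
# illustrative solver settings for lr_policy "inv"
base_lr: 0.01
lr_policy: "inv"
gamma: 0.0001
power: 0.75
max_iter: 50000
```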

Plotting a comparison of the lr_policy options (MATLAB):

iter=1:50000;
max_iter=50000;
base_lr=0.01;
gamma=0.0001;
power=0.75;
step_size=5000;
min_lr=0.0005;

% - fixed: always return base_lr.
lr=base_lr*ones(1,50000);
subplot(2,4,1)
plot(lr)
title('fixed')
% - step: return base_lr * gamma ^ (floor(iter / step_size))
lr=base_lr .* gamma.^(floor(iter./step_size));
subplot(2,4,2)
plot(lr)
title('step')
% - exp: return base_lr * gamma ^ iter
lr=base_lr * gamma .^ iter;
subplot(2,4,3)
plot(lr)
title('exp')
% - inv: return base_lr * (1 + gamma * iter) ^ (- power)
lr=base_lr.*(1./(1+gamma.*iter).^power);
subplot(2,4,4)
plot(lr)
title('inv')
% - multistep: similar to step but it allows non uniform steps defined by
% stepvalue
% - poly: the effective learning rate follows a polynomial decay, to be
% zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
lr=base_lr *(1 - iter./max_iter) .^ (power);
subplot(2,4,5)
plot(lr)
title('poly')
% - sigmoid: the effective learning rate follows a sigmod decay
% return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
lr=base_lr *( 1./(1 + exp(-gamma * (iter - step_size))));
subplot(2,4,6)
plot(lr)
title('sigmoid')

%-stepdecr: return base_lr * (1 - gamma * floor(iter / step_size))
%-min_lr: the minimum learning rate.
step_size=floor(max_iter/27); % 27 is the number of descending steps; we can change it
lr=max(min_lr,base_lr * (1 - gamma * (floor(iter / step_size))));
subplot(2,4,7)
plot(lr)
title('stepdecr')
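The custom "stepdecr" rule above (a linearly decreasing step schedule clamped below at min_lr) is not a built-in Caffe policy; a minimal Python sketch mirroring the MATLAB code, with assumed function and parameter names:

```python
def stepdecr_lr(it, base_lr=0.01, gamma=0.0001, min_lr=0.0005,
                max_iter=50000, n_steps=27):
    """Linearly decreasing step schedule, clamped below at min_lr."""
    step_size = max_iter // n_steps            # 27 descending steps over training
    lr = base_lr * (1 - gamma * (it // step_size))
    return max(min_lr, lr)                     # never drop below min_lr
```

With these defaults the per-step decrease is tiny (gamma is 0.0001), so the clamp only matters for very long runs; increasing gamma makes the staircase visible in the plot.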

The resulting plots:
[Figure: learning-rate curves for fixed, step, exp, inv, poly, sigmoid and stepdecr]
