Torch Beginner Notes 20: Implementing Layer Freezing in Torch

This post records the process of implementing frozen layers in the Torch framework, summarizing the methods that actually work, found through searching and experimentation.


Over the past couple of days I have been trying to implement layer freezing within the Torch framework. By searching Google I found a few workable methods in the scarce documentation available, so I am summarizing them here.

You can effectively set the learning rate of certain layers to zero by overriding their updateParameters and accGradParameters methods to do nothing. You don't necessarily have to subclass, but doing so is cleaner.

If you are building the model for the first time, you simply get a handle on the initialized layer and override the functions mentioned above.
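
For instance, here is a minimal sketch of that idea in Torch7/Lua (the layer and its arguments are illustrative, not from the original post):

require 'nn'

-- Illustrative layer; any module with parameters works the same way.
local conv = nn.SpatialConvolution(3, 64, 5, 5, 1, 1)

-- accGradParameters normally accumulates gradWeight/gradBias; a no-op
-- leaves the gradient buffers untouched.
conv.accGradParameters = function() end
-- updateParameters normally applies the learning-rate step; a no-op
-- guarantees the weights never change.
conv.updateParameters = function() end

One caveat worth knowing: this covers the module's own update path. If training instead goes through optim with getParameters(), the optimizer writes into the flattened parameter tensor directly, so overrides like these are bypassed.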

If you have a pre-trained network and you want to freeze certain layers, you can either use the generic :apply function, which applies a closure to each module, such as:

https://gist.github.com/soumith/5010de75f7a6805d33c9
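
A minimal sketch of the :apply route (not the gist verbatim; filtering on nn.SpatialConvolution is just an illustrative choice):

-- Walk the whole network and neutralize the update methods on every
-- module of the chosen type.
model:apply(function(m)
   if torch.type(m) == 'nn.SpatialConvolution' then
      m.accGradParameters = function() end
      m.updateParameters = function() end
   end
end)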

Or you can traverse them via the model's .modules table:

https://gist.github.com/soumith/b910efc4dd559736a9b0
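
Again as a sketch rather than the gist's exact contents: containers such as nn.Sequential keep their children in the .modules table, so layers can be frozen by position:

-- Freeze the module at position 1 (index chosen for illustration).
local m = model.modules[1]
m.accGradParameters = function() end
m.updateParameters = function() end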

Finally, here's an example freezing the first 3 layers of a net:

https://gist.github.com/soumith/6cd0f9b8462d0507a91b


Source: https://www.reddit.com/r/MachineLearning/comments/44ochv/forcing_learning_rate_to_zero_in_torch/


require 'nn'

-- A small example network: conv -> ReLU -> conv.
model = nn.Sequential()
model:add(nn.SpatialConvolution(3, 64, 5, 5, 1, 1))
model:add(nn.ReLU())
model:add(nn.SpatialConvolution(64, 64, 5, 5, 1, 1))
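
Building on the snippet above, here is a hedged reconstruction of the freeze step (the linked gist is not reproduced verbatim, and the extra layers are an assumption made so that something remains trainable):

-- Extend the net so layers remain trainable after freezing the first
-- three (the original gist's model may differ).
model:add(nn.ReLU())
model:add(nn.SpatialConvolution(64, 64, 5, 5, 1, 1))

-- Freeze layers 1-3: conv, ReLU, conv. (ReLU has no parameters, so
-- the overrides on it are harmless no-ops.)
for i = 1, 3 do
   local m = model.modules[i]
   m.accGradParameters = function() end
   m.updateParameters = function() end
end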