Original: Inputs and outputs of nn.Module
nn.Linear: (N, H_in) → (N, H_out). nn.Embedding: (*) → (*, embedding_dim). nn.LSTM: called as (input, (h_0, c_0)), returns (output, (h_n, c_n)); input: (L, N, H_in); h_0, c_0: (D*num_layers, N, H_out); output: (L, N, D*H_out); h_n, c_n: (D*num_layers, N, H_out), where D = 2 if bidirectional else 1.
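The shape conventions above can be checked directly; a minimal sketch (sizes are arbitrary examples, not from the post):

```python
import torch
import torch.nn as nn

N, L = 4, 7            # batch size, sequence length
H_in, H_out = 10, 20   # input / hidden feature sizes

# nn.Linear: (N, H_in) -> (N, H_out)
linear = nn.Linear(H_in, H_out)
y = linear(torch.randn(N, H_in))
assert y.shape == (N, H_out)

# nn.Embedding: (*) -> (*, embedding_dim)
embed = nn.Embedding(num_embeddings=100, embedding_dim=H_in)
e = embed(torch.randint(0, 100, (N, L)))
assert e.shape == (N, L, H_in)

# nn.LSTM (batch_first=False): input (L, N, H_in) -> output (L, N, D*H_out)
num_layers, D = 2, 1   # D = 2 if bidirectional else 1
lstm = nn.LSTM(H_in, H_out, num_layers=num_layers)
out, (h_n, c_n) = lstm(torch.randn(L, N, H_in))
assert out.shape == (L, N, D * H_out)
assert h_n.shape == (D * num_layers, N, H_out)
assert c_n.shape == (D * num_layers, N, H_out)
```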
2021-10-06 13:29:14
Original: 2021-10-03
Parameter initialization:
    self.in_embed.weight.data.uniform_(-1, 1)
    # self.out_embed.weight.data.uniform_(-1, 1)
    torch.nn.init.uniform_(self.out_embed.weight.data, a=-1, b=1)
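The two initialization calls above are equivalent; a runnable sketch with a stand-alone embedding layer (the `in_embed`/`out_embed` names in the post are assumed to be embeddings of this kind):

```python
import torch
import torch.nn as nn

embed = nn.Embedding(100, 16)

# Two equivalent ways to initialize the weights uniformly in [-1, 1]:
embed.weight.data.uniform_(-1, 1)
torch.nn.init.uniform_(embed.weight.data, a=-1, b=1)

# All values now lie within the requested range.
assert embed.weight.data.min().item() >= -1.0
assert embed.weight.data.max().item() <= 1.0
```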
2021-10-03 14:23:18
Original: requires_grad=False
If a parameter has requires_grad=False and is also passed to the optimizer, it simply is not updated, and the program does not raise an error.
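A minimal sketch of this behavior (the model is an arbitrary example): the frozen parameter sits in the optimizer, step() runs without error, and the parameter keeps its old value.

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 2)
model.bias.requires_grad = False          # freeze the bias
before = model.bias.detach().clone()

# Passing the frozen parameter to the optimizer raises no error...
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(5, 3)).sum()
loss.backward()
opt.step()

# ...its grad stays None, so step() skips it and it is not updated.
assert model.bias.grad is None
assert torch.equal(model.bias, before)
```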
2021-08-26 20:51:48
Original: Applying L2 regularization to only some parameters
    weight_p, bias_p = [], []
    for name, p in model.named_parameters():
        if 'bias' in name:
            bias_p += [p]
        else:
            weight_p += [p]
    optim.SGD([{'params': weight_p, 'weight_decay': 1e-5},
               {'params': bias_p, 'weight_decay': 0}],
              lr=1e-2, momentum=0.9)
Settings inside the {} parameter groups take the highest priority. embeddin
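A self-contained version of the recipe above, showing that per-group settings override the optimizer-wide defaults (the model is an arbitrary example):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
weight_p, bias_p = [], []
for name, p in model.named_parameters():
    if 'bias' in name:
        bias_p += [p]
    else:
        weight_p += [p]

# Options given inside each group dict take priority over the
# defaults (lr, momentum) passed outside the dicts.
opt = torch.optim.SGD(
    [{'params': weight_p, 'weight_decay': 1e-5},
     {'params': bias_p, 'weight_decay': 0}],
    lr=1e-2, momentum=0.9)

assert opt.param_groups[0]['weight_decay'] == 1e-5  # weights: L2 on
assert opt.param_groups[1]['weight_decay'] == 0     # biases: L2 off
assert opt.param_groups[1]['lr'] == 1e-2            # default applies
```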
2021-08-26 18:17:03
Original: Difference between nn.Dropout and nn.functional.dropout
nn.Dropout is switched off after model.eval(); nn.functional.dropout is not switched off after model.eval(), because its training flag defaults to True. Reference: https://www.zhihu.com/question/66782101/answer/579393790
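A minimal sketch of the difference (module names are arbitrary): after model.eval(), the nn.Dropout module is a no-op, while F.dropout keeps dropping unless training=self.training is passed explicitly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.drop = nn.Dropout(p=0.5)

    def forward(self, x):
        a = self.drop(x)                  # respects self.training
        b = F.dropout(x, p=0.5)           # training=True by default!
        c = F.dropout(x, p=0.5, training=self.training)  # the fix
        return a, b, c

net = Net().eval()
x = torch.ones(1000)
a, b, c = net(x)

assert torch.equal(a, x)      # nn.Dropout: disabled in eval mode
assert not torch.equal(b, x)  # functional form: still active (0s and 2s)
assert torch.equal(c, x)      # explicit training flag: disabled
```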
2021-08-26 17:19:45
Original: 2021-08-24
Set self.embedding.weight.requires_grad = False to exclude the embedding from training. Load pretrained weights with self.embedding.weight.data.copy_(tensor). To take gradients with respect to the input, wrap it as input = Variable(input, requires_grad=True); input.requires_grad then shows whether it tracks gradients. If the model has a layer linear1, its parameters can be inspected via model.linear1.weight, e.g. RNN_stock.linear_layer
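The points above can be combined in one runnable sketch (layer names and sizes are illustrative; Variable is deprecated in current PyTorch, so requires_grad_() is used instead):

```python
import torch
import torch.nn as nn

embed = nn.Embedding(10, 4)
pretrained = torch.randn(10, 4)
embed.weight.data.copy_(pretrained)    # load pretrained weights
embed.weight.requires_grad = False     # freeze: excluded from training

linear1 = nn.Linear(4, 2)
w = linear1.weight                     # inspect parameters by attribute,
                                       # cf. model.linear1.weight

# Gradient w.r.t. an input: mark the input tensor as requiring grad.
x = torch.randn(3, 4).requires_grad_()
idx = torch.tensor([[1, 2, 3]])
out = linear1(embed(idx)).sum() + linear1(x).sum()
out.backward()

assert x.grad is not None              # input received a gradient
assert embed.weight.grad is None       # frozen embedding did not
assert torch.equal(embed.weight.data, pretrained)
```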
2021-08-24 10:26:36