Study Notes | PyTorch Tutorial 25
These study notes are mainly summarized from the "深度之眼" (Deep Eye) course, for easy later reference.
PyTorch version used: 1.2
- The concept of Batch Normalization
- PyTorch's Batch Normalization 1d/2d/3d implementations
1. The Concept of Batch Normalization
Batch Normalization: batch standardization.
Batch: a batch of data, usually a mini-batch.
Standardization: transform the data to zero mean and unit variance.
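For reference, the transform from the paper cited below: for a mini-batch $\mathcal{B}=\{x_1,\dots,x_m\}$, each activation is standardized with the batch statistics and then rescaled by two learnable parameters $\gamma$ and $\beta$:

$$
\mu_{\mathcal{B}} = \frac{1}{m}\sum_{i=1}^{m} x_i ,\qquad
\sigma_{\mathcal{B}}^{2} = \frac{1}{m}\sum_{i=1}^{m}\left(x_i-\mu_{\mathcal{B}}\right)^{2}
$$

$$
\hat{x}_i = \frac{x_i-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2}+\epsilon}} ,\qquad
y_i = \gamma\,\hat{x}_i + \beta
$$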
Advantages:
- 1. A larger learning rate can be used, which speeds up model convergence
- 2. Careful weight-initialization design is no longer required
- 3. Dropout can be removed, or a smaller dropout rate used
- 4. L2 regularization can be removed, or a smaller weight decay used
- 5. LRN (Local Response Normalization) is no longer needed
- Paper: 《Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift》
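Before the course's test code, here is a minimal standalone sketch (the shapes and numbers are illustrative, not from the course) showing what nn.BatchNorm1d does to a batch of features during training:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=4)       # one learnable (gamma, beta) pair per feature
x = torch.randn(8, 4) * 5 + 3             # batch of 8 samples, 4 features, mean != 0, std != 1

bn.train()                                # training mode: normalize with batch statistics
y = bn(x)

print(y.mean(dim=0))                      # per-feature mean ~ 0
print(y.std(dim=0, unbiased=False))       # per-feature std ~ 1 (gamma=1, beta=0 at init)
print(bn.running_mean, bn.running_var)    # running estimates, used in eval() mode
```

At eval() time the layer switches from the per-batch statistics to these running estimates.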
Test code:
import torch
import numpy as np
import torch.nn as nn
from tools.common_tools import set_seed

set_seed(1)  # set the random seed


class MLP(nn.Module):
    def __init__(self, neural_num, layers=100):
        super(MLP, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(neural_num, neural_num, bias=False) for i in range(layers)])
        self.bns = nn.ModuleList([nn.BatchNorm1d(neural_num) for i in range(layers)])
        self.neural_num = neural_num

    def forward(self, x):
        for (i, linear), bn in zip(enumerate(self.linears), self.bns):
            x = linear(x)
            # x = bn(x)            # toggle this line to enable Batch Normalization
            x = torch.relu(x)

            if torch.isnan(x.std()):
                print("output is nan in {} layers".format(i))
                break

            print("layers:{}, std:{}".format(i, x.std().item()))

        return x

    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                # method 1
                nn.init.normal_(m.weight.data, std=1)  # normal: mean=0, std=1
                # method 2: kaiming
                # nn.init.kaiming_normal_(m.weight.data)


neural_nums = 256
layer_nums = 100
batch_size = 16

net = MLP(neural_nums, layer_nums)
# net.initialize()               # toggle this line to apply the custom initialization

inputs = torch.randn((batch_size, neural_nums))  # normal: mean=0, std=1

output = net(inputs)
print(output)
Output:
layers:0, std:0.3342404067516327
layers:1, std:0.13787388801574707
layers:2, std:0.05783054977655411
layers:3, std:0.02498556487262249
layers:4, std:0.009679116308689117
layers:5, std:0.0040797945111989975
layers:6, std:0.0016723505686968565
layers:7, std:0.000768698868341744
......
layers:93, std:7.51512610937515e-38
layers:94, std:2.6169094678434883e-38
layers:95, std:1.1516209894049713e-38
layers:96, std:4.344910860036386e-39
layers:97, std:1.5943525511579185e-39
layers:98, std:5.721221370145363e-40
layers:99, std:2.4877251637158477e-40
tensor([[0.0000e+00, 2.1158e-41, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 5.1800e-41, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
...,
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 5.8066e-41, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00]], grad_fn=<ReluBackward0>)
The layer outputs have become vanishingly small by layer 100: the standard deviation keeps decaying toward zero as depth increases.
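A rough sketch of why (my own back-of-the-envelope calculation, assuming PyTorch's default nn.Linear initialization, kaiming_uniform_ with a=√5, which for fan_in = 256 draws weights from U(-1/16, 1/16), i.e. Var(w) = 1/768): with a ReLU after each linear layer,

$$
\mathrm{Var}\!\left(x^{(l+1)}\right) \approx \frac{1}{2}\, n \,\mathrm{Var}(w)\,\mathrm{Var}\!\left(x^{(l)}\right)
= \frac{1}{2}\cdot 256 \cdot \frac{1}{768}\cdot \mathrm{Var}\!\left(x^{(l)}\right)
= \frac{1}{6}\,\mathrm{Var}\!\left(x^{(l)}\right),
$$

so the std shrinks by a factor of about $1/\sqrt{6}\approx 0.41$ per layer, consistent with the observed 0.334 → 0.138 → 0.058 → ..., and underflows toward zero after 100 layers.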
Now enable the custom initialization by uncommenting net.initialize() (method 1: nn.init.normal_ with std=1).
Output:
layers:0, std:9.35224723815918
layers:1, std:112.47123718261719
layers:2, std:1322.805419921875
layers:3, std:14569.419921875
layers:4, std:154672.703125
layers:5, std:1834037.125
layers:6, std:18807968.0
layers:7, std:209552880.0
......
layers:28, std:3.221297392084588e+30
layers:29, std:3.530939139138446e+31
layers:30, std:4.525336236359181e+32
layers:31, std:4.714992054712809e+33
layers:32, std:5.369568386632447e+34
layers:33, std:6.712290740934239e+35
layers:34, std:7.451081630611702e+36
output is nan in 35 layers
tensor([[3.2625e+36, 0.0000e+00, 7.2931e+37, ..., 0.0000e+00, 0.0000e+00,
2.5465e+38],
[3.9236e+36, 0.0000e+00, 7.5033e+37, ..., 0.0000e+00, 0.0000e+00,
2.1274e+38],
[0.0000e+00, 0.0000e+00, 4.4931e+37, ..., 0.0000e+00, 0.0000e+00,
1.7016e+38],
...,
[0.0000e+00, 0.0000e+00, 2.4222e+37, ..., 0.0000e+00, 0.0000e+00,
2.5295e+38],
[4.7380e+37, 0.0000e+00, 2.1579e+37, ..., 0.0000e+00, 0.0000e+00,
2.6028e+38],
[0.0000e+00, 0.0000e+00, 6.0877e+37, ..., 0.0000e+00, 0.0000e+00,
2.1695e+38]], grad_fn=<ReluBackward0>)
With this initialization the activations explode instead, and the output becomes nan at layer 35.
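A rough sketch of why it blows up: with $w \sim N(0,1)$ and a ReLU after each layer,

$$
\mathrm{Var}\!\left(x^{(l+1)}\right) \approx \frac{1}{2}\, n\, \mathrm{Var}(w)\,\mathrm{Var}\!\left(x^{(l)}\right)
= \frac{256}{2}\,\mathrm{Var}\!\left(x^{(l)}\right)
= 128\,\mathrm{Var}\!\left(x^{(l)}\right),
$$

so the std grows by roughly $\sqrt{128}\approx 11.3$ per layer (consistent with 9.35 → 112 → 1322 → ...), and float32 overflows to inf/nan around layer 35. Kaiming initialization is designed to keep this factor at 1 by using $\mathrm{Var}(w)=2/n$ for ReLU networks.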
Now switch to Kaiming initialization: nn.init.kaiming_normal_(m.weight.data)
Output:
layers:0, std:0.826629638671875
layers:1, std:0.878681480884552
layers:2, std:0.9134420156478882
layers:3, std:0.8892467617988586
layers:4, std:0.8344276547431946
layers:5, std:0.87453693151474
layers:6, std:0.792696475982666
layers:7, std:0.7806451916694641
......
layers:92, std:0.6094536185264587
layers:93, std:0.6019036173820496
layers:94, std:0.595414936542511
layers:95, std:0.6624482870101929
layers:96, std:0.6377813220024109
layers:97, std:0.6079217195510864
layers:98, std:0.6579239368438721
layers:99, std:0.6668398976325989
tensor([[0.0000, 1.3437, 0.0000, ..., 0.0000, 0.6444, 1.1867],
[0.0000, 0.9757, 0.0000, ..., 0.0000, 0.4645, 0.8594],
[0.0000, 1.0023, 0.0000, ..., 0.0000, 0.5147, 0.9196],
...,
[0.0000, 1.2873, 0.0000, ..., 0.0000, 0.6454, 1.1411],
[0.0000, 1.3588, 0.0000, ..., 0.0000, 0.6749, 1.2437],
[0.0000, 1.1807, 0.0000, ..., 0.0000, 0.5668, 1.0600]],
grad_fn=<ReluBackward0>)
With Kaiming initialization the standard deviation stays in a reasonable range, though it still fluctuates from layer to layer. Now add the BN layers by uncommenting x = bn(x).
Output:
layers:0, std:0.5872595906257629
layers:1, std:0.579325795173645
layers:2, std:0.5757012367248535
layers:3, std:0.5840616822242737
layers:4, std:0.5781518220901489
layers:5, std:0.5856173634529114
layers:6, std:0.5862171053886414
......
layers:95
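A short note on the value 0.58 (my own derivation, not from the course): with BN each pre-activation is approximately standard normal, and the ReLU of $z \sim N(0,1)$ has

$$
\mathrm{std}\!\left(\mathrm{ReLU}(z)\right)
= \sqrt{\mathbb{E}\!\left[\mathrm{ReLU}(z)^{2}\right]-\mathbb{E}\!\left[\mathrm{ReLU}(z)\right]^{2}}
= \sqrt{\frac{1}{2}-\frac{1}{2\pi}} \approx 0.58,
$$

which is why the per-layer std now stays flat around 0.58 regardless of depth, instead of decaying or exploding.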