Calling this technique "distillation" (Distill) is actually a very apt image for what happens.
We treat the structural information and the data itself as a mixture, and the distributional information is separated out in the form of a probability distribution. First, a large temperature T is used, as if applying high heat to separate the key distributional information from the original data; then, at the same temperature, a new model absorbs the distilled distribution; finally, the temperature is lowered back so that the two blend together. This can also be seen as the reason why Prof. Hinton named this transfer-learning process distillation.
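In implementation terms, this temperature story corresponds to a softmax with temperature: the teacher's logits divided by T yield soft targets, the student is trained to match them at the same T, and at inference the student runs at the normal temperature T = 1. Below is a minimal sketch of such a soft-target loss; the names distillation_loss, T and alpha are mine for illustration and are not taken from the code later in this post.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # "High temperature": divide the logits by T so the softened teacher distribution
    # also exposes the probabilities it assigns to the non-target classes.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # The student matches the teacher at the same temperature; the T*T factor keeps
    # the gradient scale of this term comparable to the hard-label term.
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction='batchmean') * (T * T)
    # "Restore the temperature": ordinary cross-entropy on the hard labels at T = 1.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss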
What neural network distillation sets out to do is essentially closer to transfer learning (Transfer Learning), although it can of course also be understood from the angle of model compression (Model Compression).
For the detailed derivation and theory, see my other blog post: 知识蒸馏(Distillation)简介_自蒸馏算法大神-优快云博客
Talk is cheap. Show me the code.
"""
Function:knowledge distillation
"""
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
torch.manual_seed(0)
# torch.cuda.manual_seed(0)
# Define the teacher network
class TeacherNet(nn.Module):
    def __init__(self):
        super(TeacherNet, self).__init__()
        # Two convolutional layers for a 28x28 single-channel input
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout2d(0.3)
        # Regular dropout on the flattened features (Dropout2d expects spatial input)
        self.dropout2 = nn.Dropout(0.5)
        # 9216 = 64 channels * 12 * 12 after the two convolutions and 2x2 max pooling
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        # Return raw logits; softmax (and any temperature scaling) is applied in the loss
        output = self.fc2(x)
        return output
# Training loop for the teacher
def train_teacher(model, device, train_loader, optimizer, epoch):
    # Enable training-time behaviour of BatchNorm and Dropout
    model.train()
    trained_samples = 0
    for batch_idx, (data, target) in enumerate(train_loader):
        # Move the batch to the target device (GPU or CPU)
        data, target = data.to(device), target.to(device)
        # Clear accumulated gradients
        optimizer.zero_grad()
        # Forward pass
        output = model(data)
        # Compute the loss
        loss = F.cross_entropy(output, target)
        # Backpropagate the error
        loss.backward()
        # Take one optimizer step
        optimizer.step()
        # Track how many samples have been trained on so far
        trained_samples += len(data)