Opening question: if we switch to a different optimizer, loss function, or model, can our code stay unchanged?
Answer: it won't need to change.
We use Jupyter here to split the code into several parts: data generation, data preparation, model configuration, model training, and model validation.
The cell below writes the following code to the file data_preparation/v0.py; this is the data preparation step.
%%writefile data_preparation/v0.py
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Our data was in Numpy arrays, but we need to transform them
# into PyTorch's Tensors and then we send them to the
# chosen device
x_train_tensor = torch.as_tensor(x_train).float().to(device)
y_train_tensor = torch.as_tensor(y_train).float().to(device)
We then execute it with %run -i data_preparation/v0.py (the -i flag runs the script inside the notebook's current namespace, so it shares our variables).
The original training code is shown below. Whatever the model is, we always call model() for the forward pass, compute the loss with loss_fn(), call backward(), and finally call the optimizer's step() and zero_grad(). So however model and loss_fn change, this training code does not need to change.
%%writefile model_training/v0.py
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    # Sets model to TRAIN mode
    model.train()

    # Step 1 - Computes model's predicted output - forward pass
    yhat = model(x_train_tensor)

    # Step 2 - Computes the loss
    loss = loss_fn(yhat, y_train_tensor)

    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()

    # Step 4 - Updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()
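For the script above to actually run, a model, a loss function, and an optimizer must already have been defined. These notes skip that configuration cell, but a minimal sketch (essentially the same setup that appears in model_configuration/v1.py below, minus the train-step helper) would be:

import torch
import torch.nn as nn
import torch.optim as optim

lr = 0.1
torch.manual_seed(42)

# Creates the model and sends it to the chosen device
model = nn.Sequential(nn.Linear(1, 1)).to(device)

# SGD optimizer over the model's parameters, and an MSE loss
optimizer = optim.SGD(model.parameters(), lr=lr)
loss_fn = nn.MSELoss(reduction='mean')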
Higher-order functions
A higher-order function takes arguments and returns a function. Below is a concrete example: by passing in the exponent argument, we get back a function that takes one argument (x) and returns x raised to the exponent power. What is returned is the skeleton_exponentiation function, but what it computes depends on the exponent we passed in; exponentiation_builder(2) returns a function that computes x^2.
def exponentiation_builder(exponent):
    def skeleton_exponentiation(x):
        return x ** exponent

    return skeleton_exponentiation

func = exponentiation_builder(2)
func(5)
Running this myself, func(5) returns 25, as expected.
Clearly, we can write a function that takes our model, loss function, and optimizer as arguments and returns the training-step function we need:
# Defines a helper function that builds the training-step function
def make_train_step_fn(model, loss_fn, optimizer):
    # Builds function that performs a step in the train loop
    def perform_train_step_fn(x, y):
        # Sets model to TRAIN mode
        model.train()

        # Step 1 - Computes our model's predicted output - forward pass
        yhat = model(x)
        # Step 2 - Computes the loss
        loss = loss_fn(yhat, y)
        # Step 3 - Computes gradients for both "b" and "w" parameters
        loss.backward()
        # Step 4 - Updates parameters using gradients and the learning rate
        optimizer.step()
        optimizer.zero_grad()

        # Returns the loss
        return loss.item()

    # Returns the function that will be called inside the train loop
    return perform_train_step_fn
Building on this, we can write a new version of the model configuration script.
%%writefile model_configuration/v1.py
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Sets learning rate - this is "eta" ~ the "n" like Greek letter
lr = 0.1
torch.manual_seed(42)
# Now we can create a model and send it at once to the device
model = nn.Sequential(nn.Linear(1, 1)).to(device)
# Defines a SGD optimizer to update the parameters (now retrieved directly from the model)
optimizer = optim.SGD(model.parameters(), lr=lr)
# Defines a MSE loss function
loss_fn = nn.MSELoss(reduction='mean')
# Creates the train_step function for our model, loss function and optimizer
train_step_fn = make_train_step_fn(model, loss_fn, optimizer)
Next, let's see how this function is used:
n_epochs = 1000
losses = []
# For each epoch...
for epoch in range(n_epochs):
    # Performs one train step and returns the corresponding loss
    loss = train_step_fn(x_train_tensor, y_train_tensor)
    losses.append(loss)
Dataset
The class introduced here is TensorDataset, which builds a dataset directly from tensors:
train_data = TensorDataset(x_train_tensor, y_train_tensor)
# Retrieves the first (x, y) pair, a tuple of two tensors
print(train_data[0])
DataLoader
Why do we need a dataset? Because we want to train with (mini-)batches.
Data preparation should shuffle the data before anything else is done with it (unless it is a time series or otherwise ordered data).
Note that we do not put the data on the GPU when building the dataset, because GPU memory is precious.
A DataLoader behaves like an iterator: we can loop over it to fetch mini-batches of the configured size.
train_loader = DataLoader(dataset=train_data, batch_size=16, shuffle=True)
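Since the loader behaves like an iterator, we can grab a single mini-batch without writing a loop; a quick sketch (assuming the train_loader built above):

# Fetches one mini-batch; with shuffle=True its contents change between runs
x_batch, y_batch = next(iter(train_loader))
# For this dataset both shapes should be torch.Size([16, 1])
print(x_batch.shape, y_batch.shape)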
With the loader ready, let's look at what the data preparation script has become:
%%writefile data_preparation/v1.py
# Our data was in Numpy arrays, but we need to transform them into PyTorch's Tensors
x_train_tensor = torch.from_numpy(x_train).float()
y_train_tensor = torch.from_numpy(y_train).float()
# Builds Dataset
train_data = TensorDataset(x_train_tensor, y_train_tensor)
# Builds DataLoader
train_loader = DataLoader(dataset=train_data, batch_size=16, shuffle=True)
Now the training loop must change: in each epoch we iterate over all mini-batches via the loader, and take the average of the mini-batch losses as that epoch's loss.
%%writefile model_training/v2.py
# Defines number of epochs
n_epochs = 1000
losses = []
# For each epoch...
for epoch in range(n_epochs):
    # inner loop
    mini_batch_losses = []
    for x_batch, y_batch in train_loader:
        # The dataset lives in CPU memory; mini-batches are sent
        # to the device only when they are actually used
        x_batch = x_batch.to(device)
        y_batch = y_batch.to(device)

        # Performs one train step and returns the corresponding loss
        # for this mini-batch
        mini_batch_loss = train_step_fn(x_batch, y_batch)
        mini_batch_losses.append(mini_batch_loss)

    # Computes average loss over all mini-batches - that's the epoch loss
    loss = np.mean(mini_batch_losses)
    losses.append(loss)
Note that switching to mini-batches can make training take longer: train_step_fn is now called once per mini-batch instead of once per epoch (with 80 training points and a batch size of 16, that is 5 updates per epoch), so if total training time matters the number of epochs may need to be adjusted accordingly.
Clearly, we can also define a helper function to simplify the inner loop:
def mini_batch(device, data_loader, step_fn):
    mini_batch_losses = []
    for x_batch, y_batch in data_loader:
        x_batch = x_batch.to(device)
        y_batch = y_batch.to(device)

        mini_batch_loss = step_fn(x_batch, y_batch)
        mini_batch_losses.append(mini_batch_loss)

    loss = np.mean(mini_batch_losses)
    return loss

n_epochs = 200
losses = []

for epoch in range(n_epochs):
    # inner loop
    loss = mini_batch(device, train_loader, train_step_fn)
    losses.append(loss)
Random split
If you recall, we generated 100 data points, using 80 for training and 20 for validation; back then we shuffled the indices and took the first 80 as training data. With a TensorDataset, we can simply use random_split to perform the split.
Our data preparation, v2:
%%writefile data_preparation/v2.py
torch.manual_seed(13)
# Builds tensors from numpy arrays BEFORE split
x_tensor = torch.from_numpy(x).float()
y_tensor = torch.from_numpy(y).float()
# Builds dataset containing ALL data points
dataset = TensorDataset(x_tensor, y_tensor)
# Performs the split
ratio = .8
n_total = len(dataset)
n_train = int(n_total * ratio)
n_val = n_total - n_train
train_data, val_data = random_split(dataset, [n_train, n_val])
# Builds a loader of each set
train_loader = DataLoader(dataset=train_data, batch_size=16, shuffle=True)
# There is no need to shuffle the validation data
val_loader = DataLoader(dataset=val_data, batch_size=16)
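A quick sanity check of the split sizes (with 100 points and a ratio of .8 we expect 80 and 20):

print(len(train_data), len(val_data))  # 80 20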
Evaluation
We have covered how to build the validation set and how to learn the model parameters; now we need to evaluate the trained model. Before evaluating, the most important thing is to put the model into evaluation mode with model.eval(). As usual, a higher-order function:
def make_val_step_fn(model, loss_fn):
    def perform_val_step(x, y):
        # Sets model to EVAL mode
        model.eval()

        # Computes prediction and loss; no backward() or optimizer
        # step is needed during validation
        yhat = model(x)
        loss = loss_fn(yhat, y)
        return loss.item()

    return perform_val_step
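The training script below also relies on a val_step_fn. These notes skip the updated configuration cell, but a minimal sketch (mirroring how train_step_fn was created in model_configuration/v1.py) would be:

# Creates the step functions for the same model, loss function, and optimizer
train_step_fn = make_train_step_fn(model, loss_fn, optimizer)
val_step_fn = make_val_step_fn(model, loss_fn)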
From now on, in every epoch we will compute the loss on both the training data and the validation data.
Important: gradient computation must be turned off during evaluation. This speeds things up and, more importantly, keeps the validation pass from polluting the gradients.
%%writefile model_training/v4.py
# Defines number of epochs
n_epochs = 200
losses = []
val_losses = []
for epoch in range(n_epochs):
    # inner loop
    loss = mini_batch(device, train_loader, train_step_fn)
    losses.append(loss)

    # VALIDATION
    # no gradients in validation!
    with torch.no_grad():
        # Makes sure no gradient computation happens here
        val_loss = mini_batch(device, val_loader, val_step_fn)
        val_losses.append(val_loss)
When I ran this myself I forgot to move the model to the GPU, which caused a device-mismatch error. The results I got are shown below, and they are very close to the book's.
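For reference, the loss curves can be plotted straight from the two lists; a minimal matplotlib sketch (the styling is just an illustration):

import matplotlib.pyplot as plt

plt.plot(losses, label='training loss')
plt.plot(val_losses, label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()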
TensorBoard
Try it yourself; this is the result I got.
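The notes only show a screenshot here, but logging the losses takes just a few lines with torch.utils.tensorboard; a minimal sketch (the run name is just an assumption for illustration):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/simple_linear_regression')

# Inside the epoch loop, after loss and val_loss are computed:
writer.add_scalars(main_tag='loss',
                   tag_scalar_dict={'training': loss, 'validation': val_loss},
                   global_step=epoch)

# Flush and close the writer once training is done
writer.close()

The dashboard is then started with tensorboard --logdir runs.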
Saving and loading models
A checkpoint for resuming training contains all the weights, the current optimizer state, the history of losses (and validation losses), and the epoch count.
A model saved only for inference just needs the weights.
Without further ado: note that the model and the optimizer each have their own state_dict that can be saved and loaded.
checkpoint = {'epoch': n_epochs,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': losses,
'val_loss': val_losses}
torch.save(checkpoint, 'model_checkpoint.pth')
To resume training: after loading the checkpoint, remember to put the model back into train mode.
checkpoint = torch.load('model_checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
saved_epoch = checkpoint['epoch']
saved_losses = checkpoint['loss']
saved_val_losses = checkpoint['val_loss']
model.train() # always use TRAIN for resuming training
If we are only deploying the model for inference, we can simply load the weights with model.load_state_dict(); again, remember to set eval() mode.
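A minimal sketch of that inference-only path (the checkpoint file name and the new-data values are just assumptions for illustration):

checkpoint = torch.load('model_checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])

# EVAL mode for inference, and no gradient tracking needed
model.eval()
with torch.no_grad():
    new_inputs = torch.tensor([[0.20], [0.34], [0.57]]).to(device)
    predictions = model(new_inputs)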