Minimizing the loss function
Training a neural network requires a training objective, and this objective is usually expressed as minimizing a loss function that depends on the network's parameters. Computer vision and natural language processing mostly deal with classification or regression problems. Unsupervised learning (e.g., clustering, where the average within-cluster distance should be small), self-supervised learning (sequence prediction is essentially a classification problem), metric learning (e.g., the triplet loss used in contrastive learning), and reinforcement learning (value learning maximizes a value function) all have their own training objectives, but whether the task is classification or regression, it is usually framed as minimizing a loss function. The standard method for minimizing the loss is gradient descent: the parameters move along the negative gradient direction so that the function value keeps decreasing, and the process stops once the updates no longer make progress or the loss reaches a (local) minimum.
For a more detailed explanation of the underlying principle, see the Bilibili video 梯度下降法原理 on gradient descent.
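As a minimal sketch of the idea (an illustration added here, with made-up values), gradient descent on the one-dimensional function f(x) = (x - 3)^2 repeatedly steps against the derivative until it settles near the minimizer:

# Minimal gradient descent sketch on f(x) = (x - 3)^2
x = 0.0   # initial guess
lr = 0.1  # learning rate
for step in range(100):
    grad = 2 * (x - 3)   # f'(x)
    x = x - lr * grad    # move along the negative gradient
print(x)  # approaches 3, the minimizer of f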
The backpropagation algorithm
Once the network structure is fixed (feature-mapping layer, hidden layers, output layer), the network produces a predicted value for any input sample according to that structure. The training goal is to make the prediction as close to the true value as possible; the difference between them is described by a loss function, so learning amounts to making the loss as small as possible. The process of driving the loss toward its minimum is the backpropagation algorithm: the error is propagated backwards layer by layer, and each step uses gradient descent to update the current layer's parameters.
For a full derivation, see the reference 反向传播算法推导 (backpropagation derivation).
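As a small illustration of how the chain rule produces these per-layer updates (a sketch of my own with made-up numbers, not taken from the referenced derivation), consider a single neuron y_hat = sigmoid(w*x + b) trained on one sample with a squared loss:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y_true = 0.5, 0.8       # one training sample (illustrative values)
w, b, lr = 0.1, 0.0, 0.5   # initial parameters and learning rate

for step in range(200):
    y_hat = sigmoid(w * x + b)      # forward pass
    # Chain rule: dL/dw = dL/dy_hat * dy_hat/dz * dz/dw, with L = (y_hat - y_true)^2
    dL_dy = 2 * (y_hat - y_true)
    dy_dz = y_hat * (1 - y_hat)     # derivative of the sigmoid
    w -= lr * dL_dy * dy_dz * x     # gradient descent update for w
    b -= lr * dL_dy * dy_dz         # gradient descent update for b

print(w, b, (sigmoid(w * x + b) - y_true) ** 2)   # the loss shrinks towards 0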
Simple nonlinear regression prediction
This section fits the nonlinear function y = 2x^2 + 3x + 4. The input samples x are normalized in advance to keep the gradients from exploding during backpropagation. After a number of rounds of parameter updates (each step uses gradient descent, typically mini-batch stochastic gradient descent), the predicted values (predict) become very close to the true values (target).
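Before the full example, here is a small self-contained check (an added sketch with illustrative values, not part of the original script) that the analytic gradient of the squared-error loss for this quadratic model agrees with a finite-difference estimate; the same gradient expressions appear in the training loop below:

# Check d(loss)/d(w1) at one sample for loss = (w1*x^2 + w2*x + w3 - y_true)^2
w1, w2, w3 = 1.0, 0.0, -1.0          # same starting weights as the script below
x, y_true = 0.5, 6.0                 # y_true = 2*0.5**2 + 3*0.5 + 4
def sample_loss(a, b, c):
    return (a * x**2 + b * x + c - y_true) ** 2
analytic = 2 * (w1 * x**2 + w2 * x + w3 - y_true) * x**2    # chain rule
eps = 1e-6
numeric = (sample_loss(w1 + eps, w2, w3) - sample_loss(w1 - eps, w2, w3)) / (2 * eps)
print(analytic, numeric)             # the two estimates should agree closely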
Example code
import matplotlib.pyplot as plt
# The data lists below were generated (and inspected) with this commented-out snippet:
# X = [0.01 * x for x in range(100)]        # inputs normalized to [0, 1)
# Y = [2*x**2 + 3*x + 4 for x in X]         # targets from y = 2x^2 + 3x + 4
# print(X)
# print(Y)
# plt.scatter(X, Y, color='red')
# plt.show()
# input()
X = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35000000000000003, 0.36, 0.37, 0.38, 0.39, 0.4, 0.41000000000000003, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47000000000000003, 0.48, 0.49, 0.5, 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.5700000000000001, 0.58, 0.59, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.6900000000000001, 0.7000000000000001, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8, 0.81, 0.8200000000000001, 0.8300000000000001, 0.84, 0.85,
0.86, 0.87, 0.88, 0.89, 0.9, 0.91, 0.92, 0.93, 0.9400000000000001, 0.9500000000000001, 0.96, 0.97, 0.98, 0.99]
Y = [4.0, 4.0302, 4.0608, 4.0918, 4.1232, 4.155, 4.1872, 4.2198, 4.2528, 4.2862, 4.32, 4.3542, 4.3888, 4.4238, 4.4592, 4.495, 4.5312, 4.5678, 4.6048, 4.6422, 4.68, 4.7181999999999995, 4.7568, 4.7958, 4.8352, 4.875, 4.9152000000000005, 4.9558, 4.9968, 5.0382, 5.08, 5.122199999999999, 5.1648, 5.2078, 5.2512, 5.295, 5.3392, 5.3838, 5.4288, 5.4742, 5.5200000000000005, 5.5662, 5.6128, 5.6598, 5.7072, 5.755, 5.8032, 5.851800000000001, 5.9008, 5.9502, 6.0, 6.0502, 6.1008, 6.1518, 6.203200000000001, 6.255000000000001, 6.3072, 6.3598, 6.4128, 6.4662, 6.52, 6.5742, 6.6288, 6.6838, 6.7392, 6.795, 6.8512, 6.9078, 6.9648, 7.022200000000001, 7.08, 7.138199999999999, 7.1968, 7.2558, 7.3152, 7.375, 7.4352, 7.4958, 7.5568, 7.6182, 7.680000000000001, 7.7422, 7.8048, 7.867800000000001, 7.9312, 7.994999999999999, 8.0592, 8.1238, 8.1888, 8.2542, 8.32, 8.3862, 8.4528, 8.5198, 8.587200000000001, 8.655000000000001, 8.7232, 8.7918, 8.8608, 8.9302]
def func(x):
    # Model: a quadratic in x with learnable weights w1, w2, w3
    return w1 * x**2 + w2 * x + w3

def loss(y_pred, y_true):
    # Squared error for a single sample
    return (y_pred - y_true) ** 2
# Weight initialization (fixed values here; in practice weights are usually initialized randomly)
w1, w2, w3 = 1, 0, -1
# Learning rate
lr = 0.1
# Batch size for mini-batch SGD
batch_size = 20
# Training loop
for epoch in range(1000):
    epoch_loss = 0
    grad_w1 = 0
    grad_w2 = 0
    grad_w3 = 0
    counter = 0
    for x, y_true in zip(X, Y):
        y_pred = func(x)
        epoch_loss += loss(y_pred, y_true)
        counter += 1
        # Accumulate the gradient of the squared error w.r.t. each weight
        grad_w1 += 2 * (y_pred - y_true) * x ** 2
        grad_w2 += 2 * (y_pred - y_true) * x
        grad_w3 += 2 * (y_pred - y_true)
        if counter == batch_size:
            # Mini-batch SGD update: step along the averaged negative gradient
            w1 = w1 - lr * grad_w1 / batch_size
            w2 = w2 - lr * grad_w2 / batch_size
            w3 = w3 - lr * grad_w3 / batch_size
            counter = 0
            grad_w1 = 0
            grad_w2 = 0
            grad_w3 = 0
    epoch_loss /= len(X)
    print("Epoch %d, loss %f" % (epoch, epoch_loss))
    if epoch_loss < 1e-5:
        break
print(f"Weights after training: w1:{w1} w2:{w2} w3:{w3}")
# Use the trained model to produce predictions
Yp = [func(i) for i in X]
# Compare the distributions of predictions and true values
plt.scatter(X, Y, color="red", label="target", s=2)
plt.scatter(X, Yp, color="green", label="predict", s=2)
plt.legend(fontsize=8)
plt.show()
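For comparison, the same fit can also be written with an automatic-differentiation framework. The following is a minimal sketch assuming PyTorch is available (it uses full-batch gradient descent rather than the mini-batch loop above); here backpropagation and the parameter update are handled by the library:

import torch

x_t = torch.tensor(X).unsqueeze(1)        # (100, 1) inputs
y_t = torch.tensor(Y).unsqueeze(1)        # (100, 1) targets
w = torch.zeros(3, requires_grad=True)    # model: w[0]*x^2 + w[1]*x + w[2]
opt = torch.optim.SGD([w], lr=0.1)

for epoch in range(5000):
    y_pred = w[0] * x_t**2 + w[1] * x_t + w[2]
    mse = torch.mean((y_pred - y_t) ** 2)  # mean squared error
    opt.zero_grad()
    mse.backward()                         # backpropagation computes the gradients
    opt.step()                             # gradient descent update
print(w.detach().tolist(), mse.item())     # learned weights and final loss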