I. What is softmax regression?
In a linear regression model, we train on the given input features and obtain a continuous output value as the prediction, the so-called y-hat. Such continuous predictions are typically used for temperature forecasting, house price estimation, stock prediction, and so on. For discrete outputs such as image categories, however, softmax regression is a very effective classification model. Unlike linear regression, softmax regression has multiple output units instead of one, and it introduces the softmax operation to make the outputs better suited to predicting and training on discrete labels.
II. Basic mathematics behind softmax.
- softmax model
# e.g. num_inputs = 4, num_outputs = 3
o1 = x1w11 + x2w21 + x3w31 + x4w41 + b1
o2 = x1w12 + x2w22 + x3w32 + x4w42 + b2
o3 = x1w13 + x2w23 + x3w33 + x4w43 + b3
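As a minimal concrete sketch of this linear part (the toy input and random weights below are made up purely for illustration), all three outputs can be computed with a single matrix product:

from mxnet import nd

# toy linear layer: 4 inputs -> 3 outputs
x = nd.array([[1.0, 2.0, 3.0, 4.0]])  # shape (1, 4)
W = nd.random.normal(shape=(4, 3))    # weights w11 ... w43
b = nd.zeros(3)                       # biases b1, b2, b3
o = nd.dot(x, W) + b                  # o1, o2, o3 in one step
print(o)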
- softmax computation:
Why introduce the softmax operation at all? Why not predict directly from the raw output values?
There are two reasons. First, the range of the output layer's values is unbounded (e.g. o1, o2, o3 = 0.1, 100, 0.01), which makes the values hard to interpret directly. Second, since the true labels are discrete, the error between these discrete labels and output values of unbounded range is hard to measure.
Softmax solves both of these problems.
The softmax operation transforms the output values into a probability distribution whose entries are positive and sum to 1:
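$$\hat{y}_1, \hat{y}_2, \hat{y}_3 = \mathrm{softmax}(o_1, o_2, o_3), \qquad \hat{y}_j = \frac{\exp(o_j)}{\sum_{k=1}^{3} \exp(o_k)}$$

Each $\hat{y}_j$ lies in (0, 1) and $\hat{y}_1 + \hat{y}_2 + \hat{y}_3 = 1$. Since $\exp$ is monotonically increasing, $\arg\max_j o_j = \arg\max_j \hat{y}_j$, so softmax does not change the predicted class; it only turns the raw scores into interpretable probabilities.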
- vectorized expression:
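For a mini-batch $\boldsymbol{X} \in \mathbb{R}^{n \times d}$ of $n$ examples, with weights $\boldsymbol{W} \in \mathbb{R}^{d \times q}$ and bias $\boldsymbol{b} \in \mathbb{R}^{1 \times q}$ (broadcast over rows), the computation above can be written compactly as

$$\boldsymbol{O} = \boldsymbol{X}\boldsymbol{W} + \boldsymbol{b}, \qquad \hat{\boldsymbol{Y}} = \mathrm{softmax}(\boldsymbol{O}),$$

where the softmax is applied row by row.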
- cross entropy loss function
We could use the squared loss $\frac{1}{2}\|\hat{\boldsymbol{y}}^{(i)} - \boldsymbol{y}^{(i)}\|^2$, just as in linear regression. However, to predict the correct class we do not actually need the predicted probabilities to equal the label probabilities exactly. For example, in the image-classification setting, if $y^{(i)} = 3$ we only need $\hat{y}_3^{(i)}$ to be larger than the other two predictions $\hat{y}_1^{(i)}$ and $\hat{y}_2^{(i)}$. Even if $\hat{y}_3^{(i)}$ is only 0.6, the class prediction is correct no matter what the other two values are. The squared loss, by contrast, is overly strict: $\hat{y}_1^{(i)} = \hat{y}_2^{(i)} = 0.2$ incurs a much smaller loss than $\hat{y}_1^{(i)} = 0, \hat{y}_2^{(i)} = 0.4$, even though both give the same, correct classification. One way to improve on this is to use a loss function better suited to measuring the difference between two probability distributions.
Cross entropy is a commonly used measure of exactly this kind:
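$$H\left(\boldsymbol{y}^{(i)}, \hat{\boldsymbol{y}}^{(i)}\right) = -\sum_{j=1}^{q} y_j^{(i)} \log \hat{y}_j^{(i)}$$

Because $\boldsymbol{y}^{(i)}$ is a one-hot vector, the sum collapses to $-\log \hat{y}_{y^{(i)}}^{(i)}$: only the predicted probability assigned to the true class matters, which is exactly what the cross_entropy implementation below computes.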
III. Implementing a simple softmax model from scratch.
- import packages:
import MachineLearning.Utility.utils as tool
from mxnet import autograd, nd
- read dataset:
# load dataset(training dataset and test dataset)
batch_size = 256
# train_iter and test_iter represent two DataLoader objects
train_iter, test_iter = tool.load_data_fashion_mnist(batch_size)
- initialize parameters:
# d = 28 * 28 = 784, q = 10
num_inputs = 784
num_outputs = 10
# W.shape: (784, 10), b.shape: (10,)
W = nd.random.normal(scale=0.01, shape=(num_inputs, num_outputs))
b = nd.zeros(num_outputs)
W.attach_grad()
b.attach_grad()
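attach_grad allocates a zero-initialized gradient buffer of the same shape as each parameter, which autograd fills in during the backward pass. A quick shape check:

print(W.grad.shape, b.grad.shape)  # (784, 10) (10,)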
- define softmax function:
def softmax(X):
    X_exp = X.exp()
    partition = X_exp.sum(axis=1, keepdims=True)
    return X_exp / partition  # broadcasting divides each row by its sum
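A quick sanity check (the random input below is just for illustration): every row of the result should be positive and sum to 1.

X = nd.random.normal(shape=(2, 5))
X_prob = softmax(X)
print(X_prob, X_prob.sum(axis=1))  # each row is a valid probability distribution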
- define model:
def net(X):
    # flatten each 28x28 image into a 784-vector, then apply the linear layer and softmax
    return softmax(nd.dot(X.reshape((-1, num_inputs)), W) + b)
- define loss function:
def cross_entropy(y_hat, y):
    # pick out the predicted probability of the true class, then take -log
    return -nd.pick(y_hat, y).log()
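nd.pick(y_hat, y) selects, from each row of y_hat, the entry whose index is given by y, i.e. the predicted probability of the true class. A toy example (values made up for illustration):

y_hat = nd.array([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = nd.array([0, 2], dtype='int32')
print(nd.pick(y_hat, y))        # [0.1  0.5]
print(cross_entropy(y_hat, y))  # [-log(0.1)  -log(0.5)] ≈ [2.3026  0.6931]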
- calculate accuracy:
def accuracy(y_hat, y):
    # fraction of rows whose argmax matches the label
    return (y_hat.argmax(axis=1) == y.astype('float32')).mean().asscalar()
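Continuing the toy example above: the first row's argmax is class 2 but its label is 0, while the second row's argmax 2 matches its label, so the accuracy is 0.5.

print(accuracy(y_hat, y))  # 0.5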
def evaluate_accuracy(data_iter, net):
    # average accuracy of net over an entire data iterator
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        y = y.astype('float32')
        acc_sum += (net(X).argmax(axis=1) == y).sum().asscalar()
        n += y.size
    return acc_sum / n
- train the model:
num_epochs, lr = 5, 0.1
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
              params=None, lr=None, trainer=None):
    """Train and evaluate a model on the CPU."""
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            with autograd.record():
                y_hat = net(X)
                l = loss(y_hat, y).sum()
            l.backward()
            if trainer is None:
                # mini-batch SGD helper, assumed to be provided by the utils module
                tool.sgd(params, lr, batch_size)
            else:
                trainer.step(batch_size)
            y = y.astype('float32')
            train_l_sum += l.asscalar()
            train_acc_sum += (y_hat.argmax(axis=1) == y).sum().asscalar()
            n += y.size
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))
tool.train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs,
               batch_size, [W, b], lr)
- predict on the test dataset:
# grab one batch of test images and labels
for X, y in test_iter:
    break
print(X.asnumpy().shape)
true_labels = tool.get_fashion_mnist_labels(y.asnumpy())
pred_labels = tool.get_fashion_mnist_labels(net(X).argmax(axis=1).asnumpy())
titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]
tool.show_fashion_mnist(X[0:9], titles[0:9])
- training results:
IV. Summary
…