Dimension-wise operations on a multi-dimensional Tensor
Given a Tensor matrix X, we can sum only the elements in the same column (dim=0) or the same row (dim=1), and keep both the row and column dimensions in the result (keepdim=True).

```python
X = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(X.sum(dim=0, keepdim=True))
print(X.sum(dim=1, keepdim=True))

# Output:
# tensor([[5, 7, 9]])
# tensor([[ 6],
#         [15]])
```
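To see why keepdim matters, here is a small sketch (my addition, not from the original post) comparing the two shapes. The row-normalization at the end is exactly the broadcasting pattern a softmax implementation relies on:

```python
import torch

X = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Without keepdim the reduced dimension is dropped entirely.
print(X.sum(dim=0).shape)                # torch.Size([3])
# With keepdim the reduced dimension stays as size 1.
print(X.sum(dim=0, keepdim=True).shape)  # torch.Size([1, 3])

# The kept dimension lets the (2, 1) row sums broadcast against the
# (2, 3) matrix, normalizing each row to sum to 1.
print(X / X.sum(dim=1, keepdim=True))
```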
The gather function
The variable y_hat holds the predicted probabilities of 2 samples over 3 classes, and the variable y holds those 2 samples' label classes. Using the gather function, we obtain the predicted probability assigned to each sample's label.

```python
y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.LongTensor([0, 2])
y_hat.gather(1, y.view(-1, 1))

# Output:
# tensor([[0.1000],
#         [0.5000]])
```

The following implements the cross-entropy loss function (for softmax regression):

```python
def cross_entropy(y_hat, y):
    return - torch.log(y_hat.gather(1, y.view(-1, 1)))
```
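As a sanity check (my addition), the gather-based loss can be compared against the same computation written with integer-array indexing; both pick out the probability each row assigns to its label:

```python
import torch

def cross_entropy(y_hat, y):
    return - torch.log(y_hat.gather(1, y.view(-1, 1)))

y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.LongTensor([0, 2])

l = cross_entropy(y_hat, y)
# The same per-sample losses via integer-array indexing:
l2 = -torch.log(y_hat[torch.arange(len(y_hat)), y]).view(-1, 1)
print(torch.allclose(l, l2))  # True
```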
Computing accuracy
Define the accuracy function accuracy. Here y_hat.argmax(dim=1) returns the index of the largest element in each row of the matrix y_hat, and the result has the same shape as the variable y. The equality comparison (y_hat.argmax(dim=1) == y) yields a Tensor of type ByteTensor (a BoolTensor in recent PyTorch versions), which we convert with float() into a floating-point Tensor of 0s (not equal) and 1s (equal).

```python
def accuracy(y_hat, y):
    return (y_hat.argmax(dim=1) == y).float().mean().item()
```
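Applied to the y_hat and y from the gather example above, accuracy works out as follows: the first sample's largest probability is at index 2 but its label is 0, while the second sample's largest probability is at index 2 and matches its label, giving 1 correct out of 2:

```python
import torch

def accuracy(y_hat, y):
    return (y_hat.argmax(dim=1) == y).float().mean().item()

y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.LongTensor([0, 2])
print(accuracy(y_hat, y))  # 0.5
```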
Training the model
```python
num_epochs, lr = 5, 0.1

# This function is saved in the d2lzh package for later use
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
              params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()

            # Zero the gradients
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()

            l.backward()
            if optimizer is None:
                d2l.sgd(params, lr, batch_size)
            else:
                optimizer.step()  # used in "Concise implementation of softmax regression"

            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs,
          batch_size, [W, b], lr)

# Output:
# epoch 1, loss 0.7878, train acc 0.749, test acc 0.794
# epoch 2, loss 0.5702, train acc 0.814, test acc 0.813
# epoch 3, loss 0.5252, train acc 0.827, test acc 0.819
# epoch 4, loss 0.5010, train acc 0.833, test acc 0.824
# epoch 5, loss 0.4858, train acc 0.836, test acc 0.815
```
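The training loop above relies on two helpers the post does not show: d2l.sgd and evaluate_accuracy. Here is a hedged sketch of what they typically look like in the d2lzh codebase; the actual package may differ in detail. Note the division by batch_size in sgd: because the loss above is a sum over the batch, this turns the summed gradient into a mean-gradient step.

```python
import torch

def sgd(params, lr, batch_size):
    # Mini-batch SGD on raw tensors: update .data in place so the step
    # itself is not tracked by autograd.
    for param in params:
        param.data -= lr * param.grad / batch_size

def evaluate_accuracy(data_iter, net):
    # Fraction of samples whose argmax prediction matches the label.
    acc_sum, n = 0.0, 0
    with torch.no_grad():
        for X, y in data_iter:
            acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
            n += y.shape[0]
    return acc_sum / n
```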
PyTorch Basics (5): Softmax Regression