PyTorch Fashion-MNIST: after 1280 training iterations the loss won't go down, it stays around 1.73; the model's last step is a softmax

This post walks through training a neural network with Python and the PyTorch library, covering the key steps of data loading, forward propagation, loss computation, backpropagation, and weight updates. Actual code shows how the network parameters are adjusted to minimize the loss function.


import torch
from torch import nn

# TODO: Train the network here
print("initial fc1.weights=", model.fc1.weight)

criterion = nn.CrossEntropyLoss()              # build the loss function once, outside the loop
epochs = 1280
for i in range(epochs):
    images, labels = next(iter(trainloader))   # draw a fresh (shuffled) batch each iteration
    batch_size = images.size(0)
    images = images.view(batch_size, 784)      # flatten the 28x28 images into 784-dim vectors
    optimizer.zero_grad()                      # clear gradients accumulated in the previous step
    output = model(images)                     # forward pass (calls model.forward under the hood)
    loss = criterion(output, labels)
    loss.backward()                            # backpropagate to compute gradients
    print("loss=", loss)
    optimizer.step()                           # update the weights
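
The snippet above assumes that model, optimizer and trainloader were already created earlier in the notebook. A minimal sketch of that setup, consistent with the title (a classifier whose last step is a softmax), could look like the code below; the layer sizes, learning rate and data path are assumptions, not the original values. Note that nn.CrossEntropyLoss applies log-softmax internally, so it is normally fed the raw logits rather than softmax outputs.

import torch
from torch import nn, optim
from torchvision import datasets, transforms

# Assumed data pipeline: Fashion-MNIST, normalized, batched (path and batch size are guesses).
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True,
                                 train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Assumed network: the hidden size is a guess; the trailing softmax follows the title's description.
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        # nn.CrossEntropyLoss expects the raw logits fc2(x); the extra softmax here
        # only mirrors what the title describes.
        return self.softmax(x)

model = Classifier()
optimizer = optim.SGD(model.parameters(), lr=0.01)  # learning rate is an assumption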

 

# Loss printed at each iteration

loss= tensor(1.8425, grad_fn=<NllLossBackward>)
loss= tensor(1.7099, grad_fn=<NllLossBackward>)
loss= tensor(1.6928, grad_fn=<NllLossBackward>)
loss= tensor(1.8132, grad_fn=<NllLossBackward>)
loss= tensor(1.7459, grad_fn=<NllLossBackward>)
loss= tensor(1.7712, grad_fn=<NllLossBackward>)
loss= tensor(1.7174, grad_fn=<NllLossBackward>)
loss= tensor(1.8287, grad_fn=<NllLossBackward>)
loss= tensor(1.8221, grad_fn=<NllLossBackward>)
loss= tensor(1.7079, grad_fn=<NllLossBackward>)
loss= tensor(1.8615, grad_fn=<NllLossBackward>)
loss= tensor(1.8535, grad_fn=<NllLossBackward>)
loss= tensor(1.7789, grad_fn=<NllLossBackward>)
loss= tensor(1.8826, grad_fn=<NllLossBackward>)
loss= tensor(1.7923, grad_fn=<NllLossBackward>)
loss= tensor(1.7446, grad_fn=<NllLossBackward>)
loss= tensor(1.7387, grad_fn=<NllLossBackward>)
loss= tensor(1.8063, grad_fn=<NllLossBackward>)
loss= tensor(1.7541, grad_fn=<NllLossBackward>)
loss= tensor(1.7205, grad_fn=<NllLossBackward>)
loss= tensor(1.7760, grad_fn=<NllLossBackward>)
loss= tensor(1.7013, grad_fn=<NllLossBackward>)
loss= tensor(1.8267, grad_fn=<NllLossBackward>)
loss= tensor(1.6684, grad_fn=<NllLossBackward>)
loss= tensor(1.7829, grad_fn=<NllLossBackward>)
loss= tensor(1.7570, grad_fn=<NllLossBackward>)
loss= tensor(1.7603, grad_fn=<NllLossBackward>)
loss= tensor(1.6776, grad_fn=<NllLossBackward>)
loss= tensor(1.7989, grad_fn=<NllLossBackward>)
loss= tensor(1.7488, grad_fn=<NllLossBackward>)
loss= tensor(1.7861, grad_fn=<NllLossBackward>)
loss= tensor(1.7122, grad_fn=<NllLossBackward>)
loss= tensor(1.7824, grad_fn=<NllLossBackward>)
loss= tensor(1.7683, grad_fn=<NllLossBackward>)
loss= tensor(1.7521, grad_fn=<NllLossBackward>)
loss= tensor(1.7753, grad_fn=<NllLossBackward>)
loss= tensor(1.7420, grad_fn=<NllLossBackward>)
loss= tensor(1.7792, grad_fn=<NllLossBackward>)
loss= tensor(1.8161, grad_fn=<NllLossBackward>)
loss= tensor(1.7306, grad_fn=<NllLossBackward>)
loss= tensor(1.7359, grad_fn=<NllLossBackward>)
loss= tensor(1.6829, grad_fn=<NllLossBackward>)
loss= tensor(1.7521, grad_fn=<NllLossBackward>)
loss= tensor(1.8347, grad_fn=<NllLossBackward>)
loss= tensor(1.7310, grad_fn=<NllLossBackward>)
loss= tensor(1.7629, grad_fn=<NllLossBackward>)
loss= tensor(1.7304, grad_fn=<NllLossBackward>)
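
Because each printed value is the loss of a single batch, it naturally bounces around (between roughly 1.67 and 1.88 above), which makes the trend hard to read. A small sketch, reusing the same model, criterion, optimizer and trainloader as above, that prints an averaged loss every 100 iterations instead:

# Accumulate per-batch losses and report a 100-iteration mean (sketch).
running_loss = 0.0
for i in range(epochs):
    images, labels = next(iter(trainloader))
    images = images.view(images.size(0), 784)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    running_loss += loss.item()
    if (i + 1) % 100 == 0:
        print(f"iter {i + 1}: mean loss = {running_loss / 100:.4f}")
        running_loss = 0.0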