MNIST in PyTorch: a simple network reaches about 97% accuracy after a few epochs of training

This post walks through training a neural network on the MNIST dataset with PyTorch, covering the key steps: data preprocessing, model definition, choice of loss function, optimizer configuration, and the training loop.


Three pieces have to be specified for training: the model, the loss function, and the optimizer.
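Of the three, the loss is the easiest to get wrong: nn.CrossEntropyLoss expects raw, unnormalized scores (logits), because it applies log-softmax internally, which is why the model below has no softmax in its forward pass. A rough sketch of what the loss computes for a single sample (plain Python, no PyTorch required; the `cross_entropy` helper is my own illustration, not a library function):

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy for one sample: -log(softmax(logits)[target])."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    log_softmax = [(z - m) - math.log(sum(exps)) for z in logits]
    return -log_softmax[target]

logits = [2.0, 0.5, -1.0]           # raw scores for 3 classes
print(cross_entropy(logits, 0))     # small loss: class 0 has the highest score
print(cross_entropy(logits, 2))     # larger loss: class 2 has a low score
```

Because the softmax is folded into the loss, the network's outputs stay as plain scores during training, and only their relative order matters at test time.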

import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable  # legacy wrapper; a no-op on modern PyTorch

import win_unicode_console
win_unicode_console.enable()  # only needed to fix console output on older Windows setups

# hyper-parameters
input_size = 28 * 28  # MNIST images are 28x28 pixels, flattened to vectors
num_classes = 10      # digits 0-9
num_epochs = 10
batch_size = 100
learning_rate = 1e-3

# MNIST dataset (set download=True on the first run to fetch the data)
train_dataset = dsets.MNIST(root='../../data_sets/mnist', train=True,
                            transform=transforms.ToTensor(), download=False)
test_dataset = dsets.MNIST(root='../../data_sets/mnist', train=False,
                           transform=transforms.ToTensor(), download=False)

train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)  # no need to shuffle the test set

hidden_size = 100

# define the model: a fully connected network with one hidden layer and ReLU
class neural_net(nn.Module):
    def __init__(self, input_num, hidden_size, out_put):
        super(neural_net, self).__init__()
        self.fc1 = nn.Linear(input_num, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, out_put)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = neural_net(input_size, hidden_size, num_classes)
print(model)

# optimization: cross-entropy loss with the Adam optimizer
num_epochs = 5
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(num_epochs):
    print('current epoch = %d' % epoch)
    # enumerate yields the batch index together with each (images, labels) batch
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28 * 28))  # flatten each 28x28 image into a 784-vector
        labels = Variable(labels)

        optimizer.zero_grad()             # clear gradients from the previous batch
        outputs = model(images)           # forward pass
        loss = criterion(outputs, labels)
        loss.backward()                   # backpropagation
        optimizer.step()                  # update the weights

        if i % 100 == 0:
            print('current loss = %.5f' % loss.item())  # loss.data[0] is deprecated

# evaluate on the test set
total = 0
correct = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28 * 28))
    outputs = model(images)
    _, predicts = torch.max(outputs.data, 1)  # index of the largest score = predicted class
    total += labels.size(0)
    correct += (predicts == labels).sum().item()

print('Accuracy = %.2f%%' % (100.0 * correct / total))
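The evaluation loop above just counts how often the arg-max over the ten output scores matches the true label; `torch.max(outputs.data, 1)` returns the maximum score along dimension 1 together with its index, and only the index is kept. The same bookkeeping in plain Python (the `accuracy` helper and the toy scores are made up for illustration):

```python
def accuracy(scores_batch, labels):
    """Percentage of samples whose highest-scoring class equals the label."""
    correct = 0
    for scores, label in zip(scores_batch, labels):
        # analogue of torch.max(scores, dim=1): pick the index of the largest score
        predicted = max(range(len(scores)), key=lambda k: scores[k])
        correct += int(predicted == label)
    return 100.0 * correct / len(labels)

scores = [[0.1, 2.3, 0.5],   # predicted class 1
          [1.9, 0.2, 0.4],   # predicted class 0
          [0.3, 0.1, 3.0]]   # predicted class 2
labels = [1, 0, 0]           # last sample is misclassified
print('Accuracy = %.2f%%' % accuracy(scores, labels))  # prints Accuracy = 66.67%
```

Note that the magnitudes of the scores never enter the accuracy; only their ranking per sample matters.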
