PyTorch Study Notes (5): Convolutional Neural Networks

This post shows how to build a simple two-layer convolutional neural network in PyTorch and train it on the MNIST dataset. It configures the device and hyperparameters, loads the data, and defines a small CNN for training. After 10 epochs, the model reaches 98.88% accuracy on the test set. Finally, the model is saved for later use.


# Import required libraries
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import time
# Configure device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Hyperparameters
num_epochs = 10
num_classes = 10
batch_size = 200
learning_rate = 0.001
# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data',
                                          train=True,
                                          transform=transforms.ToTensor(),
                                          download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
                                         train=False,
                                         transform=transforms.ToTensor())
# Data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                          batch_size=batch_size,
                                          shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                         batch_size=batch_size,
                                         shuffle=False)
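With batch_size = 200, the training loader yields 60,000 / 200 = 300 batches per epoch, which is where the "Step [x/300]" in the training log below comes from. A quick sketch using random stand-in tensors (so it runs without downloading MNIST) shows the shapes each batch delivers:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for MNIST: 600 random 1x28x28 "images" (the real training set has 60,000)
images = torch.randn(600, 1, 28, 28)
labels = torch.randint(0, 10, (600,))
loader = DataLoader(TensorDataset(images, labels), batch_size=200, shuffle=True)

x, y = next(iter(loader))
print(x.shape, y.shape)  # torch.Size([200, 1, 28, 28]) torch.Size([200])
print(len(loader))       # 3 batches here; with 60,000 images and batch_size=200 it is 300
```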
# A convolutional neural network with only two convolutional layers
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
                                    nn.BatchNorm2d(16),
                                    nn.ReLU(),
                                    nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
                                    nn.BatchNorm2d(32),
                                    nn.ReLU(),
                                    nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
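Why does the linear layer expect 7*7*32 features? With padding=2, a 5x5 convolution preserves the 28x28 spatial size, and each 2x2 max-pool halves it: 28 → 14 → 7. A quick shape trace with a dummy batch (a standalone sketch, not part of the training script) confirms this:

```python
import torch
import torch.nn as nn

layer1 = nn.Sequential(nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
                       nn.BatchNorm2d(16), nn.ReLU(),
                       nn.MaxPool2d(kernel_size=2, stride=2))
layer2 = nn.Sequential(nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
                       nn.BatchNorm2d(32), nn.ReLU(),
                       nn.MaxPool2d(kernel_size=2, stride=2))

x = torch.randn(2, 1, 28, 28)   # dummy MNIST-sized batch
out1 = layer1(x)                # conv keeps 28x28, pool halves it to 14x14
out2 = layer2(out1)             # 14x14 -> 7x7
print(out1.shape)               # torch.Size([2, 16, 14, 14])
print(out2.shape)               # torch.Size([2, 32, 7, 7]); flattened: 7*7*32 = 1568
```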
model = ConvNet(num_classes).to(device)
# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
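Note that the model's forward pass ends with a bare linear layer and no softmax. That is intentional: nn.CrossEntropyLoss applies log-softmax internally, so it expects raw logits. A minimal sketch of the equivalence:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, 0.1]])   # raw scores, no softmax applied
target = torch.tensor([0])                 # true class index

loss = criterion(logits, target)
# Equivalent by hand: negative log-softmax of the true class
manual = -torch.log_softmax(logits, dim=1)[0, 0]
print(torch.isclose(loss, manual))  # tensor(True)
```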
start = time.time()
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))
Epoch [1/10], Step [100/300], Loss: 0.1235
Epoch [1/10], Step [200/300], Loss: 0.0800
Epoch [1/10], Step [300/300], Loss: 0.0638
Epoch [2/10], Step [100/300], Loss: 0.1008
Epoch [2/10], Step [200/300], Loss: 0.0510
Epoch [2/10], Step [300/300], Loss: 0.0924
Epoch [3/10], Step [100/300], Loss: 0.0230
Epoch [3/10], Step [200/300], Loss: 0.0456
Epoch [3/10], Step [300/300], Loss: 0.0413
Epoch [4/10], Step [100/300], Loss: 0.0509
Epoch [4/10], Step [200/300], Loss: 0.0124
Epoch [4/10], Step [300/300], Loss: 0.0416
Epoch [5/10], Step [100/300], Loss: 0.0698
Epoch [5/10], Step [200/300], Loss: 0.0155
Epoch [5/10], Step [300/300], Loss: 0.0185
Epoch [6/10], Step [100/300], Loss: 0.0094
Epoch [6/10], Step [200/300], Loss: 0.0239
Epoch [6/10], Step [300/300], Loss: 0.0536
Epoch [7/10], Step [100/300], Loss: 0.0165
Epoch [7/10], Step [200/300], Loss: 0.0144
Epoch [7/10], Step [300/300], Loss: 0.0044
Epoch [8/10], Step [100/300], Loss: 0.0099
Epoch [8/10], Step [200/300], Loss: 0.0128
Epoch [8/10], Step [300/300], Loss: 0.0436
Epoch [9/10], Step [100/300], Loss: 0.0312
Epoch [9/10], Step [200/300], Loss: 0.0062
Epoch [9/10], Step [300/300], Loss: 0.0445
Epoch [10/10], Step [100/300], Loss: 0.0209
Epoch [10/10], Step [200/300], Loss: 0.0025
Epoch [10/10], Step [300/300], Loss: 0.0048
end = time.time()
print('Elapsed time: {}.'.format(end - start))
Elapsed time: 52.60681462287903.
# Test the model
model.eval()  # switch to evaluation mode (uses BatchNorm running statistics)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (labels == predicted).sum().item()

    print('Test accuracy of the model on the 10000 test images:{}%'
          .format(100*correct/total))
Test accuracy of the model on the 10000 test images:98.88%
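The accuracy bookkeeping relies on torch.max(outputs, 1), which returns a (values, indices) pair; the indices along dimension 1 are the predicted class labels. A small sketch with made-up logits:

```python
import torch

outputs = torch.tensor([[0.1, 2.0, 0.3],   # predicts class 1
                        [1.5, 0.2, 0.1]])  # predicts class 0
_, predicted = torch.max(outputs, 1)
labels = torch.tensor([1, 0])

correct = (labels == predicted).sum().item()
print(predicted)                  # tensor([1, 0])
print(correct / labels.size(0))  # 1.0
```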
# Save the model checkpoint
torch.save(model.state_dict(),'model.ckpt')
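To reuse the saved weights later, construct a model with the same architecture, call load_state_dict, and switch to eval mode before inference. A minimal sketch of the save/load round trip, using a small stand-in module so it runs without the trained ConvNet:

```python
import os
import tempfile

import torch
import torch.nn as nn

net = nn.Linear(4, 2)                      # stand-in for ConvNet
path = os.path.join(tempfile.mkdtemp(), 'model.ckpt')
torch.save(net.state_dict(), path)         # save only the parameters

net2 = nn.Linear(4, 2)                     # must match the saved architecture
net2.load_state_dict(torch.load(path))
net2.eval()                                # important for models with BatchNorm/Dropout

x = torch.randn(3, 4)
same = torch.equal(net(x), net2(x))        # identical weights -> identical outputs
print(same)  # True
```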