pytorch_basics

This article introduces the basics of PyTorch, including tensor operations, automatic differentiation, and building a simple neural network. Worked examples show how PyTorch is used in deep learning, making it a suitable starting point for beginners.


import torch           # The torch package provides multi-dimensional tensor data structures and the math operations defined on them, plus utilities for efficiently serializing tensors and other object types.
import torchvision     # The torchvision package bundles popular datasets, model architectures, and common image transforms.
import torch.nn as nn
import numpy as np
import torchvision.transforms as transforms


# ================================================================== #
#                     1. Basic autograd example 1                    #
# ================================================================== #

# Create tensors
x = torch.tensor(1., requires_grad=True)  # Note the trailing dot: gradients can only be computed for floating-point tensors.
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)

# Build a computational graph.
y = w * x + b

# Compute gradients.
y.backward()

# Print out the gradients.
print(x.grad)    # x.grad = 2 (dy/dx = w)
print(w.grad)    # w.grad = 1 (dy/dw = x)
print(b.grad)    # b.grad = 1 (dy/db = 1)
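
# Gradients accumulate across backward() calls unless they are cleared
# (a small sketch beyond the original example): rebuilding the graph and
# calling backward() again adds the new gradients onto the existing ones.
y = w * x + b
y.backward()
print(x.grad)    # now tensor(4.) = 2 + 2, because the gradients were accumulated
x.grad.zero_()   # reset the accumulated gradient in place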


# ================================================================== #
#                    2. Basic autograd example 2                     #
# ================================================================== #
# Create tensors of shape (10, 3) and (10, 2)
x = torch.randn(10, 3) # inputs
y = torch.randn(10, 2) # targets

# Build a fully connected layer.
linear = nn.Linear(3, 2)  # Applies a linear (affine) transformation; signature: torch.nn.Linear(in_features, out_features, bias=True).
print('w:', linear.weight)
print('b:', linear.bias)

# Build loss function and optimizer.
criterion = nn.MSELoss()   # Mean squared error: the sum of the squared element-wise differences, divided by the number of elements.
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01)  # The parameters to optimize and the learning rate.

# Forward pass.
pred = linear(x)

# Compute loss.
loss = criterion(pred, y)
print('loss:', loss.item())  # loss.item() returns the loss as a plain Python number.

# Backward pass: compute the gradients of the loss w.r.t. the parameters.
loss.backward()

# 1-step gradient descent.
optimizer.step()

# You can also perform gradient descent at the low level.
# linear.weight.data.sub_(0.01 * linear.weight.grad.data)
# linear.bias.data.sub_(0.01 * linear.bias.grad.data)

pred = linear(x)
loss = criterion(pred, y)
print('loss after 1 step optimization:', loss.item())
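
# A minimal multi-step training loop assembled from the same pieces (a sketch,
# not part of the original example; the step count is arbitrary).
for step in range(100):
    optimizer.zero_grad()          # clear the gradients accumulated by the previous step
    pred = linear(x)               # forward pass
    loss = criterion(pred, y)      # compute the loss
    loss.backward()                # backward pass
    optimizer.step()               # update the parameters
print('loss after 100 steps:', loss.item())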


# ================================================================== #
#                     3. Loading data from numpy                     #
# ================================================================== #
# Create a numpy array.
x = np.array([[1, 2], [3, 4]])

# Convert the numpy array to a torch tensor.
y = torch.from_numpy(x)

# Convert the torch tensor to a numpy array.
z = y.numpy()
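
# Note (beyond the original example): torch.from_numpy() shares memory with the
# source array, and Tensor.numpy() shares memory with the tensor, so an in-place
# change to x is visible in both y and z.
x[0, 0] = 100
print(y[0, 0])   # tensor(100)
print(z[0, 0])   # 100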


# ================================================================== #
#                        4. Input pipeline                           #
# ================================================================== #

# Download and construct CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR10(root='../../data/', train=True, transform=transforms.ToTensor(),
                                             download=True)

# Fetch one data pair (read data from disk).
image, label = train_dataset[0]
print(image.size())
print(label)

# Data loader (this provides queues and threads in a very simple way).
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=1, shuffle=False, sampler=None, num_workers=0, drop_last=False)
# When shuffle=True, the data is reshuffled at the start of every epoch.

# Actual usage of the data loader is as below.
for images, labels in train_loader:
    # Training code should be written here.
    pass
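
# To inspect a single batch without writing a loop (a sketch), the loader can be
# wrapped in a plain Python iterator.
data_iter = iter(train_loader)
images, labels = next(data_iter)
print(images.size())   # torch.Size([1, 3, 32, 32]) with batch_size=1
print(labels.size())   # torch.Size([1])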


# ================================================================== #
#                5. Input pipeline for custom dataset                #
# ================================================================== #
# You should build your custom dataset as below.
class CustomDataset(torch.utils.data.Dataset):  # Subclass the abstract Dataset class.
    def __init__(self):
        # TODO
        # 1. Initialize file paths or a list of file names.
        pass

    def __getitem__(self, index):
        # TODO
        # 1. Read one sample from file (e.g. using numpy.fromfile, PIL.Image.open).
        # 2. Preprocess the data (e.g. with torchvision.transforms).
        # 3. Return a data pair (e.g. image and label).
        pass

    def __len__(self):
        # You should change 0 to the total size of your dataset.
        return 0

# You can then use the prebuilt data loader.
custom_dataset = CustomDataset()
train_loader = torch.utils.data.DataLoader(dataset=custom_dataset, batch_size=64, shuffle=False, sampler=None, num_workers=0, drop_last=False)
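# A concrete sketch of the same pattern (the file layout here is hypothetical,
# purely for illustration): 'labels.csv' is assumed to hold "filename,label"
# per line, with the image files stored under `root`.
import os
from PIL import Image

class ImageCsvDataset(torch.utils.data.Dataset):
    def __init__(self, root, csv_file, transform=None):
        self.root = root
        self.transform = transform
        with open(csv_file) as f:
            # each non-empty line: "filename,label"
            self.samples = [line.strip().split(',') for line in f if line.strip()]

    def __getitem__(self, index):
        filename, label = self.samples[index]
        image = Image.open(os.path.join(self.root, filename)).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        return image, int(label)

    def __len__(self):
        return len(self.samples)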

# ================================================================== #
#                        6. Pretrained model                         #
# ================================================================== #

# Download and load the pretrained ResNet-18.
resnet = torchvision.models.resnet18(pretrained=True)

# If you want to finetune only the top layer of the model, set as below.
for param in resnet.parameters():
    param.requires_grad = False

# Replace the top layer for finetuning.
resnet.fc = nn.Linear(resnet.fc.in_features, 100)  # Replace the final layer with a linear classifier that outputs 100 classes.

# Forward pass.
images = torch.randn(64, 3, 224, 224)
outputs = resnet(images)
print(outputs.size())
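
# With everything except the new head frozen, it is enough to hand only the
# head's parameters to the optimizer (a sketch; the learning rate and the fake
# labels below are arbitrary, for illustration only).
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(resnet.fc.parameters(), lr=0.001)
labels = torch.randint(0, 100, (64,))   # fake labels matching the 100-class head
loss = criterion(outputs, labels)
loss.backward()                         # gradients flow only into resnet.fc
optimizer.step()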

# ================================================================== #
#                      7. Save and load the model                    #
# ================================================================== #

# Save and load the entire model.
torch.save(resnet, 'model.ckpt')
model = torch.load('model.ckpt')

# Save and load only the model parameters (recommended).
torch.save(resnet.state_dict(), 'params.ckpt')
resnet.load_state_dict(torch.load('params.ckpt'))
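
# To restore the parameters into a fresh model (a sketch): the new instance must
# have the same architecture as the saved one, including the 100-class head.
model = torchvision.models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 100)
model.load_state_dict(torch.load('params.ckpt'))
model.eval()   # switch to evaluation mode before running inference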
