PyTorch Self-Study: CNN

This post shows how to build a convolutional neural network (CNN) in PyTorch using the Sequential container, including an example with OrderedDict. A model with two Conv2d layers and ReLU activations is created via nn.Sequential, and the MSELoss loss function is defined for training.

torch.nn.Parameter

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they are assigned as Module attributes, they are automatically added to the module's parameter list and will appear, for example, in the parameters() iterator. Assigning a plain Tensor does not have this effect. This is because one might want to cache some temporary state in the model, such as the last hidden state of an RNN. If there were no such class as Parameter, these temporaries would get registered too.

Parameters:
data (Tensor) - the parameter tensor.
requires_grad (bool, optional) - whether the parameter requires gradients. See Excluding subgraphs from backward for details. Default: True
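A minimal sketch of the registration behavior described above (the module and attribute names here are illustrative, not from the original text):

import torch
import torch.nn as nn

class RNNCellDemo(nn.Module):  # hypothetical module, for illustration only
    def __init__(self):
        super().__init__()
        # Assigned as a Parameter: automatically added to the module's parameter list
        self.weight = nn.Parameter(torch.randn(4, 4))
        # Assigned as a plain Tensor: cached temporary state, not registered as a parameter
        self.last_hidden = torch.zeros(4)

m = RNNCellDemo()
print([name for name, _ in m.named_parameters()])  # prints ['weight'] only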

## class torch.nn.Sequential(*args)
A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

To make it easier to understand, here is a small example:

Example of using Sequential:

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

Example of using Sequential with OrderedDict:

from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
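Either container applies its modules in the order they were given; with the OrderedDict version the layers can also be reached by name (for example model.conv1). A quick check on a dummy input (the 28x28 size is arbitrary, chosen only for illustration):

import torch

x = torch.randn(1, 1, 28, 28)   # one single-channel 28x28 image
out = model(x)                  # runs conv1 -> relu1 -> conv2 -> relu2 in order
print(out.shape)                # torch.Size([1, 64, 20, 20]) for a 28x28 input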
A complete example follows: the nn package is used to define a small fully connected model with nn.Sequential and train it with mean squared error.

import torch

N is batch size; D_in is input dimension; H is hidden dimension; D_out is output dimension.

N, D_in, H, D_out = 64, 1000, 100, 10

Create random Tensors to hold inputs and outputs:

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

Use the nn package to define our model as a sequence of layers. nn.Sequential is a Module which contains other Modules and applies them in sequence to produce its output. Each Linear Module computes its output from the input using a linear function, and holds internal Tensors for its weight and bias.

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
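As a quick check of the claim that each Linear module holds its own weight and bias, the registered parameters and their shapes can be listed (expected output shown as comments):

for name, p in model.named_parameters():
    print(name, tuple(p.shape))
# 0.weight (100, 1000)
# 0.bias   (100,)
# 2.weight (10, 100)
# 2.bias   (10,)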

The nn package also contains definitions of popular loss functions; in this case we will use Mean Squared Error (MSE) as our loss function.

loss_fn = torch.nn.MSELoss(reduction='sum')
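For reference, with reduction='sum' the loss is the summed squared error over all elements; a small sanity-check sketch (the tensors here are illustrative):

pred = torch.randn(4, 3)
target = torch.randn(4, 3)
assert torch.allclose(loss_fn(pred, target), ((pred - target) ** 2).sum())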

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the call operator so you can call them like functions. When doing so
    # you pass a Tensor of input data to the Module and it produces a Tensor of
    # output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute the gradient of the loss with respect to all the
    # learnable parameters of the model. Internally, the parameters of each Module
    # are stored in Tensors with requires_grad=True, so this call computes gradients
    # for all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients as we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
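The manual update above is plain stochastic gradient descent. The same loop can also be written with the torch.optim package; this is only a sketch of that alternative, not part of the original example:

import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()   # clear gradients accumulated from the previous step
    loss.backward()         # compute gradients of the loss w.r.t. model parameters
    optimizer.step()        # apply the SGD update to every registered parameter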