Image Recognition Technology and Applications

V. The PyTorch Neural Network Toolbox

1. Core Components

Layer: the basic building block of a neural network; it transforms an input tensor into an output tensor.

Model: a network assembled from layers.

Loss function: the objective function for parameter learning; the parameters are learned by minimizing the loss.

Optimizer: making the loss as small as possible is the optimizer's job; it updates the model's parameters using the gradients of the loss (see the sketch after this list).
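To make these four components concrete, here is a minimal sketch of one training step (my own illustration; the model, data, and hyperparameters are placeholders, not from the source):

import torch
from torch import nn

model = nn.Linear(10, 2)                                   # model: here a single linear layer
loss_fn = nn.CrossEntropyLoss()                            # loss function: the learning objective
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # optimizer: updates the parameters

x = torch.randn(4, 10)                # a batch of 4 input tensors
target = torch.tensor([0, 1, 0, 1])   # ground-truth class labels

output = model(x)                     # the layer maps the input tensor to an output tensor
loss = loss_fn(output, target)        # measure how far off the predictions are
optimizer.zero_grad()                 # clear gradients from the previous step
loss.backward()                       # backpropagate to compute new gradients
optimizer.step()                      # adjust parameters to reduce the loss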

2. Main Tools

➢ nn.Module

① Layers written as classes inherit from the Module class, so their learnable parameters are registered and extracted automatically.

② Suitable for stateful layers such as convolutional layers, fully connected layers, and dropout layers.

➢ nn.functional

① These behave more like pure functions, with no internal state.

② Suitable for stateless operations such as activation functions and pooling layers.

➢ Classes in nn.Module are typically written as nn.Xxx, e.g., nn.Linear, nn.Conv2d, nn.CrossEntropyLoss.

➢ Functions in nn.functional are typically written as nn.functional.xxx, e.g., nn.functional.linear, nn.functional.conv2d, nn.functional.cross_entropy.

The main differences between the two are as follows.

➢ nn.Xxx inherits from nn.Module. An nn.Xxx layer must first be instantiated with its configuration arguments; the resulting object is then called like a function on the input data. nn.Xxx layers combine cleanly with nn.Sequential, whereas nn.functional.xxx functions cannot be placed inside nn.Sequential.

➢ With nn.Xxx you do not define or manage the weight and bias parameters yourself; with nn.functional.xxx you must define weight and bias yourself and pass them in explicitly on every call, which hurts code reuse.

➢ Dropout behaves differently in the training and testing phases. With the nn.Xxx form (nn.Dropout), switching between model.train() and model.eval() handles this automatically; with nn.functional.dropout you must pass the training flag yourself. A contrast of the two styles is sketched below.
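The following small example (mine, not from the source) contrasts the two styles on the same linear operation and on dropout:

import torch
from torch import nn
import torch.nn.functional as F

x = torch.randn(4, 784)

# nn.Xxx style: instantiate once; weight and bias are managed for you
fc = nn.Linear(784, 300)
y1 = fc(x)

# nn.functional style: you create weight and bias yourself and pass them on every call
weight = torch.randn(300, 784, requires_grad=True)
bias = torch.zeros(300, requires_grad=True)
y2 = F.linear(x, weight, bias)

# dropout: nn.Dropout obeys model.train()/model.eval() automatically,
# while F.dropout needs the training flag passed explicitly
drop = nn.Dropout(p=0.5)
y3 = drop(x)
y4 = F.dropout(x, p=0.5, training=True)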

3. Building Models

➢ Build the model by subclassing the nn.Module base class.

➢ Build the model layer by layer with nn.Sequential.

➢ Subclass the nn.Module base class and additionally use the model containers (nn.Sequential, nn.ModuleList, nn.ModuleDict).

import torch
from torch import nn
import torch.nn.functional as F

# Build the model by subclassing the nn.Module base class
class Model_Seq(nn.Module):
    def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
        super(Model_Seq, self).__init__()
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(in_dim, n_hidden_1)
        self.bn1 = nn.BatchNorm1d(n_hidden_1)
        self.linear2 = nn.Linear(n_hidden_1, n_hidden_2)
        self.bn2 = nn.BatchNorm1d(n_hidden_2)
        self.out = nn.Linear(n_hidden_2, out_dim)

    def forward(self, x):
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.bn1(x)
        x = F.relu(x)
        x = self.linear2(x)
        x = self.bn2(x)
        x = F.relu(x)
        x = self.out(x)
        x = F.softmax(x, dim=1)
        return x

in_dim, n_hidden_1, n_hidden_2, out_dim = 28 * 28, 300, 100, 10
model_seq = Model_Seq(in_dim, n_hidden_1, n_hidden_2, out_dim)
print(model_seq)

Output:
Model_Seq(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=784, out_features=300, bias=True)
  (bn1): BatchNorm1d(300, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (linear2): Linear(in_features=300, out_features=100, bias=True)
  (bn2): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (out): Linear(in_features=100, out_features=10, bias=True)
)
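As a quick sanity check (an assumed test, not in the source), a random batch shaped like flattened 28×28 images can be pushed through the model:

x = torch.randn(4, 28 * 28)   # hypothetical batch of 4 flattened images
y = model_seq(x)
print(y.shape)                # torch.Size([4, 10])
print(y.sum(dim=1))           # each row sums to ~1 because of the softmax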

4. Building a Model Layer by Layer with nn.Sequential

in_dim, n_hidden_1, n_hidden_2, out_dim = 28 * 28, 300, 100, 10
Seq_arg = nn.Sequential(
    nn.Flatten(),
    nn.Linear(in_dim, n_hidden_1),
    nn.BatchNorm1d(n_hidden_1),
    nn.ReLU(),
    nn.Linear(n_hidden_1, n_hidden_2),
    nn.BatchNorm1d(n_hidden_2),
    nn.ReLU(),
    nn.Linear(n_hidden_2, out_dim),
    nn.Softmax(dim=1)
)
print(Seq_arg)

Output:
Sequential(
  (0): Flatten(start_dim=1, end_dim=-1)
  (1): Linear(in_features=784, out_features=300, bias=True)
  (2): BatchNorm1d(300, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (3): ReLU()
  (4): Linear(in_features=300, out_features=100, bias=True)
  (5): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (6): ReLU()
  (7): Linear(in_features=100, out_features=10, bias=True)
  (8): Softmax(dim=1)
)

This approach does not allow you to assign a name to each layer. If named layers are needed, use the add_module method or an OrderedDict, as the next two examples show.

in_dim, n_hidden_1, n_hidden_2, out_dim = 28 * 28, 300, 100, 10
Seq_module = nn.Sequential()
Seq_module.add_module("flatten", nn.Flatten())
Seq_module.add_module("linear1", nn.Linear(in_dim, n_hidden_1))
Seq_module.add_module("bn1", nn.BatchNorm1d(n_hidden_1))
Seq_module.add_module("relu1", nn.ReLU())
Seq_module.add_module("linear2", nn.Linear(n_hidden_1, n_hidden_2))
Seq_module.add_module("bn2", nn.BatchNorm1d(n_hidden_2))
Seq_module.add_module("relu2", nn.ReLU())
Seq_module.add_module("out", nn.Linear(n_hidden_2, out_dim))
Seq_module.add_module("softmax", nn.Softmax(dim=1))
print(Seq_module)

Output:
Sequential(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=784, out_features=300, bias=True)
  (bn1): BatchNorm1d(300, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu1): ReLU()
  (linear2): Linear(in_features=300, out_features=100, bias=True)
  (bn2): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu2): ReLU()
  (out): Linear(in_features=100, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)

import torch
from torch import nn
from collections import OrderedDict

in_dim, n_hidden_1, n_hidden_2, out_dim = 28 * 28, 300, 100, 10
Seq_dict = nn.Sequential(OrderedDict([
    ("flatten", nn.Flatten()),
    ("linear1", nn.Linear(in_dim, n_hidden_1)),
    ("bn1", nn.BatchNorm1d(n_hidden_1)),
    ("relu1", nn.ReLU()),
    ("linear2", nn.Linear(n_hidden_1, n_hidden_2)),
    ("bn2", nn.BatchNorm1d(n_hidden_2)),
    ("relu2", nn.ReLU()),
    ("out", nn.Linear(n_hidden_2, out_dim)),
    ("softmax", nn.Softmax(dim=1))
]))
print(Seq_dict)

Output:
Sequential(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=784, out_features=300, bias=True)
  (bn1): BatchNorm1d(300, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu1): ReLU()
  (linear2): Linear(in_features=300, out_features=100, bias=True)
  (bn2): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu2): ReLU()
  (out): Linear(in_features=100, out_features=10, bias=True)
  (softmax): Softmax(dim=1)
)
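One payoff of named layers (an incidental note, not from the source): a module added under a name can be fetched back by attribute, and integer indexing on nn.Sequential still works:

print(Seq_dict.linear1)   # Linear(in_features=784, out_features=300, bias=True)
print(Seq_dict[1])        # same layer, accessed by position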

5. Building a Model by Subclassing nn.Module with Model Containers
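The source shows only the printed result below. Here is a minimal sketch that would produce it, assuming a Model_dict class that stores its layers in an nn.ModuleDict (the class and attribute names follow the printed output; the forward logic is my reconstruction):

import torch
from torch import nn

class Model_dict(nn.Module):
    def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
        super(Model_dict, self).__init__()
        self.layers_dict = nn.ModuleDict({
            "flatten": nn.Flatten(),
            "linear1": nn.Linear(in_dim, n_hidden_1),
            "bn1": nn.BatchNorm1d(n_hidden_1),
            "relu": nn.ReLU(),
            "linear2": nn.Linear(n_hidden_1, n_hidden_2),
            "bn2": nn.BatchNorm1d(n_hidden_2),
            "out": nn.Linear(n_hidden_2, out_dim),
            "softmax": nn.Softmax(dim=1),
        })

    def forward(self, x):
        # nn.ModuleDict registers the modules but does not fix the order
        # in which they are applied, so the forward pass lists it explicitly
        # (the single "relu" entry is reused after both hidden layers)
        for name in ["flatten", "linear1", "bn1", "relu",
                     "linear2", "bn2", "relu", "out", "softmax"]:
            x = self.layers_dict[name](x)
        return x

in_dim, n_hidden_1, n_hidden_2, out_dim = 28 * 28, 300, 100, 10
model_dict = Model_dict(in_dim, n_hidden_1, n_hidden_2, out_dim)
print(model_dict)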

Output:
Model_dict(
  (layers_dict): ModuleDict(
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (linear1): Linear(in_features=784, out_features=300, bias=True)
    (bn1): BatchNorm1d(300, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
    (linear2): Linear(in_features=300, out_features=100, bias=True)
    (bn2): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (out): Linear(in_features=100, out_features=10, bias=True)
    (softmax): Softmax(dim=1)
  )
)

7. Custom Network Modules

· There are two kinds of residual blocks. One is the plain module form: the input is added to the module's output, and the ReLU activation is then applied. The other (completing the sentence from standard practice) reshapes the shortcut path, typically with a 1×1 convolution, so the input can still be added to the output when the dimensions change. Both are sketched below.
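A hedged sketch of both kinds (class names, channel sizes, and layer choices are mine; the source gives only the description above):

import torch
from torch import nn
import torch.nn.functional as F

class PlainResBlock(nn.Module):
    # Plain form: output = ReLU(F(x) + x); input and output shapes must match
    def __init__(self, channels):
        super(PlainResBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)   # add input to output, then apply ReLU

class ShortcutResBlock(nn.Module):
    # Second form: a 1x1 convolution on the shortcut path reshapes the
    # input so it can be added to the output when dimensions change
    def __init__(self, in_channels, out_channels, stride=2):
        super(ShortcutResBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride),
            nn.BatchNorm2d(out_channels),
        )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))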
