chunk_size and num_workers in PyTorch training

What chunk_size and num_workers do during PyTorch training.

chunk_size: relates to tensor slicing and splitting in torch (e.g., torch.chunk). When a batch is split into chunks for data-parallel training, each chunk is placed on its own card, so the number of chunks determines how many GPUs are occupied.
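A minimal sketch of how torch.chunk splits a batch along a dimension; the scatter step in the comment is an illustrative assumption mirroring what nn.DataParallel does internally, not code from this post:

```python
import torch

x = torch.randn(8, 3, 224, 224)   # a batch of 8 images

# Split the batch into 4 chunks along the batch dimension;
# each chunk here has shape (2, 3, 224, 224).
chunks = x.chunk(4, dim=0)
print(len(chunks), chunks[0].shape)

# Illustrative only: in data-parallel training each chunk would be
# moved to its own card, so 4 chunks -> 4 GPUs occupied.
# for i, c in enumerate(chunks):
#     c = c.to(f'cuda:{i}')
```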

num_workers: the number of subprocesses the DataLoader uses to fetch data in parallel.

DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
           num_workers=0, collate_fn=default_collate, pin_memory=False,
           drop_last=False)

 

1.  dataset: the dataset to load from (a Dataset object)

2.  batch_size: how many samples per batch

3.  shuffle: whether to shuffle the data

4.  sampler: the sampling strategy for drawing examples from the dataset

5.  num_workers: the number of subprocesses used for loading; 0 means the data is loaded in the main process, with no multiprocessing (see the sketch after this list)

6.  collate_fn: how multiple samples are assembled into one batch; the default collation is usually sufficient

7.  pin_memory: whether to keep the data in pinned (page-locked) memory; transfers from pinned memory to the GPU are faster

8.  drop_last: the dataset size may not be an integer multiple of batch_size; when True, the final incomplete batch is dropped
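
A minimal sketch of these parameters in use; MyDataset is a hypothetical Dataset subclass standing in for any real dataset:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    """Hypothetical dataset: 100 random feature/label pairs."""
    def __init__(self):
        self.x = torch.randn(100, 10)
        self.y = torch.randint(0, 2, (100,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

if __name__ == '__main__':  # guard needed when workers run as spawned subprocesses
    loader = DataLoader(
        MyDataset(),
        batch_size=16,
        shuffle=True,      # reshuffle the indices every epoch
        num_workers=2,     # two subprocesses fetch batches in the background
        pin_memory=True,   # page-locked memory speeds up host-to-GPU copies
        drop_last=True,    # 100 % 16 != 0, so the last short batch is dropped
    )
    for xb, yb in loader:
        print(xb.shape, yb.shape)  # torch.Size([16, 10]) torch.Size([16])
        break
```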

 

Adding more loader workers does not make the computation itself faster; it is for running several tasks (fetching and preprocessing batches) concurrently with training. If the program instead spends its time stuck spinning in this loading loop waiting for data, a great deal of time is wasted.
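A rough way to see this effect is to time one pass over the loader at different num_workers settings; SlowDataset below is a hypothetical dataset whose __getitem__ simulates expensive preprocessing with a sleep:

```python
import time
import torch
from torch.utils.data import Dataset, DataLoader

class SlowDataset(Dataset):
    """Hypothetical dataset where each fetch simulates slow preprocessing."""
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        time.sleep(0.01)           # stand-in for decoding/augmentation cost
        return torch.randn(10), 0

if __name__ == '__main__':         # guard required for spawned worker subprocesses
    for workers in (0, 4):
        loader = DataLoader(SlowDataset(), batch_size=8, num_workers=workers)
        start = time.time()
        for batch in loader:       # iterate once; workers prefetch in background
            pass
        print(f'num_workers={workers}: {time.time() - start:.2f}s')
```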

https://www.jianshu.com/p/98d3a23a2d6

Below is PyTorch code implementing a ShuffleNet (V2-style) classifier:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def channel_shuffle(x, groups):
    """Interleave channels across groups so information flows between branches."""
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)


class ShuffleNetBlock(nn.Module):
    def __init__(self, inp, oup, mid_channels, ksize, stride):
        super(ShuffleNetBlock, self).__init__()
        assert stride in [1, 2]
        self.stride = stride
        if stride == 2:
            # Downsampling block: both branches see the full input.
            self.branch1 = nn.Sequential(
                nn.Conv2d(inp, inp, 3, 2, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.Conv2d(inp, mid_channels, 1, 1, 0, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
            )
            self.branch2 = nn.Sequential(
                nn.Conv2d(inp, mid_channels, 1, 1, 0, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid_channels, mid_channels, ksize, stride, ksize // 2,
                          groups=mid_channels, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.Conv2d(mid_channels, mid_channels, 1, 1, 0, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
            )
        else:
            # Stride-1 block: half the channels pass through untouched.
            assert inp == oup and mid_channels == inp // 2
            self.branch1 = nn.Sequential()
            self.branch2 = nn.Sequential(
                nn.Conv2d(mid_channels, mid_channels, ksize, stride, ksize // 2,
                          groups=mid_channels, bias=False),
                nn.BatchNorm2d(mid_channels),
                # 1x1 conv keeps mid_channels so the concat below restores oup
                nn.Conv2d(mid_channels, mid_channels, 1, 1, 0, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
            )

    def forward(self, x):
        if self.stride == 1:
            x1, x2 = x.chunk(2, dim=1)   # channel split into two halves
            out = torch.cat((x1, self.branch2(x2)), dim=1)
        else:
            out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)
        return channel_shuffle(out, 2)   # mix channels between the two branches


class ShuffleNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(ShuffleNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 24, 3, 2, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(24)
        self.maxpool = nn.MaxPool2d(3, 2, 1)
        # 3/7/3 stride-1 repeats plus one downsampling block each -> 4/8/4 blocks.
        self.stage2 = self._make_stage(24, 144, 3, 2)
        self.stage3 = self._make_stage(144, 288, 7, 2)
        self.stage4 = self._make_stage(288, 576, 3, 2)
        self.conv5 = nn.Conv2d(576, 1024, 1, 1, 0, bias=False)
        self.bn5 = nn.BatchNorm2d(1024)
        self.fc = nn.Linear(1024, num_classes)

    def _make_stage(self, inp, oup, repeats, stride):
        layers = [ShuffleNetBlock(inp, oup, oup // 2, 3, stride)]
        for _ in range(repeats):
            layers.append(ShuffleNetBlock(oup, oup, oup // 2, 3, 1))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = self.maxpool(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = F.relu(self.bn5(self.conv5(x)))
        x = x.mean([2, 3])               # global average pooling
        return self.fc(x)
```

Here ShuffleNetBlock is ShuffleNet's basic module, _make_stage builds each stage of the network, and ShuffleNet defines the full model. The following code instantiates and trains it:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Instantiate the model
model = ShuffleNet(num_classes=1000).to(device)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0001)

# Load the dataset; Resize keeps every image the same size so batching works
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_dataset = datasets.ImageFolder(root='./train', transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32,
                                           shuffle=True, num_workers=4)

# Train the model
model.train()
for epoch in range(100):
    for i, (inputs, targets) in enumerate(train_loader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        if i % 10 == 0:
            print('Epoch: %d, Batch: %d, Loss: %.3f' % (epoch + 1, i, loss.item()))
```

Here the train folder holds the training set, with each subfolder representing one class. torchvision.transforms can be used to preprocess the data, for example converting images to tensors, random cropping, and random flipping. During training, the optimizer updates the model's parameters while the loss is computed to monitor how well the model is learning.
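
A quick shape check before launching real training can catch wiring mistakes in the blocks; the 224x224 input size is an assumption matching the Resize transform above:

```python
import torch

model = ShuffleNet(num_classes=10)
dummy = torch.randn(2, 3, 224, 224)   # batch of 2 fake RGB images
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)                   # expected: torch.Size([2, 10])
```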