Deep Learning Notes
PyTorch on Linux appears to be several times faster than on Windows
I don't have a discrete GPU (broke), but I still want to train models as fast as possible and see how they converge. I had heard that PyTorch runs faster on Linux than on Windows, so I ran a simple benchmark: how long the simplest CNN takes to train on each system. Since I don't own a Linux machine (still broke) and my Windows 10 is the Home edition, I stood in a VirtualBox-based Docker instead, using the official PyTorch image (Ubuntu-based); the comparison side runs Windows 10 natively. The code is adapted from the convolutional neural network episode of Morvan Python's PyTorch tutorial and trains on the MNIST handwritten-digit dataset: two convolutional layers plus one output layer, with EPOCH = 2 and BATCH_SIZE = 50. The code is as follows:
"""
View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/
My Youtube Channel: https://www.youtube.com/user/MorvanZhou
Dependencies:
torch: 0.4
torchvision
matplotlib
"""
# library
# standard library
import os
import datetime
# third-party library
import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
# torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 2 # number of full passes over the training data
BATCH_SIZE = 50
LR = 0.001 # learning rate
DOWNLOAD_MNIST = False
# Mnist digits dataset
if not(os.path.exists('./mnist/')) or not os.listdir('./mnist/'):
    # no mnist dir, or mnist is an empty dir
    DOWNLOAD_MNIST = True
train_data = torchvision.datasets.MNIST(
    root='./mnist/',
    train=True, # this is training data
    transform=torchvision.transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to
    # torch.FloatTensor of shape (C x H x W) and normalizes into the range [0.0, 1.0]
    download=DOWNLOAD_MNIST,
)
print(train_data.train_data.size()) # (60000, 28, 28)
print(train_data.train_labels.size()) # (60000)
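The listing above is cut off before the model definition. As a sketch of the "two convolutional layers plus one output layer" architecture the post describes, the layer sizes below are the common tutorial values for MNIST; they are an assumption, not confirmed by the truncated source:

```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    # Two conv blocks followed by a single linear output layer,
    # matching the architecture described in the post.
    # NOTE: channel counts (16, 32) are assumed tutorial defaults.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),  # -> (16, 28, 28)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                           # -> (16, 14, 14)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2), # -> (32, 14, 14)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                           # -> (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # 10 digit classes

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 32*7*7)
        return self.out(x)
```

A forward pass on a batch of shape (N, 1, 28, 28) yields logits of shape (N, 10).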
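The comparison hinges on wall-clock training time, which is presumably why `datetime` is imported at the top. A minimal timing helper in that style (the `timed` wrapper is my own, hypothetical, not from the original script):

```python
import datetime

def timed(fn, *args, **kwargs):
    # Run a callable and return (result, elapsed_seconds),
    # measuring wall-clock time with datetime as the post's import suggests.
    start = datetime.datetime.now()
    result = fn(*args, **kwargs)
    elapsed = (datetime.datetime.now() - start).total_seconds()
    return result, elapsed

# usage: wrap the training loop, e.g. timed(train, cnn, train_loader),
# and compare the elapsed seconds printed on each platform.
```

Running the same wrapped call inside the Docker container and on Windows gives directly comparable numbers.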