1. The non-writeable NumPy array UserWarning
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at …\torch\csrc\utils\tensor_numpy.cpp:180.)
return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
This can be ignored; it does not affect normal use. According to gossip on CSDN, it will be fixed in the next version.
The 'ok' here means the last line has finished running.
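If you want to avoid the warning entirely, a minimal sketch (the array arr is a hypothetical example) is to copy the array before converting, so the tensor owns writeable memory:

import numpy as np
import torch

arr = np.arange(6, dtype=np.float32)  # hypothetical example array
arr.flags.writeable = False           # simulate the non-writeable case

t = torch.from_numpy(arr.copy()).view(2, 3)  # the copy is writeable: no warning
print(t)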

2. The somewhat counter-intuitive axis in torch

Conventionally, for a 2-D array, dimension 0 indexes rows and dimension 1 indexes columns.
But in a reduction, axis=0 means computing down each column (collapsing the rows), and axis=1 means computing across each row (collapsing the columns).
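A minimal sketch with torch.sum (torch accepts axis as an alias for dim):

import torch

x = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])
print(x.sum(axis=0))  # tensor([5., 7., 9.])  -- one value per column
print(x.sum(axis=1))  # tensor([ 6., 15.])    -- one value per row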
3. DataLoader()
https://www.cnblogs.com/ranjiewen/p/10128046.html
The input data pipeline
The order of operations for loading data into a model in PyTorch is:
① Create a Dataset object
② Create a DataLoader object
③ Loop over the DataLoader object, feeding img, label into the model for training
from torch.utils.data import DataLoader

dataset = MyDataset()            # MyDataset is a user-defined Dataset subclass
dataloader = DataLoader(dataset)
num_epochs = 100
for epoch in range(num_epochs):
    for img, label in dataloader:
        ...                      # forward/backward pass goes here
DataLoader's parameters (a usage sketch follows this list):
dataset (Dataset): the dataset to load from
batch_size (int, optional): how many samples per batch
shuffle (bool, optional): reshuffle the data at the start of every epoch
sampler (Sampler, optional): a custom strategy for drawing samples from the dataset; if this is specified, shuffle must be False
batch_sampler (Sampler, optional): like sampler, but returns the indices of a whole batch at a time; note that once this is specified, batch_size, shuffle, sampler, and drop_last can no longer be specified (they are mutually exclusive)
num_workers (int, optional): how many subprocesses handle data loading; 0 means all data is loaded in the main process (default: 0)
collate_fn (callable, optional): the function that merges a list of samples into a mini-batch
pin_memory (bool, optional): if True, the data loader copies tensors into CUDA pinned memory before returning them
drop_last (bool, optional): this concerns the last, incomplete batch; for example, if batch_size is 64 and an epoch has only 100 samples, then with True the last 36 samples are simply thrown away during training…
If False (the default), execution continues as normal; the final batch is just a bit smaller.
timeout (numeric, optional): if positive, how long to wait when collecting a batch from a worker; if nothing arrives within the timeout, that batch is given up. This numeric should always be >= 0. Default: 0
worker_init_fn (callable, optional): a per-worker initialization function. If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
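A minimal sketch combining several of these parameters (TensorDataset stands in here for a real dataset):

import torch
from torch.utils.data import DataLoader, TensorDataset

# 100 fake samples, so the drop_last example above is reproduced
data = torch.randn(100, 3)
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(data, labels)

loader = DataLoader(dataset, batch_size=64, shuffle=True,
                    num_workers=0, drop_last=True)
for x, y in loader:
    print(x.shape)  # torch.Size([64, 3]) printed once; the last 36 samples are dropped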
3.1 Understanding num_workers
https://blog.youkuaiyun.com/qq_24407657/article/details/103992170

3.2 The effect of different num_workers values
https://deeplizard.com/learn/video/kWVgvsejXsE

4. isinstance
The isinstance() function checks whether an object is of a known type.
e.g.:
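A minimal sketch (note that isinstance also accepts a tuple of types):

import torch

x = torch.randn(2, 2)
print(isinstance(x, torch.Tensor))    # True
print(isinstance(3, int))             # True
print(isinstance(3.0, (int, float)))  # True -- a tuple of candidate types works
print(isinstance(True, int))          # True: bool is a subclass of int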

5. model.eval()
https://blog.youkuaiyun.com/qq_46284579/article/details/120439049?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_title~default-0.no_search_link&spm=1001.2101.3001.4242
In short: it switches the model to evaluation mode (e.g. Dropout is disabled and BatchNorm uses its running statistics); it does not change the weights. Note that by itself it neither zeros nor disables gradients; for that, combine it with torch.no_grad().
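A minimal sketch showing the effect (the tiny net here is a hypothetical example):

import torch
from torch import nn

net = nn.Sequential(nn.Linear(4, 8), nn.Dropout(0.5), nn.Linear(8, 2))
x = torch.randn(3, 4)

net.eval()              # Dropout is disabled, so outputs become deterministic
with torch.no_grad():   # additionally stop building the autograd graph
    out1 = net(x)
    out2 = net(x)
print(torch.equal(out1, out2))  # True in eval mode; with dropout active it would usually differ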

6. Accumulator()
a_n = f(a_{n-1}, i_n), where f defaults to addition
An explicit type cast is needed to output the result

For example, taking f to be multiplication:
a_1 = 1
a_2 = a_1 * 2 = 1 * 2 = 2
a_3 = a_2 * 3 = 2 * 3 = 6
a_4 = a_3 * 4 = 6 * 4 = 24
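A minimal sketch of such a class, modeled on d2l's Accumulator (the implementation below is an assumption based on the description above, with f fixed to addition):

class Accumulator:
    """Accumulate sums over n variables (sketch in the style of d2l)."""
    def __init__(self, n):
        self.data = [0.0] * n

    def add(self, *args):
        # a_n = f(a_(n-1), i_n) with f = addition
        self.data = [a + float(b) for a, b in zip(self.data, args)]

    def reset(self):
        self.data = [0.0] * len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

metric = Accumulator(2)
metric.add(1, 10)
metric.add(2, 20)
print(int(metric[0]), int(metric[1]))  # 3 30 -- cast since sums are stored as float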
7. torch.numel()
The numel() function returns the number of elements in the tensor.
This is torch-specific (or rather, only tensors have it).
import torch

x = torch.randn(3, 3)
print(x)
print(x.numel())  # 9: the total number of elements
print('-' * 50)
y = [[ 0.2991,  0.2958,  1.0647],
     [ 1.4085, -0.7193, -0.5152],
     [-0.6111,  0.6379, -0.9669]]
print(y)
# print(y.numel())  # AttributeError: a plain Python list has no numel()
Plain Python lists don't have it, and neither do ndarrays (an ndarray uses .size instead).

8. The relationship and difference between PyTorch's optimizer.step(), loss.backward(), and scheduler.step()

While typing along with the code, a doubt suddenly arose: by my earlier understanding, l.backward() could already update the parameters, so why is updater.step() still needed?
First, the conclusion:
The optimizer is a computation method based on the backpropagated gradients, but it does not compute the gradients itself, so the gradients have to be provided to it. Besides, l.backward() only computes and accumulates the gradients (i.e. it fills each parameter's .grad);

it does not update the parameters themselves.
That is why updater.step() is still needed.
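A minimal sketch of the full division of labor (the model and data are hypothetical placeholders):

import torch
from torch import nn

net = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
updater = torch.optim.SGD(net.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(updater, step_size=10, gamma=0.5)

x, y = torch.randn(8, 4), torch.randn(8, 1)
for epoch in range(3):
    l = loss_fn(net(x), y)
    updater.zero_grad()   # clear old gradients (backward() accumulates)
    l.backward()          # compute gradients only: fills each parameter's .grad
    updater.step()        # use those gradients to actually update the weights
    scheduler.step()      # advance the learning-rate schedule once per epoch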
References
https://www.cnblogs.com/ranjiewen/p/10128046.html
https://blog.youkuaiyun.com/qq_24407657/article/details/103992170
https://deeplizard.com/learn/video/kWVgvsejXsE
https://www.runoob.com/python/python-func-isinstance.html
https://blog.youkuaiyun.com/qq_46284579/article/details/120439049?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_title~default-0.no_search_link&spm=1001.2101.3001.4242
https://cloud.tencent.com/developer/article/1653756