While reading some code, I (jeiguopwei) found this part confusing, so I typed it out myself to see the difference. This is my own rough summary; I hope it helps, and please point out any mistakes~
embedding = nn.Embedding(2, 30)
I won't go into nn.Embedding itself here; the CSDN post "详细介绍pytorch中的nn.Embedding()" by 拒绝省略号 explains it very clearly.
While using this function I noticed that embedding.weight and embedding.weight.data hold exactly the same values; the only difference is the following:
import torch
from torch import nn

embedding = nn.Embedding(2, 5)
a = embedding.weight       # nn.Parameter
b = embedding.weight.data  # plain tensor
print(a)
print(b)
Output:
Parameter containing:
tensor([[-0.4117, 0.9681, 0.3480, -1.5160, -0.3936],
[ 0.4915, 1.8726, -0.0110, -1.5273, 0.0603]], requires_grad=True)
tensor([[-0.4117, 0.9681, 0.3480, -1.5160, -0.3936],
[ 0.4915, 1.8726, -0.0110, -1.5273, 0.0603]])
You can see that the first one automatically has requires_grad=True, while b (the .data tensor) does not.
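To make the relationship concrete, here is a small sketch of my own (not from the original post; your random values will differ from the output above). It shows that weight is an nn.Parameter while .data is a plain Tensor, and that the two share the same underlying storage:

```python
import torch
from torch import nn

embedding = nn.Embedding(2, 5)
a = embedding.weight       # nn.Parameter, tracked by autograd
b = embedding.weight.data  # plain Tensor over the same storage

print(type(a).__name__)    # Parameter
print(type(b).__name__)    # Tensor
print(a.requires_grad)     # True
print(b.requires_grad)     # False

# Same memory: writing through .data is visible through the Parameter.
b[0, 0] = 99.0
print(a[0, 0].item())      # 99.0
print(a.data_ptr() == b.data_ptr())  # True
```

So .data is not a copy; it is a gradient-free view onto the very same weights.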
So apart from the requires_grad flag there is essentially no difference between them. Also, once you wrap the code in with torch.no_grad():, you can call weight.uniform_() directly:
import torch
from torch import nn

embedding = nn.Embedding(2, 5)
# Note: uniform_() modifies the weight in place, but the `/ 128`
# afterwards creates a NEW tensor; the embedding's actual weight
# stays in [-0.5, 0.5].
c = embedding.weight.data.uniform_(-0.5, 0.5) / 128
with torch.no_grad():
    d = embedding.weight.uniform_(-0.5, 0.5) / 128
print(c)
print(d)
The result is:
tensor([[ 0.0012, -0.0034, -0.0036, 0.0001, 0.0013],
[-0.0015, 0.0033, -0.0002, 0.0023, 0.0003]])
tensor([[ 0.0021, -0.0035, -0.0031, 0.0004, 0.0009],
[-0.0006, -0.0005, 0.0015, -0.0024, -0.0028]])
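Why is the no_grad wrapper needed at all? Without it, an in-place operation on a Parameter that requires grad raises a RuntimeError. Here is a minimal sketch of my own illustrating that, plus div_() in case you actually want the weight itself scaled by 128 (a plain `/ 128` only returns a new tensor and leaves the weight unchanged):

```python
import torch
from torch import nn

embedding = nn.Embedding(2, 5)

# In-place op on a leaf tensor that requires grad -> RuntimeError
try:
    embedding.weight.uniform_(-0.5, 0.5)
except RuntimeError as e:
    print("RuntimeError:", e)

# Inside no_grad it works; use div_() to scale the weight in place
# (a plain `/ 128` would only produce a new, detached tensor).
with torch.no_grad():
    embedding.weight.uniform_(-0.5, 0.5).div_(128)

print(embedding.weight.data.abs().max() <= 0.5 / 128)  # tensor(True)
```

Going through .data (as in the snippet above) sidesteps the error too, but the no_grad form is the one the PyTorch docs recommend.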
Just my rough notes; if anything is wrong, please tell me. I'm still teaching myself, and honestly trying my best~