
Pytorch
JonyChan — a running summary of my technical studies
Writing and reading HDF files (.h5)
1. Put the data into a DataFrame with pd.DataFrame:

```python
df = pd.DataFrame([[1, 1.0, 'a']], columns=['x', 'y', 'z'])
```

2. Write the DataFrame to an HDF file with df.to_hdf():

```python
df.to_hdf('./store.h5', 'data')
```

3. Read the HDF file back with pd.read_hdf():

```python
read = pd.read_hdf('./store.h5')
data = read.values  # ndarray
```

Original post: 2021-07-13 14:16:04
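Putting the three steps together, a minimal round-trip sketch could look like the following (assumptions: pandas with the PyTables backend installed for HDF5 support; the key name `'data'` and the temp-file path are arbitrary choices for illustration):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame([[1, 1.0, 'a']], columns=['x', 'y', 'z'])

# write under an explicit key, then read it back from the same key
path = os.path.join(tempfile.mkdtemp(), 'store.h5')
df.to_hdf(path, key='data')
restored = pd.read_hdf(path, key='data')

arr = restored.values  # plain numpy ndarray (dtype=object here: mixed columns)
```

Passing `key=` explicitly avoids ambiguity when the file later holds more than one object; `pd.read_hdf(path)` without a key only works while the file contains a single one.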
Utility functions (util) used with graph neural networks
1. Normalizing a sparse matrix — symmetric normalization of the adjacency matrix:

```python
# [A * D^(-1/2)]^T * D^(-1/2) = D^(-1/2) * A * D^(-1/2)
def sym_adj(adj):
    """Symmetrically normalize adjacency matrix."""
    adj = sp.coo_matrix(adj)       # sparse (COO) adjacency matrix
    rowsum = np.array(adj.sum(1))  # turn the n x 1 matrix into a vector
    ...
```

Original post: 2021-07-13 14:15:12
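The excerpt above is truncated. A complete version of this normalization, written as a sketch against `scipy.sparse` following the D^(-1/2) A D^(-1/2) formula in the comment, could look like:

```python
import numpy as np
import scipy.sparse as sp

def sym_adj(adj):
    """Symmetrically normalize an adjacency matrix: D^(-1/2) * A * D^(-1/2)."""
    adj = sp.coo_matrix(adj)
    rowsum = np.array(adj.sum(1)).flatten()   # degree of each node
    d_inv_sqrt = np.power(rowsum, -0.5)       # 1 / sqrt(degree)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0    # isolated nodes have degree 0
    d_mat_inv_sqrt = sp.diags(d_inv_sqrt)
    # A * D^(-1/2), transposed, times D^(-1/2) == D^(-1/2) * A * D^(-1/2)
    return adj.dot(d_mat_inv_sqrt).transpose().dot(d_mat_inv_sqrt).tocoo()
```

For a fully connected 2-node graph with self-loops (all degrees 2), every entry of the result is 1/2, which is a quick sanity check on the scaling.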
Graph Wavenet: a demo for getting started with graph neural network training
1. Preparing the data: the DCRNN GitHub project ships the complete dataset; the data provided with Graph Wavenet is missing '/data/sensor_graph/adj_mx.pkl'.

Original post: 2021-07-13 14:08:25
Why a masked loss is needed
Sequence models often use padding (nn.functional.pad()), and what padding inserts is 0.0 — those padded positions should not be allowed to contribute to the loss. For example:

```python
def train(self, input, real_val):
    self.model.train()
    self.optimizer.zero_grad()
    input = nn.functional.pad(input, (1, 0, 0, 0))
    output = self.model(input)
    ...
```

Original post: 2021-07-13 13:51:57
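A masked loss simply zeroes out entries whose target equals the padding value before averaging. A minimal sketch (assumptions: 0.0 marks padded targets, and `masked_mae` is an illustrative name in the spirit of the masked metrics used by Graph WaveNet):

```python
import torch

def masked_mae(preds, labels, null_val=0.0):
    """Mean absolute error that ignores entries whose label equals null_val."""
    mask = (labels != null_val).float()
    mask = mask / mask.mean()             # rescale so the magnitude stays comparable
    loss = torch.abs(preds - labels) * mask
    return loss.mean()
```

With preds = [1, 2, 3] and labels = [0, 2, 4], the first position is masked out, leaving a loss of 0.5 instead of the unmasked MAE of 2/3.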
R-GCN
```python
class RelationalGraphConvolution(Module):
    """
    Relational Graph Convolution (RGC) Layer for Node Classification
    (as described in https://arxiv.org/abs/1703.06103)
    """
    def __init__(self, ...
```

Original post: 2021-05-16 20:25:00
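The class above is cut off. As a rough sketch of what such a layer computes — the basic R-GCN form from the paper, one weight matrix per relation plus a self-loop weight, without basis decomposition — something like this (the name `RGCLayer` and the dense-adjacency interface are assumptions for illustration):

```python
import torch
import torch.nn as nn

class RGCLayer(nn.Module):
    """Sketch of a relational graph convolution: per-relation weights,
    messages summed over relations, plus a self-loop transform."""
    def __init__(self, num_relations, in_features, out_features):
        super().__init__()
        self.rel_weight = nn.Parameter(torch.randn(num_relations, in_features, out_features) * 0.1)
        self.self_weight = nn.Parameter(torch.randn(in_features, out_features) * 0.1)

    def forward(self, x, adjs):
        # x: (n, in_features); adjs: (num_relations, n, n), row-normalized
        out = x @ self.self_weight
        for r in range(adjs.shape[0]):
            out = out + adjs[r] @ x @ self.rel_weight[r]
        return torch.relu(out)
```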
A "class not callable" problem with a Module written in Pytorch
Originally:

```python
import torch.nn.modules.module as Module

class neighbor_pooling(Module):
```

This fails because the import binds the *module* `torch.nn.modules.module` to the name `Module`, not the `Module` class inside it. It should be changed to:

```python
import torch.nn as nn

class neighbor_pooling(nn.Module):
```

Original post: 2021-05-27 15:41:29
Pytorch's unsqueeze, expand, and squeeze
The TensorFlow equivalents, in order:
- unsqueeze — tf.expand_dims
- expand — tf.broadcast_to
- squeeze — tf.squeeze

Original post: 2021-05-21 21:25:28
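A quick demonstration of the three PyTorch ops and the shapes they produce:

```python
import torch

x = torch.tensor([1, 2, 3])  # shape (3,)
u = x.unsqueeze(0)           # insert a size-1 dim: (1, 3)    ~ tf.expand_dims
e = u.expand(2, 3)           # broadcast view, no copy: (2, 3) ~ tf.broadcast_to
s = u.squeeze(0)             # drop the size-1 dim: (3,)       ~ tf.squeeze
```

Note that `expand` returns a view over the same storage, so it can broadcast a size-1 dimension without allocating new memory.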
Parameter initialization in Torch
1. Layers that need no initialization.
2. Layers that do: only the parameters you define yourself, such as weight and bias, need custom initialization. This is usually done by calling self.reset_parameters() inside __init__:

```python
def __init__(self, in_features, out_features, bias=True):
    self.weight = Parameter(torch.FloatTensor(in_features, out_features))
    if bias:
        ...
```

Original post: 2021-05-18 09:18:37
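A complete version of this pattern could look like the following sketch (the layer name `GraphLinear` is hypothetical; the uniform init scaled by 1/sqrt(fan) matches the common GCN reference-code convention):

```python
import math

import torch
import torch.nn as nn

class GraphLinear(nn.Module):
    """Illustrative layer with self-defined weight/bias and custom init."""
    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        self.weight = nn.Parameter(torch.FloatTensor(in_features, out_features))
        if bias:
            self.bias = nn.Parameter(torch.FloatTensor(out_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        # uniform in [-stdv, stdv], with stdv = 1 / sqrt(out_features)
        stdv = 1.0 / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)
```

Calling `reset_parameters()` from `__init__` matters because `torch.FloatTensor(...)` allocates uninitialized memory; without it the parameters start from garbage values.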
Torch: in WX+b, b's dimension should match the columns of WX

```python
self.B = Parameter(torch.FloatTensor(h_dim))  # (16,)

# (16, 8285) * (8285, 8285) -> (16, 8285) -> (8285, 16)
output = (output.transpose(1, 0) * temp_drop).transpose(1, 0)
output += self.B  # (8285, 16) + (16,)
```

Original post: 2021-05-17 21:25:35
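The `(8285, 16) + (16,)` addition works by broadcasting: a 1-D bias whose length matches the last (column) dimension is added to every row. A small-shape illustration of the same rule:

```python
import torch

WX = torch.zeros(5, 3)          # e.g. 5 nodes, hidden dim 3
b = torch.tensor([1., 2., 3.])  # one bias per column of WX: shape (3,)
out = WX + b                    # b broadcasts across the 5 rows -> (5, 3)
```

If `b` had length 5 (the row count) instead, the addition would raise a shape error — hence the note that b must match WX's column dimension.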
Pytorch DeepLearning-models
1. Perceptron

```python
import numpy as np
import matplotlib.pyplot as plt
import torch
import os

basedir = os.path.dirname(os.path.realpath(__file__))

# np.genfromtxt(): Load data from a text file, with missing values handled as specified.
data = np.genfromtxt(bas...
```

Original post: 2021-03-19 16:19:36
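The excerpt stops at data loading. As a reminder of what the model itself does, here is a minimal NumPy perceptron with the classic update rule (a sketch, not the post's actual implementation; assumptions: labels in {0, 1}, bias kept as a separate scalar):

```python
import numpy as np

class Perceptron:
    """Classic perceptron: on each mistake, nudge w and b toward the label."""
    def __init__(self, num_features):
        self.w = np.zeros(num_features)
        self.b = 0.0

    def predict(self, x):
        return int(x @ self.w + self.b > 0.0)

    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - self.predict(xi)  # -1, 0, or +1
                self.w += err * xi
                self.b += err

p = Perceptron(2)
p.train(np.array([[0., 0.], [3., 3.]]), np.array([0, 1]))
```

On linearly separable data like the two points above, the rule converges in a few epochs.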
Pyplot Example
Covers: x- and y-axis labels; limiting the x/y value ranges; drawing several curves in one figure; setting legend().

```python
plt.title("Validation Accuracy vs. Number of Training Epochs")
plt.xlabel("Training Epochs")
plt.ylabel("Validation Accuracy")
plt.plot(range(1, num_epochs+1), ohist, label="Pretrained")
plt.plot(rang...
```

Original post: 2021-03-13 11:35:44
Pytorch note
Shuffling an array — a simple approach: index an np.array with an np.array of indices to reorder it:

```python
idx = np.array([0, 2, 1])
x = np.array([[5], [6], [7]])
x = x[idx]
# [[5], [7], [6]]
```

Arrays can compute the mean and standard deviation directly (.mean, .std):

```python
X.mean()
X.std()
```

Using label= together with plt.legend():

```python
plt.scatter(X_train[y_train==0, 0], X_train[y...
```

Original post: 2021-03-17 21:08:48
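To turn the index trick above into an actual shuffle, generate the index array with a random permutation; fancy indexing then reorders the array without mutating the original:

```python
import numpy as np

rng = np.random.default_rng(0)    # seeded generator for reproducibility
x = np.arange(10)
idx = rng.permutation(len(x))     # a shuffled index array
shuffled = x[idx]                 # same elements, new order; x is untouched
```

Using one `idx` for several arrays (e.g. features and labels) keeps them aligned after the shuffle, which is the usual reason for this pattern over `rng.shuffle`.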
Pytorch tutorials
1. Bridge with NumPy
Tensors and numpy arrays can point to the same memory (Tensors on the CPU and NumPy arrays can share their underlying memory locations, and changing one will change the other).

1.1 Tensor to NumPy array

```python
t = torch.ones(5)
n = t.numpy()
t.add_(1)
print(f"t: {t}")
pr...
```

Original post: 2021-03-16 10:55:41
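The sharing works in the other direction too: `torch.from_numpy` wraps the array's buffer rather than copying it, so an in-place update on the NumPy side is visible through the tensor:

```python
import numpy as np
import torch

n = np.ones(3)
t = torch.from_numpy(n)  # the tensor shares memory with the array
np.add(n, 1, out=n)      # in-place update on the NumPy side
# t now sees the updated values, since both are views of one buffer
```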
Finetuning torchvision models
```python
from __future__ import print_function
from __future__ import division
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
...
```

Original post: 2021-03-13 14:14:32