Common PyTorch Operations

This article takes a close look at key PyTorch operations, including tensor index selection, masked selection, element-wise comparison, extracting the upper-triangular part of a matrix, reading a tensor's data pointer address, splitting tensors into chunks, batch matrix multiplication, and module parameter initialization. Detailed examples illustrate the use cases and parameter configuration of each function, offering guidance for practical applications.

1. torch.index_select(input, dim, index, out=None) → Tensor # index must be LongTensor
2. Tensor.index_select(dim, index, out=None) → Tensor # index must be LongTensor
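	Example (a minimal illustrative sketch with hand-picked values; x.index_select(1, idx) is equivalent to the function form):
	>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
	>>> idx = torch.tensor([0, 2])
	>>> torch.index_select(x, 1, idx)
	tensor([[1, 3],
	        [4, 6]])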
3. torch.masked_select(input, mask, out=None) → Tensor
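	Returns a new 1-D tensor with the elements of input at the positions where mask is true.
	Example (a minimal illustrative sketch with hand-picked values):
	>>> x = torch.tensor([[1, 2], [3, 4]])
	>>> mask = x.gt(2)
	>>> torch.masked_select(x, mask)
	tensor([3, 4])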
4. torch.gt(input, other, out=None) → Tensor
	Computes input > other element-wise.
	The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
	Parameters:	
	input (Tensor) – the tensor to compare
	other (Tensor or float) – the tensor or value to compare
	out (Tensor, optional) – the output tensor that must be a ByteTensor
	Returns:	
	A torch.ByteTensor containing a 1 at each location where comparison is true
	Return type:	
	Tensor
	Example:
	>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
	tensor([[ 0,  1],
	        [ 0,  0]], dtype=torch.uint8)
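	The second argument may also be a plain number, compared against every element. A minimal sketch (on recent PyTorch releases comparisons return a torch.bool tensor rather than a ByteTensor):
	>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), 2)
	tensor([[False, False],
	        [ True,  True]])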
5. torch.triu(input, diagonal=0, out=None) → Tensor
	Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0.
	The upper triangular part of the matrix is defined as the elements on and above the diagonal.
	The argument diagonal controls which diagonal to consider. If diagonal = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal.
	Parameters:	
	input (Tensor) – the input tensor
	diagonal (int, optional) – the diagonal to consider
	out (Tensor, optional) – the output tensor
	Example:
	>>> a = torch.randn(3, 3)
	>>> a
	tensor([[ 0.2309,  0.5207,  2.0049],
	        [ 0.2072, -1.0680,  0.6602],
	        [ 0.3480, -0.5211, -0.4573]])
	>>> torch.triu(a)
	tensor([[ 0.2309,  0.5207,  2.0049],
	        [ 0.0000, -1.0680,  0.6602],
	        [ 0.0000,  0.0000, -0.4573]])
	>>> torch.triu(a, diagonal=1)
	tensor([[ 0.0000,  0.5207,  2.0049],
	        [ 0.0000,  0.0000,  0.6602],
	        [ 0.0000,  0.0000,  0.0000]])
	>>> torch.triu(a, diagonal=-1)
	tensor([[ 0.2309,  0.5207,  2.0049],
	        [ 0.2072, -1.0680,  0.6602],
	        [ 0.0000, -0.5211, -0.4573]])
	>>> b = torch.randn(4, 6)
	>>> b
	tensor([[ 0.5876, -0.0794, -1.8373,  0.6654,  0.2604,  1.5235],
	        [-0.2447,  0.9556, -1.2919,  1.3378, -0.1768, -1.0857],
	        [ 0.4333,  0.3146,  0.6576, -1.0432,  0.9348, -0.4410],
	        [-0.9888,  1.0679, -1.3337, -1.6556,  0.4798,  0.2830]])
	>>> torch.triu(b, diagonal=1)
	tensor([[ 0.0000, -0.0794, -1.8373,  0.6654,  0.2604,  1.5235],
	        [ 0.0000,  0.0000, -1.2919,  1.3378, -0.1768, -1.0857],
	        [ 0.0000,  0.0000,  0.0000, -1.0432,  0.9348, -0.4410],
	        [ 0.0000,  0.0000,  0.0000,  0.0000,  0.4798,  0.2830]])
	>>> torch.triu(b, diagonal=-1)
	tensor([[ 0.5876, -0.0794, -1.8373,  0.6654,  0.2604,  1.5235],
	        [-0.2447,  0.9556, -1.2919,  1.3378, -0.1768, -1.0857],
	        [ 0.0000,  0.3146,  0.6576, -1.0432,  0.9348, -0.4410],
	        [ 0.0000,  0.0000, -1.3337, -1.6556,  0.4798,  0.2830]])
6. data_ptr() → int
	Returns the address of the first element of self tensor.
	Commonly used to check whether two tensors share the same underlying memory.
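	Example (a minimal sketch: views created with view() or slicing reuse the original storage, while clone() allocates new memory):
	>>> x = torch.zeros(4)
	>>> y = x.view(2, 2)
	>>> x.data_ptr() == y.data_ptr()
	True
	>>> x.clone().data_ptr() == x.data_ptr()
	False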
7. torch.chunk(tensor, chunks, dim=0) → List of Tensors
	Splits a tensor into a specific number of chunks.
	Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by chunks.
	Parameters:	
	tensor (Tensor) – the tensor to split
	chunks (int) – number of chunks to return
	dim (int) – dimension along which to split the tensor
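	Example (a minimal sketch showing the smaller last chunk when the size along dim is not divisible by chunks):
	>>> x = torch.arange(7)
	>>> torch.chunk(x, 3)
	(tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6]))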
8. torch.bmm(batch1, batch2, out=None) → Tensor
	Performs a batch matrix-matrix product of matrices stored in batch1 and batch2.
	batch1 and batch2 must be 3-D tensors each containing the same number of matrices.
	This function does not broadcast. For broadcasting matrix products, see torch.matmul().
	Parameters:
	batch1 (Tensor) – the first batch of matrices to be multiplied
	batch2 (Tensor) – the second batch of matrices to be multiplied
	out (Tensor, optional) – the output tensor
	Example:
	>>> batch1 = torch.randn(10, 3, 4)
	>>> batch2 = torch.randn(10, 4, 5)
	>>> res = torch.bmm(batch1, batch2)
	>>> res.size()
	torch.Size([10, 3, 5])
9. apply(fn)
	Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init).
	Parameters:		fn (Module -> None) – function to be applied to each submodule
	Returns:		self
	Return type:	Module
	Example:
	>>> def init_weights(m):
	        print(m)
	        if type(m) == nn.Linear:
	            m.weight.data.fill_(1.0)
	            print(m.weight)
	>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
	>>> net.apply(init_weights)
	Linear(in_features=2, out_features=2, bias=True)
	Parameter containing:
	tensor([[ 1.,  1.],
	        [ 1.,  1.]])
	Linear(in_features=2, out_features=2, bias=True)
	Parameter containing:
	tensor([[ 1.,  1.],
	        [ 1.,  1.]])
	Sequential(
	  (0): Linear(in_features=2, out_features=2, bias=True)
	  (1): Linear(in_features=2, out_features=2, bias=True)
	)
	Sequential(
	  (0): Linear(in_features=2, out_features=2, bias=True)
	  (1): Linear(in_features=2, out_features=2, bias=True)
	)