Basic Tensor Operations

Basic operations

Add/subtract/multiply/divide

>>> a=torch.rand(3,4)
>>> b=torch.rand(4)
>>> a+b
tensor([[1.5473, 0.8469, 1.3391, 0.4954],
        [1.8049, 1.5318, 1.4299, 0.6006],
        [1.1770, 1.3373, 0.7201, 1.2777]])
>>> torch.add(a,b)
tensor([[1.5473, 0.8469, 1.3391, 0.4954],
        [1.8049, 1.5318, 1.4299, 0.6006],
        [1.1770, 1.3373, 0.7201, 1.2777]])
>>> torch.all(torch.eq(a-b,torch.sub(a,b)))
tensor(True)
>>> torch.all(torch.eq(a*b,torch.mul(a,b)))
tensor(True)
>>> torch.all(torch.eq(a/b,torch.div(a,b)))
tensor(True)

Matmul

>>> a=torch.tensor([[3.,3.],[3.,3.]])
>>> a
tensor([[3., 3.],
        [3., 3.]])
>>> b=torch.ones(2,2)
>>> b
tensor([[1., 1.],
        [1., 1.]])
>>> torch.mm(a,b)
tensor([[6., 6.],
        [6., 6.]])
>>> torch.matmul(a,b)
tensor([[6., 6.],
        [6., 6.]])
>>> a@b
tensor([[6., 6.],
        [6., 6.]])

An example

>>> x=torch.rand(4,784)
>>> w=torch.rand(512,784)
>>> (x@w.t()).shape
torch.Size([4, 512])
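This example mirrors a linear layer: the weight is stored as `(out_features, in_features)`, so the forward pass is `x @ w.t()`. As a sanity check, `torch.nn.functional.linear` computes exactly this product (plus an optional bias):

```python
import torch
import torch.nn.functional as F

x = torch.rand(4, 784)     # batch of 4 flattened 28x28 images
w = torch.rand(512, 784)   # weight stored as (out_features, in_features)

# x @ w.t() is the bias-free linear-layer product that F.linear computes.
out = x @ w.t()
assert out.shape == (4, 512)
assert torch.allclose(out, F.linear(x, w))
```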

Matmul for >2d tensors

>>> a=torch.rand(4,3,28,64)
>>> b=torch.rand(4,3,64,32)
>>> torch.mm(a,b).shape
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: self must be a matrix
>>> torch.matmul(a,b).shape
torch.Size([4, 3, 28, 32])
>>> b=torch.rand(4,1,64,32)
>>> torch.matmul(a,b).shape
torch.Size([4, 3, 28, 32])
>>> b=torch.rand(4,64,32)
>>> torch.matmul(a,b).shape
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 1
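The transcript shows the rule: `torch.mm` only accepts matrices, while `torch.matmul` multiplies the last two dimensions and broadcasts the leading (batch) dimensions. The final error occurs because the batch shape `(4,)` is right-aligned against `(4, 3)`, pitting 4 against 3. A sketch of both cases:

```python
import torch

a = torch.rand(4, 3, 28, 64)

# matmul multiplies the last two dims; batch dims (4, 3) vs (4, 1)
# broadcast, with the 1 expanding to 3.
b = torch.rand(4, 1, 64, 32)
assert torch.matmul(a, b).shape == (4, 3, 28, 32)

# A (4, 64, 32) operand has batch shape (4,), which right-aligns
# against (4, 3): 4 vs 3 mismatch.  Inserting an explicit size-1
# batch dim makes the shapes broadcastable again:
b = torch.rand(4, 64, 32).unsqueeze(1)   # -> (4, 1, 64, 32)
assert torch.matmul(a, b).shape == (4, 3, 28, 32)
```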

Pow/Sqrt/rsqrt

>>> a=torch.full([2,2],3)
>>> a.pow(2)
tensor([[9, 9],
        [9, 9]])
>>> a**2
tensor([[9, 9],
        [9, 9]])
>>> aa=a**2
>>> aa.sqrt()
tensor([[3., 3.],
        [3., 3.]])
>>> aa.rsqrt()
tensor([[0.3333, 0.3333],
        [0.3333, 0.3333]])
>>> aa**(0.5)
tensor([[3., 3.],
        [3., 3.]])

Exp/log

>>> a=torch.exp(torch.ones(2,2))
>>> a
tensor([[2.7183, 2.7183],
        [2.7183, 2.7183]])
>>> torch.log(a)
tensor([[1., 1.],
        [1., 1.]])

Approximation

.floor()/ .ceil()/.trunc()/ .frac()

>>> a=torch.tensor(3.14)
>>> a.floor(),a.ceil(),a.trunc(),a.frac()
(tensor(3.), tensor(4.), tensor(3.), tensor(0.1400))
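The difference between `floor` and `trunc` only shows up for negative inputs: `floor` rounds toward negative infinity, while `trunc` drops the fractional part (rounds toward zero). A quick check:

```python
import torch

a = torch.tensor(-3.14)

# floor goes toward -inf, trunc toward zero: they differ for negatives.
assert a.floor() == torch.tensor(-4.)
assert a.ceil() == torch.tensor(-3.)
assert a.trunc() == torch.tensor(-3.)
assert torch.allclose(a.frac(), torch.tensor(-0.1400))   # a - a.trunc()
```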

.round()

>>> a=torch.tensor(3.499)
>>> a.round()
tensor(3.)
>>> a=torch.tensor(3.5)
>>> a.round()
tensor(4.)

clamp

>>> grad=torch.rand(2,3)*15
>>> grad.max()
tensor(12.9840)
>>> grad.median()
tensor(4.5466)
>>> grad.clamp(10)
tensor([[10.0000, 12.9840, 10.0000],
        [10.0000, 10.0000, 10.0000]])
>>> grad
tensor([[ 9.8407, 12.9840,  4.5466],
        [ 0.5669,  1.5547,  7.0860]])
>>> grad.clamp(0,10)
tensor([[ 9.8407, 10.0000,  4.5466],
        [ 0.5669,  1.5547,  7.0860]])
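As the transcript shows, `clamp(min)` only enforces a lower bound, while `clamp(min, max)` enforces both; a common use is limiting gradient magnitudes. A deterministic sketch:

```python
import torch

grad = torch.tensor([[9.8, 13.0, 4.5],
                     [0.6,  1.6, 7.1]])

# clamp(min) raises everything below 10 up to 10.
low = grad.clamp(10)
assert low.min() >= 10

# clamp(min, max) pins values into [0, 10].
both = grad.clamp(0, 10)
assert both.min() >= 0 and both.max() <= 10

# clamp returns a new tensor; the input is untouched
# (use clamp_ for the in-place version).
assert grad.max() == 13.0
```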
