Statistical Properties of Tensors

This post takes a close look at tensor operations in PyTorch, covering the statistical functions norm, mean, sum, min, max, prod, argmin, argmax, topk, and kthvalue, as well as comparison and indexing operations. Worked examples show how to reduce along dimensions, compare tensors elementwise, and retrieve maxima and minima, making it a handy reference for understanding PyTorch tensor operations.


Statistical Properties

p-norm

>>> a=torch.full([8],1.)
>>> b=a.view(2,4)
>>> c=a.view(2,2,2)
>>> b
tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.]])
>>> c
tensor([[[1., 1.],
         [1., 1.]],

        [[1., 1.],
         [1., 1.]]])
>>> a.norm(1),b.norm(1),c.norm(1)
(tensor(8.), tensor(8.), tensor(8.))
>>> a.norm(2),b.norm(2),c.norm(2)
(tensor(2.8284), tensor(2.8284), tensor(2.8284))
>>> b.norm(1,dim=1)
tensor([4., 4.])
>>> b.norm(2,dim=1)
tensor([2., 2.])
>>> c.norm(1,dim=1)
tensor([[2., 2.],
        [2., 2.]])
>>> c.norm(2,dim=1)
tensor([[1.4142, 1.4142],
        [1.4142, 1.4142]])
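The results above follow directly from the definition of the p-norm: along the reduced dimension, `norm(p, dim)` computes `(sum of |x|**p) ** (1/p)`. A minimal sketch verifying this by hand (the tensor values mirror the all-ones example above):

```python
import torch

# Verify that norm(2, dim=1) matches the manual formula
# (sum of |x|**2 over dim 1) ** 0.5 on an all-ones tensor.
c = torch.full([2, 2, 2], 1.)

l2 = c.norm(2, dim=1)                        # shape [2, 2]
manual = c.abs().pow(2).sum(dim=1).pow(0.5)  # same reduction, written out

print(torch.allclose(l2, manual))  # True: the two computations agree
```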

mean, sum, min, max, prod

>>> a=torch.arange(8).view(2,4).float()
>>> a
tensor([[0., 1., 2., 3.],
        [4., 5., 6., 7.]])
>>> a.min(),a.max(),a.mean(),a.prod(),a.sum()  # prod() multiplies all elements together
(tensor(0.), tensor(7.), tensor(3.5000), tensor(0.), tensor(28.))
>>> a.argmax(),a.argmin()  # return indices; with no dim argument, the tensor is flattened first and the index refers to the flattened tensor
(tensor(7), tensor(0))
>>> a=torch.randn(4,10)
>>> a.argmax()
tensor(31)
>>> a.argmax(dim=1)
tensor([6, 3, 6, 1])
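Since a dim-less `argmax` indexes into the flattened tensor, recovering the original 2-D coordinates is a matter of integer division and remainder by the number of columns. A small sketch with made-up values:

```python
import torch

# For a tensor of shape [rows, cols], a flat index i maps back to
# the 2-D position (i // cols, i % cols).
a = torch.tensor([[0., 9., 2.],
                  [3., 4., 5.]])

flat = a.argmax()                         # index into the flattened tensor
row, col = divmod(flat.item(), a.size(1))

print(row, col)  # 0 1 -- the maximum 9. sits at a[0][1]
```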

dim, keepdim

>>> a=torch.randn(4,10)
>>> a
tensor([[ 0.6893, -0.0819,  0.7485,  1.3642,  0.0030,  0.4745,  1.5248,  0.0414,
         -1.4308, -1.7226],
        [ 0.2916, -0.0455, -1.3166,  2.0126, -0.3199,  0.2111, -0.2048, -0.4260,
          0.1673,  1.3173],
        [ 1.0404, -0.1565, -0.2779, -0.3392, -0.5311, -0.1544,  1.5886,  0.0950,
          0.9083, -0.1829],
        [ 0.1790,  2.3940, -0.7230,  0.4152, -1.3105, -0.2937, -0.6306,  0.1036,
          0.7949, -0.0158]])
>>> a.max(dim=1)
torch.return_types.max(
values=tensor([1.5248, 2.0126, 1.5886, 2.3940]),
indices=tensor([6, 3, 6, 1]))
>>> a.argmax(dim=1)
tensor([6, 3, 6, 1])
>>> a.max(dim=1,keepdim=True)
torch.return_types.max(
values=tensor([[1.5248],
        [2.0126],
        [1.5886],
        [2.3940]]),
indices=tensor([[6],
        [3],
        [6],
        [1]]))
>>> a.argmax(dim=1,keepdim=True)
tensor([[6],
        [3],
        [6],
        [1]])
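The practical reason for `keepdim=True` is broadcasting: the reduced dimension is kept with size 1, so the result lines up against the original tensor without a manual reshape. A minimal sketch with illustrative values:

```python
import torch

# With keepdim=True the per-row max has shape [2, 1], so it broadcasts
# cleanly when subtracted from the [2, 3] tensor (e.g. for a stable softmax).
a = torch.tensor([[1., 5., 3.],
                  [2., 0., 4.]])

row_max = a.max(dim=1, keepdim=True).values  # shape [2, 1]
shifted = a - row_max                        # broadcasts across each row

print(shifted.max(dim=1).values)  # tensor([0., 0.]): each row's max is now 0
```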

Top-k or k-th

>>> a=torch.randn(4,10)
>>> a
tensor([[ 0.6893, -0.0819,  0.7485,  1.3642,  0.0030,  0.4745,  1.5248,  0.0414,
         -1.4308, -1.7226],
        [ 0.2916, -0.0455, -1.3166,  2.0126, -0.3199,  0.2111, -0.2048, -0.4260,
          0.1673,  1.3173],
        [ 1.0404, -0.1565, -0.2779, -0.3392, -0.5311, -0.1544,  1.5886,  0.0950,
          0.9083, -0.1829],
        [ 0.1790,  2.3940, -0.7230,  0.4152, -1.3105, -0.2937, -0.6306,  0.1036,
          0.7949, -0.0158]])
>>> a.topk(3,dim=1,largest=False)
torch.return_types.topk(
values=tensor([[-1.7226, -1.4308, -0.0819],
        [-1.3166, -0.4260, -0.3199],
        [-0.5311, -0.3392, -0.2779],
        [-1.3105, -0.7230, -0.6306]]),
indices=tensor([[9, 8, 1],
        [2, 7, 4],
        [4, 3, 2],
        [4, 2, 6]]))
>>> a.kthvalue(8,dim=1)  # returns the 8th smallest along dim 1
torch.return_types.kthvalue(
values=tensor([0.7485, 0.2916, 0.9083, 0.4152]),
indices=tensor([2, 0, 8, 3]))
>>> a.kthvalue(3)
torch.return_types.kthvalue(
values=tensor([-0.0819, -0.3199, -0.2779, -0.6306]),
indices=tensor([1, 4, 2, 6]))
>>> a.kthvalue(3,dim=1)
torch.return_types.kthvalue(
values=tensor([-0.0819, -0.3199, -0.2779, -0.6306]),
indices=tensor([1, 4, 2, 6]))
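The two functions are closely related: `topk(k, largest=False)` returns the k smallest values in ascending order, so its last column should coincide with `kthvalue(k)`, the k-th smallest. A quick sketch checking that relationship (the seed is arbitrary):

```python
import torch

# kthvalue(3) gives the 3rd-smallest per row; topk(3, largest=False)
# gives the 3 smallest per row sorted ascending, so its last column
# is the same 3rd-smallest value.
torch.manual_seed(0)
a = torch.randn(4, 10)

kth = a.kthvalue(3, dim=1).values
small3 = a.topk(3, dim=1, largest=False).values[:, -1]

print(torch.equal(kth, small3))  # True
```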

compare

>>> a=torch.randn(4,10)
>>> a
tensor([[ 0.6893, -0.0819,  0.7485,  1.3642,  0.0030,  0.4745,  1.5248,  0.0414,
         -1.4308, -1.7226],
        [ 0.2916, -0.0455, -1.3166,  2.0126, -0.3199,  0.2111, -0.2048, -0.4260,
          0.1673,  1.3173],
        [ 1.0404, -0.1565, -0.2779, -0.3392, -0.5311, -0.1544,  1.5886,  0.0950,
          0.9083, -0.1829],
        [ 0.1790,  2.3940, -0.7230,  0.4152, -1.3105, -0.2937, -0.6306,  0.1036,
          0.7949, -0.0158]])
>>> a>0
tensor([[ True, False,  True,  True,  True,  True,  True,  True, False, False],
        [ True, False, False,  True, False,  True, False, False,  True,  True],
        [ True, False, False, False, False, False,  True,  True,  True, False],
        [ True,  True, False,  True, False, False, False,  True,  True, False]])
>>> torch.gt(a,0)
tensor([[ True, False,  True,  True,  True,  True,  True,  True, False, False],
        [ True, False, False,  True, False,  True, False, False,  True,  True],
        [ True, False, False, False, False, False,  True,  True,  True, False],
        [ True,  True, False,  True, False, False, False,  True,  True, False]])
>>> a!=0
tensor([[True, True, True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True, True, True]])
>>> a=torch.ones(2,3)
>>> b=torch.randn(2,3)
>>> torch.eq(a,b)
tensor([[False, False, False],
        [False, False, False]])
>>> torch.eq(a,a)
tensor([[True, True, True],
        [True, True, True]])
>>> torch.equal(a,a)
True
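A common follow-up to these elementwise comparisons is using the resulting boolean mask to select or count elements. Note the distinction shown above: `torch.eq` compares elementwise and returns a boolean tensor, while `torch.equal` returns a single Python bool only when shapes and all values match. A small sketch with made-up values:

```python
import torch

# Boolean masks from comparisons can index directly into the tensor.
a = torch.tensor([[-1., 2., 0.],
                  [ 3., -4., 5.]])

mask = a > 0                # elementwise comparison, dtype torch.bool
positives = a[mask]         # 1-D tensor of the entries where mask is True

print(positives)            # tensor([2., 3., 5.])
print(mask.sum().item())    # 3 positive entries
```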
