tensor.flatten(2).transpose(1,2)


1. Description

For a 4-D tensor A = [batch_size, in_channels, height, width]:

  • A.flatten(2) --> flattens every dimension of A from index 2 onward (here height and width) into a single dimension, giving shape [batch_size, in_channels, height*width]
  • A.flatten(2).transpose(1, 2) then swaps dimension 1 and dimension 2, giving shape [batch_size, height*width, in_channels] (see the shape sketch below)
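
A minimal shape-only sketch of the same chain (variable names chosen here just for illustration); the full printout follows in section 2. The reshape + permute formulation at the end is an equivalent way to write the same operation.

import torch

bs, in_channel, height, width = 2, 3, 4, 4
x = torch.arange(bs * in_channel * height * width).reshape(bs, in_channel, height, width)

# flatten(2): merge every dim from index 2 onward -> (bs, in_channel, height*width)
# transpose(1, 2): swap dims 1 and 2              -> (bs, height*width, in_channel)
out = x.flatten(2).transpose(1, 2)
print(out.shape)  # torch.Size([2, 16, 3])

# An equivalent formulation via reshape + permute (same values, same shape)
alt = x.reshape(bs, in_channel, height * width).permute(0, 2, 1)
print(torch.equal(out, alt))  # True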

2. PyTorch code

import torch

torch.set_printoptions(precision=3, sci_mode=False)

if __name__ == "__main__":
    bs = 2
    in_channel = 3
    height = 4
    width = 4
    a_total = bs * height * width * in_channel
    # Build a 4-D tensor of shape (bs, in_channel, height, width) filled with 0..95
    a_matrix = torch.arange(a_total).reshape((bs, in_channel, height, width))
    print(f"a_matrix.shape=\n{a_matrix.shape}")
    print(f"a_matrix=\n{a_matrix}")
    # flatten(2): merge height and width into one dimension -> (bs, in_channel, height*width)
    b_matrix = a_matrix.flatten(2)
    print(f"b_matrix=\n{b_matrix}")
    # transpose(1, 2): swap the channel and spatial dimensions -> (bs, height*width, in_channel)
    c_matrix = b_matrix.transpose(1, 2)
    print(f"c_matrix=\n{c_matrix}")

  • result
a_matrix.shape=
torch.Size([2, 3, 4, 4])
a_matrix=
tensor([[[[ 0,  1,  2,  3],
          [ 4,  5,  6,  7],
          [ 8,  9, 10, 11],
          [12, 13, 14, 15]],

         [[16, 17, 18, 19],
          [20, 21, 22, 23],
          [24, 25, 26, 27],
          [28, 29, 30, 31]],

         [[32, 33, 34, 35],
          [36, 37, 38, 39],
          [40, 41, 42, 43],
          [44, 45, 46, 47]]],


        [[[48, 49, 50, 51],
          [52, 53, 54, 55],
          [56, 57, 58, 59],
          [60, 61, 62, 63]],

         [[64, 65, 66, 67],
          [68, 69, 70, 71],
          [72, 73, 74, 75],
          [76, 77, 78, 79]],

         [[80, 81, 82, 83],
          [84, 85, 86, 87],
          [88, 89, 90, 91],
          [92, 93, 94, 95]]]])
b_matrix=
tensor([[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15],
         [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31],
         [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]],

        [[48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63],
         [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
         [80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]]])
c_matrix=
tensor([[[ 0, 16, 32],
         [ 1, 17, 33],
         [ 2, 18, 34],
         [ 3, 19, 35],
         [ 4, 20, 36],
         [ 5, 21, 37],
         [ 6, 22, 38],
         [ 7, 23, 39],
         [ 8, 24, 40],
         [ 9, 25, 41],
         [10, 26, 42],
         [11, 27, 43],
         [12, 28, 44],
         [13, 29, 45],
         [14, 30, 46],
         [15, 31, 47]],

        [[48, 64, 80],
         [49, 65, 81],
         [50, 66, 82],
         [51, 67, 83],
         [52, 68, 84],
         [53, 69, 85],
         [54, 70, 86],
         [55, 71, 87],
         [56, 72, 88],
         [57, 73, 89],
         [58, 74, 90],
         [59, 75, 91],
         [60, 76, 92],
         [61, 77, 93],
         [62, 78, 94],
         [63, 79, 95]]])
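
One practical note: transpose returns a view with swapped strides rather than copying data, so c_matrix above is not contiguous in memory. A small sketch of what that implies (reusing the same 2x3x4x4 example; calling .contiguous() here is just one option, .reshape works as well):

import torch

x = torch.arange(2 * 3 * 4 * 4).reshape(2, 3, 4, 4)
c = x.flatten(2).transpose(1, 2)

print(c.is_contiguous())        # False: transpose only swaps strides
# c.view(-1) would raise a RuntimeError on this non-contiguous view;
# make a contiguous copy first, or use reshape, which handles it for you
flat = c.contiguous().view(-1)  # or simply c.reshape(-1)
print(flat.shape)               # torch.Size([96])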
