Printing a neural network's shapes and parameter sizes with torchsummary, plus AdaptiveAvgPool2d

Contents

1. A quick understanding of the ResNet structure

2. Global pooling as an aid to understanding

3. Summary


1. A quick understanding of the ResNet structure

  • Overall data flow
    The forward() method of the ResNet class defines how data flows through the network (see the sketch after this list):
    (1) the data first passes through the input stem (conv1, bn1, relu, maxpool);
    (2) it then goes through the intermediate convolutional stages (layer1, layer2, layer3, layer4; these layers correspond to the "stages" mentioned earlier);
    (3) finally it passes through average pooling and a fully connected layer (avgpool, fc) to produce the result.
    Concretely, resnet50 and the other networks in the ResNet family differ mainly in layer1~layer4; the remaining components are essentially the same.

  • The input stem in detail:
    Every ResNet begins with a large size=7x7, stride=2 convolution followed by a size=3x3, stride=2 max pooling. This step turns a 224x224 input image into a 56x56 feature map, greatly reducing the required storage.

  • The intermediate convolutional stages
    The intermediate stages extract information by stacking 3x3 convolutions inside repeated blocks. Configurations such as [2, 2, 2, 2] and [3, 4, 6, 3] give the number of times the block is repeated in each stage: [3, 4, 6, 3] corresponds to resnet-34 (resnet-50 uses the same counts with Bottleneck blocks), and likewise [3, 4, 23, 3] corresponds to resnet-101.
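To make the data flow concrete, here is a lightly annotated sketch of the forward pass of a Bottleneck-based ResNet (shape comments assume a 3x224x224 input; this mirrors the structure of torchvision's implementation rather than reproducing it verbatim):

import torch

def forward(self, x):        # x: [N, 3, 224, 224]
    # input stem
    x = self.conv1(x)        # 7x7, stride 2 -> [N, 64, 112, 112]
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)      # 3x3, stride 2 -> [N, 64, 56, 56]
    # four stages; the per-stage block counts distinguish the ResNet variants
    x = self.layer1(x)       # -> [N, 256, 56, 56]
    x = self.layer2(x)       # -> [N, 512, 28, 28]
    x = self.layer3(x)       # -> [N, 1024, 14, 14]
    x = self.layer4(x)       # -> [N, 2048, 7, 7]
    # head
    x = self.avgpool(x)      # AdaptiveAvgPool2d -> [N, 2048, 1, 1]
    x = torch.flatten(x, 1)  # -> [N, 2048]
    x = self.fc(x)           # -> [N, 1000]
    return x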

For the residual part of ResNet, see: "PyTorch torchvision.models: loading pretrained models and a walkthrough of the residual network code".

Now, taking resnet101 as an example (the summary printed below, with its 23 Bottleneck blocks in layer3 and 44,549,160 total parameters, matches the [3, 4, 23, 3] configuration), let's inspect the network's shapes and parameter sizes. The main code is as follows:

from torchsummary import summary
import torchvision.models as models

model = models.resnet101().cuda()  # the model must live on the device torchsummary uses

summary(model, input_size=(3, 224, 224), batch_size=-1, device='cuda')

The printed result looks like this:

C:\ProgramData\Anaconda3\python.exe F:/lingjun2019/Body_Recog/study_test/print_netParam.py
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
       BatchNorm2d-2         [-1, 64, 112, 112]             128
              ReLU-3         [-1, 64, 112, 112]               0
         MaxPool2d-4           [-1, 64, 56, 56]               0
            Conv2d-5           [-1, 64, 56, 56]           4,096
       BatchNorm2d-6           [-1, 64, 56, 56]             128
              ReLU-7           [-1, 64, 56, 56]               0
            Conv2d-8           [-1, 64, 56, 56]          36,864
       BatchNorm2d-9           [-1, 64, 56, 56]             128
             ReLU-10           [-1, 64, 56, 56]               0
           Conv2d-11          [-1, 256, 56, 56]          16,384
      BatchNorm2d-12          [-1, 256, 56, 56]             512
           Conv2d-13          [-1, 256, 56, 56]          16,384
      BatchNorm2d-14          [-1, 256, 56, 56]             512
             ReLU-15          [-1, 256, 56, 56]               0
       Bottleneck-16          [-1, 256, 56, 56]               0
           Conv2d-17           [-1, 64, 56, 56]          16,384
      BatchNorm2d-18           [-1, 64, 56, 56]             128
             ReLU-19           [-1, 64, 56, 56]               0
           Conv2d-20           [-1, 64, 56, 56]          36,864
      BatchNorm2d-21           [-1, 64, 56, 56]             128
             ReLU-22           [-1, 64, 56, 56]               0
           Conv2d-23          [-1, 256, 56, 56]          16,384
      BatchNorm2d-24          [-1, 256, 56, 56]             512
             ReLU-25          [-1, 256, 56, 56]               0
       Bottleneck-26          [-1, 256, 56, 56]               0
           Conv2d-27           [-1, 64, 56, 56]          16,384
      BatchNorm2d-28           [-1, 64, 56, 56]             128
             ReLU-29           [-1, 64, 56, 56]               0
           Conv2d-30           [-1, 64, 56, 56]          36,864
      BatchNorm2d-31           [-1, 64, 56, 56]             128
             ReLU-32           [-1, 64, 56, 56]               0
           Conv2d-33          [-1, 256, 56, 56]          16,384
      BatchNorm2d-34          [-1, 256, 56, 56]             512
             ReLU-35          [-1, 256, 56, 56]               0
       Bottleneck-36          [-1, 256, 56, 56]               0
           Conv2d-37          [-1, 128, 56, 56]          32,768
      BatchNorm2d-38          [-1, 128, 56, 56]             256
             ReLU-39          [-1, 128, 56, 56]               0
           Conv2d-40          [-1, 128, 28, 28]         147,456
      BatchNorm2d-41          [-1, 128, 28, 28]             256
             ReLU-42          [-1, 128, 28, 28]               0
           Conv2d-43          [-1, 512, 28, 28]          65,536
      BatchNorm2d-44          [-1, 512, 28, 28]           1,024
           Conv2d-45          [-1, 512, 28, 28]         131,072
      BatchNorm2d-46          [-1, 512, 28, 28]           1,024
             ReLU-47          [-1, 512, 28, 28]               0
       Bottleneck-48          [-1, 512, 28, 28]               0
           Conv2d-49          [-1, 128, 28, 28]          65,536
      BatchNorm2d-50          [-1, 128, 28, 28]             256
             ReLU-51          [-1, 128, 28, 28]               0
           Conv2d-52          [-1, 128, 28, 28]         147,456
      BatchNorm2d-53          [-1, 128, 28, 28]             256
             ReLU-54          [-1, 128, 28, 28]               0
           Conv2d-55          [-1, 512, 28, 28]          65,536
      BatchNorm2d-56          [-1, 512, 28, 28]           1,024
             ReLU-57          [-1, 512, 28, 28]               0
       Bottleneck-58          [-1, 512, 28, 28]               0
           Conv2d-59          [-1, 128, 28, 28]          65,536
      BatchNorm2d-60          [-1, 128, 28, 28]             256
             ReLU-61          [-1, 128, 28, 28]               0
           Conv2d-62          [-1, 128, 28, 28]         147,456
      BatchNorm2d-63          [-1, 128, 28, 28]             256
             ReLU-64          [-1, 128, 28, 28]               0
           Conv2d-65          [-1, 512, 28, 28]          65,536
      BatchNorm2d-66          [-1, 512, 28, 28]           1,024
             ReLU-67          [-1, 512, 28, 28]               0
       Bottleneck-68          [-1, 512, 28, 28]               0
           Conv2d-69          [-1, 128, 28, 28]          65,536
      BatchNorm2d-70          [-1, 128, 28, 28]             256
             ReLU-71          [-1, 128, 28, 28]               0
           Conv2d-72          [-1, 128, 28, 28]         147,456
      BatchNorm2d-73          [-1, 128, 28, 28]             256
             ReLU-74          [-1, 128, 28, 28]               0
           Conv2d-75          [-1, 512, 28, 28]          65,536
      BatchNorm2d-76          [-1, 512, 28, 28]           1,024
             ReLU-77          [-1, 512, 28, 28]               0
       Bottleneck-78          [-1, 512, 28, 28]               0
           Conv2d-79          [-1, 256, 28, 28]         131,072
      BatchNorm2d-80          [-1, 256, 28, 28]             512
             ReLU-81          [-1, 256, 28, 28]               0
           Conv2d-82          [-1, 256, 14, 14]         589,824
      BatchNorm2d-83          [-1, 256, 14, 14]             512
             ReLU-84          [-1, 256, 14, 14]               0
           Conv2d-85         [-1, 1024, 14, 14]         262,144
      BatchNorm2d-86         [-1, 1024, 14, 14]           2,048
           Conv2d-87         [-1, 1024, 14, 14]         524,288
      BatchNorm2d-88         [-1, 1024, 14, 14]           2,048
             ReLU-89         [-1, 1024, 14, 14]               0
       Bottleneck-90         [-1, 1024, 14, 14]               0
           Conv2d-91          [-1, 256, 14, 14]         262,144
      BatchNorm2d-92          [-1, 256, 14, 14]             512
             ReLU-93          [-1, 256, 14, 14]               0
           Conv2d-94          [-1, 256, 14, 14]         589,824
      BatchNorm2d-95          [-1, 256, 14, 14]             512
             ReLU-96          [-1, 256, 14, 14]               0
           Conv2d-97         [-1, 1024, 14, 14]         262,144
      BatchNorm2d-98         [-1, 1024, 14, 14]           2,048
             ReLU-99         [-1, 1024, 14, 14]               0
      Bottleneck-100         [-1, 1024, 14, 14]               0
          Conv2d-101          [-1, 256, 14, 14]         262,144
     BatchNorm2d-102          [-1, 256, 14, 14]             512
            ReLU-103          [-1, 256, 14, 14]               0
          Conv2d-104          [-1, 256, 14, 14]         589,824
     BatchNorm2d-105          [-1, 256, 14, 14]             512
            ReLU-106          [-1, 256, 14, 14]               0
          Conv2d-107         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-108         [-1, 1024, 14, 14]           2,048
            ReLU-109         [-1, 1024, 14, 14]               0
      Bottleneck-110         [-1, 1024, 14, 14]               0
          Conv2d-111          [-1, 256, 14, 14]         262,144
     BatchNorm2d-112          [-1, 256, 14, 14]             512
            ReLU-113          [-1, 256, 14, 14]               0
          Conv2d-114          [-1, 256, 14, 14]         589,824
     BatchNorm2d-115          [-1, 256, 14, 14]             512
            ReLU-116          [-1, 256, 14, 14]               0
          Conv2d-117         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-118         [-1, 1024, 14, 14]           2,048
            ReLU-119         [-1, 1024, 14, 14]               0
      Bottleneck-120         [-1, 1024, 14, 14]               0
          Conv2d-121          [-1, 256, 14, 14]         262,144
     BatchNorm2d-122          [-1, 256, 14, 14]             512
            ReLU-123          [-1, 256, 14, 14]               0
          Conv2d-124          [-1, 256, 14, 14]         589,824
     BatchNorm2d-125          [-1, 256, 14, 14]             512
            ReLU-126          [-1, 256, 14, 14]               0
          Conv2d-127         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-128         [-1, 1024, 14, 14]           2,048
            ReLU-129         [-1, 1024, 14, 14]               0
      Bottleneck-130         [-1, 1024, 14, 14]               0
          Conv2d-131          [-1, 256, 14, 14]         262,144
     BatchNorm2d-132          [-1, 256, 14, 14]             512
            ReLU-133          [-1, 256, 14, 14]               0
          Conv2d-134          [-1, 256, 14, 14]         589,824
     BatchNorm2d-135          [-1, 256, 14, 14]             512
            ReLU-136          [-1, 256, 14, 14]               0
          Conv2d-137         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-138         [-1, 1024, 14, 14]           2,048
            ReLU-139         [-1, 1024, 14, 14]               0
      Bottleneck-140         [-1, 1024, 14, 14]               0
          Conv2d-141          [-1, 256, 14, 14]         262,144
     BatchNorm2d-142          [-1, 256, 14, 14]             512
            ReLU-143          [-1, 256, 14, 14]               0
          Conv2d-144          [-1, 256, 14, 14]         589,824
     BatchNorm2d-145          [-1, 256, 14, 14]             512
            ReLU-146          [-1, 256, 14, 14]               0
          Conv2d-147         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-148         [-1, 1024, 14, 14]           2,048
            ReLU-149         [-1, 1024, 14, 14]               0
      Bottleneck-150         [-1, 1024, 14, 14]               0
          Conv2d-151          [-1, 256, 14, 14]         262,144
     BatchNorm2d-152          [-1, 256, 14, 14]             512
            ReLU-153          [-1, 256, 14, 14]               0
          Conv2d-154          [-1, 256, 14, 14]         589,824
     BatchNorm2d-155          [-1, 256, 14, 14]             512
            ReLU-156          [-1, 256, 14, 14]               0
          Conv2d-157         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-158         [-1, 1024, 14, 14]           2,048
            ReLU-159         [-1, 1024, 14, 14]               0
      Bottleneck-160         [-1, 1024, 14, 14]               0
          Conv2d-161          [-1, 256, 14, 14]         262,144
     BatchNorm2d-162          [-1, 256, 14, 14]             512
            ReLU-163          [-1, 256, 14, 14]               0
          Conv2d-164          [-1, 256, 14, 14]         589,824
     BatchNorm2d-165          [-1, 256, 14, 14]             512
            ReLU-166          [-1, 256, 14, 14]               0
          Conv2d-167         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-168         [-1, 1024, 14, 14]           2,048
            ReLU-169         [-1, 1024, 14, 14]               0
      Bottleneck-170         [-1, 1024, 14, 14]               0
          Conv2d-171          [-1, 256, 14, 14]         262,144
     BatchNorm2d-172          [-1, 256, 14, 14]             512
            ReLU-173          [-1, 256, 14, 14]               0
          Conv2d-174          [-1, 256, 14, 14]         589,824
     BatchNorm2d-175          [-1, 256, 14, 14]             512
            ReLU-176          [-1, 256, 14, 14]               0
          Conv2d-177         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-178         [-1, 1024, 14, 14]           2,048
            ReLU-179         [-1, 1024, 14, 14]               0
      Bottleneck-180         [-1, 1024, 14, 14]               0
          Conv2d-181          [-1, 256, 14, 14]         262,144
     BatchNorm2d-182          [-1, 256, 14, 14]             512
            ReLU-183          [-1, 256, 14, 14]               0
          Conv2d-184          [-1, 256, 14, 14]         589,824
     BatchNorm2d-185          [-1, 256, 14, 14]             512
            ReLU-186          [-1, 256, 14, 14]               0
          Conv2d-187         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-188         [-1, 1024, 14, 14]           2,048
            ReLU-189         [-1, 1024, 14, 14]               0
      Bottleneck-190         [-1, 1024, 14, 14]               0
          Conv2d-191          [-1, 256, 14, 14]         262,144
     BatchNorm2d-192          [-1, 256, 14, 14]             512
            ReLU-193          [-1, 256, 14, 14]               0
          Conv2d-194          [-1, 256, 14, 14]         589,824
     BatchNorm2d-195          [-1, 256, 14, 14]             512
            ReLU-196          [-1, 256, 14, 14]               0
          Conv2d-197         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-198         [-1, 1024, 14, 14]           2,048
            ReLU-199         [-1, 1024, 14, 14]               0
      Bottleneck-200         [-1, 1024, 14, 14]               0
          Conv2d-201          [-1, 256, 14, 14]         262,144
     BatchNorm2d-202          [-1, 256, 14, 14]             512
            ReLU-203          [-1, 256, 14, 14]               0
          Conv2d-204          [-1, 256, 14, 14]         589,824
     BatchNorm2d-205          [-1, 256, 14, 14]             512
            ReLU-206          [-1, 256, 14, 14]               0
          Conv2d-207         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-208         [-1, 1024, 14, 14]           2,048
            ReLU-209         [-1, 1024, 14, 14]               0
      Bottleneck-210         [-1, 1024, 14, 14]               0
          Conv2d-211          [-1, 256, 14, 14]         262,144
     BatchNorm2d-212          [-1, 256, 14, 14]             512
            ReLU-213          [-1, 256, 14, 14]               0
          Conv2d-214          [-1, 256, 14, 14]         589,824
     BatchNorm2d-215          [-1, 256, 14, 14]             512
            ReLU-216          [-1, 256, 14, 14]               0
          Conv2d-217         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-218         [-1, 1024, 14, 14]           2,048
            ReLU-219         [-1, 1024, 14, 14]               0
      Bottleneck-220         [-1, 1024, 14, 14]               0
          Conv2d-221          [-1, 256, 14, 14]         262,144
     BatchNorm2d-222          [-1, 256, 14, 14]             512
            ReLU-223          [-1, 256, 14, 14]               0
          Conv2d-224          [-1, 256, 14, 14]         589,824
     BatchNorm2d-225          [-1, 256, 14, 14]             512
            ReLU-226          [-1, 256, 14, 14]               0
          Conv2d-227         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-228         [-1, 1024, 14, 14]           2,048
            ReLU-229         [-1, 1024, 14, 14]               0
      Bottleneck-230         [-1, 1024, 14, 14]               0
          Conv2d-231          [-1, 256, 14, 14]         262,144
     BatchNorm2d-232          [-1, 256, 14, 14]             512
            ReLU-233          [-1, 256, 14, 14]               0
          Conv2d-234          [-1, 256, 14, 14]         589,824
     BatchNorm2d-235          [-1, 256, 14, 14]             512
            ReLU-236          [-1, 256, 14, 14]               0
          Conv2d-237         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-238         [-1, 1024, 14, 14]           2,048
            ReLU-239         [-1, 1024, 14, 14]               0
      Bottleneck-240         [-1, 1024, 14, 14]               0
          Conv2d-241          [-1, 256, 14, 14]         262,144
     BatchNorm2d-242          [-1, 256, 14, 14]             512
            ReLU-243          [-1, 256, 14, 14]               0
          Conv2d-244          [-1, 256, 14, 14]         589,824
     BatchNorm2d-245          [-1, 256, 14, 14]             512
            ReLU-246          [-1, 256, 14, 14]               0
          Conv2d-247         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-248         [-1, 1024, 14, 14]           2,048
            ReLU-249         [-1, 1024, 14, 14]               0
      Bottleneck-250         [-1, 1024, 14, 14]               0
          Conv2d-251          [-1, 256, 14, 14]         262,144
     BatchNorm2d-252          [-1, 256, 14, 14]             512
            ReLU-253          [-1, 256, 14, 14]               0
          Conv2d-254          [-1, 256, 14, 14]         589,824
     BatchNorm2d-255          [-1, 256, 14, 14]             512
            ReLU-256          [-1, 256, 14, 14]               0
          Conv2d-257         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-258         [-1, 1024, 14, 14]           2,048
            ReLU-259         [-1, 1024, 14, 14]               0
      Bottleneck-260         [-1, 1024, 14, 14]               0
          Conv2d-261          [-1, 256, 14, 14]         262,144
     BatchNorm2d-262          [-1, 256, 14, 14]             512
            ReLU-263          [-1, 256, 14, 14]               0
          Conv2d-264          [-1, 256, 14, 14]         589,824
     BatchNorm2d-265          [-1, 256, 14, 14]             512
            ReLU-266          [-1, 256, 14, 14]               0
          Conv2d-267         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-268         [-1, 1024, 14, 14]           2,048
            ReLU-269         [-1, 1024, 14, 14]               0
      Bottleneck-270         [-1, 1024, 14, 14]               0
          Conv2d-271          [-1, 256, 14, 14]         262,144
     BatchNorm2d-272          [-1, 256, 14, 14]             512
            ReLU-273          [-1, 256, 14, 14]               0
          Conv2d-274          [-1, 256, 14, 14]         589,824
     BatchNorm2d-275          [-1, 256, 14, 14]             512
            ReLU-276          [-1, 256, 14, 14]               0
          Conv2d-277         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-278         [-1, 1024, 14, 14]           2,048
            ReLU-279         [-1, 1024, 14, 14]               0
      Bottleneck-280         [-1, 1024, 14, 14]               0
          Conv2d-281          [-1, 256, 14, 14]         262,144
     BatchNorm2d-282          [-1, 256, 14, 14]             512
            ReLU-283          [-1, 256, 14, 14]               0
          Conv2d-284          [-1, 256, 14, 14]         589,824
     BatchNorm2d-285          [-1, 256, 14, 14]             512
            ReLU-286          [-1, 256, 14, 14]               0
          Conv2d-287         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-288         [-1, 1024, 14, 14]           2,048
            ReLU-289         [-1, 1024, 14, 14]               0
      Bottleneck-290         [-1, 1024, 14, 14]               0
          Conv2d-291          [-1, 256, 14, 14]         262,144
     BatchNorm2d-292          [-1, 256, 14, 14]             512
            ReLU-293          [-1, 256, 14, 14]               0
          Conv2d-294          [-1, 256, 14, 14]         589,824
     BatchNorm2d-295          [-1, 256, 14, 14]             512
            ReLU-296          [-1, 256, 14, 14]               0
          Conv2d-297         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-298         [-1, 1024, 14, 14]           2,048
            ReLU-299         [-1, 1024, 14, 14]               0
      Bottleneck-300         [-1, 1024, 14, 14]               0
          Conv2d-301          [-1, 256, 14, 14]         262,144
     BatchNorm2d-302          [-1, 256, 14, 14]             512
            ReLU-303          [-1, 256, 14, 14]               0
          Conv2d-304          [-1, 256, 14, 14]         589,824
     BatchNorm2d-305          [-1, 256, 14, 14]             512
            ReLU-306          [-1, 256, 14, 14]               0
          Conv2d-307         [-1, 1024, 14, 14]         262,144
     BatchNorm2d-308         [-1, 1024, 14, 14]           2,048
            ReLU-309         [-1, 1024, 14, 14]               0
      Bottleneck-310         [-1, 1024, 14, 14]               0
          Conv2d-311          [-1, 512, 14, 14]         524,288
     BatchNorm2d-312          [-1, 512, 14, 14]           1,024
            ReLU-313          [-1, 512, 14, 14]               0
          Conv2d-314            [-1, 512, 7, 7]       2,359,296
     BatchNorm2d-315            [-1, 512, 7, 7]           1,024
            ReLU-316            [-1, 512, 7, 7]               0
          Conv2d-317           [-1, 2048, 7, 7]       1,048,576
     BatchNorm2d-318           [-1, 2048, 7, 7]           4,096
          Conv2d-319           [-1, 2048, 7, 7]       2,097,152
     BatchNorm2d-320           [-1, 2048, 7, 7]           4,096
            ReLU-321           [-1, 2048, 7, 7]               0
      Bottleneck-322           [-1, 2048, 7, 7]               0
          Conv2d-323            [-1, 512, 7, 7]       1,048,576
     BatchNorm2d-324            [-1, 512, 7, 7]           1,024
            ReLU-325            [-1, 512, 7, 7]               0
          Conv2d-326            [-1, 512, 7, 7]       2,359,296
     BatchNorm2d-327            [-1, 512, 7, 7]           1,024
            ReLU-328            [-1, 512, 7, 7]               0
          Conv2d-329           [-1, 2048, 7, 7]       1,048,576
     BatchNorm2d-330           [-1, 2048, 7, 7]           4,096
            ReLU-331           [-1, 2048, 7, 7]               0
      Bottleneck-332           [-1, 2048, 7, 7]               0
          Conv2d-333            [-1, 512, 7, 7]       1,048,576
     BatchNorm2d-334            [-1, 512, 7, 7]           1,024
            ReLU-335            [-1, 512, 7, 7]               0
          Conv2d-336            [-1, 512, 7, 7]       2,359,296
     BatchNorm2d-337            [-1, 512, 7, 7]           1,024
            ReLU-338            [-1, 512, 7, 7]               0
          Conv2d-339           [-1, 2048, 7, 7]       1,048,576
     BatchNorm2d-340           [-1, 2048, 7, 7]           4,096
            ReLU-341           [-1, 2048, 7, 7]               0
      Bottleneck-342           [-1, 2048, 7, 7]               0
AdaptiveAvgPool2d-343           [-1, 2048, 1, 1]               0
          Linear-344                 [-1, 1000]       2,049,000
================================================================
Total params: 44,549,160
Trainable params: 44,549,160
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 429.73
Params size (MB): 169.94
Estimated Total Size (MB): 600.25
----------------------------------------------------------------

Process finished with exit code 0
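A quick way to sanity-check the Param # column: a bias-free Conv2d holds in_channels × out_channels × kernel_h × kernel_w weights, and a BatchNorm2d holds 2 × num_features learnable parameters (one weight and one bias per channel). Checking the first two rows above:

# Conv2d-1: 3 -> 64 channels, 7x7 kernel, bias=False
print(3 * 64 * 7 * 7)  # 9408
# BatchNorm2d-2: 64 channels, one weight and one bias each
print(2 * 64)          # 128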

How can we save this output to a local TXT file?

When you use torchsummary's summary function to print the model's shapes and parameter sizes, you can save the output by redirecting standard output (stdout) to a local TXT file. Python's with open() context manager lets us write the output to a file instead of displaying it on the console.

Here is one way to do it:

import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirected(to=None):
    """
    Context manager that temporarily redirects stdout to a file, or leaves it on the console.
    Usage: `with stdout_redirected(to='output.txt'):`
    """
    if to is None:
        yield
    else:
        sys.stdout.flush()
        original_stdout = sys.stdout
        with open(to, 'w') as file:
            sys.stdout = file
            try:
                yield file
            finally:
                sys.stdout = original_stdout

# Assuming you have already built a PyTorch model named `model`,
# this shows how to capture the output of torchsummary's summary() in a file.
from torchsummary import summary

output_file = "model_summary.txt"

with stdout_redirected(to=output_file):
    summary(model, input_size=(input_channels, input_height, input_width))

print(f"Model summary saved to {output_file}.")


In this code we define a context manager, stdout_redirected, which temporarily redirects standard output (sys.stdout) to a file (when the to argument is given) or leaves it pointing at the console (when to is None). We then use this context manager to capture the output of torchsummary's summary and save it to the specified file model_summary.txt.

Make sure to replace model with your actual PyTorch model, and set appropriate values for input_channels, input_height, and input_width according to your model's input shape. After running the code, you should find the model summary in the model_summary.txt file.
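As a side note, the standard library already ships an equivalent context manager, contextlib.redirect_stdout, so you can get the same effect without writing your own (a minimal sketch, assuming `model` is on the GPU as in the earlier example):

from contextlib import redirect_stdout
from torchsummary import summary

with open("model_summary.txt", "w") as f:
    with redirect_stdout(f):
        summary(model, input_size=(3, 224, 224), batch_size=-1, device='cuda')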

To make modifications easier, let's switch approaches and simply print the model, like this:

from torchsummary import summary
import torchvision.models as models

model = models.resnet50()
print(model)
# summary(model, input_size=(3, 256, 256), batch_size=-1, device='cuda')

The printed result, shown below, is the structure of the model:

C:\ProgramData\Anaconda3\python.exe F:/lingjun2019/Body_Recog/study_test/print_netParam.py
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)

Process finished with exit code 0

The number of out_features of the final fully connected layer is the number of output classes, 1000 by default. If our own task has only 23 classes, this needs to be modified, as follows:

from torchsummary import summary
import torchvision.models as models
import torch

model = models.resnet50()

num_ftrs = model.fc.in_features            # 2048 for resnet50
model.fc = torch.nn.Linear(num_ftrs, 23)   # replace the 1000-way head with a 23-way one
print(model)
# summary(model, input_size=(3, 256, 256), batch_size=-1, device='cuda')

Everything before stays the same; only the end of the printed output changes, to exactly what we want:

···
      (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=23, bias=True)
)
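As a quick sanity check on the modified model, a random image now maps to 23 logits (a minimal sketch, reusing the `model` from the code above):

import torch

x = torch.randn(1, 3, 224, 224)  # one dummy RGB image
print(model(x).shape)            # torch.Size([1, 23])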

For more in-depth content on classification, you are welcome to subscribe to my GitChat: https://gitbook.cn/new/gitchat/activity/603f4254494aab1ce4f231cd (GitChat is a knowledge-sharing product based on the WeChat platform.)

2. Global pooling as an aid to understanding

ROI pooling is a common step in object detection, especially in two-stage detectors such as Faster R-CNN. The candidate objects selected by the anchors, awaiting further classification, come in all shapes and sizes (a car is a car whether near or far; a lighthouse is tall and thin; an orange cat is round and fat), yet the fully connected layers of a plain classification network require a fixed input size. Resolving this mismatch calls for a mediating component: before the fully connected layers, whatever size comes in is squeezed down to one uniform size.

My mental picture is something like a juicer press. The sieve at its outlet always produces a fixed 7x7 output, 49 cells in total; the physical length and width of the sieve can vary, but the 7x7 = 49 grid never changes.

So when the length and width change, the size of each small cell changes with them, but the invariant of 49 outputs always holds. Understanding it as this kind of middle component is enough; if you have deeper insight, feel free to comment.
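The juicer analogy maps directly onto torchvision's roi_pool operator: regions of different shapes and sizes all come out of the "sieve" as the same fixed 7x7 grid. A minimal sketch:

import torch
from torchvision.ops import roi_pool

feat = torch.randn(1, 256, 50, 50)              # a backbone feature map
# two regions of very different shapes, as (batch_index, x1, y1, x2, y2)
boxes = torch.tensor([[0, 0.0, 0.0, 20.0, 30.0],
                      [0, 10.0, 10.0, 45.0, 18.0]])
out = roi_pool(feat, boxes, output_size=(7, 7))
print(out.shape)  # torch.Size([2, 256, 7, 7]): every region becomes 7x7 = 49 cells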

Below we change input_size and watch how the network's shapes change. No matter how the input image size varies, by the AdaptiveAvgPool2d layer everything must be converted to one uniform size. This is the aptly named "Adaptive" part: the pooling adapts its window size, whether it is pooling the 7x7 map from a 224 input down to 1x1, or the 14x14 map from a 448 input down to 1x1. (For a detailed introduction to AdaptiveAvgPool2d, see: What is the principle behind nn.AdaptiveAvgPool2d(output_size) in PyTorch? https://www.zhihu.com/question/282046628)
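To see the adaptivity concretely, here is a minimal sketch: one and the same AdaptiveAvgPool2d((1, 1)) layer collapses feature maps of any spatial size down to 1x1:

import torch
import torch.nn as nn

gap = nn.AdaptiveAvgPool2d((1, 1))

for s in (7, 14, 9):  # e.g. layer4 outputs for 224, 448, or odd-sized inputs
    x = torch.randn(1, 2048, s, s)
    print(tuple(x.shape), '->', tuple(gap(x).shape))  # always (1, 2048, 1, 1)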

Beyond this, related pooling variants extend the same idea; see the notes on RoIPooling and RoIAlign.

3. Summary

The torchsummary tool prints a neural network's shapes and parameter sizes directly, with no need to print and analyze node by node. It is a great help both for checking whether a model we built ourselves matches our expectations, and for understanding the structure of open-source models.


torchsummary's summary function does not directly support multi-input models. For those, the torchinfo library is recommended; it usually has better compatibility with complex models:

from torchinfo import summary

# Using torchinfo: pass one input_size tuple per model input.
# `model`, `batch`, and `device` are placeholders for your own multi-input setup.
summary(model,
        input_size=[(batch, 96, 7), (batch, 96, 4), (batch, 72, 7), (batch, 72, 4)],
        device=device)

For more information on torchinfo, see: https://gitcode.com/gh_mirrors/to/torchinfo/overview
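For a single-input model the torchinfo call is even simpler; a minimal runnable sketch with resnet50 (note that torchinfo, unlike torchsummary, includes the batch dimension in input_size):

from torchinfo import summary
import torchvision.models as models

model = models.resnet50()
summary(model, input_size=(1, 3, 224, 224))  # (batch, channels, height, width)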


Finally, if you found this article helpful, a like is welcome so that more people can see it; that encouragement is what keeps me writing.

References:

1. Printing a neural network's shapes and parameter sizes with torchsummary: https://blog.youkuaiyun.com/sinat_41026716/article/details/105726239

2. Reproducing classic CNN backbones in PyTorch: https://mp.weixin.qq.com/s/-t77D71Uu3OLDyUa_a4Yrg
