Tencent's NCNN framework ships with built-in support for measuring whole-model inference time and per-layer time. However, there is no documentation explaining how to use these features, they are not enabled by default, and they do not compute each layer's average time. To make speed testing easier, I modified the NCNN source, and this article records how the features below were implemented.
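The core of the change is straightforward: wrap each layer's forward call with a timer, accumulate a per-layer total across all benchmark runs, and print the mean at the end, which is how the report below shows a stable per-layer average instead of a single noisy sample. The following is a minimal, self-contained sketch of that idea, not NCNN's actual code: FakeLayer is an illustrative stand-in for ncnn::Layer, and in the real patch the timer wraps the layer forward call inside NCNN's net.cpp.

#include <chrono>
#include <cstdio>
#include <string>
#include <vector>

// Stand-in for ncnn::Layer; in the real patch the timer wraps the
// layer forward call inside NCNN's net.cpp instead.
struct FakeLayer {
    std::string type;
    std::string name;
    void forward() const { /* the layer's real computation runs here */ }
};

int main() {
    const int loop_count = 10;  // same role as loop_count in the report below
    std::vector<FakeLayer> layers = {
        {"Convolution", "conv1"}, {"ReLU", "conv1_relu"}};
    std::vector<double> total_ms(layers.size(), 0.0);  // per-layer accumulators

    for (int run = 0; run < loop_count; run++) {
        for (size_t i = 0; i < layers.size(); i++) {
            auto t0 = std::chrono::high_resolution_clock::now();
            layers[i].forward();  // time exactly one layer per measurement
            auto t1 = std::chrono::high_resolution_clock::now();
            total_ms[i] +=
                std::chrono::duration<double, std::milli>(t1 - t0).count();
        }
    }

    // Report the per-layer average, which is what the table below prints.
    for (size_t i = 0; i < layers.size(); i++)
        printf("%-16s %-20s %8.3fms\n", layers[i].type.c_str(),
               layers[i].name.c_str(), total_ms[i] / loop_count);
    return 0;
}

Accumulate-then-divide keeps the bookkeeping to one double per layer, so the instrumentation itself adds essentially no overhead to the run being measured.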
Results
On a phone, given a usable ncnn param file for any network, the tool produces a per-layer timing report like the one below.
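Getting the tool onto the phone is the usual adb routine. The invocation below is hypothetical: it assumes the modified benchmark builds into a single benchncnn-style executable that takes the param path plus the loop count, thread count, and powersave level seen in the report header; adjust to however you actually build and name the binary.

adb push benchncnn resnet18.param /data/local/tmp/
adb shell "cd /data/local/tmp && ./benchncnn resnet18.param 10 4 2"

With resnet18.param it prints: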
loop_count = 10
num_threads = 4
powersave = 2
output_blob_name: prob
mode: cls
resnet18.param :
Convolution conv1 8.014ms | feature_map: 224 x 224 inch: 3 outch: 64 kernel: 7 x 7 stride: 2
BatchNorm bn_conv1 0.336ms | feature_map: 112 x 112 inch: 64 outch: 64
Scale scale_conv1 0.218ms | feature_map: 112 x 112 inch: 64 outch: 64
ReLU conv1_relu 0.200ms | feature_map: 112 x 112 inch: 64 outch: 64
Pooling pool1 0.534ms | feature_map: 112 x 112 inch: 64 outch: 64 kernel: 3 x 3 stride: 2
Split splitncnn_0 0.001ms |
Convolution res2a_branch1 0.854ms | feature_map: 56 x 56 inch: 64 outch: 64 kernel: 1 x 1
BatchNorm bn2a_branch1 0.052ms | feature_map: 56 x 56 inch: 64 outch: 64
Scale scale2a_branch1 0.028ms | feature_map: 56 x 56 inch: 64 outch: 64
Convolution res2a_branch2a 3.417ms | feature_map: 56 x 56 inch: 64 outch: 64 kernel: 3 x 3
BatchNorm bn2a_branch2a 0.120ms | feature_map: 56 x 56 inch: 64 outch: 64
Scale scale2a_branch2a 0.078ms | feature_map: 56 x 56 inch: 64 outch: 64
ReLU res2a_branch2a_relu 0.047ms | feature_map: 56 x 56 inch: 64 outch: 64
Convolution res2a_branch2b 3.274ms | feature_map: 56 x 56 inch: 64 outch: 64 kernel: 3 x 3
BatchNorm bn2a_branch2b 0.113ms | feature_map: 56 x 56 inch: 64 outch: 64
Scale scale2a_branch2b 0.139ms | feature_map: 56 x 56 inch: 64 outch: 64
Eltwise res2a 0.146ms |
ReLU res2a_relu 0.095ms | feature_map: 56 x 56 inch: 64 outch: 64
Split splitncnn_1 0.001ms |
Convolution res2b_branch2a 3.356ms | feature_map: 56 x 56 inch: 64 outch: 64 kernel: 3 x 3
BatchNorm bn2b_branch2a 0.107ms | feature_map: 56 x 56 inch: 64 outch: 64
Scale scale2b_branch2a 0.085ms | feature_map: 56 x 56 inch: 64 outch: 64
ReLU res2b_branch2a_relu 0.106ms | feature_map: 56 x 56 inch: 64 outch: 64
Convolution res2b_branch2b 3.544ms | feature_map: 56 x 56 inch: 64 outch: 64 kernel: 3 x 3
BatchNorm bn2b_branch2b 0.065ms | feature_map: 56 x 56 inch: 64 outch: 64
Scale scale2b_branch2b 0.128ms | feature_map: 56 x 56 inch: 64 outch: 64
Eltwise res2b 0.159ms |
ReLU res2b_relu 0.110ms | feature_map: 56 x 56 inch: 64 outch: 64
Split splitncnn_2 0.001ms |
Convolution res3a_branch1 0.679ms | feature_map: 56 x 56 inch: 64 outch: 128 kernel: 1 x 1 stride: 2
BatchNorm bn3a_branch1 0.044ms | feature_map: 28 x 28 inch: 128 outch: 128
Scale scale3a_branch1 0.119ms | feature_map: 28 x 28 inch: 128 outch: 128
Convolution res3a_branch2a 2.173ms | feature_map: 56 x 56 inch: 64 outch: 128 kernel: 3 x 3 stride: 2
BatchNorm bn3a_branch2a 0.071ms | feature_map: 28 x 28 inch: 128 outch: 128
Scale scale3a_branch2a 0.035ms | feature_map: 28 x 28 inch: 128 outch: 128
ReLU res3a_branch2a_relu 0.115ms | feature_map: 28 x 28 inch: 128 outch: 128
Convolution res3a_branch2b 2.586ms | feature_map: 28 x 28 inch: 128 outch: 128 kernel: 3 x 3
BatchNorm bn3a_branch2b 0.102ms | feature_map: 28 x 28 inch: 128 outch: 128
Scale scale3a_branch2b 0.045ms | feature_map: 28 x 28 inch: 128 outch: 128