【Caffe FCN】Fully convolutional networks for image semantic segmentation: dataset preparation, network model definition, and network training (the dataset and model files are provided for reference)

Download: my homemade dataset, for reference
Download: my modified network

Paper: "Fully Convolutional Networks for Semantic Segmentation"
Code: the Caffe implementation of FCN
Dataset: Pascal VOC

1. Dataset preparation

After downloading the Pascal VOC data, build the image dataset and the label dataset used for segmentation, in LMDB or LEVELDB format.
It is best to resize the images first, using padding rather than stretching so the aspect ratio is preserved.
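The pad-and-resize geometry for a square target (224x224 here) can be computed with a short helper. This is only a sketch, and the function name is mine; with Pillow you would then `resize` to the scaled size and `paste` onto a 224x224 canvas at the returned offsets.

```python
def letterbox_geometry(w, h, target=224):
    """Scale (w, h) to fit inside a target x target square while
    preserving aspect ratio; return the resized size and the paste
    offsets that center it on the padded canvas."""
    scale = target / float(max(w, h))
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    off_x = (target - new_w) // 2
    off_y = (target - new_h) // 2
    return (new_w, new_h), (off_x, off_y)
```

Apply the same geometry to an image and its label so they stay pixel-aligned.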

  1. Folder layout
    The dataset consists of the original images (SegmentationImage) and the label images (SegmentationClass).

Then build the corresponding LMDB files. Split all images into train and val at a 4:1 ratio. Each txt file only needs to list the image paths; no class label is required, because for segmentation the label of an image is itself an image, which is specified separately in Caffe.

Img_train.txt
SegmentationImage/002120.png
SegmentationImage/002132.png
SegmentationImage/002142.png
SegmentationImage/002212.png
SegmentationImage/002234.png
SegmentationImage/002260.png
SegmentationImage/002266.png
SegmentationImage/002268.png
SegmentationImage/002273.png
SegmentationImage/002281.png
SegmentationImage/002284.png
SegmentationImage/002293.png
SegmentationImage/002361.png
Label_train.txt
SegmentationClass/002120.png
SegmentationClass/002132.png
SegmentationClass/002142.png
SegmentationClass/002212.png
SegmentationClass/002234.png
SegmentationClass/002260.png
SegmentationClass/002266.png
SegmentationClass/002268.png
SegmentationClass/002273.png
SegmentationClass/002281.png
SegmentationClass/002284.png
SegmentationClass/002293.png
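The 4:1 split and the list files above can be produced with a short script. A sketch, assuming the file names are shared between the image and label folders as in the lists:

```python
import random

def split_train_val(names, ratio=4, seed=0):
    """Shuffle the file names deterministically and split them
    train:val = ratio:1."""
    names = sorted(names)
    random.Random(seed).shuffle(names)
    n_val = len(names) // (ratio + 1)
    return names[n_val:], names[:n_val]

names = ["%06d.png" % i for i in range(100)]  # stand-in for the real list
train, val = split_train_val(names)

# Write the list files in the same format as Img_train.txt / Label_train.txt.
with open("Img_train.txt", "w") as f:
    f.writelines("SegmentationImage/%s\n" % n for n in train)
with open("Label_train.txt", "w") as f:
    f.writelines("SegmentationClass/%s\n" % n for n in train)
```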

Note: the label images must be generated by yourself from the ground-truth images under SegmentationClass.
The pixel color of each class is as follows:

Class        R    G    B
background   0    0    0
aeroplane    128  0    0
bicycle      0    128  0
bird         128  128  0
boat         0    0    128
bottle       128  0    128
bus          0    128  128
car          128  128  128
cat          64   0    0
chair        192  0    0
cow          64   128  0
diningtable  192  128  0
dog          64   0    128
horse        192  0    128
motorbike    64   128  128
person       192  128  128
pottedplant  0    64   0
sheep        128  64   0
sofa         0    192  0
train        128  192  0
tvmonitor    0    64   128
Process the ground-truth images in the dataset to generate the label images used for training.
Note that the label files must be single-channel grayscale images; otherwise you get an error about the scores layer output not matching the label size, caused by the extra color channels.
Then generate the LMDB files, and the dataset is ready.
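The color-to-index conversion can be hard-coded from the table above. A minimal sketch in pure Python; note that the real VOC SegmentationClass PNGs are palette-indexed, so decoding them with a palette lookup may give you the class indices directly. Unknown colors (VOC marks object contours with the void color (224, 224, 192)) are mapped to 255, matching the `ignore_label: 255` in the loss layer:

```python
# Pascal VOC class colors (R, G, B), in class-index order 0..20.
VOC_COLORS = [
    ("background",  (0, 0, 0)),
    ("aeroplane",   (128, 0, 0)),
    ("bicycle",     (0, 128, 0)),
    ("bird",        (128, 128, 0)),
    ("boat",        (0, 0, 128)),
    ("bottle",      (128, 0, 128)),
    ("bus",         (0, 128, 128)),
    ("car",         (128, 128, 128)),
    ("cat",         (64, 0, 0)),
    ("chair",       (192, 0, 0)),
    ("cow",         (64, 128, 0)),
    ("diningtable", (192, 128, 0)),
    ("dog",         (64, 0, 128)),
    ("horse",       (192, 0, 128)),
    ("motorbike",   (64, 128, 128)),
    ("person",      (192, 128, 128)),
    ("pottedplant", (0, 64, 0)),
    ("sheep",       (128, 64, 0)),
    ("sofa",        (0, 192, 0)),
    ("train",       (128, 192, 0)),
    ("tvmonitor",   (0, 64, 128)),
]
COLOR_TO_INDEX = {rgb: i for i, (_, rgb) in enumerate(VOC_COLORS)}

def pixel_to_label(rgb):
    """Map one RGB ground-truth pixel to its class index; anything
    not in the table (e.g. void contours) becomes the ignore label."""
    return COLOR_TO_INDEX.get(rgb, 255)
```

Running `pixel_to_label` over every pixel and saving the result as an 8-bit single-channel PNG yields the grayscale label image required above.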

2. Network model definition

The main point to get right here is the data input: one Data layer supplies the images and another supplies the labels, as follows.

layer {
  name: "data"
  type: "Data"
  top: "data"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
    mean_file: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_train_mean.binaryproto"
  }
  data_param {
    source: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_train"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "label"
  type: "Data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
    mean_file: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_train_mean.binaryproto"
  }
  data_param {
    source: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_train"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
    mean_file: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_val_mean.binaryproto"
  }
  data_param {
    source: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_val"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "label"
  type: "Data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
    mean_file: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_val_mean.binaryproto"
  }
  data_param {
    source: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_val"
    batch_size: 1
    backend: LMDB
  }
}
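One detail worth noting about the full network (reproduced in the training log below): the Deconvolution upsampling layers have `lr_mult: 0` and `bias_term: false`, so their weights are never learned and must be filled in once before training with fixed bilinear-interpolation kernels, as in the official FCN net-surgery code. A pure-Python sketch of that standard kernel (to be copied into each channel's filter):

```python
def bilinear_filler(size):
    """Return a size x size 2-D bilinear interpolation kernel, the
    standard initialization for fixed FCN Deconvolution weights."""
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    weight_1d = [1 - abs(i - center) / factor for i in range(size)]
    return [[wy * wx for wx in weight_1d] for wy in weight_1d]
```

For the `kernel_size: 4, stride: 2` layers this gives a 4x4 kernel whose entries sum to 4 (stride squared), so a constant input stays constant after upsampling; `upscore8` uses the same construction with size 16.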


3. Network training

It is best to fine-tune from pretrained weights; training from scratch makes the loss decrease far too slowly.
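For example, with Caffe's command-line tool (the weights file name here is illustrative, assuming VGG-16 pretrained weights):

```
caffe train --solver=solver.prototxt --weights=VGG_ILSVRC_16_layers.caffemodel --gpu=0
```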

Log file created at: 2016/12/13 12:14:07
Running on machine: DESKTOP
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 12:14:07.177220  1380 caffe.cpp:218] Using GPUs 0
I1213 12:14:07.436894  1380 caffe.cpp:223] GPU 0: GeForce GTX 960
I1213 12:14:07.758122  1380 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 12:14:07.758623  1380 solver.cpp:48] Initializing solver from parameters: 
test_iter: 84
test_interval: 338
base_lr: 1e-014
display: 20
max_iter: 100000
lr_policy: "fixed"
momentum: 0.95
weight_decay: 0.0005
snapshot: 4000
snapshot_prefix: "FCN"
solver_mode: GPU
device_id: 0
net: "train_val.prototxt"
train_state {
  level: 0
  stage: ""
}
iter_size: 1
I1213 12:14:07.759624  1380 solver.cpp:91] Creating training net from net file: train_val.prototxt
I1213 12:14:07.760124  1380 net.cpp:332] The NetState phase (0) differed from the phase (1) specified by a rule in layer data
I1213 12:14:07.760124  1380 net.cpp:332] The NetState phase (0) differed from the phase (1) specified by a rule in layer label
I1213 12:14:07.760124  1380 net.cpp:332] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I1213 12:14:07.761126  1380 net.cpp:58] Initializing net from parameters: 
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
    mean_file: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_train_mean.binaryproto"
  }
  data_param {
    source: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_train"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "label"
  type: "Data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
    mean_file: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_train_mean.binaryproto"
  }
  data_param {
    source: "G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_train"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 100
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "conv1_1"
  top: "conv1_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "conv1_2"
  top: "conv1_2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_2"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2_1"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu2_1"
  type: "ReLU"
  bottom: "conv2_1"
  top: "conv2_1"
}
layer {
  name: "conv2_2"
  type: "Convolution"
  bottom: "conv2_1"
  top: "conv2_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu2_2"
  type: "ReLU"
  bottom: "conv2_2"
  top: "conv2_2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2_2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv3_1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu3_1"
  type: "ReLU"
  bottom: "conv3_1"
  top: "conv3_1"
}
layer {
  name: "conv3_2"
  type: "Convolution"
  bottom: "conv3_1"
  top: "conv3_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu3_2"
  type: "ReLU"
  bottom: "conv3_2"
  top: "conv3_2"
}
layer {
  name: "conv3_3"
  type: "Convolution"
  bottom: "conv3_2"
  top: "conv3_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu3_3"
  type: "ReLU"
  bottom: "conv3_3"
  top: "conv3_3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3_3"
  top: "pool3"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv4_1"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu4_1"
  type: "ReLU"
  bottom: "conv4_1"
  top: "conv4_1"
}
layer {
  name: "conv4_2"
  type: "Convolution"
  bottom: "conv4_1"
  top: "conv4_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu4_2"
  type: "ReLU"
  bottom: "conv4_2"
  top: "conv4_2"
}
layer {
  name: "conv4_3"
  type: "Convolution"
  bottom: "conv4_2"
  top: "conv4_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu4_3"
  type: "ReLU"
  bottom: "conv4_3"
  top: "conv4_3"
}
layer {
  name: "pool4"
  type: "Pooling"
  bottom: "conv4_3"
  top: "pool4"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv5_1"
  type: "Convolution"
  bottom: "pool4"
  top: "conv5_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu5_1"
  type: "ReLU"
  bottom: "conv5_1"
  top: "conv5_1"
}
layer {
  name: "conv5_2"
  type: "Convolution"
  bottom: "conv5_1"
  top: "conv5_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu5_2"
  type: "ReLU"
  bottom: "conv5_2"
  top: "conv5_2"
}
layer {
  name: "conv5_3"
  type: "Convolution"
  bottom: "conv5_2"
  top: "conv5_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu5_3"
  type: "ReLU"
  bottom: "conv5_3"
  top: "conv5_3"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5_3"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "Convolution"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    pad: 0
    kernel_size: 7
    stride: 1
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "Convolution"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "score_fr"
  type: "Convolution"
  bottom: "fc7"
  top: "score_fr"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 21
    pad: 0
    kernel_size: 1
  }
}
layer {
  name: "upscore2"
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore2"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 21
    bias_term: false
    kernel_size: 4
    stride: 2
  }
}
layer {
  name: "score_pool4"
  type: "Convolution"
  bottom: "pool4"
  top: "score_pool4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 21
    pad: 0
    kernel_size: 1
  }
}
layer {
  name: "score_pool4c"
  type: "Crop"
  bottom: "score_pool4"
  bottom: "upscore2"
  top: "score_pool4c"
  crop_param {
    axis: 2
    offset: 5
  }
}
layer {
  name: "fuse_pool4"
  type: "Eltwise"
  bottom: "upscore2"
  bottom: "score_pool4c"
  top: "fuse_pool4"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "upscore_pool4"
  type: "Deconvolution"
  bottom: "fuse_pool4"
  top: "upscore_pool4"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 21
    bias_term: false
    kernel_size: 4
    stride: 2
  }
}
layer {
  name: "score_pool3"
  type: "Convolution"
  bottom: "pool3"
  top: "score_pool3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 21
    pad: 0
    kernel_size: 1
  }
}
layer {
  name: "score_pool3c"
  type: "Crop"
  bottom: "score_pool3"
  bottom: "upscore_pool4"
  top: "score_pool3c"
  crop_param {
    axis: 2
    offset: 9
  }
}
layer {
  name: "fuse_pool3"
  type: "Eltwise"
  bottom: "upscore_pool4"
  bottom: "score_pool3c"
  top: "fuse_pool3"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "upscore8"
  type: "Deconvolution"
  bottom: "fuse_pool3"
  top: "upscore8"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 21
    bias_term: false
    kernel_size: 16
    stride: 8
  }
}
layer {
  name: "score"
  type: "Crop"
  bottom: "upscore8"
  bottom: "data"
  top: "score"
  crop_param {
    axis: 2
    offset: 31
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255
    normalize: false
  }
}
I1213 12:14:07.787643  1380 layer_factory.hpp:77] Creating layer data
I1213 12:14:07.788645  1380 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 12:14:07.789145  1380 net.cpp:100] Creating Layer data
I1213 12:14:07.789645  1380 net.cpp:418] data -> data
I1213 12:14:07.790145 12764 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 12:14:07.790145  1380 data_transformer.cpp:25] Loading mean file from: G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_train_mean.binaryproto
I1213 12:14:07.791647 12764 db_lmdb.cpp:40] Opened lmdb G:/interest_of_imags_for_recognation/VOC2007/Resize224/Img_train
I1213 12:14:07.841182  1380 data_layer.cpp:41] output data size: 1,3,224,224
I1213 12:14:07.846186  1380 net.cpp:150] Setting up data
I1213 12:14:07.846688  1380 net.cpp:157] Top shape: 1 3 224 224 (150528)
I1213 12:14:07.849189 11676 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 12:14:07.849689  1380 net.cpp:165] Memory required for data: 602112
I1213 12:14:07.852190  1380 layer_factory.hpp:77] Creating layer data_data_0_split
I1213 12:14:07.853691  1380 net.cpp:100] Creating Layer data_data_0_split
I1213 12:14:07.855195  1380 net.cpp:444] data_data_0_split <- data
I1213 12:14:07.856194  1380 net.cpp:418] data_data_0_split -> data_data_0_split_0
I1213 12:14:07.857697  1380 net.cpp:418] data_data_0_split -> data_data_0_split_1
I1213 12:14:07.858695  1380 net.cpp:150] Setting up data_data_0_split
I1213 12:14:07.859695  1380 net.cpp:157] Top shape: 1 3 224 224 (150528)
I1213 12:14:07.862702  1380 net.cpp:157] Top shape: 1 3 224 224 (150528)
I1213 12:14:07.864199  1380 net.cpp:165] Memory required for data: 1806336
I1213 12:14:07.865211  1380 layer_factory.hpp:77] Creating layer label
I1213 12:14:07.866701  1380 net.cpp:100] Creating Layer label
I1213 12:14:07.867712  1380 net.cpp:418] label -> label
I1213 12:14:07.869706  2072 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 12:14:07.870203  1380 data_transformer.cpp:25] Loading mean file from: G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_train_mean.binaryproto
I1213 12:14:07.873206  2072 db_lmdb.cpp:40] Opened lmdb G:/interest_of_imags_for_recognation/VOC2007/Resize224/Label_train
I1213 12:14:07.875710  1380 data_layer.cpp:41] output data size: 1,1,224,224
I1213 12:14:07.877709  1380 net.cpp:150] Setting up label
I1213 12:14:07.879212  1380 net.cpp:157] Top shape: 1 1 224 224 (50176)
I1213 12:14:07.881211  7064 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 12:14:07.882211  1380 net.cpp:165] Memory required for data: 2007040
I1213 12:14:07.883713  1380 layer_factory.hpp:77] Creating layer conv1_1
I1213 12:14:07.884716  1380 net.cpp:100] Creating Layer conv1_1
I1213 12:14:07.885215  1380 net.cpp:444] conv1_1 <- data_data_0_split_0
I1213 12:14:07.886214  1380 net.cpp:418] conv1_1 -> conv1_1
I1213 12:14:08.172420  1380 net.cpp:150] Setting up conv1_1
I1213 12:14:08.172919  1380 net.cpp:157] Top shape: 1 64 422 422 (11397376)
I1213 12:14:08.173419  1380 net.cpp:165] Memory required for data: 47596544
I1213 12:14:08.173919  1380 layer_factory.hpp:77] Creating layer relu1_1
I1213 12:14:08.173919  1380 net.cpp:100] Creating Layer relu1_1
I1213 12:14:08.173919  1380 net.cpp:444] relu1_1 <- conv1_1
I1213 12:14:08.174420  1380 net.cpp:405] relu1_1 -> conv1_1 (in-place)
I1213 12:14:08.174921  1380 net.cpp:150] Setting up relu1_1
I1213 12:14:08.175420  1380 net.cpp:157] Top shape: 1 64 422 422 (11397376)
I1213 12:14:08.175921  1380 net.cpp:165] Memory required for data: 93186048
I1213 12:14:08.175921  1380 layer_factory.hpp:77] Creating layer conv1_2
I1213 12:14:08.176421  1380 net.cpp:100] Creating Layer conv1_2
I1213 12:14:08.176421  1380 net.cpp:444] conv1_2 <- conv1_1
I1213 12:14:08.176421  1380 net.cpp:418] conv1_2 -> conv1_2
I1213 12:14:08.178923  1380 net.cpp:150] Setting up conv1_2
I1213 12:14:08.179424  1380 net.cpp:157] Top shape: 1 64 422 422 (11397376)
I1213 12:14:08.179424  1380 net.cpp:165] Memory required for data: 138775552
I1213 12:14:08.179424  1380 layer_factory.hpp:77] Creating layer relu1_2
I1213 12:14:08.179424  1380 net.cpp:100] Creating Layer relu1_2
I1213 12:14:08.180424  1380 net.cpp:444] relu1_2 <- conv1_2
I1213 12:14:08.180424  1380 net.cpp:405] relu1_2 -> conv1_2 (in-place)
I1213 12:14:08.180924  1380 net.cpp:150] Setting up relu1_2
I1213 12:14:08.181426  1380 net.cpp:157] Top shape: 1 64 422 422 (11397376)
I1213 12:14:08.181426  1380 net.cpp:165] Memory required for data: 184365056
I1213 12:14:08.181426  1380 layer_factory.hpp:77] Creating layer pool1
I1213 12:14:08.181426  1380 net.cpp:100] Creating Layer pool1
I1213 12:14:08.182425  1380 net.cpp:444] pool1 <- conv1_2
I1213 12:14:08.182425  1380 net.cpp:418] pool1 -> pool1
I1213 12:14:08.182425  1380 net.cpp:150] Setting up pool1
I1213 12:14:08.183426  1380 net.cpp:157] Top shape: 1 64 211 211 (2849344)
I1213 12:14:08.183426  1380 net.cpp:165] Memory required for data: 195762432
I1213 12:14:08.183426  1380 layer_factory.hpp:77] Creating layer conv2_1
I1213 12:14:08.183926  1380 net.cpp:100] Creating Layer conv2_1
I1213 12:14:08.183926  1380 net.cpp:444] conv2_1 <- pool1
I1213 12:14:08.183926  1380 net.cpp:418] conv2_1 -> conv2_1
I1213 12:14:08.189931  1380 net.cpp:150] Setting up conv2_1
I1213 12:14:08.189931  1380 net.cpp:157] Top shape: 1 128 211 211 (5698688)
I1213 12:14:08.190433  1380 net.cpp:165] Memory required for data: 218557184
I1213 12:14:08.190932  1380 layer_factory.hpp:77] Creating layer relu2_1
I1213 12:14:08.191432  1380 net.cpp:100] Creating Layer relu2_1
I1213 12:14:08.191432  1380 net.cpp:444] relu2_1 <- conv2_1
I1213 12:14:08.191432  1380 net.cpp:405] relu2_1 -> conv2_1 (in-place)
I1213 12:14:08.192433  1380 net.cpp:150] Setting up relu2_1
I1213 12:14:08.192934  1380 net.cpp:157] Top shape: 1 128 211 211 (5698688)
I1213 12:14:08.192934  1380 net.cpp:165] Memory required for data: 241351936
I1213 12:14:08.193434  1380 layer_factory.hpp:77] Creating layer conv2_2
I1213 12:14:08.193434  1380 net.cpp:100] Creating Layer conv2_2
I1213 12:14:08.194434  1380 net.cpp:444] conv2_2 <- conv2_1
I1213 12:14:08.194434  1380 net.cpp:418] conv2_2 -> conv2_2
I1213 12:14:08.197937  1380 net.cpp:150] Setting up conv2_2
I1213 12:14:08.197937  1380 net.cpp:157] Top shape: 1 128 211 211 (5698688)
I1213 12:14:08.198437  1380 net.cpp:165] Memory required for data: 264146688
I1213 12:14:08.198437  1380 layer_factory.hpp:77] Creating layer relu2_2
I1213 12:14:08.198437  1380 net.cpp:100] Creating Layer relu2_2
I1213 12:14:08.199439  1380 net.cpp:444] relu2_2 <- conv2_2
I1213 12:14:08.199439  1380 net.cpp:405] relu2_2 -> conv2_2 (in-place)
I1213 12:14:08.199939  1380 net.cpp:150] Setting up relu2_2
I1213 12:14:08.200939  1380 net.cpp:157] Top shape: 1 128 211 211 (5698688)
I1213 12:14:08.200939  1380 net.cpp:165] Memory required for data: 286941440
I1213 12:14:08.200939  1380 layer_factory.hpp:77] Creating layer pool2
I1213 12:14:08.200939  1380 net.cpp:100] Creating Layer pool2
I1213 12:14:08.202940  1380 net.cpp:444] pool2 <- conv2_2
I1213 12:14:08.203441  1380 net.cpp:418] pool2 -> pool2
I1213 12:14:08.203441  1380 net.cpp:150] Setting up pool2
I1213 12:14:08.203441  1380 net.cpp:157] Top shape: 1 128 106 106 (1438208)
I1213 12:14:08.203441  1380 net.cpp:165] Memory required for data: 292694272
I1213 12:14:08.203941  1380 layer_factory.hpp:77] Creating layer conv3_1
I1213 12:14:08.203941  1380 net.cpp:100] Creating Layer conv3_1