The biggest difficulty in writing the protocol programmatically is handling layers that share the same name:
from caffe import layers as L, params as P
import caffe

ns = caffe.NetSpec()
# TRAIN phase: NetSpec names the three tops after the assigned attributes.
ns.Features, ns.Headposes, ns.Smiles = L.Data(name="data", ntop=3,
                                              include={'phase': caffe.TRAIN})
# TEST phase: reuse the layer name "data" by suppressing automatic tops
# (ntop=0) and listing the top blob names explicitly.
ns.test_data = L.Data(name="data", ntop=0, top=['Features', 'Headposes', 'Smiles'],
                      include={'phase': caffe.TEST})
print('{}'.format(ns.to_proto()))
Output:
layer {
  name: "data"
  type: "Data"
  top: "Features"
  top: "Headposes"
  top: "Smiles"
  include {
    phase: TRAIN
  }
}
layer {
  name: "data"
  type: "Data"
  top: "Features"
  top: "Headposes"
  top: "Smiles"
  include {
    phase: TEST
  }
}
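In practice the generated protocol is usually written to a file rather than only printed; a minimal sketch, where the file name train_val.prototxt is only an illustration:
# str() of the NetParameter message gives the text-format prototxt.
# 'train_val.prototxt' is an illustrative file name.
with open('train_val.prototxt', 'w') as f:
    f.write(str(ns.to_proto()))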
Writing layers that are glued together (in-place layers whose top is the same blob as their bottom):
# relu1 declares no automatic top (ntop=0) and writes back to 'conv1' explicitly.
n.relu1 = L.ReLU(n.conv1, ntop=0, top='conv1')
n.norm1 = L.LRN(n.conv1, local_size=5, alpha=0.0001, beta=0.75)
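As an aside, NetSpec also provides an in_place=True flag, which yields the same top == bottom layout without the ntop=0/top workaround; a minimal sketch of the equivalent writing:
# in_place=True makes the layer's top identical to its bottom ('conv1').
n.relu1 = L.ReLU(n.conv1, in_place=True)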
Result:
code:
n.conv1 = L.Convolution(n.Features, kernel_size=11, stride=4, num_output=96,
                        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
                        weight_filler=dict(type='gaussian', std=0.01),
                        bias_filler=dict(type='constant', value=0))
output:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "Features"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
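The params as P import from the first snippet comes into play for enum-valued fields such as the pooling method; a minimal sketch (pool1 is an illustrative layer, not part of the network above):
# Enum constants are taken from the P namespace (here P.Pooling.MAX).
n.pool1 = L.Pooling(n.conv1, kernel_size=3, stride=2, pool=P.Pooling.MAX)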
Defining a Reshape layer in Python:
n.resh = L.Reshape(n.fc3, reshape_param={'shape':{'dim': [1, 1, 64, 64]}})
Note that the shape vector [1, 1, 64, 64] is passed as a list, not as a string as in the prototxt syntax.
In fact, any entry defined as repeated in caffe.proto should be treated as a list/vector when interfacing through NetSpec.
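The same rule applies, for example, to slice_point of a Slice layer, which is also declared repeated in caffe.proto; a minimal sketch (the output blob names and slice points are only an illustration):
# slice_point is a repeated field, so it is passed as a Python list.
n.s0, n.s1, n.s2 = L.Slice(n.fc3, ntop=3, slice_param=dict(axis=1, slice_point=[20, 44]))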
Reference:
https://stackoverflow.com/questions/38480599/how-to-reshape-layer-in-caffe-with-python