Caffe model visualization tool (Netscope): http://ethereon.github.io/netscope/#/editor
Caffe models flow bottom-up: bottom names a layer's input blob (the previous layer's output), and top names the layer's own output blob.
Convolution layer
lr_mult: the learning-rate multiplier. A parameter's final learning rate is this value times base_lr from the solver.prototxt configuration file. When a layer has two lr_mult entries, the first applies to the weights and the second to the bias; by convention the bias learning rate is set to twice the weight learning rate.
lr_mult indicates what to multiply the learning rate by for a particular layer. This is useful if you want to update some layers with a smaller learning rate (e.g. when fine-tuning some layers while training others from scratch), or if you do not want to update the weights of a layer at all (lr_mult: 0). decay_mult works the same way, but for weight decay.
layer {
  name: "conv_att"
  type: "Convolution"
  bottom: "input_feature"
  top: "conv_att"
  param {
    lr_mult: 1      # lr_mult for the kernel weights
    decay_mult: 1
  }
  param {
    lr_mult: 2      # lr_mult for the bias
    decay_mult: 0   # no weight decay on the bias
  }
  convolution_param {
    num_output: 8    # number of output feature maps
    kernel_size: 1   # kernel size (here a 1x1 convolution)
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
    }
  }
}
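To make the multiplier rule concrete, here is a minimal Python sketch (not Caffe's actual code) of how per-parameter lr_mult and decay_mult combine with the solver-level hyperparameters; the base_lr and weight_decay values are hypothetical examples:

```python
# Hypothetical solver.prototxt values (assumptions, not from the source)
base_lr = 0.01
weight_decay = 0.0005

# The two param blocks of the conv_att layer above: (lr_mult, decay_mult)
params = {"weights": (1, 1), "bias": (2, 0)}

for name, (lr_mult, decay_mult) in params.items():
    effective_lr = base_lr * lr_mult          # per-parameter learning rate
    effective_decay = weight_decay * decay_mult  # per-parameter weight decay
    print(f"{name}: lr={effective_lr}, decay={effective_decay}")
```

With these numbers, the weights train at lr 0.01 with decay 0.0005, while the bias trains at lr 0.02 with no decay, matching the convention described above.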