VGG Face Descriptor is work by the Visual Geometry Group (VGG) at the University of Oxford, and the trained network definition and model weights have been released. This post fine-tunes that model in Caffe on my own face data, then uses the result for feature extraction and accuracy verification.
Dataset: CASIA WebFace
Model: http://www.robots.ox.ac.uk/~vgg/software/vgg_face/
Model preparation
1. Download the pre-trained VGG-Face model and network definition from the link above, then derive a train_val.prototxt from the deploy.prototxt file. The main changes are: replace the input with LMDB data layers, and adjust the last fully connected layer and the loss layers. Note in particular that the fc8 layer is renamed (to fc8_s here), so Caffe re-initialises it for the new classes instead of copying the pre-trained weights. The result is as follows (a matching solver sketch appears after the network definition):
name: "VGG_FACE_16_layers"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 224
    # mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "vggface/webface_train_lmdb"
    batch_size: 32
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 224
    # mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "vggface/webface_val_lmdb"
    batch_size: 32
    backend: LMDB
  }
}
......
layer {
  name: "fc8_s"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_s"
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  inner_product_param {
    num_output: 2031
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8_s"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8_s"
  bottom: "label"
  top: "loss"
}
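Before training, a solver that points at this train_val.prototxt is also needed. The sketch below writes one via Caffe's Python protobuf bindings; every hyperparameter and path in it (learning-rate schedule, iteration counts, vggface/solver.prototxt, the snapshot prefix) is an illustrative assumption rather than a value from the original experiment, and the same file can just as well be written by hand.

# A minimal solver sketch with hypothetical hyperparameters; tune to your data size and GPU memory.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

s = caffe_pb2.SolverParameter()
s.net = 'vggface/train_val.prototxt'      # the network defined above
s.test_iter.append(100)                   # 100 batches of 32 validation images per test pass
s.test_interval = 1000
s.base_lr = 0.001                         # small global LR; fc8_s already uses lr_mult 10/20
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 20000
s.momentum = 0.9
s.weight_decay = 0.0005
s.display = 100
s.max_iter = 100000
s.snapshot = 10000
s.snapshot_prefix = 'vggface/vgg_face_finetune'
s.solver_mode = caffe_pb2.SolverParameter.GPU

with open('vggface/solver.prototxt', 'w') as f:
    f.write(text_format.MessageToString(s))

Keeping the global learning rate small while fc8_s uses lr_mult 10/20 (as set above) is a common fine-tuning choice: the pre-trained layers move slowly while the freshly initialised classifier learns quickly.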
Model training
For this training run I kept only the WebFace identities with at least 50 face images each, a bit over two thousand classes in total (hence num_output: 2031 in the fc8_s layer above). After converting the data to LMDB with Caffe's tools, training can begin.
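Building the class list and the train/val image lists is easy to script. Below is a minimal sketch, assuming a hypothetical directory layout webface/<subject_id>/*.jpg and hypothetical output names webface_train.txt / webface_val.txt; the 9:1 split ratio is also just an assumption.

# Keep only identities with >= 50 images and write Caffe-style "relative/path label" list files.
import os
import random

root = 'webface'            # hypothetical CASIA-WebFace root: webface/<subject_id>/<image>.jpg
min_images = 50
train_list, val_list = [], []
label = 0

for subject in sorted(os.listdir(root)):
    subject_dir = os.path.join(root, subject)
    if not os.path.isdir(subject_dir):
        continue
    images = sorted(os.listdir(subject_dir))
    if len(images) < min_images:
        continue                               # skip identities with too few faces
    random.shuffle(images)
    split = int(len(images) * 0.9)             # rough 9:1 train/val split
    for img in images[:split]:
        train_list.append('%s/%s %d' % (subject, img, label))
    for img in images[split:]:
        val_list.append('%s/%s %d' % (subject, img, label))
    label += 1                                 # consecutive labels; the total must match num_output in fc8_s

random.shuffle(train_list)
with open('webface_train.txt', 'w') as f:
    f.write('\n'.join(train_list) + '\n')
with open('webface_val.txt', 'w') as f:
    f.write('\n'.join(val_list) + '\n')
print('identities kept: %d' % label)

The two list files can then be turned into the LMDBs referenced by the data layers with Caffe's convert_imageset tool (resizing to e.g. 256x256 so the 224 crop has some margin), and fine-tuning is launched with something like caffe train -solver vggface/solver.prototxt -weights VGG_FACE.caffemodel -gpu 0, where the weights file name depends on the release you downloaded.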
Model evaluation
Once training finishes, I evaluate the model on LFW. For the image pairs I start from the official txt file, but I found its format inconvenient, so I reformatted it slightly with a small script (the feature-extraction and verification step is sketched after the sample pairs below):
pairs.txt
Abel_Pacheco 1 4
Akhmed_Zakayev 1 3
Akhmed_Zakayev 2 3
Amber_Tamblyn 1 2
Anders_Fogh_Rasmussen 1
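With a fine-tuned snapshot in hand, verification on these pairs amounts to extracting an intermediate-layer feature (fc7 here) for each image and comparing the two features of a pair by cosine similarity. The sketch below is only an illustration and makes several assumptions: a deploy-style prototxt (the fc8_s/loss layers are not needed for feature extraction), a hypothetical snapshot file name, LFW images stored as lfw/<name>/<name>_<idx>.jpg, and a pairs file containing only same-person lines in the simplified format shown above. A proper LFW accuracy additionally needs the mismatched pairs and a similarity threshold (for example chosen by cross-validation over the folds).

# Extract L2-normalised fc7 features and score same-person pairs by cosine similarity.
import numpy as np
import caffe

caffe.set_mode_gpu()
net = caffe.Net('vggface/deploy.prototxt',                           # hypothetical deploy definition
                'vggface/vgg_face_finetune_iter_100000.caffemodel',  # hypothetical snapshot name
                caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255)           # caffe.io.load_image returns values in [0, 1]
# no mean subtraction here, matching the commented-out mean_file in train_val.prototxt

def fc7_feature(path):
    img = caffe.io.load_image(path)              # float RGB in [0, 1]; preprocess() also resizes to the input size
    net.blobs['data'].data[0] = transformer.preprocess('data', img)
    net.forward()
    feat = net.blobs['fc7'].data[0].copy()
    return feat / np.linalg.norm(feat)           # L2-normalise so the dot product is cosine similarity

def lfw_path(name, idx, root='lfw'):
    return '%s/%s/%s_%04d.jpg' % (root, name, name, int(idx))

scores = []
with open('pairs.txt') as f:
    for line in f:
        parts = line.split()
        if len(parts) != 3:                      # expect "Name idx1 idx2" (same-person pair)
            continue
        a = fc7_feature(lfw_path(parts[0], parts[1]))
        b = fc7_feature(lfw_path(parts[0], parts[2]))
        scores.append(float(np.dot(a, b)))       # higher similarity = more likely the same person

print('mean same-person similarity: %.3f' % np.mean(scores))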