MNIST (Modified National Institute of Standards and Technology) is a large database of handwritten digits, widely used for training and testing in machine learning, curated by Prof. Yann LeCun of New York University. MNIST contains 60,000 training images and 10,000 test images; every image has been size-normalized and centered, at a fixed size of 28×28 pixels.
1. Download the MNIST dataset
cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
Running the commands above downloads the MNIST dataset into $CAFFE_ROOT/data/mnist. It consists of the following four files (the script unzips the downloaded .gz archives):
- train-images-idx3-ubyte (training-set images)
- train-labels-idx1-ubyte (training-set labels)
- t10k-images-idx3-ubyte (test-set images)
- t10k-labels-idx1-ubyte (test-set labels)
The formats of the training-set image file and label file are described below as an example.
The image file (train-images-idx3-ubyte) begins with a 16-byte big-endian header: a magic number (0x00000803 = 2051), the number of images (60000), the number of rows (28), and the number of columns (28). The pixel data follows, one unsigned byte per pixel (0 means background/white, 255 means foreground/black), stored row by row.
The label file (train-labels-idx1-ubyte) begins with an 8-byte big-endian header: a magic number (0x00000801 = 2049) and the number of labels (60000), followed by the label bytes, each a value in 0-9.
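The header layout can be checked with a few lines of Python. This is a minimal sketch using only the standard library; the function names are ours, and a synthetic header stands in for a real download so the snippet is self-contained:

```python
import struct

def parse_image_header(buf):
    """Parse the 16-byte big-endian header of an idx3-ubyte image file."""
    magic, n_images, n_rows, n_cols = struct.unpack(">IIII", buf[:16])
    assert magic == 2051, "not an idx3-ubyte image file"
    return n_images, n_rows, n_cols

def parse_label_header(buf):
    """Parse the 8-byte big-endian header of an idx1-ubyte label file."""
    magic, n_labels = struct.unpack(">II", buf[:8])
    assert magic == 2049, "not an idx1-ubyte label file"
    return n_labels

# Synthetic header with the same values as train-images-idx3-ubyte:
header = struct.pack(">IIII", 2051, 60000, 28, 28)
print(parse_image_header(header))  # (60000, 28, 28)
```

With a real file, pass the bytes of the file itself (e.g. `open("train-images-idx3-ubyte", "rb").read()`); the pixel data follows the header as n_images × n_rows × n_cols unsigned bytes.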
Supplementary note on get_mnist.sh; its contents are as follows:
#!/usr/bin/env sh
# This script downloads the mnist data and unzips it.
DIR="$( cd "$(dirname "$0")" ; pwd -P )"   # absolute path of this script
cd "$DIR"

echo "Downloading..."

for fname in train-images-idx3-ubyte train-labels-idx1-ubyte t10k-images-idx3-ubyte t10k-labels-idx1-ubyte
do
    if [ ! -e $fname ]; then
        wget --no-check-certificate http://yann.lecun.com/exdb/mnist/${fname}.gz
        gunzip ${fname}.gz
    fi
done
2. Convert the data format
The downloaded MNIST files are raw binary files and must be converted into LEVELDB or LMDB format before Caffe can read them.
In the Caffe root directory, run:
~/caffe$ ./examples/mnist/create_mnist.sh
After the command finishes, two directories are generated under examples/mnist: mnist_train_lmdb and mnist_test_lmdb, each containing two files, data.mdb and lock.mdb. As the names suggest, mnist_train_lmdb is the MNIST training set in LMDB format and mnist_test_lmdb is the test set.
Supplementary note on create_mnist.sh; its contents are as follows:
#!/usr/bin/env sh
# This script converts the mnist data into lmdb/leveldb format,
# depending on the value assigned to $BACKEND.
set -e

EXAMPLE=examples/mnist        # output path for the generated LMDB/LEVELDB
DATA=data/mnist               # path of the raw data
BUILD=build/examples/mnist    # path of the converter binary
BACKEND="lmdb"                # backend type: lmdb or leveldb

echo "Creating ${BACKEND}..."

# Remove any existing lmdb/leveldb first
rm -rf $EXAMPLE/mnist_train_${BACKEND}
rm -rf $EXAMPLE/mnist_test_${BACKEND}

# Create the training-set db
$BUILD/convert_mnist_data.bin $DATA/train-images-idx3-ubyte \
    $DATA/train-labels-idx1-ubyte $EXAMPLE/mnist_train_${BACKEND} --backend=${BACKEND}
# Create the test-set db
$BUILD/convert_mnist_data.bin $DATA/t10k-images-idx3-ubyte \
    $DATA/t10k-labels-idx1-ubyte $EXAMPLE/mnist_test_${BACKEND} --backend=${BACKEND}

echo "Done."
Note that create_mnist.sh invokes the binary build/examples/mnist/convert_mnist_data.bin, whose source file is examples/mnist/convert_mnist_data.cpp (worth reading to understand the conversion).
3. Train and test the LeNet network on MNIST
In the Caffe root directory, run:
~/caffe$ ./examples/mnist/train_lenet.sh
The training log periodically reports the training loss and, at each test interval, the test loss and test accuracy; the accuracy of the final test pass is printed at the end of the log.
This step mainly involves the following three files:
- /examples/mnist/train_lenet.sh, whose contents are as follows:
#!/usr/bin/env sh
set -e
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt $@
It invokes the previously built binary build/tools/caffe; the argument --solver=examples/mnist/lenet_solver.prototxt specifies the solver (training hyperparameter) file.
- /examples/mnist/lenet_solver.prototxt, whose contents are as follows:
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations (print a log line to the screen)
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results every 5000 iterations
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: CPU
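A note on the lr_policy setting: with "inv", Caffe computes the effective learning rate at iteration t as base_lr * (1 + gamma * t)^(-power), as documented in caffe.proto. A small Python sketch of this schedule (the function name is ours):

```python
def inv_lr(base_lr, gamma, power, it):
    # Caffe's "inv" policy: lr(t) = base_lr * (1 + gamma * t)^(-power)
    return base_lr * (1.0 + gamma * it) ** (-power)

# With the solver values above (base_lr=0.01, gamma=0.0001, power=0.75),
# the rate decays smoothly from 0.01 over the 10000 iterations:
for it in (0, 5000, 10000):
    print(it, inv_lr(0.01, 0.0001, 0.75, it))
```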
- /examples/mnist/lenet_train_test.prototxt, whose contents are as follows:
name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625  # = 1/256, scales pixels from [0, 255] into [0, 1)
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1  # learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2  # learning-rate multiplier for the bias
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
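Tracing the spatial sizes through the layers above shows how the 28×28 input shrinks to the 800 features fed into ip1. A quick sketch using the standard output-size formula (Caffe's pooling actually rounds up rather than down, but for the even sizes here the two agree):

```python
def out_size(size, kernel, stride=1, pad=0):
    # output = floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

h = 28                 # input: 1x28x28
h = out_size(h, 5)     # conv1 (5x5, stride 1): 28 -> 24, 20 channels
h = out_size(h, 2, 2)  # pool1 (2x2, stride 2): 24 -> 12
h = out_size(h, 5)     # conv2 (5x5, stride 1): 12 -> 8, 50 channels
h = out_size(h, 2, 2)  # pool2 (2x2, stride 2): 8 -> 4
print(h * h * 50)      # flattened input to ip1: 4*4*50 = 800
```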
Summary: the whole workflow is (1) download the raw MNIST files with get_mnist.sh, (2) convert them into LMDB with create_mnist.sh, and (3) train and test LeNet with train_lenet.sh, which drives build/tools/caffe using the solver file lenet_solver.prototxt and the network definition lenet_train_test.prototxt.
Reference: https://blog.youkuaiyun.com/fly_egg/article/details/53309256