Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial
Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:_Implement_deep_networks_for_digit_classification
Code
Copy sparseAutoencoderCost.m, softmaxCost.m, feedForwardAutoencoder.m, initializeParameters.m, loadMNISTImages.m, and loadMNISTLabels.m, together with the minFunc folder, from the earlier exercises into the working directory for this exercise.
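If you prefer to do the copying from inside MATLAB, here is a minimal sketch; the source path '../sparseae_exercise' is hypothetical, so point prevDir at wherever your earlier exercise files actually live.

% Hypothetical location of the earlier exercise files; adjust to your setup
prevDir = '../sparseae_exercise';
files = {'sparseAutoencoderCost.m', 'softmaxCost.m', ...
         'feedForwardAutoencoder.m', 'initializeParameters.m', ...
         'loadMNISTImages.m', 'loadMNISTLabels.m'};
for i = 1:numel(files)
    copyfile(fullfile(prevDir, files{i}), pwd);
end
% copyfile also handles folders, so bring over minFunc in one call
copyfile(fullfile(prevDir, 'minFunc'), fullfile(pwd, 'minFunc'));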
Step 0: Initialize constants and parameters (code provided)
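For reference, the provided Step 0 sets the network sizes and hyperparameters along these lines (values as in the starter stackedAEExercise.m; double-check your copy):

inputSize = 28 * 28;   % MNIST images are 28x28 pixels
numClasses = 10;       % digits 0-9, with 0 remapped to class 10
hiddenSizeL1 = 200;    % layer 1 hidden size
hiddenSizeL2 = 200;    % layer 2 hidden size
sparsityParam = 0.1;   % desired average activation of the hidden units
lambda = 3e-3;         % weight decay parameter
beta = 3;              % weight of the sparsity penalty term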
Step 00: Load data from the MNIST database (code provided)
Adjust the paths to match where your MNIST training set is stored. Mine are set to 'F:/DeepLearning/UFLDL/mnist/train-images.idx3-ubyte' and 'F:/DeepLearning/UFLDL/mnist/train-labels.idx1-ubyte'.
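With those paths filled in, the Step 00 loading code looks like this; the label remapping line comes with the starter code, since the labels must start from 1.

trainData = loadMNISTImages('F:/DeepLearning/UFLDL/mnist/train-images.idx3-ubyte');
trainLabels = loadMNISTLabels('F:/DeepLearning/UFLDL/mnist/train-labels.idx1-ubyte');
trainLabels(trainLabels == 0) = 10; % remap 0 to 10 because labels must start from 1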
Step 1: Train the data on the first stacked autoencoder (stackedAEExercise.m)
%% ---------------------- YOUR CODE HERE ---------------------------------
%  Instructions: Train the first layer sparse autoencoder. This layer has
%                a hidden size of "hiddenSizeL1".
%                You should store the optimal parameters in sae1OptTheta.

%  Use minFunc to minimize the cost function
addpath minFunc/
options.Method = 'lbfgs'; % use L-BFGS to optimize our cost function
options.maxIter = 400;    % maximum number of iterations of L-BFGS to run
options.display = 'on';

%  Train the first sparse autoencoder on the raw MNIST pixels, starting
%  from the random initialization in sae1Theta
[sae1OptTheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                     inputSize, hiddenSizeL1, ...
                                     lambda, sparsityParam, ...
                                     beta, trainData), ...
                                sae1Theta, options);
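Once training finishes, it is worth sanity-checking the learned features. A minimal sketch, assuming you also copied display_network.m over from the earlier sparse autoencoder exercise:

% Pull the first-layer weights W1 out of the flat parameter vector
% (UFLDL packs theta as [W1(:); W2(:); b1(:); b2(:)]) and visualize them;
% pen-stroke-like feature patches are a good sign
W1 = reshape(sae1OptTheta(1:hiddenSizeL1 * inputSize), hiddenSizeL1, inputSize);
display_network(W1');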