Exercise: Implement deep networks for digit classification (code examples)

Exercise reference: Implement deep networks for digit classification (UFLDL tutorial)


This exercise builds a four-layer deep neural network. The first layer is the data input layer; the second and third layers are sparse autoencoder layers, namely the hidden layers of two separately trained sparse autoencoders; the fourth layer is a softmax classifier for the handwritten digits 0 through 9. After the softmax classifier is trained, the whole network is fine-tuned: layers 2 through 4 are treated as a single model and optimized together with one call to minFunc, which returns the updated parameters for all three layers. In my tests, fine-tuning substantially improved accuracy on the test data.
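For context, the hyperparameters below match the STEP 0 defaults of the exercise template (stackedAEExercise.m) as I recall them; treat the exact values as an assumption and verify against your own copy:

inputSize = 28 * 28;    % MNIST digits are 28x28 images
numClasses = 10;        % digits 0 through 9
hiddenSizeL1 = 200;     % hidden size of the first autoencoder
hiddenSizeL2 = 200;     % hidden size of the second autoencoder
sparsityParam = 0.1;    % desired average activation of the hidden units
lambda = 3e-3;          % weight decay parameter
beta = 3;               % weight of the sparsity penalty term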


Fine-tuning uses the backpropagation algorithm to compute the gradients for these three layers:
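In the notation of the code below (stack{d}.i is the input to layer d, stack{d}.o its sigmoid output), the backpropagation equations are the standard stacked-autoencoder ones; written out in LaTeX:

\[
\delta^{(L)} = -\bigl(\theta_{\mathrm{softmax}}^{\top}(G - P)\bigr) \odot a^{(L)} \odot \bigl(1 - a^{(L)}\bigr)
\]
\[
\delta^{(d)} = \bigl({W^{(d+1)}}^{\top}\delta^{(d+1)}\bigr) \odot a^{(d)} \odot \bigl(1 - a^{(d)}\bigr)
\]
\[
\nabla_{W^{(d)}} J = \frac{1}{M}\,\delta^{(d)}\,{a^{(d-1)}}^{\top},
\qquad
\nabla_{b^{(d)}} J = \frac{1}{M}\sum_{i=1}^{M}\delta^{(d)}_{\cdot,i}
\]

Here G is the one-hot label matrix (groundTruth), P the matrix of softmax probabilities, a^{(d)} the layer-d activations, M the number of training examples, and \odot the element-wise product.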

STEP 2: Train the first sparse autoencoder

addpath minFunc/
options.Method = 'lbfgs'; 
options.maxIter = 400;
options.display = 'on';

[sae1OptTheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                   inputSize, hiddenSizeL1, ...
                                   lambda, sparsityParam, ...
                                   beta, trainData), ...
                              sae1Theta, options);
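The second autoencoder trains on the first encoder's hidden activations rather than on the raw pixels. The template computes these with feedForwardAutoencoder from the earlier self-taught learning exercise (a sketch; confirm the argument order against your feedForwardAutoencoder.m):

[sae1Features] = feedForwardAutoencoder(sae1OptTheta, hiddenSizeL1, ...
                                        inputSize, trainData);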

STEP 2: Train the second sparse autoencoder

[sae2OptTheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                   hiddenSizeL1, hiddenSizeL2, ...
                                   lambda, sparsityParam, ...
                                   beta, sae1Features), ...
                              sae2Theta, options);
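Likewise, the second encoder's activations become the softmax classifier's input for STEP 3 (same hedged sketch as above):

[sae2Features] = feedForwardAutoencoder(sae2OptTheta, hiddenSizeL2, ...
                                        hiddenSizeL1, sae1Features);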

STEP 3: Train the softmax classifier

options.maxIter = 100;
softmaxModel = softmaxTrain(hiddenSizeL2, numClasses, lambda, ...
                            sae2Features, trainLabels, options);
saeSoftmaxOptTheta = softmaxModel.optTheta(:);
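Before fine-tuning, the two encoders' weights are packed into a "stack" and concatenated with the softmax parameters into one vector. The sketch below follows the template's given code; the reshape indices assume the usual sparse-autoencoder ordering of W1, W2, b1, b2 inside the parameter vector:

% Build the layer stack from the two trained encoders' W1/b1 parameters
stack = cell(2,1);
stack{1}.w = reshape(sae1OptTheta(1:hiddenSizeL1*inputSize), ...
                     hiddenSizeL1, inputSize);
stack{1}.b = sae1OptTheta(2*hiddenSizeL1*inputSize+1 : ...
                          2*hiddenSizeL1*inputSize+hiddenSizeL1);
stack{2}.w = reshape(sae2OptTheta(1:hiddenSizeL2*hiddenSizeL1), ...
                     hiddenSizeL2, hiddenSizeL1);
stack{2}.b = sae2OptTheta(2*hiddenSizeL2*hiddenSizeL1+1 : ...
                          2*hiddenSizeL2*hiddenSizeL1+hiddenSizeL2);

% Flatten the stack and prepend the softmax parameters
[stackparams, netconfig] = stack2params(stack);
stackedAETheta = [ saeSoftmaxOptTheta ; stackparams ];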

STEP 4: Implement fine-tuning

stackedAECost.m
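The snippet below is the part you implement inside stackedAECost.m. It assumes the template's given preamble has already run; a minimal sketch of that preamble (variable names follow the exercise template, so double-check the indexing against your copy):

% Unroll the flat parameter vector into the softmax weights and the layer stack
softmaxTheta = reshape(theta(1:hiddenSize*numClasses), numClasses, hiddenSize);
stack = params2stack(theta(hiddenSize*numClasses+1:end), netconfig);

M = size(data, 2);                           % number of training examples
groundTruth = full(sparse(labels, 1:M, 1));  % numClasses x M one-hot label matrix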

% Forward pass: feed the data through each sigmoid autoencoder layer,
% caching each layer's input (.i) and output (.o) for backpropagation
stack{1}.i = data;
for d = 1:numel(stack)-1
    stack{d}.o = sigmoid(stack{d}.w * stack{d}.i + repmat(stack{d}.b, 1, M));
    stack{d+1}.i = stack{d}.o;
end
stack{end}.o = sigmoid(stack{end}.w * stack{end}.i + repmat(stack{end}.b, 1, M));

% Softmax cost and gradient on top of the last hidden layer's activations
Mat = softmaxTheta * stack{end}.o;
Mat = exp(bsxfun(@minus, Mat, max(Mat, [], 1)));   % shift by the column max for numerical stability
P = bsxfun(@rdivide, Mat, sum(Mat));               % column-wise class probabilities
WD = lambda / 2 * sum(sum(softmaxTheta.^2));       % weight decay term
cost = -sum(sum(groundTruth .* log(P))) / M + WD;
softmaxThetaGrad = -(groundTruth - P) * stack{end}.o' ./ M + lambda .* softmaxTheta;

% Backpropagate: the top hidden layer's delta comes from the softmax residual,
% scaled by the sigmoid derivative a .* (1 - a)
stack{end}.delta = -softmaxTheta' * (groundTruth - P) .* stack{end}.o .* (1 - stack{end}.o);
for d = numel(stack)-1:-1:1
    stack{d}.delta = stack{d+1}.w' * stack{d+1}.delta .* stack{d}.o .* (1 - stack{d}.o);
end

% Each layer's gradients, averaged over the M training examples
for d = 1:numel(stack)
    stackgrad{d}.w = stack{d}.delta * stack{d}.i' / M;
    stackgrad{d}.b = mean(stack{d}.delta, 2);
end
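The template then repacks everything into a single gradient vector to return to minFunc:

grad = [ softmaxThetaGrad(:) ; stack2params(stackgrad) ];

Before launching the full fine-tuning run, it is worth verifying this implementation numerically on a small model with checkNumericalGradient from the earlier exercises.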

STEP 5: Finetune softmax model

options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';

[stackedAEOptTheta, cost] = minFunc( @(p) stackedAECost(p, inputSize, hiddenSizeL2, ...
                                        numClasses, netconfig, ...
                                        lambda, trainData, trainLabels), ...
                              stackedAETheta, options);
STEP 6: Test

stackedAEPredict.m

% Forward pass through the stacked autoencoder layers
M = size(data, 2);
stack{1}.i = data;
for d = 1:numel(stack)-1
    stack{d}.o = sigmoid(stack{d}.w * stack{d}.i + repmat(stack{d}.b, 1, M));
    stack{d+1}.i = stack{d}.o;
end
stack{end}.o = sigmoid(stack{end}.w * stack{end}.i + repmat(stack{end}.b, 1, M));

% Predicted class = row of the softmax score matrix with the largest value
Mat = softmaxTheta * stack{end}.o;
[~, pred] = max(Mat);
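To quantify the accuracy improvement mentioned at the top, run prediction with both parameter vectors, as in the template's STEP 6 (the exercise write-up quotes roughly 87% before fine-tuning and around 97-98% after; exact numbers vary with initialization):

[pred] = stackedAEPredict(stackedAETheta, inputSize, hiddenSizeL2, ...
                          numClasses, netconfig, testData);
acc = mean(testLabels(:) == pred(:));
fprintf('Before Finetuning Test Accuracy: %0.3f%%\n', acc * 100);

[pred] = stackedAEPredict(stackedAEOptTheta, inputSize, hiddenSizeL2, ...
                          numClasses, netconfig, testData);
acc = mean(testLabels(:) == pred(:));
fprintf('After Finetuning Test Accuracy: %0.3f%%\n', acc * 100);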

