[Reading Notes 1] [2017] MATLAB and Deep Learning: Example - MNIST (2)

A Detailed Look at the MnistConv Function

This article dissects the MnistConv function, which trains a convolutional neural network with the back-propagation algorithm using the mini-batch method, adjusting the weights to improve prediction accuracy. The code listing shows how to implement this process in MATLAB.

The function MnistConv, which trains the network using the back-propagation algorithm, takes the neural network's weights and training data and returns the trained weights.

[W1, W5, Wo] = MnistConv(W1, W5, Wo, X, D)

where W1, W5, and Wo are the convolution filter matrix, the pooling-hidden layer weight matrix, and the hidden-output layer weight matrix, respectively.

X and D are the input and the correct output from the training data, respectively.
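Before walking through the listing, a usage sketch may help. The following is illustrative only and is not the book's test script: the variable names Images and Labels, the random initialization scales, and the 100-node hidden-layer size are assumptions; the filter and input dimensions follow the 28x28 MNIST images and the 20x20x20 convolution output noted in the code comments.

% Illustrative usage sketch (assumed setup, not the book's test script).
% Images: 28x28xN array of training images; Labels: N-by-1 vector of
% class indices in 1..10. The 100-node hidden layer is an assumption.
W1 = 1e-2*randn([9 9 20]);    % twenty 9x9 convolution filters
W5 = 1e-2*randn(100, 2000);   % pooled features (2000) -> hidden (100)
Wo = 1e-2*randn(10, 100);     % hidden (100) -> output (10)

for epoch = 1:3               % a few passes over the training set
   [W1, W5, Wo] = MnistConv(W1, W5, Wo, Images, Labels);
end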

The following listing shows the MnistConv.m file, which implements the MnistConv function.

function [W1, W5, Wo] = MnistConv(W1, W5, Wo, X, D)

alpha = 0.01;                         % learning rate
beta  = 0.95;                         % momentum coefficient

momentum1 = zeros(size(W1));
momentum5 = zeros(size(W5));
momentumo = zeros(size(Wo));

N = length(D);

bsize = 100;                          % mini-batch size
blist = 1:bsize:(N-bsize+1);          % starting index of each mini-batch

% One epoch loop
for batch = 1:length(blist)
   dW1 = zeros(size(W1));
   dW5 = zeros(size(W5));
   dWo = zeros(size(Wo));

   % Mini-batch loop
   begin = blist(batch);
   for k = begin:begin+bsize-1
      % Forward pass = inference
      x  = X(:, :, k);                % Input,          28x28
      y1 = Conv(x, W1);               % Convolution, 20x20x20
      y2 = ReLU(y1);                  %
      y3 = Pool(y2);                  % Pooling,     10x10x20
      y4 = reshape(y3, [], 1);        %                  2000
      v5 = W5*y4;                     % ReLU,             360
      y5 = ReLU(v5);                  %
      v  = Wo*y5;                     % Softmax,           10
      y  = Softmax(v);                %

      % One-hot encoding
      d = zeros(10, 1);
      d(sub2ind(size(d), D(k), 1)) = 1;

      % Backpropagation
      e     = d - y;                  % Output layer
      delta = e;

      e5     = Wo' * delta;           % Hidden (ReLU) layer
      delta5 = (y5 > 0) .* e5;

      e4 = W5' * delta5;              % Pooling layer
      e3 = reshape(e4, size(y3));

      e2 = zeros(size(y2));
      W3 = ones(size(y2)) / (2*2);    % each unit of a 2x2 pool gets 1/4 of the error
      for c = 1:20
         e2(:, :, c) = kron(e3(:, :, c), ones([2 2])) .* W3(:, :, c);
      end

      delta2 = (y2 > 0) .* e2;        % ReLU layer

      delta1_x = zeros(size(W1));     % Convolutional layer
      for c = 1:20
         delta1_x(:, :, c) = conv2(x(:, :), rot90(delta2(:, :, c), 2), 'valid');
      end

      dW1 = dW1 + delta1_x;
      dW5 = dW5 + delta5*y4';
      dWo = dWo + delta *y5';
   end

   % Update weights
   dW1 = dW1 / bsize;                 % average the gradients over the mini-batch
   dW5 = dW5 / bsize;
   dWo = dWo / bsize;

   momentum1 = alpha*dW1 + beta*momentum1;
   W1        = W1 + momentum1;

   momentum5 = alpha*dW5 + beta*momentum5;
   W5        = W5 + momentum5;

   momentumo = alpha*dWo + beta*momentumo;
   Wo        = Wo + momentumo;
end

end                                   % end of function MnistConv

This code appears rather more complex than the previous examples.

Let's take a look at it part by part.

The function MnistConv trains the network via the mini-batch method, while the previous examples employed the SGD and batch methods.
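To make the contrast concrete, here is a schematic sketch of the three update schemes. It is not from the book; calcGradient is a hypothetical placeholder for whatever computes the weight gradient for one training point, and N is assumed divisible by bsize.

% SGD: update the weights immediately after every training point.
for k = 1:N
   W = W + alpha*calcGradient(W, X(:, :, k), D(k));
end

% Batch: average the gradient over all N points, then update once.
dW = zeros(size(W));
for k = 1:N
   dW = dW + calcGradient(W, X(:, :, k), D(k));
end
W = W + alpha*dW/N;

% Mini-batch (as in MnistConv): average over bsize points per update,
% yielding N/bsize weight updates per epoch.
for batch = 1:N/bsize
   dW = zeros(size(W));
   for k = (batch-1)*bsize+1 : batch*bsize
      dW = dW + calcGradient(W, X(:, :, k), D(k));
   end
   W = W + alpha*dW/bsize;
end

MnistConv corresponds to the third scheme with bsize = 100, plus the momentum terms shown in the listing above.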

This article is translated from MATLAB Deep Learning by Phil Kim.

