1.3.3 Vectorizing regularized logistic regression
Now modify your code in lrCostFunction.m to account for regularization. Once again, you should not put any loops into your code.
For this part, you can simply reuse the regularized logistic regression code from last week's exercise.
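The quantities being vectorized are the regularized cost and its gradient from the lectures (the same formulas as last week's exercise):

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - (1-y^{(i)})\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

$$\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \ge 1),$$

with the bias term $\theta_0$ left unregularized.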
n = size(theta, 1);
% You should not regularize theta(1), so zero it out in the penalty term
theta_reg = [0; theta(2:n)];
% Vectorized regularized cost
J = (1 / m) * (-y' * log(sigmoid(X * theta)) - (1 - y)' * log(1 - sigmoid(X * theta))) + (lambda / (2 * m)) * sum(theta_reg .^ 2);
% Vectorized gradient; theta_reg keeps theta(1) out of the regularization
grad = (1 / m) * X' * (sigmoid(X * theta) - y) + (lambda / m) * theta_reg;
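As a quick sanity check, you can call the finished function on a small hand-made test case and confirm that J comes back as a scalar and grad has the same size as theta. The values below are purely illustrative, not from the assignment script:

theta_t = [-2; -1; 1; 2];
X_t = [ones(5, 1) reshape(1:15, 5, 3) / 10];  % 5 examples, bias column plus 3 features
y_t = ([1; 0; 1; 0; 1] >= 0.5);               % binary labels as a logical vector
lambda_t = 3;
[J_t, grad_t] = lrCostFunction(theta_t, X_t, y_t, lambda_t);
fprintf('Cost: %f\n', J_t);
fprintf('Gradient:\n'); fprintf(' %f\n', grad_t);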
1.4 One-vs-all Classification
Furthermore, you will be using fmincg for this exercise (instead of fminunc). fmincg works similarly to fminunc, but is more efficient for dealing with a large number of parameters.
After you have correctly completed the code for oneVsAll.m, the script ex3.m will continue to use your oneVsAll function to train a multi-class classifier.
% Example code for fmincg:
for c = 1:num_labels
    % Set initial theta
    initial_theta = zeros(n + 1, 1);

    % Set options for fmincg
    options = optimset('GradObj', 'on', 'MaxIter', 50);

    % Run fmincg to obtain the optimal theta for class c.
    % (y == c) turns the multi-class labels into a binary vector for this
    % class, and fmincg returns the learned theta (and the cost).
    theta = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), initial_theta, options);

    % Store the learned parameters as row c of all_theta
    all_theta(c, :) = theta';
end
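Once oneVsAll.m is finished, ex3.m trains all the classifiers in one call; a sketch of that invocation (lambda = 0.1 is the value the exercise uses, but verify against your copy of the script):

lambda = 0.1;
[all_theta] = oneVsAll(X, y, num_labels, lambda);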
1.4.1 One-vs-all Prediction
You should now complete the code in predictOneVsAll.m to use the one-vs-all classifier to make predictions.
% Row i of A holds the K classifier outputs for example i (this assumes
% the column of 1's has already been added to X earlier in
% predictOneVsAll.m, as the provided template does)
A = sigmoid(X * all_theta');
% For each example, predict the class whose classifier output is largest
[MAX_VALUE, index] = max(A, [], 2);
p = index;
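ex3.m then measures training accuracy by comparing the predictions against the labels; a minimal version of that check (the exercise text reports roughly 94.9% for a correct implementation):

pred = predictOneVsAll(all_theta, X);
fprintf('Training Set Accuracy: %f\n', mean(double(pred == y)) * 100);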
2.2 Feedforward Propagation and Prediction
Now you will implement feedforward propagation for the neural network. You will need to complete the code in predict.m to return the neural network’s prediction.
% The matrix X contains the examples in rows.
% When you complete the code in predict.m, you will need to add the column
% of 1's (the bias units) to the matrix before propagating forward.
X = [ones(m, 1), X];
% Hidden layer activations, one row per example
Layer_Hidden = sigmoid(X * Theta1');
% Add the bias unit to the hidden layer as well
Layer_Hidden = [ones(m, 1), Layer_Hidden];
% Output layer activations, one column per class
Layer_Output = sigmoid(Layer_Hidden * Theta2');
% The prediction is the class with the largest output activation
[MAX_VALUE, index] = max(Layer_Output, [], 2);
p = index;
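With the provided weights, Theta1 is 25 x 401 and Theta2 is 10 x 26, so the shapes flow as X (m x 401) -> hidden layer (m x 25, then m x 26 once the bias column is added) -> output (m x 10). ex3_nn.m then checks the accuracy the same way as before (the exercise text reports roughly 97.5%):

pred = predict(Theta1, Theta2, X);
fprintf('Training Set Accuracy: %f\n', mean(double(pred == y)) * 100);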