[Coursera Machine Learning] Multi-class Classification and Neural Networks: Week 4 Programming Assignment

This post walks through implementing regularized logistic regression in a vectorized form for efficiency, optimizing its parameters with fmincg, building a one-vs-all multi-class classifier, and finally implementing feedforward propagation for a neural network and using it to make predictions.


1.3.3 Vectorizing regularized logistic regression

Now modify your code in lrCostFunction.m to account for regularization. Once again, you should not put any loops into your code.
You can reuse last week's regularized logistic regression code here.

n = size(theta, 1);
% You should not regularize theta(1), so zero out the bias term
theta_reg = [0; theta(2:n)];
% Vectorized cost: cross-entropy term plus the L2 penalty on theta(2:n)
J = (1 / m) * (-y' * log(sigmoid(X * theta)) - (1 - y)' * log(1 - sigmoid(X * theta))) ...
    + lambda * (1 / (2 * m)) * sum(theta_reg .^ 2);
% Vectorized gradient; the bias term receives no regularization contribution
grad = (1 / m) * X' * (sigmoid(X * theta) - y) + lambda * (1 / m) * theta_reg;
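The same vectorized cost and gradient can be sketched in NumPy. This is only an illustrative translation of the Octave code above; the function name `lr_cost_function` and the argument layout are my own, not part of the assignment:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lr_cost_function(theta, X, y, lam):
    """Vectorized regularized logistic regression cost and gradient.

    theta: (n,) parameters; X: (m, n) design matrix whose first column
    is all ones; y: (m,) labels in {0, 1}; lam: regularization strength.
    """
    m = X.shape[0]
    h = sigmoid(X @ theta)
    theta_reg = np.concatenate(([0.0], theta[1:]))  # do not regularize theta[0]
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m \
        + lam / (2 * m) * np.sum(theta_reg ** 2)
    grad = X.T @ (h - y) / m + lam / m * theta_reg
    return J, grad
```

With all-zero parameters the hypothesis is 0.5 for every example, so the unregularized cost comes out to log(2), a handy sanity check when debugging.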

1.4 One-vs-all Classification

Furthermore, you will be using fmincg for this exercise (instead of fminunc). fmincg works similarly to fminunc, but is more efficient for dealing with a large number of parameters.
After you have correctly completed the code for oneVsAll.m, the script ex3.m will continue to use your oneVsAll function to train a multi-class classifier.

% Example code using fmincg: train one binary classifier per class
for c = 1:num_labels
    % Set initial theta
    initial_theta = zeros(n + 1, 1);

    % Set options for fmincg
    options = optimset('GradObj', 'on', 'MaxIter', 50);

    % Run fmincg to obtain the optimal theta for class c.
    % The labels (y == c) are 1 for examples of class c and 0 otherwise.
    theta = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), initial_theta, options);
    all_theta(c, :) = theta';
end

1.4.1 One-vs-all Prediction

You should now complete the code in predictOneVsAll.m to use the one-vs-all classifier to make predictions.

% Probability of each class for every example; pick the most likely one
A = sigmoid(X * all_theta');
[~, index] = max(A, [], 2);
p = index;
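Putting the two steps together, here is a minimal NumPy sketch of one-vs-all training and prediction. Plain gradient descent stands in for fmincg, and all names (`one_vs_all`, `predict_one_vs_all`, the step size `alpha`) are illustrative assumptions, not the assignment's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_vs_all(X, y, num_labels, lam, iters=500, alpha=0.5):
    """Train one regularized binary logistic classifier per class.

    X already includes the intercept column of ones; labels are assumed
    to run 1..num_labels. Returns all_theta with one row per class.
    """
    m, n = X.shape
    all_theta = np.zeros((num_labels, n))
    for c in range(1, num_labels + 1):
        theta = np.zeros(n)
        yc = (y == c).astype(float)  # 1 for class c, 0 otherwise
        for _ in range(iters):
            h = sigmoid(X @ theta)
            theta_reg = np.concatenate(([0.0], theta[1:]))
            grad = X.T @ (h - yc) / m + lam / m * theta_reg
            theta -= alpha * grad
        all_theta[c - 1] = theta
    return all_theta

def predict_one_vs_all(all_theta, X):
    """Pick the class whose classifier reports the highest probability."""
    return np.argmax(sigmoid(X @ all_theta.T), axis=1) + 1  # labels are 1-based
```

On a small linearly separable dataset this recovers the training labels exactly, which mirrors what the Octave code does with fmincg on the digit data.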

2.2 Feedforward Propagation and Prediction

Now you will implement feedforward propagation for the neural network. You will need to complete the code in predict.m to return the neural network’s prediction.

% The matrix X contains the examples in rows.
% When you complete the code in predict.m, you will need to add the column of 1's to the matrix X.

X = [ones(m, 1), X];                          % add bias units to the input layer
Layer_Hidden = sigmoid(X * Theta1');          % hidden layer activations
Layer_Hidden = [ones(m, 1), Layer_Hidden];    % add bias units to the hidden layer
Layer_Output = sigmoid(Layer_Hidden * Theta2');
[~, index] = max(Layer_Output, [], 2);        % most probable class per example
p = index;
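The same feedforward pass can be sketched in NumPy. The function name `predict` matches predict.m, but the tiny hand-built network in the usage below is purely an illustrative assumption, not the assignment's trained weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(Theta1, Theta2, X):
    """Feedforward pass for a 3-layer network.

    Theta1: (hidden, n+1) weights input->hidden; Theta2: (labels, hidden+1)
    weights hidden->output. A bias column of ones is prepended at each layer,
    and the predicted label is the index of the largest output unit
    (1-based, matching the assignment's label convention).
    """
    m = X.shape[0]
    a1 = np.hstack([np.ones((m, 1)), X])        # add bias column to input
    a2 = sigmoid(a1 @ Theta1.T)                 # hidden layer activations
    a2 = np.hstack([np.ones((m, 1)), a2])       # add bias column to hidden layer
    a3 = sigmoid(a2 @ Theta2.T)                 # output layer probabilities
    return np.argmax(a3, axis=1) + 1
```

With one hidden unit acting as a soft threshold on the input, such a network predicts label 1 for negative inputs and label 2 for positive ones, which makes the three-step structure (bias, sigmoid, argmax) easy to verify by hand.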

