Coursera Machine Learning course notes, Week 2: Linear Regression

ex1

GitHub: https://github.com/DLW3D/coursera-machine-learning-ex
Exercise files download: https://s3.amazonaws.com/spark-public/ml/exercises/on-demand/machine-learning-ex1.zip

Linear Regression

Feature Normalization

featureNormalize.m

function [X_norm, mu, sigma] = featureNormalize(X)
	%FEATURENORMALIZE Normalizes the features in X so that each column has
	%   zero mean and unit standard deviation; mu and sigma are returned so
	%   the same scaling can be applied to new examples later.
	mu = mean(X);                 % column-wise mean of each feature
	sigma = std(X);               % column-wise standard deviation
	X_norm = (X - mu) ./ sigma;   % implicit broadcasting (Octave / MATLAB R2016b+)
end
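
A minimal usage sketch (ex1data2.txt is the dataset from the exercise; the checks simply confirm each column now has zero mean and unit standard deviation):

data = load('ex1data2.txt');
[X_norm, mu, sigma] = featureNormalize(data(:, 1:2));
disp(mean(X_norm));   % approximately [0 0]
disp(std(X_norm));    % approximately [1 1]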

Cost Function

Cost function: J(θ) = (1/(2m)) * Σ_{i=1..m} (θᵀx^(i) - y^(i))²
computeCostMulti.m

function J = computeCostMulti(X, y, theta)
	% Number of training examples
	m = length(y);
	% Vectorized cost of a particular choice of theta:
	% J = (1/(2m)) * sum((X*theta - y).^2)
	J = (X*theta - y)' * (X*theta - y) / (2*m);
end
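
A quick sanity check, assuming the variables set up by the script in the "Running Linear Regression" section below (features normalized, intercept column added): the cost at theta = 0 should be large and should fall steadily as gradient descent runs.

% Cost at the all-zeros starting point (illustrative check)
J0 = computeCostMulti(X, y, zeros(3, 1));
fprintf('Cost at theta = zeros(3,1): %f\n', J0);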

Gradient Descent

Update rule (repeated until convergence, all θ_j updated simultaneously): θ_j := θ_j - (α/m) * Σ_{i=1..m} (θᵀx^(i) - y^(i)) * x_j^(i)
gradientDescentMulti.m

function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
	%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
	%   theta = GRADIENTDESCENTMULTI(x, y, theta, alpha, num_iters) updates theta by
	%   taking num_iters gradient steps with learning rate alpha
	
	m = length(y); 
	J_history = zeros(num_iters, 1);
	
	for iter = 1:num_iters
	    % Perform a single gradient step on the parameter vector theta.
	    n = size(X,2);           % number of parameters (including intercept)
	    h = X * theta;           % hypothesis values for all m examples
	    dtheta = zeros(n,1);     % step to take for each parameter
	    for j = 1:n
	        gradient = 0;
	        for i = 1:m
	            % Accumulate the partial derivative of J with respect to theta(j)
	            gradient = gradient + (h(i)-y(i))*X(i,j);
	        end
	        dtheta(j) = gradient * alpha / m;
	    end
	    theta = theta - dtheta;  % simultaneous update of all parameters
	    
	    % Save the cost J in every iteration    
	    J_history(iter) = computeCostMulti(X, y, theta);
	end
end
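
The two inner loops above can be collapsed into a single vectorized update; it is equivalent and is the form usually preferred in practice. A sketch of the body of the iteration loop:

% Vectorized equivalent of one gradient step
h = X * theta;                               % hypothesis for all m examples
theta = theta - (alpha / m) * X' * (h - y);  % simultaneous update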

Normal Equation

normalEqn.m

function [theta] = normalEqn(X, y)
	%NORMALEQN Computes the closed-form solution to linear regression
	%   NORMALEQN(X,y) computes the closed-form solution to linear
	%   regression using the normal equation theta = (X'X)^(-1) X'y.
	% pinv is used instead of inv so the result is still defined when
	% X'*X is singular (e.g. redundant or linearly dependent features).
	theta = pinv(X'*X)*X'*y;
end
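
The normal equation needs no feature scaling, no learning rate, and no iteration. A minimal sketch of running it directly on the raw data (variable names here are illustrative; ex1data2.txt is the dataset used below):

data = load('ex1data2.txt');
m = size(data, 1);
X = [ones(m, 1), data(:, 1:2)];   % raw features plus intercept column
y = data(:, 3);
theta_ne = normalEqn(X, y);
fprintf('Theta computed from the normal equation: \n');
fprintf(' %f \n', theta_ne);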

Running Linear Regression

data = load('ex1data2.txt');
X = data(:, 1:2);
y = data(:, 3);
m = length(y);
% Scale features and set them to zero mean
fprintf('Normalizing Features ...\n');
[X, mu, sigma] = featureNormalize(X);
% Add intercept term to X
X = [ones(m, 1) X];

fprintf('Running gradient descent ...\n');
% Choose some alpha value
alpha = 0.01;
num_iters = 400;
% Init Theta and Run Gradient Descent 
theta = zeros(3, 1);
[theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters);


% Plot the convergence graph
figure;
plot(1:numel(J_history), J_history, '-b', 'LineWidth', 2);
xlabel('Number of iterations');
ylabel('Cost J');

% Display gradient descent's result
fprintf('Theta computed from gradient descent: \n');
fprintf(' %f \n', theta);
fprintf('\n');

(Figure: convergence of the cost J over gradient descent iterations)
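
With the learned theta and the stored mu and sigma, the model can make predictions. The 1650 sq-ft, 3-bedroom house below is the example used in the exercise prompt:

% Predict the price of a 1650 sq-ft, 3-bedroom house
x = ([1650 3] - mu) ./ sigma;    % apply the training-set normalization
price = [1, x] * theta;          % add the intercept term and predict
fprintf('Predicted price of a 1650 sq-ft, 3 br house: $%f\n', price);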
