Batch Gradient Descent


We use linear regression as an example to explain this optimization algorithm.

1. Formula

1.1. Cost Function

We use the residual sum of squares to evaluate linear regression.

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]^2$$
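
As a concrete illustration, a minimal Octave sketch of this cost function could look as follows; the helper name computeCost and the variables X (with a leading column of ones), Y, and theta follow the conventions of the code later in this post, but are otherwise assumptions:

% Octave -- minimal sketch of the cost function J(theta).
% Assumes X is an m-by-(n+1) matrix whose first column is all ones,
% Y is an m-by-1 vector of targets, and theta is an (n+1)-by-1 vector.
function J = computeCost(X, Y, theta)
    m = length(Y);                        % number of training examples
    residual = X * theta - Y;             % h_theta(x^(i)) - y^(i) for every example
    J = sum(residual .^ 2) / (2 * m);     % residual sum of squares, scaled by 1/(2m)
endfunction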

1.2. Visualize Cost Function

E.g. 1:

One parameter only, $\theta_1$: $h_\theta(x) = \theta_1 x_1$

Figure 1. Learning Curve 1 [1]


E.g. 2:

Two parameters, $\theta_0$ and $\theta_1$: $h_\theta(x) = \theta_0 + \theta_1 x_1$

Figure 2. Learning Curve 2 [2]


Switching to a contour plot:

Figure 3. Learning Curve 2 (contour) [2]
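
To reproduce a surface or contour plot like the ones above, one can evaluate the cost over a grid of $(\theta_0, \theta_1)$ values. The sketch below is one way this could be done in Octave; it reuses the hypothetical computeCost helper from the previous section, and the grid ranges are arbitrary assumptions:

% Octave -- sketch: visualize J over a grid of (theta0, theta1) values.
theta0_vals = linspace(-10, 10, 100);
theta1_vals = linspace(-1, 4, 100);
J_vals = zeros(length(theta0_vals), length(theta1_vals));
for i = 1:length(theta0_vals)
    for j = 1:length(theta1_vals)
        t = [theta0_vals(i); theta1_vals(j)];
        J_vals(i, j) = computeCost(X, Y, t);    % cost at this parameter pair
    endfor
endfor
% contour() expects rows to follow the y-axis variable, hence the transpose.
contour(theta0_vals, theta1_vals, J_vals', logspace(-2, 3, 20));
xlabel('\theta_0'); ylabel('\theta_1');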


1.3. Gradient Descent Formula

For each parameter $\theta_j$:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_j^{(i)}$$

E.g., two parameters $\theta_0, \theta_1$: $h_\theta(x) = \theta_0 + \theta_1 x_1$

For j = 0:

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_0^{(i)} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] \quad \text{(since } x_0^{(i)} = 1\text{)}$$

For j = 1:

$$\frac{\partial J(\theta)}{\partial \theta_1} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_1^{(i)}$$

% Octave
%% =================== Gradient Descent ===================
% Assumes the training data has been loaded beforehand, e.g.:
%   data: m-by-2 matrix, first column x1, second column y
%   len = size(data, 1);   % number of training examples m
%   Y   = data(:, 2);

% Add a column (x0) of ones to X
X = [ones(len, 1), data(:,1)];
theta = zeros(2, 1);
alpha = 0.01;
ITERATION = 1500;
jTheta = zeros(ITERATION, 1);

for iter = 1:ITERATION
    % Perform a single gradient descent step on the parameter vector.
    % Note: tempTheta stores the current theta so that both parameters
    % are updated simultaneously from the same values.
    tempTheta = theta;
    theta(1) = theta(1) - (alpha / len) * sum(X * tempTheta - Y);              % x0 is all ones, so it is omitted
    theta(2) = theta(2) - (alpha / len) * sum((X * tempTheta - Y) .* X(:,2));

    %% =================== Compute Cost ===================
    jTheta(iter) = sum((X * theta - Y) .^ 2) / (2 * len);
endfor
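
As a side note on the design, the two per-parameter updates inside the loop above can be collapsed into a single vectorized expression that works for any number of parameters; a minimal sketch under the same assumptions about X, Y, alpha, and len:

% Octave -- vectorized form of a single gradient descent step (sketch).
% X' * (X * theta - Y) computes the sum over all examples for every theta_j
% at once, so all parameters are updated simultaneously without a temporary copy.
theta = theta - (alpha / len) * (X' * (X * theta - Y));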

2. Algorithm

For each parameter $\theta_j$, repeat until convergence (updating all parameters simultaneously, as in the Octave code above):

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1, \ldots, \theta_n)$$

E.g., two parameters $\theta_0, \theta_1$: $h_\theta(x) = \theta_0 + \theta_1 x_1$

For j = 0:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]$$

For j = 1:

$$\theta_1 := \theta_1 - \alpha \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_1^{(i)}$$

Iterate multiple times (how many depends on the data content, the data size, and the step size). Finally, we can see the result below.

Figure 4. Visualize Convergence
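
Because the code above records the cost of every iteration in jTheta, plotting it against the iteration index is a quick way to check convergence; a minimal sketch:

% Octave -- plot the recorded cost to confirm it decreases and flattens out.
plot(1:ITERATION, jTheta);
xlabel('Iteration');
ylabel('J(\theta)');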

3. Analysis

| Pros | Cons |
| --- | --- |
| Controllable by manipulating the step size and data size | Computing effort is large |
| Easy to program | |

4. How to Choose Step Size?

Choosing an appropriate step size is important. If the step size is too small, it does not hurt the result, but it takes many more iterations to converge. If the step size is too large, it may cause the algorithm to diverge (not converge).

The graph below shows that the cost does not converge because the step size is too large.

Figure 5. Large Step Size

The best approach, as far as I know, is to decrease the step size as the number of iterations grows.

E.g.,

$$\alpha_{t+1} = \frac{\alpha_t}{t}$$

or

$$\alpha_{t+1} = \frac{\alpha_t}{\sqrt{t}}$$
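
A minimal sketch of wiring such a schedule into the loop above; the $1/\sqrt{t}$ schedule and the starting value alpha0 are assumptions, and other decay rules plug in the same way:

% Octave -- sketch: shrink the step size as the iteration count grows.
alpha0 = 0.01;                                   % assumed initial step size
for iter = 1:ITERATION
    alpha = alpha0 / sqrt(iter);                 % assumed schedule: alpha_t = alpha_0 / sqrt(t)
    theta = theta - (alpha / len) * (X' * (X * theta - Y));    % vectorized update
    jTheta(iter) = sum((X * theta - Y) .^ 2) / (2 * len);      % track the cost
endfor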

References

1. Machine Learning Foundations (机器学习基石), Hsuan-Tien Lin (林轩田), National Taiwan University: lecture_slides-09_handout.pdf
2. Coursera / Stanford CS229: Machine Learning, Andrew Ng
