【Machine Learning】【Andrew Ng】- notes(Week 2: Multivariate Linear Regression)

This post introduces the basic concepts of multivariate linear regression and the form of its hypothesis function, and discusses gradient descent with multiple features in detail, including feature scaling, tuning the learning rate, and polynomial regression.


Multiple Features

Linear regression with multiple variables is also known as “multivariate linear regression”.
We now introduce notation for equations where we can have any number of input variables.
- $x_j^{(i)}$ = value of feature $j$ in the $i$-th training example
- $x^{(i)}$ = the input (features) of the $i$-th training example
- $m$ = the number of training examples
- $n$ = the number of features
The multivariable form of the hypothesis function accommodating these multiple features is as follows:
$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \dots + \theta_n x_n$$
In order to develop intuition about this function, we can think of $\theta_0$ as the basic price of a house, $\theta_1$ as the price per square meter, $\theta_2$ as the price per floor, etc. $x_1$ will be the number of square meters in the house, $x_2$ the number of floors, etc.
Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:
$$h_\theta(x) = \begin{bmatrix} \theta_0 & \theta_1 & \dots & \theta_n \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = \theta^T x$$
This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.
Remark: Note that, for convenience in this course, we assume $x_0^{(i)} = 1$ for $i \in 1, \dots, m$. This allows us to do matrix operations with $\theta$ and $x$, making the two vectors $\theta$ and $x^{(i)}$ match each other element-wise (that is, have the same number of elements: $n+1$).
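As a minimal sketch of this vectorized hypothesis (using NumPy, with made-up example values for a house with $n = 2$ features and the $x_0 = 1$ convention):

```python
import numpy as np

# Hypothetical example values: theta_0, theta_1, theta_2 and one training example.
theta = np.array([50.0, 0.25, 10.0])   # [theta_0, theta_1, theta_2]
x = np.array([1.0, 120.0, 2.0])        # [x_0 = 1, square meters, floors]

# Vectorized hypothesis for one training example: h_theta(x) = theta^T x
h = theta @ x
print(h)  # 50 + 0.25*120 + 10*2 = 100.0
```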

Gradient Descent for Multiple Variables

The gradient descent equation itself is generally the same form; we just have to repeat it for our ‘n’ features:
$$\text{repeat until convergence: } \lbrace \; \theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \quad \text{for } j := 0, \dots, n \; \rbrace$$
In other words:
$$\begin{aligned} \text{repeat until convergence: } \lbrace \quad & \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)} \\ & \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_1^{(i)} \\ & \theta_2 := \theta_2 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_2^{(i)} \\ & \cdots \quad \rbrace \end{aligned}$$
The following image compares gradient descent with one variable to gradient descent with multiple variables:
[Figure: side-by-side comparison of the gradient descent update rule with one variable and with multiple variables.]
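As a rough sketch of the update rule above (assuming a NumPy design matrix `X` that already includes the $x_0 = 1$ column; the function name and defaults are my own):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for multivariate linear regression.

    X : (m, n+1) design matrix whose first column is all ones (x_0 = 1)
    y : (m,) vector of targets
    """
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(num_iters):
        errors = X @ theta - y            # h_theta(x^(i)) - y^(i) for all i
        gradient = (X.T @ errors) / m     # partial derivatives for all theta_j at once
        theta -= alpha * gradient         # simultaneous update of every theta_j
    return theta
```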

Gradient Descent in Practice I - Feature Scaling

We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.

The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:
$-1 \le x_{(i)} \le 1$ or $-0.5 \le x_{(i)} \le 0.5$
These aren’t exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.
Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable, resulting in a new average value for the input variable of just zero. To implement both of these techniques, adjust your input values as shown in this formula:
$$x_i := \frac{x_i - \mu_i}{s_i}$$
Where $\mu_i$ is the average of all the values for feature (i) and $s_i$ is the range of values (max − min), or $s_i$ is the standard deviation.
Note that dividing by the range and dividing by the standard deviation give different results. The quizzes in this course use the range; the programming exercises use the standard deviation.
For example, if $x_i$ represents housing prices with a range of 100 to 2000 and a mean value of 1000, then $x_i := \dfrac{\text{price} - 1000}{1900}$.
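A small sketch of mean normalization with range scaling (NumPy; the helper name and sample prices are made up for illustration):

```python
import numpy as np

def mean_normalize(x):
    """Subtract the mean and divide by the range, so values land roughly in [-0.5, 0.5]."""
    mu = x.mean()
    s = x.max() - x.min()   # range (max - min); x.std() could be used instead
    return (x - mu) / s

prices = np.array([100.0, 500.0, 1000.0, 1500.0, 2000.0])
print(mean_normalize(prices))  # centered on zero, spread of about 1
```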

Gradient Descent in Practice II - Learning Rate

Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.
Automatic convergence test. Declare convergence if J(θ) decreases by less than E in one iteration, where E is some small value such as $10^{-3}$. However, in practice it is difficult to choose this threshold value.
[Figure: a plot of $J(\theta)$ against the number of iterations; the curve should decrease on every iteration and flatten out as gradient descent converges.]
It has been proven that if learning rate α is sufficiently small, then J(θ) will decrease on every iteration.
[Figure: behaviour of $J(\theta)$ over the iterations for learning rates that are too large, where the cost increases or oscillates instead of decreasing.]
To summarize:
If $\alpha$ is too small: slow convergence.
If $\alpha$ is too large: $J(\theta)$ may not decrease on every iteration and thus may not converge.
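A hedged sketch of the debugging idea above: record $J(\theta)$ every iteration and stop once the decrease falls below a small threshold. It reuses the hypothetical `gradient_descent` setup from earlier (NumPy only; plot the returned history yourself to inspect $\alpha$):

```python
import numpy as np

def gradient_descent_with_history(X, y, alpha=0.01, num_iters=1000, epsilon=1e-3):
    """Gradient descent that records J(theta) each iteration and applies the convergence test."""
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)

    def cost(t):
        # J(theta) = 1/(2m) * sum((h_theta(x^(i)) - y^(i))^2)
        return float(((X @ t - y) ** 2).sum() / (2 * m))

    history = [cost(theta)]
    for _ in range(num_iters):
        theta -= alpha * (X.T @ (X @ theta - y)) / m
        history.append(cost(theta))
        # Automatic convergence test: stop once J decreases by less than epsilon
        # (this also fires if J increases, which suggests alpha is too large).
        if history[-2] - history[-1] < epsilon:
            break
    return theta, history  # plot `history` against the iteration number to debug alpha
```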

Features and Polynomial Regression

We can improve our features and the form of our hypothesis function in a couple different ways.

We can combine multiple features into one. For example, we can combine $x_1$ and $x_2$ into a new feature $x_3$ by taking $x_1 \cdot x_2$.
Polynomial Regression
Our hypothesis function need not be linear (a straight line) if that does not fit the data well.
We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).
For example, if our hypothesis function is $h_\theta(x) = \theta_0 + \theta_1 x_1$, then we can create additional features based on $x_1$ to get the quadratic function $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2$ or the cubic function $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^3$.
In the cubic version, we have created new features $x_2$ and $x_3$ where $x_2 = x_1^2$ and $x_3 = x_1^3$.
To make it a square root function, we could do: $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 \sqrt{x_1}$.
One important thing to keep in mind is that if you choose your features this way, then feature scaling becomes very important.
e.g. if $x_1$ has range 1–1000, then the range of $x_1^2$ becomes 1–1,000,000 and that of $x_1^3$ becomes 1–1,000,000,000.
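As an illustrative sketch (NumPy; the helper name is made up), this builds the polynomial features $x_1$, $x_1^2$, $x_1^3$ from a single feature and then applies the same range-based scaling discussed earlier:

```python
import numpy as np

def polynomial_features(x1, degree=3):
    """Stack x1, x1^2, ..., x1^degree as columns of a feature matrix (bias column not included)."""
    return np.column_stack([x1 ** d for d in range(1, degree + 1)])

x1 = np.linspace(1.0, 1000.0, 5)
poly = polynomial_features(x1)   # column ranges explode: up to ~1e3, ~1e6, ~1e9
# Mean-normalize and range-scale each column so gradient descent behaves well.
scaled = (poly - poly.mean(axis=0)) / (poly.max(axis=0) - poly.min(axis=0))
print(scaled)
```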
