Linear Regression Model
A linear function is defined as:

$$h(\bm{x})=w_{1}x_{1}+w_{2}x_{2}+\dots+w_{d}x_{d}+b=\bm{w}^{T}\bm{x}+b$$
Given a dataset $D=\{(\bm{x}_{i},y_{i})\}_{i=1}^{N}$, where the $\bm{x}_{i}$ and $y_{i}$ are continuous-valued, linear regression tries to learn an $h(\bm{x})$ that predicts $y$ accurately.
The most common performance measure for regression tasks is the mean squared error (MSE):

$$J(\bm{w},b)=\frac{1}{2}\sum_{i=1}^{N}\left(y_{i}-h(\bm{x}_{i})\right)^{2}=\frac{1}{2}\sum_{i=1}^{N}\left(y_{i}-\bm{w}^{T}\bm{x}_{i}-b\right)^{2}$$
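As a quick numeric check, the cost above can be evaluated directly, e.g. with NumPy (the two samples here are made up for illustration):

```python
import numpy as np

# Toy evaluation of J(w, b) = 1/2 * sum_i (y_i - (w^T x_i + b))^2
X = np.array([[1.0, 2.0],
              [3.0, 0.0]])        # two samples, two features
y = np.array([4.0, 2.0])
w = np.array([1.0, 1.0])
b = 1.0

residuals = y - (X @ w + b)       # y_i - h(x_i) = (0, -2)
J = 0.5 * np.sum(residuals ** 2)  # → 2.0
```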
Optimizing the parameters (beginners only need to master (1), which is simple and practical):
(1) Gradient descent can be used to minimize the error; each update step is:

$$w_{j}:=w_{j}-\eta\frac{\partial J(\bm{w},b)}{\partial w_{j}}=w_{j}+\eta\sum_{i=1}^{N}\left(y_{i}-h(\bm{x}_{i})\right)x_{i}^{(j)},\quad j=1,2,\dots,d$$

$$b:=b-\eta\frac{\partial J(\bm{w},b)}{\partial b}=b+\eta\sum_{i=1}^{N}\left(y_{i}-h(\bm{x}_{i})\right)$$
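The update rule can be sketched in NumPy as follows (the learning rate, iteration count, and data are illustrative assumptions; note the plus sign that falls out of substituting the gradient):

```python
import numpy as np

def gd_linear_regression(X, y, eta=0.005, n_iters=5000):
    """Batch gradient descent for J(w, b) = 1/2 * sum_i (y_i - w.x_i - b)^2.
    A minimal sketch: X is (N, d), y is (N,)."""
    N, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        residual = y - (X @ w + b)   # y_i - h(x_i)
        # dJ/dw_j = -sum_i residual_i * x_i^(j), hence the plus sign below
        w += eta * (X.T @ residual)
        b += eta * residual.sum()
    return w, b

# Illustrative noiseless data generated from known parameters
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -3.0]) + 1.0
w, b = gd_linear_regression(X, y)   # converges toward w = (2, -3), b = 1
```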
(2) Least-squares estimation:

Collect $\bm{w}$ and $b$ into $\hat{\bm{w}}=(\bm{w};b)$, and represent the dataset as an $N\times(d+1)$ matrix $\mathbf{X}$: in each row, the first $d$ entries are the attribute values and the last entry is a constant $1$, corresponding to the parameter $b$.
Writing the labels as a vector $\bm{y}=(y_{1};y_{2};\dots;y_{N})$, the error-minimizing parameter $\hat{\bm{w}}^{*}$ can be expressed as:

$$\hat{\bm{w}}^{*}=\arg\min_{\hat{\bm{w}}}J(\hat{\bm{w}})=\arg\min_{\hat{\bm{w}}}(\bm{y}-\mathbf{X}\hat{\bm{w}})^{T}(\bm{y}-\mathbf{X}\hat{\bm{w}})$$
$$\frac{\partial J(\hat{\bm{w}})}{\partial\hat{\bm{w}}}=0\;\Rightarrow\;2\mathbf{X}^{T}(\mathbf{X}\hat{\bm{w}}-\bm{y})=0$$
When $\mathbf{X}^{T}\mathbf{X}$ is full rank (the necessary and sufficient condition for it to be invertible), i.e. positive definite, the equation has the closed-form solution:

$$\hat{\bm{w}}=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\bm{y}$$
The resulting regression model is:

$$h(\bm{x}_{i})=\bm{x}_{i}^{T}(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\bm{y}$$
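A sketch of the closed-form solution in NumPy (toy data assumed; solving the normal equations with `np.linalg.solve` is numerically preferable to forming the inverse explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0   # known weights, bias b = 3

# Append a column of ones so the last coefficient plays the role of b.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])

# Solve (X^T X) w_hat = X^T y rather than computing (X^T X)^{-1} explicitly.
w_hat = np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ y)
```

On this noiseless data `w_hat` recovers the generating parameters exactly (up to floating-point error).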
When $\mathbf{X}^{T}\mathbf{X}$ is not full rank, the problem amounts to solving the linear system $(\mathbf{X}^{T}\mathbf{X})\hat{\bm{w}}=\mathbf{X}^{T}\bm{y}$, which may have multiple solutions $\hat{\bm{w}}$, all of which minimize the squared error (and attain the same error value). Which one is chosen is left to the algorithm; in practice, a regularization term is usually introduced to resolve the ambiguity.
With a regularization term added, the objective becomes:

$$\hat{\bm{w}}^{*}=\arg\min_{\hat{\bm{w}}}J(\hat{\bm{w}})=\arg\min_{\hat{\bm{w}}}\left[(\bm{y}-\mathbf{X}\hat{\bm{w}})^{T}(\bm{y}-\mathbf{X}\hat{\bm{w}})+\lambda\|\hat{\bm{w}}\|^{2}\right]$$

whose solution is

$$\hat{\bm{w}}=(\mathbf{X}^{T}\mathbf{X}+\lambda\mathbf{I})^{-1}\mathbf{X}^{T}\bm{y}$$
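A sketch of the ridge solution: the duplicated column below makes $\mathbf{X}^{T}\mathbf{X}$ singular, yet adding $\lambda\mathbf{I}$ keeps the system solvable (the data and the value of $\lambda$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
X = np.hstack([X, X[:, [0]]])            # duplicate a column: X^T X is singular
y = X @ np.array([1.0, 2.0, -1.0, 1.0])

lam = 0.1
A = X.T @ X + lam * np.eye(X.shape[1])   # positive definite for lam > 0
w_ridge = np.linalg.solve(A, X.T @ y)
```

By the symmetry of the penalized objective, the two identical columns receive equal weights; this is precisely how the regularizer picks one solution out of the many least-squares minimizers.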
(3) Maximum likelihood estimation
Let $\epsilon^{(i)}$ denote the error of sample $i$; then:

$$y^{(i)}=\bm{w}^{T}\bm{x}^{(i)}+\epsilon^{(i)},\quad i=1,2,\dots,N$$
The errors $\epsilon^{(i)}$ and $\epsilon^{(j)}$ ($i\neq j$) are independent and identically distributed, each following the normal distribution $\epsilon^{(i)}\sim\mathcal{N}(0,\sigma^{2})$, $i=1,2,\dots,N$:

$$p(\epsilon^{(i)})=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(\epsilon^{(i)})^{2}}{2\sigma^{2}}\right)$$
Since $\epsilon^{(i)}=y^{(i)}-\bm{w}^{T}\bm{x}^{(i)}$, the density of $\epsilon^{(i)}$ is exactly the conditional density of $y^{(i)}$ given $\bm{x}^{(i)}$:

$$p(y^{(i)}\mid\bm{x}^{(i)};\bm{w})=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(y^{(i)}-\bm{w}^{T}\bm{x}^{(i)})^{2}}{2\sigma^{2}}\right)$$
We maximize the likelihood function:

$$L(\bm{w})=\prod_{i=1}^{N}p(y^{(i)}\mid\bm{x}^{(i)};\bm{w})$$
Taking the log-likelihood:

$$\begin{aligned}\log L(\bm{w})&=\sum_{i=1}^{N}\log p(y^{(i)}\mid\bm{x}^{(i)};\bm{w})\\&=\sum_{i=1}^{N}\log\left[\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(y^{(i)}-\bm{w}^{T}\bm{x}^{(i)})^{2}}{2\sigma^{2}}\right)\right]\\&=\sum_{i=1}^{N}\left[-\frac{1}{2}\log(2\pi)-\log\sigma-\frac{(y^{(i)}-\bm{w}^{T}\bm{x}^{(i)})^{2}}{2\sigma^{2}}\right]\\&=-\frac{N}{2}\log(2\pi)-N\log\sigma-\frac{1}{2\sigma^{2}}\sum_{i=1}^{N}\left(y^{(i)}-\bm{w}^{T}\bm{x}^{(i)}\right)^{2}\end{aligned}$$
The first two terms do not depend on $\bm{w}$, so maximizing the log-likelihood is equivalent to minimizing the sum of squared errors, i.e. the mean squared error.
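This equivalence can be checked numerically: for a fixed $\sigma$, the least-squares solution attains a higher log-likelihood than any other parameter vector (the toy data and the value of $\sigma$ below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.5, size=200)

def log_likelihood(w, sigma=0.5):
    """Gaussian log-likelihood, matching the final line of the derivation."""
    r = y - X @ w
    N = len(y)
    return (-N / 2 * np.log(2 * np.pi) - N * np.log(sigma)
            - (r @ r) / (2 * sigma ** 2))

# Least-squares estimate via the normal equations
w_ls = np.linalg.solve(X.T @ X, X.T @ y)
```

Perturbing `w_ls` in any direction increases the squared error and therefore strictly decreases `log_likelihood`.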