Linear Regression: Notes and Implementation
The Principle of Linear Regression
Fitting the sample data with a straight line and solving for that line's regression coefficients is the process called regression. The coefficients are then substituted into the line's regression equation, and finally the data to be predicted are plugged into that equation to obtain the prediction.
Pros and Cons of Linear Regression
Pros: the results are easy to interpret and the computation is inexpensive.
Cons: fits nonlinear data poorly.
Applicable data types: numeric and nominal values.
Analysis of the Linear Regression Algorithm
1. Assume the sample data can be fit by a straight line.
2. To measure how accurate the regression predictions are, take the difference between each actual value $y$ and predicted value $\hat y$ and minimize their total.
3. To make the minimization easier to solve, this becomes finding the minimum of $\sum_{i=1}^m (y^i - \hat y^i)^2$.
4. The regression coefficients are the values that attain this minimum.
Simple Linear Regression: the Least Squares Method
Suppose we have found the best-fit line $\hat y = ax + b$. Each sample point $x^i$ then has a predicted value $\hat y^i = a x^i + b$, while its true value is $y^i$. The goal is to find $a$ and $b$ such that $\sum_{i=1}^m (y^i - \hat y^i)^2$, i.e. $\sum_{i=1}^m (y^i - a x^i - b)^2$, is as small as possible.
The objective (loss) function is:
$$J(a,b) = \sum_{i=1}^m (y^i - a x^i - b)^2$$
To make $J(a,b)$ as small as possible we look for its extremum. The unknown parameters are $a$ and $b$, so we take the partial derivative with respect to each and set it to zero:
$$\frac{\partial J(a,b)}{\partial a} = 0 \qquad \frac{\partial J(a,b)}{\partial b} = 0$$
Taking the derivative with respect to $b$:
$$\frac{\partial J(a,b)}{\partial b} = \sum_{i=1}^m 2(y^i - a x^i - b)(-1) = 0$$
Dividing both sides by 2 (the factor $-1$ can also be dropped) gives:
$$\sum_{i=1}^m (y^i - a x^i - b) = \sum_{i=1}^m y^i - a\sum_{i=1}^m x^i - mb = 0$$
Dividing both sides by $m$:
$$\sum_{i=1}^m y^i - a\sum_{i=1}^m x^i - mb = 0 \;\Rightarrow\; \sum_{i=1}^m y^i - a\sum_{i=1}^m x^i = mb \;\Rightarrow\; b = \bar y - a\bar x$$
Taking the derivative with respect to $a$:
$$\frac{\partial J(a,b)}{\partial a} = \sum_{i=1}^m 2(y^i - a x^i - b)(-x^i) = 0 \;\Rightarrow\; \sum_{i=1}^m (y^i - a x^i - b)\,x^i = 0$$
Substituting $b = \bar y - a\bar x$:
$$\sum_{i=1}^m (y^i - a x^i - \bar y + a\bar x)\,x^i = 0 \;\Rightarrow\; \sum_{i=1}^m \bigl(y^i x^i - a(x^i)^2 - \bar y x^i + a\bar x x^i\bigr) = 0 \;\Rightarrow\; \sum_{i=1}^m (y^i x^i - \bar y x^i) - \sum_{i=1}^m \bigl(a(x^i)^2 - a\bar x x^i\bigr) = 0 \;\Rightarrow\; \sum_{i=1}^m (y^i x^i - \bar y x^i) = a\sum_{i=1}^m \bigl((x^i)^2 - \bar x x^i\bigr)$$
Finally we obtain the expression for $a$:
$$a = \frac{\sum_{i=1}^m (y^i x^i - \bar y x^i)}{\sum_{i=1}^m \bigl((x^i)^2 - \bar x x^i\bigr)} = \frac{\sum_{i=1}^m (y^i x^i - \bar y x^i - \bar x y^i + \bar x\bar y)}{\sum_{i=1}^m \bigl((x^i)^2 - \bar x x^i - \bar x x^i + \bar x^2\bigr)} = \frac{\sum_{i=1}^m (x^i - \bar x)(y^i - \bar y)}{\sum_{i=1}^m (x^i - \bar x)^2}$$
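The middle step above inserts the terms $-\bar x y^i + \bar x\bar y$ into the numerator and $-\bar x x^i + \bar x^2$ into the denominator; this is allowed because each inserted group sums to zero (using $\sum_{i=1}^m y^i = m\bar y$ and $\sum_{i=1}^m x^i = m\bar x$):
$$\sum_{i=1}^m (\bar x y^i - \bar x\bar y) = \bar x\, m\bar y - m\,\bar x\bar y = 0, \qquad \sum_{i=1}^m (\bar x x^i - \bar x^2) = \bar x\, m\bar x - m\,\bar x^2 = 0$$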
Vectorizing the expression for $a$ (written as a dot product of vectors):
$$\sum_{i=1}^m w^i v^i \;\Rightarrow\; W \bullet V, \quad \text{where } W = (w^1, w^2, \dots, w^m),\; V = (v^1, v^2, \dots, v^m)$$
$$a = \frac{\sum_{i=1}^m (x^i - \bar x)(y^i - \bar y)}{\sum_{i=1}^m (x^i - \bar x)^2} \;\Rightarrow\; \frac{(x - \bar x) \bullet (y - \bar y)}{(x - \bar x) \bullet (x - \bar x)}$$
where in the last expression $x$ and $y$ denote the full vectors of sample values.
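A minimal NumPy sketch of this vectorized computation; the arrays `x_train` and `y_train` below are hypothetical toy data, used only for illustration:

```python
import numpy as np

# hypothetical 1-D sample data
x_train = np.array([1., 2., 3., 4., 5.])
y_train = np.array([1., 3., 2., 3., 5.])

x_mean, y_mean = np.mean(x_train), np.mean(y_train)

# a = ((x - x_mean) . (y - y_mean)) / ((x - x_mean) . (x - x_mean))
a = (x_train - x_mean).dot(y_train - y_mean) / (x_train - x_mean).dot(x_train - x_mean)
b = y_mean - a * x_mean
print(a, b)
```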
### Simple Linear Regression: Code Implementation
import numpy as np

class LinearRegression():
    def __init__(self):
        '''Initialize the LinearRegression model'''
        self.coef_ = None
        self.intercept_ = None

    def fit(self, x_train, y_train):
        '''Fit the model, computing the slope (coef_) and the intercept'''
        # means of x and y
        x_mean = np.mean(x_train)
        y_mean = np.mean(y_train)
        # numerator: (x - x_mean) . (y - y_mean)
        num = (x_train - x_mean).dot(y_train - y_mean)
        # denominator: (x - x_mean) . (x - x_mean)
        d = (x_train - x_mean).dot(x_train - x_mean)
        self.coef_ = num / d
        self.intercept_ = y_mean - self.coef_ * x_mean
        return self

    def predict(self, x_test):
        y_predict = np.array([self.coef_ * x + self.intercept_ for x in x_test])
        return y_predict

    def __repr__(self):
        return 'LinearRegression(vector)'
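A quick usage sketch of the class above on hypothetical toy data (the numbers are made up purely for illustration):

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 2., 3., 5.])

reg = LinearRegression()
reg.fit(x, y)
print(reg.coef_, reg.intercept_)     # fitted slope and intercept
print(reg.predict(np.array([6.])))   # prediction for a new point
```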
Multiple Linear Regression: Analysis and Code Implementation
Multiple Linear Regression Analysis
Suppose our data is fit by the linear function $y$:
$$y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n$$
Note: $\hat y^i = \theta_0 + \theta_1 x_1^i + \theta_2 x_2^i + \dots + \theta_n x_n^i$, where $x^i = (x_1^i, x_2^i, x_3^i, \dots, x_n^i)$, $i$ denotes the $i$-th sample and $n$ is the number of features (dimensions) of sample $x^i$; $\theta = (\theta_0, \theta_1, \theta_2, \dots, \theta_n)$.
So as soon as we obtain a set of $\theta$ values, we can compute the prediction $\hat y$ for any new sample.
Rewrite $\hat y$ as $\hat y = \theta_0 x_0^i + \theta_1 x_1^i + \theta_2 x_2^i + \dots + \theta_n x_n^i$, where $x_0^i = 1$.
The $i$-th sample $x^i$ can then be written as:
$$x^i = (x_0^i, x_1^i, x_2^i, \dots, x_n^i) = (1, x_1^i, x_2^i, \dots, x_n^i)$$
where $\theta$ is a column vector:
$$\theta = (\theta_0, \theta_1, \theta_2, \dots, \theta_n)^T$$
so the prediction for each sample, $\hat y^i$, is:
$$\hat y^i = x^i \bullet \theta$$
The whole sample set can be written as:
$$X_b = \begin{pmatrix} 1 & x_1^1 & x_2^1 & \cdots & x_n^1 \\ 1 & x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_1^m & x_2^m & \cdots & x_n^m \end{pmatrix}$$
so that $\hat y$ becomes a ***matrix operation***:
$$\hat y = X_b \bullet \theta$$
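As a small illustration of this matrix form, a sketch with made-up numbers (`X` and `theta` below are purely hypothetical):

```python
import numpy as np

# hypothetical data: 3 samples, 2 features
X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])
theta = np.array([0.5, 1.0, -1.0])    # (theta_0, theta_1, theta_2)

# prepend a column of ones so that theta_0 acts as the intercept
X_b = np.hstack([np.ones((len(X), 1)), X])
y_hat = X_b.dot(theta)                # y_hat = X_b . theta
print(y_hat)
```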
The objective (loss) function, to be made as small as possible:
$$J(\theta) = \sum_{i=1}^m (y^i - \hat y^i)^2 \;\Rightarrow\; (y - X_b \bullet \theta)^T \bullet (y - X_b \bullet \theta)$$
Setting the derivative of $J(\theta)$ with respect to $\theta$ to zero yields $\theta$ (the normal equation):
$$\theta = (X_b^T \bullet X_b)^{-1} \bullet X_b^T \bullet y$$
Problem: the time complexity is high, $O(n^3)$.
Note: $^{-1}$ denotes the matrix inverse; the resulting $\theta$ is a vector with one entry per feature plus the intercept.
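A minimal sketch of solving the normal equation directly with NumPy on made-up data; `np.linalg.pinv` could be substituted for `inv` when $X_b^T X_b$ is close to singular:

```python
import numpy as np

# toy data: y = 3 + 2*x plus a little noise (hypothetical, for illustration only)
np.random.seed(0)
x = np.random.rand(100, 1)
y = 3. + 2. * x[:, 0] + 0.1 * np.random.randn(100)

X_b = np.hstack([np.ones((len(x), 1)), x])
theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
print(theta)   # approximately [3., 2.]
```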
Multiple Linear Regression (Normal Equation) Implementation
import numpy as np
from sklearn.metrics import r2_score

class LinearRegression2():
    def __init__(self):
        '''Initialize the model'''
        self.coef_ = None
        self.intercept_ = None
        self.theta_ = None

    def fit(self, X_train, y_train):
        '''Fit the model using the normal equation'''
        assert X_train.shape[0] == y_train.shape[0], \
            'The size of X_train must be equal to the size of y_train'
        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self.theta_ = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)
        self.coef_ = self.theta_[1:]
        self.intercept_ = self.theta_[0]
        return self

    def predict(self, X_test):
        assert self.intercept_ is not None and self.coef_ is not None, \
            'Must fit before predict!'
        assert X_test.shape[1] == len(self.coef_), \
            'The feature number of X_test must be equal to the length of self.coef_'
        X_b = np.hstack([np.ones((len(X_test), 1)), X_test])
        y_predict = X_b.dot(self.theta_)
        return y_predict

    def score(self, X_test, y_test):
        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return 'LinearRegression(mat)'
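A hypothetical usage sketch of LinearRegression2 on synthetic data (all numbers are made up for illustration):

```python
import numpy as np

np.random.seed(1)
X = np.random.rand(200, 3)                                      # 200 samples, 3 features
y = 4. + X.dot(np.array([3., -2., 0.5])) + 0.05 * np.random.randn(200)

reg2 = LinearRegression2()
reg2.fit(X[:150], y[:150])
print(reg2.intercept_, reg2.coef_)    # roughly 4. and [3., -2., 0.5]
print(reg2.score(X[150:], y[150:]))   # R^2 on the held-out part
```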
Multiple Linear Regression Implementation (Batch / Stochastic / Mini-Batch Gradient Descent)
★ Note: when using gradient descent to find the minimum, the data features need to be normalized (feature scaling) first.
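One common way to do this is standardization, for example with scikit-learn's StandardScaler; a minimal sketch on made-up data (the raw matrices below are hypothetical and stand in for your own train/test split):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# hypothetical raw feature matrices
X_train = np.array([[1., 200.], [2., 300.], [3., 250.]])
X_test = np.array([[2., 280.]])

scaler = StandardScaler()
scaler.fit(X_train)                      # learn mean and std from the training set only
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)    # reuse the training statistics on the test set
```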
import numpy as np
from sklearn.metrics import r2_score

class LinearRegression6:

    def __init__(self):
        """Initialize the Linear Regression model"""
        self.coef_ = None
        self.intercept_ = None
        self._theta = None

    def fit_normal(self, X_train, y_train):
        """Fit the Linear Regression model on X_train, y_train using the normal equation"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self._theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)
        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def fit_gd(self, X_train, y_train, eta=0.01, n_iters=1e4):
        """Fit the Linear Regression model on X_train, y_train using batch gradient descent"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        def J(theta, X_b, y):
            # mean squared error loss; return +inf on numerical overflow
            try:
                return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
            except:
                return float('inf')

        def dJ(theta, X_b, y):
            # gradient of the loss with respect to theta (vectorized)
            return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(X_b)

        def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
            theta = initial_theta
            cur_iter = 0
            while cur_iter < n_iters:
                gradient = dJ(theta, X_b, y)
                last_theta = theta
                theta = theta - eta * gradient
                # stop once the decrease of the loss is smaller than epsilon
                if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
                    break
                cur_iter += 1
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)
        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def fit_sgd(self, X_train, y_train, n_iters=5, t0=5, t1=50):
        """Fit the Linear Regression model on X_train, y_train using stochastic gradient
        descent; n_iters is the number of passes over the whole training set"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert n_iters >= 1

        def dJ_sgd(theta, X_b_i, y_i):
            # gradient estimated from a single sample
            return X_b_i * (X_b_i.dot(theta) - y_i) * 2.

        def sgd(X_b, y, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                # decaying learning rate
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b)
            for cur_iter in range(n_iters):
                # shuffle the samples at the start of every pass
                indexes = np.random.permutation(m)
                X_b_new = X_b[indexes]
                y_new = y[indexes]
                for i in range(m):
                    gradient = dJ_sgd(theta, X_b_new[i], y_new[i])
                    theta = theta - learning_rate(cur_iter * m + i) * gradient
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.random.randn(X_b.shape[1])
        self._theta = sgd(X_b, y_train, initial_theta, n_iters, t0, t1)
        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def fit_ssgd(self, X_train, y_train, n_iters=5, t0=5, t1=50, k=10):
        """Fit the model using mini-batch gradient descent; k is the size of each mini-batch"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        def dJ_ssgd(theta, X_b_new, y_new):
            # gradient estimated from one mini-batch
            return X_b_new.T.dot(X_b_new.dot(theta) - y_new) * 2. / len(X_b_new)

        def ssgd(X_b_list, y_list, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b_list) * k
            for cur_iter in range(n_iters):
                for i in range(len(X_b_list)):
                    gradient = dJ_ssgd(theta, X_b_list[i], y_list[i])
                    theta = theta - learning_rate(cur_iter * m + i) * gradient
            return theta

        def X_b_split(X_b, y):
            # shuffle the samples and split them into mini-batches of size k
            m = len(X_b)
            num = int(m / k)
            indexes = np.random.permutation(m)
            X_b = X_b[indexes]
            y = y[indexes]
            X_b_list = []
            y_list = []
            for i in range(num):
                start = i * k
                stop = (i + 1) * k
                X_b_list.append(X_b[start:stop])
                y_list.append(y[start:stop])
            return X_b_list, y_list

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        X_b_list, y_list = X_b_split(X_b, y_train)
        initial_theta = np.random.randn(X_b.shape[1])
        self._theta = ssgd(X_b_list, y_list, initial_theta, n_iters, t0, t1)
        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def predict(self, X_predict):
        """Given a data set X_predict, return the vector of predictions for X_predict"""
        assert self.intercept_ is not None and self.coef_ is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"
        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return X_b.dot(self._theta)

    def score(self, X_test, y_test):
        """Compute the accuracy (R^2) of the current model on the test set X_test, y_test"""
        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "LinearRegression()"
These study notes draw on:
《机器学习实战》 (Machine Learning in Action) and 《Python3入门机器学习 经典算法与应用》