1. Introduction
After learning linear regression, a natural question is: does Python have a package that lets developers get up and running quickly? scikit-learn is exactly that — a powerful library that provides ready-made interfaces for regression, classification, and much more.
2. Linear Regression
Import packages
import numpy as np
np.set_printoptions(precision=2)
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.preprocessing import StandardScaler
from lab_utils_multi import load_house_data  # helper module provided with the course lab files
import matplotlib.pyplot as plt
dlblue = '#0096ff'; dlorange = '#FF9300'; dldarkred = '#C00000'; dlmagenta = '#FF40FF'; dlpurple = '#7030A0'
plt.style.use('./deeplearning.mplstyle')  # course-provided plot style
Normalize the features
X_train, y_train = load_house_data()  # load the data
X_features = ['size(sqft)', 'bedrooms', 'floors', 'age']
scaler = StandardScaler()
X_norm = scaler.fit_transform(X_train)  # z-score normalize every feature
print(f"Peak to Peak range by column in Raw X:{np.ptp(X_train, axis=0)}")
print(f"Peak to Peak range by column in Normalized X:{np.ptp(X_norm, axis=0)}")
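What `StandardScaler` does here is ordinary z-score normalization: subtract each column's mean and divide by its standard deviation. A minimal sketch on a small hypothetical matrix (the numbers below are made up for illustration) shows the two routes agree:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: (size, bedrooms)
X = np.array([[1000.0, 3], [1500.0, 4], [2000.0, 5]])

# Route 1: scikit-learn
X_norm = StandardScaler().fit_transform(X)

# Route 2: z-score by hand, (x - mean) / std, column by column
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_manual = (X - mu) / sigma

print(np.allclose(X_norm, X_manual))  # the two results agree
```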
Create the regression model
sgdr = SGDRegressor(max_iter=1000)  # gradient-descent regressor, at most 1000 iterations
sgdr.fit(X_norm, y_train)  # fit the model to the normalized data
print(sgdr)
print(f"number of iterations completed: {sgdr.n_iter_}, number of weight updates: {sgdr.t_}")
Inspect the fitted parameters
b_norm = sgdr.intercept_  # fitted intercept b
w_norm = sgdr.coef_  # fitted weight vector w
print(f"model parameters: w: {w_norm}, b:{b_norm}")
print(f"model parameters from previous lab: w: [110.56 -21.27 -32.71 -37.97], b: 363.16")
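Note that `LinearRegression` was imported above but never used: it solves the same problem in closed form (the normal equation) instead of by gradient descent. A minimal sketch on synthetic data (the data-generating line y = 2·x0 + 3·x1 + 1 is an assumption made up for this example) shows both estimators recover nearly identical parameters:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic data: y = 2*x0 + 3*x1 + 1 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = 2 * X[:, 0] + 3 * X[:, 1] + 1 + rng.normal(0, 0.1, size=200)

X_norm = StandardScaler().fit_transform(X)

lin = LinearRegression().fit(X_norm, y)           # closed-form solution
sgd = SGDRegressor(max_iter=1000).fit(X_norm, y)  # iterative gradient descent

print("LinearRegression w, b:", lin.coef_, lin.intercept_)
print("SGDRegressor     w, b:", sgd.coef_, sgd.intercept_)
```

On small, well-conditioned data the closed-form solver is the simpler choice; `SGDRegressor` matters when the dataset is too large to solve in one shot.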
Make predictions
# make a prediction using sgdr.predict()
y_pred_sgd = sgdr.predict(X_norm)
# make the same prediction manually with w and b
y_pred = np.dot(X_norm, w_norm) + b_norm
# the two methods should agree; np.allclose is safer than == for floats
print(f"prediction using np.dot() and sgdr.predict match: {np.allclose(y_pred, y_pred_sgd)}")
print(f"Prediction on training set:\n{y_pred[:4]}" )
print(f"Target values \n{y_train[:4]}")
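One pitfall when predicting on new data: the new example must be transformed with the *same* fitted scaler, never normalized from scratch. A minimal self-contained sketch (the single-feature data and the 1200 sqft query point are made-up assumptions):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: price ~ 0.2 * size, with noise
rng = np.random.default_rng(1)
X_train = rng.uniform(500, 3000, size=(100, 1))
y_train = 0.2 * X_train[:, 0] + rng.normal(0, 5, size=100)

scaler = StandardScaler()
X_norm = scaler.fit_transform(X_train)          # fit the scaler on training data only
sgdr = SGDRegressor(max_iter=1000).fit(X_norm, y_train)

# A new house must go through the SAME fitted scaler, not a fresh fit
x_new = np.array([[1200.0]])
x_new_norm = scaler.transform(x_new)
y_new = sgdr.predict(x_new_norm)
print(y_new)  # roughly 0.2 * 1200 = 240
```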
3. Logistic Regression
Define the data
import numpy as np
X = np.array([[0.5, 1.5], [1,1], [1.5, 0.5], [3, 0.5], [2, 2], [1, 2.5]])
y = np.array([0, 0, 0, 1, 1, 1])
Create the model
from sklearn.linear_model import LogisticRegression
lr_model = LogisticRegression()
lr_model.fit(X, y)
Prediction accuracy
y_pred = lr_model.predict(X)
print("Prediction on training set:", y_pred)
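Beyond hard class labels, the fitted model also exposes per-class probabilities (the sigmoid of the linear score) and a mean-accuracy helper. A short sketch reusing the six points defined above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5, 1.5], [1, 1], [1.5, 0.5], [3, 0.5], [2, 2], [1, 2.5]])
y = np.array([0, 0, 0, 1, 1, 1])

lr_model = LogisticRegression().fit(X, y)

# Probability of each class for every sample, shape (6, 2)
print(lr_model.predict_proba(X))
# Mean accuracy on the training set; this toy set is linearly separable
print("Accuracy on training set:", lr_model.score(X, y))
```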
Other modules will be added here as I come to use them 😁
This post showed how to use Python's scikit-learn library for linear and logistic regression: import the necessary modules, normalize the data, create and train SGDRegressor and LogisticRegression models, and make predictions.