# knn_regression_model (Python)

This post applies several machine-learning methods (Lasso regression, Ridge regression, random forests, and k-nearest neighbors) to the Boston housing dataset for feature selection and price prediction, using cross-validation to find the best parameter combination.


```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2;
# on newer versions, load the data another way, e.g.
# fetch_openml(name='boston', version=1).
dataset = datasets.load_boston()
featurenames = list(dataset.feature_names)
X, y = dataset.data, dataset.target

# Standardize all features to zero mean and unit variance
scaler = preprocessing.StandardScaler()
x = scaler.fit_transform(X)
```

```python
# Feature selection with Lasso (L1): the L1 penalty drives the
# coefficients of uninformative features to exactly zero.
clf = Lasso(alpha=0.3)
clf.fit(x, y)
coefs = clf.coef_
string = ' + '.join('{}*{}'.format(round(coef, 4), name)
                    for name, coef in zip(featurenames, coefs))
print('Lasso Model:\n', string)
```
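
Since the L1 penalty zeroes out coefficients entirely, the features Lasso keeps can be read off directly. A minimal sketch of that step (the `1e-6` tolerance is an illustrative choice, not from the original code):

```python
# Keep only features whose Lasso coefficient is numerically nonzero
selected = [name for name, coef in zip(featurenames, coefs)
            if abs(coef) > 1e-6]
print('Features kept by Lasso:', selected)
```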

```python
# Feature selection with Ridge (L2): the L2 penalty shrinks all
# coefficients toward zero but rarely makes any of them exactly zero,
# so features are judged by coefficient magnitude instead.
clf = Ridge(alpha=10)
clf.fit(x, y)
coefs = clf.coef_
string = ' + '.join('{}*{}'.format(round(coef, 4), name)
                    for name, coef in zip(featurenames, coefs))
print('Ridge Model:\n', string)
```

```python
# Feature selection with a random forest: impurity-based feature
# importances, which sum to 1 across all features. (A fixed
# random_state would make the importances reproducible.)
clf = RandomForestRegressor()
clf.fit(x, y)
item = {}
for attr, score in zip(featurenames, clf.feature_importances_):
    item[attr] = round(score, 4)
print(item)
```
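
The printed dict keeps the dataset's column order, so the strongest features are easier to spot after sorting. A small sketch (the sorting step is an addition, not part of the original post):

```python
# Sort features by importance, largest first
for attr, score in sorted(item.items(), key=lambda kv: kv[1], reverse=True):
    print('{:>8}: {:.4f}'.format(attr, score))
```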

```python
# Hold out 20% of the data for testing
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2,
                                                    random_state=1)

# Search for the best KNN hyperparameters with cross-validation
# (5-fold by default); weights is part of the grid, so it is not
# fixed on the estimator.
param = {'n_neighbors': range(2, 50),
         'weights': ['uniform', 'distance'],
         'algorithm': ['ball_tree', 'kd_tree', 'brute', 'auto']}
model = GridSearchCV(KNeighborsRegressor(), param)
model.fit(x_train, y_train)
print('Best model:\n', model.best_estimator_)
print('Best parameters:\n', model.best_params_)
```
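
With `refit=True` (the default), `GridSearchCV` refits the winning configuration on the whole training set, so the tuned model can be used directly instead of retyping its parameters. A short sketch:

```python
# The grid search's best model, already refit on the full training set
best_knn = model.best_estimator_
print('Test MSE (best model):',
      mean_squared_error(y_test, best_knn.predict(x_test)))
```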

```python
# Train the final model. The hyperparameters here are hard-coded;
# ideally they should come from model.best_params_ above.
clf = KNeighborsRegressor(n_neighbors=3, weights='distance')
clf.fit(x_train, y_train)
predict_train = clf.predict(x_train)
predict_test = clf.predict(x_test)
train_mse = mean_squared_error(y_train, predict_train)
test_mse = mean_squared_error(y_test, predict_test)
# With weights='distance', the training MSE is essentially zero: each
# training point is its own nearest neighbor at distance zero.
print('Train MSE = ', train_mse, ' Test MSE = ', test_mse)
```
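
`matplotlib` is imported at the top but never used; one natural use is a predicted-vs-actual scatter plot for the test set. A minimal sketch (this plot is an addition, not part of the original script):

```python
# Plot predicted vs. actual prices; points on the red diagonal
# correspond to perfect predictions
plt.scatter(y_test, predict_test, alpha=0.6)
lims = [y_test.min(), y_test.max()]
plt.plot(lims, lims, 'r--')
plt.xlabel('Actual price')
plt.ylabel('Predicted price')
plt.title('KNN regression: predicted vs. actual')
plt.show()
```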

 

### Simple KNN Algorithm Implementation and Usage

K-nearest neighbors (KNN) is one of the simplest yet most effective supervised learning algorithms, used for both classification and regression tasks. The idea is to find the points in the training set closest to a new point according to a distance metric such as Euclidean distance. The example below shows a simple KNN classifier built with the scikit-learn library:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import numpy as np

# Load the iris dataset as an example
iris = load_iris()
X = iris.data
y = iris.target

# Split into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

# Initialize the classifier with k=3 neighbors
knn_classifier = KNeighborsClassifier(n_neighbors=3)

# Fit the model
knn_classifier.fit(X_train, y_train)

# Predict labels for unseen data
predictions = knn_classifier.predict(X_test)
print(f'Predicted classes: {predictions}')
```

Various methods exist for evaluating a KNN classifier, including the confusion matrix, precision-recall curves, and the F1 score. ROC-AUC evaluation in particular measures the area under the receiver operating characteristic curve, capturing the true positive rate against the false positive rate at different thresholds[^4].

To choose an optimal `k` value, cross-validation is useful: the data is split into multiple folds and each candidate `k` is scored across all of them, which makes the choice robust to any single split.
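
A concrete sketch of that cross-validation loop, reusing the iris training split from the block above (the 5-fold setting and the 1-30 search range are illustrative choices, not from the original text):

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Score each candidate k with 5-fold cross-validation on the training set
best_k, best_score = None, -1.0
for k in range(1, 31):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             X_train, y_train, cv=5)
    if scores.mean() > best_score:
        best_k, best_score = k, scores.mean()
print(f'Best k = {best_k} (mean CV accuracy: {best_score:.3f})')
```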