Kaggle house price prediction, following Dan B's course
Link: https://www.kaggle.com/learn/machine-learning
First, the partial dependence plot: roughly speaking, you marginalize out the other features and look at how the prediction depends on the one or two features you care about.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
from xgboost import XGBRegressor
from sklearn.ensemble.partial_dependence import partial_dependence, plot_partial_dependence
from sklearn.ensemble import GradientBoostingRegressor
original_data = pd.read_csv('D:/NOTEBOOK/train.csv')  # read the training data
test_data = pd.read_csv('D:/NOTEBOOK/test.csv')       # read the test data
original_data_y = original_data.SalePrice             # target column
original_data = original_data.drop(['SalePrice'], axis=1)   # drop the target from the predictors
X_train = original_data.select_dtypes(exclude=['object'])   # keep only numeric predictors
X_test = test_data.select_dtypes(exclude=['object'])
my_imputer = Imputer()
train_X = my_imputer.fit_transform(X_train)  # fill missing values with column means
test_X = my_imputer.transform(X_test)
#########################################################################################################
my_model = GradientBoostingRegressor()
my_model.fit(train_X, original_data_y)  # best to inspect one or two variables at a time; one-way and two-way plots below
# Look up the column positions of the two features so the labels match what is actually plotted
bed = X_train.columns.get_loc('BedroomAbvGr')
year = X_train.columns.get_loc('YearBuilt')
my_plots = plot_partial_dependence(my_model,
                                   features=[bed, year, (bed, year)],       # column numbers of the plots we want to show
                                   X=train_X,                               # imputed predictor data
                                   feature_names=X_train.columns.tolist(),  # labels on the graphs
                                   grid_resolution=10)                      # number of values to plot on the x axis
plt.show()
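Note that sklearn.preprocessing.Imputer and sklearn.ensemble.partial_dependence were removed in later scikit-learn releases. A minimal sketch of the same plots with the current API (assuming scikit-learn >= 1.2, and reusing X_train and original_data_y from the script above):

from sklearn.impute import SimpleImputer                  # replaces the removed Imputer
from sklearn.inspection import PartialDependenceDisplay   # replaces plot_partial_dependence

# Impute missing values with column means, as before
imputed_X = SimpleImputer().fit_transform(X_train)

gbr = GradientBoostingRegressor()
gbr.fit(imputed_X, original_data_y)

# Column positions of the two features of interest
bed = X_train.columns.get_loc('BedroomAbvGr')
year = X_train.columns.get_loc('YearBuilt')

# Two one-way plots plus their two-way interaction
PartialDependenceDisplay.from_estimator(
    gbr,
    imputed_X,
    features=[bed, year, (bed, year)],
    feature_names=X_train.columns.tolist(),  # assumes the imputer dropped no columns
    grid_resolution=10)
plt.show()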
Next, Pipelines: roughly speaking, you chain the preprocessing steps and the model together so they run like an assembly line. After that code I add cross-validation, splitting the data into 5 folds.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
from xgboost import XGBRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
original_data = pd.read_csv('D:/NOTEBOOK/train.csv')  # read the training data
test_data = pd.read_csv('D:/NOTEBOOK/test.csv')       # read the test data
original_data_y = original_data.SalePrice             # target column
original_data = original_data.drop(['SalePrice'], axis=1)   # drop the target from the predictors
X_train = original_data.select_dtypes(exclude=['object'])   # keep only numeric predictors
X_test = test_data.select_dtypes(exclude=['object'])
###############################################################################################
my_pipeline = make_pipeline(Imputer(), RandomForestRegressor())  # imputation + model as one pipeline
my_pipeline.fit(X_train, original_data_y)
predictions = my_pipeline.predict(X_test)  # the pipeline imputes X_test and predicts in one call
################################################################################################
scores = cross_val_score(my_pipeline, X_train, original_data_y, scoring='neg_mean_absolute_error', cv=5)  # 5-fold cross-validation
print(scores)
print('Mean Absolute Error %.2f' % (-1 * scores.mean()))  # average over the folds (scores are negated MAE)
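XGBRegressor is imported above but never used; as a quick illustration of why pipelines are convenient, here is a minimal sketch (assuming the same X_train and original_data_y as above) that swaps it in as the final step and scores it with the same 5-fold cross-validation:

xgb_pipeline = make_pipeline(Imputer(), XGBRegressor())  # same imputation step, different model
xgb_scores = cross_val_score(xgb_pipeline, X_train, original_data_y,
                             scoring='neg_mean_absolute_error', cv=5)
print('XGBoost Mean Absolute Error %.2f' % (-1 * xgb_scores.mean()))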
Last is the data leakage problem. I don't feel I have fully understood it yet; I will revise this part once I do.
My current understanding:
1. Some features change along with the target because they are only generated after the outcome exists, so they should not be used as predictors.
2. The other question is whether a feature will actually be available at the time you need to make a prediction; whether it can be used depends on what the feature represents.
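A minimal sketch of how point 1 could be checked in code, reusing my_pipeline, X_train and original_data_y from above. The column names below are hypothetical placeholders for predictors suspected of only being recorded once the sale has happened; the idea is to drop them and re-run cross-validation, and a large change in the score is a hint that the model was relying on information it would not have at prediction time:

# Hypothetical examples of columns suspected of target leakage (placeholders, not real advice)
suspected_leaks = ['MoSold', 'YrSold', 'SaleType', 'SaleCondition']

# Drop whichever of them exist among the numeric predictors and re-score the pipeline
clean_X = X_train.drop(columns=[c for c in suspected_leaks if c in X_train.columns])
clean_scores = cross_val_score(my_pipeline, clean_X, original_data_y,
                               scoring='neg_mean_absolute_error', cv=5)
print('MAE without suspected leaks %.2f' % (-1 * clean_scores.mean()))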