Reference for this article: the Python Data Science Handbook.
The source code for this article has been uploaded to Gitee.
Packages used in this article:
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.base import ClassifierMixin
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.datasets import make_blobs, load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
sns.set()
plt.rc('font', family='SimHei')
plt.rc('axes', unicode_minus=False)
Random Forest
A random forest is an ensemble learner built on top of decision trees.
In sklearn, decision trees are implemented by the DecisionTreeClassifier/DecisionTreeRegressor classes, and random forests by the RandomForestClassifier/RandomForestRegressor classes.
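Since all of these estimators share the same fit/predict interface, a minimal sketch on throwaway synthetic data may help; the variable names and hyperparameter values below are arbitrary and only for illustration:
# A minimal sketch (illustrative only): both estimators expose the same fit/predict API.
x_demo, y_demo = make_blobs(n_samples=100, centers=3, random_state=0)
tree_clf = DecisionTreeClassifier(max_depth=5, random_state=0)
tree_clf.fit(x_demo, y_demo)
forest_clf = RandomForestClassifier(n_estimators=100, random_state=0)
forest_clf.fit(x_demo, y_demo)
print(tree_clf.predict(x_demo[:5]), forest_clf.predict(x_demo[:5]))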
Decision Trees
At each node a decision tree splits the remaining possibilities according to some criterion (roughly cutting them in half), so it can narrow down the answer very quickly.
However, as a decision tree grows deeper it becomes very prone to overfitting.
As the example below shows, splitting the same dataset into two halves and fitting a separate tree to each half can produce trees that differ substantially.
def decision_tree_visualization(x_train, y_train, model: ClassifierMixin, ax: plt.Axes = None) -> None:
    """Fit `model` on the training data and draw its decision boundary."""
    if ax is None:
        _, ax = plt.figure(figsize=(10, 10)), plt.axes()
    # Scatter the training points, colored by class label
    ax.scatter(
        x=x_train[:, 0],
        y=x_train[:, 1],
        c=y_train,
        edgecolors='k',
        cmap=plt.cm.get_cmap('rainbow', lut=3)
    )
    model.fit(x_train, y_train)
    # Evaluate the fitted model on a 200x200 grid covering the current axes
    xx, yy = np.meshgrid(np.linspace(*ax.get_xlim(), 200), np.linspace(*ax.get_ylim(), 200))
    res = model.predict(np.vstack((xx.flatten(), yy.flatten())).T).reshape(xx.shape)
    # Shade each predicted class region with a translucent color
    ax.contourf(
        xx, yy, res,
        levels=np.arange(len(np.unique(res)) + 1) - 0.5,
        cmap='rainbow',
        alpha=0.3
    )
# Generate a 2D toy classification dataset with three clusters
x, y = make_blobs(
    n_samples=200,
    n_features=2,
    centers=3,
    random_state=233,
    cluster_std=4,
)
# Split the data into two equal halves
x1, x2, y1, y2 = train_test_split(x, y, random_state=233, test_size=0.5)
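One possible way to use the helper above to compare the two trees (a sketch; the side-by-side layout and titles are my own choices, not necessarily the original code):
fig, axes = plt.subplots(1, 2, figsize=(16, 6))
decision_tree_visualization(x1, y1, DecisionTreeClassifier(), ax=axes[0])
decision_tree_visualization(x2, y2, DecisionTreeClassifier(), ax=axes[1])
axes[0].set_title('Tree fit on the first half of the data')
axes[1].set_title('Tree fit on the second half of the data')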