K-Fold Cross-Validation

This article introduces K-fold cross-validation and its use in machine learning, covering the basic cross-validation workflow, the characteristics of the different splitting schemes (KFold, LeaveOneOut, ShuffleSplit, etc.), and worked examples of running cross-validation with Python's sklearn library.




First, import the required libraries and the dataset:

In [1]: import numpy as np

In [2]: from sklearn.model_selection import train_test_split

In [3]: from sklearn.datasets import load_iris

In [4]: from sklearn import svm

In [5]: iris = load_iris()

In [6]: iris.data.shape, iris.target.shape
Out[6]: ((150, 4), (150,))

1.train_test_split

Quickly shuffles the dataset and splits it into a training set and a test set.

This is equivalent to shuffling the data and then splitting it according to the given test_size.

In [7]: X_train, X_test, y_train, y_test = train_test_split(
   ...:         iris.data, iris.target, test_size=.4, random_state=0)
   # splits the data 6:4 into training and test sets

In [8]: X_train.shape, y_train.shape
Out[8]: ((90, 4), (90,))

In [9]: X_test.shape, y_test.shape
Out[9]: ((60, 4), (60,))

In [10]: iris.data[:5]
Out[10]: 
array([[ 5.1,  3.5,  1.4,  0.2],
       [ 4.9,  3. ,  1.4,  0.2],
       [ 4.7,  3.2,  1.3,  0.2],
       [ 4.6,  3.1,  1.5,  0.2],
       [ 5. ,  3.6,  1.4,  0.2]])

In [11]: X_train[:5]
Out[11]: 
array([[ 6. ,  3.4,  4.5,  1.6],
       [ 4.8,  3.1,  1.6,  0.2],
       [ 5.8,  2.7,  5.1,  1.9],
       [ 5.6,  2.7,  4.2,  1.3],
       [ 5.6,  2.9,  3.6,  1.3]])

In [12]: clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)

In [13]: clf.score(X_test, y_test)
Out[13]: 0.96666666666666667

2.cross_val_score

Runs cross-validation with a specified number of folds on the dataset and scores each fold.

By default (scoring=None), cross_val_score evaluates each fold with the estimator's own score method (accuracy for classifiers). Other metrics for classification or regression, such as 'f1_macro', 'precision_macro', 'recall_macro', 'r2' or 'neg_mean_squared_error', can be selected instead; these scorer names correspond to functions in sklearn.metrics (from sklearn import metrics) and are chosen through the scoring parameter of cross_val_score.
When cv is given as an int, the data is split with StratifiedKFold for classifiers and with KFold otherwise (neither shuffles by default); both splitters are introduced below.


In [15]: from sklearn.model_selection import cross_val_score

In [16]: clf = svm.SVC(kernel='linear', C=1)

In [17]: scores = cross_val_score(clf, iris.data, iris.target, cv=5)

In [18]: scores
Out[18]: array([ 0.96666667,  1.        ,  0.96666667,  0.96666667,  1.        ])

In [19]: scores.mean()
Out[19]: 0.98000000000000009
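
To score with a different metric, pass the scoring parameter; a minimal sketch using the standard 'f1_macro' scorer string (any valid sklearn scorer name works here):

from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn import svm

iris = load_iris()
clf = svm.SVC(kernel='linear', C=1)
# evaluate each fold with macro-averaged F1 instead of the default accuracy
scores = cross_val_score(clf, iris.data, iris.target, cv=5, scoring='f1_macro')
print(scores, scores.mean())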

Besides the default scheme, the cross-validation strategy can be specified explicitly, e.g. the number of splits or the train/test proportions:

In [20]: from sklearn.model_selection import ShuffleSplit

In [21]: n_samples = iris.data.shape[0]

In [22]: cv = ShuffleSplit(n_splits=3, test_size=.3, random_state=0)

In [23]: cross_val_score(clf, iris.data, iris.target, cv=cv)
Out[23]: array([ 0.97777778,  0.97777778,  1.        ])

A pipeline can likewise be used with cross_val_score to chain preprocessing and estimation:

In [24]: from sklearn import preprocessing

In [25]: from sklearn.pipeline import make_pipeline

In [26]: clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))

In [27]: cross_val_score(clf, iris.data, iris.target, cv=cv)
Out[27]: array([ 0.97777778,  0.93333333,  0.95555556])

3.cross_val_predict

cross_val_predict is very similar to cross_val_score, but instead of returning scores it returns the estimator's predictions (class labels, or regression values) for each sample, made while that sample was in the test fold. This is very useful for later model improvement: comparing these predictions against the actual targets pinpoints exactly where the model predicts wrongly, which helps greatly with parameter tuning and troubleshooting.

In [28]: from sklearn.model_selection import cross_val_predict

In [29]: from sklearn import metrics

In [30]: predicted = cross_val_predict(clf, iris.data, iris.target, cv=10)

In [31]: predicted
Out[31]: 
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

In [32]: metrics.accuracy_score(iris.target, predicted)
Out[32]: 0.96666666666666667
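
Because cross_val_predict yields a prediction for every sample, the misclassified samples can be located directly; a minimal sketch continuing the session above (np and metrics are already imported):

# rows = true class, columns = predicted class
print(metrics.confusion_matrix(iris.target, predicted))

# indices of the samples the model got wrong
errors = np.where(predicted != iris.target)[0]
print(errors)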

4.KFold

K-fold cross-validation splits the dataset K times so that every sample appears in the training set and appears exactly once in a test set; the test sets of different splits never overlap. It behaves like sampling without replacement.

In [33]: from sklearn.model_selection import KFold

In [34]: X = ['a','b','c','d']

In [35]: kf = KFold(n_splits=2)

In [36]: for train, test in kf.split(X):
    ...:     print(train, test)
    ...:     print(np.array(X)[train], np.array(X)[test])
    ...:     print('\n')
    ...:     
[2 3] [0 1]
['c' 'd'] ['a' 'b']


[0 1] [2 3]
['a' 'b'] ['c' 'd']

If you also want the iteration index, wrap the split iterator in enumerate (X and y must be indexable arrays here):

from sklearn.model_selection import KFold
import numpy as np

X = np.arange(8).reshape(4, 2)   # example data; any indexable array works
y = np.arange(4)
kf = KFold(n_splits=2)
for i, (train_index, test_index) in enumerate(kf.split(X, y)):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    print(i, train_index, test_index)

5.LeaveOneOut

LeaveOneOut is really a special case of KFold; it is defined separately only because it is used so often, and it can be reproduced entirely with KFold (with n_splits equal to the number of samples):


In [37]: from sklearn.model_selection import LeaveOneOut

In [38]: X = [1,2,3,4]

In [39]: loo = LeaveOneOut()

In [41]: for train, test in loo.split(X):
    ...:     print(train, test)
    ...:     
[1 2 3] [0]
[0 2 3] [1]
[0 1 3] [2]
[0 1 2] [3]


# implementing LeaveOneOut with KFold
In [42]: kf = KFold(n_splits=len(X))

In [43]: for train, test in kf.split(X):
    ...:     print(train, test)
    ...:     
[1 2 3] [0]
[0 2 3] [1]
[0 1 3] [2]
[0 1 2] [3]

6.LeavePOut

LeavePOut is very similar to LeaveOneOut, but leaves p samples out in each split; it enumerates all C(n, p) possible test sets, so test sets overlap across splits, and reproducing it with KFold would be somewhat awkward.


In [44]: from sklearn.model_selection import LeavePOut

In [45]: X = np.ones(4)

In [46]: lpo = LeavePOut(p=2)

In [47]: for train, test in lpo.split(X):
    ...:     print(train, test)
    ...:     
[2 3] [0 1]
[1 3] [0 2]
[1 2] [0 3]
[0 3] [1 2]
[0 2] [1 3]
[0 1] [2 3]

7.ShuffleSplit

At first glance ShuffleSplit looks like LeavePOut, but the two are quite different. LeavePOut splits the data so that the union of all test sets is exactly the full dataset, i.e. sampling without replacement. ShuffleSplit draws each split independently at random, so a sample can show up in several test sets (or in none); only after enough splits do the test sets cover the whole dataset. Also note the difference from train_test_split(): ShuffleSplit only splits the data and returns the indices of each split.

In [48]: from sklearn.model_selection import ShuffleSplit

In [49]: X = np.arange(5)

In [50]: ss = ShuffleSplit(n_splits=3, test_size=.25, random_state=0)

In [51]: for train_index, test_index in ss.split(X):
    ...:     print(train_index, test_index)
    ...:     
[1 3 4] [2 0]
[1 4 3] [0 2]
[4 0 2] [1 3]

8.StratifiedKFold

StratifiedKFold performs stratified sampling: the folds are drawn according to the label proportions. In this example the labels 0 and 1 occur at a 1:1 ratio, so each fold also draws the two classes at a 1:1 ratio, as shown below.

# stratified sampling: folds follow the label proportions;
# here labels 0 and 1 occur 1:1, so each test fold is drawn 1:1 as well
from sklearn.model_selection import StratifiedKFold
import numpy as np

X = np.arange(8).reshape(8, 1)            # example data (reconstructed to match the output below)
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])    # labels 0 and 1 at a 1:1 ratio
# the split is computed from y, i.e. from the target classes
sfolder = StratifiedKFold(n_splits=4)
for train, test in sfolder.split(X, y):
    print('Train: %s | test: %s' % (train, test))
>>>
Train: [1 3 4 5 6 7] | test: [0 2]
Train: [0 2 4 5 6 7] | test: [1 3]
Train: [0 1 2 3 5 7] | test: [4 6]
Train: [0 1 2 3 4 6] | test: [5 7]

9.GroupKFold

GroupKFold resembles StratifiedKFold at a glance, but it splits by group: samples are first bundled into groups, each split holds out whole groups as the test set, and the same group never appears in both the training and the test set. The order of samples within a group stays fixed.

In [57]: from sklearn.model_selection import GroupKFold

In [58]: X = [.1, .2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10]

In [59]: y = ['a','b','b','b','c','c','c','d','d','d']

In [60]: groups = [1,1,1,2,2,2,3,3,3,3]

In [61]: gkf = GroupKFold(n_splits=3)

In [62]: for train, test in gkf.split(X,y,groups=groups):
    ...:     print(train, test)
    ...:     
[0 1 2 3 4 5] [6 7 8 9]
[0 1 2 6 7 8 9] [3 4 5]
[3 4 5 6 7 8 9] [0 1 2]

10.LeaveOneGroupOut (less important)

This restricts GroupKFold further: each split holds out the samples of exactly one group, as specified by the given grouping, as the test set.

In [63]: from sklearn.model_selection import LeaveOneGroupOut

In [64]: X = [1, 5, 10, 50, 60, 70, 80]

In [65]: y = [0, 1, 1, 2, 2, 2, 2]

In [66]: groups = [1, 1, 2, 2, 3, 3, 3]

In [67]: logo = LeaveOneGroupOut()

In [68]: for train, test in logo.split(X, y, groups=groups):
    ...:     print(train, test)
    ...:     
[2 3 4 5 6] [0 1]
[0 1 4 5 6] [2 3]
[0 1 2 3] [4 5 6]

11.LeavePGroupsOut (less important)

Same as above, except that each split leaves out p groups instead of a single group.

from sklearn.model_selection import LeavePGroupsOut

X = np.arange(6)
y = [1, 1, 1, 2, 2, 2]
groups = [1, 1, 2, 2, 3, 3]

lpgo = LeavePGroupsOut(n_groups=2)
for train, test in lpgo.split(X, y, groups=groups):
    print(train, test)

[4 5] [0 1 2 3]
[2 3] [0 1 4 5]
[0 1] [2 3 4 5]

12.GroupShuffleSplit (less important)

A randomized group-wise split: in each split, whole groups are randomly assigned to the test set. The splits are drawn independently, so the same group may land in the test set of several splits (like sampling with replacement across splits).

In [75]: from sklearn.model_selection import GroupShuffleSplit

In [76]: X = [.1, .2, 2.2, 2.4, 2.3, 4.55, 5.8, .001]

In [77]: y = ['a', 'b','b', 'b', 'c','c', 'c', 'a']

In [78]: groups = [1,1,2,2,3,3,4,4]

In [79]: gss = GroupShuffleSplit(n_splits=4, test_size=.5, random_state=0)

In [80]: for train, test in gss.split(X, y, groups=groups):
    ...:     print(train, test)
    ...:     
[0 1 2 3] [4 5 6 7]
[2 3 6 7] [0 1 4 5]
[2 3 4 5] [0 1 6 7]
[4 5 6 7] [0 1 2 3]

13.TimeSeriesSplit

Designed for time series, to prevent future data from being used in training: the data is split from front to back, with each training set containing only samples that come before its test set; successive training sets grow by extending forward.

In [81]: from sklearn.model_selection import TimeSeriesSplit

In [82]: X = np.array([[1,2],[3,4],[1,2],[3,4],[1,2],[3,4]])

In [83]: tscv = TimeSeriesSplit(n_splits=3)

In [84]: for train, test in tscv.split(X):
    ...:     print(train, test)
    ...:     
[0 1 2] [3]
[0 1 2 3] [4]
[0 1 2 3 4] [5]
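
TimeSeriesSplit can be passed as cv to cross_val_score like any other splitter; a minimal sketch on toy sequential data (the Ridge regressor is only an illustrative choice):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

X = np.arange(20).reshape(-1, 1)      # toy time-ordered feature
y = np.arange(20, dtype=float)        # toy target
tscv = TimeSeriesSplit(n_splits=4)
# each fold trains only on data that precedes its test window
scores = cross_val_score(Ridge(), X, y, cv=tscv)
print(scores)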

14.Parameter selection

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

kf = StratifiedKFold(n_splits=3)
iris = load_iris()
print(iris.keys())

# try k = 1..19 and record the mean cross-validated accuracy for each
scores = []
for i in range(1, 20):
    cv_score = cross_val_score(KNeighborsClassifier(n_neighbors=i),
                               iris.data, iris.target,
                               cv=kf.split(iris.data, iris.target),
                               scoring='accuracy')
    scores.append(np.mean(cv_score))
print("best k:", np.argmax(scores) + 1)  # +1 because k starts at 1
print(scores)

plt.plot(range(1, 20), scores)
plt.xlabel("k")
plt.ylabel("score")
plt.show()
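
The same search can be automated with GridSearchCV, which cross-validates every setting in a parameter grid and keeps the best one; a minimal sketch (the grid mirrors the manual loop above):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
param_grid = {'n_neighbors': list(range(1, 20))}   # same k range as above
search = GridSearchCV(KNeighborsClassifier(), param_grid,
                      cv=StratifiedKFold(n_splits=3), scoring='accuracy')
search.fit(iris.data, iris.target)
print(search.best_params_, search.best_score_)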