Week 15: sklearn Exercise

This post compares three machine learning algorithms, Naive Bayes, SVM, and Random Forest, on a randomly generated binary classification problem. The models are trained and tested with 10-fold cross validation, and each algorithm's performance is evaluated with accuracy, F1-score, and AUC-ROC.


Assignment
In the second ML assignment you have to compare the performance of
three different classification algorithms, namely Naive Bayes, SVM, and
Random Forest.
For this assignment you need to generate a random binary classification
problem, and then train and test (using 10-fold cross validation) the three
algorithms. For some algorithms, inner cross validation (5-fold) is needed
to choose the parameters. Then, show the classification performance
(per-fold and averaged) in the report, and briefly discuss the results.
Note
The report also has to contain a short description of the methodology
used to obtain the results.

Steps
1. Create a classification dataset (n_samples ≥ 1000, n_features ≥ 10)
2. Split the dataset using 10-fold cross validation
3. Train the algorithms
  GaussianNB
  SVC (possible C values [1e-02, 1e-01, 1e00, 1e01, 1e02], RBF kernel)
  RandomForestClassifier (possible n_estimators values [10, 100, 1000])
4. Evaluate the cross-validated performance
  Accuracy
  F1-score
  AUC ROC

5. Write a short report summarizing the methodology and the results


The following script implements steps 1 to 4:

from sklearn import datasets
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

# Create a random binary classification dataset (step 1)
X, y = datasets.make_classification(n_samples=1000, n_features=10,
                                    n_informative=2, n_redundant=2,
                                    n_repeated=0, n_classes=2)

# Split using 10-fold cross validation (step 2)
kf = KFold(n_splits=10, shuffle=True)

def evaluate(name, clf, X_train, y_train, X_test, y_test):
    # Train on the fold's training split and score the held-out split.
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name)
    print("Accuracy: ", metrics.accuracy_score(y_test, pred))
    print("F1-score: ", metrics.f1_score(y_test, pred))
    # roc_auc_score also accepts hard 0/1 predictions; scores from
    # decision_function or predict_proba would give a smoother AUC.
    print("AUC ROC: ", metrics.roc_auc_score(y_test, pred))
    print()

for train_index, test_index in kf.split(X):
    X_train, y_train = X[train_index], y[train_index]
    X_test, y_test = X[test_index], y[test_index]

    # Gaussian Naive Bayes (no parameters to tune)
    evaluate("GaussianNB:", GaussianNB(),
             X_train, y_train, X_test, y_test)

    # SVC with an RBF kernel; C should be chosen by inner 5-fold CV
    # from [1e-02, 1e-01, 1e00, 1e01, 1e02] (see the sketch below)
    evaluate("SVC:", SVC(C=1e-01, kernel='rbf', gamma=0.1),
             X_train, y_train, X_test, y_test)

    # Random Forest; n_estimators should be chosen from [10, 100, 1000]
    evaluate("RandomForestClassifier:", RandomForestClassifier(n_estimators=10),
             X_train, y_train, X_test, y_test)


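The assignment also calls for an inner 5-fold cross validation to choose SVC's C and the forest's n_estimators, which the script above sidesteps by hardcoding them. A sketch of how this could be done with scikit-learn's GridSearchCV (the candidate grids come from step 3; the loop structure and variable names are my own assumptions, not the original author's code):

from sklearn import datasets, metrics
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = datasets.make_classification(n_samples=1000, n_features=10,
                                    n_informative=2, n_redundant=2,
                                    n_classes=2)

# Candidate values from step 3; each outer training fold gets its own
# inner 5-fold grid search, and the winner is tested on the held-out fold.
searches = {
    "SVC": GridSearchCV(SVC(kernel='rbf'),
                        {'C': [1e-02, 1e-01, 1e00, 1e01, 1e02]}, cv=5),
    "RandomForest": GridSearchCV(RandomForestClassifier(),
                                 {'n_estimators': [10, 100, 1000]}, cv=5),
}

for train_index, test_index in KFold(n_splits=10, shuffle=True).split(X):
    for name, search in searches.items():
        search.fit(X[train_index], y[train_index])   # inner 5-fold CV
        pred = search.predict(X[test_index])         # best estimator, refit
        print(name, search.best_params_,
              "accuracy: %.3f" % metrics.accuracy_score(y[test_index], pred))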
