Classifying Irises with the KNN Algorithm

This article digs into the core idea of the KNN algorithm, discusses a simple from-scratch implementation and its shortcomings, demonstrates classifying the Iris dataset with Scikit-learn, and shows how to evaluate the resulting model. The focus is on data acquisition, preprocessing, and model selection.

The core idea of KNN: infer a sample's class from the classes of its nearest "neighbors"

Advantages:

  1. Simple, easy to understand and implement, and requires no training phase

 Disadvantages:

  1. A value of k must be specified, and a poorly chosen k gives no guarantee on classifier accuracy
  2. It is a lazy algorithm: classifying test samples is computationally expensive and memory-hungry
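The sensitivity to k can be seen directly by cross-validating a few values on the Iris data (a minimal sketch, assuming scikit-learn is installed; the particular k values are an arbitrary choice):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
for k in (1, 3, 5, 15, 50):
    clf = KNeighborsClassifier(n_neighbors=k)
    # 5-fold cross-validated accuracy for this choice of k
    scores = cross_val_score(clf, iris.data, iris.target, cv=5)
    print(f"k={k:2d}  mean 5-fold CV accuracy = {scores.mean():.3f}")
```

Very small k overfits to noise, while very large k (here a third of the dataset) blurs the class boundaries, so accuracy degrades at both extremes.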

The experiment proceeds as follows:

  1. Acquire the data
  2. Process the data
  3. Feature engineering
  4. Fit the KNN estimator
  5. Evaluate the model

 The code is as follows:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier


def knn_iris():
    # 1. Acquire the data
    iris = load_iris()
    # 2. Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=22)

    # 3. Feature engineering: standardization
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    # Note: the test set must be scaled with the mean and variance of the training set
    x_test = transfer.transform(x_test)

    # 4. KNN estimator
    estimator = KNeighborsClassifier(n_neighbors=3)
    estimator.fit(x_train, y_train)

    # 5. Model evaluation
    # Method 1: compare predictions against the true labels directly
    y_predict = estimator.predict(x_test)
    print("y_predict:\n", y_predict)
    print("Direct comparison of true and predicted labels:\n", y_test == y_predict)
    # Method 2: compute the accuracy
    score = estimator.score(x_test, y_test)
    print("Accuracy:\n", score)


if __name__ == "__main__":
    knn_iris()
```
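The first listed disadvantage, choosing k, is usually handled with a cross-validated grid search. Below is a minimal sketch using scikit-learn's GridSearchCV, wrapping the scaler in a Pipeline so each fold is standardized with its own training statistics (the grid of k values is an arbitrary choice):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=22)

# Pipeline keeps standardization inside each CV fold, avoiding data leakage
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {"kneighborsclassifier__n_neighbors": [1, 3, 5, 7, 9, 11]}
search = GridSearchCV(pipe, param_grid, cv=10)
search.fit(x_train, y_train)

print("best parameters:", search.best_params_)
print("best CV accuracy:", search.best_score_)
print("test accuracy:", search.score(x_test, y_test))
```

`search.best_estimator_` is refit on the whole training set with the winning k, so it can be used directly for prediction afterwards.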

Running the script prints the predictions, the element-wise comparison with the true labels, and the accuracy score.

Below is Python code that classifies the irises with KNN implemented from scratch, using the Euclidean distance.

First, import the necessary libraries:

```python
import numpy as np
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
```

Next, load the Iris dataset:

```python
iris = load_iris()
X = iris.data
y = iris.target
```

Split the data into training and test sets:

```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```

Define the Euclidean distance function:

```python
def euclidean_distance(x1, x2):
    return np.sqrt(np.sum((x1 - x2) ** 2))
```

Define the KNN classifier:

```python
class KNN:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # Lazy learner: "training" just stores the data
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        y_pred = [self._predict(x) for x in X]
        return np.array(y_pred)

    def _predict(self, x):
        # Distance from x to every training sample
        distances = [euclidean_distance(x, x_train) for x_train in self.X_train]
        # Indices of the k nearest neighbors
        k_indices = np.argsort(distances)[:self.k]
        k_nearest_labels = [self.y_train[i] for i in k_indices]
        # Majority vote among the neighbors
        most_common = Counter(k_nearest_labels).most_common(1)
        return most_common[0][0]
```

Finally, train the model and make predictions:

```python
knn = KNN(k=3)
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)
```

The complete code:

```python
import numpy as np
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split


def euclidean_distance(x1, x2):
    return np.sqrt(np.sum((x1 - x2) ** 2))


class KNN:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        y_pred = [self._predict(x) for x in X]
        return np.array(y_pred)

    def _predict(self, x):
        distances = [euclidean_distance(x, x_train) for x_train in self.X_train]
        k_indices = np.argsort(distances)[:self.k]
        k_nearest_labels = [self.y_train[i] for i in k_indices]
        most_common = Counter(k_nearest_labels).most_common(1)
        return most_common[0][0]


iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

knn = KNN(k=3)
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)
print(predictions)
```

Hope this helps!
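The `_predict` method computes distances one training sample at a time in a pure-Python loop, which is slow for large training sets. All pairwise distances can instead be computed in a single vectorized step with NumPy broadcasting. A sketch, not part of the original code (`pairwise_euclidean` is a hypothetical helper name):

```python
import numpy as np

def pairwise_euclidean(A, B):
    # (n, 1, d) - (1, m, d) broadcasts to (n, m, d);
    # summing the squares over the feature axis gives an (n, m) distance matrix
    diff = A[:, None, :] - B[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

A = np.array([[0.0, 0.0], [1.0, 1.0]])
B = np.array([[3.0, 4.0], [0.0, 1.0]])
# Row i, column j holds the distance from A[i] to B[j]
print(pairwise_euclidean(A, B))
```

With this helper, `predict` could take `np.argsort` along axis 1 of the distance matrix and vote on the first k columns, replacing both Python loops.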