Section I: Brief Introduction on KPCA
Kernel PCA performs a nonlinear mapping that transforms the data into a higher-dimensional space. A standard PCA is then applied in this higher-dimensional space to project the data back onto a lower-dimensional space in which the samples can be separated by a linear classifier (provided the samples are separable in the input space at all). One downside of this approach is that the explicit mapping is computationally very expensive, and this is where the kernel trick comes in: using a kernel function, the similarity between two high-dimensional feature vectors can be computed directly in the original feature space, without ever carrying out the mapping explicitly.
From: Sebastian Raschka, Vahid Mirjalili. Python Machine Learning, 2nd ed. Nanjing: Southeast University Press, 2018.
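The kernel trick can be illustrated with a polynomial kernel of degree 2, where the explicit feature map is small enough to write out: the kernel value computed in the original 2-D space equals the dot product of the explicitly mapped vectors. This is a minimal sketch (the vectors and the homogeneous degree-2 kernel are chosen only for illustration):

```python
import numpy as np

def phi(x):
    # Explicit degree-2 polynomial feature map for a 2-D input:
    # (x1^2, x2^2, sqrt(2) * x1 * x2)
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# Kernel trick: k(x, y) = (x . y)^2, computed in the original 2-D space
k_direct = np.dot(x, y) ** 2
# Same similarity computed as a dot product in the mapped 3-D space
k_mapped = np.dot(phi(x), phi(y))
print(k_direct, k_mapped)  # both 121.0
```

The two values agree exactly, so algorithms that only need inner products (like PCA on the kernel matrix) never have to construct `phi(x)` at all.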
Section II: Call KPCA in Sklearn
Part 1: Code
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from PCA.visualize import plot_decision_regions
# Section 1: Prepare data
plt.rcParams['figure.dpi'] = 200  # figure resolution (value truncated in the source; 200 assumed)
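The listing above breaks off before the KPCA call itself. As a minimal, self-contained sketch of the step this section is building toward, scikit-learn's `KernelPCA` can be applied to the classic half-moon dataset (the `gamma=15` value is an assumption, not taken from the truncated code):

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

# Half-moon data: two classes that are not linearly separable in the input space
X, y = make_moons(n_samples=100, random_state=123)

# RBF-kernel PCA; gamma is a tuning parameter of the RBF kernel (15 is an assumed value)
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_kpca = kpca.fit_transform(X)
print(X_kpca.shape)  # (100, 2)
```

In the projected space `X_kpca`, the two half-moons become (approximately) linearly separable, so a linear model such as the `LogisticRegression` imported above can be trained on it, and a helper like `plot_decision_regions` can visualize the resulting decision boundary.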