Python Feature Selection

This article walks through feature selection in Python, covering statistical methods such as variance thresholding, mutual information, and the Pearson correlation coefficient, as well as model-based methods such as recursive feature elimination (RFE). Worked examples show how to apply these methods to a real dataset to improve a model's predictive performance and interpretability.


from sklearn.feature_selection import chi2
from sklearn.datasets import load_iris
import pandas as pd


X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# Note: do not assign the result back to the name `chi2`, or the imported
# function is shadowed and cannot be called again in the same session.
chi2_stats, pvals = chi2(X_df, y)

dict_feature = {}
for i, j in zip(X_df.columns.values, chi2_stats):
    dict_feature[i] = j

# Sort the dict by value (the chi-squared statistic), descending
ls = sorted(dict_feature.items(), key=lambda item: item[1], reverse=True)

# Number of features to keep
k = 2
ls_new_feature = []
for i in range(k):
    ls_new_feature.append(ls[i][0])


X_new = X_df[ls_new_feature]
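The manual ranking-and-slicing loop above can be expressed more concisely with scikit-learn's `SelectKBest`, which scores each feature with the given function and keeps the top `k`; a minimal sketch on the same dataset:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# SelectKBest ranks features by the chi-squared statistic and keeps the top k
skb = SelectKBest(score_func=chi2, k=2)
skb.fit(X_df, y)

# get_support() returns a boolean mask over the columns
X_new = X_df.loc[:, skb.get_support()]
print(X_new.columns.tolist())
```

On iris, the petal measurements (columns C and D here) have by far the largest chi-squared statistics, so they are the two features kept.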




from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif
import pandas as pd

# mutual_info_classif measures the mutual information between each
# feature and a discrete target
X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# Columns to treat as discrete; discrete_features expects column indices
feature_cat = ["A", "D"]
discrete_features = []
feature = X_df.columns.values.tolist()
for k in feature_cat:
    if k in feature:
        discrete_features.append(feature.index(k))


mu = mutual_info_classif(X_df, y, discrete_features=discrete_features,
                         n_neighbors=3, copy=True, random_state=None)


dict_feature = {}
for i, j in zip(X_df.columns.values, mu):
    dict_feature[i] = j

# Sort the dict by value (mutual information), descending
ls = sorted(dict_feature.items(), key=lambda item: item[1], reverse=True)

# Number of features to keep
k = 2
ls_new_feature = []
for i in range(k):
    ls_new_feature.append(ls[i][0])


X_new = X_df[ls_new_feature]





from sklearn.datasets import load_iris
import pandas as pd
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# L1 regularization drives some coefficients to exactly zero, and
# SelectFromModel keeps the features whose weights survive the threshold.
# The default lbfgs solver does not support the l1 penalty, so use liblinear.
sf = SelectFromModel(estimator=LogisticRegression(penalty="l1", C=0.1,
                                                  solver="liblinear"),
                     threshold=None,
                     prefit=False,
                     norm_order=1)

sf.fit(X_df, y)

X_new = X_df[X_df.columns.values[sf.get_support()]]
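The article's summary also promises recursive feature elimination (RFE), which is not shown above. A minimal sketch on the same dataset follows; the choice of estimator, `max_iter`, and `n_features_to_select=2` are illustrative, not from the original:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# RFE fits the estimator, drops the weakest feature (smallest |coef|),
# and refits repeatedly until n_features_to_select features remain
rfe = RFE(estimator=LogisticRegression(max_iter=1000),
          n_features_to_select=2)
rfe.fit(X_df, y)

# support_ is a boolean mask; ranking_ is 1 for every kept feature
X_new = X_df.loc[:, rfe.support_]
print(X_new.columns.tolist())
```

Unlike SelectFromModel, which filters once on a single fit, RFE refits the model after each elimination, so feature importances are re-estimated as the feature set shrinks.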

from sklearn.feature_selection import VarianceThreshold
from sklearn.datasets import load_iris
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# Best suited to numeric features; for categorical features, consider
# the proportion of each category instead
ts = 0.5
vt = VarianceThreshold(threshold=ts)
vt.fit(X_df)

# Inspect the variance of each feature
dict_variance = {}
for i, j in zip(X_df.columns.values, vt.variances_):
    dict_variance[i] = j

# Names of the features that were kept
ls = list()
for i, j in dict_variance.items():
    if j >= ts:
        ls.append(i)
# The transformer is already fitted, so transform() suffices here
X_new = pd.DataFrame(vt.transform(X_df), columns=ls)
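The summary also mentions the Pearson correlation coefficient, which none of the snippets above demonstrate. A minimal sketch using pandas' built-in correlation to rank features by their absolute correlation with the target; keeping `k = 2` features mirrors the earlier examples and is an illustrative choice:

```python
from sklearn.datasets import load_iris
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))

# Pearson correlation between each feature and the target; rank by
# absolute value, since strong negative correlation is equally informative
corr = X_df.corrwith(pd.Series(y)).abs().sort_values(ascending=False)

# Number of features to keep
k = 2
ls_new_feature = corr.index[:k].tolist()
X_new = X_df[ls_new_feature]
print(ls_new_feature)
```

Note that Pearson correlation only captures linear relationships with the target; mutual information (shown earlier) can also detect nonlinear dependence.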



 
