《Python金融大数据风控建模实战》 (Python Financial Big-Data Risk-Control Modeling in Practice), Chapter 7: Variable Selection
Chapter Introduction
Common variable selection methods fall into the filter, wrapper, and embedded families, and within these families selection can further be univariate or multivariate, supervised or unsupervised. In practice, selecting variables purely from a data-mining perspective is not enough: the selected variables should also be back-tested against business understanding so that the final variable set remains interpretable from a business point of view.
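The sketch below is a minimal, hedged illustration of the three families named above, contrasting a filter method (VarianceThreshold), a wrapper method (RFECV around a logistic regression), and an embedded method (SelectFromModel with L1 regularization) on synthetic data. The dataset and every parameter value are placeholders chosen for illustration only, not settings from this chapter.

# Illustrative only: synthetic data and arbitrary parameter values.
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold, RFECV, SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)

# Filter method: unsupervised and univariate -- drop near-constant features by variance alone
X_filter = VarianceThreshold(threshold=0.1).fit_transform(X)

# Wrapper method: supervised and multivariate -- recursively drop features by cross-validated model performance
X_wrapper = RFECV(LogisticRegression(max_iter=1000), cv=5).fit_transform(X, y)

# Embedded method: supervised -- keep features whose coefficients survive L1 regularization
l1_model = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)
X_embedded = SelectFromModel(l1_model).fit_transform(X, y)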
Python Code Implementation and Comments
# Chapter 7: Variable selection
'''
Variable encoding is carried out on top of variable binning, and variable selection then follows.
The variable selection code mainly relies on the feature_selection module of scikit-learn; the
functions commonly used in practice are:
category_continue_separation: separate categorical and numerical variables
sklearn.feature_selection.VarianceThreshold: variance-based filtering
sklearn.feature_selection.SelectKBest: univariate selection
pandas.DataFrame.corr: correlation-coefficient matrix
sklearn.feature_selection.RFECV: recursive feature elimination with cross-validation
sklearn.feature_selection.SelectFromModel: embedded (model-based) selection
feature_selector: an integrated feature-selection tool
Program flow: read data --> split into training and test sets --> separate categorical and
numerical variables --> variable binning --> WOE encoding of the bins --> test multiple
variable selection methods
'''
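##Illustrative sketch (not part of the book's pipeline): how the univariate filter SelectKBest and the
##pandas correlation matrix listed above can be applied; the toy data and the value of k are assumptions.
def _demo_univariate_and_correlation():
    import pandas as pd
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    X, y = make_classification(n_samples=300, n_features=6, n_informative=3, random_state=0)
    df_demo = pd.DataFrame(X, columns=['x' + str(i) for i in range(6)])
    ##univariate filter: keep the k features with the largest ANOVA F statistics
    selector = SelectKBest(score_func=f_classif, k=3).fit(df_demo, y)
    kept_columns = df_demo.columns[selector.get_support()].tolist()
    ##correlation filter: inspect pairwise Pearson correlations to spot redundant variables
    corr_matrix = df_demo.corr()
    return kept_columns, corr_matrix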
import os
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import variable_bin_methods as varbin_meth
import variable_encode as var_encode
import matplotlib
matplotlib.use('Qt5Agg')  ##select the plotting backend before pyplot is imported
import matplotlib.pyplot as plt
matplotlib.rcParams['font.sans-serif'] = ['SimHei']  ##use the SimHei font so Chinese labels display correctly
matplotlib.rcParams['axes.unicode_minus'] = False  ##render the minus sign correctly with a non-default font
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.feature_selection import RFECV
from sklearn.svm import SVR
from sklearn.feature_selection import SelectFromModel
'''
Seaborn is an extension of matplotlib that focuses on statistical visualization
and ships with built-in color palettes, for example:
import seaborn as sns
sns.set_style('darkgrid')   # set the background style
sns.load_dataset('tips')    # load one of seaborn's example datasets
'''
import seaborn as sns
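##Illustrative sketch (an assumed usage, not code from the book): seaborn can render the correlation
##matrix computed with DataFrame.corr as a heatmap, making highly correlated variable pairs visible.
def _demo_corr_heatmap(df_numeric):
    corr_matrix = df_numeric.corr()
    sns.heatmap(corr_matrix, cmap='coolwarm')
    plt.title('Correlation matrix of candidate variables')
    plt.show()
    return corr_matrix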
from sklearn.tree import DecisionTreeClassifier
from feature_selector import FeatureSelector
import warnings
warnings.filterwarnings("ignore")  ##ignore warnings
##data loading
def data_read(data_path,file_name):
df = pd.read_csv( os.path.join(data_path, file_name), delim_whitespace = True, header = None )
##rename the variables
columns = ['status_account','duration','credit_history','purpose', 'amount',
'svaing_account', 'present_emp', 'income_rate', 'personal_status',
'other_debtors', 'residence_info', 'property', 'age',
'inst_plans', 'housing', 'num_credits',
'job', 'dependents', 'telephone', 'foreign_worker', 'target']
df.columns = columns
##map the label from {1, 2} to {0, 1}: 0 = good customer, 1 = bad customer
df.target = df.target - 1
##split the data into data_train and data_test; the training set is used to fit the encoding rules, which are then applied to encode the test set
data_train, data_test = train_test_split(df, test_size=0.2, random_state=0,stratify=df.target)
return data_train, data_test
##separate categorical and numerical variables
def category_continue_separation(df,feature_names):
categorical_var = []
numerical_var = []
if 'target' in feature_names:
feature_names.remove('target')
##check the dtype first: int or float columns are taken directly as numerical variables
numerical_var = list(df[feature_names].select_dtypes(include=['int','float','int32','float32','int64','float64']).columns.values)
categorical_var = [x for x in feature_names if x not in numerical_var]
return categorical_var,numerical_var
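##Illustrative usage sketch with toy data (not from the book): category_continue_separation splits a
##feature list into categorical and numerical variable names according to the column dtypes.
def _demo_separation():
    df_demo = pd.DataFrame({'amount': [100, 250, 80],
                            'purpose': ['car', 'education', 'car'],
                            'target': [0, 1, 0]})
    cat_vars, num_vars = category_continue_separation(df_demo, list(df_demo.columns))
    ##expected result: cat_vars == ['purpose'], num_vars == ['amount']; 'target' is dropped from the feature list
    return cat_vars, num_vars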
if __name__ == '__main__':
path = 'D:\\code\\chapter7'
data_path = os.path.join(path ,'data')
file_name = 'german.csv'
##read the data
data_train, data_test = data_read(data_path