AI Competition 2 - XGBoost: Tmall Repeat-Buyer Prediction

1. Competition Background

  • Merchants sometimes run large promotions or hand out coupons on particular dates, such as Black Friday or Singles' Day (November 11), to attract shoppers. However, many of the buyers attracted this way are one-time customers, so these promotions may contribute little to long-term sales growth. To address this, merchants need to identify which shoppers can be converted into repeat buyers. By targeting these potentially loyal customers, merchants can greatly reduce promotion costs and improve their return on investment (ROI). It is well known that precisely targeting customers in online advertising is difficult, especially new customers. With the long-running user-behavior logs accumulated by Tmall, however, we may be able to solve this problem with a predictive model.
  • Tmall provides information about a set of merchants, together with the new customers who bought from them during the Singles' Day promotion. The task is to predict which of these new customers will become loyal to a given merchant, i.e., the probability that each new customer buys from that merchant again within the next 6 months.

2. Data Description

  • The dataset contains anonymized users' shopping records from the 6 months before Singles' Day and from Singles' Day itself, labeled by whether the user is a repeat buyer. For privacy protection the data is sampled with some bias, so statistics computed on it will deviate somewhat from Tmall's actual figures, but this does not affect the applicability of the solution.
  • The details below cover three tables: the user behavior log, the user profile table, and the training/test data.

2.1 User Behavior Log

action_type field:
  0 = click
  1 = add to cart
  2 = purchase
  3 = add to favorites

2.2 User Profile Table

age_range field:
  1 = under 18
  2 = [18,24]
  3 = [25,29]
  4 = [30,34]
  5 = [35,39]
  6 = [40,49]
  7 and 8 = 50 or older
  0 and null = unknown

gender field:
  0 = female
  1 = male
  2 and null = unknown

2.3 Training/Test Data Table

label field:
  0 = not a repeat buyer
  1 = repeat buyer

3. Library Imports

import gc  # garbage collection
import pandas as pd  # analysis library
import numpy as np
import warnings
warnings.filterwarnings('ignore')
from sklearn.model_selection import train_test_split  # hold-out split
from sklearn.model_selection import StratifiedKFold  # stratified splits for cross-validation
import lightgbm as lgb  # Microsoft's gradient-boosting library
import xgboost as xgb

4. Data Loading

%%time
# 4.1 User behavior log
user_log = pd.read_csv('./data_format1/user_log_format1.csv', dtype={'time_stamp':'str'})
# 4.2 User profile data
user_info = pd.read_csv('./data_format1/user_info_format1.csv')
# 4.3 Training data
train_data = pd.read_csv('./data_format1/train_format1.csv')
# 4.4 Test data
test_data = pd.read_csv('./data_format1/test_format1.csv')

5. Data Inspection

print('---data shape---')     
for data in [user_log, user_info, train_data, test_data]:
    print(data.shape)
---data shape---
(54925330, 7)
(424170, 3)
(260864, 3)
(261477, 3)


print('---data info ---')
for data in [user_log, user_info, train_data, test_data]:
    print(data.info())
---data info ---
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 54925330 entries, 0 to 54925329
Data columns (total 7 columns):
 #   Column       Dtype  
---  ------       -----  
 0   user_id      int64  
 1   item_id      int64  
 2   cat_id       int64  
 3   seller_id    int64  
 4   brand_id     float64
 5   time_stamp   object 
 6   action_type  int64  
dtypes: float64(1), int64(5), object(1)
memory usage: 2.9+ GB
None
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 424170 entries, 0 to 424169
Data columns (total 3 columns):
 #   Column     Non-Null Count   Dtype  
---  ------     --------------   -----  
 0   user_id    424170 non-null  int64  
 1   age_range  421953 non-null  float64
 2   gender     417734 non-null  float64
dtypes: float64(2), int64(1)
memory usage: 9.7 MB
None
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 260864 entries, 0 to 260863
Data columns (total 3 columns):
 #   Column       Non-Null Count   Dtype
---  ------       --------------   -----
 0   user_id      260864 non-null  int64
 1   merchant_id  260864 non-null  int64
 2   label        260864 non-null  int64
dtypes: int64(3)
memory usage: 6.0 MB
None
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 261477 entries, 0 to 261476
Data columns (total 3 columns):
 #   Column       Non-Null Count   Dtype  
---  ------       --------------   -----  
 0   user_id      261477 non-null  int64  
 1   merchant_id  261477 non-null  int64  
 2   prob         0 non-null       float64
dtypes: float64(1), int64(2)
memory usage: 6.0 MB
None
display(user_info.head())

display(train_data.head(),test_data.head())
# Note that user_log alone takes about 2.9 GB of memory, so intermediate objects must be freed promptly during processing.

6. Dataset Construction

# 6.1 Combine the training and test data
train_data['origin'] = 'train'
test_data['origin'] = 'test'
all_data = pd.concat([train_data, test_data], ignore_index=True, sort=False)  
all_data.drop(['prob'], axis=1, inplace=True)  # drop 'prob', a column that only exists in the test data
display(all_data.head(),all_data.shape)
(522341, 4)

all_data = all_data.merge(user_info, on='user_id', how='left')  # left-join user_info onto all_data by user_id
display(all_data.shape,all_data.head())
(522341, 6)

user_log.rename(columns={'seller_id':'merchant_id'}, inplace=True)  # rename seller_id to merchant_id
del train_data, test_data, user_info
gc.collect()  # reclaim memory
48

7. Data Type Conversion

%%time
display(user_log.info())  # info before type conversion
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 54925330 entries, 0 to 54925329
Data columns (total 7 columns):
 #   Column       Dtype  
---  ------       -----  
 0   user_id      int64  
 1   item_id      int64  
 2   cat_id       int64  
 3   merchant_id  int64  
 4   brand_id     float64
 5   time_stamp   object 
 6   action_type  int64  
dtypes: float64(1), int64(5), object(1)
memory usage: 2.9+ GB
None
Wall time: 8.94 ms
%%time
display(user_log.head())
Wall time: 7.99 ms

7.1 Converting the user-log dtypes

%%time
user_log['user_id'] = user_log['user_id'].astype('int32')
user_log['merchant_id'] = user_log['merchant_id'].astype('int32')
user_log['item_id'] = user_log['item_id'].astype('int32')
user_log['cat_id'] = user_log['cat_id'].astype('int32')
user_log['brand_id'].fillna(0, inplace=True)
user_log['brand_id'] = user_log['brand_id'].astype('int32')
user_log['time_stamp'] = pd.to_datetime(user_log['time_stamp'], format='%m%d')  # time_stamp is an 'mmdd' string such as '1111'
user_log['action_type'] = user_log['action_type'].astype('int32')
display(user_log.info(),user_log.head())  # info after type conversion
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 54925330 entries, 0 to 54925329
Data columns (total 7 columns):
 #   Column       Dtype         
---  ------       -----         
 0   user_id      int32         
 1   item_id      int32         
 2   cat_id       int32         
 3   merchant_id  int32         
 4   brand_id     int32         
 5   time_stamp   datetime64[ns]
 6   action_type  int32         
dtypes: datetime64[ns](1), int32(6)
memory usage: 1.6 GB
None
Wall time: 8.59 s


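The int64 to int32 downcasts above roughly halve the log's integer columns in memory. A small self-contained demonstration of the same idea on synthetic data (not the competition file):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a slice of user_log.
df = pd.DataFrame({
    'user_id': np.arange(1_000, dtype='int64'),
    'action_type': np.zeros(1_000, dtype='int64'),
})
before = df.memory_usage(deep=True).sum()

# Downcast: these id/code values fit comfortably in 32 bits.
df['user_id'] = df['user_id'].astype('int32')
df['action_type'] = df['action_type'].astype('int32')
after = df.memory_usage(deep=True).sum()

print(before, after)  # the int columns now take half the bytes
```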
display(all_data.isnull().sum())  # null counts per column
user_id             0
merchant_id         0
label          261477
origin              0
age_range        2578
gender           7545
dtype: int64

7.2 Filling missing values in all_data

all_data['age_range'].fillna(0, inplace=True)
all_data['gender'].fillna(2, inplace=True)
all_data.isnull().sum()
user_id             0
merchant_id         0
label          261477
origin              0
age_range           0
gender              0
dtype: int64

7.3 Converting the all_data dtypes

all_data.info()  # before conversion
<class 'pandas.core.frame.DataFrame'>
Int64Index: 522341 entries, 0 to 522340
Data columns (total 6 columns):
 #   Column       Non-Null Count   Dtype  
---  ------       --------------   -----  
 0   user_id      522341 non-null  int64  
 1   merchant_id  522341 non-null  int64  
 2   label        260864 non-null  float64
 3   origin       522341 non-null  object 
 4   age_range    522341 non-null  float64
 5   gender       522341 non-null  float64
dtypes: float64(3), int64(2), object(1)
memory usage: 27.9+ MB
all_data['age_range'] = all_data['age_range'].astype('int8')
all_data['gender'] = all_data['gender'].astype('int8')
all_data['label'] = all_data['label'].astype('str')
all_data['user_id'] = all_data['user_id'].astype('int32')
all_data['merchant_id'] = all_data['merchant_id'].astype('int32')
all_data.info()  # after conversion
<class 'pandas.core.frame.DataFrame'>
Int64Index: 522341 entries, 0 to 522340
Data columns (total 6 columns):
 #   Column       Non-Null Count   Dtype 
---  ------       --------------   ----- 
 0   user_id      522341 non-null  int32 
 1   merchant_id  522341 non-null  int32 
 2   label        522341 non-null  object
 3   origin       522341 non-null  object
 4   age_range    522341 non-null  int8  
 5   gender       522341 non-null  int8  
dtypes: int32(2), int8(2), object(2)
memory usage: 16.9+ MB

8. Feature Engineering 1: Users

%%time
groups = user_log.groupby(['user_id'])  # group the log by user
# 8.1 Total number of interactions per user: u1
temp = groups.size().reset_index().rename(columns={0:'u1'})
all_data = all_data.merge(temp, on='user_id', how='left')
# 8.2 Number of distinct item_id values: u2
temp = groups['item_id'].agg([('u2', 'nunique')]).reset_index()  # agg aggregates the selected column
all_data = all_data.merge(temp, on='user_id', how='left')  # how many different items the user touched
# 8.3 Number of distinct cat_id values: u3
temp = groups['cat_id'].agg([('u3', 'nunique')]).reset_index()  # how many different categories the user touched
all_data = all_data.merge(temp, on='user_id', how='left')
# 8.4 Number of distinct merchant_id values: u4
temp = groups['merchant_id'].agg([('u4', 'nunique')]).reset_index()
all_data = all_data.merge(temp, on='user_id', how='left')
# 8.5 Number of distinct brand_id values: u5
temp = groups['brand_id'].agg([('u5', 'nunique')]).reset_index()
all_data = all_data.merge(temp, on='user_id', how='left')
# 8.6 Time span between the user's first and last recorded action: u6
temp = groups['time_stamp'].agg([('F_time', 'min'), ('B_time', 'max')]).reset_index()
temp['u6'] = (temp['B_time'] - temp['F_time']).dt.days  # span in days
all_data = all_data.merge(temp[['user_id', 'u6']], on='user_id', how='left')
# 8.7 Counts of action types 0, 1, 2, 3: u7-u10
temp = groups['action_type'].value_counts().unstack().reset_index().rename(
    columns={0:'u7', 1:'u8', 2:'u9', 3:'u10'})
all_data = all_data.merge(temp, on='user_id', how='left')
del temp, groups
gc.collect()
Wall time: 4min 18s
all_data.head()


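The repeated agg-then-merge pattern above can also be written as a single named aggregation per group. A sketch on a tiny synthetic log (the u1/u2/u3 names follow the features above):

```python
import pandas as pd

# Tiny synthetic stand-in for user_log.
log = pd.DataFrame({
    'user_id': [1, 1, 1, 2, 2],
    'item_id': [10, 10, 11, 12, 13],
    'cat_id':  [100, 100, 101, 102, 102],
})

# One groupby producing several user-level features at once.
feats = log.groupby('user_id').agg(
    u1=('item_id', 'size'),      # number of interactions
    u2=('item_id', 'nunique'),   # distinct items
    u3=('cat_id', 'nunique'),    # distinct categories
).reset_index()

print(feats)
```

This avoids building a throwaway `temp` frame per feature, at the cost of one larger groupby.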
9. Feature Engineering 2: Merchants

%%time
groups = user_log.groupby(['merchant_id'])
# 9.1 Total number of interactions the merchant received: m1
temp = groups.size().reset_index().rename(columns={0:'m1'})
all_data = all_data.merge(temp, on='merchant_id', how='left')
# 9.2 Distinct user_id, item_id, cat_id and brand_id counts per merchant: m2-m5
temp = groups[['user_id', 'item_id', 'cat_id', 'brand_id']].nunique().reset_index().rename(
    columns={
    'user_id':'m2',
    'item_id':'m3', 
    'cat_id':'m4', 
    'brand_id':'m5'})
all_data = all_data.merge(temp, on='merchant_id', how='left')
# 9.3 Counts of each action_type the merchant received: m6-m9
temp = groups['action_type'].value_counts().unstack().reset_index().rename(  
    columns={0:'m6', 1:'m7', 2:'m8', 3:'m9'})
all_data = all_data.merge(temp, on='merchant_id', how='left')
del temp
gc.collect()
Wall time: 5min 21s
8798

display(all_data.tail())

10. Feature Engineering 3: User-Merchant Pairs

%%time
groups = user_log.groupby(['user_id', 'merchant_id'])
# 10.1 Number of interactions for each user-merchant pair: um1
temp = groups.size().reset_index().rename(columns={0:'um1'})
all_data = all_data.merge(temp, on=['user_id', 'merchant_id'], how='left')
# 10.2 Distinct item_id, cat_id and brand_id counts per pair: um2-um4
temp = groups[['item_id', 'cat_id', 'brand_id']].nunique().reset_index().rename(
    columns={
    'item_id':'um2',
    'cat_id':'um3',
    'brand_id':'um4'})
all_data = all_data.merge(temp, on=['user_id', 'merchant_id'], how='left')
# 10.3 Counts of each action_type per pair: um5-um8
temp = groups['action_type'].value_counts().unstack().reset_index().rename(
    columns={
    0:'um5',
    1:'um6',
    2:'um7',
    3:'um8'})
all_data = all_data.merge(temp, on=['user_id', 'merchant_id'], how='left')
# 10.4 Time span between the pair's first and last recorded action: um9
temp = groups['time_stamp'].agg([('F_time', 'min'), ('B_time', 'max')]).reset_index()
temp['um9'] = (temp['B_time'] - temp['F_time']).dt.days  # span in days
all_data = all_data.merge(temp[['user_id','merchant_id','um9']], on=['user_id', 'merchant_id'], how='left')
del temp, groups
gc.collect()
Wall time: 4min 22s
9096

display(all_data.head())

11. Feature Engineering 4: Purchase-to-Click Ratios

# 11.1 User-level purchase-to-click ratio
all_data['r1'] = all_data['u9']/all_data['u7']    
# 11.2 Merchant-level purchase-to-click ratio
all_data['r2'] = all_data['m8']/all_data['m6']    
# 11.3 Purchase-to-click ratio per user-merchant pair
all_data['r3'] = all_data['um7']/all_data['um5']  
display(all_data.head())


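These ratios divide by click counts that may be missing or zero, which is why `r3` shows nulls below; 0/0 yields NaN and k/0 yields inf in pandas. A minimal illustration with synthetic values (hypothetical column names):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'buys': [3, 0, 2], 'clicks': [10, 0, 0]})
df['ratio'] = df['buys'] / df['clicks']
print(df['ratio'].tolist())  # [0.3, nan, inf]
```

In this pipeline the missing counts are still NaN at this point (they are only filled with 0 in the next section), so NaN is the case that actually occurs; had the counts been filled with 0 first, inf values could appear as well, and `fillna(0)` would not remove them.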
12. Filling Missing Values

display(all_data.isnull().sum())
user_id             0
merchant_id         0
...
r2                  0
r3              59408
dtype: int64

all_data.fillna(0, inplace=True)   # fill NaN values with 0
all_data.isnull().sum()
user_id        0
merchant_id    0
...
r2             0
r3             0
dtype: int64

13. Feature Encoding: Age and Gender

all_data['age_range']
0         6
1         6
2         6
         ..
522339    0
522340    0
Name: age_range, Length: 522341, dtype: int8
%%time
# 13.1 One-hot encode age
temp = pd.get_dummies(all_data['age_range'], prefix='age')  # one-hot encoding
display(temp.head(10))   # age_range becomes indicator columns age_0, age_1, ... age_8
all_data = pd.concat([all_data, temp], axis=1)

# 13.2 One-hot encode gender
temp = pd.get_dummies(all_data['gender'], prefix='g')
all_data = pd.concat([all_data, temp], axis=1)  # concatenate along columns
# 13.3 Drop the original age and gender columns
all_data.drop(['age_range', 'gender'], axis=1, inplace=True)
del temp
gc.collect()
18438

all_data.head()


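`pd.get_dummies` expands a categorical column into indicator columns named `prefix_value`; a quick self-contained example mirroring the gender encoding above:

```python
import pandas as pd

s = pd.Series([0, 1, 2, 0], name='gender')   # 0 = female, 1 = male, 2 = unknown
dummies = pd.get_dummies(s, prefix='g')
print(list(dummies.columns))  # ['g_0', 'g_1', 'g_2']
```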
14. Saving the Data

%%time
train_data = all_data[all_data['origin'] == 'train'].drop(['origin'], axis=1)
test_data = all_data[all_data['origin'] == 'test'].drop(['label', 'origin'], axis=1)
train_data.to_csv('train_data.csv', index=False)  # index=False avoids an extra 'Unnamed: 0' column on reload
test_data.to_csv('test_data.csv', index=False)
Wall time: 9.14 s

15. LightGBM Model

# 15.1 Model training
def lgb_train(X_train, y_train, X_valid, y_valid, verbose=True):
    model_lgb = lgb.LGBMClassifier(
        max_depth=10,           # maximum tree depth
        n_estimators=5000,      # number of boosting trees
        min_child_weight=100, 
        colsample_bytree=0.7,   # fraction of features sampled per tree
        subsample=0.9,          # fraction of rows sampled per tree
        learning_rate=0.1)
    model_lgb.fit(
        X_train, 
        y_train,
        eval_metric='auc',
        eval_set=[(X_train, y_train), (X_valid, y_valid)],
        verbose=verbose,             # whether to print training progress
        early_stopping_rounds=10)    # stop after 10 rounds without validation improvement (lightgbm >= 4 uses callbacks=[lgb.early_stopping(10)] instead)
    print(model_lgb.best_score_['valid_1']['auc'])
    return model_lgb
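The notebook references `X_train`/`X_valid` below without showing how they were produced. A plausible reconstruction using the `train_test_split` imported earlier (the 80/20 ratio and `random_state` are assumptions; here a tiny synthetic frame stands in for the `train_data` built in section 14, whose `label` column was cast to str):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Tiny synthetic stand-in for the train_data saved in section 14.
train_data = pd.DataFrame({
    'user_id': range(10),
    'u1': [5, 3, 8, 2, 9, 1, 4, 7, 6, 0],
    'label': ['0.0', '1.0'] * 5,   # label was stored as str earlier
})
train_X = train_data.drop(['label'], axis=1)
train_y = train_data['label'].astype('float').astype('int8')  # back to 0/1

X_train, X_valid, y_train, y_valid = train_test_split(
    train_X, train_y,
    test_size=0.2,      # assumed 80/20 hold-out
    stratify=train_y,   # preserve the class ratio in both splits
    random_state=42)
print(X_train.shape, X_valid.shape)  # (8, 2) (2, 2)
```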
X_train

X_train.values
array([[2.10694e+05, 2.92800e+03, 1.10000e+01, ..., 1.00000e+00,
        0.00000e+00, 0.00000e+00],
       [1.87320e+05, 3.48400e+03, 5.70000e+01, ..., 0.00000e+00,
        1.00000e+00, 0.00000e+00],
       [3.45450e+05, 3.35000e+03, 6.70000e+01, ..., 1.00000e+00,
        0.00000e+00, 0.00000e+00],
       ...,
       [2.30182e+05, 3.12300e+03, 2.90000e+01, ..., 1.00000e+00,
        0.00000e+00, 0.00000e+00],
       [8.60920e+04, 4.04400e+03, 1.65000e+02, ..., 1.00000e+00,
        0.00000e+00, 0.00000e+00],
       [1.06327e+05, 1.49900e+03, 2.30000e+02, ..., 1.00000e+00,
        0.00000e+00, 0.00000e+00]])
        
model_lgb = lgb_train(X_train.values, y_train, X_valid.values, y_valid, verbose=True)
[1]	training's auc: 0.640009	training's binary_logloss: 0.228212	valid_1's auc: 0.627955	valid_1's binary_logloss: 0.229246
Training until validation scores don't improve for 10 rounds.
[2]	training's auc: 0.648943	training's binary_logloss: 0.226955	valid_1's auc: 0.636741	valid_1's binary_logloss: 0.228055
...
[113]	training's auc: 0.734736	training's binary_logloss: 0.208817	valid_1's auc: 0.676738	valid_1's binary_logloss: 0.218372
[114]	training's auc: 0.73495	training's binary_logloss: 0.208776	valid_1's auc: 0.676778	valid_1's binary_logloss: 0.218366
Early stopping, best iteration is:
[104]	training's auc: 0.731472	training's binary_logloss: 0.209388	valid_1's auc: 0.6767	valid_1's binary_logloss: 0.218354
0.6766996167512115
# 15.2 Model prediction
%%time
prob = model_lgb.predict_proba(test_data.values)
prob
array([[0.94812479, 0.05187521],
       [0.88438038, 0.11561962],
       [0.94936734, 0.05063266],
       ...,
       [0.85627925, 0.14372075],
       [0.95222606, 0.04777394],
       [0.92295125, 0.07704875]])   
# 15.3 Repurchase probability for each user-merchant pair
submission = pd.read_csv('./data_format1/test_format1.csv')
submission['prob'] = pd.Series(prob[:,1])   # write the predictions into the submission frame
display(submission.head())
submission.to_csv('submission_lgb.csv', index=False)
del submission
gc.collect()
22630

16. XGBoost Model

# 16.1 Model training
def xgb_train(X_train, y_train, X_valid, y_valid, verbose=True):
    model_xgb = xgb.XGBClassifier(
        max_depth=10,  # previously 8
        n_estimators=5000,
        min_child_weight=300, 
        colsample_bytree=0.7, 
        subsample=0.9, 
        learning_rate=0.1)
    model_xgb.fit(
        X_train, 
        y_train,
        eval_metric='auc',
        eval_set=[(X_train, y_train), (X_valid, y_valid)],
        verbose=verbose,
        early_stopping_rounds=10)   # early stopping: halt if AUC has not improved for 10 rounds (newer xgboost takes this in the constructor)
    print(model_xgb.best_score)
    return model_xgb
model_xgb = xgb_train(X_train, y_train, X_valid, y_valid, verbose=False)
0.673734
# 16.2 Model prediction
%%time
prob = model_xgb.predict_proba(test_data)
# 16.3 Repurchase probability for each user-merchant pair
submission = pd.read_csv('./data_format1/test_format1.csv')
submission['prob'] = pd.Series(prob[:,1])
submission.to_csv('submission_xgb.csv', index=False)
display(submission.head())
del submission
gc.collect()
509

17. Cross-Validated Multi-Round Modeling

# 17.1 Build 10 training/validation splits
def get_train_test_datas(train_df, label_df):
    skv = StratifiedKFold(n_splits=10, shuffle=True)
    trainX = []
    trainY = []
    testX = []
    testY = []  # train_index indexes the training rows, test_index the validation rows
    for train_index, test_index in skv.split(X=train_df, y=label_df):  # 10 iterations
        train_x, train_y, test_x, test_y = train_df.iloc[train_index, :], label_df.iloc[train_index], \
                                            train_df.iloc[test_index, :], label_df.iloc[test_index]
        trainX.append(train_x)
        trainY.append(train_y)
        testX.append(test_x)
        testY.append(test_y)
    return trainX, testX, trainY, testY
# 17.2 LightGBM model
%%time
train_X, train_y = train_data.drop(['label'], axis=1), train_data['label']
X_train, X_valid, y_train, y_valid = get_train_test_datas(train_X, train_y)  # 10 training/validation splits
print('----training data, length', len(X_train))
print('----validation data, length', len(X_valid))
pred_lgbms = []  # collects the per-fold predictions, averaged at the end
for i in range(10):
    print('\n============================LGB training use Data {}/10============================\n'.format(i+1))
    model_lgb = lgb.LGBMClassifier(
        max_depth=10,  # previously 8
        n_estimators=1000,
        min_child_weight=100,
        colsample_bytree=0.7,
        subsample=0.9,
        learning_rate=0.05)
    model_lgb.fit(
        X_train[i].values, 
        y_train[i],
        eval_metric='auc',
        eval_set=[(X_train[i].values, y_train[i]), (X_valid[i].values, y_valid[i])],
        verbose=False,
        early_stopping_rounds=10)
    print(model_lgb.best_score_['valid_1']['auc'])
    pred = model_lgb.predict_proba(test_data.values)
    pred = pd.DataFrame(pred[:,1])   # repurchase probabilities as a DataFrame column
    pred_lgbms.append(pred)          # one column per fold, averaged below
pred_lgbms = pd.concat(pred_lgbms, axis=1)   # concatenate the fold columns
submission = pd.read_csv('./data_format1/test_format1.csv')  # load the submission template
submission['prob'] = pred_lgbms.mean(axis=1)   # mean over the 10 folds
submission.to_csv('submission_KFold_lgb.csv', index=False)   # save the result
----training data, length 10
----validation data, length 10

============================LGB training use Data 1/10============================

0.6764325578514025

...
============================LGB training use Data 9/10============================

0.6850513956313553

============================LGB training use Data 10/10============================

0.6863430735031704
Wall time: 1min 17s
pred_lgbms

# 17.3 Build 20 training/validation splits
def get_train_test_datas(train_df,label_df):
    skv = StratifiedKFold(n_splits=20, shuffle=True)
    trainX = []
    trainY = []
    testX = []
    testY = []
    for train_index, test_index in skv.split(X=train_df, y=label_df):
        train_x, train_y, test_x, test_y = train_df.iloc[train_index, :], label_df.iloc[train_index], \
                                            train_df.iloc[test_index, :], label_df.iloc[test_index]
        trainX.append(train_x)
        trainY.append(train_y)
        testX.append(test_x)
        testY.append(test_y)
    return trainX, testX, trainY, testY
# 17.4 XGBoost model
%%time
train_X, train_y = train_data.drop(['label'], axis=1), train_data['label']
X_train, X_valid, y_train, y_valid = get_train_test_datas(train_X, train_y)  # 20 training/validation splits
print('------data length', len(X_train), len(y_train))
pred_xgbs = []
for i in range(20):
    print('\n============================XGB training use Data {}/20============================\n'.format(i+1))
    model_xgb = xgb.XGBClassifier(
        max_depth=10,  # previously 8
        n_estimators=5000,
        min_child_weight=200, 
        colsample_bytree=0.7, 
        subsample=0.9,
        learning_rate=0.1)
    model_xgb.fit(
        X_train[i], 
        y_train[i],
        eval_metric='auc',
        eval_set=[(X_train[i], y_train[i]), (X_valid[i], y_valid[i])],
        verbose=False,
        early_stopping_rounds=10)   # early stopping: halt if AUC has not improved for 10 rounds
    print(model_xgb.best_score)
    pred = model_xgb.predict_proba(test_data)
    pred = pd.DataFrame(pred[:,1])
    pred_xgbs.append(pred)
pred_xgbs = pd.concat(pred_xgbs, axis=1)  # average the 20 folds to build the submission
submission = pd.read_csv('./data_format1/test_format1.csv')
submission['prob'] = pred_xgbs.mean(axis=1)
submission.to_csv('submission_KFold_xgb.csv', index=False)
------data length 20 20

============================XGB training use Data 1/20============================

0.710332
...

============================XGB training use Data 19/20============================

0.674658

============================XGB training use Data 20/20============================

0.694476
Wall time: 8min 39s
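The two K-fold submissions can also be blended by averaging their probabilities, a common final step that is not in the original notebook. A self-contained sketch (synthetic frames stand in for the submission_KFold_lgb.csv and submission_KFold_xgb.csv files written above):

```python
import pandas as pd

# Stand-ins for the two saved submission files.
sub_lgb = pd.DataFrame({'user_id': [1, 2], 'merchant_id': [10, 20], 'prob': [0.10, 0.30]})
sub_xgb = pd.DataFrame({'user_id': [1, 2], 'merchant_id': [10, 20], 'prob': [0.20, 0.50]})

blend = sub_lgb.copy()
blend['prob'] = (sub_lgb['prob'] + sub_xgb['prob']) / 2  # simple mean of the two models
print(blend['prob'].tolist())
```

An unweighted mean is the simplest blend; weighting by each model's validation AUC is a natural refinement.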