Learning objectives
Get to know the commonly used machine learning models and master the workflow for building and tuning them.
Overview
- Linear regression:
  - Feature requirements for linear regression;
  - Handling long-tailed distributions;
  - Understanding the linear regression model;
- Model validation:
  - Evaluation metrics vs. objective functions;
  - Cross-validation;
  - Leave-one-out validation;
  - Validation for time-series problems;
  - Plotting learning curves;
  - Plotting validation curves;
- Embedded feature selection:
  - Lasso regression;
  - Ridge regression;
  - Decision trees;
- Model comparison:
  - Common linear models;
  - Common non-linear models;
- Hyperparameter tuning:
  - Greedy search;
  - Grid search;
  - Bayesian optimization;
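Of the tuning strategies listed above, grid search is the most mechanical: try every combination in a parameter grid and keep the one with the best cross-validated score. A minimal sketch with scikit-learn's `GridSearchCV` (the synthetic data and the particular parameter grid are illustrative assumptions, not taken from this notebook):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data standing in for the real dataset (assumption)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Exhaustively evaluate every combination in the grid with 5-fold CV
param_grid = {'max_depth': [3, 5, 7], 'min_samples_leaf': [1, 5, 10]}
search = GridSearchCV(DecisionTreeRegressor(random_state=0), param_grid,
                      scoring='neg_mean_absolute_error', cv=5)
search.fit(X, y)
print(search.best_params_)
```

Greedy search tunes one parameter at a time holding the others fixed (cheaper, but can miss interactions), while Bayesian optimization models the score surface to pick promising combinations with fewer evaluations.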
Related resources
1. Linear regression model: https://zhuanlan.zhihu.com/p/49480391
2. Decision tree model: https://zhuanlan.zhihu.com/p/65304798
3. GBDT model: https://zhuanlan.zhihu.com/p/45145899
4. XGBoost model: https://zhuanlan.zhihu.com/p/86816771
5. LightGBM model: https://zhuanlan.zhihu.com/p/89360721
6. Recommended books:
•《机器学习》 https://book.douban.com/subject/26708119/
•《统计学习方法》 https://book.douban.com/subject/10590856/
•《Python大战机器学习》 https://book.douban.com/subject/26987890/
•《面向机器学习的特征工程》 https://book.douban.com/subject/26826639/
•《数据科学家访谈录》 https://book.douban.com/subject/30129410/
Code examples
1 Loading the data
# Import the modules and read the data
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
The reduce_mem_usage function shrinks a DataFrame's memory footprint by downcasting each column to the smallest dtype that can hold its value range:
def reduce_mem_usage(df):
    """
    Downcast column dtypes to reduce memory consumption.
    """
    start_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    for col in df.columns:
        col_type = df[col].dtype
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                else:
                    df[col] = df[col].astype(np.int64)
            else:
                # Caution: float16 keeps only ~3 significant decimal digits
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')
    end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
sample_feature = reduce_mem_usage(pd.read_csv('data_for_tree.csv'))
continuous_feature_names = [x for x in sample_feature.columns if x not in ['price', 'brand', 'model']]
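One caveat worth knowing before applying this downcasting: float16 trades precision for memory, keeping only about three significant decimal digits. A small self-contained illustration (independent of the competition data):

```python
import numpy as np
import pandas as pd

# A float64 column downcast to float16 silently loses precision
df = pd.DataFrame({'x': [1234.5678, 0.123456]})
lossy = df['x'].astype(np.float16)

# 2 values x 8 bytes vs 2 values x 2 bytes
print(df['x'].memory_usage(index=False), lossy.memory_usage(index=False))
print(lossy.tolist())  # the stored values are no longer exact
```

If a column feeds into precision-sensitive arithmetic, it may be safer to stop at float32.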
2 Linear regression, five-fold cross-validation, and simulating the real business scenario
sample_feature = sample_feature.dropna().replace('-', 0).reset_index(drop=True)
sample_feature['notRepairedDamage'] = sample_feature['notRepairedDamage'].astype(np.float32)
train = sample_feature[continuous_feature_names + ['price']]
train_X = train[continuous_feature_names]
train_y = train['price']
2.1 A simple baseline model
from sklearn.linear_model import LinearRegression
# Note: the `normalize` argument was deprecated and removed in scikit-learn 1.2;
# with recent versions, standardize features explicitly (e.g. StandardScaler).
model = LinearRegression()
model = model.fit(train_X, train_y)
# Inspect the fitted intercept and the feature weights (coefficients)
print('intercept:' + str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x: x[1], reverse=True)
The output looks like:
[('v_6', 3342612.384537345),
 ('v_8', 684205.534533214),
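Coefficients this large usually signal a long-tailed target, which the outline above flags as something linear regression needs handled. The five-fold cross-validation named in this section's title can be sketched with `cross_val_score`, training against `np.log1p(price)` and inverting with `np.expm1` at prediction time (synthetic data stands in for `train_X`/`train_y` here, and the MAE scorer is an assumption about the evaluation metric):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for train_X / train_y with a long-tailed target (assumption)
rng = np.random.RandomState(0)
X = rng.rand(500, 5)
y = np.expm1(X.sum(axis=1) + rng.normal(0, 0.05, 500))

# Five-fold CV, fitting against the log-transformed target;
# scikit-learn reports MAE as a negative score (higher is better)
scores = cross_val_score(LinearRegression(), X, np.log1p(y),
                         scoring='neg_mean_absolute_error', cv=5)
print('log-space MAE per fold:', -scores)
```

Note that MAE computed in log space is not directly comparable to MAE on raw prices; to compare models on the original scale, apply `np.expm1` to the predictions before scoring.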