Data Analysis: Data Cleaning and Feature Processing

This article walks through the data cleaning process in detail, covering the handling of missing and duplicate values, data conversion techniques, and feature engineering practice such as age binning, converting text variables, and extracting a Title feature from Name.


Data Cleaning

First, import the numpy and pandas packages and load the dataset.
# load the required libraries
import pandas as pd
import numpy as np
# load the data train.csv
df = pd.read_csv('F:\Datewheel资料\hands-on-data-analysis-master\hands-on-data-analysis-master\第一单元项目集合/train.csv')
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |

2 Chapter 2: Data Cleaning and Feature Processing

The data we get is usually not clean. "Not clean" means it contains missing values, some outliers, and so on, and it needs a certain amount of processing before we can go on to analysis or modelling. So the first step after getting the data is data cleaning. In this chapter we will learn how to deal with missing values, duplicate values, strings and data conversion, and clean the data into a form that can be analysed or modelled.

2.1 Observing and Handling Missing Values

The data we get often has many missing values. For example, we can see NaN in the Cabin column. Do the other columns also contain missing values, and how should they be handled?

2.1.1 Task 1: Observe the missing values

(1) Check the number of missing values in each feature
(2) Look at the data in the Age, Cabin and Embarked columns
Each of the above can be done in more than one way, so the more approaches, the better.

# write your code here
df.info()


<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Name           891 non-null object
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          204 non-null object
Embarked       889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
# write your code here
df.isnull().sum()


PassengerId      0
Survived         0
Pclass           0
Name             0
Sex              0
Age            177
SibSp            0
Parch            0
Ticket           0
Fare             0
Cabin          687
Embarked         2
dtype: int64
# write your code here
#df[['Age','Cabin','Embarked']].head()
df['Age'].isnull().sum()

177
2.1.2 Task 2: Handle the missing values

(1) What are the general approaches to handling missing values?

(2) Try handling the missing values in the Age column

(3) Try different methods to handle the missing values of the whole table at once

Approaches to handling missing values: drop — delete the rows or columns that contain missing values, so that the missing data has less influence on the overall trend; fill — there are many ways to fill, e.g. statistical methods (mean, median, mode) or model-based methods, where a column with many missing values is treated as a target variable and predicted with a model; or do nothing at all.
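To make the two "fill" strategies concrete, here is a minimal sketch: the statistical part uses the median and mode, and the model-based part uses sklearn's LinearRegression purely as an illustrative assumption (it is not part of the original notebook). Each variant works on a fresh copy so they can be compared independently.

# (a) Statistical imputation: median for a numeric column, mode for a categorical one.
tmp = df.copy()
tmp['Age'] = tmp['Age'].fillna(tmp['Age'].median())                 # median is robust to outliers
tmp['Embarked'] = tmp['Embarked'].fillna(tmp['Embarked'].mode()[0]) # most frequent port

# (b) Model-based imputation (sketch): treat Age as a target and predict the missing
#     values from a few fully populated numeric columns with a simple regressor.
from sklearn.linear_model import LinearRegression

tmp = df.copy()
features = ['Pclass', 'SibSp', 'Parch', 'Fare']
known = tmp[tmp['Age'].notnull()]
missing = tmp[tmp['Age'].isnull()]
model = LinearRegression().fit(known[features], known['Age'])
tmp.loc[tmp['Age'].isnull(), 'Age'] = model.predict(missing[features])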

# general approaches to handling missing values:
# hint: the useful functions here are dropna and fillna
# Approach 1: simply drop the rows that contain missing values
df.dropna().head()


| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S |
| 10 | 11 | 1 | 3 | Sandstrom, Miss. Marguerite Rut | female | 4.0 | 1 | 1 | PP 9549 | 16.7000 | G6 | S |
| 11 | 12 | 1 | 1 | Bonnell, Miss. Elizabeth | female | 58.0 | 0 | 0 | 113783 | 26.5500 | C103 | S |
# write your code here
# Approach 2: fill the missing values with the fillna function
# fill the missing values with 0
df.fillna(0).head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | 0 | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | 0 | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | 0 | S |
# write your code here
# note: comparing with np.nan never matches anything, because NaN != NaN, so the line
# below actually changes nothing; to really select the missing ages use df['Age'].isnull()
df[df['Age'] == np.nan] = 0
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
# write your code here
# common fillna parameters: method='ffill' fills forward, method='backfill' (or 'bfill') fills backward; the default None fills nothing
df['Cabin'].fillna(method='backfill').head()


0     C85
1     C85
2    C123
3    C123
4     E46
Name: Cabin, dtype: object

【Think】What parameters do dropna and fillna have, and how are they used? (A short sketch follows the reference links below.)

【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html

【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
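As a starting point for the question above, a short hedged sketch of a few commonly used parameters (the thresholds, column choices and fill values below are only illustrative):

# dropna: how='any'/'all' decides when a row or column is dropped, thresh keeps rows or
# columns with at least that many non-missing values, subset restricts the check.
df.dropna(subset=['Age', 'Embarked'])   # drop rows where Age or Embarked is missing
df.dropna(axis=1, thresh=800)           # drop columns with fewer than 800 non-null values

# fillna: value can be a scalar or a per-column dict; limit caps consecutive fills.
df.fillna({'Age': df['Age'].median(), 'Cabin': 'Unknown', 'Embarked': 'S'})
df['Age'].fillna(method='ffill', limit=1)   # forward fill, at most one step per gap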

2.2 Observing and Handling Duplicate Values

For one reason or another, the data may contain duplicate values. Do any exist here, and if so, how should they be handled?

2.2.1 Task 1: Check the duplicate values in the data
# write your code here
# check for duplicate rows
df[df.duplicated()]


(empty result: only the column headers PassengerId, Survived, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked are returned, i.e. there are no fully duplicated rows)
2.2.2 Task 2: Handle the duplicate values

(1) What are the ways to handle duplicate values?

(2) Handle the duplicate values in our data

The more methods, the better.

# ways to handle duplicate values:
When duplicates exist you can leave them untouched, or drop them with drop_duplicates().
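A small hedged sketch of the parameters that control what counts as a duplicate (the column choices here are only illustrative):

# duplicated() marks repeated rows; subset restricts the comparison to given columns,
# keep decides which occurrence is treated as "not a duplicate".
df.duplicated().sum()                                # full-row duplicates (0 in this dataset)
df.duplicated(subset=['Ticket', 'Fare']).sum()       # duplicates judged on two columns only
df.drop_duplicates(subset=['Ticket'], keep='first')  # keep only the first row per Ticket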

# write your code here
# drop the duplicate rows
df.drop_duplicates().head()


| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
2.2.3 Task 3: Save the cleaned data in csv format
# write your code here
df.to_csv('F:\Datewheel资料\hands-on-data-analysis-master\hands-on-data-analysis-master\第一单元项目集合/train_clear.csv')


2.3 Observing and Processing Features

Looking at the features, we can roughly divide them into two groups:
Numeric features: Survived, Pclass, Age, SibSp, Parch, Fare. Among them, Survived and Pclass are discrete numeric features, while Age, SibSp, Parch and Fare are continuous numeric features.
Text features: Name, Sex, Cabin, Embarked, Ticket. Among them, Sex, Cabin, Embarked and Ticket are categorical text features.
Numeric features can usually be fed to a model directly, but continuous variables are sometimes discretised for the sake of model stability and robustness. Text features usually have to be converted into numeric features before they can be used for modelling.
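As a quick cross-check of this split, the columns can be grouped by dtype; a minimal sketch (the printed lists assume the DataFrame still has only its original columns):

# numeric columns vs. object (text) columns
numeric_cols = df.select_dtypes(include='number').columns.tolist()
text_cols = df.select_dtypes(include='object').columns.tolist()
print(numeric_cols)  # ['PassengerId', 'Survived', 'Pclass', 'Age', 'SibSp', 'Parch', 'Fare']
print(text_cols)     # ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']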

2.3.1 Task 1: Bin (discretise) the Age feature

(1) What is binning?

(2) Cut the continuous variable Age into 5 equal-width bins and label them with the categorical values 1-5

(3) Cut the continuous variable Age into the five age bands [0,5) [5,15) [15,30) [30,50) [50,80) and label them with the categorical values 1-5

(4) Cut the continuous variable Age at the 10%, 30%, 50%, 70% and 90% quantiles into five age bands and label them with the categorical values 1-5

(5) Save each of the resulting datasets in csv format

# what is binning:
Data binning (also called discretisation or bucketing) is a data preprocessing technique used to reduce the effect of minor observation errors; it groups a range of continuous values into a smaller number of "bins".
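The two functions used below place the bin edges differently: pd.cut splits the value range into intervals of equal width, while pd.qcut places the edges at quantiles so that each bin holds roughly the same number of samples. A tiny hedged example on made-up ages:

import pandas as pd

ages = pd.Series([2, 5, 8, 22, 25, 30, 35, 40, 60, 79])
print(pd.cut(ages, 2).value_counts())   # equal-width intervals (1.923, 40.5] and (40.5, 79.0]
print(pd.qcut(ages, 2).value_counts())  # equal-frequency bins: 5 values in each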

# write your code here
df['Ageclass'] = pd.cut(df['Age'], 5, labels=['1','2','3','4','5'])
df.head()
df.to_csv('F:\Datewheel资料\hands-on-data-analysis-master\hands-on-data-analysis-master\第一单元项目集合/train_cut.csv')

# write your code here
df['Ageclass'] = pd.cut(df['Age'], [0,5,15,30,50,80], labels=['1','2','3','4','5'])
df.head()
df.to_csv('F:\Datewheel资料\hands-on-data-analysis-master\hands-on-data-analysis-master\第一单元项目集合/train_cut1.csv')

# write your code here
df['Ageclass'] = pd.qcut(df['Age'], [0,0.1,0.3,0.5,0.7,0.9], labels=['1','2','3','4','5'])
df.head()
df.to_csv('F:\Datewheel资料\hands-on-data-analysis-master\hands-on-data-analysis-master\第一单元项目集合/train_cut2.csv')

【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html

【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html

2.3.2 Task 2: Convert the text variables

(1) Check the names and categories of the text variables
(2) Represent the text variables Sex, Cabin and Embarked with numeric codes (1, 2, 3, ...)
(3) Represent the text variables Sex, Cabin and Embarked with one-hot encoding

# write your code here
df['Sex'].value_counts()


male      577
female    314
Name: Sex, dtype: int64
# write your code here
df['Cabin'].value_counts()


G6             4
C23 C25 C27    4
B96 B98        4
D              3
C22 C26        3
              ..
E34            1
B71            1
C82            1
C118           1
C111           1
Name: Cabin, Length: 147, dtype: int64
# write your code here
df['Embarked'].value_counts()


S    644
C    168
Q     77
Name: Embarked, dtype: int64
df['Embarked'].unique()
array(['S', 'C', 'Q', nan], dtype=object)
df['Cabin'].unique()
array([nan, 'C85', 'C123', 'E46', 'G6', 'C103', 'D56', 'A6',
       'C23 C25 C27', 'B78', 'D33', 'B30', 'C52', 'B28', 'C83', 'F33',
       'F G73', 'E31', 'A5', 'D10 D12', 'D26', 'C110', 'B58 B60', 'E101',
       'F E69', 'D47', 'B86', 'F2', 'C2', 'E33', 'B19', 'A7', 'C49', 'F4',
       'A32', 'B4', 'B80', 'A31', 'D36', 'D15', 'C93', 'C78', 'D35',
       'C87', 'B77', 'E67', 'B94', 'C125', 'C99', 'C118', 'D7', 'A19',
       'B49', 'D', 'C22 C26', 'C106', 'C65', 'E36', 'C54',
       'B57 B59 B63 B66', 'C7', 'E34', 'C32', 'B18', 'C124', 'C91', 'E40',
       'T', 'C128', 'D37', 'B35', 'E50', 'C82', 'B96 B98', 'E10', 'E44',
       'A34', 'C104', 'C111', 'C92', 'E38', 'D21', 'E12', 'E63', 'A14',
       'B37', 'C30', 'D20', 'B79', 'E25', 'D46', 'B73', 'C95', 'B38',
       'B39', 'B22', 'C86', 'C70', 'A16', 'C101', 'C68', 'A10', 'E68',
       'B41', 'A20', 'D19', 'D50', 'D9', 'A23', 'B50', 'A26', 'D48',
       'E58', 'C126', 'B71', 'B51 B53 B55', 'D49', 'B5', 'B20', 'F G63',
       'C62 C64', 'E24', 'C90', 'C45', 'E8', 'B101', 'D45', 'C46', 'D30',
       'E121', 'D11', 'E77', 'F38', 'B3', 'D6', 'B82 B84', 'D17', 'A36',
       'B102', 'B69', 'E49', 'C47', 'D28', 'E17', 'A24', 'C50', 'B42',
       'C148'], dtype=object)
df['Sex'].unique()
array(['male', 'female'], dtype=object)
# represent the text variables with numeric values
# Method 1: replace
df['Sex_values'] = df['Sex'].replace(['male','female'],[1,2])
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | Ageclass | Sex_values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1 |
# Method 2: map
df['Sex_values'] = df['Sex'].map({'male':1,'female':2})
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | Ageclass | Sex_values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1 |
# LabelEncoder encoding
# this method is still a little difficult for me at this stage
from sklearn.preprocessing import LabelEncoder
for feat in ['Cabin', 'Ticket']:
    le = LabelEncoder()
    # manual mapping: build a category -> integer dict from the unique values
    label_dict = dict(zip(df[feat].unique(), range(df[feat].nunique())))
    df[feat + "_labelEncode"] = df[feat].map(label_dict)
    # sklearn's LabelEncoder does the same job and overwrites the column built above
    df[feat + "_labelEncode"] = le.fit_transform(df[feat].astype(str))
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | Ageclass | Sex_values | Cabin_labelEncode | Ticket_labelEncode |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1 | 147 | 523 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2 | 81 | 596 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2 | 147 | 669 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2 | 55 | 49 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1 | 147 | 472 |

LabelEncoder produces an unordered encoding: the integer codes carry no ordinal meaning.
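A small illustration of what "unordered" means: the codes only reflect the sorted order of the class labels, not any real ranking (the values here are made up):

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(['S', 'C', 'Q', 'S'])
print(le.classes_)  # ['C' 'Q' 'S'] -- classes are sorted, so S -> 2, C -> 0, Q -> 1
print(codes)        # [2 0 1 2]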

df.dtypes
PassengerId              int64
Survived                 int64
Pclass                   int64
Name                    object
Sex                     object
Age                    float64
SibSp                    int64
Parch                    int64
Ticket                  object
Fare                   float64
Cabin                   object
Embarked                object
Ageclass              category
Sex_values               int64
Cabin_labelEncode        int32
Ticket_labelEncode       int32
dtype: object
# one-hot encoding: pandas' get_dummies can be used directly; note that when applied to a whole DataFrame it only encodes object/category columns by default
# sklearn's OneHotEncoder can also do one-hot encoding; older sklearn versions only accepted numeric input, so string categories first had to be converted to numbers with LabelEncoder
for feat in ['Sex','Cabin','Embarked']:
    x1 = pd.get_dummies(df[feat], prefix=feat)
    df = pd.concat([df, x1], axis=1)
df.head()

df.head() preview (output truncated): the original columns are kept and new 0/1 dummy columns are appended, e.g. Cabin_F G73, Cabin_F2, Cabin_F33, Cabin_F38, Cabin_F4, Cabin_G6, Cabin_T, Embarked_C, Embarked_Q, Embarked_S; in the first five rows Embarked_S is 1 for rows 0, 2, 3 and 4, Embarked_C is 1 for row 1, and the displayed Cabin_* dummies are all 0.

5 rows × 168 columns
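For comparison with get_dummies, here is a minimal sketch of sklearn's OneHotEncoder applied to the Sex column (recent sklearn versions accept string categories directly; the sparse_output argument assumes sklearn >= 1.2, older versions use sparse=False instead):

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(sparse_output=False)     # sklearn >= 1.2; use sparse=False on older versions
sex_onehot = enc.fit_transform(df[['Sex']])  # 2-D input: a DataFrame with one column
print(enc.categories_)                       # [array(['female', 'male'], dtype=object)]
print(sex_onehot[:3])                        # first three rows: male, female, female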

2.3.3 Task 3: Extract a Title feature from the plain-text Name feature (a Title is Mr, Miss, Mrs, etc.)

Regular expressions deserve proper study; this is my first time working with them.
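Before applying it to the whole column, it may help to see what the pattern ([A-Za-z]+)\. captures on a single name: one or more letters immediately followed by a literal dot. A small sketch with Python's re module:

import re

name = 'Braund, Mr. Owen Harris'
match = re.search(r'([A-Za-z]+)\.', name)  # letters followed by a literal '.'
print(match.group(1))                      # 'Mr' -- the captured group, without the dot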

# write your code here
df['Title'] = df.Name.str.extract(r'([A-Za-z]+)\.', expand=False)  # raw string avoids an invalid-escape warning
df.head()

df.head() preview (output truncated): the same columns as above plus a new Title column; for the first five rows Title is Mr, Mrs, Miss, Mrs and Mr respectively.

5 rows × 169 columns

# save the final cleaned data you produced
df.to_csv('F:\Datewheel资料\hands-on-data-analysis-master\hands-on-data-analysis-master\第一单元项目集合/train_final.csv')