github(100-day-of-ml-code)-day1

This post walks through data preprocessing with Python's Pandas, NumPy, and scikit-learn libraries: reading a CSV file, handling missing data, encoding categorical variables, splitting the dataset into training and test sets, and standardizing features.
import numpy as np
import pandas as pd

# Read the CSV file with pandas
dataset = pd.read_csv("../datasets/Data.csv")
print(dataset.head())

# Features: every column except the last; label: the fourth column (index 3)
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 3].values
print("X:", X)
print("Y:", Y)



# Handle missing data with scikit-learn
# (Imputer works on scikit-learn < 0.22; it was removed in later releases)
from sklearn.preprocessing import Imputer

imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)
imputer = imputer.fit(X[:, 1:3])          # fit on the two numeric columns
X[:, 1:3] = imputer.transform(X[:, 1:3])  # replace NaN with each column's mean
print(X)
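If you are on scikit-learn 0.22 or later, Imputer no longer exists and SimpleImputer from sklearn.impute is its replacement. A minimal sketch, assuming the same X with numeric values in columns 1-2:

import numpy as np
from sklearn.impute import SimpleImputer

# SimpleImputer replaces the removed Imputer class; there is no axis argument,
# it always imputes column-wise.
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])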

# Encode the categorical column and the label
# (categorical_features works on scikit-learn < 0.22; it was removed in later releases)
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

labelencoder_X = LabelEncoder()
X[:, 0] = labelencoder_X.fit_transform(X[:, 0])  # category strings -> integer codes
onehotencoder = OneHotEncoder(categorical_features=[0])
X = onehotencoder.fit_transform(X).toarray()     # integer codes -> one-hot dummy columns
labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)              # class labels -> integers

print(X)
print(Y)
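On scikit-learn 0.22 and later, the categorical_features argument is gone; the current idiom is to route the categorical column through a ColumnTransformer. A minimal sketch under that assumption (the dummy columns come first in the output, followed by the untouched numeric columns, matching the layout above):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# One-hot encode column 0 and pass the remaining columns through unchanged.
ct = ColumnTransformer(
    transformers=[("onehot", OneHotEncoder(), [0])],
    remainder="passthrough",
)
X = ct.fit_transform(X)  # may return a SciPy sparse matrix; call .toarray() if you need a dense array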

# Split the data into training and test sets (80/20)
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
print("x_train:", x_train)
print("x_test:", x_test)
print("y_train:", y_train)
print("y_test:", y_test)

# Standardize the features with scikit-learn
from sklearn.preprocessing import StandardScaler

sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)  # fit the scaler on the training set only
x_test = sc_x.transform(x_test)        # apply the training mean/std to the test set

print("x_train:", x_train)
print("x_test:", x_test)

 
