[Data Science from Zero to One] · Titanic Survival Prediction (Data Loading, Processing, and Modeling)

This article walks through a classic Kaggle competition project: predicting survival on the Titanic. After loading, preprocessing, and feature engineering the data, a neural network built with Keras predicts each passenger's survival probability, and the model is then trained and evaluated.


Titanic Survival Prediction (Data Loading, Processing, and Modeling)

  • Introduction:

This article predicts survival probabilities for passengers on the Titanic, based on a classic Kaggle competition project.

Dataset:

1. Download from the Kaggle Titanic competition page: https://www.kaggle.com/c/titanic

2. Mirror (Baidu Netdisk): https://pan.baidu.com/s/1BfRZdCz6Z1XR6aDXxiHmHA      Extraction code: jzb3

  • Code

Data loading:

#%%
import tensorflow as tf
import keras
import pandas as pd
import numpy as np

# Load the training set (train.csv from the Kaggle Titanic data)
data = pd.read_csv("titanic/train.csv")
print(data.head())
print(data.describe())

Data processing:

#%%
# Keep only the label (Survived) and the feature columns we will use
strs = "Survived Pclass Sex Age SibSp Parch Fare Embarked"
cols = strs.split(" ")
print(cols)
#%%
# .copy() avoids SettingWithCopyWarning on the fillna assignments below
x_datas = data[cols].copy()
print(x_datas.head())
#%%
print(x_datas.isnull().sum())

#%%
# Impute missing values: mean age, most frequent embarkation port
x_datas["Age"] = x_datas["Age"].fillna(x_datas["Age"].mean())
x_datas["Embarked"] = x_datas["Embarked"].fillna(x_datas["Embarked"].mode()[0])

# One-hot encode the categorical columns
x_datas = pd.get_dummies(x_datas, columns=["Pclass", "Sex", "Embarked"])
# Crude scaling to keep the numeric inputs roughly in [0, 1]
x_datas["Age"] /= 100
x_datas["Fare"] /= 100

print(x_datas.isnull().sum())
print(x_datas.head())
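Before trusting the input dimension in the model below, it helps to confirm how many columns `get_dummies` actually produces. A minimal sketch on synthetic rows (this DataFrame is made up for illustration, not the real train.csv):

```python
import pandas as pd

# Made-up rows mimicking the selected Titanic columns
df = pd.DataFrame({
    "Survived": [0, 1, 1],
    "Pclass":   [3, 1, 2],
    "Sex":      ["male", "female", "female"],
    "Age":      [22.0, 38.0, 26.0],
    "SibSp":    [1, 1, 0],
    "Parch":    [0, 0, 0],
    "Fare":     [7.25, 71.28, 7.92],
    "Embarked": ["S", "C", "Q"],
})

encoded = pd.get_dummies(df, columns=["Pclass", "Sex", "Embarked"])
# 4 numeric features + 3 Pclass + 2 Sex + 3 Embarked dummies = 12 features,
# plus the Survived label: 13 columns in total
print(encoded.columns.tolist())
```

That count is what justifies `input_dim = 12` in the network definition.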

#%%
# Split: first 75% of rows for training, the rest for testing
seq = int(0.75 * len(x_datas))

X, Y = x_datas.iloc[:, 1:], x_datas.iloc[:, 0]
X_train, Y_train, X_test, Y_test = X[:seq], Y[:seq], X[seq:], Y[seq:]
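The slice above takes the first 75% of rows in file order; since the CSV is not guaranteed to be shuffled, a randomized split is often safer. A minimal NumPy sketch, using synthetic arrays in place of the real feature matrix and labels:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the split is reproducible

# Synthetic stand-ins for the features and labels
X = np.arange(20).reshape(10, 2)
Y = np.arange(10)

# Shuffle row indices once, then cut at the 75% mark
idx = rng.permutation(len(X))
seq = int(0.75 * len(X))
X_train, X_test = X[idx[:seq]], X[idx[seq:]]
Y_train, Y_test = Y[idx[:seq]], Y[idx[seq:]]

print(X_train.shape, X_test.shape)
```

Indexing both `X` and `Y` with the same `idx` keeps each row aligned with its label.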

Model building:

#%%
model = keras.models.Sequential()

model.add(keras.layers.Dense(64,input_dim = 12,activation="relu"))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(16,activation="relu"))
model.add(keras.layers.Dense(2,activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy",optimizer="adam",metrics=["accuracy"])

print(model.summary())
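The parameter counts in the summary can be verified by hand: a Dense layer holds (inputs + 1) × units parameters, i.e. one weight per input per unit plus one bias per unit. A quick check:

```python
def dense_params(inputs, units):
    """Weights (inputs * units) plus one bias per unit."""
    return (inputs + 1) * units

# (input size, unit count) for each Dense layer; Dropout adds no parameters
layers = [(12, 64), (64, 16), (16, 2)]
counts = [dense_params(i, u) for i, u in layers]
print(counts, sum(counts))  # [832, 1040, 34] 1906, matching the summary
```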

Model training and evaluation:

#%%
# Hold out 20% of the training rows for validation during fitting
model.fit(X_train, Y_train, validation_split=0.2, epochs=100, batch_size=50)

#%%
loss, acc = model.evaluate(X_test, Y_test)
print("test loss is %f, acc %f" % (loss, acc))
model.save("model_100_1.h5")
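With a two-unit softmax output, `model.predict` returns one row of two probabilities per passenger; the predicted class is the column with the higher probability, and column 1 is the survival probability. A NumPy-only sketch of that last step (the probability values here are made up):

```python
import numpy as np

# Made-up softmax outputs: column 0 = P(died), column 1 = P(survived)
probs = np.array([
    [0.9, 0.1],
    [0.3, 0.7],
    [0.6, 0.4],
])

predicted = probs.argmax(axis=1)  # hard class label per row
survival_p = probs[:, 1]          # survival probability per row
print(predicted, survival_p)
```

The saved model can later be reloaded with `keras.models.load_model("model_100_1.h5")` and applied the same way.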
  • Output:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 64)                832
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 16)                1040
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 34
=================================================================
Total params: 1,906
Trainable params: 1,906
Non-trainable params: 0
_________________________________________________________________
...
Epoch 96/100
534/534 [==============================] - 0s 80us/step - loss: 0.3870 - acc: 0.8277 - val_loss: 0.5083 - val_acc: 0.7612
Epoch 97/100
534/534 [==============================] - 0s 80us/step - loss: 0.3921 - acc: 0.8352 - val_loss: 0.5070 - val_acc: 0.7687
Epoch 98/100
534/534 [==============================] - 0s 82us/step - loss: 0.3940 - acc: 0.8371 - val_loss: 0.5102 - val_acc: 0.7687
Epoch 99/100
534/534 [==============================] - 0s 78us/step - loss: 0.3996 - acc: 0.8277 - val_loss: 0.5106 - val_acc: 0.7687
Epoch 100/100
534/534 [==============================] - 0s 80us/step - loss: 0.3892 - acc: 0.8352 - val_loss: 0.5082 - val_acc: 0.7612
223/223 [==============================] - 0s 63us/step
test loss is 0.389338, acc 0.829596
  • Complete code:
#%%
import tensorflow as tf
import keras
import pandas as pd
import numpy as np

# Load the training set
data = pd.read_csv("titanic/train.csv")
print(data.head())
print(data.describe())
#%%
# Keep only the label (Survived) and the feature columns we will use
strs = "Survived Pclass Sex Age SibSp Parch Fare Embarked"
cols = strs.split(" ")
print(cols)
#%%
x_datas = data[cols].copy()
print(x_datas.head())
#%%
print(x_datas.isnull().sum())

#%%
x_datas["Age"] = x_datas["Age"].fillna(x_datas["Age"].mean())
x_datas["Embarked"] = x_datas["Embarked"].fillna(x_datas["Embarked"].mode()[0])

x_datas = pd.get_dummies(x_datas, columns=["Pclass", "Sex", "Embarked"])
x_datas["Age"] /= 100
x_datas["Fare"] /= 100

print(x_datas.isnull().sum())
print(x_datas.head())

#%%
seq = int(0.75 * len(x_datas))

X, Y = x_datas.iloc[:, 1:], x_datas.iloc[:, 0]
X_train, Y_train, X_test, Y_test = X[:seq], Y[:seq], X[seq:], Y[seq:]


#%%
model = keras.models.Sequential()

model.add(keras.layers.Dense(64, input_dim=12, activation="relu"))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(16, activation="relu"))
model.add(keras.layers.Dense(2, activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

print(model.summary())

#%%
model.fit(X_train, Y_train, validation_split=0.2, epochs=100, batch_size=50)

#%%
loss, acc = model.evaluate(X_test, Y_test)
print("test loss is %f, acc %f" % (loss, acc))
model.save("model_100_1.h5")

 
