TensorFlow 2.0 Data Loading: CSV Files

This tutorial provides an example of how to load CSV-formatted data into a tf.data.Dataset.

The tutorial uses data about Titanic passengers. The model predicts the likelihood that a passenger survived based on characteristics such as age, sex, ticket class, and whether the passenger was travelling alone.

Setup

from __future__ import absolute_import, division, print_function, unicode_literals
import functools

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

 

TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"

train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)

 

Downloading data from https://storage.googleapis.com/tf-datasets/titanic/train.csv
32768/30874 [===============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tf-datasets/titanic/eval.csv
16384/13049 [=====================================] - 0s 0us/step

 

# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)

 

Load the data

To begin, let's look at the first few lines of the CSV file to see how it is formatted.

!head {train_file_path}

 

survived,sex,age,n_siblings_spouses,parch,fare,class,deck,embark_town,alone
0,male,22.0,1,0,7.25,Third,unknown,Southampton,n
1,female,38.0,1,0,71.2833,First,C,Cherbourg,n
1,female,26.0,0,0,7.925,Third,unknown,Southampton,y
1,female,35.0,1,0,53.1,First,C,Southampton,n
0,male,28.0,0,0,8.4583,Third,unknown,Queenstown,y
0,male,2.0,3,1,21.075,Third,unknown,Southampton,n
1,female,27.0,0,2,11.1333,Third,unknown,Southampton,n
1,female,14.0,1,0,30.0708,Second,unknown,Cherbourg,n
1,female,4.0,1,1,16.7,Third,G,Southampton,n

 

As you can see, each column in the CSV file has a name. The dataset constructor picks these names up automatically. If the first line of the file you are working with does not contain the column names, pass them as a list of strings to the column_names argument of make_csv_dataset.

CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']

dataset = tf.data.experimental.make_csv_dataset(
     ...,
     column_names=CSV_COLUMNS,
     ...)
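For reference, a filled-in version of this sketch might look like the following (temp_dataset and the batch size of 5 are arbitrary choices for illustration; the names in CSV_COLUMNS happen to match the file's header here, so passing them is redundant for this particular file):

temp_dataset = tf.data.experimental.make_csv_dataset(
    train_file_path,
    batch_size=5,            # small batch, only for demonstration
    column_names=CSV_COLUMNS,
    label_name='survived',
    num_epochs=1)

# Each element is a (features, label) pair; print the feature names.
features, label = next(iter(temp_dataset))
print(list(features.keys()))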
  

 

This example uses all of the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use and pass it to the (optional) select_columns argument of the constructor.

dataset = tf.data.experimental.make_csv_dataset(
  ...,
  select_columns = columns_to_use, 
  ...)
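Similarly, a hypothetical select_columns call could look like this (columns_to_use is an arbitrary subset chosen only for illustration):

# Keep only a few of the columns; this list is hypothetical.
columns_to_use = ['survived', 'sex', 'age', 'fare', 'class']

temp_dataset = tf.data.experimental.make_csv_dataset(
    train_file_path,
    batch_size=5,
    select_columns=columns_to_use,
    num_epochs=1)

# Only the selected columns appear in each batch.
print(list(next(iter(temp_dataset)).keys()))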

 

The column containing the value the model is supposed to predict needs to be identified explicitly.

LABEL_COLUMN = 'survived'
LABELS = [0, 1]

 

Now read the CSV data from the file and create a dataset.

(For the complete documentation, see tf.data.experimental.make_csv_dataset.)

def get_dataset(file_path):
  dataset = tf.data.experimental.make_csv_dataset(
      file_path,
      batch_size=12, # Artificially small to make the examples easier to show
      label_name=LABEL_COLUMN,
      na_value="?",
      num_epochs=1,
      ignore_errors=True)
  return dataset

raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)

 

WARNING: Logging before flag parsing goes to stderr.
W0823 13:59:53.210392 140439518127872 deprecation.py:323] From /tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow/python/data/experimental/ops/readers.py:498: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_determinstic`.

 

Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (12 in this example).

Reading through the example below may help make this clearer.

examples, labels = next(iter(raw_train_data)) # Just the first batch.
print("EXAMPLES: \n", examples, "\n")
print("LABELS: \n", labels)

 

EXAMPLES: 
 OrderedDict([('sex', <tf.Tensor: id=170, shape=(12,), dtype=string, numpy=
array([b'male', b'male', b'female', b'female', b'female', b'male',
       b'male', b'male', b'male', b'male', b'male', b'male'], dtype=object)>), ('age', <tf.Tensor: id=162, shape=(12,), dtype=float32, numpy=
array([19., 17., 42., 22.,  9., 24., 28., 36., 37., 32., 28., 28.],
      dtype=float32)>), ('n_siblings_spouses', <tf.Tensor: id=168, shape=(12,), dtype=int32, numpy=array([0, 0, 1, 1, 4, 1, 0, 0, 2, 0, 1, 0], dtype=int32)>), ('parch', <tf.Tensor: id=169, shape=(12,), dtype=int32, numpy=array([0, 2, 0, 1, 2, 0, 0, 1, 0, 0, 0, 0], dtype=int32)>), ('fare', <tf.Tensor: id=167, shape=(12,), dtype=float32, numpy=
array([  6.75 , 110.883,  26.   ,  29.   ,  31.275,  16.1  ,  13.863,
       512.329,   7.925,   7.896,  19.967,  26.55 ], dtype=float32)>), ('class', <tf.Tensor: id=164, shape=(12,), dtype=string, numpy=
array([b'Third', b'First', b'Second', b'Second', b'Third', b'Third',
       b'Second', b'First', b'Third', b'Third', b'Third', b'First'],
      dtype=object)>), ('deck', <tf.Tensor: id=165, shape=(12,), dtype=string, numpy=
array([b'unknown', b'C', b'unknown', b'unknown', b'unknown', b'unknown',
       b'unknown', b'B', b'unknown', b'unknown', b'unknown', b'C'],
      dtype=object)>), ('embark_town', <tf.Tensor: id=166, shape=(12,), dtype=string, numpy=
array([b'Queenstown', b'Cherbourg', b'Southampton', b'Southampton',
       b'Southampton', b'Southampton', b'Cherbourg', b'Cherbourg',
       b'Southampton', b'Southampton', b'Southampton', b'Southampton'],
      dtype=object)>), ('alone', <tf.Tensor: id=163, shape=(12,), dtype=string, numpy=
array([b'y', b'n', b'n', b'n', b'n', b'n', b'y', b'n', b'n', b'y', b'n',
       b'y'], dtype=object)>)]) 

LABELS: 
 tf.Tensor([0 1 1 1 0 0 1 1 0 0 0 1], shape=(12,), dtype=int32)

 

Data preprocessing

Categorical data

Some of the columns in the CSV data are categorical columns, that is, columns whose content can only be one of a limited set of options.

Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.

CATEGORIES = {
    'sex': ['male', 'female'],
    'class' : ['First', 'Second', 'Third'],
    'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
    'embark_town' : ['Cherbourg', 'Southampton', 'Queenstown'],
    'alone' : ['y', 'n']
}

 

categorical_columns = []
for feature, vocab in CATEGORIES.items():
  cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
        key=feature, vocabulary_list=vocab)
  categorical_columns.append(tf.feature_column.indicator_column(cat_col))

 

# See what you just created.
categorical_columns

 

[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='class', vocabulary_list=('First', 'Second', 'Third'), dtype=tf.string, default_value=-1, num_oov_buckets=0)),
 IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='sex', vocabulary_list=('male', 'female'), dtype=tf.string, default_value=-1, num_oov_buckets=0)),
 IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='alone', vocabulary_list=('y', 'n'), dtype=tf.string, default_value=-1, num_oov_buckets=0)),
 IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='deck', vocabulary_list=('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'), dtype=tf.string, default_value=-1, num_oov_buckets=0)),
 IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='embark_town', vocabulary_list=('Cherbourg', 'Southampton', 'Queenstown'), dtype=tf.string, default_value=-1, num_oov_buckets=0))]

 

This will become part of the input processing when you build the model later.
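To see what these columns produce, you can apply a DenseFeatures layer built from just the categorical columns to the example batch extracted earlier; this is only a sanity check, not part of the tutorial's original pipeline.

# One-hot encode the categorical features of the first example batch.
# `examples` is the batch taken from raw_train_data above.
categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)
print(categorical_layer(examples).numpy()[0])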

Continuous data

Continuous data needs to be normalized.

Write a function that normalizes the values and reshapes them into two-dimensional tensors.

def process_continuous_data(mean, data):
  # Normalize the data
  data = tf.cast(data, tf.float32) * 1/(2*mean)
  return tf.reshape(data, [-1, 1])
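As a quick check (not in the original text), the function can be applied directly to the age column of the example batch; 29.631308 is the precomputed mean of the age column, the same value used in MEANS below.

# Scale the `age` column of the example batch and reshape it into
# a column vector of shape (batch_size, 1).
print(process_continuous_data(29.631308, examples['age']))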

 

Now create a collection of numeric columns. The tf.feature_column.numeric_column API accepts a normalizer_fn argument. Pass the normalization function in with functools.partial, binding process_continuous_data to the mean of each column.

MEANS = {
    'age' : 29.631308,
    'n_siblings_spouses' : 0.545455,
    'parch' : 0.379585,
    'fare' : 34.385399
}

numerical_columns = []

for feature in MEANS.keys():
  num_col = tf.feature_column.numeric_column(feature, normalizer_fn=functools.partial(process_continuous_data, MEANS[feature]))
  numerical_columns.append(num_col)

 

# See what you just created.
numerical_columns

 

[NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=functools.partial(<function process_continuous_data at 0x7fba7b3fc158>, 29.631308)),
 NumericColumn(key='fare', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=functools.partial(<function process_continuous_data at 0x7fba7b3fc158>, 34.385399)),
 NumericColumn(key='n_siblings_spouses', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=functools.partial(<function process_continuous_data at 0x7fba7b3fc158>, 0.545455)),
 NumericColumn(key='parch', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=functools.partial(<function process_continuous_data at 0x7fba7b3fc158>, 0.379585))]

 

The normalization used here requires knowing the mean of each column ahead of time. To compute normalized values over a continuous stream of data, use TensorFlow Transform.
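If the means are not known ahead of time, one simple option (a minimal sketch assuming pandas is available; MEANS_FROM_DATA is a name introduced here only for illustration) is to compute them once from the training CSV:

import pandas as pd

# Compute per-column means directly from the training CSV file.
df = pd.read_csv(train_file_path)
MEANS_FROM_DATA = df[['age', 'n_siblings_spouses', 'parch', 'fare']].mean().to_dict()
print(MEANS_FROM_DATA)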

Create a preprocessing layer

Add the two feature column collections together and pass them to tf.keras.layers.DenseFeatures to create an input layer that will handle the preprocessing.

preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numerical_columns)
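You can verify that the layer works by running it on the example batch from earlier; each example comes out as a single dense float vector (the exact values depend on the batch, so treat this as a quick check rather than expected output):

# Apply the combined preprocessing layer to the example batch.
print(preprocessing_layer(examples).numpy().shape)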

 

Build the model

Build a tf.keras.Sequential model, starting from the preprocessing_layer.

model = tf.keras.Sequential([
  preprocessing_layer,
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])

 

Train, evaluate, and predict

Now the model can be instantiated and trained.

train_data = raw_train_data.shuffle(500)
test_data = raw_test_data

 

model.fit(train_data, epochs=20)

 

Epoch 1/20

W0823 13:59:53.975711 140439518127872 deprecation.py:323] From /tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow/python/feature_column/feature_column_v2.py:2655: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0823 13:59:53.993308 140439518127872 deprecation.py:323] From /tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow/python/feature_column/feature_column_v2.py:4215: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0823 13:59:53.994395 140439518127872 deprecation.py:323] From /tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow/python/feature_column/feature_column_v2.py:4270: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.

53/53 [==============================] - 4s 69ms/step - loss: 0.5185 - accuracy: 0.7225
Epoch 2/20
53/53 [==============================] - 0s 7ms/step - loss: 0.4347 - accuracy: 0.8013
Epoch 3/20
53/53 [==============================] - 0s 7ms/step - loss: 0.4185 - accuracy: 0.8084
Epoch 4/20
53/53 [==============================] - 0s 7ms/step - loss: 0.4074 - accuracy: 0.8221
Epoch 5/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3983 - accuracy: 0.8274
Epoch 6/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3906 - accuracy: 0.8331
Epoch 7/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3836 - accuracy: 0.8371
Epoch 8/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3773 - accuracy: 0.8371
Epoch 9/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3715 - accuracy: 0.8391
Epoch 10/20
53/53 [==============================] - 0s 8ms/step - loss: 0.3660 - accuracy: 0.8337
Epoch 11/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3613 - accuracy: 0.8369
Epoch 12/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3567 - accuracy: 0.8402
Epoch 13/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3525 - accuracy: 0.8376
Epoch 14/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3491 - accuracy: 0.8396
Epoch 15/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3457 - accuracy: 0.8391
Epoch 16/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3425 - accuracy: 0.8362
Epoch 17/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3395 - accuracy: 0.8398
Epoch 18/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3368 - accuracy: 0.8520
Epoch 19/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3336 - accuracy: 0.8557
Epoch 20/20
53/53 [==============================] - 0s 7ms/step - loss: 0.3311 - accuracy: 0.8616

<tensorflow.python.keras.callbacks.History at 0x7fba9486b780>

 

Once the model is trained, you can check its accuracy on the test_data set.

test_loss, test_accuracy = model.evaluate(test_data)

print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))

 

     22/Unknown - 1s 31ms/step - loss: 0.4885 - accuracy: 0.7727

Test Loss 0.48847573521462356, Test Accuracy 0.7727272510528564

 

Use tf.keras.Model.predict to infer labels on a batch or a dataset of batches.

predictions = model.predict(test_data)

# Show some results
for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):
  print("Predicted survival: {:.2%}".format(prediction[0]),
        " | Actual outcome: ",
        ("SURVIVED" if bool(survived) else "DIED"))

 

Predicted survival: 55.93%  | Actual outcome:  SURVIVED
Predicted survival: 52.15%  | Actual outcome:  DIED
Predicted survival: 42.62%  | Actual outcome:  DIED
Predicted survival: 8.34%  | Actual outcome:  DIED
Predicted survival: 69.12%  | Actual outcome:  DIED
Predicted survival: 10.32%  | Actual outcome:  DIED
Predicted survival: 94.82%  | Actual outcome:  SURVIVED
Predicted survival: 2.17%  | Actual outcome:  DIED
Predicted survival: 17.52%  | Actual outcome:  DIED
Predicted survival: 45.75%  | Actual outcome:  SURVIVED
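If you need hard 0/1 class labels rather than probabilities, a common (and here arbitrary) choice is to threshold the sigmoid output at 0.5; a minimal sketch:

# Turn predicted probabilities into 0/1 labels with a 0.5 threshold.
predicted_classes = (predictions > 0.5).astype(int).flatten()
print(predicted_classes[:10])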

Source: https://tensorflow.google.cn/tutorials/load_data/csv?hl=zh-cn
