TensorFlow-Keras 1_1 [Study Code Notes]

This post works through building neural network models with tf.keras: checking the installed version, building a simple fully connected model, configuring its layers (activation functions, parameter initialization, and regularization), the key arguments for training and evaluating a model (optimizer, loss function, and metrics), the two ways of fitting data (NumPy arrays and tf.data datasets), and finally making predictions.


!pip install -q pyyaml
import tensorflow as tf
from tensorflow.keras import layers

1 Check the version

print(tf.VERSION)
print(tf.keras.__version__)

2 Build a simple, fully connected neural network model

model = tf.keras.Sequential()

# Add a densely-connected layer with 64 units
model.add(layers.Dense(64, activation='relu'))
# Add another identical layer
model.add(layers.Dense(64, activation='relu'))
# Add a softmax output layer with 10 units
model.add(layers.Dense(10, activation='softmax'))

3 Configure layers

  • Choice of activation function: activation
  • Parameter initialization scheme: kernel_initializer, bias_initializer
  • Choice of regularization: kernel_regularizer, bias_regularizer
# Create a layer with a sigmoid activation
layers.Dense(64, activation='sigmoid')
# Alternatively, pass the activation function itself
layers.Dense(64, activation=tf.sigmoid)

# Apply L1 regularization to the kernel (weight) matrix, with factor 0.01
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))

# Apply L2 regularization to the bias vector
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))

# Initialize the kernel as a random orthogonal matrix
layers.Dense(64, kernel_initializer='orthogonal')

# Initialize the bias vector to a constant 2.0
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
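To make the regularizers above concrete, here is a NumPy sketch (not tf.keras internals, just the math) of the penalty terms that kernel_regularizer and bias_regularizer add to the training loss:

```python
import numpy as np

def l1_penalty(weights, factor=0.01):
    # L1: factor * sum of absolute values (encourages sparse weights)
    return factor * np.sum(np.abs(weights))

def l2_penalty(weights, factor=0.01):
    # L2: factor * sum of squared values (encourages small weights)
    return factor * np.sum(np.square(weights))

w = np.array([[1.0, -2.0], [0.5, 0.0]])
print(l1_penalty(w))  # 0.01 * (1 + 2 + 0.5 + 0) = 0.035
print(l2_penalty(w))  # 0.01 * (1 + 4 + 0.25 + 0) = 0.0525
```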

4 Train and evaluate the model

# Build the model, then train it
model = tf.keras.Sequential([
# Add a densely-connected layer with 64 units
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another densely-connected layer with 64 units
layers.Dense(64, activation='relu'),
# Add a softmax output layer with 10 units
layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.AdamOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

tf.keras.Model.compile takes three important arguments:

  • optimizer: This object specifies the training procedure. Pass it optimizer instances from the tf.train module, such as tf.train.AdamOptimizer, tf.train.RMSPropOptimizer, or tf.train.GradientDescentOptimizer.
  • loss: The function to minimize during optimization. Common choices include mean square error (mse), categorical_crossentropy, and binary_crossentropy. Loss functions are specified by name or by passing a callable object from the tf.keras.losses module.
  • metrics: Used to monitor training. These are string names or callables from the tf.keras.metrics module.
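For intuition, the categorical_crossentropy loss named above can be sketched in plain NumPy; this is a simplified stand-in for what tf.keras.losses.categorical_crossentropy computes, assuming one-hot labels:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # Mean over samples of -sum(y_true * log(y_pred));
    # clip to avoid log(0) on degenerate predictions
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))

y_true = np.array([[0.0, 1.0, 0.0]])  # one-hot label: class 1
y_pred = np.array([[0.1, 0.8, 0.1]])  # softmax output
print(categorical_crossentropy(y_true, y_pred))  # -log(0.8) ≈ 0.223
```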
# Configure a model for regression, with mean squared error loss
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
              loss='mse',       # mean squared error
              metrics=['mae'])  # mean absolute error

# Configure a model for categorical classification
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])

5 Fit the data

5.1 Fitting NumPy input data. For small datasets, use in-memory NumPy arrays to train and evaluate the model, fitting the data with fit

import numpy as np

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

model.fit(data, labels, epochs=10, batch_size=32)
Epoch 1/10 1000/1000 [==============================] - 0s 335us/sample - loss: 11.4519 - categorical_accuracy: 0.1110 
Epoch 2/10 1000/1000 [==============================] - 0s 37us/sample - loss: 11.4034 - categorical_accuracy: 0.0990 
Epoch 3/10 1000/1000 [==============================] - 0s 37us/sample - loss: 11.3982 - categorical_accuracy: 0.1010 
Epoch 4/10 1000/1000 [==============================] - 0s 36us/sample - loss: 11.3903 - categorical_accuracy: 0.0950 
Epoch 5/10 1000/1000 [==============================] - 0s 34us/sample - loss: 11.3892 - categorical_accuracy: 0.0930 
Epoch 6/10 1000/1000 [==============================] - 0s 39us/sample - loss: 11.3875 - categorical_accuracy: 0.0940 
Epoch 7/10 1000/1000 [==============================] - 0s 37us/sample - loss: 11.3864 - categorical_accuracy: 0.1060 
Epoch 8/10 1000/1000 [==============================] - 0s 40us/sample - loss: 11.3871 - categorical_accuracy: 0.1000 
Epoch 9/10 1000/1000 [==============================] - 0s 37us/sample - loss: 11.3830 - categorical_accuracy: 0.1230 
Epoch 10/10 1000/1000 [==============================] - 0s 36us/sample - loss: 11.3845 - categorical_accuracy: 0.1390

tf.keras.Model.fit takes three important arguments:

  • epochs: Training is structured into epochs. An epoch is one iteration over the entire input data (this is done in smaller batches).
  • batch_size: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.
  • validation_data: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.
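A quick arithmetic check of the batch_size behavior described above, using the same numbers as the example in this post (1000 samples, batch size 32):

```python
import math

n_samples, batch_size = 1000, 32
# The model slices the data into ceil(1000/32) = 32 batches per epoch
n_batches = math.ceil(n_samples / batch_size)
# The last batch gets whatever remains after 31 full batches
last_batch = n_samples - (n_batches - 1) * batch_size
print(n_batches, last_batch)  # 32 batches; the last one has only 8 samples
```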

Next, pass validation_data to monitor performance on a held-out validation set:

import numpy as np

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))

model.fit(data, labels, epochs=10, batch_size=32,
          validation_data=(val_data, val_labels))
Train on 1000 samples, validate on 100 samples
Epoch 1/10
1000/1000 [==============================] - 0s 75us/sample - loss: 11.5435 - categorical_accuracy: 0.0970 - val_loss: 11.3560 - val_categorical_accuracy: 0.0500
Epoch 2/10
1000/1000 [==============================] - 0s 39us/sample - loss: 11.5381 - categorical_accuracy: 0.1070 - val_loss: 11.3582 - val_categorical_accuracy: 0.0700
Epoch 3/10
1000/1000 [==============================] - 0s 40us/sample - loss: 11.5384 - categorical_accuracy: 0.1130 - val_loss: 11.3625 - val_categorical_accuracy: 0.0700
Epoch 4/10
1000/1000 [==============================] - 0s 41us/sample - loss: 11.5387 - categorical_accuracy: 0.0930 - val_loss: 11.3813 - val_categorical_accuracy: 0.0900
Epoch 5/10
1000/1000 [==============================] - 0s 40us/sample - loss: 11.5365 - categorical_accuracy: 0.1020 - val_loss: 11.3612 - val_categorical_accuracy: 0.1300
Epoch 6/10
1000/1000 [==============================] - 0s 38us/sample - loss: 11.5367 - categorical_accuracy: 0.1090 - val_loss: 11.3693 - val_categorical_accuracy: 0.0600
Epoch 7/10
1000/1000 [==============================] - 0s 41us/sample - loss: 11.5342 - categorical_accuracy: 0.1200 - val_loss: 11.3781 - val_categorical_accuracy: 0.0900
Epoch 8/10
1000/1000 [==============================] - 0s 41us/sample - loss: 11.5361 - categorical_accuracy: 0.1080 - val_loss: 11.3790 - val_categorical_accuracy: 0.1300
Epoch 9/10
1000/1000 [==============================] - 0s 41us/sample - loss: 11.5303 - categorical_accuracy: 0.1290 - val_loss: 11.3874 - val_categorical_accuracy: 0.1100
Epoch 10/10
1000/1000 [==============================] - 0s 39us/sample - loss: 11.5276 - categorical_accuracy: 0.1400 - val_loss: 11.3705 - val_categorical_accuracy: 0.0900

5.2 Fitting tf.data datasets

Use the Datasets API to scale to large datasets or multi-device training. Pass a tf.data.Dataset instance to the fit method:

The model below uses the steps_per_epoch argument: the number of training steps the model runs before moving on to the next epoch. Because the dataset already yields batches of data, the call does not need a batch_size.
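Note the consequence of choosing steps_per_epoch yourself: with batches of 32 and 30 steps, each "epoch" consumes 960 samples rather than the full 1000; the repeated dataset carries the remainder into the next epoch. A sketch of the bookkeeping, using the numbers from this example:

```python
batch_size, steps_per_epoch, n_samples = 32, 30, 1000
# Samples consumed per epoch under this steps_per_epoch setting
samples_per_epoch = batch_size * steps_per_epoch
print(samples_per_epoch)              # 960 samples per epoch
print(n_samples - samples_per_epoch)  # 40 samples roll over (dataset repeats)
```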

# Instantiate a dataset
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()

# Don't forget to specify `steps_per_epoch` when fitting a dataset
model.fit(dataset, epochs=10, steps_per_epoch=30)
Epoch 1/10
30/30 [==============================] - 0s 4ms/step - loss: 11.5458 - categorical_accuracy: 0.1250
Epoch 2/10
30/30 [==============================] - 0s 1ms/step - loss: 11.5223 - categorical_accuracy: 0.1186
Epoch 3/10
30/30 [==============================] - 0s 1ms/step - loss: 11.5089 - categorical_accuracy: 0.1261
Epoch 4/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4985 - categorical_accuracy: 0.1293
Epoch 5/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4810 - categorical_accuracy: 0.1271
Epoch 6/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4406 - categorical_accuracy: 0.1325
Epoch 7/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4927 - categorical_accuracy: 0.1325
Epoch 8/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4721 - categorical_accuracy: 0.1335
Epoch 9/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4861 - categorical_accuracy: 0.1378
Epoch 10/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4992 - categorical_accuracy: 0.1442

Datasets can likewise be used for validation:

dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()

val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()

model.fit(dataset, epochs=10, steps_per_epoch=30,
          validation_data=val_dataset,
          validation_steps=3)
Epoch 1/10
30/30 [==============================] - 0s 5ms/step - loss: 11.5173 - categorical_accuracy: 0.1344 - val_loss: 11.4144 - val_categorical_accuracy: 0.1354
Epoch 2/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4921 - categorical_accuracy: 0.1517 - val_loss: 11.4082 - val_categorical_accuracy: 0.1029
Epoch 3/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4804 - categorical_accuracy: 0.1592 - val_loss: 11.3145 - val_categorical_accuracy: 0.1176
Epoch 4/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4690 - categorical_accuracy: 0.1549 - val_loss: 11.4090 - val_categorical_accuracy: 0.1176
Epoch 5/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4525 - categorical_accuracy: 0.1741 - val_loss: 11.4220 - val_categorical_accuracy: 0.1458
Epoch 6/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4151 - categorical_accuracy: 0.1667 - val_loss: 11.4548 - val_categorical_accuracy: 0.0441
Epoch 7/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4648 - categorical_accuracy: 0.1645 - val_loss: 11.2806 - val_categorical_accuracy: 0.1324
Epoch 8/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4511 - categorical_accuracy: 0.1763 - val_loss: 11.4220 - val_categorical_accuracy: 0.0735
Epoch 9/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4636 - categorical_accuracy: 0.1581 - val_loss: 11.4619 - val_categorical_accuracy: 0.0938
Epoch 10/10
30/30 [==============================] - 0s 1ms/step - loss: 11.4767 - categorical_accuracy: 0.1816 - val_loss: 11.4408 - val_categorical_accuracy: 0.0588

6 Evaluate and predict

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

model.evaluate(data, labels, batch_size=32)
1000/1000 [==============================] - 0s 60us/sample - loss: 11.4637 - categorical_accuracy: 0.1010

[11.463740493774415, 0.101]
model.evaluate(dataset, steps=30)
30/30 [==============================] - 0s 3ms/step - loss: 11.5161 - categorical_accuracy: 0.1729

[11.516093063354493, 0.17291667]
result = model.predict(data, batch_size=32)
print(result.shape)
(1000, 10)
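predict returns the raw softmax probabilities, one row of 10 per sample. To turn each row into a predicted class label, take the argmax along the last axis; a NumPy sketch, using a small hypothetical array in place of `result`:

```python
import numpy as np

# Hypothetical softmax output for 2 samples over 10 classes,
# standing in for `result` from model.predict
probs = np.array([[0.05] * 9 + [0.55],
                  [0.91] + [0.01] * 9])
classes = np.argmax(probs, axis=-1)
print(classes)  # [9 0]
```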