3-Tensorflow-demo_13: Implementing a Shallow Neural Network with Keras

This post shows how to build a simple neural network with Keras and TensorFlow to solve the XOR problem, walking step by step through data preparation, model construction, training, and evaluation.


Matching the Keras version to the installed TensorFlow version

pip install keras==2.0.8
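# Quick sanity check of the installed versions (a sketch only; it assumes
# standalone Keras running on a TensorFlow 1.x backend, as the log output
# further below suggests):
import keras
import tensorflow as tf
print("Keras version:", keras.__version__)        # e.g. 2.0.8
print("TensorFlow version:", tf.__version__)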
import numpy as np
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten

# Fix the random seed for reproducibility
np.random.seed(42)

# Demo data: the XOR truth table
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]]).astype('float32')
y = np.array([[0],
              [1],
              [1],
              [0]]).astype('float32')

# One-hot encode the labels
y = np_utils.to_categorical(y)
print(y)

# Build the model
xor = Sequential()                # Sequential: a linear stack of layers
"""
    Dense(self, units                       #  你要输出的节点数量
                 activation=None,           #  激活函数
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):

# Example
    ```python
        # 如果作为第一层:
        model = Sequential()
        model.add(Dense(32, input_shape=(16,)))
        # 模型会自动将输入的 shape识别为 (*, 16)   即16是 特征数量
        # 输出的shape是 (*, 32)

        # 如果不是作为第一层,那么无需输入input_shape。他会自动识别上一层的节点数量。
        model.add(Dense(32))
    ```
"""
xor.add(Dense(64, input_dim=2))   # first hidden layer; 64 is the number of units in this layer
xor.add(Activation("relu"))
xor.add(Dense(32))                # second hidden layer
xor.add(Activation('relu'))
xor.add(Dense(2))                 # output layer
xor.add(Activation("sigmoid"))

# Configure how the model will be trained
"""
compile(self, optimizer,              which optimizer to use
              loss=None,              loss function
              metrics=None,           metrics evaluated during training; specified like a loss,
                                      but never used to update the weights
                                      (https://keras.io/zh/metrics/) ['accuracy', 'acc', 'crossentropy', 'ce']
              loss_weights=None,
              sample_weight_mode=None,
              weighted_metrics=None,
              target_tensors=None,
              **kwargs):
"""
# loss: "categorical_crossentropy" for one-hot multi-class targets,
#       "binary_crossentropy" for binary targets
xor.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
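
# Passing "adam" as a string uses the optimizer's default settings. To control
# the learning rate explicitly, an optimizer object can be passed instead. A
# sketch (the learning rate shown is simply Adam's default, so this re-compile
# does not change the behaviour of the demo):
from keras.optimizers import Adam
xor.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])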

# Print the model architecture summary; very handy.
# Note: summary() prints the table itself and returns None, which is why a
# trailing "None" shows up in the output below.
print(xor.summary())

# Train the model
"""
    fit(self,
            x=None,
            y=None,
            batch_size=None,              # batch size
            epochs=1,                     # number of training epochs
            verbose=1,                    # progress display mode: 0 = silent, 1 = progress bar, 2 = one line per epoch
            callbacks=None,               # training hooks, e.g. early stopping
            validation_split=0.,
            validation_data=None,         # validation set as a tuple `(x_val, y_val)`
            shuffle=True,
            class_weight=None,
            sample_weight=None,
            initial_epoch=0,
            steps_per_epoch=None,         # batches per epoch, i.e. total number of samples / batch_size
            validation_steps=None,
            **kwargs):
"""
history = xor.fit(X, y, epochs=1000, verbose=1)
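
# The `callbacks` argument of fit() (see the signature above) accepts training
# hooks such as early stopping. A minimal sketch, not used in this demo's run:
# stop once the training loss improves by less than 1e-4 for 20 epochs in a row.
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='loss', min_delta=1e-4, patience=20)
# It would be passed in as: xor.fit(X, y, epochs=1000, callbacks=[early_stop])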

# Evaluate the model
score = xor.evaluate(X, y)
print("\nAccuracy: ", score[-1])

# Print the predicted probabilities
print("\nPredictions:")
print(xor.predict_proba(X))
Program output (the first matrix is the one-hot labels from print(y)):

[[1. 0.]
 [0. 1.]
 [0. 1.]
 [1. 0.]]
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 64)                192       
_________________________________________________________________
activation_1 (Activation)    (None, 64)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 32)                2080      
_________________________________________________________________
activation_2 (Activation)    (None, 32)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 66        
_________________________________________________________________
activation_3 (Activation)    (None, 2)                 0         
=================================================================
Total params: 2,338
Trainable params: 2,338
Non-trainable params: 0
_________________________________________________________________
None
2020-07-28 17:02:54.552537: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Epoch 1/1000
4/4 [==============================] - 0s - loss: 0.6854 - acc: 0.7500
Epoch 2/1000
4/4 [==============================] - 0s - loss: 0.6820 - acc: 0.5000
Epoch 3/1000
4/4 [==============================] - 0s - loss: 0.6778 - acc: 0.7500
Epoch 4/1000
4/4 [==============================] - 0s - loss: 0.6737 - acc: 0.7500
Epoch 5/1000
4/4 [==============================] - 0s - loss: 0.6700 - acc: 0.7500
Epoch 6/1000
4/4 [==============================] - 0s - loss: 0.6671 - acc: 0.7500
Epoch 7/1000
4/4 [==============================] - 0s - loss: 0.6643 - acc: 1.0000
Epoch 8/1000

......
Epoch 993/1000
4/4 [==============================] - 0s - loss: 5.0461e-04 - acc: 1.0000
Epoch 994/1000
4/4 [==============================] - 0s - loss: 5.0350e-04 - acc: 1.0000
Epoch 995/1000
4/4 [==============================] - 0s - loss: 5.0240e-04 - acc: 1.0000
Epoch 996/1000
4/4 [==============================] - 0s - loss: 5.0129e-04 - acc: 1.0000
Epoch 997/1000
4/4 [==============================] - 0s - loss: 5.0021e-04 - acc: 1.0000
Epoch 998/1000
4/4 [==============================] - 0s - loss: 4.9906e-04 - acc: 1.0000
Epoch 999/1000
4/4 [==============================] - 0s - loss: 4.9797e-04 - acc: 1.0000
Epoch 1000/1000
4/4 [==============================] - 0s - loss: 4.9687e-04 - acc: 1.0000
4/4 [==============================] - 0s

Accuracy:  1.0

Predictions:
4/4 [==============================] - 0s
[[7.2634822e-01 7.2251959e-04]
 [7.1427326e-05 1.8755811e-01]
 [5.6154968e-05 1.7090979e-01]
 [3.8332991e-02 1.0720866e-05]]

Process finished with exit code 0
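The last matrix in the output is the raw per-class score from predict_proba, and evaluate returns one value per entry in the model's metrics_names (here loss and accuracy, which is why score[-1] is the accuracy). A minimal sketch of turning these raw outputs into readable class labels, assuming it is run right after the script above (np.argmax picks the higher-scoring column per row):

```python
import numpy as np

# evaluate() returns values in the same order as xor.metrics_names,
# e.g. ['loss', 'acc'], so pairing them up makes the result self-describing.
score = xor.evaluate(X, y, verbose=0)
print(dict(zip(xor.metrics_names, score)))

# predict_proba() gives one row of class scores per sample; argmax over the
# columns recovers the predicted class (0 or 1), which can be compared with
# the true XOR labels.
probs = xor.predict_proba(X, verbose=0)
pred_classes = np.argmax(probs, axis=1)
true_classes = np.argmax(y, axis=1)
print("predicted:", pred_classes)   # expected: [0 1 1 0]
print("true:     ", true_classes)   # [0 1 1 0]
```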
