Keras Learning Notes 1: Multilayer Perceptron (MLP)

This is the first post in a Keras learning series. It shows how to build a multilayer perceptron (MLP) with Keras, walking through the basic steps of assembling a neural network: defining the input, hidden, and output layers, then training and evaluating the model on the MNIST handwritten-digit dataset.


from keras.datasets import mnist
from matplotlib import pyplot as plt
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load the MNIST dataset bundled with Keras
(x_train, y_train), (x_validation, y_validation) = mnist.load_data()
# Display four of the handwritten digit images
plt.subplot(221)
plt.imshow(x_train[0], cmap=plt.get_cmap('gray'))

plt.subplot(222)
plt.imshow(x_train[1], cmap=plt.get_cmap('gray'))

plt.subplot(223)
plt.imshow(x_train[2], cmap=plt.get_cmap('gray'))

plt.subplot(224)
plt.imshow(x_train[3], cmap=plt.get_cmap('gray'))
# plt.imshow(x_train[3])
plt.show()
# Set the random seed for reproducibility
seed = 7
np.random.seed(seed)
Using TensorFlow backend.

[Figure: four sample MNIST digits displayed in a 2x2 grid of grayscale images]
# To determine the number of input neurons: x.shape[0] is the number of samples,
# x.shape[1] the number of rows, and x.shape[2] the number of columns of each image
num_pixels = x_train.shape[1] * x_train.shape[2]
print(x_train.shape[0])
print(num_pixels)
60000
784
x_train = x_train.reshape(x_train.shape[0],num_pixels).astype('float32')
x_validation = x_validation.reshape(x_validation.shape[0],num_pixels).astype('float32')
# Normalize pixel values to the range [0, 1]
x_train = x_train/255
x_validation = x_validation/255
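
As a quick sanity check (a minimal sketch, not part of the original run), the shapes and value range can be inspected after the reshape and scaling:

# Optional sanity check (not in the original post):
# expect (60000, 784) and (10000, 784), with pixel values in [0, 1]
print(x_train.shape, x_validation.shape)
print(x_train.min(), x_train.max())
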
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_validation = np_utils.to_categorical(y_validation)
num_classes = y_validation.shape[1]
print(num_classes)  # number of neurons needed in the output layer
10
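
To illustrate what to_categorical does, here is a small hedged example (the labels below are made up for demonstration): each integer label becomes a vector of length 10 with a 1 at the label's index.

# Illustrative example (not from the original run)
demo = np_utils.to_categorical([0, 3, 9], num_classes=10)
print(demo)
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
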
# Define the MLP model
def create_model():
    # Create a sequential model: 784 -> 784 -> 10
    model = Sequential()
    model.add(Dense(units=num_pixels, input_dim = num_pixels,kernel_initializer='normal',activation='relu'))
    model.add(Dense(units=784,kernel_initializer='normal',activation='relu'))
    model.add(Dense(units=num_classes, kernel_initializer='normal',activation='softmax'))
    
    # Compile the model: cross-entropy loss, Adam optimizer, accuracy metric
    model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
    return model
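
The sketch below is an optional addition (not in the original post): Keras' model.summary() prints each layer's output shape and parameter count, which is a handy check before training.

# Optional: inspect the architecture and parameter counts
create_model().summary()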

model = create_model()
model.fit(x_train,y_train,epochs=10,batch_size=200)

score = model.evaluate(x_validation,y_validation)
print('MLP %.2f%%' % (score[1] * 100))
    
Epoch 1/10
60000/60000 [==============================] - 2s 39us/step - loss: 0.2161 - acc: 0.9346
Epoch 2/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.0745 - acc: 0.9774
Epoch 3/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.0452 - acc: 0.9852
Epoch 4/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.0297 - acc: 0.9904
Epoch 5/10
60000/60000 [==============================] - 2s 36us/step - loss: 0.0245 - acc: 0.9921
Epoch 6/10
60000/60000 [==============================] - 2s 36us/step - loss: 0.0198 - acc: 0.9939
Epoch 7/10
60000/60000 [==============================] - 2s 36us/step - loss: 0.0153 - acc: 0.9949
Epoch 8/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.0145 - acc: 0.9952
Epoch 9/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.0157 - acc: 0.9949
Epoch 10/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.0085 - acc: 0.9974
10000/10000 [==============================] - 0s 46us/step
MLP 98.23%
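
Beyond evaluate(), the trained model can also be used for individual predictions. The following is a hedged sketch (not part of the original run) that compares predicted and true labels for a few validation images:

# Illustrative follow-up (not from the original post):
# predict class probabilities for the first five validation images
probs = model.predict(x_validation[:5])
predicted = np.argmax(probs, axis=1)
true_labels = np.argmax(y_validation[:5], axis=1)
print('predicted:', predicted)
print('true     :', true_labels)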