How the parameter counts in Keras model.summary() output are calculated


Overview

When building a deep learning model with Keras, we usually call model.summary() to print the parameter counts of each layer, like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_4 (Dense)              (None, 7)                 35        
_________________________________________________________________
activation_4 (Activation)    (None, 7)                 0         
_________________________________________________________________
dense_5 (Dense)              (None, 13)                104       
_________________________________________________________________
activation_5 (Activation)    (None, 13)                0         
_________________________________________________________________
dense_6 (Dense)              (None, 5)                 70        
_________________________________________________________________
activation_6 (Activation)    (None, 5)                 0         
=================================================================
Total params: 209
Trainable params: 209
Non-trainable params: 0
_________________________________________________________________


From this output you can see which layers make up the model (Dense is a fully connected layer) and the shape the data has after passing through each layer.
You can also see Param #, the number of parameters in each layer. How is this number calculated?

How Param is calculated for a basic neural network

First, let's use the following code to build the simplest possible network, consisting of just three fully connected layers:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()  # sequential model

# input layer
model.add(Dense(7, input_shape=(4,)))  # Dense is the standard fully connected layer
model.add(Activation('sigmoid'))  # activation function

# hidden layer
model.add(Dense(13))
model.add(Activation('sigmoid'))

# output layer
model.add(Dense(5))
model.add(Activation('softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])

model.summary()


The model's summary output is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_4 (Dense)              (None, 7)                 35        
_________________________________________________________________
activation_4 (Activation)    (None, 7)                 0         
_________________________________________________________________
dense_5 (Dense)              (None, 13)                104       
_________________________________________________________________
activation_5 (Activation)    (None, 13)                0         
_________________________________________________________________
dense_6 (Dense)              (None, 5)                 70        
_________________________________________________________________
activation_6 (Activation)    (None, 5)                 0         
=================================================================
Total params: 209
Trainable params: 209
Non-trainable params: 0
_________________________________________________________________


For a fully connected (Dense) layer, Param is the number of weights in that layer, calculated as:

Param = (input dimension + 1) × number of neurons

The +1 accounts for the bias term that every neuron carries.

The first Dense layer has a 4-dimensional input and 7 neurons, so Param = (4 + 1) × 7 = 35.
The second Dense layer has a 7-dimensional input (the output of the first layer's 7 neurons) and 13 neurons, so Param = (7 + 1) × 13 = 104.
The third Dense layer has a 13-dimensional input (the output of the second layer's 13 neurons) and 5 neurons, so Param = (13 + 1) × 5 = 70.
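
As a sanity check, the formula can be written as a small helper (dense_params is just an illustrative name here, not a Keras API):

def dense_params(input_dim, units):
    # each of the `units` neurons has `input_dim` weights plus one bias
    return (input_dim + 1) * units

assert dense_params(4, 7) == 35    # dense_4
assert dense_params(7, 13) == 104  # dense_5
assert dense_params(13, 5) == 70   # dense_6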

How Param is calculated for a convolutional neural network

Next, let's use the following code to build a CNN model with three convolutional layers:

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D


model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 2),
                 input_shape=(8,8,1)))
convout1 = Activation('relu')
model.add(convout1)

model.add(Conv2D(64, (2, 3), activation='relu'))
model.add(Conv2D(64, (2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_10 (Conv2D)           (None, 6, 7, 32)          224       
_________________________________________________________________
activation_4 (Activation)    (None, 6, 7, 32)          0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 5, 5, 64)          12352     
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 4, 4, 64)          16448     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 2, 2, 64)          0         
_________________________________________________________________
dropout_7 (Dropout)          (None, 2, 2, 64)          0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_6 (Dense)              (None, 128)               32896     
_________________________________________________________________
dropout_8 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_7 (Dense)              (None, 10)                1290      
=================================================================
Total params: 63,210
Trainable params: 63,210
Non-trainable params: 0
_________________________________________________________________


For a CNN model, Param is calculated as:

Param = (kernel height × kernel width × input channels + 1) × number of filters

So:

The first conv layer, Conv2D(32, kernel_size=(3, 2), input_shape=(8,8,1)): Param = (3 × 2 × 1 + 1) × 32 = 224.
The second conv layer, Conv2D(64, (2, 3), activation='relu'): after the first layer's 32 filters, its input has 32 channels, so Param = (2 × 3 × 32 + 1) × 64 = 12352.
The third conv layer, Conv2D(64, (2, 2), activation='relu'): after the second layer's 64 filters, its input has 64 channels, so Param = (2 × 2 × 64 + 1) × 64 = 16448.

Why is the Param of dense_6 (Dense) 32896?
Because flatten_4 (Flatten) reshapes the (2, 2, 64) feature map into a 256-dimensional vector, and dense_6 has 128 neurons (not convolution kernels), so Param = (256 + 1) × 128 = 32896.
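
The convolution formula as a helper function (conv2d_params is an illustrative name, not a Keras API):

def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    # each filter has kernel_h * kernel_w * in_channels weights plus one bias
    return (kernel_h * kernel_w * in_channels + 1) * filters

assert conv2d_params(3, 2, 1, 32) == 224     # conv2d_10
assert conv2d_params(2, 3, 32, 64) == 12352  # conv2d_11
assert conv2d_params(2, 2, 64, 64) == 16448  # conv2d_12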

Parameters of a BatchNormalization layer

import keras

x = keras.layers.Input(batch_shape=(None, 4096))
hidden = keras.layers.Dense(512, activation='relu')(x)
hidden = keras.layers.BatchNormalization()(hidden)
hidden = keras.layers.Dropout(0.5)(hidden)
predictions = keras.layers.Dense(80, activation='sigmoid')(hidden)
mlp_model = keras.models.Model(inputs=[x], outputs=[predictions])
mlp_model.summary()

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_3 (InputLayer)             (None, 4096)          0                                            
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 512)           2097664     input_3[0][0]                    
____________________________________________________________________________________________________
batchnormalization_1 (BatchNorma (None, 512)           2048        dense_1[0][0]                    
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 512)           0           batchnormalization_1[0][0]       
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 80)            41040       dropout_1[0][0]                  
====================================================================================================
Total params: 2,140,752
Trainable params: 2,139,728
Non-trainable params: 1,024
____________________________________________________________________________________________________

The input to the BatchNormalization (BN) layer is 512-dimensional, and according to the Keras documentation the output shape of a BN layer is the same as its input.

So how many parameters are associated with the BN layer?

Batch normalization in Keras implements the batch normalization paper (Ioffe & Szegedy, 2015). Making batch normalization work during training requires tracking the distribution of each normalized dimension, so in the default feature-wise mode the layer keeps 4 parameters per feature of the previous layer: a learned scale (gamma), a learned offset (beta), and the feature's moving mean and moving variance. Gamma and beta are trained by backpropagation, while the moving statistics are only updated during the forward pass; those statistics are exactly the 2 × 512 = 1,024 non-trainable params reported in the summary.

So the total is 4 × 512 = 2048, which answers the question.
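
To see the four parameter sets directly, you can list the layer's weights. A minimal sketch, assuming the mlp_model built above (index 2 refers to the BatchNormalization layer in that model):

bn = mlp_model.layers[2]  # InputLayer is 0, Dense is 1, BatchNormalization is 2
for w in bn.weights:
    print(w.name, w.shape)
# expected: gamma, beta, moving_mean, moving_variance, each with shape (512,)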

LayerNormalization

With the default settings (center=True and scale=True), a LayerNormalization layer has 2 × input dimension parameters: one gain (gamma) and one bias (beta) per feature. Unlike BatchNormalization it keeps no moving statistics, so all of its parameters are trainable.
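
A minimal sketch, assuming a Keras version that ships LayerNormalization:

import keras

x = keras.layers.Input(shape=(512,))
y = keras.layers.LayerNormalization()(x)
keras.models.Model(inputs=x, outputs=y).summary()
# the LayerNormalization layer reports 2 * 512 = 1024 params, all trainable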

Embedding layer parameters

from keras.models import Sequential
from keras.layers import Dense, Flatten, Embedding

vocab_size = 50
max_length = 4
model = Sequential()
model.add(Embedding(vocab_size, 7, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

The output:

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, 4, 7)              350       
_________________________________________________________________
flatten_1 (Flatten)          (None, 28)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 29        
=================================================================
Total params: 379
Trainable params: 379
Non-trainable params: 0

The Embedding layer's first argument is the vocabulary size and its second argument is the dimension of the output word vectors; the parameter count is their product, 50 × 7 = 350. (The Dense layer that follows obeys the fully connected formula above: (28 + 1) × 1 = 29.)
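
The same as a one-line check (embedding_params is an illustrative name, not a Keras API):

def embedding_params(vocab_size, embedding_dim):
    # one embedding vector per vocabulary entry; no bias term
    return vocab_size * embedding_dim

assert embedding_params(50, 7) == 350  # embedding_1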

LSTM layer parameters

from keras.models import Sequential
from keras.layers import LSTM

time_step = 13
feature_dim = 5
hidden_units = 10

model = Sequential()
model.add(LSTM(hidden_units, input_shape=(time_step, feature_dim)))
model.summary()

The output:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 10)                640       
=================================================================
Total params: 640
Trainable params: 640
Non-trainable params: 0
_________________________________________________________________

Let the LSTM input dimension be x_dim and its output (hidden) dimension be y_dim. The parameter count n is:

n = 4 × ((x_dim + y_dim) × y_dim + y_dim)

The factor of 4 comes from the LSTM's four gate computations (forget, input, output, and the cell candidate), each with its own input weights, recurrent weights, and bias. For the model above: n = 4 × ((5 + 10) × 10 + 10) = 640.
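
A small sketch of the same formula (lstm_params is an illustrative name, not a Keras API):

def lstm_params(x_dim, y_dim):
    # 4 gates, each with input weights (x_dim * y_dim),
    # recurrent weights (y_dim * y_dim), and a bias vector (y_dim)
    return 4 * ((x_dim + y_dim) * y_dim + y_dim)

assert lstm_params(5, 10) == 640  # matches the summary above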

Reposted from: https://blog.youkuaiyun.com/sunghosts/article/details/108047293
