Common Neural Network Implementations
Environment: tensorflow=2.2, keras=2.3.1
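These versions can be installed with pip; the exact pins below are an assumption based on the versions listed above:

pip install tensorflow==2.2.0 keras==2.3.1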
(1) Fully connected neural network
from keras.models import Sequential
from keras.layers import Embedding, Dropout, Flatten, Dense

model = Sequential()
model.add(Embedding(output_dim=32,    # dimension of each word vector
                    input_dim=2000,   # size of the vocabulary
                    input_length=50   # length of each integer sequence
                    ))
# Each input is 50 words, each mapped to a 32-dimensional vector
# (vocabulary size 2000), so the first layer outputs shape (50, 32).
model.add(Dropout(0.2))
model.add(Flatten())  # flatten (50, 32) into a vector of 50 x 32 = 1600 values
model.add(Dense(units=256, activation="relu"))    # hidden layer with 256 units
model.add(Dropout(0.25))
model.add(Dense(units=10, activation="softmax"))  # output layer with 10 classes
model.summary()  # print the model structure
Model visualization: (model structure diagram omitted)
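The diagrams referred to here can be generated with keras.utils.plot_model; a minimal sketch (it assumes the pydot and graphviz packages are installed, which the original does not state):

from keras.utils import plot_model

# Write a diagram of the layer graph to disk; show_shapes annotates
# every layer with its input and output shapes.
plot_model(model, to_file='model.png', show_shapes=True)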
(2) Convolutional neural network
from keras.models import Sequential
from keras.layers import (Embedding, Conv1D, MaxPool1D, Flatten,
                          Dropout, BatchNormalization, Dense)

model = Sequential()
model.add(Embedding(output_dim=32,    # dimension of each word vector
                    input_dim=2000,   # size of the vocabulary
                    input_length=50   # length of each integer sequence
                    ))                                            # output: (50, 32)
model.add(Conv1D(256, 3, padding='same', activation='relu'))      # output: (50, 256)
model.add(MaxPool1D(3, 3, padding='same'))     # output: (ceil(50/3) = 17, 256)
model.add(Conv1D(32, 3, padding='same', activation='relu'))       # output: (17, 32)
model.add(Flatten())   # output: vector of 17 x 32 = 544 values
model.add(Dropout(0.3))
model.add(BatchNormalization())  # batch normalization layer
model.add(Dense(256, activation='relu'))          # hidden layer with 256 units
model.add(Dropout(0.2))
model.add(Dense(units=10, activation="softmax"))  # output layer with 10 classes
model.summary()  # print the model structure
Model visualization: (model structure diagram omitted)
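Neither of the two models above is compiled or trained in these snippets. A minimal training sketch, assuming x_raw is a list of integer word-index sequences and y_train holds one-hot 10-class labels (both names are illustrative, not from the original):

from keras.preprocessing.sequence import pad_sequences

# Pad/truncate every sequence to length 50 to match input_length above.
x_train = pad_sequences(x_raw, maxlen=50)

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.2)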
(3) LSTM neural network
# There are 5 input features; each input sequence has length 50.
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(units=100,
               return_sequences=True,  # pass the full sequence to the next LSTM
               input_shape=(x_train.shape[1], x_train.shape[-1])))  # (50, 5)
model.add(LSTM(units=50))  # second LSTM returns only the last hidden state
model.add(Dense(1))        # single regression output
model.compile(loss="mean_squared_error", optimizer='adam')
# epochs and batch_size are assumed to be defined earlier
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=1)
model.summary()  # print the model structure
Model visualization: (model structure diagram omitted)
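The LSTM snippet assumes x_train already has shape (samples, 50, 5). One common way to build such input is a sliding window over a 2-D feature matrix; the sketch below is illustrative (data, make_windows, and the choice of predicting feature 0 are assumptions, not from the original):

import numpy as np

def make_windows(data, window=50):
    # Slice a (timesteps, 5) array into overlapping windows:
    # x has shape (n, window, 5); y is feature 0 at the step after each window.
    x, y = [], []
    for i in range(len(data) - window):
        x.append(data[i:i + window])
        y.append(data[i + window, 0])
    return np.array(x), np.array(y)

x_train, y_train = make_windows(data, window=50)  # data: (timesteps, 5) array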