Keras Getting-Started Series, Day 7

Text classification on the IMDB dataset with a 1D CNN

from keras.models import Sequential
from keras.layers import Dense, Embedding, Conv1D, MaxPooling1D, GlobalMaxPooling1D
from keras.datasets import imdb
from keras.preprocessing import sequence
from keras.optimizers import RMSprop
import matplotlib.pyplot as plt

max_features = 10000   # vocabulary size: keep only the 10,000 most frequent words
max_len = 200          # truncate/pad every review to 200 tokens

# Load IMDB reviews as sequences of word indices and pad them to a fixed length
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
# 1D convnet: embedding -> two Conv1D stages with pooling -> global max pooling -> sigmoid output
model = Sequential()
model.add(Embedding(max_features, 128, input_length=max_len))
model.add(Conv1D(32, 7, activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(32, 7, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(1, activation='sigmoid'))   # probability that the review is positive

model.summary()
# Compile and train: RMSprop with a small learning rate, binary cross-entropy loss,
# and 20% of the training data held out for validation
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
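Since matplotlib is imported above and the test split is already padded, a minimal follow-up sketch (assuming the training code above has run in the same session, so that `history`, `model`, `x_test`, and `y_test` exist) plots the training curves and evaluates the model on the held-out test set. With metrics=['acc'], the history dictionary uses the keys 'acc' and 'val_acc'.

# Plot training vs. validation accuracy and loss recorded by model.fit()
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

# Evaluate on the 25,000-review test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print('test loss:', test_loss)
print('test acc:', test_acc)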

 
