After training the word2vec model, we still need a way to get it into an embedding layer for use. First, save just the word vectors (model.wv, the KeyedVectors) in word2vec format:
model.wv.save_word2vec_format(u"./data_path/model")
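To then feed these vectors into an embedding layer, one option is roughly the following sketch (this assumes gensim for loading and PyTorch for the layer; 'some_word' is only a placeholder, and the exact attribute names depend on your gensim version):

from gensim.models import KeyedVectors
import torch
import torch.nn as nn

# load the vectors saved above (plain-text word2vec format)
kv = KeyedVectors.load_word2vec_format(u"./data_path/model")
# kv.vectors is a (vocab_size, vector_size) numpy matrix, one row per word
weights = torch.FloatTensor(kv.vectors)
# build an embedding layer initialized with the pretrained vectors
embed = nn.Embedding.from_pretrained(weights, freeze=True)
# to look up a word, map it to its row index first
# (gensim 4.x: kv.key_to_index; gensim 3.x: kv.vocab['some_word'].index)
idx = kv.key_to_index['some_word']
vec = embed(torch.LongTensor([idx]))

freeze=True keeps the pretrained vectors fixed during training; set it to False if you want to fine-tune them.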
Reading and writing a numpy ndarray (.npy):
npy_path=u"data_path/sample.npy"
a=numpy.array(range(0,16)).reshape((2,2,-1),order='F')
print(a)
numpy.save(npy_path,a)
print(numpy.load(npy_path))
Reading and writing a dictionary (via pickle):
import pickle

pkl_path = u"data_path/sample.pkl"   # example path for the pickled file
data = {'a': [1, 2.0, 3, 4 + 6j],
        'b': ('String', u'unicode string'),
        'c': None}
print(data)
# write: dump the dictionary to a binary file
output = open(pkl_path, 'wb')
pickle.dump(data, output)
output.close()
# read: load it back
pkl_input = open(pkl_path, 'rb')
print(pickle.load(pkl_input))
pkl_input.close()