- Keywords: data dimensions, concat
- Problem description: While training an N-gram neural network on the PTB dataset, the fluid.layers.concat API is used to join four word-embedding vectors, and the result is then passed through a fully connected layer to produce the output. However, setting the axis parameter of fluid.layers.concat to 0 raises an error.
- Error message:

```
<ipython-input-6-daf8837e1db3> in train(use_cuda, train_program, params_dirname)
37 num_epochs=1,
38 event_handler=event_handler,
---> 39 feed_order=['firstw', 'secondw', 'thirdw', 'fourthw', 'nextw'])
/usr/local/lib/python3.5/dist-packages/paddle/fluid/contrib/trainer.py in train(self, num_epochs, event_handler, reader, feed_order)
403 else:
404 self._train_by_executor(num_epochs, event_handler, reader,
--> 405 feed_order)
406
407 def test(self, reader, feed_order):
/usr/local/lib/python3.5/dist-packages/paddle/fluid/contrib/trainer.py in _train_by_executor(self, num_epochs, event_handler, reader, feed_order)
481 exe = executor.Executor(self.place)
482 reader = feeder.decorate_reader(reader, multi_devices=False)
--> 483 self._train_by_any_executor(event_handler, exe, num_epochs, reader)
484
485 def _train_by_any_executor(self, event_handler, exe, num_epochs, reader):
/usr/local/lib/python3.5/dist-packages/paddle/fluid/contrib/trainer.py in _train_by_any_executor(self, event_handler, exe, num_epochs, reader)
510 fetch_list=[
511 var.name
--> 512 for var in self.train_func_outputs
513 ])
514 else:
/usr/local/lib/python3.5/dist-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
468
469 self._feed_data(program, feed, feed_var_name, scope)
--> 470 self.executor.run(program.desc, scope, 0, True, True)
471 outs = self._fetch_data(fetch_list, fetch_var_name, scope)
472 if return_numpy:
EnforceNotMet: Enforce failed. Expected framework::slice_ddim(x_dims, 0, rank - 1) == framework::slice_ddim(label_dims, 0, rank - 1), but received framework::slice_ddim(x_dims, 0, rank - 1):400 != framework::slice_ddim(label_dims, 0, rank - 1):100.
Input(X) and Input(Label) shall have the same shape except the last dimension. at [/paddle/paddle/fluid/operators/cross_entropy_op.cc:37]
PaddlePaddle Call Stacks:
```

- Problem reproduction: Four word-embedding vectors are built with the fluid.layers.embedding API and then joined with the fluid.layers.concat API. As soon as the axis parameter is set to 0, the error above is raised. The faulty code is shown below (the axis=0 versus axis=1 shape difference is demonstrated right after it):
```python
embed_first = fluid.layers.embedding(
    input=first_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
embed_second = fluid.layers.embedding(
    input=second_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
embed_third = fluid.layers.embedding(
    input=third_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
embed_fourth = fluid.layers.embedding(
    input=fourth_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
concat_embed = fluid.layers.concat(
    input=[embed_first, embed_second, embed_third, embed_fourth], axis=0)
```
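The shape mismatch in the error message can be reproduced outside Paddle. Each embedding output has the shape (batch size, EMBED_SIZE); concatenating four of them along axis 0 stacks the batches and inflates the first dimension fourfold, while the label tensor keeps its original number of rows. A minimal NumPy sketch, assuming a batch size of 100 (consistent with the 100 in the error message) and an EMBED_SIZE of 32 purely for illustration:

```python
import numpy as np

batch_size, EMBED_SIZE = 100, 32  # illustrative values, not taken from the original configuration
embeds = [np.zeros((batch_size, EMBED_SIZE), dtype='float32') for _ in range(4)]

print(np.concatenate(embeds, axis=0).shape)  # (400, 32)  -> batch dimension inflated; labels still have 100 rows
print(np.concatenate(embeds, axis=1).shape)  # (100, 128) -> batch dimension intact; features placed side by side
```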
- Solution: The axis parameter of fluid.layers.concat is the integer axis along which the four tensors above are joined. Each sample's embedding is a one-dimensional vector, so the vectors should be joined along the feature axis, which is axis 1 once the batch dimension is counted. With axis=0 the four batches are stacked instead, the first dimension grows from 100 to 400, and cross_entropy then rejects the mismatch against the label's 100 rows. After concatenating along axis 1, the layer's output shape is (batch size, sum of the four embedding dimensions), i.e. (batch size, 4 * EMBED_SIZE). The corrected code:
```python
embed_first = fluid.layers.embedding(
    input=first_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
embed_second = fluid.layers.embedding(
    input=second_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
embed_third = fluid.layers.embedding(
    input=third_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
embed_fourth = fluid.layers.embedding(
    input=fourth_word,
    size=[dict_size, EMBED_SIZE],
    dtype='float32',
    is_sparse=is_sparse,
    param_attr='shared_w')
concat_embed = fluid.layers.concat(
    input=[embed_first, embed_second, embed_third, embed_fourth], axis=1)
```
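The concatenated tensor of shape (batch size, 4 * EMBED_SIZE) is what the fully connected layer mentioned in the problem description consumes. Below is a minimal sketch of the remaining layers, assuming HIDDEN_SIZE and a next_word label layer are defined elsewhere, as in the standard PTB word2vec setup (these names are not part of the code above):

```python
# Hidden layer over the concatenated context embeddings:
# (batch_size, 4 * EMBED_SIZE) -> (batch_size, HIDDEN_SIZE)
hidden1 = fluid.layers.fc(input=concat_embed, size=HIDDEN_SIZE, act='sigmoid')
# Output layer: probability distribution over the dictionary, shape (batch_size, dict_size)
predict_word = fluid.layers.fc(input=hidden1, size=dict_size, act='softmax')
# cross_entropy requires its input and label to agree on every dimension except the last,
# which is exactly the check that failed (400 != 100) when axis=0 inflated the batch dimension
cost = fluid.layers.cross_entropy(input=predict_word, label=next_word)
avg_cost = fluid.layers.mean(cost)
```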
- Extension: The fluid.layers.embedding API embeds highly sparse, discrete inputs into a new real-valued vector space, counteracting the curse of dimensionality by encoding richer information in fewer dimensions. fluid.layers.concat then joins several such vectors side by side, and the result is fed into the rest of the network; a lookup-table analogy is sketched below.
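To make the embedding step concrete, it can be viewed as a learnable lookup table in which every sparse integer word id selects one dense row. A NumPy analogy with purely illustrative sizes (neither dict_size nor EMBED_SIZE here is taken from the actual configuration):

```python
import numpy as np

dict_size, EMBED_SIZE = 2000, 32  # illustrative sizes only
# Stands in for the learned embedding parameter (the 'shared_w' table in the code above)
table = np.random.rand(dict_size, EMBED_SIZE).astype('float32')

word_ids = np.array([7, 42, 7, 1999])  # sparse discrete inputs (word ids)
dense_vectors = table[word_ids]        # each id is replaced by a dense EMBED_SIZE-dim vector
print(dense_vectors.shape)             # (4, 32)
```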