https://arxiv.org/pdf/1703.03906
Neural Machine Translation (NMT) has shown remarkable progress over the past few years with production systems now being deployed to end-users. One major drawback of current architectures is that they are expensive to train, typically requiring days to weeks of GPU time to converge. This makes exhaustive hyperparameter search, as is commonly done with other neural network architectures, prohibitively expensive. In this work, we present the first large-scale analysis of NMT architecture hyperparameters. We report empirical results and variance numbers for several hundred experimental runs, corresponding to over 250,000 GPU hours on the standard WMT English to German translation task. Our experiments lead to novel insights and practical advice for building and extending NMT architectures. As part of this contribution, we release an open-source NMT framework that enables researchers to easily experiment with novel techniques and reproduce state of the art results.
In short: this paper presents the first large-scale analysis of Neural Machine Translation (NMT) architecture hyperparameters, based on the standard WMT English-to-German translation task, covering several hundred experimental runs and over 250,000 GPU hours. The study yields novel insights and practical advice for NMT systems, and the authors open-source an NMT framework to make such experimentation easy.
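To make the cost argument concrete, here is a minimal back-of-the-envelope sketch in Python of why exhaustive hyperparameter search is prohibitive for NMT. The grid dimensions loosely mirror the kinds of knobs the paper varies (embedding size, cell type, depth, attention type, beam width), but the specific values and the per-run GPU-hour estimate are illustrative assumptions, not figures from the paper.

```python
# Hypothetical hyperparameter grid for an attention-based encoder-decoder
# NMT model. Values are illustrative, not the paper's exact search space.
GRID = {
    "embedding_dim": [128, 256, 512, 1024, 2048],
    "cell_type": ["lstm", "gru", "vanilla"],
    "encoder_depth": [1, 2, 4],
    "decoder_depth": [1, 2, 4],
    "attention": ["additive", "multiplicative"],
    "beam_width": [1, 5, 10],
}

# Assumed GPU hours to train one configuration to convergence
# (days-to-weeks of GPU time, per the abstract; 100 h is a rough stand-in).
HOURS_PER_RUN = 100

# Size of the full Cartesian product of the grid.
n_configs = 1
for values in GRID.values():
    n_configs *= len(values)

print(f"configurations: {n_configs}")
print(f"exhaustive search cost: {n_configs * HOURS_PER_RUN:,} GPU hours")
```

Even this toy grid yields over 1,600 configurations, i.e. on the order of 160,000 GPU hours under the assumed per-run cost, which is why the paper instead varies one dimension at a time from a strong baseline.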