





References:
[1] Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. Weighted transformer network for machine translation. arXiv preprint arXiv:1711.02132, 2017.
[2] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018.
[3] http://www.sohu.com/a/234238473_129720
[4] https://baijiahao.baidu.com/s?id=1601234081544356769&wfr=spider&for=pc
[5] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.
[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[7] Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In ACL.
This article surveys the development of the Transformer model, from the Weighted Transformer Network to Self-Attention with Relative Position Representations, and on to the emergence of pretrained models such as BERT and GPT, illustrating the application and progress of Transformers in natural language processing.




