Paper Reading: Papers in Frontiers of NLP 2018 collection

This post surveys key advances of deep learning in natural language processing, covering models such as multi-task learning, word-embedding generation, recursive neural networks, LSTMs, CNNs, and the Transformer, as well as applications of attention mechanisms, adversarial training, and transfer learning.


1. Paper collection
Note: the original titles of the papers will be appended soon!

| Index | Paper | Year | Brief Intro | Note |
|---|---|---|---|---|
| 1 | [Collobert & Weston, ICML '08] | 2008 | Multi-task learning | MTL: won the Test-of-Time Award at ICML 2018 |
| 2 | [Pennington et al., EMNLP '14; Levy et al., NIPS '14] | 2014 | Embeddings generated by matrix factorization | New embedding method |
| 3 | [Levy et al., TACL '15] | 2015 | Classic methods (e.g. PMI and SVD) for embedding generation | New embedding method |
| 4 | [Le & Mikolov, ICML '14; Kiros et al., NIPS '15] | 2014/15 | Skip-gram for sentence representations | Skip-gram |
| 5 | [Grover & Leskovec, KDD '16] | 2016 | Skip-gram for network (graph) modelling | Skip-gram |
| 6 | [Luong et al., '15] | 2015 | Different embedding projections aid transfer learning | Embedding projection |
| 7 | [Hochreiter & Schmidhuber, NeuComp '97] | 1997 | The original LSTM paper | LSTM |
| 8 | [Kalchbrenner et al., '17] | 2017 | Dilated CNN | CNN: enables a wider receptive field |
| 9 | [Wang et al., ACL '16] | 2016 | Stacked LSTM and CNN | Stacked model |
| 10 | [Bradbury et al., ICLR '17] | 2017 | Uses convolutions to speed up the LSTM | CNN & LSTM combination |
| 11 | [Tai et al., ACL '15] | 2015 | Extends recursive neural networks to the LSTM | Recursive neural network |
| 12 | [Bastings et al., EMNLP '17] | 2017 | Graph convolutional neural network | CNN over graphs (trees) |
| 13 | [Levy and Goldberg, ACL '14] | 2014 | Word embeddings generated from dependencies | Embedding generation |
| 14 | [Wu et al., '16] | 2016 | Deep LSTM | New seq2seq model |
| 15 | [Kalchbrenner et al., arXiv '16; Gehring et al., arXiv '17] | 2016/17 | Convolutional encoders | New seq2seq model |
| 16 | [Vaswani et al., NIPS '17] | 2017 | Transformer: pure attention architecture | New seq2seq model |
| 17 | [Chen et al., ACL '18] | 2018 | Combination of LSTM and Transformer | New seq2seq model |
| 18 | [Vinyals et al., NIPS '16] | 2016 | Attention in one-shot learning | Attention & one-shot |
| 19.0 | [Graves et al., arXiv '14] | 2014 | Neural Turing Machine | Memory network |
| 19.1 | [Weston et al., ICLR '15] | 2015 | Memory Networks | Memory network |
| 19.2 | [Sukhbaatar et al., NIPS '15] | 2015 | End-to-end Memory Networks | Memory network |
| 19.3 | [Kumar et al., ICML '16] | 2016 | Dynamic Memory Networks | Memory network |
| 19.4 | [Graves et al., Nature '16] | 2016 | Differentiable Neural Computer | Memory network |
| 19.5 | [Henaff et al., ICLR '17] | 2017 | Recurrent Entity Network | Memory network |
| 20 | [Peters et al., NAACL '18] | 2018 | Language-model embeddings used as features (I read a related paper earlier; will add it later) | Language model |
| 21 | [Howard & Ruder, ACL '18] | 2018 | Language model fine-tuned on task data | Language model |
| 22 | [Jia & Liang, EMNLP '17] | 2017 | Adversarial examples | Adversarial |
| 23 | [Miyato et al., ICLR '17; Yasunaga et al., NAACL '18] | 2017/18 | Adversarial training | Form of regularization |
| 24 | [Ganin et al., JMLR '16; Kim et al., ACL '17] | 2016/17 | Domain-adversarial loss | Form of regularization |
| 25 | [Semeniuta et al., '18] | 2018 | GANs applied to natural language generation | GAN for NLP |
| 26 | [Paulus et al., ICLR '18] | 2018 | RL for summarization | RL with ROUGE loss |
| 27 | [Ranzato et al., ICLR '16] | 2016 | RL for machine translation | RL with BLEU loss |
| 28 | [Conneau et al., ICLR '18] | 2018 | Word translation without parallel data | Low-resource scenarios |
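Several entries above (rows 16–18) revolve around attention. The core operation of the Transformer [Vaswani et al., NIPS '17], scaled dot-product attention, can be sketched in a few lines of NumPy. This is a minimal illustration with made-up shapes and random inputs, not the paper's full multi-head implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V as in Vaswani et al. (2017).

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    Returns the attended output and the attention weights.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4); each row of w sums to 1
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.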