nlp
FocusYang55
Articles in this column
Word Embedding Preparation 5. BERT
BERT, published by Google in 2018: Bidirectional Encoder Representations from Transformers. Two phases: pre-training and fine-tuning. It uses the Transformer, proposed by Google in Attention Is All You Need (2017), to replace RNNs. BERT takes advantage of multiple mode…
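As a rough illustration of the pre-training/fine-tuning split mentioned above, the sketch below loads a pre-trained BERT checkpoint with the Hugging Face transformers library and runs one fine-tuning step through a classification head. The checkpoint name, label count, and toy example are illustrative assumptions, not details from the article.

```python
# Minimal fine-tuning sketch; "bert-base-uncased", num_labels=2 and the toy
# sentence/label are assumptions for illustration only.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Encode a toy sentence for the bidirectional encoder.
inputs = tokenizer("the movie was great", return_tensors="pt")
labels = torch.tensor([1])  # toy label for the classification head

# One fine-tuning step: forward pass, loss, backward pass.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```

In a real run this step would be wrapped in an optimizer loop over a labeled dataset; only the small task head and the fine-tuned encoder weights change, while the pre-trained knowledge is reused.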
Word Embedding Preparation 4: ELMo
ELMo, published in 2018 and named for Embeddings from Language Models: deep contextualized word representations that model complex characteristics of word use and how those uses vary across linguistic contexts. It enables models to better disambiguate between…
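The toy PyTorch sketch below is not the ELMo architecture itself; it only illustrates the core idea of contextualized representations, where a bidirectional LSTM gives the same word different vectors in different sentences. The vocabulary and layer sizes are made-up assumptions.

```python
# Toy sketch (not the real ELMo model): a bidirectional LSTM produces
# context-dependent vectors, so the same word id gets different representations
# in different sentences. Vocabulary and dimensions are invented for this demo.
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "river": 1, "bank": 2, "deposit": 3, "money": 4, "the": 5}
embed = nn.Embedding(len(vocab), 16)
bilstm = nn.LSTM(input_size=16, hidden_size=16, bidirectional=True, batch_first=True)

def contextual_vectors(tokens):
    ids = torch.tensor([[vocab[t] for t in tokens]])
    out, _ = bilstm(embed(ids))          # shape: (1, seq_len, 2 * hidden)
    return out[0]

# "bank" receives a different vector depending on its context.
v1 = contextual_vectors(["the", "river", "bank"])[2]
v2 = contextual_vectors(["deposit", "money", "the", "bank"])[3]
print(torch.cosine_similarity(v1, v2, dim=0))
```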
Word Embedding Preparation 3. GloVe
GloVe: Global Vectors for Word Representation. It serves the same purpose as Word2Vec, but training is performed on aggregated global word-word co-occurrence statistics from a corpus. It must be trained offline.
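The sketch below shows the kind of global word-word co-occurrence counts that GloVe is trained on, not the GloVe training objective itself. The toy corpus and window size are assumptions for illustration.

```python
# Build the aggregated co-occurrence counts that serve as GloVe's training input.
# Toy corpus and window size are illustrative assumptions.
from collections import Counter

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
window = 2

cooccur = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        # Count every context word within `window` positions of w.
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooccur[(w, sent[j])] += 1

print(cooccur[("the", "cat")])  # how often "cat" appears near "the"
```

Because these counts are aggregated over the whole corpus before any vectors are fit, the model cannot be updated word-by-word as text streams in, which is why GloVe is trained offline.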
Word Embedding Preparation 2: Word2Vec
Word2Vec. Abstract: a similar structure to NNLM, but it focuses on the word embeddings themselves. Two learning approaches: Continuous Bag of Words (CBOW) and Continuous Skip-gram (Skip-gram). 1. CBOW: given its context w_{i-N}, w_{i-N+1}, w_{i-N+2}, …
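A minimal sketch of the two learning approaches using the gensim library (assumed available); the toy sentences and hyperparameters are illustrative choices, not values from the article.

```python
# gensim's sg flag selects the training mode described above:
# sg=0 -> CBOW (predict the center word from its context),
# sg=1 -> Skip-gram (predict context words from the center word).
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"]]

cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["cat"].shape)                   # dense 50-dimensional embedding
print(skipgram.wv.most_similar("cat", topn=2))
```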
Word Embedding Preparation 1: From Hard-code to NNLM
algorithm → machine learning → nlp → word embedding. Abstract: hard encoding, bag of words, one-hot embedding. 1. Hard-coded: a word is represented by an ID; IDs are just symbolic data, for example an enum or a Unicode string. Cons of hard-coding…
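A short sketch of the hard-coded ID, one-hot, and bag-of-words representations described above; the tiny vocabulary and document are assumptions for illustration.

```python
# Hard-coded IDs, one-hot vectors, and a bag-of-words document vector.
# The vocabulary and document are made up for this demo.
import numpy as np

vocab = {"cat": 0, "dog": 1, "mat": 2}   # hard-coded: each word is just a symbolic ID

def one_hot(word):
    vec = np.zeros(len(vocab))
    vec[vocab[word]] = 1.0
    return vec

# Bag of words: a document is the sum of its words' one-hot vectors (order is lost).
doc = ["cat", "dog", "cat"]
bow = sum(one_hot(w) for w in doc)

print(one_hot("cat"))   # [1. 0. 0.] -- every pair of words is equally dissimilar
print(bow)              # [2. 1. 0.]
```

This makes the downside concrete: one-hot vectors are mutually orthogonal, so they carry no notion of similarity between words, which is what the later embedding methods in this series address.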