Reference:
cs224n notes 1:
http://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf
1. word2vec background theory
- distributional semantics: represent the meaning of a word based on the contexts in which it usually appears. The obtained representations (word embeddings) are dense and capture similarity between words better than sparse one-hot vectors.
- distributional similarity: the idea that similar words tend to appear in similar contexts.
2. word2vec
A language model can assign a probability to a sequence of tokens w1, w2, …, wn.
e.g., unigram and bigram models
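For intuition (not from the notes; the toy corpus and counts below are made up), a unigram model multiplies independent per-word probabilities, while a bigram model conditions each word on the previous one:

```python
from collections import Counter

# Made-up toy corpus used only to estimate counts for this illustration.
corpus = "the cat sat on the mat the cat ate".split()

unigram = Counter(corpus)
bigram = Counter(zip(corpus, corpus[1:]))
total = sum(unigram.values())

def unigram_prob(seq):
    """P(w1..wn) = prod_i P(wi): every word treated as independent."""
    p = 1.0
    for w in seq:
        p *= unigram[w] / total
    return p

def bigram_prob(seq):
    """P(w1..wn) ~ P(w1) * prod_i P(wi | wi-1), estimated by count ratios."""
    p = unigram[seq[0]] / total
    for prev, cur in zip(seq, seq[1:]):
        p *= bigram[(prev, cur)] / unigram[prev]
    return p

print(unigram_prob(["the", "cat", "sat"]))  # ~0.0082
print(bigram_prob(["the", "cat", "sat"]))   # ~0.1111
```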
word2vec contains:
- language models:
  - CBOW: predict the center word from the context
  - skip-gram: predict the context from the center word
- training methods:
  - negative sampling: defines an objective by sampling negative examples (sketched below)
  - hierarchical softmax: defines an objective using an efficient tree structure to compute probabilities over the whole vocabulary
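A hedged sketch of the negative-sampling objective for one (center word, observed context word) pair, using the standard word2vec formulation log σ(u_o·v_c) + Σ_k log σ(−u_k·v_c); the vectors and dimensions below are made up:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(v_c, u_o, u_negs):
    """Negative-sampling loss for one (center, context) pair.

    v_c    : (n,)   center word vector
    u_o    : (n,)   observed ("positive") context word vector
    u_negs : (K, n) vectors of K sampled negative words

    loss = -log sigmoid(u_o . v_c) - sum_k log sigmoid(-u_k . v_c)
    """
    pos = np.log(sigmoid(u_o @ v_c))
    neg = np.sum(np.log(sigmoid(-u_negs @ v_c)))
    return -(pos + neg)

# Example with random vectors: dimension n=5, K=3 negative samples.
rng = np.random.default_rng(0)
print(neg_sampling_loss(rng.normal(size=5), rng.normal(size=5), rng.normal(size=(3, 5))))
```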
word2vec is iteration-based:
- model parameters are the word vectors
- train the model on a certain objective
- at every iteration, evaluate the errors and follow an update rule that penalizes the model parameters that caused them; in this way we learn the word vectors (the model parameters)
2.1 CBOW
steps:
- generate one-hot word vectors for the input context of size m: (x_c-m, x_c-m+1, …, x_c-1, x_c+1, …, x_c+m-1, x_c+m), where c indexes the center word and each x is a |V|x1 one-hot vector
- get the embedded word vectors for the context: (v_c-m = V.x_c-m, v_c-m+1 = V.x_c-m+1, …, v_c-1, v_c+1, …, v_c+m-1, v_c+m), each an nx1 embedded vector
- average these 2m vectors to get v_ave
- generate a score vector z = U.v_ave
- turn the scores into probabilities: ŷ = softmax(z)
- we want the generated probabilities ŷ ∈ R^|V| to match the true probabilities y ∈ R^|V|, i.e. the one-hot vector of the actual center word
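A minimal numpy sketch of this forward pass (V_emb stands for the input word matrix V above, U for the output matrix; the vocabulary size, dimension, and word indices are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
V_size, n = 10, 4                       # vocabulary size |V| and embedding dimension n

V_emb = rng.normal(size=(n, V_size))    # input word matrix: column i embeds word i
U = rng.normal(size=(V_size, n))        # output word matrix: row i scores word i

def one_hot(i, size):
    x = np.zeros(size)
    x[i] = 1.0
    return x

context_ids = [1, 2, 4, 5]              # indices of the 2m context words (m = 2)

v_ctx = [V_emb @ one_hot(i, V_size) for i in context_ids]  # embed each context word
v_ave = np.mean(v_ctx, axis=0)          # average the 2m vectors -> n-dim context vector

z = U @ v_ave                           # |V|-dim score vector
y_hat = np.exp(z - z.max())
y_hat /= y_hat.sum()                    # softmax: predicted distribution over the vocabulary
print(y_hat.shape, round(y_hat.sum(), 6))   # (10,) 1.0
```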
how do we measure how well ŷ matches the true distribution y? cross-entropy H(ŷ, y).
So our objective is the cross-entropy loss. Because y is one-hot, the loss reduces to -log ŷ_c, the negative log-probability of the correct center word given its context; under the softmax, this probability grows with the dot product (similarity) between the center word's output vector u_c and the averaged context vector v_ave.
So, the objective of updating the word vectors of the center word and the context words is to maximize the similarity between the center word and its context.
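A small self-contained check (with a made-up 4-word distribution) that cross-entropy against a one-hot target is exactly the negative log-probability of the center word:

```python
import numpy as np

y_hat = np.array([0.05, 0.7, 0.1, 0.15])    # made-up predicted distribution over a 4-word vocab
c = 1                                        # index of the true center word
y = np.zeros_like(y_hat)
y[c] = 1.0                                   # one-hot true distribution

cross_entropy = -np.sum(y * np.log(y_hat))   # H(ŷ, y) = -sum_j y_j * log(ŷ_j)
print(cross_entropy, -np.log(y_hat[c]))      # both ~0.3567: only the center-word term survives
```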

We use stochastic gradient descent (SGD) to update: at each step, SGD computes the gradients for a single window of size m (2m context words) and updates the parameters, as in the sketch below.
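A hedged numpy sketch of one SGD step on a single CBOW window; the gradient formulas follow from the softmax cross-entropy loss above (dL/dz = ŷ − y), and the matrix names and toy sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
V_size, n, lr = 10, 4, 0.05

V_emb = rng.normal(size=(n, V_size))    # input (context) word vectors, one per column
U = rng.normal(size=(V_size, n))        # output (center) word vectors, one per row

def sgd_step_cbow(context_ids, center_id):
    """One SGD update for one window: loss = -log softmax(U @ v_ave)[center_id]."""
    global U
    v_ave = V_emb[:, context_ids].mean(axis=1)          # average the 2m context vectors
    z = U @ v_ave
    y_hat = np.exp(z - z.max())
    y_hat /= y_hat.sum()                                 # softmax

    delta = y_hat.copy()
    delta[center_id] -= 1.0                              # dL/dz = y_hat - y (y is one-hot)
    grad_U = np.outer(delta, v_ave)                      # dL/dU
    grad_v_ave = U.T @ delta                             # dL/dv_ave

    U = U - lr * grad_U
    # the context vectors share the averaged-context gradient equally (1/2m each)
    V_emb[:, context_ids] -= lr * grad_v_ave[:, None] / len(context_ids)
    return -np.log(y_hat[center_id])                     # loss for this window

print(sgd_step_cbow(context_ids=[1, 2, 4, 5], center_id=3))
```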

2.2 skip-gram
given the center word, predict its context words within the window
steps:
- generate the one-hot input vector x of the center word, a |V|x1 vector
- get the embedded word vector for the center word: v_c = V.x, an nx1 vector
- generate a score vector: z = U.v_c
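A minimal numpy sketch of the skip-gram forward pass; the final softmax and the window loss are not spelled out in the list above, so they are included here under the usual word2vec assumption that the same predicted distribution ŷ = softmax(z) is used for each of the 2m context positions (names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
V_size, n = 10, 4                       # made-up vocabulary size |V| and dimension n

V_emb = rng.normal(size=(n, V_size))    # input word matrix (embeds the center word)
U = rng.normal(size=(V_size, n))        # output word matrix (scores context words)

def one_hot(i, size):
    x = np.zeros(size)
    x[i] = 1.0
    return x

center_id = 3
x = one_hot(center_id, V_size)          # |V|x1 one-hot vector of the center word
v_c = V_emb @ x                         # nx1 embedded center word vector
z = U @ v_c                             # |V|x1 score vector

y_hat = np.exp(z - z.max())
y_hat /= y_hat.sum()                    # softmax over the vocabulary

context_ids = [1, 2, 4, 5]              # the 2m actual context words (m = 2)
loss = -sum(np.log(y_hat[o]) for o in context_ids)   # naive-softmax skip-gram loss for one window
print(loss)
```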