KL divergence over-estimate/under-estimate

This post takes a close look at the two forms of KL divergence: forward KL and reverse KL. Using an intuitive example, it explains how the two behave differently during optimization when the true distribution P(X) and the approximate distribution Q(X) differ: forward KL constrains Q(X) wherever P(X) > 0 (leaving Q unconstrained where P(X) = 0), while reverse KL allows Q(X) to assign zero probability even where P(X) > 0.

Reference: a very intuitive example is shown in this blog:

https://wiseodd.github.io/techblog/2016/12/21/forward-reverse-kl/

Let the true distribution be defined as P(X), and the approximate distribution as Q(X).
Forward KL: $\sum_{x \in X} P(x) \log\left(\frac{P(x)}{Q(x)}\right)$
Discussion of the different cases:

  1. If P(x) = 0, the corresponding term vanishes (it is weighted by P(x)), so the log term can be ignored: Q(x) is free to assign any probability wherever P(x) = 0.
  2. If P(x) > 0, the log term contributes to the objective, so optimization pushes Q(x) to match P(x) as closely as possible wherever P(x) > 0; in particular, letting Q(x) → 0 at such points makes the term blow up, so Q must cover all of P's support (mass-covering behavior).
  3. The following figure showed a specific optimal Q under forward KL; the numerical sketch after this list reproduces the same behavior.
    [Figure: forward-KL-optimal fit of Q to P (image not preserved in this copy)]
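To make the mass-covering behavior concrete, here is a minimal numerical sketch (not from the original post; the bimodal P, the grid ranges, and the helper names `gaussian` and `forward_kl` are all illustrative choices). It fits a single Gaussian Q to a two-mode P by brute-force search over (mu, sigma), minimizing a discretized forward KL:

```python
import numpy as np

# Discretize the real line; P is a bimodal mixture, Q a single Gaussian.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
eps = 1e-12  # numerical floor so log() stays finite on the grid

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# True distribution P: two well-separated modes at -4 and +4.
p = 0.5 * gaussian(x, -4.0, 1.0) + 0.5 * gaussian(x, 4.0, 1.0)

def forward_kl(p, q):
    # sum over x of P(x) log(P(x)/Q(x)); terms with P(x) = 0 contribute
    # nothing (case 1 above), so Q is unconstrained there.
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

# Brute-force search over Q's parameters (mu, sigma).
best = min(
    (forward_kl(p, gaussian(x, mu, sigma)), mu, sigma)
    for mu in np.linspace(-6, 6, 121)
    for sigma in np.linspace(0.5, 6.0, 56)
)
print("forward-KL optimum: mu = %.2f, sigma = %.2f" % (best[1], best[2]))
# Prints mu close to 0 with a large sigma: Q stretches to put mass
# everywhere P(x) > 0, averaging over both modes (mass-covering).
```

The moment-matching intuition agrees: for a single Gaussian Q, minimizing forward KL matches P's mean (here 0) and variance (here large, because the two modes are far apart).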

Reverse KL: $\sum_{x \in X} Q(x) \log\left(\frac{Q(x)}{P(x)}\right)$

  1. If Q(x) = 0, the corresponding term vanishes (it is weighted by Q(x)), so the log term can be ignored: Q(x) is free to assign zero probability even where P(x) > 0.
  2. If Q(x) > 0, the log term enters the objective, so optimization pushes Q(x) to match P(x) as closely as possible wherever Q places mass; placing Q(x) > 0 where P(x) is near 0 is penalized heavily, so Q concentrates on a region where P is large (mode-seeking behavior).
  3. The following figure showed a specific optimal Q under reverse KL; the numerical sketch after this list reproduces the same behavior.
    [Figure: reverse-KL-optimal fit of Q to P (image not preserved in this copy)]
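The same kind of sketch shows the opposite behavior for reverse KL. This is again an illustrative construction, repeating the hypothetical setup from the forward-KL sketch so the block stays self-contained; only the direction of the divergence changes:

```python
import numpy as np

# Same illustrative setup as the forward-KL sketch: bimodal P, Gaussian Q.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
eps = 1e-12

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = 0.5 * gaussian(x, -4.0, 1.0) + 0.5 * gaussian(x, 4.0, 1.0)

def reverse_kl(p, q):
    # sum over x of Q(x) log(Q(x)/P(x)); terms with Q(x) = 0 drop out
    # (case 1 above), but Q(x) > 0 where P(x) is near 0 costs a lot.
    return np.sum(q * np.log((q + eps) / (p + eps))) * dx

best = min(
    (reverse_kl(p, gaussian(x, mu, sigma)), mu, sigma)
    for mu in np.linspace(-6, 6, 121)
    for sigma in np.linspace(0.5, 6.0, 56)
)
print("reverse-KL optimum: mu = %.2f, sigma = %.2f" % (best[1], best[2]))
# Prints mu near +4 or -4 with sigma near 1: Q collapses onto a single
# mode of P rather than spreading over both (mode-seeking).
```

Putting a broad Q over both modes would force Q(x) > 0 in the low-probability valley between them, which reverse KL punishes, so the optimizer prefers to fit one mode exactly and assign the other mode zero mass.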