Bayesian inference for Poisson–hidden Markov models


This post works through Bayesian estimation of a Poisson-HMM in detail. The problem comes from Chapter 7 of Hidden Markov Models for Time Series by W. Zucchini.

1. Problem overview

We have a Poisson–HMM $\{X_t\}$ on $m$ states, with underlying Markov chain $\{C_t\}$. We denote the state-dependent means by $\lambda = (\lambda_1, \cdots, \lambda_m)$ and the transition probability matrix of the Markov chain by $\Gamma$.

Assume that $m$ is known, that $\{X_t\}$ is univariate, and that the observations are $X^{(T)} = \{X_1, X_2, \cdots, X_T\}$.

2. A brief analysis

Estimating this model in a Bayesian way means obtaining a large number of draws from the posterior distribution $P(\Theta \mid X^{(T)})$ of $\Theta = \{\lambda, \Gamma\}$. Following the usual Bayesian recipe,

$$P(\Theta \mid X^{(T)}) \propto P(X^{(T)} \mid \Theta)\, P(\Theta).$$

The likelihood $P(X^{(T)} \mid \Theta)$ requires summing over all possible sequences of latent states $S^{(T)}$. If the time series has length $T$ and there are $m$ possible states, this sum ranges over $m^T$ sequences, making direct computation intractable for long series:

$$P(X^{(T)} \mid \Theta) = \sum_{S^{(T)}} P(X^{(T)}, S^{(T)} \mid \Theta).$$
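As an aside, this likelihood can still be evaluated efficiently with the forward algorithm, which collapses the $m^T$-term sum into an $O(Tm^2)$ recursion. A minimal sketch (my own illustration, not code from the book; `delta` denotes the initial state distribution):

```python
import numpy as np
from math import lgamma

def poisson_logpmf(x, lam):
    # log of the Poisson pmf: x*log(lam) - lam - log(x!)
    return x * np.log(lam) - lam - lgamma(x + 1)

def poisson_hmm_loglik(x, lam, Gamma, delta):
    """Log-likelihood of a Poisson-HMM via the scaled forward algorithm.

    Replaces the O(m^T) sum over state sequences with an O(T m^2)
    recursion; rescaling the forward variable at each step avoids
    numerical underflow for long series.
    """
    lam, Gamma, delta = map(np.asarray, (lam, Gamma, delta))
    ll = 0.0
    alpha = delta                       # forward variable (rescaled)
    for xt in x:
        alpha = alpha * np.exp(poisson_logpmf(xt, lam))
        c = alpha.sum()                 # scaling constant
        ll += np.log(c)                 # log-likelihood accumulates log c_t
        alpha = (alpha / c) @ Gamma     # propagate through the chain
    return ll
```

For short series the result can be checked directly against the brute-force sum over all $m^T$ state sequences.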

Sampling from this directly is a non-starter, so we fall back to sampling from the joint posterior $P(\Theta, S^{(T)} \mid X^{(T)})$; see Tanner and Wong (1987) for the theoretical justification. The upshot is that after sufficiently many burn-in iterations of the algorithm below, $\{\Theta^{g}, \Theta^{g+1}, \cdots\}$ is a sample from $p(\Theta \mid X^{(T)})$.

This algorithm is Gibbs sampling, a cyclic iterative procedure:

  • To get at the posterior $p(\Theta \mid X^{(T)})$, first initialize $\Theta$ to $\Theta^{1}$ and draw a sample of latent states $S^{(T),1}$ from $P(S^{(T)} \mid X^{(T)}, \Theta^{1})$.
  • Then draw $\Theta^{2}$ (the superscript is an iteration index, not a power) from $P(\Theta \mid X^{(T)}, S^{(T),1})$, which exploits the factorization of the complete-data likelihood and the conjugacy between the prior and posterior of $\Theta$.
    • $P(\Theta \mid X^{(T)}, S^{(T)}) \propto P(X^{(T)}, S^{(T)} \mid \Theta)\, P(\Theta)$
    • The complete-data likelihood $P(X^{(T)}, S^{(T)} \mid \Theta)$ factorizes as $\delta_{s_1} \prod_{t=2}^{T} \gamma_{s_{t-1}, s_t} \prod_{t=1}^{T} p(x_t \mid \lambda_{s_t})$, where $\delta$ is the initial state distribution. The factors involving $\Gamma$ and those involving $\lambda$ separate, so each block can be updated from its conjugate posterior.
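One sweep of the sampler can be sketched as below. The concrete choices are my own for illustration, not necessarily the book's: Gamma$(a, b)$ priors on each $\lambda_j$, flat Dirichlet priors on the rows of $\Gamma$, a fixed initial distribution `delta`, and states drawn by forward-filtering backward-sampling (FFBS):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def gibbs_sweep(x, lam, Gamma, delta, a=1.0, b=1.0):
    """One Gibbs iteration: S ~ P(S | X, Theta), then Theta ~ P(Theta | X, S)."""
    x = np.asarray(x)
    T, m = len(x), len(lam)
    # Poisson emission probabilities p(x_t | state j)
    p = np.exp(np.array([[xt * np.log(l) - l - lgamma(xt + 1) for l in lam]
                         for xt in x]))
    # --- forward filtering (rescaled to avoid underflow) ---
    alphas = np.empty((T, m))
    alpha = delta * p[0]
    alphas[0] = alpha / alpha.sum()
    for t in range(1, T):
        alpha = (alphas[t - 1] @ Gamma) * p[t]
        alphas[t] = alpha / alpha.sum()
    # --- backward sampling: P(S_t = i | S_{t+1}, X_{1:t}) ∝ alpha_t(i) Γ_{i,S_{t+1}} ---
    S = np.empty(T, dtype=int)
    S[T - 1] = rng.choice(m, p=alphas[T - 1])
    for t in range(T - 2, -1, -1):
        w = alphas[t] * Gamma[:, S[t + 1]]
        S[t] = rng.choice(m, p=w / w.sum())
    # --- conjugate updates ---
    # lambda_j | S ~ Gamma(a + sum of x in state j, rate b + visits to state j)
    new_lam = np.array([rng.gamma(a + x[S == j].sum(),
                                  1.0 / (b + (S == j).sum()))
                        for j in range(m)])
    # row i of Gamma | S ~ Dirichlet(1 + transition counts out of state i)
    counts = np.ones((m, m))
    for t in range(1, T):
        counts[S[t - 1], S[t]] += 1
    new_Gamma = np.array([rng.dirichlet(row) for row in counts])
    return S, new_lam, new_Gamma
```

Iterating `gibbs_sweep`, discarding the burn-in, and keeping the remaining `(lam, Gamma)` draws gives the desired sample from $p(\Theta \mid X^{(T)})$.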