A Brief Note on the Hidden Markov Model (HMM)

This post introduces the Hidden Markov Model (HMM): the problem formulation, Bayes decision theory, the Markov model, and the three core HMM problems of evaluation, decoding, and training. It discusses the forward algorithm, the backward algorithm, the Viterbi algorithm, and Baum-Welch re-estimation, aiming to help readers understand how HMMs are applied to probabilistic sequence recognition.


Introduction

Problem Formulation

Now let us talk about the Hidden Markov Model. Well, what is an HMM used for? Consider the following problem:

*Given an unknown observation $O$, recognize it as one of $N$ classes with minimum probability of error.*

So how do we define the error and the error probability?
Conditional Error: Given $O$, the risk associated with deciding that it is a class $i$ event:
$$R(S_{i} \mid O) = \sum_{j=1}^{N} e_{ij} P(S_{j} \mid O)$$
where $P(S_{j} \mid O)$ is the probability that $O$ is a class $S_{j}$ event and $e_{ij}$ is the cost of classifying a class $j$ event as a class $i$ event, with $e_{ij} > 0$ for $i \neq j$ and $e_{ii} = 0$.
Expected Error:
$$\mathcal{E} = \int R(S(O) \mid O)\, p(O)\, dO$$
where $S(O)$ is the decision made on $O$ based on a policy. Then the question can be stated as:

How should $S(O)$ be chosen to achieve minimum error probability? Equivalently, how do we maximize $P(S(O) \mid O)$?
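As a small sketch of the definitions above (the posteriors and the symmetric 0/1 cost matrix below are made-up numbers, not from any real problem):

```python
# Conditional risk R(S_i | O) for a hypothetical 3-class problem.
posteriors = [0.6, 0.3, 0.1]        # P(S_j | O), j = 1..3 (assumed values)
cost = [[0, 1, 1],                  # e_ij: cost of deciding class i
        [1, 0, 1],                  #       when the true class is j
        [1, 1, 0]]                  # e_ii = 0, e_ij > 0 for i != j

def conditional_risk(i, posteriors, cost):
    """R(S_i | O) = sum_j e_ij * P(S_j | O)."""
    return sum(cost[i][j] * p for j, p in enumerate(posteriors))

risks = [conditional_risk(i, posteriors, cost) for i in range(3)]
# With 0/1 costs, R(S_i | O) = 1 - P(S_i | O), so the class with the
# largest posterior (class 1 here) also has the smallest risk.
```

With a 0/1 cost matrix, minimizing the conditional risk reduces exactly to picking the class with the largest posterior, which is the MAP policy discussed next.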

Bayes Decision Theory

If we institute the policy $S(O) = S_{i} = \arg\max_{S_{j}} P(S_{j} \mid O)$, then $R(S(O) \mid O) = \min_{S_{j}} R(S_{j} \mid O)$. This is the so-called Maximum A Posteriori (MAP) decision. But how do we know $P(S_{j} \mid O)$, $j = 1, 2, \dots, N$, for any $O$?
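One standard route (a sketch, with made-up likelihoods and priors) is Bayes' rule, $P(S_j \mid O) \propto p(O \mid S_j) P(S_j)$, which is exactly why the rest of the post builds class-conditional models of $O$:

```python
# MAP decision via Bayes' rule. All numbers are hypothetical.
likelihoods = [0.02, 0.05, 0.01]   # p(O | S_j): class-conditional likelihoods
priors      = [0.5, 0.3, 0.2]      # P(S_j): class priors

joint = [l * p for l, p in zip(likelihoods, priors)]   # p(O, S_j)
evidence = sum(joint)                                  # p(O), the normalizer
posteriors = [j / evidence for j in joint]             # P(S_j | O)

map_class = max(range(3), key=lambda j: posteriors[j])
# The argmax of the posterior equals the argmax of likelihood * prior,
# since the normalizer p(O) is the same for every class.
```

Note that the evidence $p(O)$ never affects the decision, so in practice one compares $p(O \mid S_j) P(S_j)$ directly.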

Markov Model

States: $S = \{S_{0}, S_{1}, S_{2}, \dots, S_{N}\}$
Transition probabilities: $P(q_{t} = S_{i} \mid q_{t-1} = S_{j})$
Markov Assumption:
$$P(q_{t} = S_{i} \mid q_{t-1} = S_{j}, q_{t-2} = S_{k}, \dots) = P(q_{t} = S_{i} \mid q_{t-1} = S_{j}) = a_{ji}, \quad a_{ji} \geq 0,\ \sum_{i=1}^{N} a_{ji} = 1,\ \forall j$$
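Under the Markov assumption, the probability of a whole state sequence factors into the initial probability times a product of one-step transitions. A minimal sketch (the two-state chain and all numbers are hypothetical):

```python
# a_ji = P(q_t = S_i | q_{t-1} = S_j); row j is a distribution over next states.
A = [[0.7, 0.3],
     [0.4, 0.6]]
pi = [0.5, 0.5]         # initial distribution P(q_1 = S_i) (assumed)

def sequence_prob(states, pi, A):
    """P(q_1, ..., q_T) = pi[q_1] * prod_t a_{q_{t-1} q_t} (Markov assumption)."""
    p = pi[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= A[prev][cur]
    return p

# Each row of A sums to 1, matching the constraint sum_i a_ji = 1 for all j.
row_sums_ok = all(abs(sum(row) - 1.0) < 1e-9 for row in A)
p = sequence_prob([0, 0, 1, 1], pi, A)   # P(S_0, S_0, S_1, S_1)
```

In a plain Markov model the states themselves are observed; the "hidden" in HMM refers to the case where only emissions, not the state sequence, are visible.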
