Learning in hidden Markov models can be done by supervised learning when the training data contain both observation sequences and state sequences, or by unsupervised learning when only observation sequences are available. Since supervised learning requires labeled training data, and manual annotation is often expensive, the unsupervised approach is frequently used instead: the Baum-Welch algorithm, which is an instance of the EM algorithm. Before introducing the learning algorithm, we first go through some probability and expectation computations. These form the basis of the Baum-Welch update formulas.
### Computing some probabilities and expectations
Using the forward and backward probabilities, we can derive formulas for probabilities involving a single state and a pair of states.
1. The probability of being in state $q_i$ at time $t$, given the model $\lambda$ and the observation sequence $O$. Write it as

$$\gamma_t(i) = P(i_t = q_i | O, \lambda)$$

By the definition of conditional probability, this can be rewritten as a fraction:

$$\gamma_t(i) = \frac{P(i_t = q_i, O | \lambda)}{P(O | \lambda)} \tag{1}$$
By the definition of the forward probability, we can write

$$\alpha_t(i) = P(o_1, o_2, \ldots, o_t, i_t = q_i | \lambda) = P(i_t = q_i | \lambda)\, P(o_1, o_2, \ldots, o_t | i_t = q_i, \lambda)$$
The backward probability is defined as

$$\beta_t(i) = P(o_{t+1}, o_{t+2}, \ldots, o_T | i_t = q_i, \lambda)$$
Multiplying the two gives

$$\alpha_t(i)\,\beta_t(i) = P(i_t = q_i, O | \lambda) \tag{2}$$

This result is also easy to see from the two definitions: $\alpha_t(i)$ accounts for the observations up to time $t$ and $\beta_t(i)$ for the observations after time $t$, both with the state at time $t$ fixed to $q_i$.
Summing over $i = 1, 2, \ldots, N$ gives

$$\sum_{i=1}^N P(i_t = q_i, O | \lambda) = P(O | \lambda) \tag{3}$$
Substituting $(2)$ and $(3)$ into $(1)$ yields

$$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^N \alpha_t(j)\,\beta_t(j)} \tag{4}$$
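As a quick numerical sanity check of equations $(2)$ through $(4)$, the following sketch computes $\alpha$ and $\beta$ with the standard forward and backward recursions on a toy HMM (all parameter values here are made up for illustration), then verifies that $\sum_i \alpha_t(i)\beta_t(i)$ is the same for every $t$ and that each column of $\gamma$ sums to 1:

```python
import numpy as np

# Toy HMM: 2 states, 2 observation symbols (all numbers made up)
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition probabilities a_ij
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities b_j(k)
pi = np.array([0.6, 0.4])                # initial state distribution
obs = [0, 1, 1, 0]
N, T = A.shape[0], len(obs)

# Forward recursion: alpha[i, t] = P(o_1..o_t, i_t = q_i | lambda)
alpha = np.zeros((N, T))
alpha[:, 0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[:, t] = (alpha[:, t - 1] @ A) * B[:, obs[t]]

# Backward recursion: beta[i, t] = P(o_{t+1}..o_T | i_t = q_i, lambda)
beta = np.zeros((N, T))
beta[:, -1] = 1.0
for t in range(T - 2, -1, -1):
    beta[:, t] = A @ (B[:, obs[t + 1]] * beta[:, t + 1])

# Eqs. (2)/(3): sum_i alpha_t(i) beta_t(i) = P(O | lambda), for every t
likelihoods = (alpha * beta).sum(axis=0)
print(np.allclose(likelihoods, likelihoods[0]))  # True

# Eq. (4): each column of gamma is a distribution over states
gamma = alpha * beta / likelihoods
print(np.allclose(gamma.sum(axis=0), 1.0))       # True
```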
2. The probability of being in state $q_i$ at time $t$ and in state $q_j$ at time $t+1$, given the model $\lambda$ and the observation $O$. Write it as

$$\xi_t(i, j) = P(i_t = q_i, i_{t+1} = q_j | O, \lambda)$$
It can be computed from the forward and backward probabilities:

$$\xi_t(i, j) = \frac{P(i_t = q_i, i_{t+1} = q_j, O | \lambda)}{P(O | \lambda)} = \frac{P(i_t = q_i, i_{t+1} = q_j, O | \lambda)}{\sum_{i=1}^N \sum_{j=1}^N P(i_t = q_i, i_{t+1} = q_j, O | \lambda)}$$
The numerator can be expressed with the forward and backward probabilities:

$$P(i_t = q_i, i_{t+1} = q_j, O | \lambda) = \alpha_t(i)\,a_{ij}\,b_j(o_{t+1})\,\beta_{t+1}(j)$$

so $\xi_t(i, j)$ can be written as

$$\xi_t(i, j) = \frac{\alpha_t(i)\,a_{ij}\,b_j(o_{t+1})\,\beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i)\,a_{ij}\,b_j(o_{t+1})\,\beta_{t+1}(j)}$$
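For a single time step, this is one broadcasted product in NumPy. A minimal sketch, where the parameter values and the forward/backward entries are hypothetical stand-ins (in practice they come from the forward and backward passes):

```python
import numpy as np

# Hypothetical quantities for a 2-state HMM (values made up for illustration)
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # a_ij
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # b_j(k)
alpha_t = np.array([0.30, 0.10])         # alpha_t(i), from the forward pass
beta_t1 = np.array([0.20, 0.50])         # beta_{t+1}(j), from the backward pass
o_next = 1                               # the observation o_{t+1}

# Numerator: alpha_t(i) * a_ij * b_j(o_{t+1}) * beta_{t+1}(j), an N x N matrix
numer = alpha_t[:, None] * A * B[:, o_next] * beta_t1
# Normalizing over all (i, j) pairs gives xi_t(i, j)
xi_t = numer / numer.sum()

print(np.isclose(xi_t.sum(), 1.0))  # True
```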
3. Summing $\gamma_t(i)$ and $\xi_t(i, j)$ over all time steps yields several useful expected values.
(1) The expected number of times state $i$ is visited, under observation $O$:

$$\sum_{t=1}^T \gamma_t(i)$$

This adds up, over every time step, the probability of being in state $i$.
(2) The expected number of transitions out of state $i$, under observation $O$:

$$\sum_{t=1}^{T-1} \gamma_t(i)$$

A transition out of state $i$ can only occur at times $1, 2, \ldots, T-1$, so this sum has one term fewer (time $T$) than the previous one.
(3) The expected number of transitions from state $i$ to state $j$, under observation $O$:

$$\sum_{t=1}^{T-1} \xi_t(i, j)$$
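Given $\gamma$ as an array of shape (N, T) and $\xi$ as an array of shape (N, N, T-1), the three expected counts above are each a single sum along the time axis. A minimal sketch, using normalized random arrays purely as stand-ins for the posteriors:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 6

# Stand-in posteriors: each gamma[:, t] and xi[:, :, t] is a distribution
gamma = rng.random((N, T))
gamma /= gamma.sum(axis=0)
xi = rng.random((N, N, T - 1))
xi /= xi.sum(axis=(0, 1))

expected_visits = gamma.sum(axis=1)              # (1) sum_{t=1}^{T}   gamma_t(i)
expected_departures = gamma[:, :-1].sum(axis=1)  # (2) sum_{t=1}^{T-1} gamma_t(i)
expected_i_to_j = xi.sum(axis=2)                 # (3) sum_{t=1}^{T-1} xi_t(i, j)

# Visits over all states total T; departures total T - 1
print(np.isclose(expected_visits.sum(), T))          # True
print(np.isclose(expected_departures.sum(), T - 1))  # True
```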
### The Baum-Welch algorithm

#### Parameter estimation formulas
I do not yet fully understand the derivation, especially the Lagrangian duality step, so I will give the training procedure, formulas, and code directly. The Baum-Welch algorithm, proposed by Baum and Welch, is the concrete realization of the EM algorithm for learning hidden Markov models.
(1) Initialization. For $n = 0$, choose $a_{ij}^{0}$, $b_j(k)^{0}$, $\pi_i^{0}$, giving the initial model $\lambda^0 = (a_{ij}^{0}, b_j(k)^{0}, \pi_i^{0})$.

(2) Recursion. For $n = 1, 2, \ldots$:
$$a_{ij}^{n+1} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$$

$$b_j(k)^{n+1} = \frac{\sum_{t=1,\, o_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$$

$$\pi_i^{n+1} = \gamma_1(i)$$
The right-hand sides are evaluated with the observations $O = (o_1, o_2, \ldots, o_T)$ and the current model $\lambda^n = (a_{ij}^{n}, b_j(k)^{n}, \pi_i^{n})$.

These update formulas are fairly intuitive. Take the transition probability $a_{ij}^{n+1}$: the denominator is the expected number of transitions out of state $i$ under observation $O$, while the numerator is the expected number of transitions from state $i$ to state $j$. Dividing the latter by the former gives $a_{ij}^{n+1}$.

(3) Termination. Output the model $\lambda^{n+1} = (a_{ij}^{n+1}, b_j(k)^{n+1}, \pi_i^{n+1})$.
### Python implementation of the Baum-Welch algorithm
This method assumes it lives on a class that stores the parameters `A`, `B`, `pi` as NumPy arrays (with NumPy imported as `np`) and provides `_forward`/`_backward` for the forward and backward probability matrices.

```python
def baum_welch_train(self, observations, criterion=0.05):
    n_states = self.A.shape[0]
    n_samples = len(observations)
    observations = np.asarray(observations)  # needed for boolean masking below

    done = False
    while not done:
        # alpha_t(i) = P(O_1 O_2 ... O_t, q_t = S_i | hmm)
        alpha = self._forward(observations)
        # beta_t(i) = P(O_t+1 O_t+2 ... O_T | q_t = S_i, hmm)
        beta = self._backward(observations)

        # xi_t(i, j) = P(q_t = S_i, q_t+1 = S_j | O, hmm)
        xi = np.zeros((n_states, n_states, n_samples - 1))
        for t in range(n_samples - 1):
            denom = np.dot(np.dot(alpha[:, t].T, self.A)
                           * self.B[:, observations[t + 1]].T,
                           beta[:, t + 1])
            for i in range(n_states):
                numer = alpha[i, t] * self.A[i, :] \
                        * self.B[:, observations[t + 1]].T * beta[:, t + 1].T
                xi[i, :, t] = numer / denom

        # gamma_t(i) = P(q_t = S_i | O, hmm): sum xi over j, for t = 1..T-1
        gamma = np.sum(xi, axis=1)
        # The final column (t = T) is needed for the new B; compute it from
        # alpha_T(i) * beta_T(i) and normalize, then append it to gamma
        prod = (alpha[:, n_samples - 1] * beta[:, n_samples - 1]).reshape((-1, 1))
        gamma = np.hstack((gamma, prod / np.sum(prod)))

        # Re-estimation formulas for pi, A, and B
        newpi = gamma[:, 0]
        newA = np.sum(xi, 2) / np.sum(gamma[:, :-1], axis=1).reshape((-1, 1))
        newB = np.copy(self.B)
        num_levels = self.B.shape[1]
        sumgamma = np.sum(gamma, axis=1)
        for lev in range(num_levels):
            mask = observations == lev
            newB[:, lev] = np.sum(gamma[:, mask], axis=1) / sumgamma

        # Stop once every parameter changes by less than the tolerance
        if np.max(abs(self.pi - newpi)) < criterion and \
           np.max(abs(self.A - newA)) < criterion and \
           np.max(abs(self.B - newB)) < criterion:
            done = True

        self.A[:], self.B[:], self.pi[:] = newA, newB, newpi
```
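To see the procedure working end to end, here is a self-contained functional sketch: standalone helper functions rather than the class method above, with toy parameter values and a toy observation sequence made up purely for illustration. It applies the re-estimation formulas repeatedly and checks the EM guarantee that the likelihood $P(O|\lambda)$ never decreases:

```python
import numpy as np

def forward(A, B, pi, obs):
    # alpha[i, t] = P(o_1..o_t, i_t = q_i | lambda)
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((N, T))
    alpha[:, 0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[:, t] = (alpha[:, t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    # beta[i, t] = P(o_{t+1}..o_T | i_t = q_i, lambda)
    N, T = A.shape[0], len(obs)
    beta = np.zeros((N, T))
    beta[:, -1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[:, t] = A @ (B[:, obs[t + 1]] * beta[:, t + 1])
    return beta

def baum_welch_step(A, B, pi, obs):
    alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
    likelihood = alpha[:, -1].sum()        # P(O | lambda)
    gamma = alpha * beta / likelihood      # eq. (4)
    T = len(obs)
    # xi[i, j, t] = alpha_t(i) a_ij b_j(o_{t+1}) beta_{t+1}(j) / P(O | lambda)
    xi = np.stack([alpha[:, t, None] * A * B[:, obs[t + 1]] * beta[:, t + 1]
                   for t in range(T - 1)], axis=2) / likelihood
    new_pi = gamma[:, 0]
    new_A = xi.sum(axis=2) / gamma[:, :-1].sum(axis=1)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[:, np.asarray(obs) == k].sum(axis=1) / gamma.sum(axis=1)
    return new_A, new_B, new_pi, likelihood

# Toy model and data (all numbers are made up)
A = np.array([[0.6, 0.4], [0.5, 0.5]])
B = np.array([[0.7, 0.3], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = [0, 0, 1, 0, 1, 1]

liks = []
for _ in range(20):
    A, B, pi, lik = baum_welch_step(A, B, pi, obs)
    liks.append(lik)

# EM guarantees the likelihood is non-decreasing across iterations
print(all(b >= a - 1e-12 for a, b in zip(liks, liks[1:])))  # True
```

Each call performs one full E-step (computing $\gamma$ and $\xi$) and one M-step (the three update formulas), which is exactly one pass of the `while` loop in the class method.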