Model
The input and desired output are jointly wide-sense stationary random processes; find the optimal weight vector that minimizes the mean-square error of the output estimate.
\begin{align}
&\hat{d}(n)=w^Hu(n) \\
&\min_{w} J(w) \\
&\left\{\begin{aligned}
&e(n)=d(n)-\hat{d}(n) \\
&J(w)=E\{|e(n)|^2\}
\end{aligned}
\right.
\end{align}
Method
Optimal weight vector:
The Wiener-Hopf equation:
\begin{align}
&Rw_o=p\\
&R=E\{u(n)u^H(n)\}=\begin{bmatrix} r(0) & r(1) & \cdots & r(M-1) \\ r(-1) & r(0) & \cdots & r(M-2) \\ \vdots & \vdots & \ddots & \vdots \\
r(-M+1) & r(-M+2) & \cdots & r(0)\end{bmatrix}\\
&p=E\{u(n)d^*(n)\}=\begin{bmatrix}
E\{u(n)d^*(n)\} \\
E\{u(n-1)d^*(n)\}\\
\vdots\\
E\{u(n-M+1)d^*(n)\}
\end{bmatrix}
=\begin{bmatrix}
p(0)\\
p(-1)\\
\vdots\\
p(-M+1)
\end{bmatrix}
\end{align}
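As a numerical sanity check, the Wiener-Hopf equation $Rw_o=p$ can be solved from sample estimates of $R$ and $p$. This is a minimal sketch, assuming a made-up FIR system `w_true` plays the role of the unknown optimal filter; it is an illustration, not part of the original derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 10000

# Hypothetical unknown system: d(n) is produced by filtering white noise
# through w_true, plus a small observation noise.
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

# Build the tap-input vectors u(n) = [x(n), x(n-1), ..., x(n-M+1)]^T
# (row k of U holds x delayed by k samples).
U = np.stack([np.concatenate([np.zeros(k), x[: N - k]]) for k in range(M)])
R = U @ U.T / N   # sample estimate of E{u(n) u^H(n)}
p = U @ d / N     # sample estimate of E{u(n) d*(n)} (real signals here)

w_o = np.linalg.solve(R, p)  # Wiener-Hopf: R w_o = p
```

With white input, $R$ is close to the identity, so `w_o` recovers `w_true` up to the estimation noise.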
Algorithms
1. Steepest Descent Algorithm
(1) Core idea
\Delta w = -\frac{1}{2}\mu\nabla J(w(n))
(2) Update rule
\begin{align}
\left\{\begin{aligned}
&w(n+1)=w(n)+\mu [p-Rw(n)]\\
&0<\mu<\frac{2}{\lambda_{max}}
\end{aligned}
\right.
\end{align}
(3) Performance
When $0<\mu<\frac{2}{\lambda_{max}}$, the steepest-descent algorithm converges:
\lim_{n\to\infty} w(n)=w_o
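The recursion $w(n+1)=w(n)+\mu[p-Rw(n)]$ is deterministic once $R$ and $p$ are known, so its convergence is easy to demonstrate. A minimal sketch, with hypothetical values of $R$, $p$, and a step size chosen inside the stability range $(0, 2/\lambda_{max})$:

```python
import numpy as np

# Hypothetical correlation matrix and cross-correlation vector.
R = np.array([[1.0, 0.4], [0.4, 1.0]])
p = np.array([0.7, 0.3])
w_o = np.linalg.solve(R, p)        # fixed point of the recursion

lam_max = np.linalg.eigvalsh(R).max()
mu = 1.0 / lam_max                 # inside 0 < mu < 2/lam_max

w = np.zeros(2)
for _ in range(200):
    w = w + mu * (p - R @ w)       # steepest-descent update
```

Because every eigenvalue of $I-\mu R$ has magnitude below one for this step size, `w` converges to `w_o`.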
2. LMS Algorithm
(1) Core idea
\begin{align}
&\hat{R}=\frac{1}{N}\sum_{i=1}^{N}u(i)u^H(i)\\
&\hat{p}=\frac{1}{N}\sum_{i=1}^{N}u(i)d^*(i)
\end{align}
(2) Update rule
\hat{w}(n+1) = \hat{w}(n)+\mu u(n)e^*(n)
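The update above can be sketched as a system-identification run. This is an illustrative sketch, not the original author's code; `w_true` is an assumed unknown FIR system, and the signals are real so $e^*(n)=e(n)$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 20000
w_true = np.array([0.5, -0.3, 0.2, 0.1])  # hypothetical unknown system

x = rng.standard_normal(N)                # white-noise input
d = np.convolve(x, w_true)[:N]            # noise-free desired signal

mu = 0.01                                 # small step size for stability
w = np.zeros(M)
for n in range(M - 1, N):
    u = x[n - M + 1 : n + 1][::-1]        # u(n) = [x(n), ..., x(n-M+1)]
    e = d[n] - w @ u                      # a priori error e(n)
    w = w + mu * u * e                    # LMS update (real: e*(n) = e(n))
```

Here $\sum_i\lambda_i = M\,r(0) = 4$, so the chosen `mu` sits well inside the bound $0<\mu<2/\sum_i\lambda_i$ given below.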
(3) Performance
The filter weight vector converges in the mean (first order), but the steady-state mean-square error is larger than the minimum mean-square error.
First-order (mean) convergence
When $0<\mu<\frac{2}{\lambda_{max}}$, the LMS algorithm converges in the mean:
\lim_{n\to\infty} E\{\hat{w}(n)\}=w_o
Steady-state performance of the mean-square error
The step-size factor must satisfy
0<\mu<\frac{2}{\sum\limits_{i=1}^{M}{\lambda_i}}=\frac{2}{Mr(0)}
a. Excess mean-square error $J_{ex}$
\begin{align}
&J_{ex}(n) = E\{\hat{J}(n)\}-J_{min}\\
&J_{ex}(\infty)\approx \mu J_{min}\frac{\sum\limits^{M}_{i=1}\lambda_i}{2-\mu\sum\limits_{i=1}^{M}{\lambda_i}}
\end{align}
b. Misadjustment $M$
\begin{align}
&M=\frac{J_{ex}(\infty)}{J_{min}}\\
&M=\frac{\mu \sum\limits^{M}_{i=1}\lambda_i}{2-\mu\sum\limits_{i=1}^{M}{\lambda_i}} \approx\frac{\mu}{2}\sum\limits_{i=1}^{M}\lambda_i=\frac{\mu}{2}Mr(0)=\frac{\mu}{2}M\lambda_{av}=\frac{M}{4\tau_{av}}
\end{align}
c. Average time constant $\tau_{av}$
\begin{align}
&\tau_{av}=\frac{1}{2\mu\lambda_{av}}\\
&\lambda_{av}=\frac{1}{M}\sum\limits_{i=1}^{M}\lambda_i
\end{align}
d. Eigenvalue spread / eigenvalue ratio $\chi$
\chi(R)=\frac{\lambda_{max}}{\lambda_{min}}
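The quantities in a.-d. can be checked numerically against each other. A minimal sketch, assuming a hypothetical correlation matrix $R$, a chosen $J_{min}$, and a step size inside the bound $2/\sum_i\lambda_i$:

```python
import numpy as np

# Hypothetical 2x2 correlation matrix, minimum MSE, and step size.
R = np.array([[1.0, 0.5], [0.5, 1.0]])
lam = np.linalg.eigvalsh(R)        # eigenvalues: 0.5 and 1.5
J_min = 0.1
mu = 0.05                          # well below 2/sum(lam) = 1

J_ex = mu * J_min * lam.sum() / (2 - mu * lam.sum())  # excess MSE J_ex(inf)
misadj = J_ex / J_min                                 # misadjustment
lam_av = lam.mean()                                   # average eigenvalue
tau_av = 1.0 / (2 * mu * lam_av)                      # average time constant
chi = lam.max() / lam.min()                           # eigenvalue spread
```

For a small $\mu\sum_i\lambda_i$, the exact misadjustment agrees with the approximation $\frac{\mu}{2}\sum_i\lambda_i = \frac{M}{4\tau_{av}}$, as the last equalities in b. claim.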