Signal Synchronization: Beek 97

This post presents a maximum-likelihood algorithm for estimating the time and frequency offset in OFDM systems. By exploiting the cyclic prefix, the method accurately estimates the OFDM symbol position and the carrier frequency offset in white Gaussian noise.


Beek 97

  • An early paper in the field of multicarrier synchronization
  • Uses the repetition structure of the cyclic prefix to estimate the OFDM symbol position and the carrier frequency offset
  • Derives a maximum-likelihood estimator under an additive white Gaussian noise assumption
  • Published in IEEE Transactions on Signal Processing; cited roughly 1600 times as of 2022
  • J. J. van de Beek, M. Sandell, and P. O. Börjesson, "ML Estimation of Time and Frequency Offset in OFDM Systems," IEEE Transactions on Signal Processing, vol. 45, no. 7, Jul. 1997

Overall Approach

  • Assume the transmitted multicarrier data are randomized, so the transmitted signal is approximately Gaussian distributed
  • Assume the channel noise is additive white Gaussian noise
  • Because of Doppler shift and oscillator mismatch, there is a frequency offset between the received and the transmitted signal
  • The frequency offset and the OFDM symbol start position must be estimated from the received signal
  • Both are estimated by exploiting the correlation between the cyclic prefix and the data samples it is copied from

Outline of the Derivation

  • First, define the index sets containing the OFDM samples of the cyclic prefix and of the data it is copied from (2A)
  • Give the pairwise correlations between the samples in these sets (3)
  • Define the log-likelihood function and its simplified form (4, 4A)
  • Using the known probability density functions and the results derived in the appendix, obtain the explicit form of the log-likelihood function (5)
  • Derive the explicit form of each term in the log-likelihood expression (6, 7, 8)
  • Show how the maximum of the log-likelihood function is found (9)
  • Give the estimate of the frequency offset (10)
  • Substitute the maximizing frequency offset back into the log-likelihood function, leaving the symbol start position as the only unknown (11)
  • Give the final expressions for the frequency-offset and symbol-start estimates (12, 13)

Detailed Derivation

$$r(k)=s(k-\theta)\, e^{j 2 \pi \varepsilon k / N}+n(k)$$
$r(k)$ is the received signal; the transmitted signal $s(k)$ is modeled as a complex Gaussian process, and $n(k)$ is complex additive white Gaussian noise (AWGN).

  • time offset $\theta$
  • carrier frequency offset $\varepsilon$
  • DFT length $N$
  • cyclic prefix length $L$
  • we observe $2N+L$ consecutive samples of $r(k)$, and these samples contain one complete $(N+L)$-sample OFDM symbol (a simulation sketch of this model follows the list)
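
A minimal Python/NumPy simulation sketch of this signal model. The symbol sizes, SNR, and offsets below are illustrative assumptions, not values from the paper, and the delay is approximated by a circular shift of the transmitted stream.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64          # DFT length
L = 16          # cyclic prefix length
theta = 23      # symbol start (time offset) to be estimated, in samples
eps = 0.1       # carrier frequency offset, in units of the subcarrier spacing
snr_db = 10.0

def ofdm_symbol(rng, N, L):
    """One OFDM symbol with cyclic prefix: random subcarriers -> IFFT -> prepend last L samples."""
    X = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    x = np.fft.ifft(X) * np.sqrt(N)          # unit-power time-domain samples
    return np.concatenate([x[-L:], x])

# Transmitted signal: a few consecutive OFDM symbols (approximately complex Gaussian).
s = np.concatenate([ofdm_symbol(rng, N, L) for _ in range(4)])

# Received signal r(k) = s(k - theta) e^{j 2 pi eps k / N} + n(k); the delay is
# approximated here by a circular shift.
k = np.arange(len(s))
sigma_n2 = 1.0 / 10 ** (snr_db / 10)         # noise power for unit signal power
noise = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(len(s)) + 1j * rng.standard_normal(len(s)))
r = np.roll(s, theta) * np.exp(1j * 2 * np.pi * eps * k / N) + noise
```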

Define the index sets
$$\mathcal{I} \triangleq\{\theta, \cdots, \theta+L-1\} \quad \text{and} \quad \mathcal{I}^{\prime} \triangleq\{\theta+N, \cdots, \theta+N+L-1\} \hspace{8ex} (2A)$$

  • The set $\mathcal{I}$ contains the indices of the data samples that are copied into the cyclic prefix
  • The set $\mathcal{I}^{\prime}$ contains the indices of this prefix

Collect the observed samples in the $(2N+L) \times 1$ vector $\boldsymbol{r} \triangleq[r(1) \cdots r(2 N+L)]^{T}$.

  • The samples of the cyclic prefix and their copies, $r(k),\ k \in \mathcal{I} \cup \mathcal{I}^{\prime}$, are pairwise correlated
  • The remaining samples, $r(k),\ k \notin \mathcal{I} \cup \mathcal{I}^{\prime}$, are mutually uncorrelated

$$\forall k \in \mathcal{I}: \quad E\left\{r(k)\, r^{*}(k+m)\right\}=\begin{cases} \sigma_{s}^{2}+\sigma_{n}^{2} & m=0 \\ \sigma_{s}^{2}\, e^{-j 2 \pi \varepsilon} & m=N \\ 0 & \text{otherwise} \end{cases} \hspace{8ex} (3)$$
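
As a quick sanity check of (3), the sketch below estimates $E\{r(k)r^{*}(k+N)\}$ empirically for a sample inside the cyclic prefix and for an unrelated pair. It reuses the same illustrative model and parameter values as the sketch above; none of this comes from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, eps, sigma_n2 = 64, 16, 0.1, 0.1   # illustrative values, sigma_s^2 = 1
trials = 20000

acc_prefix = 0j   # averages r(k) r*(k+N) for k inside the cyclic prefix (k in I)
acc_other = 0j    # averages a pair with lag m != 0, N (should be uncorrelated)
for _ in range(trials):
    X = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    x = np.fft.ifft(X) * np.sqrt(N)
    s = np.concatenate([x[-L:], x])      # one CP-OFDM symbol starting at index 0 (theta = 0)
    k = np.arange(N + L)
    n = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(N + L) + 1j * rng.standard_normal(N + L))
    r = s * np.exp(1j * 2 * np.pi * eps * k / N) + n
    acc_prefix += r[0] * np.conj(r[N])
    acc_other += r[L] * np.conj(r[L + N - 1])

print(acc_prefix / trials)   # close to sigma_s^2 * exp(-j 2 pi eps) ~= 0.81 - 0.59j
print(acc_other / trials)    # close to 0
```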

  • The probability density function, $f(\boldsymbol{r} \mid \theta, \varepsilon)$
  • The log-likelihood function, $\Lambda(\theta, \varepsilon)$
    $$\begin{aligned} \Lambda(\theta, \varepsilon) &=\log f(\boldsymbol{r} \mid \theta, \varepsilon) \\ &=\log \left(\prod_{k \in \mathcal{I}} f(r(k), r(k+N)) \prod_{k \notin \mathcal{I} \cup \mathcal{I}^{\prime}} f(r(k))\right) \\ &=\log \left(\prod_{k \in \mathcal{I}} \frac{f(r(k), r(k+N))}{f(r(k))\, f(r(k+N))} \prod_{k} f(r(k))\right) \end{aligned} \hspace{8ex} (4)$$

The product factor $\prod_{k} f(r(k))$ in $\Lambda(\theta, \varepsilon)$ is independent of $\theta$ (since the product is over all $k$) and of $\varepsilon$ (since the density is rotationally invariant). Dropping this constant factor, the log-likelihood can be written as
$$\Lambda(\theta, \varepsilon) =\log \left(\prod_{k \in \mathcal{I}} \frac{f(r(k), r(k+N))}{f(r(k))\, f(r(k+N))} \right) \hspace{8ex} (4A)$$

Under the assumption that $\boldsymbol{r}$ is a jointly Gaussian vector, the numerator $f(r(k), r(k+N))$ is a two-dimensional complex Gaussian density, and the denominator $f(r(k))\, f(r(k+N))$ is the product of two one-dimensional complex Gaussian densities. Using the results derived in the appendix, this gives the following (a numerical sketch of these quantities follows the definitions):

  • $\Lambda(\theta, \varepsilon)=|\gamma(\theta)| \cos (2 \pi \varepsilon+\angle \gamma(\theta))-\rho\, \Phi(\theta) \hspace{8ex} (5)$
  • $\gamma(m) \triangleq \sum_{k=m}^{m+L-1} r(k)\, r^{*}(k+N) \hspace{8ex} (6)$
  • $\Phi(m) \triangleq \frac{1}{2} \sum_{k=m}^{m+L-1}\left(|r(k)|^{2}+|r(k+N)|^{2}\right) \hspace{8ex} (7)$
  • $\rho \triangleq\left|\dfrac{E\left\{r(k)\, r^{*}(k+N)\right\}}{\sqrt{E\left\{|r(k)|^{2}\right\} E\left\{|r(k+N)|^{2}\right\}}}\right| =\dfrac{\sigma_{s}^{2}}{\sigma_{s}^{2}+\sigma_{n}^{2}}=\dfrac{\mathrm{SNR}}{\mathrm{SNR}+1} \hspace{8ex} (8)$
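
A minimal Python/NumPy sketch of the quantities (6)-(8). It assumes the received samples are available as a 1-D complex array `r` and that the SNR (and hence $\rho$) is known; the function names are illustrative, not from the paper.

```python
import numpy as np

def gamma(r, m, N, L):
    """Correlation term (6): correlate the L candidate prefix samples starting at m with their copies N samples later."""
    k = np.arange(m, m + L)
    return np.sum(r[k] * np.conj(r[k + N]))

def phi(r, m, N, L):
    """Energy term (7): half the total energy of the candidate prefix and of its copy."""
    k = np.arange(m, m + L)
    return 0.5 * np.sum(np.abs(r[k]) ** 2 + np.abs(r[k + N]) ** 2)

def rho_from_snr(snr_linear):
    """Weighting factor (8): depends only on the SNR."""
    return snr_linear / (snr_linear + 1.0)
```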

The maximization of the log-likelihood function can be performed in two steps:
$$\max _{(\theta, \varepsilon)} \Lambda(\theta, \varepsilon)=\max _{\theta} \max _{\varepsilon} \Lambda(\theta, \varepsilon)=\max _{\theta} \Lambda\left(\theta, \hat{\varepsilon}_{\mathrm{ML}}(\theta)\right) \hspace{8ex} (9)$$
To maximize the first term of (5), $|\gamma(\theta)| \cos (2 \pi \varepsilon+\angle \gamma(\theta))$, with respect to $\varepsilon$, the cosine factor must equal 1.

This yields the ML estimate of the frequency offset $\varepsilon$:

$$\hat{\varepsilon}_{\mathrm{ML}}(\theta)=-\frac{1}{2 \pi} \angle \gamma(\theta)+n \hspace{8ex} (10)$$
where $n$ is an integer. In typical engineering scenarios we assume $|\varepsilon| < 1/2$, so $n = 0$.
Substituting (10) into (5), the log-likelihood function of $\theta$ becomes
$$\Lambda\left(\theta, \hat{\varepsilon}_{\mathrm{ML}}(\theta)\right)=|\gamma(\theta)|-\rho\, \Phi(\theta) \hspace{8ex} (11)$$

and the joint ML estimates of $\theta$ and $\varepsilon$ become (a complete estimator sketch follows these two expressions):

  • $\hat{\theta}_{\mathrm{ML}}=\arg \max _{\theta}\{|\gamma(\theta)|-\rho\, \Phi(\theta)\} \hspace{8ex} (12)$
  • $\hat{\varepsilon}_{\mathrm{ML}}=-\dfrac{1}{2 \pi} \angle \gamma\left(\hat{\theta}_{\mathrm{ML}}\right) \hspace{8ex} (13)$
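
A minimal Python/NumPy sketch of the joint estimator (12)-(13). It assumes `r` is a 1-D complex array containing at least $2N+L$ samples with one complete OFDM symbol and that $\rho$ (i.e. the SNR) is known; the names and parameter values are illustrative.

```python
import numpy as np

def beek_ml_estimate(r, N, L, rho):
    """Joint ML time/frequency-offset estimate over one observation window, eqs. (11)-(13)."""
    n_starts = len(r) - N - L + 1                 # candidate symbol start positions
    metric = np.empty(n_starts)
    for m in range(n_starts):
        k = np.arange(m, m + L)
        g = np.sum(r[k] * np.conj(r[k + N]))      # gamma(m), eq. (6)
        p = 0.5 * np.sum(np.abs(r[k]) ** 2 + np.abs(r[k + N]) ** 2)  # Phi(m), eq. (7)
        metric[m] = np.abs(g) - rho * p           # eq. (11)
    theta_hat = int(np.argmax(metric))            # eq. (12)
    k = np.arange(theta_hat, theta_hat + L)
    eps_hat = -np.angle(np.sum(r[k] * np.conj(r[k + N]))) / (2 * np.pi)  # eq. (13)
    return theta_hat, eps_hat

# Example use with the simulated r from the signal-model sketch above
# (snr_db = 10 dB gives rho = 10 / 11):
# theta_hat, eps_hat = beek_ml_estimate(r, N=64, L=16, rho=10 / 11)
```

Note that the estimator only locates a cyclic prefix within the observation window: when the window spans several consecutive symbols, as in the simulation above, the metric peaks once per symbol and $\hat{\theta}_{\mathrm{ML}}$ gives the symbol start modulo $N+L$, while the frequency estimate is unambiguous only for $|\varepsilon| < 1/2$.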