Composite Hypothesis Testing

This post covers composite hypothesis testing: detection when parameters of the PDFs are unknown. Using the detection of a DC level in white Gaussian noise as a running example, it discusses the Neyman-Pearson detector, then presents the Bayesian approach to composite hypothesis testing and the generalized likelihood ratio test (GLRT), with several GLRT examples and performance analyses.



Motivation:

  • Neyman-Pearson detectors require perfect knowledge of the PDFs
  • What if this information is unknown?
  • Are there detectors for such scenarios? (e.g., radar, sonar)

Approach:

  • Design the NP detector, assuming the parameters are known
  • Manipulate the test so that it does not depend on the unknown parameters

Example: DC Level in WGN with Unknown Amplitude (A>0)

Consider the DC level in WGN detection problem
$$
\begin{array}{ll}
\mathcal{H}_{0}: x[n]=w[n] & n=0,1,\ldots,N-1 \\
\mathcal{H}_{1}: x[n]=A+w[n] & n=0,1,\ldots,N-1
\end{array}
$$
where the value of $A$ is unknown, although a priori we know that $A>0$, and $w[n]$ is WGN with variance $\sigma^2$. Then, the NP test is to decide $\mathcal{H}_1$ if
$$
\frac{p\left(\mathbf{x};A,\mathcal{H}_{1}\right)}{p\left(\mathbf{x};\mathcal{H}_{0}\right)}
=\frac{\frac{1}{\left(2\pi\sigma^{2}\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}(x[n]-A)^{2}\right]}
{\frac{1}{\left(2\pi\sigma^{2}\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}x^{2}[n]\right]}>\gamma
$$
Taking the logarithm we have
$$
A\sum_{n=0}^{N-1}x[n]>\sigma^{2}\ln\gamma+\frac{NA^{2}}{2}
$$
Since it is known that $A>0$, we have
$$
\sum_{n=0}^{N-1}x[n]>\frac{\sigma^{2}}{A}\ln\gamma+\frac{NA}{2}
$$
Finally, scaling by $1/N$ produces the test
$$
T(\mathbf{x})=\frac{1}{N}\sum_{n=0}^{N-1}x[n]>\frac{\sigma^{2}}{NA}\ln\gamma+\frac{A}{2}=\gamma'
$$
Clearly, the test statistic, which is the sample mean of the data, does not depend on $A$.

Recall from Chapter 3 that $T(\mathbf{x};\mathcal{H}_0)=\bar{x}\sim\mathcal{N}(0,\sigma^2/N)$ and $T(\mathbf{x};\mathcal{H}_1)=\bar{x}\sim\mathcal{N}(A,\sigma^2/N)$. Hence,
$$
\begin{aligned}
P_{FA}&=\Pr\{T(\mathbf{x})>\gamma';\mathcal{H}_0\}=Q\left(\frac{\gamma'}{\sqrt{\sigma^2/N}}\right)\\
P_{D}&=\Pr\{T(\mathbf{x})>\gamma';\mathcal{H}_1\}=Q\left(\frac{\gamma'-A}{\sqrt{\sigma^2/N}}\right)=Q\left(Q^{-1}(P_{FA})-\sqrt{\frac{NA^2}{\sigma^2}}\right)
\end{aligned}
$$
Therefore, $P_{FA}$ (and the threshold) does not depend on $A$, although $P_D$ does depend on $A$.
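These expressions are easy to check numerically. The sketch below (an illustration, not part of the original text) evaluates the threshold and $P_D$ with SciPy, where $Q(x)$ is `norm.sf(x)` and $Q^{-1}(p)$ is `norm.isf(p)`:

```python
import numpy as np
from scipy.stats import norm  # Q(x) = norm.sf(x), Q^{-1}(p) = norm.isf(p)

def np_detector_performance(A, sigma2, N, pfa):
    """Threshold and detection probability of the sample-mean detector.

    The threshold gamma' is set from the target P_FA alone (it does not
    depend on A); P_D then depends only on the deflection sqrt(N A^2 / sigma^2).
    """
    gamma_prime = np.sqrt(sigma2 / N) * norm.isf(pfa)
    pd = norm.sf(norm.isf(pfa) - np.sqrt(N * A**2 / sigma2))
    return gamma_prime, pd
```

For example, with $N=25$, $\sigma^2=1$, $A=1$ and $P_{FA}=0.1$ the deflection is $5$, so $P_D$ is essentially one.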

Since the NP test maximizes $P_D$, the test above yields the highest $P_D$ for any value of $A$ (as long as $A>0$). Such a test is called a Uniformly Most Powerful (UMP) test: any other test will have poorer performance.


Unfortunately, UMP tests seldom exist.

Example: DC Level in WGN with Unknown Amplitude

Reconsider the example above with $-\infty<A<\infty$. If we assume perfect knowledge of $A$ to design an NP detector, the result is termed a clairvoyant detector.

When $A$ can take on positive and negative values, the clairvoyant detector decides $\mathcal{H}_1$ if
$$
\begin{aligned}
\frac{1}{N}\sum_{n=0}^{N-1}x[n]=\bar{x}&>\gamma_{+}' \quad\text{for } A>0\\
\frac{1}{N}\sum_{n=0}^{N-1}x[n]=\bar{x}&<\gamma_{-}' \quad\text{for } A<0
\end{aligned}
$$
The detector is clearly unrealizable, since it is composed of two different NP tests and the choice between them depends upon the unknown parameter $A$. It nevertheless provides an upper bound on performance, which can be found as follows.
$$
\begin{aligned}
P_{FA}&=\Pr\left\{\bar{x}>\gamma_{+}';\mathcal{H}_0\right\}=Q\left(\frac{\gamma_{+}'}{\sqrt{\sigma^2/N}}\right) &&\text{for } A>0\\
P_{FA}&=\Pr\left\{\bar{x}<\gamma_{-}';\mathcal{H}_0\right\}=1-Q\left(\frac{\gamma_{-}'}{\sqrt{\sigma^2/N}}\right)=Q\left(\frac{-\gamma_{-}'}{\sqrt{\sigma^2/N}}\right) &&\text{for } A<0
\end{aligned}
$$

$$
\begin{aligned}
P_{D}&=\Pr\left\{\bar{x}>\gamma_{+}';\mathcal{H}_1\right\}=Q\left(\frac{\gamma_{+}'-A}{\sqrt{\sigma^2/N}}\right)=Q\left(Q^{-1}(P_{FA})-\sqrt{\frac{NA^2}{\sigma^2}}\right) &&\text{for } A>0\\
P_{D}&=1-Q\left(\frac{\gamma_{-}'-A}{\sqrt{\sigma^2/N}}\right)=Q\left(\frac{-\gamma_{-}'+A}{\sqrt{\sigma^2/N}}\right)=Q\left(Q^{-1}(P_{FA})+\frac{A}{\sqrt{\sigma^2/N}}\right) &&\text{for } A<0
\end{aligned}
$$


Instead of the clairvoyant detector, let’s look at the realizable detector:
$$
T(\mathbf{x})=\left|\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right|>\gamma''
$$
Then the detection performance is
$$
\begin{aligned}
P_{FA}&=\Pr\left\{|\bar{x}|>\gamma'';\mathcal{H}_0\right\}=2\Pr\left\{\bar{x}>\gamma'';\mathcal{H}_0\right\}=2Q\left(\frac{\gamma''}{\sqrt{\sigma^2/N}}\right)\\
\gamma''&=\sqrt{\sigma^2/N}\,Q^{-1}\left(P_{FA}/2\right)\\
P_{D}&=\Pr\left\{|\bar{x}|>\gamma'';\mathcal{H}_1\right\}=Q\left(Q^{-1}(P_{FA}/2)-\frac{A}{\sqrt{\sigma^2/N}}\right)+Q\left(Q^{-1}(P_{FA}/2)+\frac{A}{\sqrt{\sigma^2/N}}\right)
\end{aligned}
$$


The performance of this realizable detector is thus not optimal, but it is close to that of the optimal clairvoyant detector.
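As a numerical illustration (not from the original text), the two $P_D$ expressions can be compared directly. At $A=0$ the two-sided detector's $P_D$ reduces to $P_{FA}$, and for $A\neq 0$ it sits only slightly below the clairvoyant bound:

```python
import numpy as np
from scipy.stats import norm  # Q(x) = norm.sf(x), Q^{-1}(p) = norm.isf(p)

def pd_clairvoyant(A, sigma2, N, pfa):
    """Upper bound: the one-sided NP test matched to the (unknown) sign of A."""
    d = abs(A) / np.sqrt(sigma2 / N)
    return norm.sf(norm.isf(pfa) - d)

def pd_two_sided(A, sigma2, N, pfa):
    """Realizable detector |x_bar| > gamma''."""
    d = A / np.sqrt(sigma2 / N)
    return norm.sf(norm.isf(pfa / 2) - d) + norm.sf(norm.isf(pfa / 2) + d)
```

The price of not knowing the sign of $A$ is that the false alarms must be split between the two tails, which replaces $Q^{-1}(P_{FA})$ by the slightly larger $Q^{-1}(P_{FA}/2)$.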

In fact, the proposed detector is an example of a more general approach to composite hypothesis testing, the generalized likelihood ratio test, which is described in the next section.

Composite Hypothesis Testing Approaches

Bayesian Approach

The Bayesian approach assigns prior PDFs to $\boldsymbol\theta_0$ and $\boldsymbol\theta_1$. In doing so it models the unknown parameters as realizations of a vector random variable. If the prior PDFs are denoted by $p(\boldsymbol\theta_0)$ and $p(\boldsymbol\theta_1)$, respectively, the PDFs of the data are
$$
\begin{aligned}
p\left(\mathbf{x};\mathcal{H}_0\right)&=\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_0;\mathcal{H}_0\right)p\left(\boldsymbol{\theta}_0\right)d\boldsymbol{\theta}_0\\
p\left(\mathbf{x};\mathcal{H}_1\right)&=\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_1;\mathcal{H}_1\right)p\left(\boldsymbol{\theta}_1\right)d\boldsymbol{\theta}_1
\end{aligned}
$$
where $p(\mathbf{x}\mid\boldsymbol\theta_i;\mathcal{H}_i)$ is the PDF of $\mathbf{x}$ conditioned on $\boldsymbol\theta_i$, assuming $\mathcal{H}_i$ is true. The unconditional PDFs $p(\mathbf{x};\mathcal{H}_0)$ and $p(\mathbf{x};\mathcal{H}_1)$ are now completely specified and no longer depend on the unknown parameters. With the Bayesian approach the optimal NP detector decides $\mathcal{H}_1$ if
$$
\frac{p\left(\mathbf{x};\mathcal{H}_1\right)}{p\left(\mathbf{x};\mathcal{H}_0\right)}
=\frac{\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_1;\mathcal{H}_1\right)p\left(\boldsymbol{\theta}_1\right)d\boldsymbol{\theta}_1}
{\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_0;\mathcal{H}_0\right)p\left(\boldsymbol{\theta}_0\right)d\boldsymbol{\theta}_0}>\gamma
$$

  • A prior PDF must be chosen.
  • The integration can be difficult.
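Both drawbacks can be seen in a small sketch. The following (an illustration with an assumed Gaussian prior on $A$, which is not specified in the text) approximates the marginalization integral for the DC-level problem on a grid; under $\mathcal{H}_0$ there are no unknown parameters, so the denominator needs no integral:

```python
import numpy as np

def bayes_likelihood_ratio(x, sigma2, prior_mean=0.0, prior_var=1.0):
    """Grid approximation of p(x; H1) / p(x; H0) with a Gaussian prior on A.

    The common factor (2*pi*sigma2)^(-N/2) cancels in the ratio, so only
    the exponentials are kept.
    """
    x = np.asarray(x, dtype=float)
    a = np.linspace(prior_mean - 6 * np.sqrt(prior_var),
                    prior_mean + 6 * np.sqrt(prior_var), 2001)
    da = a[1] - a[0]
    prior = np.exp(-(a - prior_mean) ** 2 / (2 * prior_var)) \
            / np.sqrt(2 * np.pi * prior_var)
    # p(x | A; H1) up to the common constant, evaluated on the grid of A values
    lik1 = np.exp(np.array([-np.sum((x - ai) ** 2) for ai in a]) / (2 * sigma2))
    num = np.sum(lik1 * prior) * da          # Riemann sum over the prior
    den = np.exp(-np.sum(x ** 2) / (2 * sigma2))
    return num / den
```

For this scalar problem a grid works; for a vector $\boldsymbol\theta$ the integral quickly becomes the hard part, which is exactly the second bullet above.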

Generalized Likelihood Ratio Test (GLRT)

The GLRT replaces the unknown parameters by their maximum likelihood estimates (MLEs). In general, the GLRT decides $\mathcal{H}_1$ if
$$
L_G(\mathbf{x})=\frac{p\left(\mathbf{x};\hat{\boldsymbol\theta}_1,\mathcal{H}_1\right)}{p\left(\mathbf{x};\hat{\boldsymbol\theta}_0,\mathcal{H}_0\right)}>\gamma
$$
where $\hat{\boldsymbol\theta}_1$ is the MLE of $\boldsymbol\theta_1$ assuming $\mathcal{H}_1$ is true (it maximizes $p(\mathbf{x};\boldsymbol\theta_1,\mathcal{H}_1)$), and $\hat{\boldsymbol\theta}_0$ is the MLE of $\boldsymbol\theta_0$ assuming $\mathcal{H}_0$ is true (it maximizes $p(\mathbf{x};\boldsymbol\theta_0,\mathcal{H}_0)$).

The GLRT can also be expressed in another form, which is sometimes more convenient. Since $\hat{\boldsymbol\theta}_i$ is the MLE under $\mathcal{H}_i$, it maximizes $p(\mathbf{x};\boldsymbol\theta_i,\mathcal{H}_i)$, or
$$
p\left(\mathbf{x};\hat{\boldsymbol{\theta}}_i,\mathcal{H}_i\right)=\max_{\boldsymbol{\theta}_i} p\left(\mathbf{x};\boldsymbol{\theta}_i,\mathcal{H}_i\right)
$$
Hence, $L_G(\mathbf{x})$ can be written as
$$
L_G(\mathbf{x})=\frac{\max_{\boldsymbol{\theta}_1} p\left(\mathbf{x};\boldsymbol{\theta}_1,\mathcal{H}_1\right)}{\max_{\boldsymbol{\theta}_0} p\left(\mathbf{x};\boldsymbol{\theta}_0,\mathcal{H}_0\right)}
$$
The approach also provides information about the unknown parameters, since the first step in determining $L_G(\mathbf{x})$ is to find the MLEs. We now continue the DC level in WGN example.

Example: DC Level in WGN with Unknown Amplitude - GLRT

In this case we have $\boldsymbol\theta_1=A$ and there are no unknown parameters under $\mathcal{H}_0$. The hypothesis test becomes
$$
\begin{array}{l}
\mathcal{H}_0: A=0\\
\mathcal{H}_1: A\neq 0
\end{array}
$$
Thus, the GLRT decides $\mathcal{H}_1$ if
$$
L_G(\mathbf{x})=\frac{p\left(\mathbf{x};\hat{A},\mathcal{H}_1\right)}{p\left(\mathbf{x};\mathcal{H}_0\right)}>\gamma
$$
The MLE of $A$ is found by maximizing
$$
p\left(\mathbf{x};A,\mathcal{H}_1\right)=\frac{1}{\left(2\pi\sigma^2\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right]
$$
By differentiating the likelihood (or log-likelihood) function and setting the derivative to zero, we obtain the MLE $\hat{A}=\bar{x}$. Thus,
$$
L_G(\mathbf{x})=\frac{\frac{1}{\left(2\pi\sigma^2\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-\bar{x})^2\right]}{\frac{1}{\left(2\pi\sigma^2\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\right]}
$$
Taking logarithms we have
$$
\begin{aligned}
\ln L_G(\mathbf{x})&=-\frac{1}{2\sigma^2}\left(\sum_{n=0}^{N-1}x^2[n]-2\bar{x}\sum_{n=0}^{N-1}x[n]+N\bar{x}^2-\sum_{n=0}^{N-1}x^2[n]\right)\\
&=-\frac{1}{2\sigma^2}\left(-2N\bar{x}^2+N\bar{x}^2\right)\\
&=\frac{N\bar{x}^2}{2\sigma^2}
\end{aligned}
$$
or we decide $\mathcal{H}_1$ if
$$
|\bar{x}|>\gamma'
$$
This detector is identical to the realizable detector we examined before, and its performance has already been given.
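A quick Monte Carlo run (an illustration, not part of the original text) confirms that thresholding $|\bar{x}|$ at $\gamma''=\sqrt{\sigma^2/N}\,Q^{-1}(P_{FA}/2)$ achieves the design false-alarm rate under $\mathcal{H}_0$, and that this is equivalent to thresholding the GLRT statistic:

```python
import numpy as np
from scipy.stats import norm  # Q^{-1}(p) = norm.isf(p)

def glrt_statistic(x, sigma2):
    """2 ln L_G(x) = N * x_bar^2 / sigma^2 for the known-variance GLRT."""
    x = np.asarray(x, dtype=float)
    return len(x) * np.mean(x) ** 2 / sigma2

rng = np.random.default_rng(0)
N, sigma2, pfa = 20, 1.0, 0.1
gamma_pp = np.sqrt(sigma2 / N) * norm.isf(pfa / 2)       # |x_bar| threshold
x0 = rng.normal(0.0, np.sqrt(sigma2), size=(20000, N))   # H0-only data
pfa_hat = np.mean(np.abs(x0.mean(axis=1)) > gamma_pp)    # empirical P_FA
```

Since $2\ln L_G(\mathbf{x})=N\bar{x}^2/\sigma^2$ is a monotone function of $|\bar{x}|$, comparing it to $N\gamma''^2/\sigma^2$ gives exactly the same decisions.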

Example: DC Level in WGN with Unknown Amplitude and Variance - GLRT

Consider the detection problem
$$
\begin{array}{ll}
\mathcal{H}_{0}: x[n]=w[n] & n=0,1,\ldots,N-1 \\
\mathcal{H}_{1}: x[n]=A+w[n] & n=0,1,\ldots,N-1
\end{array}
$$
where $A$ is unknown with $-\infty<A<\infty$ and $w[n]$ is WGN with unknown variance $\sigma^2$. A UMP test does not exist because the equivalent parameter test is
$$
\begin{array}{l}
\mathcal{H}_0: A=0,\ \sigma^2>0\\
\mathcal{H}_1: A\neq 0,\ \sigma^2>0
\end{array}
$$
which is two-sided. The GLRT decides $\mathcal{H}_1$ if
$$
L_G(\mathbf{x})=\frac{p\left(\mathbf{x};\hat{A},\hat{\sigma}_1^2,\mathcal{H}_1\right)}{p\left(\mathbf{x};\hat{\sigma}_0^2,\mathcal{H}_0\right)}>\gamma
$$
where $[\hat{A}\ \ \hat{\sigma}_1^2]^T$ is the MLE of the vector parameter $\boldsymbol\theta_1=[A\ \ \sigma^2]^T$ under $\mathcal{H}_1$, and $\hat{\sigma}_0^2$ is the MLE of the parameter $\boldsymbol\theta_0=\sigma^2$ under $\mathcal{H}_0$. Note that we need to estimate the variance under both hypotheses.

Since
$$
\begin{aligned}
p\left(\mathbf{x};A,\sigma^2,\mathcal{H}_1\right)&=\frac{1}{\left(2\pi\sigma^2\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right]\\
p\left(\mathbf{x};\sigma^2,\mathcal{H}_0\right)&=\frac{1}{\left(2\pi\sigma^2\right)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\right]
\end{aligned}
$$
similarly to before, the MLEs are
$$
\begin{aligned}
\hat{A}&=\bar{x}\\
\hat{\sigma}_1^2&=\frac{1}{N}\sum_{n=0}^{N-1}(x[n]-\bar{x})^2\\
\hat{\sigma}_0^2&=\frac{1}{N}\sum_{n=0}^{N-1}x^2[n]
\end{aligned}
$$
Thus the GLRT becomes
$$
L_G(\mathbf{x})=\left(\frac{\hat{\sigma}_0^2}{\hat{\sigma}_1^2}\right)^{N/2}
$$
In essence, the GLRT decides $\mathcal{H}_1$ if the fit of the signal $\hat{A}=\bar{x}$ to the data produces a much smaller error, as measured by $\hat{\sigma}_1^2=(1/N)\sum_{n=0}^{N-1}(x[n]-\hat{A})^2$, than a fit of no signal, as measured by $\hat{\sigma}_0^2=(1/N)\sum_{n=0}^{N-1}x^2[n]$. A slightly more intuitive form can be found as follows. Since
$$
\hat{\sigma}_1^2=\frac{1}{N}\sum_{n=0}^{N-1}x^2[n]-\bar{x}^2=\hat{\sigma}_0^2-\bar{x}^2
$$
we have
$$
2\ln L_G(\mathbf{x})=N\ln\left(\frac{\hat{\sigma}_1^2+\bar{x}^2}{\hat{\sigma}_1^2}\right)=N\ln\left(1+\frac{\bar{x}^2}{\hat{\sigma}_1^2}\right)
$$

Since $\ln(1+x)$ is monotonically increasing in $x$, an equivalent test statistic is
$$
T(\mathbf{x})=\frac{\hat{A}^2}{\hat{\sigma}_1^2}=\frac{\bar{x}^2}{\hat{\sigma}_1^2}
$$
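Since $\hat{A}=\bar{x}$, this statistic is computed directly from the data with no knowledge of $\sigma^2$. The sketch below (illustrative, not from the original text) also checks the identity $\hat{\sigma}_0^2=\hat{\sigma}_1^2+\bar{x}^2$ used in the derivation:

```python
import numpy as np

def glrt_unknown_variance(x):
    """Equivalent GLRT statistic T(x) = x_bar^2 / sigma1_hat^2.

    sigma1_hat^2 is the MLE of the noise variance under H1 (signal
    removed); sigma0_hat^2 is the MLE under H0 (no signal assumed).
    """
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    sigma1_hat2 = np.mean((x - xbar) ** 2)
    sigma0_hat2 = np.mean(x ** 2)
    # identity from the text: sigma0_hat2 == sigma1_hat2 + xbar**2
    return xbar ** 2 / sigma1_hat2, sigma0_hat2, sigma1_hat2
```

Intuitively, $T(\mathbf{x})$ is an estimated SNR: the detector declares a signal present when the estimated signal power is large relative to the estimated noise power.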
