Multiple Testing

This post describes four common p-value correction methods for multiple testing: the Bonferroni correction, the Holm correction, the Westfall and Young permutation method, and the Benjamini and Hochberg false discovery rate, and explains how each method works and what its characteristics are.


Correcting p-values in multiple testing

Multiple testing corrections adjust p-values derived from multiple statistical tests in order to control the occurrence of false positives. In microarray data analysis, false positives are genes that are found to be statistically different between conditions but in reality are not.
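As a rough illustration of the problem, here is a minimal simulation sketch in Python (the data are simulated, not from the text): testing many genes that do not actually differ still produces a sizeable number of "significant" calls at an uncorrected 0.05 threshold.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# 1000 genes, 6 samples per group, all drawn from the same distribution (no real effect)
group_a = rng.normal(size=(1000, 6))
group_b = rng.normal(size=(1000, 6))

# Per-gene two-sample t-tests on truly null data
pvals = ttest_ind(group_a, group_b, axis=1).pvalue
print((pvals < 0.05).sum(), "of 1000 truly null genes called significant at p < 0.05")
# Roughly 5%, i.e. about 50 false positives, is expected without any correction
```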

 

 

Methods:


 

A. Bonferroni correction

The p-value of each gene is multiplied by the number of genes in the gene list. If the corrected p-value is still below the error rate, the gene is significant:

Corrected p-value = p-value * n (number of genes in the test) < 0.05

As a consequence, when testing 1000 genes at a time, the highest accepted individual p-value is 0.00005, which makes the correction very stringent. With a family-wise error rate of 0.05 (i.e., the probability of at least one error in the family), the expected number of false positives is 0.05.
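As an illustration, here is a minimal sketch of the Bonferroni adjustment in Python. The p-values are made-up example numbers; `multipletests(..., method='bonferroni')` from statsmodels implements the same correction.

```python
import numpy as np

# Hypothetical raw p-values for a small gene list (illustrative values only)
pvals = np.array([0.00002, 0.00004, 0.00009, 0.01, 0.2])
n = len(pvals)
alpha = 0.05

# Bonferroni: multiply each p-value by the number of tests (capped at 1)
corrected = np.minimum(pvals * n, 1.0)
significant = corrected < alpha

for p, c, sig in zip(pvals, corrected, significant):
    print(f"raw p = {p:.5f} -> corrected p = {c:.5f} -> significant: {sig}")
```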

 

B. Bonferroni Step-down (Holm) correction

This correction is very similar to the Bonferroni correction, but a little less stringent:

1) The p-values of the genes are ranked from the smallest to the largest.
2) The smallest p-value is multiplied by the number of genes in the gene list; if the result is less than 0.05, the gene is significant: Corrected p-value = p-value * n < 0.05
3) The second p-value is multiplied by the number of genes minus 1: Corrected p-value = p-value * (n-1) < 0.05
4) The third p-value is multiplied by the number of genes minus 2: Corrected p-value = p-value * (n-2) < 0.05

The sequence continues in this way until a gene is found not to be significant (a sketch of the procedure follows the example). Example with n = 1000 and error rate 0.05:

| Gene name | p-value before correction | Rank | Correction | Significant after correction? |
|---|---|---|---|---|
| A | 0.00002 | 1 | 0.00002 * 1000 = 0.02 | 0.02 < 0.05 => Yes |
| B | 0.00004 | 2 | 0.00004 * 999 = 0.03996 | 0.03996 < 0.05 => Yes |
| C | 0.00009 | 3 | 0.00009 * 998 = 0.0898 | 0.0898 > 0.05 => No |

Because the multiplier shrinks as the p-value rank increases, this correction is less conservative. However, the family-wise error rate is very similar to that of the Bonferroni correction (see table in section IV).
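A minimal sketch of this step-down procedure, using the three example p-values from the table above and assuming n = 1000 genes were tested; statsmodels' `multipletests(..., method='holm')` provides a full implementation.

```python
import numpy as np

pvals = np.array([0.00002, 0.00004, 0.00009])  # example p-values from the table above
n = 1000    # total number of genes tested, as in the example
alpha = 0.05

order = np.argsort(pvals)                        # rank from smallest to largest
significant = np.zeros(len(pvals), dtype=bool)

for rank, idx in enumerate(order):               # rank = 0, 1, 2, ...
    corrected = pvals[idx] * (n - rank)          # multiply by n, n-1, n-2, ...
    if corrected < alpha:
        significant[idx] = True
    else:
        break                                    # stop at the first non-significant gene

print(significant)   # [ True  True False ] for the values in the table
```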

 

C. Westfall and Young Permutation

Both the Bonferroni and Holm methods are called single-step procedures, in which each p-value is corrected independently. The Westfall and Young permutation method takes advantage of the dependence structure between genes by permuting all the genes at the same time. It follows a step-down procedure similar to the Holm method, combined with a bootstrapping method to compute the p-value distribution:

1) P-values are calculated for each gene based on the original data set and ranked.
2) The permutation method creates a pseudo data set by dividing the data into artificial treatment and control groups.
3) P-values for all genes are computed on the pseudo data set.
4) The successive minima of the new p-values are retained and compared to the original ones.
5) This process is repeated a large number of times; the proportion of resampled data sets in which the minimum pseudo p-value is less than the original p-value gives the adjusted p-value.

Because of the permutations, the method is very slow. The Westfall and Young permutation method has a family-wise error rate similar to that of the Bonferroni and Holm corrections. A simplified sketch of the permutation idea follows.
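The sketch below uses simulated data and a per-gene t-test, and implements the single-step minP variant rather than the full step-down procedure with successive minima, so it should be read as an illustration of the resampling idea only.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulated expression matrix: 100 genes x 12 samples (6 "treatment", 6 "control")
n_genes, n_per_group = 100, 6
data = rng.normal(size=(n_genes, 2 * n_per_group))
labels = np.array([True] * n_per_group + [False] * n_per_group)

# Raw p-values for each gene on the original labels
raw_p = ttest_ind(data[:, labels], data[:, ~labels], axis=1).pvalue

# Permutation: shuffle the labels, recompute all p-values, keep the minimum
n_perm = 1000
min_p = np.empty(n_perm)
for b in range(n_perm):
    perm = rng.permutation(labels)
    min_p[b] = ttest_ind(data[:, perm], data[:, ~perm], axis=1).pvalue.min()

# Single-step minP adjustment: proportion of permutations whose minimum
# p-value is at least as small as each gene's observed p-value
adjusted_p = np.array([(min_p <= p).mean() for p in raw_p])
print(adjusted_p[:5])
```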

D. Benjamini and Hochberg False Discovery Rate

This correction is the least stringent of the four options and therefore tolerates more false positives; there will also be fewer false negatives. Here is how it works:

1) The p-values of the genes are ranked from the smallest to the largest.
2) The largest p-value remains as it is.
3) The second largest p-value is multiplied by the total number of genes in the gene list divided by its rank; if the result is less than 0.05, the gene is significant: Corrected p-value = p-value * (n / (n-1)) < 0.05
4) The third largest p-value is multiplied in the same way: Corrected p-value = p-value * (n / (n-2)) < 0.05, and so on.

A minimal sketch of the computation follows.
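The sketch below applies the Benjamini-Hochberg adjustment to made-up p-values; `multipletests(..., method='fdr_bh')` in statsmodels computes the same quantity.

```python
import numpy as np

# Hypothetical raw p-values (illustrative values only)
pvals = np.array([0.00002, 0.00004, 0.00009, 0.01, 0.2])
n = len(pvals)
alpha = 0.05

order = np.argsort(pvals)                  # ascending order of p-values
ranks = np.empty(n, dtype=int)
ranks[order] = np.arange(1, n + 1)         # rank 1 = smallest p-value

# BH: multiply each p-value by n / rank ...
corrected = pvals * n / ranks
# ... then enforce that adjusted p-values do not decrease with rank
adj_sorted = np.minimum.accumulate(corrected[order][::-1])[::-1]
adjusted = np.empty(n)
adjusted[order] = np.minimum(adj_sorted, 1.0)

print(adjusted)
print(adjusted < alpha)
```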

 

 

 

See: http://yixf.name/2011/01/11/%E3%80%90%E6%96%87%E7%8C%AE%E6%8E%A8%E8%8D%90%E3%80%91%E5%A4%9A%E9%87%8D%E5%81%87%E8%AE%BE%E6%A3%80%E9%AA%8C%E4%B8%AD%E7%9A%84p%E5%80%BC%E6%A0%A1%E6%AD%A3/

 

http://fhqdddddd.blog.163.com/blog/static/18699154201093171158444/

 

http://en.wikipedia.org/wiki/Multiple_comparisons

 

http://www.silicongenetics.com/Support/GeneSpring/GSnotes/analysis_guides/mtc.pdf

 

These correction methods are currently also used for Gene Ontology enrichment analysis.

Reposted from: https://www.cnblogs.com/pangairu/p/4223885.html

### The relationship between ANOVA and multiple testing correction

ANOVA (analysis of variance) is a statistical method for testing whether the means of several groups differ significantly. If the ANOVA result indicates that at least two group means are not equal, a further analysis is needed to determine exactly which groups differ[^2]. This follow-up analysis is called a post hoc test or multiple comparison analysis.

Post hoc testing typically raises the multiple hypothesis testing problem: when many hypothesis tests are carried out at the same time, the probability of a false positive increases sharply. For a single comparison the false positive probability is α; for m mutually independent comparisons, the probability of at least one false positive is \(1 - (1 - \alpha)^m\). When m is large, the probability of detecting one or more false positives is high even if there is no real difference between the samples[^3]. This is the multiple comparisons problem.

To control the overall type I error rate (the false positive rate), the multiple hypothesis tests must be corrected. One common approach is the Bonferroni correction, which divides the significance level α by the number of tests m, lowering the significance level of each individual comparison. Another approach adjusts the test statistic, for example by dividing the computed F value by the total between-group degrees of freedom to maintain the original significance level[^1].

Below is an example of performing an ANOVA and a multiple-comparison correction in Python:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Example data
group1 = np.random.normal(0, 1, 10)
group2 = np.random.normal(1, 1, 10)
group3 = np.random.normal(0.5, 1, 10)

# Run the one-way ANOVA
f_statistic, p_value = f_oneway(group1, group2, group3)
print(f"ANOVA F-statistic: {f_statistic}, p-value: {p_value}")

# If the ANOVA is significant, run Tukey's HSD test
data = np.concatenate([group1, group2, group3])
labels = ['Group1'] * len(group1) + ['Group2'] * len(group2) + ['Group3'] * len(group3)
tukey_results = pairwise_tukeyhsd(data, labels, alpha=0.05)
print(tukey_results)
```

The code first runs a one-way ANOVA with `f_oneway` and, if the ANOVA result is significant, performs a multiple comparison analysis with Tukey's HSD. Tukey's HSD is a commonly used post hoc test that effectively controls the overall type I error rate.