Bonferroni Correction

What is a Bonferroni Correction?

A Bonferroni Correction is a statistical method that reduces the probability of false positives by adjusting the significance level for multiple comparisons. If you run a test with α = 0.05, the probability of a false positive will be 5%. If you run more comparisons at the same significance level, the chance of at least one false positive goes up, because each comparison is an additional opportunity for a false positive.

Bonferroni corrections are an optional feature on Statsig experiments that reduce the probability of Type I errors (false positives) by adjusting the significance level (α). The significance level is divided by the number of comparisons being evaluated. You can choose to apply the correction based on one or both of the following:

- The number of metrics in the scorecard.
- The number of test groups (multiple treatment hypotheses): the significance level is divided by the number of variants being compared against control.

Here you may select what percentage of your total α is divided evenly among the Primary Metrics, with the remaining α split equally among the Secondary Metrics. For example, with 2 Primary Metrics, 4 Secondary Metrics, and 60% of α allocated to the Primary Metrics:

- Each Primary Metric is calculated with α = 0.6 * 0.05 / 2 = 0.015.
- Each Secondary Metric is calculated with α = 0.4 * 0.05 / 4 = 0.005.

If both corrections are selected, they're applied on top of each other. In the example above, if we also wanted to correct for having 2 test groups, we would further divide each α by 2. When analyzing dimensions, if correction for metrics is enabled, it's applied separately for the dimensional breakdown. We use the number of dimensions as the total metric count to correct for in the dimensional analysis, but it does not impact topline metrics.

The Bonferroni correction controls the familywise error rate by testing each hypothesis at α/n, where n is the number of hypotheses tested in a typical multiple comparison (here 15). What you typically do is take your desired alpha level (e.g. 0.05) and divide it by the number of hypotheses (here 15); the result is the alpha level against which you compare your original p-values to see if they are significant. That's equivalent to multiplying the p-values by the number of comparisons. However, simultaneous CIs are not equivalent to Bonferroni correction; see the update below. Note, though, that there are better corrections than Bonferroni.

The Bonferroni Method
The Holm Method
Example: One-Way ANOVA in R

Suppose a teacher wants to know whether or not three different studying techniques lead to different exam scores among students. To test this, she randomly assigns 10 students to use each studying technique and records their exam scores.

You don't mention your experimental design, but decideTests and topTable are probably not even testing the same null hypothesis. decideTests will give you a matrix where each column corresponds to the null hypothesis for each coefficient in your design matrix. By comparison, topTable will do an ANOVA testing the null hypothesis that all (non-intercept) coefficients are equal to zero. An additional difference is that decideTests will apply the multiple testing correction across all gene/coefficient combinations, while the ANOVA in topTable only gives one p-value per gene and thus applies the correction across genes only. So, before you do anything else, you should clarify to yourself what it is you actually want to test.

Having said all that, your multiple testing correction strategy is rather silly, for several reasons:

- If you want a more stringent threshold, just lower the FDR threshold! If 5% is too high, just use 1%, or 0.1%, or whatever. No need to switch to a different correction method, especially if you do not appreciate the difference between FDR control (Benjamini-Hochberg) and strong FWER control (Bonferroni).
- If you must achieve strong FWER control across the set of detected genes, use Holm's method by setting method="holm", as this dominates Bonferroni under all circumstances (see comments in ?p.adjust).
- If you want more stringent definitions of DE genes, use treat and topTreat with a non-zero log-fold-change threshold. I hope you did a contrasts.fit somewhere before calling treat, otherwise your tests won't make any sense.

I should also add that the rowSums approach is not quite right in terms of FDR control of the resulting set of genes. This is because the BH method is applied globally to the set of gene- and coefficient-specific tests. There is no guarantee that the FDR across genes is controlled at the nominal threshold, or even close. When I run the above code, I get an actual FDR of ~30%, which is much greater than my nominal FDR of 5% - not good. The correct way to do things would be to properly combine the p-values across contrasts for each gene, using Simes' method:

    # Combine each gene's per-contrast p-values with Simes' method,
    # then apply BH across the per-gene combined p-values.
    treat.pval.list <- lapply(1:ncol(treat.pvalues), FUN=function(i) treat.pvalues[,i])
    treat.per.gene <- do.call(scran::combinePValues, c(treat.pval.list, list(method="simes")))
    Treat <- p.adjust(treat.per.gene, method="BH")

I changed the cutoff to 0.01 for a more stringent selection of DE genes.
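To make the split-alpha arithmetic concrete, here is a small sketch of the calculation in Python. The helper name `per_metric_alpha` and its signature are illustrative, not part of Statsig's API; the numbers mirror the 2-primary/4-secondary example above.

```python
# Sketch of the split-alpha Bonferroni arithmetic (illustrative helper,
# not a Statsig API): a share of the total alpha is divided evenly among
# primary metrics, the remainder among secondary metrics, and everything
# is further divided by the number of variants compared against control.

def per_metric_alpha(alpha, primary_share, n_primary, n_secondary, n_test_groups=1):
    """Return (alpha per primary metric, alpha per secondary metric)."""
    a_primary = primary_share * alpha / n_primary / n_test_groups
    a_secondary = (1 - primary_share) * alpha / n_secondary / n_test_groups
    return a_primary, a_secondary

a_p, a_s = per_metric_alpha(0.05, 0.6, 2, 4)
print(round(a_p, 6), round(a_s, 6))  # 0.015 0.005, matching the example
```

With the test-groups correction stacked on top (2 variants against control), each per-metric α is simply halved again.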
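The answer's recommendation of Holm over Bonferroni can be seen directly from the adjusted p-values: Holm's are never larger than Bonferroni's, so it rejects at least as many hypotheses at the same FWER. A minimal Python sketch of both adjustments (function names are illustrative; in R you would simply call p.adjust with method="holm" or method="bonferroni"):

```python
# Holm's step-down adjustment vs. plain Bonferroni, in plain Python.
# Holm multiplies the k-th smallest p-value by (n - k) instead of n,
# then enforces monotonicity, so holm(p)[i] <= bonferroni(p)[i] always.

def holm(pvals):
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_max = 0.0
    for rank, i in enumerate(order):
        # (n - rank) smaller multiplier for larger p-values; keep non-decreasing.
        running_max = max(running_max, min(1.0, (n - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

def bonferroni(pvals):
    return [min(1.0, len(pvals) * p) for p in pvals]

p = [0.01, 0.02, 0.03, 0.9]
print([round(x, 10) for x in holm(p)])        # [0.04, 0.06, 0.06, 0.9]
print([round(x, 10) for x in bonferroni(p)])  # [0.04, 0.08, 0.12, 1.0]
```

At a 0.05 cutoff, Holm rejects the first three hypotheses here while Bonferroni rejects only the first, which is what "dominates under all circumstances" means in practice.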
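The Simes-then-BH recipe from the answer can also be sketched outside R. The Python below mirrors, under stated assumptions, what scran::combinePValues(..., method="simes") followed by p.adjust(..., method="BH") computes: the helper names and the toy p-value matrix are invented for illustration.

```python
# Illustrative sketch: combine each gene's per-contrast p-values with
# Simes' method, then apply Benjamini-Hochberg across genes, so the FDR
# is controlled over genes rather than over gene/contrast combinations.

def simes(pvals):
    """Simes' combined p-value: min over ranks i of (m * p_(i) / i)."""
    m = len(pvals)
    return min(m * p / rank for rank, p in enumerate(sorted(pvals), start=1))

def bh_adjust(pvals):
    """BH adjusted p-values, as computed by R's p.adjust(method='BH')."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * n
    running_min = 1.0
    for k, i in enumerate(order):
        rank = n - k  # rank counted from the smallest p-value
        running_min = min(running_min, n * pvals[i] / rank)
        adjusted[i] = min(1.0, running_min)
    return adjusted

# Rows are genes, columns are contrasts (toy numbers).
pvalues = [[0.001, 0.20], [0.40, 0.03], [0.70, 0.90]]
per_gene = [simes(row) for row in pvalues]  # one combined p-value per gene
adj = bh_adjust(per_gene)                   # BH is now applied across genes only
```

The key point is that BH runs on one p-value per gene, so a gene only counts once toward the correction, unlike the rowSums approach criticized above.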