Papers organized by topic

Under each topic heading, a statement of the problem addressed precedes the link to papers describing progress toward solving it.

Merging statistical methods

To analyze a given data set, a statistician can usually choose among several different methods. Unfortunately, the interpretation of the data can depend strongly on the method chosen, even when theory and simulation studies provide insufficient guidance about which method is most reliable. In this situation, the statistician would like to combine the results of the best methods.

In the special case of robust Bayesian statistics, multiple methods of data analysis may be available for interpreting a given data set since multiple prior distributions are equally applicable to the problem. If there is a reliable frequentist method, the statistician needs a way to choose a prior distribution that is even more reliable.

Proposed approaches to combining statistical methods

A framework of frequentist inference based on confidence distributions

Frequentist methods have been criticized for being hard to interpret, for being inapplicable to many problems that can be solved by Bayesian statistics, and for yielding results that are not coherent with one another. These problems can largely be overcome by using a confidence distribution, without resorting to a prior distribution.

The confidence distribution is a distribution on parameter space that encodes nested confidence intervals and corresponding p-values. While the Bayesian posterior is defined in terms of a conditional distribution given the observed data, the confidence distribution is instead defined such that the probability that the parameter value lies in any fixed subset of parameter space, given the observed data, is equal to the coverage rate of the corresponding confidence interval.
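As a concrete illustration of the defining property above, consider the standard textbook case of a normal mean with known standard deviation (an assumed example, not taken from the papers below): the confidence distribution is a normal CDF centered at the sample mean, and its quantiles reproduce the classical confidence intervals.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch (assumed normal model with known sigma, not from the
# papers below): the confidence distribution for the mean theta is
# C(theta) = Phi((theta - xbar) / (sigma / sqrt(n))).
def confidence_distribution(theta, xbar, sigma, n):
    """Probability the CD assigns to the region {parameter <= theta}."""
    return norm.cdf(theta, loc=xbar, scale=sigma / np.sqrt(n))

def cd_interval(xbar, sigma, n, level=0.95):
    """Central interval from CD quantiles; coincides with the classical z-interval."""
    alpha = 1.0 - level
    se = sigma / np.sqrt(n)
    return (norm.ppf(alpha / 2, loc=xbar, scale=se),
            norm.ppf(1 - alpha / 2, loc=xbar, scale=se))

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=50)
lo, hi = cd_interval(data.mean(), sigma=2.0, n=50)
```

The probability that the confidence distribution assigns to the interval (lo, hi) is exactly 0.95, matching the coverage rate of the corresponding confidence interval.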

Papers developing a framework of frequentist reasoning

Local false discovery rate (LFDR) estimators and effect-size estimators based on them

In tests of multiple null hypotheses, the LFDR of each null hypothesis is a posterior probability that it is true; low LFDRs correspond to high posterior probabilities of some effect. Unlike fully Bayesian methods, LFDR estimation does not require specification of a prior distribution but only its estimation. One article on this topic ("Empirical Bayes interval estimates that are conditionally equal to unadjusted confidence intervals or to default prior credibility intervals") develops an algorithm for generating point and interval estimates of an effect size of interest from an estimate of the LFDR. The other articles present LFDR estimators and performance comparisons between LFDR estimators:

Empirical Bayes estimation of local false discovery rates and of effect sizes
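A minimal sketch of the two-groups model that underlies LFDR estimation may clarify the idea. The density and null-proportion estimators below are crude stand-ins, not the estimators developed in the papers above: the LFDR at a z-value z is pi0 * f0(z) / f(z), where f0 is the theoretical null density, f the mixture density, and pi0 the proportion of true null hypotheses.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

# Crude sketch of the two-groups model behind LFDR estimation (the estimators
# here are simple stand-ins, not those of the papers above):
# lfdr(z) = pi0 * f0(z) / f(z), the posterior probability that the null is true.
rng = np.random.default_rng(1)
n_null, n_alt = 4500, 500
z = np.concatenate([rng.standard_normal(n_null),        # z-scores of true nulls
                    rng.normal(loc=3.0, size=n_alt)])   # z-scores with an effect

f = gaussian_kde(z)                         # estimate of the mixture density f
p = 2 * norm.sf(np.abs(z))                  # two-sided p-values
pi0_hat = min(1.0, 2 * np.mean(p > 0.5))    # crude null-proportion estimate
lfdr = np.clip(pi0_hat * norm.pdf(z) / f(z), 0.0, 1.0)
```

Hypotheses with large |z| receive low estimated LFDRs, i.e., high posterior probabilities of some effect, while z-values near zero receive LFDRs near one.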

Generalizing the likelihood measure of the strength of statistical inference to composite hypotheses

Quantifying statistical evidence with a likelihood ratio works well for simple hypotheses, but most problems in statistical inference involve composite alternative hypotheses.
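For instance (a hypothetical binomial illustration, not the measures developed in the paper below), one common generalization replaces the alternative hypothesis's likelihood with its maximum over the composite hypothesis:

```python
from scipy.stats import binom

# Hypothetical binomial illustration (not the measures of the paper below):
# evidence for a simple alternative versus a simple null, and its
# generalization to the composite alternative theta != theta0.
def simple_lr(x, n, theta0, theta1):
    """Likelihood ratio for two simple hypotheses about the success probability."""
    return binom.pmf(x, n, theta1) / binom.pmf(x, n, theta0)

def max_lr(x, n, theta0):
    """Maximized likelihood ratio: sup over theta of L(theta) / L(theta0)."""
    theta_hat = x / n                      # MLE of the success probability
    return binom.pmf(x, n, theta_hat) / binom.pmf(x, n, theta0)

lr = max_lr(x=35, n=50, theta0=0.5)
```

The maximized ratio is never less than 1, which illustrates why naive maximization overstates the evidence against the null and motivates better-behaved measures for composite hypotheses.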

Measures of the strength of statistical evidence for composite alternative hypotheses

About this page

Last modified January 14, 2016 1:38 AM
