Since this approach involves priors, another aspect of my research addresses the criticism that the methodology is inherently subjective. A natural counterargument is that all statistical inference is necessarily subjective, as it involves choices made by the statistician (models, losses, priors, etc.). To partially address this subjective element, one incorporates model checking and checking for prior-data conflict into the analysis. A valid theory of inference does not incorporate subjective elements that cannot be checked against the (objective) data for their reasonableness.
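As a minimal illustration of what checking for prior-data conflict can look like in practice, the sketch below (not taken from any of the works cited here; the normal model, the prior parameters, and the simulated data are all hypothetical choices for the example) compares the observed value of a minimal sufficient statistic to its prior predictive distribution: a small tail probability indicates that the prior places little mass where the data lie.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: x_i ~ N(mu, 1) with prior mu ~ N(mu0, tau0^2).
mu0, tau0, n = 0.0, 1.0, 20
x = rng.normal(3.0, 1.0, size=n)   # data deliberately generated far from the prior's center
xbar = x.mean()                     # minimal sufficient statistic for mu

# Simulate the prior predictive distribution of xbar:
# draw mu from the prior, then xbar | mu ~ N(mu, 1/n).
mu_draws = rng.normal(mu0, tau0, size=100_000)
xbar_pred = rng.normal(mu_draws, 1 / np.sqrt(n))

# Prior-data conflict check: prior predictive probability of an xbar
# at least as far from the prior's center as the one observed.
# (For this unimodal, symmetric predictive, a two-sided tail probability.)
p = np.mean(np.abs(xbar_pred - mu0) >= abs(xbar - mu0))
print(f"prior-data conflict p-value: {p:.4f}")  # a small value signals conflict
```

Here the data are centred near 3 while the prior predictive for the sample mean is centred at 0, so the tail probability is small and the check flags a conflict; with data generated near the prior's centre it would not.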

The book Measuring Statistical Evidence Using Relative Belief summarizes this approach to statistics. It is our belief that this provides a much more logical and sound basis for statistics, free of many of the difficulties and paradoxes from which other approaches suffer. For example, it is acknowledged that objectivity is a desirable but unattainable goal in statistical analyses and that, while subjective choices are inevitable and often useful, their effects must be assessed and controlled. The following quote from the book summarizes the point of view taken.
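To make the idea of measuring evidence concrete, here is a minimal sketch of a relative belief ratio, RB(theta) = posterior(theta) / prior(theta), for a discrete parameter. The binomial model, the grid of values, and the observed counts are all hypothetical choices for the example; the point is only that RB(theta) > 1 means the data have increased belief in theta (evidence in favour), and RB(theta) < 1 means evidence against.

```python
from math import comb

# Hypothetical setup: theta is a binomial success probability,
# restricted to a discrete grid with a uniform prior.
grid = [i / 10 for i in range(1, 10)]           # theta in {0.1, ..., 0.9}
prior = {t: 1 / len(grid) for t in grid}         # uniform prior

n, k = 30, 21                                    # observed: 21 successes in 30 trials

# Posterior via Bayes' rule on the grid.
lik = {t: comb(n, k) * t**k * (1 - t)**(n - k) for t in grid}
norm = sum(prior[t] * lik[t] for t in grid)
post = {t: prior[t] * lik[t] / norm for t in grid}

# Relative belief ratio: how the data changed beliefs about each theta.
rb = {t: post[t] / prior[t] for t in grid}

# The relative belief estimate maximizes RB(theta).
estimate = max(grid, key=rb.get)
print(estimate, round(rb[estimate], 2))
```

With 21 successes in 30 trials the ratio peaks at theta = 0.7, where RB is well above 1, so the data are evidence in favour of that value; values far from 0.7 have RB below 1, evidence against.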

"No matter how the ingredients are chosen they may be wrong in the sense that they are unreasonable in light of the data obtained. So, as part of any statistical analysis, it is necessary to check the ingredients chosen against the data. If the ingredients are determined to be flawed, then any inferences based on these choices are undermined with respect to their validity. Also, checking the model and the prior against the data is part of how statistics can deal with the inherent subjectivity in any statistical analysis. There should be an explicit recognition that subjectivity is always part of a statistical analysis. A positive consequence of accepting this is that it is now possible to address an important problem, namely, the necessity of assessing and controlling the effects of subjectivity. This seems like a more appropriate role for statistics in science as opposed to arguing for a mythical objectivity or for the virtues of subjectivity based on some kind of coherency."

- Papers (with links to the more recent ones).
- Books
- Evans, M. and Swartz, T. (2000) Approximating Integrals via Monte Carlo and Deterministic Methods. Oxford Statistical Science Series 20, Oxford University Press.
- Evans, M. and Rosenthal, J. (2003, 2010) Probability and Statistics: The Science of Uncertainty. This text is no longer published by Freeman and is now available here as a free download.
- Evans, M. (2015) Measuring Statistical Evidence Using Relative Belief. Monographs on Statistics and Applied Probability 144, CRC Press, Taylor & Francis Group. (blog, review)