Ryan Martin (2019). False confidence, non-additive beliefs, and valid statistical inference. RESEARCHERS.ONE, https://www.researchers.one/article/2019-02-1.

Statistics has made tremendous advances since the times of Fisher, Neyman, Jeffreys, and others, but the fundamental questions about probability and inference that puzzled our founding fathers still exist and might even be more relevant today. To overcome these challenges, I propose to look beyond the two dominant schools of thought and ask: what do scientists need out of statistics? Do the existing frameworks meet these needs? And, if not, how can the void be filled? To the first question, I contend that scientists seek to convert their data, posited statistical model, etc., into calibrated degrees of belief about quantities of interest. To the second question, I argue that any framework that returns additive beliefs, i.e., probabilities, necessarily suffers from *false confidence*---certain false hypotheses tend to be assigned high probability---and therefore risks drawing systematically misleading conclusions. This reveals the fundamental importance of *non-additive beliefs* in the context of statistical inference. But non-additivity alone is not enough, so, to the third question, I offer a sufficient condition, called *validity*, for avoiding false confidence, and present a framework, based on random sets and belief functions, that provably meets this condition.
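To make the false-confidence phenomenon concrete, here is a minimal simulation sketch — my own illustration under simple assumptions, not code from the paper. With data X ~ N(θ, 1), a flat prior (so the posterior is N(x, 1)), and true θ = 0, the hypothesis H: |θ| > ε is false in every repetition, yet for small ε the posterior assigns it high probability almost every time:

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def false_confidence_rate(eps=0.1, n_sims=10_000, seed=1):
    """Fraction of repetitions in which the posterior assigns
    probability > 0.9 to the false hypothesis H: |theta| > eps.

    Model: X ~ N(theta, 1) with a flat prior, so the posterior
    is N(x, 1). Data are generated with true theta = 0, so H is
    false in every repetition.
    """
    rng = random.Random(seed)
    high_belief = 0
    for _ in range(n_sims):
        x = rng.gauss(0.0, 1.0)  # one observation under theta = 0
        # posterior probability of the (false) hypothesis |theta| > eps
        p_false_hyp = 1.0 - (norm_cdf(eps - x) - norm_cdf(-eps - x))
        if p_false_hyp > 0.9:
            high_belief += 1
    return high_belief / n_sims

if __name__ == "__main__":
    # Nearly every repetition assigns strong belief to a false hypothesis.
    print(false_confidence_rate())
```

This is just the simplest neighborhood-style instance of the phenomenon; the names `norm_cdf` and `false_confidence_rate` are mine, chosen for this sketch.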

March 25, 2019 7:39 pm

Thanks for the feedback. My response to Clifton's 02/13/2019 comments and Crane's 03/12/2019 comments is in the attached PDF file.

June 2, 2019 4:26 am

I think that the most important principle of statistical inference is Lucien Le Cam's Principle 0: never trust any principles 100% (or something like that). We have to remain open to being surprised and to completely rethinking our models. Any standard philosophical framework for statistical inference *fails* because of Principle 0. And violations of Principle 0 are responsible for major miscarriages of justice, scientific scandals, and more... We need to bring personal moral responsibility back as a basic principle of statistical inference.

More technically, each of the existing frameworks is a "model", and though many models are useful, none of them is actually "true". The question is whether or not they are adequate for their purpose. The role of statistics in science is an important role in a many-party game. Bayesian theory asks: what should *I* believe? Hypothesis testing puts us in a two-person game, which is not very interesting except as a very, very rough approximation. I am pretty sure that it is impossible to come up with a compelling multi-party framework. The situation is already bad enough in the already formalised context of a courtroom: there are always many more than two parties, even if legal theory sometimes likes to pretend there are only two.

(Attached PDF of comments)