
    Law of total probability and Bayes' theorem in Riesz spaces

    Under Review Since : 2018-10-03

    This note generalizes the notion of conditional probability to Riesz spaces using the order-theoretic approach. With the aid of this concept, we establish the law of total probability and Bayes' theorem in Riesz spaces; we also prove an inclusion-exclusion formula in Riesz spaces. Several examples are provided to show that the law of total probability, Bayes' theorem and inclusion-exclusion formula in probability theory are special cases of our results.
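
    For reference, the classical identities that arise as special cases of the paper's results are, for a partition B_1, B_2, ... of the sample space with P(B_i) > 0:

        P(A) = \sum_i P(A \mid B_i)\, P(B_i),
        \qquad
        P(B_j \mid A) = \frac{P(A \mid B_j)\, P(B_j)}{\sum_i P(A \mid B_i)\, P(B_i)}.

    The paper lifts these identities to the order-theoretic setting of Riesz spaces.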

    What a t-test easily hides

    Under Review Since : 2018-09-18

    To justify the effort of developing a theoretical construct, a theoretician needs empirical data that support a non-random effect of sufficiently high replication-probability. To establish such effects statistically, researchers (rightly) rely on a t-test. But many pursue questionable strategies that lower the cost of data collection. Our paper reconstructs two such strategies. Both reduce the minimum sample size (NMIN) sufficing under conventional errors (α, β) to register a given effect size (d) as a statistically significant non-random data signature. The first strategy increases the β-error; the second treats the control group as a constant, thereby collapsing a two-sample t-test into its one-sample version. (A two-sample t-test for d=0.50 under α=0.05, β=0.05 with NMIN=176, for instance, becomes a one-sample t-test under α=0.05, β=0.20 with NMIN=27.) Not only does this decrease the replication-probability of the data from (1-β)=0.95 to (1-β)=0.80; the second strategy in particular cannot corroborate hypotheses meaningfully. The ubiquity of both strategies arguably makes them partial causes of the confidence-crisis. But since resource-pooling would allow research groups to reach NMIN jointly, a group's individually limited resources justify neither strategy.
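
    The quoted sample sizes can be checked with off-the-shelf power calculations; the sketch below uses statsmodels and assumes one-sided tests, which is what reproduces the paper's numbers:

        # Reproducing the abstract's NMIN figures (a sketch; one-sided tests assumed).
        from statsmodels.stats.power import TTestIndPower, TTestPower

        d, alpha = 0.50, 0.05

        # Two-sample design at power 0.95 (beta = 0.05); solve_power returns the
        # per-group size, so the total minimum sample size is roughly twice that.
        n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=alpha,
                                                  power=0.95, alternative='larger')
        print("two-sample NMIN:", 2 * round(n_per_group))   # ~176

        # One-sample design at power 0.80 (beta = 0.20), as in the second strategy.
        n_one = TTestPower().solve_power(effect_size=d, alpha=alpha,
                                         power=0.80, alternative='larger')
        print("one-sample NMIN:", round(n_one))             # ~26-27, close to the quoted 27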

    The RESEARCHERS.ONE mission

    Published Date : 2018-09-15

    This article describes our motivation behind the development of RESEARCHERS.ONE, our mission, and how the new platform will fulfill this mission.  We also compare our approach with other recent reform initiatives such as post-publication peer review and open-access publication.

    Logic of Probability and Conjecture

    Under Review Since : 2018-09-15

    I introduce a formalization of probability in intensional Martin-Löf type theory (MLTT) and homotopy type theory (HoTT) which takes the concept of ‘evidence’ as primitive in judgments about probability. In parallel to the intuitionistic conception of truth, in which ‘proof’ is primitive and an assertion A is judged to be true just in case there is a proof witnessing it, here ‘evidence’ is primitive and A is judged to be probable just in case there is evidence supporting it. To formalize this approach, we regard propositions as types in MLTT and define for any proposition A a corresponding probability type Prob(A) whose inhabitants represent pieces of evidence in favor of A. Among several practical motivations for this approach, I focus here on its potential for extending meta-mathematics to include conjecture, in addition to rigorous proof, by regarding a ‘conjecture in A’ as a judgment that ‘A is probable’ on the basis of evidence. I show that the Giry monad provides a formal semantics for this system.
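
    The Giry monad mentioned in the final sentence can be sketched in the finite-support case; the sketch below is general background on the monad itself, not the paper's type-theoretic formalization, and the names unit and bind are mine:

        # A finite-support sketch of the Giry monad: 'unit' is a point mass and
        # 'bind' pushes a distribution through a kernel and marginalizes. The
        # genuine Giry monad lives on measurable spaces, not on Python dicts.
        from collections import defaultdict

        def unit(x):
            # Dirac point mass at x.
            return {x: 1.0}

        def bind(dist, kernel):
            # 'kernel' maps an outcome to a distribution over new outcomes;
            # 'bind' marginalizes over the intermediate outcome.
            out = defaultdict(float)
            for x, px in dist.items():
                for y, py in kernel(x).items():
                    out[y] += px * py
            return dict(out)

        # A fair coin, then a second coin whose bias depends on the first flip.
        coin = {"H": 0.5, "T": 0.5}
        second = lambda s: {"H": 0.9, "T": 0.1} if s == "H" else {"H": 0.2, "T": 0.8}
        print(bind(coin, second))   # {'H': 0.55, 'T': 0.45}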

    Why Did The Crisis of 2008 Happen?

    Under Review Since : 2018-09-14

    This paper is a synthesis of the author's deposition before the Financial Crisis Inquiry Commission under the Obama administration in 2010. Note that none of its ideas made it into the final report.

    In peer review we (don't) trust: How peer review's filtering poses a systemic risk to science

    Under Review Since : 2018-09-14

    This article describes how the filtering role played by peer review may actually be harmful rather than helpful to the quality of the scientific literature. We argue that, instead of trying to filter out the low-quality research, as is done by traditional journals, a better strategy is to let everything through but with an acknowledgment of the uncertain quality of what is published, as is done on the RESEARCHERS.ONE platform.  We refer to this as "scholarly mithridatism."  When researchers approach what they read with doubt rather than blind trust, they are more likely to identify errors, which protects the scientific community from the dangerous effects of error propagation, making the literature stronger rather than more fragile.  

    Adaptive inference after model selection

    Published Date : 2018-09-14

    Penalized maximum likelihood methods that perform automatic variable selection are now ubiquitous in statistical research. It is well-known, however, that these estimators are nonregular and consequently have limiting distributions that can be highly sensitive to small perturbations of the underlying generative model. This is the case even in the fixed “p” framework. Hence, the usual asymptotic methods for inference, like the bootstrap and series approximations, often perform poorly in small samples and require modification. Here, we develop locally asymptotically consistent confidence intervals for regression coefficients when estimation is done using the Adaptive LASSO (Zou, 2006) in the fixed “p” framework. We construct the confidence intervals by sandwiching the nonregular functional of interest between two smooth, data-driven, upper and lower bounds and then approximating the distribution of the bounds using the bootstrap. We leverage the smoothness of the bounds to obtain consistent inference for the nonregular functional under both fixed and local alternatives. The bounds are adaptive to the amount of underlying nonregularity in the sense that they deliver asymptotically exact coverage whenever the underlying generative model is such that the Adaptive LASSO estimators are consistent and asymptotically normal, and conservative otherwise. The resultant confidence intervals possess a certain tightness property among all regular bounds. Although we focus on the Adaptive LASSO, our approach generalizes to other penalized methods.  (Originally published as a technical report in 2014.)
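
    For readers unfamiliar with the estimator, the sketch below shows the Adaptive LASSO point estimate (Zou, 2006) via the standard feature-rescaling trick; the paper's sandwiching bounds and bootstrap are not reproduced, and the data, weight exponent, and tuning parameter are illustrative assumptions:

        # Minimal Adaptive LASSO sketch: a weighted-L1 LASSO is an ordinary LASSO
        # on features rescaled by the inverse weights.
        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        rng = np.random.default_rng(1)
        n, p = 200, 5
        X = rng.normal(size=(n, p))
        beta_true = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
        y = X @ beta_true + rng.normal(size=n)

        # Step 1: root-n-consistent pilot estimate (OLS works in the fixed-p setting).
        beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
        weights = 1.0 / np.abs(beta_ols)        # adaptive weights, gamma = 1

        # Step 2: LASSO on rescaled features, then undo the scaling.
        lam = 0.1                               # tuning parameter (illustrative)
        lasso = Lasso(alpha=lam, fit_intercept=False).fit(X / weights, y)
        beta_alasso = lasso.coef_ / weights
        print("adaptive LASSO estimate:", np.round(beta_alasso, 3))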

    A Research Prediction Market Framework

    Under Review Since : 2018-09-09

    Prediction markets are currently used in three areas: (1) economic, political, and sporting event outcomes (IEM, PredictIt, PredictWise); (2) risk evaluation, product development, and marketing (Cultivate Labs/Consensus Point); and (3) research replication (the Replication Prediction Project, the Experimental Economics Prediction Project, and Brian Nosek's latest replicability study). The last of these applications has remained closed and/or proprietary despite promising results. In this paper, I construct an open research prediction market framework to incentivize replication research and to align the motivations of research stakeholders.
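
    The abstract does not specify a market mechanism. One standard building block for such a framework is Hanson's logarithmic market scoring rule (LMSR); the sketch below shows it under that assumption, with the liquidity parameter b chosen arbitrarily:

        # LMSR sketch: prices are a softmax of outstanding shares and sum to 1,
        # so they can be read as market probabilities of each outcome.
        import math

        def lmsr_cost(q, b=100.0):
            # Cost function C(q); a trade from q to q' costs C(q') - C(q).
            return b * math.log(sum(math.exp(qi / b) for qi in q))

        def lmsr_prices(q, b=100.0):
            # Instantaneous price of each outcome.
            z = [math.exp(qi / b) for qi in q]
            s = sum(z)
            return [zi / s for zi in z]

        # Two outcomes: "study replicates" vs "study fails to replicate".
        q = [0.0, 0.0]
        print(lmsr_prices(q))                         # [0.5, 0.5] before any trades
        cost = lmsr_cost([40.0, 0.0]) - lmsr_cost(q)  # buy 40 shares of "replicates"
        print(round(cost, 2), lmsr_prices([40.0, 0.0]))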

    Patrick Matthew (1790 - 1874) and Natural Selection. Historical News about the Forgotten

    Under Review Since : 2018-10-05

    Ever since Charles Darwin admitted Patrick Matthew's priority in 1860, the latter's book On Naval Timber and Arboriculture has been thought to contain the first clear and complete anticipation of the idea of (macro-)evolution through natural selection. Most publications dealing with Matthew, however, merely repeated the little that was known about him through articles by Walther May and William Calman and reprinted the same pieces of correspondence and excerpts from Matthew's book. Beyond this repetition of old lore, stronger claims simply jumped to the conclusion that Charles Darwin plagiarised Patrick Matthew, from facts that are equivocal and have never been scrutinised within their historical context. This publication attempts to establish a proper historiography of Patrick Matthew where there is currently mere repetition of old tales or tossing around of tall claims. The tall claims about Matthew raise questions that need to be properly addressed and put in historical perspective before tentative conclusions can be drawn.

     A reader who knows a good deal about evolutionary theory and its history but nothing about Patrick Matthew will naturally ask the following questions when confronted with tall claims: Who, what, where, how? Who was this Patrick Matthew? What exactly did he anticipate? Where did he publish this anticipation? How did his contemporaries receive it? On careful inspection of the evidence and its context, none of these questions can be answered in a simple and definitive way. This should come as no surprise after 150 years without a proper historiography of Patrick Matthew and, instead, a mixture of repeated lore and unwarranted conclusions. The following therefore asks of the reader the patience to endure the suspense of not knowing some things for sure, because the historical record is incomplete, and the courtesy to excuse an unknown author enlarging on an unknown protagonist. On the other hand, it highlights how many interesting historical inquiries await talented students. Each question opens a historical panorama when addressed with an open mind rather than prejudices, orthodox or revisionist.

     Many archival sources given below are new to the historical canon of evolutionary biology. They are quoted verbatim when that is the most succinct way to support an argument. Each chapter is preceded by a captivating vignette, in the present tense and a different font, in order to whet the reader's appetite. The style of the vignettes also differs from the other sections in that citations are put into footnotes, to keep the flow of reading uninterrupted. Within the chapters the citation style is scientific, to ease tracking and cross-checking for interested scholars who want to follow up with their own inquiries. Some dramatisation has been introduced in some of the vignettes, like the one that follows below. These extrapolations do not affect the historical facts, and they were written with sympathy for all the protagonists.

     The heading of each chapter is a question, and the chapters are referred to as Q1 to Q9. The chapters themselves are subdivided into a summary; the evidence, which will seem excessive to some readers (they may skip it) and insufficient to others (they may follow up with their own inquiries); and a conclusion. As an exception, Q3-Q5 are summarised and concluded together, because as a group they form a comparison of the respective transmutation mechanisms of Patrick Matthew, Charles Darwin, and Alfred Wallace. [Chapters Q3-Q5 are based on an article previously published in the Biological Journal of the Linnean Society 123(4).]

     Part 1 addresses Patrick Matthew's life. In particular, Q1 refutes the widespread myth that Patrick Matthew studied medicine at the University of Edinburgh until his father died in 1807. Q2 shows that this student of medicine was a namesake from Newbigging. Part 2 addresses Patrick Matthew's book On Naval Timber and Arboriculture. In particular, Q3-Q5 compare the transmutation mechanisms of Matthew, Darwin and Wallace. This comparison shows that the similarities between their schemes are superficial, amounting to no more than that each includes natural selection in some form and leads to species transmutation in some other way. Without the retrospection that inflates the importance of natural selection over all other parts of the theories in question, they are as different as, say, Cuvier's and Matthew's, or Lyell's and Darwin's theories. Q6 reveals what further information can be gleaned from the book about its kludgy composition. Part 3 sheds light on the reception of Matthew's book by his contemporaries, refuting both the myth of its utter non-reception and the opposite myth of its perfect reception. In particular, Q7-Q9 look at the roles of three popularisers of science who have been claimed to have communicated Matthew's ideas on natural selection and species transformation to Charles Darwin and Alfred Wallace respectively. First, Robert Chambers probably never read Matthew's book. Second, John Loudon may or may not have been the author of an anonymous review of Matthew's book in the Gardener's Magazine. The only remaining question is why Darwin missed the short passage about the origin of species in that review; as it starts by recounting matters of naval timber, shipbuilding and other issues of no interest to Darwin, it is easy to see how he might have inadvertently skipped the crucial passage. Third, Prideaux Selby read Matthew's book but did not understand Matthew's idea of ecological competition, which was a prerequisite for comprehending his evolutionary ideas. Even if he had understood it, it is hard to see how he could have communicated the intelligence to Wallace in the Malay Archipelago.

     

    Modeling of multivariate spatial extremes

    Under Review Since : 2018-09-06

    Extreme values are by definition rare, and therefore a spatial analysis of extremes is attractive because a spatial analysis makes full use of the data by pooling information across nearby locations. In many cases, there are several dependent processes with similar spatial patterns. In this paper, we propose the first multivariate spatial models to simultaneously analyze several processes. Using a multivariate model, we are able to estimate joint exceedance probabilities for several processes, improve spatial interpolation by exploiting dependence between processes, and improve estimation of extreme quantiles by borrowing strength across processes. We propose models for separable and non-separable, and spatially continuous and discontinuous processes. The method is applied to French temperature data, where we find an increase in the extreme temperatures over time for much of the country.
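
    To see why the joint exceedance probabilities mentioned above require modeling cross-process dependence, consider a toy Monte Carlo; the bivariate Gaussian dependence below is purely illustrative and is not the paper's spatial extremes model:

        # Joint exceedance under dependence vs the independence approximation.
        import numpy as np

        rng = np.random.default_rng(2)
        rho = 0.7                                   # cross-process dependence (assumed)
        cov = np.array([[1.0, rho], [rho, 1.0]])
        Z = rng.multivariate_normal(mean=[0, 0], cov=cov, size=100_000)

        u = 2.0                                     # common high threshold
        joint = np.mean((Z[:, 0] > u) & (Z[:, 1] > u))
        indep = np.mean(Z[:, 0] > u) * np.mean(Z[:, 1] > u)
        print(f"joint exceedance {joint:.5f} vs independence approximation {indep:.5f}")
        # Ignoring the dependence between processes understates the joint risk.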

    Empirical priors and posterior concentration rates for a monotone density

    Under Review Since : 2018-09-04

    In a Bayesian context, prior specification for inference on monotone densities is conceptually straightforward, but proving posterior convergence theorems is complicated by the fact that desirable prior concentration properties often are not satisfied. In this paper, I first develop a new prior designed specifically to satisfy an empirical version of the prior concentration property, and then I give sufficient conditions on the prior inputs such that the corresponding empirical Bayes posterior concentrates around the true monotone density at nearly the optimal minimax rate. Numerical illustrations also reveal the practical benefits of the proposed empirical Bayes approach compared to Dirichlet process mixtures.
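
    For context, the minimax rate referred to is the classical cube-root rate for monotone densities; "nearly" typically indicates an extra logarithmic factor (stated here as general background, not taken from the paper):

        \epsilon_n = \left( \frac{n}{\log n} \right)^{-1/3} \quad \text{versus the minimax rate } n^{-1/3}.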

    Gibbs posterior inference on value-at-risk

    Under Review Since : 2018-09-04

    Accurate estimation of value-at-risk (VaR) and assessment of associated uncertainty is crucial for both insurers and regulators, particularly in Europe. Existing approaches link data and VaR indirectly by first linking data to the parameter of a probability model, and then expressing VaR as a function of that parameter. This indirect approach exposes the insurer to model misspecification bias or estimation inefficiency, depending on whether the parameter is finite- or infinite-dimensional. In this paper, we link data and VaR directly via what we call a discrepancy function, and this leads naturally to a Gibbs posterior distribution for VaR that does not suffer from the aforementioned biases and inefficiencies. Asymptotic consistency and root-n concentration rate of the Gibbs posterior are established, and simulations highlight its superior finite-sample performance compared to other approaches.
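
    The Gibbs posterior construction can be sketched for a toy case. Below, the discrepancy is assumed to be the standard quantile ("check") loss, whose risk minimizer is the α-quantile, i.e., VaR; the paper's exact discrepancy and learning-rate choice may differ, and the lognormal losses, learning rate, and proposal scale are all illustrative assumptions:

        # Gibbs posterior for VaR: empirical risk replaces a negative log-likelihood.
        import numpy as np

        rng = np.random.default_rng(0)
        losses = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # simulated loss data
        alpha, omega = 0.95, 1.0    # risk level and learning rate (tuning it is its own problem)

        def check_loss(u, a):
            # Quantile ("check") loss whose risk minimizer is the a-quantile.
            return u * (a - (u < 0))

        def neg_log_post(theta):
            # Gibbs "posterior" kernel: omega * n * empirical risk, flat prior on theta > 0.
            if theta <= 0:
                return np.inf
            return omega * np.sum(check_loss(losses - theta, alpha))

        # Random-walk Metropolis over theta = VaR.
        theta = float(np.quantile(losses, alpha))   # start at the empirical quantile
        nll = neg_log_post(theta)
        draws = []
        for _ in range(20000):
            prop = theta + rng.normal(scale=0.2)
            nll_prop = neg_log_post(prop)
            if rng.random() < np.exp(min(0.0, nll - nll_prop)):
                theta, nll = prop, nll_prop
            draws.append(theta)

        post = np.array(draws[5000:])               # drop burn-in
        print("posterior mean VaR:", post.mean(),
              "95% credible interval:", np.quantile(post, [0.025, 0.975]))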

    Homotopy Equivalence as FOLDS Equivalence

    Under Review Since : 2018-09-02

    We prove an observation of Makkai that FOLDS equivalence coincides with homotopy equivalence in the case of semi-simplicial sets.

    The Art of The Election: A Social Media History of the 2016 Presidential Race

    Under Review Since : 2018-09-01

    The book is 700 pages long, comprising Donald Trump's tweets from June 2015 to November 2016, with footnotes, covering 70-80% of the tweets, that explain the context of each. The book also includes a 100-page bibliography.

    It is highly likely that Trump would not have been elected President were it not for social media. This was unprecedented: it was the first time a presidential candidate used a social network to get his message out directly to voters and, moreover, to shape the media feedback loop. His tweets became news. This is primary source material on the 2016 election. No need for narratives, outside "experts," or political "science."

    The file is too large to post on this website, but you can download the book at the following link:

    https://www.dropbox.com/s/bxvsh7eqh2ueq6j/Trump%20Book.docx?dl=0

    Keywords and phrases: 2016, book, Trump, election, social media.

    The Evolutionary Theory Of Value

    Under Review Since : 2018-09-01

    We propose the first economic theory of value that actually works. We explain the evolutionary causes of trade and demonstrate how goods have value from the evolutionary perspective, and how this value is increased by trade. This "Darwinian" value of goods exists before humans assign monetary value (or any other value estimate) to traded goods. We propose an objective value estimate expressed in energy units.

    The Fundamental Principle of Probability: Resolving the Replication Crisis with Skin in the Game

    Under Review Since : 2018-09-17

    I make the distinction between academic probabilities, which are not rooted in reality and thus have no tangible real-world meaning, and real probabilities, which attain a real-world meaning as the odds that the subject asserting the probabilities is forced to accept for a bet against the stated outcome.  With this I discuss how the replication crisis can be resolved easily by requiring that probabilities published in the scientific literature are real, instead of academic.  At present, all probabilities and derivatives that appear in published work, such as P-values, Bayes factors, confidence intervals, etc., are the result of academic probabilities, which are not useful for making meaningful assertions about the real world.
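
    A toy computation makes the betting interpretation concrete. Asserting P(A) = 0.95 as a real probability means standing ready to sell, for $0.05, a ticket that pays $1 if A fails; the numbers below (an effect that actually replicates only half the time) are illustrative assumptions:

        # Expected profit of a skeptic betting against an overstated probability.
        import numpy as np

        rng = np.random.default_rng(3)
        published_p, true_p = 0.95, 0.50      # asserted vs actual replication probability
        n_trials = 100_000

        # The skeptic pays $0.05 per ticket and collects $1 whenever A fails.
        a_holds = rng.random(n_trials) < true_p
        skeptic_profit = np.where(a_holds, -(1 - published_p), 1 - (1 - published_p))
        print("skeptic's average profit per $0.05 ticket:", skeptic_profit.mean())  # ~0.45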

    The Logic of Typicality

    Under Review Since : 2018-08-30

    The notion of typicality appears in scientific theories, philosophical arguments, mathematical inquiry, and everyday reasoning. Typicality is invoked in statistical mechanics to explain the behavior of gases. It is also invoked in quantum mechanics to explain the appearance of quantum probabilities. Typicality plays an implicit role in non-rigorous mathematical inquiry, as when a mathematician forms a conjecture based on personal experience of what seems typical in a given situation. Less formally, the language of typicality is a staple of the common parlance: we often claim that certain things are, or are not, typical. But despite the prominence of typicality in science, philosophy, mathematics, and everyday discourse, no formal logics for typicality have been proposed. In this paper, we propose two formal systems for reasoning about typicality. One system is based on propositional logic: it can be understood as formalizing objective facts about what is and is not typical. The other system is based on the logic of intuitionistic type theory: it can be understood as formalizing subjective judgments about typicality.

    Is statistics meeting the needs of science?

    Under Review Since : 2018-08-28

    Publication of scientific research all but requires a supporting statistical analysis, anointing statisticians the de facto gatekeepers of modern scientific discovery. While the potential of statistics for providing scientific insights is undeniable, there is a crisis in the scientific community due to poor statistical practice. Unfortunately, widespread calls to action have not been effective, in part because of statisticians' tendency to make statistics appear simple. We argue that statistics can meet the needs of science only by empowering scientists to make sound judgments that account for both the nuances of the application and the inherent complexity fundamental to effective statistical practice. In particular, we emphasize a set of statistical principles that scientists can adapt to their ever-expanding scope of problems.

    Rethinking probabilistic prediction: lessons learned from the 2016 U.S. presidential election

    Under Review Since : 2018-09-08

    Whether the predictions put forth prior to the 2016 U.S. presidential election were right or wrong is a question that led to much debate. But rather than focusing on right or wrong, we analyze the 2016 predictions with respect to a core set of "effectiveness principles," and conclude that they were ineffective in conveying the uncertainty behind their assessments. Along the way, we extract key insights that will help to avoid, in future elections, the systematic errors that led to the overly precise and overconfident predictions of 2016. Specifically, we highlight shortcomings of the classical interpretations of probability and of its communication in the form of predictions, and present an alternative approach with two important features.  First, our recommended predictions are safer in that they come with certain guarantees on the probability of an erroneous prediction; second, our approach easily and naturally reflects the (possibly substantial) uncertainty about the model by outputting plausibilities instead of probabilities.