Frank Zenker
Lund University, Sweden

Website: frankzenker.de

Bio/Interests: I work in the philosophy of science and social epistemology, with particular interests in theory change, statistical inference, and the theory and practice of argumentation, among other topics.

Email: fzenker@gmail.com

Under Public Peer Review

  • What a t-test easily hides

    • Frank Zenker
    • Erich Witte
    Currently under review.

    Keywords: corroboration quality, effect-size, Fisher, induction quality, minimum sample size, Neyman-Pearson test-theory, statistical error, test-power, t-test

    To justify the effort of developing a theoretical construct, a theoretician needs empirical data that support a non-random effect with a sufficiently high replication-probability. To establish such effects statistically, researchers (rightly) rely on a t-test. But many pursue questionable strategies that lower the cost of data-collection. Our paper reconstructs two such strategies. Both reduce the minimum sample-size (NMIN) sufficing under conventional errors (α, β) to register a given effect-size (d) as a statistically significant non-random data signature. The first strategy increases the β-error; the second treats the control-group as a constant, thereby collapsing a two-sample t-test into its one-sample version. (A two-sample t-test for d=0.50 under α=β=0.05 with NMIN=176, for instance, becomes a one-sample t-test under α=0.05, β=0.20 with NMIN=27.) Not only does this decrease the replication-probability of data from (1-β)=0.95 to (1-β)=0.80; the second strategy in particular cannot corroborate hypotheses meaningfully. The ubiquity of both strategies arguably makes them partial causes of the confidence-crisis. But since resource-pooling would allow research groups to reach NMIN jointly, a group's individually limited resources justify neither strategy.
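    The abstract's sample-size contrast can be checked with a short Monte-Carlo sketch (not the authors' code): simulate the two-sample design with N=176 (88 per group) at a true effect of d=0.50 and the one-sample shortcut with N=27, and estimate each design's power empirically. The one-sided critical t-values are hardcoded from standard tables, and all function names are illustrative.

    ```python
    import random
    import statistics

    # One-sided critical t-values for alpha = 0.05, hardcoded from standard
    # tables so the sketch needs no external dependencies: df -> t_{0.95, df}.
    T_CRIT = {174: 1.6537, 26: 1.7056}

    def two_sample_t(x, y):
        """Two-sample t statistic with pooled variance."""
        nx, ny = len(x), len(y)
        sp2 = ((nx - 1) * statistics.variance(x)
               + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
        return (statistics.mean(x) - statistics.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

    def one_sample_t(x, mu0=0.0):
        """One-sample t statistic against a fixed constant mu0."""
        n = len(x)
        return (statistics.mean(x) - mu0) / (statistics.stdev(x) / n ** 0.5)

    def simulated_power(experiment, reps=5000):
        """Fraction of simulated experiments that reject the null."""
        return sum(experiment() for _ in range(reps)) / reps

    random.seed(1)
    d = 0.5  # true standardized effect size

    # Two-sample t-test: NMIN = 176 (88 per group), alpha = beta = 0.05.
    power_two = simulated_power(
        lambda: two_sample_t([random.gauss(d, 1) for _ in range(88)],
                             [random.gauss(0, 1) for _ in range(88)]) > T_CRIT[174])

    # Second strategy: control-group treated as the constant 0, so the design
    # collapses into a one-sample t-test with NMIN = 27, alpha = 0.05, beta = 0.20.
    power_one = simulated_power(
        lambda: one_sample_t([random.gauss(d, 1) for _ in range(27)]) > T_CRIT[26])

    print(f"two-sample power ~ {power_two:.3f}, one-sample power ~ {power_one:.3f}")
    ```

    The simulated powers land near (1-β)=0.95 and (1-β)=0.80 respectively, illustrating how the one-sample shortcut buys a sixfold reduction in sample size by accepting a markedly lower replication-probability.
    
    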

    © 2019 RESEARCHERS.ONE