Under Review

What a t-test easily hides

Keywords: corroboration quality, effect-size, Fisher, induction quality, minimum sample size, Neyman-Pearson test-theory, statistical error, test-power, t-test

Cite as:

Frank Zenker and Erich Witte (2018). What a t-test easily hides. RESEARCHERS.ONE, https://www.researchers.one/article/2018-09-19.

Abstract:

To justify the effort of developing a theoretical construct, a theoretician needs empirical data that support a non-random effect of sufficiently high replication-probability. To establish such effects statistically, researchers (rightly) rely on a t-test. But many pursue questionable strategies that lower the cost of data-collection. Our paper reconstructs two such strategies. Both reduce the minimum sample-size (NMIN) that suffices under conventional error rates (α, β) to register a given effect-size (d) as a statistically significant non-random data signature. The first strategy increases the β-error; the second treats the control-group as a constant, thereby collapsing a two-sample t-test into its one-sample version. (A two-sample t-test for d=0.50 under α=0.05, β=0.05 with NMIN=176, for instance, becomes a one-sample t-test under α=0.05, β=0.20 with NMIN=27.) Not only does this decrease the replication-probability of the data from (1-β)=0.95 to (1-β)=0.80; the second strategy, in particular, cannot corroborate hypotheses meaningfully. The ubiquity of both strategies arguably makes them partial causes of the confidence-crisis. But since resource-pooling would allow research groups to reach NMIN jointly, a group's individually limited resources justify neither strategy.
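
The NMIN figures in the abstract can be checked with a standard power analysis. Below is a minimal sketch in Python using statsmodels' TTestIndPower and TTestPower; treating the tests as one-sided is our assumption (the abstract does not state sidedness), but it is the reading under which the quoted figures come out.

    # Sketch: reproduce the abstract's minimum sample sizes (NMIN)
    # for d = 0.50 under the stated error rates. One-sided tests
    # ("larger") are an assumption, not stated in the abstract.
    import math
    from statsmodels.stats.power import TTestIndPower, TTestPower

    d = 0.50        # effect size (Cohen's d)
    alpha = 0.05    # type-I error

    # Two-sample t-test at power (1 - beta) = 0.95.
    # solve_power returns the per-group n; double it for the total NMIN.
    n_per_group = TTestIndPower().solve_power(
        effect_size=d, alpha=alpha, power=0.95, alternative="larger")
    print("two-sample NMIN ~", 2 * math.ceil(n_per_group))  # ~176 (88 per group)

    # One-sample t-test at the weaker power (1 - beta) = 0.80.
    n_one = TTestPower().solve_power(
        effect_size=d, alpha=alpha, power=0.80, alternative="larger")
    print("one-sample NMIN ~", math.ceil(n_one))            # ~27

Run together, the two moves the abstract describes (raising β from 0.05 to 0.20 and dropping the control-group by switching to the one-sample test) shrink the required sample by a factor of roughly six, which is what makes them attractive and, per the paper, questionable.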