In 1925, Fisher published *Statistical Methods for Research Workers*, where he explained the concept of statistical significance. Somewhat arbitrarily, Fisher chose to define statistical significance as a difference that had less than a .05 probability of occurring by random chance (in technical terms, a p-value below .05).

Fast forward nearly a century, and many researchers believe Fisher’s choice of .05 has contributed to a crisis in science: researchers have shown that fewer than half of published psychology findings held up when replicated, and about 40% of experimental economics findings disappeared when the experiments were repeated.

What’s to be done? One new proposal is to use .005 instead as the threshold for “significant” evidence. It might not solve every problem, but it should help.
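To make the two thresholds concrete, here is a small illustration (a hypothetical coin-flip experiment, not one from the article): an exact two-sided binomial p-value for 61 heads in 100 flips of a coin assumed fair under the null hypothesis, checked against both cutoffs.

```python
from math import comb

def binom_two_sided_p(k, n):
    """Exact two-sided p-value for observing k or more successes in n
    fair-coin trials (doubling the one-sided upper tail)."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical result: 61 heads out of 100 flips.
p_value = binom_two_sided_p(61, 100)

print(f"p = {p_value:.4f}")
print("significant at .05? ", p_value < 0.05)   # Fisher's conventional threshold
print("significant at .005?", p_value < 0.005)  # the proposed stricter threshold
```

A result like this clears the traditional .05 bar but fails the proposed .005 one, which is exactly the kind of borderline finding the stricter threshold is meant to filter out.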
