
They're all false. Here's why:

1. You have disproved the null hypothesis (the hypothesis that there is no statistical difference in recovery times).

[false] You can never disprove the null hypothesis within classical statistics. You can only reject it on statistical grounds; disproof, like proof, requires a deductive argument.

2. You have obtained more evidence against the null hypothesis than if the p-value were p=0.045.

[false] All p-values below the alpha-criterion are treated identically. Otherwise, the alpha-criterion would lose its meaning: it would no longer set the error rate if you treated p-values below the alpha-criterion differently based on how much smaller they were.
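
To make the binary nature of the decision concrete, here is a minimal sketch (the function name and the 0.05 alpha value are illustrative, not taken from any standard library):

```python
def decision(p, alpha=0.05):
    """Classical significance decision: the output is binary.

    Any p-value at or below the pre-set alpha-criterion triggers the
    same action; the rule does not grade 'how significant' a result is.
    """
    return "reject H0" if p <= alpha else "fail to reject H0"

print(decision(0.015))  # reject H0
print(decision(0.045))  # reject H0 -- the identical decision
print(decision(0.060))  # fail to reject H0
```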

3. You have found the probability of the null hypothesis being true.

[false] You cannot define the probabilities of hypotheses within classical statistics. This can only be done within the Bayesian definition of probability.
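
To illustrate what the Bayesian route involves, here is a toy sketch with two hypothetical point hypotheses. All of the numbers (observed difference, standard error, priors) are invented for illustration; the point is that the posterior probability of H0 depends on a prior, which classical statistics never supplies:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a Normal(mu, sigma) distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

observed_diff = 2.0  # hypothetical observed mean difference in recovery time
se = 1.0             # hypothetical standard error of that difference

# Likelihood of the observed difference under each point hypothesis
lik_h0 = normal_pdf(observed_diff, 0.0, se)  # H0: true difference is 0
lik_h1 = normal_pdf(observed_diff, 2.0, se)  # H1: true difference is 2

posteriors = {}
for prior_h0 in (0.5, 0.9):
    # Bayes' rule: P(H0 | data) requires P(H0), the prior
    post_h0 = (lik_h0 * prior_h0) / (lik_h0 * prior_h0 + lik_h1 * (1 - prior_h0))
    posteriors[prior_h0] = post_h0
    print(f"prior P(H0) = {prior_h0}: posterior P(H0 | data) = {post_h0:.3f}")
```

Same data, different priors, different posterior probabilities: no p-value alone could deliver P(H0 | data).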

4. You have proved your hypothesis (that there is a reliable statistical difference in recovery time).

[false] You can never claim 'proof' within a statistical argument. Proof is reserved for deductive arguments.

5. From the p-value, you can deduce the probability of the experimental hypothesis being true.

[false] You cannot define the probabilities of hypotheses within classical statistics. This can only be done within the Bayesian definition of probability.

6. You are able to lower your alpha-criterion and report this effect as significant at the 0.02 level.

[false] This misunderstands the definition of the alpha-criterion. It must be selected before the experiment is run, so that any results judged against that pre-set criterion will have a Type-I error rate equal to the alpha-criterion. Changing the criterion after the experimental results are in nullifies this guarantee. [also see 2]

7. You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.

[false] This suggests you know the probability in a particular case (this particular decision to reject). You cannot define probabilities for single cases within classical statistics. This can only be done within the Bayesian definition of probability.

8. You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 98.5% of occasions.

[false] The p-value does not tell you about this kind of reliability. If you knew the true improvement in time-to-recovery produced by your new drug (rather than just the experimentally measured value), a power calculation could tell you this. That calculation would be based on the known effect size, the sample size, and the pre-set alpha-criterion, not on the p-value from any one dataset.
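
A simulation sketch of that distinction, with an assumed (entirely hypothetical) true effect size: the long-run rate of significant results is the power, fixed by the effect size, sample size, and alpha-criterion, and it is not recoverable from any single p-value:

```python
import math
import random

random.seed(2)

def two_sample_p(xs, ys):
    """Two-sided two-sample z-test p-value (normal approximation)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
true_effect = 0.5   # assumed KNOWN improvement, in standard-deviation units
n = 50              # per-group sample size
trials = 2000

significant = 0
for _ in range(trials):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    if two_sample_p(control, treated) < alpha:
        significant += 1

power = significant / trials
print(f"estimated power: {power:.2f}")  # roughly 0.7 for these settings
```

Change the effect size, the sample size, or the alpha-criterion and the replication rate changes; the p-value from one dataset never enters the calculation.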

9. You have found the probability of the alternative hypothesis being false.

[false] You cannot define the probabilities of hypotheses within classical statistics. This can only be done within the Bayesian definition of probability.

10. You have computed the data analog of the Type-I error rate, meaning there is a 1.5% chance you will incorrectly reject the null hypothesis when it is actually true.

[false] This conflates the alpha-criterion with the p-value. The alpha-criterion tells you your error rate if you maintain that criterion across experiments. It has no statistical meaning within a single experiment, because probabilities are defined as frequencies within classical statistics. You can only talk about the probabilities of single events within the Bayesian definition of probability.
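
A simulation sketch of the frequency interpretation: generate many datasets in which the null hypothesis is true, reject whenever p falls below the pre-set alpha-criterion, and the long-run false-rejection rate comes out near alpha (0.05 here), regardless of any individual p-value such as 0.015:

```python
import math
import random

random.seed(3)

def one_sample_p(xs):
    """Two-sided z-test of 'mean = 0' with known sigma = 1."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
trials = 5000

false_rejections = 0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]  # null is true
    if one_sample_p(sample) < alpha:
        false_rejections += 1

rate = false_rejections / trials
print(f"Type-I error rate: {rate:.3f}")  # close to alpha, not to any one p-value
```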
