Online simulations of one-sided statistical tests
Effect of predictions on the type I error of one-sided tests
In this simple simulation you make a prediction about the outcome of the test: a positive effect, a negative effect, or any effect at all. You also select the desired significance threshold for making a claim of superiority (e.g. daily consumption of substance X improves heart function by Y). The selected number of random draws is then taken from a population with a true mean of zero and normally distributed error, and plotted for you.
Each random number represents the outcome of one experiment. The outcomes are compared against the critical boundary computed from the chosen significance level. The simulation then reports how many outcomes fall to the right of that boundary, demonstrating that the cutoff is placed so that the proportion of observations beyond it closely matches the expected proportion of false rejections under the null (e.g. for α = 0.05 about 5% of all outcomes).
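The procedure above can be sketched in a few lines of Python. This is a minimal reproduction of the simulation's logic, not the page's actual code; the sample size, standard-normal error, and α = 0.05 are assumed for illustration.

```python
import random
from statistics import NormalDist

random.seed(42)                 # for reproducibility of this sketch
n, alpha = 10_000, 0.05         # assumed number of experiments and threshold

# Critical boundary for the one-sided test of H0: mu <= 0
z_crit = NormalDist().inv_cdf(1 - alpha)   # about 1.645 for alpha = 0.05

# Each draw stands in for the outcome of one experiment under mu = 0
outcomes = [random.gauss(0, 1) for _ in range(n)]

# Fraction of outcomes to the right of the critical boundary:
# the observed rate of false rejections of H0
false_rejections = sum(x > z_crit for x in outcomes) / n
print(f"critical Z: {z_crit:.3f}, observed false-rejection rate: {false_rejections:.3f}")
```

With a large number of simulated experiments the observed rate settles near the nominal α, which is exactly the behavior the interactive plot illustrates.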
Simulated experiments with normally-distributed error
Results from the simulation with μ = 0 (values are filled in by the interactive simulation):
Desired (nominal) probability of false rejections: [interactive output]
Observed probability of false rejection of the null H0: μ ≤ 0, given the critical Z value: [interactive output]
Given that your prediction has no effect on the simulation (as can be checked in the code), this is exactly what one would expect. The sampling space remains unaffected, and the type I error guarantee holds in the worst case (a true effect of zero) for the claim of superiority, regardless of any expectation or prediction made explicit before the test. Note also that α can be chosen at any point in time without changing the outcomes of the tests. It is usually agreed upon beforehand to gain support and commitment from stakeholders, but there is no statistical argument against setting it afterwards.
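The claim that α can be picked after the data are drawn can be checked directly: the draws do not depend on α, so evaluating the same outcomes against several thresholds chosen "after the fact" still yields false-rejection rates close to each nominal level. A small sketch, with an assumed sample size and standard-normal error:

```python
import random
from statistics import NormalDist

random.seed(0)                  # reproducibility of this sketch
# Outcomes under mu = 0, drawn before any alpha is chosen
outcomes = [random.gauss(0, 1) for _ in range(100_000)]

rates = {}
for alpha in (0.10, 0.05, 0.01):            # thresholds chosen afterwards
    z_crit = NormalDist().inv_cdf(1 - alpha)
    rates[alpha] = sum(x > z_crit for x in outcomes) / len(outcomes)
    print(f"alpha = {alpha:.2f}: observed false-rejection rate = {rates[alpha]:.4f}")
```

Each observed rate tracks its nominal α, illustrating that the timing of the choice has no bearing on the sampling distribution.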