Online simulations of one-sided statistical tests

These simulations are performed in JavaScript and are therefore potentially less robust than ones done in R or an equivalent language with specialized statistical libraries, including ones for generating random numbers from a specified distribution. Still, their accuracy is good enough that if you do not want to bother installing R to run our R simulations, these should be a fine substitute.

Effect of predictions on the type I error of one-sided tests

In this simple simulation you make a prediction about the outcome of the test: a positive effect, a negative effect, or any effect at all. You also select the desired significance threshold for making a claim of superiority (e.g. daily consumption of substance X improves heart function by Y). The selected number of random values is then drawn from a population with a true mean of zero and normally distributed error, and plotted for you.
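As a rough illustration of the drawing step, here is a minimal sketch in R (the companion language mentioned above) rather than the page's actual JavaScript; the variable names, the number of experiments, and the error standard deviation are all made up for the example:

```r
# Each simulated "experiment" is a single draw from a population whose
# true mean is zero, with normally distributed error.
set.seed(42)           # for reproducibility of the sketch
n_experiments <- 1000  # number of simulated experiments (placeholder value)
sigma <- 1             # standard deviation of the error term (assumed)

outcomes <- rnorm(n_experiments, mean = 0, sd = sigma)
hist(outcomes, breaks = 40,
     main = "Simulated experiment outcomes", xlab = "Observed effect")
```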

Each random number represents the outcome of an experiment. The outcomes are compared to the critical boundary calculated from the chosen significance level. The number of simulated outcomes falling to the right of that boundary is then reported, demonstrating that the cutoff is placed so that the proportion of observations to its right closely matches the expected proportion of false rejections under the null (e.g. for α = 0.05 that proportion will be about 5% of all outcomes).
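In R terms, the boundary and the proportion to its right might look like the sketch below; again this is an illustration under assumed values (error SD of 1, a large number of simulated experiments), not the page's JavaScript code:

```r
alpha <- 0.05                                  # chosen significance threshold
sigma <- 1                                     # assumed error standard deviation
outcomes <- rnorm(1e5, mean = 0, sd = sigma)   # simulated experiment outcomes

# One-sided critical boundary under a true mean of zero
critical <- qnorm(1 - alpha, mean = 0, sd = sigma)

# Proportion of outcomes to the right of the boundary; with many simulated
# experiments this should be close to alpha (here about 0.05)
mean(outcomes > critical)
```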

[Interactive simulation widget: number of experiments]

Given that your prediction has no effect on the simulation (as can be checked in the code), this is exactly what one would expect. The sampling space remains unaffected, and the type I error guarantee holds in the worst-case scenario (a true effect of zero) for the claim of superiority, regardless of any expectation or prediction made explicit before the test. Also note that α can be chosen at any point in time without any change to the outcomes of the tests. It is usually agreed upon beforehand to gain support and commitment from stakeholders, but there is no statistical argument against setting it afterwards.
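To see that neither the prediction nor the timing of the α choice enters the calculation, one can apply several thresholds to the same simulated data after the fact; the observed proportion of false rejections tracks each nominal α. The R sketch below is again illustrative, with made-up sample sizes and thresholds:

```r
set.seed(7)
outcomes <- rnorm(1e6, mean = 0, sd = 1)  # true effect is zero

# Apply several significance thresholds to the same data, "chosen" only after
# the outcomes were drawn; no prediction appears anywhere in the calculation.
for (alpha in c(0.10, 0.05, 0.01)) {
  critical <- qnorm(1 - alpha)            # one-sided critical boundary
  observed <- mean(outcomes > critical)   # observed false rejection rate
  cat(sprintf("alpha = %.2f  observed proportion = %.4f\n", alpha, observed))
}
```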