Project OneSided: Setting the record straight on one-sided statistical tests
OneSided.org is a project dedicated to making the case for the widespread adoption of one-sided statistical tests wherever directional claims are made in scientific and applied research. Its overall goal is to counter the widespread misconceptions and confusion surrounding one-tailed tests of significance and one-sided confidence intervals across many scientific and applied disciplines: pharmacology, clinical trials, medical research, economics, psychiatry, psychology, business research, online controlled experiments, and others, and to change how these tools are used in practice. A detailed list of goals can be found here.
Why should I care about one-sided tests?
A lot of research is based on data, and the conclusions reached, discoveries made, and decisions taken as a result of such research depend on the quality of the statistical analysis and reporting. According to our research, two-sided tests are used in many cases where a one-sided test is the analysis needed to support the claims made. This leads to a skewed perception of the data and, in many cases, to poor decisions based on a perceived risk that is higher than the actual one.
In a medical trial this might mean that the harmfulness or efficacy of a drug is underestimated; in environmental protection and control it may lead to a false belief that the environment is safe when it is not; in business it may lead to missed opportunities for growth or to underappreciation of risks.
In all these and other cases it also results in undue expenditure of resources on experimentation, unethical exposure of more subjects than necessary to a new drug or substance, and similar waste, assuming fixed levels of acceptable risk.
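The sample-size point can be made concrete with the standard normal-approximation formula for a two-sample test, n = 2·((z_α + z_β)/δ)² per group, where δ is the standardized effect size. The α, power, and effect-size values below are illustrative assumptions, not figures from this site; the sketch only shows how the one-sided design needs fewer subjects at the same risk levels.

```python
# Illustrative sketch: per-group sample size for a two-sample z-test,
# comparing a one-sided and a two-sided design at the same alpha and power.
# alpha, power, and delta below are arbitrary example values.
from statistics import NormalDist

def sample_size(alpha: float, power: float, delta: float, two_sided: bool) -> float:
    """Approximate per-group n to detect standardized effect `delta`."""
    nd = NormalDist()
    # Two-sided tests split alpha across both tails, so the critical z is larger.
    z_alpha = nd.inv_cdf(1 - alpha / 2) if two_sided else nd.inv_cdf(1 - alpha)
    z_beta = nd.inv_cdf(power)
    return 2 * ((z_alpha + z_beta) / delta) ** 2

n_two = sample_size(alpha=0.05, power=0.80, delta=0.5, two_sided=True)
n_one = sample_size(alpha=0.05, power=0.80, delta=0.5, two_sided=False)
savings = 1 - n_one / n_two  # roughly a 20% reduction in subjects per group
```

At α = 0.05 and 80% power, the one-sided design requires about a fifth fewer subjects per group than the two-sided one, which is the "undue expenditure" and unnecessary exposure referred to above.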
All of this happens due to the poor understanding and representation of one-sided tests in many statistical papers, textbooks, and guidelines, and by some practicing statistical consultants (some cases are documented here). OneSided.org aims to change that.
What is available on this site
We publish articles explaining one-sided statistical tests, resolving paradoxes, and demonstrating the need for one-sided tests of significance and confidence intervals when claims corresponding to directional hypotheses are made. There are interactive simulations and code for simulations you can run yourself. You will also find links to related literature, both for and against one-sided tests.
The goal of all the above is to dispel the myths, to clear up the confusion and misunderstandings surrounding one-sided testing, and to end the misuse of two-sided tests in support of directional claims.
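The relationship between the two kinds of test can be seen in a few lines of code. The numbers below are made-up example data for a one-sample z-test, not output from this site's simulations; the sketch shows that when the observed effect lies in the hypothesized direction, the two-sided p-value is exactly twice the one-sided one, overstating the evidence against a directional claim.

```python
# Sketch: one-sided vs. two-sided p-values for a one-sample z-test.
# sample_mean, mu0, sigma, and n are illustrative assumptions.
from statistics import NormalDist

def z_test_p_values(sample_mean: float, mu0: float, sigma: float, n: int):
    """Return (one_sided_p, two_sided_p) for the z statistic of the sample.

    One-sided: H0: mu <= mu0 vs. H1: mu > mu0.
    Two-sided: H0: mu = mu0 vs. H1: mu != mu0.
    """
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    one_sided = 1 - NormalDist().cdf(z)              # P(Z >= z)
    two_sided = 2 * (1 - NormalDist().cdf(abs(z)))   # P(|Z| >= |z|)
    return one_sided, two_sided

p_one, p_two = z_test_p_values(sample_mean=10.4, mu0=10.0, sigma=1.0, n=25)
# Here z = 2.0, so p_one is about 0.023 while p_two is about 0.046:
# the two-sided test doubles the p-value for the same directional evidence.
```

Running the simulations on this site makes the same point with resampled data rather than the closed-form normal CDF.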
Articles on one-sided tests
- One-sided statistical tests are just as accurate as two-sided tests
- The paradox of one-sided vs. two-sided tests of significance
- Directional claims require directional statistical hypotheses
- A p-value is meaningless without a specified null hypothesis
- When is a one-sided hypothesis required?
- 12 myths about one-tailed vs. two-tailed tests of significance
- Examples of improper use of two-sided hypotheses
- Fisher, Neyman & Pearson - advocates for one-sided tests and confidence intervals
- Proponents of one-sided statistical tests
- Examples of negative portrayals of one-sided significance tests
- Is the widespread usage of two-sided tests a result of a usability issue?
- Reasons for misunderstanding and misapplication of one-sided tests
- Refining statistical guidelines and requirements for one-sided tests
- The hidden costs of bad statistics in clinical research