Statistical inference using hypothesis testing is ubiquitous in science. Several misconceptions and misinterpretations of p-values have arisen over the years, which can lead to challenges communicating the correct interpretation of results.

by TAF Support

Statistics is, to a large extent, a science of comparison. You are trying to test whether one group is bigger, faster, or smarter than another.

You do this by setting up a null hypothesis that your two groups have equal means or proportions and an alternative hypothesis that one group is “better” than the other. The test yields an interesting result only when the data you collect lead you to reject the null hypothesis.
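As a concrete sketch of that setup, here is a standard two-tailed, two-sample t test in Python. The data and group names are hypothetical, generated here purely for illustration:

```python
# Two-sample t test: H0 says the group means are equal,
# H1 (two-sided) says they differ. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical group
group_b = rng.normal(loc=11.0, scale=2.0, size=50)  # hypothetical group

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A small p-value leads you to reject the null hypothesis of equal means; a large one does not.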

But there are times when the interesting research question you’re asking is not about whether one group is better than the other, but whether the two groups are equivalent.
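A common way to test equivalence is the two one-sided tests (TOST) procedure. A minimal sketch using statsmodels, with hypothetical data and a hypothetical equivalence margin of ±0.5 units:

```python
# Equivalence via TOST: H0 says the mean difference lies OUTSIDE
# [-0.5, 0.5]; rejecting H0 supports equivalence within that margin.
# Data and the margin are hypothetical.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(1)
group_a = rng.normal(loc=10.0, scale=2.0, size=80)  # hypothetical group
group_b = rng.normal(loc=10.1, scale=2.0, size=80)  # hypothetical group

p_value, lower_test, upper_test = ttost_ind(group_a, group_b, -0.5, 0.5)
print(f"TOST p = {p_value:.4f}")
```

Note the reversal of roles: here a small p-value is evidence that the groups are equivalent (within the stated margin), not that they differ.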

I was recently asked about when to use one and two tailed tests.

The long answer is: use one-tailed tests when you have a specific hypothesis about the direction of your relationship. For example, you hypothesize that one group mean is larger than the other, that a correlation is positive, or that a proportion is below .5.
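To make the distinction concrete, here is a sketch of the same t test run both ways on hypothetical data, using the `alternative` argument of scipy's `ttest_ind`:

```python
# One-tailed vs two-tailed t test on the same hypothetical data.
from scipy import stats

treatment = [12.1, 13.0, 11.4, 14.2, 12.6, 11.9]  # hypothetical scores
control = [10.2, 9.8, 11.1, 10.0, 9.5, 10.6]      # hypothetical scores

# Directional hypothesis: the treatment mean is larger than the control mean.
t_stat, p_one = stats.ttest_ind(treatment, control, alternative="greater")
_, p_two = stats.ttest_ind(treatment, control, alternative="two-sided")

# When the observed effect falls in the hypothesized direction, the
# one-tailed p-value is exactly half the two-tailed p-value.
print(f"one-tailed p = {p_one:.4f}, two-tailed p = {p_two:.4f}")
```

That halving of the p-value is precisely why the choice must be justified by the hypothesis, not by the result.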

The short answer is: Never use one tailed tests.

Why?

1. Only a few statistical tests can even have one tail: z tests and t tests. So you’re severely limited. F tests, chi-square tests, and the like can’t accommodate one-tailed alternatives because their distributions are not symmetric. And since most statistical methods, such as regression and ANOVA, are based on those tests, you will rarely have the chance to use a one-tailed test at all.

2. Probably because they are rare, reviewers balk at one-tailed tests. They tend to assume that you are trying to artificially boost the power of your test. Theoretically, however, there is nothing wrong with them when the hypothesis and the statistical test are right for them.
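Point 1 can be seen directly in how an F test's p-value is computed: it is already the area in a single (upper) tail of the asymmetric F distribution, so a one- vs two-tailed choice never arises. A minimal sketch with a hypothetical F statistic and degrees of freedom:

```python
# The ANOVA F test's p-value is the upper-tail area of the F
# distribution; the statistic and degrees of freedom are hypothetical.
from scipy import stats

f_stat = 4.5                    # hypothetical F statistic
df_between, df_within = 2, 27   # hypothetical degrees of freedom

p_value = stats.f.sf(f_stat, df_between, df_within)  # upper tail only
print(f"F({df_between}, {df_within}) = {f_stat}, p = {p_value:.4f}")
```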
