I was recently asked about when to use one- and two-tailed tests.
The long answer is: Use one-tailed tests when you have a specific hypothesis about the direction of your relationship. Some examples: you hypothesize that one group mean is larger than the other; that the correlation is positive; that the proportion is below .5.
The short answer is: Never use one tailed tests.
1. Only a few statistical tests can even have one tail: z-tests and t-tests. So you’re severely limited. F-tests, chi-square tests, etc. can’t accommodate one-tailed tests because their distributions are not symmetric. Most statistical methods, such as regression and ANOVA, are based on these tests, so you will rarely have the chance to use a one-tailed test.
2. Probably because they are rare, reviewers balk at one-tailed tests. They tend to assume that you are trying to artificially boost the power of your test. Theoretically, however, there is nothing wrong with them when the hypothesis and the statistical test are right for them.
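To make the "artificial boost" concern concrete: for a symmetric test statistic, the one-tailed p-value is exactly half of the two-tailed one, which is why a one-tailed test looks more powerful. A minimal Python sketch (the z value here is made up purely for illustration):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

z = 1.75  # an observed z statistic, in the hypothesized (positive) direction
p_two = 2.0 * (1.0 - normal_cdf(abs(z)))  # two-tailed p-value
p_one = 1.0 - normal_cdf(z)               # one-tailed (right-tail) p-value
# For a symmetric distribution, p_one is exactly half of p_two
```

Here the two-tailed p is about .08 (not significant at .05) while the one-tailed p is about .04 (significant), which is exactly the situation reviewers worry about.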
I am running a regression analysis and wonder whether the significance in the ANOVA table output is inherently one-tailed or two-tailed. I have a directional hypothesis, so do I still have to divide the significance value? Thank you in advance!
Karen Grace-Martin says
It’s two-tailed. And the F tests cannot be made directional.
Say, for example, I am testing a hypothesis at .05 with a one-tailed test. I expect the treatment to produce a decrease, and the cutoff for significance is t = -1.67, so this is a left-tail test. If the t value after I run the test is +2.24, is this non-significant, since it is not in the hypothesized direction (negative)?
Does a one-tailed test always require a one-sided (directional) hypothesis, and
does a two-tailed test always require a two-sided (nondirectional) hypothesis?
Asked in the other direction:
Does a one-sided (directional) hypothesis always require a one-tailed test, and
does a two-sided (nondirectional) hypothesis always require a two-tailed test?
Thanks for clarifying
Karen Grace-Martin says
Good question. So technically, when I talk about a one-tailed test, I do mean a test of a directional hypothesis. Likewise, I mean a nondirectional hypothesis when I say two-tailed test.
In z-tests and t-tests, which have symmetric distributions, these terms are indeed interchangeable. But it definitely gets more complicated once you start talking about chi-square tests and F-tests. Both of these have 0 at the left end and all high values of the statistic in one tail. So you can’t distinguish directions for these kinds of test statistics, and you can only test nondirectional hypotheses.
I have seen textbooks reporting confidence intervals of a standard deviation using low and high values of the chi-square statistic. Is it not based on a two-sided test?
Georgi Z. Georgiev says
There is no need to have a specific directional hypothesis (although you would usually have one); all that is needed to justify one is a directional claim. Most claims in published scientific research and applied research are directional, since the moment you say the difference is positive or negative you have made a directional claim. The only way to avoid it is to not mention the observed difference, or to state the difference as “plus or minus X”, which would be ridiculous in most contexts.
Also, you should not mistake the tailed-ness of a statistical distribution for the tailed-ness of a hypothesis. While the chi-square or F-distribution might have only one tail, they can still be used for inference on one-sided and two-sided hypotheses alike. One can go as far back as Fisher and find examples of that (Statistical Methods for Research Workers).
If you are interested in more detailed arguments for the use of one-sided tests see the series of articles on The One-Sided Project website at https://www.onesided.org/.
Muhammad Ishaque says
Good Morning to all,
I am working on a Behavior Based Safety thesis project in which I have to analyze data with SPSS. Can anyone help me? I don't have any experience with this software.
Karen Grace-Martin says
We have a number of free resources on SPSS, but if you’re in the middle of a project we have an on-demand tutorial that will get you not just started, but able to use it: http://theanalysisinstitute.com/introspss/
Hui Zhao says
Thank you for the discussion of power for one-sided tests. I agree that we should be careful when we decide to use a one-sided test. For comparing two proportions, we can use two approaches: the normal-theory method (the z-test) and the contingency-table approach (the chi-square test). The normal-theory method requires a large enough sample size (roughly, both n·p and n·(1 − p) greater than 5). If your proposed study satisfies this requirement, you can use the normal-theory z-test to compute the power of the one-sided test. If you are interested in how to compute one-sided power, see Marcello Pagano and Kimberlee Gauvreau, Principles of Biostatistics, 2nd edition, section 14.5 (page 330), on sample size estimation for one-sided hypothesis tests for proportions.
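The normal-theory power calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not the textbook's exact procedure; the proportions and sample size are made-up values, and the 1.645 critical value assumes a one-sided alpha of .05:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def one_sided_power(p0, p1, n):
    """Approximate power of a one-sided z test of H0: p = p0 vs H1: p = p1 > p0,
    at alpha = .05 (upper critical value 1.645)."""
    se0 = math.sqrt(p0 * (1.0 - p0) / n)   # standard error under the null
    se1 = math.sqrt(p1 * (1.0 - p1) / n)   # standard error under the alternative
    crit = p0 + 1.645 * se0                # rejection threshold on the proportion scale
    return 1.0 - normal_cdf((crit - p1) / se1)

power = one_sided_power(0.5, 0.6, 100)  # hypothetical study: p0=.5, p1=.6, n=100
```

For these made-up numbers the approximate power comes out around .64; a real study would of course plug in its own planned values.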
Chi-squared tests are ALWAYS one-tailed… You only reject the null hypothesis when the test statistic falls into the right tail of the chi-square distribution.
What chi-squared tests are generally NOT is “directional”. They generally do not test whether the observed values are greater than or smaller than expected, only that they differ significantly.
If “Chi-squared tests are ALWAYS one-tailed…”, why is Karen saying in her above statement that “F tests, Chi-square tests, etc. can’t accommodate one-tailed tests”?
Sorry, but I am a bit confused. Can you help?
Perhaps better wording is “F tests, chi-square tests, etc. can’t accommodate directional tests.” Because there is only one tail of these distributions in which to find significance, the test can’t distinguish between nondirectional hypotheses (e.g., H1: mu1 − mu2 not equal to 0) and directional ones (e.g., H1: mu1 − mu2 greater than 0). In a t-test or z-test, we can either split alpha between two tails for a nondirectional test or put all of alpha into one tail for a directional test. We can then see whether we’re in the right tail based on the sign of the test statistic. F and chi-square values are squared, so they’re always positive. You get the same F value regardless of the direction of the means.
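The point that F carries no sign information is easy to verify numerically: swapping the two groups flips the sign of t but leaves t squared (which equals the two-group ANOVA F) unchanged. A quick Python check with made-up data:

```python
import math

def pooled_t(x, y):
    """Two-sample pooled-variance t statistic for mean(x) - mean(y)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1.0 / nx + 1.0 / ny))

a = [5.1, 4.8, 5.6, 5.0]  # made-up group scores
b = [4.2, 4.5, 4.1, 4.4]
t_ab = pooled_t(a, b)  # positive: mean(a) > mean(b)
t_ba = pooled_t(b, a)  # same magnitude, opposite sign
# The two-group ANOVA F equals t**2 either way, so F cannot say which mean was larger.
```

Both orderings give the identical F, which is exactly why the sign of the t statistic, not the F, is what tells you the direction.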
Thanks for the explanation… I am wondering: what happens if I have two methods, m1 and m2, and I want to show that m1 performs better (e.g. gives a higher value) than m2? Should I still use a two-tailed test? How can I show that m1 is, in fact, better (and not just different) than m2 (assuming the test proves significant)? Thanks for any comments.
My hypothesis says there is a positive relation between religiosity and altruistic behavior. Would a two-tailed approach using Pearson Correlation do the trick?
Some very helpful information available here!
I ran a paired-samples t-test and the output only gives the two-tailed p. How do I convert the two-tailed p to a one-tailed critical t? I already have the two-tailed critical t and the df.
Thanks so much
I have used a one-tailed test but the effect went in the opposite direction. How do I calculate my p-value now? Thanks for your reply, Mirco
F-tests are almost always one-tailed. You would convert a two-tailed test’s p-value into a one-tailed test’s p-value by *halving* the p-value, not multiplying by 2 (as recommended above).
Thanks, Paul. Yes. I fixed the half.
While it’s true that F-tests are one-tailed, they’re not testing directional hypotheses, the way a one-tailed t or z test does.
I am trying to complete the results of my study and need to know how to convert my one-tailed results to two-tailed. The company that ran my stats used a one-tailed instead of a two-tailed test, which, as I understand it, is what I should have used to show directionality.
Hi Nicole, just double them.
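Both directions of this conversion can be scripted, including the case Mirco raised where the effect came out opposite to the hypothesized direction. A minimal Python sketch, valid only for symmetric test statistics (z or t); the function names are illustrative:

```python
def one_tailed_p(p_two, effect_in_hypothesized_direction):
    """One-tailed p from a two-tailed p, for a symmetric test statistic (z or t)."""
    if effect_in_hypothesized_direction:
        return p_two / 2.0
    return 1.0 - p_two / 2.0  # effect went the opposite way

def two_tailed_p(p_one):
    """Two-tailed p from a one-tailed p, when the effect was in the tested direction."""
    return min(1.0, 2.0 * p_one)
```

So a two-tailed p of .08 becomes a one-tailed .04 when the effect is in the hypothesized direction, but .96 when it is not; halving is never appropriate in the wrong-direction case.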
Very interesting. I am reading a much-celebrated book in political science at the moment (The Weakness of Civil Society in Post-Communist Europe, by Marc Howard, 2003) containing regression analysis with one-tailed coefficients. This, together with the fact that they only have an N of 23 (with 5 independent variables), raises my eyebrows. The results are also quite controversial….
What would you off-hand say about that?
Without knowing anything else, the one-tailed tests of coefficients wouldn’t worry me too much, except for the fact that you said it’s controversial, which means perhaps the opposite result is reasonable.
The N of 23 is more of a concern. That’s pretty small. I find results like this are not bad per se. It’s fine to consider them as one possible piece of information–they are great for spurring more research. But you can’t make any conclusions based on them.
Just want to thank you for your post; it will save me on my exam tomorrow (y)
And in my course the p-value is not always divided by 2. It depends on the value of your t and on whether your H1 is > or < the null value. Coincidence?
Even if I'm wrong, I want to thank you again!
How can we calculate the one-tailed p-value from the two-tailed p-value in SPSS?
All you have to do is divide it by 2.
I found this issue very important; please continue in this way.
sue gallagher says
Thank you so much for your reply and offer to try a find a source regarding the limited utility of one-tailed tests when doing ANOVAs and regressions, as well as the advice for converting two-tailed tests to one-tailed tests.
The only problem I have is that the Pearson correlation coefficient output from my stats consultant does not contain the p-value. In order to find the p-value for a two-tailed test, I thought it might be possible to take the df (n − 2) and look up the significance level in a table of critical values of the correlation coefficient. Once I get those values, I would simply divide by 2 to get the one-tailed level of significance. Do you think that is a statistically sound procedure?
Thank you again for your assistance,
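The table look-up Sue describes hinges on the standard link between a Pearson correlation and a t statistic on n − 2 df, which is easy to compute directly. A Python sketch with hypothetical values (r = .45, n = 30 are made up for illustration):

```python
import math

def t_from_r(r, n):
    """t statistic corresponding to a Pearson correlation r from a sample of size n."""
    df = n - 2
    return r * math.sqrt(df / (1.0 - r * r))

t = t_from_r(0.45, 30)  # roughly 2.67 on 28 df
# Compare this t against a critical-value table at df = n - 2; halving the
# resulting two-tailed p then gives the one-tailed level of significance.
```

This sidesteps the correlation-coefficient table entirely: any t table (or software t distribution) gives the two-tailed p, and dividing by 2 converts it as described above.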
sue gallagher says
I am currently working on my dissertation and one of my committee members suggested that I should have used a one-tailed test as I have a directional hypothesis, but I think that a two-tailed test is just as appropriate based on several of the reasons listed on the blog.
I was particularly intrigued by the statement that “F tests, chi-square tests, etc. can’t accommodate one-tailed tests because their distributions are not symmetric.” This would make a fine argument for not re-running my data, and I was wondering if there is a reference or citation for that point. I have not been able to find it in any of the stats texts that I own. Any help would be greatly appreciated!
Hi Sue,
Hmmm. I would think that texts that talk about the F-test would mention that it's not symmetric. You could certainly use any text that states that t-squared = F. But I'll see if I can find something that says it directly.
But in any case, you don't have to rerun anything, even if you weren't using F tests. To get a one-sided p-value, just halve the two-sided p-value you have.
Karen