Odds ratios are one of those concepts in statistics that are just really hard to wrap your head around. Although probability and odds both measure how likely it is that something will occur, probability is just so much easier to understand for most of us.

I’m not sure if it’s just a more intuitive concept, or if it’s something we’re taught so much earlier that it’s more ingrained. In either case, without a lot of practice, most people won’t have an immediate understanding of how likely something is if it’s communicated through odds.

So why not always use probability?

The problem is that probability and odds have different properties that give odds some advantages in statistics. For example, in logistic regression the odds ratio represents the constant effect of a predictor X on the likelihood that one outcome will occur.

The key phrase here is *constant effect*. In regression models, we often want a measure of the unique effect of each X on Y. If we try to express the effect of X on the likelihood of a categorical Y having a specific value through probability, the effect is not constant.

What that means is there is no way to express in one number how X affects Y in terms of probability. The effect of X on the probability of Y has different values depending on the value of X.

So while we would love to use probabilities because they’re intuitive, there is no single number on the probability scale that describes that effect. So if you need to communicate that effect to a research audience, you’re going to have to wrap your head around odds ratios.
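A small numerical sketch makes this concrete. The coefficients below are made up for illustration, not from any real data set: for a logistic model logit(p) = b0 + b1·X, the odds ratio for a 1-unit increase in X is exp(b1) everywhere, but the change in probability depends on where on the X scale you start.

```python
import math

def prob(x, b0=-2.0, b1=0.4):
    """Predicted probability from a logistic model: logit(p) = b0 + b1*x."""
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

# The odds ratio for a 1-unit increase in X is constant (exp(b1) = 1.492),
# but the probability change is different at each value of X.
for x in [0, 2, 4, 6]:
    p_lo, p_hi = prob(x), prob(x + 1)
    print(f"x={x}: odds ratio = {odds(p_hi) / odds(p_lo):.3f}, "
          f"probability change = {p_hi - p_lo:.3f}")
```

The odds-ratio column is identical in every row, while the probability-change column drifts up and down: that drift is exactly the non-constant effect the text describes.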

#### What about Probabilities?

What you can do, and many people do, is use the logistic regression model to calculate predicted probabilities at specific values of a key predictor, usually while holding all other predictors constant.

This is a great approach to use together with odds ratios. The odds ratio is a single summary score of the effect, and the probabilities are more intuitive.
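As a sketch of that approach, suppose a fitted model has a treatment indicator and age as predictors (the coefficients and the age value of 40 below are invented for illustration). You report both the odds ratio and the predicted probabilities at each treatment level, holding age fixed:

```python
import math

# Hypothetical fitted logistic model: logit(p) = b0 + b1*treatment + b2*age
b0, b1, b2 = -1.5, 0.8, 0.03

def predicted_prob(treatment, age):
    """Predicted probability at specific predictor values."""
    return 1 / (1 + math.exp(-(b0 + b1 * treatment + b2 * age)))

# Predicted probabilities across treatment, holding age constant at 40
for t in (0, 1):
    print(f"treatment={t}: p = {predicted_prob(t, 40):.3f}")

# The single-number summary of the treatment effect is the odds ratio
print(f"odds ratio for treatment: {math.exp(b1):.2f}")
```

The probabilities give the intuitive picture at a chosen covariate value; the odds ratio is the one-number summary that holds across all of them.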

Presenting probabilities without the corresponding odds ratios can be problematic, though.

First, when X, the predictor, is categorical, the effect of X *can* be effectively communicated through a difference or ratio of probabilities. Comparing the probability of a relapse in an intervention condition to that in the control condition makes a lot of sense.

But the p-value for that effect *is not* the p-value for the differences in probabilities.

If you present a table of probabilities at different values of X, most research audiences will, at least in their minds, make those difference comparisons between the probabilities. They do this because they’ve been trained to do this in linear models.

These differences in probabilities don’t line up with the p-values in logistic regression models, though. And this can get quite confusing.

Second, when X, the predictor, is continuous, the odds ratio is constant across values of X. But probabilities aren’t.

It works exactly the same way as interest rates. I can tell you that an annual interest rate is 8%. So at the end of the year, you’ll earn $8 if you invested $100, or $40 if you invested $500. The rate stays constant, but the actual amount earned differs based on the amount invested.

Odds ratios work the same. An odds ratio of 1.08 will give you an 8% increase in the odds at any value of X.
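You can check this arithmetic directly. The baseline odds of 0.5 below is an assumption for illustration; the point is that each 1-unit step in X multiplies the odds by 1.08 no matter where you start, just as the interest rate stays 8% regardless of the amount invested:

```python
import math

# Assumed model: baseline odds 0.5, odds ratio 1.08 per unit of X,
# so log(odds) = log(0.5) + log(1.08) * x
b0, b1 = math.log(0.5), math.log(1.08)

def odds_at(x):
    """Odds at a given value of X."""
    return math.exp(b0 + b1 * x)

for x in [0, 10, 50]:
    pct = 100 * (odds_at(x + 1) / odds_at(x) - 1)
    print(f"x={x}: odds {odds_at(x):.3f} -> {odds_at(x + 1):.3f} (+{pct:.0f}%)")
```

The percentage increase is 8% at every X, even though the odds themselves keep growing.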

Likewise, the difference in the probability (or the odds) depends on the value of X.

So if you do decide to report the increase in probability at different values of X, you’ll have to do it at low, medium, and high values of X. You can’t use a single number on the probability scale to convey the *relationship* between the predictor and the probability of a response.

It takes more than a single number, and it’s not “the effect of X on Y,” but sometimes it’s a better way to communicate what is really going on, especially to non-research audiences.

#### Comments

Hi Guys

Trying to understand the similarities and differences between odds ratios and regression coefficients?

Just reading a paper and trying to understand the methodology and analysis of the data. I haven’t had stats in 25 years. Thanks! I totally get it. Couldn’t do it, but I totally get it!

1. The logistic regression coefficient indicates how the LOG of the odds ratio changes with a 1-unit change in the explanatory variable; this is not the same as the change in the (unlogged) odds ratio though the 2 are close when the coefficient is small.
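The relationship this comment describes is easy to verify numerically: the odds ratio is exp(coefficient), and for small coefficients exp(b) ≈ 1 + b, so the two are close; for large coefficients they diverge badly.

```python
import math

# A logistic regression coefficient b is the LOG of the odds ratio,
# so the odds ratio is exp(b). For small b, exp(b) is close to 1 + b.
for b in [0.05, 0.5, 2.0]:
    print(f"b={b}: odds ratio = {math.exp(b):.3f}, 1 + b = {1 + b:.3f}")
```

At b = 0.05 the odds ratio is 1.051, nearly 1.05; at b = 2.0 it is 7.389, nowhere near 3.0.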

2. Your use of the term “likelihood” is quite confusing. “Likelihood” in the precise context of a “likelihood” function is IN FACT a probability. Outside of that precise meaning, I am not aware of any other clear definition of “likelihood” in statistics.

Hi David,

1. Yes, agreed. I wasn’t talking about coefficients here at all. Just the odds ratios that are derived from those coefficients. Most people find the coefficients on the log-odds scale too abstract to interpret, beyond being positive or negative.

2. It’s only confusing to people who know statistical theory, which is not the audience here. I’m not using “likelihood” to describe the likelihood function, which we use to estimate the model parameters. I’m just talking about measuring how likely, in layman’s terms, each outcome is. Probability and odds are both ways of measuring how likely (ie the likelihood) of a success, but on different scales. It’s analogous to degrees Fahrenheit and degrees Celsius both measuring temperature, but on different scales. I haven’t come up with a better generic term than likelihood that doesn’t involve the measurement scale, but I’m open to suggestions. Possibility? Propensity? The latter has a different technical statistical usage.
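The "same quantity, different scales" analogy in this reply can be sketched as a pair of conversion functions, just as you might convert between Fahrenheit and Celsius:

```python
def prob_to_odds(p):
    """Probability -> odds: p/(1-p). p=0.5 gives even odds of 1."""
    return p / (1 - p)

def odds_to_prob(o):
    """Odds -> probability: o/(1+o). Inverse of prob_to_odds."""
    return o / (1 + o)

# Both scales measure how likely an outcome is
for p in [0.1, 0.5, 0.8]:
    o = prob_to_odds(p)
    print(f"p={p} -> odds={o:.2f} -> back to p={odds_to_prob(o):.2f}")
```

A probability of 0.5 corresponds to odds of 1 (even odds); a probability of 0.8 corresponds to odds of 4, i.e., 4 to 1.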

Hi,

I’m a bit confused by this part: “An odds ratio of 1.08 will give you an 8% increase in the odds at any value of X.” My question is, what if the odds ratio is more than 2?

For example, one of my logit coefficients is 3.0901, therefore the odds ratio should be 21.98. I was interpreting it as the increased likelihood of some event by a factor of 21.98, because this obviously can’t be transformed into percentage.

Do I interpret it correctly? Or am I missing something?

Thank you for your help!

Hi Rose, you have it right. You could change it to a percentage – 2098% increase(!) – but I agree that the way you have it is more interpretable.
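Rose's numbers check out, as a quick sketch shows: exponentiating the coefficient gives the odds ratio, and subtracting 1 and multiplying by 100 gives the percentage increase.

```python
import math

b = 3.0901                  # Rose's logit coefficient
or_ = math.exp(b)           # odds ratio = exp(coefficient)
pct = (or_ - 1) * 100       # percentage increase in the odds

print(f"odds ratio = {or_:.2f}")         # ~21.98
print(f"percent increase = {pct:.0f}%")  # ~2098%
```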

If we try to express the effect of X on the likelihood of a categorical Y having a specific value through probability, the effect is not constant.

The effect is not constant? Why?

It means you’ll get a different probability ratio at each value of X. It’s a curve, not a straight line.

How do you compute probabilities in logistic regression with Stata?

You can evaluate these relationships graphically and numerically using the “margins” commands in Stata. This website gives some nice examples and explanations: https://www.stata.com/meeting/germany13/abstracts/materials/de13_jann.pdf

You write

The key phrase here is constant effect. In regression models, we often want a measure of the unique effect of each X on Y. If we try to express the effect of X on the likelihood of a categorical Y having a specific value through probability, the effect is not constant.

But sometimes don’t you want the effect of X on the categorical variable to not be constant? Like if you are building a predictive model where an individual shows different behavior at high versus low values of X.

Thank you. This helped me explain to reviewer 1 why the request for predicted probabilities rather than odds ratios was respectfully declined. Succinct, clear, and intuitive. Much appreciated.

How do I interpret odds ratios in an ordered multinomial logit?

This was an extremely clear explanation, presented in a simple manner.

Can someone tell me how to transform odds ratios into logistic beta coefficients? Suppose the odds ratio of becoming diabetic when someone is obese is 4; what would be the corresponding value of the beta coefficient in a logistic regression?

Thank you so much for any clue.

Kiza.

Hi kiza, the logistic beta coefficient is just the natural log of the odds ratio. So for an odds ratio of 4, the coefficient for that predictor would be ln(4) ≈ 1.39.
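A minimal sketch of the conversion (going from an odds ratio back to the coefficient just inverts the exponentiation):

```python
import math

odds_ratio = 4.0                 # odds ratio for diabetes given obesity
beta = math.log(odds_ratio)      # logistic coefficient = ln(odds ratio)

print(f"beta = {beta:.3f}")                        # ~1.386
print(f"check: exp(beta) = {math.exp(beta):.1f}")  # recovers 4.0
```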

You don’t really need to talk about coefficients; when you do, you are only assessing the effect on the linearized probability, or, if you are using a logit, on the ln of the odds ratio.

When you interpret an odds ratio, you see what effect a determinant has on the probability of whatever you’re researching.

So in your case, you’re interested in the odds of becoming diabetic in the presence of obesity. In your example it goes like this: you have 4:1 odds of becoming diabetic if you are obese, or obesity has a constant effect on diabetes, represented by the increase in the chances of being diabetic (the odds are 4 times higher if you’re obese)…

I hope I helped you.

This was an extremely intuitive explanation. I couldn’t find an answer like this elsewhere. Thank you!

How to interpret multinomial logistic?

Hi Tesfaye,

That’s a really good question, and how I’d answer depends on whether you already understand binary logistic regression.

I would suggest starting with these two webinar recordings:

Binary, Ordinal, and Multinomial Logistic Regression for Categorical Outcomes

Understanding Probability, Odds, and Odds Ratios in Logistic Regression.

They’re both free.

The former describes multinomial logistic regression and how interpretation differs from binary. The latter goes into more detail about how to interpret an odds ratio.

Karen