Interpreting Regression Coefficients

by Karen Grace-Martin


Linear regression is one of the most popular statistical techniques.

Despite its popularity, interpretation of the regression coefficients of any but the simplest models is sometimes, well… difficult.

So let’s interpret the coefficients of a continuous and a categorical variable.  Although the example here is a linear regression model, the approach works for interpreting coefficients from any regression model without interactions, including logistic and proportional hazards models.

A linear regression model with two predictor variables can be expressed with the following equation:

Y = B0 + B1*X1 + B2*X2 + e.

The variables in the model are:

  • Y, the response variable;
  • X1, the first predictor variable;
  • X2, the second predictor variable; and
  • e, the residual error, which is an unmeasured variable.

The parameters in the model are:

  • B0, the Y-intercept;
  • B1, the first regression coefficient; and
  • B2, the second regression coefficient.

One example would be a model of the height of a shrub (Y) based on the amount of bacteria in the soil (X1) and whether the plant is located in partial or full sun (X2).

Height is measured in cm, bacteria is measured in thousand per ml of soil, and type of sun = 0 if the plant is in partial sun and type of sun = 1 if the plant is in full sun.

Let’s say it turned out that the regression equation was estimated as follows:

Y = 42 + 2.3*X1 + 11*X2
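Since the fitted equation is just arithmetic, it can be sketched as a small Python function (the function name and layout here are illustrative, not from any particular software):

```python
# A minimal sketch of the fitted model Y = 42 + 2.3*X1 + 11*X2.
# Names are made up for this example.

def predict_height(bacteria_thousands, full_sun):
    """Predicted shrub height (cm) from bacteria count (in thousands/ml)
    and sun exposure (0 = partial sun, 1 = full sun)."""
    return 42 + 2.3 * bacteria_thousands + 11 * full_sun

print(predict_height(5, 0))  # 5000 bacteria/ml, partial sun -> 53.5 cm
print(predict_height(5, 1))  # same bacteria level, full sun -> 64.5 cm
```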

Interpreting the Intercept

B0, the Y-intercept, can be interpreted as the value you would predict for Y if both X1 = 0 and X2 = 0.

We would expect an average height of 42 cm for shrubs in partial sun with no bacteria in the soil. However, this is only a meaningful interpretation if it is reasonable that both X1 and X2 can be 0, and if the data set actually included values for X1 and X2 that were near 0.

If either of these conditions is not met, then B0 has no meaningful interpretation. It just anchors the regression line in the right place. In our case, it is easy to see that X2 sometimes is 0, but if X1, our bacteria level, never comes close to 0, then our intercept has no real interpretation.
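Concretely, the intercept is just the prediction when every predictor equals 0. A quick sketch (the function name is made up for this example):

```python
# The intercept B0 = 42 is the predicted Y when X1 = 0 and X2 = 0.

def predict_height(bacteria_thousands, full_sun):
    """Fitted model: Y = 42 + 2.3*X1 + 11*X2 (height in cm)."""
    return 42 + 2.3 * bacteria_thousands + 11 * full_sun

print(predict_height(0, 0))  # 42 -- partial sun, no bacteria in the soil
```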

Interpreting Coefficients of Continuous Predictor Variables

Since X1 is a continuous variable, B1 represents the difference in the predicted value of Y for each one-unit difference in X1, if X2 remains constant.

This means that if X1 differed by one unit (and X2 did not differ), Y would differ by B1 units, on average.

In our example, shrubs with a 5000/ml bacteria count would, on average, be 2.3 cm taller than those with a 4000/ml count, which likewise would be about 2.3 cm taller than those with a 3000/ml count, as long as they were in the same type of sun.

(Don’t forget that since the bacteria count was measured in 1000 per ml of soil, 1000 bacteria represent one unit of X1).
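Checking this numerically makes the "per one-unit difference" reading concrete (the function name below is illustrative):

```python
# Illustrating B1 = 2.3: holding sun type fixed, each additional
# 1000 bacteria/ml adds 2.3 cm to the predicted height.

def predict_height(bacteria_thousands, full_sun):
    """Fitted model: Y = 42 + 2.3*X1 + 11*X2 (height in cm)."""
    return 42 + 2.3 * bacteria_thousands + 11 * full_sun

# Predicted heights at 3000, 4000, and 5000 bacteria/ml, all in full sun:
heights = [predict_height(x, 1) for x in (3, 4, 5)]
diffs = [round(b - a, 10) for a, b in zip(heights, heights[1:])]
print(diffs)  # each one-unit step in X1 changes the prediction by B1 = 2.3
```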

Interpreting Coefficients of Categorical Predictor Variables

Similarly, B2 is interpreted as the difference in the predicted value in Y for each one-unit difference in X2, if X1 remains constant. However, since X2 is a categorical variable coded as 0 or 1, a one unit difference represents switching from one category to the other.

B2 is then the average difference in Y between the category for which X2 = 0 (the reference group) and the category for which X2 = 1 (the comparison group).

So compared to shrubs that were in partial sun, we would expect shrubs in full sun to be 11 cm taller, on average, at the same level of soil bacteria.
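The same kind of check works for the categorical coefficient: switching X2 from 0 to 1 at a fixed bacteria level changes the prediction by exactly B2 (the function name below is illustrative):

```python
# Illustrating B2 = 11: at the same bacteria level, moving from
# partial sun (X2 = 0) to full sun (X2 = 1) adds 11 cm.

def predict_height(bacteria_thousands, full_sun):
    """Fitted model: Y = 42 + 2.3*X1 + 11*X2 (height in cm)."""
    return 42 + 2.3 * bacteria_thousands + 11 * full_sun

partial = predict_height(4, 0)
full = predict_height(4, 1)
print(full - partial)  # 11.0 -- the coefficient on the dummy variable
```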

Interpreting Coefficients when Predictor Variables are Correlated

Don’t forget that each coefficient is influenced by the other variables in a regression model. Because predictor variables are nearly always associated, two or more variables may explain some of the same variation in Y.

Therefore, each coefficient does not measure the total effect on Y of its corresponding variable, as it would if it were the only variable in the model.

Rather, each coefficient represents the additional effect of adding that variable to the model, given that the effects of all other variables in the model are already accounted for. (These are called Type 3 regression coefficients, and this is the usual way to calculate them. However, not all software uses Type 3 coefficients by default, so check your software's documentation so you know what you're getting.)

This means that each coefficient will change when other variables are added to or deleted from the model.
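This shift is easy to demonstrate on simulated data. The sketch below (data and numbers are illustrative only, not from the shrub example) fits the same outcome with and without a correlated second predictor and watches the first coefficient change:

```python
# A sketch of how a coefficient shifts when a correlated predictor
# enters the model. Simulated data; the truth is y = 1*x1 + 1*x2 + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)   # x2 correlated with x1
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.3, size=n)

# Model with x1 alone: its coefficient absorbs part of x2's effect.
X_alone = np.column_stack([np.ones(n), x1])
b_alone = np.linalg.lstsq(X_alone, y, rcond=None)[0][1]

# Model with both predictors: x1's coefficient moves toward its true value.
X_both = np.column_stack([np.ones(n), x1, x2])
b_both = np.linalg.lstsq(X_both, y, rcond=None)[0][1]

print(round(b_alone, 2), round(b_both, 2))  # b_alone is inflated, b_both near 1
```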

For a discussion of how to interpret the coefficients of models with interaction terms, see Interpreting Interactions in Regression.


Learn more about the ins and outs of interpreting regression coefficients in our On Demand workshop: Interpreting (Even Tricky) Regression Coefficients.

Comments

Mark April 4, 2017 at 12:00 pm

How should I interpret the effects of an independent variable “age” (a continuous variable coded to range from (0) for the youngest to (1) for the oldest respondents) on my dependent variable “income” given a beta coefficient of 2.688823 ?


April November 29, 2016 at 2:45 pm

If you have a directional hypothesis for an IV, is it acceptable to divide the two-tailed p-value for the t-test by two to obtain the one-tailed significance?


Juliet October 16, 2016 at 1:09 am

How do you interpret coefficients on discrete variables? For example, if sunlight was coded as 0 – no sunlight, 1 – partial sunlight, and 2 – full sunlight, how would you interpret the coefficient on this independent variable?


Jon November 29, 2016 at 8:58 pm

To handle categorical variables like in your example, you would encode them into n − 1 binary variables, where n is the number of categories; see here for an example:


Ahmed August 23, 2016 at 12:43 am

I used linear regression to control for IQ. How can I know if differences between two groups remain the same?


ENDALE Y July 21, 2016 at 8:19 am

Please make it easy and understandable.


Akosua May 17, 2016 at 7:31 pm

Please, how do you interpret a regression result that shows zero as the coefficient? Thank you.


Deniz June 6, 2016 at 9:12 pm

If the B coefficient is 0, then there is no relationship between the dependent and independent variables.


Kanu October 5, 2016 at 1:04 pm

Does this mean that a B coefficient just over 0, let's say 0.58, isn't as good as one that is 1.11?

What do the signs of the B coefficient mean? Is it an inverse association (−) and a direct association (+) with the dependent variable?


Rupon Basumatary November 7, 2015 at 11:53 am

Makes it easily understandable.


John May 19, 2015 at 2:48 am

In interpreting the coefficients of categorical predictor variables, what if X2 had several levels (several categories) instead of 0 and 1. Say, the soil was green, red, yellow or blue. How would you interpret quantitatively the differences in the coefficients? How much higher is the plant grown in green soil vs red soil?


Gio June 3, 2016 at 3:46 am

John, you can always transform a multi-level categorical variable into (levels − 1) two-level categorical variables.

In your example the soil variable would become:
– Soil_green (1,0)
– Soil_red (1,0)
– Soil_yellow (1,0)
You do not need a Soil_blue variable, because when all of the above are 0, then you know it is blue soil.


him July 17, 2016 at 7:58 pm

FYI – The above is commonly referred to as “dummy coding”


Martins Ahmed December 18, 2014 at 8:57 am

Really appreciate this exposition. It has to a great extent cleared up some difficulties I have been experiencing in interpreting the coefficients of linear regression.


Leave a Comment

Please note that Karen receives hundreds of comments at The Analysis Factor website each week. Since Karen is also busy teaching workshops, consulting with clients, and running a membership program, she seldom has time to respond to these comments anymore. If you have a question to which you need a timely response, please check out our low-cost monthly membership program, or sign-up for a quick question consultation.
