The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers
interpreting regression coefficients

Linear Regression Analysis – 3 Common Causes of Multicollinearity and What to Do About Them

by Karen Grace-Martin  1 Comment

Multicollinearity in regression is one of those issues that strikes fear into the hearts of researchers. You’ve heard about its dangers in statistics classes, and colleagues and journal reviewers question your results because of it. But there are really only a few causes of multicollinearity. Let’s explore them.

Multicollinearity is simply redundancy in the information contained in predictor variables. If the redundancy is moderate, [Read more…]

Tagged With: dummy coding, interpreting regression coefficients, Multicollinearity, principal component analysis

Related Posts

  • Making Dummy Codes Easy to Keep Track of
  • Simplifying a Categorical Predictor in Regression Models
  • What is Multicollinearity? A Visual Description
  • Your Questions Answered from the Interpreting Regression Coefficients Webinar

Interpreting Regression Coefficients

by Karen Grace-Martin  31 Comments

Updated 12/20/2021

Despite its popularity, interpreting regression coefficients of any but the simplest models is sometimes, well… difficult.

So let’s interpret the coefficients in a model with two predictors: a continuous and a categorical variable.  The example here is a linear regression model. But this works the same way for interpreting coefficients from any regression model without interactions.

A linear regression model with two predictor variables results in the following equation:

Yi = B0 + B1*X1i + B2*X2i + ei.

The variables in the model are:

  • Y, the response variable;
  • X1, the first predictor variable;
  • X2, the second predictor variable; and
  • e, the residual error, which is an unmeasured variable.

The parameters in the model are:

  • B0, the Y-intercept;
  • B1, the first regression coefficient; and
  • B2, the second regression coefficient.

One example would be a model of the height of a shrub (Y) based on the amount of bacteria in the soil (X1) and whether the plant is located in partial or full sun (X2).

Height is measured in cm. Bacteria is measured in thousands per ml of soil. And type of sun = 0 if the plant is in partial sun and type of sun = 1 if the plant is in full sun.

Let’s say it turned out that the regression equation was estimated as follows:

Y = 42 + 2.3*X1 + 11*X2
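As a rough sketch, here is how a model with this structure could be fit in Python with statsmodels. The shrub data are simulated for illustration; only the model structure and the ballpark values (42, 2.3, 11) come from the example above.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated shrub data, roughly consistent with the estimated equation.
    rng = np.random.default_rng(42)
    n = 100
    df = pd.DataFrame({
        "bacteria": rng.uniform(1, 10, n),   # thousands per ml of soil
        "sun": rng.integers(0, 2, n),        # 0 = partial sun, 1 = full sun
    })
    df["height"] = 42 + 2.3 * df["bacteria"] + 11 * df["sun"] + rng.normal(0, 3, n)

    fit = smf.ols("height ~ bacteria + sun", data=df).fit()
    print(fit.params)  # Intercept near 42, bacteria near 2.3, sun near 11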

Interpreting the Intercept

B0, the Y-intercept, can be interpreted as the value you would predict for Y if both X1 = 0 and X2 = 0.

We would expect an average height of 42 cm for shrubs in partial sun with no bacteria in the soil. However, this is only a meaningful interpretation if it is reasonable that both X1 and X2 can be 0, and if the data set actually included values for X1 and X2 that were near 0.

If neither of these conditions is true, then B0 really has no meaningful interpretation. It just anchors the regression line in the right place. In our case, it is easy to see that X2 sometimes is 0, but if X1, our bacteria level, never comes close to 0, then our intercept has no real interpretation.
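A tiny self-contained check of this interpretation, plugging X1 = 0 and X2 = 0 into the estimated equation (the predicted_height helper is hypothetical):

    # Estimated equation from above: Y = 42 + 2.3*X1 + 11*X2
    def predicted_height(bacteria, full_sun):
        return 42 + 2.3 * bacteria + 11 * full_sun

    # With no bacteria and partial sun, the prediction is the intercept.
    print(predicted_height(0, 0))  # 42.0 cm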

Interpreting Coefficients of Continuous Predictor Variables

Since X1 is a continuous variable, B1 represents the difference in the predicted value of Y for each one-unit difference in X1, if X2 remains constant.

This means that if X1 differed by one unit (and X2 did not differ), Y would differ by B1 units, on average.

In our example, shrubs with a 5000/ml bacteria count would, on average, be 2.3 cm taller than those with a 4000/ml bacteria count. They likewise would be about 2.3 cm taller than those with 3000/ml bacteria, as long as they were in the same type of sun.

(Don’t forget that since the measurement unit for bacteria count is 1000 per ml of soil, 1000 bacteria represent one unit of X1).
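The same interpretation as a two-line calculation with the estimated equation: holding sun type fixed, predictions one bacteria unit apart differ by exactly B1.

    # Full sun (X2 = 1), bacteria counts one unit apart (4 vs. 5 thousand/ml).
    y_at_5 = 42 + 2.3 * 5 + 11 * 1
    y_at_4 = 42 + 2.3 * 4 + 11 * 1
    print(y_at_5 - y_at_4)  # 2.3 cm, the coefficient B1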

Interpreting Coefficients of Categorical Predictor Variables

Similarly, B2 is interpreted as the difference in the predicted value in Y for each one-unit difference in X2 if X1 remains constant. However, since X2 is a categorical variable coded as 0 or 1, a one unit difference represents switching from one category to the other.

B2 is then the average difference in Y between the category for which X2 = 0 (the reference group) and the category for which X2 = 1 (the comparison group).

So compared to shrubs that were in partial sun, we would expect shrubs in full sun to be 11 cm taller, on average, at the same level of soil bacteria.
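And the categorical analogue: at any fixed bacteria level, the two sun types differ in their predictions by B2.

    # Same bacteria level (6, i.e. 6000/ml), different sun types.
    full_sun = 42 + 2.3 * 6 + 11 * 1
    partial_sun = 42 + 2.3 * 6 + 11 * 0
    print(full_sun - partial_sun)  # 11.0 cm, the coefficient B2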

Interpreting Coefficients when Predictor Variables are Correlated

Don’t forget that each coefficient is influenced by the other variables in a regression model. Because predictor variables are nearly always associated, two or more variables may explain some of the same variation in Y.

Therefore, each coefficient does not measure the total effect of its corresponding variable on Y. It would only do so if it were the only predictor in the model, or if the predictors were independent of each other.

Rather, each coefficient represents the additional effect of adding that variable to the model, if the effects of all other variables in the model are already accounted for.

This means that adding or removing variables from the model will change the coefficients. This is not a problem, as long as you understand why and interpret accordingly.
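A small simulation (not from the article) makes this concrete: the coefficient of x1 changes when a correlated predictor x2 enters the model. All names and numbers here are illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data in which x2 is correlated with x1 and both affect y.
    rng = np.random.default_rng(1)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)   # x2 overlaps with x1
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
    df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

    alone = smf.ols("y ~ x1", data=df).fit()
    both = smf.ols("y ~ x1 + x2", data=df).fit()
    print(alone.params["x1"])  # near 1.7: x1 absorbs x2's overlapping effect
    print(both.params["x1"])   # near 1.0: x1's effect adjusted for x2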

Interpreting Other Specific Coefficients

I’ve given you the basics here. But interpretation gets a bit trickier for more complicated models, for example, when the model contains quadratic or interaction terms. There are also ways to rescale predictor variables to make interpretation easier.

So here is some more reading about interpreting specific types of coefficients for different types of models:

  • Interpreting the Intercept
  • Removing the Intercept when X is Continuous or Categorical
  • Interpreting Interactions in Regression
  • How Changing the Scale of X affects Interpreting its Regression Coefficient
  • Interpreting Coefficients with a Centered Predictor

Tagged With: categorical predictor, continuous predictor, Intercept, interpreting regression coefficients, linear regression

Related Posts

  • Centering a Covariate to Improve Interpretability
  • Using Marginal Means to Explain an Interaction to a Non-Statistical Audience
  • Member Training: Segmented Regression
  • Should You Always Center a Predictor on the Mean?

Interpreting Regression Coefficients: Changing the scale of predictor variables

by Karen Grace-Martin  2 Comments

One issue that affects how to interpret regression coefficients is the scale of the variables. In linear regression, the scaling of both the response variable Y and the relevant predictor X is important.

In regression models like logistic regression, where the response variable is categorical, and therefore doesn’t have a numerical scale, this only applies to predictor variables, X.

This can be an issue of measurement units–miles vs. kilometers. Or it can be an issue of simply how big “one unit” is. For example, whether one unit of annual income is measured in dollars, thousands of dollars, or millions of dollars.

The good news is you can easily change the scale of variables to make it easier to interpret their regression coefficients. This works as well for functions of regression coefficients, like odds ratios and rate ratios.

All you have to do is create a new variable in your data set (don’t overwrite the original one in case you make a mistake). This new variable is simply the old one multiplied or divided by some constant. The constant is often a factor of 10, but it doesn’t have to be. Then use the new variable in your model instead of the original one.

Since regression coefficients and odds ratios tell you the effect of a one-unit change in the predictor, choose a scale on which a one-unit change in the predictor makes sense.
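In pandas, for example, the new-variable step might look like this (the data frame and column names are hypothetical):

    import pandas as pd

    # Hypothetical data: annual income recorded in dollars.
    df = pd.DataFrame({"income_dollars": [42_000, 55_500, 61_250]})

    # Create a new column rather than overwriting the original.
    df["income_thousands"] = df["income_dollars"] / 1_000

You would then use income_thousands in the model, and its coefficient would describe the effect of a $1,000 difference in income.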

An Example of Rescaling Predictors

Here is a really simple example that I use in one of my workshops.

Y= first semester college GPA

X1= high school GPA
X2=SAT score

High School and first semester College GPA are both measured on a scale from 0 to 4. If you’re not familiar with this scaling, a 0 means you failed a class. An A (usually the top possible score) is a 4, a B is a 3, a C is a 2, and a D is a 1. So for a grade point average, a one point difference is very big.

If you’re an admissions counselor looking at high school transcripts, there is a big difference between a 3.7 GPA and a 2.7 GPA.

SAT score is on an entirely different scale. It’s a normed scale, so that the minimum is 200, the maximum is 800, and the mean is 500. Scores are in units of 10. You literally cannot receive a score of 622. You can get only 620 or 630.

So a one-point difference is not only tiny, it’s meaningless. Even a 10 point difference in SAT scores is pretty small. But 50 points is meaningful, and 100 points is large.

If you leave both predictors on the original scale in a regression that predicts first semester GPA, you get the following results:

Predictor          Coefficient
High school GPA    .20
SAT math score     .002

Let’s interpret those coefficients.

The coefficient for high school GPA here is .20. This says that for each one-unit difference in GPA, we expect, on average, a .2 higher first semester GPA. While a one-unit change in GPA is huge, that’s reasonably meaningful.

The coefficient for SAT math scores is .002. That looks tiny. It says that for each one-unit difference in SAT math score, we expect, on average, a .002 higher first semester GPA. But a one-unit difference in SAT score is too small to interpret. It’s too small to be meaningful.

So we can change the scaling of our SAT score predictor to be in 10-point differences. Or in 100-point differences. Choose the scale for which one unit is meaningful.

Changing the scale by multiplying the coefficient

In a linear model, you can simply multiply the coefficient by 10 to reflect a 10-point difference. That’s a coefficient of .02. So for each 10 point difference in math SAT score we expect, on average, a .02 higher first semester GPA.

Or we could multiply the coefficient by 50 to reflect a 50-point difference. That’s a coefficient of .10. So for each 50 point difference in math SAT score we expect, on average, a .1 higher first semester GPA.
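As a quick check of the arithmetic, with the coefficient from the example above:

    b_sat = 0.002        # per 1-point difference in SAT math score
    print(b_sat * 10)    # 0.02 per 10-point difference
    print(b_sat * 50)    # 0.10 per 50-point difference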

When it’s easier to just change the variables

Multiplying the coefficient is easier than rescaling the original variable if you only have one or two of these and you’re using linear regression.

It doesn’t work once you’ve done any sort of back-transformation in generalized linear models. So you can’t just multiply the odds ratio or the incidence rate ratio by 10 or 50. Both of these are created by exponentiating the regression coefficient. Because of the order of operations in algebra, you have to first multiply the coefficient by the constant, and then re-exponentiate.
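A short numeric illustration of why the order matters; the coefficient value here is made up:

    import math

    b = 0.002                  # hypothetical logistic regression coefficient
    odds_ratio = math.exp(b)   # odds ratio for a 1-unit difference

    print(odds_ratio * 10)     # wrong: about 10.02, no longer an odds ratio
    print(math.exp(b * 10))    # right: about 1.02, odds ratio per 10 units

Equivalently, you can raise the odds ratio to the power of the constant: odds_ratio ** 10 gives the same answer as math.exp(b * 10).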

Likewise, if you are using this predictor in more than one linear regression model, it’s much simpler to rescale the variable in the first place. Simply divide that SAT score by 10 or 50 and the coefficient will be .02 or .10, respectively.

Updated 12/2/2021

Tagged With: interpreting odds ratios, interpreting regression coefficients, predictor variable

Related Posts

  • The General Linear Model, Analysis of Covariance, and How ANOVA and Linear Regression Really are the Same Model Wearing Different Clothes
  • Interpreting Lower Order Coefficients When the Model Contains an Interaction
  • Centering and Standardizing Predictors
  • Interpreting Regression Coefficients

Member Training: Preparing to Use (and Interpret) a Linear Regression Model

by TAF Support 

You think a linear regression might be an appropriate statistical analysis for your data, but you’re not entirely sure. What should you check before running your model to find out?

[Read more…]

Tagged With: Bivariate Statistics, histogram, interpreting regression coefficients, linear regression, Multiple Regression, scatterplot, Univariate statistics

Related Posts

  • Member Training: The Link Between ANOVA and Regression
  • Member Training: Centering
  • The Difference Between R-squared and Adjusted R-squared
  • Member Training: Goodness of Fit Statistics

Simplifying a Categorical Predictor in Regression Models

by Jeff Meyer  Leave a Comment

One of the many decisions you have to make when model building is which form each predictor variable should take. One specific version of this decision is whether to combine categories of a categorical predictor.

The greater the number of parameter estimates in a model, the greater the number of observations needed to keep power constant. The parameter estimates in a linear [Read more…]

Tagged With: categorical predictor, interpreting regression coefficients, Model Building, pairwise, R-squared

Related Posts

  • What It Really Means to Remove an Interaction From a Model
  • Differences in Model Building Between Explanatory and Predictive Models
  • The Difference Between R-squared and Adjusted R-squared
  • Measures of Model Fit for Linear Regression Models

Your Questions Answered from the Interpreting Regression Coefficients Webinar

by Karen Grace-Martin  Leave a Comment

Last week I had the pleasure of teaching a webinar on Interpreting Regression Coefficients. We walked through the output of a somewhat tricky regression model—it included two dummy-coded categorical variables, a covariate, and a few interactions.

As always seems to happen, our audience asked an amazing number of great questions. (Seriously, I’ve had multiple guest instructors compliment me on our audience and their thoughtful questions.)

We had so many that although I spent about 40 minutes answering [Read more…]

Tagged With: dummy coding, dummy variable, effect size, eta-square, Interpreting Interactions, interpreting regression coefficients, Reference Group, spotlight analysis, statistical significance

Related Posts

  • Using Marginal Means to Explain an Interaction to a Non-Statistical Audience
  • Interpreting Interactions in Linear Regression: When SPSS and Stata Disagree, Which is Right?
  • About Dummy Variables in SPSS Analysis
  • Interpreting (Even Tricky) Regression Coefficients – A Quiz

