A linear regression model with an interaction between two predictors (X_{1} and X_{2}) has the form:

Y = B_{0} + B_{1}X_{1} + B_{2}X_{2} + B_{3}X_{1}*X_{2}.

It doesn’t really matter if X_{1} and X_{2} are categorical or continuous, but let’s assume they are continuous for simplicity.
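As a quick sketch of what fitting this model looks like (all data and coefficient values here are made up for illustration, and the outcome is noiseless so ordinary least squares recovers the coefficients exactly):

```python
import numpy as np

# Hypothetical data: 200 observations generated from known, made-up
# coefficients B0=1.0, B1=2.0, B2=-0.5, B3=1.5
rng = np.random.default_rng(0)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(loc=3.0, size=n)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + 1.5 * X1 * X2  # noiseless for illustration

# Design matrix: intercept, X1, X2, and the product term X1*X2
X = np.column_stack([np.ones(n), X1, X2, X1 * X2])
B = np.linalg.lstsq(X, Y, rcond=None)[0]
# B recovers approximately [1.0, 2.0, -0.5, 1.5]
```

The only thing that makes this an interaction model is the extra product column X1*X2 in the design matrix.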

One important concept is that B_{1} and B_{2} are *not main effects*, the way they would be if there were no interaction term. Rather, they are *conditional effects*.

A main effect would mean that B_{1} is the effect of X_{1}: the difference in the mean of Y for each one-unit change in X_{1}, regardless of the value of X_{2}. But in this model, B_{1} is the effect of X_{1} conditional on X_{2} = 0.

For all values of X_{2} other than zero, the effect of X_{1} is B_{1} + B_{3}X_{2}.
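You can verify this arithmetic directly. Here is a minimal sketch (the coefficient values are made up): the change in predicted Y for a one-unit increase in X_{1}, holding X_{2} fixed, works out to B_{1} + B_{3}X_{2}:

```python
# Hypothetical coefficient values (made up for illustration)
B0, B1, B2, B3 = 1.0, 2.0, -0.5, 1.5

def predicted_y(x1, x2):
    return B0 + B1 * x1 + B2 * x2 + B3 * x1 * x2

def effect_of_x1(x2):
    # Difference in predicted Y when X1 increases by one unit, at a fixed X2
    return predicted_y(1.0, x2) - predicted_y(0.0, x2)

print(effect_of_x1(0.0))  # B1 alone: 2.0
print(effect_of_x1(2.0))  # B1 + B3*2 = 5.0
```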

The biggest practical implication is that when you add an interaction term to a model, the estimates of B_{1} and B_{2} can change drastically by definition (even if B_{3} is not significant) *because B_{1} and B_{2} are measuring a different effect than they were in a model without the interaction term*.

So don’t panic if B_{1} suddenly isn’t significant. It’s measuring something else altogether.

So B_{1}, in the presence of an interaction, is the effect of X_{1} only when X_{2} = 0.

If X_{2} never equals 0 in the data set, then B_{1} has no meaning. None.

This is a good reason to center X_{2}. If X_{2} is centered at its mean, then B_{1} is the effect of X_{1} when X_{2} is at its mean. Much more interpretable.

Even better is to center X_{2} at some meaningful value even if it’s not its mean. For example, if X_{2} is Age of children, perhaps the sample mean is 6.2 years. But 5 is the age when most children begin school, so centering Age at 5 might be more meaningful, depending on the topic being studied.
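Here is a sketch of exactly what centering does to the coefficients, again with made-up, noiseless data: centering X_{2} at a value c shifts B_{1} to B_{1} + B_{3}c, while the interaction coefficient B_{3} is unchanged:

```python
import numpy as np

# Hypothetical noiseless data generated from known, made-up coefficients
rng = np.random.default_rng(1)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(loc=6.2, size=n)  # e.g. child age, sample mean near 6.2
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + 1.5 * X1 * X2

def fit(x2):
    # OLS fit of Y on intercept, X1, x2, and their product
    X = np.column_stack([np.ones(n), X1, x2, X1 * x2])
    return np.linalg.lstsq(X, Y, rcond=None)[0]

B = fit(X2)          # uncentered
Bc = fit(X2 - 5.0)   # X2 centered at age 5

# B1 in the centered model equals the old B1 + 5 * B3;
# the interaction coefficient B3 is identical in both models
print(Bc[1], B[1] + 5.0 * B[3])
print(Bc[3], B[3])
```

So centering changes *which* conditional effect B_{1} reports; it does not change the model's fit or the interaction itself.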

If X_{2} is categorical, the same approach applies, but with a different implication. If X_{2} is dummy coded 0/1, B_{1} is the effect of X_{1} *only for the reference group*.

The effect of X_{1} for the comparison group is B_{1} + B_{3}. To see why, plug in 0 for X_{2} for the reference group and write out the regression equation. Then plug in 1 for X_{2} for the comparison group. Do the algebra.
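Doing that algebra in code (with made-up coefficient values, and X_{2} dummy coded 0/1):

```python
# Hypothetical coefficients (made up); X2 is a 0/1 dummy variable
B0, B1, B2, B3 = 1.0, 2.0, -0.5, 1.5

def predicted_y(x1, group):
    return B0 + B1 * x1 + B2 * group + B3 * x1 * group

# Slope of X1 in each group: predicted change for a one-unit increase in X1
slope_reference = predicted_y(1.0, 0) - predicted_y(0.0, 0)   # equals B1
slope_comparison = predicted_y(1.0, 1) - predicted_y(0.0, 1)  # equals B1 + B3
```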



Omer, it’s not a silly question. Depending on your software, you command it to show the basic summary info for X2. In Stata, it’s sum X2. I forget the SAS command, but it’s something similar.

The sum command will show the mean value of X2 in your data. Say it’s 6.2. You now generate a new, centered variable from X2. In Stata, you would type

gen X2_c = X2 - 6.2

(X2_c is just a name; call it whatever you like.)

Your data is now centered. I believe to center at a different value you would just subtract that value from X2, but that seems too simple. I’m a beginner, too.

Hi Elaine,

No, you got it–it really is that simple. For whatever value you want to center on, just subtract that value from X2.

Nice post. However, I think this only applies if your design matrix uses offset-from-reference (dummy) coding. For an over-parameterized or sigma-restricted model, B1 will be your main effect.

Hi Karen,

This may be a silly question but how do you center X2?

Hi Omer,

You can center X2 by subtracting its mean: X2 - mean(X2). (Standardizing it, i.e. (X2-mean(X2))/sd(X2), also centers X2, but additionally rescales it to standard-deviation units, which changes the scale of the coefficients.)
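For clarity, here is a minimal sketch (with made-up values) of the difference between centering and standardizing: centering subtracts the mean and leaves the scale alone, while standardizing also divides by the standard deviation:

```python
import statistics as st

x2 = [2.0, 4.0, 6.0, 8.0]  # made-up values

# Centering: subtract the mean; the scale (standard deviation) is unchanged
centered = [v - st.mean(x2) for v in x2]

# Standardizing: also divide by the standard deviation, so the sd becomes 1
standardized = [(v - st.mean(x2)) / st.stdev(x2) for v in x2]

print(st.mean(centered))       # 0.0
print(st.stdev(centered))      # same as st.stdev(x2)
print(st.stdev(standardized))  # approximately 1.0
```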
