**by Jeff Meyer, MPA, MBA**

The concept of a statistical interaction is one of those things that seems very abstract. Obtuse definitions, like this one from Wikipedia, don’t help:

In statistics, an interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Most commonly, interactions are considered in the context of regression analyses.

First, we know this is true because we read it on the internet! Second, are you more confused now about interactions than you were before you read that definition?

If you’re like me, you’re wondering: What in the world is meant by “the relationship among three or more variables”? (As it turns out, the author of this definition is referring to an interaction between two predictor variables and their influence on the outcome variable.)

But, Wikipedia aside, statistical interaction isn’t so bad once you really get it. (Stay with me here.)

Let’s take a look at the interaction between two dummy coded categorical predictor variables.

The data set for our example is the 2014 General Social Survey conducted by the independent research organization NORC at the University of Chicago. The outcome variable for our linear regression will be “job prestige.” Job prestige is an index, ranked from 0 to 100, of 700 jobs put together by a group of sociologists. The higher the score, the more prestigious the job.

We’ll regress job prestige on marital status (no/yes) and gender.

First question: Do married people have more prestigious jobs than non-married people?

The answer is yes. On average, married people have a job prestige score 5.9 points higher than those who are not married.

Next question: Do men have jobs with higher prestige scores than women?

The answer is no. Our output tells us there is no difference between men and women in the prestige of their jobs. Men’s average job prestige score is only 0.18 points higher than women’s, and this difference is not statistically significant.

But can we conclude that for all situations? Is it possible that the difference in job prestige scores between unmarried men and women differs from the difference between married men and women? Or will it be pretty much the same as in the table above?

Here’s where the concept of interaction comes in. We need to use an interaction term to determine that. With the interaction we’ll generate predicted job prestige values for the following four groups: male-unmarried, female-unmarried, male-married and female-married.

Here are our results when we regress job prestige on marital status, gender, and the interaction between married and male:

Everything is significant, but how in the world do we read this table? We are shown four values: married = 4.28, male = −1.86, married & male = 3.6, and the constant, 41.7. It helps to piece the coefficients together to create the predicted scores for the four groups mentioned above.

Let’s start with the constant. It’s in every regression table (unless we make the decision to exclude it). What exactly does it represent? The constant represents the predicted value when all variables are at their base case. In this situation our base case is married equals no and gender equals female (married = 0 and gender = 0).

So an unmarried woman has, on average, a predicted job prestige score of **41.7**. An unmarried man’s score is calculated by adding the coefficient for “male” to the constant (the unmarried woman’s score plus the value for being male): 41.7 − 1.9 = **39.8**.

To calculate a married woman’s score we start with the constant (unmarried female) and add to it the value for being married: 41.7 + 4.3 = **46.0**.

How do we calculate a married man’s score? Always start with the constant and then add any coefficients that apply to the group. So we add to the constant the value of being married, the value of being male, and the extra value for being both married and male: 41.7 + 4.3 − 1.9 + 3.6 = **47.7**.
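The arithmetic above can be sketched in a few lines of Python. (This is just an illustration of the bookkeeping; the coefficients are the rounded values from the regression table in the post.)

```python
# Rounded coefficients from the regression table in the post
constant = 41.7          # base case: unmarried (married = 0) woman (male = 0)
b_married = 4.3
b_male = -1.9
b_married_male = 3.6     # interaction coefficient

# Predicted job prestige for each of the four groups
predicted = {
    "unmarried woman": constant,
    "unmarried man":   constant + b_male,
    "married woman":   constant + b_married,
    "married man":     constant + b_married + b_male + b_married_male,
}

for group, score in predicted.items():
    print(f"{group}: {score:.1f}")
# unmarried woman: 41.7
# unmarried man: 39.8
# married woman: 46.0
# married man: 47.7
```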

Notice that among unmarried people, women’s average job prestige score is 1.9 points higher than men’s, while among married people, women’s score is 1.7 points lower than men’s. The difference in those differences is 3.6 (1.9 − (−1.7)), which is exactly the coefficient of our interaction term.
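This “difference in differences” reading of the interaction coefficient is easy to verify from the four predicted scores in the post:

```python
# Predicted job prestige scores from the post
unmarried_woman, unmarried_man = 41.7, 39.8
married_woman, married_man = 46.0, 47.7

# Gender gap (women minus men) within each marital status
gap_unmarried = unmarried_woman - unmarried_man   # +1.9
gap_married = married_woman - married_man         # -1.7

# The difference in those gaps is the interaction coefficient
print(round(gap_unmarried - gap_married, 1))      # 3.6
```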

For those of you who use Stata, the simple way to calculate the predicted values for all four groups is to use the post-estimation command **margins**.

*margins sex#married, cformat(%6.2f)*

(These same means can be calculated using the lsmeans option in SAS’s proc glm or using EMMeans in SPSS’s Univariate GLM command).

If we had not used the interaction we would have concluded that there is no difference in the average job prestige score between men and women. By using the interaction, we found that there *is* a different relationship between gender and job prestige for unmarried compared to married people.

So that’s it. Not so bad, right? Now go forth and analyze.

### Related Posts

- Understanding Interactions Between Categorical and Continuous Variables in Linear Regression
- Interpreting Lower Order Coefficients When the Model Contains an Interaction
- Using Marginal Means to Explain an Interaction to a Non-Statistical Audience
- March 2018 Member Webinar: Using Transformations to Improve Your Linear Regression Model


Thanks for this wonderful explanation!

One question: What if, say, the coefficient for being married is insignificant, can we still sum up the values to get the predicted value of being male and married? Thank you!

Hi,

I’m glad you found this post helpful. Yes, you follow the same procedure whether married is significant or not.

Hi, can you tell me why contrast and pwcompare give different results? Thanks!!!

Sorry, I meant pwcompare and margins.

Hi,

If you use the following command: margins married#sex, the margins command will give you the marginal means for each of the four paired groups.

If you use this command: “margins married#sex, pwcompare”, you will get the differences (contrasts) between the paired groups, the standard errors, and the 95% CIs for their differences.

With this command, “pwcompare married#sex, pveffects”, you will get the differences (contrasts) between the paired groups, the standard errors, t scores, and the p-values.

Choosing which command to use is a matter of determining what results you are looking to report.

Thanks!! Also for replying so quickly!!

Hi, thanks for this powerful explanation. I have one additional question: how do I interpret main and interaction effects when I have more than two (interacting) covariates? Which is the base (reference) value, and what does the constant mean?

Hi, the base value is the category of the categorical variable that is not shown in the regression table output. The constant is the combination of all base categories for the categorical variables in your model. For example, say you have three predictors in your model: gender, marital status, and education. If the categories not shown in your output are male for gender, single for marital status, and college graduate for education, then your constant represents single males with a college degree.

If you have a three-way interaction I would suggest you use your software’s marginal means calculations (the margins command in Stata, the emmeans package in R, and EMMEANS in SPSS) to help you interpret the results and graph them. In the course I teach on linear models I show how to do this in a spreadsheet as well as with your statistical software to understand the output.

To test the significance of each grouping in a three way interaction you will want to use your software’s pairwise comparison command.

Great! Thank you!!!

Hi,

Thanks for your explanation. This helped me a lot to understand what I needed. However, I used a survey design and a Poisson regression model to estimate prevalence ratios (IRR). What is the syntax? Currently, my syntax is: svy linearized: poisson Depresion_1 i.SEXO2 i.ns10_recod i.accidente i.familia i.estres_financiero EDAD, irr

But the reviewer told me that I need to evaluate the interaction between sex and the other variables. How can I do this???

Cheers!

The easiest way to create an interaction of one variable with all variables is:

poisson Depresion_1 i.SEXO2##(i.ns10_recod i.accidente i.familia i.estres_financiero c.EDAD), irr

You need to put a “c.” in front of continuous variables such as age (EDAD). The other way to code it is:

poisson Depresion_1 i.SEXO2 i.ns10_recod i.accidente i.familia i.estres_financiero c.EDAD i.SEXO2#i.ns10_recod i.SEXO2#i.accidente i.SEXO2#i.familia i.SEXO2#i.estres_financiero i.SEXO2#c.EDAD, irr

Doing it this way allows you to easily drop any interactions that are not significant.

Jeff

Thank you very much! It worked!! The analysis showed that there is no interaction effect between sex and any of the other variables (according to the p-values).

I’m really sorry I was not able to answer the comment before, and thank you. My computer had “died” and I could not recover the data to do the analysis until now.

But what do the numbers below the variable mean? Are they the values of each variable being tested?

SEXO2#ns10_recod

2 2

2 3

Cheers

Hi Gabriel,

Assuming this comes from the margins command, the first number 2 on the first line represents the 2nd category of the predictor SEXO2, and the second 2 is the 2nd category of the variable ns10_recod. The second line is similar: the 2nd category of SEXO2 and the 3rd category of ns10_recod.

Awesome explanation. I just have one question. How do we understand if the difference between the unmarried women and married men is significant? Does it have anything to do with the interaction?

Thanks. Yes, you can test to see if there is a significant difference between unmarried women and married men. You can test any pairing within an interaction. How you do it depends upon your software. In Stata you would use the post-estimation command “pwcompare” or “contrast”. In R you can use the “contrast” function, and in SPSS you would run your comparisons through the “emmeans” statement within “unianova”.
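For intuition, the point estimates of all those pairwise contrasts can be listed from the four predicted means in the post. (This sketch only gives the differences; the standard errors and p-values have to come from your software’s pairwise comparison commands.)

```python
from itertools import combinations

# Predicted job prestige means from the post
means = {
    "unmarried woman": 41.7,
    "unmarried man": 39.8,
    "married woman": 46.0,
    "married man": 47.7,
}

# All six pairwise contrasts among the four groups
for a, b in combinations(means, 2):
    print(f"{a} vs {b}: {means[a] - means[b]:+.1f}")
```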

What would the interpretation be if you had a GLM with a negative binomial distribution?

A negative binomial GLM uses incidence rate ratios (IRRs), the exponentiated coefficients, for comparison. For a categorical interaction, the IRR represents how much greater (or less) the rate is than the base category. For example, an IRR of 1.25 can be interpreted as “25% greater than the base category.” If you determine your IRRs in a manner similar to what is shown in this article, you must add the non-exponentiated coefficients and then exponentiate the sums to obtain the IRRs. It’s much easier to use your statistical software’s post hoc commands to calculate these, such as Stata’s margins command or SPSS’s EMMEANS.
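That last point, summing coefficients on the log scale first and only then exponentiating, can be sketched like this (the coefficient values here are made up purely for illustration):

```python
import math

# Hypothetical log-scale (non-exponentiated) coefficients from a
# negative binomial model with a two-way categorical interaction
b_group = 0.10
b_treated = 0.15
b_interaction = 0.08

# Sum on the log scale, then exponentiate to get the IRR for the
# interacted group relative to the base category
irr = math.exp(b_group + b_treated + b_interaction)
print(round(irr, 3))  # 1.391

# Equivalently, individual IRRs multiply (never add) on the ratio scale
assert math.isclose(
    irr, math.exp(b_group) * math.exp(b_treated) * math.exp(b_interaction)
)
```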

Simplified and educative indeed