In a Regression model, should you drop interaction terms if they’re not significant?
In an ANOVA, adding interaction terms still leaves the main effects as main effects. That is, as long as the data are balanced, the main effects and the interactions are independent. The main effect is still telling you if there is an overall effect of that variable, after accounting for other variables in the model.
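The balance point can be seen directly: a quick sketch (not from the original post) showing that in a balanced 2x2 design with effect coding, the main-effect columns and the interaction column are mutually orthogonal, which is why adding the interaction leaves the main-effect estimates alone.

```python
import numpy as np

# Balanced 2x2 design, one observation per cell, effect coded (-1/+1)
a = np.array([-1, -1, 1, 1])   # factor A main-effect column
b = np.array([-1, 1, -1, 1])   # factor B main-effect column
ab = a * b                     # interaction column

# In a balanced design all pairwise dot products are zero,
# so the three columns are orthogonal (independent) predictors.
print(a @ b, a @ ab, b @ ab)   # -> 0 0 0
```

With unbalanced data these dot products are no longer zero, and the main-effect estimates shift when the interaction enters the model.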
But in regression, adding interaction terms makes the coefficients of the lower order terms conditional effects, not main effects. That means that the effect of one predictor is conditional on the value of the other. The coefficient of the lower order term isn’t the effect of that term. It’s the effect only when the other term in the interaction equals 0.
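To make "conditional effect" concrete, here is a small simulation with made-up coefficients (assumed purely for illustration): in y = 2 + 3·x1 + 1·x2 + 0.5·x1·x2, the fitted "x1 coefficient" of 3 is the slope of x1 only when x2 = 0; at any other x2 the slope is b1 + b3·x2.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
# Noise-free outcome so the fit recovers the coefficients exactly
y = 2 + 3 * x1 + 1 * x2 + 0.5 * x1 * x2

# Design matrix: intercept, x1, x2, interaction
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.round(b, 3))          # -> [2.  3.  1.  0.5]

# The slope of x1 is conditional on x2: b1 + b3 * x2.
# At x2 = 0 it is 3.0; at x2 = 4 it is 3 + 0.5*4 = 5.0.
print(b[1] + b[3] * 0, b[1] + b[3] * 4)
```

Centering x2 before building the interaction changes what "x2 = 0" means, and so changes the value (and interpretation) of the lower order coefficient, without changing the model's fit.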
So if an interaction isn’t significant, should you drop it?
If you are just checking for the presence of an interaction to make sure you are specifying the model correctly, go ahead and drop it. The interaction uses up degrees of freedom, changes the meaning of the lower order coefficients, and complicates the model.
But if you actually hypothesized an interaction that wasn’t significant, leave it in the model. The insignificant interaction means something in this case–it helps you evaluate your hypothesis. Taking it out can do more damage through specification error than it will save in degrees of freedom.
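One common way to evaluate that hypothesized interaction is a nested-model comparison. A hedged sketch on simulated data (effect sizes made up; here the true interaction is zero), using only numpy to compute the incremental F statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=(2, n))
# True model has no interaction
y = 1 + 2 * x1 + 0.5 * x2 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an OLS fit."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return r @ r

X_reduced = np.column_stack([np.ones(n), x1, x2])
X_full = np.column_stack([X_reduced, x1 * x2])

rss_r, rss_f = rss(X_reduced, y), rss(X_full, y)
# Incremental F test: 1 extra parameter in the full model
F = (rss_r - rss_f) / (rss_f / (n - X_full.shape[1]))
print(round(F, 2))  # typically small here, since the true interaction is zero
```

A non-significant F answers the question you asked; whether the term then stays in the model depends on whether the interaction was the hypothesis or just a specification check.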
The same is true in ANOVA models.
And as always, leave in any lower order terms, significant or not, for any higher order terms in the model. That means you have to leave in all insignificant two-way interactions for any significant three-way interactions.