Model Building Strategies: Step Up and Top Down

How should I build my model?

I get this question a lot, and it’s difficult to answer at first glance–it depends too much on your particular situation.

There are really three parts to the approach to building a model: the strategy, the technique to implement that strategy, and the decision criteria used within the technique.

The choice for all three parts depends on a number of things, including:

  • Your research questions—what information about the variables are you trying to glean from the model?
  • Which specific type of model you’re running—ANOVA, logistic regression, linear mixed model, etc.
  • Issues in the data—how many predictors do you have and how related are they?
  • Purpose for the model—is it purely predictive or are you testing specific effects?

In this article, I will outline the two basic strategies you can take and some considerations for which strategy will work best in your situation.

1. The Top-Down Strategy

In the top-down strategy, you start with a full model and remove predictors that aren’t helping the model.

The top-down strategy is usually appropriate when you have specific hypotheses about the relationship between predictors and the outcome variable.

Picture a two-way ANOVA with specific hypotheses about an interaction and main effects, with some potential control variables.

Those control variables were included in the model just to see if they explain away the effect of the main predictor or explain some of the unexplained variance. We don’t have specific hypotheses about them; we’re simply interested in which version of the model fits best. So we may remove predictors until the model fits reasonably well.

Usually the predictors that are candidates for removal are either covariates—potential control variables—or interactions or quadratic terms you don’t have specific hypotheses about, but you’re just checking.

If you have a key independent variable or even a key control variable in the model, most of the time you’ll leave it in, even if all indicators say it’s not helping the model.  The fact that it’s not predicting the outcome can itself be interesting.
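
To make this concrete, here is a minimal sketch of a top-down pass in Python with statsmodels. The data set, the variable names (treatment, group, age, income), and the simulated outcome are hypothetical placeholders, not from any particular study; the point is just the workflow of fitting the full model and then testing whether a term you had no specific hypothesis about can be dropped.

```python
# A minimal sketch of a top-down pass with statsmodels (hypothetical data
# and variable names). Start from the full model, then test whether the
# interaction we had no specific hypothesis about can be removed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "group": rng.integers(0, 2, n),
    "age": rng.normal(40, 10, n),
    "income": rng.normal(50, 15, n),
})
df["y"] = 2 + 1.5 * df["treatment"] + 0.05 * df["age"] + rng.normal(0, 1, n)

# Full model: key predictors, their interaction, and the control variables.
full = smf.ols("y ~ treatment * group + age + income", data=df).fit()

# Reduced model: drop the interaction, keep the key predictors.
reduced = smf.ols("y ~ treatment + group + age + income", data=df).fit()

# A partial F-test on the nested models helps decide whether the term stays.
print(sm.stats.anova_lm(reduced, full))
```

From there you could repeat the same comparison for each covariate, removing one candidate at a time while the key predictors stay in the model.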

2. The Step-Up Strategy

The step-up strategy starts with an empty model, then slowly adds potential predictors. An empty model is just as it’s described: one with no predictors.

It’s often also called an intercept-only model.

An empty model still models something.  All models have to include two things: an intercept and a measure of residual variance.

In an empty model, the intercept coefficient is the mean of Y, the outcome.  Remember that the intercept always measures the mean of Y when all X=0.  Since there are no Xs in the empty model, the intercept is just the mean of Y.

Likewise, in the empty model, the residual variance is simply the variance of Y.  Usually we think of residual variance as unexplained variance, and that’s true here as well–it’s just that all the variance is unexplained, because we have no predictors explaining any variation in Y.
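
If you want to verify this for yourself, here is a quick sketch in statsmodels using simulated, hypothetical data: the fitted intercept of the empty model equals the sample mean of Y, and its residual variance equals the sample variance of Y.

```python
# Intercept-only ("empty") model on hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"y": rng.normal(10, 3, 500)})

empty = smf.ols("y ~ 1", data=df).fit()

print(empty.params["Intercept"], df["y"].mean())  # intercept = mean of y
print(empty.mse_resid, df["y"].var(ddof=1))       # residual variance = variance of y
```

Both pairs of numbers agree, which is all the empty model is doing: estimating the mean and the total variance of Y.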

The advantage of this strategy is that as you add predictors to the model, you can see whether and how much each predictor reduces the unexplained variation.

It is useful when your model is more exploratory—you’re interested in understanding which predictors are related to the outcome variable.

It’s also a useful strategy when it’s very clear which hypotheses you want to test and which predictors will test them, but not clear how combinations of predictors will work together, or when the focus is on the variation explained by sets of predictors.
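
Here is a minimal sketch of that process, again with hypothetical data and predictor names (x1 and x2): fit the empty model first, then add one predictor at a time and watch the residual variance shrink and R-squared grow.

```python
# Step-up pass: empty model first, then add predictors one at a time
# (hypothetical data and variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y"] = 1 + 0.8 * df["x1"] + 0.3 * df["x2"] + rng.normal(size=n)

for formula in ["y ~ 1", "y ~ x1", "y ~ x1 + x2"]:
    fit = smf.ols(formula, data=df).fit()
    # mse_resid: unexplained variance; rsquared: share of variance explained
    print(formula, round(fit.mse_resid, 3), round(fit.rsquared, 3))
```

Each added predictor shows up directly as a drop in residual variance and a rise in R-squared.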

One thing to keep in mind is that the strategies are distinguishable from the techniques used to implement them and from the criteria you’re using to make decisions.

For example, either an automatic model-building technique like stepwise regression or a more methodical, theory-driven technique can be used to implement either a top-down or a step-up strategy.

Likewise, each technique can be based on different decision criteria at each step. Decisions about whether to add or remove a predictor could be based on any one of a number of measures of overall model fit, or on the significance of that specific predictor.
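
As one example, here is a small sketch (hypothetical data and variable names again) that applies two different criteria to the same step: AIC, a measure of overall model fit, for the models with and without a candidate predictor, versus that predictor’s own p-value.

```python
# Same decision, two criteria: overall fit (AIC) vs. the predictor's p-value.
# Hypothetical data and variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 0.7 * df["x1"] + rng.normal(size=200)

with_x2 = smf.ols("y ~ x1 + x2", data=df).fit()
without_x2 = smf.ols("y ~ x1", data=df).fit()

print("AIC without x2:", round(without_x2.aic, 1), "with x2:", round(with_x2.aic, 1))  # lower is better
print("p-value for x2:", round(with_x2.pvalues["x2"], 3))  # per-predictor criterion
```

Different criteria can point in different directions, so it’s worth deciding up front which one will drive your decisions.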

This, of course, is what makes model building so unique to each situation.

 



Comments

  1. Meenu says

    Hi Karen,

    So basically, this is just about how we want to fit a model, either starting with a full or an empty model. No statistics involved. If so, is it similar to or different from the forward and backward options available in SPSS -> Analyze -> Regression -> Binary Logistic -> Method? In the Method drop-down list we get many options, e.g. Enter, Forward, Backward, etc.

    Can we combine these two approaches with univariate analysis? For example, variables that are significant at p<0.20 in univariate analysis are candidates for the multivariate model. If some 10 variables have p<0.20 at univariate, I put them together and then use a backward elimination approach, removing the variable with the highest p-value one at a time, using 0.05 as the cutoff.

    Thanks
    Meenu

