by Jeff Meyer, MPA, MBA
Suppose you are asked to create a model that will predict who will drop out of a program your organization offers. You decide to use a binary logistic regression because your outcome has two values: “0” for not dropping out and “1” for dropping out.
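To make that setup concrete, here is a minimal sketch of fitting a binary logistic regression to predict dropout. The predictors, coefficients, and data are all hypothetical, generated just for illustration:

```python
# Minimal sketch: binary logistic regression for dropout prediction.
# All feature names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Two hypothetical predictors: sessions attended, days since last contact
sessions = rng.integers(1, 20, size=n)
days_since_contact = rng.integers(0, 60, size=n)
X = np.column_stack([sessions, days_since_contact])

# Hypothetical outcome: dropout (1) more likely with low attendance
# and long gaps in contact
logit = -1.5 - 0.2 * sessions + 0.05 * days_since_contact
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)

# Predicted probability of dropping out for a new participant
new_participant = np.array([[3, 45]])  # few sessions, long silence
prob_dropout = model.predict_proba(new_participant)[0, 1]
print(f"Predicted dropout probability: {prob_dropout:.2f}")
```

The point of the model is that last line: a probability for a future case, not a table of coefficients.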
Most of us were trained in building models for the purpose of understanding and explaining the relationships between an outcome and a set of predictors. But model building works differently for purely predictive models. Where do we go from here?
In explanatory modeling we are interested in identifying variables that have a scientifically meaningful and statistically significant relationship with an outcome.
The primary goal is to test theoretical hypotheses, so the emphasis is on both theoretically meaningful relationships and determining whether each relationship is statistically significant (you know that wonderful feeling you get when your predictors have p-values less than 0.05).
Some of the steps in explanatory modeling include fitting potentially theoretically important predictors, checking for statistical significance, evaluating effect sizes, and running diagnostics.
In predictive modeling our interest is different. Here the goal is to use the associations between predictors and the outcome variable to generate good predictions for future outcomes.
As a result, predictive models are created very differently than explanatory models. The primary goal is predictive accuracy.
Being able to explain why a variable “fits” in the model is left for discussion over beers after work. This gives you the latitude to use predictors that may not have any theoretical value.
Variables that are used in a predictive model are based on association, not statistical significance or scientific meaning.
There are times when statistically significant variables will not be included in a predictive model. A significant predictor that adds no predictive benefit is excluded.
If the predictor is significant but only observable immediately before or at the time of the observed outcome, it cannot be used for predictions.
For example, theoretical models have shown that water temperatures are a highly significant factor in determining whether a tropical storm turns into a hurricane. That variable is not useful in a prediction model of the expected number of hurricanes during the upcoming season because it can only be measured immediately before an impending hurricane.
That’s too late.
A key strategy for successful predictive modeling is to explore. Transforming a continuous predictor, for example by squaring it or taking its square root, is one approach. The primary constraint on including a predictor in the model is its availability when the model is run on future data.
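As a sketch of that exploration, the code below compares a raw predictor against squared and square-root versions by in-sample log loss. The predictor name "income" and the data are hypothetical, generated so the outcome truly depends on the square root:

```python
# Sketch: exploring transformations of a continuous predictor.
# "income" and the outcome are hypothetical illustration data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)
income = rng.uniform(10, 100, size=300)

# True (hypothetical) relationship works through sqrt(income)
p_true = 1 / (1 + np.exp(-(2.0 - 0.4 * np.sqrt(income))))
y = rng.binomial(1, p_true)

# Fit the same one-predictor model with each candidate transformation
candidates = {"raw": income, "squared": income ** 2, "sqrt": np.sqrt(income)}
losses = {}
for name, x in candidates.items():
    model = LogisticRegression().fit(x.reshape(-1, 1), y)
    probs = model.predict_proba(x.reshape(-1, 1))[:, 1]
    losses[name] = log_loss(y, probs)  # lower is better
    print(f"{name}: log loss = {losses[name]:.3f}")
```

Whichever transformation predicts best is a candidate for the model, whether or not it has a theoretical rationale.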
The primary risk when creating a predictive model is "overfitting." Overfitting results from creating a model that fits the current sample so closely that it is not a good representation of the population.
How can you reduce this risk?
Use half of your data to create your model. Then test your model on the other half.
The data used to create the model is generally known as the "training set." The data used for testing the model is called the "testing" or "validation" set. This process is known as "cross-validation."
You may find that you will need to modify your model to better fit both sets of data.
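The split-sample check described above can be sketched as follows. The data and the three unnamed predictors are hypothetical; the point is the comparison of training and testing accuracy:

```python
# Sketch: fit on one half ("training set"), evaluate on the other
# half ("testing set"). Data and predictors are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))            # three hypothetical predictors
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1]  # third predictor carries no signal
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Hold out half of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"training accuracy: {train_acc:.3f}")
print(f"testing accuracy:  {test_acc:.3f}")
```

A large gap between the two accuracies is the warning sign of overfitting; similar numbers suggest the model will hold up on future data.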
Jeff Meyer is a statistical consultant with The Analysis Factor, a stats mentor for Statistically Speaking membership, and a workshop instructor. Read more about Jeff here.