Imputation as an approach to missing data has been around for decades. You probably learned about mean imputation in methods classes, only to be told that you should never do it for a variety of very good reasons. Mean imputation, in which each missing value is replaced, or imputed, with the mean of observed values of that variable, is not the only type of imputation, however.

Better, although still problematic, imputation approaches use other variables in the data set to predict the missing value, and contain a random component. Using other variables preserves the relationships among variables in the imputations. The random component is important so that all missing values of a single variable are not all exactly equal. One example would be to use a regression equation to predict missing values, then add a random error term.
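The post doesn't include code, but the regression-plus-error idea can be sketched in a few lines of Python (a minimal illustration with made-up toy data, not the author's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x, and about 20% of y is missing.
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)
missing = rng.random(n) < 0.2
y_obs = y.copy()
y_obs[missing] = np.nan

# Fit a regression of y on x using the complete cases.
obs = ~np.isnan(y_obs)
X = np.column_stack([np.ones(obs.sum()), x[obs]])
beta, *_ = np.linalg.lstsq(X, y_obs[obs], rcond=None)
resid_sd = np.std(y_obs[obs] - X @ beta, ddof=2)

# Impute each missing value as its predicted value plus a random error term,
# so imputed values of the same variable are not all identical.
pred = beta[0] + beta[1] * x[missing]
y_imp = y_obs.copy()
y_imp[missing] = pred + rng.normal(scale=resid_sd, size=missing.sum())
```

Because the error term is drawn from the residual distribution, the imputed values preserve both the regression relationship and the scatter around it.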

Although this approach solves many of the problems inherent in mean imputation, one problem remains. Because an imputed value is an estimate, a predicted value, there is uncertainty about its true value. Every statistic has uncertainty, measured by its standard error, but statistics computed from imputed data have additional uncertainty that their standard errors do not capture: your statistical package cannot distinguish an imputed value from an observed one, so it treats every value as equally certain.

Since the standard errors of statistics based on imputed values, such as sample means or regression coefficients, are too small, the corresponding p-values are also too small. P-values reported as smaller than they really are inflate the Type I error rate.

Multiple imputation solves this problem by incorporating the uncertainty inherent in imputation into the analysis. It has four steps:

- Create *m* sets of imputations for the missing values, using an imputation process with a random component.
- The result is *m* full data sets. Each data set will have slightly different values for the imputed data because of the random component.
- Analyze each completed data set. Each set of parameter estimates will differ slightly because the data differ slightly.
- Combine results, calculating the variation in parameter estimates.
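The four steps above can be sketched in Python on toy data. This is a simplified illustration, not a production implementation: a fully proper multiple imputation would also redraw the imputation model's coefficients (e.g. from their posterior or via a bootstrap) for each imputation, which this sketch skips, so it slightly understates the between-imputation variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends on x (true slope 1.5), about 20% of y missing.
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
miss = rng.random(n) < 0.2
y[miss] = np.nan

# Imputation model fit on complete cases.
obs = ~np.isnan(y)
X_obs = np.column_stack([np.ones(obs.sum()), x[obs]])
beta_hat, *_ = np.linalg.lstsq(X_obs, y[obs], rcond=None)
sd = np.std(y[obs] - X_obs @ beta_hat, ddof=2)

m = 5
estimates, variances = [], []
for _ in range(m):
    # Steps 1-2: impute with a random component -> one completed data set.
    y_c = y.copy()
    pred = beta_hat[0] + beta_hat[1] * x[miss]
    y_c[miss] = pred + rng.normal(scale=sd, size=miss.sum())
    # Step 3: analyze the completed data set (slope of y on x).
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, y_c, rcond=None)
    resid = y_c - X @ b
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    estimates.append(b[1])
    variances.append(cov[1, 1])

# Step 4: combine with Rubin's rules.
q_bar = np.mean(estimates)          # pooled estimate
w = np.mean(variances)              # within-imputation variance
b_var = np.var(estimates, ddof=1)   # between-imputation variance
total_var = w + (1 + 1 / m) * b_var
```

The pooled variance `total_var` is larger than the within-imputation variance alone, which is exactly how multiple imputation restores the uncertainty that single imputation hides.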

Remarkably, *m*, the number of imputations needed, can be as small as 5 to 10, although more are needed when a larger percentage of the data is missing. Done well, the result is unbiased parameter estimates with the full sample size retained.
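Why such a small *m* suffices can be seen from Rubin's relative-efficiency formula (not stated in the post, but standard in the multiple imputation literature): an estimate based on *m* imputations has efficiency roughly (1 + γ/m)⁻¹ relative to one based on infinitely many, where γ is the fraction of missing information.

```python
def relative_efficiency(gamma: float, m: int) -> float:
    """Rubin's approximate efficiency of m imputations vs. infinitely many,
    where gamma is the fraction of missing information."""
    return 1.0 / (1.0 + gamma / m)

# Even with 30% missing information, 5 imputations are about 94% efficient.
print(round(relative_efficiency(0.3, 5), 3))  # → 0.943
```

Beyond roughly 10 imputations the efficiency gains become negligible unless the fraction of missing information is very high.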

Doing multiple imputation well, however, is not always quick or easy. First, it requires that the missing data be ignorable. Second, it requires a very good imputation model. Creating a good imputation model requires knowing your data very well and having variables that will predict missing values.

Multiple imputation is available in SAS, S-Plus, and now SPSS 17.0, making it a much more accessible option for researchers.

For more information on what makes missing data ignorable, see my article, Missing Data Mechanisms.

### Related Posts

- Two Recommended Solutions for Missing Data: Multiple Imputation and Maximum Likelihood
- EM Imputation and Missing Data: Is Mean Imputation Really so Terrible?
- Member Training: What’s the Best Statistical Package for You?
- Ten Ways Learning a Statistical Software Package is Like Learning a New Language
