I’m sure I don’t need to explain to you all the problems that occur as a result of missing data. Anyone who has dealt with missing data—that means everyone who has ever worked with real data—knows about the loss of power and sample size, and the potential bias in your data that comes with listwise deletion.

Listwise deletion is the default method for dealing with missing data in most statistical software packages. It simply means excluding from the analysis any cases with data missing on any variables involved in the analysis.

A very simple, and in many ways appealing, method devised to overcome these problems is mean imputation. Once again, I’m sure you’ve heard of it: just plug in the variable’s mean for all of its missing values. The nice part is that the mean isn’t affected, and you don’t lose the case from the analysis. And it’s so easy! SPSS even has a little button to click to impute all those means.
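As a quick illustration, here is a minimal mean-imputation sketch in Python (the helper name `mean_impute` and the data are made up for illustration):

```python
from statistics import mean

def mean_impute(values):
    """Replace None (missing) entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    m = mean(observed)
    return [m if v is None else v for v in values]

scores = [4.0, None, 6.0, 8.0, None, 2.0]
print(mean_impute(scores))  # both missing entries become 5.0, the observed mean
```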

But mean imputation creates new problems. True, the mean doesn’t change, but the relationships with other variables do. And those relationships are usually what you’re interested in, right? Well, now they’re biased. And while the sample size remains at its full value, the standard error of that variable will be vastly underestimated, and the underestimation gets worse as the proportion of missing data grows. Too-small standard errors lead to too-small p-values, so now you’re reporting significant results that shouldn’t be there.
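To see the standard-error problem concretely, here is a small Python sketch with made-up numbers. Filling in the mean for missing cases adds observations but no variation, so the estimated standard error shrinks even though no real information was gained:

```python
import math
from statistics import mean, stdev

observed = [2.0, 4.0, 6.0, 8.0, 10.0]
# Pretend five more cases are missing and get filled with the observed mean.
imputed = observed + [mean(observed)] * 5

se_observed = stdev(observed) / math.sqrt(len(observed))
se_imputed = stdev(imputed) / math.sqrt(len(imputed))
print(se_observed, se_imputed)  # the "completed" data report a much smaller SE
```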

There are other options. Multiple Imputation and Maximum Likelihood both solve these problems. But while Multiple Imputation is available in all the major stats packages, it is very labor-intensive to do well. And Maximum Likelihood isn’t hard or labor-intensive, but it requires structural equation modeling software, such as Amos or Mplus.

The good news is there are other imputation techniques that are still quite simple, and don’t cause bias in some situations. And sometimes (although rarely) it really is okay to use mean imputation. When?

If your rate of missing data is very, very small, it honestly doesn’t matter what technique you use. I’m talking very, very, very small (2-3%).

There is another, better method for imputing single values, however, that is only slightly more difficult than mean imputation. It uses the EM algorithm, which stands for Expectation-Maximization. It is an iterative procedure that uses other variables to impute a value (Expectation), then checks whether that is the most likely value (Maximization). If not, it re-imputes a more likely value. This continues until it reaches the most likely value.
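In full generality, EM iterates on the means and covariances of all the variables jointly. As a rough illustration of the iterate-and-re-impute idea only, here is a deliberately simplified Python sketch that re-imputes a missing value from a fitted regression line until it stabilizes (the function name and data are made up; this is not a complete EM implementation):

```python
from statistics import mean

def em_impute(x, y, n_iter=20):
    """Simplified EM-style imputation of missing y values from x."""
    obs = [i for i, v in enumerate(y) if v is not None]
    mis = [i for i, v in enumerate(y) if v is None]
    y = list(y)
    start = mean(y[i] for i in obs)
    for i in mis:
        y[i] = start  # initial guess: the observed mean
    for _ in range(n_iter):
        # "Maximization" (simplified): fit a least-squares line to all cases
        mx, my = mean(x), mean(y)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
        a = my - b * mx
        # "Expectation" (simplified): re-impute missing y from the fitted line
        for i in mis:
            y[i] = a + b * x[i]
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, None, 8.0, 10.0]   # y = 2x with one value missing
print(em_impute(x, y))            # the missing value converges to 6.0
```

Unlike mean imputation, the imputed value here respects the relationship between x and y, which is exactly the property the EM approach preserves.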

EM imputations are better than mean imputations because they preserve the variable’s relationships with other variables, which is vital if you go on to use something like factor analysis or regression. They still underestimate standard errors, however, so once again this approach is reasonable only if the percentage of missing data is very small (under 5%) and if the standard error of individual items is not vital (as when combining individual items into an index).

The heavy hitters like Multiple Imputation and Maximum Likelihood are still superior methods of dealing with missing data and are in most situations the only viable approach. But you need to fit the right tool to the size of the problem. It may be true that backhoes are better at digging holes than trowels, but trowels are just right for digging small holes. It’s better to use a small tool like EM when it fits than to ignore the problem altogether.

EM imputation is available in SAS, Stata, R, and in SPSS’s Missing Values Analysis module.

Comments

Hi Karen, thanks for the valuable information about missing data. You mentioned

“If your rate of missing data is very, very small, it honestly doesn’t matter what technique you use. I’m talking very, very, very small (2-3%)”

Do you have a reference for that? It would be really useful.

Regards

Hassan

Ooh, I did once. Perhaps it’s in John Graham’s very good article: http://www.stats.ox.ac.uk/~snijders/Graham2009.pdf

Thanks for providing the reference link!

Hi,

I want some datasets with missing data (I can’t just remove data myself; it has to be random). Can you suggest some? Or can you suggest a way to remove data with a program or software?

I also need some datasets that form non-convex sets. I need these for my experiments on EM and other algorithms.

Thanks

Hey Karen,

do you have a reliable reference for that 5% limit on missing values for using EM? You would help me a lot! 🙂

Try: Annual Review of Psychology (Graham, 2009)

Hello, I don’t think you’re quite right about EM underestimating parameters. In this process, the variance and covariance of that variable are also corrected, as explained in “The SAGE Handbook of Social Science Methodology” by William Outhwaite and Stephen Turner.

Hi Karen,

Is it correct to say that once I click on “impute missing data” for a specific variable, that variable will have no missing data in the imputed dataset?

I would like to check what went wrong with my procedure.

I clicked on Multiple Imputation –> Impute Missing Data Values in SPSS. All the tabs were left at their defaults.

After I clicked “OK,” I noticed that missing data are still present in the datasets for imputation_1, imputation_2, imputation_3, imputation_4, and imputation_5.

Greatly appreciate if you could guide me through.

Thanks very much!

Hi Pei,

Hmm, that is indeed what should happen. I would have to troubleshoot it to figure out what is going wrong. It could be some default in your version of SPSS. I can’t think of one off the top of my head, though that’s often the cause.

Hi Karen

Would you tend to use “as many variables as possible” as predictors in EM imputation, or only construct-relevant ones? Why?

E.g., use socioeconomic status, IQ, and so forth as predictors for reading proficiency?

Thanks!

Hi Sebastian,

As a general rule, you want to use as many predictors as are helpful for prediction. So there may be a predictor that isn’t theoretically important but is helpful for prediction (for whatever unknown reason). But you don’t want to throw in everything you have; adding unhelpful predictors just raises standard errors.

What is the R function for the EM imputation?

Thanks

Kenny, I don’t use R (maybe an R user can jump in here), but I believe MICE can do it. I am pretty sure it does multiple imputation, and EM is generally one way of doing MI.

Just explore the “MI” package on R’s website

Or try the function “impute.mdr” from the imputeMDR package

Hi Karen,

Just wondering what you would recommend to do with imputed EM values for ordinal scales. Do you leave the imputed values (with decimal places) or do you recode so that values lie within the original values (from 1.001 to 1.499 = 1 for example). The imputed values are needed for a CFA and multiple regression.

Thanks for your time,

Kirstine.

Hi Karen,

Not sure if you responded to Kirstine, but I had the same question on imputed EM values for ordinal scales.

Thanks in advance.

Marsha

Hi Marsha,

As a general rule, you don’t want to round off any imputations. Even if the imputed values look weird, you need to have variation in there, so don’t round them off.