EM Imputation and Missing Data: Is Mean Imputation Really so Terrible?

by Karen

I’m sure I don’t need to explain to you all the problems that occur as a result of missing data.  Anyone who has dealt with missing data—that means everyone who has ever worked with real data—knows about the loss of power and sample size, and the potential bias in your data that comes with listwise deletion.

Listwise deletion is the default method for dealing with missing data in most statistical software packages.  It simply means excluding from the analysis any cases with data missing on any variables involved in the analysis.

A very simple, and in many ways appealing, method devised to overcome these problems is mean imputation. Once again, I’m sure you’ve heard of it–just plug in the mean for that variable for all the missing values. The nice part is the mean isn’t affected, and you don’t lose that case from the analysis. And it’s so easy! SPSS even has a little button to click to just impute all those means.
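If you're working in R rather than SPSS, the same shortcut is a one-liner. Here's a minimal sketch in base R, using the built-in airquality data (which has missing Ozone values) purely as a stand-in for your own:

```r
# Mean imputation: replace every missing Ozone value with the observed mean.
# (Easy, but see the problems described below.)
dat <- airquality
dat$Ozone[is.na(dat$Ozone)] <- mean(dat$Ozone, na.rm = TRUE)
sum(is.na(dat$Ozone))   # 0 -- no cases lost
```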

But this creates new problems. True, the mean doesn't change, but the relationships with other variables do. And those relationships are usually what you're interested in, right? Well, now they're biased. And while the sample size stays at its full value, the standard error of that variable will be vastly underestimated, and this underestimation gets bigger the more missing data you have. Too-small standard errors lead to too-small p-values, so now you're reporting significant results that shouldn't be there.
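If you want to see this shrinkage for yourself, here's a quick sketch in R (again using the built-in airquality data as a stand-in for your own):

```r
x     <- airquality$Ozone                              # has missing values
x_imp <- ifelse(is.na(x), mean(x, na.rm = TRUE), x)    # mean-imputed copy

mean(x, na.rm = TRUE); mean(x_imp)                     # means are identical
sd(x, na.rm = TRUE);   sd(x_imp)                       # sd shrinks after imputation

# The standard error shrinks even more, because the "sample size" grows too:
sd(x, na.rm = TRUE) / sqrt(sum(!is.na(x)))             # SE from complete cases
sd(x_imp) / sqrt(length(x_imp))                        # SE after mean imputation
```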

There are other options. Multiple Imputation and Maximum Likelihood both solve these problems. But Multiple Imputation is not available in all the major stats packages, and it is very labor-intensive to do well. And Maximum Likelihood isn't hard or labor-intensive, but it requires structural equation modeling software, such as Amos or Mplus.
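In R, for example, the mice package handles the impute-analyze-pool mechanics of Multiple Imputation; the labor is in doing it well (choosing imputation models and predictors, checking convergence, and so on). A minimal sketch, with everything left at defaults purely for illustration:

```r
# Multiple imputation with the 'mice' package (defaults for illustration only).
library(mice)

imp <- mice(airquality, m = 5, seed = 123)             # create 5 imputed datasets
fit <- with(imp, lm(Ozone ~ Solar.R + Wind + Temp))    # fit the model in each one
summary(pool(fit))                                     # pool results (Rubin's rules)
```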

The good news is there are other imputation techniques that are still quite simple and, in some situations, don't cause bias. And sometimes (although rarely) it really is okay to use mean imputation. When?

If your rate of missing data is very, very small, it honestly doesn’t matter what technique you use. I’m talking very, very, very small (2-3%).

There is another, better method for imputing single values, however, that is only slightly more difficult than mean imputation. It uses the EM algorithm, which stands for Expectation-Maximization. It is an iterative procedure that uses the other variables to impute a value (Expectation), then checks whether that is the most likely value (Maximization). If not, it re-imputes a more likely value. This continues until it converges on the most likely value.

EM imputations are better than mean imputations because they preserve the relationships with other variables, which is vital if you go on to use something like Factor Analysis or Regression. They still underestimate standard errors, however, so once again, this approach is only reasonable if the percentage of missing data is very small (under 5%) and if the standard error of individual items is not vital (as when combining individual items into an index).

The heavy hitters like Multiple Imputation and Maximum Likelihood are still superior methods of dealing with missing data and are in most situations the only viable approach. But you need to fit the right tool to the size of the problem. It may be true that backhoes are better at digging holes than trowels, but trowels are just right for digging small holes. It’s better to use a small tool like EM when it fits than to ignore the problem altogether.

EM imputation is available in SAS, Stata, R, and the SPSS Missing Values Analysis module.
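In R, one option is the norm package, which implements Schafer's EM approach for numeric variables under an approximate multivariate normality assumption. A minimal sketch (the built-in dataset here is just a placeholder for your own):

```r
# EM-based single imputation with the 'norm' package.
library(norm)

dat <- as.matrix(airquality[, c("Ozone", "Solar.R", "Wind", "Temp")])

s        <- prelim.norm(dat)            # summarize missing-data patterns
thetahat <- em.norm(s)                  # EM: ML estimates of means and covariances
rngseed(1234)                           # set the seed (required before imputing)
imputed  <- imp.norm(s, thetahat, dat)  # impute missing values given the EM estimates
```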

Comments

mehdi June 6, 2017 at 7:40 am

Hi Karen,
Do you have R code for the EM algorithm? I just want to impute missing data with EM.

Hassan Qutub February 7, 2016 at 2:30 pm

Hi Karen, thanks for the valuable information about missing data. You mentioned
“If your rate of missing data is very, very small, it honestly doesn’t matter what technique you use. I’m talking very, very, very small (2-3%)”
Do you have a reference for that? It would really be useful.

Regards

Hassan

Karen February 22, 2016 at 2:30 pm

Ooh, I did once. Perhaps it’s in John Graham’s very good article: http://www.stats.ox.ac.uk/~snijders/Graham2009.pdf

Tara July 4, 2016 at 1:50 pm

Thanks for providing the reference link!

Sushant December 11, 2015 at 6:25 pm

Hi,
I want some datasets with missing data (I can't just remove data myself; it has to be random). Can you suggest some? Or can you suggest a way to remove data using a program or software?
I also need some datasets that are non-convex; I need these for my experiments on EM and other algorithms.
Thanks

Bianca February 25, 2015 at 9:50 am

Hey Karen,
do you have a reliable reference for that 5% limit on missing data for using EM? You would help me a lot! 🙂

Madlaina June 15, 2015 at 8:55 am

Try: Annual Review of Psychology (Graham, 2009)

Lorelai January 20, 2015 at 11:07 am

Hello, I don't think you're quite right about EM underestimating parameters. In this process, the variance and covariance of that variable are also corrected … as explained in "The SAGE Handbook of Social Science Methodology" by William Outhwaite and Stephen Turner.

pei ting October 29, 2014 at 9:58 am

Hi Karen,

Is it correct to say that once I clicked on "impute missing data" for a specific variable, that variable will have no missing data in the imputed datasets?

I would like to check what went wrong with my procedure.

I clicked on Multiple Imputation –> Impute Missing Data Values in SPSS. All the tabs were left at their defaults.

After I clicked "OK" to impute missing data, I noticed that some missing data were still present in the datasets imputation_1, imputation_2, imputation_3, imputation_4, and imputation_5.

Greatly appreciate if you could guide me through.

Thanks very much!

Karen November 3, 2014 at 4:45 pm

Hi Pei,

Hmm, that is indeed what should happen. I would have to troubleshoot it to figure out what is going wrong. It could be some default in your version of SPSS. I can’t think of one off the top of my head, though that’s often the cause.

Sebastian January 15, 2014 at 10:07 am

Hi Karen
Would you tend to use "as many variables as possible" as predictors in EM imputation, or only construct-relevant ones? Why?
E.g., use socioeconomic status, IQ, and so forth as predictors for reading proficiency?

Thanks!

Karen January 15, 2014 at 10:31 am

Hi Sebastian,

As a general rule, you want to use as many predictors as are helpful for prediction. So there may be a predictor that isn't theoretically important but is helpful for prediction (for whatever unknown reason). But you don't want to throw in everything you have. Adding unhelpful predictors just raises standard errors.

Kenny June 30, 2013 at 10:56 pm

What is the R function for the EM imputation?

Thanks

Karen July 1, 2013 at 3:45 pm

Kenny, I don’t use R (maybe an R user can jump in here), but I believe MICE can do it. I am pretty sure it does multiple imputation, and EM is generally one way of doing MI.

Lorelai January 20, 2015 at 11:09 am

Just explore the "mi" package on CRAN

Lorelai January 20, 2015 at 11:16 am

Or try the function "impute.mdr" from the imputeMDR package

Kirstine April 23, 2013 at 7:28 am

Hi Karen,

Just wondering what you would recommend doing with EM-imputed values for ordinal scales. Do you leave the imputed values (with decimal places), or do you recode them so the values match the original scale values (1.001 to 1.499 = 1, for example)? The imputed values are needed for a CFA and multiple regression.

Thanks for your time,

Kirstine.

Marsha December 11, 2013 at 3:34 am

Hi Karen,
Not sure if you responded to Kirstine, but I had the same question on EM-imputed values for ordinal scales.
Thanks in advance.
Marsha

Karen December 23, 2013 at 1:39 pm

Hi Marsha,

As a general rule, you don’t want to round off any imputations. Even if the imputed values look weird, you need to have variation in there, so don’t round them off.

Mirjana June 1, 2017 at 10:22 am

What should I do when EM-imputed values are zero or negative, or exceed the maximum (e.g., -4, 0, and 8 when the Likert scale runs from 1 to 7)?

Mirjana June 1, 2017 at 10:27 am

Continuing … In SPSS it is impossible to set constraints on the maximum and minimum values for EM, so how should this be solved? And also, since EM does not impute values for categorical variables, such as gender, what should be done with them?
