OptinMon 06 - Approaches to Missing Data

On Data Integrity and Cleaning

July 30th, 2010

This year I hired a QuickBooks consultant to bring my bookkeeping up from the stone age. (I had been using Excel.)

She had asked for some documents with detailed data, and I tried to send her something else as a shortcut.  I thought it was detailed enough. It wasn’t, so she just fudged it. The bottom line was all correct, but the data that put it together was all wrong.

I hit the roof. Internally only, though. I realized it was my own fault for not giving her the info she needed. She did a fabulous job.

But I could not leave the data fudged, even though it all added up to the right amount and was already reconciled. I had to go in and spend hours fixing it. Truthfully, I was a bit of a compulsive nut about it.

And then I had to ask myself why I was so uptight—if accountants think the details aren’t important, why do I? Statisticians are all about approximations and accountants are exact, right?

As it turns out, not so much.

But I realized I’ve had 20 years of training about the importance of data integrity. Sure, the results might be inexact: the analysis, the estimates, the conclusions. But not the data. The data must be clean.

Sparkling, if possible.

In research, it’s okay if the bottom line is an approximation.  Because we’re never really measuring the whole population.  And we can’t always measure precisely what we want to measure.  But in the long run, it all averages out.

But only if the measurements we do have are as accurate as they possibly can be.

 


Quiz Yourself about Missing Data

May 3rd, 2010

Do you find quizzes irresistible?  I do.

Here’s a little quiz about working with missing data:

True or False?

1. Imputation is really just making up data to artificially inflate results.  It’s better to just drop cases with missing data than to impute.

2. I can just impute the mean for any missing data.  It won’t affect results, and improves power.

3. Multiple Imputation is fine for the predictor variables in a statistical model, but not for the response variable.

4. Multiple Imputation is always the best way to deal with missing data.

5. When imputing, it’s important that the imputations be plausible data points.

6. Missing data isn’t really a problem if I’m just doing simple statistics, like chi-squares and t-tests.

7. The worst thing that missing data does is lower sample size and reduce power.

Answers: (more…)


Answers to the Missing Data Quiz

May 3rd, 2010

In my last post, I gave a little quiz about missing data.  This post has the answers.

If you want to try it yourself before you see the answers, go here. (It’s a short quiz, but if you’re like me, you find testing yourself irresistible).

True or False?

1. Imputation is really just making up data to artificially inflate results.  It’s better to just drop cases with missing data than to impute. (more…)


Multiple Imputation: 5 Recent Findings that Change How to Use It

March 24th, 2010

Missing data, and multiple imputation specifically, is one area of statistics that is changing rapidly. Research is ongoing, and each year brings new findings on best practices and new techniques in software.

The downside for researchers is that some of the recommendations missing data statisticians were making even five years ago have changed.

Remember that there are three goals of multiple imputation, or any missing data technique: Unbiased parameter estimates in the final analysis (more…)


New version released of Amelia II: A Program for Missing Data

June 30th, 2009

A new version of Amelia II, a free package for multiple imputation, was released today.  Amelia II is available in two versions.  One runs within R; the other, AmeliaView, is a GUI that does not require any knowledge of the R programming language.  Both use the same underlying algorithms, and both require having R installed.

At the Amelia II website, you can download Amelia II (did I mention it’s free?!), download R, get the very useful User’s Guide, join the Amelia listserv, and get information about multiple imputation.
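If you’re curious what the R version looks like in practice, here is a minimal sketch. The freetrade example data set and the ts/cs arguments are my recollection of the package’s User’s Guide, so treat the details as assumptions and check the current documentation:

```r
# A minimal sketch of multiple imputation with Amelia II in R.
# The freetrade data set and the ts/cs arguments are recalled from the
# package's User's Guide; verify against the current documentation.
library(Amelia)

data(freetrade)                 # example panel data shipped with the package
a.out <- amelia(freetrade,      # data with missing values
                m  = 5,         # number of imputed data sets to create
                ts = "year",    # time variable in the panel
                cs = "country") # cross-section variable in the panel

summary(a.out)                  # overview of the imputations
# a.out$imputations is a list of 5 completed data sets; run your
# analysis on each and combine the results.
```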

If you want to learn more about multiple imputation:

 


3 Ad-hoc Missing Data Approaches that You Should Never Use

June 15th, 2009

The default approach to dealing with missing data in most statistical software packages is listwise deletion: dropping any case with data missing on any variable involved anywhere in the analysis.  It also goes under the names case deletion and complete case analysis.
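To make that concrete, here is a rough R sketch of listwise deletion; the data frame dat and the variables y, x1, and x2 are hypothetical placeholders:

```r
# Listwise deletion, sketched in R. The data frame `dat` and the
# variables y, x1, x2 are hypothetical placeholders.

# Explicit version: keep only the rows complete on every variable used.
complete <- na.omit(dat[, c("y", "x1", "x2")])
nrow(dat) - nrow(complete)         # how many cases get dropped

# Implicit version: lm() quietly does the same thing, since the default
# na.action is na.omit, so incomplete cases simply vanish from the model.
fit <- lm(y ~ x1 + x2, data = dat)
```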

Although this approach can be really painful (you worked hard to collect those data, only to drop them!), it does work well in some situations.  By “works well,” I mean it meets 3 criteria:

– gives unbiased parameter estimates

– gives accurate (or at least conservative) standard error estimates

– results in adequate power.

But not always.  So over the years, a number of ad hoc approaches have been proposed to stop the bloodletting of so much data.  Although each solved some problems of listwise deletion, they created others.  All three have been discredited in recent years and should NOT be used.  They are (each sketched in code after the list):

Pairwise Deletion: use the available data for each part of an analysis.  This has been shown to produce correlation matrices that imply correlations outside the −1 to +1 range, along with other fun statistical impossibilities.

Mean Imputation: substitute the mean of the observed values for all missing data.  There are so many problems it’s difficult to list them all (most obviously, it shrinks the variance and flattens relationships among variables), but suffice it to say, this technique never meets the above 3 criteria.

Dummy Variable Adjustment: create a dummy variable that indicates whether a data point is missing, then substitute any arbitrary value for the missing data in the original variable.  Use both variables in the analysis.  While this does limit the loss of power, it usually leads to biased estimates.
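For concreteness, here is what each of the three looks like in R, again with a hypothetical data frame dat; the comments note the problem each one creates:

```r
# The three ad-hoc approaches, sketched in R. `dat`, y, x1, x2 are
# hypothetical placeholders.

# 1. Pairwise deletion: each correlation uses whatever cases it has.
#    The resulting matrix can be internally inconsistent (not positive
#    definite), which is how impossible correlations arise.
r_pairwise <- cor(dat[, c("y", "x1", "x2")], use = "pairwise.complete.obs")

# 2. Mean imputation: fill every hole with the observed mean.
#    This shrinks the variance and flattens relationships among variables.
dat$x1_meanimp <- ifelse(is.na(dat$x1), mean(dat$x1, na.rm = TRUE), dat$x1)

# 3. Dummy variable adjustment: flag the missingness, plug in an
#    arbitrary value, and put both variables in the model. Power is
#    preserved, but the coefficient estimates are generally biased.
dat$x1_missing <- as.numeric(is.na(dat$x1))
dat$x1_filled  <- ifelse(is.na(dat$x1), 0, dat$x1)
fit_dummy <- lm(y ~ x1_filled + x1_missing + x2, data = dat)
```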

There are a number of good techniques for dealing with missing data, some of which are not hard to use, and which are now available in all major stat software.  There is no reason to continue to use ad hoc techniques that create more problems than they solve.
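As one illustration of those better techniques (the post doesn’t name a specific package here, so this is just one option among several), a multiple imputation workflow with the R package mice might look like this sketch:

```r
# A sketch of a principled alternative: multiple imputation with the
# mice package. Variable names are hypothetical placeholders.
library(mice)

imp  <- mice(dat, m = 5, seed = 123)   # create 5 imputed data sets
fits <- with(imp, lm(y ~ x1 + x2))     # fit the model in each one
pool(fits)                             # combine estimates across imputations
```

The pooled estimates and standard errors account for the uncertainty in the imputations, which is exactly what the ad hoc approaches above fail to do.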