Data Cleaning is a critically important part of any data analysis. Without properly prepared data, the analysis will yield inaccurate results. Correcting errors later in the analysis adds to the time, effort, and cost of the project.

# Data Cleaning

## Eight Data Analysis Skills Every Analyst Needs

It’s easy to think that if you just knew statistics better, data analysis wouldn’t be so hard.

It’s true that more statistical knowledge is always helpful. But I’ve found that statistical knowledge is only part of the story.

Another key part is developing data analysis skills. These skills apply to all analyses. It doesn’t matter which statistical method or software you’re using. So even if you never need any statistical analysis harder than a t-test, **developing these skills will make your job easier.**


## Preparing Data for Analysis is (more than) Half the Battle

Just last week, a colleague mentioned that while he does a lot of study design these days, he no longer does much data analysis.

His main reason was that 80% of the work in data analysis is preparing the data for analysis. Data preparation is s-l-o-w and he found that few colleagues and clients understood this.

Consequently, he was running into expectations that he should analyze a raw data set in an hour or so.

You know, by clicking a few buttons.

I see this as well with clients new to data analysis. While they know it will take longer than an hour, they still have unrealistic expectations about how long it takes.

So I am here to tell you, *the time-consuming part is preparing the data*. Weeks or months is a realistic time frame. Hours is not.

(Feel free to send this to your colleagues who want instant results).

There are three parts to preparing data: cleaning it, creating necessary variables, and formatting all variables.

### Data Cleaning

**Data cleaning** means finding and eliminating errors in the data. How you approach it depends on how large the data set is, but the kinds of things you’re looking for are:

- Impossible or otherwise incorrect values for specific variables
- Cases that met exclusion criteria and shouldn’t be in the study
- Duplicate cases
- Missing data and outliers
- Skip-pattern or logic breakdowns
- Inconsistent values of string variables (male ≠ Male in most statistical software)
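As a quick sketch, here is how a few of these checks might look in R on a small, purely illustrative data frame (the column names `id`, `age`, and `sex` are assumptions for the example):

```r
# Illustrative data frame with a duplicate case, an impossible age,
# and inconsistently capitalized string values
df <- data.frame(
  id  = c(1, 2, 2, 3),
  age = c(34, 210, 210, 28),
  sex = c("male", "Male", "Male", "female")
)

# Duplicate cases: flag repeated id values
df[duplicated(df$id), ]

# Impossible values: ages outside a plausible range
df[df$age < 0 | df$age > 120, ]

# Inconsistent string values: tabulate to see male vs. Male
table(df$sex)

# One common fix: standardize case before analysis
df$sex <- tolower(df$sex)
table(df$sex)
```

The point is that each item on the checklist becomes a small, repeatable query against the data rather than a visual scan of a spreadsheet.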

You can’t avoid data cleaning and it always takes a while, but there are ways to make it more efficient.

For example, one way to find impossible values for a variable is to print out data for cases outside a normal range.

This is where learning how to code in your statistical software of choice *really* helps. You’ll need to subset your data using IF statements to find those impossible values.

But if your data set is anything but small, you can also save yourself a lot of time, code, and errors by incorporating efficiencies like loops and macros so that you can perform some of these checks on many variables at once.
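For instance, one way to loop a range check over many variables at once in R is to apply the same function to each column. The data frame and the plausible ranges below are illustrative assumptions, not from any real study:

```r
# Illustrative data: one impossible value in age and one in height
df <- data.frame(
  age    = c(34, 210, 28),
  height = c(170, 165, -3),   # centimeters
  weight = c(70, 82, 65)      # kilograms
)

# Plausible range for each variable (assumed for the example)
ranges <- list(age = c(0, 120), height = c(30, 250), weight = c(1, 400))

# For each variable, count cases outside its plausible range
out_of_range <- sapply(names(ranges), function(v) {
  lo <- ranges[[v]][1]
  hi <- ranges[[v]][2]
  sum(df[[v]] < lo | df[[v]] > hi, na.rm = TRUE)
})

out_of_range
```

Writing the check once and applying it to every variable means adding a new variable to the check costs one line, not a new block of IF statements.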

### Creating New Variables

Once the data are free of errors, you need to set up the variables that will directly answer your research questions.

It’s a rare data set in which every variable you need is measured directly.

So you may need to do a lot of recoding and computing of variables.

Examples include:

- Creating change scores
- Creating indices from scales
- Combining too-small-to-use categories of nominal variables
- Centering variables
- Restructuring data from wide format to long (or the reverse)

And of course, part of creating each new variable is double-checking that it worked correctly.
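Two of the simpler examples, change scores and centering, might look like this in R (the pre/post scores are made up for illustration):

```r
# Illustrative pre/post measurements for three cases
scores <- data.frame(pre = c(10, 14, 9), post = c(12, 13, 15))

# Change score: post minus pre
scores$change <- scores$post - scores$pre

# Centered variable: subtract the mean, so 0 = the average pre score
scores$pre_c <- scores$pre - mean(scores$pre)

# Double-check that the new variables worked correctly
summary(scores$change)
mean(scores$pre_c)   # should be (numerically) zero
```

The last two lines are the double-checking step: a centered variable whose mean isn’t essentially zero tells you immediately that something went wrong.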

### Formatting Variables

Both original and newly created variables need to be formatted correctly for two reasons:

First, so your software works with them correctly. Failing to format a missing value code or a dummy variable correctly will have major consequences for your data analysis.

Second, it’s much faster to run the analyses and interpret results if you don’t have to keep looking up which variable Q156 is.

Examples include:

- Setting all missing data codes so missing data are treated as such
- Formatting date variables as dates, numerical variables as numbers, etc.
- Labeling all variables and categorical values so you don’t have to keep looking them up.
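In base R, those three formatting tasks could be sketched like this (the missing-data code 999 and the variable names are assumptions for the example):

```r
# Illustrative raw variables: 999 is a missing-data code,
# and dates arrived as plain text
q156  <- c(1, 2, 999, 1, 2)
visit <- c("2023-01-05", "2023-02-10", NA, "2023-03-01", "2023-03-15")

# 1. Set the missing-data code so R treats it as missing
q156[q156 == 999] <- NA

# 2. Format the date variable as an actual date
visit_date <- as.Date(visit)

# 3. Label the categorical values so you don't have to keep looking them up
treatment <- factor(q156, levels = c(1, 2), labels = c("control", "drug"))
table(treatment, useNA = "ifany")
```

Once `q156` is labeled as `treatment`, output tables show "control" and "drug" instead of 1 and 2, which is exactly the time-saver described above.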

All three of these steps require a solid knowledge of how to manage data in your statistical software. Each package approaches data management a little differently.

It’s also very important to keep track of and be able to easily redo all your steps. Always assume you’ll have to redo something. So use (or record) syntax, not menus.

## R Is Not So Hard! A Tutorial, Part 18: Re-Coding Values

*by David Lillis, Ph.D.*

One data manipulation task that you need to do in pretty much any data analysis is recode data. It’s almost never the case that the data are set up exactly the way you need them for your analysis.

In R, you can re-code an entire vector or array at once. To illustrate, let’s set up a vector that has missing values.

```r
A <- c(3, 2, NA, 5, 3, 7, NA, NA, 5, 2, 6)
A
[1] 3 2 NA 5 3 7 NA NA 5 2 6
```

We can re-code all missing values to another number (such as zero) with a single vectorized assignment.
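Using the vector `A` defined above, one standard way to do this is to use `is.na()` as a logical index:

```r
A <- c(3, 2, NA, 5, 3, 7, NA, NA, 5, 2, 6)

# Replace every NA with 0 using a logical index
A[is.na(A)] <- 0

A
# now: 3 2 0 5 3 7 0 0 5 2 6
```

This works on an entire vector at once, with no loop, and the same idiom applies to a single column of a data frame.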

## R Is Not So Hard! A Tutorial, Part 17: Testing for Existence of Particular Values

*by David Lillis, Ph.D.*

Sometimes you need to know if your data set contains elements that meet some criterion or a particular set of criteria.

For example, a common data cleaning task is to check if you have missing data (NAs) lurking somewhere in a large data set.

Or you may need to check if you have zeroes or negative numbers, or numbers outside a given range.

In such cases, the any() and all() commands are very helpful. You can use them to interrogate R about the values in your data.
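For example, on the same kind of vector with missing values used earlier, `any()` and `all()` answer yes/no questions about the whole vector at once:

```r
A <- c(3, 2, NA, 5, 3, 7, NA, NA, 5, 2, 6)

# Does the vector contain any missing values?
any(is.na(A))                        # TRUE

# Are all (non-missing) values positive?
all(A > 0, na.rm = TRUE)             # TRUE

# Are any values outside the range 1 to 10?
any(A < 1 | A > 10, na.rm = TRUE)    # FALSE
```

Note the `na.rm = TRUE` argument: without it, a single NA can make `any()` or `all()` return NA instead of TRUE or FALSE.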

## On Data Integrity and Cleaning

This year I hired a Quickbooks consultant to bring my bookkeeping up from the stone age. (I had been using Excel).

She had asked for some documents with detailed data, and I tried to send her something else as a shortcut. I thought it was detailed enough. It wasn’t, so she just fudged it. The bottom line was all correct, but the data that put it together was all wrong.

I hit the roof. Internally only, though, because I realized it was my own fault for not giving her the info she needed. She did a fabulous job.

But I could not leave the data fudged, even though it all added up to the right amount and was already reconciled. I had to go in and spend hours fixing it. Truthfully, I was a bit of a compulsive nut about it.

And then I had to ask myself why I was so uptight—if accountants think the details aren’t important, why do I? Statisticians are all about approximations and accountants are exact, right?

As it turns out, not so much.

But I realized I’ve had 20 years of training about the importance of data integrity. Sure, the results might be inexact, the analysis, the estimates, the conclusions. But not the data. The data must be clean.

Sparkling, if possible.

In research, it’s okay if the bottom line is an approximation. Because we’re never really measuring the whole population. And we can’t always measure precisely what we want to measure. But in the long run, it all averages out.

But only if the measurements we do have are as accurate as they possibly can be.