The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers
Preparing Data for Analysis is (more than) Half the Battle

by Karen Grace-Martin

Just last week, a colleague mentioned that while he does a lot of study design these days, he no longer does much data analysis.

His main reason was that 80% of the work in data analysis is preparing the data for analysis.  Data preparation is s-l-o-w and he found that few colleagues and clients understood this.

Consequently, he was running into expectations that he should analyze a raw data set in an hour or so.

You know, by clicking a few buttons.

I see this as well with researchers new to data analysis.  While they know it will take longer than an hour, they still have unrealistic expectations about how long it takes.

So I am here to tell you, the time-consuming part is preparing the data.  Weeks or months is a realistic time frame.  Hours is not.

(Feel free to send this to your colleagues who want instant results).

There are three parts to preparing data: cleaning it, creating necessary variables, and formatting all variables.

Data Cleaning

Data cleaning means finding and eliminating errors in the data.  How you approach it depends on how large the data set is, but the kinds of things you’re looking for are:

  • Impossible or otherwise incorrect values for specific variables
  • Cases in the data who met exclusion criteria and shouldn’t be in the study
  • Duplicate cases
  • Missing data and outliers (don’t delete all outliers, but you may need to investigate to see if one is an error)
  • Skip-pattern or logic breakdowns
  • Making sure that the same value of string variables is always written the same way (male ≠ Male in most statistical software).
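A couple of the checks above — duplicate cases and inconsistently written string values — can be automated in a few lines. Here is a minimal sketch in Python/pandas (the article itself is software-agnostic, and the `id` and `sex` columns are made-up example names):

```python
import pandas as pd

# Hypothetical survey data: 'id' and 'sex' are made-up column names
df = pd.DataFrame({
    "id":  [1, 2, 2, 3, 4],
    "sex": ["Male", "male", "male", "Female", " Male"],
})

# Duplicate cases: the same subject id entered twice
dupes = df[df.duplicated(subset="id", keep=False)]
print(dupes)

# Inconsistent string values: "male" != "Male" to the software,
# so normalize case and strip stray whitespace before tabulating
df["sex"] = df["sex"].str.strip().str.lower()
print(df["sex"].value_counts())
```

Tabulating the cleaned variable afterwards (the `value_counts()` call) is itself a check: any category you didn't expect to see means the normalization missed something.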

You can’t avoid data cleaning and it always takes a while, but there are ways to make it more efficient. For example, rather than search through the data set for impossible values, print a table of data values outside a normal range, along with subject ids.
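As a concrete sketch of that efficiency, here is how the "print only the suspect rows, with ids" idea might look in pandas; the `age` variable and its 18–99 plausible range are hypothetical stand-ins for whatever your codebook specifies:

```python
import pandas as pd

# Hypothetical data; ages outside 18-99 count as impossible here
df = pd.DataFrame({
    "subject_id": [101, 102, 103, 104],
    "age":        [34, 210, -1, 57],
})

# Instead of scanning the whole data set by eye, print only the
# suspect rows along with their subject ids
out_of_range = df.loc[(df["age"] < 18) | (df["age"] > 99),
                      ["subject_id", "age"]]
print(out_of_range)
```

The resulting table is short enough to check against the original records case by case.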

This is where learning how to code in your statistical software of choice really helps.  You’ll need to subset your data using IF statements to find those impossible values.

But if your data set is anything but small, you can also save yourself a lot of time, code, and errors by incorporating efficiencies like loops and macros so that you can perform some of these checks on many variables at once.
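A loop version of the same range check might look like this; the variables and their plausible limits are invented for illustration, and in practice would come from your codebook:

```python
import pandas as pd

# Hypothetical data set with several variables to validate
df = pd.DataFrame({
    "subject_id": [1, 2, 3],
    "age":   [25, 130, 40],
    "score": [88, 72, -5],
})

# One loop checks every listed variable against its plausible range,
# instead of writing a separate IF statement per variable
ranges = {"age": (18, 99), "score": (0, 100)}  # hypothetical limits
for var, (lo, hi) in ranges.items():
    bad = df.loc[(df[var] < lo) | (df[var] > hi), ["subject_id", var]]
    if not bad.empty:
        print(f"--- {var} outside [{lo}, {hi}] ---")
        print(bad)
```

Adding a new variable to the check is then a one-line change to the `ranges` dictionary rather than another block of code.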

Creating New Variables

Once the data are free of errors, you need to set up the variables that will directly answer your research questions.

It’s a rare data set in which every variable you need is measured directly.

So you may need to do a lot of recoding and computing of variables.

Examples include:

  • Creating change scores
  • Creating indices from scales
  • Reverse coding scale items
  • Combining too-small-to-use categories of nominal variables
  • Centering variables
  • Restructuring data from wide format to long (or the reverse)

And of course, part of creating each new variable is double-checking that it worked correctly.
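To make a few of these concrete, here is a pandas sketch of a change score, a reverse-coded scale item, and a centered predictor, with made-up scores; the final assertions illustrate the double-checking step against hand-computed cases:

```python
import pandas as pd

# Hypothetical pre/post scores and one 1-5 Likert item to reverse-code
df = pd.DataFrame({
    "pre":   [10, 14, 12],
    "post":  [13, 15, 18],
    "item3": [1, 5, 2],
})

# Change score: post minus pre
df["change"] = df["post"] - df["pre"]

# Reverse-code a 1-5 item: new = (max + min) - old
df["item3_r"] = 6 - df["item3"]

# Center a predictor at its sample mean
df["pre_c"] = df["pre"] - df["pre"].mean()

# Double-check each new variable against a hand-computed case
assert df.loc[0, "change"] == 13 - 10
assert df.loc[1, "item3_r"] == 1          # a 5 becomes a 1
assert abs(df["pre_c"].mean()) < 1e-9     # centered mean is ~0
```

The assertions are the point: a computed variable you never spot-check is a computed variable you can't trust.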

Formatting Variables

Both original and newly created variables need to be formatted correctly for two reasons:

First, so your software works with them correctly.  Failing to format a missing value code or a dummy variable correctly will have major consequences for your data analysis.

Second, it’s much faster to run the analyses and interpret results if you don’t have to keep looking up which variable Q156 is.

Examples include:

  • Setting all missing data codes so missing data are treated as such
  • Formatting date variables as dates, numerical variables as numbers, etc.
  • Labeling all variables and categorical values so you don’t have to keep looking them up.
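A brief pandas sketch of these formatting steps, using an invented `q156` variable with a hypothetical missing-value code of 999:

```python
import pandas as pd

# Hypothetical raw data: 999 is a missing-value code, dates are text
df = pd.DataFrame({
    "q156":  [1, 2, 999, 1],
    "visit": ["2021-07-01", "2021-07-15", "2021-08-02", "2021-08-20"],
})

# Set the missing-data code so missing data are treated as such
df["q156"] = df["q156"].replace(999, pd.NA)

# Format the date variable as an actual date, not a string
df["visit"] = pd.to_datetime(df["visit"])

# Label the variable and its values so you never again have to
# look up what Q156 was
df = df.rename(columns={"q156": "treatment_group"})
df["treatment_group"] = df["treatment_group"].map({1: "control", 2: "drug"})
```

After this, output tables show `treatment_group: control/drug` instead of `q156: 1/2`, and the 999s can no longer be mistaken for real values in an analysis.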

All three of these steps require a solid knowledge of how to manage data in your statistical software.  Each software package approaches these tasks a little differently.

It’s also very important to keep track of and be able to easily redo all your steps.  Always assume you’ll have to redo something.  So use (or record) syntax, not only menus.

 

The Pathway: Steps for Staying Out of the Weeds in Any Data Analysis
Get the road map for your data analysis before you begin. Learn how to make any statistical model – ANOVA, Linear Regression, Poisson Regression, Multilevel Model – straightforward and more efficient.

Tagged With: data, data cleaning, data preparation, formatting variables, reverse coding

Related Posts

  • Best Practices for Data Preparation
  • Four Weeds of Data Analysis That are Easy to Get Lost In
  • Eight Data Analysis Skills Every Analyst Needs
  • Rescaling Sets of Variables to Be on the Same Scale


Comments

  1. Emma Xu says

    July 20, 2021 at 6:05 pm

    Totally agree!

Sadly for me, it’s not the colleagues but the clients who are eager to see the results. So many times I’ve been asked to analyze survey data and produce a report within 1 day, or even within 5 hours.

    • Karen Grace-Martin says

      July 21, 2021 at 8:46 am

      5 hours! Oh my…

  2. Winfred Mwangi says

    July 17, 2021 at 1:16 am

Thank you so much. This is a relief and an encouragement. I thought that after data collection the next step was analysis and writing. I never understood that there is a big step in between – data management. Which for me required learning a software package. The basics first. Real learning of Stata occurred concurrently as I cleaned data. It has taken me one year and eight months of my PhD!

  3. Eric Ditwiler, Ph.D. says

    July 16, 2021 at 11:11 am

    Thank you for pulling that all together.
    It caused me to bookmark your page.
The ‘Tidyverse’ in R is very useful, as is keeping data in MySQL or MariaDB databases and accessing them from R.

    • Zelalem D. says

      July 20, 2021 at 6:53 am

      True: This is the reality but often gets too little attention among researchers!

  4. Jon K Peck says

    March 24, 2020 at 8:52 am

    An underused data cleaning/validation procedure in SPSS Statistics is the VALIDATEDATA procedure. It does a number of basic checks on variables such as looking for a high percentage of missing values, but it also allows definition of single- and cross-variable rules that can check for invalid values, skip logic violations etc. Although it can be tedious to set up these rules, rules can be saved in a library and reused. Statistics comes with a bunch of predefined rules.

  5. Richard says

    March 19, 2015 at 4:44 am

    I agree very much.

As a psychologist, the data from my experiments are comparatively small. But still, doing the actual statistical analysis does not take up so much time. Getting your data in shape to do any reasonable and valid analyses does.

Tidying your dataset and creating variables really takes around 80% of the time you spend with it. And it’s not like you tidy it and then it’s done. Most times you come up with another interesting way to get some insights from your data, but then you find out that you have to create another variable for that calculation, and it’s back to tidying data and creating variables.


Copyright © 2008–2022 The Analysis Factor, LLC. All rights reserved.
