
The Data Analysis Work Flow: 9 Strategies for Keeping Track of your Analyses and Output

August 13th, 2010

Knowing the right statistical analysis to use in any data situation, knowing how to run it, and being able to understand the output are all really important skills for a data analyst.  Really important.

But they’re not the only ones.

Another is having a system in place to keep track of the analyses.  This is especially important if you have any collaborators (or a statistical consultant!) you’ll be sharing your results with.  You may already have an effective work flow, but if you don’t, here are some strategies I use.  I hope they’re helpful to you.

1. Always use Syntax Code

All the statistical software packages have come up with some sort of easy-to-use, menu-based approach.  And as long as you know what you’re doing, there is nothing wrong with using the menus.  While I’m familiar enough with SAS code to just write it, I use menus all the time in SPSS.

But even if you use the menus, paste the syntax for everything you do.  There are many reasons for using syntax, but the main one is documentation.  Whether you need to communicate to someone else or just remember what you did, syntax is the only way to keep track.  (And even though, in the midst of analyses, you believe you’ll remember how you did something, a week and 40 models later, I promise you won’t.  I’ve been there too many times.  And it really hurts when you can’t replicate something).

In SPSS, there are two things you can do to make this seamlessly easy.  First, instead of hitting OK, hit Paste.  Second, make sure syntax shows up on the output.  This is the default in later versions, but you can turn it on in Edit–>Options–>Viewer.  Make sure “Display Commands in Log” and “Log” are both checked.  (Note: the menus may differ slightly across versions).
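For example, if you run a frequencies table from the menus and hit Paste instead of OK, something like this lands in your syntax window (the variable names here are just placeholders).  The SET command at the end does the same job as the Viewer option, turning command echoing on from syntax:

  FREQUENCIES VARIABLES=Mar4cat age
    /ORDER=ANALYSIS.

  * Echo commands in the output log.
  SET PRINTBACK ON.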

2.  If your data set is large, create smaller data sets that are relevant to each set of analyses.

First, for many analyses and data manipulations, statistical software has to read the entire data set.  Since that same software is often a memory hog, running anything on a large data set will s-l-o-w down processing. A lot.

Second, it’s just clutter.  It’s harder to find the variables you need if you have an extra 400 variables in the data set.
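As a sketch of what this looks like in SPSS syntax (the file path and variable names are made up), you read the full file once, then save a slimmed-down copy with only the variables one set of analyses needs:

  GET FILE='C:\project\full_survey.sav'.
  SAVE OUTFILE='C:\project\marital_analyses.sav'
    /KEEP=id Mar4cat age educ income.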

3. Instead of just opening a data set manually, use commands in your syntax code to open data sets.

Why?  Unless you are committing the cardinal sin of overwriting your original data as you create new variables, you have multiple versions of your data set.  Having the data set listed right at the top of the analysis commands makes it crystal clear which version of the data you analyzed.
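In SPSS, that just means starting each analysis file with a GET FILE command (the path below is hypothetical), so the exact version of the data is documented at the top:

  GET FILE='C:\project\marital_analyses.sav'.
  DATASET NAME marital WINDOW=FRONT.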

4. Use Variable and Value labels religiously

I know you remember today that your variable named Mar4cat means marital status in 4 categories and that 0 indicates ‘never married.’  It’s so logical, right?  Well, it’s not obvious to your collaborators and it won’t be obvious to you in two years, when you try to re-analyze the data after a reviewer doesn’t like your approach.

Even if you have a separate code book, why not put it right in the data?  It makes the output so much easier to read, and you don’t have to worry about losing the code book.  It may feel like more work upfront, but it will save time in the long run.
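Sticking with the hypothetical Mar4cat example, the SPSS syntax is short (the codes other than 0 are invented here just to illustrate):

  VARIABLE LABELS Mar4cat 'Marital status, 4 categories'.
  VALUE LABELS Mar4cat
    0 'Never married'
    1 'Married'
    2 'Divorced or separated'
    3 'Widowed'.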

5. Put data manipulation, descriptive analyses, and models in separate syntax files

When I do data analysis, I follow my Steps approach, which means first I create all the relevant variables, then run univariate and bivariate statistics, then initial models, and finally hone the models.

And I’ve found that if I keep each of these steps in separate program files, it makes it much easier to keep track of everything.  If you’re creating new variables in the middle of analyses, it’s going to be harder to find the code so you can remember exactly how you created that variable.

6. As you run different versions of models, label them with model numbers

When you’re building models, you’ll often have a progression of different versions.  Especially when I have to communicate with a collaborator, I’ve found it invaluable to number these models in my code and print that model number on the output.  It makes a huge difference in keeping track of nine different models.
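One simple way to do this in SPSS is to put the model number in a TITLE command right before each model, so it prints at the top of that chunk of output.  A rough sketch, with made-up variables:

  TITLE 'Model 1: demographics only'.
  REGRESSION
    /DEPENDENT income
    /METHOD=ENTER age educ.

  TITLE 'Model 2: add hours worked'.
  REGRESSION
    /DEPENDENT income
    /METHOD=ENTER age educ hours.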

7. As you go along with different analyses, keep your syntax clean, even if the output is a mess.

Data analysis is a bit of an iterative process.  You try something, discover errors, realize that variable didn’t work, and try something else.  Yes, base it on theory and have a clear analysis plan, but even so, the first analyses you run won’t be your last.

Especially if you make mistakes as you go along (as I inevitably do), your output file gets pretty littered with results you don’t want to keep.  You could clean it up as you go along, but I find that’s inefficient.  Instead, I try to keep my code clean, with only the error-free analyses that I ultimately want to use.  It lets me try whatever I need to without worry.  Then at the end, I delete the entire output and just rerun all the code.

One caveat here:  You may not want to take this approach if you have VERY computing-intensive analyses, like a generalized linear mixed model with crossed random effects on a large data set.  If your code takes more than 20 minutes to run, this won’t be more efficient.

8. Use titles and comments liberally

I’m sure you’ve heard before that you should use lots of comments in your syntax code.  But use titles too.  Both SAS and SPSS have title commands that allow titles to be printed right on the output.  This is especially helpful for naming and numbering all those models in #6.
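In SPSS, for example, a comment is a line that starts with * and ends with a period, and TITLE prints a heading on the output.  A small sketch (again with placeholder variables):

  * Descriptives for the analysis sample.
  * Run only after the data manipulation file has created Mar4cat.
  TITLE 'Descriptives: analysis data set'.
  DESCRIPTIVES VARIABLES=age educ income
    /STATISTICS=MEAN STDDEV MIN MAX.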

9. Name output, log, and programs the same

Since you’ve split your programs into separate files for data manipulations, descriptives, initial models, etc., you’re going to end up with a lot of files.  What I do is name each output the same name as the program file.  (And if I’m in SAS, the log too. Yes, save the log.)

Yes, that means making sure you have a separate output for each section.  While it may seem like extra work, it can make looking at each output less overwhelming for anyone you’re sharing it with.

 


On Data Integrity and Cleaning

July 30th, 2010

This year I hired a Quickbooks consultant to bring my bookkeeping up from the stone age.  (I had been using Excel).

She had asked for some documents with detailed data, and I tried to send her something else as a shortcut.  I thought it was detailed enough. It wasn’t, so she just fudged it. The bottom line was all correct, but the data that put it together was all wrong.

I hit the roof.  Internally only, though—I realized it was my own fault for not giving her the info she needed.  She did a fabulous job.

But I could not leave the data fudged, even if it all added up to the right amount and was already reconciled. I had to go in and spend hours fixing it. Truthfully, I was a bit of a compulsive nut about it.

And then I had to ask myself why I was so uptight—if accountants think the details aren’t important, why do I? Statisticians are all about approximations and accountants are exact, right?

As it turns out, not so much.

But I realized I’ve had 20 years of training about the importance of data integrity. Sure, the results might be inexact, the analysis, the estimates, the conclusions. But not the data. The data must be clean.

Sparkling, if possible.

In research, it’s okay if the bottom line is an approximation.  Because we’re never really measuring the whole population.  And we can’t always measure precisely what we want to measure.  But in the long run, it all averages out.

But only if the measurements we do have are as accurate as they possibly can be.

 


Cohort and Case-Control Studies: Pros and Cons

June 7th, 2010

Two designs commonly used in epidemiology are the cohort and case-control studies. Both study causal relationships between a risk factor and a disease. What is the difference between these two designs? And when should you opt for the one or the other?

Cohort studies

Cohort studies begin with a group of people (a cohort) free of disease. The people in the cohort are grouped by whether or not they are exposed to a potential cause of disease. The whole cohort is followed over time to see if (more…)


Great Resources for Your Literature Review

April 30th, 2010

by Ursula Saqui, Ph.D.

This is the second post of a two-part series on the overall process of doing a literature review.  Part one discussed the benefits of doing a literature review, how to get started, and knowing when to stop.

You have made a commitment to do a literature review, have the purpose defined, and are ready to get started.

Where do you find your resources?

If you are not in academia, do not have access to a top-notch library, or do not receive the industry publications of interest, you may need to get creative if you do not want to pay for each article. (In a pinch, I have paid up to $36 for an article, which can add up if you are conducting a comprehensive literature review!)

Here is where the internet and other community resources can be your best friends.

Still stuck?  Hire someone who knows how to do a good literature review and has access to quality resources.

On a budget?  Hire a student who has access to an academic library.  Many times students can get credit for working on research and business projects through internships or experiential learning programs. This situation is a win-win.  You get the information you need and the student gets academic credit along with exposure to new ideas and topics.

About the Author: With expertise in human behavior and research, Ursula Saqui, Ph.D. gives clarity and direction to her clients’ projects, which inevitably lead to better results and strategies. She is the founder of Saqui Research.

 


The Literature Review: The Foundation of Any Successful Research Project

April 23rd, 2010

by Ursula Saqui, Ph.D.

This post is the first of a two-part series on the overall process of doing a literature review.  Part two covers where to find your resources.

Would you build your house without a foundation?  Of course not!  However, many people skip the first step of any empirically based project–conducting a literature review.  Like the foundation of your house, the literature review is the foundation of your project.

Having a strong literature review gives structure to your research method and informs your statistical analysis.  If your literature review is weak or non-existent, (more…)


What Makes a Statistical Analysis Wrong?

January 21st, 2010

One of the most anxiety-laden questions I get from researchers is whether their analysis is “right.”

I’m always slightly uncomfortable with that word. Often there is no one right analysis.

It’s like finding Mr. or Ms. Right. Most of the time, there is not just one Right. But there are many that are clearly Wrong.

What Makes an Analysis Right?

Luckily, what makes an analysis right is easier to define than what makes a person right for you. It pretty much comes down to two things: whether the assumptions of the statistical method are being met and whether the analysis answers the research question.

Assumptions are very important. A test needs to reflect the measurement scale of the variables, the study design, and issues in the data. A repeated measures study design requires a repeated measures analysis. A binary dependent variable requires a categorical analysis method.

But within those general categories, there are often many analyses that meet assumptions. Both a logistic regression and a chi-square test can handle a binary dependent variable with a single categorical predictor. But a logistic regression can answer more research questions. It can incorporate covariates, directly test interactions, and calculate predicted probabilities. A chi-square test can do none of these.
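To make that concrete, here are the two analyses side by side in SPSS syntax, with hypothetical variables improved (coded 0/1) and treatment.  The chi-square test only tells you whether the two are associated; the logistic regression can add a covariate and save predicted probabilities:

  * Chi-square test of association only.
  CROSSTABS
    /TABLES=improved BY treatment
    /STATISTICS=CHISQ.

  * Logistic regression: same outcome and predictor, plus a covariate and predicted probabilities.
  LOGISTIC REGRESSION VARIABLES improved
    /METHOD=ENTER treatment age
    /CONTRAST (treatment)=Indicator
    /PRINT=CI(95)
    /SAVE=PRED.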

So you get different information from different tests. They answer different research questions.

An analysis that is correct from an assumptions point of view is useless if it doesn’t answer the research question. A data set can spawn an endless number of statistical tests that don’t answer the research question. And you can spend an endless number of days running them.

When to Think about the Analysis

The real bummer is it’s not always clear that the analyses aren’t relevant until you  write up the research paper.

That’s why writing out the research questions in theoretical and operational terms is the first step of any statistical analysis. It’s absolutely fundamental. And I mean writing them in minute detail. Issues of mediation, interaction, subsetting, control variables, et cetera, should all be blatantly obvious in the research questions.

Thinking about how to analyze the data before collecting the data can keep you from hitting a dead end. It can be very obvious, once you think through the details, that the analysis available to you based on the data won’t answer the research question.

Whether the answer is what you expected or not is a different issue.

So when you are concerned about getting an analysis “right,” clearly define the design, variables, and data issues, but most importantly, get explicitly clear about what you want to learn from this analysis.

Once you’ve done this, it’s much easier to find the statistical method that answers the research questions and meets assumptions. Even if you don’t know the right method, you can narrow your search with clear guidance.