
The Problem with Using Tests for Statistical Assumptions

July 16th, 2018

Every statistical model and hypothesis test has assumptions.

And yes, if you’re going to use a statistical test, you need to check whether those assumptions are reasonable to whatever extent you can.

Some assumptions are easier to check than others. Some are so obviously reasonable that you don’t need to do much to check them most of the time. And some have no good way of being checked directly, so you have to use situational clues.

(more…)


What Is Latent Class Analysis?

May 16th, 2017

One of the most common—and one of the trickiest—challenges in data analysis is deciding how to include multiple predictors in a model, especially when they’re related to each other.

Let’s say you are interested in studying work spillover into personal time as a predictor of job burnout.

You have 5 categorical yes/no variables that indicate whether a particular symptom of work spillover is present (see below).

While you could use each individual variable, you’re not really interested in whether any one symptom in particular is related to the outcome. Perhaps it’s not each symptom that’s important, but the fact that spillover is happening at all.

(more…)


When to Check Model Assumptions

March 7th, 2016

Like the chicken and the egg, there’s a question of which comes first: run the model or test its assumptions? Unlike the chicken’s, the model’s question has an easy answer.

There are two types of assumptions in a statistical model.  Some are distributional assumptions about the residuals.  Examples include independence, normality, and constant variance in a linear model.

Others are about the form of the model.  They include linearity and (more…)


Preparing Data for Analysis is (more than) Half the Battle

March 18th, 2015

Just last week, a colleague mentioned that while he does a lot of study design these days, he no longer does much data analysis.

His main reason was that 80% of the work in data analysis is preparing the data for analysis.  Data preparation is s-l-o-w and he found that few colleagues and clients understood this.

Consequently, he was running into expectations that he should analyze a raw data set in an hour or so.

You know, by clicking a few buttons.

I see this as well with researchers new to data analysis.  While they know it will take longer than an hour, they still have unrealistic expectations about how long it takes.

So I am here to tell you, the time-consuming part is preparing the data.  Weeks or months is a realistic time frame.  Hours is not.

(Feel free to send this to your colleagues who want instant results.)

There are three parts to preparing data: cleaning it, creating necessary variables, and formatting all variables.

Data Cleaning

Data cleaning means finding and eliminating errors in the data.  How you approach it depends on how large the data set is, but the kinds of things you’re looking for are:

You can’t avoid data cleaning and it always takes a while, but there are ways to make it more efficient. For example, rather than search through the data set for impossible values, print a table of data values outside a normal range, along with subject ids.

This is where learning how to code in your statistical software of choice really helps.  You’ll need to subset your data using IF statements to find those impossible values.

But if your data set is anything but small, you can also save yourself a lot of time, code, and errors by incorporating efficiencies like loops and macros so that you can perform some of these checks on many variables at once.
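As a hypothetical sketch of that kind of efficiency, here is what a looped range check might look like in Python. The variable names, plausible ranges, and data values are all invented for illustration; the point is that one loop replaces a separate IF statement per variable and reports each impossible value alongside its subject id.

```python
# Hypothetical illustration of automated range checks: scan several
# variables at once and report out-of-range values with subject ids.
# Variable names, ranges, and data are invented for the example.

rows = [
    {"id": 101, "age": 34, "height_cm": 172, "weight_kg": 70},
    {"id": 102, "age": 290, "height_cm": 168, "weight_kg": 65},  # impossible age
    {"id": 103, "age": 51, "height_cm": 15, "weight_kg": 80},    # impossible height
]

# Plausible range for each variable -- the looped version of one IF per variable
valid_ranges = {"age": (0, 120), "height_cm": (50, 250), "weight_kg": (2, 400)}

def flag_impossible(rows, valid_ranges):
    """Return (id, variable, value) for every value outside its plausible range."""
    flags = []
    for row in rows:
        for var, (lo, hi) in valid_ranges.items():
            if not lo <= row[var] <= hi:
                flags.append((row["id"], var, row[var]))
    return flags

for subject_id, var, value in flag_impossible(rows, valid_ranges):
    print(f"Subject {subject_id}: {var} = {value} is outside the expected range")
```

Adding a new variable to the check is then one line in `valid_ranges`, not a new block of code.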

Creating New Variables

Once the data are free of errors, you need to set up the variables that will directly answer your research questions.

It’s a rare data set in which every variable you need is measured directly.

So you may need to do a lot of recoding and computing of variables.

Examples include:

And of course, part of creating each new variable is double-checking that it worked correctly.
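A minimal sketch of recoding and computing a new variable, with the double-check built in, might look like this. The survey items, the reverse-scoring rule, and the scale are all invented for illustration:

```python
# Hypothetical sketch of creating derived variables and double-checking them.
# The items and scoring rules here are invented for illustration: q3 is
# reverse-scored on a 1-5 scale, and the scale score is the mean of the items.

respondents = [
    {"id": 1, "q1": 4, "q2": 2, "q3": 5},
    {"id": 2, "q1": 1, "q2": 3, "q3": 1},
]

for r in respondents:
    r["q3_rev"] = 6 - r["q3"]                               # reverse-score: 1<->5, 2<->4
    r["scale_mean"] = (r["q1"] + r["q2"] + r["q3_rev"]) / 3  # computed scale score

# Double-check the recode worked: an original and its reversal must sum to 6
for r in respondents:
    assert r["q3"] + r["q3_rev"] == 6, f"Recode failed for id {r['id']}"
```

The last loop is the double-checking step: a cheap automated test that the recode did what you intended, run on every case rather than a few you eyeball.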

Formatting Variables

Both original and newly created variables need to be formatted correctly for two reasons:

First, so your software works with them correctly.  Failing to format a missing value code or a dummy variable correctly will have major consequences for your data analysis.

Second, it’s much faster to run the analyses and interpret results if you don’t have to keep looking up which variable Q156 is.

Examples include:
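As a hypothetical sketch of what such formatting steps can look like in code (the variable names, the -99 missing-value code, and the category codes are all invented for illustration):

```python
# Hypothetical sketch of three formatting steps: renaming a cryptic variable,
# converting a survey's missing-value code into a real missing value, and
# building a 0/1 dummy variable. Names and codes are invented for illustration.

raw = [
    {"Q156": 1, "income": 52000},
    {"Q156": 2, "income": -99},   # -99 is this survey's missing-value code
]

MISSING_CODE = -99

formatted = []
for row in raw:
    formatted.append({
        "employment_status": row["Q156"],                            # descriptive name, not Q156
        "income": None if row["income"] == MISSING_CODE else row["income"],
        "employed": 1 if row["Q156"] == 1 else 0,                    # 0/1 dummy, not 1/2 codes
    })
```

If -99 were left as a numeric value, it would silently pull down every mean and slope it touched, which is exactly the "major consequences" kind of mistake.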

All three of these steps require a solid knowledge of how to manage data in your statistical software.  Each package approaches them a little differently.

It’s also very important to keep track of and be able to easily redo all your steps.  Always assume you’ll have to redo something.  So use (or record) syntax, not only menus.

 


Steps to Running Any Statistical Model Questions, Part 1

February 15th, 2013

Recently I gave a webinar The Steps to Running Any Statistical Model.  A few hundred people were live on the webinar.  We held a Q&A session at the end, but as you can imagine, we didn’t have time to get through all the questions.

This is the first in a series of written answers to some of those questions.  I’ve tried to sort them by the step each is about.

A written list of the steps is available here.

If you missed the webinar, you can view the video here.  It’s free.

Questions about Step 1. Write out research questions in theoretical and operational terms

Q: In designing research using secondary data, have you found that this type of data source affects the research question? That is, should one have a strong understanding of the data to ensure their theoretical concept can be operationalized to fit the data?  My research question changes the more I learn.

Yes.  There’s no point in asking research questions that the data you have available can’t answer.

So the order of the steps would have to change—you may have to start with a vague idea of the type of research question you want to ask, but only refine it after doing some descriptive statistics, or even running an initial model.

 

Q: How soon in the process should one start with the first group of steps?

You want to at least start thinking about them as you’re doing the lit review and formulating your research questions.

Think about how you could measure the variables, and which ones are likely to be collinear or to have a lot of missing data.  Think about the kind of model you’d have to run for each research question.

Think of a scenario where the same research question could be operationalized such that the dependent variable is measured either as a continuous variable or as ordered categories.  An easy example is income: measured either in actual dollars or in income categories.

By all means, if people can answer the question with a real and accurate number, your analysis will be much, much easier.  In many situations, they can’t.  They won’t know, remember, or tell you their exact income.  If so, you may have to use categories to prevent missing data.  But these are things to think about early.
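The two operationalizations can be sketched side by side; the cut points below are invented for illustration, not a recommendation:

```python
# Hypothetical sketch of the two operationalizations of income: exact dollars
# (continuous) versus ordered categories. Cut points are invented for the example.

def income_category(dollars):
    """Collapse exact income into ordered categories (example cut points)."""
    if dollars < 25000:
        return "under_25k"
    elif dollars < 75000:
        return "25k_to_75k"
    else:
        return "75k_plus"

exact = [18000, 43000, 120000]                        # continuous version
categorical = [income_category(x) for x in exact]     # ordered-categorical version
```

Notice the collapsing only goes one way: you can always derive the categories from exact dollars later, but never recover dollars from the categories. That asymmetry is one more reason to think about measurement early.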

 

Q: Where in the process do you use existing lit/results to shape the research question and modeling?

I would start by putting the literature review before Step 1.  You’ll use it to decide on a theoretical research question, as well as ways to operationalize it.

But it will help in other places as well.  For example, variance estimates from other studies help with the sample size calculations.  Other studies may give you an idea of variables that are likely to have missing data, or too little variation to include as predictors.  They may change your exploratory factor analysis in Step 7 to a confirmatory one.

In fact, just about every step can benefit from a good literature review.


 


Checking the Normality Assumption for an ANOVA Model

May 21st, 2012

I am reviewing your notes from your workshop on assumptions.  You have made it very clear how to analyze normality for regressions, but I could not find how to determine normality for ANOVAs.  Do I check for normality for each independent variable separately?  Where do I get the residuals?  What plots do I run?  Thank you!

I received this great question this morning from a past participant in my Assumptions of Linear Models workshop.

It’s one of those quick questions without a quick answer. Or rather, without a quick and useful answer.  The quick answer is:

Do it exactly the same way.  All of it.

The longer, useful answer is this: (more…)
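The quick answer above can be sketched in a few lines. In a one-way ANOVA, each residual is simply an observation minus its group mean, and it’s those pooled residuals — not the independent variables themselves — that you examine for normality, exactly as you would a regression model’s residuals. The group labels and data values here are invented for illustration:

```python
# A minimal sketch of "do it exactly the same way": one-way ANOVA residuals
# are each observation minus its group mean. Pool them across groups and
# inspect them just as you would regression residuals (QQ plot, histogram).
# Group names and data values are invented for illustration.

from statistics import mean

groups = {
    "control":   [4.1, 5.0, 4.6],
    "treatment": [6.2, 5.8, 6.3],
}

residuals = []
for label, values in groups.items():
    g_mean = mean(values)                          # the model's fitted value per group
    residuals.extend(v - g_mean for v in values)   # observation minus fitted value

# These pooled residuals are what go into the same normality checks you would
# run for a regression model -- not a check on each independent variable.
```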