Random Intercept and Random Slope Models

July 27th, 2015 by Karen Grace-Martin

This free, one-hour webinar is part of our regular Craft of Statistical Analysis series. In it, we will introduce and demonstrate two of the core concepts of mixed modeling—the random intercept and the random slope.

Most scientific fields now recognize the extraordinary usefulness of mixed models, but they’re a tough nut to crack for someone who didn’t receive training in their methodology.

But it turns out that mixed models are actually an extension of linear models. If you have a good foundation in linear models, the extension to mixed models is more of a step than a leap. (Okay, a large step, but still.)

You’ll learn what random intercepts and slopes mean, what they do, and how to decide if one or both are needed. It’s the first step in understanding mixed modeling.

Date: Friday, August 21, 2015
Time: 12pm EDT (New York time)
Cost: Free

***Note: This webinar has already taken place. Sign up below to get access to the video recording of the webinar.


Member Training: An Overview of Effect Size Statistics and Why They Are So Important

July 1st, 2015

Whenever we run an analysis of variance or a regression, one of the first things we do is look at the p-values of our predictor variables to determine whether they are statistically significant. When a variable is statistically significant, did you ever stop and ask yourself how significant it is? (more…)
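
A quick illustration of why that question matters: with a large enough sample, even a trivially small effect earns a tiny p-value. Here is a minimal sketch in Python (my own example, not material from the training) contrasting a t-test’s p-value with an effect size such as Cohen’s d:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Two groups with a tiny true difference, but a huge sample
    a = rng.normal(100.0, 15.0, size=20000)
    b = rng.normal(101.0, 15.0, size=20000)

    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd  # Cohen's d

    print(f"p = {p:.2e}")  # highly "significant" -- tiny p-value
    print(f"d = {d:.2f}")  # yet the effect is small (true d is about 0.07)

The p-value only tells you the difference is detectable; the effect size tells you how big it actually is.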


Member Training: Transformations & Nonlinear Effects in Linear Models

May 7th, 2015

Why is it that we can model non-linear effects in linear regression?

What the heck does it mean for a model to be “linear in the parameters?” (more…)
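
As a quick preview of the answer: a model such as y = b0 + b1*x + b2*x^2 is nonlinear in x but linear in the parameters b0, b1, and b2, which is why ordinary least squares can still fit it. A minimal sketch in Python, using simulated data of my own:

    import numpy as np

    # Simulate a curved relationship between x and y
    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, size=200)
    y = 2.0 + 1.5 * x - 0.3 * x**2 + rng.normal(0, 1, size=200)

    # y = b0 + b1*x + b2*x^2 is nonlinear in x but linear in the
    # coefficients, so plain least squares recovers them
    X = np.column_stack([np.ones_like(x), x, x**2])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coefs)  # approximately [2.0, 1.5, -0.3]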


Models for Repeated Measures: Continuous, Categorical, and Count Data

April 6th, 2015

Lately, I’ve gotten a lot of questions about how to run models for repeated measures data that aren’t continuous.

Mostly categorical. But once in a while, discrete counts.

A typical study is in linguistics or psychology where (more…)


Member Training: Confidence Intervals

April 6th, 2015

A Science News article from July 2014 was titled “Scientists’ grasp of confidence intervals doesn’t inspire confidence.” Perhaps that is why only 11% of the articles in the 10 leading psychology journals in 2006 reported confidence intervals in their statistical analyses.

How important is it to be able to create and interpret confidence intervals?

The American Psychological Association Publication Manual, which sets the editorial standards for over 1,000 journals in the behavioral, life, and social sciences, has begun emphasizing parameter estimation and de-emphasizing Null Hypothesis Significance Testing (NHST).

Its most recent edition, the sixth, published in 2009, states “estimates of appropriate effect sizes and confidence intervals are the minimum expectations” for published research.

In this webinar, we’ll clear up the ambiguity about what exactly a confidence interval is and how to interpret one in tables and graphs. We will also explore how confidence intervals are calculated for continuous and dichotomous outcome variables in various types of samples, and look at the impact sample size has on the width of the interval. We’ll also discuss related concepts, like equivalence testing.
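
To preview the sample-size point with a concrete example (my own sketch, not the webinar’s materials): a 95% confidence interval for a mean narrows as n grows, roughly in proportion to 1/sqrt(n).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    for n in (25, 100, 400):
        sample = rng.normal(loc=50, scale=10, size=n)
        mean = sample.mean()
        # 95% CI for a mean: mean +/- t * (sd / sqrt(n))
        half_width = stats.t.ppf(0.975, df=n - 1) * sample.std(ddof=1) / np.sqrt(n)
        print(f"n = {n:3d}  CI = ({mean - half_width:.2f}, {mean + half_width:.2f})")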

By the end of the webinar, we anticipate your grasp of confidence intervals will inspire confidence.


Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.

(more…)


Preparing Data for Analysis is (more than) Half the Battle

March 18th, 2015

Just last week, a colleague mentioned that while he does a lot of study design these days, he no longer does much data analysis.

His main reason was that 80% of the work in data analysis is preparing the data for analysis.  Data preparation is s-l-o-w and he found that few colleagues and clients understood this.

Consequently, he was running into expectations that he should analyze a raw data set in an hour or so.

You know, by clicking a few buttons.

I see this as well with researchers new to data analysis.  While they know it will take longer than an hour, they still have unrealistic expectations about how long it takes.

So I am here to tell you, the time-consuming part is preparing the data.  Weeks or months is a realistic time frame.  Hours is not.

(Feel free to send this to your colleagues who want instant results.)

There are three parts to preparing data: cleaning it, creating necessary variables, and formatting all variables.

Data Cleaning

Data cleaning means finding and eliminating errors in the data.  How you approach it depends on how large the data set is, but the kinds of things you’re looking for are:

  • Impossible or otherwise incorrect values for specific variables
  • Cases that met exclusion criteria and shouldn’t be in the study
  • Duplicate cases
  • Missing data and outliers (don’t delete all outliers, but do investigate whether any are errors)
  • Skip-pattern or logic breakdowns
  • Inconsistent values of string variables (male ≠ Male in most statistical software).

You can’t avoid data cleaning and it always takes a while, but there are ways to make it more efficient. For example, rather than search through the data set for impossible values, print a table of data values outside a normal range, along with subject ids.
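
In pandas, for instance, that kind of check might look like the sketch below (the file and column names are hypothetical):

    import pandas as pd

    df = pd.read_csv("study_data.csv")  # hypothetical file name

    # Print a table of out-of-range ages along with subject ids,
    # instead of scanning the whole data set by eye
    bad_age = df.loc[(df["age"] < 18) | (df["age"] > 99), ["subject_id", "age"]]
    print(bad_age)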

This is where learning how to code in your statistical software of choice really helps.  You’ll need to subset your data using IF statements to find those impossible values.

But if your data set is anything but small, you can also save yourself a lot of time, code, and errors by incorporating efficiencies like loops and macros so that you can perform some of these checks on many variables at once.
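
Here is one way that idea might look in Python, applying the same range check to several variables in a loop (the column names and plausible ranges are invented for illustration):

    import pandas as pd

    df = pd.read_csv("study_data.csv")  # hypothetical file name

    # One check, many variables: a plausible range for each column
    valid_ranges = {"age": (18, 99), "weight_kg": (30, 250), "height_cm": (100, 230)}

    for col, (low, high) in valid_ranges.items():
        out_of_range = df.loc[(df[col] < low) | (df[col] > high), ["subject_id", col]]
        if not out_of_range.empty:
            print(f"--- {col}: {len(out_of_range)} suspicious values ---")
            print(out_of_range)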

Creating New Variables

Once the data are free of errors, you need to set up the variables that will directly answer your research questions.

It’s a rare data set in which every variable you need is measured directly.

So you may need to do a lot of recoding and computing of variables.

And of course, part of creating each new variable is double-checking that it worked correctly.
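
As a small pandas sketch of that workflow (the reverse-scored survey item and scale score are invented examples):

    import pandas as pd

    df = pd.read_csv("study_data.csv")  # hypothetical file name

    # Recode: reverse-score a 1-to-5 survey item
    df["item3_rev"] = 6 - df["item3"]

    # Compute: a total scale score from several items
    df["scale_total"] = df[["item1", "item2", "item3_rev"]].sum(axis=1)

    # Double-check the recode worked: old vs. new values side by side
    print(pd.crosstab(df["item3"], df["item3_rev"]))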

Formatting Variables

Both original and newly created variables need to be formatted correctly for two reasons:

First, so your software works with them correctly.  Failing to format a missing value code or a dummy variable correctly will have major consequences for your data analysis.

Second, it’s much faster to run the analyses and interpret results if you don’t have to keep looking up which variable Q156 is.

Examples include:

  • Setting all missing data codes so missing data are treated as such
  • Formatting date variables as dates, numerical variables as numbers, etc.
  • Labeling all variables and categorical values so you don’t have to keep looking them up.
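
In pandas, for instance, those three kinds of formatting might look like this sketch (the missing-data code, column names, and labels are all hypothetical):

    import pandas as pd

    df = pd.read_csv("study_data.csv")  # hypothetical file name

    # 1. Set the missing-data code (here, -99) so it is treated as missing
    df = df.replace(-99, float("nan"))

    # 2. Format dates as dates and numeric variables as numbers
    df["visit_date"] = pd.to_datetime(df["visit_date"])
    df["Q156"] = pd.to_numeric(df["Q156"])

    # 3. Label variables and category values so Q156 stops being a mystery
    df = df.rename(columns={"Q156": "income_bracket"})
    df["income_bracket"] = df["income_bracket"].map(
        {1: "low", 2: "middle", 3: "high"}
    ).astype("category")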

All three of these steps require a solid knowledge of how to manage data in your statistical software.  Each package approaches them a little differently.

It’s also very important to keep track of and be able to easily redo all your steps.  Always assume you’ll have to redo something.  So use (or record) syntax, not only menus.