In this 10-part tutorial, you will learn how to get started using SPSS for data preparation, analysis, and graphing. This tutorial will give you the skills to start using SPSS on your own. You will need an SPSS license and to have the software installed before you begin.
You may have heard that using SPSS Syntax is more efficient, gives you more control, and ultimately saves you time and frustration. They’re all true.
I realize that you probably use SPSS because you don’t want to code. You like the menus.
I get it. I like the menus, too, and I use them all the time. But I use syntax just as often.
At some point, if you want to do serious data analysis, you have to start using syntax. [Read more…] about SPSS Syntax 101
In this nearly 6-hour tutorial, you will learn to use menu-based R libraries so you can run R without having to fuss with R code. These libraries don’t cover everything R can do, but they cover quite a bit and can make running R much easier.
Imputation as an approach to missing data has been around for decades.
You probably learned about mean imputation in methods classes, only to be told to never do it for a variety of very good reasons. Mean imputation, in which each missing value is replaced, or imputed, with the mean of observed values of that variable, is not the only type of imputation, however.
Two Criteria for Better Imputations
Better, although still problematic, imputation methods have two qualities. They use other variables in the data set to predict the missing value, and they contain a random component.
Using other variables preserves the relationships among variables in the imputations. It feels like cheating, but it isn’t. It ensures that estimates of relationships involving the imputed variable aren’t biased toward zero. Sure, underestimates are conservatively biased, but they’re still biased.
The random component is important so that all missing values of a single variable are not exactly equal. Why is that important? If all imputed values are equal, standard errors for statistics using that variable will be artificially low.
There are a few different ways to meet these criteria. One example would be to use a regression equation to predict missing values, then add a random error term.
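Here’s a minimal SPSS sketch of that idea. The variable names (income, age) are hypothetical, and 12.5 is a placeholder for the residual standard deviation you’d read off the regression output:

* Step 1: regress income on age and save a predicted value for every case.
REGRESSION
  /DEPENDENT income
  /METHOD=ENTER age
  /SAVE=PRED(income_pred).
* Step 2: replace each missing income with its prediction plus random error.
IF (MISSING(income)) income = income_pred + RV.NORMAL(0, 12.5).
EXECUTE.

Notice that this still produces one imputed value per case, and that turns out to be its remaining weakness.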
Although this approach solves many of the problems inherent in mean imputation, one problem remains. Because the imputed value is an estimate (a predicted value), there is uncertainty about its true value. Every statistic has uncertainty, measured by its standard error. Statistics computed using imputed data have even more uncertainty than their standard errors measure.
Your statistical package, however, cannot distinguish between an imputed value and a real observed value, so it computes standard errors as if every value were real.
Since the standard errors of statistics based on imputed values are too small, corresponding p-values are also too small. P-values that are reported as smaller than they actually are? Those lead to Type I errors.
How Multiple Imputation Works
Multiple imputation solves this problem by incorporating the uncertainty inherent in imputation. It has four steps:
- Create m sets of imputations for the missing values using a good imputation process. This means it uses information from other variables and has a random component.
- The result is m full data sets. Each data set will have slightly different values for the imputed data because of the random component.
- Analyze each completed data set. Each set of parameter estimates will differ slightly because the data differs slightly.
- Combine the results, incorporating the variation in parameter estimates both within and across the imputed data sets. (A minimal SPSS sketch of this workflow follows the list.)
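In SPSS, steps 1 and 2 are handled by the MULTIPLE IMPUTATION command, which requires the Missing Values add-on. A minimal sketch, with hypothetical variable and dataset names:

DATASET DECLARE MIData.
MULTIPLE IMPUTATION x1 x2 y
  /IMPUTE METHOD=AUTO NIMPUTATIONS=5
  /OUTFILE IMPUTATIONS=MIData.

If you then run a procedure that supports pooling (REGRESSION, for example) on the imputed dataset, SPSS handles steps 3 and 4 for you, reporting pooled estimates alongside the per-imputation results.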
Remarkably, m, the number of imputations needed, can be as few as 5 to 10, although it depends on the percentage of data that are missing. A good multiple imputation model results in unbiased parameter estimates and a full sample size.
Doing multiple imputation well, however, is not always quick or easy. First, it requires that the missing data be missing at random. Second, it requires a very good imputation model. Creating a good imputation model requires knowing your data very well and having variables that will predict missing values.
I recently was asked whether to report means from descriptive statistics or from the Estimated Marginal Means with SPSS GLM.
The short answer: Report the Estimated Marginal Means (almost always).
To understand why, and the rare case where it doesn’t matter, let’s dig in a bit with a longer answer.
First, a marginal mean is the mean response for each category of a factor, adjusted for any other variables in the model (more on this later).
Just about any time you include a factor in a linear model, you’ll want to report the mean for each group. The F test for that factor in the ANOVA table will give you a p-value for the null hypothesis that those group means are equal. And that’s important.
But you need to see the means and their standard errors to interpret the results. The differences among those means are what measure the effect of the factor. While those differences also appear in the regression coefficients, looking at the means themselves gives you context and makes interpretation more straightforward. This is especially true if you have interactions in the model.
Some basic info about marginal means
- In the SPSS menus, they’re under the Options button, and in SPSS syntax they’re the EMMEANS subcommand.
- These are called LSMeans in SAS, margins in Stata, and emmeans in R’s emmeans package.
- Although I’m talking about them in the context of linear models, all of these packages offer them for other types of models, including linear mixed models, generalized linear models, and generalized linear mixed models.
- They are also called predicted means and model-based means. There are probably a few other names for them, because that’s what happens in statistics.
When marginal means are the same as observed means
Let’s consider a few different models. In all of these, our factor of interest, X, is a categorical predictor for which we’re calculating Estimated Marginal Means. We’ll call it the Independent Variable (IV).
Model 1: No other predictors
If you have just a single factor in the model (a one-way ANOVA), marginal means and observed means will be the same.
Observed means are what you would get if you simply calculated the mean of Y for each group of X.
Model 2: Other categorical predictors, and all are balanced
Likewise, if you have other factors in the model and all of them are balanced (equal numbers of observations in each cell), the estimated marginal means will be the same as the observed means you got from descriptive statistics.
Model 3: Other categorical predictors, unbalanced
Now things change. The marginal mean for our IV is different from the observed mean. It’s the mean for each group of the IV, averaged across the groups of the other factor.
When you’re observing the category an individual is in, you will pretty much never get balanced data. Even when you’re doing random assignment, balanced groups can be hard to achieve.
In this situation, the observed means will be different from the marginal means. So report the marginal means. They better reflect the main effect of your IV: the effect of that IV, averaged across the groups of the other factor.
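A quick illustration with hypothetical numbers: suppose that within one group of the IV, the cell means across the two levels of the other factor are 10 (n = 90) and 20 (n = 10). The observed mean weights the cells by size: (90 × 10 + 10 × 20)/100 = 11. The marginal mean averages the cells equally: (10 + 20)/2 = 15. With balanced cells, the two calculations agree, which is why Model 2 showed no difference.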
Model 4: A continuous covariate
When you have a covariate in the model, the estimated marginal means will be adjusted for the covariate. Again, they’ll differ from observed means.
It works a little bit differently than it does with a factor. For a covariate, the estimated marginal mean is the mean of Y for each group of the IV at one specific value of the covariate.
By default in most software, this one specific value is the mean of the covariate. Therefore, you interpret the estimated marginal means of your IV as the mean of each group at the mean of the covariate.
This, of course, is the reason for including the covariate in the model: you want to see if your factor still has an effect, beyond the effect of the covariate. You are interested in the adjusted effects, both in the overall F-test and in the means.
If you use observed means and there is any association between the covariate and your IV, some of the mean difference between groups will be driven by the covariate.
For example, say your IV is the type of math curriculum taught to first graders. There are two types. And say your covariate is child’s age, which is related to the outcome: math score.
It turns out that curriculum A has slightly older kids and a higher mean math score than curriculum B. Observed means for each curriculum will not account for the fact that the kids who received that curriculum were a little older. Marginal means will give you the mean math score for each group at the same age. In essence, it sets Age at a constant value before calculating the mean for each curriculum. This gives you a fairer comparison between the two curricula.
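In regression terms, the marginal mean is just the model’s fitted value at a fixed age. As a sketch with generic, made-up coefficient names: with curriculum B as the reference group, the estimated marginal mean for curriculum A is b0 + bA + bAge × mean(Age), and for curriculum B it’s b0 + bAge × mean(Age). Their difference, bA, is the age-adjusted curriculum effect.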
But there is another advantage here. Although the default value of the covariate is its mean, you can change this default. This is especially helpful for interpreting interactions, where you can see the means for each group of the IV at both high and low values of the covariate.
In SPSS, you can change this default using syntax, but not through the menus.
For example, in the syntax below, the EMMEANS subcommand reports the marginal means of Y at each level of the categorical variable X, evaluated at the mean of the covariate V.
UNIANOVA Y BY X WITH V
  /EMMEANS=TABLES(X) WITH(V=MEAN).
If instead you wanted to evaluate the effect of X at a specific value of V, say 50, you can just change the EMMEANS subcommand to:
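UNIANOVA Y BY X WITH V
  /EMMEANS=TABLES(X) WITH(V=50).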
Another good reason to use syntax.
Article updated 8/17/2021
Centering a covariate (a continuous predictor variable) can make regression coefficients much more interpretable. That’s a big advantage, particularly when you have many coefficients to interpret, or when you’ve included terms that are tricky to interpret, like interactions or quadratic terms.
For example, say you had one categorical predictor with 4 categories and one continuous covariate, plus an interaction between them.
First, you’ll notice that if you center your covariate at the mean, there is [Read more…] about Centering a Covariate to Improve Interpretability
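If you’d like to try this in SPSS, centering at the mean takes only a couple of lines of syntax. A minimal sketch, assuming the covariate is named V as in the earlier example:

* Add the grand mean of V to every case, then subtract it from V.
AGGREGATE /OUTFILE=* MODE=ADDVARIABLES /V_mean=MEAN(V).
COMPUTE V_centered = V - V_mean.
EXECUTE.

You’d then use V_centered in place of V in the model.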