In this 10-part tutorial, you will learn how to get started using SPSS for data preparation, analysis, and graphing. This tutorial will give you the skills to start using SPSS on your own. You will need a license for SPSS, and to have it installed, before you begin.
You may have heard that using SPSS Syntax is more efficient, gives you more control, and ultimately saves you time and frustration. They’re all true.
I realize that you probably use SPSS because you don’t want to code. You like the menus.
I get it. I like the menus, too, and I use them all the time. But I use syntax just as often.
At some point, if you want to do serious data analysis, you have to start using syntax.
One of the difficult decisions in mixed modeling is deciding which factors are fixed and which are random. And as difficult as it is, it’s also very important. Correctly specifying the fixed and random factors of the model is vital to obtain accurate analyses.
Now, you may be thinking of the fixed and random effects in the model, rather than the factors themselves, as fixed or random. If so, remember that each term in the model (factor, covariate, interaction or other multiplicative term) has an effect. We’ll come back to how the model measures the effects for fixed and random factors.
Sadly, the definitions in many texts don’t help much with decisions to specify factors as fixed or random. Textbook examples are often artificial and hard to apply to the real, messy data you’re working with.
Here’s the real kicker. The same factor can often be fixed or random, depending on the researcher’s objective.
That’s right, there isn’t always a right or a wrong decision here. It depends on the inferences you want to make about this factor. This article outlines a different way to think about fixed and random factors.
Consider an experiment that examines beetle damage on cucumbers. The researcher replicates the experiment at five farms and on four fields at each farm.
There are two varieties of cucumbers, and the researcher measures beetle damage on each of 50 plants at the end of the season. The researcher wants to measure differences in how much damage the two varieties sustain.
The experiment then has the following factors: VARIETY, FARM, and FIELD.
Fixed factors can be thought of in terms of differences.
The effect of a categorical fixed factor is measured by differences from the overall mean.
The effect of a continuous fixed predictor (usually called a covariate) is defined by its slope: how the mean of the dependent variable changes as the value of the predictor changes.
The output for fixed predictors, then, gives estimates of mean differences or slopes.
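The "effects as differences" idea can be made concrete with a quick sketch. This is a toy illustration in Python with made-up damage scores, not output from any statistical package:

```python
# Illustrative sketch: the effect of a categorical fixed factor measured
# as a difference from the overall mean (effect coding).
# All numbers below are invented for demonstration.

variety_a = [12.0, 14.0, 13.0, 11.0]   # hypothetical beetle damage, variety A
variety_b = [18.0, 20.0, 19.0, 17.0]   # hypothetical beetle damage, variety B

mean_a = sum(variety_a) / len(variety_a)   # 12.5
mean_b = sum(variety_b) / len(variety_b)   # 18.5
overall = (mean_a + mean_b) / 2            # 15.5, the unweighted grand mean

effect_a = mean_a - overall   # variety A sits below the overall mean
effect_b = mean_b - overall   # variety B sits above it

# The fixed effects are these differences, and they sum to zero.
print(effect_a, effect_b)
```

Note that the conclusion applies only to these two varieties, exactly as described above.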
Conclusions regarding fixed factors are particular to the values of these factors. For example, if one variety of cucumber has significantly less damage than the other, you can make conclusions about these two varieties. This says nothing about cucumber varieties that were not tested.
Random factors, on the other hand, are defined by a distribution and not by differences.
The values of a random factor are assumed to be chosen from a population with a normal distribution with a certain variance.
The output for a random factor is an estimate of this variance and not a set of differences from a mean. Effects of random factors are measured in terms of variance, not mean differences.
For example, we may find that the variance among fields makes up a certain percentage of the overall variance in beetle damage. But we’re not measuring how much higher the beetle damage is in one field compared to another. We only care that they vary, not how much they differ.
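What "the variance among fields" means can be sketched with a small simulation. This is a toy illustration with assumed variance components, not the cucumber data: each field contributes a random shift drawn from a normal distribution, and we summarize fields by how much their means vary, not by comparing any two of them.

```python
import random

random.seed(1)

# Assumed (made-up) variance components for the sketch:
FIELD_SD = 3.0    # between-field standard deviation
PLANT_SD = 2.0    # plant-to-plant standard deviation within a field

n_fields, n_plants = 200, 50
field_means = []
for _ in range(n_fields):
    field_effect = random.gauss(0, FIELD_SD)   # this field's random shift
    plants = [10 + field_effect + random.gauss(0, PLANT_SD)
              for _ in range(n_plants)]
    field_means.append(sum(plants) / n_plants)

# The random-factor summary is a variance, not a list of field differences.
grand = sum(field_means) / n_fields
between_var = sum((m - grand) ** 2 for m in field_means) / (n_fields - 1)

# between_var estimates FIELD_SD**2 + PLANT_SD**2 / n_plants, i.e. about 9.1
print(round(between_var, 2))
```

The output is a single variance estimate; no field-versus-field comparison appears anywhere, which is the point.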
Software specification of fixed and random factors
Before we get into specifying factors as fixed and random, please note:
When you specify random effects in software, such as random intercepts and slopes, the random factor itself is the Subject variable. How you indicate this differs across software procedures. Some specifically ask for code like “Subject=FARM.” Others just put FARM after a || or some other indicator of the subject for which random intercepts and slopes are measured.
Some software, like SAS and SPSS, allow you to specify the random factors two ways. One is to list the random effects (like intercept) along with the random factor listed as a subject.
/Random Intercept | Subject(Farm)
The other is to list the random factors themselves.
Both of those give you the same output.
Other software, like R and Stata, only allow the former. And neither uses the term “Subject” anywhere. But that random factor, the subject, still needs to be specified after the | or || in the random part of the model.
Specifying fixed effects is pretty straightforward.
Situations that indicate a factor should be specified as fixed:
1. The factor is the primary treatment that you want to compare.
In our example, VARIETY is definitely fixed as the researcher wants to compare the mean beetle damage on the two varieties.
2. The factor is a secondary control variable, and you want to control for differences in the specific values of this factor.
Say the researcher chose these farms specifically for some feature they had, such as specific soil types or topographies that may affect beetle damage. If the researcher wants to compare the farms as representatives of those soil types, then FARM should be fixed.
3. The factor has only two values.
Even if everything else indicates that a factor should be random, a variance cannot be reliably estimated from only two values, so a factor with only two levels should be treated as fixed.
Situations that indicate random factors:
1. Your interest is in quantifying how much of the overall variance to attribute to this factor.
If you want to know how much of the variation in beetle damage you can attribute to the farm at which the damage took place, FARM would be random.
2. Your interest is not in knowing which specific means differ, but you want to account for the variation in this factor.
If the farms were chosen at random, FARM should be random.
This choice of the specific farms involved in the study is key. If you could rerun the study using a different set of farms (different values of the Farm factor) and still draw the same conclusions, then Farm should be random. However, if you want to compare or control for these particular farms, then Farm is a fixed factor.
3. You would like to generalize the conclusions about this factor to the whole population.
There is nothing about comparing these specific fields that is of interest to the researcher. Rather, the researcher wants to generalize the results of this experiment to all fields, so FIELD is random.
4. Any interaction with a random factor is also random. In our example, if FARM is random, then the VARIETY*FARM interaction must be specified as random as well.
How the factors of a model are specified can have great influence on the results of the analysis and on the conclusions drawn.
Imputation as an approach to missing data has been around for decades.
You probably learned about mean imputation in methods classes, only to be told to never do it for a variety of very good reasons. Mean imputation, in which each missing value is replaced, or imputed, with the mean of observed values of that variable, is not the only type of imputation, however.
Two Criteria for Better Imputations
Better, although still problematic, imputation methods have two qualities. They use other variables in the data set to predict the missing value, and they contain a random component.
Using other variables preserves the relationships among variables in the imputations. It feels like cheating, but it isn’t. It ensures that estimates of relationships involving the imputed variable are not attenuated (biased toward zero). Sure, underestimates are conservatively biased, but they’re still biased.
The random component is important so that all missing values of a single variable are not exactly equal. Why is that important? If all imputed values are equal, standard errors for statistics using that variable will be artificially low.
There are a few different ways to meet these criteria. One example would be to use a regression equation to predict missing values, then add a random error term.
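That regression-plus-error approach can be sketched in a few lines. This is a toy Python illustration with made-up data, not any package’s built-in routine: fit a line on the complete cases, predict each missing value, then add noise drawn from the residual distribution so the imputations carry a random component.

```python
import random

random.seed(42)

# Toy data: y depends on x; two y values are missing (None).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, None, 8.2, None, 12.1]

# Fit a simple least-squares line using only the complete pairs.
pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
n = len(pairs)
mx = sum(xi for xi, _ in pairs) / n
my = sum(yi for _, yi in pairs) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in pairs)
         / sum((xi - mx) ** 2 for xi, _ in pairs))
intercept = my - slope * mx

# The residual standard deviation supplies the random component.
resid = [yi - (intercept + slope * xi) for xi, yi in pairs]
resid_sd = (sum(r * r for r in resid) / (n - 2)) ** 0.5

# Impute: predicted value plus random error, so two missing values with
# the same x would NOT receive identical imputations.
y_imputed = [yi if yi is not None
             else intercept + slope * xi + random.gauss(0, resid_sd)
             for xi, yi in zip(x, y)]
```

Both criteria are met: the imputation uses another variable (x), and the added noise keeps the imputed values from all sitting exactly on the regression line.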
Although this approach solves many of the problems inherent in mean imputation, one problem remains. Because the imputed value is an estimate (a predicted value), there is uncertainty about its true value. Every statistic has uncertainty, measured by its standard error. Statistics computed using imputed data have even more uncertainty than their standard errors measure.
Your statistical package, however, cannot distinguish between an imputed value and a real value; it treats both as observed data.
Since the standard errors of statistics based on imputed values are too small, the corresponding p-values are also too small. P-values that are reported as smaller than they actually are? Those lead to Type I errors.
How Multiple Imputation Works
Multiple imputation solves this problem by incorporating the uncertainty inherent in imputation. It has four steps:
- Create m sets of imputations for the missing values using a good imputation process. This means it uses information from other variables and has a random component.
- The result is m full data sets. Each data set will have slightly different values for the imputed data because of the random component.
- Analyze each completed data set. Each set of parameter estimates will differ slightly because the data differs slightly.
- Combine the results, calculating both the average parameter estimate and the variation in parameter estimates across the imputations.
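The combining step is usually done with Rubin’s rules, which can be sketched in a few lines of Python. The per-imputation estimates below are made-up stand-ins for the m sets of results:

```python
# Hypothetical results from m = 5 analyses of the completed data sets:
# a parameter estimate and its squared standard error from each.
estimates = [2.10, 2.25, 1.95, 2.30, 2.05]       # made-up slope estimates
variances = [0.040, 0.038, 0.042, 0.039, 0.041]  # made-up squared SEs

m = len(estimates)
pooled = sum(estimates) / m                       # pooled point estimate

within = sum(variances) / m                       # avg within-imputation variance
between = sum((e - pooled) ** 2 for e in estimates) / (m - 1)

# The total variance adds the between-imputation spread, inflated by 1/m.
# This between term is exactly the uncertainty single imputation ignores.
total_var = within + (1 + 1 / m) * between
pooled_se = total_var ** 0.5
```

Because `total_var` is strictly larger than the average within-imputation variance, the pooled standard error honestly reflects the imputation uncertainty.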
Remarkably, m, the number of imputations needed, can be as small as 5 to 10, although it depends on the percentage of data that are missing. A good multiple imputation model results in unbiased parameter estimates and a full sample size.
Doing multiple imputation well, however, is not always quick or easy. First, it requires that the missing data be missing at random. Second, it requires a very good imputation model. Creating a good imputation model requires knowing your data very well and having variables that will predict missing values.
In your statistics class, your professor made a big deal about unequal sample sizes in one-way Analysis of Variance (ANOVA) for two reasons.
1. Because she was making you calculate everything by hand. Sums of squares require a different formula if sample sizes are unequal, but statistical software will automatically use the right one. So we’re not too concerned. We’re definitely using software.
2. Nice properties in ANOVA, such as the Grand Mean being the intercept in an effect-coded regression model, don’t hold when data are unbalanced. Instead of the grand mean, you need to use a weighted mean. That’s not a big deal if you’re aware of it.
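The weighted-mean point is easy to see with a quick illustration (made-up numbers): when group sizes are unequal, the unweighted mean of the group means no longer equals the mean of all the observations.

```python
# Two groups with unequal sample sizes (invented data).
group_a = [10.0] * 8          # n = 8, group mean 10
group_b = [20.0] * 2          # n = 2, group mean 20

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

unweighted_grand = (mean_a + mean_b) / 2          # unweighted mean of means
all_obs = group_a + group_b
weighted_grand = sum(all_obs) / len(all_obs)      # weights each group by its n

# Balanced data would make these equal; unbalanced data does not.
print(unweighted_grand, weighted_grand)
```

With balanced groups the two quantities coincide, which is why the distinction never came up until the design became unbalanced.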
Of all the stressors you’ve got right now, accessing your statistical software from home (you know, the one installed on your office computer) shouldn’t be one of them.
We’ve gotten updates from some statistical software companies on how they’re making it easier to access the software you’re licensed for, or to extend a free trial, while you’re working from home.