The beauty of the Univariate GLM procedure in SPSS is that it is so flexible. You can use it to analyze regressions, ANOVAs, ANCOVAs with all sorts of interactions, dummy coding, etc.

The downside of this flexibility is that it is often confusing to know what to put where and what it all means.

So here’s a quick breakdown.

The dependent variable, I hope, is pretty straightforward: put in your continuous dependent variable.

Fixed Factors are categorical independent variables. It does not matter if the variable is something you manipulated or something you are controlling for. If it’s categorical, it goes in Fixed Factors.

Now, you can put a categorical variable into Covariates, as long as it's coded properly; dummy and effect coding are common. What you don't want to do, though, is put a variable coded 1, 2, 3, 4, 5, 6 for its 6 categories into Covariates. SPSS will treat those values as real numbers and fit a regression line.
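To see the difference concretely, here is a minimal sketch in plain Python (not SPSS syntax; the variable and values are made up) of what the design matrix looks like in each case:

```python
# Illustrative sketch: a 6-category code entered as a covariate vs. dummy coded.

codes = [1, 2, 3, 4, 5, 6]  # hypothetical group codes, one observation per group

# Entered as a covariate: a single column, so the model fits one slope and
# assumes group 2 is "one unit more" than group 1, group 3 one more again, etc.
as_covariate = [[c] for c in codes]

# Dummy coded (reference = last category, mirroring SPSS's default): five 0/1
# indicator columns, one per non-reference category, so each group gets its own mean.
categories = sorted(set(codes))
reference = categories[-1]

def dummy_row(c):
    return [1 if c == level else 0 for level in categories if level != reference]

as_factor = [dummy_row(c) for c in codes]

print(as_covariate)   # [[1], [2], [3], [4], [5], [6]]
print(as_factor[0])   # [1, 0, 0, 0, 0]  (code 1)
print(as_factor[-1])  # [0, 0, 0, 0, 0]  (code 6, the reference)
```

The single-column version is exactly the "regression line through the codes" mistake described above; the five-column version is what Fixed Factors builds for you automatically.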

There are a few things you should know about putting a categorical variable into Fixed Factors.

1. You don’t have to create dummy variables for a regression or ANCOVA. SPSS does that for you by default.

2. The default is for SPSS to create interactions among all fixed factors. So if you have 5 fixed factors and don't want to test a 5-way interaction that you'll never be able to interpret, you'll need to create a custom model by clicking Model and removing some of the interactions.

3. For any Fixed Factor, you can get marginal means (means adjusted for the other variables in the model) by clicking Options. These are generally easier to interpret than the parameter estimates for categorical variables. Especially if you don't have any continuous predictors in your model, it is much easier to interpret means than parameter estimates.

4. You can also get pairwise comparison tests for any Fixed Factor by clicking Post Hoc. You can't get them for Covariates.

5. The default in SPSS is to dummy code any Fixed Factors for the Regression Parameter Estimates Table (which will only be output if you click Options–>Parameter Estimates).

The default is to make the reference category the one that comes last alphabetically. So if your categories (what you typed into the data) are Male and Female, Male will be the default reference.

Remember that higher numbers sort later, so **if you had coded your categories 0 and 1, SPSS will make 1 the reference group!** This can create a lot of confusion, so you can change the default by clicking Contrasts and making the reference group First.

If you want a category in the middle to be the reference group, your only choice is to recode the variable so that that category sorts last.
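The "last in sort order wins" rule can be sketched in plain Python (this is an illustration of the rule, not SPSS internals; the category names are made up, and note that SPSS sorts string values alphabetically but numeric codes numerically):

```python
# Sketch of SPSS's default reference-group rule for string values: the
# category that sorts last becomes the reference for dummy coding.
def default_reference(values):
    return sorted(set(values))[-1]

print(default_reference(["Female", "Male"]))  # 'Male' sorts after 'Female'
print(default_reference(["0", "1"]))          # '1' -- not '0'!

# To make a middle category the reference, recode it so it sorts last,
# e.g. rename hypothetical category "B" of A/B/C to something like "zB":
print(default_reference(["A", "zB", "C"]))    # 'zB'
```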

Most of the time, you won’t use Random Factors. Rather than calculating means for each category, as is done with Fixed Factors, SPSS calculates only a single variance for Random Factors. So if you want to compare the means, use Fixed Factors. In fact, if you have Random Factors, you should generally be using the Mixed procedure, which uses better algorithms for estimating effects of Random Factors.
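Returning to the marginal means from point 3 above: in a two-way factorial model, the estimated marginal mean for a factor level is the unweighted average of its cell means, which is what "adjusted for the other variables" amounts to in that simple case. A toy illustration in plain Python (the data are made up):

```python
# Toy illustration: with unbalanced cells, the raw mean of a factor level
# differs from its marginal mean, which averages the cell means equally
# across the other factor's levels.
cells = {  # (A level, B level) -> observed scores
    ("a1", "b1"): [10, 10, 10],
    ("a1", "b2"): [20],
    ("a2", "b1"): [10],
    ("a2", "b2"): [20, 20, 20],
}

def mean(xs):
    return sum(xs) / len(xs)

# Raw (unadjusted) mean for A = a1: pools all a1 scores, so the imbalance
# in B leaks into the comparison.
raw_a1 = mean(cells[("a1", "b1")] + cells[("a1", "b2")])

# Marginal mean for A = a1: average of the a1 cell means, one per B level.
marginal_a1 = mean([mean(cells[("a1", "b1")]), mean(cells[("a1", "b2")])])

print(raw_a1)       # 12.5
print(marginal_a1)  # 15.0
```

Here the raw means of a1 and a2 differ only because of the imbalance in B; the marginal means are both 15, correctly showing no A effect once B is taken into account.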

### Related Posts

- Dummy Coding in SPSS GLM–More on Fixed Factors, Covariates, and Reference Groups, Part 1
- 3 Reasons Psychology Researchers should Learn Regression
- The General Linear Model, Analysis of Covariance, and How ANOVA and Linear Regression Really are the Same Model Wearing Different Clothes
- Dummy Coding in SPSS GLM–More on Fixed Factors, Covariates, and Reference Groups, Part 2

### Comments


Hi Karen!

If you're coding for mean income, e.g. $10,000–20,000 coded as 1, $20,000–30,000 coded as 2, $30,000–40,000 coded as 3, etc., how do you control for this in a hierarchical regression model? It is coded categorically, using numbers to represent each income bracket.

best,

Rhiannon

Rhiannon, you can only treat predictors as nominal or continuous in models. There is no way to take into account the ordering of categories.

Hi, Karen!

I have a between-groups pretest posttest design which I'm analysing using ANCOVA (grouping variable has 2 levels; covariate is pretest score; DV is posttest score). My scatter plots show pretty clearly parallel regression slopes–HOORAY! But the custom GLM is showing a significant grouping variable*covariate interaction, p = 0.04. Is that -just- acceptable?

If so, how can I write that up? (If not, what should I do instead? I’ve heard I can still run the GLM, but with… some sort of change? And change in what I call the analysis?)