An Example of Specifying Within-Subjects Factors in Repeated Measures

Some repeated measures designs make it quite challenging to specify within-subjects factors. It is especially difficult when the design contains two “levels” of repetition, but your interest is in testing just one of them.

Let’s look at an example of what this looks like and how to deal with it, in this question from a reader:

The Design:

I want to do a GLM (repeated measures ANOVA) with the valence of some actions of my test subjects (valence = desirability of actions) as a within-subjects factor. My subjects have to rate a pre-set list of 20 actions/behaviours from ‘very likely to do’ to ‘will never do this’ on a scale from 1 to 7. Some of these actions are desirable (e.g. help a blind man cross the street) and therefore have a positive valence (in psychology), and some others are undesirable (e.g. play loud music at night) and therefore have a negative valence.

My question is how I can use valence as a within-subjects factor in GLM. Is there a way to tell SPSS that some actions have positive valence and others have negative valence? I assume assigning labels to the actions will not do it, as SPSS does not make analyses based on labels.
Please help. Thank you.

The Analysis Issues

The reader is correct that value labels aren’t going to do it. There are a couple of different ways to set this up, and the right one depends on the analysis you use.

My assumption is that there are an equal number of positive and negative actions among the 20, so each subject rates 10 actions of each type.

The big question is whether you actually have any interest in comparing the actions themselves. Do you really want to know whether someone is more likely to help a blind man across the street than to do some other positive action (say, donate to charity)? Or are these 10 positive actions just examples of positive actions, and you simply want to compare all the positives to all the negatives?

A Few Less-Than-Optimal Analysis Options for Within-Subjects Factors

The reason I ask is that when you do repeated measures ANOVA, you have to set it up in the wide format. This means that each response (the rating of each action) is a separate column. RM ANOVA treats each column as a condition and compares them. There is no way to specify “compare these 10 columns to those other 10 columns.”

Now if it turns out that these 20 actions actually represent 10 categories of another factor x 2 categories of valence, Repeated Measures ANOVA would work just fine.  But it has to calculate a mean for each cell of the 10×2 and it assumes that each column represents one combination of conditions.   In other words, RM ANOVA has no way of dealing with 10 within-subjects replicates of the exact same condition.

The ad-hoc approach is to average the 10 responses that are all in the same condition, then run the RM ANOVA on the two averages. While this basically works, it’s not the best approach: among other problems, averaging throws away the information about how responses vary across actions. It made sense when repeated measures ANOVA was the only option, but it isn’t anymore.
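To make that concrete, here is a rough sketch of the averaging approach in Python with pandas. The column names (pos_01 through pos_10 and neg_01 through neg_10) and the tiny data set are hypothetical, purely to illustrate the wide layout; your own data will have its own names.

import pandas as pd

# Hypothetical wide-format data: one row per subject,
# one column per rated action (names and values are made up).
wide = pd.DataFrame({
    "subject": [1, 2, 3],
    **{f"pos_{i:02d}": [5, 6, 4] for i in range(1, 11)},  # 10 positive actions
    **{f"neg_{i:02d}": [2, 1, 3] for i in range(1, 11)},  # 10 negative actions
})

# Ad-hoc approach: average the 10 ratings within each valence,
# then compare the two means (e.g., with a paired test or RM ANOVA).
wide["pos_mean"] = wide[[f"pos_{i:02d}" for i in range(1, 11)]].mean(axis=1)
wide["neg_mean"] = wide[[f"neg_{i:02d}" for i in range(1, 11)]].mean(axis=1)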

A Better but Unintuitive Way to Specify Within-Subjects Factors

There is a better way to do it, but it requires a different way of thinking about the data. Instead of RM ANOVA, you will run a linear mixed model.

The big advantage here is that mixed models use the long data format. This means that each person’s response to each of the 20 actions goes into its own row of data. You have one variable (column) for the response, another that indicates the action, and another that indicates the valence of that action.

This allows you to separate out what’s being repeated (20 actions) from the variable you’re actually interested in testing (valence).
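Here is a sketch of that restructuring in pandas, continuing with the hypothetical wide data from the earlier sketch. (In SPSS, the Restructure wizard / VARSTOCASES does the same job.) The valence column is derived from the assumed pos_/neg_ naming convention.

# Reshape the hypothetical wide data into long format:
# one row per subject x action, with rating, action id, and valence.
action_cols = [f"{v}_{i:02d}" for v in ("pos", "neg") for i in range(1, 11)]
long = wide.melt(id_vars="subject", value_vars=action_cols,
                 var_name="action", value_name="rating")

# Derive the valence factor from the (assumed) action-name prefix.
long["valence"] = long["action"].str.split("_").str[0]   # "pos" or "neg"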

This is how I would recommend doing it, and in fact you would ideally analyze this kind of experiment with crossed random factors. Crossed random factors let you account for two sources of correlation. First, multiple responses across actions from the same subject are likely to be correlated. Second, multiple responses across subjects to the same action are likely to be correlated.

You will specify subjects and actions as two crossed random factors, each with a random intercept, and valence as a fixed factor. It will feel like you didn’t specify within-subjects factors at all, but the software works out the structure from the two random factors and from the fact that each subject has responses at both valences in different rows of the data.
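Here is a minimal sketch of that specification in Python with statsmodels, assuming the hypothetical long data frame from the previous sketch. It is one way to express crossed random intercepts (statsmodels does this by treating the whole data set as a single group and declaring each crossing factor as a variance component); SPSS’s MIXED procedure and other mixed model software can express the same model.

import statsmodels.formula.api as smf

# One dummy grouping variable spanning all rows, so subject and action
# can each enter as a crossed random intercept via variance components.
long["one_group"] = 1
vc = {"subject": "0 + C(subject)", "action": "0 + C(action)"}

# Fixed effect: valence. Random intercepts: subject and action (crossed).
model = smf.mixedlm("rating ~ valence",
                    data=long,
                    groups="one_group",
                    vc_formula=vc,
                    re_formula="0")   # no extra random intercept for the dummy group
result = model.fit()                  # with real data; the toy data above only shows the structure
print(result.summary())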

Why does the random intercept for actions matter? There may be some negative actions (kicking puppies) that people generally rate as less likely than other negative actions (playing loud music). That consistency across people’s responses makes those responses correlated. If you account for it, that variation comes out of the unexplained error in the denominator of the F test.

This helps you find real effects in the comparisons you want to make and helps you meet the assumption of independent errors.
