Volume 6, Issue 3
October 2013

A Note From Karen

Happy October! We’re all about workshops this fall….

After months of work, our team has finally completed all five of our On Demand workshops. We’re quite excited about these since they allow you to learn a statistical topic when you need it, at your own pace. You can check out topics and details here.

Been thinking of learning R? We’re opening registration next week on a brand new Introduction to R workshop by our fabulous guest instructor Dr. David Lillis. David has now taught a few webinars and a previous workshop for us, and I’m happy to say we’ve gotten great feedback on what a terrific instructor he is. If you’re interested in seeing what R is all about, take a look.

And we’re beginning to plan our workshops for winter and spring, but we need some feedback. We get occasional requests for specific workshops, and there are so many potential topics. We want to make sure we’re offering the ones you need. Later this week, we’ll send out a very, very short survey about potential workshop topics.

And I hope you enjoy today’s article. It’s one of those seemingly basic topics that can actually get confusing once you get into real data situations.

Happy analyzing!
Karen


Feature Article: Five Common Relationships Among Three Variables in a Statistical Model

In a statistical model–any statistical model–there is only one way that a predictor X and a response Y can relate:

X → Y

I’m assuming the point is to model the predictive ability, the effect, of X on Y. In other words, there is a clear response variable, although not necessarily a causal relationship. We could have switched the direction of the arrow to indicate that Y predicts X or used a two-headed arrow to show a correlation, with no direction, but that’s a whole other story. For our purposes, Y is the response variable and X the predictor.

But a third variable–another predictor–can relate to X and Y in a number of different ways. How this predictor relates to X and Y changes how we interpret the relationship between X and Y.

These relationships have common names, but the names sometimes differ across fields, so you may be familiar with a different name than the ones I give below. In any case, the names are less important than:

1. making sure the relationship you are testing is the one that answers your research question, and
2. making sure that relationship reflects the data.

So let’s look at some possible relationships, once we add a second predictor, Z.

1. Covariate Correlated with X

In this model, the covariate Z is correlated with X, and both predict Y.

Because X and Z are not independent, there will usually be some joint effect on Y–some part of the relationship between X and Y that can’t be distinguished from the relationship between Z and Y.

The relationship between X and Y is no longer the full effect of X on Y. It’s the marginal, unique effect of X on Y, after controlling for the effect of Z. While the model fit as a whole will include both the joint and the unique effects of both X and Z on Y, the regression coefficient for X will only include its unique effect.

When both X and Z are observed variables, this is nearly always the situation.

As long as the correlation is moderate, it’s still possible to measure the unique effect of X. If it gets too high, however, multicollinearity becomes a problem: the model can no longer separate the unique effects of X and Z, and the estimates of each become unstable.

A good example of this kind of relationship would be in a study that measures the nutritional composition of soil cores at different altitudes and moisture levels. Because water flows downhill, lower altitudes (X) tend to be more moist (Z). So while we can’t completely separate altitude from moisture levels, as long as they’re only moderately correlated, we should be able to find the unique effect of altitude on potassium levels.

Test this effect by including X and Z in a model together. To understand how much Z and X overlap in their explanation of Y, rerun the model without Z and look at how much X’s coefficient changes.
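To make this concrete, here’s a minimal sketch in Python (numpy and statsmodels) using simulated data standing in for the soil example above. The variable names and coefficient values are invented for illustration, not taken from any real study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
altitude = rng.normal(size=n)                     # X (standardized)
moisture = -0.6 * altitude + rng.normal(size=n)   # Z, correlated with X
potassium = 1.0 * altitude + 0.8 * moisture + rng.normal(size=n)  # Y

# Full model: the coefficient on altitude is its unique effect,
# controlling for moisture
X_full = sm.add_constant(np.column_stack([altitude, moisture]))
full = sm.OLS(potassium, X_full).fit()

# Reduced model: altitude's coefficient now absorbs the joint effect
# it shares with moisture
reduced = sm.OLS(potassium, sm.add_constant(altitude)).fit()

# How much the coefficient changes reflects the X–Z overlap
print("altitude, controlling for moisture:", full.params[1])
print("altitude alone:", reduced.params[1])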

2. Covariate Independent of X

When a covariate Z is NOT related to X, it has a slightly different effect. It needs to be able to predict Y as well to be useful in the model, but the effects of X and Z don’t overlap.

Including Z in the model often leads to the relationship between X and Y becoming more significant because Z has explained some of the otherwise unexplained variance in Y.

An example of this kind of covariate is when the effect of an experimental manipulation (X) on response time (Y) becomes significant only after we control for finger dexterity (Z). If finger dexterity has a large effect on response time and we don’t account for it, all of that variation due to dexterity will go into unexplained error–the denominator of our test statistic. Controlling for that variance means it is no longer unexplained, and it is removed from the denominator.

Test this type of effect by running a hierarchical regression (add each predictor in a separate step).
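Here’s a hedged sketch of that hierarchical approach in Python, using simulated data for the response-time example; the names and effect sizes are made up for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "condition": rng.integers(0, 2, size=n),   # experimental manipulation, X
    "dexterity": rng.normal(size=n),           # covariate Z, independent of X
})
# Response time driven by both, plus noise
df["rt"] = 0.3 * df["condition"] - 1.2 * df["dexterity"] + rng.normal(size=n)

# Step 1: predictor of interest alone
step1 = smf.ols("rt ~ condition", data=df).fit()
# Step 2: add the independent covariate
step2 = smf.ols("rt ~ condition + dexterity", data=df).fit()

# condition's coefficient barely changes, but its standard error shrinks,
# because dexterity soaks up previously unexplained variance in rt
print(step1.summary().tables[1])
print(step2.summary().tables[1])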

3. Spurious Relationship

A confounding variable Z creates a spurious relationship between X and Y because Z is related to both X and Y.

This is the relationship seen in most “correlation, not causation” examples: the amount of ice cream consumption (X) in a month predicts the number of shark attacks (Y). Do sharks like eating ice-cream-laden people? No.

This spurious relationship is created by a confounder that leads to increases in both ice cream consumption and shark attacks: temperature (Z). People both eat more ice cream and swim in the ocean more in hotter months.

It can be difficult to distinguish between this relationship and #1 above through testing alone. This is a situation where you have to entertain spuriousness as a possibility and question your results.
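To see how a confounder can manufacture an X–Y relationship out of nothing, here’s a small simulation sketch in Python (numpy and statsmodels). In the simulated data, X has no effect on Y at all; temperature drives both. All values are invented for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 365
temp = rng.normal(size=n)                      # confounder Z
ice_cream = 0.7 * temp + rng.normal(size=n)    # X, driven by Z
attacks = 0.7 * temp + rng.normal(size=n)      # Y, driven by Z, not by X

# X alone appears to "predict" Y...
naive = sm.OLS(attacks, sm.add_constant(ice_cream)).fit()
# ...but the effect collapses once the confounder enters the model
X_adj = sm.add_constant(np.column_stack([ice_cream, temp]))
adjusted = sm.OLS(attacks, X_adj).fit()

print("ice cream alone:", naive.params[1])
print("ice cream, controlling for temperature:", adjusted.params[1])

Note that the adjusted model looks exactly like relationship #1 above; the data alone can’t tell you which story is true.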

4. Mediation

Mediation indicates a specific causal pathway. It occurs when at least part of the effect of X on Y operates through Z: X affects Z, and Z in turn affects Y.

There are then two effects of X on Y: the indirect effect of X on Y through Z and the direct effect of X on Y.

An example here would be if the relationship between stress (X) and depression (Y) was mediated through increased levels of stress hormones (Z).

There are a number of ways to test for mediation, including running a series of regression models and using path analysis. More modern approaches recommend directly testing the strength of the indirect effect.
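One common way to test the indirect effect is to estimate it as the product of the X→Z and Z→Y paths and bootstrap a confidence interval for it. Here’s a minimal sketch in Python for the stress example, with simulated data and invented coefficients; it’s one illustrative approach, not the only way to test mediation.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
stress = rng.normal(size=n)                               # X
hormone = 0.5 * stress + rng.normal(size=n)               # mediator Z
depression = 0.4 * hormone + 0.2 * stress + rng.normal(size=n)  # Y

def indirect_effect(x, z, y):
    # a path: X -> Z
    a = sm.OLS(z, sm.add_constant(x)).fit().params[1]
    # b path: Z -> Y, controlling for X
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit().params[2]
    return a * b

# Bootstrap the indirect effect
boot = []
idx = np.arange(n)
for _ in range(1000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect_effect(stress[s], hormone[s], depression[s]))

ci = np.percentile(boot, [2.5, 97.5])
print("indirect effect:", indirect_effect(stress, hormone, depression))
print("95% bootstrap CI:", ci)   # a CI excluding 0 supports mediation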

5. Moderation

Despite the similarity in the names, moderation is an entirely different beast from mediation.

Moderation indicates that the effect of X on Y is different for different values of Z. In other words, Z moderates (affects) the effect of X on Y.

Maybe when Z is large, there is a strong positive relationship between X and Y, but when Z is small, there is no relationship between X and Y.

So essentially, we’re saying that X predicts Y in different ways, depending on the value of Z.

X and Z can be correlated or not.

One example of moderation can be seen in the relationship between depression (X), physical health scores (Y), and poverty status (Z). Poverty status (Yes/No) would be said to moderate the relationship between depression and physical health if there was a weak negative relationship among people not in poverty, but a strong negative relationship among people in poverty.

Test moderation by including an interaction term between X and Z.
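Here’s what that test looks like in a Python sketch (pandas and statsmodels) for the depression and poverty example. The data are simulated so that the slope depends on poverty status; the coefficients are invented for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 250
df = pd.DataFrame({
    "depression": rng.normal(size=n),          # X
    "poverty": rng.integers(0, 2, size=n),     # Z: 0 = not in poverty, 1 = in poverty
})
# Weak negative slope when poverty == 0, strong negative slope when poverty == 1
df["health"] = (-0.2 - 0.6 * df["poverty"]) * df["depression"] + rng.normal(size=n)

# "depression * poverty" expands to both main effects plus the interaction;
# a significant depression:poverty coefficient is the test of moderation
model = smf.ols("health ~ depression * poverty", data=df).fit()
print(model.summary().tables[1])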

All of these relationships can occur with more than two predictors in the model, and a model can contain more than one of them. Figuring out the potential relationships among variables is often the fun part of data analysis.


Further Reading and Resources

Jose, Paul. (2013). Doing Statistical Mediation and Moderation. Guilford Press.

The Difference Between Interaction and Association

Confusing Statistical Terms #5: Covariate

Understanding Mediation and Path Analysis Webinar

 
This Month's Data Analysis Brown Bag Webinar

Mediation and Path Analysis


Upcoming Workshop:

Introduction to Data Analysis with SPSS

Introduction to R


Quick Links

The Analysis Factor

The Analysis Institute

More About Us

You received this email because you subscribed to The Analysis Factor's list community. To change your subscription, see the link at the end of this email. If your email is having trouble with the format, click here for a web version.

Please forward this to anyone you know who might benefit. If you received this from a friend, sign up for this email newsletter here.


About Us

What is The Analysis Factor? The Analysis Factor is the difference between knowing about statistics and knowing how to use statistics in data analysis. Statistical analysis is an applied skill, one learned by using statistical tools within the context of a researcher's own data, and The Analysis Factor supports that learning.

The Analysis Factor, the organization, offers statistical consulting, resources, and learning programs that empower researchers to become confident, able, and skilled statistical practitioners. Our aim is to make your journey acquiring the applied skills of statistical analysis easier and more pleasant.

Karen Grace-Martin, the founder, spent seven years as a statistical consultant at Cornell University. While there, she learned that being a great statistical advisor is not only about having excellent statistical skills, but about understanding the pressures and issues researchers face, about fabulous customer service, and about communicating technical ideas at a level each client understands. 

You can learn more about Karen Grace-Martin and The Analysis Factor at theanalysisfactor.com.
