The Difference Between Association and Correlation

by Karen Grace-Martin

What does it mean for two variables to be correlated?

Is that the same as, or different from, their being associated or related?

This is the kind of question that can feel silly, but shouldn’t. It’s just a reflection of the confusing terminology used in statistics. In this case, the technical statistical term looks like an everyday English word, but it doesn’t mean exactly the same thing.

In everyday English, correlated, associated, and related all mean the same thing.

The technical meaning of correlation is the strength of association as measured by a correlation coefficient.

While correlation is a technical term, association is not. It simply means the presence of a relationship: certain values of one variable tend to co-occur with certain values of the other variable.

Correlation coefficients are on a -1 to 1 scale.

On this scale, -1 indicates a perfect negative relationship: high values of one variable are associated with low values of the other. Likewise, a correlation of +1 describes a perfect positive relationship: high values of one variable are associated with high values of the other.

A correlation of 0 indicates no linear relationship: high values of one variable co-occur just as often with high values as with low values of the other.
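To make that scale concrete, here is a minimal sketch in Python (using NumPy, with made-up simulated data) showing correlations near +1, near -1, and near 0:

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded so the sketch is reproducible

x = rng.normal(size=500)
noise = rng.normal(size=500)

y_pos = x + 0.2 * noise        # strong positive relationship
y_neg = -x + 0.2 * noise       # strong negative relationship
y_none = rng.normal(size=500)  # no relationship to x at all

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
# entry is the correlation between the two variables.
print(np.corrcoef(x, y_pos)[0, 1])   # close to +1
print(np.corrcoef(x, y_neg)[0, 1])   # close to -1
print(np.corrcoef(x, y_none)[0, 1])  # close to 0
```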

There is no independent and no dependent variable in a correlation. It’s a bivariate descriptive statistic.

You can make this descriptive statistic into an inferential one by calculating a confidence interval or running a hypothesis test that the correlation coefficient = 0. But correlations are inherently descriptive and don’t require a test.
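As a sketch of that inferential step, SciPy’s pearsonr returns both the coefficient and a p-value for the test that the population correlation is 0 (the confidence_interval() method on the result requires SciPy 1.9 or later; the data here are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)  # a moderate positive relationship

result = stats.pearsonr(x, y)
print(result.statistic)              # the sample correlation r
print(result.pvalue)                 # test of H0: population correlation = 0
print(result.confidence_interval())  # 95% CI for the correlation (SciPy >= 1.9)
```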


The most common correlation coefficient is the Pearson correlation coefficient. Often denoted by r, it measures the strength of a linear relationship in a sample on a standardized scale from -1 to 1.

It is so common that it is often used synonymously with correlation.
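To show what’s under the hood, here is a from-scratch sketch of the definition (the function name pearson_r is just illustrative): each variable is centered at its mean, and the sum of the products of those deviations is scaled so the result lands on the -1 to 1 scale.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's r straight from the definition: the sum of products of
    deviations from each mean, scaled by the deviations' magnitudes so
    the result always falls between -1 and 1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()
    dy = y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())

# Agrees with NumPy's built-in version
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 6.0]
print(pearson_r(x, y))
print(np.corrcoef(x, y)[0, 1])  # same value
```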

Pearson’s coefficient assumes that both variables are normally distributed, which requires that they be truly continuous and unbounded.

But if you’re interested in relationships between variables that aren’t normally distributed, and so don’t work with a Pearson correlation, don’t worry. There are other correlation coefficients that don’t require normality of the variables.

Examples include the Spearman rank correlation, the point-biserial correlation, the rank-biserial correlation, and the tetrachoric and polychoric correlations.
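As one illustration, here is a sketch of two of these in SciPy, with made-up data (the tetrachoric and polychoric coefficients need specialized packages and aren’t shown here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Spearman: a monotonic but nonlinear relationship between skewed variables
x = rng.exponential(size=200)
y = x ** 2 + rng.exponential(size=200)
rho, p = stats.spearmanr(x, y)
print(rho, p)

# Point-biserial: one truly dichotomous variable and one continuous variable
group = rng.integers(0, 2, size=200)       # a 0/1 variable
score = 2 * group + rng.normal(size=200)   # higher scores in group 1
r_pb, p_pb = stats.pointbiserialr(group, score)
print(r_pb, p_pb)
```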

There are other measures of association that don’t have all of these same properties. They are often used when one or both of the variables is ordinal or nominal.

They include measures such as phi, gamma, Kendall’s tau-b, Stuart’s tau-c, Somers’ D, and Cramér’s V, among others.
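As a sketch of two of these: Kendall’s tau-b is the default variant computed by scipy.stats.kendalltau, and Cramér’s V can be built by hand from the chi-square statistic, since its formula is simple. The ordinal data and the contingency table below are made up.

```python
import numpy as np
from scipy import stats

# Kendall's tau-b for two ordinal variables (tau-b is kendalltau's default)
x = [1, 2, 2, 3, 3, 4, 5, 5]
y = [1, 1, 2, 2, 3, 3, 4, 5]
tau, p_tau = stats.kendalltau(x, y)
print(tau, p_tau)

# Cramér's V for two nominal variables, from a 3x2 table of counts:
# V = sqrt(chi2 / (n * (min(rows, cols) - 1))), which falls between 0 and 1
table = np.array([[20, 10],
                  [15, 25],
                  [ 5, 30]])
chi2, p, dof, expected = stats.chi2_contingency(table)
n = table.sum()
k = min(table.shape) - 1
print(np.sqrt(chi2 / (n * k)))
```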

So to summarize: there are many measures of association, and only some of them are correlations. Even so, there isn’t always consensus about the meanings of terms in statistics, so it’s always good practice to state what you mean by the terms you’re using.
