
The Difference Between Chi-Square Tests of Independence and Homogeneity

October 14th, 2022 by

A chi-square test is often applied to two-way tables, like the one below.

A table of Union Status by Gender for Employed Individuals in 2020 (adapted from Current Population Survey, Bureau of Labor Statistics)

This table represents a sample of 1,322 individuals. Of these, 687 are male and 635 are female. Also, 143 are union members, 159 are represented by unions, and 1,020 are not affiliated with a union.

You might use a chi-square test if you want to learn something about the relationship between gender and union status. The question then comes up: should you use a test of independence or a test of homogeneity?

Does it matter? Software doesn’t generally differentiate between the two, which leads to a final question: are they even different?

Well, yes and no. Read on!

Different: Independence versus Homogeneity

Independence and homogeneity do refer to different ideas. If union status and gender are independent, that means union status and gender are unrelated. In other words, knowing someone’s union status won’t help you make a better guess as to their gender, and knowing someone’s gender won’t help you make a better guess as to their union status.

Homogeneity is different and refers to the concept of similarity. If you are familiar with linear regression, you might associate this with residuals. Residuals should be homogeneous, meaning they all come from the same distribution.

That idea applies to this two-way table as well. We may want to know if the distribution of union status is the same for men and women. In other words, does union status come from the same distribution for both men and women?

To test independence, we would not approach the question from the standpoint of gender or union status. We would take a sample of all employed individuals, and then break them down into the categories in the table.

To test homogeneity, we would approach it from the standpoint of gender. We would randomly sample individuals from within each gender, and then measure their union status.

Either approach would result in the table above.

Same: Chi-Square Statistics

Chi-square statistics for categorical data generally follow this formula:

$$\chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}$$

For each of the six cells representing a combination of gender and union status, the number in the cell is the count we observe. “Expected” refers to the count we would see in each cell under the null hypothesis, that is, if gender and union status are independent (or, equivalently, if union status is homogeneous across genders).

We calculate the difference, square it, and divide by the expected count for each cell. We then add these all together, and that is the chi-square test statistic.
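Here is a minimal Python sketch of that calculation. The observed cell counts below are hypothetical placeholders (the full table isn’t reproduced in this excerpt), while the expected counts are the ones implied by the table’s margins, as explained just below.

```python
import numpy as np

def chi_square_stat(observed, expected):
    """Sum over all cells of (observed - expected)^2 / expected."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return ((observed - expected) ** 2 / expected).sum()

# Rows: union members, represented by unions, non-union; columns: male, female.
# Observed counts are made up for illustration (they match the stated margins,
# but the real cell-by-cell breakdown isn't shown in the excerpt).
observed = [[80, 63],
            [85, 74],
            [522, 498]]

# Expected counts under the null hypothesis, derived from the margins
# (e.g. 143 * 687 / 1322 = 74.3 expected male union members).
expected = [[74.3, 68.7],
            [82.6, 76.4],
            [530.1, 489.9]]

print(chi_square_stat(observed, expected))
```

In practice, scipy.stats.chi2_contingency computes the same statistic directly from the observed table, along with the p-value and the expected counts.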

Where do we get the expected counts for each cell?

Let’s examine the combination of male and union member under independence. If gender and union membership are independent, then how many male union members do we expect? Well,
– 10.81% of the sample are union members
– 51.96% are male

So, if they are independent, 10.81% × 51.96% is 5.62%, and 5.62% of 1,322 is 74.3. This is how many individuals we would expect to be male union members.

Now let’s consider male union members under homogeneity. Overall, 10.81% of the sample are union members. If this is the same for both males and females, then of the 687 males, we expect 74.3 to be union members.
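A couple of lines of Python, using only the marginal counts quoted above (143 union members, 687 males, 1,322 individuals total), confirm that both framings give the same expected count:

```python
total = 1322
union_members = 143   # 10.81% of the sample
males = 687           # 51.96% of the sample

# Independence framing: P(union member) * P(male) * N
expected_independence = (union_members / total) * (males / total) * total

# Homogeneity framing: the overall union-membership rate applied to the 687 males
expected_homogeneity = (union_members / total) * males

print(round(expected_independence, 1), round(expected_homogeneity, 1))  # 74.3 74.3
```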

Independence and homogeneity result in the same expected number of male union members! It turns out this calculation is the same for every cell in the table. It follows that the chi-square statistic is also the same.

Does It Matter?

As it turns out, independence and homogeneity are two sides of the same coin. If gender and union status are independent, then union status is distributed the same way for males and females.

So which test should you say you are using, if they turn out the same?

Again, that comes back to how you have phrased your research question. Are you determining whether gender and union status are related? That is a test of independence. Are you looking for differences between males and females? That is a test of homogeneity.


An Example of Specifying Within-Subjects Factors in Repeated Measures

September 19th, 2022 by

Some repeated measures designs make it quite challenging to specify within-subjects factors. Especially difficult is when the design contains two “levels” of repeat, but your interest is in testing just one.

Let’s look at a great example of this and how to deal with it, in this question from a reader:

The Design:

I want to do a GLM (repeated measures ANOVA) with the valence of some actions of my test-subjects (valence = desirability of actions) as a within-subject factor. My subjects have to rate a number of actions/behaviours in a pre-set list of 20 actions from ‘very likely to do’ to ‘will never do this’ on a scale from 1 to 7, and some of these actions are desirable (e.g. help a blind man crossing the street) and therefore have a positive valence (in psychology) and some others are non-desirable (e.g. play loud music at night) and therefore have negative valence in psychology.

My question is how I can use valence as a within-subjects factor in GLM. Is there a way to tell SPSS some actions have positive valence and others have negative valence? I assume assigning labels to the actions will not do it, as SPSS does not make analyses based on labels…
Please help. Thank you.

(more…)


How the Population Distribution Influences the Confidence Interval

September 6th, 2022 by

Spoiler alert: real data are seldom normally distributed. How does the population distribution influence the estimate of the population mean and its confidence interval?

To figure this out, we draw 100 random samples of 100 observations each from three distinct populations and plot each sample’s mean and its corresponding 95% confidence interval.
(more…)
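Here is a rough Python sketch of that simulation. The three populations below (normal, uniform, and right-skewed exponential, each with mean 10) are assumptions chosen for illustration, since the post’s actual choices sit behind the link. Instead of plotting, the sketch simply counts how many of the 100 intervals cover the true mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_samples, n_obs = 100, 100                  # 100 samples of 100 observations each
t_crit = stats.t.ppf(0.975, df=n_obs - 1)    # two-sided 95% critical value

# Assumed example populations (name, sampler, true mean) -- placeholders only.
populations = [
    ("normal",      lambda size: rng.normal(10, 2, size),   10.0),
    ("uniform",     lambda size: rng.uniform(5, 15, size),  10.0),
    ("exponential", lambda size: rng.exponential(10, size), 10.0),  # right-skewed
]

for name, draw, true_mean in populations:
    covered = 0
    for _ in range(n_samples):
        sample = draw(n_obs)
        half_width = t_crit * sample.std(ddof=1) / np.sqrt(n_obs)
        covered += abs(sample.mean() - true_mean) <= half_width
    print(f"{name}: {covered}/{n_samples} intervals cover the true mean")
```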


Member Training: Analyzing Likert Scale Data

August 31st, 2022 by

Is it really ok to treat Likert items as continuous? And can you just decide to combine Likert items to make a scale? Likert-type data is extremely common—and so are questions like these about how to analyze it appropriately. (more…)


The Difference Between R-squared and Adjusted R-squared

August 22nd, 2022 by

When is it important to use adjusted R-squared instead of R-squared?

R², the Coefficient of Determination, is one of the most useful and intuitive statistics we have in linear regression.

It tells you how well the model predicts the outcome and has some nice properties. But it also has one big drawback.

(more…)
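The rest of the explanation sits behind the link, but the drawback being hinted at is presumably the standard one: with ordinary least squares, R² can only go up when you add a predictor, even a useless one, while adjusted R² applies a penalty for each extra term. Here is a minimal numpy sketch with simulated data (all variable names and values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)   # the outcome depends on x1 only
noise = rng.normal(size=n)          # an irrelevant extra predictor

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def adj_r_squared(r2, n, p):
    """Adjusted R-squared with n observations and p predictors (excluding intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

r2_one = r_squared(x1, y)
r2_two = r_squared(np.column_stack([x1, noise]), y)

print(r2_one, adj_r_squared(r2_one, n, 1))
print(r2_two, adj_r_squared(r2_two, n, 2))  # R-squared rises; adjusted R-squared need not
```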


The Four Models You Meet in Structural Equation Modeling

August 8th, 2022 by

A multiple regression model can be conceptualized using Structural Equation Model path diagrams. That’s the simplest SEM you can create, but SEM’s real power lies in expanding on that regression model. Here I will discuss four types of structural equation models.

Path Analysis

More interesting research questions could be asked and answered using Path Analysis. Path Analysis is a type of structural equation modeling without latent variables. (more…)