Correlated Errors in Confirmatory Factor Analysis

by Jeff Meyer

Latent constructs, such as liberalism or conservatism, are theoretical and cannot be measured directly.

But we can use a set of scale items, called indicators, to represent the construct by combining them into a latent factor.

Often prior research has determined which indicators represent the latent construct. Prudent researchers will run a confirmatory factor analysis (CFA) to ensure the same indicators work in their sample.

You can run a CFA using either the statistical software’s “factor analysis” command or a structural equation model (SEM). There are several advantages to using SEM over the “factor analysis” command. The advantage we will look at is the ability to correlate error terms.

The first step in a CFA is to verify that the indicators have some commonality and are a good representation of the latent construct. We cannot simply add the scores of each indicator to create the latent factor.

Why? The indicators do not equally represent the latent construct. Each has its own “strength,” represented by its loading onto the latent factor. The standardized loading reflects how much of the indicator’s variance is shared with the latent factor: the squared standardized loading is the proportion of shared variance.

Not all the variance of an indicator is shared. This leftover, unshared variance is known as the unique variance. We treat unique variance as measurement error, since it does not help measure the latent construct.
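The split between shared and unique variance can be seen in a small simulation. This is a sketch with hypothetical loadings (0.8, 0.6, 0.4), not output from any particular CFA software:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate a standardized latent factor and three indicators,
# each with its own (hypothetical) loading.
f = rng.standard_normal(n)
loadings = np.array([0.8, 0.6, 0.4])
unique_sd = np.sqrt(1 - loadings**2)  # so each indicator has total variance ~1
X = f[:, None] * loadings + rng.standard_normal((n, 3)) * unique_sd

# With a standardized factor and indicators, the loading equals the
# correlation between indicator and factor, and the unique variance
# is 1 - loading**2.
est_loadings = np.array([np.corrcoef(X[:, j], f)[0, 1] for j in range(3)])
print(est_loadings.round(2))           # close to [0.8, 0.6, 0.4]
print((1 - est_loadings**2).round(2))  # unique variances, close to [0.36, 0.64, 0.84]
```

Note that the weakest indicator (loading 0.4) is mostly unique variance, which is exactly why simply summing raw indicator scores is a poor substitute for a latent factor.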

The measurement error of one indicator may be correlated with the measurement error of another indicator. This correlation can be due to pure randomness, or it can result from something that influences both indicators.
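To see how a shared influence produces correlated errors, here is a small simulation. The setup is hypothetical: two indicators load on the same factor, but their error terms also share a common influence (think of two similarly worded items):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

f = rng.standard_normal(n)   # latent factor
m = rng.standard_normal(n)   # shared influence (e.g., similar wording)

# Each error term mixes the shared influence with item-specific noise.
e1 = 0.5 * m + 0.5 * rng.standard_normal(n)
e2 = 0.5 * m + 0.5 * rng.standard_normal(n)
x1 = 0.7 * f + e1
x2 = 0.6 * f + e2

# After removing the factor's contribution, the residuals still correlate:
r = np.corrcoef(x1 - 0.7 * f, x2 - 0.6 * f)[0, 1]
print(round(r, 2))  # close to 0.5, a substantial error correlation
```

A CFA that forces these two error terms to be uncorrelated will misfit, because the model has no way to absorb the shared-wording variance.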

If there is a legitimate reason for indicators’ error terms being related, the error terms can be “correlated” within a structural equation model. What are legitimate reasons?

According to Timothy Brown in his book “Confirmatory Factor Analysis for Applied Research”, correlated (nonrandom) measurement error can result from:

  • Acquiescent responding: a response bias toward agreeing with attitude statements regardless of their content
  • Shared assessment methods: for example, questionnaire items or observer ratings
  • Reversed or similarly worded test items
  • Personal traits: reading disability or cognitive biases such as groupthink, which affect a respondent’s ability to answer a questionnaire truthfully

To determine which indicators’ error terms are highly correlated, we generate modification indices. A modification index estimates how much the model’s goodness of fit would improve if two specific error terms were allowed to correlate.

Please note, you must be able to justify correlating the error terms. For example, if the latent construct is derived from a survey, review the questions to determine whether the two items address the same topic.
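For concreteness, here is what the two specifications look like in lavaan-style model syntax, which R’s lavaan and Python’s semopy package both use. The factor name F and indicator names x1–x4 are placeholders, not variables from any real dataset:

```python
# Baseline CFA: one factor, four indicators, uncorrelated errors.
cfa_baseline = """
F =~ x1 + x2 + x3 + x4
"""

# Same model, but allowing the error terms of x1 and x2 (say, two
# similarly worded items) to correlate via the '~~' operator.
cfa_correlated = """
F =~ x1 + x2 + x3 + x4
x1 ~~ x2
"""

print(cfa_correlated)
```

In either package, you would fit both models to your data and compare their goodness of fit statistics to judge whether the correlated error is warranted.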

What is the advantage of correlating the error terms?

Correlating indicator error terms can improve the reliability of the latent construct’s scale, which can be measured via goodness of fit statistics. Unfortunately, correlating error terms is not possible when using a software’s “factor” command and can only be done within a structural equation model.


