Life After Exploratory Factor Analysis: Estimating Internal Consistency


by Christos Giannoulis, PhD

After you are done with the odyssey of exploratory factor analysis, aimed at building a reliable and valid instrument, you may find yourself at the beginning of a journey rather than at its end.

The process of performing exploratory factor analysis usually seeks to answer whether a given set of items forms a coherent factor (or, often, several factors). Once you decide on the number and type of factors, the next step is to evaluate how well those factors are measured.

While you are at it, you might also examine whether the factor being measured needs all of its items in order to be measured effectively.

To evaluate whether the factors that you derived from EFA are reliable, confirmed in a new sample, and stable (invariant) across multiple groups, you need some criteria. The “Swiss-Army Knife” of criteria for scale reliability is Cronbach’s alpha (Cronbach, 1951).

Cronbach's coefficient alpha is a measure of the internal consistency, or reliability, of the items in an instrument, index, or scale. Luckily, alpha is offered in most conventional software packages and is easy to calculate.
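If you prefer to see the computation itself, alpha follows directly from the usual formula, k/(k−1) × (1 − Σ item variances / variance of the total scores). A minimal sketch in Python with NumPy, using made-up Likert-scale responses (the data and function name are illustrative, not from any particular package):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up responses: 5 respondents x 4 items on a 1-5 scale
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

print(round(cronbach_alpha(scores), 3))  # 0.914
```

With real data you would, of course, feed in the full respondents-by-items matrix from your survey.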

However, it may come with some side-effects because of hidden misconceptions.

Like other reliability coefficients, Cronbach's alpha ranges from 0 to 1. The square root of alpha is the correlation between the score on a scale and errorless "true scores" (Nunnally & Bernstein, 1994).

If a set of items that measures a construct (e.g., a job satisfaction scale) has an alpha of 0.80, the square root of 0.80 is about 0.89. This represents an estimate of the correlation between the scale score and the "true scores" for that construct.

As you probably know, the square of a correlation is an estimate of shared variance, so squaring this number leads us back to the proportion of "true score" variance in the measurement: .80.

Finally, we can subtract the true-score proportion from 1 (the total variance) to get an estimate of the proportion of error variance in the measurement, in this case 20%.

Thus, a scale with an alpha of .80 includes approximately 20% error.
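The whole chain of reasoning above fits in a few lines; a quick sketch:

```python
import math

alpha = 0.80
true_score_corr = math.sqrt(alpha)      # ~0.894: correlation between scale score and true scores
true_score_var = true_score_corr ** 2   # back to 0.80: proportion of true-score variance
error_var = 1 - true_score_var          # 0.20: proportion of error variance
```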

So where is the catch?

The catch is that alpha is not a measure of unidimensionality (i.e., an indicator that a scale is measuring a single construct rather than multiple related constructs), as is often thought.

In other words, Cronbach's alpha assumes that the items are measuring only one latent variable or dimension.

Unidimensionality is an important assumption of alpha: multidimensional scales will cause alpha to be underestimated if it is not assessed separately for each dimension. But high values of alpha are not necessarily indicators of unidimensionality (e.g., Schmitt, 1996).
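This is easy to demonstrate by simulation. The sketch below (a made-up simulation, not from the original article) generates six items that load on two distinct but moderately correlated factors; alpha still comes out high even though the scale is clearly not unidimensional:

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
n = 1000

# Two distinct latent factors, correlated 0.5
factors = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
noise = rng.normal(scale=0.5, size=(n, 6))

# Items 1-3 load only on factor 1, items 4-6 only on factor 2
items = np.column_stack([factors[:, 0]] * 3 + [factors[:, 1]] * 3) + noise

alpha = cronbach_alpha(items)   # high (~0.88) despite the two-factor structure
```

A reader who saw only the alpha value would have no way of knowing this "scale" mixes two constructs, which is exactly Schmitt's point.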

Another feature of alpha is that it is not a characteristic of the instrument, but rather a characteristic of the sample in which the instrument was used. A biased, unrepresentative, or small sample could produce a very different estimate than a large, representative sample.

Furthermore, the estimates from one large, supposedly representative sample can differ from another, and the results in one population can most certainly differ from another. This is why we place such an emphasis on replication.

Replication is necessary to support the reliability of an instrument. In addition, the reliability of an established instrument must be reestablished when using the instrument with a new population.

In situations like this, bootstrapping (repeated random sampling with replacement) is our friend.
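As a sketch of how this could look (the simulated data and helper function are illustrative, not from the original article), a percentile bootstrap resamples respondents with replacement, recomputes alpha on each resample, and reports an interval rather than a single point estimate:

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(42)
n = 200

# Made-up unidimensional data: 4 items driven by one latent trait
latent = rng.normal(size=(n, 1))
items = latent + rng.normal(scale=0.7, size=(n, 4))

# Bootstrap: resample respondents (rows) with replacement, recompute alpha
boot = np.array([
    cronbach_alpha(items[rng.integers(0, n, size=n)])
    for _ in range(2000)
])

point = cronbach_alpha(items)
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval
```

Reporting the interval alongside the point estimate makes it plain that alpha is a sample quantity, not a fixed property of the instrument.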

And now, cue the invisible and inaudible trumpet music for the most frequent question on reliability and Cronbach's alpha that we receive from our clients:

What is a good enough value for alpha?

Even though this is a tricky question, many brave researchers consider that an alpha of 0.70 or 0.80 represents "acceptable" and "good" reliability, respectively. In our upcoming workshop on Principal Component and Factor Analysis, I explain why these kinds of thresholds should not be followed and cover some alternative indicators of internal consistency.

For the moment, I will just default to the interpretation of those alpha values: An alpha of 0.70 is associated with 30% error variance in a scale, and an alpha of 0.80 is associated with 20%.

In the case of Cronbach's alpha, the higher the value, the better the internal consistency (Osborne & Banjanovic, 2016). There are probably diminishing returns once alpha exceeds 0.90, which still represents about 10% error variance in the measurement.

References

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika,16(3), 297-334.

Nunnally, J. C., & Bernstein, I. (1994). Psychometric Theory (3rd ed.). New York: McGraw-Hill.

Osborne, J. W., & Banjanovic, E. S. (2016). Exploratory factor analysis with SAS. SAS Institute.

Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350-353.
