We mentioned before that we use Confirmatory Factor Analysis to evaluate whether the relationships among the variables are adequately represented by the hypothesized factor structure. The factor structure (relationships between factors and variables) can be based on theoretical justification or previous findings.

Once we estimate the relationships between the indicators and their factors, the next task is to determine the extent to which the specified structure is consistent with the data. The main question we are trying to answer is:

### Is the model a good-fitting model?

To answer this question, we first need to define what we mean by fitting a model.

Fit refers to the ability of a model to represent the data. Specifically, in CFA, model fit refers to how closely the observed data match the relationships specified in the hypothesized model.

The next step is to define what a *good-fitting* model is. This is where things start to get interesting…

Researchers who are new to confirmatory factor analysis usually assume that a good-fitting model means a good model. In reality, this relationship is not straightforward.

*A good-fitting model does not necessarily mean that this model is a good model*.

For example, fit statistics cannot tell you whether you have specified the correct directions of the paths in a structural model or the correct number of factors (3, 4, etc.) in a measurement model. Fit statistics also say little about person-level fit, that is, the degree to which the model generates accurate predictions for individual cases.

### So, what is a *good-fitting* model?

A *good-fitting* model is one that is reasonably consistent with the data.

That leads us to the next question I usually get from novice researchers and analysts: does a good-fitting model mean it is the best model? The answer, most of the time, is “no”.

A *good-fitting* model is not necessarily the best-fitting model. A *good-fitting* model can still be improved. My advice is always to examine the parameter estimates carefully to determine whether you have a reasonable model, together with a reasonable combination of evaluation criteria (i.e., fit statistics).

There are more than a dozen different fit statistics that researchers use to assess their Confirmatory Factor Analyses and Structural Equation Models. In this webinar, I explained the differences among those fit statistics as well as their uses and abuses… in practice. In the Principal Component and Factor Analysis Workshop I delve deeper into the types and uses of fit statistics.

My first piece of advice to novice researchers and analysts who ask which fit statistic they should use is to use a combination.

Instead of arguing about which measure is best, my second piece of advice to novice researchers is to start with evaluation criteria that are common to most software packages. This supports the transparency and reproducibility of your results.
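To make the “use a combination” advice concrete, here is a minimal sketch of how two of the most widely reported indices, CFI and RMSEA, are computed from a model’s chi-square statistic. The chi-square values, degrees of freedom, and sample size below are made-up numbers for illustration, not results from any real analysis:

```python
import math

def cfi(chisq_model, df_model, chisq_baseline, df_baseline):
    """Comparative Fit Index: improvement of the fitted model over the
    baseline (independence) model, on a 0-1 scale."""
    d_model = max(chisq_model - df_model, 0.0)
    d_base = max(chisq_baseline - df_baseline, 0.0)
    return 1.0 - d_model / max(d_base, d_model)

def rmsea(chisq_model, df_model, n):
    """Root Mean Square Error of Approximation: misfit per degree of
    freedom, adjusted for sample size n."""
    return math.sqrt(max(chisq_model - df_model, 0.0) / (df_model * (n - 1)))

# Hypothetical values for a CFA fitted to N = 300 cases
chisq_m, df_m = 85.3, 24      # fitted model
chisq_b, df_b = 920.0, 36     # baseline (independence) model

print(round(cfi(chisq_m, df_m, chisq_b, df_b), 3))  # 0.931
print(round(rmsea(chisq_m, df_m, 300), 3))          # 0.092
```

In this made-up example the two indices point in slightly different directions: a CFI around .93 is borderline by the common “CFI ≥ .95” rule of thumb, while an RMSEA around .09 exceeds the usual “RMSEA ≤ .06” cutoff. That is precisely why relying on a single index can be misleading and a combination is recommended.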

Lastly, I would like to leave you with the most intriguing myth in *confirmatory* factor analysis, if not in SEM overall. This myth derives from the following question:

*If we confirm that our model is consistent with the data, can we claim that our model is proven?*

Well, the most that we can conclude in CFA is that the model is consistent with the data; we cannot generally claim that our model is proven. Under this prism… CFA is a *disconfirmatory* procedure that can help us reject false models (e.g., those with poor fit to the data), but it does not *confirm* the veracity of our model. Bollen (1989) puts it like this:

“*If a model is consistent with reality, then the data should be consistent with the model. But if the data are consistent with the model, this does not imply that the model corresponds to reality.*”

**References:**

Bollen, K. A. (1989). *Structural equations with latent variables*. New York: Wiley.
