Latent constructs, such as liberalism or conservatism, are theoretical and cannot be measured directly.
But we can represent a latent construct by combining a set of questions, called indicators, into a scale. We do this via factor analysis.
Often prior research has determined which indicators represent the latent construct. Prudent researchers will run a confirmatory factor analysis (CFA) to ensure the same indicators work in their sample.
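A full CFA requires structural equation modeling software, but the core idea of indicators loading on a latent factor can be sketched with an exploratory fit. A minimal illustration, using simulated data and scikit-learn's `FactorAnalysis` (the variable names and loading values are made up for the example):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Simulate one latent construct (e.g., "liberalism") driving 4 indicators
latent = rng.normal(size=n)
true_loadings = np.array([0.9, 0.8, 0.7, 0.6])
indicators = latent[:, None] * true_loadings + rng.normal(scale=0.5, size=(n, 4))

# Fit a single-factor model; the estimated loadings show each
# indicator's relationship to the latent factor
fa = FactorAnalysis(n_components=1).fit(indicators)
print(fa.components_)
```

In a real CFA, you would instead fix the factor structure in advance (which indicators load on which factors) and test whether that specification fits your sample.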
A well-fitting regression model results in predicted values close to the observed data values.
The mean model, which uses the mean for every predicted value, would generally be used if there were no useful predictor variables. The fit of a proposed regression model should therefore be better than the fit of the mean model.
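This comparison is exactly what R-squared summarizes: how much smaller the regression model's sum of squared errors is than the mean model's. A minimal numpy sketch with simulated data (the coefficients and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)

# Mean model: predict the sample mean for every observation
sse_mean = np.sum((y - y.mean()) ** 2)

# Simple linear regression via least squares
slope, intercept = np.polyfit(x, y, 1)
sse_reg = np.sum((y - (intercept + slope * x)) ** 2)

# R-squared: proportional reduction in error relative to the mean model
r_squared = 1 - sse_reg / sse_mean
print(f"SSE (mean model) = {sse_mean:.1f}, SSE (regression) = {sse_reg:.1f}, R^2 = {r_squared:.2f}")
```

If the predictor is useful, the regression SSE is well below the mean-model SSE and R-squared is well above zero.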
Based on questions I’ve been asked by clients, most analysts prefer using the factor analysis procedures in their general statistical software to run a confirmatory factor analysis.
While this can work in some situations, you’re losing out on key information you’d get from a structural equation model. This article highlights one example.
There is a bit of art and experience to model building. You need to build a model to answer your research question, but how do you build a statistical model when there are no instructions in the box?
Should you start with all your predictors or look at each one separately? Do you always take out non-significant variables, and do you always leave in significant ones?
We mentioned before that we use Confirmatory Factor Analysis to evaluate whether the relationships among the variables are adequately represented by the hypothesized factor structure. The factor structure (the relationships between factors and variables) can be based on theoretical justification or previous findings.
Once we estimate the relationships between the indicators and their factors, the next task is to determine the extent to which these structure specifications are consistent with the data. The main question we are trying to answer is: how well does the hypothesized factor structure fit the observed data?
How do you choose between Poisson and negative binomial models for discrete count outcomes?
One key criterion is the relative value of the variance to the mean after accounting for the effect of the predictors. A previous article discussed the concept of a variance that is larger than the model assumes: overdispersion.
(Underdispersion is also possible, but much less common.)
There are two common ways to check for overdispersion.