In most regression models, there is one response variable and one or more predictors. From the model’s point of view, it doesn’t matter whether those predictors are there to predict, to moderate, to explain, or to control. All that matters is that they’re all Xs, on the right-hand side of the equation.
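To make that concrete, here is a minimal sketch in numpy (the data and coefficient values are made up for illustration). One predictor plays the role of the variable of interest and the other the role of a control, but the least-squares math makes no such distinction:

```python
import numpy as np

# Hypothetical data: y regressed on a predictor of interest (x1)
# and a "control" variable (x2). Values are illustrative only.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(scale=1.0, size=n)

# Design matrix: an intercept column plus both predictors.
# The estimation treats x1 and x2 identically -- "predictor of interest"
# versus "control" is the analyst's label, not the model's.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [1.0, 2.0, 0.5]
```

Swapping which predictor you call the "control" changes nothing in the fitted equation; it only changes how you interpret the coefficients.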
We can represent a latent construct by combining a set of scale items, called indicators. We do this via factor analysis.
Often prior research has determined which indicators represent the latent construct. Prudent researchers will run a confirmatory factor analysis (CFA) to ensure the same indicators work in their sample.
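A small simulation can show what a one-factor measurement model claims: each indicator reflects the latent construct plus its own noise, so the indicators' correlations are driven by their loadings. This is a sketch with made-up loadings, not a CFA procedure:

```python
import numpy as np

# One-factor measurement model: each observed indicator is
#   x_j = loading_j * f + noise_j,
# where f is the unobserved (latent) construct.
# Loadings and sample size are hypothetical.
rng = np.random.default_rng(1)
n = 5000
f = rng.normal(size=n)                      # latent construct (never observed directly)
loadings = np.array([0.8, 0.7, 0.6, 0.75])  # how strongly each indicator reflects f
noise_sd = np.sqrt(1 - loadings**2)         # keeps each indicator at unit variance
X = np.outer(f, loadings) + rng.normal(size=(n, 4)) * noise_sd

# Under this model, the correlation between indicators j and k
# should be approximately loadings[j] * loadings[k].
R = np.corrcoef(X, rowvar=False)
print(np.round(R, 2))
print(np.round(np.outer(loadings, loadings), 2))
```

A CFA essentially runs this logic in reverse: it asks whether the observed correlation matrix is consistent with the loading pattern that prior research proposed.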
If you already understand linear models, it’s a small leap to generalized linear models. Many concepts are the same in both types of models.
But one thing that perplexes many is why generalized linear models have no error term, the way linear models do. [Read more…] about Why Generalized Linear Models Have No Error Term
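The point is easy to see in simulated data (values here are hypothetical). A linear model adds an explicit error term to the linear predictor; a GLM instead lets the linear predictor set the mean of the response through a link function, and all the randomness comes from the response distribution itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)

# Linear model: randomness enters as an explicit additive error term e.
e = rng.normal(scale=1.0, size=n)
y_linear = 0.5 + 1.5 * x + e

# Poisson GLM: no additive error anywhere. The linear predictor sets the
# mean through the inverse (log) link, and the randomness comes from the
# Poisson distribution of the response.
eta = 0.5 + 1.5 * x          # linear predictor
mu = np.exp(eta)             # inverse link: mean of y given x
y_glm = rng.poisson(mu)      # response drawn from its distribution; no "+ e"

print(y_linear[:5], y_glm[:5])
```

There is no `e` to add in the Poisson line because the Poisson distribution's own spread around `mu` plays that role.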
The last, and sometimes hardest, step in running any statistical model is writing up the results.
As with most other steps, this one is a bit more complicated for structural equation models than it is for simpler models like linear regression.
Any good statistical report includes enough information that someone else could replicate your results with your data.
I recently held a free webinar on binary, ordinal, and nominal logistic regression as part of our Craft of Statistical Analysis program.
It was a record crowd and we didn’t get through everyone’s questions, so I’m answering some here on the site. They’re grouped by topic, and you’ll probably get more out of them if you watch the webinar recording first. It’s free.
The following questions refer to this logistic regression model: [Read more…] about Link Functions and Errors in Logistic Regression
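The specific model the questions refer to sits behind the link, but the logit-link machinery it relies on is generic. Here is a sketch with made-up coefficients, showing how the link connects the unbounded linear predictor to a probability and how the outcome's randomness is binomial rather than additive:

```python
import numpy as np

# Generic binary logistic regression setup; coefficients are illustrative.
rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)

eta = -0.5 + 1.2 * x                 # linear predictor (any real number)
p = 1.0 / (1.0 + np.exp(-eta))       # inverse logit link: probability in (0, 1)
y = rng.binomial(1, p)               # Bernoulli outcome; again, no additive error

# The logit link maps probabilities back to the linear-predictor scale.
logit_p = np.log(p / (1 - p))
print(np.allclose(logit_p, eta))     # True: logit is the inverse of the sigmoid
```

The same template covers ordinal and nominal logistic regression, with cumulative or multinomial probabilities replacing the single Bernoulli one.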