One of the Many Advantages to Running Confirmatory Factor Analysis with a Structural Equation Model

February 23rd, 2020

Judging by the questions I’ve been asked by clients, most analysts prefer to use the factor analysis procedures in their general statistical software to run a confirmatory factor analysis.

While this can work in some situations, you’re losing out on some key information you’d get from a structural equation model. This article highlights one of these.


Same Statistical Models, Different (and Confusing) Output Terms

January 7th, 2020

Learning how to analyze data can be frustrating at times. Why do statistical software companies have to add to our confusion?

I do not have a good answer to that question. What I can do is show examples. In upcoming blog posts, I will explain what each piece of output means and how it is used in a model.

We will focus on ANOVA and linear regression models using SPSS and Stata software. As you will see, the biggest differences are not across software, but across procedures in the same software.
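One reason the terms get confusing is that an ANOVA and a regression on a dummy-coded group variable are the same underlying model, so the procedures report the same quantities under different labels. As a minimal pure-Python sketch (the data are made up for illustration; the posts themselves use SPSS and Stata):

```python
# One-way ANOVA and regression on a 0/1 dummy are the same model:
# both partition the total sum of squares the same way, so F matches.
group_a = [1.0, 2.0, 3.0]
group_b = [4.0, 5.0, 6.0]

# --- ANOVA route: between- and within-group sums of squares ---
all_y = group_a + group_b
grand_mean = sum(all_y) / len(all_y)
mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

ss_between = len(group_a) * (mean_a - grand_mean) ** 2 \
           + len(group_b) * (mean_b - grand_mean) ** 2
ss_within = sum((y - mean_a) ** 2 for y in group_a) \
          + sum((y - mean_b) ** 2 for y in group_b)
df_between, df_within = 1, len(all_y) - 2
f_anova = (ss_between / df_between) / (ss_within / df_within)

# --- Regression route: y on a dummy for group membership ---
x = [0.0] * len(group_a) + [1.0] * len(group_b)
x_mean = sum(x) / len(x)
sxx = sum((xi - x_mean) ** 2 for xi in x)
sxy = sum((xi - x_mean) * (yi - grand_mean) for xi, yi in zip(x, all_y))
slope = sxy / sxx                      # equals mean_b - mean_a
ss_model = slope * sxy                 # "regression" sum of squares
ss_resid = sum((yi - grand_mean) ** 2 for yi in all_y) - ss_model
f_regression = (ss_model / 1) / (ss_resid / df_within)

print(f_anova, f_regression)  # identical up to rounding
```

What one procedure labels "between groups," the other labels "model" or "regression" — but the numbers agree.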


Member Training: The Anatomy of an ANOVA Table

December 31st, 2019

Our analysis of linear regression focuses on parameter estimates, t-statistics, p-values, and confidence intervals. Rarely in regression do we see a discussion of the sums of squares and F statistics given in the ANOVA table above the coefficients and p-values.

And yet, they tell you a lot about your model and your data. Understanding the parts of the table and what they tell you is important for anyone running any regression or ANOVA model.
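To make the table’s pieces concrete, here is a pure-Python sketch (toy data, not from the training) that builds the ANOVA-table quantities for a simple regression by hand — the sums of squares, their degrees of freedom, the mean squares, the F statistic, and R²:

```python
# Build the ANOVA table for a simple linear regression by hand.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

x_mean = sum(x) / n
y_mean = sum(y) / n
sxx = sum((xi - x_mean) ** 2 for xi in x)
sxy = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = y_mean - slope * x_mean
fitted = [intercept + slope * xi for xi in x]

# The three sums of squares that anchor the table:
ss_total = sum((yi - y_mean) ** 2 for yi in y)               # SST
ss_model = sum((fi - y_mean) ** 2 for fi in fitted)          # SSR (model)
ss_resid = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))  # SSE (error)

df_model, df_resid = 1, n - 2
ms_model = ss_model / df_model   # mean square for the model
ms_resid = ss_resid / df_resid   # mean squared error
f_stat = ms_model / ms_resid
r_squared = ss_model / ss_total

print(ss_model, ss_resid, ss_total)  # SST = SSR + SSE
print(f_stat, r_squared)
```

Every number in a software ANOVA table for this model is one of these quantities, whatever label the procedure gives it.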


Member Training: The Multi-Faceted World of Residuals

July 1st, 2017

Most analysts’ primary focus is checking the distributional assumptions of the residuals: they must be independent and identically distributed (i.i.d.) with a mean of zero and constant variance.

Residuals can also give us insight into the quality of our models.

In this webinar, we’ll review and compare what residuals are in linear regression, ANOVA, and generalized linear models. Jeff will cover:

  • Which residuals — standardized, studentized, Pearson, deviance, etc. — we use and why
  • How to determine if distributional assumptions have been met
  • How to use graphs to discover issues like non-linearity, omitted variables, and heteroskedasticity

Knowing how to piece this information together will improve your statistical modeling skills.
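As a rough preview of one comparison the webinar makes: in ordinary linear regression, raw residuals can be rescaled by the residual standard deviation and each point’s leverage to give (internally) studentized residuals. A minimal pure-Python sketch on invented data:

```python
import math

# Raw vs. (internally) studentized residuals for a simple regression.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

x_mean = sum(x) / n
y_mean = sum(y) / n
sxx = sum((xi - x_mean) ** 2 for xi in x)
slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / sxx
intercept = y_mean - slope * x_mean

raw = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(e ** 2 for e in raw) / (n - 2)   # residual variance estimate

# Leverage of each point; high-leverage points get smaller raw residuals,
# which is exactly what studentizing corrects for.
leverage = [1.0 / n + (xi - x_mean) ** 2 / sxx for xi in x]
studentized = [e / math.sqrt(s2 * (1.0 - h)) for e, h in zip(raw, leverage)]

print(sum(raw))        # raw residuals sum to ~0 by construction
print(sum(leverage))   # leverages sum to the number of parameters (2)
```

The other flavors (Pearson, deviance) generalize this rescaling idea to generalized linear models, where the raw residual alone is even less informative.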

Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.


Incorporating Graphs in Regression Diagnostics with Stata

May 24th, 2016

You put a lot of work into preparing and cleaning your data. Running the model is the moment of excitement.

You look at your tables and interpret the results. But first you remember that one or more variables had a few outliers. Did these outliers impact your results?
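The post works through this question with Stata’s diagnostic graphs. As a language-neutral sketch of what those graphs quantify, here is a pure-Python computation of Cook’s distance, a standard influence measure, on invented data with one planted outlier:

```python
# Cook's distance flags points whose removal would move the fitted line.
# The last y value is a planted outlier.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.0, 2.0, 3.0, 4.0, 5.0, 12.0]
n, p = len(x), 2   # p = number of estimated parameters

x_mean = sum(x) / n
y_mean = sum(y) / n
sxx = sum((xi - x_mean) ** 2 for xi in x)
slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / sxx
intercept = y_mean - slope * x_mean

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(e ** 2 for e in resid) / (n - p)
leverage = [1.0 / n + (xi - x_mean) ** 2 / sxx for xi in x]

cooks = [e ** 2 * h / (p * s2 * (1.0 - h) ** 2)
         for e, h in zip(resid, leverage)]

worst = max(range(n), key=lambda i: cooks[i])
print(worst, cooks[worst])  # the planted outlier dominates
```

A graph of these distances (or of residuals versus leverage) makes the influential point obvious at a glance, which is exactly the role the Stata graphs play in the post.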

Linear Models in R: Improving Our Regression Model

April 23rd, 2015 by

Last time we created two variables and used the lm() command to perform a least squares regression on them, then diagnosed our regression using the plot() command.

Just as we did last time, we perform the regression using lm(). This time we store it as an object M.
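The post itself continues in R. As a minimal sketch of the idea behind “improving” a model — shown here in Python with made-up data, not the post’s example — adding a quadratic term when the relationship is curved shrinks the residual sum of squares:

```python
# When residuals show curvature, adding a squared term can fix the model.
# Toy data: y is exactly quadratic in x, so the linear fit leaves
# structured residuals while the quadratic fit is (numerically) perfect.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.0, 4.0, 9.0, 16.0, 25.0]
n = len(x)

def fit_least_squares(columns, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(columns)
    xtx = [[sum(ci * cj for ci, cj in zip(columns[i], columns[j]))
            for j in range(k)] for i in range(k)]
    xty = [sum(ci * yi for ci, yi in zip(columns[i], y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            factor = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= factor * xtx[col][c]
            xty[r] -= factor * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back-substitution
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    return beta

def sse(columns, beta, y):
    fitted = [sum(b * col[i] for b, col in zip(beta, columns))
              for i in range(len(y))]
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))

ones = [1.0] * n
linear_cols = [ones, x]
quad_cols = [ones, x, [xi ** 2 for xi in x]]

sse_linear = sse(linear_cols, fit_least_squares(linear_cols, y), y)
sse_quad = sse(quad_cols, fit_least_squares(quad_cols, y), y)
print(sse_linear, sse_quad)  # the quadratic term absorbs the curvature
```

In R this comparison is a one-liner per model — lm(y ~ x) versus lm(y ~ x + I(x^2)) — which is why the post stores each fit as an object before comparing them.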