While this can work in some situations, you’re losing out on some key information you’d get from a structural equation model. This article highlights one of these.
I do not have a good answer to that question. What I will do is show examples. In upcoming blog posts, I will explain what each output means and how they are used in a model.
We will focus on ANOVA and linear regression models using SPSS and Stata software. As you will see, the biggest differences are not across software, but across procedures in the same software.
Our analysis of linear regression typically focuses on parameter estimates, t-statistics, p-values, and confidence intervals. Rarely in regression do we see a discussion of the sums of squares and F statistics given in the ANOVA table that appears above the coefficients and p-values.
And yet, they tell you a lot about your model and your data. Understanding the parts of the table and what they tell you is important for anyone running any regression or ANOVA model.
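To make the distinction concrete, here is a minimal sketch in R, using simulated variables (x, z, and y are made up for illustration, not data from the article). `summary()` prints the familiar coefficient table, while `anova()` prints the ANOVA table with sums of squares and F statistics that the passage above refers to.

```r
# Simulated example data (hypothetical, for illustration only)
set.seed(7)
x <- rnorm(80)
z <- rnorm(80)
y <- 1 + 0.8 * x - 0.5 * z + rnorm(80)

fit <- lm(y ~ x + z)

summary(fit)  # coefficient estimates, t-statistics, p-values, overall F
anova(fit)    # sequential sums of squares, mean squares, F statistics
```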
Most analysts’ primary focus is checking the distributional assumptions on the residuals: they must be independent and identically distributed (i.i.d.) with a mean of zero and constant variance.
Residuals can also give us insight into the quality of our models.
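A minimal sketch of these checks in R, on simulated data (the variables here are hypothetical): with an intercept in the model, OLS residuals have mean zero by construction, so the informative checks are the graphical ones for constant variance and normality.

```r
# Simulated example data (hypothetical, for illustration only)
set.seed(123)
x <- rnorm(100)
y <- 3 + 1.5 * x + rnorm(100)

fit <- lm(y ~ x)
r <- residuals(fit)

mean(r)               # ~0 up to rounding, guaranteed by the intercept

plot(fitted(fit), r,  # look for constant spread (homoskedasticity)
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)

qqnorm(r); qqline(r)  # check normality of the residuals
```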
In this webinar, we’ll review and compare what residuals are in linear regression, ANOVA, and generalized linear models. Jeff will cover:
- Which residuals — standardized, studentized, Pearson, deviance, etc. — we use and why
- How to determine if distributional assumptions have been met
- How to use graphs to discover issues like non-linearity, omitted variables, and heteroskedasticity
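As a quick sketch of the residual types listed above, R's stats package provides them directly (the data here is simulated and hypothetical); Pearson and deviance residuals arise in generalized linear models and come from `residuals()` with a `type` argument:

```r
# Simulated example data (hypothetical, for illustration only)
set.seed(1)
x <- rnorm(50)
y <- 1 + 2 * x + rnorm(50)

fit <- lm(y ~ x)
raw <- residuals(fit)   # raw residuals
std <- rstandard(fit)   # standardized (internally studentized)
stu <- rstudent(fit)    # studentized (externally studentized)

# Pearson and deviance residuals from a generalized linear model
yb <- as.numeric(y > 1)
gfit <- glm(yb ~ x, family = binomial)
pear <- residuals(gfit, type = "pearson")
dev  <- residuals(gfit, type = "deviance")
```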
Knowing how to piece this information together will improve your statistical modeling skills.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
You put a lot of work into preparing and cleaning your data. Running the model is the moment of excitement.
You look at your tables and interpret the results. But first you remember that one or more variables had a few outliers. Did these outliers impact your results? [Read more…] about Incorporating Graphs in Regression Diagnostics with Stata
Last time we created two variables, used the lm() command to perform a least squares regression on them, and diagnosed our regression using the plot() command.
Just as we did last time, we perform the regression using lm(). This time we store it as an object M. [Read more…] about Linear Models in R: Improving Our Regression Model
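The workflow described above can be sketched as follows; the variables x and y are simulated stand-ins, not the post's actual data:

```r
# Simulated example data (hypothetical, for illustration only)
set.seed(42)
x <- runif(100, 0, 10)
y <- 2 + 0.5 * x + rnorm(100)  # linear relationship plus noise

# Fit the least squares regression and store it as an object M
M <- lm(y ~ x)

summary(M)           # coefficient estimates, t-statistics, p-values

# The four standard diagnostic plots: residuals vs. fitted,
# normal Q-Q, scale-location, and residuals vs. leverage
par(mfrow = c(2, 2))
plot(M)
```

Storing the fit as an object is what makes the later steps possible: `summary()`, `plot()`, and the residual extractors all operate on that stored model.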