Three Myths and Truths About Model Fit in Confirmatory Factor Analysis

by Christos Giannoulis, PhD We mentioned before that we use Confirmatory Factor Analysis to evaluate whether the relationships among the variables are adequately represented by the hypothesized factor structure. The factor structure (relationships between factors and variables) can be based on theoretical justification or previous findings. Once we estimate the relationships between the indicators and those factors, the […]

Read the full article →

Four Common Misconceptions in Exploratory Factor Analysis

by Christos Giannoulis, PhD Today, I would like to briefly describe four misconceptions that I feel are common among novice researchers using Exploratory Factor Analysis: Misconception 1: The choice between component and common factor extraction procedures is not so important. In Principal Component Analysis, a set of variables is transformed into a smaller set […]

Read the full article →

June 2018 Member Webinar: The Fundamentals of Sample Size Calculations

Sample size estimates are one of those data analysis tasks that look straightforward but, once you try to do one, make you want to bang your head against the computer in frustration. Or maybe that’s just me.
Regardless of how they make you feel, they are super important to do for your study before you collect the data.

Read the full article →

To Moderate or to Mediate?

by Christos Giannoulis, PhD We get many questions from clients who use the terms mediator and moderator interchangeably. They are easy to confuse, yet mediation and moderation are two distinct terms that require distinct statistical approaches. The key difference between the concepts can be compared to a case where a moderator lets you know when […]
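
If it helps to see the two ideas side by side, here is a minimal sketch in Python with simulated data (my own illustration, not from the article; the variable names x, w, and m are made up): moderation shows up as an interaction term, while mediation is an indirect path through a third variable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n), "w": rng.normal(size=n)})

# Moderation: the effect of x on y depends on the level of w (an interaction)
df["y_mod"] = 1 + 0.5 * df["x"] + 0.3 * df["w"] + 0.4 * df["x"] * df["w"] + rng.normal(size=n)
mod_fit = smf.ols("y_mod ~ x * w", data=df).fit()
print(mod_fit.params[["x", "w", "x:w"]])  # the x:w term is the moderation effect

# Mediation: x affects m, and m in turn affects y (the indirect path a*b)
df["m"] = 0.6 * df["x"] + rng.normal(size=n)
df["y_med"] = 0.2 * df["x"] + 0.5 * df["m"] + rng.normal(size=n)
a = smf.ols("m ~ x", data=df).fit().params["x"]           # path a: x -> m
b = smf.ols("y_med ~ x + m", data=df).fit().params["m"]   # path b: m -> y, controlling for x
print("indirect effect (a*b):", a * b)
```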

Read the full article →

Understanding Interactions Between Categorical and Continuous Variables in Linear Regression

So we’ve looked at the interaction effect between two categorical variables. But let’s make things a little more interesting, shall we? What if our predictors of interest, say, are a categorical and a continuous variable? How do we interpret the interaction between the two? We’ll keep working with our trusty 2014 General Social Survey data set. But this time let’s examine the impact of job prestige level (a continuous variable) and gender (a categorical, dummy-coded variable) as our two predictors.
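
As a rough sketch of the kind of model described here (simulated data standing in for the GSS variables; the names and coefficients are invented for illustration), the categorical-by-continuous interaction can be fit and read off like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2014)
n = 800
prestige = rng.uniform(20, 80, size=n)      # continuous predictor
female = rng.integers(0, 2, size=n)         # dummy-coded: 0 = male, 1 = female
# Outcome whose prestige slope differs by gender (that difference is the interaction)
income = 10 + 0.6 * prestige - 4 * female + 0.2 * prestige * female + rng.normal(0, 5, n)

df = pd.DataFrame({"income": income, "prestige": prestige, "female": female})
fit = smf.ols("income ~ prestige * female", data=df).fit()
print(fit.summary().tables[1])

# Interpretation: 'prestige' is the slope when female == 0;
# 'prestige:female' is how much that slope changes when female == 1.
```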

Read the full article →

May 2018 Member Webinar: Adjustments for Multiple Testing: When and How to Handle Multiplicity

A research study rarely involves just one single statistical test. And multiple testing can result in statistically significant findings just by chance.
After all, with the typical Type I error rate of 5% used in most tests, we are allowing ourselves to “get lucky” 1 in 20 times for each test. When you figure out the probability of a Type I error across all the tests, that probability skyrockets.
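
To put a number on “skyrockets”, here is a quick back-of-the-envelope calculation of my own: with k independent tests each run at α = 0.05, the chance of at least one false positive is 1 − 0.95^k.

```python
# Familywise error rate for k independent tests, each at alpha = 0.05
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests: P(at least one false positive) = {fwer:.2f}")
# 1 test: 0.05, 5 tests: 0.23, 10 tests: 0.40, 20 tests: 0.64
```
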
There are a number of ways to control for this chance significance. And as with most things statistical, determining a viable adjustment to control for the chance significance depends on what you are doing. Some approaches are good. Some are not so good. And, sometimes an adjustment isn’t even necessary.
In this webinar, Elaine Eisenbeisz will provide an overview of multiple comparisons and why they can be a problem.

Read the full article →

When to Use Logistic Regression for Percentages and Counts

One important yet difficult skill in statistics is choosing the type of model to use for different data situations. One key consideration is the dependent variable.

For linear models, the dependent variable doesn’t have to be normally distributed, but it does have to be continuous, unbounded, and measured on an interval or ratio scale.

Percentages don’t fit these criteria. Yes, they’re continuous and ratio scale. The issue is the boundaries at 0 and 100.

Likewise, counts have a boundary at 0 and are discrete, not continuous. The general advice is to analyze these with some variety of a Poisson model.

Yet there is one specific type of variable that can be considered either a count or a percentage, but that has its own distribution.
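
As a hedged illustration of why the total matters (simulated data, not the article’s example): when a percentage is really a count of events out of a known number of trials, it can be modeled with a binomial GLM, which is logistic regression on those counts. The predictor name below is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
dose = rng.uniform(0, 10, n)                    # hypothetical predictor
trials = rng.integers(20, 60, n)                # total trials per observation
p_true = 1 / (1 + np.exp(-(-2 + 0.4 * dose)))   # true event probability
events = rng.binomial(trials, p_true)

# Binomial GLM: the response is (successes, failures), a count out of a known total
endog = np.column_stack([events, trials - events])
exog = sm.add_constant(pd.DataFrame({"dose": dose}))
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.summary())
```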

Read the full article →

Why ANOVA is Really a Linear Regression, Despite the Difference in Notation

When I was in graduate school, stat professors would say “ANOVA is just a special case of linear regression.”  But they never explained why. And I couldn’t figure it out. The model notation is different. The output looks different. The vocabulary is different. The focus of what we’re testing is completely different. How can they […]
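
One quick way to convince yourself of the claim (a sketch with simulated data, not the article’s own demonstration): a one-way ANOVA and a regression on dummy-coded group indicators produce the same F statistic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
groups = ["a", "b", "c"]
df = pd.DataFrame({
    "group": np.repeat(groups, 30),
    "y": np.concatenate([rng.normal(mu, 1, 30) for mu in (5.0, 5.5, 6.5)]),
})

# Classic one-way ANOVA
f_anova, p_anova = stats.f_oneway(*[df.loc[df.group == g, "y"] for g in groups])

# The same model as a regression on dummy-coded group membership
fit = smf.ols("y ~ C(group)", data=df).fit()

print(f"ANOVA F = {f_anova:.4f}, regression F = {fit.fvalue:.4f}")  # identical
```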

Read the full article →

What Is Specification Error in Statistical Models?

When we think about model assumptions, we tend to focus on assumptions like independence, normality, and constant variance. The other big assumption, which is harder to see or test, is that there is no specification error. The assumption of linearity is part of this, but it’s actually a bigger assumption. What is this assumption of no specification error? The basic idea is that when you choose a final model, you want to choose one that accurately represents the real relationships among variables. There are a few common ways of specifying a linear model inaccurately.
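
As one small, hypothetical illustration (not drawn from the article): fitting a straight line when the true relationship is quadratic is a specification error, and it leaves a clear signature, visible here as a large drop in fit and residuals that still depend on x.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, 300)
y = 1 + 2 * x + 1.5 * x**2 + rng.normal(0, 1, 300)   # true relationship is quadratic
df = pd.DataFrame({"x": x, "y": y})

misspecified = smf.ols("y ~ x", data=df).fit()        # omits the squared term
correct = smf.ols("y ~ x + I(x**2)", data=df).fit()   # matches the true form

# The misspecified model's residuals are systematically related to x,
# and its overall fit is much worse; that gap is the specification error.
print("R-squared, linear only :", round(misspecified.rsquared, 3))
print("R-squared, with x^2    :", round(correct.rsquared, 3))
```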

Read the full article →

What Is Regression to the Mean?

Have you ever heard that “two tall parents will have shorter children”?

This phenomenon, known as regression to the mean, has been used to explain everything from patterns in hereditary stature (as Galton first did in 1886) to why movie sequels or sophomore albums so often flop.

So just what is regression to the mean (RTM)?
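
One way to see the phenomenon in action is a small simulation of my own (not from the article): measure a noisy quantity twice, pick the extreme cases on the first measurement, and their average on the second measurement drifts back toward the overall mean.

```python
import numpy as np

rng = np.random.default_rng(1886)
true_score = rng.normal(100, 10, 100_000)           # stable underlying value
test1 = true_score + rng.normal(0, 10, 100_000)     # two noisy measurements
test2 = true_score + rng.normal(0, 10, 100_000)

top = test1 > np.percentile(test1, 90)              # "tall parents": extreme on test 1
print("Top-decile mean on test 1:", test1[top].mean().round(1))
print("Same cases on test 2:    ", test2[top].mean().round(1))  # closer to 100
```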

Read the full article →