The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers


Karen Grace-Martin

Why Statistics Terminology is Especially Confusing

by Karen Grace-Martin Leave a Comment

The field of statistics has a terminology problem.

It affects students’ ability to learn statistics. It affects researchers’ ability to communicate with statisticians, with collaborators in different fields, and, of course, with the general public.

It’s easy to think the real issue is that statistical concepts are difficult. That is true. It’s not the whole truth, though. [Read more…]

Tagged With: statistical terminology, statistics, terminology


Confusing Statistical Term #9: Multiple Regression Model and Multivariate Regression Model

by Karen Grace-Martin 22 Comments

First published 4/29/09; updated 2/23/21 to give more detail.

Much like General Linear Model and Generalized Linear Model in #7, there are many examples in statistics of terms with (ridiculously) similar names, but nuanced meanings.

Today I talk about the difference between multivariate and multiple, as they relate to regression.

Multiple Regression

A regression analysis with one dependent variable and eight independent variables is NOT a multivariate regression model.  It’s a multiple regression model.

And believe it or not, it’s considered a univariate model.

This is especially important to remember if you’re an SPSS user. Choose Univariate GLM (General Linear Model) for this model, not Multivariate.

I know this sounds crazy and misleading. Why would a model that contains nine variables (eight Xs and one Y) be considered a univariate model?

It’s because of the fundamental idea in regression that Xs and Ys aren’t the same. We’re using the Xs to understand the mean and variance of Y. This is why the residuals in a linear regression are differences between predicted and actual values of Y. Not X.

(And of course, there is an exception, called Type II or Major Axis linear regression, where X and Y are not distinct. But in most regression models, Y has a different role than X).

It’s the number of Ys that tells you whether it’s a univariate or multivariate model. That said, other than SPSS, I haven’t seen anyone use the term univariate to refer to this model in practice. Instead, the assumed default is that regression models have one Y, so we focus on how many Xs the model has. This leads us to…

Simple Regression: A regression model with one Y (dependent variable) and one X (independent variable).

Multiple Regression: A regression model with one Y (dependent variable) and more than one X (independent variables).
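To make the distinction concrete, here’s a minimal sketch in Python with hypothetical data (NumPy’s least-squares solver standing in for any regression routine). One Y and three Xs: a multiple, but still univariate, regression.

```python
import numpy as np

# Hypothetical data: one outcome (y) and three predictors (columns of X).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Multiple regression: one Y, several Xs. Add an intercept column and
# solve the ordinary least-squares problem directly.
X_design = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print(coefs)  # close to the true values [2.0, 1.5, -0.5, 0.8]
```

However many Xs you add, there is still only one column of residuals (differences between predicted and actual Y), which is why the model stays univariate.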

References below.

Multivariate Regression

Multivariate analysis ALWAYS describes a situation with multiple dependent variables.

So a multivariate regression model is one with multiple Y variables. It may have one X variable or more than one. It is equivalent to a MANOVA (Multivariate Analysis of Variance).

Other examples of Multivariate Analysis include:

  • Principal Component Analysis
  • Factor Analysis
  • Canonical Correlation Analysis
  • Linear Discriminant Analysis
  • Cluster Analysis

But wait. Multivariate analyses like cluster analysis and factor analysis have no dependent variable, per se. Why, then, define multivariate analysis in terms of dependent variables?

Well, it’s not really about dependency. It’s about which variables’ means and variances are being analyzed. In a multivariate regression, we have multiple dependent variables, whose joint means are being predicted by the one or more Xs. It’s the variance and covariance in the set of Ys that we’re modeling (and estimating in the variance-covariance matrix).

Note: this is actually a situation where the subtle differences in what we call that Y variable can help. Calling it the outcome or response variable, rather than the dependent variable, is more applicable to something like factor analysis.

So when to choose multivariate GLM?  When you’re jointly modeling the variation in multiple response variables.
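A sketch of what “jointly” means in practice, using hypothetical data and plain NumPy: with two response variables, the least-squares solve returns a coefficient matrix rather than a vector, and the residuals have a variance-covariance matrix across the Ys.

```python
import numpy as np

# Hypothetical data: two response variables modeled jointly from the
# same two predictors (a multivariate regression).
rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))
B_true = np.array([[1.0, -2.0],   # coefficients of X1 for Y1, Y2
                   [0.5,  1.5]])  # coefficients of X2 for Y1, Y2
Y = X @ B_true + rng.normal(scale=0.3, size=(n, 2))

# One least-squares solve estimates the whole coefficient matrix:
# each column holds the coefficients for one response variable.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B_hat.shape)  # (2, 2): 2 predictors x 2 responses

# Residual covariance across the two Ys -- the variance-covariance
# matrix the multivariate model is estimating.
resid = Y - X @ B_hat
print(np.cov(resid, rowvar=False))
```

Column by column this matches two separate multiple regressions; what the multivariate model adds is that residual covariance matrix, which is what MANOVA-style tests use.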

References

In response to many requests in the comments, I suggest the following references.  I give the caveat, though, that neither reference compares the two terms directly. They simply define each one. So rather than just list references, I’m going to explain them a little.

  1. Neter, Kutner, Nachtsheim, and Wasserman’s Applied Linear Regression Models, 3rd ed. There are, incidentally, newer editions with slight changes in authorship. But I’m citing the one on my shelf.

Chapter 1, Linear Regression with One Independent Variable, includes:

“Regression model 1.1 … is “simple” in that there is only one predictor variable.”

Chapter 6 is titled Multiple Regression – I, and section 6.1 is “Multiple Regression Models: Need for Several Predictor Variables.” Interestingly enough, there is no directly quotable definition of the term “multiple regression.” Even so, it’s pretty clear. Go read the chapter to see.

There is no mention of the term “Multivariate Regression” in this book.

  2. Johnson & Wichern’s Applied Multivariate Statistical Analysis, 3rd ed.

Chapter 7, Multivariate Linear Regression Models, section 7.1 Introduction. Here it says:

“In this chapter we first discuss the multiple regression model for the prediction of a single response. This model is then generalized to handle the prediction of several dependent variables.” (Emphasis theirs).

They finally get to Multivariate Multiple Regression in Section 7.7. Here they “consider the problem of modeling the relationship between m responses, Y1, Y2, …,Ym, and a single set of predictor variables.”

Misuses of the Terms

I’d be shocked, though, if there aren’t some books or articles out there where the terms are not used or defined the way I’ve described them here. It’s very easy to confuse these terms, even for those of us who should know better.

And honestly, it’s not that hard to just describe the model instead of naming it. “Regression model with four predictors and one outcome” doesn’t take a lot more words and is much less confusing.

If you’re ever confused about the type of model someone is describing to you, just ask.

Read More Explanations of Confusing Statistical Terms.

Tagged With: Multiple Regression, multivariate analysis, SPSS Multivariate GLM, SPSS Univariate GLM


Four Weeds of Data Analysis That are Easy to Get Lost In

by Karen Grace-Martin Leave a Comment

Every time you analyze data, you start with a research question and end with communicating an answer. In fact, defining that research question is vital to getting to the right answer.

But in between those start and end points are twelve other steps. I call this the Data Analysis Pathway. It’s a framework I put together years ago, inspired by a client who kept getting stuck in Weed #1. But I’ve honed it over the years of assisting thousands of researchers with their analysis.

[Read more…]

Tagged With: Data Analysis, data analysis plan, data issues


The Difference Between Model Assumptions, Inference Assumptions, and Data Issues

by Karen Grace-Martin Leave a Comment

Have you ever compared the list of assumptions for linear regression across two sources? Whether they’re textbooks, lecture notes, or web pages, chances are the assumptions don’t quite line up.

Why? Sometimes the authors use different terminology. So it just looks different.

And sometimes they’re including not only model assumptions, but inference assumptions and data issues. All are important, but understanding the role of each can help you understand what applies in your situation.

[Read more…]

Tagged With: Assumptions, data issues, inference


When Unequal Sample Sizes Are and Are NOT a Problem in ANOVA

by Karen Grace-Martin 219 Comments

Updated Dec 18, 2020 to add more detail

In your statistics class, your professor made a big deal about unequal sample sizes in one-way Analysis of Variance (ANOVA) for two reasons.

1. Because she was making you calculate everything by hand.  Sums of squares require a different formula* if sample sizes are unequal, but statistical software will automatically use the right formula. So we’re not too concerned. We’re definitely using software.

2. Nice properties in ANOVA such as the Grand Mean being the intercept in an effect-coded regression model don’t hold when data are unbalanced. Instead of the grand mean, you need to use a weighted mean. That’s not a big deal if you’re aware of it. [Read more…]
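A quick illustration of point 2, with made-up numbers: in an unbalanced one-way design, the unweighted mean of the group means and the weighted (pooled) mean of all observations disagree.

```python
import numpy as np

# Hypothetical unbalanced one-way design: three groups with
# unequal sample sizes.
groups = [
    np.array([10.0, 12.0, 11.0]),                # n = 3, mean 11
    np.array([20.0, 22.0]),                      # n = 2, mean 21
    np.array([30.0, 31.0, 29.0, 32.0, 28.0]),    # n = 5, mean 30
]

group_means = np.array([g.mean() for g in groups])

# Unweighted "grand mean": the plain average of the group means.
grand_mean = group_means.mean()

# Weighted mean: all observations pooled, so larger groups count more.
weighted_mean = np.concatenate(groups).mean()

print(round(grand_mean, 2), round(weighted_mean, 2))  # 20.67 22.5
```

With balanced groups the two numbers would be identical; the gap here (20.67 vs 22.5) is exactly the issue the effect-coded intercept runs into.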

Tagged With: analysis of variance, ANOVA, SPSS, Unequal sample sizes


Missing Data: Two Big Problems with Mean Imputation

by Karen Grace-Martin 11 Comments

Mean imputation: So simple. And yet, so dangerous.

Perhaps that’s a bit dramatic, but mean imputation (also called mean substitution) really ought to be a last resort.

It’s a popular solution to missing data, despite its drawbacks. Mainly because it’s easy. It can be really painful to lose a large part of the sample you so carefully collected, only to have little power.

But that doesn’t make it a good solution, and it may not help you find relationships with strong parameter estimates, even if they exist in the population.

On the other hand, there are many alternatives to mean imputation that provide much more accurate estimates and standard errors, so there really is no excuse to use it.

This post is the first explaining the many reasons not to use mean imputation (and to be fair, its advantages).

First, a definition: mean imputation is the replacement of a missing observation with the mean of the non-missing observations for that variable.
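In code, mean imputation is only a couple of lines, which is part of its appeal. Here is a sketch with hypothetical data (shown for illustration, not endorsement):

```python
import numpy as np

# Mean imputation: replace each missing (NaN) value with the mean of
# the non-missing values in that column.
data = np.array([
    [1.0,  10.0],
    [2.0, np.nan],
    [3.0,  30.0],
    [4.0, np.nan],
    [5.0,  50.0],
])

col_means = np.nanmean(data, axis=0)              # column means, ignoring NaNs
imputed = np.where(np.isnan(data), col_means, data)

print(imputed[:, 1])  # [10. 30. 30. 30. 50.]
```

Both missing values in the second column become 30, the mean of the observed 10, 30, and 50, which is exactly what causes the two problems below.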

Problem #1: Mean imputation does not preserve the relationships among variables.

True, imputing the mean preserves the mean of the observed data.  So if the data are missing completely at random, the estimate of the mean remains unbiased. That’s a good thing.

Plus, by imputing the mean, you are able to keep your sample size up to the full sample size. That’s good too.

This is the original logic involved in mean imputation.

If all you are doing is estimating means (which is rarely the point of research studies), and if the data are missing completely at random, mean imputation will not bias your parameter estimate.

It will still bias your standard error, but I will get to that in another post.

Since most research studies are interested in the relationship among variables, mean imputation is not a good solution.  The following graph illustrates this well:

This graph shows hypothetical data with X = years of education and Y = annual income in thousands, n = 50. The blue circles are the original data, and the solid blue line is the best-fit regression line for the full data set. The correlation between X and Y is r = .53.

I then randomly deleted 12 observations of income (Y) and substituted the mean.  The red dots are the mean-imputed data.

Blue circles with red dots inside them represent non-missing data. Empty blue circles represent the missing data. If you look across the graph at Y = 39, you will see a row of red dots without blue circles. These represent the imputed values.

The dotted red line is the new best fit regression line with the imputed data.  As you can see, it is less steep than the original line. Adding in those red dots pulled it down.

The new correlation is r = .39. That’s a lot smaller than .53.

The real relationship is quite underestimated.

Of course, in a real data set, you wouldn’t notice so easily the bias you’re introducing. This is one of those situations where in trying to solve the lowered sample size, you create a bigger problem.

One note: if X were missing instead of Y, mean substitution would artificially inflate the correlation.

In other words, you’ll think there is a stronger relationship than there really is. That’s not good either. It’s not reproducible and you don’t want to be overstating real results.

This solution that is so good at preserving unbiased estimates for the mean isn’t so good for unbiased estimates of relationships.
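The attenuation is easy to reproduce in a quick simulation (hypothetical numbers, not the actual data behind the blog’s graph):

```python
import numpy as np

# Simulate a strong X-Y relationship, then mean-impute 12 of 50 Y values.
rng = np.random.default_rng(42)
n = 50
x = rng.normal(14, 2, size=n)          # e.g. years of education
y = 3 * x + rng.normal(0, 2, size=n)   # e.g. income in thousands

r_full = np.corrcoef(x, y)[0, 1]

# Randomly delete 12 Y values and substitute the mean of the rest.
y_imp = y.copy()
missing = rng.choice(n, size=12, replace=False)
y_imp[missing] = np.delete(y, missing).mean()

r_imputed = np.corrcoef(x, y_imp)[0, 1]
print(r_full, r_imputed)  # the imputed correlation is attenuated
```

The imputed points form a horizontal stripe at the mean, contributing nothing to the covariance with X while still counting toward the sample size, so the correlation shrinks.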

Problem #2: Mean Imputation Leads to An Underestimate of Standard Errors

This second problem applies to any type of single imputation. Any statistic that uses the imputed data will have a standard error that’s too low.

In other words, yes, you get the same mean from mean-imputed data that you would have gotten without the imputations. And yes, there are circumstances where that mean is unbiased. Even so, the standard error of that mean will be too small.

Because the imputations are themselves estimates, there is some error associated with them. But your statistical software doesn’t know that. It treats the imputed values as real data.

Ultimately, because your standard errors are too low, so are your p-values.  Now you’re making Type I errors without realizing it.

That’s not good.
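A small sketch of why the standard error shrinks, with hypothetical numbers: after imputation the software “sees” 50 values, 12 of which are exact copies of the observed mean and so add no spread at all.

```python
import numpy as np

# 38 real observations; pretend 12 more were missing and mean-imputed.
rng = np.random.default_rng(7)
observed = rng.normal(100, 15, size=38)
y_filled = np.concatenate([observed, np.full(12, observed.mean())])

# Standard error of the mean, before and after imputation.
se_observed = observed.std(ddof=1) / np.sqrt(observed.size)
se_filled = y_filled.std(ddof=1) / np.sqrt(y_filled.size)

print(se_filled < se_observed)  # True: the imputed-data SE is smaller
```

The mean itself is unchanged, but the denominator grows from 38 to 50 while the numerator gains no variability, so the reported SE (and hence every p-value built on it) is too small.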

A better approach?  There are two: Multiple Imputation or Full Information Maximum Likelihood.

Tagged With: mean imputation, mean substitution, Missing Data



Copyright © 2008–2021 The Analysis Factor, LLC. All rights reserved.