The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers

The Difference Between R-squared and Adjusted R-squared

by Karen Grace-Martin

When is it important to use adjusted R-squared instead of R-squared?

R², the Coefficient of Determination, is one of the most useful and intuitive statistics we have in linear regression.

It tells you how well the model predicts the outcome and has some nice properties. But it also has one big drawback.

[Read more…] about The Difference Between R-squared and Adjusted R-squared

Tagged With: Adjusted R-squared, Coefficient of determination, linear regression, Multiple Regression, R-squared

Related Posts

  • Confusing Statistical Term #9: Multiple Regression Model and Multivariate Regression Model
  • Member Training: Preparing to Use (and Interpret) a Linear Regression Model
  • What is Multicollinearity? A Visual Description
  • Member Training: The Link Between ANOVA and Regression

Member Training: Assumptions of Linear Models

by TAF Support

What are the assumptions of linear models? If you compare two lists of assumptions, most of the time they’re not the same.

[Read more…] about Member Training: Assumptions of Linear Models

Tagged With: Assumptions, linear models, model equation

Related Posts

  • Member Training: Using Transformations to Improve Your Linear Regression Model
  • Member Training: The Link Between ANOVA and Regression
  • Member Training: Centering
  • The Difference Between R-squared and Adjusted R-squared

Measures of Model Fit for Linear Regression Models

by Karen Grace-Martin

A well-fitting regression model results in predicted values close to the observed data values.

The mean model, which uses the mean for every predicted value, generally would be used if there were no useful predictor variables. The fit of a proposed regression model should therefore be better than the fit of the mean model. [Read more…] about Measures of Model Fit for Linear Regression Models

Tagged With: F test, Model Fit, R-squared, regression models, RMSE

Related Posts

  • The Difference Between R-squared and Adjusted R-squared
  • Simplifying a Categorical Predictor in Regression Models
  • Why ANOVA is Really a Linear Regression, Despite the Difference in Notation
  • Can a Regression Model with a Small R-squared Be Useful?

Linear Regression Analysis – 3 Common Causes of Multicollinearity and What to Do About Them

by Karen Grace-Martin

Multicollinearity in regression is one of those issues that strikes fear into the hearts of researchers. You’ve heard about its dangers in statistics classes, and colleagues and journal reviewers question your results because of it. But there are really only a few causes of multicollinearity. Let’s explore them.

Multicollinearity is simply redundancy in the information contained in predictor variables. If the redundancy is moderate, [Read more…] about Linear Regression Analysis – 3 Common Causes of Multicollinearity and What to Do About Them

Tagged With: dummy coding, interpreting regression coefficients, Multicollinearity, principal component analysis

Related Posts

  • Making Dummy Codes Easy to Keep Track of
  • Simplifying a Categorical Predictor in Regression Models
  • What is Multicollinearity? A Visual Description
  • Your Questions Answered from the Interpreting Regression Coefficients Webinar

Why report estimated marginal means?

by Karen Grace-Martin

Updated 8/18/2021

I recently was asked whether to report means from descriptive statistics or from the Estimated Marginal Means with SPSS GLM.

The short answer: Report the Estimated Marginal Means (almost always).

To understand why, and the rare cases where it doesn’t matter, let’s dig in a bit with a longer answer.

First, a marginal mean is the mean response for each category of a factor, adjusted for any other variables in the model (more on this later).

Just about any time you include a factor in a linear model, you’ll want to report the mean for each group. The F test of the model in the ANOVA table will give you a p-value for the null hypothesis that those means are equal. And that’s important.

But you need to see the means and their standard errors to interpret the results. The difference in those means is what measures the effect of the factor. While that difference can also appear in the regression coefficients, looking at the means themselves gives you context and makes interpretation more straightforward. This is especially true if you have interactions in the model.

Some basic info about marginal means

  • In SPSS menus, they are found under the Options button, and in SPSS syntax they’re EMMEANS.
  • These are called LSMeans in SAS, margins in Stata, and emmeans in R’s emmeans package.
  • Although I’m talking about them in the context of linear models, all the software has them in other types of models, including linear mixed models, generalized linear models, and generalized linear mixed models.
  • They are also called predicted means and model-based means. There are probably a few other names for them, because that’s what happens in statistics.

When marginal means are the same as observed means

Let’s consider a few different models. In all of these, our factor of interest, X, is a categorical predictor for which we’re calculating Estimated Marginal Means. We’ll call it the Independent Variable (IV).

Model 1: No other predictors

If you have just a single factor in the model (a one-way ANOVA), marginal means and observed means will be the same.

Observed means are what you would get if you simply calculated the mean of Y for each group of X.
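You can check this in SPSS. Here is a minimal sketch, assuming a hypothetical outcome Y and factor X: the MEANS command gives the observed group means, and the EMMEANS subcommand gives the marginal means from the one-way model. The two tables will match.

* Observed means of Y for each group of X.
MEANS TABLES=Y BY X.

* Estimated marginal means from the one-way model (identical here).
UNIANOVA Y BY X
/EMMEANS=TABLES(X)
/DESIGN=X.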

Model 2: Other categorical predictors, and all are balanced

Likewise, if you have other factors in the model and all of those factors are balanced, the estimated marginal means will be the same as the observed means you got from descriptive statistics.

Model 3: Other categorical predictors, unbalanced

Now things change. The marginal mean for our IV is different from the observed mean. It’s the mean for each group of the IV, averaged across the groups of the other factor.

When you’re observing the category an individual is in, you will pretty much never get balanced data. Even when you’re doing random assignment, balanced groups can be hard to achieve.

In this situation, the observed means will be different from the marginal means. So report the marginal means. They better reflect the main effect of your IV—the effect of that IV, averaged across the groups of the other factor.
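In SPSS syntax, a minimal sketch with a hypothetical second factor Z: the EMMEANS table for X averages the cell means over the groups of Z with equal weight, rather than weighting by cell sizes the way observed means do.

* Marginal means of X, averaged (unweighted) over the groups of Z.
UNIANOVA Y BY X Z
/EMMEANS=TABLES(X)
/DESIGN=X Z.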

Model 4: A continuous covariate

When you have a covariate in the model the estimated marginal means will be adjusted for the covariate. Again, they’ll differ from observed means.

It works a little bit differently than it does with a factor. For a covariate, the estimated marginal mean is the mean of Y for each group of the IV at one specific value of the covariate.

By default in most software, this one specific value is the mean of the covariate. Therefore, you interpret the estimated marginal means of your IV as the mean of each group at the mean of the covariate.

This, of course, is the reason for including the covariate in the model: you want to see if your factor still has an effect, beyond the effect of the covariate. You are interested in the adjusted effects in both the overall F-test and in the means.

If you just use observed means and there is any association between the covariate and your IV, some of that mean difference will be driven by the covariate.

For example, say your IV is the type of math curriculum taught to first graders. There are two types. And say your covariate is child’s age, which is related to the outcome: math score.

It turns out that curriculum A has slightly older kids and a higher mean math score than curriculum B. Observed means for each curriculum will not account for the fact that the kids who received that curriculum were a little older. Marginal means will give you the mean math score for each group at the same age. In essence, it sets Age at a constant value before calculating the mean for each curriculum. This gives you a fairer comparison between the two curricula.

But there is another advantage here. Although the default value of the covariate is its mean, you can change this default.  This is especially helpful for interpreting interactions, where you can see the means for each group of the IV at both high and low values of the covariate.

In SPSS, you can change this default using syntax, but not through the menus.

For example, in this syntax, the EMMEANS statement reports the marginal means of Y at each level of the categorical variable X at the mean of the Covariate V.

UNIANOVA Y BY X WITH V
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(X) WITH(V=MEAN)
/DESIGN=X V.

If instead you wanted to evaluate the effect of X at a specific value of V, say 50, you can just change the EMMEANS statement to:

/EMMEANS=TABLES(X) WITH(V=50)
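And if the model includes an X by V interaction, you can request the marginal means of X at both a low and a high value of the covariate to see how the effect of X changes across it. A sketch, using 30 and 70 as hypothetical values of V:

UNIANOVA Y BY X WITH V
/EMMEANS=TABLES(X) WITH(V=30)
/EMMEANS=TABLES(X) WITH(V=70)
/DESIGN=X V X*V.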

Another good reason to use syntax.

Tagged With: Covariate, Estimated marginal Means, LSMeans, SPSS GLM, spss syntax

Related Posts

  • Same Statistical Models, Different (and Confusing) Output Terms
  • Spotlight Analysis for Interpreting Interactions
  • Confusing Statistical Terms #5: Covariate
  • 5 Reasons to use SPSS Syntax

Overfitting in Regression Models

by Karen Grace-Martin

The practice of choosing predictors for a regression model, called model building, is an area of real craft.

There are many possible strategies and approaches, and they all work well in some situations. Every one of them requires making a lot of decisions along the way. As you make decisions, one danger to look out for is overfitting—creating a model that is too complex for the data. [Read more…] about Overfitting in Regression Models

Tagged With: linear regression, Linear Regression Model, Model Building, overfitting, regression model

Related Posts

  • Same Statistical Models, Different (and Confusing) Output Terms
  • What is Multicollinearity? A Visual Description
  • Differences in Model Building Between Explanatory and Predictive Models
  • Member Training: The Link Between ANOVA and Regression
