The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers

Mixed and Multilevel Models

Five Extensions of the General Linear Model

by Karen Grace-Martin 12 Comments

Generalized linear models, linear mixed models, generalized linear mixed models, marginal models, GEE models.  You’ve probably heard of more than one of them and you’ve probably also heard that each one is an extension of our old friend, the general linear model.

This is true, and they extend our old friend in different ways, particularly in regard to the measurement level of the dependent variable and the independence of the measurements.  So while the names are similar (and confusing), the distinctions are important.

It’s important to note here that I am glossing over many, many details in order to give you a basic overview of some important distinctions.  These are complicated models, but I hope this overview gives you a starting place from which to explore more. [Read more…] about Five Extensions of the General Linear Model

Tagged With: generalized linear mixed model

Related Posts

  • Member Training: Missing Data
  • Member Training: Elements of Experimental Design
  • Regression Diagnostics in Generalized Linear Mixed Models
  • Six Differences Between Repeated Measures ANOVA and Linear Mixed Models

The Repeated and Random Statements in Mixed Models for Repeated Measures

by Karen Grace-Martin 68 Comments

“Because mixed models are more complex and more flexible than the general linear model, the potential for confusion and errors is higher.”

– Hamer & Simpson (2005)

Linear Mixed Models, as implemented in SAS’s Proc Mixed, SPSS Mixed, R’s lmer, and Stata’s xtmixed, are an extension of the general linear model.  They use more sophisticated techniques to estimate parameters (means, variances, regression coefficients, and standard errors), and, as the quotation says, are much more flexible.

Here’s one example of the flexibility of mixed models, and its resulting potential for confusion and error. [Read more…] about The Repeated and Random Statements in Mixed Models for Repeated Measures

Tagged With: Marginal Model, mixed model, Proc Mixed, Random Statement, Repeated Statement, SPSS Mixed

Related Posts

  • Approaches to Repeated Measures Data: Repeated Measures ANOVA, Marginal, and Mixed Models
  • Mixed Up Mixed Models
  • Specifying Fixed and Random Factors in Mixed Models
  • Six Differences Between Repeated Measures ANOVA and Linear Mixed Models

How Simple Should a Model Be? The Case of Insignificant Controls, Interactions, and Covariance Structures

by Karen Grace-Martin 12 Comments

“Everything should be made as simple as possible, but no simpler” – Albert Einstein*

For some reason, I’ve heard this quotation 3 times in the past 3 days.  Maybe I hear it every day and only noticed because I’ve been working with a few clients on model selection, and deciding how much to simplify a model.

And when the quotation fits, use it. (That’s the saying, right?)

*For the record, a quick web search indicated this may be a paraphrase, but it still applies.

The quotation is the general goal of model selection.  You really do want the model to be as simple as possible, but still able to answer the research question of interest.

This applies to many areas of model selection.  Here are a few examples: [Read more…] about How Simple Should a Model Be? The Case of Insignificant Controls, Interactions, and Covariance Structures

Tagged With: control variables, Covariance Structure, interaction, Model Selection

Related Posts

  • Model Building Strategies: Step Up and Top Down
  • Five Common Relationships Among Three Variables in a Statistical Model
  • Member Training: Using Excel to Graph Predicted Values from Regression Models
  • Member Training: Hierarchical Regressions

Covariance Matrices, Covariance Structures, and Bears, Oh My!

by Karen Grace-Martin 38 Comments

Of all the concepts I see researchers struggle with as they start to learn high-level statistics, the one that seems to most often elicit the blank stare of incomprehension is the Covariance Matrix, and its friend, the Covariance Structure.

And since understanding them is fundamental to a number of statistical analyses, particularly Mixed Models and Structural Equation Modeling, it’s an incomprehension you can’t afford.

So I’m going to explain what they are and how they’re not so different from what you’re used to.  I hope you’ll see that once you get to know them, they aren’t so scary after all.

What is a Covariance Matrix?

There are two concepts inherent in a covariance matrix–covariance and matrix.  Either one can throw you off.

Let’s start with matrix.  If you never took linear algebra, the idea of matrices can be frightening.  (And if you are still in school, I highly recommend you take it.  Highly.)  And there are a lot of very complicated, mathematical things you can do with matrices.

But you, a researcher and data analyst, don’t need to be able to do all those complicated processes to your matrices.  You do need to understand what a matrix is, be able to follow the notation, and understand a few simple matrix processes, like multiplication of a matrix by a constant.

The thing to keep in mind when it all gets overwhelming is that a matrix is just a table.  That’s it.

A Covariance Matrix, like many matrices used in statistics, is symmetric.  That means the table has the same headings across the top as it does along the side, and each value above the diagonal is mirrored below it.

Start with a Correlation Matrix

The simplest example, and a cousin of a covariance matrix, is a correlation matrix.  It’s just a table in which each variable is listed in both the column headings and row headings, and each cell of the table (i.e. matrix) is the correlation between the variables that make up the column and row headings.  Here is a simple example from a data set on 62 species of mammal:

From this table, you can see that the correlation between Weight in kg and Hours of Sleep is -.307. Smaller mammals tend to sleep more.

You’ll notice that this is the same above and below the diagonal. The correlation of Hours of Sleep with Weight in kg is the same as the correlation between Weight in kg and Hours of Sleep.

Likewise, all correlations on the diagonal equal 1, because they’re the correlation of each variable with itself.

If this table were written as a matrix, you’d only see the numbers, without the column headings.
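You can see both properties (symmetry and a diagonal of ones) by computing a correlation matrix yourself. Here is a small Python/NumPy sketch; the data are randomly generated for illustration, not the mammal data from the post:

```python
import numpy as np

# Illustrative data: ten observations (rows) of three variables (columns).
# Randomly generated -- NOT the mammal data set discussed in the post.
rng = np.random.default_rng(0)
data = rng.normal(size=(10, 3))

# np.corrcoef treats rows as variables, so transpose first.
R = np.corrcoef(data.T)

print(np.allclose(R, R.T))         # symmetric: value above the diagonal mirrors below
print(np.allclose(np.diag(R), 1))  # every variable correlates 1 with itself
```

Both checks print True for any data set: a correlation matrix is always symmetric with ones on the diagonal.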

Now, the Covariance Matrix

A Covariance Matrix is very similar. There are really two differences between it and the Correlation Matrix.

First, we have substituted the correlation values with covariances.

Covariance is just an unstandardized version of correlation.  To compute a correlation, we divide the covariance by the product of the two variables’ standard deviations, which removes the units of measurement.  So a covariance is just a correlation measured in the units of the original variables.

Covariance, unlike correlation, is not constrained to be between -1 and 1. But a covariance’s sign will always be the same as the corresponding correlation’s. And a covariance of 0 has the exact same meaning as a correlation of 0: no linear relationship.
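To make the standardization concrete, here is a short Python/NumPy sketch with made-up numbers (not the mammal data): dividing a covariance by the product of the two standard deviations recovers the correlation exactly.

```python
import numpy as np

# Two illustrative variables -- invented values, not from the post's data set.
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])

cov_xy = np.cov(x, y)[0, 1]  # unstandardized association, in units of x * units of y

# Standardize: divide by the product of the two standard deviations.
# ddof=1 matches np.cov's default sample (n-1) denominator.
r_xy = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))

print(r_xy)                              # 0.8
print(np.isclose(r_xy, np.corrcoef(x, y)[0, 1]))  # True: same as the built-in correlation
```

If you rescaled x (say, kilograms to grams), cov_xy would change by the same factor, but r_xy would not: that is the sense in which correlation is unit-free.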

Because covariance is in the original units of the variables, variables on scales with bigger numbers and with wider distributions will necessarily have bigger covariances. So for example, Life Span has similar correlations to Weight and Exposure while sleeping, both around .3.

But values of Weight vary a lot (this data set contains both elephants and shrews), whereas Exposure is an index variable that ranges from only 1 to 5. So Life Span’s covariance with Weight (5113.27) is much larger than its covariance with Exposure (10.66).

Second, the diagonal cells of the matrix contain the variances of each variable. The covariance of a variable with itself is simply its variance. So you have a context for interpreting these covariance values.

Once again, a covariance matrix is just the table without the row and column headings.
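A quick Python/NumPy check makes the diagonal-equals-variance point concrete. Again the data are randomly generated for illustration, not the mammal data:

```python
import numpy as np

# Illustrative data: eight observations of three variables -- invented, not the post's data.
rng = np.random.default_rng(1)
data = rng.normal(size=(8, 3))

S = np.cov(data.T)  # 3x3 sample covariance matrix (variables in rows after transpose)

# The diagonal holds each variable's variance (covariance of a variable with itself).
print(np.allclose(np.diag(S), np.var(data, axis=0, ddof=1)))  # True
# And like the correlation matrix, it is symmetric.
print(np.allclose(S, S.T))  # True
```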

What about Covariance Structures?

Covariance Structures are just patterns in covariance matrices.  Some of these patterns occur often enough in some statistical procedures that they have names.

You may have heard of some of these names–Compound Symmetry, Variance Components, Unstructured, for example.  They sound strange because they’re often thrown about without any explanation.

But they’re just descriptions of patterns.

For example, the Compound Symmetry structure just means that all the variances are equal to each other and all the covariances are equal to each other. That’s it.

It wouldn’t make sense with our animal data set because each variable is measured on a different scale. But if all four variables were measured on the same scale, or better yet, if they were all the same variable measured under four experimental conditions, it would be a very plausible pattern.

Variance Components just means that each variance is different, and all covariances=0. So if all four variables were completely independent of each other and measured on different scales, that would be a reasonable pattern.

Unstructured just means there is no pattern at all.  Each variance and each covariance is completely different and has no relation to the others.
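These patterns are easiest to see when you build them. Here is a Python/NumPy sketch of the Compound Symmetry and Variance Components structures for four repeated measures; the variance and covariance values are made up for illustration:

```python
import numpy as np

k = 4  # say, one variable measured under four conditions

# Compound Symmetry: one common variance on the diagonal,
# one common covariance everywhere off the diagonal.
var, cov = 2.0, 0.5  # illustrative values, not estimated from any data
cs = np.full((k, k), cov)
np.fill_diagonal(cs, var)

# Variance Components: each variance different, all covariances zero.
vc = np.diag([1.0, 2.5, 0.8, 4.2])  # illustrative variances

print(cs)
print(vc)
```

An Unstructured matrix, by contrast, has no pattern to exploit: all k variances and k(k-1)/2 covariances are estimated separately, which is why it uses so many more parameters than the structures above.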

There are many, many covariance structures.  And each one makes sense in certain statistical situations.  Until you’ve encountered those situations, they look crazy.  But each one is just describing a pattern that makes sense in some situations.

 

Tagged With: Correlation, correlation matrix, Covariance Matrix, Covariance Structure, linear mixed model, mixed model, multilevel model, Structural Equation Modeling

Related Posts

  • Member Training: Goodness of Fit Statistics
  • Multilevel, Hierarchical, and Mixed Models–Questions about Terminology
  • The Difference Between Random Factors and Random Effects
  • Six Differences Between Repeated Measures ANOVA and Linear Mixed Models

How to Combine Complicated Models with Tricky Effects

by Karen Grace-Martin 4 Comments

Need to dummy code in a Cox regression model?

Interpret interactions in a logistic regression?

Add a quadratic term to a multilevel model?

This is where statistical analysis starts to feel really hard. You’re combining two difficult issues into one.

You’re dealing with both a complicated modeling technique at Stage 3 (survival analysis, logistic regression, multilevel modeling) and tricky effects in the model (dummy coding, interactions, and quadratic terms).

The only way to figure it all out in a situation like that is to break it down into parts.  [Read more…] about How to Combine Complicated Models with Tricky Effects

Tagged With: Cox Regression, dummy coding, interaction, logistic regression, multilevel model, quadratic terms, Survival Analysis

Related Posts

  • Member Training: Types of Regression Models and When to Use Them
  • When NOT to Center a Predictor Variable in Regression
  • Member Training: Goodness of Fit Statistics
  • Interpreting Regression Coefficients in Models other than Ordinary Linear Regression

Approaches to Repeated Measures Data: Repeated Measures ANOVA, Marginal, and Mixed Models

by Karen Grace-Martin 94 Comments

In a recent post, I discussed the differences between repeated measures and longitudinal data, and some of the issues that come up in each one.

I want to expand on that discussion, and discuss the three approaches you can take to analyze repeated measures data.

For a few, very specific designs, you can get the exact same results from all three approaches.  This, I find, has always made it difficult to figure out what each one is doing, and how to apply them to OTHER designs.

For the purposes of discussion here, I’m going to define repeated measures data as repeated measurements of the same outcome variable on the same individual.  The individual is often a person, but could just as easily be a plant, animal, colony, company, etc.  For simplicity, I’ll use “individual.” [Read more…] about Approaches to Repeated Measures Data: Repeated Measures ANOVA, Marginal, and Mixed Models

Tagged With: Marginal Model, mixed model, Population Averaged Model, Repeated Measures

Related Posts

  • Six Differences Between Repeated Measures ANOVA and Linear Mixed Models
  • The Repeated and Random Statements in Mixed Models for Repeated Measures
  • Linear Mixed Models for Missing Data in Pre-Post Studies
  • When Does Repeated Measures ANOVA not work for Repeated Measures Data?



Copyright © 2008–2022 The Analysis Factor, LLC. All rights reserved.