
Concepts you Need to Understand to Run a Mixed or Multilevel Model

January 23rd, 2013

Have you ever been told you need to run a mixed (aka: multilevel) model and been thrown off by all the new vocabulary?

It happened to me when I first started my statistical consulting job, oh so many years ago. I had learned mixed models in an ANOVA class, so I had a pretty good grasp on many of the concepts.

But when I started my job, SAS had just recently come out with Proc Mixed, and it was the first time I had to actually implement a true multilevel model.  I was out of school, so I had to figure it out on the job.

And even with my background, I had a pretty steep learning curve to get to a point where it made sense.  Sure, I was able to figure out the steps, but there are some pretty tricky situations and complicated designs out there.

To implement it well, you need a good understanding of the big picture, and how the small parts fit into it.  (more…)


Analyzing Pre-Post Data with Repeated Measures or ANCOVA

January 22nd, 2013

One area in statistics where I see conflicting advice is how to analyze pre-post data. I’ve run into this myself in consulting. A few years ago, I received a call from a distressed client. Let’s call her Nancy.

Nancy had asked an advisor for help running a repeated measures analysis. The advisor told her that, actually, a repeated measures analysis was inappropriate for her data.

Nancy was sure repeated measures was appropriate. This advice led her to fear that she had grossly misunderstood a very basic tenet in her statistical training.

The Study Design

Nancy had measured a response variable at two time points for two groups. The intervention group received a treatment and the control group did not. Participants were randomly assigned to one of the two groups.

The researcher measured each participant before and after the intervention.

Analyzing the Pre-Post Data

Nancy was sure that this was a classic repeated measures experiment. It has (more…)


R Is Not So Hard! A Tutorial, Part 3: Regressions and Plots

January 11th, 2013


In Part 2 of this series, we created two variables and used the lm() command to perform a least squares regression, treating height as the dependent variable and bodymass as the independent variable. Here they are again.

height = c(176, 154, 138, 196, 132, 176, 181, 169, 150, 175)
bodymass = c(82, 49, 53, 112, 47, 69, 77, 71, 62, 78)

Today we learn how to obtain useful diagnostic information about a regression model and then how to draw residuals on a plot. As before, we perform the regression.

lm(height ~ bodymass)

Call:
lm(formula = height ~ bodymass)

Coefficients:
(Intercept)     bodymass
    98.0054       0.9528

Now let’s find out more about the regression. First, let’s store the regression model as an object called mod and then use the summary() command to learn about the regression.

mod <- lm(height ~ bodymass)

summary(mod)

Here is what R gives you.

Call:
lm(formula = height ~ bodymass)

Residuals:
     Min       1Q   Median       3Q      Max
 -10.786   -8.307    1.272    7.818   12.253

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  98.0054    11.7053   8.373 3.14e-05 ***
bodymass      0.9528     0.1618   5.889 0.000366 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 9.358 on 8 degrees of freedom
Multiple R-squared:  0.8126,    Adjusted R-squared:  0.7891
F-statistic: 34.68 on 1 and 8 DF,  p-value: 0.0003662

R has given you a great deal of diagnostic information about the regression. The most useful pieces of this information are the coefficients themselves, the adjusted R-squared, the F-statistic, and the p-value for the model.
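If you would rather pull those quantities out of the model directly instead of reading them off the printout, the object that summary() returns stores them as named components. Here is a small sketch using the mod object from above; coef(), the adj.r.squared and fstatistic components, and pf() are all part of base R.

# the coefficient table: estimates, standard errors, t values and p-values
coef(summary(mod))

# the adjusted R-squared
summary(mod)$adj.r.squared

# the F-statistic: its value plus the numerator and denominator degrees of freedom
fstat <- summary(mod)$fstatistic
fstat

# the p-value for the model, computed from the F-statistic
pf(fstat[1], fstat[2], fstat[3], lower.tail = FALSE)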

Now let’s use R’s predict() command to create a vector of fitted values.

regmodel <- predict(lm(height ~ bodymass))

regmodel

Here are the fitted values:

1 2 3 4 5 6 7 8 9 10
176.1334 144.6916 148.5027 204.7167 142.7861 163.7472 171.3695 165.6528 157.0778 172.3222
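As a side note, because we stored the model as mod earlier, the same fitted values are also available through R’s fitted() function, and the residuals through residuals(). Both lines below should agree with what predict() gave us.

fitted(mod)      # the same fitted values as predict()
residuals(mod)   # the residuals, i.e. height - regmodel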

Now let’s plot the data and regression line again.

plot(bodymass, height, pch = 16, cex = 1.3, col = "blue", main = "HEIGHT PLOTTED AGAINST BODY MASS", xlab = "BODY MASS (kg)", ylab = "HEIGHT (cm)")

abline(lm(height ~ bodymass))

[Figure: scatterplot of height (cm) against body mass (kg) with the fitted regression line]

We can plot the residuals using a for loop and an index k that runs from 1 to the number of data points. We know that there are 10 data points, but if we did not know the number, we could find it using the length() command on either the height or the body mass variable.

npoints <- length(height)
npoints

[1] 10

Now let’s implement the loop and draw the residuals (the differences between the observed data and the corresponding fitted values) using the lines() command. Note the syntax we use to draw in the residuals.

for (k in 1:npoints) lines(c(bodymass[k], bodymass[k]), c(height[k], regmodel[k]))

Here is our plot, including the residuals.

[Figure: the same scatterplot, now with vertical lines showing the residuals between each observed point and the fitted regression line]
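Incidentally, if you prefer to avoid writing an explicit loop, base R’s segments() function draws all of the residual lines in a single vectorized call. This is an optional alternative to the loop above, not part of the original code:

segments(bodymass, height, bodymass, regmodel)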

In Part 4 we will look at more advanced aspects of regression models and see what R has to offer.


About the Author:
David Lillis has taught R to many researchers and statisticians. His company, Sigma Statistics and Research Limited, provides both on-line instruction and face-to-face workshops on R, and coding services in R. David holds a doctorate in applied statistics.

See our full R Tutorial Series and other blog posts regarding R programming.



Three Issues in Sample Size Estimates for Multilevel Models

November 30th, 2012

If you’ve ever worked with multilevel models, you know that they are an extension of linear models. For a researcher learning them, this is both good and bad news.

The good news is that many of the concepts, calculations, and results are familiar. The bad news is that everything gets more complicated in multilevel models.

This includes power and sample size calculations. (more…)


Factor Analysis: A Short Introduction, Part 4–How many factors should I find?

November 16th, 2012

by Maike Rahn, PhD

One of the hardest things to determine when conducting a factor analysis is how many factors to settle on. Statistical programs provide a number of criteria to help with the selection.

Eigenvalue > 1

Programs usually have a default cut-off for the number of factors retained, such as keeping all factors with an eigenvalue of 1 or greater.

This is because a factor with an eigenvalue of 1 accounts for as much variance as a single variable, and the logic is that only factors that explain at least as much variance as a single variable are worth keeping.
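If you want to see this rule in action, you can compute the eigenvalues of the correlation matrix of your items yourself. Here is a minimal sketch in R, assuming your items are stored in a data frame called mydata (a hypothetical name); the count of eigenvalues greater than 1 is the number of factors this criterion would suggest.

# eigenvalues of the correlation matrix of the observed items
ev <- eigen(cor(mydata))$values
ev

# number of factors the eigenvalue-greater-than-1 rule would keep
sum(ev > 1)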

But often a cut-off of 1 results in more factors than the user bargained for or (more…)


Factor Analysis: A Short Introduction, Part 3-The Difference Between Confirmatory and Exploratory Factor Analysis

November 2nd, 2012

by Maike Rahn, PhD

An important question that the consultants at The Analysis Factor are frequently asked is:

What is the difference between a confirmatory and an exploratory factor analysis?

A confirmatory factor analysis assumes that you enter the factor analysis with a firm idea about the number of factors you will encounter, and about which variables will most likely load onto each factor.

Your expectations are usually based on the published findings of a previous factor analysis.

An example is a fatigue scale that has previously been validated. You would like to make sure that the variables in your sample load onto the factors the same way they did in the original research.

In other words, you have very clear expectations about what you will find in your own sample. This means that (more…)