
3 Reasons Psychology Researchers should Learn Regression

February 17th, 2009

Back when I was doing psychology research, I knew ANOVA pretty well. I'd taken a number of courses on it and could run it backward and forward. I kept hearing about ANCOVA, but in every ANOVA class that was the last topic on the syllabus, and we always ran out of time.

The other thing that drove me crazy was that those stats professors kept saying "ANOVA is just a special case of Regression." I could not for the life of me figure out why or how.

It was only when I switched over to statistics that I finally took a regression class and figured out what ANOVA was all about. And it was only when I started consulting, and seeing hundreds of different ANOVA and regression models, that I finally made the connection.

But if you don’t have the driving curiosity about ANOVA and regression, why should you, as a researcher in Psychology, Education, or Agriculture, who is trained in ANOVA, want to learn regression?  There are 3 main reasons.

1. There are many, many continuous independent variables and covariates that need to be included in models. Without the tools to analyze them as continuous, you are left forcing them into ANOVA using an arbitrary technique like median splits. At best, you're losing power. At worst, you're not publishing your article because you're missing real effects.

2. Having a solid understanding of the General Linear Model in its various forms equips you to really understand your variables and their relationships. It allows you to try a model different ways: not for data fishing, but for discovering the true nature of the relationships. Having the capacity to add an interaction term or a squared term allows you to listen to your data and makes you a better researcher.

3. The multiple linear regression model is the basis for many other statistical techniques–logistic regression, multilevel and mixed models, Poisson regression, Survival Analysis, and so on.  Each of these is a step (or small leap) beyond multiple regression.  If you’re still struggling with what it means to center variables or interpret interactions, learning one of these other techniques becomes arduous, if not painful.

Having guided thousands of researchers through their statistical analysis over the past 10 years, I am convinced that having a strong, intuitive understanding of the general linear model in its variety of forms is the key to being an effective and confident statistical analyst.  You are then free to learn and explore other methodologies as needed.
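If you want to see the "ANOVA is just a special case of regression" claim for yourself, a quick simulation makes it obvious. Here is a minimal sketch (my own illustration, in Python with numpy, scipy, and statsmodels, none of which appear in the original post): a one-way ANOVA and a regression on the dummy-coded group variable give identical F statistics and p-values.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: three groups of 30 observations each
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["control", "treatA", "treatB"], 30),
    "y": np.concatenate([rng.normal(10, 2, 30),
                         rng.normal(12, 2, 30),
                         rng.normal(11, 2, 30)]),
})

# The classic one-way ANOVA
f_stat, p_val = stats.f_oneway(*[g["y"].values for _, g in df.groupby("group")])
print("ANOVA:  F =", f_stat, " p =", p_val)

# The same model as a regression: C() dummy-codes the factor
fit = smf.ols("y ~ C(group)", data=df).fit()
print(sm.stats.anova_lm(fit))   # same F and p for C(group)
```

Seeing the two outputs match is, in my experience, the fastest way to make the connection click.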

 


Continuous and Categorical Variables: The Trouble with Median Splits

February 16th, 2009

A Median Split is one method for turning a continuous variable into a categorical one. Essentially, the idea is to find the median of the continuous variable. Any value below the median is put in the category "Low," and every value above it is labeled "High."

This is a very common practice in many social science fields in which researchers are trained in ANOVA but not Regression.  At least that was true when I was in grad school in psychology.  And yes, oh so many years ago, I used all these techniques I’m going to tell you not to.

There are problems with median splits. The first is purely logical. When a continuum is categorized, every value above the median, for example, is considered equal. Does it really make sense that a value just above the median is considered the same as values way out at the end? And different from values just below the median? Not so much.

So one solution is to split the sample into three groups, not two, then drop the middle group. This at least creates some separation between the two groups. The obvious problem here, though, is that you're losing a third of your sample.

The second problem with categorizing a continuous predictor, regardless of how you do it, is loss of power (Aiken & West, 1991).  It’s simply harder to find effects that are really there.




So why is it common practice?  Because categorizing continuous variables is the only way to stuff them into an ANOVA, which is the only statistics method researchers in many fields are trained to do.

Rather than force a method that isn’t quite appropriate, it would behoove researchers, and the quality of their research, to learn the general linear model and how ANOVA fits into it.  It’s really only a short leap from ANOVA to regression but a necessary one.  GLMs can include interactions among continuous and categorical predictors just as ANOVA does.

If left continuous, the GLM would fit a regression line to the effect of that continuous predictor.  Categorized, the model will compare the means.  It often happens that while the difference in means isn’t significant, the slope is.
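That last point is easy to demonstrate with a small simulation. Here is a sketch (my own, in Python with numpy and scipy; nothing here comes from the original article): the same data are analyzed once with a median split and a t-test, and once with the predictor left continuous.

```python
import numpy as np
from scipy import stats

# Hypothetical data with a modest true linear effect of x on y
rng = np.random.default_rng(7)
n = 80
x = rng.normal(0, 1, n)
y = 0.3 * x + rng.normal(0, 1, n)

# Option 1: median split, then compare the two group means
high = y[x > np.median(x)]
low = y[x <= np.median(x)]
t_stat, p_split = stats.ttest_ind(high, low)

# Option 2: keep x continuous and test the regression slope
slope_result = stats.linregress(x, y)

print("median split p-value:    ", p_split)
print("regression slope p-value:", slope_result.pvalue)
# With a modest effect like this, the slope test is typically the more powerful of the two.
```

Run it a few times with different seeds and you will regularly see the split version miss an effect the slope test catches.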

Reference: Aiken, L. S., & West, S. G. (1991). Multiple Regression: Testing and Interpreting Interactions.

 


SPSS GLM: Choosing Fixed Factors and Covariates

December 30th, 2008

The beauty of the Univariate GLM procedure in SPSS is that it is so flexible.  You can use it to analyze regressions, ANOVAs, ANCOVAs with all sorts of interactions, dummy coding, etc.

The downside of this flexibility is that it is often confusing what to put where and what it all means.

So here’s a quick breakdown.

The dependent variable, I hope, is pretty straightforward: put in your continuous dependent variable.

Fixed Factors are categorical independent variables. It does not matter if the variable is…
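To see the same distinction written out as a model formula instead of a dialog box, here is a rough analogue (a sketch in Python's statsmodels with made-up variable names; it is not SPSS syntax or output): the categorical fixed factor is wrapped in C(), and the continuous covariate is entered as-is.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a categorical treatment (the fixed factor) and age (the covariate)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "treatment": np.repeat(["placebo", "low", "high"], 20),
    "age": rng.uniform(20, 60, 60),
})
df["score"] = 50 + 5 * (df["treatment"] == "high") + 0.2 * df["age"] + rng.normal(0, 3, 60)

# Fixed factor -> dummy-coded via C(); covariate -> entered as a continuous term
model = smf.ols("score ~ C(treatment) + age", data=df).fit()
print(model.summary())
```

The roles are the same ones the SPSS dialog is asking you to assign; only the way you declare them differs.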


Confusing Statistical Terms #3: Level

December 12th, 2008

Level is a term in statistics that is confusing because it has multiple meanings in different contexts (much like alpha and beta).

There are three different uses of the term Level in statistics that mean completely different things. What makes this especially confusing is that all three of them can be used in the exact same research context.

I’ll show you an example of that at the end.

So when you’re talking to someone who is learning statistics or who happens to be thinking of that term in a different context, this gets especially confusing.

Levels of Measurement

The most widespread of these is levels of measurement. Stanley Stevens came up with this taxonomy of assigning numerals to variables in the 1940s. You probably learned about them in your Intro Stats course: the nominal, ordinal, interval, and ratio levels.

Levels of measurement is really a measurement concept, not a statistical one. It refers to how much and the type of information a variable contains. Does it indicate an unordered category, a quantity with a zero point, etc?

It is important in statistics because it has a big impact on which statistics are appropriate for any given variable. For example, you would not do the same test of association between two nominal variables as you would between two interval variables.

Levels of a Factor

A related concept is a Factor. Although Factor itself has multiple meanings in statistics, here we are talking about a categorical predictor variable.

The typical use of Factor as a categorical predictor variable comes from experimental design. In experimental design, the predictor variables (also often called Independent Variables) are generally categorical and nominal. They represent different experimental conditions, like treatment and control conditions.

Each of these categorical conditions is called a level.

Here are a few examples:

In an agricultural study, a fertilizer treatment variable has three levels: organic fertilizer (composted manure); high concentration of chemical fertilizer; low concentration of chemical fertilizer.

In a medical study, a drug treatment has four levels: Placebo; low dosage; medium dosage; high dosage.

In a linguistics study, a word frequency variable has two levels: high frequency words; low frequency words.

Although this use of level is very widespread, I try to avoid it personally. Instead I use the word "value" or "category," both of which are accurate but don't carry other meanings.

Level in Multilevel Models

A completely different use of the term is in the context of multilevel models. Multilevel model is a term for a certain kind of mixed model. (The terms multilevel model and mixed model are often used interchangeably, though mixed model is a bit more flexible.)

Multilevel models have two or more sources of random variation.  A two level model has two sources of random variation and can have predictors at each level.

A common example is a model from a design where the response variable of interest is measured on students. It's hard, though, to sample students directly or to randomly assign them to treatments, since there is a natural clustering of students within schools.

So the resource-efficient way to do this research is to sample students within schools.

Predictors can be measured at the student level (e.g., gender, SES, age) or the school level (enrollment, % who go on to college). The dependent variable has variation from student to student (level 1) and from school to school (level 2).

We always count these levels from the bottom up. So if we have students clustered within classrooms, classrooms clustered within schools, and schools clustered within districts, we have: students at level 1, classrooms at level 2, schools at level 3, and districts at level 4.

So this use of the term level describes the design of the study, not the measurement of the variables or the categories of the factors.

Putting them together

So this is the truly unfortunate part. There are situations where all three definitions of level are relevant within the same statistical analysis context.

I find this unfortunate because I think using the same word to mean completely different things just confuses people. But here it is:

Picture that study in which students are clustered within school (a two-level design). Each school is assigned to use one of three math curricula (the independent variable, which happens to be categorical).

So, the variable "math curriculum" is a factor with 3 levels (i.e., three categories). Because those three categories of "math curriculum" are unordered, "math curriculum" has a nominal level of measurement. And since "math curriculum" is assigned to each school, it is considered a level 2 variable in the two-level model.
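Here is how that whole setup might look as an actual model. This is only a sketch (Python's statsmodels MixedLM, with variable names I made up; the original post contains no code): curriculum is the three-level factor measured at level 2, ses is a hypothetical student-level (level 1) predictor, and school supplies the second source of random variation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-level data: 30 schools, 20 students per school
rng = np.random.default_rng(3)
n_schools, n_students = 30, 20
school = np.repeat(np.arange(n_schools), n_students)
curriculum = np.repeat(rng.choice(["A", "B", "C"], n_schools), n_students)   # level-2 factor with 3 levels
ses = rng.normal(0, 1, n_schools * n_students)                               # level-1 (student) predictor
school_effect = np.repeat(rng.normal(0, 2, n_schools), n_students)           # level-2 random variation
score = 70 + 2 * ses + school_effect + rng.normal(0, 5, n_schools * n_students)

df = pd.DataFrame({"score": score, "curriculum": curriculum, "ses": ses, "school": school})

# A random intercept for school captures the level-2 variation;
# C(curriculum) dummy-codes the nominal, three-level factor
model = smf.mixedlm("score ~ C(curriculum) + ses", data=df, groups="school").fit()
print(model.summary())
```

All three meanings of "level" show up in those few lines, which is exactly why the word causes so much confusion.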

See the rest of the Confusing Statistical Terms series.

 


Confusing Statistical Terms #1: The Many Names of Independent Variables

November 24th, 2008

Statistical models, such as general linear models (linear regression, ANOVA, MANOVA), linear mixed models, and generalized linear models (logistic regression, Poisson regression, etc.) all have the same general form.

On the left side of the equation are one or more response variables, Y. On the right-hand side are one or more predictor variables, X, and their coefficients, B. The variables on the right-hand side can have many forms and are called by many names.
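In the generic notation just described (and only as a rough reminder, since the exact form varies by model), that shared structure looks like:

Y = B0 + B1*X1 + B2*X2 + … + error

where the Bs are the coefficients estimated from the data.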

There are subtle distinctions in the meanings of these names. Unfortunately, though, there are two practices that make them more confusing than they need to be.

First, they are often used interchangeably. So one person may use "predictor variable" and "independent variable" interchangeably and another person may not. The listener may then be reading in subtle distinctions that the speaker never intended.

Second, the same terms are used differently in different fields or research situations. So if you are an epidemiologist who does research on mostly observed variables, you probably have been trained with slightly different meanings to some of these terms than if you’re a psychologist who does experimental research.

Even worse, statistical software packages use different names for similar concepts, even among their own procedures. This quest for accuracy often just creates confusion. (It's hard enough without switching the words!)

Here are some common terms that all refer to a variable in a model that is proposed to affect or predict another variable.

I'll give you the different definitions and implications, but it's very likely that I'm missing some. If you see a term used to mean something different from how you understand it, please add it to the comments. And please tell us which field you primarily work in.

Predictor Variable, Predictor

This is the most generic of the terms. There are no implications for being manipulated, observed, categorical, or numerical. It does not imply causality.

A predictor variable is simply used for explaining or predicting the value of the response variable. Used predominantly in regression.

Independent Variable

I’ve seen Independent Variable (IV) used different ways.

1. It implies causality: the independent variable affects the dependent variable. This usage is predominant in ANOVA models where the Independent Variable is manipulated by the experimenter. If it is manipulated, it’s generally categorical and subjects are randomly assigned to conditions.

2. It does not imply causality, but it is a key predictor variable for answering the research question. In other words, it is in the model because the researcher is interested in understanding its relationship with the dependent variable. In other words, it’s not a control variable.

3. It does not imply causality or the importance of the variable to the research question. But it is uncorrelated with (independent of) all other predictors.

Honestly, I only recently saw someone define the term Independent Variable this way. Predictor Variables cannot be independent variables if they are at all correlated. It surprised me, but it’s good to know that some people mean this when they use the term.

Explanatory Variable

A predictor variable in a model where the main point is not to predict the response variable, but to explain a relationship between X and Y.

Control Variable

A predictor variable that could be related to or affecting the dependent variable, but not really of interest to the research question.

Covariate

Generally a continuous predictor variable. Used in both ANCOVA (analysis of covariance) and regression. Some people use this to refer to all predictor variables in regression, but it really means continuous predictors. Adding a covariate to ANOVA (analysis of variance) turns it into ANCOVA (analysis of covariance).

Sometimes covariate implies that the variable is a control variable (as opposed to an independent variable), but not always.

And sometimes people use covariate to mean control variable, either numerical or categorical.

This one is so confusing it got its own Confusing Statistical Terms article.

Confounding Variable, Confounder

These terms are used differently in different fields. In experimental design, it’s used to mean a variable whose effect cannot be distinguished from the effect of an independent variable.

In observational fields, it’s used to mean one of two situations. The first is a variable that is so correlated with an independent variable that it’s difficult to separate out their effects on the response variable. The second is a variable that causes the independent variable’s effect on the response.

The distinction between those interpretations is slight but important.

Exposure Variable

This is a term for independent variable in some fields, particularly epidemiology. It’s the key predictor variable.

Risk Factor

Another epidemiology term for a predictor variable. Unlike the term “Factor” listed below, it does not imply a categorical variable.

Factor

A categorical predictor variable. It may or may not indicate a cause/effect relationship with the response variable (this depends on the study design, not the analysis).

Independent variables in ANOVA are almost always called factors. In regression, they are often referred to as indicator variables, categorical predictors, or dummy variables. They are all the same thing in this context.

Also, please note that Factor has completely other meanings in statistics, so it too got its own Confusing Statistical Terms article.

Feature

Used in Machine Learning and Predictive models, this is simply a predictor variable.

Grouping Variable

Same as a factor.

Fixed factor

A categorical predictor variable in which the specific values of the categories are intentional and important, often chosen by the experimenter. Examples include experimental treatments or demographic categories, such as sex and race.

If you’re not doing a mixed model (and you should know if you are), all your factors are fixed factors. For a more thorough explanation of fixed and random factors, see Specifying Fixed and Random Factors in Mixed or Multi-Level Models

Random factor

A categorical predictor variable in which the specific values of the categories were randomly sampled from a larger set of possible values, rather than deliberately chosen. Generally used in mixed modeling. Examples include subjects or random blocks.

For a more thorough explanation of fixed and random factors, see Specifying Fixed and Random Factors in Mixed or Multi-Level Models

Blocking variable

This term is generally used in experimental design, but I’ve also seen it in randomized controlled trials.

A blocking variable is a variable that indicates an experimental block: a cluster or experimental unit that restricts complete randomization and that often results in similar response values among members of the block.

Blocking variables can be either fixed or random factors. They are never continuous.

Dummy variable

A categorical variable that has been dummy coded. Dummy coding (also called indicator coding) is usually used in regression models, but not ANOVA. A dummy variable can have only two values: 0 and 1. When a categorical variable has more than two values, it is recoded into multiple dummy variables.
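For example (a small sketch in Python's pandas with a hypothetical three-category variable; the post itself doesn't show code), a variable with three values becomes two 0/1 dummy variables, and the dropped category acts as the reference:

```python
import pandas as pd

# Hypothetical categorical variable with three values
df = pd.DataFrame({"curriculum": ["A", "B", "C", "A", "B", "C"]})

# Dummy (indicator) coding: k categories become k-1 dummy variables;
# the dropped category ("A" here) is the reference level
dummies = pd.get_dummies(df["curriculum"], prefix="curriculum", drop_first=True, dtype=int)
print(dummies)
# columns: curriculum_B and curriculum_C, each holding 0 or 1
```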

Indicator variable

Same as dummy variable.

The Take Away Message

Whenever you're using technical terms in a report, an article, or a conversation, it's always a good idea to define your terms. This is especially important in statistics, which is used in many, many fields, each of which adds its own subtleties to the terminology.

 

Confusing Statistical Terms Series

Confusing Statistical Terms #1: The Many Names of Independent Variables

Confusing Statistical Terms #2: Alpha and Beta

Confusing Statistical Terms #3: Levels

Confusing Statistical Term #4: Hierarchical Regression vs. Hierarchical Model

Confusing Statistical Term #5: Covariate

Confusing Statistical Term #6: Factor

Confusing Statistical Term #7: GLM

 


One-tailed and Two-tailed Tests

November 19th, 2008

I was recently asked about when to use one and two tailed tests.

The long answer is: use one-tailed tests when you have a specific hypothesis about the direction of your relationship. For example, you hypothesize that one group mean is larger than the other, that the correlation is positive, or that the proportion is below .5. (A short sketch at the end of this post shows how the one- and two-tailed versions differ numerically.)

The short answer is: Never use one-tailed tests.

Why?

1. Only a few statistical tests can even have one tail: z tests and t tests. So you're severely limited. F tests, chi-square tests, etc. can't accommodate one-tailed tests because their distributions are not symmetric. Most statistical methods, such as regression and ANOVA, are based on these tests, so you will rarely have the chance to implement them.

2. Probably because they are rare, reviewers balk at one-tailed tests.  They tend to assume that you are trying to artificially boost the power of your test.  Theoretically, however, there is nothing wrong with them when the hypothesis and the statistical test are right for them.
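Here is the numerical sketch promised above (my own illustration in Python with scipy; the alternative= argument requires a reasonably recent scipy version). It shows the relationship that makes reviewers nervous: when the observed difference goes in the predicted direction, the one-tailed p-value is simply half the two-tailed one.

```python
import numpy as np
from scipy import stats

# Hypothetical two-group data where group a is expected to have the larger mean
rng = np.random.default_rng(1)
a = rng.normal(10.5, 2, 40)
b = rng.normal(10.0, 2, 40)

# Two-tailed test: any difference, in either direction
t_two, p_two = stats.ttest_ind(a, b)

# One-tailed test: mean of a is greater than mean of b
t_one, p_one = stats.ttest_ind(a, b, alternative="greater")

print("two-tailed p:", p_two)
print("one-tailed p:", p_one)   # half of p_two when the observed difference is in the predicted direction
```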