Generalized linear models, linear mixed models, generalized linear mixed models, marginal models, GEE models. You’ve probably heard of more than one of them and you’ve probably also heard that each one is an extension of our old friend, the general linear model.
This is true, and they extend our old friend in different ways, particularly in regard to the measurement level of the dependent variable and the independence of the measurements. So while the names are similar (and confusing), the distinctions are important.
It’s important to note here that I am glossing over many, many details in order to give you a basic overview of some important distinctions. These are complicated models, but I hope this overview gives you a starting place from which to explore more. (more…)
You may have noticed conflicting advice about whether to leave insignificant effects in a model or take them out in order to simplify the model.
One effect of leaving in insignificant predictors is on p-values: they use up precious degrees of freedom, which matters in small samples. But if your sample isn't small, the effect is negligible.
The bigger effect is on interpretation, and really the above cases are about whether it aids interpretation to leave them in. Models do get so cluttered it’s hard to figure out what’s going on, and it makes sense to eliminate effects that aren’t serving a purpose, but even insignificant effects can have a purpose. (more…)
It’s really easy to mix up the concepts of association (as measured by correlation) and interaction. Or to assume that if two variables interact, they must be associated. But that’s not actually true.
In statistics, they have different implications for the relationships among your variables. This is especially true when the variables you’re talking about are predictors in a regression or ANOVA model.
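To make the distinction concrete, here is a toy simulation (all numbers and variable names made up, not from the original post): two predictors that are completely unassociated with each other can still interact strongly in a regression model.

```python
# Toy illustration: interaction without association.
import numpy as np

rng = np.random.default_rng(0)

# Two independent (unassociated) binary predictors
x1 = rng.integers(0, 2, size=1000)
x2 = rng.integers(0, 2, size=1000)

# The outcome depends only on the x1*x2 interaction
y = 2.0 * x1 * x2 + rng.normal(0, 0.5, size=1000)

# Association between the predictors: near-zero correlation
r = np.corrcoef(x1, x2)[0, 1]

# Fit y ~ x1 + x2 + x1:x2 by ordinary least squares
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2]).astype(float)
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

print(r)         # near 0: the predictors are not associated
print(coefs[3])  # near 2: the interaction is strong anyway
```

The reverse also holds: two highly correlated predictors can each have purely additive effects, with no interaction at all.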

Association
Association between two variables means the values of one variable relate in some way to the values of the other. It is usually measured by correlation for two continuous variables and by cross tabulation and a Chi-square test for two categorical variables.
Unfortunately, there is no nice, descriptive measure for association between one (more…)
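As a sketch of the two standard measures mentioned above (with made-up data): Pearson correlation for two continuous variables, and a Chi-square statistic from a cross tabulation for two categorical ones.

```python
# Minimal sketch of the two common association measures (hypothetical data).
import numpy as np

# Continuous-continuous: Pearson correlation
height = np.array([150.0, 160, 165, 170, 180, 185])
weight = np.array([50.0, 55, 62, 66, 75, 82])
r = np.corrcoef(height, weight)[0, 1]   # close to 1: strong association

# Categorical-categorical: Chi-square from a 2x2 cross tabulation
# Rows: group A/B; columns: outcome yes/no (hypothetical counts)
observed = np.array([[30.0, 10.0],
                     [20.0, 40.0]])
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()   # counts expected under independence
chi2 = ((observed - expected) ** 2 / expected).sum()

print(r)      # ~0.99
print(chi2)   # ~16.7, large relative to 1 df: strong association
```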
In the world of data analysis, there’s not always one right statistical analysis for every research question.
There are so many issues to take into account. They include the research question to be answered, the measurement of the variables, the study design, data limitations and issues, the audience, practical constraints like software availability, and the purpose of the data analysis.
So what do you do when a reviewer rejects your choice of data analysis, even though you believe it’s right? This reviewer can be your boss, your dissertation committee, a co-author, the PI, or the ubiquitous journal reviewer #2.
What do you do?
There are ultimately two choices: You can redo the analysis their way. Or you can fight for your analysis. How do you choose?
The one absolute in this choice is that you have to honor the integrity of your data analysis and yourself.
Good science matters.
Do not be persuaded to do an analysis that will produce inaccurate or misleading results. Especially when readers will actually make decisions based on these results.
On the other hand, if no one will ever read your report, this is less crucial.
But even within that absolute, there are often choices. Keep in mind the two goals in data analysis:
- The analysis needs to accurately reflect the limits of the design and the data, while still answering the research question. Assumptions have to be reasonably met.
- The analysis needs to communicate the results to the audience.
When to fight for your analysis
So first and foremost, if your reviewer is asking you to do an analysis that does not appropriately take into account the design or the variables, you need to fight.
For example, a number of years ago I worked with a researcher who had a study with repeated measurements on the same individuals. It had a small sample size and an unequal number of observations on each individual.
It was clear that to take into account the design and the unbalanced data, the appropriate analysis was a linear mixed model.
The researcher’s co-author questioned the use of the linear mixed model, mainly because he wasn’t familiar with it. He thought the researcher was attempting something fishy. His suggestion was to use an ad hoc technique of averaging over the multiple observations for each subject.
This was a situation where fighting was worth it.
Unnecessarily simplifying the analysis to please people who were unfamiliar with an appropriate method was not an option. The simpler model would have violated assumptions.
This was particularly important because the research was being submitted to a high-level journal.
So it was the researcher’s job to educate not only his co-author, but also the readers, by explaining the analysis and its advantages, with citations, right in the paper.
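For a toy illustration of why the averaging approach is not innocuous with unbalanced data (the numbers below are made up): when subjects contribute different numbers of observations, averaging within subjects first gives every subject equal weight, no matter how much data each one provides, so the estimate changes.

```python
# Hypothetical unbalanced repeated measures: subject s1 has 4
# observations, subject s2 has only 1.
obs = {
    "s1": [10.0, 12.0, 11.0, 13.0],
    "s2": [20.0],
}

# Pooled mean: every observation weighted equally
all_obs = [x for vals in obs.values() for x in vals]
pooled_mean = sum(all_obs) / len(all_obs)

# Ad hoc approach: average within subject, then across subjects,
# so s2's single observation carries as much weight as s1's four
subject_means = [sum(v) / len(v) for v in obs.values()]
mean_of_means = sum(subject_means) / len(subject_means)

print(pooled_mean)     # 13.2
print(mean_of_means)   # 15.75
```

A linear mixed model sits between these extremes: it weights each subject’s information by its precision, which is one reason it handles unbalanced repeated measures gracefully where the ad hoc averaging does not.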
When to jump through hoops
In contrast, sometimes the reviewer is not really asking for a completely different analysis. They just want a similar analysis that reports different specific statistics.
Let’s use another example with repeated measures data.
In this example, the client had a simple 2 (time: pre, post) × 3 (treatment group) design. Time was within subjects and treatment group was between subjects.
I’ve seen it happen many times that the committee or reviewer #2 is familiar with Repeated Measures ANOVA and wants to see the types of statistics that are easy to compute for ordinary least squares (OLS) statistical models.
Examples include a model R² and effect size statistics like eta².
This is a situation where jumping through the hoop may be easier and produce no ill effects.*
Running the analysis they prefer won’t violate any assumptions or produce inaccurate results. This assumes you have no missing data in the design and it’s reasonable to assume sphericity (both of which were true in this case).
If the reviewer can stop your research in its tracks, it may be worth it to rerun the analysis to get the statistics they want to see reported.
You do have to decide whether the cost of jumping through the hoop, in terms of time, money, and emotional energy, is worth it.
If the request is relatively minor, it usually is. If it’s a matter of rerunning every analysis you’ve done to indulge a committee member’s pickiness, it may be worth standing up for yourself and your analysis.
*A note on these examples: there are many other design issues that can make a repeated measures ANOVA not work and a fight worth it. You can read about them here.
How to fight for your analysis
When you’re dealing with anonymous reviewers, the situation can get sticky. After all, you cannot ask them to clarify their concerns. And you have limited opportunities to explain the reasons for choosing your analysis.
It may be harder to discern whether they are being overly picky, don’t understand the statistics themselves, or have a valid point.
If you choose to stand up for yourself, be well armed. Research the issue until you are absolutely confident in your approach (or until you’re convinced that you were missing something).
A few hours in the library or talking with a trusted expert is never a wasted investment. Compare that to running an unpublishable analysis to please a committee member or co-author.
Often, the problem is actually not about whether you did the right statistical analysis, but in the way you explained it. It’s your job to explain your analysis with enough detail and the correct wording. You’ll need to include why the analysis is appropriate and, if it’s unfamiliar to readers, what it does.
Rewrite that section, making it very clear. Ask someone to review it. Cite other research that uses or explains that statistical method.
Whatever you choose, be confident that you made the right decision, then move on.
Last month I did a webinar on Poisson and negative binomial models for count data. With a few hundred participants, we ran out of time to get through all the questions, so I’m answering some of them here on the blog.
This set of questions is all related to when it’s appropriate to treat count data as continuous and run the more familiar, simpler linear model.
Q: Do you have any guidelines or rules of thumb as far as how many discrete values an outcome variable can take on before it makes more sense to just treat it as continuous?
The issue usually isn’t a matter of how many values there are. (more…)
It seems every editor and her brother these days wants to see standardized effect size statistics reported in journal articles.
For ANOVAs, two of the most popular are Eta-squared and partial Eta-squared. In one-way ANOVAs they come out the same, but in more complicated models their values, and their meanings, differ.
SPSS only reports partial Eta-squared, and in earlier versions of the software it was (unfortunately) labeled Eta-squared. More recent versions have fixed the label, but still don’t offer Eta-squared as an option.
Luckily Eta-squared is very simple to calculate yourself based on the sums of squares in your ANOVA table. I’ve written another blog post with all the formulas. You can (more…)
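As a quick sketch of that calculation (the sums of squares below are made up for illustration): Eta-squared divides the effect sum of squares by the total sum of squares, while partial Eta-squared divides it by the effect plus error sums of squares.

```python
# Eta-squared and partial eta-squared from an ANOVA table's sums of squares.
def eta_squared(ss_effect, ss_total):
    """SS_effect / SS_total: proportion of total variance explained."""
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    """SS_effect / (SS_effect + SS_error): other effects partialed out."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical two-way ANOVA table: factors A and B plus error
ss_a, ss_b, ss_error = 40.0, 10.0, 50.0
ss_total = ss_a + ss_b + ss_error

print(eta_squared(ss_a, ss_total))          # 0.4
print(partial_eta_squared(ss_a, ss_error))  # ~0.444
```

In a one-way ANOVA there is only one effect, so SS_total = SS_effect + SS_error and the two statistics coincide, which is why the difference only shows up in more complicated models.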