One important yet difficult skill in statistics is choosing the right type of model for different data situations. One key consideration is the dependent variable.

For linear models, the dependent variable doesn’t have to be normally distributed, but it does have to be continuous, unbounded, and measured on an interval or ratio scale.

Percentages don’t fit these criteria. Yes, they’re continuous and ratio scale. The issue is the boundaries at 0 and 100.

Likewise, counts have a boundary at 0 and are discrete, not continuous. The general advice is to analyze these with some variety of a Poisson model.

Yet there is a very specific type of variable that can be considered either a count or a percentage, but has its own specific distribution.

This is a variable that indicates the number of successes out of N trials.

Here are a few examples that I’ve seen recently in consulting:

- The percentage of buds on a tree that opened.
- The number of errors a bilingual speaker made on a grammar test.
- The percentage of employees a manager would recommend for a promotion under different conditions.

If you think about it, you can consider any of these to be either a percentage or a count.

If a tree has 820 buds and 453 open, we could either consider that a count of 453 or a percentage of 55.2%. But they’re both measuring this same idea of 453 out of 820 possible openings.
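To make the equivalence concrete, here's the arithmetic for that tree, as a quick sketch:

```python
# Count form and percentage form of the same measurement
opened, total = 453, 820

count = opened                      # 453 successes
percentage = 100 * opened / total   # the same information as a percentage

print(count, round(percentage, 1))  # 453 55.2
```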

There isn’t a name for it, but I think of this type of percentage as a discrete percentage.

It’s a little bit different than a percentage of a mass quantity, like the percentage of the area of a Petri dish that is covered with mold. In the Petri dish example, there aren’t discrete trials, each of which could be a success or a failure. It’s just one mass and some measurable percentage of it is covered with mold.

A variable that measures the number of successes out of N trials follows a binomial distribution, which is simply the sum of a set of success/failure trials.

The binary outcome variable that we generally use for logistic regression is *one* of these trials. It follows a Bernoulli distribution. The variable has one trial with two possible outcomes for each individual: success or failure.

(Yes, this is very confusing. Many books, software, and statisticians describe the one-trial binary situation as a binomial distribution. I’ve surely done it myself. They’re related but not exactly the same).
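To see the relationship between the two distributions, here's a small simulation sketch using NumPy, with a made-up opening probability: summing N independent Bernoulli draws gives exactly one binomial draw.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.55        # assumed probability that any one bud opens
n_trials = 820  # buds on the tree

# Bernoulli: a single success/failure trial (0 or 1)
one_trial = rng.binomial(n=1, p=p)

# Binomial: the sum of n_trials independent Bernoulli trials
bernoulli_draws = rng.binomial(n=1, p=p, size=n_trials)
binomial_count = int(bernoulli_draws.sum())

# Drawing from the binomial directly is equivalent in distribution
direct_count = rng.binomial(n=n_trials, p=p)
```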

As it turns out, logistic regression can handle either a Bernoulli variable with one trial per subject or a binomial variable with N trials per subject.

*The key parameter in both distributions is p, the probability of success on each trial.*

That’s what we’re trying to predict in a logistic regression with our predictor variables. For example, does the tree’s altitude predict the probability that any given bud will open?

Pretty much every stat software offers both options as the dependent variable for a logistic regression, but the binomial form isn't always easy to find. In SAS, it's easy: the MODEL statement in PROC LOGISTIC allows either, calling them the single-trial syntax and the events/trials syntax.

But in SPSS, the Logistic Regression procedure can only run the single-trial Bernoulli form. To run the events-and-trials binomial form, you need to use the Generalized Linear Models procedure. There you can specify a logistic link and a binomial distribution.

Now, why not treat it as a count variable and run a Poisson or negative binomial model?

Well… there is a relationship between the binomial and the Poisson distributions: as the number of trials gets large and the probability of success gets small, the binomial distribution approaches a Poisson.
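As a rough illustration of that approximation (with arbitrary n and p), compare the two probability mass functions with SciPy when n is large and p is small:

```python
from scipy import stats

n, p = 1000, 0.005       # many trials, rare success
lam = n * p              # matching Poisson mean

for k in (0, 2, 5, 10):
    b = stats.binom.pmf(k, n, p)
    q = stats.poisson.pmf(k, lam)
    print(k, round(b, 4), round(q, 4))  # the two columns nearly match
```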

So think about how many trials you actually have and the overall proportion of successes to decide which approach might fit better.

Ryan says

How do you interpret the coefficient for a continuous predictor variable in a logistic regression with N trials as the outcome? For example, let’s say we fit a logistic regression with the outcome as proportion of buds opening on a tree and input variable as the age of the tree. What is the interpretation of the effect of age?

Karen Grace-Martin says

It’s the same as if the data were binary outcomes for each bud. Convert the coefficient to an odds ratio, which tells you the factor by which the odds of any given bud opening change for each additional unit of age.
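For instance, with a made-up fitted coefficient for age:

```python
import math

beta_age = 0.18  # hypothetical log-odds coefficient for tree age

odds_ratio = math.exp(beta_age)
# Each extra unit of age multiplies the odds that any given
# bud opens by this factor
print(round(odds_ratio, 3))  # 1.197
```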

Adele says

What if the percentage is not of a discrete quantity, such as time (i.e. time spent in one area versus another)?

Karen Grace-Martin says

Then you use a beta regression.

Meghan Caulfield says

If you interpret the odds ratio from a logistic regression of one trial as the relative odds between two groups, how do you interpret the odds ratio from a logistic regression of N trials per subject? Thanks.

Karen Grace-Martin says

Hi Meghan,

It’s essentially the same. Let’s use the example of the buds opening on the tree, where the two groups are trees with pruning and trees without. The odds ratio for the Pruning variable gives the odds of any given bud opening on a pruned tree relative to the odds on an unpruned tree.