Oh so many years ago I had my first insight into just how ridiculously confusing all the statistical terminology can be for novices.
I was TAing a two-semester applied statistics class for graduate students in biology. It started with basic hypothesis testing and went on through to multiple regression.
It was a cross-listed class, meaning there were a handful of courageous (or masochistic) undergrads in the class, and they were having trouble keeping up with the ambitious graduate-level pace.
I remember one day in particular in the discussion section I was leading when one of the poor undergrads was hopelessly lost. We were talking about the simple regression coefficient (beta) and the intercept (which the text we were using chose to call alpha, instead of the more familiar beta-naught).
It was only after repeated probing that I realized she was logically trying to fit it into the concepts of alpha and beta that we had already taught her–Type I and Type II errors in hypothesis testing.
Entirely. Different. Concepts.
With the same names.
Once I realized the source of the error, I was able to explain that we were using the same terminology for entirely different concepts.
But as it turns out, there are even more meanings of both alpha and beta. Here they are:
As I already mentioned, the definitions most learners of statistics encounter first for alpha and beta come from hypothesis testing.
Alpha is the probability of Type I error in any hypothesis test–incorrectly claiming statistical significance.
Beta is the probability of Type II error in any hypothesis test–incorrectly concluding no statistical significance. (And 1 – beta is the power of the test.)
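If it helps to see these two error rates in action, here's a quick simulation sketch in Python. The settings (a one-sample z-test of a mean of 0, a sample size of 30, a true effect of 0.5) are all made up for illustration: rejecting when the null is true estimates alpha, and rejecting when the effect is real estimates power, which is 1 – beta.

```python
import random
import statistics

# Made-up simulation settings: a two-sided one-sample z-test of
# H0: mu = 0, known sigma = 1, n = 30, at the 0.05 level (z_crit = 1.96).
random.seed(0)

def z_test_rejects(mu, n=30, sigma=1.0, z_crit=1.96):
    """Draw one sample from N(mu, sigma) and report whether the test rejects H0."""
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    z = statistics.mean(sample) / (sigma / n ** 0.5)
    return abs(z) > z_crit

reps = 5000
# With mu = 0, H0 is true, so the rejection rate estimates alpha (Type I error).
alpha_hat = sum(z_test_rejects(0.0) for _ in range(reps)) / reps
# With mu = 0.5, H0 is false, so the rejection rate estimates power = 1 - beta.
power_hat = sum(z_test_rejects(0.5) for _ in range(reps)) / reps

print(round(alpha_hat, 3))  # should hover near the nominal 0.05
print(round(power_hat, 3))  # well above 0.05 for this effect size
```

The point of the sketch is just that alpha and beta here are properties of the *test*, not coefficients of any model–which is exactly where my poor undergrad got tangled up.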
In most textbooks and software packages, the population regression coefficients are denoted by beta. Like all population parameters, they are theoretical–we don’t know what they are. The regression coefficients we estimate from our sample are statistical estimates of those parameter values. Most parameters are denoted with Greek letters and statistics with the corresponding Latin letters.
Most texts refer to the intercept as β0 (beta-naught–and yes, that’s the closest I can get to a subscript) and every other regression coefficient as β1, β2, β3, etc. But as I already mentioned, some statistics texts will refer to the intercept as alpha, to distinguish it from the other coefficients.
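To make the parameter-versus-statistic distinction concrete, here's a small simulated example (all the numbers are invented). We set the "unknowable" population values β0 = 2 and β1 = 0.5, generate noisy data from them, and then compute the sample estimates b0 and b1 with the usual least-squares formulas:

```python
import random

random.seed(1)
beta0_true, beta1_true = 2.0, 0.5   # population parameters (normally unknown)

# Simulate a sample: y = beta0 + beta1 * x + noise
x = [float(i) for i in range(100)]
y = [beta0_true + beta1_true * xi + random.gauss(0, 1) for xi in x]

# Ordinary least squares estimates (the statistics)
xbar = sum(x) / len(x)
ybar = sum(y) / len(y)
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar   # the intercept: beta-naught (or "alpha" in some texts)

print(round(b0, 2), round(b1, 2))  # close to, but not equal to, 2.0 and 0.5
```

The estimates land near the parameters but don't equal them–which is the whole Greek-letters-for-parameters, Latin-letters-for-statistics convention in one picture.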
Standardized Regression Coefficients
But, for some reason, SPSS labels standardized regression coefficient estimates as Beta, despite the fact that they are statistics–measured on the sample, not the population.
And I can’t verify this, but I vaguely recall that Systat uses the same term. If you have Systat and can verify or negate this claim, feel free to do so in the comments.
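For anyone unsure what a standardized coefficient actually is: it's the slope you'd get if you converted both x and y to z-scores first, or equivalently the raw slope times sd(x)/sd(y). Here's a quick Python sketch with invented data showing the two routes give the same number:

```python
import random
import statistics

random.seed(2)
# Invented data: y = 3 + 1.5*x + noise
x = [random.gauss(10, 2) for _ in range(200)]
y = [3.0 + 1.5 * xi + random.gauss(0, 4) for xi in x]

def slope(xs, ys):
    """Least-squares slope of ys on xs."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
            / sum((a - xbar) ** 2 for a in xs))

# Route 1: rescale the raw slope by the standard deviations.
sx, sy = statistics.stdev(x), statistics.stdev(y)
beta_std = slope(x, y) * sx / sy

# Route 2: standardize the variables first, then fit.
mx, my = statistics.fmean(x), statistics.fmean(y)
zx = [(a - mx) / sx for a in x]
zy = [(b - my) / sy for b in y]
beta_std_z = slope(zx, zy)

print(round(beta_std, 4), round(beta_std_z, 4))  # identical
```

Either way, it's a statistic computed from the sample–so labeling it with a Greek letter, as SPSS does, only adds to the confusion.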
Another, completely separate use of alpha is Cronbach’s alpha, aka Coefficient Alpha, which measures the reliability of a scale. It’s a very useful little statistic, but should not be confused with either of the other uses of alpha.
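For the curious, Cronbach's alpha for a k-item scale is k/(k−1) times (1 − the sum of the item variances divided by the variance of the total score). Here's a tiny Python sketch with made-up ratings from five respondents on a three-item scale:

```python
import statistics

# Made-up data: each row is one item's scores across five respondents.
items = [
    [4, 3, 5, 2, 4],  # item 1
    [3, 3, 4, 2, 5],  # item 2
    [4, 2, 5, 1, 4],  # item 3
]
k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
sum_item_var = sum(statistics.variance(item) for item in items)
alpha = k / (k - 1) * (1 - sum_item_var / statistics.variance(totals))

print(round(alpha, 3))  # high alpha: the items move together
```

Note that this alpha is a sample statistic describing internal consistency–nothing to do with Type I error rates or intercepts.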
If you’d like to learn more about power and sample size estimates, take a look at our online workshop: Calculating Power and Sample Size. We’ll go over the logic, the info you need, where to get it, how to do the steps, and how to use power software to get good estimates. We’ll also go over what these estimates really tell you, and what they don’t.