How to Get Standardized Regression Coefficients When Your Software Doesn’t Want To Give Them To You

by Karen Grace-Martin


Standardized regression coefficients remove the unit of measurement of predictor and outcome variables.  They are sometimes called betas, but I don’t like to use that term because there are too many other, and too many related, concepts that are also called beta.

There are many good reasons to report them:

  • They serve as standardized effect size statistics.
  • They allow you to compare the relative effects of predictors measured on different scales.
  • They make journal editors and committee members happy in fields where they are commonly reported.

If you use a regression procedure in most software, standardized regression coefficients are reported by default, or are at least an easy option.

But there are times you need to use some procedure that won’t compute standardized coefficients for you.

Often it makes more sense to use a general linear model procedure to run regressions.  But the GLM procedures in SAS and SPSS don't give standardized coefficients.

Likewise, you won’t get standardized regression coefficients reported after combining results from multiple imputation.

Luckily, there’s a way to get around it.

A standardized coefficient is the same as an unstandardized coefficient between two standardized variables. We often learn to standardize the coefficient itself because that's the shortcut.  But implicitly, it's the equivalence to the coefficient between standardized variables that gives a standardized coefficient its meaning.

So all you have to do to get standardized coefficients is standardize your predictors and your outcome.
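
Here's a minimal sketch of that equivalence in Python (simulated data, numpy only; the numbers are made up just for illustration). The slope between the two z-scored variables comes out the same as the usual shortcut of multiplying the unstandardized slope by the ratio of the standard deviations.

# A minimal sketch: the slope between z-scored variables equals b * (sd_x / sd_y).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)                # predictor on an arbitrary scale
y = 3 + 0.8 * x + rng.normal(0, 5, 200)    # outcome

b = np.polyfit(x, y, 1)[0]                 # unstandardized slope
shortcut = b * x.std(ddof=1) / y.std(ddof=1)

zx = (x - x.mean()) / x.std(ddof=1)        # standardize both variables
zy = (y - y.mean()) / y.std(ddof=1)
b_std = np.polyfit(zx, zy, 1)[0]           # slope between the z-scores

print(shortcut, b_std)                     # the two values agree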


The Steps

Remember all those Z-scores you had to calculate in Intro Stats?  It wasn’t the useless exercise you thought it was at the time.

Converting a variable to a Z-score is standardizing.

In other words, do these steps for Y, your outcome variable, and for each of your predictors, the Xs:

1. Calculate the mean and standard deviation.

2. Create a new standardized version of each variable.  To get it, create a new variable in which you subtract the mean from the original value, then divide that by the standard deviation.

3. Use those standardized versions in the regression.
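
If you work in something like Python, here's a sketch of what those three steps can look like (pandas and statsmodels, with simulated data; the variable names y, x1, and x2 are made up, so substitute your own):

# A sketch of the three steps.  The data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(100, 15, 150),
                   "x2": rng.normal(0, 2, 150)})
df["y"] = 5 + 0.3 * df["x1"] + 4 * df["x2"] + rng.normal(0, 10, 150)

# Steps 1 and 2: compute each variable's mean and standard deviation,
# then create the standardized (z-scored) versions.
z = (df - df.mean()) / df.std(ddof=1)

# Step 3: run the regression on the standardized variables.
X = sm.add_constant(z[["x1", "x2"]])
fit = sm.OLS(z["y"], X).fit()
print(fit.params)   # the slopes are the standardized coefficients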


Could this take a while?  Yup.

But if that’s what the journal requires you to report, just do it.


A nice advantage is that you can apply it, at least partially, even in regression models that can't usually accommodate standardized regression coefficients.

For example, in a logistic regression it doesn’t make sense to standardize Y because it’s categorical.  But you can standardize all your Xs to get rid of their units.

You can then interpret your odds ratios in terms of one standard deviation increases in each X, rather than one-unit increases.
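
Here's a rough sketch of that idea in Python (statsmodels, with simulated data and made-up variable names): standardize only the predictors, fit the logistic regression, and exponentiate the coefficients to get odds ratios per one-standard-deviation increase.

# A sketch: standardize only the Xs; the binary outcome stays as-is.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
x1 = rng.normal(200, 40, 300)                       # predictor on its own scale
x2 = rng.normal(10, 3, 300)
p = 1 / (1 + np.exp(-(-5 + 0.02 * x1 + 0.1 * x2)))
y = rng.binomial(1, p)                              # binary outcome, left unstandardized

X = pd.DataFrame({"x1": x1, "x2": x2})
zX = (X - X.mean()) / X.std(ddof=1)                 # z-score the predictors only

fit = sm.Logit(y, sm.add_constant(zX)).fit(disp=0)
print(np.exp(fit.params))   # odds ratios per one-SD increase in each predictor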

Learn more about the ins and outs of interpreting regression coefficients in our new On Demand workshop: Interpreting (Even Tricky) Regression Coefficients.
