*by David Lillis, Ph.D.*

Ordinary Least Squares regression provides linear models of continuous variables. However, many variables of interest to statisticians and researchers are not continuous, so other methods must be used to create useful predictive models.

The glm() function fits generalized linear models: regressions for binary outcome data, count data, probability data, proportion data, and many other data types.

In this blog post, we explore the use of R’s glm() command on one such data type. Let’s take a look at a simple example where we model binary data.

In the mtcars data set, the variable vs indicates whether a car has a V engine (vs = 0) or a straight engine (vs = 1).
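Before fitting anything, it can help to take a quick look at the variables we will use. This is just an exploratory check, not part of the modeling itself:

```r
# mtcars ships with R, so no loading of external data is required.
# vs is coded 0/1, wt is weight in 1000s of lbs, disp is displacement
# in cubic inches.
head(mtcars[, c("vs", "wt", "disp")])

# How many cars of each engine type are in the data?
table(mtcars$vs)
```
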

We want to create a model that helps us to predict the probability of a vehicle having a V engine or a straight engine given a weight of 2100 lbs and engine displacement of 180 cubic inches.

First we fit the model. We use the glm() function, include the variables in the usual way, and specify a binomial error distribution, as follows:

```r
model <- glm(formula = vs ~ wt + disp, data = mtcars, family = binomial)
summary(model)
```

```
Call:
glm(formula = vs ~ wt + disp, family = binomial, data = mtcars)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-1.67506  -0.28444  -0.08401   0.57281   2.08234  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)  
(Intercept)  1.60859    2.43903   0.660    0.510  
wt           1.62635    1.49068   1.091    0.275  
disp        -0.03443    0.01536  -2.241    0.025 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 43.86  on 31  degrees of freedom
Residual deviance: 21.40  on 29  degrees of freedom
AIC: 27.4

Number of Fisher Scoring iterations: 6
```

We see from the estimates of the coefficients that *weight* influences *vs* positively, while *displacement* has a slightly negative effect.
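Because the binomial family uses the logit link by default, these coefficients are on the log-odds scale. Exponentiating them gives odds ratios, which are often easier to interpret; this is a standard transformation, not something the summary above reports directly:

```r
# Refit the model from above.
model <- glm(vs ~ wt + disp, data = mtcars, family = binomial)

# Odds ratios: the multiplicative change in the odds of vs = 1
# for a one-unit increase in each predictor, holding the other fixed.
exp(coef(model))
```

Here exp() of the disp coefficient is slightly below 1, matching the small negative effect of displacement noted above.
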

The model output is somewhat different from that of an ordinary least squares model. I will explain the output in more detail in the next article, but for now, let’s continue with our calculations.

Remember, our goal here is to calculate a predicted probability that vs = 1 (a straight engine), for specific values of the predictors: a weight of 2100 lbs and engine displacement of 180 cubic inches.

To do that, we create a data frame called newdata, in which we include the desired values for our prediction. Note that wt in mtcars is recorded in thousands of pounds, so a weight of 2100 lbs corresponds to wt = 2.1.

```r
newdata <- data.frame(wt = 2.1, disp = 180)
```

Now we use the predict() function to calculate the predicted probability. We include the argument type = "response" so that predict() returns a probability rather than a value on the log-odds scale.

```r
predict(model, newdata, type = "response")
```

```
0.2361081
```

The predicted probability is 0.24.
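We can verify this number by hand. With type = "response", predict() applies the inverse logit to the linear predictor, so the same value comes from plugging our predictor values into the fitted coefficients and passing the result to plogis():

```r
model <- glm(vs ~ wt + disp, data = mtcars, family = binomial)

# Linear predictor (log-odds) at wt = 2.1, disp = 180.
eta <- coef(model)[["(Intercept)"]] +
  coef(model)[["wt"]] * 2.1 +
  coef(model)[["disp"]] * 180

# Inverse logit: 1 / (1 + exp(-eta)), i.e. the predicted probability.
plogis(eta)
```

This reproduces the 0.2361081 returned by predict() above.
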

That wasn’t so hard! In our next article, I will explain more about the output we got from the glm() function.

**About the Author:** *David Lillis has taught R to many researchers and statisticians. His company, Sigma Statistics and Research Limited, provides both on-line instruction and face-to-face workshops on R, and coding services in R. David holds a doctorate in applied statistics.*

**Interested in learning more? Check out our free webinar recording on Understanding Probability, Odds, and Odds Ratios in Logistic Regression.**
