One great thing about logistic regression, at least for those of us who are trying to learn how to use it, is that the predictor variables work exactly the same way as they do in linear regression. Dummy coding, interactions, quadratic terms--they all work the same way.
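The point above can be seen directly in R (a minimal sketch with simulated data — the variable names are made up for illustration): the right-hand side of a logistic model is written exactly as in lm(); only the family changes.

```r
# Hypothetical data: a continuous predictor and a two-level factor
set.seed(42)
d <- data.frame(
  x     = rnorm(100),
  group = factor(sample(c("a", "b"), 100, replace = TRUE))
)
d$y <- rbinom(100, 1, plogis(0.5 * d$x))

# Dummy coding (group), an interaction (x:group), and a quadratic
# term (I(x^2)) are specified just as they would be in lm()
fit <- glm(y ~ x * group + I(x^2), data = d, family = binomial)
summary(fit)
```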
Principal Component Analysis (PCA) is a handy statistical tool to always have available in your data analysis tool belt. It's a data reduction technique, which means it's a way of capturing the variance in many variables in a smaller, easier-to-work-with set of variables. There are many, many details involved, though, so here are a few things to remember as you run your PCA.
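As a quick sketch of the data-reduction idea, here is PCA in base R on a built-in data set (USArrests is used only as a convenient example; scale. = TRUE standardizes the variables, which matters when they are measured on different scales):

```r
# PCA on the four crime-rate variables in USArrests
pca <- prcomp(USArrests, scale. = TRUE)
summary(pca)   # proportion of variance captured by each component
head(pca$x)    # the component scores: the smaller set of new variables
```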
MANOVA is the multivariate (meaning multiple dependent variables) version of ANOVA, but there are many misconceptions about it. In this webinar, you’ll learn:
- When to use MANOVA and when you’d be better off using individual ANOVAs
- How to follow up the overall MANOVA results to interpret them
- What those strange statistics mean: Wilks’ lambda and Roy’s Greatest Root (hint: it’s not a carrot)
- Its relationship to discriminant analysis
In Part 3 and Part 4 we used the lm() command to perform least squares regressions. We saw how to check for non-linearity in our data by fitting polynomial models and checking whether they fit the data better than a linear model. Now let’s see how to fit an exponential model in R.
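As a minimal sketch with simulated data (not the data set from the post), there are two common routes to fitting y = a * exp(b * x): linearize with log(y) and reuse lm(), or fit on the original scale with nls():

```r
# Simulated exponential decay with small multiplicative noise
set.seed(1)
x <- 1:30
y <- 50 * exp(-0.1 * x) * exp(rnorm(30, sd = 0.05))

# Route 1: regress log(y) on x with lm(), as in the earlier parts
log_fit <- lm(log(y) ~ x)
exp(coef(log_fit)[1])   # estimate of a
coef(log_fit)[2]        # estimate of b

# Route 2: nonlinear least squares on the original scale
nls_fit <- nls(y ~ a * exp(b * x), start = list(a = 50, b = -0.1))
coef(nls_fit)
```

The two routes optimize different error structures (multiplicative vs. additive noise), so their estimates will differ slightly.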
One important consideration in choosing a missing data approach is the missing data mechanism—different approaches have different assumptions about the mechanism. Each of the three mechanisms describes one possible relationship between the propensity of data to be missing and values of the data, both missing and observed.
Graphing predicted values from a regression model or means from an ANOVA makes interpretation of results much easier. Every statistical software package will graph predicted values for you. But the more complicated your model, the harder it can be to get the graph you want in the format you want. Excel isn’t all that useful for estimating statistics, but it has some very nice features for data analysis, one of which is graphing.
In all linear regression models, the intercept has the same definition: the mean of the response, Y, when all predictors (all the Xs) equal 0.
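A tiny illustration of this definition (simulated data, made-up variable names): with every predictor set to 0, the model's fitted value is exactly the intercept.

```r
# Two arbitrary predictors and a response
set.seed(7)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- 2 + 3 * d$x1 - d$x2 + rnorm(50)

fit <- lm(y ~ x1 + x2, data = d)

# Predicted mean of y at x1 = 0 and x2 = 0 equals the intercept
predict(fit, newdata = data.frame(x1 = 0, x2 = 0))
coef(fit)[1]
```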
We will use a data set of counts (atomic disintegration events that take place within a radiation source), taken with a Geiger counter at a nuclear plant. The counts were registered over a 30-second period for a short-lived, man-made radioactive compound. We read in the data and subtract the background count of 623.4 counts per second in order to obtain the counts that pertain to the radioactive source. Cut and paste the following data into your R workspace.
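The original data are not reproduced here, but with a placeholder vector of raw counts the background correction described above is a single vectorized subtraction:

```r
# Hypothetical raw counts -- NOT the values from the post
raw_counts <- c(1500, 1400, 1300)

# Background count rate stated in the post
background <- 623.4

# Counts attributable to the radioactive source
counts <- raw_counts - background
counts
```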
Hierarchical regression is a very common approach to model building that allows you to see the incremental contribution to a model of sets of predictor variables. Popular for linear regression in many fields, the approach can be used in any type of regression model — logistic regression, linear mixed models, or even ANOVA. In this webinar, we’ll go over the concepts and steps, and we’ll look at how it can be useful in different contexts.
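One common way to carry out these steps in R (a sketch with simulated data and made-up variable names): fit nested models that add one block of predictors at a time, then compare them to test the incremental contribution of each block.

```r
# Hypothetical data: two control variables and one focal predictor
set.seed(3)
d <- data.frame(age = rnorm(80), iq = rnorm(80), hours = rnorm(80))
d$score <- 1 + 0.5 * d$age + 0.8 * d$hours + rnorm(80)

step1 <- lm(score ~ age + iq, data = d)           # block 1: controls only
step2 <- lm(score ~ age + iq + hours, data = d)   # block 2: add focal predictor

# F test of the change in R-squared between the nested models
anova(step1, step2)
```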
Having them all in the Variable View window makes things incredibly easy while you're doing your analysis. But sometimes you need to just print them all out--to create a code book for another analyst or to include in the output you're sending to a collaborator. Or even just to print them out for yourself for easy reference.