guest contributor

Member Training: Adjustments for Multiple Testing: When and How to Handle Multiplicity

May 3rd, 2018
A research study rarely involves just one statistical test. And multiple testing can result in more statistically significant findings just by chance.

After all, with the typical Type I error rate of 5% used in most tests, we are allowing ourselves to “get lucky” 1 in 20 times for each test. When you figure out the probability of at least one Type I error across all the tests, that probability skyrockets.
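You can compute that inflation directly. A minimal sketch (the function names are mine, not from the training), including the Bonferroni correction as one common remedy:

```python
# Probability of at least one Type I error (the familywise error rate)
# across m independent tests, each run at significance level alpha.
def familywise_error(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

# One common fix: the Bonferroni correction, which tests each hypothesis
# at alpha / m so the familywise rate stays at or below alpha.
def bonferroni_alpha(m, alpha=0.05):
    return alpha / m

print(familywise_error(20))  # with 20 tests at 5%, roughly a 64% chance
```

With 20 tests, the chance of at least one spurious "significant" result is about 0.64, not 0.05.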
(more…)


Member Training: Equivalence Tests and Non-Inferiority

April 2nd, 2018
Statistics is, to a large extent, a science of comparison. You are trying to test whether one group is bigger, faster, or smarter than another.
 
You do this by setting up a null hypothesis that your two groups have equal means or proportions and an alternative hypothesis that one group is “better” than the other. The test has interesting results only when the data you collect ends up rejecting the null hypothesis.
 
But there are times when the interesting research question you’re asking is not about whether one group is better than the other, but whether the two groups are equivalent.
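One standard approach to that question is the TOST (two one-sided tests) procedure. Here is a minimal sketch of its logic using a normal approximation; the function and the equivalence margin `delta` are illustrative, not the training's own code:

```python
import math

def tost_equivalence(mean_diff, se, delta, alpha=0.05):
    """Conclude equivalence if the observed difference is significantly
    above -delta AND significantly below +delta (two one-sided tests)."""
    def norm_cdf(z):  # standard normal CDF
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    p_lower = 1 - norm_cdf((mean_diff + delta) / se)  # H0: diff <= -delta
    p_upper = norm_cdf((mean_diff - delta) / se)      # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha
```

A small observed difference with a small standard error passes the test; the same difference estimated noisily does not, because the data cannot rule out a difference larger than the margin.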

(more…)


Member Training: Using Transformations to Improve Your Linear Regression Model

March 5th, 2018

Transformations don’t always help, but when they do, they can improve your linear regression model in several ways simultaneously.

They can help you better meet the linear regression assumptions of normality and homoscedasticity (i.e., equal variances). They can also help avoid some of the artifacts caused by boundary limits in your dependent variable — and sometimes even remove a difficult-to-interpret interaction.
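As a sketch of the variance-stabilizing effect, consider a simulated response with multiplicative error (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 500)

# Multiplicative error: y is right-skewed and its variance grows with
# its mean, violating normality and homoscedasticity on the raw scale.
y = np.exp(0.5 * x + rng.normal(0, 0.5, size=500))

# After a log transformation the model is linear with constant-variance
# errors: log(y) = 0.5 * x + noise. A straight-line fit recovers the slope.
slope, intercept = np.polyfit(x, np.log(y), 1)
```

Fitting a straight line to the raw `y` would show fanning residuals; on the log scale the same fit meets the standard assumptions.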

(more…)


Member Training: A Data Analyst’s Guide to Methods and Tools for Reproducible Research

November 1st, 2017

Have you ever experienced befuddlement when you dust off a data analysis that you ran six months ago? 

Ever gritted your teeth when your collaborator invalidates all your hard work by telling you that the data set you were working on had “a few minor changes”?

Or panicked when someone running a big meta-analysis asks you to share your data?

If any of these experiences rings true to you, then you need to adopt the philosophy of reproducible research.

(more…)


Member Training: A Quick Introduction to Weighting in Complex Samples

October 3rd, 2017

A few years back the winning t-shirt design in a contest for the American Association of Public Opinion Research read “Weighting is the Hardest Part.” And I don’t think the t-shirt was referring to anything about patience!

Most statistical methods assume that every individual in the sample has the same chance of selection.

Complex sample surveys are different: they use multistage sampling designs that involve stratification and cluster sampling. As a result, selected units do not all have the same chance of selection.

To get statistical estimates that accurately reflect the population, cases in these samples need to be weighted. Otherwise, all statistical estimates and their standard errors will be biased.
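As a toy illustration, a design weight is simply the inverse of a case's selection probability, and weighted estimates use it to restore the population balance (all numbers invented):

```python
import numpy as np

# Hypothetical sample: the first stratum was sampled at a 50% rate,
# the second (rare in the sample, common in the population) at 10%.
values   = np.array([10.0, 12.0, 11.0, 30.0, 28.0])
p_select = np.array([0.5, 0.5, 0.5, 0.1, 0.1])

weights = 1.0 / p_select          # design weight = 1 / P(selection)

unweighted_mean = values.mean()                      # ignores the design
weighted_mean = np.average(values, weights=weights)  # population estimate
```

The unweighted mean (18.2) understates the population value because the high-value stratum was undersampled; the weighted mean (about 24.8) corrects for that.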

But selection probabilities are only part of weighting. (more…)


Member Training: Quantile Regression: Going Beyond the Mean

September 1st, 2017

In your typical statistical work, chances are you have already used quantiles such as the median, 25th or 75th percentiles as descriptive statistics.

But did you know quantiles are also valuable in regression, where they can answer a broader set of research questions than standard linear regression?

In standard linear regression, the focus is on estimating the mean of a response variable given a set of predictor variables.

In quantile regression, we can go beyond the mean of the response variable. Instead, we can understand how predictor variables relate to (1) the entire distribution of the response variable or (2) one or more relevant features (e.g., center, spread, shape) of that distribution.

For example, quantile regression can help us understand not only how age predicts the mean or median income, but also how age predicts the 75th or 25th percentile of the income distribution.

Or we can see how the inter-quartile range — the width between the 75th and 25th percentile — is affected by age. Perhaps the range becomes wider as age increases, signaling that an increase in age is associated with an increase in income variability.
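What makes this possible is that each quantile corresponds to its own loss function: just as the mean minimizes squared error, the q-th quantile minimizes the “pinball” (check) loss. A small numpy sketch (not the webinar's code, with made-up income-like data) confirming this for an intercept-only model:

```python
import numpy as np

def pinball_loss(y, pred, q):
    """Check (pinball) loss; its minimizer is the q-th quantile of y."""
    e = y - pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

rng = np.random.default_rng(0)
y = rng.normal(50, 10, 10_000)   # hypothetical income-like values

# Grid-search the constant that minimizes the 0.75-pinball loss:
grid = np.linspace(20, 80, 2001)
best = grid[np.argmin([pinball_loss(y, c, 0.75) for c in grid])]

# `best` lands (up to grid resolution) on the empirical 75th percentile.
```

Quantile regression replaces the constant with a linear function of the predictors and minimizes this same loss, which is how it estimates, say, the 75th percentile of income conditional on age.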

In this webinar, we will help you become familiar with the power and versatility of quantile regression by discussing topics such as:

  • Quantiles – a brief review of their computation, interpretation and uses;
  • Distinction between conditional and unconditional quantiles;
  • Formulation and estimation of conditional quantile regression models;
  • Interpretation of results produced by conditional quantile regression models;
  • Graphical displays for visualizing the results of conditional quantile regression models;
  • Inference and prediction for conditional quantile regression models;
  • Software options for fitting quantile regression models.

Join us on this webinar to understand how quantile regression can be used to expand the scope of research questions you can address with your data.


Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.

(more…)