Smoothing can assist data analysis by highlighting important trends and revealing long-term movements in a time series that might otherwise be hard to see.
Many data smoothing techniques have been developed, each of which may be useful for particular kinds of data and in specific applications. David will give an introductory overview of the most common smoothing methods and show examples of their use. He will cover moving averages, exponential smoothing, the Kalman filter, low-pass filters, high-pass filters, LOWESS, and smoothing splines.
This presentation is pitched towards those who may use smoothing techniques during the course of their analytic work, but who have little familiarity with the techniques themselves. David will avoid the underlying mathematical and statistical details, focusing instead on providing a clear understanding of what each technique does.
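For readers who would like to peek at the mechanics anyway, here is a minimal Python sketch (my own illustration, not material from the training) of two of the simplest methods on the list: a trailing moving average and simple exponential smoothing. The function names are made up for the example.

```python
def moving_average(x, window):
    """Trailing simple moving average: each output is the mean of the
    last `window` observations, so the smoothed series is shorter."""
    return [sum(x[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(x))]

def exponential_smoothing(x, alpha):
    """Simple exponential smoothing: each smoothed value blends the new
    observation with the previous smoothed value, so older points are
    discounted geometrically. alpha in (0, 1] controls responsiveness."""
    s = [x[0]]  # initialize at the first observation
    for v in x[1:]:
        s.append(alpha * v + (1 - alpha) * s[-1])
    return s
```

Note the trade-off the two methods embody: the moving average weights the last `window` points equally and ignores everything older, while exponential smoothing reacts to every new point but with geometrically decaying memory.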
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
Latent Class Analysis is a method for finding and measuring unobserved latent subgroups in a population based on responses to a set of observed categorical variables.
This webinar will present an overview and an example of how latent class analysis finds subgroups, how to interpret the output, and the steps involved in running it. We will discuss extensions of the method, uses of the latent classes in other analyses, and similarities to and differences from related techniques.
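As a rough illustration of the underlying idea (not the webinar's software or output), the sketch below fits a two-class latent class model to binary responses with a bare-bones EM algorithm. The function name, the random initialization, and the fixed iteration count are all assumptions made for this example.

```python
import random

def lca_em(data, n_classes=2, n_iter=100, seed=0):
    """Fit a latent class model to 0/1 item responses via EM (a sketch).

    data: list of respondents, each a list of binary item responses.
    Returns class prevalences, item-response probabilities per class,
    and each respondent's posterior class membership probabilities.
    """
    rng = random.Random(seed)
    n, m = len(data), len(data[0])
    # start with equal class sizes and random item-response probabilities
    pi = [1.0 / n_classes] * n_classes
    p = [[rng.uniform(0.25, 0.75) for _ in range(m)]
         for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each respondent
        post = []
        for row in data:
            like = []
            for c in range(n_classes):
                l = pi[c]
                for j, x in enumerate(row):
                    l *= p[c][j] if x else (1 - p[c][j])
                like.append(l)
            total = sum(like)
            post.append([l / total for l in like])
        # M-step: update prevalences and item probabilities from posteriors
        for c in range(n_classes):
            wc = sum(post[i][c] for i in range(n))
            pi[c] = wc / n
            for j in range(m):
                p[c][j] = sum(post[i][c] * data[i][j]
                              for i in range(n)) / wc
    return pi, p, post
```

On clearly separated data the fitted item-response probabilities pull apart toward 0 and 1, which is exactly the pattern one reads off an LCA output table to label the classes.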
Whenever we run an analysis of variance or a regression, one of the first things we do is look at the p-values of our predictor variables to determine whether they are statistically significant. When a variable is statistically significant, do you ever stop and ask yourself how significant it is?
P-values are the fundamental tools used in most inferential data analyses.
Why is it we can model non-linear effects in linear regression?
What the heck does it mean for a model to be “linear in the parameters”?
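A quick taste of the answer: “linear in the parameters” means the coefficients enter the model linearly even when the predictors do not. The sketch below (the function name is mine, for illustration) fits the curved model y = b0 + b1*x + b2*x^2 with ordinary least squares, i.e., plain linear regression on a design matrix whose columns are powers of x.

```python
def polyfit_linear(xs, ys, degree):
    """Fit y = b0 + b1*x + ... + bd*x^d by ordinary least squares.

    The fitted curve is nonlinear in x, but the model is linear in the
    parameters b, so ordinary linear regression estimates it exactly
    as it would any other set of predictors.
    """
    n, k = len(xs), degree + 1
    # design matrix: one column per power of x, including the intercept
    X = [[x ** d for d in range(k)] for x in xs]
    # normal equations: (X'X) b = X'y
    XtX = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
           for r in range(k)]
    Xty = [sum(X[i][r] * ys[i] for i in range(n)) for r in range(k)]
    # solve the small linear system by Gaussian elimination with pivoting
    A = [row[:] + [Xty[r]] for r, row in enumerate(XtX)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (A[r][k] - sum(A[r][c] * b[c]
                              for c in range(r + 1, k))) / A[r][r]
    return b
```

The point of the example: nothing in the estimation step knows or cares that the x^2 column was derived from x. As far as least squares is concerned, it is just another predictor.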
A Science News article from July 2014 was titled “Scientists’ grasp of confidence intervals doesn’t inspire confidence.” Perhaps that is why only 11% of the articles in the 10 leading psychology journals in 2006 reported confidence intervals in their statistical analyses.
How important is it to be able to create and interpret confidence intervals?
The American Psychological Association Publication Manual, which sets the editorial standards for over 1,000 journals in the behavioral, life, and social sciences, has begun emphasizing parameter estimation and de-emphasizing Null Hypothesis Significance Testing (NHST).
Its most recent edition, the sixth, published in 2009, states “estimates of appropriate effect sizes and confidence intervals are the minimum expectations” for published research.
In this webinar, we’ll clear up the ambiguity about what exactly a confidence interval is and how to interpret one in tables and graphs. We will also explore how confidence intervals are calculated for continuous and dichotomous outcome variables in various types of samples, and see the impact sample size has on their width. We’ll also discuss related concepts, such as equivalence testing.
By the end of the webinar, we anticipate your grasp of confidence intervals will inspire confidence.
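As a small taste of the calculations involved, here is a sketch (not the webinar's material; it uses only the normal-approximation formulas, and the function names are mine) of 95% confidence intervals for a mean and for a proportion. Because the half-width is proportional to 1/sqrt(n), multiplying the sample size by 10 shrinks the interval by a factor of sqrt(10).

```python
import math

def mean_ci(xs, z=1.96):
    """95% CI for a mean, using the normal approximation.

    For small samples a t critical value would be more appropriate;
    z = 1.96 is the large-sample default.
    """
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))  # sample SD
    half = z * sd / math.sqrt(n)  # half-width shrinks with sqrt(n)
    return m - half, m + half

def proportion_ci(successes, n, z=1.96):
    """95% Wald CI for a proportion (a dichotomous outcome).

    Reasonable when n is large and p is away from 0 and 1; Wilson or
    exact intervals behave better near the boundaries.
    """
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half
```

For example, 50 successes out of 100 gives an interval of about (0.40, 0.60), while 500 out of 1,000 narrows it to roughly (0.47, 0.53): the same estimate, but a much tighter band.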