I just posted a brief multicollinearity tutorial on my other blog (loosely based on the material from the Serious Stats book).

You can read it here.

*Posted by Thom on November 11, 2013*

https://seriousstats.wordpress.com/2013/11/11/multicollinearity-tutoral/

It is now increasingly common for experimental psychologists (among others) to use multilevel models (also known as linear mixed models) to analyze data that used to be shoe-horned into a repeated measures ANOVA design. Chapter 18 of Serious Stats introduces multilevel models by considering them as an extension of repeated measures ANOVA models that can cope with missing outcomes and time-varying covariates, and can relax the sphericity assumption of conventional repeated measures ANOVA. They can also deal with other – less well known – problems such as having stimuli that are a random factor (e.g., see this post on my Psychological Statistics blog). Last, but not least, multilevel generalised linear models allow you to have discrete and bounded outcomes (e.g., dichotomous, ordinal or count data) rather than being constrained to assume a continuous response with normal errors.

There are two main practical problems to bear in mind when switching to the multilevel approach. First, the additional complexity of the approach can be daunting at first – though it is possible to build up gently to more complex models. Recent improvements in the availability of software and support (textbooks, papers and online resources) also help. The second is that as soon as a model departs markedly from a conventional repeated measures ANOVA, correct inferences (notably significance tests and interval estimates such as confidence intervals) can be difficult to obtain. If the usual ANOVA assumptions hold in a nested, balanced design then there is a known equivalence between the multilevel model inferences using *t* or *F* tests and the familiar ANOVA tests (and in this case the expected output of the tests is the same). The main culprits are boundary effects (which affect inferences about variances and hence most tests of random effects) and working out the correct degrees of freedom (*df*) to use for your test statistic. Both these problems are discussed in Chapter 18 of the book. If you have very large samples an asymptotic approach (using Wald *z* or chi-square statistics) is probably just fine. However, the further you depart from conventional repeated measures ANOVA assumptions the harder it is to know how large a sample needs to be before the asymptotics kick in. In other words, the more attractive the multilevel approach the less you can rely on the Wald tests (or indeed the Wald-style *t* or *F* tests).

The solution I advocate in Serious Stats is either to use parametric bootstrapping or Markov chain Monte Carlo (MCMC) approaches. Another approach is to use some form of correction to the *df* or test statistic such as the Welch-Satterthwaite correction. For multilevel models with factorial-type designs the recommended correction is generally the Kenward-Roger approximation. This is implemented in SAS, but (until recently) was not available in R. Judd, Westfall and Kenny (2012) describe how to use the Kenward-Roger approximation to get more accurate significance tests from a multilevel model using R. Their examples use the newly developed pbkrtest package (Halekoh & Højsgaard, 2012) – which also has functions for parametric bootstrapping.

My purpose here is to contrast the MCMC approach and the Kenward-Roger correction (ignoring the parametric bootstrap for the moment). To do that I’ll go through a worked example – looking to obtain a significance test and a 95% confidence interval (CI) for a single effect.

*The pitch data example*

The example I’ll use is the pitch data from Chapter 18 of the book. This experiment (from a collaboration with Tim Wells and Andrew Dunn) involves looking at the pitch of male voices making attractiveness ratings with respect to female faces. The effect of interest (for this example) is whether average pitch goes up or down for higher ratings (and if so, by how much). A conventional ANOVA is problematic because this is a design with two fully crossed random factors – each participant (*n* = 30) sees each face (*n* = 32) and any conclusions ought to generalise both to other participants and (crucially) to other faces. Furthermore, there is a time-varying covariate – the baseline pitch of the numerical rating when no face is presented. The significance tests or CIs reported by most multilevel modelling packages will also be suspect. Running the analysis in the R package lme4 gives parameter estimates and *t* statistics for the fixed effects but no *p* values or CIs. The following R code loads the pitch data, checks the first few cases, loads lme4 and runs the model of interest. (You should install lme4 using the command install.packages('lme4') if you haven’t done so already.)
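
Something along the following lines (a sketch: the data file path and the name of the attractiveness fixed effect, attract, are placeholders rather than the exact code from the book):

pitch.dat <- read.csv('pitch.csv') # placeholder path for the pitch data set
head(pitch.dat) # check the first few cases
library(lme4)
pitch.me <- lmer(pitch ~ base + attract + (1|Face) + (1|Participant), data=pitch.dat)
pitch.me # print the fitted model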

Note the lack of *df* and *p* values. This is deliberate policy by the lme4 authors; they are not keen on giving users output that has a good chance of being very wrong.

*The Kenward-Roger approximation*

This approximation involves adjusting both the *F* statistic and its *df* so that the *p* value comes out approximately correct (see references below for further information). It won’t hurt too much to think of it as a turbocharged Welch-Satterthwaite correction. To get the corrected *p* value from this approach, first install the pbkrtest package and then load it. The approximation is computed using the KRmodcomp() function. This takes the model of interest (with the focal effect) and a reduced model (one without the focal effect). The code below installs and loads everything, runs the reduced model and then uses KRmodcomp() to get the corrected *p* value. Note that it may take a while to run (it took about 30 seconds on my laptop).

install.packages('pbkrtest')
library(pbkrtest)
pitch.red <- lmer(pitch ~ base + (1|Face) + (1|Participant), data=pitch.dat)
KRmodcomp(pitch.me, pitch.red)

The corrected *p* value is .0001024. The result could be reported as a Kenward-Roger corrected test with *F*(1, 118.5) = 16.17, *p* = .0001024. In this case the Wald *z* test would have given a *p* value of around .0000435. Here the effect is sufficiently large that the difference in approaches doesn’t matter – but that won’t always be true.
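
For comparison, the Wald *z* *p* value treats the *t* statistic from the lmer output as a standard normal deviate. A minimal sketch (the value of t.val is illustrative – read the actual *t* statistic for the attractiveness effect from the model output):

t.val <- 4.08 # illustrative t statistic for the focal fixed effect
2 * pnorm(-abs(t.val)) # two-sided p value treating t as a Wald z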

*The MCMC approach*

The MCMC approach (discussed in Chapter 18) can be run in several ways – with the lme4 functions or those in MCMCglmm being fairly easy to implement. Here I’ll stick with lme4 (but for more complex models MCMCglmm is likely to be better).

First you need to obtain a large number of Monte Carlo simulations from the model of interest. I’ll use 25,000 here (but I often start with 1,000 and work up to a bigger sample). Again this may take a while (about 30 or 40 seconds on my laptop).

pitch.mcmc <- mcmcsamp(pitch.me, n = 25000)

For MCMC approaches it is useful to check the estimates from the simulations. Here I’ll take a quick look at the trace plot (though a density plot is also sensible – see Chapter 18).

xyplot(pitch.mcmc)

This produces the following plot (or something close to it):

The trace for the fixed effect of attractiveness looks pretty healthy – the thick black central portion indicating that it doesn’t jump around too much. Now we can look at the 95% confidence interval (strictly a Bayesian highest posterior density or HPD interval – but for present purposes it approximates a 95% CI).

HPDinterval(pitch.mcmc)

This gives the interval estimate [0.2227276, 0.6578456]. This excludes zero so it is statistically significant (and MCMCglmm would have given us an MCMC-derived estimate of the *p* value).

*Comparison and recommendation*

Although the Kenward-Roger approach is well-regarded, for the moment I would recommend the MCMC approach. The pbkrtest package is still under development and I could not always get the approximation or the parametric bootstrap to work (but the parametric bootstrap can also be obtained in other ways – see Chapter 18).
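
For reference, the parametric bootstrap in pbkrtest is called in much the same way as the Kenward-Roger test. A sketch using the models fitted above (the value of nsim is illustrative, and the run can be slow):

PBmodcomp(pitch.me, pitch.red, nsim=1000) # parametric bootstrap test of the focal effect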

The MCMC approach is also preferable in that it should generalize safely to models where the performance of the Kenward-Roger approximation is unknown (or poor) such as for discrete or ordinal outcomes. It also provides interval estimates rather than just *p* values. The main downside is that you need to familiarize yourself with some basic MCMC diagnostics (e.g., trace and density plots at the very least) and be willing to re-run the simulations to check that the interval estimates are stable.

*References*

Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. *Journal of Personality and Social Psychology*, *103*, 54-69.

Halekoh, U., & Højsgaard, S. (2012). A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models – the R package pbkrtest. Submitted to *Journal of Statistical Software*.

*Update*

Ben Bolker pointed out that future versions of lme4 may well drop the MCMC functions (which are limited, at present, to fairly basic models). In the book I mainly used MCMCglmm – which is rather good at fitting fully crossed factorial models. Here is the R code for the pitch data. Using 50,000 simulations seems to give decent estimates of the attractiveness effect. Plotting the model object gives both MCMC trace plots and kernel density plots of the MCMC estimates (hit return in the console to see all the plots).
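
A sketch of such a model (the name of the attractiveness fixed effect, attract, is a placeholder, and the MCMCglmm priors are left at their defaults):

library(MCMCglmm)
pitch.mg <- MCMCglmm(pitch ~ base + attract, random= ~ Participant + Face, data=pitch.dat, nitt=50000, verbose=FALSE) # crossed random intercepts for participants and faces
summary(pitch.mg)
plot(pitch.mg) # MCMC trace and kernel density plots (hit return to page through)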

Last but not least, anyone interested in the topic should keep an eye on the draft r-sig-mixed-models FAQ for a summary of the challenges and latest available solutions for multilevel inference in R (and other packages).

*Posted by Thom on April 18, 2013*

https://seriousstats.wordpress.com/2013/04/18/using-multilevel-models-to-get-accurate-inferences-for-repeated-measures-anova-designs/

The companion web site for Serious stats is now live:

http://www.palgrave.com/psychology/Baguley/

It includes a sample chapter (Chapter 15: Contrasts), data sets, R scripts for all the examples and supplementary material.

*Posted by Thom on March 23, 2012*

https://seriousstats.wordpress.com/2012/03/23/serious-stats-companion-web-site-now-live/

In Chapter 2 (Confidence Intervals) of Serious stats I consider the problem of displaying confidence intervals (CIs) of a set of means (which I illustrate with the simple case of two independent means). Later, in Chapter 16 (Repeated Measures ANOVA), I consider the trickier problem of displaying two or more means from paired or repeated measures. The example in Chapter 16 uses R functions from my recent paper reviewing different methods for displaying means for repeated measures (within-subjects) ANOVA designs (Baguley, 2012b). For further details and links see a brief summary on my psychological statistics blog. The R functions included a version for independent measures (between-subjects) designs, but this was rather limited and designed for comparison purposes (not for actual use).

The independent measures case is relatively straightforward to implement and I hadn’t originally planned to write functions for it. Since then, however, I have decided that it is worth doing. Setting up the plots can be quite fiddly and it may be useful to go over the key points for the independent case before moving on to the repeated measures case. This post therefore adapts my code for independent measures (between-subjects) designs.

The approach I propose is inspired by Goldstein and Healy (1995) – though other authors have made similar suggestions over the years (see Baguley, 2012b). Their aim was to provide a simple method for displaying a large collection of independent means (or other independent statistics). At its simplest the method reduces to plotting each statistic with error bars equal to ±1.39 standard errors of the mean. This result is a normal approximation that can be refined in various ways (e.g., by using the *t* distribution or by extending it to take account of correlations between conditions). Using a Goldstein-Healy plot two means are considered different with 95% confidence if their two intervals do not overlap. In other words non-overlapping CIs are (in this form of plot) approximately equivalent to a statistically significant difference between the two means with α = .05. For convenience I will refer to CIs that have this property as *difference-adjusted* CIs (to distinguish them from conventional CIs).

It is important to realize that conventional 95% CIs constructed around each mean won’t have this property. For independent means they are usually around 40% too wide and thus will often overlap even if the usual *t* test of their difference is statistically significant at *p* < .05. This happens because the variance of a difference is (in independent samples) equal to the sum of the variances of the individual samples. Thus the implied standard error of the difference is around $latex \sqrt 2 $ times too large (assuming equal variances). For a more comprehensive explanation see Chapter 3 of Serious stats or Baguley (2012b).
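
To see where the ±1.39 multiplier comes from (a sketch assuming two independent means with a common standard error): the standard error of the difference is $latex \sqrt 2 $ times the standard error of either mean, so each of the two bars needs to span only $latex 1.96/\sqrt 2 \approx 1.39 $ standard errors for non-overlap to correspond to a test at α = .05. A quick check in R:

qnorm(.975)/sqrt(2) # 1.386, the Goldstein-Healy multiplier for 95% confidence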

If you have only two means there are at least three basic options:

1) plot the individual means with conventional 95% CIs around each mean

2) plot the difference between means and a 95% CI for the difference

3) plot some form of difference-adjusted CI

Which option is best? It depends on what you are trying to do. A good place to start is with your reasons for constructing a graphical display in the first place. Graphs are not particularly good for formal inference and other options (e.g., significance tests, reporting point estimates and CIs in text, likelihood ratios, Bayes factors and so forth) exist for reporting the outcome of formal hypothesis tests. Graphs are appropriate for informal inference. This includes exploratory data analysis, aiding the interpretation of complex patterns, or summarizing a number of simple patterns in a single display. If the patterns are very clear, informal inference might be sufficient. In other cases it can be supplemented with formal inference.

What patterns do the three basic options above reveal? Option 1) shows the precision around individual means. This readily supports inference about the individual means (but not their difference). For example, a true population mean outside the 95% CI is considered implausible (and the observed mean would differ from such a hypothesized value with *p* < .05 using a one sample *t* test).

Option 2) makes for a rather dull plot because it just involves a single point estimate for the difference in means and the 95% CI for the difference. If this is the only quantity of interest you’d be better off just reporting the mean difference and 95% CI in the text. This has the advantage of being more compact and more accurate than trying to read the numbers off a graph. [This is one reason that graphs aren’t optimal for formal inference; it can be hard, for instance, to tell whether a line includes or excludes zero when the difference is just statistically significant or just statistically non-significant. With informal inference you shouldn’t care whether *p* = .049 or *p* = .051, but whether there are any clear patterns in the data.]

Option 3) shows you the individual means but calibrates the CIs so that you can tell if it is plausible that the sample means differ (using 95% confidence in the difference as a standard). Thus it seems like a good choice for graphical display if you are primarily interested in the differences between means. For formal inference it can be supplemented by reporting a hypothesis test in the text (or possibly a Figure caption).

It is worth noting that option 3) becomes even more attractive if you have more than two means to plot. It allows you to see patterns that emerge over the set of means (e.g., linear or non-linear trends or – if *n* per sample is similar – changes in variances) and to compare pairs of means to see whether it is plausible that they are different.

In contrast, option 2) is rather unattractive with more than two means. First, with *J* means there are *J*(*J*-1)/2 differences and thus an unnecessarily cluttered graphical display (e.g., with *J* = 5 means there are 10 CIs to plot). Second, plotting only the differences can obscure important patterns in the data (e.g., an increasing or decreasing trend in the means or variances would be difficult to identify).

Where only a few means are to be plotted (as is common in ANOVA) it makes sense to take a slightly more accurate approach than the approximation originally proposed by Goldstein and Healy for large collections of means. This approach uses the *t* distribution. A similar approach is advocated by Afshartous and Preston (2010), who also provide R code for calculating multipliers for the standard errors using the *t* distribution (and an extension for repeated measures). My approach is similar, but involves calculating the margin of error (half width of the error bars) directly rather than computing a multiplier to apply to the standard error.

Difference-adjusted CIs for the mean of each sample from an independent measures (between-subjects) ANOVA design are given by Equation 3.31 of Serious stats:

$latex \hat{\mu}_j \pm \frac{\sqrt 2}{2} \times t_{n_j - 1} \times \hat{\sigma}_{\hat{\mu}_j} $

The term $latex \hat{\mu}_j $ is the mean of the *j*th sample (where samples are labeled *j* = 1 to *J*) and $latex \hat{\sigma}_{\hat{\mu}_j} $ is the standard error of that sample. The term $latex t_{n_j - 1} $ is the quantile of the *t* distribution with $latex n_j - 1 $ degrees of freedom (where $latex n_j $ is the size of the *j*th sample) that includes 100(1 – α)% of the distribution.

Thus, apart from the $latex \frac{\sqrt 2}{2} $ term, this equation is identical to that for a 95% CI around the individual means, with the proviso that the standard error here is computed separately for each sample. This differs from the usual approach to plotting CIs for an independent measures ANOVA design – where it is common to use a pooled standard error computed from a pooled standard deviation (the root mean square error of the ANOVA). While a pooled error term is sometimes appropriate, it is generally a bad idea for graphical display of the CIs because it will obscure any patterns in the variability of the samples. [Nevertheless, where $latex n_j $ is very small it may make sense to use a pooled error term on the grounds that each sample provides an exceptionally poor estimate of its population standard deviation.]

However, the most important change is the $latex \frac{\sqrt 2}{2} $ term. It creates a difference-adjusted CI by ensuring that the joint width of the margin of error around any two means is $latex \sqrt 2 $ times larger than for a single mean. The division by 2 arises merely as a consequence of dealing jointly with two error bars. Their total has to be $latex \sqrt 2 $ times larger and therefore each one needs only to be $latex \frac{\sqrt 2}{2} $ times its conventional value (for an unadjusted CI). This is discussed in more detail by Baguley (2012a; 2012b).
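
To make the arithmetic explicit (a sketch assuming two means with a common standard error *SE*): for non-overlap of the two intervals to correspond to a test of their difference at level α, the two margins of error must sum to the margin of error of the difference, $latex t \times \sqrt{SE^2 + SE^2} = \sqrt 2 \times t \times SE $. Splitting that total equally between the two bars gives each mean a margin of error of $latex \frac{\sqrt 2}{2} \times t \times SE $ rather than the conventional $latex t \times SE $.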

This equation should perform well (e.g., providing fairly accurate coverage) as long as variances are not very unequal and the samples are approximately normal. Even when these conditions are not met, remember the aim is not to support formal inference. In addition, the approach is likely to be slightly more robust than ANOVA (at least to heterogeneity of variance and unequal sample sizes). So this method is likely to be a good choice whenever ANOVA is appropriate.

Two R functions for difference-adjusted CIs in independent measures ANOVA designs are provided here. The first function bsci() calculates conventional or difference-adjusted CIs for a one-way ANOVA design.

bsci <- function(data.frame, group.var=1, dv.var=2, difference=FALSE, pooled.error=FALSE, conf.level=0.95) {
    # Conventional or difference-adjusted CIs for a one-way independent measures design
    data <- subset(data.frame, select=c(group.var, dv.var))
    fact <- factor(data[[1]])
    dv <- data[[2]]
    J <- nlevels(fact)
    N <- length(dv)
    ci.mat <- matrix(, J, 3, dimnames=list(levels(fact), c('lower', 'mean', 'upper')))
    ci.mat[, 2] <- tapply(dv, fact, mean)
    n.per.group <- tapply(dv, fact, length)
    # sqrt(2)/2 shrinks each margin of error for difference-adjusted CIs
    if (difference == TRUE) diff.factor <- 2^0.5/2 else diff.factor <- 1
    if (pooled.error == TRUE) {
        # margin of error from the pooled SD (root mean square error) with N - J df
        for (i in 1:J) {
            moe <- summary(lm(dv ~ 0 + fact))$sigma/(n.per.group[[i]])^0.5 * qt(1 - (1 - conf.level)/2, N - J) * diff.factor
            ci.mat[i, 1] <- ci.mat[i, 2] - moe
            ci.mat[i, 3] <- ci.mat[i, 2] + moe
        }
    }
    if (pooled.error == FALSE) {
        # margin of error from each group's own SD with n - 1 df
        for (i in 1:J) {
            group.dat <- subset(data, data[1] == levels(fact)[i])[[2]]
            moe <- sd(group.dat)/sqrt(n.per.group[[i]]) * qt(1 - (1 - conf.level)/2, n.per.group[[i]] - 1) * diff.factor
            ci.mat[i, 1] <- ci.mat[i, 2] - moe
            ci.mat[i, 3] <- ci.mat[i, 2] + moe
        }
    }
    ci.mat
}

plot.bsci <- function(data.frame, group.var=1, dv.var=2, difference=TRUE, pooled.error=FALSE, conf.level=0.95, xlab=NULL, ylab=NULL, level.labels=NULL, main=NULL, pch=21, ylim=c(min.y, max.y), line.width=c(1.5, 0), grid=TRUE) {
    # Plot means with CIs from bsci(); the ylim defaults are computed below (lazy evaluation)
    data <- subset(data.frame, select=c(group.var, dv.var))
    # convert the grouping variable to a factor before reading its levels
    if (is.factor(data[[1]]) == FALSE) data[[1]] <- factor(data[[1]])
    if (missing(level.labels)) level.labels <- levels(data[[1]])
    dv <- data[[2]]
    J <- nlevels(data[[1]])
    ci.mat <- bsci(data.frame=data.frame, group.var=group.var, dv.var=dv.var, difference=difference, pooled.error=pooled.error, conf.level=conf.level)
    # pad the y axis by a third of the range of the intervals
    moe.y <- max(ci.mat) - min(ci.mat)
    min.y <- min(ci.mat) - moe.y/3
    max.y <- max(ci.mat) + moe.y/3
    if (missing(xlab)) xlab <- "Groups"
    if (missing(ylab)) ylab <- "Confidence interval for mean"
    plot(0, 0, ylim = ylim, xaxt = "n", xlim = c(0.7, J + 0.3), xlab = xlab, ylab = ylab, main = main)
    if (grid == TRUE) grid()
    points(ci.mat[, 2], pch = pch, bg = "black")
    index <- 1:J
    # vertical interval lines plus (optional) cross-bars at each end
    segments(index, ci.mat[, 1], index, ci.mat[, 3], lwd = line.width[1])
    segments(index - 0.02, ci.mat[, 1], index + 0.02, ci.mat[, 1], lwd = line.width[2])
    segments(index - 0.02, ci.mat[, 3], index + 0.02, ci.mat[, 3], lwd = line.width[2])
    axis(1, index, labels=level.labels)
}

The default is difference=FALSE (on the basis that these are the CIs most likely to be reported in text or tables). The second function plot.bsci() uses the former function to plot the means and CIs; the default here is difference=TRUE (on the basis that difference-adjusted CIs are likely to be more useful for graphical display). For both functions the default is a separate error term for each group (pooled.error=FALSE) and a 95% confidence level (conf.level=0.95). Each function also takes input as a data frame and assumes that the grouping variable is the first column and the dependent variable the second column. If the appropriate variables are in different columns, the correct columns can be specified with the arguments group.var and dv.var. The plotting function also takes some standard graphical parameters (e.g., for labels and so forth).

The following examples use the diagram data set from Serious stats. The first line loads the data set (if you have a live internet connection). The second line generates the difference-adjusted CIs. The third line plots the difference-adjusted CIs. Note that the grouping variable (factor) is in the second column and the DV is in the fourth column.

diag.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/diagram.csv')
bsci(diag.dat, group.var=2, dv.var=4, difference=TRUE)
plot.bsci(diag.dat, group.var=2, dv.var=4, ylab='Mean description quality', main = 'Difference-adjusted 95% CIs for the Diagram data')

In this case the graph looks like this:

It should be immediately clear that the segmented diagram condition (S) tends to have higher scores than the text (T) or picture (P) conditions, while the full diagram (F) condition is somewhere in between. This matches the uncorrected pairwise comparisons where S > P = T, S = F, and F = P = T.

At some point I will also add a function to plot two-tiered error bars (combining options 1 and 3). For details of the extension to repeated measures designs see Baguley (2012b). The code and data sets are available here.

*References*

Afshartous, D., & Preston, R. A. (2010). Confidence intervals for dependent data: Equating nonoverlap with statistical significance. *Computational Statistics and Data Analysis*, *54*, 2296-2305.

Baguley, T. (2012a, in press). *Serious stats: A guide to advanced statistics for the behavioral sciences*. Basingstoke: Palgrave.

Baguley, T. (2012b). Calculating and graphing within-subject confidence intervals for ANOVA. *Behavior Research Methods*, *44*, 158-175.

Goldstein, H., & Healy, M. J. R. (1995). The graphical presentation of a collection of means. *Journal of the Royal Statistical Society. Series A (Statistics in Society)*, *158*, 175-177.

Schenker, N., & Gentleman, J. F. (2001). On judging the significance of differences by examining the overlap between confidence intervals. *The American Statistician*, *55*, 182-186.

*Posted by Thom on March 18, 2012*

https://seriousstats.wordpress.com/2012/03/18/cis-for-anova/

This is a blog to accompany my forthcoming book “Serious stats” published by Palgrave.

Baguley, T. (2012, in press). *Serious stats: A guide to advanced statistics for the behavioral sciences*. Basingstoke: Palgrave.

The book is available for pre-order (e.g., via Amazon) and instructors should be able to pre-order inspection copies via Macmillan in the US (or Palgrave in the UK).

The proofs have been checked and returned and I am hoping for a publication date of May 2012.

*Posted by Thom on February 1, 2012*

https://seriousstats.wordpress.com/2012/02/01/intro/