Multicollinearity tutorial

I just posted a brief multicollinearity tutorial on my other blog (loosely based on the material from the Serious Stats book).

You can read it here.

Near-instant high quality graphs in R

One of the main attractions of R (for me) is the ability to produce high quality graphics that look just the way you want them to. The basic plot functions are generally excellent for exploratory work and for getting to know your data. Most packages have additional functions for exploratory work or for summarizing and communicating inferences. Generally the default plots are at least as good as those from other software (e.g., commercial packages), but with the added advantage of being fairly easy to customize once you understand the basic plotting functions and parameters.

Even so, getting a plot looking just right for a presentation or publication often takes a lot of work with the basic plotting functions. One reason for this is that constructing a good graphic is an inherently difficult enterprise, one that balances aesthetic and statistical factors and that requires a good understanding of who will look at the graphic, what they know, what they want to know and how they will interpret it. It can take hours – maybe days – to get a graphic right.

In Serious Stats I focused on exploratory plots and how to use basic plotting functions to customize them. I think this was important to include, but one of my regrets was not having enough space to cover a different approach to plotting in R. This is Hadley Wickham’s ggplot2 package (inspired by Leland Wilkinson’s grammar of graphics approach).

In this blog post I’ll demonstrate a few ways that ggplot2 can be used to quickly produce amazing graphics for presentations or publication. I’ll finish by mentioning some pros and cons of the approach.

The main attraction of ggplot2 for newcomers to R is the qplot() quick plot function. Like the R plot() function, it will recognize certain types and combinations of R objects and produce an appropriate plot (in most cases). Unlike the basic R plots, the output tends to be both functional and pretty. Thus you may be able to generate the graph you need for your talk or paper almost instantly.

A good place to start is the vanilla scatter plot. Here is the R default:

[Figure: default scatter plot using plot()]

Compare it with the ggplot2 default:

[Figure: default scatter plot using qplot() from ggplot2]

Below is the R code for comparison. (The data here are from the hov.csv file used in Example 10.2 of Chapter 10 of Serious Stats.)

# install and then load the package
install.packages('ggplot2')
library(ggplot2)

# get the data
hov.dat <-  read.csv('http://www2.ntupsychology.net/seriousstats/hov.csv')

# using plot()
with(hov.dat, plot(x, y))

# using qplot()
qplot(x, y, data=hov.dat)


Adding a line of best fit

The ggplot2 version is (in my view) rather prettier, but a big advantage is being able to add a range of different model fits very easily. The most common choice of model fit is a straight line (usually the least squares regression line). Doing this in ggplot2 is easier than with basic plot functions (and you also get 95% confidence bands by default).

Here is the straight line fit from a linear model:

[Figure: scatter plot with a straight-line (linear model) fit]

qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm')

The geom specifies the type of plot (one with points and a smoothed line in this case) while the method specifies the model for obtaining the smoothed line. A formula can also be added (but the formula defaults to y as a simple linear function of x).
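For example, here is a minimal sketch in the same style as the call above, with the default formula written out explicitly (it should produce the same plot):

# same plot as above, with the default formula made explicit
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm', formula=y ~ x)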

Loess, polynomial fits or splines

Mind you, the linear model fit has some disadvantages. Even if you are working with a related statistical model (e.g., a Pearson’s r or least squares simple or multiple regression) you might want to have a more data-driven plot. A good choice here is to use a local regression approach such as loess. This lets the data speak for themselves – effectively fitting a complex curve driven by the local properties of the data. If this is reasonably linear then your audience should be able to see the quality of the straight-line fit for themselves. The local regression also gives approximate 95% confidence bands. These may support informal inference without having to make strong assumptions about the model.

Here is the loess plot:

[Figure: scatter plot with a loess fit]

Here is the code for the loess plot:

qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='loess')

I like the loess approach here because it’s fairly obvious that the linear fit does quite well. Showing the straight-line fit has the appearance of imposing the pattern on the data, whereas a local regression approach illustrates the pattern while allowing departures from the straight-line fit to show through.

In Serious Stats I mention loess only in passing (as an alternative to polynomial regression). Loess is generally superior as an exploratory tool – whereas polynomial regression (particularly quadratic and cubic fits) is more useful for inference. Here is an example of a cubic polynomial fit (followed by R code):

[Figure: scatter plot with a cubic polynomial fit]

qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm', formula=y ~ poly(x, 3))

Also available are fits using robust linear regression or splines. Robust linear regression (see section 10.5.2 of Serious Stats for a brief introduction) changes the least squares loss function in order to reduce the impact of extreme points. Sample R code (graph not shown):

library(MASS)
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='rlm')

One slight problem here is that the approximate confidence bands assume normality and thus are probably too narrow.

Splines are an alternative to loess, fitting sections of simpler curves together. Here is a spline fit with three degrees of freedom:

[Figure: scatter plot with a spline fit (3 df)]

library(splines)
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm', formula=y ~ ns(x, 3))

A few final thoughts

If you want to know more, the best place to start is Hadley Wickham’s book. Chapter 2 covers qplot() and is available free online.

The immediate pros of the ggplot2 approach are fairly obvious – quick, good-looking graphs. There is, however, much more to the package and there is almost no limit to what you can produce. The output of the ggplot2 functions is itself an R object that can be stored and edited to create new graphs. You can use qplot() to create many other graphs – notably kernel density plots, bar charts, box plots and histograms. You can get these by changing the geom (or, with certain object types as input, by default).
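For instance, here is a minimal sketch (using the same hov.dat data as above) of storing a plot as an object, adding a layer to it, and asking qplot() for other geoms; the exact look will depend on your ggplot2 version:

# store the basic scatter plot as an object
scatter <- qplot(x, y, data=hov.dat)

# add a least squares fit as an extra layer to the stored plot
scatter + geom_smooth(method='lm')

# other geoms from qplot(): a histogram and a kernel density plot of x
qplot(x, data=hov.dat, geom='histogram')
qplot(x, data=hov.dat, geom='density')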

The cons are less obvious. First, it takes some investment of time to get to grips with the grammar of graphics approach (though this is very minimal if you stick with the quick plot function). Second, you may not like the default look of the ggplot2 output (though you can tweak it fairly easily). For instance, I prefer the default kernel density and histogram plots from the R base package to the default ggplot2 ones. I like to take a bare-bones plot and build it up, trying to keep visual clutter to a minimum. I also tend to want black and white images for publication (whereas I would use grey and colour images more often in presentations). This is mostly a matter of personal taste.
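As an illustration of that kind of tweaking, here is a minimal sketch that switches the scatter plot to ggplot2’s built-in black and white theme (one of several ways to get a plainer look for print):

# a plainer black and white look for print
qplot(x, y, data=hov.dat) + theme_bw()

# or set the theme for all subsequent plots in the session
theme_set(theme_bw())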

Serious stats – a quick chapter summary

Here is a list of the contents by chapter with quick notes on chapter content …

0. Preface (About the book; notes on software, mathematics and types of boxed sections)

1. Data, Samples and Statistics (A gentle review of measures of central tendency and dispersion with a little more depth in places – flagging up the distinction between descriptive and inferential formulas and perhaps introducing a few unfamiliar statistics such as the geometric mean)

2. Probability Distributions (A background chapter giving a whirlwind tour of the main probability distributions – discrete and continuous – that crop up in later chapters. It also introduces important concepts such as probability mass functions, probability density functions and cumulative distribution functions, and characteristics of distributions such as skew, kurtosis and whether they are bounded. From a statistical point of view it is a quick overview missing out a lot of the difficult stuff.)

3. Confidence Intervals (This chapter introduces interval estimation using confidence intervals (CIs) and gives examples for discrete and continuous distributions – particularly those for means and differences between independent or paired means using the t distribution. This chapter also introduces Monte Carlo methods – with emphasis on the bootstrap.)

4. Significance Tests (This chapter introduces significance tests. These are deliberately covered after CIs – which are less popular in the behavioral sciences but generally more useful. A number of common tests are covered – notably t tests and chi-square tests. The chapter ends with some comments on the appropriate use of significance tests – a point picked up again in chapter 11.)

5. Regression (This chapter introduces regression – with an emphasis on simple linear regression. Later chapters draw heavily on this basic material, including concepts such as prediction, leverage and influence. The versatility of regression approaches is shown by illustrating how an independent t test is a simple regression model (see the R sketch after this chapter list) and how a linear model can fit some curvilinear relationships.)

6. Correlation and Covariance (Introduces covariance and correlation with emphasis on the link between Pearson’s r and simple linear regression. The chapter also introduces standardization and problems of working with standardized quantities such as boundary effects, range restriction and small sample bias. Methods for inference with correlation coefficients and for comparing correlations (e.g., using the Fisher z transformation) are considered. Some alternatives to Pearson’s r are also introduced.)

7. Effect Size (This chapter focuses on effect size, starting with an overview of the different uses of effect size metrics. The chapter gives a tour of different types of effect size metrics, distinguishing between: continuous and discrete metrics; simple (unstandardized) and standardized metrics; focused (1 df) and unfocused (multiple df) metrics; base rate sensitive and base rate insensitive metrics. I argue that standardized metrics whether based on differences or correlations (d family or r family) are not good measures of the practical, clinical or theoretical importance of an effect because they confound the magnitude of an effect with its variability – though they may be useful in some situations.)

8. Statistical Power (This chapter introduces statistical power – starting by explaining the link between the effect size and statistical power, illustrating why standardized effect size (by combining the magnitude of an effect with its variability) is often a convenient way to summarize an effect in order to estimate statistical power or the sample size required to detect an effect. Problems and pitfalls in statistical power and sample size estimation are discussed. Later sections introduce the accuracy in parameter estimation approach to power in relation to the width of a confidence interval.)

9. Exploring Messy Data (This chapter looks at exploratory analysis of data with emphasis on graphical methods for checking statistical assumptions.)

10. Dealing with Messy Data (This chapter surveys approaches to dealing with violations of statistical assumptions with particular emphasis on robust methods and transformations.)

11. Alternatives to Classical Statistical Inference (This chapter looks at criticism of classical, frequentist methods of inference and considers frequentist responses and three alternative approaches: likelihood, Bayesian and information-theoretic methods. I illustrate each of the alternatives both here and in later chapters.)

12. Multiple Regression and the General Linear Model (This chapter extends regression to models with multiple predictors. The problem of fitting these models when predictors are not orthogonal (i.e., when they are correlated) is introduced and a solution is illustrated using matrix algebra (see the R sketch after this chapter list). The rest of the chapter introduces partial and semi-partial correlation and focuses on interpreting a multiple regression model and related issues such as collinearity and suppression.)

13. ANOVA and ANCOVA with Independent Measures (This chapter introduces ANOVA and ANCOVA as special cases of multiple regression with categorical predictors (e.g., using dummy or effect coding). The chapter ends by introducing the multiple comparison problem in relation to differences between means for a factor in ANOVA or differences between adjusted means in ANCOVA. For the latter, the main focus is on modified Bonferroni procedures, though alternatives such as control of false discovery rate and information-theoretic approaches are briefly considered.)

14. Interactions (This chapter looks at modeling non-additive effects of predictors in multiple regression models through the inclusion of interaction terms. It starts by looking at the most general form of an interaction model in multiple regression (often termed a moderated multiple regression; see the R sketch after this chapter list) before looking at polynomial terms in regression and interactions in the context of ANOVA and ANCOVA. The main emphasis is on interpreting and exploring interaction effects (e.g., through graphical methods). The chapter also looks at simple main effects and simple interaction effects.)

15. Contrasts (This chapter looks at the often neglected topic of contrasts – mainly in the context of ANOVA and ANCOVA models (where they are weighted combinations of differences in means or adjusted means). Methods for setting up contrasts to test hypotheses about patterns of means are explained for simple cases and extended for unbalanced designs, adjusted means and interaction effects.)

16. Repeated Measures ANOVA (This chapter introduces repeated measures and related (e.g., matched) designs. These increase statistical power by removing individual differences from the ANOVA error term, but at the cost of increased complexity (e.g., making stronger assumptions about the errors of the model). Again, the chapter focuses on checking and dealing with violations of assumptions and on the interpretation of the model. It also briefly considers MANOVA and repeated measures ANCOVA models and the use of gain scores).

17. Modelling Discrete Outcomes (This chapter explains how the regression approach of the general linear model can be extended to models with discrete outcomes using the generalized linear model and related approaches. The main focus is on logistic regression (including multinomial and ordered logistic regression) and Poisson regression (see the R sketch after this chapter list), but negative binomial regression and models for excess zeroes (zero-inflated and hurdle models) are briefly reviewed. The chapter ends by considering the difficulty of modeling correlated observations in logistic regression.)

18. Multilevel Models (This chapter introduces multilevel models with particular emphasis on their application to the analysis of repeated measures data. The chapter considers conventional nested designs (e.g., repeated measures within participants or children within schools) and moves on to fully crossed models and a brief overview of multilevel generalized linear models.)

All chapters come with several examples within the chapter and R code (at the end). Most also have notes on SPSS syntax. I don’t include full SPSS instructions because these are often already available in popular texts. If they aren’t available it is generally because SPSS couldn’t readily implement these analyses. Also note that recent versions of SPSS can be set up to call R via syntax (though I find it easier to use R directly).
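To give a flavour of the kind of R code involved, here are a few minimal sketches keyed to the chapter descriptions above. They use made-up data and standard R functions rather than the book’s own examples, so treat them as illustrative only. First, Chapter 5’s point that an independent t test is just a simple regression model:

# made-up data: two groups differing modestly in their means
set.seed(1)
group <- factor(rep(c('a', 'b'), each=20))
y <- rnorm(40) + 0.5*(group == 'b')

t.test(y ~ group, var.equal=TRUE)  # classical independent t test
summary(lm(y ~ group))             # group coefficient has the same t statistic (up to sign) and p value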
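Next, the matrix algebra solution for correlated predictors mentioned under Chapter 12, again with made-up data:

# made-up data: two correlated predictors
set.seed(1)
x1 <- rnorm(40)
x2 <- 0.5*x1 + rnorm(40)
y <- 1 + 2*x1 - x2 + rnorm(40)

X <- cbind(1, x1, x2)              # design matrix with an intercept column
solve(t(X) %*% X) %*% t(X) %*% y   # least squares estimates via (X'X)^-1 X'y
coef(lm(y ~ x1 + x2))              # the same estimates from lm()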
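The moderated multiple regression model of Chapter 14 is also easy to sketch: the interaction term is just the product of the two predictors:

# made-up data with a genuine interaction between x and z
set.seed(1)
x <- rnorm(60)
z <- rnorm(60)
y <- 1 + 0.5*x + 0.3*z + 0.4*x*z + rnorm(60)

summary(lm(y ~ x * z))   # x * z expands to x + z + x:z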
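Finally, the generalized linear models of Chapter 17 use the same formula interface via glm(); here with simulated binary and count outcomes:

# made-up data: a binary and a count outcome driven by the same predictor
set.seed(1)
x <- rnorm(100)
y.bin <- rbinom(100, 1, plogis(0.5*x))
y.count <- rpois(100, exp(0.2 + 0.3*x))

summary(glm(y.bin ~ x, family=binomial))   # logistic regression
summary(glm(y.count ~ x, family=poisson))  # Poisson regression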

Online supplements

The book is around 800 pages long and some material cut from the final draft will be available in five online supplements. This material is either parenthetical (more detailed than strictly required) or consists of self-contained sections that could stand alone and were perhaps not relevant for all readers.

OS1. Meta-analysis (This section was originally part of chapter 7 on effect size and introduces meta-analysis. Most meta-analytic approaches for continuous data use standardized effect size metrics. As that chapter argues that simple effect size metrics are often superior for summarizing and comparing effects, this supplement uses meta-analysis of simple (raw) mean differences to illustrate fixed effect and random effects models. There is a nice link between random effects meta-analysis and multilevel models – so it was a shame to drop it.)

OS2. Dealing with missing data (An overview of methods for dealing with missing data that was part of chapter 10. The main focus is on multiple imputation – an extremely useful and underused approach in the behavioral sciences – and a worked example is demonstrated for both R and SPSS (a quick R sketch of the general idea appears after this list). There are nice links between multiple imputation and meta-analysis – so it made sense to move this material out once I had decided to leave out meta-analysis. If you work with missing data and aren’t already familiar with multiple imputation you should take a careful look at this supplement – as most standard methods for dealing with missing data are biased and have low statistical power.)

OS3. Replication probabilities and prep (When I started writing the book there was quite an interest in replication probabilities, and in prep in particular as an alternative to p values. This interest has largely faded and my (largely critical) take on prep is now mainly a historical curiosity. The main text now covers this topic briefly in chapter 11.)

OS4. Pseudo-R2 and related measures (A reader of the final draft of chapter 17 commented that, given the problems with these measures and my own critical stance on standardized effect size metrics, my coverage of this topic was too detailed. I greatly reduced the emphasis on pseudo-R2 in the text by moving most of the material here. Of these measures my favourite is Zheng and Agresti’s predictive power measure – which I find most intuitive.)

OS5. Loglinear models (Loglinear models are models of contingency table data (closely related to Poisson regression, and under certain conditions equivalent to it). As Poisson models are generally more flexible, loglinear models were cut from the final draft. However, as they are quite popular in the behavioral sciences, this supplement is provided. Loglinear models are also a convenient way to parameterize a count model to make it more “chi-square-like”. Note: the term loglinear model can also be used in a more general sense to include models with log link functions or log transformations.)
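For OS2, here is a quick R sketch of the general idea behind multiple imputation. It uses the mice package and its bundled nhanes data purely for illustration – it is not necessarily the approach or example used in the supplement itself:

# illustrative only: multiple imputation with the mice package and its nhanes example data
install.packages('mice')
library(mice)

imp <- mice(nhanes, m=5, seed=1, printFlag=FALSE)  # create 5 imputed data sets
fit <- with(imp, lm(bmi ~ age))                    # fit the model to each imputed data set
summary(pool(fit))                                 # pool the estimates across imputations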
