Near-instant high quality graphs in R

One of the main attractions of R (for me) is the ability to produce high quality graphics that look just the way you want them to. The basic plot functions are generally excellent for exploratory work and for getting to know your data. Most packages have additional functions for appropriate exploratory work or for summarizing and communicating inferences. Generally the default plots are at least as good as those of other packages (e.g., commercial ones), but with the added advantage of being fairly easy to customize once you understand basic plotting functions and parameters.

Even so, getting a plot looking just right for a presentation or publication often takes a lot of work using basic plotting functions. One reason for this is that constructing a good graphic is an inherently difficult enterprise, one that balances aesthetic factors and statistical factors and that requires a good understanding of who will look at the graphic, what they know, what they want to know and how they will interpret it. It can take hours – maybe days – to get a graphic right.

In Serious Stats I focused on exploratory plots and how to use basic plotting functions to customize them. I think this was important to include, but one of my regrets was not having enough space to cover a different approach to plotting in R. This is Hadley Wickham’s ggplot2 package (inspired by Leland Wilkinson’s grammar of graphics approach).

In this blog post I’ll demonstrate a few ways that ggplot2 can be used to quickly produce amazing graphics for presentations or publication. I’ll finish by mentioning some pros and cons of the approach.

The main attraction of ggplot2 for newcomers to R is the qplot() quick plot function. Like the R plot() function it will recognize certain types and combinations of R objects and produce an appropriate plot (in most cases). Unlike the basic R plots the output tends to be both functional and pretty. Thus you may be able to generate the graph you need for your talk or paper almost instantly.

A good place to start is the vanilla scatter plot. Here is the R default:

Scatter default in R

Compare it with the ggplot2 default:

Scatter with ggplot2 default

Below is the R code for comparison. (The data here are from hov.csv file used in Chapter 10 Example 10.2 of Serious Stats).

# install and then load the package
install.packages('ggplot2')
library(ggplot2)

# get the data
hov.dat <-  read.csv('http://www2.ntupsychology.net/seriousstats/hov.csv')

# using plot()
with(hov.dat, plot(x, y))

# using qplot()
qplot(x, y, data=hov.dat)


Adding a line of best fit

The ggplot2 version is (in my view) rather prettier, but a big advantage is being able to add a range of different model fits very easily. The common choice of model fit is that of a straight line (usually the least squares regression line). Doing this in ggplot2 is easier than with basic plot functions (and you also get 95% confidence bands by default).

Here is the straight line fit from a linear model:

Linear model fit with ggplot2

qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm')

The geom specifies the type of plot (one with points and a smoothed line in this case) while the method specifies the model for obtaining the smoothed line. A formula can also be added (but the formula defaults to y as a simple linear function of x).
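To make the formula argument concrete, here is a sketch using the same hov.csv data as above. The first call just makes the default formula explicit; the second swaps in a quadratic in x (the choice of a quadratic here is purely illustrative):

```r
library(ggplot2)
hov.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/hov.csv')

# the default straight-line fit, with the formula written out explicitly
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm', formula=y ~ x)

# the same plot but with a quadratic fit via an explicit formula
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm',
      formula=y ~ poly(x, 2))
```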

Loess, polynomial fits or splines

Mind you, the linear model fit has some disadvantages. Even if you are working with a related statistical model (e.g., a Pearson’s r or least squares simple or multiple regression) you might want to have a more data driven plot. A good choice here is to use a local regression approach such as loess. This lets the data speak for themselves – effectively fitting a complex curve driven by the local properties of the data. If this is reasonably linear then your audience should be able to see the quality of the straight-line fit themselves. The local regression also gives approximate 95% confidence bands. These may support informal inference without having to make strong assumptions about the model.

Here is the loess plot:

Loess fit with ggplot2

Here is the code for the loess plot:

qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='loess')

I like the loess approach here because it’s fairly obvious that the linear fit does quite well. Showing the straight-line fit has the appearance of imposing a pattern on the data, whereas a local regression approach illustrates the pattern while allowing departures from the straight-line fit to show through.

In Serious Stats I mention loess only in passing (as an alternative to polynomial regression). Loess is generally superior as an exploratory tool – whereas polynomial regression (particularly quadratic and cubic fits) is more useful for inference. Here is an example of a cubic polynomial fit (followed by R code):

Cubic polynomial fit with ggplot2

qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm', formula=y ~ poly(x, 3))

Also available are fits using robust linear regression or splines. Robust linear regression (see section 10.5.2 of Serious Stats for a brief introduction) replaces the least squares loss function in order to reduce the impact of extreme points. Sample R code (graph not shown):

library(MASS)
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='rlm')

One slight problem here is that the approximate confidence bands assume normality and thus are probably too narrow.

Splines are an alternative to loess that fit sections of simpler curves together. Here is a spline fit with three degrees of freedom:

Spline fit (3 df) with ggplot2

library(splines)
qplot(x, y, data=hov.dat, geom=c('point', 'smooth'), method='lm', formula=y ~ ns(x, 3))

A few final thoughts

If you want to know more the best place to start is with Hadley Wickham’s book. Chapter 2 covers qplot() and is available free online.

The immediate pros of the ggplot2 approach are fairly obvious – quick, good-looking graphs. There is, however, much more to the package and there is almost no limit to what you can produce. The output of the ggplot2 functions is itself an R object that can be stored and edited to create new graphs. You can use qplot() to create many other graphs – notably kernel density plots, bar charts, box plots and histograms. You can get these by changing the geom (or, with certain object types as input, by default).
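For instance, a histogram or kernel density plot is obtained just by changing the geom (a minimal sketch, again using the hov.csv data; the geom names follow the qplot() interface of ggplot2 as described here):

```r
library(ggplot2)
hov.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/hov.csv')

# histogram of x (default binning)
qplot(x, data=hov.dat, geom='histogram')

# kernel density plot of x
qplot(x, data=hov.dat, geom='density')

# box plot of y by group (hypothetical: assumes a grouping factor g,
# which is not in the hov.csv data)
# qplot(g, y, data=hov.dat, geom='boxplot')
```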

The cons are less obvious. First, it takes some time investment to get to grips with the grammar of graphics approach (though this is very minimal if you stick with the quick plot function). Second, you may not like the default look of the ggplot2 output (though you can tweak it fairly easily). For instance, I prefer the default kernel density and histogram plots from the R base package to the default ggplot2 ones. I like to take a bare bones plot and build it up … trying to keep visual clutter to a minimum. I also tend to want black and white images for publication (whereas I would use grey and colour images more often in presentations). This is mostly to do with personal taste.

Pasting Excel data into R on a Mac

When starting out with R, getting data in and out can be a bit of a pain. It shouldn’t take long to work out a convenient method – depending on what OS you use and what other packages you work with.

In my case I prefer to work with Excel spreadsheets (which are versatile and – for the most part – convenient for sharing with collaborators or students). For this reason I mostly work with comma-separated values (.csv) files created in Excel and imported using read.csv(). I even quite like the fact that this method requires me to save the .xls worksheet as a .csv file (as it makes it harder to overwrite the original file when I edit it for R). I know that there are many other methods that I could use, but this works fine for me.

I do however occasionally miss some of the functionality of software such as MLwiN that allows me to paste Excel data directly into it. I’ve seen instructions about how to do this on a Windows machine (e.g., see John Cook’s notes), but a while back I stumbled on a simple solution for the Mac. I’ve forgotten where I saw it (but will add a link as soon as I find it or if someone reminds me). The solution uses read.table() but is a bit fiddly and therefore best set up as a function.

paste.data <- function(header=FALSE) {read.table(pipe("pbpaste"), header=header)}

I’ve included this in my master function list so it can be loaded with other functions from the book and blog.

To use it just copy tab-delimited data (the default for copying from an Excel file or Word table) and call the function in R. The data are then imported as a data frame in R. For an empty call it assumes there is no header and adds default variable names. Adding the argument header=TRUE or just TRUE will treat the first row as variable (column) names for the data frame. Copy some data and try the following:

source('http://www2.ntupsychology.net/seriousstats/SeriousStatsAllfunctions.txt')

paste.data()

paste.data(header = TRUE)
paste.data(TRUE)
paste.data(T)


UPDATE: Ken Knoblauch pointed out an older discussion of this issue in https://stat.ethz.ch/pipermail/r-help/2005-February/066257.html and also noted the read.clipboard() function in William Revelle’s excellent psych package (which works on both PC and Mac systems).
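For completeness, the psych package alternative mentioned above works along similar lines (a sketch only – check ?read.clipboard for the exact arguments in your version of psych):

```r
install.packages('psych')
library(psych)

# copy some tab-delimited data (e.g., from Excel) first, then:
dat <- read.clipboard()
```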

Updating to R 2.15, warnings in R and an updated function list for Serious Stats

Whilst writing the book the latest version of R changed several times. Although I started on an earlier version, the bulk of the book was written with 2.11 and it was finished under R 2.12. The final versions of the R scripts were therefore run and checked using R 2.12 and, in the main, the most recent package versions for R 2.12.

By the time it came to proofreading, R 2.13 was already out, and therefore most of the examples were also checked with that version, but I stuck with R 2.12 on my home and work machines until last week.

In general I don’t see the point of updating to a new version number if everything is working fine. One advantage of this approach is that the version I install will usually have bugs from the initial release already ironed out. That said, new versions of R have (in my experience) been very stable.

I tend to download a new version only when I fall several versions behind or if it is a requirement for a new package or package version. On this occasion it turned out that the latest version of the ordinal package (for fitting ordered logistic regression and multilevel ordered logistic regression models) required a more recent version of R. There are two main drawbacks with updating. The first is reinstalling all your favourite package libraries (and generally getting everything set up how you like it). The second is dealing with changes in the way R behaves.

For re-installing all my packages I use a very crude system. For any given platform (Mac OS, Windows or Linux) there are cleverer solutions (that you can find via google). My solution works cross-platform and is fairly robust, if inelegant. I simply keep an R script with a number of install.packages() commands such as:

install.packages(c('lme4', 'exactci', 'pwr', 'arm'))

I run these in batches after installing the new R version. I find this useful because I’m forever installing R on different machines (so far Mac OS or Windows) at work (e.g., for teaching or if working away from the office or on a borrowed machine). I can also comment the file (e.g., to note if there are issues with any of the packages under a particular version of R). This usually suffices for me as I usually run a ‘vanilla’ set-up without customization. It would be more efficient for me to customize my set-up, but for teaching purposes I find it helps not to do that. Likewise, I tend to work with a clean workspace (and use a script file to save R code that creates my workspaces). I should stress that this isn’t advice – and I would work differently myself if I didn’t use R so much for teaching.

One of the first things that happened after installing R 2.15 was that some of my own functions started producing warnings. R warnings can be pretty scary for new users but are generally benign. Some of them are there to detect behaviour associated with common R errors or common statistical errors (and thus give you a chance to check your work). Others alert you to non-standard behaviour from a function in R (e.g., changing the procedure it uses when sample sizes are small). Yet others offer tips on writing better R code. Only very rarely are they an indication that something has gone badly wrong.

Thus most R warnings are slightly annoying but potentially useful. In my case R 2.15 disliked a number of my functions of the form:

mean(data.frame)

The precise warning was:

Warning message:
mean() is deprecated.
Use colMeans() or sapply(*, mean) instead.
 

All the functions worked just fine, but (after my initial irritation had receded) I realized that colMeans() is a much better function. It is more efficient but, even better, it is obvious that it calculates the means of the columns of a data frame or matrix. With the more general mean() function it is not immediately obvious what will happen when it is called with a data frame as an argument. It is also trivial to infer that rowMeans() calculates the row means.
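A minimal sketch of the recommended replacements (using a small made-up data frame, since this is exactly the case where mean() is deprecated):

```r
# a toy data frame with two numeric columns
dat <- data.frame(a = c(1, 2, 3), b = c(4, 5, 6))

# column means the recommended way
colMeans(dat)       # a = 2, b = 5

# the equivalent using sapply(), as the warning suggests
sapply(dat, mean)

# and the analogous row means
rowMeans(dat)       # 2.5, 3.5, 4.5
```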

I have now re-written  a number of functions to deal with this problem and to make a few other minor changes. The latest version of my functions can be loaded with the call:

source('http://www2.ntupsychology.net/seriousstats/SeriousStatsAllfunctions.txt')

I will try and keep this file up-to-date with recent versions of R and correct any bugs as they are detected.

The functions can be downloaded as a text file from:

http://sites.google.com/site/seriousstats2012/home/
