Detecting Events in Gridded Data

AJ Smit and Robert W Schlegel



Because of human-induced climate change, we anticipate that extreme events will occur more frequently and become more intense. Here I address this expectation using gridded SST data, which is the only way that we can assess whether this trend is unfolding across large ocean regions. Using the gridded 0.25 degree Reynolds OISST, I will find extreme SST events (marine heatwaves; MHWs) around South Africa by applying the detect_event() function pixel by pixel to the data we downloaded in the previous vignette. After detecting the events, I will fit a generalised linear model (GLM) to calculate rates of change in some MHW metrics, and then plot the estimated trends.


Gridded data

In the previous vignette we saw how to download and prepare OISST data. In this vignette we will use the data we downloaded for our example on how to detect MHWs in gridded data. Because we saved our data as an .Rda file, loading it into R is easy:

OISST <- read_rds("~/Desktop/OISST_vignette.Rda")
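The code throughout this vignette assumes that the following packages are attached (a sketch of the likely setup; note that plyr is deliberately never attached, only accessed via `plyr::`):

```r
library(dplyr)      # data manipulation verbs and the pipe
library(readr)      # read_rds()
library(lubridate)  # year()
library(purrr)      # map()
library(tidyr)      # nest() and unnest()
library(ggplot2)    # plotting
library(heatwaveR)  # ts2clm() and detect_event()
```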

Event detection

Two good choices: purrr vs. plyr

When we want to make the same calculation across multiple groups of data within one dataframe we have two good options available to us. The first is to make use of the map() suite of functions found in the purrr package. This is a fast, tidyverse-friendly approach to splitting up tasks. The other good option is to go back in time a bit and use the ddply() function from the plyr package. This is arguably a better approach as it makes it very easy to use multiple cores to detect the MHWs. The catch is that one must never load the plyr library directly, as it has some fundamental inconsistencies with the tidyverse. We will see below how to perform these two different techniques without causing ourselves any headaches.

It is a little clumsy to use multiple functions at once with the two methods we will look at here, so we will first combine the calculations we want to make into one wrapper function:
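A minimal sketch of such a wrapper, here named event_only() (the name and the climatology baseline period are assumptions; adjust them to match your data):

```r
event_only <- function(df) {
  # first calculate the climatology; the baseline period here is an assumption
  clim <- heatwaveR::ts2clm(data = df,
                            climatologyPeriod = c("1982-01-01", "2011-12-31"))
  # then detect the events against that climatology
  event <- heatwaveR::detect_event(data = clim)
  # return only the dataframe of event metrics
  return(event$event)
}
```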

Up first we have the purrr method:
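A sketch of the purrr approach, assuming the wrapper function is named event_only() and that OISST holds lon, lat, t, and temp columns:

```r
MHW_result <- OISST %>% 
  dplyr::group_by(lon, lat) %>%                       # one group per pixel
  tidyr::nest() %>%                                   # nest each pixel's time series
  dplyr::mutate(event = purrr::map(data, event_only)) %>% 
  dplyr::select(-data) %>%                            # drop the raw time series
  tidyr::unnest(event)                                # expand the event results
```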

Running the calculations on only one of my 2.1 GHz cores, as seen above, takes ~174 seconds.

If we want to use multiple cores we will need to use the plyr technique:
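A sketch of the parallel plyr call, again assuming the event_only() wrapper; registerDoParallel() comes from the doParallel package:

```r
# register three cores for parallel processing
doParallel::registerDoParallel(cores = 3)

# note that plyr is accessed with ::, never attached
MHW_result <- plyr::ddply(OISST, c("lon", "lat"), event_only, .parallel = TRUE)
```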

The plyr technique takes ~110 seconds on my computer using three cores. It is not simply three times faster because using multiple cores incurs some loss of efficiency: the computer must remember which results are meant to go where so that it can stitch everything back together for you. This takes very little memory, but over large jobs it can start to become problematic. Occasionally 'slippage' can occur as well, where an entire task is forgotten. This is very rare, but it does happen, and it is partly what makes purrr a viable option, as purrr does not have this problem. The other reason is that purrr performs its calculations much more efficiently than plyr. But what if we could have the best of both worlds?

A harmonious third option

As one may see above, running these calculations on a very large (or even global) gridded dataset can quickly become very time consuming. While running these calculations myself on the global OISST dataset I have found that the fastest option is to combine the two approaches above. In my workflow I save each longitude slice of the global OISST dataset as a separate file and use the purrr method on each individual file, while using the plyr method to run as many files at once as my core limit will allow. One may not, however, do this the other way around and use purrr to run multiple plyr calculations at once. This will confuse your computer and likely cause a stack overflow, which sounds more fun than it actually is… as I have had to learn.

In order to happily combine these two options into one we will need to convert the purrr code we wrote above into its own wrapper function, which we will then call on a stack of files using the plyr technique. Before we do that we must first create the aforementioned stack of files.
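One possible sketch for creating that stack, splitting the grid by longitude and saving each slice as its own file (the directory and naming scheme are hypothetical):

```r
# split the full grid into one dataframe per longitude step,
# then write each slice to its own .Rds file
OISST %>% 
  split(.$lon) %>% 
  purrr::iwalk(~saveRDS(.x, file = paste0("~/Desktop/OISST_lon_", .y, ".Rds")))
```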

This may initially seem like an unnecessary extra step, but when one is working with time series data it is necessary to have all of the dates at a given pixel loaded at once. Unless one is working from a server/virtual machine/supercomputer this means that one will often not be able to comfortably hold an entire grid for a study area in memory at once. Having the data accessible as thin strips like this therefore makes life somewhat easier. And as we see in the code chunk below it also (arguably) allows us to perform the most efficient calculations on our data.
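Putting the two together, a sketch in which a purrr-based wrapper (detect_file(), a hypothetical name) handles one slice file at a time, while plyr farms the files out across cores (file paths and core count are assumptions):

```r
# detect events in a single slice file using the purrr technique
detect_file <- function(file_name) {
  OISST_slice <- readRDS(file_name)
  OISST_slice %>% 
    dplyr::group_by(lon, lat) %>% 
    tidyr::nest() %>% 
    dplyr::mutate(event = purrr::map(data, event_only)) %>% 
    dplyr::select(-data) %>% 
    tidyr::unnest(event)
}

# run one file per core with the plyr technique
doParallel::registerDoParallel(cores = 3)
slice_files <- data.frame(file_name = dir("~/Desktop", pattern = "OISST_lon_",
                                          full.names = TRUE),
                          stringsAsFactors = FALSE)
MHW_result <- plyr::ddply(slice_files, "file_name",
                          function(df) detect_file(df$file_name),
                          .parallel = TRUE)
```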

Not only is this technique computationally faster; because it only loads one of our small slice files at a time, it is also much lighter on memory use. To maximise efficiency even further I would recommend writing out this full workflow in a stand-alone script and then running it with source() directly from an R terminal.

Trend detection

With our calculations made we will now look at how to fit some GLMs to the results in order to determine long-term trends in MHW occurrence.

Up first we see how to calculate the number of events that occurred per pixel:

# summarise the number of unique longitude, latitude and year combinations:
event_freq <- MHW_result %>% 
  mutate(year = year(date_start)) %>% 
  group_by(lon, lat, year) %>% 
  summarise(n = n())

# create complete grid for merging with:
sst_grid <- OISST %>% 
  select(lon, lat, t) %>% 
  mutate(t = lubridate::year(t)) %>% 
  dplyr::rename(year = t) %>% 
  distinct()

# and merge:
OISST_n <- left_join(sst_grid, event_freq, by = c("lon", "lat", "year")) %>% 
  mutate(n = ifelse(is.na(n), 0, n))

Then we specify the particulars of the GLM we are going to use:

lin_fun <- function(ev) {
  mod1 <- glm(n ~ year, family = poisson(link = "log"), data = ev)
  # extract the slope coefficient and its p-value
  tr <- data.frame(slope = summary(mod1)$coefficients[2,1],
                   p = summary(mod1)$coefficients[2,4])
  return(tr)
}

In the following code chunk we will see how to run the GLM on each pixel using the purrr technique:

OISST_nTrend <- OISST_n %>% 
  group_by(lon, lat) %>% 
  nest() %>% 
  mutate(res = map(data, lin_fun)) %>% 
  select(-data) %>%
  unnest(res) %>% 
  mutate(pval = cut(p, breaks = c(0, 0.001, 0.01, 0.05, 1)))

Or if one prefers, we may use the plyr technique:

OISST_nTrend <- plyr::ddply(OISST_n, c("lon", "lat"), lin_fun)
OISST_nTrend$pval <- cut(OISST_nTrend$p, breaks = c(0, 0.001, 0.01, 0.05, 1))

With small summary statistic calculations there is very little difference in the timing between techniques. Use whichever makes you the happiest! The plyr code is certainly shorter, but I personally find the purrr code much easier to read. This is particularly important for group projects.

Visualising the results

Let’s finish this vignette by visualising the long-term trends in the annual occurrence of MHWs per pixel in the chosen study area. First we will grab the base global map from the maps package:

# The base map
map_base <- ggplot2::fortify(maps::map(fill = TRUE, plot = FALSE)) %>% 
  dplyr::rename(lon = long)

Then we will create two maps that we will stick together using ggpubr. The first map will show the slope of the count of events detected per year over time as shades of red, and the second map will show the significance (p-value) of these trends in shades of grey.

map_slope <- ggplot(OISST_nTrend, aes(x = lon, y = lat)) +
  geom_rect(size = 0.2, fill = NA,
       aes(xmin = lon - 0.1, xmax = lon + 0.1, ymin = lat - 0.1, ymax = lat + 0.1,
           colour = pval)) +
  geom_raster(aes(fill = slope), interpolate = FALSE, alpha = 0.9) +
  scale_fill_gradient2(name = "count/year (slope)", high = "red", mid = "white",
                       low = "darkblue", midpoint = 0,
                       guide = guide_colourbar(direction = "horizontal",
                                               title.position = "top")) +
  scale_colour_manual(breaks = c("(0,0.001]", "(0.001,0.01]", "(0.01,0.05]", "(0.05,1]"),
                      values = c("firebrick1", "firebrick2", "firebrick3", "white"),
                      name = "p-value", guide = "none") +
  geom_polygon(data = map_base, aes(group = group), 
               colour = NA, fill = "grey80") +
  coord_fixed(ratio = 1, xlim = c(13.0, 23.0), ylim = c(-33, -42), expand = TRUE) +
  labs(x = "", y = "") +
  theme_bw() +
  theme(legend.position = "bottom")

map_p <- ggplot(OISST_nTrend, aes(x = lon, y = lat)) +
  geom_raster(aes(fill = pval), interpolate = FALSE) +
  scale_fill_manual(breaks = c("(0,0.001]", "(0.001,0.01]",
                               "(0.01,0.05]", "(0.05,1]"),
                    values = c("black", "grey20", "grey40", "white"),
                    name = "p-value",
                    guide = guide_legend(direction = "horizontal",
                                         title.position = "top")) +
  geom_polygon(data = map_base, aes(group = group), 
               colour = NA, fill = "grey80") +
  coord_fixed(ratio = 1, xlim = c(13.0, 23.0), ylim = c(-33, -42), expand = TRUE) +
  labs(x = "", y = "") +
  theme_bw() +
  theme(legend.position = "bottom")

map_both <- ggpubr::ggarrange(map_slope, map_p, align = "hv")

From the figure above we may see that the entire study area shows significant (p ≤ 0.05) increases in the annual count of MHWs. This is generally the case for the entire globe. Not shown here is the corresponding significant increase in the intensity of MHWs.