Richard Dixon

Climate modelling and catastrophe modelling


Since 1990, the UK hasn't experienced a really severe windstorm. You can point the finger at Kyrill in 2007 or Christian in 2013, but neither of these really holds a candle to the October Storm of 1987 or Daria in 1990. Meanwhile France, with Lothar, Martin, Xynthia and Klaus since 1999, has had more than its fair share. Is this a trend, or is it just down to sheer bad luck for France?

The above question is just one of many on which we can potentially inform ourselves better (if not necessarily get a firm "yes" or "no") by adopting some of the methods used in the climate modelling community. One of the most promising aspects of climate modelling is the sheer volume of global climate model data out there, even though much of that data was produced with entirely different research interests in mind from the questions we want to answer in the catastrophe modelling community.

The other element of contemporary climate modelling studies is the increasingly high resolution of modern climate models, which - in my opinion - sets them apart from those currently used in catastrophe modelling: more about that in the second half of this post.

1) What if we'd had our time all over again?

No, don't worry: I'm not going to go all philosophical and carpe diem on you here. I'm talking about the idea of "if history had repeated itself, what would have happened?" - the idea of counterfactual analysis (for some further reading, have a look at this from Gordon Woo at RMS).

I attended the Royal Meteorological Society annual conference in July of this year, and one piece of work from the UK Meteorological Office stuck out as research whose methodology could greatly benefit our understanding of:

  • "what's happening now" in cat risk - e.g. the France/UK windstorm example above

  • return periods of recent historical events in a changing climate

  • what sort of events we might see in the tail

The premise is remarkably simple, but potentially very powerful: simply re-simulate the last 30 years multiple times and understand what the model produces. By re-running recent history several times, you are able not only to understand how rare any "real" historical events have been, but also to couch them in the context of the current changing climate. A useful by-product is that you can also identify possible tail events and interrogate how they compare to the actual historical events we understand.
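As a rough illustration of the idea - and emphatically not the Met Office's actual method - here's a minimal Python sketch. It assumes you have one observed 30-year record of some hazard metric (monthly rainfall totals, say) plus an ensemble of re-simulated 30-year "histories"; both are stand-in random arrays here. Pooling the simulations lets you ask how rare the observed record really is, and how many simulated years go beyond anything yet observed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins: 30 observed "years" and 100 re-simulations of the
# same 30-year period (in reality these would come from the climate model
# ensemble, not a random number generator).
observed = rng.gumbel(loc=100.0, scale=20.0, size=30)
simulations = rng.gumbel(loc=100.0, scale=20.0, size=(100, 30))

observed_record = observed.max()

# Pool all simulated years into one large sample of "plausible histories".
pooled = simulations.ravel()

# Empirical exceedance probability of the observed record, per year.
exceed_prob = (pooled >= observed_record).mean()
return_period = np.inf if exceed_prob == 0 else 1.0 / exceed_prob

# "New" extremes: simulated years worse than anything yet observed,
# i.e. candidate tail events to interrogate further.
n_unprecedented = int((pooled > observed_record).sum())

print(f"Observed record:           {observed_record:.1f}")
print(f"Estimated return period:   {return_period:.0f} years")
print(f"Unprecedented model years: {n_unprecedented} of {pooled.size}")
```

The big assumption baked into a sketch like this is that the model's years are exchangeable with the observed ones - in practice you'd want to check and bias-correct the model climatology before trusting the return periods it implies.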

The chart below is taken from the paper presented at the conference by Thompson et al. (2017). It's essentially showing simulations of rainfall and highlighting how the model they used produces multiple "new" extreme events. There's more detail in the text - and the paper goes on to talk about calculating the return period of a recent extreme event in south-east England.

Figure 1: Text taken from paper: "Unprecedented monthly rainfall in all winter months. South east England monthly rainfall totals from observations (grey) and the model (red) for October to March. The box represents the interquartile range and the range of the whiskers represents the minimum and maximum monthly rainfall totals. Red dots indicate model months with greater total rainfall than has yet been observed and ticks on the upper observations line indicate values in the upper quartile of events. For January the ticks on the model line indicate months above the observed record prior to 2014 and the grey dot above the observations indicates the record observed monthly rainfall of January 2014".

This methodology could have multiple uses to help answer some of the pressing questions in our current climate. What about the hurricane drought? Can we use multiple simulations of recent hurricane seasons to understand whether the large-scale atmospheric flows that drove all the storms away from land still exist in those simulations - or whether the last 12 years of quiescence is just the capriciousness of natural atmospheric variability rather than part of some trend towards lower hurricane activity?
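To give that question a concrete flavour, here's a deliberately crude back-of-envelope sketch - a toy Poisson null model with made-up numbers, much simpler than the dynamical re-simulation approach described above. It just asks: if landfalls occurred at some assumed average rate with no trend, how often would a 12-year drought appear by chance?

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical assumption: major US hurricane landfalls behave roughly like a
# Poisson process with ~0.6 landfalls per season (an illustrative figure only).
landfall_rate = 0.6
n_sims = 100_000          # number of simulated 12-year periods
drought_years = 12

# Simulate landfall counts for each season of each 12-year period.
counts = rng.poisson(landfall_rate, size=(n_sims, drought_years))

# A "drought" = no major landfalls in any of the 12 seasons.
p_drought = (counts.sum(axis=1) == 0).mean()

print(f"Chance of a {drought_years}-year major-landfall drought: {p_drought:.2%}")
# Under these toy assumptions the analytic answer is exp(-0.6 * 12), roughly
# 0.07% - small enough that it's worth hunting for a physical explanation.
```

If the chance under a simple null model like this is tiny, that strengthens the case for going looking in the re-simulations for the large-scale steering flows that kept storms away from land.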

There are many outstanding questions that could, potentially, be informed by this approach.

2) The role of resolution

Of course, climate model use in catastrophe modelling is old news: we've been leaning on Numerical Weather Prediction technology for over a decade now in the insurance industry and it's taken our modelling of extreme events to the next level. However, the most contemporary climate models are different beasts, driven by relatively high model resolution and immense computing power and data storage facilities behind the scenes.

A recent gathering of 15 industry meteorologists (more about this another time: the results will also be appearing on the Lighthill Risk Network website) prioritised their "top questions to understand more about in European windstorm". The second-placed topic was "Understanding better what windstorms look like in the tail". This was driven by concerns from some of the group that windstorms in the tail of catastrophe models don't necessarily resemble the sort of storms we've seen in recent history. (I always recommend eyeballing cat model tail events as a simple validation tool - have a look for yourself in this example.)
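On that validation point, here's a minimal sketch of what "eyeballing the tail" can look like in practice. The event loss table, its column names and the footprint file names are all invented for illustration - the idea is simply to pull out the handful of stochastic events that drive the tail so their footprints can be inspected against storms we know.

```python
import pandas as pd

# Hypothetical event loss table: one row per stochastic event.
# Column names and footprint file names are invented for illustration.
elt = pd.DataFrame({
    "event_id":   [101, 102, 103, 104, 105],
    "loss_gbp_m": [5200.0, 350.0, 4100.0, 80.0, 2600.0],
    "footprint":  ["ws_101.nc", "ws_102.nc", "ws_103.nc",
                   "ws_104.nc", "ws_105.nc"],
})

# Pull out the events driving the tail - these are the footprints worth
# opening and comparing against storms we know (Daria, Lothar, Kyrill, ...)
# for shape, track speed and spatial extent.
top_tail = elt.sort_values("loss_gbp_m", ascending=False).head(3)
print(top_tail[["event_id", "loss_gbp_m", "footprint"]])
```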

This topic of climate model resolution and its impact on cat risk modelling is something I am currently exploring as a Visiting Research Fellow at the University of Reading. The aim is to understand better how our choice of resolution in climate models impacts our view of risk. My basic concern is that we use models forced by global climate models whose resolution may be too low to develop fully realistic populations of events. See the schematic below for a simplistic view of the hypothesis.

Figure 2: A hypothesis of how the resolution of the climate model might influence the shape of windstorm footprints: the lower the resolution (left plot), the more slow-moving and "round" the footprints become, as opposed to the fast-moving, often narrow footprints we see in "reality" (right plot). The shape of these footprints obviously could have a notable impact on cross-country correlation in the tail, amongst other things.

So - the concerns shared by the group of meteorologists above about storm footprint shape could be directly related to the choice of model resolution that forces the catastrophe models we use.
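One way that hypothesis could eventually be tested is sketched below, under purely hypothetical assumptions: compute a simple shape metric for the footprint area exceeding a gust threshold - here the elongation of that area along its principal axes - and compare it between footprints generated at low and high resolution. The grids, the threshold and the random stand-in data are all illustrative, not anything from the Reading project.

```python
import numpy as np

def footprint_elongation(gust_field, threshold=25.0):
    """Crude shape metric for the area exceeding a gust threshold (m/s):
    ratio of the principal-axis spreads of the exceedance points.
    Values near 1 suggest a 'round' footprint, larger values an elongated one."""
    ys, xs = np.where(gust_field >= threshold)
    if ys.size < 3:
        return np.nan
    coords = np.column_stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=0)
    # Eigenvalues of the coordinate covariance give the squared spreads
    # along the footprint's long and short axes.
    eigvals = np.linalg.eigvalsh(np.cov(coords, rowvar=False))
    return float(np.sqrt(eigvals[-1] / eigvals[0]))

# Hypothetical gust footprints (random fields stand in for real model
# output purely so the sketch runs end to end).
rng = np.random.default_rng(1)
coarse = rng.normal(18.0, 5.0, size=(40, 60))     # low-resolution grid
fine   = rng.normal(18.0, 5.0, size=(400, 600))   # high-resolution grid

print("coarse-grid elongation:", footprint_elongation(coarse))
print("fine-grid elongation:  ", footprint_elongation(fine))
```

With real model footprints in place of the random fields, systematically lower elongation at coarse resolution would be one line of evidence for the "round, slow-moving footprint" bias sketched in Figure 2.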

Even though the climate models we use to inform risk often then go through a second phase of "downscaling" to a much higher resolution, it's the initial resolution of the climate model that might lead to biases in the footprints we use for risk assessment. Being a little controversial, to a certain extent it could be a numerical modelling case of "rubbish in, rubbish out" - although "rubbish" here is a little harsh.

However, this is simply a hypothesis, nothing more - but it's part of a topic that hasn't really been discussed widely and that I would like to see talked about more in the crossover between industry and academia. I hope that by the end of this research project there will be some useful guidelines and caveats on the use of climate models for catastrophe risk studies. And remember - climate modelling isn't just the preserve of European windstorm: US Winterstorm and US Tornado-Hail modelling also have, in some cases, climate modelling as their backbone.

It could well be that the above hypothesis is a load of junk - but any understanding that comes out of it, along with the chance to dig into high-resolution climate modelling studies, will hopefully benefit us all.

 

There will be a session on high-resolution climate modelling at the Oasis Conference on 5th-6th September this year, where the Met Office work mentioned above will be presented and I'll talk more about these thoughts.

Bibliography

Thompson et al. (2017): High risk of unprecedented UK rainfall in the current climate. Nature Communications 8, Article 107

