By Richard Dixon

Understanding cat risk: every little helps

For the more "mature" perils, where models are 15, 20 or 25 years old, there is plenty we have learnt from academia that has, iteratively, fed into the models and improved them. Add to this better reporting and recording of loss data, and more and more academics moving into the growing world of catastrophe modelling, and our models are better than ever. But what can we do to increase their robustness further? The answer doesn't necessarily lie within the modelling firms themselves: the onus, I believe, is on us as end-users.

The typical pattern in catastrophe modelling is that new scientific developments worthy of inclusion in cat models are digested and implemented by the catastrophe modelling companies. This takes time. Modelling companies have a lot on their plate: the wider the geographic and scientific reach of their models, the more there is to update. It's a wonder they have any time for their own research as well as digesting new academic research to include in model updates. This is where those of us in the industry can help: by partnering with academia to produce science that better informs these models.

Prompted by a meeting on industry-academia interaction organised last February by NERC (the Natural Environment Research Council), the first steps were taken in the summer of 2017 at a workshop held at the OASIS offices, facilitated by Claire Souch (whom many of you will know from her time at RMS, SCOR and AgRisk) and myself. We convened a group of cat model users from around Europe with a background in meteorology to brainstorm a list of questions — "things that we'd like to know more about" in European windstorm — that could be couched in terms of research projects with academia.

The concept was very simple: come up with a list of questions (we ended up with 12), then have each participant take the list away and decide how they would split a notional €1000 of funding between the various studies. In all, 15 companies were represented (Allianz, Amlin, Aon Benfield, Aspen, AXA, Axis, CatInsight, Guy Carpenter, Lloyd's, Partner Re, SCOR, Swiss Re, Tokio Millennium Re, Willis, XL Catlin).
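The tallying behind this exercise can be sketched as follows. This is a minimal illustration only: the question names, votes and amounts below are invented for the example, not the actual workshop data.

```python
# Sketch of the funding-vote exercise: each participant splits a notional
# EUR 1000 between the candidate research questions, and the summed totals
# produce the ranked list. All names and figures here are hypothetical.
from collections import defaultdict

def rank_questions(allocations):
    """Sum each participant's EUR 1000 split and rank questions by total funding."""
    totals = defaultdict(float)
    for split in allocations:
        # Each participant's allocation must add up to the full EUR 1000.
        assert abs(sum(split.values()) - 1000) < 1e-6, "vote must total EUR 1000"
        for question, amount in split.items():
            totals[question] += amount
    # Highest-funded question first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Two illustrative (invented) votes:
votes = [
    {"natural variability": 500, "tail risk": 300, "wind-flood correlation": 200},
    {"natural variability": 400, "tail risk": 400, "wind-flood correlation": 200},
]
print(rank_questions(votes))
```

The appeal of the format is that it forces prioritisation: a participant cannot simply mark every question as important, because every euro given to one study is a euro withheld from another.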

I've shown the results below, ordered by the amount of funding allocated:

Briefly, here are the top three that the group felt were most important to understand:

1) Understanding natural variability

This centres on whether the windstorm activity we've seen in recent years (lower overall activity, more storms in France, fewer in the UK) is part of a trend (perhaps driven by climate change) or just natural variability. If it's more likely a trend, then including it in catastrophe models or in views of risk seems sensible. What would happen if we re-ran history 50 times? Would we see the same behaviour, or is what we've seen in the last 20-25 years just down to the capriciousness of natural variability?

2) Understanding tail risk

A concern raised by several of the group was that windstorm footprints in the tail of catastrophe models tended not to resemble anything we've seen in history, and some worried this was distorting the correlation seen in catastrophe models. We are now entering a realm of high-resolution climate models, and it was felt a "second set of eyes" from such models would be useful. (This is something I am working on with the University of Reading, Climate-KIC and the Lighthill Risk Network in Spring 2018; it will be the subject of a future blog post.)

3) Correlation between wind and flood

The 1989/90 winter season was a benchmark for multiple notable windstorms, yet that season isn't remembered for flooding. The 2013/14 season, by contrast, was noted for its flooding; it was also stormy, but nowhere near as notably stormy as 1989/90. The group wanted to understand more about what happens in the tail: do notably active (i.e. damaging) windstorm seasons go hand-in-hand with flooding, or is the correlation weaker?


This list will in due course be hosted permanently on the Lighthill Risk Network website, there for all to see and use as they see fit. But do you disagree (or indeed agree) with this order? If so, I would love to hear from you, especially if you're a cat modelling practitioner but not a meteorological specialist: I can send you the list and the explanations of each topic to get your take on where you'd spend your nominal €1000.

This work is hopefully just the beginning. Plans are afoot to organise a follow-up question-gleaning exercise on flood, and in an ideal world it would move onto other perils/regions. If necessary, we can revisit the question list every, say, two years to freshen it up.

Already, the Natural Environment Research Council are potentially looking to mould a research call around this. It would be terrific if this went ahead: industry-led, well-targeted questions that feed into partnered academic research. The questions posed here have answers that may help improve our catastrophe models or at the very least help those of us in risk-taking entities by improving the scientific knowledge behind our views of risk.

This post originally appeared on the Simplitum/Nasdaq ModEx blog.
