The Tiny Shifts in Hazard That Still Make a Difference
I've noticed at times a frustration with the tendency to tie any abnormal hazard activity to climate change before we've had a chance to assess climate's impact properly. I'm certainly against that kind of knee-jerk behaviour, but how much does our risk landscape have to shift before we start seeing changes that should make us sit up and re-assess our exceedance probability (EP) curves?
I'm going to demonstrate this with a short example, again using the data I've so often referred to from the C3S seasonal forecasts: a 600-year simulation of winter rainfall in the UK covering 1993-2016. There are 25 simulations of each of the 24 years, and each simulation is a September forecast from which I've taken the October-to-March rainfall accumulation for a location in central southern England.
The chart below splits the data into two 300-year chunks, 1993-2004 and 2005-2016 (12 years, 25 simulations of each year = 300 years), and shows the change in risk between these two datasets:
We can see that the EP curves of rainfall for these 300-year datasets (using a simple methodology where, for example, the 10th-ranked year = 300/10 = 30-year return period) are pretty similar. The mean rainfall increases by 1.2% between the two periods, and we see an upward shift in rainfall of anything up to 4% along the EP curve, even between these two fairly short simulation periods. As we head above, say, 1-in-30 we start to hit issues with dataset size, so I'm not going to talk about tail EP-curve shifts here. The thing I found most interesting was that, over such a short time horizon, we're still seemingly seeing a small upward shift in risk - though dataset size remains a concern on this front too.
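The ranking methodology above is simple enough to sketch in a few lines. The rainfall values here are random numbers standing in for the real C3S simulations, so the magnitudes are purely illustrative:

```python
import numpy as np

# Hypothetical stand-in for the 300 simulated winter rainfall totals (mm);
# the real C3S values aren't reproduced here, so these are illustrative.
rng = np.random.default_rng(42)
rainfall = rng.normal(450, 100, size=300)

# Empirical EP methodology from the text: rank the simulated years wettest
# first; the k-th ranked year out of n is assigned a return period of n/k.
ranked = np.sort(rainfall)[::-1]
n = len(ranked)
return_periods = n / np.arange(1, n + 1)

# e.g. the 10th-ranked year gets a 300/10 = 30-year return period
print(return_periods[9], ranked[9])
```

Plotting `ranked` against `return_periods` gives the empirical EP curve for each 300-year chunk.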
So, I wanted to take a slightly different look at this data to understand what a shift in mean risk does to the tail - the bit we're interested in in cat modelling - using the results above. I've taken the full 600-year dataset and fitted a normal distribution to it. I'm not claiming a normal distribution is the best fit; I just wanted to understand how little shifts in mean risk might affect the tail of a distribution.
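Fitting the normal amounts to taking the sample mean and standard deviation. A minimal sketch, with random numbers again standing in for the real 600-year dataset:

```python
import random
import statistics

# Hypothetical stand-in for the 600 winter rainfall totals (mm); the real
# C3S data isn't reproduced here.
random.seed(1)
data = [random.gauss(450, 100) for _ in range(600)]

# Fit a normal distribution by matching moments: the sample mean and
# standard deviation become the distribution's mu and sigma.
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)
print(f"fitted normal: mu = {mu:.1f}mm, sigma = {sigma:.1f}mm")
```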
What I then did was simply increase the mean of the normal distribution by 1.2%, akin to the shift in rainfall risk we saw from our data above (I've not plotted the adjusted distribution here, as the change in the curve is almost imperceptible). Plotted below are the exceedance probabilities of each of the rainfall intervals in the chart above, for a) the mean rainfall and b) the mean rainfall increased by 1.2%:
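The exceedance-probability comparison can be sketched with the standard library's `NormalDist`. The `mu` and `sigma` values here are assumed, illustrative parameters rather than the actual fit:

```python
from statistics import NormalDist

# Assumed illustrative parameters for the fitted normal (mm) -- not the
# real values fitted to the C3S data.
base = NormalDist(mu=450, sigma=100)
shifted = NormalDist(mu=450 * 1.012, sigma=100)  # 1.2% increase in the mean

# Exceedance probability P(rainfall > x), and its reciprocal, the return
# period, under each scenario.
for x in (600, 650, 700):
    ep_base = 1 - base.cdf(x)
    ep_shifted = 1 - shifted.cdf(x)
    print(f"{x}mm: {1 / ep_base:.0f}-year -> {1 / ep_shifted:.0f}-year")
```

Even with an unchanged `sigma`, nudging the mean up drags every tail return period down a little.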
You will also see I've plotted numbers alongside some parts of the curve: these show the return period of rainfall greater than the value on the x-axis for the mean (grey) and mean+1.2% (blue) scenarios. What interested me is that even for minuscule shifts in hazard that we might dismiss, the frequency of tail events is shifting - not massively, but enough to make you take notice. (Take a loss EP curve for your favourite peril/region and compare the losses at a 66-year and a 77-year return period: I imagine the difference is a lot larger than the 1.2% change in hazard we're seeing in this example!)
And these changes are just for a 1.2% shift in risk. The chart below shows what happens if you shift the mean of the hazard data by 5% rather than 1.2% - again, not a mind-blowingly big increase in hazard:
For this 5% increase we're seeing a doubling of the likelihood of a 200-year event. Once-in-a-lifetime events become more like once-in-a-generation: all for a 5% change in hazard (in this simple example, at least).
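To see where the doubling comes from in the toy normal model, find the baseline 1-in-200-year rainfall level and ask how often that same level is exceeded once the mean rises by 5% (again using assumed illustrative parameters):

```python
from statistics import NormalDist

# Assumed illustrative parameters (mm), as before -- not the real fit.
base = NormalDist(mu=450, sigma=100)
shifted = NormalDist(mu=450 * 1.05, sigma=100)  # 5% increase in the mean

# Rainfall level exceeded with probability 1/200 under the baseline
x200 = base.inv_cdf(1 - 1 / 200)

# Exceedance probability of that same level under the shifted mean
ep_shifted = 1 - shifted.cdf(x200)
print(f"1-in-200-year level ({x200:.0f}mm) becomes roughly a "
      f"1-in-{1 / ep_shifted:.0f}-year event after a 5% mean shift")
```

With these assumed parameters the exceedance probability of the old 200-year level roughly doubles, which is the behaviour described above.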
So the next time someone points out that those wildfires in 2018 and 2019 were probably down to luck, or that Ophelia's obscure location in the far-east Atlantic was a one-off, or that it's just freakish that we've had one of the wetter winters followed by one of the driest springs, maybe take a step back and have a think: might we be seeing manifestations of tail risk moving up the EP curve as our underlying risk subtly shifts?