If you want the executive summary, the title gives it away: combining the hazard from one source with the vulnerability from another doesn't necessarily give you a new model, much as it would be a wonderfully efficient way for us all to create new views of risk.
This article is intended to highlight the caveats around combining different views of hazard and vulnerability, with an eye on an exciting, impending era in which Open Source catastrophe modelling is increasingly coming into focus as a viable second set of eyes on existing models.
I want to use a very simple example: building a hurricane model using the same loss data and events, but with subtle differences in the hazard. (Disclaimer: I'm well aware that Cat Models are much more complicated to build than this simple example!)
A large housing estate with total rebuild cost of $100m close to the coastline (see no-expense-spared schematic on the right) has reported its collective damage for four landfalling hurricanes over the space of 20 years. The information is used by two catastrophe modelling companies, Cat Guys and Cat Shack.
The table below summarises the historical information and damage that these two modelling companies have to use. The winds are reported in the usual convention of the 1-minute sustained wind speed over the sea at the coastline:
Let's have a look at how Cat Guys formulate their hazard and vulnerability.
For the hazard in the housing area, they have to work out how strong the winds would have been in the housing estate, given the increased surface roughness there compared to the exposed coastline. Their estimates, using their own roughness calculations, are below, along with the reported damage and the expected frequency of such winds based on the historical information in the table above.
Given the historical frequency of the various categories of hurricane, their hazard EP curve at the housing estate is shown below. Taking the example of the 76.5 mph point: winds of at least 76.5 mph at the site are produced not just by the 4 Cat 1 hurricanes in the past 100 years, but by every stronger storm too, so all 10 events count towards the exceedance and the return period is 10 years. Similarly, winds of at least 93.6 mph at the site are produced by the 6 storms of Cat 2 and above, giving a return period of 16.7 years - and so on.
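A minimal sketch of that frequency calculation in Python, counting every event at or above each wind speed. The Cat 1 and Cat 2 site winds are from the example above; the Cat 3 and Cat 4 site winds are placeholder values invented for this sketch:

```python
RECORD_YEARS = 100

# (site wind mph under the Cat Guys roughness assumptions, occurrences in the
# 100-year record). Cat 1 and Cat 2 winds follow the figures in the text;
# the Cat 3 and Cat 4 winds are placeholders for illustration.
events = [
    (76.5, 4),   # Cat 1
    (93.6, 3),   # Cat 2
    (108.0, 2),  # Cat 3 (placeholder wind)
    (126.0, 1),  # Cat 4 (placeholder wind)
]

def exceedance_return_period(wind_mph):
    """Average years between events giving at least wind_mph at the site."""
    n_exceeding = sum(n for w, n in events if w >= wind_mph)
    return RECORD_YEARS / n_exceeding

print(exceedance_return_period(76.5))  # all 10 events -> 10.0 years
print(exceedance_return_period(93.6))  # 6 events (Cat 2 and above) -> ~16.7 years
```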
The vulnerability curve for the Cat Guys data is simply the relationship between wind at the housing area and the resultant damage ratio (loss / $100m) taken from the available loss data for each event, shown here:
And for what it's worth, the EP curve of loss is simply the "product" of these two:
Essentially, the losses and the hazard from the events with a reported loss are used to back-engineer the vulnerability, so that the EP curve of loss matches the loss experience (the loss from the Cat 4 storm is the 100-year loss; the loss from a Cat 3 storm is roughly the 33-year loss, as three storms in 100 years - the two Cat 3s and the Cat 4 - produce at least that loss; and so on).
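In code, that back-engineering step is just pairing each event's modelled site wind with its observed damage ratio. A sketch, with invented loss figures since the loss table isn't reproduced here; only the $100m total value and the Cat Guys Cat 1 and Cat 2 winds come from the example:

```python
TOTAL_VALUE = 100e6  # total rebuild cost of the estate ($)

# (site wind mph under the Cat Guys roughness assumptions, reported loss $).
# The loss figures and the Cat 3/Cat 4 winds are invented for this sketch.
observed = [
    (76.5, 1.0e6),    # Cat 1 event
    (93.6, 4.0e6),    # Cat 2 event
    (108.0, 12.0e6),  # Cat 3 event (placeholder wind)
    (126.0, 35.0e6),  # Cat 4 event (placeholder wind)
]

# One vulnerability point per event: damage ratio = loss / total value.
vulnerability = [(wind, loss / TOTAL_VALUE) for wind, loss in observed]
print(vulnerability)  # damage ratios 0.01, 0.04, 0.12 and 0.35
```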
Now, the Cat Shack folks have the same methodology but have got their hands on the latest roughness data, fresh off the satellite and with the latest academic interpretations of it. This data actually has more of a roughness impact over the housing area, as seen in the blue in the chart below, meaning their interpretation has lower wind speeds for each event:
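To make the hazard difference concrete, here is a sketch of the roughness adjustment. The coastal winds and both reduction factors are invented, chosen only so that the Cat Guys numbers reproduce the 76.5 mph and 93.6 mph site winds quoted earlier:

```python
# Reported 1-minute sustained coastal winds (mph); invented example values.
coastal_wind = {"Cat 1": 85.0, "Cat 2": 104.0}

# Hypothetical roughness reduction factors over the housing estate.
CAT_GUYS_FACTOR = 0.90   # smoother terrain assumed -> less wind reduction
CAT_SHACK_FACTOR = 0.85  # newer roughness data -> more reduction, lower winds

for cat, w in coastal_wind.items():
    print(f"{cat}: Cat Guys {w * CAT_GUYS_FACTOR:.1f} mph, "
          f"Cat Shack {w * CAT_SHACK_FACTOR:.1f} mph")
```

The same events, run through a different roughness basis, give a systematically lower set of site winds for Cat Shack.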
They use the same methodology as the Cat Guys to build their EP curve, but the resultant hazard and vulnerability curves turn out quite different. These are shown below, alongside the Cat Guys hazard and vulnerability curves. I will come to the green numbers and red lines shortly.
You can see that for Cat Shack, because of the greater roughness impact, the hazard EP curve is lower, but the resultant vulnerability, based off the same set of loss data, ends up higher (in order to match the historical losses). For Cat Guys, the opposite is true: higher hazard and therefore lower vulnerability to end up matching the historical losses.
Just to hammer this home, if we follow the data from hazard through vulnerability to loss, following steps 1-4 in the charts above (highlighted with the green numbers) for the Cat Shack data (top two charts):
1) Read off a return period (in this example, 40 years)
2) Read off the wind speed for that return period (for Cat Shack, it's 90 mph)
3) On the vulnerability curve find the 90 mph point on the x-axis
4) Read off the resultant damage (for Cat Shack, it's about 7.5%)
...we can see that every 40 years, the portfolio should receive 7.5% damage.
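The four steps are just two curve look-ups chained together. A sketch in Python, with curve points invented so that the worked example above (40-year return period, 90 mph, 7.5% damage) drops out:

```python
from bisect import bisect_left

def interp(x, xs, ys):
    """Piecewise-linear interpolation; xs must be strictly increasing."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    if xs[i] == x:
        return ys[i]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Invented "Cat Shack" curve points; only the (40 yr, 90 mph, 7.5%) triple
# is anchored to the worked example in the text.
hazard_rp   = [10.0, 40.0, 100.0]   # return period (years)
hazard_wind = [70.0, 90.0, 110.0]   # site wind (mph)
vuln_wind   = [70.0, 90.0, 110.0]   # site wind (mph)
vuln_damage = [0.02, 0.075, 0.30]   # damage ratio

def loss_at_return_period(rp_years):
    wind = interp(rp_years, hazard_rp, hazard_wind)  # steps 1 and 2
    return interp(wind, vuln_wind, vuln_damage)      # steps 3 and 4

print(loss_at_return_period(40.0))  # -> 0.075, i.e. 7.5% damage at 40 years
```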
Do the same process for the Cat Guys data and, despite the fact that the hazard and vulnerability components are different, you'll find you come to the same answer (7.5%), because both models are tuned to the same loss data.
Point 1: Where models are built off loss data, you can get very different hazard and vulnerability components in models that lead to the same result. As such, comparing a single model component (e.g. vulnerability) between models in isolation is dangerous where loss data has been used to help "anchor" results.
There is another danger here, which is very relevant to the exciting world into which we are moving, with Open Source modelling and new views of risk becoming available. (Before I continue: I am a strong advocate of Open Source cat modelling and the options it will bring.)
Let's suppose you are offered an alternative view of risk, taking the Cat Guys vulnerability and the Cat Shack hazard. Remember - they're both built off the same loss data...
To cut a long story short, below is the loss EP curve for the Cat Guys vulnerability combined with the Cat Shack hazard, alongside the EP curve of loss for Cat Shack (or Cat Guys - remember, they're both built off the same loss data):
The differences are quite stark. The low Cat Shack hazard and low Cat Guys vulnerability lead to a very low EP curve (I've had to estimate the 25 year RP for this, but hopefully the message is clear) - one which could drastically underestimate the risk for this book of business.
In this case there is about a 40% difference in loss at each return period. Even though the models have been built to the same loss data, combining hazard and vulnerability components from models with a different hazard calculation basis leads to dramatically different results.
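A sketch of the mismatch in code. All of the curve shapes below are invented, anchored only to the 7.5% consistent-model answer and the roughly 40% gap described above:

```python
# Hypothetical 40-year site winds: Cat Guys assume less roughness, so a
# higher wind at the same return period.
SHACK_WIND_40YR = 90.0   # mph (the Cat Shack chart read in the text)
GUYS_WIND_40YR = 100.0   # mph (invented)

# Invented linear vulnerability curves, each tuned so its OWN model's
# 40-year wind maps to the 7.5% damage of the worked example.
def guys_damage(wind_mph):   # shallower curve (their hazard is higher)
    return max(0.0, 0.003 * (wind_mph - 75.0))

def shack_damage(wind_mph):  # steeper curve (their hazard is lower)
    return max(0.0, 0.005 * (wind_mph - 75.0))

consistent = shack_damage(SHACK_WIND_40YR)  # Cat Shack hazard + Cat Shack vuln
hybrid = guys_damage(SHACK_WIND_40YR)       # Cat Shack hazard + Cat Guys vuln

print(f"consistent 40-yr damage: {consistent:.1%}")  # 7.5%
print(f"hybrid 40-yr damage:     {hybrid:.1%}")      # 4.5%, about 40% lower
```

Feeding the lower Cat Shack winds into the shallower Cat Guys vulnerability curve understates the loss at every return period, even though each model on its own reproduces the loss experience.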
Point 2: Always check how a model has been built before trying to validate a single component or before using single components from a model towards building a hybrid model. Hazard and vulnerability are very often inextricably wedded. It is always worth having this in mind in perils/regions where the same client/industry loss data may have been available to multiple model vendors.
These are exciting times in cat modelling: we'll hopefully see more models coming into the market to offer different views of risk, but we must be very cognizant of the workings of the models themselves before taking any leaps to use model components to help create new views of risk or indeed comparing model components side-by-side.