Optimizing CAT Models: Adaptation Strategies For Shifting Baselines


Catastrophe models, or CAT models, are used in insurance to estimate potential losses from natural disasters like hurricanes, floods, and wildfires. These models rely heavily on historical data to simulate how future events might unfold and affect insured properties.

However, the environment is changing. Climate change, urban development, and other human activities have altered the frequency and severity of many natural hazards. This means the past may no longer be a reliable guide to the future.

As a result, the assumptions and data that traditional CAT models rely on are being questioned. Insurers and modelers are now exploring new ways to update and adapt these models so they reflect current conditions more accurately.

Understanding Shifting Baselines In CAT Modeling

In CAT modeling, a shifting baseline refers to the gradual change in what is considered "normal" risk over time. Each new generation of models may treat the current level of risk as the baseline, even if that level has already been influenced by decades of climate change or land use changes.

Traditional CAT models often use long-term historical data to estimate future risks. These datasets assume that the patterns of the past—such as how often a major storm hits a region—will continue into the future without significant change.

But climate-driven events like more intense wildfires in the western United States or record-breaking floods in Europe are challenging that assumption. The conditions that produced historical losses are not the same as those driving current disasters.

This mismatch between model inputs and real-world conditions can lead to inaccurate risk assessments. When baselines shift but models don't adapt, insurers may underestimate or misjudge emerging risks.

Why Long-Term Data Matters For Insurance Risk

Short-term data can create a false sense of stability. Looking at just the last few years might make a region seem safer than it really is if that period happened to have fewer disasters. This can lead to underpricing risk or writing too many policies in vulnerable areas.

Long-term data provides a more complete picture by capturing rare but severe events that might only happen once every few decades. For example, a 100-year flood doesn't mean it happens exactly once per century—it means there's a 1% chance of it happening in any given year.
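The 1%-per-year definition has a counterintuitive consequence worth making concrete. A minimal sketch, assuming independent years, of the chance of seeing at least one "100-year" event over a multi-decade horizon:

```python
# Probability of at least one 1%-per-year ("100-year") flood
# occurring within an n-year horizon, assuming independent years.
def prob_at_least_one(annual_prob: float, years: int) -> float:
    return 1.0 - (1.0 - annual_prob) ** years

# Over a 30-year mortgage, the chance is roughly 26%, not 30/100.
print(round(prob_at_least_one(0.01, 30), 3))  # 0.26
```

A record covering only a quiet decade would miss this tail entirely, which is exactly the false sense of stability short-term data creates.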

Key benefits of long-term data:

  • Reveals cycles: Shows patterns that repeat over decades or centuries
  • Captures extremes: Includes rare but devastating events that short records might miss
  • Provides context: Helps determine if recent trends are truly unprecedented

Historical climate records from tree rings, ice cores, and sediment layers can extend our understanding back hundreds or even thousands of years. These paleoclimate records show how temperature, rainfall, and storm patterns have varied naturally over time.

From Static Baselines To Dynamic Thresholds

Traditional CAT models use static baselines—fixed reference points based on historical averages. But as conditions change, these fixed points become less relevant. A more effective approach uses dynamic thresholds that adapt as new data comes in.

Threshold-based modeling identifies specific trigger points where risk levels change significantly. For example, wildfire risk might increase sharply after 14 consecutive days above 90°F with humidity below 20%. Once these conditions are met, the model adjusts its risk calculations automatically.

This approach recognizes that risk doesn't always change gradually. Sometimes it jumps suddenly when certain environmental conditions align. By focusing on these threshold points, insurers can respond more quickly to changing conditions.
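A hedged sketch of the wildfire trigger described above, using the illustrative 14-day heat-and-humidity rule from the text. The function name, data shape, and defaults are assumptions, not a real model's API:

```python
# Dynamic threshold sketch: flag when 14 consecutive days exceed
# 90°F with humidity below 20% (the illustrative trigger above).
def wildfire_trigger(daily_obs, temp_f=90.0, humidity_pct=20.0, run_len=14):
    """daily_obs: list of (temp_f, humidity_pct) tuples, oldest first."""
    streak = 0
    for temp, humidity in daily_obs:
        if temp > temp_f and humidity < humidity_pct:
            streak += 1
            if streak >= run_len:
                return True  # threshold crossed: escalate risk tier
        else:
            streak = 0  # one cool or humid day resets the run
    return False

# Fourteen hot, dry days in a row trips the trigger.
print(wildfire_trigger([(95.0, 15.0)] * 14))  # True
```

Because the check resets on any non-qualifying day, risk jumps discretely when the full run completes rather than drifting gradually, which mirrors the step-change behavior the text describes.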

Here's how threshold and baseline approaches compare:

Feature                  Static Baseline Approach            Dynamic Threshold Approach
Updates                  Periodic (often yearly)             Continuous as thresholds are crossed
Flexibility              Low; requires full recalibration    High; adjusts in real time
Data sources             Primarily historical                Historical plus current conditions
Response to new trends   Delayed                             Immediate
Complexity               Lower                               Higher

Dynamic thresholds help insurers allocate resources more effectively by focusing attention where risk is actively changing. This targeted approach improves efficiency and helps prioritize high-risk areas for closer monitoring.

Bringing AI And Advanced Statistics To CAT Modeling

Artificial intelligence (AI) and statistical techniques like importance sampling are transforming how CAT models handle shifting baselines. These tools help insurers update their risk assessments more frequently without rebuilding entire models from scratch.

AI-powered triage systems can quickly sort through thousands of insurance submissions to identify which properties need closer review based on changing risk factors. The system learns from past underwriting decisions and claims data to spot patterns that might indicate elevated risk.

For example, an AI system might flag a property for review if recent wildfire activity has occurred within 10 miles, even if the property previously fell into a lower-risk category. This helps underwriters focus their attention where it's most needed as conditions evolve.
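The distance-based flag in that example can be sketched directly. This is an illustrative rule, not a description of any vendor's system; the field layout and function names are assumptions, and distances use the standard haversine formula:

```python
# Sketch of a triage rule: flag a submission for underwriter review
# when recent wildfire activity falls within 10 miles of the property,
# regardless of its prior risk category.
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def needs_review(property_loc, recent_fires, radius_miles=10.0):
    """True if any recent fire is within radius_miles of the property."""
    lat, lon = property_loc
    return any(miles_between(lat, lon, f_lat, f_lon) <= radius_miles
               for f_lat, f_lon in recent_fires)
```

In a production triage system this hard-coded radius would itself be learned from claims outcomes, but the flag-and-route structure is the same.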

Importance sampling is a statistical technique that improves model accuracy by focusing computational resources on the scenarios that matter most. Instead of giving equal weight to all possible events, the model pays special attention to rare but high-impact scenarios like Category 5 hurricanes or 500-year floods.

This approach is especially valuable when baselines are shifting, as it allows models to quickly adjust to new information without requiring a complete overhaul. The result is more accurate risk assessment with less computational overhead.
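A toy importance-sampling sketch may help. The idea is to draw event intensities from a heavier-tailed proposal distribution so rare extremes appear often, then reweight each draw so the estimate stays unbiased. The distributions and loss function here are illustrative assumptions, not a real peril model:

```python
# Toy importance sampling: estimate expected loss from a rare, severe
# event by oversampling the tail and reweighting, instead of drawing
# uniformly from all scenarios.
import math
import random

def expected_loss_is(n=100_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Proposal: exponential with mean 3 oversamples intense events
        # relative to the assumed "true" severity model (mean 1).
        x = rng.expovariate(1.0 / 3.0)
        # Importance weight = true density / proposal density.
        weight = math.exp(-x) / ((1.0 / 3.0) * math.exp(-x / 3.0))
        loss = 100.0 if x > 5.0 else 0.0  # loss only in the extreme tail
        total += weight * loss
    return total / n

# Analytic answer for comparison: 100 * P(X > 5) = 100 * e^-5 ≈ 0.674.
print(expected_loss_is())
```

Naive sampling from the true severity model would see an event past the threshold less than 1% of the time, so the tail estimate would be noisy; the proposal visits the tail far more often and the weights correct for the distortion.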

Integrating Multiple Data Sources For Better Results

No single data source can provide a complete picture of catastrophe risk. By combining information from different systems and sources, insurers can reduce uncertainty and build more reliable models.

Types of data that improve CAT models:

  • Internal policy and claims records: Shows actual loss experience
  • Weather station measurements: Provides precise local conditions
  • Satellite imagery: Tracks environmental changes over time
  • Building codes and construction data: Reflects structural vulnerability
  • Population and development trends: Shows how exposure is changing

Many insurance companies store valuable historical data in legacy systems that use different formats and structures. Extracting and standardizing this information takes work but provides rich insights that can't be found elsewhere.

Third-party data from commercial vendors and open-source projects can fill gaps in internal records. For example, high-resolution elevation data improves flood modeling, while vegetation and fuel load maps enhance wildfire risk assessment.

When combining data from multiple sources, quality control becomes essential. Each dataset needs to be validated for accuracy, completeness, and relevance before being incorporated into CAT models. This process helps prevent errors from propagating through the system and affecting risk calculations.
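A minimal sketch of that pre-merge quality gate, checking completeness and plausible ranges before a record enters the model. The field names and thresholds are assumptions for illustration:

```python
# Illustrative quality-control check before merging an external dataset:
# reject records with missing fields or implausible coordinates so
# errors don't propagate into downstream risk calculations.
def validate_record(rec, required=("policy_id", "latitude", "longitude")):
    errors = []
    for field in required:
        if rec.get(field) in (None, ""):
            errors.append(f"missing {field}")
    lat, lon = rec.get("latitude"), rec.get("longitude")
    if isinstance(lat, (int, float)) and not -90 <= lat <= 90:
        errors.append("latitude out of range")
    if isinstance(lon, (int, float)) and not -180 <= lon <= 180:
        errors.append("longitude out of range")
    return errors  # empty list means the record passes

good = {"policy_id": "P-1001", "latitude": 29.7, "longitude": -95.4}
bad = {"policy_id": "", "latitude": 290.7, "longitude": -95.4}
print(validate_record(good))  # []
```

Routing rejected records to a review queue, rather than silently dropping them, also preserves a record of where each source's quality problems cluster.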

Making CAT Models Work For Underwriters

Even the most sophisticated CAT model is only valuable if underwriters can use it effectively in their daily work. The challenge is translating complex risk calculations into actionable insights that inform real-world decisions.

Underwriters need clear signals about how risk is changing in specific locations or for particular types of properties. Rather than drowning in technical details, they need practical guidance on questions like:

  • Is this property more or less risky than similar ones in our portfolio?
  • Has the risk profile for this region changed significantly since our last assessment?
  • What specific factors are driving changes in risk for this submission?

Practical Steps For Adapting To Shifting Baselines

Insurers looking to update their approach to CAT modeling can take several concrete steps:

  1. Audit existing models: Evaluate how well current models have predicted recent events. Look for patterns of over- or under-estimation that might indicate baseline shifts.
  2. Expand data horizons: Incorporate longer historical records and non-traditional data sources to provide more context for recent trends.
  3. Implement threshold monitoring: Identify key indicators that signal meaningful changes in risk, and create systems to track when these thresholds are crossed.
  4. Create feedback loops: Compare model predictions to actual outcomes after catastrophic events, and use this information to refine future models.
  5. Integrate models with workflows: Ensure that updated risk information reaches underwriters in a format they can easily use to make decisions.

These steps don't require completely abandoning existing CAT models. Instead, they build on current capabilities by adding new data, methods, and perspectives that account for changing conditions.

The goal isn't perfect prediction—that's impossible in a complex, changing world. Instead, the aim is to create models that adapt as conditions change, providing insurers with the most current and accurate view of risk possible.

By embracing dynamic approaches to CAT modeling, insurers can navigate the challenges of shifting baselines while continuing to provide valuable protection against catastrophic events.

Frequently Asked Questions About CAT Models And Shifting Baselines

What exactly is a shifting baseline in catastrophe modeling?

A shifting baseline occurs when the reference point for "normal" risk gradually changes over time, often without being explicitly recognized in models, leading to systematic underestimation of current and future catastrophe risks.

How often should CAT models be updated to account for shifting baselines?

Rather than fixed schedules, modern approaches favor continuous monitoring of key thresholds and triggers that signal meaningful changes in risk conditions, allowing for more responsive and timely updates.

Do threshold-based approaches work for all types of catastrophe perils?

Threshold approaches work particularly well for weather-related perils with clear trigger points, while other hazards like earthquakes may benefit more from long-term data integration and importance sampling techniques.

How can smaller insurers with limited resources implement these advanced modeling approaches?

Smaller insurers can start by focusing on their highest exposure regions, leveraging third-party data and models, and implementing threshold monitoring for key perils before expanding to more comprehensive solutions.