April 8, 2025
Building a Common Language for Cyber Risk: Why CRQ Needs Standardised Metrics
James Hanbury
Global Lead Director, Co-founder

The Measurement Problem in Cyber Risk

Weather forecasts are among the most widely understood forms of risk communication in the world. They blend data, uncertainty, and time horizons into formats that make sense to both experts and the general public. And they do so using a consistent set of metrics.

When we check the forecast, we’re shown temperature, probability of precipitation, wind speed, cloud cover, and so on. These metrics allow people to make quick, informed decisions — should I bring an umbrella? Cancel the outdoor event? Pack a coat?

Cyber risk quantification (CRQ) needs the same standardisation. While organisations increasingly adopt CRQ to assess cyber risk in financial terms, there’s little consistency in how results are presented. Executives reviewing CRQ reports often face different formats, levels of detail, and inconsistent terminology. Without a common language, interpreting and acting on CRQ insights becomes difficult.

If we want CRQ to drive real-world decisions, we need a clearer, standardised approach to presenting risk metrics — one that makes cyber risk as easy to interpret as a weather forecast.

So, what should a CRQ “forecast” look like?

1.     Annualised Loss Exposure (ALE): The Temperature

The foundational view of financial risk exposure

  • What it is: ALE represents your estimated annual financial exposure to cyber risk. Think of it as the “temperature” of your cyber risk climate.
  • Why it matters: Just as temperature helps you decide what to wear, ALE gives you a baseline for understanding and communicating risk.
  • Example: An average ALE of £3M, a most likely ALE of £300K, and a 90th percentile ALE of £9M (a sketch below shows how such figures are derived).
  • Use it when: You want to compare the scale of cyber risk to other risks, compare cyber risk scenarios, or guide budget or insurance decisions.
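
For readers who want to see the mechanics, here is a minimal sketch of how those three figures can fall out of a Monte Carlo loss simulation. Everything in it is an illustrative assumption (Poisson event frequency, lognormal severity, made-up parameters), not a benchmark, and the median is used as a simple stand-in for a "most likely" readout.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions, not benchmarks: Poisson event frequency,
# lognormal loss severity per event. A real model calibrates these inputs.
N_YEARS = 50_000        # simulated years
EVENT_RATE = 1.2        # expected loss events per year
SEV_MEDIAN = 800_000    # median loss per event, GBP
SEV_SIGMA = 1.0         # lognormal spread of severities

# Annual loss = sum of the per-event losses in each simulated year.
events = rng.poisson(EVENT_RATE, N_YEARS)
annual_loss = np.array([rng.lognormal(np.log(SEV_MEDIAN), SEV_SIGMA, n).sum()
                        for n in events])

print(f"Average ALE:     £{annual_loss.mean():,.0f}")
print(f"Median ALE:      £{np.median(annual_loss):,.0f}")
print(f"90th percentile: £{np.percentile(annual_loss, 90):,.0f}")
```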

2.     Confidence Interval: The Predictability Rating

How much trust you can place in the numbers

  • What it is: A statistical range that describes the uncertainty around an estimate. A wide interval reflects greater variability in the input assumptions or scenario modelling.
  • Why it matters: Like a low-confidence weather forecast, wide intervals signal more unknowns.
  • Example: An ALE of £2.5M with a 90% confidence interval of £1M–£8M may spark discussions about data quality or threat volatility.
  • Use it when: You want to be transparent about the model's limitations without undermining its credibility – acknowledging uncertainty openly builds trust.
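
In a simulation-based CRQ model, this range is typically read straight off the simulated loss distribution as a percentile interval. A minimal sketch, using an illustrative lognormal stand-in for the simulated annual losses so it runs on its own:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative stand-in for simulated annual losses (see the ALE sketch),
# so this snippet runs on its own.
annual_loss = rng.lognormal(np.log(2_500_000), 1.0, 50_000)

lo, hi = np.percentile(annual_loss, [5, 95])  # central 90% range
print(f"ALE £{annual_loss.mean():,.0f} "
      f"(90% interval: £{lo:,.0f} – £{hi:,.0f})")

# A range that is wide relative to the mean flags inputs worth revisiting.
print(f"Interval width vs mean: {(hi - lo) / annual_loss.mean():.1f}x")
```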

3.     Loss Event Frequency: The Rain Forecast

How often the risk is likely to materialise

  • What it is: The estimated likelihood of one or more cyber loss events occurring within a given period.
  • Why it matters: Helps differentiate between frequent, lower-severity events and rare but catastrophic scenarios.
  • Example: 0.4 events/year = 1 incident every 2.5 years on average.
  • Use it when: Supporting control decisions with frequency estimates, especially when weighing prevention against impact reduction (a quick calculation follows below).
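
The arithmetic behind that example is worth seeing once. Under the common (and here assumed) model that loss events arrive as a Poisson process, a rate of 0.4 events/year implies roughly a one-in-three chance of at least one incident in any given year:

```python
import math

RATE = 0.4  # expected loss events per year, from the example above

# Assuming events arrive as a Poisson process, the chance of at least
# one loss event in a given year is 1 - e^(-rate).
p_at_least_one = 1 - math.exp(-RATE)
print(f"P(one or more events in a year): {p_at_least_one:.0%}")  # ~33%

# The average gap between events is the reciprocal of the rate.
print(f"Mean time between events: {1 / RATE:.1f} years")  # 2.5 years
```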

4.     Risk Sensitivity: The Wind Speed

Which assumptions matter most in your model

  • What it is: A measure of how sensitive your results are to changes in input variables – like control effectiveness estimates or loss magnitude assumptions.
  • Why it matters: High sensitivity highlights where small changes in conditions could lead to big shifts in loss exposure.
  • Example: A small drop in detection capability could raise ALE by 40%.
  • Use it when: Prioritising where to improve model accuracy or identifying scenarios that need deeper analysis.
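
One simple way to surface this is a one-at-a-time test: nudge each input, re-run the model, and watch how the ALE responds. A sketch, reusing the hypothetical frequency/severity model from earlier (all parameters illustrative):

```python
import numpy as np

def mean_ale(event_rate, sev_median, sev_sigma, n=20_000, seed=0):
    # A fixed seed gives common random numbers, so runs are comparable.
    rng = np.random.default_rng(seed)
    events = rng.poisson(event_rate, n)
    return np.mean([rng.lognormal(np.log(sev_median), sev_sigma, k).sum()
                    for k in events])

base = dict(event_rate=0.4, sev_median=1_500_000, sev_sigma=1.2)
baseline = mean_ale(**base)

# One-at-a-time sensitivity: bump each input by +10% and compare.
for name in base:
    bumped = {**base, name: base[name] * 1.10}
    change = mean_ale(**bumped) / baseline - 1
    print(f"+10% {name:<11} -> ALE change: {change:+.0%}")
```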

5.     Control Effectiveness: The Cloud Cover

How well your security controls are working

  • What it is: An assessment of how well your current defences reduce the likelihood or impact of loss.
  • Why it matters: Weak controls create “foggy” risk landscapes – where exposure is harder to predict and harder to manage.
  • Example: Backup and recovery capability scores low, leading to higher impact estimates for ransomware.
  • Use it when: Planning remediation or control investment strategies.
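
To see how a control-effectiveness score feeds the numbers, here is a hypothetical version of that ransomware example. Assume, purely for illustration, that strong backups cut the loss magnitude of each event by 60%:

```python
import numpy as np

def mean_ale(event_rate, sev_median, sev_sigma=1.0, n=50_000, seed=1):
    rng = np.random.default_rng(seed)
    events = rng.poisson(event_rate, n)
    return np.mean([rng.lognormal(np.log(sev_median), sev_sigma, k).sum()
                    for k in events])

# Hypothetical ransomware scenario: weak backup/recovery leaves each
# event carrying its full loss magnitude.
weak = mean_ale(event_rate=0.4, sev_median=2_000_000)

# Assumed, illustrative effect: strong backups cut per-event losses by 60%.
strong = mean_ale(event_rate=0.4, sev_median=2_000_000 * 0.4)

print(f"ALE, weak backups:   £{weak:,.0f}")
print(f"ALE, strong backups: £{strong:,.0f}")
print(f"Reduction:           {1 - strong / weak:.0%}")
```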

[Infographic: the five metrics described in this article.]

Bringing It All Together

No single metric can explain a CRQ analysis on its own. But together, these elements offer a forecast that’s both comprehensive and intelligible.

Each element tells you something different:

  • ALE gives you scale.
  • Frequency tells you how often.
  • Confidence intervals tell you how sure we are.
  • Sensitivity highlights model fragility.
  • Control effectiveness shows how well you’re equipped.
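
Taken together, the five metrics could even be laid out like the forecast they imitate. A toy rendering, where every value is a placeholder a real CRQ analysis would supply:

```python
# A toy "cyber risk forecast" card; every value is a placeholder
# that a real CRQ analysis would supply.
forecast = {
    "ALE (mean)":            "£3.0M",
    "90% interval":          "£1.0M – £9.0M",
    "Event frequency":       "0.4 / year (about 1 every 2.5 years)",
    "Most sensitive input":  "detection capability",
    "Control effectiveness": "backup & recovery: low",
}

width = max(len(name) for name in forecast)
print("CYBER RISK FORECAST")
for name, value in forecast.items():
    print(f"  {name:<{width}}  {value}")
```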

The more we adopt shared conventions like these, the easier it becomes to communicate risk clearly.

Over the coming weeks, this blog series will explore how organisations can refine their CRQ approach, communicate risk more effectively, and embed CRQ into everyday decision-making. The next post will focus on how to make CRQ results truly actionable – bridging the gap between analysis and decision-making.

Read the next blog in the series: From Insight to Action: Making CRQ Results Actually Useful
