May 6, 2025
Winning the First Yes: Navigating the Five Most Common CRQ Objections
James Hanbury
Global Lead Director, Co-founder

The Real Work Starts Before the Pilot

Before a single scenario is modelled or a number estimated, one of the first challenges in adopting cyber risk quantification (CRQ) is simply persuading stakeholders that it's worth doing.

CRQ often begins not with a proof of concept, but with a leap of belief — a willingness to explore a new way of framing risk, even before it's been validated internally. This belief can be hard-won, because it asks stakeholders to reconsider long-held views about risk, decision-making, and the limits of current frameworks.

Advocating for CRQ means asking stakeholders to adopt a new lens. That kind of trust is rarely built through logic alone.

This blog explores five objections I frequently hear when introducing CRQ. These aren't hostile challenges — they're thoughtful concerns, often raised by leaders trying to ensure their teams stay focused on what matters. That's exactly why they deserve thoughtful responses. These are the kinds of conversations that set the tone and often determine whether CRQ ever gets off the ground.

The sections below aim to capture the essence of each.

In each objection, I have included:

  • A realistic expression of the concern ('as heard in the wild')
  • Why it's reasonable
  • A constructive reframe
  • What I've seen work in practice
  • Lines to help move the conversation forward

Earning belief in CRQ isn’t about conversion — it’s about credibility, one conversation at a time.

Objection 1

“Isn’t this too subjective?”

As heard in the wild:

“I see the appeal, but how much of this is judgement dressed up as data?”

Why this concern is reasonable:

Stakeholders are right to be cautious about turning uncertainty into numbers. Even without seeing quantification fail, many are alert to the risk of false precision. Their concern isn’t CRQ itself — it’s how credible the foundations are.

How to reframe it constructively:

CRQ doesn’t remove judgement; it makes it visible, challengeable, and more reliable. Calibrated expert judgement — a powerful and often underused tool — helps organisations think probabilistically, incorporate diverse views, and surface assumptions. In this frame, CRQ shows where judgement is applied and how confident we can be. The goal isn't to remove human insight; it's to channel it more reliably.
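
To make the idea of calibrated judgement concrete, here is a minimal, hypothetical sketch in Python. It assumes an expert has given a calibrated 90% interval for single-event loss and a frequency estimate, fits a lognormal distribution to that interval, and runs a simple Monte Carlo simulation. Every figure and parameter is illustrative rather than drawn from any particular CRQ methodology; the point is that each assumption sits in plain sight where it can be challenged.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibrated input from an expert workshop (illustrative only):
# "If this scenario occurs, the loss is between £200k and £4m (90% confidence)."
low, high = 200_000, 4_000_000

# Fit a lognormal so its 5th/95th percentiles match the stated interval.
z95 = 1.6448536269514722  # 95th-percentile z-score of the standard normal
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * z95)

# Another explicit, challengeable assumption: expected events per year.
annual_frequency = 0.3

# Monte Carlo over 10,000 simulated years.
years = 10_000
event_counts = rng.poisson(annual_frequency, size=years)
annual_loss = np.array([
    rng.lognormal(mu, sigma, size=n).sum() if n else 0.0
    for n in event_counts
])

print(f"Mean annual loss:        £{annual_loss.mean():,.0f}")
print(f"95th percentile:         £{np.percentile(annual_loss, 95):,.0f}")
print(f"Chance of any loss/year: {(annual_loss > 0).mean():.0%}")
```

The structure matters more than the numbers: the interval, the frequency, and the choice of distribution are all judgements, and each can be discussed and revised on its own terms.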

What I've seen work in practice:

  • Calling out that most existing risk ratings already rely on judgement — CRQ just makes that judgement visible and structured.
  • Using structured techniques like calibration training to demonstrate how expert input can be systematically improved.
  • Showing how CRQ helps focus attention on key assumptions — and makes them easier to explain and challenge.

Lines that can move the conversation forward:

  • “CRQ doesn’t remove judgement — it refines it with structure and transparency.”
  • “It’s not guesswork — it’s guided judgement, openly discussed.”
  • “The question isn’t whether we use judgement — it’s whether we use it well.”

When stakeholders worry CRQ is “too subjective”, it helps to show where it sits on the broader spectrum of risk analysis approaches — and how it compares to the tools we're already using.

Objection 2

“How do we know it’s accurate?”

As heard in the wild:

“We'll never be able to 'prove' the results, so how can we trust them enough to make decisions?”

Why this concern is reasonable: For many leaders, trust in a model is tied to verifiability. If CRQ outputs can’t be tested, they risk sounding like opinion, not insight — especially in a data-poor, fast-moving domain like cyber.

How to reframe it constructively: CRQ isn’t about perfect foresight. It’s a structured way to reason under uncertainty — bringing together data, context, and judgement in a transparent and testable way. Accuracy means being directionally right, not precisely perfect. Confidence grows when scenarios align with real-world events faced by similar organisations.
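
One way to illustrate "directionally right, not precisely perfect" is a lightweight back-test: check how often observed peer incidents land inside the modelled range. The sketch below is purely illustrative; the incident figures and the range are invented for the example.

```python
# Illustrative back-test: do observed peer incident losses fall inside the
# modelled 90% range? (All figures below are invented for the example.)
modelled_p5, modelled_p95 = 150_000, 5_000_000

observed_peer_losses = [300_000, 1_200_000, 90_000, 4_500_000, 2_000_000]

inside = [modelled_p5 <= loss <= modelled_p95 for loss in observed_peer_losses]
coverage = sum(inside) / len(inside)

# A well-calibrated 90% range should cover roughly 90% of outcomes over time;
# a large, persistent gap is a signal to revisit the assumptions.
print(f"Share of observed losses inside the modelled range: {coverage:.0%}")
```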

What I’ve seen work in practice:

  • Drawing comparisons with forecasting disciplines (e.g. financial stress tests, weather models) where the goal is directional reliability, not precision.
  • Showing how loss ranges were informed by actual incidents at similar firms — adding real-world context to modelled results.
  • Walking through post-hoc comparisons where model outputs aligned with outcomes, even if ranges were broad.

Lines that can move the conversation forward:

  • “We’re not claiming precision — we’re building enough confidence to act.”
  • “This isn’t binary accuracy — it’s better reasoning in the face of uncertainty.”
  • “We’re not trying to predict the future — just to stop flying blind.”

Objection 3

“Are we mature enough for this?”

As heard in the wild:

“We’re still fixing the basics like inventories and access controls. Shouldn’t we get those right before we try something like CRQ?”

Why this concern is reasonable: CISOs are right to be cautious about sequencing. When core capabilities are still forming, it can feel premature to add a more advanced layer.

How to reframe it constructively: CRQ isn’t something you do after you're mature — it helps you mature more strategically. It highlights where improvements matter most and can shift senior focus from control checklists to real risk. It’s not, for example, about saying data loss prevention (DLP) doesn’t matter — it’s about showing whether it matters enough.
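
To show what "does it matter enough?" can look like in numbers, here is a hypothetical before-and-after comparison: the same scenario is simulated with and without the assumed effect of a proposed control, and the change in expected annual loss is set against the control's cost. All parameters are assumptions made for illustration, not outputs of any real assessment.

```python
import numpy as np

rng = np.random.default_rng(11)
YEARS = 10_000

def simulate_annual_loss(freq, median_loss, sigma):
    """Annual loss via Monte Carlo: Poisson event counts x lognormal severities."""
    counts = rng.poisson(freq, size=YEARS)
    mu = np.log(median_loss)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() if n else 0.0
                     for n in counts])

# Hypothetical ransomware scenario, before and after a proposed control.
# Every parameter below is an assumption made for illustration.
baseline     = simulate_annual_loss(freq=0.40, median_loss=1_500_000, sigma=0.9)
with_control = simulate_annual_loss(freq=0.25, median_loss=1_200_000, sigma=0.9)

reduction = baseline.mean() - with_control.mean()
control_cost = 250_000  # assumed annual cost of the control

print(f"Expected annual loss reduction: £{reduction:,.0f}")
print(f"Expected loss avoided per £1 of control spend: £{reduction / control_cost:.2f}")
```

Even a rough comparison like this moves the conversation from "is the control on the checklist?" to "how much risk does it actually remove, and at what price?".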

What I've seen work in practice:

  • Starting with a ransomware scenario to quickly show where impact could be reduced most — without needing perfect inventories.
  • Highlighting how CRQ uncovered low-visibility risks that had been missed by traditional controls-led dashboards.
  • Reframing the boardroom conversation from “are our controls complete?” to “where’s our biggest risk reduction opportunity?”

Lines that can move the conversation forward:

  • “CRQ helps us decide which basics matter most — and which can wait.”
  • “We don’t need to be fully mature to benefit from CRQ — we just need to care about improving where it counts.”
  • “This isn’t about advanced risk analysis for its own sake — it’s about making smarter trade-offs, starting now.”

Objection 4

“What if it contradicts existing narratives?”

As heard in the wild:

“We’ve told the Board we’re within appetite. What if CRQ says otherwise?”

Why this concern is reasonable: Leaders are right to be cautious. Risk narratives have likely already been communicated to Boards and regulators, and results that appear to contradict them carry real reputational and relationship stakes.

How to reframe it constructively: CRQ isn’t about proving past conclusions wrong — it’s about improving future ones. It uses better tools to sharpen, validate, or revisit earlier judgements. Positioning CRQ as an evolution reduces defensiveness and opens the door to learning.

What I've seen work in practice:

  • Framing CRQ as an enhancement to existing insights — helping validate what’s already there, and refining what needs revisiting.
  • Leading with alignment: starting presentations by showing where CRQ results support existing risk views.
  • Equipping leaders with language to explain CRQ as a next evolution, not a reversal.

Lines that can move the conversation forward:

  • “CRQ doesn’t rewrite the story — it helps us tell it more clearly.”
  • “We’re not necessarily changing the conclusion — we’re strengthening the evidence.”
  • “Progress means evolving the narrative — not defending the old one at all costs.”

Objection 5

“It sounds interesting, but why now?”

As heard in the wild:

“We already have risk matrices and appetite frameworks — what's the urgency?”

Why this concern is reasonable: CRQ often arrives without a burning platform. It’s not on a regulator’s checklist (yet), and it doesn’t plug a glaring control gap. So the question becomes: is the value clear enough to prioritise today?

How to reframe it constructively: This isn’t about replacing what works — it’s about adding depth where it’s missing. CRQ helps with trade-offs, cost-benefit clarity, and appetite expressed in financial terms. And while there’s no burning platform today, expectations are rising. Boards are asking better questions. CRQ helps you answer them.
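
As a hypothetical illustration of that cost-benefit clarity, the short sketch below ranks competing security asks by modelled loss reduction per pound spent. The asks, costs, and reduction figures are invented for the example; in practice they would come from scenario modelling like the sketches above.

```python
# Hypothetical prioritisation: three competing asks, budget for only one.
# Costs and modelled loss reductions are invented for the example.
asks = {
    "EDR rollout":      {"cost": 400_000, "loss_reduction": 1_100_000},
    "DLP expansion":    {"cost": 250_000, "loss_reduction":   300_000},
    "Backup hardening": {"cost": 150_000, "loss_reduction":   600_000},
}

ranked = sorted(asks.items(),
                key=lambda item: item[1]["loss_reduction"] / item[1]["cost"],
                reverse=True)

for name, figures in ranked:
    ratio = figures["loss_reduction"] / figures["cost"]
    print(f"{name:<16} £{ratio:.1f} of expected loss avoided per £1 spent")
```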

What I've seen work in practice:

  • Piloting CRQ in parallel with existing risk frameworks — no disruption, but added clarity when difficult trade-offs arise.
  • Using CRQ to justify prioritisation when faced with multiple security asks and budget for only one.
  • Framing CRQ as readiness for board-level scrutiny — translating cyber into the language of financial risk.

Lines that can move the conversation forward:

  • “Boards are asking tougher questions — CRQ helps us answer them with confidence.”
  • "CRQ makes our existing frameworks work harder — especially when decisions get complex."
  • “Now is the time to lead with this — before it becomes table stakes.”

Confidence, Not Conversion

CRQ isn’t a hard sell — and it shouldn’t be.

The goal of these conversations isn’t to convert sceptics on the spot. It’s to create space for new ways of thinking. To offer clearer language, more defensible insight, and a way to navigate complexity without defaulting to colour codes and gut feel.

These objections aren’t fringe views. They’re thoughtful, informed questions raised by people who’ve carried real accountability for risk — and who want to make sure their next step is better than their last.

Handled well, these aren't objections to be overcome; they're opportunities to earn trust.

Because every concern addressed builds confidence. Every quiet objection handled with care becomes an invitation to rethink. And every practitioner who’s prepared — not just with a model, but with the language and mindset to lead change — helps move the discipline forward.

CRQ doesn’t need to be introduced with fanfare. It often starts quietly, with a well-framed conversation. The kind that earns a first yes. And from there, momentum builds.

Next up: From First Yes to Embedded Capability

Earning the first “yes” is a significant milestone — but it’s just the beginning.

In the next blog, I’ll explore what separates short-term pilots from long-term success. From governance to data, process to tooling — this is where CRQ stops being a side project and starts changing how decisions get made.

If you’re preparing for that journey, we'd be happy to help.

Read the next blog in the series

From Pilot to Capability: The Journey to Operationalise CRQ

CRQ can’t remain a pilot forever. To drive meaningful, repeatable value, it needs to mature into a business capability: trusted, embedded, and regularly informing decisions.