Discover how cybersecurity and information security risk management empowers organisations to make well-informed decisions and strengthen cyber resilience.
In the age of digital transformation, businesses and organisations around the world and across industries are adopting new technologies to improve efficiency, add value and drive innovation. During COVID-19, organisations worldwide found creative ways to extend their digital services for customers, citizens, and employees.
But this rapid transformation increased the probability of cyber incidents and heightened the level of digital risk to organisations and individuals. It has also led to regulatory changes. The speed at which digital technology took hold left many organisations underprepared to protect their most important assets.
With so much sensitive data, digital transformation brings new kinds of threats which organisations, regardless of size or level of complexity, must manage. Cybersecurity incidents can include:
In cases involving fraud or theft, the outcome of a cybersecurity incident can be severe financial losses, extended disruption, or worse. The US Securities and Exchange Commission has estimated that 60% of small businesses go bankrupt within six months of a cyber-attack or data breach.
As companies increasingly rely on information technology for their business operations, protecting digital assets and the information they contain has never been more important. But even with a multitude of international standards and frameworks to help, most organisations struggle to manage their cyber risk effectively. Why?
In order to protect their most important digital assets, businesses, organisations, and government agencies must first identify the assets, understand how they support business activities, and determine how to protect them by identifying the threats and impact of a cyber incident. This might sound simple, but many businesses struggle to articulate exactly what it is they’re trying to protect and why it needs protecting.
Further complicating the issue, digital transformation has altered the risk landscape. In a global, interconnected economy, the supply chain now extends to third-party software vendors and service providers. We believe a successful cyber risk management programme which addresses these issues should:
Although digital transformation took hold almost overnight, governance models have been slow to react. We believe that failing to implement proper cyber risk governance represents negligence by the board and senior management. Globally, many regulators are starting to reach the same conclusion. Soon, most organisations will be obliged to have effective and demonstrable cyber risk management governance.
Solving an enterprise issue of this scale needs leadership support. It involves setting the tone at the top, starting with the board of directors and other business leaders. Cyber risk cannot be left to the CISO and CIO to manage in isolation. It is a business-wide issue where every function must be aware of cyber and technology risk associated with their activities and the choices available to safeguard their digital assets.
To achieve this aim, cyber risk management needs a universal language, so everyone easily understands what's at stake. Financial measures are this universal language, and the quantification of risk in financial terms will enable a massive shift in the governance of information security. Stakeholders must have confidence in the financial data presented, and for this to happen, quantifying cyber risk in financial terms must be based on a transparent, standards-based methodology. We will discuss open standards and risk quantification in a later section.
In business, organisations face external and internal factors that make it uncertain whether they will achieve their business objectives. In moving to a mostly digital environment, protecting digital assets from risk has become most businesses' primary security concern.
Every information security programme is founded on three principles: confidentiality, integrity, and availability (known as the CIA triad). Confidentiality means limiting access to information to authorised users only, and now includes evolving privacy obligations. Integrity is the assurance that the information is trustworthy and accurate. Availability refers to systems and applications working when they should. The primary goal of information security and cybersecurity teams is to maintain the CIA triad while ensuring the organisation stays productive.
Cybersecurity incidents, intentional or otherwise, pose problems for businesses of all sizes and in every industry. Even the most well-protected and well-funded enterprise systems regularly experience significant cyber incidents.
While it might require significant investment, maintaining stable business operations is worth the high price tag. Customers and potential business partners now expect companies to deliver, regardless of adverse events. Resilient organisations need to be proactive in the domain of information security.
But resources are finite, so decision makers must evaluate whether one defence strategy is better than another, and whether initiatives are worth the investment. To do this, they need accurate information regarding the most significant risks to their business. This is easier said than done. It is a challenge to obtain the relevant metrics to inform these defence decisions in the context of each organisation's unique business model, control maturity, and threat landscape.
With so many cybersecurity solutions, controls, frameworks and standards to choose from, many organisations find themselves faced with the paradox of choice, unable to decide due to the seemingly endless options. Businesses and their boards are concerned about cybersecurity, which means they’re allocating more resources to this domain. Effective cybersecurity risk management provides decision makers with the data to select initiatives with the best return on investment and impact.
To make good decisions about risk, we believe organisations should:
Organisations also need to be confident that their risk governance method is effective and accurate, and that this can be demonstrated to external stakeholders. The best way to achieve this is to adopt open standards which are transparent.
It has become increasingly difficult for organisations to understand their cyber risk exposure clearly and, consequently, how to manage the financial impact resulting from a major incident. Between ongoing geopolitical tensions, increasing online criminal activity, and organisations’ growing digital footprint, the list of risk scenarios keeps getting longer: ransomware, zero-day vulnerabilities, data protection and privacy legislation, data breaches, data loss, identity theft, fraud, phishing, social engineering, cyber warfare, exploits, cyber espionage, failure of a third-party supplier, nation-state attacks, and IoT systems compromise.
What’s more, intensifying media and vendor hype around cyber risk creates an overwhelming sense of uncertainty, leaving many organisations wondering what actions they can and should take to reduce risk.
One of the most valuable tools and techniques to cut through this noise is effective risk scenario scoping.
To deconstruct the never-ending list of cyber risks, threats, and vulnerabilities, we need a model that clearly defines and measures cyber risk scenarios while removing the noise – enabling us to allocate resources to the areas where they’re needed most.
A lack of clarity surrounding key cybersecurity terms hinders the risk identification process. Even cybersecurity experts use terms like ‘threat,’ ‘vulnerability,’ and ‘risk’ interchangeably to describe a risk scenario. And although standards and frameworks are meant to mitigate these misunderstandings, definitions often vary between guidelines and glossaries. They are also highly technical – only understood by cyber risk experts. The solution to this problem is an open methodology standard and taxonomy. A taxonomy is a system of classification. And in the case of risk management, a risk taxonomy creates a hierarchy of terms with a controlled vocabulary to address risk management specifically.
The FAIR model for risk quantification is the only open standard that provides a common language for communicating about risk (such as threat, vulnerability, etc.) and uses quantitative methods. The FAIR taxonomy maps how each variable or term relates to the others. FAIR also provides a simple process to define objective and quantifiable risk scenarios. This scenario scoping model is hugely beneficial for clearly defining the risk to which an organisation is exposed.
Scenario scoping using FAIR is simple to perform, and presenting quantified risk to stakeholders using a controlled vocabulary provides clarity and facilitates good communication.
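To make this concrete, here is a minimal sketch of a scoped risk scenario expressed with a controlled vocabulary: an asset at risk, a threat, and the security effect at stake. The field names are hypothetical and only loosely follow FAIR-style scoping; this is not official FAIR tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Effect(Enum):
    """The security property at stake (the CIA triad)."""
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"

@dataclass(frozen=True)
class RiskScenario:
    """A scoped risk scenario (illustrative field names)."""
    asset: str       # the thing of value we are protecting
    threat: str      # who or what could cause the loss
    effect: Effect   # which CIA property would be lost

    def statement(self) -> str:
        # A controlled vocabulary forces every scenario into the
        # same unambiguous shape, aiding communication.
        return (f"Risk of {self.threat} compromising the "
                f"{self.effect.value} of the {self.asset}.")

scenario = RiskScenario(
    asset="customer database",
    threat="external cybercriminal using stolen credentials",
    effect=Effect.CONFIDENTIALITY,
)
print(scenario.statement())
```

Because every scenario names a concrete asset, threat, and effect, two analysts scoping the same concern will produce the same statement, which is the clarity benefit described above.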
Now let’s look a little closer at measurement models and how we can measure a scenario so the risk management process will result in defensible decisions.
We need formal methods for calculating the degree of risk because humans struggle to think statistically.
Extensive scientific research into this field from figures like Daniel Kahneman, Philip Tetlock and George Box has uncovered the significant role that cognitive biases play in decision-making. “How to Measure Anything: Finding the Value of Intangibles in Business”, by Douglas Hubbard, is one of the best books covering academic research in decision analysis. A key takeaway from the book is that without a solid methodology guiding the process of measuring risk and estimating uncertain data points, the probability of inaccurate analysis and poor decisions is very high.
To manage risk, businesses and organisations use various methodologies (more about this in the section below). Although each approach is different, the fundamentals are the same. However, measuring risk brings complications.
Most standards and frameworks opt for a qualitative approach to risk assessment, linking each variable to a qualitative word or phrase that describes how bad it is, or its risk posture (e.g., high/medium/low, very likely/likely/not likely). Some standards and frameworks, such as ISO 31000 or NIST CSF, suggest the following equation to measure risk:
Risk = Likelihood × Impact
In theory, this equation is simple and straightforward, and it can be used to express cyber risk in qualitative terms. But when it comes down to using this approach to generate useful information about cyber risks, the logic starts to fall apart. We won't go into detail here on the flaws associated with a purely qualitative model (we will explore this in another article), but at a high level, the reasons are linked to the use of nominal and ordinal scales – assigning words or labels as a unit of measurement. The definitions mean different things to different people (even to the same people) at different times. To make matters worse, if we then apply mathematics to nominal or ordinal scales, the results become meaningless.
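A toy example makes the problem visible (all figures invented for illustration): two risks with identical ordinal scores can differ enormously in real exposure.

```python
# Two risks scored on a classic 5x5 qualitative matrix (ordinal scales).
risk_a = {"likelihood": 2, "impact": 4}   # rare but severe
risk_b = {"likelihood": 4, "impact": 2}   # frequent but minor

# Multiplying ordinal labels produces a unitless "risk score":
score_a = risk_a["likelihood"] * risk_a["impact"]   # 8
score_b = risk_b["likelihood"] * risk_b["impact"]   # 8
# Both score 8, so the matrix declares them equal.

# The same two risks with illustrative quantitative estimates:
# annual event frequency x loss per event (USD).
exposure_a = 0.1 * 5_000_000   # 500,000 USD expected annual loss
exposure_b = 4.0 * 20_000      #  80,000 USD expected annual loss

print(f"Ordinal scores:  A={score_a}, B={score_b}")
print(f"Expected losses: A=${exposure_a:,.0f}, B=${exposure_b:,.0f}")
```

The ordinal arithmetic says the risks are interchangeable; the quantified view shows risk A carries more than six times the expected loss of risk B.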
Thanks to a significant amount of research combined with many years of experience in the field, there are solutions to the issues with a purely qualitative approach to risk management. These techniques are collectively referred to as cyber risk quantification (CRQ), or in some instances cyber risk economics.
CRQ ultimately expresses cyber risk in financial terms, but this is not the only important aspect of the approach. CRQ also provides techniques to scope scenarios, deal with expert bias, use ranges to model uncertainty, and apply statistical methods to calculate risk as a distribution of probable financial loss.
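As a minimal sketch of modelling uncertainty with ranges (all figures invented; real CRQ work often uses calibrated PERT or lognormal estimates rather than the simple triangular distribution assumed here), an expert estimate given as a minimum, most likely, and maximum value can be sampled to produce a distribution of probable loss instead of a single point:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Expert estimate of loss per incident, as a range (USD). Rather than
# forcing a single number, we model the uncertainty with a triangular
# distribution over (min, most likely, max).
low, mode, high = 50_000, 200_000, 1_500_000

# Sample the range many times to build a loss distribution.
samples = [random.triangular(low, high, mode) for _ in range(100_000)]

median = statistics.median(samples)
p90 = statistics.quantiles(samples, n=10)[-1]  # 90th percentile

print(f"Median loss:   ${median:,.0f}")
print(f"90th pct loss: ${p90:,.0f}")
```

The output is a distribution, so decision makers can ask questions a point estimate cannot answer, such as "what loss do we exceed only one year in ten?"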
Possibly due to the abstract nature of cybersecurity, it can be difficult to conceptualise how to apply standardised units of measurement to cyber risk management. For this reason, many organisations opt for the prevailing qualitative approaches for measuring cyber risk. However, this has started to change, and it is now recognised that a better approach is needed.
The CRQ approach to risk management solves many of the problems we covered earlier, by providing a consistent methodology for scoping (i.e., defining and identifying) and measuring risks based on quantifiable data derived from mathematical models.
Factor Analysis of Information Risk (FAIR) is the first CRQ model designated as an international standard. Through its taxonomy, definitions, and analysis methods, FAIR can be used to establish accurate probabilities for the frequency and magnitude of loss events. Using FAIR, organisations can:
Within the FAIR taxonomy, risk means probable loss, rather than a speculative uncertainty that could produce a positive or negative outcome. The FAIR method further breaks down risk scenarios into the factors that make up probable frequency (Loss Event Frequency) and probable loss (Loss Magnitude), each of which can be measured quantitatively. Using FAIR, a risk analyst can input a range of probable values for each variable and represent the risk exposure as a distribution of probable outcomes. One of the most common objections to CRQ and FAIR is that it is just too hard, and that the data required for each variable is not available. In practice, however, FAIR is no more difficult to use than qualitative methods; all that is needed is some training and perhaps some external help at the start.
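A simplified Monte Carlo sketch of this decomposition follows. All figures are invented, and a real FAIR analysis breaks Loss Event Frequency and Loss Magnitude down further; this only illustrates how ranged inputs become a distribution of annualised loss.

```python
import random
import statistics

random.seed(7)  # reproducible illustration

TRIALS = 50_000

# Illustrative expert estimates, as ranges (min, most likely, max):
lef = (0.1, 0.5, 4.0)                  # Loss Event Frequency: events per year
lm = (25_000, 150_000, 2_000_000)      # Loss Magnitude: USD per event

annual_losses = []
for _ in range(TRIALS):
    # Sample a frequency and convert it to a whole number of events this
    # year (a fuller model might draw from a Poisson distribution instead).
    events = random.triangular(lef[0], lef[2], lef[1])
    n_events = int(events) + (1 if random.random() < events % 1 else 0)
    # Sample a loss magnitude for each event and sum them.
    loss = sum(random.triangular(lm[0], lm[2], lm[1]) for _ in range(n_events))
    annual_losses.append(loss)

# Risk expressed as a distribution of probable annualised loss:
mean_loss = statistics.mean(annual_losses)
p95 = statistics.quantiles(annual_losses, n=20)[-1]  # 95th percentile
prob_any_loss = sum(1 for x in annual_losses if x > 0) / TRIALS

print(f"Expected annual loss: ${mean_loss:,.0f}")
print(f"95th percentile:      ${p95:,.0f}")
print(f"P(at least one loss): {prob_any_loss:.0%}")
```

Even this toy version shows the shape of a FAIR result: not a single score, but an expected loss, a tail value, and a probability, all in financial terms a board can weigh.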
Like all approaches to risk management, FAIR aims to provide organisations with tools and techniques for more effective management. FAIR is unique in that it proposes an accurate quantitative risk model which is open and transparent.
Regardless of which method your organisation uses to manage risk, you should always evaluate that method on at least these criteria: its practicality, its accuracy, its transparency, and the degree to which it helps make actionable decisions. FAIR seeks to reduce uncertainty in a defensible way so the decision-maker can compare options using the best available data.
Control frameworks and standards typically contain sets of fundamental control recommendations to prevent financial or information loss. They come from authoritative bodies that specify structures and best practices for organisations of any size or sector to use in implementing their processes and operations.
These recommendations can be useful for implementing, testing, and maintaining internal controls. However, each has its own proprietary taxonomy, lists or descriptions of controls, and control objectives, so it can be difficult for organisations to determine the best fit for their business. Four of the most common control frameworks and standards are:
Organisations use them because, generally, they work – and their benefits include:
But with so many control frameworks to choose from, and with so many controls to implement, they can quickly get overwhelming.
Likewise, most control frameworks are centred around technical controls. Though useful, this can reinforce the common misconception that cybersecurity is limited to technology. Technical controls are a big part of cybersecurity, but they alone aren't enough to protect an organisation from cyber risk. Even with technical controls in place, an organisation remains exposed to risk if it lacks the right processes and people to support them.
To determine which controls affect cyber risk in which ways, organisations need to understand the relationship between controls, and how to measure or estimate their effectiveness. Even if a control is operating as intended, how reliable is it? And is the value of reducing a particular risk enough to justify its cost? At the same time, organisations need to align these efforts with their business objectives and focus on the areas that will have the most tangible impact.
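One simple way to frame the cost question is to compare a control's annual cost against the reduction in expected annual loss it buys, a basic return-on-security-investment calculation. The figures below are invented for illustration.

```python
def expected_annual_loss(frequency_per_year: float, loss_per_event: float) -> float:
    """Expected annual loss = event frequency x average loss per event."""
    return frequency_per_year * loss_per_event

# Illustrative scenario: ransomware against a file server.
before = expected_annual_loss(frequency_per_year=0.4, loss_per_event=800_000)

# A proposed control (e.g. offline backups) estimated to cut the loss
# per event by 75% without changing how often attacks occur.
after = expected_annual_loss(frequency_per_year=0.4, loss_per_event=200_000)

control_cost = 60_000  # annual cost of the control (licences, staff time)

risk_reduction = before - after           # reduction in expected annual loss
net_benefit = risk_reduction - control_cost
rosi = net_benefit / control_cost         # return on security investment

print(f"Risk reduction: ${risk_reduction:,.0f}/yr")
print(f"Net benefit:    ${net_benefit:,.0f}/yr")
print(f"ROSI:           {rosi:.0%}")
```

Point estimates like these are a starting sketch; combining the same logic with ranged, simulated inputs gives a defensible view of whether a control is worth its price.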
Ultimately, to make effective decisions, you need to understand the relationship between controls and the risks they influence in the context of your organisation. Organisations often rely on support from stakeholders to make changes to the controls they use. But enterprise intelligence about internal controls is often lacking, in which case stakeholders cannot understand the risk scenarios or see which controls influence their frequency or impact.
Organisations need to provide decision-makers with easily digestible metrics that are linked to their system of internal controls. And if they want stakeholders on board, it’s even more important to communicate the information in financial terms. The good news is standards and frameworks are evolving in this space. To date, FAIR is the most complete international standard model for accurate quantitative analysis of risk.
As FAIR continues to gain traction as a legitimate method for measuring risk, emerging use cases have prompted regulatory bodies to begin encouraging adoption of CRQ in corporate governance. Multiple standards and governance bodies, including NIST, the PCI Security Standards Council, and the NACD, already endorse or recommend CRQ using the Open FAIR method.
For organisations already complying with any or all of these standards, adding CRQ to their risk management practices is an important part of modern cyber risk management. Gartner says CRQ is a pillar of integrated risk management (IRM): the next frontier of governance, risk, and compliance (GRC).
In its relatively short time on the cybersecurity scene, CRQ has already evolved. Once a recommendation, it will soon become a requirement through regulation or legislation. Below are a few organisations that are moving in this direction:
The NACD's 2023 Director's Handbook on Cyber-Risk Oversight provides corporate boards with key principles and practical guidance – and cites FAIR as one of the models to use. This resource and others like it reveal just how important the issue of cyber risk quantification is becoming to boards. CRQ using FAIR is beginning to make sense to business stakeholders, boards, executive leadership, and security teams on a bigger scale.
In March 2022, the SEC proposed new regulations that would require public companies to disclose “material cybersecurity incidents,” report the details of their cyber risk assessment and management programmes, business continuity and recovery plans, and more.
The proposal aims to give shareholders more visibility into the potential impact of cybersecurity events on their investments. If adopted, companies will need a streamlined and reliable way to translate their security posture into financial terms. While the proposal doesn't explicitly reference CRQ, businesses will struggle to meet the new requirements without it.
IDW's PS 340 guides German public auditors and public audit firms in examining the mandatory risk identification and monitoring processes at the companies they audit. In 2021, it was updated to require organisations to describe and measure their risks quantitatively, and to communicate those results in financial terms to decision makers, stakeholders, and the board.
As more regulators add CRQ to their list of compliance requirements, organisations will need to prepare themselves for the disruptions that could follow. Even if it doesn’t become a requirement for every industry, CRQ can demonstrate to regulators in other industries that an organisation takes risk management seriously.
The way companies manage risk is changing. Like any transformation, it takes time and adjustment before the results become clear. And it could also take several iterations before you get the process down to a science. What is clear, however, is that an effective risk management programme can’t run itself.
In order to understand risks in the context of a particular business, quantify them in the context of controls and turn the process into a repeatable and standardised activity, organisations need the right governance or organisational oversight. Cyber risk management and governance are decision-making disciplines.
ISO Guide 73:2009 defines risk management as: “a central part of the strategic management of any organisation. It is the process whereby organisations methodically address the risk attached to their activities”. It focuses on assessing significant risks and implementing suitable responses. Before getting into any measurement activities, we need to ask the right questions:
This leads more to questions such as:
This is why CRQ is a must-have: one of the key tenets of the FAIR standard is the risk management stack. Optimal risk management is about limiting future losses to an organisation while staying within its risk appetite and tolerance. This requires well-informed decisions, which require effective comparisons, which in turn can only be achieved with meaningful measures based on accurate models.
To quote Douglas Hubbard once more: “What makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong”.
When it comes to cyber risk, we want to measure risk as accurately as possible because it informs key decisions about how to treat that risk. Having a formal measurement model is critical in supporting an organisation’s overall aim of limiting future losses to within its risk appetite and tolerance as cost-effectively as possible.
Qualitative risk analysis is the process of using ordinal rating scales (e.g., 1–5, or low to high) to plot risks based on the likelihood of a risk event and the impact of loss to the organisation. The interpretation of each ordinal scale can change from person to person. Quantitative risk analysis uses probability distributions and data from the organisation, such as cost, time, and frequency, to calculate the probability and impact of a risk event. Quantitative methods determine the probable frequency and probable magnitude of a future loss in financial terms.
A CRQ risk scenario identifies digital assets in scope, threats to the assets and the impact (loss) in the case of a threat event or cyberattack.
Currently, there are no CRQ compliance requirements, although recommendations have been proposed by the US Securities and Exchange Commission and the German Institut der Wirtschaftsprüfer.