FAIR™ Methodology

FAIR™ risk methodology: quantifying and managing cyber risk

How is the FAIR™ methodology different from other cyber risk analysis methods? What is its governing principle? How can you benefit from this standard?

C-Risk
Published on 14 February 2022 (Updated on 14 February 2022)

In 2021, business-related cybersecurity challenges were on the rise. Cyber attacks are carried out by criminals with ever more substantial means, and those attacks are increasingly frequent. Cyber risk, which was initially a technological risk, has now transformed into an operational risk. Today, it affects all layers of the company, all the way up to the CEO. These relatively new circumstances are an incentive for organisations to take an interest in cyber risk analysis and management methods. ISO 27005, CIS RAM, the COSO risk framework, NIST CSF or EBIOS: it is difficult to choose the approach best suited to a given organisation. This choice is all the more complex because these approaches are often based on a qualitative risk analysis, which leaves little room for reliable probability data on the occurrence of a cyber risk. The FAIR™ (Factor Analysis of Information Risk) methodology was designed to remedy that problem. Here are some explanations.

The FAIR™ methodology: how to conduct a probabilistic cyber risk analysis


The FAIR™ analysis method fills two gaps. The range of cyber risk analysis methods is rather wide: the NIST Cybersecurity Framework (CSF), for example, is a very popular approach worldwide, as is the ISO 27005 standard.

However, those methods are primarily intended for cybersecurity experts. Moreover, they are often non-prescriptive, leaving the decision on how to measure risk to practitioners, who most often rely on a qualitative approach that is inherently subjective (see ISO 27005, section 8.3).

The FAIR™ taxonomy complements those qualitative methods by responding to their limitations on how to measure risk.

Why a new cyber risk analysis method?

All the risk analysis methods (ISO 27005, NIST CSF, COSO, OCTAVE, etc.) that have existed on the market for the last thirty years are qualitative. They are based on IT experts' opinions and experience to rank risks on subjective scales. With such scales, a risk is stamped as "low" or "high", and the results of the analyses most often come as a colour-coded risk map (heatmap) going from green to red.

Of course, these methods provide good practice and necessary cyber hygiene habits. However, since they rely on subjective risk analysis, they do not grant all business functions a common basis they can work on.

In order to choose the right cyber risk management strategy, all of the company's divisions must share the same terminology and the same understanding of risk. Risk analysis is the cornerstone of an organisation's cyber security strategy: if it remains qualitative, it cannot be entirely useful.

The FAIR™ method offers an objective and quantifiable risk analysis model, which results in a mathematical risk estimate. This then leads to the development of risk scenarios that can be compared to one another. Taking all of this into account, analysts and information security experts have everything they need to design effective cyber risk prevention measures.

Definition and objective of the FAIR™ standard

The FAIR™ standard offers a taxonomy and a methodology for cyber risk analysis in all business functions. Through financially quantified risk scenarios, the FAIR™ framework establishes a link between cybersecurity experts, business managers and general management. This standard is designed, supported and promoted by the FAIR™ Institute, a professional non-profit organisation.

This approach to cyber risk analysis first proposes a taxonomy of the distinct factors that constitute risk. It is a collection of definitions clarifying concepts such as risk, threat, danger, asset, control and audit. The FAIR™ method explains the connections between these factors, giving the company food for thought.

The FAIR™ standard also provides a methodology for breaking down risk into distinct measurable factors and for using statistics and probabilities to quantitatively estimate risk. The objectives are to analyse complex risks, to identify key data for quantification and to understand the interdependencies between risk factors.

Then, on the basis of logical, easy-to-explain, repeatable and defensible scenarios, forecasts of future loss (in GBP, EUR, USD, etc.) can be presented to decision makers.

What questions does this methodology address?

45% of Fortune 1000 companies already use the FAIR™ standard, and it is the subject of university courses in more than 20 institutions. All of these organisations trust this approach because it enables management to make informed decisions about cybersecurity. The FAIR™ standard thus helps answer the following questions:

  • How many times could a disaster occur in a given time interval?
  • How much will this disaster cost?
  • What are the main cyber risks?
  • Which assets are concerned?
  • How and how much to invest to reduce those risks?
  • Between two control solutions, which one would reduce the risk most effectively?
  • What risks call for the use of insurance and for what coverage amount?
  • Which insurance policy best suits the company’s risks?

By extension, the FAIR™ analysis method gives you the opportunity to have an effective thought process about your cybersecurity budget. It also helps you to choose the risk reduction solution that will yield the best return on investment. This approach facilitates regulatory compliance too.

Benefits of quantifying cyber risk with the FAIR™ standard

How does the FAIR™ methodology work?


The FAIR™ methodology relies on the taxonomy featured in the diagram below. It is based on a “frequency x magnitude” model which is applicable to all situations and exportable to all businesses.

The results (in GBP, EUR, USD, etc.) may be used by the different divisions of the organisation, by the board of directors and by general management.

For instance, if a company estimates that a loss event could occur once every 10 years, and that it involves a 20,000,000 USD loss, then the formula would be:

A loss event frequency (LEF) of 0.1/year x a 20,000,000 USD loss magnitude = 2,000,000 USD/year.
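As a minimal sketch, the simple "frequency x magnitude" calculation above can be expressed in a few lines of Python (the figures are the hypothetical ones from the example):

```python
# Annualised loss exposure under the simple FAIR "frequency x magnitude" model.

def annualised_loss_exposure(loss_event_frequency: float, loss_magnitude: float) -> float:
    """Expected loss per year = LEF (events/year) x magnitude (loss per event)."""
    return loss_event_frequency * loss_magnitude

lef = 1 / 10            # one loss event every 10 years
magnitude = 20_000_000  # USD lost per event

print(annualised_loss_exposure(lef, magnitude))  # 2000000.0 USD/year
```

The same function makes the two risk-reduction levers explicit: lowering the first argument (fewer loss events) or the second (smaller losses per event) both shrink the annualised exposure.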

This risk model leaves the decision makers with two ways of reducing loss exposure:

  • reducing the LEF, the number of times that loss events occur;
  • mitigating the amount of financial losses that would result from such events.

The risk taxonomy on which the FAIR™ standard is based can be schematised as follows:

Risk taxonomy according to FAIR™ Analysis framework

Risk according to the FAIR™ method

The FAIR™ philosophy is about conceiving risk as an uncertain event the probability and consequences of which need to be measured. The FAIR™ standard is probabilistic rather than predictive. Risk is therefore defined as the probability of a loss event relative to an asset. It is “the probable frequency and probable magnitude of a future loss”.

Risk is then broken down into factors that make up the probable frequency and the probable loss:

  • threat event frequency;
  • threat contact frequency;
  • probability of threat agents taking action;
  • vulnerability;
  • threat capacity;
  • loss event frequency (LEF);
  • primary loss magnitude;
  • secondary loss event frequency;
  • secondary loss magnitude.

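In the FAIR™ taxonomy, these factors combine multiplicatively. The following is a simplified point-estimate sketch of that decomposition; all the numbers are illustrative, and a real FAIR™ analysis works with calibrated ranges and distributions rather than single values:

```python
# Simplified point-estimate decomposition of the FAIR factors listed above.
# All inputs are hypothetical; real analyses use ranges, not point values.

contact_frequency = 50      # threat contacts with the asset per year
prob_of_action = 0.2        # probability a contact becomes a threat event
vulnerability = 0.1         # probability a threat event becomes a loss event

threat_event_frequency = contact_frequency * prob_of_action    # events/year
loss_event_frequency = threat_event_frequency * vulnerability  # loss events/year

primary_loss = 500_000      # expected primary loss per loss event (USD)
secondary_lef = 0.3         # share of loss events with secondary effects
secondary_loss = 1_000_000  # expected secondary loss per such event (USD)

# Risk = probable frequency x probable magnitude (primary + expected secondary).
risk = loss_event_frequency * (primary_loss + secondary_lef * secondary_loss)
print(f"{risk:,.0f} USD/year")  # 800,000 USD/year
```

This makes the interdependencies visible: halving the vulnerability, for example, halves the loss event frequency and therefore the annualised risk.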
Factors which affect loss

Those are the attributes or properties of an asset, a threat, an organisation or even the external environment that will affect the magnitude of the loss for the party involved in a disaster. These factors can affect primary or secondary loss across four categories: the asset and the threat (primary loss), and the organisation and the external environment (secondary loss).

Loss factors of an asset in the FAIR™ standard

The loss factors of an asset comprise its value and/or its liabilities (personal data that must be protected under the UK General Data Protection Regulation, for example).

The value and/or the liabilities of an asset play a very important part in the nature and the magnitude of the loss. The value of an asset can be evaluated according to the following criteria:

  • Criticality: an index measuring the loss of productivity of an organisation that can no longer produce its goods or provide its services;
  • Cost: the intrinsic value of the asset (cost of replacement or repair);
  • Sensitivity: the damage that would result from unintentional publication.

The loss factors of threat in the FAIR™ standard

These are the threat agent's action, its competence, whether it is internal or external, and the way it exploits the breach.

Threat agents can act on an asset for the following purposes:

Impact on confidentiality

  • Access: unauthorised access to data, without any further action;
  • Abuse: unauthorised use of the asset, such as identity theft or misuse of servers and other IT resources;
  • Disclosure: unlawful sharing of sensitive data.

Impact on integrity

  • Modification of any information or information-handling process, making that information or process inaccurate, unreliable or untrustworthy.

Impact on availability

  • The threat agent prevents or denies legitimate, authorised access to an asset (e.g. deletion of information, system disconnection, ransomware).

The effect of those threats depends on the specific properties of the asset. If the "sensitive data" asset is disclosed, for example, this will not necessarily affect productivity, but the company's liability in terms of legal compliance will be engaged. This is why the properties of the asset and the type of threat together determine the nature and magnitude of financial loss, be it primary or secondary.

FAIR™ methodology scenario definition diagram

Benefits and limitations


C-Risk uses an approach such as the FAIR™ standard because quantifying cyber risks in financial terms genuinely improves the governance of information systems security. Professional organisations and frameworks such as NIST, the SANS Institute, Carnegie Mellon SEI's OCTAVE method, ISACA and COSO now reference this approach to supplement their own bodies of knowledge with risk quantification.

The frequency/magnitude pair leads to a logical result, whereas the nominal scales used to categorise risks permit neither risk comparison nor future loss estimation.

As indicated before, the FAIR™ methodology is a probabilistic approach: it does not claim to predict, and no method does. Nor does it aim for exhaustiveness; rather, it focuses on the assets most critical to the functioning of the organisation. It accounts for the most probable scenarios instead of pushing you to imagine everything that could possibly happen, as some other methods do.

FAQ

In the FAIR™ standard, risk is defined as the "probable frequency and probable magnitude of future loss".

The FAIR™ methodology has four steps:

  • identifying the components of the scenarios;
  • evaluating the frequency of loss events;
  • estimating the magnitude of the future loss;
  • calculating the risk.
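The four steps above are commonly implemented as a Monte Carlo simulation over calibrated ranges. The sketch below illustrates the idea with Python's standard library; the triangular distributions and all parameter values are purely illustrative stand-ins for a properly calibrated FAIR™ analysis:

```python
# Monte Carlo sketch of the FAIR steps: draw a loss event frequency and a
# per-event magnitude from (illustrative) distributions, combine them, and
# summarise the resulting annual loss. random.triangular takes (low, high, mode).
import random
import statistics

random.seed(42)  # reproducible illustration

def simulate_annual_loss(n_trials: int = 10_000) -> list[float]:
    losses = []
    for _ in range(n_trials):
        # Step 2: loss event frequency per year (low=0.05, high=0.5, mode=0.1).
        lef = random.triangular(0.05, 0.5, 0.1)
        # Step 3: loss magnitude per event in USD (low, high, mode).
        magnitude = random.triangular(100_000, 5_000_000, 500_000)
        # Step 4: annualised loss for this trial.
        losses.append(lef * magnitude)
    return losses

losses = simulate_annual_loss()
print(f"mean annual loss: {statistics.mean(losses):,.0f} USD")
print(f"95th percentile:  {statistics.quantiles(losses, n=20)[-1]:,.0f} USD")
```

Reporting a distribution (mean, percentiles) rather than a single number is what lets decision makers see both the probable frequency and the probable magnitude of future loss.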

You will not be able to make a prediction with the FAIR™ methodology, but it will provide you with the best possible estimate of cyber risk-induced potential loss, given the available data.