Scoping AI Risks Using a Data-Driven Approach
Since 2022, following the public release of ChatGPT, AI chatbots have revolutionized the way we interact with the world. Today, anyone with a smartphone and an internet connection can generate highly convincing content with just a few prompts. Businesses are rapidly adopting AI, integrating it into decision-making, customer service, creative processes, and more. However, this widespread accessibility has also expanded the attack surface that organizations must secure.
This broad adoption by employees and organizations has introduced new cyber, technology, and operational risks, including social engineering, security breaches, and system failures. To properly assess and manage these risks, you must first identify which AI technologies are in scope.
Artificial intelligence is not a single, uniform technology but a broad category encompassing many different applications and techniques. Before analyzing AI risk, it is important to define our terms clearly.
What is AI?
The first question we are often asked in relation to AI risk is “What do you mean by AI?” AI, as the term is used in the media and more generally, is an umbrella label covering many types of artificial intelligence applications and systems. These are often lumped together by people with different understandings or use cases in mind, which leads to confusion about the actual risks organizations face, because the potential threats are not the same.
So we asked some of the most common AI chatbots:
In a paragraph, describe what AI is and what are the different kinds based on capability most commonly used?
ChatGPT
Artificial Intelligence (AI) is the field of computer science that enables machines to mimic human intelligence, including learning, reasoning, and problem-solving. Based on capability, AI is classified into three types: Narrow AI, General AI, and Super AI. Narrow AI (Weak AI) is designed for specific tasks, such as virtual assistants, facial recognition, and recommendation systems. General AI (Strong AI) refers to machines with human-like intelligence that can perform any intellectual task a human can, though it remains theoretical. Super AI surpasses human intelligence, exhibiting advanced decision-making, creativity, and self-awareness, but it is purely hypothetical. These categories help define AI’s current applications and future possibilities.
Microsoft Copilot
Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Based on capability, AI can be categorized into three main types: Narrow AI, which is designed for specific tasks like virtual assistants (e.g., Siri, Alexa); General AI, which aims to perform any intellectual task that a human can do, though it remains largely theoretical at this stage; and Superintelligent AI, which surpasses human intelligence and capabilities, also a theoretical concept.
Claude
Artificial Intelligence (AI) refers to computer systems designed to perform tasks that typically require human intelligence, encompassing both narrow AI (designed for specific tasks) and general AI (hypothetically capable of human-like general intelligence). The most common types based on capability include machine learning systems that learn from data patterns, deep learning models using neural networks for complex pattern recognition, natural language processing systems for understanding and generating human language, computer vision systems for processing and analyzing visual information, expert systems that make decisions based on predefined rules, robotics AI for physical task automation, and reinforcement learning systems that learn through trial and error to optimize decision-making processes. While narrow AI excels in specific domains like image recognition, language translation, or game playing, artificial general intelligence (AGI) that matches or exceeds human-level cognition across all domains remains a theoretical goal.
Gemini
Artificial intelligence (AI) is the ability of a computer or machine to mimic human intelligence. This can involve things like learning, problem-solving, and decision-making. AI is a broad field, and there are many different types of AI. The most common types of AI are narrow or weak AI, general AI or strong AI, and super AI. Narrow AI is designed for a specific task, like playing chess or recommending products. It's the most common type of AI today. General AI is a theoretical type of AI with human-level intelligence, capable of performing any intellectual task a human can. Super AI is also theoretical and would surpass human intelligence in all aspects. While general and super AI are still largely in the realm of science fiction, narrow AI is already having a significant impact on our world.
The answers above are broadly similar. The most common type of AI technology used falls under narrow AI, sometimes called weak AI. This includes:
- Large Language Models (LLMs)
- Generative AI (GenAI)
The difference between LLMs and GenAI lies in their application. LLMs are a subset of generative AI, and LLMs generate only text. GenAI applications can take various kinds of input, such as images or sounds as well as words, and can generate content that includes text, images, and sounds. Applications such as ChatGPT, Microsoft Copilot, and Anthropic’s Claude are now called multimodal GenAI because they can process and generate more than just text.

Once you have identified the types of AI that you or your third parties are using and need to assess, you will be better able to scope the potential risk scenarios you and your company could face.
The CIA Triad and Common Risks Related to AI
For risk managers and policymakers, the CIA triad (confidentiality, integrity, availability) is useful for identifying common AI-related risks.

Confidentiality
- Employees using public or open-source LLMs and leaking sensitive company information
- A third-party LLM with poor controls, leading to your data being leaked via prompt injection
- Cybercriminals using AI tools to enhance attacks such as phishing, leading to a breach
Integrity
- Data poisoning of an LLM, leading to corrupted or manipulated model outputs
- Training data that is not properly secured or maintained, leading to potential bias in outputs
Availability
- Cybercriminals using LLMs to discover and exploit zero-day vulnerabilities
- Dependence on LLMs for specific use cases, where an outage of the model leads to a system outage
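To make the triad actionable, the scenarios above can be cataloged in a simple scenario register keyed to each leg of the triad, so they can later be scoped and quantified. A minimal sketch in Python (the `RiskScenario` fields and the scenario entries are hypothetical, for illustration only):

```python
# Illustrative scenario register mapping CIA triad categories to AI risk
# scenarios. All names and fields below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    cia_category: str   # "Confidentiality", "Integrity", or "Availability"
    threat_actor: str
    asset: str

scenarios = [
    RiskScenario("Employee leaks data to public LLM", "Confidentiality",
                 "Insider (accidental)", "Sensitive company data"),
    RiskScenario("Prompt injection against third-party LLM", "Confidentiality",
                 "External attacker", "Customer data"),
    RiskScenario("Data poisoning of training set", "Integrity",
                 "External attacker", "Model outputs"),
    RiskScenario("LLM provider outage halts workflow", "Availability",
                 "None (operational)", "AI-dependent business process"),
]

def by_category(category):
    """Return all registered scenarios filed under one leg of the CIA triad."""
    return [s for s in scenarios if s.cia_category == category]

print([s.name for s in by_category("Confidentiality")])
```

A register like this gives each scenario the attributes (threat actor, asset) that the scoping step below asks for.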
How do we analyze AI risk?
To help identify the most relevant AI risk scenarios, we recommend using the steps outlined in the FAIR-AIR approach playbook. This approach helps risk analysts and risk leaders speak the same language as the business, so that decision-makers and executives can understand the risk and what should be done to reduce it.
5 Steps to Analyze Risk Using the FAIR-AIR Approach
1) Contextualize
- Why are you analyzing this risk and what’s driving the reason behind it?
- What are the vectors for AI risk?
2) Scope
- Identify risk scenarios
- Identify the attack surface, threat actor, method of attack, and impact of threat on your asset
3) Quantify
- Determine the likelihood and impact of the risk
4) Prioritize/Treat
- Determine treatment options and prioritize those with the greatest impact
5) Decision-making
- Using the results and treatment options, decide on a plan going forward
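Step 3 (Quantify) is where a FAIR-style analysis replaces ordinal “high/medium/low” ratings with loss estimates. A minimal Monte Carlo sketch of that step, assuming a Poisson event frequency and a lognormal loss magnitude for a single scenario (all parameter values below are illustrative assumptions, not figures from the FAIR-AIR playbook):

```python
# Monte Carlo sketch of quantifying one risk scenario: simulate many
# hypothetical years, each with a random number of loss events and a
# random loss magnitude per event. Parameters are illustrative only.
import math
import random
import statistics

random.seed(7)

def sample_poisson(lam):
    """Sample an event count from a Poisson distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

# Assumed inputs for one risk scenario (purely illustrative):
EVENTS_PER_YEAR = 2.0             # expected loss-event frequency
LOSS_MU, LOSS_SIGMA = 11.0, 1.0   # lognormal parameters for loss magnitude ($)

# Simulate total loss across many hypothetical years
annual_losses = []
for _ in range(10_000):
    n_events = sample_poisson(EVENTS_PER_YEAR)
    annual_losses.append(
        sum(random.lognormvariate(LOSS_MU, LOSS_SIGMA) for _ in range(n_events))
    )

ale = statistics.mean(annual_losses)  # annualized loss expectancy
p95 = sorted(annual_losses)[int(0.95 * len(annual_losses))]  # bad-year estimate
print(f"ALE: ${ale:,.0f}  |  95th percentile annual loss: ${p95:,.0f}")
```

Outputs like these feed steps 4 and 5: decision-makers can compare the cost of a treatment option against the reduction in expected annual loss it buys, rather than debating color-coded ratings.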

