Blog
AI Security in three steps – 2: detect
Seeing how AI is used is one thing. Understanding when it starts creating real risks is something else entirely. Unfortunately, many organisations lack the ability to determine what is safe and what constitutes a risk when it comes to AI use in their own environment.
When AI use becomes a risk, and how to spot it in time
In our first article, we focused on discovery and on gaining visibility into how AI is actually used across the organisation. This is an essential first step, but it is not enough.
Noticing AI use is one thing. Understanding when it becomes risky is something else entirely.
The next step on the journey is detection, the ability to identify when AI use crosses the line from legitimate to risky, why this happens, and what needs to be addressed. This is where many organisations get stuck. They can see the usage, but they lack the context required to determine what is safe and what is problematic.
AI introduces risks that do not fit traditional security models
Traditional security is built around clear categories such as vulnerabilities, misconfigurations, weak access controls and malicious code. AI, however, behaves differently. Its risks do not arise solely from infrastructure. They emerge in content, decisions and behaviours, and they appear within the interaction itself rather than only in the surrounding system. In practice, this means organisations need to manage two parallel risk domains.
Risks linked to what AI produces and how it is used
- Hallucinations and incorrect answers that appear credible.
- AI bias and skewed decisions in processes such as selection or assessment.
- Toxic or inappropriate content.
- Recommendations that steer business, security or compliance in the wrong direction.
- Unwanted or cost‑driving usage.
Risks linked to how AI can be exploited or manipulated
- Prompt injection and indirect instructions embedded in data or content.
- Data exfiltration via prompts, context or responses.
- Attempts to extract system instructions or meta‑prompts.
- Data poisoning during training or database updates.
Many of these risks do not appear as clear incidents. They are gradual, context-dependent and often only show their impact over time.
Risk assessment requires context
You cannot judge AI risks based on isolated datapoints or network traffic alone.
To make informed assessments, organisations need to understand:
- who or what is interacting with AI
- which model or service is being used
- what type of data is being provided
- the context in which the use occurs
- what the AI actually outputs or does
Without this context, risk assessment becomes either too broad or simply wrong. Everything looks dangerous, or the real issues are overlooked.
How ATG built Observability into the core of its IT Operations
When ATG transitioned from a betting monopoly to a broader gaming company, the technical demands shifted quickly. New services were added. Infrastructure had to scale. Performance became business crit…
Two capabilities that together create understanding
1. Continuous analysis of real AI usage
The first capability is the ability to continuously observe how AI is used in practice. The goal is not to inspect every prompt, but to understand patterns, anomalies and changes over time.
This makes it possible to:
- identify usage patterns that increase risk
- distinguish normal behaviour from deviations
- see how behaviours shift as new AI services are introduced
This analysis relies on visibility from browsers, networks, API calls and logging, linked to identity and role.
2. Active validation of AI risks using AI
The second capability is the active testing of AI systems. Traditional Red Teaming does not scale well for AI, since tests are manual, fragmented and quickly outdated. AI systems simply evolve too rapidly.
AI Red Teaming uses automated testing where models, applications and AI agents are exposed to a wide range of attack and misuse scenarios. The purpose is to validate both content related risks and security risks, based on how AI is actually used.
This makes it possible to:
- identify real vulnerabilities rather than theoretical ones
- understand how serious the risks are in practice
- prioritise the protections that truly matter
Here, AI is used to test AI, at the scale required to keep pace.
When AI acts without a human in the loop
Risk assessment must also work when no human is involved.
When AI is used through APIs, automated workflows or AI agents, organisations need to be able to monitor:
- how data is used during inference
- which external services are called
- how behaviours evolve over time
- whether decisions or actions deviate from expectations
As AI agents gain more autonomy, this becomes a fundamental requirement for governance.
From visibility to protection without guesswork
Organisations that move directly from visibility to blocking often make the same mistakes: they stop legitimate use, overlook real risks and implement protections that fail to hit the mark. However, with proper visibility, guesswork is removed and the right protections can be put in the right place.
When risks can be identified and validated, it becomes possible to:
- separate low‑risk from high‑risk
- enable AI where it creates value
- focus protections where they are genuinely needed
Organisations without detection guess.
Organisations with detection prioritise.
Next step: Protection
In the next article, we will look at how to build guardrails, both technically and organisationally, based on the risk landscape now uncovered. Without this step, protections remain guesswork. With it, they become proportionate, justified and sustainable.
Observability
Complex IT environments multiply blind spots and widen the margin for error. Sometimes, all you need is a bird’s-eye view.
About the author
Octavio Harén
CISO & Business Area Manager Cybersecurity, Conscia Sweden
Octavio Harén is the Head of Cybersecurity and CISO at Conscia Sweden. He is responsible for Conscia Sweden's internal information security programme and for leading strategic cybersecurity initiatives, focusing on developing solutions and offerings that address customers' most complex security challenges. With over ten years of experience in IT infrastructure and cybersecurity, Octavio has established himself as a leading expert in the industry.
Related