AI Security in three steps – 1: discover
Driven by employees’ curiosity, AI use is spreading through organisations faster than most can keep up with. When this use happens under the radar, a new challenge emerges: how do you protect something you cannot see? To manage AI‑related risks, you need visibility into how the technology is actually being used.
Protecting AI starts with knowing where it is used
AI has already found its way into most organisations. Not through major strategic decisions or formal programmes, but through pragmatic, everyday use.
Employees turn to public AI services to write, analyse and structure information. Developers use code assistants. Analysts call models through cloud‑based APIs. Much of this happens in parallel with established IT and security processes.
This is where Shadow AI emerges – not as a breach of policy, but as a natural consequence of the technology becoming widely accessible.
In this article series, we describe three steps that build on each other: discover, detect and protect.
- Discover is about uncovering how AI is actually used within the organisation. It includes which services are being used, where interactions take place and what types of information are involved. Without this foundation, the next step becomes impossible.
- Detect is about identifying and validating high‑risk use. It’s about distinguishing between acceptable use and deviations that require action.
- Protect is about implementing targeted safeguards – based on real usage and actual risks, rather than assumptions. These are the guardrails.
To validate AI use and introduce effective safeguards, you first need a clear picture of how AI is really being used. This article focuses on creating that visibility – the basis for both risk assessment and future guardrails.
Visibility directly in the user’s browser
A large share of today’s AI use takes place where daily work happens: in the browser. Generative AI services and AI features in cloud‑based tools are accessed directly via the web, often without installation or local tracking.
Creating visibility here is about understanding AI use in real time – at the very moment the interaction occurs. It makes it possible to see which services are being used, what an interaction contains and what types of information are being shared.
When visibility exists at this level, the traffic does not need to be decrypted and then reconstructed afterwards for analysis. The insight is created before encryption, which provides higher precision, fewer privacy challenges and less impact on the user experience compared with purely network‑based analysis.
This is a highly effective way to understand how humans use AI, especially for public web services and SaaS tools.
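To make this concrete, here is a minimal sketch of how events from browser‑level telemetry could be classified once collected. It assumes a hypothetical JSON‑lines export of browser events; the field names, service list and sensitivity patterns are illustrative examples, not any specific product's schema.

```python
# Illustrative sketch only: assumes a hypothetical export of browser telemetry
# events (one JSON object per line). Field names, the service list and the
# sensitivity patterns are examples, not a specific product's schema.
import json
import re

# Public generative AI services the organisation wants visibility into.
AI_SERVICES = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Copilot",
}

# Very rough indicators of potentially sensitive content in a prompt.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{6}[-+]?\d{4}\b"),            # personal identity number
    re.compile(r"\bconfidential\b", re.I),
    re.compile(r"\b(password|api[_ ]?key)\b", re.I),
]

def classify_event(event: dict) -> dict | None:
    """Return a summary if the event targets a known AI service, else None."""
    service = AI_SERVICES.get(event.get("domain", ""))
    if service is None:
        return None
    prompt = event.get("prompt_text", "")
    flags = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return {
        "user": event.get("user"),
        "service": service,
        "prompt_length": len(prompt),
        "sensitive_indicators": flags,
    }

if __name__ == "__main__":
    with open("browser_events.jsonl", encoding="utf-8") as fh:
        for line in fh:
            summary = classify_event(json.loads(line))
            if summary:
                print(summary)
```

The point of the sketch is the shape of the data: which service, which user and an indication of content sensitivity, captured at the moment of interaction and before encryption.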
What to do when browser visibility isn’t possible
Visibility through the browser is not always possible or sufficient, and AI is also used in contexts with no interaction through a controlled browser at all. This may include:
- AI calls through APIs in development environments
- automated workflows and system‑to‑system communication
- AI agents running entire processes without any human ‘frontend’
When visibility cannot be created through the browser, it must instead be established through broader technical mechanisms that can observe traffic regardless of where it originates. This includes:
- identifying AI‑related traffic patterns via DNS and application classification
- controlling and analysing traffic through secure web gateways (SWG)
- selective decryption of encrypted traffic when necessary to understand content, prompts or API calls
Here, decryption is not used as a tool to inspect all traffic, but as a way of understanding specific interactions where the context would otherwise be missing. It is a broader and more generic form of visibility, but it becomes essential when AI use cannot be monitored directly in the browser.
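As a rough illustration of the first of these mechanisms, the sketch below scans a simple DNS query log for lookups of known AI‑related domains. The log format and the domain list are assumptions made for the example; in practice this classification is typically performed by the secure web gateway or firewall itself.

```python
# Sketch: flag DNS queries to known AI-related domains in a simple query log.
# The log format ("timestamp client_ip query_name") and the domain suffixes
# are assumptions for illustration only.
from collections import Counter

AI_DOMAIN_SUFFIXES = (
    "openai.com",
    "anthropic.com",
    "generativelanguage.googleapis.com",   # Gemini API endpoints
    "cognitiveservices.azure.com",
)

def is_ai_domain(name: str) -> bool:
    name = name.rstrip(".").lower()
    return any(name == s or name.endswith("." + s) for s in AI_DOMAIN_SUFFIXES)

def summarise(log_path: str) -> Counter:
    """Count AI-related DNS lookups per internal client."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.split()
            if len(parts) < 3:
                continue
            _, client_ip, query_name = parts[:3]
            if is_ai_domain(query_name):
                hits[client_ip] += 1
    return hits

if __name__ == "__main__":
    for client, count in summarise("dns_queries.log").most_common(10):
        print(f"{client}: {count} AI-related lookups")
```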
When AI agents act independently
Visibility must also cover scenarios where it is not a human user initiating the AI calls, but where systems act independently. This may include:
- AI agents that automate decision‑making processes
- machines and services communicating with external models
- background processes triggering AI processing
Here, the mechanisms providing visibility need to be capable of following the communication regardless of identity, location or transport layer. This means that security functions must be present both close to where the user operates and centrally in the network where systems communicate.
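A minimal sketch of what that could look like at the network edge: tagging calls to AI API endpoints in egress proxy logs and separating human identities from service accounts. The log fields, the host list and the "svc-" naming convention are assumptions for illustration only.

```python
# Sketch: separate human from non-human (service/agent) AI traffic in egress
# proxy logs. Field names, the host list and the "svc-" identity convention
# are assumptions made for this illustration.
import json

AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def tag_record(record: dict) -> dict | None:
    """Return a tagged summary for calls to AI API endpoints, else None."""
    if record.get("destination_host") not in AI_API_HOSTS:
        return None
    identity = record.get("identity", "unknown")
    return {
        "identity": identity,
        "identity_type": "service" if identity.startswith("svc-") else "human",
        "destination": record["destination_host"],
        "bytes_sent": record.get("bytes_sent", 0),
    }

if __name__ == "__main__":
    with open("egress_proxy.jsonl", encoding="utf-8") as fh:
        for line in fh:
            tagged = tag_record(json.loads(line))
            if tagged:
                print(tagged)
```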
From insight to informed decisions
Creating visibility into AI use is not a solution to a single challenge. It is the prerequisite and foundation for everything that follows. When the organisation understands how AI is being used, the discussion shifts from assumptions to facts. This allows you to:
- identify real risks
- determine which use cases require governance
- understand what types of data are being exposed
- prioritise which safeguards should come first
For this visibility to remain useful over time, it also needs to be traceable. By logging AI‑related interactions, organisations gain visibility, control and traceability – both for continuous analysis and for incident response or retrospective review.
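One possible shape for such a log record is sketched below. The schema is illustrative rather than a standard; storing a hash and length of the prompt instead of the prompt itself is one way to keep the record useful for traceability while limiting the privacy impact.

```python
# Sketch: a structured, append-only record of an AI interaction for later
# analysis or incident response. The schema is illustrative, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(user: str, service: str, prompt: str,
                       data_classification: str,
                       log_path: str = "ai_audit.jsonl") -> None:
    """Append one traceable AI interaction record.

    Only a hash and the length of the prompt are stored, which keeps the
    record useful for traceability without retaining the prompt itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "service": service,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt_length": len(prompt),
        "data_classification": data_classification,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_ai_interaction("anna.larsson", "ChatGPT",
                       "Summarise this customer contract ...", "internal")
```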
Without a clear picture of reality, risk management becomes either blunt or, in the worst case, entirely ineffective. Only when visibility exists – both in the moment and over time – can the organisation move on to assessing risks in a meaningful way and then implementing targeted safeguards.
When this foundation is in place, the next question becomes unavoidable: when does legitimate AI use turn into actual risk?
That is where the next step – Detection – begins.
About the author
Octavio Harén
CISO & Business Area Manager Cybersecurity, Conscia Sweden
Octavio Harén is the Head of Cybersecurity and CISO at Conscia Sweden. He is responsible for Conscia Sweden's internal information security programme and for leading strategic cybersecurity initiatives, focusing on developing solutions and offerings that address customers' most complex security challenges. With over ten years of experience in IT infrastructure and cybersecurity, Octavio has established himself as a leading expert in the industry.