AI Security in Three Steps – 3: Protection
Now that AI is being used broadly, visibility and warning signs are no longer enough. What matters is the ability to guide its use in practice. This means building technical safeguards that make safe, consistent and scalable use of AI possible – without slowing innovation.
Protection guides AI use with central guardrails
Once the organisation has gained visibility into how AI is used (Discover) and begun to understand which behaviours and patterns create risk (Detect), the critical next step remains: actually steering AI use in practice.
Protection is the third step, and it is not about writing more policies. It is about implementing technical guardrails that make it possible to use AI at scale – without losing control, security or compliance.
This is where many organisations take the wrong path. They try to secure each AI tool individually. A rule here, a configuration there, a contract with a vendor. This approach does not work when AI is used simultaneously in the browser, through APIs, by developers, by the business – and increasingly by AI agents acting on their own.
Protection must therefore be central, consistent and architectural – not tool‑specific.
What are AI guardrails?
AI guardrails are technical and organisational mechanisms that ensure AI is used within defined boundaries, even as usage evolves, scales or becomes automated.
In practice, this means:
- protecting data before it reaches AI services
- limiting what AI is allowed to see, do and act on
- creating traceability and accountability for interactions and decisions
- enabling AI use without compromising security or compliance
Many AI services have their own protective features. The problem is not that these mechanisms are missing, but that the organisation does not control them centrally. When multiple AI tools are used in parallel, each with different safeguards and different logic, the result is fragmented control and unclear accountability.
This is why guardrails need to move out of individual AI tools and into the organisation’s own architecture.
Browser‑based guardrails for human AI use
A large share of today’s AI usage happens where the real work takes place – in the browser. That makes the browser one of the most effective places to introduce guardrails, especially for human AI use. When protection sits in or near the browser, the organisation can guide AI interactions before data is encrypted and leaves the user’s workspace.
This enables:
- control over which AI services are allowed to be used
- guidance on what types of interactions are permitted
- protection of sensitive information during uploads, copying and sharing
- real‑time enforcement of rules based on user, role and context
The major advantage is precision. Guardrails can be applied directly in the user’s workflow, without broadly decrypting all traffic and without relying on after‑the‑fact analysis. This gives a better user experience, fewer privacy challenges and faster time to value.
For human AI use, this is often the most powerful layer of protection.
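The kind of in-browser enforcement described above can be sketched as a simple decision function. Everything here is illustrative: the service names, roles and actions are assumptions, and a real browser-based guardrail would draw these rules from central policy rather than hard-coded tables.

```python
# Minimal sketch of browser-layer guardrail logic (illustrative only).
# Service names, roles, and actions are hypothetical examples.

ALLOWED_SERVICES = {"corp-copilot", "approved-llm"}

# Per-role rules for in-browser actions such as paste, upload, share.
ROLE_RULES = {
    "finance": {"upload": "block", "paste": "warn", "share": "block"},
    "engineering": {"upload": "warn", "paste": "allow", "share": "warn"},
}

def evaluate(user_role: str, service: str, action: str) -> str:
    """Return 'allow', 'warn', or 'block' for one AI interaction."""
    if service not in ALLOWED_SERVICES:
        return "block"  # unsanctioned AI service
    # Default-deny for roles or actions without an explicit rule.
    return ROLE_RULES.get(user_role, {}).get(action, "block")

print(evaluate("finance", "corp-copilot", "paste"))    # warn
print(evaluate("engineering", "chatbot-x", "upload"))  # block
```

The point of the sketch is the default-deny posture: anything not explicitly permitted for a given user, role and context is stopped in the workflow itself, before data leaves the workspace.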
Central guardrails when AI does not run through the browser
Not all AI interactions take place in a controlled browser environment. This includes API calls to external models, internal AI platforms and AI agents that communicate with other systems. This type of AI usage requires protection that works regardless of interface. Here, guardrails need to sit in the network, identity and data paths.
A practical starting point is that all AI‑related communication, whether human or machine‑driven, should be able to pass through shared control points.
This includes:
- user access to public AI services
- API calls to external LLMs
- internal models and RAG solutions
- AI agents that call other systems or AI services
Access‑ and traffic‑based protection
A fundamental building block is the ability to control and inspect traffic to and from AI services.
In practice, this often includes:
- allowing or blocking AI services based on risk and classification
- requiring managed devices and strong identity
- consistent governance through web gateways or equivalent solutions
- the same level of protection regardless of where the user is located
When visibility into encrypted traffic is used where appropriate, it becomes possible to understand what is actually being sent to AI services, not just that they are being used. This is essential for applying data protection and policies in a meaningful way.
Protection should follow the user and the workflow – not the network.
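As a rough sketch, a shared control point combines the checks listed above: classify the AI destination by risk, require a managed device and strong identity, and apply the same decision wherever the user connects from. The tiers, hostnames and decision labels below are assumptions for illustration, not the behaviour of any particular gateway product.

```python
# Sketch of a central AI control point (illustrative assumptions only):
# classify destinations by risk and require managed device + strong identity.

RISK_TIERS = {
    "api.approved-llm.example": "sanctioned",   # vetted, contracted service
    "public-chatbot.example": "tolerated",      # allowed, but with data controls
    "unknown-ai.example": "blocked",            # explicitly denied
}

def gateway_decision(host: str, managed_device: bool, mfa_verified: bool) -> str:
    """Return the enforcement decision for one AI-bound connection."""
    if not (managed_device and mfa_verified):
        return "block"  # same bar regardless of where the user is located
    tier = RISK_TIERS.get(host, "blocked")  # unknown destinations default to blocked
    if tier == "sanctioned":
        return "allow"
    if tier == "tolerated":
        return "allow-with-dlp"  # permit, but route through data protection
    return "block"
```

The same function applies whether the caller is a person in a browser, an API client or an AI agent – which is precisely what makes the control point shared rather than tool-specific.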
Data protection as an integrated part of AI guardrails
When AI is used to process information, data protection becomes a central part of the architecture – not an add‑on. Data Loss Prevention (DLP) capabilities become a key tool here, not as static rules but as dynamic controls over what information actually leaves the organisation.
This means:
- identifying sensitive information in real time
- preventing protected data from being sent to unauthorised AI services
- masking or tokenising information before it is used by AI
- governing which data types are allowed in different contexts
In environments where AI agents operate autonomously, this becomes especially important. Protection cannot rely on human judgement at every step.
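To make the masking step concrete, here is a deliberately simplified sketch that detects a couple of sensitive patterns and redacts them before a prompt is sent onward. Real DLP engines use far richer classifiers, exact-data matching and tokenisation; the regular expressions and labels below are illustrative assumptions only.

```python
import re

# Illustrative sketch: mask a few sensitive patterns before a prompt
# leaves the organisation. Real DLP uses far richer detection than this.

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # naive payment-card pattern
}

def mask_sensitive(text: str) -> str:
    """Replace detected sensitive values with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

prompt = "Summarise the complaint from anna@example.com about card 4111 1111 1111 1111."
print(mask_sensitive(prompt))
# The email address and card number are replaced with redaction placeholders.
```

The same transformation can run in the browser layer for human use and at the central control point for API and agent traffic, so that protection does not depend on human judgement at every step.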
Guardrails for proprietary AI and agent‑based systems
Protection does not end with public AI services. When the organisation builds its own models, internal copilots or agent‑based workflows, the same principles must apply.
This includes:
- control over which data is used for training and inference
- visibility into which external sources, APIs and tools are being called
- logging of interactions, decisions and access
- clear separation between test, training and production
AI systems that act on the organisation’s data must be just as traceable and governable as any other business‑critical system – regardless of whether decisions are made by code, model or agent.
Traceability as the foundation for accountability and improvement
When AI influences decisions, processes and customer experiences, traceability becomes essential.
This means:
- interactions and decisions can be reviewed after the fact
- deviations can be analysed
- compliance can be demonstrated during audits or regulatory oversight
Central logging and analysis are not about surveillance, but about transparency, accountability and continuous improvement.
Without traceability, it is impossible to understand why something happened or to build a better version next time.
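In practice, traceability starts with a structured record per AI interaction that can be reviewed after the fact. The field names below are illustrative assumptions rather than a standard schema; the point is that every interaction gets a unique, queryable entry.

```python
import json
import time
import uuid

# Sketch of a structured audit record for one AI interaction, so that
# decisions can be reviewed, deviations analysed, and compliance shown.
# Field names are illustrative assumptions, not a standard schema.

def audit_record(actor: str, service: str, action: str, decision: str) -> str:
    """Build one JSON audit entry for an AI interaction."""
    record = {
        "id": str(uuid.uuid4()),      # unique, referenceable event id
        "timestamp": time.time(),     # when the interaction happened
        "actor": actor,               # human user or agent identity
        "service": service,           # which AI service was involved
        "action": action,             # e.g. prompt, upload, tool call
        "decision": decision,         # what the guardrail decided
    }
    return json.dumps(record)         # append to a central, tamper-evident store
```

Because the record captures the actor, the service and the guardrail decision together, the same log answers both the audit question ("can we demonstrate compliance?") and the improvement question ("why did this happen, and what should change?").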
AI protection that enables, not blocks
The real test of effective guardrails is not how much they stop, but how well they enable safe use.
Organisations that succeed:
- offer secure alternatives to unsanctioned AI tools
- make the right choice easier than shortcuts
- build protections that are consistent, predictable and technically grounded
When protection is central and clearly defined, AI usage shifts from Shadow IT to controlled and governed practices.
Protection builds on Discover and Detect
Protection only works when the earlier steps are in place. Without visibility, you protect blindly. Without an understanding of risk, you protect the wrong things. But when guardrails are based on real usage and validated risks, AI can be governed in a way that is both secure and commercially sustainable.
In the next and final part of this blog series, we bring everything together into a full picture: how organisations can use Discover, Detect and Protect to build AI governance that remains effective even as usage continues to accelerate.
About the author
Octavio Harén
CISO & Business Area Manager Cybersecurity, Conscia Sweden
Octavio Harén is the Head of Cybersecurity and CISO at Conscia Sweden. He is responsible for Conscia Sweden's internal information security programme and for leading strategic cybersecurity initiatives, focusing on developing solutions and offerings that address customers' most complex security challenges. With over ten years of experience in IT infrastructure and cybersecurity, Octavio has established himself as a leading expert in the industry.