- Written by: Terence Kam
- December 16, 2025
- Categories: Cybersecurity, Shadow AI
You would have heard of a new emerging class of software that promises to revolutionise our interaction with the internet: the agentic AI web browser. It doesn’t just show you web pages. It can actually act on your behalf inside them. Instead of you manually clicking, typing, and switching tabs, the browser’s built‑in AI agent can understand your goals, plan the steps, and carry them out autonomously.
Do not confuse that with AI-enhanced web browsers
Most current browsers with AI (like Edge Copilot or Safari summaries) still rely on you to do the clicking and decision‑making. An agentic AI browser flips that. It is goal‑driven rather than query‑driven. You delegate the outcome, and it figures out the process.
Fundamental cybersecurity problem of current agentic AI web browsers
Let’s cut to the chase.
There is one fundamental cybersecurity problem with agentic AI web browsers. As long as this fundamental problem is not solved, the AI industry will go in circles, repeatedly reacting to cybersecurity exploits by hackers and never getting on top of the problem. It will always be a cat-and-mouse game until the AI industry gets its act together and thoroughly learn this hard lesson from the cybersecurity industry.
Until it does, the risk to users of agentic AI web browsers is too great.
As Bruce Schneier, a world-renowned authority on cybersecurity, wrote here:
Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class.
This is what we will see happen to agentic AI web browsers in the near future because the AI industry is violating this core principle of cybersecurity:
Do not mix data and commands in the same ‘pipe’!
As Bruce Schneier wrote,
This general problem of mixing data with commands is at the root of many of our computer security vulnerabilities. In a buffer overflow attack, an attacker sends a data string so long that it turns into computer commands. In an SQL injection attack, malicious code is mixed in with database entries. And so on and so on. As long as an attacker can force a computer to mistake data for instructions, it’s vulnerable.
Modern devices, operating systems, hardware, and software are so much more secure nowadays (compared to 20 years ago) because of the extreme lengths taken to separate data from commands (e.g., code). These are some of the technical examples:
DEP / NX Bit (Data Execution Prevention): Modern operating systems, in conjunction with the hardware, implement a security feature called Data Execution Prevention (DEP). This feature works by marking certain memory regions as non-executable. This means that data stored in those regions cannot be executed as code.
W^X Policy (Write XOR Execute): Common in OpenBSD and hardened Linux builds; memory can be writable or executable, but never both at the same time.
Memory Management Unit (MMU): Enforces per‑page permissions so that data pages can’t be executed and code pages can’t be modified at runtime.
IOMMU (Input–Output MMU): Isolates device DMA access so a malicious peripheral can’t inject executable code into system memory.
Recently, there has been a movement towards sunsetting traditional programming languages (e.g., C, C++) for “memory-safe” programming languages (e.g., Rust), to ensure that, by design, data and commands do not mix.
Mixing data and commands in Large Language Models (LLM)
When you use an LLM AI chatbot, you will notice one thing: your commands and data are mixed in the same pipe. For example, within an input to the LLM, you supply both instructions and input data to the chatbot. When the boundary between data and commands is blurred, that is where cybersecurity vulnerabilities are found.
Worst-case scenario for non-agentic LLM?
For a non-agentic LLM, it does not act on your behalf. So, what is the worst that can happen?
As an end-user, the worst that can happen is that the AI chatbot will output forbidden or ‘unsafe’ content. For businesses and organisations, the risk is more serious: the AI chatbot can leak out forbidden information.
What does agentic AI need to do its job?
The purpose of agentic AI is for it to execute actions on your behalf. To do this, it needs to have the same level of access as you. For example, it needs to have access to your emails, banking details, credit card numbers, private information, messaging apps, and so on. Currently, all these accesses are secured with cybersecurity authentication (e.g., passwords, multi-factor authentication) to ensure that only you are allowed in. To use agentic AI, you need to share all these accesses with it.
This is where the cybersecurity risk lies. If the agentic AI can be tricked into going rogue, make a mistake, or malfunction, it can take unexpected or unauthorised actions on your behalf!
Crucial cybersecurity question to consider when using agentic AI
If a vendor provides you with agentic AI tools, you need to give its AI agent access that is secured with cybersecurity authentications. Does that mean that the vendor has the same access? If yes, do you trust the vendor enough to give them such access?
Sometimes, you are granting access to the software written by the vendor and not to the vendor itself. On the other hand, if the agentic AI service lives entirely in the cloud, it could mean that the software running the vendor’s itself cloud has access.
You need to think clearly before you proceed. For non-technical folks, this question can be difficult to answer.
Problematic architecture of current agentic AI
It seems to me that the current generation of agentic AI software is engineered in an insecure way: It passes on input to the LLM, which then makes no distinction between data and commands. As Brave Software made this analysis of Perplexity’s Comet agentic AI web browser,
The vulnerability we’re discussing in this post lies in how Comet processes webpage content: when users ask it to “Summarize this webpage,” Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user’s emails from a prepared piece of text in a page in another tab.
In fact, from the LLMs available to the general public (Copilot, ChatGPT, Grok, Perplexity, etc), data and commands are mixed in the same pipe. This type of insecure LLM architecture is then used under the hood in agentic AI tools.
What will I do?
Personally, I will give agentic AI a pass. Until the AI industry figures out a way to reliably separate data from commands, I will not let agentic AI act on my behalf.

