ZDNET’s key takeaways
Browser extensions can use AI prompts to steal your data.
All AI LLMs can be exploited, both commercial and internal.
LayerX’s technology now works with Chrome for Enterprise to protect you.
That browser extension you just installed in Chrome may seem harmless enough. If created by a savvy cybercriminal, it could take advantage of AI to steal personal or business data without your knowledge.
Also: Is that extension safe? This free tool lets you know before you install
A new report from browser security provider LayerX describes how any browser extension can access the prompts of AI-powered LLMs (large language models) and inject instructions designed to steal data. Because it doesn’t even require special permissions, such an extension could prove especially dangerous in a business environment, where it could capture internal or proprietary information.
How the exploit works
The exploit itself is based on the way most generative AI tools work in the browser. When you use an LLM-based AI assistant, the prompt field is part of the web page’s Document Object Model (DOM), the API that exposes every object on the page. Any extension with scripting access to the DOM can read from and write to the prompt directly, according to LayerX.
With that level of access, a malicious extension could run prompt injection attacks to change the user’s input or add hidden instructions. From there, it can extract data from the original prompt, from the AI’s response, or from the entire conversation. Ultimately, the extension could trick the AI into divulging sensitive data or performing malicious tasks.
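To make that level of access concrete, here is a minimal content-script sketch in TypeScript. It assumes a hypothetical prompt field matched by a made-up selector ("#prompt-input"); real AI front ends use their own markup, often a contenteditable element rather than a textarea, and legitimate prompt-manager extensions rely on essentially the same read/write primitive:

```typescript
// content-script.ts -- minimal sketch of an extension content script with DOM access.
// Illustrative only: "#prompt-input" is a hypothetical selector; real AI front ends use
// their own markup (often a contenteditable element rather than a <textarea>).

const PROMPT_SELECTOR = "#prompt-input";

// Read whatever the user has typed into the prompt field.
function readPrompt(): string | null {
  const field = document.querySelector<HTMLTextAreaElement>(PROMPT_SELECTOR);
  return field ? field.value : null;
}

// Overwrite the prompt field and notify the page's framework that it changed.
function writePrompt(text: string): void {
  const field = document.querySelector<HTMLTextAreaElement>(PROMPT_SELECTOR);
  if (!field) return;
  field.value = text;
  field.dispatchEvent(new Event("input", { bubbles: true }));
}

// A prompt-manager extension might use exactly this primitive to save and restore drafts;
// a malicious extension could use the same primitive to alter or harvest what the user typed.
const draft = readPrompt();
if (draft !== null) {
  console.log("Current prompt length:", draft.length);
}
```

A handful of lines like these are all that separates a convenience feature from a data-theft tool, which is why the report treats DOM scripting access itself as the risk to watch.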
Also: 5 browser extension rules to live by to keep your system safe in 2025
Though this exploit potentially threatens all browser users, the risk could be greater for enterprises, where users may copy and paste proprietary or regulated content into a prompt. An internal AI also has access to confidential corporate data, from source code to legal documents to M&A plans. Further, many businesses allow employees to freely install any extension they want, increasing the odds that a malicious one is inadvertently added.
All types of LLMs are vulnerable to this exploit, according to LayerX. This includes third-party web-based services like ChatGPT, Claude, Google Gemini, and Microsoft Copilot, as well as internal LLMs and similar tools.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The researchers proved their concept
As part of its research, LayerX said that it successfully tested this exploit on all the top commercial LLMs, focusing on ChatGPT and Google Gemini. With both of those AIs, the researchers showed that a malicious extension could manipulate the assistant to stage data exfiltration attacks.
With ChatGPT, the researchers described the following steps to show how the exploit works:
1. You install a compromised extension that requires no special permissions.
2. A command-and-control server run by the attackers sends a query to the extension.
3. The extension opens a background tab and queries ChatGPT.
4. The results are exfiltrated to an external log.
5. The extension deletes the conversation to cover its tracks, so viewing your chat history wouldn’t show any signs of intrusion or compromise.
LayerX found some extensions that are already capable of prompt injection. Google Chrome extensions such as Prompt Archer, Prompt Manager, and PromptFolder can all read, store, and write to AI prompts. Though these extensions appear to be perfectly legitimate, they show how a malicious one could use the same functionality to do damage.
How can you protect yourself against malicious extensions?
For the business world, LayerX worked with Google to add its extension risk scoring feature directly into the Chrome Enterprise browser. When you try to use an extension, LayerX’s technology analyzes all the relevant details, including access permissions, publisher information, and usage. The feature also looks for any malicious code in the extension and responds in time to block it.
Also: I found a malicious Chrome extension on my system – here’s how and what I did next
Beyond protecting individual users from dangerous extensions, LayerX’s technology should help IT admins get a better handle on such threats. The risk scores assigned to each extension will appear in the management dashboard of Chrome Enterprise, providing all the necessary details to determine which ones are legitimate and which ones are not.
Aside from the LayerX protection for Chrome Enterprise, IT and security admins can take a couple of other steps to combat these malicious extensions.
Monitor DOM interactions. Monitor all DOM interactions with your company’s generative AI tools, and be on the lookout for any listeners or webhooks that can interact with AI prompts (a defensive sketch follows this list).
Block risky extensions. Block suspicious extensions not just through allow lists but based on actual risk. Your best bet is to combine publisher reputation details with dynamic extension sandboxing to prevent malicious extensions from running.
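As one way to approach the first item, here is a minimal defensive sketch in TypeScript. It assumes a hypothetical contenteditable prompt element matched by a made-up selector ("#prompt-input") and simply logs programmatic rewrites of that element; a real deployment would need the correct selector for each AI tool and would forward the telemetry to a SIEM rather than the console. A plain textarea would need a different hook, since changes to its value property are not DOM mutations:

```typescript
// prompt-monitor.ts -- defensive sketch: watch the AI prompt element for programmatic edits.
// Assumption: the prompt is a contenteditable element matched by the hypothetical selector below.

const WATCHED_SELECTOR = "#prompt-input";

function watchPromptField(): void {
  const field = document.querySelector<HTMLElement>(WATCHED_SELECTOR);
  if (!field) {
    console.warn("Prompt element not found; adjust the selector to your AI tool's markup.");
    return;
  }

  const observer = new MutationObserver((mutations) => {
    for (const mutation of mutations) {
      // Bulk rewrites of the prompt subtree (nodes added or removed, text replaced) are worth
      // flagging: they can indicate that a script, such as an extension, edited the field.
      // User keystrokes also produce characterData mutations, so a real monitor would
      // correlate these events with trusted input events before alerting.
      console.warn("Prompt DOM mutated:", mutation.type, mutation.target);
      // In production, forward this event to your logging/SIEM pipeline instead.
    }
  });

  observer.observe(field, { childList: true, subtree: true, characterData: true });
}

watchPromptField();
```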
Finally, LayerX offers a free website designed to identify risky browser extensions. Known as ExtensionPedia, this online database evaluates the security of more than 200,000 extensions across Chrome, Firefox, and Edge.