If you’re considering allowing employees to use AI-powered browsers such as Comet or Atlas, you may want to reconsider.
That’s the warning issued in a recent report from the global technology advisory firm Gartner.
“Agentic browsers — often referred to as AI browsers — could significantly change how people interact with websites and automate transactions, but they also introduce serious cybersecurity risks,” wrote Gartner analysts Dennis Xu, Evgeny Mirolyubov, and John Watts.
“CISOs should block all AI browsers for the foreseeable future to reduce risk exposure,” they advised.
MJ Kaufmann, an author and instructor at O’Reilly Media, which operates a learning platform for technology professionals in Boston, said AI browsers pose risks by indiscriminately collecting user data.
“These browsers create security issues because their sidebars can unintentionally capture anything visible in an employee’s open tabs,” she told TechNewsWorld. “That data — including internal systems, credentials, or confidential documents — can be sent to an external AI backend without the user realizing it.”
AI browsers have an unusually deep awareness of user activity, noted Alex Lisle, CTO of Reality Defender, a New York City-based company developing AI tools to detect deepfakes and synthetic media.
“Traditional websites are isolated by browser tabs,” he explained to TechNewsWorld. “AI browsers break that isolation. They see all open tabs, understand the data within them, and use that information to build broader context. While the goal is convenience, the result is massive data collection.”
Dan Pinto, CEO and co-founder of Fingerprint, a browser fingerprinting and device intelligence company in Chicago, added that AI assistants embedded in browsers can interpret web pages and act on hidden instructions — even malicious ones.
“The risk is that the AI may act on behalf of the user,” he told TechNewsWorld. “That could mean clicking harmful links, completing forms, or transmitting sensitive information without the user’s knowledge.”
Breaking Longstanding Security Models
Gartner’s concern that AI browsers may transmit active web content, open tabs, and browsing history to cloud-based services is well founded, agreed Chris Anderson, CEO of ByteNova, an edge AI technology company in San Francisco.
“Most users don’t realize how much sensitive data lives in their browser at any given moment,” he told TechNewsWorld. “Once internal dashboards, financial data, or patient records are exposed, you can’t just undo it.”
As AI browsers shift from passive tools to autonomous actors, they strain traditional browser security assumptions.
Randolph Barr, CISO at Cequence Security, said that as organizations adopt agentic AI, the Model Context Protocol (MCP), and autonomous browsing, troubling patterns are emerging.
“AI-native browsers are introducing system-level behaviors that conventional browsers have intentionally restricted for decades,” he told TechNewsWorld. “That fundamentally undermines long-held security assumptions.”
Barr also warned about the risks of employees installing AI browsers on personal devices.
“History shows that people test new tools at home first — cloud apps, messaging platforms, AI assistants,” he said. “Once users get comfortable, those habits spill into the workplace through BYOD, browser syncing, or remote work.”
He added that AI browsers are particularly easy for attackers to identify.
“They expose unique fingerprints through APIs, extensions, DOM behavior, network traffic, and agentic actions,” he explained. “Attackers can detect them with minimal effort.”
“With AI-driven classification, adversaries can automatically identify AI browsers across millions of sessions,” he continued. “That enables highly targeted attacks against users running these higher-risk environments.”
Barr cautioned that AI browsers are evolving faster than the security controls designed to protect users and enterprises.
“Transparency around system capabilities, independent audits, and the ability to disable embedded extensions are minimum requirements for use in regulated or sensitive environments,” he said. “AI agents are advancing faster than security readiness.”
Evaluating the AI Backend
Gartner suggested that organizations could reduce risk by carefully evaluating the AI backend that powers an AI browser to determine whether its security controls meet enterprise requirements.
“In reality, this is extremely difficult,” said Will Tran, vice president of research at Spin.AI, a SaaS security firm in Palo Alto, California. “Most AI models are black boxes. Vendors don’t allow audits of internal logic, training data, or prompt handling.”
“There’s also evidence that even the vendors don’t fully understand the systems they’ve built,” he added.
Akhil Verghese, co-founder and CEO of Krazimo, an AI development and consulting firm in Dover, Delaware, agreed.
“AI browsers offer very little visibility into what happens before data reaches the AI provider,” he told TechNewsWorld. “Terms of service can change at any time. Expecting users to track all of this isn’t realistic.”
Training Alone Won’t Solve the Problem
Even if an organization believes an AI browser vendor adequately addresses security concerns, Gartner recommends educating employees that anything visible in their browser could be transmitted to an AI backend.
“Education is essential, but it can’t be a one-time message,” said Erich Kron, CISO advisor at KnowBe4, a security awareness training company in Clearwater, Florida.
“This needs to be reinforced regularly,” he told TechNewsWorld. “Otherwise, employees will focus on productivity and forget the risks.”
Still, education alone may not prevent data leakage, argued Chris Hutchins, founder and CEO of Hutchins Data Strategy Consultants, a healthcare advisory firm in Nashville, Tennessee.
“With AI offering major efficiency gains, it’s unrealistic to expect employees to consistently change behavior — especially if they don’t perceive the data as sensitive,” he said. “That creates a shadow IT problem and leaves security teams blind to where data is going.”
Lionel Litty, CISO and chief security architect at Menlo Security, cautioned that even trusted AI browsers require strict operational controls.
“Limit which sites the browser can access, enforce strong DLP policies, and scan everything it downloads,” he told TechNewsWorld. “You also need defenses against browser vulnerabilities. These tools can be steered into dangerous areas of the web, and URL filtering alone won’t stop that.”








