Artificial intelligence (AI) adoption in the workplace is accelerating faster than most organizations can secure it, leaving employee use increasingly unmonitored and risky.
According to new research from AI-powered data security firm Cyberhaven, nearly 40% of all employee interactions with AI tools now involve sensitive corporate data. As workers bypass official channels to use specialized tools like Claude and DeepSeek, organizations find themselves split between “frontier” adopters and “laggards” who are losing control over their most valuable information assets.
Cyberhaven Labs, the company’s in-house research team, released its findings on Thursday based on data lineage tracking how information moves between endpoints, SaaS apps, and AI tools in real time. In many cases, companies prioritize growth and experimentation. Security, governance, and oversight follow.
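Data lineage of this kind can be pictured as a stream of flow events, each recording one hop of a piece of data between systems, which can then be replayed to reconstruct where information traveled. The sketch below is purely illustrative; the event fields and the `trace_lineage` helper are assumptions for explanation, not Cyberhaven's actual schema.

```python
from dataclasses import dataclass

# Hypothetical flow event: one hop of a tracked piece of data.
@dataclass(frozen=True)
class FlowEvent:
    data_id: str       # identifier for the tracked content
    source: str        # e.g., "endpoint", "crm_saas"
    destination: str   # e.g., "genai_tool:deepseek"

def trace_lineage(events, data_id):
    """Return the ordered chain of systems one piece of data passed through."""
    hops = [e for e in events if e.data_id == data_id]
    return [hops[0].source] + [e.destination for e in hops] if hops else []

events = [
    FlowEvent("doc-42", "endpoint", "crm_saas"),
    FlowEvent("doc-42", "crm_saas", "genai_tool:deepseek"),
]
print(trace_lineage(events, "doc-42"))
# ['endpoint', 'crm_saas', 'genai_tool:deepseek']
```

Replaying hops this way is what lets a security team see that a document which left an endpoint eventually landed in an AI tool, even via an intermediate SaaS app.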
At the same time, most AI usage today occurs in tools that carry elevated risk. Employees are routinely inputting sensitive data into a vast and growing ecosystem of GenAI tools, coding assistants, and custom-built agents.
For many organizations, AI use is a kind of Wild West, according to the report. Tools proliferate faster than policies, employee usage often outpaces visibility, and sensitive data flows across models, applications, and accounts with limited centralized control.
“It’s increasingly difficult to secure the wide use of tools, but what organizations can and should do is ensure their data security solutions include AI usage,” Cyberhaven CEO Nishant Doshi told TechNewsWorld.
AI Tool Sprawl Floods Enterprise Networks
A small cohort of frontier organizations aggressively deploys hundreds of tools to nearly 70% of their workforce. Laggards remain stalled at 2% adoption. This divide is creating a vacuum where employees, driven by productivity demands, are bypassing corporate guardrails to build their own Shadow AI ecosystems.
Cyberhaven research reports five primary findings:
- Organizations with the highest rates of AI adoption are utilizing over 300 GenAI tools within their enterprise environment.
- Chinese open-weight models are now enterprise favorites, accounting for half of endpoint-based usage among Cyberhaven users.
- GenAI tools remain risky across the board. When looking at the top 100 most-used GenAI SaaS applications, 82% are classified as “medium,” “high,” or “critical” risk.
- One-third of employees are accessing GenAI tools from personal accounts, increasing overall risk and Shadow AI.
- Employees are feeding AI tools sensitive data, as over a third (39.7%) of all interactions with AI tools involve sensitive data.
The following distribution shows how sharply the number of GenAI tools in use escalates among the most aggressive enterprise adopters.
“Organizations must understand and trace the full lifecycle of data to properly secure it,” Doshi urged.
Frontier Gap: Strategy or Permissive Culture?
Frontier organizations are most likely implementing an official corporate strategy that encourages employees to incorporate AI into their day-to-day workflow, Doshi suggested. Such encouragement then leads to a more permissive internal culture toward experimenting with new AI features and technologies, sanctioned or not.
“Laggard companies are primarily held back by a block-first security posture and fragmented legacy data systems that make official integration too risky or complex,” he said.
Also at play is an inherent lack of trust: these organizations are not fully confident that employees will use the tools securely or in line with company values.
“This creates a divide where leadership at these organizations views AI as a threat to be managed. Frontier companies view it as a productivity engine that can be securely enabled,” he added.
Chinese AI Models Rapidly Enter the Enterprise
A surprising shift in the AI power balance shows Chinese open-weight models transitioning from outsiders to enterprise staples almost overnight. Products like DeepSeek are driving a staggering increase in endpoint-based AI use.
This surge follows the January 2025 release of DeepSeek-R1, which proved that China could match — and, in specific tasks like coding, potentially exceed — U.S. frontier models, according to Doshi. For many employees, the lure of superior performance and the ease of bypassing corporate filters outweigh geopolitical caution. These factors create a massive, unmonitored footprint for Chinese AI models within Western networks.
“These models’ comparable capabilities, combined with LMArena’s tendency to provide open-weight models, have led to 50% of endpoint-based usage,” he confirmed.
Unaware Workers Expose Sensitive Data
As noted above, an alarming 39.7% of interactions involve sensitive data. Some of that exposure comes from employees knowingly uploading proprietary information such as source code; the rest is unintentional, occurring when users fold AI tools into standard workflows such as CRM and R&D.
“AI is still a relatively new technology, and employees do not fully understand what it means to put sensitive data into an AI system. This means your data no longer remains under corporate control and now resides with an AI vendor,” Doshi said.
Even worse, some of these vendors actually train their models with user-submitted data. For employees, however, he added, it may not be clear what constitutes sensitive data or how AI operates.
“Therefore, a lack of security awareness among employees is a major issue in enterprise AI use. Additionally, that lack of awareness is a reason many organizations are hesitant to encourage AI use, opting for a ‘block first’ policy,” he explained.
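An alternative to a blanket "block first" policy is screening prompts for obviously sensitive patterns before they leave the corporate boundary. The sketch below is a toy illustration of that idea; the patterns and the `screen_prompt` function are assumptions for explanation, and a real DLP engine would use far richer detectors.

```python
import re

# Illustrative detectors for obviously sensitive content.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

print(screen_prompt("Please debug this: key=AKIAABCDEFGHIJKLMNOP"))
# ['aws_access_key']
```

A screen like this can warn or educate the employee at the moment of exposure, which addresses the awareness gap Doshi describes without banning the tools outright.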
Personal Accounts a Veiled Culprit
Sixty percent of Claude and Perplexity usage happens via employees’ personal accounts. This fuels Shadow AI, as these tools often outperform sanctioned corporate alternatives in user experience and utility.
Claude and Perplexity are edge cases because they are specialized tools: users widely regard Claude as a top coding assistant, and Perplexity is a purpose-built AI search engine.
“I believe that people specifically use Claude and Perplexity for these purposes, respectively, as they provide a better user experience than sanctioned corporate tools,” Doshi offered.
He agreed that Shadow AI is, in general, a growing issue, as AI tools can process sensitive information at machine speed. Derivative copies of an organization’s most valuable data can quickly proliferate outside of central control.
“Combating this requires robust data security and data governance,” he said.
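In practice, governance of this kind reduces to a policy decision per interaction: which tool, which account type, and whether sensitive data is involved. The toy sketch below illustrates one such decision function; the risk tiers echo the report's "medium," "high," and "critical" labels, but the policy logic itself is an assumption for illustration only.

```python
# Toy governance policy: block critical-risk tools outright, and block
# personal accounts or high-risk tools whenever the data is sensitive.
def allow_interaction(tool_risk: str, account: str, data_sensitive: bool) -> bool:
    if tool_risk == "critical":
        return False
    if account == "personal" and data_sensitive:
        return False
    if tool_risk == "high" and data_sensitive:
        return False
    return True

print(allow_interaction("medium", "corporate", False))  # True
print(allow_interaction("high", "corporate", True))     # False
```

The point is that such rules only work if the organization can actually observe tool risk, account type, and data sensitivity — which is why visibility comes before enforcement.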
GenAI Apps Risky Business
The report classified 82% of the top 100 GenAI apps as “risky.” But that does not mean the current enterprise-grade AI market is failing to provide the specific features employees need, according to Doshi.
The chart below breaks down risk levels among the most widely used GenAI SaaS tools.
The high percentage of risky apps results from employees prioritizing frictionless accessibility over official channels. Many of these niche apps offer specialized workflows that require zero prompting expertise, allowing workers to bypass the learning curve.
Take NoteGPT as an example. It provides an AI summarizer, transcriber, PDF converter, and writer out of the box.
“Additionally, many new enterprise tools that promise both security and innovation often lack the capabilities of developer-created and released tools or even more basic GenAI tools such as Claude,” Doshi cautioned.
Death of the Chatbot and Beyond
The shift toward embedded agents and coding tools is ushering in a new era. The standalone AI chatbot, with its ChatGPT-style interface, may already have peaked as invisible, integrated AI takes its place.
However, Doshi thinks it is too early to say for sure. Researchers are seeing general AI app adoption plateau, while specialized AI coding agents continue to grow.
“Rather than seeing it as chatbots on the decline, I would argue that some AI tools, like coding agents, have done a great job of applying AI to specialized domains and are thus receiving recognition from their users, leading to growth,” he suggested.
Data show that AI use is most prevalent among engineers. Doshi reasoned that it makes sense for them to turn to coding tools and embedded agents for their day-to-day workflows, whereas a marketing employee may need a GenAI tool only for basic, one-off tasks.
What Comes Next for AI in the Workplace
According to Doshi, AI usage, sanctioned or otherwise, will continue to rise. For companies that have adopted AI or plan to, the number of AI apps in use and the percentage of employees using them will be strong benchmarks of strategic productivity gains.
Some have called 2025 the Year of AI Agents. While he considers that only partially true, Doshi predicts that more enterprises will develop and adopt AI agents in the coming years.
“This is just the beginning, which makes securing and governing these tools all the more vital,” he urged.