
77% of employees share company secrets on ChatGPT: Report

Most employees are exposing sensitive company information by pasting it into generative AI platforms such as ChatGPT, new research has found.

According to the Enterprise AI and SaaS Data Security Report 2025 by LayerX Security, 77 per cent of large language model access within enterprises is to ChatGPT, with nearly one in five employees pasting data into generative AI tools. More than half of those pastes involve corporate information.

LayerX said its telemetry, collected through enterprise browser monitoring across global organisations, showed that 45 per cent of corporate users actively engage with AI platforms. Of these, 43 per cent use ChatGPT alone. The report concluded that generative AI tools are now the leading channel for company-to-personal data exfiltration, responsible for 32 per cent of all unauthorised data movement.

Sensitive information exposed

The report found that 40 per cent of uploaded files contain personally identifiable information or payment card data, while 22 per cent of pasted text includes sensitive regulatory material. For firms subject to rules such as the EU’s GDPR, the US HIPAA healthcare law or the Sarbanes-Oxley Act, the risks include fines, regulatory sanctions and reputational damage.

In comments to The Register, Or Eshed, chief executive of LayerX Security, said that enterprise data leaks through AI tools can raise “geopolitical issues, regulatory and compliance concerns, and lead to corporate data being inappropriately used for training if exposed through personal AI tool usage.”

The study highlighted that 71.6 per cent of generative AI access occurs via unmanaged browsers and personal accounts, falling outside identity management systems. The same pattern was seen in other software-as-a-service platforms, including Salesforce (77 per cent), Microsoft Online (68 per cent) and Zoom (64 per cent).

Copy-and-paste behaviour driving risk

LayerX researchers noted that the principal risk comes from manual copy-and-paste actions, which bypass traditional data loss prevention and firewall systems. On average, enterprise users paste into generative AI platforms 6.8 times a day, with 3.8 of those pastes containing corporate data.
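To illustrate the kind of client-side check that browser-level monitoring can perform where network DLP cannot see pasted text, here is a minimal sketch in Python. The pattern names and regexes are illustrative assumptions only, not LayerX's product or any real DLP ruleset; production tools use far richer detection (checksums, classifiers, context).

```python
import re

# Illustrative patterns only (assumed for this sketch); real DLP
# products use much more sophisticated detection than bare regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan_paste(text: str) -> list[str]:
    """Return the categories of sensitive data detected in pasted text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# Example: a paste containing an email address and a card number
hits = scan_paste("Contact jane.doe@example.com, card 4111 1111 1111 1111")
print(hits)
```

The point of the sketch is that such a check must run where the paste happens (in the browser or endpoint agent); once the text is inside an encrypted session to a generative AI service, network-perimeter tools cannot inspect it.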

Because this activity is invisible to conventional monitoring tools, the report warned that threat actors and data aggregators could exploit such leaks for training models, stealing credentials, or targeting specific industries.

Security implications for enterprises

The report advised companies to adopt multilayered defences to address the emerging risks from AI use. Recommendations included enforcing centralised access controls such as single sign-on, monitoring browsers and endpoints to track data flows, hardening AI systems and APIs against malicious inputs, and segmenting infrastructure.

It also highlighted the need for AI governance and posture management frameworks, forensic analysis of browser logs, and employee training on secure AI use.

The findings underline how the rapid adoption of AI in the workplace has outpaced security controls. The National Bureau of Economic Research has estimated that nearly a quarter of workers now use generative AI on the job, but few companies have established comprehensive security frameworks to manage the risks.

Governance lagging behind adoption

The LayerX report concluded that enterprises face a widening gap between AI-driven productivity gains and their ability to control associated risks. It said data exfiltration through generative AI marks a shift in enterprise security, from traditional phishing or storage misconfigurations to exposure via everyday AI-assisted tasks.

As AI becomes further embedded in business workflows, the report warned that organisations must ensure enthusiasm for efficiency does not undermine the confidentiality of corporate data.

Source – https://www.peoplematters.in/news/ai-and-emerging-tech/77percent-of-employees-share-company-secrets-on-chatgpt-report-46778