77% of Employees Leak Data Through ChatGPT
25-10-13, 4:20 p.m.
Employees across major organizations are unknowingly feeding confidential company data into AI tools like ChatGPT, creating silent data leaks that bypass corporate defenses. A new study shows how unmanaged personal accounts and everyday copy-paste habits have become the biggest blind spots in enterprise cybersecurity.
A recent global study has uncovered a growing cybersecurity threat that many businesses aren’t even aware of: employees unintentionally leaking sensitive company information through generative AI tools like ChatGPT.
According to LayerX Security’s research, 77% of employees regularly paste company data into AI platforms, often through personal, unmanaged accounts that bypass corporate monitoring. Even more alarming, 82% of these interactions occur outside secure enterprise environments, making it nearly impossible for organizations to track what information is leaving their systems.
ChatGPT leads this growing trend, representing over 90% of all generative AI usage within organizations. What was once a productivity tool is now the largest channel for data exfiltration, accounting for 32% of all unauthorized data transfers.

The implications go beyond intellectual property loss. The report revealed that 40% of files uploaded to AI platforms contain personally identifiable or payment card data, while 22% include regulated information under frameworks like GDPR and HIPAA. These exposures could lead to severe financial penalties and compliance violations, especially when employees use personal accounts that aren’t covered by enterprise protections.
Even traditional security measures are struggling to keep up. Over two-thirds of corporate logins to major platforms like Salesforce, Microsoft Online, and Zoom now occur through unmanaged personal accounts. Without single sign-on (SSO) or proper access controls, businesses are left blind to who’s accessing what data and where it’s going.
What’s most concerning is that these leaks often happen through simple actions like copying and pasting. Employees perform an average of 46 paste operations a day, with several containing sensitive or regulated data. From there, information easily ends up in tools like Google, LinkedIn, Slack, and Databricks—all outside the protection of corporate cybersecurity systems.
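To make the idea concrete, here is a minimal sketch in Python of the kind of content inspection an endpoint agent or browser extension could run on pasted text before it reaches an external AI tool. The patterns and the `classify_paste` helper are illustrative assumptions for this article, not part of LayerX’s product or any specific DLP engine, which use far richer detectors.

```python
import re

# Illustrative patterns only; real DLP engines use far more sophisticated detection.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # US SSN-style identifiers
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")      # candidate payment card numbers

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_paste(text: str) -> list[str]:
    """Return the categories of sensitive data found in a pasted snippet."""
    findings = []
    if EMAIL_RE.search(text):
        findings.append("email address")
    if SSN_RE.search(text):
        findings.append("national ID / SSN pattern")
    for candidate in CARD_RE.findall(text):
        if luhn_valid(candidate):
            findings.append("payment card number")
            break
    return findings

if __name__ == "__main__":
    snippet = "Customer Jane Doe, card 4111 1111 1111 1111, email jane.doe@example.com"
    hits = classify_paste(snippet)
    if hits:
        # A real agent would block the paste or warn the user before data leaves the browser.
        print("Blocked paste - contains:", ", ".join(hits))
    else:
        print("Paste allowed")
```

In practice, this kind of check sits alongside SSO enforcement and account governance; pattern matching alone catches obvious identifiers, but it cannot recognize source code, contracts, or other unstructured confidential material.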
At Upside Business Technologies, we help businesses close these visibility gaps before they turn into full-blown breaches. Through advanced monitoring, access management, and AI-safe cybersecurity policies, we ensure that innovation never comes at the cost of security. Generative AI is here to stay, but without the right safeguards, it could become your company’s biggest risk. If you want to understand how to protect your business from these emerging AI-related threats, we’re ready to help.
