Generative AI Fuels Surge in Corporate Data Leaks

As a leading authority on cloud technology, Maryanne Baines has a unique vantage point on the intersection of innovation and risk. Her work evaluating cloud providers and their applications across industries gives her a deep understanding of how enterprises are grappling with the rapid adoption of new technologies. Today, she shares her insights on the escalating security challenges posed by generative AI, from the surge in data violations and the persistence of “shadow AI” to the emerging threats from autonomous systems. The conversation explores why traditional security measures are falling short and what practical steps organizations must take to balance progress with protection in this new AI-driven landscape.

With generative AI data policy violations reportedly doubling, what specific types of regulated data are most at risk? Please detail the immediate consequences for a company when this information, such as financial or health data, is exposed through an AI tool.

The most significant risk we’re seeing right now involves regulated data. It’s truly alarming; this category—which includes personal, financial, and healthcare information—accounts for a staggering 54% of all policy violations. This isn’t a theoretical problem; it’s happening constantly. The average organization is dealing with 223 incidents of sensitive data being sent to AI apps every single month. For a security team, it feels like trying to plug holes in a dam that’s cracking everywhere at once. The immediate consequences are severe, ranging from regulatory fines and legal action to a complete loss of customer trust, which can be devastating for any business.

Nearly half of generative AI users still access tools through personal, unmanaged accounts. Why does this “shadow AI” persist despite corporate policies, and what practical, step-by-step measures can security teams take to gain visibility and control over this activity?

“Shadow AI” persists largely due to convenience and a lack of awareness about the true scale of the risk. Despite policies, 47% of users still turn to personal accounts because they are familiar and accessible. This behavior creates a massive blind spot and a significant insider threat; in fact, personal apps are implicated in six out of ten insider threat incidents where sensitive data like source code or credentials are leaked. To regain control, security teams need a methodical approach. First, you must map exactly where your sensitive information is traveling, including through these personal app instances. The next step is to implement robust controls that can log and manage user activity across all cloud services, not just the ones your company officially sanctions. Finally, use that data to apply consistent policies everywhere, ensuring that you can track data movements and enforce protection standards across both managed and unmanaged tools.
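
To make that first visibility step concrete, here is a minimal sketch of how a team might mine web proxy logs for traffic to generative AI apps and flag personal-account instances for review. The log schema, field names, and domain list are assumptions for illustration, not any particular vendor's format.

```python
"""Sketch: surface generative AI traffic from web proxy logs.

Hypothetical example only -- the CSV schema, field names, and domain list
below are assumptions for illustration, not a specific product's format.
"""
import csv
from collections import defaultdict

# Illustrative set of generative AI domains; a real deployment would rely
# on a maintained app catalog rather than a hand-picked list.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_genai_traffic(log_path: str):
    """Aggregate upload volume to genAI apps, split by managed vs. personal instance."""
    totals = defaultdict(int)  # (user, domain, instance_type) -> bytes uploaded
    with open(log_path, newline="") as f:
        # Assumed columns: user, dest_domain, instance_type, bytes_out
        for row in csv.DictReader(f):
            if row["dest_domain"] in GENAI_DOMAINS:
                key = (row["user"], row["dest_domain"], row["instance_type"])
                totals[key] += int(row["bytes_out"])
    return totals

if __name__ == "__main__":
    for (user, domain, instance), sent in sorted(summarize_genai_traffic("proxy_logs.csv").items()):
        flag = "REVIEW" if instance == "personal" else "ok"
        print(f"{flag:6} {user:20} {domain:22} {instance:10} {sent:>10} bytes out")
```

The value of a discovery pass like this is simply to quantify where unmanaged usage is actually happening before any policy is written against it.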

Given the 500% surge in AI prompt volume, employees are inputting vast amounts of information. Could you share an example of how a seemingly harmless prompt can inadvertently leak intellectual property or source code, and what are the key indicators of such high-risk behavior?

It’s easy to underestimate the danger in a simple prompt, especially with prompt volumes jumping 500% in a year. Imagine a developer struggling with a piece of code. They might copy and paste a buggy snippet into a public AI chatbot and ask, “How can I fix this function to optimize database queries for our customer rewards program?” In that one innocent-seeming action, they’ve potentially exposed proprietary source code and details about the company’s business logic. The key indicator of this high-risk behavior is the nature of the data being uploaded. When you see source code, internal strategy documents, or credentials appearing in prompts, alarm bells should be ringing. It’s this unintentional but frequent leakage, happening across thousands of prompts per month, that creates such a complex and widespread risk profile.
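
As an illustration of that indicator, the following sketch flags prompts that appear to contain credentials or source code before they leave the network. The regular expressions are illustrative heuristics only, not a substitute for the classifiers built into a real DLP product.

```python
"""Sketch: flag prompts that look like they contain source code or credentials.

The patterns below are rough heuristics for illustration; production detection
would lean on the content classifiers of a DLP/SSE platform.
"""
import re

HIGH_RISK_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|password|BEGIN (RSA|EC) PRIVATE KEY)\b"),
    "source_code": re.compile(r"\b(def |class |import |SELECT .+ FROM|function\s*\()"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive content detected in a prompt."""
    return [label for label, pattern in HIGH_RISK_PATTERNS.items() if pattern.search(prompt)]

# A prompt resembling the developer example above:
prompt = (
    "How can I fix this function to optimize database queries?\n"
    "def rewards_lookup(customer_id):\n"
    "    return db.query('SELECT * FROM rewards WHERE id = %s', customer_id)"
)
print(classify_prompt(prompt))  # -> ['source_code']
```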

While nine in ten organizations now block some AI applications, overall usage continues to grow. What are the limitations of a block-only security strategy, and how can teams evolve their approach to balance necessary data loss prevention with enabling employee innovation?

A block-only strategy is fundamentally a losing battle. While it’s true that nine in ten organizations are blocking at least one AI app, we see overall usage continuing to skyrocket. Simply blocking tools often just pushes employees to find workarounds, usually through those unmanaged personal accounts we discussed, which deepens the “shadow AI” problem. This approach stifles innovation and creates a frustrating environment for employees who are genuinely trying to be more productive. A more evolved strategy requires security teams to become “AI-aware.” This means expanding the scope of existing tools like Data Loss Prevention (DLP) to understand the context of AI interactions. The goal should be to foster a balance, enabling the use of approved, secure tools while implementing intelligent policies that prevent sensitive data from leaving the organization, no matter which application an employee is using.
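
A minimal sketch of what such an "AI-aware" policy might look like follows, expressed as a table that maps data categories and app-instance types to an action. The category names, instance types, and actions are hypothetical placeholders for the kinds of rules a DLP or SSE platform would let you express.

```python
"""Sketch: an "AI-aware" data-protection policy applied per data category and app instance.

The policy table and action names are hypothetical; they stand in for the rules
a DLP/SSE product would enforce, not a real product's API.
"""
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    COACH = "coach"   # let the action proceed after a real-time warning to the user
    BLOCK = "block"

# (data_category, instance_type) -> action
POLICY = {
    ("regulated", "personal"):   Action.BLOCK,
    ("regulated", "managed"):    Action.BLOCK,
    ("source_code", "personal"): Action.BLOCK,
    ("source_code", "managed"):  Action.COACH,
    ("general", "personal"):     Action.COACH,
    ("general", "managed"):      Action.ALLOW,
}

def decide(data_category: str, instance_type: str) -> Action:
    """Apply the same policy whether or not the app is officially sanctioned."""
    return POLICY.get((data_category, instance_type), Action.BLOCK)  # default-deny

print(decide("source_code", "managed"))   # Action.COACH
print(decide("regulated", "personal"))    # Action.BLOCK
```

The design point is that the decision keys on the data and the instance, not on whether an app happens to be on a block list.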

As autonomous agentic AI systems create a new attack surface, how does the risk from an AI agent differ from a human user? Please explain what new monitoring or governance frameworks are essential for security teams to implement now.

The risk from an autonomous AI agent is fundamentally different from a human user because it operates with a level of speed, scale, and independence that we’ve never had to secure against before. A human user makes one decision at a time; an AI agent can execute thousands of actions across multiple systems based on a single directive, creating a vast and unpredictable attack surface. This requires a complete re-evaluation of our traditional security perimeters and trust models. It’s no longer enough to just monitor user logins. Security teams must immediately start incorporating agentic AI monitoring into their risk assessments. This means meticulously mapping the tasks these systems are authorized to perform and ensuring they operate strictly within predefined, approved governance frameworks. Without this oversight, you’re essentially giving a powerful, autonomous entity the keys to your kingdom and hoping for the best.
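
One way to picture that kind of governance is an allow-list wrapper that lets an agent invoke only the tools it has been explicitly approved for, and audits every decision. The agent names, tools, and registry below are hypothetical placeholders for whatever orchestration framework is actually in use.

```python
"""Sketch: constraining an autonomous agent to a predefined, approved task map.

The agent IDs, tool names, and registry here are hypothetical placeholders.
"""
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent_audit")

# Explicit allow-list: which tools each agent may invoke.
APPROVED_TASKS = {
    "reporting-agent": {"read_sales_db", "generate_summary"},
    "helpdesk-agent": {"read_kb", "draft_reply"},
}

class UnauthorizedAgentAction(Exception):
    pass

def execute_agent_action(agent_id: str, tool: str, payload: dict, tool_registry: dict):
    """Run a tool on behalf of an agent only if it falls inside the approved task map."""
    if tool not in APPROVED_TASKS.get(agent_id, set()):
        audit.warning("DENIED agent=%s tool=%s", agent_id, tool)
        raise UnauthorizedAgentAction(f"{agent_id} is not approved to run {tool}")
    audit.info("ALLOWED agent=%s tool=%s", agent_id, tool)
    return tool_registry[tool](payload)

# Usage with a stubbed tool registry:
tools = {"read_kb": lambda p: f"kb article for {p['topic']}"}
print(execute_agent_action("helpdesk-agent", "read_kb", {"topic": "vpn"}, tools))
```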

What is your forecast for generative AI security risks?

My forecast is that the complexity and velocity of generative AI security risks will continue to outpace the defensive capabilities of most organizations for the foreseeable future. We’re moving from a world of protecting static data to one where we must secure dynamic, AI-driven processes and interactions. The rise of agentic AI will be the next major flashpoint, creating entirely new categories of vulnerabilities that traditional security tools are not equipped to handle. Organizations that fail to evolve their security posture to be “AI-aware”—integrating intelligent data protection and robust governance directly into their AI adoption strategy—will find themselves in a constant state of crisis, struggling to keep pace with threats that are both more sophisticated and more automated than anything they have ever faced before.
