Why Is Cloud Data Protection Declining Despite Higher Spending?

Maryanne Baines is an authority on cloud technology and a seasoned expert in evaluating tech stacks and data protection frameworks across diverse industries. With a career focused on the intricate balance between rapid digital transformation and robust security architectures, she offers a critical perspective on the evolving threats facing modern enterprises. Her deep understanding of how global organizations manage sensitive information provides the necessary context to navigate the complexities of cloud-based vulnerabilities and the shifting landscape of cyber defense.

In this discussion, we explore the alarming trend of declining encryption rates despite record cloud spending and the rise of artificial intelligence. We delve into the operational friction caused by managing fragmented security tools, the unique risks posed by AI-driven identity governance, and the persistent threat of credential theft. Finally, we address the long-term implications of quantum computing on today’s data protection strategies, offering a roadmap for organizations to secure their future.

Despite massive cloud adoption and the rise of AI, sensitive data encryption has actually declined to below 50% recently. Why is this gap widening as data volume increases, and what specific steps can organizations take to reverse this trend while maintaining operational speed?

The reality is quite sobering; we’ve seen encryption of sensitive cloud data slip from 51% down to just 47% in a single year. This decline happens because organizations are moving at a breakneck pace to adopt AI and expand their cloud footprint, often prioritizing immediate accessibility over security protocols. When you are dealing with massive volumes of data, the perceived “performance tax” of encryption can lead teams to take shortcuts. To reverse this, companies must move away from manual encryption and adopt automated, policy-driven workflows that encrypt data the moment it hits the bucket. It’s about making security an invisible part of the data lifecycle so that speed and safety are no longer viewed as competing interests.
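
To make that concrete, here is a minimal sketch of a policy-driven workflow, assuming an AWS environment with boto3 and hypothetical bucket and key names, that makes encryption the bucket’s default rather than a per-upload decision:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names for illustration; substitute your own resources.
BUCKET = "sensitive-data-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"

# Enforce server-side encryption with a customer-managed KMS key so every
# object is encrypted the moment it hits the bucket -- no per-team,
# per-upload decision required.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                # Bucket Keys cut KMS request volume, which blunts the
                # perceived "performance tax" of encryption.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```

The same rule can be codified in Terraform or enforced organization-wide, so teams never face the speed-versus-safety choice on a per-object basis at all.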

Many organizations manage five or more separate data protection and key management systems, yet misconfigurations remain the leading cause of breaches. How does tool fragmentation create these specific security gaps, and what are the practical trade-offs when consolidating these systems into a single point of visibility?

Fragmentation creates a “fog of war” where security teams are so busy jumping between five or more different consoles that they lose sight of the actual data. When 77% of organizations are juggling multiple protection tools, it’s inevitable that something will be misconfigured, which is why misconfiguration is now the leading cause of cloud breaches, behind 28% of them. Consolidating these systems into a single pane of glass allows for unified policy enforcement, but the trade-off is often a high initial migration effort and the risk of creating a single point of failure. However, the clarity gained from seeing exactly where your keys are and who has access to them far outweighs the temporary pain of reorganization. Moving to a centralized model reduces the cognitive load on engineers, allowing them to focus on high-level threats rather than chasing ghosts in disconnected logs.
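
As a rough illustration of what that single point of visibility looks like at the smallest scale, the sketch below (assuming AWS with boto3; a real consolidation spans every provider and key store) inventories each KMS key’s policy and grants into one view:

```python
import json
import boto3

kms = boto3.client("kms")

def principals_of(statement):
    # IAM principals can be the wildcard string "*" or a mapping.
    p = statement.get("Principal", "?")
    return [p] if isinstance(p, str) else list(p.values())

# Walk every KMS key in the account, collecting its policy and grants into
# one place -- the starting point for a "single pane of glass".
for page in kms.get_paginator("list_keys").paginate():
    for key in page["Keys"]:
        key_id = key["KeyId"]
        policy = json.loads(
            kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
        )
        grants = kms.list_grants(KeyId=key_id)["Grants"]
        who = [p for s in policy.get("Statement", []) for p in principals_of(s)]
        print(f"{key_id}: {len(grants)} grants, principals={who}")
```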

AI agents are increasingly granted automated access to cloud data with less oversight than human users, and over 60% of these applications are already facing attacks. In what ways does AI amplify weaknesses in identity governance, and how should encryption policies evolve to handle machine-speed data processing?

AI changes the game because it operates at a scale and velocity that human administrators simply cannot monitor in real-time. We are seeing a trend where AI agents are given broad API permissions and tokens with far less scrutiny than a human employee, creating a massive “insider risk” that isn’t even human. If your encryption policy is weak, an AI system can inadvertently propagate or expose sensitive data across an entire environment in seconds. Our policies must evolve to be “identity-aware” at the machine level, requiring that every automated request is verified and that data remains encrypted not just at rest, but also during high-speed processing. We have to treat these AI agents like high-privilege users, applying strict “least-privilege” access and ensuring that even if an agent is compromised, the data it touches remains an unreadable ciphertext.
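
A hedged sketch of what machine-level least privilege can look like, expressed here as an AWS-style IAM policy with hypothetical resource names; the point is that the agent’s identity is scoped to one action on one prefix, with encryption in transit required:

```python
import json

# Illustrative least-privilege policy for a machine identity: the agent may
# read one prefix of one bucket, and only over TLS. All resource names are
# hypothetical placeholders.
AGENT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AgentReadOnlyScopedPrefix",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::sensitive-data-bucket/agent-input/*",
            # Refuse any request that is not encrypted in transit.
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

print(json.dumps(AGENT_POLICY, indent=2))
```

Pairing a policy like this with short-lived credentials (for example, STS role assumption) means a stolen agent token expires in minutes rather than persisting for months.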

Credential theft has become the dominant technique used in cloud attacks, making identity management a higher priority than application security. How does compromising a machine credential change the impact of an unencrypted data breach, and what identity-centric metrics should teams monitor to mitigate this risk?

When an attacker steals a machine credential, they aren’t just logging into a portal; they are gaining a programmatic “key to the kingdom” that can bypass traditional perimeter defenses. If the target data is unencrypted, the breach is instantaneous and total because the attacker has the identity required to read everything in plain sight. This is why 67% of organizations now report credential theft as their top concern, shifting the focus from patching apps to securing identities. Teams need to move beyond simple login tracking and start monitoring metrics like “token usage anomalies” and “unusual API call patterns” from machine accounts. By focusing on how credentials are used rather than just who they belong to, you can spot the subtle signs of a hijacked machine identity before the data exfiltration begins.
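
As a minimal sketch of what monitoring “token usage anomalies” can mean in practice, the following flags hours where a machine identity’s API call volume deviates sharply from its own rolling baseline; a production system would feed it real audit-log counts (CloudTrail, for instance) rather than the synthetic numbers used here:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(call_counts, window=24, threshold=3.0, min_history=5):
    """Flag points where an identity's API call count deviates more than
    `threshold` standard deviations from its rolling baseline."""
    baseline = deque(maxlen=window)
    flagged = []
    for hour, count in enumerate(call_counts):
        # Only judge once enough history exists to form a stable baseline.
        if len(baseline) >= min_history:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(count - mu) / sigma > threshold:
                flagged.append((hour, count))
        baseline.append(count)
    return flagged

# Synthetic hourly call counts for one machine credential: steady usage,
# then a burst that could indicate a hijacked identity exfiltrating data.
counts = [100, 98, 105, 102, 99, 101, 97, 103, 100, 5000]
print(detect_anomalies(counts))  # [(9, 5000)]
```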

Adversaries are currently collecting encrypted data with the intent to decrypt it once quantum computing becomes viable. Given that four in ten organizations haven’t started evaluating post-quantum cryptographic algorithms, what are the immediate risks of this “harvest now, decrypt later” strategy, and how should teams prioritize datasets?

The “harvest now, decrypt later” strategy is a ticking time bomb for any data with a long shelf life, such as medical records or national security secrets. Even if your data is encrypted today, if it’s using legacy algorithms, it could be laid bare the moment a functional quantum computer arrives. It is deeply concerning that 41% of organizations are still sitting on the sidelines, ignoring the fact that the window for a graceful migration is closing. Organizations must prioritize their datasets based on “longevity value”—anything that needs to remain secret for more than five to ten years needs to be moved to post-quantum cryptographic standards immediately. You have to assume that your most sensitive encrypted traffic is already being recorded by adversaries, so the risk isn’t in the future; the data loss is actually happening right now.
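
A rough triage sketch of that prioritization, using hypothetical datasets and an assumed ten-year quantum horizon: tag each dataset with how long it must stay confidential, and flag anything beyond the horizon for migration to post-quantum standards such as NIST’s ML-KEM:

```python
from dataclasses import dataclass

# Assumed planning horizon: data that must stay secret past this many years
# should be treated as already exposed to "harvest now, decrypt later".
QUANTUM_HORIZON_YEARS = 10

@dataclass
class Dataset:
    name: str
    secrecy_years: int   # how long the data must remain confidential
    algorithm: str       # current encryption scheme

# Hypothetical inventory for illustration.
inventory = [
    Dataset("web-session-logs", 1, "AES-256-GCM"),
    Dataset("patient-records", 50, "RSA-2048 envelope"),
    Dataset("contract-archive", 15, "RSA-2048 envelope"),
]

# Triage in order of longevity value: the longer the secrecy requirement,
# the sooner the dataset must move to post-quantum cryptography.
for ds in sorted(inventory, key=lambda d: d.secrecy_years, reverse=True):
    at_risk = ds.secrecy_years > QUANTUM_HORIZON_YEARS
    action = "migrate to post-quantum (e.g., ML-KEM hybrid)" if at_risk else "monitor"
    print(f"{ds.name:20} {ds.secrecy_years:>3}y  {ds.algorithm:20} -> {action}")
```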

What is your forecast for cloud data encryption?

I believe we are approaching a “great consolidation” where the industry will finally realize that more tools do not equal more security. Over the next three years, I expect to see a massive shift toward “encryption-by-default” architectures where the cloud provider and the enterprise share a unified, automated key management layer. We will likely see encryption rates bounce back as AI-driven security tools start to fix the very problems that AI-driven attacks created, automatically identifying and shielding unencrypted sensitive data. However, the divide between the “quantum-ready” organizations and the laggards will widen, creating a new tier of digital risk. My forecast is that identity and encryption will eventually merge into a single continuous verification process, making plain-text data an obsolete and unacceptable liability in any professional cloud environment.
