Are AI Cloud Services at Risk Due to Unpatched Vulnerabilities?

March 24, 2025

In an era when artificial intelligence (AI) increasingly relies on cloud computing, recent revelations shed light on alarming security vulnerabilities within AI cloud services. A comprehensive research study has unveiled critical gaps in the protection of AI workloads hosted on popular cloud platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Astoundingly, 70% of AI cloud workloads have been found to have at least one known but unpatched vulnerability, posing substantial risks to data integrity and system security.

Noteworthy Security Vulnerabilities in AI Cloud Services

Identification of Severe Flaws in AI Workloads

A significant finding from the research is the identification of CVE-2023-38545, a severe vulnerability in the curl data transfer tool, present in 30% of the analyzed AI workloads. This vulnerability exemplifies the types of security issues that can allow attackers to manipulate data or gain unauthorized system access. The implications of such a vulnerability are severe, given that compromised data integrity can result in flawed AI models, leading to adverse real-world consequences. Unauthorized access due to vulnerabilities like CVE-2023-38545 poses a risk not only to sensitive data but also to the overall trust in AI systems.

Moreover, the prevalence of vulnerabilities in widely used tools like curl underscores the need for stringent security practices. Inadequate patch management and delayed responses to known issues can leave AI workloads exposed to exploitation by malicious actors. Given the critical role AI now plays in various sectors, including healthcare, finance, and national security, the urgency to address these vulnerabilities cannot be overstated.
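A first step toward the patch management the report calls for is simply inventorying which workloads run an affected curl build. The sketch below checks a version string against the range assumed to be affected by CVE-2023-38545; the exact boundaries (fix shipped in 8.4.0) should be verified against the official curl advisory before relying on this in practice.

```python
# Sketch: flag curl builds assumed affected by CVE-2023-38545.
# Assumption: affected versions run from 7.69.0 up to (but not including)
# 8.4.0, where the fix shipped -- confirm against the curl security advisory.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '8.3.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_affected(curl_version: str) -> bool:
    """Return True if the version falls inside the assumed affected range."""
    v = parse_version(curl_version)
    return parse_version("7.69.0") <= v < parse_version("8.4.0")

# On a live host, the version can be read from the second token of
# `curl --version` output (e.g. "curl 8.3.0 (x86_64-pc-linux-gnu) ...")
# and fed to is_affected().
```

Running such a check across a fleet of AI workloads turns an abstract "30% are exposed" statistic into a concrete remediation list.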

Misconfigurations Leading to Elevated Risks

The report also highlights a high prevalence of misconfigurations in cloud services, further exacerbating the security landscape. For instance, an alarming 77% of organizations using Google Vertex AI Notebooks had left the default Compute Engine service account overprivileged, leaving those environments susceptible to exploitation. These misconfigurations often stem from a lack of awareness or understanding of access controls, resulting in excessive permissions that amplify the risk of data breaches.
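One way to surface this class of misconfiguration is to scan a project's IAM bindings for the default Compute Engine service account holding broad "basic" roles. The sketch below assumes the bindings shape produced by `gcloud projects get-iam-policy --format=json` and the standard default-service-account naming convention; the set of roles treated as "too broad" is an assumption to tune per organization.

```python
# Sketch: flag broad basic roles held by the default Compute Engine
# service account. Assumes GCP's standard naming convention
# PROJECT_NUMBER-compute@developer.gserviceaccount.com and the bindings
# structure returned by `gcloud projects get-iam-policy --format=json`.

BROAD_ROLES = {"roles/owner", "roles/editor", "roles/viewer"}  # assumption

def default_compute_sa(project_number: str) -> str:
    """IAM member string for the default Compute Engine service account."""
    return f"serviceAccount:{project_number}-compute@developer.gserviceaccount.com"

def overprivileged_bindings(bindings: list, project_number: str) -> list:
    """Return the broad roles granted to the default compute service account."""
    sa = default_compute_sa(project_number)
    return [b["role"] for b in bindings
            if b["role"] in BROAD_ROLES and sa in b.get("members", [])]
```

A Vertex AI Notebook running as this account inherits every permission the account holds, which is why replacing it with a least-privilege service account is the usual remediation.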

Illustratively, the “Jenga-style” misconfiguration risk presents a unique challenge. In this scenario, layered services inherit risky default settings, leading to cascading vulnerabilities across the entire system. In essence, a single oversight or misconfiguration can become the weak link that exposes the entire infrastructure to potential exploits. Effective management and regular audits of cloud configurations are thus pivotal in mitigating these risks.
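The inheritance dynamic behind the "Jenga-style" risk can be shown with a toy model: each service layer takes its parent's settings unless it explicitly overrides them, so one risky default at the base propagates upward through everything stacked on top. The layer names and settings below are purely hypothetical.

```python
# Toy illustration of Jenga-style inheritance: configs are merged base-first,
# and a layer only escapes a risky base default by overriding it explicitly.
# All names here are hypothetical, not real cloud settings.

def effective_config(layers: list) -> dict:
    """Merge layer configs in order; later layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

base = {"service_account": "default-editor-sa"}       # risky platform default
notebook = {"machine_type": "n1-standard-4"}          # doesn't override it
stack = [base, notebook]
# effective_config(stack) still carries the overprivileged service account;
# only an explicit override at some layer removes the inherited risk.
```

The point of the audit, then, is not just to inspect each layer's own settings but to compute the *effective* configuration after inheritance.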

Management of AI Training Data: A Critical Concern

Restriction Issues in AI Training Data Storage

The management of AI training data emerges as another critical concern in ensuring robust cloud security. The report reveals that 14% of organizations using Amazon Bedrock had improperly restricted public access to AI training data storage buckets, with 5% having overly broad permissions. Such mismanagement increases the likelihood of unauthorized access and potential tampering of training data, thus compromising the fidelity of AI models. In scenarios where AI models make critical decisions based on training data, any unauthorized modifications can have profound consequences.
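For S3-backed training data, the relevant control is the bucket's public-access-block configuration, where all four flags must be enabled for the bucket to be fully locked down. The sketch below evaluates the `PublicAccessBlockConfiguration` dictionary (the inner dict returned by boto3's `s3.get_public_access_block`) as a pure function, so it can be applied to an inventory of buckets.

```python
# Sketch: report which S3 public-access-block protections are missing for a
# training-data bucket. Operates on the PublicAccessBlockConfiguration dict
# (the inner mapping returned by boto3's s3.get_public_access_block).

REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def missing_protections(config: dict) -> list:
    """Return the public-access-block flags that are absent or disabled."""
    return [flag for flag in REQUIRED_FLAGS if not config.get(flag, False)]
```

An empty result means the bucket rejects public ACLs and public policies outright; anything else is a candidate for the kind of mismanagement the report found in 14% of Amazon Bedrock users.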

Additionally, ineffective restrictions on training data can lead to data leaks, adversely affecting an organization’s competitive edge and reputation. Ensuring that AI training data storage buckets are adequately secured is fundamental to maintaining the integrity and confidentiality of AI models. Regular assessments and updates to access policies can help mitigate the risks associated with improperly configured data storage solutions.

Risks Associated with Root Access in AI Instances

The report also found that 91% of Amazon SageMaker users had at least one notebook instance that granted root access by default. This default setting presents a severe risk if any such instance is compromised, given the extensive control that root access provides. Attackers gaining root access can manipulate the AI environment, leading to potential data breaches, unauthorized system changes, and the destruction of critical data.
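Auditing for this is straightforward because SageMaker exposes the setting directly: the `describe_notebook_instance` response includes a `RootAccess` field of `'Enabled'` or `'Disabled'` (and `create_notebook_instance` accepts the same parameter to disable it up front). The sketch below filters a list of such response dicts, treating a missing field as enabled, which is the conservative assumption.

```python
# Sketch: flag SageMaker notebook instances allowing root access, based on
# the RootAccess field in boto3's describe_notebook_instance response.
# Assumption: an absent field is treated as 'Enabled' (the risky default).

def root_enabled(instances: list) -> list:
    """Return names of notebook instances with root access enabled."""
    return [i["NotebookInstanceName"] for i in instances
            if i.get("RootAccess", "Enabled") == "Enabled"]
```

Pairing this audit with a policy of creating new instances with `RootAccess='Disabled'` is one concrete application of the least-privilege principle discussed below.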

The significance of such risks extends beyond mere data loss, impacting overall operational continuity and eroding trust in AI deployments. Organizations must implement rigorous security protocols, including restricting root access and enforcing the principle of least privilege, to safeguard against these vulnerabilities. Proactive measures, such as thorough security reviews and continuous monitoring, are essential to detect and mitigate threats before they can be exploited.

Calls for Improved AI Cloud Security

Advocacy for Evolving Security Strategies

Experts, including Liat Hayun, Vice President of Research and Product Management for Cloud Security at Tenable, have been vocal about the urgent need for improved AI cloud security measures. The evolving threat landscape necessitates continually adapting security strategies to address intricate AI data attacks while enabling responsible AI innovation. The emphasis on balancing protection and innovation is crucial, as overly restrictive security measures can stifle progress, while inadequate protection can lead to devastating breaches.

Hayun advocates for a holistic approach to cloud security, involving regular security assessments, robust patch management practices, and cultivating a security-conscious culture within organizations. This proactive stance is essential to preemptively address vulnerabilities and mitigate long-term consequences such as data integrity compromise, critical system security breaches, and customer trust erosion.

Consensus on Enhancing Cloud AI Security Practices

The overarching consensus drawn from the findings is the pressing need for businesses to review and bolster their cloud AI security practices. Failure to address these risks can leave critical data and infrastructure exposed to threats, potentially resulting in severe financial and reputational damage. Organizations must adopt a comprehensive security framework that includes continuous monitoring, regular vulnerability assessments, and swift remediation efforts.

Implementing advanced security controls and continuously adapting strategies to the fast-evolving digital landscape is imperative. The report serves as a timely reminder for organizations to prioritize their cloud AI security efforts, deploying resources to safeguard their operations against potential cyber threats. Employing third-party experts for security audits and leveraging automated security tools can further enhance an organization’s ability to detect and respond to threats in real time.

Moving Forward with AI Cloud Security

The findings make clear that as AI grows more dependent on cloud computing, the security of AI cloud services cannot be an afterthought. With 70% of AI cloud workloads carrying at least one known but unpatched vulnerability across platforms like AWS, GCP, and Azure, prompt patching, hardened default configurations, and continuous monitoring are essential to protect data integrity and system security. Ensuring robust protection for cloud-based AI systems is crucial given the escalating reliance on AI across sectors; such measures will help maintain the trust of users and shield valuable information from potential cyber threats.
