Google Defines the Agentic Cloud Era at Next 2026

The shift toward an agentic enterprise marks a fundamental change in how organizations process data, secure their infrastructure, and interact with customers. As businesses move beyond simple automation toward autonomous systems capable of reasoning and independent action, the underlying cloud architecture must evolve to support these complex workflows. This transformation is driven by advancements in specialized hardware, universal data integration, and the convergence of developer tools into a unified mission control. We are joined today by Maryanne Baines, a leading authority in cloud technology, to discuss how these innovations are reshaping the corporate landscape and what it means for the future of specialized AI deployments.

Specialized hardware is shifting toward a split architecture, with distinct chips optimized for either AI training or inference. How do these advancements, alongside custom CPUs offering high price-performance ratios, change the budget math for large-scale deployments, and what specific metrics should infrastructure leads track during this migration?

The introduction of eighth-generation TPUs, specifically the 8t for training and 8i for inference, allows companies to stop over-provisioning for general-purpose workloads and instead allocate spend where it matters most. By utilizing the Axion CPU, which offers a 2x price-performance advantage over standard x86 instances, infrastructure leads can significantly lower the baseline cost of their data center operations. We’ve already seen firms like Citadel Securities use previous generations to turn tasks that took weeks into projects that take only minutes. During this migration, leads should move away from tracking simple uptime and focus on metrics like inference latency and exaflops per rack, especially as hardware like the Nvidia Vera Rubin NVL72 enters the ecosystem.
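
To make that budget math concrete, here is a minimal sketch of how an infrastructure lead might compare cost per million inferences across instance classes. Every price and throughput figure below is an invented placeholder, not a published Google Cloud rate, and the instance names are illustrative only.

```python
# Hypothetical comparison of serving cost across instance classes.
# All numbers are illustrative placeholders, not real rates.

instance_profiles = {
    "x86-baseline": {"hourly_usd": 3.20, "inferences_per_sec": 900},
    "axion-cpu":    {"hourly_usd": 2.90, "inferences_per_sec": 1650},
    "tpu-8i":       {"hourly_usd": 9.50, "inferences_per_sec": 12000},
}

def cost_per_million_inferences(hourly_usd: float, inferences_per_sec: float) -> float:
    """Dollars spent to serve one million inferences at sustained throughput."""
    inferences_per_hour = inferences_per_sec * 3600
    return hourly_usd / inferences_per_hour * 1_000_000

for name, profile in instance_profiles.items():
    cost = cost_per_million_inferences(profile["hourly_usd"], profile["inferences_per_sec"])
    print(f"{name:14s} ${cost:.2f} per 1M inferences")
```

Tracked alongside p99 inference latency, this kind of unit-cost figure shows whether a move to split training/inference hardware is actually paying off.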

Organizations are moving from basic chatbots to sophisticated agentic systems that handle complex procurement and customer service tasks. What are the essential technical requirements for building a unified “mission control” for these agents, and how do you ensure they maintain consistent context across different business departments?

Building a “mission control” requires a platform that can act as the connective tissue between disparate data points, people, and goals, which is precisely what the Gemini Enterprise Agent Platform aims to do. It functions as a comprehensive dashboard where IT teams can observe agentic actions, track workflows, and ensure that a procurement agent at a company like Unilever can access the same validated organizational data as a marketing agent. To maintain context, the system must utilize reasoning protocols that allow agents to orchestrate Google Cloud services directly, performing autonomous root cause analysis on configurations. This ensures that when an agent is tasked with a “Regional Campaign,” it doesn’t just pull random files, but understands the specific business semantics and history across departments.
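
As a rough illustration of the shared-context idea, the sketch below shows a single store that a marketing agent publishes to and a procurement agent reads from, with lineage recorded on every access. The class and method names (ContextStore, publish, resolve) are hypothetical stand-ins, not Gemini Enterprise Agent Platform APIs.

```python
# Minimal sketch of a shared organizational context store.
# All names here are hypothetical illustrations, not platform APIs.
from dataclasses import dataclass, field


@dataclass
class ContextEntry:
    key: str            # e.g. "Regional Campaign"
    department: str     # owning department
    payload: dict       # validated organizational data
    lineage: list = field(default_factory=list)  # who touched it, and when


class ContextStore:
    """Single source of context shared by procurement, marketing, etc."""

    def __init__(self):
        self._entries: dict[str, ContextEntry] = {}

    def publish(self, entry: ContextEntry, actor: str) -> None:
        entry.lineage.append(f"published by {actor}")
        self._entries[entry.key] = entry

    def resolve(self, key: str, actor: str) -> ContextEntry:
        entry = self._entries[key]          # same record for every agent
        entry.lineage.append(f"read by {actor}")
        return entry


store = ContextStore()
store.publish(ContextEntry("Regional Campaign", "marketing",
                           {"region": "EMEA", "budget_usd": 250_000}),
              actor="marketing-agent")
campaign = store.resolve("Regional Campaign", actor="procurement-agent")
print(campaign.payload, campaign.lineage)
```

The point of the lineage field is that a human operator can later audit which agents read or wrote a given piece of context, which is the observability half of "mission control."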

Cybersecurity response times must now shrink from half an hour to under a minute to combat machine-speed threats. How do autonomous “red” and “green” agents work together to identify and automatically patch vulnerabilities, and what step-by-step processes should security teams follow to oversee these automated defenses?

The window for hackers to hand off access to secondary threat groups has plummeted to just 22 seconds, making human-led triage impossible. In this new paradigm, “red agents” act as friendly hackers, continuously scanning the external perimeter to validate exposures and find risks like authentication bypasses. Once a risk is validated, the data is handed to a “green agent,” which autonomously produces the necessary code to patch the vulnerability. Security teams must oversee this by monitoring the triage agents that have already successfully cut response times from 30 minutes down to 60 seconds. The process involves using telemetry from sources like Mandiant and VirusTotal to feed these agents, allowing the human operators to focus on high-level strategy rather than manual log review.
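
As a rough sketch of that handoff, the code below models a red agent that keeps only validated exposures and a green agent that drafts remediation code, with patches queued for human review rather than auto-merged. The functions scan_perimeter and propose_patch are hypothetical stand-ins, not real security tooling.

```python
# Hypothetical red-agent / green-agent handoff. Function names are stand-ins.
from dataclasses import dataclass


@dataclass
class Finding:
    asset: str
    issue: str
    validated: bool


def scan_perimeter(assets: list[str]) -> list[Finding]:
    """Red agent: probe external assets and keep only validated exposures."""
    findings = [Finding(a, "auth-bypass", validated=a.endswith("/login")) for a in assets]
    return [f for f in findings if f.validated]


def propose_patch(finding: Finding) -> str:
    """Green agent: draft remediation code for a validated finding."""
    return f"# patch for {finding.asset}: enforce token check before session issuance"


def triage(assets: list[str]) -> list[str]:
    patches = []
    for finding in scan_perimeter(assets):
        patches.append(propose_patch(finding))  # queued for human review, not auto-merged
    return patches


print(triage(["https://shop.example.com/login", "https://cdn.example.com/static"]))
```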

Transforming raw PDFs and images into “agent-ready” data often involves heavy manual engineering. How can a universal context engine automate the enrichment of these files, and what are the practical trade-offs when trying to eliminate data silos across multiple third-party platforms like Salesforce or Workday?

The Knowledge Catalog serves as a universal context engine that eliminates manual engineering by analyzing logs and profiling data in the background as soon as a file hits cloud storage. For example, when thousands of ingredient PDFs are uploaded, the system can autonomously extract entities to identify specific allergens, like soy, that a human might miss. By connecting this to a cross-cloud lakehouse, we can bridge silos across platforms like Salesforce, Palantir, and Workday, though the trade-off involves managing a much more complex data fabric. The benefit, however, is clear: you move from spending half your day finding information to spending your time actually using it to drive business outcomes.
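
Here is a minimal sketch of that background enrichment, assuming a hypothetical extract_text parser and a hard-coded allergen list. The real Knowledge Catalog pipeline is far richer, but the event-driven shape is the same: a file lands in storage, entities are extracted, and agent-ready metadata is attached.

```python
# Illustrative enrichment step triggered when a document lands in object storage.
# extract_text() and the allergen list are stand-ins for the real pipeline.
KNOWN_ALLERGENS = {"soy", "peanut", "milk", "wheat", "egg"}


def extract_text(blob_name: str) -> str:
    # Placeholder for OCR / document parsing of the uploaded PDF.
    return "Ingredients: water, soy lecithin, cocoa butter, wheat starch"


def enrich(blob_name: str) -> dict:
    """Profile a newly uploaded file and attach agent-ready metadata."""
    text = extract_text(blob_name).lower()
    allergens = sorted(a for a in KNOWN_ALLERGENS if a in text)
    return {
        "source": blob_name,
        "entities": {"allergens": allergens},
        "agent_ready": bool(allergens) or "ingredients" in text,
    }


print(enrich("ingredients/batch-0042.pdf"))
# {'source': 'ingredients/batch-0042.pdf', 'entities': {'allergens': ['soy', 'wheat']}, 'agent_ready': True}
```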

Modern retail assistants are driving significant increases in sales conversions by guiding customers through complex product specs. For a brand looking to deploy a multilingual telephony or shopping agent within a few weeks, what does the iterative development cycle look like, and what common pitfalls should teams anticipate?

The development cycle is now remarkably fast; YouTube TV, for instance, deployed a multilingual telephony agent in just six weeks using CX Agent Studio. The tool lets developers add sub-agents for specific promotions through drop-down menus and test them in real time, which keeps iteration rapid. A common pitfall is losing track of agentic actions, which is why “Agent Observability” is a critical feature for keeping agents on script. Brands like Home Depot are already seeing a 10% increase in conversions through their “Magic Apron” assistant, proving that the key is to let the AI handle the complex specs while humans supervise the workflow.
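
To illustrate the observability point, here is a minimal sketch that logs every agent action against an allowlist of permitted tools and flags anything off-script for review. The agent and tool names are hypothetical, not CX Agent Studio APIs.

```python
# Hypothetical "agent observability" check: audit every action, flag off-script tools.
ALLOWED_TOOLS = {
    "promo-subagent": {"lookup_promotion", "apply_discount"},
    "specs-subagent": {"search_catalog", "compare_specs"},
}


def record_action(agent: str, tool: str, args: dict, audit_log: list) -> bool:
    """Append the action to the audit log and return whether it was in policy."""
    in_policy = tool in ALLOWED_TOOLS.get(agent, set())
    audit_log.append({"agent": agent, "tool": tool, "args": args, "in_policy": in_policy})
    return in_policy


log: list[dict] = []
record_action("promo-subagent", "apply_discount", {"sku": "A123", "pct": 10}, log)
record_action("promo-subagent", "issue_refund", {"order": "987"}, log)  # off-script

for entry in log:
    if not entry["in_policy"]:
        print("flag for review:", entry)
```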

AI agents are now being used to migrate legacy codebases multiple times faster than human developers. Could you provide a detailed breakdown of the workflow when agents take over code migration, and what metrics best reflect the impact on a company’s overall development velocity and technical debt?

The workflow involves agents not just writing new snippets, but analyzing entire existing codebases to identify dependencies and refactor them for modern environments. Google’s internal teams found that using agents allowed them to migrate code 6x faster than traditional manual methods, which fundamentally changes the math on technical debt. To measure impact, companies should track the reduction in production timelines—Virgin Voyages, for example, saw a 60% reduction—and the frequency of successful code deployments. This shift allows developers to move away from maintenance and toward high-value innovation, effectively shrinking the “debt” that usually slows down large enterprise software cycles.
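
Both metrics are easy to compute once the raw data exists. Below is a minimal sketch with invented sample numbers, chosen so the timeline reduction works out to the 60% figure mentioned above.

```python
# Two velocity metrics: deployment frequency and production-timeline reduction.
# Sample data is invented for illustration.
from datetime import date

deploys_before = [date(2025, 1, d) for d in (3, 17)]                 # ~2 per month
deploys_after = [date(2025, 6, d) for d in (2, 6, 11, 18, 24, 30)]   # ~6 per month

timeline_before_days = 120   # feature idea -> production, pre-migration
timeline_after_days = 48     # same measure after agent-led migration

frequency_gain = len(deploys_after) / len(deploys_before)
timeline_reduction = 1 - timeline_after_days / timeline_before_days

print(f"deploy frequency: {frequency_gain:.1f}x")
print(f"production timeline reduction: {timeline_reduction:.0%}")    # 60%
```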

Edge computing is now allowing complex AI models to run securely in remote environments, such as cruise ships or localized retail stores. How do you manage model synchronization and hardware constraints in these disconnected settings, and what specific steps ensure that the user experience remains seamless?

Managing AI in disconnected or remote settings requires Distributed Cloud Edge, which allows Gemini models to run locally while maintaining security. On a cruise ship, for instance, the system must handle synchronization when the ship docks or regains high-bandwidth connectivity, but the actual inference happens on-device to avoid latency. Walmart has implemented a similar localized approach, equipping workers with Pixel Fold devices so they can solve supply chain problems or answer customer queries on the spot without relying on a central data center. The key step is keeping reasoning on-device, so the assistant remains responsive even when the external network is unstable.
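
Here is a hedged sketch of that connect-when-docked pattern: inference always runs locally, telemetry queues up while offline, and model weights refresh only when high-bandwidth connectivity returns. The helper functions (has_backhaul, push_telemetry, pull_model_update) are hypothetical stand-ins, not Distributed Cloud Edge APIs.

```python
# Hypothetical edge pattern: local inference always, sync only when connected.
import queue

pending_telemetry: "queue.Queue[dict]" = queue.Queue()


def has_backhaul() -> bool:
    return False  # e.g. ship at sea; flips to True when docked


def push_telemetry(record: dict) -> None:
    print("uploading telemetry:", record)


def pull_model_update() -> None:
    print("pulling refreshed model weights")


def run_local_inference(prompt: str) -> str:
    # On-device model call; no network dependency, so latency stays bounded.
    answer = f"(local model) response to: {prompt}"
    pending_telemetry.put({"prompt": prompt, "answer": answer})
    return answer


def sync_if_connected() -> None:
    if not has_backhaul():
        return
    while not pending_telemetry.empty():
        push_telemetry(pending_telemetry.get())   # upload queued usage data
    pull_model_update()                           # fetch refreshed weights


print(run_local_inference("Which deck is the spa on?"))
sync_if_connected()  # no-op until connectivity returns
```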

What is your forecast for the agentic cloud?

I believe we are entering an era where the cloud is no longer just a storage or compute resource, but a “self-driving” ecosystem where agents manage the infrastructure they run on. We will see capital expenditures continue to soar—Google alone is expected to reach $175-185 billion—as the demand for these autonomous “mission control” systems becomes the standard for every global enterprise. Within the next two years, the distinction between “software” and “agent” will blur, and companies that haven’t unified their data fabric will find themselves unable to compete with the 75% of cloud customers already leveraging these AI tools.
