With Oracle making headlines for both divesting from Ampere Computing and launching powerful new Ampere-based instances, we’re seeing a fascinating strategic play in the cloud infrastructure space. To unpack what this means for developers and the broader industry, we sat down with Maryanne Baines, a recognized authority on cloud technology. In our conversation, we explored the engineering behind Oracle’s new A4 instances and their distinctive OCPU concept, the performance leap over previous generations, and how Oracle’s “silicon neutrality” strategy positions it against competitors like Amazon and Microsoft, which are heavily invested in their own custom Arm chips.
Oracle’s new A4 instances introduce the “OCPU” concept, bundling two physical AmpereOne M cores. Can you detail the engineering rationale behind this specific pairing and describe, with examples, how this configuration benefits developers on workloads like web serving or data processing versus a 1-to-1 mapping?
The OCPU concept is a pragmatic engineering decision designed to bridge the gap between raw hardware and what developers are accustomed to. AmpereOne M cores are single-threaded, so by bundling two physical cores into a single OCPU, Oracle mirrors the familiar shape of an SMT-enabled x86 vCPU pair, except that here you get two full cores rather than two hardware threads sharing one. This isn’t just for show; it has real performance implications. For a developer running a web server, it means a single OCPU can handle concurrent requests more gracefully, providing a smoother and more predictable performance profile without the need to manage individual, single-thread cores. In data processing, the bundling allows a single billable unit to execute parallel tasks with a shared, high-bandwidth path to memory. It simplifies resource management and avoids the kind of micro-stalls you might see when coordinating work across completely separate cores, ultimately giving developers a more robust and efficient building block.
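To make the OCPU-to-core mapping concrete, here is a minimal Python sketch. It assumes, per the A4 OCPU definition above, two single-threaded physical cores per OCPU; the (2 × cores) + 1 worker heuristic is a common pre-fork web-server rule of thumb, not an Oracle recommendation:

```python
# Sketch: map an OCI shape's OCPU count to pre-fork web-server sizing.
# Assumption: on A4 shapes each OCPU bundles 2 physical, single-threaded
# AmpereOne M cores, so schedulable vCPUs = 2 * OCPUs.

CORES_PER_OCPU = 2  # per Oracle's A4 OCPU definition

def vcpus_for_shape(ocpus: int) -> int:
    """Physical cores (and thus vCPUs) exposed by `ocpus` OCPUs."""
    return ocpus * CORES_PER_OCPU

def suggested_workers(ocpus: int) -> int:
    """Common (2 * cores) + 1 pre-fork heuristic; tune for your workload."""
    return 2 * vcpus_for_shape(ocpus) + 1

if __name__ == "__main__":
    for ocpus in (1, 2, 4):
        print(f"{ocpus} OCPU(s) -> {vcpus_for_shape(ocpus)} cores, "
              f"{suggested_workers(ocpus)} workers")
```

The point of the exercise: because each billable unit already contains two full cores, capacity planning stays a simple multiplication rather than a question of which cores share execution resources.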
The article highlights a 35% core-for-core performance gain in A4 instances over the A2, attributing it partly to a 20% higher clock speed. Could you break down the other contributing factors, such as the 12-channel DDR5 memory, and explain the business case for still offering larger A2 instances?
That 35% performance uplift is significant, and you’re right that the 20% clock speed increase is only part of the story. The move to a 12-channel DDR5 memory architecture is a massive contributor. Imagine widening a highway from eight lanes to twelve: it dramatically increases the amount of data that can be fed to those 96 cores at any given moment. This is a game-changer for memory-intensive applications that were previously bottlenecked by data access speeds. The business case for keeping the older A2 instances available is quite shrewd. Not every cloud workload is CPU-bound. There are large-scale applications, perhaps in-memory databases or certain scientific simulations, where the sheer amount of available RAM is the most critical factor. The A2 instances still offer a massive memory footprint, with up to 946 GB. By keeping both, Oracle caters to two distinct customer profiles: those who need the best per-core performance with the A4, and those who need maximum memory capacity with the A2.
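The highway analogy is easy to put into rough numbers. Here is a hedged back-of-envelope sketch; the DDR5-5600 speed grade and standard 64-bit channels are assumptions for illustration, since the exact transfer rate of the A4 memory subsystem isn’t quoted here:

```python
# Back-of-envelope peak memory bandwidth for the "wider highway" analogy.
# Assumption: DDR5-5600 DIMMs (5600 MT/s) on standard 64-bit (8-byte)
# channels; actual speed grades on A4 shapes may differ.

BYTES_PER_TRANSFER = 8   # 64-bit DDR channel width
MEGATRANSFERS = 5600     # assumed DDR5 speed grade, in MT/s

def peak_bandwidth_gbps(channels: int, mt_s: int = MEGATRANSFERS) -> float:
    """Theoretical peak bandwidth in GB/s for `channels` DDR channels."""
    return channels * mt_s * BYTES_PER_TRANSFER / 1000

print(peak_bandwidth_gbps(8))    # 8-channel baseline: 358.4 GB/s
print(peak_bandwidth_gbps(12))   # 12-channel DDR5:    537.6 GB/s
```

Under these assumptions the 12-channel design delivers 1.5x the theoretical peak bandwidth of an 8-channel part, which is why memory-bound workloads can gain well beyond what the 20% clock bump alone would predict.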
Larry Ellison cited “silicon neutrality” for divesting from Ampere, yet OCI is now launching these advanced A4 instances. Can you elaborate on the timing of these decisions and explain how this ongoing collaboration with Ampere fits into Oracle’s broader multi-vendor CPU and GPU strategy?
The timing seems contradictory on the surface, but it actually reflects the long-term nature of hardware development cycles. The decision to divest from Ampere, as Larry Ellison explained, is a high-level strategic pivot: Oracle no longer wants to be in the business of owning a chip designer. However, the A4 instances are the fruit of a collaboration that began years ago. You don’t just spin up a new cloud instance like this in a few weeks; the launch of A4 is the culmination of that prior partnership. This ongoing work fits perfectly into the new strategy of “silicon neutrality.” Ampere is now simply one of several key suppliers, not a strategic investment. That allows Oracle to sit at the table with all the major CPU and GPU vendors, evaluate their roadmaps, and integrate the best-in-class silicon for any given workload without being financially tied to a single one. It’s about offering choice and performance, not just pushing their own hardware.
While competitors like Amazon and Microsoft invest in proprietary Arm chips like Graviton5 and Cobalt, OCI’s A4 instances use Ampere’s open-market silicon. How does this neutrality strategy translate into tangible benefits for your customers regarding cost-effectiveness, performance, and long-term innovation on OCI?
This is Oracle’s key differentiator. By leveraging open-market silicon from Ampere, they tap into an ecosystem driven by broader market competition. Ampere has to innovate and price its chips competitively to win business from everyone, not just one hyperscaler. This dynamic directly benefits OCI customers, likely contributing to the very aggressive pricing we see, like $0.0138 per OCPU per hour. For customers, this means they get cutting-edge performance without paying a “proprietary hardware” premium. Furthermore, it de-risks their own development. Building an application on an A4 instance means you’re working with an architecture that exists outside the Oracle ecosystem, which provides a level of comfort and portability. Long-term, this strategy allows Oracle to adopt innovation faster, whether it comes from Ampere, AMD, Intel, or NVIDIA, ensuring their customers always have access to top-tier, cost-effective performance.
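At the quoted rate, the cost arithmetic is worth making explicit. A quick sketch, assuming a 730-hour average month and covering only the OCPU compute charge (memory, storage, and egress are billed separately and are not included here):

```python
# Rough monthly compute cost at the quoted $0.0138 per OCPU-hour.
# Assumption: 730 hours per average month; OCPU charge only, excluding
# memory, storage, and network egress.

RATE_PER_OCPU_HOUR = 0.0138   # USD, from the quoted A4 pricing
HOURS_PER_MONTH = 730

def monthly_compute_cost(ocpus: int) -> float:
    """Estimated monthly OCPU cost in USD, rounded to cents."""
    return round(ocpus * RATE_PER_OCPU_HOUR * HOURS_PER_MONTH, 2)

if __name__ == "__main__":
    for n in (1, 4, 16):
        print(f"{n} OCPU(s): ~${monthly_compute_cost(n)}/month")
```

A single OCPU works out to roughly ten dollars a month at that rate, which illustrates the “no proprietary hardware premium” point in concrete terms.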
What is your forecast for the role of non-x86 architectures, like Arm, in hyperscale cloud data centers over the next five years?
My forecast is that we are at a major inflection point. Over the next five years, Arm-based architectures will transition from a compelling alternative to a mainstream, first-class citizen for a vast array of cloud-native workloads. The sheer core counts we’re now seeing—from Ampere’s 96-core chips in OCI to Amazon’s massive 192-core Graviton5—demonstrate that the performance is absolutely there. But the real driver is power efficiency. As data centers grapple with skyrocketing energy costs and the insatiable demands of AI, the performance-per-watt advantage of Arm becomes an overwhelming economic argument. Hyperscalers are all in on this shift, and we will see non-x86 architectures command a significant and growing share of the compute landscape, pushing x86 into a future where it is no longer the default choice, but simply one of several viable options.
