With a deep background in evaluating cloud technologies and their real-world applications across various industries, Maryanne Baines offers a critical perspective on the current state of enterprise AI. We sat down with her to unpack the growing paradox of massive AI spending and disappointing returns. Our conversation explored why so many AI initiatives fail to deliver value, how leaders can navigate the tricky path from a small pilot to a full-scale deployment, and the ways in which a cautious economic climate is shaping high-stakes technology decisions.
Given that over half of CEOs see neither revenue gains nor cost savings from their AI investments, what are the primary reasons for this disconnect? Please walk us through a common mistake companies make when first deploying AI that leads to these disappointing results.
That statistic is staggering, isn’t it? A full 56 percent of leaders are pouring money into this technology and seeing nothing back. The core of the problem is a fundamental misunderstanding of what AI is. It’s not a plug-and-play solution; it’s a transformative business capability. The most common mistake I see is the “tactical trap.” A department head hears about AI, gets a budget, and launches an isolated project without a clear connection to the company’s overall strategy. They’re so focused on the tech itself that they forget to ask how it will fundamentally change a workflow or improve a customer outcome. The result is an expensive, disconnected tool that solves a minor problem for a small team, but it never moves the needle on revenue or costs for the entire organization.
We hear that isolated AI projects often fail to deliver value, yet pilots are crucial for testing concepts. How should leaders reconcile a failed pilot with the push for enterprise-wide deployment? What specific metrics should a pilot project hit to justify a larger, riskier rollout?
This is the central paradox leaders are wrestling with. A failed pilot feels like a dead end, but it shouldn’t be. The key is to redefine what “failure” means. If a pilot fails because the underlying concept was flawed, that’s a valuable lesson learned cheaply. But more often, it “fails” due to poor execution or a lack of foundational support. Instead of just killing the project, leaders must ask why it failed. Was the data poor? Did we lack the right skills? Was there no buy-in from the team meant to use it? A successful pilot isn’t just about immediate ROI. The critical metrics to watch are user adoption, data quality improvement, and integration feasibility. If your team is excited about the tool and it’s helping to clean up your messy data, that’s a massive win that justifies a bigger rollout, even if it hasn’t saved a dollar yet.
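Baines's three pilot metrics can be folded into a simple go/no-go scorecard. The sketch below is purely illustrative: the weights, thresholds, and example scores are assumptions, not figures from the interview, and any real rollout decision would calibrate them to the organization.

```python
# Illustrative go/no-go scorecard for an AI pilot, built around the three
# metrics named in the interview: user adoption, data quality improvement,
# and integration feasibility. Weights and the 0.6 threshold are assumed.
def pilot_score(adoption_rate, data_quality_gain, integration_feasibility,
                weights=(0.4, 0.3, 0.3)):
    """Each input is a 0.0-1.0 score; returns a weighted composite."""
    w_adopt, w_data, w_integ = weights
    return (w_adopt * adoption_rate
            + w_data * data_quality_gain
            + w_integ * integration_feasibility)

score = pilot_score(adoption_rate=0.75,         # e.g. 75% of target users active weekly
                    data_quality_gain=0.60,     # e.g. share of records cleaned/governed
                    integration_feasibility=0.50)
print(f"Composite score: {score:.2f}")
print("Scale up" if score >= 0.6 else "Rework before scaling")
```

The point of the exercise is the one Baines makes: a pilot with strong adoption and data-quality gains can clear the bar even before it has saved a dollar.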
With AI adoption still below 25 percent in key areas like support services and product development, what foundational elements are most companies missing? Can you detail the top cultural or technological barriers you see preventing wider, more effective AI integration in large organizations?
Those low adoption numbers—just 20 percent in support services and 19 percent in product development—tell a clear story. Companies are missing the boring, but essential, groundwork. Technologically, the biggest barrier is data chaos. AI is useless without clean, accessible, and well-governed data, and most enterprises are a mess on that front. Culturally, the barrier is a lack of an “AI-ready” mindset. I see a lot of fear and skepticism on the ground, with employees who see AI as a threat or just another complicated tool they’re forced to use. Leadership hasn’t built a clear roadmap or defined the risk processes, so everyone is operating in a fog of uncertainty. You can’t just buy an AI platform; you have to build a culture that enables it.
While AI infrastructure spending is booming, some applications show minimal returns, such as a chatbot saving agents just three minutes a day. How can leaders better evaluate the true productivity gains of an AI tool before investing, and what are the warning signs that a solution is more hype than help?
That chatbot example is painfully perfect. Saving three minutes a day sounds nice until you calculate the immense cost of developing and maintaining that system. The value just isn’t there. To avoid this, leaders need to move beyond vanity metrics and conduct a rigorous “day-in-the-life” analysis before a single dollar is spent. Shadow the employees who will use the tool. What are their biggest time sinks? Where are the real bottlenecks? If an AI solution doesn’t directly address a significant pain point, its ROI will be negligible. The biggest warning sign is a vendor who talks endlessly about the technology’s features but can’t clearly articulate, in simple business terms, how it will change a core process and what specific, measurable outcome you can expect.
With CEO confidence at a five-year low due to geopolitical uncertainty and cyber threats, how does this cautious environment impact big-ticket AI spending? Describe how a leader should weigh pursuing transformative AI initiatives against the pressure to control costs during uncertain economic times.
It’s a real tightrope walk. CEO confidence is down to just 30 percent, and everyone is feeling the pressure from geopolitical risks and tariffs. In this environment, any project that smells of pure, speculative experimentation is the first on the chopping block. However, the data also shows that companies that stop investing out of fear underperform their peers significantly. The right approach is to frame AI not as a cost, but as a strategic imperative for resilience and efficiency. A leader should prioritize initiatives that have a clear, defensive value. For example, instead of a speculative AI for new product discovery, focus on an AI tool that strengthens your supply chain against disruption or enhances your cybersecurity. Tying AI investment to mitigating clear and present business risks makes it an essential expenditure, not a luxury.
What is your forecast for enterprise AI adoption over the next two years?
I believe we’re at an inflection point. The initial hype-fueled, scattershot approach is ending because it’s proven to be a massive waste of money for most. Over the next two years, I predict a “great consolidation” of AI strategy. Companies will move away from dozens of isolated pilots and begin focusing on a few high-impact, enterprise-wide platforms that are deeply integrated into core business functions. We’ll see a much stronger emphasis on building those foundational elements—data governance, risk management, and employee training. The companies that successfully make this pivot will start to see the revenue growth and cost savings that have eluded the majority, while those that don’t will fall even further behind. The era of experimentation is over; the era of strategic execution is beginning.
