
Will the AI Bubble Pop? Experts Predict Reality Will Burst the Bubble in 2026

According to Peter Curran, GM of retail at Coveo, after three years of AI hype and hustle, 2026 will mark a return to the fundamentals: Companies will focus on specific, measurable use cases where AI delivers clear value rather than pursuing AI initiatives for the sake of appearing innovative. 

“Many leaders are feeling impatient waiting to see a return on their investments, especially as budgets tighten and scrutiny increases,” Curran said. “This reset will shift mindsets from ‘are you using AI?’ to ‘what value is AI actually delivering for us?’” 

Siroui Mushegian, CIO at Barracuda, also believes companies will hit an AI operations wall as projects scale from pilots to dozens of implementations.

“Technology and security leaders will face an AI operational bottleneck, struggling to scale from isolated pilots to enterprise-wide implementations. Industries that rely on complex data ecosystems, like finance, manufacturing, and healthcare, will be particularly vulnerable to conflicting data pipelines, inconsistent architectures, and uneven security practices. Without AIOps frameworks and strong governance structures, organizations risk losing visibility, control of their tech stacks, and long-term operational resilience,” Mushegian said.

Effective AI will hinge on the trusted data underneath it, said Philip Dutton, CEO of Solidatus.

“In 2026, ‘explainable AI’ is going to mean ‘explainable data’. Regulators won’t just ask what a model did, they’ll ask which data made it behave that way and who changed that data last,” Dutton noted. “And as AI becomes embedded in decision-making, the C-level are going to be demanding more explainability. The ability to trace AI inputs and outputs across data pipelines is going to define trustworthy AI. Lineage will therefore become the new audit trail for AI ethics, accountability, regulatory assurance, etc.”

Diversity and cultural intelligence will become AI's strategic edge, thinks Crystal Foote, founder and head of partnerships at Digital Culture Group.

“In 2026, the most effective AI-driven campaigns will come not from those with the largest datasets, but from brands that embed cultural intelligence into the core of their tech infrastructure,” said Foote. “AI is only as effective as its operator; without diverse talent guiding its development and deployment, campaigns risk automating bias instead of unlocking insight. Authentic cultural resonance requires more than data—it demands teams with lived experience and cultural fluency.”

Agencies that eliminated DEI roles missed a critical opportunity to evolve those positions into AI and innovation leadership, she explained.

“The expertise required to detect and mitigate bias, understand nuance, and build inclusive strategies already existed within those roles. To succeed in the next wave of AI, brands must reimagine their organizational structures—infusing diversity not as a layer of review, but as a strategic function at the foundation of how AI is built, trained, and applied,” Foote urged.

MCP will become the backbone of a new digital trust fabric, asserts Benoit Grange, chief product and technology officer at Omada.

“2025 showed us what happens when autonomy outpaces accountability. AI systems began acting across business processes with little visibility into who or what was making decisions. This exposed a critical gap: governance frameworks built for human users are insufficient for autonomous agents acting at runtime,” Grange said. “At the same time, the Model Context Protocol (MCP) emerged as a promising foundation for secure collaboration between AI systems defining how agents exchange context, identity, and authorization in real time. This could be the backbone of a new digital trust fabric.”

AI isn’t moving as quickly as you think, cautions Menno Odijk, field CTO at Mendix, the low-code subsidiary of Siemens.

“We’re overestimating how quickly AI will evolve. It could be a while till GenAI can really create production applications. Andrej Karpathy—who is very influential in this space—estimates GenAI is currently at 90% readiness, which is good for prototypes, but it's definitely not good for production work,” Odijk explained. “He predicts that for each added 9%, we need an equal amount of time. So, going from 90% to 99% with GenAI is going to take as much time as it did to get here—that’s a minimum of three years. I think we're overestimating how quickly things will evolve. There's a lot more time needed before we’re at a level where GenAI can really build production apps.”
