
The AI Stack: What Decision Makers Need to Know


Organizations are at vastly different stages with respect to their AI stacks, which is not surprising given the rapid pace of advancement in this field within the past few years. Those that are just getting into AI may have experimented but not gone beyond the parameters of their existing enterprise stack. Others may have elements of the AI stack in place but need to expand it in order to build out AI applications.

Tech-savvy adopters of generative AI (GenAI) and agentic AI may have a capable AI stack and the maturity to maximize its use across multiple applications, whether basic chatbots or sophisticated, multistep workflows. For these organizations, the goal will be to build out their applications as rapidly as possible. However, for many companies, especially those without in-depth technical staff, outside help will be needed in order to achieve AI goals.

Each layer of the AI stack serves a specific purpose, and although they may be broken down differently by various practitioners, the layers start with an infrastructure that includes storage, computational, and networking capabilities.

The data layer consists of databases, data warehouses, data lakes, and data lakehouses with a means of ingesting and processing the data as well as tagging and ensuring data security.

Up to this point, the AI stack is comparable to a standard enterprise stack, but here it diverges to include a model development framework, model training and testing, and development of whatever AI applications are selected for implementation, such as chatbots and AI-driven workflows.
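The layered breakdown described above can be sketched as a simple data structure. This is one illustrative decomposition, not a standard; the layer and component names are assumptions drawn from the description in the text.

```python
# Illustrative sketch of the AI stack layers described above.
# The breakdown is one possible decomposition, not a standard.
AI_STACK = [
    ("infrastructure", ["storage", "compute", "networking"]),
    ("data", ["databases", "warehouses", "lakes/lakehouses",
              "ingestion and processing", "tagging", "security"]),
    ("model", ["development framework", "training", "testing"]),
    ("application", ["chatbots", "AI-driven workflows"]),
]

for layer, components in AI_STACK:
    print(f"{layer}: {', '.join(components)}")
```

The first two layers mirror a standard enterprise stack; the model and application layers are where the AI stack diverges.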

“When we first began our business, some companies that contacted us wanted an open-ended evaluation of their business to determine where they stood with respect to AI,” said Akhil Verghese, cofounder and CEO of Krazimo, an AI consultancy. “We began by answering questions about AI readiness, but fairly quickly evolved into solving actual AI problems.” Most AI initiatives fall into one of two categories, according to Verghese. “These either involve connecting an LLM [large language model] to data and fine-tuning it or integrating AI capabilities with existing enterprise tools such as CRMs or schedulers.”

Krazimo focuses on clients that require substantial and detailed support in order to become AI-enabled. It can accomplish this because its technical staff consists of former Google engineers with a high level of expertise. “We tend to go deeply into the stack,” commented Verghese. “We serve companies such as real estate, legal, and other businesses that do not have a large IT staff. They need to access a lot of different information for their projects, but it is scattered, so developing AI agents can be a challenge.”

Introducing the Model Context Protocol

The progression toward highly autonomous AI systems naturally leads to the agentic AI paradigm, in which responsibility for orchestrating a solution shifts from a human operator to a large language model (LLM). To operate effectively in this role, the model must be able to identify the tools it can use, understand their capabilities, and recognize the structure of the data and other resources available to it. These descriptions are provided through a new protocol known as the Model Context Protocol (MCP).

MCP was developed by Anthropic engineers and introduced in November 2024. It is an open standard and open source protocol that defines how AI systems, including LLM-based applications and autonomous agents, connect to external tools, data sources, and services. In December 2025, MCP was transferred to the Agentic AI Foundation (AAIF), which also includes Block’s goose and OpenAI’s AGENTS.md, under the Linux Foundation for community-driven governance.

Rather than requiring developers to create custom integrations for every tool or service, MCP establishes a common protocol. An application or tool can implement MCP support once and then interoperate with any number of MCP-compatible servers, exposing data, tools, and workflows. As a result, AI applications that support MCP, such as those built on Anthropic’s Claude models and other platforms that have adopted the standard, can access the external resources they need to address a given task.
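The "implement once, interoperate with many" idea rests on a common message format. MCP is built on JSON-RPC 2.0, with methods such as `tools/list` for discovering a server's tools and `tools/call` for invoking one. The sketch below shows the rough shape of these messages; the tool name, description, and field values are hypothetical, and a real exchange involves additional steps such as an `initialize` handshake.

```python
import json

# Client asks an MCP server which tools it exposes (JSON-RPC 2.0).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool names, descriptions, and JSON Schema
# input definitions. The LLM reads these to decide what it can call.
# "search_listings" is a hypothetical tool, not from a real server.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_listings",
                "description": "Search real estate listings by city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The model then invokes a tool with arguments matching its schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_listings",
        "arguments": {"city": "Boston"},
    },
}

print(json.dumps(call_request, indent=2))
```

Because every MCP server answers `tools/list` in this shape, a client written against the protocol can discover and call tools on any compliant server without custom integration code.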

Although it is still a relatively new technology, MCP shows significant promise and already supports a growing number of practical use cases. Its lightweight design enables coordinated operation of diverse applications that may not have direct interfaces with one another. When orchestrated by an MCP-enabled LLM, these otherwise independent components can work together toward solving a shared task. When MCP is paired with a model that has advanced reasoning abilities, a human analyst can provide the model with a high-level task description in natural language and allow it to take over much of the execution.

One example of a product using MCP is the PolyAnalyst no-code platform developed by Megaputer Intelligence. “By exposing PolyAnalyst as an MCP server,” said Sergei Ananyan, CEO of Megaputer, “part of the workflow design and execution can be shifted from a human analyst to an LLM.” During execution, the model selects appropriate tools and incrementally assembles a data analysis workflow. Maintaining a human in the loop remains essential to ensure that the model’s reasoning and logic are sound.

DATA LAYERS

Creating a complete and accurate data layer can be very time-consuming, but by providing a few good examples of how data should be structured, part of this process can be automated. “Ensuring quality data makes the performance of applications such as customer service bots dramatically better,” Verghese emphasized. “Having a human in the loop at the outset and along the way is critical.”

A majority of pilot studies do not scale well when the AI application is put into production. One reason is that the data in pilots is clean and constrained, whereas enterprise data is likely to be uneven in quality and of much greater volume. But another important reason lies outside the technological realm—failure to think through the purpose of the application or define success. “It’s often not the technology,” asserted Verghese, “but the fact that organizations have not done the requisite pre-work to define a clear purpose or define criteria for success.”

Another cause of failure is insufficient testing during the model development stage. “Testing used to be very deterministic,” continued Verghese, “so the tests were simple. A function that does something intelligent is harder to test, because exact wording cannot be used in verifying the correct answers.” Unfortunately, the testing phase is often cut short, leaving the AI application unvalidated.
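Verghese's point about nondeterministic testing can be made concrete: since an LLM-backed function may word a correct answer differently each run, a test suite has to verify required facts rather than exact strings. Below is a minimal sketch of that approach using substring checks; `required_facts_present` and the sample answers are hypothetical, and a production harness might instead use embedding similarity or an LLM judge.

```python
def required_facts_present(answer: str, facts: list[str]) -> bool:
    """Pass if every required fact appears somewhere in the answer,
    regardless of exact wording. Substring matching is the simplest
    possible check; real harnesses often use semantic similarity."""
    normalized = answer.lower()
    return all(fact.lower() in normalized for fact in facts)

# Two differently worded but equally correct model answers
# (illustrative examples, not real model output).
answer_a = "Your order #1042 shipped on May 6 and arrives in 2 days."
answer_b = "Order 1042 went out May 6; expect delivery within 2 days."

facts = ["1042", "may 6", "2 days"]

assert required_facts_present(answer_a, facts)
assert required_facts_present(answer_b, facts)

# A deterministic exact-match test would reject answer_b even
# though it conveys the same correct information.
assert answer_a != answer_b
```

The tradeoff is that fact-based checks can pass a fluent but subtly wrong answer, which is one more reason the testing phase should not be cut short.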

Selecting a tractable problem is a key factor in success; lead conversion is a prime example of a task well suited to an agentic AI system.

“Use of an AI application for lead conversion is consistently one of the most successful AI initiatives,” Verghese commented. “We have seen up to a 100% increase in conversions, and 25%–35% is common.” In some businesses, a quick reply after an initial contact from a prospective customer is particularly important. AI supports lead conversion from a number of angles, including targeting, qualification, and personalization. Putting adequate thought into the decision as to which initiative to pursue at the application level can dramatically increase the odds of success.
