The AI Stack: What Decision Makers Need to Know
THE INTEGRATION CHALLENGE
Although the AI stack has discrete components, it needs to function as an integrated pipeline. “The orchestration layer and workflow need to function with the data at one end and the applications at the other,” said Michael Bevilacqua, VP for AI product management at Adeptia. “We assist companies in connecting their data with their AI applications.”
Adeptia was founded 25 years ago as a data integration platform for insurance and manufacturing companies that used large legacy systems, and now it applies that experience to AI systems in the same verticals.
As part of its integration management, Adeptia uses two industry-standard frameworks, LangChain and LangGraph. LangChain helps developers create LLM applications that connect language models with data sources and APIs. LangGraph, an open-source framework for building AI agents and multiagent systems, runs on top of LangChain. “These tools are industry standard frameworks for AI, and we wanted to leverage them for workflows,” said Tim Bond, chief product officer at Adeptia, “rather than reinventing the wheel.”
One of the most difficult problems in AI workflows is state management. “AI can get complicated, with events coming in and out of the process,” continued Bond. “For example, it may have to pause for human input at certain points or wait for another process to be completed.” Stateful workflows store information about data, context, and previous events, enabling them to retrieve that information and use it for the next action. They can also personalize responses based on historical data.
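The pause-and-resume pattern Bond describes can be illustrated with a minimal sketch in plain Python. The workflow names and steps here are hypothetical, and this is not the LangGraph API; it only shows the core idea of a stateful workflow that persists its position and context while waiting for a human.

```python
import json
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """Accumulated context: results of prior steps plus a step cursor."""
    step: int = 0
    context: dict = field(default_factory=dict)

def run_workflow(state, steps, human_input=None):
    """Advance the workflow, pausing whenever a step needs human input."""
    while state.step < len(steps):
        name, needs_human, action = steps[state.step]
        if needs_human and human_input is None:
            return state, f"paused: awaiting human input for '{name}'"
        state.context[name] = action(state.context, human_input)
        human_input = None          # consumed by the step that needed it
        state.step += 1
    return state, "complete"

# Hypothetical three-step process: extract data, human review, load.
steps = [
    ("extract", False, lambda ctx, h: {"records": 42}),
    ("review",  True,  lambda ctx, h: h),            # waits for a person
    ("load",    False, lambda ctx, h: ctx["review"]),
]

state = WorkflowState()
state, status = run_workflow(state, steps)           # pauses at "review"
# While paused, the state can be serialized and stored until input arrives.
saved = json.dumps({"step": state.step, "context": state.context})
state, status = run_workflow(state, steps, human_input="approved")
```

Because all progress lives in the serializable state object rather than in the call stack, the process can wait indefinitely for the human reviewer and resume exactly where it left off.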
Adeptia continues to focus on its initial verticals of insurance and manufacturing. “We are familiar with the types of data that are used in these systems,” Bond commented. “A lot of legacy systems have schemas that are loosely defined or hard to understand. Adding context to the semantic layer to explain what’s in them allows an AI application to access the data more cleanly. Context provides a distinct competitive advantage.” Adeptia starts with the data source, performs any required transformations, and moves the data to the target application, with business logic and rules applied in between.
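The source-transform-target flow described above can be sketched briefly. The field names, the legacy columns, and the sample business rule below are all hypothetical, chosen only to show how a cryptic legacy schema is normalized and routed on its way to a target application.

```python
def transform(record):
    """Normalize a loosely defined legacy schema into clear field names."""
    return {
        "policy_id": record.get("POL_NO"),          # cryptic legacy column
        "premium_usd": float(record.get("PRM", 0)), # stored as text upstream
    }

def business_rules(record):
    """Apply routing logic between the source and the target."""
    record["review_required"] = record["premium_usd"] > 10_000
    return record

def pipeline(source_records, deliver):
    """Source -> transformations -> business rules -> target."""
    for raw in source_records:
        deliver(business_rules(transform(raw)))

target = []
pipeline([{"POL_NO": "A-17", "PRM": "12500"}], target.append)
```

The semantic layer Bond mentions corresponds to the mapping inside `transform`: once the legacy columns are explained, downstream AI applications can consume the data cleanly.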
One effective strategy for developing agentic AI is to start with an internal use case before externalizing it. “When airlines first developed systems for booking passenger flights, they designed the systems for their employees to use and later made them available for consumers to book flights themselves,” noted Bond. Starting internally makes monitoring and benchmarking easier in the early stages of evaluation and creates opportunities for quality control. “The same model is good for developing AI agents. Your employees understand your business processes, they know the right answers, so they can provide valuable input, and as your AI agents get smarter and more capable, you can then launch them publicly.”
Regardless of the amount of automation, the role of humans remains critical. “Supervised autonomy is better than blind automation,” noted Bevilacqua. “Initially, systems were built for human users, with guardrails about identity and access management. AI is different, and security has not necessarily adapted to AI.” Even with AI systems performing self-checking, human intervention should be available, and an AI agent should never be able to act outside its authorized scope.
MAKING ENTERPRISE APPLICATIONS USER-FRIENDLY
Enterprise applications form the backbone of information management in organizations but can be difficult to use. Software company Adopt AI helps corporate developers create AI agents that guide workers through the processes that enterprise applications perform. “People often don’t use enterprise applications efficiently, partly because many of the applications do not have documentation about how to use them or how other applications can interact with them,” said Anirudh Badam, chief AI officer and cofounder of Adopt AI. “It is also not easy for LLMs that provide natural language interfaces to get to these applications.”
However, LLMs have a unique advantage, Badam pointed out. “They can do two things really well. They can understand what you are saying significantly better than previous technologies, and they can also write code that the computer can read natively. Our mission is to make computing more accessible to everyone by using these two features.” Adopt AI has built a developer kit that lets enterprise applications communicate with each other and guide end users through the steps needed to accomplish their tasks.
Information about how an application works is obtained through process capture of user workflows, and the captured workflows become the documentation for the process. That documentation is then sent, along with an API and the developer kit, to the LLMs. The orchestration layer understands the user’s intent and converts it into actions in the application. For example, an HR manager might say through a natural language interface that they want to onboard six interns and ask how to do it. The AI agent determines the appropriate workflow in each of the underlying applications, invokes them, and walks the manager through the process.
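The onboarding example can be sketched as a tiny orchestration layer. Everything here is hypothetical: the application names, the workflow steps, and the intent recognizer (which in a real system would be an LLM call, stubbed out below with keyword matching).

```python
# Map a recognized intent to the workflows of the underlying applications.
WORKFLOWS = {
    "onboard_intern": [
        ("hr_app", "create_employee_record"),
        ("it_app", "provision_laptop"),
        ("email_app", "create_mailbox"),
    ],
}

def recognize_intent(utterance):
    """Stand-in for an LLM: extract the intent and its entities."""
    if "onboard" in utterance and "intern" in utterance:
        return "onboard_intern", {"count": 6}
    return None, {}

def orchestrate(utterance):
    """Turn a natural-language request into a step-by-step action plan."""
    intent, entities = recognize_intent(utterance)
    if intent is None:
        return ["Sorry, I did not understand the request."]
    steps = []
    for n in range(entities.get("count", 1)):
        for app, action in WORKFLOWS[intent]:
            steps.append(f"intern {n + 1}: call {app}.{action}")
    return steps

plan = orchestrate("I want to onboard six interns")
```

The plan produced here is what the agent would walk the HR manager through, one application call at a time.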
“Once the developer has used Adopt AI to create the workflow, the user has conversational orchestration at the front end, and workflow automation at the back end,” concluded Badam. “This reduces the burden on IT for technical support of enterprise applications and expedites the workers’ tasks.”