Moving to Agentic AI
Intelligent agents are not new. In its April 15, 1995, issue, Datamation published my 1–3-year forecast. Agents, I predicted, would automate repetitive tasks, help with email, and coach us on managing time wisely. Sound familiar?
I went on to predict agents would customize user interfaces, manage workflows, and make restaurant reservations. By 2000, agents would “begin to understand context.” Socially aware agents would orchestrate interactions, including interorganizational workflows such as supply chain and payment management.
Of course, this did not happen by 2000, but Robotic Process Automation, or RPA, the lowest rung of the agent hierarchy, began to emerge. RPA provided a reliable foundation for deterministic processes, those driven by predefined rules and repeatable workflows, enabling well-designed interactions between processes and people.
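To make “deterministic” concrete, here is a minimal sketch in Python of the pattern RPA automates: predefined rules applied in a fixed order, producing the same output for the same input every time. The invoice-routing task, thresholds, and vendor names are hypothetical.

```python
# A minimal sketch of the deterministic pattern RPA embodies: predefined
# rules applied in a fixed order, same output for the same input, no
# learning. The invoice-routing task and rules are hypothetical.

APPROVED_VENDORS = {"Acme Corp", "Globex"}

def route_invoice(invoice: dict) -> str:
    """Apply predefined rules in a fixed order; no learning, no surprises."""
    if invoice["amount"] <= 1_000:
        return "auto_approve"
    if invoice["vendor"] in APPROVED_VENDORS:
        return "manager_queue"
    return "manual_review"

print(route_invoice({"amount": 500, "vendor": "Acme Corp"}))   # auto_approve
print(route_invoice({"amount": 9_000, "vendor": "Initech"}))   # manual_review
```

Every execution path is visible in the rules themselves, which is precisely what makes RPA reliable and what distinguishes it from the agents discussed below.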
Twenty-five years later, “agent,” or “agentic AI,” remains an overloaded term. It generally describes systems that perceive, reason, and act toward goals autonomously. The recent infatuation with chatbot technology and language models has shifted the industry’s attention to agents, with promises that they will do all the things I forecasted and much more.
WHAT IS AGENTIC AI?
Unlike chatbots, which wait for a human prompt to respond, agentic AI can act on data or other triggers. Agents reason over context, adapt to change, and make decisions that weren’t scripted in advance. They can interact with APIs, humans, and data in ways that resemble goal-directed collaboration.
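The difference from the RPA sketch shows up in code. The example below is a toy agentic loop in plain Python, not any particular framework: a trigger arrives, the agent reasons over its context to choose a next action that was not scripted in advance, acts through a tool, and re-plans after failures. The tool names and selection heuristic are illustrative assumptions.

```python
import random

# A hypothetical agentic loop: perceive a trigger, reason over context,
# choose an action that was not scripted in advance, act through a tool,
# and recover from failures by re-planning. Tool names and the selection
# heuristic are illustrative assumptions, not a real framework's API.

TOOLS = {
    "query_api":   lambda ctx: f"fetched records about {ctx['topic']}",
    "ask_human":   lambda ctx: f"asked a person to confirm {ctx['topic']}",
    "update_data": lambda ctx: f"wrote updated records for {ctx['topic']}",
}

def choose_tool(context: dict) -> str:
    """Stand-in for reasoning: prefer untried tools, avoid recent failures."""
    for name in ("query_api", "update_data", "ask_human"):
        if name not in context["done"] and name not in context["failed"]:
            return name
    return "ask_human"  # out of options: fall back to a human

def run_agent(goal: str, context: dict, max_steps: int = 6) -> None:
    for step in range(max_steps):
        tool_name = choose_tool(context)
        try:
            if random.random() < 0.3:             # simulate an unreliable call
                raise RuntimeError(f"{tool_name} failed")
            print(f"step {step}: {TOOLS[tool_name](context)}")
            context["done"].add(tool_name)
            if tool_name == "ask_human":
                context["failed"].clear()         # human input unblocks retries
            if "update_data" in context["done"]:  # goal state reached
                print(f"goal satisfied: {goal}")
                return
        except RuntimeError as err:
            print(f"step {step}: {err}; re-planning")
            context["failed"].add(tool_name)

run_agent("refresh supplier records",
          {"topic": "suppliers", "done": set(), "failed": set()})
```

Unlike the invoice router, the sequence of actions here is not fixed in advance; it depends on what succeeds, what fails, and what the agent has already tried.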
Agents exist along a semantic continuum that begins with robotic agents guided by rules and moves through AI assistants to multi-agent systems. With each step in sophistication, the level of agency, learning ability, goal handling, and error recovery changes, and both individual agents and coordinated agencies become more complex.
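One way to see how those attributes shift across the continuum is to encode them as data. The rungs and low/medium/high ratings below are my illustrative values, not a standard scale.

```python
from dataclasses import dataclass

# An illustrative encoding of the continuum described above. The rungs
# and ratings are simplifying assumptions, not a standard taxonomy.

@dataclass
class AgentClass:
    rung: str
    agency: str          # how independently it initiates action
    learning: str        # whether behavior improves with experience
    goal_handling: str   # fixed tasks vs. open-ended goals
    error_recovery: str  # halt, retry, or re-plan on failure

CONTINUUM = [
    AgentClass("robotic (rule-based)", "low", "none",
               "fixed tasks", "halt and escalate"),
    AgentClass("AI assistant", "medium", "some",
               "prompted goals", "retry with help"),
    AgentClass("multi-agent system", "high", "ongoing",
               "negotiated goals", "re-plan and delegate"),
]

for c in CONTINUUM:
    print(f"{c.rung}: agency={c.agency}, learning={c.learning}, "
          f"goals={c.goal_handling}, recovery={c.error_recovery}")
```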
It is important to note that this continuum is a simplification. Agent evolution resembles biology more than engineering, a branching tree rather than a straight line. New branches sprout as requirements and discovery demand, while other branches are pruned or subsumed.
I wrote a LinkedIn article in October 2025 that suggested we abandon the idea of future-proofing, noting that, in technology, any maturity matrix is a myth (“The End of Future-Proofing and Why Maturity Is Now a Mirage”; linkedin.com/pulse/end-future-proofing-why-maturity-now-mirage-daniel-rasmus-rwkmc). A year from now, we will likely still be talking about agents, but the details may be very different. Merely implementing agents doesn’t future-proof enterprise systems, and reaching a “false” maturity level is akin to planting a flag on an ice cream cone.
I created a chart depicting a simplified view of agent classifications and their attributes, based on the work of K. J. Kevin Feng and his colleagues (“Levels of Autonomy for AI Agents,” working paper; arxiv.org/html/2506.12469v1).
Agents could easily be represented by a 3D diagram that includes their purpose, how they are embodied, and their form of organization. Examples of agents not represented on this chart include market and auction agents, drone control and sensor network agents, synthetic populations, affective companions, social rehearsal agents, and game-play agents. Agents could also be classified by the roles people play in an agentic system, such as operator, collaborator, approver, or observer.
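A schema for such a classification might look like the sketch below. The three axes and the human roles come from the paragraph above; the specific enum values are hypothetical.

```python
from enum import Enum

# A hypothetical schema for the three classification axes named above
# (purpose, embodiment, organization), plus the roles people play in an
# agentic system. The specific values are illustrative assumptions.

class Purpose(Enum):
    PROCESS_AUTOMATION = "process automation"
    MARKET_AUCTION = "market and auction"
    DRONE_SENSOR = "drone control and sensor network"
    AFFECTIVE_COMPANION = "affective companion"

class Embodiment(Enum):
    SOFTWARE = "software"
    ROBOTIC = "robotic"
    SIMULATED = "simulated population"

class Organization(Enum):
    SOLO = "single agent"
    ORCHESTRATED = "orchestrated agency"
    EMERGENT = "emergent multi-agent system"

class HumanRole(Enum):
    OPERATOR = "operator"
    COLLABORATOR = "collaborator"
    APPROVER = "approver"
    OBSERVER = "observer"

# One point in the 3D classification space, with a human in the loop:
example = (Purpose.PROCESS_AUTOMATION, Embodiment.SOFTWARE,
           Organization.ORCHESTRATED, HumanRole.APPROVER)
print(", ".join(e.value for e in example))
```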
WHY IT LIKES AGENTS
Agentic AI has reignited IT’s strategic imagination because it promises to restore a sense of control that had been eroding as automation spread across functions such as marketing and finance. Rather than being reduced to custodians of infrastructure, IT leaders now see themselves as architects of intelligent systems capable of reasoning, adapting, and acting autonomously. Agentic architectures call for disciplined design and governance rather than improvisation—these are not desktop assistants but orchestrated ecosystems in which software components interact, learn, and collaborate.
Such systems demand technical fluency and architectural rigor. Building, securing, and maintaining multi-agent environments require expertise in integration and data governance as well as an appreciation for probabilistic reasoning. Large language models (LLMs) and related technologies introduce new dimensions of uncertainty, but they also deepen the role of IT as the interpreter between human goals and machine execution. In this resurgence, IT retains ownership of software-based automation, although that control remains conditional and complex.
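What an orchestrated, governed ecosystem implies in practice can be sketched simply. The toy Python example below is not a real framework: specialized agents handle subtasks, and a governance gate holds consequential actions for human approval. The agent names, risk scores, and threshold are illustrative assumptions.

```python
# A toy orchestration sketch: specialized agents handle subtasks, and a
# governance gate routes risky plans to a human approver before execution.
# Agent names, risk scores, and the threshold are illustrative assumptions.

from typing import Callable

def research_agent(task: str) -> dict:
    return {"finding": f"summary of data on {task}", "risk": 0.2}

def planning_agent(report: dict) -> dict:
    return {"action": f"update records based on {report['finding']}",
            "risk": report["risk"] + 0.5}    # acting is riskier than reading

def governance_gate(plan: dict, approve: Callable[[dict], bool]) -> bool:
    """Probabilistic systems need checkpoints: route risky plans to a human."""
    if plan["risk"] < 0.5:
        return True                          # low risk: proceed autonomously
    return approve(plan)                     # high risk: an approver decides

def orchestrate(task: str, approve: Callable[[dict], bool]) -> None:
    report = research_agent(task)
    plan = planning_agent(report)
    if governance_gate(plan, approve):
        print(f"executing: {plan['action']}")
    else:
        print(f"held for review: {plan['action']}")

# A stand-in approver; a real system would queue the plan for a person.
orchestrate("supplier payment terms", approve=lambda plan: False)
```

Even in this toy form, the design choice is visible: autonomy is granted per action, not per system, which is where IT’s governance role lives.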