Moving to Agentic AI
OPERATIONAL AND SECURITY CHALLENGES OF AGENTIC SYSTEMS
Like so many areas of AI, agentic systems have solved the last-mile problems but now face the last inch. Compared with the enormous volumes of data already ingested by foundation models, departmental content trapped in SharePoint sites or PDF files should be easy to add, but it rarely is.
Companies such as Anthropic and OpenAI spend millions restructuring data before they begin training models. This is because language models operate on tokens, not words, and even subtle inconsistencies in how information is encoded can distort meaning. That precision matters even more for agentic systems, where the software must reason, plan, and act on the content.
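To see why, consider how small encoding differences change what a model actually receives. The following sketch is a minimal illustration, assuming the open-source tiktoken library and its cl100k_base encoding; the specific token IDs are incidental, and the point is simply that visually identical text can produce different token sequences:

```python
import tiktoken  # open-source tokenizer library, assumed installed for illustration

enc = tiktoken.get_encoding("cl100k_base")

# Three strings a human reads identically, but which differ in encoding.
variants = {
    "clean":              "Q3 revenue: $4.2M",
    "non-breaking space": "Q3\u00a0revenue: $4.2M",  # common PDF-export artifact
    "doubled space":      "Q3 revenue:  $4.2M",      # common table-cell artifact
}

for label, text in variants.items():
    tokens = enc.encode(text)
    print(f"{label:20} -> {len(tokens)} tokens: {tokens}")
# Each variant produces a different token sequence, so retrieval and reasoning
# treat them as three different inputs even though the meaning is identical.
```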
Successful orchestration depends on translating enterprise-specific human knowledge into machine-usable forms. That translation remains an ongoing investment, both for startups building tools to streamline ingestion pipelines and for enterprises re-encoding their knowledge for AI.
At scale, orchestrating AI agents means managing entire communities of autonomous digital workers. These agents must coordinate processes, integrate external tools and APIs, and achieve multistep goals while maintaining alignment with human intent. Yet the more agents interact, the greater the risk of misinterpretation or drift.
Tasks delegated through multiple layers of agents can slowly diverge from their original purpose, as each system reformulates goals within its own frame of reference. This drift can lead to “authority creep,” in which agents extend their permissions to complete tasks more efficiently, often beyond their intended boundaries. The complexity of these interactions introduces new security exposures, from unanticipated API chaining and misuse to emergent phenomena such as multi-agent collusion, in which autonomous systems discover ways to collaborate outside explicit programming, potentially circumventing constraints or governance protocols.
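A common defense against authority creep in delegation chains is to make permissions explicit and allow them only to narrow as work is handed down. The sketch below is illustrative rather than drawn from any particular framework; the `Task` and `delegate` names are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    goal: str
    permissions: frozenset  # tools and APIs this task may touch

def delegate(parent: Task, subgoal: str, requested: set) -> Task:
    """Create a subtask whose permissions can shrink but never grow.

    Requested permissions outside the parent's scope are dropped, so no
    layer of delegation can quietly widen its own authority.
    """
    granted = frozenset(requested) & parent.permissions
    return Task(goal=subgoal, permissions=granted)

# An orchestrator task delegates a reporting subtask.
root = Task("prepare quarterly summary", frozenset({"read:crm", "read:finance"}))
child = delegate(root, "pull raw sales figures", {"read:crm", "write:finance"})

assert "write:finance" not in child.permissions  # requested, but never granted
```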
Beyond security, the interconnected nature of agentic systems invites cascading consequences. A single decision can ripple across dependent processes, amplifying small deviations into large disruptions. As agent networks grow, costs and latency often rise while auditability and transparency decline.
The reasoning paths behind agent decisions become increasingly opaque, making it difficult to determine why a particular sequence of actions occurred. Large-scale deployments can also suffer from inefficiencies, with agents looping on tasks or triggering unnecessary complexity within orchestration frameworks. Managing these systems is no longer about controlling software—it is about maintaining coherence in a dynamic, partially autonomous ecosystem that constantly negotiates between precision and unpredictability.
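One lightweight countermeasure is to give every agent run a hard step budget and an append-only trace, so runaway loops halt early and the sequence of decisions remains reviewable. The following sketch is illustrative and not tied to any orchestration framework:

```python
import json
import time

class StepBudgetExceeded(RuntimeError):
    pass

class RunTrace:
    """Append-only record of an agent run with a hard cap on steps."""

    def __init__(self, max_steps: int = 25):
        self.max_steps = max_steps
        self.events: list[dict] = []

    def record(self, action: str, detail: str) -> None:
        if len(self.events) >= self.max_steps:
            raise StepBudgetExceeded(
                f"agent exceeded {self.max_steps} steps; possible loop")
        self.events.append({"ts": time.time(), "action": action, "detail": detail})

    def dump(self) -> str:
        # A serialized trace keeps the reasoning path auditable after the fact.
        return json.dumps(self.events, indent=2)

trace = RunTrace(max_steps=3)
try:
    for attempt in range(10):  # a task that keeps retrying instead of finishing
        trace.record("call_tool", f"retry #{attempt}")
except StepBudgetExceeded as exc:
    print(exc)           # halt the run...
    print(trace.dump())  # ...and surface the trace for review
```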
A GUIDE TO GOVERNING COMPLEX AGENTIC SYSTEMS
The complexity of orchestrated agent systems, in which software acts with varying degrees of independence, demands a new level of governance and design discipline. Problems like goal drift, security exposure, and cascading instability cannot be managed with the same tools used for traditional software. Organizations must treat AI agents as semi-autonomous entities operating under a defined framework of authority, constraints, and oversight. Success requires blending architectural rigor with continuous human supervision.
Governance begins with clarity of purpose. Every agent should have a well-defined role and scope that are anchored to an explicit description of its goals, constraints, and expected outcomes. These design “personas” guide the agent’s behavior and protect human intent from being lost as tasks are delegated across systems. Guardrails and policies must translate directly into code, forming the bridge between organizational intent and technical execution.
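A minimal sketch of what such a persona might look like when expressed directly in code follows; the field names and the `is_permitted` check are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPersona:
    """Machine-readable statement of an agent's role, scope, and limits."""
    role: str
    goals: tuple
    allowed_tools: frozenset
    forbidden_actions: frozenset
    escalation_contact: str  # the human accountable for this agent

    def is_permitted(self, tool: str, action: str) -> bool:
        # Guardrail check the orchestrator can run before every step.
        return tool in self.allowed_tools and action not in self.forbidden_actions

invoice_agent = AgentPersona(
    role="accounts-payable assistant",
    goals=("match invoices to purchase orders", "flag discrepancies"),
    allowed_tools=frozenset({"erp.read", "email.draft"}),
    forbidden_actions=frozenset({"payment.release"}),
    escalation_contact="ap-team-lead@example.com",
)

assert invoice_agent.is_permitted("erp.read", "lookup_po")
assert not invoice_agent.is_permitted("erp.read", "payment.release")
```

Because the persona is data, the same definition can be versioned, reviewed, and enforced by the orchestration layer rather than living only in documentation.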
Autonomy should grow gradually and only when agents demonstrate consistent, verifiable performance. Defining thresholds for human review, particularly for high-risk or irreversible actions, helps prevent authority creep and keeps human oversight embedded within automated systems.
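One way to encode those thresholds is a gate that routes high-risk or irreversible actions to a human queue while letting routine steps proceed. The risk scores and function below are illustrative assumptions, not fixed policy:

```python
from enum import Enum

class Decision(Enum):
    AUTO_APPROVE = "auto_approve"
    HUMAN_REVIEW = "human_review"
    BLOCK = "block"

# Illustrative risk scores per action type; real values would come from policy.
RISK_SCORES = {
    "draft_email": 0.1,
    "update_record": 0.4,
    "delete_record": 0.8,    # irreversible
    "transfer_funds": 0.95,  # irreversible and high impact
}

def gate(action: str, review_threshold: float = 0.5,
         block_threshold: float = 0.9) -> Decision:
    """Route an action by risk: approve, hold for a human, or refuse outright."""
    risk = RISK_SCORES.get(action, 1.0)  # unknown actions default to maximum risk
    if risk >= block_threshold:
        return Decision.BLOCK
    if risk >= review_threshold:
        return Decision.HUMAN_REVIEW
    return Decision.AUTO_APPROVE

assert gate("draft_email") is Decision.AUTO_APPROVE
assert gate("delete_record") is Decision.HUMAN_REVIEW
assert gate("transfer_funds") is Decision.BLOCK
```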