Citrix Launches NetScaler AI Gateway to Provide Governance for Enterprise AI Workloads
Citrix, a Cloud Software Group company, is introducing new AI governance capabilities for NetScaler, its modern, high-performance application delivery and security platform.
According to the company, NetScaler AI Gateway helps control large language model (LLM) costs, optimize performance, and secure AI application delivery across the enterprise.
NetScaler AI Gateway extends the platform’s proven strengths in application delivery, security, observability, and governance to AI workloads, enabling enterprises to manage AI applications with the same operational discipline used for other mission-critical services, the company said.
“AI is becoming embedded in the operational fabric of the enterprise,” said Steve Shah, general manager of NetScaler at Citrix. “Before organizations scale AI across core business processes, they need trusted guardrails to manage risk, control cost, and maintain performance. NetScaler AI Gateway extends our leadership in enterprise application delivery into the AI era, enabling organizations to run AI services with the governance, visibility, and operational confidence required for mission-critical deployments.”
NetScaler AI Gateway is already integrated with Citrix Aidrien, Citrix’s AI-powered in-product assistant. The architecture of Citrix Aidrien relies on NetScaler to manage and secure AI traffic as part of its production deployment.
Citrix is also announcing NetScaler integration with Protecto, an AI-native data protection platform. The integration connects Protecto’s agentic data classification capabilities to NetScaler AI Gateway, ensuring sensitive information is detected and masked before it reaches AI workflows, while preserving the usability of the data. This ultimately helps enterprises accelerate AI adoption without compromising compliance, security, or customer trust, the company said.
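To make the detect-and-mask idea concrete, here is a minimal sketch of masking sensitive data in a prompt before it reaches a model. The patterns and placeholder labels are hypothetical stand-ins; a real platform such as Protecto performs context-aware, agentic classification far beyond simple pattern matching:

```python
import re

# Hypothetical patterns standing in for a classifier's detections;
# an agentic platform would detect far richer, context-dependent PII.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected PII with type-preserving placeholders so the
    prompt stays usable by the downstream model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

masked = mask_pii("Email jane.doe@example.com, SSN 123-45-6789")
# masked == "Email <EMAIL>, SSN <SSN>"
```

Type-preserving placeholders (rather than blanket deletion) are what keep the masked prompt useful to the model, which is the "preserving usability" point above.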
Key capabilities of NetScaler AI Gateway include:
- Token-based rate limiting to help control LLM usage costs and enforce fair consumption across teams and applications
- Token-latency-based load balancing that optimizes performance, cost, and reliability by intelligently routing inference requests across multiple LLM backends
- Spillover routing that automatically redirects requests to a secondary model when token quotas are exhausted, ensuring continuity instead of request failure
- Prompt management and LLM redaction that enforce consistent governance behavior by intercepting and modifying prompts in transit and protecting sensitive information
- Integration with best-of-breed AI security platforms, such as Enkrypt AI LLM Firewall for Open Worldwide Application Security Project (OWASP) LLM threat detection and Protecto for context-aware detection and masking of sensitive data such as personally identifiable information
- Web application firewall protections for LLMs and Model Context Protocol (MCP) servers, available in the latest NetScaler signature updates
- AI-specific observability with metrics and logs covering token usage, latency, and quota violations, exportable to analytics platforms for visibility and troubleshooting
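The interplay of the first and third capabilities, token-based rate limiting plus spillover routing, can be sketched roughly as follows. This is an illustrative toy, assuming a simple fixed-window quota; the backend names, quota numbers, and window logic are hypothetical and do not reflect NetScaler's actual configuration or algorithms:

```python
import time

class TokenBudgetRouter:
    """Toy sketch: requests draw from a per-window token quota on a
    primary model and spill over to a secondary model once the quota
    is exhausted, so requests degrade gracefully instead of failing."""

    def __init__(self, quota_tokens: int, window_seconds: float = 60.0):
        self.quota = quota_tokens
        self.window = window_seconds
        self.used = 0
        self.window_start = time.monotonic()

    def route(self, estimated_tokens: int) -> str:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # New rate-limit window: reset the consumed-token counter.
            self.used = 0
            self.window_start = now
        if self.used + estimated_tokens <= self.quota:
            self.used += estimated_tokens
            return "primary-llm"
        # Quota exhausted: redirect to the secondary backend.
        return "secondary-llm"

router = TokenBudgetRouter(quota_tokens=1000)
backends = [router.route(400), router.route(400), router.route(400)]
# backends == ["primary-llm", "primary-llm", "secondary-llm"]
```

Counting tokens rather than requests is what lets a gateway enforce fair consumption across teams, since one large prompt can cost as much as many small ones.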
NetScaler AI Gateway builds on Citrix’s broader secure-by-design approach to AI, including integrations with AI security and LLM firewall technologies designed to protect enterprise AI deployments. Complementing NetScaler AI Gateway, the NetScaler Console MCP Server securely connects AI agents to operational intelligence through a standardized MCP interface—extending the same governance and visibility to agentic AI workflows, Citrix said.
Both NetScaler AI Gateway and NetScaler Console MCP Server are available to customers now.
For more information about this news, visit www.citrix.com.