Dataiku Debuts LLM Cost, Quality, and Safety Guardrail Services
Dataiku, the Universal AI Platform, is debuting its LLM Guard Services suite, a set of solutions designed to preserve the quality, safety, and cost-effectiveness of generative AI (GenAI) deployments at scale, from proof-of-concept to full production. The three components—Cost Guard, Safe Guard, and Quality Guard—function within a scalable, no-code framework integrated with the Dataiku LLM Mesh, the secure API gateway for building and managing enterprise-grade GenAI.
According to a recent Dataiku survey, enterprises want to consolidate tools as they scale their AI projects in order to avoid siloed systems, yet 88% lack a dedicated application or process for managing large language models (LLMs).
"As the AI hype cycle follows its course, the excitement of two years ago has given way to frustration bordering on disillusionment today. However, the issue is not the abilities of GenAI, but its reliability," said Florian Douetteau, CEO of Dataiku. "Ensuring that GenAI applications deliver consistent performance in terms of cost, quality, and safety is essential for the technology to deliver its full potential in the enterprise.”
Designed to close this operational and security gap, Dataiku’s LLM Guard Services address common challenges associated with building, deploying, and managing GenAI at scale. Paired with the Dataiku Universal AI Platform, they let enterprises easily select and implement GenAI while providing oversight and assurance for LLM selection and usage.
“As part of the Dataiku Universal AI platform, LLM Guard Services is effective in managing GenAI rollouts end-to-end from a centralized place that helps avoid costly setbacks and the proliferation of unsanctioned ‘shadow AI’—which are as important to the C-suite as they are for IT and data teams,” said Douetteau.
LLM Guard Services comprises the following pillars:
- Cost Guard: A cost-monitoring component that tracks enterprise LLM usage, helping organizations anticipate and manage their GenAI spend.
- Safe Guard: A solution that evaluates requests and responses for sensitive information, offering customizable tooling to secure LLM usage and prevent data misuse and leakage (a generic sketch of this request/response screening pattern appears after this list).
- Quality Guard: A quality assurance solution that maximizes response quality while bringing objectivity and scalability to the evaluation cycle, delivering automatic, standardized, code-free LLM evaluation for each use case. This component also makes GenAI applications more accessible to stakeholders for greater understanding and supports sustained, more predictable GenAI reliability over time.
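The announcement does not detail how these guardrails are enforced. As a rough, hypothetical illustration of the general pattern the suite describes—screening requests and responses for sensitive content and keeping a running estimate of spend—the sketch below wraps a generic completion function. The class, regex patterns, token heuristic, and pricing figure are illustrative assumptions only and are not Dataiku’s implementation or API.

```python
# Generic, illustrative sketch of an LLM guardrail wrapper.
# NOT Dataiku's API: names, patterns, and pricing below are hypothetical,
# shown only to make the cost/safety guardrail pattern concrete.
import re
from dataclasses import dataclass, field

# Simple examples of sensitive-data patterns a request/response screen might block.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

@dataclass
class GuardedLLM:
    """Wraps a text-completion callable with safety screening and cost tracking."""
    llm: callable                        # e.g. lambda prompt: some_model.generate(prompt)
    price_per_1k_tokens: float = 0.002   # hypothetical flat rate, for illustration only
    total_cost: float = field(default=0.0, init=False)

    def _screen(self, text: str, stage: str) -> None:
        # Block any text containing a sensitive pattern ("Safe Guard"-style check).
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                raise ValueError(f"Blocked {stage}: possible {label} detected")

    def complete(self, prompt: str) -> str:
        self._screen(prompt, "request")              # screen the incoming request
        response = self.llm(prompt)
        self._screen(response, "response")           # screen the model's output
        tokens = (len(prompt) + len(response)) / 4   # rough character-based token estimate
        self.total_cost += tokens / 1000 * self.price_per_1k_tokens  # "Cost Guard"-style ledger
        return response

# Usage with a stand-in model:
guard = GuardedLLM(llm=lambda p: "Echo: " + p)
print(guard.complete("Summarize our Q3 results"))
print(f"Estimated spend so far: ${guard.total_cost:.6f}")
```

In a production platform this screening and accounting would sit centrally (for example, at an API gateway such as the LLM Mesh) rather than in application code, so that policies and spend limits apply uniformly across teams.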
To learn more about Dataiku’s LLM Guard Services, please visit https://www.dataiku.com/.