Tecton Expands Platform to Empower Enterprise Production of LLM Applications
Tecton is introducing a major platform expansion to unlock the full potential of generative AI in enterprise applications, empowering AI teams to build reliable, high-performing systems by infusing LLMs with comprehensive, real-time contextual data.
Generative AI, powered by LLMs, promises to transform business operations with unparalleled automation, personalization, and decision-making capabilities. However, LLMs remain strikingly underutilized in enterprise production environments, according to the company.
"The AI industry is at a crossroads. We've seen the potential of LLMs, but their adoption in enterprise production environments has been stifled by reliability and trust issues," said Mike Del Balso, CEO and co-founder of Tecton. "Our platform expansion represents a paradigm shift in how enterprises can leverage their data to build production AI applications. By focusing on better data rather than bigger models, we're enabling companies to deploy smarter, more resilient AI applications that are customized to their unique business data and can be trusted in mission-critical scenarios."
Tecton enhances retrieval-augmented generation (RAG) applications by integrating comprehensive, real-time data from across the enterprise. This approach augments the retrieved candidates with up-to-date, contextual information, enabling the LLM to make more informed decisions.
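In practice, the pattern can be sketched as follows: retrieved documents and a low-latency feature lookup are merged into a single prompt before the model is called. Every function name, document, and feature key below is an illustrative stand-in, not Tecton's actual API.

```python
from typing import Dict, List

# Hypothetical in-memory stand-ins for a vector store and a feature store.
DOCS = ["Refund policy: refunds within 30 days.", "Shipping: 2-5 business days."]
FEATURES = {"user_42": {"orders_last_7d": 3, "avg_cart_value": 58.20}}

def retrieve_candidates(query: str, k: int = 2) -> List[str]:
    # Stand-in for a vector similarity search over embedded documents.
    return DOCS[:k]

def get_realtime_features(user_id: str) -> Dict[str, float]:
    # Stand-in for a low-latency online feature lookup.
    return FEATURES.get(user_id, {})

def build_prompt(query: str, user_id: str) -> str:
    docs = retrieve_candidates(query)
    feats = get_realtime_features(user_id)
    # Fresh signals are injected alongside the retrieved text, so the LLM
    # reasons over both static knowledge and live user state.
    return (
        f"User signals: {feats}\n"
        "Retrieved context:\n" + "\n".join(docs) +
        f"\n\nQuestion: {query}"
    )

print(build_prompt("Can I return my order?", "user_42"))
```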
The outcome is hyper-personalized, context-aware AI applications capable of delivering accurate, split-second decisions in dynamic environments, the company said.
To help customers build production generative AI applications, Tecton is launching a suite of capabilities including managed embeddings, scalable real-time data integration for LLMs, enterprise-grade dynamic prompt management, and innovative LLM-powered feature generation.
Tecton now offers a comprehensive embeddings solution that generates and manages rich representations of unstructured data to power generative AI applications. The service handles the transformation of text into numerical vectors that capture semantic meaning, enabling a range of downstream AI tasks.
Tecton's comprehensive management of the embeddings lifecycle, from generation to storage and retrieval, dramatically reduces the engineering overhead typically associated with implementing a RAG architecture. As a result, data scientists and ML engineers can shift their focus from infrastructure management to improving model performance, ultimately enhancing productivity and innovation, the company said.
Tecton's embeddings service supports both pre-trained models and custom embedding models, allowing teams to bring their own models or leverage state-of-the-art open-source options. This flexibility enables faster productionization, improved model performance, and optimized costs.
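For illustration, the core text-to-vector step can be sketched with the open-source sentence-transformers library. Tecton's managed service wraps generation, storage, and retrieval around this operation; the model choice and code here are assumptions, not Tecton's implementation.

```python
from sentence_transformers import SentenceTransformer

# A pre-trained open-source embedding model; teams could equally plug in
# a custom model of their own.
model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "Customer reported a billing discrepancy on the March invoice.",
    "Order delivered two days ahead of schedule.",
]

# Each text becomes a fixed-length vector capturing semantic meaning.
vectors = model.encode(texts)
print(vectors.shape)  # (2, 384) for this model
```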
Tecton's new Feature Retrieval API allows developers to provide engineered features for LLMs to access when generating responses. This integration enables LLMs to access real-time or streaming data about user behavior, transactions, and operational metrics, dramatically improving their ability to provide accurate, contextually relevant responses.
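A hedged sketch of the pattern follows. The client calls mirror the general shape of Tecton's Python SDK, but the service name, join key, and exact signatures should be treated as assumptions for illustration.

```python
import tecton

# Look up fresh, engineered features for this user from the online store.
# "user_context" and "user_42" are hypothetical.
fs = tecton.get_feature_service("user_context")
features = fs.get_online_features(join_keys={"user_id": "user_42"}).to_dict()

# Inject the features into the prompt so the LLM answers from live state.
prompt = (
    f"Recent account activity: {features}\n"
    "Given this activity, is the pending transaction consistent "
    "with this user's normal behavior?"
)
# `prompt` is then sent to the LLM of choice.
```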
The API is designed with enterprise security and privacy in mind, ensuring that sensitive data is protected and that only authorized models and agents can access specific data. This allows enterprises to maintain control over their data while still leveraging the power of LLMs.
Tecton's extended declarative framework now incorporates prompt management, introducing standardization, version control, and DevOps best practices to LLM prompts. This advancement tackles a significant challenge in LLM application development: the lack of systematic prompt management, which is crucial for guiding LLM behavior.
Dynamic Prompt Management enables version control, change tracking, and easy rollback of prompts when necessary. This capability drives enterprise-wide standardization of AI practices, accelerating development and ensuring consistency across environments. It facilitates rapid adoption of best practices in prompt engineering, potentially saving hundreds of development hours while significantly reducing compliance risks, according to the vendor.
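As a rough illustration of what versioned prompts buy teams, the sketch below pins a prompt to a specific version and rolls back by requesting an earlier one. It is a generic stand-in, not Tecton's declarative syntax.

```python
# Generic stand-in for versioned prompt management; prompts are
# immutable, named, and versioned, so changes are auditable.
from dataclasses import dataclass

@dataclass(frozen=True)
class Prompt:
    name: str
    version: int
    template: str

REGISTRY = {
    ("support_reply", 1): Prompt("support_reply", 1,
                                 "Answer politely: {question}"),
    ("support_reply", 2): Prompt("support_reply", 2,
                                 "Answer politely, citing policy where relevant: {question}"),
}

def get_prompt(name: str, version: int) -> Prompt:
    # Pinning a version makes behavior reproducible across environments;
    # rolling back is just requesting the earlier version.
    return REGISTRY[(name, version)]

current = get_prompt("support_reply", 2)
print(current.template.format(question="How do I reset my password?"))
```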
Tecton's feature engineering framework now leverages LLMs to extract meaningful information from unstructured text data, transforming it into structured, usable formats and creating novel features that were previously difficult or impossible to generate. These LLM-generated features can enhance traditional ML models, deep learning applications, or enrich context for LLMs themselves.
The framework handles the complexities of working with LLMs at scale, including automatic caching to reduce API calls and associated costs, and rate limiting to ensure compliance with API usage limits. This allows data teams to focus on defining the feature logic rather than worrying about the underlying infrastructure.
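The caching and rate-limiting pattern can be sketched in a few lines of Python; the LLM call is stubbed out and all names are illustrative.

```python
import time
from functools import lru_cache

MIN_INTERVAL = 1.0  # seconds between calls, i.e. at most 1 request/sec
_last_call = 0.0

def _rate_limited_llm(text: str) -> str:
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)  # stay under the provider's usage limit
    _last_call = time.time()
    # Stand-in for a real LLM API call that extracts structured fields.
    return f"sentiment=positive, topic=shipping (from: {text[:30]}...)"

@lru_cache(maxsize=4096)
def extract_features(text: str) -> str:
    # Identical inputs hit the cache instead of re-calling the API,
    # cutting both cost and latency.
    return _rate_limited_llm(text)

print(extract_features("Package arrived early, great service!"))
print(extract_features("Package arrived early, great service!"))  # served from cache
```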
"With this platform expansion, we're not just improving AI performance—we're fundamentally changing how enterprises approach AI development," said Kevin Stumpf, CTO and Co-Founder of Tecton. "By providing a unified framework for both predictive ML and generative AI, we're enabling organizations to leverage their business data to build advanced AI-powered functionality directly into their applications, all fueled by a single data platform."
Tecton's generative AI capabilities are now available in preview.
For more information about this news, visit www.tecton.ai.