
Liquid AI’s Open Source, Small Foundation LFM2 Models Outperform and Outclass Competitors

Liquid AI, the provider of efficient general-purpose AI at every scale, is launching a new class of its Liquid Foundation Models, dubbed LFM2, which sets a new standard in quality, speed, and memory-efficient deployment. Built from structured, adaptive operators, LFM2 delivers more efficient training, faster inference, and better generalization in long-context or resource-constrained environments, according to the company.

LFM2 represents a leap in small foundation model performance, delivering 2x faster decode and prefill performance than Qwen3 on CPU and significantly outperforming rival models in each size class. With these improvements, LFM2 is well-suited to local and edge use cases, and is specifically designed to provide the fastest on-device generative AI (GenAI) experience in the industry, according to Liquid AI.

“At Liquid, we build foundation models that achieve the optimal balance between quality, latency, and memory for specific tasks and hardware requirements. Full control over this balance is critical for deploying best-in-class generative models on any device. This is exactly the type of control our products allow for [the] enterprise,” said Liquid AI.

Some key benchmarks for LFM2 include:

  • 200% higher throughput and lower latency on CPU compared to Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date
  • On average, significantly better performance than models in each size class on instruction-following and function-calling benchmarks
  • 300% improvement in training efficiency compared to previous iterations of LFMs, increasing cost efficiency

“At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,” said Ramin Hasani, co-founder and CEO of Liquid AI. “The LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in the series of powerful models we will be releasing in the coming months.”

Beyond these performance enhancements, Liquid AI has open-sourced LFM2, publicly unveiling the model’s unique architecture. LFM2’s weights can now be downloaded from Hugging Face and are available via Liquid Playground for testing. Liquid AI also intends to integrate the LFM2 models into its Edge AI platform and an iOS-native consumer app for testing.

To learn more about LFM2, please visit https://www.liquid.ai/.
