Powerful intelligence. Everywhere.
Our ultra-efficient multimodal models are turning the promise of an AI-powered world into reality. Optimized for CPUs, GPUs, and NPUs, they enable privacy-, latency-, and security-critical applications everywhere, not just in the cloud.
















Maximum intelligence. Minimum compute.
End-to-end custom AI tailored to your business
We deliver full-scale custom AI solutions tailored to your business's needs, hardware, and data. Our unique process for developing efficient models is rooted in our proprietary device-aware model architecture search, allowing us to quickly deliver the best-fit model for latency-, privacy-, and security-critical needs, whether on device, in the cloud, or hybrid.
Own your moat with specialized AI
Amplify your startup’s growth and create a competitive advantage with customized LFMs. Through our startup program, selected startups gain access to our full stack, along with guidance from our product and engineering teams, to specialize and deploy the best model for your business.
Specialize and deploy LFMs anywhere
Through our developer tools and community, we’re making building, specializing, and deploying highly efficient, powerful AI accessible to everyone, from developers just getting started to experts building at scale.
Efficient AI that delivers. Everywhere you need it.
A fundamentally different architecture built for real-world intelligence
Our Liquid Foundation Models (LFMs) leverage liquid neural networks, inspired by dynamical systems and signal processing, to process complex, sequential, and multimodal data with superior reasoning and flexibility. The result is faster, more efficient AI without compromising capability or performance.

The most efficient models on the market
Our LFMs are purpose-built for efficiency, speed, and real-world deployment on any device. From wearables to robotics, phones, laptops, cars, and more, LFMs run seamlessly on GPUs, CPUs, or NPUs while delivering best-in-class performance. So you get AI that works, everywhere you need it.
Our LFM2 family spans a range of modalities and parameter sizes and is rapidly customizable to deliver AI that’s just right for your use case.

Efficient on-device intelligence for everyone
With LEAP, our developer platform for building, specializing, and deploying on-device AI, and Apollo, a lightweight app for vibe-checking small language models directly on your phone, we’re making on-device AI accessible to everyone, from beginners to experts.
LiquidAI/LFM2-1.2B
LiquidAI/LFM2-Audio-1.5B
LiquidAI/LFM2-350M
LiquidAI/LFM2-VL-1.6B
LiquidAI/LFM2-700M
LiquidAI/LFM2-VL-450M
LiquidAI/LFM2-350M-ENJP-MT
LiquidAI/LFM2-1.2B-RAG
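
The LFM2 checkpoints listed above are published on Hugging Face. As a minimal sketch of getting started (assuming a recent transformers release with LFM2 support and the standard causal-LM interface; the model ID, prompt, and generation settings are illustrative only), loading and prompting one of them looks roughly like this:

```python
# Minimal sketch: load an LFM2 checkpoint from Hugging Face and generate text.
# Assumes a recent `transformers` release with LFM2 support; the model ID,
# chat-template usage, and generation settings below are examples, not a
# definitive recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # any of the LFM2 checkpoints listed above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize on-device AI in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a short completion and decode it back to text.
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```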
The future of industry, powered by Liquid.
Join us in shaping the next generation of AI.





























































