AI & Compute

Liquid AI’s LFM2-VL gives smartphones small AI vision models

Last updated: August 17, 2025 9:28 am
Published August 17, 2025


Liquid AI has released LFM2-VL, a new generation of vision-language foundation models designed for efficient deployment across a wide range of hardware, from smartphones and laptops to wearables and embedded systems.

The models promise low-latency performance, strong accuracy, and flexibility for real-world applications.

LFM2-VL builds on the company's existing LFM2 architecture, released just over a month ago. The company says LFM2 offers the "fastest on-device foundation models on the market" thanks to its approach of generating "weights," or model settings, on the fly for each input (known as a linear input-varying (LIV) system). LFM2-VL extends this into multimodal processing, supporting both text and image inputs at variable resolutions.

According to Liquid AI, the models deliver up to twice the GPU inference speed of comparable vision-language models while maintaining competitive performance on common benchmarks.
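As a rough illustration of the LIV idea, the sketch below applies a weight matrix that is generated from the current input rather than stored as a fixed parameter. The generator functions are toy placeholders for illustration only, not Liquid AI's actual implementation.

```python
# Minimal sketch of a linear input-varying (LIV) layer: the weight
# matrix and bias are produced on the fly for each input, instead of
# being fixed learned parameters. Shapes and generators are hypothetical.

def liv_layer(x, gen_w, gen_b):
    """Apply a weight matrix generated from this specific input."""
    W = gen_w(x)  # input-dependent weights
    b = gen_b(x)  # input-dependent bias
    return [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(W))]

# Toy generators: the weights scale with the input's mean activation,
# so the same layer behaves differently for different inputs.
def gen_w(x):
    m = sum(x) / len(x)
    return [[m if i == j else 0.0 for j in range(len(x))] for i in range(2)]

def gen_b(x):
    return [0.0, 0.0]

print(liv_layer([1.0, 3.0], gen_w, gen_b))  # [2.0, 6.0]
```

The point of the technique is that the effective transformation adapts per input at inference time, rather than being frozen after training.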




"Efficiency is our product," Liquid AI co-founder and CEO Ramin Hasani said in a post on X announcing the new model family:

meet LFM2-VL: an efficient Liquid vision-language model for the device class. open weights, 440M & 1.6B, up to 2× faster on GPU with competitive accuracy, native 512×512, smart patching for big images.

efficiency is our product @LiquidAI_

download them on @huggingface:… pic.twitter.com/3Lze6Hc6Ys

— Ramin Hasani (@ramin_m_h) August 12, 2025

Two variants for different needs

The release includes two model sizes:

  • LFM2-VL-450M — A hyper-efficient model with fewer than half a billion parameters (internal settings), aimed at highly resource-constrained environments.
  • LFM2-VL-1.6B — A more capable model that remains lightweight enough for single-GPU and on-device deployment.

Both variants process images at native resolutions up to 512×512 pixels, avoiding distortion or unnecessary upscaling.

For larger images, the system applies non-overlapping patching and adds a thumbnail for global context, enabling the model to capture both fine detail and the broader scene.
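The patching scheme described above can be sketched as a simple tiling computation. The grid math below assumes axis-aligned 512×512 tiles (the article's stated native resolution) and is an illustration, not Liquid AI's exact algorithm.

```python
# Sketch of non-overlapping patching for images larger than the native
# 512×512 resolution: tile the image into a patch grid and keep one
# downscaled thumbnail for global context.
import math

PATCH = 512  # native resolution from the article

def patch_grid(width, height, patch=PATCH):
    """Return (cols, rows) of non-overlapping patches covering the image."""
    return math.ceil(width / patch), math.ceil(height / patch)

cols, rows = patch_grid(1024, 768)
print(cols, rows)                  # 2 2
print(cols * rows + 1)             # 5: four 512×512 patches + 1 thumbnail
```

A 512×512 input needs no tiling at all (a 1×1 grid), which is why images at or below the native resolution avoid any upscaling or distortion.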

Background on Liquid AI

Liquid AI was founded by former researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) with the goal of building AI architectures that move beyond the widely used transformer model.

The company's flagship innovation, the Liquid Foundation Models (LFMs), are based on principles from dynamical systems, signal processing, and numerical linear algebra, producing general-purpose AI models capable of handling text, video, audio, time series, and other sequential data.

Unlike traditional architectures, Liquid's approach aims to deliver competitive or superior performance using significantly fewer computational resources, allowing for real-time adaptability during inference while maintaining low memory requirements. This makes LFMs well suited for both large-scale enterprise use cases and resource-limited edge deployments.

In July, the company expanded its platform strategy with the launch of the Liquid Edge AI Platform (LEAP), a cross-platform SDK designed to make it easier for developers to run small language models directly on mobile and embedded devices.

LEAP offers OS-agnostic support for iOS and Android, integration with both Liquid's models and other open-source SLMs, and a built-in library with models as small as 300MB, small enough for modern phones with minimal RAM.

Its companion app, Apollo, enables developers to test models entirely offline, in line with Liquid AI's emphasis on privacy-preserving, low-latency AI. Together, LEAP and Apollo reflect the company's commitment to decentralizing AI execution, reducing reliance on cloud infrastructure, and enabling developers to build optimized, task-specific models for real-world environments.


Speed/quality trade-offs and technical design

LFM2-VL uses a modular architecture combining a language model backbone, a SigLIP2 NaFlex vision encoder, and a multimodal projector.

The projector includes a two-layer MLP connector with pixel unshuffle, reducing the number of image tokens and improving throughput.

Users can adjust parameters such as the maximum number of image tokens or patches, allowing them to balance speed and quality depending on the deployment scenario. The training process involved roughly 100 billion multimodal tokens, sourced from open datasets and in-house synthetic data.
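Pixel unshuffle (also called space-to-depth) is a standard operation: it folds spatially adjacent features into the channel dimension, so a factor of r cuts the token count by r². A minimal sketch follows, with r=2 assumed for illustration; the article does not state the factor Liquid AI uses.

```python
# Sketch of pixel unshuffle (space-to-depth): each r×r block of feature
# vectors is concatenated into one vector, so an H×W token grid becomes
# (H/r)×(W/r), with r*r fewer tokens but r*r more channels per token.

def pixel_unshuffle(grid, r=2):
    """grid: H×W list of feature vectors → (H//r)×(W//r) grid of
    concatenated vectors. Assumes H and W are divisible by r."""
    H, W = len(grid), len(grid[0])
    out = []
    for i in range(0, H, r):
        row = []
        for j in range(0, W, r):
            merged = []
            for di in range(r):
                for dj in range(r):
                    merged.extend(grid[i + di][j + dj])
            row.append(merged)
        out.append(row)
    return out

# A 4×4 grid of 1-d features becomes a 2×2 grid of 4-d features:
feats = [[[float(i * 4 + j)] for j in range(4)] for i in range(4)]
out = pixel_unshuffle(feats)
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 4
```

Fewer image tokens entering the MLP connector means less work for the language backbone per image, which is where the throughput gain comes from.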

Efficiency and benchmarks

The models achieve competitive benchmark results across a range of vision-language evaluations. LFM2-VL-1.6B scores well on RealWorldQA (65.23), InfoVQA (58.68), and OCRBench (742), and maintains solid results on multimodal reasoning tasks.

In inference testing, LFM2-VL achieved the fastest GPU processing times in its class when tested on a standard workload of a 1024×1024 image and a short prompt.

Licensing and availability

LFM2-VL models are available now on Hugging Face, along with example fine-tuning code in Colab. They are compatible with Hugging Face Transformers and TRL.

The models are released under a custom "LFM1.0 license." Liquid AI has described the license as based on Apache 2.0 principles, but its full text has not yet been published.

The company has indicated that commercial use will be permitted under certain conditions, with different terms for companies above and below $10 million in annual revenue.

With LFM2-VL, Liquid AI aims to make high-performance multimodal AI more accessible for on-device and resource-limited deployments, without sacrificing capability.

