Lenovo unveiled the ThinkEdge SE100, the first compact, entry-level AI inferencing server designed for edge computing, making AI accessible and affordable for businesses of all sizes.
The ThinkEdge SE100 is 85% smaller than traditional servers, GPU-ready, and delivers high-performance, low-latency AI capabilities for real-time tasks such as video analytics and object detection.
It serves a range of industries, including retail, manufacturing, healthcare, and energy, with applications such as inventory management, quality assurance, and process automation. The server is adaptable, scalable, and energy-efficient, consuming under 140W even in its fullest configuration and reducing carbon emissions by up to 84%.
Lenovo’s Open Cloud Automation (LOC-A) simplifies deployment, cutting costs by up to 47% and saving up to 60% in resources and time.
“Lenovo is committed to bringing AI-powered innovation to everyone with continued innovation that simplifies deployment and speeds the time to results,” says Scott Tease, Vice President of Lenovo Infrastructure Solutions Group, Products. “The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the edge.”
Enhanced security features, such as tamper protection and disk encryption, ensure data safety in real-world environments.
The ThinkEdge SE100 is part of Lenovo’s broader hybrid AI portfolio, which includes sustainable and scalable solutions to bring AI to the edge.
Lenovo continues to lead in edge computing, with over one million edge systems shipped globally and 13 consecutive quarters of growth in edge revenue.
This innovation reinforces the growing trend of AI-driven edge computing, in which low-latency, high-performance inferencing can run closer to data sources, reducing costs and accelerating insights across diverse, distributed environments.