Hyperconnected cloud company Zenlayer recently launched "Distributed Inference," a global AI inference platform for high-performance processing, at Tech Week in Singapore.
Built on Zenlayer's global edge infrastructure, the platform addresses pain points such as wasted GPU capacity, unbalanced loads, and high latency.
It incorporates optimizations for scheduling, routing, networking, and memory management that improve edge AI performance and simplify AI deployment.
"Inference is where AI delivers real value, but it's also where efficiency and performance challenges become increasingly visible," says Joe Zhu, founder and CEO of Zenlayer. "By combining our hyperconnected infrastructure with distributed inference technology, we're making it possible for AI providers and enterprises to deploy and scale models instantly, globally, and cost-effectively."
The platform includes elastic GPU access, automatic orchestration across 300+ points of presence (PoPs), and a private backbone that reduces latency by up to 40%. It also supports a variety of AI model types out of the box and provides real-time monitoring, making global scaling easier.
Zenlayer is working toward enabling real-time AI worldwide, letting businesses focus on innovation rather than deployment complexity.
Zenlayer operates more than 300 edge nodes in over 50 countries, serving over 85% of the world's internet users with less than 25 ms latency over a fully redundant global network. The company is further expanding its AI-ready service offerings to accelerate the full potential of AI at the edge.
