Scality and WEKA have launched a joint solution that combines WEKA's NeuralMesh with Scality RING to support enterprise data storage requirements. The integration is designed to deliver high-performance storage alongside scalable object-based capacity for AI and high-performance computing (HPC) workloads.
At the centre of the solution is Scality's lightweight object connector for NeuralMesh, which enables integration between the two platforms. Compared with traditional S3 interfaces, Scality reports that its object connector achieved up to 10x higher performance on comparable hardware in testing, along with up to 20% lower infrastructure costs. The goal is to allow organisations to scale AI data pipelines while managing infrastructure and operational overhead.
The solution architecture keeps active and new data on WEKA's flash-based NeuralMesh tier, while moving older or less frequently accessed data to Scality's object storage. This approach is intended to reduce reliance on all-flash deployments while maintaining performance for active workloads and long-term storage for larger datasets.
NeuralMesh is positioned to improve GPU utilisation, accelerate time to first token, and supply AI pipelines with high-speed data access.
Scality RING is designed to scale to exabyte capacity and provide durability of up to 14 nines. The object connector provides a tier for scalable storage with reported cost advantages compared with traditional object storage approaches.
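To put "14 nines" in perspective, it is commonly read as an annual per-object loss probability of 1e-14. The arithmetic below illustrates what that implies at scale; the fleet size is an assumption for illustration, not a Scality figure.

```python
# Illustrative arithmetic only: "fourteen nines" of durability read as an
# annual per-object loss probability of 1e-14. The object count is a
# hypothetical example, not a vendor-supplied number.
loss_prob = 1e-14                # 1 - 0.99999999999999 (fourteen nines)
objects = 10**12                 # hypothetical fleet of one trillion objects
expected_losses_per_year = objects * loss_prob
print(expected_losses_per_year)  # roughly 0.01: about one object per century
```

The point of the exercise is that at this durability level, expected data loss stays negligible even for exabyte-scale fleets.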
The integration is validated for interoperability and does not require additional engineering work to implement. Data tiering occurs between the flash-based performance layer and the object-based capacity layer.
The connector is presented as an alternative to community-driven object storage systems, offering enterprise support alongside performance and reliability.
Through this collaboration, the combined solution brings together high-performance storage and scalable object capacity to support AI and HPC environments while aiming to reduce infrastructure costs and simplify management.
