AI deployments use multiple storage layers, each with very different requirements, says Dell’Oro’s Fung. For storing vast amounts of unstructured, raw data, cold storage on HDDs makes more sense, he says. SSDs make sense for warm storage, such as for pre-processing data and for post-training and inference. “There’s a place for each type of storage,” he says.
Planning ahead
According to Constellation’s Mehta, data center managers and other storage buyers should prepare by treating SSD procurement the way they treat GPUs. “Multi-source, lock in lanes early, and engineer to standards so vendor swaps don’t break your data path.” He recommends qualifying at least two vendors for both QLC and TLC, and starting early.
TrendForce’s Ao agrees. “It’s better to build inventory now,” he says. “It’s difficult to lock in long-term deals with suppliers now due to tight supply in 2026.”
Based on supplier availability, Kioxia, SanDisk, and Micron are in the best position to support 128-terabyte QLC enterprise SSD solutions, Ao says. “But in the long run, some module houses may be able to provide similar solutions at a lower cost,” Ao adds. “We’re seeing more module houses, such as Phison and Pure Storage, supporting these solutions.”
And it’s not just SSDs for fast storage and HDDs for slow storage. Memory solutions are becoming more complex in the AI era, says Ao. “For enterprise players with smaller-scale business models, it is important to keep an eye on Z-NAND and XL-Flash for AI inference demand,” he says.
These are memory technologies that sit somewhere between SSDs and RAM working memory. “These solutions will be cheaper compared to HBM or even HBF [high bandwidth flash],” he says.
