According to Broadcom, a single Jericho4 system can scale to 36,000 HyperPorts, each operating at 3.2 Tbps, with deep buffering, line-rate MACsec encryption, and RoCE transport over distances greater than 100 kilometers.
HBM powers distributed AI
Improving on earlier designs, Jericho4’s use of HBM significantly increases total memory capacity and reduces the power consumed by the memory I/O interface, enabling faster data processing than traditional buffering approaches, according to Lian Jie Su, chief analyst at Omdia.
While this raises costs for data center interconnects, Su said higher-speed data processing and transfer can remove bottlenecks and improve AI workload distribution, increasing utilization of data centers across multiple regions.
“Jericho4 is very different from Jericho3,” Su said. “Jericho4 is designed for long-haul interconnect, while Jericho3 focuses on interconnect within the same data center. As enterprises and cloud service providers roll out more AI data centers across different regions, they need stable interconnects to distribute AI workloads in a highly flexible and reliable manner.”
Others pointed out that Jericho4, built on Taiwan Semiconductor Manufacturing Company’s (TSMC) 3-nanometer process, increases transistor density to support more ports, integrated memory, and greater power efficiency, features that may be essential for handling large AI workloads.
“It enables unprecedented scalability, making it ideal for coordinating distributed AI processing across expansive GPU farms,” said Manish Rawat, semiconductor analyst at TechInsights. “Integrated HBM facilitates real-time, localized congestion management, removing the need for complex signaling across nodes during high-traffic AI operations. Enhanced on-chip encryption ensures secure inter-data center traffic without compromising performance.”
