Key to the new offering is the F5 AI Gateway, a container or standalone package that runs alongside an AI application or large language model (LLM) and manages, orchestrates, and secures the interaction between various AI services, data sources, and user-facing applications. It handles traffic management and communication between on-premises systems, cloud-based AI models, edge devices, and APIs.
The F5 AI Gateway offers an automated way for customers to secure and manage interactions among AI applications, APIs, and LLMs. It is a containerized Kubernetes service that can be deployed on its own or integrated with existing F5 software, hardware, or services, the company stated. The gateway supports popular AI models such as OpenAI, Anthropic, and Ollama, as well as generic HTTP upstream LLM and small language model (SLM) services.
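As a containerized Kubernetes service, the gateway would typically be deployed with a standard manifest. The sketch below is purely illustrative: the image name, labels, and port are hypothetical placeholders, not F5's published packaging (F5 documents the actual chart and image names for supported deployments).

```yaml
# Hypothetical sketch of deploying a containerized gateway on Kubernetes.
# Image name, labels, and port are illustrative placeholders only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-gateway
  labels:
    app: ai-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ai-gateway
  template:
    metadata:
      labels:
        app: ai-gateway
    spec:
      containers:
        - name: ai-gateway
          image: example.registry/ai-gateway:latest  # placeholder image
          ports:
            - containerPort: 8080  # placeholder listener port
```

A Service and routing configuration pointing upstream LLM traffic at these pods would complete the picture; the specifics depend on the chosen models and F5's own documentation.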
F5 has also added an AI assistant for its NGINX One SaaS-based application management console. The tool is powered by the F5 AI Data Fabric and serves as an intelligent partner to stretched NetOps, SecOps, DevOps, and platform ops teams. The AI assistant for NGINX One uses a natural language interface to streamline operations and helps customers configure and optimize application delivery, preemptively address threats, and identify anomalies before they impact production, according to F5.
F5 said it will add an AI assistant for its BIG-IP application delivery system. Customers will be able to automate the creation, maintenance, and optimization of iRules while reducing the time and resources required to manage traffic and securely deliver apps, F5 stated.
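For context on what such an assistant would be generating: iRules are event-driven TCL scripts that BIG-IP runs against traffic. A minimal hand-written example, with a hypothetical pool name, looks like this (this is standard iRule syntax, not output from F5's assistant):

```tcl
# Route API traffic to a dedicated pool; send everything else to the default.
# "api_pool" is a hypothetical pool name for illustration.
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/api" } {
        pool api_pool
    }
}
```

Automating the creation and upkeep of scripts like this, rather than writing them by hand, is the time savings F5 is pointing to.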
In addition to the software, F5 has expanded its Velos hardware family with a new CX1610 chassis and BX520 blade. The Velos CX1610 chassis and BX520 400-Gbps blade scale to multiple terabits of throughput. The F5 Velos features data packet routing, granular security, load balancing, and low latency, supporting the data ingestion and real-time data needs of AI workloads, F5 stated.
More F5 news: