Rockport Networks, rebranded as Cerio, is to roll out composable datacenter IT infrastructure that goes beyond a single PCIe domain.
Startup Rockport developed switchless, flat network technology for interconnecting servers, storage, and clients, with each endpoint effectively acting as its own switch. It announced initial product availability in October last year when it came out of stealth mode; Phil Harris had become CEO that June as investors put more money into the business. Now he has rebranded the company and its technology as Cerio and says it delivers new scale economics for AI and cloud datacenters.
Harris said: “Every major inflection point in computing is driven by the need for better economics, a better operational model – and now, greater sustainability. For the past couple of decades, we’ve been limited in how we build systems. No longer bound by a single PCIe domain, our customers can compose resources from anywhere in the datacenter to any system.”
The idea is that IT resources such as CPUs, GPUs, NVMe storage drives, DPUs, FPGAs, and other accelerators can be composed into workload-dedicated systems, with the right amount of each resource applied to the workload and then returned to a resource pool for reallocation when the workload completes. Composability has long been sought after, with large-scale datacenters the main target, but it has not gone mainstream despite the efforts of HPE with Synergy, Dell EMC with its MX7000, and startups such as Liqid and GigaIO in the hyperscale and HPC markets.
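The compose-and-return lifecycle described above can be pictured as a simple pool that lends out resources and takes them back. This is a hypothetical sketch of the model, not Cerio's API; the class and method names are invented for illustration:

```python
from collections import Counter

class ResourcePool:
    """Toy model of a composable-infrastructure resource pool."""

    def __init__(self, inventory):
        # e.g. {"cpu": 32, "gpu": 16, "nvme": 64} free devices
        self.free = Counter(inventory)

    def compose(self, request):
        """Dedicate resources to a workload; fail if the pool can't cover it."""
        if any(self.free[kind] < n for kind, n in request.items()):
            raise RuntimeError("insufficient free resources")
        for kind, n in request.items():
            self.free[kind] -= n
        return dict(request)  # handle to the composed system

    def release(self, system):
        """Workload finished: return its resources to the pool."""
        for kind, n in system.items():
            self.free[kind] += n

pool = ResourcePool({"cpu": 32, "gpu": 16, "nvme": 64})
job = pool.compose({"gpu": 8, "nvme": 16})  # compose for one workload
print(pool.free["gpu"])                     # -> 8 while the job runs
pool.release(job)                           # job done, resources freed
print(pool.free["gpu"])                     # -> 16, back in the pool
```

The point of the model is that idle time shrinks: resources spend less time stranded inside fixed server configurations and more time allocated to whichever workload needs them.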
Other startups, such as DriveScale and Fungible, sold product only briefly before being acquired, with their technology withdrawn from general availability.
Cerio is hoping its technology will appeal to hyperscale datacenter operators as it enables them to decrease resource idle time.
Harris said: “Pre-orders of the Cerio platform from hyperscaler, cloud service provider, enterprise and government organizations are a clear signal of the demand for a fundamentally new system architecture that is more commercially and environmentally sustainable.”
Cerio is working with early access customers in North America, Europe, and Asia-Pacific on the implementation of scalable GPU capacity and storage agility use cases.
Its technology is based on high-radix distributed switching, advanced multipathing, and intelligent system adaptation of protocols across low-diameter network topologies, ones in which traffic crosses fewer intermediate devices, such as routers and switches, than in a conventional design. Multipathing keeps traffic flows across the network separate so they do not interfere with one another. Radix is the number of I/O ports on a network device such as a switch or router.
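Why a higher radix yields a lower-diameter network can be seen from the Moore bound in graph theory, which caps how many nodes a topology of a given radix and diameter can contain. This is a general illustration of the trade-off, not a description of Cerio's actual topology:

```python
def moore_bound(radix, diameter):
    """Upper bound on node count for a given per-node radix and network diameter."""
    total, frontier = 1, radix
    for _ in range(diameter):
        total += frontier          # nodes reachable at this hop distance
        frontier *= radix - 1      # each new node fans out on its remaining ports
    return total

def min_diameter(radix, nodes):
    """Smallest diameter that could possibly connect `nodes` endpoints."""
    d = 0
    while moore_bound(radix, d) < nodes:
        d += 1
    return d

# Higher radix means fewer hops for the same endpoint count:
print(min_diameter(8, 4096))   # -> 5 hops with 8 ports per device
print(min_diameter(32, 4096))  # -> 3 hops with 32 ports per device
```

Fewer hops means lower latency and fewer devices for traffic to contend at, which is why high-radix, low-diameter designs are attractive for distributed fabrics.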
Dr Ryan Grant, assistant professor in the Department of Electrical and Computer Engineering at Queen’s University, Kingston, Ontario, said: “The Cerio platform is driving groundbreaking research into AI acceleration, to optimize the flow of data on a per-application basis. We’re using the unique multipathing capabilities of the Cerio fabric to optimize the precise calibrations of GPU selection, density and communications that will make traffic flows highly efficient and responsive in distributed, heterogeneous systems.” His research involves collaboration with Cerio. Matt Williams, Cerio CTO, said: “The work we’re doing with Dr Grant and his team will help us calibrate the per-workload optimizations that will make traffic flows highly responsive for complex AI, machine learning and deep learning applications.”
Grant says Cerio’s technology decouples PCIe from the underlying fabric. PCIe is a bus, not a network fabric, which imposes fundamental scale limits: “PCIe decoupling in the Cerio platform makes it possible to extend PCIe beyond the compute node – and the compute rack – to provide configurable, efficient row-scale computing that changes the economics of the datacenter.”
A Cerio white paper, Beyond the Rack: Optimizing Open Infrastructure for AI, is available here (registration required). It explains how decoupling PCIe from the underlying fabric overcomes the issues of using native PCIe in a large distributed system.