Liqid has announced general availability of its ThinkTank composable disaggregated infrastructure platform, which it says will restore the balance between expensive storage and even more expensive GPUs.
The company’s take is that while GPUs are essential to AI and other challenging modern workloads, GPU-packed servers aren’t always the most flexible of beasts. Operators can find they quickly hit a limit on how far they can scale up the number of GPUs in a system, while storage and IO shortcomings can leave the GPUs they do have underutilized.
The ThinkTank boxes feature Liqid’s Matrix CDI software which enables users to “quickly compose precise resource amounts into host servers, and move underutilized resources to other servers to satisfy changing workload needs.”
They also include the vendor's own PCIe fabric technology and its Honey Badger storage cards, which offer up to 4 million IOPS and 24GB/s throughput, along with compute nodes and Ethernet and InfiniBand support.
The boxes range from the X1, which packs four GPUs, through eight- and 16-GPU versions, to the 32-GPU ThinkTank XR. A single-unit ThinkTank Flex is billed as a “bring your own hardware” option. The boxes support Nvidia or AMD GPUs, with a maximum of 20 per compute node, as well as those vendors’ respective AI stacks and I/O. The X4 carries 15 Liqid NVMe flash storage drives, with the XR topping out at 120TB.
Existing architectures can leave GPUs with utilization rates of 12-15 percent, director of product marketing George Wagner claimed, leaving AI and HPC workloads hamstrung. Liqid’s composability approach means the customer can say, “I want this host with this many CPUs, I need this many GPUs, I need this much storage, go make it happen.”
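That “go make it happen” model can be pictured as carving precise amounts of GPU and storage out of shared pools and binding them to a host, then returning them when the workload changes. The sketch below is purely illustrative: the `Pool`/`compose`/`release` names are hypothetical and are not Liqid’s actual Matrix API.

```python
# Hypothetical sketch of composable disaggregated infrastructure:
# resources live in a shared pool, get composed into hosts on demand,
# and are released back for reuse. Not Liqid's real interface.
from dataclasses import dataclass


@dataclass
class Pool:
    gpus: int
    nvme_tb: int


@dataclass
class ComposedHost:
    name: str
    gpus: int
    nvme_tb: int


def compose(pool: Pool, name: str, gpus: int, nvme_tb: int) -> ComposedHost:
    """Reserve the requested resources from the shared pool for one host."""
    if gpus > pool.gpus or nvme_tb > pool.nvme_tb:
        raise ValueError("insufficient free resources in pool")
    pool.gpus -= gpus
    pool.nvme_tb -= nvme_tb
    return ComposedHost(name, gpus, nvme_tb)


def release(pool: Pool, host: ComposedHost) -> None:
    """Return a host's resources to the pool for other workloads."""
    pool.gpus += host.gpus
    pool.nvme_tb += host.nvme_tb


# Roughly a ThinkTank XR's worth of pooled hardware (32 GPUs, 120TB flash)
pool = Pool(gpus=32, nvme_tb=120)
train = compose(pool, "train-node", gpus=16, nvme_tb=60)
print(pool.gpus)   # 16 GPUs still free for other hosts
release(pool, train)
print(pool.gpus)   # back to 32 once the workload is done
```

The point of the pattern is the second half: instead of GPUs sitting idle inside a fixed server, unused resources flow back to the pool and can be recomposed into whichever host needs them next.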
Ben Bolles, exec director of product management, added: “By hosting the high-performance storage on the PCIe fabric, you then can solve the challenge that customers run into when they’re trying to feed all the data to the GPUs and the accelerators.”
The system includes Liqid’s ioDirect technology, which Wagner said “allows GPU to talk to GPU and bypass the CPU altogether, as well as allow our storage to communicate with GPU and bypass the CPU as well.” It also includes support for Nvidia’s NVLink bridge and AMD’s Infinity Fabric.
The announcement comes a few months after Liqid snagged $100m of fourth-round funding, apparently pitching it into unicorn territory. Customers to date include the US Army Corps of Engineers and the University of Illinois’ Electronic Visualization Lab.