PEAK:AIO provides HPC-level performance for AI with NAS simplicity and low cost

UK startup PEAK:AIO has rewritten parts of the NFS stack and Linux RAID code to get a small 1RU server with a PCIe 5 bus sending 80GB/sec of data to a single GPU client server for AI processing.

Three such low-cost servers would together send 240GB/sec to the GPU server – faster than high-end storage arrays using GPUDirect, which pump data to an Nvidia DGX A100 GPU server at 152 to 191GB/sec. PEAK:AIO provides storage to the AI market and has a user base ranging from startups to high-profile projects, including the UK NHS AI deployment trials. Its software runs on servers from vendors such as Dell EMC, Supermicro and Gigabyte, and will be distributed by PNY Technologies.

Mark Klarzynski.

PEAK:AIO co-founder and CEO Mark Klarzynski told us: “Storage vendors are not understanding the AI market and its particular use of data.”

He said many high-end GPUDirect-using arrays were not selling – they were too expensive – and that some suppliers had discounted their kit by more than 90 percent to win the very systems featured in their case studies.

He said AI customers want historical data fed fast and affordably to GPU servers. High-end storage has the speed and the features, but also very high prices. Klarzynski said a GPU server could cost $200,000, with storage array suppliers proposing arrays costing a million dollars – storage that is disproportionately expensive.

These AI-using customers don’t need high-end storage array services such as deduplication, snapshots, Fibre Channel and iSCSI. CTO Eyal Lemberger said: “You don’t need snapshots for versioning in this field.”

Stripping these services out of the arrays wouldn’t make them go that much faster, because their software is inefficient. 

Instead, you could take a COTS server with two 8-core CPUs – ordinarily capable of putting out, say, 4GB/sec – give it the same GPUDirect support, RDMA for NFS and a PCIe 5 bus, and rewrite parts of the basic NFS and Linux RAID software to turn that 2RU server into a near HPC-class data pump. That’s what PEAK:AIO has done.
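PEAK:AIO’s own NFS and RDMA changes are proprietary, but the building blocks it starts from are standard Linux. As an illustrative sketch only – the export path, server name and mount point below are placeholders, and RDMA-capable NICs plus the relevant kernel modules are assumed – exporting an NVMe-backed filesystem over NFS with the RDMA transport looks like this:

```shell
# Server side: enable NFS over RDMA on the standard port (20049)
# and export an NVMe-backed filesystem (assumes svcrdma is loaded)
echo "rdma 20049" > /proc/fs/nfsd/portlist
echo "/mnt/nvme *(rw,async,no_root_squash)" >> /etc/exports
exportfs -ra

# Client side (e.g. the GPU server): mount over RDMA rather than TCP
mount -t nfs -o rdma,port=20049,vers=3 peakaio-server:/mnt/nvme /mnt/data
```

The stock path above is the baseline; PEAK:AIO’s claim is that rewriting parts of this stack, rather than stripping features from a high-end array, is what yields the headline throughput.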

The result is a 2U software-defined storage appliance powered by PeakIO and providing up to 367TB of NVMe-over-Fabrics storage. The entry-level configuration of four SSDs delivers over 2 million IOPS and 12GB/sec of random reads at less than 90 microseconds of latency. The system scales up to 24 SSDs, providing more than 23GB/sec from a single enclosure, and can be extended with additional units.

Most AI-using customers are not storage admins. They don’t want anything complicated and they don’t scale that much. “Ninety percent of GPU servers are in clusters of less than five GPUs. Nvidia SuperPod sales are very low.”

Klarzynski said: “You can start from 50TB and scale to a few hundred terabytes. … We’re not trying to enter the SuperPod world. … AI is less about big data and more about good data. It’s less about super-large CPU clusters and more about smaller clusters of GPU supercomputers. Why would we expect storage designed for the world of big data, HPC or enterprise workloads to work perfectly for the new world of AI?” 

The customers don’t want block IO, such as that delivered by NVMe-oF, and they are not used to complicated parallel file systems. They need a straightforward file system – basic NFS, as used by VAST Data. “In fairness, VAST,” Klarzynski said, “got it right.”

PEAK:AIO is a software company. Klarzynski explained: “We’ve rewritten parts of the NFS storage stack and the way it interoperates with GPUDirect. We rewrote Linux’s internal RAID for more performance,” achieving a 400 percent speed improvement for RAID6 writes and 200 percent for reads.
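PEAK:AIO’s md-layer rewrite is not public, but the stock baseline those percentage gains are measured against can be reproduced with standard tools. A hedged sketch – device names and job parameters are illustrative, and four spare NVMe drives plus root privileges are assumed – of building a vanilla Linux RAID6 set and measuring its write throughput with fio:

```shell
# Build a stock Linux (md) RAID6 array from four NVMe drives
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Measure sequential write throughput against the raw md device --
# the baseline figure a rewritten RAID layer would be compared to
fio --name=raid6-write --filename=/dev/md0 --rw=write --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=30 --time_based
```

RAID6 writes are a natural target for optimization because stock md must read, recompute and rewrite two parity blocks per stripe update, which is where a rewritten implementation has room to claim large gains.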

PEAK:AIO diagram.

The power of PEAK:AIO’s software development is impressive. The latest software release doubled data output speed from 40GB/sec to 80GB/sec.

It developed RDMA multipath tools for NFS v4, with a kernel focus, and for NFS v3 – its preferred version for AI performance. A test in Dell’s labs confirmed the v1 software’s 40GB/sec figure: full wire speed, achieved by aggregating multiple links.
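Aggregating links under a single NFS mount maps onto features available in mainline Linux clients, which gives a sense of what PEAK:AIO’s multipath tooling builds on. As a sketch under stated assumptions – the server name and paths are placeholders, and option support varies by kernel version – a client can spread one NFS v3 mount across several transport connections:

```shell
# Spread a single NFS v3 mount across 8 connections (nconnect, kernel 5.3+);
# on recent kernels this can be combined with the RDMA transport so one
# mount aggregates bandwidth across multiple NIC links
mount -t nfs -o vers=3,rdma,port=20049,nconnect=8 \
      peakaio-server:/mnt/nvme /mnt/data
```

Stock nconnect multiplexes over paths the kernel chooses; a vendor multipath layer can go further by steering traffic across distinct NICs deliberately, which is presumably where the “full wire speed” aggregation comes from.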

PEAK:AIO developed its v1 product last year. Klarzynski is proud of that: “It took off. We sold a fair portion.” He said PEAK:AIO was pretty much self-funded – there is no multi-million dollar VC funding. Now it has v2 software, making it an AI Data Server, and wants to raise its profile.

PEAK:AIO AI Data Server feature list.

A Dell Validated Design for AI – built in collaboration with PEAK:AIO and Nvidia – delivers an AI Data Server designed for mainstream AI projects, providing realistic capacity levels and ultra-fast performance. It uses Dell PowerEdge servers and Nvidia GPUDirect to create a central pool of shared low-latency NVMe storage.

The latest version of PEAK:AIO’s AI Data Server will be publicly available in Q2. PEAK:AIO systems start at under $8,000 and scale from 50TB to 700TB of NVMe flash per single node. They are currently available to resellers for beta testing.


PEAK:AIO was founded in 2019 by Mark Klarzynski. He led the evolution of software-defined storage (SDS), pioneering the development of an iSCSI, Fibre Channel and InfiniBand SDS framework (SCST) that is still used and licensed today by leading storage vendors. He went on to develop early all-flash storage arrays in partnership with vendors such as FusionIO – for example, working on FusionIO’s ION Data Accelerator product.