Cloudian does a fast object storage speedrun with AMD CPUs and Micron flash

Object storage supplier Cloudian has managed to wring 17.7GBps writes and 25GBps reads from a six-node all-flash cluster in a recent benchmark.

Cloudian said these are real-world results, generated with GOSBENCH, an industry-standard benchmark that simulates real-life workloads, rather than an in-house benchmark tool. The servers used were single-processor nodes, each with a single, non-bonded 100Gbps Ethernet network card and four Micron 6500 ION NVMe drives.
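To give a feel for what such a test measures, here is a minimal sketch of timing S3 PUT and GET throughput against an S3-compatible endpoint using boto3. The endpoint URL, credentials, and bucket name are placeholders, and a single-threaded script like this is nowhere near a multi-client GOSBENCH run; it only illustrates the kind of operation being benchmarked.

```python
import time
import boto3

# Placeholder endpoint and credentials -- substitute your own
# S3-compatible object store (e.g. a HyperStore test cluster).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

BUCKET = "benchmark-bucket"   # assumed to exist already
OBJ_SIZE = 64 * 1024 * 1024   # 64 MiB objects
COUNT = 16                    # objects to write, then read back
payload = b"\0" * OBJ_SIZE

# Time sequential PUTs
start = time.perf_counter()
for i in range(COUNT):
    s3.put_object(Bucket=BUCKET, Key=f"bench/obj-{i}", Body=payload)
write_secs = time.perf_counter() - start

# Time sequential GETs
start = time.perf_counter()
for i in range(COUNT):
    s3.get_object(Bucket=BUCKET, Key=f"bench/obj-{i}")["Body"].read()
read_secs = time.perf_counter() - start

gib = COUNT * OBJ_SIZE / 2**30
print(f"write: {gib / write_secs:.2f} GiBps, read: {gib / read_secs:.2f} GiBps")
```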

The company supplies HyperStore object storage software, and this speed run was done with servers using AMD's EPYC 9454 CPUs and the upcoming HyperStore v8 software.

Michael Tso

Cloudian CEO Michael Tso said in a statement: “Our customers need storage solutions that deliver extreme throughput and efficiency as they deploy Cloudian’s cloud-native object storage software in mission-critical, performance-sensitive use cases. This collaboration with AMD and Micron demonstrates that we can push the boundaries.”

AMD corporate VP for Strategic Business Development, Kumaran Siva, backed him up: “Our 4th Gen AMD EPYC processors are designed to power the most demanding workloads, and this collaboration showcases their capabilities in the context of object storage.”

CMO Jon Toor told us: “Most of our customers today are talking with us about all flash for object storage, if they’re not already there. Increased performance is a driver, especially as we move into more primary storage use cases. Efficiency is a driver also. With these results we showed a 74 percent power efficiency improvement vs an HDD-based platform, as measured by power consumed per GB transferred.”

HyperStore 8.0 incorporates multi-threading technology and kernel optimizations to capitalize on the EPYC 9454 processor, with its 48 cores and 128 PCIe lanes. The combination was then optimized for Micron's 6500 ION 232-layer TLC SSDs, which deliver up to 1 million 4KB random read IOPS.

Object storage tends to scale linearly as nodes are added to a cluster, so high aggregate speeds are possible. Cloudian's per-node performance works out at 2.95GBps write and 4.15GBps read.
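Assuming scaling really does stay close to linear, the per-node figures imply a simple projection of cluster throughput as nodes are added. The sketch below just multiplies the quoted per-node numbers out; it makes no allowance for the network, rebalancing, or erasure-coding overheads that erode linearity in real clusters.

```python
# Per-node figures quoted for the six-node Cloudian result (GBps).
PER_NODE_WRITE = 2.95
PER_NODE_READ = 4.15

# Idealized linear projection -- real clusters rarely scale perfectly.
for nodes in (6, 12, 24, 48):
    print(f"{nodes:2d} nodes: ~{nodes * PER_NODE_WRITE:.1f} GBps write, "
          f"~{nodes * PER_NODE_READ:.1f} GBps read")
```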

In October 2019, OpenIO achieved 1.372Tbps throughput (171.5GBps) using an object storage grid running on 350 commodity servers. That's 0.49GBps per server.

A month later MinIO went past 1.4Tbps for reads, using 32 nodes of AWS i3en.24xlarge instances with eight NVMe drives each, for a total of 256 NVMe drives. That equates to 175GBps overall and 5.5GBps per AWS instance, outperforming Cloudian on a per-node basis. We don't know the NVMe drive performance numbers, but MinIO used two more of them per instance than Cloudian used per node. Object storage performance benchmarks are bedevilled with apples-and-oranges comparison difficulties.
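To make the apples-and-oranges point concrete, the snippet below normalises the three results to per-node read throughput using only the figures quoted above; it deliberately ignores drive counts, drive generations, network speeds, and the years separating the tests, which is exactly why such comparisons should be treated with caution.

```python
# Cloudian's per-node read figure is quoted directly in the article (GBps).
CLOUDIAN_PER_NODE_READ = 4.15

# (aggregate read throughput in GBps, node count) as quoted above.
others = {
    "OpenIO (350 commodity servers, 2019)": (171.5, 350),
    "MinIO (32 AWS i3en.24xlarge, 2019)":   (175.0, 32),
}

print(f"Cloudian: {CLOUDIAN_PER_NODE_READ:.2f} GBps per node")
for name, (aggregate_gbps, nodes) in others.items():
    print(f"{name}: {aggregate_gbps / nodes:.2f} GBps per node")
```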

Check out a Cloudian speed run Solution Brief here.