To INFINIBOX and beyond! Infinidat preps NVMe-oF upgrade for high-end array

Infinidat looks set to add ultra-fast NVMe-over-Fabrics access to its INFINIBOX arrays.

The company has not announced anything publicly, but a source tells me it demonstrated NVMe-over-Fabrics access to an INFINIBOX array at a recent customer event in Boston, Mass.

I hear from a customer attendee that a sample workload completed with latency under 50μs, as measured from the host. NVMe-oF accelerated access to INFINIBOX's memory caching technology, delivering faster data access than all-flash arrays.

We asked Infinidat CTO Brian Carmody if Infinidat plans to provide NVMe-oF functionality to its INFINIBOX arrays. He responded with a crisp: “No comment.”


INFINIBOX is already fast

These high-end, high-capacity monolithic arrays store data on disk and use memory caching to deliver high cache-hit rates on reads. Performance is comparable to, if not better than, that of all-flash arrays.

Test benchmarks have been publicised showing an INFINIBOX outpacing IBM and Pure Storage all-flash arrays (Infinidat explains its methodology here).

Typically, Infinidat arrays connect to host servers across Fibre Channel or Ethernet, which adds network and storage-stack delays to data access latency.

If NVMe-over-Fabrics (NVMe-oF) were used instead, data access latency would fall, because NVMe-oF effectively extends a server's PCIe bus out to the shared, external array. This is a much faster path than, say, 16 or 32Gbit/s Fibre Channel, with storage array LUN access requests passing across it.
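From the host's point of view, an NVMe-oF connection makes a remote namespace show up as a local NVMe block device. As a rough sketch of what that looks like on a Linux host using the standard nvme-cli tool (the transport, address, and subsystem name below are placeholders, not details of any Infinidat demo):

```shell
# Discover NVMe-oF subsystems exported by the array
# (transport and address are hypothetical examples)
nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect to a discovered subsystem by its NQN (name is hypothetical)
nvme connect -t tcp -a 192.168.0.10 -s 4420 \
  -n nqn.2014-08.com.example:subsystem1

# The remote namespace now appears as a local block device,
# e.g. /dev/nvme1n1, visible with:
nvme list
```

Applications then read and write that device exactly as they would a local NVMe SSD, which is what keeps the software stack, and the latency, so lean.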

A LUN (Logical Unit Number) references a logical volume of block storage accessed by applications running on a server. The storage array controller receives a LUN read or write request and maps the LUN's logical blocks onto actual disk or flash drives.
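That mapping step can be pictured with a toy sketch. Everything here is hypothetical and greatly simplified (real controllers add caching, RAID, and redundancy); it only illustrates the idea of translating a (LUN, logical block address) pair into a physical drive and offset, using simple round-robin striping:

```python
# Toy model (all names hypothetical): a controller translating a
# logical block in a LUN to a physical location, via striping.

DRIVES = ["drive0", "drive1", "drive2", "drive3"]  # pretend physical drives
STRIPE_BLOCKS = 128                                # blocks per stripe unit

def map_lba(lun: int, lba: int) -> tuple[str, int]:
    """Translate a logical block address in a LUN to (drive, physical block)."""
    stripe = lba // STRIPE_BLOCKS            # which stripe unit the block is in
    drive = DRIVES[stripe % len(DRIVES)]     # stripe units round-robin across drives
    offset = (stripe // len(DRIVES)) * STRIPE_BLOCKS + lba % STRIPE_BLOCKS
    return drive, offset

# A read of LUN 0, block 300 falls in the third stripe unit, so it
# lands on the third drive:
print(map_lba(0, 300))  # → ('drive2', 44)
```

The point is only that the host asks for logical blocks and the controller decides where they physically live, which is why the same LUN abstraction works whether the back end is disk, flash, or a memory cache.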