Queen Mary University of London’s (QMUL) genomics research team generated more data than its high-performance computing infrastructure could comfortably handle.
And so the university’s IT department set out to improve workflow and performance. It did this by buying in new hardware: a D24 NVMe-over-Fabrics array made by Israeli startup E8 Storage, combined with a Mellanox InfiniBand link to the accessing clustered servers.
QMUL has retained its existing DDN GridScaler array, a high-performance, parallel file system array based on Spectrum Scale. But it has pushed this down the stack, behind the E8 box, for use as a bulk storage tier.
Slow, slow, quick, quick, slow
QMUL’s IT team saw E8’s benchmark performances, such as its SPEC SFS2014 runs, and decided to take a closer look.
On the SPEC SFS2014 run E8 used its D24 storage array with 24 x HGST SN200 1.6TB dual-port NVMe SSDs, 38.4TB total capacity, and 16 Spectrum Scale client nodes to achieve 600 builds. That was a record result at the time, though it has since been superseded.
Tom King, assistant director for research, IT services at QMUL, said in a canned quote: “We were extremely pleased with the latest benchmark performance tests which showed E8 Storage as a leader in NVMe.”
The university acquired the D24 array from OCF, a Sheffield, UK-based specialist integrator of HPC systems and an E8 partner. The Spectrum Scale integration made it feasible to use the E8 array as a high-performance tier in front of the DDN GridScaler.
E8’s D24 is used as a fast-access scratch tier for researchers, holding data and metadata. It enables more jobs to be pushed through the cluster at a faster rate.
Accessing data from a shared block storage array across an NVMe-over-Fabrics link is generally reckoned to be the fastest way to get at data in such an array. We compare it with other storage protocols here.
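For the curious, here is a sketch of how a Linux host might discover and attach a remote namespace on an NVMe-over-Fabrics target using the open-source nvme-cli tool. The transport, address, port and subsystem NQN below are illustrative placeholders, not QMUL's or E8's actual configuration.

```shell
# Discover NVMe-oF subsystems exported by the array over an RDMA
# transport (an InfiniBand fabric would also use -t rdma).
# Address and port are placeholders.
nvme discover -t rdma -a 10.0.0.10 -s 4420

# Connect to a discovered subsystem; its namespaces then appear
# as local block devices (e.g. /dev/nvme1n1). The NQN is made up.
nvme connect -t rdma -n nqn.2016-01.com.example:subsystem1 -a 10.0.0.10 -s 4420

# List attached NVMe devices to confirm the remote namespace is visible.
nvme list
```

Once connected, the remote NVMe namespace behaves like a local drive, which is why NVMe-oF access latencies can come so close to direct-attached flash.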
If I could turn back time?
DDN recently set a new SPEC SFS2014 record, some 25 per cent faster than an E8 NVMe storage system fitted with Intel Optane 3D XPoint drives. Surprisingly, DDN used a bog-standard Fibre Channel SAS SSD array to outperform an array full of NVMe-connected Optane SSDs.