Earlier this week we covered the main details of VMware’s blockbuster vSphere 7 launch. In this article we zoom in on vSphere 7’s support for NVMe over Fabrics (NVMe-oF), which speeds data access to external block storage arrays.
This support gives virtual machines faster read and write access to externally stored data, and so increases performance.
NVMe-oF carries the NVMe drive protocol over Ethernet or another network fabric. It enables a server to access a network-linked storage drive as if it were a local, directly attached NVMe drive – cheaper than putting flash in every server, and faster than a traditional SAN. Latency for the pooled storage is about 100μs, compared with the 500μs or more of an all-flash storage array accessed across a Fibre Channel or iSCSI network.
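Those latency figures set a hard ceiling on how fast a single synchronous IO stream can go. As a rough back-of-envelope sketch in Python – using only the round-trip figures quoted above; real numbers vary by array, fabric and workload:

def max_iops_per_outstanding_io(latency_us: float) -> float:
    """One outstanding IO completes every latency_us microseconds."""
    return 1_000_000 / latency_us

# Round-trip latencies quoted above: NVMe-oF ~100us, FC/iSCSI all-flash array ~500us
for name, latency_us in (("NVMe-oF", 100), ("FC/iSCSI AFA", 500)):
    print(f"{name}: {max_iops_per_outstanding_io(latency_us):,.0f} IOPS per queue-depth-1 stream")

At queue depth one, cutting latency from 500μs to 100μs lifts the per-stream ceiling from 2,000 to 10,000 IOPS.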
NVMe-oF also increases IO access speed by using multiple IO queues and by enabling parallel access to more of an SSD’s flash.
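To see why multiple queues matter, here is a conceptual sketch of that scaling. The per-queue and drive-ceiling figures are illustrative assumptions, not numbers from the article or the NVMe specification: aggregate throughput grows with the number of independent queues until the SSD’s internal parallelism is saturated.

def aggregate_iops(num_queues: int, per_queue_iops: int, drive_ceiling: int) -> int:
    """Throughput scales with independent IO queues, capped by the drive itself."""
    return min(num_queues * per_queue_iops, drive_ceiling)

PER_QUEUE = 50_000        # assumed throughput of one submission queue (IOPS)
DRIVE_CEILING = 600_000   # assumed limit of the SSD's flash channels/dies (IOPS)

for queues in (1, 2, 4, 8, 16):
    print(f"{queues:2d} queues -> {aggregate_iops(queues, PER_QUEUE, DRIVE_CEILING):,} IOPS")

In this toy model throughput scales linearly to 12 queues and then flattens at the drive’s own limit – which is the point of the protocol’s many deep queues: keeping more of the flash busy at once.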
Network fabrics
vSphere 7 supports Fibre Channel and RoCE v2 (RDMA over Converged Ethernet) network fabrics, Jason Massae, technical marketing architect at VMware, writes in a company blog. This enables faster access to any NVMe-oF array, such as those from HPE, NetApp, Pure Storage and, of course, VMware’s parent company, Dell.
vSphere 7 also supports shared VMDKs for Microsoft WSFC (Windows Server Failover Clustering) and, in VMFS, optimised first writes for thin-provisioned disks, Massae writes. There is also extended vVols support.
Jacob Hopkinson, VMware solutions architect at Pure Storage, explains in his article, “vSphere 7.0: Configuring NVMe-RoCE with Pure Storage”, how to configure vSphere’s ESXi hypervisor for NVMe-oF over RoCE. He covers Mellanox and Broadcom network adapters and the setting up of vSwitches, port groups and VMkernel adapters.
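To give a flavour of what that configuration involves, below is a minimal sketch of the host-side flow once the vSwitch, port group and VMkernel adapter are in place. The esxcli nvme and rdma command namespaces are part of ESXi 7, but the adapter name, array address, port and subsystem NQN here are placeholders; Hopkinson’s article remains the full walkthrough.

# Sketch of the host-side steps on an ESXi 7 host, run from the ESXi shell.
import subprocess

ADAPTER = "vmhba65"                                 # software NVMe-over-RDMA adapter (assumed name)
ARRAY_IP, PORT = "192.168.10.20", "4420"            # placeholder array discovery endpoint
SUBSYSTEM_NQN = "nqn.2010-06.com.example:array01"   # placeholder subsystem NQN

def esxcli(*args: str) -> str:
    """Run an esxcli command and return its output."""
    return subprocess.run(["esxcli", *args], check=True,
                          capture_output=True, text=True).stdout

# Confirm the RDMA-capable NICs and NVMe adapters are visible to the host.
print(esxcli("rdma", "device", "list"))
print(esxcli("nvme", "adapter", "list"))

# Discover the NVMe subsystems the array exposes, then connect to one.
print(esxcli("nvme", "fabrics", "discover",
             "-a", ADAPTER, "-i", ARRAY_IP, "-p", PORT))
esxcli("nvme", "fabrics", "connect",
       "-a", ADAPTER, "-i", ARRAY_IP, "-p", PORT, "-s", SUBSYSTEM_NQN)

# Namespaces on the connected subsystem should now show up as host storage.
print(esxcli("nvme", "namespace", "list"))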