OK, SANshine, how do you position Nebulon against external arrays, HCI, SmartNIC and DPU storage?

Analysis: Would-be SAN array and HCI rival Nebulon has a fresh way of delivering shared storage and other services with its SPU (Storage Processing Unit), using what looks like hyper-converged infrastructure (HCI) hardware.

It uses a card plugged into a server’s PCIe bus, but its overall architecture, features and benefits are different from everything else in the storage market, sometimes in subtle ways. 

We’ve positioned Nebulon’s technology against external arrays, actual hyper-converged infrastructure, HCI with a hardware accelerator, storage accessed over SmartNICs and storage accessed via DPUs (Data Processing Units), with the aim of better understanding and differentiating Nebulon’s tech.

This article was based on briefings with suppliers such as Fungible, Nebulon, Nvidia and Pensando, and represents our understanding of the evolving shared storage and server infrastructure market. Let’s start with the external block array or filer.

External arrays

An external shared array typically links to accessing servers via Fibre Channel HBAs (block access) or Ethernet NICs (iSCSI block and NAS file access). The array generally has two x86 controllers, an operating system providing services and a bunch of drives.
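For illustration, here is a minimal Python sketch of that dual-controller design. The class names and capacity figure are hypothetical, not any vendor's code; the point it shows is that either controller can serve I/O if its partner fails.

```python
# Illustrative sketch, not vendor code: a dual-controller external array.
from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    healthy: bool = True

@dataclass
class ExternalArray:
    """Two x86 controllers front a shared pool of drives; hosts reach
    them over Fibre Channel (block) or Ethernet (iSCSI block/NAS file)."""
    controllers: list = field(default_factory=lambda: [
        Controller("A"), Controller("B")])
    drives_tb: int = 100   # assumed raw capacity behind both controllers

    def serve_io(self) -> str:
        # Either healthy controller can serve I/O; dual controllers avoid
        # a single point of failure (contrast with the SmartNIC case later).
        for c in self.controllers:
            if c.healthy:
                return f"I/O served by controller {c.name}"
        raise RuntimeError("array offline: no healthy controller")

array = ExternalArray()
array.controllers[0].healthy = False   # simulate controller A failing
print(array.serve_io())                # I/O served by controller B
```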

Hyper-Converged Infrastructure

Hyper-converged infrastructure (HCI) emerged as an alternative to external shared arrays, providing shared capacity by aggregating locally attached storage across a group of linked servers and presenting it to applications as a virtual SAN.

The storage operating system and its services ran on the servers’ processors and so took some CPU capacity away from running applications. The servers were also constrained to run under the control of a hypervisor.

Diagram key: SDS = Software-Defined Storage; V = virtual machine.
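To make the virtual SAN capacity and CPU trade-off concrete, here is a minimal, illustrative Python sketch. The replication factor and the 15 per cent storage overhead are assumed figures for illustration, not measured vendor numbers.

```python
# Illustrative sketch with assumed figures: how an HCI cluster pools
# local drives into a virtual SAN, and the CPU "tax" the storage
# software levies on each node.
def virtual_san(nodes: int, drives_per_node_tb: float,
                replication_factor: int = 2,
                storage_cpu_overhead: float = 0.15) -> dict:
    raw_tb = nodes * drives_per_node_tb
    usable_tb = raw_tb / replication_factor   # copies kept for resilience
    return {"raw_tb": raw_tb,
            "usable_tb": usable_tb,
            "cpu_fraction_for_apps_per_node": 1.0 - storage_cpu_overhead}

print(virtual_san(nodes=4, drives_per_node_tb=20))
# {'raw_tb': 80, 'usable_tb': 40.0, 'cpu_fraction_for_apps_per_node': 0.85}
```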

These servers also managed the overall infrastructure and, over time, system monitoring and predictive analytics services were added, delivered via the supplier’s cloud. These services are now evolving towards AIOps.

HPE’s SimpliVity varies the classic HCI concept by adding an inline hardware accelerator to provide compression and deduplication.

SmartNIC infrastructure

A SmartNIC is a network interface card with an added processor and other accelerator hardware, plus software/firmware to provide network, security and storage services. The card plugs into a host server’s PCIe bus and connects to an external storage system. An example is Nvidia’s BlueField-2 SmartNIC paired with DDN’s ExaScaler array: the ExaScaler array software runs in the BlueField card, so that card is now the storage array controller.

This means the host server no longer carries out storage processing work and can run more virtual machines.

We note that in this case the dual-controller array is replaced by a single SmartNIC controller, which has implications for availability: the SmartNIC becomes a single point of failure.
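A minimal Python sketch of this offload arrangement, with hypothetical names, shows both sides of the trade: host cycles freed for applications, and a single card standing in the storage path.

```python
# Illustrative sketch, hypothetical names: a SmartNIC such as BlueField-2
# runs the array controller software, so storage processing leaves the
# host CPU, but one card is a single point of failure.
from dataclasses import dataclass

@dataclass
class SmartNIC:
    runs_controller_software: bool = True   # e.g. the array controller
    healthy: bool = True

@dataclass
class HostServer:
    nic: SmartNIC
    host_cpu_for_apps: float = 1.0   # storage work runs on the card

    def read_block(self) -> str:
        if not self.nic.healthy:
            # Unlike a dual-controller array, there is no partner
            # controller to fail over to.
            raise RuntimeError("storage path lost: SmartNIC is the SPOF")
        return "block served via SmartNIC controller"

host = HostServer(nic=SmartNIC())
print(host.read_block())
```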

Another BlueField example is VMware’s Project Monterey with the vSphere hypervisor running in the BlueField card.

These SmartNIC systems are not hyper-converged systems, and the host servers are not constrained to run a hypervisor; they could be bare metal or containerised servers. Shared storage for the accessing servers comes from an external array accessed across a fabric, not from a virtual SAN.

Could a set of servers with SmartNICs provide a virtual SAN? Theoretically, yes, so long as a supplier provided the software needed, such as vSAN code running in the SmartNIC.

Infrastructure management is performed by the server-SmartNIC system, and monitoring and predictive analytics come from a supplier’s cloud service.

Nebulon’s SPU infrastructure

Nebulon conceptually replaces the SmartNIC with its own Storage Processing Unit (SPU) hardware and software. Unlike a typical SmartNIC setup, the shared storage is provided by internal, locally attached drives virtualised into a SAN, not by an external array.

The SPU provides infrastructure management for the host server and is itself controlled and managed from Nebulon’s cloud, which also provides the analytics services.

As with a SmartNIC, the host server can run whatever applications are needed (bare metal, virtual machines or containers) and no host cycles are consumed running shared storage. The SPU does not run a hypervisor, though.
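Here is a comparable, purely illustrative Python sketch of the SPU arrangement; the names and figures are ours, not Nebulon's API. The contrast with the SmartNIC sketch above is that capacity comes from the server's own drives, pooled into a virtual SAN, with control coming from the cloud.

```python
# Illustrative sketch, hypothetical names (not Nebulon's API): the SPU
# virtualises the server's own drives into a shared SAN and is managed
# from the cloud, so the host runs bare metal, VMs or containers with
# no storage cycles consumed and no mandatory hypervisor.
from dataclasses import dataclass, field

@dataclass
class SPU:
    local_drives_tb: list = field(default_factory=lambda: [8.0] * 4)
    managed_from_cloud: bool = True    # control plane plus analytics

    def virtual_san_capacity_tb(self) -> float:
        # Internal, locally attached drives are pooled; there is no
        # external array in this design.
        return sum(self.local_drives_tb)

@dataclass
class Host:
    spu: SPU
    workload: str = "bare metal"       # or "VMs" or "containers"
    host_cpu_for_apps: float = 1.0     # storage runs on the SPU

h = Host(spu=SPU())
print(f"{h.workload} host, {h.spu.virtual_san_capacity_tb()} TB virtual SAN")
```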

In B&F’s view the Nebulon SPU is a SmartNIC variant: its functionality partially overlaps with SmartNIC functionality, but its system architecture is different, as we have described.

DPU infrastructure

A dedicated hardware Data Processing Unit (DPU) is conceived as necessary to run data centre infrastructure tasks such as networking, storage and security, because these are too onerous and take server cycles away from running application code.

The view is that there are now so many servers, storage arrays, network and security boxes in a data centre that they need their own offloaded management, control and network fabric to relieve over-burdened application servers.

This is an emerging technology with two main startup suppliers, Fungible and Pensando, both building specialised chip hardware. Fungible has launched a shared, external FS1600 storage appliance front-ended by its own DPU card (a SmartNIC, in our view). This links to its central DPU across a dedicated high-speed TrueFabric that can scale to 8,000 nodes.

The FS1600 is conceptually similar to a DDN ExaScaler-BlueField array, with the BlueField SmartNIC running the ExaScaler controller software and interfacing the array to accessing servers, just as the FS1600’s Fungible chip does.

Neither the FS1600 nor the ExaScaler-BlueField array is like Nebulon’s SPU system, as that provides a virtual SAN and not an external array.

Here’s a diagram and a table positioning the various kinds of systems.

Table key: Y = yes; N = no; P = possible; SDS = Software-Defined Storage; FC HBA = Fibre Channel Host Bus Adapter; JBOF = Just a Bunch of Flash drives.

Composability

SmartNICs and DPUs are involved with composability: the concept of dynamically creating a server instance from processor, accelerator (FPGA, GPU, ASIC), DRAM, storage-class memory, storage and network elements selected from resource pools. This virtual server runs an application workload and is then de-instantiated, its resources returned to their parent pools for later re-use.
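A minimal, illustrative Python sketch of that lifecycle, using a hypothetical API rather than any supplier's actual software, looks like this:

```python
# Illustrative sketch, hypothetical API: compose a server instance from
# shared resource pools, run a workload, then de-instantiate it so the
# parts return to their pools for re-use.
class ResourcePools:
    def __init__(self):
        self.pools = {"cpu": 64, "gpu": 8, "dram_gb": 2048, "ssd_tb": 100}

    def compose(self, **wants) -> dict:
        # Check availability first, then allocate from each pool.
        for kind, amount in wants.items():
            if self.pools[kind] < amount:
                raise RuntimeError(f"pool exhausted: {kind}")
        for kind, amount in wants.items():
            self.pools[kind] -= amount
        return dict(wants)                 # the composed "server"

    def release(self, server: dict) -> None:
        for kind, amount in server.items():
            self.pools[kind] += amount     # no stranded resources

pools = ResourcePools()
server = pools.compose(cpu=8, gpu=1, dram_gb=256, ssd_tb=4)
# ... run the application workload ...
pools.release(server)                      # resources back for re-use
print(pools.pools)                         # back to the original totals
```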

The claimed benefit is better component utilisation than fixing resources in static server configurations, where they sit stranded when not needed.

Suppliers like Pensando and Fungible are creating DPUs to run the infrastructure code that composes the servers and relieves them of infrastructure workloads, such as east-west networking inside a data centre. Nvidia sees BlueField enabling data centre composability.

Nebulon is not involved in data centre composability. It could be, but its view at present is that composability is a concern for hyperscale-class cloud and application providers, not mainstream enterprises. Nebulon’s SPU is not, or not yet, a type of DPU in the composable sense.