Hyperconverged infrastructure heads closer to the edge

Interview: Get hyperconverged infrastructure (HCI) product development and marketing wrong and your business becomes a turkey. Witness Maxta, which crashed into the buffers this week.

With heavyweights like Nutanix and Dell/EMC/VMware dominating the market, mis-steps by smaller players can lead to disaster. 

So how to survive and thrive? Blocks & Files spoke to Jeff Ready, CEO of Scale Computing, about his views on storage-class memory, NVMe-over-Fabrics and file support in HCI, and where he thinks the HCI market is going.

Overall, Ready reckons the edge is where the action is, and that automated admin and orchestration are the attributes that make HCI suitable for deployment outside data centres.

Blocks & Files: Does storage class memory have a role to play in HCI nodes?

Jeff Ready: It will, once there is sufficient demand from applications that support it. With its high cost this is still a niche play, but that will change rapidly as the price improves.

Blocks & Files: In scaling out, does NVMe over Fabrics have a role to play in interconnecting HCI nodes?

Jeff Ready: Absolutely… today, Hyper-Core Direct utilises NVMe-oF, and our near-term plan is to use NVMe-oF in HC3 as the storage protocol underpinning SCRIBE, performing all of the node-to-node block operations for data protection, regardless of the underlying devices (even IOs destined for HDD will use NVMe-oF). The simplicity and efficiency of the protocol is the reason for this.
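
To make the device-agnostic point concrete, here is a minimal, hypothetical Python sketch – not Scale's SCRIBE or Hyper-Core Direct code – showing the idea that every node-to-node block write takes the same fabric transport path, whether the receiving node backs it with NVMe flash or spinning disk:

```python
# Hypothetical sketch: one fabric transport for all node-to-node block writes,
# regardless of the media backing the remote node. Not Scale Computing code.

from dataclasses import dataclass

@dataclass
class PeerNode:
    name: str
    media: str  # "nvme" or "hdd" -- the transport does not care

class FabricTransport:
    """Stand-in for an NVMe-oF-style transport: one call path for every device."""
    def write_block(self, node: PeerNode, lba: int, data: bytes) -> None:
        # In a real system this would be an NVMe-oF submission queue entry;
        # here we only show that the protocol path is identical for all media.
        print(f"sent {len(data)} bytes to {node.name} (backing: {node.media}) at LBA {lba}")

def replicate(transport: FabricTransport, peers: list[PeerNode], lba: int, data: bytes) -> None:
    """Mirror a block write to every peer node for data protection."""
    for peer in peers:
        transport.write_block(peer, lba, data)

if __name__ == "__main__":
    peers = [PeerNode("node-2", "nvme"), PeerNode("node-3", "hdd")]
    replicate(FabricTransport(), peers, lba=2048, data=b"\x00" * 4096)
```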

Blocks & Files: Does file access – thinking of Isilon, Qumulo and NetApp filers – have any role in HCI? If we view HCI as a disaggregated SAN can we think about a similarly disaggregated filer? Does this make any kind of sense?

Jeff Ready: Sure, it can make sense. It often depends on the applications that are running and what makes it easy for them. We’ve incorporated file-level functionality into our HCI stack as it makes it easy to (among other things) do file-level backup and restore from within the platform itself. Not every deployment needs it, but it can be handy. The key is offering the functionality without complicating the administration; otherwise you’ve missed a huge area of time savings with HCI.

Blocks & Files: Is software-defined HCI inherently better suited to hybrid cloud IT?

Jeff Ready: Yes, assuming that there is an architecture that exists in both places. For example, with our Cloud Unity product partnership with Google, the resources of the cloud appear as part of the same pool of resources you have available locally. Thus the applications can run in either location without being changed or otherwise being aware of where they are running. HCI implemented in this way creates one unified platform.
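
What "one pool in both places" amounts to can be sketched in a few lines of hypothetical Python – illustrative only, not the Cloud Unity API: the scheduler picks any node with capacity, and the workload never sees whether that node is on-premises or in the cloud.

```python
# Hypothetical sketch: one resource pool spanning local and cloud nodes.
# Names and fields are illustrative only -- not the Cloud Unity API.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    location: str       # "local" or "cloud" -- invisible to the workload
    free_ram_gb: int

@dataclass
class Workload:
    name: str
    ram_gb: int

def place(workload: Workload, pool: list[Node]) -> Node:
    """Pick the first node with enough free RAM; location is not a factor."""
    for node in pool:
        if node.free_ram_gb >= workload.ram_gb:
            node.free_ram_gb -= workload.ram_gb
            return node
    raise RuntimeError("pool exhausted")

if __name__ == "__main__":
    pool = [
        Node("rack1-node1", "local", free_ram_gb=8),
        Node("gcp-node1", "cloud", free_ram_gb=64),
    ]
    target = place(Workload("erp-vm", ram_gb=32), pool)
    print(f"erp-vm placed on {target.name} ({target.location})")
```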

Blocks & Files: Does Scale support containerisation, and what are its thoughts on it?

Jeff Ready: Yes. We find most organisations are still in the very early stages of considering containers. Some applications and deployments make great sense for containers, others less so. Our philosophy is to provide a single platform that can run both VMs and containers, so that the architecture does not stand in the way of the right deployment method for a particular app.

Blocks & Files: How does Scale view HCI with separate storage and compute nodes, such as architectures from Datrium and NetApp? Are there pros and cons for classic HCI and disaggregated HCI?

Jeff Ready: Having storage nodes available to be added to a mixed compute/storage node cluster is something that we and others have done for some time. 

So long as it all pools together transparently to the applications and administrators, then this makes sense. If the disaggregated resources then require separate administration or application configuration, you’ve sort of missed the entire point of HCI. 

Our approach is that the apps should feel like they are running on a single server, and that the virtualization, clustering, and storage take care of themselves. If you are back to managing a hypervisor (VMware or otherwise), or managing a storage stack (using protocols, carving out LUNs and shares, etc.), or piping compute nodes and storage nodes together as an admin, then you really don’t have a hyperconverged stack, in my opinion.

Ready, Steady, Go!


Blocks & Files: How do you see the HCI market evolving?

Jeff Ready: We’re seeing different features for HCI emerging in different markets.  

Edge computing is focused on cost, footprint, and manageability. Other areas are focused on streamlined performance and optimisation. Some are general purpose deployments that need a bit of each.  

For us, the emergence of edge computing is very exciting. There, automation and orchestration are an absolute must-have, and this is where our own tech really shines: deploying into places where there are often zero IT admin resources. Stuff needs to work without humans touching it when you are trying to manage thousands of locations.

The edge is a natural complement to cloud computing and I expect this to be the fastest growing area in IT for some time to come. HCI is a technology that applies at the edge, but the automation and orchestration are really key drivers.
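
The zero-touch pattern Ready describes can be sketched, hypothetically, as a single desired-state definition pushed to thousands of sites, with each site converging itself rather than waiting for an admin. The Python below is illustrative only; the site names and fields are made up.

```python
# Hypothetical sketch: declarative, zero-touch rollout to many edge sites.
# Names and fields are illustrative only.

DESIRED_STATE = {"platform_version": "9.2", "replicas": 2, "backup_target": "hq-dc"}

def converge(site: dict) -> dict:
    """Bring one site to the desired state; no human interaction required."""
    drift = {k: v for k, v in DESIRED_STATE.items() if site.get(k) != v}
    site.update(drift)          # in reality: upgrade, reconfigure, verify
    return drift

if __name__ == "__main__":
    sites = [
        {"name": "store-0001", "platform_version": "9.1", "replicas": 2},
        {"name": "store-0002", "platform_version": "9.2", "replicas": 1},
    ]
    for site in sites:
        changed = converge(site)
        print(f"{site['name']}: applied {changed or 'nothing (already compliant)'}")
```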

Comment

Smaller HCI suppliers need a sharp eye on new technology, adopting it promptly, and they also need to understand where HCI fits in the IT market.

OK, this is easy to say and much harder to put into practice. But the importance of doing the simple things well is illustrated by developments this week, with HCI startup Maxta running into the buffers and a big beast, Cisco, flexing its HCI muscles. (See my report on Cisco strengthening its HyperFlex HCI offering with all-NVMe systems, Optane caching, and edge systems.)

In edge locations with no local IT admin, the HCI system has to do all the IT things needed to keep the branch, office, shop, whatever, operating and hooked up to the data centre and, probably, the public cloud. Forget that and the HCI vendor is heading nowhere.