Hyperconverged infrastructure (HCI) ticks a lot of boxes for a lot of organizations. Recent figures from IDC showed that growth in the overall converged systems market was static in the fourth quarter of 2020, but sales of hyperconverged systems accelerated, up 7.4 per cent and accounting for almost 55 per cent of the market.
It’s not hard to see why this might be. The pandemic has left many organizations supporting largely remote workforces, prompting a surge of interest in systems that can underpin virtual desktop infrastructure (VDI). Those all-in-one systems seem to offer a straightforward way to scale up compute and storage to meet these challenges in a predictable, albeit slightly lumpy, way.
But what if your workloads are unpredictable? Are you sure that your storage capacity needs will always grow in lockstep with your compute needs? Looked at from this point of view, HCI may be a somewhat inflexible way to scale up your infrastructure, leaving you open to the possibility of paying for and managing storage and/or compute resources that you don’t actually need. Suddenly that tight level of integration is a source of irritation. Aggravation even.
This is why HPE has begun to offer customers a path to “disaggregation” with the HPE Nimble Storage dHCI line, which allows compute nodes, in the shape of HPE ProLiant servers, to share HPE Nimble storage arrays, while still offering the benefits of traditional HCI.
HPE’s Chuck Wood, Senior Product Marketing Manager, HPE Storage and Hyperconverged Infrastructure, says that while the classic HCI model delivers on deployment and management, admins still face complexity in the underlying infrastructure.
“In traditional HCI, when you need to do Lifecycle Management, like adding nodes or even doing upgrades, all of those operations can be disruptive, because your apps and your data are on the same nodes,” he says.
VM-centric experience
At the same time, he explains, customers want to apply HCI to workloads beyond VDI and end-user computing: “There was this simplicity piece that you wanted to bring to more and more workloads – databases, business critical and mixed workloads.”
At first, it might seem counter-intuitive to split up the hardware elements that make up traditional HCI systems in the pursuit of simplification.
But as Wood explains, HPE’s approach is centered around delivering a simple VM-centric experience: “The VM admin wants to operate and manage workloads, virtual machines, you don’t want to be worrying about the infrastructure. To them, that’s where a lot of the complexity lies.”
So, in practice, “what we’re offering that’s aggregated is this simplified management … vCenter is the management plane, and our tools to manage Nimble Storage dHCI are embedded completely in vCenter.”
At the same time, Nimble dHCI has a Container Storage Interface (CSI) driver, so, as Wood puts it, “you can run VMs alongside containers. We can be that common infrastructure.”
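In practice, that common infrastructure surfaces to Kubernetes as a StorageClass backed by the CSI driver, so containers can provision persistent volumes from the same shared array the VMs use. The following is a minimal sketch using the Python Kubernetes client; the provisioner name and parameters shown are assumptions for illustration, not a verified HPE configuration.

```python
# Minimal sketch: register a StorageClass backed by a CSI driver so containers
# can carve persistent volumes out of the shared array. The provisioner name
# and parameters below are illustrative assumptions, not an official HPE setup.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="dhci-block"),
    provisioner="csi.hpe.com",               # assumed CSI driver name
    parameters={"accessProtocol": "iscsi"},  # assumed driver parameter
    reclaim_policy="Delete",
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(storage_class)
```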
“The disaggregated part is just the notion that you can add more ProLiants if all you need is memory, and compute and CPU resources,” he says. “Or you can add disks and all-flash Nimble shelves, if all you require is capacity for storage.”
This all allows for a much more incremental, and non-disruptive, approach to building out the infrastructure. For instance, adding an HPE Nimble dHCI array to an existing setup of HPE ProLiants and approved switches is a five-step process that takes just 15 minutes, Wood says.
It’s worth noting that adopting the HPE Nimble Storage dHCI architecture does not require you to start from scratch. “If you have existing ProLiant Gen9 or Gen10 servers, we can actually bring them into Nimble Storage arrays,” says Wood, though to date, he says, most customers have built from the ground up.
Resilient infrastructure
Resiliency is further underpinned by HPE InfoSight, HPE’s AI-powered intelligent data platform, which aggregates telemetry from HPE Nimble systems to provide predictive management and automation. According to HPE, InfoSight can automatically resolve 86 per cent of issues, while time spent managing problems is slashed by 85 per cent.
The combination of ease of deployment and management, disaggregated architecture, and automation via HPE InfoSight means HPE can commit to a guarantee of six nines of data availability for HPE Nimble Storage dHCI systems. “It yields a better infrastructure from a resiliency perspective,” says Wood. “If you don’t have a lot of dynamics in your data center, and you just sort of set it and forget it for your general purpose workloads, then HCI is great, it’s perfect. But when it’s a more dynamic environment, where you don’t know what you’re going to need three to six months from now, disaggregated dHCI can be a better play.”
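For context, six nines of availability leaves barely half a minute of downtime a year. The short sketch below is a quick illustrative calculation of what each availability level permits, not an HPE figure.

```python
# Illustrative arithmetic: downtime budget per year at a given number of nines.
# Six nines (99.9999%) works out to roughly 31.5 seconds per year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def allowed_downtime(nines: int) -> float:
    """Seconds of downtime per year permitted at the given number of nines."""
    availability = 1 - 10 ** -nines
    return SECONDS_PER_YEAR * (1 - availability)

for n in (3, 4, 5, 6):
    print(f"{n} nines: {allowed_downtime(n):.1f} seconds/year")
# 6 nines -> ~31.5 seconds/year
```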
This inevitably prompts the question: if you need to cater for changeable workloads, wouldn’t it make more sense to use the cloud to supplement your existing storage infrastructure before moving entirely off-prem?
Wood says the cloud is absolutely part of HPE’s offering, in the shape of its HPE Cloud Volumes service. This gives customers the benefit of keeping control of their data on-prem while having the ability to flex into the cloud as needed, further reducing the temptation to over-provision. HPE Nimble dHCI also offers support for Google Anthos.
And HPE Nimble dedupe and compression features further reduce the need for over-provisioning, whether on-prem or in the cloud, he says. “Many customers are getting five-to-one dedupe on their backups and primary storage, so it gives you very good capability and resource utilization.”
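What a five-to-one reduction ratio means for provisioning is simple arithmetic; the sketch below uses illustrative numbers, not customer data, to show the logical capacity a given amount of physical flash could present.

```python
# Illustrative arithmetic only: effective capacity from a data reduction ratio.
# At 5:1, 20 TB of physical flash can hold roughly 100 TB of logical data,
# which is why over-provisioning becomes less tempting.
def effective_capacity_tb(physical_tb: float, reduction_ratio: float) -> float:
    """Logical capacity presented for a given physical capacity and ratio."""
    return physical_tb * reduction_ratio

print(effective_capacity_tb(20, 5.0))  # -> 100.0 TB logical
```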
On-prem
Even companies that began as cloud-native are now finding they need to build out on-prem infrastructure, says Wood. “There’s a tipping point that ‘we have to have something on premises for test dev or to scale’,” he explains. Having on-prem infrastructure, supplemented by the cloud, gives customers far more control of their destiny, he says.
Wood cites the example of Australian pet insurance firm PetSure, which had been using AWS to support VDI for its employees and provide remote veterinary services for customers’ pets. It switched to on-prem HPE Nimble dHCI as the Covid-19 pandemic hit, to support newly remote workers while also expanding its mixed workloads.
Similarly, he says, a healthcare customer in the UK opted to process X-rays locally, because it couldn’t afford to take the chance on latency that a cloud-based approach could involve.
“That’s why not everything is going to go to the cloud,” he says. “Because you need that local CPU, memory and disk to give you the speed that your business requires to do your transactions.”
And, HPE would argue, you can now do that with a lot less aggravation, and aggregation, than you could in the past.
This article is sponsored by HPE.