Does mission critical data mean taking things slow? Nah, let’s take it to the Max

Sponsored Feature We’re used to hearing how data is the secret to unlocking your organization’s potential, if only you’re brave enough to let it flow freely.

The problem is that experienced tech leaders and data specialists in large organizations are likely to be far less cavalier about letting financial, credit or medical data “just flow”. As Dell engineering technologist Scott Delandy explains, “They’re relatively risk averse, just by the nature of the types of applications that they run.”

The mission critical data systems underpinning these applications have been built up over years, decades even, with a sharp focus on “quality, stability and reliability,” he says, because “You always expect your credit card to work. You always expect the thing that you bought to ship and – usually – get there on time. And you expect to walk out of the hospital.”

At the same time, these organizations know that they do need to run new workloads – pouring their data into AI, for example, or building out new, more agile but still critical applications on containers and microservices rather than traditional monoliths.

“Now, as they’re being asked to support next generation workloads, they don’t want to have to rebuild everything from scratch,” says Delandy. “They want to take the existing infrastructure, [or just] the operational models that they have in place, and they want to be able to extend those to these new workloads.”

Containerized OS

These were the challenges Dell had in mind when it began to plan the refresh of its PowerMax line, the latest in a series of flagship storage systems that sit at the heart of tech infrastructure in the vast majority of Fortune 500 companies. The most recent update introduces two new appliances, the 2500 and the 8500, which feature new Intel Xeon Scalable CPUs, NVMe dynamic fabric technology, and 100Gb InfiniBand support, as well as a new storage operating system, PowerMaxOS 10.

Given the focus on containerized applications, it shouldn’t be a surprise that the new OS is containerized itself, making it easier for Dell to develop and launch new features, and to share them across both PowerMax and the vendor’s other storage platforms.

“Because of the way the microcode, the software, the operating environment has been developed, that gives us the ability to cross pollinate data services between the different platforms,” says Delandy. One example of this cross-pollination is a new set of file capabilities, which first appeared on the PowerStore platform and is now also available on PowerMax.

But users still face the challenge of how to straddle the traditional VM world and modern applications built around containers – an architecture that was never designed with persistent storage in mind – while accessing the same automated, low-touch, invisible type of workflow.

“And that’s why a lot of the work that we’ve been doing [is] around integrating to things like CSI (Container Storage Interface),” he explains, “by putting a level of automation between the CSI API and the automation that we have on the infrastructure.”

This is based on Dell’s Container Storage Modules technology, “which is the connection between the CSI APIs and the infrastructure APIs, and it allows you to do all of those higher level things around replication, visibility, provisioning, reporting.”
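
From the application side, that plumbing can be pictured with a minimal sketch: a Kubernetes persistent volume claim provisioned through a CSI-backed storage class, using the official Kubernetes Python client. The storage class name here is a hypothetical placeholder, not something Dell ships:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

# Declare the storage the workload needs; the CSI driver and the modules
# behind it translate this into calls against the array's own APIs.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="oracle-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="powermax-block",  # hypothetical class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "500Gi"}
        ),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

The application never touches the infrastructure directly; the claim is the whole interface, which is what makes the “just plug these new workloads in” approach workable.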

This is a key example of “Taking the things that people have already built for and have had in place for decades and saying, ‘Okay, I’m just gonna go ahead and plug these new workloads in.’”

It also allows for active-active Metro replication for both VM and containerized applications, even though CSI block data is managed very differently to VM block data, Delandy explains. “We’ve been doing that in the VM world for like 100 years, right?”

Radical density

The software improvements combine with the hardware improvements to enable what Delandy describes as “radical density”, offering more effective capacity with less physical storage, to the tune of 4PB of effective capacity in a 5U enclosure, or 800TB of effective storage per rack unit.
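
Those headline figures are easy to sanity check, and pairing them with the 4:1 data reduction rate quoted later in the piece gives a rough feel for the physical flash behind them – a back-of-the-envelope calculation, assuming decimal units:

```python
# Quick check of the quoted density figures (1PB = 1,000TB).
effective_tb = 4 * 1000      # 4PB of effective capacity
rack_units = 5               # in a 5U enclosure

print(effective_tb / rack_units)   # 800.0TB effective per rack unit

# At the 4:1 reduction rate cited later, the 4PB effective figure
# implies roughly 1PB of physical flash (assumed ratio).
print(effective_tb / 4)            # 1000.0TB physical
```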

One significant contributor to this is the ability to support higher density flash, while also supporting “very granular capacity upgrades”.

Key to this is Flexible RAID, which allows single drives to be added to a pre-existing RAID group. This means, “When we put an initial configuration onto a user’s floor, we can use very dense flash technology, because we know if we start off with 15 terabyte drives, we’ll see 30 terabyte drives in the next release. We know that when the customer needs to upgrade, we can just add another 15 terabytes versus having to add 150 terabytes.”
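
Put into numbers, the difference in upgrade granularity looks something like the sketch below, which follows Delandy’s own figures; the ten-drive width of a traditional RAID group is our assumption, implied by his 150 terabyte example:

```python
import math

drive_tb = 15        # current-generation drive size
group_width = 10     # assumed width of a traditional RAID group
needed_tb = 20       # the modest capacity bump the customer actually wants

# Traditional RAID: capacity arrives a whole group at a time.
group_tb = drive_tb * group_width
traditional_tb = math.ceil(needed_tb / group_tb) * group_tb

# Flexible RAID: single drives join the existing group.
flexible_tb = math.ceil(needed_tb / drive_tb) * drive_tb

print(traditional_tb)  # 150TB purchased to get a 20TB bump
print(flexible_tb)     # 30TB purchased for the same bump
```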

Further flexibility comes courtesy of Dell’s Advanced Dynamic Media Enclosure technology, which decouples the compute and storage elements of the PowerMax appliance. This allows more options on the balance of compute nodes versus storage capacity, as well as scalability. It also heads off the dilemma users face when an array starts topping out on performance: with no way to upgrade the controllers, they are forced to add another entire array.

But even with the improvements that have led to this radical density, the remorseless growth of data means that admins still have to consider just how much they want to keep on their premium storage platforms. PowerMax has had multi-cloud capabilities since its previous generation of appliances, with the ability to connect in and copy or move data from primary block stores on site to an S3 object store. “That can either be a private object store – like, in the Dell world, it could be a PowerScale, or it could be an ECS platform. Or it could be a cloud provider.”
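
To make the mechanics concrete, here is a hedged sketch of pushing a snapshot export to an S3-compatible endpoint using boto3. Every name in it – the endpoint, bucket, and file path – is hypothetical, and PowerMax’s cloud mobility feature does this natively rather than through user scripting:

```python
import boto3

# S3-compatible target; the endpoint URL is a made-up stand-in for an
# on-prem object store (such as an ECS endpoint) or a cloud provider.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",
)

# Upload a snapshot export so the primary array can reclaim the capacity.
s3.upload_file(
    Filename="/exports/oracle_snap.img",  # hypothetical export path
    Bucket="array-snapshots",             # hypothetical bucket
    Key="oracle/oracle_snap.img",
)
```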

The latest generation, Delandy continues, brings higher performance, resiliency, and high availability. But also, he says, “a better understanding of the use cases.” For example, analysis of users’ arrays suggests up to a quarter of capacity is taken up with snapshot data. There are perfectly good reasons why companies want to keep snapshots, but it also makes perfectly good sense to move them off the primary storage and into the cloud, for example.

“Now you can run more transactional databases, more VMs, more Oracle, more SQL, more applications that need the throughput and the processing power of the array versus just holding all this stale, static data.”

It’s big, but is it secure?

Whether the data is transactional or static, security is the “number one thing” that users want to talk about these days, Delandy says. Often the conversation is simply a question of highlighting to users the pre-existing features in the system: “It’s really helping them understand what things they can do and what types of protection they already have, and how to enable what are the best practices around the security settings for that.”

But the biggest concern customers have is “somebody getting into the environment and not finding out fast enough that you’ve been breached.”

Two new features are crucial here. One is the inclusion of a hardware root of trust, which sees cryptographic keys fused onto the controller chips. Everything then has to be authenticated against these, from boot-ups to upgrades and driver updates. This significantly reduces the risk of a bad actor obtaining backdoor access to the systems.
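
The principle is simple to illustrate, even though the real mechanism lives in silicon and firmware rather than application code. This toy sketch generates a throwaway Ed25519 key pair to stand in for the key fused at manufacture; none of it is Dell’s implementation:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In real hardware the public key is fused into the controller at
# manufacture; a throwaway pair keeps this sketch self-contained.
signing_key = Ed25519PrivateKey.generate()
fused_public_key = signing_key.public_key()

def verify_image(image: bytes, signature: bytes) -> bool:
    """Allow the image to run only if it was signed by the trusted key."""
    try:
        fused_public_key.verify(signature, image)  # raises if tampered
        return True
    except InvalidSignature:
        return False

# A signed OS image passes; a tampered one is refused before it can boot.
image = b"hypothetical OS update payload"
signature = signing_key.sign(image)
print(verify_image(image, signature))         # True
print(verify_image(image + b"!", signature))  # False
```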

In addition, PowerMax now uses anomaly detection to monitor the storage and detect changes to the types of datasets being written – including any failure to meet the 4:1 data reduction rates the system’s updated reduction algorithms can deliver. “One of the things that we look at is what’s the reducible versus non-reducible rate, and how does that change. We have it set to be so sensitive that if we start to see changes in reducibility rates, that can indicate that something is being encrypted,” explains Delandy.
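
A toy version of that signal is straightforward to sketch: encrypted data looks random, so it stops compressing and deduplicating, and a sustained drop in the reduction ratio stands out against the recent baseline. The window size and alert threshold below are arbitrary illustrations, not Dell’s tuning:

```python
from collections import deque

WINDOW = 60          # intervals to baseline against (assumed)
ALERT_DROP = 0.25    # flag a 25 percent drop in reduction ratio (assumed)

history = deque(maxlen=WINDOW)

def observe(logical_bytes: int, physical_bytes: int) -> bool:
    """Record one interval's reduction ratio; return True if anomalous."""
    ratio = logical_bytes / physical_bytes   # e.g. 4.0 means 4:1 reduction
    anomalous = False
    if len(history) == WINDOW:
        baseline = sum(history) / WINDOW
        if ratio < baseline * (1 - ALERT_DROP):
            anomalous = True                 # possible encryption under way
    history.append(ratio)
    return anomalous
```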

It’s a huge advantage for customers if they can get an indication of ransomware at work within minutes, because the encryption stage of an attack typically plays out over days, weeks, or even months.

The ability to introduce automation, and balance both the long- and short-term view, is crucial to the whole PowerMax ethos. Dell has sought to take what was already a highly reliable platform and make it simultaneously a highly flexible platform on which it can deliver rapid innovation.

But as Delandy says, Dell has to ensure it is taking a deliberate, targeted approach to actually solving customer problems. Or, put another way, “We’re not just doing it because it’s cool.”

Sponsored by Dell.