During his time working at Amazon and then Facebook, Avinash Lakshman played a starring role in taming the complex and voracious data management needs of these hyperscale cloud giants.
At Amazon, where he worked between 2004 and 2007, Lakshman co-invented Dynamo, a key-value storage system designed to deliver a reliable, ‘always-on’ experience for customers at massive scale. At Facebook (from 2007 to 2011), he developed Cassandra to power high-performance search across messages sent by the social media platform’s millions of users worldwide.
The ideas he developed along the way around scalability, performance and resilience fed directly into his own start-up, Hedvig, which he founded in 2012 – but this time, Lakshman applied them instead to the data storage layer.
By the time Hedvig was acquired by Commvault for $225 million in October 2019, it had attracted some $52 million in venture capital funding and a stellar line-up of corporate customers including Scania, Pitt State and LKAB. As Commvault CEO Sanjay Mirchandani said on completing the deal: “Hedvig’s technology is in its prime. It has been market tested and proven.”
What sets Hedvig apart is its approach to software-defined storage (SDS), says Ediz Ertekin, previously Hedvig’s senior vice president of worldwide sales and now vice president of global sales for emerging technologies at Commvault. SDS helps companies to better manage data that is becoming increasingly fragmented across multiple silos, held in public and private clouds and in on-premise data centres.
“What companies are looking for in this multi-cloud world is a unified experience of managing data storage that simplifies what is fundamentally very complex, and becoming more complex with every day that passes,” he says.
The premise for SDS is pretty straightforward: by abstracting, or decoupling, software from the underlying hardware used to store their data, organisations can unify storage management and services across the diverse assets contained in their hybrid IT environments.
It’s an idea many IT decision-makers find attractive, because it gives mainstream corporate IT access to the kind of automated, agile and cost-effective storage approaches that power hyperscale pioneers like Amazon, Facebook and Google.
According to estimates from IT market research company Gartner, around 50 per cent of global storage capacity will be deployed as SDS by 2024. That’s up from around 15 per cent in mid-2019.
As Ertekin explains, the Hedvig Distributed Storage Platform transforms commodity x86 or ARM servers into a storage cluster that can scale from just a few nodes to thousands of nodes, covering both primary and secondary storage. Its patented Universal Data Plane architecture, meanwhile, is capable of storing, protecting and replicating block, file and object data across any number of private and/or public cloud data centres. In short, Hedvig’s technology enables customers to consolidate their storage workloads and manage them as a single solution, using low-cost, industry-standard hardware.
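Scaling a storage cluster from a handful of nodes to thousands typically relies on consistent hashing, the placement technique popularised by Lakshman’s earlier Dynamo work: adding a node reassigns only a small fraction of the data instead of forcing a full reshuffle. The sketch below is a generic illustration of that idea, not Hedvig’s actual implementation; all class and node names are hypothetical.

```python
import hashlib
from bisect import bisect, insort

class HashRing:
    """Minimal consistent-hash ring (illustrative only, not Hedvig's code).
    Each server owns many 'virtual node' points on the ring, so data
    spreads evenly and adding a server moves only the keys that now
    fall on its points."""

    def __init__(self, vnodes=100):
        self.vnodes = vnodes          # virtual nodes per server
        self.ring = []                # sorted list of (hash, node)
        self.keys = []                # sorted hashes for binary search

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            insort(self.ring, (self._hash(f"{node}#{i}"), node))
        self.keys = [h for h, _ in self.ring]

    def node_for(self, key):
        # A key belongs to the first ring point clockwise of its hash.
        idx = bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing()
for n in ("node-a", "node-b", "node-c"):
    ring.add_node(n)
owner = ring.node_for("volume-42/block-7")   # deterministic placement
```

The useful property for a scale-out cluster is visible in the last step: the same key always resolves to the same node, and when a new node joins, a key’s owner either stays put or moves to the newcomer, never to a third node.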
An urgent requirement
At many organisations, a new approach isn’t just attractive, it’s an urgent requirement, says Ertekin. That’s because IT leaders are quickly discovering that established storage management approaches simply don’t sit well with the multi-cloud and modern application strategies that their organisations are adopting in order to fulfil their digital transformation ambitions.
Hybrid and multi-cloud deployments are increasing rapidly, Ertekin points out. And while the cloud-native and containerised applications that frequently underpin new, business-critical digital services may speed up innovation by supporting faster, more agile DevOps approaches, they also add to storage complexity.
“These trends are leading to more fragmentation, and with more fragmentation comes less visibility and more friction,” he says. “Traditional storage management simply can’t keep pace with new requirements.”
This leaves IT teams having to manage a vast range of disparate storage assets that are unable to ‘talk’ to each other because they’re not interoperable, he says. They’re also wrestling with multiple management tools to oversee that patchwork of storage systems. On top of that, older ways of working make it extremely difficult to migrate data back and forth between on-premise systems and cloud environments, and between different public clouds. “There’s a distinct lack of portability that IT teams know they must address, but if they can’t see what data resides where, that’s going to be a big struggle,” he says.
By contrast, SDS provides companies with a virtual delivery model for storage that is hardware-agnostic and provides greater automation and more flexible provisioning. As Ertekin explains it, application programming interfaces (APIs) enable Hedvig’s Distributed Storage Platform to connect to any operating system, hypervisor, container or cloud, enabling faster, more proactive orchestration by IT admins and automating many tasks, such as provisioning, that they may have previously needed to carry out manually.
“In this way, organisations using Hedvig don’t just experience the simplification that they need, but also better resource optimisation and substantial cost reductions, too,” he says.
All this has important implications for resilience, Ertekin adds: “When you have that visibility into where data is stored and the ability to move it around as necessary, you’re better equipped to deal with the potential impact of downtime.”
“Our customers can define their data protection policies to deal with everything from a single node failure to downtime across an entire site,” he continues. “For example, in the case of a production workload, where business continuity is critical, you might decide to keep multiple copies of the data, written across multiple nodes, multiple sites or multiple geographies, so it’s never necessary to physically move data if you run into problems.”
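The policy Ertekin describes is, in essence, a rule for spreading replicas across failure domains so that losing a node, or even a whole site, still leaves live copies elsewhere. The following is a minimal generic sketch of that placement logic, assuming a made-up site and node layout; it is not Hedvig’s actual policy engine.

```python
from itertools import cycle

def place_replicas(replicas, sites):
    """Round-robin replica copies across sites before reusing a site,
    so a single node or single-site failure never takes out all
    copies. `sites` maps site name -> list of node names
    (layout is illustrative only)."""
    total = sum(len(nodes) for nodes in sites.values())
    if replicas > total:
        raise ValueError("not enough nodes for requested replica count")
    placements, used = [], set()
    site_iter = cycle(sites.items())
    while len(placements) < replicas:
        site, nodes = next(site_iter)
        node = next((n for n in nodes if (site, n) not in used), None)
        if node is not None:
            placements.append((site, node))
            used.add((site, node))
    return placements

sites = {
    "dc-london": ["n1", "n2"],
    "dc-frankfurt": ["n3", "n4"],
    "aws-eu-west": ["n5"],
}
copies = place_replicas(3, sites)   # one copy lands in each site
```

With three replicas and three sites, each site holds one copy, so reads can be served locally and no data needs to be physically moved after a failure, which matches the behaviour described in the quote above.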
At a time when it’s never felt more difficult for IT decision-makers to understand what the future might hold, a big selling point for SDS is the predictability it can bring, according to Ertekin. “With SDS, there’s more flexibility and agility to adapt to changing business and IT requirements with confidence and without having to worry about the implications for performance, scale and cost,” he says.
For a start, the issue of remote working is currently high on CIO agendas, driven by Covid-related stay-at-home orders. SDS can provide the portability to ensure that data remains accessible to the employees who need it, and can easily be moved or replicated to where it is needed most. At the same time, strict policies can still be applied to that data, ensuring compliance with corporate governance guidelines, not to mention wider legal and regulatory obligations.
On the cost front, says Ertekin, abstracting storage management from the hardware layer using software relieves organisations of many of the time and cost burdens associated with regular infrastructure refreshes. SDS enables them to rely on commodity hardware, rather than proprietary storage appliances, and to pick and choose between public cloud providers based on pricing. It offers a scale-out solution that allows them to take a more ‘pay as you go’ approach, where matching capacity to performance and cost needs becomes easier to estimate ahead of time.
Some developments, of course, are easier to predict. The burgeoning multi-cloud and containerisation trends show no signs of slowing as companies scramble to deliver new and compelling digital experiences to their customers. In fact, more than four out of five enterprises (81 per cent) are already working with two or more public cloud providers, according to a 2019 Gartner survey, while another study from the firm suggests that three-quarters of global enterprises will be running containerised applications in production by 2022.
As companies increasingly rely on different application delivery models, their storage capabilities will need to be just as diverse, says Ertekin. For example, there will more frequently be a need to add persistent volumes to container environments. SDS delivers integration with container orchestrators (COs) such as Kubernetes to quickly provision the storage that containerised apps need, along with data management and protection for that storage.
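In Kubernetes, a containerised application requests persistent storage through a standard PersistentVolumeClaim, which the cluster satisfies from whatever storage class the SDS backend exposes. The sketch below builds such a claim as a Python dictionary; the manifest structure follows the standard Kubernetes API, but the claim name and the `sds-block` storage class name are hypothetical examples.

```python
import json

def pvc_manifest(name, size_gi, storage_class):
    """Build a standard Kubernetes PersistentVolumeClaim manifest.
    The storage class name passed in is whatever the SDS backend
    registers with the cluster (the value used below is made up)."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

claim = pvc_manifest("orders-db-data", 100, "sds-block")
print(json.dumps(claim, indent=2))
```

Once such a claim is applied to the cluster, the orchestrator asks the storage backend to provision a matching volume automatically, which is exactly the kind of on-demand provisioning for containerised apps described above.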
Nor does the growth in the volume and variety of data that needs to be managed and protected show any sign of slowing.
“The sources might be bare-metal servers, virtual servers, containerised servers or IoT edge devices. Their locations might be centralised data centres, remote and branch offices, edge locations or multiple public clouds – or any combination of these,” says Ertekin. “And the data may be file, block or object-based. It might be streaming data, metadata, primary or secondary data. It will need to be managed for legacy applications, as well as for big data analytics, AI and machine learning.”
This all points to a world where the ability to impose some kind of simplification will be a must-have for many firms, he says. After all, every organisation is now under pressure to achieve the same kind of standardisation, elasticity, scaling and predictability that enables the hyperscale cloud giants to cater to the needs of often millions of users, in the face of huge spikes and troughs in traffic and without missing a beat.
“But to do that,” he warns, “mainstream organisations also need to be able to take advantage of the same technology underpinnings and strategic thinking that got those hyperscale companies to where they are today. And at the heart of that is the process of making a complex task much simpler, less resource-intensive, less costly. That’s what has guided everything we do at Hedvig and continues to guide us.”
This article is sponsored by Commvault.