Enterprise Storage in 2019: Keep those industry predictions rolling

Updated on January 21, 2019 with more predictions from Pure Storage, IBM and Maxta following the addition of Archive 360, Arcserve, NetApp and StorPool predictions earlier in January.

Original introduction: It’s the most wonderful time of the year for crystal-ball gazing, so here are six predictions about IT storage trends in 2019. There are good ideas here from our industry contributors – but do excuse the sometimes liberal application of tincture of marketing.

Purely Pure’s predictions

We have edited these predictions from Patrick Smith, Pure Storage’s EMEA CTO, for brevity.

  1. Hybrid cloud improves: the arrival of a better hybrid architecture will create an environment that lets enterprises combine the agility and simplicity of the public cloud with the enterprise functionality of on-prem. In this hybrid cloud world, applications can be developed once and deployed seamlessly across owned and rented clouds.
  2. Automated, intelligent and scalable storage provisioning makes deploying large-scale container environments to an enterprise data centre possible. As a result of the development of container storage-as-a-service, in 2019, we believe that the new normal will be running production applications in containers irrespective of whether they are state-less or data-rich. Container adoption will increasingly be driven by the demand for cost-effective deployments into hybrid cloud environments with the ability to flexibly run applications either on-premises or in the public cloud.
  3. We expect NVMe over Fabrics to move from niche deployments and take a step towards the mainstream next year. It makes everything faster – databases, virtualised and containerised environments, test/dev initiatives and web-scale applications. With price competitive NVMe-based storage providing consistent low latency performance, the final piece of the puzzle will be the delivery of an end-to-end capability through the addition of NVMe-oF for front-end connectivity.

Immense Indigo’s indicators for 2019

What does Eric Herzog, VP, Product Marketing and Management, IBM Storage Systems, think we should watch out for in 2019? We’ve prepared a précis of his thoughts.

  1. All primary storage workloads should sit on flash. NVMe will also expand within the storage industry as a high-performance protocol: in storage systems, servers, and storage area network fabrics.
  2. Storage will be ‘cloudified’ with the capability of storage to transparently move data from on-premises configurations to public clouds and across private cloud deployments. You will be able to enjoy the application and workload SLAs of a private cloud and also the savings public clouds drive for backup and archival data.
  3. Data protection goes beyond backup and restore to be focused on how you can leverage secondary storage datasets (backups, snapshots, and replicas) to be used for DevOps, analytics and testing workloads.
  4. Storage processes should be automated across the board; for storage admins, DevOps, Docker experts, application owners, server and virtual machine administrators, and more, using APIs for automation and self-service.
  5. To enjoy the benefits of AI, your storage must have the ultimate in performance, availability and reliability. AI, at its core, requires massive amounts of data being processed accurately and reliably, 24×365. Storage is essential for this.

Maxta’s 2019 predictions

Hyper-converged infrastructure (HCI) software supplier Maxta’s CEO and founder Yoram Novick predicts that:

  1. HCI will add software-defined storage capabilities to bypass the cluster size limitation imposed by some virtualisation software. HCI servers will be partitioned into “App Servers” (those with application VMs, virtualisation software, and possibly storage) and “Data Servers” (those with storage only) under a common management framework, so the same HCI software can scale to thousands of servers in the same cluster.
  2. Hybrid HCI will evolve to run the same applications on premises and in the public cloud. Using replication, recovery in the public cloud can be instantaneous with a near-synchronous recovery point and five-nines availability.
  3. HCI will develop to support containers as well as virtual machines with an Abstraction Converged Infrastructure, or ACI. (Hint: Maxta will do this.)
  4. Hyperconvergence appliance vendors will not provide prospects with all the benefits of a true software approach – no hardware lock-in – that HCI software vendors do.

We edited his predictions for brevity.

Three Archive360 predictions

Archive360 archives data up to the Azure cloud. It reckons these things will happen in 2019:

1. To achieve defensible disposition of live data and ongoing auto-categorization, more companies will turn to a self-learning or “unsupervised” machine learning model, in which the program literally trains itself based on the data set provided. This means there will be no need for a training data set or training cycles.  Microsoft Azure offers machine-learning technology as an included service. 

2. Public cloud Isolated Recovery will help defeat ransomware. It refers to the recovery of known good/clean data and involves generating a “gold copy” pre-infection backup. This backup is completely isolated and air-gapped to keep the data pristine and available for use. All users are restricted except those with proper clearance. WORM drives will play a part in this.

3. Enterprises will turn to cloud-based data archiving in 2019 to respond to eDiscovery requests in a legally defensible manner, with demonstrable chain of custody and data fidelity when migrating data.
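The “unsupervised” model in Archive360’s first prediction can be illustrated with a toy sketch: documents are grouped purely by their similarity to each other, with no labelled training set or training cycles. The tokeniser, similarity measure and threshold below are illustrative assumptions, not any vendor’s algorithm; a production system would use a managed service such as the machine-learning tooling Azure bundles.

```python
# Toy "unsupervised" auto-categorisation: clusters emerge from the data
# itself -- no labels, no training cycles. Greedy single-pass clustering
# with a Jaccard similarity threshold (both are illustrative assumptions).

def tokens(doc):
    """Lower-case bag of words for one document."""
    return set(doc.lower().split())

def jaccard(a, b):
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(docs, threshold=0.3):
    """Each doc joins the first cluster whose seed document is similar
    enough; otherwise it starts a new cluster."""
    clusters = []  # list of (seed_tokens, member_docs)
    for doc in docs:
        t = tokens(doc)
        for seed, members in clusters:
            if jaccard(seed, t) >= threshold:
                members.append(doc)
                break
        else:
            clusters.append((t, [doc]))
    return [members for _, members in clusters]

docs = [
    "quarterly invoice payment due accounts",
    "invoice overdue payment accounts receivable",
    "server outage incident postmortem report",
    "incident report server outage root cause",
]
groups = cluster(docs)  # two groups: billing docs and outage docs
```

Real categorisation engines use far richer features (embeddings, metadata, language models), but the essential point of the prediction is captured here: the grouping is derived from the data set provided, nothing more.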

Three Arcserve predictions for 2019

1. Public cloud adoption will be scaled back as users face unexpected and significant fees for moving and recovering data in public clouds. Users will reduce public cloud use for disaster recovery (DR) and instead use hybrid cloud strategies and cloud service providers (CSPs) that can offer private cloud solutions with predictable cost models.

2. Data protection offerings will incorporate artificial intelligence (AI) to predict and avert unplanned downtime from physical disasters before they happen. DR processes will get automated, intelligently restoring the most frequently accessed, cross-functional or critical data first and proactively replicate it to the cloud before a downtime event occurs.

3. Self-managed disaster recovery as a service (DRaaS) will increase in prominence as it costs less than managed DRaaS. Channel partners will add more self-service options to support growing customer demand for contractually guaranteed recovery time and point objectives (RTOs/RPOs), expanding their addressable market free of the responsibility of managing customer environments.
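The restore-prioritisation idea in Arcserve’s second prediction – bring back the most frequently accessed and most critical data first – reduces to a scoring and sorting problem. The field names and weights below are assumptions for illustration, not Arcserve’s API.

```python
# Sketch of prioritised disaster recovery: order restore jobs so the
# most critical and most frequently accessed datasets come back first.
# The scoring weights are illustrative assumptions.

def restore_order(datasets, w_critical=10, w_access=1):
    """Sort datasets by a simple priority score, highest first."""
    def score(d):
        return w_critical * d["criticality"] + w_access * d["accesses_per_day"]
    return sorted(datasets, key=score, reverse=True)

datasets = [
    {"name": "archive", "criticality": 1, "accesses_per_day": 2},
    {"name": "orders",  "criticality": 5, "accesses_per_day": 900},
    {"name": "payroll", "criticality": 5, "accesses_per_day": 40},
]
plan = restore_order(datasets)
# "orders" (score 950) is restored before "payroll" (90) and "archive" (12)
```

The AI part of the prediction amounts to learning these scores from telemetry rather than hand-setting the weights, but the recovery workflow consumes them the same way.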

NetApp’s five predictions for 2019

These predictions are in a blog which we have somewhat savagely summarised.

1. Most new AI development will use the cloud as a proving ground as there is a rapidly growing body of AI software and service tools there.

2. Internet of Things (IoT) edge processing must be local for real-time decision-making. IoT devices and applications – with built-in services such as data analysis and data reduction – will get better, faster and smarter about deciding what data requires immediate action, what data gets sent home to the core or to the cloud, and what data can be discarded.

3. With containerisation and “server-less” technologies, the trend toward abstraction of individual systems and services will drive IT architects to design for data and data processing and to build hybrid, multi-cloud data fabrics rather than just data centres. Decision makers will rely more and more on robust yet “invisible” data services that deliver data when and where it’s needed, wherever it lives, using predictive technologies and diagnostics. These services will look after the shuttling of containers and workloads to and from the most efficient service provider solutions for the job.

4. Hybrid, multi-cloud will be the default IT architecture for most larger organisations while others will choose the simplicity and consistency of a single cloud provider. Larger organisations will demand the flexibility, neutrality and cost-effectiveness of being able to move applications between clouds. They’ll leverage containers and data fabrics to break lock-in.

5. New container-based cloud orchestration technologies will enable better hybrid cloud application development. Development will produce applications for both public and on-premises use cases: no more porting applications back and forth. This will make it ever easier to move workloads to where data is being generated, rather than moving data to the workloads, as has traditionally been the case.

StorPool predicts six things

1. Hybrid cloud architectures will pick up the pace in 2019. But for more demanding workloads and sensitive data, on-premises is still king. In other words, the future is hybrid: on-premises takes the lead in traditional workloads with cloud storage as the backup option; for new-age workloads, cloud is the natural first choice and on-prem is added when performance, scale or regulation demands kick in.

2. Software-defined storage (SDS) will gain majority market share over the next 3 to 5 years, leaving SAN arrays with a minority share. SDS buyers want to reduce vendor lock-in, make significant cost optimisations and accelerate application performance.

3. Fibre Channel (FC) is becoming an obsolete technology and adds complexity in an already complex environment, being a separate storage-only component. In 2019, it makes sense to deploy a parallel 25G standard Ethernet network instead of upgrading an existing Fibre Channel network. At scale, the cost of the Ethernet network is 3-5 per cent of the whole project and a fraction of the cost of a Fibre Channel alternative.

4. We expect next-gen storage media to gain wider adoption in 2019. Its primary use-case will still be as cache in software-defined storage systems and database servers.

On a parallel track, Intel will release large capacity Optane-based NVDIMM devices, which they are promoting as a way to extend RAM to huge capacities, at low cost, through a process similar to swapping. The software stack to take full advantage of this new hardware capability will slowly come together in 2019.

There will be a tiny amount of proper niche usage of persistent memory, where it is used for more than a very fast SSD.

5. ARM servers enter the data centre. However this will still be a slow pickup, as wider adoption requires the proliferation of a wider ecosystem. The two prime use-cases for ARM-based servers this year are throughput-driven, batch processing workloads in the datacenter and small compute clusters on “the edge.”

6. High core-count CPUs appear. Intel and AMD are in a race to provide high core-count CPUs for servers in the datacenter and in HPC. AMD announced its 64-core EPYC 2 CPU with an overhauled architecture (9 dies per socket vs EPYC’s 4 dies per socket). At the same time, Intel announced its Cascade Lake AP CPUs, which are essentially two Xeon Scalable dies in a single (rather large) package, scaling up to 48 cores per socket. Both products represent a new level of per-socket compute density and will hit the market in 2019.

While good for the user, this is “business as usual” and not that exciting.
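StorPool’s fourth point – persistent memory used as more than a very fast SSD – rests on byte-addressability: a mapped region can be updated in place like RAM yet survive the process. The sketch below memory-maps an ordinary file as a stand-in for an NVDIMM region; real persistent-memory code would also need explicit cache flushing (e.g. via libpmem) to guarantee durability, and the file name here is an assumption.

```python
# Minimal sketch of byte-addressable persistence: map a region, update a
# single byte in place, and flush it to the media. An ordinary file stands
# in for an NVDIMM region in this illustration.
import mmap

path = "pmem_region.bin"
size = 4096

# Create and size the backing "region" once (zero-filled).
with open(path, "wb") as f:
    f.truncate(size)

# Map it and update a counter in place, as if it were ordinary memory.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), size) as m:
        m[0] = (m[0] + 1) % 256  # byte-addressable in-place update
        m.flush()                # msync: push the change to the media
```

Contrast this with an SSD, where the same one-byte update would mean reading, modifying and rewriting a whole block through the filesystem – which is why the software stack (DAX filesystems, libpmem and friends) has to catch up before the hardware pays off.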

The disappearing data lake

Don Foster, senior director at Commvault: “Not fully knowing or understanding what is being placed in a data lake, why it is stored, and if it is even of proper data integrity will have proved untenable and inefficient for mining and insight gathering. The data lake will begin to disappear in favor of technology which can discover, profile, map data where it lives, reducing storage and infrastructure costs while implementing data strategies that can truly provide insights to improve operations, mitigate risks and potentially lead to new business outcomes.”

Tools for the multi-cloud

Jon Toor, CMO at Cloudian: “IBM’s acquisition of Red Hat will reverberate throughout 2019, giving enterprises more options for designing a multi-cloud strategy and highlighting the importance of data management tools that can work across public cloud, private cloud and traditional on-premises environments.” 

Capacity

Gary Watson, CTO and founder of Nexsan: “Spinning hard drives are getting bigger, and they are still 5-10x cheaper than flash per terabyte…The reality is that the future in 2019 is most likely to be built on a world of both flash and spinning hard drives.

“Next year, we will … see capacity become a major concern for organisations, fuelled by data growth. … As we store more and more data moving forward, it’s important to protect all of it, and the cloud may not always be suitable. Finding the perfect balance between cloud and on-premises storage, for short-term and long-term data alike, will drive storage needs for the data boom and software growth in 2019.” 

A permanent home for edge computing

Alan Conboy, Office of the CTO, Scale Computing: “According to Statista the global IoT market will explode from $2.9 trillion in 2014 to $8.9 trillion in 2020. That means companies will be collecting data and insights from nearly everything we touch from the moment we wake up and likely even while we sleep. 

“In 2019, edge computing will require a new level of intelligence and automation to make those platforms practical. Where once only a smidge of data was created and processed outside a traditional data center, we will soon be at a stage where nearly every piece of data will be generated far outside the data center. This amount of data will create a permanent home for edge computing.” 

Rethinking converged solutions

Gijsbert Janssen van Doorn, technology evangelist, Zerto: “In 2018 we saw hardware vendors trying to converge the software layer into their product offering. However, all they’ve really created is a new era of vendor lock-in – a hyper-lock-in in many ways. 

“In 2019 organisations will rethink what converged solutions mean. As IT professionals increasingly look for out-of-the-box ready solutions to simplify operations, we’ll see technology vendors work together to bring more vendor-agnostic, comprehensive converged systems to market.”

Veeam has a little list

Data protector Veeam has made these predictions for 2019.

  • Multi-Cloud usage and exploitation will rise
  • Flash memory supply shortages will ease, and prices will fall, in 2019
  • Predictive Analytics, based on telemetry data, will become mainstream and ubiquitous
  • The “versatalist” (or generalist) admin role will increasingly become the new operating model for the majority of IT organizations
  • The top 3 data protection vendors in the market continue to lose market share in 2019 (thought by us to be Commvault, Dell EMC, and Veritas)
  • The arrival of the first 5G networks will create new opportunities for resellers and CSPs to help collect, manage, store and process the higher volumes of data.