
KumoScale beats Ceph hands down on block performance

Cephalopod – Wikipedia public domain image: https://commons.wikimedia.org/wiki/File:C_Merculiano_-_Cephalopoda_1.jpg

Kioxia today published test results showing its KumoScale storage software runs block access on an all-flash array up to 60 times faster than Ceph, thanks to NVMe/TCP transport.

Update: Block mode test details from Joel Warford added. 5.31 pm BST October 8.

Ceph is based on an object store with file and block access protocols layered on top and does not support NVMe-oF access. The Red Hat-backed open source software is enormously scalable and provides a single distributed pool of storage, with three copies of data kept for reliability.

Joel Dedrick, GM for networked storage software at Kioxia America, said in a statement: “Storage dollars per IOPS and read latency are the new critical metrics for the cloud data centre. Our recent test report highlights the raw performance of KumoScale software, and showcases the economic benefits that this new level of performance can deliver to data centre users.”

Kioxia says KumoScale software can support about 52x more clients per storage node in IOPS terms than Ceph software. This is at much lower latency and requires fewer storage nodes, resulting in a KumoScale $/IOPS cost advantage.

We think comparing Ceph with KumoScale is like seeing how much faster a sprint athlete runs than a person on crutches. NVMe-oF support is the crucial difference. Ceph will get NVMe-oF support one day – for example, an SNIA document discusses how to accomplish this. And at that point a KumoScale vs Ceph performance comparison would be more even. 

But until that point, using Ceph block access with performance-critical applications looks to be less than optimal.

KumoScale benchmarks

The Kioxia tests covered random read and write IOPS and latencies, and the findings were:

  • KumoScale software read performance is 12x faster than Ceph software while reducing latency by 60 per cent.
  • KumoScale software write performance is 60x faster than Ceph software while reducing latency by 98 per cent.
  • KumoScale software supports 15x more clients per storage node than Ceph at a much lower latency in the testing environment.

At maximum load KumoScale delivered 2,293,860 IOPS compared to Ceph’s 43,852 IOPS. This is about 52x better performance.

According to the Kioxia report, the tests used an identical benchmark process, server clusters and SSDs, with a networked environment consisting of three storage nodes, each containing five 3.84TB Kioxia CM5 SSDs – roughly 20TB of storage per node.

The test stimulus and associated measurements were provided by four test clients. All storage nodes used 100 GbitE via a single 100GbitE network switch. An identical hardware configuration was used for all tests, and four logical 200GiB volumes were created – one assigned to each of the test clients. 

The software was KumoScale v3.13 and Ceph v14.2.11, with NVMe-oF v1.0a and a TCP/IP transport. For KumoScale software, volumes were triply replicated (i.e. a single logical volume with one replica mapped to each of three storage nodes). For Ceph software, volumes were sharded using the ‘CRUSH’ hashing scheme, with a replication factor set to three.

Kioxia’s Director of Business Development Joel Warford told us: “The performance benchmarks were run in block mode on both products for a direct comparison. … Our native mapper function uses NVMe-oF to provision block volumes over the network which provides the lowest latency and best performance results. KumoScale can also support shared files and objects using hosted open source software on the KumoScale storage node for non-block applications, but those are not high performance use cases.”

Also: “We used FIO as the benchmark to generate the workloads.”
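
The report does not spell out the FIO job parameters here, but a random-IO job of this general shape would generate the stimulus described. This is a minimal sketch only – the block size, queue depth, job count, runtime and device path are assumptions, not Kioxia's published settings:

```python
# Hypothetical sketch of driving an FIO random-read run against an NVMe-oF
# namespace from a test client. Block size, queue depth, job count and runtime
# are assumptions -- the Kioxia report does not publish them here.
import json
import subprocess

def run_fio(device: str, pattern: str = "randread") -> dict:
    """Run a 4KiB random IO job against `device` and return parsed JSON results."""
    cmd = [
        "fio",
        "--name=nvmeof-bench",
        f"--filename={device}",     # e.g. the NVMe-oF block device, /dev/nvme1n1
        f"--rw={pattern}",          # randread or randwrite
        "--bs=4k",                  # assumed 4KiB IO size
        "--iodepth=32",             # assumed queue depth per job
        "--numjobs=8",              # assumed parallel jobs per client
        "--ioengine=libaio",
        "--direct=1",               # bypass the page cache
        "--time_based", "--runtime=300",
        "--group_reporting",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    result = run_fio("/dev/nvme1n1")
    job = result["jobs"][0]
    print("read IOPS:", job["read"]["iops"])
    print("read mean latency (us):", job["read"]["lat_ns"]["mean"] / 1000)
```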

Ceph’s IO passes through the host Linux OS IO stack and Ceph’s own software layers, whereas the KumoScale volumes are accessed over NVMe-oF, which has a much leaner data path. (NVMe-oF can also use RDMA – Remote Direct Memory Access – transports, though these tests ran over TCP.)

Optane Persistent Memory vs Optane SSDs – confused? Then read on

Sponsored Intel® Optane™ is a memory technology that has the potential to boost performance in key areas of the data centre, thanks to attributes that combine access speeds close to that of DRAM with the non-volatility or persistence characteristics of a storage medium.

This means Optane™ technology can be used in a number of ways – to build high-speed storage products or to create a new tier in the memory hierarchy between DRAM and storage. The latter option boosts application performance in a more cost-effective manner than simply throwing more highly expensive DRAM at the problem.

But what is Optane™? The underlying technology is based on an undisclosed material that changes its resistance in order to store bits of information. The key fact is that it is unrelated to the way that DRAM or flash memory is manufactured and does not rely on transistors or capacitors to store charge. This is why it has such different characteristics.

The past few years have seen Intel bring to market a number of Optane™-based products, and these have inevitably caused a little confusion because they all carry the Optane™ branding even though they are aimed at different use cases.

For example, Intel® Optane™ Memory is a consumer product in an M.2 module format that slots into a PC and is used to accelerate disk access for end user applications. It does this through the Intel® Rapid Storage Technology driver which recognises the most frequently used applications and data and automatically stores these in the Optane™ memory for speedier subsequent access.

This should not be confused with Optane™ DC Persistent Memory, a data centre product designed for Intel servers which fits into standard DIMM slots alongside DDR4 DRAM memory modules (more on this, below).

Both of these memory-focused products are distinct from the solid state drive (SSD) storage product lines that Intel manufactures using Optane™ silicon. These are split into Optane™ SSDs for Client systems and server-focused Optane™ SSDs for Data Centre. Both essentially use Optane™ memory components as block-based storage, like the flash memory components in standard SSDs, but with the advantage of lower latency.

Expanding your memory options

Optane™ DC Persistent Memory is perhaps the most interesting use of this type of memory technology. Support is built into the memory controller inside the Second Generation and Third Generation Intel® Xeon® Scalable processors, so that it can be used in combination with standard DDR4 memory modules.

It should be pointed out that Optane™ can be used like memory because of the way it is architected. The memory cells are addressable at the individual byte level, rather than having to be written or read in entire blocks as is the case with NAND flash.

The characteristics of Optane™ are such that its latency is higher than DDR4 DRAM, meaning it is slower to access, but close enough to DRAM speed that it can be treated as a slower tier of main memory. For example, Optane™ DC Persistent Memory has latency up to about 350ns, compared with 10 to 20ns for DDR4, but this still makes it up to a thousand times faster than the NAND flash used in the vast majority of SSDs.

Another key attribute of Optane™ DC Persistent Memory is that it is currently available in higher capacities than DRAM modules, at up to 512GB. It also costs less than DRAM, with some sources putting the cost of a 512GB Optane™ DIMM at about the same price as the highest-capacity 256GB DRAM DIMM.

This combination of higher capacity and lower price allows Intel customers to expand the available memory for applications at a lower overall cost. An example configuration could be six 512GB Optane™ DIMMs combined with six 256GB DDR4 DIMMs to provide up to 4.5TB of memory per CPU socket, which would mean 18TB for a four-socket server.
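
As a quick check of the arithmetic behind that example configuration:

```python
# Worked check of the example configuration in the text:
# six 512GB Optane DIMMs plus six 256GB DDR4 DIMMs per CPU socket.
optane_dimms, optane_gb = 6, 512
dram_dimms, dram_gb = 6, 256

per_socket_gb = optane_dimms * optane_gb + dram_dimms * dram_gb
print(per_socket_gb)                 # 4608 GB, i.e. 4.5 TB per socket
print(4 * per_socket_gb / 1024)      # 18.0 TB for a four-socket server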

Having a larger memory space improves the performance of workloads that involve large datasets. More of the data is held in memory at any given time rather than having to be constantly fetched from the storage, which is much slower, even in the case of all-flash storage.

However, it should be remembered that Optane™ is not quite like standard memory, in that it is also persistent, and can therefore keep its contents even without power. For this reason, Intel supports different modes through which the memory controller in Xeon® Scalable processors can operate Optane™ DC Persistent Memory modules to suit the application.

In Memory Mode, the Optane™ modules are treated as memory, with the DDR4 DRAM used to cache them. The advantage of this mode is that it is transparent to existing applications, which will just see the larger memory space presented by the Optane™ DIMMs, although it does not take advantage of the persistent capabilities of Optane™.

When configured in App Direct Mode, the Optane™ modules and DRAM are treated as separate memory pools, and the operating system and applications must be aware of this. However, this means that software has the flexibility to use the Optane™ memory pool in several ways, for instance, as a storage area for data too large for DRAM or for data that needs to be persistent – such as metadata or recovery information that will not be lost in the event of a power failure.

Alternatively, the Optane™ memory pool in App Direct Mode can be accessed using a standard file system and treated as if it were a small, very fast SSD.
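
As an illustration of what App Direct Mode access can look like from software – not Intel's or any vendor's specific implementation – here is a minimal sketch that memory-maps a file on an assumed DAX-mounted Optane namespace (the /mnt/pmem path and file name are placeholders). Production persistent-memory code would more typically use Intel's PMDK libraries from C or C++ for fine-grained flush control:

```python
# Minimal sketch: byte-addressable access to an App Direct Mode region exposed
# through a DAX-mounted file system. The mount point /mnt/pmem and file name
# are assumptions for illustration; production code would normally use PMDK
# (libpmem) from C/C++ to control cache-line flushing precisely.
import mmap
import os

PMEM_FILE = "/mnt/pmem/recovery.log"
SIZE = 4096

# Create (or reuse) a small file on the DAX file system and map it.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Writes land in the persistent media via loads/stores, not block IO.
buf[0:13] = b"checkpoint-42"
buf.flush()          # msync: ask the OS to make the update durable
buf.close()
os.close(fd)
```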

It is tempting to compare Optane™ DC Persistent Memory with non-volatile DIMMs (NVDIMMs), which also retain data without power, but in reality these are very different technologies. A typical NVDIMM combines standard DRAM with flash memory, and is used exactly like standard memory, except that in the event of a power failure, the NVDIMM copies the data from DRAM to flash to preserve it. Once power is restored, the data is copied back again.

Another type of NVDIMM simply puts flash memory onto a DIMM, but as it is not addressable at the individual byte level like DRAM, it is effectively storage and must be accessed via a special driver that exposes it to a file system API.

Building a better, faster SSD

Another attribute of Optane™ technology worth noting is that it has a higher endurance than many other technologies such as the NAND Flash found in SSDs. Endurance is a measure of how many times a storage medium can be written to and reliably retain information, and is a key consideration for storage in data centres, especially for workloads such as databases, analytics, or virtual machines, which perform a large number of random reads and writes.

Endurance is commonly measured either as average drive writes per day (DWPD), or as the total number of lifetime writes. Intel claims that a typical half terabyte flash SSD with a warranty covering three DWPD over five years provides three petabytes of total writes, whereas a half terabyte Optane SSD has a warranty of 60 DWPD for the same period, equating to 55 petabytes of total writes.
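
Those petabyte figures follow directly from capacity × drive writes per day × warranty period, as this quick sketch shows:

```python
# Worked check of the endurance figures quoted above:
# total lifetime writes = capacity * drive-writes-per-day * days of warranty.
def lifetime_writes_pb(capacity_tb: float, dwpd: float, years: float) -> float:
    return capacity_tb * dwpd * 365 * years / 1000  # TB -> PB

print(lifetime_writes_pb(0.5, dwpd=3,  years=5))   # ~2.7 PB for the flash SSD
print(lifetime_writes_pb(0.5, dwpd=60, years=5))   # ~54.8 PB for the Optane SSD
```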

When Optane™ technology is used in SSDs, it also has a speed advantage over rival drives based on NAND Flash. Products such as the Intel® Optane™ SSD DC P4800X series have a read and write latency of 10 microseconds, compared with a write latency of about 220 microseconds for a typical NAND flash SSD. This means that although there may be a price premium for an Optane™ SSD, it is a much better choice for areas where performance is critical, such as the cache tier in storage arrays or hyperconverged infrastructure (HCI).

Intel also has a second generation of Optane™ SSDs in the pipeline that is set to offer even greater performance. One way in which this will be delivered is via a new SSD controller supporting the PCIe 4.0 interface, which the company claims will more than double the throughput when compared with existing Optane™ SSDs.

To summarise, Intel® Optane™ is an advanced memory technology that can be used like DRAM, but which has the non-volatility of storage. It is, however, much faster than flash, while also being cheaper than DRAM. This combination of characteristics means that it can be used to boost performance in the data centre by fitting into DIMM slots alongside DRAM to cost-effectively expand memory, or in the form of super-fast SSDs to boost the speed of the storage layer.

With enterprise workloads becoming more demanding and data intensive thanks to the incorporation of advanced analytics and AI techniques into applications, organisations need to be evaluating emerging technologies such as Intel® Optane™ to assess how these can help them build an IT infrastructure that will allow them to meet future challenges.

This article is sponsored by Intel.

How to evaluate Kubernetes data protection

Radar Antenna

How do you evaluate Kubernetes data protection vendors? According to GigaOm, there is currently no framework to do this. So the tech research firm is developing a set of key criteria, which it will publish shortly.

Arrikto, Commvault, Dell Technologies, Druva, NetApp, Pure Storage (Portworx) and Veeam (Kasten) all have – or are developing – data protection for cloud-native environments. However, protecting Kubernetes-orchestrated containers is hard.

The basic problem is that a container is an element of an application. But the protection unit of focus is the cloud-native application, and not the potentially hundreds of individual containers that make up the application. Therefore, backing up at the container level makes little sense.

Virtual machines (VMs), by contrast, can run applications and act as the unit of focus for backup services. “Typically a VM will contain both the data and the application,” Chris Evans, an independent storage architect, told us. “The data might be on a separate volume, which would be a separate VMDK file. With the container ecosystem there is no logical connection we can use, because the container and the data are not tightly coupled as they are in a VM.”

Cloud-native application container complexity

The Kubernetes container storage interface (CSI) provides a persistent volume abstraction but “there is no backup interface built into the platform,” Evans says. “VMware eventually introduced the Backup API into vSphere that provided a stream of changed data per VM. There is no equivalent in K8s. So you have to back data up from the underlying backing storage platform. As soon as you do this, you risk breaking the relationship between the application and the data.”

GigaOm K8s checklist

GigaOm is developing a Key Criteria for Evaluating Kubernetes Data Protection. We have seen a draft copy. 

GigaOm analyst Enrico Signoretti writes: “Traditional backup paradigms and processes don’t work with Kubernetes applications. The speed of change and the diversity of components involved require a new approach, especially when the data protection solution is also a mechanism for enabling data mobility and advanced data services.”

He organises the criteria used to judge suppliers’ offerings into three categories: table stakes shared by all suppliers; key criteria that differentiate suppliers; and emerging technologies of which they need to be aware.

Signoretti notes that today’s table stakes are yesterday’s key criteria, and some of the emerging technologies will become key criteria in the future.

The table stakes include native Kubernetes integration, software consumption models, APIs and automation, CSI (container storage interface) integration and security. Every offering should have these.

Concerning security, he notes: “There is no history of ransomware attacks on Kubernetes clusters, and there are no best practices or remediation activities that have been tested in real-world deployments regarding this threat.” This is something that needs to be addressed.

The key criteria list includes multi-cloud support, application environmental awareness, disaster recovery, applications and data migration, and system management.

Signoretti makes this point about disaster recovery: “CSI does not have the necessary level of sophistication to replicate data volumes, applications, or Kubernetes objects to a remote location.” Suppliers will have to build this, using Kubernetes’ snapshots and replication facilities.
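
The snapshot primitive Signoretti refers to is the CSI VolumeSnapshot object. As a rough illustration – names and the snapshot class are placeholders, and the API version depends on the cluster – this is the kind of manifest a data protection tool creates per volume, which also shows why application-level grouping and remote replication have to be layered on top by the supplier:

```python
# Minimal sketch of the CSI snapshot primitive suppliers build on: a
# VolumeSnapshot object for one PersistentVolumeClaim, expressed as the
# manifest a backup tool would submit to the Kubernetes API. The namespace,
# class and claim names are placeholders. Note it captures a single volume
# only -- application-level grouping and copies to a remote location are
# left to the data protection product.
volume_snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",   # v1 in newer clusters
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-db-snap-2020-10-08", "namespace": "shop"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",     # placeholder class
        "source": {"persistentVolumeClaimName": "orders-db-data"},  # placeholder PVC
    },
}

if __name__ == "__main__":
    import json
    print(json.dumps(volume_snapshot, indent=2))
```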

Druva adds NAS filer backup and archive to the cloud

Druva has launched a cloud-backup and archive service for network attached storage (NAS). It says this is an industry first – and the cost-saving pitch is that it does away with file backup systems.

Stephen Manley

CTO Stephen Manley said in the launch statement today: “By reducing data copies and redundant data centrally with our patented global source-side deduplication, Druva’s new offering eliminates storage management headaches while delivering the built-in ransomware protection today’s businesses need.”

Druva’s core SaaS is built atop AWS. Phoenix for NAS can be deployed in under 15 minutes with no storage hardware, according to the company. The scale-out, on-demand service uses proxies (agents) and supports NFS and SMB devices. Files can be recovered from any point-in-time in backup and archive tiers.

Phoenix for NAS diagram. The Phoenix NAS proxy is agent software.

Using Phoenix for NAS, files are backed up to a warm storage tier for short term recovery needs, with automatic tiering of data to 30 per cent cheaper cold storage for long-term, one year-plus, retention. There is also a 50 per cent cheaper direct-to-cold infrequent access tier for archived large data sets with infrequent recovery requirements. Indexed files and metadata are available by search from hot to cold storage tiers.

Data is encrypted in-flight and at rest, and deduplication conceals reference data and metadata. The backups are isolated, immutable and protected with envelope encryption. This encrypts the backup data with a data encryption key, which in turn is encrypted with a root key. This is how Druva provides ransomware protection.
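
As a generic illustration of the envelope encryption pattern described – not Druva's actual implementation – the following sketch wraps a per-object data encryption key with a root key that would normally live in a key management service:

```python
# Illustrative sketch of the envelope encryption pattern described above --
# not Druva's implementation. Each backup object gets its own data encryption
# key (DEK); only the root key (normally held in a KMS/HSM, shown here as a
# local key for brevity) can unwrap the DEKs.
from cryptography.fernet import Fernet

root_key = Fernet.generate_key()   # in practice: kept in a KMS, never on the backup media
root = Fernet(root_key)

def encrypt_backup(data: bytes):
    dek = Fernet.generate_key()           # per-object data encryption key
    ciphertext = Fernet(dek).encrypt(data)
    wrapped_dek = root.encrypt(dek)       # DEK is stored only in wrapped form
    return ciphertext, wrapped_dek

def decrypt_backup(ciphertext: bytes, wrapped_dek: bytes) -> bytes:
    dek = root.decrypt(wrapped_dek)       # requires access to the root key
    return Fernet(dek).decrypt(ciphertext)

blob, wrapped = encrypt_backup(b"file contents from the NAS share")
assert decrypt_backup(blob, wrapped) == b"file contents from the NAS share"
```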

Druva’s software analyses data sets and makes recommendations for the exclusion of non-critical business data from backups and long-term retention copies. This helps to lower storage costs and speed backup operations.

Phoenix for NAS is available now, but note the Infrequent access tier is in early release with limited availability.

Komprise identifies cold Azure data and sends it to Blobs

Starting today, Komprise users can migrate file data to Azure Files and Azure NetApp Files. Cold file data is movable to cheaper Azure Blob storage classes transparently, reducing cloud network attached storage (NAS) costs by up to 70 per cent, Komprise claims.

Earl Williams, systems engineer at the fashion company Carhartt, is an early customer. “At Carhartt, we are transforming to a cloud-first strategy using Microsoft Azure, and we wanted to reduce storage and backup costs for our Digital Asset Management data,” he said. “By using Komprise, we were able to identify that 60 per cent of our data was cold, and we have now been able to transparently archive it to lower-cost Azure Blob storage.”

Subramanian

Komprise COO Krishna Subramanian said in the launch statement: “Customers are increasingly looking to run traditional file workloads in the cloud, especially with the rapid pace of digital transformation happening across businesses right now.”

Tiering

Komprise provides a data management abstraction layer that covers on-premises, AWS and Azure. We expect it will add support for Google Cloud in due course. The intent is to enable customers to tier files from filers to lower-cost and slower-access object stores. Data can be moved entirely on-prem, within public clouds and their regions, or between on-prem and cloud.

Komprise Cloud Data Growth Analytics concept with expected targets.

In all these cases data location and storage tiers are controlled to get the right data into the right place and the most cost-effective tier of storage. To do this, Komprise uses Elastic Data Management (KEDM), a file migration utility incorporating parallelisation that runs 27 times faster than Linux rsync, according to the company’s internal benchmarks.

The Komprise software is available in the Azure Marketplace and has achieved co-sell ready status through the Microsoft One Commercial Partner program.

SK hynix first off the block with DDR5 DRAM

SK hynix has shipped the world’s first DDR5 DRAM – which is up to 80 per cent faster than its DDR4 predecessor.

DDR5 (Double Data Rate 5) has double the bandwidth and capacity of the current DDR4 memory. The more DRAM capacity and the higher the speed the better, as server and PC CPUs can get through more work with less waiting for memory contents to be read or written. 

Micron started sampling its DDR5 chip in January 2020 and will surely launch DDR5 DRAM products soon.

Research house Omdia predicts DDR5 will account for 10 per cent of the total global DRAM market in 2022, increasing to 43 per cent in 2024.

SK Hynix DDR5 memory products

SK hynix’s DDR5 chip supports transfer rates of 4,800 to 5,600 megatransfers per second (MT/sec, equivalent to Mbps per pin), which is up to 1.8 times faster than the previous DDR4 generation. Operating voltage is 1.1V, down from DDR4’s 1.2V, which SK hynix says cuts power consumption by 20 per cent. The SK hynix DDR5 incorporates error correcting code (ECC).

DDR4 megatransfers/sec data rates range from 1600 to 4800 MT/sec. DDR5 increases this to 3200-6400 MT/sec. Blocks & Files expects SK hynix will support the higher DDR5 6400 data rate in due course.

DDR4 and DDR5 chips are built into modules called DIMMs (Dual Inline Memory Modules), and effective memory bandwidth is measured across DIMMs, with 1 DIMM per memory channel and 8 channels.

DDR5 3200 will deliver 182.5 GB/sec effective bandwidth from its 3200 MT/s data rate, which is 1.36x faster than DDR4 3200. DDR5 4800 will provide 250.9 GB/sec and is 1.87x faster than DDR4 4800. DDR5 6400 is expected to deliver 298.2 GB/sec bandwidth.
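
For context, the theoretical peak bandwidth of that eight-channel, one-DIMM-per-channel configuration is simple to derive – data rate × 8 bytes per 64-bit channel × number of channels – and the effective figures quoted above sit below these peaks because of protocol overheads. A quick sketch:

```python
# Theoretical peak bandwidth for the 8-channel, one-DIMM-per-channel setup
# described above: data rate (MT/s) x 8 bytes per 64-bit channel x channels.
# The effective GB/sec figures quoted in the article sit below these peaks
# because they account for protocol overheads.
def peak_gb_per_sec(mt_per_sec: int, channels: int = 8, bytes_per_transfer: int = 8) -> float:
    return mt_per_sec * bytes_per_transfer * channels / 1000  # MB/s -> GB/s

for rate in (3200, 4800, 6400):
    print(f"DDR5-{rate}: {peak_gb_per_sec(rate):.1f} GB/s peak")
# DDR5-3200: 204.8 GB/s, DDR5-4800: 307.2 GB/s, DDR5-6400: 409.6 GB/s
```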

SK hynix first announced its 16Gbit DDR5 DRAM chip in November 2018. It thinks DDR5 chips could be as dense as 256 Gbytes in the future, using a stack of four DRAM dies and through-silicon-via (TSV) technology interconnecting the stack layers.

The company sees 3D stacked memory leading on to high-bandwidth memory (HBM) which uses a silicon interposer to link the memory stack and a CPU or GPU in a System-on-Chip (SoC).

Five Ways Dell EMC PowerStore Doesn’t Measure Up to HPE Nimble Storage

measuring tape

Sponsored Dell EMC recently announced PowerStore[1], their new mid-range storage platform intended to replace Unity, SC Series and XtremIO[2]. They spotlighted features including multi-array clustering, data-in-place upgrades and automation – features that we agree are important, but are also table stakes, as HPE Nimble Storage has been delivering them for years. And despite positioning PowerStore for the data era, it’s not doing nearly enough to deliver what enterprises need today from their storage platform.

What enterprises need today goes beyond the spec sheet. With data being the key to digital transformation, organizations need a proven storage platform that can run their business-critical apps, eliminate application disruptions, deliver the cloud experience, and unlock the potential of hybrid cloud.

That’s where HPE Nimble Storage shines and that’s where Dell EMC PowerStore falls short.

1. Running Business-Critical Apps

Enterprises are increasingly reliant on applications to handle everything from back-end operations to the delivery of products and services. That is why proven availability, protecting data, and ensuring applications stay up are more important than ever before.

But Dell EMC has the market[3] – and us here at HPE – scratching our heads, with PowerStore only supporting RAID 5, an old technology that can lead to catastrophic data loss with more than one drive failure. In comparison, with HPE Nimble Storage, enterprises can sustain 3 simultaneous drive failures with Triple+ Parity, protection that’s several orders of magnitude more resilient than RAID 5.

PowerStore is also marketed as “designed for 6x9s” availability[4]. But being “designed for” availability versus actually having a measured track record in customer production environments are vastly different. HPE Nimble Storage has 6x9s of proven availability[5] based on real, achieved values (as opposed to theoretical projections) and is measured for its entire installed base. And as enterprises entrust their data with us, we guarantee 6x9s availability for every HPE Nimble Storage customer.

2. Eliminating Application Disruptions

Enterprises need to be always-on, always-fast, and always agile. They don’t have time to fight fires and react to problems. But with countless variables and potential issues across the infrastructure stack, IT continues to be held back reacting to problems and dealing with disruptions.

The only way to get ahead is with intelligence. Intelligence that predicts problems. Intelligence that sees from storage to virtual machines. Intelligence that takes action to prevent disruptions. Dell EMC attempts to answer the call for intelligence with CloudIQ, an application that “provides for simple monitoring and troubleshooting” for storage systems[6]. But with the complexity that lives in IT today, this is simply not doing enough as it leaves customers with more questions than answers.

HPE InfoSight, on the other hand, is true intelligence. Since 2010, it has analysed more than 1,250 trillion data points from over 150,000 systems and has saved customers more than 1.5 million hours of lost productivity[7]. It uses machine-learning to predict and prevent 86% of problems before customers can be impacted[8]. And, its intelligence goes beyond storage to give deep insights into virtual infrastructure that improves application performance and optimizes resources.

The intelligence that enterprises can count on to ensure their apps stay up is HPE InfoSight.

3. Delivering the Cloud Experience

PowerStore has fewer knobs than Unity – a much needed improvement. But it’s not nearly enough. Our customers tell us they want to deliver services and get out of the business of managing infrastructure. This requires having the right foundation for their on-premises cloud with the simplicity of consumption, the flexibility to support any app, and the elasticity to scale on-demand.

HPE Nimble Storage is a platform that goes beyond making storage “easy to manage” – to be a foundation for on-premises cloud. HPE Nimble Storage dHCI is that foundation, a category-creating, disaggregated HCI, delivering the HCI experience but with the performance, availability, and resource efficiency needed for business-critical apps. And as announced in May, enterprises can have an on-premises cloud by consuming HPE Nimble Storage dHCI as a service through HPE GreenLake.

But what about a cloud experience for the edge? The edge is in need of modernization as enterprises look to streamline their multi-site and remote IT environments. Dell EMC positions PowerStore AppsOn here[9], but it’s not purpose-built for the job. HPE SimpliVity is an edge-optimized HCI platform with simple multi-site management, a software-defined scale-out architecture, and built-in data protection, making it the ideal choice for remote sites.

4. Unlocking the Potential of Hybrid Cloud

Every IT leader today looks at hybrid cloud as a potential enabler of innovation, only to realize the overwhelming challenges that exist with fragmented clouds. With data at the center of innovation, realizing the potential of hybrid cloud requires an architecture – a fabric – that provides a seamless experience with the flexibility to move data across clouds during its lifecycle from test/dev, production, analytics, to data protection.

HPE delivers that seamless experience by extending HPE Nimble Storage to the public cloud with HPE Cloud Volumes. HPE Cloud Volumes is a suite of enterprise cloud data services that enables customers, in minutes, to provision storage on-demand and bridges the divide between on-premises and public cloud. It brings consistent data services, bi-directional data mobility, and the ability to access the public cloud, unlocking hybrid cloud use cases from data migration, data protection, test/dev, containers, analytics, to running enterprise applications in the cloud.

While Dell EMC can connect their storage to the public cloud, HPE Nimble Storage goes further with a true cloud service, consumable on-demand, that helps our customers maximize their agility and innovation and optimize their economics with no cloud lock-in, nothing to manage, and no more headaches.

5. Delivering an Experience You’ll Love

On top of everything – from being a better fit for business-critical apps, being more intelligent and predictive, delivering on-premises cloud, to making hybrid cloud valuable – HPE Nimble Storage delivers a customer experience that simply excels.

A perfect example is support. Multi-tiered support that shuffles customers from Level 1 to Level 2 to Level 3 is reactive and too slow to resolve problems. This isn’t the case with HPE Nimble Storage, as customers have direct access to Level 3 with Level 1 and Level 2 support cases predicted and automated. That means no more escalations, 73% fewer support tickets, and 85% less time spent resolving storage problems[10] – not to mention the friendliest support engineers, who help you solve problems even if they’re outside of storage.

Rethink What Storage Needs to Do for You

HPE Nimble Storage reimagines enterprise storage with a unique solution that simplifies IT and unlocks enterprise agility across the data lifecycle. It transforms operations with artificial intelligence and it gives you an experience that you’ll truly love as you can see here from this video.

If you are a current customer of Dell EMC midrange storage products, we would welcome the opportunity to show you how HPE Nimble Storage can elevate your experience. Here are a dozen reasons why organizations are moving from Dell EMC to HPE Nimble Storage. And we can help make the investment in HPE Nimble Storage easier with financial offers that generate cash from your existing storage assets.

Learn more about HPE Nimble Storage, and how after a decade of innovation, it’s stepping into the future.

This article is sponsored by HPE.

Commvault hooks up Metallic with Azure Blobs

Commvault has integrated Metallic SaaS data protection with Azure Blob Storage, enabling customers to save on-premises storage capacity.

Commvault’s cloud-native Metallic service receives backup data from on-premises and in-cloud applications and stores it in AWS and Azure object storage. An existing Office 365 service saves its data in Azure. Now Metallic can save any data ingested on-premises by Commvault’s Backup and Recovery software and HyperScale X appliances. This encompasses physical, virtual and containerised workloads.

Commvault GM Manoj Naor issued a quote: “The need to leverage the cloud is only accelerating, and having simple, direct access to cloud storage as a primary or secondary backup target allows us to facilitate our customers’ journeys to the cloud while also providing a critical step in ransomware readiness with an air-gapped cloud copy.”

That’s air-gapped in the sense that, although network-linked, the Metallic-generated data copy cannot be directly accessed by the customer. So the data is virtually air-gapped, unlike a physically air-gapped tape cartridge.

Ranga Rajagopalan, VP of product management, answered our question about this point: “Yes. We’ve had customers defeat ransomware attacks with our virtual air-gapped approach.” He said there were “encryption and other hardening techniques, and separation of management and access for the Azure account. Customers can spin up their entire environment in the cloud.”

The new service is managed through the Commvault Command Centre. The fully managed Metallic Cloud Storage Service is available across North America, EMEA and APAC, and Metallic SaaS is available in North America and ANZ.

Huawei has fastest storage array in the world. SPC-1 says so

Huawei has smashed the SPC-1 benchmark, with an all-NVMe flash OceanStor array that delivers double the performance of previous title holder Fujitsu, at 40 per cent higher cost.

SPC-1 tests a storage array’s ability to process IOs for a business-critical workload. Huawei’s OceanStor Dorado 18000 V6 used 576 x NVMe SSDs, each with 1.92TB capacity, to score 21,002,561 SPC-1 IOPS. This translates into a price-performance rating of $429.10/KIOPS.

Fujitsu’s ETERNUS DX8900 S4 scored 10,001,522 IOPS with a $644.16/KIOPS price-performance. It used slower SAS SSDs than the Huawei config and 16Gbit/s Fibre Channel links, compared with 32Gbit/s FC for the Huawei.
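
Because SPC-1 price-performance is simply total tested system price divided by thousands of IOPS, the published figures can be turned back into approximate system prices, which is where the 40 per cent cost difference above comes from:

```python
# SPC-1 price-performance is total system price / KIOPS, so the published
# figures imply the rough tested-configuration prices behind the "40 per cent
# higher cost" comparison above.
results = {
    "Huawei OceanStor Dorado 18000 V6": (21_002_561, 429.10),
    "Fujitsu ETERNUS DX8900 S4":        (10_001_522, 644.16),
}

for name, (iops, dollars_per_kiops) in results.items():
    price = iops / 1000 * dollars_per_kiops
    print(f"{name}: ~${price/1e6:.1f}m total price, {iops/1e6:.1f}m SPC-1 IOPS")
# Huawei works out at roughly $9.0m and Fujitsu at roughly $6.4m --
# about 40 per cent less than the Huawei configuration.
```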

We have charted the top 10 SPC-1 results on an IOPS vs price-performance chart:

Top-10 SPC-1 results

Comment

Dell, HPE or NetApp could theoretically cobble together an all-NVMe SSD system with, say, 1,000 SSDs and sufficient 32Gbit/s FC links to handle the bandwidth. But a $10m-plus, 30 million-plus SPC-1 IOPS array would be a fairly esoteric array, and vendors would ask: “Is it worth it?”

The array world is moving to radically faster NVMe-over-Fabrics links to accessing servers, and that is not reflected in the SPC-1 benchmark test. So why build a benchmark record-beating array with interconnect technology that’s going to be superseded? They’d see it as a dead end. Don’t expect to see any US supplier at the top of the SPC-1 charts anytime soon.

Veeam embraces container backup by buying Kasten

Veeam is buying Kasten, a Kubernetes-orchestrated container data protection startup, for $150m in cash and stock.

Kasten’s K10 software will continue to be available independently and Veeam will also integrate it into its own backup & replication functionality. Veeam’s goal is to simplify enterprise data management by covering data protection for virtual machines, physical servers, SaaS applications, cloud workloads and containers in one platform.

Danny Allan, Veeam CTO, said in a statement: “With the acquisition of our partner Kasten, we are taking a very important next step to accommodate our customers’ shift to container adoption in order to protect Kubernetes-native workloads on-premises and across multi-cloud environments.” 

Kasten founders

Niraj Tolia, Kasten CEO, said: “The enterprise landscape is shifting as applications rapidly transition from monoliths to containers and microservices… Veeam’s success has been a beacon of inspiration for the Kasten team and we are very excited to join forces with a company where there is so much philosophical alignment.”

Kasten was started in January 2017 by Tolia and engineering VP Vaibhav Kamra. The company raised $3m in a March 2017 seed round and $14m in an August 2019 A-round. Veeam set up a reseller partnership with Kasten in May this year. Veeam and Kasten are both part of Insight Partners’ investment portfolio.

Kasten’s K10 software snapshots the entire state of an application container – not just the data that needs protecting. That means the K10-protected container can be migrated to a different system and instantiated there, and also sent to a disaster recovery site.

Veeam’s Kasten purchase follows last month’s $370m acquisition of Portworx by Pure Storage. This M&A activity highlights the growing importance of Kubernetes-orchestrated containers to enterprise application development and deployment.

This week in storage with Fujitsu, HPE, Intel and more

This week’s data storage standouts include Intel spinning out a fast interconnect business; HPE and Marvell’s high-availability NVMe boot drive kit for ProLiant servers; and Fujitsu going through its own, branded digital transformation.

Intel spins out Cornelis

Intel has spun out Cornelis Networks, founded by former Intel interconnect veterans Philip Murphy Jr., Vladimir Tamarkin and Gunnar Gunnarsson.

Cornelis will compete with Nvidia’s Mellanox business unit and technology, and possibly also HPE’s Slingshot interconnect. The latter is used in Cray Shasta supercomputers and HPE’s Apollo high performance computing servers.

Cornelis aims to commercialise Intel’s Omni-Path Architecture (OPA), a low-latency HPC-focused interconnect technology. The technology stems from Intel’s 2012 acquisitions of QLogic’s TrueScale InfiniBand technology and of Cray’s Aries interconnect IP and engineers.

Cornelis’s initial funding is a $20m A-round led by Intel Capital, Downing Ventures, and Chestnut Street Ventures.

Fujitsu’s digital twin

Fujitsu is investing $1bn in a massive digital transformation project, which it is calling “Fujitra”.

The aim is to transform rigid corporate cultures such as “vertical division between different units” and “overplanning” by utilising frameworks such as Fujitsu’s Purpose, design-thinking, and agile methodology. Fujitsu’s purpose or mission is “to make the world more sustainable by building trust in society through innovation,” which seems entirely Japanese in its scope and seriousness.

Fujitsu will introduce a common digital service throughout the company to collect and analyse quantitative and qualitative data frequently and to manage actions based on such data. All information is centralised in real time to create a Fujitsu digital twin. 

Fujitsu has appointed DX officers for each of the 15 corporate and business units as well as five overseas regions. They will be responsible for promoting reforms across divisions, advance company-wide measures in each division and region, and lead DX at each division level.

Fujitra will be introduced at Fujitsu ActivateNow, an online global event, on Wednesday, October 14.

HPE and Marvell’s NVMe boot switch

Marvell’s 88NR2241 is an intelligent NVMe switch that enables data centres to aggregate resources, increase reliability and manage resources across multiple NVMe SSD controllers. The 88NR2241 delivers enterprise-class performance, system reliability, redundancy and serviceability with consumer-class NVMe SSDs linked by PCIe. The switch has a DRAM-less architecture and supports low-latency NVMe transactions with minimal overhead.

HPE NS204i-p NVMe RAID 1 accelerator

HPE has implemented a customised version of the 88NR2241 for ProLiant servers, calling it an NVMe RAID 1 accelerator. The HPE NS204i-p NVMe OS Boot Device is a PCIe add-in card that includes two 480GB M.2 NVMe SSDs. This enables customers to mirror their OS through dedicated RAID 1.

The accelerator’s dedicated hardware RAID 1 OS boot mirroring eliminates downtime due to a failed OS drive – if one drive fails the business continues running. HPE OS Boot Devices are certified for VMware and Microsoft Azure Stack HCI for increased flexibility.

AWS Partner network news

  • Data protector Druva has achieved Amazon Web Services (AWS) Digital Workplace Competency status. This is Druva’s third AWS Competency designation. Druva has also been certified as VMware Ready for VMware Cloud.
  • Cloud file storage supplier Nasuni has achieved AWS Digital Workplace Competency status. This status is intended to help customers find AWS Partner Network (APN) Partners offering AWS-based products and services in the cloud.
  • Kubernetes storage platform supplier Portworx has achieved Advanced Technology Partner status in the AWS Partner Network (APN). 

The shorter items

DigiTimes speculates that China-based memory chipmakers ChangXin Memory Technologies (CXMT) and Yangtze Memory Technologies (YMTC) could be added to the US trade ban export list. This list is currently restricting deliveries of US technology-based semiconductor shipments to Huawei.  

The Nikkei Asian Review reports SK hynix wanted to buy more shares in Kioxia, taking its stake from 14.4 per cent to 14.96 per cent. That would link Kioxia and SK hynix in a defensive pact against emerging Chinese NAND suppliers. The plan is now delayed as Kioxia has postponed its IPO.

Veritas has bought data governance supplier Globanet to extend its digital compliance and governance portfolio, with visibility into 80-plus new content sources. These include Microsoft Teams, Slack, Zoom, Symphony and Bloomberg.

Dell has plunked Actifio‘s Database Cloning Appliance (DCA) and Cloud Connect products on Dell EMC PowerEdge servers, VxRail and PowerFlex. Sales are fulfilled by Dell Technologies OEM Solutions.

Enmotus has launched an AI-enabled FuzeDrive SSD with 900GB and 1.6TB capacity points. It blends high-speed, high-endurance static SLC (1 bit/cell) with QLC (4 bits/cell) on the same M.2 board. AI code analyses usage patterns and automatically moves active and write-intensive data to the SLC portion of the drive. This speeds drive response and lengthens its endurance.

ExaGrid claims it has the only non-network-facing tiered backup storage solution with delayed deletes and immutable deduplication objects. When a ransomware attack occurs, this approach ensures that data can be recovered or VMs booted from the ExaGrid Tiered Backup Storage system. Not only can the primary storage be restored, but all retained backups remain intact. Check out a two-minute ExaGrid video.

Deduplicating storage software supplier FalconStor has announced the integration of AC&NC’s JetStor hardware with StorSafe, its long-term data retention and reinstatement offering, and StorGuard, its business continuity product.

HCL Technologies has brought its Actian Avalanche data warehouse migration tool to the Google Cloud Platform.

MemVerge has announced the general availability of its Memory Machine software, which transforms DRAM and persistent memory such as Optane into a software-defined memory pool. The software provides access to persistent memory without changes to applications and speeds persistent memory with DRAM-like performance. Penguin Computing uses Optane Persistent Memory and Memory Machine software to reduce Facebook Deep Learning Recommendation Model (DLRM) inference times by more than 35x compared with SSD.

SanDisk Extreme and Extreme PRO SSDs

Western Digital’s SanDisk operation has announced a new line of Extreme portable SSDs with nearly twice the speed of the previous generation. The Extreme and Extreme PRO products use the NVMe interface and come in capacities up to 2TB. They offer password protection and encryption. The Extreme reads and writes at up to 1,000MB/sec while the Extreme PRO achieves up to 2,000MB/sec.

Nearline drives are bright spot in Gartner HDD forecast

Nearline disk drive capacity shipments and revenues will grow at double-digit percentages between 2019 and 2024, according to Gartner. The tech research firm predicts other disk categories will decline, with notebook HDDs heading the way of the dodo.

Aaron Rakers, a senior analyst at Wells Fargo, presented subscribers with Gartner hard disk drive (HDD) market notes for Q3 2020 and estimates for 2019-2024.

The total HDD market will decline by 6 per cent y/y in 2020 to $20.7bn, which follows a 12 per cent y/y decline in 2019 at $22.1bn. However, revenue should reach $21.9bn in 2021 (+5 per cent) and $22.6bn in 2022 (+4 per cent).

Gartner forecasts HDD market revenue overall will decline at 1.5 per cent CAGR through to 2024, with sales propped up by nearline disk drive growth and a small surveillance drive contribution.

Nearline 3.5-inch high capacity HDD exabytes shipped will grow 39 per cent 2019-2024 CAGR. Revenue is estimated to grow at 14 per cent CAGR, expanding from $8.9 billion in 2019 to $17.1 billion by 2024.
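
That 14 per cent figure is just the compound annual growth rate implied by the 2019 and 2024 revenue numbers:

```python
# Compound annual growth rate implied by the nearline revenue forecast:
# $8.9bn in 2019 growing to $17.1bn in 2024 (five years).
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{cagr(8.9, 17.1, 5):.1%}")   # ~14% per year, matching the figure above
```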

This implies nearline revenue will grow share of total HDD revenue from 40 per cent in 2019 to 55 per cent in 2020, 62 per cent in 2021, and c.84 per cent by 2024, according to Rakers.

Nearline HDD capacity will expand from 52 per cent of total HDD capacity shipped in 2019 to 70 per cent in 2020 and 86 per cent by 2024.

Total non-nearline HDD capacity shipped will decline at -1.3 per cent CAGR 2019-2024. Overall non-nearline HDD revenue will decline from $13.2bn in 2019 and $9.4bn in 2020 to less than $3.4bn by 2024.

SSD cannibalisation will eat up mission-critical enterprise HDD revenue – $1.8bn in 2019, and zero in 2024. Gartner expects notebook PCs to move to a 100 per cent SSD/flash attach rate by 2024, an increase from 88 per cent in 2020.

The mobile and consumer HDD market will decline from $5.65bn in 2019 and $3.7bn in 2020 to slightly more than $820m by 2024. The 3.5-inch client and consumer HDD market is estimated to decline from $5.72bn in 2019 and $4.2bn in 2020 to $2.55bn by 2024. 

Surveillance drives will see some growth, with shipped units growing at nearly four per cent CAGR for 2019-2024.