HPE is to sell the HPE Ezmeral Data Fabric on a standalone basis, and has opened a marketplace for cloud-native software running on this AI/ML-focused service.
Ezmeral Data Fabric was previously known as the MapR Data Platform for data lakes. It is an exabyte-scale repository for streaming and file data, with a real-time database, a global namespace, and integrations with Hadoop, Spark, and Apache Drill for analytics and other applications.
HPE said enterprise customers can use the service to create a unified data repository for data scientists, developers, and IT to access and use, with control of how it is used and shared. The company envisages AI and ML processing will take place across the IT spectrum, from edge sites and data centres through to the public clouds, with containerised apps processing data from a global repository.
The data is accessed via several protocols – HDFS, POSIX, NFS, and S3 – and can be tiered automatically to hot, warm and cold stores across hybrid cloud environments.
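The S3 interface means standard object clients can reach the same data. As a minimal sketch, assuming a hypothetical S3 gateway endpoint, bucket and placeholder credentials (none of these values come from HPE documentation), access might look like this:

```python
# Hedged sketch: reading an object from an S3-compatible data fabric endpoint.
# The endpoint URL, bucket and credentials are placeholders, not documented
# HPE Ezmeral Data Fabric values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://datafabric.example.com:9000",  # hypothetical S3 gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same data could equally be reached over NFS, POSIX or HDFS mounts.
obj = s3.get_object(Bucket="analytics-lake", Key="telemetry/2021-03-01.parquet")
print(obj["ContentLength"], "bytes")
```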
Kumar Sreekanti, HPE CTO and head of software, was quoted in a statement: “The separate HPE Ezmeral Data Fabric data store and new HPE Ezmeral Marketplace provide enterprises with the environment of their choice, and with visibility and governance across all enterprise applications and data through an open, flexible, cloud experience everywhere.”
HPE released Ezmeral, a software framework for containerised apps, in June 2020. At launch, there were two components: the Ezmeral Container Platform and Ezmeral ML Ops. Ezmeral was added to the GreenLake subscription service earlier this month. Now we have the separated-out Ezmeral Data Fabric, Ezmeral ML Ops, an Ezmeral Technology Ecosystem program and the Ezmeral Marketplace.
Vendors on the new marketplace are validated via the new Ezmeral Technology Ecosystem. Dataiku, MinIO, H2O.ai, Rapt.AI, Run:AI, Sysdig, and Unravel have already passed muster. In addition, Apache Spark, TensorFlow, GitLab, and the Elastic Stack are available on the marketplace.
Ezmeral Data Fabric is available as a software license subscription to run on any infrastructure or public cloud. The Ezmeral Container Platform and Ezmeral ML Ops are available as cloud services through GreenLake now, and HPE plans to offer the Ezmeral Data Fabric as a GreenLake service in the future.
Data fabric softener
With enterprise adoption of containerisation poised to go mainstream, multiple vendors have developed software management products to abstract the complexities of Kubernetes-orchestration of containers.
HPE’s Ezmeral competitors include VMware Tanzu, Red Hat OpenShift, Dell Karavi (to an extent), Diamanti, the Hitachi Vantara Kubernetes Service, MayaData’s Kubera, NetApp Astra, and Pure Storage Portworx.
Startups like Diamanti and MayaData see a great opportunity to make enterprise inroads with their Kubernetes management magic carpets. Incumbents like HPE see an opportunity to extend existing customer wallet share and a necessity to deny wannabe startups any headroom.
The US Department of Commerce has opened an investigation into Seagate, for possible sanction-busting disk drive shipments to Huawei. The probe centres on controller chips inside the drives, according to reports.
The Commerce Department imposed tougher sanctions on Huawei in August 2020, in order to “prevent Huawei’s attempts to circumvent US export controls to obtain electronic components developed or produced using US technology. This amendment further restricts Huawei from obtaining foreign made chips developed or produced from US software or technology to the same degree as comparable US chips.”
Companies can apply for a Department of Commerce license to ship products to Huawei. For example, Western Digital has applied for a license to sell disk drives and SSDs to Huawei, but in the meantime has stopped shipments to the company.
In September 2020, Seagate’s CFO, Gianluca Romano, told a Deutsche Bank conference: “We are still going through the final assessment, but from what I have seen until now, I do not see any particular restriction for us in term of being able to continue to keep the Huawei or any other customers in China. So, we do not think we need to have a specific license.”
We have asked Seagate about this investigation and a spokesperson said: “Seagate confirms that it complies with all applicable laws including export control regulations. We do not comment on specific customers.”
Micron aims to develop new storage-class memory products that compete with Intel’s Optane – using technology based on Compute Express Link (CXL).
Micron’s EVP and Chief Business Officer Sumit Sadana said yesterday: “We will end 3D XPoint development immediately and cease manufacturing 3D XPoint products upon completing our industry commitments over the next several quarters.” That means shipping Optane chips to Intel.
Explaining the company’s decision to withdraw from manufacturing 3D XPoint in an investor call, Micron said the switch from proprietary CPU-to-Optane PMEM links to open CXL interconnects means its “focus is on addressing data-intensive workload requirements while reducing barriers to adoption, such as software infrastructure changes.”
Investor call
Micron said in prepared remarks it had derived substantial knowledge gain from 3D XPoint. “The knowledge, experience and intellectual property gained in this effort will give us a head start on several important products that we will introduce in the coming years… we will continue our technology pathfinding efforts across memory and storage, including our work toward future breakthroughs in storage-class memory.”
“Memory was always the strategic long term market opportunity for 3D XPoint.” However, significant problems have delayed progress: “One important challenge that 3D XPoint memory products face in the market is that the latency of access requires significant changes to data centre applications to leverage the full benefits of 3D XPoint.
“These changes are complex and extremely time-consuming, requiring years of sustained industry-wide effort to drive broad adoption. In addition, there are important cost-performance trade-offs that need to be characterised and optimised for each workload.”
Sadana said Micron is using its XPoint process technology and X100 XPoint SSD design teams in developing new CXL-based storage-class memory products due in the next few years. That means Micron will build product to compete with Optane and its 3D XPoint technology.
Intel said in a statement: “Micron’s announcement doesn’t change our strategy for Intel Optane or our ability to supply Intel Optane products to our customers.”
Analysts’ view
Jim Handy.
I asked Jim Handy from Objective Analysis for his take on Micron’s XPoint withdrawal. To set the scene, he says that, for its entire history, 3D XPoint memory has lost significant sums. By his estimate this loss was roughly $2bn in 2017 and 2018, dropping to $1.5bn in 2019. Micron, in its call, explained that its production of 3D XPoint was costing the company about $400m annually.
Blocks & Files: What will be the likely effect on Intel?
Jim Handy: I don’t anticipate a big impact on Intel. Here’s why: In the prepared statements for Micron’s Investor Call yesterday management said that the company would continue to ship 3D XPoint to honour its commitments for the next several quarters. That removes any concern over short-term availability. Intel has a small fab in New Mexico that already makes next-generation 3D XPoint chips and that can be ramped. I believe that it was once Intel’s largest fab maybe 20 years ago, so it’s certainly a large enough facility – it just needs additional tools. Of course, since Micron’s selling the fab that makes today’s 3D XPoint in Utah, Intel could simply buy it and solve the problem instantly.
Blocks & Files: How might Intel respond?
Jim Handy: I suspect that Intel has seen this coming for a long time and has a very solid contingency plan in place. The company will simply move from Plan A to Plan B. Of course, they will have to do some extra work to calm their Optane customers.
Blocks & Files: How should it respond in your view?
Jim Handy: I would favour the purchase of the Utah facility. It would be a seamless transition. I doubt that there’s a frenzy of likely purchasers since it will need significant re-tooling if it is to be used to produce something other than 3D XPoint.
Blocks & Files: What will be the effect on others storage-class memory suppliers, such as Samsung with its Z-SSD and Everspin with MRAM?
Jim Handy: Neither of these technologies (nor any other) plays into the heart of the Optane market, which is a persistent DIMM that sells for half of DRAM’s price. Z-SSD is not a DIMM, and Z-NAND is ill-suited for use in a DIMM (slow writes, erase-before write, etc.) so it’s not a fit for a DIMM, nor is Kioxia’s XL-FLASH. Everspin and Renesas MRAM, as well as Adesto’s and Panasonic’s ReRAM, all sell for considerably more than DRAM.
Blocks & Files: How would you sum up the state of Optane in the market now?
Jim Handy: I see little actual change. Intel is definitely not left in a lurch, and Micron will be better off without the XPoint losses that it has incurred in recent years. While customers will be put in a position of having a single source for the technology, with no prospects of getting an alternate source in the near term, Optane’s unique product positioning will prevent Intel from being able to gouge, since Optane must sell at sub-DRAM prices to make any sense. I don’t anticipate any other memory makers rushing in to fill the void since they have visibility into both Micron’s and Intel’s losses in this market.
Overall Handy says we should expect Intel to continue to promote its Optane technology to provide a strong competitive advantage against AMD processors.
The Webb view
Mark Webb of MKW Ventures Consulting gave B&F his five-point take on Micron’s withdrawal:
1. Intel needs to find a new supplier. Intel has a few backup plans, but none is very cost-effective, and Intel can’t put more cash into this.
2. 3D XPoint is by far the leading high-density persistent memory; MRAM is a different market. Micron is abandoning development and the fab, which is clearly not a good indicator of confidence in the technology’s revenue growth.
3. Optane DIMMs (Persistent Memory) are the main market. Intel’s customers report that Optane Persistent Memory has uses and is effective in certain applications. The question is how many applications, and how many servers, need Optane DIMMs. Right now, far fewer than 10 per cent of servers need Optane Persistent Memory.
4. Optane SSDs are a niche for data centres, as is Z-NAND. Persistent Memory is the market that matters.
5. CXL is the future of memory/storage. It is not clear why Micron thinks 3D XPoint is not applicable for CXL memory.
Micron X100 3D XPoint-based SSD.
Controlling difficulties
An industry insider who declined to be named told me: “As we understand from a source there, they [Micron] were unable to build the [X100 SSD] controller. XPoint is essentially phase change memory, so the controllers are entirely different from NAND controllers. The decision was made not because of market opportunity, but rather because of execution and market timing.”
“The [XPoint] silicon gets written to in entirely different ways… Understand this new type of media does not get written to in the classic program/erase cycle method, as NAND flash does. All of the mechanics are completely different. … you can’t use conventional flash controllers for this.”
NetApp has released Spot Wave, a data management service for Apache Spark, and added Azure Kubernetes Service to the list that Spot Ocean – its Kubernetes-orchestrated container app deployment service – supports.
Spot, with its containerised app deployment technology, was acquired by NetApp in June last year. The acquisition enabled NetApp to offer containerised app deployment services that seek out the lowest-cost or spot compute instances that meet service level agreements.
Amiram Shachar.
Amiram Shachar, NetApp’s VP and GM for Spot, said in a statement: “The necessity for organisations to balance cloud infrastructure cost, performance and availability for optimal efficiency is complex and time-consuming. Spot Wave and Ocean are solving that problem by providing a serverless experience for Spark and ensuring their infrastructure is continuously optimised.”
The Spot code or engine is said to be AI-based and it supplies the foundation of NetApp’s Spot Ocean, which supports Amazon Web Services ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service) instances as well as the Google Kubernetes Engine. NetApp has now added support for Azure Kubernetes Service.
Spot Wave builds on Spot Ocean to automate the deployment of Apache Spark big data applications on the Big Three clouds.
NetApp Spot Wave screenshot
Spot Wave automates the provisioning, deployment, autoscaling and optimisation of these Spark applications. There is no need for users to set up server instances in the cloud. Wave runs Spark jobs on these clouds’ containerised infrastructure using a mix of spot, on-demand and reserved instances, which can provide up to 90 per cent cost saving compared to only using on-demand instances.
We are looking at a future where NetApp’s Astra data management suite for Kubernetes workloads helps develop containerised apps and Spot runs them.
Analysis. Micron’s decision, announced yesterday, to scrap 3D XPoint development and sell its Lehi fab, which makes XPoint chips for itself and Intel, has thrown a giant spanner in the Optane works. Does storage-class memory have a future?
3D XPoint Optane Persistent memory (PMEM) acts as a means of enlarging memory capacity by adding a slow Optane PMEM tier to a processor’s DRAM. This shortens application execution time by reducing the number of IOs to much slower storage. The technology works but is difficult to engineer, which is why it has taken Intel much of the five years since Optane’s launch in 2015 to build a roster of enterprise software firms that support Optane PMEM.
Micron’s Lehi semiconductor foundry.
Take-up has not been helped by Intel’s treatment of Optane as a proprietary technology with a closed interface to specific Xeon CPUs. There is no Optane support for AMD or Arm CPUs which would enlarge the Optane PMEM market – but at the cost of Xeon processor sales.
Micron has decided that, in the wake of the rise of GPU-style workloads such as graphics, gene sequencing, AI and machine learning, the overarching need is for more memory bandwidth from CPUs, GPUs and other accelerators to a shared and coherent memory pool. This is different from the Optane presumption that CPUs are limited by memory capacity.
Compute Express Link scheme.
The Compute Express Link (CXL) is the industry-standard way to link processors of various kinds to a shared memory pool. Micron has said it supports CXL and will develop memory products that use it.
In the Micron worldview, Optane’s role would be as a CXL-connected storage-class memory pool. Other storage-class memory products, such as Everspin’s STT-MRAM, will also likely need to support CXL in order to progress in the new CPU-GPU-shared memory processing environment. That is, if SCM has a role at all.
SCM’s role
Storage-class memory occupies a price-performance gap between faster, higher-priced DRAM and slower, lower-priced NAND flash. Its problem has been that in SSD form it is seen as too expensive for the speed increase it provides. In PMEM (DIMM) form it is too expensive and needs complex supporting software, making it a relatively poor DRAM substitute. No one would use Optane PMEM if DRAM were (a) more affordable and (b) attachable to a CPU in greater quantities.
As the world of processing moves from a CPU-only model to one with multiple CPUs and multiple GPUs, memory needs to be sharable between all these processors. That requires a different connectivity method from the legacy CPU socket approach. High-bandwidth memory (HBM) stacks memory dies above an interposer that connects to a CPU. It is not much of a stretch to envisage HBM pools connected to CPUs and GPUs across a CXL fabric.
High Bandwidth Memory concept.
There are several SCM suppliers, none of which have made much progress compared to Intel’s Optane. Samsung’s Z-NAND is basically a faster SSD. Everspin’s STT-MRAM is seen as a potential DRAM replacement and not a subsidiary, slower tier of memory to DRAM; that’s Optane’s role. Spin Memory’s MRAM is in early development. Weebit Nano’s ReRAM is also in relatively early development.
It has taken Intel five years to get to the point where it still doesn’t have enough software support to drive mass Optane PMEM adoption – which shows that these small startups face a monumental problem.
The lesson of Optane PMEM is that all these technologies will need complex system and application software support and hardware connectivity if they are to work alongside DRAM.
Perhaps the real problem is that there is no storage-class memory market. The CPU/GPU connectivity and software implementation problems are so great as to deny any candidate technology market headroom.
Micron has judged that the SCM game is not worth the candle. Intel now has to decide if it should go it alone. It could double down on its Optane investment by buying Micron’s Lehi fab, or it could decide to spend its Optane and 3D XPoint development dollars elsewhere.
Pure Storage has made its Cloud Block Store available on the Azure Marketplace.
Cloud Block Store is the cloudified version of Purity OS, the operating system that runs on the company’s FlashArrays. The software provides high-availability block storage, a DR facility and Dev/Test sandboxes. All these instantiations can be handled through Pure1 Storage Management.
Cloud Block Store enables bi-directional data mobility between FlashArray on-premises, hosted locations and the public cloud. The service is already available on AWS.
Aung Oo, Partner Director of Management for Microsoft Azure Storage, said in a statement: “Pure Cloud Block Store on Azure, which is built with unique Azure capabilities including shared disks and Ultra Disk Storage, provides a comprehensive high-availability and performant solution.”
Pure has said it may roll out CBS to other public clouds – Google Cloud springs to mind. The company is also considering expanding storage protocol support – files and S3 objects spring to mind.
The company has announced a Pure Validated Design for Microsoft SQL Server Business Resilience to provide business continuity for SQL Server databases running on premises. This enables disaster recovery in the cloud, with Cloud Block Store for Azure acting as a high-availability target.
Pure CBS replication for DR.
With the Azure coverage, Pure joins HPE, Infinidat, NetApp, IBM’s Red Hat and Silk in providing a common block storage dataplane across their on-premises, hosted, AWS and Azure instances. Silk and Red Hat go further by covering GCP as well.
The hybrid multi-cloud environment is becoming a reality and we expect newer vendors, such as VAST Data and StorONE, to follow suit.
Amazon Web Services has cut some S3 Glacier prices by 40 per cent, AWS Chief Evangelist Jeff Barr revealed yesterday.
“We are lowering the charges for PUT and Lifecycle requests to S3 Glacier by 40 per cent for all AWS Regions… Check out the Amazon S3 Pricing page for more information,” he wrote.
A PUT request moves S3 data into Glacier. A Lifecycle request migrates data from one S3 storage class to another, with the aim of saving storage costs. S3 does not transition objects smaller than 128 KB because it’s not cost effective.
AWS S3 lifecycle waterfall
“You can use the S3 PUT API to directly store compliance and backup data in S3 Glacier. You can also use S3 Lifecycle policies to save on storage costs for data that is rarely accessed,” Barr wrote.
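As an illustration of the two routes Barr describes, here is a minimal boto3 sketch; the bucket name, key, prefix and day count are example values rather than AWS recommendations:

```python
# Minimal sketch of the two routes described above; bucket name, key, prefix
# and day counts are example values, not AWS recommendations.
import boto3

s3 = boto3.client("s3")

# Route 1: PUT an object straight into the Glacier storage class.
s3.put_object(
    Bucket="example-compliance-archive",
    Key="backups/2021-03-02.tar.gz",
    Body=b"...",
    StorageClass="GLACIER",
)

# Route 2: a lifecycle rule that transitions objects under a prefix to Glacier
# after 90 days. As noted above, S3 does not transition objects smaller than 128 KB.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-compliance-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```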
We could not immediately discern how to compare the before and after prices, and have asked AWS for specifics. On its Amazon S3 pricing page, AWS notes “there are per-request ingest fees when using PUT, COPY, or lifecycle rules to move data into any S3 storage class.”
However, these fees are not displayed – and so there is no simple way to find out how much S3 Glacier PUT and Lifecycle requests cost. Customers are told to estimate their costs using an AWS pricing calculator. But this estimates prices for all AWS services, including S3 Glacier, based on your proposed usage.
Update: AWS told us: “The price reduction for PUTs and Lifecycle transitions requests for S3 Glacier reduced prices by 40 per cent in all AWS Regions. For example, for US East (Ohio) Region we reduced the price from $0.05 down to $0.03 per 1,000 requests for all S3 Glacier PUTs and Lifecycle transitions.”
The basic Glacier storage costs of $0.004/GB/month remain unchanged.
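As a back-of-envelope illustration of what the cut means in practice, using AWS’s US East (Ohio) example rates quoted above:

```python
# Back-of-envelope arithmetic using AWS's US East (Ohio) example rates.
requests = 10_000_000          # ten million objects moved into Glacier
old_rate = 0.05 / 1_000        # $ per request before the cut
new_rate = 0.03 / 1_000        # $ per request after the cut

print(f"Before: ${requests * old_rate:,.2f}")   # $500.00
print(f"After:  ${requests * new_rate:,.2f}")   # $300.00, a 40 per cent reduction

# Storage itself is unchanged at $0.004/GB/month, so 100 TB (100,000 GB) costs:
print(f"Storage: ${100_000 * 0.004:,.2f}/month")  # $400.00/month
```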
NetApp has anointed a new sales leadership team under President César Cernuda.
The members are:
Rick Scurfield is promoted from SVP, Globals, Verticals, and Pathways to Chief Commercial Officer;
Max Long has been hired as NetApp’s SVP North America, joining the company from Adobe, where he was Chief Customer Officer;
Alex Wallner is promoted to NetApp’s SVP International. He was previously the company’s SVP for worldwide Enterprise and Commercial Field Operations.
Scurfield will be responsible for building a new ‘go-to-market motion’ strategy, which includes direct sales and channel coverage. Long and Wallner are responsible for sales in their regions. Long will run direct sales, channel sales and demand generation in North America and Wallner will lead the execution of all NetApp go-to-market activities in the rest of the world. The appointments are effective from May 1.
César Cernuda
The new sales structure is intended to facilitate NetApp’s goal to drive more leads, handle those leads faster, track progress more closely and get them processed into orders. The company also wants the digital and virtual sales teams and sales operations to work more effectively with NetApp’s direct sales force and channel partners.
Cernuda, who joined NetApp from Microsoft ten months ago, said the company is “implementing these changes to better serve [organisations’] needs. We will soon deliver a personalised, data-driven engagement model that allows our current and future customers to move at warp speed and aligns with the new way they want to engage with their solution partners.”
NetApp said the evolution of its data-driven GTM model and cloud-led sales organisation is expected to be a continuous process. That indicates Scurfield, Long and Wallner will be making changes in their respective fiefdoms to meet Cernuda’s goals.
William Blair analyst Jason Ader told his subscribers about a session with NetApp CFO Mike Berry earlier this month. “With respect to the impact on share gains from recent sales headcount additions (NetApp added 200 new sales reps in second half 2019 and 2020), NetApp management believes that new hires contributed to solid growth in the Americas geography (where sales were up 11 per cent year-over-year), though some portion of the recent hires are still getting ramped up.”
Toshiba’s new MG09 disk drive uses Flux-Control Microwave-Assisted Magnetic Recording (FC-MAMR). What is a flux? How is it controlled? Why should you control it in a microwave environment?
A group of Toshiba researchers explains the concept in a paper published in the Journal of Applied Physics, titled ‘Magnetization dynamics of a flux control device fabricated in the write gap of a hard-disk-drive write head for high-density recording’.
The researchers write: “MAMR using the FC effect is promising for extending the recording density of HDDs.”
The write head part of a disk drive’s read-write head has a gap of 20nm or so between two parts or poles, which point towards the disk drive platter’s surface. There is a magnetic or recording field in the gap between them, which is used to write data by setting the magnetic direction of bits on the platter surface as they pass underneath the write head.
Disk read write heads
MAMR head
With MAMR, a Spin Torque Oscillator (STO) is placed in this gap to produce microwaves which radiate outwards towards the recording medium on the surface of the disk platter. They impinge upon a bit location on that medium, and act to lower its resistance (coercivity) to having its magnetic direction changed; a microwave-assisted magnetisation switching effect (MAS).
The Toshiba researchers applied a bias current, an initial direct current voltage, to the STO, to induce the magnetisation oscillation needed to produce a microwave magnetic field and cause the magnetisation switching effect. They found that the average magnetisation direction of the spin torque oscillation modified the recording field. This is called a flux control effect.
When the average magnetisation direction is parallel to, but pointing in the opposite direction from, the gap field, the flux control effect can strengthen the recording magnetic field. In this situation fewer microwaves are generated, so the microwave-assisted switching effect is lessened, but the magnetic field strength is increased sufficiently through the flux control effect to write data to the drive.
The Toshiba researchers write: “The advantage of the flux control effect over the MAS effect is that improvement is obtained in any media regardless of the media properties because the flux control effect simply modifies the recording field.”
They note that disk recording density is likely to be larger with non-flux-controlled MAMR because the latter has a strengthened effect due to frequency matching between the STO oscillation and the ferromagnetic resonance of the recording medium’s magnetisation.
FC write head
The Toshiba researchers built a write head with a magnetic flux control layer (MFCL) instead of an STO between the two poles. This was accompanied by a spin polarisation layer, with a spacer between the two components, as our diagram shows:
B&F diagram of FC-MAMR write head
There is a spin sink layer at the other end of the MFCL to block spin transfer torque at this interface. Normally, the magnetisation direction of the MFCL is the same as for the write head poles, gap field and spacer. When the bias current is applied the MFCL magnetisation direction is reversed (as shown in the diagram above).
This reversal weakens the magnetic flux inside the write gap but strengthens it outside that gap. The net effect is that the amplitude of the recording field is increased. That, in turn, means that the write performance of the flux control write head can be improved further than the limit of an ordinary write head.
The MFCL magnetisation reversal takes time but, even so, a flux control write head can operate at 2.86 Gbit/s, which is compatible with disk write speeds.
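A quick unit conversion shows why that switching rate is adequate; the nearline drive figure used for comparison is an assumed ballpark, not a number from the Toshiba paper:

```python
# Quick unit conversion: is 2.86 Gbit/s fast enough for a disk write channel?
mfcl_rate_bits = 2.86e9                  # flux control head switching rate, bit/s
mfcl_rate_mb = mfcl_rate_bits / 8 / 1e6  # ≈ 357 MB/s

# Typical sustained transfer rate of a current high-capacity nearline drive
# (an assumed ballpark figure, not taken from the Toshiba paper).
hdd_sustained_mb = 270

print(f"Flux-control head: ~{mfcl_rate_mb:.0f} MB/s")
print(f"Nearline HDD:      ~{hdd_sustained_mb} MB/s sustained")
```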
Toshiba’s FC-MAMR appears to be similar to Western Digital’s ePMR (partial MAMR) recording technology as a bias current is used in both cases. The WD tech “applies an electrical current to the main pole of the write head throughout the write operation. This current generates an additional magnetic field which creates a preferred path for the magnetisation flip of media bits. This, in turn, produces a more consistent write signal, significantly reducing jitter.”
The Toshiba paper makes no mention of jitter. Like Western Digital and ePMR, Toshiba believes its FC-MAMR will likely be superseded by full MAMR to make even higher-capacity disk drives.
Slurm sounds like a type of slurping or squirming but gives Liqid better access to the high-performance computing market. Panzura is adding ransomware detection to its cloud file system and HPE is broadening its GreenLake subscription service with distribution and colocation deals. Read on.
Liqid takes up Slurm
Composable systems supplier Liqid has integrated the Slurm Workload Manager with its Matrix Software. Slurm is an open source, Linux cluster workload management system popular in HPC circles.
Using Slurm, Liqid Matrix software dynamically assembles precise bare-metal server configurations for each HPC job, composed from pools of compute, storage, and GPU resources. When a job is completed, the resources are returned to the pools and are available for automatic redeployment on future jobs.
Slurm-enabled Matrix handles evolving hardware, such as new server types, Liqid said. Newly acquired, disaggregated resources – GPUs, NVMe storage, NICs, HBAs, FPGAs, and storage-class memory – are recognised and represented in the pool as they are deployed, without the downtime associated with traditional hardware upgrades.
Panzura detects threats
Panzura has released CloudFS 8 Defend – software that detects unauthorised access and threats such as ransomware on-premises and in the cloud. CloudFS 8 Defend also discovers performance issues and integrates with the Varonis Data Security Platform.
CloudFS 8 Defend alerts cover security, the performance of specific virtual machines acting as filers within the Panzura cluster, and connectivity with the cloud store itself. Panzura says it serves as the first line of observability and defence for file performance and file storage problems.
HPE adds stuff to GreenLake
HPE has upgraded the GreenLake cloud services portfolio with cloud services for bare metal, for virtual machines (starting at 100 VMs and scaling to 600, with a backup-as-a-service option), and for containers, based on Ezmeral.
On the colocation front, HPE is expanding its relationship with CyrusOne and Equinix. Joint customers can now run HPE GreenLake on CyrusOne or Equinix through one agreement and one invoice. HPE has announced partnerships with UK-based Interxion and Wavenet, as well as with Beyond.pl, a service provider in Central Europe.
GreenLake is now offered by a set of distribution partners including ALSO Group, Arrow Electronics, Ingram Micro, Synnex, and Tech Data. That gives HPE access to more than 100,000 resellers.
Shorts
Weka CEO Liran Zvibel last week revealed that Weka is working on extending the remit of its scale-out, parallel file system software to handle Arm instances in the AWS and other clouds.
Research house TrendForce thinks notebook SSD prices are on an upward trend in the wake of the pandemic. It sees a 3 to 8 per cent price increase, as Q1 gives way to Q2.
IBM has announced ESS 5000 object storage systems. The high-capacity models SC9 and SL7 have up to 15.2 petabytes of raw capacity in a single building block, up to 9 x 4RU enclosures with the SC9 and up to 7 x 5RU enclosures for the SL7. ESS 5000 supports non-disruptive capacity upgrades.
Backup and archive-focused FalconStor has announced its fourth quarter 2020 results. Full year revenue of $14.8m declined 10 per cent Y/Y with a loss of $1.14m. An encouraging sign was a 26 per cent increase in new customer bookings Y/Y. The company has updated its StorSafe virtual tape library backup and archive product, adding AWS Glacier and Deep Glacier support, plus Azure, and the IBM and Wasabi Clouds. StorSafe supports on-premises Hitachi Vantara, IBM and Dell EMC object storage, and multi-tenancy. It works with backup software from Veeam, Veritas NetBackup, Commvault, IBM BRMS and ProtecTier.
Datadobi has submerged its migration capabilities inside a data management wrapper. The company has sponsored a GigaOm report, “Building a Modern Data Management Strategy,” which examines increasing requirements for unstructured data management, and the role that Datadobi solutions can play in addressing those requirements.
IBM Spectrum Scale Data Access Edition for ESS and Data Management Edition for ESS release 5.1 get an added QoS function to specify fileset throughput limits, and the ability to migrate files between Spectrum Scale for ESS and IBM Cloud Object Storage or AWS S3. R5.1 also adds support for SMB and NFS protocols, including NFS 4.1 and IPv6 support.
Quantum has released a reference architecture for large-scale surveillance workloads, combining a highly available front end (Quantum’s VS1110-A application servers) with StorNext, its file system product for video workloads. Quantum says video cameras are the biggest data generators in the world. The RA supports from 500 to 2,000 cameras and 30 days to one year of retention.
SAPBASIS in Denmark has joined the HYCU Cloud Services Provider Program. The company supports SAP environments, including execution, project management and migration to Azure, AWS and Google Cloud, and claims more than 30 customers serving 340,000 end users.
CloudCasa, Catalogic’s Kubernetes backup-as-a-service, has achieved SUSE Rancher Ready status and now supports data protection for Amazon Relational Database Service (RDS).
Commvault Metallic Office 365 Backup for data stored in Microsoft Azure is available free to students with each educator license.
Micro Focus‘s Vertica in Eon Mode is now available for Dell EMC Elastic Cloud Storage. This predictive analytics product provides the ability to right size the compute resources for analytical queries and storage resources for data held on-premises and in the public cloud.
Nexsan’s Assureon Cloud Edition is now GA and offers an immutable active data vault which can be deployed as a public cloud, hybrid or on-premises system.
Objective Security Corp has announced v8 of its ObjectiveFS distributed filesystem. This has an AWS S3 object store back end. V8 adds a new multi-threaded storage cleaner architecture, new Size Tiered and Time Based dynamic compaction heuristics, several new mount options (freebw, autofreebw, mtplus), compaction progress monitoring and more.
Spectra Logic and OpenDrives, a provider of NAS systems, have announced an integrated storage lifecycle management system. It combines OpenDrives’ high-performance, low-latency primary storage with Spectra Logic’s StorCycle Storage Lifecycle Management software.
NetApp and Aston Martin Cognizant Formula One have announced a multi-year partnership as the Aston Martin team gears up for its return to Formula One competition.
People
Rubrik has promoted VP and Head of IT Enterprise Business Applications Ajay Sabhlok to Chief Information Officer and Chief Data Officer.
Cohesity has appointed former NSA official Marianne Bailey as an advisor.
Ionir, a Kubernetes data management startup that sprang from the ashes of Reduxio, has promoted chairman Mike Wall to exec chairman and strategic advisor, and hired Kirby Wadsworth as CMO. Wall was the chairman and CEO of object storage company Amplidata, and led its sale to Western Digital.
Worldwide revenue for enterprise external OEM storage systems declined 2.1 per cent Y/Y to $7.8bn in Q4 2020. The market for the full year 2020 fell 4 per cent.
Capacity shipped grew 11.3 per cent year over year to 23.8 exabytes. Revenue from original design manufacturers (ODMs) selling directly to hyperscale data centres grew 2.0 per cent Y/Y to $6.6bn.
The figures are gleaned from IDC’s latest quarterly tracker. But there are no vendor breakouts by market share or revenues, alas, as the analyst firm has put those supplier numbers behind a paywall.
Last quarter, IDC stopped revealing vendor revenues in the total enterprise storage systems market, revealing only vendor numbers and market share in the external enterprise OEM storage systems market subset. It is now revealing only whole external enterprise OEM market revenue and capacity shipped numbers, together with a top five supplier share chart.
Total capacity shipments for the external OEM + ODM Direct + server-based storage market increased 8.3 per cent to 135.1 exabytes. IDC doesn’t reveal the total revenue for this market.
The chart above shows Dell was the largest supplier in the quarter, followed by HPE, Huawei, NetApp and IBM. We can see that Dell lost market revenue share Y/Y as did IBM. NetApp appears flat while HPE and Huawei gained share. The Rest of Market gained share slightly.
Greg Macatee, IDC research analyst for Infrastructure Platforms and Technologies Group, noted that China was the sole region to “experience growth during the quarter … nearly 30 per cent year over year … bolstered by strong performances from a series of locally headquartered vendors. Another of the overall enterprise storage systems market bright spots was the ODM Direct segment, which grew this quarter at a rate of 2.0 per cent annually.”
Happy HPE and Huawei
HPE pushed out a statement in response to the latest tracker. “HPE continues to grow and take share in the Storage market. In Q4CY’20, when the overall storage market declined and Dell/EMC, NetApp, and Pure Storage declined.” Any NetApp decline in revenue share looks infinitesimal on IDC’s chart.
HPE said revenues grew 10.8 per cent Y/Y and it increased its market share by 1.3 percentage points. Re-reading our earlier reports, we know IDC pegged HPE’s external storage revenue at $802.5m and market share at 10.1 per cent. On this entirely scientific basis, we deduce HPE’s revenues in Q4 2020 were $889.5m and revenue market share was 11.4 per cent.
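For anyone checking our working, the arithmetic is simply IDC’s Q4 2019 figure grown at HPE’s claimed rate:

```python
# The back-of-envelope arithmetic behind the deduction above.
q4_2019_revenue = 802.5   # $m, HPE external storage in Q4 2019, per IDC
growth = 0.108            # 10.8 per cent year-on-year growth claimed by HPE

q4_2020_revenue = q4_2019_revenue * (1 + growth)
print(f"Estimated Q4 2020 revenue: ${q4_2020_revenue:.1f}m")  # ≈ $889m, matching the deduction above to within rounding

q4_2019_share = 10.1      # per cent market share in Q4 2019, per IDC
share_gain = 1.3          # percentage points claimed by HPE
print(f"Estimated Q4 2020 share: {q4_2019_share + share_gain:.1f}%")  # 11.4%
```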
NetApp’s external storage revenue looks flat on IDC’s chart. The company clocked up $704.8m in Q4 2019, according to IDC. IBM’s Q4 2019 storage revenues were $721.8m with a 9.1 per cent share. Judging from the IDC chart, IBM’s market share fell in Q4 2020 to the 6.5 to 7 per cent range, and its revenues appear to be lower than Huawei’s Q4 2019 revenue ($616.2m).
Pure Storage’s IDC storage tracker statement said: “2020 was a tough market for everyone but there were many bright spots. We are pleased with our performance in EMEA, where we were one of only two major vendors to show growth. While the US market shrank quite a bit and was tough for all of us, we are happy with our growth in the USA, where we outperformed all of our major competitors. Japan was especially strong as we grew more than 13 per cent YoY in a market that contracted 5.4 per cent – and Latin America where we grew 5.6 per cent in a market that contracted 8.1 per cent.”
Stop the cannibals
According to IDC, the total all-flash array (AFA) market generated $3.0bn in revenue in the quarter, down 6.9 per cent Y/Y. The hybrid flash/disk array market was worth nearly $3.2bn, increasing 5.8 per cent Y/Y. That indicates that nearline disk capacity sales are rising faster than enterprise SSD sales. Disk cannibalisation by SSDs in enterprise storage is on the way down. This is a surprise to us.
To conclude, Dell and IBM gave up market share and HPE and Huawei grew their market share.
Robin.io’s Cloud Native Storage (CNS) now provides storage and data management software for containerised apps in IBM’s public cloud.
Robin CNS integrates with IBM’s Cloud Kubernetes Service (IKS) which deploys and manages containers in that cloud. Robin CNS is already available for AWS, Azure, GCP, and VMware Cloud Foundation.
Robin’s CEO Partha Seetala, said in a statement: “Supporting IBM customers that rely on IBM Cloud Kubernetes Service is another step in our commitment to integrate Robin Cloud Native Storage with all of the environments that are most important to our users.”
Robin.io CNS graphic.
Chris Rosen, IBM program director for offering management, IBM Cloud Container Service, amplified this: “Robin CNS is designed to make it easy to deploy and manage stateful applications on IKS, enabling our users to easily onboard workloads such as MySQL, Postgres, MongoDB, Redis, Cassandra, and more.”
Robin CNS integrates with CSI and Kubernetes-native admin tooling through standard APIs. It offers app-consistent snapshots and backups, thin clones, multi-cloud portability, and high-availability.
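To give a flavour of how such CSI-backed provisioning is consumed, here is a minimal sketch using the Kubernetes Python client to request a volume; the storage class name and size are illustrative assumptions, not values taken from Robin’s documentation:

```python
# Minimal sketch of CSI-style provisioning via the Kubernetes Python client.
# The storage class name "robin" and the requested size are illustrative
# assumptions, not values taken from Robin's documentation.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mysql-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="robin",              # assumed CSI storage class
        resources=client.V1ResourceRequirements(
            requests={"storage": "20Gi"}
        ),
    ),
)

# The CSI driver behind the storage class provisions the volume on demand.
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```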
Comment
Kubernetes provides the basic hooks and access, such as CSI, that enable storage and data protection suppliers to deliver storage provisioning, data protection and app migration services across any Kubernetes-orchestrated cloud or on-premises environment.
This level – if basic – playing field has enabled a raft of supplier and startup developments: HYCU, MayaData, NetApp Astra, Pure with Portworx, Robin.io, Red Hat OpenShift, Rook, StorageOS, SUSE with Rancher Labs, Veeam with Kasten, and VMware with Tanzu, to name a few.
However, storage and data protection for Kubernetes environments will likely become a feature rather than a product. This means product-focused startups will face strong competition as incumbent storage and data protection suppliers add the feature set to their own products. NetApp’s Astra, which rolled out this week, is a good example.