
Fast RAID rebuilder StorONE preps Azure debut

StorONE is running a technology preview of its TRU S1 software installed on Azure.

The company chose Azure cloud over AWS because of customer demand, StorONE marketing head George Crump told us in a briefing last week.

An Azure S1 instance could be a disaster recovery facility for an on-premises StorONE installation. The GUI and functionality are identical to the on-premises StorONE array, and the software might become generally available around May. It will be interesting to compare StorONE in Azure with Pure’s Cloud Block Store, which is also available in Microsoft’s cloud.

StorONE was founded in 2011, raised $30m in a single funding round in 2012 and promptly went into development purdah for six years. It announced TRU S1 software in 2018. This was described as enterprise-class storage and ran on clusterable Supermicro server hardware with Western Digital 24-slot SSD chassis.

Since then, StorONE has supported a high-performance Optane flash array, with Optane and QLC NAND SSDs, as well as a mid-performance Seagate hybrid SSD/HDD array. Crump told us that although all-flash arrays occupy the performance high ground, the Seagate box is a “storage system for the rest of us with a mid-level CPU, affordability and great performance. … 2.5PB for $311,617 is incredible”.
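As a back-of-the-envelope check, that works out at roughly $125 per terabyte. A minimal sketch, assuming decimal units; the flash/disk capacity split is not published:

```python
# Back-of-envelope $/TB for the quoted 2.5PB Seagate hybrid configuration.
# Assumes decimal units (1PB = 1,000TB); the flash/disk split is not published.
price_usd = 311_617
capacity_tb = 2.5 * 1_000

print(f"${price_usd / capacity_tb:.2f}/TB")  # ~$124.65/TB, or about $0.12/GB
```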

“Seagate originally designed the box for backup and archive. We make it a mainstream, production workload-ready system.”  StorONE’s S1 software provides shared flash capacity for all hybrid volumes. Crump said the flash tier is “large and affordable – 100TB, for example – and typically priced 60 per cent less than our competitors.”

Sequential writes to the disk tier provide faster recall performance. Overall, the hybrid AP Exos 5U84 system delivers 200,000 random read/write IOPS.

According to Crump, competitor systems slow down above 55 per cent capacity usage – and StorONE doesn’t: “We can run at 90 per cent plus capacity utilisation.” This is because StorONE spent its six-year development purdah completely rewriting and flattening the storage software stack to make it more efficient.

Speeding RAID rebuild

Crump noted two main perceived disadvantages of hybrid flash/disk storage: slow RAID rebuilds, and performance. Failed disk drives with double-digit TB capacities can take days to rebuild in a RAID scheme that writes the recovered data to a hot spare drive, for example. That means a second disk failure could occur during the rebuild and destroy data, meaning recovery would have to be made from backups.

StorONE’s vRAID protection feature uses erasure coding, striping data and parity across drives. There is no need for hot spare drives. When a disk fails, the striped data on that disk is recalculated, using erasure coding, and rewritten to the remaining drives in the S1 array.

Crump said: “We read and write data faster. We compute parity faster. It’s the sum of the parts.”

S1 software uses multiple disks to read and write data in parallel, and writes sequentially to the target drives. In a 48-drive system, vRAID reads data from the 47 surviving drives simultaneously, calculates parity, and then writes simultaneously to those same 47 drives.
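StorONE has not published vRAID’s internals, so the sketch below shows only the principle: parity computed across a stripe, and a failed drive’s blocks recomputed from the survivors and rewritten to free space rather than to a hot spare. A minimal single-parity illustration; real erasure codes use more parity and distribute the rewritten data across all remaining drives:

```python
import os

# Toy single-parity stripe: 4 data drives + 1 parity drive.
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [os.urandom(16) for _ in range(4)]   # one block per data drive
parity = xor_blocks(data)                   # parity block

# Drive 2 fails: recompute its block from the survivors plus parity.
# A real array does this for every stripe, in parallel across all drives.
survivors = data[:2] + data[3:] + [parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[2]
```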

Seagate AP Exos 5U84 chassis.

Crump told us: “We have the fastest rebuild in the industry; a 14TB disk was rebuilt in 1 hour and 45 minutes.” This was tested in a dual node Seagate AP Exos 5U84 system with 70 x 14TB disks and 14 SSDs. The disks were 55 per cent full.
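A quick sanity check on that claim, assuming decimal terabytes and that vRAID rebuilds only the occupied 55 per cent of the drive:

```python
# 14TB drive, 55 per cent full, rebuilt in 1 hour 45 minutes.
occupied_bytes = 14e12 * 0.55            # ~7.7TB actually rebuilt
seconds = (1 * 60 + 45) * 60             # 6,300 seconds

print(f"~{occupied_bytes / seconds / 1e9:.1f} GB/s aggregate rebuild rate")  # ~1.2 GB/s
```

That aggregate rate is several times what a single hot spare could absorb – a nearline disk writing at around 250MB/s would need more than eight hours for the same 7.7TB – which is the argument for spreading rebuild writes across all surviving drives.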

Failed SSDs can be rebuilt in single-digit minutes. The fast rebuilds minimise a customer’s vulnerability to data loss due to a second drive failure overlapping the rebuild from a first drive failure.

Crump said StorONE has continued hiring during the pandemic, and that CEO Gal Naor’s ambition is to build the first profitable data storage company to emerge in the last 12 years.

Exclusive: Datera has gone bust

Datera, the high-end storage software startup, has gone into liquidation. All of the company’s assets are being transferred to the assignee of the creditors, according to a letter dated February 19 seen by Blocks & Files.

Michael Maidy, who signed the letter, is a managing member at Sherwood Partners, a business advisory firm specialising in restructuring. He noted in the letter that Datera was originally known as Rising Tide Systems.

A senior source close to the company has confirmed the liquidation.

Datera raised at least $63.9m in funding. The investors, led by Khosla Ventures, look to have taken a bath. Customers, channel partners and staff have lost out as well.

Datera’s tide goes out

Datera was founded in 2013 by ex-CEO Mark Fleischmann, ex-CTO Nicholas (Nic) Bellinger, and Chief Architect Claudio Fleiner. The company developed Elastic Data Fabric (EDF) software, providing block-access, scale-out, server-based storage running in x86 servers. Datera sold hardware appliances for a while, before becoming a software-only supplier.

The company touted EDF to enterprises and cloud service providers and supported DevOps and cloud-native apps. Datera said the software provided webscale economics and integrated it with workload orchestration frameworks such as VMware, OpenStack, Docker, Kubernetes, Mesosphere DC/OS and CloudStack.

EDF competed with block storage arrays from Dell EMC, Hitachi Vantara, HPE, IBM, NetApp and Pure Storage. It also had to make progress against hyperconverged systems, and never managed to establish an outstanding set of advantages over all-flash arrays like those from Pure. Partners like HPE treated it as just another software route to selling servers and had no real commitment to Datera.

Downturn and departures

There was a C-round of funding in May 2018, accompanied by cost-cutting and layoffs. Fleischmann resigned the CEO role in December that year. Board member, ex-EMC exec and DataTorrent CEO Guy Churchward then took on the Datera CEO position.

Churchward had two heart attacks in May 2019 but recovered, and secured selling partnerships with HPE and Fujitsu, amongst others. Fleischmann quit altogether in April 2020, and Fleiner left in May 2020.

Fast forward to February 2021 and Churchward resigned for health reasons. At the same time, Chief Product Officer Narasimha Valiveti quit to join Oracle. That was, with hindsight, the writing on the wall.

At that time Chief Revenue Officer Flavio Santoni told us: “Guy had a prior agreement with the board to step back at the end of 2020. Our lead investor stepped in as interim CEO to work through the next few moves.”

The liquidation letter is dated four days after we were told this.

Your occasional storage digest with ScaleFlux, AWS, Dell and more


In this week’s roundup, computational storage says hello to Nvidia’s GPUDirect; AWS cuts Glacier prices by a smidge and Dell has upgraded a server line with AMD’s Milan and Intel’s Ice Lake CPUs.

ScaleFlux and GPUDirect

ScaleFlux makes the CSD 2000 computational storage SSD, which has an on-board processor and can compress and decompress data in real time. The drive supports NVIDIA Magnum IO GPUDirect Storage and the company says this is the first use of computational storage for AI/ML and data analytics with GPUs.

CSD 2000 components.

The compression/decompression is transparent to applications, requires no code changes, does not incur latency or performance penalties, reduces data movement, and scales throughput with storage capacity. It also expands the capacity per flash bit by 3-5x.
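ScaleFlux has not published its compression engine’s details. As a rough feel for what a 3-5x effective-capacity multiplier means, here is a sketch using zlib as a software stand-in; the achieved ratio is entirely data-dependent:

```python
import zlib

# Structured, repetitive data of the kind databases and logs produce;
# zlib stands in for the CSD 2000's inline hardware compression engine.
rows = [f"2021-03-01T00:00:{i % 60:02d},sensor-{i % 100},{(i * 0.173) % 1:.3f}\n"
        for i in range(10_000)]
sample = "".join(rows).encode()

ratio = len(sample) / len(zlib.compress(sample, 6))
print(f"{ratio:.1f}x -> 1TB of physical flash holds ~{ratio:.1f}TB of user data")
```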

Hao Zhong, ScaleFlux co-founder and CEO, said the CSD 2000 “handles the decompression process and eliminates up to 87 per cent of the data loading time so the GPU can get to work faster on the training activity.”

We asked for performance numbers and a spokesperson said: “We will not be showing any specifics on run time reduction for the model training or capacity expansion with specific training data sets at this time. As we expand engagements with customers, we expect to gather that information and make a subsequent announcement in the coming months.”

Trivial AWS Glacier data movement price cut

A few days ago we ran this story – AWS slashes Amazon S3 Glacier data movement prices – about a 40 per cent cut in AWS S3 Glacier prices for Data PUTs and Lifecycle requests. The percentage cut sounded great but there were no actual numbers. Now we have them and the net effect is trivial.

We were told the S3 pricing page – aws.amazon.com/s3/pricing/ – has a tab for Requests and data retrievals. There, you will find all the PUT and Lifecycle costs for every storage class.

AWS told us: “The price reduction for PUTs and Lifecycle transitions requests for S3 Glacier reduced prices by 40 per cent in all AWS Regions. For example, for US East (Ohio) Region we reduced the price from $0.05 down to $0.03 per 1,000 requests for all S3 Glacier PUTs and Lifecycle transitions.”

Gosh, we see a $0.02 cut per 1,000 requests. Colour us unexcited.
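Here is what the cut is worth at volume, using the US East (Ohio) numbers AWS quoted:

```python
# Glacier PUT/Lifecycle transition pricing, US East (Ohio).
old_rate, new_rate = 0.05, 0.03   # $ per 1,000 requests (a 40 per cent cut)

for requests in (1_000_000, 100_000_000, 1_000_000_000):
    saving = (old_rate - new_rate) * requests / 1_000
    print(f"{requests:>13,} requests: ${saving:,.0f} saved")

# One million requests saves $20; even a billion saves only $20,000.
```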

Dell brings AMD and GPU power to its servers

Dell has updated the PowerEdge server line with 17 models featuring AMD Milan as well as Intel Ice Lake gen 3 Xeon CPUs, and PCIe gen 4 bus support.

The core of the range is a set of rack servers, accompanied by edge and telecom models, a modular compute sled, and GPU-optimised and C-Series machines for HPC use cases.

The R7nn5 and R6nn5 systems use the Milan processors. An R750 server will use Intel’s Ice Lake gen 3 Xeon CPUs.

The XE8545 combines up to 128 cores of AMD Milan processors, four NVIDIA A100 GPUs, and NVIDIA’s vGPU software in a dual socket, 4U rack server. 

Dell XE8545 server.

The R750xa delivers GPU-dense performance for machine learning training, inferencing and AI with support for the NVIDIA AI Enterprise software suite and VMware vSphere 7 Update 2. It is a dual socket, 2U server with Ice Lake CPUs and supports up to four double-wide GPUs and six single-wide GPUs. It also supports Optane PMEM 200 storage-class memory.

Customers can expect the systems to begin rolling out soon. 

Shorts

Datadobi has released DobiProtect support for Azure Blob storage. The data migration company can now protect unstructured data by moving copies into Azure Blob storage.

Datto’s revenues in the quarter ended Dec 31, 2020, were $139m, up 16 per cent Y/Y, with $129m coming from subscriptions. There was a loss of $7.2m. These numbers beat Wall St expectations. Full year revenues were $485.3m, up 18 per cent Y/Y, with a loss of $31.2m. The company provides cloud data protection storage services sold by MSPs.

Cloud computing provider Linode has published a Cloud Spectator report revealing benchmarking data for cloud computing virtual machines (VMs) across Alibaba, Amazon Web Services (AWS), DigitalOcean, Google Cloud Platform, Linode and Microsoft Azure. Linode’s cloud servers based on AMD performed better than competing instances running on Intel chipsets.

WekaIO this week said the Oklahoma Medical Research Foundation is using its software to run more concurrent research jobs in a shorter amount of time. OMRF is an independent, nonprofit biomedical research institute with more than 450 staff and over 50 labs studying cancer, heart disease, autoimmune disorders, and diseases of ageing.

Open-E has released an update for its ZFS- and Linux-based Open-E JovianDSS Data Storage Software. The up29 version, which performs better and has more configuration options, is free of charge for all software users and is available for download on the company’s website.

SaaS-based data protector Druva has promoted Robert Brower to be SVP of Worldwide Partners and Alliances. He’s going to recruit new partners. Brower is currently VP of Strategic Operations and Chief of Staff.

AIOps-supplying Virtana has announced a SaaS-based Virtana Platform to estimate costs for migrating applications to the public cloud. It’s appointed Alex Thurber as SVP Customer Success and Channel Strategy and Jonathan (Jon) Cyr as VP of Product Management. 

Dell extends HCI+hardware lead in Q4, and HPE is growing fast. But what about Nutanix?

Dell extended its lead over Nutanix in branded hyperconverged infrastructure (HCI) sales in the fourth quarter of 2020. Nutanix also fell back compared with VMware when measured by HCI software revenues, the latest IDC figures show.

However, Nutanix now recognises revenues over the lifetime of the contract and not at the point of sale, and this transition could mask underlying performance. Market watcher IDC notes that Nutanix’ Y/Y comparison is highly influenced by its shift away from software licenses to subscription sales, recent incentives targeting annual contract value over total contract value, and a go-to-market shift towards OEM partners.

IDC estimates the global HCI market grew 7.4 per cent in Q4. HCI is part of the overall converged systems market which IDC slices three ways in its Worldwide Quarterly Converged Systems Tracker: certified reference systems and integrated infrastructure; integrated platforms; and HCI systems. The HCI systems are tracked by the HCI brand and also by the HCI software owner to take account of OEM sales.

IDC research analyst Greg Macatee said in the press release: “The converged systems market closed out the year with tepid 0.2 per cent year-over-year growth in the fourth quarter, while the market for the full year 2020 finished down 0.6 per cent annually. That said, hyperconverged system sales were the market’s main pocket of growth in 4Q20, finishing up 7.4 per cent year over year, which is an acceleration over what we have witnessed over the past few quarters.”

  • Certified reference systems and integrated infrastructure market grew revenues 0.1 per cent Y/Y to $1.6bn; 35.6 per cent of all converged systems revenue. 
  • Integrated platforms revenues declined 25.9 per cent Y/Y to just under $460m; 10.1 per cent of the market. 
  • HCI revenues grew 7.4 per cent Y/Y to $2.5bn; 54.2 per cent of the market.

HCI numbers

Dell led the top three branded HCI suppliers, with revenues increasing 11.1 per cent Y/Y to $801.8m, giving it 32.6 per cent share – up 0.9 per cent. HPE was second, with $331.7m revenues, up 25.4 per cent, and 13.5 per cent share. Nutanix was third, with $254.1m revenues, down 18.8 per cent, and 10.3 per cent share.

Tom Black, HPE’s SVP and GM for Storage, said in a statement: “The momentum in HPE’s HCI business continues to accelerate. This is now the second consecutive quarter in which we have clearly outperformed the market.”

Nutanix software is sold by OEMs such as Dell and HPE, and so the HCI cut by software owner presents a different revenue picture.

By this yardstick, VMware is top with $953.8m in revenues, up 1.7 per cent Y/Y, giving it 38.7 per cent share. Nutanix is in second place, with $575.5m in revenues, down 6.6 per cent Y/Y and 23.4 per cent revenue share. Unexpectedly, Huawei is in the joint third spot, with revenues of $154.5m, up 75.7 per cent, and 6.3 per cent share. It is tied with Dell Technologies, which had revenues of $137.4m, an even higher 102.4 per cent growth rate, and 5.6 per cent share.

HCI systems are taking slightly more of the overall storage market than before, as a chart indicates – if you look closely:

B&F chart using filed IDC tracker numbers.

IDC said the external storage market (arrays, filers and object stores) declined 2.1 per cent Y/Y to $7.8bn in revenues in the quarter. As HCI revenues were up 7.4 per cent in the same period, to $2.5bn, they gained share. The three most recent 4Q peaks in the chart’s yellow line show the gap between HCI and external storage revenues gently narrowing over three years, as HCI gains on external storage.

Diamanti extends Kubernetes support to Google Cloud

Diamanti has added Google Cloud support with the new release of the Spektra Kubernetes management platform.

Spektra manages Kubernetes clusters on-premises and in the cloud. The software includes ‘Ultima’, a network and storage data plane that provisions storage and network resources for apps in VMware Tanzu, Red Hat OpenShift, Amazon EKS and Azure Kubernetes Service. With the latest release of Spektra – V3.2 – Ultima now supports Google Kubernetes Engine.

B&F diagram

Diamanti claims Ultima lowers the total cost of ownership (TCO) by avoiding or minimising certain cloud provider charges for backup, data protection, disaster recovery, and multi-zone availability capabilities.

Users can set up disaster recovery between two Google Kubernetes clusters and between on-premises clusters and GCP. Applications can be migrated between Ultima-supported environments.

Diamanti diagram.

Spektra 3.2 includes:

  • The existing Docker container runtime interface can be replaced by CRI-O, which enables Kubernetes to run containers directly, without as much tooling and code as the Docker runtime,
  • Searchable application logs to help diagnose issues and shorten fix time,
  • Ability to launch terminals with command line interfaces directly in the Spektra user interface to speed problem-handling.

Together with HPE Ezmeral, MayaData, NetApp Astra and many others, Diamanti aims to be the Kubernetes concierge for enterprises. Hybrid and multi-cloud portability and protection in the Kubernetes environment is hard work; Diamanti says it makes the work easier.

HPE touts Ezmeral Data Fabric for AI and machine learning workloads

HPE is to sell the HPE Ezmeral Data Fabric on a standalone basis, and has opened a marketplace for cloud-native software running on this AI/ML-focused service.

Ezmeral Data Fabric was previously known as the MapR Data Platform for data lakes. It is an exabyte-scale repository for streaming and file data, with a real-time database, a global namespace, and integrations with Hadoop, Spark, and Apache Drill for analytics and other applications.

HPE said enterprise customers can use the service to create a unified data repository for data scientists, developers, and IT to access and use, with control of how it is used and shared. The company envisages AI and ML processing will take place across the IT spectrum, from edge sites and data centres through to the public clouds, with containerised apps processing data from a global repository.

The data is accessed via several protocols – HDFS, POSIX, NFS, and S3 – and can be tiered automatically to hot, warm and cold stores across hybrid cloud environments.
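As a sketch of what the S3 route looks like, here is a minimal object write and read with boto3. The endpoint URL, bucket name and credentials are illustrative placeholders, not documented HPE values:

```python
import boto3

# Ezmeral Data Fabric exposes an S3-compatible interface alongside
# HDFS, POSIX and NFS; this assumes a hypothetical gateway endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://datafabric.example.internal:9000",  # placeholder
    aws_access_key_id="ACCESS_KEY",                           # placeholder
    aws_secret_access_key="SECRET_KEY",                       # placeholder
)

s3.put_object(Bucket="analytics", Key="raw/events.json", Body=b'{"event": "login"}')
print(s3.get_object(Bucket="analytics", Key="raw/events.json")["Body"].read())
```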

Kumar Sreekanti, HPE CTO and head of software, was quoted in a statement: “The separate HPE Ezmeral Data Fabric data store and new HPE Ezmeral Marketplace provide enterprises with the environment of their choice, and with visibility and governance across all enterprise applications and data through an open, flexible, cloud experience everywhere.”

HPE released Ezmeral, a software framework for containerised apps, in June 2020. At launch, there were two components: the Ezmeral Container Platform and Ezmeral ML Ops. Ezmeral was added to the GreenLake subscription service earlier this month. Now we have the separated-out Ezmeral Data Fabric, Ezmeral ML Ops, an Ezmeral Technology Ecosystem program and the Ezmeral Marketplace.

Vendors on the new marketplace are validated via the new Ezmeral Technology Ecosystem program. Dataiku, MinIO, H2O.AI, Rapt.AI, Run:AI, Sysdig, and Unravel have already passed muster. In addition, Apache Spark, TensorFlow, GitLab, and Elastic Stack are available on the marketplace.

Ezmeral Data Fabric is available as a software license subscription to run on any infrastructure or public cloud. The Ezmeral Container Platform and Ezmeral ML Ops are available as cloud services through GreenLake now, and HPE plans to offer the Ezmeral Data Fabric as a GreenLake service in the future.

Data fabric softener

With enterprise adoption of containerisation poised to go mainstream, multiple vendors have developed software management products to abstract the complexities of Kubernetes-orchestration of containers.

HPE’s Ezmeral competitors include VMware Tanzu, Red Hat OpenShift, Dell Karavi to an extent, Diamanti, the Hitachi Vantara Kubernetes Service, MayaData’s Kubera, NetApp Astra, and Pure Storage Portworx.

Startups like Diamanti and MayaData see a great opportunity to make enterprise inroads with their Kubernetes management magic carpets. Incumbents like HPE see an opportunity to extend existing customer wallet share and a necessity to deny wannabe startups any headroom.


US Commerce Dept. probes Seagate for Huawei sanctions ‘breach’

The US Department of Commerce has opened an investigation into Seagate, for possible sanction-busting disk drive shipments to Huawei. The probe centres on controller chips inside the drives, according to reports.

The Commerce Department imposed tougher sanctions on Huawei in August 2020, in order to “prevent Huawei’s attempts to circumvent US export controls to obtain electronic components developed or produced using US technology. This amendment further restricts Huawei from obtaining foreign made chips developed or produced from US software or technology to the same degree as comparable US chips.”

Companies can apply for a Department of Commerce license to ship products to Huawei. For example, Western Digital has applied for a license to sell disk drives and SSDs to Huawei, but in the meantime has stopped shipments to the company.

In September 2020, Seagate CFO Gianluca Romano told a Deutsche Bank conference: “We are still going through the final assessment, but from what I have seen until now, I do not see any particular restriction for us in terms of being able to continue to keep the Huawei or any other customers in China. So, we do not think we need to have a specific license.”

We have asked Seagate about this investigation and a spokesperson said: “Seagate confirms that it complies with all applicable laws including export control regulations. We do not comment on specific customers.”

How Micron aims to compete with 3D XPoint

Micron aims to develop new storage-class memory products that compete with Intel’s Optane, using technology based on Compute Express Link (CXL).

Micron’s EVP and Chief Business Officer Sumit Sadana said yesterday: “We will end 3D XPoint development immediately and cease manufacturing 3D XPoint products upon completing our industry commitments over the next several quarters.” That means shipping 3D XPoint chips to Intel.

Explaining the company’s decision to withdraw from manufacturing 3D XPoint in an investor call, Micron said the switch from proprietary CPU-to-Optane PMEM links to open CXL interconnects means its “focus is on addressing data-intensive workload requirements while reducing barriers to adoption, such as software infrastructure changes.”

Investor call

Micron said in prepared remarks it had derived substantial knowledge gain from 3D XPoint. “The knowledge, experience and intellectual property gained in this effort will give us a head start on several important products that we will introduce in the coming years… we will continue our technology pathfinding efforts across memory and storage, including our work toward future breakthroughs in storage-class memory.”

“Memory was always the strategic long term market opportunity for 3D XPoint.” However, significant problems have delayed progress: “One important challenge that 3D XPoint memory products face in the market is that the latency of access requires significant changes to data centre applications to leverage the full benefits of 3D XPoint.

“These changes are complex and extremely time-consuming, requiring years of sustained industry-wide effort to drive broad adoption. In addition, there are important cost-performance trade-offs that need to be characterised and optimised for each workload.”

Sadana said Micron is using its XPoint process technology and X100 XPoint SSD design teams in developing new CXL-based storage-class memory products due in the next few years. That means Micron will build product to compete with Optane and its 3D XPoint technology.

Intel said in a statement: “Micron’s announcement doesn’t change our strategy for Intel Optane or our ability to supply Intel Optane products to our customers.”

Analysts’ view

Jim Handy.

I asked Jim Handy of Objective Analysis for his take on Micron’s XPoint withdrawal. To set the scene, he says that, for its entire history, 3D XPoint memory has lost significant sums. By his estimates this loss was roughly $2bn in 2017 and 2018, dropping to $1.5bn in 2019. Micron, in its call, said its 3D XPoint production was costing the company about $400m annually.

Blocks & Files: What will be the likely effect on Intel?

Jim Handy: I don’t anticipate a big impact on Intel.  Here’s why: In the prepared statements for Micron’s Investor Call yesterday management said that the company would continue to ship 3D XPoint to honour its commitments for the next several quarters. That removes any concern over short-term availability.  Intel has a small fab in New Mexico that already makes next-generation 3D XPoint chips and that can be ramped. I believe that it was once Intel’s largest fab maybe 20 years ago, so it’s certainly a large enough facility – it just needs additional tools. Of course, since Micron’s selling the fab that makes today’s 3D XPoint in Utah, Intel could simply buy it and solve the problem instantly.

Blocks & Files: How might Intel respond?

Jim Handy: I suspect that Intel has seen this coming for a long time and has a very solid contingency plan in place. The company will simply move from Plan A to Plan B. Of course, they will have to do some extra work to calm their Optane customers.

Blocks & Files: How should it respond in your view?

Jim Handy: I would favour the purchase of the Utah facility. It would be a seamless transition. I doubt that there’s a frenzy of likely purchasers since it will need significant re-tooling if it is to be used to produce something other than 3D XPoint.

Blocks & Files: What will be the effect on other storage-class memory suppliers, such as Samsung with its Z-SSD and Everspin with MRAM?

Jim Handy: Neither of these technologies (nor any other) plays into the heart of the Optane market, which is a persistent DIMM that sells for half of DRAM’s price. Z-SSD is not a DIMM, and Z-NAND is ill-suited for use in a DIMM (slow writes, erase-before-write, etc.), nor is Kioxia’s XL-FLASH. Everspin and Renesas MRAM, as well as Adesto’s and Panasonic’s ReRAM, all sell for considerably more than DRAM.

Blocks & Files: How would you sum up the state of Optane in the market now?

Jim Handy: I see little actual change. Intel is definitely not left in the lurch, and Micron will be better off without the XPoint losses that it has incurred in recent years. While customers will be put in a position of having a single source for the technology, with no prospects of getting an alternate source in the near term, Optane’s unique product positioning will prevent Intel from being able to gouge, since Optane must sell at sub-DRAM prices to make any sense. I don’t anticipate any other memory makers rushing in to fill the void since they have visibility into both Micron’s and Intel’s losses in this market.

Overall Handy says we should expect Intel to continue to promote its Optane technology to provide a strong competitive advantage against AMD processors.

The Webb view

Mark Webb of MKW Ventures Consulting gave B&F his five-point take on Micron’s withdrawal:

  1. Intel needs to find a new supplier. Intel has a few backup plans, but none is very cost-effective. Intel can’t put more cash into this.
  2. 3D XPoint is by far the leading high density persistent memory. MRAM is a different market. Micron is abandoning development and the fab. Clearly this is not a good indicator of confidence in the technology’s revenue growth.
  3. Optane DIMMs – Persistent Memory – are the main market. Intel’s customers report that Optane Persistent Memory has uses and is effective in certain applications. The question is how many applications, and how many servers, need Optane DIMMs. Right now far fewer than 10 per cent of servers need Optane Persistent Memory.
  4. Optane SSDs are a niche for data centres. This is true for Z-NAND as well. Persistent Memory is the market that matters.
  5. CXL is the future of memory/storage. It is not clear why Micron thinks 3D XPoint is not applicable for CXL memory.
Micron X100 3D XPoint-based SSD.

Controlling difficulties

An industry insider who declined to be named told me: “As we understand from a source there, they [Micron] were unable to build the [X100 SSD] controller. XPoint is essentially phase change memory, so the controllers are entirely different from NAND controllers. The decision was made not because of market opportunity, but rather because of execution and market timing.”

“The [XPoint] silicon gets written to in entirely different ways… Understand this new type of media does not get written to in the classic program/erase cycle method, as NAND flash does. All of the mechanics are completely different. … you can’t use conventional flash controllers for this.”

NetApp rolls out Spot services to Azure and Apache Spark

NetApp has released Spot Wave, a data management service for Apache Spark, and added Azure Kubernetes Service to the list that Spot Ocean – its Kubernetes-orchestrated container app deployment service – supports.

Spot, with its containerised app deployment technology, was acquired by NetApp in June last year. The acquisition enabled NetApp to offer containerised app deployment services based on seeking out the lowest-cost or spot compute instances that meet service level agreements.

Amiram Shachar.

Amiram Shachar, NetApp’s VP and GM for Spot, said in a statement: “The necessity for organisations to balance cloud infrastructure cost, performance and availability for optimal efficiency is complex and time-consuming. Spot Wave and Ocean are solving that problem by providing a serverless experience for Spark and ensuring their infrastructure is continuously optimised.”

The Spot code or engine is said to be AI-based and it supplies the foundation of NetApp’s Spot Ocean, which supports Amazon Web Services ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service) instances as well as the Google Kubernetes Engine. NetApp has now added support for Azure Kubernetes Service.

Spot Wave builds on Spot Ocean to automate the deployment of Apache Spark big data applications on the Big Three clouds.

NetApp Spot Wave screenshot

Spot Wave automates the provisioning, deployment, autoscaling and optimisation of these Spark applications. There is no need for users to set up server instances in the cloud. Wave runs Spark jobs on these clouds’ containerised infrastructure using a mix of spot, on-demand and reserved instances, which can provide up to 90 per cent cost saving compared to only using on-demand instances. 
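As an illustration of how a blended instance mix produces those savings, consider the sketch below; the fleet shares and discount rates are assumptions for the example, not published NetApp or AWS figures:

```python
# Blended hourly cost of a fleet mixing purchase options.
# Format: option -> (share of fleet, price relative to on-demand).
mix = {
    "spot":      (0.70, 0.10),   # assumed 90 per cent spot discount
    "reserved":  (0.20, 0.60),
    "on_demand": (0.10, 1.00),
}

blended = sum(share * price for share, price in mix.values())
print(f"Blended cost: {blended:.2f}x on-demand ({1 - blended:.0%} saving)")
# -> 0.29x on-demand (71% saving)
```

A fleet running entirely on deeply discounted spot capacity approaches the quoted 90 per cent ceiling; Wave’s job is to keep as much of the workload as possible on that tier without breaching availability requirements.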

We are looking at a future where NetApp’s Astra data management suite for Kubernetes workloads can help develop containerised apps, and Spot runs them.

Micron’s 3D XPoint departure is not good news for Intel Optane

Analysis. Micron’s decision, announced yesterday, to scrap 3D XPoint development and sell its Lehi fab, which makes XPoint chips for itself and Intel, has thrown a giant spanner in the Optane works. Does storage-class memory have a future?

3D XPoint Optane Persistent memory (PMEM) acts as a means of enlarging memory capacity by adding a slow Optane PMEM tier to a processor’s DRAM. This shortens application execution time by reducing the number of IOs to much slower storage. The technology works but is difficult to engineer, which is why it has taken Intel much of the five years since Optane’s launch in 2015 to build a roster of enterprise software firms that support Optane PMEM.

Micron’s Lehi semiconductor foundry.

Take-up has not been helped by Intel’s treatment of Optane as a proprietary technology with a closed interface to specific Xeon CPUs. There is no Optane support for AMD or Arm CPUs which would enlarge the Optane PMEM market – but at the cost of Xeon processor sales.

Micron has decided that, in the wake of the rise of GPU-style workloads such as graphics, gene sequencing, AI and machine learning, the overarching need is for more memory bandwidth from CPUs, GPUs and other accelerators to a shared and coherent memory pool. This is different from the Optane presumption that CPUs are limited by memory capacity.

Compute Express Link scheme.

The Compute Express Link (CXL) is the industry-standard way to link processors of various kinds to a shared memory pool. Micron has said it supports CXL and will develop memory products that use it.

In the Micron worldview, Optane’s role would be as a CXL-connected storage-class memory pool. Other storage-class memory products, such as Everspin’s STT-MRAM, will also likely need to support CXL in order to progress in the new CPU-GPU-shared memory processing environment. That is, if SCM has a role at all.

SCM’s role

Storage-class memory occupies a price performance gap between faster and higher-priced DRAM and slower and lower-priced NAND flash. Its problem has been that in SSD form it is seen as too expensive for the speed increase it provides. In PMEM (DIMM) form it is too expensive and needs complex supporting software, making it a relatively poor DRAM substitute. No-one would use Optane PMEM if DRAM was (a) more affordable and (b) more of it could be attached to a CPU.

As the world of processing moves from a CPU-only model to a twin multi-CPU and multi-GPU model, memory needs to be sharable between all these processors. That requires a different connectivity method from the legacy CPU socket approach. High-bandwidth memory (HBM) stacks memory dies above an interposer card which connects with a CPU. It is not much of a stretch to envisage HBM pools connected to CPUs and GPUs across a CXL fabric.

High Bandwidth Memory concept.

There are several SCM suppliers, none of which have made much progress compared to Intel’s Optane. Samsung’s Z-NAND is basically a faster SSD. Everspin’s STT-MRAM is seen as a potential DRAM replacement and not a subsidiary, slower tier of memory to DRAM; that’s Optane’s role. Spin Memory’s MRAM is in early development. Weebit Nano’s ReRAM is also in relatively early development.

It has taken Intel five years to get to the point where it still doesn’t have enough software support to drive mass Optane PMEM adoption – which shows that these small startups face a monumental problem.

The lesson of Optane PMEM is that all these technologies will need complex system and application software support and hardware connectivity if they are to work alongside DRAM.

Perhaps the real problem is that there is no storage-class memory market. The CPU/GPU connectivity and software implementation problems are so great as to deny any candidate technology market headroom.

Micron has judged that the SCM game is not worth the candle. Intel now has to decide if it should go it alone. It could double down on its Optane investment by buying Micron’s Lehi fab, or it could decide to spend its Optane and 3D XPoint development dollars elsewhere.

Hello Azure. Pure Cloud Block Store is here

Pure Storage has made its Cloud Block Store available on the Azure Marketplace.

Cloud Block Store is the cloudified version of Purity OS, the operating system that runs on the company’s FlashArrays. The software provides high-availability block storage, a DR facility and Dev/Test sandboxes. All these instantiations can be handled through Pure1 Storage Management.

Cloud Block Store enables bi-directional data mobility between FlashArray on-premises, hosted locations and the public cloud. The service is already available on AWS.

Aung Oo, Partner Director of Program Management for Microsoft Azure Storage, issued a statement: “Pure Cloud Block Store on Azure, which is built with unique Azure capabilities including shared disks and Ultra Disk Storage, provides a comprehensive high availability and performant solution.”

Pure has said it may roll out CBS to other public clouds – Google Cloud springs to mind. The company is also considering expanding storage protocol support – files and S3 objects spring to mind.

The company has announced a Pure Validated Design for Microsoft SQL Server Business Resilience to provide business continuity for SQL Server databases running on premises. This enables disaster recovery in the cloud, with Cloud Block Store for Azure acting as a high-availability target.

Pure CBS replication for DR.

With the Azure coverage, Pure joins HPE, Infinidat, NetApp, IBM’s Red Hat and Silk in providing a common block storage dataplane across their on-premises, hosted, AWS and Azure instances. Silk and Red Hat go further by covering GCP as well. 

The hybrid multi-cloud environment is becoming a reality and we expect newer vendors, such as VAST Data and StorONE, to follow suit.

AWS slashes Amazon S3 Glacier data movement prices

Amazon Web Services has cut some S3 Glacier prices by 40 per cent, AWS Chief Evangelist Jeff Barr revealed yesterday.

“We are lowering the charges for PUT and Lifecycle requests to S3 Glacier by 40 per cent for all AWS Regions… Check out the Amazon S3 Pricing page for more information,” he wrote.

A PUT request moves S3 data into Glacier. A Lifecycle request migrates data from one S3 storage class to another, with the aim of saving storage costs. S3 does not transition objects smaller than 128 KB because it’s not cost effective.

AWS S3 lifecycle waterfall

“You can use the S3 PUT API to directly store compliance and backup data in S3 Glacier. You can also use S3 Lifecycle policies to save on storage costs for data that is rarely accessed,” Barr wrote.
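For reference, a Lifecycle policy of the kind Barr describes is a few lines of configuration. A minimal sketch using boto3; the bucket name, rule ID and prefix are placeholders:

```python
import boto3

# Transition objects under backups/ to the Glacier storage class after
# 90 days -- each transition is one of the Lifecycle requests the price
# cut applies to.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",                 # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```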

We could not immediately discern how to compare the before and after prices, and have asked AWS for specifics. On its Amazon S3 pricing page, AWS notes “there are per-request ingest fees when using PUT, COPY, or lifecycle rules to move data into any S3 storage class.”

However, these fees are not displayed – and so there is no simple way to find out how much S3 Glacier PUT and Lifecycle requests cost. Customers are told to estimate their costs using an AWS pricing calculator. But this estimates prices for all AWS services, including S3 Glacier, based on your proposed usage.

Update: AWS told us: “The price reduction for PUTs and Lifecycle transitions requests for S3 Glacier reduced prices by 40 per cent in all AWS Regions. For example, for US East (Ohio) Region we reduced the price from $0.05 down to $0.03 per 1,000 requests for all S3 Glacier PUTs and Lifecycle transitions.”

The basic Glacier storage costs of $0.004/GB/month remain unchanged.