
Lenovo boosts low-end all-flash array with end-to-end NVMe

Lenovo DM5100F all-flash array in 2RU x 24-slot chassis

Lenovo has juiced up its entry-level all-flash array with NVMe SSDs, NVMe/FC access and faster Fibre Channel support. The company said the new ThinkSystem DM5100F array is suitable for analytics and AI workloads.

Lenovo teamed up with NetApp in August to produce the all-flash ThinkSystem DM Series. According to the company, the new system delivers 45 per cent higher performance than its precursor, the DM5000, which uses SAS SSDs and 16Gbit/s FC access.

DM arrays use NetApp ONTAP software, while the hybrid flash/disk DE Series uses SANtricity OS, NetApp’s software for its E-Series arrays.

The DM5100F scales out to 48 NVMe SSDs, with capacity topping out at 737.28TB (48 x 15.36TB drives). This is less than the DM5000, which holds 144 SAS SSDs for a maximum 2.2PB capacity.

The DM5100F’s maximum controller memory is 128GB, twice that of the DM5000F’s 64GB. The new model also has 16GB of NVRAM – double the DM5000F’s 8GB. The increases reflect the greater burden on the DM5100F controller from the NVMe SSDs, NVMe/FC access and overall increased IOPS performance. 

Lenovo’s new array requires ONTAP 9.8, which is also available for the other DM Series models. 

Lenovo ThinkSystem DM5100F array’s 2RU x 24-slot controller chassis

All the DM Series arrays now get S3 object access support, adding to existing block and file access protocols (FC, iSCSI, NFS, pNFS, SMB, NVMe/FC). There is transparent failover and management of object storage. Customers can add cold-data tiering from the SSDs to the cloud, or replicate data to the cloud.

A new DB720S Fibre Channel switch links servers to the DM and DE Series arrays, and it adds 64Gbit/s Fibre Channel speed plus lower access latency to the existing 32Gbit/s and 16Gbit/s switches in Lenovo’s product locker. (This is an OEMed version of Broadcom’s G720 switch.)

Cloud-based management 

Lenovo has released Intelligent Monitoring 2.0, an update of its cloud-based management tool for the DM and DE Series arrays. This enables customers to monitor and manage storage capacity and performance for multiple locations from a single cloud-based interface. V2.0 improves the analytics and adds AI-based prescriptive guidance.

Pure overtakes NetApp in Gartner magic quadrant for primary arrays

Pure Storage is rated highest in Gartner’s 2020 primary arrays Magic Quadrant (MQ), overtaking NetApp, last year’s front runner.

Gartner defines primary arrays as all-flash or hybrid flash-and-disk on-premises storage arrays that deliver block services (structured data workloads) and possibly file and object access.

Now for the Blocks & Files standard MQ explainer. The magic quadrant is defined by axes labelled ‘ability to execute’ and ‘completeness of vision’, and split into four squares tagged ‘visionaries’, ‘niche players’, ‘challengers’ and ‘leaders’.

The 2020 primary arrays MQ Leaders’ box is split into two groups, with IBM moving up from the trailing leaders to join Pure, NetApp, Dell and HPE in the, umm, leading leaders.

Notable changes include Inspur moving up from niche players to challengers. Oracle has fallen back in the niche players box.

Five vendors have dropped out of the 2020 MQ: Western Digital, which sold its IntelliFlash line to DDN; NEC; Infortrend; Synology; and Kaminario (now rebranded as Silk). Gartner’s MQ inclusion criteria require suppliers to provide on-premises arrays, which excludes Silk, now a software-only vendor.

To merit inclusion in the MQ, suppliers must also have generated more than $50m in sales revenue over the past year. This may explain the exit of NEC, Infortrend and Synology. It could also account for the non-appearance of VAST Data and StorONE, which might otherwise have been expected to appear in the visionaries box.

Gartner analysts list three strategic planning assumptions:

  • By 2025, 50 per cent or more of enterprises will have moved to an OPEX storage consumption model.
  • By 2025, 20 per cent or more of enterprises will be using NVMe-oF. Just five per cent use it currently.
  • By 2023, at least 20 per cent of enterprises will be using cloud storage management tools to link their arrays to the public cloud for backup and disaster recovery.

Could it be Magic (Quadrant)?

Here is the 2020 primary array Magic Quadrant, kindly made available by Infinidat.

And, for comparison, here is last year’s primary array MQ:

The green diagonal represents a balance of completeness of vision and the ability to execute.

Cohesity writes two new chapters for its everything DMaaS story

Cloud Love

Cohesity this week officially launched DataProtect-as-a-Service. The data management vendor also said it will SaaS-ify SiteContinuity, the new disaster recovery software it announced in September. The moves show the company’s progress in delivering all its data management functions as a service (DMaaS).

Cohesity made the twin announcements at Amazon’s re:Invent 2020 yesterday – an appropriate venue as the services are built atop the AWS public cloud. (Cohesity proclaimed its intention to deliver DataProtect-as-a-Service and partner with Amazon in October – you can read our story here.)

Matt Waxman

Matt Waxman, VP of Product Management at Cohesity, said Cohesity Data Management-as-a-Service “removes the complexities of managing infrastructure”.

Cohesity will continue to offer on-premises software for customers that want to retain their on-premises infrastructure. Management of all versions of Cohesity DataProtect is handled through the company’s Helios cloud admin console.

Cohesity argues that customers prefer a unified DMaaS product set covering several aspects of data management, such as backup, disaster recovery, file services and so forth, with single-point management. The alternative is a mix of point products that may not cover the same data management functions. The company said it will make more SaaS announcements in coming quarters.

The data management industry is moving wholesale towards DMaaS, with Commvault announcing a DRaaS yesterday, joining Zerto and Druva, which have operated backup-as-a-service for some time. Rubrik offers the Polaris management facility as a service and we expect it to follow the SaaS course for its data management software.

Cohesity DataProtect delivered as a Service is available immediately in the US and Canada through resellers and in the AWS Marketplace, and elsewhere in the coming quarters. SiteContinuity delivered as a Service will be available in early access preview in early 2021 and general availability is planned for Spring 2021.

The next game changer? Amazon takes on the SAN vendors

Amazon has re-engineered the AWS EBS stack to enable on-premises levels of SAN performance in the cloud. Make no mistake, the cloud giant is training its big guns on the traditional on-premises storage area networking vendors.

The company revealed yesterday at re:Invent 2020 that it has separated the Elastic Block Store compute and storage stacks at the hardware level so they can scale at their own pace. AWS has also rewritten the networking stack to use its high-performance Scalable Reliable Datagram (SRD) protocol, lowering latency.

The immediate fruits of this architecture overhaul include EBS Block Express, the “first SAN built for the cloud”. AWS said the service is “designed for the largest, most I/O intensive mission-critical deployments of Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics that benefit from high-volume IOPS, high throughput, high durability, high storage capacity, and low latency.”

Pure conjecture from us, but Amazon could hit the SAN storage suppliers squarely in their own backyards by introducing EBS Block Express to the AWS Outposts on-premises appliance.

Mai-Lan Tomsen Bukovec, VP Storage, at AWS, said in a statement: “Today’s announcements reinvent storage by building a new SAN for the cloud, automatically tiering customers’ vast troves of data so they can save money on what’s not being accessed often, and making it simple to replicate data and move it around the world as needed to enable customers to manage this new normal more effectively.”

Mai-Lan Tomsen Bukovec

AWS noted that many customers had previously striped multiple EBS io2 volumes together to achieve higher IOPS, throughput or capacity. But this is sub-optimal. The alternatives – on-premises SANs – are “expensive due to high upfront acquisition costs, require complex forecasting to ensure sufficient capacity, are complicated and hard to manage, and consume valuable data center space and networking capacity”.

Now EBS io2 Block Express volumes can support up to 256,000 IOPS, 4,000 MB/second of throughput, and 64TB of capacity. This is a fourfold increase over existing io2 volumes across all parameters. The new volumes have sub-millisecond latency and users can stripe multiple io2 Block Express volumes together to get better performance.
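For a sense of what provisioning at these limits looks like, here is a minimal boto3 sketch that creates an io2 volume at the new ceilings. It assumes the Block Express preview is reached through the standard io2 volume type; the region, availability zone and tag values are illustrative, not AWS guidance.

    # Minimal sketch (Python/boto3). Assumes the io2 Block Express preview is exposed
    # through the ordinary io2 volume type; placement and tags are illustrative.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",   # hypothetical placement
        VolumeType="io2",
        Size=65536,                      # 64TB (65,536GiB), the new per-volume ceiling
        Iops=256000,                     # up to 256,000 IOPS per Block Express volume
        TagSpecifications=[{
            "ResourceType": "volume",
            "Tags": [{"Key": "workload", "Value": "sql-server"}],
        }],
    )
    print(volume["VolumeId"])

Striping several such volumes together, as AWS suggests, would push the aggregate figures higher still.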

Decoupled compute and storage

AWS said yesterday that decoupling compute and storage in the EBS service has enabled it to introduce a new class of Gp (general purpose) volume for workloads such as relational and non-relational databases. With existing Gp2 volumes, performance (IOPS and throughput) grows in lockstep with provisioned capacity, which means customers who need more performance can end up paying for storage they don’t need.

AWS has addressed this with Gp3 volumes, which let users provision a claimed 4x performance increase over Gp2 volumes without incurring a storage tax. As well as independent scaling, Gp3 volumes are priced 20 per cent cheaper than Gp2. Migration from Gp2 to Gp3 is seamless, AWS says, and handled via Elastic Volumes.
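A migration of this kind is a single API call. The boto3 sketch below converts an existing volume to Gp3 in place via Elastic Volumes; the volume ID, IOPS and throughput figures are illustrative.

    # Minimal sketch (Python/boto3): in-place conversion of an existing volume to gp3
    # using Elastic Volumes. The volume ID and performance figures are illustrative.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # hypothetical existing Gp2 volume
        VolumeType="gp3",
        Iops=3000,                         # provisioned independently of capacity
        Throughput=125,                    # MB/s, also independent of capacity
    )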

Tiering and replication

The Archive Access (S3 Glacier) and Deep Archive Access (S3 Glacier Deep Archive) tiers, announced in November with S3’s Intelligent-Tiering, are now generally available. Customers can lower storage costs by putting cold data into progressively deeper and lower-cost AWS archives.
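Opting a bucket into the new tiers is a matter of attaching an Intelligent-Tiering configuration. The boto3 sketch below shows one way to do it; the bucket name is illustrative, and 90 and 180 days are the minimum thresholds for the two archive tiers.

    # Minimal sketch (Python/boto3): route objects untouched for 90/180 days into the
    # Archive Access and Deep Archive Access tiers. Bucket name is illustrative.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_intelligent_tiering_configuration(
        Bucket="example-analytics-bucket",
        Id="archive-cold-objects",
        IntelligentTieringConfiguration={
            "Id": "archive-cold-objects",
            "Status": "Enabled",
            "Tierings": [
                {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},       # S3 Glacier tier
                {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},  # Glacier Deep Archive tier
            ],
        },
    )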

S3 Replication enables the creation of a replica copy of customer data within the same AWS Region or across different AWS Regions. This is now extended to replicate data to multiple buckets within the same AWS Region, across multiple AWS Regions, or a combination of both.
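In practice the fan-out is expressed as multiple rules in a single replication configuration, each with its own destination bucket. A hedged boto3 sketch, with hypothetical bucket names and IAM role ARN:

    # Minimal sketch (Python/boto3): one source bucket replicating to two destinations.
    # Bucket names and the IAM role ARN are hypothetical.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="example-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "to-dr-region",
                    "Priority": 1,
                    "Status": "Enabled",
                    "Filter": {},                                    # all objects
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::example-dr-copy"},
                },
                {
                    "ID": "to-analytics-copy",
                    "Priority": 2,
                    "Status": "Enabled",
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::example-analytics-copy"},
                },
            ],
        },
    )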

AWS io2 Block Express volumes are available in limited preview.

Pliops strengthens board with appointment of Mellanox founder

Eyal Waldman

Mellanox founder Eyal Waldman has joined the board of Pliops, an Israeli data storage-tech startup. His role will be to help guide Pliops’ growth and scale its technology to new use cases. He will also advise on financial decisions, personnel and overall strategy, and meet key customers and partners.

“Pliops is one of those companies that is poised to make a huge impact. This is a pivotal time in the data centre and I’m looking forward to working with the Pliops team as they roll out their technology,” said Waldman, the former CEO of Mellanox, which was acquired earlier this year by Nvidia for $7bn. He left Nvidia in November.

According to Waldman, “Pliops is tackling the most challenging issues that are vexing to data centre architects – namely, the colliding trends of explosive data growth stored on fast flash media, ultimately limited by constrained compute resources.”


To date, Pliops has raised $40m to fund the development of a storage processing unit (SPU), which we consider to be a sub-category of the new class of data processing units (DPUs). The Pliops card hooks up to a server across a PCIe link, offloading and accelerating storage work that would otherwise fall to the server’s x86 CPUs. The company had originally targeted launch for mid-2019 but is now sampling its storage processors to select customers and expects general availability in Q1 2021.

Waldman’s Mellanox experience, connections and know-how should help the company in a competitive environment that is heating up.

Pliops must contend with VMware and Nvidia’s Project Monterey DPU vision. Nvidia also told us this week of its plans to add storage controller functions to the Bluefield SmartNIC.

The Pliops SPU is also similar in concept to the SPU from another startup, Nebulon, which has a cloud-managed and cloud-defined software architecture. Nebulon said it has bagged HPE and Supermicro as OEMs.

Commvault gives VMware workloads some more loving in latest DR software release

Commvault has updated its new DR software with recovery automation for VMware workloads.

The upgrade also sees Commvault Disaster Recovery gain orchestration to, from and between on-premises and Azure and AWS environments. The orchestration can be within zones or across regions, and features simple cross-cloud migration support. It seems reasonable that Commvault will in due course add Google Cloud support.

ESG senior analyst Christophe Bertrand gave his thumbs up to the upgrade: “Commvault Disaster Recovery’s multiple cloud targets and speedy cross-cloud conversions make it extremely compelling. With everything going on in the world today, a true disaster could be right around the corner for any company. It’s critical to have enterprise multi-cloud tools in place to mitigate data loss and automate recovery operations immediately.”

The competition between DR specialist Zerto, which recently moved into backup, and data protector Commvault, which recently moved into DR, is hotting up. Cohesity has also moved into automated DR with its SiteContinuity offering.

Commvault released Commvault Disaster Recovery in July. Its automated failover and failback provide verifiable recoverability and reporting for monitoring and compliance. The software enables continuous data replication with the automated DR capabilities, capable of sub-minute Recovery Point Objectives (RPOs), along with near-zero Recovery Time Objectives (RTOs). 

Commvault cites additional benefits for the software such as cloud migration, integration with storage replication, ransomware protection, smart app validation in a sandbox, and instant mounts for DevOps with data masking. The latter feature moves it into the copy data management area, competing with Actifio, Catalogic, Cohesity, Delphix and others.

Google builds out Cloud with Actifio acquisition

Google is buying Actifio, the data management and DR vendor, to beef up its Google Cloud biz. Terms are undisclosed but maybe the price was on the cheap side.

Actifio has been through a torrid time this year. The one-time unicorn refinanced for an unspecified sum at near-zero valuation in May. It then instituted a 100,000:1 reverse stock split for common stock, which crashed the value of employees’ and ex-employees’ stock options.

Financial problems aside, Google Cloud is getting a company with substantial data protection and copy data management IP and a large roster of enterprise customers.

Matt Eastwood, SVP of infrastructure research at IDC, provided a supporting statement: “The market for backup and DR services is large and growing, as enterprise customers focus more attention on protecting the value of their data as they accelerate their digital transformations. We think it is a positive move for Google Cloud to increase their focus in this area.”

Google said the acquisition will “help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios.” It also expressed commitment to “supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”

This all suggests Actifio software will still be available for on-premises use.

Ash Ashutosh, Actifio CEO, said in a press statement: “We’re excited to join Google Cloud and build on the success we’ve had as partners over the past four years. Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”

Ash Ashutosh video.

Actifio was started by Ashutosh and David Chang in July 2009. The company took in $311.5m in total funding across A, B, C, D and F rounds. The latter was a $100m round in 2018 at a $1.3bn valuation.

What Actifio brings to Google Cloud

Google Cloud says Actifio’s software:

  • Increases business availability by simplifying and accelerating backup and DR at scale, across cloud-native, and hybrid environments. 
  • Automatically backs up and protects a variety of workloads, including enterprise databases like SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL, and MySQL, as well as virtual machines (VMs) in VMware, Hyper-V, physical servers, and Google Compute Engine.
  • Brings significant efficiencies to data storage, transfer, and recovery. 
  • Accelerates application development and reduces DevOps cycles with test data management tools.

All-flash arrays shine in anaemic quarter for HPE storage

HPE revenues have returned to pre-pandemic levels – more or less – but data storage lags behind the rest of the business, with revenues down three per cent Y/Y to $1.2bn.

However, all-flash arrays (AFA) and hyperconverged systems were bright spots. AFA revenue grew 19 per cent Q/Q, driven by increased adoption of the Primera AFA, which was up 43 per cent Q/Q, and the Nimble AFA, which was up 27 per cent Q/Q. We don’t have Y/Y numbers for these two products.

Antonio Neri, HPE CEO

In the earnings call CEO Antonio Neri said: “In storage, we have been on a multiyear journey to create an intelligent data platform from edge-to-cloud and pivot to software-as-a-service data storage solutions, which enable higher level of operational services attach and margin expansion. And our strategy is getting traction.

“Our portfolio is well positioned in high-growth areas like all-flash array, which grew 29 per cent year over year; big data storage, which had its sixth consecutive quarter of growth, up 41 per cent Y/Y; and hyperconverged infrastructure where Nimble dHCI, our new hyperconverged solution, continued momentum and gained share, growing 280 per cent Y/Y. We also committed to doubling down in growth businesses and investing to fuel future growth.”

HPE emphasised Q/Q growth to show it is climbing out of a pandemic-caused drop in revenues. Big Data grew 27 per cent Q/Q thanks to increased customer demand for AI/ML capability. Overall, storage accounts for 16.7 per cent of HPE’s revenues. (A minor point – in HPE’s compute business the Synergy composable cloud business grew five per cent Q/Q.)

CFO Tarek Robbiati said: “Our core business of compute and storage is pointing to signs of stabilisation, and our as-a-service ARR (annual recurring revenue) continues to show strong momentum aligned to our outlook.”

For comparison, NetApp yesterday reported Q2 revenues up 15 per cent Y/Y, while Pure Storage last week reported revenues down four per cent Y/Y.

HPE’s outlook is for a mid-single digits revenue decline Y/Y next quarter.

NetApp’s high-end AFA sales lead it out of pandemic recession

NetApp has posted its second successive quarter of revenue growth, thanks to an unexpected boost in high-end all-flash storage array sales.

The company recorded $1.42bn in revenues for the second fiscal 2021 quarter, ended October 30, 2020, three per cent higher than a year ago and above guidance. Net income fell 43.6 per cent to $137m.

CEO George Kurian said in a press statement: “I am pleased with our continued progress in an uncertain market environment. The improvements we made to sales coverage in FY20 and our tight focus on execution against our biggest opportunities continue to pay off.”

Quarterly revenue by fiscal year chart shows NetApp climbing out of a revenue dip

Highlights in the quarter included a 200 per cent jump in public cloud services annual recurring revenue (ARR) to $216m, and all-flash array run rate increasing 15 per cent to $2.5bn. NetApp said 26 per cent of its installed systems are all-flash, which leaves plenty of room to convert more customers to AFA systems.

Hardware accounted for $332m of the $749m product revenue, down 18 per cent Y/Y, with software contributing $417m, up 14 per cent. Product revenue in total declined three per cent Y/Y.

On the earnings call, CFO Mike Berry said the company is “on track to deliver on our commitment of $250m to $300m in fiscal ’21 Cloud ARR and remain confident in our ability to eclipse $1bn in Cloud ARR in fiscal ’25.”

The outlook for NetApp’s Q3 is $1.42bn at the mid-point, one per cent up on the same time last year. NetApp hopes that Covid-19 vaccination programs will lead the overall economy to growth after that in calendar 2021.

NetApp gives PowerStore a kicking

Kurian’s prepared remarks included this sentiment: ”We are pleased with the mix of new cloud services customers and growth at existing customers. We saw continued success with our Run-to-NetApp competitive takeout program, an important component of our strategy to gain new customers and win new workloads at existing customers.”

That program targets competitors’ product transitions, such as Dell’s Unity to PowerStore transition. Dell bosses recently expressed impatience about PowerStore’s revenue growth rate in its quarterly results.

Kurian talked about market share gains in the earnings call: “If you look at the results of all of our major competitors, [indiscernible], Dell, and HP, there’s no question we have taken share. I think our product portfolio is the best in the market.” He called out high-end AFAs as doing well – which was unexpected according to Berry. This drove NetApp’s “outperformance in product revenue and product margin”.

Kurian gave PowerStore a kicking when replying to an analyst’s question: “I think as not only we have observed, but many of our competitors have also observed, the midrange from Dell has not met expectations. It is an incomplete product. It is hard to build a new midrange system. And so it’s going to be some time before they can mature that and make that a real system. And you bet we intend to take share from them during that transition… We’re going to pour it on.”

Riding the disk replacement wave

NetApp’s AFA revenue growth should continue, according to Kurian. “We think that there are more technologies coming online over the next 18 to 24 months that will move more and more of the disk-based market to the all-flash market. We don’t think that all of the disk-based market moves to all-flash. But as we said, a substantial percentage of the total storage market, meaning let’s say 70 to 80 per cent will be an all-flash array portfolio.”

He is thinking of QLC flash (4 bits/cell) SSDs, as they enable the replacement of nearline and faster disk drives. Kurian said: “QLC makes the advantage of an all-flash array relative to a 10k performance drive even better. So today, there are customers buying all-flash arrays, when they are roughly three times the cost of a hard drive. With QLC, that number gets a lot closer to one and a half to two times.”
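As a back-of-the-envelope illustration of Kurian’s ratios – the $/TB figures below are our own assumptions for illustration only, not NetApp or market pricing:

    # Illustrative arithmetic only; the $/TB inputs are assumptions, not quoted prices.
    hdd_10k   = 25.0   # assumed $/TB for a 10K performance disk drive
    tlc_flash = 75.0   # assumed $/TB for TLC flash, roughly 3x the disk figure
    qlc_flash = 45.0   # assumed $/TB for QLC flash

    print(f"TLC flash vs 10K disk: {tlc_flash / hdd_10k:.1f}x")  # ~3.0x
    print(f"QLC flash vs 10K disk: {qlc_flash / hdd_10k:.1f}x")  # ~1.8x, inside the cited 1.5-2x range

On Kurian’s argument, narrowing the premium to that level makes the switch to all-flash much easier to justify.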

Also, the “economics of all-flash are benefited by using software-based data management”.

Micron shrugs off Huawei hit, raises Q1 financial guidance

Micron has upped revenue guidance for the first quarter ended December 3 from $5bn-$5.4bn to $5.7bn-$5.75bn.

The US chipmaker has also increased its gross margin and EPS guidance for the quarter. Investors are happy and the stock price rose 5.7 per cent in pre-market trading.

Micron said it switched production from Huawei to other customers more quickly than it had previously anticipated. Huawei, hitherto Micron’s biggest customer, is subject to a US trade ban.

In addition, Micron may have recorded stronger than expected DRAM sales, according to Wells Fargo analyst Aaron Rakers.

It will be interesting to see whether this is Micron-specific news or whether Samsung and SK hynix are also benefiting from an end-of-year boost.

Nvidia plots invasion of the storage controllers

Nvidia is adding storage controller functions to its BlueField-2 SmartNIC card and is gunning for business from big external storage array vendors such as Dell, Pure and VAST Data.

Update: Fungible comment added; 3 December 2020.

Kevin Deierling, Nvidia SVP marketing for networking, outlined in a phone interview yesterday a scheme whereby external storage array suppliers, many of whom are already customers for Nvidia ConnectX NICs, migrate to SmartNICs in order to increase array performance, lower costs and increase security. The BlueField SmartNIC incorporates ConnectX NIC functionality, making its adoption simpler.

Deierling said Nvidia is already having conversations with array suppliers: “BlueField is a superb storage controller. … We’ve demo’d it as an NVMe-oF platform … This is very clearly a capability we are able to support, at a cost point that’s a fraction of the other guys.”

As BlueField technology and functionality progresses, array suppliers could consider recompiling their controller code to run on the BlueField Arm CPUs. At this point the dual Xeon controller setup changes to a BlueField DPU system – more than a SmartNIC, and the supplier says goodbye to Xeons, Deierling declared. “It’s software-defined and hardware-accelerated. Everything runs the same, only faster.”

SmartNICs and DPUs

SmartNICs are an example of DPU (Data Processing Unit) technology. Nvidia positions the DPU as one of three classes of processors, sitting alongside the CPU and the GPU. In this worldview, the CPU is a general-purpose processor. The GPU is designed to process graphics-type instructions much faster than a CPU. The DPU acts as a co-ordinator and traffic cop, routing data between the CPU and GPU.

Kevin Deierling

Deierling told B&F that growth in AI use by enterprises was helping to drive DPU adoption: “One server box can no longer run things. The data centre is the new unit of computing. East-west traffic now dominates north-south traffic. … The DPU will ultimately be in every data centre server.”

He thinks the DPU could appear in external arrays as well, replacing the traditional dual x86 controller scheme.

In essence, an external array’s controllers are embedded servers with NICs (Network Interface cards) that link to accessing servers and may also link to drives inside the array. Conceptually, it is easy to envisage their replacement by SmartNICs that offload and accelerate functions like compression from the array controller CPUs.

DPU discussion

Today, the DPU runs repetitive network and storage functions within the data centre. North-south traffic is characterised as network messages that flow into and out of a data centre from remote systems. East-west traffic refers to network messages flowing across or inside a data centre.

The east-west traffic grows as the size of the datasets increases. It therefore makes increasing sense to offload repetitive functions from the CPU, and to accelerate them at the same time, by using specialised processors. This is what the DPU is designed to do.

The DPU can run server controlling software, such as a hypervisor. For example, VMware’s Project Monterey ports vSphere’s ESXi hypervisor to the BlueField 2’s Arm CPU and uses it to manage aspects of storage, security and networking in this east-west traffic flow. VMware functions such as VSAN and NSX could then run on the DPU and use specific hardware engines to accelerate performance.

SoCs not chips

The DPUs will help feed the CPUs and GPUs with the data they need. Deierling sees them as multiple SoCs (system on chips) rather than a single chip. A controlling Arm CPU would use various accelerator engines to handle packet shaping, compression, encryption or deduplication, which could run in parallel.

Fungible, a composable systems startup, aggregates this work in a single chip approach, but this does not allow the engines to work in parallel, according to Deierling.

In rebuttal, Eleena Ong, VP of Marketing at Fungible, gave us her view: “The Fungible DPU has a large number of processor elements that work in parallel to run infrastructure computations highly efficiently, specifically the storage, network, security and virtualisation stack.

“The Fungible DPU architecture is unique in providing a fully programmable data path, with no limit on the number of different workloads that can be simultaneously supported. The high flexibility and parallelism occur inside the DPU SoC as well as across multiple DPU SoCs. Depending on your form factor, you would integrate one or multiple DPUs.”  

SoC it to Nebulon

Nebulon, another startup, is trying to do something similar to Nvidia with its ‘storage processing unit’ (SPU). This consists of a PCIe card on which there are dual Arm processors and various offload engines to perform SAN controller functions at a lower hardware cost than an external array with dual Xeon controllers. That’s a match with Deierling’s definition of a DPU.

Nebulon SPU

To infinitesimal and beyond! Micron extends DRAM roadmap

Micron has updated its DRAM roadmap from three to four cell shrink stages, enabling more DRAM capacity per wafer and lowering costs per GB.

The US chipmaker intends to shrink the cell or process node size progressively through the following steps, moving from the no-longer-mainstream 20nm process node down through the 10nm-class (19nm-10nm range) nodes:

  • 1Xnm – (c19-17nm) older DRAM technology process node size
  • 1Ynm – (c16-14nm) mainstream DRAM bit production technology today
  • 1Znm – (c13-11nm) 15 per cent of Micron DRAM bit production in 3Q20
  • 1αnm process node – 1 alpha – volume production in first half of 2021
  • 1βnm process node – 1 beta – in early development
  • 1ɣnm process node – 1 gamma – early process integration 
  • 1δnm process node – 1 delta – pathfinding and may need EUV technology

The 1 delta node size is a new entry on Micron’s DRAM roadmap. We do not have indicative process node sizes below 1Znm.

Micron 1 alpha nm DRAM technology

Bit density growth rate slowed with the transitions from 1Xnm to 1Ynm and 1Znm, Micron said. However, the company has accelerated the growth rate with a 40 per cent increase from 1Znm to the 1αnm process node size.

Wells Fargo analyst Aaron Rakers informs subscribers that Micron has a strong position in 1Znm DRAM production. Citing the research firm DRAMeXchange, he estimates Micron’s 1Znm output at 15 per cent of its DRAM bit production in 3Q20, versus six per cent for Samsung and zero for SK hynix.

Other things being equal, 1Znm DRAM costs less to manufacture than the preceding 1Ynm node.

In Violet

Micron uses Deep Ultra Violet (DUV) multi-patterning lithography to lay out DRAM cell die details on a wafer. As the process node size shrinks below the 10nm level, the wavelength of the light beam becomes a constraint.

ASML, the dominant supplier of lithography machines for the chip industry, has developed EUV (Extreme Ultra Violet) scanners which emit smaller-wavelength light. This technology etches narrower lines on the wafer and so enables smaller process sizes – i.e. more DRAM dies per wafer, and consequently higher capacity per wafer and lower cost per GB. But the capital outlay is significant – currently ASML makes only 30 EUV lithography machines a year; they weigh 180 tons and cost $120m each.

Samsung uses EUV in its 1Znm process node and SK hynix plans to use EUV technology for volume production of 1αnm and 1βnm DRAM. Micron thinks EUV will not be cost-competitive until 2023 or later, meaning the 1δnm process node.