
Catalogic enters the Kubernetes backup fray with CloudCasa

Ken Barth, Catalogic

Catalogic Software, the copy data management vendor, is expanding into containerised app backup.

The new cloud-native SaaS, called CloudCasa, supports Kubernetes and Red Hat OpenShift clusters and is built using Kubernetes. It is platform agnostic and, as well as OpenShift, provides backup for VMware Tanzu and Amazon, Google, Microsoft and IBM Kubernetes services.

Catalogic COO Sathya Sankaran said in a press statement: “Kubernetes has been the driver of the single largest shift in the data protection ecosystem in recent years… CloudCasa is truly disruptive and allows unlimited CSI snapshots as well as backup of cluster metadata and container resources to our managed storage for free.”

CloudCasa backs up from and restores to clusters deployed on-premises and in the cloud. Data is always encrypted, in transit and at rest. Automatic scaling and unlimited cloud storage are at the user’s disposal.

The company wants us to understand that CloudCasa is not a retrofit of any existing backup appliance software but a “reimagination of backups leveraging Catalogic’s proven expertise in snapshot and copy data management across multiple storage vendors.”

Su casa es CloudCasa?

Ken Barth

Ken Barth, Catalogic’s CEO, said: “The launch of CloudCasa is a game changer for Catalogic, its customers, and those in need of data protection and disaster recovery for Kubernetes. It’s truly a subscribe and use solution, so elegant, and an answer to a pain point in the burgeoning Kubernetes market.”

Game changer? Maybe for Catalogic, but we note the company is somewhat late to the container backup game. Players already on the field include Clumio, Commvault, Dell PowerProtect, Druva, Pure’s Portworx and Veeam’s Kasten. Some, like Kasten, are already shipping third-generation product. Catalogic, with a v1.0 offering, will need to provide better functionality than these suppliers.

CloudCasa launches next week as a public beta at KubeCon + CloudNativeCon. It will be generally available through public cloud marketplaces and the marketplaces of popular distributions such as Red Hat OpenShift, SUSE Rancher and VMware Tanzu.

The initial free offering has no limits on clusters or worker nodes per user or organisation. Backup retention is a maximum of 30 days, and paid premium plans are in the offing.

Kasten now supports multi-cluster Kubernetes deployments

Kasten has announced the third major version of its K10 containerised backup application, adding support for groups of Kubernetes clusters and users.

K10 provides containerised data protection from an application point of view and features backup, migration and disaster recovery. V3.0 supports multi-cluster Kubernetes deployments and multi-tenant cloud environments.

Niraj Tolia

The containerised application deployment scene and its associated Kubernetes cluster infrastructure are set to become larger in scale and more complicated to manage, monitor and operate. Managing at the cluster-group level, with groups of users, will become a necessity.

Niraj Tolia, head of Kasten, which recently became a Veeam subsidiary, said: “We’re watching the growing dependence on multi-tenant Kubernetes deployments within singular enterprises before our eyes.”

Multi-tenant cloud environments and multi-cluster Kubernetes deployments are increasingly common in enterprises today, Kasten says. It cites VMware’s The State of Kubernetes 2020 survey, which reports about 20 per cent of enterprise K8s deployments have more than 50 clusters in production, and adoption is expected to accelerate.

New features in K10 3.0

  • Multi-cluster dashboard views to get the aggregate and real-time status of parameters such as the total number of clusters, policies and applications.
  • Kubernetes-native security authentication for appropriate levels of access and action within and across clusters, helping support multi-tenancy.
  • Cross-cluster policy enforcement to simplify the management of backups at scale through automation.
  • Custom Cluster Group Definitions, so users can create their own groupings and distribute global policies to any logical group of clusters with the click of a button.
  • Individual Cluster Shifting, to search for an individual cluster and define and operate on policies specific only to that cluster.

Veeam proclaims that it has 400k customers

Veeam notched up 21 per cent subscription revenue growth y/y in the third quarter, driving customer count past 400,000.

The data protection software vendor reported 375,000 customers in October 2019, so the latest figure indicates a net gain of at least 2,000 customers per month since then. Veeam measures subscription growth by its annual recurring revenue (ARR) number.

Privately-held Veeam is not obliged to report earnings figures, but is happy to release smoke signals instead. To wit, CEO Bill Largent boasted in a statement today: “Our last quarter was very strong, and we’re looking forward to a great finish in 2020 as we continue to roll out new solutions that serve the complex needs of our customers.”

Veeam cites a 1H 2020 IDC Semi-Annual Software Tracker for Data Replication & Protection report, which ranks it first in EMEA market share by revenue. According to the analyst firm, Veeam’s year-over-year revenue growth in 1H20 was the fastest among the top five vendors, and ahead of other vendors and the overall market average.

Veeam said its fastest growing product is Veeam Backup for Microsoft Office 365, which recorded 85 per cent Y/Y growth. And it claims it continues to win share in the core data centre backup and recovery market.

Furthermore, the company points to a renewed focus on container backup with last month’s acquisition of Kasten. Veeam is integrating Kasten into its core cloud data management and protection platform. Veeam’s Act I was about virtualised servers, Largent said, while Act II focuses on cloud and containers.

The stage is set for a war of attrition as well-funded suppliers duke it out for cloud backup supremacy.

New broom at Hitachi Vantara sweeps away two top execs

Two senior Hitachi Vantara execs have left the company without fanfare, signalling a shakeup by new CEO Gajen Kandiah.

A Hitachi Vantara spokesperson confirmed that CMO Jonathan Martin and Digital Solutions business unit President Brad Surak are no longer with the business. In their place CEO Gajen Kandiah is “acting as interim President of the Digital Solutions Business Unit, and John Magee, previously head of Digital Solutions Marketing, is the new VP of Marketing, reporting to Gajen.”

The spokesperson revealed Kandiah “expects to make a leadership team announcement early in the New Year. The changes are in line with Gajen’s evolving vision for the company which you can read about in his latest blog.”

Gajen Kandiah.

Kandiah, the former head of Cognizant’s digital business, was recruited as CEO in July. In his blog, he says customers are concerned with how Hitachi V’s storage, cloud, data management, analytics and consulting capabilities all fit together. They are also focusing on “the likely permanence of the changes COVID-19 has wrought on the way we live, work, learn and play. Many of [them] tell me: there is no going back.”

He outlines five focus areas for Hitachi Vantara.

  1. Lowering data centre costs through increased storage array scalability and automation. (He cites the VSP 5000 array, soon to be delivered in software-only form.)
  2. Enabling hybrid cloud IT with end-to-end offerings.
  3. Expanded Kubernetes offerings to run applications in any cloud, edge device, vehicle, or power plant.
  4. More app modernisation, data management and analytics offerings.
  5. Operational technology and information technology convergence at the edge, around the data generated by machines, devices and remote workers.

Kandiah said Hitachi V should evolve its portfolio “to keep pace with – or leapfrog – the changing market. You can count on us to create and deliver the solutions clients need, whether we build them ourselves or tap an ecosystem of partners to do so.”

Hitachi Vantara’s two business units, Digital Infrastructure and Digital Solutions, were formed via the merger of Hitachi’s storage subsidiary Hitachi Vantara with Hitachi Consulting. This reorg was completed in January and a big round of layoffs ensued.

Blocks & Files would be unsurprised if Hitachi V sees a declining need for in-house consultants to spec and implement its increasingly automated end-to-end IT product and service offerings.

Seek and ye shall find (unstructured data), with Quantum ATFS

Quantum has displayed the first fruits of its April 2020 acquisition of Atavium: a general file storage product called ATFS. The company has also updated its StorNext and ActiveScale products.

ATFS (All-Terrain File System) is distinguished by its ability to improve file content searches using object-style metadata tags. Traditional file metadata is limited to name, folder, owner, type, size and date – basic categorisation that is of little help in finding content. ATFS classification tagging can find wanted files faster and link files in new ways – for example, files of all types across an enterprise connected to a project.
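Quantum has not published the ATFS tagging API in this announcement, but the concept is easy to picture. Here is a toy Python sketch of tag-based classification (the paths, tag names and TagIndex class are hypothetical illustrations, not ATFS code), showing how one query can link files of different types across a directory tree:

```python
# Toy illustration of tag-based file classification, not the ATFS API.
# Files carry arbitrary key/value tags on top of the usual name/size/date
# metadata, so a query can cut across folders and file types.
from collections import defaultdict

class TagIndex:
    def __init__(self):
        self._by_tag = defaultdict(set)   # (key, value) -> set of file paths

    def tag(self, path, **tags):
        for key, value in tags.items():
            self._by_tag[(key, value)].add(path)

    def find(self, **tags):
        # Intersect the paths that carry every requested tag.
        sets = [self._by_tag[(k, v)] for k, v in tags.items()]
        return set.intersection(*sets) if sets else set()

idx = TagIndex()
idx.tag("/video/shoot1/take42.mov", project="apollo", stage="raw")
idx.tag("/docs/apollo/script.docx", project="apollo", stage="final")
idx.tag("/video/shoot2/take7.mov",  project="zeus",   stage="raw")

# One query links files of different types across the tree by project tag.
print(idx.find(project="apollo"))
```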

Jamie Lerner, Quantum CEO, supplied a statement: “Our customers are dealing with massive video and unstructured data growth, and it will be the ability to harness the value of this data – to ‘enrich’ this data – that will drive businesses forward. This is what will drive the next discovery, the next innovation, new ways to communicate and entertain, and new business models.”

ATFS supports standard NFS and SMB file storage and API access. Its classification, run automatically when data is ingested, can drive file placement on storage tiers. Users can visualise data in virtual file system views for collaboration across users and organisations without creating duplicate copies or loosening data security.

ATFS screen shots.

That means grouped files distributed across an organisation do not need to be collected in one physical storage place for overall processing.

ATFS can run as a VM in the cloud and move data to S3 as a tier. It supports two modes when using S3: native and managed. Native mode allows applications to mount the S3 repository and use its data in other workflows. In managed mode, ATFS can retrieve subsets of files, perform read-ahead, and optimise data placement.

Files can also be moved between StorNext and ATFS where necessary. 

StorNext and ActiveScale

StorNext 7.0 is equipped with a new user interface, a new API, and a new tiering engine with NVMe SSD, SSD, HDD, object storage and tape tiers, plus policy-driven file movement across these tiers. The NVMe SSD tier adds extra performance to the system.

StorNext can run as a virtual machine (VM) or in containers, in the cloud or on Quantum-shipped hardware. Since it can run as a VM using a host server’s storage, the server or Quantum appliance can run other VMs and so function as a hyper-converged system.

ActiveScale, Western Digital’s object-based archival storage system, was bought by Quantum in February. It represents an object storage tier for both StorNext and ATFS.

A Quantum diagram positions ATFS and the StorNext File System (SNFS).

*3DX means 3D NAND SSDs, QLC (4bits/cell) flash and Optane or similar media

The ActiveScale line gets a smaller capacity three-node object storage system, small object aggregation, and object lock to protect critical data. Object Lock renders objects immutable for defined retention periods. Object aggregation groups small objects into a single larger one to make writing more efficient and increase capacity utilisation. 

ATFS, StorNext and ActiveScale are available on subscription on a per-TB basis.

Comment

Quantum’s move into general file storage is a significant departure from its tape-based data protection legacy and its current StorNext video file management focus.

The Atavium acquisition represents a potential growth opportunity. ATFS is similar to StorNext in that file data is moved between storage tiers, balancing the need for fast access on necessarily costly storage against long-term archival on the lowest-cost storage, with intervening access tiers in between. What’s new with ATFS is the way files can be identified for movement, so that overall file storage costs are lower and file workflows are made easier. An ATFS datasheet provides additional information.

Dell powers up PowerProtect product portfolio

Dell has upgraded the PowerProtect data protection line-up with Data Manager Kubernetes support and new hardware appliances.

There are three types of PowerProtect appliances: DD Series (Data Domain) backup targets; IDPA DP Series products running Avamar software for SMB customers; and X400 products running PowerProtect software for larger customers. PowerProtect software is also available on its own to run in virtual machines (VMs) on-premises or in the public cloud.

Jeff Boudreau, head of Dell’s Infrastructure Solutions Group, emphasised the need for different products in his press statement: “Data protection is not a one-size-fits-all proposition. Dell Technologies continues to advance our target and integrated appliances portfolio and software-defined data protection offerings.”

DP series

The DP Series provides backup, recovery, replication, deduplication, cloud readiness with disaster recovery, and long-term retention to the public cloud. Dell is boosting the hardware with up to 30 per cent more logical capacity, 38 per cent faster backups and 45 per cent quicker restores.

PowerProtect DP Series chassis.

There are now the original DP4400 plus new DP5400, DP5900 and DP8900 models, which succeed the DP5300, DP5800, DP8300 and DP8800. A datasheet summarises their main characteristics.

November 2020 DP Series product portfolio

A comparison with the old IDPA DP Series product table shows the various speed and capacity improvements.

May 2019 IDPA DP Series portfolio

Data Manager

We note PowerProtect Data Manager added protection for Kubernetes-orchestrated containers running alongside vSphere virtual machines in September. This was part of VMware’s Tanzu initiative to combine virtual machine and container support in vSphere. 

Data Manager provides agentless, application-consistent protection of open source databases, such as PostgreSQL and Apache Cassandra, in Kubernetes environments. Customers can now also protect Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS).

The software protects in-cloud workloads in Microsoft Azure and AWS, and new integrations provide native vCenter Storage Policy Based Management integration for VM protection. Workflows within a VMware vSphere environment can be used to assign data protection policies. 

There is also a VMware-certified Data Manager offering to protect the VMware Cloud Foundation infrastructure layer.

One more thing

Dell’s Cyber Recovery software provides automated data recovery from a secure, isolated vault to ensure clean data. It is the first product to receive endorsement from Sheltered Harbour, a not-for-profit org that aims to improve IT security in the financial sector.

Infinidat hires chief product officer. Organogram looks sketchy

Infinidat has appointed a head of product who reports directly to the executive chairman. What does this mean for Kariel Sandler, the co-CEO who runs R&D and Operations? And what does it mean for the co-CEO structure that the high-end storage array vendor put in place in the wake of a funding round and the subsequent departure of the company’s founder CEO?

Shahar Bar-Or is the incoming chief product officer and general manager of Infinidat’s Israel operations. He joins the company from Western Digital where he was VP for embedded engineering and Israel site manager.

Shahar Bar-Or

His responsibilities include R&D, quality assurance and validation, product innovations, operations and IT.

Infinidat executive chairman Boaz Chalamish beamed: “In Shahar, we have exceptional talent to steer our product teams globally and at our Israeli headquarters. He has the best mix of leadership skills, expertise, experience and drive to help Infinidat reach its ambitious growth targets and move forward in strategic innovation.”

Infinidat promoted CFO Nir Simon and COO Kariel Sandler to co-CEOs in May, following a D-series round involving existing investors. Chalamish, previously CEO of Clarizen, a project management software company, was installed as exec chairman.

The amount raised was undisclosed, but the price included the demotion and subsequent departure of founder-CEO Moshe Yanai. He was shown the door by the Infinidat board in June because of poor business performance, according to the Israeli publication Calcalist.

Organogram

In B&F’s view, Bar-Or’s reporting line effectively makes Chalamish de facto CEO.

We think the co-CEO structure is no longer stable. The two co-CEOs have control over neither product strategy nor the entire Israel operation, which somewhat diminishes their responsibilities.

We asked Infinidat if this reporting structure effectively makes Chalamish the CEO, and if Simon and Sandler will revert to their CFO and COO roles respectively. We also noted that Bar-Or is a flash memory exec and asked if his appointment means Infinidat products will make greater use of flash memory.

We got a non-answer: “This is an organisational decision on the reporting structure by Infinidat.”

We also asked why Boaz Chalamish is not listed in the board of directors section of Infinidat’s leadership webpage. We said we assumed this is just an oversight and not a sign that he is leaving. Understandably, there was no reply to our impertinent suggestion.

Kioxia completes transition to PCIe 4 with new client SSDs

Kioxia has launched its first PCIe 4 SSDs for PCs, completing a “portfolio transition to PCIe 4.0, enabling the future of gaming, mobile computing and workstation applications.”

Kioxia recently introduced the CD6 and CM6 data centre U.2 SSDs and the XD6 hyperscaler short-ruler format SSD, all using the PCIe Gen 4 bus.

Kioxia XG7 M.2 drive.

The new XG7 client SSD is available in 256GB, 512GB and 1TB versions and the XG7-P has 2TB and 4TB versions, all in the M.2 2280 gumstick format. We think both models use 96-layer TLC 3D NAND, the same as the recently-announced XD6, but Kioxia does not say.

Indeed the company has not supplied any detailed performance, latency, endurance or other datasheet-type information.

PCIe 4.0 is twice as fast as PCIe 3.0 and represents a step change in an SSD’s random read/write and sequential read/write performance. The XG7 uses 4 PCIe Gen 4 lanes.

The 1TB XG7 provides 2x the sequential read speed and about 1.6x the sequential write speed of the 1TB XG6, according to Kioxia. That implies up to 6.3GB/sec sequential read bandwidth and 4.6GB/sec sequential write throughput.
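For the curious, here is a quick back-of-envelope check in Python: the implied XG6 baselines follow from Kioxia’s multipliers, and both XG7 figures sit comfortably under the theoretical ceiling of four PCIe 4.0 lanes (16 GT/s per lane with 128b/130b encoding).

```python
# Back-of-envelope check of the figures above. The multipliers and implied XG7
# numbers come from the article; the PCIe ceiling is the standard Gen 4 figure.
xg7_read_gbps, xg7_write_gbps = 6.3, 4.6          # implied XG7 figures
read_factor, write_factor = 2.0, 1.6              # claimed gains over the XG6

print(f"Implied XG6 read : {xg7_read_gbps / read_factor:.2f} GB/s")    # ~3.15
print(f"Implied XG6 write: {xg7_write_gbps / write_factor:.2f} GB/s")  # ~2.88

lane_gbps = 16e9 * (128 / 130) / 8 / 1e9          # ~1.97 GB/s per Gen 4 lane
print(f"PCIe 4.0 x4 ceiling: {4 * lane_gbps:.2f} GB/s")                # ~7.88
```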

The XG7 supports the NVMe 1.4 specification and System Management Bus (SMBus) for thermal management through a sideband channel. Planned options are TCG Pyrite 2.01 and TCG Opal SSC 2.01 encryption support.

XG7 drives are sampling with potential OEM customers and we might see workstations, gaming PCs, ordinary desktops and notebooks using them in 2021.

Where’s Intel?

Kioxia may have completed its PCIe 3 to PCIe 4 transition but we still await the first PCIe 4 SSDs from Intel. Even Intel’s Optane SSDs are still using the PCIe 3 interface. Market leader Samsung, Micron, SK hynix, Western Digital and Seagate have all introduced their first PCIe 4 products, so Intel can’t be far behind.

Intel did tweet out a hint that it was adding PCIe 4 support to Optane in January, but that was almost 11 months ago. Perhaps selling its NAND foundry and SSD operations to SK hynix has delayed PCIe 4 adoption.

Xilinx takes Samsung computational storage drive to market

Samsung has announced the SmartSSD CSD flash drive, a compute-on-storage device that uses a Xilinx FPGA to offload a host server’s CPU. Xilinx will sell the drive.

Jim Elliott, corporate SVP of Memory Sales and Marketing at Samsung Semiconductor, said in a press statement: “The industry is beginning to realise just how much the SmartSSD CSD will be able to boost performance in the datacenter and far beyond, and with the latest Xilinx tools for application development we anticipate dramatic growth in a wealth of acceleration applications.”

Samsung SmartSSD CSD and Kintex FPGA.

Samsung launched the first generation SmartSSD in October 2018. This drive has a PCIe Add-in-Card (AIC) format, features a Xilinx Zynq FPGA with Arm cores and uses 3D NAND organised in TLC mode. Samsung and Xilinx jointly developed a runtime library for the Zynq FPGA.

The new 4TB CSD flash drive comes in a U.2 (2.5-inch) format with a PCIe Gen 3 x4 interface. It replaces the Zynq FPGA with a newer Kintex UltraScale+ KU15P FPGA, which does without the Arm cores. Instead, it has API-driven programmability and new software libraries to enable faster computational storage development. There is also a faster SSD controller.

Thanks to the U.2 format, up to 24 CSDs can be fitted into a 2U server – many more than the first generation SmartSSD. This was limited by PCIe slot availability, resulting typically in six or fewer SmartSSDs per server.

So, 24 x 4TB CSDs can hold 96TB of raw data, and more with compression. This should mean that the drives can now do the data processing – and in parallel – freeing the server main memory from many tasks. The CSD can increase its capacity threefold with compression, according to Xilinx, but this of course depends upon the data type.

With all these enhancements, the SmartSSD CSD can accelerate processing performance 10x or more for applications such as database management, video processing, artificial intelligence, complex search and virtualization. Xilinx suggests SmartSSD CSDs with Bigstream’s hyper-acceleration layer allow customers to increase speed up to 10x on Apache Spark analytics workloads.

Xilinx partner Eideticom’s NoLoad SSD uses the SmartSSD CSD to provide up to 10x compression at line-rate while using 70 per cent less server CPU resource. Deployment requires no application changes and it ties directly into any file system.

The NoLoad software framework enables applications such as databases (Hadoop, RocksDB, Cassandra and MySQL) to offload server CPU storage tasks.

SmartSSD CSDs are available for pre-order today and begin shipping with general availability in January.

AWS automates Glacier S3 tiering – for a small consideration

Amazon yesterday unleashed a barrage of product updates at AWS Storage Day. Additional tiering services for Amazon S3 Glacier archive storage were probably the most notable announcement, but new features for EBS, EFS, FSx, DataSync, the Snow offerings and the Storage Gateway also scrambled to gain our attention.

Glacier, the AWS S3-based object storage service, includes archive and deep archive tiers for infrequently and rarely accessed data. For a fee, AWS will now monitor your data in Glacier and automatically move cold data into and between the archive and deep archive tiers, and newly-accessed archive data up a tier.

“These new optimisations will reduce the amount of manual work you need to do to archive objects with unpredictable access patterns,” writes Marcia Villalba, a senior developer advocate at AWS.

Archive access tiers

There are two new access tiers: Archive Access with the same performance and pricing as the S3 Glacier storage class; and Deep Archive Access, which has the same performance and pricing as the Deep Archive storage class.

S3 Intelligent-Tiering automatically moves objects that haven’t been accessed for 90 days to the Archive Access tier, and to the Deep Archive Access tier after 180 days. Customers pay about $1 per TB per month in the Deep Archive tier. 

AWS Archive tiering.

Archive Access tier objects are retrieved in three to five hours, and Deep Archive Access tier object retrieval takes up to 12 hours. The new tiers join the existing Frequent Access and Infrequent Access tiers.

Objects smaller than 128KB are kept in the Frequent Access tier. For each object archived to the Archive Access tier or Deep Archive Access tier, S3 uses 8KB of storage for the name of the object and other metadata (billed at S3 Standard storage rates) and 32 KB of storage for index and related metadata (billed at S3 Glacier and S3 Glacier Deep Archive storage rates). 
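Turning the new tiers on is a per-bucket configuration. Here is a minimal sketch using the AWS SDK for Python (boto3); the bucket name, configuration ID and object key are hypothetical, and the 90/180-day thresholds match the behaviour described above.

```python
# Minimal sketch of enabling the new Intelligent-Tiering archive tiers.
# Bucket name, configuration ID and object key are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-archive-bucket",
    Id="archive-tiering",
    IntelligentTieringConfiguration={
        "Id": "archive-tiering",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},       # S3 Glacier pricing
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},  # Deep Archive pricing
        ],
    },
)

# Only objects stored in the Intelligent-Tiering storage class are subject to
# the archive rules, so new uploads opt in via StorageClass.
s3.put_object(
    Bucket="example-archive-bucket",
    Key="logs/2020/11/app.log",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```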

To recap, AWS offers:

  • Simple Storage Service (S3) Standard
  • S3 Standard Infrequent Access (IA)
  • S3 Glacier – retrieval within 3 – 5 hours
  • S3 Glacier Deep Archive – retrieval within 12 hours

There are now four access tiers, with automated data movement services, matching these: Frequent Access (S3 Standard), Infrequent Access (S3 Standard Infrequent Access), Archive Access (S3 Glacier), and Deep Archive Access (S3 Glacier Deep Archive).

More AWS storage stuff

For the record, we note AWS’s other storage announcements yesterday.

FSx (File Systems) 

  • FSx for Lustre gets storage quotas
  • FSx for Windows File Server gets DNS alias access and file share access from container-based Windows workloads running on ECS
  • FSx systems can have scheduled, policy-driven backups from AWS Backup

EBS (Elastic Block Store) gets a Cold HDD (sc1) volume type for low-cost magnetic storage for large, sequential, cold-data workloads.

EFS (Elastic File System) – users can create an EFS when using the EC2 Launch Instance Wizard, get the new EFS added to the instance and mounted automatically when the instance is launched.

DataSync

  • DataSync can run fully-automated data transfers between datasets stored in S3, EFS, or FSx for Windows File Server, without deploying DataSync agents
  • DataSync network bandwidth can be adjusted dynamically up or down

Snow family

  • Virtual machine images in raw (disk image) format can be imported as AMIs into Snowball Edge Storage Optimized and Snowball Edge Compute Optimized devices
  • Windows 2012 and Windows 2016 virtual machine images can be imported and used to launch instances on Snowball devices

Snowball Edge device.

Storage Gateway

  • Create a schedule to control the maximum network bandwidth consumed by Tape and Volume Gateways,
  • Uploads to File Gateway can trigger Amazon CloudWatch or Amazon EventBridge notifications and so initiate automated workflows,
  • File Gateway supports access-based limits to ensure users only see the SMB file shares, folders, and files that they have permission to open.

Read an AWS blog by Chief Evangelist Jeff Barr to explore these points in more detail.

Micron steals a march on rivals with 176L 3D NAND shipments

Micron today revealed it has begun volume shipments of 176-layer 3D NAND, to set a new layer count benchmark.

The 176L tech means Micron has advanced over the competition and, all other things being equal, should be able to manufacture NAND at a lower cost per bit.

Update: information about Micron’s use of string-stacking added 10 Nov 2020; Crucial’s use of 176L NAND in products added 17 Nov.

Micron is targeting the mobile storage, autonomous systems, in-vehicle infotainment, and client and data centre SSD markets with the tech. It should lead, Micron says, to ultra-fast edge computing, enhanced AI inferencing, and graphic-rich, real-time multiplayer gaming. The 176L die size is up to 30 per cent smaller than competitors’ 128L products, which means space-constrained products such as smartphones can dedicate less space for NAND.

Micron EVP for technology and products Scott DeBoer issued a canned quote: “Micron’s 176-layer NAND sets a new bar for the industry, with a layer count that is almost 40 per cent higher than our nearest competitor’s. … this technology sustains Micron’s industry cost leadership.”

The company is shipping dies to external and internal customers and expects external customers’ products using them to appear next year. Micron said: “Crucial is integrating Micron’s new 176L NAND into some consumer SSD products now and over time as product lines are refreshed. The seamless integration model prevents inventory and part number logistical challenges for partners and customers.”

The dies use charge-trap technology and have a logic processing layer built underneath the layered NAND cells, to keep the overall die footprint small. That’s called a CMOS-under-array (CuA) architecture.

Micron CMOS-under-Array scheme.

Performance-wise, the 176L dies have a more than 35 per cent latency advantage over Micron’s 96L NAND and are over 25 per cent better than its 128L tech. The maximum data transfer rate is 1,600 mega-transfers per second (MT/s) on the Open NAND Flash Interface (ONFI) bus, claimed to be industry-leading. The two things together mean system boot can be quicker and apps stored in NAND can launch faster.

Micron’s 96L and 128L NAND featured a maximum of 1,200 MT/s. The 176L product has 15 per cent faster mixed workload performance in mobile storage, compared to Micron’s equivalent 96L product.

The 176-layer count is 37.5 per cent more than any 128-layer NAND and 22.2 per cent more than Intel’s 144-layer tech, announced in May and scheduled to ship in product by year-end. It is also 57.1 per cent higher than Western Digital and Kioxia’s 112-layer product, again with an expected product ship date by year-end.
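A quick sanity check of those percentages:

```python
# Layer-count comparisons quoted above, computed from the raw figures.
micron = 176
rivals = {"128L (Samsung, SK hynix, YMTC)": 128,
          "Intel 144L": 144,
          "WD/Kioxia 112L": 112}
for name, layers in rivals.items():
    print(f"{name}: Micron has {(micron / layers - 1) * 100:.1f}% more layers")
# 128L: 37.5%, 144L: 22.2%, 112L: 57.1%
```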

Samsung, SK hynix and YMTC are all at the 128-layer count stage. Samsung expects to reach 176L by April next year, and will use a string-stacking method, combining two 88L dies.

Consultant Mark Webb of MKW Ventures tells us: “Micron does use string stacking for 176L. … Micron stack is also 88L I believe. String stacking is not two dies. It is one die/substrate [and] it has to do with how you break up the layers (think major stack and minor stack).

“A 176L logical die will probably have 190+ actual layers. There are additional layers in the process for select and for dummy layers.”

Western Digital SSDs get into the zone

Western Digital has announced its first zoned SSD, the ZN540, and is targeting the device at cloud infrastructure providers.

Officially called the Ultrastar DC ZN540 ZNS NVMe SSD, the drive is for customers who need a large number of drives to run as efficiently as possible. WD suggests the drive is suited to multi-tenancy environments and data-centric applications such as event stream processing. It is claimed to deliver up to 4x performance and 2.5x QoS improvements compared to conventional SSDs.

The ZN540 is a U.2 (2.5-inch) format, dual-port drive using an NVMe 1.3 PCIe Gen 3 interface.

The drive is built with 96-layer 3D NAND organised as TLC (3 bits/cell) and has up to 8TB capacity. WD does not say what the other capacity points are. Because server software manages the ZN540, WD is providing neither performance data nor endurance numbers. It is not backwards-compatible with existing SSDs.

The host server manages data placement for the ZN540, a design that optimises performance and prolongs usable life by minimising writes. When the host application knows which data is variable in content, and its size, it can place different types of data in different zones. That means unchanging data blocks are not disturbed by deletions of small data items in the same blocks – which would eventually trigger a garbage collection process. 

The ZN540 complies with the ZNS Command Set 1.0 specification (TP 4053, TP 4076). However, only WD currently produces drives that meet this specification. In other words, customers are looking at single-sourcing. They also need their Linux host servers updated to kernel v5.9, which supports zoned SSDs, plus application code to manage the zoned drives. We understand Kioxia is active in the NVMe technical working group for ZNS. That’s promising.

WD is sampling the ZN540 to select customers.

Zoning scheme

Zoning divides the SSD’s capacity into zones and places data in a zone according to its IO type. A zone can only be written sequentially and it must be erased before new data is written to it. Non-zoned SSDs have controller software with a Flash Translation Layer (FTL) which manages data placement on the drive.

The FTL finds and recovers erased blocks, 4KB in size, in a so-called garbage collection process, which involves moving data around the drive (read, move and rewrite) to create 4KB blocks of empty space that can be used for new data. Empty space is needed to act as an internal buffer for this, and that space cannot be used for storing data.

WD zoned SSD scheme.

Many data writes to the drive are smaller than 4KB in size, and deleting them involves a 4KB block being left partially written, hence the need for garbage collection processing. Host-managed zone drives have zone management software to do this job. It should aggregate small random writes into a 4KB data set or chunk which can be written with a single write process into a 4KB block in the SSD.

This reduces the total amount of writes to the drive, and so the Write Amplification Factor (WAF), extending the drive’s life while stopping reads from being delayed by a garbage collection process.
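To illustrate the write amplification point, here is a simplified toy model in Python. It assumes 4KB blocks and 512-byte writes, and is a conceptual sketch rather than anything resembling WD’s zone management software.

```python
# Simplified numerical illustration of write amplification, under assumed sizes:
# aggregating small random writes into full blocks keeps the WAF near 1, while
# giving every small item its own block forces far more physical writes (and
# later garbage collection).
BLOCK = 4096                    # assumed SSD write block size in bytes

def waf(write_sizes, aggregate):
    logical = sum(write_sizes)
    if aggregate:
        # Zone-style host management: pack small writes, flush full blocks.
        physical = -(-logical // BLOCK) * BLOCK      # round up to whole blocks
    else:
        # Naive placement: every small write occupies (at least) a whole block.
        physical = sum(-(-size // BLOCK) * BLOCK for size in write_sizes)
    return physical / logical

small_writes = [512] * 1000     # a thousand 512-byte items
print(f"WAF, one block per write: {waf(small_writes, aggregate=False):.1f}")   # 8.0
print(f"WAF, aggregated writes  : {waf(small_writes, aggregate=True):.2f}")    # ~1.00
```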

The Zoned Storage website has information that can help storage engineers and designers.

And in other news

WD today also announced a capacity bump for the WD Blue content creation gumstick drive, and launched the IX SN530 NVMe industrial-class gumstick drive.

The WD Blue SN550 is an NVMe M.2 format drive with 2TB capacity, doubling the current Blue SN550’s 1TB maximum. It is intended for internal use on PCs and notebooks and supports up to 2.6GB/sec sequential read speed. The drive costs $249.99 MSRP and is available now.

The NVMe M.2 format is also used for the IX SN530, which is designed for harsh environment use. Its working temperature range spans -40°C to +85°C, and it has a 20G operating vibration specification. The capacity range is 256GB to 2TB, with WD again not specifying the intermediate levels. This TLC drive delivers up to 2.4GB/sec sequential reads and 1.95GB/sec sequential writes. It supports a projected 5.2PB written during its life.

An SLC version, due in January, will have an 85GB to 340GB capacity range and support 24PB written during its working life.

The IX SN530 is sample shipping now.