
Toshiba publishes list of consumer HDDs that use shingled magnetic recording

Toshiba has revealed which of its desktop-type drives use SMR (shingled magnetic recording) technology, which can suffer slower performance under continuous random writes.

Some users have complained that desktop SMR drives can exhibit poor performance in certain instances, such as loading a large game composed of many files.

This is because when a file is read, the operating system updates its access time metadata and writes it back to the drive. Access time collection and storage is a default element of file metadata in Windows and macOS. These continuous access time updates are random disk writes, and so fall squarely into SMR’s performance vulnerability zone.
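A quick way to see the mechanism at work: on a filesystem with access-time updates enabled, a plain read changes the file’s atime, which the OS must then write back to the drive. A minimal Python sketch (note that whether the timestamp actually moves depends on mount options):

```python
# Minimal sketch: watch a plain read trigger an access-time metadata update.
# Whether st_atime actually changes depends on mount options -- Linux's
# default 'relatime' defers some updates, and Windows can disable
# last-access tracking entirely.
import os
import time

path = "example.dat"
with open(path, "wb") as f:
    f.write(b"x" * 4096)            # create a file to read back

before = os.stat(path).st_atime
time.sleep(1.1)                     # make any timestamp change visible

with open(path, "rb") as f:
    f.read()                        # a pure read, no user data written

after = os.stat(path).st_atime
print(f"atime before: {before}, after: {after}")
# If 'after' moved, the OS now owes the drive a small metadata write --
# exactly the kind of random write SMR handles poorly.
```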

Toshiba uses SMR technology – previously undocumented – in several desktop drives and in some video surveillance HDDs:  P300 6TB, P300 4TB, DT02 6TB, DT02 4TB, DT02-V 6TB and DT02-V 4TB.

Certain notebook PC, game console and external consumer drives also use SMR: L200 2TB, L200 1TB, MQ04 2TB and MQ04 1TB.

Toshiba said it “works extensively with notebook and desktop PC vendors on the selection of the appropriate storage media to help ensure the data integrity, reliability and planned lifetime requirements of the system”.

The company does not use SMR in the N300, a NAS drive intended for the consumer market – unlike Western Digital which uses SMR in some low-end WD Red NAS devices.

Micron reinvents storage IO stack for the solid state age

Micron has devised a modified storage IO stack for Linux that delivers lower latency, faster performance and longer life. The US chipmaker said the ‘heterogeneous-memory storage engine’ (HSE) is host-level software, not device-level.

HSE works with SSDs and storage class memory and is extensible to new interfaces and storage devices for applications across databases, IoT, 5G, AI, HPC and object storage.

The code optimises performance and endurance by orchestrating data placement across DRAM and multiple classes of SSDs or other solid-state storage devices. It implements a key:value store, and scales to terabytes of data and hundreds of billions of keys per store.

A storage engine connects an application, such as a database, to storage drives and their controllers, enabling the application to talk directly to the drives; it is not the drive controller itself. Micron’s HSE code sits in the host and replaces a standard or existing storage IO stack, so it needs to be integrated with the application. Micron has eased this integration by making HSE open source.
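Conceptually, a tiering key:value engine looks something like the hypothetical sketch below. The class and method names are ours for illustration only; this is not Micron’s HSE API.

```python
# Hypothetical sketch of what a tiering key:value engine does conceptually.
# Class and method names are ours for illustration -- NOT Micron's HSE API.
from typing import Optional

class TieredKVStore:
    """Keeps hot values in a DRAM cache and demotes cold ones to a capacity
    tier in large batches, curbing write amplification on flash."""

    def __init__(self, dram_budget: int) -> None:
        self.dram_budget = dram_budget   # bytes of DRAM cache allowed
        self.dram: dict = {}             # hot tier (stand-in for DRAM)
        self.ssd: dict = {}              # cold tier (stand-in for an SSD)
        self._used = 0

    def put(self, key: bytes, value: bytes) -> None:
        self._used += len(value) - len(self.dram.get(key, b""))
        self.dram[key] = value
        if self._used > self.dram_budget:
            self._demote_cold()

    def get(self, key: bytes) -> Optional[bytes]:
        if key in self.dram:
            return self.dram[key]
        return self.ssd.get(key)

    def _demote_cold(self) -> None:
        # Move roughly half the cached values down in one batch, so the
        # flash tier sees a few large sequential writes rather than many
        # small random ones -- the behaviour a flash-aware engine favours.
        batch = list(self.dram.items())[: max(1, len(self.dram) // 2)]
        for k, v in batch:
            self.ssd[k] = v
            del self.dram[k]
            self._used -= len(v)
```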

Micron has tested HSE-enabled workloads against the RocksDB storage engine, using YCSB (Yahoo! Cloud Serving Benchmark) workloads and four Micron 9300 SSDs. HSE improved throughput 6x, reduced latency 11x and extended flash endurance 7x. It achieved this by reducing write amplification.
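Write amplification is the ratio of bytes physically written to flash to the bytes the host asked to write. A back-of-the-envelope illustration with assumed numbers (ours, not Micron’s) shows why cutting it stretches endurance:

```python
# Write amplification factor (WAF) = bytes written to flash / bytes the
# host asked to write. Illustrative numbers only, not Micron's figures.
host_writes_tb = 100
waf_baseline, waf_reduced = 7.0, 1.0     # assumed WAFs for the comparison

flash_baseline = host_writes_tb * waf_baseline   # 700 TB of flash wear
flash_reduced = host_writes_tb * waf_reduced     # 100 TB of flash wear

# The drive's program/erase budget is fixed, so a 7x lower WAF stretches
# endurance roughly 7x for the same host workload.
print(f"Flash wear reduced {flash_baseline / flash_reduced:.0f}x")
```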

Micron has also integrated HSE with MongoDB and claims an 8x throughput improvement.

HSE is available on GitHub, where there is an HSE Wiki resource. A blog by Larry Hart, Micron director of product marketing, provides additional information.

HSE uses

We think Micron’s HSE initiative is motivated in part by a desire to encourage third-party vendors to modify their applications to work with its upcoming 3D XPoint SSDs.

Ceph is a potential integration candidate for HSE, and Stefanie Chiras, VP and GM of Red Hat Enterprise Linux, said in the announcement release: “We see enormous potential in the technologies being introduced by Micron, especially as it takes an innovative approach in lowering the latency between compute, memory and storage resources.”

Scality, an object storage supplier, has also provided a supporting quote, courtesy of field CTO Brad King: “While our storage software can support ‘cheap and deep’ on the lowest-cost commodity hardware for the simplest workloads, it can also exploit the performance benefits of technologies like flash, storage class memory and SSDs for very demanding workloads.

“Micron’s HSE technology enhances our ability to continue optimising flash performance, latency and SSD endurance without trade-offs.”


Kioxia’s software-enabled flash could be a game changer in SSD management

Kioxia today introduced software-enabled flash (SEF), a radical development in solid state storage management that gives users the ability to optimise for specific workloads using flash ‘personalities’.

Currently, the standard SSD controller assumption is that one size fits all, apart from relatively crude read- or write-optimisations for specific products. Kioxia overturns this with a software-defined flash controller that enables dynamically reconfigurable flash at the SSD level or – for hyperscalers – the build-it-yourself flash storage pool.

Eric Ries, SVP, memory storage strategy division at KIOXIA America, said in a statement: “Our customers have been pushing for the ability to drive operational efficiency in the data centre programmatically, and SEF technology will meet this need by placing access and control of flash directly in the hands of hyperscale programmers.”

SEF virtualises the dies and enables the operator to control dynamically how flash is optimised across thousands of dies, matching it to specific workload needs. For instance, hyperscalers can gain better latency control, with host software managing tasks on the SSD through API access. This means background activities will not hinder latency-sensitive work.

As workload requirements change, a hyperscaler or large enterprise could reconfigure a population of SSDs and their dies to deliver better performance and more cost-efficient use for the new workload.

The SEF and API scheme also means that host software can be used to manage the NAND dies across flash generation changes.

Virtual devices

The controller or SEF unit hardware is a system-on-chip (SoC) unit with a micro-controller and flash dies mounted on a printed circuit board.

The SoC has a PCIe interface. Sub-units handle NAND block and page programming timing, read tasks with error correction, cell health, defect management and endurance-extension algorithms. A DRAM controller sub-unit enables the optional addition of DRAM.

The SEF SoC sub-divides the NAND dies under its control into sub-domains or ‘virtual devices’. Each virtual device can have different characteristics, such as quality of service arrangements, and its own personality. Personalities could include block device, Zoned Namespaces, TRocksDB, Firecracker or a custom hyperscale flash translation layer (FTL). The host can control data placement using the virtual devices.

The virtual devices are dynamically reconfigurable through API access. Some or all of these personalities can operate in parallel on the same SSD, with the SEF software isolating the virtual device domains from each other – a capability that should become more useful as SSD capacities rise. The API code is open source and gives access to the full capacity of the NAND dies.
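Kioxia has not published full API details here, but the programming model might conceptually resemble the hypothetical sketch below. Every name in it is ours, not Kioxia’s.

```python
# Hypothetical sketch of an SEF-style programming model. Every name here
# is ours for illustration -- Kioxia's actual API may differ substantially.
from dataclasses import dataclass

@dataclass
class VirtualDevice:
    name: str
    dies: list            # NAND dies dedicated to this isolated domain
    personality: str      # e.g. "block", "zns" or a custom FTL
    latency_class: str    # quality-of-service hint for the scheduler

class SefUnit:
    """Carves a pool of NAND dies into isolated, reconfigurable virtual
    devices, each running its own personality in parallel."""

    def __init__(self, total_dies: int) -> None:
        self.free_dies = list(range(total_dies))
        self.vdevs = {}

    def create_vdev(self, name, n_dies, personality, latency_class="best-effort"):
        if n_dies > len(self.free_dies):
            raise ValueError("not enough free dies")
        dies, self.free_dies = self.free_dies[:n_dies], self.free_dies[n_dies:]
        self.vdevs[name] = VirtualDevice(name, dies, personality, latency_class)
        return self.vdevs[name]

    def destroy_vdev(self, name: str) -> None:
        # Dynamic reconfiguration: return the dies to the pool so they can
        # be re-carved for a new workload mix.
        self.free_dies += self.vdevs.pop(name).dies

# Two personalities sharing one unit, isolated at the die level so
# background work in one domain cannot disturb the other.
sef = SefUnit(total_dies=64)
sef.create_vdev("zoned-archive", 48, personality="zns")
sef.create_vdev("low-lat-db", 16, personality="block", latency_class="low")
```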

Check out a Kioxia technical introduction to its software-defined flash controller and API.

Data centre SSD sales soar at the expense of hard drives

Update 1 May 2020: Hitachi Vantara disputes Gartner numbers.

Data centre server and SSD revenues are expected to soar between 2019 and 2024. At the same time disk drive revenues will contract, Gartner’s first data centre semiconductor revenues report reveals.

Wells Fargo analyst Aaron Rakers has distributed a summary to his subscribers. According to Gartner, server semiconductor revenues were $30.3bn in 2019 and will grow at 8.1 per cent CAGR to $44.7bn in 2024.

Data centre SSD revenues will increase 24.9 per cent CAGR from $7.74bn in 2019 to $23.5bn in 2024. Hard disk drive revenues will decline at -3.8 per cent CAGR. 

Gartner forecasts DRAM revenues will increase at 11.9 per cent CAGR from 2019’s $11.8bn to 2024’s $20.7bn, and NAND flash memory revenues will jump at 25.5 per cent CAGR from $7.4bn to $23.2bn. Emerging memory technologies such as 3D XPoint and PCM will grow at 64.9 per cent CAGR, from today’s low level to about $1bn (our very rough estimate).

Supplier revenue shares

Gartner details all-flash array supplier revenues in a separate report and Rakers has helpfully made a chart that uses the data.

Dell EMC is in first place, with $688m in revenues in the final 2019 quarter. NetApp is second ($501m), and IBM ($372m) and Pure Storage ($368m) are effectively joint third.

Rakers has made a supplier revenue share pie chart to aid vendor comparison.

Huawei is fifth with $321m. Then comes HPE at $213m, while Hitachi Vantara missed out on the big flash array bucks, with revenues of about $80m. This is low. A fast-growing startup such as VAST Data could soon overtake it.

Update 1 May 2020: A Hitachi Vantara spokesperson told us: “While we admire and appreciate our analyst friends, sometimes they have to make best guesses and those guesses can be wrong. The Gartner numbers cited [above] are analyst estimates only and they were not corroborated by Hitachi as accurate. In fact, we can assure you these estimates are significantly under our actuals.”

We’re told Hitachi Vantara is a privately held, wholly owned subsidiary of Hitachi, Ltd. As such, it does not publicly disclose its revenues to the industry analyst community for the purposes of market share reporting. 

Your occasional storage round-up featuring Kioxia, Samsung, Veritas and more

Kioxia America

Kioxia America has added snapshots and clones to its KumoScale all-flash storage system.


It has done this by adding Kubernetes CSI-compliant snapshots and clones to the KumoScale NVMe-over-Fabrics-based software. This means faster backups for large, stateful containerised applications such as databases. Generally, database operations have to be quiesced during backup. Snapshotting takes seconds and releases the database for normal operation more quickly than backup methods based on copying database records or their changes.

KumoScale snapshotting works with the CSI-compatible snapshot feature of Kubernetes v1.17. Users can incorporate snapshot operations in a cluster-agnostic way and take application-consistent snapshots of Kubernetes stateful sets directly from the Kubernetes command line.
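The VolumeSnapshot resource involved is the standard CSI snapshot API of the Kubernetes v1.17 era; the snapshot class and PVC names below are placeholders we invented, not KumoScale defaults. A sketch using the official Python client (the command-line equivalent is applying the same manifest with kubectl):

```python
# One way to request a CSI snapshot programmatically, using the official
# kubernetes Python client. The class and PVC names are hypothetical
# placeholders, not KumoScale defaults.
from kubernetes import client, config

config.load_kube_config()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",  # the v1.17-era API
    "kind": "VolumeSnapshot",
    "metadata": {"name": "db-snap-1"},
    "spec": {
        "volumeSnapshotClassName": "kumoscale-snapclass",  # hypothetical
        "source": {"persistentVolumeClaimName": "postgres-data"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```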

Separately, Kioxia has rebranded all the old Toshiba Memory Corp. consumer products with the Kioxia moniker, meaning microSD/SD memory cards, USB memory sticks and SSDs. These are for use with smartphones, tablets and PCs, digital cameras and similar devices.

Qumulo gets cosy with Adobe

Scale-out filesystem supplier Qumulo is working with Adobe so that work-from-home video editors and production staff can collaborate on work previously done in a central location.

Two Adobe software products are involved: Premiere Pro and After Effects. The two companies say they enable collaborative teams to create and edit video footage using cloud storage, with the same levels of performance, access and functionality as workstations in the studio.

You can register for a Qumulo webinar to find out more.

Samsung develops 160+ layer 3D NAND

Samsung has accelerated the development of its 160-plus layer 3D NAND, a string-stacking arrangement of two 80+ layer stacks, Korea’s ET News reports. This will be Samsung’s seventh 3D NAND generation in its V-NAND product line.

Samsung’s gen 6 V-NAND is a 100+ layer chip – the precise layer count is a secret – which started sampling in the second half of 2019. There is no expected date for the 160+ layer chip to start sampling, but it looks like Samsung wants to stay a generation ahead of its competitors, and thereby achieve lower costs in $/TB terms.

China’s Yangtze Memory Technology Corporation announced this month that it is sampling string-stacked 128-layer 3D NAND. SK hynix should sample a 128-layer chip by the end of 2020.

WekaIO launches Weka AI

WekaIO, a vendor of fast filesystems, talks of AI distributed across edge, data centres (core) and the public cloud, with a multi-stage data pipeline running across these locations. It says each stage within this AI data pipeline has distinct storage IO requirements: massive bandwidth for ingest and training; mixed read/write handling for extract, transform, load (ETL); ultra-low latency for inference; and a single namespace for entire data pipeline visibility.

Naturally, Weka says its AI offering meets all these varied pipeline-stage requirements and delivers fast insights at scale.

Weka AI is a framework of customisable reference architectures (RAs) and software development kits (SDKs), built with technology alliance partners such as Nvidia and Mellanox. The company said engineered systems with partners ensure that Weka AI will provide data collection, workspace and deep neural network (DNN) training, simulation, inference, and lifecycle management for the AI data pipeline.

Weka claims its filesystem can deliver more than 73 GB/sec bandwidth to a single GPU client. You can check out a datasheet to get more information.

Veritas says dark data causes CO2 emissions

Data protector and manager Veritas says storing cold, infrequently-accessed data on high-speed storage makes the global warming crisis worse. This so-called dark data sits on fast flash or disk drives and so consumes energy that it doesn’t actually need.

Veritas claims on average 52 per cent of all data stored by organisations worldwide is ‘dark’ as those responsible for managing it don’t have any idea about its content or value.

The company estimates that 6.4 million tonnes of CO2 will be unnecessarily pumped into the atmosphere this year as a result. It cites an IDC forecast that the amount of data the world stores will grow from 33ZB in 2018 to 175ZB by 2025. This implies that, unless people change their habits, there will be 91ZB of dark data (52 per cent of 175ZB) in five years’ time.

Veritas’ announcement says we should explore its Enterprise Data Services Platform for more information on data protection in the world of dark data – but there is no specific information there linking the platform to reducing dark data or global warming.

Shorts

Databricks is hosting the Hackathon for Social Good as part of the Spark + AI Summit virtual event on June 22-26. The data analytics vendor is encouraging participants to focus on one of these three issues for their project: provide greater insights into the COVID-19 pandemic; reduce the impact of climate change; or drive social change in their community.

Enterprises with office workers who access critical data now face having these people, sometimes thousands of them, work from home and, by default, use relatively insecure internet links to reach that sensitive data. They can set up virtual private networks (VPNs) to provide secure links, but this entails additional complexity. FileCloud says it can provide VPN-level security with seamless access to on-premises file shares from home, without a VPN.

Its software uses common working folders. It has built-in ransomware protection, anti-virus and smart data leak protection, and there is no need to change file access permissions.

DBM Cloud Systems, which automates data replication with metadata, has joined Pure Storage’s Technical Alliance Partner program. That means DBM’s Advanced Intelligent Replication Engine (AIRE) is available for Pure Storage customers to replicate and migrate petabyte-scale data directly to Pure Storage FlashBlade, from most object storage platforms, including AWS, Oracle, Microsoft and Google.

In-memory real-time analytics processing platform GigaSpaces has announced its v15.5 software release. The upgrade doubles performance overall and introduces ElasticGrid, a cloud-native orchestrator, which the company claims is 20 per cent faster than Kubernetes.

Igneous has updated its DataDiscover and DataFlow software services.

DataDiscover provides a global catalogue of all a customer’s files across its on-premises and public cloud stores. A new LiveView feature provides real-time insight into files by type, group, access time, keyword and so forth, helping users find files faster. LiveViews (reports) can be shared with other users, subject to their permissions.

DataFlow is a migration tool. It supports new NAS devices and cloud filesystems or objects without vendor lock-in. Data can be moved between on-premises and public cloud environments, whether it is stored in NFS, SMB or S3 object form. NFS and SMB files can be moved to an S3 object store.

Nutanix HCI software has been certified to run with Avid Media Composer video editing software and the Avid MediaCentral media collaboration platform. Nutanix says it is the first HCI-powered system to be certified for Avid products.

Verbatim is launching a range of M.2 2280 internal SSDs, delivering high speeds and low power consumption for desktop, ultrabook and notebook client upgrades.

Infinidat adds K8s support and multi-cloud storage data services

Infinidat has plunked a CSI (container storage interface) driver into high-end InfiniBox arrays, along with services that support multi-cloud environments.

Infinidat VP Erik Kaulberg said in a phone briefing last week that companies are getting strategic about containers, and he cited Rancher Labs CEO Sheng Liang’s proclamation that “Kubernetes is the new Linux; you run it everywhere.”

He said multi-cloud is “the strategic direction for most companies”, and that this had led Infinidat to write a CSI driver that supports both strategic trends. This entailed adding services on top of the basic Kubernetes integration.

Customers are accustomed to the storage data services that legacy applications provide and they want these services with containerised applications, Kaulberg said.

The Infinidat CSI driver provides persistent volume (PV) block storage to a Kubernetes pod – which is a set of containers with shared storage and networking and a runtime specification. The driver provides a range of services such as dynamic provisioning, resizing, cloning, snapshots, external dataset import and restores.

This enables the transfer of PVs between InfiniBox arrays in a customer data centre, and to and from Infinidat’s public Neutrix cloud service. The PVs can be transferred from there to Kubernetes pods running in an EKS cluster in AWS, Azure or GCP.
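As an illustration of how such PVs are requested in practice, here is a standard Kubernetes PVC bound to a class served by a CSI driver, via the official Python client. The storage class name is our hypothetical placeholder, not necessarily Infinidat’s real one.

```python
# A standard CSI provisioning request: a PVC bound to a storage class the
# CSI driver serves. The class name below is a hypothetical placeholder --
# consult the driver's documentation for real values.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="infinibox-block",   # hypothetical class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"},
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```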

Elastic Data Fabric

These storage systems form an Elastic Data Fabric. This clusters multiple Infinidat storage systems across multiple on-premises data centres and data in public clouds into a single global storage system that is scalable up to multiple exabytes.

The PVs are movable within the Elastic Data Fabric and are accessed across Fibre Channel, iSCSI, NFS and NFS TreeQs links. (NFS TreeQ is an Infinidat NFS implementation featuring directory quotas.)

Kaulberg expects container numbers to grow substantially as enterprises adopt a microservices approach. That means PV counts will increase, and in anticipation Infinidat has scaled its CSI driver to support hundreds of thousands of PVs.

Infinidat’s CSI driver enables InfiniBox to operate with VMware’s Tanzu Kubernetes Grid, Red Hat OpenShift, Docker, Google’s Anthos and other Kubernetes platforms.

InfiniBox arrays are distinguished by their use of DRAM caching, supported by some flash storage, to provide flash levels of performance with disk storage levels of pricing. It is the only prominent block and file storage array supplier that does not have an all-flash array.

The InfiniBox CSI driver is available free of charge with an Apache 2 license via GitHub and VSX, with native deployments by an OpenShift Operator or Helm.

Critical thinking: NetApp builds Scale-out Data Protection with Commvault

NetApp has launched a backup/disaster recovery system based on Commvault software that runs on NetApp HCI and stores backup data on its all-flash FAS arrays and StorageGRID object storage.

According to NetApp, Scale-out Data Protection (SDP) protects “all major operating systems, applications, and databases on virtual and physical servers, NAS shares, cloud-based infrastructures, and endpoint/mobile devices”.

Brett Roscoe, VP, product management, NetApp, said: “The launch of SDP provides our joint customers with a simple, turn-key solution that uses NetApp HCI to enhance the scalability and robustness of the Commvault software in protecting their most critical data across hybrid cloud environments.”

NetApp and Commvault said they have around 1,200 joint customers.

SDP

SDP has a NetApp validated architecture and incorporates Commvault Complete Backup and Recovery software, which executes as a traditional backup-orchestrating media server in the HCI system.

Blocks & Files diagram.

Commvault runs on the source systems, and its snapshot capability – supporting more than 300 array snapshot engines – provides the first line of defence against data loss. Because of this, SDP has near-instant restore capabilities, according to NetApp.

The primary backup tier is a NetApp AFF array, which is designed for fast access to primary data file and block storage.

NetApp Commvault SDP diagram

Protection data can be copied to a StorageGRID object storage system for secondary, longer-term retention. The StorageGRID system can be in the same data centre or remote, thus providing disaster recovery capability. Virtual machines restored from the StorageGRID or AFF systems can be fired up on the NetApp HCI appliance as a stopgap until the damaged source systems are made functional again.

Protection data can also optionally be written to supported public clouds. Customers can get air-gapped ransomware protection from S3-accessed tape-based services in these clouds; meaning AWS Glacier/Glacier Deep Archive and Azure Archive. This functionality has not specifically been tested in the NetApp validated architecture.
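For illustration, writing a protection copy to such a tape-backed archive tier is a standard S3 call. This generic boto3 sketch (bucket and file names invented) is not NetApp’s or Commvault’s actual mechanism:

```python
# Generic illustration of writing a protection copy to an S3 archive tier;
# this is standard boto3, not NetApp's or Commvault's implementation.
# Bucket and file names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="backup-set-0421.bak",        # hypothetical backup artefact
    Bucket="dr-vault",                     # hypothetical bucket
    Key="commvault/backup-set-0421.bak",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},  # AWS Glacier Deep Archive
)
```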

Scale-out

NetApp and Commvault emphasise the scale-out capabilities of the NetApp HCI control system. Coupled with the AFF system as the primary tier, this positions SDP as a high-end data protection system for critical data.

Pure Storage’s all-flash FlashBlade similarly uses fast flash, positioned as both a primary data array and a backup target. Pure’s FlashArray//C uses slower flash.

The NetApp SDP product bundle is available from NetApp and its channel partners.

Western Digital begins mending fences in WD Red NAS drive SMR spat

Western Digital has heralded a positive shift in its approach to users of shingled WD Red NAS drives, via a short statement on the company blog.

As described in our recent article Western Digital admits 2TB-6TB WD Red NAS drives use shingled magnetic recording (SMR), some users can experience performance problems in situations such as adding the drives to RAID groups that use conventional magnetic recording (CMR) drives.

Fellow disk drive makers Seagate and Toshiba use undocumented SMR technology in consumer desktop drives, but only WD has used it in low-end NAS drives.

WD wrote in the un-bylined blog, dated April 22:

The past week has been eventful, to say the least. As a team, it was important that we listened carefully and understood your feedback about our WD Red NAS drives, specifically how we communicated which recording technologies are used. Your concerns were heard loud and clear. Here is that list of our client internal HDDs available through the channel:

A table in the blog lists which of its internal consumer, small business and NAS drives use SMR technology and which use CMR.

WD said it will update its marketing materials – brochures and datasheets – to provide similar data and “provide more information about SMR technology, including benchmarks and ideal use cases”.

The final paragraphs affirm that WD recognises some customers are experiencing problems and is doing something about it:

“Again, we know you entrust your data to our products, and we don’t take that lightly. If you have purchased a drive, please call our customer care if you are experiencing performance or any other technical issues. We will have options for you. We are here to help.

More to come.”

Caringo claims cost advantage for object storage appliances

Caringo, an object storage software supplier, has launched a set of appliances that run a new version of its Swarm software.

Swarm 11.1 includes built-in content management, search and metadata management. It has improved S3 compliance, faster software performance, email and Slack alerting, and integrates Elasticsearch 6.

Caringo claims Swarm Server Appliances (SSA) start at 32 per cent less than the cost of other on-premises object storage systems, and 42 per cent less than Amazon S3 storage service fees for the same capacity over 3 years.

CEO Tony Barbagallo said the new appliances “can deliver instant access to archives, enabling remote workflows and streaming services”.

The company has launched four appliances.

  • The 1U SSA (Single Server Appliance) with 2 x 7.68TB SSDs, for remote offices and small-to-medium workloads.
  • The s3000 1U Standard Server with 12 x 14TB disk drives, giving 168TB raw (111.4TB usable after replication and erasure coding), clustered with a minimum of three nodes.
  • The hd5000 4U High-Density Server with 60 x 14TB drives, meaning 840TB raw (665TB usable).
  • The m1000 1U Management Server with 4 x 960GB SATA SSDs and a single 256GB NVMe SSD.
Caringo Swarm Server Appliances.

A minimum three-node s3000 cluster delivers 504TB raw in 3U, and a cluster can scale to more than 1,000 nodes.

Caringo s3000 Standard Storage Appliance

All software functions run in virtual machines on the SSA, but on the m1000 Management Server when clustered s3000 and/or hd5000 appliances are used. Alternatively, they can run on VMs in a customer’s virtual environment. Using the m1000 means content-related software functions run in flash, while bulk storage uses nearline drives.

Content can be backed up to any S3-compliant target, either in the public cloud or on-premises. 

The appliances and Swarm 11.1 are available now.

The ‘nines’ numbers

Caringo claims the SSA provides 10 ‘nines’ of data durability (99.99999999 per cent), while a cluster of s3000s and hd5000s can provide between 13 and 25 ‘nines’ (up to 99.99999999999999999999999 per cent), dependent upon the specific data protection method and number of deployed nodes.

Two ‘nines’ (99%) means you could lose one object out of 100 in a year. Five ‘nines’ (99.999%) means you could lose one object out of 100,000 in a year. Ten ‘nines’ means a loss of up to one object in 10,000,000,000 (10 billion) a year. And 25 ‘nines’ means you could lose one object out of 10,000,000,000,000,000,000,000,000 in a year – one in 10 septillion.
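The arithmetic generalises neatly: n nines of durability means an annual loss rate of one in 10^n. A quick sketch:

```python
# Each 'nine' of durability divides the expected annual loss rate by ten:
# n nines means losing roughly one object per 10**n stored per year.
for n in (2, 5, 10, 25):
    print(f"{n} nines -> ~1 object lost per {10 ** n:,} in a year")
```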


‘Recovery timing’ is everything: SK Hynix sees revenue growth on server demand, but warns of uncertainty ahead

Korean fabber SK Hynix’s repositioning toward newer and denser DRAM and NAND products paid off in the first 2020 quarter, as the company reported 6 per cent y/y revenue growth from its DRAM and NAND operations.

However, the firm’s CFO, Cha Jin-seok, said that the point at which global economies affected by the COVID-19 pandemic bottom out and recover was crucial for ascertaining demand, telling an earnings conference call: “The biggest factor to our demand forecast is the stabilisation of COVID-19 and the recovery timing of global economic activity. If the economic recession is prolonged, we can’t rule out that even memory demand for servers could slow down.”

SK Hynix is the second largest manufacturer of memory semiconductors globally, and competes with Samsung and Micron.

Its first 2020 quarter saw revenues of ₩7.2trn ($6.2bn), up from the year-ago ₩6.7trn ($6.0bn), and net income falling from ₩1.1trn ($947m) a year ago to ₩649bn ($559m), a 41 per cent drop. It was mainly because of substantial product cost decreases that it was able to make a profit, helped by SSD sales.

The company had a dismal 2019 as lower demand created supply gluts, leading to price falls. As a result it decided to accelerate transitions to denser DRAM and NAND processes, which would lower production costs and enable it to compete better. In DRAM that meant planning a transition from 1Ynm to 1Znm products, and increasing layer counts in 3D NAND.

How did the novel coronavirus pandemic affect the company in the short term? PC and mobile DRAM demand fell but server demand remained strong. NAND shipments rose because of this server demand strength.

Speaking about the firm’s new fab in Wuxi, Jiangsu province, and its $13.2bn 53,000m² M16 semiconductor factory in Icheon, Gyeonggi province, SK Hynix said: “With Wuxi, as you know, we have done the buildout last year, and the equipment starts to be moved in.” It added that Wuxi was on track for completion and that “for M16 as well, work is still underway to complete the clean room by the end of this year.”

SK Hynix says the rest of the year is full of unprecedented uncertainty because of the pandemic. The company expects global smartphone sales to decline, but believes demand for IT products and services driven by the social distancing trend will grow server memory demand in the mid to long term.

SK Hynix plans to move some DRAM capacity to making CMOS sensors. It will boost production of 1Ynm mobile DRAM and start mass-producing 1Znm DRAM in the second half of the year. The company is also boosting production of GDDR6 and HBM2E DRAM.

Wells Fargo managing director and analyst Aaron Rakers noted that new gaming consoles in the second half could increase GDDR6 demand, while high-performance computing needs should lift HBM2E (high bandwidth memory) sales.

He thinks 5G smartphone sales could increase in the second half of the year, which would also lift DRAM demand.

On the NAND front, SK Hynix will focus more on 96-layer 3D NAND, lessening the amount of 72-layer product. It will also start 128-layer product mass production in this, the second quarter of 2020. Rakers told subscribers: “The company expects combined 96 and 128-Layer NAND to exceed 70 per cent of shipments in 2020.”

The company aims to sell more SSDs, which now account for 40 per cent of its NAND flash revenues, and to add a data centre PCIe SSD product line to widen its market and increase profitability.

The business picture for SK Hynix, looking ahead, is not that bleak. Absent a prolonged pandemic, it should be able to continue growing.

WFH economy fuelled ‘strong, accelerated’ demand from cloud, hyperscale, says Seagate as nearline disk ships drive topline up 18%

Seemingly driven by the remote-work trend of the past few months, Seagate revenues rose strongly in its latest quarter, fuelled by demand for high-capacity drives from public cloud and hyperscale customers.

It reported revenues of $2.72bn, 18 per cent up on a year ago, in its third fiscal 2020 quarter ending March 31, 2020. Its net income was $320m, 64.1 per cent higher than a year ago.

The Seagate money-making machine’s quarterly progress.

While the Seagate topline swan glided smoothly over the waters, its feet paddled furiously to overcome supply chain and logistics problems, and build and ship record exabytes of nearline disk capacity. Consumer and mission-critical drive numbers were more affected by the pandemic.

CEO David Mosley said in a prepared quote: “We delivered March quarter revenue and non-GAAP EPS above the midpoint of our guided ranges, supported by record sales of our nearline products and strong cost discipline.”

Summary financial numbers:

  • Free cash flow – $260m
  • Gross margin – 27.4 per cent
  • Diluted EPS – $1.22
  • Cash and cash equivalents – $1.6bn

Total hard disk drive (HDD) revenues were $2.53bn, up 19 per cent y/y. But non-HDD revenues, which include Seagate’s SSD business, were more affected by pandemic supply chain issues, showing a mere 1.6 per cent y/y rise to $192m.

Earnings call

In the earnings call Mosley said Seagate had worked to overcome pandemic-related supply chain problems, saying: “Today, our supply chains in certain parts of the world are almost fully recovered, including China, Taiwan and South Korea and we see indications for conditions to begin improving in other regions of the world.”

He said: “Demand from cloud and hyperscale customers was strong and accelerated toward the end of the quarter, due in part to the overnight rise in data consumption driven by the remote economy brought on by the pandemic… The strength in nearline demand more than offset below-seasonal sales for video and image applications such as smart cities, safety and surveillance, as COVID-19-related disruptions impacted sales early in the quarter.”

But: “With the consumer markets among the first to get impacted by the onset of the coronavirus, we saw greater than expected revenue declines for our consumer and desktop PC drives.”

Capacity rises

Seagate shipped 120.2EB of disk drive capacity, up 56.7 per cent y/y, with an average of 4.1TB per drive – roughly 29 million units. Mass capacity (nearline) drives accounted for 57 per cent of Seagate’s overall revenue in the quarter ($1.56bn), up from 40 per cent a year ago. This was 62 per cent of Seagate’s HDD revenues, up from 44 per cent a year ago.

CFO Gianluca Romano said: “The mass capacity part of the business is really growing strongly.” Mosley confirmed that Seagate should ship 20TB HAMR drives by the end of the year.

Nearline drives rule, it seems, with continued demand expected in the next quarter from cloud service suppliers and hyperscalers, and possibly the quarter after that too.

Seagate’s guidance for the fourth fy2020 quarter is for revenues of $2.6bn plus or minus 7 per cent.

Western Digital implies WD Red NAS SMR drive users are responsible for overuse problems

Western Digital published a blog earlier this week that suggests users who are experiencing problems with their WD Red NAS SMR drives may be over-using the devices. The unsigned article suggests they should consider more expensive alternatives.

WD said it regretted any misunderstanding.

Western Digital Shingled Magnetic Recording diagram

The WD blog contains two paragraphs about performance:

“WD Red HDDs are ideal for home and small businesses using NAS systems. They are great for sharing and backing up files using one to eight drive bays and for a workload rate of 180 TB a year. We’ve rigorously tested this type of use and have been validated by the major NAS providers.”

The second paragraph explains: “The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users.”

WD suggests: “If you are encountering performance that is not what you expected, please consider our products designed for intensive workloads. These may include our WD Red Pro or WD Gold drives, or perhaps an Ultrastar drive. Our customer care team is ready to help and can also determine which product might be best for you.”

Defining moments

We think that the WD Red NAS SMR drives are not ideal for customers experiencing problems. The workload rate number – 180TB written per year – ignores the need for an intermittent workload that leaves sufficient idle time for background data management.

WD shingled tracks diagram.

We also think that terms used by WD are not defined. For example:

  • What is data intensity?
  • What does a “typical small business/home NAS workload” mean, apart from a workload of up to 180TB/year?
  • What does “intermittent” mean? Does it mean X minutes active followed by Y minutes inactive? What are X and Y?
  • What does “sufficient idle time” mean? Does it mean Z minutes per hour? What is Z?

This woolliness makes it difficult to understand if a WD Red NAS SMR drive is suited to a particular workload or not.

The trade-off for HDD vendors

We asked Chris Evans, a data storage architect based in the UK, what he thought about WD’s blog. We publish his response below:


With any persistent storage medium, we are at the mercy of how that technology is implemented. The trade-off for HDD vendors has been in making products capable of ever-increasing capacities while continuing to deliver reliability. Almost all the new techniques used in HDD capacity gains have a side effect. 

A few years ago, for example, HDDs started to get rate limits quoted – this wasn’t explicitly mentioned in product specifications, but obviously needed to be added as a warranty restriction because drives couldn’t write 24×7 with some of the latest technologies.

SMR represents a significant challenge (I wrote about it recently here – https://www.architecting.it/blog/managing-massive-media/) to the extent that WD’s own website (zonedstorage.io) references drive-managed SMR as having “highly unpredictable device performance”.  

That WD website, dated 2019, states: “Drive Managed disks are suitable for applications that have idle time for the drive to perform background tasks such as moving the data around. Examples of appropriate applications include client PC use and external backup HDDs in the client space.”

Evans continues: “I would expect in this circumstance that all HDD manufacturers explain when and how they are using SMR. It could be that SMR is used as a background task, so drives can cope with a limited amount of sustained write I/O, after which the performance cliff is hit and the drive has to drop to a consolidation mode to restack the SMR data. Customers would then at least know if they purchased SMR technology, that some degree of performance impact would be inevitable.

“Whilst HDD vendors want to increase capacity and reduce costs (the $/GB equation is probably the only game in town for HDDs these days), a little transparency would be good. Tell us when impactful technology is being used so customers can anticipate the challenges – and of course appliance and SDS vendors can accommodate this in their software updates.”