
Kasten: We can slay the microservices backup and restore dragon

If you want to back up data used by containerised applications, where do you start?

As Chris Evans, a storage consultant, argued in a recent Blocks & Files interview, the lack of backup reference frameworks is a serious problem when protecting production-class containerised systems.

So what are the options? You could:

  • Back up data at the infrastructure layer, looking at the storage arrays or the storage construct in hyperconverged systems.
  • Back up each container which uses persistent data on the infrastructure’s storage resources.
  • Focus on an application, which is built from containers that use the storage resources in the underlying infrastructure.
Application, container and infrastructure layers in a micro-services environment

According to Kasten, a California storage startup, the focus should be on the application, as applications use sets of containers which, in turn, use the underlying server, storage and networking resources.

It is difficult to isolate individual application container data from myriad others in a storage array backup that focuses on logical volumes and/or files. An application focus avoids these problems, Kasten said.

If you back up and restore at the container level, you recover at the container level. Restoring one container’s data without awareness of the state of the other containers at that time risks inconsistency: separate containers end up with different versions of what should be the same data.

Kasten

‘Kasten’ is German for ‘box’, which seems appropriate. The Silicon Valley company was founded in January 2017 by CEO Niraj Tolia and engineering VP Vaibhav Kamra, who worked together at Maginatics and EMC and have been friends since university. It raised $3m in a March 2017 seed round and $14m in an A-round in August this year. Kasten said it has enterprise customers but has not revealed any names yet.

Kasten founders

Kasten’s K10

Kasten’s software, called K10, provides backup and recovery for cloud-native applications and application migration.

It runs within, and integrates with, Kubernetes on any public or private cloud. The software uses the Kubernetes API to discover application stacks and their underlying components, and to perform lifecycle operations.

K10 provides backup, restore points, policy-based automation and compliance monitoring, and supports scheduling and workflows. It supports the open source Kanister framework, available on Github, with workflows captured as Kubernetes Custom Resources (CRs). Kanister has block, file and object storage building blocks.

It works with NetApp, AWS EBS, Dell EMC and Ceph storage, and supports the Container Storage Interface (CSI). K10 uses deduplication technology to reduce storage capacity needs and network transmission requirements.
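For a concrete sense of what application-level discovery looks like, here is a minimal sketch using the official Kubernetes Python client: it walks a namespace’s deployments and lists the persistent volume claims each one depends on. The namespace is hypothetical and this shows the general pattern, not Kasten’s code.

```python
# Minimal sketch: discover an application's deployments and the persistent
# volume claims (PVCs) they depend on, via the Kubernetes API. Illustrates
# the kind of application-level discovery described above; not Kasten code.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context
apps = client.AppsV1Api()

namespace = "my-app"  # hypothetical application namespace

for deploy in apps.list_namespaced_deployment(namespace).items:
    volumes = deploy.spec.template.spec.volumes or []
    pvcs = [v.persistent_volume_claim.claim_name
            for v in volumes if v.persistent_volume_claim]
    print(f"{deploy.metadata.name}: PVCs={pvcs}")
```

A backup tool working at this level can then snapshot each discovered PVC together with the Kubernetes objects that describe the application, rather than dealing with anonymous volumes on an array.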

Kasten claims, without giving any details, that K10 is up to ten times cheaper than unnamed legacy products and delivers up to 90 per cent faster mean time to recovery than using volume snapshots.

Migration

K10 has a migration capability that moves an entire application stack and its data in multi-cluster, multi-region and multi-cloud environments.

There are several reasons for doing this, Kasten said, such as disaster recovery, avoiding vendor lock-in, and sending data to test and dev and continuous integration environments.

Check out a K10 datasheet here. A free trial is available on the Kasten website.

Infinidat: All-flash arrays don’t do hyperscale – use disk drives instead

Infinidat has no plans to move from disk to quad-level flash for its capacity storage, arguing that all-flash arrays at scale are prohibitively expensive.

The high-end storage vendor makes InfiniBox arrays with 10PB effective capacity and fast data access based on DRAM caching with an intermediate NAND tier. The company is prepping Availability Zone (AZ) clustering technology for release next year. This will cluster up to 100 Infinidat arrays, with effective capacity of up to 1,000PB per AZ.

Infinidat argues all-flash arrays are a technological dead end at multi-petabyte scale. Disk is cheaper per TB than flash, has a longer working life, and DRAM caching makes its disk-based arrays faster than all-flash arrays.

Simply put, 1PB of disk is more affordable than 1PB of flash and needs less electricity. Scale that up to 1,000PB and the disk-vs-SSD cost and electricity usage differences are huge.

Three Infinidat execs briefed Blocks & Files to hammer home this message.

Their stance puts Infinidat at odds with rival array vendors Pure Storage, NetApp and VAST Data, a well-funded newcomer.

Recently announced QLC flash SSDs are closer in price to disk drives but endure fewer write cycles than mainstream TLC (3 bits/cell) SSDs. Over-provisioning, and controller software updates that zone areas of flash in the SSDs, are intended to ameliorate this.

VAST Data relies on a single QLC flash tier for its capacity storage, while NetApp intends to add QLC flash to its all-flash arrays next year. Pure Storage announced the QLC-using FlashArray//C in August this year. All three suppliers believe a QLC capacity tier provides faster data access than nearline 7,200rpm disk drives.

QLC ‘has horrid reliability’

Infinidat owes its existence to disk drives and invalidates the flash-will-kill-disk-drives thesis, CTO Brian Carmody said this week in a telephone briefing.

As enterprise data storage needs move into the exabyte era, NAND storage costs are simply too high. “Disk drives are going to save the enterprise storage industry,” he proclaimed.

His colleague Ken Steinhardt, field CTO, told us the company “could use QLC but the cost and reliability deltas are significant. QLC wouldn’t increase performance but would increase cost”.

Carmody added: “QLC is ten times more expensive than nearline SAS disk and has horrid reliability. Hyperscalers have zero interest in it.”

According to Stanley Zaffos, product marketing SVP, “small shops up to 100TB or so will be okay with all-flash arrays. Disk drives are more attractive at scale: 1,000TB and beyond.”

He noted: “Nearline drives are very competitive against SSDs on a TB/watt basis. At 0.8 watts/TB the advantage gets even stronger as you scale out capacity to 10PB.”
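Zaffos’s numbers are easy to sanity-check. A minimal sketch of the arithmetic, using his 0.8 watts/TB figure; the capacities are the article’s own examples, up to the 1,000PB Availability Zone mentioned above:

```python
# Back-of-envelope: total power draw of nearline disk media at scale,
# using the 0.8 watts/TB figure quoted above.
W_PER_TB = 0.8                      # nearline disk, quoted above
for capacity_pb in (1, 10, 1000):   # 1PB, 10PB, and a 1,000PB Availability Zone
    tb = capacity_pb * 1000
    print(f"{capacity_pb:>5} PB -> {tb * W_PER_TB / 1000:.1f} kW")
# 1 PB -> 0.8 kW, 10 PB -> 8.0 kW, 1000 PB -> 800.0 kW
```

At 1,000PB that is 800kW for the drive media alone, which is why small per-TB deltas in cost and power become decisive at this scale.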

And the future of disk drives? Carmody thinks multi-actuator disk drives, which split a drive’s read-write heads across two independently operating actuators and are championed by Seagate, are important: “We are working with major hyperscalers and drive manufacturers on hard disk drive roadmaps. Multi-actuation is huge. Seagate has a seat on our board and is an investor.”

Quantum Corp manages a little growth spurt

Quantum Corporation has recorded revenue growth and reduced losses for its second fiscal 2020 quarter as it prepares for a return to a major stock exchange listing.

The tape and video file workflow storage supplier grew revenue 18 per cent to $105.8m and reduced losses from $21.6m to $2.3m.

The company forecasts $106m-$112m for the December quarter, which is typically its strongest. For the full fiscal year Quantum expects $424m-$430m revenue. At its peak, in fiscal 2007, the company posted $1bn revenue.

Jamie Lerner, CEO, said in a prepared quote: “Our strategic transformation accelerated in the second quarter as we reported double-digit revenue growth, margin expansion, and excluding non-recurring items, continued reductions in operating expenses, all of which led to continued profitability.”

He emphasised the company’s video file workflow storage focus: “We are well-positioned as a recognised industry leader in the storage and management of video and video-like data, and this accelerating trend should support future profitable growth for Quantum.”

Quantum sees growth at last

Last quarter Quantum emerged from accounting hell with restated quarterly SEC reports after an 18-month investigation to rectify accounting mistakes made by previous management. During this 18-month period it changed its CEO three times, lost an NYSE listing and came under attack from activist investor VIEX.

The business appears to be returning to health but there is a long way to go. Cash and cash equivalents were $6m at quarter end, compared to $10.8m six months ago. Outstanding long-term debt was $153.6m.

Quantum has recruited Regan MacPherson as chief legal and compliance officer to help prepare it for a major public stock exchange re-listing.

Nutanix kicks the Buckets into object storage

Nutanix launched Objects, its renamed Buckets object storage service, earlier this year. Let’s take a look.

In essence, Nutanix Objects is software designed for backup, archiving, immutable WORM (Write Once, Read Many) storage, analytics and DevOps storage. Objects supports HYCU and Commvault as backup data sources, and more are on their way.

Nutanix users can set up object storage services on their existing clusters or set up object storage-focused clusters with storage-dense nodes. The object storage exists as part of Nutanix Enterprise Cloud Platform, along with its file and block storage and virtual machines and is enabled through a software update. 

This means that Nutanix users with a need for object storage no longer have to source an external object store.

Objects software components

A Nutanix system, either single node or clustered nodes, has a controller virtual machine (CVM) running the Acropolis Operating System (AOS) HCI software. The CVM creates a single storage pool, a Distributed Storage Fabric (DSF) aggregated from each node’s own direct attached storage (DAS). Object storage is carved out from this, along with file and block storage, and inherits DSF capabilities such as snapshots, clones, erasure-coding, deduplication, compression and high availability. 

Nutanix Buckets has an S3-compatible interface alongside VMs, files, and block on the enterprise cloud platform

As with any object store, the data is stored in a flat namespace that can scale out to petabyte levels and beyond. An object has three components: a unique key or identifier; the data itself; and metadata. The metadata is expandable.

Object data is stored in Buckets, a sub-repository in the object store, and can be encrypted to FIPS 140-2 level. An object is roughly similar to a file in a folder, but files have fixed metadata and exist in a file:folder hierarchy, not a flat address space. Users can apply policies such as WORM status and versioning to their Buckets.

Customers can use Nutanix Buckets object storage as an S3 backup target, writes Laura Jordana, a technical marketing engineer at Nutanix. This is “deployed and managed from within Nutanix Prism and provides an S3 endpoint over HTTP or HTTPS that any S3-compatible application can connect to.”
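Since the endpoint is S3-compatible, any stock S3 client should be able to write to it. Here is a minimal sketch with boto3; the endpoint URL, credentials, bucket and object names are hypothetical, and the same call pattern applies to any S3-compatible store:

```python
# Minimal sketch: writing a backup object to an S3-compatible endpoint such
# as the one Nutanix Objects exposes. Endpoint, credentials and names are
# hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.local",  # hypothetical Objects endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(
    Bucket="backups",                       # a bucket with WORM/versioning policies applied
    Key="db/dump-2019-11-01.tar.gz",
    Body=open("dump.tar.gz", "rb"),
    Metadata={"project-id": "crm", "kind": "backup"},  # expandable user metadata
)
```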

An Object Volume Manager, another VM, provides front-end S3 services. A data management Object Controller interfaces to AOS, and there are metadata services plus lifecycle management and maintenance services. Metadata is held in a key:value store.

Objects can tag versions (copies) with user metadata signifying a project ID, compliance level, content type and more. Large sets of data such as a backup file can be split up into chunks for multi-part ingest into the store.

Nutanix Buckets S3 adapter software is based on the open source Minio software stack, as this Nutanix document explains.

Nutanix will add Objects support for the VMware ESXi hypervisor in a future release. Performance testing with HYCU is described here.

NVMe v1.4 resolves data centre SSD noisy neighbour problems

At the risk of ruining the ending… data centre SSDs have their own domain of stubborn problems and they are not remotely like client SSDs. Remedies do not have to include re-inventing the SSD, as over-engineering and NVMe v1.4 fix the problems.

How noisy neighbour and long tail latency problems happen 

Latency explained: think of network wire speed as a fast motorway with sparse traffic moving at the speed limit; the speed limit is analogous to maximum network wire speed. Latency is bad because it slows effective network wire speed.

Think of latencies as stoplights on that motorway. Stoplights cause traffic to pile up, increase travel times and diminish the total traffic handled. Stoplights are analogous to data centre system latencies.

Noisy neighbours cause latency problems. 

Latency problems are uniquely important in data centre and similar multi-user situations. How and why?

SSD noisy neighbour problems arise when several concurrent flash writes or multiple container workloads compete for the same SSD resources. This causes increased latency.

Noisy neighbour problems are increasing due to three industry trends: larger capacity SSDs, SSDs with reduced write performance, and SSDs with lower endurance. I’ll explain each. 

  • Larger capacity SSDs store more data and therefore serve more simultaneous I/O. More I/O to a single device increases the probability of noisy neighbour problems (see the sketch after this list).
  • Reduced write performance. With each generation of NAND (SLC to MLC to TLC to QLC), writes are slower. This increases the probability of simultaneous writes and noisy neighbours.
  • Lower endurance. With each generation of NAND, write endurance is diminished, increasing garbage collection and error correction, and so increasing the probability of conflicting writes and noisy neighbours.
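The first trend is essentially the birthday problem: the more simultaneous writes a single device serves, the more likely two of them land on the same NAND die and contend. A rough sketch, with die counts that are assumed for illustration rather than taken from any datasheet:

```python
# Illustrative only: birthday-problem odds that at least two of n
# simultaneous writes target the same die on an SSD with d independent dies.
def collision_probability(n_writes: int, dies: int) -> float:
    p_no_collision = 1.0
    for i in range(n_writes):
        p_no_collision *= (dies - i) / dies
    return 1.0 - p_no_collision

for dies in (32, 64):        # assumed die counts, illustrative only
    for n in (4, 8, 16):     # concurrent writes consolidated on one device
        print(f"dies={dies} writes={n}: "
              f"P(conflict)={collision_probability(n, dies):.0%}")
```

The point of the sketch: conflict probability climbs steeply as more workloads are consolidated onto one large device, even when the die count also grows.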

Why this matters: latency-increasing noisy neighbours become costly in high-value, time-sensitive data centre workloads such as credit card fraud analytics, which carry a time-based response SLA.

Noisy neighbours also have a high impact in many clustered database applications, where a query completes only after the slowest SSD responds.

NVMe v1.4 as the noisy neighbour and long tail latency remedy 

NVMe v1.4 was released in July 2019 and focuses on cloud/hyperscale features. 

NVM Sets serve to isolate noisy neighbours by separating and allocating NAND media so that workloads (or containers) using one NVM Set do not impact workloads on other sets.

In the diagram below, NVM Set A is separate and isolated from NVM Set B. NVM Set A consists of physical dies ‘NS A1’, ‘NS A2’ and ‘NS A3’.

NVMe Sets isolate noisy neighbours

NVMe IO Determinism eliminates read latency outliers caused by SSD housekeeping.

A chunk of time (shown below in green) is allocated to deliver predictable read latency. This is the deterministic mode. Another chunk of time (shown in red) is allocated for housekeeping and read latency is then unpredictable. This is non-deterministic mode.

NVMe deterministic IO gets interesting when applied to multiple SSDs, with IO determinism coordinated across a group of drives. SSDs in deterministic mode serve reads while SSDs in non-deterministic mode are temporarily omitted from service. This remedies the unpredictable read latency problem.
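A conceptual sketch of that coordination, assuming a four-drive group where one drive at a time takes its non-deterministic housekeeping window while the others serve reads; the drive count and rotation are illustrative, not NVMe spec values:

```python
# Conceptual sketch of coordinated IO determinism across a group of SSDs:
# windows rotate so that at any moment most drives are in the deterministic
# (predictable-read) window and one is doing housekeeping.
from itertools import cycle

ssds = ["ssd0", "ssd1", "ssd2", "ssd3"]
housekeeping = cycle(ssds)  # one drive at a time takes its non-deterministic window

def readable_drives(busy: str) -> list[str]:
    """Drives safe to read from: everything except the housekeeping drive."""
    return [s for s in ssds if s != busy]

for window in range(4):
    busy = next(housekeeping)
    print(f"window {window}: housekeeping={busy}, "
          f"serve reads from {readable_drives(busy)}")
```

With data laid out redundantly across the group, a read never has to wait on the drive that is busy with garbage collection.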

Don’t believe me: believe Facebook, which set out NVM Sets as the solution for inconsistent latency and consistent quality of service in its Flash Memory Summit 2018 presentation.


The NVM Express 1.4 specification can be found here.

Short term, there are worse things than over-engineering. Simply spreading the workload over more servers with more SSDs is reasonable, as is arranging additional temporary servers and SSDs for peak times.

Longer term, the more elegant and more affordable remedy for noisy neighbours and latency determinism is NVMe v1.4.

Note: Consultant and patent holder Hubbert Smith (Linkedin) has held senior product and marketing roles with Toshiba Memory America, Samsung Semiconductor, NetApp, Western Digital and Intel. He is a published author and a past board member and workgroup chair of the Storage Networking Industry Association and has had a significant role in changing the storage industry with data centre SSDs and Enterprise SATA disk drives.

Superman in glass: Microsoft shines light on Project Silica’s quartz slab archive

Microsoft has revealed more about its Project Silica glass-based archive at Ignite in Orlando, saying it can store the 1978 Superman movie in a three-inch square slab of quartz.

Microsoft’s interest in Project Silica is to develop a long-lasting cold data storage archive for its Azure public cloud, one that should last longer than ten years.

As The Register reported in April last year, Microsoft is working with the UK’s University of Southampton on the project. It involves recording voxels, volumetric elements of data, in 3D layers inside quartz glass media. These are created using femtosecond laser pulses to write polarisation-based patterns into the glass.

Project Silica’s square quartz glass slab.

The medium is a slab of quartz glass measuring 75 x 75 x 2 millimetres (2.95 x 2.95 x 0.08 inches). It is mounted in a holding frame which moves it left-right and forwards-backwards underneath the laser, as a video shows.

Project Silica video screen grab – https://www.microsoft.com/en-us/research/video/project-silica-storing-data-in-glass/

The laser fires pulses of light lasting one quadrillionth, or one millionth of one billionth, of a second at the slab, focusing on a specific depth. Theoretically there can be up to 100 layers, and the slab stores 75.6GB of data plus error redundancy codes (roughly 756MB per layer at that maximum). We are not told how many voxels there are in each layer.

We could envisage a 3D lattice structure with voxels at the lattice line intersection as in this diagram: 

Blocks & Files illustrative concept of a 4-layer 3D lattice construct with voxels (yellow circles) at lattice line intersections. The voxels are oriented in tracks along the x and y lattice lines in each plane (layer).

That means the mechanical movement of the holding frame has to be precise for voxel positioning accuracy. We think streams of voxels are written along a track to minimise start-stop holding frame transitions and the time they take. With the laser pulses being so fast, the bulk of the data writing time will be taken up with moving the glass slab underneath the laser.

Voxel variations

The voxels vary by x, y, z position, orientation and size. Orientation is used to encode a colour and the size is varied by changing the power of the laser pulse. We think a voxel encodes a pixel in a movie frame; it’s a logical assumption.

The glass slab, once written, is archived. For reads it is placed in a read head frame, with a computer-controlled microscope/camera below the slab and light shone through the glass from above, and moved in x and y directions to bring voxels underneath the light source.

Warner Brothers

Warner Brothers worked with Microsoft on this as it needs to archive films. The attraction of the Project Silica scheme is that the glass slabs do not need storing in temperature- and humidity-controlled conditions, do not need their data periodically refreshed, are pretty much immune to shock, heat and water damage, and can store the recorded data for 1,000 years.

The read head system focuses on the data layer of interest, with a camera capturing the light pattern: the set of polarisation images from that layer. These images are processed to obtain the orientation (colour) and strength (size) of each voxel. The read head then focuses on the next layer. Software, using machine learning models, rebuilds the original data from the read voxel pattern values.
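Microsoft has not published its decode pipeline, so the following is purely conceptual: it sketches how a measured (orientation, size) pair could map to one of a small set of symbol values. The symbol alphabet, bin counts and threshold are invented for illustration.

```python
# Purely conceptual voxel decode, based on the two properties described
# above: orientation (angle) and size (pulse strength). All bins and
# thresholds are invented; this is not Microsoft's method.
def decode_voxel(orientation_deg: float, size: float,
                 angle_bins: int = 4, size_bins: int = 2) -> int:
    """Map a measured (orientation, size) pair to one of
    angle_bins * size_bins symbols (here 8 values = 3 bits per voxel)."""
    angle_symbol = int((orientation_deg % 180) // (180 / angle_bins))
    size_symbol = 0 if size < 0.5 else 1   # invented threshold
    return angle_symbol * size_bins + size_symbol

print(decode_voxel(100.0, 0.7))  # -> one 3-bit symbol value
```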

But there is no way of verifying that data has been written until the glass slab is read. A movie archiving workflow using Project Silica would therefore need a verification read process.

There was a previous 1,000-year archive storage technology: the Millenniata M-DISC, from 2009. However, Millenniata went bankrupt in 2016 and the company which bought its assets, Yours.co, no longer has a functioning website. The discs might have lasted 1,000 years but the business around the technology collapsed inside ten.

Check out a Microsoft blog to read more about archiving the Superman movie.

WD samples Ultrastar DC SS540, cites robust demand for SAS SSDs

The NVMe interface is taking over data centre SSDs – but it is not wiping out the SAS protocol just yet. Western Digital has refreshed its slower SAS data centre SSD line, upping density from 64-layer to 96-layer NAND flash.

According to WD, SAS capacity demand is expected to grow 24 per cent annually through 2022, which is why it is worth making the Ultrastar DC SS540.

The new drive follows on from last year’s DC SS530. Both use TLC flash and have 12Gbit/s SAS dual-port capability but the new drive has fewer endurance and capacity options.

They are: 800GB, 1.6, 3.2, and 6.4TB at the 3DWPD level; and 960GB, 1.92, 3.84, 7.68, and 15.36TB at 1DWPD.

The SS540’s performance is up to 470,000/240,000 random read/write IOPS, 2.23GB/sec sequential reading and 2.21GB/sec sequential writing.

That makes it slightly slower than the DC SS530’s 2.31GB/sec at sequential reads but ever so slightly faster at sequential writes (the SS530 managed 2.2GB/sec). The SS540 is also faster at random reads, as the SS530 delivered 440,000 IOPS, and it matches the SS530’s 240,000 random write IOPS.

A WD NVMe data centre drive such as the DC SN340 pumps out data faster – 3GB/sec or so – and its read latency is 128µs, which is slightly better than the SS540’s 150µs.

However, NVMe drives will soon be able to use the PCIe gen 4 interface, which is twice as fast as the current PCIe gen 3, with its 1GB/sec lane bandwidth. This will enable performance to streak ahead of SAS drives. 
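The rough interface arithmetic, ignoring protocol overheads, shows why. The per-lane figure follows the article (1GB/sec per PCIe gen 3 lane, doubled for gen 4); the x4 lane count is the common NVMe drive configuration, assumed here:

```python
# Rough interface ceilings, ignoring protocol overhead. Illustrative only.
PCIE3_LANE = 1.0  # GB/sec per PCIe gen 3 lane, per the article
configs = {
    "SAS 12Gbit/s dual-port": 2 * 12 / 10,     # 8b/10b encoding: ~1.2 GB/sec per port
    "NVMe PCIe gen 3 x4":     4 * PCIE3_LANE,  # ~4 GB/sec
    "NVMe PCIe gen 4 x4":     4 * PCIE3_LANE * 2,  # ~8 GB/sec
}
for name, gbps in configs.items():
    print(f"{name}: ~{gbps:.1f} GB/sec")
```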

The SS540 has a five-year warranty and 2.5 million hour MTBF rating. The drive supports self-encryption based on TCG enterprise standards and FIPS validation.

The Ultrastar DC SS540 is sampling and in qualification with select customers. Mass production is scheduled for Q1 2020.

Fujitsu upgrades DX storage arrays with Xeon SP, better management software

Fujitsu went into launch-frenzy mode today, pumping out two new storage arrays, upgrading older models with faster processors, rolling out new management software and announcing three storage benchmarks and a guarantee programme.

The company makes ETERNUS arrays in DX S4 hybrid and AF S2 all-flash forms. The freshly announced arrays are new generation models: S5 for the hybrids and S3 for the all-flash systems. Altogether, the new generation provides more IOPS, better storage efficiency and lower latency than its predecessors.

Flash

ETERNUS AF150 S3

The AF150 S3 uses SAS interface SSDs and is aimed at the small and medium business market. It uses the latest Xeon processors, in common with its big brothers, the AF250 S3 and AF650 S3. The AF250 S3 gets larger system memory. The AF250 S3 and AF650 S3 also get hardware-accelerated compression and deduplication, delivering higher IOPS, better efficiency and reduced latency.

Fujitsu AF150 S3 specs

Hybridisation

ETERNUS DX900 S5.

The DX900 S5 is a top-of-the-range midrange hybrid array with capacity up to 70PB and million-plus IOPS capability.

It joins the DX60, DX100, DX200, DX500 and DX600 arrays, which are all upgraded with Xeon SP processors to match S5 specs. The midrange hybrid arrays also get hardware-accelerated compression and deduplication (DX200 and higher; the DX900 S5 offers compression only), a unified hypervisor-less lean stack and NVMe cache.

Fujitsu DX900 S5 specs

The high-end DX8900 array retains its S4 generation label but this should update to S5 specs in due course.

Single glass of pane

New ETERNUS SF storage management software covers the all-flash and hybrid arrays. It provides monitoring features through a GUI, including shared functions such as replication, migration and the operation of storage clusters.

Fujitsu is also launching Infrastructure Manager (ISM) to manage software-defined data centres, replacing ServerView. Fujitsu will support ServerView until March 2021, followed by five years of extended service and support.

ISM provides single-pane-of-glass monitoring of components across data centres, including servers, storage, power and cooling, backups and UPS. It provides automated firmware updates for Fujitsu PRIMERGY servers, ETERNUS and NetApp storage, and Cisco and Extreme Network switches.

There are two support options. ISM Essentials includes monitoring and firmware update of all supported devices, including servers, storage and network switches. ISM Advanced supports multiple hardware configurations, physical and virtual network connection indicators and firmware baseline updates. It is compatible with third party devices and integrates with VMware, Microsoft System Center and Ansible environments.

Benchmarks

Fujitsu has announced three SPC-1 v3 storage benchmarks. These test a storage array with a single business critical-type workload and support deduplication and compression: 

Coincidentally, Korean supplier Gluesys also announced an SPC-1 benchmark result today and we have included it in the table above.

The three Fujitsu systems generally have better price performance numbers than the company’s older arrays which provide similar SPC-1 IOPS performance. However the Gluesys array bagged the best-ever price/performance number of any array in this benchmark. Fujitsu’s AF150 S3 is in second place.

You can check out the results here.

YMMV

The new Fujitsu Storage ETERNUS AF/DX Global Guarantee Program commits to zero downtime, data reduction and 100 per cent SSD availability, some degree of customer satisfaction and support for array expansion and growth.

The new ETERNUS storage systems are available to order this month via Fujitsu and its channel partners. The S3 all-flash arrays and S5 hybrids will be generally available in early 2020. Pricing varies according to configuration.

ISM Advanced pricing is according to the number of servers and nodes in a system and varies by country. For ISM Advanced, support licenses are mandatory.


Microsoft Azure Blob storage integrates with Scality

Scality and Microsoft have teamed up to enable Scality’s RING object storage to accept data sent from Azure Stack Hub and Azure Stack Edge using the Azure Blob Service REST API. Zenko, Scality’s hybrid cloud data orchestrator tool, also supports this API.  

Azure Stack Edge is the renamed Azure Data Box Edge, and Azure Stack Hub is the renamed Azure Stack. Scality’s BlobServer is its implementation of the Azure Blob front-end API and will be available as a public repository on Github, under an Apache 2.0 license, once the Azure Blob API-supporting RING and Zenko go GA.

Scality BlobServer schematic

Zenko can also use Azure’s public cloud Blob storage as back-end storage, completing the circle. Zenko is based on Scality’s implementation of the Amazon S3 API, aka S3 Server.
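In practice this means a standard Azure Blob SDK client can simply be pointed at a Blob-compatible endpoint. A minimal sketch with the azure-storage-blob Python package; the endpoint URL, credential, container and blob names are hypothetical placeholders, and this is ordinary SDK usage rather than Scality code:

```python
# Minimal sketch: an Azure Blob SDK client pointed at a Blob-API-compatible
# endpoint such as Scality's BlobServer. Endpoint and credential are
# hypothetical placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://ring.example.local",  # hypothetical RING BlobServer endpoint
    credential="SHARED_KEY",                   # shared key or SAS token
)

container = service.get_container_client("edge-data")
with open("sensor-batch.json", "rb") as data:
    container.upload_blob(name="2019/11/sensor-batch.json", data=data)
```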

Tad Brockway, corporate VP for Azure Storage, Media and Edge at Microsoft, issued a supportive canned quote: “Customer interest in hybrid cloud and edge deployments is increasing, and we see a market need for Azure edge solutions with scale-out storage support. We are glad to see Scality enable their products to support Azure Blob storage for a range of hybrid cloud and edge storage use cases.”

Close to the Edge

Azure Stack and Azure Data Box Edge are ways to deploy Microsoft’s Azure public cloud services on premises. Both were originally conceived as means to collect data on-premises and send it to Microsoft’s Azure public cloud.

However, as edge IT ideas have taken hold – with nearly every IT installation outside a central data centre now called an edge deployment – so have ideas about the need for local storage and processing.

Networking edge data to the public cloud for processing takes too long and costs too much because of the huge volumes involved. The better option is to process locally-stored data at the edge and send a small, post-processed subset of the data and results to the public cloud.

Users face having to support two public cloud object storage APIs: Amazon’s S3, which is effectively a de facto object storage standard, and Azure’s Blob API. Soon, no doubt, there will be a third: Google Cloud object storage.

We need a super-cloud object storage API which combines all three, but we doubt this is on the horizon as the public cloud suppliers are fierce competitors.


Dell EMC debuts NVMe VxRail HCI appliances

Dell EMC has rolled out two NVMe appliances for the VxRail hyperconverged range. It has also added better management and is automating network fabric setup in multi-rack VxRail deployments.

The P580N is an all-NVMe four-socket system with second generation Xeon SP processors. It provides twice the CPU and memory per system over the prior generation P570.

The E560N has all-NVMe SSDs and complements the E560 hybrid and E560F all-SAS SSD systems. Dell EMC doesn’t say how the E560N’s performance compares to its E-class systems.

Automating fabric setup

Dell EMC today introduced SmartFabric Services (SFS) to help automate network fabric setup across VxRail racks. IT admins enter one command per switch and SFS will automate more than 99 per cent of the configuration steps for multi-rack leaf and spine fabrics.

SFS fully automates fabric configuration for six switches in a two-rack deployment, and up to 14 switches in a six-rack deployment across a single site.

VxRail ACE

Dell EMC today also announced the VxRail Analytical Consulting Engine (ACE), a management system that provides users with global views of their clusters, health scores, drill down analytics, anomaly alerts, predictive capacity analysis and upgrade orchestration.

This is an HPE InfoSight-like system, using cloud monitoring, a data lake filled with historical VxRail usage data, and machine learning to optimise configurations and streamline infrastructure management. Watch an ACE video here.

VxRail ACE is available this month. The VxRail P580N, E560N HCI hardware and Dell EMC SmartFabric Services go on sale in December.


16TB drive is ‘fastest nearline product ramp in Seagate’s history’

Seagate met Wall Street’s anaemic expectations in the first quarter. But the company expects a revenue uptick for the rest of the fiscal year, with 16TB enterprise disk drive shipments leading the way.

Revenues were $2.58bn in the first fiscal 2020 quarter ended October 4, down 13.8 per cent on a year ago but a little above the company’s $2.55bn guidance. Net income was $200m, compared with $450m last year.

During the quarter Seagate repurchased 9.2 million shares for $450m. Cash and cash equivalents were $1.8bn at quarter end.

The mid-point revenue outlook for the December quarter is $2.72bn. Seagate expects revenues to continue rising as fiscal 2020 progresses.

CEO Dave Mosley said in a statement: “Seagate had a solid start to the fiscal year… Exabyte levels were near record levels in the first quarter driven by improving demand conditions for mass capacity storage.”

Seagate is ramping up 16TB drive production to meet this demand.

Exabytes shipped are now rising in the enterprise nearline and edge/client consumer electronics markets

Earnings call

In the earnings call Mosley said that the 16TB drive was the “fastest nearline product ramp in Seagate’s history”.

CFO Gianluca Romano said the company expects “16-terabyte to be our highest enterprise revenue product in fiscal Q2 and our largest company revenue contributor in fiscal Q3… We have a huge expectation for volume increase demand in the next two or three quarters.”

Seagate forecasts the mass capacity storage revenue total addressable market to more than double from current levels by 2025.

The company aims to ship 18-terabyte drives in the first half of calendar year 2020, Mosley said. The company is “on-track to ship 20-terabyte HAMR drives in late calendar year 2020. We expect to see demand for dual actuator technology to increase as customers transition to drive capacities above 20 terabytes.”

Net:net

Seagate continues to focus on generating cash from the enterprise capacity disk drive business. It has a minor presence in the SSD market and gives little impression of wanting to invest significantly to grow that business.

Western Digital CEO to retire

Stephen Milligan, Western Digital CEO, will retire when the disk drive giant finds a successor. He will then take an advisory role until September 2020 to help an orderly succession.

WD announced the news along with its latest quarterly results, for the period ended October 4. Revenues in Q1 were $4bn and the net loss was $276m. The company forecasts next-quarter revenue of $4.1bn-$4.3bn, more or less equal to last year’s $4.23bn.

According to Milligan, fiscal year 2020 is “off to a good start. The continued success of our capacity enterprise drives for the data center was the primary driver of the upside we experienced in the fiscal first quarter,” he said in a statement. “The overall demand environment remains solid. We continue to believe the flash industry has passed a cyclical trough, with improving trends across our flash product portfolio.”

Steve Milligan

WD’s praise for its soon-to-be-departed CEO was fulsome. He “has led the Company’s ongoing evolution from a provider of storage components to a global diversified enabler of data infrastructure. He also led the acquisition of SanDisk in 2016, which further positioned Western Digital as a long-term growth brand.

“Under Milligan’s leadership, Western Digital has built a strong portfolio of HDD and flash products, including industry leading capacity enterprise drives and 3D-flash technology, that uniquely positions the Company to provide new architectures and capabilities to manage the volume, velocity and variety of data.”

This leaves out the fact that he set up WD’s data centre systems business which it is now exiting.

Under his leadership WD bought Tegile, with its flash and disk storage array business, in August 2017. He combined this with the Amplidata object storage business that WD unit HGST had acquired in March 2015. Milligan was running HGST at that time.

Now that data centre storage business is being dismantled, with the Tegile business sold to DDN and the archival object storage business up for sale. Getting into data centre storage systems was a mistake, and it happened on Milligan’s watch.

Milligan also pushed the WD-Toshiba NAND foundry joint venture to near breaking point when Toshiba was trying to sell its interest to raise capital, with lawsuits and foundry lockouts threatened by Toshiba.

Good news in Q1 shipments?

WD shipped 29.3 million disk drives in the quarter, fewer than the 34.1 million reported a year ago, but this is the second successive quarter of sequential rises. Exabyte shipments were up 23 per cent as its 14TB data centre drives were bought in large numbers; WD doesn’t reveal year-on-year exabyte percentage changes nor the actual number of exabytes shipped.

A chart shows the data centre disk drive upwards trend. 

Client Devices comprises notebook and desktop HDD, consumer electronics HDD, client SSD, embedded products, wafer sales, and licensing and royalties. Client Solutions comprises branded HDD, branded flash, removables, and licensing and royalties. Data Center Devices and Solutions comprises enterprise HDD, enterprise SSD, data centre software, data centre solutions, and licensing and royalties.

The data centre line shows a distinct upward trend. This was driven by disk drive sales rather than SSDs, as a second chart shows.

Data centre disk shipment units were up and so too were client (PC and notebook) disk drive shipments.

The chart shows the decline in total disk drive unit shipments (dotted purple line) has bottomed out.