
MackStor launches 200TB DoItRite spinning NAND hybrid drive

Storage veterans have devised the DoItRite 2000, a three-tiered, 200TB hybrid SSD-HDD device containing both spinning disk platters and a non-rotating platter of NAND chips, combining NVMe SSD speed with extraordinary capacity.

The engineers at MackStor, veterans from Maxtor and other now vanished disk drive manufacturers, have taken a 9-platter, helium-filled, disk drive base system and replaced the top platter with a non-rotating disk containing 30TB of NAND. This is divided into 25TB of QLC (4 bits/cell) flash and 5TB of much faster SLC (1 bit/cell) flash to provide a fast landing zone for write data as well as low latency read access.

Dr Theodore Hirate, CEO and co-founder of MackStor, said in a statement: “We have solved the dilemma of choosing between disk capacity and SSD speed with the DoItRite 2000, by combining both media in a single drive that’s perfectly aligned with the post-pandemic data storage challenges facing us all.” 

The eight disk platters hold 20TB of shingled magnetic recorded data. The drive controller’s software provides automatic and policy-driven tiering of data between the three tiers with machine learning algorithms identifying data for up- and down-tiering.

The controller uses three Arm chips: one for the disk platters, a second for the SSD, and a third to provide management and data services, including deduplication and compression. The effective capacity after data reduction is increased fourfold – to a staggering 200TB. Only deduplicated and compressed data is written to disk.
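As a back-of-envelope check (all figures come from the announcement), the raw capacity of the three tiers and the claimed 4:1 reduction ratio do produce the headline number:

```python
# Raw tier capacities of the DoItRite 2000, in TB, as stated in the article
slc_tb = 5     # fast SLC landing zone
qlc_tb = 25    # QLC bulk flash
disk_tb = 20   # eight shingled magnetic recording platters

raw_tb = slc_tb + qlc_tb + disk_tb   # total raw capacity
reduction_ratio = 4                  # claimed dedupe + compression factor
effective_tb = raw_tb * reduction_ratio

print(raw_tb, effective_tb)  # 50 200
```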

The drive transfers data across an NVMe PCIe gen 4 x8 lane interface at 5GB/sec read and write, and can deliver 2.6 million random read and 2.8 million random write IOPS. Latency is less than 500µs for 99.99 per cent of data requests. Power consumption is modest and heat generation is no more than that of a standard, legacy hard drive. The drive is warranted for five years or 5PB of total data written, whichever comes first.
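A quick, illustrative calculation shows what the 5PB endurance figure means at the quoted line rate (real workloads write far below a sustained 5GB/sec, so this is a floor, not a prediction):

```python
# How long 5PB of write endurance lasts at a continuous 5GB/sec write rate
write_rate_gb_s = 5
endurance_pb = 5

seconds = endurance_pb * 1_000_000 / write_rate_gb_s  # 1PB = 1,000,000GB
days = seconds / 86_400                               # seconds per day

print(round(days, 1))  # 11.6 days of non-stop full-rate writing
```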

Jet Black Desiato, Chief Marketing Officer at MackStor, said: “This DoItRite 2000 drive will blow the socks off the legacy disk and flash storage drive technology laggards out there. Our radically lower cost per terabyte will lead to wholesale domination of all enterprise and hyperscaler storage markets.”

The DoItRite 2000 drive is sampling from today, April 1, 2021.

StorMagic branches out into digital asset management

StorMagic has bought the assets of SoleraTec, a small California video storage company – its second acquisition in 12 months.

Update: Hans O’Sullivan comments added. April 1, 2021.

The virtual SAN storage startup has rebranded SoleraTec’s digital asset data lifecycle management software as ARQvault. There are two offerings: ARQvault Video Surveillance and ARQvault Digital Evidence Management.

Brian Grainger, StorMagic CRO said in a statement: “The SoleraTec asset purchase marks our second major expansion in just twelve months. Now armed with virtual SAN, encryption key management and video solutions, StorMagic can truly deliver a forever data platform to address the needs of our edge customers.”

ARQvault provides a multi-tier, multi-location object storage facility with searchable contents. The software handles edge-generated data such as video surveillance and police-generated bodycam, car and interview room footage and stores it over the long term. 

ARQvault scales to thousands of sites and integrates with analytics packages. StorMagic says it is half the cost of all-disk storage, while all of its data remains available and searchable, as with disk storage.

‘Forever’ refers to ARQvault’s policy-driven tiering for its object storage across direct-attached disk, SAN and NAS all-flash, hybrid and disk storage, LTO tape, Sony Optical disk and public cloud object storage. The archived data is distributed and there is no single point of failure.

ARQvault diagram

Video and other asset data is stored in one or more Vaults which can be in different sites. Each Vault site has a server and storage. The servers can be X86 or Arm-based. ARQvault stores metadata, which is used in searches. Vaults respond to search requests in parallel. 

Each server has its own database which correlates to the video it is storing. As a guideline, every 16TB of video storage requires a minimum of 10GB of disk space for the database. High-res videos have low-res proxies generated to speed search.
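That sizing guideline can be sketched as a simple calculation (the helper name here is ours, for illustration, not SoleraTec's):

```python
# Estimate minimum database disk space from the 10GB-per-16TB guideline
def min_db_gb(video_tb: float, gb_per_16tb: float = 10.0) -> float:
    """Return the minimum database capacity (GB) for a given video store (TB)."""
    return video_tb / 16 * gb_per_16tb

print(min_db_gb(16))    # 10.0 GB for a 16TB vault
print(min_db_gb(160))   # 100.0 GB for a 160TB vault
```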

Second acquisition

CEO Hans O’Sullivan left StorMagic in March 2020, 14 years after starting the Bristol, UK-headquartered business. Shortly after his departure, StorMagic bought KeyNexus, an encryption key management startup. The company has yet to announce a replacement for O’Sullivan.

Hans O’Sullivan

A StorMagic spokesperson told us: “Hans O’Sullivan is no longer CEO, and stepped aside last year to let new leadership take over as the company continues to significantly grow through … last year’s acquisition of KeyNexus. I can confirm StorMagic is in the process of onboarding a new CEO and will be able to announce the appointment soon.”

O’Sullivan told B&F that his resignation was quite ordinary: “There was nothing sinister or unusual about my leaving StorMagic, it was in full agreement with the Board and myself.”

He said: “The acquisition of KeyNexus and its strong team was actually instigated and completed by me, the closure of which was one of my last actions, I felt it would add significant product and technology that fitted with the StorMagic ethos of software defined, automated and targeting the edge. It also helped round out the management team by adding a new CTO and engineering management.”

And: “I know StorMagic will continue to grow and do well and have full confidence in the Board and management team and in full agreement with their strategic direction.”

We understand that the search for a new CEO was not helped by the pandemic.

Asked who made the SoleraTec acquisition decision, Grainger told us: “We have a board of directors and an executive leadership team of which I’m part of both. Of course with these types of larger decisions it’s the board, the ELT as well as our shareholders that approve these types of decisions. … I was the lead of the asset purchase … but of course with the support of the board and shareholders.”

Penguin teams up with Red Hat Ceph and Seagate for big fast object storage

Penguin Computing has launched a hefty, fast Ceph-based object storage system for analytics and machine learning workloads. And it has a hefty name to match – ‘Penguin Computing DeepData solution with Red Hat Ceph Storage’.

DeepData comprises Penguin servers, Ceph from Red Hat and Seagate’s Exos E 5U84 disk chassis, to build a Ceph cluster. This can hold petabytes of object data and streams it out in parallel from the cluster nodes.

Seagate said it conducted extensive testing with Penguin Computing and Red Hat to optimise DeepData’s storage performance.

Seagate 5U84 chassis

The Exos E 5U84 is classed as a JBOD (Just a Bunch of Disks) and contains 84 drives in a 5U rack chassis. It has a 12Gbit/s SAS interface with a maximum of 14.4GB/sec deliverable from a single I/O controller.

A blog by Red Hat Principal Solutions Architect Kyle Bader states: “We were able to achieve a staggering 79.6 GiB/s aggregate throughput from the 10-node Ceph cluster utilised for our testing.” That’s 85.5GB/sec from a disk-based data set composed of 350 million objects.
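The GiB-to-GB conversion behind those two figures is simple enough to verify:

```python
# Convert Red Hat's binary GiB/s figure to the decimal GB/s the article quotes
GIB = 2**30   # bytes in a gibibyte
GB = 10**9    # bytes in a gigabyte

gib_per_s = 79.6
gb_per_s = gib_per_s * GIB / GB

print(round(gb_per_s, 1))  # 85.5
```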

DeepData throughput chart.

Each of the cluster configuration’s storage nodes was configured with an E 5U84 equipped with 84 x 16TB disk drives – roughly 12PiB (13.4PB) in total across the cluster. The servers ran Ceph software, and used TLC SSDs to store object metadata – block allocations, checksums, and bucket indexes – and so provide faster object data access. The server hardware is not specified in the blog.

Bader writes: “We combined these [16TB] drives with Ceph’s space-efficient erasure coding to maximise cost efficiency. Extracting every bit of usable capacity is especially important at scale. … We fine-tuned the radosgw, rados erasure coding, and bluestore to work together towards our goals.”

Fungible Inc. aims to re-invent the programmable data centre

Fungible Inc. today launched Fungible Data Center, a turnkey composable system featuring its own software, networking technology, and DPU chips.

Pradeep Sindhu, CEO and co-founder of Fungible said in a statement: “At Fungible, we believe that if we build a solution that addresses the most challenging requirements in data centres – specifically, hyperscale data centres running the most data-intensive applications, then data centres of all scales, on-premise or cloud, core to edge will reap the benefits as well.”

He added: “Today, we deliver the ‘composable’ piece with the first incarnation of Fungible Data Centers, fully managed by the innovative Fungible Data Center Composer software.”

Fungible Data Center scales from a few shelves in a rack, containing servers, storage and network gear, through a single rack to multiple racks. Fungible relies on its own specially-designed DPU chips, TrueFabric networking scheme, and an out-of-band control plane to dynamically compose optimised, working server configurations from pools of CPU+DRAM and GPU resources and, in the near future, storage, networking and virtualisation resources. The component elements are returned to the resource pools when the application ends. Fungible says customers can get higher resource utilisation this way than with commodity X86 servers, storage and networking.
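A toy sketch of that compose-and-release idea, with wholly hypothetical names rather than Fungible's actual API, might look like this:

```python
# Toy illustration of composability: draw elements from shared resource pools
# to build a server, then return them when the workload ends.
from dataclasses import dataclass

@dataclass
class ResourcePools:
    cpus: int = 64
    gpus: int = 8

    def compose(self, cpus: int, gpus: int) -> dict:
        """Allocate a server configuration from the pools."""
        assert cpus <= self.cpus and gpus <= self.gpus, "pool exhausted"
        self.cpus -= cpus
        self.gpus -= gpus
        return {"cpus": cpus, "gpus": gpus}

    def release(self, server: dict) -> None:
        """Return a composed server's elements to the pools."""
        self.cpus += server["cpus"]
        self.gpus += server["gpus"]

pools = ResourcePools()
srv = pools.compose(cpus=16, gpus=2)  # composed server for one workload
pools.release(srv)                    # elements go back when the app ends
print(pools.cpus, pools.gpus)         # 64 8
```

The point of the pattern is that no resource stays bound to an idle server, which is where the claimed utilisation gain comes from.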

The company dubs the server component separation “hyper-disaggregation” and claims its scheme provides “performance, scale and cost efficiencies not even achievable by hyperscalers”. 

Fungible Rack.

The Composer software incorporates technology gained from Fungible’s September 2020 acquisition of Cloudistics. Composer runs on host servers and provisions independent virtual data centres in a peer-to-peer, geo-distributed private cloud, from the component resource pools.

Fungible delivers:

  • Standard compute and GPU servers are equipped with the Fungible Data Services Platform – a standard full-height, half-length PCIe card powered by a Fungible S1 DPU. The Fungible Data Services Platform card comes in three configurations/performance points: 200G, 100G and 50G
  • Fungible Storage Cluster comprising a cluster of Fungible FS1600 scale-out disaggregated storage nodes, each powered by two Fungible F1 DPUs
  • Standard TOR switches and routers for data, BMC and management
  • Fungible Data Centre Composer – a centralised software suite that enables bare metal composition, provisioning, management and orchestration of infrastructure at all scales.

The Fungible data services card offloads “data-centric processing” from the application servers, accelerating overall application performance.

Composed virtual data centres are separated by independent hardware-accelerated security domains, fine-grained segmentation, robust QoS, line-rate encryption and role-based access control. The software supports multi-tenancy and servers are not shared between tenants to avoid CPU side-channel attacks.

Liqid, another composable systems startup, composes similar server elements using its software and a control plane running across the PCIe bus, with no composability hardware running in the servers or storage components. It also supports Ethernet and InfiniBand, and can compose FPGAs, Optane drives and extended memory pools.

Liqid has been making sales to supercomputer and HPC customers. Fungible is also targeting this market, and has drummed up a supportive generic quote from Brad Settlemyer, senior scientist in Los Alamos HPC Design group: “The co-design of software and hardware to support data services and data analysis is integral to meeting our efficiency targets and advancing our national security mission.”

Fungible is setting up deals with OEMs such as Supermicro and Juniper Networks to deliver worldwide deployment, support and training. Other partners include Dell, HPE and Lenovo. The Fungible Data Center is available immediately and the entry-level 200G system provides a proof-of-concept testing rig.

Comment

This is the most ambitious composability technology launch since HPE announced its Synergy technology at the tail end of 2015. The details are different but Fungible’s intent is the same – to make better use of data centre server, storage and networking racks to save paying for unused resource.

Fungible says it can set the composable data centre world on fire because it has its special software, networking and DPU chip secret sauce. Like the other suppliers it has identified a problem – poor IT hardware utilisation – and says it has the answer: an automated, programmable data centre using its software-driven chips and wiring.

However, six years after Synergy hit the streets, composable data centres remain a niche technology. DriveScale has started up and gone away (bought by Twitter). Dell Technologies has its MX7000 but there is no scent of the world being set on fire by it, or by Western Digital’s OpenFlex technology. Only Liqid, which links to Dell and Western Digital kit, appears to be making progress. We shall see, soon enough, if Fungible will also break through.

HYCU bags $87.5m in massive A-Series round

HYCU has bagged $87.5m in a huge A-Series funding, in its first capital raise since it was spun out of Comtrade Software in 2018.

Simon Taylor

CEO Simon Taylor said the company will spend the new money on growing its application, public cloud and SaaS-based offerings and on hiring 100 sales people in Boston, its home town. The company currently employs 200 people worldwide.

HYCU has developed a cloud-native backup-as-a-service and derives its name from its first product, Hyper-Converged Uptime (HYCU), an application for the Nutanix platform.

HYCU also supplies data protection services for VMware, Google Cloud and Azure customers, and claims more than 2,000 customers in 75-plus countries.

Bain Capital Ventures led the fund raise and Enrique Salem, former CEO of Symantec and BCV partner, will join HYCU’s Board of Directors. He expressed “tremendous confidence that HYCU is poised to win the next transformation in enterprise infrastructure.” His colleague, Stefan Cohen, a principal at Bain Capital Ventures, is also joining the board.

HYCU’s cloud-native backup competitors include Clumio, Druva, Cohesity and Rubrik.

Keep on trucking: Seagate offers Sneakernet at scale

Seagate has opened Lyve Data Transfer Service – basically the idea is to use a truck to carry drives, supplied by Seagate, from A to B.

The Sneakernet-style service incorporates Lyve Mobile drives, data shuttles, arrays, and services, to enable businesses to move mass data quickly, securely, and simply from edge to central locations.

A Seagate-commissioned IDC survey found that enterprises frequently move data on drives between different locations. More than half of the 1,000+ surveyed businesses move data daily, weekly, or monthly, and the average total of physical data transferred is 473TB.

“Seagate has simplified how mass capacity data is securely captured, aggregated, transported, and managed,” said Jeff Fochtman, SVP marketing at Seagate Technology. “Our Lyve portfolio gives the distributed enterprise a simple and innovative mass-data storage solution to lower overall storage TCO, move, scale, and monetise data.”

Florian Baumann, CTO of Automotive and AI at Dell, said in a press statement: “Moving hundreds of terabytes of data from a fleet of vehicles to the data centre poses numerous challenges for our customers. Seagate’s Lyve Data Transfer Services offer a great solution by physically moving data. It’s a simple and scalable solution and fills a gap that our customers had in the data gravity process.”

Lyve Data Transfer Service is roughly similar to Amazon’s Snowball concept, in which a ruggedised drive is transported from a customer site to an Amazon data centre. In 2011, The Register wrote that the now-defunct Australian “cloud provider Ninefold is now letting people ‘sneakernet’ their initial data dump, sending a SATA or USB storage device to the company for loading into its storage cloud.”

We asked Seagate some questions about the service.

Blocks & Files: After the end-user loads data onto the Lyve Drive, which organisation then physically transfers the drive to the destination?

Seagate: Once the devices are with the customer, the customer is responsible for shipping them. To keep the drives safe from damage Lyve Mobile shuttles and arrays are ruggedized and feature Seagate Secure technology, which offers hardware encryption for data at rest and in motion and user management to unlock the device and manage unique encryption keys.

Blocks & Files: In what geographies and over what distances does this organisation operate?

Seagate: As above, the customer is responsible for shipping the drives so this will depend on where they operate. For the time being Lyve Data Transfer Services is only available in the US, with plans to expand to EMEA later this year.

Seagate Lyve Mobile Transfer Services webpage.

Blocks & Files: How is a pickup and delivery scheduled?

Seagate: Customers can schedule pickup and delivery via the Lyve Data Transfer Services portal on Seagate’s website. Customers can sign up for the services on a flexible subscription model that can scale according to the customer’s needs.

Blocks & Files: How is it tracked?

Seagate: As above, Seagate is not responsible for the shipping.

Comment

This service is appropriate, we think, when there is either no network connection between the remote and central data centres – which is unlikely – or, more likely, when the network has insufficient bandwidth to transmit the collected data in a reasonable time.

This isn’t a new idea, as the term “sneakernet” indicates, and that dates from the 1980s. Computer scientist Andrew Tanenbaum wrote this in 1996: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.” Fast forward 25 years and watch out for that truck carrying all those Seagate drives.

Kasten intros open source benchmark for K8s storage

Veeam’s Kasten unit has announced the open source Kubestr storage benchmark project for Kubernetes.

Michael Cade, senior technologist at Kasten, explains: “Kubestr offers a simple, easy way to make informed storage decisions, validate storage configurations, including the ability to leverage storage snapshots for data management, and perform benchmarking to ensure your application has the necessary performance and speed.”

So what prompted Kasten to devise this benchmark? “Kubernetes represents a quickly changing computing paradigm with many moving parts and decisions to be made,” Cade says. “The storage choices are overwhelming and every decision will have an impact on speed and performance.

Kasten Kubestr slide

“Each Kubernetes application will have different requirements for IOPS, bandwidth and latency, so the storage choice is not trivial.”

Kubestr enables developers and operators to identify storage options present in a Kubernetes cluster, validate storage configurations, and check storage performance using benchmarking tools like fio. It offers capabilities that help determine the optimal data store to achieve the performance required for microservices running stateful workloads. 

The fact that it’s open source should enable Kubestr to provide vendor-independent recommendations. Find out more at an April 17 webinar.

Kasten provides data protection services for Kubernetes-orchestrated containers and was acquired by Veeam for $150m in October 2020.

Cohesity is now valued at $3.7bn

Investors want to buy $145m of employee-held shares in Cohesity at a valuation of $3.7bn. The new marker represents a 48 per cent jump on April 2020, when Cohesity raised $250m in an E-series round at a $2.5bn valuation.

The tender offer led by Steadfast Capital Ventures was made to Cohesity employees who want the option to sell a portion of their equity for cash in the bank. Although Cohesity CEO Mohit Aron told Bloomberg today that an IPO was not far off, the timing of the offer suggests an IPO is not imminent.

Sandesh Patnam, Managing Partner, U.S., at Premji Invest, a member of the tender syndicate, issued a quote: “We believe the new valuation is a fraction of the value Cohesity will be worth long-term.” 

Cohesity today said it had record-breaking results in its fiscal second quarter ending January 31, 2021, with 90 per cent-plus Y/Y growth in annual recurring revenues, and 50 per cent growth in customer wins globally.

The company ended the quarter with more than 2,300 customers and saw a 300 per cent increase in the number spending more than $5m on Cohesity’s products and services. The number of partners that have booked $1m or more in business with Cohesity grew 46 per cent Y/Y.

Your occasional storage digest with Lightbits Labs, Western Digital and Samsung

In this week’s roundup, Lightbits Labs strengthens the enterprise credentials of its NVMe/TCP array; Western Digital says it needs fewer layers in its 3D NAND than other suppliers; and Samsung has introduced a cheaper PCIe gen 4 gumstick NAND drive.

Lightbits adds snapshots and clones

Lightbits Labs has added snapshot and thin clone functionality to its LightOS software. The company supplies an NVMe SSD array accessed over NVMe/TCP to provide RDMA-class block data access latency and speed over vanilla TCP/IP networks.

LightOS already features thin provisioning, compression, high availability and data protection. A Lightbits spokesperson told us that most legacy vendors already provide snapshot functionality, but for customers and proponents of software-defined storage over NVMe/TCP the capability is new, and it bolsters the overall value of the approach.

LightOS 2.2 lets users create space-efficient snapshots and clones. It supports up to 1,024 snaps and/or clones on a single volume, and up to 128,000 snapshots and clones per cluster. Making a read-only snapshot takes a few seconds and only trace metadata is retained. Changes applied to the parent volume are reflected in the snapshot in 4K blocks, with corresponding snapshot storage capacity usage.

For backup applications, snapshots can be scheduled, isolated from production workflows, and backed up reliably in the background. You can also quickly and easily revert/restore back to earlier snapshots as needed, helping to maintain operational uptime and Quality of Service (QoS) thresholds that might otherwise be compromised by data loss or corruption events.

Thin clones are writeable snapshots. They consume flash capacity with changes applied to the parent or the clone itself. Whereas 100 clones of a 10GB image would otherwise consume 1TB of storage, LightOS allows you to maintain the 10GB total storage footprint across all 101 images, allocating additional storage capacity to the 100 clones only as changes are applied.
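The capacity arithmetic in that example can be sketched as follows, with an assumed, purely illustrative amount of change per clone:

```python
# Capacity accounting for thin clones: clones share the parent's blocks and
# consume space only for changed blocks (4K granularity in LightOS).
parent_gb = 10
clones = 100

# Naive full copies: parent plus 100 complete duplicates (~1TB)
full_copies_gb = (1 + clones) * parent_gb

# Thin clones: the shared 10GB image plus only the changed data.
# 200MB of churn per clone is our illustrative assumption, not a Lightbits figure.
changed_mb_per_clone = 200
thin_gb = parent_gb + clones * changed_mb_per_clone / 1000

print(full_copies_gb)  # 1010
print(thin_gb)         # 30.0
```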

Western Digital/Kioxia need fewer 3D NAND layers

Western Digital’s Srinivasan Sivaram, President for Technology and Strategy, claims WD and Kioxia need fewer 3D NAND layers than Micron, Samsung, and SK hynix do, because they scale – shrink – their NAND in more directions.

Presenting to an investor conference on March 18, he declared: “If you do not scale you perish. Scale or perish has been our mantra throughout in the flash industry.” When you scale the cost per bit decreases and new markets open up. 

Sivaram said that to “the lazy man, scaling is just adding more layers.” But only adding more layers is inefficient; it just adds more cost. The better way, he argued, is to scale laterally and use the vertical layering dimension as a lever, a multiplier; then you can see greatly improved results.

He suggests that “when someone tells you, aha, I gotten … a 128-layer, 168-layer, 176 layers, you have to be careful. You ask, why are you a 128 layers, why can’t you do with less? This is what we have delivered with our BiCS5 generation. When the industry is saying a 128 layers, we get the same scaling through 112 layers with aggressive lateral scaling.”

Samsung adds DRAM-less PCIe gen 4 SSD

Samsung’s 980 NVMe SSD is a successor to its 970 EVO Plus and does away with an on-drive DRAM buffer, using instead a set-aside part of the host system’s memory; Host Memory Buffer (HMB) technology. This overcomes any performance drawbacks of dumping the drive’s own DRAM, and makes it cheaper, the company says.

The 980 stops at the 1TB capacity point, whereas the 970 EVO Plus went up to 2TB.

According to Samsung, the 980 has six times the speed of SATA SSDs. Sequential read and write speeds come in at up to 3,500 and 3,000 MB/sec, while random read and write performances are up to 500K IOPS and 480K IOPS respectively. A larger SLC buffer helps keep the write speed high.
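Assuming roughly 550MB/sec for a fast SATA SSD (our assumption; Samsung does not quote a baseline), the "six times" claim roughly checks out:

```python
# Sanity-check the "six times SATA" claim against the quoted sequential read speed
sata_mb_s = 550        # assumed throughput of a fast SATA SSD (interface-limited)
nvme_980_mb_s = 3500   # quoted sequential read speed of the Samsung 980

print(round(nvme_980_mb_s / sata_mb_s, 1))  # 6.4
```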

We have tabulated the main details for the 970 EVO Plus, the new 980 and 980 Pro, and a comparison WD Blue SN550 drive.

Samsung says the 980 has up to 56 per cent better power efficiency compared to the previous 970 EVO, meaning improved laptop battery power time.

The 980 is available for a manufacturer’s suggested retail price of $49.99 for the 250GB, $69.99 for the 500GB and $129.99 for the 1TB version.

Shorts

Gartner said Cloudian (HyperStore), Cohesity, Nutanix (Nutanix Files and Nutanix Objects) and Veritas (Enterprise Vault) have all been recognised as a 2021 Gartner Peer Insights Customers’ Choice for Distributed File Systems and Object Storage. 

IDC has published its annual DataSphere and StorageSphere forecasts, reporting that in 2020, 64.2ZB of data was created or replicated. A revised model forecasts that global data creation and replication will experience a 23 per cent CAGR over the 2020-2025 forecast period. IoT data (not including video surveillance cameras) is the fastest-growing data segment, followed by social media.

A Veeam Data Protection Report 2021 found that 58 per cent of backups fail, leaving data unprotected. It says businesses are being held back by legacy IT, outdated data protection capabilities, and COVID-19 challenges.

Acronis has released Acronis DeviceLock DLP 9.0 to provide data loss protection for endpoints. It has integrated user activity monitoring, new analytic capabilities, and discovery support for Elasticsearch databases. Administrators can exercise granular control over allowable actions and processes while maintaining regulatory compliance and use built-in auditing and analysis tools.

Alluxio, a supplier of open source cloud data orchestration software, announced the integration of RAPIDS Accelerator for Apache Spark 3.0 with the Alluxio Data Orchestration Platform to accelerate data access on Nvidia computing clusters for analytics and AI pipelines. Validation testing of the integration showed 2x faster acceleration for a data analytics and business intelligence workload and a 70 per cent better ROI compared to CPU clusters.

DataStax today announced a collaboration with IBM to deliver DataStax Enterprise with IBM in hybrid and multi-cloud environments. DataStax Enterprise is a scale-out NoSQL database built on open source Apache Cassandra.

Elastic has announced Elasticsearch, Kibana, and Elastic Cloud v7.12 with a frozen data tier, the ability to save a search in the background, autoscaling improvements, and new instance types in Elastic Cloud. Elastic Enterprise Search 7.12 has a new architecture. Elastic Observability 7.12 gets a new correlation capability in Elastic APM. Elastic Security 7.12 gets analyst-driven correlation and behavioural analysis in the Elastic Agent.

Hammerspace v4.6 is available in the AWS Marketplace and adds a Global File System with a single namespace across different geographical sites. Data and metadata can be replicated across these sites and data in global shares is available for read-write on multiple sites at the same time. Hammerspace can be deployed across different Availability Zones for data and access resilience within an AWS Region. V4.6 also has metering-enabled Consumption and Backup and Recovery.

Redstor has announced a scalable backup for Azure Kubernetes Service (AKS) purpose-built for Microsoft cloud partners to protect Kubernetes, Azure machines and M365 workloads. It allows IT administrators of all levels – not just Kubernetes experts – to protect and recover all data, including that held within Kubernetes clusters on Azure, in minutes. 

Object (and file) storage supplier Scality says it has a Container Storage Interface (CSI) compatible provisioning capability planned for RING SOFS. This will augment existing NFS v4/v3 and SMB 3.0 file protocols to provide automated volume provisioning from containerised applications running in Kubernetes. It sees this as a requirement from existing customers and expects the trend to increase. Scality has updated its RING8 scale-out filesystem software, adding high-availability business continuance and disaster protection, detailed utilisation metrics for billing and chargeback, improved speed and ease of use, plus support for the open-source Prometheus system monitoring tool and API.

SoftIron is introducing Ceph support services at, it says, highly competitive pricing and branded HyperSafe. Options include SoftIron Ceph Support Takeover – resuming services from a previously existing support vendor. They also include HyperDrive Migration from a legacy Ceph installation to SoftIron’s HyperDrive appliances, and emergency assistance.

Yellowbrick has appointed Jonathan Reid as Chief Revenue Officer. The company has a new Velocity Partner Network and has entered into a distribution agreement with Arrow Electronics.

Cloud backup and storage service supplier Backblaze has appointed Mark Potter as Chief Information Security Officer.

Dell EMC all-flash array sales soar above NetApp (and everyone else)

Gartner number crunching shows Dell EMC is building a substantial all-flash array (AFA) revenue lead over NetApp. NetApp remains in second place in this market but Huawei and Pure Storage are catching up fast.

Wells Fargo senior analyst Aaron Rakers told his subscribers: “Gartner estimates that NetApp exited 2020 with a ~16 per cent all-Flash systems revenue share, following Dell EMC at 27 per cent and ahead of PureStorage and IBM both at estimated ~12 per cent revenue share positions. Gartner estimates that NetApp’s all-Flash FAS arrays account for over 90 per cent of NetApp’s total all-Flash systems revenue in calendar 2020; accounting for over 55 per cent of NetApp’s total systems revenue.”

He added: “Gartner estimates that the AFA market accounted for ~55 per cent of total primary storage industry revenue in 2020, or ~$8.82 billion, -5 per cent y/y. However, all-Flash capacity shipped accounted for approximately 12 per cent of total external storage capacity shipped.”
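Those two figures imply the size of the overall primary storage market:

```python
# Back out the total primary storage market from Gartner's AFA share figures
afa_revenue_bn = 8.82   # AFA revenue in 2020, $bn
afa_share = 0.55        # AFA share of total primary storage revenue

total_primary_bn = afa_revenue_bn / afa_share
print(round(total_primary_bn, 1))  # 16.0 ($bn)
```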

Rakers has charted supplier AFA revenues since the first 2013 quarter, showing how their growth rates have differed:

This chart, which uses calendar years, shows NetApp breaking away from a group of similar revenue-earning suppliers in the 1Q17 to 3Q17 period and catching up with leader Dell EMC. It actually beat Dell EMC AFA revenues in 1Q17. However, momentum was not sustained and NetApp began drifting back to the chasing pack from 1Q19 onwards.

In 4Q17 IBM, HPE and Pure were level pegging with $250m-300m in AFA revenue, while Huawei was down at the $100m level. NetApp was in the $350m – $400m area, and Dell EMC led with $550m-plus revenue. Three years later, in 4Q20, HPE was still at the $250m level, IBM has grown slightly to just above $300m, Pure is around $325m and Huawei has soared to $350m-ish. NetApp has grown to the $400m point, while Dell EMC still leads but at a higher level, nudging $700m.

In general, it seems that HPE has not done as good a job as Dell EMC, IBM and NetApp in selling to the installed customer base or gaining new customers – which is what Pure has had to do. Hitachi has done even less well than HPE.

A pie chart shows supplier AFA revenue market shares:

HPE ranked fifth in Q4 2020 with nine per cent, behind Pure and IBM in joint fourth place with a 12 per cent share each, Huawei at 13 per cent, NetApp at 16 per cent, and Dell EMC at 27 per cent. Hitachi was a long way behind, on three per cent.

Rakers has separately tracked NetApp AFA and Pure Storage revenues and compared them.

The curves on the chart show quite evident seasonality with end-of-year peaks and, apart from F1Q20, no sustained evidence over multiple quarters that Pure’s revenues are catching up with NetApp’s, or that NetApp is drawing ahead of Pure.

This is despite NetApp having no equivalent product to Pure’s file+object FlashBlade which pulls in around $50m a quarter. NetApp’s all-flash SGF6024 StorageGRID is positioned primarily as high-performance object store with NAS access layered on top. If the company also positioned the system as a fast-restoring and unified file-object engine, it could perhaps dent FlashBlade’s sales momentum.

FlashBlade is Pure Storage’s billion dollar babe

Pure Storage’s beancounters say the company is close to surpassing $1bn sales revenue for its groundbreaking FlashBlade all-flash storage array.

The company this week said it gained several hundred customers in its fiscal 2021 (ended January 2021) and claimed that more than 25 per cent of the Fortune 100 are FlashBlade customers.

FlashBlade delivers unified file and object storage using proprietary flash drives. The system is used as a backup target where rapid restores are needed. Prior to FlashBlade’s arrival on the storage scene in January 2017, all-flash arrays were used only for primary data storage, with predominantly block access. It is now quite usual for filers and object storage companies to support all-flash configurations.

FlashBlade bezel

The company said FlashBlade has recorded consistent year-over-year growth every quarter since launch. In January 2019 Pure said FlashBlade had a $250m run rate, based on $55.1m revenues in the quarter.

Matt Burr, Pure’s VP and GM for FlashBlade, revealed in December 2020 that “FlashBlade’s compound annual growth rate (CAGR) over the past two and a half years has been 79 per cent [and] FlashBlade is built to meet the mass transition of file and object to flash that we anticipate in the next two to three years.”
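Burr’s 79 per cent CAGR figure implies a substantial revenue multiple over the period. A quick sketch of the compounding arithmetic (the formula is standard; the 2.5-year window is the one Burr cites):

```python
# Implied growth multiple from a compound annual growth rate (CAGR):
#   final = initial * (1 + cagr) ** years
cagr = 0.79   # 79 per cent, per Matt Burr
years = 2.5   # "the past two and a half years"

multiple = (1 + cagr) ** years
print(f"Implied revenue multiple over {years} years: {multiple:.1f}x")
```

In other words, a 79 per cent CAGR sustained for two and a half years means FlashBlade revenue roughly quadrupled over the period.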

FlashBlade uses TLC NAND. Pure introduced a QLC FlashArray//C in August 2020 to attack hybrid flash/disk array competition as part of an extended FlashArray primary storage array product line.

FlashArray//C has a 5.9PB effective maximum capacity, compared to FlashBlade’s 3.3PB effective maximum. Blocks & Files would not be surprised if Pure introduced a new FlashBlade model using QLC flash and with a substantially higher maximum effective capacity than 3.3PB.

Need to store PBs of data? SSDs don’t cut the mustard, says Tosh HDD exec

Hard drives remain indispensable, and predictions that SSDs will replace them in the enterprise are wrong. So says Rainer Kaese, a senior biz dev manager at Toshiba Storage Solutions.

Rainer Käse

He has penned his thoughts in a blog post, “How to store (petabytes of) machine-generated data”, arguing that there is no way SSDs can be used to store the petabytes, and coming exabytes, of data generated by: institutions such as CERN, with its 10PB/month data storage growth rate; applications such as autonomous driving and video surveillance; the Internet of Things; and hyperscalers exemplified by Facebook, AWS, Azure and Google Cloud.

Kaese writes: “This poses enormous challenges for the storage infrastructures of companies and research institutions. They must be able to absorb a constant influx of large amounts of data and store it reliably.”

He declares: “There is no way around hard disks when it comes to storing such enormous amounts of data. HDDs remain the cheapest medium that meets the dual requirements of storage space and easy access.”

Why?

The only candidate disk replacement technology is NAND SSDs but “Flash memory… is currently still eight to ten times more expensive per unit capacity than hard disks. Although the prices for SSDs are falling, they are doing so at a similar rate to HDDs.”

We know this, but SSD replacement proponents such as Wikibon analyst David Floyer and SK hynix say SSD pricing will fall faster than disk drive pricing because SSD capacities will rise faster than disk drive capacities.

Kaese argues that the coming deluge of machine-generated (IoT) data cannot be deduped effectively, so lower effective SSD pricing based on deduplication-enlarged capacity won’t apply. It will be a raw $/TB comparison.

Flash fabs

He adds: “Flash production capacities will simply remain too low for SSDs to outstrip HDDs.” Flash fabs cost billions of dollars and take two years to build, but HDD output can be increased relatively easily “because less cleanroom production is needed than in semiconductor production.”

In a recent article, Wikibon’s Floyer claimed that more flash capacity is already being manufactured overall than disk capacity: “Flash has already overtaken HDDs in total storage petabytes shipped.” He argues NAND’s volume production superiority is driving flash prices down faster than disk drive prices.

Disk energy assist

Kaese notes that “new technologies such as HAMR (Heat-Assisted Magnetic Recording) and MAMR (Microwave-Assisted Magnetic Recording) are continuing to deliver [disk drive] capacity increases.” We should assume a 2TB/year disk capacity increase rate “for a few more years,” he says. This will continually decrease disk’s $/TB costs.

Floyer, in contrast, argues that HAMR and MAMR costs will be too high: “Wikibon believes HDD vendors of HAMR and MAMR are unlikely to drive down the costs below those of the current PMR HDD technology.”

Who is right?

Kaese references an IDC forecast “that by the end of 2025, more than 80 per cent of the capacity required in the enterprise sector for core and edge data centres will continue to be obtained in the form of HDDs and less than 20 per cent on SSDs and other flash media.”

Wikibon makes its own prediction: “Wikibon projects that flash consumer SSDs become cheaper than HDDs on a dollar per terabyte basis by 2026… Innovative storage and processor architectures will accelerate the migration from HDD to NAND flash and tape using consumer-grade flash.”

Who should we believe? Do we take on board the disk drive makers’ spin or Wikibon’s flash comments? Blocks & Files will return to the topic in 2026, drawing on the services of Captain Hindsight, our infallible friend.