
Pavilion Data takes pole position in GPUDirect race

Pavilion Data, the high-end storage array maker, sends data to an Nvidia DGX-A100 GPU server faster than DDN, VAST Data and WekaIO, according to an Nvidia-validated test result.

Developed by Nvidia, GPUDirect is a data transmission scheme that provides a direct path between storage and GPUs. This eliminates or reduces I/O bottlenecks for data-intensive AI applications.

The Pavilion-Nvidia performance is revealed in a video. [Find it on a Pavilion partner resource webpage entitled “Nvidia Pavilion Summary of GPU Direct Testing”.] Pavilion sent block data to the DGX-A100 at 182GB/sec read and 149GB/sec write. File data transmission speeds were 191GB/sec read and 118GB/sec write. 

Pavilion Nvidia video screen grab.

Blocks & Files now has four performance results for storage systems using GPUDirect to send data to an Nvidia DGX-A100.

Pavilion is clearly top of the DGX-A100 charts, coming in 10 per cent faster than DDN and VAST, and 26 per cent faster than WekaIO.

A little bit more about Pavilion

Pavilion Data’s 4RU HyperParallel Flash Array (HFA) has up to 20 controllers linked across NVMe to up to 72 SSDs, with NVMe-oF external access, providing end-to-end NVMe performance.

Blocks & Files schematic of Pavilion system.

The company’s HyperOS 3.0 provides the HyperParallel File System with multi-chassis, global namespace support for NFS and S3. It exhibits linear scaling across an unlimited number of clustered HyperParallel Flash Array systems.

This week in storage climbs aboard Europe’s climate-neutral data centre pact

In this week’s data storage news roundup, Dell EMC adds sync replication-based high availability to PowerStore arrays; Delphix adds ransomware protection; and Europe data centre firms aim for climate neutrality by 2030.

European Climate Neutral Data Centre Pact

Twenty-five cloud infrastructure providers and data centre operators and 17 associations have signed the Climate Neutral Data Centre Pact, an initiative to make data centres in Europe climate neutral by 2030. They say this is an historic and unprecedented commitment by an industry to proactively lead the transition to a climate neutral economy.

The Climate Neutral Data Centre Pact establishes a Self Regulatory Initiative which has been developed in co-operation with the European Commission. It supports both the European Green Deal, which aims to make Europe the world’s first climate neutral continent by 2050, and the European Data Strategy.

The Self Regulatory Initiative sets ambitious goals that will facilitate Europe’s essential transition to a greener economy. It commits signatories to ensuring their data centres are climate neutral by setting ambitious measurable targets for 2025 and 2030 in the following areas:

  • Prove energy efficiency with measurable targets
  • Purchase 100% carbon-free energy
  • Prioritise water conservation
  • Reuse and repair servers
  • Look for ways to recycle heat

Progress towards achieving climate neutral data centres will be monitored by the European Commission twice a year.

PowerStore Metro Node

Dell EMC has announced a Metro Node for its PowerStore array line. This is an active-active failover scheme: the two nodes, separated by metro-scale distances, are fully mirrored, and applications can write at both sites simultaneously.

The metro node is a 2U cluster; a pair of 1U systems with 32Gbit/s Fibre Channel connectivity.

That means a LUN at one site can be synchronously replicated to the other. Metro Node supports a zero Recovery Point Objective (RPO) and a zero Recovery Time Objective (RTO): no data loss and instant failover. The failover is accomplished by using virtual machine witness technology.

There is a metro round-trip time (RTT) limit of under 10ms. Different Dell EMC arrays can be connected in this way via Metro Nodes, and the ongoing replication has no performance impact on the arrays at either end of the metro link.
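A rough sketch of why that RTT bound matters: a synchronously mirrored write can only be acknowledged once the remote copy is confirmed, so each write pays at least one metro round trip on top of the local write. This is a simplified model, ignoring any overlap a real implementation may achieve:

```python
def sync_write_latency_ms(local_write_ms, rtt_ms):
    """Effective latency of a synchronously mirrored write: the array can
    only acknowledge the host once the remote site has confirmed its copy,
    which costs at least one metro round trip on top of the local write."""
    return local_write_ms + rtt_ms

# At the 10 ms RTT boundary, a 0.5 ms local write becomes a 10.5 ms write,
# which is why metro-sync schemes cap the permitted round-trip time.
print(sync_write_latency_ms(0.5, 10))
```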

A PowerStore metro node also supports local configurations for continuous application availability, and data mobility to non-disruptively relocate workloads. It can enable a storage technology refresh without application downtime. Metro Node supports consistency group add/delete operations without disruption. 

Delphix adds ransomware detection and fix

Delphix has announced an immutable data time machine that will not overwrite data and a so-called open-box approach to ransomware detection and recovery.

It says analytics from storage or backup systems often treat data as a closed box and leave blind spots that can be hidden behind normal change data patterns. Companies have resorted to manually provisioning, testing, and validating data, at what Delphix claims is incredible cost; this can take weeks or months. Delphix says it enables “programmable data” to enable data testing and validation as if it were in an “open box.”

Jedidiah Yueh.

Jedidiah Yueh, CEO of Delphix, said: ”You can drive a truck through the holes in legacy ransomware solutions. Once-a-day backups are insufficient. Companies can’t afford to lose a day’s worth of transactional data. In addition, closed-box backup approaches fail to safeguard against undetected attacks, sometimes for months. Companies need an open-box detection approach, especially for mission-critical data.”  

The company claims that Delphix Data Platform open box testing and validation can greatly reduce windows of undetected data loss, while also drastically reducing operating costs and complexity.

Delphix also claims air gap isolation for its stored data, but this is a virtual air gap. The company states: “With flexible replication configurations, Delphix can isolate data and provide a highly secure network implementation as well as advanced security for identity and controls—enabling a cyber data air gap to prevent data loss and tampering.”

Translate cyber air gap as a virtual air gap; one not involving physical offline separation of stored data media, such as tape, from any IT networked device.

Eat your shorts

Data Dynamics has a Unified Unstructured Data Management Platform, a file lifecycle management product. V8.4 adds Object-to-Object Replication with automated data movement.

A coalition of leading Europe-based technology companies, research institutions and not-for-profit organisations, has launched Data Sovereignty Now, to lobby European policy makers to ensure that control of data remains in the hands of the people and organisations that generate it.

Total fines of €272.5m, about $332.4m, have been imposed for a wide range of infringements of Europe’s GDPR data protection laws, according to law firm DLA Piper. The figure is taken from the law firm’s latest annual General Data Protection Regulation (GDPR) fines and data breach report of the 27 European Union Member States plus the UK, Norway, Iceland and Liechtenstein. Italy’s regulator tops the rankings for aggregate fines, having imposed more than €69.3m (about $84.5m) since the application of GDPR on 25 May 2018. Germany and France came second and third with aggregate fines of €69.1 million and €54.4 million respectively.

IBM has containerised Spectrum Scale, its high-performance, parallel file system software. A blog by IBM Lab Services Consultant Ole Myklebust says: “We now can containerise the Spectrum Scale cluster and run this on top of OpenShift.” Client clusters running on the Red Hat OpenShift Container Platform (OCP) don’t need any internal storage or internal filesystem. They can access storage/filesystem on any non-containerised Spectrum Scale storage cluster at the 5.1.0.1 level of code. The separate Spectrum Scale Storage Cluster could be virtual, physical or even an ESS (IBM Elastic Storage Server).

N2WS announced backup monitoring with new Datadog integration and security enhancements in the latest release of N2WS Backup & Recovery for AWS. The software gets support for Amazon FSx for Windows File Server and Amazon FSx for Lustre, new file and folder level recovery for archived data stored on Amazon S3, and enhanced security capabilities for Disaster Recovery Accounts on Amazon Web Services.

By the way N2WS is an initialism for “Not 2 Worry Software.”

Snowflake CEO Frank Slootman has co-written a second book with freelancer Steve Hamm, “Rise of the Data Cloud.” It basically says Snowflake’s cloud data warehouse is wonderful, the world needs it, and you should buy its services at once.

Nimbus Data has announced Tectonic, a bring-your-own-SSDs business model for its ExaFlash all-flash array. CEO Thomas Isakovich tells us: “We provide our ExaFlash all-flash array – the system, all software, and 24×7 support — as a flat rate subscription – but let customers choose their own SSDs. This way, customers can slash the cost of the most expensive part of the all-flash array: the SSDs. They can bypass the 5-10x mark-up and get SSDs for the actual market price. If customers still prefer to buy the SSDs from us, they can – and we charge a price per GB that is directly mapped to the market price of flash (as documented by TrendFocus in DRAMeXchange). This model gives customers freedom from lock-in and guarantees customers the lowest cost always.”

The Canadian Pacific Railway has deployed Intel Optane PMem to bulk out the DRAM in its SAP system, and now uses a third as many SAP production nodes. An Intel slide shows the benefits.

UK (Wales)-based media industry object storage supplier Object Matrix has partnered with  PoINT Software & Systems to enable efficient lifecycle management with MatrixStore. PoINT Storage Manager identifies inactive data which can be automatically migrated to MatrixStore, reducing the load on primary storage. At the same time, customers maintain fast and easy access to their entire archive. 

Respondents to a VAR survey by William Blair analyst Jason Ader emphatically endorsed the notion that COVID is an accelerant to cloud adoption. That said, hybrid cloud models will continue to be the norm for most organisations. Customers are espousing a cloud-first, SaaS-first approach. COVID has made a strong case for IT moving from cost centre to revenue/productivity driver, boosting the CIO’s stature in the organisation. This realisation, coupled with increased IT infrastructure required to support hybrid in-office/WFH environments in a post-COVID world, bodes well for 2021 IT budgets.

In-memory compute supplier GridGain Systems said it had great momentum coming out of 2020. This included continued year-on-year revenue growth, net-new business growth of 53 per cent with just 6 per cent churn, a 63 per cent increase in the number of transactions over $250,000 compared to 2019, and more than 500 per cent sales growth in Apache Ignite support subscriptions.

Scale-out NAS supplier OpenDrives has raised $20m in its B-round of funding, bringing the total raised to $30m. It will use the capital to grow the company, accelerating product development to introduce modern high performance computing (HPC), on-premises and hybrid cloud capabilities for global businesses in enterprise, media and entertainment, pharmaceutical, and federal government agency markets.

Backup-as-a-Service data protector Rewind has closed a $15m A-round of funding led by Inovia Capital. Rewind has more than 80,000 business customers around the globe. Rewind backs up SaaS applications like BigCommerce, QuickBooks Online, Shopify, and Shopify Plus. The cash will accelerate its product development pipeline, bringing new data protection offerings to market faster than any current BaaS provider. Rewind will also use the new capital to strengthen its R&D, sales, marketing and customer service teams to support its global market expansion.

Platform9, which provides open-source SaaS-managed software for private and edge clouds, announced multi-version Kubernetes support, enhanced cluster deployment options, and upgraded manageability. Use cases span Technology, Retail, Telco, Media, and Entertainment verticals.

Twitter consumes DriveScale, gains weight in persistent block storage

Composable infrastructure startup DriveScale and its team have been absorbed into Twitter.

Nick Tornow, a platform lead at Twitter, tweeted: “DriveScale’s extremely experienced team will bring deep knowledge of storage protocols, technologies, and products to help us develop a persistent block-level storage product in our data centres to accelerate application development across the company.” 

Terms of the acquisition were not revealed but it looks like a distressed purchase. In our view, DriveScale faced being out-spent in engineering, marketing and sales by better funded competitors, and was left behind in a technology sense. Possibly the Covid-19 pandemic adversely affected DriveScale’s business as well.

The future for DriveScale’s customers and their use of the technology is not clear at time of writing but all the signs are that the company’s general composable system development days are over. In December, DriveScale CTO Brian Pawlowski joined Quantum as its Chief Development Officer. It now seems like he knew the writing was on the wall.

DriveScale CEO Gene Banman said on Twitter: ”Today I am pleased to announce that DriveScale is joining Twitter. Our extremely talented team with deep knowledge of compute and storage solutions will help accelerate application development at Twitter. 

“The last 8 years have been an incredible journey for DriveScale. None of this would have been possible without our amazing team, customers, partners, analysts and friends. We couldn’t have done it without you.  Thank you to each and everyone of you that joined us on this wild ride.”

DriveScale’s journey

DriveScale was founded in 2013 by Duane Northcutt, chief scientist Tom Lyon, and chief architect Satya Nishtala. The company has taken in $26m in funding, with $8m most recently raised in 2018. The intent was to provide dynamically composed server systems and storage, and it initially developed a DriveScale Adapter that sat in a rack and used Ethernet to connect any diskless server to any SAS storage drive in JBODs. 

The company then developed its DriveScale Software Composable Infrastructure and added SSD support. This provides a software control plane to compose hardware resources, including GPUs, and return them to a pool when no longer needed. Flash storage resources can be configured by this software at chassis, drive, and sub-drive level.

DriveScale composable infrastructure scheme

DriveScale added support for Optane, Western Digital’s OpenFlex architecture, OpenStack and Mellanox BlueField SmartNICs in 2020.

However, competitors such as Liqid, with $50m in funding, developed a product that supported composing Optane drives, FPGAs, and GPUs with PCIe Gen 4 support. It has won a string of supercomputer composability contracts and is set to dominate that niche.

Another competitor, Fungible, with $311m in funding, has developed a very high-performance chip to compose and manage data centres including hyperscale ones, at scale and with storage hardware accelerator chips supported. 

SSDs will crush hard drives in the enterprise, bearing down the full weight of Wright’s Law

IDF Merkava Mk4 tank with Trophy APS ("מעיל רוח") during training

The supposed limitation of flash memory manufacturing capacity compared with hard disk drive manufacture is a “myth”, according to Wikibon Research analyst David Floyer.

More flash capacity is already made than HDD capacity. This volume production superiority is driving flash prices down faster than disk drive prices and, as night follows day, SSDs will take over from HDDs in data centres.

Using ‘Wright’s Law’ as a reference point – more on that later – Floyer forecasts that production efficiencies will result in SSDs becoming cheaper than HDDs on a dollar per terabyte basis by 2026. The benefits will spill over to the enterprise use cases that are currently dominated by nearline hard drives.

After 2025, Floyer writes, “Wikibon projects the HDD shipments will decline by about 27 per cent per year. The main reason is that flash will be the dominant technology for almost all large-scale storage. HDD production will primarily be for the replacement and extension of existing HDD installations.”

SSDs are cheaper to operate than disk drives, needing less power and cooling, and are much faster to access. But they cost more to make. Floyer’s prediction of mass disk drive replacement hinges on the idea that SSDs will become cheaper on a $/TB basis, because of Wright’s Law, and so become cheaper than disk drives. But when?

Crossover timing

Wells Fargo senior analyst Aaron Rakers in August 2019 predicted enterprise storage buyers will start to prefer SSDs when prices fall to five times or less that of hard disk drives. He noted an 18x premium in 2017 for enterprise SSDs over mass capacity nearline disk drives. This dropped to a 9x premium in 2019. He did not predict when the 5x premium crossover point would be reached.
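As an illustrative extrapolation of Rakers’ figures (our back-of-envelope calculation, not his forecast): the premium halved from 18x in 2017 to 9x in 2019. If it kept halving every two years, the 5x point would arrive roughly 1.7 years after 2019:

```python
import math

def years_to_premium(start_premium, target_premium, halving_years=2.0):
    """Years until the SSD/HDD $/TB premium falls to the target, assuming
    it keeps halving at the 2017-2019 pace (18x down to 9x in two years)."""
    return halving_years * math.log2(start_premium / target_premium)

# From the 9x premium of 2019, the 5x point lands about 1.7 years later,
# i.e. some time in 2021, on this simple extrapolation.
print(years_to_premium(9, 5))
```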

Intel thinks it will happen in just under two years’ time: SSDs will reach total cost of ownership crossover with hard disk drives in 2022, because penta-level-cell (5 bits/cell) 3D NAND will lower SSD cost. Rob Crooke, general manager of Intel’s non-volatile memory business, said at an Intel event in December last year: “We’re on the right path to replace HDDs,” and Intel has solid plans to move to PLC in the future.

Floyer disagrees with this timing, citing Wright’s Law in evidence.

The Wright Stuff

Wikibon argues the cross-over timing between SSDs and HDDs can be determined using Wright’s Law. This axiom derives its name from the author of a seminal 1936 paper, entitled ‘Factors Affecting the Costs of Airplanes’, in which Theodore Wright, an American aeronautical engineer, noted that airplane production costs decreased at a constant 10 to 15 per cent rate for every doubling of production numbers. His insight is also called the Experience Curve, because manufacturing shops learn through experience and become more efficient.

In a new Wikibon report, QLC Flash HAMRs HDD, Floyer has applied Wright’s Law to NAND and HDD production. Wright’s Law predicts cost-saving efficiencies will be made in line with increased production of NAND flash products. This means total production of NAND, adding together consumer and enterprise SSD products. The production of consumer and enterprise SSDs together builds product volumes high enough for Wright’s Law to have its beneficial effects on pricing, according to Floyer’s analysis.

Conversely, the scope for production-related efficiencies for HDDs falls as fewer disk drives are made. The outcome is that SSD $/TB will drop faster than HDD $/TB.
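Wright’s Law itself is simple enough to state in a few lines. A minimal sketch, with illustrative numbers rather than Wikibon’s actual inputs:

```python
import math

def wrights_law_cost(unit_cost_0, cumulative_0, cumulative_now, learning_rate=0.15):
    """Wright's Law: every doubling of cumulative production cuts unit
    cost by a fixed fraction (Wright observed 10-15% for airplanes)."""
    doublings = math.log2(cumulative_now / cumulative_0)
    return unit_cost_0 * (1 - learning_rate) ** doublings

# Illustrative: at a 15% learning rate, one doubling of cumulative output
# cuts $/TB by 15%; two doublings cut it to 72.25% of the starting cost.
print(wrights_law_cost(100.0, 1, 4))
```

Because total NAND output (consumer plus enterprise) is doubling far faster than HDD output, the same law produces the steeper cost decline for flash in Floyer’s analysis.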

“Wikibon projects that flash consumer SSDs become cheaper than HDDs on a dollar per terabyte basis by 2026, in only about 5 years (2021),” he writes. “Innovative storage and processor architectures will accelerate the migration from HDD to NAND flash and tape using consumer-grade flash. …

“Flash is lower cost over 5 and 10 years for almost all file-based workloads than HDD, when storage management and space are factored in.” Also, he notes, “Flash has already overtaken HDDs in total storage petabytes shipped.”

In the chart above, the orange line is the hard disk drive (HDD) $/TB street cost over time, plotted on a log scale against the left-hand vertical axis. The blue line represents the street SSD $/TB cost over the same time period, and the two lines are projected to cross in 2026. The green dotted line stands for the ratio between the SSD and HDD costs, plotted against the values on the right-hand vertical axis.

Wikibon’s Floyer includes a chart entitled “NAND Flash has already Overtaken HDD in Storage Exabytes Shipped.” This shows that more flash exabytes are already being manufactured than disk exabytes.

The dotted white line on the chart shows annual flash shipments in exabytes, representing the sum total of SSD capacity and consumer flash products such as USB sticks and camera cards. The orange area represents total HDD exabyte shipments over time.

According to Floyer, the table at the bottom of the chart “shows that total flash accounted for 435 exabytes in 2020, compared to HDD with 310 exabytes. Most of the ‘other flash’ segment is consumer flash, and the fabs are already built and in production. More will be steadily built over the next decade. The cost of flash includes the investment in flash fabs. Myth exploded.” 

Even so, “Wikibon believes that the HDD industry will continue to develop PMR technology and should get to between 24-26 TB/HDD drive in the next five years.”

Floyer writes: “The volume of consumer HDDs shipped was rapid in the early days. This volume drove down the HDD cost (Wright’s Law), which drove increased HDD volumes to be purchased. The rapid increase in sales allowed for aggressive research and development to implement new HDD storage technologies with improved storage density.

“Initially, these new technologies were more expensive than the previous ones. However, the high volumes drove production costs down quickly (Wright’s Law), and costs continued to decline.”

Flash manufacturing limitation?

Floyer chart showing actual HDD unit numbers to 2020 and projections to 2030. Consumer and enterprise (data centre) HDDs are separately indicated. He says consumer HDD unit numbers drove the production efficiency cost-savings

Members of the HDD preservation society say HDDs will retain their current $/TB advantage by increasing capacity through HAMR and MAMR technology. These heat and microwave energy-assisted magnetic recording technologies enable a disk platter to have much smaller bits and more tracks on the platter surfaces. That drives up areal density and so lowers the cost/TB.

According to this viewpoint, SSDs will retain their 5x or greater cost premium despite lowering their own cost/TB, because HDD capacity is getting less expensive at the same or a greater rate.

HDD preservation society members also say that there simply isn’t enough flash manufacturing capacity to match the millions of petabytes of disk drive manufacturing capacity that exists today.

For example, Colm Lysaght, a senior director at Micron, the US memory maker, took this tack, telling Blocks & Files by email in August 2019: “Clearly SSD price/GB will get closer to HDD price/GB over time. … However, the raw number of EB needed for a “wholesale switch” from nearline HDD to SSD is far too large for the NAND flash industry to contemplate. The capital investment needed to generate the EB required … is prohibitively expensive.”

He concluded: “SSDs may nibble (and maybe even munch) at the nearline HDD market, but both will coexist for many years to come.”

HAMR and MAMR dead end

Floyer thinks disk technologies like HAMR and MAMR will not change the outcome, as it will not be financially viable for the HDD manufacturers to introduce them in mass quantities: the number of drives required to drive HAMR/MAMR costs below those of current HDD recording technology is too great.

Floyer writes: “Wikibon believes HDD vendors of HAMR and MAMR are unlikely to drive down the costs below those of the current PMR HDD technology.”

He concludes: “In Wikibon’s opinion, investments in HAMR and MAMR are not the HDD vendors’ main focus. Executives are placing significant emphasis on production efficiency, lower sales and distribution costs, and are extracting good profits in a declining market. Wikibon would expect further consolidation of vendors and production facilities as part of this focus on cost reduction.”

NAKIVO hardens ransomware defences, takes SharePoint under its backup wing

NAKIVO has extended its backup software to: support SharePoint Online; add ransomware protection facilities; and give managed service providers (MSPs) a way to manage tenant resources.

NAKIVO’s new Backup & Replication 10.2 backs up SharePoint Online sites and sub-sites, and recovers document libraries and lists to the original or a different location. Search functionality enables users to locate items for compliance and e-discovery purposes. Microsoft does not back up SharePoint users’ data; that’s the responsibility of the users.

Nakivo graphic

AWS S3 Object Lock support lets users set up retention periods for immutable S3 buckets, with no subsequent change to the retention period allowed. Ransomware can’t touch the data in these buckets.
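The underlying S3 mechanism can be sketched with boto3. This is a minimal sketch of the S3 feature NAKIVO builds on, not NAKIVO’s own code, and the bucket and key names are hypothetical. It builds the request for a COMPLIANCE-mode lock, the mode that forbids shortening the retention period:

```python
from datetime import datetime, timedelta, timezone

def compliance_retention_args(bucket, key, days):
    """Build boto3 put_object_retention arguments for a COMPLIANCE-mode
    Object Lock: once applied, the retain-until date cannot be shortened
    or removed, so the object version cannot be deleted early."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Retention": {
            "Mode": "COMPLIANCE",
            "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=days),
        },
    }

# Usage (requires a bucket created with ObjectLockEnabledForBucket=True):
# import boto3
# s3 = boto3.client("s3")
# s3.put_object_retention(**compliance_retention_args("backups", "repo/chunk-0001", 30))
```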

NAKIVO Backup & Replication 10.2 allows MSPs to provide a more reliable service by organising and scheduling infrastructure resources to prevent noisy neighbours. The MSPs can allocate data protection resources, such as hosts, clusters, VMs, Backup Repositories and Transporters, to tenants.

NAKIVO provides perpetual license and subscription license options in six configurations, ranging from Basic to Enterprise Plus. The company says it is typically 50 per cent cheaper than competitors. A Microsoft 365 subscription price is $10/user/year. Per-workload subscription pricing is also available. A perpetual license price for a virtual server is $99/socket at the Basic level and $899/socket at the Enterprise Plus level.

A bit more about NAKIVO

NAKIVO says it has more than 5,000 resellers and 16,000 customers, which include Coca-Cola, DHL, Honda, Radisson, SpaceX, and the US Army and Navy. The US-headquartered company competes against mainstream backup suppliers such as Commvault, Dell EMC and Veritas. It is also up against blitzscaling competitors such as Clumio, Cohesity, Rubrik and Veeam. So, to date, it has been overshadowed on the marketing front.

NAKIVO’s founders have self-funded the company with the proceeds of a previous US-based startup that they sold. Sergei Serdyuk, VP product management, told us: “For the first five years we spent zero on marketing. We lived on what we made.”

Its actual US footprint is slim, comprising the CEO and a few other staff. The company conducts the bulk of its R&D and engineering in Ukraine, with help from a Vietnamese office. Serdyuk said it would be cost-prohibitive to have engineering located in the USA.

Expect to hear more about NAKIVO. It has now grown large enough to set aside some money for marketing.


Samsung gets flashier - and a bit faster - with the mainstream 870 EVO SSD

Samsung has updated the 860 EVO SATA SSD to the 870 EVO version, changing the flash from 64-layer to 128-layer V-NAND, and making it slightly faster.

Samsung 870 EVO.

KyuYoung Lee, VP of Samsung’s memory brand biz, is on message with this statement: “Representing the culmination of our SATA SSD line, the new 870 EVO delivers a compelling mix of performance, reliability and compatibility for casual laptop and desktop PC users as well as Network Attached Storage (NAS) users.”

The performance overview numbers for the 870 and 860 are below.

They are pretty similar, but the 870 EVO delivers a 38 per cent improvement in random read speed over the previous 860 model at a queue depth of 1.

The 870 EVO is sold in 2.5-inch format only, unlike the 860, which was also available as an M.2 format drive.

The 870 has the same endurance as its predecessor, with up to 2,400 TB written (TBW) for the duration of the five-year warranty. The endurance rating is 150TBW for the 250GB drive, and doubles as the capacity increases to 500GB and on through 1TB and 2TB to 4TB.
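That linear scaling can be expressed directly. A small sketch consistent with the quoted figures (capacities in GB, taking 1TB as 1,000GB):

```python
def evo_870_endurance_tbw(capacity_gb):
    """Rated endurance in TB written (TBW), scaling linearly with capacity:
    150 TBW at 250 GB, doubling with each capacity step up to the quoted
    maximum of 2,400 TBW at 4 TB."""
    return 150 * capacity_gb // 250

# 250 GB gives 150 TBW; 500 GB, 300; 1 TB, 600; 2 TB, 1,200; 4 TB, 2,400.
print(evo_870_endurance_tbw(4000))
```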

Both EVO models use a 6Gbit/s SATA interface, AES-256 bit encryption and an Arm-based Samsung controller.

Like the 860, the 870 EVO has a variably sized SLC write cache, called Intelligent TurboWrite technology, to increase its speed.

We’re told the 870 EVO supports TRIM and S.M.A.R.T. functionality and comes with Samsung’s Magician software for drive management. It also comes with drive cloning software.

Lastly, this product is somewhat similar to the existing 870 QVO SSD, but the latter drive uses quad-level flash and goes up to 8TB in capacity.

Manufacturer’s suggested pricing starts at $39.99 for the 250GB model, and continues with $69.99 for 500GB, $129.99 for 1TB, $249.99 for 2TB and $479.99 for 4TB.

HDDs remain big cash-generating machine for Seagate

Seagate has delivered another $2.5bn-plus revenue quarter as it spits in the eye of nearline disk cannibalisation by flash.

Top line numbers for the second fiscal 2021 quarter ended January 1 were $2.62bn in revenues, a smidge down on the year-ago $2.69bn, and net income of $280m, a bigger smidge down from last year’s $318m. The revenue number beats Seagate’s guidance from last quarter. Seagate’s mid-point revenue guidance for next quarter is $2.65bn, a bit less than last year’s $2.72bn.

In the quarter, bulk capacity nearline disk drives accounted for 58 per cent of revenues, and legacy desktop, notebook and fast enterprise 2.5-inch drives contributed 38 per cent of revenues. Enterprise systems and SSDs made up the balance.

There were record exabyte capacity shipped numbers, at 97.2EB – well up on the year-ago’s 71.3EB. Seagate reckons nearline drive capacity shipped will grow 35-40 per cent Y/Y.

Dave Mosley.

“Seagate delivered strong, double-digit revenue, earnings and free cash flow growth in the December quarter supported by broad-based improvement across nearly every served market and geography, and we had solid customer demand for our mass capacity products,” CEO Dave Mosley said.

“We are well positioned to benefit from the tremendous opportunities we foresee ahead and remain focused on enhancing value for our customers, employees and shareholders.”

That tremendous opportunity is the gigantic increase in data storage needs both on-premises and, more importantly, in the hyperscaler and public cloud markets.

Seagate is moving cautiously from ongoing perpendicular magnetic recording technology (PMR) to next-generation heat-assisted magnetic recording (HAMR) but it is in no rush. The first HAMR drives, with 20TB capacity, shipped without fanfare in November. The company is ramping up 18TB drive production ready to take over from existing 16TB drives as the market sweet spot transitions. 

It is also developing the Lyve Storage Platform line of storage arrays and systems to extend its market but so far this is micro-potatoes, revenue-wise.

Dell EMC PowerProtect X400 makes quiet exit from the planet

Dell EMC launched the PowerProtect X400 appliance in May 2019, GA’d in July. And then what? Not an awful lot, judging from this November 2020 Evaluator Group report (paywall) on PowerProtect, which reveals “PowerProtect X400 was discontinued in Aug. 2020 in favour of emphasising the stand-alone options.”

This may be a record of sorts, going from GA (general availability) to EOL (end of life) in just over 12 months.

We asked Dell EMC what happened to the X400 Appliance. A spokesperson told us: “In an effort to simplify our data protection appliance portfolio and meet the evolving needs of our customers, we made the decision in 2020 to focus our integrated data protection appliance efforts on the PowerProtect DP series.

“The software that powers the X400, PowerProtect Data Manager, remains available in our portfolio and writes to existing and new PowerProtect appliances. We now offer an integrated PowerProtect appliance with the new PowerProtect DP series and a target-based PowerProtect appliance with the PowerProtect DD series.”

Obit

The X400 Appliance was announced as part of Dell EMC’s PowerProtect reveal in May 2019, which introduced new data protection and management software supporting three hardware products: the X400 itself, the IDPA DP Series products running Avamar software for SMB customers, and (later) the DD Series (Data Domain) backup targets. The X400 ran the PowerProtect software and stored the backups.

PowerProtect HW in May 2019.

It was both a scale-up and scale-out system but had limited data reduction in that deduplication was restricted to just one X400 enclosure (or cube) in the scale-out design.

Dell EMC to use Druva for Backup-as-a-Service

Scoop! Dell EMC is in negotiations with Druva to offer the startup’s backup as a service and will announce the deal in weeks, according to industry sources.

The tie-up would give Dell a competitive tool to combat the new wave of as-a-service data management suppliers such as Clumio, Cohesity and HYCU. It would also give Druva a shortcut to Dell’s huge enterprise base.

Druva CMO Thomas Been declined to confirm discussions, pointing us instead to an existing arrangement between the two companies: “Our offering is available to Dell Technologies customers through their Extended Technologies Complete program, which provides customers with a portfolio of complementary third-party offerings for Dell Technologies solutions.”

We had a good rummage in the Dell Tech catalog but were unable to find details of this arrangement. Maybe our search skills were at fault, so we asked Druva to point us in the right direction. We have yet to hear back.

We also asked Dell EMC and a spokesperson told us: “I looked into Druva and what I can tell you is that they’re a member of our Extended Technologies Complete program, which offers customers a portfolio of third-party offerings that can be used alongside our products.”

A reminder: EMC once had its own in-cloud backup services firm, Spanning, which it acquired in 2014. EMC was in turn bought by Dell in 2016, and Dell then sold Spanning Cloud Backup to Insight Venture Partners in 2017. Spanning is now part of Kaseya.

Dell Technologies Capital is an investor in Druva.

IBM storage sales ride (slowly) in tandem with mainframe cycle

IBM executive Arvind Krishna. 5/30/19 Photo by John O’Boyle

Analysis: Storage sales from IBM’s Systems unit declined in its fourth calendar quarter as the z15 mainframe cycle continues to wind down. 

As The Register reports, IBM’s Q4 2020 revenues of $20.4bn were down six per cent Y/Y with its $1.3bn net income 65 per cent down Y/Y. Full year revenues of $73.6bn were down five per cent Y/Y and net income was $4.3bn, down 51 per cent. Yet the IBM money-making machine has delivered, with the company ending the quarter with $14.3bn cash in hand.

IBM annual revenues and net income to 2020 – 11 years of overall decline.

IBM’s revenues are languishing, and have been for seven years. The 2019 Red Hat acquisition has yet to boost revenues. So Q4’s Systems segment storage revenues – $390m – represent just 2 per cent of what IBM Chairman and CEO Arvind Krishna has to worry about. The Systems segment as a whole represents 12 per cent of IBM’s revenues, so in this context the storage component is chump change, and its revenues wax and wane with the mainframe cycle.
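A quick back-of-the-envelope check on those proportions, using only the figures quoted in this article:

```python
# Sanity-check the share of IBM's Q4 2020 revenue represented by
# Systems segment storage, using the figures quoted in this article.
q4_total_revenue_bn = 20.4    # IBM Q4 2020 revenue, $bn
storage_revenue_bn = 0.39     # Systems segment storage revenue, $bn

storage_share = storage_revenue_bn / q4_total_revenue_bn * 100
print(f"Storage share of IBM Q4 revenue: {storage_share:.1f} per cent")
# prints roughly 1.9, i.e. the "just 2 per cent" above
```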

Mainframe cycle blues

The z15 mainframe launched in September 2019, and IBM storage revenues rose in the fourth 2019 quarter and the first and second 2020 quarters, only to fall back in the third and fourth quarters as the new mainframe’s boost to high-end storage array sales weakened.

Spot the z15 mainframe cycle boost to storage revenues in Q4 2019 and Q1 2020.

The previous z14 mainframe was launched in July 2017, some 26 months before the z15. If we assume a roughly 30-month cycle between mainframe generations, the z16 can be expected around April 2022, 30 or so months after the z15’s September 2019 debut. Storage revenues in IBM’s Systems segment can be expected to languish until then, absent any other developments.
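The projection is simple date arithmetic; the z15 launch month comes from this article, and the 30-month gap is the assumption stated above:

```python
from datetime import date

# Project the z16 launch by adding an assumed 30-month cycle
# to the z15's September 2019 launch.
z15_launch = date(2019, 9, 1)
cycle_months = 30

# Carry whole years out of the month count, then rebuild the date.
years, months = divmod(z15_launch.month - 1 + cycle_months, 12)
z16_estimate = date(z15_launch.year + years, months + 1, 1)
print(z16_estimate)  # 2022-03-01, i.e. around April 2022
```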

Outside IBM, a data storage business with $390m in quarterly revenues would be fairly important, but not massive – it is about a quarter of the size of NetApp.

IBM’s quarterly financial reporting structure does not help in building a picture of IBM’s storage business. There are four big business units or segments, with storage products spread across two of them: 

  • Global Business Services
  • Cloud and Cognitive Software
  • Systems
  • Global Technology Services

The Cloud and Cognitive Software segment includes Cloud and Data Platforms, Red Hat and Cognitive Applications.

Storage products such as Spectrum Scale, Cloud Object Storage and the Red Hat storage software products tuck into the Cloud and Cognitive Software segment. IBM does not split out storage product revenue in this segment.

In the Systems segment, CFO Jim Kavanaugh’s statement included this: “Revenue was down 19 per cent, driven by product cycle dynamics. … We saw the product cycle dynamics play out in IBM Z, Power and Storage. Power revenue was down at a level consistent with last quarter, and Storage revenue was also down, driven by high-end storage tied to the IBM Z cycle.”

But he said Red Hat “delivered double-digit growth in both Infrastructure and App Development and emerging technologies.”

We can say with a degree of confidence that the Systems business’ storage revenues are in long-term decline and that two quarters of positive z15 mainframe cycle influence have been lost. But the storage revenues in the Cloud and Cognitive Software segment are unknown. We suspect that the Red Hat acquisition has had a positive impact and think that IBM’s storage cloud (IBM COS) revenues may also be growing.

Therefore the overall Big Blue storage picture may be more upbeat than the Systems storage results indicate.

Hyperconverged diverges into two paths – Enterprise and Edge, says GigaOm

Radar Antenna

According to GigaOm, there are two distinct hyperconverged infrastructure (HCI) markets, namely Enterprise and Edge. Accordingly, the analyst firm has compiled two separate “radar screens”, its version of Gartner’s Magic Quadrant.

VMware, Nutanix and Dell EMC dominate Enterprise hyperconverged systems by revenues. But for the purposes of the GigaOm radar screen, report author Enrico Signoretti sets market share aside and concentrates on technology.

He summarises the enterprise HCI market thus: “Hyperconvergence for the enterprise market is both mature and consolidated. VMware Cloud Foundation (VCF) holds the lion’s share of the market in terms of deployments, and also enjoys technology leadership. At the same time, alternative solution stacks are gaining popularity by offering compelling value and innovative approaches.”

Signoretti thinks “interest has shifted from core virtualization features to the platform ecosystem and integration of core, cloud, and edge components. Other aspects of hyperconvergence infrastructure (HCI) that are quickly gaining traction include automation and orchestration, as well as integration with Kubernetes. The final goal is to build hybrid cloud infrastructures that can provide a consistent user experience across different environments while enabling applications and data mobility.”

This can be achieved both by classic HCI and disaggregated HCI, with separate compute and storage.

Here is the enterprise HCI Radar screen.

Enterprise HCI radar screen. Note: The Radar screen methodology is explained here.

We can see outperformers VMware and Nutanix lead, with Cisco, Dell and HPE in the same leaders’ ring. NetApp, Hitachi Vantara and outperformer Microsoft are entering the leaders’ ring, leaving DataCore alone in the Challengers’ ring.

Close to the edge

Signoretti says there is a lot of overlap between Enterprise and SME/Edge HCI systems. However, SME/Edge HCI systems have a smaller minimum cluster size and higher efficiency in small configurations. They also have software tools to manage numerous sites, which may number in the thousands, as hands-on management at that scale is impractical.

Here is the GigaOM radar screen for small and medium enterprise and Edge.

SME/Edge radar screen

The leaders are StorMagic, then VMware, with HPE and Nutanix following. Then come Dell, StarWind, and outperformers Scale Computing and Microsoft, with outperformer Cisco entering the leaders’ ring from the Challengers’ ring, while DataCore, Hitachi Vantara and Syneto make progress in that ring. NetApp, which has recently announced smaller configurations, is entering the Challengers’ ring from the new entrants’ ring.

Signoretti reckons “market leaders in the enterprise HCI segment can’t scale down their edge solutions too much if they want to maintain full compatibility with their data centre solutions.” The smaller players can capitalise on this by tailoring their systems for efficiency and cost.

Pivot3 declined to participate in Signoretti’s research.

VAST and Nvidia show how to push data to GPUs fast – over NFS

VAST Data and Nvidia today published a reference architecture for jointly configured systems built to handle heavy duty workloads such as conversational AI models, petabyte-scale data analytics and 3D volumetric modelling.

The validated reference set-up shows VAST’s all-QLC-flash array can pump data over plain old vanilla NFS at more than 140GB/sec to Nvidia’s DGX A100 GPU servers. “There’s no need for parallel file system complexity,” Jeff Denworth, VAST Data CMO, told us. All-flash HPC parallel file systems are prohibitively expensive, he argues.

Jeff Denworth.

“We’ve worked with Nvidia on this new reference architecture, built on our LightSpeed platform, to provide customers a flexible, turnkey, petabyte-scale AI infrastructure solution and to remove the variables that have introduced compromise into storage environments for decades,” Denworth said.

Tony Paikeday, senior director of AI Systems at Nvidia, said: “AI workloads require specialised infrastructure, which is why we’ve worked with VAST Data, a new member of the Nvidia DGX POD ecosystem, to combine their storage expertise with our deep background in optimising platforms for AI excellence.”

Reference checking

For the reference architecture setup, VAST Data uses LightSpeed technology (NFS over RDMA with NFS multi-pathing across a converged InfiniBand fabric) to hit 140GB/sec-plus data delivery to a DGX A100. This is about 50 per cent faster than delivery to Nvidia’s prior DGX-2 with its Tesla V100 GPUs.
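For readers wanting to try NFS over RDMA on a stock Linux client, the transport is a standard mount option. This is only a generic sketch; the server name, export path and mount point are hypothetical, and the actual VAST reference architecture adds its own multi-path configuration on top:

```shell
# Generic Linux NFS-over-RDMA client mount (illustrative only; the
# server name, export path and mount point here are hypothetical).
sudo modprobe rpcrdma                               # load the NFS RDMA transport module
sudo mount -t nfs -o vers=3,proto=rdma,port=20049 \
    nfs-server:/export /mnt/data                    # 20049 is the registered NFS/RDMA port
```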

The DGX A100 has eight A100 Tensor Core GPUs. These are up to 20x faster than the Tesla V100s.

VAST Data – Nvidia DGX A100 reference architecture diagram. The file servers are commodity x86 servers. Four DGX A100 systems were used to reach 140GB/sec.

For the test results quoted in the reference architecture, scaling was linear from one to four DGX A100 systems. VAST is working on scaling further, to eight systems.

Are Weka and DDN faster?

The 140GB/sec speed is fast but not a record-breaker. A Nvidia presentation in November 2020 shows a WekaIO/Nvidia DGX-A100 system delivered 152GiB/sec (163.2GB/sec) to a single DGX-A100 across eight InfiniBand HDR links with 12 HPE ProLiant DL325 servers.

The same deck shows a DDN A1400X all-NVMe SSD system, running the Lustre parallel file system, delivering 162GiB/sec (173.9GB/sec) read to a DGX A100. The DDN slide’s text included this comment about its 162GiB/sec throughput: “That’s nearly 1.6X more throughput than with traditional CPU IO path and 60X more than NFS.”
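Since vendors quote a mix of binary (GiB/sec) and decimal (GB/sec) units, it is worth converting before comparing. The conversion behind the bracketed figures above is simply:

```python
# Convert binary GiB/sec figures to decimal GB/sec for comparison.
GIB = 2**30   # 1 GiB = 1,073,741,824 bytes
GB = 10**9    # 1 GB  = 1,000,000,000 bytes

def gib_to_gb(gib_per_sec: float) -> float:
    return gib_per_sec * GIB / GB

for vendor, gibs in [("WekaIO", 152), ("DDN", 162)]:
    print(f"{vendor}: {gibs}GiB/sec = {gib_to_gb(gibs):.1f}GB/sec")
# WekaIO: 152GiB/sec = 163.2GB/sec
# DDN: 162GiB/sec = 173.9GB/sec
```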

Also, the Weka/DDN comparison with VAST is not an apples-for-apples one, as both use Nvidia’s GPUDirect scheme to sidestep the CPU IO path, while the VAST-Nvidia reference architecture does not. However, Nvidia and VAST have demonstrated 162GiB/sec (173.9GB/sec) when using GPUDirect to link VAST’s systems to the DGX A100; the same as DDN.

I guess we can expect an updated reference scheme in due course.

VAST-Nvidia reference architecture systems will be sold through VAST’s channel partners – Cambridge Computer, Trace3 and Mark III in the USA, Xenon in Australia, and Uclick in South Korea. No news yet on European coverage.