
Pure Storage revenues bounce back in Q2

In spite of a revenue decline in the first quarter of fiscal 2024, Pure has bounced back with positive growth in its second quarter.

Update. Dell storage decline corrected to 3 percent from 11 percent (which was ISG decline). 5 Sep 2023.

The company reported revenues of $688.7 million for the quarter ended August 6, a rise of 6.5 percent year-over-year that surpassed its projections. The reported loss was $7.1 million, compared with a $10.9 million profit in the same period a year ago.

Charlie Giancarlo, Pure Storage

CEO Charlie Giancarlo said: “We are pleased with our financial results this quarter. While the macro environment continued to be challenging, we outpaced our competitors and saw strong growth in our strategic investments, particularly in FlashBlade//S, FlashBlade//E and Evergreen//One… I have never been more confident in our long-term growth strategy or in our opportunity to lead this market.”

Indeed, when compared to its competitors, Pure exhibited strong performance. Dell’s storage results showed a 3 percent decrease to $4.2 billion, NetApp fell by 10 percent to $1.43 billion, and HPE experienced a 4.5 percent decline, settling at $1.1 billion.

Q2 also saw record sales for Pure’s scale-out FlashBlade unified file+object products, which included a significant deal (minimum $10 million) for its FlashBlade//S product designed for generative AI work. Sales of the QLC NAND-based FlashBlade//E have grown faster than any previous product launched by Pure.

Subscription services revenue saw year-over-year growth of 24 percent, reaching $288.9 million. This resulted in an ARR of $1.2 billion, a 27 percent increase. Evergreen//One’s consumption-as-a-service subscription sales witnessed a twofold rise year-over-year. The global customer base expanded by 325 this quarter, surpassing 12,000, which now includes 59 percent of the Fortune 500 – a growth from the previous quarter’s 56 percent. This means an addition of 15 Fortune 500 clients.

Pure revenues

CFO Kevan Krysler said: “We were very pleased with record sales across our FlashBlade portfolio, and doubling sales of our Evergreen//One subscription offering this quarter. With our Purity software working directly with raw flash, we have established substantial differentiated advantages and business value for our customers, while at the same time expanding our margins.” 

Pure’s competitors use off-the-shelf SSDs in contrast to Pure’s proprietary direct flash module (DFM) drives. Krysler said: “The majority of the capacity we now ship is based on QLC (4 bits/cell) raw flash,” which should give Pure a pricing advantage over competitors using more expensive TLC (3 bits/cell) commercial SSDs.

“US revenue for Q2 was $495 million dollars and International revenue was $194 million dollars,” he added, implying that Pure has growth opportunities outside the USA.

Pure’s enterprise business and Evergreen//One both surpassed expectations, contributing to the reversal of Q1’s revenue decline. According to Krysler: “That’s a testament to our field really adjusting to our customers’ buying behavior.”

Financial summary

  • Gross margin: 70.7 percent vs 70.1 percent in Q1
  • Operating cash flow: $101.6 million
  • Remaining performance obligations: $1.89 billion, up 26 percent Y/Y
  • Free cash flow: $46.5 million
  • Total cash, cash equivalents & marketable securities: $1.2 billion

Regarding the Portworx Kubernetes storage business, Giancarlo said: “Portworx had a good quarter, so we’re very pleased with the progress overall of Portworx. I would say that the enterprise market for cloud-native applications – for stateful cloud-native applications – has probably progressed a bit slower in the last year than we had expected early on.

“But our expectation is that 5 to 10 years from now, all applications will be designed in a cloud-native way with containers and Kubernetes. So we’re very confident about the future… We’re #1 in that space, and we expect that to continue.”

Disk dying

CTO Rob Lee reiterated Pure’s view on flash replacing disk drives in the earnings call: “Disk is – well, a dead technology spinning, so to speak.” He said Pure’s DFM gave it a competitive lead over SSD-using competitors such as Dell, HPE and NetApp. “We’ve got a three to five-year structural and sustained competitive advantage over, frankly, the rest of the field that I think is trapped on SSD technology.”

IBM FlashSystems use proprietary IBM flash drives and Pure may face stronger competition there.

Giancarlo added: “The last refuge for hard disks now is in the secondary and tertiary tier. And now we’re able to reach price parity with them at a procurement cost and yet have much lower total cost of ownership and be smaller and be more reliable.

“There’s no other markets that are going to hold revenue for hard disks that flash won’t penetrate. And what that means is just lesser revenue and therefore, lesser investment in ongoing development of hard disks. That’s also going to be a problem for the vendors. So it’s unfortunate. I don’t hold any malice. But similar to markets in the past, you’re just – when these transitions take place, CDs over vinyl or DVDs over VHS, there’s just no stopping progress.”

AI

Pure has more than 100 customers using its products in the traditional AI and newer generative AI fields. Giancarlo said: “AI systems are typically greenfield. So we’re not generally replacing. What we are competing with are solely all-flash systems. Hard [disk drive] systems just can’t provide the kind of performance necessary for a sophisticated AI environment.”

Older datasets stored on disk have a growing need to be made accessible for AI processing, and Pure hopes customers will move these datasets to its faster flash storage. Lee said: “That’s where we see a tremendous opportunity for us, especially in our FlashArray//C line.”

The outlook for next quarter is for revenues of $760 million, 12.4 percent higher than a year ago. Krysler said the guidance “assumes continued strong subscription revenue growth fueled by our Evergreen//One subscription services.” 

Giancarlo added: “We’re expecting stabilization through the end of the year and hopefully an improvement towards the end of the year, beginning of next.”

HPE storage biz holds its own against rivals

HPE’s storage business did proportionally better in the third quarter than its main competitors, which all saw steeper revenue declines.

Update: Pure results on Aug 30 showed 6 percent revenue growth, outpacing HPE.

Revenues for the quarter, ended July 31, were $7 billion, up just 0.7 percent year-over-year, with a profit of $464 million, 13.4 percent up annually. The company’s four main business units had mixed fortunes, with compute down 13 percent to $2.6 billion, storage down 4.5 percent to $1.1 billion, Intelligent Edge (Aruba) up a CEO-pleasing 50 percent to $1.4 billion, and passing storage revenues for the first time. HPC and AI revenues of $836 million rose 1 percent year-over-year as a large set of orders started their prolonged delivery and revenue recognition journey.

HPE CEO Antonio Neri focused on operating efficiency, the edge, HPC/AI, and GreenLake numbers in his earnings call comments, saying: “In Q3, our Intelligent Edge business contributed 20 percent of our total company revenue. It is now the largest source of HPE’s operating profit at 49 percent of our total segment operating profit. Our HPE GreenLake hybrid cloud platform is accelerating our other service pivots, delivering an annualized revenue run rate, or ARR, of $1.3 billion, a 48 percent increase year-over-year. Our strategic shift towards edge, hybrid cloud and AI delivered through our HPE GreenLake cloud platform is working, and we are delivering on our financial commitments.”

HPE segment revenues
HPE’s Intelligent Edge revenues (dark blue line) have just exceeded its storage revenues (green line).

The large decrease in compute revenues, 13 percent, contrasted with the 4.5 percent storage decrease. That held up well when compared to recent storage results from Dell, down 11 percent to $3.76 billion, NetApp, down 10 percent to $1.43 billion, and Pure Storage, down 5 percent to $598 million in its prior Q1 FY24 earnings. But in its latest Q2 results, announced on August 30, Pure said it had 6 percent Y/Y revenue growth, much better than HPE, Dell and NetApp.

The economic environment is depressed, but AI and similar digital projects are getting priority in customer IT spending. Neri said: “While the broader IT market is still pressured, demand for our products and services grew sequentially in the third quarter across all key segments of our business, driven by high-growth areas like AI and HPE GreenLake … we exited the quarter with the largest HPC and AI order book we have ever had.”

It was all down to the Alletra storage product line apparently. Interim CFO Jeremy Cox said: “HPE Alletra revenue grew triple digits in Q3 for the fifth consecutive quarter. It is now one of our higher revenue products and thus growth rates may normalize. This product is shifting our mix within Storage to higher-margin, software-intensive revenue and is a key driver of our ARR growth. We’ll continue to invest in R&D and our owned IP products in this business unit, such as our new file-as-a-service and HPE Alletra MP offerings.”

The Alletra file-as-a-service mention refers to the OEM’d VAST Data software, which came too late in the quarter to have any material effect on sales.

HPE did not reveal its all-flash array revenue numbers.

Neri said that HPE’s storage plan was working: “The team and I drove an intentional strategy to pivot that portfolio, which was a conglomerate of different offerings that we built over 15 years or so to one consistent architecture that allows customers to consume data services, both primary and secondary in a cloud-native way and a subscription-based model.

“So, HPE Alletra is our primary storage that now covers pretty much all the price segments, price bands if you will, of the traditional storage from general purpose to business critical to mission-critical. And we address block and file. And in the future, we’re also going to address the object piece.”

No doubt HPE partners Cloudian, Scality, and VAST will all be hoping to supply their object storage software to enable HPE to address the object piece.

Neri added this point about Alletra: “This business went from zero to in excess of $1 billion very, very quickly. And it’s amazing that it’s one of the fastest-growing products in our portfolio, growing triple digits. But what I’m really pleased is that it comes with a significant subscription, which is growing double digits.“

We don’t know which parts of the Alletra portfolio, the Nimble or Primera product components, are growing fastest, but we can be sure that other parts of HPE’s storage portfolio are looking less bright, such as SimpliVity HCI and StoreOnce deduplicating backup appliances. Maybe they will get an Alletra brand makeover.

The outlook for HPE’s next quarter is revenues between $7.2 billion and $7.5 billion, with the $7.35 billion mid-point being 6.6 percent down on a year ago. That would give fiscal 2023 revenues of $29.1 billion, 2.2 percent more than fiscal 2022.

This is low overall growth, and the Q4 number suggests that HPE’s storage revenues may decline again. Perhaps AI storage demand may help growth in fiscal 2024.

Huawei OceanStor A310 feeds Nvidia GPUs at speed

The latest OceanStor all-flash array from Huawei is set to feed Nvidia GPUs faster than the current top-rated system, IBM’s ESS 3500, on a per-rack-unit basis.

Update. Text and graphics corrected to show Huawei A310 has 8 nodes per 5RU chassis. 5 Sep 2023.

Systems are compared using Nvidia’s Magnum IO GPUDirect Storage, in which data is sent directly from an NVMe storage resource to the GPUs without the host system first copying the data into its memory for onward transmission. All the systems involved scale out in some way, which means they can add nodes (controller+drive combinations) to reach a set IO bandwidth level. We compare them on a per-node basis to see how powerful they are.

Huawei briefed us on the coming OceanStor A310 system, the development of which was revealed in July as a data lake storage device for AI. Evangeline Wang, Huawei Product Management and Marketing, said: “We know that for the AI applications, the biggest challenge is to improve the efficiency for AI model training. The GPUs are expensive, and it needs to keep running to train the models as soon as possible. The biggest challenge for the storage system during the AI training is to keep feeding the data to the CPU, to the GPUs. That requires the storage system to provide best performance.”

The A310 has a 5RU enclosure containing up to 96 NVMe SSDs, processors, and a memory cache. The maximum capacity has not been revealed, but 96 x 30TB SSDs would provide 2.88PB. A rack of eight A310s would hold 23PB, which seems data lake size.
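The capacity arithmetic can be sanity-checked in a few lines. A minimal sketch, assuming the 30TB drive size used above (Huawei has not confirmed drive capacities) and eight 5RU chassis in a rack:

# Back-of-the-envelope capacity check for the OceanStor A310 figures above.
# The 30TB per-SSD capacity is an assumption, not a Huawei-confirmed number.
ssds_per_chassis = 96
ssd_capacity_tb = 30                                     # assumed
chassis_pb = ssds_per_chassis * ssd_capacity_tb / 1000   # 2.88 PB per 5RU chassis
rack_pb = 8 * chassis_pb                                  # eight chassis (40RU) is about 23 PB
print(f"{chassis_pb:.2f} PB per chassis, {rack_pb:.2f} PB per rack")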

Each 5RU chassis can deliver up to 400GBps sequential read bandwidth, 208GBps write bandwidth, and 12 million random read IOPS. The chassis contains eight individual nodes. Up to 4,096 A310s can be clustered together and performance scales linearly. A clustered set of nodes shares a global file system. According to a Huawei blog, the company’s Global File System (GFS) “supports the standard protocols (NFS/SMB/HDFS/S3/POSIX/MP-IO) for applications.” Node software can provide data pre-processing such as filtering, video and image transcoding, and enhancement. This so-called near-memory computing can reduce the amount of data sent to a host AI processing system.

Huawei slide

The system uses SmartNICs, with TCP offloading, and employs a massively parallel design with append-only data placement to reduce SSD write amplification. A separate metadata-only partition reduces the garbage collection inherent in having metadata distributed with data.
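Append-only placement with a separate metadata region is a general log-structured technique rather than anything unique to Huawei. A minimal, hypothetical Python sketch of the idea (not Huawei's implementation): data extents are only ever appended in large sequential runs, while small, frequently updated metadata records go to their own log, so metadata churn does not force garbage collection of data segments.

# Illustrative sketch of append-only data placement with a separate metadata log.
# Not Huawei's code; it only shows why the approach limits write amplification.
class AppendOnlyStore:
    def __init__(self):
        self.data_log = []   # large data extents, appended sequentially, never rewritten
        self.meta_log = []   # small metadata records kept in their own partition
        self.index = {}      # object name -> position in data_log

    def put(self, name, payload):
        pos = len(self.data_log)
        self.data_log.append(payload)      # append-only: no in-place overwrite
        self.meta_log.append((name, pos))  # metadata update lands in the metadata log
        self.index[name] = pos

    def get(self, name):
        return self.data_log[self.index[name]]

store = AppendOnlyStore()
store.put("frame-0001", b"...image data...")
print(store.get("frame-0001"))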

Huawei slide

We compared the Huawei A310 sequential bandwidth numbers to other GPU Direct-supporting systems from Dell, DDN, IBM, NetApp, Pure, and VAST Data for which we have sequential bandwidth performance numbers. First of all we looked at per-node numbers:

IBM and DDN are still number 1 and 2 respectively, with VAST third in read bandwidth. Huawei was fourth fastest in read bandwidth terms and third fastest looking at write bandwidth. The A310 chassis is equivalent, in rack unit terms, to five VAST Data Ceres nodes in their 1U enclosures. We can compare systems on performance per RU to compensate for their different enclosure sizes, and the rankings then differ:

Huawei’s A310, with its small nodes, was fastest overall at both sequential reading and writing, with 41.6/80GBps per-RU sequential write/read bandwidth vs IBM’s 30/63GBps numbers. VAST has the third highest read bandwidth per RU at 60GBps, close behind second-placed IBM’s 63GBps/RU. Its write performance, 11.33GBps/RU, is slower.
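For clarity, this is how the per-rack-unit figures fall out of the per-chassis numbers quoted earlier; the IBM and VAST per-RU figures are those cited in this article.

# Derive Huawei's per-RU bandwidth from the per-chassis figures quoted above.
chassis_ru = 5
read_gbps, write_gbps = 400, 208          # per 5RU A310 chassis
print(f"A310: {read_gbps / chassis_ru:.1f} GBps/RU read, "
      f"{write_gbps / chassis_ru:.1f} GBps/RU write")   # 80.0 read, 41.6 write
# Compare with IBM at 63/30 GBps/RU and VAST at 60/11.33 GBps/RU, as cited above.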

VAST Data uses NFS rather than a parallel file system, and the parallel file systems from DDN and IBM are faster at writing.

We don’t know whether Huawei’s GFS is a parallel file system, though we suspect it is. The company said A310 technical documents weren’t ready yet. The A310 should be shipping some time in 2024 and will provide a challenge in the regions where it is available to the other suppliers.

Bootnote

For reference our table of supplier bandwidth numbers is reproduced here:

VAST Data making its presence known in the HPC market

VAST Data is the third most popular clustered file system in US academic and government high-performance computing (HPC) institutions, according to a UCAR survey.

UCAR, the University Corporation for Atmospheric Research, based in Boulder, Colorado, manages the National Center for Atmospheric Research on behalf of the National Science Foundation. It surveyed HPC system professionals in its community earlier this year to find out about their HPC environments and received 54 responses from academic and government institutions. Download the survey results here.

Page 43 of the survey results charted the answer to the question: “What clustered file systems do you have at your institution?” The most popular was Lustre (54.2 percent) followed by Spectrum Scale (now called Storage Scale and originally the General Parallel File System or GPFS) with 41.7 percent. Next was VAST Data with 33.3 percent, with BeeGFS in fourth place with 10.4 percent. Panasas and Ceph are both on 8.3 percent.

UCAR survey chart.

Only 2.1 percent had Weka.io’s file system – the same percentage as for Qumulo, ZFS, and NetApp. 

This is only a snapshot of some US HPC sites, and certainly not exhaustive, but it provides an independent view of VAST Data’s presence in the HPC market. A question mark over the survey is the low placement of Weka.io and its parallel file system software. We asked Weka what it thinks about this chart and survey.

VP product marketing Colin Gallagher told us: “This is a non-representative survey with a small sample set based on what seems to be a single Slack group. As such, the characteristics of the survey respondents cannot accurately reflect the market at large. I’d be loath to treat anything like this as fact or news. These types of surveys can be good for qualitative information, but quantitative conclusions are often very biased or inaccurate. The survey is also poorly designed and conflates technology. For example, VAST is not a clustered file system. In a more rigorous, valid sampling – which unfortunately doesn’t exist – we would see Weka with a larger market share based on what we know of our customer base.”

We compared this survey with one from Hyperion Research in late 2022, looking at HPC systems and their file system usage. It looked at NFS and other scale-out file systems separately from parallel file systems, unlike UCAR, and found many institutions used both. In the parallel file system case, Lustre had the largest share, at 46 percent of 139 respondents, with Spectrum Scale having 22 percent, much lower than in the UCAR survey. BeeGFS had 6 percent and Ceph 4 percent, both lower again than in the UCAR survey.

File system adoption chart

VAST had a 5.3 percent presence in the NAS and scaleout file system environment, with 132 respondents saying they used it in their largest system. Qumulo had a higher ranking than VAST, 8.3 percent, while PowerScale (Isilon OneFS) had the same percentage as VAST. Weka.io does not appear as an individual supplier in Hyperion’s chart.

File system adoption chart featuring VAST

The UCAR survey may have skewed results because the number of respondents is lower than in the Hyperion survey, is focused on atmospheric HPC environments, and, of course, its questions were different. Nevertheless, VAST Data is making its presence felt in the HPC market and, if UCAR’s result is accurate, has a large presence already.

Cisco partnership with Nutanix raises questions over HyperFlex

Cisco and Nutanix have set up a global strategic partnership to produce the Cisco Compute Hyperconverged with Nutanix product, which is to be sold through Cisco’s channel.

It comprises Nutanix Cloud Platform – its hyperconverged software suite – running on Cisco UCS servers, with Cisco networking in the chassis as well. The two claim it is the industry’s most complete hyperconverged solution for IT modernization and business transformation.

Jeremy Foster, SVP and GM for Cisco Compute, said: “Customers are asking for solutions that are simple, sustainable, and future-ready. This partnership answers with a complete solution spanning virtual compute, networking and storage across customer datacenters and public clouds.”

Nutanix chief commercial officer Tarkan Maner echoed this, saying: “As organizations look to keep up with the pace of innovation, they need an integrated  hardware and software platform to support application deployment anywhere. This partnership will deliver an expanded market opportunity for both organizations.”

The Cisco-Nutanix offering integrates Cisco’s SaaS-managed UCS compute, networking, and Intersight management with the Nutanix Cloud Platform suite: Nutanix Cloud Infrastructure, Nutanix Cloud Manager, Nutanix Unified Storage, and Nutanix Desktop Services. It will support UCS rack and blade servers, including initial support for C-Series Servers and planned, future support for UCS X-Series. The offering will be sold by Cisco using its go-to-market reach.

HyperFlex specter at the feast

What you wouldn’t know from this is that Cisco has its own hyperconverged infrastructure (HCI) offering: the HyperFlex HX-Series, based on software it obtained by acquiring HCI vendor Springpath and its HALO software for $320 million in 2017. This software was combined with Cisco’s UCS servers and its networking technology to produce the HyperFlex product line.

Cisco HyperFlex HCI system hardware

Since then the HCI market has become dominated by VMware with vSAN and Nutanix. The other players – such as Cisco HyperFlex, HPE with Nimble dHCI and Simplivity, and Scale Computing – have relatively minor market shares. In January IDC gave VMware a 41.5 percent revenue share ($982.3 million) with Nutanix second on 24.6 percent ($581.2 million). Huawei (4.7 percent), Cisco (4.4 percent), and HPE (3.5 percent) were a long way behind.

GigaOm, in a February HCI Radar report, said networking giant Cisco had fallen back from being an enterprise HCI Leader in January 2021 to being a Challenger in 2023.

In March this year, Cisco introduced HyperFlex Express products, offering a lower-cost entry point than before, plus HyperFlex nodes built with AMD EPYC processors. Cisco’s DD Dasgupta, VP product management, cloud and compute business unit, blogged at the time: “We were able to drastically reduce the number of combinations that went into the new Hyperflex Express product, optimizing available supply and lowering cost. We’re passing on these savings to our customers and partners by reducing the entry point for HyperFlex by up to 50 percent.”

Fast forward five months on from that 50 percent price cut and HyperFlex’s role in the Cisco-Nutanix partnership world is unclear. We asked Cisco some questions about HyperFlex and the new Cisco-Nutanix offering to find out more. We’ve reproduced Cisco’s answers verbatim.

Blocks & Files: What is the positioning of this offering versus Cisco’s HyperFlex hyperconverged products which have Cisco’s HX Data Platform software running on UCS servers with Cisco networking? 

Cisco: This partnership expands choice and a simplified path to best-in-class solutions.  The new partnership opens up opportunities for customers to run the Nutanix platform on Cisco’s industry-leading, cloud-managed compute infrastructure. The companies have a mutually developed roadmap to deliver the full portfolio of Cisco servers, SaaS management, and Nutanix cloud platform capabilities, plus, the latest accelerator, network, and drive technologies offer customers a flexible, integrated hyperconverged platform to modernize their IT environments.  

Blocks & Files: Will Cisco Compute Hyperconverged with Nutanix co-exist with Cisco HyperFlex? 

Cisco: Cisco HyperFlex remains globally available. There are no changes to the HyperFlex product roadmap today. This partnership expands choice and a simplified path to best-in-class solutions.  

Blocks & Files: Is Hyperflex being discontinued? 

Cisco: Cisco HyperFlex remains globally available. Cisco has a strong history of collaborating with industry leaders to create converged solutions. This partnership follows demand from our customers to combine these best-of-breed solutions.  

Blocks & Files: Will there be a HyperFlex-to-Cisco Compute Hyperconverged with Nutanix migration path? 

Cisco: There will be opportunities for collaboration with both Cisco and Nutanix technology partners to build broader solutions with this new offering. Solution partners in the channel will be armed with an exciting new combination of leading technologies.  

We don’t know how HyperFlex use cases would differ from Cisco-Nutanix HCI use cases and think they are still being considered.

Cisco-Nutanix HCI product availability is expected within three months.

Five steps to mastering multicloud management

Three clouds

COMMISSIONED: As an IT leader, you have the daunting task of managing multicloud environments in which applications run on many public clouds and on-premises environments – and even the edge.

Operating in this cloud-native space is like conducting a symphony orchestra but with a major catch: You’re managing multiple orchestras (cloud providers and on-premises systems), rather than one. Each orchestra features its own musicians (resources), unique instruments (services) and scores (configurations). With so many moving parts, maintaining harmony is no trivial pursuit.

Take for example the relationship between different parts of a DevOps team. While developers build and test containerized applications in multiple public clouds, operators are playing a different tune, focused on scaling on-premises systems. These teams accumulate different management interfaces, technologies and tools over time. The result? Layers upon layers of complexity.

This complexity is then magnified when we consider data management challenges. Shuttling data between public clouds and on-premises systems becomes harder with the increase of data gravity, and even container orchestration tools, such as Kubernetes, require deep technical skills when it comes to managing persistent storage. And the data reflects this struggle. According to a recent VMware State of Kubernetes research report published in May, 57 percent of those surveyed cited inadequate internal experience as a main challenge to managing Kubernetes.

Achieving consistent performance, security and visibility across the entire symphony – your IT infrastructure – remains a hurdle when you account for the different instruments, music sheets and playing styles.

A playbook for managing multicloud and Kubernetes storage

There is no silver bullet for managing storage and Kubernetes in multicloud environments. But here are some steps you can take for running modern, cloud-native apps across public clouds and on-premises systems alike.

– Educate. Your IT operations and DevOps teams must learn about the various container and storage services available across public cloud and on-premises systems, as well as how to manage the technologies and tools that make them hum.

– Standardize. You’re not going to have a single tool to handle all of your needs, but you can simplify and streamline. For instance, standardizing on the same storage can help you reduce complexity and improve efficiency across multiple clouds and on-premises environments, making it easier to respond nimbly to shifts in your multicloud strategy. Pro tip: Examine where applications and data are living between on-premises systems and public clouds and whether those workload placements align with your goals.

  – Automate. One of IT’s greatest magic tricks is automating manual tasks. Why manage containers and storage piecemeal? Tools exist to help IT operations staff automate chores such as provisioning storage, deploying containers and monitoring performance, as shown in the sketch after this list.

– Test. Testing and deploying your applications frequently is crucial in multicloud environments with numerous interdependencies. Not only will this help you detect and fix problems, but it will also ensure that your applications are compatible with various cloud and on-premises systems.

– Manage. Pick a multicloud management platform that empowers your team to consolidate activities from discovery and deployment to monitoring and day-to-day management from a single experience spanning multiple public clouds. Streamlining processes will make IT agile when responding to business needs.
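To make the Automate step concrete, here is a minimal sketch using the official Kubernetes Python client to provision persistent storage programmatically rather than by hand. The claim name, namespace, storage class and size are placeholders, not recommendations; your cluster's storage classes will differ.

# Minimal sketch: automate persistent storage provisioning with the Kubernetes
# Python client. The names, size and storage class below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},                      # placeholder claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",                    # placeholder storage class
        "resources": {"requests": {"storage": "100Gi"}},   # placeholder size
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
print("Requested a 100Gi persistent volume claim named app-data")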

Hype versus reality

The last step is critical. Yes, the multicloud management platform provides one unified experience for managing multiple resource types. But the onus is on you to select the right partners who can deliver on their promises and make life easier for your staff, rather than moving your processes to a system that doesn’t meet your IT service requirements.

And while there is talk of a supercloud, or one platform architecture to rule all clouds, solutions are emerging to help you manage your multicloud estate today.

Dell APEX Navigator for Multicloud Storage, available later this year, is a SaaS tool designed to help IT and DevOps teams provide deployment, management, monitoring and data mobility for APEX block and file storage across multiple public clouds and move data between on-premises systems and public clouds.

Dell APEX Navigator for Kubernetes, available next year, is designed to simplify management of Kubernetes persistent storage, enabling storage administrators and DevOps teams to deploy and manage Dell’s Kubernetes data storage software at scale across both on-premises and cloud systems.

Ultimately, these tools, which are integrated into the Dell APEX Console, help customers reduce time switching between different management experiences – so they can focus more on innovation or higher-level business tasks.

As an IT leader, the conductor of your own multicloud symphony, making sure all the orchestras in your IT infrastructure play in tune and at the right tempo is paramount – with consistent governance, monitoring and management.

Just as a skilled conductor must master how to blend each orchestra’s unique sounds to create a unified masterpiece, managing a multicloud environment requires you to master each cloud provider’s offerings and each on-premises system’s operations.

To orchestrate them effectively, you need a strategy that helps you maintain coherence and consistency as well as reliability and performance.

What steps will you take to master your multicloud symphony orchestra?

Click here to learn more about Dell APEX Console.

Brought to you by Dell Technologies.

VMware separates out HCI storage in vSAN Max

VMware has enabled external storage clusters for its vSAN hyperconverged infrastructure (HCI) product, three years after buying Datrium, which focused on the same technology.

The essence of HCI is that compute, storage, and networking are all contained in one chassis, with the system scaling up or down by adding or removing hardware. However, that means if you want additional storage capacity, you have to add more compute as well, and vice versa. Several HCI suppliers partially separated storage from HCI, providing separate resources that could be scaled up and down independently from compute, calling it disaggregated HCI or dHCI.

Nutanix, for instance, provides storage-only Acropolis nodes. HPE-acquired Nimble also has a dHCI product. Datrium was another, and NetApp a fourth with its now-halted SolidFire-based system.

VMware’s Pete Koehler writes in a blog that vSAN Max is “VMware’s new disaggregated storage offering that provides Petabyte-scale centralized shared storage for your vSphere clusters.” It’s based on the vSAN Express Storage Architecture introduced in vSAN v8.0. Koehler writes: “Storage resources are disaggregated from compute resources but achieved in such a way that it maintains the capabilities and benefits of HCI while providing the desired flexibility of centralized shared storage.”

The vSAN Max system provides unified block, file, and object storage. It’s scalable with a 24-host vSAN Max cluster delivering up to 8.6PB of capacity and up to 3.4 million IOPS. 

VMware says the vSAN Express Storage Architecture (ESA), introduced in vSAN v8.0, is an optional, alternative vSAN architecture to the Original Storage Architecture (OSA) that provides different ways to process and store data. ESA provides a log-structured file system, vSAN LFS, that enables vSAN to ingest new data fast, prepare it for a full stripe write, and store metadata in a more efficient and scalable way.

There is a log-structured object manager and data structure, built around a high-performance block engine and key value store that can deliver large write payloads with less metadata overhead. This log-structured design is, VMware says, “highly parallel and helps us drive near device-level performance capabilities in the ESA.” 
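As a generic illustration of the full-stripe write idea described above (a sketch of the concept, not VMware's code): incoming writes are appended to a fast log and only destaged once a complete stripe has accumulated, so the capacity tier never performs a read-modify-write on a partial stripe.

# Conceptual sketch of log ingest plus full-stripe destaging; not VMware's code.
STRIPE_WIDTH = 4      # data blocks per stripe (illustrative)
BLOCK_SIZE = 4096     # bytes per block

class FullStripeWriter:
    def __init__(self, capacity_tier):
        self.capacity_tier = capacity_tier   # stands in for the backing store
        self.log = []                        # fast append-only ingest log

    def write(self, block):
        assert len(block) == BLOCK_SIZE
        self.log.append(block)                             # ingest quickly into the log
        if len(self.log) == STRIPE_WIDTH:                  # a full stripe is ready
            self.capacity_tier.append(b"".join(self.log))  # one large, aligned write
            self.log.clear()

tier = []
writer = FullStripeWriter(tier)
for i in range(8):
    writer.write(bytes([i]) * BLOCK_SIZE)
print(len(tier), "full stripes destaged")   # prints: 2 full stripes destaged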

VMware vSAN Max diagram

A vSAN Max cluster can be set up in a single site or stretched across two. It is managed through vCenter Server exactly like traditional vSAN. Koehler suggests a few use cases including a tier one database storage resource, centralized storage for a set of vSphere clusters, and scalable storage for cloud-native apps.

The new vSAN Max offering is expected to be available in the second half of fiscal 2024, and will be licensed separately from existing vSAN editions, offered as a subscription and licensed on a per-tebibyte basis. The license includes everything needed to run a vSAN Max cluster – no licenses for vSphere etc. are needed. Find out more in a vSAN FAQ on VMware’s website.

A deeper look at high-end storage challenges and opportunities: No more tiers, cries storage architect

Interview Here is the second part of our interview with Mainline Information Systems storage architect Rob Young, following part 1. We talked about proprietary vs standard SSD formats, high-speed cloud block storage, CXL, file vs block and single tier vs multi-tier arrays and more.

Blocks & Files: What advantages does proprietary flash hold versus commercial off the shelf?

Rob Young

Rob Young: Pure [Storage] demonstrates the advantages in its DirectFlash management of their custom modules, eliminating garbage collection and write amplification with zero over-provisioning. With zero over-provisioning you get what you paid for. VAST Data, with its similar scheme, works with cheap (in their words) COTS QLC drives. Their stellar algorithms with wear-level tracking cell history is such a light touch that their terms of service will replace a worn out QLC drive for 10 years. Likewise, we read that VAST are positioned to go from working with 1000 erase cycles of QLC cell to the 500 erase cycles of PLC when PLC becomes a thing. 

What is an advantage of custom design? The ability like Pure to create very large modules (75 TB currently) to stack a lot of data in 5 units of rack space. The drives are denser and more energy-efficient. IBM custom FCMs are very fast (low latency is a significant FCM custom advantage), and have onboard and transparent zero overhead compression. This provides less advantage these days with custom compression at the module level. Intel incorporated QAT tech and compression is happening at the CPU now. In development time QAT is a recent 2021 introduction. At one point, Intel had separate compression cards for storage OEMs. I don’t want to talk about that.

Let’s briefly consider what some are doing with QLC. Pure advertises 2 to 4 ms IO (read) response time with their QLC solution. In 2023 is that a transactional solution? No, not a great fit. This is a tier2 solution – backup targets, AI/ML, Windows shares, massive IO and bandwidth – a great fit for their target audience. 

Solidigm D5-P5336

So, when we discuss some of these solutions keep in mind, you still need something for transactional workloads and there are limited one-size fits all solutions. Now having said that, look at the recent Solidigm announcement of the D5-P5336. They are showing a 110 us small random read described as TLC performance in QLC form-factor. That’s 5x faster than the QLC VAST is currently using. Surely a game changer for folks focused on COTS solutions.

Blocks & Files: Do very high-speed block array storage instances in the cloud using ephemeral storage, in a similar way to Silk and Volumez, represent a threat to on-premises block arrays?

Rob Young: Volumez is a very interesting newcomer. We see that Intersystems is an early showcase customer adopting Volumez for their cloud-based Iris analytics. What is interesting about Volumez is the control plane that lives in the cloud. 

On-prem storage growth is stagnant while clouds are climbing. These solutions are no more of a threat to on-prem than the shift that is organically occurring. 

Secondly, Volumez is a Kubernetes play. Yes, next-gen dev is focused on K8s but many Enterprises are like aircraft carriers and slow to pivot. There are a lot of applications that will be traditional 3-tier VM stack for a long time. What wouldn’t surprise me, with cloud costs typically higher versus on-prem, is we would see Volumez customers at some point do common sense things like test/dev footprint resides on-prem and production in a cloud provider. Production resides in the cloud in some cases just to feed nearby analytics plus it is smaller footprint than test/dev. Cloud mostly owns analytics from here on, the great scale of the shared Cloud almost seals that deal.

Blocks & Files: How might these ephemeral storage-based public cloud storage instances be reproduced on-premises?

Rob Young: Carmody tells us an on-prem version of Volumez will happen. Like Volumez, AWS’s VSA uses ephemeral storage also. How will it happen? JBOF on high-speed networks and it might take a while to become common but suppliers are about today. 

Cloud has a huge advantage in network space. Enterprise on-prem lags cloud in that regard with differing delivery mechanisms (traditional 3-tier VMs and separate storage/network). What Volumez/AWS VSA have in common is there is no traditional storage controller as we know it.

VAST Data D-nodes are pass-thru storage controllers, they do not cache data locally. VAST Data gets away with that because all writes are captured in SCM (among other reasons.) Glenn Lockwood in Feb 2019 wrote a very good piece describing VAST Data’s architecture. What is interesting is the discussion in the comments where the speculation is the pass-thru D-Node is an ideal candidate to transition to straight fabric-attached JBOF. 

Just prior to those comments, Renen Hallak (VAST co-founder) at Storage Field Day spoke about a potential re-appearance of Ethernet Drives. The pass-thru storage controller is no longer at that point. It’s just getting started with limited Enterprise choices but folks like VAST Data, Volumez and others should drive fabric-based JBOF adoption. Server-less infrastructure headed to controller-less. Went off the rails pie-in-the-sky a bit here but what we can anticipate is several storage solutions become controller free with control planes in the cloud and easily pivot back and forth cloud to on-prem. For all the kids in the software pool, “hardware is something to tolerate” is the attitude we are seeing from a new storage generation.

Blocks & Files: How might block arrays use CXL, memory pooling and sharing?

Rob Young: Because it is storage, a CXL-based memory pool would be a great fit. The 250 nanosecond access time for server-based CXL memory? Many discussions on forums about that. From my perspective it might be a tough sell in the server space if the local memory is 3-4x faster than fabric-based. However, and as mentioned above, 250 ns of added overhead on IO traffic is not a concern. You could see where 4, 6, 8, 16 PowerMAX nodes sharing common CXL memory would be a huge win in cost savings and allow for many benefits of cache sharing across Dell’s PowerMAX and other high-end designs.

Blocks & Files: Should all block arrays provide file storage as well, like Dell’s PowerStore and NetApp’s ONTAP do? What are the arguments for and against here?

Rob Young: I touched on a bunch of that above. The argument against is the demarcation of functions, limiting the blast radius. It isn’t a bad thing to be block only. If you are using a backup solution that has standalone embedded storage, that is one thing. But if you as a customer are expected to present NFS shares as backup targets, either locally at the client or to the backup servers themselves, you must have separation. As mentioned, Pure has now moved file into their block, unified block and file.

Traditionally, legacy high-end arrays have been block only (or bolt-on NFS with dubious capability). Infinidat claims 15 percent of their customers use their InfiniBox as file only, 40 percent block and file. The argument for is there is a market for it if we look at Infinidat. Likewise, Pure now becomes like the original block/file NetApp. There is a compelling business case to combine if you can, or the same vendor block usage on one set of arrays, file on the other. Smaller shops and limited budgets combine also.

The argument against it is to split the data into different targets (high-end for prod, etc.) and a customer’s architectural preferences prevail. There is a lot of art here, no “one way” to do things. Let me point out a hidden advantage and a caution regarding other vendors. Infinidat uses virtual MAC addressing (as does Pure – I believe) for IP addressing in NAS. On controller reboot via outage or upgrade, the hand-off of that IP to another controller is nearly instant and transparent to the switch. It avoids gratuitous ARPs. One of the solutions mentioned here (and there are more than one) typically takes 60 seconds for IP failover due to ARP on hand-off. This renders NFS shares for ESXi DataStores problematic for the vendor I am alluding to, and they aren’t alone. Regarding the 15 percent of Infinidat’s customers that are NFS/SMB only; a number of those customers are using NFS for ESXi. How do we know that? Read Gartner’s Peer Insights, it is a gold mine.

Blocks & Files: How might scale-out file storage, such as VAST Data, Qumulo and PowerScale be preferred to block storage?

Rob Young: Simple incremental growth. Multi-function. At the risk of contradiction, Windows Shares, S3 and NFS access in one platform (but not backup!) 

Great SMB advantages of same API/interface, one array as file, other as block. More modern features including built-in analytics at no additional charge in VAST and in Qumulo.  In some cases, iSCSI/NFS transactional IO targets (with caveats mentioned.)  Multiple drive choices in Qumulo and PowerScale. NLSAS is quite a bit cheaper – still – take a dive on the fainting couch! For some of these applications you want cheap and deep. Video capture is one of them. Very unfriendly to deduplication and compression.

In VAST Data’s case, you will be well poised for the coming deluge. Jeff Denworth at VAST wrote a piece The Legacy NAS Kill Shot basically pointing out high bandwidth is coming with serious consequences. That article bugged me ever since I read it, I perseverated on it far too long. He’s prescient, but like Twain quipped: “it is difficult to make predictions, particularly about the future.” What we can say is AI/ML shows you with a truly disaggregated solution that shares everything with no traffic cop is advantageous. 

But the high bandwidth that is coming when the entire stack gets fatter pipelines, not just for AI but in general as PCIe5 becomes common, 64 Gbit FC, 100 Gbit ethernet and higher performing servers will make traditional dual-controller arrays strain under bandwidth bursts. Backup and databases are pathological IO applications, they will be naughty and bursting much higher reads/writes. Noisy neighbor (and worse behavior) headed our way at some point.

Blocks & Files: How do you view the relative merits of single-tiered all-flash storage versus multi-tiered storage?

Rob Young: Phil Soran, founder of Compellent, in Eden Prairie, Minnesota, came up with auto-tiering; what a cool invention – but it now has a limited use case. 

No more tiers. I get it and it is a successful strategy. I’ve had issues with multi-tier in banking applications at month-end. Those blocks that have long since cooled off to two tiers lower are suddenly called to perform again. Those blocks don’t instantly migrate to more performant layers. That was a tricky problem. It took an Infinidat to mostly solve the tiering challenge. Cache is for IO, data is for backing store. To get the data into a cache layer quickly (or it is already there) is key and they can get ahead of calls for IO. 

Everyone likes to set it and forget it with stable/consistent performant all-flash arrays. IBM in their FS/V7K series provides tiering and it has its advantages. Depending on budget and model, you can purchase a layer of SCM drives and hot-up your data to that tier0 SCM layer from tier1 NVMe SSD/FCM – portions of the hottest data. The advantage here is the “slower” performing tier1 is very good performance on tier0 misses. There is a clear use case for tiering as SCM/SLC is still quite expensive. Also, there is your lurker: an IBM FS9000 with IO at less than 300 us, but more than Infinidat SSA. Additionally, tiering is still a good fit for data consumers that can tolerate slower retrieval time like video playback (I’m referring to cheap and slow NLSAS in the stack.)

NetApp and Google deepen Cloud Volumes Service collab

NetApp CloudJumper

The NetApp fully managed Cloud Volumes Service for Google Cloud is now known as Google Cloud NetApp Volumes, with Google assuming the role of the primary provider and manager of the service.

It is based on NetApp’s ONTAP operating system and its file-based data functions running on the Google Cloud Platform as a native GCP service. It provides multiprotocol support for NFS v3 and v4.1, as well as SMB, and has features such as snapshots, clones, replication, and cross-region backup. It is the only Google storage service supporting both NFS and SMB.

Sameet Agarwal, VP and GM for Google Cloud Storage, said: “Today’s announcement extends our ability to deliver first-party storage and enterprise data management services so organizations can rapidly deploy, run, protect, and scale enterprise workloads with the familiar tools and interface of Google Cloud.”

Ronen Schwartz, SVP and GM for Cloud Storage at NetApp, added: “We see our mission as creating cloud storage and data services that are as forward-thinking and easy to use as possible, and this partnership allows us to continue making this vision a reality.”

NetApp made its ONTAP services available on GCP back in 2018 and has the most comprehensive cross-cloud and on-premises file storage system offering available covering AWS, Azure, and Google. Customers with NetApp systems in their datacenters can use AWS, Azure, and GCP datacenters as extensions of their own for tiering, bursting, business continuity, disaster recovery, migration, etc.

Customers can run either Windows or Linux applications as virtual machines in a few clicks and without any refactoring. Storage volumes can scale from 100GiB to 100TiB. There is support for instant capacity additions or changes between performance tiers without downtime. Built-in block-level incremental backups provide protection without needing downtime or slowing app performance.

NetApp tells us that being a Google Cloud first-party service means that its customers now have a smoother and more integrated operational experience including:

  • Full integration with Google Cloud Console, APIs, and gcloud CLI management
  • Full integration with Google Cloud billing – no marketplace hoops to jump through, full-fledged burndown of the Google Cloud Committed Use Discounts (CUD) since this is a first-party offering
  • Improved integration in IAM, Cloud Monitoring, and Cloud Logging for better resiliency
  • Simplified UI, structured around user workflows in the Google Cloud ‘look and feel’
  • Simplified and improved documentation, along with the Feedback button to send recommendations directly to Google Cloud
  • Administrative volume and service actions
  • SOC2 Trust Service Criteria – Type 1 compliant service operations and controls, inherited from Google Cloud
  • Service Level Agreement of 99.95 percent for data plane without CRR (Cross-Region Replication) and 99.99 percent with CRR

NetApp will be working with Google Cloud on the feature roadmap and deeper integration with the array of Google Cloud services such as GCVE, GKE, etc. Read a Google blog and online NetApp documentation to find out more.

Storage news roundup – 24 August

Data protector Acronis has renewed its relationship with the UK’s Southampton Football Club. Channel partner TMT will provide the club with the full suite of Acronis cyber protection offerings.

Cloud and backup storage provider Backblaze has announced price increases. B2 Cloud Storage’s  monthly pay-as-you-go rate increases from $5/TB to $6/TB with free egress for customers for up to three times the amount of data stored. Prices for its Computer Backup business, effective October 3, will increase from $7/month to $9/month.

Postgres contributor EnterpriseDB (EDB) is partnering more with Google Cloud to make two of EDB’s offerings available on Google Kubernetes Engine (GKE): EDB Community 360 PostgreSQL on GKE Autopilot and fully-managed database-as-a-service EDB BigAnimal on Google Cloud.

Victor Chang Cardiac Research Institute (VCCRI) is using Filecoin decentralized storage to store 125+ TiB (137.439 TB) of cardiac research data, including raw datasets from published papers consisting of thousands of images of cells, and encrypted backups of its SyncroPatch machine, which uses a laboratory technique for studying currents in living cells.

FileShadow is a SaaS service that collects a user’s content from cloud storage, email, hard drives, etc., and puts it into a FileShadow Library which can be shared with others. It now enables small businesses and individuals to publish content from their FileShadow Libraries to Facebook Pages.

TrueNAS supplier iXsystems has been awarded a Top Workplaces 2023 honor by Knoxville Top Workplaces and the Knoxville News Sentinel.

Lenovo is a significant all-flash array supplier according to IDC (Q1 CY23):

  • Lenovo is in #4 position WW on AFA Total Storage (Dell, NetApp, Pure and Lenovo)
  • Lenovo is in #2 position WW looking at AFA Price Bands 1-6 ($0-100k) (Dell, Lenovo, Pure and HPE)
  • Lenovo is in #1 position WW and EMEA looking at AFA entry Price Bands 1-4 ($0-25k)

In Q1 CY23, according to IDC, All Flash Arrays accounted for 45 percent of the entire storage array market. For Lenovo, it is actually 61 percent.

Cloud filesystem and services supplier Nasuni has appointed Jim Liddle as its Chief Innovation Officer. Liddle, formerly Nasuni’s VP of Product, was founder and CEO of acquired Storage Made Easy. He will lead the development and implementation of Nasuni’s data intelligence and AI strategies.

Cornelis Networks, Penguin Solutions and Panasas have partnered to define and test a reference design of Penguin Computing’s Altus servers running Scyld ClusterWare and Panasas ActiveStor storage appliances connected with the Cornelis Networks Omni-Path Express fabric. This should save new and existing customers time and money. The Cornelis Omni-Path Gateway is an efficient and cost-effective alternative to direct-connected Panasas storage in data centers with existing file system storage using Ethernet on the back-end. Read a solution brief here.

Peer Software announced PeerIQ, a self-hosted analytics engine that enhances storage analytics, observability and monitoring, with a single pane of visibility into heterogeneous hybrid and multi-cloud storage environments – going beyond the limitations of singular vendor storage systems. PeerIQ is a virtual appliance that contains a dashboard and analytics environment that offers tools for monitoring the health and performance of PeerGFS, Peer’s Global File Service, and an organization’s replication environment.

PeerIQ provides organizations with insights into their storage infrastructure, as the engine ingests storage metadata. The platform then performs analysis and delivers trending information through intuitive visualisation and reporting. Its dashboards are viewable via a web browser and provide a visual and interactive interface that displays telemetry data that is updated automatically.

Rubrik has announced generative AI product integrations with VMware to help customers recover better from malware attacks:

  • Get step-by-step guidance to recover unaffected files from the most recent snapshot and affected files from the snapshot prior to suspicious threat detection — layered on top of a clean virtual machine built from a gold master template.
  • Identify vSphere templates to reconstruct clean and safe virtual machines and avoid introducing undetected vulnerabilities within the operating system.
  • Get clear recommendations on which snapshot or files to select for a clean and successful recovery through Rubrik’s data threat analytics integrated with Azure OpenAI.

Data analytics accelerator SQream announced its no-code ELT and analytics platform Panoply is launching an AI Flex Connector helper which leverages generative AI to streamline the path to business intelligence. This tool will make it easier for users to collect all of their business data – from CRMs, user applications, and other tools – into one single source, and minimize the technical requirements to generate quick data insights.

Cloud datawarehouser Snowflake’s revenue growth continued with a 36 percent Y/Y rise to $674 million in its second FY2024 quarter. Its customer count rose to 8,537 from last quarter’s 8,167, up 370. The loss was $226.9 million, not much different from the year-ago $222.8 million loss on revenues of $497.2 million.

The DRAM recession is ending. TrendForce reports that rising demand for AI servers has driven growth in HBM shipments. Combined with the wave of inventory buildup for DDR5 on the client side, the second quarter saw all three major DRAM suppliers experience shipment growth. Q2 revenue for the DRAM industry reached approximately $11.43 billion, marking a 20.4 percent QOQ increase and halting a decline that persisted for three consecutive quarters. Among suppliers, SK hynix saw a significant quarterly growth of over 35 percent in shipments. 

TrendForce reports NVIDIA’s latest financial report for FY2Q24 reveals that its data center business reached $10.32 billion – a QoQ growth of 141 percent and YoY increase of 171 percent. The company remains optimistic about its future growth. TrendForce believes that the primary driver behind NVIDIA’s robust revenue growth stems from its data center’s AI server-related solutions. It expects NVIDIA to extend its reach into the edge enterprise AI server market, underpinning steady growth in its data center business for the next two years. Which storage suppliers will benefit?

SkiBig3 works on vacation planning for ski resorts in Banff National Park – Banff Sunshine, Lake Louise Ski Resort and Mt. Norquay. It has selected the VergeOS UCI platform over VMware, Hyper-V and public cloud options for its better data protection and scalability.

According to a global study conducted by S&P Global Market Intelligence and commissioned by WEKA, the adoption of artificial intelligence (AI) by enterprises and research organizations seeking to create new value propositions is accelerating, but data infrastructure and AI sustainability challenges present barriers to implementing it successfully at scale. Visit www.weka.io/trends-in-AI to read the full report.

Zadara has partnered with managed services provider Node4 to deliver multi-tenanted immutable and secure storage for its Veeam backup and recovery platform via its zStorage Object Storage offering. 

Switzerland-based Zstor is introducing DapuStor Haishen5 Series PCIe 5.0 SSDs in Europe. They have sequential read and write speeds reaching up to 14,000/8,000 MBps, and 4K steady-state random read and write performance of up to 2,800K/600K IOPS. Form factors include U.2, E1.S and E3.S, and they use Marvell Bravera SC5 SSD controllers.

Frore goes insane in the membrane with cooling tech

Frore Systems showed off a 64TB SSD-based storage device at Flash Memory Summit 2023, cooled by its AirJet Mini chips, which remove 40W of heat without using bulky heat sinks, fans or liquid cooling.

AirJet is a self-contained, solid-state, active heat sink module that’s silent, thin, and light. It measures 2.8mm x 27.5mm x 41.5mm and weighs 11g. It removes 5.2W of heat at a 21dBA noise level, while consuming a maximum 1W of power. AirJet Mini generates 1,750 Pascals of back pressure, claimed to be 10x higher than a fan, enabling thin and dust-proof devices that can run faster because excess heat is taken away.

Think of the AirJet Mini as a thin wafer or slab that sits on top of a processor or SSD and cools it by drawing in air, passing it over a heat spreader that physically touches the device, then ejecting it from another outlet. Alternatively, the AirJet Mini can be connected to its target via a copper heat exchanger.

Inside the AirJet Mini are tiny membranes that vibrate ultrasonically and generate the necessary airflow without needing fans. Air enters through top surface vents and is moved as pulsating jets through the device and out through a side-mounted spout.

Cross-sectional AirJet Mini diagram from Frore Systems

AirJet is scalable, with additional heat removed by adding more wafers or chips. Each chip removes 5W of heat, two chips can remove 10W, three chips 15W, and so on. A more powerful AirJet Pro removes more heat – 10.5W of heat at 24dBA, while consuming a maximum 1.75W of power.
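
To make that scaling arithmetic concrete, here is a minimal illustrative sketch (not Frore-supplied code) that estimates how many modules a given heat load would need, using the per-module figures quoted above. The actual module count in the 40W SSD demo is not specified in Frore’s materials, so the totals are purely indicative.

```python
import math

# Per-module figures quoted in this article. Illustrative assumption: heat
# removal scales linearly with module count, as the "5W, 10W, 15W" claim implies.
MODULES = {
    "AirJet Mini": {"heat_removed_w": 5.0, "max_power_w": 1.0},
    "AirJet Pro":  {"heat_removed_w": 10.5, "max_power_w": 1.75},
}

def modules_needed(heat_load_w: float, module: str) -> tuple[int, float]:
    """Return (module count, worst-case power draw in watts) for a heat load."""
    spec = MODULES[module]
    count = math.ceil(heat_load_w / spec["heat_removed_w"])
    return count, count * spec["max_power_w"]

# The 64TB U.2 SSD demo is said to dissipate about 40W.
for name in MODULES:
    count, power = modules_needed(40, name)
    print(f"{name}: {count} modules, drawing up to {power}W")
```

By this back-of-the-envelope reckoning, a 40W load would need around eight Minis or four Pros.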

AirJet can be used to cool thin and light notebook processors or SSDs, and enable them to run faster without damage. Running faster produces more heat, which the AirJet Mini removes.

OWC Mercury Pro in its 3.5-inch carrier, with 8 x M.2 SSDs inside, 4 on top, 4 below, each fitted with an AirJet Mini

OWC built a demonstration portable SSD-based storage device in a 3.5-inch carrier, which it exhibited at FMS 2023. Inside the Mercury Pro casing are 8 x 8TB M.2 2280 format SSDs, each with an attached AirJet Mini. Its bandwidth is between 2.2GBps and 2.6GBps for sequential writes. We don’t know how it would perform without the Frore cooling slabs, though.

Speed increases may not be the only benefit, however, as a similar-sized Mercury Pro U.2 Dual has an audible fan. Frore’s cooling does away with the noise and needs less electricity.

We could see notebook, gaming system and portable SSD device builders using Frore membrane cooling technology so their devices can be more powerful without needing noisy fans, bulky heatsinks or liquid cooling. 

OWC has not committed to making this a product. Get a Frore AirJet Mini datasheet here.

NetApp revenue drops for yet another quarter

NetApp’s revenues fell year-on-year for a third consecutive quarter, reflecting the challenging economic climate, although they came in above the company’s own estimates.

In its first fiscal quarter of 2024, ended July 28, revenues were down 10 percent year-over-year to $1.43 billion. This was, however, above its forecast midpoint. NetApp reported a profit of $149 million, a 30 percent fall from the previous year. The hybrid cloud segment generated revenues of $1.28 billion, a 12.3 percent decrease, while public cloud revenues stood at $154 million, marking a 16.7 percent increase. The public cloud annual run rate (ARR) saw a modest 6 percent rise to $619 million, but that growth has some way to go before it offsets the hybrid cloud revenue decline of $180 million this quarter.

CEO George Kurian said: “We delivered a solid start to fiscal year 2024 in what continues to be a challenging macroeconomic environment. We are managing the elements within our control, driving better performance in our storage business, and building a more focused approach to cloud.”

The market was affected by the challenging economic situation, with muted demand and lengthened sales cycles. Billings dropped 17 percent annually to $1.3 billion. All-flash array sales, presented in ARR terms, were $2.8 billion, a 7 percent drop on a year ago. There is positive momentum around NetApp’s recently introduced AFF C-Series array, which uses more affordable QLC flash. The product is pacing to be the quickest-growing all-flash system in the company’s history, and NetApp expects AFA sales to rise. NetApp launched its SAN-specific ASA A-Series ONTAP systems in May, and Kurian hopes they will “drive share gains in the $18 billion SAN market.”

Financial summary

  • Operating cash flow: $453 million, up 61 percent year-over-year
  • EPS: $0.69 vs $0.96 a year ago
  • Share repurchases and dividends: $506 million
  • Cash, cash equivalents, and investments: $2.98 billion

A concerning note is that NetApp’s product sales have been on a downward trend for the past five quarters. At present, they stand at $590 million, marking a 25 percent year-on-year decline. Service revenues witnessed a 5 percent rise to $842 million but fell short of offsetting the decline in product sales.

Addressing the performance of NetApp’s all-flash and public cloud revenues, Kurian mentioned “focusing our enterprise sellers on the flash opportunity and building a dedicated model for cloud … The changes have been well received, are already showing up in pipeline expansion, and should help drive top line growth in the second half.” 

Kurian said flash revenues were particularly good last year because NetApp benefited “from elevated levels of backlog that we shipped in the comparable quarter last year. If you remove that backlog, flash actually grew year-on-year this quarter.” NetApp is second in the AFA market behind Dell, according to IDC. The CEO expects NetApp’s “overall flash portfolio to grow as a percentage of our business through the course of the year,” with AFA sales growing faster than hybrid flash/disk array sales.

Public cloud is a particular problem, with Kurian saying: “I want to acknowledge our cloud results have not been where we want them to be and assure you we are taking definitive action to hone our approach and get back on track … First party storage services, branded and sold by our cloud partners, position us uniquely and represent our biggest opportunity.” That means AWS, Azure, and Google, with news coming about the NetApp-Google offerings.

He added in the earnings call that “subscription is where we saw a challenge, both a small part of cloud storage subscription as well as CloudOps and we are conducting a review” to find out more clearly where things went wrong.

NetApp acknowledged the surge in interest in generative AI and said it was well represented in customers’ projects.

NetApp and Pure Storage

NetApp has been announcing its all-flash array storage ARR numbers for the past seven quarters. That gives us a means of comparing them to Pure Storage’s quarterly revenues, either by dividing NetApp’s AFA numbers by four to get a quarterly number, or multiplying Pure’s quarterly revenues by four to get an ARR. We chose the former, normalized it for Pure’s financial quarters, and charted the result: 

By our reckoning Pure’s revenues, based on all-flash sales, passed NetApp’s last quarter, but NetApp has regained the lead with its latest quarter’s revenues.
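
For illustration, here is a minimal sketch of that divide-by-four normalization using the figures quoted in this article. Note that the charted comparison also aligns NetApp’s numbers with Pure’s fiscal calendar, which this toy calculation skips.

```python
# Illustrative arithmetic only, using figures reported in this article.
netapp_afa_arr_musd = 2800.0      # NetApp all-flash ARR: $2.8 billion
pure_quarterly_rev_musd = 688.7   # Pure Storage Q2 FY2024 revenue: $688.7 million

# Dividing an annual run rate by four gives a rough quarterly-equivalent number.
netapp_quarterly_equiv = netapp_afa_arr_musd / 4

print(f"NetApp AFA quarterly-equivalent: ${netapp_quarterly_equiv:,.1f}M")
print(f"Pure quarterly revenue:          ${pure_quarterly_rev_musd:,.1f}M")
print("NetApp ahead" if netapp_quarterly_equiv > pure_quarterly_rev_musd else "Pure ahead")
```

On these numbers, NetApp’s $700 million quarterly-equivalent all-flash figure edges out Pure’s $688.7 million in reported quarterly revenue.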

NetApp was asked in the earnings call about Pure’s assertion “that there won’t be any new spinning disk drives manufactured in five years.” Kurian disputed this, saying: “When you cannot support a type of technology, like our competitors cannot, then you have to throw grenades and say that that technology doesn’t exist because you frankly can’t support it.”

Next quarter’s revenues are expected to be $1.53 billion, plus or minus $75 million, representing an estimated 8 percent annual decline. Kurian said NetApp expects to see “top line growth in the back half of FY’24,” meaning the third and fourth quarters should see a revenue uptick. That should be mostly due to increased AFA sales, with some contribution from the public cloud business.