VAST Data is the first storage provider to be integrated with Cisco’s Nexus 9000 Ethernet switches in the networking giant’s Nexus HyperFabric for Ethernet AI stacks connected to Nvidia GPU farms.
Cisco has developed programmable and high-bandwidth Silicon One ASIC chips for its fastest network switches and routers. The June 2023 G200 product, for example, with 512 x 100GE Ethernet ports on one device, supports 51.2 Tbps and features advanced congestion management, packet-spraying techniques, and link failover. The Silicon One hardware is available as chips or chassis without Cisco software. Cisco has developed its Ethernet AI Fabric, using Silicon One, which is deployed at three hyperscale customers.
Renen Hallak
Nexus HyperFabric AI clusters, using Nexus switches, are designed to help enterprises build AI datacenters using Nvidia-accelerated computing, which includes Tensor Core GPUs, BlueField-3 DPUs, SuperNICs, and AI Enterprise software, through Cisco networking and the VAST Data Platform. This VAST system also supports BlueField-3 DPUs. VAST has certified Cisco Nexus Ethernet-based switches with its storage, delivering validated designs. Cisco customers can monitor and correlate storage performance and latency using VAST’s APIs, pulling both network and storage telemetry back to the Nexus HyperFabric.
Renen Hallak, CEO and co-founder of VAST Data, said: “This is the year of the enterprise for AI. Traditionally, enterprises have been slower to adopt new technologies because of the difficulty of integrating new systems into existing systems and processes. This collaboration with Cisco and Nvidia makes it simple for enterprises to implement AI as they move from proof of concept to production.”
Jonathan Davidson, Cisco EVP and GM of Networking, added: “With an ecosystem approach that now includes VAST Data and Nvidia, Cisco helps our enterprise customers tackle their most difficult and complex networking, data and security challenges to build AI infrastructures at any scale with confidence.”
Jonathan Davidson
The Cisco Nexus HyperFabric features:
Cisco cloud management capabilities to simplify IT operations across all phases of the workflow.
Cisco Nexus 9000 series switches for spine and leaf that deliver 400G and 800G Ethernet fabric performance.
Cisco Optics family of QSFP-DD modules to offer customer choice and deliver super high densities.
NVIDIA AI Enterprise software to streamline the development and deployment of production-grade generative AI workloads.
NVIDIA NIM inference microservices that accelerate the deployment of foundation models while ensuring data security, and are available with NVIDIA AI Enterprise.
NVIDIA Tensor Core GPUs starting with the NVIDIA H200 NVL, designed to supercharge generative AI workloads with performance and memory capabilities.
NVIDIA BlueField-3 data processing units (DPUs) and BlueField-3 SuperNICs for accelerating AI compute networking, data access, and security workloads.
Enterprise reference design for AI built on NVIDIA MGX, a modular and flexible server architecture.
The VAST Data Platform, which offers unified storage, database and a data-driven function engine built for AI.
The BlueField-3 DPUs can also run security services like the Cisco Hypershield, which enables an AI-native, hyperdistributed security architecture, where security shifts closer to the workloads needing protection.
VAST and Cisco claim they are simplifying network management and operations across all infrastructure endpoints. Congestion management with flow control algorithms, and visibility with real-time telemetry, are provided by the Cisco Nexus Dashboard, which is managed in the private cloud (on-premises).
Select customers can access the VAST Data Platform with Cisco and Nvidia in beta now, with general availability expected in calendar Q4 of 2024.
Comment
Cisco has developed a FlashStack AI Cisco Verified Design (CVD) with Pure Storage for AI inferencing workloads using Nvidia GPUs. A similar FlexPod AI CVD does the same for NetApp. In February, Cisco said more Nvidia-backed CVDs will be coming in the future, and it looks like the VAST Data example has just arrived.
We understand that HyperFabric will support AI training workloads. This leads us to understand that a higher-speed Silicon One chip is coming, perhaps offering 76.5 Tbps or even 102.4 Tbps.
In effect, it looks like Cisco has developed a VAST Stack AI-type offering that could pump data to Nvidia GPU farms faster than its existing arrangements with NetApp and Pure Storage.
Data protector N2WS is enabling server backups in AWS to be restored to Azure in a cross-cloud backup and disaster recovery feature.
N2WS (Not 2 Worry Software) provides IaaS cloud-native backup, DR, and archiving facilities for larger enterprises and MSPs. It protects RDS, Aurora, RedShift and DynamoDB databases, S3 and VPC settings in AWS. The company offers agentless, application-consistent SQL Server backup and performs disaster recovery for Azure VMs and Disks in the same target region. N2WS claims users can perform disaster recovery for Azure virtual machines and disks in minutes. The N2WS service is sold through the AWS and Azure Marketplaces.
Ohad Kritz.
Ohad Kritz, CEO and co-founder of N2WS, said in a statement: “N2WS’ cloud-native backup and disaster recovery solution really makes a difference for organizations. With cyber threats on the rise, we have to keep innovating. As leaders in the industry, it’s our job to stay ahead.”
The BDR (Backup & Disaster Recovery) capabilities between Amazon Web Services (AWS) and Microsoft Azure mean users are protected against an AWS cloud failure, we’re told. The BDR features allow enterprises and MSPs to back up servers in AWS and quickly recover volumes in Azure, offering cross-cloud protection and ensuring compliance with new data isolation regulations.
N2WS has announced a number of additional features:
Immutable snapshots in Amazon S3 and EBS, and Azure
Consolidated reports highlighting all customers’ backup policies, a claimed game-changer for enterprises and MSPs managing extensive backup environments with hundreds of policies.
VPC Capture & Clone: ELB enhancement – users can capture and clone all meaningful networking and configuration settings, including Elastic Load Balancers, enabling organizations to restore services during a regional outage and ensure security configurations are applied across all environments.
Disaster Recovery for DynamoDB. Until now, only same-region restore was supported for DynamoDB tables. DynamoDB tables can now be copied between AWS regions and accounts. This allows for instant restoration in the event of a full-scale regional outage or malicious activity that locks users out of their AWS accounts. Additionally, it enables the migration of DynamoDB table data between regions.
NIC/IP Support during Instance Restore. Secondary IP and additional NIC can now be added during an instance restore, enabling users to modify the network settings to ensure proper communication between a restored instance and its environment.
Time-Based Retention. New time-based backup retention periods can be selected in addition to the generation-based retention periods already in place, providing more flexibility in how retention is configured. This is available for all target types and storage repositories.
Options to Customize Restore Tags. When restoring a target, users now have a comprehensive toolset to assist in editing tags. Previously, they either chose to restore tags or not. Now, they can add, modify, and delete them.
N2WS is a Clumio competitor and, with this release, aims to help its enterprise and MSP customers combat cybersecurity attacks, ensure data sovereignty, enhance data security, and optimize costs.
Startup AirMettle has joined the STAC Benchmark Council. AirMettle’s analytical data platform is designed to accelerate exploratory analytics on big data – from records to multi-dimensional data (e.g. weather) to AI for rich media – by integrating parallel processing within a software-defined storage service deployed on-prem and in clouds.
…
Catalogic has announced the newest version of its Catalogic DPX enterprise data protection software, focusing on enhancements to the DPX vStor backup repository technology. There are significant advancements in data immutability and recovery functionalities, and a new vStor Snapshot Explorer feature, all designed to bolster the security and flexibility of the company’s flagship enterprise backup solution. Users can now directly access and recover files on snapshots stored in vStor, simplifying the recovery process and reducing recovery time during critical operations.
…
Cloudera has acquired Verta’s Operational AI Platform. The Verta team will join Cloudera’s machine learning group, reporting to Chief Product Officer, Dipto Chakravarty. They will draw on their collective expertise to help drive Cloudera’s AI roadmap and enable the company to effectively anticipate the needs of its global customer base. Founded on research conducted at MIT by Dr. Manasi Vartak, Verta’s former CEO, and then further developed with Dr. Conrado Miranda, Verta’s former CTO, Verta was a pioneer in model management, serving, and governance for predictive and generative AI (GenAI). It addresses one of the biggest hurdles in AI deployments by enabling organizations to effectively build, operationalize, monitor, secure, and scale models across the enterprise. Verta’s technology simplifies the process of turning datasets into custom retrieval-augmented generation applications, enabling any developer—no matter their level of machine learning expertise—to create and optimize business-ready large language models (LLMs). These features—along with Verta’s GenAI workbench, model catalog, and AI governance tools—will enhance Cloudera’s platform capabilities as it continues to deliver on the promise of enterprise AI for its global customer base.
…
Lakehouse supplier Dremio has confirmed support for the Apache Iceberg REST Catalog Specification. This is the foundation for metadata accessibility across Iceberg catalogs. With this new capability, Dremio is able to seamlessly read from and write to any REST-compatible Iceberg catalog, we’re told, and provide customers with the open, flexible ecosystem needed for enterprise interoperability at scale. This news follows Dremio’s recent integration of Project Nessie into Dremio Software where customers can now use a powerful Iceberg catalog everywhere they use Dremio.
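For readers wanting to see what a REST catalog connection looks like in practice, here is a minimal sketch using the open source PyIceberg client against a generic REST-compatible Iceberg catalog. The endpoint URI, token, and table names are placeholder assumptions, not Dremio-specific values.

```python
# Sketch: connecting to a REST-compatible Iceberg catalog with PyIceberg.
# The catalog URI, token, and table identifier are placeholders; this is a
# generic REST catalog client, not Dremio-specific code.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "rest_catalog",
    **{
        "type": "rest",
        "uri": "https://catalog.example.com/api/catalog",  # assumed REST catalog endpoint
        "token": "REPLACE_WITH_TOKEN",
    },
)

# Enumerate namespaces and load a table through the standard REST spec.
print(catalog.list_namespaces())
table = catalog.load_table(("analytics", "orders"))  # hypothetical namespace.table
print(table.schema())
```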
…
ExaGrid, which says it’s the only Tiered Backup Storage solution with Retention Time-Lock that includes a non-network-facing tier (creating a tiered air gap), delayed deletes and immutability for ransomware recovery, announced it’s achieved “Veeam Ready-Object” status, verifying that Veeam can write to ExaGrid Tiered Backup Storage as an S3 object store target, as well as the capability to support Veeam Backup for Microsoft 365 using S3 directly to ExaGrid.
Mike Snitzer.
…
Hammerspace has appointed a long-time leader in the Linux kernel community – Mike Snitzer – to its software engineering team, where he will focus on Linux kernel development for Hammerspace’s Global Data Platform and on accelerating advancements in standards-based Hyperscale NAS and Data Orchestration. He joins Trond Myklebust, Hammerspace CTO, as the second Linux kernel maintainer on the Hammerspace engineering staff. Snitzer has 24 years of experience developing software for Linux and high-performance computing clusters.
…
IBM has produced a Storage Ceph Solutions Guide. The target audience for this publication is IBM Storage Ceph architects, IT specialists, and storage administrators. This edition applies to IBM Storage Ceph Version 6. Storage Ceph comes with a GUI called Dashboard, which can simplify deployment, management, and monitoring. The Dashboard chapter features an Introduction, and then Connecting to the cluster Dashboard, Expanding the cluster, Reducing the number of monitors to three, Configuring RADOS Gateway, Creating a RADOS Block Device, Creating an image, Creating a Ceph File System, and Monitoring.
…
We asked the LTO organisation about tape endurance and customer tape copy refresh operations (re-silvering) frequency. It replied: “Life expectancy and migration serve distinct purposes in data management. The life expectancy of LTO tape cartridges is typically around 30 years under the conditions set by the tape manufacturer, and this ensures that the quality of archived data remains intact for reliable reading and recovery. However, individual tape admins may choose to migrate their data to a new cartridge more often than this.
“Typically, migration is performed every 7 to 10 years, primarily due to evolving infrastructure technologies. When a customer’s tape drives start ageing, getting sufficient service and support can become more difficult in terms of finding parts and receiving code updates, and this is completely natural with any type of electronic goods in the market. Customers will therefore often take advantage of the highest capacities of the latest generations, so there are fewer pieces of media to manage, at the same time that slots are freed within libraries to include more cartridges and increase the library capacity without the need to buy more libraries or expansions.
“Some tape admins might decide to retain drives to read older cartridges, and only migrate opportunistically as these cartridges are required to be read. For example, LTO-3 was launched back in 2005 – nearly 20 years ago – and yet today you can still buy an LTO-5 brand new tape drive which will read all the way back to LTO-3 media, enabling customers that have kept it all this time to still read it.
“When it comes to TCO, customers will need to decide on their migration strategy in order to calculate the differences between tape and disk.”
…
AI is driving the need for a new category of connectivity devices – PCIe retimers – to scale high-speed connections between AI accelerators, GPUs, CPUs, and other components inside servers. Marvell expanded its connectivity portfolio with new PCIe Gen 6 retimers (in 8- and 16-lane versions) built on the company’s 5nm PAM4 technology. The Alaska P PCIe retimer product line delivers reliable communication over the physical distances required for connections inside servers. According to Marvell, the 16-lane product operates at the lowest power in the market today – it’s sampling now to customers and ecosystem partners.
…
Micron has achieved its qualification sample milestone for CZ120 memory expansion modules using Compute Express Link (CXL). Micron is the first in the industry to achieve this milestone, which accelerates the adoption of CXL solutions within the datacenter to tackle growing memory challenges stemming from existing data-intensive workloads and emerging AI and ML workloads.
The Micron CZ120 memory expansion modules, which utilize CXL, provide the building blocks to address this challenge. These modules offer 128 GB and 256 GB densities and enable up to 2 TB of added capacity at the server level. This higher capacity is complemented by a bandwidth increase of 38 GBps that stems from saturating each of the PCIe Gen5 x8 lanes. Traditional SaaS enterprise workloads such as in-memory databases, SQL Server, OLAP and data analytics see a substantial performance increase when the system memory is augmented with CZ120 memory modules, delivering up to a 1.9x TPC-H benchmark improvement. Enhanced GPU LLM inferencing is also facilitated with Micron’s CZ120 memory, leading to faster time to insights and better sustained GPU utilization.
…
William Blair analyst Jason Ader tells subscribers that MSP backup services supplier N-able is exploring a potential sale after attracting interest from private equity. While the news is yet to be confirmed, N-able is reportedly in talks with peers in the software sector and private equity firms, with specific mention of Barracuda Networks, a cybersecurity company owned by KKR. Barracuda, which specializes in network security, was acquired by KKR in 2022 from Thoma Bravo for $4 billion. Similarly, N-able was spun out of SolarWinds in 2021, which was acquired by Silver Lake and Thoma Bravo for $4.5 billion in 2016, resulting in the two firms each owning about a third of N-able today.
…
Nvidia announced the world’s 28 million developers can download NVIDIA NIM inference microservices that provide models as optimized containers to deploy on clouds, data centers or workstations, giving them the ability to build generative AI applications for copilots, chatbots and more, in minutes rather than weeks. Nearly 200 technology partners — including Cadence, Cloudera, Cohesity, DataStax, NetApp, Scale AI and Synopsys — are integrating NIM into their platforms to speed generative AI deployments for domain-specific applications, such as copilots, code assistants and digital human avatars. Hugging Face is now offering NIM — starting with Meta Llama 3. Over 40 NVIDIA and community models are available to experience as NIM endpoints on ai.nvidia.com, including Databricks DBRX, Google’s open model Gemma, Meta Llama 3, Microsoft Phi-3, Mistral Large, Mixtral 8x22B and Snowflake Arctic.
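As a rough illustration of how a developer consumes one of these endpoints, the sketch below calls a hosted NIM microservice through an OpenAI-compatible chat completions API. The base URL, model identifier, and environment variable are assumptions for illustration rather than values taken from this announcement.

```python
# Minimal sketch: calling a hosted NIM endpoint through an OpenAI-compatible
# chat completions API. The base URL, model name, and API key variable below
# are illustrative assumptions, not values confirmed by this article.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # key from an NVIDIA developer account
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",                 # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what a NIM microservice is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```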
…
Veeam backup target appliance Object First has announced expanded capacity of 192 TB for its Ootbi disk-based appliance. This means up to 768 TB of usable immutable backup storage per cluster. Ootbi version 1.5 is generally available today. General availability of the Ootbi 192 TB appliance is expected worldwide from August 2024. There are now three Ootbi models:
Ootbi 64 TB model: 10 x 8 TB SAS HDDs configured in RAID 6
Ootbi 128 TB model: 10 x 16 TB SAS HDDs configured in RAID 6
Ootbi 192 TB model: 10 x 24 TB SAS HDDs configured in RAID 6
…
On June 4, RelationalAI, creators of the industry’s first knowledge graph coprocessor for the data cloud, will be presenting alongside customers AT&T, Cash App, and Blue Yonder at the Snowflake Data Cloud Summit. Its knowledge graph coprocessor is available as a Snowflake Native App on the Snowflake Marketplace. Backed by Snowflake founder and former CEO, Bob Muglia, RelationalAI has taken $122 million in total funding.
…
Enterprise scale-out file system supplier Qumulo is the first networked storage supplier to join the Ultra Ethernet Consortium (UEC). It is collaborating with Intel and Arista Networks to advance the state of the art in IT infrastructure at the intersection of networking, storage, and data management. These technologies enhance the performance and operations of Qumulo’s Scale Anywhere Data Management platform. Qumulo has deployed over an exabyte of storage across hundreds of customers jointly with Arista Networks EOS-based switching and routing systems. Ed Chapman, vice president of business development and strategic alliances at Arista Networks, said: “Qumulo joining the UEC is further validation that Ethernet and IP are the right foundation for the next generation of general purpose, cloud, and AI computing and storage.”
…
AI startup SambaNova has broken a record in GenAI LLM speed, with Samba-1 Turbo reaching a record 1,084 tokens per second on Meta’s Llama 3 Instruct (8B), according to Artificial Analysis, a provider of objective benchmarks & information about AI models. This speed is more than eight times faster than the median across other providers. SambaNova has now leapfrogged Groq, with the latter announcing 800 tokens per second in April. For reference, OpenAI’s GPT-4o can only generate 100 tokens per second and averages 50-60 tokens per second in real-world use.
…
The Samsung Electronics Union is planning to strike on June 7. TrendForce reports that this strike will not impact DRAM and NAND flash production, nor will it cause any shipment shortages. Additionally, the spot prices for DRAM and NAND had been declining prior to the strike announcement, and there has been no change in this downtrend since.
…
The SNIA’s Storage Management Initiative (SMI) has made the SNIA Swordfish v1.2.7 Working Draft available for public review. SNIA Swordfish provides a standardized approach to manage storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their datacenters. Swordfish v1.2.7 has been released in preparation for DMTF’s Redfish version 2024.2, enabling DMTF and SNIA to jointly deliver new functionality. The release also contains expanded functionality for Configuration Locking.
…
Data warehouser and wannabe data lake supplier Snowflake announced Polaris Catalog, a vendor-neutral, open catalog implementation for Apache Iceberg — the open standard of choice for implementing data lakehouses, data lakes, and other modern architectures. Polaris Catalog will be open sourced in the next 90 days to provide enterprises and the entire Iceberg community with new levels of choice, flexibility, and control over their data, with full enterprise security and Apache Iceberg interoperability with Amazon Web Services (AWS), Confluent, Dremio, Google Cloud, Microsoft Azure, Salesforce, and more.
…
Snowflake has adopted NVIDIA AI Enterprise software to integrate NeMo Retriever microservices into Snowflake Cortex AI, Snowflake’s fully managed large language model (LLM) and vector search service. This will enable organizations to seamlessly connect custom models to diverse business data and deliver highly accurate responses. In addition, Snowflake Arctic, the open, enterprise-grade LLM, is now fully supported with NVIDIA TensorRT-LLM software, providing users with highly optimized performance. Arctic is also now available as an NVIDIA NIM inference microservice, allowing more developers to access Arctic’s efficient intelligence. NVIDIA AI Enterprise software capabilities to be offered in Cortex AI include NeMo Retriever (info retrieval with high accuracy and powerful performance for RAG GenAI within Cortex AI) and Triton Inference Server, with the ability to deploy, run, and scale AI inference for any application on any platform. NVIDIA NIM inference microservices – a set of pre-built AI containers and part of NVIDIA AI Enterprise – can be deployed right within Snowflake as a native app powered by Snowpark Container Services. The app enables organizations to easily deploy a series of foundation models right within Snowflake.
…
StorONE will be showcased at VeeamON from June 3-5 in Fort Lauderdale, FL. StorONE seamlessly transforms into a backup target, integrating flawlessly with Veeam. The platform brings three distinct advantages to the backup and archival use case specific to Veeam:
Maximized Drive Utilization: More data can be stored on fewer drives without the performance penalty of de-duplication.
Built-In Security: From immutable snapshots to multi-admin approval, data security is managed at the storage layer for rapid recovery.
Fast Restores and Production Capability: StorONE is a fully featured array capable of fast restores and running as a production copy if required and has the necessary Flash resources.
“Veeam provides for backup data movement and cataloging along with security features, while StorONE adds an extra, complementary layer of security, including multi-admin approval for changes,” said Gal Naor, CEO of StorONE. “StorONE also unlocks many advanced features of Veeam, as it is a fully-featured array that can be configured for backups and high-performance storage, allowing Veeam restores and data integrity features to operate efficiently yet at a dramatically less expensive price point than competing backup solutions.”
…
TrendForce reports that a reduction in supplier production has led to unmet demand for high-capacity orders since 4Q23. Combined with procurement strategies aimed at building low-cost inventory, this has driven orders and significantly boosted enterprise SSD revenue, which reached $3.758 billion in 1Q24 – a staggering 62.9 percent quarter-over-quarter increase. Demand for high capacity, driven by AI servers, has surged. North American clients are increasingly adopting high-capacity QLC SSDs to replace HDDs, leading to an estimate of more than 20 percent growth in Q2 enterprise SSD bit procurement. This has also driven up Q2 enterprise SSD contract prices by more than 20 percent, with revenue expected to grow by another 20 percent.
…
Western Digital says that on Monday, June 10, 2024, at 1:00 p.m. Pacific / 4:00 p.m. Eastern, Robert Soderbery, EVP and GM of its Flash business, and other senior execs will host a webcast on the “New Era of NAND.” They’ll share WD’s view of the new dynamics in the NAND market and its commitment to NAND technology innovation. There’ll be a Q&A session, and the live webcast will be accessible through WD’s Investor Relations website at investor.wdc.com, with an archived replay available shortly after the conclusion of the presentation.
…
Software RAID supplier Xinnor is included in the Arm Partner Program. Xinnor’s latest solution brief, “xiRAID Superfast RAID Engine for NVMe SSD on Arm-Based BlueField3 DPU,” is now available at the Arm partner portal. “There is an insatiable amount of data being produced today, especially with advances in AI,” said Kevin Ryan, senior director of partner ecosystem marketing at Arm. “More than ever, these increasingly complex workloads require high-performance and efficient data storage solutions, and we look forward to seeing how Xinnor’s addition to the Arm Partner Program will enable greater innovation in this space.”
SPONSORED FEATURE: There’s a pressing need for efficient new data storage solutions given the growing trend of enterprises now deploying AI-enabled applications.
Where megabyte and terabyte storage loads were once commonplace for mere document and single image-type workloads, petabyte (1K terabytes) and even some exabyte (1K petabytes) jobs are now in production.
Factors that have fueled a boom in AI applications include the rise of large language models (LLMs) alongside systems such as facial recognition software and recommendation engines on streaming services, all used to improve user experiences and business processes. Across industries, there’s a growing need for automation, data analysis and intelligent decision-making. AI can automate repetitive tasks, analyze vast datasets to uncover patterns and make data-driven predictions or recommendations. This translates to potentially increased efficiency, productivity and innovation in various fields.
All of this entails vast amounts of data coming from social networks, GPS transmitters, security cameras, point-of-sale locations, remote weather sites and numerous other sources. This trend demands high-performance storage solutions to handle the large volumes of unstructured data involved in AI training and inferencing which can be spread across both on-premises and cloud environments.
A recent IEEE Spectrum report, “Why AI Needs More Memory Than Ever,” explored the ever-increasing data storage demands of AI systems, particularly focusing on the growing size of LLMs. It suggests that besides the demand for high performance, low power, low cost and high capacity, there is also an increasing demand for more smart management functions in or near memory to minimize data movement. As a result, the trend toward deploying hybrid clouds, where all of this is possible, is getting traction.
Traditionally, AI implementation has been marked by siloed solutions and fragmented infrastructure.
“When your applications and tools are running mostly in the cloud, it’s imperative for users to put their data closer to where these tools and applications run,” says Kshitij Tambe, Principal Product Manager at Dell Technologies. “So now if you have your data sitting on premises, and you are building some of these tools and applications to run in the cloud, then there is a big disparity. If you have one thing running in the cloud and enterprise data in the datacenter, this becomes very problematic. So that’s where the need for these hybrid cloud models will come in.”
Why RAG adds even more data to AI systems
The LLMs that provide the foundation of AI applications and workloads are powerful and require lots of storage, but they can only generate responses based on the data they’ve been trained on. To address this and ensure access to up-to-date information, some AI systems utilize a process called Retrieval Augmented Generation (RAG). RAG integrates information retrieval with prompts, allowing the LLM to access and leverage external knowledge stores. This approach necessitates storing both the base LLM and the vast amount of data it retrieves for real-time use.
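To make the storage implication concrete, here is a minimal, self-contained sketch of the RAG pattern: retrieve relevant snippets from an external knowledge store and prepend them to the prompt. The toy keyword-overlap retriever stands in for a real vector index; in production, that external store and its embeddings are what add the extra data volume described above.

```python
# Minimal RAG sketch: retrieve relevant snippets from an external store and
# prepend them to the prompt. A toy keyword-overlap scorer stands in for a
# real vector index.
def retrieve(query: str, knowledge_store: list[str], top_k: int = 2) -> list[str]:
    q_terms = set(query.lower().split())
    scored = sorted(knowledge_store,
                    key=lambda doc: len(q_terms & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, knowledge_store: list[str]) -> str:
    context = "\n".join(retrieve(query, knowledge_store))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = [
    "APEX File Storage for Azure is based on PowerScale OneFS.",
    "RAG retrieves external documents at query time.",
    "LLMs are trained on fixed snapshots of data.",
]
print(build_prompt("What is APEX File Storage based on?", store))
```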
With companies – especially long-established ones – building and using many different types of storage and storage devices over years in datacenter, edge and cloud deployments, it becomes a complex problem to manage data across multiple locations at the same time. What some storage admins wouldn’t give to have a single-screen, real-time look at all a company’s storage workloads – whether in production or not – wherever they are in the world!
That was a pipe dream for the longest time. But perhaps not anymore.
New data management platforms and processes have emerged in the last year or so to handle these spread-out, next-generation workloads. One example is Dell APEX File Storage in the Microsoft Azure cloud, a NAS platform built to meet AI capacity, performance and data management requirements spanning multicloud environments, and part of Dell’s AI-Ready Data Platform.
Dell APEX File Storage for Microsoft Azure, which became generally available April 9th, bridges the large gap between cloud storage and AI-driven insights, says Dell. It also allows customers a degree of flexibility in how they pay for the service.
At the heart of Dell APEX File Storage for Azure lies PowerScale OneFS, a high-performance scale-out file storage solution already deployed by more than 16,000 customers worldwide.
By bringing PowerScale OneFS to the Azure cloud, Tambe says: “Dell enables users to consolidate and manage data more effectively, reduce storage costs and enhance data protection and security – all while leveraging native cloud AI tools to arrive at insights faster.”
APEX File Storage for Azure serves as a versatile connector to smooth the transition during cloud transformation and enable secure connections to all storage nodes, no matter what type of storage is utilized. A key bonus: the Microsoft interface and control panels have natural familiarity for IT administrators while the PowerScale OneFS replicates the user experience that storage IT professionals are familiar with on-premises.
The APEX File Storage for Azure solution is based on PowerScale OneFS and validated to work with other Dell solutions such as PowerEdge. APEX configurations and specifications include support for up to 18 nodes and 5.6PiB in a single namespace; no other provider can make this claim, boasts Dell. Thus, Dell APEX File Storage for Microsoft Azure puts its stake in the ground with the assertion that it is the most efficient scale-out NAS solution now in the market.
Analysis conducted by Dell indicates that in comparison to Azure NetApp Files, for example, Dell APEX File Storage for Microsoft Azure enables 6x greater cluster performance, up to 11x larger namespace, up to 23x more snapshots per volume, 2x higher cluster resiliency, and easier and more robust cluster expansion.
“Typically, customers might have three nodes, four or five nodes, but there is flexibility to go all the way up to 18 nodes in a single cluster,” says Tambe. “The new architecture of APEX is such that the larger the cluster size, and the larger your data set, it becomes more and more efficient – efficient in the sense of even by the metric of how much usable space you have in your data set.”
Integration and deployment on Microsoft Azure
As for data management, APEX File Storage for Azure offers a new path with integration of high-performance storage capabilities to deploy on Microsoft’s Azure infrastructure. The idea is to let admins easily move data from on-premises to the cloud using advanced native replication without having to refactor any storage architecture. That can deliver huge time savings which subsequently enable data management capabilities to help organizations design, train and run AI-enabled workloads faster and more efficiently, says Dell.
APEX File Storage for Azure leverages Azure’s cloud infrastructure and functionalities to benefit AI tasks in a few ways. Developing infrastructure for advanced AI models necessitates significant investment, extending beyond powerful compute resources to encompass critical data storage infrastructure. Training datasets can range in size from terabytes to petabytes and require concurrent access by numerous processes. Saving checkpoints, each of which can consist of hundreds of gigabytes, is equally vital.
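As a back-of-envelope illustration of why checkpoints reach hundreds of gigabytes, the sketch below assumes mixed-precision training with the Adam optimizer (roughly 18 bytes per parameter); the figures are generic estimates, not numbers from Dell or Microsoft.

```python
# Back-of-envelope checkpoint sizing, assuming mixed-precision training with
# Adam: ~2 bytes (bf16 weights) + 4 (fp32 master copy) + 8 (optimizer moments)
# per parameter. Illustrative only; real frameworks and sharding vary.
def checkpoint_size_gb(params_billions: float, bytes_per_param: int = 18) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9  # gigabytes

for p in (7, 70, 175):
    print(f"{p}B-parameter model: ~{checkpoint_size_gb(p):,.0f} GB per full checkpoint")
# 7B -> ~126 GB, 70B -> ~1,260 GB, 175B -> ~3,150 GB
```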
APEX File Storage directly integrates with several of the most common AI tools, including Azure AI Studio, to change the way developers approach generative AI applications and help simplify the journey from concept to production. It’s a developer’s playground for evaluating responses from large language models and orchestrating prompt flows, says Dell, ensuring optimal performance and scalability.
And since OneFS supports S3 as an access protocol, getting APEX File Storage to work with Azure AI Studio should be easy. Developers can point Azure AI Studio, via the OneLake Data Gateway, directly at a OneFS directory, for example. This allows them to use files on OneFS clusters (AFS or on-prem) without copying the data to Blob Storage, so fine-tuning of AI models can run with the files remaining in a OneFS filesystem.
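A minimal sketch of that S3 path, assuming a OneFS cluster with the S3 protocol enabled; the endpoint URL, bucket name, and credentials below are placeholders rather than values from Dell’s documentation.

```python
# Sketch of reading training files over OneFS's S3 protocol with boto3.
# The endpoint, bucket name, and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://onefs.example.com:9021",  # assumed OneFS S3 endpoint
    aws_access_key_id="ONEFS_ACCESS_KEY",
    aws_secret_access_key="ONEFS_SECRET_KEY",
)

# List objects in a (hypothetical) bucket mapped to a OneFS directory,
# then stream one file without copying it to Blob Storage first.
for obj in s3.list_objects_v2(Bucket="training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="training-data", Key="dataset/part-0001.parquet")["Body"]
```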
For providing scalability, APEX File Storage utilizes Azure’s cloud-native technologies, allowing it to elastically scale storage capacity and performance based on AI workload demands. This helps ensure smooth operation, even when dealing with large datasets used in AI training and processing.
For integration, APEX File Storage integrates directly with the Azure architecture, facilitating data transfer between on-premises and cloud environments. This eliminates the need to redesign storage infrastructure when moving AI workloads to the cloud. This combination creates the foundation for a universal storage layer that simplifies storage management in multicloud environments, says Dell.
For data management and protection, APEX File Storage offers features such as advanced native replication, data deduplication and erasure coding. These functionalities assist with data redundancy, security and efficient storage utilization, which are all crucial aspects of managing large datasets for AI applications.
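A quick worked comparison shows why erasure coding matters for efficient storage utilization at this scale; the 8+2 scheme below is a generic illustration, not Dell’s specific layout.

```python
# Raw-capacity overhead: 3-way replication versus a generic k+m erasure code.
# The 8+2 scheme is a common illustration, not a Dell-specific parameter.
def overhead(data_units: int, redundancy_units: int) -> float:
    return (data_units + redundancy_units) / data_units

print(f"3-way replication: {overhead(1, 2):.2f}x raw capacity per usable TB")
print(f"8+2 erasure code:  {overhead(8, 2):.2f}x raw capacity per usable TB")
# 3.00x versus 1.25x -- why erasure coding matters at petabyte scale
```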
Dell preceded the Microsoft Azure APEX initiative with an AWS version of the service last year. This stands as an example of Dell’s commitment to offering a wide range of storage and data management options for different cloud platforms to meet customer requirements.
Porsche Motorsport says NetApp is its exclusive Intelligent Data Infrastructure partner, accelerating access to Porsche’s data to drive trackside decision-making.
NetApp already sponsors the TAG Heuer Porsche Formula E race car team, which has earned four victories in the 2023-2024 Formula E season. Now NetApp has new relationships with Porsche Penske Motorsport in the IMSA WeatherTech SportsCar Championship (IWSC) and World Endurance Championship (WEC).
Gabie Boko, NetApp CMO, said: “Motorsports generate massive amounts of data during and even before a race that helps them develop their race strategy and respond to changing conditions. The Porsche Motorsport teams need reliable and instantaneous access to their data to stay competitive.”
IWSC is a sports car racing series based in the United States and Canada organized by the International Motor Sports Association. The FIA (Fédération Internationale de l’Automobile) WEC is a car racing world championship which includes the famous Le Mans 24 Hours race. Penske raced a Porsche 963 in the 2023 season. The IMSA series includes the Rolex 24 at Daytona, and Penske races its 963 cars in the IWSC series as well.
Porsche Motorsport can access its trackside and garage data in real time, we’re told, supporting driver and team performance. NetApp technologies being used include:
Cloud Volumes ONTAP with cloud-stored data available anywhere
BlueXP data control plane used to create local, trackside copies of datasets to conduct low-latency analysis for real-time decision-making
Cloud backup to keep data available and safe
The TAG Heuer team uses data-driven simulations to help pick the best time to activate the car’s Formula E Attack Mode, which unlocks an additional 50 kilowatts of engine power for a burst of extra speed.
Penske IMSA race car
Porsche Penske Motorsport will use NetApp tech in North America to share and analyze the data it needs to drive similar complex decisions in IWSC and WEC races.
Carlo Wiggers, Director Team Management, Business Relations, and Esports at Porsche Motorsport, said: “Motorsports are more about technology than almost anything else. A skilled driver is only as good as the vehicle and race strategy can empower them to be.”
Wiggers said NetApp enables the Porsche Motorsport teams to have data available where it’s needed: “Our teams travel around the world for race events but still need a connection to the data generated by our home base in Germany. NetApp … can deliver that data reliably and speedily, giving us the confidence to expand our partnership to include the IWSC and WEC events in North America.”
Nasuni is undergoing a website redesign, including brand messaging, visuals, and a logo refresh, driven by a fluid data concept.
Cloud file services and collaboration supplier Nasuni, which hired Asim Zaheer as its CMO in October last year, reckons the brand’s new look and feel highlights an industry shift from rigid, hardware-based NAS solutions to more fluid, software-defined storage orchestrations in the hybrid cloud era.
Asim Zaheer
Zaheer said: “A brand identity is so much more than logos and colors – it’s the foundation of everything that represents a company and embodies its key values. Nasuni’s rebrand reflects a modern identity for a modern product, keeping pace with the company’s technological innovation. Additionally, this initiative encompasses the insights provided from our customers, industry experts, and top analyst firms.”
Nasuni, which is privately-owned and does not file quarterly numbers, says it has experienced rapid growth, surpassing the $100 million ARR milestone in January and now having gone past $130 million. It claims to have seen significant growth in data under management with the big three cloud providers: a 63 percent year-over-year increase in data held in Microsoft Azure; 37 percent for AWS; and 270 percent in Google Cloud. At that point, Nasuni managed more than 170 petabytes of data inside the top three CSPs.
Velocity Partners CEO Stan Woods, who collaborated with Nasuni on the new branding, said: “A clear, compelling brand runs circles around a weak, unfocused brand every day of every week. Nasuni has combined a terrific brand story with its already strong tech advantage.”
We’re told that the high-impact, fluid design, featuring a splash of liquid, represents data that should flow freely and be securely available anywhere and anytime. The green in the fluid element is a nod to Nasuni’s legacy color palette. The hexagon serves as a visual metaphor for the strength, efficiency, and services Nasuni offers, providing a solid boundary for the ever-evolving “fluid” data.
Asigra is increasing the number of SaaS apps it protects and has a SaaSAssure platform that allows MSPs to offer data protection.
The Canada-based data protector now has pre-configured integrations to protect customer data in Salesforce, Microsoft 365, Microsoft Exchange, SharePoint, Atlassian’s JIRA and Confluence, Intuit’s Quickbooks Online, Box, OneDrive, and HubSpot, with more coming.
Eric Simmons
Asigra CEO Eric Simmons said: “SaaS app data protection has become a legal obligation and a crucial aspect of maintaining reputation and financial security.”
We think he might be overstating things here. National regulations over compliance and privacy may well apply with legal force to SaaS app operators, but data protection, specifically in the backup sense, generally does not, unless it is mentioned in compliance requirements such as GDPR, HIPAA, or PCI DSS.
A Microsoft 365 connector bundle covering Exchange, OneDrive, SharePoint and Teams costs $1.30 per user per month. Other business connector apps – ADP, Big Commerce, DocuSign, CRM Dynamics, Epicor, FreshBooks, Freshdesk, HRBamboo, Monday.com, Salesforce, ServiceNow, Slack, QuickBooks, Zendesk, Box, Dropbox – cost $3.00 per user per app per month. OEM and enterprise agreements are available.
Asigra cites BetterClouds’ State of SaaSOps survey to say that SaaS apps will make up 85 percent of all business software in 2025. It claims that, with 67 percent of companies using SaaS apps that can experience data loss due to accidental or malicious deletions, it is imperative that users protect their own information. The SaaS app providers protect their own infrastructure but not customer data.
SaaSAssure app coverage
SaaSAssure is built on AWS and claimed features include:
Multi-tenant SaaS app coverage featuring data assurance, control, risk and compliance mitigation
Multi-Person Approvals (MPA), Multi-Factor Authentication (MFA), AES 256-bit encryption, ransomware protection, and more
Choice of backup targets, including Asigra Cloud Storage, Bring Your Own Storage (BYOS), or a data sovereignty location of the customer’s choice
Quick to set up and easy to use, progressing from start to protection in under five minutes
Multi-tenant capable single dashboard for required actions and notifications to maximize IT resources
Pre-configured multi-tenant SaaS App Integrations with the user only required to configure authorizations
Asigra says SaaSAssure is complementary to existing backup solutions, allowing MSP partners to expand their service portfolios and revenues without having to switch from existing backup software partner(s). It also includes Auvik SaaS Management to discover shadow IT and sanctioned SaaS app utilization enterprise-wide.
Competitor HYCU’s R-Cloud SaaS data protection scheme is claimed to cover more than 200 SaaS apps. It can be used directly by enterprises or through an MSP. The Own Company (formerly OwnBackup) targets mission-critical SaaS app backup. Druva covers Microsoft 365, G-Suite, Slack, and Salesforce. Asigra is ahead of Druva and OwnBackup in the number of SaaS apps protected but behind HYCU.
SaaSAssure is available for immediate deployment. MSPs can register to receive updates or become a Launch Partner here. Check out a SaaSAssure video here.
Clumio co-founder and CEO Poojan Kumar has stepped back from operational control to become part-time board chairman, succeeded as CEO by CRO Rick Underwood.
The 2017-founded company provides SaaS data protection services to Amazon’s S3, EC2, EBS, RDS, SQL on EC2, DynamoDB, VMware on AWS, and Microsoft 365, storing its backups in virtual air-gapped AWS repositories.
Underwood joined Clumio as CRO in November last year. Seven months later he is taking over the top job. Three months after he joined, the company raised $75 million in a Series D funding exercise, with total funding rising to $262 million.
There was a fourfold growth in annual recurring revenue (ARR) for Clumio in 2023, the privately-owned company claims, with double-digit millions of dollars in ARR and more than 100 PB of cloud data protected. Following the last funding round, Clumio said it would develop protection for cloud databases, data lakes, high-performance storage, and support for other “major cloud providers.”
Clumio co-founder Kaustubh Patil is listed on LinkedIn as VP Engineering but Kumar’s post says CTO Woon Ho Jung runs products and engineering. Patil doesn’t appear on Clumio’s leadership webpage. We’ve asked the company about this.
B&F looks forward to seeing Clumio’s services appear in Azure and the Google Cloud Platform, and extending to possibly cover other SaaS applications. Perhaps there will be a stronger focus on ransomware and security generally. Asigra, Cohesity-Veritas, Commvault, Druva, HYCU, Keepit, Rubrik, and other SaaS data protection companies will be watching what happens closely.
Interview. Yogesh Badwe, chief security officer at SaaS-based data protector Druva, caught up with Blocks & Files for a Q&A session to discuss how ransomware as a data security problem is affecting cyber insurance.
Blocks & Files: Is ransomware a data security problem rather than a firewall, anti-phishing, credential stealing exercise?
Yogesh Badwe: Ransomware is a serious concern for businesses, and data security is absolutely a major part of that. If an attack is successful, ransomware can impact confidentiality, integrity and/or availability of data, and a strong approach to data security can reduce the probability of the negative outcomes that are associated with ransomware. While there is no silver bullet to preventing ransomware, ensuring a strong approach to data security alongside some of these other critical cyber hygiene practices – like properly segmented firewalls, anti-phishing practices, and strong password policies and management to prevent tactics like credential stealing – are all critical pieces of the puzzle to reducing the likelihood of being impacted by ransomware.
Yogesh Badwe
Blocks & Files: As ransomware attacks are increasing, what will happen to cyber insurance premiums?
Yogesh Badwe: As ransomware attacks increase, we have also seen an increasing trend of ransomware victims paying ransom. Average ransomware payments are also on the uptick. This likely changes the calculus for insurance underwriters. In response, we have and will continue to see a few things:
Scoped-down cyber insurance policies, with sub limits enforced on ransomware payments
Increased premiums
Premiums that are tightly tied to real-time risk postures (as opposed to one-time understandings of clients’ risks)
Increased stringency on risk assessments during the initial policy/premium formulation
Requirements for continuous monitoring – it is in the best interest of cyber insurance providers to monitor for and inform their clients of outside-in cyber weaknesses that they see. We will see increasing use of this outside-in open source security monitoring to mitigate risk faced by the clients.
Blocks & Files: Do open source supply chains contribute to the risks here and why?
Yogesh Badwe: Yes, vulnerable OSS supply chains can be a surface area that is targeted by ransomware threat actors for initial intrusion or lateral movement inside an organization. We have also seen persistent and well-resourced threat actors stealthily insert backdoors inside commonly used libraries.
Blocks & Files: What role does non-human identity (NHI) security play in this?
Yogesh Badwe: NHI is an increasing area of focus for security practitioners. From a ransomware perspective, NHIs are yet another vector for initial intrusion or lateral movement inside an organization. Orgs have spent a lot of time securing human identities via SSO and strong policies around password hygiene – rotation, session lifetime, etc.
To put it into perspective, there are more than 10x as many NHIs as human identities, and as an industry we haven’t spent enough time improving NHI security posture. In fact, over the last 18 months, the majority of public breaches have had some sort of NHI component associated with them.
The reality is that NHIs cannot have the same security policy enforcement that we assume for human identities. For example, holistically for all NHIs in an organization, it is difficult to have strict provisioning and de-provisioning processes around NHIs similar to what we do for humans – to enforce MFA and password rotation, and to notice the misuse of NHIs as compared to human identities.
Due to NHI sprawl, it is trivial for an attacker to get their hands on an NHI, and typically, NHIs have broad sets of permissions that are not monitored to the extent that human identities are. We’re seeing a number of startup companies focused on securing NHIs get top-tier VC funding due to the nature and uniqueness of this problem.
Blocks & Files: Should there be a federal common standard for cybersecurity?
Yogesh Badwe: Absolutely. Approximately two decades ago we had GAAP (generally accepted accounting principles) come out, which laid down a clear set of guidelines, rules, expectations, and processes for the bare-minimum, baseline accounting standard in the finance world.
We don’t have a GAAP for security. What we have is a variety of overlapping (and sometimes subjective) industry standards and industry frameworks that different organizations use differently. Duty of care as it relates to reasonable security measures that an entity should take is left to the judgment and description of each individual entity without any common federally accepted definition of what good security looks like.
Only a federal common standard on cybersecurity will help convert the tribal knowledge of what good looks like into an enforceable and auditable framework like GAAP.
Blocks & Files: Can AI be used to improve data security, and how do you ensure it works well?
Yogesh Badwe: Generative AI can be leveraged to improve a number of security paradigms, including data security. It can play a transformative role in everything from gathering and generating relevant context about data and its classification, to surfacing anomalies around permissions and activity patterns, to helping security practitioners prioritize, action, and remediate data security concerns.
One simple example is a security analyst reviewing a data security alert around activity related to sensitive data. He or she can leverage generative AI to get context about the data, context about the activity, and the precise next steps to triage or mitigate the risk. The possibilities to leverage AI to improve data security are limitless.
How do we ensure it works well? Generative AI itself is a data security problem. We have to be careful in ensuring the security of data that is leveraged by GenAI technologies. As an example, we have to think about how to enforce permission and authorization that exists on source data, as well as on the output generated by the AI models.
It’s essential to continue with human-in-the-loop processes, at least initially, until the use cases and technology mature where we can rely on it 100 percent and allow it to make state changes in response to data security concerns.
Unified high-performance storage platform Quobyte has taken the wraps off its distributed File Query Engine, intended to allow users to query file system metadata at high speed.
Quobyte allows small teams to run large-scale high-performance computing (HPC) infrastructures across various industry segments, including education and health research.
Aimed at environments with massive data sets, File Query Engine offers a range of capabilities, including the ability to query user-defined metadata for AI/ML training, enabling users to label files with data directly, instead of managing small metadata files.
Additionally, administrators can quickly answer operational questions, such as identifying space-consuming cold files or locating files owned by specific users.
File Query Engine replaces slow file system tree walks (“find”), offering a faster and more efficient alternative for large volumes. It is integrated with Quobyte’s distributed and replicated key-value store, which stores metadata.
And, unlike other products, the engine does not require an additional database layer, resulting in faster queries and “significant” resource savings, claimed Quobyte. Queries are executed in parallel across all metadata servers for fast scans across the entire cluster or select volumes.
“File Query Engine is a game-changer for our customers,” said Bjorn Kolbeck, CEO of Quobyte. “It streamlines the process of querying file system metadata, offering fast and efficient results even for large datasets, AI, and machine-learning workloads.”
The technology is part of Quobyte release 3.22 and is automatically available without any configuration. Users can run file metadata queries using the command-line tool “qmgmt,” which supports output in CSV or JSON formats.
Additionally, queries can be initiated via the Quobyte API, providing “flexibility and ease of use”, said the provider.
Among various existing use cases, Quobyte unified block, file and object storage is being used by the HudsonAlpha Institute for Biotechnology in the US, to store primary life sciences and genomics data in a hybrid disk+flash system.
Nutanix notched up yet another quarter of solid revenue, ARR and customer count growth in its latest SEC results report and signed a deal with Dell aimed at capturing displeased VMware customers.
Revenue generated in Nutanix’ Q3 of fiscal 2024, ended April 30, was $524.6 million, up 17 percent year-on-year and beating the consensus Wall St analyst estimates by $9 million. The company reported a $15.6 million net loss, much better than the year-ago loss of $70.8 million. The third quarter is seasonally lower than Nutanix’ second quarter, and the software biz has not yet reached a sustained profitability status after last quarter’s landmark first ever profit.
President and CEO Rajiv Ramaswami said in a statement: “We delivered solid third quarter results reflecting disciplined execution and the strength of our business model,” with CFO Rukmini Sivaraman adding: “Our third quarter results demonstrated a good balance of top and bottom line performance with 24 percent year-over-year ARR growth and strong year-to-date free cash flow generation. We remain focused on delivering sustainable, profitable growth.”
Nutanix added 490 new customers in the quarter, taking its total customer count to 25,860. Its Average Contract Value (ACV) billings rose 20 percent Y/Y to $288.9 million, and annual recurring revenue (ARR) increased 24 percent to $1.8 billion.
Financial summary
Gross margin: 84.8% vs 81.6% last year
Free cash flow: $78.3 million vs year-ago’s $52.7 million
Operating cash flow: $96.4 million vs year-ago $74.5 million
Cash, cash equivalents and short-term investments: $1.651 billion
William Blair analyst Jason Ader told subscribers: “The top-line outperformance was mainly driven by large wins with the Nutanix Cloud Platform (NCP) and consistent renewals performance from steady infrastructure modernization demand.”
Nutanix is involved in more large deals and these can come with their own challenges. The Blair analyst said: “The number of opportunities in the pipeline with ACV billings greater than $1 million has grown more than 30 percent over the last three quarters, while the total dollar value of those deals is up 50 percent over the same period.” But the deals can take longer to negotiate to a close and add variability to Nutanix’ income rate.
Ramaswami said in an earnings call with financial analysts: “Our largest new customer win of the quarter was an eight-figure ACV deal with a North American-based Fortune 50 financial services company that was looking to streamline and automate the deployment and management of their substantial fleet of databases. … This win was substantially larger than our typical land win and marks the culmination of an approximately two year engagement.”
Sivaraman expanded on this, saying: “The dollar amount of pipeline from opportunities greater than $1 million in ACV has grown at well over 50 percent for each of the last three quarters compared to the corresponding quarters last year. These larger opportunities often involve strategic decisions and C-suite approvals, causing them to take longer to close and to have greater variability in timing, outcome and deal structure.”
Both Workday and Salesforce also recently noted an elongation of project approvals for enterprise software contracts.
Dell standalone AHV deal
Nutanix is trying to respond to Broadcom’s VMware acquisition by making it easier for dissatisfied VMware customers to migrate to Nutanix, enabling them to run Nutanix’ AHV hypervisor on existing Dell servers. That means decoupling the AHV hypervisor from Nutanix’ full software stack, which has to run on Nutanix-certified hardware. The Dell AHV server will connect to storage-only HCI nodes available in the next few months and then, in calendar 2025, to Dell’s PowerFlex-based storage systems using IP networking.
This enables legacy 3-tier architecture customers wanting to depart from VMware to do so immediately rather than waiting for a 3-year depreciation cycle for their hardware to end. Ramaswami said: “This gives us easier insertion into accounts where they’re not quite ready to go depreciate their hardware yet, allowing us to then over time convert them over to HCI.”
Nutanix will support AHV running stand-alone on other OEMs’ servers and various storage nodes, but IP-access storage, not Fibre Channel. Migration then to the full Nutanix stack would be a land-and-expand type opportunity.
Nutanix’ AHV already runs on Cisco UCS servers and the AHV/UCS server combo connects to Nutanix storage-only nodes. Ramaswami said: “We expect to see a growing contribution from Cisco in and of course, into FY’25.”
The outlook for Nutanix’ final quarter of fiscal 2024 is for revenues of $535 million, plus or minus $5 million, an 8.3 percent year-on-year rise. William Blair’s Ader said: “The current guidance includes impact from the increasing variability that management has seen from larger deals in the pipeline, reflecting new and expansion bookings tracking below management expectations. Full-year guidance assumes modest impact from the VMware displacement opportunity (which management continues to see as a multi-year tailwind) and developing OEM partnerships, both of which should have a more material impact in fiscal 2025.”
Since buying VMware, new owner Broadcom has made a number of sweeping changes to licences, products and worldwide channel programs that govern who can and can’t sell VMware.
Dell’s revenues have finally grown after six successive quarterly drops, led by AI-driven server sales.
Revenues in the first quarter of Dell’s fiscal 2025 ended May 3, 2024, were up six percent year-on-year to $22.2 billion. The PC-centric Client Solutions Group (CSG) was flat at $12 billion: Commercial client revenue was $10.2 billion, up 3 percent year-on-year; and Consumer revenue was $1.8 billion, down 15 percent. The Infrastructure Solutions Group (ISG) pulled in $9.2 billion, 22 percent higher, driven by AI-optimized server demand and traditional server growth. Servers and networking booked $5.5 billion, up 42 percent. Storage sales didn’t grow and remained flat at $3.8 billion. The Texan tech corp reported $955 million net profit, up 65 percent.
Jeff Clarke
COO and vice chairman Jeff Clarke said: “Servers and networking hit record revenue in Q1, with our AI-optimized server orders increasing sequentially to $2.6 billion, shipments up more than 100 percent to $1.7 billion, and backlog growing more than 30 percent to $3.8 billion.”
Quarterly financial summary
Gross margin: 21.6 percent vs 24 percent a year ago
Operating cash flow: $1.0 billion
Free cash flow: $0.6 billion vs $0.7 billion last year;
Cash, cash equivalents, and restricted cash: $7.3 billion vs $? billion last year
Diluted earnings per share: $1.3, up 67 percent y/y
Dell returned $1.1 billion to shareholders through $722 million share repurchases and $336 million dividends.
A welcome rise in Q1 FY25 revenues after six successive down quarters
Within ISG, traditional server demand grew sequentially for the fourth consecutive quarter and was up Y/Y for the second consecutive quarter. But AI servers led the charge, and the PowerEdge XE9680 server is the fastest-ramping new server in Dell history. Storage was left behind. There was demand strength in HCI, PowerMax, PowerStore and PowerScale. The new PowerStore and PowerScale systems should help lift sales next quarter.
AI boosted server sales, but storing data for AI models has not so far increased storage demand.
Dell is convinced a storage AI demand-led increase is going to happen. Clarke said in the earnings call: “our view of the broad opportunity hasn’t changed around each and every AI server that we sell. I think we talked last time, but maybe to revisit that, we think there’s a large amount of storage that sits around these things. These models that are being trained require lots of data. That data has got to be stored and fed into the GPU at a high bandwidth, which ties in network. The opportunity around unstructured data is immense here, and we think that opportunity continues to exist.”
He added: “We expect the storage market to return to growth in the second half of the year. And for us outperform the marketplace. … I would call out PowerStore Prime. The addition of QLC allows us to be more competitive, our performance and have a native … sync replication … allow us to be more competitive in the largest portion of the storage marketplace. And our storage margins need to improve and will improve over the course of the year.”
Dell wants us to know that, even with storage revenues flat, it retains its storage market leadership.
Both HPE and VAST Data have new disaggregated shared everything (DASE) block and file storage systems, claiming more efficient scale-out than their competitors. Dell will be hoping that its new QLC flash-aided PowerStore and PowerScale, especially the coming parallel file system extension for PowerScale, will stop any inroads into its market leadership.
In CSG, Dell introduced five NextGen AI PCs but sales have yet to take off. Clarke said: “We remain optimistic about the coming PC refresh cycle, driven by multiple factors. The PC install base continues to age. Windows 10 will reach end-of-life later next year. And the industry is making significant advancements in AI-enabled architectures and applications.”
Dell’s focus on AI, with its AI Factory series of announcements, gives it a near pole position in the market for helping customers adopt AI. The amount of AI adoption will depend on the technology delivering accurate and relevant results without going off into hallucinatory lies and mis-statements. AI has to stand for artificial intelligence and not artificial idiocy.