Data backup and cyber protection outfit Acronis is expanding its sports marketing program with the sponsorship of a major rugby union club, in partnership with one of its managed service providers.
Acronis’s latest #TeamUp Partner is Harlequins. As part of this, Infinigate Cloud UK, a distributor of secure cloud solutions for MSPs, will strengthen the digital infrastructure and enhance data security at the London rugby club using Acronis technology.
“We are honored that a historic club like Harlequins is entrusting Acronis with their cyber protection,” said Ronan McCurtin, RVP EMEA at Acronis. “Together with Infinigate Cloud UK, we’re providing innovative technology and support, ensuring the club’s data remains secure throughout the season and beyond.”
Harlequins will have access to Acronis Advanced Security + Endpoint Detection and Response (EDR), Acronis Cyber Protect Cloud, and Acronis Advanced Backup.
“This is an important partnership for the club and fortifies our digital defenses. It provides the club and its supporters the assurance that their data is securely protected,” said Harlequins CEO Laurie Dalrymple.
Founded in 1866, Harlequins, based in Twickenham, South West London, is a founding member of the Rugby Football Union. It is one of only nine clubs to have won the Premiership title since the league’s inception, most recently in the 2020-2021 season. Last year, the men’s team reached the semi-final stage of the Investec Champions Cup for the first time, underscoring their strength at the European level.
“We see great synergies between high performance in sport and high performance in business,” added Craig Gordon, VP sales, Infinigate Cloud UK and Ireland. “Our mission is to help organizations demystify the complexities of delivering secure cloud solutions.”
Acronis has sponsored various sports teams over the years. These include football clubs Ajax, Atlético Madrid, Inter, Liverpool, Manchester City, Fulham, Southampton, and the Williams F1 racing team, among many others.
MSP backup service supplier N-able has acquired Adlumin after OEMing the company’s malware detection offering for many months.
N-able provides data protection and other services to more than 25,000 managed service providers (MSPs). Adlumin supplies a cloud-native security operations platform featuring managed detection and response (MDR) and extended detection and response (XDR). The acquisition cost serious money: $220 million in cash, approximately $16 million in N-able shares (1,570,762 shares), plus up to $30 million in potential cash earn-out payments payable in 2025 and 2026, for a total of up to $266 million. N‑able anticipates that the acquisition will be immediately accretive to annual recurring revenue (ARR) and cash flow by the fourth quarter of 2025.
John Pagliuca
John Pagliuca, N‑able president and CEO, stated: “Our customers have been telling us for some time that cloud-native XDR and MDR solutions are mission-critical to their ability to fully secure their customers and users – which solidified our decision to partner with, and now, acquire Adlumin.
“We’ve proven out customer demand with robust growth and we determined that we could scale our business faster if we owned it. I’m thrilled to formally welcome them as a part of N‑able. Their security operations platform fits perfectly within our Ecoverse vision for unifying security and unified endpoint management into a single platform, allowing us to build upon the success we’ve already achieved together.”
Adlumin was founded in 2016 by CEO Robert Johnston and SVP Timothy Evans in Washington DC. The name is Latin for “add light.” Adlumin software collects and indexes data from almost any source, such as network traffic, web servers, VPNs, firewalls, custom applications, and application servers, and uses machine learning to detect malware infestation. It alerts admin staff and helps with remediation.
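The detection step can be pictured as baselining activity per data source and flagging outliers for admin attention. The sketch below is not Adlumin's actual machine learning, just a minimal statistical stand-in for the general idea of alerting on abnormal event volumes:

```python
import statistics

# Toy stand-in for anomaly-based detection over indexed event data. This is
# NOT Adlumin's model -- just a simple z-score check: baseline normal
# activity from history, then flag observations that deviate sharply.
baseline_logins_per_hour = [12, 15, 11, 14, 13, 12, 16, 14]  # historical counts

def is_anomalous(observed: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(14, baseline_logins_per_hour))   # a typical hour
print(is_anomalous(90, baseline_logins_per_hour))   # a burst worth alerting on
```

Real products replace the z-score with learned models per source and user, but the alert-on-deviation structure is the same.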
The company has raised $101.3 million in publicly disclosed funding, with a 2016 seed round for $300,000, a 2020 A-round for $6 million, a 2021 B1 round of $25 million, and a 2023 B2 round for $70 million.
N‑able will incorporate Adlumin’s technology into its platform, which combines security, unified endpoint management, and data protection services. With the acquisition, N‑able aims to scale its security portfolio and the ARR from the existing partnership across its MSPs and internal IT teams. In its latest quarter, N-able reported $116.4 million in revenues, up 8.3 percent year-over-year, with a $10.8 million GAAP profit.
Robert Johnston
Adlumin CEO Robert Johnston said: “Joining forces with N‑able marks an exciting new chapter in our mission to deliver enterprise-grade security to businesses of all sizes. By combining our security operations expertise with N‑able’s comprehensive endpoint management platform, we believe we’re uniquely positioned to help IT professionals stay ahead of evolving threats while scaling their security practices. We’re excited to accelerate this shared vision as part of the N‑able team.”
William Blair analyst Jason Ader told subscribers: “We like this acquisition for a few reasons: It should be accretive to N-able’s long-term growth rate as it brings in-house a well-regarded MDR/XDR platform that is expected to grow ARR 60 percent this year, net of the N-able contribution (and in so doing allows N-able to better control its product destiny in the security space). Because N-able was already ‘OEMing’ the Adlumin offering, we see reduced execution and integration risk (i.e. N-able knows the product and the team well), and we see meaningful channel and geographic synergies – Adlumin is almost all North America today and mainly sells through VARs (i.e. to in-house IT departments), while N-able mainly sells through MSPs and about 45 percent of its revenue comes from outside North America.”
Ader added: “Key risks for N-able include macro pressures (which can often disproportionately affect smaller businesses), competition from MSP-focused technology vendors such as ConnectWise, Datto-Kaseya, Barracuda, Acronis, Ninja, AvePoint, and Veeam, and MSP market consolidation (potentially shifting pricing power to MSPs over time).”
Rubrik has released a new cyber resilience solution for Microsoft Azure Blob Storage, bringing enhanced security posture and recovery to Blob customers.
Blob Storage provides object storage for cloud-native workloads, archives, data lakes, high-performance computing, and machine learning, and is designed for storing massive amounts of unstructured data.
Anneka Gupta
“Rubrik Zero Labs recently found that 70 percent of all Rubrik-observed data in a cloud environment is object storage, which potentially has far lower security coverage compared to on-premises and SaaS data,” said Anneka Gupta, chief product officer at Rubrik. “More shocking is that nearly 90 percent of that data is estimated to be either text files or semi-structured files, representing data types that may vary in being machine readable, or covered by prominent security technologies and services. That’s a double whammy – until now.”
“The rise of generative AI and large language models is producing an explosion of data that needs to be secured,” added Aung Oo, general manager of Microsoft Azure Storage. “Protecting that torrent of critical AI data at scale in cloud object repositories like Azure Blob Storage requires a purposefully designed cyber resilience solution that Rubrik delivers.”
Aung Oo
The Blob protection offering can autonomously discover, classify, and provide context on all Microsoft Azure Blob Storage data, without requiring source data to leave the customer’s environment. It can also assess the security posture of sensitive data against security policies and the data requirements of the business.
Additionally, it can continuously monitor sensitive data within Blob for “risky” user activity, and provide early warning of emerging threats. It can also identify and remediate redundant Blob data to help reduce cloud costs, and support storing backup data to Blob Storage cool and cold tiers to lower total cost of ownership.
Finally, it can rapidly recover the most recent clean copy using a range of recovery patterns, including object-level and whole container.
Rubrik’s most recent integrations with Microsoft include comprehensive management of Microsoft 365, with an expansion of the Microsoft 365 Backup offering, as well as Rubrik Security Cloud with Microsoft Sentinel and the Azure OpenAI Service.
Cloud backup and storage supplier Backblaze announced the launch of a proposed follow-on public offering of $30,000,000 of shares of its Class A common stock. In addition, Backblaze expects to grant the underwriters a 30-day option to purchase up to an additional $4,500,000 of shares of Common Stock from Backblaze at the public offering price less the underwriting discount. The offering is subject to market and other conditions, and there can be no assurance as to whether or when the offering may be completed, or as to the actual size or terms of the offering.
…
Data protector Catalogic Software announced DPX vPlus 7.0, which includes:
Integration: DPX vPlus now supports Red Hat OpenShift, Proxmox with Ceph, and Canonical OpenStack, including full and incremental backups, file-level restores, and snapshot management.
Configuration: a redesigned configuration wizard enhances the setup experience.
Security: AES-256 encryption for Microsoft 365 data at rest and in transit, plus support for Impossible Cloud S3 storage as a secure, high-performance backup destination.
Hypervisors: expanded support with Nutanix AHV and Nutanix Ready certification.
…
Real-time data replicator Cirata launched its Data Migration as a Service (DMaaS), powered by its Data Migrator software. This enables businesses to efficiently migrate Hadoop Distributed File System (HDFS) data, Hive metadata, and cloud data sources to various cloud platforms, including Amazon S3, Azure Data Lake Storage Gen 2, Google Cloud Storage, and Oracle Object Store, reducing migration timelines and minimizing disruption. It moves data without requiring any changes to existing applications or disrupting production systems, ensuring zero risk of data loss and continuity in business operations.
90 percent of UK enterprises are using or testing GenAI. Only 40 percent believe their GenAI applications are production-ready.
60 percent of UK enterprises admit that GenAI use cases have not yet made it into production internally.
Key hurdles to making GenAI production-ready include cost (41 percent), skills (40 percent), quality (37 percent) and governance (33 percent).
By 2027, 96 percent of UK enterprises plan to develop custom models based on proprietary data.
Only 11 percent of UK respondents believe AI is overhyped. In fact, 83 percent see the technology as crucial to their long-term goals.
Large organizations are flocking to GenAI, with 97 percent of companies with over $10 billion in revenue globally now using the technology in at least one internal business function. By 2027, 99 percent of all respondents worldwide expect GenAI adoption across both internal and external use cases.
Cloud content collaboration service supplier Egnyte appointed Elizabeth Hajjar as VP Sales, EMEA, responsible for developing and executing the overall sales strategy for the EMEA region and driving revenue growth and market share expansion in EMEA markets.
…
Search AI company Elastic announced its AI ecosystem to help enterprise developers accelerate building and deploying their Retrieval Augmented Generation (RAG) applications. The Elastic AI Ecosystem provides developers with a curated, comprehensive set of AI technologies and tools integrated with the Elasticsearch vector database, designed to speed time-to-market, ROI delivery, and innovation.
…
Geyser Data announced GA of its Tape-as-a-Service offering. It was jointly developed by Geyser Data and Spectra Logic, integrating Spectra’s object-based disk and tape technology with Geyser’s Cloud Tape Library software. It provides a cost-effective, enterprise-class cloud archive for businesses requiring large-scale data storage for compliance, disaster recovery, long-term data preservation, and AI-driven workloads. CEO Nelson Nahum previously ran Zadara. TaaS features include no egress or retrieval fees, no hardware required, dedicated tapes per customer with air-gapped isolation, integration with AWS S3, tape energy efficiency, and easy storage capacity expansion.
…
Graph AI provider Graphwise announced the immediate availability of GraphDB 10.8, which includes the next-generation Talk-to-Your-Graph functionality that integrates large language models (LLMs) with vector-based retrieval of relevant enterprise information and precise querying of knowledge graphs that hold factual data and domain knowledge. It says this new feature enables non-technical users to derive real-time insights and retrieve and explore complex, multi-faceted data through natural language. It also brings seamless, high-availability cluster deployments across multiple regions, ensuring zero downtime and data consistency without performance compromise.
GraphDB 10.8 reduces the R&D time for GenAI applications by offering a no-code framework based on GenAI-powered agents that intelligently combine multiple retrieval methods to deliver context-rich conversations and reduce non-determinism. To help AI developers fine-tune conversational agents (chatbots), it automatically heals retrieval query errors and provides quick access to the underlying method invocations, results and error messages.
…
Fahad Qureshi
SaaS data protector Keepit has appointed Fahad Qureshi as VP of Sales for the Americas and ANZ. He joins Keepit from Lumafield, where he served as Head of Sales, and before that spent eight years at Qumulo. He aims to establish the Americas as a leading market for Keepit’s cloud-native data protection services, working through partners to do so.
…
Reuters reported Bain-backed Kioxia will have a market value of about 750 billion yen ($4.84 billion) based on the indicative price for its initial public offering, with the chipmaker to receive listing approval from the Tokyo bourse on Friday, according to its sources.
…
Estonia-based Leil Storage has introduced its EDU & Research Program, offering universities and research institutions complimentary access to SaunaFS, its customizable open-source file system designed to optimize storage efficiency. For more information visit leil.io/edu or contact Leil at EDU@leil.io.
…
Elastx Cloud Platform (ECP), a Swedish cloud service provider, has chosen Lightbits to power its OpenStack Cloud Service. Elastx is “leveraging Lightbits’ high-performance, low latency, scalable software-defined storage across OpenStack and Kubernetes clusters for a resilient, cost-effective cloud platform that provides customers with secure, sustainable multi-availability zones for mission-critical applications … based on its seamless integration with OpenStack and Kubernetes, high NVMe-capacity density, high performance, lower TCO, and ease of manageability.”
…
MyAirBridge is a Czech-based business supplying Dropbox-like file sharing capabilities. We asked how it differs from Dropbox. It replied:
Fair Play for Users: We don’t require recipients to have purchased storage space for data sharing. Only the sender needs a storage plan, allowing the recipient to access files without needing an account.
Platform Integration: The app combines direct transfer and data-sharing features seamlessly, presenting a minimalist interface with essential functions at your fingertips. Sharing can be customized for view-only, read-only, or read and write, enabling the recipient to upload, copy, and delete files within a shared folder.
Usability: Smart drag-and-drop functionality detects your current app context, offering relevant options. It supports keyboard shortcuts, dragging, and easy copying for a user-friendly experience.
Security: No tracking of users, with enhanced protection via CSP, end-to-end encrypted transfers, and options like password protection and 2FA.
Plans: PRO users can personalize their URL, using it as FTP for data collection, and add images to represent their brand. The Enterprise plan offers transfers with expiration that don’t consume storage capacity, supporting unlimited transfers, even with full storage.
Teamwork: Teamwork features allow users to have both personal and team profiles, with options to access main storage or a separate space. Enterprise includes 5TB per profile with unlimited team profiles for large organizations.
Automation & Accessibility: Storage access via WebDAV facilitates backup automation without software installation, with API access as another option. Incoming and outgoing transfers can be set with expiration to optimize capacity, even if the sender uses a free plan.
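The WebDAV route means scripted backups need nothing beyond a standard HTTP client, since a WebDAV upload is an authenticated PUT. A minimal sketch follows; the endpoint URL, credentials, and Basic-auth scheme are hypothetical, as MyAirBridge's actual WebDAV details aren't specified here:

```python
import base64
import urllib.request

# Hypothetical endpoint and credentials -- placeholders only; substitute the
# values your provider gives you. This sketches the general pattern of a
# scripted backup upload over WebDAV (HTTP PUT plus Basic auth).
WEBDAV_URL = "https://example.com/webdav/backups/db-dump.sql.gz"
USER, PASSWORD = "alice", "secret"

def build_upload_request(url: str, payload: bytes) -> urllib.request.Request:
    """Build (but do not send) a WebDAV PUT request for one backup file."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(
        url,
        data=payload,
        method="PUT",
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/octet-stream",
        },
    )

req = build_upload_request(WEBDAV_URL, b"backup bytes")
# urllib.request.urlopen(req) would perform the actual upload.
```

Wrapped in a cron job, the same pattern automates nightly backups without any client software installed, which is the point of the WebDAV access described above.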
…
Cloud file services supplier Nasuni has revamped its partner program with a new tiered structure and value-based incentives that reward partners for their cloud-based certifications, innovative go-to-market strategies, and software-centric capabilities, rather than focusing on volume and revenue generation. There is also a refreshed partner portal, a one-stop shop offering partners access to all the tools, resources, and information they need.
…
Graph database and analytics startup Neo4j has surpassed $200 million in annual recurring revenue, doubling its ARR over the past three years, and is on track to be cash flow positive in the coming quarters; it also claims a 44 percent market share. It says it grew rapidly this year as organizations recognized graph databases as essential infrastructure for AI systems that leverage vast amounts of interconnected data. Neo4j is used by 84 percent of the Fortune 100 and 58 percent of the Fortune 500. Examples include Daimler, Dun & Bradstreet, EY, IBM, Merck, NASA, UBS, and Walmart.
…
On-prem cloud computer rack supplier Oxide says Lawrence Livermore National Laboratory’s HPC center has just completed installation of its first Oxide Cloud Computer, which will deliver cloud-like features that work seamlessly with HPC jobs, while maintaining security and isolation from other users. It will provide users in the National Nuclear Security Administration (NNSA) with new capabilities for provisioning secure, virtualized services alongside HPC workloads. LLNL plans to work with Oxide on additional capabilities and the deployment of additional Cloud Computers. LLNL particularly liked Oxide’s scale-out and disaster recovery capabilities. Oxide Computer states that the latest installation underscores its momentum in the federal technology ecosystem.
…
Redis and Microsoft announced Azure Managed Redis, a fully managed, first-party Redis offering now available in public preview on Microsoft Azure. This is the first-of-its-kind multi-tiered Redis service offered by a major cloud provider, including integration with Azure AI and Redis’s vector database capabilities. It features:
Performance: New data structures, the fastest Redis Query Engine to date, and capabilities like vector search and geospatial queries.
Reliability: Industry-leading 99.999 percent availability through multi-region Active-Active deployments.
Flexibility: Seamless migration for Azure Cache for Redis customers and support for diverse workloads, from memory-intensive apps to generative AI.
…
Seagate’s Lyve Cloud Object Storage has a new flexible, cost-efficient archive tier for infrequent access, offering the same millisecond latency, high durability, and high throughput as the Lyve Cloud Object Storage standard tier, with a similar SLA. The infrequent access tier is available via subscription and is priced at $3.75/TB per month. To learn more go to https://www.seagate.com/cloud/.
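At the listed $3.75/TB per month, archive costs scale linearly with capacity. A quick sketch (ignoring egress or other fees, which aren't detailed here):

```python
# Cost sketch for the Lyve Cloud infrequent-access tier at the listed
# $3.75/TB per month. No other fees are factored in.
PRICE_PER_TB_MONTH = 3.75

def monthly_cost(capacity_tb: float) -> float:
    """Monthly storage cost in dollars for the given capacity in TB."""
    return capacity_tb * PRICE_PER_TB_MONTH

# 1 PB (1,000 TB) archived for a year:
annual = monthly_cost(1_000) * 12
print(f"${annual:,.0f}/year")  # $45,000/year
```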
…
There will be an SNIA Swordfish Plugfest during the week of April 28-May 2, 2025 in Denver, CO, in conjunction with SNIA’s Regional SDC Denver event on April 30. This collaborative Plugfest is targeted at both client users and datacenter infrastructure product developers.
…
Cloud data warehouser Snowflake continued its fast revenue growth with a 28 percent year-over-year revenue jump in its Q3 2024 to $942.1 million with a $324.3 million loss. It gained 369 new customers in the quarter, taking its total to 10,618. CEO Sridhar Ramaswamy said: “Our obsessive drive to produce product cohesion and ease of use has built Snowflake into the easiest and most cost effective enterprise data platform. That is what’s leading us to win new logo after new logo, expand within our customer base, and displace our competition over and over again.”
…
Snowflake is acquiring Datavolo, which supplies open data integration software powered by Apache NiFi. Datavolo says its software enables customers to capture all their unstructured data for LLM needs and replaces single-use, point-to-point code with fast, flexible, reusable pipelines. It was founded by CEO Joe Witt and COO Luke Roquet in September last year, making it just 14 months old, and it raised $21 million in an A-round in April. That gives a clue as to how much the acquisition cost Snowflake, providing virtually instant riches to Witt and Roquet.
…
Snowflake has entered a multi-year strategic partnership with Anthropic to bring Claude 3.5 models directly to customers’ data in Snowflake’s AI Data Cloud. The aim is to enhance how Snowflake’s data agents analyze data, run ad-hoc analytics, generate visualizations, and execute other multi-step workflows. Snowflake’s agentic AI products will use Claude as one of their LLMs.
…
Cloud-based real-time analytics company StarTree announced new features in its StarTree Cloud offering:
Pauseless Ingestion: This maintains continuous data flow during segment building and upload phases, enabling businesses to deliver real-time, reliable insights at scale.
Performance Manager: available when using Apache Pinot, this simplifies the process of optimizing query performance. It analyzes query structures and metrics to recommend enhancements such as indexes, bloom filters, derived columns, or star-tree indexes. Users can apply these optimizations with a click, achieving immediate performance gains.
Schema evolution allows the system to accommodate new fields, indexes, altered data types, or other structural modifications without disrupting operations.
Data Backfill addresses incorrect or missing data by enabling users to seamlessly reload data from past events, filling any gaps in data flows. This capability is particularly valuable in real-time analytics, where continuous data integrity is essential.
Role-Based Access Control (RBAC) Management.
…
StorONE and Phison have joined forces to deliver a storage system that achieves 1 million IOPS with only four Phison Pascari X200 drives. The two say that, together, they redefine data storage efficiency with fewer drives, delivering top-tier performance, comprehensive data protection, and seamless cybersecurity. The joint system can use either 15.36 TB or 30 TB Pascari X200 SSDs.
…
VAST Data says VAST 5.2’s new capabilities, including global SMB namespace, Write Buffer Spillover, and S3 Sync Replication, can streamline complex workloads for enterprise, AI, and high-performance computing environments.
…
HPC storage supplier VDURA and SSD manufacturer Phison collaborated in a show of strength at this year’s SC24 in Atlanta on Tuesday. In a record-setting feat, Hafþór Júlíus Björnsson, perhaps best known as “The Mountain” from Game of Thrones, lifted 996 lb (450 kg) of drives holding 282.624 PB of data. Phison supplied 122 TB Pascari D205V drives for the lift, meaning about 2,316 were needed. The bar, as well as additional equipment, was provided by Rogue Fitness Equipment.
To celebrate the occasion, 2017 America’s Strongest Man, Jerry Pritchett, built customized Silver Dollar Boxes to hold the drives during the lift. VDURA and Phison donated $1,000 each to the Atlanta Community Food Bank. This contribution will help provide over 6,000 meals for local families in need.
VDURA and Phison’s Icelandic Game of Thrones strongman
…
Edge cloud service supplier Zadara has expanded its Sovereign AI Cloud portfolio with AI inference services. AI capabilities have been integrated into all Zadara’s training and certification programs for partners.
…
Palo Alto-based Zettar, calling itself a pioneer in exascale data transfer technology, announced a new production-ready solution, developed in collaboration with MiTAC Computing and Nvidia, which revolutionizes high-speed data movement, both bulk and streaming. A single Nvidia BlueField-3 DPU with Zettar zx software transfers up to 500 TB of data per day across any distance, without traditional servers.
The zx software has a <40 MB footprint and runs in the DPUs. It transfers massive datasets at wire speed, accelerating critical AI and HPC workflows. It can replace traditional data transport methods like AWS Snowball, advancing data movement across private, hybrid, and public clouds. And it can transform a standard JBOF (Just a Bunch of Flash) into a cluster of collaborative data transfer nodes, achieving linear scalability in a compact form factor.
Zettar founder and CEO Chin Fang said: “The DPUs can do both bulk and streaming data transfers at high speed and nearly the same data rates for 1 MiB-1 TiB, locally (in the same host and/or cluster) and over any distance – latency is irrelevant. It doesn’t need any site-specific tuning and it also addresses a common challenge – casual exchange (i.e. up to a few TBs) of data with collaborators efficiently and easily.”
He added: “With the growing demand for large language models (LLMs), moving hundreds of TBs to many PBs of data over distance for training is becoming a critical challenge. Our DPU-based solution, combined with Zettar-designed data movement appliances available through our hardware partners, significantly enhances the utilization and ROI of large-scale AI and HPC infrastructure while reducing energy consumption.”
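The headline 500 TB/day figure is easy to sanity-check as a sustained line rate (assuming decimal terabytes and a full 24-hour day):

```python
# Sanity-check the sustained rate implied by "500 TB per day" for a single
# BlueField-3 DPU running zx (decimal TB; 86,400 seconds per day).
TB = 1e12          # bytes
SECONDS_PER_DAY = 86_400

bytes_per_second = 500 * TB / SECONDS_PER_DAY
gigabytes_per_second = bytes_per_second / 1e9
gigabits_per_second = bytes_per_second * 8 / 1e9

print(f"{gigabytes_per_second:.2f} GB/s = {gigabits_per_second:.1f} Gbit/s")
```

That works out to roughly 5.79 GB/s, or about 46 Gbit/s sustained, comfortably within a BlueField-3's 400 Gb/s network ports, so the limiting factors are storage and software rather than the network interface.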
…
Software RAID vendor Xinnor announced several customer wins for its xiRAID product:
BeeGFS at one of the leading scientific universities in Singapore: Xinnor has partnered with On Demand System Pte, a prominent system integrator in Singapore, to deploy xiRAID with the BeeGFS file system, optimizing high-speed data access in a cluster environment and supporting the university’s advanced research needs.
Lustre at NHR@FAU: together with MEGWARE, the German HPC specialist, Xinnor helped Friedrich-Alexander-Universität migrate from Ceph to xiRAID + Lustre, improving performance by about four times while almost doubling the available user capacity.
Block storage at one of the most prominent tech universities in the USA: xiRAID has been deployed in all the university’s NVMe servers to provide drive failure protection while delivering close to theoretical maximum performance, meeting the demands of the institution’s cutting-edge research.
…
Xinnor debuted in the IO500 list published at SC24, with a Lustre + xiRAID Classic system implemented by MEGWARE ranked #2 among Lustre solutions in the 10-node research category.
…
The latest quarterly results for NetApp were lifted again by record all-flash array sales.
Revenues in its Q2, ended October 25, 2024, were $1.66 billion, up 6.1 percent year-over-year and beating its $1.64 billion mid-point guidance, with a GAAP profit of $299 million, 22 percent more than a year ago. It was the company’s fourth successive growth quarter with annual recurring revenue from all-flash arrays reaching a record-high $3.8 billion. The company promptly updated its full year revenue forecast to $6.64 billion +/- $100 million from the prior $6.58 billion +/- $100 million.
George Kurian
CEO George Kurian stated: “I am extremely pleased with our Q2 performance. Revenue growth was driven by a 19 percent year-over-year increase in all-flash storage and strong performance in first party and marketplace cloud storage services. We achieved record Q2 operating margin and EPS, ahead of our expectations.”
Quarterly financial summary:
Consolidated gross margin: 72 percent vs 72 percent a year ago
Operating cash flow: $105 million vs $135 million a year ago
Free cash flow: $60 million vs $97 million last year
Cash, cash equivalents, and investments: $2.22 billion vs $3.02 billion last quarter
EPS: $1.42 vs $1.10 a year ago
Share repurchases and dividends: $406 million in stock repurchases
The free cash flow number was much lower than last quarter’s $300 million, and CFO Mike Berry said: “Our lower year-over-year cash flow results in Q2 were primarily driven by upfront payments for strategic SSD purchases which are forecast to be predominantly utilized during fiscal year ’25.”
NetApp pulled in $1.49 billion from its hybrid cloud segment revenue, 5.4 percent more than the year-ago $1.41 billion. The public cloud segment revenues rose 8.3 percent to $168 million, compared to $154 million a year ago. Billings rose 9 percent to $1.59 billion from last year’s $1.45 billion. Product sales were $768 million, up 8.1 percent.
Kurian said NetApp sales in the flash, block, cloud storage, and AI markets were “bolstered by both secular and company-specific tailwinds.” He pointed out that NetApp has “broadened our all-flash storage portfolio substantially with updated high-performance flash, capacity flash, and block-optimized products.”
He said: “This is clearly outpacing the market and all of the competitors.” It has certainly helped NetApp’s flash revenues.
Some 40 percent of NetApp’s installed base has converted to flash, with Kurian saying: “So even if flash is growing fast, the total installed-base is growing, and it is very large … We are outperforming the market and all of our competitors quite substantially. So it is clear that we are winning new wallet within existing customers as well as … net new to NetApp customers.”
NetApp’s all-flash array annual recurring revenue history
He added more color in the earnings call: “The majority of net new logos to NetApp come from smaller … commercial market [customers], but the high-end products are enabling us to win wallet share within existing customers. [ASA] Block wallet share is entirely new to NetApp. We’ve never sold a block-optimized product before. So it’s all net new wallet for us.”
Matt Bryson, a Wedbush analyst, gave his insights into NetApp’s upside, mentioning strength in the Americas, saying: “We believe key leadership changes NetApp has made in this geography are helping drive superior results. We would also note that NetApp did not show any signs of the slowdown in Fed spending other vendors encountered that were attributed to the lack of a new budget.”
He also cited a public cloud rebound and low-cost SSD media due to NetApp’s buying “largely in C2H23, when NAND pricing had cratered.”
Analyst Jason Ader told subscribers this was “another beat-and-raise quarter,” calling out NetApp’s billings, which “continued to surprise to the upside,” increasing 9 percent year-over-year, marking the company’s fourth consecutive quarter of year-over-year growth. He also pointed to NetApp’s Remaining Performance Obligations (RPO), the total value of contracted products and services that NetApp has yet to deliver to its customers.
He told subscribers that NetApp’s RPOs, “which the company started disclosing last quarter, came in at $4.4 billion (with unbilled RPO up 9 percent sequentially), driven by the company’s Keystone storage-as-a-service offering becoming a more meaningful part of the business.” NetApp says its RPO value is a “leading indicator of future growth in our business.”
As for Keystone, Kurian said: “We have yet to see a single customer churn on Keystone. So it has been an incredibly rock-solid product.”
There were no red flags raised in the earnings call, with Kurian saying a hundred or so on-prem sales wins across vertical markets and geographic regions in the quarter were “leading indicators of future large-scale inferencing deployments.”
He added: “We also have had some strong momentum with Google, with their distributed cloud for AI deployments in the public sector. And so we’re encouraged at the prospects of our AI business going forward.”
Again Kurian added detail when answering an analyst’s question: “There were a good number of them that were building GPU clusters, both SuperPODs and kind of BasePODs that we saw where they were essentially running the GPUs against high-performance file systems from NetApp and some that were building data lakes because their data was so scattered that they wanted to bring it together and they chose our infrastructure to power the data lake. So a mix of them. Interesting, it’s early in the cycle. We just brought to general availability our full stack with Lenovo, for example, with OVX, and we’re doing creative work with NVIDIA around some of their software.”
He also said he expects the developing disaggregated ONTAP Data Platform for AI product to be “deployed in customers by the end of next calendar year.”
NetApp has a strong installed base which is likely to buy more of its on-prem and public cloud products and services in the future.
Next quarter’s revenue outlook is $1.68 billion +/- $75 million, a 4 percent year-over-year increase at the midpoint; less than Q2’s 6.1 percent rise, but still growth.
Wikipedia public domain image: https://commons.wikimedia.org/wiki/File:C_Merculiano_-_Cephalopoda_1.jpg
IBM Storage Ceph is a software-defined storage platform for enterprises that combines the open source Ceph storage system with a management platform, deployment utilities, and support services. Storage Ceph 8.0 is now generally available.
The Ceph platform provides massively scalable object, block, and file storage from a single solution. IBM Storage Ceph has been part of IBM’s software-defined storage portfolio since January 2023 and runs on industry-standard x86 server hardware. Deployments can start with small workloads and scale to petabyte-sized ones.
As well as RADOS (Reliable Autonomic Distributed Object Store) core updates, the main highlights of Storage Ceph 8.0 are enhancements to NVMe/TCP block storage for VMware, along with NFS v3, v4.1, and v4.2 support, enabling file access for non-native Ceph clients.
There is also SMB v2 and SMB v3 Windows file access (Tech Preview), including MS Active Directory and ACLs support.
In addition, there is extended AWS compatibility, a new multi-tenancy feature with identity access management, as well as extended UX functionalities, new cluster configuration options, and security enhancements.
Storage Ceph RADOS core updates include RBD snapshot improvements, with reduced CPU usage and latency. BlueStore compression also offers improved storage efficiency without requiring additional software layers.
Marcel Hergaarden
Performance improvements from tuning and optimization include the Storage Ceph Crimson backend (Tech Preview). Crimson is a next-generation backend for Storage Ceph that, IBM says, provides a more efficient and performant implementation of Ceph object storage capabilities, taking advantage of modern programming techniques and technologies.
As an option, IBM offers Storage Ready Nodes for IBM Storage Ceph clients. The nodes are servers that have been fully tested and qualified for IBM Storage Ceph implementation and production use cases.
Marcel Hergaarden, product manager for IBM storage infrastructure, has published a blog explaining all the improvements offered by IBM Storage Ceph 8.0.
Mountain View-based Enfabrica has raised $115 million in C-round funding and says it will ship its 3.2 Tbps Accelerated Compute Fabric (ACF) switch chip in Q1 2025.
The ACF-S superNIC switch combines PCIe/CXL and Ethernet fabrics to interconnect GPUs and accelerators with multi-port 800-Gigabit-Ethernet connectivity.
Enfabrica raised $125 million in September 2023 for this work. The new funding takes its total raised since it was founded in 2019 to $290 million. Spark Capital led this latest oversubscribed equity financing round with new investors Arm, Cisco Investments, Maverick Silicon, Samsung Catalyst Fund, and VentureTech Alliance. Existing investors including Atreides Management, Alumni Ventures, IAG Capital, Liberty Global Ventures, Sutter Hill Ventures, and Valor Equity Partners participated as well.
Rochan Sankar
Enfabrica CEO Rochan Sankar stated: “We were the first to draw up the concept of a high-bandwidth network interface controller fabric optimized for accelerated computing clusters. And we are grateful to the incredible syndicate of investors who are supporting our journey. Their participation in this round speaks to the commercial viability and value of our ACF SuperNIC silicon. We will advance the state of the art in networking for the age of GenAI.”
The new cash will be used to drive ACF’s volume production ramp, grow Enfabrica’s global R&D team, and expand its next-generation product line development.
The ACF-S (switch) chip has high-radix, high-bandwidth, and concurrent PCIe/Ethernet multipathing and data mover capabilities. Enfabrica claims this is four times the bandwidth and multipath resiliency of any other GPU-attached network interface controller (NIC) product in the industry. And it says this level of data movement is demanded by the large-scale training, inference, and retrieval-augmented generation (RAG) workloads associated with the latest AI models.
Enfabrica ACF-S chip
It offers full-stack operator control and programmability, integrating software-defined networking (SDN) with remote direct memory access (RDMA) networking, which it says is widely deployed in AI datacenters. The switch has 800, 400, and 100 GbE interfaces, a high radix of 32 network ports, and 160 PCIe lanes on a single ACF-S chip. The term “radix” refers to the number of ports or connections a switch can support.
Enfabrica says the switch can be used to build AI clusters of more than 500,000 GPUs, from multiple vendors, with an “efficient two-tier network design, enabling the highest scale-out throughput and lowest end-to-end latency across all GPUs in the cluster.” The ACF-S software supports standard collective communication and RDMA networking operations through a consistent set of libraries compatible with existing interfaces. The RDMA networking is software-defined.
The ACF-S has Resilient Message Multipathing (RMM) technology, which “boosts AI cluster resiliency, serviceability, and uptime at scale” and “eliminates AI job stalls due to network link flaps and failures to improve effective training time and GPU compute efficiency.”
It also features Collective Memory Zoning, which provides “low latency zero-copy data transfers, greater host memory management efficiency and burst bandwidth, and higher system resiliency across multiple CPU, GPU, and CXL 2.0-based endpoints attached to the ACF-S chip.” This increases the efficiency and overall Floating Point Operations per Second (FLOPs) utilization of GPU server fleets.
ACF-S chip in server
B&F understands that Enfabrica’s switch is intended for use in large-scale GPU clusters, which are currently built by hyperscalers such as the big public clouds, Meta, xAI with Colossus, other large language model developers like OpenAI, and GPU server farm suppliers such as CoreWeave. This is a relatively small pool of customers, and Enfabrica needs them to buy thousands of its chips if it is to succeed.
Nvidia interconnects its GPUs with NVLink, providing up to 1.8 TBps of bidirectional bandwidth, whereas Enfabrica is GPU vendor-agnostic. The NVLink-C2C (Chip-to-Chip) extension supports CXL, and Enfabrica is betting that its technology can be used outside all-Nvidia environments. Such environments are practically nonexistent in the GenAI world today, but hyperscalers like Google, AWS, and Azure are developing their own GPU-like accelerators.
Sankar said: “Our ACF SuperNIC silicon will be available for customer consumption and ramp in early 2025. With a software and hardware co-design approach from day one, our purpose has been to build category-defining AI networking silicon that our customers love, to the delight of system architects and software engineers alike. These are the people responsible for designing, deploying and efficiently maintaining AI compute clusters at scale, and who will decide the future direction of AI infrastructure.”
SK hynix has started mass producing its 321-layer 3D NAND, taking the lead in a flash layer count race that Western Digital, whose NAND tops out at 218 layers, has claimed is over.
The Korean NAND fabber is using a so-called three-plugs technology to interconnect a triple stack of layered NAND, each stack being around 100 layers, and build a 1 Tbit TLC (3 bits/cell) chip. It sampled the 321-layer chips in August last year.
Jungdal Choi
Jungdal Choi, head of NAND development at SK hynix, reckons: “SK hynix is on track to advancing to the Full Stack AI Memory Provider by adding a perfect portfolio in the ultra-high performance NAND space on top of the DRAM business led by HBM (High Bandwidth Memory).”
SK says its plugs complete the NAND cell production process initiated by stacking substrates layer by layer. It says “repeating the cell formation process for each layer would be inefficient and increase manufacturing costs. Therefore, multiple layers of substrate are first stacked, then vertical holes called plugs are drilled through the layers before cells are formed next to the holes.”
The holes are not drilled but etched; other NAND manufacturers call them channel holes or memory holes. The etching equipment works well up to around 100 layers but then becomes unreliable, so SK hynix instead creates the plugs in three sets of roughly 100 layers each, which are stacked vertically.
It says that with its three-plugs technique, “all the processes, including cell formation, can be performed simultaneously on all layers.”
“With this, SK hynix was able to conduct a single process to simultaneously fabricate the key structures – word lines and word line staircases – that apply voltage and the passageways for electrons.”
Word lines are the connections that bind the control gate of each layer of NAND cells. A word line staircase is a structure for exposing the word line of each layer to the top surface.
The plugs in each layer, however, may not be perfectly aligned.
SK hynix has another technology trick up its sleeve here. The plug is lined with CTF (Charge Trap Flash) film and this needs to be removed at the bottom where it meets an electrical pathway. This is called the connection point in the diagram above. The CTF film is a composite of oxide and nitride films that replaces a floating gate.
The company says: “Previously, etching gas was injected from the top of the plug to vertically remove the CTF film at the bottom of the plug. However, when stacking two or more plugs, the centers of the plugs were not aligned. This prevented the etching gas from reaching the bottom, damaging the CTF film on the side of the plug that serves as a cell.”
It solved this problem by providing a separate sideways source path for the etching gas. “The etching gas is injected into a separate pathway to reach the bottom of the NAND layer and remove the CTF film on both sides of the plug. With Sideway Source technology, the etching gas is not directly injected into the plug. Therefore, even if the plugs are misaligned, the interior remains undamaged. As a result, SK hynix has significantly reduced its defect rate, increased productivity, and addressed the problem of increased costs associated with multiple stacking.”
The horizontal pathway connections leave no voids at the bottom of the NAND layer.
SK hynix is emphasizing NAND performance. With 321-layer technology, it can match the capacities its competitors reach with QLC (4 bits/cell) chips at lower 3D NAND layer counts, while offering better performance and endurance than QLC thanks to its TLC (3 bits/cell) format.
The company says this “latest product comes with an improvement of 12 percent in data transfer speed and 13 percent in reading performance, compared with the previous generation. It also enhances data reading power efficiency by more than 10 percent.” SK hynix sees its 321-layer NAND suited for AI applications needing low power and high performance.
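As a rough sanity check on that capacity claim, per-string bit density scales with layer count times bits per cell. A sketch of the arithmetic, where the 232-layer QLC competitor is a hypothetical figure for illustration, not a sourced number:

```python
# Illustrative arithmetic only: the 232-layer QLC figure is hypothetical.
def bits_per_string(layers: int, bits_per_cell: int) -> int:
    """Bits stored along one vertical NAND cell string."""
    return layers * bits_per_cell

sk_hynix_tlc = bits_per_string(321, 3)  # 321 layers x TLC
rival_qlc = bits_per_string(232, 4)     # hypothetical rival: 232 layers x QLC

print(sk_hynix_tlc, rival_qlc)  # 963 928 -- comparable density from TLC
```

High enough layer counts let TLC close the density gap while keeping TLC's endurance and latency advantages over QLC.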
Wedbush analyst Matt Bryson tells subscribers: “While the layer count is certainly impressive, we have encountered less promising feedback around the expected quality of Hynix’s new parts, with our conversations suggesting Micron and Kioxia/WD (with BiCS8) appear to be leading in anticipated next generation NAND bit performance.”
SK hynix said its 321-layer NAND chips will be available to customers in the first half of 2025.
At its Ignite event, Microsoft announced the Azure Boost data processing unit (DPU) to accelerate storage and networking efficiency in the Azure public cloud.
DPUs were a storage industry phenomenon a few years ago, with startups such as Pensando, Fungible, Nebulon, Pliops, and Kalray, along with Intel and Nvidia, arguing that repetitive, specialized low-level storage and networking processing clogged up x86 CPUs whose main job is processing application data. Developing special ASIC or FPGA hardware to handle this processing would get it done faster and offload the host x86 processor, speeding up application processing.
Nebulon faced financial difficulties and was quietly acquired by Nvidia. Pensando was bought by AMD and Fungible by Microsoft as the general server, storage system, and network interface market refused to adopt DPU technology. Now Nvidia BlueField DPUs are becoming popular and the cloud hyperscalers, wanting to rent out x86 and Arm server processing capacity, see mileage in using DPUs to make more of that processing capacity available. Kalray and Pliops are still developing their technology.
In its Ignite Book of News, Microsoft introduces Azure Boost DPU as its first in-house DPU, “designed for scale-out, composable workloads on Azure.”
Fungible, co-founded by CEO Pradeep Sindhu and Bertrand Serlet in 2015, was bought for around $190 million by Microsoft in December 2022. Sindhu has been Corporate VP Silicon for Microsoft since then. Serlet was made a Microsoft VP of Software Engineering, but he left a year ago to be, as his LinkedIn entry says, a “Free Electron.” The Azure Boost DPU chip is based on Fungible chip technology.
Azure Boost DPU
Microsoft says it is optimizing every layer of its infrastructure in the era of AI “with Azure Boost DPUs joining the processor trifecta in Azure (CPU – AI accelerator – DPU), enhanced by hardware security capabilities of Azure Integrated HSM (Hardware Security Module), as well as continued innovations in Cobalt and Maia, paired with state-of-the-art networking, power management and hardware-software co-design capabilities.”
Azure HSM provides secure cryptographic key storage and operations. Cobalt is Microsoft’s in-house Arm-based CPU. The Maia 100 is its GPU-like AI training and inferencing acceleration hardware and software.
This cloud AI infrastructure grouping of x86 and Cobalt CPUs, Maia accelerators, and Azure Boost DPUs will, we understand, make Azure cloud infrastructure perform faster and more efficiently. However, it does so in a proprietary way, rather than using industry-standard hardware as, say, Meta does with its Open Compute Project.
Microsoft is investing heavily in developing its own silicon hardware and firmware for its private use. It must surely be using thousands of these chips inside its own operations.
Storage architect Chris Evans commented on Bluesky: “The amount of new silicon developed by Microsoft, AWS, GCP, etc, should be a worry for traditional vendors. It will represent a divergence from traditional standards and diverge the TCO models.” AWS has in-house Nitro hardware, and Google has also developed proprietary chip hardware.
The HPE Alletra file and block storage portfolio has gained object storage with the Alletra MP X10000 for large-scale unstructured data lakes and digital repositories.
It is designed for exabyte scale and, HPE says, achieves up to 6x faster performance than “two of the leading vendors delivering object storage.” That might mean AWS with S3 and Azure with its Blob offering.
Tim Desai
HPE storage product marketer Tim Desai writes: “We are excited to introduce the HPE Alletra Storage MP X10000, a unique object storage solution designed to transform the way you handle data-intensive workloads and stay ahead of the curve.”
The X10000 uses Alletra Storage MP base hardware – scale-out storage nodes with a disaggregated shared everything (DASE) architecture. This features ProLiant server chassis with all-flash NVMe fabric-connected local storage and Aruba switches. HPE has OEM’d VAST Data software for its subscription-style GreenLake File Storage offering, which also uses Alletra MP hardware. The GreenLake for Block Storage offering has been rebranded as the Alletra Storage MP B10000 (B for block). We would not be surprised to see an Alletra Storage MP F10000 brand appear for GreenLake File Storage in the interest of consistency.
Desai says: “The X10000 is not just a storage solution. It’s a game-changer for enterprises looking to optimize their data management and drive business value.”
What makes it a game-changer? HPE said that, in addition to its performance and scalability, the X10000 features the GreenLake cloud operational experience with a common Alletra MP platform “that can be configured for different software-defined storage personas and use cases” – block, file, and now object. Desai said: “We are delivering on our vision to provide multiple storage services with multi-protocol support on a single disaggregated cloud managed platform.”
The X10000 has container-native software with Kubernetes-based orchestration, erasure coding, and inline data reduction. It has native AWS S3/S3a API support and incorporates a log-structured key value store, “optimizing the use of flash media for superior performance.” We are told: “There is no front-end caching or data movement between media. Each node adds a proportionate amount of performance to the cluster, enabling seamless scaling for concurrent users and client nodes.”
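HPE does not detail the X10000’s erasure coding scheme. As a general illustration of the idea, here is a minimal single-parity sketch using XOR; real systems use Reed-Solomon-style codes that tolerate multiple simultaneous shard losses:

```python
from functools import reduce

def make_parity(shards: list[bytes]) -> bytes:
    """XOR the corresponding bytes of equal-length shards."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)

def recover(survivors: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing data shard from the survivors plus parity."""
    return make_parity(survivors + [parity])

data = [b"ABCD", b"EFGH", b"IJKL"]   # three data shards
parity = make_parity(data)           # one parity shard

rebuilt = recover([data[0], data[2]], parity)  # shard 1 lost
print(rebuilt)  # b'EFGH'
```

The appeal over replication is overhead: here protection costs one extra shard in four (33 percent on three data shards) instead of a full second copy.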
There is up to 20x data reduction, likely for backup data, and streamlined integration with backup products: Commvault and Veeam initially, with more in qualification.
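The 20x figure is credible for backup data specifically because successive backup generations are highly redundant. A toy illustration using stdlib zlib as a stand-in for the X10000’s undisclosed inline reduction algorithms:

```python
import zlib

# Simulate a backup stream: 20 nightly captures of the same 64 KiB "file",
# each generation differing by a single byte.
base = bytes(range(256)) * 256            # 64 KiB of base data
stream = b"".join(base + bytes([i]) for i in range(20))

compressed = zlib.compress(stream, 6)
ratio = len(stream) / len(compressed)
print(f"{ratio:.0f}x reduction")          # highly redundant data reduces heavily
```

Largely unique data, by contrast, reduces far less, which is why such ratios are quoted for backup rather than primary workloads.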
The entry level is a three-node system and it can grow to hundreds of nodes. Capacity expansion is seamless, requiring minimal data movement, and upgrades can be performed non-disruptively. Performance and capacity can be rebalanced during expansions with minimal impact, eliminating the need for costly data transfers.
The X10000 “allows for future inline data services, bringing compute closer to the data.”
Management via HPE’s Data Services Cloud Console provides centralized monitoring, protection, self-provisioning, and proactive support through a single UI for global-scale infrastructure, including the Alletra Storage MP B10000, the X10000, and GreenLake for File Storage.
By writing its own object storage software from the ground up, HPE appears to have rejected the option of using object storage from its two existing partners, Cloudian and Scality, either through licensing or acquisition. These two suppliers are now competing with in-house object storage from DataCore, Dell, DDN, HPE, Hitachi Vantara, IBM, Infinidat, NetApp, Pure Storage, and Quantum. Lenovo is the last main system vendor with no in-house object storage.
HPE says it’s collaborating with Nvidia to enable a direct data path for direct memory access (DMA) transfers between GPU memory, system memory, and the X10000, which is critical for AI applications. In other words, GPUDirect-like S3 over RDMA, which is already supported by Cloudian and MinIO. We think S3-over-RDMA will become table stakes for all object storage vendors.
Check out an Alletra Storage X10000 brochure here and data protection brochure here.
Bootnote
HPE also announced VM Essentials, a unified VM management facility for virtualized workloads across hybrid environments. It integrates existing virtualized workloads with the new HPE VME hypervisor. VM Essentials supports the main storage protocols, distributed workload placement, high availability, live migration, and integrated data protection. HPE claims that, by using GreenLake cloud and VM Essentials, enterprises can save up to 5x on TCO.
Qumulo says its Cloud Native Qumulo on AWS achieved over a terabyte per second throughput and more than one million IOPS using standard Network File System clients.
The company’s Core system software has been ported to Azure and AWS as Cloud Native Qumulo (CNQ). It offers both file and object protocols and Qumulo claims it has pricing “up to 80 percent lower than legacy cloud-based file solutions.” It also claims to be able to meet the performance and capacity needs of “virtually” any file-based application.
Steve Phillips
Steve Phillips, Qumulo’s head of product marketing and management, said: “For the first time, customers can use CNQ on AWS or Azure to dynamically scale performance or capacity, tailoring storage to meet changing workload demands from several gigabytes per second to now over a terabyte per second.”
He says CNQ has a cross-vertical appeal since customers in the healthcare and life sciences, media and entertainment, higher education, financial services, autonomous driving modeling, and energy verticals all rely on AI and high-performance computing to maintain a competitive edge.
CNQ has linear performance scaling and Qumulo says it delivers flash-class performance with an average cache hit ratio greater than 95 percent. From an AI processing point of view, CNQ “minimizes data load times and GPU idle cycles, reducing costs while speeding up time to training, tuning, and/or inference.”
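The greater-than-95-percent cache hit ratio is what underpins the flash-class performance claim: effective latency is the hit-rate-weighted average of cache and backend latency. A back-of-envelope sketch, with both latency figures illustrative assumptions rather than Qumulo numbers:

```python
def effective_latency_us(hit_ratio: float, cache_us: float, backend_us: float) -> float:
    """Hit-rate-weighted average access latency in microseconds."""
    return hit_ratio * cache_us + (1.0 - hit_ratio) * backend_us

# Assumed figures: 200 us cached reads, 10 ms (10,000 us) backend object reads.
print(effective_latency_us(0.95, 200, 10_000))  # about 690 us
print(effective_latency_us(0.99, 200, 10_000))  # about 298 us
```

Even a few points of extra hit ratio cut effective latency sharply, because the slow backend path dominates the average.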
Qumulo says standards-based CNQ has a collection of other benefits:
Elastic scalability adjusts to changing workload demands, delivering performance when needed and throttling back to save costs when demand is reduced. “Unlike other cloud-based file systems,” CNQ operates without pre-provisioned capacity or rigid volumes, and clients pay for only the capacity and performance they use.
Simplicity as customers can deploy CNQ in minutes and scale performance and capacity in seconds, freeing users to focus on business concerns while gaining faster access to their data.
Global namespace (GNS) to enable low-latency access to data from any location without complex replication or pre-staging operations, enabling customers to connect on-prem data to cloud-based AI and high-performance computing services.
Cost savings as pricing is based on actual storage usage and performance delivered without requiring static pre-provisioned capacity.
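Aggregate throughput figures like the terabyte-per-second claim above are typically measured by summing concurrent client reads over a time window. A self-contained sketch of that method, using local temporary files as a stand-in for NFS-mounted paths:

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def read_file(path: str) -> int:
    """Read a file fully and return the byte count."""
    with open(path, "rb") as f:
        return len(f.read())

# Stand-in data set: in a real test these would be files on NFS mounts,
# read by many client machines rather than threads in one process.
with tempfile.TemporaryDirectory() as d:
    paths = []
    for i in range(8):
        p = os.path.join(d, f"f{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(1 << 20))  # 1 MiB per file
        paths.append(p)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as ex:
        total = sum(ex.map(read_file, paths))
    elapsed = time.perf_counter() - start

print(f"{total / elapsed / 1e9:.2f} GB/s aggregate")
```

Scaled-up versions of this pattern, with hundreds of clients and much larger working sets, are how vendors arrive at headline aggregate numbers.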
Just as using AI and HPC is becoming a cross-industry feature, it is also becoming a cross-supplier feature. In the on-premises world, scale-out Qumulo is competing with parallel-access and GPUDirect-supporting filer suppliers such as DDN, Hammerspace, IBM’s Storage Scale, NetApp, Pure Storage, VAST Data, VDURA, and WEKA.
Dell is extending its PowerScale offering to add parallel file system access. Qumulo has GPUDirect support as a roadmap item. NetApp is extending its ONTAP file system with an ONTAP Data Platform for AI project.
It is happening in the object world as well. GPUDirect-type support is present with Cloudian and MinIO and coming to DDN Infinia and others. File and object storage suppliers are colliding as they try to offer the fastest, most power-efficient, and cost-effective way to feed unstructured data to AI training and inferencing GPU servers and HPC systems.
All the file and object storage suppliers are also emphasizing on-premises and public cloud hybridity, with global namespaces, common interfaces, and management.
In June, Qumulo claimed test results showing it was faster than WEKA in the Azure cloud on an AI-related benchmark. It said at the time that this made it both the industry’s fastest and most cost-effective cloud-native storage offering, as its Azure run cost “~$400 list pricing” for a five-hour burst period. The SaaS PAYG (pay as you go) pricing model means metering stops when performance isn’t needed.
WEKA and Azure achieved 5 GB/sec throughput in tests with the SMB protocol in February, a long way short of 1 TB/sec.
We have no details about how CNQ passed the terabyte bandwidth milestone, and suspect that other cloud-native file storage vendors might achieve it if they throw enough nodes (cloud instances) into the pot. Certainly Qumulo has now set up a target for other vendors to reach.
Bootnote
CNQ is also available on AWS GovCloud, providing a secure, compliant option that meets government and regulatory standards.
Dell will provide APEX file and protection services to Microsoft’s Azure cloud customers and supply AI services for Copilot use.
APEX is a set of compute, storage, and networking services supplied through a public cloud-like subscription model. It is a hybrid on-prem/public cloud offering. APEX File Storage is based on Dell’s PowerScale OneFS scale-out file system software. It is due to be enhanced with parallel file system support, providing faster and more scalable access to file data. Dell is trying to show that it can provide file, cyber security, and AI development services to Microsoft customers – particularly those in the Azure cloud.
Arthur Lewis
Arthur Lewis, president of Dell’s Infrastructure Solutions Group, stated: “Our storage software, data protection, and service advancements help customers in Microsoft environments accelerate their transformation efforts quickly and securely.”
There will soon be a Dell-managed option for APEX File Storage for Microsoft Azure, giving customers simplified deployment and management. This service will offer burst capacity for performance-intensive AI workloads, data mobility and operational consistency across on-prem and cloud environments, and native integration with Microsoft AI tools.
This is complemented by Dell APEX Protection Services for Microsoft Azure, which will deliver Dell-managed, AI-powered cloud data protection, with data reduction, and cyber resiliency across edge locations, such as remote offices and datacenters. The cyber resiliency features zero trust security, immutability, encryption, multifactor authentication, and role-based access controls. AI-powered CyberSense threat intelligence enables “up to 80 percent less time spent on recovery,” Dell claims.
There are two new Dell security services for Microsoft environments:
Advisory Services for Cybersecurity Maturity Model Certification (CMMC) for Microsoft has specific recommendations for Microsoft offerings.
Managed Detection and Response with Microsoft has Dell staff monitoring, detecting, investigating, and responding to threats 24/7 across a customer’s IT environment.
The Microsoft AI-related services being introduced by Dell are:
Accelerator Services for Copilot+ PCs with guidance on features, implementation plans, best practices, and more.
Development and implementation services for Microsoft Copilot Studio and Azure AI helping with specific business needs.
Implementation Services for Microsoft Azure AI Service, supporting on-prem AI application development with Azure AI services on Dell hybrid cloud solutions for Azure.
Aung Oo
Microsoft’s Aung Oo, VP of Azure Storage, stated: “Dell Technologies is enabling its customers to bring their existing knowledge, trusted platforms, and enterprise data to Azure to speed the adoption of critical technologies including Azure AI Services.”
Availability
Advisory Services for Cybersecurity Maturity Model Certification (CMMC) for Microsoft and Managed Detection and Response with Microsoft services are available now.
Dell-managed APEX File Storage for Microsoft Azure will be available in public preview beginning in the first half of 2025.
Accelerator Services for Copilot+ PCs are available now.
Services for Microsoft Copilot Studio, Microsoft Azure AI Studio, and Implementation Services for Microsoft Azure AI Service are available now.
Dell APEX Protection Services for Microsoft Azure will be available beginning in the first half of 2025.