
Pure’s control plane gets more intelligent, more secure

Control plane enhancements for Pure Storage arrays and software stack are enabling customer admins to manage and secure fleets of Pure arrays and cloud instances, and their workloads more effectively.

Pure Storage has gained the top position in Gartner’s latest Enterprise Storage Platform Magic Quadrant and, with its Accelerate event announcements, intends to stay there. Wedbush analyst Matt Bryson reckons: “AI is forcing enterprises to rethink data architectures, creating a window for Pure to take share from incumbents.” The announcements cover virtually the entirety of Pure’s offerings, from hardware up through the software stack to AI-enhanced array and fleet management. We covered the hardware and data plane announcements here and now look at the control plane news.

Pure Storage CTO Rob Lee

Pure CTO Rob Lee stated: “In today’s AI era, access to data is everything. Managing your data, not just storing it, is the new foundation for AI-readiness from cloud to core to edge. Success depends on having your data secure everywhere, and easily accessible anywhere, with a unified and consistent experience – in real time, at scale, across any workload.”

Pure says its Intelligent Control Plane (ICP) covers fleet and remote management, enforced by policies with workload presets. Workflows can be orchestrated and managed with an AI Copilot. The ICP now has real-time awareness of applications and workloads across a customer’s entire fleet of Pure Storage arrays on-premises and its instantiations in the public cloud. It can discover over-burdened arrays and move workloads to relieve the stress.

The cloud-based Pure1 management feature now has an AI Copilot with MCP (Model Context Protocol) integration, enabling AI agents to connect to it and opening the door to admin staff managing workloads across a Pure fleet using natural language. Pure Storage AI Copilot operates as both an MCP server and client, enabling integration with internal systems for hardware performance, subscriptions, and security data, as well as external tools like analytics engines and application monitors.
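As a rough sketch of what that MCP integration enables – the proxy command, tool name, and arguments below are hypothetical stand-ins, not published Pure1 endpoints – an agent built with the open source MCP Python SDK could discover and call the Copilot’s tools like this:

```python
import asyncio

# Requires the open source MCP Python SDK: pip install mcp
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local proxy command fronting a Pure1 MCP server; Pure has not
# published these names, they stand in for whatever the GA product exposes.
server = StdioServerParameters(command="pure1-mcp-proxy", args=["--org", "example-org"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()            # discover available tools
            print([tool.name for tool in tools.tools])
            # Hypothetical tool name and arguments:
            result = await session.call_tool("get_fleet_health", {"fleet": "prod"})
            print(result.content)

asyncio.run(main())
```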

There is a Portworx AI Copilot to help manage Portworx deployments, providing answers to natural language queries about, for example, Portworx cluster health, license states and offline nodes. Users can query Portworx clusters in the same way they interact with FlashArray systems, monitoring their Kubernetes and Portworx clusters at scale via instant interaction with an AI agent on the Pure1 Copilot user interface. 

Pure’s fleet-managing Fusion can now manage a customer’s Portworx instantiations, handling Portworx containerized apps. Fusion can be used to discover and migrate VMware workloads to KubeVirt.

ICP gets workflow orchestration templates through which, it says, production-ready workflows can be deployed in minutes. Connectors bring in other Pure products as well as third-party offerings from, for example, Google, Microsoft, PagerDuty, SAP, ServiceNow and Slack.  

Cyber Resilience

Pure says the standard approach to cyber defense – bolting on multi-vendor solutions that don’t cover the storage platform – does not work. In its view, native threat detection capabilities are needed at the storage layer, and that is what it is providing as built-in capabilities.

Across-the-board advancements here provide better threat detection, security assessments, and a cyber-recovery SLA. Pure’s AI Copilot – think conversing with a dashboard – provides visibility of anomalies and threats, and can take proactive action. Customers can use Pure’s Log Center to investigate insider threats and anomalous user access, and there is real-time malware scanning with ICAP (Internet Content Adaptation Protocol) and a file workload anti-virus capability.
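ICAP itself is a lightweight, HTTP-like protocol (RFC 3507): the storage system hands file content to an external scanning service and acts on the verdict. As a minimal illustration – the host and service name below are hypothetical – this probes an ICAP scanner’s capabilities from Python:

```python
import socket

# Hypothetical ICAP scanner endpoint; 1344 is the standard ICAP port (RFC 3507).
ICAP_HOST, ICAP_PORT, SERVICE = "icap.example.internal", 1344, "avscan"

# An OPTIONS request asks the service what it supports (REQMOD/RESPMOD,
# preview sizes, etc.) before any file content is sent for scanning.
request = (
    f"OPTIONS icap://{ICAP_HOST}/{SERVICE} ICAP/1.0\r\n"
    f"Host: {ICAP_HOST}\r\n"
    "Encapsulated: null-body=0\r\n"
    "\r\n"
)
with socket.create_connection((ICAP_HOST, ICAP_PORT), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    print(sock.recv(4096).decode("ascii", errors="replace"))  # e.g. "ICAP/1.0 200 OK"
```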

Pure has partnered with CrowdStrike for Threat Graph and Falcon SIEM (Security Information and Event Management) integration, using real-time Threat Graph intelligence to automatically detect and combat malicious activity and attacks. The integration with CrowdStrike Falcon next-generation SIEM automates and accelerates incident response, with immediate action to update policies, replicate data, and isolate critical systems to contain threats. Alerts sent to security teams mean they can respond before damage occurs.

It is also partnering with Superna to get automated, real-time file and user monitoring for threat detection and response at the data layer. This is integrated with both FlashArray and FlashBlade, and specifically targets attacks like data exfiltration or double-extortion ransomware. Compromised accounts are instantly locked when malicious activity is detected, and security policies are enforced automatically.

Pure Protect Recovery Zones – clean rooms – provide automatically provisioned, isolated recovery environments. This means customers can non-disruptively test and validate applications and data, or remediate and recover from malicious attacks, without affecting production environments.

A Veeam partnership provides Cyber-Resilience-as-a-Service with centralized, automated protection and recovery through a SaaS scheme that is enterprise- and fleet-wide, and has SLAs.

Availability

  • Pure Protect Recovery Zones are some way off, expected to GA in Pure’s first fiscal 2027 quarter (February to April 2026).
  • The Portworx Pure1 AI Copilot is generally available (GA).
  • Pure1 AI Copilot integration with Model Context Protocol (MCP) servers will be GA in Pure’s fourth fiscal 2026 quarter (November 2025 to January 2026).
  • Portworx integration with Fusion is slated to be GA in the first half of Pure’s fiscal 2027 (February to July 2026).
  • Real-time threat evaluation and response with CrowdStrike Threat Graph and Falcon Next-Gen SIEM integration should be GA in Pure’s third fiscal 2026 quarter (August to October 2025).

Pure pushes performance and density with new FlashArray systems

Pure Storage is making a large number of announcements at its Pure Accelerate 2025 event, subsumed under an Enterprise Data Cloud heading.

It divides them into three groups: an intelligent control plane, to deal with operational complexity, which is primarily Fusion-related; a unified data plane, to handle siloed and fragmented data on-prem and in public clouds; and cyber resilience with SLA services, to cover malware threats and regulation. A Pure graphic relates this tripartite concept to its product capabilities.

The Enterprise Data Cloud, built on Pure Fusion, which unifies Pure’s on-prem and public cloud storage, was announced in June, giving customers “the ability to easily manage their data across their estate with unrivaled agility, efficiency and simplicity.” We are told block, file, and object data is managed and governed by policy, every system and site contributes capacity and performance through a shared virtual layer, and all data is controlled through a single console. 

The hardware news is included in the unified data plane section and we’ll look at that section here, starting with the hardware since that underpins its on-premises presence. The intelligent control plane and cyber-resilience news will be covered in a second article.

Hardware

There are five FlashArray product lines, positioned in a 2D performance vs capacity space by a Pure diagram:

The June event saw the FlashArray//XL line upgraded to a fifth generation, R5. Pure has announced a new top-end //XL model, the FlashArray//XL 190 R5, an evolution of the //XL170. Pure International CTO Alex McMullan told us: “It’s essentially a higher performance version with a lot more memory on board and we’re also piloting effectively a DRAM RAM drive capability within this system as well.”

Alex McMullan

He added: “I’m not going to disclose how much DRAM is in there, but I’m just going to say I don’t think we can get any more in there if we tried. That’s really aimed at the highest level of performance requirement that we have.”

It has more cores in its Intel Emerald Rapids CPUs, more DRAM, and PCIe gen 5, and, Pure says, 25 percent more capacity, 50 percent lower latency, and a 100 percent performance increase over the existing //XL170. Customers can non-disruptively upgrade to it from existing FlashArray//X or //XL systems.

The block-only FlashArray//ST – first revealed in June and delivering over 10 million IOPS from its off-the-shelf commercial SSDs, not Pure’s own DFM drives – now has a new generation, an Emerald Rapids-based system with 6.4 or 12.8 TB PCIe 5 SSDs. Kioxia, with its CD9P-V, is one supplier shipping such drives. The new //ST has up to 400 TB of usable capacity. It’s an 18 million IOPS machine with 200 GB/s throughput, which McMullan says “is pretty impressive for a little 5RU box.” Its Purity OS supports snapshots, writable clones, and replication.

The FlashArray//X and //C lines have the same compute blades but different media. Their 3RU chassis can now hold 28 drives by default, providing a 40 percent bump in capacity. These arrays use the same distributed NVRAM technology seen in the FlashBlade and XL. They get updated to R5 hardware, meaning Emerald Rapids processors, giving the //X a 30 percent boost in performance over the prior R4 generation, with the //C getting a 40 percent improvement.

With Pure’s 150 TB DFMs (Direct Flash Module solid-state drives), McMullan said: “We’re about 30 times more dense than everybody else at this point in time, and it is going to get better,” as 300 TB DFMs are coming. He also said: “We’re already prototyping how on earth we get to build a system that’s based on petabyte size.” Such testing involves creating virtual NVMe drives of massive capacity: “When you’re looking at the block side of things, the targets and data spaces now are so big that we actually use a complete FlashArray to pretend to be an NVMe drive to connect to another FlashArray. So you have one FlashArray as the test one, and you’ve got 20 or 30 of these FlashArrays sitting behind it [with] all of those offering up one single NVMe drive of all their capacity.”

It’s a similar story on the object side: “We have to create 2 trillion objects in a bucket to be able to test at a scale that customers are demanding.” 

As well as the hardware, Pure’s unified data plane story has virtualization, AI, improved data reduction, and cyber-resilience parts to it.

Virtualization

The virtualization aspect is, in part, a response to Broadcom upsetting its acquired VMware customer base. Pure is happy to provide storage to this base but also supports Nutanix and its hypervisor, various other KVM-based systems, and VMware virtualization in the AWS and Azure clouds. But, McMullan said, “with the announcements from Broadcom that you now have to bring your own license to the public cloud. There’s going to be a challenge to that.” 

A fourth customer option is to migrate to virtualization with Pure’s Portworx and KubeVirt offerings. McMullan said: “KubeVirt is a very different product than it was 12 months ago. It’s seen a lot of maturity, a lot of investment, a lot more focus.”

Pure supports managed VMware services in both AWS (Amazon Elastic VMware Service) and Azure (Azure VMware Solution). There are also Pure Storage facilities in both clouds which are not fully managed: Pure Storage Cloud Dedicated for AWS and a new offering, Pure Storage Cloud Azure Native for Azure, which was developed with Microsoft. McMullan explained: “Broadly, what this means is that using our Cloud Block Store, it’s a first class layer underneath the Azure VMware service essentially.”

It is, he said, “a first-class service underneath Azure VMware service… a drop down. It’s not something that you do in marketplace and drag it over here. It has that full integration in terms of billing, cost, ingest.”

AI and dedupe

Pure is going to integrate its Key-Value Accelerator with Nvidia’s Dynamo. This can tier GenAI LLM KV cache data from a GPU’s HBM to the associated CPU’s DRAM, then to direct-attached SSDs, and lastly to external storage arrays. McMullan tells us: “This is all based on vLLM protocols, but the testing we’ve seen essentially improves inference and some parts of training by about 20X, but also it keeps the GPUs running at full tilt.”

“What that really brings to you is the ability to do better than 20 percent utilisation in your clusters, which is one of the big things we hear quite often from customers; that frustration.”
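The tiering idea is simple to state even if the engineering is not: keep hot KV cache blocks in HBM, demote colder blocks down the hierarchy, and promote them back on reuse rather than recomputing them. A toy sketch of such a demotion chain – the tier names, capacities, and policy below are illustrative, not Pure’s or Nvidia’s implementation:

```python
from collections import OrderedDict

# Toy demotion chain standing in for HBM -> DRAM -> local SSD -> external
# array KV cache tiering. Tier names, capacities, and policy are illustrative.
TIERS = ["hbm", "dram", "ssd", "array"]
CAPACITY = {"hbm": 2, "dram": 4, "ssd": 8, "array": 1_000_000}  # blocks

caches: dict[str, OrderedDict] = {t: OrderedDict() for t in TIERS}

def put(block_id: str, kv_block: bytes, tier: str = "hbm") -> None:
    """Insert a KV block, demoting the least-recently-used block on overflow."""
    caches[tier][block_id] = kv_block
    caches[tier].move_to_end(block_id)
    if len(caches[tier]) > CAPACITY[tier]:
        victim, data = caches[tier].popitem(last=False)     # evict coldest block
        put(victim, data, TIERS[TIERS.index(tier) + 1])     # demote one tier down

def get(block_id: str) -> bytes | None:
    """Serve a block from the fastest tier holding it, promoting it to HBM."""
    for tier in TIERS:
        if block_id in caches[tier]:
            data = caches[tier].pop(block_id)
            put(block_id, data)                             # promote on reuse
            return data
    return None                                             # miss: must recompute

for i in range(8):
    put(f"prompt-{i}", b"kv")
print({t: list(c) for t, c in caches.items()})  # hot blocks in HBM, cold demoted
```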

Pure’s FlashArrays have seven or eight compression algorithms they can use, depending on workload and data type. This compression technology has been enhanced, is now called Deep Reduce, and is being provided for FlashBlade arrays as well. It provides always-on data reduction across storage protocols and array tenants. McMullan said: “This is really a second order advanced compression based on cardinality that we see for unstructured workloads, whether that’s medical records, whether that is genomics data, whether that’s geospatial. [It’s the] same capability that we had on FlashArray, but bringing it into FlashBlade with different algorithms, tuned for unstructured data sets.”

Availability

The FlashArray//XL 190 will be generally available in Q4 FY26 (November 2025 to January 2026), while the FlashArray//X R5 and FlashArray//C R5 are both generally available now. The Key-Value Accelerator-Dynamo integration will be available in the February-April period in 2026. Purity Deep Reduce will be generally available in the first half of Pure’s fiscal 2027 (February to July 2026).

Quantum restructures debt as Raptor tape library secures Veeam Ready status

Quantum is making financial and sales progress with a debt-for-equity swap and a Veeam tape library qualification.

The loss-making company has been through a boardroom refresh triggered by a company called Dialectic Capital Management buying up a substantial amount of its debt and gaining a board seat. The new board voted out chairman, president, and CEO Jamie Lerner in June, replacing him as CEO with Hugues Meyrath. There was a more or less complete C-level exec overhaul as Meyrath put his own team in place in pursuit of a company turnaround and growth strategy.

Hugues Meyrath

Sufficient progress has been made for Quantum and Dialectic to agree to restructure the Dialectic-held Quantum debt. The Dialectic term debt of approximately $52 million is to be exchanged for senior secured convertible notes with a three-year maturity. As consideration, Dialectic receives a warrant to purchase 2,653,308 shares of the company’s common stock. Capital raised to date through Quantum’s Standby Equity Purchase Agreement (SEPA) can be used to repay the existing term loan, and an additional $15 million of SEPA proceeds will be retained by the company for working capital and general corporate purposes.

Meyrath stated: “This transaction to restructure a substantial portion of the Company’s outstanding term debt represents a significant step toward our goal of becoming debt-free. The proposed exchange of term debt for convertible notes demonstrates Dialectic’s belief in the Company’s strategic vision and long-term growth opportunities, while also aligning Dialectic as a future strategic partner. In addition, we believe this transaction provides increased financial flexibility to execute on our operating initiatives and revitalized go-to-market strategy.”

He added: “We believe the increased financial flexibility provided by this transaction allows us to focus on our goal of delivering profitable performance and revenue growth.”

John Fichthorn

John Fichthorn, Managing Partner of Dialectic Capital Management and Quantum board member, said: “This transaction is an important milestone in Quantum’s ongoing operational and financial transformation. At Dialectic, we are committed to supporting the management in building a focused, profitable, and growing storage technology company with a clear strategy and strong alignment across all stakeholders. By restructuring the balance sheet and positioning Quantum for growth, our goal is to align the interests of management, employees, and shareholders through the performance of the equity.”

Quantum has also gained a product sales opportunity, as its Scalar i7 Raptor tape library has achieved Veeam Ready qualification from Veeam Software, the number-one data protection and cyber-resilience supplier. Quantum claims the Raptor has the highest storage density of any tape library on the market.

Quantum’s existing Veeam Ready systems include Scalar i3, Scalar i6, Scalar i6000 tape libraries, DXi backup appliances, and ActiveScale object storage. The Raptor can be paired with ActiveScale Cold Storage, offering Amazon S3 Glacier-compatible object storage for archiving.

Meyrath said: “By extending Veeam Ready qualification to our Scalar i7 Raptor, we’re delivering customers the broadest range of tested solutions – from disk and object storage to our entire line of tape libraries—so they can choose the right mix of performance, cost, and retention for their environment.”

The closing of certain transactions contemplated by the Transaction Agreement is subject to certain conditions, including the approval of the debt exchange by the company’s stockholders. 

StorageMAP vs SyncEngine: Datadobi explains the difference

Interview. Datadobi’s StorageMAP tool overlaps with the functionality of SyncEngine, VAST Data’s newly announced software to find, catalog, and capture file and object data in other suppliers’ storage or data management systems, and ingest it into VAST’s private, AI-focused universe.

StorageMAP, with its parallelized, multi-threaded, metadata scanning engine (mDSE), scans and lists a customer’s file and object storage environments, both on-premises and in the public cloud, and can then move data between storage tiers and archive cold data, according to policy-driven workflows.

We asked Steve Leeper, Datadobi’s VP of Product Marketing, some questions to draw out the differences between SyncEngine and StorageMAP.

Blocks & Files: Do StorageMAP and SyncEngine cover the same data sources? Where do they overlap or differ?

Steve Leeper

Steve Leeper: They don’t map the same sources, and they’re not meant to. SyncEngine is a VAST-native capability designed to support onboarding data into the VAST platform, whether for traditional migration or AI ingestion. It’s tightly integrated with VAST’s DASE architecture and optimized for ingesting data that will be processed within VAST’s “AI Operating System.”

StorageMAP, by contrast, is a mature, vendor-agnostic data management platform. It maps data across multi-vendor environments – including NAS, object, and cloud – and provides customers with deep visibility into their entire estate. It’s not just about moving data; it’s about understanding it, cleaning it, and making strategic decisions before migration even begins.

SyncEngine helps you move data into VAST. StorageMAP helps you decide what’s worth moving, and where it should go.

Blocks & Files: Can StorageMAP map VAST Data stored files and objects?

Steve Leeper: Yes. StorageMAP can discover and analyze data stored on VAST just as it does with other platforms. It’s designed to work across heterogeneous environments, including VAST’s architecture, and can provide insights into what’s stored, how it’s used, and whether it’s relevant to retain or migrate.

Blocks & Files: Can StorageMAP feed data to LLMs and AI agents? How?

Steve Leeper: StorageMAP plays a critical role in preparing enterprise data for AI workflows. While SyncEngine can ingest SaaS application data and vectorize it for RAG or inference, StorageMAP can onboard other platform data, especially legacy or multi-vendor sources.

Before any ingestion or training begins, StorageMAP helps clean up the source systems so only relevant, high-quality data is migrated. It identifies ROT (Redundant, Obsolete, Trivial) data and enables policy-driven archiving or deletion. StorageMAP helps ensure that AI models aren’t trained on irrelevant or bad data and that data intended for inferencing or for RAG (retrieval-augmented generation) is appropriate. 

StorageMAP doesn’t train your model; it makes sure your model isn’t trained on junk.

Blocks & Files: Can StorageMAP and Datadobi do things SyncEngine cannot?

Steve Leeper: StorageMAP offers:

  •  Global visibility and mobility across multi-vendor environments
  •  ROT (Redundant, Obsolete, Trivial) data identification and cleanup
  •  Policy-driven data movement, archiving, and deletion
  •  Pre-migration analysis and optimization
  •  Support for governance, compliance, and sustainability goals

SyncEngine is focused on VAST-centric migration and ingestion. StorageMAP is about strategic data management across the enterprise.

Blocks & Files: What would persuade a VAST Data customer to use StorageMAP rather than SyncEngine?

Steve Leeper: It’s not either/or, it’s complementary. Because SyncEngine and StorageMAP solve different, but interlocking, problems. SyncEngine is a VAST-native tool designed to ingest data into the VAST platform, whether for traditional migration or AI processing. It’s fast, integrated, and ideal for getting data into VAST’s DASE architecture.

SyncEngine assumes you already know what data to move. StorageMAP solves that problem. StorageMAP gives customers visibility into their entire data estate, across vendors, clouds, and formats, so they can identify what’s relevant, what’s ROT, and what’s potentially AI-worthy. SyncEngine doesn’t do that.

StorageMAP enables cleanup, classification, and policy-driven selection before anything touches VAST. That means less junk, lower costs, and better AI outcomes.

In environments shaped by M&A, multi-sourcing, or hybrid cloud, StorageMAP provides the global control plane and massively scalable data plane for mobility. SyncEngine is VAST-specific, designed primarily to pull data from other sources into VAST.

StorageMAP helps surface the right data for RAG and inference workflows from any vendor platform onto the VAST platform. SyncEngine can ingest and vectorize data, but vectorization is an expensive process. Better to use StorageMAP to identify, curate, and migrate the dataset(s) intended for VAST so that downstream processing on the VAST platform is accelerated.

Solidigm adds E1.S liquid-cooled variant to PS1010 SSD line

Solidigm has designed a liquid-cooled PS1010 SSD variant in E1.S format, enabling it to better support dense AI workloads that would otherwise overheat it.

As promised back in March, the D7-PS1010 has been made compatible with liquid cooling by adding an E1.S variant, longer and narrower than its original U.2 and E3.S form factors. Solidigm says this form factor allows “single-sided direct-to-chip liquid cooling technology.” 

Screenshot showing Solidigm cold-plate cooling from a video

This “single-sided” terminology is confusing, implying that the cold plate has a single surface touching one side of the drive. But Solidigm’s Greg Matson, SVP and Head of Products and Marketing, stated: “This is the world’s first single-sided cold-plate solution that cools both sides of the SSD, delivering the most efficient storage subsystem available to alleviate the strain placed on SSDs in dense AI environments.”

In a YouTube video, we can watch Solidigm’s Director of Leadership Narrative and Evangelist, Scott Shadley, explaining this to a Tech Field Day audience: “We were given a footprint of 8 x 15 mm E1.S, which is what is in the brains of the direct-attached storage of the Nvidia reference design, and they said, ‘Find a way to take the fans out of it.’ The traditional way we do this is to fit eight of these guys with a fan with air fins on it.” 

The resulting liquid-cooled SSD module has “liquid in and out ports for each of the drives… We have a single cold-plate touching the side of the drive… We designed the frame, the actual enclosure of the drive, to allow a single-sided cold-plate attachment” that’s spring-loaded. “The cold plate is only touching one side of the drive, but active conductive cooling by way of the material and the way the frame was designed allows us to cool the top side of the product.” So it is not literally cooling both sides of the SSD. It touches and cools one side, and internal heat conduction means the whole of the SSD, including its other side, gets cooled as well.

Schematic video screengrab showing PS1010 E1.S drives being loaded into liquid-cooling frame with spring-loaded mechanism

The company says its PS1010 E1.S SSD “is one of the fastest PCIe 5.0 SSDs on the planet for Direct-Attached Storage (DAS) AI workloads,” and the new E1.S variant means it can be fitted in liquid-cooled AI servers. It is also hot-swappable because the cold plate is spring-loaded.

It claims the PS1010 E1.S 15 mm air-cooled SSD drops energy usage by up to 33 percent when compared to similar products, such as Micron’s 9550.

Solidigm E1.S PS1010 9.5 mm and 15 mm form factors are available in 3.84 TB and 7.68 TB capacities. The company says it’s working closely with server ODMs and OEMs, such as Supermicro, to qualify the PS1010 9.5 mm and 15 mm E1.S SSDs on recommended vendor lists, in addition to other AI and server systems. 

HPE bigs up its primary block AFA growth

HPE says it has the second-largest share of the primary block part of the all-flash array (AFA) market, citing IDC numbers, which place it third overall behind Dell and NetApp.

Fidelma Russo

Fidelma Russo, HPE’s EVP and GM Hybrid Cloud and CTO, claimed in a LinkedIn post that HPE is outpacing the storage industry because it has “the fastest-growing all-flash block storage array in the industry,” adding: “Alletra Storage MP has recorded triple-digit year-over-year growth for the third consecutive quarter.”

Fellow HPE blogger Colin Gallagher said HPE’s Alletra Storage MP product had 33 percent year-over-year revenue growth in the second quarter of calendar 2025, making it “the fastest-growing all-flash block storage array among top peers.” That “top peers” qualification implies that smaller revenue all-flash storage suppliers grew faster still. We think that could refer to VAST Data.

The Alletra Storage MP system has disaggregated controllers and storage nodes, mirroring VAST Data’s basic architecture; in a file version, it runs VAST software. The IDC numbers come from the research company’s Worldwide Quarterly Enterprise Storage Systems Tracker (C2Q25 data), September 2025.

This looks at worldwide enterprise external OEM storage systems and hyperconverged infrastructure (HCI), and divides the market into primary block, file, and object storage. IDC’s report data is not publicly available. We understand that primary block is the largest revenue driver in the external OEM sector due to AI-driven demand for all-flash arrays.

Gallagher said HPE gained three points of AFA market share, reached 14.5 percent share in primary block, and delivered nearly 40 percent year-over-year growth in all-flash block arrays. If HPE can continue this growth rate, it could overtake NetApp in overall AFA revenues. In our storage world, that would be a significant upset.

Dell refreshes storage lines with QLC flash and beefed-up security

Storage market leader Dell has updated four of its block, file, and backup storage product lines with higher-capacity SSDs and stronger cyber-resilience.

The company says these updated arrays and appliances are part of its on-premises private cloud offerings. The Dell Automation Platform software provides automated delivery of these products, with pre-built, tested configurations for software stack deployments (e.g. VMware vSphere, PowerStore) on Dell hardware, eliminating manual installation. There are on-prem and SaaS deployment options. Dell NativeEdge is integrated into the Automation Platform, providing a full-stack, end-to-end offering supporting virtualized and containerized workloads. Dell says this is optimized to simplify and secure operations across distributed cloud and edge environments.

Travis Vigil

Dell’s Travis Vigil, SVP for ISG Product Management, stated: “Our latest storage and cyber resilience advancements are designed to help organizations build private clouds that are smarter, more secure, and ready to handle the demands of both traditional and modern workloads.” 

Early next year, Dell Private Cloud will expand to support Nutanix with a fully integrated offering built on disaggregated infrastructure. 

PowerMax

This is Dell’s high-end, mission-critical data storage array. The new PowerMaxOS 10.3 software can be installed with a single click, and software changes alone deliver up to 25 percent more IOPS on both PowerMax 2500 and 8500 systems.

There is QLC (4 bits/cell) support starting at 122 TB (10-drive minimum) for the PowerMax 2500, and it can scale out at the single-drive level to 8.8 PB effective capacity per array. Dell says there is a new cache-centric architecture with innovative write-folding techniques to extend QLC flash durability.
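Write folding is a generic flash-endurance technique: repeated overwrites of the same logical block are absorbed in cache so only the final version is programmed to the QLC media, which matters because QLC tolerates fewer program/erase cycles than TLC. A minimal illustration of the idea (not Dell’s implementation):

```python
# Generic write-folding illustration (not Dell's implementation): overwrites
# to the same logical block address (LBA) are folded in cache, so the QLC
# media sees one program per dirty LBA per flush, not one per host write.
cache: dict[int, bytes] = {}
flash_writes = 0

def host_write(lba: int, data: bytes) -> None:
    cache[lba] = data          # a later write to the same LBA replaces the earlier one

def flush() -> None:
    global flash_writes
    flash_writes += len(cache)  # one media program per dirty LBA
    cache.clear()

for i in range(100):            # the host overwrites LBA 7 a hundred times
    host_write(7, f"version-{i}".encode())
flush()
print(flash_writes)             # 1 media write instead of 100
```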

Security has been improved with Entra ID single sign-on and encrypted email alerts. Admin life is made easier with single-click software updates, which complete in under six seconds, zero-touch management installs, and up to 66 percent fewer steps when setting up replication mode changes.

There is a Workload Planning Dashboard, through which admins can gain predictive insights with “what-if” simulations and detailed visibility into performance and capacity usage across multiple PowerMax arrays. 

PowerMaxOS 10.3 upgrades are free and non-disruptive for existing customers with a Dell service contract.

PowerStore

PowerStore sits underneath PowerMax in Dell’s storage array range, being a dual-controller, unified file and block storage array line, extending from the entry-level 500T and 500T DC through the 1200T, 3200T, 3200Q, and 5200T to the range-topping 9200T. A new 5200Q model – Q for QLC – supports high-capacity QLC flash drives, scaling to over 23 PB effective capacity per cluster (assuming 5:1 dedupe ratio), while offering the same performance as the 5200T TLC version. This compares to the 5200T’s 23.6 PB effective capacity per cluster. Dell says customers can have better optimized workload placement through integrating the 5200Q with existing PowerStore clusters. 
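Effective capacity figures such as these are physical capacity multiplied by an assumed data reduction ratio, so the quoted 23 PB at 5:1 implies roughly 4.6 PB of physical flash per cluster. A quick check:

```python
# Effective capacity = physical capacity x assumed data reduction ratio.
# Using Dell's stated 5:1 assumption for the PowerStore 5200Q:
reduction_ratio = 5.0
effective_pb = 23.0                                # PB per cluster, as stated
physical_pb = effective_pb / reduction_ratio
print(f"~{physical_pb:.1f} PB of physical flash")  # ~4.6 PB
```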

PowerStore arrays are also getting built-in anomaly detection, single sign-on and biometric authentication, HashiCorp key manager support, and replication over Fibre Channel. Smart Support Auto-Heal functionality with automated health checks and repairs will cut issue resolution time by up to 90 percent.

Dell and hyperconverged infrastructure (HCI) software vendor Nutanix already support Nutanix software running on PowerFlex arrays. This partnership is being extended to PowerStore. Thomas Cornely, SVP Product Management at Nutanix, said: “With our Nutanix Cloud Platform soon supporting Dell PowerStore, we will be giving customers a new choice in how they architect their virtualized environments.” 

PowerFlex Ultra

PowerFlex is containerized block storage software and the codebase for APEX Block Storage. Dell says that its PowerFlex Ultra release has a Scalable Availability Engine with a scale-out, distributed, erasure-coded architecture. It achieves up to 80 percent storage efficiency, with over 50 percent reduction in physical storage footprint. The system can tolerate the loss of two nodes at once, achieves ten nines (99.99999999 percent) data availability, and reduces costs. 

We’re told that parallel processing across nodes delivers sub-millisecond latency and enterprise-class throughput for demanding workloads.
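Those figures line up with a wide erasure-coding layout: with k data fragments and m parity fragments per stripe, storage efficiency is k/(k+m) and the stripe survives any m simultaneous losses, so something like 8+2 yields 80 percent efficiency and two-node tolerance. The 8+2 layout is our inference, not a published PowerFlex parameter:

```python
# Erasure coding: k data fragments + m parity fragments per stripe.
# Storage efficiency is k / (k + m) and the stripe survives any m losses.
def ec_efficiency(k: int, m: int) -> float:
    return k / (k + m)

# 8+2 matches the stated 80 percent efficiency and two-node fault tolerance;
# the actual PowerFlex Ultra layout is not public, so this is an inference.
print(ec_efficiency(8, 2))   # 0.8
# Two-copy mirroring, by contrast, tops out at 50 percent efficiency:
print(ec_efficiency(1, 1))   # 0.5
```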

PowerProtect

This is Dell’s deduping backup target appliance range, available as a hardware appliance and as a virtual appliance that can run in public clouds. The hardware appliance range runs from the new DD3410 entry-level model through the DD6410, DD9410, DD9910 to the DD9910F with generally increasing capacity and performance en route, although the DD9910F (Flash) has less maximum usable capacity – 35.4 PB – than the DD9910 with its 97.5 PB.

The DD3410’s usable capacity range is 8 to 32 TB and it is intended for use in remote offices and smaller environments.

There is a new software-defined PowerProtect Data Manager Appliance for centralized PowerProtect appliance admin, with improved protection, including anomaly detection and data immutability.

Availability and background information

Find out more by reading various Dell blogs.

Availability:

  • Dell Private Cloud and Dell NativeEdge integrated with the Dell Automation Platform are available today.
  • Dell PowerStore 5200Q, plus PowerStore, PowerFlex, and PowerMax software updates, will be available in October 2025.
  • Dell PowerStore support for Nutanix will be available in early access in Spring 2026. 
  • Dell PowerProtect Data Domain DD3410 will be available in Q1 2026. 
  • Dell PowerProtect Data Manager Appliance will be available in Q4 2025. 

Micron rides AI-fueled DRAM wave to record revenue

Micron revenues passed $11 billion in its final FY 2025 quarter as hyperscalers bought more memory than ever before.

Revenues in the quarter, ended August 28, 2025, were a record $11.32 billion, beating the $11 billion high end of its outlook and coming in 46 percent higher than the year-ago $7.75 billion, with a GAAP profit of $3.2 billion, up 261 percent year-over-year. Full FY 2025 revenues were $37.38 billion, 48.9 percent higher than a year ago, with profit increasing to $8.54 billion.

A glory-to-gloom pattern is evident in Micron’s revenue history, with AI and allied HBM demand sending the current up-cycle to much higher levels

Chairman, president, and CEO Sanjay Mehrotra stated: “Micron closed out a record-breaking fiscal year with exceptional Q4 performance, underscoring our leadership in technology, products, and operational execution. In fiscal 2025, we achieved all-time highs across our datacenter business and are entering fiscal 2026 with strong momentum and our most competitive portfolio to date.” He also mentioned pricing execution.

Financial summary

  • Gross margin: 44.7 percent vs 35.3 percent a year ago
  • Operating cash flow: $5.73 billion vs $3.41 billion a year ago
  • Free cash flow: $803 million vs -$758 million last year
  • Cash, marketable investments, and restricted cash: $11.94 billion vs $9.2 billion last year
  • Diluted EPS: $2.83 vs $0.79 a year ago

DRAM revenues increased by 69 percent year-over-year to $9 billion from $5.3 billion, with HBM reaching a record, but NAND revenues declined 5.0 percent to $2.3 billion. NAND bit shipments went down but, as prices increased, revenues didn’t decline as much as they would otherwise have done. 

Basically, DRAM is 80 percent of Micron’s revenues and rising, while NAND is 20 percent and flat.

Mehrotra was bullish on Micron’s DRAM technology, saying: “Our 1γ DRAM node reached mature yields in record time, 50 percent faster than in the prior generation. We are the first in the industry to ship 1γ DRAM and will leverage 1γ across our entire DRAM portfolio to maximize the benefits of this leadership technology. We achieved first revenue from a major hyperscale customer on our 1γ products for server DRAM in the quarter.”

Micron aims to boost production, installing its first EUV tool in its Japan fab to enable 1γ capability, which will complement its existing 1γ supply from fabs in Taiwan. US federal funds will help too. Micron received a CHIPS grant disbursement for a new high-volume manufacturing fab in Idaho (ID1), with the first wafer output expected to begin in the second half of calendar 2027. The firm began design work on a second Idaho manufacturing fab (ID2), which will provide additional capacity beyond 2028. 

Micron’s HBM manufacturing capacity is increasing too. Mehrotra said: “Our continued HBM assembly and test investments position us well to meet growing HBM capacity requirements in calendar 2026. We are making good progress on our Singapore HBM assembly and test facility construction, which is on track to contribute to our HBM supply capability beginning in calendar 2027.”

Micron has changed its business unit reporting structure. Up until last quarter its four BUs were Compute & Networking, Mobile, Storage and Embedded. These now change to Cloud Memory, Core Data Center, Mobile & Client, and Automotive & Embedded and we lose clarity on Micron’s storage revenues. CFO Mark Murphy said: “The Cloud Memory Business Unit and Core Data Center Business Unit combined represent the totality of our datacenter business.”

The new business unit revenue results were:

Micron did not publish Q1 and Q2 FY 2025 business unit numbers

Cloud memory sales have shot up year-over-year while the core datacenter sales revenue has gone down, which is unexpected. We continually hear that AI server demand is rising and that such servers need DRAM and flash, yet Micron’s numbers appear to contradict that view.

High-Bandwidth Memory

HBM revenue grew to nearly $2 billion, implying an annualized run rate of nearly $8 billion, driven by the ramp of its HBM3E products. Micron says: “We have pricing agreements with almost all customers for a vast majority of our HBM3E supply in calendar 2026. We are in active discussions with customers on the specifications and volumes for HBM4, and we expect to conclude agreements to sell out the remainder of our total HBM calendar 2026 supply in the coming months.” 

It has recently shipped next-generation HBM4 samples and believes it “outperforms all competing HBM4 products, delivering industry-leading performance as well as best-in-class power efficiency.” Micron is also working on HBM4E (Extended), partnering with TSMC for manufacturing the HBM4E base logic die for both standard and customized products. It sees HBM4E as a 2027 timeframe product.

Micron has six HBM customers and the HBM outlook is good: “Our HBM share is on track to grow again, and be in line with our overall DRAM share in this calendar Q3, delivering on our target that we have discussed for several quarters now.” Its DRAM market share is around 22.5 percent.

As the HBM market is expected to be worth $50-60 billion in 2026, Micron’s hoped-for 22.5 percent share would be worth $11.25-13.5 billion – say, $3.1 billion/quarter at the midpoint. Looking even further ahead, Micron had said that by 2030, it expected HBM’s total addressable market to reach $100 billion.
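A quick sanity check on those figures, using the stated market range and share:

```python
# Checking the HBM arithmetic above: a 22.5 percent share of a
# $50-60 billion 2026 market, per year and per quarter.
low, high = 50e9, 60e9
share = 0.225
print(f"${low * share / 1e9:.2f}B-${high * share / 1e9:.2f}B per year")  # $11.25B-$13.50B
print(f"${(low + high) / 2 * share / 4 / 1e9:.2f}B per quarter")         # ~$3.09B at the midpoint
```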

AI and outlook

Micron is using AI internally and says it’s “seen strong adoption and as much as a 30-40 percent productivity uplift in select GenAI use cases, such as code generation.” It’s “driven a 5x increase in wafer images analyzed in the past year and doubled the amount of useful data and telemetry collected and analyzed from our fab tools, all of which improve our yield performance.”

AI PCs and AI-capable smartphones will both need more DRAM. Mehrotra said: “Overall, AI trends are strong, and this is across datacenter, across AI-enabled smartphones and AI-enabled PCs. This is what leads to strong demand in 2026, across 2026.” And supply will be tight, leading to higher prices.

Mehrotra told earnings call attendees: “We have strong momentum entering fiscal 2026 with a robust fiscal Q1 demand outlook led by datacenter and the most competitive position in our history. Over the coming years, we expect trillions of dollars to be invested in AI, and a significant portion will be spent on memory. As the only US-based manufacturer of memory, Micron is uniquely positioned to benefit from the AI opportunity ahead.”

Next quarter guidance is for revenues of $12.5 billion ± $300 million, a rise of an impressive 43.5 percent year-over-year at the midpoint. Datacenter sales should increase, with Micron saying “we now expect calendar 2025 total server units to grow approximately ten percent, up from our prior expectations of mid-single digits percentage growth,” with growth in both AI and traditional servers ramping up DRAM demand.

Bootnote

Although Micron is doing well in HBM, it is still lagging behind market leader SK hynix, as a revenue comparison history chart illustrates:

MemVerge unveils open source AI memory layer for LLMs

MemVerge has launched an open source MemMachine software project to provide a cross-platform, long-context memory layer for large language models (LLMs) and agentic AI.

MemVerge provides Memory Machine software to virtualize DRAM, combining a server CPU’s memory with an external memory tier so that data can be loaded beyond a server’s own over-burdened local memory capacity. It claims the MemMachine software, with an associated enterprise offering, will deliver the world’s most accurate AI memory system, with fast recall and a foundation for human-like memory in machines.

Charles Fan

Charles Fan, MemVerge co-founder and CEO, stated: “AI without memory is incomplete. MemMachine delivers the memory layer that makes AI agents truly intelligent, personal, and enterprise-ready. This is the beginning of the next generation of agentic AI, and we are proud to deliver the world’s most powerful AI memory system.”

MemVerge waxes lyrical about its new software, saying: “The long-term vision for MemMachine is to parallel – and surpass – the richness of human memory. Like people, it will enable agents to retain episodic experiences, semantic knowledge, and procedural skills. Unlike people, it will deliver limitless recall, instant context retrieval, perfect fidelity, and secure sharing across agents, applications, and enterprises. The result: AI that acts as a true collaborator, remembering everything important, forgetting nothing critical, and scaling far beyond biological limits.”

MemMachine is not another KV cache offload engine; MemVerge claims it is “providing a persistent, intelligent memory layer that retains episodic, personal, and procedural knowledge across sessions, models, agents, and environments. The result: assistants that evolve from disposable chatbots into trusted, context-aware collaborators.”

A chart illustrates this additional memory layer concept:

The model weights are intrinsic to an LLM agent while the KV cache is a run-time memory. Context is the MemMachine area, with Fan writing in a blog (with his italics and bold text): “When I say memory, I don’t mean a long prompt or a vector store bolted onto a chatbot. I mean a cross-model, multi-agent, policy-aware, low-latency memory layer that captures, organizes, and retrieves knowledge with intent. Concretely, that layer should support four complementary modes:

  • Episodic memory – “What happened?” Persistent records of past interactions and outcomes, time-stamped and traceable.
  • Semantic memory – “What does it mean?” Concepts, entities, and relationships distilled from raw data.
  • Procedural memory – “How do we do it?” Steps, playbooks, and skills that agents can reuse and adapt.
  • Profile memory – “Who am I working with?” Durable knowledge of user identities, roles, preferences, and constraints, enabling personalization and continuity.

“Enterprises need all four. Episodic for continuity, semantic for understanding, procedural for action, and profile for personalization. Together they transform assistants from single-turn tools into reliable, context-aware collaborators.”
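As an illustration of how those four modes differ in shape, a minimal data model might look like the sketch below – our illustration of the concept, not MemMachine’s actual API or schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative data model for the four memory modes Fan describes.
# This is not MemMachine's actual API or schema.

@dataclass
class EpisodicRecord:            # "What happened?"
    timestamp: datetime
    actor: str
    event: str                   # time-stamped, traceable interaction record

@dataclass
class SemanticFact:              # "What does it mean?"
    subject: str
    relation: str
    obj: str                     # e.g. ("PO-1234", "belongs_to", "Acme Corp")

@dataclass
class Procedure:                 # "How do we do it?"
    name: str
    steps: list[str]             # reusable playbook an agent can adapt

@dataclass
class Profile:                   # "Who am I working with?"
    user_id: str
    role: str
    preferences: dict[str, str] = field(default_factory=dict)

@dataclass
class AgentMemory:
    episodic: list[EpisodicRecord] = field(default_factory=list)
    semantic: list[SemanticFact] = field(default_factory=list)
    procedural: list[Procedure] = field(default_factory=list)
    profiles: dict[str, Profile] = field(default_factory=dict)

    def recall(self, keyword: str) -> list[str]:
        """Naive keyword recall across stores; a real system would use
        embeddings, indexes, and access policies instead."""
        hits = [e.event for e in self.episodic if keyword in e.event]
        hits += [f"{f.subject} {f.relation} {f.obj}"
                 for f in self.semantic if keyword in (f.subject, f.obj)]
        return hits
```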

He adds: “For [AI] memory to be truly enterprise-grade, it must perform with the same rigor as other core infrastructure. It must deliver retrieval speeds fast enough to keep GPUs fully utilized, but just as importantly, it must be accurate and relevant – returning the right piece of context at the right time. It should be secure by design, with encryption, fine-grained access, and full auditability. It should work seamlessly across different clouds and models, avoiding vendor lock-in, and it must come with the observability, quotas, and service-level guarantees that enterprises expect from production systems. Memory is not a toy or a demo feature. It is infrastructure, and it must behave accordingly.”

MemVerge has benchmarked MemMachine against ChatGPT, Zep, LangMem, and other AI memory systems using the LoCoMo test of long-context memory systems. LoCoMo consists of long conversation data and a collection of 500 question-answer pairs, and measures the percentage of correct answers. MemMachine led the pack with its 85 percent score:

The MemMachine software features:

  • Open Source software with Apache 2.0 license
  • Inclusion of Episodic Memory and Profile Memory for users, AI assistants and AI agents
  • Support for all foundational LLMs including OpenAI, Claude, Gemini, Grok, and open source models
  • Ability to deploy in any environment, cloud or on-prem

MemVerge identifies several agent examples in its ambitious scheme: benchmark agent, personalized context agent, SlackCRM agent, coding assistant, exec assistant, creative assistant, financial advisor agent and customer support agent. It provided a slide showing interactions with a financial agent:

This certainly looks highly professional, but we think potential investors would want this to be rigorously tested before chancing their dollars, pounds, euros or any other currency.

MemVerge says every enterprise deploying AI agents – in customer service, healthcare, finance, and beyond – will require secure memory infrastructure to enable productivity, personalization, compliance, and trust. Its MemMachine software “enables agents and applications to access, store, and retrieve context with real-time personalization, faster task execution, and fluid orchestration of complex workflows. MemMachine works seamlessly across major LLMs – OpenAI, Claude, Gemini, Grok, Llama, DeepSeek, Qwen and other models, and can be deployed on any cloud or on-prem.”

Fan writes: “Our goal is to make AI memory as fundamental as databases or storage systems – a dependable layer enterprises can standardize on for the decades ahead, and to deliver MemMachine as the most powerful AI memory that is the easiest for the developer to use… AI models will continue to improve, but the durable advantage will come from what your AI knows about your business, how reliably it can recall it, and how safely it can share that knowledge across people, agents, and applications. That is the next frontier – and it’s where we intend to lead.”

The MemVerge commercial MemMachine offerings deliver, it says, enterprise-class scalability, security, and support, with capabilities for compliance, orchestration, observability, and enterprise integration. The open source MemMachine project is available now at www.memmachine.ai, as are MemVerge’s parallel commercial offerings.

Bootnote

A Fortune report says MIT’s NANDA initiative published “The GenAI Divide: State of AI in Business 2025” study, which shows that “about 5 percent of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.” Lead author Aditya Challapally told Fortune that the MIT research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows. This is the issue that MemVerge wants to solve with its long-context AI Memory MemMachine software.

Komprise launches AI-focused ingest tool to clean up unstructured data

Data management business Komprise has launched a generally available Intelligent AI Ingest product as part of its Smart Data Workflows ingestion engine.

Komprise Intelligent Data Management delivers a single platform to easily analyze, migrate, transparently tier, and manage the lifecycle of petabytes of file and object data across hybrid environments. It uses file and object metadata to manage unstructured data estates and provide policy-driven workflows to manage placement and accessibility. Komprise says it automatically builds metadata and delivers a single view of all file data within the enterprise at scale and customers “can find precisely the right data for your AI use case with simple queries.” A recent Komprise AI Data and Enterprise Risk survey found that IT leaders cited getting the right unstructured data into AI systems and ensuring proper AI data governance as two major challenges. 

Kumar Goswami

CEO Kumar Goswami stated: “Our mission is to help organizations untangle the mess of unstructured data to gain the greatest competitive advantage with AI. Komprise Intelligent AI Ingest is the latest advancement in Smart Data Workflows to solve a critical customer pain point of efficiently finding and moving the right data to AI.”

The company says unstructured data is unorganized, containing large quantities of irrelevant, outdated, and duplicate files. This reduces precision, clutters context windows, and adds latency in AI pipelines. Studies show a 10 percent efficiency drop per 10,000 additional unstructured documents in typical retrieval-augmented generation (RAG), leading to reduced accuracy and poor outcomes. Irrelevant unstructured data wastes expensive AI processing resources, drives up costs, reduces accuracy, and ultimately erodes return on investment.

There is a risk of sensitive data leakage. Ingesting data in bulk can lead to inadvertent sensitive data exposure in AI tools, violating privacy, security, and compliance policies. Intelligent AI Ingest uses filters to eliminate low-quality and sensitive data flowing from data sources via connectors during ingest. Komprise claims it doubles ingest performance compared to the AWS DataSync data transfer tool in benchmark tests because it has a massively parallel architecture and minimizes file overhead. 
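Conceptually, ingest-time filtering is a chain of cheap metadata predicates applied before any bytes move. A hypothetical sketch of the idea – the field names and thresholds are invented for illustration, and this is not Komprise’s implementation:

```python
from datetime import datetime, timedelta

# Hypothetical ingest filter: drop stale, duplicate, and PII-flagged files
# before they are copied into an AI pipeline. Not Komprise's implementation.
STALE_AFTER = timedelta(days=3 * 365)

def should_ingest(meta: dict, seen_hashes: set[str]) -> bool:
    if meta["pii_flag"]:                                   # sensitive data stays put
        return False
    if datetime.now() - meta["last_modified"] > STALE_AFTER:
        return False                                       # outdated content
    if meta["content_hash"] in seen_hashes:
        return False                                       # duplicate
    seen_hashes.add(meta["content_hash"])
    return True

seen: set[str] = set()
files = [
    {"path": "/share/a.docx", "pii_flag": False, "content_hash": "h1",
     "last_modified": datetime.now() - timedelta(days=10)},
    {"path": "/share/b.docx", "pii_flag": True, "content_hash": "h2",
     "last_modified": datetime.now() - timedelta(days=10)},
    {"path": "/share/c.docx", "pii_flag": False, "content_hash": "h1",
     "last_modified": datetime.now() - timedelta(days=10)},
]
print([f["path"] for f in files if should_ingest(f, seen)])  # only /share/a.docx
```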

Intelligent AI Ingest has a sensitive data classification feature with built-in PII (Personally Identifiable Information) and sensitive data handling. It automatically maintains an audit trail of each ingestion workflow for data governance and auditing, documenting the who, what, and when, plus data lineage for compliance reporting. 

Komprise told us it can ingest the right data for AI model training or inferencing to Nvidia GPUDirect and NeMo DataStores and move this data out when the compute-intensive processing is complete. Essentially, Komprise provides a way to ingest in and lifecycle out data to AI-ready storage. Read a blog to find out more.

VDURA names RAID pioneer Garth Gibson as CTO

VDURA has appointed RAID and parallel file system developer Garth Gibson as its first chief technology and AI officer (CTAIO) to reinvent the storage stack for AI.

Garth Gibson

Gibson co-invented RAID (redundant array of independent disks) to increase disk data storage reliability, pioneered parallel file system development at Carnegie Mellon University’s Parallel Data Lab, and co-founded Panasas, the high-performance computing storage company that became VDURA. He was president and CEO of the Vector Institute, focusing on generative AI, and advised on AI infrastructure from research clusters to billion-dollar AI datacenters.
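The reliability idea Gibson and his co-authors formalized is easy to demonstrate: store an XOR parity block alongside the data blocks, and any one lost block can be rebuilt from the survivors. A minimal single-parity (RAID 4/5-style) sketch:

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together; used both to compute parity
    and to rebuild a missing block from the survivors."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"disk0block", b"disk1block", b"disk2block"]   # three data drives
parity = xor_parity(data)                              # parity drive

# Lose disk 1: XOR the surviving data blocks with parity to rebuild it.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
print(rebuilt)   # b'disk1block'
```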

He stated: “For four decades I’ve advanced high-performance data storage, co-inventing RAID, pioneering the parallel file system, and driving innovation in AI applications and infrastructure. Today at VDURA I see a once-in-a-generation opportunity: to reinvent the storage stack for AI. We’re solving challenges others haven’t cracked, building a data platform that powers the full AI lifecycle, and that sets a new standard for performance, scale, and reliability.”

Gibson was appointed to VDURA’s board a year ago along with former Cray president and CEO Peter Ungaro. 

VDURA CEO Ken Claffey said: “Garth is a legend in storage and AI infrastructure, and his return to VDURA is a pivotal moment. Garth brings unmatched vision for how storage must evolve for the AI era, and with him leading our technology, we’re building a platform that makes AI faster, more efficient, and massively scalable, setting the course for AI’s future.”

It is a truism that AI workload numbers and sizes are surging, with AI training needing high-speed data delivery to GPUs and fast checkpointing, and AI inferencing demanding interim token storage offload for KV cache.

VDURA architecture

VDURA has added GPUDirect Storage (GDS), RDMA, and RoCE (v2) to help with AI training and inferencing, and also provided an AI infrastructure blueprint with AMD. We can imagine that Gibson will help VDURA combat the KV cache memory wall problem by developing offload technology so that fast SSDs can act as a KV cache backing store. He could also bring technologies examined by Vector Institute researchers to VDURA. GenAI data privacy is another area he might investigate.

VAST Data gets public cloud burstability with Red Stapler

When VAST Data bought Red Stapler earlier this month, it acqui-hired a company staffed entirely by NetApp leavers – just four months after they quit NetApp’s Iceland operation – and gained a cross-public cloud control plane architecture to help provide its AI OS as a public cloud service.

Had the six people simply left NetApp and joined VAST, it would have appeared to be VAST poaching NetApp staff. Iceland does support employment non-compete contracts, called restrictive covenants, which can restrict employees from joining competitors after leaving employment. By leaving NetApp for Red Stapler, and then joining VAST Data, the six have effectively bypassed any such restriction.

We have asked NetApp whether the six ex-NetApp Red Stapler staff had non-compete clauses in their employment contracts, and whether those clauses would have prevented them from leaving NetApp and joining VAST Data directly.

The six Red Staplers all previously worked at an Icelandic startup called Greenqloud, which NetApp acquired in August 2017 for $51 million in cash. Greenqloud had developed Qstack software for orchestrating and managing cloud services in hybrid cloud environments, and for improving cost-efficiency, scalability, and sustainability. Its Service Delivery Engine (SDE), built on top of Kubernetes, fitted in with NetApp’s hybrid Data Fabric ideas and worked across public clouds.

NetApp had hired Anthony Lye to set up a Cloud Data Services Business Unit, and between 2017 and 2022 he acquired 10 companies that we know about to get the software technology needed, with Greenqloud the first.

The total publicly known cost was half a billion dollars. NetApp sold off five of the businesses this year for $100 million – one of them originally bought for $450 million – as it exited the Cloud FinOps area.

Here are the six Red Stapler people:

  • Jonsi Stefansson – CEO, ex-Greenqloud CEO, now GM of Cloud at VAST
  • Eirikur Hrafnsson – Co-founder and Chief Product Officer, ex-Greenqloud, now VP Cloud Engineering at VAST
  • Tryggvi Larusson – CTO and co-founder, ex-Greenqloud, now Cloud Architect at VAST
  • Pall Helgason – Distinguished Engineer, ex-Greenqloud, now at VAST
  • Þórhallur Helgason – CSS Wizard, ex-Greenqloud, now Principal SW Engineer at VAST
  • Grímur Jónsson – Senior Developer, ex-Greenqloud, now Principal SW Engineer at VAST

Red Stapler was operating in stealth, without a website, and had no publicly-revealed external funding. Dun & Bradstreet shows it having $180,000 in revenue in its short life. It is surprising that it was acquired just four months after being founded. The development software must have been highly impressive.

Eirikur Hrafnsson said in a LinkedIn post: “We started Red Stapler to create the ultimate cloud native SaaS engine to help companies transform their products into fully-managed, multi-tenant, scalable cloud solutions. And now we will focus our efforts bringing that to VAST AI OS.”

Thus VAST intends to make its AI OS software a “fully-managed, multi-tenant, scalable cloud” offering. It says it has a “commitment to deepening alignments with the world’s leading hyperscalers.”

The VAST AI OS is already available on AWS, Azure, and GCP. What Red Stapler’s software will bring is further API integration to provide billing, monitoring, observability, and scalability for each host cloud. This will provide consistency across the clouds and enable VAST’s customers to temporarily move workloads – burst – to these hyperscale public clouds. They should be able to “burst into any hyperscaler with business-critical workloads while maintaining data integrity, observability, and cost-efficiency.”

B&F diagram.

Our understanding is that VAST would like the public cloud giants to partner with it in this regard and provide “a unified platform that consolidates data services, database capabilities, and agentic execution” to their customers.

VAST CEO and founder Renen Hallak stated: “With Jonsi and team joining VAST, we gain proven expertise in working hand-in-hand with hyperscalers to deliver services that meet the demands of AI at global scale, and accelerate the path to a truly unified, multi-cloud, data foundation.” 

The description of the Red Stapler software is similar, at a broad outline level, to the Greenqloud Qstack software.