Quobyte's parallel file system software is being used by robotaxi developer Zoox to store the vehicle sensor and simulation data used to train the AI software controlling its vehicles.
Amazon-owned Zoox competes with Google's Waymo and Tesla's robotaxi efforts with its purpose-designed urban vehicles, which have onboard GPUs and four Intel Xeon CPUs, unlike Waymo's retrofitted SUVs. Zoox thinks it has an edge because its vehicles, which lack a steering wheel, "look more like carriages than cars, with seating for up to four passengers." There is no front or back as the cars are bi-directional, and the two pairs of passengers face each other.
Zoox robotaxi
Its fleet of testing vehicles in the targeted urban environments is made up of retrofitted Toyota Highlanders equipped with the same sensor and compute packages as its robotaxis, plus a human operator. These vehicles generate driving data and validate its autonomous technology. The SUVs, with human safety operators, run in the San Francisco Bay Area, Las Vegas, Seattle, Austin, Miami, and Los Angeles. Zoox plans to welcome public riders in Las Vegas and San Francisco later this year.
Zoox timeline
2014 – Zoox founded by CEO Tim Kentley-Klay and CTO Jesse Levinson, who had been developing self-driving tech at Stanford University. He is the son of Apple chairman Arthur Levinson
March 2018 – Raised $500 million in funding, taking the total to $800 million
Aug 2018 – Zoox fired CEO Kentley-Klay and Levinson became president
Dec 2018 – Gained approval to provide self-driving transport in California
Jan 2019 – Ex-Intel chief strategy officer Aicha Evans hired as CEO
April 2020 – Settled with Tesla over alleged IP exposure following the hiring of former Tesla employees
June 2020 – Amazon acquired Zoox for >$1.2 billion and put it inside its Amazon Devices & Services organization
July 2022 – Zoox self-certified its passenger vehicles met Federal Motor Vehicle Safety Standards (FMVSS) without the need for regulatory changes or exemption requests
2023 – Approved by California Department of Motor Vehicles to begin testing self-driving robotaxis on open public roads with passengers on board. Also authorized by the Nevada Department of Motor Vehicles to operate its autonomous robotaxis on public roads
May 2024 – National Highway Traffic Safety Administration (NHTSA) opened an investigation into potential flaws in Zoox vehicles after two collisions in which motorcyclists rear-ended Zoox vehicles that had braked suddenly
2025 – Zoox has robotaxis operating in Las Vegas and Los Angeles, with Atlanta, Austin, Miami, and San Francisco coming soon. It opened its first full-scale production facility for robotaxis in Hayward, CA, near Silicon Valley
Intel and Zoox video. Aicha Evans is in the top right frame
Zoox is headquartered in Foster City, CA, and its developing fleet of robotaxis will be controlled by AI software that responds to real-time radar, lidar, camera, long-wave infrared, and microphone data with the AI models trained on datasets comprising this sensor and simulation data. The AI models have a perception engine and a prediction module with planning and control systems. Zoox tests its autonomous vehicle systems in virtual environments before real-world deployment.
Although owned by Amazon, which has its AWS compute and storage cloud, Zoox has its own datacenter with compute clusters formed from thousands of Nvidia GPUs – but it does use the cloud for cold data storage and client access. The on-prem storage is needed to avoid the latency involved in cloud data transfer.
Zoox started out using Ceph to store its data and found it problematic: the OS it used was old, and Ceph kernel modules had to be upgraded for performance. It constantly exceeded its capacity, performance was slow, and outages didn't help. So Zoox thought again, and the data is now stored in a Quobyte scale-out parallel file system, deployed in 2020 after a proof-of-concept period in 2019, which has scaled to 30 petabytes as the datasets have grown.
The datasets include high-precision 3D renderings of geographic map data, which are used for training and to keep the vehicles inside geo-fenced urban areas. Training runs occur every two weeks or so, and data is tiered across SSDs, disk drives, and the public cloud to optimize for cost and performance.
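Tiering decisions of this sort are typically driven by access recency. As a rough sketch of the pattern – not Quobyte's actual policy engine, and with thresholds invented for illustration – an age-based placement rule might look like this:

```python
from datetime import datetime, timedelta

# Hypothetical tier thresholds; real policies are configured per volume/workload.
HOT_WINDOW = timedelta(days=14)    # active training data stays on SSD
WARM_WINDOW = timedelta(days=90)   # recently used data sits on disk

def choose_tier(last_access: datetime, now: datetime | None = None) -> str:
    """Pick a storage tier from the time since last access."""
    now = now or datetime.utcnow()
    age = now - last_access
    if age <= HOT_WINDOW:
        return "ssd"      # low-latency tier for current training runs
    if age <= WARM_WINDOW:
        return "hdd"      # bulk capacity tier
    return "cloud"        # cold object storage, e.g. S3
```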
There are three Quobyte clusters with tens of thousands of clients accessing the data through AWS. Almost an exabyte of cold data is stored in AWS and Zoox is exploring tighter integration between the Quobyte system and S3 vaults to keep costs in check.
Abstract Security announced LakeVilla, a new product offering customers cloud-native cold storage built for long-term security telemetry retention, delivering compliance-ready, readily accessible storage at a fraction of SIEM costs without a performance compromise. LakeVilla enables organizations to retain and replay years of security data – instantly searchable and seamlessly usable across detection, investigation, and compliance workflows. Read more in a blog.
…
Data security, governance and resilience supplier AvePoint announced significant updates to the AvePoint Confidence Platform, including the launch of two new Command Centers – the Optimization and ROI Command Center and the Resilience Command Center – along with expanded agentic AI governance capabilities for Microsoft Copilot agents. The Optimization and ROI Command Center provides organizations with a comprehensive view of hard-to-find cost-saving opportunities across their data estate in a single pane of glass. The Resilience Command Center addresses the critical challenge of tracking and managing data resilience across complex environments.
…
Connector company CData has a deal with SAP. CData’s embeddable connectors provide direct, enterprise-grade integration to third-party platforms. Through the partnership, customers can access data, whether stored in cloud services, on-prem databases, or productivity tools, through a consistent, SQL-based interface. By embedding CData connectors into SAP Business Data Cloud, SAP can provide its customers with access to non-SAP data and improve their time-to-insight across AI, analytics, and operational workloads.
…
Resilience supplier CrashPlan announced its integration with Microsoft 365 Backup Storage, enabling enterprises to rapidly restore large amounts of data with 10-minute recovery points and speeds of up to two terabytes (TB) per hour at scale while maintaining control over their data through a single, unified platform.
…
DDN responded to its competitors’ (Hammerspace, WEKA and VAST Data) views on its 10-node IO500 performance by saying: “DDN dominance on the 10-node benchmark is not a coincidence, nor is it cherry picking. The 10-node IO500 benchmark is great for comparing systems since the client count limit creates a more even playing field than other lists and stresses the importance of getting data into limited client counts.
“The IO500 was designed to cover the full range of IO patterns, and encompasses reads, writes, metadata, small and large IO, and much in between. In the end-to-end AI workflow – across inference workloads, model loads, checkpoints, data labeling, data sorting, and preparation – we see these same challenging IO patterns. DDN has optimized exhaustively to accelerate every one of these AI workloads and so leads the pack by a long mile.
“This leadership reflects DDN’s relentless engineering focus on delivering the most effective AI data intelligence platform in the industry. Our technology outperforms competitors like VAST Data and Weka, who continue to struggle with architectural limitations that prevent them from fully addressing these complex and diverse IO challenges at scale. DDN’s platform is proven across on-premises, cloud, and hybrid environments, driving significant business value for the world’s most demanding AI applications and industries. Our market-leading position is validated by our substantial revenue lead and unmatched customer trust.
“We thank our customers and partners worldwide for their continued support and collaboration as we accelerate AI execution and deliver breakthrough performance that powers the future of intelligent business.”
…
An E2 SSD form factor is being developed for high-capacity "warm" datacenter storage with up to 1 PB capacity per drive, using QLC NAND with SLC landing zones. The existing EDSFF form factors are E1.S and E1.L (Short and Long), which replace M.2, and E3.S and E3.L, which replace U.2 (2.5-inch) – so there is an obvious numeric gap. The Storage Networking Industry Association (SNIA) and the Open Compute Project (OCP) are working on the E2 standard, with contributions from SSD suppliers including Micron and Pure Storage. E2 should use the PCIe Gen 6 interconnect standard. Read more in an Embedded Computing article here.
…
Cloud file services supplier Egnyte has a new AI-powered Project Hub, specifically designed for the AEC industry to give users visibility and control over data throughout the entire project lifecycle, from building the bid to closeout. It includes a customizable Project Setup Wizard that enables firms to set up a standardized folder structure from the start. Beta customers report that the new tool saves them significant time and resources, freeing up their teams to focus on design and delivery instead of administrative tasks.
Prasad Gune, Chief Product Officer at Egnyte, said: “The Project Hub acts as a central repository for all project data, providing users with real-time, comprehensive insights into their projects, storing everything from design files to field data. From project kickoff to closeout, Project Hub’s streamlined workflows, including standardized project setup and integrations with essential platforms like Autodesk Construction Cloud and Procore, help eliminate versioning conflicts and duplicate work so our customers can focus on project delivery, not managing data.”
…
Deduplicating tiered backup target system supplier ExaGrid is planning to add support for Cohesity DataProtect as a source of backup data alongside its existing (Veritas – now Cohesity) NetBackup support. DataProtect support should be GA in the first half of 2026; it won't be a simple software addition.
…
Gartner has a Backup and Data Protection Platform Magic Quadrant coming out today. Unusually, the analyst firm has allowed suppliers to pre-announce their positions:
Commvault has been positioned as a Leader for the 14th consecutive year.
Druva has been named a Leader.
Veeam has been positioned in the Leaders Quadrant for the ninth consecutive time, and the sixth consecutive year it’s positioned highest for Ability to Execute.
Druva will ship you a complimentary copy of the report today, June 30. Apply here.
For comparison, here is last year's version:
Druva was promoted from Visionary to Leader, Huawei entered as a Challenger, the separate Cohesity and Veritas entries have been combined, and Microsoft, a Niche Player last year, has exited.
…
JuiceFS is a cloud-native distributed POSIX file system that was open sourced (Apache License 2.0) in 2021 and currently has over 11,700 stars on GitHub. Originally designed for big data cloud migration, it has since seen adoption in AI applications, including LLMs and autonomous driving. Data stored via JuiceFS is persisted in object storage (e.g. Amazon S3), and the corresponding metadata can be persisted in various compatible database engines such as Redis, MySQL, and TiKV, depending on the scenario and requirements. With JuiceFS, massive cloud storage can be directly connected to big data, machine learning, artificial intelligence, and various application platforms in production environments, and used as efficiently as local storage without code changes. More info here.
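The division of labor JuiceFS describes – file contents chunked into an object store, metadata in a fast database – can be illustrated with a deliberately simplified toy model. This is not the JuiceFS API; it just shows why metadata operations never need to touch the object store:

```python
import uuid
import boto3   # S3 client for the data plane
import redis   # key-value database for the metadata plane

s3 = boto3.client("s3")
meta = redis.Redis(host="localhost", port=6379)
BUCKET = "my-juicefs-style-bucket"  # hypothetical bucket name

def write_file(path: str, data: bytes, chunk_size: int = 4 << 20) -> None:
    """Store contents as object chunks; record the chunk list as metadata."""
    chunk_keys = []
    for i in range(0, len(data), chunk_size):
        key = f"chunks/{uuid.uuid4().hex}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=data[i:i + chunk_size])
        chunk_keys.append(key)
    # Path -> ordered chunk list and size live in the database, so stat
    # calls and directory listings never hit S3.
    meta.hset(f"inode:{path}", mapping={
        "size": len(data),
        "chunks": ",".join(chunk_keys),
    })

def read_file(path: str) -> bytes:
    info = meta.hgetall(f"inode:{path}")
    keys = info[b"chunks"].decode().split(",")
    return b"".join(
        s3.get_object(Bucket=BUCKET, Key=k)["Body"].read() for k in keys
    )
```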
…
Kioxia's SSD roadmap includes a new SSD that pairs its XL-FLASH single-level cell memory with a redesigned controller. This is expected to deliver over 10 million IOPS, particularly for small data transactions, with sample units available by the second half of 2026. Kioxia is working closely with major GPU manufacturers to fine-tune these drives for AI and graphics-heavy apps. Kioxia has two main paths for future flash memory development: increasing capacity by stacking more memory layers with the upcoming 10th-generation BiCS FLASH (332 layers), and optimizing BiCS 9 (218 layers) by bonding the CMOS circuitry directly to the memory array.
…
Micron announced the shipment of HBM4 36 GB 12-high samples to multiple key customers. It features a 2,048-bit interface, achieving speeds greater than 2.0 TBps per memory stack and more than 60 percent better performance and over 20 percent better power efficiency than the previous HBM3E generation. Micron plans to ramp HBM4 in calendar year 2026, aligned to the ramp of customers’ next-generation AI platforms.
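As a back-of-the-envelope check on those figures: a 2,048-bit interface moves 256 bytes per transfer, so 2.0 TBps per stack implies a per-pin data rate of roughly 7.8 Gbps:

```python
bus_width_bits = 2048
bytes_per_transfer = bus_width_bits // 8      # 256 B moved per transfer
stack_bandwidth = 2.0e12                      # 2.0 TBps target, in bytes/s

transfers_per_sec = stack_bandwidth / bytes_per_transfer
pin_rate_gbps = transfers_per_sec / 1e9       # each pin toggles once per transfer
print(f"{pin_rate_gbps:.1f} Gbps per pin")    # ~7.8 Gbps
```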
…
Object storage software supplier MinIO has launched a US government business and has hired Cameron Chehreh as president and general manager of MinIO Government and Deep Grewal as Vice President of Federal. The two formerly led government efforts at Intel and AMD respectively. With the launch of MinIO AIStor, the company’s commercial object storage platform, MinIO has created a new data management layer to future-proof and prepare government agencies for sovereign AI.
…
Cloud file services supplier Nasuni has been awarded AWS Energy and Utilities Competency status. Michael Sotnick, SVP of Business & Corporate Development at Nasuni, said: “Our strategic collaboration with AWS is redefining how energy companies harness seismic data. Together, we’re removing traditional infrastructure barriers and unlocking faster, smarter subsurface decisions. By integrating Nasuni’s global unified file data platform with the power of AWS solutions including Amazon Simple Storage Service (S3), Amazon Bedrock, and Amazon Q, we’re helping upstream operators accelerate time to first oil, boost capital efficiency, and prepare for the next era of data-driven exploration.”
…
OWC Express 4M2
OWC announced the OWC Express 4M2, a four-slot NVMe M.2 SSD USB4, Thunderbolt-compatible enclosure offering improved performance, thermal protection for today's high-performance drives, and expanded compatibility with near-silent operation. It supports up to four NVMe M.2 SSDs (2230, 2242, or 2280) and achieves real-world speeds up to 3,200 MBps, with flexible RAID options (0, 1, 4, 5, and 1+0). The OWC Express 4M2 is available now for pre-order and will be shipping next week, priced at $239.99 (0 GB) or $379.99 (0 GB with a three-year SoftRAID license included).
…
OWC announced its 4 TB Aura Pro X2 SSD built for Macs running macOS High Sierra (10.13) or later. It’s designed for late 2013-Mid 2015 MacBook Pro, 2013-2017 MacBook Air, late 2013-2019 iMac 27” and 21.5” models, late 2014 Mac mini, and late 2013 Mac Pro models (released from 2013 to 2019). Built for PCIe Gen 4 and NVMe access, it can deliver read and write speeds over 3,200 MBps (specific performance depends on Mac model).
Aura Pro X2 SSD
…
Tarek Robbiati
Tarek Robbiati has been appointed Pure Storage CFO, replacing the departing Kevan Krysler. He previously co-founded Bluestone Estate in January 2024, served as CEO at RingCentral until December 2023, and was CFO at HPE for five years. After Robbiati left RingCentral, founder Vlad Shmunis, who served as executive chair during Robbiati’s tenure, returned as CEO.
…
UK data intelligence company Sagacity is partnering with Databricks to make its fully permissioned UK consumer datasets available on the Databricks Marketplace. The available datasets include over 900 individual-level UK consumer attributes, helping organizations accelerate AI initiatives and personalized experiences, and make data-led decisions more effectively. Data products now available on the Databricks Marketplace include Enhance core, Enhance Property, Smart Link, and The Bereavement Register.
…
HA and DR company SIOS announced a strategic partnership with India’s FCS Infotech, a solutions and services company.
…
Startup Slide, specializing in Business Continuity and Disaster Recovery (BCDR) solutions for Managed Service Providers (MSPs), and which launched in the USA earlier this year, has raised $25 million in Series A funding. Slide has announced its entry into the Canadian market, launching a new datacenter to meet data residency requirements. It was founded by Austin McChord, former CEO of Datto, and Michael Fass, who was Datto’s General Counsel and Chief People Officer.
…
Data warehouser Teradata introduced its on-prem Teradata AI Factory, unifying components from Teradata and third parties like Nvidia into a single, scalable system configured for AI development and workflows – including support for on-prem vector stores and native retrieval-augmented generation (RAG) pipelines. Teradata AI Factory, built with the Nvidia Enterprise AI Factory validated design, is aimed at highly regulated industries like healthcare, finance, and government, as well as any enterprise seeking greater autonomy over its AI strategy.
Teradata AI Factory is designed to deliver the complete package for private, trusted AI at scale: the security and control of on-premises infrastructure, cost-effective analytic performance, and seamless integration of hardware and software, including Teradata’s Enterprise Vector Store and Teradata AI Microservices, which leverages Nvidia NeMo microservices for native RAG pipeline capabilities.
…
Data protector Veeam will provide image-based backup support for HPE Morpheus VM Essentials Software, including VM migration and data portability from traditional hypervisor environments. Fidelma Russo, EVP of hybrid cloud and CTO of HPE, said: “With our deep partnership and integration, HPE and Veeam are delivering unified virtualization and data protection that is future-ready, giving customers the resiliency and agility to evolve their hybrid IT strategy.” Veeam Kasten provides backup and recovery for containerized and cloud-native workloads. HPE and Veeam say the powerful combination of Veeam Data Platform, HPE Morpheus Software, and HPE Zerto Software – backed by increased joint go-to-market investment – enables customer data protection success.
Startup Databahn diagnoses security threats by using AI agents to trawl through masses of log telemetry data.
Dallas, Texas-based Databahn was founded in July 2023 by CEO Nanda Santhana and President Nithya Nareshkumar. Security specialist Santhana came out of the University of Southern California in 2005 with an MS in Engineering and Industrial Management, and joined security company Vaau as a founding member. It was bought by Sun in 2008 to bolster its identity management offerings and he became a regional manager, progressing to Tech Fellow when Oracle bought Sun in 2010. He became a founding member of Securonix, which developed cyber threat detection using machine learning and big data analytics, and stayed there almost 12 years until Databahn was born.
Finance-oriented Nithya Nareshkumar spent 13 years at JP Morgan, finishing as an executive director for wealth management, and then joined The Depository Trust & Clearing Corp (DTCC) as an executive and then managing director. She left to co-found Databahn with Santhana, while also investing in early-stage startups. The pair reckoned they could build a better way to collect and secure telemetry data by separating it from traditional SIEM (Security Information and Event Management) security platforms and security lakes – think Databricks and Snowflake – which can have high subscription and license fees. Databahn claims it can reduce security telemetry costs by half.
From left, Databahn founders CEO Nanda Santhana and President Nithya Nareshkumar
Using seed funding from GTM Capital, Databahn developed its DataBahn.ai offering, using AI and data orchestration to manage distributed security information better, and improve threat detection and analysis. It developed its Cruz AI agent, a self-described data-engineer-in-a-box, to automate processes like log discovery, data onboarding, normalization, transformation, optimization, and operational monitoring. Cruz autonomously keeps track of new event types, automatically addressing schema drifts and format changes, and transforming data into any data model such as CIM, ECS, or OCSF.
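Normalization of this kind maps vendor-specific log fields onto a common schema. A hedged sketch of the idea follows – the raw event and field mapping are invented, with target names loosely following Elastic Common Schema (ECS) conventions:

```python
# Map a hypothetical vendor firewall event onto ECS-style field names.
RAW_TO_ECS = {
    "src": "source.ip",
    "dst": "destination.ip",
    "act": "event.action",
    "ts": "@timestamp",
}

def normalize(raw_event: dict) -> dict:
    """Rename known fields to the target schema; keep the rest for audit."""
    ecs = {}
    for key, value in raw_event.items():
        ecs[RAW_TO_ECS.get(key, f"unmapped.{key}")] = value
    return ecs

print(normalize({"src": "10.0.0.5", "dst": "8.8.8.8",
                 "act": "deny", "ts": "2025-06-30T12:00:00Z"}))
```

A production system like Cruz must also detect when a source starts emitting new fields (schema drift) and update such mappings automatically, which is where the agentic element comes in.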
What Databahn had developed was a security log data pipeline with a data fabric concept. It saw that businesses were collecting petabytes of logs, alerts, and telemetry but they “typically analyze less than 5 percent of it.” That spelled out an AI large language model (LLM) or agent opportunity. Databahn launched its Reef product to ingest the petabytes of log data and filter, identify, contextualize, and prioritize the high-value data there, in real time, writing it directly to enterprise-owned data lake infrastructure.
It adopted the Model Context Protocol (MCP) to integrate Reef with Cruz AI and has now gained interest from Series A venture capitalists, raising $17 million from Forgepoint Capital, assisted by S3 Ventures and returning investor GTM Capital, taking total funding to $19 million. Databahn will use the cash to develop “autonomous agents that learn from enterprise data flows to automate data engineering tasks – and support global expansion as the company establishes itself as the trusted foundation for enterprises seeking clarity, control and composability in their data pipelines.”
It says it can manage and operationalize telemetry across security, observability, IoT/OT, and AI ecosystems. This will enable "organizations to seamlessly integrate, govern and optimize data pipelines from any source to any destination—with one-click simplicity and enterprise-grade control."
Its new "Phantom agents collect telemetry without deploying traditional agents, avoiding footprint bloat and preserving compute resources." Its software will parse, enrich, and suppress noise at scale, and provide federated search capabilities to deliver persona-based insights beyond plain SQL queries.
Santhana said: “We’re building the foundation for a new era of observability, one where data is not just moved, but understood, enriched and made AI-ready in real time.”
Comment
Analyzing large-scale SIEM telemetry data looks like a great match for AI agent capabilities with proprietary data, a well-defined workload space and ransomware/malware detection ranking high on every organization’s list of concerns. For data protection companies that have pivoted to becoming cyber-resilience suppliers, such as Cohesity, Commvault, Druva, Rubrik, and many others, a company like Databahn could represent a great tuck-in acquisition opportunity – as it could for established security vendors as well.
VDURA CEO Ken Claffey believes that the company should be classed alongside DDN, VAST Data, and WEKA as an extreme high-performing and reliable data store for modern AI and traditional HPC workloads.
Storage buyers need to re-evaluate VDURA, the company argues, as the PanFS software that was the basis of Panasas's HPC success has been completely overhauled since Claffey became CEO in September 2023. The company changed its name from Panasas to VDURA in May 2024 to reflect its transformation and focus on data velocity and durability.
Ken Claffey
Claffey says VDURA combines the stable and linear performance of a parallel file system with the resilience and cost-efficiency of object storage.
VDURA’s microservices-based VDP (VDURA Data Platform, the updated PanFS) has a base object store with data accessed by clients through a parallel file system layered on top of that. There is a unified global namespace, a single control plane, and a single data plane. The metadata management uses a VeLO (Velocity Layer Operations) distributed key-value store running on flash storage with the object storage default being HDD.
Virtualized Protected Object Device (VPOD) storage entities reside on the HDD layer. For data durability, erasure coding is provided within each VPOD and across a VDURA cluster. The VeLO software runs on scale-out 1U director nodes with VDURA’s own hardware using AMD EPYC 9005 CPUs, Nvidia ConnectX-7 network interface cards, Broadcom 200Gb Ethernet, and Phison PCIe NVMe SSDs – Pascari X200s.
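Erasure coding provides redundancy without full replication, so data survives the loss of drives or nodes. A toy single-parity example conveys the principle; VDURA's scheme, applied within and across VPODs, is considerably more sophisticated than this:

```python
def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute a parity block as the XOR of equal-length data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild one lost block from the survivors plus the parity block."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)
# Lose the middle block, then reconstruct it:
assert recover([data[0], data[2]], p) == data[1]
```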
VDP has a unified namespace where Director Nodes handle metadata and small files via VeLO and larger data through VPODs. The Director Nodes manage file-to-object mapping, allowing seamless integration between the parallel file system and object storage. They also support S3.
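That division of labor can be sketched roughly: small files and metadata stay in the flash-resident key-value layer, while large file bodies map onto objects in the HDD layer. In this minimal illustration the size threshold and class names are hypothetical, not VDURA's:

```python
SMALL_FILE_LIMIT = 64 * 1024   # hypothetical cutoff for "small" files

class DirectorNode:
    def __init__(self, kv_store: dict, object_store: dict):
        self.kv = kv_store            # stands in for the flash VeLO layer
        self.objects = object_store   # stands in for HDD-resident VPODs

    def write(self, path: str, data: bytes) -> None:
        if len(data) <= SMALL_FILE_LIMIT:
            # Small files live entirely in the key-value layer.
            self.kv[path] = data
        else:
            # Large files: metadata in KV, body mapped to an object.
            obj_key = f"obj/{abs(hash(path))}"
            self.objects[obj_key] = data
            self.kv[path] = {"object": obj_key, "size": len(data)}
```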
VPODs can run on hybrid flash-disk nodes and also on all-flash V5000 storage nodes, called F-Nodes. The Hybrid Storage Nodes pair the same 1RU server used for the Director Nodes with 4RU JBODs running VPODs, providing cost-effective bulk storage with high performance and reliability.
The F-Nodes have a 1RU server chassis containing up to 12 x 128 TB NVMe QLC SSDs providing 1.536 PB of raw capacity. An F-Node is powered by an AMD EPYC 9005 Series CPU with 384 GB of memory. There are Nvidia ConnectX-7 Ethernet SmartNICs for low latency data transfer, plus three PCIe and one OCP Gen 5 slots for high-speed front-end and back-end expansion connectivity.
Coming ScaleFlow software will allow “seamless data movement” across high-performance QLC flash and high-capacity disk.
VDP is a software-defined, on-premises offering, using off-the-shelf hardware, and is being ported to the main public clouds. It will also support GPUDirect Storage (GDS), as well as RDMA and RoCE (v2) this summer.
Claffey says that the idea from three to five years ago that QLC flash prices would drop down to HDD levels has not come true. He tells us: “Enterprise flash would go from 8x to 6x to 4x and then all geniuses were saying, oh, it’s going to go to 2x and then 1x. Remember those forecasts? And then the reality is, the opposite happened. There was no fundamental change in the cost of the drive … Now if you go look at it, go to Best Buy, go wherever you want to go, the gap between a terabyte HDD and a terabyte SSD is close to 8x.”
You need a tiered flash-disk architecture to provide flash speed and disk economics. VDURA wants to build the best, most efficient storage infrastructure for AI and HPC. It doesn’t intend to build databases; that’s too high up the AI stack from its storage infrastructure point of view. Instead it will make itself open and welcoming to all AI databases.
VDURA believes it will be the performance leader in this AI/HPC storage infrastructure space. Early access customers using its all-flash F-Nodes, which go GA in September, say it’s very competitive.
Claffey says VDURA wins bids against rivals, exemplified in a US federal bid involving a large system integrator. VDURA said the SI looked at several competing suppliers who proposed parallel access systems offering performance sufficient to feed large x86 and GPU compute clusters – one of the world's largest US defense clusters – with sub-millisecond latency. The bids were for a multi-year project with a 2025 phase 1 requiring 20 PB of total capacity and sustained >100 GBps performance. A phase 2 in 2026 will move up to around 200 PB of usable capacity and 2.5 TBps sustained performance.
VDURA bid a system with V5000 all-flash nodes for performance and HDD extensions for bulk capacity, and was selected by the SI because it matched the performance and capacity needs. It claimed it beat a rival on performance and TCO, adding that the VDURA system had a better TB-per-watt rating and a lower carbon footprint than its competitors.
The company reckons it matches DDN and IBM Storage Scale on performance, and claims its system is easy to use and manage, and reliable.
The new 2600 client QLC SSD from Micron dynamically optimizes its cache to get QLC flash writing like TLC.
NAND has four basic formats for the number of bits in a cell: SLC with one, MLC with two, TLC with three, and QLC with four. Higher cell bit counts reduce cost but degrade write performance and endurance. The 2600’s Adaptive Write Technology (AWT) provides TLC-class write performance by having a top-level SLC cache for fresh incoming writes and a second tier TLC cache for use when the SLC cache is full. When both SLC and TLC mode areas are full, AWT migrates data from those areas to QLC mode when the SSD is idle, even for a short amount of time. As this process continues, AWT continues to migrate data from the SLC and TLC mode caches, folding that data into QLC mode. AWT also resizes the SLC and TLC regions to ensure the advertised capacity is available. This, Micron claims, achieves up to 63 percent faster sequential write and 49 percent faster random write speeds than competing value QLC and TLC SSDs.
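The flow Micron describes – land writes in SLC, spill to TLC, fold both down to QLC during idle time – can be modeled as a simple state machine. The capacities and policy below are invented for illustration; the real firmware also resizes the SLC and TLC regions dynamically:

```python
class AdaptiveWriteCache:
    """Toy model of an SLC -> TLC -> QLC write path, loosely following
    Micron's AWT description; capacities here are arbitrary."""

    def __init__(self, slc_cap: int, tlc_cap: int):
        self.slc, self.tlc, self.qlc = [], [], []
        self.slc_cap, self.tlc_cap = slc_cap, tlc_cap

    def write(self, block: bytes) -> str:
        if len(self.slc) < self.slc_cap:
            self.slc.append(block)   # fastest path: SLC landing zone
            return "slc"
        if len(self.tlc) < self.tlc_cap:
            self.tlc.append(block)   # second tier when SLC is full
            return "tlc"
        self.qlc.append(block)       # both caches full: write lands in QLC
        return "qlc"

    def on_idle(self) -> None:
        """Fold cached data into QLC mode when the drive goes idle."""
        self.qlc.extend(self.slc + self.tlc)
        self.slc.clear()
        self.tlc.clear()
```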
Mark Montierth, CVP and GM of Micron’s Mobile and Client business unit, stated: “The Micron 2600 QLC SSD achieves superior performance compared to competitive value TLC drives … This Micron innovation milestone allows for broader commercial adoption of QLC NAND.”
The 2600 is a DRAM-less SSD with a Phison PS5029-E29T four-channel controller. It uses a 2 Tb die built from Micron’s 276-layer (G9) 3D NAND and has a six-plane architecture, with an NVMe PCIe Gen 4×4 interface. Micron says it offers “the fastest NAND I/O rate now shipping in a client SSD” and “up to four times faster sequential write speeds while continuously writing up to 800 GB of data to a 2 TB SSD.” Micron claims it offers better sequential and random performance compared to other DRAM-less TLC and QLC SSDs, and “easily surpasses competitor QLC and value TLC SSDs” in speed.
It has 512 GB, 1 TB, and 2 TB capacities, and its random and sequential performance and endurance numbers all increase with capacity.
Micron produced an earlier gumstick QLC drive, the 2500, in April 2024, with the same capacity points but built with earlier, 232-layer 3D NAND. At the 2 TB level, its random read and write IOPS were both 1 million, and its sequential bandwidth was 7.1 GBps read and 6 GBps write. The 2600's 1 million random read IOPS, 1.1 million random write IOPS, 7.2 GBps sequential read, and 6.5 GBps sequential write numbers are not that much better – the sequential write speed is only around 8 percent faster, for example. It would appear that AWT provides a marginal gain for the 2600 compared to the 2500. The size of the SLC caches in the 2500 and 2600 is unknown, and a smaller SLC cache could negatively affect write performance.
Endurance is improved rather more, though, with the 2500’s 2 TB capacity point endurance of 600 TB written eclipsed by the 2 TB 2600’s 700 TB written, a 17 percent increase.
A Micron AWT technical brief provides a comprehensive explanation of how AWT works and how the SLC, TLC, and QLC region boundaries are dynamically changed as the AWT processes take place.
The 2600 with AWT is in qualification with Micron’s OEM customers and not all resulting 2600-based SSDs will necessarily use AWT – Micron is providing 2600 versions without it.
Bootnote
SSD speed comparisons are based on currently in-production, commonly available 2 TB QLC and value TLC NAND client SSDs from the top five competitive suppliers of OEM SSDs by revenue (using 1 TB where the supplier does not offer 2 TB), excluding consoles and Apple products, as per Forward Insights analyst report, “SSD Supplier Status Q1/25.” Performance comparisons are based on publicly available data sheet information.
Cyber-resilience supplier Rubrik is buying AI agent development startup Predibase to accelerate agentic AI adoption.
Predibase is a 2021 startup founded by Google and Uber alumni, originally focused on providing machine learning (ML) tools to customers. It pivoted to GenAI LLM-based agent tools when ChatGPT arrived and its focus now is building tools to help customers move agents from pilot projects to at-scale production use. Predibase believes the future of generative AI is smaller, faster, and cheaper open source LLMs fine-tuned for specific tasks. The thinking is that competitive value in AI wouldn’t accrue to general-purpose APIs, but to how models are customized with proprietary data.
Rubrik co-founder, chairman, and CEO Bipul Sinha stated: “What the Predibase team has achieved with model training and serving infrastructure in the last few years is nothing short of remarkable. AI engineers and developers across the industry trust their expertise. Together, Rubrik and Predibase will drive agentic AI adoption around the world and unlock immediate value for our customers.”
Instagram image. From left, Piero Molino, Bipul Sinha, Devvret Rishi, and Rubrik CTO Arvind Nithrakashyap
The acquisition cost has not been revealed. A CNBC report suggests Rubrik paid between $100 million and $500 million for Predibase. In its Q3 2025 results, Rubrik reported cash, cash equivalents, and short-term investments of $632 million. It may be buying Predibase for cash – which it could afford – or with a mix of cash and shares, or just shares.
Rubrik has more than 6,000 enterprise customers. The great thing for backup suppliers providing proprietary data to AI models and agents is that they represent a single source of data, backed up as it is from multiple sources within a customer’s data and application estate, both on-prem and in the public cloud. If a customer wanted to provide this data to AI models and agents from the original, real-time sources, it would have to build AI pipelines to collect, filter, select, and move this data to a central place. Of course, it may already be doing this with some of it by utilizing data warehouses, lakes, and lakehouses. The backup folks, like Commvault, Cohesity, Rubrik, and others, provide an existing AI model and agent data collection point, like a data warehouse/lake/lakehouse but without the in-built analytics.
Now Rubrik, calling its backed-up data a data lake, wants to provide an agentic AI production pipeline, using Predibase tooling to take pilot projects to widespread production use. That software enables AI agent developers to fine-tune and deploy AI agents at scale and post-train them in the inferencing period.
The Predibase software has a proprietary post-training stack for customizing models with a highly optimized inference engine. There is a turbo serving engine and LoRAX (LoRA eXchange), an open source system for deploying personalized models at scale. It “allows users to pack hundreds of fine-tuned models into a single GPU and thus dramatically reduce the cost of serving.”
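This sketch assumes LoRAX's text-generation-inference-style /generate endpoint with an adapter_id parameter; the server address and adapter name below are hypothetical. Each request names the fine-tuned adapter it wants, which is how many adapters can share one base model on a single GPU:

```python
import requests

# Query a locally running LoRAX server, routing this request through a
# specific fine-tuned LoRA adapter (name is hypothetical).
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Summarize this incident report: ...",
        "parameters": {
            "adapter_id": "acme/incident-summarizer-lora",  # hot-swapped per request
            "max_new_tokens": 128,
        },
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```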
Sinha blogs: “Rubrik’s data lake provides the necessary access controls, permissions, and policies to securely power AI applications. Predibase delivers operational value on top of improved, customized models, enabling enterprises to build, pilot, and deploy AI safely at scale. When teams are equipped with Predibase and our secure data lake, they are empowered to accelerate AI adoption.”
Predibase CEO Devvret Rishi said: “We created Predibase to lift the barriers between an idea and production-ready AI. Today, many organizations still face challenges moving beyond the proof-of-concept stage. Predibase removes the hardest part of that journey and accelerates production-ready AI by giving teams an easy-to-use platform to tune models to their own data and run on an optimized inference stack. This unlocks more accurate results and faster models, all at lower cost.”
Rubrik noted that integrating Predibase will expand the work to secure and deploy GenAI applications that Rubrik is doing today with Amazon Bedrock, Azure OpenAI, and Google Agentspace.
Read more in a Predibase blog, where Rishi says Rubrik and Predibase will be “building towards a shared vision at the intersection of AI, security, and data,” and a Rubrik blog.
Rubrik has now added a third string to its bow, first providing backup, then security, and now agentic AI development tooling.
Bootnote
San Francisco-based Predibase was founded by original CEO and now chief scientific officer Piero Molino, CTO Travis Addair, and original chief product officer and now CEO Devvret Rishi.
Molino spent three and a half years at Uber as a natural language processing scientist, and Addair worked at Uber for four years as a senior software engineer and tech lead manager. Both worked on machine learning projects there, such as Ludwig (an auto-ML platform) and Michelangelo (ML-as-a-Service). Rishi was a Google product manager for Cloud AI, working in machine learning.
Predibase raised an undisclosed seed round and then a $16.25 million A-round in May 2022, led by Greylock, with participation from Factory and angel investors, including Anthony Goldbloom. This was followed by an A expansion round in May 2023 for $12.2 million led by Felicis Ventures. Total known funding is $28.45 million and it helped take Predibase from beta product status to general availability. Customers include Checkr, Convirza, Marsh McLennan, Nubank, and many others.
We understand Predibase staff will be joining Rubrik.
Separately, startup Typedef is also developing software to take AI agent pilots to production deployment.
Micron reported record revenue in its third fiscal 2025 quarter as DRAM sales rocketed, with nearly 50 percent sequential growth in HBM revenue.
Revenues of $9.3 billion were up 37 percent year-over-year with GAAP profits of $1.9 billion, up 467.8 percent on last year. Datacenter revenue more than doubled year-over-year to a new high, and consumer-oriented markets had strong sequential growth. It is currently the only supplier producing LP DRAM at volume for the datacenter market. And for the first time ever, Micron has become the number two brand by share in datacenter SSDs, according to third-party data.
Sanjay Mehrotra
Micron chairman, president, and CEO Sanjay Mehrotra stated: “Micron delivered record revenue in fiscal Q3, driven by all-time-high DRAM revenue including nearly 50 percent sequential growth in HBM revenue … We are on track to deliver record revenue with solid profitability and free cash flow in fiscal 2025, while we make disciplined investments to build on our technology leadership and manufacturing excellence to satisfy growing AI-driven memory demand.”
DRAM revenues were $7.1 billion, up 51 percent year-over-year, while NAND revenues rose a modest 4 percent to $2.2 billion. AI training demands HBM now, while AI inferencing may, could, or should demand more SSD capacity over the next four to eight quarters.
Financial summary
Gross margin: 37.7 vs 26.9 percent a year ago
Operating cash flow: $4.61 billion vs $2.48 billion a year ago
Free cash flow: $1.95 billion vs $425 million last year
Cash, marketable investments, and restricted cash: $12.2 billion vs $9.2 billion last year
Diluted EPS: $1.68 vs $0.30 a year ago
Micron says its high-capacity DIMM and LP server products have generated multiple billions of dollars in revenue in fiscal 2025, fivefold growth compared to the same period last year.
It’s shipping HBM in high volume to four customers, spanning both GPU and ASIC platforms. Its yield and volume ramp on HBM3E 12-Hi is progressing well, and it sees shipment crossover from HBM3 in the current quarter. Micron expects its HBM market share to grow to the 22-23 percent level, matching its overall DRAM market share, behind Samsung (41.1 percent) and SK hynix (34.8 percent). It thinks it could achieve this in calendar 2025.
According to TrendForce, current HBM supplier market shares are SK hynix: 46-49 percent; Samsung: 42-45 percent; and Micron: 4-6 percent. Bloomberg Intelligence is guiding SK hynix at 40 percent, Samsung at 35 percent, and Micron at 23 percent by 2033.
Micron has delivered samples of HBM4 to multiple customers and expects to ramp volume production in calendar 2026. It is making progress developing its new and denser 1-gamma DRAM node with increasing yields on its current 1-beta node. The company has started qualifications for new high-performance SSD products based on its G9 (276-layer) 2 Tb QLC 3D-NAND die. And it is announcing a client SSD using the G9 QLC tech, delivering performance equivalent to TLC NAND for most consumer use cases.
Business unit performance
Compute and Networking: a record $5.1 billion, up 11 percent sequentially, 98 percent year-over-year.
Storage: $1.5 billion, up 4 percent sequentially, 11 percent year-over-year, primarily from consumers buying SSDs.
Mobile: $1.6 billion, up 45 percent sequentially, 0.8 percent year-over-year, on phone demand for DRAM.
Embedded: $1.2 billion, up 20 percent sequentially, down 8 percent year-over-year, with growth due to recent tight supply of DDR4 and resulting improved pricing.
Micron has completed an AI-focused strategic reorganization of its business units around key market segments, such as high-performance memory and storage, to capitalize on “the tremendous AI growth opportunity ahead.”
The company is going to fab more memory in the US with plans to invest approximately $200 billion, which includes $150 billion in manufacturing and $50 billion in R&D over the next 20-plus years. As part of the $200 billion, it plans to invest an additional $30 billion beyond previously announced plans, including the construction of a second memory fab in Boise, Idaho, expanding and modernizing its existing fab in Manassas, Virginia, and bringing advanced packaging capabilities to the US to support long-term HBM growth plans.
It expects to begin ground preparation of its New York fab later this year following the completion of state and federal environmental reviews.
Micron says its customers continue to signal a constructive demand environment for the remainder of calendar 2025.
The revenue outlook for the final FY 2025 quarter is $10.7 billion ± $300 million, a 38.1 percent rise at the midpoint, and $36.7 billion revenues for the full year, 46.2 percent up on FY 2024.
Four more high-bandwidth memory (HBM) generations have been outlined by KAIST (Korea Advanced Institute of Science & Technology) and its Terabyte Interconnection and Package Laboratory (Tera) research group, with up to 64 TBps bandwidth and 24-Hi stacks – 50 percent more layers per stack than HBM4's maximum.
The latest HBM generation is HBM4 with up to 2 TBps bandwidth (data rate) and a max of 16-Hi DRAM chip stacks and 64 GB capacity. HBM standards are published by JEDEC (Joint Electron Device Engineering Council) with the first HBM standard (JESD235) published in 2013, and updates like HBM2 (JESD235A), HBM2E, HBM3, and HBM3E improving bandwidth, capacity, and efficiency.
HBM3 and HBM4 use either forced-air (fin/fan) or forced-water (D2C – direct-to-chip) cooling. HBM5, in 2029, will use immersion cooling, with HBM7 and HBM8 going for embedded cooling, in which cooling mechanisms are integrated directly into or very close to the chip itself.
Microbump (MR-MUF) die stacking will be used with HBM4 and HBM5, with bumpless Cu-Cu (copper-to-copper) direct bonding featuring in HBM6 through HBM8 because of its higher density, performance, and signal integrity.
Nvidia's Feynman (F400) accelerator is set to use HBM5, with a total HBM5 capacity per GPU of 400 to 500 GB. It has a 2028/2029 release date.
HBM6 in 2032 will have an active/hybrid (silicon + glass) interposer and its maximum stack number rises to 20 from HBM5’s 16. This will send per-stack capacity to 96 to 120 GB.
HBM7 in 2035 will feature 160 to 192 GB stack capacity with an up to 24-Hi stack and 24 TBps bandwidth, three times that of HBM6. The bandwidth increases to 64 TBps with HBM8 in 2038, which has the same max 24-Hi stack height. Stack capacity rises to 200-240 GB.
HBM8 could use a double-sided interposer with HBM on one side and HBM, LPDDR memory, or HBF (high-bandwidth flash) memory, which combines the high-bandwidth characteristics of HBM with the high capacity of 3D NAND, on the other. The HBM dies have a 240 GB capacity; the LPDDR ones 480 GB; and the HBF dies, 1,024 GB capacity.
HBM8 with a mix of HBM memory and High-Bandwidth Flash (HBF) for extra capacity. Note the embedded cooling
We've summarized what we know in a table listing HBM4-8 characteristics (blank entries were not specified; HBM6's bandwidth is implied by HBM7 being three times faster):

| Generation | Year | Max bandwidth | Max stack | Per-stack capacity | Cooling | Die bonding |
|---|---|---|---|---|---|---|
| HBM4 | ramping 2026 | 2 TBps | 16-Hi | 64 GB | forced air / D2C water | microbump (MR-MUF) |
| HBM5 | 2029 | n/a | 16-Hi | n/a | immersion | microbump (MR-MUF) |
| HBM6 | 2032 | 8 TBps (implied) | 20-Hi | 96-120 GB | n/a | bumpless Cu-Cu |
| HBM7 | 2035 | 24 TBps | 24-Hi | 160-192 GB | embedded | bumpless Cu-Cu |
| HBM8 | 2038 | 64 TBps | 24-Hi | 200-240 GB | embedded | bumpless Cu-Cu |
This is a roadmap and the further out it gets the less certainty can be attached to it. We have to wait for JEDEC to issue a formal specification for future HBM generations before trusting the details.
A NetApp AI Space Race report asks whether China, the USA, or another country will become the world leader in AI innovation, and says that businesses will need an intelligent data infrastructure. Coming from a data infrastructure supplier, the assertion is unsurprising.
Gabie Boko
It sees the race for AI leadership as equivalent to the US-Soviet space race, with CMO Gabie Boko stating: "In the 'Space Race' of the 1960s, world powers rushed to accelerate scientific innovation for the sake of national pride. The outcomes of the 'AI Space Race' will shape the world for decades to come."
NetApp surveyed 400 CEOs and 400 IT execs across China, India, the UK, and the USA in May; 43 percent said the US would lead in AI over the next five years, twice as many as those putting India, China, or the UK in the lead.
Its report says 92 percent of Chinese CEOs report active AI projects but only 74 percent of Chinese IT execs agree with them. In the USA, 77 percent of CEOs report active AI projects and 86 percent of US IT execs agree with them. NetApp says there is a “critical misalignment between CEOs and IT executives” in China, “which could hinder its long-term leadership potential.”
It suggests that “internal alignment, not just ambition, may ultimately shape how AI strategies are executed across regions and roles.”
A different view might be that Chinese organizations are developing CEO-led AI projects faster than US ones.
Another difference between China and the other countries is that China is more focused on scalability (35 percent compared to a global average of 24 percent), whereas the others are focused on integration. Security and compliance are the lowest-ranked concerns (10 percent average across IT execs and CEOs globally).
More respondents think the US will be the likely long-term AI leader than China:
64 percent of US respondents ranked the US as the likely leader in AI innovation over the next five years, versus 43 percent of the global average
43 percent of China respondents ranked China as the likely leader in AI innovation over the next five years, versus only 22 percent of the global average
40 percent of India respondents ranked India as the likely leader in AI innovation over the next five years, versus only 16 percent of the global average
34 percent of UK respondents ranked the UK as the likely leader in AI innovation over the next five years, versus only 19 percent of the global average
Overall, CEOs and IT execs see “AI for decision making and competition to stay ahead” as the single most powerful force to drive AI adoption (26 percent). India (29 percent) and UK (32 percent) feel extra pressure to compete as China and the US are seen as clear leaders. China is uniquely driven by customer demand (21 percent vs 13 percent of global average), underscoring that the China market is seen as leading today with actual pilots and programs (83 percent vs 81 percent global average – not much of a difference).
Just over half (51 percent) of respondents see their own organization as competitive in AI, but none see themselves as the current leader. Almost all (88 percent) think their organization is mostly or completely ready to sustain AI transformation, and 81 percent are currently piloting or scaling AI.
NetApp’s report states: “One of the most significant success factors in the AI Space Race will be data infrastructure and data management, supported by cloud solutions that are agile, secure and scalable. Successful organizations need an intelligent data infrastructure in place to ensure unfettered AI innovation. This is critical no matter the company size, industry or geography.”
It concludes: “In the AI Space Race, hype wonʼt win – data will. No matter the size, industry, or location, success hinges on a foundation that can support the full weight of AI. Organizations that come out on top will be those with intelligent, secure, and scalable data infrastructure built to power real innovation.”
Comment
It seems obviously true that successful AI projects will need a scalable and secure data infrastructure. Accepting that, which suppliers could provide one? NetApp sees itself here, as “the intelligent data infrastructure company.” But we would suggest all of its competitors are also well positioned, as they currently emphasize storing unstructured data, supplying and supporting AI pipelines, RAG, vector databases, agents, and Nvidia GPUs and software.
We might suggest that leading positions in AI training and inference could be indicated by Nvidia GPU server certifications, Nvidia AI Factory, and AI Data Platform support. This would include DDN, Dell, HPE, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data, and WEKA.
If we look at support for GPUDirect for files and objects, we could add Cloudian, Hammerspace, MinIO, and Scality to our list. We could look at IO500 data and see that Xinnor, DDN, WEKA, VAST Data, IBM (Storage Scale), Qumulo, and VDURA are represented there. We could also look at AI LLM, RAG, and agent support for backup datasets by Cohesity, Commvault, Rubrik, Veeam, and others, and see that cloud file services suppliers such as Box, CTERA, Egnyte, Panzura, and Nasuni are piling into AI as well. Data management suppliers like Datadobi and Komprise are also active.
Data services suppliers in the widest sense, from observability and governance to database, data warehouse, data lake, lakehouse, and SaaS app suppliers, are all furiously developing AI-related capabilities, with Teradata announcing its own AI Factory in partnership with Nvidia.
In China and elsewhere outside the USA, Huawei and other Chinese suppliers will be well represented.
A conclusion is that all incumbent IT suppliers see any weakness in their adoption of AI and support of customer AI projects as a potential entry point into their customer base by competitors. None of them are willing to let this happen.
NetApp has recruited Zscaler CTO Syam Nair to be its new chief product officer, replacing the departing Harvinder Bhela.
Bhela joined NetApp in January 2022 as EVP and CPO and led product management, engineering, hardware, design, data science, operations, and product marketing, after spending 25 years at Microsoft. His LinkedIn profile says he is “in mindful hibernation,” while comments on his departure post suggest he may be pursuing another opportunity.
CEO George Kurian stated: “I am thrilled to welcome Syam to NetApp’s leadership team. He joins us at a time when our customers are looking to NetApp to help them deliver data-enabled growth and productivity: not only must they innovate to stay ahead, but they must also simplify to improve productivity and agility. This is a balance Syam has mastered throughout his career. Syam’s proven track record – from building planet-scale Azure data services at Microsoft to spearheading hyper-scale platforms like Salesforce Data Cloud – is exactly what we need as we sharpen our focus on high-growth markets.”
Nair is a former executive at Salesforce, Microsoft, and Zscaler with a track record of scaling major cloud platforms and pioneering next-gen AI-powered products. He joined Zscaler in May 2023 as CTO and EVP for R&D, moving from Salesforce. While at Zscaler, he led efforts to integrate AI and ML into its offerings and drove the expansion of its Zero Trust Exchange platform, scaling it to handle over 300 billion daily transactions, reinforcing Zscaler’s position as the world’s largest in-line cloud security platform.
Syam Nair
At Salesforce, he was EVP and head of product engineering and technology for Tableau, Customer Data Cloud (Genie – a hyperscale CRM data cloud), AI (Einstein), Automation (Flow), and Salesforce Marketing Cloud. He managed a globally distributed engineering team and cloud infrastructure. Nair and his execs were also responsible for the vision and execution of Salesforce’s next-gen AI search and analytics platform and experiences. At Microsoft, he was part of the leadership team responsible for building and accelerating the expansion of the globally distributed Azure data services.
Nair will help NetApp sharpen its focus on hybrid cloud and AI. The company is developing a new version of ONTAP for the AI era, and reckons AI inferencing and RAG use cases in the enterprise will be a larger and longer-lived market than model training. NetApp also has a strong focus on developing its cloud-native file offerings in the AWS, Azure, and Google clouds.
William Blair analyst Jason Ader has hosted investor meetings with NetApp CFO Wissam Jabre and tells subscribers: “NetApp has been plagued by choppy revenue performance over the past five years (two up years, three down years), with investors looking for clues to gain conviction on future consistency. Going forward, the company’s revenue growth algorithm to achieve its revenue CAGR target (from fiscal 2025-2027) of mid to high single digits is based on a mix shift to higher growth opportunities, including AFAs, public cloud services, block-optimized storage products, and AI. The company is off to a good start here, with revenue growth of 5 percent in fiscal 2025, and guidance of 4 percent growth (excluding the Spot divestiture) in fiscal 2026.”
Nair said: “I’ve spent my career tackling complex technological challenges and leading teams through transformations for hyper-growth. I’m thrilled to bring that experience to NetApp. We will set a bold vision for the future of hybrid cloud data services and execute with a growth mindset and relentless focus on customer success.”
AI is everywhere at HPE Discover 2025, with Nvidia-fueled AI factories taking top billing, Alletra storage products playing their part at the bottom of its AI product stack, and a Commvault-Zerto deal to protect data.
The Alletra Storage MP X10000 object storage array, based on ProLiant server controller nodes and with a disaggregated shared everything (DASE) architecture, will support Anthropic’s Model Context Protocol (MCP) with built-in MCP servers.
HPE president and CEO Antonio Neri said: “Generative, agentic, and physical AI have the potential to transform global productivity and create lasting societal change, but AI is only as good as the infrastructure and data behind it.”
The company says that “integrating MCP with the X10000’s built-in data intelligence accelerates data pipelines and enables AI factories, applications, and agents to process and act on intelligent unstructured data.” In its view, the MP X10000 will offer agentic AI-powered storage with MCP.
It explains: “By connecting GreenLake Intelligence with the X10000 through MCP servers, HPE can enable developers and admins to orchestrate data management and operations through GreenLake Copilot or natural-language interfaces. Additionally, connecting the built-in data intelligence layer of X10000 with internal and external AI agents ensures AI workflows are fed with unstructured data and metadata-based intelligence.”
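To make the MCP part concrete: an MCP server exposes named tools that an agent can discover and call over a standard transport. A minimal sketch using the reference Python SDK follows; the storage-admin tool and its stubbed values are hypothetical, not HPE's implementation:

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical MCP server exposing one storage-admin tool; this shows the
# pattern, not the actual X10000 MCP integration.
mcp = FastMCP("storage-admin")

@mcp.tool()
def bucket_usage(bucket: str) -> dict:
    """Report capacity usage for an object bucket (stubbed values)."""
    # A real server would query the array's management API here.
    return {"bucket": bucket, "used_gb": 742, "quota_gb": 1024}

if __name__ == "__main__":
    mcp.run()   # serves tools over stdio for an MCP-capable agent
```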
The X10000 now supports the Nvidia AI Data Platform reference design and “has an SDK to streamline unstructured data pipelines for ingestion, inferencing, training and continuous learning.” Nvidia says this data platform reference design, which features Blackwell GPUs, BlueField-3 DPUs, Spectrum-X networking, and its AI Enterprise software “integrates enterprise storage with Nvidia-accelerated computing to power AI agents with near-real-time business insights.”
An Nvidia blog said back in May: “Nvidia-Certified Storage partners DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA are introducing products and solutions built on the Nvidia AI Data Platform, which includes NVIDIA accelerated computing, networking and software.”
The Commvault-Zerto deal builds on an existing GreenLake cloud partnership with Commvault’s cloud offering (Metallic) by having Commvault integrate Zerto’s continuous data protection and disaster recovery software into its cloud and offering it to Commvault Cloud customers to protect virtualized on-premises and cloud workloads. They get near-zero recovery point objectives (RPOs) and recovery time objectives (RTOs) to provide better protection against data outages.
Fidelma Russo
Fidelma Russo, EVP and GM Hybrid Cloud and CTO at HPE, stated: “Our combined innovations set a new standard for data resilience, helping customers navigate a rapidly evolving threat landscape.”
Commvault CEO Sanjay Mirchandani said: “At a time when data is more valuable and vulnerable than ever, our collaboration is empowering customers to keep their business continuous by advancing their resilience and protection of hybrid workloads.”
The two companies say they are also “introducing enhanced integration between the HPE storage and data protection and Commvault Cloud portfolios to safeguard sensitive data, protect against ransomware, and ensure seamless recovery from disruptions.” This has three aspects:
Resilience: The combination of Alletra Storage MP B10000 (unified file and block storage) with built-in ransomware detection and snapshot immutability, HPE Cyber Resilience Vault with air-gapped protection, and Commvault Cloud AI-enhanced anomaly detection and threat scanning provides unmatched resilience and peace of mind.
Fast, Clean Recovery: The integration of Alletra Storage MP X10000 featuring data protection accelerator nodes with Commvault Cloud enables enterprises to return to operation safely and rapidly after an incident. It brings together blazing fast storage, typical 20-to-1 data reduction, and the broadest protection across hybrid cloud workloads.
Geographic Protection: Commvault Cloud seamlessly orchestrates simultaneous snapshots and local backups for two synchronously replicated Alletra Storage MP B10000 arrays located in different geographical regions. This streamlines data protection workflows and delivers "unparalleled recoverability for critical enterprise data."
There is no delivery timescale for these three items yet. Commvault and HPE’s partnership includes integrations with HPE StoreOnce backup appliances and tape storage systems “for highly cost-effective, long-term data retention, as well as advanced image-based protection for virtualized environments through HPE Morpheus VM Essentials Software.”
HPE’s Morpheus Enterprise Software provides a unified control plane for its AI factories and has a Veeam data protection integration.
Alletra block storage has been ported to the AWS and Azure clouds and is delivered and managed via the GreenLake cloud. The Alletra B10000 and X10000 systems share the same hardware.
Digital Realty, a cloud and carrier-neutral datacenter, colocation and interconnection provider, is standardizing on the HPE Private Cloud Business Edition for its operations across more than 300 datacenters on six continents. This includes the Morpheus VM Essentials Software and the Alletra Storage MP B10000. HPE and World Wide Technology (WWT) will collaborate to support deployment across Digital Realty’s global footprint.
A new HPE CloudOps Software suite brings together OpsRamp, Morpheus Enterprise software, and Zerto software. Available standalone or as part of the suite, these provide automation, orchestration, governance, data mobility, data protection, and cyber resiliency across multivendor, multicloud, multi-workload infrastructure.
Alletra Storage MP X10000 with MCP support is planned for the second half of 2025.
Bootnote
Veeam and HPE announced a combination of Veeam Data Platform, HPE Morpheus Software, and Zerto Software, with increased joint go-to-market investment, to provide:
Veeam delivery of image-based backup for Morpheus VM Essentials Software in the near term. Whether running HPE Private Cloud solutions or standalone servers with Veeam and VM Essentials, customers can take advantage of seamless, unified multi-hypervisor protection and VM mobility, as well as up to 90 percent reduction in VM license costs.
Protection for containerized and cloud-native workloads, with Veeam Kasten providing backup and recovery.
HPE and Veeam also announced a “Data Resilience by Design” joint framework that includes HPE cybersecurity and cyber resilience transformation and readiness services.
StorONE is using Phison’s aiDAPTIV+ software in its ONEai automated AI system for enterprise storage.
SSD controller and latterly drive supplier Phison launched aiDAPTIV+, an LLM system that can be trained and maintained on premises, in August last year. StorONE supplies S1 storage; performant and affordable block, file, and object storage from a single array formed from clustered all-flash and hybrid flash+disk nodes. ONEai integrates Phison’s aiDAPTIV+ technology directly into the StorONE storage platform, with plug-and-play deployment, GPU optimization, and native AI processing built into the storage layer. It is said to enable large language model (LLM) training and inferencing without the need for external infrastructure or cloud services.
Gal Naor
At an IT Press Tour event StorONE CEO Gal Naor stated: “ONEai sets a new benchmark for an increasingly AI-integrated industry, where storage is the launchpad to take data from a static component to a dynamic application. Through this technology partnership with Phison, we are filling the gap between traditional storage and AI infrastructure by delivering a turnkey, automated solution that simplifies AI data insights for organizations with limited budgets or expertise.
“We’re lowering the barrier to entry to enable enterprises of all sizes to tap into AI-driven intelligence without the requirement of building large-scale AI environments or sending data to the cloud.”
ONEai uses GPU and memory optimization and intelligent data placement to offer an efficient, AI-integrated system with minimal setup complexity. Integrated GPU modules reduce AI inference latency and deliver up to 95 percent hardware utilization.
Users benefit from reduced power, operational, and hardware costs, enhanced GPU performance, and on-premises LLM training and inferencing on proprietary organizational data. There is no need to build complex AI infrastructure or navigate the regulations and costs of off-premises systems.
Michael Wu
The ONEai software is optimized for fine-tuning, RAG, and inferencing, features integrated GPU memory extensions, and simplifies data management via a user-friendly GUI, eliminating the need for complex infrastructure or external AI platforms. We're told it automatically recognizes and responds to file creation, modification, and deletion, feeding these events into ongoing AI activities and delivering real-time insights into data stored in the system.
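File-event-driven ingestion of that sort is a well-established pattern. A minimal sketch using the Python watchdog library shows the general mechanism – this is not StorONE's implementation, and feed_to_pipeline is a placeholder:

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def feed_to_pipeline(path: str, action: str) -> None:
    # Placeholder: a real system would queue the file for AI processing.
    print(f"{action}: {path}")

class IngestHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            feed_to_pipeline(event.src_path, "created")

    def on_modified(self, event):
        if not event.is_directory:
            feed_to_pipeline(event.src_path, "modified")

    def on_deleted(self, event):
        if not event.is_directory:
            feed_to_pipeline(event.src_path, "deleted")

observer = Observer()
observer.schedule(IngestHandler(), path="/data/watched", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```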
Michael Wu, GM and president of Phison US, said: “Through the aiDAPTIV+ integration, ONEai connects the storage engine and the AI acceleration layer, ensuring optimal data flow, intelligent workload orchestration and highly efficient GPU utilization. The result is an alternative to the DIY approach for IT and infrastructure teams, who can now opt for a pre-integrated, seamless, secure and efficient AI deployment within the enterprise infrastructure.”