Storage news roundup – 17 October

AI-powered data trust company Ataccama announced a native integration with Atlan, bringing Ataccama’s automated data quality intelligence directly into experiences powered by Atlan’s metadata lakehouse, including search, lineage, and glossary. Stewards define the rules in Ataccama, and data consumers see trust signals instantly in Atlan, reducing risk, accelerating decision-making, and building confidence in AI.

The CTERA Intelligent Data Platform is now available in the AWS Marketplace. CTERA says it “delivers a secure, compliant, and scalable data fabric across on-premises, cloud, and hybrid environments, offering centralized control, automated protection, and intelligent insights so enterprises can operate with agility, reduce risk, and unlock the full value of their data estate. … AWS customers can now access the CTERA Intelligent Data Platform and suite of Enterprise Data Services directly within AWS Marketplace, streamlining purchase, deployment, and management within their existing AWS accounts.”

Databricks plans to train 100,000 professionals in the UK and Ireland by 2028 in generative AI, data engineering, machine learning, and analytics through its in-person and self-paced programmes. As part of Databricks Free Edition, a global data and AI education programme aimed at closing the industry-wide talent gap, more than $10 million will be invested in the UK and Europe to provide free access to the Databricks Data Intelligence Platform. Databricks is involved in the UK Department for Science, Innovation and Technology’s (DSIT) “Get Tech Certified this Autumn” programme, and has partnered with UK and Irish universities including the London School of Economics (LSE) and University College Dublin to equip students with in-demand data and AI skills.

dbt Labs announced it is open-sourcing MetricFlow with an Apache 2.0 licence. “This marks a significant step towards advancing trustworthy AI across enterprises and comes as the company has committed to the Open Semantic Interchange (OSI), a joint initiative led by industry leaders aimed at creating vendor-neutral standards for semantic data exchange across analytics platforms and AI tools. … MetricFlow is the core engine that compiles metric definitions into the code that computes them, and, unlike text-to-SQL methods, that computation is explainable and reliable every time. … uses information from semantic model and metric YAML configurations to construct and run SQL in a user’s data platform, providing governed metrics.”
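MetricFlow metrics are declared in YAML alongside dbt semantic models. A minimal illustrative sketch (the `orders` model, columns, and metric names here are hypothetical) of the kind of configuration MetricFlow compiles into SQL:

```yaml
semantic_models:
  - name: orders                  # hypothetical semantic model
    model: ref('orders')          # backing dbt model
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum
        expr: amount

metrics:
  - name: total_revenue           # the governed metric consumers query
    type: simple
    type_params:
      measure: revenue
```

A request for `total_revenue` grouped by `ordered_at` would be compiled by MetricFlow into SQL and executed in the user’s data platform, so every consumer gets the same governed definition.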

EnterpriseDB has added new capabilities to EDB Postgres AI giving enterprises faster, simplified paths to achieve hybrid data sovereignty.

  • EDB is expanding Agent Studio’s tool functionality with native Model Context Protocol (MCP) support, giving instant access to hundreds of pre-built MCP servers.
  • Enhancements to the migration experience help enterprises leave limited legacy and cloud systems behind for open solutions that accelerate sovereign AI goals. There is an AI copilot for Oracle modernizations, support for AWS Aurora migrations and improved migration guidance and monitoring.
  • EDB PG AI now delivers secure, flexible deployment across cloud, on-premises, and hybrid environments – achieving sovereign control.

Target backup device supplier ExaGrid announced that its v7.4.0 release includes new features optimized for Managed Service Providers (MSPs) who use ExaGrid Tiered Backup Storage to protect their customers’ data. MSPs can store multiple customers’ data in a single ExaGrid system and can use ExaGrid with over 25 backup applications, including Veeam, Rubrik, Commvault, NetBackup, HYCU, Cohesity (in 2026), Oracle RMAN Direct, SQL Dumps Direct, and many others. The new features will help MSPs track their customers’ data usage and separately provide the ability to restore an individual customer’s data in the case of a ransomware attack.

Data orchestrator Hammerspace demonstrated its Tier 0 architecture at Oracle AI World 2025, Las Vegas. With Tier 0, Oracle Cloud Infrastructure (OCI) Supercluster – a bare metal GPU server cluster – operates with ultra-high-performance shared storage, helping to reduce bottlenecks and minimize GPU idle time. By transforming existing local NVMe storage in OCI GPU shapes into a persistent, ultra-fast shared storage tier, Hammerspace eliminates data silos and unifies storage, unlocking a new level of efficiency and performance for AI workloads.

Benchmark testing in OCI showed:

  • Up to 7x improvement in latency vs. traditional cloud file storage.
  • Up to 6x improvement in storage performance vs. traditional cloud file storage.
  • Checkpointing at extreme speeds, crushing idle time.
  • Throughput so fast it keeps GPUs fed 24/7, not waiting on data.
  • Policy-driven flexibility to move cold data to lower-cost tiers without touching the hot path.

IBM and Nvidia are working together to bring Nvidia cuDF to the Velox execution engine, enabling GPU-native query execution for widely used platforms like Presto and Apache Spark. This is an open project. Get info here. IBM Fusion HCI announced support for the Lenovo SR675V3 3U rack-mount server with support for the Nvidia RTX PRO 6000 Blackwell Server Edition GPUs and H200 NVL PCIe GPUs. Get more detail here.

Index Engines released findings from an independent study that reveals a gap between cyber resilience awareness and actual preparedness to respond to and recover from cyberattacks. Conducted by theCUBE Research and based on insights from 600 IT and cybersecurity professionals across North America, Europe, and APAC, the research in this eBook highlights a troubling reality: while most organizations recognize cyber resilience as a business imperative, few are equipped to recover when—not if—a cyberattack occurs.

InfluxData announced that InfluxDB 3, its time series database, is available as a fully managed service on Amazon Timestream for InfluxDB, now the default time series database in the AWS Console. Developers can run InfluxDB 3 Core (open source) and InfluxDB 3 Enterprise directly on AWS. Users of AWS’ time series offering, LiveAnalytics (now closed to new customers), can migrate to InfluxDB 3 immediately.

Informatica announced an expanded partnership with Oracle to help customers unify and govern trusted master records to accelerate agentic development. Some details:

  • Blueprint for Agentic AI on Oracle Cloud Infrastructure (OCI) – A framework that runs on Oracle Cloud, with no-code, pre-built connectors, recipes and an API layer to speed agent development. 
  • IDMC MCP Server Support – A dedicated server within Informatica’s Intelligent Data Management Cloud (IDMC) platform that lets enterprise agentic AI projects access IDMC’s data management capabilities, via the industry-standard MCP protocol. 
  • Master Data Management (MDM) Capability on OCI – Native availability of Informatica MDM SaaS on OCI providing trusted data from any business domain with enterprise-grade security, performance and cloud-native efficiency. 
  • Informatica’s IDMC on Oracle Dedicated Region Cloud@Customer (DRCC) – This enables customers to run the entire IDMC platform, including MDM, data governance, and other IDMC services in a private environment that meets regulatory and data-residency requirements. 

MemVerge and XConn Technologies unveiled a 100 TiB CXL memory pool for KV cache, integrated with Nvidia Dynamo, at the OCP Global Summit, demonstrating how CXL-based memory pooling boosts AI inference performance by more than 5x compared to SSDs. The duo showed how a scalable CXL memory architecture—powered by XConn’s hybrid CXL/PCIe Apollo switch and MemVerge’s GISMO software—offers a commercially available, high-performance system for the next generation of AI workloads. The joint demo illustrated how MemVerge’s Global IO-free Shared Memory Objects (GISMO) technology enables Nvidia’s Dynamo and NIXL to tap into a huge CXL memory pool (up to 100 TiB in 2025) that serves as the KV cache store for AI inference workloads, where prefill GPUs and decode GPUs work in synchrony, taking advantage of low-latency, high-bandwidth memory access to complete the computation.

Reuters reports that Micron is to stop selling its DRAM chips for data center servers inside China. It will continue to sell chips to auto and mobile phone sector customers in China. The Chinese government forbade the use of Micron’s chips in critical infrastructure in 2023. Micron will continue selling chips to China’s Lenovo and one other Chinese customer (Huawei?) that have data center businesses outside China. Some 12% of Micron’s revenues came from sales inside China in its last financial year.

Mirantis announced the release of Pelagia, an open source Kubernetes controller for lifecycle management of Ceph software-defined storage. It works alongside Rook to add needed lifecycle automation and a simplified control plane for large-scale environments. Pelagia comes from the Mirantis OpenStack for Kubernetes (MOSK) product, which for more than five years, was used to deploy and manage large production Ceph clusters where reliability was critical. Pelagia is available now under an open source license. Quick start instructions are available here. For more information, join the user community discussion group on GitHub.

Kristine Wedum.

NetApp announced a multi-year partnership with the San Francisco 49ers to provide data science education to high school students in the Bay Area. This will essentially allow the 49ers and NetApp to use Levi’s Stadium as a classroom and is part of the 49ers EDU Data Analytics Lessons Series that has already reached 500K students. 49ers EDU educators will teach data storage, cloud integration, data security, and performance optimization, with NetApp volunteers mentoring on tech careers (fostering the pipeline from classroom to real-world scenarios). The 49ers Foundation received a NetApp Customer Social Impact Award for this partnership at NetApp INSIGHT 2025.

NetApp appointed Kristine Wedum as VP, US Partner Organization. Wedum previously held related roles at Pure Storage (8 years), Proofpoint, Tufin, and Brocade.

Alex Dunfey.

M&E industry storage software supplier OpenDrives has promoted Alex Dunfey to CTO from his former position as SVP of Engineering. This coincides with the soft launch of Astraeus, OpenDrives’ new cloud-native data services platform announced in September. Dunfey said: “Future updates to Astraeus will add new data services, simplify workflow management, and leverage AI for insights and automation, which means we have to evolve the way we operate to do that.” Download an Astraeus white paper here.

William Blair analysts Sebastian Naji and Jason Ader noted: “This week we attended Oracle’s 2025 AI World conference and financial analyst session. The focus this year was not only on the sea-change impact of AI across the technology landscape, but also on Oracle’s advantage in monetizing the AI platform shift through its vertical integration of infrastructure, database, and apps. While OCI is the primary driver of growth over the next five years (growing at a 75% compound annual rate), the ability for customers to leverage vectorized data inside the Oracle database and the integration of AI agents across all of Oracle’s applications (from Fusion to vertical apps) should also serve as key levers of growth.”

Oracle is vectorizing everything in its database so that its customers can use Oracle database records in their RAG AI workloads. The Oracle AI database can take data in the database, OCI object store, or AWS or Google cloud storage, and vectorize it for AI models such as Grok, ChatGPT, Llama, or Gemini. Watch Oracle chairman and CTO Larry Ellison’s keynote presentation from Oracle AI World here.

Video grab from Larry Ellison’s keynote presentation at Oracle AI World.

PingCAP unveiled TiDB X, a new architecture for its distributed SQL database, and announced a suite of Generative and Agentic AI innovations:

  • TiDB X: A new distributed SQL architecture that makes object storage the backbone of TiDB. By truly decoupling compute and storage, TiDB X enables TiDB to scale intelligently, adapting in real time to workload patterns, business cycles, and data characteristics. 
  • Smarter Retrieval & Reasoning: A unified query engine that fuses vectors, knowledge graphs, JSON, and SQL for richer, multi-hop queries and deeper insights. Enables long-term memory with versioned, branchable storage. 
  • AI Developer Toolkit: New building blocks for GenAI, including the TiDB AI SDK, TiDB Reasoning Engine, and TiDB MCP Server, empowering developers to quickly build and scale agentic workflows. 
  • LLM Integrations: Out-of-the-box support for OpenAI, Hugging Face, Cohere, Gemini, Jina, Nvidia, and more, making TiDB the most open and flexible distributed SQL platform for AI builders. 

TiDB X and the new GenAI capabilities will be available across all TiDB Cloud tiers in late 2025, including Starter, Essential, and Premium, with BYOC (Bring Your Own Cloud) coming soon. To learn more or request early access, visit https://www.pingcap.com/product/cloud

Red Hat announced Red Hat AI 3, with the latest versions of Red Hat AI Inference Server, Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI. It makes it possible to rapidly scale and distribute AI workloads across hybrid, multi-vendor environments while simultaneously improving cross-team collaboration on next-generation AI workloads like agents, all on the same common platform.

It introduces the general availability of llm-d, which reimagines how LLMs run natively on Kubernetes. llm-d enables intelligent distributed inference, tapping the proven value of Kubernetes orchestration and the performance of vLLM, combined with key open source technologies like Kubernetes Gateway API Inference Extension, the Nvidia Dynamo low latency data transfer library (NIXL), and the DeepEP Mixture of Experts (MoE) communication library. Learn more here.

Riverbed, which supplies AIOps for observability and data acceleration, introduced its SaaS-based Data Express Service built on Oracle Cloud Infrastructure (OCI) and designed to move petabyte-scale datasets up to 10x faster than traditional offerings. Petabyte-scale datasets can now be moved in days instead of months, dramatically accelerating AI model training and deployment. The service uses post-quantum cryptography (PQC) to move petabyte-scale datasets through secure VPN tunnels to ensure that customer data remains protected during the transfer process. The service includes enterprise-grade controls for secure access to data as well as the option to deploy data mover agents in customer tenants to enable additional security controls. Get a Solution Brief here.

The Sandisk PCIe Gen 5 SN861 NVMe SSD is now featured on the OCP Marketplace.

Object storage supplier Scality announced an exceptional Net Promoter Score (NPS) of 85 across its RING and ARTESCA products. It says that, in the standardised NPS model, any score above 30 is considered strong, above 70 excellent, and above 80 exceptional. With the industry average NPS for data storage solution vendors typically ranging between 40 and 50, Scality’s 85 places it among the top-performing technology vendors worldwide. The score is based on ratings collected directly from customers following interactions with Scality’s support and customer-success teams.
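For context, NPS is calculated as the percentage of promoters (survey scores of 9–10) minus the percentage of detractors (scores of 0–6) on a 0–10 scale; passives (7–8) are counted in the total but not the difference. A minimal sketch with hypothetical ratings:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical survey of 20 responses: 18 promoters, 1 passive, 1 detractor
ratings = [10] * 12 + [9] * 6 + [8] + [5]
print(nps(ratings))  # → 85
```

Because passives dilute the score without subtracting from it, a sustained 85 requires almost every respondent to answer 9 or 10.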

SK hynix revealed the PS1101 SSD, with 245 TB capacity and a PCIe Gen 5 connection in the E3.L format, at a Dell Technologies Forum event in Seoul, Korea. It uses 321-layer QLC 3D NAND. The drive could ship in 2026. SK hynix says it’s continuing its journey to becoming a full-stack AI memory provider.

SK hynix display panel showing the PS1101.

SK hynix is offering some scary good deals on select SSDs during a limited-time promotion available on Amazon, now through October 26:

  • SK hynix Platinum P41 1TB – Now $71.99 (was $79.99) → 10% off
  • SK hynix Tube T31 2TB – Now $118.98 (was $159.99) → 26% off

Open data lakehouse supplier Starburst has a new set of capabilities designed to operationalize the agentic workforce. It says that, with new built-in support for model-to-data architectures, multi-agent interoperability, and an open vector store on Iceberg, Starburst delivers the first lakehouse platform that empowers AI agents with unified enterprise data, governed data products, and metadata, enabling humans and AI to reason, act, and decide faster while ensuring trust and control.

Starburst gives AI agents secure, governed access to data wherever it resides, on-premises or in the cloud, at enterprise scale. This federated, model-to-data approach helps organizations maintain sovereignty, reduce costs, and avoid compliance pitfalls, especially in highly regulated industries or cross-border environments. Read a blog here. The enhancements:

  • Multi-Agent Ready Infrastructure: A new MCP server and agent API allows enterprises to create, manage, and orchestrate multiple AI agents alongside the Starburst agent. This enables customers to develop multi-agent and AI application solutions geared to complete tasks of growing complexity.
  • Open & Interoperable Vector Access: Starburst unifies access to vector stores, enabling retrieval augmented generation (RAG) and search tasks across Iceberg, PostgreSQL + PGVector, Elasticsearch and more. Enterprises gain flexibility to choose the right vector solution for each workload without lock-in or fragmentation. 
  • Model Usage Monitoring & Control: Starburst offers enterprise-grade AI model monitoring and governance. Teams can track, audit, and control AI usage across agents and workloads with dashboards, preventing cost overruns and ensuring compliance for confident, scalable AI adoption.
  • Deeper Insights & Visualization: An extension of Starburst’s conversational analytics agent enables users to ask questions across different data product domains and get back a natural language response, a visualization, or a combination of the two. The agent interprets user intent, performs data discovery to find the right data, and then processes the query to answer the question.

Synology announced DSM 7.3, which includes significant updates to its third-party hard drive support policy. The new release restores support for third-party drives across the 2025 DiskStation Plus, Value, and J Series models, allowing installation and storage pool creation without the previous restrictions.

  • DSM 7.3 introduces Synology Tiering, automatically moving files between high-speed and cost-effective storage based on actual use, maximising storage value and performance. 
  • Security enhancements include adoption of industry-recognised risk indicators — KEV, EPSS, and LEV — for smarter threat prioritisation and stronger protection against emerging vulnerabilities. 
  • The Synology Office Suite has been upgraded to improve team workflows. With Synology Drive, users can now add shared labels and create smarter file requests. 
  • MailPlus now enables email moderation and domain sharing for unified identities. 
  • The AI Console introduces custom data masking and filtering, allowing users to safeguard sensitive information locally before transmitting data to third-party AI providers, enhancing both security and workflow reliability. 
  • Storage flexibility has been expanded, with 2025 DiskStation Plus, Value, and J Series models permitting installation and storage pool creation using third-party drives (M.2 pools still require validated models). 

VDURA CEO Ken Claffey has written a blog about NetApp’s AFX, using it as an example to support his point that the general purpose AI storage era is ending. He says NetApp positions AFX as highly capable for AI, with claims of up to 4 TB/s throughput and strong enterprise integration. AFX inherits ONTAP’s strengths in enterprise controls but also its constraints in areas like scaling mechanics, data protection, performance, and hardware efficiency.

AFX achieves up to 4 TB/s cluster throughput (reads) and scales to over 1 EB, which is respectable for feeding GPU clusters via standard protocols like NFS/pNFS. This peak requires up to 128 of the 2 RU AFX 1K controller nodes, equating to about 31 GB/s reads per controller node (4 TB/s ÷ 128), plus 2 RU NX224 NVMe HA enclosures and 100GbE switches. He reckons the total rack space needed would be 512 RU.

VDURA delivers over 60 GB/s reads and 40 GB/s writes per 1 RU standard server node in comparable configs, with a shared-nothing design that avoids legacy controller bottlenecks. A 67-node (67 RU) VDURA config would deliver 4 TB/s. Read much more in his blog.
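The rack-density arithmetic behind Claffey’s comparison can be reproduced directly. A minimal sketch, assuming (as his 512 RU figure implies) one 2 RU NX224 enclosure paired with each 2 RU AFX 1K controller node:

```python
import math

target_gbs = 4000  # 4 TB/s cluster read target

# NetApp AFX at peak: 128 x 2 RU controller nodes
afx_nodes = 128
afx_per_node = target_gbs / afx_nodes  # GB/s reads per controller node
# Assumption: one 2 RU enclosure per 2 RU controller node
afx_rack_units = afx_nodes * 2 + afx_nodes * 2

# VDURA: 60 GB/s reads per 1 RU server node, shared-nothing
vdura_nodes = math.ceil(target_gbs / 60)  # nodes needed for 4 TB/s
vdura_rack_units = vdura_nodes * 1

print(afx_per_node, afx_rack_units, vdura_nodes, vdura_rack_units)
# → 31.25 512 67 67
```

The comparison leaves out switches and any VDURA capacity tier, so it is a throughput-density illustration rather than a full configuration sizing.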

SW RAID supplier Xinnor, with MEGWARE, Celestica, and Phison, enabled Germany’s most powerful university-owned AI supercomputer at NHR@FAU to achieve the #3 position in the global IO500 benchmark rankings and #1 among Lustre-based solutions. The Helma supercomputer at Friedrich-Alexander-Universität Erlangen-Nürnberg’s National High-Performance Computing Center (NHR@FAU) combines 192 dual-socket AMD EPYC 9554 “Genoa” compute nodes with 768 Nvidia H100/H200 GPUs, ranking #51 on the June 2025 TOP500 list. The storage infrastructure, designed and built by MEGWARE using Celestica SC6100 systems with Phison Pascari drives and protected by Xinnor’s xiRAID Classic 4.2, delivered breakthrough performance metrics that established new benchmarks for high-availability NVMe storage in academic HPC environments. Read the case study here.

Xinnor case study diagram.