
Storage news ticker – October 3

Data protector Acronis announced GA of True Image 2026 with built-in patch management for Windows and a strengthened security engine with AI-based threat detection, anti-ransomware, and malware scanning. Acronis claims it is the first consumer software to proactively safeguard against emerging cyberthreats and provide identity protection, fast backup, easy recovery, and advanced cyber protection in an all-in-one tool. Acronis True Image 2026 can be used on up to five computers and unlimited mobile devices. Learn more here.

Automated data exchange supplier Adeptia announced a rebrand of its flagship Connect platform as Adeptia Automate. With new AI-enhanced capabilities, the end-to-end data platform helps data experts and business users manage first-mile data with greater speed, control, and ease. Adeptia Automate pairs AI-guided data mapping with a modern UX and a scalable, flexible architecture to turn disconnected data into actionable intelligence – helping enterprises eliminate IT bottlenecks, speed up partner onboarding, and boost efficiency. The company cites enhanced usability, enterprise-grade scalability, frictionless operations, and increased data governance.

US-based cybersecurity firm Blue Goat has criticized UK government moves to prohibit public bodies and critical national infrastructure (CNI) operators from paying ransom demands and requiring private organizations to notify authorities if they intend to make a payment. It says bans will push adversaries to adapt, and that adaptation can be painful for victims who lack contingency plans. “Organizations should assume attackers will pivot to data theft, crippling disruption and supply-chain targeting. The immediate priorities for UK organizations are straightforward: harden defenses, test incident response, and ensure you have a credible recovery plan that does not depend on ransom payments.”

Data protector CloudCasa released a software update featuring:

  • Backup to NFS storage: In addition to object storage, CloudCasa now supports backups directly to NFS targets, providing greater flexibility for on-premises and edge environments.
  • File browsing and download for VM backups: Users can now browse and download individual files from virtual machine disks and file systems, enabling quick file-level recovery without restoring the full VM.
  • Disaster recovery restore for Longhorn volumes: CloudCasa adds a streamlined disaster recovery (DR) restore option for Longhorn environments with DR volumes configured for continuous data replication between clusters, simplifying recovery during failover scenarios.

The London Stock Exchange Group (LSEG) is partnering with Databricks so that financial firms can deploy AI agents directly on live LSEG market data. This will bring AI-ready financial data – starting with Lipper Fund Data & Analytics and Cross Asset Analytics – natively into the Databricks platform. Customers can now combine LSEG market data with their own, enabling them to build and deploy governed AI agents for real-time investment analytics, risk management and trading workflows using Databricks’ Agent Bricks. For analysts, the company claims this means fewer hours wrangling legacy data feeds, and the ability to run real-time investment, trading, and compliance agents within days, not quarters.

An independent study revealed that enterprises using the Denodo Platform alongside modern data lakehouses like Snowflake or Databricks achieve a staggering 345 percent ROI, $3.6 million in cost avoidance, payback in 6.5 months, and 3-4x faster time-to-insight. Veqtor8 published the results of this study in a whitepaper entitled “The ROI of Using the Denodo Platform alongside the Modern Data Lakehouse,” and a complimentary copy is available on the Denodo website. Denodo produces a data management platform product that virtualizes data sources.

MRAM developer Everspin announced a strategic collaboration with Quintauris to bring its MRAM technology into the Quintauris RISC-V ecosystem. By integrating Everspin’s MRAM technologies with Quintauris’s reference architectures and real-time platforms, the partnership works to ensure memory subsystems meet the highest standards for performance and functional safety. Everspin has shipped more than 200 million MRAM devices.

South Korean SSD controller supplier FADU has unveiled its Sierra FC6161 PCIe Gen 6 SSD controller, with read and write speeds of up to 28.5 GB/s – 500 MB/s more than Silicon Motion’s PCIe Gen 6 SM8466 controller, which offers up to 28 GB/s and 512 TB total capacity. The Sierra FC6161 supports random read/write IOPS of 6.9 million/1 million and will operate at a sub-9 W TDP (Thermal Design Power).

Data-moving business Fivetran says clothing brand Steve Madden is using Fivetran to centralize its advertising, web traffic, and social engagement data from its global digital footprint. It relies on Fivetran to integrate data from a wide range of platforms including Facebook Ads, Google Analytics 4, Facebook Pages, Google Search Console, TikTok Ads, LinkedIn Ads, Instagram Business, and Snapchat Ads. The retailer also leverages Fivetran’s Connector SDK to bring in custom data from additional platforms like Adform.

By unifying these sources, Steve Madden’s marketing and analytics teams can view paid and organic performance side-by-side across all major channels, align campaign spend with engagement and conversion data, and eliminate the need for manual data pulls from multiple platforms.

Fivetran has acquired open source transformation company Tobiko Data, the company behind SQLMesh and SQLGlot. Fivetran claims the acquisition strengthens its position as the only fully managed, end-to-end platform that combines data movement, transformation, and activation – making it easier for customers to deliver governed, AI-ready data with speed and scale.

This marks Fivetran’s second acquisition of the year, following its acquisition of Census to expand into reverse ETL. Last year, Fivetran surpassed $300 million in annual recurring revenue, expanded its Connector SDK to help developers build high-quality production connectors, and launched Hybrid Deployment to support pipelines across private cloud and on-premises environments. Fivetran also expanded its Managed Data Lake Service to support Amazon S3, Azure Data Lake Storage, Microsoft OneLake and Fabric, and Google Cloud Storage. The service integrates with all major data catalogs and supports open table formats like Iceberg, helping enterprises build governed, AI-ready data lakes at scale.

Google has added AI-based ransomware detection and recovery features to Google Drive for Desktop. The software identifies attempts to encrypt many files at once, stopping file syncing to the Google cloud after detecting as few as three to five modified files. Files uploaded to Google Drive are already scanned for malware. Users get “ransomware detected” alerts and can restore affected files using the web UI. The feature is in open beta.
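Google hasn’t published the detector’s internals, but the behavior described above – flag bulk modifications whose contents look encrypted, then pause syncing – can be sketched generically. The thresholds, entropy cutoff, and pause_sync callback below are illustrative assumptions, not Google’s implementation:

```python
import math
import time
from collections import Counter, deque

WINDOW_SECONDS = 60   # assumed detection window
TRIGGER_COUNT = 4     # pause syncing after a handful of suspect writes
ENTROPY_CUTOFF = 7.5  # bits per byte; encrypted data approaches 8.0

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; high values suggest encrypted content."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

recent_suspect_writes = deque()

def on_file_modified(new_contents: bytes, pause_sync) -> None:
    """Called for each locally modified file before it syncs to the cloud."""
    if not new_contents or shannon_entropy(new_contents) < ENTROPY_CUTOFF:
        return  # plausibly a normal edit
    now = time.time()
    recent_suspect_writes.append(now)
    while recent_suspect_writes and now - recent_suspect_writes[0] > WINDOW_SECONDS:
        recent_suspect_writes.popleft()
    if len(recent_suspect_writes) >= TRIGGER_COUNT:
        pause_sync()  # halt uploads so clean cloud copies survive for restore
```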

HighPoint Technologies has introduced the Rocket 7638D, a PCIe Gen 5 switch adapter designed with a GPUDirect Storage (GDS) hardware architecture. The adapter eliminates the traditional CPU bottleneck by enabling Nvidia GPUs to directly access massive datasets, ensuring maximum GPU utilization for AI, ML, HPC, and scientific workflows. It is the first 48-lane Gen 5 PCIe switch adapter engineered with a dedicated x16 Gen 5 pathway for both an external GPU and NVMe storage from a single slot. This architecture creates a direct, peer-to-peer data channel that bypasses the host CPU entirely. HighPoint’s hardware architecture provides a direct data path that, when paired with compatible third-party software, enables a full GDS stack.

HighPoint Rocket 7638D diagram

With 48 PCIe Gen 5 lanes, dedicated x16 pathways for a Gen 5 GPU and NVMe Storage (up to 16 drives and nearly 2 petabytes), the Rocket 7638D delivers unprecedented bandwidth, capacity and performance. It simplifies deployment with native OS support and a suite of monitoring utilities, across Intel, AMD, and ARM computing platforms. Read more here.
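From an application’s perspective, a GPUDirect-style transfer is a file read that lands directly in GPU memory rather than staging through a host bounce buffer. A minimal sketch using NVIDIA’s kvikio Python bindings for cuFile – the path and size are placeholders, and whether the transfer actually goes peer-to-peer depends on a GDS-capable software stack and hardware such as the Rocket 7638D:

```python
import cupy
import kvikio

SIZE = 1 << 30  # 1 GiB placeholder
gpu_buffer = cupy.empty(SIZE, dtype=cupy.uint8)

# Read the file straight into GPU memory. With a GDS-capable stack the bytes
# move NVMe -> GPU peer-to-peer instead of via a CPU bounce buffer; without
# one, kvikio transparently falls back to a host-staged copy.
with kvikio.CuFile("/mnt/nvme/dataset.bin", "r") as f:
    bytes_read = f.read(gpu_buffer)
print(f"{bytes_read} bytes now resident in GPU memory")
```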

HPE Alletra Storage MP B10000 has been fully integrated with AWS Outposts. This means customers can attach data volumes backed by MP B10000 storage to Amazon EC2 instances on AWS Outposts right through the AWS Management Console. In addition, customers can boot EC2 instances on Outposts directly from MP B10000 storage. Learn more here.

Streaming data lake supplier Hydrolix has purchased intellectual property from Quesma to continue supporting Kibana interoperability. Hydrolix had previously partnered with Quesma through a reseller agreement to bundle its Kibana gateway proxy into the Hydrolix platform. Recently, Quesma notified Hydrolix of its intent to cease support for the product, and Hydrolix worked with Quesma to purchase the technology to ensure uninterrupted support for customers who depend on the integration.

Kioxia and Sandisk announced the start of operations at Fab2 (K2), a state-of-the-art semiconductor fabrication facility at the Kitakami Plant in Iwate Prefecture, Japan. Fab2 can produce eighth-generation, 218-layer 3D flash memory featuring the companies’ CMOS directly bonded to array (CBA) technology, as well as future advanced 3D flash memory nodes, to meet growing AI-driven demand for storage. Production capacity at Fab2 will ramp up in stages, in line with market trends, with meaningful output expected to begin in the first half of 2026.

Kioxia and Sandisk’s Fab2

Observability business New Relic released its 2025 Observability Forecast report, showing that high-impact outages carry a median cost of $2 million per hour, or approximately $33,333 for every minute systems remain down. For the UK and Ireland specifically, the median cost of high-impact IT outages for organizations surveyed in those countries is $38 million per year. Respondents across the UK and Ireland see tool sprawl as a significant challenge, with a third (33 percent) of organizations citing too many monitoring tools and siloed data as a barrier to achieving full-stack observability, compared to the regional EMEA average of 27 percent. Access the report here.

Nutanix has announced the availability of its Nutanix Cloud Clusters (NC2) software on OVHcloud in the EMEA region. It provides customers the opportunity to build hybrid and multicloud deployments that support European data-sovereignty objectives. More information can be found here.

Veeam backup storage target device supplier Object First has released survey data that uncovers a growing crisis in the IT and cybersecurity community: 84 percent of IT professionals report feeling uncomfortably stressed at work due to rising cyber threats – and nearly 60 percent are considering leaving their jobs because of it. The survey of 500 US IT and security professionals reveals the emotional toll of being the “last line of defense” against cyberattacks:

  • 47 percent feel pressure from leadership to “fix everything” after a breach
  • 18 percent feel “hopeless and overwhelmed” during or after an incident
  • 59 percent have thought about quitting due to stress
  • 74 percent say recovery tools are too complex, fueling further burnout

To help address this, Object First partnered with Cybermindz, a San Francisco–based nonprofit dedicated to mental resilience in cybersecurity, to release resources including stress-reduction tools and a practical recovery protocol for IT professionals. Object First and Cybermindz have released a short video about this.

Personal Media Corporation announced its PC data erasure software “DiskShredder ToGo” has achieved ADISA Product Assurance Certification to NIST SP 800-88 following testing by ADISA Research Centre. “DiskShredder ToGo” is a data erasure software that can be used an unlimited number of times within 90 days of purchase, with no restrictions on the number of devices that can be erased. The product has been available through Personal Media Corporation’s online store since April 2025, responding to the surge in erasure demand driven by the planned end of Windows 10 support in October 2025. It’s the first Japanese data erasure software to receive certification from ADISA Research Centre.

Pua Khein-Seng

The CEO of SSD controller and drive supplier Phison, Pua Khein-Seng, inventor of the USB flash drive, said in an interview reported by the Taiwanese CommonWealth Magazine that: “NAND will face severe shortages next year. I think supply will be tight for the next ten years … In the past, every time flash makers invested more, prices collapsed, and they never recouped their investments. So companies slowed spending starting around 2019-2020. Then in 2023, Micron and SK hynix redirected huge capex into HBM because the margins were so attractive, leaving even less investment for flash.”

Multibillion-dollar AI datacenter build-outs will need data storage as well as GPUs, and he thinks SSDs will take an increasing share of it. “In 2020, the SSD-to-HDD ratio in datacenters was in the single digits versus more than 90 percent. Today, it’s about 20 to 80 percent. Looking ahead, SSDs will account for 80 to 100 percent. The real question is: how much new capacity will be needed to support that transition? That’s why I say flash will remain strong for the next ten years.”

Mainframe-to-cloud connectivity supplier Precisely enhanced its Data Integrity Suite with AI-driven features that enable organizations to operationalize high-quality, AI-ready data faster and better. There are natural language interfaces, AI-powered rule and description generation, AI-generated sample data, and LLM integration in pipelines. Learn more here.

Precisely has updated its EnterWorks software, integrating master data management (MDM) with the Data Governance service in the Precisely Data Integrity Suite. The release enables organizations to link master data to policies, goals, and metrics – ensuring greater visibility, accountability, and compliance across the business. There is also a modernized user interface with visual enhancements across the dashboard, repository lists, and record detail views. Learn more here.

Quobyte, which supplies a distributed parallel file system, says the University of California, Davis, has moved from an NFS-based NAS system to its software for HPC storage, after evaluating it alongside BeeGFS and Ceph. Data reads continue during hardware component repairs; if a node or rack is lost, the cluster continues to operate. No maintenance windows are needed for hardware or software upgrades, giving 100 percent availability, 24/7/365, for optimal resource utilization. Common hardware failures, such as failed drives, are handled automatically and have become non-events. Migration was straightforward: the team used Quobyte’s qcopy tool to streamline the initial data synchronization, minimizing manual intervention and ensuring the bulk of the data transfer was handled in a highly parallelized, fast, and metadata-preserving manner, including extended attributes and access controls.

Samsung is developing CXL 3.1 and PCIe 6.0 CMM-D products, which should be available next year. A PM1763 Gen 6 SSD will arrive in early 2026, with twice the performance, better energy efficiency, and 25 W power draw, compared to its (PCIe 5) predecessor. It will have an EDSFF form factor and 256 TB capacity, with a 512 TB version in 2027.

Storage test house SANBlaze announced its OCP-compliant 2.6 test suite as an addition to its NVMe qualification platform. The Certified by SANBlaze test suite within its latest version 11.0 Build 7 software now supports all aspects of NVMe qualification with the addition of the industry accepted Open Compute Project (OCP) 2.6, with ongoing work for OCP 2.7. Other capabilities in the Certified by SANBlaze test suite include FDP (Flexible Data Placement), PCIe Single Root I/O Virtualization (SR-IOV), ZNS, NVMe-MI, TCG, SRIS/SRNS clocking, and T10/DIF. The SANBlaze OCP 2.6 package is now available for distribution. Contact sales at sales@sanblaze.com.

Object storage supplier Scality is working with Inria, the French national research institute for digital science and technology, on a Cyberté project “developing data storage solutions that are secure, reliable, sovereign, and sustainable, by integrating artificial intelligence into the core of software infrastructures.” Research conducted by Inria’s project teams will allow Scality to integrate the following into its software: 

  • AI predictive models to anticipate hardware and software failures.
  • Real-time anomaly detection algorithms to identify early signs of attacks such as ransomware or data exfiltration, even in encrypted traffic (illustrated in the sketch after this list).
  • Frugal AI approaches and methodologies to reduce the energy footprint by using software-defined power meters (PowerAPI initiative).
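As an illustration of the anomaly-detection idea – not Scality or Inria code – a generic unsupervised detector can be trained on normal per-minute object storage telemetry and then flag ransomware-like bursts of overwrites and deletes. The feature choices below are assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Assumed per-minute telemetry: [PUT ops, DELETE ops, bytes written, objects touched]
normal_traffic = rng.normal(loc=[200, 5, 5e8, 150],
                            scale=[40, 2, 1e8, 30],
                            size=(10_000, 4))
detector = IsolationForest(contamination=0.001, random_state=0).fit(normal_traffic)

# A ransomware-like burst: mass overwrites and deletions in a single minute
burst = np.array([[5_000, 800, 4e10, 4_800]])
print(detector.predict(burst))  # [-1] flags the burst as anomalous
```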

More information here.

LaCie Rugged SSD4

HDD supplier Seagate’s LaCie unit has a new Rugged SSD4 drive “for creators who need durable storage and drive performance, on the go.” The drive supports filmmakers, photographers, and audio professionals working outside the studio, with:

  • Up to 4,000 MB/s read and 3,800 MB/s write speeds.
  • USB 40 Gbps port allows quick file transfers across Mac, iPad, PC, and mobile devices.
  • Durable casing: IP54-rated for dust and water resistance and drop-tested to three meters. 
  • Signature LaCie colour: Designed with the classic orange bumper, providing renowned durability, everywhere.

It is priced at $119.99 for 1 TB, $214.99 for 2 TB, and $399.99 for 4 TB. In the UK it’s available for £139.99 (1 TB), £249.99 (2 TB), and £449.99 (4 TB) through Seagate’s website.

SK hynix has assembled the industry’s first High NA EUV (High Numerical Aperture Extreme Ultraviolet) lithography system for mass production, an ASML TWINSCAN EXE:5200B, at its M16 fabrication plant in Icheon, South Korea. This will enable the company to design and build chips with smaller feature sizes: compared with existing EUV systems, the NA rises from 0.33 to 0.55, enabling printed transistors 1.7 times smaller and transistor densities 2.9 times higher. SK hynix has been expanding the scope of EUV adoption for production of its most advanced DRAM since first introducing the technology in 2021 for the 1a nm node, the fourth generation of its 10 nm-class technology.
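Those two figures follow from the Rayleigh resolution criterion as a back-of-the-envelope check, assuming the process factor k₁ and the 13.5 nm EUV wavelength stay fixed:

```latex
\mathrm{CD} = k_1\,\frac{\lambda}{\mathrm{NA}}
\quad\Rightarrow\quad
\frac{\mathrm{CD}_{\mathrm{NA}=0.33}}{\mathrm{CD}_{\mathrm{NA}=0.55}}
  = \frac{0.55}{0.33} \approx 1.7
\qquad\text{and}\qquad
1.7^2 \approx 2.9 \ \text{(areal density gain)}
```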

Matt Bryson at Wedbush says SK hynix intends to add around 20 EUV systems by 2027, on top of the roughly 20 units it already has, including some used for R&D. With each system costing hundreds of millions of dollars, the total investment is expected to exceed $4 billion. Bryson believes much of this capacity will likely be focused on HBM, limiting additional wafer starts in standard DRAM.

Cloud data warehouser Snowflake announced Cortex AI for Financial Services, with the following features:

  • Complex Machine Learning Workflows: Snowflake Data Science Agent acts as an AI coding agent, automating data cleaning, feature engineering, model prototyping, and validation so teams can move from raw data to production-ready models faster. This means automating and streamlining models that underpin quantitative research, fraud detection, customer 360, and underwriting workflows.
  • Analysis of Unstructured Data: With Snowflake Cortex AISQL adding functions like AI-powered extraction and transcription, businesses can harness unstructured data by efficiently processing and deriving insights from documents, audio, and images at scale – transforming end-to-end workflows like customer service, investment analytics, claims management, and next-best action (see the sketch after this list).
  • Easy Access to Flexible Insights: Snowflake Intelligence (in public preview) offers business users an intuitive conversational interface to gain insights using natural language from data stored in Snowflake, as well as third-party data, apps, and agents – allowing users to quickly uncover actionable insights from both structured tables and unstructured documents. This democratizes access to data and insights across financial institutions, and eliminates the technical overhead that slows down business decision-making.
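Calling an LLM from SQL looks roughly like the following from Python. The table and prompt are hypothetical; SNOWFLAKE.CORTEX.COMPLETE is an existing Cortex function used here as a stand-in for the newer AISQL extraction and transcription functions, whose exact names may differ in this release:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="...",  # placeholders
    warehouse="ANALYTICS_WH", database="CLAIMS_DB", schema="PUBLIC",
)
cur = conn.cursor()
# Run an LLM over unstructured claim notes without the data leaving Snowflake
cur.execute("""
    SELECT claim_id,
           SNOWFLAKE.CORTEX.COMPLETE(
               'mistral-large2',
               'Summarize this insurance claim in one sentence: ' || claim_notes
           ) AS summary
    FROM claim_documents
    LIMIT 10
""")
for claim_id, summary in cur:
    print(claim_id, summary)
```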

Snowflake also announced a managed Model Context Protocol (MCP) Server, now in public preview.

Parallel file system supplier VDURA’s V5000 Data Platform is being used by a Tier-1 US federal system integrator to support one of the world’s largest defense GPU deployments. It accelerates AI and Computational Fluid Dynamics (CFD) workloads by combining flash-class performance, unified tiering across NVMe and HDD, and built-in end-to-end encryption. Phase 1 of the deployment includes 20 PB of high-performance storage, achieving sustained transfer rates above 800 GB/s, with future scalability to 200 PB and 2.5 TB/s. The system reduces cost per terabyte by over 60 percent compared to all-flash alternatives, improves energy efficiency by 44 percent, and lowers operational overhead to just half an FTE (full-time equivalent).

Comparing EMEA data from its 2024 and 2025 Ransomware Trends Reports, Veeam discovered that: 

  • Paying doesn’t always pay off: In 2023, more than half (54 percent) of EMEA organizations that paid a ransom successfully recovered their data. In 2024, this fell sharply to just 32 percent.
  • More organizations are recovering data without paying: 14 percent in 2023 vs 30 percent in 2024.
  • Organizations still aren’t prepared: Despite a 22 percent year-on-year drop in ransom payments, 63 percent of organizations would still be unable to recover from a site-wide crisis due to limited infrastructure plans.
  • Only 37 percent had arrangements in place for alternative infrastructure in 2024.

DigiTimes reports HDD supplier Western Digital plans to invest $1 billion in Japan between now and 2031 to strengthen next-generation HDD technology (HAMR) and production processes. Its report says “The investment comes amid surging demand for datacenter storage driven by widespread adoption of generative artificial intelligence (AI), with WD’s cloud business unit accounting for up to 90 percent of revenue in the second quarter of 2025.”

Clumio, MinIO expand Apache Iceberg protections and support

Commvault’s AWS cloud-app-protecting subsidiary Clumio is protecting Iceberg data in AWS while MinIO is adding Iceberg table support to its AIStor object storage software.

Open source Apache Iceberg is a data lakehouse software layer that sits above file formats like Parquet, ORC, and Avro, and cloud object stores such as AWS S3, Azure Blob, and Google Cloud Storage. It is an open table format for large-scale analytics, providing ACID transactions, schema versioning, and time travel. Data in its tables is queried by Apache Flink, Presto, Spark, Trino, and other analytics engines. Clumio is a Commvault-acquired business that protects customer data in AWS.
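For a concrete sense of the snapshot and time-travel machinery that an Iceberg-aware backup tool builds on, here is a minimal PySpark sketch. It assumes the iceberg-spark-runtime package is on the Spark classpath, and the warehouse path is a placeholder:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3a://my-bucket/warehouse")
    .getOrCreate()
)

spark.sql("CREATE TABLE IF NOT EXISTS demo.db.trades (id BIGINT, px DOUBLE) USING iceberg")
spark.sql("INSERT INTO demo.db.trades VALUES (1, 101.5)")

# Every commit creates a snapshot; read the table as of an earlier snapshot
first_snapshot = spark.sql(
    "SELECT snapshot_id FROM demo.db.trades.snapshots ORDER BY committed_at"
).first()[0]
spark.read.option("snapshot-id", first_snapshot) \
    .format("iceberg").load("demo.db.trades").show()
```

A backup product that understands these structures can capture a transactionally consistent table state rather than just copying the underlying files.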

Woon Jung

Commvault’s Woon Jung, CTO Cloud Native, stated: “With Clumio for Apache Iceberg, we are providing the industry’s first true safety net for the data lakehouse. For the first time, organisations can protect their AI and analytics data with an air-gapped, automated solution, allowing them to accelerate innovation with confidence.” 

Clumio for Apache Iceberg on AWS provides fast, reliable, virtually air-gapped, and transactionally consistent backups that intrinsically understand Iceberg table structures and capture a complete state of the data in them. It safeguards backups in an isolated environment, protecting them against ransomware, account compromise, and accidental or malicious deletions. Compliance and governance requirements are supported, and Commvault claims this is the data protection industry’s only offering to protect such data.

We’re told native Iceberg snapshots are typically tied to the source account, lack an air-gapped copy, and are not designed for large-scale, point-in-time recovery, making them vulnerable to data loss, ransomware attacks, and compliance risks. Restoring from non-Iceberg-aware backups often requires complex manual processes to rewire and reconfigure tables, leading to extended downtime and a high risk of data inconsistency. 

Clumio Iceberg protection status

Clumio for Apache Iceberg marks the latest step, Commvault says, in its strategy to deliver comprehensive resilience for the entire AWS data pipeline. It already protects S3, DynamoDB, RDS/Aurora, and EC2/EBS.

Learn more about Clumio for Apache Iceberg on AWS here. Clumio for Apache Iceberg is now generally available in the AWS Marketplace, supporting both self-managed tables via the AWS Glue Data Catalog and fully managed tables via Amazon S3 Tables. 

MinIO

MinIO has added native Apache Iceberg support to its AIStor offering via a new Tables feature that integrates the Iceberg Catalog API directly into AIStor. Iceberg has traditionally been considered primarily a tool for analytics on structured data. With AIStor’s native Iceberg implementation, enterprises can now unify all their structured and unstructured data – tables, transactions, images, audio, and more – into a single, coherent fabric. This strengthens AI workloads from analytics to agentic AI, because AI can now act on all enterprise knowledge, not just a subset.
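Because AIStor exposes the standard Iceberg REST catalog API, any REST-capable Iceberg client should be able to point at it. A hedged pyiceberg sketch, with a hypothetical endpoint, table name, and credentials:

```python
from pyiceberg.catalog import load_catalog

# Hypothetical AIStor endpoint and credentials; any Iceberg REST catalog
# client should interact with it the same way
catalog = load_catalog(
    "aistor",
    **{
        "type": "rest",
        "uri": "https://aistor.example.com/iceberg",
        "s3.endpoint": "https://aistor.example.com",
        "s3.access-key-id": "minioadmin",
        "s3.secret-access-key": "minioadmin",
    },
)

table = catalog.load_table("analytics.events")
print(table.schema())
print(table.scan(limit=5).to_arrow())  # read a few rows back as Arrow
```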

AB Periasamy

AB Periasamy, MinIO co-founder and co-CEO, said: “Iceberg is the clear standard for enterprise AI data. The challenge is that most on-prem implementations make it harder than it needs to be, requiring separate catalog databases and extra layers of infrastructure that add cost and operational risk. By building Iceberg directly into AIStor, we take away that complexity and give enterprises a simple, scalable foundation for AI. This not only lowers costs and speeds progress, but also ensures AI can reach its full potential because all data is AI data.”

AIStor Tables works out of the box with existing tools and query engines including Spark, Trino, Dremio, and Starburst, protecting past investments. As the only on-premises object store with the Apache Iceberg REST catalog built in, AIStor unifies tabular and object data to power agentic AI and lakehouse analytics. Just as MinIO brought the Amazon S3 experience to every datacenter, it is now doing the same for Iceberg with AIStor Tables – making the enterprise AI standard available at scale, on-premises. AIStor Tables is available immediately in tech preview.

Comment

Both MinIO and Cloudian are making their object storage offerings AI-aware and relevant. They recognize that unstructured data in their systems will be used by AI models and agents, requiring vector support and easier access. Cloudian also supports Iceberg tables, storing them as files in S3 buckets.

Greek satellite outfit Planetek picks Cubbit for sovereign storage

A Greek geospatial data company is storing its data on Cubbit’s decentralized storage cloud for cost and sovereignty reasons.

Cubbit’s decentralized storage is a network of individual private organizations’ datacenters with spare storage capacity, managed through its DS3 Composer software. These sites, or nodes, provide S3-compatible storage, with data split into fragments, encrypted, encoded for resilience, and spread across the nodes.
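Cubbit’s exact scheme is proprietary, but the fragment-and-encode idea is standard erasure coding: add parity so the original data survives the loss or corruption of some fragments. A generic illustration with the reedsolo Python library, with parameters that are illustrative only:

```python
from reedsolo import RSCodec

rsc = RSCodec(64)  # 64 parity bytes per block: corrects up to 32 corrupted bytes
original = b"geospatial tile payload " * 7  # 168 data bytes, fits one block

encoded = rsc.encode(original)  # data + parity, fragments spread across nodes

# Simulate corruption or loss on some of the nodes holding this block
damaged = bytearray(encoded)
for i in range(0, 50, 2):  # corrupt 25 bytes
    damaged[i] ^= 0xFF

recovered = rsc.decode(bytes(damaged))[0]
assert bytes(recovered) == original  # data intact despite the damage
```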

Planetek Hellas is a satellite Earth observation company. It obtains geospatial data from Earth-orbiting satellites, with its own on-satellite software. It makes this data available, via its Rheticus analytics facility, for environmental and infrastructure monitoring, urban planning, civil protection and security. Along with Planetek Italia, it is part of the D-Orbit group, which specializes in space logistics.

Sergio Samarelli

Planetek needed to back up and archive its geospatial data, and keep it inside Greece. It has ordered 3.5 PB of storage from Cubbit, with planned expansion to 5 PB in the future. Planetek CTO Sergio Samarelli said: “With Cubbit, we’ve built a solid and flexible infrastructure that enables us to meet the highest security requirements in the strategic field of Earth Observation. The ability to maintain full control of the data within our technological perimeter, and the sovereignty guarantee offered to our customers, gives us a key strategic advantage.”

The company has contracts with the European Space Agency and participates in many European and national research projects. All this means that, as EU data sovereignty requirements get enforced, it is well placed to continue and expand these activities.

Why doesn’t Planetek run its own in-house object storage or use a mainstream public cloud such as AWS? It says it has evaluated various traditional on-prem object storage systems over the years, but they would have required significant IT resources. As for the mainstream public cloud alternative, it “was not an option due to concerns over data sovereignty, regulatory compliance, and economic constraints.” The basic reasons it chose the decentralized Cubbit alternative were cost and data sovereignty.

Alessandro Cillario

Alessandro Cillario, co-CEO and co-founder of Cubbit, said: “This is the first in a series of projects in the space industry that we are working on, and which we hope to announce in the coming months. In this sector, the need to securely store and share data is directly linked to stringent requirements for data sovereignty and control over one’s technological infrastructure.”

The best control over your technological infrastructure comes from owning and operating it yourself – but, as Planetek found, that comes at a cost. While decentralized storage can be cheaper than AWS or Azure and can offer stronger data-sovereignty assurances, it still provides less control than running your own datacenters.

This is a good win for Cubbit, leaving it well placed to capture more business from customers in the EU with data sovereignty requirements and facing cost pressures.

Storage news ticker – October 1

BestBrokers has collated data from Pitchbook, Crunchbase and more, and provided a table of the biggest funding rounds in AI in September 2025:

  1. Anthropic – $13 billion, $183 billion valuation
  2. Databricks – $1 billion, $100 billion valuation
  3. Groq – $750 million, $6.9 billion valuation
  4. Cognition – $400 million, $10.2 billion valuation
  5. Sierra – $350 million, $10 billion valuation
  6. Modular – $250 million, $1.6 billion valuation
  7. Lila Sciences – $235 million, $1.23 billion valuation
  8. Perplexity – $200 million, $20 billion valuation
  9. Baseten – $150 million, $2.15 billion valuation
  10. Upscale AI – $100 million, unknown valuation
  11. Invisible Technologies – $100 million, $2 billion valuation
  12. You.com – $100 million, $1.5 billion valuation

It also has a graphic showing what it believes to be the largest AI funding rounds in the whole of 2025 so far:

Databricks launched Data Intelligence for Cybersecurity to help organizations defend against modern and AI-driven threats with more accuracy, stronger governance and greater flexibility. Data Intelligence for Cybersecurity seamlessly integrates with enterprises’ existing security stacks, unifying all data and leveraging an open partner ecosystem so security teams can fully harness the power of AI – spotting risks earlier, understanding the full context of an attack and responding with greater speed. Building on this foundation, Databricks Agent Bricks enables enterprises to build AI apps and agents that not only accurately analyze their data but also take safely governed actions across every step of the security workflow. With this release, Databricks is also introducing partner integrations with leading providers, including Abnormal AI, Accenture Federal, ActiveFence, Alpha Level, Arctic Wolf, BigID, DataBahn, DataNimbus, Deloitte, Entrada, Obsidian Security, Panther, PointGuard AI, Rearc, SPLX, Theom, Varonis, and ziggiz, extending the power of Databricks and helping customers drive unified, measurable outcomes in their cybersecurity defense strategies.

A blog by Dell’s Drew Schulke, VP of Product Management, explains Dell’s views on QLC flash. It says “Done correctly, QLC delivers equivalent performance to TLC while meeting the high bar of reliability primary storage customers demand. With >20 percent of PowerStore’s shipped capacity leveraging QLC, and the introduction of QLC on our mission-critical PowerMax platform we’ve shattered the legacy narrative, and our competition has some explaining to do.”

“The truth on QLC performance is that it can deliver sub-millisecond latency – the kind of performance expected for demanding transactional workloads. Why some vendors choose to deliver QLC with 2-4 millisecond response times is a question for them to answer.”

“Next month you will see new QLC-based offerings on both our PowerStore and PowerMax product lines. In both we will make available to a larger number of customers the TCO benefits of QLC with the help of our software-based intellectual property.” 

GoodData introduced a new AI platform combining AI Lake, AI Hub, and AI Apps into a comprehensive data intelligence system. AI Lake processes structured and unstructured data into a self-learning layer for accurate AI outputs. AI Hub provides tools for workflow management and compliance. AI Apps integrate secure AI agents into business processes. The platform offers governance, scalability, and flexible integration with open APIs and SDKs, allowing custom AI solutions. GoodData aims to help enterprises turn data into actionable insights, moving beyond traditional analytics to support AI-driven business applications with transparency and control.

Fortanix (data security for an AI world) and BigID (data security, privacy, compliance, and AI governance) announced an integration between Fortanix Data Security Manager (DSM) and BigID’s data discovery and classification capabilities to automatically trigger protection actions whenever sensitive data is classified. This streamlined workflow eliminates manual intervention while maintaining complete audit trails of all data protection activities.

SaaS data protector HYCU announced the findings of the newly released HYCU State of SaaS Resilience Report 2025, an independent global survey of 500 IT business decision-makers. According to survey respondents, organizations now use an average of 139 SaaS applications, with many deploying well over 200. The introduction of every new app creates new data, new permissions, and new vulnerabilities. As SaaS portfolios grow, so does the exposure.

  • 65 percent of organizations were hit by a SaaS-related breach in the last 12 months, with an average daily cost of SaaS downtime of $405,770 – adding up to $2.3 million over a five-day recovery period.
  • 87 percent admit they have at least one SaaS application at risk due to inadequate protection.
  • Only 56 percent of SaaS applications are under the control of IT.
  • 43 percent of respondents admit no one truly owns SaaS data resilience.
  • 70 percent do not perform policy-driven backups for some apps.
  • 74 percent do not have offsite data retention.
  • 75 percent do not test resilience regularly.

This leaves the vast majority underprepared when cyber threats hit, integrations break, minor disruptions occur, or data is deleted – accidentally or maliciously. Download the report here.

Index Engines will demonstrate ransomware-detecting CyberSense at IBM TechXchange 2025, October 6-9 in Orlando. It integrates with IBM FlashSystems’ immutable snapshots, using AI forensic analysis to detect ransomware corruption and support reliable recovery. It validates snapshots, reduces reinfection risks, and shortens restoration times. Sessions cover:

  • October 7, 10:30-11:30 AM ET: Validated immutable storage recovery.
  • October 7, 4:00-4:20 PM ET: FlashSystem integration and automation.
  • October 8, 3:00-3:20 PM ET: 99.99 percent SLA for healthcare systems like EPIC.

InfluxData has released InfluxDB 3.5, available on both its Core and Enterprise products, and introducing:

  • Explorer dashboards (beta): Designed for visualizing, querying, and managing data stored in InfluxDB 3 Core and Enterprise, the InfluxDB 3 Explorer UI now supports Dashboards, giving users the ability to save and revisit preferred queries in one place.
  • Cache querying: Users can now perform ad hoc queries against InfluxDB 3’s built-in Last Value and Distinct Value Caches from Explorer to support exploratory analysis and everyday workflows.
  • Ops upgrades: New features to simplify cluster management and improve oversight, as well as general improvements for production deployments.

This release fuels more powerful, controlled workspaces for time series monitoring and analysis. 

Copenhagen-based SaaS data protector Keepit has secured $60 million in upsized and refinanced credit facilities from the Export and Investment Fund of Denmark (EIFO) and HSBC Innovation Banking. This includes $20 million in new funding and $40 million in refinanced existing facilities, building on a $50 million funding round from December 2024. The funds will support Keepit’s growth, product development, and expansion into larger client markets. Keepit provides secure, vendor-neutral cloud storage to ensure data access, compliance, and recovery for over 18,000 global customers.

Kioxia and Sandisk announced the operational start of Fab2 at the Kitakami Plant in Iwate, Japan, on September 30, 2025. The facility produces eighth-generation, 218-layer 3D flash memory using CMOS directly bonded to array (CBA) technology to meet AI-driven storage demand. Production will scale up gradually, with significant output expected by mid-2026. Fab2 features an earthquake-resistant design, energy-efficient equipment, and AI-enhanced production processes. Part of the investment is subsidized by the Japanese government. The facility strengthens the 20-year Kioxia-Sandisk partnership, focusing on advanced 3D flash memory development to support applications like smartphones, data centers, and AI systems.

UK-based hyperscaler AI data center builder Nscale has closed a $433 million Pre-Series C SAFE, reinforcing commercial momentum and significant investor support following its recent $1.1 billion Series B. Backed by key investors such as Blue Owl Managed Funds, Dell, Nvidia and Nokia, along with other existing Series B and new investors, the funding underscores confidence in Nscale’s execution and growth. The close of the Pre-Series C SAFE comes only days after Nscale announced the largest Series B funding round in European history, raising $1.1 billion. “We’re overwhelmed by the interest we’ve received. It’s incredible to see the passion and confidence we have in Nscale is matched by key investors,” said Josh Payne, CEO and founder of Nscale. “This commitment to participating in our pre-Series C SAFE, just days after the close of our Series B funding, represents a powerful endorsement of our vision to deliver sovereign, scalable infrastructure for the AI era.”

Netlist has initiated new legal proceedings before the United States International Trade Commission (ITC) seeking exclusion and cease and desist orders against Samsung, Google, and Supermicro. The ITC investigation is based on the infringement of six Netlist patents. C.K. Hong, Netlist’s CEO, said: “Since its founding in 2000, Netlist has pioneered innovations in advanced memory technologies. With this action, Netlist is seeking remedial orders that direct US Customs and Border Protection to stop Samsung memory products that infringe on Netlist’s intellectual property from entering the country.”

Netlist asked the ITC to investigate whether Samsung, Google, and Supermicro infringe US Patent Nos. 12,737,366, 10,025,731, 10,268,608, 10,217,523, 9,824,035, and 12,308,087. Each of these patents reads on one or more of the following products: DDR5 memory modules, e.g. DDR5 RDIMM, UDIMM, SODIMM, and MRDIMM, and high-bandwidth memory (HBM).

Other World Computing (OWC) released SoftRAID 8.6, compatible with macOS 26. The update introduces SMART over USB support, allowing health monitoring of most USB and Thunderbolt-connected drives to detect potential failures early. It supports RAID 0, 1, 4, 5, and 1+0 configurations for flexible storage setups and enables sharing of Apple-formatted APFS and HFS+ volumes between macOS and Windows. The free Standard version offers basic data access and compatibility updates, while the Premium version adds advanced RAID options and health monitoring with alerts. SoftRAID 8.6 is available for download, with automatic updates for existing users.

Quantum announced its Scalar i7 RAPTOR tape library has won a Best of Show Award from TV Tech at IBC 2025 in Amsterdam. It was selected for delivering breakthrough advancements in archive scalability, density, and cyber resilience, making it a preferred solution for large-scale media archives. RAPTOR provides up to 2,016 slots and more than 60 PB of native storage with LTO-10 media offering unparalleled scalability. By delivering up to 200 percent more storage density than competing systems, i7 RAPTOR significantly reduces power, cooling, and floor space requirements, enabling up to 70 percent lower total cost of ownership. This marks the third Best of Show recognition for Scalar i7 RAPTOR in 2025. Earlier this year at NAB, the system received Best of Show honors from both TV Tech and TVBEurope.

Reuters reports Samsung Electronics and SK hynix have signed letters of intent to supply memory chips for OpenAI’s Stargate data centers. South Korea presidential adviser Kim Yong-beom said OpenAI was seeking to order 900,000 semiconductor wafers in 2029, and planned to set up joint ventures with Samsung and SK hynix to build two data centers in South Korea with an initial capacity of 20 megawatts. Samsung Electronics’ affiliate Samsung SDS signed a partnership with OpenAI to develop, build and operate AI data centers under the Stargate project, while also expanding enterprise AI services. Shipbuilder Samsung Heavy Industries and construction unit Samsung C&T will jointly work with OpenAI to develop floating offshore data centers to cut cooling costs and carbon emissions.

Sandisk announced licensed storage for Asus ROG Xbox Ally (X) handhelds at Tokyo Game Show: MicroSD cards in 512 GB, 1 TB, and 2 TB capacities with up to 200 MB/s reads via QuickFlow tech (limited to 104 MB/s in standard readers), and WD_BLACK SN7100X NVMe SSDs in 2 TB and 4 TB for internal upgrades. The SSD, akin to the existing SN7100, is DRAM-less and power-efficient for PCIe 4.0. MicroSDs are pre-orderable; SSDs arrive later. 

Solidigm has a new AI Central Lab located near its HQ in Rancho Cordova, CA, with high-performing, storage-dense clusters to handle AI workloads in an environment similar to global large-scale data centers. Hardware highlights include:

  • Highest-performing storage test cluster with D7-PS1010 SSD, which achieved the highest-ever per-node throughput measured in the MLPerf Storage (AI model training) test at 116 GB/s. This can be flexibly scaled to multiple nodes.
  • Most dense storage test cluster with 192 x D5-P5336 122 TB SSDs packing 23.6 PB into 16U.
  • Nvidia B200 and H200 GPUs, 800 Gbps Ethernet networking, and storage servers from leading vendors.

Workloads available for testing at the lab include:

  • AI-specific and emerging workloads such as GPU-intensive AI model training and inference, KV cache offload and VectorDB tuning.
  • Benchmarking power consumption of different configurations.
  • Feeding data to GPUs, keeping them as busy as possible.
  • Helping translate SSD specifications into system-level, industry-relevant AI efficiency metrics such as tokens per dollar and tokens per watt.

America’s Test Kitchen, a longstanding StorONE enterprise customer, is benefiting from StorONE’s storage software, SSDs, and WD’s Ultrastar disk drives, and praises the unmatched stability and simplicity the system delivers, maximizing performance and longevity by moving data seamlessly between flash and HDD tiers. Dustin Brandt, Director of IT at America’s Test Kitchen, says: “The system runs flawlessly – we simply don’t need to worry about storage. StorONE’s platform is incredibly stable and simple to operate, and Western Digital’s Ultrastar HDDs have been rock-solid. This frees up our time to focus on what really matters to our business.”

DDN subsidiary Tintri announced GA of its VMstore T7000 hardware system for hypervisor, database and container ecosystems, bringing new features to enable enterprises to deploy virtualized environments, alongside the latest version of Tintri’s TxOS operating system, available for all Tintri VMstore solutions. The VMstore T7290 expands Tintri’s T7000 portfolio, featuring 40 percent better performance and support for 30 TB drives that double system capacity over past generations. The T7290 scales up to 2.58 PB in a single system and over 165 PB in scale-out configuration, enabling the management of hundreds of thousands of VMs, persistent container volumes and databases. 

The T7290 also delivers enhanced security with Glas-DP, Tintri’s built-in Data Protection and Recovery Suite, and dual custody snapshot locking. It offers simultaneous multi-hypervisor and container support, integrating with vSphere, Hyper-V, Citrix Xen, Red Hat RHEV, plus expanded support for Platform9, OpenStack and Red Hat OpenShift.

TxOS 6.0 introduces enhanced observability, automation and self-service for application-centric infrastructure, allowing IT managers to simplify administration, reduce complexity and focus on business outcomes while managing all workloads from a single console. Tintri VMstore T7290 and the TxOS 6.0 update are available directly from Tintri and certified partners now.

The latest VAST AI OS version is now GA, with the VAST DataBase delivering enhanced performance through a new Sorted Tables feature that makes near-logarithmic query speeds across massive datasets possible. Logarithmic time means the query time increases by a constant amount each time the dataset size doubles. For example, if a database has 1,000 records and a query takes 10 milliseconds, doubling the dataset to 2,000 records might only increase the query time to ~11 milliseconds. Databases use indexes (e.g., B-tree or B+ tree indexes) to organize data in a way that allows for fast lookups, inserts, and deletes. Instead of scanning every record (linear time, O(n)), the database traverses a tree structure, reducing the number of operations to a logarithmic scale.

Suppose a database has 1 million records. Without an index, finding a specific record might require scanning all 1 million (O(n)). With a B-tree index, the database might only need to check ~20 nodes (log₂(1,000,000) ≈ 20) to locate the record, resulting in a much faster query. In real-world scenarios, VAST has seen point queries improve by up to 100x and Trino queries on 10B+ rows speed up by 95x – reducing latency from 12 seconds to just 144 ms.
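The effect is easy to reproduce with a generic sorted lookup – illustrative of index behavior in general, not of VAST’s Sorted Tables implementation. Over ten million sorted integers, binary search needs roughly 24 comparisons where a linear scan may need millions:

```python
import bisect
import random
import time

n = 10_000_000
data = sorted(random.randrange(10**12) for _ in range(n))
target = data[-2]  # near the end: close to worst case for a linear scan

t0 = time.perf_counter()
linear_hit = next(i for i, v in enumerate(data) if v == target)  # O(n)
linear_secs = time.perf_counter() - t0

t0 = time.perf_counter()
binary_hit = bisect.bisect_left(data, target)                    # O(log n)
binary_secs = time.perf_counter() - t0

assert data[binary_hit] == target == data[linear_hit]
print(f"linear: {linear_secs:.3f}s, binary: {binary_secs*1e6:.1f}µs, "
      f"~{n.bit_length()} comparisons for binary search")
```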

VAST Data is partnering with Shonfeld Data Services (SDS), the dominant colocation and cloud provider in Israel, to power Israel’s sovereign AI infrastructure. SDS is building a next-generation AI cloud ecosystem to serve both domestic and international enterprises, and integrating VAST’s AI Operating System. There will be dozens of petabytes of VAST-powered data infrastructure and thousands of Nvidia Blackwell GPUs and Nvidia networking designed to serve large-scale enterprises that require uncompromising performance, security, and scalability to fuel their AI training and deployment workloads.

AI and HPC parallel file system supplier VDURA has joined the Ultra Ethernet Consortium (UEC). The UEC, which has welcomed more than 40 new members in the past year and a half, is building an open, Ethernet-based, high-performance architecture designed to address the challenges of accelerated computing. UEC brings together the full ecosystem of compute, networking, and storage to deliver advances in bandwidth, latency, and scalability needed for AI training, fine-tuning, and inference at scale.

SingleStore wants to be ChatGPT for data and AI apps


SingleStore is updating its unified database to handle natural language queries by converting them into SQL – and it’s building its own AI assistant to boot.

SingleStore provides SingleStoreDB, a real-time transactional and analytic database – a distributed, relational SQL database with operational, analytical, and vector data support. It is integrated with Apache Iceberg, Snowflake, BigQuery, Databricks, and Redshift, and CEO Raj Verma positions it as a foundational layer for AI. SingleStore was taken into majority private equity ownership last month. The database has been gaining AI/ML functionality in recent releases, and this update adds more.

Dave Eyler

VP of Product Dave Eyler stated: “With these additions, SingleStore becomes for data and AI apps what ChatGPT has become for consumers – an indispensable foundation that turns questions into insights.”

The SingleStoreDB update has three main features:

  • AI and ML Functions: Developers can now call machine learning models and large language models (LLMs) directly from SQL, streamlining the process of building intelligent, data-driven apps with familiar syntax.
  • Zero Copy Attach: This new functionality makes it easier to experiment on production data by replicating it into lightweight clusters within the same cluster group, enabling safe, agile iteration, especially when paired with SingleStore’s database branching.
  • Aura Analyst: An analytics assistant that allows users to conversationally explore insights across all their data. In the future, SingleStore will add the capability to embed Aura Analyst directly into its customers’ apps, turning every application into an AI-native experience and positioning SingleStore as the ChatGPT for data and AI apps.

Aura will provide a natural language query interface into SingleStoreDB’s data. It analyzes the incoming question, turns it into a SQL request, runs that against SingleStoreDB, receives the result, and builds an answer to the original question.

SingleStore Aura AI assistant responding to a “What is our sales growth by region?” request by running generated SQL and building the response
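That loop – generate SQL from the question, execute it, then summarize the rows – can be sketched as follows. The schema, prompts, and call_llm stub are hypothetical placeholders rather than Aura’s actual implementation; singlestoredb is SingleStore’s official Python client:

```python
import singlestoredb  # SingleStore's official Python client

SCHEMA_HINT = "sales(region VARCHAR(64), amount DECIMAL(12,2), sold_at DATE)"

def call_llm(prompt: str) -> str:
    """Placeholder: swap in any LLM completion API."""
    raise NotImplementedError

def answer(question: str) -> str:
    # 1. Turn the natural language question into SQL
    sql = call_llm(f"Table: {SCHEMA_HINT}\nWrite one SQL query answering: {question}")
    # 2. Run the generated SQL against SingleStoreDB
    with singlestoredb.connect("user:password@host:3306/analytics") as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            rows = cur.fetchall()
    # 3. Have the model turn raw rows into a narrative answer
    return call_llm(f"Question: {question}\nSQL: {sql}\nRows: {rows}\n"
                    "Summarize the answer for a business user.")

print(answer("What is our sales growth by region?"))
```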

This raises the question of where in your database estate an AI assistant should sit to query your data. We’re seeing a tremendous tussle among the various unified data store providers – Databricks, SingleStore, Snowflake, and others – to be the AI model or agent query entry point. Whichever supplier owns that entry point effectively owns the data being queried and becomes the top-level, AI-relevant data store for that customer. Data stores lower down the stack then become mere data feeders to the chosen entry point.

You can view demos of database access with Zero Copy Attach here, running AI tasks directly in SQL with AI Functions here, and Aura Analyst here.

Hitachi Vantara offers vSAN replacement for VMware migrants

Hitachi Vantara is trying to attract would-be VMware leavers by positioning VSP One as a vSAN replacement and Red Hat OpenShift as the vSphere alternative.

The background to this is, of course, Broadcom’s acquisition of VMware and its subsequent VMware licensing changes. Red Hat OpenShift Virtualization (RHOSV) uses the open source KubeVirt control plane to integrate Kubernetes container orchestration with server virtualization using the KVM hypervisor. OpenShift itself is a commercial platform. The idea is to migrate VMware systems over to RHOSV, using KubeVirt, and enable containerized app development and support as well, with VSP One providing the underlying storage needed and replacing vSAN. It features a unified data storage offering for block, file, and object storage across on-premises systems and the public cloud. 

Dan McConnell

Dan McConnell, SVP product management and enterprise infrastructure at Hitachi Vantara, stated: “By combining Red Hat OpenShift Virtualization with Hitachi Vantara’s high-performance VSP One infrastructure, we’re enabling customers to simplify migration, reduce complexity, and accelerate application delivery on a modern hybrid cloud foundation. Customers want choice without complexity or cost or vendor lock-in.” 

The combined VSP One-RHOSV offering includes a “pre-validated reference architecture and a VM migration tool that simplifies and accelerates the transition from legacy platforms.” It “allows organizations to run virtual machines (VMs) and containers side by side on the same platform, reducing the need for separate virtualization infrastructure and avoiding duplicate environments, which reduces hardware, software licensing, and operational costs.”

This reference architecture uses stretched Red Hat OpenShift clusters and features VSP One’s Global Active Device (GAD) technology, which enables active-active data access across multiple sites. There are enhanced CSI drivers, and the architecture supports disaster avoidance, continuous operations, and seamless workload mobility across geographically distributed sites. An optional third-site quorum, with Red Hat OpenShift master node support in the public cloud or an isolated site, enables maximum availability-zone resiliency.

Hitachi Vantara says the combined offering will bring reduced operational costs and less vendor lock-in, faster app delivery and migration, continuous uptime, 100 percent data availability, and support for mission-critical workloads. The company tells us there will be improved policy consistency, proactive issue resolution, and secure operations across hybrid environments, through the integrated Red Hat OpenShift observability and automation tools and Hitachi Vantara’s “intelligent infrastructure management.”

Earlier this year, Hitachi Vantara and Red Hat announced an update to the Red Hat OpenShift migration toolkit for virtualization, which includes a storage offloading feature for cold migrations, powered by VSP One. Allowing storage offloading during a cold migration can significantly accelerate the process by moving the data-copying workload from the server and network to the storage array itself, reducing downtime and maintaining operational continuity. Hitachi Vantara says it’s the primary driver for the development of this feature, and one of the first to have its offload driver reach a technology preview stage. 

Dell did a deal with Red Hat OpenShift Virtualization a year ago. Infinidat and Veeam did a backup-focused deal with Red Hat OpenShift in March this year. Alternative VMware storage migration destinations include Nutanix and Pure Storage.

Get more information on Hitachi Vantara’s Red Hat OpenShift offerings here, including a downloadable eBook and a solution brief.

Infinidat doubles InfiniBox capacity with G4 refresh

Infinidat has refreshed its fourth-generation (G4) InfiniBox arrays with larger drives, more slots, faster internal networking, a smaller all-flash model, S3 protocol support, QLC flash, and a new capacity-upgrade scheme.

The InfiniBox arrays are high-end, scale-up storage array products competing with Dell PowerMax, Hitachi Vantara VSP, and IBM DS8000 systems. Their unique features include three controllers for reliability, Neural Cache-branded memory caching providing data access responses in as little as 35μs, and both all-flash (SSA) and hybrid flash/disk versions of the array. The controlling software is InfuzeOS, which also runs in the AWS cloud, providing an InfiniBox environment there. Infinidat also supplies InfiniGuard, a cyber-protection and backup system, and InfiniVerse, a cloud-based monitoring system receiving telemetry from the InfiniBox arrays.

Eric Herzog

Infinidat CMO Eric Herzog stated: “We continue to expand and enhance our InfiniBox G4 family, enabling enterprise customers and service providers to store larger quantities of data more efficiently, have easier access to advanced storage capabilities, benefit from flexible capacity management, free up rack space and floorspace, and reduce energy consumption for a greener storage infrastructure at a better power cost-efficiency per terabyte of storage.”

The hardware changes start with support for 24 TB disk drives, up from 20 TB; SAS-4 22.5 Gbit/s internal drive connectivity, up from 12 Gbit/s SAS-3; and a 78-slot drive enclosure, replacing the prior 60-drive box. This takes maximum effective capacity per InfiniBox rack from 17.2 PB to 33 PB, a 92 percent increase. One use case for this is as a high-capacity backup storage target.

There is a new F24 SSA model, the F24NT. The existing F24ST, with SAS drive support, starts at 155 TB capacity in its 14 RU enclosure. The F24NT comes in an 11 RU enclosure with 77 TB starting capacity, offering a 29 percent lower entry price than the F24ST. It supports NVMe-connected 16 TB TLC SSDs, with 30 TB and 60 TB QLC drives coming later this year.

Herzog told us: “We obviously know that there’s other capacities coming out in the market. So as those come out, we will test them of course, and add them in, just like we did with the 24 TB hard drives.”

The F24NT can be installed in a customer’s existing datacenter racks where there is space, at a colocation facility, or at an enterprise’s remote locations with rack-level IT facilities, such as major office or factory deployments. This means the InfiniBox can now compete for high-end midrange storage requirements that demand features beyond what classic midrange storage products can provide.

The InfiniBox systems now support 100 GbitE connectivity, up from 25 GbitE.

InfuzeOS now supports the S3 object storage protocol, on top of the existing SMB/NFS, NVMe-oF/TCP, and Fibre Channel support. Herzog told us: “We made some replication enhancements. We’ve gone multi-target and multi-parallel for async replication… If we have a customer on the third generation or who bought a G4, for example, last summer, and they want faster replication, they just get our software and then they just load it up and they’ll get better performance.”
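Because InfuzeOS now exposes a standard S3 endpoint, ordinary S3 tooling should work against it. Here is a minimal boto3 sketch; the endpoint URL, credentials, and bucket name are illustrative placeholders, not documented InfuzeOS values.

    import boto3

    # Point a standard S3 client at the array's S3 service (hypothetical URL)
    s3 = boto3.client(
        "s3",
        endpoint_url="https://infinibox.example.internal",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Write an object, then list the bucket to confirm it landed
    s3.put_object(Bucket="backups", Key="nightly/db.dump", Body=b"...")
    for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
        print(obj["Key"], obj["Size"])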

Infinidat also pitches a green angle, saying its systems consume less power than rivals such as Dell’s PowerMax.

Infinidat enables customers to buy a partially populated SSA array with pre-determined fixed capacity percentages – such as 60, 80, and 100 percent – with non-disruptive upgrades. It’s now offering capacity-based upgrades with lower increments per upgrade and lower costs.

Ashish Nadkarni, Group VP and GM, Worldwide Infrastructure at IDC, stated: “In 2025, the progression of the G4 is extending the reach of the G4 beyond existing customers and into new enterprise customers. Infinidat is making it easier for large enterprises to deploy the InfiniBox G4 for a broader range of applications and workloads. The G4 is gaining extensive additional momentum.”

Get a hybrid InfiniBox product range datasheet here and an all-flash InfiniBox (SSA) product datasheet here.

Availability

The 24 TB disk drives will be available in Q4 2025. NVMe QLC flash modules in 30 TB and 60 TB capacities will be added to the SSA range during the same period.

Cloudian launches HyperScale AI platform built on Nvidia Blackwell GPUs

Cloudian has launched a HyperScale AI Data Platform (AIDP), with its HyperStore S3-compatible object storage being the datastore for AI models and agents running on Nvidia GPU hardware and software.

HyperScale AIDP is built on the Nvidia RTX PRO 6000 Blackwell Server Edition GPU in line with Nvidia’s AI Data Platform reference design. HyperStore supports the S3 storage protocol over remote direct memory access (RDMA) to deliver faster object storage I/O performance to the Blackwell GPUs and so speed up AI response generation. Cloudian says HyperScale AIDP makes its customers’ unstructured data available to GenAI large language models (LLMs) and agents so that they can serve their staff and customers better.

Cloudian CTO Neil Stobart stated: “Rather than forcing enterprises to build complex, separate infrastructure for AI – which requires skills most don’t possess – we’ve engineered a platform that automatically transforms existing data into actionable intelligence at the storage layer.” As he puts it: “The HyperScale AI Data Platform represents a fundamental shift in how enterprises approach AI readiness.”

HyperScale AIDP uses RTX PRO 6000 Blackwell GPUs, BlueField DPUs, Spectrum-X Ethernet, and Nvidia AI Enterprise software – including NIM and NeMo Retriever microservices. There is a three-node HyperStore cluster in the system.

Cloudian’s thinking here is that, up until now, “AI implementations require enterprises to build complex, dedicated file system structures and separate vector databases to achieve optimal performance.” This means its exabyte-scale HyperStore and HyperScale products can enable “enterprises to deploy AI applications directly on their native S3-compatible data sources without requiring additional file system layers or separate vector database infrastructure.” 

Vector databases hold the mathematical transformations of source unstructured data items that are used in semantic search by LLMs, and HyperStore has its own built-in vector database. The Cloudian HyperScale AI Data Platform automatically ingests, embeds, and indexes multimodal unstructured content, making it instantly searchable through vector search interfaces and immediately available for retrieval-augmented generation (RAG) workflows.
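To make the vector search step concrete, here is a toy Python sketch of what a vector database does during RAG retrieval, using cosine similarity over embeddings. The document names and three-dimensional vectors are invented stand-ins; Cloudian’s built-in vector database does this at scale with real embedding models.

    import numpy as np

    # Stand-in document embeddings (real systems use hundreds of dimensions)
    docs = {
        "admin-manual":  np.array([0.9, 0.1, 0.2]),
        "release-notes": np.array([0.1, 0.8, 0.3]),
        "policy-guide":  np.array([0.7, 0.3, 0.1]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Embedding of a user query such as "how do I set a storage policy?"
    query = np.array([0.8, 0.2, 0.2])

    # Rank documents by similarity; the top hits become the LLM's RAG context
    ranked = sorted(docs, key=lambda k: cosine(query, docs[k]), reverse=True)
    print(ranked)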

HyperScale AIDP demo video screengrab showing the system answering a natural language query about creating a HyperStore storage policy

The company says that with its HyperScale AIDP, a customer’s AI agents “can access, search, and analyze information in near real-time,” with pretty much instant access to all the organizational knowledge – the reports, manuals, presentations, and multimedia content – stored in HyperStore. Incoming “data is instantly classified, enriched with real-time metadata, and vectorized as it’s stored, eliminating manual data preparation processes and complex infrastructure provisioning.”

The S3 object storage RDMA facility provides “direct data paths between storage, system memory, and GPU memory for ultra-low latency and up to 35 GBps per node (reads), scalable to TBs per second.”

Cloudian asserts that “organizations can now build AI applications that securely access enterprise data with reduced overhead and faster time to deployment, all while maintaining full control over their most valuable asset – their data.”

A Cloudian blog, Unlocking Enterprise Knowledge with Cloudian HyperScale AIDP, says: “Most people are familiar with chatbots that give generic responses without benefit of your enterprise knowledge. Or search engines that return endless lists of links to sort through. This system is fundamentally different because it applies agents, making decisions and reasoning through problems without needing step-by-step instructions from humans.”

The system “employs Llama-3.2-3B-Instruct, Meta’s instruction-tuned transformer model optimized for dialogue and reasoning tasks, featuring a 128K token context window.” A diagram in Cloudian’s blog shows the request-response data flow.

Its four-GPU compute infrastructure allocates each GPU a specific role:

  • GPU-1 – Llama-3.2-3B-Instruct LLM inference operations
  • GPU-2 – Vector database operations and similarity search
  • GPU-3 – Nvidia software reranking processes and relevancy scoring
  • GPU-4 – Shared resources for embedding generation and auxiliary functions

This scheme provides optimized throughput without resource contention in the pipeline. An internal Cloudian HyperScale AIDP demo ingested a 1,000-page Cloudian admin guide in around five minutes, making it available for use by LLMs and agents. Existing Cloudian customers can acquire a HyperScale AIDP system, built around their HyperStore, and make the unstructured data stored within it available for AI LLM/agent RAG inferencing without the data leaving their HyperStore deployment. You can view a HyperScale AIDP demo here.
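The stage-per-GPU layout can be sketched generically in Python; this is an illustrative pattern, not Cloudian’s code, with GPU-1 through GPU-4 mapped to device IDs cuda:0 through cuda:3.

    # Each pipeline stage is pinned to its own GPU, so no two stages
    # contend for the same device and requests can be pipelined.
    PIPELINE = [
        ("embed_query",   "cuda:3"),  # GPU-4: embedding generation / auxiliary
        ("vector_search", "cuda:1"),  # GPU-2: similarity search
        ("rerank",        "cuda:2"),  # GPU-3: relevancy scoring
        ("generate",      "cuda:0"),  # GPU-1: LLM inference
    ]

    def run_query(question: str) -> None:
        # A real system would dispatch each stage's work to its device;
        # here we just trace the flow of one request through the stages.
        for stage, device in PIPELINE:
            print(f"{stage:>13} -> {device}")

    run_query("How do I create a HyperStore storage policy?")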

Managing data with Pure Storage’s Enterprise Data Cloud architecture

Don’t manage arrays, manage your data instead

Most people believe they are good drivers. Yet driving remains one of the most dangerous things we do every day. Cars operate in silos, each requiring constant maintenance, and outcomes depend entirely on human judgment. Mistakes are inevitable, and the consequences can be severe.

Now imagine a different reality: a world where autonomous vehicles dominate the roads. Manual driving isn’t just outdated, it’s dangerous, inefficient, and obsolete. Journeys are safer, faster, and more cost-effective. Accidents become rare. Traffic flows seamlessly. Safety is no longer at the mercy of human error. In that world, the very idea of human-driven cars feels as archaic as lighting your home with kerosene lamps.

This is the trajectory of all technology. Complex, manual systems give way to automation. Carburetors and choke levers gave way to computer-controlled fuel injection. Clutch pedals yielded to automatic transmissions. Now assisted driving and self-driving systems are steadily taking over the most complex tasks of vehicle operation. The arc is clear: from manual control to intelligent automation, from human-managed risk to machine-optimized safety, from silos to platform.

The legacy enterprise equivalent

Enterprise data management today looks a lot like the manual driving era, with complexity, siloed systems, high costs, and security gaps. Every dataset is treated like its own vehicle, requiring care, attention, and oversight. Different teams manage different stacks, leading to fragmentation and duplication. On average, data is copied seven to eight times through unchecked backups and replications, bloating costs, creating inefficiency, and multiplying risk.

AI is now amplifying this challenge. AI workloads are not tolerant of data silos, manual oversight, or fragmented systems. They demand a clear understanding of where data resides, secure and compliant access to that data, and seamless movement across environments. Old-world storage architectures simply cannot keep up with the agility, scale, and governance AI requires. The risk is clear: enterprises that cling to siloed infrastructure will find themselves unable to compete in an AI-driven economy.

This is why the Enterprise Data Cloud (EDC) matters. It represents a paradigm shift from managing storage arrays to managing data itself. Just as autonomous driving abstracts away the mechanics of gear-shifting and fuel injection, EDC abstracts away infrastructure complexity, replacing it with policy-driven, automated, intelligent data management at scale.

The Pure Fusion control plane

At the heart of the Enterprise Data Cloud is Pure Fusion, a highly intelligent global control plane. Pure Fusion brings fragmented systems together into a unified data plane, spanning on-prem, cloud, and hybrid environments alike.

Pure Fusion creates a unified, virtual pool of storage that provides visibility, access, and control across all data assets. This is not just storage orchestration; it provides the intelligence to build a data management platform:

  • Policy-driven governance ensures compliance and security across environments.
  • Automation provisions, protects, and optimizes data in real time.
  • Mobility keeps data fluid, eliminating silos and making it instantly accessible wherever it is needed.

This architecture transforms storage into a fully managed, always-modern service: continuously available, infinitely scalable, and optimized automatically without the grind of legacy complexity and manual operations. With Pure Fusion, enterprises no longer think about disks, arrays, or replication policies — they think about data as a platform.

A paradigm shift for the AI era

AI is the accelerant forcing this transition. Enterprise AI at scale requires data to be governed, unified, and delivered at the right time, in the right format, to the right systems. Enterprises that cannot do this will face ballooning costs, compliance failures, and strategic irrelevance. Enterprises that embrace EDC, by contrast, will manage orders of magnitude more data, support AI-driven innovation, and turn data from a liability into a differentiating asset.

This isn’t a distant vision. Pure Storage is already delivering it. With EDC, IT teams move beyond managing storage infrastructure and instead manage a living, intelligent data platform. One that breaks down silos, supports modern workloads, strengthens cyber resilience, and enables enterprises to operate at SaaS speed.

The future of enterprise data

Once an organization embraces the Enterprise Data Cloud, the shift feels inevitable. Just as no one today would willingly choose carburetors over fuel injection, no enterprise tomorrow will tolerate managing fragmented data silos manually. The future is an AI-infused, data-led infrastructure that makes organizations more agile, more compliant, and more innovative.

With EDC powered by Pure Fusion, IT transforms from a brake on innovation to an accelerator pedal. Data becomes the foundation for competitive advantage, fueling faster insights, unlocking new opportunities, and enabling secure, compliant growth.

This is not just evolution. It is a revolution. The future of enterprise data management is here, and Pure Storage is leading the way.

Contributed by Pure Storage.

VirtualZ and x86 server/public cloud mainframe bridging software

Two years ago we reviewed VirtualZ with its Lozen IBM mainframe data access and PropelZ mainframe data extract products. Now it has two more, FlowZ and Zaac, and is talking about AI and mainframe data. It’s time for another look.

Mainframes hold vast amounts of highly valued data, some of which needs to be contributed to an organization’s general data pool so that its x86 servers, GPU servers, and cloud processing instances can make use of it. Lozen enables real-time, read-write, peer-to-peer access to IBM Z mainframe data, while PropelZ makes one-time copies of Z-held structured data for transfer to an on-prem or public cloud database. The two new products extend VirtualZ’s repertoire: FlowZ moves mainframe-held files to an on-prem or public cloud destination, and Zaac expands mainframe storage capacity with an external SAN or the public cloud.

Let’s take a closer look at each one.

FlowZ

With FlowZ, mainframe and on-prem x86 server/public cloud apps can have bi-directional access to file (unstructured) data. Mainframe users can open, close, read, and write these files without having to write extra code. An admin configures an output file name in JCL, and FlowZ transparently maps local mainframe file operations to the external storage; a sketch of the distributed side follows the list below.

This can support:

  • Cloud backups without staging. 
  • File sharing for distributed teams/hybrid apps. 
  • Feeding unstructured data into AI training/inference. 
  • Replacing FTP/manual processes for compliance/archival.
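Here is the sketch promised above, showing the distributed side of a FlowZ transfer: once FlowZ has landed a mainframe output file in cloud object storage, any x86 or cloud app can consume it with standard tooling. That the landing zone is an S3 bucket, plus the bucket and dataset names, are illustrative assumptions.

    import boto3

    s3 = boto3.client("s3")

    # Read the dataset FlowZ wrote to the (hypothetical) export bucket
    obj = s3.get_object(Bucket="mainframe-exports",
                        Key="prod/GL.DAILY.EXTRACT.txt")

    # Stream records into a downstream pipeline, e.g. AI ingestion or archive
    for line in obj["Body"].iter_lines():
        print(line.decode("utf-8", errors="replace"))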

Zaac

This software resides in the mainframe and acts as a bidirectional gateway for mainframe apps, presenting on-prem NAS/SAN and cloud block storage as local DASD (Direct Access Storage Device, aka disk or SSD) or tape. That means storage can be quickly spun up and used by mainframe apps or by external apps, which could be significantly cheaper than buying additional mainframe capacity, and significantly faster to provision as well. It can, VirtualZ says, “feed real-time mainframe data into AI pipelines and hybrid analytics platforms.” When external apps put their data into Zaac stores, mainframe apps can access it as if it were local.

The Zaac software installs on the Z system in minutes, with no application changes, and is public cloud-agnostic, supporting AWS S3, Azure Blob, and Google Cloud object storage. It enables mainframe apps to access cloud-native data, AI-generated datasets, or externally sourced data directly, supporting read-write operations without intermediaries. Mainframe applications interact with external data via standard JCL or APIs, treating remote storage as local.

Data flows bidirectionally in real time, with built-in optimization for zIIP offload to minimize MIPS costs. Zaac supports unlimited LPARs (logical partitions, in effect virtual mainframes) and sites, with cloud-agnostic drivers for seamless integration.

The Zaac name comes from Z (IBM Z mainframes) and its aim to provide Access to Any data, Cloud, or platform. 

VirtualZ says both FlowZ and Zaac can contribute mainframe data to AI pipelines. Find out more about FlowZ here and Zaac here.

Bootnote

VirtualZ has had six funding rounds, with about $7 million raised in total. It took in $4.9 million in October ($2.7 million) and December 2023 ($2.2 million) to help develop its FlowZ and Zaac products and build out its sales team. It raised an additional $2.1 million in August 2024, for research, development, team expansion, and scaling operations to meet rising demand for hybrid cloud integrations.

Its competitors include BMC Software, which acquired Model 9 in April 2023, and startup Geniez.

Storage news ticker – September 29

Data source connector builder Airbyte announced Airbyte Enterprise Flex, providing customers with data sovereignty and full control over their data, enabling accurate and secure data for AI and analytics. It says Flex is the industry’s first data movement platform built on one codebase, delivering the Airbyte feature set while giving customers the freedom to deploy wherever they choose – within days – in an on-premises datacenter, cloud, multi-cloud, or hybrid cloud. Airbyte also announced Data Activation (often referred to as reverse ETL), which allows customers to move their data back into business-critical applications for AI and analytics use cases. Data Activation is available on all Airbyte plans for HubSpot and Customer.io destinations, with plans to roll out to some 200 application destinations. Lastly, it announced improved platform performance speeds – by as much as five to ten times – which translates to similar improvements for Airbyte-supported connectors.


William Blair analyst Jason Ader tells subscribers that cloud and backup storage supplier Backblaze is in an ongoing business turnaround, led by a management team that is fresh apart from co-founder and CEO Gleb Budman. Management’s confidence in the business outlook stems from:

  • The company’s competitive differentiation versus hyperscaler and neo-cloud storage offerings (less expensive, more efficient, and flexible storage)
  • B2 Cloud’s growth reacceleration (driven by momentum with AI customers, increased cross-sell/upsell opportunities), which is projected to exit fiscal 2025 at 30-plus percent growth
  • Major go-to-market changes (establishment of direct sales teams, focus on upskilling), which have already begun to bear fruit
  • Secular AI/data tailwinds, which magnify the need for available storage at scale
  • Improving profitability, with management prioritizing margin upside and adjusted free cash flow generation as core drivers here

Connector company CData Software announced Connect AI, the first managed Model Context Protocol (MCP) platform that integrates AI assistants, agent orchestration platforms, AI workflow automation, and embedded AI applications with more than 300 enterprise data sources. Connect AI provides governed, in-place access to enterprise data and preserves data semantics and relationships, giving AI a complete understanding of the context. It says Connect AI takes the same enterprise-grade connectivity technology already embedded by top technology companies, including Palantir, SAP, Salesforce Data Cloud, and Google Cloud, into their offerings, and reimagines it specifically for AI workloads with real-time semantic integration capabilities.

Cloudera has announced the launch of Cloudera Iceberg REST Catalog and Lakehouse Optimizer, and an integration with Dell ObjectScale to deliver a next-generation Private AI platform. The Iceberg REST Catalog enables secure, zero-copy data sharing and unified governance across any cloud or datacenter. With the Lakehouse Optimizer, enterprises gain automated cost savings and performance improvements for Apache Iceberg tables. Cloudera compute engines can run directly on ObjectScale storage, making it easier to manage both structured and unstructured data in one place with clear governance, security, and scalability.

Commvault and BeyondTrust have a strategic integration, bringing BeyondTrust’s Password Safe privileged access management (PAM) solution into Commvault Cloud, enhancing identity security. 

Commvault research reveals that the rise of AI, tightening cross-border data regulations, and persistent cybersecurity threats are the top three external factors putting pressure on businesses to improve trust, with 97 percent of UK business leaders believing that introducing a chief trust officer role is necessary. This survey was conducted independently and exclusively for Commvault by Censuswide. It reveals the views of 1,000 UK business leaders, from companies with revenue of over £100 million.

Private equity biz Haveli Investments has acquired NoSQL database supplier Couchbase for $1.5 billion in cash, four years after it went public and subsequently underperformed. Couchbase will now be a privately held company, and stockholders are entitled to receive $24.50 per share of Couchbase common stock owned immediately prior to closing. Sumit Pande, Senior Managing Director at Haveli Investments, said: “The combination of Couchbase’s strong product leadership with Haveli’s expertise in scaling enterprise software organizations, positions us well to expand market leadership while continuing to meet the performance and scalability demands of customers.”

Databricks and OpenAI announced the launch of a $100 million partnership that brings OpenAI models to all Databricks’ 20,000-plus customers. This means organizations can now bring OpenAI models directly to their enterprise data, benefit from access to high-capacity processing across the latest OpenAI models, and build production-ready AI agents by measuring the accuracy of models like GPT-5 and gpt-oss with task-specific evaluation and LLM judges. The $100 million number? It’s a minimum spending commitment from Databricks to OpenAI over the multi-year period of the deal. 

If actual revenue exceeds $100 million, OpenAI receives more; if it falls short, Databricks covers the full minimum. This structure mirrors Databricks’ earlier $100 million deal with Anthropic and provides OpenAI with predictable income amid its datacenter expansions.

Data integration supplier Fivetran is reportedly in talks to buy dbt Labs which provides an open source software framework to manage, transform, and model data in data warehouses.

Hitachi Vantara launched a global Hitachi AI Factory, built on Nvidia’s full-stack AI platform. The AI Factory has a distributed global infrastructure powered by Hitachi iQ with Nvidia HGX B200 systems featuring Nvidia Blackwell GPUs; Hitachi iQ M Series with Nvidia RTX PRO 6000 Server Edition GPUs; and the Spectrum-X Ethernet networking platform. It enables seamless collaboration across the US, EMEA, and Japan, and accelerates Lumada 3.0, Hitachi’s data-driven transformation model combining IT, OT, and AI to solve real-world challenges. (Lumada is Hitachi’s operating model that helps enterprises solve business and societal problems through co-created digital transformation.) The AI Factory supports digital twins and autonomous systems, with applications already seen in Hitachi Rail’s HMAX.

This geographic distribution ensures Hitachi’s engineers can access powerful computing resources with low latency, no matter where they are.

Majed Saadi

Hitachi Vantara Federal announced Majed Saadi as the company’s CTO to lead Hitachi Vantara Federal’s technology strategy, driving IT modernization, strengthening enterprise platforms, and advancing the company’s position as a federal innovation leader. Saadi has held senior technology leadership roles across federal integrators, resellers, and consulting firms. As VP of Growth and Technology at Synergy Inc., he led market expansion, technology adoption, and federal contract growth. Previously, as VP at General Dynamics IT, he oversaw complex IT and mission support services for federal civilian agencies. 

Lenovo announced SMB compute and storage products, which included “Business Protection in a Box: Safeguard critical data and workloads with Lenovo ThinkSystem SR650 V3 supporting up to 55 VMs; Lenovo ThinkSystem SR630 V3 + ThinkSystem Storage Arrays supporting up to 140 VMs.” Lenovo integrates Veeam to safeguard workloads against ransomware and failures, and with near-instant recovery, SMBs can restore critical operations in minutes, without needing a large IT staff.

Lenovo research drawn from its third Work Reborn report, Reinforcing the Modern Workplace, found 65 percent of IT leaders surveyed admitted their defenses are outdated and unable to withstand AI-enabled attacks, and just 31 percent feel confident defending against them. It says AI has changed the balance of power in cybersecurity. To keep up, organizations need intelligence that adapts as fast as the threats. That means fighting AI with AI. Lenovo says it’s leading with AI-native defenses designed to spot threats earlier, adapt in real time, and scale across the modern workplace.

OSI (Open Semantic Interchange) is an open-source initiative aiming to create a universal semantic data framework that lets companies standardize their fragmented data definitions with an open, vendor-neutral specification. In this early era of AI agents, fragmented data definitions remain one of the biggest barriers to AI adoption, making a shared open standard essential to ensuring semantic consistency. OSI establishes a shared semantic standard so all AI, BI, and analytics tools can “speak the same language,” giving companies the flexibility to adopt best-of-breed technologies without losing consistency in metrics or business logic.

Without a shared semantic specification, data and AI teams often spend weeks reconciling conflicting definitions or duplicating work across platforms. By standardizing how semantics are defined and exchanged, OSI hopes to ensure data is governed, consistent, and context-rich, enabling more accurate, trustworthy AI insights and faster adoption.

The OSI was set up by ThoughtSpot and Snowflake alongside Alation, Atlan, BlackRock, Blue Yonder, Cube, dbt Labs, Elementum AI, Honeydew, Mistral AI, Omni, Relational AI, Salesforce, Select Star, Sigma, and Tableau.

The PostgreSQL community has released PostgreSQL 18. EnterpriseDB (EDB) contributed over 30 patches, more than ever before, helping to add 200 new features – including OAuth authentication, SQL standards improvements, and performance optimizer enhancements – that make it easier for organizations to run secure, high-performance, and portable applications across hybrid environments.

Peter Thiel’s Founders Fund-backed startup Sentient has launched its open AGI network, the first truly composable AI ecosystem where agents, models, and data sources work together. It’s accessible through Sentient Chat. Sentient’s mission is to ensure that Artificial General Intelligence (AGI) is open-source and not controlled by any single entity. While OpenAI and Anthropic operate closed ecosystems that limit what developers can build, and platforms like AWS’s AI agent marketplace and Hugging Face offer discovery of individual models without enabling coordination between them, Sentient orchestrates 40+ specialized agents into coordinated workflows capable of generating outputs like comprehensive investment reports that combine pricing, research, and market data in real-time.

At launch, Sentient’s platform features more than 40 specialized agents, 50 data sources, and 10-plus models. The agents, from both Web2 and Web3, include generative graphics engine Napkin, fast-growing search startup Exa, and Composio, a developer-focused platform that enables AI agents to seamlessly integrate with over 250 external tools and APIs.

Research biz TrendForce expects NAND flash prices to rise 5-10 percent in 4Q25, driven by spillover demand for QLC products. It says HDD shortages and longer lead times have prompted CSPs to quickly redirect storage demand toward QLC enterprise SSDs. SanDisk was the first to announce a 10 percent price increase, while Micron paused quotations due to pricing and capacity issues. QLC’s cost benefits have increased its adoption in SSDs. As generative AI drives higher demand for extensive data storage, suppliers are more focused on expanding QLC capacity.

VAST Data co-founder Jeff Denworth posted on X: “One of the silent stories of this week’s flurry of UK activity is the @VAST_Data involvement in these super-projects. CoreWeave, Microsoft/NScale, etc. There’s a whole lot of VAST’s AI Operating System headed to Great Britain!” VAST is at the #24 spot on the 2025 Forbes Cloud 100, “the definitive ranking of the top 100 private cloud companies in the world, published by Forbes in partnership with Bessemer Venture Partners.”

Kinda Baydoun

Veeam has promoted Kinda Baydoun to lead its EMEA Partner Organization. She has four years’ experience leading partner engagement across the Middle East, English-speaking Africa, and Eastern Europe.

Mainframe data access supplier VirtualZ makes data available to LLMs and agents in four ways:

  • Lozen – real-time, read-write access to mainframe data without replication, enabling governed AI/analytics directly against source systems.
  • PropelZ – no-code ELT for landing mainframe data into JDBC targets (Snowflake, Databricks, SQL Server, etc.) with policy-driven control (e.g., column/row exclusion).
  • Zaac – “instant mainframe storage” that bridges cloud object storage and z/OS for cost-efficient modernization.
  • FlowZ – lightweight pipelines and backup/restore workflows that are getting traction with EU DORA-focused teams.

ReRAM developer Weebit Nano has joined the EDGE AI FOUNDATION as a Strategic Partner. The foundation is a global hub that unites industry leaders and researchers to drive innovation, solve global challenges, and democratize edge AI technologies. Weebit says it brings its low-power, high-performance Resistive RAM (ReRAM or RRAM) technology to this community. For advanced edge AI chips, Weebit ReRAM provides the dense on-chip non-volatile memory needed to store weights for artificial neural networks (NNs), with the ultra-low power consumption that is critical for edge devices. It also scales to the smaller process geometries used in the fabrication of advanced AI SoCs.

Rubrik adds Okta protection to identity recovery lineup

Rubrik can now protect and restore Okta Identity Provider (IdP) environments from automated and immutable backups.

Okta provides cloud-based user authentication, authorization, and application access via Single Sign-On and multi-factor authentication to people and applications, both on-prem and SaaS, needing to access an organization’s IT environment. It has an Okta Integration Network and can provide ID-as-a-Service. Each Okta customer is akin to a tenant with their own Okta IdP environment, and is responsible for protecting the data and configurations within that tenant environment – the classic SaaS shared responsibility model. Rubrik now offers data protection and cyber-resilience services covering Okta, and says it’s the only supplier protecting all three of the Active Directory, Entra ID, and Okta IdPs.

Hema Mohan

Hema Mohan, Rubrik VP of Product Management, stated: “While organizations are consolidating their identity systems, many are still operating in complex hybrid and multi-IdP environments that create new blind spots when it comes to complete cyber resilience. By protecting the critical configurations and dependencies within Okta, we are empowering our customers to defend identity and data, recover quickly, and build lasting resilience in one simple, yet powerful solution.”

The Rubrik Okta Recovery product provides continuous, automated protection of Okta objects, including users, groups, and applications. There is in-place, granular recovery for Okta objects and metadata. It can restore misconfigured, compromised, or deleted objects directly in the live Okta tenant, “minimizing disruption and eliminating tedious, manual rebuilds.”
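To give a sense of the object types involved, here is a minimal Python sketch that enumerates users, groups, and applications through Okta’s public REST API. It illustrates what any backup of an Okta tenant must capture; it is not Rubrik’s actual mechanism, and the tenant domain and API token are placeholders.

    import requests

    OKTA_DOMAIN = "https://example.okta.com"         # hypothetical tenant
    HEADERS = {"Authorization": "SSWS <api_token>"}  # Okta API token

    # Users, groups, and apps are the core objects Rubrik says it protects.
    # (The Okta API paginates; this reads just the first page of each list.)
    for resource in ("users", "groups", "apps"):
        resp = requests.get(f"{OKTA_DOMAIN}/api/v1/{resource}", headers=HEADERS)
        resp.raise_for_status()
        print(resource, len(resp.json()), "objects to protect")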

All the Okta tenant backup data is secured in Rubrik-owned, immutable storage, isolated from attacks and tampering.

As seen in recent incidents affecting organizations such as M&S and Co-op earlier this year, many malware attacks rely on phishing to obtain access credentials, so the management of user access is becoming more important. Protecting cloud-held user and app access details is critical as well.

Self-hosted SaaS backup service business Keepit already protects Okta, as does HYCU; in fact, Okta Ventures has invested in HYCU. Suppliers such as Cohesity, Commvault, and Druva do not. Rubrik Okta Recovery is expected to be available in the coming months.