
Storage news ticker – April 4

Cohesity announced it is the first data protection provider to achieve Nutanix’s Database Service (NDB) database protection Nutanix Ready validation. It says: “NDB is the market leading database lifecycle management platform for building database-as-a-service solutions in hybrid multicloud environments.” Cohesity DataProtect now integrates with NDB’s native time machine capabilities and streamlines protection for PostgreSQL databases on NDB via a single control plane.

Bill O’Connell

Data protector Commvault has announced the appointment of Bill O’Connell as its chief security officer. He had prior leadership roles at Roche, leading technical, operational, and strategic programs to protect critical data and infrastructure, and also at ADP. He previously served as chair of the National Cyber Security Alliance Board of Directors and remains actively involved in various industry working groups focused on threat intelligence and privacy.

Global food and beverage company Danone has adopted Databricks’ Data Intelligence Platform to drive improvements in data accuracy and reduce “data-to-decision” time by up to 30 percent. It says data ingestion times are set to drop from two weeks to one day, with fewer issues requiring debugging and fixes. A “Talk to Your Data” chatbot, powered by generative AI and Unity Catalog, will help non-technical users explore data more easily. Built-in tools will support rapid prototyping and deployment of AI models. Secure, automated data validation and cleansing could increase accuracy by up to 95 percent.

ExaGrid announced three new models, adding the EX20, EX81, and EX135 to its line of Tiered Backup Storage appliances, as well as the release of ExaGrid software version 7.2.0. The EX20 has 8 disks that are 8 TB each. The EX81 has 12 disks that are 18 TB each. The EX135 has 18 disks that are 18 TB each. Thirty-two of the EX189 appliances in a single scale-out system can take in up to a 6 PB full backup with 12 PB raw capacity, making it the largest single system in the industry that includes data deduplication. ExaGrid’s line of 2U appliances now includes eight models: EX189, EX135, EX84, EX81, EX54, EX36, EX20, and EX10. Up to 32 appliances can be mixed and matched in a single scale-out system. Any age or size appliance can be used in a single system, eliminating planned product obsolescence.

The product line has also been updated with new Data Encryption at Rest (SEC) options. ExaGrid’s larger appliance models, including the EX54, EX81, EX84, EX135, and EX189, offer a Software Upgradeable SEC option to provide Data Encryption at Rest. SEC hardware models that provide Data Encryption at Rest are also available for ExaGrid’s entire line of appliance models. The v7.2.0 software includes External Key Management (EKM) for encrypted data at rest, support for NetBackup Flex Media Server Appliances with the OST plug-in, support for Veeam S3 Governance Mode, and Dedicated Managed Networks.

Data integration provider Fivetran announced it offers more than 700 pre-built connectors for seamless integration with Microsoft Fabric and OneLake. This integration, powered by Fivetran’s Managed Data Lake Service, enables organizations to ingest data from over 700 connectors, automatically convert it into open table formats like Apache Iceberg or Delta Lake, and continuously optimize performance and governance within Microsoft Fabric and OneLake – without the need for complex engineering effort.

Edge website accelerator Harper announced version 5 of its global application delivery platform. It includes several new features for building, scaling, and running high-performance data-intensive workloads, including the addition of Binary Large Object (Blob) storage for the efficient handling of unstructured, media-rich data (images, real-time videos, and rendered HTML). It says the Harper platform has unified the traditional software stack – database, application, cache, and messaging functions – into a single process on a single server. By keeping data at the edge, Harper lets applications avoid the transit time of contacting a centralized database. Layers of resource-consuming logic, serialization, and network processes between each technology in the stack are removed, resulting in extremely low response times that translate into greater customer engagement, user satisfaction, and revenue growth.

Log data lake startup Hydrolix has closed an $80 million C-round of funding, bringing its total raised to $148 million. It has seen an eightfold sales increase in the past year, with more than 400 new customers, and is building sales momentum behind a comprehensive channel strategy. The cornerstone of that strategy is a partnership with Akamai, whose TrafficPeak offering is a white label of Hydrolix. Additionally, Hydrolix recently added Amazon Web Services as a go-to-market (GTM) partner and built connectors for massive log-data front-end ecosystems like Splunk. These and similar efforts have driven the company’s sales growth, and the Series C is intended to amplify this momentum.

Cloud data management supplier Informatica has appointed Krish Vitaldevara as EVP and chief product officer, coming from NetApp and Microsoft. This is a big hire. He was an EVP and GM for NetApp’s core platforms and led NetApp’s 2,000-plus-person R&D team responsible for technology including ONTAP, FAS/AFF, and application integration and data protection software. At Informatica, “Vitaldevara will develop and execute a product strategy aligning with business objectives and leverage emerging technologies like AI to innovate and improve offerings. He will focus on customer engagement, market expansion and strategic partnerships while utilizing AI-powered, data-driven decision-making to enhance product quality and performance, all within a collaborative leadership framework.”

Sam King

Cloud file services supplier Nasuni has appointed Sam King as CEO, succeeding Paul Flanagan who is retiring after eight years in the role. Flanagan will remain on the Board, serving as Non-Executive Chairman. King was previously CEO of application security platform supplier Veracode from 2019 to 2024.

Object First has announced three new Ootbi object storage backup appliances for Veeam, with new entry-level 20 and 40 TB capacities and a range-topping 432 TB model, plus new firmware delivering 10-20 percent faster recovery speeds across all models. The 432 TB model supports ingest speeds of up to 8 GBps in a four-node cluster, double the previous speed. New units are available for purchase immediately worldwide.

OpenDrives is bringing a new evolution of its flagship Atlas data storage and management platform to the 2025 NAB Show. The latest Atlas release provides cost predictability and economical scalability through an unlimited capacity pricing model; targeted, composable feature bundles that deliver high performance without paying for unnecessary features; greater flexibility and freedom of choice via new certified hardware options; and intelligent data management through the company’s next-generation Atlas Performance Engine. OpenDrives has expanded certified hardware options to include the Seagate Exos E JBOD expansion enclosures.

Other World Computing announced the release of OWC SoftRAID 8.5, its RAID management software for macOS and Windows, “with dozens of enhancements,” delivering “dramatic increases in reliability, functionality, and performance.” It also announced the OWC Archive Pro Ethernet network-based LTO backup and archiving system with drag-and-drop simplicity, up to 76 percent cost savings versus HDD storage, a 501 percent ROI, and full macOS compatibility.

OWC Archive Pro

Percona is collaborating with Red Hat: Percona Everest will now support OpenShift, so users can run a fully open source platform for “database as a service” style instances on their own private or hybrid cloud. The combination of Everest as a cloud-native database platform with Red Hat OpenShift allows users to implement their choice of database in their choice of locations – from on-premises datacenter environments through to public cloud and hybrid cloud deployments.

Perforce Delphix announced GA of Delphix Compliance Services, a data compliance product built in collaboration with Microsoft. It offers automated AI and analytics data compliance supporting over 170 data sources and natively integrated into Microsoft Fabric pipelines. The initial release of Delphix Compliance Services is pre-integrated with Microsoft Azure Data Factory and Microsoft Power BI to natively protect sensitive data in Azure and Fabric sources as well as other popular analytical data stores. The next phase of this collaboration adds a Microsoft Fabric Connector.

Perforce is a Platinum sponsor at the upcoming 2025 Microsoft Fabric Conference (FabCon) jointly sponsoring with PreludeSys. It will be demonstrating Delphix Compliance Services and natively masking data for AI and analytics in Fabric pipelines at booth #211 and during conference sessions.

Pliops has announced a strategic collaboration with the vLLM Production Stack developed by LMCache Lab at the University of Chicago, aimed at improving large language model (LLM) inference performance. The vLLM Production Stack is an open source reference implementation of a cluster-wide full-stack vLLM serving system. Pliops has developed XDP (Extreme Data Processor) key-value store technology, with its AccelKV software running in an FPGA or ASIC to accelerate low-level storage stack processing such as RocksDB, and has announced a LightningAI unit based on this tech.

Pure Storage is partnering with CERN to develop DirectFlash storage for Large Hadron Collider data. Through a multi-year agreement, Pure Storage’s data platform will support CERN openlab to evaluate and measure the benefits of large scale high-density storage technologies. Both organizations will optimize exabyte-scale flash infrastructure, and the application stack for Grid Computing and HPC workloads, identifying opportunities to maximize performance in both software and hardware while optimizing energy savings across a unified data platform.

Seagate has completed the acquisition of Intevac, a supplier of thin-film processing systems, for $4.00 per share, with 23,968,013 Intevac shares being tendered. Intevac is now a wholly owned subsidiary of Seagate. Wedbush said: “We see the result as positive for STX given: 1) we believe media process upgrades are required for HAMR and the expense of acquiring and operating IVAC is likely less than the capital cost for upgrades the next few years and 2) we see the integration of IVAC into Seagate as one more potential hurdle for competitors seeking to develop HAMR, given that without an independent IVAC, they can no longer leverage the sputtering tool maker’s work to date around HAMR (with STX we believe using IVAC exclusively for media production).”

DSPM provider Securiti has signed a strategic collaboration agreement (SCA) with Amazon Web Services (AWS). AWS selected Securiti to help enterprise customers safely use their data with Amazon Bedrock’s foundation models, integrating Securiti’s Gencore AI platform to enable compliant, secure AI development with structured and unstructured data. Securiti says its Data Command Graph provides contextual data intelligence and identification of toxic combinations of risk, including the ability to correlate fragmented insights across hundreds of metadata attributes such as data sensitivity, access entitlements, regulatory requirements, and business processes. It also claims to offer the following:

  • Advanced automation streamlines remediation of data risks and compliance with data regulations.
  • Embedded regulatory insights and automated controls enable organizations to align with emerging AI regulations and frameworks such as EU AI Act and NIST AI RMF. 
  • Continuous monitoring, risk assessments and automated tests streamline compliance and reporting. 

Spectra Logic has launched the Rio Media Suite, which it says is simple, modular and affordable software to manage, archive and retrieve media assets across a broad range of on-premises, hybrid and cloud storage systems. It helps break down legacy silos, automates and streamlines media workflows, and efficiently archives media. It is built on MediaEngine, a high-performance media archiver that orchestrates secure access and enables data mobility between ecosystem applications and storage services.

A variety of app extensions integrate with MediaEngine to streamline and simplify tasks such as creating and managing lifecycle policies, performing partial file restores, and configuring watch folders to monitor and automatically archive media assets. The modular MAP design of Rio Media Suite allows creative teams to choose an optimal set of features to manage and archive their media, with the flexibility to add capabilities as needs change or new application extensions become available.

Available object and file storage connectors enable a range of Spectra Logic and third-party storage options, including Spectra BlackPearl storage systems, Spectra Object-Based Tape, major third-party file and object storage systems, and public cloud object storage services from leading providers such as AWS, Geyser Data, Google, Microsoft and Wasabi.

A live demonstration of Rio Media Suite software will be available during exhibit hours on April 6-9, 2025, in the Spectra Logic booth (SL8519) at NAB Show, Las Vegas Convention Center, Las Vegas, Nevada. Rio Media Suite software is available for Q2 delivery.

Starfish Storage, which provides metadata-driven unstructured data management, is being used at Harvard’s Faculty of Arts and Sciences Research Computing group to manage more than 60 PB involving over 10 billion files across 600 labs and 4,000 users. In year one it delivered $500,000 in recovered chargeback, year two hit $1.5 million, and it’s on track for $2.5 million in year three. It also identified 20 PB of reclaimable storage, with researchers actively deleting what they no longer need. Starfish picked up a 2025 Data Breakthrough Award for this work in the education category.

Decentralized (Web3) storage supplier Storj announced a macOS client for its new Object Mount product. It joins the Windows client announced in Q4 2024 and the Linux client, launched in 2022. Object Mount delivers “highly responsive, POSIX-compliant file system access to content residing in cloud or on-premise object storage platforms, without changing the data format.” Creative professionals can instantly access content on any S3-compatible or blob object storage service, as if they were working with familiar file storage systems. Object Mount is available for users of any cloud platform or on-premise object storage vendor. It is universally compatible and does not require any data migration or format conversion. 

Media-centric shared storage supplier Symply is partnering with DigitalGlue to integrate “DigitalGlue’s creative.space software with Symply’s high-performance Workspace XE hardware, delivering a scalable and efficient hybrid storage solution tailored to the needs of modern content creators. Whether for small post-production teams or large-scale enterprise environments, the joint solution ensures seamless workflow integration, enhanced performance, and simplified management.”

An announcement from DDN’s Tintri subsidiary says: “Tintri, leading provider of the world’s only workload-aware, AI-powered data management solutions, announced that it has been selected as the winner of the ‘Overall Data Storage Company of the Year’ award in the sixth annual Data Breakthrough Awards program conducted by Data Breakthrough, an independent market intelligence organization that recognizes the top companies, technologies and products in the global data technology market today.”

We looked into the Data Breakthrough Awards program. There are several categories in these awards with multiple sub-category winners in each category: Data Management (13), Data Observability (4), Data Analytics (10), Business Intelligence (4), Compute and Infrastructure (6), Data Privacy and Security (5), Open Source (4), Data Integration and Warehousing (5), Hardware (4), Data Storage (6), Data Ops (3), Industry Applications (14) and Industry Leadership (11). That’s a whopping 89 winners.

In the Data Management category we find 13 winners with DataBee the “Solution of the Year” and VAST Data the “Company of the Year.” Couchbase is the “Platform of the Year” and Grax picks up the “Innovation of the Year” award.

The six Data Storage category winners include Tintri as “Overall Data Storage Company of the Year.”

The award structure may strike some as unusual. The judging process details can be found here.

Cloud, disaster recovery, and backup specialist virtualDCS has announced a new senior leadership team as it enters a new growth phase after investment from private equity firm MonacoSol, which was announced last week. Alex Wilmot steps in as CEO, succeeding original founder Richard May, who moves into a new role as product development director. Co-founder Dan Nichols returns as CTO, while former CTO John Murray transitions to solutions director. Kieran Brady also joins as chief revenue officer (CRO) to drive the company’s next stage of expansion.

S3 cheaper-than-AWS cloud storage supplier Wasabi has achieved Federal Risk and Authorization Management Program (FedRAMP) Ready status, and announced its cloud storage service for the US Federal Government. Wasabi is now one step closer to full FedRAMP authorization, which will allow more Government entities to use its cloud storage service.

Software RAID supplier Xinnor announced successful compatibility testing of an HA multi-node cluster system combining its xiRAID Classic 4.2 software and the Ingrasys ES2000 Ethernet-attached Bunch of Flash (EBOF) platform. It supports up to 24 hot-swap NVMe SSDs and is compatible with Pacemaker-based HA clusters. Xinnor plans to fully support multi-node clusters based on Ingrasys EBOFs in upcoming xiRAID releases. Get a full PDF tech brief here.

Quesma bridges Elasticsearch and SQL, promises faster, cheaper queries

Quesma has built a gateway between Elasticsearch EQL and SQL-based databases like ClickHouse, claiming it gives EQL users access to faster and cheaper data stores.

Jacek Migdal

EQL (Elastic Query Language) is used by tools such as Kibana, Logstash, and Beats. Structured Query Language (SQL) is the 50-year-old standard for accessing relational databases. Quesma co-founder Jacek Migdal, who previously worked at Sumo Logic, says that Elasticsearch is designed for Google-style searches, but 65 percent of the use cases come from observability and security, rather than website search. The majority of telcos have big Elastic installations. However, Elastic is 20x slower at answering queries than the SQL-accessed ClickHouse relational database.

Quesma lets users carry on using Elastic as a front end while translating EQL requests to SQL using a dictionary generated by an AI model. Migdal and Pawel Brzoska founded Quesma in Warsaw, Poland, in 2023, and raised €2.1 million ($2.3 million) in pre-seed funding at the end of that year.

The company partnered with streaming log data lake company Hydrolix in October 2024, as Hydrolix produces a ClickHouse-compatible data lake. Quesma lets Hydrolix customers continue using EQL-based queries, redirecting them to the SQL used by ClickHouse. Its software acts as a transparent proxy.

How Quesma works

Hydrolix now has a Kibana compatibility feature powered by Quesma’s smart translation technology. It enables Kibana customers to connect their user interface to the Hydrolix cloud and its ClickHouse data store. This means Elasticsearch customers can migrate to newer SQL databases while continuing to use their Elastic UI.

Quesma enables customers to avoid difficult and costly all-in-one database migrations and do gradual migrations instead, separating the front-end access from the back-end database. Migdal told an IT Press Tour briefing audience: “We are using AI internally to develop rules to translate Elasticsearch storage rules to ClickHouse [and other] rules. AI produces the dictionary. We use two databases concurrently to verify rule development.”

Although AI is used to produce the dictionary, it is not used, in the inference sense, at run time by customers. Migdal said: “Customers won’t use AI inferencing at run time in converting database interface languages. They don’t want AI there. Their systems may not be connected to the internet.”
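To make the idea concrete, here is a minimal sketch of dictionary-driven translation from Elasticsearch’s query DSL to ClickHouse-style SQL. The rule table and function names are illustrative only – Quesma’s real dictionary is AI-generated and far more extensive:

```python
# Illustrative sketch of dictionary-driven query translation, in the spirit
# of Quesma's proxy. The rules below are hypothetical, not Quesma's code.

def es_filter_to_sql(clause: dict) -> str:
    """Translate one Elasticsearch query DSL clause into a SQL predicate."""
    kind, body = next(iter(clause.items()))
    if kind == "term":          # exact match -> equality
        field, value = next(iter(body.items()))
        return f"{field} = '{value}'"
    if kind == "range":         # range query -> comparison operators
        field, bounds = next(iter(body.items()))
        ops = {"gte": ">=", "gt": ">", "lte": "<=", "lt": "<"}
        return " AND ".join(f"{field} {ops[k]} '{v}'" for k, v in bounds.items())
    if kind == "match":         # full-text match -> LIKE (a crude stand-in)
        field, value = next(iter(body.items()))
        return f"{field} LIKE '%{value}%'"
    raise NotImplementedError(kind)

def es_query_to_sql(table: str, query: dict) -> str:
    """Translate a bool/must query into a SELECT against a ClickHouse table."""
    predicates = [es_filter_to_sql(c) for c in query["bool"]["must"]]
    return f"SELECT * FROM {table} WHERE " + " AND ".join(predicates)

print(es_query_to_sql("logs", {
    "bool": {"must": [
        {"term": {"status": "500"}},
        {"range": {"timestamp": {"gte": "2025-04-01"}}},
    ]}
}))
# SELECT * FROM logs WHERE status = '500' AND timestamp >= '2025-04-01'
```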

Its roadmap has a project to add pipe syntax extensions to SQL, so that the SQL operator syntax order matches the semantic evaluation order, making it easier to understand:

Quesma pipe syntax example
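For illustration, here is the same query written both ways. The pipe form follows published SQL pipe-syntax proposals; Quesma’s final extension syntax may differ:

```python
# Hypothetical before/after. The pipe form follows published SQL
# pipe-syntax proposals; it is not Quesma's confirmed syntax.

standard_sql = """
SELECT status, COUNT(*) AS hits
FROM logs
WHERE timestamp >= '2025-04-01'
GROUP BY status
ORDER BY hits DESC
LIMIT 10
"""

# Pipe syntax: each |> step reads in the order it is evaluated -
# source first, then filter, then aggregate, then sort, then limit.
pipe_sql = """
FROM logs
|> WHERE timestamp >= '2025-04-01'
|> AGGREGATE COUNT(*) AS hits GROUP BY status
|> ORDER BY hits DESC
|> LIMIT 10
"""
```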

Quesma is also using its AI large language model experience to produce a charting app, interpreting natural language prompts, such as “Plot top 10 languages, split by native and second language speakers”, to create and send requests to apps like Tableau.

Panzura Symphony taps IBM tape tech to cut cloud costs for cold data

Symphony, Panzura’s unstructured data estate manager, has extended its reach into IBM Storage Deep Archive territory, integrating with IBM’s S3-accessed Diamondback tape libraries.

Symphony is Panzura’s software for discovering and managing exabyte-scale unstructured data sets, featuring scanning, tiering, migration, and risk and compliance analysis. It is complementary to Panzura’s original and core CloudFS hybrid cloud file services offering supporting large-scale multi-site workflows and collaboration using active, not archived, data. The IBM Storage Deep Archive is a Diamondback TS6000 tape library, storing up to 27 PB of LTO-9 data in a single rack with 16.1 TB/hour (4.47 GBps) performance. It’s equipped with an S3-accessible front end, similar to the file-based LTFS.

Sundar Kanthadai

Sundar Kanthadai, Panzura CTO, stated that this Panzura-IBM offering “addresses surging cold data volumes and escalating cloud fees by combining smart data management with ultra-low-cost on-premises storage, all within a compact footprint.” 

Panzura Product SVP Mike Harvey added: “This integration allows technologists to escape the trap of unpredictable access fees and egress sticker shock.” 

The Symphony-Deep Archive integration uses S3 Glacier Flexible Retrieval storage classes to “completely automate data transfers to tape.” Customers use Symphony to scan an online unstructured data estate and move metadata-tagged cold data to the IBM tape library, freeing up SSD and HDD storage capacity while keeping the data on-prem. Embedded file metadata is automatically added to Symphony’s data catalog, which spans more than 500 data types and is searchable and accessible via API and Java Database Connectivity (JDBC) requests.

Specific file recall and deletion activity can be automated through policy settings.
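For a sense of the mechanics, here is a hedged sketch of what automated tiering to a Glacier Flexible Retrieval storage class looks like at the S3 API level, using standard boto3 calls. The endpoint URL and bucket name are hypothetical stand-ins for a Deep Archive deployment’s S3 front end, not Symphony’s actual code:

```python
# Minimal sketch, assuming an S3-compatible front end like IBM Storage Deep
# Archive's. Endpoint, credentials, and bucket are hypothetical stand-ins.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://deep-archive.example.internal",  # hypothetical
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

# Push a cold file with the Glacier Flexible Retrieval storage class.
with open("render_archive_2023.mov", "rb") as f:
    s3.put_object(
        Bucket="cold-data",                 # hypothetical bucket
        Key="projects/render_archive_2023.mov",
        Body=f,
        StorageClass="GLACIER",             # Glacier Flexible Retrieval
        Metadata={"tier": "cold", "source": "symphony-scan"},
    )

# Recall later with restore_object, then GET once the restore completes.
s3.restore_object(
    Bucket="cold-data",
    Key="projects/render_archive_2023.mov",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```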

Panzura’s Symphony can access more than 400 file formats via a deal with GRAU Data for its Metadata Hub software. It is already integrated with IBM’s Fusion Data Catalog, which provides unified metadata management and insights for heterogeneous unstructured data, on-premises and in the cloud, and Storage Fusion. IBM Storage Fusion is a containerized solution derived from Spectrum Scale and Spectrum Protect data protection.

According to IBM, Deep Archive is much more affordable than public cloud alternatives, “offering object storage for cold data at up to 83 percent lower cost than other service providers, and importantly, with zero recall fees.” 

Panzura says the IBM Deep Archive-Symphony deal is “particularly crucial for artificial intelligence (AI) workloads,” because it can make archived data accessible to AI model training and inference workloads. 

It claims the Symphony IBM Deep Archive integration enables users to streamline data archiving processes and “significantly reduce cloud and on-premises storage expenses.” The combined offering is available immediately.

Don’t let cyberattacks keep you down


SPONSORED POST: It’s not a question of if your organization gets hit by a cyberattack – only when, and how quickly it recovers.

Even small amounts of application and service downtime can cause massive disruption to any business. So being able to get everything back online in minutes rather than hours, or even days, can be the key to resilience.

But modern workloads rely on increasingly large volumes of data to function efficiently. What used to involve gigabytes of critical information now needs petabytes, and making sure all of that data can be restored immediately when that cyber security incident hits is definitely no easy task.

It’s a challenge that Infinidat’s enterprise storage solutions for next-generation data protection and recovery were built to help address, using AI-based deep machine learning techniques to speed up the process. At their core are InfiniSafe cyber resilience and recovery storage solutions which provide immutable snapshot recovery, local or remote air gaps, and fenced forensic environments to deliver a near-instantaneous guaranteed Service Level Agreement (SLA) recovery from cyberattacks, says the company.

Watch this Hot Seat video to see Infinidat CMO Eric Herzog tell The Register’s Tim Phillips exactly how Infinidat can help you withstand cyberattacks.

InfiniSafe Automated Cyber Protection (ACP) uses application program interfaces (APIs) to integrate with a range of third-party Security Operations Centers (SOC), Security Information and Event Management (SIEM) and Security Orchestration and Response (SOAR) platforms. It automatically triggers an immediate immutable data snapshot based on the input from the cybersecurity packages. Then, you can configure it to use InfiniSafe Cyber Detection to start AI-based scanning of those immutable snapshots to see if malware or ransomware is present.
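The pattern resembles a SOAR playbook step calling a storage REST API when an alert fires. The sketch below is illustrative only – the endpoint, payload, and severity threshold are invented and do not document Infinidat’s actual InfiniSafe API:

```python
# Illustrative pattern only: a SOAR playbook step that reacts to a SIEM
# alert by calling a storage array's REST API to take an immutable
# snapshot. URL, payload, and token are hypothetical, not Infinidat's API.
import requests

def on_siem_alert(alert: dict) -> None:
    if alert["severity"] < 7:       # only act on high-severity detections
        return
    resp = requests.post(
        "https://array.example.internal/api/snapshots",  # hypothetical
        headers={"Authorization": "Bearer <token>"},
        json={
            "volume": alert["affected_volume"],
            "immutable": True,                 # cannot be altered or deleted
            "label": f"acp-{alert['id']}",     # tie snapshot to the alert
        },
        timeout=10,
    )
    resp.raise_for_status()

on_siem_alert({"id": "inc-0412", "severity": 9, "affected_volume": "vol7"})
```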

Those capabilities are supplemented by the InfiniBox storage array, which uses a native software-defined operating system, Neural Cache and a 3-way active controller architecture to deliver immutable snapshot recovery that is guaranteed in under a minute.

You can find out more about Infinidat’s enterprise storage solutions for next-generation data protection and recovery by clicking this link.

Sponsored by Infinidat.

CoreWeave tops new GPU cloud rankings from SemiAnalysis

Research firm SemiAnalysis has launched its ClusterMAX rating system to evaluate GPU cloud providers, with performance criteria that include networking, management software, and storage capabilities.

SemiAnalysis aims to help organizations evaluate GPU cloud providers – both hyperscalers like AWS, Azure, GCP, and Oracle Cloud, and what it calls “Neoclouds,” a group of newer GPU-focused providers. The initial list includes 131 companies. There are five rating classifications: Platinum, Gold, Silver, Bronze, and Underperforming. It classifies GPU cloud suppliers into trad hyperscalers, neocloud giants, and emerging and sovereign neoclouds, and adds brokers, platforms, and aggregators to the GPU cloud market along with management software and VC clusters.

The research company states: “The bar across the GPU cloud industry is currently very low. ClusterMAX aims to provide a set of guidelines to help raise the bar across the whole GPU cloud industry. ClusterMAX guidelines evaluate features that most GPU renters care about.”

VAST Data co-founder Jeff Denworth commented that the four neocloud giants “have standardized on VAST Data” with the trad hyperscalers using “20-year-old technology.”

SemiAnalysis says the two main storage frustration areas “are when file volumes randomly unmount and when users encounter the Lots of Small File (LOSF) problem.” A program called “autofs” will automatically keep a file system mounted.

“The LOSF problem can easily be avoided as it is only an issue if you decide to roll out your own storage solution like an NFS-server instead of paying for a storage software vendor like WEKA or VAST. An end user will very quickly notice an LOSF problem on the cluster as the time even to import PyTorch into Python will lead to a complete lag out if an LOSF problem exists on the cluster.”
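The effect is easy to reproduce. This small script, assuming only a POSIX file system, reads the same number of bytes as 10,000 small files and as one large file; per-file open/close and metadata round trips are what hurt, which is exactly what importing PyTorch (thousands of small .py files) exercises:

```python
# Quick illustration of why Lots of Small Files (LOSF) hurts: reading the
# same bytes as many small files pays a per-file open/close and metadata
# round trip each time - painful on a naive NFS server.
import os, time

os.makedirs("losf_test", exist_ok=True)
payload = b"x" * 4096

for i in range(10_000):                      # 10,000 x 4 KiB small files
    with open(f"losf_test/f{i}", "wb") as f:
        f.write(payload)
with open("one_big_file", "wb") as f:        # the same 40 MiB in one file
    f.write(payload * 10_000)

t0 = time.perf_counter()
for i in range(10_000):
    with open(f"losf_test/f{i}", "rb") as f:
        f.read()
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
with open("one_big_file", "rb") as f:
    f.read()
t_big = time.perf_counter() - t0

print(f"10,000 small files: {t_small:.3f}s, one big file: {t_big:.3f}s")
# On network file systems the gap is often orders of magnitude.
```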

The report reckons that “efficient and performant storage solutions are essential for machine learning workloads, both for training and inference” and “high-performance storage is needed for model checkpoint loads” during training. It mentions Nvidia’s Inference Transfer Library (NIXL) as helping here.

During training, “managed object storage options are equally crucial for flexible, cost-effective, and scalable data storage, enabling teams to efficiently store, version, and retrieve training datasets, checkpoints, and model artifacts.”

On the inference side, “performance-oriented storage ensures that models are loaded rapidly from storage in production scenarios. Slow or inefficient storage can cause noticeable delays, degrading the end-user experience or reducing real-time responsiveness of AI-driven applications.”

“It is, therefore, vital to assess whether GPU cloud providers offer robust managed parallel file system and object storage solutions, ensuring that these options are optimized and validated for excellent performance across varied workloads.”

In general, SemiAnalysis sees that “most customers want managed high-performance parallel file systems such as WEKA, Lustre, VAST Data, DDN, and/or want a managed S3-compatible object storage.”

The report also examines the networking aspects of GPU server rental.

Ratings

There is only one cloud in the top-rated Platinum category, CoreWeave. “Enterprises mainly rent GPUs from Hyperscalers + CoreWeave. Enterprises rarely rent from Emerging Neoclouds,” the report says.

Gold tier providers are Crusoe, Nebius, Oracle, Azure, Together AI, and LeptonAI. The silver tier providers are AWS, Lambda, Firmus/Sustainable Metal Cloud, and Scaleway. The bronze tier includes Google Cloud, DataCrunch, TensorWave, and other unnamed suppliers. The report authors say: “We believe Google Cloud is on a Rocketship path toward ClusterMAX Gold or ClusterMAX Platinum by the next time we re-evaluate them.”

The underperformers, such as Massed Compute and SaladCloud, are described as “not having even basic security certifications, such as SOC 2 or ISO 27001. Some of these providers also fall into this category by hosting underlying GPU providers that are not SOC 2 compliant either.”

Full access to the report is available to SemiAnalysis subscribers via the company’s website.

Commvault teams with SimSpace to launch cyberattack training

Commvault has entered a deal with SimSpace offering customers a way to learn how to react and respond to a cyberattack in a simulated environment with training exercises.

SimSpace produces such environments, called cyber ranges. These are hands-on virtual environments – “interactive and simulated platforms that replicate networks, systems, tools, and applications. They provide a safe and legal environment for acquiring hands-on cyber skills and offer a secure setting for product development and security posture testing.” A downloadable NIST document tells you more. The deal with SimSpace means Commvault is now offering the Commvault Recovery Range, powered by SimSpace, which models a customer’s environment and simulates a cyberattack.

Bill O’Connell

Commvault CSO Bill O’Connell said: “Together with SimSpace, we are offering companies something that’s truly unique in the market – the physical, emotional, and psychological experience of a real-world cyberattack and the harrowing challenges often experienced in attempting to rapidly recover.”  

By “combining SimSpace’s authentic cyberattack simulations with Commvault’s leading cyber recovery capabilities, we’re giving companies the ability to strengthen their security posture, cyber readiness,  and business resilience.” 

The main idea is to prepare cyber defenders to respond effectively when an attack happens. By going through cyber range training, they get:

  • Hands-on attack simulations with defenders working in a “hyper-realistic environment that mirrors their actual networks, infrastructure, and day-to-day operations – complete with simulated users logging in and out, sending emails, and interacting with applications.” The defenders face attacks, like Netwalker, that can be challenging to detect, and are “forced to make decisions and execute strategic responses under pressure as the clock is ticking.”
  • Exercises with no-win recovery scenarios and learning “the hard way the importance of validating backups, cleaning infected data, and executing swift restorations.” 
  • Drills that bring disparate teams together with CSOs, CISOs, CIOs, IT Ops, and SecOps working together to emerge with a cohesive strategy for handling crises and restoring core services swiftly. 

We should think in terms of training exercises almost akin to military war gaming, with attack scenarios, response drills, and ad hoc groups of people brought together in a reaction team so they can understand their minimum viability: the critical applications, assets, processes, and people required for an organization to recover following a cyberattack.

Recovery exercises include using Commvault Cloud for threat scanning, Air Gap Protect for immutable storage, Cleanroom Recovery for on-demand recovery testing, and Cloud Rewind to automatically rebuild cloud-native apps. Commvault says these components enable defenders to recover their business without reinfecting it.

Phil Goodwin, research VP at IDC, commented on the Commvault-SimSpace deal, saying: “This is a huge advancement in modern cyber preparedness training.” 

Commvault and SimSpace will be showcasing Commvault Recovery Range during RSAC 2025 from April 28 to May 1 in San Francisco at the Alloy Collective. You can get a taste of that here.

Keepit plans to back up hundreds of SaaS apps by 2028

Self-hosted SaaS backup service business Keepit intends to back up hundreds of different SaaS apps by 2028, starting from just seven this year.

The seven are Jira, Bamboo, Okta, Confluence, DocuSign, Miro, and Slack, with the ultimate goal of full coverage for all SaaS applications used by enterprises, spanning HR, finance, sales, production, and more. This ambitious scope rivals that of HYCU back in 2023 with its connectors – an API scheme for SaaS app suppliers. This resulted in 50 SaaS app connectors in November that year and almost 90 a year later.

Keepit says the average enterprise uses approximately 112 SaaS applications, according to BetterCloud research. Keepit cites a Gartner report saying that by 2028, 75 percent of enterprises will prioritize backup of SaaS applications as a critical requirement, compared to just 15 percent in 2024. 

Michael Amsinck

Michael Amsinck, Keepit Chief Product and Technology Officer (CPTO) stated: “Legacy backup and recovery solutions are not able to adapt and scale to rise to that challenge. Having a platform that is purpose-built for the cloud is a clear advantage to us, because it enables us to build exactly what our customers and the markets need.”

Keepit reckons its Domain-Specific Language (DSL) concept will accelerate development for each application, with them “seamlessly integrating with the unique Keepit platform.” There are no details available explaining how DSL works or which organization – Keepit or the SaaS app supplier – produces the DSL-based connector code enabling Keepit to back up the app. 

The product roadmap also includes anomaly detection with enhanced monitoring, compliance, and security insights, due in early May.

Keepit already protects Microsoft 365, Entra ID, Salesforce, and other mainstream SaaS apps, with what we understand to be the DSL-based approach now used for Jira, Bamboo, Okta, Confluence, DocuSign, Miro, and Slack.

The company says it will “offer a comprehensive backup and recovery solution for all SaaS applications, ensuring full control of data regardless of unforeseen events such as outages, malicious attacks, or human error.”

MinIO joins agentic AI gold rush with MCP support in AIStor

MinIO is staking its claim in the large language model (LLM) market, adding support for the Model Context Protocol (MCP) to its AIStor software – a move sparked by agentic AI’s growing reliance on object storage.

MCP is an Anthropic-supported method for AI agents to connect to proprietary data sources. “Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools,” Anthropic says. As a result, Anthropic’s Claude model can query, read, and write to a customer’s file system storage.

MinIO introduced its v2.0 AIStor software supporting Nvidia GPUDirect, BlueField SuperNICs, and NIM microservices in March. Now it is adding MCP server support so AI agents can access AIStor. A “preview release includes more than 25 commonly used commands, making exploring and using data in an AIStor object store easier than ever.”

Pavel Anni, a MinIO Customer Engineer and Technology Educator, writes: “Agents are already demonstrating incredible intelligence and are very helpful with question answering, but as with humans, they need the ability to discover and access software applications and other services to actually perform useful work … Until now, every agentic developer has had to write their own custom plumbing, glue code, etc. to do this. Without a standard like MCP, building real-world agentic workflows is essentially impossible … MCP leverages language models to summarize the rich output of these services and can present crucial information in a human-readable form.”

The preview release “enables interaction with and management of MinIO AIStor … simply by chatting with an LLM such as Anthropic Claude or OpenAI ChatGPT.” Users can tell Claude to list all object buckets on an AIStor server and then to create a list of objects grouped by categories. Claude then creates a summary list.

Anni contrasts a command line or web user interface request with the Claude and MCP approach: “The command-line tool or web UI would give us a list of objects, as requested. The LLM summarizes the bucket’s content and provides an insightful narrative of its composition. Imagine if I had thousands of objects here. A typical command-line query would give us a long list of objects that could be hard to consume. Here, it gives us a human-readable overview of the bucket’s contents. It is similar to summarizing an article with your favorite LLM client.”

Anni then had Claude add tags to the bucket items. “Imagine doing the same operation without MCP servers. You would have to write a Python script to pull images from the bucket, send them to an AI model for analysis, get the information back, decode it, find the correct fields, apply tags to objects … You could easily spend half a day creating and debugging such a script. We just did it simply using human language in a matter of seconds.”
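Under the hood, MCP is JSON-RPC 2.0: a client first discovers a server’s tools, then the model invokes them. This sketch shows the wire shape of such calls; the tool name and arguments are hypothetical stand-ins for AIStor’s MCP server commands:

```python
# What an MCP interaction looks like on the wire. MCP is JSON-RPC 2.0;
# the client discovers tools, then the model calls them. The tool name
# and arguments are hypothetical stand-ins for AIStor's MCP commands.
import json

# Step 1: the client asks the MCP server what tools it offers.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the model calls a discovered tool, e.g. listing a bucket.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_objects",              # hypothetical AIStor tool
        "arguments": {"bucket": "photos", "max_keys": 100},
    },
}

print(json.dumps(call_tool, indent=2))
# The server returns structured results the LLM can then summarize,
# which is how Claude turns a raw object listing into a narrative.
```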

There is more information about AIStor and MCP in Anni’s blog.

Pure adds VM support to Portworx amid VMware shakeup

Pure Storage’s Portworx is looking to win over customers wishing to migrate their virtual machines to Kubernetes by adding VM support to its container storage software product.

Businesses and public sector customers can keep using existing VMs on Kubernetes while refactoring old apps or creating entirely new cloud-native ones using Kubernetes-orchestrated containers. VMware’s Tanzu offering added container support to vSphere. Pure is now taking the opposite approach by adding VM support to its Portworx offering. Pure positions this move in the broader context of Broadcom’s 2023 acquisition of VMware and the subsequent pricing changes that have affected VMware customers.

It says 81 percent of enterprises that participated in a 2024 survey of Kubernetes experts plan to migrate their VMware VMs to Kubernetes over the next five years, with almost two-thirds intending to do so within the next two years. v3.3 of the Portworx Enterprise software will add this VMware VM support and is projected to deliver 30 to 50 percent cost savings for customers moving VMs to containers.

Mitch Ashley, VP and Practice Lead, DevOps and Application Development at Futurum, stated: “With Portworx 3.3, Pure Storage is bringing together a scalable data management platform with a simplified workflow across containers and VMs. That’s appealing to enterprises modernizing their infrastructure, pursuing cloud-native applications, or both.”

v3.3 provides a single workflow for VM and cloud-native apps instead of having separate tools and processes. It will support VMs running on Kubernetes in collaboration with Red Hat, SUSE, Kubermatic, and Spectro Cloud, and deliver the following (a brief sketch follows the list):

  • RWX Block support for KubeVirt VMs running on FlashArray or other storage vendors’ products providing fast read/write capabilities
  • Single management plane, including synchronized disaster recovery for VMs running on Kubernetes with no data loss (zero RPO)
  • File-level backups for Linux VMs, allowing for more granular backup and restore
  • Reference architecture and partner integrations with KubeVirt software from Red Hat, SUSE, Spectro Cloud, and Kubermatic
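As a rough illustration of the first bullet, a KubeVirt VM disk on such storage is typically just a PersistentVolumeClaim in raw block mode with RWX access, so the VM can live-migrate between nodes. This sketch uses the standard Kubernetes Python client; the storage class name is a hypothetical Portworx class, not a documented default:

```python
# A PVC of the kind KubeVirt VM disks use with RWX block support: raw
# block mode, ReadWriteMany access so a VM can live-migrate between nodes.
# The storage class name is a hypothetical Portworx class.
from kubernetes import client, config

config.load_kube_config()
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="vm-disk-0"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],      # RWX: shared across nodes
        volume_mode="Block",                 # raw block device for the VM
        storage_class_name="px-csi-db",      # hypothetical Portworx class
        resources=client.V1ResourceRequirements(requests={"storage": "40Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("vms", pvc)
```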

Portworx Enterprise 3.3 will be generally available by the end of May and you can learn more about it here.

XenData Z20 slings media files from SMB to cloud and back

XenData has launched an on-prem gateway appliance for moving Windows SMB media files to and from public cloud object storage.

XenData provides storage products, such as on-prem X-Series tape archives and Media Portal viewers for the media and entertainment industry and allied customers. The Z20 Cloud Media Appliance is a Windows 11 Pro x86 box that hooks up to a local SMB network and can move files to object storage in the cloud, utilizing both online and archive tiers.

XenData Z20

CEO Dr Phil Storey stated: “The Z20 makes it easy for users to store media files in the cloud and it is especially useful when content is stored on the lower-cost Glacier and Azure Archive tiers. It allows users to check that they are rehydrating the correct media files before incurring rehydration and egress fees. Furthermore, it provides users with a self-service to easily restore individual files without the need to bother IT support staff.”

The system has a multi-tenant, web-based UI. It’s compliant with Microsoft’s security model and can be added to an existing Domain or Workgroup. Remote users are supported using HTTPS when an SSL security certificate is added. Physically, the device is a 1 RU rack-mount appliance with 4 x 1 GbE network ports and options for additional 10 and 25 GbE connectivity.

XenData previously released Cloud File Gateway software, running on Windows 10 Pro and Windows Server, to enable file-based apps to use cloud object storage such as AWS S3, Azure Blob, and Wasabi S3 as an archive facility. In effect, it has updated this software to support deep cloud archives, such as AWS S3 Glacier Deep Archive or Azure’s Archive Tier, and added Media Asset Viewer functionality to provide users with a self-serve capability.

By using the web-based UI, they can display media file previews and change the storage tier for a selected file – rehydrating a file from a deep archive and then downloading it, for example.

The Z20 is available from XenData Authorized Partners worldwide and is priced at $9,880 in the US.

Arcitecta rolls out Mediaflux Real-Time to streamline global media workflows

Arcitecta is rolling out a real-time content delivery and media management offering aimed at media production pros.

Australia-based Arcitecta provides distributed data management software, its Universal Data System, supporting file and object data storage with single namespace and tiering capability covering on-premises SSDs, disk and tape, plus the public cloud, with a Livewire data mover and metadata database. Its Mediaflux Multi-Site, Mediaflux Edge, and Mediaflux Burst products enable geo-distributed workers to collaborate with faster access to shared data more effectively across normal and peak usage times. Mediaflux Real-Time accelerates access speed to provide virtually instant access to media content data.

Jason Lohrey

Jason Lohrey, CEO and founder of Arcitecta, stated: “Mediaflux Real-Time is revolutionary and will power the future of live production, supporting continuous file expansion such as live video streams and enabling editors to work with those files in real-time, even while they are still being created.”

He said Arcitecta’s Livewire data transfer module “securely moves millions or billions of files at light speed” to accelerate workflows. “In pre-release previews, broadcasters have praised Mediaflux Real-Time as ‘a game-changer’ for live broadcast, live sports, and media entertainment production.”

Mediaflux Real-Time is hardware, file-type, and codec agnostic, delivering centralized content management, network optimization, collaboration tools, security, and cost efficiency. Customers can organize storage and metadata for easy access and retrieval, have a reliable infrastructure for handling large file transfers, and use version control and integrated feedback systems. They can share content with multiple locations in real time and grow files with live content. The content can be protected with encryption and access controls. 

Arcitecta Mediaflux LiveWire with Dell PowerScale and ECS

Arcitecta is aiming the product at editors in the sports production, broadcast, and media entertainment environments who need access to growing video file content “for live productions and rapid post-event workflows. Editors working remotely often experience delays due to slow transfers and playback speeds, which extend the time to the final product.” Remote editors can work collaboratively, creating highlight reels or editing live footage almost instantly, “dramatically cutting post-production time.”

Mediaflux Real-Time supports real-time editing with faster content delivery, removes single-location workflow bottlenecks, and enhances remote collaboration. Content can be played back in real-time across sites. It “eliminates the need to buy and configure dedicated streams or connections to each editing location, requiring only a single stream to transfer the data to multiple sites – reducing cost and infrastructure requirements.”

We asked Arcitecta how Mediaflux Real-Time differs from the 2024 release of Livewire. Lohrey told us: “Mediaflux Real-Time is a file system (shim) that intercepts all file system traffic and uses Livewire to transport changes to other locations/file systems in real-time.”

“Livewire is a system/fabric that can be asked to transmit a set of data from A to N destinations. What is different here is that we are transmitting file system operations as they happen. For that to happen, our file system end point is in the data path and dispatching changes/modifications to other end-points with Livewire. That is, we have tapped into (by being in the data path) the file system and teeing off the modifications as they happen.” In practice, this means (sketched in code after the list):

  • I make a file -> transmitted
  • I rename a file -> transmitted
  • I write to a file -> transmitted
  • I delete a file -> transmitted (although the receiving end may decide not to honor that)
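A rough analogy in code: the sketch below tees file system events to a replication function using Python’s watchdog library. Unlike Mediaflux Real-Time, which sits in the data path itself, watchdog only observes events after the fact, and replicate() here is a hypothetical stand-in for a Livewire-style transport – but the create/rename/write/delete teeing has the same shape:

```python
# Illustrative tee of file system events to a replicator, using the
# watchdog library. This observes events after the fact, whereas
# Mediaflux Real-Time intercepts them in the data path itself.
# replicate() is a hypothetical stand-in for a Livewire-style transport.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def replicate(op, path, dest=None):
    print(f"transmit: {op} {path}" + (f" -> {dest}" if dest else ""))

class TeeHandler(FileSystemEventHandler):
    def on_created(self, event):   replicate("create", event.src_path)
    def on_moved(self, event):     replicate("rename", event.src_path, event.dest_path)
    def on_modified(self, event):  replicate("write", event.src_path)
    def on_deleted(self, event):   replicate("delete", event.src_path)

observer = Observer()
observer.schedule(TeeHandler(), path="/mnt/media", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```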

Mediaflux Real-Time is available immediately. It is part of the Mediaflux and Livewire suite and works seamlessly with a wide range of storage and infrastructure solutions and protocols.

Arcitecta and Dell Technologies will showcase Mediaflux Real-Time, combined with Dell PowerScale and ECS, in the Dell Technologies booth #SL4616 at the NAB Show, April 6-9, at the Las Vegas Convention Center.

Storage news ticker – March 31

Amazon Web Services (AWS) wants to build a two-story datacenter for tape storage in Middlesex, England. Its building application has been granted. The planning documents say: “This datacenter will be a data repository which requires significantly less power consumption than a typical datacenter. This building will be designed to house tape media that provides a long-term data storage solution for our customers. It will utilize magnetic tape media.” The lucky tape system supplier has not been identified.

AWS tape storage datacenter planned for Hayes, Middlesex

CoreWeave dropped its IPO target to $40/share for 37.7 million shares, valuing it at around $23 billion and raising $1.5 billion. It had planned to sell them for between $47 and $55/share, with 49 million shares on offer, valuing it at up to $32 billion and raising up to $2.7 billion. The company reported 2024 revenues of almost $2 billion with a net loss of $863 million. CoreWeave shares should be available on Nasdaq today. It is thought Microsoft’s reported withdrawal from datacenter leases, implying a lower-than-expected growth rate for GPU-heavy AI processing, spooked investors during CoreWeave’s pre-IPO investor roadshow.

DDN has been recognized as a winner of the 2025 Artificial Intelligence Excellence Awards by the Business Intelligence Group for its Infinia 2.0 storage system. See here for more details on the awards and a complete list of winners and finalists.

HighPoint announced its RocketStore RS654x Series NVMe RAID enclosures, measuring less than 5 inches tall and 10 inches long, with PCIe 4.0 x16 switch architecture, built-in RAID 0, 1, and 10 technology, and up to 28 GBps transfer speeds. These four- and eight-bay enclosures are specifically designed for 4K and 8K video editing, 3D rendering, and other high-resolution applications.

IBM announced Storage Ceph as a Service so clients can leverage the block+file+object storage software as a fully managed, cloud storage experience on-premises. It’s designed to reduce operational costs by aligning spending with actual usage, avoiding under-utilization and over-provisioning, and scaling on-demand. Prices start at $0.026/GB/month. More information here.

NVMe TCP-connected block storage supplier Lightbits has an educational blog focusing on block storage, which “is evolving into a critical component of high-performance, accelerated data pipelines.” Read it here.

Microsoft has announced new capabilities for Azure NetApp Files (ANF): 

  • A flexible service level separates throughput and capacity pricing, saving customers 10-40 percent – think of it as a “pay for the capacity you need, and scale the performance as you grow” model.
  • Application Volume Groups are now available for Oracle and SAP workloads, simplifying management and optimizing performance.
  • A new cool access tier with a snapshot-only policy offers a cost-effective solution for managing snapshots – allowing customers to benefit from cost savings without compromising on restore times.

A blog has more.

OneTrust has launched the Privacy Breach Response Agent, built with Microsoft Security Copilot. When a data breach occurs, privacy teams have to analyze security requirements and regulatory privacy requirements if personal data is compromised. Privacy and breach notification regulations are fragmented and complex, varying by geography and type of data, and the notification windows are often very short. The Privacy Breach Response Agent enables privacy teams to evaluate the scope of the incident, identify jurisdictions, assess regulatory requirements, generate guidance, and coordinate and align with the InfoSec response team. More information on the agent can be found here.

Other World Computing (OWC) launched its Jellyfish B24 and Jellyfish S24 Storage products. The Jellyfish B24 delivers a cost-effective, high-capacity solution for seamless collaboration and nearline backup, while the Jellyfish S24 offers a full SSD production server with lightning-fast performance for demanding video workflows. The B24 has four dedicated SAS ports to which you can connect B24-E expansions via a mini-SAS cable, included with every expansion chassis. By adding four B24-E expansion chassis to a B24 head unit, the total storage capacity can reach up to 2.8 petabytes.

The SSDs in the S24 are the OWC Mercury Extreme Pro SSDs. The S24 can be combined with an OWC Jellyfish S24-E SSD expansion chassis for up to 736 TB of fast SSD storage.

M&E market focused file and object storage supplier OpenDrives is introducing a cloud-native, data services offering it has dubbed Astraeus that merges on-premises, high-performance storage with the ability to provision and manage integrated data services like the public cloud. Customers can “easily repatriate their data, bringing both data and cloud-native applications back on-premises and into the security of a private cloud.” Compute and storage resources can scale independently with dynamic provisioning and orchestration capabilities. Astraeus follows an unlimited capacity pricing model, licensing per-node instead of per-capacity, enabling cost predictability. OpenDrives will be exhibiting at the upcoming 2025 NAB Show in Las Vegas, Booth SL6612 in the South Hall Lower, April 6 to 9.

PNY announced its CS2342 M.2 NVMe SSD in 1 and 2 TB capacities with PCIe Gen 4 x4 connectivity. It has up to 7,300 MBps sequential read and 6,000 MBps sequential write speeds. The product supports TCG Pyrite and has a five-year or TBW-based warranty.

The co-CEO of Samsung, Han Jong-Hee, has died from a heart attack at the age of 63. Co-CEO Jun Young-hyun, who oversees Samsung’s chip business, is now the sole CEO. Han Jong-Hee was responsible for Samsung’s consumer electronics and mobile devices business.

SMART Modular Technologies announced it is sampling its redefined Non-Volatile CXL Memory Module (NV-CMM) to Tier 1 OEMs based on the CXL 2.0 standard in the E3.S 2T form factor. “This product combines non-volatile high-performance DRAM memory, persistent flash memory and an energy source in a single removable EDSFF form factor to deliver superior reliability and serviceability for data-intensive applications … PCIe Gen 5 and CXL 2.0 compliance ensures seamless integration with the latest datacenter architectures.” View it as a high-speed cache tier.

SMART Modular’s NV-CMM details

There will be an SNIA Cloud Object Storage Plugfest in Denver from April 28 to 30. Learn more here. There will also be an SNIA Swordfish Plugfest at the same time in conjunction with SNIA’s Regional SDC Denver event. Register here.

Team Group announced the launch of the TEAMGROUP ULTRA MicroSDXC A2 V30 Memory Card, which delivers read speeds of up to 200 MBps and write speeds of up to 170 MBps. The ULTRA MicroSDXC A2 V30 meets the A2 application performance standard with a V30 video speed rating and a lifetime warranty.

Tiger Technology has officially achieved AWS Storage Competency status.

Financial analyst Wedbush has identified 11 publicly traded suppliers it believes will benefit greatly from an exploding AI spending phase by businesses. It says: “While there is a lot of noise in the software world around driving monetization of AI, a handful of software players have started to separate themselves from the pack … We believe the use cases are exploding, enterprise consumption phase is ahead of us in the rest of 2025, launch of LLM models across the board, and the true adoption of generative AI will be a major catalyst for the software sector and key players to benefit from this once in a generation Fourth Industrial Revolution set to benefit the tech space.” Wedbush identifies Oracle and Salesforce as the top opportunities. The others are Amazon, Elastic, Alphabet, IBM, Innodata, MongoDB, Micron Technology, Pegasystems, and Snowflake. “The clear standout over the last month from checks has been the cloud penetration success at IBM which has a massive opportunity to monetize its installed base over the next 12 to 18 months.”