Analysis: Trump’s tariffs will affect US companies whose multinational supply chains import components and finished products into America, foreign suppliers shipping products to the US, and US storage suppliers exporting to countries that raise retaliatory tariffs. These three groups face different tariff-related problems.
“This tariff policy would set the US tech sector back a decade in our view if it stays.”
Wedbush
Wedbush financial analyst Daniel Ives is telling subscribers to brace themselves: “Investors today are coming to the scary realization this economic Armageddon Trump tariff policy is really going to be implemented this week and it makes the tech investing landscape the most difficult I have seen in 25 years covering tech stocks on the Street. Where is the E in the P/E? No one knows….what does this do to demand destruction, Cap-Ex plans halted, growth slowdown, and damaging companies and consumers globally. Then there is the cost structure and essentially ripping up a global supply chain overnight with no alternative….making semi fabs in West Virginia or Ohio this week? Building hard drive disks in Florida or New Jersey next month?”
And: “…this tariff policy would set the US tech sector back a decade in our view if it stays.”
Here is a survey of some of the likely effects, starting with a review of Trump’s tariffs on countries involved in storage product supply.
China gets the top overall tariff rate of 54 percent, followed by Cambodia on 49 percent, Laos on 48 percent, and Vietnam on 46 percent. Thailand gets a 37 percent tariff imposed, Indonesia and Taiwan 32 percent. India gets 27 percent, South Korea 26 percent, and Japan 24 percent. The EU attracts 18.5 percent and the Philippines 18 percent.
US storage component and product importers
US suppliers with multinational supply chains import components and even complete products to the US. At the basic hardware level, the Trump tariffs could affect companies supplying DRAM, NAND, SSDs, tape, and tape drives, as well as those making storage controllers and server processors.
However, Annex II of the Harmonized Tariff Schedule of the United States (HTSUS) applies to presidential proclamations and trade-related modifications that amend or supplement the HTSUS, and it currently exempts semiconductors from the tariffs.
Semiconductor chips are exempt but not items that contain them as components.
Micron makes DRAM, NAND, and SSDs. The DRAM is manufactured in Boise, Idaho, and in Japan, Singapore, and Taiwan. The exemption could apply to the DRAM and NAND chips but not necessarily to the SSDs that contain NAND, as there is no specific exemption for them. They face the appropriate country of origin tariffs specified by the Trump administration.
Samsung makes DRAM and NAND in South Korea with some NAND made in China. SSD assembly is concentrated in South Korea. The SSDs will likely attract the South Korea 26 percent tariff.
SK hynix makes its DRAM and NAND chips and SSDs in Korea, while subsidiary Solidigm makes its SSD chips in China, implying their US import prices will be affected by the 54 percent tariff on Chinese SSDs and a 26 percent tariff on South Korean ones.
Kioxia NAND and SSDs are made in Japan, so the SSDs bought in America will attract a 24 percent tariff – which suppliers will pass on to US consumers, in part or in full. Sandisk NAND is made in Japan (with Kioxia), but we understand some of its SSDs are manufactured in China – meaning a 54 percent tariff might apply. That means Kioxia, Samsung, and SK hynix SSDs – but not Solidigm ones – could cost less than Sandisk SSDs.
Consider Seagate and its disk drives. It has component and product manufacturing and sourcing operations in an integrated international supply chain involving China, Thailand, Singapore, and Malaysia.
It makes disk drive platters and some finished drives – Exos, for example – in China, and spindle motors, head gimbal assemblies, and other finished drives in Thailand. Platters and some other drives are assembled in Singapore and Malaysia. Trump’s tariffs will apply to finished drives imported into the US, with rates depending on country of origin.
The tariff rates for China, Malaysia, Singapore, and Thailand are 54 percent, 24 percent, 10 percent, and 36 percent respectively. If Seagate raised its prices to US customers by the tariff amounts, the effect would be dramatic.
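To make the scale concrete, here is a minimal sketch assuming a hypothetical $300 drive and full pass-through of the tariff to the US buyer; real pricing, customs valuation, and how much of the cost suppliers absorb would all differ:

```python
# Illustrative only: hypothetical list price and full tariff pass-through,
# not actual Seagate pricing or customs duty calculations.
tariff_rates = {"China": 0.54, "Malaysia": 0.24, "Singapore": 0.10, "Thailand": 0.36}

def tariffed_price(list_price: float, country_of_origin: str) -> float:
    """US price if the country-of-origin tariff were passed on in full."""
    return round(list_price * (1 + tariff_rates[country_of_origin]), 2)

for origin in ("China", "Thailand", "Singapore"):
    print(origin, tariffed_price(300.00, origin))
# China 462.0, Thailand 408.0, Singapore 330.0
```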
Western Digital will be similarly affected as it assembles its disk drives in Malaysia and Thailand, and so will face tariffs of 24 and 36 percent respectively on these drives.
Toshiba HDDs are made in China, the Philippines, and Japan, implying US import tariffs of 54, 18, and 24 percent respectively.
IBM makes tape drives for itself and the LTO consortium in Tucson, Arizona, so there are no Trump tariffs applying to them, only to whatever foreign-made components IBM might be importing.
LTO tape media is made by Japan’s Fujifilm and Sony. Fujifilm makes its tape in the US, in Bedford, Massachusetts, but Sony makes its tape in Japan, meaning it will get a 24 percent tariff applied to tape imports into the US. Fujifilm wins while Sony loses.
Recordable Blu-ray and DVD discs are made in China, India, Japan, and Taiwan, and will have US import tariffs imposed on them depending upon the country of origin.
Storage controllers and server processors are mostly made by Intel with some by AMD.
Intel has CPU fabs in Oregon (Hillsboro), Arizona (Chandler), and New Mexico (Rio Rancho). There are processor assembly, test, and packaging facilities in Israel, Malaysia, Vietnam, China, and Costa Rica. The Leixlip plant in County Kildare, Ireland, also produces a range of processors. This is a complex manufacturing supply chain, but Intel will avoid a tariff hit on all its CPUs and other semiconductor products because of the Annex II exemptions above. The same applies to AMD processors and Arm chips.
Storage arrays are typically made in the US, with Dell, HPE, and NetApp all manufacturing inside the US. However, Hitachi Vantara makes storage arrays in Japan, so they will receive a 24 percent import tariff. Lenovo’s storage is mostly based on OEM’d NetApp arrays so it might share NetApp’s US country of origin status and so avoid tariffs.
Infinidat outsources its array manufacturing to Arrow Electronics, which has a global supply chain, with the US as a global hub. The actual country of origin of Infinidat’s arrays has not been publicly revealed and lawyers may well be working on its legal location.
Hitachi Vantara looks likely to be the most disadvantaged storage array supplier, at the moment.
Non-US storage suppliers
Non-US storage suppliers exporting to the US will feel the tariff pain depending upon their host country. We understand the country of origin of manufactured storage hardware products will be the determining factor.
EU storage suppliers will be affected – unless they maintain a US-based presence.
One tactic suppliers might use is to transfer to a US operation and so avoid tariffs altogether – although critics have said investing in the US at present, with construction costs up and consumer spending down, is far from a safe bet.
US storage exporters
The third group of affected storage suppliers are the US storage businesses exporting goods to countries including China, which is raising its own tariffs in response. There is now a 34 percent tariff on US goods imported into China, starting April 10. This will affect all US storage suppliers exporting there – Intel, for example, which exports x86 CPUs to China.
We understand that China’s tariffs in reaction to Trump’s apply to the country of origin of the US-owned supplier’s manufactured products and not to the US owning entity. So Intel’s US-made semiconductor chips exported to China will have the tariff imposed by Beijing, but not its products made elsewhere in the world. Thus foreign-owned suppliers exporting storage products to China from the US will have the 34 percent tariff applied but this will not apply to their goods exported to China from the rest of the world.
If other countries outside the US were to follow China’s lead and apply their own import tariffs on US-originated goods, US-based exporters would feel the pain, too.
We believe that one of the general storage winners from this tariff fight is Huawei. It doesn’t import to the US anyway, and is thus unaffected by Trump’s tariff moves. As a Chinese supplier, it is also not affected by China’s tariffs on US-made goods, unlike Lenovo if it imports its NetApp OEM’d arrays into China.
Analysis: Pure Storage has won a deal to supply its proprietary flash drive technology to Meta, with Wedbush financial analysts seeing this as “an extremely positive outcome for PSTG given the substantially greater EB of storage PSTG will presumably ship.” The implication is that hyperscaler HDD purchases will decline as a result of this potentially groundbreaking deal.
The storage battleground here is for nearline data that needs to have fast online access while being affordable. Pure says its Direct Flash Modules (DFMs), available at 150 TB and soon 300 TB capacity points, using QLC flash, will save significant amounts of rack space, power, and cooling versus storing the equivalent exabytes of data in 30-50 TB disk drives.
A Pure blog by co-founder and Chief Visionary Officer John Colgrove says: “Our DirectFlash Modules drastically reduce power consumption compared to legacy hard disk storage solutions, allowing hyperscalers to consolidate multiple tiers into a unified platform.”
He adds: “Pure Storage enables hyperscalers and enterprises with a single, streamlined architecture that powers all storage tiers, ranging from cost-efficient archive solutions to high-performance, mission-critical workloads and the most demanding AI workloads.” That’s because “our unique DirectFlash technology delivers an optimal balance of price, performance, and density.”
A Meta blog states: “HDDs have been growing in density, but not performance, and TLC flash remains at a price point that is restrictive for scaling. QLC technology addresses these challenges by forming a middle tier between HDDs and TLC SSDs. QLC provides higher density, improved power efficiency, and better cost than existing TLC SSDs.”
It makes a point about power consumption: “QLC flash introduced as a tier above HDDs can meet write performance requirements with sufficient headroom in endurance specifications. The workloads being targeted are read-bandwidth-intensive with infrequent as well as comparatively low write bandwidth requirements. Since the bulk of power consumption in any NAND flash media comes from writes, we expect our workloads to consume lower power with QLC SSDs.”
Meta says it’s working with Pure Storage “utilizing their DirectFlash Module (DFM) and DirectFlash software solution to bring reliable QLC storage to Meta … We are also working with other NAND vendors to integrate standard NVMe QLC SSDs into our datacenters.”
It prefers the U.2 drive form factor over any EDSFF alternatives, noting that “it enables us to potentially scale to 512 TB capacity … Pure Storage’s DFMs can allow scaling up to 600 TB with the same NAND package technology. Designing a server to support DFMs allows the drive slot to also accept U.2 drives. This strategy enables us to reap the most benefits in cost competition, schedule acceleration, power efficiency, and vendor diversity.”
The bloggers say: “Meta recognizes QLC flash’s potential as a viable and promising optimization opportunity for storage cost, performance, and power for datacenter workloads. As flash suppliers continue to invest in advanced fab processes and package designs and increase the QLC flash production output, we anticipate substantial cost improvements.” That’s bad news for the HDD makers who must hope that HAMR technology can preserve the existing HDD-SSD price differential.
Wedbush analysts had a briefing from Colgrove and CFO Kevan Krysler, who said that Pure’s technology “will be the de facto standard for storage except for certain very performant use cases” at Meta.
We understand that Meta is working with Pure for its flash drive, controller, and system flash drive management software (Purity). It is not working with Pure at the all-flash array (AFA) level, suggesting other AFA vendors without flash-level IP are wasting their time knocking on Meta’s door. Also, Meta is talking to Pure because it makes QLC flash drives that are as – or more – attractive than those of off-the-shelf vendors such as Solidigm. Pure’s DFMs have higher capacities, lower return rates, and other advantages over commercial SSDs.
The Wedbush analysts added this thought, which goes against Pure’s views to some extent, at least in the near-term: “We would note that while PSTG likely displaces some hard disk, we also believe Meta’s requirements for HDD bits are slated to grow in 2025 and 2026.” Flash is not yet killing off disk at Meta, but it is restricting HDD’s growth rate.
Generalizing from the Pure-Meta deal, they add: “Any meaningful shift from HDD to flash in cloud environments, should seemingly result in a higher longer term CAGR for flash bits, a result that should ultimately prove positive for memory vendors (Kioxia, Micron, Sandisk, etc.)”
Auwau provisions multi-tenant backup services for MSPs and enterprise departments, with automated billing and stats.
It is a tiny Danish firm – just three people – with a mature software stack, Cloutility, whose easy-to-use functionality is highly valued by its 50 or so customers, which is why we’re writing about it. Auwau’s web-based software enables MSPs and companies to deliver Backup-as-a-Service (BaaS) and S3-to-tape storage as a service, with Cloutility supporting IBM’s Storage Protect and Storage Defender, Cohesity DataProtect, Rubrik, and PoINT’s Archival Gateway (an S3-to-tape endpoint). IBM is an Auwau reseller.
Thomas Bak
CEO Thomas Bak founded Auwau in Valby, Denmark, in 2016, basing it on a spin-out of backup-as-a-service software acquired while he was a Sales Director and Partner at Frontsafe. Cloutility runs on a Windows machine and has a 30-minute install. It doesn’t support Linux, with Bak saying he “never meets an SP who doesn’t have Windows somewhere.”
Although BaaS provisioning is a core service, the nested multi-tenancy automated billing is equally important, and the two functions are controlled through a single software pane of glass. Users can activate and schedule new backups from Cloutility via self-service.
Bak told an IT Press Tour audience: “Multi-tenancy is a big draw [with] tenants in unlimited tree structures. … We automate subscription-based billing.” Cloutility provides price by capacity by tenant and customers can get automated billing for their tenants plus custom reporting and alerting. He said: “We sell to enterprises who are internal service providers and want data in their data centers.” On-premises and not in the cloud in other words.
Cloutility single pane of glass
Universities could invoice per department and/or by projects for example. Role-based access control, single sign-on and two-factor authentication are all supported.
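To illustrate how capacity-based billing might roll up through such a nested tenant tree, here is a minimal sketch; the tenant names, capacities, and flat per-TB rate are invented, and this is not Cloutility’s actual billing engine:

```python
# Hypothetical sketch of nested-tenant, capacity-based billing roll-up.
# Tenant names, capacities, and the flat per-TB rate are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    protected_tb: float = 0.0          # capacity consumed directly by this tenant
    children: list["Tenant"] = field(default_factory=list)

    def invoice(self, price_per_tb: float) -> float:
        """Bill this tenant for its own usage plus everything beneath it in the tree."""
        own = self.protected_tb * price_per_tb
        return own + sum(child.invoice(price_per_tb) for child in self.children)

university = Tenant("University", children=[
    Tenant("Physics", 40.0, children=[Tenant("Project LHC", 15.0)]),
    Tenant("Humanities", 8.0),
])

print(f"Monthly invoice: ${university.invoice(price_per_tb=12.0):,.2f}")
# (40 + 15 + 8) TB x $12/TB = $756.00
```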
Auwau offers OEM/white label branding capability so every reselling tenant of an MSP could be branded. Their recurring bills and reports will reflect this branding. An MSP can set up partners and resellers in their multi-tenant tree structure who can add and operate their own subset of customers as if the system was their own.
Development efforts are somewhat limited; there are only two engineers. Bak says Auwau will add new backup service provisioning connectors and billing when customers request them. He doesn’t have a build-it-and-they-will-come approach to product development; rather, development is spurred when a customer request for a new BaaS offering brings dependable future cash flow with it.
He has Veeam support as a roadmap item, but with no definite timescale. There are no plans to add IBM COS-to-general-S3 target capability, nor support for Cohesity NetBackup.
In the USA and some other geographies N-able would be a competitor, but Bak says he never meets N-able in the field.
Bak is very agile. He finished his presentation session by doing a handstand and walking around on his hands. That will make for unforgettable sales calls.
Bootnote
IBM Storage Defender is a combination of IBM Storage Protect (Spectrum Protect as was), FlashSystem, Storage Fusion and Cohesity’s DataProtect product. This will run with IBM Storage’s DS8000 arrays, tape and networking products.
There is no restore support for IBM Storage Protect as it is not API-driven. Rubrik and Cohesity are OK for restore.
Cohesity announced it is the first data protection provider to achieve Nutanix Ready validation for Nutanix Database Service (NDB) protection. It says: “NDB is the market leading database lifecycle management platform for building database-as-a-service solutions in hybrid multicloud environments.” Cohesity DataProtect now integrates with NDB’s native time machine capabilities and streamlines protection for PostgreSQL databases on NDB via a single control plane.
…
Bill O’Connell
Data protector Commvault has announced the appointment of Bill O’Connell as its chief security officer. He had prior leadership roles at Roche, leading technical, operational, and strategic programs to protect critical data and infrastructure, and also at ADP. He previously served as chair of the National Cyber Security Alliance Board of Directors and remains actively involved in various industry working groups focused on threat intelligence and privacy.
…
Global food and beverage company Danone has adopted Databricks’ Data Intelligence Platform to drive improvements in data accuracy and reduce “data-to-decision” time by up to 30 percent. It says data ingestion times are set to drop from two weeks to one day, with fewer issues requiring debugging and fixes. A “Talk to Your Data” chatbot, powered by generative AI and Unity Catalog, will help non-technical users explore data more easily. Built-in tools will support rapid prototyping and deployment of AI models. Secure, automated data validation and cleansing could increase accuracy by up to 95 percent.
…
ExaGrid announced three new models, adding the EX20, EX81, and EX135 to its line of Tiered Backup Storage appliances, as well as the release of ExaGrid software version 7.2.0. The EX20 has 8 disks that are 8 TB each. The EX81 has 12 disks that are 18 TB each. The EX135 has 18 disks that are 18 TB each. Thirty-two of the EX189 appliances in a single scale-out system can take in up to a 6 PB full backup with 12 PB raw capacity, making it the largest single system in the industry that includes data deduplication. ExaGrid’s line of 2U appliances now includes eight models: EX189, EX135, EX84, EX81, EX54, EX36, EX20, and EX10. Up to 32 appliances can be mixed and matched in a single scale-out system. Any age or size appliance can be used in a single system, eliminating planned product obsolescence.
The product line has also been updated with new Data Encryption at Rest (SEC) options. ExaGrid’s larger appliance models, including the EX54, EX81, EX84, EX135, and EX189, offer a Software Upgradeable SEC option to provide Data Encryption at Rest. SEC hardware models that provide Data Encryption at Rest are also available for ExaGrid’s entire line of appliance models. The v7.2.0 software includes External Key Management (EKM) for encrypted data at rest, support for NetBackup Flex Media Server Appliances with the OST plug-in, and support for Veeam S3 Governance Mode and Dedicated Managed Networks.
…
Data integration provider Fivetran announced it offers more than 700 pre-built connectors for seamless integration with Microsoft Fabric and OneLake. This integration, powered by Fivetran’s Managed Data Lake Service, enables organizations to ingest data from over 700 connectors, automatically convert it into open table formats like Apache Iceberg or Delta Lake, and continuously optimize performance and governance within Microsoft Fabric and OneLake – without the need for complex engineering effort.
…
Edge website accelerator Harper announced version 5 of its global application delivery platform. It includes several new features for building, scaling, and running high-performance data-intensive workloads, including the addition of Binary Large Object (Blob) storage for the efficient handling of unstructured, media-rich data (images, real-time videos, and rendered HTML). It says the Harper platform has unified the traditional software stack – database, application, cache, and messaging functions – into a single process on a single server. By keeping data at the edge, Harper lets applications avoid the transit time of contacting a centralized database. Layers of resource-consuming logic, serialization, and network processes between each technology in the stack are removed, resulting in extremely low response times that translate into greater customer engagement, user satisfaction, and revenue growth.
…
Log data lake startup Hydrolix has closed an $80 million C-round of funding, bringing its total raised to $148 million. It has seen an eightfold sales increase in the past year, with more than 400 new customers, and is building sales momentum behind a comprehensive channel strategy. The cornerstone of that strategy is a partnership with Akamai, whose TrafficPeak offering is a white label of Hydrolix. Additionally, Hydrolix recently added Amazon Web Services as a go-to-market (GTM) partner and built connectors for massive log-data front-end ecosystems like Splunk. These and similar efforts have driven the company’s sales growth, and the Series C is intended to amplify this momentum.
…
Cloud data management supplier Informatica has appointed Krish Vitaldevara as EVP and chief product officer, coming from NetApp and Microsoft. This is a big hire. He was an EVP and GM for NetApp’s core platforms and led NetApp’s 2,000-plus-person R&D team responsible for technology including ONTAP, FAS/AFF, and application integration and data protection software. At Informatica, “Vitaldevara will develop and execute a product strategy aligning with business objectives and leverage emerging technologies like AI to innovate and improve offerings. He will focus on customer engagement, market expansion and strategic partnerships while utilizing AI-powered, data-driven decision-making to enhance product quality and performance, all within a collaborative leadership framework.”
…
Sam King
Cloud file services supplier Nasuni has appointed Sam King as CEO, succeeding Paul Flanagan who is retiring after eight years in the role. Flanagan will remain on the Board, serving as Non-Executive Chairman. King was previously CEO of application security platform supplier Veracode from 2019 to 2024.
…
Object First has announced three new Ootbi object backup storage appliances for Veeam, with new entry-level 20 and 40 TB capacities and a range-topping 432 TB model, plus new firmware delivering 10-20 percent faster recovery speeds across all models. The 432 TB model supports ingest speeds of up to 8 GBps in a four-node cluster, double the previous speed. New units are available for purchase immediately worldwide.
…
OpenDrives is bringing a new evolution of its flagship Atlas data storage and management platform to the 2025 NAB Show. Atlas’ latest release provides cost predictability and economical scalability with an unlimited capacity pricing model, high performance and freedom from paying for unnecessary features with targeted composable feature bundles, greater flexibility and freedom of choice with new certified hardware options, and intelligent data management via the company’s next-generation Atlas Performance Engine. OpenDrives has expanded certified hardware options to include the Seagate Exos E JBOD expansion enclosures.
…
Other World Computing announced the release of OWC SoftRAID 8.5, its RAID management software for macOS and Windows, “with dozens of enhancements,” delivering “dramatic increases in reliability, functionality, and performance.” It also announced the OWC Archive Pro Ethernet network-based LTO backup and archiving system with drag-and-drop simplicity, up to 76 percent cost savings versus HDD storage, a 501 percent ROI, and full macOS compatibility.
OWC Archive Pro
…
Percona is collaborating with Red Hat: Percona Everest will now support OpenShift, so you can run a fully open source platform for “database as a service” style instances on your own private or hybrid cloud. The combination of Everest as a cloud-native database platform with Red Hat OpenShift allows users to implement their choice of database in their choice of locations – from on-premises datacenter environments through to public cloud and hybrid cloud deployments.
…
Perforce Delphix announced GA of Delphix Compliance Services, a data compliance product built in collaboration with Microsoft. It offers automated AI and analytics data compliance supporting over 170 data sources and natively integrated into Microsoft Fabric pipelines. The initial release of Delphix Compliance Services is pre-integrated with Microsoft Azure Data Factory and Microsoft PowerBI to natively protect sensitive data in Azure and Fabric sources as well as other popular analytical data stores. The next phase of this collaboration adds a Microsoft Fabric Connector.
Perforce is a Platinum sponsor at the upcoming 2025 Microsoft Fabric Conference (FabCon) jointly sponsoring with PreludeSys. It will be demonstrating Delphix Compliance Services and natively masking data for AI and analytics in Fabric pipelines at booth #211 and during conference sessions.
…
Pliops has announced a strategic collaboration with the vLLM Production Stack developed by LMCache Lab at the University of Chicago, aimed at revolutionizing large language model (LLM) inference performance. The vLLM Production Stack is an open source reference implementation of a cluster-wide full-stack vLLM serving system. Pliops has developed XDP (Extreme Data Processor) key-value store technology with its AccelKV software running in an FPGA or ASIC to accelerate low-level storage stack processing, such as RocksDB. It has announced a LightningAI unit based on this tech. The aim is to enhance LLM inference performance.
…
Pure Storage is partnering with CERN to develop DirectFlash storage for Large Hadron Collider data. Through a multi-year agreement, Pure Storage’s data platform will support CERN openlab to evaluate and measure the benefits of large-scale, high-density storage technologies. Both organizations will optimize exabyte-scale flash infrastructure and the application stack for Grid Computing and HPC workloads, identifying opportunities to maximize performance in both software and hardware while optimizing energy savings across a unified data platform.
…
Seagate has completed the acquisition of Intevac, a supplier of thin-film processing systems, for $4.00 per share, with 23,968,013 Intevac shares being tendered. Intevac is now a wholly owned subsidiary of Seagate. Wedbush said: “We see the result as positive for STX given: 1) we believe media process upgrades are required for HAMR and the expense of acquiring and operating IVAC is likely less than the capital cost for upgrades the next few years and 2) we see the integration of IVAC into Seagate as one more potential hurdle for competitors seeking to develop HAMR, given that without an independent IVAC, they can no longer leverage the sputtering tool maker’s work to date around HAMR (with STX we believe using IVAC exclusively for media production).”
…
DSPM provider Securiti has signed a strategic collaboration agreement (SCA) with Amazon Web Services (AWS). AWS selected Securiti to help enterprise customers safely use their data with Amazon Bedrock’s foundation models, integrating Securiti’s Gencore AI platform to enable compliant, secure AI development with structured and unstructured data. Securiti says its Data Command Graph provides contextual data intelligence and identification of toxic combinations of risk, including the ability to correlate fragmented insights across hundreds of metadata attributes such as data sensitivity, access entitlements, regulatory requirements, and business processes. It also claims to offer the following:
Advanced automation streamlines remediation of data risks and compliance with data regulations.
Embedded regulatory insights and automated controls enable organizations to align with emerging AI regulations and frameworks such as EU AI Act and NIST AI RMF.
Continuous monitoring, risk assessments and automated tests streamline compliance and reporting.
…
Spectra Logic has launched the Rio Media Suite, which it says is simple, modular, and affordable software to manage, archive, and retrieve media assets across a broad range of on-premises, hybrid, and cloud storage systems. It helps break down legacy silos, automates and streamlines media workflows, and efficiently archives media. It is built on MediaEngine, a high-performance media archiver that orchestrates secure access and enables data mobility between ecosystem applications and storage services.
A variety of app extensions integrate with MediaEngine to streamline and simplify tasks such as creating and managing lifecycle policies, performing partial file restores, and configuring watch folders to monitor and automatically archive media assets. The modular MAP design of Rio Media Suite allows creative teams to choose an optimal set of features to manage and archive their media, with the flexibility to add capabilities as needs change or new application extensions become available.
Available object and file storage connectors enable a range of Spectra Logic and third-party storage options, including Spectra BlackPearl storage systems, Spectra Object-Based Tape, major third-party file and object storage systems, and public cloud object storage services from leading providers such as AWS, Geyser Data, Google, Microsoft and Wasabi.
A live demonstration of Rio Media Suite software will be available during exhibit hours on April 6-9, 2025, in the Spectra Logic booth (SL8519) at NAB Show, Las Vegas Convention Center, Las Vegas, Nevada. Rio Media Suite software is available for Q2 delivery.
…
Starfish Storage, which provides metadata-driven unstructured data management, is being used at Harvard’s Faculty of Arts and Sciences Research Computing group to manage more than 60 PB involving over 10 billion files across 600 labs and 4,000 users. In year one it delivered $500,000 in recovered chargeback, year two hit $1.5 million, and it’s on track for $2.5 million in year three. It also identified 20 PB of reclaimable storage, with researchers actively deleting what they no longer need. Starfish picked up a 2025 Data Breakthrough Award for this work in the education category.
…
Decentralized (Web3) storage supplier Storj announced a macOS client for its new Object Mount product. It joins the Windows client announced in Q4 2024 and the Linux client, launched in 2022. Object Mount delivers “highly responsive, POSIX-compliant file system access to content residing in cloud or on-premise object storage platforms, without changing the data format.” Creative professionals can instantly access content on any S3-compatible or blob object storage service, as if they were working with familiar file storage systems. Object Mount is available for users of any cloud platform or on-premise object storage vendor. It is universally compatible and does not require any data migration or format conversion.
…
Media-centric shared storage supplier Symply is partnering with DigitalGlue to integrate “DigitalGlue’s creative.space software with Symply’s high-performance Workspace XE hardware, delivering a scalable and efficient hybrid storage solution tailored to the needs of modern content creators. Whether for small post-production teams or large-scale enterprise environments, the joint solution ensures seamless workflow integration, enhanced performance, and simplified management.”
…
An announcement from DDN’s Tintri subsidiary says: “Tintri, leading provider of the world’s only workload-aware, AI-powered data management solutions, announced that it has been selected as the winner of the ‘Overall Data Storage Company of the Year’ award in the sixth annual Data Breakthrough Awards program conducted by Data Breakthrough, an independent market intelligence organization that recognizes the top companies, technologies and products in the global data technology market today.”
We looked into the Data Breakthrough Awards program. There are several categories in these awards with multiple sub-category winners in each category: Data Management (13), Data Observability (4), Data Analytics (10), Business Intelligence (4), Compute and Infrastructure (6), Data Privacy and Security (5), Open Source (4), Data Integration and Warehousing (5), Hardware (4), Data Storage (6), Data Ops (3), Industry Applications (14) and Industry Leadership (11). That’s a whopping 89 winners.
In the Data Management category we find 13 winners, with DataBee the “Solution of the Year” and VAST Data the “Company of the Year.” Couchbase is the “Platform of the Year” and Grax picks up the “Innovation of the Year” award.
The Data Storage category has six winners of its own.
The award structure may strike some as unusual. The judging process details can be found here.
…
Cloud, disaster recovery, and backup specialist virtualDCS has announced a new senior leadership team as it enters a new growth phase after investment from private equity firm MonacoSol, which was announced last week. Alex Wilmot steps in as CEO, succeeding original founder Richard May, who moves into a new role as product development director. Co-founder Dan Nichols returns as CTO, while former CTO John Murray transitions to solutions director. Kieran Brady also joins as chief revenue officer (CRO) to drive the company’s next stage of expansion.
…
S3 cheaper-than-AWS cloud storage supplier Wasabi has achieved Federal Risk and Authorization Management Program (FedRAMP) Ready status, and announced its cloud storage service for the US Federal Government. Wasabi is now one step closer to full FedRAMP authorization, which will allow more Government entities to use its cloud storage service.
…
Software RAID supplier Xinnor announced successful compatibility testing of an HA multi-node cluster system combining its xiRAID Classic 4.2 software and the Ingrasys ES2000 Ethernet-attached Bunch of Flash (EBOF) platform. It supports up to 24 hot-swap NVMe SSDs and is compatible with Pacemaker-based HA clusters. Xinnor plans to fully support multi-node clusters based on Ingrasys EBOFs in upcoming xiRAID releases. Get a full PDF tech brief here.
Quesma has built a gateway between Elasticsearch EQL and SQL-based databases like ClickHouse, claiming it lets EQL users access data stores that are faster and cheaper.
Jacek Migdal
EQL (Elastic Query Language) is used by tools such as Kibana, Logstash, and Beats. Structured Query Language (SQL) is the 50-year-old standard for accessing relational databases. Quesma co-founder Jacek Migdal, who previously worked at Sumo Logic, says that Elasticsearch is designed for Google-style searches, but 65 percent of the use cases come from observability and security, rather than website search. The majority of telcos have big Elastic installations. However, Elastic is 20x slower at answering queries than the SQL-accessed ClickHouse relational database.
Quesma lets users carry on using Elastic as a front end while translating EQL requests to SQL using a dictionary generated by an AI model. Migdal and Pawel Brzoska founded Quesma in Warsaw, Poland, in 2023, and raised €2.1 million ($2.3 million) in pre-seed funding at the end of that year.
The company partnered with streaming log data lake company Hydrolix in October 2024, as Hydrolix produces a ClickHouse-compatible data lake. Quesma lets Hydrolix customers continue using EQL-based queries, redirecting them to the SQL used by ClickHouse. Its software acts as a transparent proxy.
Hydrolix now has a Kibana compatibility feature powered by Quesma’s smart translation technology. It enables Kibana customers to connect their user interface to the Hydrolix cloud and its ClickHouse data store. This means Elasticsearch customers can migrate to newer SQL databases while continuing to use their Elastic UI.
Quesma enables customers to avoid difficult and costly all-in-one database migrations and do gradual migrations instead, separating the front-end access from the back-end database. Migdal told an IT Press Tour briefing audience: “We are using AI internally to develop rules to translate Elasticsearch storage rules to ClickHouse [and other] rules. AI produces the dictionary. We use two databases concurrently to verify rule development.”
Although AI is used to produce the dictionary, it is not used, in the inference sense, at run time by customers. Migdal said: “Customers won’t use AI inferencing at run time in converting database interface languages. They don’t want AI there. Their systems may not be connected to the internet.”
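A heavily simplified sketch of that run-time translation – a pre-generated mapping applied with no model in the loop – might look like the following; the index, field, and table names are invented, and real Elasticsearch query coverage is far broader than a single match clause:

```python
# Hypothetical sketch: translate a simple Elasticsearch-style match query to SQL
# using a pre-generated mapping (the "dictionary"). Real query coverage,
# index-to-table mapping, and SQL dialect handling are far more involved.
FIELD_MAP = {"logs-web": ("web_logs", {"http.status": "status_code", "url.path": "path"})}

def translate_match(index: str, field: str, value) -> str:
    table, columns = FIELD_MAP[index]
    column = columns[field]
    literal = f"'{value}'" if isinstance(value, str) else str(value)
    return f"SELECT * FROM {table} WHERE {column} = {literal}"

# {"query": {"match": {"http.status": 500}}} against index "logs-web" becomes:
print(translate_match("logs-web", "http.status", 500))
# SELECT * FROM web_logs WHERE status_code = 500
```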
Its roadmap has a project to add pipe syntax extensions to SQL, so that the SQL operator syntax order matches the semantic evaluation order, making it easier to understand:
Quesma pipe syntax example
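To give a flavor of the idea, here is a hedged sketch comparing a conventional query with a pipe-style equivalent; the table and column names are invented, and Quesma’s eventual syntax may differ:

```python
# Invented table/column names; the queries are shown as Python strings purely
# for illustration of how pipe syntax reorders the steps.
classic_sql = """
SELECT status_code, COUNT(*) AS hits
FROM web_logs
WHERE path LIKE '/api/%'
GROUP BY status_code
ORDER BY hits DESC
"""

# The same query in pipe syntax: each step appears in the order it is evaluated.
pipe_sql = """
FROM web_logs
|> WHERE path LIKE '/api/%'
|> AGGREGATE COUNT(*) AS hits GROUP BY status_code
|> ORDER BY hits DESC
"""
```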
Quesma is also using its AI large language model experience to produce a charting app, interpreting natural language prompts such as “Plot top 10 languages, split by native and second language speakers” to create and send requests to apps like Tableau.
Symphony, Panzura’s unstructured data estate manager, has extended its reach into IBM Storage Deep Archive territory, integrating with IBM’s S3-accessed Diamondback tape libraries.
Symphony is Panzura’s software for discovering and managing exabyte-scale unstructured data sets, featuring scanning, tiering, migration, and risk and compliance analysis. It is complementary to Panzura’s original and core CloudFS hybrid cloud file services offering supporting large-scale multi-site workflows and collaboration using active, not archived, data. The IBM Storage Deep Archive is a Diamondback TS6000 tape library, storing up to 27 PB of LTO-9 data in a single rack with 16.1 TB/hour (4.47 GBps) performance. It’s equipped with an S3-accessible front end, similar to the file-based LTFS.
Sundar Kanthadai
Sundar Kanthadai, Panzura CTO, stated that this Panzura-IBM offering “addresses surging cold data volumes and escalating cloud fees by combining smart data management with ultra-low-cost on-premises storage, all within a compact footprint.”
Panzura Product SVP Mike Harvey added: “This integration allows technologists to escape the trap of unpredictable access fees and egress sticker shock.”
The Symphony-Deep Archive integration uses S3 Glacier Flexible Retrieval storage classes to “completely automate data transfers to tape.” Symphony scans an online unstructured data estate and moves metadata-tagged cold data to the IBM tape library, freeing up SSD and HDD storage capacity while keeping the data on-prem. Embedded file metadata is automatically added to Symphony’s data catalog, which is searchable across more than 500 data types and accessible via API and Java Database Connectivity (JDBC) requests.
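For context on the storage class itself – this is generic S3 usage, not Panzura’s or IBM’s integration code, and the bucket and key names are hypothetical – writing and later restoring an object under S3 Glacier Flexible Retrieval with boto3 looks like this:

```python
# Hypothetical bucket/key names; shows the S3 Glacier Flexible Retrieval storage
# class ("GLACIER" in the S3 API), not the Symphony-Deep Archive internals.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="cold-archive-bucket",
    Key="projects/2019/render-assets.tar",
    Body=open("render-assets.tar", "rb"),
    StorageClass="GLACIER",  # S3 Glacier Flexible Retrieval
)

# Objects in this class need a restore request before they can be read again.
s3.restore_object(
    Bucket="cold-archive-bucket",
    Key="projects/2019/render-assets.tar",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```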
Specific file recall and deletion activity can be automated through policy settings.
Panzura’s Symphony can access more than 400 file formats via a deal with GRAU Data for its Metadata Hub software. It is already integrated with IBM’s Fusion Data Catalog, which provides unified metadata management and insights for heterogeneous unstructured data, on-premises and in the cloud, and Storage Fusion. IBM Storage Fusion is a containerized solution derived from Spectrum Scale and Spectrum Protect data protection.
According to IBM, Deep Archive is much more affordable than public cloud alternatives, “offering object storage for cold data at up to 83 percent lower cost than other service providers, and importantly, with zero recall fees.”
Panzura says the IBM Deep Archive-Symphony deal is “particularly crucial for artificial intelligence (AI) workloads,” because it can make archived data accessible to AI model training and inference workloads.
It claims the Symphony IBM Deep Archive integration enables users to streamline data archiving processes and “significantly reduce cloud and on-premises storage expenses.” The combined offering is available immediately.
SPONSORED POST: It’s not a question of if your organization gets hit by a cyberattack – only when, and how quickly it recovers.
Even small amounts of application and service downtime can cause massive disruption to any business. So being able to get everything back online in minutes rather than hours, or even days, can be the key to resilience.
But modern workloads rely on increasingly large volumes of data to function efficiently. What used to involve gigabytes of critical information now needs petabytes, and making sure all of that data can be restored immediately when that cyber security incident hits is definitely no easy task.
It’s a challenge that Infinidat’s enterprise storage solutions for next-generation data protection and recovery were built to help address, using AI-based deep machine learning techniques to speed up the process. At their core are InfiniSafe cyber resilience and recovery storage solutions which provide immutable snapshot recovery, local or remote air gaps, and fenced forensic environments to deliver a near-instantaneous guaranteed Service Level Agreement (SLA) recovery from cyberattacks, says the company.
Watch this Hot Seat video to see Infinidat CMO Eric Herzog tell The Register’s Tim Philips exactly how Infinidat can help you withstand cyberattacks.
InfiniSafe Automated Cyber Protection (ACP) uses application programming interfaces (APIs) to integrate with a range of third-party Security Operations Center (SOC), Security Information and Event Management (SIEM), and Security Orchestration, Automation and Response (SOAR) platforms. It automatically triggers an immediate immutable data snapshot based on the input from the cybersecurity packages. Then, you can configure it to use InfiniSafe Cyber Detection to start AI-based scanning of those immutable snapshots to see if malware or ransomware is present.
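The general pattern is a security tool’s alert triggering a snapshot call against the storage platform’s API. A minimal sketch of that pattern, with entirely hypothetical endpoints, payload fields, and token handling rather than Infinidat’s actual API, might look like this:

```python
# Entirely hypothetical endpoint and fields, sketching the SIEM/SOAR-triggers-
# immutable-snapshot pattern described above; this is not Infinidat's real API.
import requests

def on_siem_alert(alert: dict) -> None:
    """Called by a SOAR playbook when the SIEM raises a high-severity alert."""
    if alert.get("severity") != "critical":
        return
    requests.post(
        "https://storage.example.internal/api/snapshots",   # hypothetical endpoint
        json={"volume": alert["affected_volume"], "immutable": True, "retention_days": 30},
        headers={"Authorization": "Bearer <token>"},
        timeout=10,
    )

# Example trigger: on_siem_alert({"severity": "critical", "affected_volume": "erp-prod"})
```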
Those capabilities are supplemented by the InfiniBox storage array, which uses a native software-defined operating system, Neural Cache and a 3-way active controller architecture to deliver immutable snapshot recovery that is guaranteed in under a minute.
You can find out more about Infinidat’s enterprise storage solutions for next-generation data protection and recovery by clicking this link.
Research firm SemiAnalysis has launched its ClusterMAX rating system to evaluate GPU cloud providers, with performance criteria that include networking, management software, and storage capabilities.
SemiAnalysis aims to help organizations evaluate GPU cloud providers – both hyperscalers like AWS, Azure, GCP, and Oracle Cloud, and what it calls “Neoclouds,” a group of newer GPU-focused providers. The initial list includes 131 companies. There are five rating classifications: Platinum, Gold, Silver, Bronze, and Underperforming. It classifies GPU cloud suppliers into trad hyperscalers, neocloud giants, and emerging and sovereign neoclouds, and adds brokers, platforms, and aggregators to the GPU cloud market along with management software and VC clusters.
The research company states: “The bar across the GPU cloud industry is currently very low. ClusterMAX aims to provide a set of guidelines to help raise the bar across the whole GPU cloud industry. ClusterMAX guidelines evaluate features that most GPU renters care about.”
VAST Data co-founder Jeff Denworth commented that the four neocloud giants “have standardized on VAST Data” with the trad hyperscalers using “20-year-old technology.”
SemiAnalysis says the two main storage frustration areas “are when file volumes randomly unmount and when users encounter the Lots of Small File (LOSF) problem.” A program called “autofs” will automatically keep a file system mounted.
“The LOSF problem can easily be avoided as it is only an issue if you decide to roll out your own storage solution like an NFS-server instead of paying for a storage software vendor like WEKA or VAST. An end user will very quickly notice an LOSF problem on the cluster as the time even to import PyTorch into Python will lead to a complete lag out if an LOSF problem exists on the cluster.”
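To see why LOSF bites, compare reading the same amount of data as thousands of tiny files versus a single large file; the sizes and paths below are arbitrary, and on a poorly configured NFS export the gap is far wider than on a local disk:

```python
# Arbitrary sizes and paths; illustrates why lots-of-small-files (LOSF) access
# patterns pay per-file metadata and open/close overhead that one big read avoids.
import os, time, tempfile

root = tempfile.mkdtemp()
payload = os.urandom(4096)

# Write 10,000 x 4 KiB files and one ~40 MiB file holding the same bytes.
for i in range(10_000):
    with open(os.path.join(root, f"part_{i}.bin"), "wb") as f:
        f.write(payload)
with open(os.path.join(root, "one_big.bin"), "wb") as f:
    f.write(payload * 10_000)

t0 = time.perf_counter()
for i in range(10_000):
    with open(os.path.join(root, f"part_{i}.bin"), "rb") as f:
        f.read()
small_files = time.perf_counter() - t0

t0 = time.perf_counter()
with open(os.path.join(root, "one_big.bin"), "rb") as f:
    f.read()
one_file = time.perf_counter() - t0

print(f"10,000 small files: {small_files:.3f}s  single large file: {one_file:.3f}s")
```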
The report reckons that “efficient and performant storage solutions are essential for machine learning workloads, both for training and inference” and “high-performance storage is needed for model checkpoint loads” during training. It mentions Nvidia’s Inference Transfer Library (NIXL) as helping here.
During training, “managed object storage options are equally crucial for flexible, cost-effective, and scalable data storage, enabling teams to efficiently store, version, and retrieve training datasets, checkpoints, and model artifacts.”
On the inference side, “performance-oriented storage ensures that models are loaded rapidly from storage in production scenarios. Slow or inefficient storage can cause noticeable delays, degrading the end-user experience or reducing real-time responsiveness of AI-driven applications.”
“It is, therefore, vital to assess whether GPU cloud providers offer robust managed parallel file system and object storage solutions, ensuring that these options are optimized and validated for excellent performance across varied workloads.”
In general, SemiAnalysis sees that “most customers want managed high-performance parallel file systems such as WEKA, Lustre, VAST Data, DDN, and/or want a managed S3-compatible object storage.”
The report also examines the networking aspects of GPU server rental.
Ratings
There is only one cloud in the top-rated Platinum category, CoreWeave. “Enterprises mainly rent GPUs from Hyperscalers + CoreWeave. Enterprises rarely rent from Emerging Neoclouds,” the report says.
Gold tier providers are Crusoe, Nebius, Oracle, Azure, Together AI, and LeptonAI. The silver tier providers are AWS, Lambda, Firma/Sustainable Metal Cloud, and Scaleway. The bronze tier includes Google Cloud, DataCrunch, TensorWave, and other unnamed suppliers. The report authors say: “We believe Google Cloud is on a Rocketship path toward ClusterMAX Gold or ClusterMAX Platinum by the next time we re-evaluate them.”
The underperformers, such as Massed Compute and SaladCloud, are described as “not having even basic security certifications, such as SOC 2 or ISO 27001. Some of these providers also fall into this category by hosting underlying GPU providers that are not SOC 2 compliant either.”
Full access to the report is available to SemiAnalysis subscribers via the company’s website.
Commvault has entered a deal with SimSpace offering customers a way to learn how to react and respond to a cyberattack in a simulated environment with training exercises.
SimSpace produces such environments, called cyber ranges. These are hands-on virtual environments – “interactive and simulated platforms that replicate networks, systems, tools, and applications. They provide a safe and legal environment for acquiring hands-on cyber skills and offer a secure setting for product development and security posture testing.” A downloadable NIST document tells you more. The deal with SimSpace means Commvault is now offering the Commvault Recovery Range, powered by SimSpace, which models a customer’s environment and simulates a cyberattack.
Bill O’Connell
Commvault CSO Bill O’Connell said: “Together with SimSpace, we are offering companies something that’s truly unique in the market – the physical, emotional, and psychological experience of a real-world cyberattack and the harrowing challenges often experienced in attempting to rapidly recover.”
By “combining SimSpace’s authentic cyberattack simulations with Commvault’s leading cyber recovery capabilities, we’re giving companies the ability to strengthen their security posture, cyber readiness, and business resilience.”
The main idea is to prepare cyber defenders to respond effectively when an attack happens. By going through cyber range training, they get:
Hands-on attack simulations with defenders working in a “hyper-realistic environment that mirrors their actual networks, infrastructure, and day-to-day operations – complete with simulated users logging in and out, sending emails, and interacting with applications.” The defenders face attacks, like Netwalker, that can be challenging to detect, and are “forced to make decisions and execute strategic responses under pressure as the clock is ticking.”
Exercises with no-win recovery scenarios and learning “the hard way the importance of validating backups, cleaning infected data, and executing swift restorations.”
Drills that bring disparate teams together with CSOs, CISOs, CIOs, IT Ops, and SecOps working together to emerge with a cohesive strategy for handling crises and restoring core services swiftly.
We should think in terms of training exercises almost akin to military war gaming, with attack scenarios, response drills, and ad hoc groups of people brought together in a reaction team so they can understand their minimum viability: the critical applications, assets, processes, and people required for an organization to recover following a cyberattack.
Recovery exercises include using Commvault Cloud for threat scanning, Air Gap Protect for immutable storage, Cleanroom Recovery for on-demand recovery testing, and Cloud Rewind to automatically rebuild cloud-native apps. Commvault says these components enable defenders to recover their business without reinfecting it.
Phil Goodwin, research VP at IDC, commented on the Commvault-SimSpace deal, saying: “This is a huge advancement in modern cyber preparedness training.”
Commvault and SimSpace will be showcasing Commvault Recovery Range during RSAC 2025 from April 28 to May 1 in San Francisco at the Alloy Collective. You can get a taste of that here.
Self-hosted SaaS backup service business Keepit intends to back up hundreds of different SaaS apps by 2028, starting from just seven this year.
The seven are Jira, Bamboo, Okta, Confluence, DocuSign, Miro, and Slack, with the ultimate goal of full coverage for all SaaS applications used by enterprises, spanning HR, finance, sales, production, and more. This ambitious scope rivals that of HYCU back in 2023 with its connectors – an API scheme for SaaS app suppliers. This resulted in 50 SaaS app connectors in November that year and almost 90 a year later.
Keepit says the average enterprise uses approximately 112 SaaS applications, according to BetterCloud research. Keepit cites a Gartner report saying that by 2028, 75 percent of enterprises will prioritize backup of SaaS applications as a critical requirement, compared to just 15 percent in 2024.
Michael Amsinck
Michael Amsinck, Keepit Chief Product and Technology Officer (CPTO), stated: “Legacy backup and recovery solutions are not able to adapt and scale to rise to that challenge. Having a platform that is purpose-built for the cloud is a clear advantage to us, because it enables us to build exactly what our customers and the markets need.”
Keepit reckons its Domain-Specific Language (DSL) concept will accelerate development for each application, with them “seamlessly integrating with the unique Keepit platform.” There are no details available explaining how DSL works or which organization – Keepit or the SaaS app supplier – produces the DSL-based connector code enabling Keepit to back up the app.
The product roadmap also includes anomaly detection with enhanced monitoring, compliance, and security insights, which will be available in early May.
Keepit already protects Microsoft 365, Entra ID, Salesforce, and other mainstream SaaS apps, with what we understand to be the DSL-based approach now used for Jira, Bamboo, Okta, Confluence, DocuSign, Miro, and Slack.
The company says it will “offer a comprehensive backup and recovery solution for all SaaS applications, ensuring full control of data regardless of unforeseen events such as outages, malicious attacks, or human error.”
MinIO is staking its claim in the large language model (LLM) market, adding support for the Model Context Protocol (MCP) to its AIStor software – a move sparked by agentic AI’s growing reliance on object storage.
MCP is an Anthropic-supported method for AI agents to connect to proprietary data sources. “Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools,” Anthropic says. As a result, Anthropic’s Claude model can query, read, and write to a customer’s file system storage.
MinIO introduced its v2.0 AIStor software supporting Nvidia GPUDirect, BlueField SuperNICs, and NIM microservices in March. Now it is adding MCP server support so AI agents can access AIStor. A “preview release includes more than 25 commonly used commands, making exploring and using data in an AIStor object store easier than ever.”
Pavel Anni, a MinIO Customer Engineer and Technology Educator, writes: “Agents are already demonstrating incredible intelligence and are very helpful with question answering, but as with humans, they need the ability to discover and access software applications and other services to actually perform useful work … Until now, every agentic developer has had to write their own custom plumbing, glue code, etc. to do this. Without a standard like MCP, building real-world agentic workflows is essentially impossible … MCP leverages language models to summarize the rich output of these services and can present crucial information in a human-readable form.”
The preview release “enables interaction with and management of MinIO AIStor … simply by chatting with an LLM such as Anthropic Claude or OpenAI ChatGPT.” Users can tell Claude to list all object buckets on an AIStor server and then to create a list of objects grouped by categories. Claude then creates a summary list.
Anni contrasts a command line or web user interface request with the Claude and MCP approach: “The command-line tool or web UI would give us a list of objects, as requested. The LLM summarizes the bucket’s content and provides an insightful narrative of its composition. Imagine if I had thousands of objects here. A typical command-line query would give us a long list of objects that could be hard to consume. Here, it gives us a human-readable overview of the bucket’s contents. It is similar to summarizing an article with your favorite LLM client.”
Anni then had Claude add tags to the bucket items. “Imagine doing the same operation without MCP servers. You would have to write a Python script to pull images from the bucket, send them to an AI model for analysis, get the information back, decode it, find the correct fields, apply tags to objects … You could easily spend half a day creating and debugging such a script. We just did it simply using human language in a matter of seconds.”
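For a sense of the plumbing the MCP server removes, the hand-rolled script Anni describes would look roughly like this; the endpoint, bucket name, and the classify_image() stand-in for a model call are all invented:

```python
# Hypothetical sketch of the hand-written plumbing an MCP server makes unnecessary:
# list objects in an S3-compatible bucket, ask a model for a label, then write the
# label back as an object tag. Endpoint, bucket, and classify_image() are invented.
import boto3

s3 = boto3.client("s3", endpoint_url="https://aistor.example.internal")

def classify_image(image_bytes: bytes) -> str:
    """Placeholder for a call to a vision model that returns a category label."""
    return "wildlife"

bucket = "photo-archive"
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
    label = classify_image(body)
    s3.put_object_tagging(
        Bucket=bucket,
        Key=obj["Key"],
        Tagging={"TagSet": [{"Key": "category", "Value": label}]},
    )
```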
There is more information about AIStor and MCP in Anni’s blog.
Pure Storage’s Portworx is looking to win over customers wishing to migrate their virtual machines to containers by adding VM support to its container storage software product.
Businesses and public sector customers can keep using existing VMs on Kubernetes while refactoring old apps or creating entirely new cloud-native ones using Kubernetes-orchestrated containers. VMware’s Tanzu offering added container support to vSphere. Pure is now taking the opposite approach by adding VM support to its Portworx offering. Pure positions this move in the broader context of Broadcom’s 2023 acquisition of VMware and the subsequent pricing changes that have affected VMware customers.
It says 81 percent of enterprises that participated in a 2024 survey of Kubernetes experts plan to migrate their VMware VMs to Kubernetes over the next five years, with almost two-thirds intending to do so within the next two years. v3.3 of the Portworx Enterprise software will add this VMware VM support and is projected to deliver 30 to 50 percent cost savings for customers moving VMs to containers.
Mitch Ashley, VP and Practice Lead, DevOps and Application Development at Futurum, stated: “With Portworx 3.3, Pure Storage is bringing together a scalable data management platform with a simplified workflow across containers and VMs. That’s appealing to enterprises modernizing their infrastructure, pursuing cloud-native applications, or both.”
v3.3 provides a single workflow for VM and cloud-native apps instead of having separate tools and processes. It will support VMs running on Kubernetes in collaboration with Red Hat, SUSE, Kubermatic, and Spectro Cloud, and deliver:
RWX Block support for KubeVirt VMs running on FlashArray or other storage vendors’ products, providing fast read/write capabilities (see the sketch after this list)
Single management plane, including synchronized disaster recovery for VMs running on Kubernetes with no data loss (zero RPO)
File-level backups for Linux VMs, allowing for more granular backup and restore
Reference architecture and partner integrations with KubeVirt software from Red Hat, SUSE, Spectro Cloud, and Kubermatic
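As a rough illustration of the first item, an RWX block-mode volume claim of the kind a KubeVirt VM disk (and live migration) needs could be created like this; the storage class name and namespace are hypothetical, not Portworx’s documented defaults:

```python
# Hypothetical storage class and namespace; sketches a ReadWriteMany, block-mode
# PersistentVolumeClaim of the sort a KubeVirt VM disk and live migration need.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "vm-disk-0"},
    "spec": {
        "accessModes": ["ReadWriteMany"],        # RWX: shareable across nodes
        "volumeMode": "Block",                   # raw block device for the VM disk
        "storageClassName": "px-rwx-block",      # hypothetical Portworx-backed class
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="vms", body=pvc)
```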
Portworx Enterprise 3.3 will be generally available by the end of May and you can learn more about it here.