Panzura’s Symphony unstructured data estate manager has extended its reach into IBM Storage Deep Archive territory, integrating with the S3-accessed Diamondback tape library behind it.
Symphony is Panzura’s software for discovering and managing exabyte-scale unstructured data sets, featuring scanning, tiering, migration, and risk and compliance analysis. It complements Panzura’s original and core CloudFS hybrid cloud file services offering, which supports large-scale multi-site workflows and collaboration using active, not archived, data. IBM Storage Deep Archive is a Diamondback TS6000 tape library, storing up to 27 PB of LTO-9 data in a single rack with 16.1 TB/hour (4.47 GBps) performance. It is fronted by an S3-accessible interface, analogous to the file-based LTFS.
Sundar Kanthadai
Sundar Kanthadai, Panzura CTO, stated that this Panzura-IBM offering “addresses surging cold data volumes and escalating cloud fees by combining smart data management with ultra-low-cost on-premises storage, all within a compact footprint.”
Panzura Product SVP Mike Harvey added: “This integration allows technologists to escape the trap of unpredictable access fees and egress sticker shock.”
The Symphony-Deep Archive integration uses the S3 Glacier Flexible Retrieval storage class to “completely automate data transfers to tape.” Customers use Symphony to scan an online unstructured data estate and move metadata-tagged cold data to the IBM tape library, freeing up SSD and HDD capacity while keeping the data on-premises. Embedded file metadata is automatically added to Symphony’s data catalog, which is searchable across more than 500 data types and accessible via API and Java Database Connectivity (JDBC) requests.
Specific file recall and deletion activity can be automated through policy settings.
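For a sense of what driving a tape archive through an S3 front end looks like, here is a minimal sketch using boto3 against an S3-compatible endpoint that exposes the Glacier Flexible Retrieval storage class, as IBM's Deep Archive front end is described as doing. The endpoint, credentials, bucket, and file path are placeholders, and this is not Panzura's or IBM's actual integration code.

```python
# Minimal sketch, not Panzura's implementation: write a cold file to an
# S3 endpoint exposing the Glacier Flexible Retrieval storage class, then
# request a recall. Endpoint, credentials, bucket, and path are assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://deep-archive.example.internal",  # hypothetical S3 front end
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Tier a cold file off primary SSD/HDD storage to the tape-backed bucket.
with open("/mnt/projects/2019/survey.dat", "rb") as f:
    s3.put_object(
        Bucket="cold-archive",
        Key="projects/2019/survey.dat",
        Body=f,
        StorageClass="GLACIER",  # Glacier Flexible Retrieval storage class
    )

# Later, recall the object before reading it back.
s3.restore_object(
    Bucket="cold-archive",
    Key="projects/2019/survey.dat",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```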
Panzura’s Symphony can access more than 400 file formats via a deal with GRAU Data for its Metadata Hub software. It is already integrated with IBM’s Fusion Data Catalog, which provides unified metadata management and insights for heterogeneous unstructured data on-premises and in the cloud, and with IBM Storage Fusion, a containerized offering derived from Spectrum Scale and Spectrum Protect data protection.
According to IBM, Deep Archive is much more affordable than public cloud alternatives, “offering object storage for cold data at up to 83 percent lower cost than other service providers, and importantly, with zero recall fees.”
Panzura says the IBM Deep Archive-Symphony deal is “particularly crucial for artificial intelligence (AI) workloads,” because it can make archived data accessible to AI model training and inference workloads.
It claims the Symphony IBM Deep Archive integration enables users to streamline data archiving processes and “significantly reduce cloud and on-premises storage expenses.” The combined offering is available immediately.
SPONSORED POST: It’s not a question of if your organization gets hit by a cyberattack – only when, and how quickly it recovers.
Even small amounts of application and service downtime can cause massive disruption to any business. So being able to get everything back online in minutes rather than hours, or even days, can be the key to resilience.
But modern workloads rely on increasingly large volumes of data to function efficiently. What used to involve gigabytes of critical information now needs petabytes, and making sure all of that data can be restored immediately when that cyber security incident hits is definitely no easy task.
It’s a challenge that Infinidat’s enterprise storage solutions for next-generation data protection and recovery were built to help address, using AI-based deep machine learning techniques to speed up the process. At their core are InfiniSafe cyber resilience and recovery storage solutions which provide immutable snapshot recovery, local or remote air gaps, and fenced forensic environments to deliver a near-instantaneous guaranteed Service Level Agreement (SLA) recovery from cyberattacks, says the company.
Watch this Hot Seat video to see Infinidat CMO Eric Herzog tell The Register’s Tim Philips exactly how Infinidat can help you withstand cyberattacks.
InfiniSafe Automated Cyber Protection (ACP) uses application programming interfaces (APIs) to integrate with a range of third-party Security Operations Center (SOC), Security Information and Event Management (SIEM), and Security Orchestration, Automation and Response (SOAR) platforms. It automatically triggers an immediate immutable data snapshot based on input from those cybersecurity packages. It can then be configured to use InfiniSafe Cyber Detection to run AI-based scanning of the immutable snapshots and check whether malware or ransomware is present.
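To make the integration pattern concrete, here is an illustrative sketch of a SOAR playbook step that reacts to a SIEM alert by calling a storage snapshot API. The endpoint, token, and payload shape are hypothetical and are not Infinidat's actual API; the point is the event-to-immutable-snapshot flow described above.

```python
# Illustrative sketch only: the SIEM/SOAR-to-storage trigger pattern described
# above, not Infinidat's actual API. Endpoint, token, and payload are hypothetical.
import requests

SNAPSHOT_API = "https://array.example.internal/api/snapshots"  # hypothetical endpoint
TOKEN = "REDACTED"

def on_siem_alert(alert: dict) -> None:
    """Called by a SOAR playbook when the SIEM raises a high-severity incident."""
    if alert.get("severity") not in ("high", "critical"):
        return
    # Trigger an immediate immutable snapshot of the affected volumes.
    resp = requests.post(
        SNAPSHOT_API,
        json={
            "volumes": alert.get("affected_volumes", []),
            "immutable": True,
            "reason": alert.get("incident_id", "unknown"),
        },
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

on_siem_alert({"severity": "critical", "incident_id": "INC-1234",
               "affected_volumes": ["vol-finance-01"]})
```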
Those capabilities are supplemented by the InfiniBox storage array, which uses a native software-defined operating system, Neural Cache and a 3-way active controller architecture to deliver immutable snapshot recovery that is guaranteed in under a minute.
You can find out more about Infinidat’s enterprise storage solutions for next-generation data protection and recovery by clicking this link.
Research firm SemiAnalysis has launched its ClusterMAX rating system to evaluate GPU cloud providers, with performance criteria that include networking, management software, and storage capabilities.
SemiAnalysis aims to help organizations evaluate GPU cloud providers – both hyperscalers like AWS, Azure, GCP, and Oracle Cloud, and what it calls “Neoclouds,” a group of newer GPU-focused providers. The initial list includes 131 companies, and there are five rating classifications: Platinum, Gold, Silver, Bronze, and Underperforming. SemiAnalysis groups GPU cloud suppliers into trad hyperscalers, neocloud giants, and emerging and sovereign neoclouds, and adds brokers, platforms, aggregators, management software vendors, and VC clusters to its map of the GPU cloud market.
The research company states: “The bar across the GPU cloud industry is currently very low. ClusterMAX aims to provide a set of guidelines to help raise the bar across the whole GPU cloud industry. ClusterMAX guidelines evaluate features that most GPU renters care about.”
VAST Data co-founder Jeff Denworth commented that the four neocloud giants “have standardized on VAST Data” with the trad hyperscalers using “20-year-old technology.”
SemiAnalysis says the two main storage frustration areas “are when file volumes randomly unmount and when users encounter the Lots of Small File (LOSF) problem.” The first issue can be mitigated with autofs, a utility that automatically mounts file systems on demand.
“The LOSF problem can easily be avoided as it is only an issue if you decide to roll out your own storage solution like an NFS-server instead of paying for a storage software vendor like WEKA or VAST. An end user will very quickly notice an LOSF problem on the cluster as the time even to import PyTorch into Python will lead to a complete lag out if an LOSF problem exists on the cluster.”
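The `import torch` symptom the report cites is easy to check for. Here is a quick diagnostic sketch: time the PyTorch import on the shared file system and time a burst of small-file reads. The mount path is an assumption; long wall-clock times on either step point to an LOSF or metadata-latency problem.

```python
# Quick diagnostic sketch for the LOSF symptom described above. The shared
# mount path below is hypothetical.
import subprocess
import time
from pathlib import Path

start = time.time()
subprocess.run(["python", "-c", "import torch"], check=True)
print(f"import torch took {time.time() - start:.1f}s")  # minutes here hints at LOSF

sample_dir = Path("/mnt/shared/datasets/tiny_files")  # hypothetical small-file directory
start = time.time()
files = list(sample_dir.glob("*"))[:1000]
for p in files:
    p.read_bytes()
print(f"read {len(files)} small files in {time.time() - start:.1f}s")
```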
The report reckons that “efficient and performant storage solutions are essential for machine learning workloads, both for training and inference” and “high-performance storage is needed for model checkpoint loads” during training. It mentions Nvidia’s Inference Transfer Library (NIXL) as helping here.
During training, “managed object storage options are equally crucial for flexible, cost-effective, and scalable data storage, enabling teams to efficiently store, version, and retrieve training datasets, checkpoints, and model artifacts.”
On the inference side, “performance-oriented storage ensures that models are loaded rapidly from storage in production scenarios. Slow or inefficient storage can cause noticeable delays, degrading the end-user experience or reducing real-time responsiveness of AI-driven applications.”
“It is, therefore, vital to assess whether GPU cloud providers offer robust managed parallel file system and object storage solutions, ensuring that these options are optimized and validated for excellent performance across varied workloads.”
In general, SemiAnalysis sees that “most customers want managed high-performance parallel file systems such as WEKA, Lustre, VAST Data, DDN, and/or want a managed S3-compatible object storage.”
The report also examines the networking aspects of GPU server rental.
Ratings
There is only one cloud in the top-rated Platinum category, CoreWeave. “Enterprises mainly rent GPUs from Hyperscalers + CoreWeave. Enterprises rarely rent from Emerging Neoclouds,” the report says.
Gold tier providers are Crusoe, Nebius, Oracle, Azure, Together AI, and LeptonAI. The silver tier providers are AWS, Lambda, Firmus/Sustainable Metal Cloud, and Scaleway. The bronze tier includes Google Cloud, DataCrunch, TensorWave, and other unnamed suppliers. The report authors say: “We believe Google Cloud is on a Rocketship path toward ClusterMAX Gold or ClusterMAX Platinum by the next time we re-evaluate them.”
The underperformers, such as Massed Compute and SaladCloud, are described as “not having even basic security certifications, such as SOC 2 or ISO 27001. Some of these providers also fall into this category by hosting underlying GPU providers that are not SOC 2 compliant either.”
Full access to the report is available to SemiAnalysis subscribers via the company’s website.
Commvault has entered a deal with SimSpace to offer customers a way to learn how to react and respond to a cyberattack through training exercises in a simulated environment.
SimSpace produces such environments, called cyber ranges. These are hands-on virtual environments – “interactive and simulated platforms that replicate networks, systems, tools, and applications. They provide a safe and legal environment for acquiring hands-on cyber skills and offer a secure setting for product development and security posture testing.” A downloadable NIST document tells you more. The deal with SimSpace means Commvault is now offering the Commvault Recovery Range, powered by SimSpace, which models a customer’s environment and simulates a cyberattack.
Bill O’Connell
Commvault CSO Bill O’Connell said: “Together with SimSpace, we are offering companies something that’s truly unique in the market – the physical, emotional, and psychological experience of a real-world cyberattack and the harrowing challenges often experienced in attempting to rapidly recover.”
By “combining SimSpace’s authentic cyberattack simulations with Commvault’s leading cyber recovery capabilities, we’re giving companies the ability to strengthen their security posture, cyber readiness, and business resilience.”
The main idea is to prepare cyber defenders to respond effectively when an attack happens. By going through cyber range training, they get:
Hands-on attack simulations with defenders working in a “hyper-realistic environment that mirrors their actual networks, infrastructure, and day-to-day operations – complete with simulated users logging in and out, sending emails, and interacting with applications.” The defenders face attacks, like Netwalker, that can be challenging to detect, and are “forced to make decisions and execute strategic responses under pressure as the clock is ticking.”
Exercises with no-win recovery scenarios and learning “the hard way the importance of validating backups, cleaning infected data, and executing swift restorations.”
Drills that bring disparate teams together with CSOs, CISOs, CIOs, IT Ops, and SecOps working together to emerge with a cohesive strategy for handling crises and restoring core services swiftly.
We should think of these as training exercises akin to military war gaming, with attack scenarios, response drills, and ad hoc groups of people brought together in a reaction team so they can understand their minimum viability: the critical applications, assets, processes, and people required for an organization to recover following a cyberattack.
Recovery exercises include using Commvault Cloud for threat scanning, Air Gap Protect for immutable storage, Cleanroom Recovery for on-demand recovery testing, and Cloud Rewind to automatically rebuild cloud-native apps. Commvault says these components enable defenders to recover their business without reinfecting it.
Phil Goodwin, research VP at IDC, commented on the Commvault-SimSpace deal, saying: “This is a huge advancement in modern cyber preparedness training.”
Commvault and SimSpace will be showcasing Commvault Recovery Range during RSAC 2025 from April 28 to May 1 in San Francisco at the Alloy Collective. You can get a taste of that here.
Self-hosted SaaS backup service business Keepit intends to back up hundreds of different SaaS apps by 2028, starting from just seven this year.
The seven are Jira, Bamboo, Okta, Confluence, DocuSign, Miro, and Slack, with the ultimate goal of full coverage for all SaaS applications used by enterprises, spanning HR, finance, sales, production, and more. The ambition rivals that of HYCU, which in 2023 launched its connector scheme – an API scheme for SaaS app suppliers – that produced 50 SaaS app connectors by November that year and almost 90 a year later.
Keepit says the average enterprise uses approximately 112 SaaS applications, according to BetterCloud research. Keepit cites a Gartner report saying that by 2028, 75 percent of enterprises will prioritize backup of SaaS applications as a critical requirement, compared to just 15 percent in 2024.
Michael Amsinck
Michael Amsinck, Keepit Chief Product and Technology Officer (CPTO), stated: “Legacy backup and recovery solutions are not able to adapt and scale to rise to that challenge. Having a platform that is purpose-built for the cloud is a clear advantage to us, because it enables us to build exactly what our customers and the markets need.”
Keepit reckons its Domain-Specific Language (DSL) concept will accelerate development for each application, with them “seamlessly integrating with the unique Keepit platform.” There are no details available explaining how DSL works or which organization – Keepit or the SaaS app supplier – produces the DSL-based connector code enabling Keepit to back up the app.
The product roadmap also includes anomaly detection, with enhanced monitoring, compliance, and security insights; this capability will be available in early May.
Keepit already protects Microsoft 365, Entra ID, Salesforce, and other mainstream SaaS apps, with what we understand to be the DSL-based approach now used for Jira, Bamboo, Okta, Confluence, DocuSign, Miro, and Slack.
The company says it will “offer a comprehensive backup and recovery solution for all SaaS applications, ensuring full control of data regardless of unforeseen events such as outages, malicious attacks, or human error.”
MinIO is staking its claim in the large language model (LLM) market, adding support for the Model Context Protocol (MCP) to its AIStor software – a move sparked by agentic AI’s growing reliance on object storage.
MCP is an Anthropic-supported method for AI agents to connect to proprietary data sources. “Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools,” Anthropic says. As a result, Anthropic’s Claude model can query, read, and write to a customer’s file system storage.
MinIO introduced its v2.0 AIStor software, supporting Nvidia GPUDirect, BlueField SuperNICs, and NIM microservices, in March. Now it is adding MCP server support so AI agents can access AIStor. A “preview release includes more than 25 commonly used commands, making exploring and using data in an AIStor object store easier than ever.”
Pavel Anni, a MinIO Customer Engineer and Technology Educator, writes: “Agents are already demonstrating incredible intelligence and are very helpful with question answering, but as with humans, they need the ability to discover and access software applications and other services to actually perform useful work … Until now, every agentic developer has had to write their own custom plumbing, glue code, etc. to do this. Without a standard like MCP, building real-world agentic workflows is essentially impossible … MCP leverages language models to summarize the rich output of these services and can present crucial information in a human-readable form.”
The preview release “enables interaction with and management of MinIO AIStor … simply by chatting with an LLM such as Anthropic Claude or OpenAI ChatGPT.” Users can tell Claude to list all object buckets on an AIStor server and then to create a list of objects grouped by categories. Claude then creates a summary list.
Anni contrasts a command line or web user interface request with the Claude and MCP approach: “The command-line tool or web UI would give us a list of objects, as requested. The LLM summarizes the bucket’s content and provides an insightful narrative of its composition. Imagine if I had thousands of objects here. A typical command-line query would give us a long list of objects that could be hard to consume. Here, it gives us a human-readable overview of the bucket’s contents. It is similar to summarizing an article with your favorite LLM client.”
Anni then had Claude add tags to the bucket items. “Imagine doing the same operation without MCP servers. You would have to write a Python script to pull images from the bucket, send them to an AI model for analysis, get the information back, decode it, find the correct fields, apply tags to objects … You could easily spend half a day creating and debugging such a script. We just did it simply using human language in a matter of seconds.”
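To make Anni's contrast concrete, here is a minimal sketch of the manual scripting he says MCP removes: listing objects in an S3-compatible bucket (MinIO and AIStor speak the S3 API) and applying tags with boto3. The endpoint, bucket name, and the classify() placeholder are assumptions; with the MCP server, the same outcome is requested in plain language through an LLM client.

```python
# A minimal sketch of the manual approach Anni contrasts with MCP: list objects
# in an S3-compatible bucket and apply tags. Endpoint, bucket, and classify()
# are placeholders, not MinIO's MCP server code.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://aistor.example.internal",  # hypothetical AIStor endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

def classify(key: str) -> str:
    # Placeholder for the "send it to an AI model for analysis" step.
    return "image" if key.lower().endswith((".jpg", ".png")) else "other"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="media-assets"):
    for obj in page.get("Contents", []):
        s3.put_object_tagging(
            Bucket="media-assets",
            Key=obj["Key"],
            Tagging={"TagSet": [{"Key": "category", "Value": classify(obj["Key"])}]},
        )
```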
There is more information about AIStor and MCP in Anni’s blog.
Pure Storage’s Portworx is looking to win over customers wishing to migrate their virtual machines to containers by adding VM support to its container storage software product.
Businesses and public sector customers can keep using existing VMs on Kubernetes while refactoring old apps or creating entirely new cloud-native ones using Kubernetes-orchestrated containers. VMware’s Tanzu offering added container support to vSphere. Pure is now taking the opposite approach by adding VM support to its Portworx offering. Pure positions this move in the broader context of Broadcom’s 2023 acquisition of VMware and the subsequent pricing changes that have affected VMware customers.
It says 81 percent of enterprises that participated in a 2024 survey of Kubernetes experts plan to migrate their VMware VMs to Kubernetes over the next five years, with almost two-thirds intending to do so within the next two years. v3.3 of the Portworx Enterprise software will add this VMware VM support and is projected to deliver 30 to 50 percent cost savings for customers moving VMs to containers.
Mitch Ashley, VP and Practice Lead, DevOps and Application Development at Futurum, stated: “With Portworx 3.3, Pure Storage is bringing together a scalable data management platform with a simplified workflow across containers and VMs. That’s appealing to enterprises modernizing their infrastructure, pursuing cloud-native applications, or both.”
v3.3 provides a single workflow for VM and cloud-native apps instead of having separate tools and processes. It will support VMs running on Kubernetes in collaboration with Red Hat, SUSE, Kubermatic, and Spectro Cloud, and deliver:
RWX Block support for KubeVirt VMs running on FlashArray or other storage vendors’ products, providing fast read/write capabilities (see the sketch after this list)
Single management plane, including synchronized disaster recovery for VMs running on Kubernetes with no data loss (zero RPO)
File-level backups for Linux VMs, allowing for more granular backup and restore
Reference architecture and partner integrations with KubeVirt software from Red Hat, SUSE, Spectro Cloud, and Kubermatic
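For context on the RWX block item in the first bullet, here is a hedged sketch of what such a volume looks like at the Kubernetes level: a PersistentVolumeClaim with volumeMode Block and accessModes ReadWriteMany, which is what KubeVirt VM disks typically need for live migration. The storage class name is a placeholder and not necessarily Portworx's.

```python
# Sketch of an RWX raw-block PVC of the kind a KubeVirt VM disk uses.
# The "px-rwx-block" storage class name is a placeholder assumption.
import json

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "vm-disk-rwx", "namespace": "vms"},
    "spec": {
        "accessModes": ["ReadWriteMany"],    # RWX: shareable across nodes for live migration
        "volumeMode": "Block",               # raw block device backing the VM disk
        "storageClassName": "px-rwx-block",  # placeholder storage class name
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

# Emit JSON, which is valid input for `kubectl apply -f -`.
print(json.dumps(pvc_manifest, indent=2))
```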
Portworx Enterprise 3.3 will be generally available by the end of May and you can learn more about it here.
XenData has launched an on-prem gateway appliance for moving Windows SMB media files to and from public cloud object storage.
XenData provides storage products, such as on-prem X-Series tape archives and Media Portal viewers for the media and entertainment industry and allied customers. The Z20 Cloud Media Appliance is a Windows 11 Pro x86 box that hooks up to a local SMB network and can move files to object storage in the cloud, utilizing both online and archive tiers.
XenData Z20
CEO Dr Phil Storey stated: “The Z20 makes it easy for users to store media files in the cloud and it is especially useful when content is stored on the lower-cost Glacier and Azure Archive tiers. It allows users to check that they are rehydrating the correct media files before incurring rehydration and egress fees. Furthermore, it provides users with a self-service to easily restore individual files without the need to bother IT support staff.”
The system has a multi-tenant, web-based UI. It complies with Microsoft’s security model and can be added to an existing Domain or Workgroup. Remote users are supported over HTTPS when an SSL security certificate is added. Physically, the device is a 1 RU rack-mount appliance with 4 x 1 GbE network ports and options for additional 10 and 25 GbE connectivity.
XenData previously released Cloud File Gateway software, running on Windows 10 Pro and Windows Server, to enable file-based apps to use cloud object storage such as AWS S3, Azure Blob, and Wasabi S3 as an archive facility. In effect, it has updated this software to support deep cloud archives, such as AWS S3 Glacier Deep Archive or Azure’s Archive tier, and added Media Asset Viewer functionality to provide users with a self-serve capability.
Using the web-based UI, they can display media file previews and change the storage tier for a selected file, for example rehydrating it from a deep archive and then downloading it.
The Z20 is available from XenData Authorized Partners worldwide and is priced at $9,880 in the US.
Arcitecta is rolling out a real-time content delivery and media management offering aimed at media production pros.
Australia-based Arcitecta provides distributed data management software, its Universal Data System, which supports file and object storage with a single namespace and tiering across on-premises SSDs, disk, and tape, plus the public cloud, along with a Livewire data mover and a metadata database. Its Mediaflux Multi-Site, Mediaflux Edge, and Mediaflux Burst products enable geo-distributed workers to collaborate more effectively, with faster access to shared data across normal and peak usage times. Mediaflux Real-Time goes further, providing virtually instant access to media content data.
Jason Lohrey
Jason Lohrey, CEO and founder of Arcitecta, stated: “Mediaflux Real-Time is revolutionary and will power the future of live production, supporting continuous file expansion such as live video streams and enabling editors to work with those files in real-time, even while they are still being created.”
He said Arcitecta’s Livewire data transfer module “securely moves millions or billions of files at light speed” to accelerate workflows. “In pre-release previews, broadcasters have praised Mediaflux Real-Time as ‘a game-changer’ for live broadcast, live sports, and media entertainment production.”
Mediaflux Real-Time is hardware, file-type, and codec agnostic, delivering centralized content management, network optimization, collaboration tools, security, and cost efficiency. Customers can organize storage and metadata for easy access and retrieval, have a reliable infrastructure for handling large file transfers, and use version control and integrated feedback systems. They can share content with multiple locations in real time and grow files with live content. The content can be protected with encryption and access controls.
Arcitecta Mediaflux LiveWire with Dell PowerScale and ECS
Arcitecta is aiming the product at editors in sports production, broadcast, and media entertainment environments who need access to growing video file content “for live productions and rapid post-event workflows. Editors working remotely often experience delays due to slow transfers and playback speeds, which extend the time to the final product.” Remote editors can work collaboratively, creating highlight reels or editing live footage almost instantly, “dramatically cutting post-production time.”
Mediaflux Real-Time supports real-time editing with faster content delivery, removes single-location workflow bottlenecks, and enhances remote collaboration. Content can be played back in real time across sites. It “eliminates the need to buy and configure dedicated streams or connections to each editing location, requiring only a single stream to transfer the data to multiple sites – reducing cost and infrastructure requirements.”
We asked Arcitecta how Mediaflux Real-Time differs from the 2024 release of Livewire. Lohrey told us: “Mediaflux Real-Time is a file system (shim) that intercepts all file system traffic and uses Livewire to transport changes to other locations/file systems in real-time.”
“Livewire is a system/fabric that can be asked to transmit a set of data from A to N destinations. What is different here is that we are transmitting file system operations as they happen. For that to happen our file system end point is in the data path and dispatching changes/modifications to other end-points with Livewire. That is, we have tapped into (by being in the data path) the file system and teeing off the modifications as they happen.” In practice, this means:
I make a file -> transmitted
I rename a file -> transmitted
I write to a file -> transmitted
I delete a file -> transmitted (although the receiving end may decide not to honor that)
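As an illustration of the "tap into the data path and tee off modifications" pattern Lohrey describes, here is a small sketch using the Python watchdog library. This is not Arcitecta's file system shim; transmit() and the watched path are placeholders standing in for dispatching operations to remote sites over a mover such as Livewire.

```python
# Illustration only of teeing off file system modifications as they happen;
# not Arcitecta's implementation. transmit() and the path are placeholders.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def transmit(op, path, dest=None):
    # Placeholder for dispatching the operation to remote sites; here we just log it.
    print(f"{op}: {path}" + (f" -> {dest}" if dest else ""))

class TeeHandler(FileSystemEventHandler):
    def on_created(self, event):
        transmit("create", event.src_path)
    def on_modified(self, event):
        transmit("write", event.src_path)
    def on_moved(self, event):
        transmit("rename", event.src_path, event.dest_path)
    def on_deleted(self, event):
        transmit("delete", event.src_path)  # receiver may choose not to honor this

observer = Observer()
observer.schedule(TeeHandler(), "/mnt/media/live", recursive=True)  # hypothetical path
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```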
Mediaflux Real-Time is available immediately. It is part of the Mediaflux and Livewire suite and works seamlessly with a wide range of storage and infrastructure solutions and protocols.
Arcitecta and Dell Technologies will showcase Mediaflux Real-Time, combined with Dell PowerScale and ECS, in the Dell Technologies booth #SL4616 at the NAB Show, April 6-9, at the Las Vegas Convention Center.
Amazon Web Services (AWS) wants to build a two-story datacenter for tape storage in Middlesex, England. Its planning application has been granted. The planning documents say: “This datacenter will be a data repository which requires significantly less power consumption than a typical datacenter. This building will be designed to house tape media that provides a long-term data storage solution for our customers. It will utilize magnetic tape media.” The lucky tape system supplier has not been identified.
AWS tape storage datacenter planned for Hayes, Middlesex
…
CoreWeave dropped its IPO target to $40/share for 37.7 million shares, valuing it at around $23 billion and raising $1.5 billion. It had planned to sell shares for between $47 and $55 apiece, with 49 million on offer, valuing it at up to $32 billion and raising up to $2.7 billion. The company reported 2024 revenues of almost $2 billion with a net loss of $863 million. CoreWeave shares should be available on Nasdaq today. It is thought Microsoft’s reported withdrawal from datacenter leases, implying a lower-than-expected growth rate for GPU-heavy AI processing, spooked investors during CoreWeave’s pre-IPO investor roadshow.
…
DDN has been recognized as a winner of the 2025 Artificial Intelligence Excellence Awards by the Business Intelligence Group for its Infinia 2.0 storage system. See here for more details on the awards and a complete list of winners and finalists.
…
HighPoint announced its RocketStore RS654x Series NVMe RAID enclosures, measuring less than 5 inches tall and 10 inches long, with PCIe 4.0 x16 switch architecture, built-in RAID 0, 1, and 10 technology, and up to 28 GBps transfer speeds. These four- and eight-bay enclosures are specifically designed for 4K and 8K video editing, 3D rendering, and other high-resolution applications.
…
IBM announced Storage Ceph as a Service so clients can leverage the block+file+object storage software as a fully managed, cloud storage experience on-premises. It’s designed to reduce operational costs by aligning spending with actual usage, avoiding under-utilization and over-provisioning, and scaling on-demand. Prices start at $0.026/GB/month. More information here.
…
NVMe TCP-connected block storage supplier Lightbits has an educational blog focusing on block storage, which “is evolving into a critical component of high-performance, accelerated data pipelines.” Read it here.
…
Microsoft has announced new capabilities for Azure NetApp Files (ANF):
A flexible service level separates throughput and capacity pricing, saving customers between 10 and 40 percent – think of it as a “pay for the capacity you need, and scale the performance as you grow” model.
Application Volume Groups are now available for Oracle and SAP workloads, simplifying management and optimizing performance.
A new cool access tier with a snapshot-only policy offers a cost-effective solution for managing snapshots – allowing customers to benefit from cost savings without compromising on restore times.
OneTrust has launched the Privacy Breach Response Agent, built with Microsoft Security Copilot. When a data breach occurs, privacy teams have to analyze security requirements and regulatory privacy requirements if personal data is compromised. Privacy and breach notification regulations are fragmented and complex, varying by geography and type of data, and the notification windows are often very short. The Privacy Breach Response Agent enables privacy teams to evaluate the scope of the incident, identify jurisdictions, assess regulatory requirements, generate guidance, and coordinate and align with the InfoSec response team. More information on the agent can be found here.
…
Other World Computing (OWC) launched its Jellyfish B24 and Jellyfish S24 Storage products. The Jellyfish B24 delivers a cost-effective, high-capacity solution for seamless collaboration and nearline backup, while the Jellyfish S24 offers a full SSD production server with lightning-fast performance for demanding video workflows. The B24 has four dedicated SAS ports to which you can connect B24-E expansions via a mini-SAS cable, included with every expansion chassis. By adding four B24-E expansion chassis to a B24 head unit, the total storage capacity can reach up to 2.8 petabytes.
The SSDs in the S24 are the OWC Mercury Extreme Pro SSDs. The S24 can be combined with an OWC Jellyfish S24-E SSD expansion chassis for up to 736 TB of fast SSD storage.
…
M&E market-focused file and object storage supplier OpenDrives is introducing a cloud-native data services offering, dubbed Astraeus, that merges on-premises high-performance storage with the ability to provision and manage integrated data services as in the public cloud. Customers can “easily repatriate their data, bringing both data and cloud-native applications back on-premises and into the security of a private cloud.” Compute and storage resources can scale independently with dynamic provisioning and orchestration capabilities. Astraeus follows an unlimited-capacity pricing model, licensed per node instead of per capacity, enabling cost predictability. OpenDrives will be exhibiting at the upcoming 2025 NAB Show in Las Vegas, Booth SL6612 in the South Hall Lower, April 6 to 9.
…
PNY announced its CS2342 M.2 NVMe SSD in 1 and 2 TB capacities with PCIe Gen 4 x 4 connectivity. It has up to 7,300 MBps sequential read and 6,000 MBps sequential write speeds. The product supports TCG Pyrite and has a five-year or TBW-based warranty.
…
The co-CEO of Samsung, Han Jong-Hee, has died from a heart attack at the age of 63. Co-CEO Jun Young-hyun, who oversees Samsung’s chip business, is now the sole CEO. Han Jong-Hee was responsible for Samsung’s consumer electronics and mobile devices business.
…
SMART Modular Technologies announced it is sampling its redefined Non-Volatile CXL Memory Module (NV-CMM) to Tier 1 OEMs based on the CXL 2.0 standard in the E3.S 2T form factor. “This product combines non-volatile high-performance DRAM memory, persistent flash memory and an energy source in a single removable EDSFF form factor to deliver superior reliability and serviceability for data-intensive applications … PCIe Gen 5 and CXL 2.0 compliance ensures seamless integration with the latest datacenter architectures.” View it as a high-speed cache tier.
SMART Modular’s NV-CMM details
…
There will be an SNIA Cloud Object Storage Plugfest in Denver from April 28 to 30. Learn more here. There will also be an SNIA Swordfish Plugfest at the same time in conjunction with SNIA’s Regional SDC Denver event. Register here.
…
Team Group announced the launch of the TEAMGROUP ULTRA MicroSDXC A2 V30 Memory Card, which delivers read speeds of up to 200 MBps and write speeds of up to 170 MBps. The ULTRA MicroSDXC A2 V30 meets the A2 application performance standard with a V30 video speed rating and a lifetime warranty.
Financial analyst Wedbush has identified eight publicly owned suppliers it believes will benefit greatly from an exploding AI spending phase by businesses. It says: “While there is a lot of noise in the software world around driving monetization of AI, a handful of software players have started to separate themselves from the pack … We believe the use cases are exploding, enterprise consumption phase is ahead of us in the rest of 2025, launch of LLM models across the board, and the true adoption of generative AI will be a major catalyst for the software sector and key players to benefit from this once in a generation Fourth Industrial Revolution set to benefit the tech space.” Wedbush identifies Oracle and Salesforce as the top opportunities. The others are Amazon, Elastic, Alphabet, IBM, Innodata, MongoDB, Micron Technology, Pegasystems, and Snowflake. “The clear standout over the last month from checks has been the cloud penetration success at IBM which has a massive opportunity to monetize its installed base over the next 12 to 18 months.”
Databricks data lake analysts can use Prophecy software to build their own data prep pipelines for downstream analytics and AI processing.
Prophecy, which describes itself as a data transformation copilot company, provides a Databricks-focused AI and analytics data pipeline development tool that collects data from multiple corporate sources – structured or unstructured, on-premises or in the cloud. The software then transforms it and delivers it to Databricks SQL queries. Its AI-powered visual designer generates standardized, open code that extracts, transforms, and loads the required data. It automatically builds the necessary data pipelines and tests, generates documentation, and suggests fixes for errors. The v4.0 ETL product delivers self-service, production-ready data preparation that operates within guardrails defined by central IT.
Roger Murff, VP of Technology Partners at Databricks, said: “Organizations have put their most valuable data assets into Databricks, and Prophecy 4.0 makes it easier than ever to make that data available to analysts. And because Prophecy is natively integrated with the Databricks Data Intelligence Platform, platform teams get centralized visibility and control over user access, compute costs, and more.”
Prophecy Studio visual design interface
Prophecy v4.0 features include:
Secure data loading from commonly used sources such as files via SFTP (Secure File Transfer Protocol), SharePoint, Salesforce, and Excel or CSV files from analysts’ desktops
Last-mile data operations, allowing analysts to send results to Tableau and notify stakeholders via email
Built-in automation with a drag-and-drop interface that lets analysts run and validate pipelines without the need for separate tools
Data profiles showing distribution, completeness, and other attributes
Packages of reusable, governed components
Simplified version control
Real-time pipeline observability to track performance and detect failures, thereby reducing downtime
A blog by Prophecy’s Mitesh Shah, VP for marketing and analyst relations, declares: “Self-service data preparation has been a game-changer for accelerating AI and analytics.” Analysts can get data prepared themselves and don’t have to hand the task off to data engineers, theoretically saving time and duplicated effort.
Shah says that as “Prophecy is deeply integrated with Databricks, we allow organizations to enforce cluster limits and cost guardrails automatically.” That prevents data prep costs from getting out of hand. Prophecy works with Databricks’ Unity Catalog and “analysts inherit existing permissions from Databricks.”
Prophecy Studio code interface
Raj Bains
Prophecy was founded by CEO Raj Bains and Vikas Marwaha in 2017. It raised $47 million in a B-round in January, justified by 3.5x revenue growth in 2024 with 160 percent net revenue retention from existing customers. It has raised a total of $114 million, $35 million in a 2023 B-round, $25 million in a 2022 A-round, and $7 million before that.
Bains stated: “Analysts can design and publish pipelines whenever they want, with security, performance, and data access standards predefined by IT. We’ve visited companies where analysts would outline data workflows in their data prep tools and then engineers downstream would recode the entire pipeline from scratch with their ETL software. It was a huge waste of time and energy. With Prophecy 4.0, everything is done once.”
There will be a Prophecy v4.0 in action webinar on April 24; those looking to attend can register here.
Infinidat and Veeam are encouraging VMware migration to Red Hat OpenShift Virtualization (RHOS-V) by providing immutable backups from a RHOS-V system using Infinidat storage to an InfiniGuard target system.
The two companies are positioning their joint initiative as a way to protect important petabyte-scale virtualization workloads, with billions of files, migrated from VMware. Their scheme is based on Veeam’s ability to protect the underlying RHOS-V Kubernetes container workloads using its Kasten v7.5 software and Infinidat’s CSI driver. Infinidat provides InfiniBox block, file, and container-access storage arrays that use memory caching to accelerate access to all-flash, hybrid, and disk drive storage media. Infinidat is in the process of being acquired by Lenovo.
Erik Kaulberg
Erik Kaulberg, VP of Strategy and Alliances at Infinidat, stated: “Infinidat’s comprehensive support for Veeam Kasten v7.5 enables large-scale Kubernetes production deployments that are reliable, robust, and cyber secure … InfiniBox systems can scale to hundreds of thousands of persistent volumes. Partners like Veeam and Red Hat help fuel our containers innovation pipeline, providing a steady stream of enhancements that help our joint customers simplify all aspects of their container storage environments at enterprise scale.”
Gaurav Rishi, VP for Kasten Product and Partnerships at Veeam, said: “Kubernetes has become a vital part of enterprise infrastructure, especially in large enterprises and service providers, from its infancy as a DevOps application development and deployment environment to now being a production platform for delivering enterprise-class business applications. It is essential for our mutual customers that Veeam and Infinidat provide a highly cyber resilient, highly scalable, and highly performant next-generation data protection solution.”
Gaurav Rishi
Kasten v7.5 was released earlier this month, extending source system support to RHOS-V and SUSE Virtualization. The release speeds up backups of large data volumes – for example, achieving 3x faster backups of volumes containing 10 million small files – and provides multi-cluster FIPS support to adhere to strict US government benchmarks, visibility into immutable restore points, and support for object lock capabilities in Google Cloud Storage.
The v7.5 release added Infinidat InfiniBox integration, based on Infinidat’s InfiniSafe immutable snapshot technology for persistent file and block volumes. The release also added NetApp support. Veeam now includes Infinidat in its Veeam Ready for Kubernetes program.
Analyst house GigaOm rated Veeam’s Kasten subsidiary as a Leader and Outperformer for the fifth time in its latest GigaOm Radar for Kubernetes Protection.
Mike Barrett
Infinidat and Veeam say customers can now bring new, existing, and large-scale VMware and other virtual machine workloads and virtualized applications to Kubernetes and container deployments, such as RHOS-V.
Mike Barrett, VP and GM for Hybrid Platforms at IBM-owned Red Hat, said: “As the virtualization landscape continues to evolve, many organizations are looking for a future proof virtualization solution. Red Hat OpenShift provides a complete application platform for both modern virtualization and containers, and through our collaboration with Infinidat and Veeam, users can leverage enhanced capabilities to scale and protect their VM and Kubernetes workloads.”