
Nexsan launches Unity NV4000 for small and remote deployments

The new Unity NV4000 is a flexible, multi-port, straightforward storage array from Nexsan.

The company has a focus on three storage product lines: E-Series block storage for direct-attached or SAN use, Assureon immutable storage to keep data compliant and safe in the face of malware, and the Unity NV series of unified file, block, and object systems. There are three Unity systems, with the NV10000 topping the range as an all-flash, 24-drive system. It is followed by the mid-range 60-disk drive NV6000 with a massive 5.18 PB capacity maximum, and now we have the NV4000 for branch offices and small/medium businesses.

Vincent Phillips, Nexsan
Vincent Phillips

CEO Vincent Phillips tells us: “Unity is versatile storage with unmatched value. We support multiple protocols, we support the flexibility to configure flash versus spinning drive in the mix, and that works for whatever the enterprise needs. Every system has dual controllers and redundancy for high availability and non-destructive upgrades. There’s no upcharges for individual features and unlike the cloud, the Unity is cost-effective for the life of the product and even as you grow.” 

Features include immutable snapshots and S3 object-locking. The latest v7.2 OS release adds a “cloud connector, which allows bi-directional syncing of files and directories with AWS S3, with Azure, Unity Object Store and Google Cloud coming soon.”

There are non-disruptive upgrades and an Assureon connector. With the latest v7.2 software, admins can now view file access by user via the SMB protocol as well as NFS.

Phillips said: “Every system has dual controllers and redundancy for high availability and non-destructive upgrades, but we also let customers choose between hardware and encryption on the drive with SED or they can utilize more cost-effective software-based encryption.”

All NV series models have FASTier technology to boost data access by caching. It uses a modest amount of solid-state storage to boost the performance of underlying hard disk drives by up to 10x, resulting in improved IOPS and throughput while maintaining cost-effectiveness and high capacity. FASTier supports both block (e.g. iSCSI, Fibre Channel) and file (e.g. NFS, SMB, FTP) protocols, allowing unified storage systems to handle diverse workloads, such as random I/O workloads in virtualized environments, efficiently.
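
Nexsan doesn't publish FASTier's internals, but the pattern described here – a small flash tier fronting much larger, slower disk – can be sketched conceptually. The Python below illustrates a generic LRU read cache of that sort; it is not Nexsan's implementation:

```python
from collections import OrderedDict

class FlashReadCache:
    """Conceptual sketch only: a small LRU cache of hot blocks held on flash
    in front of a much larger, slower HDD pool. Not Nexsan's FASTier code."""

    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read   # function that reads a block from disk
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # block_id -> data held in the flash tier

    def read(self, block_id):
        if block_id in self.cache:         # cache hit: served at flash speed
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend_read(block_id) # cache miss: go to the slow disk tier
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least-recently-used block
        return data
```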

The NV10000 flagship model “is employed by larger enterprises. It supports all flash or hybrid with performance up to 2 million IOPS, and you can expand with our 78-bay JBODs and scale up to 20 petabytes. One large cancer research facility that we have is installed at 38 petabytes and growing. They’re working on adding another 19 petabytes.”

The NV6000 was “introduced in January, and it’s been a mid-market workhorse. It can deliver one and a half million IOPS and it can scale up to five petabytes. We’ve seen the NV 6000 manage all kinds of customer workloads, but one that we consistently see is it utilized as a backup target for Veeam, Commvault and other backup software vendors.”

Nexsan Unity
Unity NV10000 (top), NV6000 (middle), and NV4000 (bottom)

Phillips said: “The product family needed a system that met the requirements at the edge or for smaller enterprises. That’s where the NV4000 that we’re talking about today comes in. It has all the enterprise features of the prior two models, but in a cost-effective model for the small organization or application or for deployment at the edge of an enterprise. The NV4000 is enterprise class, but priced and sized for the small or medium enterprise. It manages flexible workloads, backups, and S3 connections for hybrid solutions, all in one affordable box. The NV4000 can be configured in all flash or hybrid and it can deliver up to 1 million IOPS and it has the same connectivity flexibility as its bigger sisters.”

Nexsan Unity NV4000 specs

The NV4000 comes in a 4RU chassis with front-mounted 24 x 3.5-inch drive bays. The system has dual Intel Xeon Silver 4309Y CPUs (2.8 GHz, 8 cores, 16 threads) and 128 GB DDR4 RAM per controller. There is a 2 Gbps backplane and connectivity options include 10/25/40/100 GbE and 16/32 Gb Fibre Channel. The maximum flash capacity is 737.28 TB while the maximum capacity with spinning disk is 576 TB. The highest-capacity HDD available is 24 TB whereas Nexsan supplies SSDs with up to 30.72 TB.
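
The headline capacity figures follow directly from the 24 bays and the largest supported drives, assuming every bay is populated with the biggest drive available:

```python
# Quick check of the quoted NV4000 maximums, assuming all 24 bays are
# populated with the largest supported drive of each type.
BAYS = 24
MAX_SSD_TB = 30.72   # largest SSD Nexsan supplies
MAX_HDD_TB = 24      # largest HDD currently supported

print(f"All-flash maximum: {BAYS * MAX_SSD_TB:.2f} TB")  # 737.28 TB
print(f"All-disk maximum:  {BAYS * MAX_HDD_TB} TB")      # 576 TB
```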

That disparity is set to widen as disk capacities approach 30 TB and SSD makers supply 61 and 122 TB capacities. Typically these use QLC (4 bits/cell flash). Phillips said: “We don’t support the QLC today. We are working with one of the hardware vendors and we’ll begin testing those very shortly.”

Nexsan has a deduplication feature in beta testing and it’s also testing Western Digital’s coming HAMR disk drives.

There are around a thousand Nexsan channel partners and its customer count is in the 9,000-10,000 area. Partners and customers should welcome the NV4000 as a versatile edge, ROBO, and SMB storage workhorse.

Druva expands Azure coverage with SQL and Blob support

SaaS data protector Druva has expanded its coverage portfolio to include Azure SQL and Blob data stores, saying it’s unifying protection across Microsoft workloads with a single SaaS platform.

Druva already has a tight relationship with AWS, under which it offers managed storage and security on the AWS platform. It announced a stronger partnership with Microsoft in March, with a strategic relationship focused on deeper technical integration with Azure cloud services. At the time, Druva said this will protect and secure cloud and on-premises workloads with Azure as a storage target. It protects Microsoft 365, Entra ID, and Azure VMs, and now it has announced coverage for two more Azure services.

Stephen Manley, Druva
Stephen Manley

Stephen Manley, CTO at Druva, stated: “The need for cloud-native data protection continues to grow, and Druva’s support for Azure SQL and Azure Blob storage delivers customers the simplicity, security, and scalability they need to stay resilient in today’s threat landscape. By unifying protection across Microsoft workloads within a single SaaS platform, Druva continues to lead by delivering simplified, enterprise-grade cyber resilience with zero egress fees, zero management, and zero headaches.”

From a restore point of view, zero egress fees sound good. To emphasize this, Druva says it “offers a unified cloud-native platform with cross-region and cross-cloud protection, without the added cost or complexity of egress fees.”

The agentless Azure SQL coverage includes SQL Database, SQL Managed Instance, and SQL Server on Azure VMs. The Blob protection features granular, blob-level recovery, policy-based automation, and built-in global deduplication. Druva says it delivers “secure, air-gapped backups that protect critical data against ransomware, accidental deletion, and insider threats.”

Druva is now listed on the Azure Marketplace and is expanding support for additional Azure regions in North America, EMEA, and APAC. Druva protection for SQL Server and Blob storage was listed in the marketplace at the time of writing, with Druva protection for Enterprise Workloads including Azure VMs, SQL, and Blob.

Druva graphic

It is stepping into a field of other suppliers protecting Azure SQL and Blob, including Acronis, Cohesity, Commvault, Rubrik, and Veeam.

Support for Azure SQL is generally available today, as is Azure Blob.

Komprise CTO on how to accelerate cloud migration

Interview: Data manager Komprise takes an analytics-first approach with its Smart Data Migration service, scanning and indexing all unstructured data across environments, then categorizing it by access frequency (hot, warm, or cold), file type, data growth pattern, departmental ownership, and sensitivity (e.g. PII or regulated content). A rough sketch of the classification step follows the list below.

Using this metadata, enterprises can:

  • Place data correctly: Ensure active data resides in high-performance cloud tiers, while infrequently accessed files move to lower-cost archival storage.
  • Reduce scope and risk: By offloading cold data first or excluding redundant and obsolete files, the total migration footprint is much smaller.
  • Avoid disruption: Non-disruptive migrations ensure that users and applications can still access data during the transfer process.
  • Optimize for compliance: Proper classification helps ensure sensitive files are placed in secure, policy-compliant storage.
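
As a simple illustration of the hot/warm/cold bucketing, here is a minimal sketch that classifies files by last-access time. The thresholds and the approach are illustrative assumptions, not Komprise's policy engine:

```python
import time
from pathlib import Path

# Minimal hot/warm/cold classification by last-access time.
# The 30-day and 180-day thresholds are illustrative, not Komprise defaults.
def classify(root, hot_days=30, warm_days=180):
    now = time.time()
    buckets = {"hot": [], "warm": [], "cold": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        age_days = (now - path.stat().st_atime) / 86400
        if age_days <= hot_days:
            buckets["hot"].append(path)
        elif age_days <= warm_days:
            buckets["warm"].append(path)
        else:
            buckets["cold"].append(path)
    return buckets

# Cold files would be tiered off first; only the hot remainder is migrated.
```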

We wondered about the cut-off point between digitally transferring files and physically transporting storage devices, and asked Komprise field CTO Ben Henry some questions about Smart Data Migration.

Blocks & Files: A look at the Smart Data Migration concept suggests that Komprise’s approach is to reduce the amount of data migrated to the cloud by filtering the overall dataset.

Ben Henry: Yes, we call this Smart Data Migration. Many of our customers have a digital landfill of rarely used data sitting on expensive storage. We recommend that they first tier off the cold data before the migration; that way they are only migrating the 20-30 percent of hot data along with the dynamic links to the cold files. In this way, they are using the new storage platform as it is meant to be used: for hot data that needs fast access.

Ben Henry, Komprise
Ben Henry

 
Blocks & Files: Suppose I have a 10 PB dataset and I use Komprise to shrink the amount actually sent to the cloud by 50 percent. How long will it take to move 5 PB of data to the cloud? 

Ben Henry: Komprise itself exploits the available parallelism at every level (volumes, shares, VMs, threads) and optimizes transfers to move data 27x faster than common migration tools. Having said this, the actual time taken to move data depends significantly on the topology of the customer environment. Network and security configurations can make a tremendous difference as well as where data resides. If it is spread across different networks that can impact the transfer times. We can use all available bandwidth when we are executing the migration if the customer chooses to do so.
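
Henry doesn't detail the implementation, but the basic pattern – copying many files or shares concurrently rather than pushing one serial stream – can be sketched as follows. This is an illustrative sketch, not Komprise's code:

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Illustrative only: copy many files concurrently instead of one long serial
# stream, so the aggregate transfer can use much more of the available link.
def parallel_copy(pairs, workers=32):
    """pairs: iterable of (source_path, destination_path) tuples."""
    def copy_one(pair):
        src, dst = pair
        Path(dst).parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        return dst
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(copy_one, pairs))
```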

 
Blocks & Files: Is there compute time involved at either end to verify the data has been sent correctly? 

Ben Henry: Yes. We do a checksum on the source and then on the destination and compare them to ensure that the data was moved correctly. We also provide a consolidated chain-of-custody report so that our customer has a log of all the data that was transferred for compliance reasons. Unlike legacy approaches that delay all data validation to the cutover, Komprise validates incrementally through every iteration as data is copied to make cutovers seamless. We are able to provide a current estimate of the final iteration because Komprise does all the validation up front when data is copied, not at the end during the time-sensitive cutover events.
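
The checksum-and-compare step is simple to illustrate. The sketch below hashes each file with SHA-256 and compares the digests; it is a generic example, not Komprise's validation code:

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source_path, destination_path):
    """True if the source and destination contents match exactly."""
    return file_digest(source_path) == file_digest(destination_path)
```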

Blocks & Files: What does Komprise do to ensure that the data is moved as fast as possible? Is it compressed? Is it deduplicated? 

Ben Henry: Komprise has proprietary, optimized SMB and NFS clients that allow the solution to analyze and migrate data much faster. Komprise Hypertransfer optimizes cloud data migration performance by minimizing the WAN roundtrips using dedicated channels to send data, mitigating the SMB protocol issues.

Blocks & Files: At what point of capacity is it better to send physical disk drives or SSDs (122 TB ones are here now) to the cloud, as that would be quicker than transmitting the data across a network? Can Komprise help with this?

Ben Henry: The cost of high-capacity circuits is now a fraction of what it was a few years ago. Most customers have adequate networks set up for hybrid cloud environments to handle data transfer without needing to use physical drives and doing things offline. It’s common for enterprises to have 1, 10, or even 100 Gb circuits. 

Physical media gets lost and corrupted and offline transfers may not be any quicker. For instance, sending 5 PB over an Amazon Snowball could easily take 25 shipments, since one Snowball only holds 210 TB. That’s painful to configure and track versus “set and forget” networking. Sneakernet in many scenarios is a thing of the past now. In fact, I was just talking with an offshore drilling customer who now uses satellite-based internet to transmit data from remote oil rigs that lack traditional network connectivity.

Blocks & Files: That’s a good example, but for land-based sites Seagate says its Lyve mobile offering can be used cost-effectively to physically transport data.

Ben Henry: Yes, we are not suggesting that you will never need offline transfer. It is just that the situations where this is needed have reduced significantly with the greater availability of high-speed internet and satellite internet. Now, the need for offline transfers has become more niche to largely high-security installations.

Blocks & Files: You mention that sending 5 PB by Amazon Snowball needs 25 shipments. I think sending 5 PB across a 100Gbit link could take around five days, assuming full uninterrupted link speed and, say, 10 percent network overhead.

Ben Henry: Using a steady, single stream of data sent over a 100Gbit link with 50 ms of average latency, which reduces the single stream to ~250 Mbps, the transfer time could take years. Komprise isn’t single stream. In fact, it’s distributed and multithreaded. So, instead of just using 250 Mbps of a 100Gbit link, we can utilize the entire circuit bringing the job down to days.
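
The arithmetic behind both estimates is easy to reproduce, assuming the stated 5 PB payload, a ~250 Mbps effective single stream, and roughly 90 percent utilization of a 100 Gbit link:

```python
# Back-of-the-envelope transfer times for 5 PB (decimal petabytes).
payload_bits = 5e15 * 8                     # 5 PB expressed in bits

single_stream_bps = 250e6                   # ~250 Mbps effective single stream
full_link_bps = 100e9 * 0.9                 # 100 Gbit link at ~90 percent utilization

print(payload_bits / single_stream_bps / 86400 / 365)  # ~5.1 years
print(payload_bits / full_link_bps / 86400)            # ~5.1 days
```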

Blocks & Files: With Snowball Edge storage optimized devices, you can create a single, 16-node cluster with up to 2.6 PB of usable S3-compatible storage capacity. You would need just 2 x 16-node clusters for 5 PB.

Ben Henry: Yes, there are still scenarios that need edge computing or have high security constraints where network transfer is not preferred. For these scenarios, customers are willing to invest in additional infrastructure for edge storage and compute such as the options you mention. Our point is simply that the market has shifted and we are not seeing great demand for offline transfers largely because the bandwidth related demand for offline transfers has greatly reduced with the availability of high-speed internet and satellite internet. 

Storage news roundup – May 22

Arctera is collaborating with Red Hat and its InfoScale cyber-resilience product is certified on Red Hat OpenShift Virtualization.

Hitachi Vantara recently suffered a malware attack and we asked: “In the light of suffering its own malware attack, what would Hitachi Vantara say to customers about detecting, repelling and recovering from such attacks?”

The company replied: “Hitachi Vantara’s recent experience underscores the reality that no organization is immune from today’s sophisticated cyber threats, but it reinforces that how you detect, contain, and respond to such events is what matters most. At Hitachi Vantara, our focus has been on acting with integrity and urgency.

“We would emphasize three key lessons:

1. Containment Measures Must Be Quick and Decisive. The moment we detected suspicious activity on April 26, 2025, we immediately activated our incident response protocols and engaged leading third-party cybersecurity experts. We proactively took servers offline and restricted traffic to our data centers as a containment strategy.

2. Recovery Depends on Resilient Infrastructure. Our own technology played a key role in accelerating recovery. For example, we used immutable snapshot backups stored in an air-gapped data center to help restore core systems securely and efficiently. This approach helped reduce downtime and complexity during recovery.

3. Transparency and Continuous Communication Matter. Throughout the incident, we’ve prioritized open communication with customers, employees, and partners, while relying on the forensic analysis and our third-party cybersecurity experts to ensure decisions are based on verified data. As of April 27, we have no evidence of lateral movement beyond our environment.

“Ultimately, our experience reinforces the need for layered security, rigorous backup strategies, and well-practiced incident response plans. We continue to invest in and evolve our security posture, and we’re committed to sharing those insights to help other organizations strengthen theirs.”

HYCU has extended its support for Dell’s PowerProtect virtual backup target appliance to protect SaaS and cloud workloads with backup, disaster recovery, data retention, and offline recovery.

The addition of support for Dell PowerProtect Data Domain Virtual Edition by HYCU R-Cloud SaaS complements existing support for Dell PowerProtect Data Domain with R-Cloud Hybrid Cloud edition. HYCU says it’s the first company to offer the ability to protect data across on-premises, cloud, and SaaS to the most efficient and secure storage in the market: PowerProtect. HYCU protects more than 90 SaaS apps, and says that what’s new here is that, among those 90-plus offerings, only a handful of backup suppliers offer customer-owned storage.

OWC announced the launch and pre-order availability of the Thunderbolt 5 Dock with up to 80Gb/s of bi-directional data speed and 120Gb/s for higher display bandwidth needs. You can connect up to three 8K displays or dual 6K displays on Macs. It works with Thunderbolt 5, 4, 3, USB4, and USB-C devices, and delivers up to 140W of power to charge notebooks. The dock has 11 versatile ports, including three Thunderbolt 5 (USB-C), two USB-A 10Gb/s, one USB-A 5 Gbps, 2.5GbE Ethernet (MDM ready), microSD and SD UHS-II slots, and 3.5mm audio combo. The price is $329.99 with Thunderbolt 5 cable and external power supply.

At Taiwan’s Computex, Phison announced the Pascari X200Z enterprise SSD, a PCIe Gen 5 drive with near-SCM latency and – get this – up to 60 DWPD endurance, designed for the high write endurance demands of generative AI and real-time analytics. It also announced aiDAPTIVGPT, which supports generative tasks such as conversational AI, speech services, code generation, web search, and data analytics. Phison also launched aiDAPTIVCache AI150EJ, a GPU memory extension for AI edge and robotics systems that improves edge inference performance by optimizing time to first token (TTFT) and increasing the number of tokens processed.

Phison’s E28 PCIe 5.0 SSD controller, built on TSMC’s 6nm process, is the first in the world to feature integrated AI processing, achieving up to 2,600K/3,000K IOPS (random read/write) – over 10 percent higher than comparable products – with up to 15 percent lower power consumption than competing 6nm-based controllers.

The E31T DRAM-less PCIe 5.0 SSD controller is designed for ultra-thin laptops and handheld gaming devices in M.2 2230 and 2242 form factors. It delivers high performance, low power consumption, and space efficiency.

Phison also announced PCIe signal IC products:

  • The world’s first PCIe 5.0 Retimer certified for CXL 2.0
  • PCIe 5.0 Redriver with over 50% global market share
  • The industry’s first PCIe 6.0 Redriver
  • Upcoming PCIe 6.0 Retimer, Redriver, SerDes PHY, and PCIe-over-Optical platforms co-developed with customers

IBM-owned Red Hat has set up an open source llm-d project and community; llm-d standing for, we understand, Large Language Model Development. It is focused on AI inference at scale and aims to make production generative AI as omnipresent as Linux. It features:

  • vLLM, the open source de facto standard inference server, providing support for emerging frontier models and a broad list of accelerators, including Google Cloud Tensor Processor Units (TPUs).
  • Prefill and Decode Disaggregation to separate the input context and token generation phases of AI into discrete operations, where they can then be distributed across multiple servers.
  • KV (key-value) Cache Offloading, based on LMCache, shifts the memory burden of the KV cache from GPU memory to more cost-efficient and abundant standard storage, like CPU memory or network storage (see the conceptual sketch after this list).
  • Kubernetes-powered clusters and controllers for more efficient scheduling of compute and storage resources as workload demands fluctuate, while maintaining performance and lower latency.
  • AI-Aware Network Routing for scheduling incoming requests to the servers and accelerators that are most likely to have hot caches of past inference calculations.
  • High-performance communication APIs for faster and more efficient data transfer between servers, with support for NVIDIA Inference Xfer Library (NIXL).
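
The KV cache offloading item above lends itself to a simple mental model: keep recently used attention key/value blocks in scarce GPU memory, spill the rest to cheaper CPU memory, and promote blocks back when they are reused. The sketch below is purely conceptual and does not use the vLLM or LMCache APIs:

```python
from collections import OrderedDict

# Conceptual illustration of KV cache offloading: a scarce "GPU" tier holds
# recently used key/value blocks; the rest spill to a larger "CPU" tier.
# A mental model only, not the vLLM/LMCache implementation.
class TieredKVCache:
    def __init__(self, gpu_capacity=8):
        self.gpu = OrderedDict()   # fast, scarce tier
        self.cpu = {}              # large, cheaper tier
        self.gpu_capacity = gpu_capacity

    def put(self, block_id, kv_block):
        self.gpu[block_id] = kv_block
        self.gpu.move_to_end(block_id)
        while len(self.gpu) > self.gpu_capacity:
            evicted_id, evicted = self.gpu.popitem(last=False)
            self.cpu[evicted_id] = evicted     # offload rather than discard

    def get(self, block_id):
        if block_id in self.gpu:
            self.gpu.move_to_end(block_id)
            return self.gpu[block_id]
        if block_id in self.cpu:               # promote back to GPU on reuse
            self.put(block_id, self.cpu.pop(block_id))
            return self.gpu[block_id]
        return None                            # miss: the KV block must be recomputed
```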

CoreWeave, Google Cloud, IBM Research and NVIDIA are founding contributors, with AMD, Cisco, Hugging Face, Intel, Lambda and Mistral AI as partners. The llm-d community includes founding supporters at the Sky Computing Lab at the University of California, Berkeley, originators of vLLM, and the LMCache Lab at the University of Chicago, originators of LMCache. Red Hat intends to make vLLM the definitive open standard for inference across the new hybrid cloud.

Red Hat has published a tech paper entitled “Accelerate model training on OpenShift AI with NVIDIA GPUDirect RDMA.” It says: “Starting with Red Hat OpenShift AI 2.19, you can leverage networking platforms such as Nvidia Spectrum-X with high-speed GPU interconnects to accelerate model training using GPUDirect RDMA over Ethernet or InfiniBand physical link. … this article demonstrates how to adapt the example from Fine-tune LLMs with Kubeflow Trainer on OpenShift AI so it runs on Red Hat OpenShift Container Platform with accelerated NVIDIA networking and gives you a sense of how it can improve performance dramatically.”

Red Hat graphic

SK hynix says it has developed a UFS 4.1 product adopting the world’s highest-layer-count 321-layer 1Tb triple-level cell 3D NAND for mobile applications. It has a 7 percent improvement in power efficiency compared with the previous generation based on 238-layer NAND, and a slimmer 0.85mm thickness, down from 1mm before, to fit into ultra-slim smartphones. It supports a data transfer speed of 4,300 MBps, the fastest sequential read for a fourth-generation UFS, while improving random read and write speeds by 15 percent and 40 percent, respectively. SK hynix plans to win customer qualification within the year and ship in volume from Q1 2026 in 512GB and 1TB capacities.

Snowflake’s latest quarterly revenues (Q1fy2026) showed 26 percent Y/Y growth to $1 billion. It’s still growing fast and its customer count is 11,578, up 19 percent. There was a loss of $429,952,000, compared to the previous quarter’s $325,724,000 loss – 32 percent worse. It’s expecting around 25 percent Y/Y revenue growth next quarter.

Data lakehouser Starburst announced a strategic investment from Citi without revealing the amount.

Starburst announced new Starburst AI Agent and new AI Workflows across its flagship offerings: Starburst Enterprise Platform and Starburst Galaxy. AI Agent is an out-of-the-box natural language interface for Starburst’s data platform that can be built and deployed by data analysts and application-layer AI agents.  AI Workflows connect the dots between vector-native search, metadata-driven context, and robust governance, all on an open data lakehouse architecture. With AI Workflows and the Starburst AI Agent, enterprises can build and scale AI applications faster, with reliable performance, lower cost, and greater confidence in security, compliance and control. AI Agent and AI Workflows are available in private preview.

Veeam’s Kasten for Kubernetes v8 release has new File Level Recovery (FLR) for KubeVirt VMs, allowing organizations to recover individual files from backups without needing to restore entire VM clones. A new virtual machine dashboard offers a workload-centric view across cluster namespaces to simplify the process of identifying each VM’s Kubernetes-dependent resources and makes configuring backup consistency easier. KforK v8 supports x86, Arm, and IBM Power CPUs and is integrated with Veeam Vault. It broadens support for the NetApp Trident storage provisioner with backup capabilities for ONTAP NAS “Economy” volumes. A refreshed user interface simplifies onboarding, policy creation, and ongoing operations.

Veeam has released Kasten for Modern Virtualization – a tailored pricing option designed to align seamlessly with Red Hat OpenShift Virtualization Engine. Veeam Kasten for Kubernetes v8 and Kasten for Modern Virtualization are now available.

… 

Wasabi is using Kioxia CM7 Series and CD8 Series PCIe Gen 5 NVMe SSDs for its S3-compatible Hot Cloud Storage service.

Zettlab launched its flagship product, the Zettlab AI NAS, a high-performance personal cloud system that combines advanced offline AI, enterprise-grade hardware, and a sleek, modern design. Now live on Kickstarter and available to early backers at a special launch price, it gives users a smarter, more secure way to store, search, and manage digital files, turning a traditional NAS into an AI-powered data management platform with local computing, privacy-first AI tools, and a clean, user-friendly operating system.

Zettlab AI NAS

It’s a full AI platform running locally on powerful hardware:

  •   Semantic file search, voice-to-text, media categorization, and OCR – all offline
  •   Built-in Creator Studio: plan shoots, auto-subtitle videos, organize files without lifting a finger
  •   ZettOS: an intuitive OS designed for everyday users with pro-level power
  •   Specs: Intel Core Ultra 5, up to 200TB, 10GbE, 96GB RAM expandable

AI and virtualization are two major headaches for CIOs. Can storage help solve them both?


It’s about evolution, not revolution, says Lenovo

CIOs have a storage problem, and the reason can seem pretty obvious.

AI is transforming the technology industry, and by implication, every other industry. AI relies on vast amounts of data, which means that storage has a massive part to play in every company’s ability to keep up. 

After all, according to Lenovo’s CIO Playbook report, data quality issues are the top inhibitor to AI projects meeting expectations.

There’s one problem with this answer: It only captures part of the picture. 

CIOs are also grappling with myriad other challenges. One of the biggest is the upheaval to their virtualization strategies caused by Broadcom’s acquisition of VMware at the close of 2023, and its subsequent licensing and pricing changes.

This has left CIOs contemplating three main options, says Stuart McRae, executive director and GM, data storage solutions at Lenovo. Number one is to adapt to the changes and stick with VMware, optimizing their systems as far as possible to ensure they harvest maximum value from those more expensive licenses. 

Another option is to look at alternative platforms to handle at least some of their virtualization workloads. Or they can simply jump the VMware ship entirely.

But options two and three will mean radically overhauling their infrastructure either to support new platforms or get the most from their legacy systems.

So, AI and virtualization are both forcing technology leaders to take a long hard look at their storage strategies. And, says McRae, these are not discrete challenges. Rather, they are intimately related.

This is because, as Lenovo’s CIO Playbook makes clear, tech leaders are not just looking to experiment with AI or start deploying the technology. The pressure is on to produce business outcomes, in areas such as customer experience, business growth, productivity and efficiency. At the same time, they are looking to make decision-making data-driven.

And this will mean their core legacy platforms, such as SAP, Oracle, and in-house applications will come into play, McRae says. This is where that corporate data lives after all. 

“They still have those systems,” he says. “AI will become embedded in many of those systems, and they will want to use that data to support their efforts in their RAG models.”

Storage is a real-world problem

It is precisely these systems that are running on enterprise virtualization platforms, so to develop AI strategies that deliver real world business value, CIOs need to get their virtualization strategy in order too. That means storage infrastructure that can deliver for both AI and virtualization.

One thing that is clear, McRae says, is that enterprises’ AI and virtualization storage will overwhelmingly be on-prem or co-located. These are core systems with critical data, and companies need to have hands-on control over them. Lenovo’s research shows that less than a quarter of enterprises are taking a “mainly” public cloud approach to their infrastructure for AI workloads.

But McRae explains, “If you look at the storage that customers have acquired and deployed in the last five years, 80 percent of that is hard drive-based storage.”

“The challenge with AI, especially from their storage infrastructure, is a lot of their storage is old and it doesn’t have the performance and resiliency to support their AI investments on the compute GPU side.”

From a technology point of view, a shift to flash is the obvious solution. The advantages from a performance point of view are straightforward when it comes to AI applications. AI relies on data, which in most enterprises will flow from established applications and systems. Moreover, having GPUs idling while waiting for data is massively wasteful: run around the clock, one of Nvidia’s top-end GPUs consumes roughly as much energy in a year as a domestic household.
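
The household comparison is rough but easy to sanity-check, assuming a top-end accelerator draws on the order of 1 kW around the clock and a typical household uses roughly 10,000 kWh a year (both figures are assumptions, not Lenovo or Nvidia numbers):

```python
# Rough check of the GPU-versus-household energy comparison.
gpu_power_kw = 1.0                 # assumed continuous draw of a top-end accelerator
gpu_kwh_per_year = gpu_power_kw * 24 * 365        # ~8,760 kWh

household_kwh_per_year = 10_000    # order-of-magnitude annual household consumption
print(gpu_kwh_per_year / household_kwh_per_year)  # ~0.9 of a household's annual usage
```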

But there are broader data management implications as well. “If I want to use more of my data than maybe I did in the past, in a traditional model where they may have hot, warm, and cold data, they may want to make that data all more performant,” says McRae.

This even extends to backup and archive, he says. “We see customers moving backup data to flash for faster recovery.”

Flash offers other substantial power and footprint advantages as well. The highest-capacity HDD announced at the time of writing is around 36 TB, while enterprise-class SSDs go beyond 100 TB. More importantly, SSDs draw far less power than their moving-part cousins.

This becomes critical given the concerns about overall datacenter power consumption and cooling requirements, and the challenges many organizations will face simply finding space for their infrastructure.

McRae says a key focus for Lenovo is to enable unified storage, “where customers can unify their file, block and object data on one platform and make that performant.”

That has a direct benefit for AI applications, allowing enterprises to extract value from the entirety of their data. But it also has a broader management impact by removing further complexity. 

“They don’t have different kits running different storage solutions, and so that gives them all the advantages of a unified backup and recovery strategy,” he says.

But modern flash-based systems offer resiliency benefits as well. McRae says a contemporary 20TB hard drive can take five to seven days to rebuild in the event of a failure. A flash drive will take maybe 30 hours.
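
Those rebuild times are largely a function of sustained media throughput. As a rough illustration, using assumed rebuild rates rather than Lenovo figures:

```python
# Rough illustration of rebuild times, assuming sustained rebuild rates of
# ~40 MB/s for a busy 20 TB hard drive and ~200 MB/s for a flash drive.
capacity_bytes = 20e12

hdd_rate_bps = 40e6    # assumed bytes per second for an in-service hard drive
ssd_rate_bps = 200e6   # assumed bytes per second for a flash drive

print(capacity_bytes / hdd_rate_bps / 86400)  # ~5.8 days
print(capacity_bytes / ssd_rate_bps / 3600)   # ~27.8 hours
```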

Securing the future

In a similar vein, as AI becomes more closely intertwined with the broader, virtualized enterprise landscape, security becomes critical.

As McRae points out, while niche storage platforms might have a role to play in hyperscalers’ datacenters where the underlying LLMs are developed and trained, this is less likely to be the case for AI-enriched enterprise computing.

“When you’re deploying AI in your enterprise, that is your enterprise data, and other applications are using that data. It requires enterprise resiliency and security.” 

With Lenovo’s range, AI has a critical role to play in managing storage itself. Along with features such as immutable copies and snapshots, for example, “Having storage that provides autonomous, AI-driven ransomware protection to detect any anomalies or that something bad’s happening is really important for that data.”

So, it certainly makes sense for technology leaders to modernize their storage infrastructure. The question remains: Just how much storage will they need?

This is where Lenovo’s managed services offerings and its TruScale strategy come into play. They allow storage and other infrastructure to be procured on an OpEx, consumption-based basis, and for capacity to be unlocked and scaled up or down over time.

“Every business is different based on their own capital model and structure,” says McRae. “But the consumption models work well for uncertain application and deployment rollouts.”

After all, most customers are only just beginning to roll out new virtualization and AI workloads. “We typically learn stuff as we start deploying it,” says McRae. “And it may not act exactly like we had planned. That flexibility and ability to scale performance and capacity is really important.”

Equally important, he says, is being able to call on experts who understand both the technology and the specifics of individual businesses. So, while AI can appear a purely technological play, McRae says Lenovo’s network of business partners is critical to its customers’ success.

“Working with a trusted business partner who’s going to have the day-to-day interaction with the customer and knowledge of their business is really important,” he adds.

AI will undoubtedly be revolutionary in the amount of data it requires, while VMware’s license changes have set off a revolution in their own way. But McRae says that data size apart, storage vendors need to ensure that upgrading enterprise storage to keep pace isn’t a dramatic step change.

“Your normal enterprise is going to go buy or license a model to use, and they’re going to go buy or license a vector database to pair it with it, and they’re going to get the tools to go do that,” he concludes. “So that’s what we have to make easy.”

Making modern IT easy means providing a storage infrastructure that offers end-to-end solutions encompassing storage, GPU, and computing capabilities that integrate to handle both AI and other applications using enterprise data. With over four decades’ experience in the technology sector, Lenovo is presenting itself as a go-to partner that will keep its customers at the cutting edge in fast-moving times.

Sponsored by Lenovo.

DataCore acquires StarWind for edge hyperconverged tech

DataCore is buying edge-focused hyperconverged infrastructure (HCI) supplier StarWind Software.

Beverly, Massachusetts-based StarWind sells an HCI Appliance or Virtual HCI Appliance, running the same software on a customer’s choice of hardware. The software supports options for hypervisors like VMware vSphere, Microsoft Hyper-V, or StarWind’s own KVM-based StarWind Virtual SAN, using SSDs and HDDs, with data access over iSCSI, NVMe-oF, or RDMA. The company has notched up more than 63,800 customers. It was founded in 2008 by CEO, CTO, and chief architect Anton Kolomyeytsev and COO Artem Berman. Kolomyeytsev is a former Windows kernel engineer.

The company raised $2 million in a 2009 A-round and $3.25 million in a 2014 B-round from Almaz Capital, ABRT Venture Fund (the A-round investor), and AVentures Capital. It has been self-funding since, and has flown under the radar to an extent as it focused more on software and support than marketing. StarWind was named a Customers’ Choice in the 2023 Gartner Peer Insights report for hyperconverged infrastructure.

Dave Zabrowski, DataCore
Dave Zabrowski

DataCore CEO Dave Zabrowski stated: “This acquisition represents a significant leap toward realizing our DataCore.NEXT vision. Merging our strengths with StarWind’s trusted edge and ROBO expertise allows us to deliver reliable HCI that works seamlessly from central data centers to the most remote locations. We are focused on giving organizations greater choice, control, and a more straightforward path for managing data wherever it resides.”

Kolomyeytsev said: “Joining the DataCore family allows us to bring our high-performance virtual SAN technology to a wider audience. With growing uncertainty around Broadcom-VMware’s vSAN licensing and pricing – particularly in distributed and cost-sensitive environments – organizations are rethinking their infrastructure strategies. Together with DataCore, we are delivering greater flexibility, performance, and freedom from hardware and hypervisor lock-in without compromising simplicity or control.”

Zabrowski and Kolomyeytsev’s pitch is that they can supply better software and a more affordable HCI system than either Broadcom’s vSphere/vSAN or so-called “legacy” HCI systems. They think the edge and remote office-branch office (ROBO) IT environment will be increasingly influenced by AI, and DataCore’s partner channel will relish having an AI-enabled edge HCI offering to add to their portfolio.

From left, StarWind founders CEO Anton Kolomyeytsev and COO Artem Berman

StarWind’s tech becomes a vehicle for DataCore to deliver its software-based services to edge and ROBO locations and complements its existing AI services-focused Perifery and the SANSymphony (block) and Swarm (object) core storage offerings, as well as its acquired Arcastream parallel file system tech.

Zabrowski is a serial acquirer at DataCore, having purchased Object Matrix, Caringo (for Swarm), and Kubernetes-focused MayaData.

Most of the StarWind team will be joining DataCore.

+Comment
The acquisition price was not revealed, but we think it’s a single-digit multiple of StarWind’s funding as the firm is highly regarded and successful, albeit by some standards a tad under-marketed.

VAST Data launches AI operating system

Disaggregated shared everything (DASE) parallel storage and AI software stack provider VAST Data has announced an AI operating system.

VAST envisages a new computing paradigm in which “trillions of intelligent agents will reason, communicate, and act across a global grid of millions of GPUs that are woven across edge deployments, AI factories, and cloud datacenters.” It will provide a unified computing and data cloud and feed new AI workloads with near-infinite amounts of data from a single fast and affordable tier of storage.

Renen Hallak

VAST co-founder and CEO Renen Hallak stated: “This isn’t a product release – it’s a milestone in the evolution of computing. We’ve spent the past decade reimagining how data and intelligence converge. Today, we’re proud to unveil the AI Operating System for a world that is no longer built around applications – but around agents.” 

The AI OS is built on top of VAST’s existing AI Data Platform and provides services for distributed agentic computing and AI agents. It consists of:

  • Kernel to run platform services on private and public clouds
  • Runtime to deploy AI agents – AgentEngine
  • Eventing infrastructure for real-time event processing – Data Engine
  • Messaging infrastructure
  • Distributed file and database storage system that can be used for real-time data capture and analytics – DataStore, DataBase, and Data Space

VAST is introducing a new AgentEngine feature. Its InsightEngine prepares data for AI using AI. The AgentEngine is an auto-scaling AI agent deployment runtime that equips users with a low-code environment to build intelligent workflows, select reasoning models, define agent tools, and operationalize reasoning.

VAST AI Operating System

It has an AI agent tool server that lets agents invoke data, metadata, functions, web search, or other agents as MCP-compatible tools. Agents can assume multiple personas with different purposes and security credentials. The agent tool server provides secure, real-time access to various tools. Its scheduler and fault-tolerant queuing mechanisms ensure agent resilience against machine or service failure.

The AgentEngine has agentic workflow observability, using parallel, distributed tracing, so that developers have a unified and simple view into massively scaled and complex agentic pipelines. 

VAST says it will release a set of open source Agents at a rate of one per month. Some personal assistant agents will be tailored to industry use cases, while others will be designed for general-purpose use. Examples include:

  • A reasoning chatbot, powered by all of an organization’s VAST data
  • A data engineering agent to curate data automatically
  • A prompt engineer to help optimize AI workflow inputs
  • A meta-agent, to automate the deployment, evaluation, and improvement of agents
  • A compliance agent, to enforce data and activity level regulatory compliance
  • An editor agent, to create rich media content
  • A life sciences researcher, to assist with bioinformatic discovery 

VAST Data will run a series of “VAST Forward” global workshops, both in-person and online, throughout the year. These will include training on AI OS components and sessions on how to develop on the platform. 

Comment

VAST’s AI OS is not a standalone OS like Windows or Linux, which are low-level, processor-bound systems with a focus on hardware and basic services. The AI OS represents the culmination of its Thinking Machines vision and is a full AI stack entity. 

Nvidia has its AI Enterprise software suite that supports the development and deployment of production-grade AI applications, including generative AI and agentic AI systems. It includes microservices like NIM and supports tools for building and deploying AI agents and managing AI workflows. But it is not an overall operating system.

Both Dell and HPE have AI factory-type approaches that could be developed into AI operating systems.

Bootnote

VAST claims it has recorded the fastest path to $2 billion in cumulative bookings of any data company in history. It experienced nearly 5x year-over-year growth in the first 2025 quarter and its DASE clusters support over 1 million GPUs around the world. VAST says it has a cash-flow-positive business model.

Hitachi Vantara adds VSP 360 see-everything storage control plane

VSP 360 is Hitachi Vantara’s new management control plane for its VSP One storage portfolio, delivered as a service and covering on-premises, public cloud and hybrid deployments.

VSP One (Virtual Storage Platform One) is a storage product portfolio that includes the on-premises VSP One SDS Block, the VSP One Block appliance, VSP One File, VSP One Object – a low-cost all-flash array and object storage offering – and VSP One SDS Cloud (cloud-native SVOS) for the AWS cloud. These products were managed through Hitachi V’s Ops Center, which was described in 2023 as the company’s primary brand for infrastructure data management on the VSP One platform. That has now evolved into VSP 360, which provides control for VSP One hybrid cloud deployments, AIOps predictive insights, and simplified, compliance-ready data lifecycle governance.

Octavian Tanase

Hitachi Vantara’s Chief Product Officer, Octavian Tanase, enthused: “VSP 360 represents a bold step forward in unifying the way enterprises manage their data. It’s not just a new management tool—it’s a strategic approach to modern data infrastructure that gives IT teams complete command over their data, wherever it resides.”

The company positions VSP One as a unified, multi-protocol, multi-tier data plane with VSP 360 being its unified control plane. It provides a single interface to manage VSP One resources, configurations and policies across its several environments, simplifying administration. Routine tasks like provisioning, monitoring and upgrades can be streamlined and it provides information about actual data usage and storage performance.

The AIOps facilities enable automated telemetry data correlation, identification of the root cause of performance problems, and sustainability analytics. 

An Ops Center Clear Sight facility provided cloud-based monitoring and management for VSP One products. This has become VSP 360 Clear Sight. Hitachi V says customers can use VSP 360 Clear Sight’s advanced analytics to optimize storage performance and capacity utilization, and to troubleshoot problems on-premises.

VSP 360 has built-in AI and automation, and is available as SaaS, as a private deployment, or via a mobile app. Hitachi V says it supports integrated fleet management, intelligent protection, and infrastructure as code (IaC) interoperability across multiple storage types, plus AI, PII (personally identifiable information) discovery, cybersecurity, and IaaS use cases.

VSP 360 integrates data management tools across the VSP One enterprise storage products, using AIOps observability, to monitor performance indicators such as storage capacity utilization and overall system health. It’s claimed to streamline data services delivery.

Dell’s CloudIQ is a cloud-based AIOps platform that aligns quite closely with VSP 360’s capabilities, offering AI-driven insights, monitoring, and predictive analytics for Dell storage systems like PowerStore and PowerMax.

HPE’s InfoSight is a similar product, with AI-powered management for HPE storage arrays like Primera and Alletra. It focuses on predictive analytics, performance optimization, and automated issue resolution, with a centralized dashboard for system health, capacity, and performance insights. 

NetApp has its BlueXP storage and data services control plane.

Pure Storage’s Pure1 platform provides AI-driven management for Pure’s FlashArray and FlashBlade systems, with performance monitoring, capacity forecasting, and predictive analytics through a cloud-based interface.

You can dig deeper into VSP 360 here and in this blog.

Kioxia revenues expected to go down

Kioxia announced fourth quarter and full fiscal 2024 results with revenues up slightly year-on-year but set to decline, as rising datacenter and AI server sales fail to offset a smartphone and PC/notebook market slump and an expected unfavorable exchange rate.

Q4 revenues were ¥347.1 billion ($2.25 billion), up 2.9 percent annually, but 33 percent down sequentially, with IFRS net income of ¥20.3 billion ($131.8 million), better than the year-ago ¥64.9 billion loss.

Full fy2024 [PDF] revenues were ¥1.706 trillion ($11.28 billion), up 58.5 percent annually, with an IFRS profit of ¥272.3 billion ($1.77 billion), again much better than the year-ago ¥243.7 billion loss.

Hideki Hanazawa

CFO Hideki Hanazawa said: “Our revenue exceeded the upper end of our guidance. … Demand from enterprises remained steady. However ASPs were down around 20 percent quarter-on-quarter, influenced by inventory adjustments at PC and smartphone customers.” The full year revenue growth “was due to an increase in ASPs and bit shipments resulting from recovery in demand from the downturn in fiscal year 2023, the effect of cost-cutting measures taken in 2023 and the depreciation of the yen.”

Financial summary

  • Free cash flow: ¥46.6 billion ($302.6 million)
  • Cash & cash equivalents: ¥167.9 billion vs year-ago ¥187.6 billion
  • EPS: ¥24.6 ($0.159)

Kioxia’s ASP declined around 20 percent Q/Q with a circa 10 percent decrease in bits shipped. It said it has had positive free cash flow for five consecutive quarters.

Its NAND fab joint venture partner Sandisk reported $1.7 billion in revenues for its latest quarter, 0.6 percent down Y/Y, meaning that Kioxia performed proportionately better.

Kioxia ships product to three market segments and their revenues were: 

  • Smart Devices (Phones): ¥79.6 billion ($516.8 million), up 29.2 percent Y/Y
  • SSD & Storage: ¥215.2 billion ($1.4 billion), up 32.5 percent
  • Other (Retail + sales to Sandisk): ¥52.3 billion ($339.6 million), up 10.7 percent

Smart devices Q/Q sales decreased due to lower demand and selling prices as customers used up inventory. The SSD and storage segment is divided into Data Center/Enterprise, which is 60 percent of its revenues, and PC and Others which is 40 percent. Demand remained strong in Data Center/Enterprise, driven by AI adoption, but Q/Q sales declined mainly due to lower selling prices. There was ongoing weak demand in the PC sub-segment and lower selling prices led to reduced Q/Q sales.

Overall there was strong demand in the quarter for Data Center/Enterprise product with continued softness in the PC and smartphone markets. There was around 300 percent growth in Data Center/Enterprise SSD sales in the year compared to last year.

Kioxia had an IPO in the year and has improved its net debt to equity ratio from 277 percent at the end of fy2023 to 126 percent at fy2024 year end, strengthening its balance sheet. The company has started investing in BiCS 10 (332-layer 3D NAND) technology for future growth. This has a 9 percent improvement in bit density compared to BiCS 8 (218 layers).

The calculations for next quarter’s outlook assume a minimal impact from US tariff changes, some improvement in smartphone and PC demand, accelerating AI server deployments, and strong datacenter server demand fueled by AI. However, Kioxia anticipates a decrease in both revenue and profit on a quarterly basis. The company believes the main reason for this sales decline will be the exchange rate, with the current assumed rate for June being ¥140 to the dollar; it was ¥154 to the dollar in the fourth fiscal 2024 quarter. A change of ¥1 to the dollar is expected to have an impact of approximately ¥6 billion on Kioxia’s operating income.
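
Combining the company’s own figures – the move from ¥154 to an assumed ¥140 to the dollar, and roughly ¥6 billion of operating income per ¥1 swing – gives a sense of the scale of that currency headwind:

```python
# Combining Kioxia's stated figures: assumed quarterly exchange rates and the
# approximate ¥6 billion operating-income impact per ¥1 move against the dollar.
q4_rate = 154                 # yen per dollar, fourth fiscal 2024 quarter
q1_outlook_rate = 140         # assumed rate for June
impact_per_yen = 6e9          # approximate operating income impact per ¥1 move

headwind = (q4_rate - q1_outlook_rate) * impact_per_yen
print(f"Implied operating income headwind: about ¥{headwind / 1e9:.0f} billion")  # ~¥84 billion
```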

With this in mind, the outlook for the next quarter (Q1 fy2025) is revenues of ¥310 billion +/- ¥15 billion; $2 billion at the mid-point and a 10.1 percent Y/Y decline. Kioxia says the potential for growth in the NAND market remains strong due to AI, datacenter and storage demand. If only the smartphone and PC/notebook markets would pick up!

Hypervisor swap or infrastructure upgrade?


Don’t just replace VMware; tune up your infrastructure

Finding a suitable VMware alternative has become a priority for many IT organizations, especially following Broadcom’s acquisition of VMware. Rather than swapping out hypervisors, IT should seize this opportunity to reevaluate and modernize the entire infrastructure, adopting an integrated software-defined approach that optimizes resource efficiency, scalability, and readiness for emerging technologies, such as enterprise AI.

Beyond simple hypervisor replacement

Replacing a hypervisor without addressing underlying infrastructure limitations provides only temporary relief. Issues such as siloed resource management, inefficient storage operations, and limited scalability remain, continuing to increase operational complexity and costs.

Adopting a comprehensive software-defined infrastructure ensures seamless integration, flexibility, and efficient scaling—critical factors for handling modern workloads, including legacy applications, VDI, and AI workloads.

Preparing infrastructure for enterprise AI

The rapid adoption of AI increases demands on existing IT infrastructure. AI workloads require extensive computational resources, high-speed data storage access, advanced GPU support, and low-latency networking. Traditional infrastructures, with their inflexible storage, compute, and network architectures, fail to meet these dynamic and intensive requirements.

Due to these software inefficiencies, IT teams are forced to create separate infrastructures for AI workloads. These new infrastructures require dedicated servers, specialized storage systems, and separate networking, thereby creating additional silos to manage. This fragmented approach increases operational complexity and duplication of resources.

A modern, software-defined infrastructure integrates flexible storage management, GPU pooling, and dynamic resource allocation capabilities. These advanced features enable IT teams to consolidate AI and traditional workloads, eliminating unnecessary silos and allowing resources to scale smoothly as AI workloads evolve.

What to look for in software-defined infrastructure

When selecting a VMware alternative and transitioning to a software-defined model, consider the following essential capabilities:

Integrated storage management

Choose solutions that manage storage directly within the infrastructure software stack, removing the need for external SAN or NAS devices. This integration streamlines data management, optimizes data placement, minimizes latency, and simplifies operational complexity.

No compromise storage performance

The solution should deliver storage performance equal to or exceeding that of traditional three-tier architectures using dedicated all-flash arrays. Modern software-defined infrastructure must leverage the speed and efficiency of NVMe-based flash storage to optimize data paths and minimize latency. This ensures consistently high performance, meeting or surpassing the demands of the most intensive workloads, including databases, VDI, and enterprise AI applications, without compromising simplicity or scalability.

Advanced GPU support for AI

Look beyond basic GPU support traditionally used for VDI environments. Modern infrastructure solutions should offer advanced GPU features, including GPU clustering, GPU sharing, and efficient GPU virtualization explicitly designed for AI workloads.

Efficient resource usage

Prioritize infrastructure that supports precise and dynamic resource allocation. Solutions should offer granular control over CPU, memory, storage, and networking, reducing wasted resources and simplifying management tasks.

Efficiency is critical due to the high cost of GPUs. Don’t waste these valuable resources on unnecessary virtualization overhead. To maximize your investment, modern solutions must deliver performance as close to bare-metal GPU speeds as possible, ensuring AI workloads achieve optimal throughput and responsiveness without resource inefficiencies.

High-performance networking

Evaluate infrastructures that feature specialized networking protocols optimized for internal node communications. Look for active-active network configurations, low latency, and high bandwidth capabilities to ensure consistent performance during intensive operations and in the event of node failures.

Global inline deduplication and data efficiency

Ensure the infrastructure offers global inline deduplication capabilities to reduce storage consumption, beneficial in environments with substantial VM or AI workload duplication. Confirm that deduplication does not negatively impact system performance.

Ask yourself when the vendor introduced deduplication into its infrastructure software. If it added this feature several years after the platform’s initial release, it is likely a bolt-on introducing additional processing overhead and negatively impacting performance. Conversely, a platform that launched with deduplication will be tightly integrated, optimized for efficiency, and unlikely to degrade system performance.
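
Inline deduplication itself is conceptually simple: hash each incoming block and store only blocks whose content has not been seen before. The sketch below is a generic illustration of the technique, not any particular vendor's implementation:

```python
import hashlib

# Generic illustration of inline block deduplication: each unique block is
# stored once, keyed by its content hash, and referenced thereafter.
class DedupStore:
    def __init__(self):
        self.blocks = {}   # content hash -> block data, stored once

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:     # only previously unseen content consumes space
            self.blocks[key] = data
        return key                     # the caller keeps this reference

    def read(self, key: str) -> bytes:
        return self.blocks[key]
```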

Simplified management and automation

Choose solutions that provide comprehensive management interfaces with extensive automation capabilities across provisioning, configuration, and scaling. Automated operations reduce manual intervention, accelerate deployments, and minimize human error.

Enhanced resiliency and high availability

Opt for infrastructures with robust redundancy and availability features such as distributed data mirroring, independent backup copies, and intelligent automated fail-over. These capabilities are critical for maintaining continuous operations, even during hardware failures or scheduled maintenance.

IT should evaluate solutions capable of providing inline protection from multiple simultaneous hardware failures (such as drives or entire servers) without resorting to expensive triple mirroring or even higher replication schemes. Solutions that achieve advanced redundancy without significant storage overhead help maintain data integrity, reduce infrastructure costs, and simplify operational management.
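
To see why this matters, compare the usable fraction of raw capacity under triple mirroring with that of a dual-parity erasure-coded layout; the 8+2 geometry below is just an example, not a specific vendor's scheme:

```python
# Usable fraction of raw capacity while surviving two simultaneous failures:
# triple mirroring versus an example 8+2 dual-parity erasure-coded layout.
usable_triple_mirror = 1 / 3       # every block is stored three times
usable_8_plus_2 = 8 / (8 + 2)      # 8 data strips plus 2 parity strips

print(f"Triple mirroring: {usable_triple_mirror:.0%} usable")  # ~33%
print(f"8+2 erasure code: {usable_8_plus_2:.0%} usable")       # 80%
```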

Evaluating your VMware alternative

Transitioning from VMware provides an ideal opportunity to rethink your infrastructure strategy holistically. A well-designed, software-defined infrastructure enables superior resource management, simplified operational processes, and improved scalability, preparing your organization for current needs and future innovations.

By carefully evaluating VMware alternatives through the lens of these critical infrastructure capabilities, IT organizations can significantly enhance their infrastructure agility, efficiency, and ability to support emerging technology demands, particularly enterprise AI.

Don’t view your VMware exit merely as a hypervisor replacement. Use it as a strategic catalyst to modernize your infrastructure. Storage is another key concern when selecting a VMware alternative. Check out “Comparing VMware Alternative Storage” to dive deeper. 

Sponsored by VergeIO

Dell updates PowerScale, ObjectScale to accelerate AI Factory rollout

Dell is refreshing its PowerScale and ObjectScale storage systems as part of a slew of AI Factory announcements on the first day of the Dell Technologies World conference.

The company positions its storage systems, data lakehouse, servers, and so on as integrated parts of an AI Factory set of offerings closely aligned with Nvidia’s accelerators and providing a GenAI workflow capability as well as supporting traditional apps. PowerScale – formerly known as Isilon – is its scale-out, clustered filer node offering. ObjectScale is distributed, microservices-based, multi-node, scale-out, and multi-tenant object storage software with a single global namespace that supports the S3 API.  
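Because ObjectScale speaks the standard S3 API, existing S3 tooling can generally be pointed at it by overriding the endpoint. A hedged boto3 sketch; the endpoint URL, bucket name, and credentials below are placeholders, not Dell-documented values:

```python
import boto3

# Standard S3 client aimed at an on-prem ObjectScale endpoint (placeholder URL and credentials).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal",
    aws_access_key_id="ACCESS_KEY_ID",
    aws_secret_access_key="SECRET_ACCESS_KEY",
)

s3.create_bucket(Bucket="training-data")
s3.put_object(Bucket="training-data", Key="datasets/sample.parquet", Body=b"example bytes")

# List what we just wrote, exactly as we would against AWS S3.
for obj in s3.list_objects_v2(Bucket="training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```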

Jeff Clarke, Dell
Jeff Clarke

Jeff Clarke, Dell Technologies COO, stated: “It has been a non-stop year of innovating for enterprises, and we’re not slowing down. We have introduced more than 200 updates to the Dell AI Factory since last year. Our latest AI advancements – from groundbreaking AI PCs to cutting-edge data center solutions – are designed to help organizations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results.” 

As background, Nvidia has announced an extension of its GPUDirect file protocol – which uses NVMe and RDMA to bypass the storage server/controller host CPU and DRAM – to S3, so that object data can be fed to its GPUs quickly using similar RDMA-based technology. Parallel access file systems like IBM’s Storage Scale, Lustre, VAST Data, VDURA, and WEKA have a speed advantage over serial filers, even scale-out ones like PowerScale and Qumulo. Dell has responded to this with its Project Lightning initiative.

Dell slide

With these points in mind, the ObjectScale product is getting a denser version along with Nvidia BlueField-3 DPU (Data Processing Unit) and Spectrum-4 networking support. BlueField-3 is powered by ARM processors and can run containerized software such as ObjectScale. Spectrum-4 is an Ethernet platform product providing 400 Gbit/s end-to-end connectivity. Its components include a Spectrum-4 switch, ConnectX-7 SmartNIC, BlueField-3, and DOCA infrastructure software.

The denser ObjectScale system will support multi-petabyte scale and is built from PowerEdge R7725xd server nodes with 2 x AMD EPYC gen 5 CPUs, launching in June 2025. It will offer the highest storage-density NVMe configurations in the Dell PowerEdge portfolio. The system will feature Nvidia BlueField-3 DPUs and Spectrum-4 Ethernet switches, with planned network connectivity of up to 800 Gbps.

The vendor says ObjectScale will support S3 over RDMA, making unstructured data stored as objects available much faster for AI training and inferencing. Dell claims up to 230 percent higher throughput, up to 80 percent lower latency, and 98 percent lower CPU load compared to traditional S3 data transfers. A fully managed S3 Tables feature, supporting open table formats that integrate with AI platforms, will be available later this year.

PowerScale gets S3 Object Lock WORM in an upcoming release, along with S3 bucket logging and protocol access logging. PowerScale file-to-object SmartSync automates data replication directly to AWS, Wasabi, or Dell ObjectScale for lower-cost backup storage, and can burst to the cloud using EC2 for compute-heavy applications.
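S3 Object Lock is the standard S3 mechanism for WORM retention, so clients that already use it against AWS should look much the same against a PowerScale bucket once the feature ships. A hedged boto3 sketch; the endpoint, bucket, and 30-day retention period are illustrative assumptions, and Dell's documentation should be consulted for the modes it actually supports:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Placeholder endpoint for a PowerScale S3 service; credentials are resolved from the environment.
s3 = boto3.client("s3", endpoint_url="https://powerscale-s3.example.internal")

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# Write a restore point in COMPLIANCE mode: it cannot be overwritten or deleted
# until the retain-until date passes.
s3.put_object(
    Bucket="backups",
    Key="veeam/restore-point-0001",
    Body=b"backup payload",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```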

A new PowerScale Cybersecurity Suite is an AI-driven software product designed to provide ransomware detection, minimal downtime when a threat occurs, and near-instant recovery. There are three bundles:

  • Cybersecurity software for real-time ransomware detection and mitigation, including a full audit trail when an attack occurs. 
  • Airgap vault for immutable backups. 
  • Disaster recovery software for seamless failover and recovery to guarantee business continuity. 

Project Lightning is claimed to be “the world’s fastest parallel file system per new testing, delivering up to two times greater throughput than competing parallel file systems.” This is according to internal and preliminary Dell testing comparing random and sequential throughput per rack unit. Dell has not provided specific throughput figures, which makes independent comparison difficult. A tweet suggested 97 Gbps throughput. The company says Project Lightning will accelerate training time for large-scale and complex AI workflows.

Dell says Lightning is purpose-built for the largest AI deployments with tens of thousands of GPUs. Partners such as WWT and customers such as Cambridge University are active participants in a multi-phase customer validation program, which includes performance benchmarking, feature testing, and education to drive product requirements and feedback into the product.

Dell is introducing a high-performance offering built with PowerScale, Project Lightning, and PowerEdge XE servers. It will use KV caching and integrate Nvidia’s Inference Xfer Library (NIXL), part of Nvidia’s Dynamo offering, making it ideal for large-scale, complex, distributed inference workloads, according to Dell. Dynamo serves generative AI models in large-scale distributed environments and includes optimizations specific to large language models (LLMs), such as disaggregated serving and key-value cache (KV cache) aware routing.

A Dell slide shows Project Lightning sitting as a software layer above both ObjectScale and PowerScale in an AI Data Platform concept:

Dell slide

We asked about this, and Dell’s Geeta Vaghela, a senior product management director, said: “We really start to see parallel file systems not being generic parallel file systems, but really optimised for this AI use case and workflow.” She envisages it integrating with KV cache. Dell is now looking to run private previews of the Project Lightning software.

Dell says its AI Data Platform updates improve access to high-quality structured, semi-structured, and unstructured data across the AI life cycle. There are Dell Data Lakehouse enhancements to simplify AI workflows and accelerate use cases such as recommendation engines, semantic search, and customer intent detection by creating and querying AI-ready datasets. Specifically, the Dell Data Lakehouse gets:

  • Native Vector Search Integration in the Dell Data Analytics Engine, powered by Starburst, bringing semantic understanding directly into SQL workflows and bridging the gap between structured query processing and unstructured data exploration.
  • Hybrid Search builds on the vector search capability by combining semantic similarity with traditional keyword matching within a single SQL query (see the query sketch after this list).
  • Built-In LLM Functions integrate tools like text summarization and sentiment analysis into SQL-based workflows. 
  • Automated Iceberg Table Management looks after maintenance tasks such as compaction and snapshot expiration. 
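Since the Data Analytics Engine is powered by Starburst, access is typically through standard Trino-compatible SQL clients. The sketch below uses the open-source trino Python client; the hostname, catalog, table, and the similarity(), embed(), and summarize() functions are hypothetical stand-ins for illustration, not documented Dell or Starburst identifiers:

```python
import trino  # open-source Trino/Starburst Python client

conn = trino.dbapi.connect(
    host="lakehouse.example.internal",  # placeholder coordinator hostname
    port=8443,
    user="analyst",
    http_scheme="https",
    catalog="lakehouse",
    schema="support",
)
cur = conn.cursor()

# Hypothetical hybrid search: rank by vector similarity while keeping a keyword filter,
# and summarize each hit, all within one SQL statement.
cur.execute(
    """
    SELECT ticket_id,
           summarize(body)                                 AS summary,
           similarity(embedding, embed('billing dispute')) AS score
    FROM   tickets
    WHERE  body LIKE '%refund%'
    ORDER  BY score DESC
    LIMIT  10
    """
)
for ticket_id, summary, score in cur.fetchall():
    print(ticket_id, round(score, 3), summary)
```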

There are PowerEdge server and network switch updates as part of this overall Dell AI Factory announcement. Dell is announcing Managed Services for the Dell AI Factory with Nvidia to simplify AI operations with management of the full Nvidia AI solutions stack, including AI platforms, infrastructure, and Nvidia AI Enterprise software. Dell managed services experts will handle 24×7 monitoring, reporting, version upgrades, and patching.

The Nvidia AI Enterprise software platform is now available directly from Dell, and customers can use Dell’s AI Factory with Nvidia NIM, NeMo microservices, Blueprints, NeMo Retriever for RAG, and Llama Nemotron reasoning models. They can, Dell says, “seamlessly develop agentic workflows while accelerating time-to-value for AI outcomes.” 

Dell AI Factory with Nvidia offerings support the Nvidia Enterprise AI Factory validated design, featuring Dell and Nvidia compute, networking, storage, and Nvidia AI Enterprise software. This provides an end-to-end, fully integrated AI product for enterprises. Red Hat OpenShift is available on the Dell AI Factory with Nvidia.

Availability

  • Dell Project Lightning is available in private preview for select customers and partners now. 
  • Dell Data Lakehouse updates will be available beginning in July 2025. 
  • Dell ObjectScale with Nvidia BlueField-3 DPU and Spectrum-4 Ethernet switches is targeting availability in 2H 2025. 
  • The Dell high-performance system built with Dell PowerScale, Dell Project Lightning, and PowerEdge XE servers is targeting availability later this year. 
  • Dell ObjectScale support for S3 over RDMA will be available in 2H 2025. 
  • The Nvidia AI Enterprise software platform will be available directly from Dell in May 2025. 
  • Managed Services for the Dell AI Factory with Nvidia are available now. 

Dell intros all-flash PowerProtect target backup appliance

Dell is launching an all-flash deduping PowerProtect backup target appliance, providing competition for all-flash systems from Pure Storage (FlashBlade), Quantum, and Infinidat.

The company is also announcing PowerStore enhancements at its Las Vegas-based Dell Technologies World event, plus Dell Private Cloud and NativeEdge capabilities.

Arthur Lewis, Dell’s president, Infrastructure Solutions Group, stated: “Our disaggregated infrastructure approach helps customers build secure, efficient modern datacenters that turn data into intelligence and complexity into clarity.” 

There are currently four PowerProtect systems plus a software-only virtual edition, offering a range of capacities and speeds. 

Dell table

The All-Flash Ready node is basically a Dell PowerEdge R760 server with 24 x 2.5-inch SAS-4 SSDs and 8 TB HDDs for cache and metadata, making it a hybrid flash and disk server. Capacity is up to 220 TB per node, and a node supports up to 8 x DS600 disk array enclosures, each providing 480 TB raw, for expanded storage.
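As a quick sanity check on the expansion arithmetic, and assuming the enclosures simply add to a node's internal capacity (which the figures imply but Dell does not spell out here):

```python
node_tb = 220        # maximum internal capacity per All-Flash Ready node, in TB
enclosure_tb = 480   # raw capacity per DS600 disk array enclosure, in TB
max_enclosures = 8   # enclosures supported per node

total_raw_tb = node_tb + max_enclosures * enclosure_tb
print(total_raw_tb)  # 4,060 TB, i.e. roughly 4 PB raw per fully expanded node
```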

The PowerProtect Data Domain All-Flash DD9910F appliance restores data up to four times faster than an equivalent-capacity disk-based PowerProtect system, based on testing comparing the DD9910F with an HDD-based DD9910. Dell does not provide actual restore speed numbers for any PowerProtect appliance, nor the all-flash DD9910F’s ingest speed.

All-flash PowerProtect systems offer up to twice the replication performance and up to 2.8x faster analytics. This is based on internal testing of CyberSense analytics performance, used to validate data integrity in a PowerProtect Cyber Recovery vault, comparing a disk-based DD9910 appliance with the all-flash DD9910F at similar capacity. 

They occupy up to 40 percent less rack space and save up to 80 percent on power compared to disk drive-based systems. With Dell having more than 15,000 PowerProtect/Data Domain customers, it has a terrific upsell opportunity here.

PowerStore, the unified file and block storage array, is gaining Advanced Ransomware Detection. This validates data integrity and minimizes downtime from ransomware attacks using Index Engines’ CyberSense AI analytics, which indexes data using more than 200 content-based analytic routines. The AI system has been trained on over 7,000 variants of ransomware and their activity signals. According to Dell, it detects ransomware corruption with a 99.99 percent confidence level.

Dell says it’s PowerStore’s fifth anniversary and there are more than 17,000 PowerStore customers worldwide. 

The company now has a Private Cloud to provide a cloud-like environment on premises, built using its disaggregated infrastructure – servers, storage, networking, and software. There is a catalog of validated blueprints. This gives Dell an answer to HPE’s Morpheus private cloud offering.

A Dell Automation Platform provides centralized management and zero-touch onboarding for this private cloud. By using it, customers can provision a private cloud stack cluster in 150 minutes, with up to 90 percent fewer steps than a manual process.

Dell has new NativeEdge products for virtualized workloads at the edge and in remote branch offices. These protect and secure data with policy-based load balancing, VM snapshots, backup, and migration capabilities. Customers can manage diverse edge environments consistently, with support for non-Dell and legacy infrastructure.

Comment

We expect that Dell announcing its first all-flash PowerProtect appliance will be the trigger ExaGrid needs to bring out its own all-flash product.