
Storage news roundup – May 22

Arctera is collaborating with Red Hat, and its InfoScale cyber-resilience product is certified on Red Hat OpenShift Virtualization.

Hitachi Vantara recently suffered a malware attack, and we asked: “In the light of suffering its own malware attack, what would Hitachi Vantara say to customers about detecting, repelling and recovering from such attacks?”

The company replied: “Hitachi Vantara’s recent experience underscores the reality that no organization is immune from today’s sophisticated cyber threats, but it reinforces that how you detect, contain, and respond to such events is what matters most. At Hitachi Vantara, our focus has been on acting with integrity and urgency.

“We would emphasize three key lessons:

1. Containment Measures Must Be Quick and Decisive. The moment we detected suspicious activity on April 26, 2025, we immediately activated our incident response protocols and engaged leading third-party cybersecurity experts. We proactively took servers offline and restricted traffic to our data centers as a containment strategy.

2. Recovery Depends on Resilient Infrastructure. Our own technology played a key role in accelerating recovery. For example, we used immutable snapshot backups stored in an air-gapped data center to help restore core systems securely and efficiently. This approach helped reduce downtime and complexity during recovery.

3. Transparency and Continuous Communication Matter. Throughout the incident, we’ve prioritized open communication with customers, employees, and partners, while relying on the forensic analysis and our third-party cybersecurity experts to ensure decisions are based on verified data. As of April 27, we have no evidence of lateral movement beyond our environment.

“Ultimately, our experience reinforces the need for layered security, rigorous backup strategies, and well-practiced incident response plans. We continue to invest in and evolve our security posture, and we’re committed to sharing those insights to help other organizations strengthen theirs.”

HYCU has extended its support for Dell’s PowerProtect virtual backup target appliance to protect SaaS and cloud workloads with backup, disaster recovery, data retention, and offline recovery.

The addition of support for Dell PowerProtect Data Domain Virtual Edition by HYCU R-Cloud SaaS complements existing support for Dell PowerProtect Data Domain with the R-Cloud Hybrid Cloud edition. HYCU says it’s the first company to offer the ability to protect data across on-premises, cloud, and SaaS environments to what it calls the most efficient and secure storage in the market: PowerProtect. HYCU protects more than 90 SaaS apps, and says that what’s new here is that, across those 90-plus offerings, only a handful of backup suppliers offer customer-owned storage.

OWC announced the launch and pre-order availability of the Thunderbolt 5 Dock with up to 80Gb/s of bi-directional data speed and 120Gb/s for higher display bandwidth needs. You can connect up to three 8K displays or dual 6K displays on Macs. It works with Thunderbolt 5, 4, 3, USB4, and USB-C devices, and delivers up to 140W of power to charge notebooks. The dock has 11 versatile ports, including three Thunderbolt 5 (USB-C), two USB-A 10Gb/s, one USB-A 5Gb/s, 2.5GbE Ethernet (MDM ready), microSD and SD UHS-II slots, and a 3.5mm audio combo jack. The price is $329.99, including a Thunderbolt 5 cable and external power supply.

At Taiwan’s Computex, Phison announced the Pascari X200Z enterprise SSD, a PCIe Gen 5 drive with near-SCM latency and – get this – up to 60 DWPD endurance; it’s designed for the high write-endurance demands of generative AI and real-time analytics. It also announced aiDAPTIVGPT, supporting generative tasks such as conversational AI, speech services, code generation, web search, and data analytics. Phison launched aiDAPTIVCache AI150EJ, a GPU memory extension for AI edge and robotics systems that enhances edge inference performance by optimizing Time to First Token (TTFT) and increasing the number of tokens processed.

The E28 PCIe 5.0 SSD controller, built on TSMC’s 6nm process, is the first in the world to feature integrated AI processing, achieving up to 2,600K/3,000K IOPS (random read/write) – over 10 percent higher than comparable products. It has up to 15 percent lower power consumption versus competing 6nm-based controllers.

The E31T DRAM-less PCIe 5.0 SSD controller is designed for ultra-thin laptops and handheld gaming devices with M.2 2230 and 2242 form factors. It delivers high performance, low power consumption, and space efficiency.

Phison also announced PCIe signal IC products:

  • The world’s first PCIe 5.0 Retimer certified for CXL 2.0
  • PCIe 5.0 Redriver with over 50% global market share
  • The industry’s first PCIe 6.0 Redriver
  • Upcoming PCIe 6.0 Retimer, Redriver, SerDes PHY, and PCIe-over-Optical platforms co-developed with customers

IBM-owned Red Hat has set up an open source llm-d project and community; llm-d standing for, we understand, Large Language Model – Development. It is focused on AI inference at scale and aims to make production generative AI as omnipresent as Linux. It features:

  • vLLM, the de facto standard open source inference server, providing support for emerging frontier models and a broad list of accelerators, including Google Cloud Tensor Processing Units (TPUs).
  • Prefill and Decode Disaggregation to separate the input context and token generation phases of AI into discrete operations, where they can then be distributed across multiple servers.
  • KV (key-value) Cache Offloading, based on LMCache, which shifts the memory burden of the KV cache from GPU memory to more cost-efficient and abundant standard storage, such as CPU memory or network storage (a conceptual sketch follows this list).
  • Kubernetes-powered clusters and controllers for more efficient scheduling of compute and storage resources as workload demands fluctuate, while maintaining performance and keeping latency low.
  • AI-Aware Network Routing for scheduling incoming requests to the servers and accelerators that are most likely to have hot caches of past inference calculations.
  • High-performance communication APIs for faster and more efficient data transfer between servers, with support for NVIDIA Inference Xfer Library (NIXL).
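
To make the KV cache offloading idea concrete, here is a deliberately simplified Python sketch of a two-tier cache that keeps hot entries in a small fast tier (standing in for GPU memory) and spills colder entries to a larger, cheaper tier (standing in for CPU memory or network storage). The class, capacities, and block IDs are invented for illustration and do not reflect llm-d's or LMCache's actual APIs.

```python
# Conceptual sketch of KV cache offloading: keep recently used KV blocks in a
# small fast tier and spill older blocks to a bigger, cheaper tier. Invented
# names and sizes; not the llm-d or LMCache implementation.
from collections import OrderedDict
from typing import Optional


class TieredKVCache:
    def __init__(self, fast_capacity: int):
        self.fast_capacity = fast_capacity  # how many blocks fit in the fast tier
        self.fast = OrderedDict()           # stands in for GPU memory (LRU order)
        self.slow = {}                      # stands in for CPU memory or network storage

    def put(self, block_id: str, kv_tensor: bytes) -> None:
        self.fast[block_id] = kv_tensor
        self.fast.move_to_end(block_id)
        # Offload the least recently used block once the fast tier is full.
        while len(self.fast) > self.fast_capacity:
            evicted_id, evicted_kv = self.fast.popitem(last=False)
            self.slow[evicted_id] = evicted_kv

    def get(self, block_id: str) -> Optional[bytes]:
        if block_id in self.fast:
            self.fast.move_to_end(block_id)
            return self.fast[block_id]
        if block_id in self.slow:
            # Promote an offloaded block back into the fast tier on reuse.
            kv = self.slow.pop(block_id)
            self.put(block_id, kv)
            return kv
        return None  # miss: the prefill phase would have to recompute this block


cache = TieredKVCache(fast_capacity=2)
for i in range(4):
    cache.put(f"prompt-block-{i}", b"kv-bytes")
assert "prompt-block-0" in cache.slow            # older blocks were offloaded
assert cache.get("prompt-block-0") is not None   # and can be pulled back on reuse
```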

CoreWeave, Google Cloud, IBM Research and NVIDIA are founding contributors, with AMD, Cisco, Hugging Face, Intel, Lambda and Mistral AI as partners. The llm-d community includes founding supporters at the Sky Computing Lab at the University of California, Berkeley, originators of vLLM, and the LMCache Lab at the University of Chicago, originators of LMCache. Red Hat intends to make vLLM the definitive open standard for inference across the new hybrid cloud.

Red Hat has published a tech paper entitled “Accelerate model training on OpenShift AI with NVIDIA GPUDirect RDMA.” It says: “Starting with Red Hat OpenShift AI 2.19, you can leverage networking platforms such as Nvidia Spectrum-X with high-speed GPU interconnects to accelerate model training using GPUDirect RDMA over Ethernet or InfiniBand physical link. … this article demonstrates how to adapt the example from Fine-tune LLMs with Kubeflow Trainer on OpenShift AI so it runs on Red Hat OpenShift Container Platform with accelerated NVIDIA networking and gives you a sense of how it can improve performance dramatically.”
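
For orientation, the snippet below shows the kind of multi-GPU gradient exchange that GPUDirect RDMA accelerates. It is generic PyTorch/NCCL code rather than the Red Hat example, and whether the transfer actually bypasses host memory depends on the cluster's NICs, drivers, and NCCL configuration, not on anything in the script.

```python
# Generic PyTorch/NCCL all-reduce: the collective that GPUDirect RDMA can
# accelerate by moving data NIC-to-GPU without staging through host memory.
# Launch with: torchrun --nproc_per_node=<gpus per node> allreduce_sketch.py
import os

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")  # NCCL uses GPUDirect RDMA when the fabric allows
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for a gradient tensor produced by one training step.
    grad = torch.randn(1024, 1024, device=f"cuda:{local_rank}")

    # Sum gradients across every GPU in the job, then average them.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= dist.get_world_size()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```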

Red Hat graphic

SK hynix says it has developed a UFS 4.1 product adopting the world’s highest 321-layer 1Tb triple-level cell 3D NAND for mobile applications. It has a 7 percent improvement in power efficiency compared with the previous generation based on 238-layer NAND, and a slimmer 0.85mm thickness, down from 1mm before, to fit into an ultra-slim smartphone. It supports a data transfer speed of 4,300 MBps, the fastest sequential read for a fourth-generation UFS, while improving random read and write speeds by 15 percent and 40 percent, respectively. SK hynix plans to win customer qualification within the year and ship in volume from Q1 2026 in 512GB and 1TB capacities.

Snowflake’s latest quarterly revenues (Q1fy2026) showed 26 percent Y/Y growth to $1 billion. It’s still growing fast and its customer count is 11,578, up 19 percent. There was a loss of $429,952,000, compared to the previous quarter’s $325,724,000 loss – 32 percent worse. It’s expecting around 25 percent Y/Y revenue growth next quarter.

Data lakehouser Starburst announced a strategic investment from Citi without revealing the amount.

Starburst announced new Starburst AI Agent and new AI Workflows across its flagship offerings: Starburst Enterprise Platform and Starburst Galaxy. AI Agent is an out-of-the-box natural language interface for Starburst’s data platform that can be built and deployed by data analysts and application-layer AI agents.  AI Workflows connect the dots between vector-native search, metadata-driven context, and robust governance, all on an open data lakehouse architecture. With AI Workflows and the Starburst AI Agent, enterprises can build and scale AI applications faster, with reliable performance, lower cost, and greater confidence in security, compliance and control. AI Agent and AI Workflows are available in private preview.

Veeam’s Kasten for Kubernetes v8 release has new File Level Recovery (FLR) for KubeVirt VMs, allowing granular restores so organizations can recover individual files from backups without restoring entire VM clones. A new virtual machine dashboard offers a workload-centric view across cluster namespaces to simplify the process of identifying each VM’s Kubernetes-dependent resources and makes configuring backup consistency easier. KforK v8 supports x86, ARM and IBM Power CPUs and is integrated with Veeam Vault. It broadens support for the NetApp Trident storage provisioner with backup capabilities for ONTAP NAS “Economy” volumes. A refreshed user interface simplifies onboarding, policy creation, and ongoing operations.

Veeam has released Kasten for Modern Virtualization – a tailored pricing option designed to align seamlessly with Red Hat OpenShift Virtualization Engine. Veeam Kasten for Kubernetes v8 and Kasten for Modern Virtualization are now available.

… 

Wasabi is using Kioxia CM7 Series and CD8 Series PCIe Gen 5 NVMe SSDs for its S3-compatible Hot Cloud Storage service.

Zettlab launched its flagship product – Zettlab AI NAS, a high-performance personal cloud system that combines advanced offline AI, enterprise-grade hardware, and a sleek, modern design. Now live on Kickstarter, the Zettlab AI NAS gives users a smarter, more secure way to store, search, and manage digital files, with complete privacy, powerful performance, and an intuitive experience. It transforms the traditional NAS into a fully AI-powered data management platform with local computing, privacy-first AI tools, and a clean, user-friendly operating system, available to early backers at a special launch price.

Zettlab AI NAS

It’s a full AI platform running locally on powerful hardware:

  •   Semantic file search, voice-to-text, media categorization, and OCR – all offline
  •   Built-in Creator Studio: plan shoots, auto-subtitle videos, organize files without lifting a finger
  •   ZettOS: an intuitive OS designed for everyday users with pro-level power
  •   Specs: Intel Core Ultra 5, up to 200TB, 10GbE, 96GB RAM expandable

AI and virtualization are two major headaches for CIOs. Can storage help solve them both?


It’s about evolution not revolution, says Lenovo

CIOs have a storage problem, and the reason can seem pretty obvious.

AI is transforming the technology industry, and by implication, every other industry. AI relies on vast amounts of data, which means that storage has a massive part to play in every company’s ability to keep up. 

After all, according to Lenovo’s CIO Playbook report, data quality issues are the top inhibitor to AI projects meeting expectations.

There’s one problem with this answer: It only captures part of the picture. 

CIOs are also grappling with myriad other challenges. One of the biggest is the upheaval to their virtualization strategies caused by Broadcom’s acquisition of VMware at the close of 2023, and its subsequent licensing and pricing changes.

This has left CIOs contemplating three main options, says Stuart McRae, executive director and GM, data storage solutions at Lenovo. Number one is to adapt to the changes and stick with VMware, optimizing their systems as far as possible to ensure they harvest maximum value from those more expensive licenses. 

Another option is to look at alternative platforms to handle at least some of their virtualization workloads. Or they can simply jump the VMware ship entirely.

But options two and three will mean radically overhauling their infrastructure either to support new platforms or get the most from their legacy systems.

So, AI and virtualization are both forcing technology leaders to take a long hard look at their storage strategies. And, says McRae, these are not discrete challenges. Rather, they are intimately related.

This is because, as Lenovo’s CIO Playbook makes clear, tech leaders are not just looking to experiment with AI or start deploying the technology. The pressure is on to produce business outcomes, in areas such as customer experience, business growth, productivity and efficiency. At the same time, they are looking to make decision-making data-driven.

And this will mean their core legacy platforms, such as SAP, Oracle, and in-house applications will come into play, McRae says. This is where that corporate data lives after all. 

“They still have those systems,” he says. “AI will become embedded in many of those systems, and they will want to use that data to support their efforts in their RAG models.”

Storage is a real-world problem

It is precisely these systems that are running on enterprise virtualization platforms, so to develop AI strategies that deliver real world business value, CIOs need to get their virtualization strategy in order too. That means storage infrastructure that can deliver for both AI and virtualization.

One thing that is clear, McRae says, is that enterprises’ AI and virtualization storage will overwhelmingly be on-prem or co-located. These are core systems with critical data, and companies need to have hands-on control over them. Lenovo’s research shows that less than a quarter of enterprises are taking a “mainly” public cloud approach to their infrastructure for AI workloads.

But McRae explains, “If you look at the storage that customers have acquired and deployed in the last five years, 80 percent of that is hard drive-based storage.”

“The challenge with AI, especially from their storage infrastructure, is a lot of their storage is old and it doesn’t have the performance and resiliency to support their AI investments on the compute GPU side.”

From a technology point of view, a shift to flash is the obvious solution. The performance advantages are straightforward when it comes to AI applications, which rely on data that, in most enterprises, will flow from established applications and systems. Moreover, having GPUs idling while waiting for data is massively wasteful: a single top-end Nvidia GPU can consume roughly as much energy in a year as a domestic household.
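
As a rough sanity check on that comparison, the arithmetic below assumes a board power of about 700W for a top-end datacenter GPU and an illustrative household consumption figure; neither number comes from Lenovo or McRae.

```python
# Back-of-envelope check of the GPU-versus-household energy comparison.
gpu_power_kw = 0.7                                 # assumed ~700 W board power
hours_per_year = 24 * 365
gpu_kwh_per_year = gpu_power_kw * hours_per_year   # ~6,132 kWh if run flat out

household_kwh_per_year = 6_000                     # assumed mid-range household figure;
                                                   # US averages are higher, much of Europe lower
print(f"GPU: ~{gpu_kwh_per_year:,.0f} kWh/yr vs household: ~{household_kwh_per_year:,} kWh/yr")
```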

But there are broader data management implications as well. “If I want to use more of my data than maybe I did in the past, in a traditional model where they may have hot, warm, and cold data, they may want to make that data all more performant,” says McRae.

This even extends to backup and archive, he says. “We see customers moving backup data to flash for faster recovery.”

Flash offers other substantial power and footprint advantages as well. The highest-capacity HDD announced at the time of writing is around 36TB, while enterprise-class SSDs range beyond 100TB. More importantly, SSDs draw far less power than their moving-part cousins.

This becomes critical given the concerns about overall datacenter power consumption and cooling requirements, and the challenges many organizations will face simply finding space for their infrastructure.

McRae says a key focus for Lenovo is to enable unified storage, “where customers can unify their file, block and object data on one platform and make that performant.”

That has a direct benefit for AI applications, allowing enterprises to extract value from the entirety of their data. But it also has a broader management impact by removing further complexity. 

“They don’t have different kits running different storage solutions, and so that gives them all the advantages of a unified backup and recovery strategy,” he says.

But modern flash-based systems offer resiliency benefits as well. McRae says a contemporary 20TB hard drive can take five to seven days to rebuild in the event of a failure. A flash drive will take maybe 30 hours.
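
A back-of-envelope calculation shows the effective rebuild rates implied by those figures; the MB/s values below are worked backwards from McRae's numbers rather than measured.

```python
# Rough arithmetic behind the rebuild-time comparison for a 20TB drive.
capacity_mb = 20 * 1_000_000  # 20 TB expressed in MB

def rebuild_hours(effective_mb_per_s: float) -> float:
    return capacity_mb / effective_mb_per_s / 3600

# ~45 MB/s sustained gives roughly five days; ~190 MB/s gives roughly 30 hours.
print(f"HDD-like rate:   {rebuild_hours(45):.0f} hours (~{rebuild_hours(45) / 24:.1f} days)")
print(f"Flash-like rate: {rebuild_hours(190):.0f} hours")
```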

Securing the future

In a similar vein, as AI becomes more closely intertwined with the broader, virtualized enterprise landscape, security becomes critical.

As McRae points out, while niche storage platforms might have a role to play in hyperscalers’ datacenters where the underlying LLMs are developed and trained, this is less likely to be the case for AI-enriched enterprise computing.

“When you’re deploying AI in your enterprise, that is your enterprise data, and other applications are using that data. It requires enterprise resiliency and security.” 

With Lenovo’s range, AI has a critical role to play in managing storage itself. Along with features such as immutable copies and snapshots, for example, “Having storage that provides autonomous, AI-driven ransomware protection to detect any anomalies or that something bad’s happening is really important for that data.”

So, it certainly makes sense for technology leaders to modernize their storage infrastructure. The question remains: Just how much storage will they need?

This is where Lenovo’s managed services offerings and its TruScale strategy come into play. They allow storage and other infrastructure to be procured on an OpEx, consumption-based basis, and for capacity to be unlocked and scaled up or down over time.

“Every business is different based on their own capital model and structure,” says McRae. “But the consumption models work well for uncertain application and deployment rollouts.”

After all, most customers are only just beginning to roll out new virtualization and AI workloads. “We typically learn stuff as we start deploying it,” says McRae. “And it may not act exactly like we had planned. That flexibility and ability to scale performance and capacity is really important.”

Equally important, he says, is being able to call on experts who understand both the technology and the specifics of individual businesses. So, while AI can appear a purely technological play, McRae says its network of business partners is critical to its customers’ success.

“Working with a trusted business partner who’s going to have the day-to-day interaction with the customer and knowledge of their business is really important,” he adds.

AI will undoubtedly be revolutionary in the amount of data it requires, while VMware’s license changes have set off a revolution in their own way. But McRae says that data size apart, storage vendors need to ensure that upgrading enterprise storage to keep pace isn’t a dramatic step change.

“Your normal enterprise is going to go buy or license a model to use, and they’re going to go buy or license a vector database to pair it with it, and they’re going to get the tools to go do that,” he concludes. “So that’s what we have to make easy.”

Making modern IT easy means providing a storage infrastructure that offers end-to-end solutions encompassing storage, GPU, and computing capabilities that integrate to handle both AI and other applications using enterprise data. With over four decades’ experience in the technology sector, Lenovo is presenting itself as a go-to partner that will keep its customers at the cutting edge in fast-moving times.

Sponsored by Lenovo.

DataCore acquires StarWind for edge hyperconverged tech

DataCore is buying edge-focused hyperconverged infrastructure (HCI) supplier StarWind Software.

Beverly, Massachusetts-based StarWind sells an HCI Appliance or Virtual HCI Appliance, running the same software on a customer’s choice of hardware. The software supports options for hypervisors like VMware vSphere, Microsoft Hyper-V, or StarWind’s own KVM-based StarWind Virtual SAN, using SSDs and HDDs, with data access over iSCSI, NVMe-oF, or RDMA. The company has notched up more than 63,800 customers. It was founded in 2008 by CEO, CTO, and chief architect Anton Kolomyeytsev and COO Artem Berman. Kolomyeytsev was an ex-Windows kernel engineer.

StarWind raised $2 million in a 2009 A-round and $3.25 million in a 2014 B-round from Almaz Capital, ABRT Venture Fund (the A-round investor), and AVentures Capital. It has been self-funding since, and has flown under the radar to an extent as it focused more on software and support than marketing. StarWind was named a Customers’ Choice in the 2023 Gartner Peer Insights report for Hyperconverged Infrastructure.

Dave Zabrowski, DataCore

DataCore CEO Dave Zabrowski stated: “This acquisition represents a significant leap toward realizing our DataCore.NEXT vision. Merging our strengths with StarWind’s trusted edge and ROBO expertise allows us to deliver reliable HCI that works seamlessly from central data centers to the most remote locations. We are focused on giving organizations greater choice, control, and a more straightforward path for managing data wherever it resides.”

Kolomyeytsev said: “Joining the DataCore family allows us to bring our high-performance virtual SAN technology to a wider audience. With growing uncertainty around Broadcom-VMware’s vSAN licensing and pricing – particularly in distributed and cost-sensitive environments – organizations are rethinking their infrastructure strategies. Together with DataCore, we are delivering greater flexibility, performance, and freedom from hardware and hypervisor lock-in without compromising simplicity or control.”

Zabrowski and Kolomyeytsev’s pitch is that they can supply better software and a more affordable HCI system than either Broadcom’s vSphere/vSAN or so-called “legacy” HCI systems. They think the edge and remote office-branch office (ROBO) IT environment will be increasingly influenced by AI, and DataCore’s partner channel will relish having an AI-enabled edge HCI offering to add to their portfolio.

From left, StarWind founders CEO Anton Kolomyeytsev and COO Artem Berman

StarWind’s tech becomes a vehicle for DataCore to deliver its software-based services to edge and ROBO locations and complements its existing AI services-focused Perifery and the SANSymphony (block) and Swarm (object) core storage offerings, as well as its acquired Arcastream parallel file system tech.

Zabrowski is a serial acquirer at DataCore, having purchased Object Matrix, Caringo (for Swarm), and Kubernetes-focused MayaData.

The StarWind team will be mostly joining DataCore.

+Comment
The acquisition price was not revealed, but we think it’s a single-digit multiple of StarWind’s funding as the firm is highly regarded and successful, albeit by some standards a tad under-marketed.

VAST Data launches AI operating system

Disaggregated shared everything (DASE) parallel storage and AI software stack provider VAST Data has announced an AI operating system.

VAST envisages a new computing paradigm in which “trillions of intelligent agents will reason, communicate, and act across a global grid of millions of GPUs that are woven across edge deployments, AI factories, and cloud datacenters.” It will provide a unified computing and data cloud and feed new AI workloads with near-infinite amounts of data from a single fast and affordable tier of storage.

Renen Hallak.

VAST co-founder and CEO Renen Hallak stated: “This isn’t a product release – it’s a milestone in the evolution of computing. We’ve spent the past decade reimagining how data and intelligence converge. Today, we’re proud to unveil the AI Operating System for a world that is no longer built around applications – but around agents.” 

The AI OS is built on top of VAST’s existing AI Data Platform and provides services for distributed agentic computing and AI agents. It consists of:

  • Kernel to run platform services on private and public clouds
  • Runtime to deploy AI agents – AgentEngine
  • Eventing infrastructure for real-time event processing – Data Engine
  • Messaging infrastructure
  • Distributed file and database storage system that can be used for real-time data capture and analytics – DataStore, DataBase, and Data Space

VAST is introducing a new AgentEngine feature. Its InsightEngine prepares data for AI using AI. The AgentEngine is an auto-scaling AI agent deployment runtime that equips users with a low-code environment to build intelligent workflows, select reasoning models, define agent tools, and operationalize reasoning.

VAST AI Operating System

It has an AI agent tool server that lets agents invoke data, metadata, functions, web search, or other agents as MCP-compatible tools. Agents can assume multiple personas with different purposes and security credentials. The agent tool server provides secure, real-time access to various tools, and its scheduler and fault-tolerant queuing mechanisms ensure agent resilience against machine or service failure.

The AgentEngine has agentic workflow observability, using parallel, distributed tracing, so that developers have a unified and simple view into massively scaled and complex agentic pipelines. 

VAST says it will release a set of open source Agents at a rate of one per month. Some personal assistant agents will be tailored to industry use cases, while others will be designed for general-purpose use. Examples include:

  • A reasoning chatbot, powered by all of an organization’s VAST data
  • A data engineering agent to curate data automatically
  • A prompt engineer to help optimize AI workflow inputs
  • A meta-agent, to automate the deployment, evaluation, and improvement of agents
  • A compliance agent, to enforce data and activity level regulatory compliance
  • An editor agent, to create rich media content
  • A life sciences researcher, to assist with bioinformatic discovery 

VAST Data will run a series of “VAST Forward” global workshops, both in-person and online, throughout the year. These will include training on AI OS components and sessions on how to develop on the platform. 

Comment

VAST’s AI OS is not a standalone OS like Windows or Linux, which are low-level, processor-bound systems with a focus on hardware and basic services. The AI OS represents the culmination of its Thinking Machines vision and is a full AI stack entity. 

Nvidia has its AI Enterprise software suite that supports the development and deployment of production-grade AI applications, including generative AI and agentic AI systems. It includes microservices like NIM and supports tools for building and deploying AI agents and managing AI workflows. But it is not an overall operating system.

Both Dell and HPE have AI factory-type approaches that could be developed into AI operating systems.

Bootnote

VAST claims it has recorded the fastest path to $2 billion in cumulative bookings of any data company in history. It experienced nearly 5x year-over-year growth in the first quarter of 2025, and its DASE clusters support over 1 million GPUs around the world. VAST says it has a cash-flow-positive business model.

Hitachi Vantara adds VSP 360 ‘see everything’ storage control plane

VSP 360 is Hitachi Vantara’s new management control plane for its VSP One storage portfolio, delivered as a service and covering on-premises, public cloud and hybrid deployments.

VSP One (Virtual Storage Platform One) is a storage product portfolio including the on-premises VSP One SDS Block, the VSP One Block appliance, VSP One File, and VSP One Object – a low-cost all-flash array and object storage offering – plus VSP One SDS Cloud (cloud-native SVOS) for the AWS cloud. These products were managed through Hitachi V’s Ops Center, which was described in 2023 as the company’s primary brand for infrastructure data management on the VSP One platform. Ops Center has now evolved into VSP 360, which provides control for VSP One hybrid cloud deployments, AIOps predictive insights, and simplified, compliance-ready data lifecycle governance.

Octavian Tanase

Hitachi Vantara’s Chief Product Officer, Octavian Tanase, enthused: “VSP 360 represents a bold step forward in unifying the way enterprises manage their data. It’s not just a new management tool—it’s a strategic approach to modern data infrastructure that gives IT teams complete command over their data, wherever it resides.”

The company positions VSP One as a unified, multi-protocol, multi-tier data plane with VSP 360 being its unified control plane. It provides a single interface to manage VSP One resources, configurations and policies across its several environments, simplifying administration. Routine tasks like provisioning, monitoring and upgrades can be streamlined and it provides information about actual data usage and storage performance.

The AIOps facilities enable automated telemetry data correlation, identification of the root cause of performance problems, and sustainability analytics. 

An Ops Center Clear Sight facility provided cloud-based monitoring and management for VSP One products. This has become VSP 360 Clear Sight. Hitachi V says customers can use VSP 360 Clear Sight’s advanced analytics to optimize storage performance and capacity utilization, and troubleshoot problems on-premises.

VSP 360 has built-in AI and automation and is available via SaaS, private deployment, or mobile phone. Hitachi V says it supports integrated fleet management, intelligent protection and infrastructure as code (IaC) interoperability across multiple storage types, plus AI, PII (personally identifiable information) discovery, cybersecurity, and IaaS use cases.

VSP 360 integrates data management tools across the VSP One enterprise storage products, using AIOps observability, to monitor performance indicators such as storage capacity utilization and overall system health. It’s claimed to streamline data services delivery.

Dell’s CloudIQ is a cloud-based AIOps platform that aligns quite closely with VSP 360’s capabilities, offering AI-driven insights, monitoring, and predictive analytics for Dell storage systems like PowerStore and PowerMax.

HPE’s InfoSight is a similar product, with AI-powered management for HPE storage arrays like Primera and Alletra. It focuses on predictive analytics, performance optimization, and automated issue resolution, with a centralized dashboard for system health, capacity, and performance insights. 

NetApp has its BlueXP storage and data services control plane facility.

Pure Storage’s Pure1 platform provides AI-driven management for Pure’s FlashArray and FlashBlade systems, with performance monitoring, capacity forecasting, and predictive analytics through a cloud-based interface.

You can dig deeper into VSP 360 here and in this blog.

Kioxia revenues expected to go down

Kioxia announced fourth quarter and full fiscal 2024 results with revenues up slightly but set to decline, as rising datacenter and AI server sales fail to offset a smartphone and PC/notebook market slump and an expected unfavorable exchange rate.

Q4 revenues were ¥347.1 billion ($2.25 billion), up 2.9 percent annually, but 33 percent down sequentially, with IFRS net income of ¥20.3 billion ($131.8 million), better than the year-ago ¥64.9 billion loss.

Full fy2024 [PDF] revenues were ¥1.706 trillion ($11.28 billion), up 58.5 percent annually, with an IFRS profit of ¥272.3 billion ($1.77 billion), again much better than the year-ago ¥243.7 billion loss.

Hideki Hanazawa

CFO Hideki Hanazawa said: “Our revenue exceeded the upper end of our guidance. … Demand from enterprises remained steady. However ASPs were down around 20 percent quarter-on-quarter, influenced by inventory adjustments at PC and smartphone customers.” The full year revenue growth “was due to an increase in ASPs and bit shipments resulting from recovery in demand from the downturn in fiscal year 2023, the effect of cost-cutting measures taken in 2023 and the depreciation of the yen.”

Financial summary

  • Free cash flow: ¥46.6 billion ($302.6 million)
  • Cash & cash equivalents: ¥167.9 billion vs year-ago ¥187.6 billion
  • EPS: ¥24.6 ($0.159)

Kioxia’s ASP declined around 20 percent Q/Q, with a circa 10 percent decrease in bits shipped. It said it has had positive free cash flow for five consecutive quarters.

Its NAND fab joint venture partner Sandisk reported $1.7 billion in revenues for its latest quarter, 0.6 percent down Y/Y, meaning that Kioxia performed proportionately better.

Kioxia ships product to three market segments and their revenues were: 

  • Smart Devices (Phones): ¥79.6 billion ($516.8 million), up 29.2 percent Y/Y
  • SSD & Storage: ¥215.2 billion ($1.4 billion), up 32.5 percent
  • Other (Retail + sales to Sandisk): ¥52.3 billion ($339.6 million), up 10.7 percent

Smart devices Q/Q sales decreased due to lower demand and selling prices as customers used up inventory. The SSD and storage segment is divided into Data Center/Enterprise, which is 60 percent of its revenues, and PC and Others which is 40 percent. Demand remained strong in Data Center/Enterprise, driven by AI adoption, but Q/Q sales declined mainly due to lower selling prices. There was ongoing weak demand in the PC sub-segment and lower selling prices led to reduced Q/Q sales.

Overall, there was strong demand in the quarter for Data Center/Enterprise products, with continued softness in the PC and smartphone markets. Data Center/Enterprise SSD sales grew around 300 percent year-on-year.

Kioxia had an IPO in the year and has improved its net debt to equity ratio from 277 percent at the end of fy2023 to 126 percent at fy2024 year end, strengthening its balance sheet. The company has started investing in BiCS 10 (332-layer 3D NAND) technology for future growth. This has a 9 percent improvement in bit density compared to BiCS 8 (218 layers).

The calculations for next quarter’s outlook assume a minimal impact from US tariff changes, some improvement in smartphone and PC demand, accelerating AI server deployments, and strong datacenter server demand fueled by AI. However, Kioxia anticipates a decrease in both revenue and profit on a quarterly basis. It believes the main reason for this sales decline will be the exchange rate, with the current assumed rate for June being ¥140 to the dollar, compared with ¥154 to the dollar in the fourth fiscal 2024 quarter. A change of ¥1 to the dollar is expected to have an impact on Kioxia’s operating income of approximately ¥6 billion.
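
A quick worked example, assuming the sensitivity is roughly linear, shows the scale of that currency headwind:

```python
# Exchange-rate sensitivity: roughly ¥6 billion of operating income per ¥1 move.
rate_q4_fy2024 = 154        # yen per dollar in the fourth fiscal 2024 quarter
rate_assumed_june = 140     # Kioxia's assumed rate for June
impact_per_yen_bn = 6       # ¥ billion of operating income per ¥1 move

move = rate_q4_fy2024 - rate_assumed_june       # 14 yen of appreciation
estimated_impact_bn = move * impact_per_yen_bn  # roughly ¥84 billion
print(f"A ¥{move} move implies an operating income impact of about ¥{estimated_impact_bn} billion")
```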

With this in mind, the outlook for the next quarter (Q1 fy2025) is revenues of ¥310 billion, plus or minus ¥15 billion; around $2 billion at the mid-point and a 10.1 percent Y/Y decline. Kioxia says the potential for growth in the NAND market remains strong due to AI, datacenter and storage demand. If only the smartphone and PC/notebook markets would pick up!

Hypervisor swap or infrastructure upgrade?


Don’t just replace VMware; tune up your infrastructure

Finding a suitable VMware alternative has become a priority for many IT organizations, especially following Broadcom’s acquisition of VMware. Rather than swapping out hypervisors, IT should seize this opportunity to reevaluate and modernize the entire infrastructure, adopting an integrated software-defined approach that optimizes resource efficiency, scalability, and readiness for emerging technologies, such as enterprise AI.

Beyond simple hypervisor replacement

Replacing a hypervisor without addressing underlying infrastructure limitations provides only temporary relief. Issues such as siloed resource management, inefficient storage operations, and limited scalability remain, continuing to increase operational complexity and costs.

Adopting a comprehensive software-defined infrastructure ensures seamless integration, flexibility, and efficient scaling—critical factors for handling modern workloads, including legacy applications, VDI, and AI workloads.

Preparing infrastructure for enterprise AI

The rapid adoption of AI increases demands on existing IT infrastructure. AI workloads require extensive computational resources, high-speed data storage access, advanced GPU support, and low-latency networking. Traditional infrastructures, with their inflexible storage, compute, and network architectures, fail to meet these dynamic and intensive requirements.

Due to these limitations, IT teams are forced to create separate infrastructures for AI workloads. These new infrastructures require dedicated servers, specialized storage systems, and separate networking, thereby creating additional silos to manage. This fragmented approach increases operational complexity and duplication of resources.

A modern, software-defined infrastructure integrates flexible storage management, GPU pooling, and dynamic resource allocation capabilities. These advanced features enable IT teams to consolidate AI and traditional workloads, eliminating unnecessary silos and allowing resources to scale smoothly as AI workloads evolve.

What to look for in software-defined infrastructure

When selecting a VMware alternative and transitioning to a software-defined model, consider the following essential capabilities:

Integrated storage management

Choose solutions that manage storage directly within the infrastructure software stack, removing the need for external SAN or NAS devices. This integration streamlines data management, optimizes data placement, minimizes latency, and simplifies operational complexity.

No compromise storage performance

The solution should deliver storage performance equal to or exceeding that of traditional three-tier architectures using dedicated all-flash arrays. Modern software-defined infrastructure must leverage the speed and efficiency of NVMe-based flash storage to optimize data paths and minimize latency. This ensures consistently high performance, meeting or surpassing the demands of the most intensive workloads, including databases, VDI, and enterprise AI applications, without compromising simplicity or scalability.

Advanced GPU support for AI

Look beyond basic GPU support traditionally used for VDI environments. Modern infrastructure solutions should offer advanced GPU features, including GPU clustering, GPU sharing, and efficient GPU virtualization explicitly designed for AI workloads.

Efficient resource usage

Prioritize infrastructure that supports precise and dynamic resource allocation. Solutions should offer granular control over CPU, memory, storage, and networking, reducing wasted resources and simplifying management tasks.

Efficiency is critical due to the high cost of GPUs. Don’t waste these valuable resources on unnecessary virtualization overhead. To maximize your investment, modern solutions must deliver performance as close to bare-metal GPU speeds as possible, ensuring AI workloads achieve optimal throughput and responsiveness without resource inefficiencies.

High-performance networking

Evaluate infrastructures that feature specialized networking protocols optimized for internal node communications. Look for active-active network configurations, low latency, and high bandwidth capabilities to ensure consistent performance during intensive operations and in the event of node failures.

Global inline deduplication and data efficiency

Ensure the infrastructure offers global inline deduplication to reduce storage consumption, which is particularly beneficial in environments with substantial VM or AI workload duplication. Confirm that deduplication does not negatively impact system performance.
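
For readers unfamiliar with the mechanism, here is a minimal sketch of inline deduplication using fixed-size chunks and a global fingerprint index. Production platforms use variable-size chunking, persistent metadata, and careful collision handling, so treat this purely as an illustration of the principle.

```python
# Minimal inline deduplication sketch: hash each chunk before it is written and
# store only chunks that have not been seen anywhere in the (global) index.
import hashlib

CHUNK_SIZE = 4096
chunk_store: dict[str, bytes] = {}  # fingerprint -> unique chunk contents


def write(data: bytes) -> list[str]:
    """Split data into chunks, store each unique chunk once, return the recipe."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in chunk_store:   # inline: checked before anything hits disk
            chunk_store[fingerprint] = chunk
        recipe.append(fingerprint)
    return recipe


def read(recipe: list[str]) -> bytes:
    return b"".join(chunk_store[f] for f in recipe)


# Two near-identical VM images share almost all of their chunks.
vm1 = b"A" * 40_000 + b"unique-tail-1"
vm2 = b"A" * 40_000 + b"unique-tail-2"
r1, r2 = write(vm1), write(vm2)
assert read(r1) == vm1 and read(r2) == vm2
stored = sum(len(c) for c in chunk_store.values())
print(f"logical bytes: {len(vm1) + len(vm2):,}, stored bytes: {stored:,}")
```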

Ask yourself when the vendor introduced deduplication into its infrastructure software. If it added this feature several years after the platform’s initial release, it is likely a bolt-on introducing additional processing overhead and negatively impacting performance. Conversely, a platform that launched with deduplication will be tightly integrated, optimized for efficiency, and unlikely to degrade system performance.

Simplified management and automation

Choose solutions that provide comprehensive management interfaces with extensive automation capabilities across provisioning, configuration, and scaling. Automated operations reduce manual intervention, accelerate deployments, and minimize human error.

Enhanced resiliency and high availability

Opt for infrastructures with robust redundancy and availability features such as distributed data mirroring, independent backup copies, and intelligent automated fail-over. These capabilities are critical for maintaining continuous operations, even during hardware failures or scheduled maintenance.

IT should evaluate solutions capable of providing inline protection from multiple simultaneous hardware failures (such as drives or entire servers) without resorting to expensive triple mirroring or even higher replication schemes. Solutions that achieve advanced redundancy without significant storage overhead help maintain data integrity, reduce infrastructure costs, and simplify operational management.

Evaluating your VMware alternative

Transitioning from VMware provides an ideal opportunity to rethink your infrastructure strategy holistically. A well-designed, software-defined infrastructure enables superior resource management, simplified operational processes, and improved scalability, preparing your organization for current needs and future innovations.

By carefully evaluating VMware alternatives through the lens of these critical infrastructure capabilities, IT organizations can significantly enhance their infrastructure agility, efficiency, and ability to support emerging technology demands, particularly enterprise AI.

Don’t view your VMware exit merely as a hypervisor replacement. Use it as a strategic catalyst to modernize your infrastructure. Storage is another key concern when selecting a VMware alternative. Check out “Comparing VMware Alternative Storage” to dive deeper. 

Sponsored by VergeIO

Dell updates PowerScale, ObjectScale to accelerate AI Factory rollout

Dell is refreshing its PowerScale and ObjectScale storage systems as part of a slew of AI Factory announcements on the first day of the Dell Technologies World conference.

The company positions its storage systems, data lakehouse, servers, and so on as integrated parts of an AI Factory set of offerings closely aligned with Nvidia’s accelerators and providing a GenAI workflow capability as well as supporting traditional apps. PowerScale – formerly known as Isilon – is its scale-out, clustered filer node offering. ObjectScale is distributed, microservices-based, multi-node, scale-out, and multi-tenant object storage software with a single global namespace that supports the S3 API.  

Jeff Clarke, Dell

Jeff Clarke, Dell Technologies COO, stated: “It has been a non-stop year of innovating for enterprises, and we’re not slowing down. We have introduced more than 200 updates to the Dell AI Factory since last year. Our latest AI advancements – from groundbreaking AI PCs to cutting-edge data center solutions – are designed to help organizations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results.” 

As background, Nvidia has announced an extension of its storage server/controller host CPU+DRAM bypass NVMe/RDMA-based GPUDirect file protocol to S3, so that object data can be fed fast to its GPUs using similar, RDMA-based technology. Parallel access file systems like IBM’s Storage Scale, Lustre, VAST Data, VDURA, and WEKA have a speed advantage over serial filers, even scale-out ones, like PowerScale and Qumulo. Dell has responded to this with its Project Lightning initiative.

Dell slide

With these points in mind, the ObjectScale product is getting a denser version along with Nvidia BlueField-3 DPU (Data Processing Unit) and Spectrum-4 networking support. BlueField-3 is powered by Arm processors and can run containerized software such as ObjectScale. Spectrum-4 is an Ethernet platform providing 400Gbit/s end-to-end connectivity; its components include the Spectrum-4 switch, ConnectX-7 SmartNIC, BlueField-3 DPU, and DOCA infrastructure software.

The denser ObjectScale system will support multi-petabyte scale and is built from PowerEdge R7725xd server nodes with 2 x AMD EPYC Gen 5 CPUs, launching in June 2025. It will offer the highest-storage-density NVMe configurations in the Dell PowerEdge portfolio. The system will feature Nvidia BlueField-3 DPUs and Spectrum-4 Ethernet switches, and provide planned network connectivity of up to 800 Gb per second.

The vendor says ObjectScale will support S3 over RDMA, making unstructured data stored as objects available much faster for AI training and inferencing, with claims of up to 230 percent higher throughput, up to 80 percent lower latency, and 98 percent less CPU load compared to traditional S3 data transfers. A fully managed S3 Tables feature, supporting open table formats that integrate with AI platforms, will be available later this year.

PowerScale gets S3 Object Lock WORM in an upcoming release, along with S3 bucket logging and protocol access logging. PowerScale file-to-object SmartSync automates data replication directly to AWS, Wasabi or Dell ObjectScale for lower-cost backup storage, and can burst to the cloud using EC2 for compute-heavy applications.

A PowerScale Cybersecurity Suite is an AI-driven software product designed to provide ransomware detection, minimal downtime when a threat occurs, and near-instant recovery. There are three bundles:

  • Cybersecurity software for real-time ransomware detection and mitigation, including a full audit trail when an attack occurs. 
  • Airgap vault for immutable backups. 
  • Disaster recovery software for seamless failover and recovery to guarantee business continuity. 

Project Lightning is claimed to be “the world’s fastest parallel file system per new testing, delivering up to two times greater throughput than competing parallel file systems.” This is according to internal and preliminary Dell testing comparing random and sequential throughput per rack unit. Dell has not provided specific throughput figures, which makes independent comparison difficult. A tweet suggested 97Gbps throughput. The company says Project Lightning will accelerate training time for large-scale and complex AI workflows.

Dell says Lightning is purpose-built for the largest AI deployments with tens of thousands of GPUs. Partners such as WWT and customers such as Cambridge University are active participants in  a multi-phase customer validation program, which includes performance benchmarking, feature testing, and education to drive product requirements and feedback into the product. 

Dell is introducing a high-performance offering built with PowerScale, Project Lightning and PowerEdge XE servers. It will use KV cache and integrate Nvidia’s Inference Xfer Library (NIXL), part of Nvidia’s Dynamo offering, making it ideal for large-scale, complex, distributed inference workloads, according to Dell. Dynamo serves generative AI models in large-scale distributed environments and includes optimizations specific to large language models (LLMs), such as disaggregated serving and key-value cache (KV cache) aware routing.

A Dell slide shows Project Lightning sitting as a software layer above both ObjectScale and PowerScale in an AI Data Platform concept:

Dell slide

We asked about this, and Dell’s Geeta Vaghela, a senior product management director, said: “We really start to see parallel file systems not being generic parallel file systems, but really optimised for this AI use case and workflow.” She envisages it integrating with KV cache. Dell is now looking to run private previews of the Project Lightning software.

Dell says its AI Data Platform updates improve access to high quality structured, semi-structured, and unstructured data across the AI life cycle. There are Dell Data Lakehouse enhancements to simplify AI workflows and accelerate use cases, such as recommendation engines, semantic search, and customer intent detection by creating and querying AI-ready datasets. Specifically the Dell Data Lakehouse gets:

  • Native Vector Search Integration in the Dell Data Analytics Engine, powered by Starburst, bringing semantic understanding directly into SQL workflows and bridging the gap between structured query processing and unstructured data exploration.
  • Hybrid Search builds on the vector search capability by combining semantic similarity with traditional keyword matching, within a single SQL query.
  • Built-In LLM Functions integrate tools like text summarization and sentiment analysis into SQL-based workflows. 
  • Automated Iceberg Table Management looks after maintenance tasks such as compaction and snapshot expiration. 

There are PowerEdge server and network switch updates as part of this overall Dell AI Factory announcement. Dell is announcing Managed Services for the Dell AI Factory with Nvidia to simplify AI operations with management of the full Nvidia AI solutions stack, including AI platforms, infrastructure, and Nvidia AI Enterprise software. Dell managed services experts will handle 24×7 monitoring, reporting, version upgrades, and patching.

The Nvidia AI Enterprise software platform is now available directly from Dell, and customers can use Dell’s AI Factory with Nvidia NIM, NeMo microservices, Blueprints, NeMo Retriever for RAG, and Llama Nemotron reasoning models. They can, Dell says, “seamlessly develop agentic workflows while accelerating time-to-value for AI outcomes.” 

Dell AI Factory with Nvidia offerings support the Nvidia Enterprise AI Factory validated design, featuring Dell and Nvidia compute, networking, storage, and Nvidia AI Enterprise software. This provides an end-to-end, fully integrated AI product for enterprises. Red Hat OpenShift is available on the Dell AI Factory with Nvidia.

Availability

  • Dell Project Lightning is available in private preview for select customers and partners now. 
  • Dell Data Lakehouse updates will be available beginning in July 2025. 
  • Dell ObjectScale with NVIDIA BlueField-3 DPU and Spectrum-4 Ethernet Switches is targeting availability in 2H 2025. 
  • The Dell high-performance system built with Dell PowerScale, Dell Project Lightning and PowerEdge XE servers is targeting availability later this year. 
  • Dell ObjectScale support for S3 over RDMA will be available in 2H 2025. 
  • The NVIDIA AI Enterprise software platform, available directly from Dell, will be available May 2025. 
  • Managed Services for Dell AI Factory with NVIDIA are available now. 

Dell intros all-flash PowerProtect target backup appliance

Dell is launching an all-flash deduping PowerProtect backup target appliance, providing competition for all-flash systems from Pure Storage (FlashBlade), Quantum, and Infinidat.

The company is also announcing PowerStore enhancements at its Las Vegas-based Dell Technologies World event, plus Dell Private Cloud and NativeEdge capabilities.

Arthur Lewis, Dell’s president, Infrastructure Solutions Group, stated: “Our disaggregated infrastructure approach helps customers build secure, efficient modern datacenters that turn data into intelligence and complexity into clarity.” 

There are currently four PowerProtect systems plus a software-only virtual edition, offering a range of capacities and speeds. 

Dell table

The All-Flash Ready node is basically a Dell PowerEdge R760 server with 24 x 2.5-inch SAS-4 SSD drives and 8 TB HDDs for cache and metadata, a hybrid flash and disk server. The capacity is up to 220 TB/node and a node supports up to 8 x DS600 disk array enclosures, providing 480 TB raw each, for expanded storage.

The PowerProtect Data Domain All-Flash DD9910F appliance restores data up to four times faster than an equivalent-capacity PowerProtect disk-based system, based on testing the DD9910F and an HDD-using DD9910. We aren’t given actual restore speed numbers for any PowerProtect appliance, and Dell isn’t providing the all-flash DD9910F ingest speed either.

All-flash PowerProtect systems have up to twice as fast replication performance and up to 2.8x faster analytics. This is based on internal testing of CyberSense analytics performance to validate data integrity in a PowerProtect Cyber Recovery vault comparing a disk-based DD9910 appliance to the all-flash DD9910F at similar capacity. 

They occupy up to 40 percent less rack space and save up to 80 percent on power compared to disk drive-based systems. With Dell having more than 15,000 PowerProtect/Data Domain customers, it has a terrific upsell opportunity here.

PowerStore, the unified file and block storage array, is being given Advanced Ransomware Detection. This validates data integrity and minimizes downtime from ransomware attacks using Index Engines’ CyberSense AI analytics. This indexes data using more than 200 content-based analytic routines. The AI system used has been trained on over 7,000 variants of ransomware and their activity signals. According to Dell, it detects ransomware corruption with a 99.99 percent confidence level.

Dell says it’s PowerStore’s fifth anniversary and there are more than 17,000 PowerStore customers worldwide. 

The company now has a Private Cloud to provide a cloud-like environment on premises, built using its disaggregated infrastructure – servers, storage, networking, and software. There is a catalog of validated blueprints. This gives Dell an answer to HPE’s Morpheus private cloud offering.

A Dell Automation Platform provides centralized management and zero-touch onboarding for this private cloud. By using it, customers can provision a private cloud stack cluster in 150 minutes, with up to 90 percent fewer steps than a manual process.

Dell has new NativeEdge products for virtualized workloads at the edge and in remote branch offices. These protect and secure data with policy-based load balancing, VM snapshots and backup and migration capabilities. Customers can manage diverse edge environments consistently with support for non-Dell and legacy infrastructure.

Comment

We expect that Dell announcing its first all-flash PowerProtect appliance will be the trigger ExaGrid needs to bring out its own all-flash product.

Kioxia and WEKA help calculate Pi to record 300 trillion decimal places

The Linus Tech Tips team has calculated Pi, the ratio of a circle’s circumference to its diameter, to a record 300 trillion decimal places with the help of AMD, Micron, Gigabyte, Kioxia, and WEKA.

Linus Sebastian and Jake Tivy of the Canadian YouTube channel produced a video about their Guinness Book of Records-qualified result. 

From left, Linus Sebastian and Jake Tivy

The setup involved a cluster of nine Gigabyte storage servers and a single Gigabyte compute node, all running the Ubuntu 22.04.5 LTS operating system. The memory-intensive calculation was done using the Y-cruncher application, which uses external storage as RAM swap space. Eight storage servers were needed to provide the required storage capacity.

The storage servers were 1 RU Gigabyte R183-Z95-AAD1 systems with 2 x AMD EPYC 9374F CPUs and around 1 TB DDR5 ECC memory. They were fitted with dual ConnectX-6 200 GbE network cards, giving each one 400 GbE bandwidth. Kioxia NVMe CM6 and CD6 series SSDs were used for storage. Overall, there was a total of 2.2 PB of storage spread across 32 x 30.72 TB CM6 SSDs and 80 x 15.36 TB CD6 SSDs for 245 TB per server.

The compute node was a Gigabyte R283-Z96-AAE1 with dual 96-core EPYC 9684X 3D V-Cache CPUs, giving 192 threads per CPU. There was 3 TB of DRAM, made up of 24 x 128 GB sticks of Micron ECC DDR5 5600MT/s CL46 memory. It was equipped with 4 x Nvidia ConnectX-7 200 GbE network cards, each with 2 x 200 GbE ports and x16 PCIe Gen 5 bandwidth per slot, capable of approximately 64 GBps bidirectional throughput. That gave a total of 1.6 Tbps of network throughput, around 100 GBps to each 96-core CPU.
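
The network arithmetic is simple to reproduce from the figures above:

```python
# Compute node network bandwidth, as described above.
cards = 4
ports_per_card = 2
gbit_per_port = 200

total_gbit = cards * ports_per_card * gbit_per_port  # 1,600 Gbps = 1.6 Tbps
total_gbyte = total_gbit / 8                         # ~200 GB/s aggregate
per_cpu_gbyte = total_gbyte / 2                      # ~100 GB/s to each CPU
print(total_gbit, total_gbyte, per_cpu_gbyte)        # 1600 200.0 100.0
```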

WEKA parallel access file system software was used, providing one file system for all nine servers to Y-cruncher. The WEKA software was tricked into thinking each server was two servers so that the most space-efficient data striping algorithm could be used. Each chunk of data was divided into 16 pieces, with two parity segments – 16+2 equaling 18 – matching the 18 virtual nodes achieved by running two WEKA instances per server.
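
The arithmetic behind that stripe layout is straightforward:

```python
# 16 data chunks + 2 parity chunks across 18 logical WEKA nodes
# (nine physical servers, two WEKA instances each).
servers = 9
instances_per_server = 2
data_chunks = 16
parity_chunks = 2

logical_nodes = servers * instances_per_server  # 18
stripe_width = data_chunks + parity_chunks      # 18, one chunk per logical node
assert logical_nodes == stripe_width

overhead = parity_chunks / data_chunks          # capacity spent on parity
usable_fraction = data_chunks / stripe_width    # share of raw capacity usable
print(f"parity overhead: {overhead:.1%}, usable capacity: {usable_fraction:.1%}")
```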

The team needed to limit the amount of data flowing between the two CPUs, to avoid latency building up when one CPU sent data through the other CPU’s network cards, and memory bandwidth was also in limited supply.

Avoiding cross-CPU memory transfers

Tivy configured two 18 GB WEKA client containers – instances of the WEKA application running on the compute node – to access the storage cluster. Each container had 12 cores assigned to it, chosen to line up with the 3D V-Cache layout of the Zen 4 CPU. The CPU has 12 chiplets, each with 32 MB of on-die L3 cache and a further 64 MB of V-Cache stacked on top, for 96 MB per chiplet. Tivy did not want Y-cruncher’s buffers to spill out of that cache, because that would mean more memory copies and more memory bandwidth would be needed.

WEKA’s write throughput was around 70.14 GBps, notable because it was achieved over the network through a file system. Read testing showed up to 122.63 GBps with 2 ms latency. The system was configured with four NUMA nodes per CPU, which Y-cruncher leveraged for better memory locality; when reconfigured to a single NUMA node per socket, bandwidth increased to 155.17 GBps.
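For context, the number of NUMA nodes per socket is a BIOS-level setting (NPS) on EPYC systems, and the OS simply sees however many nodes result. A minimal sketch, assuming a Linux host, shows how to inspect that layout via sysfs:

```python
# Minimal Linux-only sketch: list the NUMA nodes the host exposes and the CPUs
# in each. The node count is what changes when the NPS (NUMA-per-socket)
# BIOS setting is altered, e.g. from NPS4 to NPS1.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")
```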

The individual Kioxia CD6 drives delivered read speeds in excess of 5 GBps. According to WEKA, Tivy’s setup set the record for the fastest single-client throughput, although Tivy said WEKA has since broken that record using GPUDirect Storage and RDMA.

Tivy says Y-cruncher uses the storage as swap space, effectively treating it like RAM, which is why so much capacity is needed. The actual Y-cruncher output of 300 trillion digits is only about 120 TB compressed, but the system used about 1.5 PB of capacity at peak. Y-cruncher is designed for direct-attached storage without hardware or software RAID, as it implements its own internal redundancy mechanisms.

The Linus Tech Tips Pi project started its run on August 1, 2024, and crashed 12 days later because of a multi-day power outage. The run was restarted and, despite further stop-and-restart interruptions from shorter power cuts and air-conditioning failures, completed 191 days later. At the end of the run, they discovered that the 300 trillionth digit of Pi is 5.

Pi calculation history

Here is a brief history of Pi calculation records:

  • Around 250 BCE: Archimedes approximated π as 22/7 (~3.142857).
  • 5th century CE: Chinese mathematician Zu Chongzhi calculated π to 7 digits (3.1415926), a record that lasted almost 1,000 years.
  • 1596: Ludolph van Ceulen calculated π to 20 digits using Archimedes’ method with polygons of up to 2^62 sides.
  • 1706: John Machin developed a more efficient arctangent-based formula, calculating π to 100 digits.
  • 1844: Johann Martin Zacharias Dase computed π to 200 digits, using an arctangent formula supplied by L. K. Schulz von Strassnitzky.
  • By 1873: William Shanks calculated π to 707 digits. Only the first 527 digits were correct due to a computational error.
  • By 1949: The ENIAC computer calculated π to 2,037 digits, the beginning of computer-based π calculations.
  • By 1989: The Chudnovsky brothers calculated π to over 1 billion digits on a supercomputer.
  • 1999: Yasumasa Kanada computed π to 206 billion digits on a Hitachi supercomputer, using the Gauss–Legendre algorithm.
  • 2016: Peter Trueb computed π to 22.4 trillion digits, a calculation that took 105.524 days.
  • 2019: Emma Haruka Iwao used Google Cloud to compute π to 31.4 trillion digits (31,415,926,535,897 digits).
  • 2021: Researchers at the University of Applied Sciences of the Grisons, Switzerland, calculated π to 62.8 trillion digits, using a supercomputer, taking 108 days and nine hours.
  • 2022: Google Cloud, again with Iwao, pushed the record to 100 trillion digits, using optimized cloud computing.

 Wikipedia has a more detailed chronology.

Comment

This record could surely be exceeded by altering the Linus Tech Tips configuration to one using GPU servers: an x86 CPU host, GPUs with HBM, and GPUDirect links to faster, direct-attached SSDs such as Kioxia’s CM8 series. Is a 1 quadrillion decimal place Pi calculation within reach? Could Dell, HPE, or Supermicro step up with a GPU server? Glory awaits.

Nvidia deepens AI datacenter push as DDN, HPE, NetApp integrate storage stacks

Nvidia is turning itself into an AI datacenter infrastructure company and storage suppliers are queuing up to integrate their products into its AI software stack. DDN, HPE, and NetApp followed VAST Data in announcing their own integrations at Taiwan’s Computex 2025 conference.

DDN and Nvidia launched a reference design for Nvidia’s AI Data Platform to help businesses feed unstructured data like documents, videos, and chat logs to AI models. Santosh Erram, DDN’s global head of partnerships, declared: “If your data infrastructure isn’t purpose-built for AI, then your AI strategy is already at risk. DDN is where data meets intelligence, and where the future of enterprise AI is being built.”

The reference design combines DDN Infinia with Nvidia’s NIM and NeMo Retriever microservices, RTX PRO 6000 Blackwell Server Edition GPUs, and Nvidia networking. Nvidia’s Pat Lee, VP of Enterprise Strategic Partnerships, said: “Together, DDN and Nvidia are building storage systems with accelerated computing, networking, and software to drive AI applications that can automate operations and amplify people’s productivity.” Learn more here.

HPE

Jensen Huang, Nvidia

Nvidia founder and CEO Jensen Huang stated: “Enterprises can build the most advanced Nvidia AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI.”

HPE president and CEO Antonio Neri spoke of a “strong collaboration” with Nvidia as HPE announced server, storage, and cloud optimizations for Nvidia AI Enterprise, the cloud-native offering of NIM, NeMo, and cuOpt microservices and Llama Nemotron models. The Alletra Storage MP X10000 unstructured data array will gain an SDK for Nvidia’s AI Data Platform reference design, offering customers accelerated performance and intelligent pipeline orchestration for agentic AI.

The SDK will support flexible inline data processing, vector indexing, metadata enrichment, and data management. It will also provide remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000. 

Antonio Neri, HPE

HPE ProLiant Compute DL380a Gen12 servers featuring RTX PRO 6000 Blackwell GPUs will be available to order on June 4. HPE OpsRamp cloud-based IT operations management (ITOM) software is expanding its AI infrastructure optimization features to support the upcoming RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. The OpsRamp integration with Nvidia’s infrastructure including its GPUs, BlueField DPUs, Spectrum-X Ethernet networking, and Base Command Manager will provide granular metrics to monitor the performance and resilience of the HPE-Nvidia AI infrastructure.

HPE’s Private Cloud AI will support the Nvidia Enterprise AI Factory validated design. HPE says it will also support feature branch model updates from Nvidia AI Enterprise, which include AI frameworks, Nvidia NIM microservices for pre-trained models, and SDKs. Support for feature branch models will allow developers to test and validate software features and optimizations for AI workloads.

NetApp

NetApp’s AIPod product – Nvidia GPUs twinned with NetApp ONTAP all-flash storage – now supports the Nvidia AI Data Platform reference design, enabling RAG and agentic AI workloads. The AIPod can run Nvidia NeMo microservices, connecting them to its storage. San Jose-based NetApp has been named as a key Nvidia-Certified Storage partner in the new Enterprise AI Factory validated design.

Nvidia’s Rob Davis, VP of Storage Technology, said: “Agentic AI enables businesses to solve complex problems with superhuman efficiency and accuracy, but only as long as agents and reasoning models have fast access to high-quality data. The Nvidia AI Data Platform reference design and NetApp’s high-powered storage and mature data management capabilities bring AI directly to business data and drive unprecedented productivity.”

NetApp told us “the AIPod Mini is a separate solution that doesn’t incorporate Nvidia technology.”

Availability

  • HPE Private Cloud AI will add feature branch support for Nvidia AI Enterprise by summer 2025.
  • HPE Alletra Storage MP X10000 SDK and direct memory access to Nvidia accelerated computing infrastructure will be available starting Summer 2025.
  • HPE ProLiant Compute DL380a Gen12 with RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.
  • HPE OpsRamp Software will be available in time to support RTX PRO 6000 Server Edition.

Bootnote

Dell is making its own AI Factory and Nvidia-related announcements at its Dell Technologies World conference in Las Vegas. AI agent builder DataRobot announced its inclusion in the Nvidia Enterprise AI Factory validated design for Blackwell infrastructure. DataRobot and Nvidia are collaborating on multiple agentic AI use cases that leverage this validated design. More information here.

Transform your storage ownership experience with guaranteed IT outcomes

HPE expands its storage guarantee program with new SLAs for cyber resilience, zero data loss, and energy efficiency

Today’s enterprise storage customers demand more than just technology; they seek assured IT outcomes. HPE Alletra Storage MP B10000 meets this requirement head-on with an AIOps-powered operational experience and disaggregated architecture that lowers TCO, improves ROI, mitigates risk, and boosts efficiency. We stand behind these promises with industry-leading SLAs and commitments that include free non-disruptive controller upgrades, 100 percent data availability, and 4:1 data reduction.

We’re taking our storage guarantee program to the next level with the introduction of three new B10000 guarantees for cyber resilience, energy efficiency, and zero data loss. These SLA-driven commitments give B10000 customers the confidence and peace of mind to recover quickly from ransomware attacks, lower energy costs with sustainable storage, and experience zero data loss or downtime.

Here’s a breakdown of the new guarantees.

Achieve ransomware peace of mind with cyber resilient storage—guaranteed

Ransomware remains a significant threat to organizations globally, affecting every industry. And attacks are becoming increasingly sophisticated and costly, evolving at an unprecedented pace.

The B10000 offers a comprehensive suite of storage-level ransomware resilience capabilities to help you protect, detect, and rapidly recover from cyberattacks. It features real-time ransomware detection using anomaly detection methodologies to identify malicious encryption indicative of ransomware attacks. In the event of a breach, the B10000 facilitates the restoration of uncorrupted data from tamper-proof immutable snapshots, significantly reducing recovery time, costs, and potential data loss. It also helps identify candidates for data recovery by pinpointing the last good snapshots taken before the onset of an anomaly. 

We’re so confident in the built-in ransomware defense capabilities of the B10000 and our professional services expertise that we’re now offering a new cyber resilience guarantee [1]. This guarantees you access to an HPE services expert within 30 minutes of reporting an outage resulting from a ransomware incident, enabling you to rapidly address and mitigate the impact of a cyberattack. It also ensures that all immutable snapshots created on the B10000 remain accessible for the specified retention period.

HPE’s expert-led outage response services diagnose ransomware alerts and speed up locating and recovering snapshot data. In the unlikely event that we can’t meet the guarantee commitment, we offer you compensation.

Lower your energy costs with sustainable storage—guaranteed

Modern data centers are significant consumers of energy, contributing to 3.5 percent of global CO2 emissions [2], with 11 percent of data center power used for storage [3]. As data grows exponentially, so does the environmental and cost impact. Transitioning from power-hungry, hardware-heavy legacy infrastructure to modern, sustainable storage is no longer optional; it’s now a fundamental business imperative.

The B10000 features a power-efficient architecture that enhances performance, reliability, and simplicity while helping reduce energy costs, carbon emissions, and e-waste compared to traditional storage. With the B10000, you can reduce your energy consumption by 45 percent [4], decrease your storage footprint by 30 percent [5], and lower your total cost of ownership by 40 percent [6].

The B10000 achieves these impressive sustainability outcomes with energy-efficient all-flash components, disaggregated storage that extends product life cycles while reducing e-waste, advanced data reduction capabilities, and AI-powered management that optimizes resource usage. Better still, B10000 management tools on the HPE GreenLake cloud, such as the HPE Sustainability Insight Center and the Data Services Cloud Console, provide visibility into energy consumption and emissions, enabling informed decision-making to meet sustainability goals.

Because the B10000 is designed for sustainability, performance, and efficiency, we can guarantee that it operates at optimized power levels while maintaining peak performance. Our new energy consumption guarantee [7] promises that your B10000 power usage will not exceed an agreed maximum target each month. This helps ensure that you can plan for and count on a maximum power budget. If you exceed the energy usage limit, you will receive a credit voucher to offset your additional energy costs.

Sleep better at night with zero data loss or downtime—guaranteed

Application uptime is more important today than it’s ever been. Data loss and downtime mean lost time and money. You need highly available storage that will ensure multi-site business continuity and data availability for your mission-critical applications in the event of unexpected disruptions. 

That’s why we are introducing a new HPE Zero RTO/RPO guarantee for the B10000 [8]. It’s a cost-nothing, do-nothing guarantee that is unmatched in the industry. The active peer persistence feature of the B10000 combines synchronous replication and transparent fail-over across active sites to maintain continuous data availability with no data loss or downtime—even in the event of site-wide or natural disasters. Because active peer persistence is built into the B10000, no additional appliances or integration are required.

If your application loses access to data during a fail-over, we will proactively provide credit that can be redeemed upon making a future investment in the B10000. 

Simplify and future-proof your storage ownership experience

In today’s fast-moving business world, your data infrastructure needs to keep up without surprises, disruptions, or unexpected costs. HPE Alletra Storage MP B10000 simplifies and future-proofs your experience, offering guaranteed outcomes, investment protection, and storage that gets better over time.

With the B10000, you get best-in-class ownership experience on the industry’s best storage array. And our outcome-focused SLAs provide peace of mind, so you can focus on business objectives rather than IT operations. Your storage stays modern, your costs stay predictable, and your business stays ahead.

Find out more

Sponsored by HPE


  1.  The cyber resilience guarantee is available on any B10000 system running OS release 10.5.  The guarantee is applicable whether you purchase your B10000 system through a traditional up-front payment or through the HPE GreenLake Flex pay-per-use consumption model. To be eligible for the guarantee, you will need to have an active support contract with a minimum three-year software and support software-as-a-service (SaaS) subscription. 
  2.  “Top Recommendations for Sustainable Data Storage,” Gartner, November 2024.
  3.  “Top Recommendations for Sustainable Data Storage,” Gartner, November 2024.
  4. Based on internal analysis comparing power consumption, cooling requirements, and energy efficiency when transitioning from spinning disk to HPE Alletra Storage MP all-flash architecture.
  5. “Analyzing the Economic Impact of HPE Alletra Storage MP B10000,” ESG Economic Validation, HPE, February 2025.
  6.  Based on internal use case analysis, HPE Alletra Storage MP B10000 reduces total cost of ownership (TCO) versus non-disaggregated architectures by allowing organizations to avoid overprovisioning.
  7. Unlike competitive energy efficiency SLAs, the B10000 guarantee is applicable whether you purchase your B10000 system through a traditional up-front payment or the HPE GreenLake Flex pay-per-use consumption model. To qualify for this guarantee, an active HPE Tech Care Service or HPE Complete Care Service contract is required.
  8. To qualify for the HPE Zero RTO/RPO guarantee, customers must have an active support contract, identical array configurations in the replication group, and a minimum three-year SaaS subscription.