Nutanix is becoming cloud-native and hypervisor-independent, supporting external storage and embracing generative AI as it aims to provide a generalized software platform on which you can run anything, anywhere.
At its .NEXT 2025 conference, Nutanix is announcing cloud-native AOS, general availability of its Dell PowerFlex support, integration with Pure Storage FlashArray and FlashStack offerings, and a Nutanix Enterprise AI initiative with Nvidia that includes agentic model support.

Lee Caswell, Nutanix SVP for product and solutions marketing, said: “The key themes fall into three categories. The first is about moving to a modern infrastructure, [then] the idea that you can build apps and run anywhere [and] supporting agentic workloads going forward.”
AOS is Nutanix’s Acropolis Operating System for hyperconverged infrastructure (HCI). There is a built-in hypervisor – AHV – to support guest operating systems and applications, and a virtual SAN aggregating the server node’s SSD and/or disk storage into a single pool of block storage. Flow provides virtual networking and security. AHV is actually separate from AOS and can be replaced by another hypervisor, such as VMware’s vSphere. The whole Nutanix software stack is called Nutanix Cloud Infrastructure (NCI).
The AOS virtual SAN can be replaced by an external storage system for users who need to grow their storage capacity separately from their server compute, with the first such storage product being Dell’s PowerFlex. This was first mentioned by Nutanix in June last year as part of efforts to provide an alternative destination following Broadcom’s changes to VMware’s business terms.
Dell has two HCI systems of its own. The VxRail offering supports VMware vSphere and vSAN, while PowerFlex, which can theoretically grow to thousands of nodes, supports other hypervisors such as KVM (the open source Kernel-based Virtual Machine). Nutanix’s AHV hypervisor is based on KVM.
AOS PowerFlex integration has been developed by Dell and Nutanix and is now generally available, so Nutanix AOS customers can use Dell PowerFlex external storage. A cluster of Nutanix compute nodes can scale to 32 nodes and hook up to a PowerFlex cluster that can scale to 128 nodes. Snapshot protection and replication of data on the PowerFlex are initiated by Nutanix’s Prism management software.
Pure Storage
The same kind of arrangement has been extended to Pure Storage’s FlashArray system with an NVMe/TCP interconnection (which PowerFlex also supports). Here the Pure FlashArray can scale to 10 storage arrays and 20 controllers. Nutanix AOS with Pure Storage will be supported by server hardware partners that currently support FlashArray, including Cisco, Dell, HPE, Lenovo, and Supermicro.
Caswell said: “Pure has … 13,500 customers. And so this is a really interesting opportunity to go and see how we can scale into new storage buyers. I think it’s a real evidence of the maturity of the Nutanix vision that HCI and external storage will coexist for years to come.”
FlashArray provides both compression and deduplication, whereas PowerFlex is compression-only. With 150 TB Direct Flash Modules (Pure’s proprietary solid state drives) and 300 TB ones coming, FlashArray has better density than systems using off-the-shelf SSDs, which max out at the 128 TB level. The Nutanix integration supports asynchronous replication now, with a roadmap toward ActiveDR and ActiveCluster (metro-stretch clustering), enabling near-zero RPO/RTO recovery for Nutanix deployments.

Nutanix is working with Cisco “to make sure that Nutanix is integrated and supported with the FlashStack offering,” Caswell said. FlashStack is a joint Cisco-Pure converged infrastructure (CI) offering combining Cisco’s UCS servers and Nexus networking switches with Pure’s storage, for sale by channel partners.
Caswell said: “Cisco is validating the Nutanix solution in an offer that’s called FlashStack with Nutanix,” and will run Nutanix AOS software on the servers with Pure’s FlashArray storage.
Jeremy Foster, SVP and General Manager at Cisco Compute, said: “With nearly a decade of joint innovation with Pure Storage, and an expanded partnership and co-development roadmap with Nutanix, we’re offering a proven platform backed by Cisco validated designs, a world-class joint support model, and deep integration with Cisco Intersight – providing unified visibility across both Pure Storage and Nutanix clusters for a more complete view of the operating environment.”
Cloud-native AOS
Nutanix’s Cloud Native AOS product extends Nutanix storage and data services to hyperscaler Kubernetes services and cloud-native bare-metal environments, without requiring a hypervisor. Containerized AOS can run in the public cloud – AWS, Azure, GCP – and also on-premises on any bare-metal Linux server. The Nutanix Kubernetes Platform (NKP) runs Kubernetes-orchestrated container workloads through the acquired D2iQ Kubernetes Platform (DKP) software, and Nutanix says it can run containerized apps at the edge, in the datacenter, and in the public cloud.

Nutanix Data Services for Kubernetes (NDK) provides app-centric snapshots, replication, and disaster recovery – across availability zones in the public cloud, for example. Customers can build and deploy cloud-native apps, with app and data migration across sites, including repatriation: the ability to move applications back to on-premises containerized environments.
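To make the app-centric snapshot idea concrete, here is a minimal Python sketch of the standard Kubernetes building block such services sit on top of: a `VolumeSnapshot` object from the `snapshot.storage.k8s.io` API. This is not Nutanix’s NDK API; the snapshot class name `nutanix-snapshot-class` is a hypothetical placeholder.

```python
# Sketch of a standard Kubernetes VolumeSnapshot manifest, the CSI-level
# primitive that app-centric snapshot services can build on. The snapshot
# class name below is a hypothetical placeholder, not a real Nutanix name.

def make_volume_snapshot(name: str, pvc_name: str, namespace: str = "default") -> dict:
    """Build a VolumeSnapshot manifest for the given PersistentVolumeClaim."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "volumeSnapshotClassName": "nutanix-snapshot-class",  # hypothetical
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

snap = make_volume_snapshot("orders-db-snap", "orders-db-pvc")
print(snap["spec"]["source"]["persistentVolumeClaimName"])  # → orders-db-pvc
```

An app-centric service would generate one such object per volume an application owns, plus metadata tying them together, which is the consistency guarantee that distinguishes it from per-volume snapshots.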
Caswell said: “We can push further into the cloud by running directly as a containerized AOS directly on a Kubernetes runtime. In this case we’re doing an EA (early access) of EKS.” This is Amazon’s Elastic Kubernetes Service.
“You’ve got the opportunity to run without our hypervisor” as “most clouds, including Amazon, have an underlying hypervisor they’re running their Kubernetes runtime on … They use their virtualization for managing the infrastructure. Now we have our Kubernetes and our distributed storage services that give you enterprise value in the public cloud.”
He thinks that edge IT infrastructure is evolving. “It’s our view that over five years, say, the edge is actually going to go and be like smaller instances of bare metal. You may have container-only versions doing AI inferencing, for example, and so the idea [is] to run cloud-native AOS with our data services at the edge,” with a hypervisor optionally present.
So customers at the edge could run containerized apps in virtual machines (VMs) inside a hypervisor environment or run the containers directly in a Linux bare metal server, or even run VMs inside containers.
The cloud-native AOS idea was initiated in Nutanix’s Project Beacon. Caswell said: “We announced Project Beacon … three years ago with a concept that you could run any app anywhere. And this is the first product out of that Project Beacon vision … This basically now is the underpinnings of how we’ll provide all of our PaaS-level services. We’ll be able to run independent of whether there’s a hypervisor or just a Kubernetes runtime available.”
Separately, Nutanix Cloud Clusters (NC2) runs the Nutanix HCI software stack on bare-metal server instances in the public cloud as well as on-premises. Both the AWS and Azure clouds have been supported, and Nutanix is now adding support for Google Cloud’s bare-metal instances.
Nutanix is also supporting AWS’s new Intel Sapphire Rapids CPU instances, whose AMX extensions accelerate AI inferencing and RAG workloads.
Nutanix Enterprise AI (NAI)
Nutanix has looked at the surging interest in generative AI and developed its own enterprise AI offering, NAI, on the premise that customers need a whole AI system rather than component parts; it helps customers start AI operations and keeps them secure. It builds on the company’s earlier GPT-in-a-Box concept. The NAI core includes a centralized LLM repository that Nutanix says “creates secure endpoints that make connecting generative AI applications and agents simple and private.”
The Nutanix Cloud Platform offering builds on Nvidia’s AI Data Platform reference design and integrates Nutanix Unified Storage and Nutanix Database Service offerings for unstructured and structured data for AI. Caswell said Nutanix will provide reasoning, embedding, reranking, and guardrail models, and run and manage agentic AI, all on top of its Nutanix Kubernetes Platform.

Nutanix does not see its software being used for AI large language model (LLM) training because it’s a small market. Caswell said: “Basically imagine these LLMs are being developed in the public cloud. Most customers have figured that out … There’s only a hundred companies that have the resources to go build LLMs, in my opinion, because of the GPUs that are required.”
Caswell said that what customers are really thinking about is how to make these models more effective. Nutanix is helping here because “we’re now supporting the new Nvidia models. These are called NeMo and NIM … we actually qualify LLMs.”
“What does it mean to certify or qualify an LLM? It’s actually back to what we’ve wrestled with for years. It’s memory management. You need to make sure that the model running doesn’t overrun the memory limitations of an individual GPU. So we certify these LLMs … we certify them today from Nvidia and from Hugging Face. These certified LLMs are private and public, including Llama 2, Llama 3, and others.”
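The memory-fit check Caswell describes comes down to simple arithmetic: model weights plus KV cache plus overhead must fit within a GPU’s memory. A back-of-envelope Python sketch, with all sizes as illustrative assumptions rather than Nutanix’s actual qualification procedure:

```python
# Back-of-envelope check: will a model's weights plus KV cache fit in one
# GPU's memory? All figures are illustrative assumptions, not Nutanix's
# actual certification method.

def fits_on_gpu(params_billions: float, bytes_per_param: int,
                kv_cache_gb: float, gpu_memory_gb: float,
                overhead_fraction: float = 0.1) -> bool:
    """Return True if the estimated footprint fits within GPU memory."""
    weights_gb = params_billions * bytes_per_param  # 1B params x 1 byte = 1 GB
    needed_gb = (weights_gb + kv_cache_gb) * (1 + overhead_fraction)
    return needed_gb <= gpu_memory_gb

# An 8B-parameter model in FP16 (2 bytes/param) with a 4 GB KV cache needs
# about (16 + 4) * 1.1 = 22 GB: too big for a 16 GB GPU, fine on an 80 GB one.
print(fits_on_gpu(8, 2, 4.0, 16.0))  # → False
print(fits_on_gpu(8, 2, 4.0, 80.0))  # → True
```

This is why quantization (dropping `bytes_per_param` from 2 to 1 or below) is often the difference between a model being certifiable on a given GPU or not.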
“Those are now certified in the Nvidia case. They’re performance-optimized for the Nvidia GPUs. We support the range of GPUs underneath. That certification now is extending to make sure that we can add model support.”
One supported model type is the guardrail model. “Once you have your initial result, you now feed it into a Guardrail model that basically matches this against any ethical concerns you have, any things you want to weed out effectively … that would be inflammatory.” Another is reranking, which orders a list of items by customer relevancy. Caswell said embedding adds audio, images, and video to text items.
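The guardrail and reranking stages chain together as a post-processing pipeline on raw model or retrieval output. A toy Python sketch of that flow, where the blocklist and word-overlap score stand in for real guardrail and reranker models:

```python
# Toy post-processing pipeline: guardrail filtering followed by reranking.
# The blocklist and word-overlap scoring are stand-ins for real models.

def guardrail(results: list[str], blocked_terms: set[str]) -> list[str]:
    """Drop results containing any term the operator wants weeded out."""
    return [r for r in results if not any(t in r.lower() for t in blocked_terms)]

def rerank(results: list[str], query: str) -> list[str]:
    """Order results by a crude relevance score: words shared with the query."""
    q_words = set(query.lower().split())
    return sorted(results, key=lambda r: -len(q_words & set(r.lower().split())))

raw = ["upgrade AOS cluster steps", "inflammatory rant", "AOS storage overview"]
safe = guardrail(raw, {"inflammatory"})
print(rerank(safe, "AOS cluster upgrade"))
# → ['upgrade AOS cluster steps', 'AOS storage overview']
```

In a production system both stages would be model endpoints rather than string functions, but the ordering (filter first, then rerank the survivors) is the same.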
He said NAI will support an agentic workflow cycle with plan, use, and critique phases. Nutanix is working with Nvidia “to basically bring in all of these models into a production grade workflow … What enterprise AI does is it gives you access to these models and a workflow. We have integrated our endpoints with each one of these models. And you can now build out a production system more quickly because of the joint collaboration across the two companies,” Nutanix and Nvidia.
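The plan, use, and critique cycle can be sketched as a simple loop in which each phase is a callable. Here the three callables are toy stand-ins for the model endpoints an agentic workflow would actually invoke:

```python
# Minimal sketch of a plan/use/critique agentic cycle. The plan, act, and
# critique callables are toy stand-ins for real model endpoints.

def run_agent(goal: str, plan, act, critique, max_rounds: int = 5) -> str:
    """Loop: plan a step, use a tool/model, critique; stop when accepted."""
    result = ""
    for _ in range(max_rounds):
        step = plan(goal, result)       # plan phase: decide next step
        result = act(step)              # use phase: execute it
        if critique(goal, result):      # critique phase: good enough?
            break
    return result

# Toy run: the "agent" appends words until the critique accepts the length.
answer = run_agent(
    "produce three words",
    plan=lambda goal, prev: prev + " word",
    act=lambda step: step.strip(),
    critique=lambda goal, res: len(res.split()) >= 3,
)
print(answer)  # → "word word word"
```

The critique phase is what distinguishes an agentic loop from a single inference call: output is fed back and refined until it passes, or until the round budget runs out.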
Caswell said Nutanix’s run anything, anywhere concept means customers can use a Nutanix Cloud Platform system to run GenAI models, cloud-native apps, virtualized apps, databases, and desktops wherever it’s appropriate: AI factories, datacenters, major and minor public clouds, and edge locations. Nutanix acts as an overarching environment covering all these bases and providing a distributed platform for the applications – a common data platform that can run across bare-metal, virtualized, and containerized infrastructure.
Availability
Cloud-native AOS supports Amazon EKS (Elastic Kubernetes Service) at early access level now with general availability in the summer. It will support on-premises bare-metal servers in 2026 with early access starting at the end of this year. The Pure Storage FlashArray integration is at early access availability for now, with GA expected later this year. Nutanix integration with FlashStack is expected to be available for early access this summer and generally available at the end of the year. NAI with agentic model support is now generally available. Nutanix blogs provide more information.