VAST Data’s AI OS software stack has been ported to Azure and will be available as a managed service.
The AI OS is composed of a stack of data services – DataSpace, DataBase, DataStore, DataEngine, AgentEngine, and InsightEngine – layered on top of its DASE (Disaggregated and Shared Everything) storage architecture. It will run on Azure's IT infrastructure and, as Microsoft develops its AI infrastructure, including its own custom silicon initiatives, VAST will work with the Azure team to align its SW with Azure's next-generation platform requirements, regardless of the processor or model architecture.

Aung Oo, VP, Azure Storage at Microsoft, said: "VAST's AI Operating System running on Azure will give Azure customers a high-performance, scalable platform built on the Laosv4 VM series using Azure Boost that seamlessly extends on-premises AI pipelines into Azure's GPU-accelerated infrastructure.
“Many AI model builders in the world leverage VAST for its scalability, breakthrough performance, and AI-native capabilities. This collaboration can help our mutual customers streamline operations, reduce costs, and accelerate time-to-insight for AI workloads of every size.”
The full VAST SW stack will be available in Azure, enabling customers to move VAST-based workloads between their on-premises VAST installations, VAST neocloud deployments, Azure, and Google Cloud as well. VAST announced its SW was available as a managed service in GCP earlier this month.

In effect we have a hybrid VAST Data AI OS data fabric stretching across the on-premises, neocloud, GCP, and now Azure environments. Our understanding is that an extension of this fabric to cover AWS is likely, with other clouds possible as well, such as OCI, Asia-based clouds, and sovereign clouds. We think VAST wants to build out a comprehensive and global presence across mainstream public clouds and neoclouds so that its AI OS storage becomes a default choice, both for existing businesses and organizations and for AI startups.
The idea of a storage data fabric enveloping both on-premises and public cloud instances of the storage SW was pioneered by NetApp. Its customers get a fabric with a consistent ONTAP data access and management experience wherever they are located in the fabric. Other storage suppliers, such as DDN, Dell, Hammerspace, HPE, Pure Storage, and Qumulo, are, like VAST, pursuing a similar storage fabric strategy.
VAST differs in that it has built an AI software stack on top of its base storage, and now offers a storage and data operations environment for AI models and agents inside its fabric. With Azure and GCP on board, along with CoreWeave and other neoclouds, the company is further ahead in this regard than its competitors. That’s because they don’t have a vision of offering an AI data operations stack as part of their storage fabrics. For them, that is the responsibility of analytics and AI data operations players such as Databricks and Snowflake.


VAST reckons it is now positioned as a strategic element of Microsoft’s broader AI computing strategy. Its co-founder, Jeff Denworth, said: “This collaboration with Microsoft reflects our shared vision for the future of AI infrastructure, where performance, scale, and simplicity converge to enable enterprises to transform their business with agentic AI.
“Becoming an Azure Partner represents the first milestone in that journey. Customers will be able to unify their data and AI pipelines across environments with the same power, simplicity, and performance they expect from VAST, now with the reach, elasticity, and reliability of Microsoft’s global cloud.”
The VAST AI OS will be available “soon” to Azure customers.
Bootnote
Microsoft says the Laosv4-series of Azure Virtual Machines (VMs) features high-throughput, low-latency, directly mapped local NVMe storage. These VMs utilize AMD's 4th Gen EPYC 9004 processors, which can achieve a boosted maximum frequency of 3.7GHz. The Laosv4-series VMs are available in sizes from 2 to 32 vCPUs, with 8 GiB of memory and 720 GB of local NVMe temp disk capacity allocated per vCPU, and up to 23TB (12 x 1.92TB) of local temp disk capacity available on the L32aos_v4 size.
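Since memory and local NVMe capacity scale linearly with vCPU count, the per-size resources can be sketched with a few lines of arithmetic. This is an illustrative sketch using the figures quoted above; the `laosv4_size` helper is hypothetical, and intermediate sizes' drive layouts are not specified by the quoted description.

```python
# Laosv4 sizing sketch, using the per-vCPU figures quoted above.
MEM_GIB_PER_VCPU = 8     # 8 GiB memory per vCPU
NVME_GB_PER_VCPU = 720   # 720 GB local NVMe temp disk per vCPU

def laosv4_size(vcpus: int) -> dict:
    """Return memory and local NVMe capacity for a given vCPU count (hypothetical helper)."""
    return {
        "vcpus": vcpus,
        "memory_gib": vcpus * MEM_GIB_PER_VCPU,
        "local_nvme_gb": vcpus * NVME_GB_PER_VCPU,
    }

# Largest size, L32aos_v4: 32 vCPUs -> 256 GiB memory and 23,040 GB of NVMe,
# which matches the quoted 12 x 1.92 TB (= 23,040 GB) drive complement.
print(laosv4_size(32))
```

The 32-vCPU case lines up with the quoted top-end figure: 32 x 720 GB = 23,040 GB, i.e. the ~23 TB maximum Microsoft cites.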
Azure Boost is a system designed by Microsoft that offloads server virtualization processes traditionally performed by the hypervisor and host OS onto purpose-built software and hardware. This offloading frees up CPU resources for the guest virtual machines, resulting in improved performance.
Storage processing operations are offloaded to the Azure Boost FPGA. This offload to FPGA provides leading efficiency and performance while improving security, reducing jitter, and improving latency for workloads. Local storage now runs at up to 36 GBps throughput and 6.6 million IOPS, and with remote storage up to 14 GBps throughput and 750K IOPS.