At its Ignite event, Microsoft announced the Azure Boost data processing unit (DPU), designed to improve storage and networking efficiency in the Azure public cloud.
DPUs were a storage industry phenomenon a few years ago. Startups such as Pensando, Fungible, Nebulon, Pliops, and Kalray, along with established players Intel and Nvidia, argued that repetitive, specialized low-level storage and networking processing clogged up x86 CPUs, whose main job is running application workloads. Offloading that processing to dedicated ASIC or FPGA hardware would get it done faster and free up the host x86 processor, speeding application processing.
Nebulon faced financial difficulties and was quietly acquired by Nvidia, while Pensando was bought by AMD and Fungible by Microsoft, as the general server, storage system, and network interface market failed to adopt DPU technology. Now Nvidia's BlueField DPUs are becoming popular, and the cloud hyperscalers, wanting to rent out x86 and Arm server processing capacity, see mileage in using DPUs to make more of that capacity available. Kalray and Pliops are still developing their technology.
In its Ignite Book of News, Microsoft introduces the Azure Boost DPU as its first in-house DPU, “designed for scale-out, composable workloads on Azure.”
Fungible, co-founded by CEO Pradeep Sindhu and Bertrand Serlet in 2015, was bought for around $190 million by Microsoft in December 2022. Sindhu has been Corporate VP Silicon for Microsoft since then. Serlet was made a Microsoft VP of Software Engineering, but he left a year ago to be, as his LinkedIn entry says, a “Free Electron.” The Azure Boost DPU chip is based on Fungible chip technology.
Microsoft says it is optimizing every layer of its infrastructure in the era of AI “with Azure Boost DPUs joining the processor trifecta in Azure (CPU – AI accelerator – DPU), enhanced by hardware security capabilities of Azure Integrated HSM (Hardware Security Module), as well as continued innovations in Cobalt and Maia, paired with state-of-the-art networking, power management and hardware-software co-design capabilities.”
Azure Integrated HSM provides secure cryptographic key storage and operations. Cobalt is Microsoft’s in-house Arm-based CPU, and the Maia 100 is its GPU-like accelerator hardware and software for AI training and inferencing.
This cloud AI infrastructure grouping of x86 and Cobalt CPUs, Maia accelerators, and Azure Boost DPUs will, we understand, make the Azure cloud infrastructure run faster and more efficiently. However, it does so in a proprietary way, using Microsoft’s own hardware rather than industry-standard designs, unlike, say, Meta with its Open Compute Project.
Microsoft is investing heavily in developing its own silicon hardware and firmware for its private use, and it must surely be deploying thousands of these chips inside its own operations.
Storage architect Chris Evans commented on Bluesky: “The amount of new silicon developed by Microsoft, AWS, GCP, etc, should be a worry for traditional vendors. It will represent a divergence from traditional standards and diverge the TCO models.” AWS has in-house Nitro hardware, and Google has also developed proprietary chip hardware.