Parallel NFS-based data manager/orchestrator Hammerspace has selected the Xsight Labs E1 800G DPU to eliminate legacy storage servers from AI data storage.
Hammerspace claims this collaboration advances the Open Flash Platform (OFP) vision of a democratized, efficient, and radically simplified data storage infrastructure offering more than 10x storage density and 90 percent lower total cost of ownership. Its OFP concept replaces all-flash arrays with directly accessed SSDs in JBOFs that have a controller DPU, Linux, its parallel NFS (pNFS) software, and a network connection. Israel-based fabless semiconductor business Xsight Labs provides end-to-end networking technologies to support exponential bandwidth growth (e.g., up to 800G and 12.8T speeds) for cloud infrastructure, 5G, machine learning, and compute-intensive workloads.
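The pNFS idea at the heart of OFP is a control/data split: a metadata service hands out a "layout" describing which data nodes hold which stripes of a file, and clients then read those stripes directly, with no storage server in the data path. A minimal sketch of that split (all class and function names here are illustrative, not Hammerspace's API):

```python
# Illustrative sketch of the pNFS control/data split (not Hammerspace's API).
# A metadata server returns a layout (which data servers hold which stripes);
# the client then moves stripe data directly, bypassing any storage server.
from dataclasses import dataclass

@dataclass
class Layout:
    stripe_size: int
    data_servers: list  # ordered server ids; stripes assigned round-robin

class MetadataServer:
    """Knows where file stripes live; never touches file data."""
    def __init__(self, layouts):
        self.layouts = layouts

    def layout_get(self, path):
        return self.layouts[path]

class DataServer:
    """Holds raw stripes; in OFP this role is a DPU-fronted JBOF."""
    def __init__(self):
        self.stripes = {}  # (path, stripe_index) -> bytes

def write_file(mds, data_servers, path, data):
    layout = mds.layout_get(path)
    for i in range(0, len(data), layout.stripe_size):
        idx = i // layout.stripe_size
        ds = data_servers[layout.data_servers[idx % len(layout.data_servers)]]
        ds.stripes[(path, idx)] = data[i:i + layout.stripe_size]

def read_file(mds, data_servers, path):
    layout = mds.layout_get(path)
    out, idx = b"", 0
    while True:
        ds = data_servers[layout.data_servers[idx % len(layout.data_servers)]]
        stripe = ds.stripes.get((path, idx))
        if stripe is None:
            return out
        out += stripe
        idx += 1

mds = MetadataServer({"/train/shard0": Layout(4, [0, 1])})
dss = {0: DataServer(), 1: DataServer()}
write_file(mds, dss, "/train/shard0", b"warm AI data")
assert read_file(mds, dss, "/train/shard0") == b"warm AI data"
```

The point of the sketch is that the metadata server is consulted once per layout, not per byte; the bulk data moves client-to-device, which is what lets OFP drop the storage server "middleman" entirely.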

David Flynn, Hammerspace CEO, says: “Legacy storage is collapsing under the weight of AI. By fusing our orchestration software with Xsight’s 800G DPU, we flatten the data path and turn every Linux device into shared storage. The result is an open architecture that scales linearly with performance, slashes cost, and feeds GPUs at the speed AI demands.”
The company says its OFP architecture redefines the data center by removing the traditional storage server, “the expensive middleman,” from the storage path. Instead, flash storage connects directly to the network using open standards such as Network File System (NFS) and Linux, creating a dramatically simpler and more enduring architecture.

Hammerspace has selected Xsight’s E1 800G DPU to realize the OFP vision for the next generation of warm storage AI infrastructure. This required a DPU with the Arm core density, memory bandwidth, and 800Gbps Ethernet connectivity necessary for high-performance environments.
Xsight’s E1 800G DPU functions as an edge server for AI data center infrastructure and is available as a PCIe Gen 5 add-in card or a 1 RU edge server. It features:
- 800G DPU with 100 percent fast-path all-layer processing (no slow-path architecture – see bootnote)
- 64 Arm Neoverse N2 cores, 90 W TDP, built on TSMC N5
- 8 x 112G SerDes, configurable as 2 x 400GbE, 4 x 200GbE, or 8 x 100GbE
- Networking, security, storage, and compute (SDN model with hardware acceleration)
- Arm SystemReady (Level 6) workload compatibility across all Linux distros
Xsight says there is no traditional slow-path/fast-path misalignment in the E1 because it has no slow path at all, allowing full line rate at 800G without offload accelerators.

Ted Weatherford, VP Business Development at Xsight Labs, said: “Hammerspace’s orchestration software allows every network element, regardless of memory size, to function as a flat layer zero storage node. The scalability and performance story here will set the pace for the entire AI industry. Hammerspace’s stack coupled with our E1 DPU – which is silently a full-blown Edge server – offers the performance leading warm-storage solution.
“The magic is we are fast-piping warm storage directly to the GPU clusters eliminating all the legacy x86 CPUs. The solution offers an exabyte per rack, connected directly with hundreds of giant Ethernet pipes, thus simplifying the AI infrastructure profoundly.”
What Hammerspace and Xsight are saying is that AI datacenters no longer need external storage arrays to hold training and inference data. They can instead use basic Linux-controlled JBOFs (just a bunch of flash) with component SSDs directly connected to GPU servers, saving cost and power. It’s somewhat similar to messaging from Western Digital with its OpenFlex JBOFs, and Kioxia with its now-abandoned KumoScale product technology.
Early access deployments of the Xsight Labs E1 DPU, integrated with Hammerspace’s orchestration software, are underway with strategic partners. Limited volume shipments are planned to begin in Q4 2025, with production systems available in early Q1 2026.
Bootnote
We understand that Hammerspace’s flattened AI warm storage architecture is optimized for AI workloads, particularly in hyperscale datacenters. There is a direct data path between GPUs and SSDs holding so-called warm data – frequently accessed, but not as often as active training (hot) datasets.
In Xsight’s terminology, the fast path is the high-speed, hardware-accelerated data plane optimized for line-rate processing of data traffic, while the slow path is the exception-handling or control plane path, managed by embedded software running on Arm cores (or similar) within the DPU. Misalignment occurs when the fast path and slow path are not synchronized in their configuration, state, or behavior.
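To make the misalignment point concrete, here is a toy model (purely illustrative, nothing Xsight-specific): the fast path caches per-flow forwarding decisions for line-rate lookup, while the slow path owns the authoritative policy table. If a control plane update fails to invalidate the cache, the two paths diverge and packets are forwarded on stale state.

```python
# Toy model of fast-path/slow-path misalignment (illustrative only).
# Fast path: a cache of per-flow actions consulted at line rate.
# Slow path: software control plane holding the authoritative policy.

class Pipeline:
    def __init__(self):
        self.policy = {}      # slow path: authoritative flow -> action
        self.fast_cache = {}  # fast path: cached flow -> action

    def control_update(self, flow, action, invalidate=True):
        self.policy[flow] = action
        if invalidate:
            self.fast_cache.pop(flow, None)  # keep the two paths aligned

    def forward(self, flow):
        if flow in self.fast_cache:             # fast-path hit: no software involved
            return self.fast_cache[flow]
        action = self.policy.get(flow, "drop")  # miss: punt to the slow path
        self.fast_cache[flow] = action          # install for subsequent packets
        return action

p = Pipeline()
p.control_update("flow-a", "port1")
p.forward("flow-a")                                    # miss, then cached
p.control_update("flow-a", "port2", invalidate=False)  # buggy update: cache kept
print(p.forward("flow-a"))                             # stale "port1": misaligned
```

A design with a single processing path, as Xsight claims for the E1, avoids this class of bug by construction: there is no separate hardware cache whose state can drift from a software table.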