Interview. A meeting with new Nasuni CEO Sam King and field CTO John Capello naturally broached GenAI and agents. The discussion led to a Q&A, which provided insight into the cloud file services supplier’s AI roadmap. Both unstructured data vectorization and Model Context Protocol (MCP) support feature in this.
Blocks & Files: AI workloads are transitioning from AI training with files to AI training with objects, and on to AI inference, both in on-prem and public cloud datacenters and at the edge, with GPU servers, and with both file and object data. Does Nasuni agree with this picture?

John Capello: Yes, but the steady state will be a mix of file and object. File systems are still incredibly useful for all of the foundational work that is being done in AI. And we expect that, industry-wide, there will be more investments in high-performance file systems.
Blocks & Files: For training using public cloud-stored object data, will Nasuni support Nvidia’s GPUDirect for Objects scheme (S3 over RDMA)? Could you provide reasons why you will or will not?
John Capello: We are investing in a new offering that will represent a step-change in both IOPS and throughput compared to the current Nasuni Edge. As part of that new solution, we anticipate supporting GPUDirect.
Blocks & Files: Do you see on-premises datacenter, as opposed to edge, Nasuni sites being involved in AI training with remote GPU server entities, and what are your thoughts around this?
John Capello: Yes. We regard the Edge as any location not co-located with the object store core of a Nasuni volume. Many of Nasuni's customers store data in a cloud object store and manage compute in datacenters. We expect the same to continue, especially as customers face compute capacity constraints in their regions.
Blocks & Files: Do you see Nasuni edge sites, and Nasuni datacenter sites, being involved in AI inference with local GPU servers? How would Nasuni feed data to such GPU servers? GPUDirect for Objects?
John Capello: Yes, per the answer to question three above.
Blocks & Files: Do you see Nasuni preparing the file/underlying object data it holds for AI inference by selecting, filtering, and transforming it into vectors and storing those vectors inside the Nasuni environment? Or will you feed Nasuni-stored data to an AI pipeline that includes vectorization and a vector database?
John Capello: We expect to support multiple options for our customers. This includes API-based access to third-party services that will vectorize and index data for inferencing (available today with the Nasuni Data Service) and providing a service that indexes and potentially vectorizes data as a native capability on our platform. As a Platform provider, it is critical that we support best-of-breed integrations as well as native solutions for our customers.
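The second option Capello describes, a native service that indexes and vectorizes file data, can be sketched roughly as follows. This is purely illustrative: the chunking, the toy bag-of-words "embedding," and the in-memory store are stand-ins for whatever real embedding model and vector database a Nasuni or third-party pipeline would actually use.

```python
# Hypothetical sketch of a vectorize-and-index pipeline for file data.
# None of these names come from Nasuni's product; the embedding is a toy.
import math

def embed(text: str) -> dict:
    """Toy embedding: a normalized bag-of-words vector.
    A real pipeline would call an embedding model here instead."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors."""
    return sum(w * b.get(k, 0.0) for k, w in a.items())

class VectorIndex:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.entries = []  # (path, chunk, vector)

    def add_file(self, path: str, text: str, chunk_words: int = 50):
        # Split each file into fixed-size word chunks and vectorize each one.
        words = text.split()
        for i in range(0, len(words), chunk_words):
            chunk = " ".join(words[i:i + chunk_words])
            self.entries.append((path, chunk, embed(chunk)))

    def query(self, question: str, top_k: int = 3):
        # Rank stored chunks by similarity to the question vector.
        qv = embed(question)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[2]),
                        reverse=True)
        return [(path, chunk) for path, chunk, _ in ranked[:top_k]]

index = VectorIndex()
index.add_file("/vol/reports/q1.txt",
               "Quarterly revenue grew on strong cloud demand")
index.add_file("/vol/hr/policy.txt",
               "Remote work policy applies to all employees")
hits = index.query("cloud revenue")
```

In the "best-of-breed integration" option, only the `add_file` and `query` steps would live on the platform; the embedding call and the store would belong to the external service.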
Blocks & Files: If Nasuni edge filers stored vectorized data in a vector database, could they feed data faster to AI models and agents for inferencing? What is Nasuni's thinking on this topic?
John Capello: Per the answer to question five above, we are investing in a native service that will provide a fully integrated solution for agentic access to data on the Nasuni Platform. This will include an index and permissions enforcement for secure access at scale. We are still evaluating whether it makes sense for us to store the vectors as part of this service, or to rely on the services customers integrate with our platform to handle the vectorization of customers’ data.
Blocks & Files: How does Nasuni envisage using AI LLMs and agents to help its customers’ admins monitor and manage their Nasuni deployments and data estate?
John Capello: Today, through Nasuni Labs, customers can deploy MCP servers that front our control plane: they use them to simplify provisioning and decommissioning, monitor the health of volumes, and automate workflows with other infrastructure. Beyond infrastructure management, we have MCP servers that connect to the volumes themselves. Going forward, we plan to integrate this capability natively within the Nasuni Portal, both for administrative insights and actions, and for analyzing and acting on telemetry from the infrastructure our customers deploy and manage as part of their use of our Platform.
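For readers unfamiliar with MCP: it is a JSON-RPC 2.0-based protocol in which a server advertises tools via `tools/list` and executes them via `tools/call`. The sketch below shows that message shape with one invented admin tool, `get_volume_health`; a real deployment would use an MCP SDK and talk to the actual Nasuni control plane, and the volume data here is made up.

```python
# Transport-free sketch of an MCP-style server exposing one hypothetical
# admin tool. Method names follow the MCP spec; the tool itself is invented.
import json

VOLUMES = {"vol-finance": {"status": "healthy", "used_pct": 62}}  # stand-in data

TOOLS = [{
    "name": "get_volume_health",
    "description": "Return health and capacity for a Nasuni volume.",
    "inputSchema": {"type": "object",
                    "properties": {"volume": {"type": "string"}},
                    "required": ["volume"]},
}]

def handle(message: str) -> str:
    """Dispatch one JSON-RPC 2.0 request and return the response string."""
    req = json.loads(message)
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        args = req["params"]["arguments"]
        health = VOLUMES.get(args["volume"], {"status": "unknown"})
        result = {"content": [{"type": "text", "text": json.dumps(health)}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

# An agent asking about a volume's health:
reply = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_volume_health",
               "arguments": {"volume": "vol-finance"}},
}))
```

Because the interface is just tool descriptions plus JSON-RPC messages, the same pattern covers both uses Capello mentions: control-plane tools for provisioning and health, and volume-facing tools for the data itself.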