MinIO is staking its claim in the large language model (LLM) market, adding support for the Model Context Protocol (MCP) to its AIStor software – a move sparked by agentic AI’s growing reliance on object storage.
MCP is an Anthropic-supported method for AI agents to connect to proprietary data sources. “Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools,” Anthropic says. As a result, Anthropic’s Claude model can query, read, and write to a customer’s file system storage.
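The pattern MCP standardizes is a client-server one: a server advertises a set of named tools, and the model's client invokes them by name with structured arguments, receiving structured results the model can then summarize. A rough, stdlib-only sketch of that idea (the tool name, registry, and placeholder data here are illustrative, not the real MCP SDK or any AIStor command):

```python
import json

# Toy tool registry illustrating the MCP pattern: a server exposes named
# tools with descriptions; a model-driven client calls them by name.
TOOLS = {}

def tool(name, description):
    """Register a function as a callable tool (illustrative, not the MCP SDK)."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

# Hypothetical stand-in for an object-store query the server might expose.
@tool("list_buckets", "List all buckets on the object store")
def list_buckets():
    return ["invoices", "product-images", "logs"]  # placeholder data

def handle_request(request_json):
    """Dispatch a JSON-RPC-style tool call, as an MCP server would."""
    req = json.loads(request_json)
    fn = TOOLS[req["tool"]]["fn"]
    return json.dumps({"result": fn(**req.get("arguments", {}))})

# The client (the LLM side) sends a structured call rather than ad hoc glue code:
print(handle_request('{"tool": "list_buckets"}'))
```

Because every tool is advertised in one registry with a description, the model can discover what it is allowed to do and compose calls itself, which is the "standardized plumbing" the USB-C analogy refers to.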
MinIO introduced its v2.0 AIStor software, with support for Nvidia GPUDirect, BlueField SuperNICs, and NIM microservices, in March. Now it is adding MCP server support so AI agents can access AIStor. The preview release, MinIO says, “includes more than 25 commonly used commands, making exploring and using data in an AIStor object store easier than ever.”
Pavel Anni, a MinIO Customer Engineer and Technology Educator, writes: “Agents are already demonstrating incredible intelligence and are very helpful with question answering, but as with humans, they need the ability to discover and access software applications and other services to actually perform useful work … Until now, every agentic developer has had to write their own custom plumbing, glue code, etc. to do this. Without a standard like MCP, building real-world agentic workflows is essentially impossible … MCP leverages language models to summarize the rich output of these services and can present crucial information in a human-readable form.”
The preview release “enables interaction with and management of MinIO AIStor … simply by chatting with an LLM such as Anthropic Claude or OpenAI ChatGPT.” Users can tell Claude to list all the object buckets on an AIStor server and then to create a list of objects grouped by category, and Claude responds with a summary list.

Anni contrasts a command line or web user interface request with the Claude and MCP approach: “The command-line tool or web UI would give us a list of objects, as requested. The LLM summarizes the bucket’s content and provides an insightful narrative of its composition. Imagine if I had thousands of objects here. A typical command-line query would give us a long list of objects that could be hard to consume. Here, it gives us a human-readable overview of the bucket’s contents. It is similar to summarizing an article with your favorite LLM client.”
Anni then had Claude add tags to the bucket items. “Imagine doing the same operation without MCP servers. You would have to write a Python script to pull images from the bucket, send them to an AI model for analysis, get the information back, decode it, find the correct fields, apply tags to objects … You could easily spend half a day creating and debugging such a script. We just did it simply using human language in a matter of seconds.”
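The manual pipeline Anni describes might look roughly like this. The storage and model calls below (`fetch_object`, `analyze_image`, `set_object_tags`) are hypothetical stubs standing in for MinIO SDK and model-API calls, not real library code:

```python
# Sketch of the manual, pre-MCP workflow: pull each object, run it through
# a vision model, parse the response, and write tags back to the store.

def fetch_object(bucket, key):
    """Placeholder for an object-store download call."""
    return b"...image bytes..."

def analyze_image(image_bytes):
    """Placeholder for a vision-model API call returning labels."""
    return {"labels": ["dog", "outdoor"]}

def set_object_tags(bucket, key, tags):
    """Placeholder for an object-store tagging call."""
    return tags

def tag_bucket_objects(bucket, keys):
    """Apply model-derived tags to each object; returns key -> applied tags."""
    applied = {}
    for key in keys:
        analysis = analyze_image(fetch_object(bucket, key))
        tags = {f"label-{i}": label for i, label in enumerate(analysis["labels"])}
        applied[key] = set_object_tags(bucket, key, tags)
    return applied
```

Each stub here is glue code a developer would otherwise have to write and debug; with an MCP server in place, the agent orchestrates the equivalent steps from a natural-language request, which is Anni's point.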
More information about AIStor and MCP is available in Anni’s blog post.