Qumulo builds home-grown S3 bucket shop

Scale-out filer Qumulo is developing its own Amazon S3 bucket storage capability and has cloud archiving in mind.

Currently it uses Minio’s S3 code to provide its object storage capability; Datera does the same.

Molly Presley, Qumulo product marketing director, told a Technology Live briefing in London today that the company is developing its own S3 code. The first version should arrive in March 2019 and will enable applications accessing the Qumulo store through NFS or SMB to read S3 object storage buckets.

Over time, applications coming in through NFS or SMB will also be able to write to those objects, and to influence where the data object is placed.

Presley said the challenges include object storage being eventually consistent while file storage is strongly consistent. Another concerns maintaining a one-to-one mapping between files and objects: the S3 object in its AWS bucket needs to remain a native S3 object.
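To see why eventual consistency is awkward for a file system, consider that a file client expects to read back exactly what it just wrote, while an eventually consistent bucket may serve a stale version for a while. The toy sketch below (not Qumulo code; the store class and retry helper are hypothetical illustrations) simulates that lag and the kind of read-retry logic a file gateway would need:

```python
class EventuallyConsistentStore:
    """Toy object store: an overwrite only becomes visible after a few
    reads, mimicking the read-after-overwrite lag of an eventually
    consistent bucket."""

    def __init__(self, visibility_lag=3):
        self.visible = {}   # what readers currently see
        self.pending = {}   # key -> [new value, reads left until visible]
        self.visibility_lag = visibility_lag

    def put(self, key, value):
        self.pending[key] = [value, self.visibility_lag]

    def get(self, key):
        if key in self.pending:
            value, lag = self.pending[key]
            if lag <= 0:
                # Write finally propagates and becomes visible.
                self.visible[key] = value
                del self.pending[key]
            else:
                self.pending[key][1] -= 1
        return self.visible.get(key)


def read_until_consistent(store, key, expected, max_tries=10):
    """Poll until the store returns the version we just wrote —
    the kind of check a strongly consistent file front end would
    have to bolt on top of an eventually consistent back end."""
    for _ in range(max_tries):
        value = store.get(key)
        if value == expected:
            return value
    raise TimeoutError("object never became consistent")


store = EventuallyConsistentStore(visibility_lag=2)
store.put("reports/q1.csv", "v2")
print(read_until_consistent(store, "reports/q1.csv", "v2"))  # prints "v2"
```

A file client performing the same read would simply block inside the gateway until the object version settles, which is where the latency cost of bridging the two models shows up.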

Qumulo is also developing the ability to archive to S3 buckets, on top of its existing archive-optimised nodes. This will see AWS S3 become a cloud tier behind the on-premises Qumulo archive box. Initially Amazon will move the objects to Glacier for lower-cost archiving, but Presley said Qumulo is thinking about doing such things itself.
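The "Amazon moves the objects" part is typically done with S3 lifecycle rules, which transition objects to the Glacier storage class after a set age. As a hedged sketch of the kind of policy such a cloud tier could rely on (the prefix, bucket name, and 30-day threshold are illustrative assumptions, not anything Qumulo has announced):

```python
def glacier_archive_policy(prefix, days_to_glacier=30):
    """Build an S3 lifecycle configuration that transitions objects
    under `prefix` to the GLACIER storage class after a given number
    of days."""
    return {
        "Rules": [
            {
                "ID": f"archive-{prefix.rstrip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": days_to_glacier, "StorageClass": "GLACIER"}
                ],
            }
        ]
    }


policy = glacier_archive_policy("qumulo-archive/", days_to_glacier=30)

# With boto3 this would be applied to a (hypothetical) bucket as:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-qumulo-tier", LifecycleConfiguration=policy)
```

Doing "such things itself", as Presley put it, would mean Qumulo's own software deciding when to move data between storage classes rather than delegating that to Amazon's lifecycle engine.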

Joel Groen, Qumulo product manager, responding to an audience question, said multi-cloud file access will emerge. Users will be able to move the data back and forth. Qumulo has replication today, and over time it will develop ways to move data sets between clouds more efficiently.

Azure Blob storage support is coming sometime in 2019. Customers are running Qumulo in Google Cloud Platform but this capability has not been announced yet. Blocks & Files expects that to come this year as well.