IBM Storage Ceph gets locked objects and more

Cephalopod. Wikipedia public domain image: https://commons.wikimedia.org/wiki/File:C_Merculiano_-_Cephalopoda_1.jpg

IBM has updated Ceph with object lock immutability for ransomware protection and previews of NVMe-oF and NFS to object ingest.

Storage Ceph is what IBM now calls Red Hat Ceph, the massively scalable open-source object, file, and block storage software it inherited when it acquired Red Hat. The object storage part of Ceph is called RADOS (Reliable Autonomic Distributed Object Store). A Ceph Object Gateway, also known as RADOS Gateway (RGW), is an object storage interface built on top of the librados library to provide applications with a RESTful gateway to Ceph storage clusters.

Ex-Red Hat and now IBM Product Manager Marcel Hergaarden says in a LinkedIn post that Storage Ceph v7.0 is now generally available. It includes certification of Object Lock functionality by Cohasset, enabling SEC and FINRA WORM compliance for object storage and meeting the requirements of CFTC Rule 1.31(c)-(d).
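For a sense of how this is consumed: Object Lock is exercised through the standard S3 API, so it works with any S3 client pointed at an RGW endpoint. Here is a minimal boto3 sketch; the endpoint, credentials, bucket name, and retention period are illustrative placeholders, not values from IBM's documentation.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Placeholder RGW endpoint and credentials -- substitute real values.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object Lock must be switched on when the bucket is created.
s3.create_bucket(Bucket="worm-records", ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: the object can be neither overwritten nor deleted
# until the retain-until date passes, which is what WORM rules require.
s3.put_object(
    Bucket="worm-records",
    Key="trade-log-2023-12-01.csv",
    Body=b"trade records ...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=7 * 365),
)
```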

There is NFS support for the Ceph filesystem, meaning customers can now create, edit, and delete NFS exports from within the Ceph dashboard after configuring the Ceph filesystem. Hergaarden said: “CephFS namespaces can be exported over the NFS protocol, using the NFS Ganesha service. Storage Ceph Linux clients can mount CephFS natively because the driver for CephFS is integrated in the Linux kernel by default. With this new functionality, non-Linux clients can now also access CephFS, by using the NFS 4.1 protocol, [via the] NFS Ganesha service.”
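The same exports can also be created from the command line. As a hedged sketch, wrapped in Python only to keep these examples in one language; the cluster ID, pseudo-path, and filesystem name are placeholders, and the exact flag spellings have shifted between Ceph releases:

```python
import subprocess

# Export a CephFS filesystem over NFS via the Ganesha service.
# "nfs-cluster", "/exported-fs", and "cephfs" are placeholder names.
subprocess.run(
    [
        "ceph", "nfs", "export", "create", "cephfs",
        "--cluster-id", "nfs-cluster",
        "--pseudo-path", "/exported-fs",
        "--fsname", "cephfs",
    ],
    check=True,
)
```

A non-Linux client would then mount the pseudo-path with an ordinary NFS 4.1 mount.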

RGW, the RADOS gateway, can now be set up and configured in multi-site mode from the dashboard. The dashboard supports object bucket-level interaction, provides multi-site synchronization status details, and can be used for CephFS volume management and monitoring.

He said Storage Ceph provides improved performance for Presto and Trino applications by pushing S3 Select queries down to the RADOS Gateway (RGW). V7.0 supports the CSV, JSON, and Parquet data formats for S3 Select.
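In client terms the pushdown is the S3 Select call itself. A sketch with boto3, where the endpoint, bucket, key, and query are placeholders:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080")

# The SQL expression is evaluated on the gateway, so only matching
# rows travel back to the client instead of the whole CSV object.
resp = s3.select_object_content(
    Bucket="analytics",
    Key="trades.csv",
    ExpressionType="SQL",
    Expression="SELECT s._1, s._3 FROM s3object s WHERE s._3 > '100'",
    InputSerialization={"CSV": {"FileHeaderInfo": "NONE"}},
    OutputSerialization={"CSV": {}},
)

for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```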

It also has RGW policy-based data archive and migration to the public cloud. Users can “create policies and move data that meets policy criteria to an AWS-compatible S3 bucket for archive, for cost, and manageability reasons.” Targets could be AWS S3 or Azure Blob buckets. RGW gets better multi-site performance with object storage geo-replication: there is “improved performance of data replication and metadata operations” plus “increased operational parallelism by optimizing and increasing the RadosGW daemon count, allowing for improved horizontal scalability.” But no performance numbers are supplied.
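The policy mechanics ride on S3 lifecycle configuration. As an illustrative sketch, a rule like the one below would move ageing objects to a cloud-tier storage class; "CLOUDTIER" is a placeholder name for a transition target the RGW administrator would have to define first:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080")

# Move objects under logs/ to a cloud-tier storage class once they are
# 30 days old. "CLOUDTIER" is a placeholder for an admin-defined class
# that points at an AWS S3 or Azure Blob target.
s3.put_bucket_lifecycle_configuration(
    Bucket="archive-demo",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "CLOUDTIER"}],
            }
        ]
    },
)
```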

The minimum node count for erasure coding has dropped to four with 2+2 erasure-coded pools: two data chunks plus two coding chunks, each on a separate node.
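For illustration, such a pool is built from a matching erasure-code profile. A sketch using the standard ceph commands, wrapped in Python, where the profile and pool names are placeholders:

```python
import subprocess

# k=2 data chunks plus m=2 coding chunks, each on a separate host,
# which is why four nodes suffice. Names below are placeholders.
subprocess.run(
    ["ceph", "osd", "erasure-code-profile", "set", "ec22",
     "k=2", "m=2", "crush-failure-domain=host"],
    check=True,
)
subprocess.run(
    ["ceph", "osd", "pool", "create", "ecpool", "erasure", "ec22"],
    check=True,
)
```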

Preview additions

Three Ceph additions are provided in technical preview mode with v7.0, meaning don’t use them yet in production applications:

NVMe over Fabrics (NVMe-oF) block storage. Clients use an NVMe-oF initiator to connect to an IBM Storage Ceph NVMe-oF gateway, which accepts initiator connections on its north end and connects into RADOS on its south end. Performance is said to equal that of native RBD (RADOS Block Device) block storage.
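From the client side this should behave like any other NVMe/TCP target. An illustrative initiator-side sketch using nvme-cli, where the gateway address and subsystem NQN are placeholders:

```python
import subprocess

# Attach a Linux NVMe/TCP initiator to the Ceph NVMe-oF gateway.
subprocess.run(
    ["nvme", "connect",
     "-t", "tcp",                                # transport
     "-a", "192.0.2.10",                         # gateway (north-end) address
     "-s", "4420",                               # NVMe/TCP service port
     "-n", "nqn.2016-06.io.example:ceph-demo"],  # placeholder subsystem NQN
    check=True,
)
# The exposed namespace then shows up as a local block device,
# e.g. /dev/nvme1n1, backed by RADOS on the gateway's south end.
```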

An object archive zone keeps every version of every object, providing the user with an object catalogue that contains the full history of each object. It provides immutable objects that cannot be deleted or modified from RADOS gateway (RGW) endpoints, and it enables the recovery of any version of any object that existed on the production sites. This is good for ransomware protection and disaster recovery.

To limit what goes into the archive, there is archive zone bucket granularity, which can be used to enable or disable replication to the archive zone on a per-bucket basis.
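Recovery from the archive zone is then an ordinary versioned-S3 read. A sketch, with the endpoint and names as placeholders:

```python
import boto3

# Placeholder endpoint for the archive zone's own RGW.
s3 = boto3.client("s3", endpoint_url="http://archive-rgw.example.com:8080")

# Every stored version of the object is listed, newest first.
versions = s3.list_object_versions(Bucket="prod-data", Prefix="report.pdf")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["LastModified"])

# Pull back a specific historical version -- say, a pre-ransomware copy.
oldest = versions["Versions"][-1]
obj = s3.get_object(Bucket="prod-data", Key="report.pdf",
                    VersionId=oldest["VersionId"])
```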

A third preview is of an NFS to RADOS Gateway back-end which allows for data ingest via NFS into the Ceph object store. Hergaarden says: “This can be useful for easy ingests of object data from legacy applications which do not natively support the S3 object API.”
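Assuming such an export is mounted at /mnt/rgw-nfs and backed by a bucket named ingest (both placeholder names), ingest then amounts to a file copy that surfaces as an object:

```python
import shutil
import boto3

# Drop a file onto the NFS mount that fronts the RADOS gateway.
shutil.copy("/data/legacy-export.dat", "/mnt/rgw-nfs/legacy-export.dat")

# The same bytes should then be readable as an object over the S3 API.
s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080")
obj = s3.get_object(Bucket="ingest", Key="legacy-export.dat")
print(obj["ContentLength"])
```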