Red Hat has made Ceph faster, enabled it to scale out to a billion-plus objects, and added more automation for admins.
Red Hat Ceph Storage 4 provides a 2x acceleration of write-intensive object storage workloads plus lower latency. Object Storage Daemons (OSDs) now write directly to disk and gain a faster RocksDB-based metadata store and a write-ahead log, which together improve bandwidth and I/O throughput.
Open-source Ceph provides block, file and object storage from a single distributed pool of storage, and by default keeps three copies of data for reliability.
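For readers who have not used Ceph's object interface, the sketch below shows how an application writes to and reads from a pool through the librados Python binding. It is a minimal illustration under stated assumptions: a running cluster whose config file and keyring are readable by the client, and an existing pool named 'mypool' (the pool name and object key are placeholders). Looping the write call is also a crude way of generating the kind of write-intensive object workload the performance figures above describe.

```python
import rados

# Connect to the cluster using the standard config file and client keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on an existing pool ('mypool' is a placeholder name).
ioctx = cluster.open_ioctx('mypool')
try:
    # Write an object, then read it back.
    ioctx.write_full('demo-object', b'hello from librados')
    print(ioctx.read('demo-object'))
finally:
    ioctx.close()
    cluster.shutdown()
```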
Ceph Storage 4 features include:
- Simplified installation process, using Ansible playbooks, with standard installations completed in under 10 minutes
- New management dashboard with a “heads up” view of operations to help admin staff deal with problems faster
- Quality of service monitoring feature to help verify application QoS in a multi-tenant hosted cloud environment and control noisy neighbour apps
- Integrated bucket notifications, as a Tech Preview, to support Kubernetes-native serverless architectures and enable automated data pipelines (see the bucket notification sketch after this list)
- Ceph File System (CephFS) support for taking snapshots, as a Tech Preview (see the snapshot sketch after this list)
- Added support for S3-compatible storage classes to better control data placement (see the storage class sketch after this list)
- Improved scalability, with a starting cluster size of three nodes and support for more than a billion objects.
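The bucket notification preview works through the S3-compatible notification API exposed by the RADOS Gateway (RGW). The sketch below is a hedged illustration using boto3, not Red Hat's own tooling: the endpoint, credentials, bucket name and topic ARN are placeholders, and it presumes a notification topic has already been created on the gateway.

```python
import boto3

# Placeholder endpoint and credentials for a RADOS Gateway (RGW) instance.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# Ask RGW to publish object-created events on 'my-bucket' to an existing
# notification topic (the ARN format here is an assumption).
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',
    NotificationConfiguration={
        'TopicConfigurations': [{
            'Id': 'new-object-events',
            'TopicArn': 'arn:aws:sns:default::my-topic',
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)
```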
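CephFS snapshots are taken by creating a directory inside a hidden .snap directory on a mounted file system, so no special API is required. A minimal sketch, assuming the file system is mounted at /mnt/cephfs and snapshots are enabled:

```python
import os

# Assumed mount point of the CephFS file system and directory to snapshot.
target_dir = '/mnt/cephfs/projects'

# Creating a subdirectory under the hidden .snap directory triggers a
# point-in-time snapshot of target_dir; removing it deletes the snapshot.
os.mkdir(os.path.join(target_dir, '.snap', 'before-upgrade'))

# List the directory's existing snapshots.
print(os.listdir(os.path.join(target_dir, '.snap')))
```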
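Storage classes let the gateway place objects on different underlying pools, for example slower or erasure-coded media for colder data. The sketch below uploads an object into a custom class via boto3; the class name COLD and the connection details are assumptions, and the class itself must already be defined in the gateway's placement configuration.

```python
import boto3

# Placeholder RGW endpoint and credentials.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# Upload an object into a custom storage class ('COLD' is an assumed name
# that must already exist in the gateway's placement configuration).
s3.put_object(
    Bucket='my-bucket',
    Key='archive/report.csv',
    Body=b'col1,col2\n1,2\n',
    StorageClass='COLD',
)

# The object's storage class is reported on a HEAD request.
print(s3.head_object(Bucket='my-bucket', Key='archive/report.csv').get('StorageClass'))
```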
Red Hat has published a scalability document detailing its billion-plus object test, along with the v4 release notes.