One of the dangers of Ceph was that you could accidentally remove a multi-terabyte pool and lose all the data. Although the CLI tools asked you for confirmation, librados and all its bindings did not.
Imagine explaining that you just removed a 200TB pool from your storage system due to a typo in your Python code…
So I suggested that we come up with a mechanism to prevent pools from being deleted from a Ceph cluster. And Sage quickly came up with something!
Ceph version 0.94, aka ‘Hammer’, came out a couple of weeks ago, and it has some fancy features which prevent you from removing a pool by accident or on purpose.
Monitors denying pool removal
A new configuration setting for the monitors has been introduced:
mon_allow_pool_delete = false
If you add that to ceph.conf (in the [mon] section) and restart your MONs, you will not be able to remove any pool from your Ceph cluster, neither via the CLI nor directly via librados. The Monitors will simply refuse it:
root@admin:~# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
root@admin:~# rados rmpool rbd rbd --yes-i-really-really-mean-it
pool rbd does not exist
error 1: (1) Operation not permitted
This is a cluster-wide configuration setting and can only be changed by restarting your Monitors, which makes it a good way to prevent anybody from removing a pool by accident or on purpose.
A different way to achieve this is by setting the new nodelete flag on a pool. Setting this flag prevents the pool from being removed.
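Setting the nodelete flag uses the same `ceph osd pool set` command shown further down for nosizechange; a quick sketch, using the ‘rbd’ pool from the examples above:

```shell
$ ceph osd pool set rbd nodelete true
$ ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
```

With the flag set, the Monitors refuse the deletion, but unlike mon_allow_pool_delete this works per pool and takes effect immediately, without restarting the Monitors.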
Next to the nodelete flag, a couple of other flags were introduced: nosizechange, which prevents changes to a pool’s size and min_size, and nopgchange, which prevents changes to pg_num and pgp_num.
The flags speak for themselves. If you set these flags those operations are no longer allowed:
root@admin:~# ceph osd pool set rbd nosizechange true
set pool 0 nosizechange to true
root@admin:~# ceph osd pool set rbd size 5
Error EPERM: pool size change is disabled; you must unset nosizechange flag for the pool first
I’m not allowed to change the size (aka the replication level) of the pool ‘rbd’ while that flag is set.
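Unsetting a flag works the same way: set it to false. So to deliberately change the replication level, drop the flag first:

```shell
$ ceph osd pool set rbd nosizechange false
$ ceph osd pool set rbd size 5
```

That extra step is the whole point: an accidental command no longer goes through, while a deliberate change only costs you one additional command.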
Applying all flags
To apply these flags quickly to all your pools, simply execute these three one-liners:
$ for pool in $(rados lspools); do ceph osd pool set $pool nosizechange true; done
$ for pool in $(rados lspools); do ceph osd pool set $pool nopgchange true; done
$ for pool in $(rados lspools); do ceph osd pool set $pool nodelete true; done
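To check that the flags actually stuck, the pool flags show up in the OSD map dump; a quick way to inspect them:

```shell
$ ceph osd dump | grep pool
```

Each pool line should now list nodelete, nopgchange and nosizechange among its flags.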
Your Ceph cluster just became a lot safer! No more data loss or downtime due to fat fingers 🙂