Set OSD pool size when creating ceph and cephfs storage pools
#14044
Conversation
Heads up @mionaalex - the "Documentation" label was applied to this issue.
I can't seem to get the docs to pick up the new extensions, I just get […]
Got it, it was […]
Force-pushed from 44590c2 to 5430bf6
@simondeziel Got any recommendations for setting up more than 1 OSD for MicroCeph? I'd like to add a test case for the new key, but it's tied to the number of available disks supplied to MicroCeph, which is 1 in the GitHub workflow tests.
Force-pushed from b44133e to 5a26cc4
Looks like one thing I overlooked in my testing is that […]. So while the impetus for this PR was to avoid running […], it's still added flexibility. But if you'd prefer to avoid managing these keys in LXD, since it doesn't actually solve the underlying problem of needing to set global Ceph configuration, I can close the PR @tomponline
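For reference, the global Ceph configuration mentioned above would normally be set out of band with Ceph's own tooling. A minimal sketch, assuming the option in question is the cluster-wide default pool replication size:

```sh
# Assumed workaround under discussion: set the cluster-wide default pool size
# before LXD creates its pools, so every newly created pool inherits it.
ceph config set global osd_pool_default_size 1
ceph config set global osd_pool_default_min_size 1
```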
Force-pushed from 850c106 to 3d07d9c
Could we maybe use the ephemeral disk, split it into 3 partitions and export each as a loop dev backed by its respective disk partition?
simondeziel left a comment:
LGTM in general with the caveat that the setup-microceph action is being used elsewhere and should probably not default to using 3 partitions. Could you maybe make that an action parameter?
Hmm, does it matter since we set the replication factor to 1 anyway?
That cuts the available space in 3 (so ~25GiB instead of ~75GiB IIRC), which might be a bit tight for some.
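A minimal sketch of the partitioning idea discussed above, for a CI runner; the disk path, sizes, and the loop-device step are assumptions rather than what the setup-microceph action ultimately does:

```sh
#!/bin/sh -eu
# Split the runner's ephemeral disk into three partitions and hand each to
# MicroCeph as a separate OSD, so tests can exercise pool sizes above 1.
disk=/dev/sdb   # assumed ephemeral disk

# Three roughly equal GPT partitions.
sudo parted --script "${disk}" mklabel gpt \
  mkpart osd1 0% 33% \
  mkpart osd2 33% 66% \
  mkpart osd3 66% 100%

# Expose each partition through a loop device and add it as an OSD.
for part in "${disk}1" "${disk}2" "${disk}3"; do
  loop="$(sudo losetup --find --show "${part}")"
  sudo microceph disk add "${loop}"
done
```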
Force-pushed from efff0ca to 10c9233
The pool should have been automatically removed by the preceding command, so we shouldn't need to use ceph to manually remove it. Instead, we should check that the pool was indeed removed. Signed-off-by: Max Asnaashari <[email protected]>
tomponline left a comment:
thanks!
Closes #14006
Adds the keys `ceph.osd.pool_size` and `cephfs.osd_pool_size` (in keeping with the other naming schemes). By default, if no value is supplied, the pool size will be pulled from the global default pool size. It can be set to any value larger than 0.
On update, we will try to re-apply the OSD pool size value, if it has changed.
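A quick usage sketch of the new keys; the pool names, the remote cluster details, and the pre-existing CephFS filesystem (`my-fs`) are assumptions for illustration:

```sh
# Create a Ceph RBD storage pool whose OSD pool uses a replication size of 3.
lxc storage create my-ceph ceph ceph.osd.pool_size=3

# Create a CephFS storage pool, likewise pinning its OSD pool size.
lxc storage create my-cephfs cephfs source=my-fs cephfs.osd_pool_size=3

# Changing the value later re-applies the OSD pool size on update.
lxc storage set my-ceph ceph.osd.pool_size 2
```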