
Conversation

@masnax
Contributor

@masnax masnax commented Sep 4, 2024

Closes #14006

Adds the keys ceph.osd.pool_size and cephfs.osd_pool_size (in keeping with the other naming schemes).

If no value is supplied, the pool size defaults to the cluster's global default pool size. It can be set to any value larger than 0.

On update, LXD will re-apply the OSD pool size if the value has changed.
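
For illustration, usage might look something like this (the pool name "remote" here is just an example):

```sh
# Create a Ceph RBD-backed storage pool with a replication factor of 3,
# then change the replication factor afterwards.
lxc storage create remote ceph ceph.osd.pool_size=3
lxc storage set remote ceph.osd.pool_size 2
```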

@github-actions github-actions bot added Documentation Documentation needs updating API Changes to the REST API labels Sep 4, 2024
@github-actions

github-actions bot commented Sep 4, 2024

Heads up @mionaalex - the "Documentation" label was applied to this issue.

@masnax
Contributor Author

masnax commented Sep 4, 2024

I can't seem to get the docs to pick up the new extensions; I just get:

WARNING: Could not find target storage-ceph-pool-conf:ceph.osd.pool_size in api-extensions

Got it, the missing step was running make update-metadata.

@masnax
Contributor Author

masnax commented Sep 4, 2024

@simondeziel Got any recommendations for setting up more than 1 OSD for microceph? I'd like to add a test case for the new key but it's tied to the number of available disks supplied to MicroCeph, which is 1 in the github workflow tests.

@masnax masnax force-pushed the osd_pool_size branch 3 times, most recently from b44133e to 5a26cc4 on September 4, 2024 at 23:01
@masnax
Contributor Author

masnax commented Sep 4, 2024

Looks like one thing I overlooked in my testing is that size=1 is disabled by default unless the global config is changed.

So while the impetus for this PR was to avoid running ceph config set global osd_pool_default_size 1 to set the default pool size, and adding these keys does avoid that, a user who wants size=1 will still need to run ceph config set global mon_allow_pool_size_one true for the keys to work.

It's still added flexibility, but since it doesn't fully solve the underlying problem of needing to set global Ceph configuration, I can close the PR if you'd prefer to avoid managing these keys in LXD, @tomponline
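
For illustration, the minimal sequence for a single-replica pool would be something like this (the pool name "single" is just an example):

```sh
# Ceph-wide setting that still has to be applied out of band.
ceph config set global mon_allow_pool_size_one true

# With that in place, the new key can request a replication factor of 1.
lxc storage create single ceph ceph.osd.pool_size=1
```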

@masnax masnax force-pushed the osd_pool_size branch 2 times, most recently from 850c106 to 3d07d9c on September 5, 2024 at 15:59
@simondeziel
Member

simondeziel commented Sep 5, 2024

> @simondeziel Got any recommendations for setting up more than 1 OSD for microceph? I'd like to add a test case for the new key but it's tied to the number of available disks supplied to MicroCeph, which is 1 in the github workflow tests.

Could we maybe use the ephemeral disk, split it into 3 partitions, and export each as a loop device backed by its respective partition?
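
Something along these lines, perhaps (the disk path is an assumption; the real ephemeral disk depends on the runner):

```sh
DISK=/dev/sdb  # assumed ephemeral disk path

# Split the ephemeral disk into three partitions.
sudo parted --script "${DISK}" mklabel gpt \
  mkpart osd1 0% 33% \
  mkpart osd2 33% 66% \
  mkpart osd3 66% 100%

# Expose each partition as a loop device and hand it to MicroCeph as an OSD.
for part in "${DISK}"1 "${DISK}"2 "${DISK}"3; do
  loop="$(sudo losetup --find --show "${part}")"
  sudo microceph disk add "${loop}"
done
```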

@masnax masnax marked this pull request as ready for review October 22, 2024 23:20
Member

@simondeziel simondeziel left a comment


LGTM in general with the caveat that the setup-microceph action is being used elsewhere and should probably not default to using 3 partitions. Could you maybe make that an action parameter?

@masnax
Contributor Author

masnax commented Oct 23, 2024

> LGTM in general with the caveat that the setup-microceph action is being used elsewhere and should probably not default to using 3 partitions. Could you maybe make that an action parameter?

Hmm, does it matter since we set the replication factor to 1 anyway?

@simondeziel
Member

> LGTM in general with the caveat that the setup-microceph action is being used elsewhere and should probably not default to using 3 partitions. Could you maybe make that an action parameter?

> Hmm, does it matter since we set the replication factor to 1 anyway?

That cuts the available space in 3 (so ~25 GiB instead of ~75 GiB, IIRC), which might be a bit tight for some lxd-ci.git tests. It also comes with the losetup overhead.

@masnax masnax force-pushed the osd_pool_size branch 4 times, most recently from efff0ca to 10c9233 on November 29, 2024 at 23:38
simondeziel previously approved these changes Dec 4, 2024
masnax added 12 commits December 9, 2024 16:26
The pool should have been automatically removed by the preceding command, so we shouldn't need to use ceph to manually remove it. Instead, we should check that the pool was indeed removed.

Signed-off-by: Max Asnaashari <[email protected]>
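
A rough sketch of what that check could look like in the test (pool names are hypothetical):

```sh
# Deleting the LXD storage pool should also remove the backing OSD pool.
lxc storage delete cephtest

# Verify the OSD pool is gone rather than removing it with ceph ourselves.
! ceph osd pool ls | grep -qwF cephtest
```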
Member

@tomponline tomponline left a comment


thanks!

@tomponline tomponline merged commit 22cbc6f into canonical:main Dec 10, 2024
26 checks passed