Storage: Standardize volume.size usage across drivers #14829

Draft: wants to merge 2 commits into main from standardize_volume.size_usage

Conversation

@hamistao (Contributor) commented on Jan 21, 2025:

On non-block-based drivers, filesystem volumes can be unbounded, with access to the entire storage quota of their pool. When creating such volumes on btrfs and dir, the pool's volume.size config is ignored and unbounded volumes are created. But on zfs, those volumes are bounded by whatever is defined in volume.size, which is inconsistent behavior. To reproduce this, run the following on pools backed by each driver and compare the results:

lxc launch ubuntu:n c -s poolName
lxc storage volume create poolName vol
lxc storage volume attach poolName vol c /mnt/foo
lxc exec c -- df -h
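
For the comparison above to show a difference, volume.size needs to be set on the pool. A minimal setup sketch (the pool name, driver, and size here are illustrative, not taken from the PR):

lxc storage create poolName zfs volume.size=7GiB

Creating the same pool with the btrfs or dir driver and re-running the commands above shows the filesystem volume bounded only on zfs.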

@hamistao force-pushed the standardize_volume.size_usage branch from 16a7426 to fced7e1 on January 21, 2025, 04:04
     srcVol: Volume{volType: VolumeTypeImage},
     err:    nil,
-    size:   "2GiB",
+    size:   "",
Member:

This doesn't look right, let's discuss.

Member, commenting on commit "lxd/storage/drivers/volume: only apply volume.size on block volumes":

This is incorrect.

We need to apply volume.size on filesystem volumes too.
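
To illustrate the behavior being asked for, here is a small hypothetical Go sketch (not LXD's actual code; effectiveSize and contentType are made-up names for this example): the pool-level volume.size default is applied to any volume without an explicit size, regardless of content type.

package main

import "fmt"

type contentType string

const (
	contentTypeFS    contentType = "filesystem"
	contentTypeBlock contentType = "block"
)

// effectiveSize returns the size limit to enforce for a volume: the explicit
// per-volume size if one is set, otherwise the pool's volume.size default.
// The content type is deliberately ignored, so filesystem volumes get the
// same default as block volumes. An empty result means unbounded.
func effectiveSize(volSize, poolDefaultSize string, _ contentType) string {
	if volSize != "" {
		return volSize
	}
	return poolDefaultSize
}

func main() {
	fmt.Println(effectiveSize("", "7GiB", contentTypeFS))         // 7GiB
	fmt.Println(effectiveSize("10GiB", "7GiB", contentTypeBlock)) // 10GiB
}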

@hamistao (Contributor, Author) replied on Jan 21, 2025:

I was wondering if perhaps zfs is the only correct driver by some pre-existing standard. We could fix this inconsistency the other way around and make btrfs and dir apply volume.size to all volumes.
Unfortunately, however we proceed, I worry that some users may be relying on the current behavior.

@tomponline (Member) commented:

When creating such volumes on btrfs and dir, the pool's volume.size config is ignored and unbounded volumes are created

@hamistao please can you check whether this is a regression, specifically on btrfs, by testing LXD 4.0 and upwards to see if volume.size was respected at some point in the past?

dir pools are an exception here, because the majority of the time, even if you specify an explicit size property on a volume (or instance disk device), that size won't be respected unless the underlying directory is on a mount that has project quotas enabled (I understand this to be available on ext4 and xfs filesystems).

So the question is whether we should inherit a pool's volume.size setting, convert it to a size setting for custom volumes and a volatile.rootfs.size setting for instance volumes (where the instance root disk device doesn't have a size), and have it not take effect yet still be reported in the API, which in turn may be surfaced to end users (such as via the LXD UI).
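
For context on the project-quotas point above, a rough sketch of how a dir pool's backing filesystem might have project quotas enabled (device and mount point are placeholders, and this is not part of the PR):

# xfs: project quotas are enabled via a mount option
mount -o prjquota /dev/sdb1 /srv/lxd-dir-pool

# ext4: enable the project feature and prjquota on the filesystem, then mount
tune2fs -O project -Q prjquota /dev/sdb1
mount -o prjquota /dev/sdb1 /srv/lxd-dir-pool

Without this, a size set on a dir-backed volume would be reported but not enforced, as noted above.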

@hamistao (Contributor, Author) commented:

@hamistao please can you check whether this is a regression, specifically on btrfs, by testing LXD 4.0 and upwards to see if volume.size was respected at some point in the past?

On LXD 4.0, btrfs already ignored volume.size when creating a filesystem volume. Here is the output of lxc exec c -- df -h, with c being a container created in a btrfs pool with volume.size=7GiB and a custom volume attached at /mnt/foo, as described in the PR description:

Filesystem       Size  Used Avail Use% Mounted on
/dev/loop24      **30G**  1.1G   29G   4% /
none                492K  4.0K  488K   1% /dev
efivarfs            184K  155K   25K  87% /sys/firmware/efi/efivars
tmpfs               100K     0  100K   0% /dev/lxd
tmpfs               100K     0  100K   0% /dev/.lxd-mounts
tmpfs                12G     0   12G   0% /dev/shm
tmpfs               4.7G  164K  4.7G   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/loop24      **30G**  1.1G   29G   4% /mnt/foo

For comparison, here is the same output when using zfs on 5.0/stable (zfs wasn't present on 4.0):

Filesystem              Size  Used Avail Use% Mounted on
zfs/containers/c        7.5G  483M  **7.0G**   7% /
none                    492K  4.0K  488K   1% /dev
efivarfs                184K  155K   25K  87% /sys/firmware/efi/efivars
tmpfs                   100K     0  100K   0% /dev/lxd
tmpfs                   100K     0  100K   0% /dev/.lxd-mounts
tmpfs                    12G     0   12G   0% /dev/shm
tmpfs                   4.7G  164K  4.7G   1% /run
tmpfs                   5.0M     0  5.0M   0% /run/lock
zfs/custom/default_vol  7.0G  128K  **7.0G**   1% /mnt/foo
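
To confirm the bound on the zfs side, a sketch using the dataset name from the output above (depending on the pool's zfs.use_refquota setting, the limit may show up as quota or refquota):

zfs get quota,refquota zfs/custom/default_vol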

I will also try testing dir with project quotas enabled.

@tomponline (Member) commented:

Thanks @hamistao, let's de-prioritize this for now and come back to it once we've finished the roadmap item you have, plus the size reporting issue.
