Use qvm.anon-whonix, deprecate securedrop-handle-upgrade for sys-whonix #1227

Open · wants to merge 1 commit into main from 1060-qvm-whonix-state
Conversation

rocodes (Contributor) commented on Jan 2, 2025

Status

Ready for Review

Description of Changes

Refs #1060
Refs #1225
Refs / (possible fix) freedomofpress/securedrop-workstation-ci#68

Changes proposed in this pull request:

  • Use the qvm.anon-whonix Salt state as a requisite for the Whonix template configuration, and as the preferred way to download the Whonix templates and create the anon-whonix VM (see the sketch after this list).
  • Use Salt instead of the securedrop-handle-upgrade script to force a shutdown of sys-whonix so that the changes are applied.
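
A minimal sketch of the idea (the state ID below is hypothetical and this is not the actual SLS in this branch; it assumes the qvm.anon-whonix formula and the qvm.shutdown state shipped with Qubes' Salt configuration):

```
# Rough sketch only; IDs are made up and options may differ from the real state.
include:
  - qvm.anon-whonix            # upstream Qubes formula: downloads the Whonix
                               # templates, creates anon-whonix, applies tags

# Shut sys-whonix down via Salt (instead of securedrop-handle-upgrade) so a
# template change can take effect on the next start.
shutdown-sys-whonix-for-template-change:
  qvm.shutdown:
    - name: sys-whonix
    - require:
      - sls: qvm.anon-whonix
```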

Testing

  • Visual review
  • CI
  • Manual upgrade testing (see below)

Manual upgrade testing (existing SDW install)

  • Start with a Qubes 4.2 SDW install.
  • On your system, change the template of sys-whonix and anon-whonix to any other template, or make changes to the existing template, such as disabling AppArmor. (Don't run/use these VMs while you are making a mess, of course. This is basically to simulate a template upgrade/template change.)
  • Build an rpm from the tip of this branch and run apply.
  • Verify that provisioning is successful.
  • Verify that sys-whonix and anon-whonix have the correct Whonix 17 gateway and workstation templates, respectively, have AppArmor configured, and that sys-whonix has the anon-gateway tag.
  • Verify that sd-whonix has the anon-gateway tag.

Deployment

Any special considerations for deployment? Consider both:

  1. Upgrading existing pilot instances
  2. New installs

New installs: n/a.

Upgrading: TODO. On existing installs, sd-whonix will be missing the tags that it should have inherited from whonix-gateway-17. Since sd-whonix is based on the same template as sys-whonix, and not cloned from it, it should have the anon-gateway tag after a successful Salt run.

Checklist

If you have made changes to the provisioning logic

  • All tests (make test) pass in dom0

If you have added or removed files

  • I have updated MANIFEST.in and rpm-build/SPECS/securedrop-workstation-dom0-config.spec

If documentation is required

  • I have opened a PR in the docs repo for these changes, or will do so later
  • I would appreciate help with the documentation

rocodes force-pushed the 1060-qvm-whonix-state branch 2 times, most recently from b428544 to a53d4f3, on January 3, 2025 at 14:38
rocodes force-pushed the 1060-qvm-whonix-state branch from a53d4f3 to 0def809 on January 10, 2025 at 16:36
rocodes marked this pull request as ready for review on January 10, 2025 at 16:48
rocodes requested a review from a team on January 10, 2025 at 16:58
rocodes force-pushed the 1060-qvm-whonix-state branch from 0def809 to 6d4149d on January 10, 2025 at 18:22
deeplow self-assigned this on Jan 14, 2025
deeplow (Contributor) left a comment
Thanks for the PR; however, I am not sure this is the right approach. I agree with the principle of simplifying logic and avoiding duplicated references, which I'm all for. However, if we do this, we have to go all the way, for fear that we'll otherwise end up with out-of-sync versions. Currently there are two "whonix version" variables (ours and Qubes'), which could lead to breakage in our logic when a new Whonix version is rolled out (something like this). I'm happy to go into a bit more detail if this sounds unclear.

I'm planning to write something up in #1225 describing my thoughts on system template upgrades.

rocodes (Contributor, author) commented on Jan 15, 2025

I'm wondering what you think is the wrong approach about applying qvm.anon-whonix? The reason I am suggesting it is that it is part of Whonix's intended/expected configuration, and not running it goes against their expectations for VM configuration. For example, the Salt states do things like apply tags (which then govern some RPC policies, including, iirc, policies around qubes.GetDate), which I think could be why we are having so many clock issues that cause CI and/or end-user updates to stall out, per freedomofpress/securedrop-workstation-ci#68.

If we don't apply Whonix's Salt states, we are saying that we will be responsible for all Whonix VM configuration beyond just the template download, including RPC policies, and I think there's a risk that our config and theirs will (continue to) diverge. I would say that as long as we are using Whonix VMs, my preference would be to use them as closely as possible to how they are tested upstream, or at least to build on top of an accepted upstream configuration.

(Edit: a last thing I should note is that we were relying on the qvm.anon-whonix state up until Qubes 4.1 support. There isn't a lot of information in that PR on why it was dropped, but my guess is that it was to be confident about supporting one Whonix version across different Qubes OS versions. However, I'm not sure enough attention was paid to the side effects of losing those Salt states.)

Re: template upgrades/version number: it's not ideal that we also maintain a hardcoded version number corresponding to our intended Whonix version, because it is duplicative. But what it means is that as soon as a new Whonix version is supported by Qubes, users will have it downloaded onto their machines. Then, once we have tested the latest Whonix templates, we bump our version to apply them both to our components (sd-whonix) and to sys- and anon-whonix. Until we explicitly bump the supported Whonix version, the changes are not applied to our VMs or to preexisting sys- and anon-whonix.
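
As a rough illustration only (the variable name and state ID here are made up, not the actual SDW states, and qvm.prefs is assumed from Qubes' formula), the pinning could look something like:

```
# Hypothetical sketch: one pinned version controls which templates our qubes
# use, even if qvm.anon-whonix has already downloaded a newer Whonix release.
{% set sd_supported_whonix_version = '17' %}

sd-whonix-template:
  qvm.prefs:
    - name: sd-whonix
    - template: whonix-gateway-{{ sd_supported_whonix_version }}
```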

Maybe I'm missing something (I'm certainly open to hearing that), but I'm not sure that I see the downsides to this approach. (It does mean that we would have to be timely about Whonix version bumps, but we have to be anyway, because iirc Whonix templates go EOL fairly quickly after a new version is available.)

…ff sys-whonix instead of securedrop-handle-upgrade script.
rocodes force-pushed the 1060-qvm-whonix-state branch from 6d4149d to 1fa7a86 on January 16, 2025 at 14:37
rocodes (Contributor, author) commented on Jan 16, 2025

(Rebased and force-pushed to try to get the CI runner to do its thing; looks like it was stalled out for a while.)

deeplow (Contributor) commented on Jan 17, 2025

> But what it means is that as soon as a new whonix version is supported, users will have it downloaded onto their machines.
> Then, when we have tested the latest whonix templates, we set our version to apply them to both our components (sd-whonix) and also to sys- and anon-whonix, and the changes are not applied to our VMs or to preexisting sys- and anon- whonix until we explicitly bump the supported whonix version.

I see your point. Even if qvm.anon-whonix fetches these templates and sets them for anon-whonix and other Whonix qubes, we'll revert that right away. So, as long as we're not deploying on a system which only has the new Whonix version (and SDW hasn't updated yet), we're golden. And that scenario is not likely to happen, since we advise on the Qubes version to install.

I was originally more concerned about the original Whonix formulas not accounting for in-place changes and being aimed more at running on install (and we do know that there are quite a few challenges with the mix of Salt and state). But upon review, the qvm.anon-whonix formulas are quite simple and change very rarely. So I think it's fine to implement this change.

I still have some questions about the specific implementation of using Salt instead of the template-upgrade script, especially given the unnecessary overhead of shutting down templates in formulas regardless of the need to switch templates. But doing it in Salt is not a regression, so I think it's fine to have it and get a final answer on that later.
