
Command to rebuild all modules / wrapper scripts #501

Open
muffato opened this issue Mar 3, 2022 · 14 comments · May be fixed by #590

Comments

@muffato (Contributor) commented Mar 3, 2022

Hello,

I am looking for a way of rebuilding all modules (or wrapper scripts, now). This could be used when a new release of shpc introduces changes to the way the modules work.

It would also be useful in case people have manually modified some modules, to reset everything.

Currently, we can do this in a few steps:

shpc list > current_software.txt
# upgrade shpc, cleanup the directories
xargs -n 1 shpc install < current_software.txt

It would help if there was a more automated way, for instance one that preserves the containers on disk and only rebuilds the module directory. How feasible and useful do you think that would be?

Matthieu

@vsoch (Member) commented Mar 3, 2022

This is a good question! One thing I've thought about is:

# get updates from last month from GitHub
$ git pull origin main

# Run the update command to get the new "latest" (and I suppose we could have a variant to uninstall the old one, but I'm not sure users would like that).
$ shpc update <module>

# or
$ shpc update --all

And this would go through and install the latest versions for the modules. In the case of adding the symlinks, I'm going to write something similar in the docs to enable this, e.g.:

for module in $(shpc list); do
    shpc install $module --symlink
done

To give you some background on the update process, it definitely could be improved! We have a bot named binoc that runs monthly (only for my review sanity) to look for updated tags and digests. Then the user would need to manually pull and shpc install to get the updates. I also have a set of check commands that can, given the user has pulled, at least tell them if they have the latest version. I'm thinking you want something a little more streamlined - either the ability to run that update manually and on demand (in which case we'd add the functionality of the bot binoc here) for one or all recipes, or to generate a local report and then somehow interactively choose how to update (e.g., installing a new digest vs. a new tag vs. removing / keeping). Could you walk me through what you have in mind for this update workflow, to start the discussion?
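For reference, a minimal sketch of that current manual flow, assuming the check subcommand takes a module name as described above (placeholders in angle brackets):

# pick up the monthly recipe updates produced by the bot
$ git pull origin main

# report whether the installed <module> matches the latest tag/digest in the registry
$ shpc check <module>

# if it does not, install the newest version
$ shpc install <module>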

@vsoch (Member) commented Mar 3, 2022

The PR for the symlink install is in #502 (you probably saw it in the other thread); let's pick up the discussion about this refactor of update tomorrow - feel free to leave me notes here to discuss! Also, @marcodelapierre is across the world and it's his daytime now, so y'all can probably have a good discussion while I'm 😴

@marcodelapierre (Contributor) commented

no energy right now to comment on this, will be back on this one at some point :)

@vsoch (Member) commented Mar 4, 2022

Totally get it - I'm in the same place!

@muffato I think we are going to review the wrapper scripts PR (linked) first and then we will dig into the discussion on an update command as discussed here. Sound good?

@muffato (Contributor, Author) commented Mar 12, 2022

Thanks @vsoch, for your #501 (comment). That made me realise that regenerating the modules can't be distinguished from upgrading them, especially as shpc ships the default registry. A new version of shpc may have an updated registry, with new container.yaml files, which may not even contain the original software versions any more.
To reset the modules using a newer version of shpc, one would have to keep a copy of the old registry, and make the new shpc point at it.

Regardless, something we discussed here this week is that instead of trying to update a module tree in place with an shpc update command, it could be better to work in a different deployment, and then make the switch in an atomic (or quasi-atomic) manner. For instance, install the new version of shpc alongside the old one, do shpc list on the old one and shpc install in the new one, making it create a new module tree. Then, if everything works fine (which can be confirmed by further acceptance and integration tests), make the new module tree the current one by moving directories.
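To make that concrete, a rough sketch of the side-by-side approach (the installation paths, the module tree locations, and the assumption that the new shpc's module_base points at a fresh tree are all hypothetical, not prescribed layouts):

# list what the old deployment has installed
$ /opt/shpc-old/bin/shpc list > current_software.txt

# reinstall the same software with the new shpc, whose module_base is assumed
# to point at a fresh tree, e.g. /opt/modules.new
$ xargs -n 1 /opt/shpc-new/bin/shpc install < current_software.txt

# after acceptance/integration tests pass, swap the trees (quasi-atomically)
$ mv /opt/modules /opt/modules.old && mv /opt/modules.new /opt/modules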

Now I'm less sure about what shpc update should do... Just leaving this for thought :)

@vsoch (Member) commented Mar 12, 2022

I think we can separate two things:

  • updating the module files
  • updating a particular install of a module

For this first implementation, I decided that shpc could take responsibility for the first thing via automation in GitHub. Anyone can pull the new shpc containers and get updated modules. It should not impact their currently installed ones, but if they choose, they can run shpc check to see if they have the latest version, and if not, install it. I don’t think in practice anyone has used shpc check (including myself), so it likely has developed bugs. This decision to place updating in the hands of the user also reflects standard sysadmin practices, where they tend to want complete control over things.

Now we move to this idea of automatic updates, and not for the container files but for the modules themselves. Arguably we can do a few things, either in isolation or combined.

  1. Move the current functionality to update a container.yaml file into the hands of the user. They can ask for on-demand updates, either in combination with the repository ones or instead of them. We could arguably reduce the GitHub recipes to the simplest metadata and then have versions updated by the user, and that could be controlled by having a container.yaml with upper-level metadata (less likely to change) and a separate file generated by the user for versions (see the sketch after this list). This gets messy when we update the metadata, because then the user would need to reinstall the container module files, but it’s not a crazy idea to be able to look for differences between installed and a fresh pull and then create a set of updates to do.
  2. Remove versions from the container.yaml file and get them dynamically when they are asked for. This removes the reproducibility aspect (e.g., I want this exact hash), but we are already doing that with the monthly updater. Then a global or scoped update just checks that file, and the user can ask to install a new version, remove the old one, update containers based on new hashes, etc. I suspect different deployments will want different things!
  3. Extend the check command to be able to look for updates from the Docker registries on the fly and not rely on GitHub, and then (akin to how the lookout tool by @alecbcs works) simply make the command give a heads-up for any container that has a new version, and take some action. This is the same as 2 but on a global level. The user wants a specific recipe for when to do updates for all things, and an automated action to take.
  4. Define a new version update as different from a hash update. I can ask for either kind, e.g., always pull new versions for new modules vs. always delete and replace old containers when a new hash is available (possibly for security), and if it’s the same version it’s less likely to break reproducibility.
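As a sketch of option 1, a recipe could be split into stable metadata shipped with the registry and a user-generated versions file. Everything below is hypothetical: the file split, the tag names, and the digest placeholders are assumptions for illustration, not the current shpc format.

# container.yaml - upper-level metadata, shipped with the registry (hypothetical)
docker: quay.io/biocontainers/samtools
url: https://quay.io/repository/biocontainers/samtools
maintainer: '@vsoch'
description: Tools for manipulating next-generation sequencing data.

# versions.yaml - generated and refreshed on demand by the user (hypothetical)
latest:
  "1.15--h3843a85_0": "sha256:<digest>"
tags:
  "1.14--hb421002_0": "sha256:<digest>"
  "1.15--h3843a85_0": "sha256:<digest>"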

So I do think we have options - and the original design was done really without thinking hard about the update or check scenario. I figured most folks would install the latest and call it a day, and pull once in a while for new containers or versions. I also figured that hashes for a particular version would jump around a bit, but at some point a particular version won’t be updated and it’s left in its final state (the last digest released). Given that registries don’t purge old containers, the older versions will be pretty hard set. Would anyone want to use them though? I’m not sure. This is where reproducibility is kind of fuzzy. Personally speaking I’m not sure I care to use something older.

I do think we can improve upon the current state and just need to carefully think through goals and priorities for shpc. There is a bit of a trade off between automated updates and reproducibility, but that’s probably ok.

@muffato (Contributor, Author) commented Mar 12, 2022

Thank you for your analysis, @vsoch

I personally value the reproducibility aspect quite a lot, including having explicit versions and hashes in container.yaml. For instance, I would be maintaining my own registry, which would have all the old versions I need for whatever reasons. Ideally we should be using recent versions, but there will always be processes that have been tested on version X, and we don't want to upgrade to Y unless properly tested. I would therefore consider the registry to be the source of truth for shpc install and shpc check.

But it sounds like a tool to update a registry would be useful. It would take as input the path to the registry, and optionally a subpath to a tool, look for new tags on the Docker registries themselves, and add them (+ the hashes) to container.yaml. Then, once the container.yaml files have been updated, it's up to the user to decide what they want to do with each piece of software. git diff can list the updates, and shpc check could too. For the actual container/module upgrade, either shpc uninstall + shpc install or shpc upgrade could work. The latter could have --all to upgrade everything. I would use the verb update for the registry, and upgrade for the modules.
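A hypothetical sketch of that workflow, using the proposed update / upgrade verbs (neither exists in shpc today; the registry path and tool name are assumptions):

# refresh tags and digests in a local filesystem registry, optionally scoped to one tool
$ shpc update ./registry
$ shpc update ./registry quay.io/biocontainers/samtools

# review what changed before acting on it
$ git -C ./registry diff

# rebuild installed modules against the updated recipes
$ shpc upgrade quay.io/biocontainers/samtools
$ shpc upgrade --all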

It's obviously not as convenient as having the shpc commands do live queries to the Docker registries to check / install / upgrade a piece of software, but that's where I would draw the line between reproducibility and automated updates.

@vsoch (Member) commented Mar 12, 2022

But it sounds like a tool to update a registry would be useful. It would take as input the path to the registry, and optionally a subpath to a tool, look for new tags on the Docker registries themselves, and add them (+ the hashes) to container.yaml.

Agree! So how would the tool manage the discrepancy between the registry provided in GitHub vs. the one you run locally? There is actually a separate tool called lookout (https://github.com/alecbcs/lookout) created by @alecbcs, and it is the backend to the binoc bot that we run monthly to get updates. So if you want a tool separate from shpc, I think this problem is solved! With lookout you can provide a container unique resource identifier and then see if you have the latest version. The way it works on GitHub is that lookout is connected to a bot named binoc, and binoc has an shpc parser that knows how to update the container.yaml files: https://github.com/autamus/binoc/blob/main/parsers/shpc.go. I haven't tried it, but theoretically you could run binoc outside of the GitHub action.

Then, once the container.yaml files have been updated, it's up to the user to decide what they want to do with each piece of software. git diff can list the updates

If you run binoc, then since it uses lookout, I think that tool can also tell you directly.

shpc check could too. For the actual container/module upgrade, either shpc uninstall + shpc install or shpc upgrade could work. The latter could have --all to upgrade everything. I would use the verb update for the registry, and upgrade for the modules.

I think this makes sense, but the million-dollar question is how we suggest managing things between the GitHub updates vs. the user-requested updates. If you run an update locally and then want to pull fresh code, you will likely (maybe) have to deal with merge conflicts. You can't selectively grab changes. This is why I'm suggesting we have one or the other, or some means so that the registry doesn't serve the versions and we allow the user to generate and update them.

@muffato (Contributor, Author) commented Mar 13, 2022

Thanks. FYI I'm trying to run binoc locally (cf autamus/binoc#18) and will get back to you once I understand how it can work in practice.

@vsoch (Member) commented Sep 5, 2022

@muffato does the remote sync / update set of commands address this, or is there still an unmet need?

@muffato (Contributor, Author) commented Sep 8, 2022

My original request was about regenerating the wrapper scripts upon changes in the shpc codebase, not in the container.yaml files themselves – which will now happen more regularly as the registry has been decoupled from the codebase. An example would be that shpc now creates wrapper scripts for run, exec and shell (#586): how could I update all the modules?
To be honest, it's actually very easy to do: shpc list + shpc install. Is there much to gain from baking this functionality into shpc?
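For example, a minimal sketch of that manual rebuild (assuming shpc list prints one module per line, as in the loop earlier in this thread):

for module in $(shpc list); do
    shpc install "$module"
done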

@vsoch (Member) commented Sep 8, 2022

Probably not at this point? Maybe we just want to make sure this is clear as water in the documentation? 🤔

@muffato (Contributor, Author) commented Sep 10, 2022

Thinking more about that: I wanted an option to regenerate all modules, wrapper scripts, etc., for existing containers. But that's not reliably tractable anymore since the container.yaml files can now be updated remotely. shpc can't guarantee that the container.yaml it'd get would be exactly the same as the one previously used.

So that function would have to be part of shpc install. There's an opportunity here. Currently, shpc install only adds modules, scripts, etc., and if the module/version was already installed there could be some files hanging around from the previous install. Ideally it should remove things first, and then do a clean install.

In short:

  1. make shpc install bail out if the module is already installed. The user needs to add the --force option to do a clean reinstallation
  2. Add a --reinstall-all option to shpc install that (i) supersedes the positional argument install_recipe, (ii) goes through all the modules that are already installed, (iii) checks they still exist (it would quit if one is missing, unless --force is given), and (iv) cleanly reinstalls them.
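A hypothetical sketch of how that interface could look (the --force and --reinstall-all flags are the proposal above, not an existing shpc API; the module name is just an example):

# refuse to overwrite an existing install unless forced (proposed behaviour)
$ shpc install quay.io/biocontainers/samtools
$ shpc install quay.io/biocontainers/samtools --force

# wipe and cleanly regenerate every installed module from its current recipe (proposed)
$ shpc install --reinstall-all

# also remove modules whose recipes no longer exist, instead of quitting (proposed)
$ shpc install --reinstall-all --force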

What would you think of that?

@vsoch (Member) commented Sep 10, 2022

Thinking more about that: I wanted an option to regenerate all modules, wrapper scripts, etc., for existing containers. But that's not reliably tractable anymore since the container.yaml files can now be updated remotely. shpc can't guarantee that the container.yaml it'd get would be exactly the same as the one previously used.

That's true - and for that specific case where it matters, I'd probably encourage the person to clone and make a filesystem registry they can control.

make shpc install bail out if the module is already installed. The user needs to add the --force option to do a clean reinstallation

That would make sense! When I'm testing between container technologies or wrapper -> no wrapper it would be nice to have the cleanup.

Add a --reinstall-all option to shpc install that (i) supersedes the positional argument install_recipe, (ii) goes through all the modules that are already installed, (iii) checks they still exist (it would quit if one is missing, unless --force is given), and (iv) cleanly reinstalls them.

The assumption would be that running the install command for any module that is installed would hit it in a registry that the user has added - so yes, I like the "quit if this isn't the case." And the --force would say "I know I'm deleting the ones I can't reinstall."

What would you think of that?

I say ship it! 😆 I can work on this after the current two PRs go in if you like.
