Command to rebuild all modules / wrapper scripts #501
Hello,

I am looking for a way of rebuilding all modules (or wrapper scripts, now). This could be used when a new release of shpc introduces changes to the way the modules work. It would also be useful in case people have manually modified some modules, to reset everything.

Currently, we can do this in a few steps (sketched below). It would help if there was a more automated way, for instance one that preserves the containers on disk and only rebuilds the module directory. How feasible and useful do you think that would be?

Matthieu
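The "few steps" above might look something like this per module (a sketch only; the uninstall/install subcommands exist in shpc, but the module name and the behaviour of keeping the pulled container on disk are assumptions to verify against your version):

```console
# remove the generated module, then regenerate it with the current shpc
# (python:3.9.10 is an illustrative module name)
$ shpc uninstall python:3.9.10
$ shpc install python:3.9.10
```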
This is a good question! One thing I've thought about is a command that would go through and install the latest versions for the modules. In the case of adding the symlinks, I'm going to write something similar in the docs to enable this.
To give you some background on the update process: it definitely could be improved! We have a bot named binoc that runs monthly (only monthly, for my review sanity) to look for updated tags and digests. The user would then need to manually pull and `shpc install` to get the updates. I also have a set of check commands that, given the user has pulled, can at least tell them whether they have the latest version. I'm thinking you want something a little more streamlined - either the ability to run that update manually and on demand (in which case we'd add the functionality of the binoc bot here) for one or all recipes, or to generate a local report and then interactively choose how to update (e.g., installing a new digest vs. a new tag vs. removing/keeping). Could you walk me through what you have in mind for this update workflow, to start the discussion?
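For reference, the current manual flow described above looks roughly like this (a sketch; `shpc check` and `shpc install` are named in this thread, but the exact invocations and the python example module are illustrative):

```console
# refresh the recipes that binoc updated upstream
$ git pull origin main

# ask shpc whether an installed module is behind the registry
$ shpc check python

# if it is, install the newer version
$ shpc install python
```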
The PR for the symlink install is #502 (you probably saw it in the other thread) - let's pick up the discussion about this update refactor tomorrow; feel free to leave me notes here to discuss! Also, @marcodelapierre is across the world and it's his daytime now, so y'all can probably have a good discussion while I'm 😴
No energy right now to comment on this; I will be back on this one at some point :)
Totally get it - I'm in the same place! @muffato I think we are going to review the wrapper scripts PR (linked) first, and then we will dig into the discussion of an update command as discussed here. Sound good?
Thanks @vsoch for your #501 (comment). That made me realise that regenerating the modules can't be distinguished from upgrading them. Regardless, something we discussed here this week is that instead of trying to update a module tree in place, one could regenerate it from scratch. Now I'm less sure about what an update command should do.
I think we can separate two things: keeping the container recipes (the `container.yaml` files) up to date, and updating the modules that a user has installed.
For this first implementation, I decided that shpc could take responsibility for the first thing, via automation on GitHub. Anyone can pull the new shpc containers and get updated modules. It should not impact their currently installed ones, but if they choose, they can run `shpc check` to see if they have the latest version and, if not, install it. I don't think anyone has used `shpc check` in practice (including myself), so it has likely developed bugs. This decision to place updating in the hands of the user also reflects standard sysadmin practice: they tend to want complete control over things. Now we move to this idea of automatic updates - not for the container files, but for the modules themselves. Arguably we can do a few things here, either in isolation or combined.
So I do think we have options - and the original design was done without thinking hard about the update or check scenario. I figured most folks would install the latest and call it a day, and pull once in a while for new containers or versions. I also figured that hashes for a particular version would jump around a bit, but at some point a particular version stops being updated and is left in its final state (the last digest released). Given that registries don't purge old containers, the older versions are pretty well set. Would anyone want to use them, though? I'm not sure - this is where reproducibility is kind of fuzzy. Personally speaking, I'm not sure I care to use something older. I do think we can improve upon the current state; we just need to carefully think through goals and priorities for shpc. There is a bit of a trade-off between automated updates and reproducibility, but that's probably OK.
Thank you for your analysis, @vsoch. I personally value the reproducibility aspect quite a lot, including having explicit versions and hashes in `container.yaml`. But it sounds like a tool to update a registry would be useful. It would take as input the path to the registry, and optionally a subpath to a tool, look for new tags on the Docker registries themselves, and add them (plus the hashes) to `container.yaml`. It's obviously not as convenient as having the updates arrive automatically, though.
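As a minimal sketch of the "look for new tags" step such a tool would need (Docker Hub's public v2 API is real, but the library/python image and the jq filter are illustrative; a real tool would also resolve digests and edit `container.yaml`):

```console
# list the five most recent tags for an image via Docker Hub's public API
$ curl -s "https://hub.docker.com/v2/repositories/library/python/tags/?page_size=5" \
    | jq -r '.results[].name'
```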
Agree! So how would the tool manage the discrepancy between the registry provided on GitHub vs. the one you run locally? There is actually a separate tool called lookout (https://github.com/alecbcs/lookout), created by @alecbcs, and it is the backend to the binoc bot that we run monthly to get updates. So if you want a tool separate from shpc, I think this problem is solved! With lookout you can provide a container unique resource identifier and then see if you have the latest. The way it works on GitHub is that lookout is connected to a bot named binoc, and binoc has an shpc parser that knows how to update the `container.yaml` files: https://github.com/autamus/binoc/blob/main/parsers/shpc.go. I haven't tried it, but theoretically you could run binoc outside of the GitHub Action.
If you run binoc, which uses lookout underneath, I think that tool can also tell you directly.
I think this makes sense, but the million-dollar question is how we suggest managing things between the GitHub updates vs. the user-requested updates. If you run an update locally and then want to pull fresh code, you likely (maybe) will have to deal with merge conflicts, and you can't selectively grab changes. This is why I'm suggesting we have one or the other, or some means by which the registry doesn't serve the versions and we allow the user to generate and update them.
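To make that conflict scenario concrete (the file path is illustrative; the messages are standard git output when the same file was changed both upstream and locally):

```console
# binoc updated a container.yaml upstream that was also edited locally
$ git pull origin main
# CONFLICT (content): Merge conflict in python/container.yaml
# Automatic merge failed; fix conflicts and then commit the result.
```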
Thanks. FYI, I'm trying to run binoc locally (cf. autamus/binoc#18) and will get back to you once I understand how it can work in practice.
@muffato does the remote sync / update set of commands address this, or is there still an unmet need?
My original request was about regenerating the wrapper scripts upon changes in the shpc codebase, not in the `container.yaml` files themselves - which will now happen more regularly, as the registry has been decoupled from the codebase. An example would be that shpc now creates wrapper scripts for run, exec, and shell (#586): how could I update all the modules?
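A workaround in that spirit might be to reinstall everything in place (a sketch, assuming `shpc list` emits bare module names with versions, and that reinstalling regenerates the module file and wrapper scripts without re-pulling containers already on disk):

```console
# regenerate the module file and wrapper scripts for every installed module
$ for module in $(shpc list); do
    shpc install "$module"
  done
```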
Probably not at this point? Maybe we just want to make sure this is crystal clear in the documentation? 🤔
Thinking more about that: I wanted an option to regenerate all modules, wrapper scripts, etc., for existing containers. But that's not reliably tractable anymore, since the `container.yaml` can now be updated remotely: shpc can't guarantee that the `container.yaml` it would fetch is exactly the same as the one previously used. So that function would have to be part of the update workflow. In short: a way to reinstall everything from the registries the user has configured, quitting if a recipe can't be found there.
What would you think of that?
That's true - and for that specific case where it matters, I'd probably encourage the person to clone and make a filesystem registry they can control.
That would make sense! When I'm testing between container technologies, or between wrapper and no-wrapper, it would be nice to have the cleanup.
The assumption would be that running the install command for any installed module would find it in a registry that the user has added - so yes, I like the "quit if this isn't the case" behavior.
I say ship it! 😆 I can work on this after the current two PRs go in, if you like.