
Building and testing U2+L firmware from the current u64ii branch (3.12alpha) - recovering from a bad ESP32 flash? #465

Closed
Gee-64 opened this issue Feb 10, 2025 · 35 comments

Comments

@Gee-64
Contributor

Gee-64 commented Feb 10, 2025

Since switching to a newer version of "lwip", the u64ii branch does not build at revision 7d16e38. The reason is that the file software/lwip/src/sys_arch.c is missing from Git after lwip was moved to a Git submodule.

Maybe sys_arch.c is planned to be moved out of the lwip submodule and into the main repository somewhere? Either way, copying the file from Git revision 2f6927f allows the build to work.

Also, for the build to complete, it is necessary to have pre-built the firmware components for the wifi module:

software/wifi/raw_c3/build/**/{bootloader,partition_table,bridge}.bin

using the Espressif SDK ("idf.py build"). There is no Makefile entry for this yet.
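For reference, a minimal sketch of that manual step, assuming a standard ESP-IDF setup (the esp32c3 target is a guess based on the raw_c3 directory name; paths are illustrative):

$ cd software/wifi/raw_c3
$ . $IDF_PATH/export.sh        # load the ESP-IDF environment
$ idf.py set-target esp32c3    # assumption: raw_c3 targets an ESP32-C3
$ idf.py build                 # emits build/bootloader/bootloader.bin etc.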

I have built the firmware but have not tested it yet.

Update: Flashed the build (except the WiFi firmware) and things seem to work as they should. Guessing commit 6680112 did the trick for getting the FPGA core for the U2+L working again!

Question: Is it possible to recover from a bad flash of the ESP32 firmware on the wifi module, just like the procedure in #344 can be used for the Ultimate cart itself?

@Gee-64
Contributor Author

Gee-64 commented Feb 11, 2025

Looking closer at the code, it seems there is no risk of bricking the WiFi module, since the flashing process used is the same one esptool.py uses, i.e. via SLIP commands to the ROM-based first-stage bootloader.
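So in the worst case, one could presumably reflash the module directly with esptool.py; a hedged sketch (the port, chip type, and flash offsets here are assumptions for illustration, not taken from the Ultimate sources):

$ esptool.py --chip esp32c3 --port /dev/ttyUSB0 flash_id
$ esptool.py --chip esp32c3 --port /dev/ttyUSB0 write_flash \
      0x0 bootloader.bin 0x8000 partition-table.bin 0x10000 bridge.bin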

It also looks like one of the test points on the wifi module is TX for the debug serial port of the ESP32 (the other point is GND, maybe?) and can be used to see debug messages from the ESP32 firmware.

Are these two conclusions correct @GideonZ ?

@GideonZ
Owner

GideonZ commented Feb 12, 2025 via email

@markusC64

markusC64 commented Feb 12, 2025

Hi Gideon,

I also have problems compiling the u64ii branch for the U2+L. While I can compile software/wifi/raw_i64, I cannot compile software/wifi/raw_c3, following the instructions from Gee:

../main/rpc_dispatch.c: In function 'cmd_send_eth_packet':
../main/rpc_dispatch.c:120:11: warning: unused variable 'err' [-Wunused-variable]
120 | err_t err = esp_netif_transmit(my_sta_netif, &param->data, param->length);
| ^~~
../main/rpc_dispatch.c: In function 'cmd_socket':
../main/rpc_dispatch.c:128:5: error: unknown type name 'rpc_socket_req'; did you mean 'rpc_send_eth_req'?
128 | rpc_socket_req *param = (rpc_socket_req *)buf->data;
| ^~~~~~~~~~~~~~
| rpc_send_eth_req
../main/rpc_dispatch.c:128:30: error: 'rpc_socket_req' undeclared (first use in this function); did you mean 'rpc_send_eth_req'?
128 | rpc_socket_req *param = (rpc_socket_req *)buf->data;
| ^~~~~~~~~~~~~~
| rpc_send_eth_req
../main/rpc_dispatch.c:128:30: note: each undeclared identifier is reported only once for each function

No idea why, but it does not compile.

@Gee-64
Contributor Author

Gee-64 commented Feb 12, 2025

Gideon: Thank you for answering even though you are super busy! I can confirm the full build I did of the u64ii branch (7d16e38, 3.12alpha) works fine (after adding that sys_arch.c file), including the new ESP32 firmware. No need to bring out the JTAG flasher this time 🥇

So far I've only found one issue: there is an alpha character (0x10) that is part of the version number of the current "3.12alpha" build. The "menu_header" field in the UDP port 64 discovery response packet (JSON-style) includes the 0x10 byte, which confuses Assembly64, and the Ultimate is not discoverable. I tried removing the alpha character and re-flashing, and sure enough, Assembly64 detected it again. I've notified Scooby about this, but I don't know for sure whether the fix should go in the Ultimate, in Assembly64, or in both. Sure, the problem will go away when 3.12 "final" is released, but it would be preferable to have alpha and beta releases of the Ultimate firmware working with Assembly64.
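For anyone wanting to reproduce this, a rough probe (both the assumption that the Ultimate answers an arbitrary datagram on UDP port 64 and the IP address are placeholders for illustration):

$ echo | nc -u -w1 192.168.1.64 64 | xxd -g1 | grep ' 10 '    # look for the stray 0x10 byte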

Will leave my LAN cable disconnected for a while and run on Wifi only and report back if I find any other issues.

markusC64: I don't think you are at the very tip of the u64ii branch. The error you mention is one I seem to recall seeing in an earlier revision of the u64ii branch. Could you do a git checkout u64ii; git pull to get fully up to date (7d16e38) and try building again (after adding sys_arch.c back to software/lwip/src/)?
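Concretely, something like this (the git show trick assumes sys_arch.c existed at that path in revision 2f6927f of the main repository, as described in the first post):

$ git checkout u64ii && git pull
$ git submodule update --init --recursive
$ git show 2f6927f:software/lwip/src/sys_arch.c > software/lwip/src/sys_arch.c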

@GideonZ
Owner

GideonZ commented Feb 16, 2025

@Gee-64 I added sys_arch.c in software/network, but I think that this file may differ from the one in the submodule. I have to check that. It would not be right to add it into the submodule, because I think the submodule should remain clean.

I am trying to set up a builder in github. This will also help to check if commits will build. It's quite a bit of work to get this to build in a docker container, so bear with me... It will take a while before this is actually up and running. I also have to find a way to separate the FPGA builds from software builds, or cache the FPGA builds somehow, while still doing a clean checkout.

@GideonZ
Copy link
Owner

GideonZ commented Feb 16, 2025

Absolutely horrendous to do FPGA builds inside of a docker container. The number of libraries needed inside the docker is huge. I am giving up for now.

@Gee-64
Contributor Author

Gee-64 commented Feb 16, 2025

Since sys_arch.c is the glue between lwip and FreeRTOS, maybe the build just needs to make use of https://github.com/GideonZ/lwip/blob/master/contrib/ports/freertos/sys_arch.c? Or are there special adaptations needed in the Ultimate project?
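If that port file is indeed a drop-in, fetching it would be trivial (an untested guess, using the raw URL for the file linked above):

$ curl -L -o software/lwip/src/sys_arch.c \
      https://raw.githubusercontent.com/GideonZ/lwip/master/contrib/ports/freertos/sys_arch.c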

Yeah, setting up build environments (docker or otherwise) is a bit of work. Separating out the FPGA builds from the remaining software builds would be one way of avoiding the difficulty of running proprietary node-locked FPGA tools inside a docker container. That would allow at least the software part of the build to be run using docker.

On the other hand, any developer who wants to work on the FPGA in conjunction with the software would still need the FPGA tooling working...

I was thinking of experimenting with automation that would use a small configuration file (specifying software versions and licenses) to download and build a Linux VM running under VirtualBox. The VM would contain a standardized development environment for the Ultimate products, with all necessary tools at specific "locked" versions, ensuring consistent builds. Upgrading one of the build tools would mostly be a matter of updating the configuration file (software version, URL, checksum) and then rebuilding the VM.
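As a rough sketch, that configuration file could be a shell fragment sourced by the builder; every name, version, URL and checksum below is a hypothetical placeholder, not an actual proposal:

# tools.conf -- hypothetical version-locked tool manifest (all values are placeholders)
DIAMOND_VERSION=3.13
DIAMOND_URL=https://example.com/diamond-3.13.tar.gz
DIAMOND_SHA256=0000000000000000000000000000000000000000000000000000000000000000
RISCV_GCC_VERSION=12.2
LICENSE_DIR=$HOME/ultimate-licenses    # user-supplied license files go here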

VirtualBox is a very stable open source product that runs under Windows/Linux/Mac and would lower barriers for anyone to develop for the Ultimate regardless of their current host OS. VirtualBox integrates nicely with the host OS (resizable windows, shared folders) and you can more or less run the GUI of the FPGA tools as a normal window inside your host OS. This is how I personally work with the Ultimate today. It did take quite some time to get that VM set up properly and building with versions of the tools that somewhat match those that you use today (newer tools break the build). Having a standardized version-locked setup removes this friction and lowers the barrier to getting started with development. It also makes upgrading to newer build tools much smoother for everyone.

Such a VM could also be utilized as a self-hosted test runner in a CI/CD pipeline using GitHub Actions, allowing fully automated clean builds that include the FPGA cores. Combining this with actual Ultimate hardware could be the starting point for fully automated on-hardware "smoke tests" to ensure the built firmware actually works.

@Gee-64
Contributor Author

Gee-64 commented Feb 16, 2025

I posted the above before I saw your response about the horrors of docker + FPGA tools... If you feel a VirtualBox environment would be of interest, let me know and I'll give it a go. I'm sure I'm not alone in thinking you can spend your time on more important things than the build environment - the U64E2 is much awaited and I'm sure there is still a bit of work there :-)

@GideonZ
Owner

GideonZ commented Feb 16, 2025

@Gee-64 I think a VM is simpler in the sense that you can just run the full (interactive) installers. The downside is the extra overhead in memory usage and performance. Another thing to consider is the licensing: I am not supposed to make such a VM available, since I would be redistributing the software, which the license does not permit. So I guess it's just a matter of sinking my teeth into the docker thing a bit more and getting it to run. Maybe I could map my host library directory and add it to the LD_LIBRARY_PATH for the time being.
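That interim host-mapping idea could look roughly like this (all paths and the final make invocation are illustrative assumptions):

# share the host tool blob read-only and point the loader at its libraries
$ docker run --rm -it -v /opt/build-tools:/opt/build-tools:ro \
      -e LD_LIBRARY_PATH=/opt/build-tools/lib \
      -v $PWD:/work -w /work ubuntu:22.04 make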

As you said, the U64E2 needs more attention right now. It's mostly the factory test that needs a pull towards completeness.

@Gee-64
Contributor Author

Gee-64 commented Feb 16, 2025

Yeah, the overhead for a VM is higher than for docker for sure, at least when it comes to memory and disk.

I definitely did not mean distributing the proprietary FPGA software. I just meant providing a set of scripts that allows anyone who wishes to develop for the Ultimate to:

  1. Register an account with the FPGA vendors

  2. Get their free license files from the vendors and save them in a specific local directory

  3. Download the vendors' proprietary FPGA software (at specific versions), placing it in a specific directory

  4. Run the script, which will download all the open source stuff (Linux, gcc toolchains, etc.) and build their own local (and properly licensed) personal VM - see the sketch below
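Step 4 would then boil down to a fetch-and-verify loop along these lines (sourcing the kind of hypothetical tools.conf sketched earlier; apart from standard curl/sha256sum usage, everything here is a placeholder):

#!/bin/sh
# build-vm.sh -- hypothetical fetch-and-verify phase of the VM builder
. ./tools.conf
curl -L -o diamond.tar.gz "$DIAMOND_URL"
echo "$DIAMOND_SHA256  diamond.tar.gz" | sha256sum -c - || exit 1
# ...then unpack into the VM image and install the license files from $LICENSE_DIR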

Looking forward to the U64E2!

@GideonZ
Owner

GideonZ commented Feb 16, 2025

Sounds good indeed, I think that's how it should be when people want to run their builds locally. The benefit of that method is that contributors can iterate more quickly than through GitHub builders.

I still see benefit in using docker, even if it is just to get to know docker, because it's all new to me. The first time I ever touched docker was this Friday night.

The cool thing is that I found a GitHub repository from someone who has Lattice Diamond running in a docker. He bases it on Rocky Linux, as it seems: https://github.com/Gekkio/docker-fpga/tree/main

He basically installs the entire Diamond inside of the docker. I was hoping that I could actually get away with a docker 'shell' that refers to one shared giga-blob of installed tools on my host machine. Because... well, we have Lattice Diamond (U2+L), but we also need Altera Quartus (U2+ and U64), AMD Vivado (U64-II), and an old version of ISE for the U2. Those are h.u.g.e:

$ du /opt/Xilinx/14.4 -d 0 -h
16G /opt/Xilinx/14.4

$ du /opt/Xilinx/Vivado -d 0 -h
28G /opt/Xilinx/Vivado

$ du /opt/altera_lite/18.1 -d 0 -h
9,7G /opt/altera_lite/18.1

$ du /opt/build-tools/diamond/3.13 -d 0 -h
7,4G /opt/build-tools/diamond/3.13

$ du /opt/build-tools/riscv -d 0 -h
1,2G /opt/build-tools/riscv

Quickly adding that up: 16 + 28 + 9.7 + 7.4 + 1.2 = 62.3 GB

@Gee-64
Contributor Author

Gee-64 commented Feb 16, 2025

Don't forget the ESP32 toolchain :-) So yes, it is a hefty amount of bytes for those FPGA tools. Not much to do about that though; it comes with the territory, I guess. Building the open source toolchains from source is no picnic either and takes a lot of time in addition to disk space. Building from source is sometimes necessary to allow newer versions of the open source tools to run on the older RHEL derivatives (Rocky/Alma) that the older FPGA tools are compatible with (Xilinx especially).

Neat that someone got Lattice running under docker/podman. It does rely on the host OS having an X server, since it just forwards the X connection, but maybe that is quite workable even under Windows/macOS nowadays.
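For what it's worth, X forwarding into a container on a Linux host is usually just the following (the image name and GUI launcher are placeholders; Windows/macOS would additionally need an X server such as VcXsrv or XQuartz):

$ docker run --rm -it -e DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      my-diamond-image diamond    # image name and launcher are placeholders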

@GideonZ
Owner

GideonZ commented Feb 16, 2025

The open source toolchains for FPGA are an absolute no-go. They are incomplete, as they fail to properly check the timing of I/Os or multiple clock domains, as of this writing. On top of that, they are all Verilog-oriented, so to get them to work with VHDL, you need additional steps, if at all possible. I have not burnt myself on those yet, as I don't see any added value in tools that are based on reverse engineering. I might come across hard on this, but if there is any area of expertise where you'd NOT want to get burnt by the tools, it's FPGA development. I already lost 3 months of my life on Diamond alone.

Regarding the X connection: I managed to get the docker container to run Diamond in headless (batch) mode. The list of libraries in that GitHub project was almost complete; I had it translated from the yum-based package manager to apt through ChatGPT, so that I could keep my docker running Ubuntu. The latter is a choice.
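A headless build presumably goes through Diamond's command-line Tcl console; something along these lines (diamondc is Diamond's Tcl shell on Linux; the image and script names are hypothetical):

$ docker run --rm -v $PWD:/work -w /work my-diamond-image \
      diamondc build_u2pl.tcl    # hypothetical Tcl script driving the build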

Now I need to figure out how to publish the artifacts on GitHub. But that's for later. With this setup it's now possible to build an entire U2+L release. Danger: if the FPGA build is not correct, then you'll get a bricked device. I need to think of something to avoid this, e.g. hardware-in-the-loop testing.

Re: ESP32 toolchain:
$ du -h -d 0 ~/.espressif/tools
3,9G ~/.espressif/tools

@GideonZ
Owner

GideonZ commented Feb 16, 2025

Looking into a Google VPS: compute-oriented, 4 CPUs, 16 GB of RAM, 70 hours per month, and a 90 GB boot disk would cost about 7-8 euros per month. This might be a great option to completely isolate it from the home environment and attach it to GitHub runners. 16 GB of RAM is a bit tight, but it will do for these projects.

@Gee-64
Contributor Author

Gee-64 commented Feb 16, 2025

To clarify: I didn't mean OSS toolchains for FPGA, I meant gcc/clang/binutils for RISC-V on the software side. I can certainly believe what you say about OSS FPGA tools not quite being there yet, and struggling with bugs and inconsistencies even in the commercial tools is painful enough.

There are tons of affordable VPS providers out there that should help with isolation from a security perspective.

@Gee-64
Contributor Author

Gee-64 commented Feb 20, 2025

Small update:

Scooby has implemented a workaround in Assembly64 for the "alpha character" in the discovery packet response. He filters out non-printable ASCII characters, like the "Ultimate alpha character".
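Just to illustrate the workaround, the same filtering is a one-liner with tr (the [:print:] class keeps printable ASCII only; 0x10 is written as \020 here):

$ printf '3.12\020alpha' | tr -cd '[:print:]'
3.12alpha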

I have started work on the VirtualBox-based development environment. It will probably be a couple of weeks until I have something to show.

@GideonZ
Owner

GideonZ commented Feb 20, 2025

@Gee-64 Also a small update from my side... The GitHub Actions runner is almost functional. It was not easy, but it is getting one step further every time. It is already able to build all ESP32 targets and the U2 targets, and I am confident that it will also build the U2+ and U64 targets within a few more attempts. Let's see if it results in a complete downloadable package of artifacts. :-)

If it helps you, I can offer you my Dockerfile, which tells you exactly which shared Linux libraries you need, at least for headless builds.

@GideonZ
Owner

GideonZ commented Feb 20, 2025

@Gee-64 Maybe useful to mention: the old Xilinx ISE build needs a license, to be generated online with Xilinx. The Lattice FPGA build needs one too, and it expires every 6 months. (Party time!) When the complete build works, I'll look into the GitHub cache function, which may be excellent for the FPGA and ESP32 builds, so that the updater builds will be ready quickly.

I'll let you know when the action runner passes all builds; then you can bump your latest pull requests and see if they also build automatically without my intervention.

@GideonZ
Owner

GideonZ commented Feb 20, 2025

@Gee-64 The automatic build now works! The first downloadable zip with updates is available!

Please try to merge my latest changes into your branch and let's see if GitHub Actions builds a set of artifacts for you. The build time is currently 47 minutes. Next week I'll try to apply some GitHub caching to speed this up.

@Gee-64
Contributor Author

Gee-64 commented Feb 20, 2025

Nice work @GideonZ! I'm rebasing on top of your u64ii branch regularly (I try to avoid merges to keep the history clean and easy to review). I pushed two of the rebased PRs now. Will check the build results of those two branches tomorrow.

About the licenses: yeah, I know about that part. I remember the ISE license, but I didn't remember that the Lattice one was needed as well... I'm going to have the license files as inputs to the VM builder, and also add a way to easily update an existing VM with updated licenses.

@Grrrolf

Grrrolf commented Feb 21, 2025

VirtualBox is a very stable open source product that runs under Windows/Linux/Mac and would lower barriers for anyone to develop for the Ultimate regardless of their current host OS.

FYI:

With an Apple silicon Mac as a host, VirtualBox only supports arm64-based guest operating systems.

Also, depending on whether you need to use the extension pack (e.g. for access to USB functionality), you should read the VirtualBox licence. It is not allowed to freely use the extension pack for anything other than personal and educational use.

If Gideon's Logic B.V. were running this to push out binaries, then Gideon would most likely need a licence for the extension pack.

QEMU may be a suitable alternative, and you don't have to deal with Oracle (who can change any of the terms and conditions / licences whenever they want).

@Gee-64
Contributor Author

Gee-64 commented Feb 21, 2025

@Grrrolf Good point about Apple silicon! That more or less rules out the Mac as a suitable platform for FPGA development.

For commercial work I would definitely avoid anything that has an Oracle license (just look at the havoc they caused in the Java ecosystem). However, the base VirtualBox software is GPL, and as long as you can live with USB 1.1 forwarding, it works nicely (I've never needed USB 2/3 for VMs). Sure, Oracle can stop releasing newer versions under the GPL at any time, but this is just a low-effort tooling setup and can easily be migrated to something else then. So building and releasing "commercial open source" Ultimate firmware using the base VirtualBox product should be fine.

That said, since only Linux and Windows are viable, and Gideon got the docker version running, it might make more sense to just test whether the docker image works with X forwarding. Headless is perfect for doing the builds, and if the GUI tools work with X forwarding under Linux and Windows, then there is no need to mess with VirtualBox at all.

@GideonZ If you can share the Dockerfile, that would indeed help! Also, the GitHub runner found a problem with my branches where I missed fixing one of the targets during refactoring. This is a perfect real-world example of when CI/CD is super useful. Will fix the problem and go again :-)

@GideonZ
Owner

GideonZ commented Feb 21, 2025

@Gee-64 To my knowledge, I already dropped the Dockerfile in your email last night. ;-)

@markusC64

markusC64 commented Feb 21, 2025

@GideonZ Is your docker image creating the latest U64 FPGA? If so, it would make sense to add the prebuilt FPGA to the artefacts, too.
I am actually saving U2+L FPGA blobs, since when I know I haven't modified the FPGA, it saves a lot of time not to compile it.
Regarding the ESP32 parts, I think I can skip them. Unless I modify them, they don't need to be programmed again. Anyway, your docker toolchain could copy their binary build to the artefacts, too.

@GideonZ
Owner

GideonZ commented Feb 21, 2025

@markusC64 As mentioned above in the discussion with Gee-64, the FPGAs are now always built, and I will address this issue by using GitHub caching methods. They will then be rebuilt automatically only when the source code has changed. The same goes for the ESP32 stuff, of course. You'd still need these binaries for the updaters, since the ESP32 is now flashed as part of the firmware update.

Regarding the U64 FPGA: this cannot be built using the docker, because the U64 FPGA is not part of the repository. So, I either have to add build instructions in my private repository and use some mechanism in GitHub to copy from one to the other, OR I have to manually commit a new FPGA blob into the public repository. I think the latter is better, since it supports all build systems, not only GitHub.

That said, @Gee-64, please note that the U64 build might not be the most recent FPGA. It requires a binary blob from another repository.

@markusC64

markusC64 commented Feb 21, 2025

Thanks for the reminder. I think if I comment out the actual call to the ESP32 updater, then a dummy esp32 file will safely do.
Of course, using a prebuilt esp32 binary would be better, but there is none available.
And the problem with the esp32 toolchain is that it can be everything and nothing. I'll search for it another time - or never, in case there will be a ready-made public VM.

@Gee-64 Gee-64 changed the title Building and testing U2+L firmware from the current u64ii branch (3.12alpha) - recovering from a bad flash? Building and testing U2+L firmware from the current u64ii branch (3.12alpha) - recovering from a bad ESP32 flash? Feb 21, 2025
@Gee-64
Contributor Author

Gee-64 commented Feb 21, 2025

Thank you for clearing up that the U64 core is not part of the public repo. That explains why I couldn't find the VIC-II VHDL when I was briefly looking for it a while back :-)

Anyway, this issue was about getting the u64ii branch building for the U2+L, and also about recovering from a bad flash of the ESP32. The former is now solved with Gideon's latest commits, and the latter was shown not to be a problem at all.

Thank you everyone who chimed in! Closing the issue as resolved.

@Gee-64 Gee-64 closed this as completed Feb 21, 2025
@GideonZ
Owner

GideonZ commented Feb 23, 2025

Build caches have been implemented. Build time is down from 45 minutes to less than 5, if only the software changes. I also added the git tag to the container name of the artifacts.

The last step for the automated build process would be to find a good way to load the FPGA binaries for U64 and U64E-II.

@Gee-64
Contributor Author

Gee-64 commented Feb 23, 2025

Getting down to 5 minutes makes a huge difference in the developer workflow. Awesome work!

I rebased one of my tiny PRs on top of u64ii (instead of master) and it is building now. The cache was deemed empty, but I'll queue up another small PR the same way and see if that one manages to use the cache.

@Gee-64
Contributor Author

Gee-64 commented Feb 23, 2025

Update @GideonZ: I checked the builds now, and the second build does not seem to find the needed item in the cache. Everything looks right: the first job stores the output in the cache with a specific key, and the same key is queried in the second build, but it gets a cache miss anyway! I just checked the U2PL FPGA, which in the first job was stored as

Linux-u2pl-1bfc140a9ab6209c82540a2639b3c4163194cfcdaf5e6943d2253fbd3fcd97b8

Can you see the entries in the cache on your end so you can verify they are really there?

@GideonZ
Owner

GideonZ commented Feb 23, 2025

I can see some keys that are exactly the same, so it's a mystery to me as well why it is not picked up. I know that there are some rules for the tree as well. Maybe I misunderstood, or maybe your PR is not a branch from the head.

Note that I mistakenly labeled the ESP32 firmware with -u2pl- as well. The 1 MB versions are the ones for the ESP32, and the 380 KB versions are the ones for the U2+L.

Linux-u2pl-41610603bc832b39a7aa7cfae6e19ab322a3635cb699e0c77377d292c629c109
1 MB cached 11 hours ago
u64ii
Last used 2 hours ago

Linux-u2pl-1bfc140a9ab6209c82540a2639b3c4163194cfcdaf5e6943d2253fbd3fcd97b8
380 KB cached 12 hours ago
u64ii
Last used 2 hours ago

Linux-u2p-baa1987eba488f88e4f83139392e66610d5622093b69954bf46c38cc04706e32
290 KB cached 12 hours ago
u64ii
Last used 2 hours ago

Linux-u2-6f03682d4894711737b90a28f21213baaf9c41378a3242cec6fb12993af1fcaf
320 KB cached 12 hours ago
u64ii
Last used 2 hours ago

Linux-u2-6f03682d4894711737b90a28f21213baaf9c41378a3242cec6fb12993af1fcaf
320 KB cached 2 hours ago
refs/pull/446/merge
Last used 2 hours ago

Linux-u2p-baa1987eba488f88e4f83139392e66610d5622093b69954bf46c38cc04706e32
290 KB cached 2 hours ago
refs/pull/446/merge
Last used 2 hours ago

Linux-u2pl-1bfc140a9ab6209c82540a2639b3c4163194cfcdaf5e6943d2253fbd3fcd97b8
380 KB cached 2 hours ago
refs/pull/446/merge
Last used 2 hours ago

Linux-u2pl-41610603bc832b39a7aa7cfae6e19ab322a3635cb699e0c77377d292c629c109
1.5 MB cached 2 hours ago
refs/pull/446/merge
Last used 2 hours ago

Linux-u2-6f03682d4894711737b90a28f21213baaf9c41378a3242cec6fb12993af1fcaf
320 KB cached 3 hours ago
refs/pull/463/merge
Last used 3 hours ago

Linux-u2p-baa1987eba488f88e4f83139392e66610d5622093b69954bf46c38cc04706e32
290 KB cached 3 hours ago
refs/pull/463/merge
Last used 3 hours ago

Linux-u2pl-1bfc140a9ab6209c82540a2639b3c4163194cfcdaf5e6943d2253fbd3fcd97b8
380 KB cached 3 hours ago
refs/pull/463/merge
Last used 3 hours ago

Linux-u2pl-41610603bc832b39a7aa7cfae6e19ab322a3635cb699e0c77377d292c629c109
1.5 MB cached 3 hours ago
refs/pull/463/merge
Last used 3 hours ago

@GideonZ
Owner

GideonZ commented Feb 23, 2025

@Gee-64 Since the hashes are exactly the same, it wouldn't make sense to store the hash files that I generate before requesting a cache item, right? We already know that they are exactly the same.
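For context, the key looks like a digest over the relevant inputs; conceptually something like the following (a guess at the mechanism, with a placeholder source directory, not the actual workflow code):

$ find <fpga-source-dir> -type f | sort | xargs sha256sum | sha256sum
# the resulting digest would become the "Linux-u2pl-<digest>" cache key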

BTW, I haven't been able to test any of your PRs yet... ;-) My focus for the coming week will be to complete the U64E-II factory test system, as the boards will be ready in about 4 weeks' time. Then all the test means (loopbacks, cables, test GUI, traceability database) need to be ready.

@GideonZ
Owner

GideonZ commented Feb 23, 2025

@Gee-64 I think I fixed it... Building now. The default behavior of GitHub is to keep cache items from different branches separate, even if the hashes are exactly the same. With 'restore-keys' this can be circumvented... probably. I'll clean up the cache after this, to see if it works for your future PRs.

@Gee-64
Contributor Author

Gee-64 commented Feb 23, 2025

@GideonZ Regarding not testing my PRs: that is fully understandable! Getting everything ready for the U64E2 production and launch is prioritized for sure! :-) I have tested the PRs fairly well, so hopefully there should not be any major problems. I'll be re-testing stuff in the coming days and weeks as things come together.

Scooby spent a couple of hours today implementing password support in Assembly64. It is more or less working; he just needs to iron out some final issues and let Sarge do his magic with some gfx to indicate that the unit is password protected.

@Gee-64
Contributor Author

Gee-64 commented Feb 24, 2025

I played around with PRs and the build caching, and I would say it works as it should with your latest changes @GideonZ.

There is just one case where it doesn't work, which naturally was the one causing frustration yesterday...

If you have an existing PR against master (with no build.yml), then there are two possible sequences for rebasing the PR on top of u64ii:

  1. Rebase the branch locally, push to GitHub, click "edit" on the PR on GitHub and change "merge into" from master to u64ii

  2. Click "edit" on the PR on GitHub and change "merge into" from master to u64ii, rebase the branch locally, push to GitHub

If you do 1), like I did, then the cache will not be used in the automatically triggered build. If you do 2), then the cache will be used.
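In concrete terms, sequence 2 looks something like this (using the gh CLI to retarget the PR; the PR number is a placeholder):

$ gh pr edit <PR-number> --base u64ii    # retarget the PR on GitHub first
$ git rebase u64ii                       # then rebase locally
$ git push --force-with-lease            # and push the rebased branch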

I suggest not spending any more time on this. It works fine now and when everything has landed on master it will work even better.

I created a small cosmetic fix for step names in the workflow: #475
