Feature/fetch sync #2
base: main
Conversation
@KyleMit, apologies for letting this PR sit for so long. I promise I will review it this week, and then we can migrate this same strategy over to the 2019 site to create the speakers and sessions pages.
No big timeline on this. Wherever we land here can definitely help inform 2019. I've been meaning to better document both approaches and their tradeoffs; this PR provides greater resilience, but at the cost of complexity and infrastructure.
Ok, I've read through the code and I think I mostly understand how this works, even if I don't completely get all the details. I like this approach a lot! You are right about the extra complexity though.

One thing I would still like is a way to run the fetch-data-from-Sessionize process manually on a local machine without triggering a commit to the repo (I'll sketch what I mean below). That would be pretty useful for testing and debugging. However, it looks like the script fetches data and then commits changed files directly to GitHub without writing to the filesystem. I'm not super familiar with serverless functions yet, but if they don't really have a filesystem, then that would make sense. So yeah, the differing execution environments make it a bit tricky.

Anyway, before I look into that more, I'm curious to hear how you feel about the trade-offs of this approach: whether the complexity and infrastructure feel worth it to you, or if you'd prefer another approach. Some additional thoughts:
Sessionize data shouldn't change that much. Even when we are working on the schedule, you have to perform an explicit save, and I wouldn't want it posting until we were ready with it anyway. After that first publish, however, if we are tweaking the schedule (or bios or abstracts), it would be nice for those to auto-publish.
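Also, here's roughly what I'm imagining for the local run: fetch from Sessionize and just write the JSON to disk, no GitHub commit involved. The endpoint and paths are placeholders, not anything from this PR:

```js
// tools/fetch-data.js (hypothetical) - fetch from Sessionize and write to disk,
// so it's safe to run locally for testing without touching the repo
const fs = require("fs");
const fetch = require("node-fetch");

// placeholder endpoint; the real event id would live in the actual script
const API_URL = "https://sessionize.com/api/v2/EVENT_ID/view/Sessions";

async function main() {
  const res = await fetch(API_URL);
  const json = await res.text();
  fs.writeFileSync("_data/sessions.json", json);
  console.log("wrote _data/sessions.json");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```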
I outlined the two high-level approaches: fetch fresh data from the Sessionize API during every build, or cache the data as committed JSON files and build from those.

One interesting thing about the JAMstack (JavaScript, APIs, Markup) is that the JavaScript + APIs part seems to refer to client-side JS API calls. Which kinda makes sense, as JAM isn't opinionated about your build stack; while Eleventy is JS based, there are other SSG engines that use Ruby or Python or Go. Which means there's not a well-established best practice for consuming API data during the build process and baking it into the HTML.

I'm leaning toward the second approach though, because I think it's too big a hit to have local builds hit a fresh API call on every run.

I'll outline some of the data caching workflows in another comment.
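For contrast, here's roughly what the first approach (fetching during every build) might look like in Eleventy; the endpoint is a placeholder:

```js
// _data/sessions.js (illustrative) - Eleventy treats any JS file in _data
// as global data and will await the exported async function on every build
const fetch = require("node-fetch");

module.exports = async function () {
  // placeholder endpoint; every build run would hit this fresh
  const res = await fetch("https://sessionize.com/api/v2/EVENT_ID/view/Sessions");
  return res.json();
};
```

That one file is all it takes, which is appealing, but it's also exactly the per-build API hit I'd rather avoid locally.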
Here are the various workflows, depending on which method we pick:
Thanks for the visuals. Very helpful for thinking about the process. Here's what I think we should do. Define these four npm scripts:

- `build`
- `serve`
- `update-data`
- `update-build`

In development, we can do `serve` as usual and run `update-data` manually whenever we want fresh data. In production (Netlify), we change the build command from `build` to `update-build`, so every deploy pulls fresh data first. Once code camp is over, all we have to do to winterize is make sure the committed JSON data files are up to date and then change the Netlify build command back to `build`.

To me this seems a bit less complex than the serverless function and Octokit method while still meeting our needs, but I think either way will work well for us. What do you think and what have I overlooked? 😃 (A sketch of how these scripts could be wired up follows below.)
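Roughly the wiring I'm picturing in `package.json` (script names from above; the commands themselves are guesses, including the `tools/fetch-data.js` path):

```json
{
  "scripts": {
    "build": "eleventy",
    "serve": "eleventy --serve",
    "update-data": "node tools/fetch-data.js",
    "update-build": "npm run update-data && npm run build"
  }
}
```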
Script to call Sessionize and cache JSON data files as commits to GitHub (sketched below)
Script is exposed via Netlify Functions via this url
Additionally, add support for a Zapier integration, which calls the data cache on a daily cron job
Any updates to the data files will trigger a commit, which is automatically picked up by Netlify and published in a new build
Alternative multi-step approach which closes #1
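A rough sketch of the function's shape, assuming `@octokit/rest` and `node-fetch`; the repo details, data path, and Sessionize endpoint are placeholders rather than the committed code:

```js
// netlify/functions/update-data.js (illustrative sketch)
const { Octokit } = require("@octokit/rest");
const fetch = require("node-fetch");

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// placeholder values
const owner = "OWNER";
const repo = "REPO";
const path = "_data/sessions.json";
const API_URL = "https://sessionize.com/api/v2/EVENT_ID/view/Sessions";

exports.handler = async () => {
  const res = await fetch(API_URL);
  const content = await res.text();

  // look up the existing file so we can supply its sha (required for updates)
  // and skip the commit entirely when nothing changed
  let sha;
  try {
    const { data } = await octokit.rest.repos.getContent({ owner, repo, path });
    sha = data.sha;
    const existing = Buffer.from(data.content, "base64").toString();
    if (existing === content) {
      return { statusCode: 200, body: "no changes" };
    }
  } catch (err) {
    // file doesn't exist yet; leaving sha undefined creates it
  }

  // this commit is what Netlify picks up to trigger a fresh build
  await octokit.rest.repos.createOrUpdateFileContents({
    owner,
    repo,
    path,
    sha,
    message: "chore: update sessionize data",
    content: Buffer.from(content).toString("base64"),
  });

  return { statusCode: 200, body: "data updated" };
};
```

The Zapier side then only needs to hit the function's URL on its daily schedule; no payload is required.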