(refactor) Improve code splits further across apps #1472
Conversation
This PR makes changes to several `index.ts` files across apps, converting synchronous component imports to asynchronous ones using dynamic imports. This refactoring improves performance by reducing the initial bundle size and improving page load time. With the additional code splitting, components are only loaded when they are actually needed. Key changes include:

- Replacing `getSyncLifecycle` with `getAsyncLifecycle` and dynamic imports.
- Removing direct component imports from `index.ts` files.
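As a hedged sketch of the pattern described above (the real `getSyncLifecycle` and `getAsyncLifecycle` live in `@openmrs/esm-framework`; this minimal stand-in only illustrates the deferred-import idea, and the options shape and the simulated module are assumptions for illustration):

```typescript
// Minimal stand-in for the framework's getAsyncLifecycle: instead of
// importing a component eagerly at module load, it takes a thunk that
// performs a dynamic import, so the bundler can emit the component as a
// separate chunk that is fetched only on first use.
type ComponentModule = { default: unknown };

function getAsyncLifecycle(
  importFn: () => Promise<ComponentModule>,
  // featureName/moduleName mirror the framework's options shape (assumption).
  _options: { featureName: string; moduleName: string }
): () => Promise<unknown> {
  let cached: Promise<unknown> | undefined;
  return () => {
    // Defer and memoize: the chunk is loaded once, on first mount.
    cached ??= importFn().then((m) => m.default);
    return cached;
  };
}

// Before (eager): import RootComponent from "./root.component";
//                 export const root = getSyncLifecycle(RootComponent, options);
// After (lazy), using a simulated module in place of import("./root.component"):
const fakeRootModule = Promise.resolve({ default: "RootComponent" });

export const root = getAsyncLifecycle(() => fakeRootModule, {
  featureName: "root",
  moduleName: "@openmrs/esm-home-app",
});
```

The key property is that nothing from `root.component` lands in the main bundle; the import cost is paid only when the lifecycle is first invoked.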
Size Change: -1.99 MB (-24.07%) 🎉
Total Size: 6.28 MB
Note from Dennis: Seems this and this will shave off 1.99 MB from esm-pt-management (package size has been an expressed community concern from some orgs). But he and Ian want to unpack further whether this will have any unintended consequences for implementers.
LGTM; thanks @denniskigen!
Thanks, @samuelmale. Please don't merge this yet. Keen to discuss the potential tradeoffs some more. Also keen to develop a clear rubric that we can document about where the sweet spot is for this potential performance optimization.
What are the bundle size and performance implications of not lazy-loading the root component(s)? (I understand that some root components heavily reference Carbon components.)
Are there prior discussions about performance that might provide context for what and how to optimize? Reducing the bundle size is good, but don't we aggressively cache the JS files in the browser anyway? If I understand correctly, this should improve the load time the first time we load O3 after a deploy, but the performance impact would not be that significant for subsequent loads.
It's hard to know in detail. This is a balancing act, not an exact science. The microfrontend architecture, and especially the extension system, makes ensuring that quite challenging, so a lot of this is going to be trial and error. At a high level, we'd want to get to a place where:
How we get there, though, is what we need to experiment with. Essentially, each added code split trades an extra network request for a smaller volume of data over the wire, but it can go wrong in two ways:

**Chunks that are too small**

Currently, the leading edge of this is the translations for the home page app, which has only a single translation key:

```js
"use strict";(globalThis.webpackChunk_openmrs_esm_home_app=globalThis.webpackChunk_openmrs_esm_home_app||[]).push([[300],{4300:e=>{e.exports={home:"Home"}}}]);
```

That's 160 bytes of data to load 13 bytes of "usable" data (the JS object `{home:"Home"}`).

**Chunks that are too large**

On the flip side of things, we also end up with chunks that are probably larger than they should be. Currently on dev3, the largest JS file is a chunk from the billing app. However, it compresses pretty well (a lot of it is embedded CSS, which likely has a large amount of repetition, so it compresses very well), so the most space-consuming thing sent is currently the chunk containing Carbon packaged as a peer dependency (although if we start using Carbon as a peer dependency everywhere, this chunk only needs to be loaded once and we can replace all the places where Carbon gets embedded in the app, so maybe we do want this).

Even these "large" chunks aren't huge, under 300 kB when gzipped, but they can often point to places where code has either been split inefficiently (e.g., why are all the SCSS files for the whole billing app in a single chunk instead of paired with the code for each extension?) or where there may be unnecessary duplication of libraries (which is what #1468 was ultimately about). But I don't have any really rote approach to how we determine these; we just need to keep looking at the chunks each build produces.
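The overhead arithmetic in the "too small" example can be checked directly; the 13-byte figure is just the length of the serialized translation object, with the 160-byte total taken from the comment above:

```typescript
// The "usable" payload of the home-app translation chunk is the serialized
// object literal; everything else is webpack runtime wrapper.
const payload = '{home:"Home"}';
const approxChunkBytes = 160; // whole-chunk size, per the figure quoted above

console.log(payload.length); // 13
console.log(((payload.length / approxChunkBytes) * 100).toFixed(1) + "%"); // 8.1%
```

In other words, roughly 92% of the bytes in this chunk are per-chunk fixed overhead, which is the cost each extra code split has to earn back.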
Because, ultimately, the goal here, more than any hard numbers, is that the app feels responsive to users. To be clear, I don't mean that the above examples are necessarily things we should do away with. It probably makes sense to load translations the way we currently do, because while the 782-bytes-to-13-useful-bytes ratio sounds atrocious, it also means we can use a single loading pathway that supports both the home app and an app with 120 different translation keys. Similarly, the Carbon chunk is huge relative to the app, but if we only load it a single time, it will likely reduce the overall amount of data transferred and, since it's loaded fairly early, doing so might actually make the app "feel" more responsive (since we paid the loading cost before the login page fully rendered).
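For the load-Carbon-once idea, one hedged sketch is webpack's Module Federation `shared` mechanism (the plugin and its `singleton`/`requiredVersion` fields are webpack's real API; the app name and version range here are hypothetical, and the actual O3 build tooling may wire this differently):

```typescript
// Sketch only: declaring @carbon/react as a shared singleton so a single
// chunk serves every app, instead of each app embedding its own copy of
// Carbon. Not how the O3 build is necessarily configured today.
import { container } from "webpack";

const config = {
  plugins: [
    new container.ModuleFederationPlugin({
      name: "openmrs_esm_billing_app", // hypothetical consumer app
      shared: {
        "@carbon/react": {
          singleton: true, // ensure only one copy is ever loaded
          requiredVersion: "^1.0.0", // hypothetical version range
        },
      },
    }),
  ],
};

export default config;
```

With a shared singleton, the first app to need Carbon pays the download cost once, and every subsequent app reuses the already-loaded module.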
Yes, but they are mostly captured in PRs like this one. We've synthesized some of them into the developer docs, though, where we can.
Yeah, but we run into the reality that many long-term users of OpenMRS-based systems have been taught to empty the cache and reload pages when things in the EMR don't work. And, in any case, as we see larger facilities deploy OpenMRS and more cloud-based deployments, any bandwidth use we can shave helps.
Requirements
Summary
Related: #1468
(See the description above.)
Screenshots
Related Issue
Other