
Add new method isEnabled to ConfigSourceInterceptor #1311

Merged: 1 commit into smallrye:main, Feb 9, 2025

Conversation

radcortez (Member) commented Feb 9, 2025

The goal is to remove the interceptor from the chain when isEnabled=false, to reduce the call stack depth instead of keeping a passthrough interceptor.
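Roughly, the shape of the change (a sketch: the method name comes from this PR, while the default value and the surrounding signatures are illustrative):

```java
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigValue;

// Sketch of the new hook, assuming a default of "enabled".
public interface ConfigSourceInterceptor extends java.io.Serializable {

    ConfigValue getValue(ConfigSourceInterceptorContext context, String name);

    // Interceptors reporting false are dropped when the chain is assembled,
    // instead of staying in the chain as passthrough frames on every lookup.
    default boolean isEnabled() {
        return true;
    }
}
```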

@radcortez changed the title from "Add new method isEmpty to ConfigSourceInterceptor" to "Add new method isEnabled to ConfigSourceInterceptor" on Feb 9, 2025
@radcortez merged commit fc77388 into smallrye:main on Feb 9, 2025
6 checks passed
github-actions bot added this to the 3.11.3 milestone on Feb 9, 2025
dmlloyd (Contributor) commented Feb 10, 2025

I'm concerned about how this might affect the API. The new isEnabled method isn't really specified to be constant/immutable, which is a little weird. The phases for which it is enabled are pretty unclear from the documentation. The time at which this property would be evaluated is not defined by the API.

Why is the deep call stack a problem (and how deep is "deep")? Perhaps we need to look at allowing the interceptor factory to indicate a pass-through interceptor (for example by returning null) rather than relying on a property which cannot be enforced to be immutable.
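Something along these lines (a sketch of the suggestion, not an existing behaviour; whether the factory may return null is exactly the open question, and the flag and class names here are hypothetical):

```java
import io.smallrye.config.ConfigSourceInterceptor;
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigSourceInterceptorFactory;
import io.smallrye.config.ConfigValue;

// Hypothetical factory: opt out by returning null so nothing is registered,
// instead of registering an interceptor that merely forwards every call.
public class MyLoggingInterceptorFactory implements ConfigSourceInterceptorFactory {

    @Override
    public ConfigSourceInterceptor getInterceptor(ConfigSourceInterceptorContext context) {
        if (!Boolean.getBoolean("config.log.lookups")) { // assumed flag
            return null; // hypothetical "contribute nothing" signal
        }
        return new ConfigSourceInterceptor() {
            @Override
            public ConfigValue getValue(ConfigSourceInterceptorContext ctx, String name) {
                ConfigValue value = ctx.proceed(name);
                System.out.println("config lookup: " + name);
                return value;
            }
        };
    }
}
```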

radcortez (Member, Author)

Yes, the goal was to reduce the stack trace on error a bit, because it is quite long. Currently, a typical Quarkus app has 8 interceptors, and each interceptor plus its chain invoke adds two frames, so that's 16 extra lines in the trace.

I went with an enabled flag because some of the interceptors already had one. The idea was to just remove the interceptors in the final stage, but we can probably remove them in every stage.

If we change this to a simple enable/disable that works for every stage, are you ok with that?

I thought about returning null as well, but some interceptors do not use the factory and are initialized directly via ServiceLoader or the builder, so they are never null.
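For illustration, the direct-registration case looks roughly like this (builder method names as I recall them from the SmallRye Config builder; the no-op interceptor and property name are hypothetical):

```java
import io.smallrye.config.ConfigSourceInterceptor;
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigValue;
import io.smallrye.config.SmallRyeConfig;
import io.smallrye.config.SmallRyeConfigBuilder;

public class DirectRegistration {

    // Hypothetical interceptor registered as a plain instance.
    static class NoOpInterceptor implements ConfigSourceInterceptor {
        @Override
        public ConfigValue getValue(ConfigSourceInterceptorContext context, String name) {
            return context.proceed(name);
        }
    }

    public static void main(String[] args) {
        // No factory in between that could return null: the instance is handed
        // straight to the builder (ServiceLoader registration is analogous), so
        // an enable/disable decision has to come from the interceptor itself.
        SmallRyeConfig config = new SmallRyeConfigBuilder()
                .addDefaultSources()
                .withInterceptors(new NoOpInterceptor())
                .build();
        System.out.println(config.getOptionalValue("my.property", String.class).orElse("<unset>"));
    }
}
```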

dmlloyd (Contributor) commented Feb 12, 2025

If it's just to reduce the stack trace for aesthetic reasons, then I don't know if this is really worthwhile. Was there some kind of complaint or problem that led you to this solution?

radcortez (Member, Author)

@franz1981 did complain about it :)

I was also hoping that removing interceptors that have no work to do (consider the LoggingInterceptor when there is no logging, which becomes just a passthrough) would improve some things. I didn't notice a considerable improvement (or degradation), but I would still need to collect more data. Still, we left the change in.
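Roughly what that looks like with the new hook (a hypothetical interceptor and flag, not the actual logging interceptor):

```java
import io.smallrye.config.ConfigSourceInterceptor;
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigValue;

// Hypothetical sketch: an interceptor that would only be a passthrough when
// logging is off can now drop itself from the chain entirely via isEnabled().
public class LoggingInterceptor implements ConfigSourceInterceptor {

    private final boolean logLookups = Boolean.getBoolean("config.log.lookups"); // assumed flag

    @Override
    public ConfigValue getValue(ConfigSourceInterceptorContext context, String name) {
        ConfigValue value = context.proceed(name);
        if (logLookups && value != null) {
            System.out.printf("config lookup %s = %s%n", name, value.getValue());
        }
        return value;
    }

    @Override
    public boolean isEnabled() {
        // false -> removed when the chain is built: no extra stack frame and
        // no per-lookup passthrough call.
        return logLookups;
    }
}
```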

If you feel this is not required, I can revert the change.

franz1981 (Contributor)

IIRC, we don't know yet (unless Roberto tried it) whether this is worth it perf-wise, whereas the ClassValue lookup implementation in native image CE (let's underline CE, which is the one that handles it badly...) does seem to be. I don't know if @radcortez has already opened an issue for that, but I can share more detail on what is going on there.

radcortez (Member, Author)

The issue with the ClassValue lookup was fixed here: #1309

This also requires some Quarkus code, which I haven't pushed yet, to record the ClassValue lookups and populate the cache, avoiding the expensive lock in ConcurrentHashMap (which the native image uses as a substitution for ClassValue).

dmlloyd (Contributor) commented Feb 12, 2025

Expensive lock of ConcurrentHashMap? This should be a very cheap/fast data structure in most cases AFAIK, right?

radcortez (Member, Author)

It is an issue in the native image only. The ClassValue we use to get hold of the MethodHandle to construct the mapping, plus any sub-elements, is actually replaced by a ConcurrentHashMap.

When we moved the Vert.x HTTP config to use @ConfigMapping, this became very noticeable (adding 2-3 ms of startup time, since that config tree is quite big). In reality, we can cache the lookup during build time to avoid that cost, and doing so brought the numbers back down to what we had before the move (actually a bit better, because the cache also helps the other available mappings). It just wasn't very noticeable before because the other trees were small compared to Vert.x.
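For context, a generic sketch of the pattern being discussed (not the actual SmallRye Config code): a ClassValue caching a per-class MethodHandle. On HotSpot the lookup is cheap after the first call; in a CE native image, ClassValue is substituted with a map-backed implementation, which is where the lock shows up.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Generic illustration, not the actual SmallRye Config code.
public final class ConstructorHandles {

    // Per-class cache of a MethodHandle for the public no-arg constructor.
    // In a CE native image, ClassValue is substituted with a map-backed
    // implementation, so this lookup becomes comparatively expensive.
    private static final ClassValue<MethodHandle> CONSTRUCTORS = new ClassValue<>() {
        @Override
        protected MethodHandle computeValue(Class<?> type) {
            try {
                return MethodHandles.publicLookup()
                        .findConstructor(type, MethodType.methodType(void.class));
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
        }
    };

    public static Object newInstance(Class<?> type) throws Throwable {
        return CONSTRUCTORS.get(type).invoke();
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(newInstance(java.util.ArrayList.class));
    }
}
```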

radcortez (Member, Author)

Sorry, I forgot to detail why ConcurrentHashMap is more expensive here than you would expect:

[flame graph screenshot]

franz1981 (Contributor)

And the root cause is here https://github.com/oracle/graal/blob/18996df1503b91f9c7882d969726b5d4ea003209/substratevm/src/com.oracle.svm.core/src/com/oracle/svm/core/util/BasedOnJDKClass.java#L41-43

synchronized in native image is implemented on top of ReentrantLock, and for implementation reasons it always ends up in the so-called slow path shown in the flame graph by @radcortez.
Note: that's likely the reason why Loom just works fine with synchronized in native image.
