CachingHttpContentFactory$CachedHttpContent already released buffer #12681
Comments
@lorban if I remove the setting of
@gregw I unfortunately cannot reproduce this locally; however, the exception is thrown from
Can you put a breakpoint on that catch block's
@lorban I modified the method to:

```java
@Override
public void writeTo(Content.Sink sink, long offset, long length, Callback callback)
{
    try
    {
        _buffer.retain();
        sink.write(true, BufferUtil.slice(_buffer.getByteBuffer(), (int)offset, (int)length), Callback.from(_buffer::release, callback));
    }
    catch (Throwable x)
    {
        // BufferUtil.slice() may fail if offset and/or length are out of bounds.
        try
        {
            _buffer.release();
        }
        catch (Throwable x2)
        {
            x2.addSuppressed(x);
            throw x2;
        }
        callback.failed(x);
    }
}
```

and now it reports:
So the buffer has already been released, but not nulled?
The release is happening from:
@lorban the issue is that the content is being released from the cache during the shrink, immediately after it has been added and before it has been used.
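To make that interleaving concrete, here is a minimal, self-contained sketch of the sequence. `RefCountedContent` and `ShrinkRaceDemo` are invented stand-ins (this is not Jetty's `RetainableByteBuffer`); they only model the reference counting needed to show why a release that lands between "added to the cache" and "first use" makes the subsequent retain fail.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Invented stand-in for a ref-counted cache entry; NOT Jetty's RetainableByteBuffer.
class RefCountedContent
{
    private final AtomicInteger refs = new AtomicInteger(1); // 1 = the reference held by the cache

    void retain()
    {
        // Only bump the count if the entry is still live.
        if (refs.getAndUpdate(c -> c > 0 ? c + 1 : c) <= 0)
            throw new IllegalStateException("already released");
    }

    void release()
    {
        if (refs.decrementAndGet() < 0)
            throw new IllegalStateException("already released");
    }
}

public class ShrinkRaceDemo
{
    public static void main(String[] args)
    {
        RefCountedContent entry = new RefCountedContent();

        // Request thread: looks the freshly added entry up in the cache (no retain yet)...

        // Shrink (on another thread): evicts that same entry and drops the cache's
        // reference before the request thread gets to use it.
        entry.release();

        // Request thread: ...now tries to retain the entry it just looked up.
        entry.retain(); // throws IllegalStateException: already released
    }
}
```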
@lorban the model for the
This is a partial fix for #12681. It does not fix the actual release race on a single content, but by restricting the shrinking to a single thread it makes the race less likely to occur. There is a more comprehensive fix in #12678 for 12.1, but the cache has changed too much for a straightforward backport.
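For illustration, a single-threaded-shrinking guard can be as simple as a compareAndSet gate around the eviction loop. This is only a sketch of the idea, not the code of the PR; the class name and the evictOne() hook are invented.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of "restrict the shrinking to a single thread"; names are assumptions,
// not the actual CachingHttpContentFactory changes.
abstract class SingleThreadedShrinker
{
    private final AtomicBoolean shrinking = new AtomicBoolean();
    private final AtomicLong cachedBytes = new AtomicLong();
    private final long maxBytes;

    protected SingleThreadedShrinker(long maxBytes)
    {
        this.maxBytes = maxBytes;
    }

    /** Evict one entry, release it, and return its size. */
    protected abstract long evictOne();

    void added(long size)
    {
        cachedBytes.addAndGet(size);
        shrink();
    }

    private void shrink()
    {
        // compareAndSet lets exactly one caller shrink at a time; concurrent callers
        // simply skip, so two threads can no longer race to release the same entry
        // from within the shrink itself.
        if (!shrinking.compareAndSet(false, true))
            return;
        try
        {
            while (cachedBytes.get() > maxBytes)
                cachedBytes.addAndGet(-evictOne());
        }
        finally
        {
            shrinking.set(false);
        }
    }
}
```

A thread that loses the compareAndSet returns without evicting, so the cache can briefly stay over its limit until the next addition triggers another shrink. The eviction itself can still run concurrently with a request thread that has just looked an entry up, which is why this only makes the race less likely rather than impossible.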
@gregw I think you're right, the
I have mixed feelings about your proposed fix in #12678, as using an atomic to avoid concurrent releases will only lower the bug's odds while raising the odds of overshooting the cache, and I think using an
Contrary to what the comment on line 233 says,
So
@lorban my fix is indeed racy, but it has a fallback: if it loses the race, the retain will fail and it will fall back to the uncached content, so it should work. However, doing something better would be good. Do you want to have a go at the
But please have a go at something better than #12678 if you can. I can test in #12678, as those demos are at least a way of reproducing the issue... or maybe review/merge #12678 as-is to get the demos back in, then we can have a separate PR to try to improve the caching?
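The retain-or-fallback idea described here (and later captured in the "atomically retain the buffer with a check that it still is in the cache, otherwise delegate to the wrapped content" commit) can be sketched like this. CachedEntry, tryRetain(), Content and serve() are all invented for the sketch; they are not Jetty's HttpContent/RetainableByteBuffer API.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetainOrFallback
{
    // Invented content abstraction, not Jetty's HttpContent.
    interface Content
    {
        void writeTo(StringBuilder sink);
    }

    // Invented cached entry whose retain only succeeds while the entry is still live.
    static class CachedEntry implements Content
    {
        private final AtomicInteger refs = new AtomicInteger(1); // the cache's reference

        boolean tryRetain()
        {
            return refs.getAndUpdate(c -> c > 0 ? c + 1 : c) > 0;
        }

        void release()
        {
            refs.decrementAndGet();
        }

        @Override
        public void writeTo(StringBuilder sink)
        {
            sink.append("cached bytes\n");
        }
    }

    static void serve(CachedEntry cached, Content uncached, StringBuilder sink)
    {
        if (cached != null && cached.tryRetain())
        {
            try
            {
                cached.writeTo(sink);
            }
            finally
            {
                cached.release();
            }
        }
        else
        {
            // Lost the race with a concurrent eviction: serve the wrapped, uncached content.
            uncached.writeTo(sink);
        }
    }

    public static void main(String[] args)
    {
        CachedEntry cached = new CachedEntry();
        Content uncached = sink -> sink.append("uncached bytes\n");

        StringBuilder out = new StringBuilder();
        serve(cached, uncached, out); // retain succeeds, cached bytes are written
        cached.release();             // simulate the shrink evicting the entry
        serve(cached, uncached, out); // retain fails, falls back to the uncached content
        System.out.print(out);
    }
}
```

The key point is that losing the race degrades to an uncached write rather than an exception, which is what makes the approach safe even though it is racy.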
…the cache, otherwise delegate to the wrapped content
IMHO we can merge #12678 as-is and fix the caching problem separately.
…the cache, otherwise retain fails + remove single-threaded shrinking
…er (#12704) #12681 atomically retain the buffer with a check that it still is in the cache, otherwise delegate to the wrapped content
Jetty version(s)
12.1.x
Jetty Environment
EE11
Description
Whilst developing PR #12678, I have woken up org.eclipse.jetty.servlet6.embedded.Http2Server, even though there is no PushCacheFilter anymore. The intention is to convert it to early hints instead. However, whilst running the draft Http2Server, I often get exceptions like the one below when serving the tiles:
How to reproduce?
Run the Http2Server class, hit http://localhost:8080, and do a shift-reload a few times.
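If a browser is not handy, a rough equivalent of the shift-reloads is to fire bursts of concurrent requests at the demo. This sketch assumes Http2Server is already running on localhost:8080 and uses "/" as a stand-in for whatever page/tiles the demo serves through the caching content factory; adjust the path as needed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ReloadLoop
{
    public static void main(String[] args)
    {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/")).build();
        for (int round = 0; round < 20; round++)
        {
            // Fire a burst of concurrent requests, roughly what a shift-reload of a tiled page does.
            CompletableFuture<?>[] burst = new CompletableFuture<?>[16];
            for (int i = 0; i < burst.length; i++)
                burst[i] = client.sendAsync(request, HttpResponse.BodyHandlers.discarding());
            CompletableFuture.allOf(burst).join();
            System.out.println("round " + round + " done");
        }
    }
}
```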