Non-deterministic premature EOF #637
Comments
Thanks for the reproducible case. This is an excellent report. I sent a PR: slve/http4s-eof#1. I think this clears up by sharing one client for the app instead of creating a client per request.
Thanks for the quick response @rossabaker! I've tested it side by side with my original version, where I had inadvertently created a client on each request, and it definitely made a difference. I also quickly tested your fixed version further, increasing the payload sizes from ~32kB and ~81kB both to 500kB, and managed to get the same EOF error message; I will get back to the topic in a bit. EDIT: I've fine-tuned the thresholds and opened a PR with my changes: https://github.com/rossabaker/http4s-eof/pull/1/files?w=1
Okay, that was the only misuse I saw reviewing it last night. Going to give this the bug label and try to chase it more this weekend.
Awesome, thank you @rossabaker. |
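For reference, the fix discussed above amounts to acquiring one client at application startup and reusing it everywhere. A minimal sketch of that pattern, assuming http4s 0.21.x with cats-effect 2.x (the `runRequests` name is illustrative, not from the project):

```scala
import cats.effect.{ExitCode, IO, IOApp}
import org.http4s.client.Client
import org.http4s.client.blaze.BlazeClientBuilder
import scala.concurrent.ExecutionContext.global

object SharedClientApp extends IOApp {
  // Illustrative placeholder for the app's actual request logic.
  def runRequests(client: Client[IO]): IO[Unit] = IO.unit

  def run(args: List[String]): IO[ExitCode] =
    BlazeClientBuilder[IO](global).resource // acquired once at startup
      .use(client => runRequests(client))   // the same client serves every request
      .as(ExitCode.Success)                 // released automatically on shutdown
}
```

Creating a `BlazeClientBuilder` per request instead leaks connection pools and can produce exactly the kind of connection-level errors reported here.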
Hi @rossabaker, we're facing the EOF error in one of our services, and I'm using the project provided by @slve to reproduce the issue. Please see the findings below:

```scala
channelRead().onComplete {
  case Success(b) =>
    currentBuffer = BufferTools.concatBuffers(currentBuffer, b)
    go()
  case Failure(Command.EOF) =>
    cb(eofCondition())
  case Failure(t) =>
    logger.error(t)("Unexpected error reading body.")
    cb(Either.left(t))
}
```

Could you please take a look at the issue again and provide an update or an estimate on how soon it could be fixed? Thanks in advance!
Story
I have a service where 90% of the requests succeed while 10% fail with
`org.http4s.InvalidBodyException: Received premature EOF`.
The failing 10% is not tied to particular requests:
if I retry the same requests, there is a 90% chance they pass.
Reproduction
I managed to reproduce the issue in a controlled environment.
I emulate a stream of request bodies by repeatedly emitting a single static request payload

```scala
val body = "x".repeat(requestPayloadSize)
```

at a fixed rate (`Stream.fixedRate`), querying the local test server:

```scala
val req = Request[IO](POST, uri).withEntity(body)
...
simpleClient.stream.flatMap(c => c.stream(req)).flatMap(_.bodyText)
```

The server responds with a static payload:

```scala
val response = "x".repeat(responsePayloadSize)
...
case POST -> Root => Ok(response)
```

Finally, I print the index of the request along with the chunk size:

```scala
.evalMap(c => IO.delay(println(s"$i ${c.size}")))
```
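Assembled into one program, the reproduction above can be sketched roughly as follows. This is a hedged reconstruction, not the project's exact source: the URI, interval, and payload size are illustrative, and it assumes http4s 0.21.x with fs2 2.5.x (where `IOApp` supplies the `Timer` that `Stream.fixedRate` needs):

```scala
import cats.effect.{ExitCode, IO, IOApp}
import fs2.Stream
import org.http4s.{Method, Request, Uri}
import org.http4s.client.blaze.BlazeClientBuilder
import scala.concurrent.ExecutionContext.global
import scala.concurrent.duration._

object EofRepro extends IOApp {
  val requestPayloadSize = 500000 // illustrative; above the observed threshold
  val body = "x" * requestPayloadSize
  val uri  = Uri.unsafeFromString("http://localhost:8080/") // local test server
  val req  = Request[IO](Method.POST, uri).withEntity(body)

  def run(args: List[String]): IO[ExitCode] =
    BlazeClientBuilder[IO](global).stream.flatMap { client =>
      // Fire the same request at a fixed rate and stream each response body,
      // printing the request index alongside each chunk size.
      Stream.fixedRate[IO](100.millis).zipWithIndex.flatMap { case (_, i) =>
        client.stream(req)
          .flatMap(_.bodyText)
          .evalMap(c => IO.delay(println(s"$i ${c.size}")))
      }
    }.compile.drain.as(ExitCode.Success)
}
```

With payload sizes above the thresholds described below, a run like this eventually fails with the premature-EOF exception rather than completing cleanly.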
Conclusion
Through extended experimentation I found threshold ("magic number") sizes
for the request and response payloads.
Below these payload sizes I can run the app for an extended period without any exceptions,
while if both the request and response sizes reach these thresholds,
the client eventually throws an EOF exception.
Notes
You can find the test project at https://github.com/slve/http4s-eof;
the only Scala source is
https://github.com/slve/http4s-eof/blob/master/src/main/scala/Http4sEof.scala.
Using fs2 2.5.0 and http4s 0.21.18: https://github.com/slve/http4s-eof/blob/master/build.sbt
The server part is only there to aid testing;
regardless of which server you run the test against,
the client will eventually throw an EOF exception within a short period.