Support for batched reads is not uniform among the various backends we have. In some cases, partial batches will wait longer or shorter than the specified deadline.
The odd one out right now is RabbitMQ, which supports configuring a channel with a pre-fetch limit, but you still have to consume messages one-by-one (the batching is all under the hood). Additionally, there doesn't seem to be a way to specify a timeout or deadline (apparently this is left up to individual clients to decide what makes the most sense).

As such, RabbitMQ is under-served compared to the rest. The implementation for batch reads is entirely done in the client code (i.e. reading messages one-by-one, buffering them, then returning).

One possible solution is to add a max prefetch config option and use it when initializing a consumer channel, but this would also mean the `max_messages` argument would be meaningless for RabbitMQ, and using `receive_all` at all would probably be irrelevant. Callers should probably just use `receive` instead.

For now, I've left some comments in the rabbit impl and will leave this as future work.
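For context, here is a rough sketch of what that one-message-at-a-time flow looks like, assuming the backend sits on top of lapin (the URI, queue name, and consumer tag are placeholders):

```rust
use futures_util::StreamExt;
use lapin::{options::*, types::FieldTable, Connection, ConnectionProperties};

async fn consume_one_by_one(uri: &str, queue: &str) -> Result<(), lapin::Error> {
    let conn = Connection::connect(uri, ConnectionProperties::default()).await?;
    let channel = conn.create_channel().await?;

    // A lapin `Consumer` is a `Stream` of deliveries: there is no call that
    // hands back N messages at once, so batching has to be layered on top.
    let mut consumer = channel
        .basic_consume(
            queue,
            "example-consumer",
            BasicConsumeOptions::default(),
            FieldTable::default(),
        )
        .await?;

    while let Some(delivery) = consumer.next().await {
        let delivery = delivery?;
        // ... handle the payload, then ack ...
        delivery.ack(BasicAckOptions::default()).await?;
    }
    Ok(())
}
```

There is also no per-receive deadline here; any timeout behavior has to be added by the caller, which is what makes `receive_all` awkward for this backend.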
The in-memory and rabbitmq impls differ from the rest as these consumers are channel/stream based, with no explicit option for consuming messages in batches.
Using `tokio::time::timeout` for the first item, then looping to opportunistically fill up the batch with any remaining already-ready messages, we can emulate behavior similar to the other backends (sketched below).

In all cases, `receive_all` should:

- wait up to the specified deadline for at least one message, and
- return as many messages, up to `max_messages`, as are available.
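A minimal sketch of that timeout-then-drain pattern, using a plain `tokio::sync::mpsc` receiver to stand in for the channel-based consumers (the function name and signature are illustrative, not the crate's actual API):

```rust
use std::time::Duration;
use tokio::sync::mpsc::Receiver;

/// Wait up to `deadline` for the first message, then opportunistically drain
/// whatever is already buffered, stopping at `max_messages`.
async fn receive_all_emulated<T>(
    rx: &mut Receiver<T>,
    max_messages: usize,
    deadline: Duration,
) -> Vec<T> {
    let mut batch = Vec::with_capacity(max_messages);
    if max_messages == 0 {
        return batch;
    }

    // Block (at most until the deadline) for the first item only.
    match tokio::time::timeout(deadline, rx.recv()).await {
        Ok(Some(first)) => batch.push(first),
        // Deadline elapsed or channel closed: return the empty batch.
        _ => return batch,
    }

    // Top up the batch with messages that are already ready, without waiting
    // any longer.
    while batch.len() < max_messages {
        match rx.try_recv() {
            Ok(msg) => batch.push(msg),
            Err(_) => break,
        }
    }

    batch
}
```

The key property is that the deadline only bounds the wait for the first message; once something has arrived, the rest of the batch is filled from whatever is already sitting in the channel.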
Additionally, a new bit of config has been exposed for rabbitmq so callers can specify a prefetch count. This didn't seem to have any impact on the tests, but seems good to have.
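Assuming the RabbitMQ backend is built on lapin, applying that prefetch count presumably boils down to a `basic_qos` call when the consumer channel is set up, roughly along these lines (the function and parameter names here are illustrative):

```rust
use lapin::{options::BasicQosOptions, Channel};

/// Apply a caller-supplied prefetch count to a freshly created consumer channel.
/// `prefetch_count` stands in for whatever config field ends up being exposed.
async fn apply_prefetch(channel: &Channel, prefetch_count: u16) -> lapin::Result<()> {
    // With the default options the limit applies per consumer rather than to
    // the channel as a whole.
    channel
        .basic_qos(prefetch_count, BasicQosOptions::default())
        .await
}
```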