Rate limiting in httpx #2989
-
## Rate Limiting

### Why?

Rate limiting is a very important feature in my opinion. It can be used, for example, to stay within an API's request quota or to avoid overloading a server.
### Libraries

A short search for rate limiting turned up several packages that do the trick in some way, but are not nicely integrated with httpx (e.g. `aiolimiter`, which is used below).
### My approaches

I did not find a rate limiting option in httpx, so I took a couple of approaches to tackle it myself. I would like to get some feedback on those.

#### Approach 1: Abusing event hooks

```python
import httpx
from aiolimiter import AsyncLimiter


class ApiClient:
    def __init__(self):
        # Allow at most 1 request per 10-second period.
        self.rate_limiter = AsyncLimiter(max_rate=1, time_period=10)
        event_hooks = {"request": [self.rate_limiting_event_hook]}
        self.client = httpx.AsyncClient(
            event_hooks=event_hooks,
            # ...
        )

    async def rate_limiting_event_hook(self, request):
        # Request event hooks are awaited before the request is sent,
        # so acquiring the limiter here throttles outgoing requests.
        async with self.rate_limiter:
            print("called event hook for request {}".format(request))
```

This approach works, but event hooks do not seem like the way to go from a design perspective.

#### Approach 2: Extending `httpx.AsyncHTTPTransport`

```python
import typing
from datetime import datetime
from types import TracebackType

import httpx
from aiolimiter import AsyncLimiter
from httpx._config import DEFAULT_LIMITS, Limits, Proxy
from httpx._types import CertTypes, VerifyTypes

SOCKET_OPTION = typing.Union[
    typing.Tuple[int, int, int],
    typing.Tuple[int, int, typing.Union[bytes, bytearray]],
    typing.Tuple[int, int, None, int],
]


class RateLimitedAsyncTransport(httpx.AsyncHTTPTransport):
    """
    A rate limited async httpx transport layer.
    """

    def __init__(
        self,
        verify: VerifyTypes = True,
        cert: CertTypes | None = None,
        http1: bool = True,
        http2: bool = False,
        limits: Limits = DEFAULT_LIMITS,
        trust_env: bool = True,
        proxy: Proxy | None = None,
        uds: str | None = None,
        local_address: str | None = None,
        retries: int = 0,
        socket_options: typing.Iterable[SOCKET_OPTION] | None = None,
        rate_limiter: AsyncLimiter | None = None,
    ) -> None:
        super().__init__(
            verify=verify,
            cert=cert,
            http1=http1,
            http2=http2,
            limits=limits,
            trust_env=trust_env,
            proxy=proxy,
            uds=uds,
            local_address=local_address,
            retries=retries,
            socket_options=socket_options,
        )
        self.rate_limiter = rate_limiter  # e.g. aiolimiter.AsyncLimiter(max_rate, time_period)

    async def __aenter__(self) -> "RateLimitedAsyncTransport":
        await super().__aenter__()
        return self

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None = None,
        exc_value: BaseException | None = None,
        traceback: TracebackType | None = None,
    ) -> None:
        return await super().__aexit__(exc_type, exc_value, traceback)

    async def handle_async_request(self, request):
        # This section is entered *at most* "max_rate" times per "time_period" seconds.
        async with self.rate_limiter:
            print("handled request at {}".format(datetime.now().isoformat()))
            return await super().handle_async_request(request)


class ApiClient:
    def __init__(self, max_connections=10, cert=None):
        rate_limiter = AsyncLimiter(max_rate=1, time_period=10)
        limits = httpx.Limits(max_keepalive_connections=None, max_connections=max_connections)
        timeout = httpx.Timeout(connect=30.0, read=20.0, write=20.0, pool=20.0)
        transport = RateLimitedAsyncTransport(
            limits=limits,
            http2=True,
            cert=cert,
            verify=False,
            retries=3,
            rate_limiter=rate_limiter,
        )
        self.client = httpx.AsyncClient(
            transport=transport,
            timeout=timeout,
            # ...
        )
```

#### Feedback

Could something like this be integrated via parameters in the `httpx.(Async)Client` classes? Looking forward to hearing your thoughts!
Replies: 2 comments 3 replies
-
The approach I'd take if implementing this would be...

#### Approach 3: Implement rate limiting in a transport, using composition

```python
class RateLimit(httpx.BaseTransport):
    def __init__(self, transport, ...):
        self.transport = transport
        ...  # Initial configuration

    def handle_request(self, request):
        ...  # Wait for any existing rate limits
        response = self.transport.handle_request(request)
        ...  # Update rate limit state
        return response


# Usage...
transport = RateLimit(
    httpx.HTTPTransport(),
    ...
)
client = httpx.Client(transport=transport)
```
It could. (Many things are possible)
I'd push back on it, at least at this point in time. (There's lots of different rate limiting policies that might be valid here, I'd rather see them addressed and explored through third party support. At least initially.)
I've treated it as out-of-scope. Also, rate limiting can be handled at the transport layer, which isn't true for `connection_retries` and the connection pool limits; those are behaviours that can only be handled within the internals of …
-
Thank you both!