Commit

Merge pull request #4706 from BerriAI/litellm_retry_after
Return `retry-after` header for rate limited requests
krrishdholakia authored Jul 14, 2024
2 parents 4d7d650 + de8230e commit d0fb685
Showing 7 changed files with 93 additions and 22 deletions.
1 change: 0 additions & 1 deletion litellm/proxy/_experimental/out/404.html

This file was deleted.

1 change: 0 additions & 1 deletion litellm/proxy/_experimental/out/model_hub.html

This file was deleted.

1 change: 0 additions & 1 deletion litellm/proxy/_experimental/out/onboarding.html

This file was deleted.

6 changes: 6 additions & 0 deletions litellm/proxy/_types.py
@@ -1624,11 +1624,17 @@ def __init__(
         type: str,
         param: Optional[str],
         code: Optional[int],
+        headers: Optional[Dict[str, str]] = None,
     ):
         self.message = message
         self.type = type
         self.param = param
         self.code = code
+        if headers is not None:
+            for k, v in headers.items():
+                if not isinstance(v, str):
+                    headers[k] = str(v)
+        self.headers = headers or {}
 
         # rules for proxyExceptions
         # Litellm router.py returns "No healthy deployment available" when there are no deployments available
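The change above coerces every header value to a string before storing it on the exception, since HTTP response headers must be strings while callers may pass an integer (e.g. a `retry-after` of 60 seconds). A minimal self-contained sketch of the pattern, assuming a standalone `ProxyException`-style class (the surrounding proxy code from the repository is not reproduced here):

```python
from typing import Dict, Optional


class ProxyException(Exception):
    """Minimal sketch of the exception class shown in the diff."""

    def __init__(
        self,
        message: str,
        type: str,
        param: Optional[str],
        code: Optional[int],
        headers: Optional[Dict[str, str]] = None,
    ):
        self.message = message
        self.type = type
        self.param = param
        self.code = code
        # HTTP header values must be strings; non-string values
        # (e.g. an int retry-after) are converted before storage.
        if headers is not None:
            for k, v in headers.items():
                if not isinstance(v, str):
                    headers[k] = str(v)
        self.headers = headers or {}


exc = ProxyException(
    message="Rate limit exceeded",
    type="rate_limit_error",
    param=None,
    code=429,
    headers={"retry-after": 60},  # int on purpose; coerced to "60"
)
print(exc.headers)  # {'retry-after': '60'}
```

Storing the headers on the exception lets the proxy's error handler copy them onto the 429 response, so rate-limited clients receive a `retry-after` hint.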
