Releases: BerriAI/litellm

v1.59.10

30 Jan 16:47

What's Changed

Full Changelog: v1.59.9...v1.59.10

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.10
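
With STORE_MODEL_IN_DB=True, models added through the proxy UI/API are persisted to the database. Models can also be declared up front in a mounted config file; a minimal sketch (the model alias and environment variable name here are illustrative, not taken from this release):

```yaml
model_list:
  - model_name: gpt-4o                      # alias that clients send in their requests
    litellm_params:
      model: openai/gpt-4o                  # provider/model routed to by the proxy
      api_key: os.environ/OPENAI_API_KEY    # resolved from the container's environment
```

Mount the file into the container (e.g. `-v $(pwd)/config.yaml:/app/config.yaml`) and pass `--config /app/config.yaml` after the image name.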

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083 |
| Aggregated | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083 |
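
The rate columns can be cross-checked against the counts: Requests/s implies a run duration, and Failure Count divided by that duration should reproduce the reported Failures/s. A quick sketch using the numbers from the v1.59.10 table above:

```python
# Figures from the /chat/completions row of the v1.59.10 load test
request_count = 1861
requests_per_s = 6.21745665443628
failure_count = 1

# Implied test duration from throughput
duration_s = request_count / requests_per_s  # ~299 s

# Failure rate derived from the counts matches the Failures/s column
failures_per_s = failure_count / duration_s
print(f"{failures_per_s:.6f}")  # ≈ 0.003341
```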

v1.59.8-stable

31 Jan 04:56

Full Changelog: v1.57.8-stable...v1.59.8-stable

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.59.8-stable


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 260.0 | 291.2207591958183 | 6.075260080470321 | 0.0 | 1818 | 0 | 223.10552599998346 | 3813.1267819999266 |
| Aggregated | Passed ✅ | 260.0 | 291.2207591958183 | 6.075260080470321 | 0.0 | 1818 | 0 | 223.10552599998346 | 3813.1267819999266 |

v1.59.9

29 Jan 15:44

What's Changed

Full Changelog: v1.59.8...v1.59.9

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.9


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 301.01550717582927 | 6.14169679840119 | 0.0 | 1837 | 0 | 234.85362500002793 | 3027.238808999982 |
| Aggregated | Failed ❌ | 270.0 | 301.01550717582927 | 6.14169679840119 | 0.0 | 1837 | 0 | 234.85362500002793 | 3027.238808999982 |

v1.59.8-dev1

29 Jan 08:11

What's Changed

Full Changelog: v1.59.8...v1.59.8-dev1

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.8-dev1


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 253.74562668371757 | 6.073890684010945 | 0.0 | 1818 | 0 | 198.74819999995452 | 1957.5085989999934 |
| Aggregated | Passed ✅ | 230.0 | 253.74562668371757 | 6.073890684010945 | 0.0 | 1818 | 0 | 198.74819999995452 | 1957.5085989999934 |

v1.59.8

27 Jan 16:15

What's Changed

  • refactor: cleanup dead codeblock by @krrishdholakia in #7936
  • add type annotation for litellm.api_base (#7980) by @krrishdholakia in #7994
  • (QA / testing) - Add unit testing for key model access checks by @ishaan-jaff in #7999
  • (Prometheus) - emit key budget metrics on startup by @ishaan-jaff in #8002
  • (Feat) set guardrails per team by @ishaan-jaff in #7993
  • Support nested JSON schema on Anthropic calls via proxy + fix Langfuse sync SDK issues by @krrishdholakia in #8003
  • Bug fix: using a key that was edited to add it to a team, while the key's user still belonged to no team, resulted in an unexpected error by @ishaan-jaff in #8008
  • (QA / testing) - Add e2e tests for key model access auth checks by @ishaan-jaff in #8000
  • (Fix) langfuse - setting LANGFUSE_FLUSH_INTERVAL by @ishaan-jaff in #8007

Full Changelog: v1.59.7...v1.59.8

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.8


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 280.0 | 325.48398318207154 | 6.003526201462839 | 0.0 | 1796 | 0 | 234.56590200004257 | 3690.442290999954 |
| Aggregated | Failed ❌ | 280.0 | 325.48398318207154 | 6.003526201462839 | 0.0 | 1796 | 0 | 234.56590200004257 | 3690.442290999954 |

v1.59.7

25 Jan 06:44

What's Changed

Full Changelog: v1.59.6...v1.59.7

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.7


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 260.0 | 294.5630730660492 | 6.1254059494010225 | 0.0 | 1832 | 0 | 231.04980300001898 | 2728.9633709999634 |
| Aggregated | Passed ✅ | 260.0 | 294.5630730660492 | 6.1254059494010225 | 0.0 | 1832 | 0 | 231.04980300001898 | 2728.9633709999634 |

v1.59.6

24 Jan 06:37

What's Changed

Full Changelog: v1.59.5...v1.59.6

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.6


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 250.0 | 302.94444351157557 | 6.065526445072595 | 0.0 | 1814 | 0 | 184.99327999995785 | 3192.1896389999915 |
| Aggregated | Failed ❌ | 250.0 | 302.94444351157557 | 6.065526445072595 | 0.0 | 1814 | 0 | 184.99327999995785 | 3192.1896389999915 |

v1.59.5

23 Jan 07:30

What's Changed

Full Changelog: v1.59.3...v1.59.5

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.5


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 227.08635060543418 | 6.150672112760015 | 0.0 | 1840 | 0 | 180.76872099999264 | 2652.4827009999967 |
| Aggregated | Passed ✅ | 210.0 | 227.08635060543418 | 6.150672112760015 | 0.0 | 1840 | 0 | 180.76872099999264 | 2652.4827009999967 |

v1.59.3.dev1

23 Jan 06:11

What's Changed

Full Changelog: v1.59.3...v1.59.3.dev1

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.3.dev1


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 259.2853146928995 | 6.073999238925992 | 0.0 | 1817 | 0 | 211.11294400003544 | 2538.129180999988 |
| Aggregated | Passed ✅ | 230.0 | 259.2853146928995 | 6.073999238925992 | 0.0 | 1817 | 0 | 211.11294400003544 | 2538.129180999988 |

v1.59.3

22 Jan 07:11

What's Changed

Full Changelog: v1.59.2...v1.59.3

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.3


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 229.9985951234699 | 6.27846665942667 | 0.0 | 1879 | 0 | 179.09318400000984 | 3769.753647000016 |
| Aggregated | Passed ✅ | 200.0 | 229.9985951234699 | 6.27846665942667 | 0.0 | 1879 | 0 | 179.09318400000984 | 3769.753647000016 |