
Releases: pkelaita/l2m2

v0.0.35

23 Oct 04:20
5ed0caf

0.0.35 - October 22, 2024

Changed

  • claude-3.5-sonnet now points to version claude-3-5-sonnet-latest

v0.0.34

30 Sep 21:51
c4281af

0.0.34 - September 30, 2024

Caution

This release has breaking changes! Please read the changelog carefully.

Added

  • New supported models gemma-2-9b, llama-3.2-1b, and llama-3.2-3b via Groq.

Changed

  • In order to be more consistent with l2m2's naming scheme, the following model ids have been updated:
    • llama3-8b → llama-3-8b
    • llama3-70b → llama-3-70b
    • llama3.1-8b → llama-3.1-8b
    • llama3.1-70b → llama-3.1-70b
    • llama3.1-405b → llama-3.1-405b
  • This is a breaking change!!! Calls using the old model_ids (llama3-8b, etc.) will fail.

Removed

  • Provider octoai has been removed as they have been acquired and are shutting down their cloud platform. This is a breaking change!!! Calls using the octoai provider will fail.
    • All previous OctoAI supported models (mixtral-8x22b, mixtral-8x7b, mistral-7b, llama-3-70b, llama-3.1-8b, llama-3.1-70b, and llama-3.1-405b) are still available via Mistral, Groq, and/or Replicate.

v0.0.33

11 Sep 22:14
91e36a8

0.0.33 - September 11, 2024

Changed

  • Updated gpt-4o version from gpt-4o-2024-05-13 to gpt-4o-2024-08-06.

v0.0.32

06 Aug 01:03
c28dd38

0.0.32 - August 5, 2024

Added

  • Mistral provider support via La Plateforme.

  • Mistral Large 2 model availability from Mistral.

  • Mistral 7B, Mixtral 8x7B, and Mixtral 8x22B model availability from Mistral in addition to existing providers.

  • Note: versions 0.0.30 and 0.0.31 were skipped due to a packaging error and a model key typo.

v0.0.29

05 Aug 04:52
7078afd

0.0.29 - August 4, 2024

Caution

This release has breaking changes! Please read the changelog carefully.

Added

  • alt_memory and bypass_memory have been added as parameters to call and call_custom in LLMClient and AsyncLLMClient. These parameters allow you to specify alternative memory streams to use for the call, or to bypass memory entirely.

Changed

  • Previously, the LLMClient and AsyncLLMClient constructors took memory_type, memory_window_size, and memory_loading_type as arguments. Now, they take just memory, while window_size and loading_type can be set on the memory object itself. These changes make the memory API far more consistent and easier to use, especially with the additions of alt_memory and bypass_memory.

Removed

  • The MemoryType enum has been removed. This is a breaking change!!! Instances of client = LLMClient(memory_type=MemoryType.CHAT) should be replaced with client = LLMClient(memory=ChatMemory()), and so on.
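The per-call memory semantics introduced in this release can be sketched with a self-contained stand-in. ChatMemoryStub and MiniClient below are illustrative stubs, not l2m2's real ChatMemory and LLMClient (the real call also takes a model id, provider credentials, etc.); they only model the memory-selection behavior described above:

```python
# Minimal stand-in modeling the new per-call memory options.
# ChatMemoryStub / MiniClient are hypothetical names for illustration;
# l2m2's real classes are ChatMemory and LLMClient.

class ChatMemoryStub:
    """Toy memory stream: an append-only list of (role, text) pairs."""
    def __init__(self):
        self.messages = []

    def append(self, role, text):
        self.messages.append((role, text))

class MiniClient:
    def __init__(self, memory=None):
        self.memory = memory  # default stream set at construction time

    def call(self, prompt, alt_memory=None, bypass_memory=False):
        # bypass_memory: ignore all memory for this call.
        # alt_memory: use this stream instead of the default one.
        mem = None if bypass_memory else (alt_memory or self.memory)
        history = list(mem.messages) if mem else []
        reply = f"echo: {prompt}"  # fake model response
        if mem is not None:
            mem.append("user", prompt)
            mem.append("assistant", reply)
        return reply, history

main_mem, side_mem = ChatMemoryStub(), ChatMemoryStub()
client = MiniClient(memory=main_mem)

client.call("hello")                       # recorded in main_mem
client.call("aside", alt_memory=side_mem)  # recorded in side_mem only
client.call("secret", bypass_memory=True)  # recorded nowhere
```

The design point is that memory configuration now lives on the memory object, while call-time routing (alt_memory, bypass_memory) lives on the call itself.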

v0.0.28

04 Aug 03:23
d317e3e
[client] add default provider activation

v0.0.27

25 Jul 05:34
670530e
v0.0.27 (#8)

v0.0.26

19 Jul 20:11
4acc6f0
[models] add gpt-4o-mini

v0.0.25

12 Jul 18:14
f969534
[exceptions] add LLMRateLimitError

v0.0.24

12 Jul 07:54
eb1ccbe
[feature] add timeout option to call and call_custom
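A rough sketch of what a per-call timeout option can look like, using a stand-in client rather than l2m2's actual implementation (the delay parameter exists only to simulate a slow model and is not part of any real API):

```python
import concurrent.futures
import time

class TimeoutClient:
    """Stand-in showing a per-call timeout option (not l2m2's code)."""

    def call(self, prompt, timeout=None, delay=0.0):
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(self._request, prompt, delay)
            # Raises concurrent.futures.TimeoutError if the (fake) model
            # takes longer than `timeout` seconds to respond.
            return future.result(timeout=timeout)

    def _request(self, prompt, delay):
        time.sleep(delay)  # simulate model latency
        return f"echo: {prompt}"

client = TimeoutClient()
client.call("hi", timeout=5)  # completes well within the limit
```

A timeout of None (the default here) waits indefinitely, following the usual Python convention for timeout parameters.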