Releases · ahyatt/llm
GitHub Models support, PDF support for Sonnet, R1, and more
What's Changed
- Add PDF input for Sonnet by @ultronozm in #142
- Add GitHub Models support by @gs-101 in #113
- Add DeepSeek R1 to the model list by @ahyatt in #146
- Simplify GitHub Models to be based off llm-azure by @ahyatt in #147
- Use "chat-model" as the name for "llm-open-compatible" by @whhone in #148
- Accept lists as valid non-standard-params, but prefer vectors by @ahyatt in #149
New Contributors
Full Changelog: 0.22.0...0.23.0
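To illustrate the parameter change in #149, here is a minimal sketch of passing provider-specific options through llm-make-chat-prompt's :non-standard-params. The parameter names shown ("stop", "top_k") and the my-provider variable are placeholders; array-valued parameters are best given as vectors, with plain lists now accepted as well.

```elisp
;; Minimal sketch: provider-specific options via :non-standard-params.
;; "stop" and "top_k" are illustrative names; which parameters exist depends
;; on the backend.  Array values are best passed as vectors, though lists
;; are accepted too.
(require 'llm)

(llm-chat
 my-provider  ; placeholder: any llm provider object
 (llm-make-chat-prompt
  "Write a haiku about spring."
  :non-standard-params '(("stop" . ["\n\n"])
                         ("top_k" . 40))))
```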
Improvements to llm-make-tool, renamed from llm-make-tool-function
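A rough sketch of the renamed entry point, assuming the keyword arguments I recall from the tool-use interface (:name, :description, :args, :function) and a :tools key on the prompt. Treat the exact argument-spec keys as assumptions and check the README; my-provider is a placeholder.

```elisp
;; Sketch only: defining a synchronous tool with llm-make-tool and offering it
;; to the model.  The :args plist keys shown are my best understanding of the
;; spec, not a verified signature.
(require 'llm)

(defvar my-weather-tool
  (llm-make-tool
   :name "get_weather"
   :description "Return the current weather for a city."
   :args '((:name "city" :type string :description "Name of the city"))
   :function (lambda (city) (format "It is sunny in %s." city))))

(llm-chat
 my-provider  ; placeholder provider
 (llm-make-chat-prompt "What's the weather in Lisbon?"
                       :tools (list my-weather-tool)))
```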
Tool use interface change, more Claude functionality
What's Changed
- Corrected typos in Snowflake model names and symbols by @s-kostyaev in #135
- Introduce llm-models-add, plus fix an issue with the Gemini 2.0 model by @ahyatt in #136
- Change function calling methods to tool-use, modify input format, add Claude image and streaming tool use by @ahyatt in #133
Full Changelog: 0.20.0...0.21.0
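As background for the streaming tool-use work in #133, the generic streaming call looks roughly like this; the callback arities reflect my reading of the API rather than a verified contract, and my-provider is a placeholder.

```elisp
;; Sketch: streaming chat.  The partial callback receives the accumulated text
;; so far, the response callback the final result, and the error callback an
;; error type plus message.
(require 'llm)

(llm-chat-streaming
 my-provider  ; placeholder provider
 (llm-make-chat-prompt "Summarize the llm package in one sentence.")
 (lambda (partial) (message "So far: %s" partial))
 (lambda (response) (message "Done: %s" response))
 (lambda (type msg) (message "Error %s: %s" type msg)))
```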
JSON-mode with schema following, new models
What's Changed
- Fix URL by @jsntn in #118
- Add Gemini 2.0 flash to the list of Gemini models by @ahyatt in #121
- Add llama-3.3 and QwQ models by @ahyatt in #122
- Add ability to get JSON object as a specific schema by @ahyatt in #123
- Add Snowflake Arctic Embed 2.0 model by @s-kostyaev in #125
- docs: update README.org by @eltociear in #126
- Add gemini-2.0-flash-thinking-exp, fix output selection by @ahyatt in #127
- Fix missing capabilities in Gemini thinking model by @ahyatt in #128
- Fix integration test false positives by @ahyatt in #129
New Contributors
- @jsntn made their first contribution in #118
- @eltociear made their first contribution in #126
Full Changelog: 0.19.1...0.20.0
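To give a feel for the schema-following JSON mode from #123: my understanding is that it is requested through llm-make-chat-prompt's :response-format with a plist that mirrors JSON Schema; the exact spec shape below is an assumption, as is my-provider.

```elisp
;; Sketch only: requesting JSON that follows a schema.  The :response-format
;; plist shown mirrors JSON Schema, but its exact shape should be checked
;; against the package documentation.
(require 'llm)

(llm-chat
 my-provider  ; placeholder provider
 (llm-make-chat-prompt
  "List three Emacs packages related to LLMs."
  :response-format '(:type object
                     :properties (:packages (:type array :items (:type string)))
                     :required (packages))))
```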
Fix OpenAI's model context sizes
What's Changed
Full Changelog: 0.19.0...0.19.1
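The fix above concerns the value reported by llm-chat-token-limit, which returns the context window size for a provider's configured chat model. A quick check, with a placeholder key and an example model name:

```elisp
;; Check the context size the library reports for a configured chat model.
(require 'llm)
(require 'llm-openai)

(let ((provider (make-llm-openai :key "sk-..." :chat-model "gpt-4o")))
  ;; Returns the model's context window size in tokens.
  (llm-chat-token-limit provider))
```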
JSON mode and functions as keys
What's Changed
- Add BGE-M3 multilingual embedding model by @s-kostyaev in #109
- Add JSON mode for the providers that support it by @ahyatt in #112
- Add support for auth-source secret functions to :key argument by @minad in #111
- Extend key function to other models, add to README by @ahyatt in #114
New Contributors
Full Changelog: 0.18.1...0.19.0
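A sketch of the :key change from #111 and #114: as I understand it, provider constructors now accept a function for :key, which pairs naturally with the secret closures auth-source returns. The host and model names below are placeholders.

```elisp
;; Sketch: using an auth-source secret function as the provider key.
;; auth-source-search returns entries whose :secret is often a zero-argument
;; function; per #111/#114, such a function can be passed directly as :key.
(require 'auth-source)
(require 'llm-openai)

(defvar my-openai-provider
  (let* ((entry (car (auth-source-search :host "api.openai.com" :max 1)))
         (secret (plist-get entry :secret)))  ; may be a function or a string
    (make-llm-openai :key secret :chat-model "gpt-4o-mini")))
```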
Fix for llm-batch-embeddings-async
What's Changed
- Fix for ollama name and capabilities for embedding only models by @ahyatt in #106
- Fix extra argument in llm-batch-embeddings-async by @ahyatt in #107
Full Changelog: 0.18.0...0.18.1
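For reference, the call that #107 fixes looks roughly like this; the argument order and callback arities are my reading of the batch-embedding API, and the Ollama model name is just an example.

```elisp
;; Sketch: asynchronous batch embeddings.  The success callback receives one
;; embedding per input string; the error callback an error type and message.
(require 'llm)
(require 'llm-ollama)

(let ((provider (make-llm-ollama :embedding-model "nomic-embed-text")))
  (llm-batch-embeddings-async
   provider
   '("first document" "second document")
   (lambda (embeddings) (message "Got %d embeddings" (length embeddings)))
   (lambda (type msg) (message "Error %s: %s" type msg))))
```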
0.18.0
What's Changed
- Add Azure support by @ahyatt in #96
- Add multimodal support for OpenAI, Gemini, and Ollama by @awswan in #88
- Add batch embeddings capability, implement for OpenAI and Ollama by @ahyatt in #93
- Add ability to fill prompt variable backwards by @ahyatt in #82
- Add new ollama models to the list of function calling models by @ahyatt in #84
- Centralizing model information in new llm-models.el by @ahyatt in #85
- Unvendor plz-event-source and plz-media-type by @leotaku in #87
- Ignore tests and utilities for elpa by @ahyatt in #89
- Update Claude Sonnet version by @ahyatt in #90
- Fix issue with empty function call args for OpenAI by @ahyatt in #99
- Handle plz-error-message error type by @leotaku in #95
New Contributors
Full Changelog: 0.17.4...0.18.0
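To sketch the multimodal support from #88: my understanding is that prompt content can be a multipart object combining text and media. The helper names llm-make-multipart and make-llm-media, and their keywords, are from memory and should be verified against the README; my-provider and the file path are placeholders.

```elisp
;; Sketch only: a chat prompt mixing text and an image.  Helper names and
;; keywords here (llm-make-multipart, make-llm-media, :mime-type, :data) are
;; assumptions to be checked against the README.
(require 'llm)

(llm-chat
 my-provider  ; placeholder: an OpenAI, Gemini, or Ollama provider
 (llm-make-chat-prompt
  (llm-make-multipart
   "What is shown in this image?"
   (make-llm-media
    :mime-type "image/png"
    :data (with-temp-buffer
            (set-buffer-multibyte nil)
            (insert-file-contents-literally "~/screenshot.png")
            (buffer-string))))))
```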
Add llm-prompt-default-max-tokens, OpenAI token limit fixes, parallel tool use fixes
What's Changed
- Fix breakage with OpenAI's llm-chat-token-limit by @ahyatt in #77
- Fix Vertex and OpenAI's parallel tool calls by @ahyatt in #78
- Add variable llm-prompt-default-max-tokens by @ahyatt in #79
- Fix how we look for ollama models in integration tests by @ahyatt in #80
Full Changelog: 0.17.3...0.17.4
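The new variable from #79 appears to cap how many tokens llm-prompt will spend when filling prompt templates, independent of the model's full context window as reported by llm-chat-token-limit. A minimal sketch:

```elisp
;; Cap prompt filling at roughly 4096 tokens instead of letting it scale with
;; the model's entire context window.  (Exact semantics: see the llm-prompt
;; documentation.)
(require 'llm-prompt)
(setq llm-prompt-default-max-tokens 4096)
```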
More efficient streaming text insertion, function-calling conversation fixes
What's Changed
- Make streaming not repeatedly insert the same thing by @ultronozm in #72
- Fix error with ollama function results by @ultronozm in #74
- Fix bug involving multiple function calls with Claude by @ultronozm in #73
- Remove debug log output on streaming to reduce log volume by @ahyatt in #75
Full Changelog: 0.17.2...0.17.3
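The streaming-insertion fix in #72 is easiest to see through llm-chat-streaming-to-point, which streams a response into a buffer position as it arrives. The finish callback's arity is hedged with &rest, and my-provider is a placeholder.

```elisp
;; Sketch: stream a response into the current buffer at point.
(require 'llm)

(llm-chat-streaming-to-point
 my-provider  ; placeholder provider
 (llm-make-chat-prompt "Write a one-line release note about streaming fixes.")
 (current-buffer)
 (point)
 (lambda (&rest _) (message "Streaming finished")))
```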