Add llm-prompt-default-max-tokens, OpenAI token limit fixes, parallel tool use fixes
What's Changed
- Fix breakage with OpenAI's llm-chat-token-limit by @ahyatt in #77
- Fix parallel tool call handling for Vertex and OpenAI by @ahyatt in #78
- Add variable llm-prompt-default-max-tokens by @ahyatt in #79 (see the sketch after this list)
- Fix how we look for Ollama models in integration tests by @ahyatt in #80
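
As a minimal sketch of the new variable, you might set it from your init file like this; the value 4096 is an illustrative assumption, not the package's actual default:

```elisp
;; Cap the default token budget used for prompts.  4096 is an
;; arbitrary example value, not the package default.
(setq llm-prompt-default-max-tokens 4096)
```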
Full Changelog: 0.17.3...0.17.4