Implement an option to integrate Ollama with ShellGPT. These changes should include the ability to easily switch LLM backends for ShellGPT, allowing users to toggle between OpenAI and Ollama. Since Ollama responses are slightly different (compared to OpenAI), we can utilise the Ollama Python Library.
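To illustrate the response-shape difference, here is a minimal sketch of an adapter that maps an Ollama-style chat response onto the OpenAI "choices" layout ShellGPT already consumes. The function name and the exact mapping are assumptions for illustration, not existing ShellGPT code; the input fields follow the dict returned by the Ollama Python library's chat() call.

```python
def normalize_ollama_response(resp: dict) -> dict:
    """Hypothetical adapter: reshape an Ollama chat() response
    ({"message": {"role": ..., "content": ...}, "done": ...})
    into the OpenAI-style {"choices": [...]} layout."""
    return {
        "choices": [
            {
                "message": {
                    "role": resp["message"]["role"],
                    "content": resp["message"]["content"],
                },
                # Ollama signals completion with a boolean "done" flag
                # rather than an OpenAI-style finish_reason string.
                "finish_reason": "stop" if resp.get("done") else None,
            }
        ]
    }
```

With an adapter like this, the rest of ShellGPT's handler code would not need to branch on the active backend.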
With multiple LLM backends, dependencies specific to a particular LLM/backend should utilise Python package "extras". This way, users will have the option to install ShellGPT with the default OpenAI client, e.g., pip install shell-gpt, or with a specific backend, e.g., pip install shell-gpt[ollama].
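In pyproject.toml terms, the extras could look roughly like this (a sketch only; the dependency names and any pins would need to match ShellGPT's actual packaging):

```toml
[project]
name = "shell-gpt"
# Default install ships only the OpenAI client.
dependencies = ["openai"]

[project.optional-dependencies]
# Installed via: pip install shell-gpt[ollama]
ollama = ["ollama"]
```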
Since Ollama supports multiple open-source models, we need to identify the specific model that performs best for ShellGPT use cases. Based on my research, mistral:7b-instruct outperforms llama2:*-* in shell command generation tasks.
UPD: Seems it would be much easier to integrate Ollama using LiteLLM. This will also enable ShellGPT to work with the Azure OpenAI API.
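With LiteLLM, backend switching mostly reduces to building the provider-prefixed model string it expects (e.g. "ollama/..." or "azure/..."; a bare name is routed to OpenAI). A small sketch of that dispatch, where the function name and the set of supported backends are illustrative assumptions:

```python
def litellm_model_name(backend: str, model: str) -> str:
    """Build the provider-prefixed model string LiteLLM expects,
    e.g. ("ollama", "mistral:7b-instruct") -> "ollama/mistral:7b-instruct"."""
    prefixes = {"openai": "", "ollama": "ollama/", "azure": "azure/"}
    if backend not in prefixes:
        raise ValueError(f"unsupported backend: {backend}")
    return prefixes[backend] + model

# The actual call would then be uniform across backends, e.g.:
# litellm.completion(model=litellm_model_name("ollama", "mistral:7b-instruct"),
#                    messages=[{"role": "user", "content": prompt}])
```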