Ollama integration and other backends #463
Conversation
Hi! Let me begin by thanking you for this awesome project! I'll test this more soon (also on WSL), but it seems to work on OS X!
For now, two little remarks:
straight from source:
Just install it outside of the venv
It's asking for an API key even when I press Enter as I'm trying to set up Ollama.
I tested this PR with the MistralAI API and it fixes the issue I was having before where it was complaining about
Integrating multiple locally hosted LLMs using LiteLLM.
Test It
To test ShellGPT with Ollama, follow these steps:
Ollama
Note
ShellGPT is not optimized for local models and may not work as expected.
Installation
MacOS
Download and launch Ollama app.
Linux & WSL2
curl https://ollama.ai/install.sh | sh
Setup
We can have multiple large language models installed in Ollama, such as Llama 2, Mistral, and others. It is recommended to use `mistral:7b-instruct` for the best results. Installing the model will take some time, since it has to be downloaded first. Once the model is installed, you can start the API server.
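As a sketch, using the standard Ollama CLI, the install and server-start steps described above look like this (the model name is the one recommended here):

```shell
# Download the recommended model; the first pull fetches several GB.
ollama pull mistral:7b-instruct

# Start the Ollama API server (listens on localhost:11434 by default).
ollama serve
```

On macOS the desktop app starts the server for you, so running `ollama serve` manually is only needed when Ollama was installed without the app.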
ShellGPT configuration
Now that we have the Ollama backend running, we need to configure ShellGPT to use it. First, check that the Ollama backend is running and accessible.
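A quick reachability check can be done with curl, assuming Ollama is on its default port 11434 (adjust the address if you changed it):

```shell
# Ollama's API listens on localhost:11434 by default.
# The root endpoint replies with a short status message when the server is up.
curl http://localhost:11434
```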
If you are running ShellGPT for the first time, you will be prompted for an OpenAI API key. Just press Enter to skip this step. Now we need to change a few settings in `~/.config/shell_gpt/.sgptrc`. Open the file in your editor and change `DEFAULT_MODEL` to `ollama/mistral:7b-instruct`. Also make sure that `OPENAI_USE_FUNCTIONS` is set to `false`. And that's it! Now you can use ShellGPT with the Ollama backend:
sgpt "Hello Ollama"
Azure