macOS 404 reply with both Ollama and ChatGPT #949
Comments
This issue also affects M1 with Sequoia 15.1. Please can we look into this?
I have the same problem: attempting to use any Ollama model produces the above 404 messages...
We just released a new version: https://github.com/block/goose/releases/download/stable/Goose.zip. Can you please retry and see if that works for you?
Just downloaded the version you shared, @salman1993 (1.0.3), and I'm still seeing the same issue with Ollama:
Hello @salman1993! I'm now facing a different issue with this version.
New user here. I just installed Goose on macOS 15.2 a few minutes ago and tried to get it to talk to
Thanks for reporting, let me check it.
Can you try to run
I got the same issue. I ran `cat ~/.config/goose/config.yaml` and it's the exact same IP in Goose for Ollama. "Ran into this error: Execution error: builder error."
Can you share your YAML? I'd like to confirm what format the Ollama host is in. Also, can you try restarting your Mac to see whether the issue is resolved? I am still debugging it and have found that restarting the Mac can be a workaround.
Sure, this is what's inside the YAML: "OLLAMA_HOST: 127.0.0.1:11434". I restarted; same issue.
Can you try setting your host to "http://127.0.0.1:11434" (adding the http:// prefix)?
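For what it's worth, a scheme-less host is a classic cause of HTTP client "builder" errors: the URL parser has no scheme to work from. A minimal sketch of the workaround, assuming that is what is happening here; the `normalize_ollama_host` helper is hypothetical and not part of Goose:

```python
from urllib.parse import urlparse

def normalize_ollama_host(host: str) -> str:
    """Prepend http:// when the configured host has no URL scheme.

    Hypothetical helper illustrating the workaround of writing
    OLLAMA_HOST with an explicit scheme in config.yaml.
    """
    if not host.startswith(("http://", "https://")):
        host = "http://" + host
    # urlparse now yields a usable scheme and netloc
    parsed = urlparse(host)
    assert parsed.scheme in ("http", "https") and parsed.netloc
    return host

print(normalize_ollama_host("127.0.0.1:11434"))  # http://127.0.0.1:11434
```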
I did try; same issue.
I am not sure what you meant, but I clicked Reset Provider and it started a new window. The Ollama key is already there and I can't change it; I can only change it from Configure. Also, when I close the window and open a new one after clicking Reset Provider, same issue.
Thanks, I will continue debugging on our side. To unblock you, can you try the previous version: https://github.com/block/goose/releases/tag/v1.0.2
Just FYI - I had the 404 message yesterday after installing and using 1.0.0, so I doubt that 1.0.2 will be a remedy...
I had faced the same issue. I wonder whether the problem is that the model is set without checking whether it actually exists on the system. These are the current models I have downloaded. One issue I noticed is when I switch to
Thanks @yingjiehe-xyz, it took both suggestions in combination, but it's working:

```
~ cat ~/.config/goose/config.yaml
OLLAMA_HOST: http://127.0.0.1:11434

~ ollama list
NAME                                   ID            SIZE    MODIFIED
qwen2.5:latest                         845dbda0ea48  4.7 GB  5 minutes ago
michaelneale/deepseek-r1-goose:latest  425664f4d998  9.0 GB  12 hours ago

~ ps aux | grep ollama
raymondgasper  25890  0.0  0.0  410742592   1696 s011  S+  8:17AM  0:00.01 grep --color=auto ollama
raymondgasper   7382  0.0  0.1  411863504  35568 ??    S   8:09AM  0:44.23 /Applications/Ollama.app/Contents/Resources/ollama serve
```

Goose is working w/ both models.
The issue seems to be that Ollama sets the host to "0.0.0.0" without the http:// prefix. In the meantime, for Goose on Mac you can try setting OLLAMA_HOST with an explicit http:// prefix in `~/.config/goose/config.yaml`. For Linux it'll be similar, but you'd have to update the corresponding configuration instead. Then restart Ollama and Goose; that should make it work. Please let us know if that works.
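Concretely, the host setup can be sketched like this; a sketch only, assuming the environment-variable commands from Ollama's FAQ apply to this setup (values shown are illustrative):

```
# macOS: set the address Ollama binds to (picked up by GUI apps),
# then restart the Ollama app.
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"

# Linux (systemd): add the variable to the service, then restart.
sudo systemctl edit ollama.service
#   under [Service], add:
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Goose side (~/.config/goose/config.yaml), with the explicit scheme:
#   OLLAMA_HOST: http://127.0.0.1:11434
```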
I had the same error on Windows using WSL2 (Ubuntu). The key issue is networking between WSL2 and your Windows host. Since I had Ollama running on the Windows host and Goose running in WSL, I had to take measures to allow interaction between the two.

1. Find the Windows host IP from WSL2: `cat /etc/resolv.conf | grep nameserver | awk '{print $2}'`
2. Test the Ollama server running on the Windows host: `curl http://172.27.*.1:11434`
3. By default, Ollama binds to 127.0.0.1 (localhost), which is only accessible within Windows. To allow connections from WSL2, in Windows PowerShell (run as admin): `$env:OLLAMA_HOST="0.0.0.0:11434"`
4. Configure Goose: `OLLAMA_HOST=http://172.27.*.1:11434`
5. Optional, in PowerShell (run as admin): `Enable-NetFirewallRule -DisplayName 'Virtual Machine Monitoring (Echo Request - ICMPv4-In)'`

Tip: if you still have an issue, disable your VPN.
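The IP-extraction step can be checked offline; a minimal sketch, assuming a typical WSL2-generated resolv.conf (the sample content below is made up for illustration):

```shell
# Simulate a WSL2-generated resolv.conf and extract the Windows host IP.
# In a real WSL2 shell you would read /etc/resolv.conf directly.
resolv_conf='# This file was automatically generated by WSL
nameserver 172.27.0.1'

host_ip=$(printf '%s\n' "$resolv_conf" | grep nameserver | awk '{print $2}')
echo "$host_ip"

# The Goose config would then point at that address:
echo "OLLAMA_HOST=http://${host_ip}:11434"
```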
Describe the bug
I have Ollama running, serving Qwen2.5. I can access it on localhost:11434, and Goose also finds it, lists the models, and lets me select Qwen2.5.
However, Goose then sends the requests to:
/v1/chat/completions
for which Ollama returns a 404.
I believe the Ollama API no longer mimics the ChatGPT one and instead expects calls on /api/chat. Or must I do something on my end to make /v1/chat/completions available in Ollama?
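For context on the two paths involved: Ollama serves both its native chat API and, in recent versions, an OpenAI-compatible layer. A small illustrative helper (the function name is mine; the paths are per Ollama's public docs) shows the two URL shapes:

```python
def ollama_chat_urls(host: str = "http://127.0.0.1:11434") -> dict:
    """Return the two chat endpoints an Ollama server exposes.

    Illustrative helper: /api/chat is Ollama's native API, while
    /v1/chat/completions is its OpenAI-compatible layer. A 404 on
    the latter usually points at a bad host or path, not a removed API.
    """
    return {
        "native": f"{host}/api/chat",
        "openai_compatible": f"{host}/v1/chat/completions",
    }

print(ollama_chat_urls()["openai_compatible"])
# http://127.0.0.1:11434/v1/chat/completions
```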
Stranger still, after adding my ChatGPT API key and switching to gpt-4o, I also get a 404 response when trying to use OpenAI, even though it says it successfully switched to gpt-4o.
To Reproduce
Expected behavior
No 404
Screenshots
Please provide the following information:
Additional context