
[Bug]: Aider ignores --timeout arg and times out while waiting for locally hosted LLM reply #8174

Open
vmajor opened this issue Feb 1, 2025 · 0 comments
vmajor commented Feb 1, 2025

What happened?

I am hosting R1 (the actual model) locally, so responses take a long time to come back. I have been trying to tell Aider not to time out my requests, to no avail.

This is still not resolved as of Aider v0.72.3 when accessing the llama-server endpoint.

Aider ignores the --timeout flag, times out the request, and then attempts to cancel it, which of course does not work:

aider --model openai/deepseek-r1 --timeout 60000

litellm.APIConnectionError: APIConnectionError: OpenAIException - timed out
Retrying in 0.2 seconds...
litellm.APIConnectionError: APIConnectionError: OpenAIException - peer closed connection without sending complete message
body (incomplete chunked read)
Retrying in 0.5 seconds...
litellm.APIError: APIError: OpenAIException - Connection error.
Retrying in 1.0 seconds...

This is what llama-server sees:

main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv  update_slots: all slots are idle
slot launch_slot_: id  0 | task 0 | processing task
slot update_slots: id  0 | task 0 | new prompt, n_ctx_slot = 42016, n_keep = 0, n_prompt_tokens = 2074
slot update_slots: id  0 | task 0 | kv cache rm [0, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 2048, n_tokens = 2048, progress = 0.987464
srv  cancel_tasks: cancel task, id_task = 0
request: POST /chat/completions 127.0.0.1 200
srv  cancel_tasks: cancel task, id_task = 2
request: POST /chat/completions 127.0.0.1 200
...
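As a point of comparison, the same request can be issued against the same llama-server endpoint with a plain OpenAI-compatible client and an explicit client-side timeout. This is only a minimal sketch; the base URL, dummy API key, and model name are assumptions for this local setup (llama-server ignores the key):

from openai import OpenAI

# Point a plain OpenAI-compatible client at the local llama-server endpoint.
# base_url, api_key, and model name are placeholders for this local setup.
client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="sk-no-key-required",
    timeout=60000,  # seconds; generous enough for slow local R1 generations
)

resp = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)

If the client-side timeout set this way is honoured while the same value passed via --timeout is not, that points at the Aider/LiteLLM layer rather than at llama-server.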

Relevant log output

Are you a ML Ops Team?

No

What LiteLLM version are you on?

No idea, how do I find out?
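(One way to check, assuming LiteLLM is installed in the same Python environment that Aider runs from:)

from importlib.metadata import version

# Print the installed LiteLLM version from the current environment.
print(version("litellm"))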

Twitter / LinkedIn details

No response

@vmajor vmajor added the bug Something isn't working label Feb 1, 2025