What happened?
I am hosting R1 (the actual model) locally, and it takes a while for anything to happen. I have been trying to tell it not to time out my requests, to no avail.
This is still not resolved in Aider v0.72.3 when accessing the llama-server endpoint.
aider ignores the --timeout flag, times out the request, and then attempts to cancel it, which of course does not help:
aider --model openai/deepseek-r1 --timeout 60000
litellm.APIConnectionError: APIConnectionError: OpenAIException - timed out
Retrying in 0.2 seconds...
litellm.APIConnectionError: APIConnectionError: OpenAIException - peer closed connection without sending complete message
body (incomplete chunked read)
Retrying in 0.5 seconds...
litellm.APIError: APIError: OpenAIException - Connection error.
Retrying in 1.0 seconds...
This is what llama-server sees:
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv update_slots: all slots are idle
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 42016, n_keep = 0, n_prompt_tokens = 2074
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 2048, n_tokens = 2048, progress = 0.987464
srv cancel_tasks: cancel task, id_task = 0
request: POST /chat/completions 127.0.0.1 200
srv cancel_tasks: cancel task, id_task = 2
request: POST /chat/completions 127.0.0.1 200
...
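To narrow down whether the timeout is being dropped by aider or by LiteLLM underneath it, here is a minimal sketch of calling LiteLLM directly against the same endpoint. Assumptions: llama-server exposes its OpenAI-compatible API at http://127.0.0.1:8080/v1, does not validate the API key, and the model name matches the aider command above.

# Minimal repro sketch (assumed endpoint, dummy key, same --timeout value).
import litellm

response = litellm.completion(
    model="openai/deepseek-r1",
    api_base="http://127.0.0.1:8080/v1",
    api_key="sk-no-key-required",  # llama-server ignores the key by default
    messages=[{"role": "user", "content": "hello"}],
    timeout=60000,  # seconds; same value passed to aider --timeout
)
print(response.choices[0].message.content)

If this call also times out early, the problem is in how LiteLLM handles the timeout; if it waits the full duration, the value is likely being lost on aider's side.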
Relevant log output
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
No idea, how do I find out?
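A quick way to check, assuming LiteLLM was installed with pip:

pip show litellm
# or, using only the Python standard library:
python -c "import importlib.metadata; print(importlib.metadata.version('litellm'))"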
Twitter / LinkedIn details
No response