
Something went wrong: Telegram server says - Bad Request: message is too long #79

picarica opened this issue Feb 11, 2025 · 10 comments

picarica commented Feb 11, 2025

Hello, I have this running with the deepseek-r1:1.5b model. I wonder if this model is limited to such message lengths, or is my setup wrong?

```
2025-02-11 11:07:13.256343+00:00 users_ids: []
2025-02-11 11:07:13.256396+00:00 allowed_ids: []
2025-02-11 11:07:13.487569+00:00 INFO:aiogram.dispatcher:Start polling
2025-02-11 11:07:13.526973+00:00 INFO:aiogram.dispatcher:Run polling for bot @Ollama_bot id=REDACTED - 'Ollama-LLM'
2025-02-11 11:07:34.800606+00:00 INFO:root:[OllamaAPI]: Processing 'Conversation thread:
2025-02-11 11:07:34.800709+00:00 2025-02-11T11:07:34.800709338Z
2025-02-11 11:07:34.800723+00:00 User: @Ollama_bot what is IPEx for intel ARC gpus? how does it compare to CUDA  or ROCM ?
2025-02-11 11:07:34.800734+00:00 2025-02-11T11:07:34.800734725Z
2025-02-11 11:07:34.800744+00:00 History:' for REDACED
2025-02-11 11:07:34.800757+00:00 INFO:root:Sending request to Ollama API: http://192.168.0.10:30068/api/chat
2025-02-11 11:07:34.800984+00:00 INFO:root:Payload: {
2025-02-11 11:07:34.801038+00:00 "model": "deepseek-r1:1.5b",
2025-02-11 11:07:34.801052+00:00 "messages": [
2025-02-11 11:07:34.801062+00:00 {
2025-02-11 11:07:34.801073+00:00 "role": "user",
2025-02-11 11:07:34.801100+00:00 "content": "Conversation thread:\n\nUser: @Ollama_bot what is IPEx for intel ARC gpus? how does it compare to CUDA  or ROCM ?\n\nHistory:",
2025-02-11 11:07:34.801112+00:00 "images": []
2025-02-11 11:07:34.801122+00:00 }
2025-02-11 11:07:34.801131+00:00 ],
2025-02-11 11:07:34.801140+00:00 "stream": true
2025-02-11 11:07:34.801156+00:00 }
2025-02-11 11:07:58.915540+00:00 -----
2025-02-11 11:07:58.915645+00:00 [OllamaAPI-ERR] CAUGHT FAULT!
2025-02-11 11:07:58.915664+00:00 Traceback (most recent call last):
2025-02-11 11:07:58.915679+00:00 File "/code/run.py", line 488, in ollama_request
2025-02-11 11:07:58.915717+00:00 if await handle_response(message, response_data, full_response):
2025-02-11 11:07:58.915733+00:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-11 11:07:58.915747+00:00 File "/code/run.py", line 418, in handle_response
2025-02-11 11:07:58.915761+00:00 await send_response(message, text)
2025-02-11 11:07:58.915782+00:00 File "/code/run.py", line 433, in send_response
2025-02-11 11:07:58.915797+00:00 await bot.send_message(chat_id=message.chat.id, text=text,parse_mode=ParseMode.MARKDOWN)
2025-02-11 11:07:58.915811+00:00 File "/usr/local/lib/python3.12/site-packages/aiogram/client/bot.py", line 2917, in send_message
2025-02-11 11:07:58.915825+00:00 return await self(call, request_timeout=request_timeout)
2025-02-11 11:07:58.915845+00:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-11 11:07:58.915859+00:00 File "/usr/local/lib/python3.12/site-packages/aiogram/client/bot.py", line 488, in __call__
2025-02-11 11:07:58.915873+00:00 return await self.session(self, method, timeout=request_timeout)
2025-02-11 11:07:58.915894+00:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-11 11:07:58.915909+00:00 File "/usr/local/lib/python3.12/site-packages/aiogram/client/session/base.py", line 254, in __call__
2025-02-11 11:07:58.915923+00:00 return cast(TelegramType, await middleware(bot, method))
2025-02-11 11:07:58.915936+00:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-11 11:07:58.915956+00:00 File "/usr/local/lib/python3.12/site-packages/aiogram/client/session/aiohttp.py", line 189, in make_request
2025-02-11 11:07:58.915971+00:00 response = self.check_response(
2025-02-11 11:07:58.915984+00:00 ^^^^^^^^^^^^^^^^^^^^
2025-02-11 11:07:58.915997+00:00 File "/usr/local/lib/python3.12/site-packages/aiogram/client/session/base.py", line 120, in check_response
2025-02-11 11:07:58.916018+00:00 raise TelegramBadRequest(method=method, message=description)
2025-02-11 11:07:58.916032+00:00 aiogram.exceptions.TelegramBadRequest: Telegram server says - Bad Request: message is too long
2025-02-11 11:07:58.916046+00:00 2025-02-11T11:07:58.916046870Z
2025-02-11 11:07:58.916066+00:00 -----
2025-02-11 11:07:58.986017+00:00 INFO:aiogram.event:Update id=OPAAA is handled. Duration 24461 ms by bot id=7785230416
```

@lemassykoi

For me, Ollama's deepseek gives an answer like this:

```
<think>
The user has asked something about...
So I will...
</think>
Hey, from what you've asked...
```

and the <think> tag is not supported by either of the two available parse modes: HTML and Markdown.
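A minimal sketch of one workaround, assuming you just want to drop the reasoning block before sending (`strip_think_block` is a hypothetical helper, not something in this repo):

```python
import re

def strip_think_block(text: str) -> str:
    """Drop the <think>...</think> reasoning block that DeepSeek-R1 emits,
    since neither of Telegram's parse modes accepts the raw tag."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

Stripping the block also shortens the reply, which helps with the length limit, though long answers can still exceed 4096 characters on their own.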

@picarica (Author)

I don't see how that's related, but I had the same error with the llama3 model.

@lemassykoi

It's related because of the error: Telegram server says - Bad Request: message is too long.

Also, your Telegram bot ID is visible at the end of the last line of your logs.

@picarica (Author)

Yeah, but the message in the example isn't that long, is it? And thanks for the heads-up :D

@lemassykoi

No, it's not, but try without mentioning the bot; maybe the @ is the problem.

@picarica (Author)

Yeah, without it, it doesn't answer at all. It was working a week ago and I don't know what changed, but the bot is in a group, and without mentioning it, it doesn't respond.

@gjmulder

I added a pagination function to split the LLM reply into pages of fewer than 4096 characters, as that is the maximum Telegram message size.
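For reference, a sketch of what such a pagination helper can look like. This is a minimal version under the 4096-character assumption, not gjmulder's actual patch, and the hook into send_response() from run.py is shown only as a comment:

```python
TELEGRAM_MAX_MESSAGE_LENGTH = 4096  # hard limit enforced by the Bot API

def paginate(text: str, page_size: int = TELEGRAM_MAX_MESSAGE_LENGTH) -> list[str]:
    """Split a long reply into pages Telegram will accept, preferring to
    break at the last newline inside each window so lines stay intact."""
    pages: list[str] = []
    while len(text) > page_size:
        cut = text.rfind("\n", 0, page_size)
        if cut <= 0:  # no newline in the window: hard cut
            cut = page_size
        pages.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        pages.append(text)
    return pages

# Hypothetical use inside send_response() from run.py:
#     for page in paginate(text):
#         await bot.send_message(chat_id=message.chat.id, text=page,
#                                parse_mode=ParseMode.MARKDOWN)
```

Note that a hard cut can still split a Markdown entity across pages and make the parse mode choke; see the telegramify-markdown suggestion later in the thread.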

@windijolbars

> I added a pagination function to split the LLM reply into pages of fewer than 4096 characters, as that is the maximum Telegram message size.

Could we try your version? It seems that long replies from models like deepseek-r1-72b do cause sending Telegram messages to fail.

@gjmulder

It's not really mergeable. There are complications when, say, <think> blocks wrap over multiple pages.
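To illustrate the complication, one possible approach (again just a sketch, not the actual code) is to re-balance each page: if a <think> block is cut by a page boundary, close it at the end of one page and reopen it on the next, so every page stays individually well-formed:

```python
def rebalance_think_tags(pages: list[str]) -> list[str]:
    """Close a <think> block cut off at a page boundary and reopen it on
    the following page, so each page is individually well-formed."""
    fixed: list[str] = []
    carry_open = False
    for page in pages:
        if carry_open:
            page = "<think>" + page
        # The block is still open if the last <think> has no matching close.
        carry_open = page.rfind("<think>") > page.rfind("</think>")
        if carry_open:
            page += "</think>"
        fixed.append(page)
    return fixed
```

Even re-balanced, Telegram's parse modes will still reject the unknown tag, so the block has to be stripped or escaped anyway, which is presumably part of why the patch isn't cleanly mergeable.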

@lemassykoi

Use telegramify-markdown from https://github.com/sudoskys/telegramify-markdown.
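If I read that project's README correctly, basic usage is roughly the following; treat markdownify() and the exact call shape as assumptions to verify against the repo, and send_llm_reply is a hypothetical wrapper:

```python
import telegramify_markdown  # pip install telegramify-markdown
from aiogram import Bot
from aiogram.enums import ParseMode

async def send_llm_reply(bot: Bot, chat_id: int, raw_reply: str) -> None:
    # Convert the model's Markdown into Telegram-safe MarkdownV2,
    # escaping everything the stricter parse mode would otherwise reject.
    safe_text = telegramify_markdown.markdownify(raw_reply)
    await bot.send_message(chat_id=chat_id, text=safe_text,
                           parse_mode=ParseMode.MARKDOWN_V2)
```

This addresses the parse-mode breakage, not the 4096-character limit, so it would still need to be combined with pagination for very long replies.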
