
Bakllava does not follow the prompt and sometimes gives nonsense responses (Ollama) #23

Open
KansaiTraining opened this issue Feb 11, 2025 · 0 comments

I am trying Bakllava with Ollama (after trying Llava), and when I send a query (with a system and a human prompt) two things happen:

  1. Bakllava *never* follows the system prompt. I state explicitly how I want the response to be formatted, but it never complies.
  2. Sometimes the responses are nonsensical.

I format the query as indicated here with

formatted_prompt = f"{system_query}\nUSER: {human_query}\nASSISTANT:"

where the system_query is

"Return the requested information in the section delimited by ### ###. format the output as a JSON object. ###  Result:{True or False}   Reason:{from one to  three lines explaining the reason }### Always start with the Result."

but Bakllava never returns Result or Reason. In the best case it answers the human query in free-form text,
and in the worst cases it responds with things like:

  • [0.18, 0.42, 0.36, 0.59]
  • the date
  • the date with "kp2 3k" added
  • KP (kilopixels per second)

Is this common or am I doing something wrong?

I call the model as described in the API usage section on the Ollama page.
When I do the same with Llava it works well (although the responses are not always accurate).
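
For reference, my call looks roughly like this (a minimal sketch against Ollama's `/api/generate` endpoint; the image path and `human_query` are just placeholders to illustrate the request):

```python
import base64
import requests

# system_query is the system prompt quoted earlier in this issue
system_query = (
    "Return the requested information in the section delimited by ### ###. "
    "Format the output as a JSON object. ### Result: {True or False} "
    "Reason: {from one to three lines explaining the reason} ### "
    "Always start with the Result."
)

# Placeholder image and human query, just to illustrate the call
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

human_query = "Is there a person in this image?"
formatted_prompt = f"{system_query}\nUSER: {human_query}\nASSISTANT:"

# Non-streaming request to the local Ollama generate endpoint
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "bakllava",
        "prompt": formatted_prompt,
        "images": [image_b64],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```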
