I'm using Ollama with the Mistral model, and the responses can sometimes be too verbose when I ask for sensor statuses. It particularly likes to tell me how it derived the information ("The front door is locked, as indicated by sensor..." or "The washer is currently idle sensor.washer..."). I've added prompts telling it to respond concisely, not to mention where the data comes from, and so on. That has somewhat mitigated the issue, but it still tends to be too verbose fairly often.
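For reference, here's roughly what I'm doing, sent through Ollama's `/api/chat` endpoint. This is just a minimal sketch: the exact prompt wording and the `ask()` helper are illustrative, and it assumes Ollama is serving on its default port with the `mistral` model already pulled.

```python
# Minimal sketch: one chat turn against a local Ollama server,
# with conciseness instructions in a dedicated system message.
# Assumes Ollama on localhost:11434 and `ollama pull mistral` done.
import requests

# Example wording only, not a known-good prompt.
SYSTEM_PROMPT = (
    "You are a terse home assistant. Answer in one short sentence. "
    "Never mention sensors, entity IDs, or how you derived the answer."
)

def ask(question: str) -> str:
    """Send a single question to Ollama and return the reply text."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "mistral",
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(ask("Is the front door locked?"))
```

Baking the same instruction into a Modelfile via a `SYSTEM` line would be an equivalent option if you'd rather not send it with every request.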
On another note, is it possible to use the Home model with Ollama and have it be GPU-accelerated?