Replies: 1 comment 2 replies
- You can use …
- Thanks for the idea :)
- First of all, thank you for the fantastic project.
I'm trying to figure out whether I can add a prefix to the assistant response to better guide the generation.
Imagine I'm using the model for summarization. My system prompt reflects that, but some models still get lost if the content provided in the user message is too long. So, to guide the generation better, I'd like the assistant's next message to start with a given prefix, such as "Here is the summary of the provided text:".
Right now I can call session.prompt, but it seems like the chat template takes over and I have no way of filling in the assistant's response manually. Or do I?
I could construct the messages manually using a given model's format, but I was wondering whether there is a way to do this without giving up the ergonomics of LlamaChatSession.
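To make it concrete, here is a sketch of the kind of API I'm imagining, assuming node-llama-cpp's v3-style setup. The responsePrefix option is hypothetical (I made it up to illustrate the idea), as is the model path; the rest follows the documented usage:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    // hypothetical path to a local GGUF model
    modelPath: "models/model.gguf"
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence(),
    systemPrompt: "Summarize the text the user provides."
});

const longText = "<a long document to summarize>";

// `responsePrefix` is the hypothetical part: the session would render the
// chat template up to the start of the assistant's turn, append this prefix,
// and let the model continue generating from there.
const summary = await session.prompt(longText, {
    responsePrefix: "Here is the summary of the provided text:\n"
});

console.log(summary);
```

That way the prefix would become part of the assistant message in the chat history, and the model would only have to continue it.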