Conditional flow with tool calls #33
Hi, thanks for your question @r-leyshon. I'm having a hard time understanding the question though. Could you outline, maybe in pseudo-code, what you're wanting to do? I think in particular I'm having a hard time seeing what you mean with "the tool call response ... being available for additional conditional flow".
@gadenbuie thanks for the quick look at that, I'll try to show what I intend to do with the streamed model response below:
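Something in the spirit of this sketch (assuming Shiny Express and chatlas's async streaming; the `relay` wrapper and the stubbed tool are illustrative, not my actual code):

```python
from chatlas import ChatOpenAI
from shiny import ui

def get_current_temperature(latitude: float, longitude: float) -> str:
    """Get the current temperature for a lat/long position (stubbed)."""
    return '{"temperature_c": 21.3}'

chat = ui.Chat(id="chat")
chat_client = ChatOpenAI()
chat_client.register_tool(get_current_temperature)

@chat.on_user_submit
async def _():
    response = await chat_client.stream_async(chat.user_input())

    async def relay():
        async for chunk in response:
            # What I want: detect here when a chunk relates to a tool
            # call/result and branch on it (notifications, routing, etc.),
            # rather than only ever seeing assistant text.
            yield chunk

    await chat.append_message_stream(relay())
```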
I've struggled to inspect the streamed parts and do things with them, at least when trying to await certain events for async appending to the message stream. My fallback is to switch off streaming, but I'd rather not do that.
Thanks for the clarification. Right now, I don't think it's possible to do what you're trying to do, at least not with the streaming response approach. Given that, I think it's very reasonable to use something like a notification to indicate that a tool was called. We'll definitely keep your use case in mind as we develop chatlas and the Shiny chat component.
Gotcha, no worries, I can fall back to removing streaming. In case I wasn't clear, I need to handle the response for reasons other than showing a notification to the user. The notification would be a placeholder for the conditional logic that I'm using to pull the model response apart for other things that would be too onerous to share in the little example above. I wanted to say that the chatlas approach seems very friendly, and in the future, if I can keep each tool as a functional unit, I shall prefer to use chatlas over handling the response myself.
Could you describe the kinds of things you're wanting to do with the model response? That would help us understand the use case. I'm going to reopen this issue so that we make sure we keep it in mind moving forward.
Sure thing, the code I've fallen back to lives here: https://github.com/ministryofjustice/github-chat. Specifically, this bit:
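The exact snippet isn't quoted here, but the shape of it is a non-streaming OpenAI call where I can branch on `tool_calls` myself (the model name, `TOOLS`, and `dispatch_tool` stand in for the real definitions in the repo):

```python
import json
from openai import OpenAI

client = OpenAI()

def run_turn(messages: list[dict]) -> str:
    # Non-streaming call so the full response, including any tool calls,
    # can be inspected before anything is shown to the user.
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOLS
    )
    msg = response.choices[0].message
    if msg.tool_calls:
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            # Conditional flow: tool name and arguments are available here
            # for notifications, logging, routing, etc.
            result = dispatch_tool(call.function.name, args)
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
            )
        # Feed the tool results back for the model's final answer.
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS
        )
        msg = response.choices[0].message
    return msg.content
```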
@r-leyshon would it help you solve your use case if the tool request and tool result were made available to you, say via callbacks?
@cpsievert something like this?
Yea; although chatlas would still handle the tool execution on your behalf, and so in your case, I think it could be as simple as:

```python
from chatlas.types import ContentToolRequest, ContentToolResult
from shiny import ui

# NB: ChatModel and the on_tool_request()/on_tool_response() callbacks are
# the API proposed in this thread; they don't exist in chatlas yet.

async def handle_tool_request(request: ContentToolRequest):
    name = request.name
    args = request.arguments
    ui.notification_show(f"Tool {name} executed with args {args}")

async def handle_tool_response(result: ContentToolResult):
    # ContentToolResult currently doesn't have a name attribute, but maybe it should?
    name = result.name
    res = result.value
    ui.notification_show(f"Tool {name} resulted in a value of: {res}")

# Instantiate the chat model and register callbacks
chat_model = ChatModel()
chat_model.on_tool_request(handle_tool_request)
chat_model.on_tool_response(handle_tool_response)

# Start streaming
response_stream = chat_model.stream(chat.user_input())
async for _ in response_stream:
    pass
```

Note that the model is given the tool result in the prompt, and most models tend to make it clear that they're using tool information in their response, so including the result in the middle of the message may lead to a weird result (this is why I changed the response handler to be a notification instead of inserting it into the message stream).
@cpsievert Could this approach be used to create an experience like the "Tool" demo here: https://www.assistant-ui.com/? |
Could you be more specific? Are you referring to the "Tool UI" example? |
I don't know for sure, but it seems it'd be possible with the proposal above.
Hi there,
I'm interested in this package for a more complex chatbot that I've been developing. Ideally, I'd stream OpenAI responses with tools. Getting this to work with the Shiny chat component has been a little tricky, but then I came across a reference to chatlas in the docstring for Shiny's Chat class. Working through your docs and examples, I can get so far with it.
This approach allows for streaming responses with tool calls and Shiny UI widgets, though the tool calls need to be self-contained: whatever the tool returns is fed into the model for a subsequent response, rather than being available for additional conditional flow.
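The self-contained pattern I mean, roughly (a sketch using chatlas's `register_tool`; the model name and tool body are stand-ins):

```python
from chatlas import ChatOpenAI

def get_current_temperature(latitude: float, longitude: float) -> str:
    """Get the current temperature for a lat/long position."""
    # Stubbed for the sketch; imagine a real weather API call here.
    return '{"temperature_c": 21.3}'

chat_client = ChatOpenAI(model="gpt-4o")
chat_client.register_tool(get_current_temperature)

# chatlas executes the tool internally and feeds its return value straight
# back to the model; the JSON above never surfaces to my own code.
for chunk in chat_client.stream("What's the temperature in Oslo right now?"):
    print(chunk, end="")
```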
I would like to instead receive the function name and parameter values, call get_current_temperature() myself, and pass the JSON response back to the model. This would allow me to display a notification without relying on a side effect.
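In other words, something along these lines (entirely hypothetical; `is_tool_call` and `provide_tool_result` are invented names for the kind of hooks I'm after):

```python
from shiny import ui

# Hypothetical flow, not a real chatlas API: intercept the tool call,
# run the function myself, then hand the JSON back for the follow-up response.
for event in chat_client.stream("What's the temperature in Oslo right now?"):
    if is_tool_call(event):  # hypothetical predicate
        args = event.arguments
        # The notification is explicit here, not buried in the tool body.
        ui.notification_show(f"Calling {event.name} with {args}")
        result_json = get_current_temperature(**args)
        chat_client.provide_tool_result(event.id, result_json)  # hypothetical
    else:
        print(event, end="")
```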