feat: completing text /chat-completion and /completion tests #1223
Conversation
Force-pushed from 25257c4 to 165b613
lgtm. cc @hardikjshah @ehhuang to take a look, given you have been in the test world lately.
left a couple of small comments
@@ -0,0 +1,171 @@
{
Can you please update the MANIFEST.in to include these files and remove the deleted ones? https://github.com/meta-llama/llama-stack/blob/main/MANIFEST.in
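For reference, MANIFEST.in entries for packaging test-case files typically look like the sketch below; the exact directory is an assumption, not taken from the PR.

```
# Hedged sketch: ship the new test-case JSON files with the package
# (the path below is an assumption for illustration)
recursive-include llama_stack/providers/tests *.json
```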
Ah, didn't know this before. Fixed.
):
    # TODO: more dynamic lookup on tool_prompt_format for model family
    tool_prompt_format = "json" if "3.1" in text_model_id else "python_list"
Let's just drop this, since we recently added defaults (https://github.com/meta-llama/llama-stack/pull/1214/files) that are inferred from the model-id at the API level when the format is not provided.
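The API-level default being described could be sketched roughly as follows; this is an illustrative helper, not the PR's actual code, and the function name is an assumption.

```python
# Hedged sketch: infer a default tool prompt format from the model id,
# mirroring the "json" vs "python_list" split used in the test above.
def infer_tool_prompt_format(model_id: str) -> str:
    # Llama 3.1 family models default to JSON-style tool calls in this
    # sketch; other families default to the python-list format.
    return "json" if "3.1" in model_id else "python_list"
```

With a default like this available at the API level, the test no longer needs to compute `tool_prompt_format` itself.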
Let me fix this in a follow-up diff, so I can keep this one an iso-transformation with no changes to coding logic.
def test_text_chat_completion_with_tool_calling_and_non_streaming(
-    client_with_models, text_model_id, get_weather_tool_definition, provider_tool_format
+    client_with_models, text_model_id, provider_tool_format, test_case
`provider_tool_format` does not seem to be used.
Same here; there are several places that need some cleanup, and I'll fix them in one more diff instead of this one.
Force-pushed from 165b613 to 056432f
What does this PR do?
The goal is to have a fairly complete set of provider and e2e tests for /chat-completion and /completion. This is the current list:
Test plan
== Set up Ollama local server
== Run a provider test
== Run an e2e test
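The steps above might be invoked roughly as follows; the file paths and flags are assumptions for illustration, since the exact commands are not shown here.

```python
# Hedged sketch of the test plan as pytest command lines (paths and the
# --stack-config flag are assumptions, not taken from the PR itself).
provider_cmd = [
    "pytest", "-sv",
    "llama_stack/providers/tests/inference/test_text_inference.py",
]
e2e_cmd = [
    "pytest", "-sv",
    "tests/client-sdk/inference/test_text_inference.py",
    "--stack-config", "ollama",
]
print(" ".join(provider_cmd))
print(" ".join(e2e_cmd))
```

Both commands assume an Ollama server is already running locally, per the first step of the test plan.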