Model validation for FS models #165
Conversation
@aliazzzdat thanks for all the changes! It looks like some tests are failing; can we fix those and also do a full instantiation of Stacks to confirm everything still works as expected? Version changes can be tricky, and we want to make sure we don't break anything.
1084e49 to 87efc6c
...late `project_name_alphanumeric_underscore` .}}/validation/notebooks/ModelValidation.py.tmpl
Thanks for all the changes @aliazzzdat !
FS models cannot yet be loaded as pyfunc models for prediction (issue #70), so we can't point MLflow Evaluate at an FS model URI directly.
However, MLflow Evaluate also accepts a function in place of a pyfunc model_uri.
This means we can wrap FS model scoring in a function and pass that function to the evaluate method, as sketched below.
Although this is a workable solution, it still doesn't enable baseline comparison, because the baseline must be supplied as a pyfunc model URI (so it's not possible to compare a new FS model against a baseline FS model).
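A minimal sketch of this workaround, assuming a Databricks environment with the Feature Store client available. The model URI, the key/label column names, and the sample evaluation data are hypothetical placeholders; `score_batch` is used as the FS scoring entry point:

```python
import mlflow
import pandas as pd
from databricks.feature_store import FeatureStoreClient
from pyspark.sql import SparkSession

fs = FeatureStoreClient()
spark = SparkSession.builder.getOrCreate()

# Hypothetical URI of a registered Feature Store model.
fs_model_uri = "models:/my_fs_model/1"

def predict_with_fs_model(pdf: pd.DataFrame) -> pd.Series:
    """Score an FS model via score_batch so mlflow.evaluate can call it."""
    # mlflow.evaluate hands us a pandas DataFrame; score_batch expects
    # a Spark DataFrame containing the feature-lookup keys.
    sdf = spark.createDataFrame(pdf)
    scored = fs.score_batch(fs_model_uri, sdf)
    return scored.toPandas()["prediction"]

# Hypothetical evaluation set: lookup keys plus a ground-truth column.
eval_pdf = pd.DataFrame({"customer_id": [1, 2], "label": [0.5, 1.2]})

result = mlflow.evaluate(
    model=predict_with_fs_model,   # a callable, not a pyfunc model_uri
    data=eval_pdf,
    targets="label",
    model_type="regressor",
)
print(result.metrics)
```

Note that `mlflow.evaluate`'s baseline comparison path still requires a pyfunc model URI rather than a callable, which is why this wrapper doesn't extend to comparing against a baseline FS model.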