Add a "robojudge" that evaluates how well the predictors perform and how they compare.
Different metrics can be computed and compared, like mean absolute error (MAE) or MAE / hectar, mean square error, etc. Mean rank is another useful metric: rank the predictors for each country, and compute their mean rank on aggregate.
It should be possible to compare the predictors at an aggregated level, and also at a country level.
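As a rough illustration, here is a minimal sketch of how such a robojudge could compute MAE, MSE, and mean rank per country and then aggregate them per predictor. It assumes a hypothetical pandas DataFrame with columns `predictor`, `country`, `predicted`, and `actual`; the actual robojudge would plug in whatever evaluation data format the predictors produce.

```python
import pandas as pd

def robojudge(df: pd.DataFrame):
    """Compute country-level and aggregate metrics for each predictor.

    Expects columns: 'predictor', 'country', 'predicted', 'actual'.
    Returns (per_country, aggregate):
      - per_country: MAE, MSE and rank for each (predictor, country) pair
      - aggregate:   mean MAE, mean MSE and mean rank per predictor
    """
    df = df.copy()
    df["abs_error"] = (df["predicted"] - df["actual"]).abs()
    df["sq_error"] = (df["predicted"] - df["actual"]) ** 2

    # Country-level view: one row per (predictor, country)
    per_country = (
        df.groupby(["predictor", "country"])[["abs_error", "sq_error"]]
        .mean()
        .rename(columns={"abs_error": "mae", "sq_error": "mse"})
        .reset_index()
    )

    # Rank predictors within each country by MAE (1 = best)
    per_country["rank"] = per_country.groupby("country")["mae"].rank(method="min")

    # Aggregate view: one row per predictor, sorted by mean rank
    aggregate = (
        per_country.groupby("predictor")
        .agg(mae=("mae", "mean"), mse=("mse", "mean"), mean_rank=("rank", "mean"))
        .sort_values("mean_rank")
    )
    return per_country, aggregate


# Example usage with made-up numbers:
results = pd.DataFrame({
    "predictor": ["A", "A", "B", "B"],
    "country":   ["FR", "DE", "FR", "DE"],
    "predicted": [100, 210, 95, 205],
    "actual":    [105, 200, 105, 200],
})
per_country, aggregate = robojudge(results)
print(aggregate)
```

The country-level table supports per-country comparisons, while the aggregate table gives the overall leaderboard; other metrics (e.g. MAE per hectare) could be added as extra columns in the same way.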
Add a "robojudge" that evaluates how well the predictors perform and how they compare.
Different metrics can be computed and compared, like mean absolute error (MAE) or MAE / hectar, mean square error, etc. Mean rank is another useful metric: rank the predictors for each country, and compute their mean rank on aggregate.
It should be possible to compare the predictors at an aggregated level, and also at a country level.
See https://phase1.xprize.evolution.ml/ for an example of a robojudge that was used for the Pandemic Resilience XPRIZE challenge.
See https://github.com/cognizant-ai-labs/covid-xprize/blob/master/predictor_robojudge.ipynb for a notebook that was used to compare the predictors.