last fix
clefourrier committed Jan 26, 2024
1 parent 109ce8b commit b48f201
Showing 1 changed file with 2 additions and 4 deletions.
README.md
@@ -3,14 +3,12 @@
## Context
LightEval is an evaluation suite which gathers a selection of features from widely used, recently proposed benchmarks:
- from the [Eleuther AI Harness](https://github.com/EleutherAI/lm-evaluation-harness), we reuse the request management
-- from [HELM](https://crfm.stanford.edu/helm/latest/), we keep the qualitative metrics
-- from our previous internal evaluation suite, we keep the easy evaluation loading.
+- from [HELM](https://crfm.stanford.edu/helm/latest/), we keep the qualitative and rich metrics
+- from our previous internal evaluation suite, we keep the easy editing, evaluation loading, and speed.

We also ported all the evaluations from HELM and BigBench.

## How to install and use
At the moment, the core of our code relies on the evaluation harness as a dependency. This is likely to change from v0 to v1.

### Requirements
0) Create your virtual environment using virtualenv or conda, depending on your preferences. We require Python 3.10.
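The environment step above can be sketched as shell commands; the environment name `lighteval-env` is an illustrative placeholder, not taken from this diff:

```shell
# Create and activate a virtual environment (a minimal sketch).
# Make sure the interpreter used is Python 3.10, e.g. invoke it as
# python3.10 explicitly if several versions are installed.
python3 -m venv lighteval-env
. lighteval-env/bin/activate

# The conda equivalent would be:
#   conda create -n lighteval-env python=3.10
#   conda activate lighteval-env
```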

