An Airtable extension that helps us evaluate applications to our courses using large language models (LLMs).
We previously evaluated each applicant for our educational courses manually against a set of objective criteria, then combined these scores with some additional subjective judgement to reach an application decision. This was time-intensive, and it was difficult to get different reviewers to give the same scores for the same person.
In a previous pilot project we concluded that LLMs, while not perfect, could help us automate the initial scoring stage of our application process.
This repository holds the code for an Airtable extension that we can run inside our applications base. We set the relevant inputs (e.g. answers to application questions) and the decisioning criteria, then let it evaluate applicants.
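As a rough illustration of that flow (not the repository's actual implementation), a scoring prompt for one applicant might be assembled from the configured criteria and the applicant's answers along these lines. The `Criterion` shape, field naming, and `buildPrompt` helper are assumptions for illustration only:

```ts
// Hypothetical shape for a decisioning criterion configured in the base.
interface Criterion {
  name: string;        // e.g. "Relevant experience"
  description: string; // what the evaluator should look for
}

// Assemble a scoring prompt for one applicant from their answers and the criteria.
// The real extension's prompt format and LLM call will differ.
function buildPrompt(
  answers: { [question: string]: string },
  criteria: Criterion[],
): string {
  const answerText = Object.entries(answers)
    .map(([question, answer]) => `Q: ${question}\nA: ${answer}`)
    .join('\n\n');
  const criteriaText = criteria
    .map((c, i) => `${i + 1}. ${c.name}: ${c.description}`)
    .join('\n');
  return [
    'Score the applicant below against each criterion from 1 to 5.',
    `Criteria:\n${criteriaText}`,
    `Application answers:\n${answerText}`,
  ].join('\n\n');
}
```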
To start developing this extension:
- Clone this git repository
- Install Node.js
- Run `npm install`
- Run `npm start`
- Load the relevant base (the 'Applications' base in the BlueDot Impact Airtable account)
- Make changes to the code and see them reflected in the app! (A sketch of the kind of component you might edit follows below.)
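To show where such a change would go, here is a minimal sketch of a Blocks component that the dev server started by `npm start` would hot-reload as you edit it. It is not this extension's actual UI, and the 'Applications' table name is an assumption:

```tsx
import React from 'react';
import { initializeBlock, useBase, useRecords } from '@airtable/blocks/ui';

function ApplicantList() {
  const base = useBase();
  // Assumes a table called 'Applications' exists; getTableByName throws if it doesn't.
  const table = base.getTableByName('Applications');
  const records = useRecords(table);

  return (
    <ul>
      {records.map((record) => (
        // record.name is the primary field value; tweaking this line and saving
        // should update the running extension almost immediately.
        <li key={record.id}>{record.name}</li>
      ))}
    </ul>
  );
}

initializeBlock(() => <ApplicantList />);
```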
If the changes don't appear to be updating in the app, try clicking the extension name, then 'Edit extension', and pasting in the server address printed to the console by `npm start` (probably https://localhost:9000).
Changes merged into the default branch will automatically be deployed. You can manually deploy new versions using `npm run deploy`. If you get the error `airtableApiBlockNotFound`, set up the block CLI by running `npx block set-api-key` with a personal access token.
If you want to install this on a new base, see these instructions.