Welcome to the open source Scott Logic prompt injection playground!
As generative AI and LLMs become more prevalent, it is important to learn about the weaknesses inherent in generative AI models. We have built an application called SpyLogic to teach people, in a fun way, about prompt injection attacks and how to defend against them.
SpyLogic is presented in two modes:

- Go undercover and use prompt injection attacks on ScottBrewBot, a clever but flawed generative AI bot. Extract company secrets from the AI to progress through the levels, all the while learning about LLMs, prompt injection, and defensive measures.
- Activate and configure a number of different prompt injection defence measures to create your own security system. Then talk to the AI and try to crack it!
This app is built using the OpenAI API. To use it you will need an OpenAI account, and that account must have credit! You can check your credit on the billing page.
$5 of credit is issued to every new free account, however this expires after 3 months (correct as of July 2023). Note: when you verify a new account, you do so with a phone number, and to gain free credits you will need to use a phone number that has not yet verified an account. See OpenAI Pricing for more information.
Minimum requirement: Node v18
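You can quickly check which version you have installed:

```bash
# Should report v18.x or later
node --version
```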
Then install dependencies from the project root:

```bash
npm ci
```
To run locally, a few environment variables must be defined. We are using dotenv to load local `.env` files.
Note: these files are deliberately gitignored, as they will contain secrets! Never commit them.
- In the backend directory, copy file `.env.example` and name the copy `.env`, then open it for editing:
  - Set the value of `OPENAI_API_KEY` to your OpenAI API key
  - Set the value of `SESSION_SECRET` to a random UUID
- In the frontend directory, copy file `.env.example` and name the copy `.env`
  - If you've changed the default port for running the server, modify the value of `VITE_BACKEND_URL` accordingly
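As a rough sketch only (always start from the `.env.example` files for the authoritative variable list; the URL and port below are placeholders), the two files end up looking something like this:

```
# backend/.env
OPENAI_API_KEY=<your OpenAI API key>
SESSION_SECRET=<a random UUID>

# frontend/.env
VITE_BACKEND_URL=<the URL the backend server is listening on>
```

A random UUID for `SESSION_SECRET` can be generated with Node itself:

```bash
node -e "console.log(require('crypto').randomUUID())"
```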
It is easiest to host both API and UI through the server. From project root:
```bash
npm run build
npm start
```
Alternatively, to run in Docker we have provided a Compose file and npm scripts for convenience:
```bash
# Run container - image will be built first time this is run, so be patient
npm run docker:start

# Tail the server logs
npm run docker:logs

# Stop the container
npm run docker:stop
```
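If you prefer to call Docker Compose directly rather than going through npm, the equivalent commands look roughly like the sketch below - this assumes the npm scripts are thin wrappers around the provided Compose file, so check `package.json` and the Compose file itself for the exact behaviour:

```bash
# Build the image (first run) and start the container in the background
docker compose up -d --build

# Tail the server logs
docker compose logs -f

# Stop and remove the container
docker compose down
```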
In either case you will need the `backend/.env` file, as mentioned above.
For those wishing to host the application in their own infrastructure, we have provided two Dockerfiles:
- `Dockerfile` in the backend directory will generate an image for running just the API. If you intend to deploy to the cloud, this is the one to use.
- `prodlite.Dockerfile` in the root directory will generate an image hosting UI and API from the same server, for convenience. This will get you up and running as quickly as possible.
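As an illustration only (the image tags, build contexts, and port are placeholders rather than values taken from the project, so adjust them to match your setup), building and running these images might look like:

```bash
# API-only image, built from the backend directory
docker build -f backend/Dockerfile -t spylogic-api backend

# Combined UI + API image, built from the project root
docker build -f prodlite.Dockerfile -t spylogic .

# Run the combined image, supplying the required environment variables
# (replace 3000 with the port the server actually listens on)
docker run --env-file backend/.env -p 3000:3000 spylogic
```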
In either case, you will need to provide environment variables `OPENAI_API_KEY` and `SESSION_SECRET` (as described in Setup Environment above).
Please note that server-side session storage is currently in-memory, so if you wish to scale the API you will either need to enable sticky load-balancing or modify the code to use a shared storage solution - refer to Express-Session for the various options.
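As a minimal sketch of what a shared store could look like (this is not how SpyLogic currently wires its sessions; it assumes the `connect-redis` and `redis` packages and a hypothetical `REDIS_URL` variable):

```typescript
import express from "express";
import session from "express-session";
import RedisStore from "connect-redis";
import { createClient } from "redis";

const app = express();

// Keep sessions in Redis so every API instance can read every session
const redisClient = createClient({ url: process.env.REDIS_URL });
// Top-level await assumes an ES module build; otherwise connect inside an async bootstrap
await redisClient.connect();

app.use(
  session({
    store: new RedisStore({ client: redisClient }),
    secret: process.env.SESSION_SECRET ?? "",
    resave: false,
    saveUninitialized: false,
  })
);
```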
For all the hot reloading and React DevTools comforts, you'll want to run UI and API separately in dev mode. See the frontend and backend READMEs for instructions.
Thank you for considering contributing to this open source project!
Please read our contributing guide and our code of conduct first.