This documentation page describes how to run the bot locally for development purposes. It is also helpful for quickly testing the bot in a containerized environment, with all dependency services included.
For running the bot against your Matrix server, see the 🚀 Installation documentation.
This bot is built in 🦀 Rust and uses the mxlink library (built on top of matrix-rust-sdk).
For local development, we run all dependency services in 🐋 Docker containers via docker-compose.
Prerequisites:

- 🐋 Docker and docker-compose
- Just (the command runner)
- (Optional) 🦀 Rust - for compiling and running outside of a container
- (Optional) an API key for a Large Language Model ☁️ provider (e.g. OpenAI), though we recommend using LocalAI or Ollama for local development
Developing locally is possible, but requires a Rust toolchain. If this dependency is problematic for you, consider 🐋 running in a container instead.
In any case, you will need 🐋 Docker, since the dependency services run in containers. The full step-by-step flow is below, followed by a condensed command sketch.
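A quick way to confirm the prerequisites are available (plain version checks, nothing project-specific):

```sh
docker --version          # Docker
docker compose version    # docker-compose (use `docker-compose --version` for the standalone v1 binary)
just --version            # the Just command runner
cargo --version           # optional: only needed when running the bot outside of a container
```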
- Start the core dependency services (Postgres, Synapse, Element Web): `just services-start`
- (Only the first time around) Prepare initial app configuration in `var/app/local/config.yml`: `just app-local-prepare`
- (Only the first time around) Prepare your configuration file
- (Only the first time around) Prepare initial default Matrix user accounts (`admin` and `baibot`): `just users-prepare`
- (Optional) Start additional services depending on which agent provider you've chosen:
  - for LocalAI:
    - Start services: `just localai-start`
    - Wait a while for LocalAI to start up. It has a lot of models to download. Monitor progress using `just localai-tail-logs`
    - When ready, you'll be able to reach LocalAI's web interface at http://localai.127.0.0.1.nip.io:42027/ (not that you really need it)
  - for Ollama:
    - Start services: `just ollama-start`
    - (Only the first time around) Pull the model configured in `agents.static_definitions` in the configuration file: `just ollama-pull-model gemma2:2b`
- Start the bot: `just run-locally`
- Go to http://element.127.0.0.1.nip.io:42025/ and log in with `admin` / `admin`
- Create a new room and invite `@baibot:synapse.127.0.0.1.nip.io`
- When done, stop the bot (Ctrl+C)
- Stop the core dependency services: `just services-stop`
- (Optional) Stop any additional services (LocalAI or Ollama) that you started
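Put together, a minimal first run (skipping the optional LocalAI/Ollama services) looks roughly like this (a condensed sketch of the commands listed above):

```sh
just services-start       # Postgres, Synapse, Element Web
just app-local-prepare    # first time only: creates var/app/local/config.yml
# first time only: adjust var/app/local/config.yml (see the configuration notes below)
just users-prepare        # first time only: creates the admin and baibot accounts
just run-locally          # start the bot; stop it with Ctrl+C when done
just services-stop        # stop the dependency services
```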
You can avoid installing a Rust toolchain locally by building and running the bot in a container. The workflow mirrors the local one; only two commands differ (summarized after the step list below).
- Start the core dependency services (Postgres, Synapse, Element Web): `just services-start`
- (Only the first time around) Prepare initial app configuration in `var/app/container/config.yml`: `just app-container-prepare`
- (Only the first time around) Prepare your configuration file
- (Only the first time around) Prepare initial default Matrix user accounts (`admin` and `baibot`): `just users-prepare`
- (Optional) Start additional services depending on which agent provider you've chosen:
  - for LocalAI:
    - Start services: `just localai-start`
    - Wait a while for LocalAI to start up. It has a lot of models to download. Monitor progress using `just localai-tail-logs`
    - When ready, you'll be able to reach LocalAI's web interface at http://localai.127.0.0.1.nip.io:42027/ (not that you really need it)
  - for Ollama:
    - Start services: `just ollama-start`
    - (Only the first time around) Pull the model configured in `agents.static_definitions` in the configuration file: `just ollama-pull-model gemma2:2b`
- Start the bot: `just run-in-container`
- Go to http://element.127.0.0.1.nip.io:42025/ and log in with `admin` / `admin`
- Create a new room and invite `@baibot:synapse.127.0.0.1.nip.io`
- When done, stop the bot (Ctrl+C)
- Stop the dependency services: `just services-stop`
- (Optional) Stop any additional services (LocalAI or Ollama) that you started
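As mentioned above, only two commands differ from the local workflow (both taken from the steps above):

```sh
just app-container-prepare   # instead of `just app-local-prepare`; creates var/app/container/config.yml
just run-in-container        # instead of `just run-locally`
```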
This section covers editing your configuration. The initial configuration is created based on `etc/app/config.yml.dist` when you run `just app-local-prepare` or `just app-container-prepare`.
Depending on whether you run locally or in a container, your configuration lives in a different file (`var/app/local/config.yml` and `var/app/container/config.yml`, respectively).
Before starting the bot, you may wish to adjust this configuration.
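To see what your prepared configuration changes relative to the shipped template, a plain diff works (shown for the local-run path; use `var/app/container/config.yml` when running in a container):

```sh
diff -u etc/app/config.yml.dist var/app/local/config.yml
```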
You can create 🤖 agents either statically or dynamically using any of the supported ☁️ providers.
For getting started most quickly (and locally), we recommend using LocalAI or Ollama. These services are already configured to run as local services via docker-compose.
Ollama is the most lightweight option (~2GB for the container image + ~1.6GB for the model), but supports only 💬 text-generation.
LocalAI requires 4x more disk space (~6GB for the container image + ~12GB for the models), but supports 💬 text-generation, 🗣️ text-to-speech, 🦻 speech-to-text and 🖼️ image-generation.
OpenAI supports all of these capabilities as well and does not require powerful hardware or lots of disk space. However, it requires signup and an API key.
For local testing, we recommend LocalAI, because it runs fully locally and supports more features than Ollama.
LocalAI supports all 🌟 features of the bot.
If you decided to go with LocalAI:

- enable the `localai` entry in the `agents.static_definitions` list in the configuration file
- adjust the `initial_global_config.handler.catch_all` setting in the configuration file (`null` -> `static/localai`); see the sketch below
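For orientation, the relevant part of the configuration might end up looking roughly like this. It is only an illustrative sketch: the key paths (`agents.static_definitions`, `initial_global_config.handler.catch_all`) and the `static/localai` value come from this page, while the exact shape of an agent entry is defined by `etc/app/config.yml.dist` and is only hinted at here.

```yaml
agents:
  static_definitions:
    # Enable (uncomment) the `localai` entry shipped with the sample configuration.
    # Its field names (including the `id` shown here) are assumptions; the real ones
    # live in etc/app/config.yml.dist.
    - id: localai
      # ...

initial_global_config:
  handler:
    # A freshly prepared configuration has this set to `null`; point it at the agent:
    catch_all: static/localai
```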
By default, we configure LocalAI to use the All-In-One images running on the CPU. Performance is not great, but it should work reasonably well on good hardware.
If you'd like to use GPU acceleration, you may adjust the `SERVICE_LOCALAI_IMAGE_NAME` variable in `var/services/env` (this file is automatically prepared for you based on `etc/services/env.dist`) to use one of the other available LocalAI All-In-One images.
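In `var/services/env` that could look like the following. The exact image tag is only an example; pick the GPU-enabled All-In-One image that matches your hardware from LocalAI's own documentation:

```sh
# var/services/env (generated from etc/services/env.dist)
# Example tag only -- substitute the AIO image matching your GPU / CUDA version:
SERVICE_LOCALAI_IMAGE_NAME=localai/localai:latest-aio-gpu-nvidia-cuda-12
```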
Ollama only supports 💬 text-generation.
If you decided to go with Ollama:

- enable the `ollama` entry in the `agents.static_definitions` list in the configuration file
- adjust the `initial_global_config.handler.catch_all` setting in the configuration file (`null` -> `static/ollama`); see the sketch below
The `gemma2:2b` model was chosen as the default because it's the smallest/lightest and should run well under Ollama on most machines.
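The change mirrors the LocalAI sketch above; only the handler value differs (the key paths are from this page, everything else is illustrative):

```yaml
initial_global_config:
  handler:
    catch_all: static/ollama   # was `null`; the `ollama` agent entry must be enabled too
```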
OpenAI supports all 🌟 features of the bot.
If you decided to go with OpenAI:

- enable the `openai` entry in the `agents.static_definitions` list in the configuration file
- adjust the `initial_global_config.handler.catch_all` setting in the configuration file (`null` -> `static/openai`); see the sketch below
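Analogous to the sketches above. As noted earlier, OpenAI requires an API key, which belongs in the `openai` agent entry; the `api_key` field name shown here is an assumption - use whatever field the sample entry in `etc/app/config.yml.dist` actually defines:

```yaml
agents:
  static_definitions:
    - id: openai               # entry shape is illustrative; see etc/app/config.yml.dist
      # api_key: "sk-..."      # placeholder; the real field name comes from the sample entry

initial_global_config:
  handler:
    catch_all: static/openai   # was `null`
```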