🧑‍💻 Development

This documentation page contains information about running the bot locally for development purposes. It can also be helpful for quickly testing the bot in a containerized environment, with all dependency services included.

For running the bot against your Matrix server, see the 🚀 Installation documentation.

This bot is built in 🦀 Rust and uses the mxlink library (built on top of matrix-rust-sdk).

For local development, we run all dependency services in 🐋 Docker containers via docker-compose.

Prerequisites

Developing locally is possible, but requires a Rust toolchain. If this dependency is problematic for you, consider 🐋 running in a container.

In either case, you will need 🐋 Docker, as the dependency services run in containers.

Getting started guide

Running locally

  1. Start the core dependency services (Postgres, Synapse, Element Web): just services-start
  2. (Only the first time around) Prepare initial app configuration in var/app/local/config.yml: just app-local-prepare
  3. (Only the first time around) Prepare your configuration file (see the section below)
  4. (Only the first time around) Prepare initial default Matrix user accounts (admin and baibot): just users-prepare
  5. (Optional) Start additional services depending on which agent provider you've chosen:
  • for LocalAI:
    • Start services: just localai-start
    • Wait a while for LocalAI to start up. It has a lot of models to download. Monitor progress using just localai-tail-logs
    • When ready, you'll be able to reach LocalAI's web interface at http://localai.127.0.0.1.nip.io:42027/ (not that you really need it)
  • for Ollama:
    • Start services: just ollama-start
    • (Only the first time around) Pull the model configured in agents.static_definitions in the configuration file: just ollama-pull-model gemma2:2b
  6. Start the bot: just run-locally
  7. Go to http://element.127.0.0.1.nip.io:42025/ and log in with admin / admin
  8. Create a new room and invite @baibot:synapse.127.0.0.1.nip.io
  9. When done, stop the bot (Ctrl + C)
  10. Stop the core dependency services: just services-stop
  11. (Optional) Stop any additional services (LocalAI or Ollama) that you started earlier

Running in a container

You can avoid installing a Rust toolchain locally by building and running the bot in a container.

  1. Start the core dependency services (Postgres, Synapse, Element Web): just services-start
  2. (Only the first time around) Prepare initial app configuration in var/app/container/config.yml: just app-container-prepare
  3. (Only the first time around) Prepare your configuration file (see the section below)
  4. (Only the first time around) Prepare initial default Matrix user accounts (admin and baibot): just users-prepare
  5. (Optional) Start additional services depending on which agent provider you've chosen:
  • for LocalAI:
    • Start services: just localai-start
    • Wait a while for LocalAI to start up. It has a lot of models to download. Monitor progress using just localai-tail-logs
    • When ready, you'll be able to reach LocalAI's web interface at http://localai.127.0.0.1.nip.io:42027/ (not that you really need it)
  • for Ollama:
    • Start services: just ollama-start
    • (Only the first time around) Pull the model configured in agents.static_definitions in the configuration file: just ollama-pull-model gemma2:2b
  6. Start the bot: just run-in-container
  7. Go to http://element.127.0.0.1.nip.io:42025/ and log in with admin / admin
  8. Create a new room and invite @baibot:synapse.127.0.0.1.nip.io
  9. When done, stop the bot (Ctrl + C)
  10. Stop the core dependency services: just services-stop
  11. (Optional) Stop any additional services (LocalAI or Ollama) that you started earlier

Prepare your configuration file

This section is about editing your configuration. The initial configuration file is created from etc/app/config.yml.dist when you run just app-local-prepare or just app-container-prepare.

Depending on whether you run locally or in a container, your configuration lives in a different file (var/app/local/config.yml or var/app/container/config.yml, respectively).

Before starting the bot, you may wish to adjust this configuration.

Choosing an agent provider

You can create 🤖 agents either statically or dynamically using any of the supported ☁️ providers.

For getting started most quickly (and locally), we recommend using LocalAI or Ollama. These services are already configured to run as local services via docker-compose.

Ollama is the most lightweight option (~2GB for the container image + ~1.6GB for the model), but it supports only 💬 text-generation.

LocalAI requires 4x more disk space (~6GB for the container image + ~12GB for the models), but supports 💬 text-generation, 🗣️ text-to-speech, 🦻 speech-to-text and 🖼️ image-generation.

OpenAI supports all of these capabilities as well and does not require powerful hardware or lots of disk space. However, it requires signup and an API key.

For local testing, we recommend LocalAI, because it runs fully locally and supports more features than Ollama.

LocalAI

LocalAI supports all 🌟 features of the bot.

If you decided to go with LocalAI:

  • enable the localai entry in the agents.static_definitions list in the configuration file
  • adjust the initial_global_config.handler.catch_all setting in the configuration file (null -> static/localai)

By default, we configure LocalAI to use the All-In-One images running on the CPU. Performance is not great, but it should work reasonably well on good hardware.

If you'd like to use GPU acceleration, you may adjust the SERVICE_LOCALAI_IMAGE_NAME variable in var/services/env (this file is automatically prepared for you based on etc/services/env.dist) to use other available LocalAI All-In-One images.
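
Put together, the two edits above look roughly like this — a minimal sketch only, assuming the layout of etc/app/config.yml.dist; the actual fields of the localai agent entry are defined in that file and aren't reproduced here:

```yaml
agents:
  static_definitions:
    # 1. Enable (uncomment) the pre-defined localai entry here;
    #    its actual fields come from etc/app/config.yml.dist.

initial_global_config:
  handler:
    # 2. Route otherwise-unhandled messages to the static localai agent
    #    (the dist default is null).
    catch_all: static/localai
```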

Ollama

Ollama only supports 💬 text-generation.

If you decided to go with Ollama:

  • enable the ollama entry in the agents.static_definitions list in the configuration file
  • adjust the initial_global_config.handler.catch_all setting in the configuration file (null -> static/ollama)

The gemma2:2b model was chosen as the default because it's small and lightweight, and should run well under Ollama on most machines.
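
Sketched the same way (again assuming the dist file's layout; the ollama entry's actual fields are defined in etc/app/config.yml.dist):

```yaml
agents:
  static_definitions:
    # Enable (uncomment) the pre-defined ollama entry here. Whichever model it
    # references (gemma2:2b by default) must also be pulled via `just ollama-pull-model`.

initial_global_config:
  handler:
    catch_all: static/ollama  # was: null
```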

OpenAI

OpenAI supports all 🌟 features of the bot.

If you decided to go with OpenAI:

  • enable the openai entry in the agents.static_definitions list in the configuration file
  • adjust the initial_global_config.handler.catch_all setting in the configuration file (null -> static/openai)
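
And the equivalent sketch for OpenAI (assumed layout; the openai entry in etc/app/config.yml.dist defines the actual fields, including where your API key goes):

```yaml
agents:
  static_definitions:
    # Enable (uncomment) the pre-defined openai entry here and fill in your
    # OpenAI API key in the field the dist file provides for it.

initial_global_config:
  handler:
    catch_all: static/openai  # was: null
```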