# VT.ai

Minimal multimodal AI chat app with dynamic conversation routing.

VT.ai provides a seamless chat interface for interacting with various Large Language Models (LLMs). It supports both cloud-based providers and local model execution through Ollama.
## Features

### Multi-modal Interactions

- Text and image processing capabilities
- Real-time streaming responses
- [Beta] Advanced Assistant features via OpenAI's Assistant API

### Flexible Model Support

- OpenAI, Anthropic, and Google integration
- Local model execution via Ollama
- Dynamic parameter adjustment (temperature, top-p)

### Modern Architecture

- Built on Chainlit for a responsive UI
- SemanticRouter for intelligent conversation routing (see the sketch after this list)
- Real-time response streaming
- Customizable model settings
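To illustrate what semantic routing means here, below is a minimal sketch using the semantic-router package (its 0.0.x `RouteLayer` API; later releases renamed this class). The route names and utterances are hypothetical examples, not VT.ai's trained routes, which are produced by `src/router/trainer.py`.

```python
# Minimal sketch of semantic routing with the semantic-router package.
# Route names and utterances are hypothetical examples, not VT.ai's
# actual routes (those are trained via src/router/trainer.py).
from semantic_router import Route
from semantic_router.encoders import OpenAIEncoder
from semantic_router.layer import RouteLayer

# Define example routes by sample utterances.
image_route = Route(
    name="image-processing",
    utterances=["describe this image", "what is in this picture"],
)
chat_route = Route(
    name="general-chat",
    utterances=["tell me a joke", "help me write an email"],
)

# Embed the utterances and build the routing layer (requires OPENAI_API_KEY).
layer = RouteLayer(encoder=OpenAIEncoder(), routes=[image_route, chat_route])

# Classify an incoming message; .name is the matched route (or None).
choice = layer("can you explain what's shown in this photo?")
print(choice.name)  # -> "image-processing"
```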
## Prerequisites

- Python 3.7+
- (Recommended) rye for dependency management
- For local models:
  - Ollama client
  - Desired Ollama models
## Installation

1. Clone the repository
2. Copy `.env.example` to `.env` and configure your API keys (a key-loading sketch follows this list)
3. Set up the Python environment:

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   pip3 install -r requirements.txt
   ```

4. Optional: train the semantic router:

   ```bash
   python3 src/router/trainer.py
   ```

5. Launch the application:

   ```bash
   chainlit run src/app.py -w
   ```
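To verify that your keys in `.env` are picked up, here is a minimal sketch assuming the python-dotenv package; the variable names shown are illustrative and should match whatever `.env.example` actually lists.

```python
# Minimal sketch of loading API keys from .env, assuming python-dotenv.
# The variable names below are illustrative; use the keys that
# .env.example actually defines.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY"):
    status = "set" if os.getenv(key) else "missing"
    print(f"{key}: {status}")
```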
## Local Models with Ollama

```bash
# Download model
ollama pull llama3

# Start Ollama server
ollama serve
```
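With the server running, a local model can be exercised through LiteLLM (the integration layer listed below). A minimal sketch, assuming Ollama's default port 11434; the prompt and parameter values are arbitrary examples:

```python
# Minimal sketch of calling a local Ollama model through LiteLLM.
# Assumes `ollama serve` is running on the default port 11434.
from litellm import completion

response = completion(
    model="ollama/llama3",              # "ollama/" prefix routes to Ollama
    api_base="http://localhost:11434",  # default Ollama endpoint
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    temperature=0.7,                    # the adjustable parameters the app exposes
    top_p=0.9,
)
print(response.choices[0].message.content)
```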
## Architecture

- Chainlit: Frontend framework
- LiteLLM: LLM integration layer
- SemanticRouter: Conversation routing
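To show how these pieces fit together, here is a minimal sketch of a Chainlit handler that streams tokens from LiteLLM. It is a simplified stand-in for what an app like `src/app.py` does, not the actual implementation, and the model name is an example.

```python
# Minimal sketch of Chainlit + LiteLLM streaming; a simplified stand-in,
# not VT.ai's actual src/app.py.
import chainlit as cl
from litellm import acompletion


@cl.on_message
async def on_message(message: cl.Message):
    reply = cl.Message(content="")

    # Stream the completion and forward each token to the UI as it arrives.
    stream = await acompletion(
        model="gpt-4o-mini",  # example model; VT.ai routes per conversation
        messages=[{"role": "user", "content": message.content}],
        stream=True,
    )
    async for chunk in stream:
        token = chunk.choices[0].delta.content
        if token:
            await reply.stream_token(token)

    await reply.send()
```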
## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Commit changes: `git commit -m 'Add amazing feature'`
4. Push to the branch: `git push origin feature/amazing-feature`
5. Open a Pull Request
## Releases

Check our releases page for version history and updates.

## License

This project is licensed under the MIT License. See LICENSE for details.