A multi-model chat application that enables natural conversations between different AI models and humans.
- Chat with a variety of AI models from OpenRouter
- Multiple conversation modes:
  - Explicit: Directly address a specific model
  - Meta: Use a router model to decide which model should respond
  - Round Robin: Rotate through available models
  - Collaborative: Models decide among themselves who should respond
  - Autonomous: Models talk to each other autonomously
- Model response classification (empty, brief, content)
- Conversation history stored in SQLite
- Response logging for analysis
- Model management system
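The response classification mentioned above (empty, brief, content) amounts to a simple length-based bucketing. A minimal sketch, assuming an illustrative function name and threshold (the real logic and cutoffs live in `chat.py` and may differ):

```python
def classify_response(text: str, brief_threshold: int = 40) -> str:
    """Bucket a model response as 'empty', 'brief', or 'content'.

    Illustrative sketch only: the threshold value is an assumption,
    not the application's actual cutoff.
    """
    stripped = text.strip()
    if not stripped:
        return "empty"
    if len(stripped) < brief_threshold:
        return "brief"
    return "content"
```

A classification like this lets the conversation loop skip empty turns and treat brief acknowledgements differently from substantive replies.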
- Clone the repository
- Install dependencies with UV:

  ```bash
  uv pip install -e .
  ```
- Set up your OpenRouter API key. You can use any of these methods:
  - Provide it directly when running commands:

    ```bash
    uv run chat.py --key "your_api_key"
    ```

  - Use the `/setkey` command in the chat interface: `/setkey your_api_key`
  - Set the environment variable (legacy method):

    ```bash
    export OPENROUTER_API_KEY=$(llm keys get openrouter)
    ```

  - Use `llm` CLI tool integration (happens automatically if no key is set)

  API keys will be saved to `config/api_keys.json` for future use.
Note: FriendShaped can use the llm CLI tool to retrieve API keys as a fallback.
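The saved key file is a small JSON document. Its exact schema is not documented here, so the field name below is an assumption; it presumably maps a provider name to the stored key, roughly like:

```json
{
  "openrouter": "your_api_key"
}
```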
```bash
uv run chat.py
```
You can specify a different trigger mode using the `--mode` flag:

```bash
uv run chat.py --mode autonomous
```

Available modes: `explicit`, `meta`, `round_robin`, `collaborative`, `autonomous`.
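The simpler modes can be understood as responder-selection strategies. A rough sketch, assuming hypothetical names and an `@model` addressing convention (the real dispatch lives in `chat.py`; `meta`, `collaborative`, and `autonomous` involve extra model calls and are stubbed out here):

```python
import itertools


def make_selector(mode: str, models: list[str]):
    """Return a function that picks the next responding model.

    Illustrative only: 'round_robin' cycles through models, 'explicit'
    responds only when a model is addressed by name. Modes requiring a
    router or model-to-model negotiation are not sketched.
    """
    if mode == "round_robin":
        cycle = itertools.cycle(models)
        return lambda _msg: next(cycle)
    if mode == "explicit":
        def explicit(msg: str):
            # Respond only when addressed, e.g. "@claude what do you think?"
            for m in models:
                if msg.lower().startswith(f"@{m.lower()}"):
                    return m
            return None  # no model addressed; stay silent
        return explicit
    raise NotImplementedError(f"mode {mode!r} requires a router/model call")
```

For example, `make_selector("round_robin", ["claude", "gpt4"])` returns a function that alternates between the two models on successive calls.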
For the best terminal experience with command history, auto-completion, and proper arrow key handling, use the dedicated TUI:
```bash
uv run tui.py
```
This provides:
- Command history (saved between sessions)
- Tab completion for commands
- Auto-suggestions as you type
- Proper arrow key navigation
- Ctrl+L to clear screen
You can also use the standard interface with the `--simplified` flag:

```bash
# Use simplified UI explicitly
uv run chat.py --simplified
```
Alternatively, you can use `rlwrap` for basic readline features:

```bash
# Install rlwrap if needed (on Ubuntu/Debian)
sudo apt install rlwrap

# Run with readline support
rlwrap uv run chat.py
```
Note: Direct prompt_toolkit integration within chat.py is currently disabled due to asyncio compatibility issues.
```bash
uv run chat.py --query "Your question here"
```
- `/help` - Show help
- `/quit` - Exit the chat
- `/models` - List available models
- `/model MODEL_NAME` - Switch to a specific model (can use aliases like `/model claude`)
- `/modelmanager` - Manage models (see below)
- `/aliases` - List all configured model aliases
- `/alias NAME MODEL_ID` - Create or update an alias
- `/unalias NAME` - Remove an alias
- `/personality MODEL_ID NAME` - Set a model's personality/identification
- `/setkey API_KEY` - Save OpenRouter API key for future use
The model manager allows you to curate which models are available in the chat application.
Use the `/modelmanager` command in the chat interface:
- `/modelmanager list` - List all available models
- `/modelmanager search TERM` - Search for models
- `/modelmanager enable MODEL_ID` - Enable a model
- `/modelmanager disable MODEL_ID` - Disable a model
- `/modelmanager favorite MODEL_ID` - Add a model to favorites
- `/modelmanager unfavorite MODEL_ID` - Remove a model from favorites
- `/modelmanager default MODEL_ID` - Set the default model
- `/modelmanager update` - Update the model list from the OpenRouter API
- `/modelmanager refresh` - Reload models in the current session
Note: You can use partial model IDs for most commands.
You can also manage models directly from the command line using the `model_manager.py` script:
```bash
# List all models
uv run model_manager.py list

# Search for models
uv run model_manager.py search claude

# Enable a model
uv run model_manager.py enable claude-3.5-sonnet

# Disable a model
uv run model_manager.py disable claude-2

# Add a model to favorites
uv run model_manager.py favorite claude-3.5-sonnet

# Remove a model from favorites
uv run model_manager.py unfavorite claude-2

# Set the default model
uv run model_manager.py default claude-3.5-sonnet

# Update the model list from the OpenRouter API
uv run model_manager.py update

# Manage aliases
uv run model_manager.py alias list
uv run model_manager.py alias set c3 claude-3-opus
uv run model_manager.py alias remove c3

# Manage model personalities
uv run model_manager.py personality list
uv run model_manager.py personality set claude-3.5-sonnet "Claude 3.5 Sonnet"
```
Aliases provide short, convenient names for models, similar to the `llm` CLI tool:
```
# Default aliases (examples)
claude → anthropic/claude-3.7-sonnet:beta
gpt4 → openai/gpt-4o
4o → openai/gpt-4o
c37 → anthropic/claude-3.7-sonnet:beta
c35 → anthropic/claude-3.5-sonnet
```
You can use aliases wherever you would use a model ID, for example:

- `/model claude` to switch to Claude 3.7
- `uv run model_manager.py default c37` to set the default model
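Under the hood, alias resolution is just a dictionary substitution applied before a name is used as a model ID. A minimal sketch, assuming a hypothetical function name (the defaults mirror the example aliases above; the real lookup lives in `config/models.py`):

```python
# Mirrors the example default aliases listed above.
DEFAULT_ALIASES = {
    "claude": "anthropic/claude-3.7-sonnet:beta",
    "gpt4": "openai/gpt-4o",
    "4o": "openai/gpt-4o",
    "c37": "anthropic/claude-3.7-sonnet:beta",
    "c35": "anthropic/claude-3.5-sonnet",
}


def resolve_model(name: str, aliases: dict[str, str] = DEFAULT_ALIASES) -> str:
    """Expand an alias to a full OpenRouter model ID.

    Anything that is not a known alias (e.g. an ID that is already
    fully qualified) passes through unchanged.
    """
    return aliases.get(name, name)
```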
Model personalities ensure that models correctly identify themselves in conversations, rather than using incorrect or outdated names. For example, Claude 3.7 Sonnet was identifying itself as "Claude 3 Opus" in its responses.
The personality system injects a system prompt before each conversation to ensure the model uses the correct identity:
```
You are Claude 3.7 Sonnet, an AI assistant. When asked about your identity, always identify yourself as Claude 3.7 Sonnet.
```
This helps maintain consistency across models in a multi-model chat application.
- `chat.py`: Main chat application
- `config/models.py`: Model configuration system (models, aliases, personalities, API keys)
- `model_manager.py`: Command-line tool for model management
- `setup_models.py`: Script to set up the default model configuration
- `chat_history.db`: SQLite database for conversation history
- `config/models.json`: Model configuration file (enabled models, favorites, aliases, personalities)
- `config/api_keys.json`: API key storage
- `logs/`: Directory for chat logs and model output logs
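Since `config/models.json` holds the enabled models, favorites, aliases, and personalities, it plausibly looks something like the fragment below. The key names here are assumptions for illustration, not the file's documented schema:

```json
{
  "default_model": "anthropic/claude-3.7-sonnet:beta",
  "enabled": ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"],
  "favorites": ["openai/gpt-4o"],
  "aliases": {"gpt4": "openai/gpt-4o"},
  "personalities": {"anthropic/claude-3.5-sonnet": "Claude 3.5 Sonnet"}
}
```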
The project includes several testing scripts:
- Simple test: `uv run simple_test.py`
- Multiple models: `uv run multiple_models.py`
- Autonomous test: `uv run autonomous_test.py [turns] [prompt]`
- Analyze logs: `uv run analyze_logs.py`
- Improved logging system
- Asynchronous conversation flow
- Model identity enhancement
- Alias recognition
- UI improvements
- Decentralized architecture
- Memory systems
- Interruption mechanics
- Selective attention
- Real-time adaptation
This project is licensed under the MIT License - see the LICENSE file for details.