A simple chat interface built with SolidJS and Tailwind CSS that uses LiteLLM as a proxy to support multiple LLM backends. This lets you use a wide range of AI models through a single OpenAI-compatible interface.
- Chat interface with streaming responses (see the sketch after this list)
- Support for any LLM provider through LiteLLM proxy
- Message management (edit, copy, clear conversation)
- Basic model configuration panel:
  - Temperature
  - Max tokens
  - Top P
  - Model selection
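For orientation, here is a minimal sketch of the kind of streaming call the app makes through the proxy, using the OpenAI SDK. The model name and the `onToken` callback are illustrative placeholders, not code from this repo:

```typescript
import OpenAI from "openai";

// Sketch only; the real app wires these values from its configuration panel.
const client = new OpenAI({
  baseURL: import.meta.env.VITE_API_BASE_URL, // your LiteLLM proxy
  apiKey: import.meta.env.VITE_API_KEY,
  dangerouslyAllowBrowser: true, // dev/testing only, see the note below
});

export async function streamReply(prompt: string, onToken: (t: string) => void) {
  const stream = await client.chat.completions.create({
    model: "gpt-4o", // any model name your proxy routes
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7, // the sampling knobs exposed by the configuration panel
    max_tokens: 1024,
    top_p: 1,
    stream: true,
  });
  for await (const chunk of stream) {
    onToken(chunk.choices[0]?.delta?.content ?? "");
  }
}
```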
- Clone the repository:

  ```bash
  git clone https://github.com/AdjectiveAllison/solid-llm-frontend
  cd solid-llm-frontend
  ```
- Install dependencies:

  ```bash
  bun install
  ```
- Set up your environment variables:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` to add your:
  - `VITE_API_BASE_URL`: Your LiteLLM proxy URL
  - `VITE_API_KEY`: Your API key (if required by your proxy)
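  For example, a filled-in `.env` for a LiteLLM proxy running locally might look like this (both values are placeholders):

  ```bash
  # Placeholder values; point these at your own proxy
  VITE_API_BASE_URL=http://localhost:4000
  VITE_API_KEY=sk-1234
  ```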
- Run the development server:

  ```bash
  bun run dev
  ```

The application will be available at http://localhost:5173.
This project is primarily designed for development and testing purposes. The current configuration uses `dangerouslyAllowBrowser: true` in the OpenAI client setup, which is not recommended for production environments because it exposes your API key in the browser. A backend proxy would be needed for a secure production deployment.
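As one hedged sketch of what such a proxy could look like (framework-free Node, purely illustrative, and not part of this repo), a tiny relay can hold the key server-side and forward chat requests to LiteLLM:

```typescript
// server.ts: illustrative relay, not part of this repo.
// The browser calls /api/chat; the API key never leaves the server.
import http from "node:http";

const LITELLM_URL = process.env.LITELLM_URL ?? "http://localhost:4000"; // assumed address
const API_KEY = process.env.LITELLM_API_KEY ?? ""; // stays server-side

http.createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/api/chat") {
    res.writeHead(404).end();
    return;
  }
  // Collect the JSON body from the browser.
  let body = "";
  for await (const chunk of req) body += chunk;

  // Forward to the LiteLLM proxy with the server-held key.
  const upstream = await fetch(`${LITELLM_URL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}` },
    body,
  });

  // Stream the response (including SSE chunks) straight back to the browser.
  res.writeHead(upstream.status, {
    "Content-Type": upstream.headers.get("content-type") ?? "application/json",
  });
  if (upstream.body) for await (const chunk of upstream.body) res.write(chunk);
  res.end();
}).listen(8787);
```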
This frontend is designed to work with a LiteLLM proxy, which allows you to use any supported LLM provider including:
- OpenAI
- Anthropic
- AWS Bedrock
- Azure OpenAI
- Google Vertex AI
- And many more
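As a rough illustration, a LiteLLM `config.yaml` that routes two providers might look like this (model names and environment variables are placeholders; the LiteLLM docs define the authoritative schema):

```yaml
model_list:
  - model_name: gpt-4o                  # name the frontend selects
    litellm_params:
      model: openai/gpt-4o              # provider/model LiteLLM routes to
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

You would then start the proxy with `litellm --config config.yaml` and point `VITE_API_BASE_URL` at it.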
For setting up the LiteLLM proxy, refer to the LiteLLM proxy documentation.