# Simba Docker Deployment Guide
This guide shows how to deploy Simba with Docker on CPU, NVIDIA GPU (CUDA), and Apple Silicon (M1/M2/M3) hardware.
## Prerequisites
- Docker and Docker Compose installed
- Git to clone the repository
- For GPU usage: NVIDIA Docker runtime and drivers installed
- For Apple Silicon: macOS on an M1/M2/M3 Mac
## Quick Start Guide
### 1. Clone the Repository
```bash
git clone https://github.com/GitHamza0206/simba.git
cd simba
```
### 2. Basic Commands
#### Run on Specific Hardware
**For CPU:**
```bash
DEVICE=cpu make up
```
**For NVIDIA GPU with Ollama:**
```bash
DEVICE=cuda make up
```
**For Apple Silicon:**
```bash
DEVICE=mps make up
```
**Run with Ollama service (for CPU/MPS):**
```bash
DEVICE=mps ENABLE_OLLAMA=true make up
```
**Run in background mode:**

All `make up` targets start the containers in detached mode by default, so no extra flag or separate command is needed.
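`DEVICE` accepts `cpu`, `cuda`, or `mps`; any other value will likely fail inside the Makefile. A small validation sketch (this guard is an illustration based on the values documented in this guide, not code from the repository):

```shell
# Illustrative guard for the DEVICE variable (valid values from this guide).
DEVICE="${DEVICE:-cpu}"
case "$DEVICE" in
  cpu|cuda|mps) echo "DEVICE=$DEVICE is valid" ;;
  *) echo "unsupported DEVICE: $DEVICE (use cpu, cuda, or mps)" >&2; exit 1 ;;
esac
```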
### 3. Managing Your Deployment
**View logs:**
```bash
make logs
```
**Stop all containers:**
```bash
make down
```
**Restart all containers:**
```bash
DEVICE=mps make restart
```
**Clean up everything:**
```bash
make clean
```
## Advanced Usage
### Building Images Separately
If you want to build the Docker image without starting containers:
```bash
# For CPU
DEVICE=cpu make build
# For NVIDIA GPU
DEVICE=cuda make build
# For Apple Silicon
DEVICE=mps make build
```
### Custom Tags
Tag your images for versioning:
```bash
IMAGE_TAG=v1.0.0 make build
```
### Running With/Without Ollama
By default, the Ollama service starts only when `DEVICE=cuda`. To control it explicitly:
```bash
# Enable Ollama on CPU/MPS
DEVICE=mps ENABLE_OLLAMA=true make up
# Disable Ollama (default for CPU/MPS)
DEVICE=mps make up
```
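The Makefile's exact mechanism isn't shown in this guide, but the documented defaults amount to a simple rule: Ollama runs for `cuda`, and for `cpu`/`mps` only when `ENABLE_OLLAMA=true`. A minimal sketch of that rule (the `ollama_enabled` helper is an assumption for illustration, not repository code):

```shell
# Hypothetical helper mirroring the documented defaults:
# Ollama runs for cuda, and for cpu/mps only when ENABLE_OLLAMA=true.
ollama_enabled() {
  device="$1"
  enable_ollama="${2:-false}"
  [ "$device" = "cuda" ] || [ "$enable_ollama" = "true" ]
}

ollama_enabled cuda && echo "cuda: ollama on"
ollama_enabled mps true && echo "mps + ENABLE_OLLAMA=true: ollama on"
ollama_enabled cpu || echo "cpu: ollama off"
```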
## Platform-Specific Instructions
### NVIDIA GPU Setup
1. Ensure you have NVIDIA drivers installed on your host
2. Install the NVIDIA Container Toolkit:
```bash
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
3. Run Simba with GPU support:
```bash
DEVICE=cuda make up
```
### Apple Silicon (M1/M2/M3) Setup
1. Ensure you're running on macOS with an Apple Silicon processor
2. Run Simba with Metal Performance Shader support:
```bash
DEVICE=mps make up
```
## Docker Compose Structure
Simba's Docker setup consists of several services:
- **server**: The main application server
- **celery_worker**: Background task processing
- **redis**: Message broker and caching
- **frontend**: User interface
- **ollama** (optional): Local language model service
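Concretely, a compose file for these services could look roughly like the sketch below. The service names come from the list above and the ports from the "Accessing the Application" section; the images, commands, and the `ollama` profile are assumptions, not the repository's actual file:

```yaml
# Illustrative sketch only -- see docker-compose.yml in the repository
# for the real service definitions.
services:
  server:
    image: simba:latest
    ports: ["8000:8000"]
    depends_on: [redis]
  celery_worker:
    image: simba:latest
    command: celery -A simba worker   # assumed entrypoint
    depends_on: [redis]
  redis:
    image: redis:7
  frontend:
    image: simba-frontend:latest      # assumed image name
    ports: ["5173:5173"]
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]
    profiles: ["ollama"]              # optional service
```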
## Troubleshooting
### Network Already Exists Error
If you see an error about the network already existing:
```bash
# First stop all containers
make down
# Clean up Docker networks
docker network rm simba_network
# Then try running again
DEVICE=mps make up
```
### Complete Reset
For a full reset of your Docker environment:
```bash
# Stop containers
make down
# Remove the network
docker network rm simba_network
# Create a fresh network
docker network create simba_network
# Start containers
DEVICE=mps make up
```
### Container Fails to Start
Check the logs to see what's happening:
```bash
make logs
```
## Accessing the Application
After starting the containers:
- **Frontend**: http://localhost:5173
- **Backend API**: http://localhost:8000
- **Ollama API** (if enabled): http://localhost:11434
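The containers can take a little while to become ready after `make up`. If you want to script against these endpoints, a small polling helper can help (this function assumes `curl` is installed and is not part of the Simba repository):

```shell
# Hedged helper: poll a URL until it answers or a timeout (seconds) elapses.
wait_for_url() {
  url="$1"
  timeout="${2:-60}"
  start=$(date +%s)
  while ! curl -fsS --max-time 2 "$url" >/dev/null 2>&1; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
  done
}

# Example: wait_for_url http://localhost:8000 120 && echo "backend is up"
```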
## Configuration
Simba uses several configuration options:
- **DEVICE**: Hardware to use (`cpu`, `cuda`, or `mps`)
- **ENABLE_OLLAMA**: Whether to include Ollama service (default: `false` for CPU/MPS, `true` for CUDA)
- **IMAGE_NAME**: Name for Docker images (default: `simba`)
- **IMAGE_TAG**: Tag for Docker images (default: `latest`)
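As a rough sketch of how these defaults combine, the logic below mirrors the table above; the variable handling is an illustration, not the repository's actual Makefile:

```shell
# Illustrative defaulting, mirroring the configuration table (not the actual Makefile).
DEVICE="${DEVICE:-cpu}"
IMAGE_NAME="${IMAGE_NAME:-simba}"
IMAGE_TAG="${IMAGE_TAG:-latest}"
# ENABLE_OLLAMA defaults to true only for CUDA.
if [ "$DEVICE" = "cuda" ]; then
  ENABLE_OLLAMA="${ENABLE_OLLAMA:-true}"
else
  ENABLE_OLLAMA="${ENABLE_OLLAMA:-false}"
fi
echo "image=${IMAGE_NAME}:${IMAGE_TAG} device=${DEVICE} ollama=${ENABLE_OLLAMA}"
```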
## Updating the Application
To update to the latest version:
```bash
git pull
make clean
DEVICE=mps make up
```
## Development Workflow
When working on the application:
1. Edit files locally
2. The changes are reflected in the containers through volume mounts
3. Restart containers if necessary:
```bash
make restart
```
This guide should help you get Simba up and running on various hardware configurations using Docker.