Fixed mistake in readme (#933)
Co-authored-by: Olatunji Ruwase <[email protected]>
SCheekati and tjruwase authored Oct 29, 2024
1 parent 130fb58 commit 5a61193
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion inference/huggingface/zero_inference/README.md
@@ -90,7 +90,7 @@ deepspeed --num_gpus 1 run_model.py --model bigscience/bloom-7b1 --batch-size 8
Here is an example of running `meta-llama/Llama-2-7b-hf` with Zero-Inference using 4-bit model weights and offloading kv cache to CPU:

```diff
-deepspeed --num_gpus 1 run_model.py --model meta-llama/Llama-2-7b-hf` --batch-size 8 --prompt-len 512 --gen-len 32 --cpu-offload --quant-bits 4 --kv-offload
+deepspeed --num_gpus 1 run_model.py --model meta-llama/Llama-2-7b-hf --batch-size 8 --prompt-len 512 --gen-len 32 --cpu-offload --quant-bits 4 --kv-offload
```

## Performance Tuning Tips
