
Commit

fix typo (hpcaitech#2721)
lich99 authored Feb 15, 2023
1 parent 9c0943e commit 7aacfad
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions applications/ChatGPT/benchmarks/README.md
@@ -37,7 +37,7 @@ We only support `torchrun` to launch now. E.g.

```shell
# run GPT2-S on single-node single-GPU with min batch size
-torchrun --standalone --nproc_pero_node 1 benchmark_gpt_dummy.py --model s --strategy ddp --experience_batch_size 1 --train_batch_size 1
+torchrun --standalone --nproc_per_node 1 benchmark_gpt_dummy.py --model s --strategy ddp --experience_batch_size 1 --train_batch_size 1
# run GPT2-XL on single-node 4-GPU
torchrun --standalone --nproc_per_node 4 benchmark_gpt_dummy.py --model xl --strategy colossalai_zero2
# run GPT3 on 8-node 8-GPU
@@ -84,7 +84,7 @@ We only support `torchrun` to launch now. E.g.

```shell
# run OPT-125M with no lora (lora_rank=0) on single-node single-GPU with min batch size
-torchrun --standalone --nproc_pero_node 1 benchmark_opt_lora_dummy.py --model 125m --strategy ddp --experience_batch_size 1 --train_batch_size 1 --lora_rank 0
+torchrun --standalone --nproc_per_node 1 benchmark_opt_lora_dummy.py --model 125m --strategy ddp --experience_batch_size 1 --train_batch_size 1 --lora_rank 0
# run OPT-350M with lora_rank=4 on single-node 4-GPU
torchrun --standalone --nproc_per_node 4 benchmark_opt_lora_dummy.py --model 350m --strategy colossalai_zero2 --lora_rank 4
```
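For context on why this one-character fix matters: `torchrun` validates its command-line flags with an argparse-style parser, so the misspelled `--nproc_pero_node` is rejected as an unrecognized option and the launch aborts before any benchmark runs. A minimal sketch of that failure mode, using a hypothetical mini-parser (not torchrun's actual code):

```python
import argparse

# Hypothetical parser with only the correctly spelled flag registered.
parser = argparse.ArgumentParser()
parser.add_argument("--nproc_per_node", type=int, default=1)

# The corrected spelling parses fine.
args = parser.parse_args(["--nproc_per_node", "4"])
print(args.nproc_per_node)  # 4

# The typo'd spelling is an unknown option: argparse prints an error
# and exits, which is why the original command failed at launch.
try:
    parser.parse_args(["--nproc_pero_node", "4"])
except SystemExit:
    print("unrecognized flag rejected")
```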
