
Why won't it run #28

Open
VM1MV opened this issue Oct 30, 2024 · 1 comment
VM1MV commented Oct 30, 2024

bash urbangpt_train.sh
W&B offline. Running your script from this directory will only write metadata locally. Use wandb disabled to completely turn off W&B.
/home/admin/UrbanGPT/urbangpt/train /home/admin/UrbanGPT
ST_Encoder /home/admin/UrbanGPT/urbangpt/model/st_layers
compute_dtype torch.bfloat16
You are using a model of type llama to instantiate a model of type STLlama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 751) of binary: /usr/bin/python
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 798, in
main()
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 346, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

urbangpt/train/train_mem.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2024-10-30_18:57:25
host : dsw-468486-77f8bcd5db-dk2dj
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 751)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 751

@LZH-YS1998
Collaborator

Hi, this is likely caused by the checkpoint loading and the multi-GPU configuration. Please adjust the parameters to match your runtime environment.
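
(Note: exit code -9 means the worker received SIGKILL; when it happens while loading checkpoint shards, the usual cause is the Linux out-of-memory killer rather than a Python exception. A minimal sketch of things to check, assuming a Linux host and a torchrun-based launch; the exact flags exposed by urbangpt_train.sh are assumptions and may differ:)

# Check whether the kernel OOM killer terminated the worker (pid 751 in the log above)
dmesg | grep -iE "killed process|out of memory"

# Rule out multi-GPU configuration problems by reducing the launch to a single worker,
# e.g. in urbangpt_train.sh:
#   torchrun --nproc_per_node=1 ...   (--nproc_per_node is a standard torchrun flag)

# If memory is the bottleneck, also try lowering --per_device_train_batch_size
# (a standard Hugging Face TrainingArguments flag; whether urbangpt_train.sh exposes it
# under this name is an assumption)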
