
NaN Values in Training #233

Open
Nancyeleanor opened this issue Oct 29, 2024 · 0 comments

Comments

@Nancyeleanor

While training, the loss values show NaN for every epoch. What should I do?

loss_disc=nan, loss_gen=nan, loss_fm=nan, loss_mel=nan, loss_kl=9.000
INFO:test: ====> Epoch: 2 [2024-10-29 11:12:31] | (0:00:49.900411)

I also got these warnings when training started:

I:\Mangio\Mangio-RVC-v23.7.0\runtime\lib\site-packages\torch\functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\SpectralOps.cpp:867.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
max value is tensor(1.1533)
max value is tensor(1.2329)
max value is tensor(1.1363)
max value is tensor(1.1421)
max value is tensor(1.1523)
max value is tensor(1.1372)
min value is tensor(-1.1182)
max value is tensor(1.1308)
max value is tensor(1.0706)
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
I:\Mangio\Mangio-RVC-v23.7.0\runtime\lib\site-packages\torch\autograd\__init__.py:200: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [64, 1, 4], strides() = [4, 1, 1]
bucket_view.sizes() = [64, 1, 4], strides() = [4, 4, 1] (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\distributed\c10d\reducer.cpp:337.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
INFO:test:Train Epoch: 1 [0%]
INFO:test:[0, 0.0001]
INFO:test:loss_disc=nan, loss_gen=nan, loss_fm=nan, loss_mel=nan, loss_kl=9.000
DEBUG:matplotlib:matplotlib data path: I:\Mangio\Mangio-RVC-v23.7.0\runtime\lib\site-packages\matplotlib\mpl-data
DEBUG:matplotlib:CONFIGDIR=C:\Users\Creative\.matplotlib
DEBUG:matplotlib:interactive is False
DEBUG:matplotlib:platform is win32
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:test:====> Epoch: 1 [2024-10-29 12:44:56] | (0:00:53.102187)
I:\Mangio\Mangio-RVC-v23.7.0\runtime\lib\site-packages\torch\optim\lr_scheduler.py:139: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
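NaN losses this early (epoch 1, step 0) usually point to bad input data rather than divergence, and the `max value is tensor(1.1533)` lines suggest some training audio exceeds the expected [-1, 1] full-scale range. A minimal sketch of a pre-training sanity check, assuming the dataset audio is available as float arrays (the function name and `peak` parameter here are illustrative, not part of RVC/Mangio):

```python
import numpy as np

def check_and_normalize(audio: np.ndarray, peak: float = 0.99) -> np.ndarray:
    """Reject non-finite samples and rescale audio that exceeds [-1, 1]."""
    if not np.isfinite(audio).all():
        # NaN/Inf in the waveform propagates straight into the mel loss.
        raise ValueError("audio contains NaN or Inf samples")
    max_abs = float(np.abs(audio).max())
    if max_abs > 1.0:
        # Over-full-scale audio (like the 1.1533 peak in the log above):
        # rescale so the loudest sample sits at a safe peak level.
        audio = audio * (peak / max_abs)
    return audio

# Example with a peak of 1.1533, matching the logged values
audio = np.array([0.5, -1.1182, 1.1533], dtype=np.float32)
fixed = check_and_normalize(audio)
```

If the audio checks out, wrapping the training step in `torch.autograd.set_detect_anomaly(True)` can report which operation first produced a NaN, at the cost of much slower iterations, so it is best enabled only while debugging.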
