The "Distributed Training with Tensorflow" guide produces NaN loss and metric values. source rendered
Standalone code to reproduce the issue or tutorial link
Download the script and run `python distributed_training_with_tensorflow.py`
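For context, here is a minimal sketch of what the guide's script exercises (not the full script; the layer sizes, batch size, and train/validation split are assumptions based on the guide and the step counts in the log below): a Keras model is built and compiled inside a `tf.distribute.MirroredStrategy` scope and trained with `model.fit` on a `tf.data` pipeline.

```python
import tensorflow as tf
import keras

# Mirror variables across all visible GPUs (the log below shows 2 replicas).
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

def get_dataset(batch_size=32):
    # MNIST, flattened to 784 features; the last 10,000 samples are held out
    # for validation (50,000/32 ≈ 1563 train steps, 10,000/32 ≈ 313 val steps,
    # which matches the step counts in the log).
    (x, y), _ = keras.datasets.mnist.load_data()
    x = x.reshape(-1, 784).astype("float32") / 255.0
    x_train, y_train = x[:-10000], y[:-10000]
    x_val, y_val = x[-10000:], y[-10000:]
    train = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
    val = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size)
    return train, val

with strategy.scope():
    # Model variables must be created under the strategy scope.
    model = keras.Sequential(
        [
            keras.Input(shape=(784,)),
            keras.layers.Dense(256, activation="relu"),
            keras.layers.Dense(10),
        ]
    )
    model.compile(
        optimizer=keras.optimizers.Adam(),
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

train_ds, val_ds = get_dataset()
model.fit(train_ds, validation_data=val_ds, epochs=2)
```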
Relevant log output
python3 distributed_training_with_tensorflow.py
2025-01-23 17:00:28.269053: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1737669628.282307 1630955 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1737669628.286622 1630955 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-23 17:00:28.301267: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
I0000 00:00:1737669630.802286 1630955 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9618 MB memory: -> device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
I0000 00:00:1737669630.803592 1630955 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 9618 MB memory: -> device: 1, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:03:00.0, compute capability: 7.5
Number of devices: 2
Epoch 1/2
1558/1563 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: nan - sparse_categorical_accuracy: nan
2025-01-23 17:00:41.430527: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
2025-01-23 17:00:41.430575: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
2025-01-23 17:00:41.430626: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
2025-01-23 17:00:43.051552: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 11s 6ms/step - loss: nan - sparse_categorical_accuracy: nan - val_loss: nan - val_sparse_categorical_accuracy: nan
Epoch 2/2
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 9s 6ms/step - loss: nan - sparse_categorical_accuracy: nan - val_loss: nan - val_sparse_categorical_accuracy: nan
308/313 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: nan - sparse_categorical_accuracy: nan
2025-01-23 17:00:53.476748: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: nan - sparse_categorical_accuracy: nan
Creating a new model
1563/1563 - 10s - 6ms/step - loss: nan - sparse_categorical_accuracy: nan - val_loss: nan - val_sparse_categorical_accuracy: nan
Restoring from ./ckpt/ckpt-1.keras
2025-01-23 17:01:12.767428: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
1563/1563 - 9s - 6ms/step - loss: nan - sparse_categorical_accuracy: nan - val_loss: nan - val_sparse_categorical_accuracy: nan
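The "Creating a new model" and "Restoring from ./ckpt/ckpt-1.keras" lines come from the guide's fault-tolerance section. Roughly, the script uses a helper of this shape (a sketch reconstructed from those log prints; `get_compiled_model()` is a stand-in for the model construction sketched above, not a name taken from the log):

```python
import os
import keras

checkpoint_dir = "./ckpt"

def make_or_restore_model():
    # Return the latest checkpointed model if one exists, otherwise build a
    # fresh one. These prints correspond to the "Creating a new model" and
    # "Restoring from ..." lines in the log above.
    if not os.path.isdir(checkpoint_dir):
        os.makedirs(checkpoint_dir)
    checkpoints = [
        os.path.join(checkpoint_dir, name) for name in os.listdir(checkpoint_dir)
    ]
    if checkpoints:
        latest = max(checkpoints, key=os.path.getctime)
        print("Restoring from", latest)
        return keras.models.load_model(latest)
    print("Creating a new model")
    return get_compiled_model()  # placeholder for the compiled model shown earlier
```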
Issue Type
Bug
Source
binary
Keras Version
Keras 3.8.0
Custom Code
No
OS Platform and Distribution
No response
Python version
3.12
GPU model and memory
2x NVIDIA GeForce RTX 2080 Ti