Because the input is 112x112, not the 32x32 of CIFAR-10, I modified the fc layer shape to avoid a size-mismatch problem. But even when I set batch_size to 2, it always reports a CUDA out-of-memory error:

```
RuntimeError: CUDA out of memory. Tried to allocate 196.00 MiB (GPU 0; 11.91 GiB total capacity; 11.26 GiB already allocated; 47.06 MiB free; 50.22 MiB cached)
```

My GPU is a Titan X, which has more than 12 GB of memory.
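A back-of-the-envelope check (my own estimate, assuming the conv stack in models.py keeps its channel widths and strides unchanged) suggests why even batch_size 2 fails: activation memory scales with the spatial area of the input, so 112x112 needs roughly 12x the memory of 32x32.

```python
# Rough scaling of activation memory with input resolution, assuming the
# network's channel widths and strides are unchanged (unverified assumption).
old_side, new_side = 32, 112
scale = (new_side / old_side) ** 2
print(f"activations grow ~{scale:.2f}x")  # ~12.25x
```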
Moreover, I set os.environ["CUDA_VISIBLE_DEVICES"] = '0,1' to use two GPUs at the same time, but it always uses only one GPU...
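As far as I know, CUDA_VISIBLE_DEVICES only controls which GPUs PyTorch can see; it does not split any work across them. A minimal sketch of what seems to be missing, assuming the training script never wraps the model (nn.Linear here is just a stand-in for the real model):

```python
import os
import torch
import torch.nn as nn

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # set before the first CUDA call

model = nn.Linear(512, 10)  # placeholder for the model in models.py
if torch.cuda.device_count() > 1:
    # nn.DataParallel splits each input batch across the visible GPUs;
    # visibility alone never distributes any computation.
    model = nn.DataParallel(model)
model = model.cuda()
```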
Was that because you use many models for a single input? Is that the concept behind your multi-task implementation?
For example, if we have 10 classes, will your code generate 10 models, compute the loss for each, and backpropagate them all together for the total loss?
When I expand the number of classes to 40, it reports out of memory when running loss.backward(). Is that because too many models were generated?
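If I understand that pattern correctly, here is a minimal sketch of why it runs out of memory (the per-class nn.Linear heads are placeholders, not the repo's actual models): summing all the losses and calling backward() once keeps every model's computation graph alive simultaneously, so peak memory grows linearly with the number of classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical one-model-per-class setup, as described above.
num_classes = 40
models = nn.ModuleList(nn.Linear(512, 1) for _ in range(num_classes))
x = torch.randn(2, 512)                      # batch_size = 2
targets = torch.randint(0, 2, (2, num_classes)).float()

# Summing the losses and backpropagating once retains all 40 graphs
# until backward() finishes, which is where the memory peak sits.
total_loss = sum(
    F.binary_cross_entropy_with_logits(m(x).squeeze(1), targets[:, i])
    for i, m in enumerate(models)
)
total_loss.backward()
```

If the per-class graphs are independent, calling backward() on each loss inside the loop instead (gradients accumulate automatically) frees each graph as soon as it is used, keeping the peak roughly constant in the class count.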
Hi, I modified the input size to receive 112x112 and changed the fc layer accordingly, but I always get out of memory after running...
The location I modified in the code is the fc layer in models.py.
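Roughly, the change looks like this (an illustrative sketch only; `conv_layers` and the layer sizes here are placeholders, not the actual code in models.py):

```python
import torch
import torch.nn as nn

# Illustrative: size the fc layer by running a dummy 112x112 input through
# the conv stack, instead of hard-coding the flattened size for 32x32.
conv_layers = nn.Sequential(          # placeholder feature extractor
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
)

with torch.no_grad():
    flat = conv_layers(torch.zeros(1, 3, 112, 112)).flatten(1).size(1)

fc = nn.Linear(flat, 10)  # previously sized for 32x32 CIFAR-10 inputs
```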