
cuda out of memory even batch_size = 2 #2

Open
Light-- opened this issue Jul 8, 2020 · 1 comment

Light-- commented Jul 8, 2020

Hi, I modified the network to accept 112x112 inputs and changed the fc layer accordingly, but I always get an out-of-memory error once it starts running.

The locations I modified in the code are as follows:
in models.py

import torch.nn as nn


class _Decoder(nn.Module):
    def __init__(self, output_size):
        super(_Decoder, self).__init__()
        self.layers = nn.Sequential(
            # original: nn.Linear(128*8*8, 512)  -- sized for 32x32 CIFAR-10 inputs
            nn.Linear(8 * 112 * 112, 512),  # my change for 112x112 inputs
            nn.BatchNorm1d(512),
            nn.ReLU(),
            nn.Linear(512, output_size)
        )

Because the input is 112x112 rather than the 32x32 of CIFAR-10, I modified the fc layer shape to avoid a size-mismatch problem. But even when I set batch_size to 2, it always reports a CUDA out-of-memory error:

RuntimeError: CUDA out of memory. Tried to allocate 196.00 MiB (GPU 0; 11.91 GiB total capacity; 11.26 GiB already allocated; 47.06 MiB free; 50.22 MiB cached)
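For reference, a rough back-of-the-envelope check (my own arithmetic, assuming float32 weights) shows that the weight matrix of the modified nn.Linear alone already accounts for the 196 MiB allocation that fails above:

# Size of the weight matrix of nn.Linear(8 * 112 * 112, 512), float32, bias ignored
in_features = 8 * 112 * 112          # 100352
out_features = 512
bytes_per_param = 4                  # float32
weight_mib = in_features * out_features * bytes_per_param / (1024 ** 2)
print(f"{weight_mib:.2f} MiB")       # 196.00 MiB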

My GPU is a Titan X, which has more than 12 GB of memory.

Moreover, I set os.environ["CUDA_VISIBLE_DEVICES"] = '0,1' to use 2 GPUs at the same time, but it always uses only 1 GPU.
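As far as I understand, CUDA_VISIBLE_DEVICES only controls which GPUs PyTorch can see; to actually split a batch across both of them, the model usually also has to be wrapped, e.g. in torch.nn.DataParallel. A minimal sketch of that pattern (not this repository's code; the model and input here are placeholders):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"   # must be set before CUDA is initialized

import torch
import torch.nn as nn

model = nn.Linear(8 * 112 * 112, 512)        # placeholder model for illustration
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)           # splits each batch across the visible GPUs
model = model.cuda()

x = torch.randn(4, 8 * 112 * 112).cuda()     # dummy batch
out = model(x)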


Light-- commented Jul 8, 2020

Was that because you use many models for a single input? Is that the concept behind your multi-task implementation?
That is, if we have 10 classes, will your code generate 10 models, calculate the loss for each, and backward them all through the total loss?

When I expand the number of classes to 40, it reports out of memory when running loss.backward(). Is that because too many models were generated?
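To be concrete about the pattern I am asking about, here is a minimal sketch (my own illustration, not this repository's actual code) of one decoder head per task with the losses summed and backpropagated in one call; with 40 heads the size of the modified fc layer above, the activations and gradients kept alive for that single backward() grow quickly:

import torch
import torch.nn as nn

num_tasks = 40
feature_dim = 8 * 112 * 112

# One decoder head per task (placeholder heads, much smaller than the real decoders).
heads = nn.ModuleList([nn.Linear(feature_dim, 2) for _ in range(num_tasks)]).cuda()

features = torch.randn(2, feature_dim).cuda()    # batch_size = 2
targets = torch.zeros(2, dtype=torch.long).cuda()
criterion = nn.CrossEntropyLoss()

# All per-task losses are summed and backpropagated together,
# so every head's graph stays in memory until backward() finishes.
total_loss = sum(criterion(head(features), targets) for head in heads)
total_loss.backward()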
