
PyCudaHandler: Crash in LSTM backpass if sequence length is 1 #109

Open
michaelwand (Contributor) opened this issue Dec 30, 2015 · 0 comments

The LSTM backpass with CUDA does not work properly in the (rare, but possible) case that an input sequence has length 1. On the CPU, everything works fine.

The underlying reason is that an array of shape (0, whatever) is allocated. This works with NumPy, but with PyCUDA it results in an uninitialized array (gpudata == None), on which subsequent operations (the "fill" in the attached backtrace) fail. To the best of my knowledge, this first occurs in lstm_layer.py, line 264, where flat_cell may have zero size, but I cannot guarantee that is the only occurrence.
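
For illustration, here is a minimal sketch of the underlying PyCUDA behavior, independent of brainstorm (the exact exception raised by fill may depend on the PyCUDA version):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- sets up a CUDA context
import pycuda.gpuarray as gpuarray

# NumPy allocates a zero-size array without complaint; operations on it
# are harmless no-ops:
cpu_arr = np.zeros((0, 4), dtype=np.float32)
cpu_arr.fill(1.0)  # fine

# PyCUDA skips the device allocation when the array has zero elements,
# so gpudata stays None, and a subsequent fill launches a kernel on a
# null device pointer:
gpu_arr = gpuarray.empty((0, 4), dtype=np.float32)
print(gpu_arr.gpudata)  # None -- no device memory was allocated
gpu_arr.fill(1.0)       # fails here
```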

Further information:
```
$ git log --oneline -n 1
a68bf03 Release 0.5
$ uname -a
Linux nikola 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt7-1 (2015-03-01) x86_64 GNU/Linux
```

... and a backtrace, as well as a script to reproduce the behavior (note that UseGPU and MakeItCrash must both be set to 1):
LSTMCrashWithGPUandLength1Seq.py.txt
LSTMCrashBacktrace.txt
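
A minimal stop-gap might be to make the handler tolerate zero-size arrays. This is only a sketch, assuming PyCudaHandler's fill is a thin wrapper around GPUArray.fill (the method name is taken from the backtrace, not verified against the source; the real fix probably belongs in lstm_layer.py, avoiding the zero-size allocation in the first place):

```python
# Hypothetical guard inside PyCudaHandler (signature assumed):
def fill(self, mem, val):
    # Zero-size GPUArrays have gpudata == None in PyCUDA; launching the
    # fill kernel on them fails, so skip them explicitly.
    if mem.size == 0:
        return
    mem.fill(val)
```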
