update to newer versions of Pytorch #1
base: master
Conversation
Overall, it's good. I've made some comments. The vast majority are me being nit-picky, mostly saying you can remove commented out code if it's not needed. If you wouldn't mind addressing those just so the code can be nice and clean, that would be appreciated.
The only real question I have is that it seems you've changed all the forward/backward methods to staticmethods, which is totally fine if that's what it takes to work. But I'll admit, I haven't worked with PyTorch extensions in a while, so I'm unclear as to why this should be done. If you wouldn't mind giving me a little insight into your reasoning there, that would be most helpful.
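For context on the staticmethod question: newer PyTorch releases deprecate the old instance-style autograd functions, so forward and backward have to be staticmethods that receive a context object (ctx) in place of self, and the function is invoked through .apply() rather than by constructing it. A minimal sketch of that pattern, with an illustrative class and operation that are not taken from SPNet:

import torch

class _ScaleFunction(torch.autograd.Function):
    # New-style functions are never instantiated; forward/backward are
    # staticmethods and get a context object (ctx) instead of self.
    @staticmethod
    def forward(ctx, input, scale):
        ctx.scale = scale                # non-tensor state is stored on ctx
        ctx.save_for_backward(input)     # tensors go through save_for_backward
        return input * scale

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        # Return one gradient per forward argument; None for non-tensor args.
        return grad_output * ctx.scale, None

out = _ScaleFunction.apply(torch.randn(3, requires_grad=True), 2.0)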
@@ -480,6 +484,7 @@ def main():
    obj_poses = torch.zeros(1, 1, 7).cuda()
    obj_poses[:, :, -1] = 1.0
    while True:
        # pdb.set_trace()
Leftover debugging code?
hashorder = _HashgridOrderFunction(self.radius, self.max_grid_dim, self.cellIDs,
                                   self.cuda_buffer)
idxs = hashorder(locs, lower_bounds, grid_dims)
# hashorder = _HashgridOrderFunction(self.radius, self.max_grid_dim, self.cellIDs,
You can probably just remove the commented-out code if it's not needed anymore. I'm assuming this is updated to the new style of PyTorch?
                                   self.cellStarts, self.cellEnds, self.include_self)
neighbors = coll(qlocs if qlocs is not None else locs,
                 locs, lower_bounds, grid_dims)
# coll = _ParticleCollisionFunction(self.radius, self.max_collisions, self.cellIDs,
Same here.
def __init__(self, radius, max_grid_dim, cellIDs, cuda_buffer):
    super(_HashgridOrderFunction, self).__init__()
    self.radius = radius
    self.max_grid_dim = max_grid_dim
    self.cellIDs = cellIDs
    self.cuda_buffer = cuda_buffer

@staticmethod
def forward(self, locs, lower_bounds, grid_dims):
What's the thinking here on making this a static method? I'll admit, I haven't looked at this code in a while, but a static method wouldn't have a self argument. And I see self being used for save_for_backward. Although it looks like you removed the rest of the self uses.
    return idxs

@staticmethod
def backward(self, grad_idxs):
    locs, lower_bounds, grid_dims = self.saved_tensors
It looks like the saved tensors from forward are grabbed here for the backward pass. Without self, I'm not sure where those would come from. Did PyTorch change how this works?
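In the new-style API the first argument to backward is the same ctx object that forward called save_for_backward on, so (assuming the rest of the function is migrated the same way) this line would read:

    locs, lower_bounds, grid_dims = ctx.saved_tensors

rather than pulling from self; see the sketch after the first review comment above.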
def backward(self, grad_output):
    pdb.set_trace()
This looks like debugging code?
@@ -123,13 +123,14 @@ def forward(self, locs, data, neighbors, qlocs=None):
            self.device_id)

    # Do the compution.
    convsp = _ConvSPFunction(self.radius, self.kernel_size, self.dilation,
        self.dis_norm, self.kernel_fn, self.ncells, self.nshared_device_mem)
    # convsp = _ConvSPFunction(self.radius, self.kernel_size, self.dilation, self.dis_norm, self.kernel_fn, self.ncells, self.nshared_device_mem)
Same same, if the commented out code is not needed, delete it.
# data.shape = BxCxN
data = convsp(qlocs if qlocs is not None else locs,
              locs, data, neighbors, self.weight, self.bias)
# data = convsp(qlocs if qlocs is not None else locs,
Same as above
def forward(self, qlocs, locs, data, neighbors, weight, bias):
    self.save_for_backward(qlocs, locs, data, neighbors, weight, bias)

#def __init__(self, radius, kernel_size, dilation, dis_norm, kernel_fn,
Same as before.
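On the same theme as the commented-out __init__ here: since new-style Functions are no longer constructed per call, hyperparameters such as radius or kernel_size usually stay on the owning nn.Module and get handed to .apply() on every call. Continuing the illustrative _ScaleFunction sketch from earlier in the thread (names are made up, not from SPNet):

class ScaleModule(torch.nn.Module):
    # Hyperparameters that used to be Function.__init__ arguments are kept
    # on the module and passed to .apply() each time forward runs.
    def __init__(self, scale):
        super(ScaleModule, self).__init__()
        self.scale = scale

    def forward(self, data):
        return _ScaleFunction.apply(data, self.scale)

m = ScaleModule(2.0)
out = m(torch.randn(4, requires_grad=True))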
@@ -8,7 +8,7 @@
#include <stdint.h>
#include <stdio.h>

#include "../external/cub-1.3.2/cub/cub.cuh"
#include "cub/cub.cuh"
This is much better than that hacky nonsense I was doing before :).
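For anyone rebuilding the extension, the shorter include presumably relies on cub's parent directory being on the compiler's include path. One common way to wire that up with PyTorch's build helpers; the package name and paths below are assumptions for illustration, not taken from this repository:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='spnet_ext',                            # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name='spnet_ext._cuda',              # hypothetical extension module
            sources=['src/gpu_kernels.cu'],      # illustrative source file
            # Putting the bundled cub directory on the include path is what
            # lets #include "cub/cub.cuh" resolve without a relative path.
            include_dirs=['external/cub-1.3.2'],
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)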
Hello Connor, I am a student from UCSD; I believe we contacted you a few weeks ago about using your SPNet work. We updated the code to work with the newest versions of PyTorch and CUDA. We confirmed that your example code (fluid_sim.py) now runs on PyTorch 1.2.0, CUDA 10.1, and Python 3.7.9, and it should also work with newer versions. Feel free to contact me at [email protected] if you have any questions. Thanks!
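If it helps anyone else reproducing this, a quick sanity check of the environment; the expected values in the comments are just the ones reported above:

import sys
import torch

print(sys.version.split()[0])       # e.g. 3.7.9
print(torch.__version__)            # e.g. 1.2.0 or newer
print(torch.version.cuda)           # CUDA toolkit PyTorch was built against, e.g. 10.1
print(torch.cuda.is_available())    # fluid_sim.py needs this to be True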