Questions about reproducing Replica example and PyTorch log_prob_unnorm method #3
Comments
No, you don't have to. The dataset should be here: https://github.com/facebookresearch/Replica-Dataset
Sorry for the late reply. I've updated the Google Drive folder!
I believe the second issue is caused by a different PyTorch version. I'm using … to check which version you are using.
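For reference, the installed PyTorch version can be checked without importing `torch` at all, via the standard-library `importlib.metadata` (a small sketch; the helper name `installed_version` is mine, not the repo's):

```python
from importlib import metadata

def installed_version(pkg: str) -> str:
    """Return the installed version of a distribution (e.g. 'torch'),
    or a placeholder string if it is not installed."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

print("torch:", installed_version("torch"))
```

Equivalently, `python -c "import torch; print(torch.__version__)"` works when `torch` is importable.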
@kungfrank Hi, I'm using PyTorch 1.12.1, but `MultivariateNormal` still does not have a `log_prob_unnorm` method; only `log_prob` is available. Is `log_prob` correct for training?
Hi @mappro6, thank you for reporting this. `log_prob_unnorm` is a custom function, not part of PyTorch. Please pull the repo again to get the updated code.
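The repository's custom `log_prob_unnorm` isn't shown in this thread, but the usual meaning of an "unnormalized" log-density is the full `log_prob` with the data-independent normalization constant dropped. A minimal NumPy sketch of that relationship, with function names of my own choosing (not necessarily matching the repo's implementation):

```python
import numpy as np

def mvn_log_prob(x, mean, cov):
    """Full multivariate-normal log-density log N(x; mean, cov)."""
    k = mean.shape[0]
    diff = x - mean
    maha = diff @ np.linalg.solve(cov, diff)   # (x-mu)^T Sigma^{-1} (x-mu)
    _, logdet = np.linalg.slogdet(cov)         # log det(Sigma), numerically stable
    return -0.5 * (maha + logdet + k * np.log(2.0 * np.pi))

def mvn_log_prob_unnorm(x, mean, cov):
    """Unnormalized variant: keeps only the Mahalanobis term,
    which is the only part that depends on x."""
    diff = x - mean
    return -0.5 * (diff @ np.linalg.solve(cov, diff))
```

For fixed `mean` and `cov`, the two differ only by a constant, so gradients with respect to `x` and `mean` are identical; whether dropping the `log det` term is adequate for training depends on whether the covariance itself is being optimized.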
I am currently working on integrating my own RGBD dataset, which I previously used for plain Gaussian Splatting without depth, into this repository. For that, I'm investigating how depth is handled: loading and scaling the depth maps, creating the raw point cloud (`raw_pc`), and so on.

Since I'm still struggling with it, I am trying to reproduce the project using your provided Replica dataset example. However, I have encountered a couple of issues:

1. Replica dataset example: I cannot locate the `traj.txt` file, which seems to be necessary for the example. Could you please upload it or point me to where I can find it?
2. PyTorch issue with `MultivariateNormal`: While experimenting, I ran into an issue with this line of code. It seems that PyTorch does not have a `log_prob_unnorm` method for `MultivariateNormal`. I am not sure whether it is correct to use `log_prob` instead. Do you have any suggestions on how to fix this, or what the appropriate method would be?

Thank you in advance for your help!