
Do we need to run COLMAP with the exact poses given in the datasets like DTU and Blender #100

Open
Dharmendra04 opened this issue Aug 16, 2023 · 2 comments

Comments

@Dharmendra04

In your project, you use img2poses.py to generate poses via COLMAP, and you use the same poses_bounds.npy to create both the depth rays and the regular rays.

I have a question regarding other datasets such as DTU and Blender. Do we need to run COLMAP using the poses provided with those datasets?

The DTU and Blender datasets ship their own poses in a JSON or npy file. Should I use those given poses when running COLMAP in order to obtain the sparse point clouds?

I'm wondering whether running COLMAP without providing any poses would yield depths of a similar scale, since the poses used to produce the ground-truth depths (obtained from COLMAP) and the rendered depths (the dataset poses) would then differ.
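For context on the scale concern above: a COLMAP reconstruction run without known poses is only defined up to a global similarity transform, so its depths and the dataset's depths will differ by at least a global scale factor. A minimal sketch (not from the repository; function name and masking are illustrative) of estimating that scale in closed form by least squares:

```python
import numpy as np

def align_depth_scale(colmap_depth, dataset_depth, mask=None):
    """Estimate the global scale s that minimizes
    ||s * colmap_depth - dataset_depth||^2 over valid pixels.
    Closed-form least squares: s = <c, d> / <c, c>."""
    c = np.asarray(colmap_depth, dtype=np.float64).ravel()
    d = np.asarray(dataset_depth, dtype=np.float64).ravel()
    if mask is not None:
        m = np.asarray(mask, dtype=bool).ravel()
        c, d = c[m], d[m]
    return float(np.dot(c, d) / np.dot(c, c))

# Toy example: depths that differ by a factor of 2
s = align_depth_scale(np.array([1.0, 2.0, 3.0]),
                      np.array([2.0, 4.0, 6.0]))  # s == 2.0
```

This only removes the global scale ambiguity; if the camera poses themselves differ, the depths will still disagree per pixel, which is why triangulating with the dataset's own poses is the cleaner route.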

I am building a depth-supervised model similar to yours, but I only get good results on the LLFF datasets. That's why I would like to know how you ran COLMAP for datasets like Blender and DTU.

@dunbar12138
Owner

Yes, we run COLMAP with the given poses on DTU.

https://colmap.github.io/faq.html#reconstruct-sparse-dense-model-from-known-camera-poses
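The FAQ workflow linked above is: extract and match features, write a reference model containing the known poses, then triangulate with `colmap point_triangulator`. One detail that trips people up is that COLMAP's images.txt stores world-to-camera rotations as quaternions, while most NeRF-style datasets store camera-to-world matrices. A hedged sketch of that conversion (not from this repository; it assumes OpenCV-style camera axes, so Blender/NeRF JSON poses would first need their y/z axes flipped, and the file name and camera ID are illustrative):

```python
import numpy as np

def rotmat_to_quat(R):
    """3x3 rotation matrix -> (qw, qx, qy, qz), COLMAP's quaternion order."""
    qw = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    qx = np.sqrt(max(0.0, 1.0 + R[0, 0] - R[1, 1] - R[2, 2])) / 2.0
    qy = np.sqrt(max(0.0, 1.0 - R[0, 0] + R[1, 1] - R[2, 2])) / 2.0
    qz = np.sqrt(max(0.0, 1.0 - R[0, 0] - R[1, 1] + R[2, 2])) / 2.0
    qx = np.copysign(qx, R[2, 1] - R[1, 2])
    qy = np.copysign(qy, R[0, 2] - R[2, 0])
    qz = np.copysign(qz, R[1, 0] - R[0, 1])
    return np.array([qw, qx, qy, qz])

def c2w_to_colmap(pose_c2w):
    """Invert a 4x4 camera-to-world pose into COLMAP's world-to-camera
    convention: R = R_c2w^T, t = -R_c2w^T @ t_c2w."""
    R_c2w, t_c2w = pose_c2w[:3, :3], pose_c2w[:3, 3]
    R = R_c2w.T
    t = -R @ t_c2w
    return rotmat_to_quat(R), t

def write_images_txt(path, poses_c2w, names, camera_id=1):
    """Write the images.txt of a known-poses reference model. Each image is
    one pose line plus an empty line (no 2D observations before triangulation)."""
    with open(path, "w") as f:
        for i, (pose, name) in enumerate(zip(poses_c2w, names), start=1):
            q, t = c2w_to_colmap(pose)
            f.write(f"{i} {q[0]} {q[1]} {q[2]} {q[3]} "
                    f"{t[0]} {t[1]} {t[2]} {camera_id} {name}\n")
            f.write("\n")
```

After writing this model (together with cameras.txt and an empty points3D.txt), `colmap point_triangulator` produces a sparse point cloud that lives in the dataset's own coordinate frame, so the resulting depths are directly comparable to the dataset poses.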

@Navaneeth-Sivakumar

What about the Blender dataset? Is depth-supervised NeRF compatible with it? Does this NeRF work only with sparse views, or can we provide it with many images?
