In your project, you use img2poses.py to generate poses via COLMAP, and you use the same poses_bounds.npy to build both the depth rays and the regular rays.
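For reference, this is how I currently read that file (a minimal sketch, assuming the standard LLFF-style layout of poses_bounds.npy):

```python
import numpy as np

# Minimal sketch assuming the standard LLFF-style layout: one row per image,
# 17 values = a flattened 3x5 block (camera-to-world [R|t] plus [H, W, focal])
# followed by the near/far depth bounds used when building the rays.
data = np.load("poses_bounds.npy")        # shape (N, 17)
poses = data[:, :15].reshape(-1, 3, 5)    # per-image 3x5 pose/intrinsics block
bounds = data[:, 15:]                     # per-image [near, far]
hwf = poses[0, :, 4]                      # [height, width, focal] of the first image
print(poses.shape, bounds.shape, hwf)
```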
I have a question regarding other datasets such as DTU and Blender. Do we need to run COLMAP using the poses provided with those datasets?
The DTU and Blender datasets ship their own poses in JSON or NPY files. Should I use those given poses when running COLMAP in order to obtain the sparse point clouds?
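In my head, using the given poses would mean triangulating with COLMAP while keeping those poses fixed, along these lines (a rough sketch; it assumes the dataset poses have already been converted into a COLMAP model in sparse_known_poses/ with cameras.txt, images.txt, and an empty points3D.txt, and all paths are placeholders):

```python
import subprocess

# Rough sketch: obtain a sparse point cloud with COLMAP while keeping the
# dataset's ground-truth poses fixed, instead of letting COLMAP re-estimate
# them the way img2poses.py does.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", "database.db",
                "--image_path", "images"], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", "database.db"], check=True)
# sparse_known_poses/ holds the known poses (in COLMAP's convention); the
# triangulator only adds 3D points and does not change the poses.
subprocess.run(["colmap", "point_triangulator",
                "--database_path", "database.db",
                "--image_path", "images",
                "--input_path", "sparse_known_poses",
                "--output_path", "sparse_triangulated"], check=True)
```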
I'm also wondering whether running COLMAP without providing any poses would still yield comparable depth values, since the poses used to produce the ground-truth depths (poses estimated by COLMAP) and the rendered depths (poses from the datasets) would then be different.
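The sanity check I have in mind is comparing the scale of the two pose sets, something like the following (a sketch, assuming I have both sets of camera centers as Nx3 arrays; the function name is just mine):

```python
import numpy as np

def relative_pose_scale(centers_colmap, centers_dataset):
    # Sketch: estimate the scale factor between COLMAP's reconstructed camera
    # centers and the dataset's ground-truth centers. If the two differ only by
    # a similarity transform, this is also the factor by which COLMAP-derived
    # depths would differ from depths rendered with the dataset poses.
    d_colmap = np.linalg.norm(centers_colmap[:, None] - centers_colmap[None], axis=-1)
    d_dataset = np.linalg.norm(centers_dataset[:, None] - centers_dataset[None], axis=-1)
    mask = d_dataset > 0
    return float(np.mean(d_colmap[mask] / d_dataset[mask]))
```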
I am building a model that incorporates depth, similar to yours, but I only get good results on the LLFF datasets, which is why I would like to know how you ran COLMAP for datasets like Blender and DTU.
What about the Blender dataset? Is depth NeRF compatible with the Blender dataset? Does this NeRF work only with sparse views, or can we provide it with many images?