Runtime Error on MipNeRF-360 dataset #60
Hi! Did you change any code? The '2356.25 GiB' allocation request seems a little weird; it looks like it was caused by an unexpected broadcast operation.
Thank you for your reply. I did not change the code. It turns out the problem was with the command I used; I should have used the following command instead.
Which config should be used when running the MipNeRF-360 dataset: llff/kitchen.py or nerf_unbounded/kitchen.py?
When I run the following command, I get this error. The problem above occurs when the command is run on the 360_v2 dataset; on nerf_llff_data it runs properly and generates fly-through videos.
You may need to use the nerf_unbounded config, as the MIP360 dataset contains several unbounded in-the-wild scenes. I also found some similar issues related to your missing-module problem; you can check whether they help. In practice we have never encountered such a problem before.
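For example, mirroring the llff command used elsewhere in this thread (assuming your checkout has a configs/nerf_unbounded/kitchen.py; the exact config name may differ):

python run.py --config=configs/nerf_unbounded/kitchen.py --stop_at=20000 --render_video --i_weights=10000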
Thank you very much for your reply.
When I run the following command, I get this error: an AssertionError, which means a specific assertion condition failed in the code. In this case, the failure occurs in the init_model function in sam3d.py at line 89. I don't know how to solve this problem.
Have you run run.py successfully to get the pretrained NeRF model? Maybe you can check whether reload_ckpt_path points to the corresponding NeRF checkpoint (like fine_last.tar).
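A minimal check in Python (the log path below is hypothetical; substitute the basedir and expname from your own config):

import os

# Hypothetical checkpoint location -- replace with <basedir>/<expname>/fine_last.tar from your config.
ckpt_path = os.path.join("logs", "nerf_unbounded", "dcvgo_kitchen_unbounded", "fine_last.tar")
print("pretrained NeRF checkpoint found:", os.path.isfile(ckpt_path))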
I guess this is caused by a missing 'c' in the config file 'seg_kitchen.py': the line expname = 'dvgo_kitchen_unbounded' should be expname = 'dcvgo_kitchen_unbounded'.
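In other words, a one-line sketch of the fix in seg_kitchen.py:

# expname must match the directory name of the pretrained DCVGO model
expname = 'dcvgo_kitchen_unbounded'  # was: 'dvgo_kitchen_unbounded'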
Yes, thank you very much for your reply. It runs now.
Hello, I would like to ask where the overall model framework is and how to understand it. There seems to be no clear NeRF framework in the code. What should I do if I want to modify the NeRF?
Hi! You can find the NeRF code in lib/dvgo.py (and dcvgo, seg_dvgo, ...).
Thanks, I get it. I will read the code carefully to understand its logic.
There is no batch size in SA3D. You can reduce the resolutions used by TensoRF (mask grid resolution, TensoRF grid resolution, rendering resolution, density grid resolution, ...) to save memory.
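As a sketch, the kind of edit meant here looks like the following; the key names are assumptions in the DVGO-style config format this repo uses, so check the config file you are actually running:

# Hypothetical config overrides -- lower the grid resolutions to fit in memory.
fine_model_and_render = dict(
    num_voxels=128**3,  # grid resolution; smaller values trade quality for memory
)
data = dict(
    factor=8,  # stronger image downsampling lowers the rendering resolution
)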
Hello, I would like to ask whether the parameters in the code are already optimal, or whether hyperparameter tuning is still needed?
For some scenes and targets they are. However, it depends on the specific scene and target you choose.
Ok, thank you very much for your reply.
Hello, I would like to ask which NeRF paper this work is based on?
The main branch of SA3D is based on TensoRF. The NerfStudio branch is based on Nerfacto. The SA3D-GS branch is based on 3D-GS.
OK, thanks for your reply.
When I run the following command:
python run.py --config=configs/llff/kitchen.py --stop_at=20000 --render_video --i_weights=10000
I get this error:
File "G:\SegmentAnythingin3D-master\lib\grid.py", line 171, in init
self.xy_plane = nn.Parameter(torch.randn([1, Rxy, X, Y]) * 0.1)
RuntimeError: CUDA out of memory. Tried to allocate 2356.25 GiB (GPU 0; 23.99 GiB total capacity; 25.00 KiB already allocated; 22.04 GiB free; 2.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The error occurs at line 171 of lib\grid.py, where memory is allocated for self.xy_plane. The code tries to create a randomly initialized tensor of shape [1, Rxy, X, Y], but the requested allocation is unusually large.
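A rough back-of-the-envelope calculation (the Rxy value below is an assumed illustration, not taken from the config) shows how implausible the requested shape must be:

# How many float32 elements does a 2356.25 GiB request correspond to?
bytes_requested = 2356.25 * 1024**3   # the figure from the traceback
n_elements = bytes_requested / 4      # float32 = 4 bytes per element
print(f"{n_elements:.2e} elements")   # ~6.3e+11

# With an assumed rank Rxy = 48, X * Y would be ~1.3e10, i.e. a plane of
# roughly 115000 x 115000 -- a hint that X and Y were derived from the
# (effectively unbounded) 360_v2 scene extents rather than a bounded box.
print(int((n_elements / 48) ** 0.5))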
Did anyone face this issue?