Official PyTorch implementation of "How Far Can We Compress Instant-NGP-Based NeRF?".
Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
[Paper] [Arxiv] [Project] [Github]
Welcome to check out the series of works from our group on 3D radiance field representation compression, listed below:
- 🎉 CNC [CVPR'24] is now released for efficient NeRF compression! [Paper] [Arxiv] [Project]
- 🏠 HAC [ECCV'24] is now released for efficient 3DGS compression! [Paper] [Arxiv] [Project]
- 🚀 FCGS [ARXIV'24] is now released for fast optimization-free 3DGS compression! [Arxiv] [Project]
In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation. Specifically, we excavate both level-wise and dimension-wise context dependencies to enable probability prediction for information entropy reduction. Additionally, we exploit hash collision and occupancy grids as strong prior knowledge for better context modeling.
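To give a flavor of what context-based probability prediction for entropy reduction means, here is a minimal conceptual sketch in PyTorch. It is illustrative only: the module name ToyContextModel, the Gaussian parameterization, and the exact rate formulation are our assumptions for exposition, not the repository's implementation.

```python
import torch
import torch.nn as nn

class ToyContextModel(nn.Module):
    """Illustrative context model: predicts a Gaussian (mu, sigma) for the
    hash-grid features of one level from already-decoded coarser-level features."""
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_features),  # mu and log-sigma per feature
        )

    def forward(self, prev_level_feats: torch.Tensor):
        mu, log_sigma = self.net(prev_level_feats).chunk(2, dim=-1)
        return mu, log_sigma.exp().clamp(min=1e-6)

def estimated_bits(x: torch.Tensor, mu: torch.Tensor, sigma: torch.Tensor,
                   q: float = 1.0) -> torch.Tensor:
    """Expected code length: -log2 of the probability mass that N(mu, sigma)
    assigns to the quantization bin of width q around x."""
    dist = torch.distributions.Normal(mu, sigma)
    p = dist.cdf(x + q / 2) - dist.cdf(x - q / 2)
    return -torch.log2(p.clamp(min=1e-9)).sum()

# Toy usage: the bit estimate acts as the rate term of a rate-distortion loss,
# e.g. loss = rendering_loss + lmbda * bits / num_params.
ctx = ToyContextModel()
prev = torch.randn(1024, 8)          # decoded features of a coarser level
cur = torch.randn(1024, 8).round()   # quantized features of the current level
mu, sigma = ctx(prev)
bits = estimated_bits(cur, mu, sigma)
```

The better the context model predicts each feature from its already-decoded context, the fewer bits the entropy coder needs to store it.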
We tested our code on a server with Ubuntu 20.04.1, CUDA 11.8, and GCC 9.4.0.
- Create a new environment to run our code
conda create -n CNC_env python==3.7.11
conda activate CNC_env
- Install the necessary dependency packages
pip install -r requirements.txt
pip install ninja
You might need to run the following command before continuing.
pip uninstall nvidia-cublas-cu11
- Install tinycudann
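The PyTorch bindings are typically installed from the upstream repository (please check the tiny-cuda-nn README for the up-to-date command):
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch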
- Install our CUDA backends
pip install gridencoder
pip install my_cuda_backen
- Manually replace the nerfacc package in your environment (PATH/TO/YOUR/nerfacc) with ours (./nerfacc). One way to locate the installed package is shown below.
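If you are unsure where nerfacc is installed, the following one-liner prints its location (assuming nerfacc is already importable in your environment):
python -c "import nerfacc, os; print(os.path.dirname(nerfacc.__file__))"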
- Put the datasets in the ./data folder, e.g. ./data/nerf_synthetic/chair or ./data/TanksAndTemple/Barn.
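For example, the resulting layout would look like this (scene folders taken from the examples above; your scene list may differ):

```
data/
├── nerf_synthetic/
│   └── chair/
└── TanksAndTemple/
    └── Barn/
```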
- To train a scene from the nerf_synthetic or tanks_and_temple dataset, run the following commands.
- We use a learning rate of 1e-2 for both MLPs in our paper, but 6e-3 in this repo, as we find it more stable.
CUDA_VISIBLE_DEVICES=0 python examples/train_CNC_nerf_synthetic.py --lmbda 0.7e-3 --scene chair --sample_num 150000 --n_features 8
CUDA_VISIBLE_DEVICES=0 python examples/train_CNC_tank_temples.py --lmbda 0.7e-3 --scene Barn --sample_num 150000 --n_features 8
Optionally, you can try --lmbda in [0.7e-3, 1e-3, 2e-3, 4e-3] to control the rate, and try --sample_num in [150000, 200000] and --n_features in [1, 2, 4, 8] to adjust the training time / performance tradeoff. Please use --sample_num 150000 for --n_features 8, and --sample_num 200000 otherwise. A simple rate sweep is sketched below.
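For instance, an illustrative shell loop (not a script shipped with this repo) that sweeps the suggested --lmbda values on the chair scene:

```
for lmbda in 0.7e-3 1e-3 2e-3 4e-3; do
    CUDA_VISIBLE_DEVICES=0 python examples/train_CNC_nerf_synthetic.py \
        --lmbda $lmbda --scene chair --sample_num 150000 --n_features 8
done
```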
The code will automatically run the entire process of training, encoding, decoding, and testing.
- Output data includes:
  - Recorded output results in ./results (including fidelity, size, training time, and encoding/decoding time).
  - Encoded bitstreams of the hash grid in ./bitstreams.
- Our gcc version is 9.4.0. If you encounter a RuntimeError, please check your gcc version.
- In some cases, it may be necessary to uninstall nvidia-cublas-cu11 before installing tinycudann and our CUDA backends.
- If you install nerfacc using pip, the code will need to build the CUDA code on the first run (JIT). See nerfacc for more details.
- Yihang Chen: [email protected]
If you find our work helpful, please consider citing:
@inproceedings{cnc2024,
title={How Far Can We Compress Instant-NGP-Based NeRF?},
author={Chen, Yihang and Wu, Qianyi and Harandi, Mehrtash and Cai, Jianfei},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
}