Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis

Novel view synthesis from sparse inputs is a vital yet challenging task in 3D computer vision. Previous methods explore 3D Gaussian Splatting with neural priors (e.g., depth priors) as additional supervision, demonstrating promising quality and efficiency compared to NeRF-based methods. However, the neural priors from 2D pretrained models are often noisy and blurry, and struggle to precisely guide the learning of radiance fields. In this paper, we propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting that does not require external priors as supervision. Our key idea lies in exploiting the self-supervision inherent in the binocular stereo consistency between each pair of binocular images constructed with disparity-guided image warping. In addition, we introduce a Gaussian opacity constraint that regularizes the Gaussian locations and avoids Gaussian redundancy, improving the robustness and efficiency of inferring 3D Gaussians from sparse views. Extensive experiments on the LLFF, DTU, and Blender datasets demonstrate that our method significantly outperforms state-of-the-art methods.

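Below is a minimal NumPy sketch of the two ingredients named in the abstract: a disparity-guided warping consistency loss between a binocular image pair, and an opacity regularizer that discourages redundant Gaussians. The warping model, the entropy-style opacity penalty, and all function names (`warp_with_disparity`, `binocular_consistency_loss`, `opacity_regularizer`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def warp_with_disparity(right_img, disparity):
    """Warp the right image into the left view using per-pixel horizontal disparity.

    right_img: (H, W, 3) array rendered at the horizontally shifted (right) viewpoint.
    disparity: (H, W) array of horizontal disparities, in pixels, for the left view.
    Uses nearest-neighbour sampling for brevity; a real implementation would
    interpolate bilinearly and keep the operation differentiable.
    """
    H, W, _ = right_img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # A left-view pixel (y, x) is assumed to see the same point as (y, x - d) in the right view.
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, W - 1)
    return right_img[ys, src_x]

def binocular_consistency_loss(left_img, right_img, disparity):
    """L1 photometric error between the left image and the disparity-warped right image."""
    warped = warp_with_disparity(right_img, disparity)
    return np.abs(left_img - warped).mean()

def opacity_regularizer(opacities, weight=0.01):
    """Entropy-style penalty pushing Gaussian opacities toward 0 or 1,
    discouraging redundant, semi-transparent Gaussians (an illustrative
    stand-in for the paper's opacity constraint)."""
    eps = 1e-6
    o = np.clip(opacities, eps, 1.0 - eps)
    entropy = -(o * np.log(o) + (1.0 - o) * np.log(1.0 - o))
    return weight * entropy.mean()

# Toy usage with synthetic arrays standing in for rendered outputs.
H, W, N = 32, 48, 1000
left = np.random.rand(H, W, 3)
right = np.roll(left, shift=-2, axis=1)   # fake right view: a 2-pixel horizontal shift
disp = np.full((H, W), 2.0)               # disparity consistent with that shift
opacities = np.random.rand(N)

loss = binocular_consistency_loss(left, right, disp) + opacity_regularizer(opacities)
print(f"total loss: {loss:.4f}")          # small: only border pixels and the opacity term contribute
```

In training, such a photometric term would be computed between each input view and a rendered image at a nearby, horizontally offset camera, so the rendered geometry (via disparity) is supervised without any external depth prior.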