GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views
Differentiable rendering techniques have recently shown promising results for free-viewpoint video synthesis of characters. However, such methods, whether based on Gaussian Splatting or neural implicit rendering, typically require per-subject optimization, which fails to meet the real-time rendering requirements of interactive applications. We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting. To this end, we introduce Gaussian parameter maps defined on the source views and directly regress Gaussian properties for instant novel view synthesis without any fine-tuning or optimization. We train our Gaussian parameter regression module on human-only or human-scene data, jointly with a depth estimation module that lifts the 2D parameter maps into 3D space. The proposed framework is fully differentiable and can be trained with both depth and rendering supervision or with rendering supervision alone. We further introduce a regularization term and an epipolar attention mechanism to preserve geometric consistency between the two source views, especially when depth supervision is omitted. Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a substantially higher rendering speed.
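As a rough illustration of how pixel-wise Gaussian parameter maps could be lifted into 3D with a predicted depth map, the following PyTorch sketch unprojects each foreground pixel of a source view into world space and pairs it with its regressed Gaussian attributes. The helper name `unproject_to_gaussians`, the tensor shapes, and the dictionary of parameter maps are illustrative assumptions, not the authors' implementation.

```python
import torch

def unproject_to_gaussians(depth, K, R, t, param_maps, mask):
    """Lift pixel-wise Gaussian parameter maps to 3D (illustrative sketch).

    depth:      (H, W)   predicted depth for one source view
    K:          (3, 3)   camera intrinsics
    R, t:       (3, 3), (3,) world-to-camera rotation / translation
    param_maps: dict of (H, W, C) maps regressed by the network,
                e.g. {'rotation': ..., 'scale': ..., 'opacity': ..., 'color': ...}
    mask:       (H, W)   boolean foreground mask
    Returns a dict of per-Gaussian attributes, including the 3D means.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()      # (H, W, 3)

    # Back-project pixels to camera space: X_cam = depth * K^{-1} [u, v, 1]^T
    cam = (pix @ torch.inverse(K).T) * depth.unsqueeze(-1)             # (H, W, 3)

    # Camera space -> world space: X_world = R^T (X_cam - t)
    world = (cam - t) @ R                                              # (H, W, 3)

    gaussians = {"means": world[mask]}                                 # (N, 3)
    for name, m in param_maps.items():
        gaussians[name] = m[mask]                                      # (N, C)
    return gaussians
```

The resulting per-pixel Gaussians from the two source views can then be concatenated and passed to a standard Gaussian Splatting rasterizer for novel view rendering.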
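The abstract also mentions an epipolar attention mechanism for cross-view geometric consistency. Below is a minimal, hypothetical sketch of single-head cross-view attention restricted to features sampled along epipolar lines; the function name, tensor shapes, and residual fusion are assumptions made for illustration and do not reproduce the paper's architecture.

```python
import torch

def epipolar_attention(q_feat, ref_feat_samples):
    """Single-head cross-view attention over epipolar samples (sketch).

    q_feat:            (N, C)    features of N query pixels in one source view
    ref_feat_samples:  (N, S, C) features sampled at S points along each query
                       pixel's epipolar line in the other source view
    Returns fused features of shape (N, C).
    """
    C = q_feat.shape[-1]
    # Attention weights over the S epipolar samples for every query pixel.
    attn = torch.softmax(
        torch.einsum("nc,nsc->ns", q_feat, ref_feat_samples) / C ** 0.5, dim=-1
    )
    fused = torch.einsum("ns,nsc->nc", attn, ref_feat_samples)
    # Residual fusion keeps the query view's own features dominant.
    return q_feat + fused
```

Restricting attention to the epipolar line keeps the matching search one-dimensional, which is what makes cross-view feature aggregation cheap enough for real-time use.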