Novel view synthesis has been greatly enhanced by the development of radiance field methods. The introduction of 3D Gaussian Splatting (3DGS) has effectively addressed key challenges typically associated with Neural Radiance Fields (NeRF), such as long training times and slow rendering speeds, while maintaining high-quality reconstructions. In this work (BeSplat), we demonstrate the recovery of a sharp radiance field (Gaussian splats) from a single motion-blurred image and its corresponding event stream. Our method jointly learns the scene representation via Gaussian Splatting and recovers the camera motion via a Bézier SE(3) formulation, effectively minimizing discrepancies between synthesized and real-world measurements of both the blurry image and the corresponding event stream. We evaluate our approach on both synthetic and real datasets, showcasing its ability to render view-consistent, sharp images from the learned radiance field and the estimated camera trajectory. To the best of our knowledge, ours is the first work to address this highly challenging, ill-posed problem in a Gaussian Splatting framework with the effective incorporation of temporal information captured by the event stream.
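The abstract does not spell out the trajectory model, but as a rough illustration of a Bézier SE(3) formulation, the sketch below evaluates a Bézier curve over learnable se(3) control twists with de Casteljau's algorithm and exponentiates the result to an SE(3) pose; poses sampled across the exposure can then be rendered and averaged to synthesize the blur. The names `bezier_pose` and the commented `render` stub are our own placeholders, not from BeSplat, and interpolating directly in the Lie algebra is only one common approximation.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])
    if theta < 1e-8:  # small-angle Taylor expansion
        return np.eye(3) + K
    return (np.eye(3)
            + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta**2 * K @ K)

def se3_exp(xi):
    """se(3) twist xi = (rho, phi), shape (6,) -> 4x4 homogeneous pose."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])
    if theta < 1e-8:
        V = np.eye(3) + 0.5 * K
    else:
        V = (np.eye(3)
             + (1.0 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)
    T = np.eye(4)
    T[:3, :3] = so3_exp(phi)
    T[:3, 3] = V @ rho
    return T

def bezier_pose(control_xis, t):
    """De Casteljau evaluation of a Bezier curve on se(3) twists at t in [0, 1]."""
    pts = [xi.copy() for xi in control_xis]
    while len(pts) > 1:
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts[:-1], pts[1:])]
    return se3_exp(pts[0])

# Hypothetical usage: synthesize the blurry measurement by averaging renderings
# at poses sampled along the exposure; `render(gaussians, T)` stands in for a
# 3DGS rasterizer and is not defined here.
# ctrl = np.random.randn(4, 6) * 0.01          # 4 learnable cubic control twists
# poses = [bezier_pose(ctrl, t) for t in np.linspace(0.0, 1.0, 10)]
# blurry = np.mean([render(gaussians, T) for T in poses], axis=0)
```

Under this kind of model, the event stream would additionally constrain brightness changes between consecutive sampled poses, which is what makes the single-image problem tractable.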