
GS-EVT: Cross-Modal Event Camera Tracking based on Gaussian Splatting

Reliable self-localization is a foundational skill for many intelligent mobile platforms. This paper explores the use of event cameras for motion tracking, thereby providing a solution with inherent robustness under difficult dynamics and illumination. To circumvent the challenge of event-camera-based mapping, the solution is framed in a cross-modal way: it tracks a map representation that comes directly from frame-based cameras. Specifically, the proposed method operates on top of Gaussian Splatting, a state-of-the-art representation that permits highly efficient and realistic novel-view synthesis. The key to our approach is a novel pose parametrization that uses a reference pose plus first-order dynamics for local differential image rendering. The latter is then compared against images of integrated events in a staggered coarse-to-fine optimization scheme. As demonstrated by our results, the realistic view-rendering ability of Gaussian Splatting leads to stable and accurate tracking across a variety of both publicly available and newly recorded data sequences.
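The abstract only sketches the method, but its two core ideas, a pose model built from a reference pose composed with a first-order (constant-twist) motion, and a residual between a differential rendering and an image of integrated events, can be illustrated in a few lines. The sketch below is one reading of the abstract, not the authors' implementation: `render` stands in for an arbitrary Gaussian Splatting renderer, and the event tuple format `(x, y, p)` and all function names are assumptions.

```python
import numpy as np

def hat(w):
    """so(3) hat operator: skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi, dt):
    """Exponential map of a twist xi = (v, w) scaled by dt onto SE(3).

    Returns the 4x4 homogeneous transform for constant-velocity motion
    over the interval dt (first-order dynamics)."""
    v, w = xi[:3] * dt, xi[3:] * dt
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:                       # small-angle fallback
        R = np.eye(3) + W
        V = np.eye(3) + 0.5 * W
    else:                                  # Rodrigues formula / left Jacobian
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1.0 - np.cos(theta)) / theta**2 * W @ W)
        V = (np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

def pose_at(T_ref, xi, dt):
    """Reference pose plus first-order dynamics: T(t) = T_ref * exp(dt * xi)."""
    return T_ref @ se3_exp(xi, dt)

def integrate_events(events, shape):
    """Accumulate signed polarities into a brightness-change image."""
    img = np.zeros(shape)
    for x, y, p in events:                 # p in {-1, +1}; x, y integer pixels
        img[y, x] += p
    return img

def residual(render, T_ref, xi, dt, event_img):
    """Compare a differential rendering (difference of two splatted views
    under the parametrized motion) against the integrated events.

    A faithful version would difference log intensities scaled by the
    event camera's contrast threshold; that detail is assumed here."""
    I0 = render(T_ref)                     # view at the reference pose
    I1 = render(pose_at(T_ref, xi, dt))    # view after first-order motion
    return (I1 - I0) - event_img           # predicted minus measured change
```

Minimizing this residual over the twist `xi` (and, at a coarser level, the reference pose) across an image pyramid would be one plausible reading of the "staggered coarse-to-fine optimization scheme" the abstract mentions.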
