DeepAAT: Deep Automated Aerial Triangulation for Fast UAV-based Mapping

This is the implementation of the DeepAAT architecture, presented in our JAG paper DeepAAT: Deep Automated Aerial Triangulation for Fast UAV-based Mapping. The codebase is forked from the implementation of the ICCV 2021 paper Deep Permutation Equivariant Structure from Motion, available at https://github.com/drormoran/Equivariant-SFM. That architecture is also used as a baseline and referred to as ESFM in our paper.

DeepAAT considers both the spatial and spectral characteristics of the imagery, enhancing its capability to resolve erroneous matching pairs and accurately predict image poses. DeepAAT marks a significant leap in the efficiency of AAT while ensuring thorough scene coverage and precision. Its processing speed outpaces incremental AAT methods by hundreds of times and global AAT methods by tens of times, while maintaining a comparable level of reconstruction accuracy. Additionally, DeepAAT's scene clustering and merging strategy facilitates rapid localization and pose determination for large-scale UAV images, even under constrained computing resources. Experimental results demonstrate DeepAAT's substantial improvements over conventional AAT methods, highlighting its potential to improve the efficiency and accuracy of UAV-based 3D reconstruction.

Contents

- Setup
- Data and pretrained models
- Usage
- Citation
- Acknowledgement

Setup

This repository is implemented with Python 3.8 and requires Linux in order to run bundle adjustment; we have used Ubuntu 22.04. You also need a CUDA-capable GPU.
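
As a quick sanity check (assuming PyTorch is installed through the conda environment described below), you can verify that the Python environment sees a CUDA-capable GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"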

Directory structure

The repository should contain the following directories:

DeepAAT
├── bundle_adjustment
├── code
├── scripts
├── environment.yml

Conda environment

Create the environment using the following commands:

conda env create -f environment.yml
conda activate deepaat

PyCeres

Next, follow the bundle adjustment instructions in the bundle_adjustment directory to set up PyCeres.

Data and pretrained models

Attached to this repository you can find both the datasets and the pretrained models for Euclidean reconstruction of novel scenes. Download the data and the pretrained model, then update the corresponding paths in the conf file. Due to the large volume of training data, only pretrained models and test data are provided.
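
Before editing the conf file, it can help to confirm that the downloads are where you expect them. A minimal check is sketched below; the paths are placeholders, not the actual file names shipped with the data, so substitute your own download locations:

from pathlib import Path

# Placeholder paths; replace with wherever you put the downloaded test data and pretrained model.
data_dir = Path("/path/to/test_data")
model_ckpt = Path("/path/to/pretrained_model")

for p in (data_dir, model_ckpt):
    print(p, "found" if p.exists() else "MISSING")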

Usage

To execute the code, first navigate to the code subdirectory. Also make sure that the conda environment is activated.

To train a model from scratch for reconstruction of novel test scenes, run (after setting the correct data paths and options in the conf file):

python multiple_scenes_learning.py --conf path/to/conf

where path/to/conf is relative to code/confs/ and may, for example, be training.conf for training a Euclidean reconstruction model with data augmentation.
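
For example, using the training.conf mentioned above:

python multiple_scenes_learning.py --conf training.conf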

Training is followed by bundle adjustment, evaluation, and, by default, separate fine-tuning of the model parameters on every test scene.

To run inference on a new scene, run (again after setting the correct data paths and options in the conf file):

python inference.py --conf inference.conf

Citation

If you find this work useful, please cite our paper:

@article{chen2024deepaat,
  title={DeepAAT: Deep Automated Aerial Triangulation for Fast UAV-based Mapping},
  author={Chen, Zequan and Li, Jianping and Li, Qusheng and Dong, Zhen and Yang, Bisheng},
  journal={International Journal of Applied Earth Observation and Geoinformation},
  volume={134},
  pages={104190},
  year={2024},
  publisher={Elsevier}
}

Acknowledgement

Our work builds on ESFM, and we thank its authors for releasing their source code.
