This folder contains tools to generate diverse scene-aware or object-aware human motion data using existing datasets.
- HumanML3D dataset:
  - Follow the instructions in HumanML3D to process the dataset.
  - Copy the dataset to our repository: `cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D`
  - Set `$HUMANML_3D_ROOT` to the HumanML3D dataset folder.
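  As a sanity check, a minimal sketch of resolving such an environment variable in Python (the helper is hypothetical, not part of the repository):

  ```python
  import os
  from pathlib import Path

  # Hypothetical helper: resolve a dataset root from an environment variable
  # and fail early if it is unset or does not point to an existing folder.
  def dataset_root(env_var: str) -> Path:
      value = os.environ.get(env_var)
      if value is None:
          raise EnvironmentError(f"{env_var} is not set")
      root = Path(value).expanduser()
      if not root.is_dir():
          raise FileNotFoundError(f"{env_var}={root} is not an existing folder")
      return root

  humanml3d_root = dataset_root("HUMANML_3D_ROOT")
  ```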
- AMASS dataset: download from AMASS and set `$AMASS_DATA` to the dataset folder.
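  AMASS sequences are `.npz` archives of SMPL body parameters; a minimal sketch of inspecting one (the sequence path is a placeholder — the layout depends on which subdatasets you download):

  ```python
  import os
  import numpy as np

  # Placeholder path: AMASS is organized as <subdataset>/<subject>/<sequence>_poses.npz
  amass_root = os.environ["AMASS_DATA"]
  seq = np.load(os.path.join(amass_root, "CMU", "01", "01_01_poses.npz"))

  print(seq["poses"].shape)       # (num_frames, num_pose_params), axis-angle pose
  print(seq["trans"].shape)       # (num_frames, 3), global root translation
  print(seq["betas"].shape)       # body shape coefficients
  print(seq["mocap_framerate"])   # capture frame rate in Hz
  ```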
- 3D-FRONT dataset:
  - Download from 3D-FRONT.
  - Set `$threeDFront_root` to the dataset folder.
  - Get the bird-view floor plan and object mask for each scene; you can refer to these scripts (a toy rasterization sketch is shown below).
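  A toy sketch of rasterizing a bird-view occupancy mask from a scene mesh (not the repository's actual script; it assumes the ground plane is x-y and uses a coarse vertex-based rasterization):

  ```python
  import numpy as np
  import trimesh

  def birdview_mask(scene_path: str, resolution: float = 0.05) -> np.ndarray:
      scene = trimesh.load(scene_path, force="scene")
      mesh = scene.dump(concatenate=True)  # merge all objects into one mesh
      (xmin, ymin, _), (xmax, ymax, _) = mesh.bounds
      w = int(np.ceil((xmax - xmin) / resolution))
      h = int(np.ceil((ymax - ymin) / resolution))
      mask = np.zeros((h, w), dtype=bool)
      # Mark every cell containing at least one mesh vertex (coarse occupancy).
      ix = ((mesh.vertices[:, 0] - xmin) / resolution).astype(int).clip(0, w - 1)
      iy = ((mesh.vertices[:, 1] - ymin) / resolution).astype(int).clip(0, h - 1)
      mask[iy, ix] = True
      return mask
  ```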
- Fitting script: `data_generation/locomotion/align_motion_amass.py`
Based on SUMMON, we predict contact areas for each motion frame and fit objects of the corresponding categories.
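For intuition, a toy geometric stand-in for contact prediction (SUMMON's actual predictor is a learned model, so this is illustration only): mark body vertices that stay close to the floor and nearly static as contacts.

```python
import numpy as np

# verts: (num_frames, num_vertices, 3) body vertices per frame, z-up assumed.
def naive_contacts(verts: np.ndarray,
                   height_thresh: float = 0.08,
                   motion_thresh: float = 0.01) -> np.ndarray:
    low = verts[..., 2] < height_thresh                        # near the floor
    vel = np.zeros(verts.shape[:2])
    vel[1:] = np.linalg.norm(np.diff(verts, axis=0), axis=-1)  # per-frame displacement
    static = vel < motion_thresh                               # nearly stationary
    return low & static  # (num_frames, num_vertices) boolean contact mask
```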
- SAMP dataset: download from SAMP and set `$DATA_ROOT/SAMP`.
- 3D-FUTURE dataset:
  - Download from 3D-FUTURE.
  - Set `$DATA_ROOT/3D-FUTURE-model`.
  - We use `raw_model.obj` for each object.
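  A minimal sketch of loading one such model with trimesh (the model-ID folder name is a placeholder — each 3D-FUTURE model lives in its own directory):

  ```python
  import os
  import trimesh

  future_root = os.path.join(os.environ["DATA_ROOT"], "3D-FUTURE-model")
  # "<model_id>" is a placeholder for an actual model directory name.
  mesh = trimesh.load(os.path.join(future_root, "<model_id>", "raw_model.obj"),
                      force="mesh")
  print(mesh.vertices.shape, mesh.faces.shape)
  ```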
- Fit objects to predicted contact areas: `data_generation/interaction/summon/fit_best_obj.py` (a toy sketch of this kind of search is shown below).
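  A toy version of contact-driven placement (not the actual `fit_best_obj.py`): grid-search a planar offset and yaw that minimize the mean distance from predicted contact points to the object surface. The search ranges and step counts are arbitrary.

  ```python
  import numpy as np
  import trimesh

  def fit_object(obj: trimesh.Trimesh, contacts: np.ndarray):
      best_cost, best_T = np.inf, None
      center = contacts.mean(axis=0)
      for yaw in np.linspace(0, 2 * np.pi, 8, endpoint=False):
          for dx in np.linspace(-0.5, 0.5, 5):
              for dy in np.linspace(-0.5, 0.5, 5):
                  T = trimesh.transformations.euler_matrix(0, 0, yaw)
                  T[:3, 3] = center + np.array([dx, dy, 0.0])
                  moved = obj.copy()
                  moved.apply_transform(T)
                  # Mean distance from each contact point to the object surface.
                  _, dist, _ = trimesh.proximity.closest_point(moved, contacts)
                  cost = dist.mean()
                  if cost < best_cost:
                      best_cost, best_T = cost, T
      return best_cost, best_T  # best cost and its 4x4 placement transform
  ```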
- Calculate the transform matrix for each fitted object and merge the results into a `.pkl` file: `data_generation/interaction/summon/sort_out_result.py`
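  For reference, a minimal sketch of composing a 4x4 transform and dumping merged results with pickle (the dictionary layout is illustrative, not the actual format written by `sort_out_result.py`):

  ```python
  import pickle
  import numpy as np

  def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
      T = np.eye(4)
      T[:3, :3] = rotation    # 3x3 rotation matrix
      T[:3, 3] = translation  # 3-vector translation
      return T

  # Illustrative result layout: one entry per fitted object.
  results = {
      "chair_0": {
          "category": "chair",
          "transform": make_transform(np.eye(3), np.array([1.0, 0.5, 0.0])),
      },
  }
  with open("fitting_results.pkl", "wb") as f:
      pickle.dump(results, f)
  ```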
- Visualization: `data_generation/interaction/summon/vis_fitting_results.py`
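A minimal stand-in for the visualization step, showing a body mesh together with a fitted object in one trimesh scene (both file paths are placeholders):

```python
import trimesh

body = trimesh.load("body_frame_0000.obj", force="mesh")  # placeholder path
obj = trimesh.load("fitted_object.obj", force="mesh")     # placeholder path
scene = trimesh.Scene([body, obj])
scene.show()  # opens an interactive viewer window
```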