PyTorch implementation of "Modality-agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder" (accepted at NeurIPS 2023)
TL;DR: We interpret MAE through the lens of meta-learning and apply advanced meta-learning techniques to improve the unsupervised representations MAE learns on arbitrary modalities.
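The `--mask-ratio` flag in the commands below controls what fraction of input tokens is hidden before reconstruction. As a minimal illustration only (NumPy, with a hypothetical `random_masking` helper, not the repository's code), MAE-style random masking of a modality-agnostic token sequence can be sketched as:

```python
import numpy as np

def random_masking(tokens, mask_ratio=0.85, rng=None):
    """Randomly hide a fraction of tokens, MAE-style.

    tokens: (num_tokens, dim) array.
    Returns (visible_tokens, mask, ids_keep), where mask[i] is True
    for tokens that were dropped and must be reconstructed.
    """
    rng = np.random.default_rng(rng)
    n = tokens.shape[0]
    num_keep = int(n * (1 - mask_ratio))
    # Shuffle token indices and keep the first `num_keep` as visible.
    perm = rng.permutation(n)
    ids_keep = np.sort(perm[:num_keep])
    mask = np.ones(n, dtype=bool)
    mask[ids_keep] = False
    return tokens[ids_keep], mask, ids_keep

# Example: 100 tokens of dimension 8, masking 85% as in the commands below.
tokens = np.zeros((100, 8))
visible, mask, ids_keep = random_masking(tokens, mask_ratio=0.85, rng=0)
print(visible.shape[0], int(mask.sum()))  # prints: 15 85
```

The encoder sees only the visible tokens; the decoder is trained to reconstruct the masked ones.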
```shell
conda create -n meta-mae python=3.9
conda activate meta-mae
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=10.2 -c pytorch
pip install numpy==1.21.5
conda install ignite -c pytorch
pip install timm==0.6.12
pip install librosa
pip install pandas
pip install packaging tensorboard scikit-learn
```
- We obtain the datasets following the DABS dataset source code from the official GitHub page: https://github.com/alextamkin/dabs/tree/main/src/datasets
- Our code runs only on data preprocessed with the above source code (e.g., splitting to create CSV files, ...).
- E.g., for `pamap2`:

```shell
# Pretraining
python pretrain.py --logdir ./logs_final/pamap2/metamae --seed 0 --model metamae \
    --datadir [DATA_ROOT] --dataset pamap2 \
    --inner-lr 0.5 --reg-weight 1 --num-layer-dec 4 --dropout 0.1 --mask-ratio 0.85

# Linear evaluation
python linear_evaluation.py --ckptdir ./logs_final/pamap2/metamae --seed 0 --model metamae \
    --datadir [DATA_ROOT] --dataset pamap2 \
    --inner-lr 0.5 --reg-weight 1 --num-layer-dec 4 --dropout 0.1 --mask-ratio 0.85
```
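The `--inner-lr` and `--reg-weight` flags suggest a gradient-based inner-loop adaptation in the meta-learning sense. As a rough, hypothetical illustration only (a linear decoder stand-in, not the paper's actual architecture or code), a single regularized inner step could look like:

```python
import numpy as np

def inner_step(W, latents, targets, inner_lr=0.5, reg_weight=1.0):
    """One hypothetical inner-loop gradient step on a linear decoder.

    Takes a single gradient step (step size inner_lr, cf. --inner-lr)
    on the mean squared reconstruction error plus an L2 regularizer
    on W weighted by reg_weight (cf. --reg-weight).
    """
    pred = latents @ W                      # decode latents to reconstructions
    # Gradient of 0.5 * mean||pred - targets||^2 + 0.5 * reg_weight * ||W||^2
    grad = latents.T @ (pred - targets) / len(latents) + reg_weight * W
    return W - inner_lr * grad
```

In MAML-style training, the outer loop would then backpropagate through this adapted decoder; here the step is shown in isolation only to make the role of the two flags concrete.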