
Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective (VPNs)


Code for the AAAI 2025 paper Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective.

Authors: Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen

Overview

The rapid development of large-scale deep learning models calls into question the affordability of hardware platforms, which necessitates pruning to reduce their computational and memory footprints. Sparse neural networks, as the product of pruning, have demonstrated numerous favorable benefits such as low complexity and undamaged generalization. Most prominent pruning strategies are invented from a model-centric perspective, focusing on searching for and preserving crucial weights by analyzing network topologies. However, the role of data and its interplay with model-centric pruning has remained relatively unexplored. In this research, we introduce a novel data-model co-design perspective: promoting superior weight sparsity by learning important model topology and adequate input data in a synergetic manner. Specifically, customized Visual Prompts are mounted to upgrade neural Network sparsification in our proposed VPNs framework. As a pioneering effort, this paper conducts systematic investigations into the impact of different visual prompts on model pruning and proposes an effective joint optimization approach. Extensive experiments with 3 network architectures and 8 datasets evidence substantial performance improvements from VPNs over existing state-of-the-art pruning algorithms. Furthermore, we find that subnetworks discovered by VPNs from pre-trained models enjoy better transferability across diverse downstream scenarios. These insights shed light on new promising possibilities of data-model co-design for vision model sparsification.
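
The joint optimization idea can be illustrated in a few lines of PyTorch. The sketch below is ours, not the repository's implementation: the class names (PadPrompt, MaskedLinear), the straight-through top-k masking, and all hyperparameters are illustrative assumptions about how a visual prompt (data side) and a pruning mask (model side) can be trained together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PadPrompt(nn.Module):
    """Learnable border ('pad') prompt added to every input image (illustrative)."""
    def __init__(self, size=32, pad=4):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(3, size, size))
        border = torch.ones(3, size, size)
        border[:, pad:size - pad, pad:size - pad] = 0  # only the border is trainable
        self.register_buffer("border", border)

    def forward(self, x):
        return x + self.delta * self.border

class MaskedLinear(nn.Module):
    """Linear layer pruned via a learned score with a straight-through top-k mask."""
    def __init__(self, in_features, out_features, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.score = nn.Parameter(torch.rand(out_features, in_features))
        self.sparsity = sparsity

    def forward(self, x):
        k = int(self.score.numel() * (1.0 - self.sparsity))  # number of weights to keep
        keep = torch.topk(self.score.flatten(), k).indices
        hard = torch.zeros_like(self.score.flatten())
        hard[keep] = 1.0
        hard = hard.view_as(self.score)
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # identity gradient into the scores in the backward pass.
        mask = hard + self.score - self.score.detach()
        return F.linear(x, self.weight * mask)

# Jointly optimize the prompt (data side) and the pruning scores (model side).
prompt = PadPrompt()
classifier = MaskedLinear(3 * 32 * 32, 10)
opt = torch.optim.Adam(list(prompt.parameters()) + list(classifier.parameters()), lr=1e-3)

x = torch.randn(8, 3, 32, 32)            # stand-in batch of CIFAR-sized images
y = torch.randint(0, 10, (8,))
opt.zero_grad()
logits = classifier(prompt(x).flatten(1))
loss = F.cross_entropy(logits, y)
loss.backward()
opt.step()
```

In the actual framework, this kind of mask is applied to a full pre-trained backbone (e.g., ResNet-18/50) rather than a single linear layer; see the paper for the exact prompt design and optimization schedule.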

VPNs

1. Install requirements:

conda create -n vpns python=3.8
conda activate vpns
pip install -r requirements.txt
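
Optionally, verify that the environment resolved correctly before continuing. The snippet assumes torch and torchvision are installed by requirements.txt, which is an assumption rather than something confirmed here:

```python
# Optional environment sanity check; assumes torch/torchvision are in requirements.txt.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```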

2. Symlink datasets (Optional):

If you already have the datasets downloaded, create a symlink. If you skip this step, the datasets will be downloaded automatically.

mkdir ./dataset
ln -s <DATASET_PARENT_DIR> ./dataset

3. Run VPNs pruning

ResNet-18 on CIFAR-100 at 40%, 50%, 60%, 70%, 80%, and 90% sparsity levels:

bash vpns_resnet18_cifar100.sh

ResNet-50 on Tiny-ImageNet at 40%, 50%, 60%, 70%, 80%, and 90% sparsity levels (the Tiny-ImageNet data must be placed under ./dataset/tiny_imagenet/):

bash vpns_resnet50_tiny_imagenet.sh

Please change the network and dataset in the vpns.sh file to run further experiments.

4. Checkpoints

We provide several of the best checkpoints from VPNs pruning here.

| Network + Dataset | 40% sparsity | 60% sparsity | 70% sparsity |
| ----------------- | ------------ | ------------ | ------------ |
| ResNet-18 + CIFAR-10 | ckpt | ckpt | ckpt |
| ResNet-18 + CIFAR-100 | ckpt | ckpt | ckpt |
| ResNet-18 + Tiny-ImageNet | ckpt | ckpt | ckpt |
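
A downloaded checkpoint can then be loaded along the following lines. This is a hedged sketch: the file name is hypothetical, and the 'state_dict' wrapper key and strict=False matching are assumptions about the checkpoint format rather than documented behavior of this repository.

```python
import torch
from torchvision.models import resnet18

# Hedged sketch of loading a VPNs checkpoint; the file name is hypothetical
# and the 'state_dict' key is an assumption -- inspect ckpt.keys() if it fails.
ckpt = torch.load("resnet18_cifar100_60.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

model = resnet18(num_classes=100)
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```

Here strict=False tolerates extra entries such as a bundled visual prompt, which a plain torchvision backbone would not recognize.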

Citation

@misc{jin2023visual,
      title={Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective}, 
      author={Can Jin and Tianjin Huang and Yihua Zhang and Mykola Pechenizkiy and Sijia Liu and Shiwei Liu and Tianlong Chen},
      year={2023},
      eprint={2312.01397},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
