
Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching


(Results on DiT-XL/2 and U-ViT-H/2)

Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching 🥯[arXiv]
Xinyin Ma, Gongfan Fang, Michael Bi Mi, Xinchao Wang
Learning and Vision Lab, National University of Singapore, Huawei Technologies Ltd

Introduction

We introduce a novel scheme, Learning-to-Cache (L2C), which learns to perform layer caching dynamically for diffusion transformers. A router is optimized to decide which layers should be cached at each step.


(Changes in the router for U-ViT when optimizing across different layers (x-axis) over all steps (y-axis). White indicates the layer is activated; black indicates it is disabled.)
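To make the router idea concrete, below is a minimal sketch (not the repository's actual code) of a transformer block whose output is a learnable mix of a freshly computed result and an activation cached from the previous denoising step. Only the router weight `beta` is trainable; the block itself stays frozen, mirroring the point that L2C does not update the model parameters. All names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RoutedBlock(nn.Module):
    """Hypothetical soft-routed wrapper around one frozen diffusion-transformer layer."""

    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block                        # frozen DiT / U-ViT layer
        self.beta = nn.Parameter(torch.zeros(1))  # router logit: how much to reuse the cache
        for p in self.block.parameters():
            p.requires_grad_(False)
        self.cache = None                         # activation from the previous denoising step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fresh = self.block(x)
        if self.cache is None:
            out = fresh                           # first step: nothing cached yet
        else:
            b = torch.sigmoid(self.beta)          # soft decision in [0, 1]
            out = b * self.cache + (1.0 - b) * fresh
        self.cache = out.detach()                 # remembered for the next step
        return out
```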

Some takeaways:

  1. A large proportion of layers in the diffusion transformer can be removed without updating the model parameters.

    • In U-ViT-H/2, up to 93.68% of the layers in the cache steps (46.84% across all steps) can be removed, with less than a 0.01 drop in FID.
  2. L2C significantly outperforms fast samplers such as DDIM and DPM-Solver.


(Comparison with Baselines. Left: DiT-XL/2. Right: U-ViT-H/2)
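One way to picture where the speed-up over the baselines comes from is the following inference-time sketch (hypothetical helper, not the repository's API): once the router is trained, each layer gets a hard keep/cache decision per step, and layers marked as cacheable skip their forward pass entirely, reusing the activation stored at the previous denoising step.

```python
import torch

@torch.no_grad()
def run_step(blocks, routers, x, caches, threshold=0.5):
    """One denoising step over a list of frozen transformer blocks (illustrative only).

    blocks    : list of nn.Module, the frozen DiT / U-ViT layers
    routers   : 1-D tensor of learned router values, one per layer
    caches    : list holding each layer's output from the previous step
    threshold : router values above it mean "reuse the cache"
    """
    for i, block in enumerate(blocks):
        if caches[i] is not None and routers[i] > threshold:
            x = caches[i]          # cached layer: skip computation entirely
        else:
            x = block(x)           # active layer: recompute as usual
        caches[i] = x              # remember for the next step
    return x, caches
```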

Checkpoint for Routers

| Model     | NFE | Checkpoint |
|-----------|-----|------------|
| DiT-XL/2  | 50  | link       |
| DiT-XL/2  | 20  | link       |
| U-ViT-H/2 | 50  | link       |
| U-ViT-H/2 | 20  | link       |

Code

We implement Learning-to-Cache on two base architectures, DiT and U-ViT. Follow the instructions below:

  1. DiT: README
  2. U-ViT: README

Citation

@misc{ma2024learningtocache,
      title={Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching}, 
      author={Xinyin Ma and Gongfan Fang and Michael Bi Mi and Xinchao Wang},
      year={2024},
      eprint={2406.01733},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
