Hello! I appreciate your great work on diffusion models.

I want to experiment with W8A8 models, but I don't see a corresponding config in https://github.com/mit-han-lab/deepcompressor/tree/main/examples/diffusion/configs/svdquant, even though the results table at https://github.com/mit-han-lab/deepcompressor/tree/main/examples/diffusion reports this performance:

| Precision | Method | FID (↓) | IR (↑) | LPIPS (↓) | PSNR (↑) |
|-----------|--------|---------|--------|-----------|----------|
| INT W8A8  | Ours   | 16.3    | 0.955  | 0.109     | 23.7     |

Could you please explain how to generate the DiT at W8A8 precision, since it is not published on Hugging Face?
Hi,
For a W8A8 configuration, you can directly change the quantization configuration to:
```yaml
dtype: sint8
group_shapes:
- - 1
  - -1
  - 1
  - 1
  - 1
scale_dtypes:
- null
```
for both wgts (weights) and ipts (input activations).
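For concreteness, here is a sketch of how the full quantization section might look with that fragment applied to both branches. The surrounding `wgts`/`ipts` key names are taken from the reply above; the exact nesting around them in the svdquant example configs is an assumption and may differ:

```yaml
# Hypothetical sketch: per-tensor sint8 quantization for both
# weights and input activations (W8A8). Verify the surrounding
# structure against an existing config in examples/diffusion/configs.
wgts:
  dtype: sint8
  group_shapes:
  - - 1
    - -1
    - 1
    - 1
    - 1
  scale_dtypes:
  - null
ipts:
  dtype: sint8
  group_shapes:
  - - 1
    - -1
    - 1
    - 1
    - 1
  scale_dtypes:
  - null
```

Here `sint8` replaces the 4-bit dtype used in the published W4A4 configs, while the group shapes and scale dtypes are kept as given in the reply.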