Replies: 2 comments
-
I know @sagadre has tried this for the OAI ViT models at least, but I don't think he has for the open_clip ones; he'll know more.
-
Yeah! Check out this notebook from Hila Chefer: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb I found this to work pretty well qualitatively on the OAI ViT-B/32 model!
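For reference, the core idea in Chefer's relevance method is to weight each attention map by its gradient, keep the positive part, average over heads, and roll the result through the layers. A minimal sketch in plain PyTorch with toy tensors (shapes and names are illustrative stand-ins, not code from the notebook):

```python
import torch

def chefer_relevance(attn_maps, attn_grads):
    """Relevance rollout: R <- R + mean_heads(relu(grad * attn)) @ R.

    attn_maps / attn_grads: lists of [heads, tokens, tokens] tensors,
    one per transformer layer (toy stand-ins for CLIP's ViT attention).
    """
    num_tokens = attn_maps[0].shape[-1]
    R = torch.eye(num_tokens)  # start from identity relevance
    for A, dA in zip(attn_maps, attn_grads):
        # positive gradient-weighted attention, averaged over heads
        cam = (dA * A).clamp(min=0).mean(dim=0)
        R = R + cam @ R
    return R

# Toy usage: 2 layers, 4 heads, 50 tokens (CLS + 7x7 patches for ViT-B/32 at 224px)
torch.manual_seed(0)
attn = [torch.rand(4, 50, 50) for _ in range(2)]
grads = [torch.randn(4, 50, 50) for _ in range(2)]
R = chefer_relevance(attn, grads)
cls_relevance = R[0, 1:]  # relevance of the 49 image patches w.r.t. the CLS token
```

Reshaping `cls_relevance` to 7x7 and upsampling gives the saliency overlay.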
-
Has anyone tried saliency map visualizations with open_clip models?
I came across these examples, but they only use OpenAI ResNet-based models.
https://colab.research.google.com/github/kevinzakka/clip_playground/blob/main/CLIP_GradCAM_Visualization.ipynb
https://huggingface.co/spaces/njanakiev/gradio-openai-clip-grad-cam
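Grad-CAM itself only needs the forward activations and gradients of one convolutional layer, so the hook-based recipe in those notebooks should in principle transfer to open_clip's ResNet variants as well. A minimal self-contained sketch on a toy CNN (the model and target layer here are placeholders, not open_clip API):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a CLIP ResNet visual backbone (placeholder, not open_clip).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # pick the last conv as target
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
target_layer = model[2]

# Capture activations on the forward pass and gradients on the backward pass.
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)
score = model(x)[0, 3]  # in CLIP this would be the image-text similarity score
model.zero_grad()
score.backward()

# Grad-CAM: channel weights = spatially averaged gradients,
# then ReLU of the weighted sum of activation channels.
w = grads["g"].mean(dim=(2, 3), keepdim=True)   # [1, C, 1, 1]
cam = F.relu((w * acts["a"]).sum(dim=1))        # [1, H, W]
cam = cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
```

For an actual open_clip model you would hook the last conv layer of its visual backbone and backprop the cosine similarity with the encoded text instead of a class logit.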