Experiments with CLIP (Contrastive Language–Image Pre-training) models for zero-shot classification on images from the CIFAR100 dataset. See the notebook.
Specifically, the notebook tackles:
- How to download and run pre-trained CLIP models (a loading sketch follows this list)
- How to compute the cosine similarity between arbitrary image and text input pairs by projecting them into the CLIP embedding space (second sketch below)
- How to perform zero-shot image classification on images from the CIFAR100 dataset (third sketch below)
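
A minimal loading sketch, assuming OpenAI's reference `clip` package (`pip install git+https://github.com/openai/CLIP.git`) rather than any notebook-specific wrapper; `"ViT-B/32"` is one of several published checkpoints:

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# List the checkpoint names the package knows about.
print(clip.available_models())

# Downloads the weights to a local cache on first call and returns the
# model together with the image preprocessing pipeline matched to it.
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()
```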
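
A similarity sketch under the same assumptions: both inputs are projected into the shared CLIP embedding space, and after L2 normalization the dot product equals the cosine similarity. The image path `photo.png` and the two captions are hypothetical placeholders:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder inputs; swap in any image and any candidate captions.
image = preprocess(Image.open("photo.png")).unsqueeze(0).to(device)
texts = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)  # shape (1, d)
    text_features = model.encode_text(texts)    # shape (2, d)

# Normalize so the dot product below is exactly the cosine similarity.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)

similarity = image_features @ text_features.T   # shape (1, 2)
print(similarity)
```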
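
A zero-shot classification sketch in the style of the CLIP paper: each CIFAR100 class name is wrapped in a text prompt, and the class whose text embedding is most similar to the image embedding is the prediction. The `root="./data"` cache path and the single-prompt template are assumptions:

```python
import torch
import clip
from torchvision.datasets import CIFAR100

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Download the CIFAR100 test split; items are (PIL image, class id) pairs.
cifar100 = CIFAR100(root="./data", download=True, train=False)

# Build one text prompt per class and tokenize them all at once.
prompts = clip.tokenize(
    [f"a photo of a {c}" for c in cifar100.classes]
).to(device)

image, label = cifar100[0]
image_input = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image_input)
    text_features = model.encode_text(prompts)

image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)

# Softmax over scaled cosine similarities gives per-class probabilities;
# the argmax is the zero-shot prediction.
probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
pred = probs.argmax(dim=-1).item()
print(f"predicted: {cifar100.classes[pred]}, true: {cifar100.classes[label]}")
```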
Reference: the original CLIP paper, Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021).