Update SimCLR model example #1782

Open · wants to merge 1 commit into master
16 changes: 15 additions & 1 deletion docs/source/examples/simclr.rst
@@ -3,7 +3,21 @@
SimCLR
======

Example implementation of the SimCLR architecture.
SimCLR is a framework for self-supervised learning of visual representations using contrastive learning. It aims to maximize agreement between different augmented views of the same image.

Key Components
--------------

- **Data Augmentations**: SimCLR uses random cropping, resizing, color jittering, and Gaussian blur to create diverse views of the same image.
- **Backbone**: Convolutional neural networks, such as ResNet, are employed to encode augmented images into feature representations.
- **Projection Head**: A multilayer perceptron (MLP) maps features into a space where contrastive loss is applied, enhancing representation quality.
- **Contrastive Loss**: The normalized temperature-scaled cross-entropy loss (NT-Xent) encourages similar pairs to align and dissimilar pairs to diverge (see the formula after this list).
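
For a positive pair :math:`(i, j)` among the :math:`2N` augmented views in a batch, the NT-Xent loss can be written as follows, where :math:`\operatorname{sim}` denotes the cosine similarity between projections :math:`z` and :math:`\tau` is the temperature (notation as in the SimCLR paper):

.. math::

   \ell_{i,j} = -\log \frac{\exp\left(\operatorname{sim}(z_i, z_j) / \tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\operatorname{sim}(z_i, z_k) / \tau\right)}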

Good to Know
------------

- **Backbone Networks**: SimCLR is specifically optimized for convolutional neural networks, with a focus on ResNet architectures. We do not recommend using it with transformer-based models.
- **Learning Paradigm**: SimCLR relies on contrastive learning, which makes it sensitive to the choice of augmentations, and the method benefits from larger batch sizes. A minimal end-to-end sketch follows this list.
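
As a rough illustration of how these pieces fit together, below is a minimal, self-contained sketch in plain PyTorch (assuming ``torch`` and ``torchvision`` are installed). It is not the library's own example; names such as ``SimCLRSketch`` and ``nt_xent_loss`` are placeholders for this snippet, and real training would additionally apply the augmentation pipeline described above to produce the two views.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision


    class SimCLRSketch(nn.Module):
        """ResNet-18 backbone followed by an MLP projection head."""

        def __init__(self, proj_dim: int = 128):
            super().__init__()
            resnet = torchvision.models.resnet18()
            # Drop the classification layer and keep the 512-dim pooled features.
            self.backbone = nn.Sequential(*list(resnet.children())[:-1])
            self.projection_head = nn.Sequential(
                nn.Linear(512, 512),
                nn.ReLU(inplace=True),
                nn.Linear(512, proj_dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.backbone(x).flatten(start_dim=1)  # representation h
            z = self.projection_head(h)                # projection z used by the loss
            return z


    def nt_xent_loss(z0: torch.Tensor, z1: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
        """NT-Xent loss for two batches of projections of the same images."""
        n = z0.shape[0]
        z = F.normalize(torch.cat([z0, z1], dim=0), dim=1)  # (2N, D), unit norm
        sim = z @ z.t() / temperature                       # (2N, 2N) cosine similarities
        # Exclude self-similarity from the denominator.
        mask = torch.eye(2 * n, dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(mask, float("-inf"))
        # The positive for row i is the other augmented view of the same image.
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(sim.device)
        return F.cross_entropy(sim, targets)


    if __name__ == "__main__":
        model = SimCLRSketch()
        # Two augmented views of the same batch (random tensors stand in for real augmentations).
        view0 = torch.randn(8, 3, 224, 224)
        view1 = torch.randn(8, 3, 224, 224)
        loss = nt_xent_loss(model(view0), model(view1))
        loss.backward()
        print(f"NT-Xent loss: {loss.item():.4f}")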

Reference:
`A Simple Framework for Contrastive Learning of Visual Representations, 2020 <https://arxiv.org/abs/2002.05709>`_