Forked from the original SAM 2 repository, using the SAM 2.1 model with a webcam.
Also inspired by SAM2-RealTime-Webcam by EllenGYY.
This repository removes the intermediate step of saving frames to a shared folder (previously used to pass data between the Windows USB camera and the WSL code), so it can be run directly on Windows:
python webCamSeg.py
While the demo is running, you can choose the current frame and click the object you want to segment, then press 's' to start tracking. Pressing 's' first and then clicking works the same way. Press 'q' to quit.
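The click-then-track interaction above can be sketched as a small state machine. This is a minimal illustration of the control flow only; the class and method names below are hypothetical and not the actual ones used in webCamSeg.py, and the real demo wires these handlers to OpenCV mouse and keyboard callbacks:

```python
class ClickTracker:
    """Sketch of the demo's interaction logic: collect clicked
    points on the current frame, start tracking with 's', quit
    with 'q'. Clicks and 's' may arrive in either order."""

    def __init__(self):
        self.points = []        # (x, y) point prompts clicked by the user
        self.tracking = False   # becomes True once 's' is pressed
        self.running = True     # main loop flag; cleared by 'q'

    def on_click(self, x, y):
        # A click selects the object to segment; accepted whether
        # or not tracking has already started.
        self.points.append((x, y))

    def on_key(self, key):
        if key == 's':
            self.tracking = True    # start tracking the clicked object
        elif key == 'q':
            self.running = False    # exit the demo loop
```

Either ordering (click then 's', or 's' then click) leaves the tracker in the same state, which mirrors the behavior described above.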
Before running this demo, please install SAM 2 first. The "Installation & Getting Started" section below is taken from the original SAM 2 repository.
SAM 2 needs to be installed first before use. The code requires python>=3.10, as well as torch>=2.3.1 and torchvision>=0.18.1. Please follow the instructions here to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using:
git clone https://github.com/facebookresearch/sam2.git && cd sam2
pip install -e .
If you are installing on Windows, it's strongly recommended to use Windows Subsystem for Linux (WSL) with Ubuntu.
To use the SAM 2 predictor and run the example notebooks, jupyter and matplotlib are required and can be installed by:
pip install -e ".[notebooks]"
Note:
- It's recommended to create a new Python environment via Anaconda for this installation and install PyTorch 2.3.1 (or higher) via pip following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using pip.
- The step above requires compiling a custom CUDA kernel with the nvcc compiler. If it isn't already available on your machine, please install the CUDA toolkits with a version that matches your PyTorch CUDA version.
- If you see a message like "Failed to build the SAM 2 CUDA extension" during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases).
Please see INSTALL.md for FAQs on potential issues and solutions.
First, we need to download a model checkpoint. All the model checkpoints can be downloaded by running:
cd checkpoints && \
./download_ckpts.sh && \
cd ..
or individually from:
(note that these are the improved checkpoints denoted as SAM 2.1; see Model Description for details.)
Then SAM 2 can be used in a few lines as follows for image and video prediction.
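As a rough illustration of what those few lines look like for image prediction, here is a minimal sketch using SAM 2's image predictor with a single point prompt. The config and checkpoint paths are assumptions based on the download step above; adjust them to your local layout, and note that the `segment` helper here is illustrative, not part of SAM 2:

```python
import numpy as np

# Point prompt: one foreground click at pixel (500, 375).
# Labels: 1 = foreground point, 0 = background point.
point_coords = np.array([[500, 375]], dtype=np.float32)
point_labels = np.array([1], dtype=np.int32)

def segment(image):
    """Run SAM 2 image prediction for the point prompt above.
    Requires SAM 2 installed and a checkpoint downloaded; the
    paths below are assumed, not guaranteed."""
    import torch
    from sam2.build_sam import build_sam2
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    predictor = SAM2ImagePredictor(
        build_sam2("configs/sam2.1/sam2.1_hiera_l.yaml",   # assumed config path
                   "checkpoints/sam2.1_hiera_large.pt"))   # assumed checkpoint path
    with torch.inference_mode():
        predictor.set_image(image)  # HxWx3 RGB uint8 numpy array
        masks, scores, _ = predictor.predict(
            point_coords=point_coords,
            point_labels=point_labels)
    return masks, scores
```

Video prediction follows the same pattern with the video predictor API instead, feeding prompts on one frame and propagating masks through the rest.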
The table below shows the improved SAM 2.1 checkpoints released on September 29, 2024.
Model | Size (M) | Speed (FPS) | SA-V test (J&F) | MOSE val (J&F) | LVOS v2 (J&F) |
---|---|---|---|---|---|
sam2.1_hiera_tiny (config, checkpoint) | 38.9 | 47.2 | 76.5 | 71.8 | 77.3 |
sam2.1_hiera_small (config, checkpoint) | 46 | 43.3 (53.0 compiled*) | 76.6 | 73.5 | 78.3 |
sam2.1_hiera_base_plus (config, checkpoint) | 80.8 | 34.8 (43.8 compiled*) | 78.2 | 73.7 | 78.2 |
sam2.1_hiera_large (config, checkpoint) | 224.4 | 24.2 (30.2 compiled*) | 79.5 | 74.6 | 80.6 |