branch name change (#118)
Summary:
Changes to prepare for the branch name change from `master` to `main`.

Pull Request resolved: #118

Reviewed By: haooooooqi

Differential Revision: D31042735

Pulled By: kalyanvasudev

fbshipit-source-id: 572b35a3a12f455c1abf26abd81ace4d940c6625
kalyanvasudev authored and facebook-github-bot committed Sep 20, 2021
1 parent 4b3494b commit 9d0ca90
Showing 17 changed files with 34 additions and 34 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -202,4 +202,4 @@ workflows:
   filters:
     branches:
       only:
-        - master
+        - main
2 changes: 1 addition & 1 deletion .github/CONTRIBUTING.md
@@ -16,7 +16,7 @@ We do not always accept new features, and we take the following factors into consideration:

When sending a PR, please ensure you complete the following steps:

-1. Fork the repo and create your branch from `master`. Follow the instructions
+1. Fork the repo and create your branch from `main`. Follow the instructions
in [INSTALL.md](../INSTALL.md) to build the repo.
2. If you've added code that should be tested, add tests.
3. If you've changed any APIs, please update the documentation.
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -5,7 +5,7 @@ possible.
## Pull Requests
We actively welcome your pull requests.

-1. Fork the repo and create your branch from `master`.
+1. Fork the repo and create your branch from `main`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
@@ -14,12 +14,12 @@ We actively welcome your pull requests.

## Testing

-Please follow the instructions mentioned in [test-README](https://github.com/facebookresearch/pytorchvideo/blob/master/tests/README.md) to run the existing and your newly added tests.
+Please follow the instructions mentioned in [test-README](https://github.com/facebookresearch/pytorchvideo/blob/main/tests/README.md) to run the existing and your newly added tests.

## Linting

We provide a linting script to correctly format your code changes.
-Please follow the instructions mentioned in [dev-README](https://github.com/facebookresearch/pytorchvideo/blob/master/dev/README.md) to run the linter.
+Please follow the instructions mentioned in [dev-README](https://github.com/facebookresearch/pytorchvideo/blob/main/dev/README.md) to run the linter.


## Contributor License Agreement ("CLA")
4 changes: 2 additions & 2 deletions INSTALL.md
@@ -60,9 +60,9 @@ conda install -c pytorch pytorch=1.8.0 torchvision cudatoolkit=10.2

## Testing

-Please follow the instructions mentioned in [test-README](https://github.com/facebookresearch/pytorchvideo/blob/master/tests/README.md) to run the provided tests.
+Please follow the instructions mentioned in [test-README](https://github.com/facebookresearch/pytorchvideo/blob/main/tests/README.md) to run the provided tests.

## Linting

We also provide a linting script to correctly format your code edits.
-Please follow the instructions mentioned in [dev-README](https://github.com/facebookresearch/pytorchvideo/blob/master/dev/README.md) to run the linter.
+Please follow the instructions mentioned in [dev-README](https://github.com/facebookresearch/pytorchvideo/blob/main/dev/README.md) to run the linter.
14 changes: 7 additions & 7 deletions README.md
@@ -3,17 +3,17 @@
</p>

<p align="center">
<a href="https://github.com/facebookresearch/pytorchvideo/blob/master/LICENSE">
<a href="https://github.com/facebookresearch/pytorchvideo/blob/main/LICENSE">
<img src="https://img.shields.io/pypi/l/pytorchvideo" alt="CircleCI" />
</a>
<a href="https://pypi.org/project/pytorchvideo/">
<img src="https://img.shields.io/pypi/v/pytorchvideo?color=blue&label=release" alt="CircleCI" />
</a>
<a href="https://circleci.com/gh/facebookresearch/pytorchvideo/tree/master">
<img src="https://img.shields.io/circleci/build/github/facebookresearch/pytorchvideo/master?token=efdf3ff5b6f6acf44f4af39b683dea31d40e5901" alt="Coverage" />
<a href="https://circleci.com/gh/facebookresearch/pytorchvideo/tree/main">
<img src="https://img.shields.io/circleci/build/github/facebookresearch/pytorchvideo/main?token=efdf3ff5b6f6acf44f4af39b683dea31d40e5901" alt="Coverage" />
</a>
<a href="https://codecov.io/gh/facebookresearch/pytorchvideo/branch/master">
<img src="https://codecov.io/gh/facebookresearch/pytorchvideo/branch/master/graph/badge.svg?token=OSZSI6JU31"/>
<a href="https://codecov.io/gh/facebookresearch/pytorchvideo/branch/main">
<img src="https://codecov.io/gh/facebookresearch/pytorchvideo/branch/main/graph/badge.svg?token=OSZSI6JU31"/>
</a>
</a>
<a href="https://join.slack.com/t/pytorchvideo/shared_invite/zt-qjrkknes-7bt0qjcmVNvXcceg9zlgOA">
@@ -44,7 +44,7 @@ Key features include:

## Updates

-- Aug 2021: [Multiscale Vision Transformers](https://arxiv.org/abs/2104.11227) has been released in PyTorchVideo, details can be found from [here](https://github.com/facebookresearch/pytorchvideo/blob/master/pytorchvideo/models/vision_transformers.py#L97).
+- Aug 2021: [Multiscale Vision Transformers](https://arxiv.org/abs/2104.11227) has been released in PyTorchVideo, details can be found from [here](https://github.com/facebookresearch/pytorchvideo/blob/main/pytorchvideo/models/vision_transformers.py#L97).
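The file linked above exposes the factory `create_multiscale_vision_transformers`. As a hedged sketch of constructing an MViT with it (argument values are illustrative; defaults may have changed since this commit):

```python
import torch
from pytorchvideo.models.vision_transformers import (
    create_multiscale_vision_transformers,
)

# Build an MViT for 16-frame clips at 224x224 resolution.
model = create_multiscale_vision_transformers(
    spatial_size=224,
    temporal_size=16,
)

# Input layout is (batch, channels, time, height, width).
clip = torch.randn(1, 3, 16, 224, 224)
logits = model(clip)
```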

## Installation

@@ -65,7 +65,7 @@ Get started with PyTorchVideo by trying out one of our [tutorials](https://pytor


## Model Zoo and Baselines
-We provide a large set of baseline results and trained models available for download in the [PyTorchVideo Model Zoo](https://github.com/facebookresearch/pytorchvideo/blob/master/docs/source/model_zoo.md).
+We provide a large set of baseline results and trained models available for download in the [PyTorchVideo Model Zoo](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.md).

## Contributors

2 changes: 1 addition & 1 deletion dev/README.md
@@ -2,7 +2,7 @@


Before running the linter, please ensure that you installed the necessary additional linter dependencies.
-If not installed, check the [install-README](https://github.com/facebookresearch/pytorchvideo/blob/master/INSTALL.md) on how to do it.
+If not installed, check the [install-README](https://github.com/facebookresearch/pytorchvideo/blob/main/INSTALL.md) on how to do it.

Post that, you can run the linter from the project root using,

2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -175,7 +175,7 @@
)
]

-github_doc_root = "https://github.com/facebookresearch/pytorchvideo/tree/master"
+github_doc_root = "https://github.com/facebookresearch/pytorchvideo/tree/main"


def setup(app):
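The fold hides the body of `setup`. A common way such a doc root is consumed in a Sphinx hook, shown here as a hypothetical sketch following the standard recommonmark pattern (not necessarily this repo's exact code):

```python
from recommonmark.transform import AutoStructify

def setup(app):
    # Hypothetical sketch: resolve relative Markdown links against the
    # GitHub tree rooted at github_doc_root (defined above in conf.py).
    app.add_config_value(
        "recommonmark_config",
        {"url_resolver": lambda url: github_doc_root + url},
        True,
    )
    app.add_transform(AutoStructify)
```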
8 changes: 4 additions & 4 deletions docs/source/data_preparation.md
@@ -18,12 +18,12 @@ path_to_video_3 label_3
path_to_video_N label_N
```
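As a hedged sketch of consuming a file in this path/label format with PyTorchVideo's dataset API (the filename and clip length are illustrative placeholders):

```python
import pytorchvideo.data

# "train.csv" stands in for a file in the path/label format above.
dataset = pytorchvideo.data.Kinetics(
    data_path="train.csv",
    clip_sampler=pytorchvideo.data.make_clip_sampler("random", 2.0),
    decode_audio=False,
)
sample = next(iter(dataset))  # dict with "video", "label", and clip metadata
```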

-All the Kinetics models in the Model Zoo are trained and tested with the same data as [Non-local Network](https://github.com/facebookresearch/video-nonlocal-net/blob/master/DATASET.md) and [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/master/slowfast/datasets/DATASET.md). For dataset specific issues, please reach out to the [dataset provider](https://deepmind.com/research/open-source/kinetics).
+All the Kinetics models in the Model Zoo are trained and tested with the same data as [Non-local Network](https://github.com/facebookresearch/video-nonlocal-net/blob/main/DATASET.md) and [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/main/slowfast/datasets/DATASET.md). For dataset specific issues, please reach out to the [dataset provider](https://deepmind.com/research/open-source/kinetics).


### Charades

-We follow [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/master/slowfast/datasets/DATASET.md) to prepare the Charades dataset as follow:
+We follow [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/main/slowfast/datasets/DATASET.md) to prepare the Charades dataset as follow:

1. Download the Charades RGB frames from [official website](http://ai2-website.s3.amazonaws.com/data/Charades_v1_rgb.tar).

@@ -32,7 +32,7 @@ We follow [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/master/

### Something-Something V2

-We follow [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/master/slowfast/datasets/DATASET.md) to prepare the Something-Something V2 dataset as follow:
+We follow [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/main/slowfast/datasets/DATASET.md) to prepare the Something-Something V2 dataset as follow:

1. Download the dataset and annotations from [official website](https://20bn.com/datasets/something-something).

@@ -49,7 +49,7 @@ We follow [PySlowFast](https://github.com/facebookresearch/SlowFast/blob/master/
The AVA Dataset could be downloaded from the [official site](https://research.google.com/ava/download.html#ava_actions_download)
-We followed the same [downloading and preprocessing procedure](https://github.com/facebookresearch/video-long-term-feature-banks/blob/master/DATASET.md) as the [Long-Term Feature Banks for Detailed Video Understanding](https://arxiv.org/abs/1812.05038) do.
+We followed the same [downloading and preprocessing procedure](https://github.com/facebookresearch/video-long-term-feature-banks/blob/main/DATASET.md) as the [Long-Term Feature Banks for Detailed Video Understanding](https://arxiv.org/abs/1812.05038) do.
You could follow these steps to download and preprocess the data:
2 changes: 1 addition & 1 deletion docs/source/model_zoo.md
@@ -78,4 +78,4 @@ All top1/top5 accuracies are measured with 10-clip evaluation. Latency is benchm


### TorchHub models
-We provide a large set of [TorchHub](https://pytorch.org/hub/) models for the above video models with pre-trained weights. So it's easy to construct the networks and load pre-trained weights. Please refer to [PytorchVideo TorchHub models](https://github.com/facebookresearch/pytorchvideo/blob/master/pytorchvideo/models/hub/README.md) for more details.
+We provide a large set of [TorchHub](https://pytorch.org/hub/) models for the above video models with pre-trained weights. So it's easy to construct the networks and load pre-trained weights. Please refer to [PytorchVideo TorchHub models](https://github.com/facebookresearch/pytorchvideo/blob/main/pytorchvideo/models/hub/README.md) for more details.
4 changes: 2 additions & 2 deletions projects/video_nerf/README.md
@@ -1,6 +1,6 @@
# Train a NeRF model with PyTorchVideo and PyTorch3D

-This project demonstrates how to use the video decoder from PyTorchVideo to load frames from a video of an object from the [Objectron dataset](https://github.com/google-research-datasets/Objectron), and use this to train a NeRF [1] model with [PyTorch3D](https://github.com/facebookresearch/pytorch3d). Instead of decoding and storing all the video frames as images, PyTorchVideo offers an easy alternative to load and access frames on the fly. For this project we will be using the [NeRF implementation from PyTorch3D](https://github.com/facebookresearch/pytorch3d/tree/master/projects/nerf).
+This project demonstrates how to use the video decoder from PyTorchVideo to load frames from a video of an object from the [Objectron dataset](https://github.com/google-research-datasets/Objectron), and use this to train a NeRF [1] model with [PyTorch3D](https://github.com/facebookresearch/pytorch3d). Instead of decoding and storing all the video frames as images, PyTorchVideo offers an easy alternative to load and access frames on the fly. For this project we will be using the [NeRF implementation from PyTorch3D](https://github.com/facebookresearch/pytorch3d/tree/main/projects/nerf).
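As a small sketch of that on-the-fly frame access (the file name is a placeholder for an Objectron capture):

```python
from pytorchvideo.data.encoded_video import EncodedVideo

video = EncodedVideo.from_path("chair.MOV")  # placeholder path
clip = video.get_clip(start_sec=0.0, end_sec=1.0)
frames = clip["video"]  # (C, T, H, W) tensor of decoded frames
```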

### Set up

@@ -116,7 +116,7 @@ python test_nerf.py --config-name objectron test.mode='export_video' data.image_

For a higher resolution video you can increase the image size to e.g. [192, 256] (note that this will slow down inference).

-You will need to specify the `scene_center` for the video in the `objectron.yaml` file. This is set for the demo video specified in `download_objectron_data.py`. For a different video you can calculate the scene center inside [`eval_video_utils.py`](https://github.com/facebookresearch/pytorch3d/blob/master/projects/nerf/nerf/eval_video_utils.py#L99). After line 99 you can add the following code to compute the center:
+You will need to specify the `scene_center` for the video in the `objectron.yaml` file. This is set for the demo video specified in `download_objectron_data.py`. For a different video you can calculate the scene center inside [`eval_video_utils.py`](https://github.com/facebookresearch/pytorch3d/blob/main/projects/nerf/nerf/eval_video_utils.py#L99). After line 99 you can add the following code to compute the center:

```python
# traj is the circular camera trajectory on the camera mean plane.
```
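The folded diff hides the rest of that snippet. As a hypothetical sketch of the idea only, assuming `traj` is an `(N, 3)` tensor of camera positions on that plane:

```python
# Hypothetical sketch, not the code hidden by the fold:
# average the trajectory to get a point to use as scene_center.
scene_center = traj.mean(dim=0).tolist()
print(scene_center)  # copy these values into objectron.yaml
```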
4 changes: 2 additions & 2 deletions pytorchvideo/models/hub/README.md
@@ -5,7 +5,7 @@ PyTorchVideo provides a large set of [TorchHub](https://pytorch.org/hub/) models

### Kinetics-400

-Models are trained on Kinetics-400. For more benchmarking and model details, please check the [PyTorchVideo Model Zoo](https://github.com/facebookresearch/pytorchvideo/blob/master/docs/source/model_zoo.md)
+Models are trained on Kinetics-400. For more benchmarking and model details, please check the [PyTorchVideo Model Zoo](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.md)

torchhub name | arch | depth | frame length x sample rate | top 1 | top 5 |
------------------------ | -------- | ----- | -------------------------- | ----- | ----- |
@@ -45,4 +45,4 @@ model = torch.hub.load("facebookresearch/pytorchvideo", model=model_name, pretrained=True)

Notes:
* Please check [torchhub inference tutorial](https://pytorchvideo.org/docs/tutorial_torchhub_inference) for more details about how to load models from TorchHub and perform inference.
-* Check [Model Zoo](https://github.com/facebookresearch/pytorchvideo/blob/master/docs/source/model_zoo.md) for the full set of supported PytorchVideo model zoo and more details about how the model zoo is prepared.
+* Check [Model Zoo](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.md) for the full set of supported PytorchVideo model zoo and more details about how the model zoo is prepared.
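A hedged sketch of discovering and loading these hub entries, matching the `torch.hub.load` call in the hunk context above (`slowfast_r50` is an illustrative choice; network access is assumed):

```python
import torch

# Discover the hub entries this README documents.
print(torch.hub.list("facebookresearch/pytorchvideo"))

model_name = "slowfast_r50"  # any row from the table above
model = torch.hub.load(
    "facebookresearch/pytorchvideo", model=model_name, pretrained=True
).eval()
```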
2 changes: 1 addition & 1 deletion tests/README.md
@@ -2,7 +2,7 @@


Before running the tests, please ensure that you installed the necessary additional test dependencies.
-If not installed, check the [install-README](https://github.com/facebookresearch/pytorchvideo/blob/master/INSTALL.md) on how to do it.
+If not installed, check the [install-README](https://github.com/facebookresearch/pytorchvideo/blob/main/INSTALL.md) on how to do it.

Use the following command to run the tests:
```
# (test command elided in the folded diff)
```
2 changes: 1 addition & 1 deletion tutorials/torchhub_inference_tutorial.ipynb
@@ -115,7 +115,7 @@
"source": [
"### Load Model using Torch Hub API\n",
"\n",
"PyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation](https://github.com/facebookresearch/pytorchvideo/blob/master/docs/source/model_zoo.md#kinetics-400). \n",
"PyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.md#kinetics-400). \n",
"\n",
"Here we are selecting the `slowfast_r50` model which was trained using a 8x8 setting on the Kinetics 400 dataset. \n",
"\n",
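The notebook cell that follows is folded. As an illustrative sketch of loading and calling `slowfast_r50` as described above (shapes assume the 8x8 setting with a 256-pixel crop; treat them as assumptions):

```python
import torch

model = torch.hub.load(
    "facebookresearch/pytorchvideo", model="slowfast_r50", pretrained=True
).eval()

# SlowFast takes a two-pathway input: a list of [slow, fast] clips.
slow = torch.randn(1, 3, 8, 256, 256)   # 8 frames on the slow pathway
fast = torch.randn(1, 3, 32, 256, 256)  # 32 frames on the fast pathway
with torch.no_grad():
    preds = model([slow, fast])
print(preds.shape)  # (1, 400) Kinetics-400 class scores
```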
@@ -101,7 +101,7 @@
"metadata": {},
"source": [
"## Load Model using Torch Hub API\n",
"PyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation.](https://github.com/facebookresearch/pytorchvideo/blob/master/docs/source/model_zoo.md)\n",
"PyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation.](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.md)\n",
"\n",
"Here we are selecting the slow_r50_detection model which was trained using a 4x16 setting on the Kinetics 400 dataset and \n",
"fine tuned on AVA V2.2 actions dataset.\n",
@@ -132,7 +132,7 @@
"These bounding boxes later feed into our video action detection model.\n",
"For more details, please refer to the Detectron2's object detection tutorials.\n",
"\n",
"To install Detectron2, please follow the instructions mentioned [here](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md)"
"To install Detectron2, please follow the instructions mentioned [here](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md)"
]
},
{
2 changes: 1 addition & 1 deletion tutorials/video_detection_example/visualization.py
@@ -53,7 +53,7 @@ def __init__(
self, img_rgb: torch.Tensor, meta: Optional[SimpleNamespace] = None, **kwargs
) -> None:
"""
-See https://github.com/facebookresearch/detectron2/blob/master/detectron2/utils/visualizer.py
+See https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/visualizer.py
for more details.
Args:
img_rgb: a tensor or numpy array of shape (H, W, C), where H and W correspond to
6 changes: 3 additions & 3 deletions website/docs/tutorial_torchhub_detection_inference.md
@@ -9,7 +9,7 @@ PyTorchVideo provides several pretrained models through [Torch Hub](https://pyto

NOTE: Currently, this tutorial only works if run on a local clone from the directory `pytorchvideo/tutorials/video_detection_example`

-This tutorial assumes that you have installed [Detectron2](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md) and [Opencv-python](https://pypi.org/project/opencv-python/) on your machine.
+This tutorial assumes that you have installed [Detectron2](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) and [Opencv-python](https://pypi.org/project/opencv-python/) on your machine.

# Imports
```python
# @@ -38,7 +38,7 @@ (earlier imports elided in the folded diff)
from visualization import VideoVisualizer
```

# Load Model using Torch Hub API
-PyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation.](https://github.com/facebookresearch/pytorchvideo/blob/master/docs/source/model_zoo.md)
+PyTorchVideo provides several pretrained models through Torch Hub. Available models are described in [model zoo documentation.](https://github.com/facebookresearch/pytorchvideo/blob/main/docs/source/model_zoo.md)

Here we are selecting the slow_r50_detection model which was trained using a 4x16 setting on the Kinetics 400 dataset and fine tuned on AVA V2.2 actions dataset.
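A hedged sketch of that load (the hub entry name comes from the text above; device placement is illustrative):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
video_model = torch.hub.load(
    "facebookresearch/pytorchvideo", model="slow_r50_detection", pretrained=True
)
video_model = video_model.eval().to(device)
```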

Expand All @@ -56,7 +56,7 @@ We use the object detector to detect bounding boxes for the people.
These bounding boxes later feed into our video action detection model.
For more details, please refer to Detectron2's object detection tutorials.

-To install Detectron2, please follow the instructions mentioned [here](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md)
+To install Detectron2, please follow the instructions mentioned [here](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md)

```python
cfg = get_cfg()
```
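The rest of that block is folded. A hypothetical continuation following Detectron2's standard config pattern (the particular detector config and threshold are assumptions, not the tutorial's exact values):

```python
# Hypothetical continuation of the folded block.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.55  # keep confident person boxes
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
)
predictor = DefaultPredictor(cfg)
```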
2 changes: 1 addition & 1 deletion website/website/pages/en/index.js
@@ -153,7 +153,7 @@ pip install pytorchvideo
<Container>
<ol>
<li>
-<strong>Install pytorchvideo </strong> (Confirm requirements following the instructions <a href="https://github.com/facebookresearch/pytorchvideo/blob/master/INSTALL.md">here</a>)
+<strong>Install pytorchvideo </strong> (Confirm requirements following the instructions <a href="https://github.com/facebookresearch/pytorchvideo/blob/main/INSTALL.md">here</a>)
<MarkdownBlock>{install}</MarkdownBlock>
</li>
<li>
