Issue 239 and 189 solution #252

Open · wants to merge 5 commits into base: main

Changes from 1 commit

docs/install.md (20 changes: 16 additions & 4 deletions)

@@ -2,10 +2,11 @@

<!-- TOC -->

- [Requirements](#requirements)
- [Prepare environment](#prepare-environment)
- [Install MMHuman3D](#install-mmhuman3d)
- [A from-scratch setup script](#a-from-scratch-setup-script)
- [Installation](#installation)
- [Requirements](#requirements)
- [Prepare environment](#prepare-environment)
- [Install MMHuman3D](#install-mmhuman3d)
- [A from-scratch setup script](#a-from-scratch-setup-script)

<!-- TOC -->

@@ -54,6 +55,12 @@ conda install pytorch=1.8.0 torchvision cudatoolkit=10.2 -c pytorch

**Important:** Make sure that your compilation CUDA version and runtime CUDA version match.
In addition, RTX 30 series GPUs require cudatoolkit>=11.0.
To verify that you installed a compatible PyTorch build, check that you get `True` when running the following commands:
```python
import torch
torch.cuda.is_available()
```
If you get `False`, install a different PyTorch version that matches your CUDA setup.
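
For a slightly fuller check, the following sketch (using only standard `torch` attributes) also prints the CUDA version that PyTorch was compiled against, which you can compare with your installed CUDA toolkit:
```python
import torch

# CUDA version this PyTorch build was compiled against (None for CPU-only
# builds); compare it with your local toolkit, e.g. the output of `nvcc --version`.
print(torch.__version__)
print(torch.version.cuda)

# True means PyTorch can see a usable GPU with the current driver/runtime.
print(torch.cuda.is_available())
```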

d. Install PyTorch3D from source.

@@ -150,6 +157,11 @@ cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
```
To check that mmdet is compatible with your PyTorch installation, verify that the following import runs without errors:
```python
from mmdet.apis import inference_detector, init_detector
```
If you meet errors, check that your PyTorch and mmcv versions are compatible with mmdet, and install matching versions if they are not.
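
If the import fails, one quick way to gather the relevant information (a minimal sketch using only the version attributes these packages expose) is to print the installed versions and compare them against the mmcv/mmdet compatibility table:
```python
import torch
import mmcv
import mmdet

# Cross-check these against the mmcv/mmdet compatibility table before
# reinstalling anything.
print("torch:", torch.__version__)
print("mmcv: ", mmcv.__version__)
print("mmdet:", mmdet.__version__)
```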

- mmpose (optional)
docs/preprocess_dataset.md (56 changes: 35 additions & 21 deletions)

@@ -4,26 +4,35 @@
<!-- * [Overview](#overview)
* [Generate dataset files](#generate-dataset-files)
* [Obtain preprocessed datasets](#obtain-preprocessed-datasets) -->
- [Datasets for supported algorithms](#datasets-for-supported-algorithms)
- [Folder structure](#folder-structure)
* [AGORA](#agora)
* [COCO](#coco)
* [COCO-WholeBody](#coco-wholebody)
* [CrowdPose](#crowdpose)
* [EFT](#eft)
* [GTA-Human](#gta-human)
* [Human3.6M](#human36m)
* [Human3.6M Mosh](#human36m-mosh)
* [HybrIK](#hybrik)
* [LSP](#lsp)
* [LSPET](#lspet)
* [MPI-INF-3DHP](#mpi-inf-3dhp)
* [MPII](#mpii)
* [PoseTrack18](#posetrack18)
* [Penn Action](#penn-action)
* [PW3D](#pw3d)
* [SPIN](#spin)
* [SURREAL](#surreal)
- [Data preparation](#data-preparation)
- [Overview](#overview)
- [Datasets for supported algorithms](#datasets-for-supported-algorithms)
- [Folder structure](#folder-structure)
- [AGORA](#agora)
- [AMASS](#amass)
- [COCO](#coco)
- [COCO-WholeBody](#coco-wholebody)
- [CrowdPose](#crowdpose)
- [EFT](#eft)
- [GTA-Human](#gta-human)
- [Human3.6M](#human36m)
- [Human3.6M Mosh](#human36m-mosh)
- [HybrIK](#hybrik)
- [LSP](#lsp)
- [LSPET](#lspet)
- [MPI-INF-3DHP](#mpi-inf-3dhp)
- [MPII](#mpii)
- [PoseTrack18](#posetrack18)
- [Penn Action](#penn-action)
- [PW3D](#pw3d)
- [SPIN](#spin)
- [SURREAL](#surreal)
- [VIBE](#vibe)
- [FreiHand](#freihand)
- [EHF](#ehf)
- [FFHQ](#ffhq)
- [ExPose](#expose)
- [Stirling](#stirling)


## Overview
@@ -131,7 +140,8 @@ DATASET_CONFIGS = dict(

## Datasets for supported algorithms

For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`. As such, use this command with the listed `dataset-names`:
For all algorithms, the root path for our datasets and output path for our preprocessed npz files are stored in `data/datasets` and `data/preprocessed_datasets`.
As such, use this command with the listed `dataset-names`:

```bash
python tools/convert_datasets.py \
@@ -188,6 +198,10 @@ mmhuman3d
├── mpii_train.npz
└── pw3d_test.npz
```
Note that, to avoid regenerating npz files at every iteration during training, create a cache directory linked with the preprocessed files. To do so, run the following command:
```shell
ln -s data/cache data/preprocessed_datasets
```
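Before starting training, you may want to confirm where the link actually points; remember that `ln -s TARGET LINK_NAME` creates `LINK_NAME` pointing at `TARGET`, so make sure the argument order matches the layout you intend. A minimal sketch:
```python
import os

# Show what each path resolves to so the cache/preprocessed layout is
# what you expect before launching training.
for path in ("data/cache", "data/preprocessed_datasets"):
    print(path, "->", os.path.realpath(path), "| symlink:", os.path.islink(path))
```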

For SPIN training, the following datasets are required:
- [COCO](#coco)