# Concealed Object Detection (TPAMI-2021)

> **Authors:**
> [Deng-Ping Fan](https://dpfan.net/),
> [Ge-Peng Ji](https://scholar.google.com/citations?user=oaxKYKUAAAAJ&hl=en),
> [Ming-Ming Cheng](https://mmcheng.net/),
> [Ling Shao](http://www.inceptioniai.org/).

## 1. Preface

- **Introduction.** This repository contains the source code, prediction results, and evaluation toolbox of our method, which is the
  journal extension of our SINet paper (GitHub) published at CVPR-2020.

- **Highlights.** Compared to our conference version, we achieve a new SOTA in the field of concealed object detection (COD) via two
  carefully designed sub-modules: the neighbor connection decoder (NCD) and group-reversal attention (GRA).
  Please refer to our paper for more details.

> If you have any questions about our paper, feel free to contact me via e-mail ([email protected]).
> And if you are using our code and evaluation toolbox for your research, please cite this paper ([BibTeX](#5-citation)).

## 2. :fire: NEWS :fire:

- [2021/01/16] Created repository.

## 3. Overview

<p align="center">
    <img src="imgs/TaskRelationship.png"/> <br />
    <em>
    Figure 1: Task relationship. One of the most popular directions in computer vision is generic object detection.
    Note that generic objects can be either salient or camouflaged; camouflaged objects can be seen as difficult cases of
    generic objects. Typical generic object detection tasks include semantic segmentation and panoptic
    segmentation (see Fig. 2 b).
    </em>
</p>

<p align="center">
    <img src="imgs/CamouflagedTask.png"/> <br />
    <em>
    Figure 2: Given an input image (a), we present the ground truth for (b) panoptic segmentation
    (which detects generic objects including stuff and things), (c) salient instance/object detection
    (which detects objects that grasp human attention), and (d) the proposed camouflaged object detection task,
    where the goal is to detect objects whose pattern (e.g., edge, texture, or color) is similar to their natural habitat.
    In this case, the boundaries of the two butterflies are blended with the bananas, making them difficult to identify.
    This task is far more challenging than traditional salient object detection or generic object detection.
    </em>
</p>

> References of Salient Object Detection (SOD) benchmark works<br>
> [1] Video SOD: Shifting More Attention to Video Salient Object Detection. CVPR, 2019. ([Project Page](http://dpfan.net/davsod/))<br>
> [2] RGB SOD: Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground. ECCV, 2018. ([Project Page](https://dpfan.net/socbenchmark/))<br>
> [3] RGB-D SOD: Rethinking RGB-D Salient Object Detection: Models, Datasets, and Large-Scale Benchmarks. TNNLS, 2020. ([Project Page](http://dpfan.net/d3netbenchmark/))<br>
> [4] Co-SOD: Taking a Deeper Look at the Co-salient Object Detection. CVPR, 2020. ([Project Page](http://dpfan.net/CoSOD3K/))

## 4. Proposed Framework

### 4.1. Training/Testing

The training and testing experiments are conducted using [PyTorch](https://github.com/pytorch/pytorch) on
a single GeForce RTX TITAN GPU with 24 GB of memory.

> Note that our model can also be trained on GPUs with less memory; simply lower the batch size.

1. Prerequisites:

    Note that SINet is only tested on Ubuntu with the following environment. It may work on other
    operating systems as well, but we do not guarantee that it will.

    + Creating a virtual environment in terminal: `conda create -n SINet python=3.6`.

    + Installing necessary packages: [PyTorch > 1.1](https://pytorch.org/) and [opencv-python](https://pypi.org/project/opencv-python/)
      (a consolidated command sketch is given after this list).

1. Prepare the data:

    + Download the testing dataset and move it into `./data/TestDataset/`;
      it can be found at this [download link (Google Drive)](https://drive.google.com/file/d/1o8OfBvYE6K-EpDyvzsmMPndnUMwb540R/view?usp=sharing).

    + Download the training dataset and move it into `./data/TrainDataset/`;
      it can be found at this [download link (Google Drive)](https://drive.google.com/file/d/1lODorfB33jbd-im-qrtUgWnZXxB94F55/view?usp=sharing).

    + Download the pretrained weights and move them to `snapshots/PraNet_Res2Net/PraNet-19.pth`;
      they can be found at this [download link (Google Drive)](https://drive.google.com/file/d/1pUE99SUQHTLxS9rabLGe_XTDwfS6wXEw/view?usp=sharing).

    + Download the Res2Net weights from this [download link (Google Drive)](https://drive.google.com/file/d/1_1N-cx1UpRQo7Ybsjno1PAg4KE1T9e5J/view?usp=sharing).

1. Training Configuration:

    + Assign your customized paths, e.g., `--train_save` and `--train_path` in `MyTrain_Val.py`
      (see the command sketch after this list).

    + Just enjoy it!

1. Testing Configuration:

    + After you have downloaded the pre-trained model and testing dataset, just run `MyTesting.py` to generate the final prediction maps;
      point `--pth_path` at your trained model checkpoint.

    + Just enjoy it!

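A condensed command sketch for the environment and data-preparation steps above. This is only a sketch, not part of the official instructions: the exact package versions and the contents of the extracted archives are assumptions, so adjust the commands to whatever you actually download.

```bash
# Environment (assumes conda is available; PyTorch > 1.1 and opencv-python per the prerequisites above).
conda create -n SINet python=3.6
conda activate SINet
pip install torch torchvision opencv-python

# Expected layout after unpacking the Google Drive archives (paths taken from the steps above);
# the Res2Net backbone weights are downloaded separately (see the last data-preparation step).
mkdir -p ./data/TestDataset ./data/TrainDataset snapshots/PraNet_Res2Net
#   ./data/TestDataset/                        <- testing images + ground truth
#   ./data/TrainDataset/                       <- training images + ground truth
#   snapshots/PraNet_Res2Net/PraNet-19.pth     <- pretrained weights
```
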
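Likewise, a minimal sketch of the training and testing runs, using only the options named above (`--train_path`, `--train_save`, `--pth_path`); the values are placeholders, and both scripts may expose further options, so check their argument parsers before running.

```bash
# Training: point --train_path at the prepared training set and choose a folder name for the snapshots.
python MyTrain_Val.py --train_path ./data/TrainDataset --train_save SINet_V2

# Testing: point --pth_path at your trained (or downloaded) checkpoint to generate the prediction maps.
python MyTesting.py --pth_path snapshots/PraNet_Res2Net/PraNet-19.pth
```
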
### 4.2 Evaluating your trained model

One-key evaluation is written in MATLAB code ([link](https://drive.google.com/file/d/1_h4_CjD5GKEf7B1MRuzye97H0MXf2GE9/view?usp=sharing));
please follow the instructions in `./eval/main.m` and just run it to generate the evaluation results in `./res/`.
The complete evaluation toolbox (including data, map, eval code, and res) is available at this [link](https://drive.google.com/file/d/1qga1UJlIQdHNlt_F9TdN4lmmOH4gN7l2/view?usp=sharing).

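If you prefer launching the evaluation from a terminal rather than the MATLAB desktop, something along these lines should work (assuming MATLAB is on your `PATH`; the toolbox itself is used as-is):

```bash
# Runs ./eval/main.m headlessly; the results are written to ./res/ as described above.
matlab -nodisplay -nosplash -r "run('./eval/main.m'); exit"
```
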
### 4.3 Pre-computed maps

They can be found at this [download link](https://drive.google.com/file/d/1tW0OOxPSuhfSbMijaMPwRDPElW1qQywz/view?usp=sharing).

## 5. Citation

Please cite our paper if you find the work useful:

    @article{fan2020pra,
      title={PraNet: Parallel Reverse Attention Network for Polyp Segmentation},
      author={Fan, Deng-Ping and Ji, Ge-Peng and Zhou, Tao and Chen, Geng and Fu, Huazhu and Shen, Jianbing and Shao, Ling},
      journal={MICCAI},
      year={2020}
    }

## 6. FAQ

1. If the images cannot be loaded on this page (mostly due to regional network restrictions):

    [Solution Link](https://blog.csdn.net/weixin_42128813/article/details/102915578)

---

**[⬆ back to top](#1-preface)**