The code structure is largely based on the official HRNet project (https://github.com/leoxiaobin/deep-high-resolution-net.pytorch), with substantial modifications.
Trained on the REFUGE 2018 train+val data, the best model achieves an average L2 distance of 7.5 pixels on the REFUGE 2018 test set.
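For reference, the average L2 distance metric is the mean Euclidean distance between predicted and ground-truth fovea coordinates. A minimal sketch (the function name and coordinate lists are illustrative, not part of this repository):

```python
import math

def average_l2_distance(preds, gts):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth fovea coordinates, given as lists of (x, y) pairs."""
    dists = [math.hypot(px - gx, py - gy)
             for (px, py), (gx, gy) in zip(preds, gts)]
    return sum(dists) / len(dists)

# Two images: errors of (3, 4) and (0, 0) give distances 5.0 and 0.0,
# so the average L2 distance is 2.5 pixels.
print(average_l2_distance([(103, 204), (50, 60)], [(100, 200), (50, 60)]))  # 2.5
```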
-
The code is developed under Python 2.7 and PyTorch 0.4.0. Other environment versions should work, but have not been fully tested. Using virtualenv for installation is highly recommended.
-
Install dependencies:
pip install -r requirements.txt
-
Initialize the output (trained model output) and log (TensorBoard log) directories:
mkdir output
mkdir log
-
Download pretrained models from Baidu Yun Drive: https://pan.baidu.com/s/1xucVbfCvkXSTu62b8NuN8Q (password: 6gpu), and put them into models/pretrained.
-
Download the REFUGE data and uncompress it into a single directory containing the training, validation, and test sets. The folder structure should look like:
${DATA_ROOT}
|-- REFUGE-Training400
|   `-- Training400
|       |-- Glaucoma
|       `-- Non-Glaucoma
|-- Annotation-Training400
|   `-- Annotation-Training400
|       |-- Disc_Cup_Fovea_Illustration
|       |-- Disc_Cup_Masks
|       `-- Fovea_location.xlsx
|-- REFUGE-Validation400
|   `-- REFUGE-Validation400
|-- REFUGE-Validation400-GT
|   |-- Disc_Cup_Masks
|   `-- Fovea_locations.xlsx
|-- REFUGE-Test400
|   `-- Test400
`-- REFUGE-Test-GT
    |-- Disc_Cup_Masks
    `-- Glaucoma_label_and_Fovea_location.xlsx
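As a sanity check, the layout above can be verified with a short script. This is an illustrative sketch, not part of the repository; replace the placeholder path with your actual ${DATA_ROOT}:

```python
import os

# Paths expected under ${DATA_ROOT}, per the folder structure above.
EXPECTED = [
    "REFUGE-Training400/Training400/Glaucoma",
    "REFUGE-Training400/Training400/Non-Glaucoma",
    "Annotation-Training400/Annotation-Training400/Disc_Cup_Masks",
    "Annotation-Training400/Annotation-Training400/Fovea_location.xlsx",
    "REFUGE-Validation400/REFUGE-Validation400",
    "REFUGE-Validation400-GT/Disc_Cup_Masks",
    "REFUGE-Validation400-GT/Fovea_locations.xlsx",
    "REFUGE-Test400/Test400",
    "REFUGE-Test-GT/Disc_Cup_Masks",
    "REFUGE-Test-GT/Glaucoma_label_and_Fovea_location.xlsx",
]

def check_data_root(data_root):
    """Return the list of expected entries missing under data_root."""
    return [p for p in EXPECTED
            if not os.path.exists(os.path.join(data_root, p))]

missing = check_data_root("/path/to/DATA_ROOT")  # placeholder path
if missing:
    print("Missing entries:\n" + "\n".join(missing))
```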
Testing the pretrained model:
First ensure that the data root in experiments/refuge.yaml is set correctly, then run:
python tools/test.py --cfg experiments/refuge.yaml TEST.MODEL_FILE ./models/pretrained/best_model.pth
-
Training:
python tools/train.py --cfg experiments/refuge.yaml
Note that the final performance may differ slightly from that of the pretrained model due to randomness in the algorithm.