
CardiacNet: Learning to Reconstruct Abnormalities for Cardiac Disease Assessment from Echocardiogram Videos

We are currently preparing the arXiv version.

The paper will be accessible in a few days.

The code and dataset are now ready (!! the dataset code will be updated soon !!). Please follow the instructions below to access and download the code and dataset.

🔨 PostScript

  😄 This project is the PyTorch implementation of [paper].

  😆 Our experimental platform is configured with four RTX 3090 GPUs (CUDA >= 11.0).

  😃 The CardiacNet datasets (PAH & ASD) are currently available at:

     https://github.com/XiaoweiXu/CardiacNet-dataset (our data request page is under development; for now, you can access the dataset via the link [Dataset Link], and please contact [email protected] before you download the dataset.)

💻 Installation

  1. You need to build the relevant environment first; please refer to: requirements.yaml

  2. Install the environment:

    conda env create -f requirements.yaml
    
  • We recommend using Anaconda to set up an independent virtual environment, with Python >= 3.8.3.
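
  • As a quick sanity check after installing (a minimal snippet, nothing CardiacNet-specific), you can confirm that PyTorch sees your GPUs:

    import torch

    print("torch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())  # should be True with CUDA >= 11.0
    print("GPU count:", torch.cuda.device_count())       # e.g. 4 on a four-RTX-3090 platform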

📘 Data Preparation

CardiacNet

  1. Please access the dataset through: XiaoweiXu's GitHub
  2. Follow the instructions and download.
  3. Finish the dataset download and unzip the datasets.
       Our dataset includes CardiacNet-PAH & CardiacNet-ASD.
       The layout of our dataset *CardiacNet* should be as follows (a minimal loading sketch follows this list):
       # CardiacNet-PAH:
          ## PAH
             ### 001_image.nii.gz
             ### 001_label.nii.gz
             ### ...
             ### 342_image.nii.gz
          ## Non-PAH
             ### 001_image.nii.gz
             ### 001_label.nii.gz
             ### ...
             ### 154_image.nii.gz
       # CardiacNet-ASD:
          ## ASD
             ### 001_image.nii.gz
             ### 001_label.nii.gz
             ### ...
             ### 100_image.nii.gz
          ## Non-ASD
             ### 001_image.nii.gz
             ### 001_label.nii.gz
             ### ...
             ### 131_image.nii.gz
  4. Point the code at your dataset path:
    Find the files
    ..\train.py & ..\evaluate.py
    and
    modify the dataset path
    in
    parser.add_argument('--dataset-path', type=str, default='your path', help='Path to data.') 
    or
    you can simply pass --dataset-path='your path' on the command line when you train or evaluate the model.
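
  For illustration, here is a minimal loading sketch for the layout above. It is not the repository's actual data code; the class-folder path is a placeholder, and it assumes the nibabel package is installed:

    # Minimal sketch (not the repository's loader): pair *_image.nii.gz with
    # *_label.nii.gz files under one class folder of the layout above.
    import glob
    import os

    import nibabel as nib

    def load_pairs(class_dir):
        """Yield (image, label) arrays for one class folder, e.g. .../CardiacNet-PAH/PAH."""
        for image_path in sorted(glob.glob(os.path.join(class_dir, "*_image.nii.gz"))):
            label_path = image_path.replace("_image.nii.gz", "_label.nii.gz")
            image = nib.load(image_path).get_fdata()
            label = nib.load(label_path).get_fdata() if os.path.exists(label_path) else None
            yield image, label

    # Example: count the samples of the PAH class (path is a placeholder).
    pairs = list(load_pairs("./CardiacNet-PAH/PAH"))
    print(f"loaded {len(pairs)} image/label pairs")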

🐾 Training

  1. In this framework, once the parameters are configured in the file train.py, you only need to use the command:

    python train.py
  2. You can also start distributed training (see the sketch after this list).

    • Note: Please set the number of graphics cards you need and their ids in the parameter "enable_GPUs_id". For example, if you want to use the GPUs with ids 3, 4, 5, and 6 for training, just enter 3,4,5,6 in args.enable_GPUs_id.
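
  As a hedged sketch only (not the repository's actual launcher), a comma-separated enable_GPUs_id argument is commonly consumed like this with torch.multiprocessing; the flag name and default come from the note above, everything else is an assumption:

    # Hypothetical sketch: spawn one training process per requested GPU id.
    import argparse

    import torch
    import torch.multiprocessing as mp

    def worker(rank, gpu_ids):
        # Each spawned process pins itself to one of the requested GPU ids.
        torch.cuda.set_device(gpu_ids[rank])
        print(f"rank {rank} -> cuda:{gpu_ids[rank]}")
        # ... init the process group, build the model, train ...

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--enable_GPUs_id", type=str, default="3,4,5,6",
                            help="Comma-separated GPU ids, e.g. 3,4,5,6")
        args = parser.parse_args()
        gpu_ids = [int(i) for i in args.enable_GPUs_id.split(",")]
        mp.spawn(worker, args=(gpu_ids,), nprocs=len(gpu_ids))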

🐾 Evaluation

  1. For the evaluation, you can directly use the following command:

    python evaluate.py
    
    Note:
    before you evaluate the model, remember to modify the saved-model path in the file evaluate.py,
    which is args.checkpoint_path.
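
  As a hedged sketch of what that path is used for (the "state_dict" key and the placeholder model are assumptions, not the repository's guaranteed checkpoint format):

    # Hypothetical sketch of restoring a model from args.checkpoint_path.
    import torch
    import torch.nn as nn

    checkpoint_path = "./checkpoints/model_best.pth"  # whatever args.checkpoint_path points to
    model = nn.Linear(8, 2)                           # placeholder; use the actual CardiacNet model

    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    print(list(checkpoint.keys()))                    # inspect what was actually saved
    model.load_state_dict(checkpoint["state_dict"])   # key name is an assumption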
    
🚀 Code Reference
🚀 Updates: Ver 1.0 (PyTorch)
🚀 Project created by Jiewen Yang: [email protected]
