# Medical Image Generation Using Diffusion Model


Image synthesis on medical images can help generate additional data for biomedical problems, where data collection is often hindered by legal and technical constraints. A diffusion model offers one way to address this. It works by progressively adding noise, typically Gaussian, to an image until it is entirely indistinguishable from randomly generated pixels, and then gradually restoring the noisy image to its original appearance. The forward process (noise addition) is governed by a noise scheduler, and the backward process (image restoration) is carried out by a U-Net model. In this project, the diffusion model is trained on the BloodMNIST dataset from the MedMNIST collection.
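
As a rough illustration, the sketch below shows how the forward noising step and a single noise-prediction training step might look in PyTorch. It is not the repository's training code: the number of steps, the linear β schedule, and the tiny stand-in network (in place of the actual U-Net) are all assumptions made for the example.

```python
# Minimal sketch of DDPM-style forward noising and one training step.
# The tiny conv net is a stand-in for the project's U-Net; all
# hyperparameters here are illustrative, not the repository's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)   # cumulative product of alphas


def q_sample(x0, t, noise):
    """Forward process: add Gaussian noise to x0 at timestep t."""
    a_bar = alpha_bar[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise


class TinyDenoiser(nn.Module):
    """Stand-in for the U-Net: predicts the noise added to the image."""

    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_t, t):
        # A real U-Net would also embed the timestep t; omitted for brevity.
        return self.net(x_t)


model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch shaped like BloodMNIST (3x28x28 RGB).
x0 = torch.rand(8, 3, 28, 28) * 2 - 1          # images scaled to [-1, 1]
t = torch.randint(0, T, (x0.size(0),))         # random timestep per image
noise = torch.randn_like(x0)
x_t = q_sample(x0, t, noise)                   # noisy images

optimizer.zero_grad()
loss = F.mse_loss(model(x_t, t), noise)        # learn to predict the added noise
loss.backward()
optimizer.step()
```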

## Experiment

To see the code under the hood, visit this link.

## Result

### Quantitative Result

The Fréchet Inception Distance (FID) is used to quantitatively measure the performance of the diffusion model; the score is presented in the table below.

| Evaluation Metric | Score |
| ----------------- | ----- |
| FID               | 4.071 |
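
For reference, FID can be computed with the `torchmetrics` package as sketched below. This is an illustrative snippet, not the notebook's evaluation code: the tensors are dummies, a real evaluation would use far more images, and `FrechetInceptionDistance` additionally requires the `torch-fidelity` dependency.

```python
# Illustrative FID computation with torchmetrics (requires torch-fidelity).
# The image tensors here are placeholders for real and generated samples.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Real BloodMNIST images and generated samples as uint8 tensors of shape (N, 3, H, W).
# In practice many more samples are used for a stable estimate.
real_images = torch.randint(0, 256, (64, 3, 28, 28), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 28, 28), dtype=torch.uint8)

fid.update(real_images, real=True)    # accumulate Inception features of real data
fid.update(fake_images, real=False)   # accumulate features of generated samples
print(fid.compute())                  # lower is better; the table above reports 4.071
```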

#### Evaluation Metric Curve

*Loss of the model at the training stage.*

*FID on the training and validation sets.*

### Qualitative Result

Qualitatively, the generated images are shown in the following figure:

*Unconditional progressive generation on the BloodMNIST dataset (left) and a montage of the actual BloodMNIST dataset (right).*
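
The progressive generation shown in the figure corresponds to the reverse (denoising) loop sketched below. This is a generic DDPM-style sampler, not the project's exact implementation: the noise predictor is a dummy stand-in for the trained U-Net, and the schedule values are assumed.

```python
# Sketch of the reverse (denoising) loop that produces progressive generations.
# predict_noise is a placeholder for the trained U-Net; schedule values are assumed.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)


def predict_noise(x_t, t):
    # Placeholder for the trained U-Net's noise prediction.
    return torch.zeros_like(x_t)


x = torch.randn(16, 3, 28, 28)                  # start from pure Gaussian noise
snapshots = []                                  # intermediate frames for a montage
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    coef = betas[t] / (1.0 - alpha_bar[t]).sqrt()
    mean = (x - coef * eps) / alphas[t].sqrt()  # estimate of the denoised mean
    if t > 0:
        x = mean + betas[t].sqrt() * torch.randn_like(x)
    else:
        x = mean                                # final denoised sample
    if t % 100 == 0:
        snapshots.append(x.clamp(-1, 1))        # keep frames of the progression
```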

## Credit