Week 12 & Week 13 - August 21, 2023

lakshmi97
Published: 08/23/2023

Finalized experiments using both datasets: Week 12 & Week 13

============================================================

 

What I did this week

~~~~~~~~~~~~~~~~~~~~

MONAI's VQVAE results on the T1-weighted NFBS dataset (125 samples, batch_size=5) were qualitatively and quantitatively superior to all previous results. I continued the same experiments on the T1-weighted CC359 (Calgary-Campinas-359) public dataset, which consists of 359 anatomical MRI volumes of healthy individuals. I preprocessed the data using the existing `transform_img` function, which does the following (a rough sketch follows the list):

1. skull-strips the volume using its corresponding mask,

2. scales the volume to a (128, 128, 128, 1) shape using dipy's `resize` & scipy's `affine_transform`, and

3. applies MinMax normalization to limit the intensity range to (0, 1).
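
For illustration, here's a minimal sketch of what such a preprocessing function might look like. This is only an approximation of the actual `transform_img`: it assumes a binary brain mask, uses scipy's `affine_transform` alone for the rescaling step, and adds a small epsilon to guard against division by zero.

```python
import numpy as np
from scipy.ndimage import affine_transform

def transform_img_sketch(image, mask, target_shape=(128, 128, 128)):
    """Approximate preprocessing: skull-strip, rescale, MinMax-normalize."""
    # 1. Skull-strip: zero out non-brain voxels with the binary mask
    stripped = image * (mask > 0)

    # 2. Rescale to the target shape via a pure scaling affine transform
    scale = np.array(stripped.shape) / np.array(target_shape)
    resized = affine_transform(stripped, np.diag(scale),
                               output_shape=target_shape)

    # 3. MinMax normalization into (0, 1)
    resized = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)

    # Add a trailing channel axis -> (128, 128, 128, 1)
    return resized[..., np.newaxis]
```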

Using the existing training parameters, I carried out two experiments: one on CC359 alone & another on both datasets combined. Additionally, I made a slight modification to the loss definition by weighting background & foreground pixels at 0.5 & 1 respectively, instead of the equal weights used in previous experiments (a sketch of this weighting follows the plot below). This resulted in faster convergence, as shown by the red, blue & purple lines in the combined plot below.

[Figure: Combined training plots for all experiments]
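
The weighting described above could be implemented along these lines. This is a minimal PyTorch sketch, assuming an L1 reconstruction loss and a foreground defined by nonzero target intensities after skull-stripping; the function name & defaults are mine, not the project's actual code:

```python
import torch

def weighted_recon_loss(recon, target, bg_weight=0.5, fg_weight=1.0):
    """Per-voxel weighted L1 loss: background voxels contribute
    half as much as foreground (brain) voxels."""
    foreground = (target > 0).float()
    weights = bg_weight + (fg_weight - bg_weight) * foreground
    return (weights * (recon - target).abs()).sum() / weights.sum()
```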

 

Inference results for the best performing model, B12-both, are as follows:

[Figure: VQVAE-Monai-B12-both reconstructions & originals, showing 5 equally spaced slices for 2 samples]

 

This shows that our training not only converged quickly but also produced visually better reconstructions. Here's a comparison of our current best performing model, VQVAE-Monai-B12-both, with the previous one, VQVAE-Monai-B5-NFBS; their test reconstruction losses are 0.0013 & 0.0015 respectively.

[Figure: VQVAE reconstruction comparison for B12-both & B5-NFBS]

 

I also carried out Diffusion Model training with the best performing B12-both model for 300 & 500 diffusion steps; the training curves obtained are as follows:

[Figure: Diffusion Model training plots for 300 & 500 diffusion steps]
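
For context, the objective behind these curves is the standard DDPM noise-prediction loss. Below is a minimal PyTorch sketch of one training step; it is an illustration under assumptions, not the actual training loop: `model` (a noise-prediction network over the VQVAE latents) and the linear beta schedule are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

T = 300                                  # number of diffusion steps (300 or 500 here)
betas = torch.linspace(1e-4, 0.02, T)    # assumed linear beta schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_training_step(model, latents):
    """One DDPM training step: corrupt x_0 to x_t with Gaussian
    noise at a random timestep, then predict that noise."""
    t = torch.randint(0, T, (latents.shape[0],))
    noise = torch.randn_like(latents)
    a_bar = alphas_bar[t].view(-1, *([1] * (latents.dim() - 1)))
    x_t = a_bar.sqrt() * latents + (1.0 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)
```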

 

These curves seemed to converge fairly quickly, but the sampling outputs of the generation pipeline are still pure noise.

 

What is coming up next week

~~~~~~~~~~~~~~~~~~~~~~~~~~~

Wrapping up documentation & final report

 

Did I get stuck anywhere

~~~~~~~~~~~~~~~~~~~~~~~~

Yes, I spent time debugging the generation pipeline of the Diffusion Model. I cross-checked the implementations of the posterior mean & variance in the code base against the respective formulas from the paper, as well as against MONAI's DDPM implementation (the quantities I verified are sketched below). I didn't come across any error, yet the generated samples remain erroneous.
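
For reference, the posterior quantities in question are those of q(x_{t-1} | x_t, x_0) from the DDPM paper (Ho et al., 2020, Eqs. 6-7). Here's a minimal NumPy sketch of those formulas, assuming a linear beta schedule; the variable names are mine, not the code base's:

```python
import numpy as np

T = 300
betas = np.linspace(1e-4, 0.02, T)            # assumed beta schedule
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)
alphas_bar_prev = np.append(1.0, alphas_bar[:-1])

def posterior_mean_variance(x_0, x_t, t):
    """q(x_{t-1} | x_t, x_0) is Gaussian with:
    mean = coef1 * x_0 + coef2 * x_t
    var  = (1 - a_bar_{t-1}) / (1 - a_bar_t) * beta_t
    """
    coef1 = np.sqrt(alphas_bar_prev[t]) * betas[t] / (1.0 - alphas_bar[t])
    coef2 = np.sqrt(alphas[t]) * (1.0 - alphas_bar_prev[t]) / (1.0 - alphas_bar[t])
    mean = coef1 * x_0 + coef2 * x_t
    var = (1.0 - alphas_bar_prev[t]) / (1.0 - alphas_bar[t]) * betas[t]
    return mean, var
```

During sampling, x_{t-1} is drawn as mean + sqrt(var) * z with z ~ N(0, I), so a mistake in either term compounds over every reverse step, which would plausibly explain the pure-noise samples.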






 
