Testing and benchmarking the Image Registration with DIPY

Hey,

I am very happy to share that the first version of the image registration workflow is now complete, with associated test cases and benchmark data.

It paves the way for user-friendly image alignment (for both standalone MRI images and collections of them), making efficient image registration accessible to non-domain users.

Link to the PR: Image Registration Workflow 

Visualising the results of the Image Registration with DIPY

With the workflow in place, the next task was to register a set of images (other than the ones used during development) to test and benchmark its performance on real and simulated datasets.

The dataset: For testing the workflow on real images, I used experimental B0 and T1 images. Different registration modes are used to show the effect of each mode on the end result.

A pair of images: B0, static (left) and T1, moving (right). The brain extraction was done using FSL’s bet tool with a custom value for the ‘fractional intensity threshold’ to enhance the quality of the extracted data.

The extracted B0 image.
The extracted T1 image.
Overlaying the Images: It can be seen that the images are not aligned; the T1 appears tilted towards the right. However, the difference is much more pronounced if we overlay the images on top of each other. Below is the overlay of the two images in their native configuration (before registration), followed by a short sketch of how such overlays can be produced.

Sagittal view of the overlaid images: it is evident that the ‘moving’ image does not fit perfectly on top of the static image; the red portion of the ‘moving’ image can be seen spilling outside the static image.
Coronal view of the overlaid images: the same mismatch is visible, with the red portion of the ‘moving’ image extending beyond the static image.
Axial view of the overlaid images: again, the red portion of the ‘moving’ image falls outside the static image.
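For readers who want to reproduce such overlays, here is a minimal sketch using DIPY’s regtools; the file names are placeholders for the extracted B0 and T1 volumes.

import numpy as np
import nibabel as nib
from dipy.align.imaffine import AffineMap
from dipy.viz import regtools

# Load the extracted B0 (static) and T1 (moving) volumes; file names are placeholders.
static_img = nib.load('b0_bet.nii.gz')
moving_img = nib.load('t1_bet.nii.gz')
static, static_affine = static_img.get_fdata(), static_img.affine
moving, moving_affine = moving_img.get_fdata(), moving_img.affine

# Resample the moving image onto the static grid with an identity transform
# (no registration yet) so that both volumes share the same shape.
identity = AffineMap(np.eye(4),
                     static.shape, static_affine,
                     moving.shape, moving_affine)
resampled = identity.transform(moving)

# Overlay middle slices: slice_type 0 = sagittal, 1 = coronal, 2 = axial.
regtools.overlay_slices(static, resampled, None, 0, 'Static', 'Moving')
regtools.overlay_slices(static, resampled, None, 1, 'Static', 'Moving')
regtools.overlay_slices(static, resampled, None, 2, 'Static', 'Moving')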

Registration-Mode ‘Center of mass-based’ registration: To start with, let’s use the center-of-mass based registration option of the workflow to align the images. The results are shown below, followed by a sketch of the underlying DIPY call.

Sagittal view of the overlaid images: this result is better than the plain overlay above, where the images were simply stacked on top of each other. However, the alignment is still not perfect, as there are red portions that do not overlap well.
Coronal view of the overlaid images. There is a mismatch in the lower sagittal region.
Axial view of the overlaid images: though better, portions of the moving image (in red) still do not fit perfectly on top of the static image.
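For context, this mode maps onto DIPY’s transform_centers_of_mass function; a minimal sketch, reusing the volumes loaded in the previous snippet:

from dipy.align.imaffine import transform_centers_of_mass

# Align the centers of mass of the two volumes (static, moving and their
# voxel-to-world affines come from the previous snippet).
c_of_mass = transform_centers_of_mass(static, static_affine,
                                      moving, moving_affine)

# Resample the moving image into the static grid for overlay.
moved_com = c_of_mass.transform(moving)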

Registration-Mode ‘Translation based’ registration: Now let’s employ the second option available in the workflow, i.e. translation-based image registration. The images below show how the translation-based registration improves on the simple ‘center of mass’ based registration; a sketch of the corresponding DIPY call follows the captions.

Sagittal view of the overlaid images, the translation based registration produced better results compared to the center of mass based registration.
Coronal view of the overlaid images, the translation based registration produced better results compared to the center of mass based registration.
Axial view of the overlaid images: there are still non-overlapping portions of ‘RED’, but proportionally fewer than in the center-of-mass based results.
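Under the hood this corresponds to optimizing a translation transform with a mutual-information metric; a minimal sketch, continuing from the previous snippets (the multi-resolution settings shown here are DIPY’s usual defaults, not necessarily the workflow’s exact values):

from dipy.align.imaffine import MutualInformationMetric, AffineRegistration
from dipy.align.transforms import TranslationTransform3D

# Mutual-information metric and a coarse-to-fine optimization schedule.
metric = MutualInformationMetric(nbins=32, sampling_proportion=None)
affreg = AffineRegistration(metric=metric,
                            level_iters=[10000, 1000, 100],
                            sigmas=[3.0, 1.0, 0.0],
                            factors=[4, 2, 1])

# Optimize a pure translation, starting from the center-of-mass alignment.
translation = affreg.optimize(static, moving,
                              TranslationTransform3D(), None,
                              static_affine, moving_affine,
                              starting_affine=c_of_mass.affine)
moved_trans = translation.transform(moving)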

Registration-Mode ‘Rigid body based’ registration: The results can be further improved by registering the images using rigid body based registration; the results below emphasize the improvement obtained. The rigid body registration was done progressively, i.e. it started from the result obtained by the translation based registration. A sketch of this step follows the captions.

Sagittal view of the overlaid images, produced by rigid body based registration.
Coronal view of the overlaid images produced by rigid body based registration.
Axial view of the overlaid images produced by rigid body based image registration.
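A minimal sketch of the rigid step, continuing from the translation snippet; the ‘progressive’ behaviour corresponds to seeding the optimization with the previous result:

from dipy.align.transforms import RigidTransform3D

# Progressive registration: seed the rigid optimization with the
# translation result obtained above.
rigid = affreg.optimize(static, moving,
                        RigidTransform3D(), None,
                        static_affine, moving_affine,
                        starting_affine=translation.affine)
moved_rigid = rigid.transform(moving)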

Registration-Mode ‘Full affine’ registration: Affine registration also involves scaling and shearing of the data, producing the best alignment achievable for a given pair of images. The results shown below underscore the improvement obtained by the affine registration; a sketch of this final step follows the captions.

Sagittal view of the images registered by full affine registration.
Coronal view of the images aligned using the full affine registration.
Axial view of the registered images.
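And the final, full affine step, again continuing from the previous snippets and seeded with the rigid result:

from dipy.align.transforms import AffineTransform3D

# Full affine: translation + rotation + scaling + shear, seeded with the
# rigid result (progressive mode).
affine = affreg.optimize(static, moving,
                         AffineTransform3D(), None,
                         static_affine, moving_affine,
                         starting_affine=rigid.affine)
moved_affine = affine.transform(moving)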

Here are the links to the commits for the completed image registration workflow.

Commit Link for the completed Workflow: Registration Workflow

Commit Link for the completed unit test(s) for the Workflow: Registration Test

Benchmarking the workflow

The performance of the workflow (in terms of execution time) on both the simulated and real datasets is shown below.

The simulated dataset: The simulated static image was generated by stacking coronal brain slices on top of each other, and the corresponding moving image was created by applying a random transform. I set up the factors below and used DIPY’s setup_random_transform() function to create the test data.

import numpy as np

# Factors used with setup_random_transform(); the trailing array in each
# entry holds the transform parameters (3 for translation, 6 for rigid,
# 12 for affine).
factors = {('TRANSLATION', 3): (2.0, None, np.array([2.3, 4.5, 1.7])),
           ('RIGID', 3): (0.1, None, np.array([0.1, 0.15, -0.11,
                                               2.3, 4.5, 1.7])),
           ('AFFINE', 3): (0.1, None, np.array([0.99, -0.05, 0.03, 1.3,
                                                0.05, 0.99, -0.10, 2.5,
                                                -0.07, 0.10, 0.99, -1.4]))}

In principle, any number of such images can be created using random transformations, but for the sake of testing, three distinct sets of images were created (a sketch illustrating the idea follows the list below).

Set-A: A pair of images created using random translation.

Set-B: A pair of images created using rigid body transformation.

Set-C: A pair of images created using a full affine transformation (translation, rotation, shear and scaling).
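The setup_random_transform() helper lives in DIPY’s test utilities, so rather than guessing its exact signature, here is an illustrative sketch of the general idea: a simulated moving image can be obtained by resampling a static volume through a known random transform (all names and values below are made up for illustration only):

import numpy as np
from dipy.align.imaffine import AffineMap

rng = np.random.default_rng(0)

# Hypothetical static volume (in practice, a stack of coronal brain slices).
static = rng.random((64, 64, 64))
static_affine = np.eye(4)

# A known random rigid-like transform: a small rotation plus a translation.
angle = 0.1
c, s = np.cos(angle), np.sin(angle)
random_affine = np.array([[c, -s, 0.0, 2.3],
                          [s,  c, 0.0, 4.5],
                          [0.0, 0.0, 1.0, 1.7],
                          [0.0, 0.0, 0.0, 1.0]])

# Resample the static volume through the random transform to obtain the
# simulated moving image.
affine_map = AffineMap(random_affine,
                       static.shape, static_affine,
                       static.shape, static_affine)
moving = affine_map.transform(static)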

Tests run on the Simulated Data (the reported run times are the best of 5 executions, with no other intensive application running on the system):

Data Type                                     Run Time (seconds)   Transformation Applied   Progressive
Translated images                             16                   Translation              N/A
Images created by rigid body transformation   37                   Rigid                    No
Images created by rigid body transformation   40                   Rigid                    Yes
Images created by affine transformation       117                  Affine                   No
Images created by affine transformation       158                  Affine                   Yes

Tests run on the Real Data (the reported run times are the best of 5 executions, with no other intensive application running on the system):

Data Type        Run Time (seconds)   Transformation Applied   Progressive
T1 and T1_run2   40                   Translation              N/A
T1 and T1_run2   277                  Rigid                    Yes
T1 and T1_run2   593                  Affine                   Yes

Discussion: Affine registration is by far the most time-consuming type of alignment. The tests revealed that, for both simulated and real data, most of the time was spent in the affine registration. This is not a surprise, since affine registration involves full-scale alignment, whereas the other transformation types (translation, rigid, etc.) do not involve scaling and shear operations. For reference, the ‘best of 5’ timings above were collected with a loop along the lines of the sketch below.
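This is only a sketch of the timing harness; the workflow class comes from the PR linked above, and the run() arguments shown here are illustrative placeholders rather than the exact signature.

import time
from dipy.workflows.align import ImageRegistrationFlow

flow = ImageRegistrationFlow()
best = float('inf')
for _ in range(5):
    start = time.perf_counter()
    # Argument names/values are placeholders for the workflow's options.
    flow.run('static.nii.gz', 'moving.nii.gz',
             transform='affine', progressive=True)
    best = min(best, time.perf_counter() - start)
print('Best of 5 runs: %.1f seconds' % best)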

Applying the transform on a collection of images

Once the initial registration is done, the workflow now has the option of also saving the affine matrix that was used for the transformation.

The affine matrix encodes the optimal parameters found during registration (translation and rotation, plus scaling and shear in the full affine mode). This is especially useful when multiple moving images are generated during various runs of the system.

In order to produce accurate alignment, the affine matrix can be used to transform all of the moving images, rather than calling the image registration workflow every time (since image registration is a resource-intensive process). A sketch of this idea is shown below.
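As an illustration (file names are placeholders, and I am assuming the matrix was saved as a plain text file), reusing a saved affine to resample additional moving images looks roughly like this with DIPY’s AffineMap:

import numpy as np
import nibabel as nib
from dipy.align.imaffine import AffineMap

# Affine matrix saved by the registration workflow (placeholder file name).
saved_affine = np.loadtxt('affine.txt')

static_img = nib.load('static.nii.gz')
static, static_affine = static_img.get_fdata(), static_img.affine

# Apply the same transform to every additional moving image without
# re-running the expensive optimization.
for fname in ['moving_run2.nii.gz', 'moving_run3.nii.gz']:
    moving_img = nib.load(fname)
    affine_map = AffineMap(saved_affine,
                           static.shape, static_affine,
                           moving_img.shape, moving_img.affine)
    moved = affine_map.transform(moving_img.get_fdata())
    nib.save(nib.Nifti1Image(moved, static_affine),
             fname.replace('.nii.gz', '_moved.nii.gz'))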

The Apply Transform Workflow: I developed a separate workflow for applying the transform to multiple images. The workflow is currently in beta and being tested; below is a link to the code for the workflow:

Commit Link for the Apply Transform Workflow: Apply Transform

I will be posting another blog post soon with details about the Apply Transform workflow, how it works, and the associated benchmark data.

Adios for now!

Parichit
