Summary of my project work in GSoC, 2018

Note: Since this is the last blog post and covers the entire summary of my project work during GSoC 2018, I will be comprehensive and go into specific details (which were not covered in earlier posts). I will start with the basics and hope that this helps the reader understand the usefulness and application of my work.

Project report for the Google Summer of Code, 2018

Project: DIPY Workflow(s) and Quality Assurance

Abstract: DIPY is a widely used Python package for analyzing diffusion MRI images. It provides several modules for performing common tasks in biomedical image analysis, such as de-noising, image registration, and clustering [DIPY, 1]. These modules have been under active development as a community initiative and have been well tested over time.

In this project, I extended the core functionality of DIPY by developing scientific workflows for the image registration problem. The workflows integrate quantitative and qualitative metrics for assessing the results, provide intuitive visualization support for the registered image data, and add the capability to register a set of moving images with a single command by reusing the calculated transformation matrix or deformation field.

Methods, Challenges, and Procedure(s)

Methods

In biomedical image processing, image registration is the process of spatially aligning images so that data from multiple images can be combined to assist downstream analysis [Barbara Zitova, 2]. In a clinical setting, images are obtained from multiple imaging systems, from different viewing angles, can be of different modalities (imaging system and its spatial orientation), and may be taken at different times. These factors make optimal image registration algorithmically and mathematically challenging.

Image registration is often the very first step in medical imaging: it combines data from varying modalities such as PET, MRI, and CT scans to obtain comprehensive information about the patient's condition. The importance of registration grows with the expanding array of imaging systems, their sensitivity to the data, and the increasing data acquisition capability of these systems.

The downstream analysis can involve a variety of procedures such as fusion and clustering [Timothy Fox et al., 3] and can be used to measure intervention, quantify tumor growth, or compare the patient's condition with a dataset of other patients for exploratory analysis, such as finding other patients who have shown similar tumor growth and response to a specific treatment.

Primary Challenges

The sheer diversity of registration algorithms, types of registration (affine or deforming, each of which can be further subclassified), imaging systems, and their sensitivity in capturing the images makes it hard for a single registration algorithm to universally solve the problem of image alignment (alignment here refers to registration in the sense of spatial alignment, which can also involve deformation depending on the problem [J. B. Antoine Maintz et al., 4]). Therefore, it is crucial both to develop new image registration methods and to study their feasibility on different datasets through quantitative and qualitative assessment.

During the project, in addition to other significant work, I focused on developing workflows for both affine registration and diffeomorphic registration (which can essentially deform the shape of the captured brain image for optimal alignment and is mathematically difficult to assess).

An affine transformation is a form of linear transformation that preserves points and ratios of lengths after the transformation. This means that any point or set of points lying on a line will continue to lie on that line after the transformation. However, it may not preserve the angles between lines or the distances between points. An affine transformation can be further classified into subcategories (Figure-1):
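To make the "points stay on the line" property concrete, here is a small NumPy sketch (illustrative only, not DIPY code): three collinear points are pushed through an arbitrary affine matrix, and the area of the triangle they span stays zero.

```python
import numpy as np

# Three collinear points in homogeneous coordinates (2-D).
points = np.array([[0.0, 0.0, 1.0],
                   [1.0, 2.0, 1.0],
                   [2.0, 4.0, 1.0]])

# An arbitrary affine matrix combining rotation-like mixing, shear,
# scaling, and translation (last column).
A = np.array([[1.5, 0.3,  2.0],
              [0.4, 0.9, -1.0],
              [0.0, 0.0,  1.0]])

transformed = points @ A.T

def triangle_area(p):
    # Zero area <=> the three points are collinear.
    return 0.5 * abs(np.linalg.det(np.stack([p[1, :2] - p[0, :2],
                                             p[2, :2] - p[0, :2]])))

print(triangle_area(points))       # 0.0
print(triangle_area(transformed))  # 0.0: still collinear after the transform
```

Note that distances and angles do change; only collinearity and length ratios along a line survive.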

Figure-1: Subcategories of the affine registration.

The dataset for affine registration: The following images are both B0 volumes, taken from the stanford_hardi [stanford_hardi, 5] and syn [syn dataset, 6] datasets. To emphasize that they are clearly not aligned, the moving image (in green) is resampled on a grid based on the static image (in red).

Link to Pull Request, Developed Workflow(s), Unit Test cases and experimental branch used for creating the data for this blog

PR 1604

Pull request for Image registration Workflow/ Affine image registration (PR 1604)

Affine Registration Workflow

Testing the Affine Registration Workflow

Experimental test branch used to generate the test data for this blog

Raw data (No registration, only resampled, Figure-2: A-C)

Figure-2-A: The axial view of the moving and static images. As can be seen, the moving and static images are clearly out of alignment. After overlaying them on top of each other, the moving image (Green) is not superimposed on the static (Red).
Figure-2-B: The coronal view of the moving and static images. This view also confirms that the moving and static images are clearly out of alignment. After overlaying them on top of each other, the moving image (Green) is shifted away from the static (Red).
Figure-2-C: The sagittal view of the moving and static images. After overlaying them on top of each other, the moving image (Green) is shifted outside the spatial origin of the static (Red).

Procedure: I will start with a pair of static and moving images and show the effect of each type of registration on the moving image. Thereafter, I will reason about the quality of the registration based on the values of the quality metrics generated by the workflow.

The registered images after performing the ‘center of mass’ registration (COM)

The COM-based alignment has done a good job of aligning the moving image to the static one (Red), but we can clearly see that the moving image is still not completely registered to the static. The moving image (Green) has its spatial orientation located away from the static image (Figure-3: A-C).

Figure-3-A: The axial view of the registered images, the moving image (Green) overlaid with the static (reference image in Red).
Figure-3-B: The coronal view of the registered images, the moving image (Green) overlaid with the static (reference image in Red).
Figure-3-C: The sagittal view of the registered images, the moving image (Green) overlaid with the static (reference image in Red).
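Conceptually, COM-based registration simply translates the moving image so that the two centers of mass coincide. The following self-contained sketch (using SciPy on toy 2-D arrays, not the actual DIPY workflow) illustrates the idea:

```python
import numpy as np
from scipy import ndimage

# Two synthetic 2-D "images": the same bright blob, offset in the moving one.
static = np.zeros((64, 64))
static[20:30, 24:34] = 1.0
moving = np.zeros((64, 64))
moving[32:42, 10:20] = 1.0

# Center-of-mass alignment: translate the moving image so that its COM
# coincides with the static image's COM.
com_static = np.array(ndimage.center_of_mass(static))
com_moving = np.array(ndimage.center_of_mass(moving))
registered = ndimage.shift(moving, com_static - com_moving, order=1)

# After the shift, the blobs overlap almost perfectly.
overlap = np.sum(registered * static) / np.sum(static)
print(round(overlap, 2))  # 1.0
```

On real brain volumes the overlap after COM alone is far from perfect (as Figure-3 shows), because COM only corrects translation, not rotation, scale, or shear.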

The registered images after performing the translation based registration

The translation-based affine registration has definitely improved the quality of registration, as is evident from Figure-4: A-C below. The extent of overlap has increased in all three views, indicating an improvement over COM-based registration.

The optimal similarity metric is:  -0.2202966

Figure-4-A: The axial view of the moving image (Green) after translation based registration on the static image (Red).
Figure-4-B: The coronal view of the moving image (Green) after translation based registration on the static image (Red).
Figure-4-C: The sagittal view of the moving image (Green) after translation based registration on the static image (Red).
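Under the hood, translation-based registration is an optimization: candidate translations are scored by a similarity metric, and the best-scoring one wins. DIPY optimizes mutual information with a proper optimizer; the toy sketch below uses a brute-force search and a simple negative-correlation metric purely to illustrate the idea:

```python
import numpy as np
from scipy import ndimage

# Toy images: a bright square, offset by (3, -4) voxels in the moving image.
static = np.zeros((32, 32))
static[10:18, 12:20] = 1.0
moving = ndimage.shift(static, (3, -4), order=0)

def metric(a, b):
    # Negative overlap: more alignment -> more negative (to be minimized).
    # DIPY's workflow uses negative mutual information instead.
    return -np.sum(a * b)

# Brute-force search over candidate integer translations.
best = min(
    ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda s: metric(static, ndimage.shift(moving, s, order=0)),
)
print(best)  # (-3, 4): exactly undoes the original offset
```

The "optimal similarity metric" values reported in this post are the metric evaluated at the winning parameters, which is why lower values indicate better alignment.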

The registered image after performing the rigid body based registration

The rigid-body-based registration improves further beyond translation, and hence the quality of the registered images is better (Figure-5: A-C). This is also quantified in the value of the optimal similarity metric, as indicated below.

The optimal similarity metric: -0.2752 (lower than for the translation-based registration, implying that the images are closer in space relative to the translation-based registration).

Figure-5-A: The axial view of the moving image (Green) after registering with the static image (Red).
Figure-5-B: The coronal view of the moving image (Green) after registering with the static image (Red).
Figure-5-C: The sagittal view of the moving image (Green) after registering with the static image (Red).

The registered images after performing the full affine registration

The full affine includes all the previous modes of registration and adds shear and scaling of the image to maximize the extent of alignment between the images. The quality of registration is by far the best in this case (Figure-6: A-C) and is also supported by the value of the similarity metric.

The optimal similarity metric: -0.283808 (the lowest so far, implying that the images are the closest possible in space relative to the center-of-mass, translation, and rigid-body-based registrations).

Figure-6-A: The axial view of the registered moving image (Green) on the static image (Red) after performing the full affine transformation.
Figure-6-B: The coronal view of the registered moving image (Green) on the static image (Red) after performing the full affine transformation.
Figure-6-C: The sagittal view of the registered moving image (Green) on the static image (Red) after performing the full affine transformation.
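For intuition, a full affine matrix can be viewed as the composition of the simpler modes: translation and rotation (the rigid-body part) plus scaling and shear. A minimal NumPy sketch in 2-D homogeneous coordinates (the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np

theta = np.deg2rad(15)
T = np.array([[1, 0,  5.0],
              [0, 1, -3.0],
              [0, 0,  1.0]])                                    # translation
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                                       # rotation
S = np.diag([1.2, 0.8, 1.0])                                    # scaling
H = np.array([[1, 0.1, 0],
              [0, 1,   0],
              [0, 0,   1]])                                     # shear

# Compose the full affine; factors apply right-to-left.
# A rigid-body transform is the special case S = H = identity.
A = T @ R @ S @ H
print(A.round(3))
```

The determinant of the linear part equals the product of the scale factors (rotation and shear are volume-preserving), which is one way to read the amount of scaling back out of a fitted affine matrix.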

The optimal similarity metric can be used as a quantitative measurement to understand the quality of registration. For example, the following image shows the gradual decrease in the metric for the different modes of affine registration.

Figure-7: The improvement in the quality of registration as indicated by the decrease in the value of the optimal similarity metric. The lower the metric, the better the quality of the registered data.

The image registration workflow, combined with the facility to produce quantitative metrics, can be used to analyze hundreds of biomedical images in an automated manner. For example, by plotting the optimal similarity metric values for a set of images, outliers can be easily identified (which is not so easy with manual inspection). Therefore, this workflow can play a pivotal role in expanding the application of DIPY to big datasets.
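As a sketch of how such automated screening could look, the snippet below (with made-up metric values) flags registrations whose final similarity metric deviates strongly from the batch median:

```python
import numpy as np

# Final similarity-metric values from a batch of registrations
# (hypothetical numbers; lower = better alignment). One run failed
# to converge and stands out from the rest.
metrics = np.array([-0.28, -0.27, -0.29, -0.28, -0.05, -0.27, -0.28])

# Flag runs whose metric deviates strongly from the median,
# using the median absolute deviation as a robust spread estimate.
median = np.median(metrics)
mad = np.median(np.abs(metrics - median))
outliers = np.flatnonzero(np.abs(metrics - median) > 5 * mad)
print(outliers)  # [4]: image 4 is flagged for manual review
```

A median-based rule is used here because a single failed registration would distort a mean/standard-deviation threshold.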

Diffeomorphic image registration applies a deformation transform (a diffeomorphic map) in which the shape and form of the images can also be affected. In contrast to the affine transformation, it does not preserve the points and planes in the images. Diffeomorphic registration is particularly useful when a deformed object needs to be aligned with a normal object for medical analysis, such as registering a brain with substantial tumor growth to a normal brain.
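Whereas an affine registration outputs a single matrix, a diffeomorphic registration outputs a dense deformation field: one displacement vector per voxel. The toy SciPy sketch below (not the DIPY implementation) shows how such a field is applied, by resampling the moving image at the displaced coordinates:

```python
import numpy as np
from scipy import ndimage

# A toy 2-D "moving image": a bright square.
moving = np.zeros((16, 16))
moving[4:12, 4:12] = 1.0

# Toy displacement field: a constant push of 2 voxels in y and 1 in x.
# A real diffeomorphic field varies smoothly from voxel to voxel.
dy = np.full(moving.shape, 2.0)
dx = np.full(moving.shape, 1.0)

# Warp: sample the moving image at the displaced positions.
yy, xx = np.mgrid[0:16, 0:16]
warped = ndimage.map_coordinates(moving, [yy + dy, xx + dx], order=1)

# The square now appears shifted by (-2, -1) in the output grid.
print(warped[2:10, 3:11].sum())  # 64.0: the full square, relocated
```

With a spatially varying field, different parts of the image move by different amounts, which is exactly what lets the "extended section" in Figure-8 be absorbed during registration.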

The dataset for the diffeomorphic registration: The following images are both B0 volumes, taken from the stanford_hardi [stanford_hardi, 5] and syn [syn dataset, 6] datasets. To emphasize that they are clearly not aligned, the moving image (in green) is resampled on a grid based on the static image (in red).

Procedure: I will start with a pair of static and moving images and show the effect of diffeomorphic registration on the moving image. Thereafter, I will reason about the quality of the registration based on a visual assessment of the registered image.

Raw data (No registration, only resampled, Figure-8)

Figure-8: The coronal view of the static (Red) and moving (Green) images and the overlay shows the necessity of applying the deformation transform on the moving image. Note that the bottom portion of the moving image contains an extended section which is missing in the static image.

The Registered image after the Diffeomorphic Transformation

By comparing the image below (Figure-9) with the image before registration (Figure-8), it is evident that the warped moving image has been deformed in the lower middle section to resemble the static image.

Figure-9: The registered image, the moving image now can be seen to have deformed by losing the ‘extended section at the bottom’ to resemble the static image. The overlay image shows that the static (Red) and moving images (Green) are superimposed optimally.

Link to the pull request, the diffeomorphic registration workflow, and the experimental branch for generating the data for the blog:

PR 1616

Pull request for the Diffeomorphic registration (for registering the brain images under deformation)

Diffeomorphic registration Workflow

Experimental branch for generating the test data for this blog

Applying the affine transform to a set of moving images: once the images have been registered by the affine transform, the calculated affine matrix (a matrix that records the optimal parameter values, such as the extent of translation, rotation, shear, etc.) is saved to disk. This matrix can be leveraged to quickly transform multiple moving images to a static image without having to go through the affine registration again (which matters because registration is a time-consuming process).

The Apply Affine workflow loads the parameters present in the affine matrix and applies the appropriate transform (using those parameters) to a set of moving images to quickly register them to the static image. This workflow can come in handy when the registration has to be done for thousands of images.
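The core idea of the Apply Affine workflow can be sketched in a few lines: load a previously saved matrix and resample each moving image with it, skipping the costly optimization. The snippet below uses SciPy on toy data purely as an illustration; the actual workflow operates on NIfTI volumes with world-coordinate bookkeeping:

```python
import numpy as np
from scipy import ndimage

# A previously computed 2-D affine in homogeneous form, as it might be
# loaded from disk (e.g. via np.load / np.loadtxt). Here: pure translation.
affine = np.array([[1.0, 0.0, -5.0],
                   [0.0, 1.0,  3.0],
                   [0.0, 0.0,  1.0]])

def apply_saved_affine(image, affine):
    # scipy expects the linear part and the offset separately.
    return ndimage.affine_transform(image, affine[:2, :2],
                                    offset=affine[:2, 2], order=1)

# Re-apply the same transform to a whole batch of moving images:
# no per-image optimization is needed.
batch = [np.random.rand(32, 32) for _ in range(3)]
registered = [apply_saved_affine(img, affine) for img in batch]
print(len(registered), registered[0].shape)  # 3 (32, 32)
```

This is why the combined pipeline in Figure-10 scales: the optimization runs once, and every subsequent image pays only the (cheap) resampling cost.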

The image below (Figure-10) outlines how a user can combine the two workflows to achieve this goal:

Figure-10: Combining the Affine registration and Apply Affine Workflows in a pipeline to register a set of moving images.

Links to the pull request, the Apply Affine workflow, and the unit tests for the workflow:

PR 1605

Pull request for the Apply Affine workflow / applying the affine transform (PR 1605)

Apply Affine Workflow

Testing the Apply Affine Workflow

Visualizing the data with DIPY: Apart from visualizing the overlay of slices from the registered image, where the static and moving images have been colored in different channels to highlight their overlap (Figures 2, 3, 4, 5, 6, 8), the visualization can be extended to create mosaics and animations for displaying all the slices rather than just one.

In this regard, the visualization workflow provides the feature of visualizing a mosaic of all the slices from the registered images and also has an option to create a GIF animation from the slices to highlight the overlap of slices as the registration progresses.
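The mosaic itself is straightforward to construct: the slices of a 3-D volume are tiled into a 2-D grid. A minimal NumPy sketch (the workflow adds channel coloring and the static/moving overlay on top of this):

```python
import numpy as np

def make_mosaic(volume, cols):
    """Tile the slices of a 3-D volume into a single 2-D mosaic image."""
    n, h, w = volume.shape
    rows = -(-n // cols)  # ceiling division: enough rows for all slices
    mosaic = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for i, sl in enumerate(volume):
        r, c = divmod(i, cols)
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = sl
    return mosaic

volume = np.random.rand(10, 64, 64)  # 10 axial slices of 64x64
mosaic = make_mosaic(volume, cols=4)
print(mosaic.shape)  # (192, 256): 3 rows x 4 columns of 64x64 tiles
```

A GIF animation is the same idea rotated in time: instead of tiling the slices spatially, each slice (or registration iteration) becomes one animation frame.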

The visualization workflow provides an intuitive way to convey the power of affine registration for different modes on a given set of images. It clearly shows how the different slices from the static and moving images overlap as the registration goes on.

Note: The visualization modules are still being developed and need more detail-oriented development, especially as they deal with the domain of images and graphics, which poses its own challenges in terms of allowed color ranges, the image resolution that can be supported in a GIF, and lossless versus lossy compression, to name a few.

The dataset for the visualization workflow: I will use the B0 volume taken from the stanford_hardi dataset [stanford_hardi, 5] as the static image (in Red) and the transformed images (in Green) created from the affine registration (as shown in Figures 3-6 above).

Links to the pull request, the visualization workflow, and the unit tests for the workflow. This PR needs more work, and hence I will keep contributing to DIPY after GSoC:

PR 1606

Pull request for visualizing the registered images from affine or diffeomorphic registration

Visualization workflow

Testing the visualization Workflow

Last GSoC commit

Visualising the COM-based (center of mass based) registration through the mosaic (Figure-11) and GIF animation (Figure-12)

Figure-11: The mosaic depicting the overlap of brain images as the COM-based registration progresses.
Figure-12: A GIF animation showing the overlap of brain slices as the registration progresses.

As already pointed out above in the affine registration section, the overlap is not good enough in the case of the center-of-mass-based registration. This can also be seen from the mosaic and the animation, since the moving image (in Green) lies outside the spatial context of the static image (in Red).

Visualising the Translation-based registration through the mosaic (Figure-13) and GIF animation (Figure-14)

Figure-13: The mosaic depicting the overlap of brain images as the translation based registration progresses.

 

Figure-14: The GIF animation depicting the overlap of brain images as the translation based registration progresses.

Clearly, the translation-based image registration produces better-quality registered data, since the spatial orientation of the moving image (in Green) appears to be shifted towards the static image (in Red) when compared with the COM-based registration.

Visualising the rigid body based registration through the mosaic (Figure-15)

Figure-15: The axial view of the slices from the static image and the moving image showing the overlap as the rigid body based registration progresses.

The rigid-body-based registration clearly highlights the superiority of this method relative to the COM- and translation-based methods, as the moving image (Green) is now very well superimposed on the static image (Red).

Visualising the Full affine based registration through the mosaic (Figure-16)

Figure-16: The axial view of the slices from the static and moving images overlapping as the full affine registration progresses.

By far, the results produced by the full affine registration are the best when compared with the other modes of alignment. This accuracy and precision come at the cost of execution time. Therefore, the 'Apply Affine' workflow described above is an attempt to save this time.

Comparative Analysis of the project and code base before and after GSoC

State of the art before GSoC and improvements through the contributions

The DIPY tutorials were a good starting point for learning about the various image registration modes, how to implement them, and their applications in biomedical image processing. The present tutorials walk the user through the various modes of registration and their effect on the end results. However:

1. Workflow(s): There are no workflows to combine the various image registration modes in an end-to-end manner. Furthermore, there is no integration with metric-based quality assessment and visual output. The current tutorials focus on calling the various modes of registration without reference to the quality of the registration.

1.1 Project Contribution(s) and improvement: I addressed this issue by reporting quantitative metrics, such as the optimal parameters and the value of the objective function at the optimal parameters, in the workflow.

Using these values, a factual inference can be made about the quality of the registration. For example, as the registration progresses from center of mass to full affine, the optimal value of the function decreases, implying that the registered images are getting closer in space.

2. Performance on big data: The tutorials are meant to introduce the user to the DIPY package and its features, and hence do not elaborate on performance across different test cases (in terms of time and quality of results). This factor becomes very important when a workflow is executed in a cloud environment on big datasets (thousands of images), which can consume a lot of time.

2.1 Project Contributions and improvement: I addressed this issue by exhaustively testing the image registration workflow with various datasets varying in both size and quality of the extracted brain image. As expected, the time and quality of registration vary with the input data. In this context, I reported on the benchmarks associated with such testing. Since the testing was done iteratively on different platforms, the report only points out the best results (as they represent the best that can be achieved with the current state of DIPY).

3. Visualization: The tutorials did not include visualization incorporated as part of an image registration workflow supporting both mosaic and animated GIF creation. Visualization features, together with quantitative metrics, can provide significant support for assessing the quality of the registered images, especially when running in an automated manner where manual inspection is not always feasible.

3.1 Project Contributions and improvement: I addressed this issue by animating the slices from the registered image sequentially in a GIF. In this context, the slices from the static and moving images are shown in different color channels (to emphasize the overlap of slices) as the registration progresses. The visualization workflow also supports the creation of mosaics for viewing the slices in parallel.

4. Application of optimal transformation/deformation data: Currently, there is no facility to make use of the transformation or deformation data for registering a set of moving images quickly. This means that even for different moving images of the same brain, the registration needs to be done repeatedly, increasing the analysis time manifold. The problem magnifies when the dataset is large and complex.

4.1 Project Contributions and improvement: I addressed this issue by providing the option to save the affine matrix (generated from affine registration) and the deformation field (experimental, generated from diffeomorphic registration), which can then be used to transform a set of moving images quickly by directly applying the affine transform or the diffeomorphic field, without having to go through the registration process all over again.

Qualitative addition to DIPY through my GSoC project

On the qualitative side, my project has added significant value in the following areas:

1) Reduction in code redundancy: With the image registration workflow, users won't need to repeat the same modules or functions for image registration. Even for registering an image in various modes (center of mass, translation, rigid, and affine), a single workflow will suffice, with a different set of command-line parameters.

2) Increase in the level of abstraction for non-tech-savvy users: With some thoughtful training, domain experts (doctors and practitioners who do not have a computer science or programming background) can call the DIPY workflows with their input data and get the end result without debugging a myriad of errors. The tight integration with quality assessment metrics will help in telling a good registration from a bad one.

3) Increase in the level of automation in performing common tasks: The workflows can also be combined to accomplish more complicated analyses. One example is where the user wants to first register a set of moving images to a static image and then use the generated affine matrix to further transform other moving images to the static image.

Doing this requires the user to combine ImageRegistrationFlow() and ApplyAffineFlow(). This level of automation is now possible by combining these workflows in DIPY (see Figure-10 above).

Fixing bugs in the DIPY code base: In addition to the primary project goals, I was also able to find and fix bugs in the codebase. Details about them and links to the commits can be seen in the blog post below. All of the PRs mentioned in that post have been merged into the code base:

Finding and fixing the 'small and crucial' issues in DIPY.

Acknowledgments

I am especially grateful to the project mentors, Dr. Eleftherios Garyfallidis and Mr. Serge Koudoro, for their helpful guidance, detailed discussions on the algorithmic background, and the code walkthroughs. I also extend my thanks to community developer Mr. Nil Goyotte for reviewing the PRs and providing feedback to improve the code's consistency and brevity.

In this regard, I am also sincerely thankful to the Python Software Foundation (PSF) and the GSoC admins for carefully reviewing my work and ensuring that I stuck to the timeline. This was a great learning experience for me, and the fact that the PSF is supporting DIPY is encouraging for all open-source enthusiasts (including me) to keep contributing to the DIPY package.

Without all this support, I wouldn't have been able to meet the high standards set by the project mentors and the GSoC program for completing the end product.

Many thanks to GSoC mentors, DIPY community developers and GSoC admins.

References:

[1]: https://www.researchgate.net/publication/233960603_Eleftherios_Garyfallidis_Towards_an_accurate_brain_tractography_PhD_thesis_University_of_Cambridge_2012

[2]: https://www.sciencedirect.com/science/article/pii/B9780128012383999902

[3]: https://www.sciencedirect.com/science/article/pii/B9781416032243500062

[4]: http://www.iro.umontreal.ca/~sherknie/articles/medImageRegAnOverview/brussel_bvz.pdf

[5]: http://nipy.org/dipy/reference/dipy.data.html#dipy.data.fetch_stanford_t1

[6]: http://nipy.org/dipy/reference/dipy.data.html#dipy.data.fetch_syn_data

 
