Integrating animation in DIPY to help in quality assessment for big datasets.

Hey,

I will start with the good news, the image registration workflow is in the process of being merged.

The code went through several iterations of improvement and optimization (after the PR was created), thanks to the community developers for sharing their useful comments.

Wondering what the following animation is saying? Read on to learn more about it.

For the past few days, I have been brainstorming and experimenting with the following objectives.

Objectives for this work

1) Adding an intuitive component to help in the assessment of the quality of registered images.

Please see my earlier posts about visualizing the results of Image Registration,

Visualizing the Registration Progress

2) Trying different options to find the optimal and platform-independent way of communicating the results for quality assessment.

Adding an intuitive Visualization Component

I have already discussed the usage and application of ‘Mosaics’ in assessing the visual quality of registered data. A mosaic presents a holistic view of the registered image by displaying all the slices (from a specific plane) side by side so that changes in the voxels can be observed sequentially.

What are mosaics, and what are their limitations?

Mosaics can be handy when a user wants to view individual slices rather than the entire volume. This provides more fine-grained control over the quantity of information presented.

However, the mosaic-based approach has one shortcoming (not that it isn’t useful): for large volumes, the number of slices can be overwhelming. A large number of slices makes it difficult not only to create the mosaic but also for the user to interpret the results. Just imagine looking at hundreds of slices while trying to track the changes from one to the next.

Previous Commit Link: Mosaics
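The tiling idea behind a mosaic can be sketched in a few lines of NumPy. This is a minimal illustration, not DIPY's own code; `build_mosaic` and its arguments are names I made up for the example.

```python
import numpy as np

def build_mosaic(volume, cols=4):
    """Tile the axial slices of a 3D volume into one 2D mosaic image.

    A minimal numpy sketch of the tiling idea (not DIPY's own code):
    slices are laid out row by row, left to right.
    """
    h, w, n_slices = volume.shape
    rows = int(np.ceil(n_slices / cols))
    mosaic = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for k in range(n_slices):
        r, c = divmod(k, cols)
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = volume[..., k]
    return mosaic

vol = np.random.rand(64, 64, 10)      # a toy volume with 10 axial slices
mosaic = build_mosaic(vol, cols=4)    # 3 rows x 4 columns of 64x64 tiles
```

With hundreds of slices, the `rows * h` dimension of this mosaic grows quickly, which is exactly the readability problem described above.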

Making the registered slices into an ‘Animation’

So, I needed another method to complement the mosaics and round out the quality assessment arsenal in DIPY. After discussing with the project mentors, it was decided to use an animation (preferably a GIF) to show the changes in the moving image as the registration progresses.

I started experimenting in a separate branch to create animations that visually convey the quality of the registered data.

Commit Link: Adding the animation 

After multiple rounds of debugging and data-specific exploration (adjusting the code for the specific number of slices and volumes), I was finally able to create an animation showing the progress of image registration.
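The core of the animation is simple: build one RGB frame per slice, with the static image in the red channel and the moving image in the green channel, then write the frames out as a GIF (e.g. with matplotlib's animation module). The sketch below shows the frame-building step only and assumes the volumes are already normalized to [0, 1]; `overlay_frames` is an illustrative name, not the actual branch code.

```python
import numpy as np

def overlay_frames(static_vol, moving_vol):
    """Build one RGB frame per axial slice: static in RED, moving in GREEN,
    so that good overlap shows up as YELLOW.

    A sketch of the animation idea only; assumes both volumes are already
    normalized to [0, 1] and have the same shape.
    """
    frames = []
    for k in range(static_vol.shape[2]):
        frame = np.zeros(static_vol.shape[:2] + (3,), dtype=np.uint8)
        frame[..., 0] = (255 * static_vol[..., k]).astype(np.uint8)  # RED: static
        frame[..., 1] = (255 * moving_vol[..., k]).astype(np.uint8)  # GREEN: moving
        frames.append(frame)
    return frames

static = np.random.rand(32, 32, 6)
moving = np.random.rand(32, 32, 6)
frames = overlay_frames(static, moving)   # 6 frames, one per slice
```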

Animating the slices from the ‘Center of Mass’ based registration

The static image (in RED) and the moving image (in GREEN) are shown overlapping. The yellow dots indicate the areas where the overlap is good.

Animating the slices from the ‘Translation’ registration

The animation below shows the slices (from both the static and the moving images) in the axial plane overlapping with each other. This particular image is a bit blurry due to the enlarged size, but I am working to improve the quality of the results further.

Translation-based registration using the Mutual Information metric: the static image (in RED) and the moving image (in GREEN) are shown overlapping.

Animating the ‘Rigid body based’ registration

The following animations show the effect of rigid body registration on the moving image when the registration is driven by the Mutual Information metric.

Rigid registration using the Mutual Information metric: the static image (in RED) and the moving image (in GREEN) are shown overlapping. The portions in yellow are the ones that aligned well.

Animating the slices from the ‘Affine’ registration

The best results were obtained with affine registration, as shown in the following animation. The ‘RED’ portions are most thoroughly superimposed by the GREEN in the AFFINE registration (relatively better when compared to the other modes).

Full affine registration using the Mutual Information metric: the static image (in RED) and the moving image (in GREEN) are shown overlapping.

Correcting the orientation

By interpolating the moving image and normalizing it (for the color range of a GIF), the animations were reproduced in the correct orientation.
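The normalization step can be sketched as below. This is an illustration under assumptions: the single `np.rot90` stands in for the real reorientation, which depends on the image's affine, and `to_gif_frame` is a name invented for the example.

```python
import numpy as np

def to_gif_frame(slice_2d):
    """Normalize one float slice into the 0-255 uint8 range a GIF expects
    and rotate it into display orientation.

    Sketch only: the np.rot90 here stands in for the actual reorientation,
    which is determined by the image's affine.
    """
    lo, hi = slice_2d.min(), slice_2d.max()
    if hi > lo:
        norm = (slice_2d - lo) / (hi - lo)   # scale intensities to [0, 1]
    else:
        norm = np.zeros_like(slice_2d)       # flat slice: all zeros
    return np.rot90((255 * norm).astype(np.uint8))

frame = to_gif_frame(np.arange(12.0).reshape(3, 4))
```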

The MP4 video below shows the animation in the correct orientation (this is the center of mass based registration for demonstration purposes, but similar results can be produced for the other modes too).

Video: The animation in the correct orientation (scroll to the bottom of the page)

Commit Link: Visualising with the correct orientation

Merging the experimental code with the local repo

Since the initial experiments with the animations yielded satisfactory output, I started adding the changes to the branch in my local repository and also made the code more general.

Commit Link: Complete Code Affine Visualizations

These changes are still under testing and will be merged with the experimental branch of DIPY for large data (nipy/dipy_qa).

That’s about it for now. Future posts will focus on developing new workflows with DIPY (now that Image Registration is complete with robust quality assessment metrics).

Adios!

Parichit

Improving the Workflow Code (based on the Community Feedback)


Hey there,

So, there has been some delay in my project timeline, but the delay wasn’t for lack of progress: I have been able to create good results for visual quality assessment. In fact, the Image Registration workflow is now supplemented by both quantitative and qualitative assessment metrics.

This blog post reviews what has been done in the Image Registration Workflow and the many improvements I have been making to the code based on community feedback.

Reviewing the additions to the workflow (for assessing quality)

For quantitative assessment: there is now the option of saving the distance and optimal parameter metrics. For details about the code, see the following PR.

(old) PR for quantitative metric addition in the workflow

Testing and benchmarking the Image Registration with DIPY

Qualitative assessment: I am working in a separate branch to create the mosaic of registered images. This branch isn’t part of master because the primary Image Registration workflow has not been merged (yet). More details about what these mosaics are and how they help in quality assessment can be found in one of my older posts below.

Visualizing the Registration Progress

Reviewing the improvements to the Image Registration workflow

PR 1581 got a good response from the DIPY development community. I will discuss a few of the issues raised (this discussion is by no means exhaustive; I am selecting the points which I think made a difference). The majority of these issues were noted by the DIPY developer @nilgoyette. Many thanks to @nilgoyette for pointing them out and helping me improve the code.

Modifying the code based on the community feedback (many thanks to @nilgoyette & @skoudoro for the useful comments)

Improvement-1: Following consistent, uniform coding standards. I overlooked the fact that simple things like import statements and line widths were not uniform in the file after I added my code. For example, I was using both “,” and “()” styles for importing multiple classes from a module.

Old Commit

New Commit 

While these things don’t hurt the functionality of the code, fixing them does make the code more consistent. I updated the code to follow the same standards everywhere.

Improvement-2: Moving to assert_almost_equal for comparing long floating point numbers. I had written redundant code that rounded off a floating point number to compare it with another float, whereas NumPy’s assert_almost_equal is made for exactly this purpose, so I updated the old test case to use assert_almost_equal.

This not only improved the code readability but also made the test’s objective clear. Furthermore, using NumPy’s built-in functions made the test cases more consistent with the code base.

Commit Link
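The change can be illustrated with a toy before/after comparison (the numbers here are made up for the example):

```python
from numpy.testing import assert_almost_equal

x = 0.123456789
y = 0.123450001

# Before: manual rounding obscures what is actually being tested
assert round(x, 4) == round(y, 4)

# After: assert_almost_equal states the tolerance directly
# (passes when abs(x - y) < 1.5 * 10**-4)
assert_almost_equal(x, y, decimal=4)
```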

Improvement-3: Reducing code duplication. The Image Registration workflow is complicated by the fact that it supports multiple registration modes, both progressively and non-progressively. This led to parts of various local functions being duplicated. I have reduced the duplication (marginally, though) by moving part of the original code into a separate function.

Commit Link
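As a sketch of the idea (all names below are illustrative, not the workflow's actual functions): the configuration that every registration mode repeated moves into one shared helper, and each mode only adds what differs.

```python
def _shared_setup(nbins=32, level_iters=(10000, 1000, 100)):
    """Illustrative helper: the settings each mode previously duplicated."""
    return {"nbins": nbins, "level_iters": list(level_iters)}

def register_translation(**overrides):
    params = {**_shared_setup(), "transform": "translation"}
    params.update(overrides)
    return params

def register_rigid(**overrides):
    params = {**_shared_setup(), "transform": "rigid"}
    params.update(overrides)
    return params
```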

Improvement-4: Using Python’s “_” placeholder for variables that are returned by a function call but never used. I was using a named variable to hold data returned by a function even though the variable wasn’t used anywhere later in the code, so I moved to the more Pythonic “_” placeholder.

Commit Link
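A toy illustration of the change (the function here is a stand-in, not workflow code):

```python
def load_image():
    """Stand-in for a call that returns (data, affine); illustrative only."""
    return [1, 2, 3], "affine"

# Before: `affine` was bound but never used again
data, affine = load_image()

# After: "_" signals the second return value is deliberately discarded
data, _ = load_image()
```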

Improvement-5: Using Python’s built-in assert statement in the test cases. Some of the test cases were using NumPy’s assert_equal to check a value against the boolean True/False, so it made more sense to just use the built-in assert for such checks. Not that assert_equal was incorrect, but a plain assert is clearer for simple unit checks like equality to True/False.

Commit Link
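Again as a toy comparison (the boolean here is a stand-in for a real check in the test suite):

```python
from numpy.testing import assert_equal

images_match = (2 + 2 == 4)   # stand-in for a boolean check in a test

# Before: numpy's assert_equal against a literal boolean
assert_equal(images_match, True)

# After: python's built-in assert reads more naturally for True/False checks
assert images_match
```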

All this feedback made the code more consistent and streamlined. These improvements (along with other changes to the code base) are now part of PR 1581 and waiting to be merged.

In the coming weeks, I will share more details about the results of the apply_transform workflow (as promised earlier) and about the new visualizations that can be done with registered data using native matplotlib calls. More details about the apply_transform workflow can be found in the following post.

Transforming multiple MRI Images (in a Jiffy!)

Adios for now!

Parichit.

Visualizing the Registration Progress

Hey,

The Image Registration Workflow has been getting significant feedback from the DIPY developer community. The reviews have proved useful for both the code quality and the semantics of the code.

While I am constantly improving the code and updating the PR with new commits, I thought it would be worthwhile to give a sneak peek into the intuitive visualizations that can be achieved with the VTK modules in DIPY.

So, we know that a given pair of images can be registered in primarily four modes; more details about these modes and how to perform each registration can be found in my earlier post at the following link.

Testing and benchmarking the Image Registration with DIPY

Today, I will share results about the powerful visualizations that can be generated with DIPY for interpreting the results intuitively.

Comparative assessment of the results generated by various modes of registration

To compare and assess the quality of image registration across the different modes, a mosaic is generated for the registered image to show the progress of the registration process. This also helps in comparing the various modes with each other. As will be evident in the later sections of this post, affine registration results in the best overlap between the images.

Experiment-1 on forked repository: Creating Overlaying Images

Experiment-2 on forked repository: Creating a mosaic of the registered image

A) Visualising the Mosaic of the Registered Images (Translation based):

The mosaic below displays all the slices of the registered image, which was produced using translation-based registration.

A mosaic displaying all the slices of the registered image data. RED: Static Image, GREEN: Moving Image. A proportion of YELLOW shows the areas where the overlap is very good.

As can be seen from the above mosaic, as the slices are displayed progressively (left to right, top to bottom), the overlap between the red and green channels increases towards the final slices.

This depicts the progress of the registration process as the two images are being aligned in a common space.

B) Visualising the Mosaic of the Registered Images (Rigid body based)

The rigid body registration was done progressively, meaning that it starts from the affine matrix obtained in the translation step (A) above. The mosaic below shows the progress of the rigid registration process and highlights how the two images are aligned as the color channels overlap.

A mosaic displaying all the slices of the registered image data. RED: Static Image, GREEN: Moving Image. A proportion of YELLOW shows the areas where the overlap is very good.

C) Visualising the Mosaic of the Registered Images (Affine registration based)

The best results are obtained with affine registration done progressively. The mosaic below displays the progress of the registration process as the alignment proceeds.

Note how at the start there is only RED, and then GREEN is increasingly introduced into the mosaic, which means the moving image is being brought into the common alignment space. Eventually, the moving image is superimposed on top of the static image.

A mosaic displaying all the slices of the registered image data. RED: Static Image, GREEN: Moving Image. A proportion of YELLOW shows the areas where the overlap is very good.

Mosaic for a relatively larger MRI data

The following mosaic was created for a different brain image, which is relatively larger than the earlier data. This can also be noticed from the number of tiles in the mosaic below.

Visualising the mosaic of the images from two different runs of the same brain MRI.

Notice the varying hue of YELLOW in this image; it indicates the varying degree of superimposition of the voxels between the static and moving images. YELLOW is also present in the previous mosaics, but its intensity there is relatively weak, which makes the differences between the images hard to see. I am still working on improving the visual quality of these mosaics so that the differences become more apparent.

Here is how the different channels are created for the above images.

import numpy as np

# Normalize the input images to [0, 255]
static_img = 255 * ((static_img - static_img.min()) / (static_img.max() - static_img.min()))
moved_img = 255 * ((moved_img - moved_img.min()) / (moved_img.max() - moved_img.min()))

# Create the color image (RGB, one byte per channel)
overlay = np.zeros(shape=static_img.shape + (3,), dtype=np.uint8)

# Copy the normalized intensities into the appropriate channels:
# RED holds the static image, GREEN holds the moving image
overlay[..., 0] = static_img
overlay[..., 1] = moved_img

The overlay contains both the RED and GREEN channels and is used afterward to extract the slices for creating the mosaic.
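The slice-extraction step can be sketched as below; the zero-filled `overlay` here is a stand-in for the real overlay volume built above.

```python
import numpy as np

# A sketch of the follow-up step: pull the axial slices out of the RGB
# overlay so that each one becomes a tile of the mosaic. The overlay here
# is a zero-filled stand-in for the real data.
overlay = np.zeros((64, 64, 10, 3), dtype=np.uint8)
tiles = [overlay[:, :, k, :] for k in range(overlay.shape[2])]
```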

List of PRs that are now merged into the code base

Some of my earlier PRs are now merged into the DIPY code base and are available below for review and further comments.

Showing help when no input parameter is present

Updating the default output strategy

The next blog posts will present more details about the new visualization methods and the optimized code for Image Registration; hopefully, the Image Registration Workflow will be merged into the master codebase by then. 🙂

Adios for now,

Parichit


Transforming multiple MRI Images (in a Jiffy!)


Hey,

Update: so far, the Image Registration workflow has been put into the PR zone. See the PR below (from earlier this week) for details:

Link to the PR: Image Registration Workflow 

This blog post is about an extension of the Image Registration Workflow for transforming several images quickly. Wondering why that is any different from the conventional image registration (which is already pending approval)?

Here is the thing: a typical image registration takes anywhere between 16–158 seconds (just for the coronal slices) and 34–593 seconds (for brain images, depending on the input size and type of registration). So, the idea is not to repeat the registration for different images of the same brain, but rather to use the affine transformation matrix (obtained from the first registration) to align the other images to the static/reference image at once.

Apart from huge savings (in both time and computation), this will be handy when a user has a large collection of moving images: any one of them is registered, and the rest can then simply follow the information in the affine matrix without having to be registered individually.

Step-1: Load the static and moving image(s) as NumPy arrays.

import nibabel as nib

static_image = nib.load(static_image_file)
static_grid2world = static_image.affine

moving_image = nib.load(moving_image_file)
image_data = moving_image.get_data()

Step-2: Load the saved affine matrix and do a basic sanity check before aligning the images. This ensures that the dimensions of the images and the matrix are compatible so that the workflow doesn’t break down while running.

affine_matrix = load_affine_matrix(affine_matrix_file)

Step-3: Instantiate the AffineMap class (with the affine matrix obtained in Step-2 above) and call its transform() method.

img_transformation = AffineMap(affine=affine_matrix, domain_grid_shape=image_data.shape)
transformed = img_transformation.transform(image_data)

Step-4: Save the transformed image to disk using the coordinate system of the static image.

save_nifti(out_dir+out_file, transformed, affine=static_grid2world)

Here is the link to the latest version of the workflow

Commit Link: Apply Transform Workflow

Commit Link: Unit test cases for the Apply Transform Workflow

I will open the PR for this workflow once the initial Image Registration workflow is merged (that’s a prerequisite for this one to work). This is also a good opportunity to get feedback about possible code modifications, optimizations, and testing with different input data.

Soon, I will update this blog with images aligned using this workflow, along with benchmarks.

Adios for now,

Parichit

Testing and benchmarking the Image Registration with DIPY


Hey,

I am very happy to share that the first version of the Image registration workflow is now complete with associated test cases and benchmark data.

It will pave the way for a user-friendly way of aligning images (both standalone images and collections of MRI images), making efficient image registration accessible to non-expert users.

Link to the PR: Image Registration Workflow 

Visualising the results of the Image Registration with DIPY

With the workflow in place, the next task was to register a set of images (other than the ones used during development) to test and benchmark the performance of the workflow on real and simulated datasets.

The dataset: for testing the workflow on real images, I used the B0 and T1 experimental images. Different modes of registration are used to show the efficacy of each on the end result.

A pair of images: b0, static (left) and T1, moving (right). Brain extraction was done using the fsl_bet tool with a custom value for the ‘fractional intensity threshold’ to enhance the quality of the extracted data.

The extracted B0 image.
The extracted T1 image.


Overlaying the images: it can be seen that the images are not aligned; the T1 is tilted towards the right. However, the difference shows up more clearly if we overlay the images on top of each other. Below is the overlay of the images in their native configuration (before registration).

Sagittal view of the overlaid images; it is evident that the ‘moving’ image does not fit perfectly on top of the static image, with the red portion of the ‘moving’ image extending outside the static image.
Coronal view of the overlaid images; again the ‘moving’ image does not fit perfectly on the static image, with the red portion extending outside it.
Axial view of the overlaid images; the same misalignment is visible, with the red portion of the ‘moving’ image extending outside the static image.

Registration-Mode ‘Center of mass’ based registration: to start with, let’s use the center of mass registration option of the workflow to align the images.

Sagittal view of the overlaid images; this result is better than the previous one, where the images were simply overlaid on top of each other. However, here too the alignment is not perfect, as there are portions of ‘RED’ which don’t overlap well.
Coronal view of the overlaid images. There is a mismatch in the lower region.
Axial view of the overlaid images; though better, portions of the moving image (in RED) still do not fit perfectly on top of the static image.

Registration-Mode ‘Translation based’ registration: now let’s employ the second option available in the workflow, i.e. translation-based image registration. The images below show how the translation-based registration improves on the simple ‘center of mass’ registration.

Sagittal view of the overlaid images, the translation based registration produced better results compared to the center of mass based registration.
Coronal view of the overlaid images, the translation based registration produced better results compared to the center of mass based registration.
Axial view of the overlaid images; there are still non-overlapping portions of ‘RED’, but they are proportionally smaller than in the center of mass results.

Registration-Mode ‘Rigid body based’ registration: the results can be further improved by registering the images using rigid body registration; the results below emphasize the improvements obtained. The rigid body registration was done progressively, i.e. it started from the results of the translation-based registration.

Sagittal view of the overlaid images, produced by rigid body based registration.
Coronal view of the overlaid images produced by rigid body based registration.
Axial view of the overlaid images produced by rigid body based image registration.

Registration-Mode ‘Full affine’ registration: affine registration also involves scaling and shearing of the data to produce the best results achievable for a given pair of images. The results below underscore the improvements obtained by the affine registration.

Sagittal view of the images registered by full affine registration.
Coronal view of the images aligned using the full affine registration.
Axial view of the registered images.

Here are the links for the commits made for the completed image registration workflow.

Commit Link for the completed Workflow: Registration Workflow

Commit Link for the completed unit test(s) for the Workflow: Registration Test

Benchmarking the workflow

The performance of the workflow (in terms of execution time) on both the simulated and real dataset is shown below.

The simulated dataset: the simulated dataset was generated by stacking slices of brain images on top of each other (to generate an image in the static configuration; these were coronal slices) and then applying a random transform to create the corresponding moving image. I set up the factors and used DIPY’s setup_random_transform() function to create the test data.

factors = {
    ('TRANSLATION', 3): (2.0, None, np.array([2.3, 4.5, 1.7])),
    ('RIGID', 3): (0.1, None, np.array([0.1, 0.15, -0.11, 2.3, 4.5, 1.7])),
    ('AFFINE', 3): (0.1, None, np.array([0.99, -0.05, 0.03, 1.3,
                                         0.05, 0.99, -0.10, 2.5,
                                         -0.07, 0.10, 0.99, -1.4]))}

In principle, any number of such images can be created using random transformations, but for the sake of testing, 3 distinct sets of images were created.

Set-A: A pair of images created using random translation.

Set-B: A pair of images created using rigid body transformation.

Set-C: A pair of images created using a full affine transformation (translation, rigid, shear and scaling).
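The kind of random rigid-body matrix such sets are built from can be sketched with plain NumPy. This is an illustrative stand-in for what DIPY's setup_random_transform() draws from the `factors` above, not its actual code; `random_rigid_affine` is an invented name.

```python
import numpy as np

def random_rigid_affine(rng, max_rot=0.15, max_trans=5.0):
    """Compose small random rotations (radians) and translations into a
    4x4 rigid-body affine -- an illustrative stand-in for the matrices
    used to generate the simulated moving images."""
    ax, ay, az = rng.uniform(-max_rot, max_rot, size=3)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az), np.cos(az), 0],
                   [0, 0, 1]])
    affine = np.eye(4)
    affine[:3, :3] = Rz @ Ry @ Rx           # combined rotation
    affine[:3, 3] = rng.uniform(-max_trans, max_trans, size=3)  # translation
    return affine

A = random_rigid_affine(np.random.default_rng(0))
```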

Tests run on the simulated data (the reported run times are the best out of 5 executions, with no other intensive application running on the system)

Data Type                                  | Run Time (seconds) | Transformation Type Applied | Progressive
Translated images                          | 16                 | Translation                 | N/A
Images created by rigid body transformation| 37                 | Rigid                       | No
Images created by rigid body transformation| 40                 | Rigid                       | Yes
Images created by affine transformation    | 117                | Affine                      | No
Images created by affine transformation    | 158                | Affine                      | Yes

 

Tests run on the real data (the reported run times are the best out of 5 executions, with no other intensive application running on the system)

Data Type      | Run Time (seconds) | Transformation Type Applied | Progressive
T1 and T1_run2 | 40                 | Translation                 | N/A
T1 and T1_run2 | 277                | Rigid                       | Yes
T1 and T1_run2 | 593                | Affine                      | Yes

Discussion: affine registration is by far the most time-consuming type of alignment. The tests revealed that for both simulated and real data, the maximum amount of time was spent in the affine registration. This is not a surprise, since affine registration involves full-scale alignment, whereas the other types of transformations (translation, rigid, etc.) do not involve scaling and shear operations.

Applying the transform on a collection of images

Once the initial registration is done, the workflow now has the option of also saving the affine matrix that was used for the transformation.

The affine matrix contains the optimal parameters used for translation and rotation. This is especially useful in the case where multiple moving images are generated during various runs of the system.

To produce accurate alignment, the affine matrix can be used to transform all the moving images rather than calling the Image Registration workflow every time (since image registration is resource-intensive).
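The idea of reusing one affine across a whole collection can be sketched with a tiny nearest-neighbour resampler. This is a minimal numpy illustration of the concept only; the workflow itself does the resampling through DIPY's AffineMap, and `transform_volume` is an invented name.

```python
import numpy as np

def transform_volume(vol, affine):
    """Nearest-neighbour resampling of a 3D volume through a 4x4 affine.

    A minimal numpy sketch of what the workflow does via DIPY's AffineMap;
    voxels that map outside the input volume are filled with 0.
    """
    inv = np.linalg.inv(affine)                       # pull coordinates back
    idx = np.indices(vol.shape).reshape(3, -1)        # output voxel grid
    coords = np.vstack([idx, np.ones((1, idx.shape[1]))])
    src = np.rint(inv @ coords)[:3].astype(int)       # source voxel per output voxel
    valid = np.all((src >= 0) & (src < np.array(vol.shape)[:, None]), axis=0)
    flat = np.zeros(vol.size, dtype=vol.dtype)
    flat[valid] = vol[src[0, valid], src[1, valid], src[2, valid]]
    return flat.reshape(vol.shape)

moving = np.random.rand(8, 8, 8)
aligned = transform_volume(moving, np.eye(4))   # identity leaves the volume unchanged
```

Given a saved affine, the same matrix can then be applied to every moving volume in the collection without re-running the optimizer.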

The Apply Transform Workflow: I developed a separate workflow for applying the transform to multiple images. The workflow is in the beta phase and is currently being tested; below is a link to the code for the workflow:

Commit Link for the Apply Transform Workflow: Apply Transform

I will post another blog soon with details about the Apply Transform workflow, how it works, and the associated benchmark data.

Adios for now!

Parichit