Integrating animation in DIPY to help with quality assessment for big datasets.


I will start with the good news: the image registration workflow is in the process of being merged.

The code went through several iterations of improvement and optimization (after the PR was created); thanks to the community developers for sharing their useful comments.

Wondering what the following animation is saying? Read on to learn more about it.

For the past few days, I have been brainstorming and experimenting with the following objectives.

Objectives for this work

1) Adding an intuitive component to help in the assessment of the quality of registered images.

Please see my earlier posts about visualizing the results of Image Registration,

Visualizing the Registration Progress

2) Exploring options for the optimal, platform-independent way of communicating the results for quality assessment

Adding an intuitive Visualization Component

I have already discussed the usage and application of ‘Mosaics’ in assessing the visual quality of the registered data. A mosaic presents a holistic view of the registered image by displaying all the slices (from a specific plane) side by side so that the change in the voxels can be observed sequentially.

What are mosaics, and what are their limitations?

Mosaics can be handy when a user wants to view the slices and not the entire volume. This provides more fine-grained control over the quantity of information presented.

However, there is one thing lacking in the mosaic-based approach (not that it is not useful): for large volumes, the number of slices can be overwhelming. A large number of slices not only makes the mosaic harder to create but also makes it harder for the user to understand the data and results. Just imagine looking at hundreds of slices while trying to track the changes from one to the next.

Previous Commit Link: Mosaics

Turning the registered slices into an ‘Animation’

So, I needed another method to go with the mosaics and complement the quality assessment arsenal in DIPY. After discussing with the project mentors, we decided to use an animation (preferably a GIF) to show the changes in the moving image as the registration progresses.

I started experimenting in a separate branch for creating the animations to visually convey the quality of the registered data.

Commit Link: Adding the animation 

After multiple rounds of debugging and data-specific exploration (adjusting the code for the specific number of slices and volumes), I was finally able to create an animation showing the progress of image registration.
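The idea behind the animation can be sketched as follows. This is not the exact DIPY code (the function names here are illustrative): each axial slice becomes one RGB frame, with the static image in the RED channel and the moving image in the GREEN channel, and the frames can then be written out as a GIF, e.g. with imageio.

```python
import numpy as np

def to_uint8(vol):
    """Normalize a volume to the 0-255 range expected by GIF writers."""
    vol = vol.astype(np.float64)
    vol = 255 * (vol - vol.min()) / (vol.max() - vol.min())
    return vol.astype(np.uint8)

def make_overlay_frames(static_vol, moving_vol):
    """One RGB frame per axial slice: static in RED, moving in GREEN,
    so well-aligned voxels show up as YELLOW."""
    static_u8 = to_uint8(static_vol)
    moving_u8 = to_uint8(moving_vol)
    frames = []
    for k in range(static_vol.shape[2]):
        frame = np.zeros(static_vol.shape[:2] + (3,), dtype=np.uint8)
        frame[..., 0] = static_u8[:, :, k]   # RED channel <- static
        frame[..., 1] = moving_u8[:, :, k]   # GREEN channel <- moving
        frames.append(frame)
    return frames

# The frames could then be saved as an animation, e.g.:
# import imageio
# imageio.mimsave('registration.gif', frames, duration=0.1)
```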

Animating the slices from the ‘Center of Mass’ based registration

The static image (in RED) and the moving image (in GREEN) are shown overlapping. The yellow dots indicate the areas where the overlap is good.

Animating the slices from the ‘Translation’ registration

The animation below shows the slices (from both the static and the moving images) in the axial plane overlapping with each other. This particular image is a bit blurry due to the enlarged size, but I am working on improving the quality of the results further.

Translation based registration using Mutual Information Metric: The static image (in RED) and the moving image (in GREEN) are shown overlapping.

Animating the ‘Rigid body based’ registration

The following animations show the effect of rigid body based registration on the moving image when the rigid registration is done using the Mutual Information metric.

Rigid registration using Mutual Information Metric: The static image (in RED) and the moving image (in GREEN) are shown overlapping. The portions in yellow are the ones that aligned well.

Animating the slices from the ‘Affine’ registration

The best results were obtained with affine registration, as shown in the following animation. The portions of ‘RED’ are most superimposed by the GREEN in the affine registration (relatively better when compared to the other modes).

Full Affine registration using Mutual Information Metric: The static image (in RED) and the moving image (in GREEN) are shown overlapping.

Correcting the orientation

By interpolating the moving image and normalizing it (to the color range of a GIF), the animations were reproduced in the correct orientation.

The MP4 video below shows the animation in the correct orientation (this is the center-of-mass-based registration for demonstration purposes, but similar results can be produced for the other modes too).

Video: The animation in the correct orientation (scroll to the bottom of the page)

Commit Link: Visualising with the correct orientation

Merging the experimental code with the local repo

Since the initial experiments with the animations yielded satisfactory output, I started adding the changes to the branch in my local repository and also made the code more general.

Commit Link: Complete Code Affine Visualizations

These changes are still under testing and will be merged with the experimental branch of DIPY for large data (nipy/dipy_qa).

That’s about it for now. Future posts will focus on developing new workflows with DIPY (now that Image Registration is complete with robust quality assessment metrics).



Improving the Workflow Code (based on the Community Feedback)


Hey there,

So, there has been some delay in the timeline for my project, but this delay hasn’t been unproductive: I have been able to create good results for visual quality assessment. In fact, the Image Registration workflow is now supplemented by both quantitative and qualitative metrics for assessment.

This blog post reviews what has been done in the Image Registration workflow, and the many improvements I have been making to the code lately based on community feedback.

Reviewing the additions to the workflow (for assessing quality)

For quantitative assessment: There is now the option of saving the distance and optimal parameter metrics. For details about the code, see the following PR.

(old) PR for quantitative metric addition in the workflow

Testing and benchmarking the Image Registration with DIPY

Qualitative assessment: I am working on a separate branch to create the mosaic of registered images. This branch isn’t part of master because the primary Image Registration workflow has not been merged (yet). More details about what these mosaics are and how they can help in quality assessment can be found in one of my older posts below.

Visualizing the Registration Progress

Reviewing the improvements to the Image Registration workflow

PR 1581 received a good response from the DIPY development community. I will discuss a few of the issues (this discussion is by no means exhaustive; I am selecting the points that I think made a difference). The majority of these issues were noted by the DIPY developer @nilgoyette. Many thanks to @nilgoyette for pointing them out and helping me improve the code.

Modifying the code based on the community feedback (many thanks to @nilgoyette & @skoudoro for the useful comments)

Improvement-1: Following code consistency and uniform standards. I overlooked the fact that simple things like import statements and line widths were no longer uniform in the file after I added my code. For example, both “,” and “()” were used for importing multiple names from a module.
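An illustrative example of the two styles that were mixed (using a stdlib module here rather than the actual DIPY imports):

```python
# Style 1: comma-separated names on a single line
from os.path import join, basename

# Style 2: the parenthesized multi-line form; picking one style and
# using it everywhere keeps the imports uniform and the lines within
# the PEP 8 width limit
from os.path import (join,
                     basename)

print(join('some_dir', basename('/tmp/input.txt')))
```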

Old Commit

New Commit 

While these things don’t hurt the functionality of the code, they make the code look more consistent. I updated the code to follow the same standards everywhere.

Improvement-2: Moving to assert_almost_equal for comparing long floating-point numbers. I had written redundant code that rounded off a float just to compare it with another float, whereas NumPy’s assert_almost_equal is made for exactly this purpose, so I updated the old test case to use assert_almost_equal.

This not only improved the code readability but also made the test’s objective clear. Furthermore, using NumPy’s assertion functions made the test cases more consistent with the code base.
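A minimal illustration of the change (the metric value here is made up for the example):

```python
from numpy.testing import assert_almost_equal

distance = 0.33333337  # e.g. a long float returned by a metric computation

# Before: manual rounding obscures what the test is actually checking
assert round(distance, 5) == round(1.0 / 3.0, 5)

# After: the intent (approximate equality to 5 decimals) is stated directly
assert_almost_equal(distance, 1.0 / 3.0, decimal=5)
```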

Commit Link

Improvement-3: Reducing code duplication. The Image Registration workflow is complicated by the fact that it supports multiple registration modes, both progressively and non-progressively. This led to parts of various local functions being duplicated; I reduced the duplication (marginally, though) by moving part of the original code into a separate function.

Commit Link

Improvement-4: Using Python’s “_” placeholder for values that are returned by a function call but never used. I was using a named variable to hold data returned by a function even though the variable was never used later, so I moved to the more Pythonic “_” placeholder.
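An illustrative sketch (the function here is a hypothetical stand-in for the actual workflow call):

```python
def run_registration():
    """Hypothetical stand-in returning (registered_image, affine_matrix)."""
    return "registered_image", "affine_matrix"

# Before: 'affine' was bound to a name but never used again
# registered, affine = run_registration()

# After: '_' makes it explicit that the second value is discarded
registered, _ = run_registration()
print(registered)
```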

Commit Link

Improvement-5: Using Python’s built-in assert statement in the test cases. Parts of the test cases were using NumPy’s assert_equal to check equality against the booleans True/False, so it made more sense to just use the plain assert for such checks. Not that assert_equal was incorrect, but assert reads better for simple unit checks such as testing for True/False.
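For example (with a made-up flag standing in for the actual test condition):

```python
from numpy.testing import assert_equal

images_equal = True  # illustrative boolean result from a test

# Before: a NumPy assertion used for a plain boolean check
assert_equal(images_equal, True)

# After: the built-in assert reads more naturally for such unit checks
assert images_equal
```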

Commit Link

All this feedback made the code more consistent and optimal. All these improvements (along with other changes to the code base) are now part of PR 1581 and waiting to be merged.

In the coming weeks, I will be sharing more details about the results of the apply_transform workflow (as promised earlier) and also about the new visualizations that can be done with registered data using native matplotlib calls. More details about the apply_transform workflow can be found in the following post,

Transforming multiple MRI Images (in a Jiffy!)

Adios for now!


Visualizing the Registration Progress


The Image Registration Workflow is receiving significant feedback from the DIPY developer community. The reviews have proved useful for both the code quality and the semantics of the code.

While I am constantly improving the code and updating the PR with new commits, I thought it would be worthwhile to give a sneak peek into the intuitive visualizations that can be achieved with the VTK modules in DIPY.

So, we know that a given pair of images can be registered in primarily four modes; more details about these modes and how to perform each registration can be found in my earlier post at the following link,

Testing and benchmarking the Image Registration with DIPY

Today, I will be sharing results about the powerful visualization(s) that can be generated with DIPY for interpreting the results in an intuitive manner.

Comparative assessment of the results generated by various modes of registration

To compare and assess the quality of image registration across the different modes, a mosaic is generated for the registered image to show the progress of the registration process. This also helps in comparing the various modes with each other. As will be evident in the later sections of this post, affine registration results in the best overlap between the images.

Experiment-1 on forked repository: Creating Overlaying Images

Experiment-2 on forked repository: Creating a mosaic of the registered image

A) Visualising the Mosaic of the Registered Images (Translation based):

The mosaic below displays all the slices of the registered image. The registered image was produced using translation-based registration.

A mosaic displaying all the slices of the registered image data. RED: Static Image, GREEN: Moving Image. A proportion of YELLOW shows the areas where the overlap is very good.

As can be seen from the above mosaic, as the slices are displayed progressively (left to right, top to bottom), the overlapping proportion of red and green increases towards the final slices.

This depicts the progress of the registration process as the two images are being aligned in a common space.

B) Visualising the Mosaic of the Registered Images (Rigid body based)

The rigid body registration was done progressively, meaning that it uses the affine matrix obtained from the translation in step (A) above. The mosaic below shows the progress of the rigid registration process and highlights how the two images are aligned as the color channels overlap with each other.

A mosaic displaying all the slices of the registered image data. RED: Static Image, GREEN: Moving Image. A proportion of YELLOW shows the areas where the overlap is very good.

C) Visualising the Mosaic of the Registered Images (Affine registration based)

The best possible results were obtained with the affine registration mode done progressively. The mosaic below displays the progress of the registration process as the alignment proceeds and the images are brought together.

Important to note is how at the start there is only RED, and then the GREEN is increasingly introduced into the mosaic, which means the moving image is being brought into the common alignment space. Eventually, the moving image is superimposed on top of the static image.

A mosaic displaying all the slices of the registered image data. RED: Static Image, GREEN: Moving Image. A proportion of YELLOW shows the areas where the overlap is very good.

Mosaic for a relatively larger MRI data

The following mosaic was created for a different brain image; this dataset is relatively larger than the earlier one, which can also be noticed from the number of tiles in the mosaic in the following image.

Visualising the mosaic of the images from two different runs of the same brain MRI.

Notice the varying hue of YELLOW in this image; it indicates the varying degree of superimposition of the voxels between the static and moving images. The YELLOW is also present in the previous mosaics, but its intensity is relatively poor, which makes the differences between the images hard to comprehend. I am still working on improving the visual quality of these mosaics so that the differences become more apparent.

Here is how the different channels are created for the above images.

import numpy as np

# Normalize the input images to the [0, 255] range
static_img = 255 * ((static_img - static_img.min()) /
                    (static_img.max() - static_img.min()))
moved_img = 255 * ((moved_img - moved_img.min()) /
                   (moved_img.max() - moved_img.min()))

# Create the color image (one RGB channel per input image)
overlay = np.zeros(shape=static_img.shape + (3,), dtype=np.uint8)

# Copy the normalized intensities into the appropriate channels of the
# color image
overlay[..., 0] = static_img  # RED   <- static image
overlay[..., 1] = moved_img   # GREEN <- moving image

The overlay contains both the RED and the GREEN channels and is used afterward to extract the slices for creating the mosaic.
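A sketch of how such an overlay volume can be tiled into a mosaic (this is not the exact DIPY code; the grid width is an arbitrary choice here):

```python
import numpy as np

def make_mosaic(overlay, cols=8):
    """Tile the axial slices of an RGB overlay volume into a single
    mosaic image, filling the grid left-to-right, top-to-bottom."""
    n_slices = overlay.shape[2]
    rows = int(np.ceil(n_slices / cols))
    h, w = overlay.shape[0], overlay.shape[1]
    mosaic = np.zeros((rows * h, cols * w, 3), dtype=overlay.dtype)
    for k in range(n_slices):
        r, c = divmod(k, cols)
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = overlay[:, :, k, :]
    return mosaic
```

The resulting array can then be displayed or saved with any image library, e.g. matplotlib's `imshow`.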

List of PRs that are now merged into the code base

Some of my earlier PRs are now merged into the DIPY code base and are available below for review and further comments,

Showing help when no input parameter is present

Updating the default output strategy

The next blog posts will present more details about the new methods of visualization and the optimized code for Image Registration; hopefully, the Image Registration Workflow will be merged into the master codebase by then. 🙂

Adios for now,





Transforming multiple MRI Images (in a Jiffy!)




Update: The Image Registration workflow has been put into the PR zone. See the PR below (from earlier this week) for details:

Link to the PR: Image Registration Workflow 

This blog post is about the extension of the Image Registration Workflow for transforming several images quickly. Wondering why that is any different from conventional image registration (which is already pending approval)?

Here is the thing: a typical image registration takes anywhere between 16 and 158 seconds (just for the coronal slices) and 34 to 593 seconds (for brain images, depending on the input size and type of registration). So, the idea is to not repeat the registration for different images of the same brain, but rather to use the affine transformation matrix (obtained from the first registration itself) to align the other images to the static/reference image at once.

Apart from huge savings (in both time and computation), this will be handy when a user has a large collection of moving images: any one of them is registered, and the rest can then simply follow the information in the affine matrix without having to be registered individually.

Step-1: Load the static and moving image/image(s) as numpy arrays

import nibabel as nib

static_image = nib.load(static_image_file)
static_grid2world = static_image.affine

moving_image = nib.load(moving_image_file)
image_data = moving_image.get_data()

Step-2: Load the affine matrix saved by the registration workflow and do a basic sanity check before moving on to align the images. This ensures that the dimensions of the images match so that the workflow doesn’t break while running.

affine_matrix = load_affine_matrix(affine_matrix_file)

Step-3: Instantiate the AffineMap class (with the affine matrix obtained in Step-2 above) and call its transform() method.

from dipy.align.imaffine import AffineMap

img_transformation = AffineMap(affine=affine_matrix,
                               domain_grid_shape=image_data.shape)
transformed = img_transformation.transform(image_data)

Step-4: Save the transformed image on the disk using the coordinate system of the static image.

from os.path import join
from dipy.io.image import save_nifti

save_nifti(join(out_dir, out_file), transformed, affine=static_grid2world)

Here is the link to the latest version of the workflow

Commit Link: Apply Transform Workflow

Commit Link: Unit test cases for the Apply Transform Workflow

I will open the PR for this workflow once the initial Image Registration workflow is merged (that is a prerequisite for this one to work). This will also be a good opportunity to get feedback about possible code modifications, optimizations, and testing with different input data.

Soon, I will be updating this blog with images that were aligned using this workflow along with the benchmarks.

Adios for now,


Testing and benchmarking the Image Registration with DIPY



I am very happy to share that the first version of the Image registration workflow is now complete with associated test cases and benchmark data.

It will pave the way for a user-friendly method of image alignment (for both standalone MRI images and collections), thus leading to efficient image registration for non-domain users.

Link to the PR: Image Registration Workflow 

Visualising the results of the Image Registration with DIPY

With the workflow in place, the task now was to register a set of images (other than the ones used during the development) to test and benchmark the performance of the workflow on real and simulated datasets.

The dataset: For testing the workflow on real images, I used the B0 and T1 experimental images. Different modes of registration are used to show the efficacy of each one on the end result.

A pair of b0, static (left) and T1, moving (right) images. The brain extraction was done using the fsl_bet tool with a custom value for the ‘fractional intensity threshold’ to enhance the quality of the extracted data.

The extracted B0 image.
The extracted T1 image.






Overlaying the Images: It can be seen that the images are not aligned; the T1 appears tilted towards the right. However, the difference shows more profoundly if we overlay these images on top of each other. Below is the overlay of these images showing the differences in their native configuration (before registration).

Sagittal view of the overlaid images, it is evident that the ‘moving’ image is not perfectly fitting on top of the static image, the red portion of the ‘moving’ can be seen overlaying outside the static image.
Coronal view of the overlaid images, it is evident that the ‘moving’ image is not perfectly fitting on the static image, the red portion of the ‘moving’ can be seen overlaying outside the static image.
Axial view of the overlaid images, it is evident that the ‘moving’ image is not perfectly fitting on top of the static image, the red portion of the ‘moving’ can be seen overlaying outside the static image.

Registration-Mode ‘Center of mass-based’ registration: To start with, let’s use the center of mass based registration option of the workflow and align the images. 

Sagittal view of the overlaid images, this result is better than the previous one where the images were simply overlaid one on top of the other. However, here also we can see that the alignment is not perfect as there are portions of ‘RED’ which don’t overlap well.
Coronal view of the overlaid images. There is a mismatch in the lower sagittal region.
Axial view of the overlaid images, though better still the portions of the moving image (in RED) are not fitting perfectly on top of the static.

Registration-Mode ‘Translation based’ registration: Now let’s employ the second option available in the workflow, i.e. translation-based image registration. The images below show how the translation-based registration improves on the simple ‘center of mass’ based registration.

Sagittal view of the overlaid images, the translation based registration produced better results compared to the center of mass based registration.
Coronal view of the overlaid images, the translation based registration produced better results compared to the center of mass based registration.
Axial view of the overlaid images; there are still non-overlapping portions of ‘RED’, but they are smaller in proportion compared to the center of mass based results.

Registration-Mode ‘Rigid body based’ registration: The results can be further improved by registering the images using rigid body based registration; the results shown below emphasize the improvements obtained. The rigid body registration was done progressively, i.e. it started from the results obtained by the translation-based registration.

Sagittal view of the overlaid images, produced by rigid body based registration.
Coronal view of the overlaid images produced by rigid body based registration.
Axial view of the overlaid images produced by rigid body based image registration.

Registration-Mode ‘Full affine’ registration: Affine registration also involves scaling and shearing of the data to produce the optimal results achievable for a given pair of images. The results shown below underscore the improvements obtained by the affine registration.

Sagittal view of the images registered by full affine registration.
Coronal view of the images aligned using the full affine registration.
Axial view of the registered images.

Here are the links for the commits made for the completed image registration workflow.

Commit Link for the completed Workflow: Registration Workflow

Commit Link for the completed unit test(s) for the Workflow: Registration Test

Benchmarking the workflow

The performance of the workflow (in terms of execution time) on both the simulated and real dataset is shown below.

The simulated dataset: The simulated dataset was generated by stacking slices of brain images on top of each other (to generate an image in the static configuration; these were coronal slices) and then using a random transform to create the corresponding moving image. I set up the factors and used DIPY’s setup_random_transform() function to create the test data.

import numpy as np

factors = {
    ('TRANSLATION', 3): (2.0, None, np.array([2.3, 4.5, 1.7])),
    ('RIGID', 3): (0.1, None, np.array([0.1, 0.15, -0.11, 2.3, 4.5,
                                        1.7])),
    ('AFFINE', 3): (0.1, None, np.array([0.99, -0.05, 0.03, 1.3,
                                         0.05, 0.99, -0.10, 2.5,
                                         -0.07, 0.10, 0.99, -1.4]))}

In principle, any number of such images can be created using random transformations, but for the sake of testing, 3 distinct sets of images were created.

Set-A: A pair of images created using random translation.

Set-B: A pair of images created using rigid body transformation.

Set-C: A pair of images created using a full affine transformation (translation, rigid, shear, and scaling).

Tests run on the Simulated Data (The reported run times are the best out of 5 executions when no other intensive application was running on the system)

Data Type                                     Run Time (s)   Transformation Applied   Progressive
Translated images                             16             Translation              N/A
Images created by rigid body transformation   37             Rigid                    No
Images created by rigid body transformation   40             Rigid                    Yes
Images created by affine transformation       117            Affine                   No
Images created by affine transformation       158            Affine                   Yes


Tests run on the real Data (The reported run times are the best out of 5 executions when no other intensive application was running on the system)

Data Type        Run Time (s)   Transformation Applied   Progressive
T1 and T1_run2   40             Translation              N/A
T1 and T1_run2   277            Rigid                    Yes
T1 and T1_run2   593            Affine                   Yes

Discussion: Affine registration is by far the most time-consuming type of alignment. The tests revealed that for both simulated and real data, most of the time was spent in the affine registration. This is not a surprise, since affine registration involves full-scale alignment, whereas the other types of transformations (translation, rigid, etc.) do not involve scaling and shear operations.

Applying the transform on a collection of images

Once the initial registration is done, the workflow now has the option of also saving the affine matrix that was used during the transformation.

The affine matrix contains the optimal parameters used for translation and rotation. This is especially useful in the case where multiple moving images are generated during various runs of the system.

To produce accurate alignment, the affine matrix can be used to transform all the moving images rather than calling the Image Registration workflow every time (since image registration is a resource-intensive process).

The Apply Transform Workflow: I developed a separate workflow for applying the transform to multiple images. The workflow is in the beta phase and is currently being tested; below is a link to the code for the workflow:

Commit Link for the Apply Transform Workflow: Apply Transform

I will be posting another Blog soon with the details about the Apply Transform workflow and how it works with the associated benchmark data.

Adios for now!


DIPY Workflow, Image registration And Documenting the Test Cases





Some background: One of the important objectives of my GSoC project is to develop quality workflows to serve the scientific community better. Good workflows are crucial to enable the outreach and well-defined usage of the various features in the DIPY package.

How can workflows substantiate the outreach? DIPY contains implementations of many scientific algorithms used in routine analysis of MRI data. A few of these implementations are fairly straightforward and easy to grasp and use (credit also goes to Python’s intuitive syntax and the open source community for contributing to the DIPY project).

However, the outreach focus is on a not-so-programming-friendly user base: for example, medical practitioners, life sciences experts, or users in academia who would like to leverage DIPY quickly to address their own research problems. This does not mean that they cannot implement their own packages (surely they can), and DIPY as a community project depends on feedback and improvements from many such users.

The objective is to provide an end-to-end processing pipeline to the user with a minimal learning curve. In a workflow, several individual components of DIPY (a module, a function) are combined in a well-defined manner to deliver the implementations with good software development practices. This abstracts the low-level details away from users while still allowing them to use DIPY.

Experienced users can explore the individual components of a workflow and get fine-grained control by tweaking the parameters (accessible through the help).

How can workflows ensure well-defined usage? Each workflow that combines multiple components also follows a rigorous testing and quality assurance procedure to check and validate the output from various intermediate components. This results in a well-tested series of steps to achieve a specific objective with DIPY.

These past 2 weeks, I have been working on creating the image registration workflow while also fixing other issues in DIPY (see this post).

The Image Registration: Put simply, registration means aligning a pair of images (MRI data) so that downstream analysis can be performed on the registered image. This matters because the raw data obtained from dMRI consists of moving images, which need to be pre-processed before other types of analysis.

The registration of MRI data is a complex process with multiple options available for registering the images, for example:

A) Registration based on the Center of Mass.

B) Registration based on the Translation of Images.

C) Registration based on the Rigid body Transformation.

D) Full Affine Registration, that involves center of mass, translation, rigid body transformation, shear, and scaling of the data.

Below is the link to the workflow that I have developed for registering the image data.

Commit Link: Image Registration Workflow

In the coming weeks, I will be improving the unit tests for this workflow. In addition to testing the expected behavior (correct output), the test cases will also check erroneous cases (where an error is triggered intentionally).

Commit Link: Testing the Image Registration Workflow

Together, the registration workflow and the testing framework will provide a standardized option for the users to register images in various modes (and be ensured that the output is generated after passing multiple tests).

Documenting the Use Case(s) for IOIterator in the Workflow

As a good documentation practice, I also created multiple use cases for running a workflow with a combination of input and output parameters.

This was done to check the creation of output files in response to the location of the input files, the usage of wildcards, and the enabling of parameters in the workflow.

These use cases will serve as a comprehensive guide for users looking to learn about various usage scenarios of workflows.

The documentation can be found at the following link:

Commit Link: Documenting the use cases 

Extract from the document: (dipy_append_text is a sample workflow created for the purpose of this testing.)

(dipy_test_cases is the parent directory containing the experiment directories (exp1, exp2, etc.) and the respective input files for testing.)

Test case 1: Both input files are present in the same directory and no output directory path is provided.

Directory: exp1 (experiment 1)

Command: dipy_append_text in1.txt in2.txt

Output: An output file ‘out_file.txt’ is written in the same directory.

With the --force flag, overwriting of an existing output file is enforced:

Command: dipy_append_text in1.txt in2.txt --force

Optional flag: --force

Test case 2: An output directory within the current directory is specified and the --force flag is used.

Directory: exp1 (experiment 1)

Command: dipy_append_text in1.txt in2.txt --force --out_dir tmp

Output: An output file (out_file.txt) is written in the directory ‘tmp’ within the exp1 directory.

Optional flags: --force --out_dir

Test case 3: Going one level up in the directory tree and executing the workflow with input file paths.

Directory: dipy_test_cases

Command: dipy_append_text exp1/in1.txt exp1/in2.txt --force --out_dir tmp

Output: An output file (out_file.txt) is again written in the directory ‘tmp’ within the exp1 directory. Due to the --force flag, the previous ‘tmp’ output is overwritten.

Optional flags: --force --out_dir
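The output-location rules exercised by these test cases can be sketched as a small helper. This is a hypothetical resolve_out_dir, not DIPY's actual IOIterator code, written under the assumption that a relative out_dir is always created inside the input file's directory:

```python
import os

def resolve_out_dir(input_path, out_dir=""):
    """Hypothetical sketch of the output-directory rule seen above:
    with no out_dir, outputs land next to the input file; a relative
    out_dir is resolved inside the input file's directory; an
    absolute out_dir is used as given."""
    base = os.path.dirname(os.path.abspath(input_path))
    if not out_dir:
        return base                      # test case 1: next to the input
    if os.path.isabs(out_dir):
        return out_dir
    return os.path.join(base, out_dir)   # test cases 2 and 3: inside exp1
```

This reproduces why, in test case 3, running from dipy_test_cases with `--out_dir tmp` still writes into exp1/tmp: the directory is resolved against the input files, not the current working directory.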

Adios for now!



Finding and fixing the ‘small and crucial’ issues in DIPY.




Finding and fixing the issues: After a week of brainstorming and reading through the basic tutorials and documentation of DIPY, I discovered the following issues in the documentation and the code base.

Each of the reported issues is described below:

1. Fixing the documentation of the workflows: The tutorial webpage for workflow creation in DIPY (workflow) did not mention importing the newly created workflow method; it only mentioned importing the run_flow method from the flow_runner class. That works when the workflow is called directly from the command line, but not when it has to be wrapped in a separate Python file and called from elsewhere.

Solving the issue: I updated the documentation and included the required import statement in the documentation.

Commit Link: Updated the 

This Pull request has been successfully merged with the code base 🙂
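The pattern the fixed documentation describes can be sketched in a self-contained way. The classes below merely stand in for DIPY's real Workflow and run_flow (which live under dipy.workflows); the point is that a wrapper script needs the workflow class import as well, not just run_flow:

```python
import sys

class Workflow:
    """Stand-in for DIPY's Workflow base class."""
    def run(self, *args):
        raise NotImplementedError

class AppendTextFlow(Workflow):
    """Toy workflow: appends two text arguments."""
    def run(self, in1, in2):
        return in1 + in2

def run_flow(flow, argv=None):
    """Stand-in for dipy.workflows.flow_runner.run_flow: hands the
    command-line arguments to the workflow's run method."""
    argv = sys.argv[1:] if argv is None else argv
    return flow.run(*argv)

# The documentation fix in a nutshell: a wrapper script needs BOTH
# imports, e.g.
#     from dipy.workflows.flow_runner import run_flow
#     from dipy.workflows.my_workflow import AppendTextFlow
#     run_flow(AppendTextFlow())
```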

2. Displaying a nice and helpful message when a workflow is invoked without any inputs: DIPY requires workflows to be invoked with certain input parameters, where both the number and the format of the inputs are strictly important.

Behavior: Invoking the workflow without any input parameters resulted only in an error trace, with no helpful message for the user. (This stack trace was hard to decipher.)

Solving the issue: This behavior was handled inside the file and a conditional check was used to display the appropriate message to the user about missing parameters.

PR number: 1523

Commit Link:  Showing help when no input parameters are provided to the workflow

This Pull request has been successfully merged with the code base 🙂
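The guard can be sketched as follows (a hypothetical main and usage string, not the exact DIPY code): when no arguments are passed, print a usage message and exit cleanly instead of letting the argument parser crash with a trace.

```python
import sys

# Hypothetical usage string, modeled on the sample workflow above.
USAGE = ("usage: dipy_append_text in1.txt in2.txt "
         "[--force] [--out_dir DIR]")

def main(argv=None):
    """Sketch of the conditional check: show help and exit cleanly
    when the workflow is called without any input parameters."""
    argv = sys.argv[1:] if argv is None else argv
    if not argv:              # workflow invoked with no inputs at all
        print(USAGE)
        return 1              # non-zero exit code: nothing was run
    # ... normal argument parsing and workflow execution would go here ...
    return 0
```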

3. Suppressing the harmless h5py warnings: Due to the dependency of DIPY on certain features of the older version of h5py, the h5py package cannot be updated in the new release.

Behavior: There was always a ‘FutureWarning’ from the h5py package whenever a workflow was invoked.

The root cause analysis: Since all the workflows essentially make use of the run_flow method of the flow_runner class, that was the right place to handle this warning: the run_flow method is imported before any other imports in the workflow script.

Solving the issue: I created a custom exception handler in the class to catch the ‘FutureWarning’. This suppressed the harmless (but annoying) warning from h5py.

PR number: 1523

Commit Link: Suppressing the ‘FutureWarning’ from the h5py package. 

This Pull request has been successfully merged with the code base 🙂
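The commit linked above has the exact mechanism; with the standard-library warnings module, the same effect can be demonstrated as below. The FutureWarning is raised manually here to stand in for the one h5py emits:

```python
import warnings

# In the flow runner, the warning could be silenced once, before
# h5py is imported, e.g.:
#     warnings.filterwarnings("ignore", category=FutureWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")           # baseline: record everything
    warnings.warn("h5py-style notice", FutureWarning)
assert len(caught) == 1                       # without a filter, it surfaces

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", FutureWarning)   # the suppression
    warnings.warn("h5py-style notice", FutureWarning)
assert caught == []                           # the warning is silenced
```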

4. Catching the argument mismatch between the run method and the doc string: All workflows require strict documentation for the parameters provided to the run method. Formatting restrictions are imposed due to adherence to the PEP8 code styling guidelines, and both the positional and the optional parameters need to be documented.

Behavior: The workflow exited with a cryptic error trace (usually difficult to understand) whenever there was a mismatch between the number of parameters mentioned in the doc string and in the run method. There was no conditional check for handling this behavior.

The root cause of the error: In the file, the numbers of arguments in the doc string and in the run method were never compared to establish equal length (which is required), so the workflow simply led to a cumbersome error trace whenever a mismatch occurred.

Solving the issue: I created a simple conditional check to ensure that the doc string parameters match exactly with those of the run method, raising a ValueError otherwise.

PR number: 1533

Commit Link: Mismatching arguments between the doc string and the run method

This Pull request has been successfully merged with the code base 🙂
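The idea behind the check can be sketched with the inspect module. This is a hypothetical check_doc_matches_run, simplified from whatever the actual PR does; the toy docstring parser assumes a numpydoc-style ‘Parameters’ section with no sections after it:

```python
import inspect

def check_doc_matches_run(flow_class):
    """Hypothetical sketch: compare run()'s signature against the
    parameter names listed in its docstring, and fail loudly with
    a ValueError on any mismatch."""
    run_params = [p for p in inspect.signature(flow_class.run).parameters
                  if p != "self"]
    doc = inspect.getdoc(flow_class.run) or ""
    doc_params, in_params = [], False
    for line in doc.splitlines():
        stripped = line.strip()
        if stripped == "Parameters":
            in_params = True
        elif in_params and stripped and set(stripped) == {"-"}:
            continue                       # the '----------' underline
        elif in_params and " : " in stripped:
            doc_params.append(stripped.split(" : ")[0])
    if doc_params != run_params:
        raise ValueError("doc string lists %s but run() takes %s"
                         % (doc_params, run_params))

class GoodFlow:
    def run(self, in1, in2):
        """Append two text files.

        Parameters
        ----------
        in1 : string
            Path of the first input file.
        in2 : string
            Path of the second input file.
        """

check_doc_matches_run(GoodFlow)   # consistent: passes silently
```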

Adios for now!


Sneak-Peek into the DIPY Workflows and Philosophy





Well, first things first: DIPY stands for Diffusion Imaging in Python. DIPY is a medical imaging package meant to analyze and interpret the data generated by MRI systems (primarily brain images and other supporting data: system parameters, metadata, etc.). DIPY is an open source initiative (under the umbrella of the Python Software Foundation) and provides opportunities for scientific package implementation, powerful software engineering, exciting visualization techniques that leverage state-of-the-art hardware (GPU shaders and more), and data-driven analytics (algorithms to improve image registration and more).

My Work and Its Usefulness

I will be working on creating feature-rich and user-friendly workflows that will become part of the DIPY source code. DIPY has a significant collection of scientific algorithms that can be linked via custom Python scripts to create and deliver flexible workflows to the end user. Though powerful in functionality, not all tutorials in DIPY have their own workflows yet. After passing manual and automated validation and checks, these workflows will help medical experts, researchers, and medical doctors quickly analyze MRI data in a standard manner.

Exploring the Code Base

Recently, I have been going through the code base of DIPY and learning to navigate its source code, that is, understanding how the code is structured and organized. In this context, Dr. Eleftherios Garyfallidis and Serge Koudoro, the founder and a core developer of DIPY respectively, have been very helpful. Now, I have a clear understanding of how the files and data are organized in the code base.

A few hours and several test runs later, I realized why they created the introspective parser, and I spotted the places where there is scope for quick improvement. We discussed a list of things to be done on a priority basis.

Also to be Added

A good amount of work will also be dedicated to ensuring that the workflows are executing as expected and testing them on a variety of datasets and platforms. This will ensure that the code behaves as expected and in turn will add to the quality of the package.

A relatively challenging part of the assignment will be to integrate a visualization tool or intermediate output parsers to sanity-check the quality of intermediate outputs. This will prevent many errors and much troubleshooting down the line.

Closing for now 🙂

That’s it, for now, folks.

Stay tuned for real development updates and exciting new workflows. Oh yes, there will be awesome visualization too.


DIPY GitHub Code Base

My Forked Repository

Adios for now!