Well, first things first: DIPY stands for Diffusion Imaging in Python. DIPY is medical imaging software for analyzing and interpreting data generated by MRI systems (primarily brain images, along with supporting data such as system parameters and metadata). DIPY is an open source initiative (under the umbrella of the Python Software Foundation) and offers opportunities for scientific package implementation, solid software engineering, exciting visualization techniques that leverage state-of-the-art hardware (GPU shaders and more), and data-driven analytics (algorithms to improve image registration and more).
My Work and Its Usefulness
I will be working on creating feature-rich and user-friendly workflows that will become part of the DIPY source code. DIPY has a significant collection of scientific algorithms that can be linked via custom Python scripts to create and deliver flexible workflows to the end user. Though powerful in functionality, not all tutorials in DIPY have their own workflows yet. After passing manual and automated validation and checks, these workflows will help medical experts, researchers, and doctors quickly analyze MRI data in a standard manner.
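To give a feel for the idea (a toy sketch only; the class and step names here are hypothetical and do not reflect DIPY's actual workflow API), a workflow can be thought of as wiring existing processing steps into a single runnable unit:

```python
# A toy illustration of the workflow concept: chain processing steps
# behind one run() entry point. Names are hypothetical, not DIPY's API.

class ToyWorkflow:
    """Applies a sequence of processing steps to input data."""

    def __init__(self, steps):
        self.steps = steps  # list of callables, applied in order

    def run(self, data):
        for step in self.steps:
            data = step(data)
        return data

# Simple numeric transforms stand in for real image-processing stages
# (denoising, masking, registration, and so on).
pipeline = ToyWorkflow([lambda xs: [x * 2 for x in xs],
                        lambda xs: [x + 1 for x in xs]])
print(pipeline.run([1, 2, 3]))  # -> [3, 5, 7]
```

The point of the pattern is that the end user calls one entry point instead of stitching together the individual algorithms by hand.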
Exploring the Code Base
Lately, I have been going through the DIPY code base and learning to navigate its source code, that is, understanding how the code is structured and organized. In this, Dr. Eleftherios Garyfallidis and Serge Koudoro, founder and core developer of DIPY respectively, have been very helpful. I now have a clear understanding of how the files and data are organized in the code base.
A few hours and several test runs later, I understood why they created the introspective parser and found the places with scope for quick improvement. We discussed a list of things to be done on a priority basis.
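To illustrate the idea behind an introspective parser (this is a simplified sketch of the concept, not DIPY's actual implementation; `build_parser` and `toy_flow` are hypothetical names), one can generate command-line arguments automatically from a function's signature:

```python
import argparse
import inspect

def build_parser(func):
    """Build an argparse parser by introspecting a function's signature.

    A simplified take on the idea: each parameter becomes a CLI
    argument; parameters with defaults become optional flags.
    """
    parser = argparse.ArgumentParser(description=func.__doc__)
    for name, param in inspect.signature(func).parameters.items():
        if param.default is inspect.Parameter.empty:
            parser.add_argument(name)  # required positional argument
        else:
            parser.add_argument('--' + name, default=param.default)
    return parser

def toy_flow(input_file, out_dir='.'):
    """A hypothetical workflow entry point."""
    return input_file, out_dir

parser = build_parser(toy_flow)
args = parser.parse_args(['data.nii', '--out_dir', 'results'])
print(args.input_file, args.out_dir)  # -> data.nii results
```

With this approach, adding a parameter to the workflow function automatically exposes it on the command line, which is what makes the pattern attractive for a package with many workflows.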
Also to Be Added
A good amount of work will also be dedicated to testing the workflows on a variety of datasets and platforms. This will ensure the code behaves as expected and, in turn, add to the quality of the package.
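As a minimal sketch of the kind of check involved (the workflow here is hypothetical; real tests would run DIPY workflows on real imaging data), one can run a workflow on known input and assert on its output file:

```python
import tempfile
from pathlib import Path

def toy_flow(input_path, out_dir):
    """Hypothetical workflow: reads numbers, writes their doubled values."""
    values = [int(v) for v in Path(input_path).read_text().split()]
    out_file = Path(out_dir) / 'result.txt'
    out_file.write_text(' '.join(str(v * 2) for v in values))
    return out_file

# Run the workflow on a small known input and check the result.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / 'input.txt'
    src.write_text('1 2 3')
    result = toy_flow(src, tmp).read_text()
    assert result == '2 4 6'

print('workflow output check passed')
```

Automating checks like this across datasets and platforms is what turns "it ran once on my machine" into a dependable package.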
A relatively challenging part of the assignment will be integrating a visualization tool or intermediate-output parsers to sanity-check the quality of intermediate results. Catching problems early will prevent errors and troubleshooting from piling up down the line.
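A sanity check on intermediate output could look something like this (a hypothetical checker on plain numbers; real checks would inspect image volumes and their statistics):

```python
import math

def sanity_check(values, lo=0.0, hi=1.0):
    """Flag suspicious values in an intermediate result.

    Hypothetical example: report NaNs and out-of-range values so that
    problems surface early instead of corrupting later pipeline stages.
    """
    problems = []
    for i, v in enumerate(values):
        if math.isnan(v):
            problems.append(f'index {i}: NaN')
        elif not (lo <= v <= hi):
            problems.append(f'index {i}: {v} outside [{lo}, {hi}]')
    return problems

print(sanity_check([0.2, float('nan'), 1.5]))
# -> ['index 1: NaN', 'index 2: 1.5 outside [0.0, 1.0]']
```

An empty list means the intermediate result passed; a non-empty list tells the user exactly where to look before the next stage runs.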
Closing for now 🙂
That’s it for now, folks.
Stay tuned for real development updates and exciting new workflows. Oh yes, there will be awesome visualization too.
Adios for now!