tvcastillod's Blog

Final Report

Published: 08/28/2023

gsoc  fury

Google Summer of Code 2023 Final Work Product


Diffusion Magnetic Resonance Imaging (dMRI) is a non-invasive imaging technique used by neuroscientists to measure the diffusion of water molecules in biological tissue. The directional information is reconstructed using either a Diffusion Tensor Imaging (DTI) or High Angular Resolution Diffusion Imaging (HARDI) based model, which is graphically represented as tensors and Orientation Distribution Functions (ODFs). Traditional rendering engines discretize tensor and ODF surfaces using triangular or quadrilateral polygons, making their visual quality depend on the number of polygons used to build the 3D mesh, which might compromise real-time display performance. This project proposes a methodological approach to further improve the visualization of DTI tensor and HARDI ODF glyphs by using well-established techniques in the field of computer graphics, such as geometry amplification, billboarding, signed distance functions (SDFs), and ray marching.

Proposed Objectives

  • Implement a parallelized version of computer-generated billboards using geometry shaders for amplification.
  • Model the mathematical functions that express the geometry of ellipsoid glyphs and implement them using Ray Marching techniques.
  • Model the mathematical functions that express the geometry of ODF glyphs and implement them using Ray Marching techniques.
  • Use SDF properties and techniques to represent the uncertainty of dMRI reconstruction models.

Objectives Completed

  • Ellipsoid actor implemented with SDF

A first approach for tensor glyph generation has been made, using ray marching and an SDF applied to a box. The current implementation (tensor_slicer) requires a sphere with a specific number of vertices to be deformed; with this model, a higher resolution demands a sphere with more vertices. Because the ray marching technique does not use polygonal meshes, it is possible to define perfectly smooth surfaces and still obtain fast rendering.

Details of the implementation:

  • Vertex shader pre-calculations: Some minor calculations are done in the vertex shader. One, the eigenvalue constraining and min-max normalization, avoids incorrect visualizations when the difference between the eigenvalues is too large. The other is the tensor matrix calculation given by the diffusion tensor definition T = R⁻¹ΛR, where R is a rotation matrix that transforms the standard basis onto the eigenvector basis, and Λ is the diagonal matrix of eigenvalues [4].
  • Ellipsoid SDF definition: The SDF is defined in the fragment shader inside the map function, which is used later for the ray marching algorithm and the normals calculation. We define the SDF simply by transforming a sphere into an ellipsoid, considering that the SDF of a sphere is easily computed and the definition of a tensor gives us a linear transformation of a given geometry. Also, as scaling is not a rigid body transformation, we multiply the final result by a factor to compensate for the difference, which gives us the SDF of the ellipsoid defined as sdSphere(tensorMatrix * (position - centerMCVSOutput), scaleVSOutput*0.48) * scFactor.
  • Ray marching algorithm and lighting: For the ray marching algorithm, a small value of 20 was taken as the maximum distance, since we apply the technique to each individual object rather than to the whole scene at once. Additionally, we set the convergence precision to 0.001. We use the central differences method to compute the normals needed for the scene's illumination, along with the Blinn-Phong lighting technique, which is high-quality and computationally cheap.
  • Visualization example: Below is a detailed visualization of the ellipsoids created from this new implementation.

ellipsoids created with ray marching SDFs
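
As a rough illustration of how the pieces above fit together, here is a CPU-side Python sketch of the same logic: a sphere SDF transformed by the tensor matrix (mirroring the shader expression), a basic sphere-tracing loop with the same maximum distance and precision, and central-differences normals. The function names are illustrative, not FURY's API.

```python
import numpy as np

def sd_sphere(p, radius):
    """Signed distance from point p to a sphere of the given radius."""
    return np.linalg.norm(p) - radius

def sd_ellipsoid(p, center, tensor_matrix, scale, sc_factor):
    """Ellipsoid SDF built by transforming a sphere SDF, mirroring the shader
    expression sdSphere(tensorMatrix * (position - center), scale*0.48) * scFactor."""
    return sd_sphere(tensor_matrix @ (p - center), scale * 0.48) * sc_factor

def ray_march(origin, direction, sdf, max_dist=20.0, eps=0.001, max_steps=256):
    """Sphere tracing: advance along the ray by the SDF value until the surface is hit."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None  # miss

def estimate_normal(p, sdf, h=1e-4):
    """Central differences approximation of the surface normal at p."""
    grad = np.array([
        sdf(p + np.array([h, 0, 0])) - sdf(p - np.array([h, 0, 0])),
        sdf(p + np.array([0, h, 0])) - sdf(p - np.array([0, h, 0])),
        sdf(p + np.array([0, 0, h])) - sdf(p - np.array([0, 0, h])),
    ])
    return grad / np.linalg.norm(grad)
```

With an identity tensor matrix this reduces to a sphere of radius 0.48, so a ray cast from 5 units away hits at roughly t = 4.52, which is a quick sanity check for the shader logic.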

This implementation shows better quality in the displayed glyphs and supports displaying a large amount of data, as seen in the image below. For this reason, a tutorial was made to demonstrate in more detail the value of this new implementation. Below are some images generated for the tutorial.

images for tutorial: ellipsoids zoomed, quality comparison, large amount of ellipsoids

Pull Requests:

  • Ellipsoid actor implemented with SDF (Merged) fury-gl/fury#791
  • Tutorial on using ellipsoid actor to visualize tensor ellipsoids for DTI (Merged) fury-gl/fury#818

Future work: In line with one of the initial objectives, billboards are expected to be implemented later on to improve performance, i.e., a higher frame rate and lower memory usage for tensor ellipsoid creation, in addition to looking for ways to optimize the naive ray marching algorithm and the SDF definitions.

Objectives in Progress

  • DTI uncertainty visualization

The DTI visualization pipeline is fairly complex: a level of uncertainty arises in the reconstruction which, if visualized, helps to assess the model's accuracy. This measure is not currently implemented, and even though there are several methods to calculate and visualize the uncertainty in the DTI model, we chose Matrix Perturbation Analysis (MPA), proposed by Basser [1], for its simplicity and visual representation. This measure is visualized as double cones representing the variance of the main direction of diffusion, and the ray marching technique was also used to create these objects.

Details of the implementation:

  • Source of uncertainty: The MPA method arises from the susceptibility of DTI to the dMRI noise present in diffusion-weighted images (DWIs), and also because the model is inherently statistical, making the tensor estimate and other derived quantities random variables [1]. For this reason, the method builds on the premise that image noise produces a random perturbation in the estimated diffusion tensor, and therefore in the calculation of eigenvalues and eigenvectors, particularly in the first eigenvector, which is associated with the main diffusion direction.
  • Mathematical equation: To first order, the perturbation of the principal eigenvector is given by Δε₁ = Σᵢ₌₂,₃ (εᵢᵀ ΔD ε₁ / (λ₁ − λᵢ)) εᵢ, where ΔD is the estimated perturbation matrix of D, obtained from the diagonal elements of the covariance matrix Σ_α ≈ (Bᵀ Σₑ⁻¹ B)⁻¹. Here B is the b-matrix and Σₑ is the covariance matrix of the error e, a diagonal matrix whose inverse has diagonal elements (Σₑ⁻¹)ᵢᵢ = ⟨S(bᵢ)⟩²/η², with ⟨S(bᵢ)⟩ the expected signal and η the noise standard deviation. The angle θ between the perturbed principal eigenvector ε₁ + Δε₁ and the estimated eigenvector ε₁ can then be approximated by θ = tan⁻¹(|Δε₁|) [2]. Taking the above into account, we define the function main_dir_uncertainty(evals, evecs, signal, sigma, b_matrix) that calculates the uncertainty of the eigenvector associated with the main direction of diffusion.
  • Double cone SDF definition: The final SDF is composed of the union of two separate cones, using the cone definition taken from this list of distance functions. In this way we have the SDF for the double cone defined as opUnion(sdCone(p,a,h), sdCone(-p,a,h)) * scaleVSOutput.
  • Visualization example: Below is a demo of how this new feature is intended to be used: an image of diffusion tensor ellipsoids and their associated uncertainty cones.

DT ellipsoids and their uncertainty
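
The first-order perturbation step can be sketched in Python as follows, assuming ΔD has already been estimated from the covariance matrix Σ_α (the helper name and the eigenvector layout, one eigenvector per column sorted by descending eigenvalue, are assumptions for illustration):

```python
import numpy as np

def main_dir_uncertainty_angle(evals, evecs, delta_d):
    """Angle theta = arctan(|delta_e1|) of the perturbed principal eigenvector,
    with delta_e1 = sum_{i=2,3} (e_i^T dD e_1) / (l_1 - l_i) e_i
    (first-order perturbation, after Chang et al. [2]).
    evals: eigenvalues sorted descending; evecs: eigenvectors as columns;
    delta_d: estimated 3x3 perturbation matrix of the diffusion tensor."""
    e1 = evecs[:, 0]
    delta_e1 = np.zeros(3)
    for i in (1, 2):
        ei = evecs[:, i]
        delta_e1 += (ei @ delta_d @ e1) / (evals[0] - evals[i]) * ei
    return np.arctan(np.linalg.norm(delta_e1))
```

A zero perturbation matrix gives a zero cone angle, and larger perturbations widen the cone, which matches the intended visual encoding.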

The implementation is almost complete, but as it is a new addition that includes mathematical calculations for which there is no direct reference for comparison, it requires a more detailed review before it can be incorporated.

Pull Request:

Future work: A tutorial will be made explaining in more detail how to calculate the parameters needed for the uncertainty cones using dipy functions, specifically: estimate_sigma for the noise variance calculation, design_matrix to get the b-matrix, and tensor_prediction for the signal estimation. Additionally, when the ODF implementation is complete, uncertainty is expected to be added for this other reconstruction model as well, using semitransparent glyphs representing the mean directional information, as proposed by Tournier [3].

  • ODF actor implemented with SDF

HARDI-based techniques require more images than DTI; however, they model the diffusion directions as probability distribution functions (PDFs), and the fitted values are returned as orientation distribution functions (ODFs). ODFs are more diffusion-sensitive than the diffusion tensor and can therefore determine the structure of multi-directional voxels, which are very common in the white matter regions of the brain [3]. The current actor to display this kind of glyph is the odf_slicer, which, given an array of spherical harmonics (SH) coefficients, renders a grid of ODFs created from a sphere with a specific number of vertices that fit the data.

To apply this model using the same SDF ray marching techniques, we need the SH coefficient data, which is used to calculate the orientation distribution function (ODF) described here. Different SH bases can be used, but for this first approach we focus on descoteaux07 (as labeled in dipy). After performing the necessary calculations, we obtain a result that approximates the current FURY implementation, as seen below.

ODF glyphs comparison using polygons and current ray marching implementation
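
To make the calculation concrete, here is a hedged Python sketch that evaluates an ODF from SH coefficients, using explicit formulas for the l = 0 and l = 2 harmonics and the real-basis convention that, to my understanding, matches descoteaux07; dipy's own basis functions should be preferred for the general case.

```python
import numpy as np

def complex_sh(l, m, theta, phi):
    """Complex spherical harmonics for l in {0, 2}, m >= 0 (explicit formulas,
    Condon-Shortley phase); theta is the polar angle, phi the azimuth."""
    if l == 0:
        return 0.5 * np.sqrt(1 / np.pi) * np.ones_like(theta, dtype=complex)
    if l == 2:
        if m == 0:
            return 0.25 * np.sqrt(5 / np.pi) * (3 * np.cos(theta) ** 2 - 1) + 0j
        if m == 1:
            return (-0.5 * np.sqrt(15 / (2 * np.pi))
                    * np.sin(theta) * np.cos(theta) * np.exp(1j * phi))
        if m == 2:
            return (0.25 * np.sqrt(15 / (2 * np.pi))
                    * np.sin(theta) ** 2 * np.exp(2j * phi))
    raise ValueError("only l in {0, 2} are implemented in this sketch")

def real_sh(l, m, theta, phi):
    """Real SH basis in the descoteaux07 convention (as labeled in dipy)."""
    if m < 0:
        return np.sqrt(2) * complex_sh(l, -m, theta, phi).real
    if m == 0:
        return complex_sh(l, 0, theta, phi).real
    return np.sqrt(2) * complex_sh(l, m, theta, phi).imag

def odf(coeffs, theta, phi):
    """ODF value f(theta, phi) = sum_lm c_lm Y_lm; coefficients ordered
    (l, m) = (0,0), (2,-2), (2,-1), (2,0), (2,1), (2,2)."""
    order = [(0, 0), (2, -2), (2, -1), (2, 0), (2, 1), (2, 2)]
    return sum(c * real_sh(l, m, theta, phi) for c, (l, m) in zip(coeffs, order))
```

A useful sanity check is that a coefficient vector with only the (0,0) term gives a constant, sphere-shaped ODF, and that the evaluation is antipodally symmetric, as expected for even-order SH.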

With this first implementation we started to address some issues related to direction, color, and data handling, in order to obtain exactly the same results as the current implementation.

Details on the issues:

  • The direction and the scaling: When the shape of the ODF is more sphere-like, the glyph is smaller, so for the moment its size needs to be adjusted manually; the idea is to find a relationship between the coefficients and the final object size so it can be scaled automatically. Additionally, as seen in the image, the direction does not match. To fix this, the calculation of the spherical coordinates can be adjusted, or the direction information can be passed directly.
  • Pass the coefficients data efficiently: I'm currently creating one actor per glyph, since I'm using a uniform array to pass the coefficients, but the idea is to pass all the data simultaneously. The first idea is to encode the coefficient data in a texture and retrieve it in the fragment shader.
  • The colormapping and the lighting: As these objects present curvatures with quite a bit of detail in some cases, more specific lighting work is required, in addition to handling not just one color but a colormap. This can also be done with a texture, but it is necessary to look in more detail at how to fit the texture to the glyph's shape.
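
The texture-based coefficient passing mentioned above could be sketched as follows, using a plain 2D single-channel float texture instead of a cubemap for simplicity (the min-max normalization scheme and the function names are assumptions for illustration):

```python
import numpy as np

def coeffs_to_texture(coeffs):
    """Pack per-glyph SH coefficients into a float texture normalized to [0, 1]:
    one row per glyph, one texel per coefficient. Returns the texture plus the
    (min, max) range needed to undo the normalization in the shader."""
    cmin, cmax = coeffs.min(), coeffs.max()
    tex = (coeffs - cmin) / (cmax - cmin)
    return tex.astype(np.float32), cmin, cmax

def texture_to_coeffs(tex, cmin, cmax):
    """Inverse mapping, as the fragment shader would apply after texture()."""
    return tex.astype(np.float64) * (cmax - cmin) + cmin
```

The round trip loses only float32 precision, so the shader can recover the coefficients from the sampled texels given the stored range, which would be passed as uniforms.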

More details on the current progress can be seen in the blog posts of week 11 and week 12.

Working branch:

GSoC Weekly Blogs

My blog posts can be found on the FURY website and the Python GSoC blog.


Week 0: Community Bonding Period (FURY, Python)
Week 1: Ellipsoid actor implemented with SDF (FURY, Python)
Week 2: Making adjustments to the Ellipsoid Actor (FURY, Python)
Week 3: Working on uncertainty and details of the first PR (FURY, Python)
Week 4: First draft of the DTI uncertainty visualization (FURY, Python)
Week 5: Preparing the data for the Ellipsoid tutorial (FURY, Python)
Week 6: First draft of the Ellipsoid tutorial (FURY, Python)
Week 7: Adjustments on the Uncertainty Cones visualization (FURY, Python)
Week 8: Working on Ellipsoid Tutorial and exploring SH (FURY, Python)
Week 9: Tutorial done and polishing DTI uncertainty (FURY, Python)
Week 10: Start of SH implementation experiments (FURY, Python)
Week 11: Adjusting ODF implementation and looking for solutions on issues found (FURY, Python)
Week 12: Experimenting with ODFs implementation (FURY, Python)


[1] Basser, P. J. (1997). Quantifying errors in fiber direction and diffusion tensor field maps resulting from MR noise. In 5th Scientific Meeting of the ISMRM (Vol. 1740).
[2] Chang, L. C., Koay, C. G., Pierpaoli, C., & Basser, P. J. (2007). Variance of estimated DTI-derived parameters via first-order perturbation methods. Magnetic Resonance in Medicine, 57(1), 141-149.
[3] Tournier, J. D., Calamante, F., Gadian, D. G., & Connelly, A. (2004). Direct estimation of the fiber orientation density function from diffusion-weighted MRI data using spherical deconvolution. NeuroImage, 23(3), 1176-1185.
[4] Kindlmann, G. (2004). Superquadric tensor glyphs. In Proceedings of the Sixth Joint Eurographics-IEEE TCVG Conference on Visualization (pp. 147-154).


Weekly Check-in #12

Published: 08/24/2023

What did I do this week?

There were different issues I needed to address for the ODF implementation. Even though I could not solve any of them completely, I checked each of the issues and made some progress. All the work in progress is being recorded in the branch SH-for-ODF-impl, which, when ready, will be associated with a well-structured PR.

First, the scaling. I was advised to check the GFA metric to adjust the scaling depending on the shape of the ODF glyph, i.e., the lower the GFA, the more sphere-shaped and smaller the glyph, so those should get a greater scaling. However, this did not work very well, as I was unable to define a scale relation that would give an equitable result for every glyph. For this reason, I opted to use the peak values extracted from the ODFs, setting the scale to 1/peak_value*0.4, which gives more uniformly sized glyphs without setting the size manually. That is a temporary solution, as I would like to understand better why this happens and, if possible, do the adjustment inside the shader instead of in a precalculation.

Second, the direction. I made a small adjustment to the spherical coordinates which affected the direction of the ODF glyph. As you can see below,

all the glyphs are aligned over the y-axis but not over the z-axis. To correct this, I precalculated the main direction of each glyph using peaks and passed it to the shader as a vec3, then used vec2vecrotmat to align the main axis of the ODF to the required direction vector. The only problem is that not all the glyphs are equally aligned to the axis: the first 3 glyphs are aligned with the x-axis, but the last one is aligned with the y-axis, so the final rotation gives a different result for that one.

Since the direction was partially correct after the first small adjustment of the coordinates, I need to double-check the theta, phi, and r definitions to see if I can get the right direction without needing the additional direction data. There might also be a way to get the specific rotation angles of each individual glyph from the data associated with the ODFs.
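
The vector-to-vector alignment step can be sketched with Rodrigues' rotation formula; this is a stand-in illustration, not FURY's actual vec2vecrotmat implementation:

```python
import numpy as np

def vec2vec_rotation(u, v):
    """Rotation matrix R with R @ u == v for unit vectors u, v
    (Rodrigues' formula: R = I + [w]x + [w]x^2 / (1 + u.v), w = u x v)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    c = np.dot(u, v)
    if np.isclose(c, -1.0):  # opposite vectors: rotate pi about any orthogonal axis
        axis = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(u, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    w = np.cross(u, v)
    wx = np.array([[0, -w[2], w[1]],
                   [w[2], 0, -w[0]],
                   [-w[1], w[0], 0]])
    return np.eye(3) + wx + wx @ wx / (1.0 + c)
```

Applied per glyph, this maps the precalculated main-direction vector onto the desired axis, which is why glyphs whose main direction lies along a different axis end up rotated differently.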

Third, passing the coefficients data through textures. I now understand better how to pass textures to the shaders, but I still have problems understanding how to retrieve the data inside the shader. I used this base implementation, suggested by one of my mentors, to store the data as a texture cubemap, "a texture, where each mipmap level consists of six 2D images which must be square. The 6 images represent the faces of a cube". I had 4x15 coefficients, and inside the function a grid of RGB colors is made so it can be mapped as a texture.

To check whether I was passing the data correctly, I used the same value .5 for all the texels, so I could pick a random texel, get a specific color (gray), and pass it as fragOutput0 to see if the value was correct. However, it didn't appear to work, as I couldn't get the expected color. To get the specific color I used texture(sampler, P), which samples texels from the texture bound to sampler at texture coordinate P. What I still need to figure out is the corresponding texture coordinate: I tried random coordinates, since they are supposed to correspond to a point on the cube, and since the information encoded in the texture is all the same, I assumed I would get the expected result for any set of values. It might be a problem with the data normalization, or maybe something is failing in the texture definition; I need to review it in more detail to find the problem.

Lastly, the colormapping. For this I created a texture based on a generic colormap from matplotlib.

I was able to give the glyph some color, but it does not correctly match its shape. Some adjustment must be made to the texels, as the colormap is mapped on a cube, but I need it to fit the shape of the glyph correctly.
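
The colormap texture idea can be sketched with numpy alone, interpolating between anchor colors as a stand-in for sampling a matplotlib colormap (the function names and the nearest-texel sampling are illustrative):

```python
import numpy as np

def colormap_texture(anchor_colors, width=256):
    """Build a width x 1 RGB lookup texture by linearly interpolating between
    anchor colors, a stand-in for discretizing a matplotlib colormap."""
    anchor_colors = np.asarray(anchor_colors, dtype=float)
    x = np.linspace(0.0, 1.0, width)
    xp = np.linspace(0.0, 1.0, len(anchor_colors))
    return np.stack([np.interp(x, xp, anchor_colors[:, c]) for c in range(3)], axis=1)

def sample_colormap(tex, value):
    """Map a scalar in [0, 1] to a texel, like texture() with nearest filtering."""
    idx = int(round(value * (len(tex) - 1)))
    return tex[idx]
```

The remaining (open) problem is choosing the scalar per fragment so the lookup follows the glyph's shape rather than the cube the texture is mapped on.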

What is coming up next?

I will continue to explore how to handle textures so I can solve the issues related to the coefficient data and the colormapping. I will also take a deeper look at the SH implementation and check what information is needed to correctly adjust the main direction of the ODF.

Did I get stuck anywhere?

As I mentioned, I had some drawbacks in understanding the use of textures and how to retrieve the data inside the shaders. It is a topic that might take some time to manage properly, but if I can master it and understand it better, it is a tool that will be useful later. Additionally, there are details of the SH implementation that I still need to understand and explore better in order to make sure I get exactly the same result as the current odf_slicer implementation.


Weekly Check-in #11

Published: 08/16/2023

What did I do this week?

I continued to experiment with the ODF glyph implementation. Thanks to one of my mentors, I figured out how to get the missing data corresponding to the SH coefficients a_lm of the function f(θ, ϕ) described here. I was also told to make sure to implement the correct SH basis, since there are different definitions in the literature; I now have to focus on the one proposed by Descoteaux, described in this paper, which is labeled in dipy as descoteaux07. To do this, I made a small adjustment to the base implementation I took as a reference, from which I obtained a first result using SH of order 4.

It appears that the shape of the results is about the same, except for the direction, but there is still work to be done.

What is coming up next?

For now there are 3 things I will continue to work on:

  • The color and lighting. As these objects present curvatures with quite a bit of detail in some cases, this requires more specific lighting work, in addition to handling not only one color but a colormap.
  • The scaling. This is something I still don't know how to deal with. For now I had to adjust it manually, but the idea is to find a relationship between the coefficients and the final object size so it can be scaled automatically, or maybe there is a proper way to pre-process the data before passing it to the shaders to get the right result at once.
  • How to pass the information of the coefficients efficiently. Right now I'm creating one actor per glyph, since I'm using a uniform array to pass the coefficients, but the idea is to pass all the data at once. I found several ideas here on how to pass a list of values directly to the fragment shader; I just need to explore more deeply how this can be done in FURY and see which option is most suitable.

Did I get stuck anywhere?

All the points mentioned above are things I tried to fix; however, they need to be looked at in much more detail, and I know it is going to take me some time to understand and test them before I get the expected result. I hope to get ideas from my mentors and fellow GSoC contributors on how to proceed with each of the problems.


Weekly Check-in #10

Published: 08/08/2023

What did I do this week?

I started working formally on the SH implementation. I was told to first write a simple program where I can modify, in real time, the order l and degree m parameters of the spherical harmonics function, based on a previous work. That is just one part of the final ODF calculation, but here is what a first experimental script looks like.

I did this to make sure it was visually correct and also to understand better how these two parameters are related and how they need to be incorporated into the final calculation. One issue needs to be addressed at first sight: the scaling, since for SH with a degree near 0, the object goes out of bounds.
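
One way the out-of-bounds issue could be handled is to normalize the displayed radius |Y_l^m| by its maximum; a stdlib-only sketch for the l = 2, m = 0 harmonic, using its explicit formula (the helper names are illustrative, not part of the script above):

```python
import math

def y20(theta):
    """Y_2^0(theta) = 0.25 * sqrt(5/pi) * (3 cos^2 theta - 1), explicit formula."""
    return 0.25 * math.sqrt(5 / math.pi) * (3 * math.cos(theta) ** 2 - 1)

def normalized_radius(theta, n_samples=1000):
    """Radius |Y_2^0| divided by its maximum over a theta grid,
    so the glyph always stays within the unit bound."""
    peak = max(abs(y20(i * math.pi / n_samples)) for i in range(n_samples + 1))
    return abs(y20(theta)) / peak
```

In a live demo the peak would be recomputed whenever l or m changes, keeping the glyph within a fixed bounding box regardless of the chosen parameters.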

What is coming up next?

I will keep polishing details from my current open PRs, hopefully, I will get another PR merged before the last GSoC week.

Did I get stuck anywhere?

I am not sure how to use my current implementation to get visualizations similar to those made with odf_slicer, since the parameters the function receives are different, so I need to take a deeper look and see where the connection might be, or whether I should make some adjustments to the parameters.


Weekly Check-in #9

Published: 07/31/2023

What did I do this week?

I addressed the comments on the tutorial of PR #818 related to displaying the specific visualizations I wanted. I was advised to use ShowManager to handle the zoom of the scene and GridUI to display several actors at the same time for a visual quality comparison of the tensors. Below are some images generated for the tutorial, which is almost done.



What is coming up next?

There are some issues with the tests of the uncertainty implementation, specifically a segmentation fault that has to do with the shaders, so I expect to correct the problem by next week.

Did I get stuck anywhere?

I'm still thinking about how to approach the implementation of the spherical harmonics for ODF glyphs. Most of the implementations I have found are static, so my task is to parametrize the existing functions so that I can pass data from Python to the shaders properly and obtain the same result as the current odf_slicer.
