Speeding up the Legendre-Gauss Integration

As part of the NODDI model, we were hitting a bottleneck while fitting due to the Legendre integral required for computing the Watson distribution. For those who are new to numerical methods, here are some of my notes that will help you understand how Legendre-Gauss quadrature is computed using Legendre polynomials:

We first start by evaluating the polynomial on the interval [-1, 1] and then extend it to an arbitrary interval [a, b].

The trick to convert the above evaluation to any interval [a, b] is a change of variables: substitute t = ((b - a)/2) x + ((a + b)/2), which maps [-1, 1] onto [a, b] and rescales the quadrature weights by the Jacobian (b - a)/2.
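As a minimal sketch of this interval-mapping trick, here is how it can be done with NumPy's built-in `leggauss` nodes and weights (the function name `gauss_legendre` is illustrative, not from the NODDI codebase):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with n-point
    Gauss-Legendre quadrature."""
    # Nodes x_i and weights w_i on the reference interval [-1, 1]
    x, w = np.polynomial.legendre.leggauss(n)
    # Affine map from [-1, 1] to [a, b]
    t = 0.5 * (b - a) * x + 0.5 * (a + b)
    # The Jacobian (b - a)/2 rescales the weights
    return 0.5 * (b - a) * np.sum(w * f(t))

# Integral of t^2 over [0, 2] is 8/3; n-point quadrature is exact
# for polynomials of degree up to 2n - 1
print(gauss_legendre(lambda t: t**2, 0.0, 2.0, 5))
```

Since n-point Gauss-Legendre quadrature integrates polynomials of degree up to 2n - 1 exactly, the result above matches 8/3 to machine precision.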

Here is the link to the Cythonized code for the Legendre integrals: Branch Link


Simulating and Fitting the Signal using NODDIx

In the last post we took a look at the NODDI model, which provides neurite density and orientation dispersion estimates by disentangling two key factors contributing to FA. This enables the analysis of each factor individually.

The following image helps summarize the model components:

Courtesy: Zhang, Hui, et al. Bingham-NODDI: Mapping anisotropic orientation dispersion of neurites using diffusion MRI. NeuroImage, 133. 10.1016/j.neuroimage.2016.01.046.

As mentioned above and in the previous post, the NODDI model consists of three major sub-models for fitting the data:

  • Intra-Cellular Region
  • Extra-Cellular Region
  • Cerebrospinal Fluid (CSF Area)

A major shortcoming of the NODDI model was that it could fit only one fiber at a time, whereas the neurite orientation patterns observed in brain tissue include:

  • Highly coherently oriented white matter structures, such as the corpus callosum.
  • White matter structures composed of bending and fanning axons, such as the centrum semiovale.
  • The cerebral cortex and subcortical gray matter structures characterized by sprawling dendritic processes in all directions.

However, with the NODDIx model, we can fit two crossing fibers and visualize them using Microstructure Imaging. An example would be as follows:

Courtesy: Hamza Farooq, et al., Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI

Now that I have the first draft of the code written (which can be found here and on my master branch), it's time to simulate some data and try to fit it using the NODDIx model.

The code has been written in such a way that we can use the same functions to generate a signal per voxel and verify that the model fits it properly.

To do so, we fix the input parameters used to generate the signal and then try to estimate them with the NODDIx model. Note that we will have two crossing fibers, and we can explicitly test different angles between the fibers with pre-defined volume fractions.
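The simulate-then-fit round trip described above can be illustrated with a toy model (this is not the NODDIx model; `signal` here is a simple two-compartment mono-exponential stand-in, and all parameter values are made up for the demonstration):

```python
import numpy as np
from scipy.optimize import least_squares

def signal(params, b):
    # Toy two-compartment model: volume fraction f and diffusivity d,
    # with the second compartment's diffusivity fixed at 3e-3
    f, d = params
    return f * np.exp(-b * d) + (1 - f) * np.exp(-b * 3e-3)

# 1. Fix the ground-truth parameters and generate a noiseless signal
b = np.linspace(0, 3000, 30)
true = np.array([0.4, 1.7e-3])
data = signal(true, b)

# 2. Recover the parameters by least-squares fitting
fit = least_squares(lambda p: signal(p, b) - data,
                    x0=[0.5, 1e-3], bounds=([0, 0], [1, 5e-3]))
print(fit.x)  # should recover approximately [0.4, 0.0017]
```

The NODDIx validation below follows the same pattern, just with eleven parameters instead of two.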

For simplicity and easy understanding of the estimates, we set the parameters as follows:

The volume fractions have all been set equal:

Intracellular Volume Fraction 1: 0.2
Intracellular Volume Fraction 2: 0.2
Extracellular Volume Fraction 1: 0.2
Extracellular Volume Fraction 2: 0.2
CSF Volume Fraction: 0.2
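Note that the five compartments must partition the voxel, so the fractions are chosen to sum to one. A quick sanity check (the variable names here are illustrative; the actual parameter ordering in the NODDIx code may differ):

```python
import numpy as np

# Ground-truth volume fractions used to generate the simulated signal
fractions = {
    "vic1": 0.2,  # intracellular, fiber 1
    "vic2": 0.2,  # intracellular, fiber 2
    "vec1": 0.2,  # extracellular, fiber 1
    "vec2": 0.2,  # extracellular, fiber 2
    "viso": 0.2,  # CSF (isotropic)
}

# The five compartments together must account for the whole voxel
assert np.isclose(sum(fractions.values()), 1.0)
```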

Let's now fix the Orientation Dispersions (OD) and the Thetas and Phis:

OD1: 0.2
Theta1: 0.72
Phi1: 1.57

OD2: 0.24
Theta2: 0.72
Phi2: 1.57

For both fibers, the polar angle Theta of 0.72 radians corresponds to approximately 41 degrees.
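The radians-to-degrees conversion for the chosen Theta:

```python
import numpy as np

theta = 0.72  # polar angle in radians, as set above for both fibers
print(np.degrees(theta))  # ≈ 41.25 degrees
```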

These are the 11 parameters that the model estimates… Let's see how the estimates look on the simulated data!

[Note: We expect the estimates to be almost the same as the input values…]


The estimated volume fractions are:

Intracellular Volume Fraction 1: 0.19739277
Intracellular Volume Fraction 2: 0.1947075
Extracellular Volume Fraction 1: 0.20260892
Extracellular Volume Fraction 2: 0.2052905
CSF Volume Fraction: 0.20000128

The volume fractions seem to be estimated almost perfectly! Let's take a look at the ODs, Thetas, and Phis:

OD1: 0.20045016
Theta1: 0.72000357
Phi1: 1.56999562

OD2: 0.24062461
Theta2: 0.71999553
Phi2: 1.5700052
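To put a number on "almost the same", the fitted values above can be compared against the ground truth directly (the parameter ordering in the arrays is just for this check, not the ordering used internally by the NODDIx code):

```python
import numpy as np

# Ground-truth parameters fed into the simulator
truth = np.array([0.2, 0.2, 0.2, 0.2, 0.2,   # volume fractions
                  0.2, 0.72, 1.57,            # OD1, Theta1, Phi1
                  0.24, 0.72, 1.57])          # OD2, Theta2, Phi2

# Values recovered by the NODDIx fit (copied from the output above)
estimate = np.array([0.19739277, 0.1947075, 0.20260892, 0.2052905,
                     0.20000128,
                     0.20045016, 0.72000357, 1.56999562,
                     0.24062461, 0.71999553, 1.5700052])

# Largest absolute error across all 11 parameters
print(np.max(np.abs(estimate - truth)))  # well below 0.01
```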

Yep, they look almost the same!

So basically, the NODDIx Model is up and running!

[Note: Code for the above simulation and fit can be found here]

The speed of the fit still needs to be taken care of, though!

In the next post, I will try to speed up the code and do some rigorous profiling!