Simulating and Fitting the Signal using NODDIx

In the last post we took a look at the NODDI model, which provides neurite density and orientation dispersion estimates by disentangling two key factors that contribute to FA. This lets us analyze each factor individually.

The following image helps summarize the model components:

Courtesy: Zhang, Hui et al. Bingham-NODDI: Mapping anisotropic orientation dispersion of neurites using diffusion MRI. NeuroImage, 133. https://doi.org/10.1016/j.neuroimage.2016.01.046

As mentioned above and in the previous post, the NODDI model consists of three major sub-models, one for each of the following regions:

  • Intra-Cellular Region
  • Extra-Cellular Region
  • Cerebrospinal Fluid (CSF Area)

A major shortcoming of the NODDI model is that it can fit only one fiber population per voxel, even though the neurite orientation patterns observed in brain tissue include:

  • Highly coherently oriented white matter structures, such as the corpus
    callosum.
  • White matter structures composed of bending and fanning axons, such as the centrum semiovale.
  • The cerebral cortex and subcortical gray matter structures characterized by sprawling dendritic processes in all directions.

With the NODDIx model, however, we can fit two fibers in crossing configurations and visualize them using Microstructure Imaging. An example:

Courtesy: Hamza Farooq et al., Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI

Now that I have the first draft of the code written (which can be found here and on my master branch), it's time to simulate some data and try to fit it using the NODDIx model.

The code has been written in such a way that we can use the same functions to generate a signal per voxel and verify that the model fits it properly.

To do so, we fix the input parameters used to generate the signal and then try to estimate them back with the NODDIx model. Note that we simulate two crossing fibers, so we can explicitly test different angles between the fibers with pre-defined volume fractions.

For simplicity and easy understanding of the estimates, we set the parameters as follows:

The volume fractions have been set to equal:

Intracellular Volume Fraction 1: 0.2
Intracellular Volume Fraction 2: 0.2
Extracellular Volume Fraction 1: 0.2
Extracellular Volume Fraction 2: 0.2
CSF Volume Fraction: 0.2

Let's now fix the Orientation Dispersions (OD) and the Thetas and Phis:

OD1: 0.2
Theta1: 0.72
Phi1: 1.57

OD2: 0.24
Theta2: 0.72
Phi2: 1.57

For both fibers, theta has been set to 0.72 radians, which is approximately 41 degrees.
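For intuition, here is a small sketch (assuming the usual spherical convention, where theta is the polar angle from the z-axis and phi is the azimuth; the helper name is mine) that converts the angles above into unit orientation vectors and expresses theta in degrees:

import numpy as np

def sphere2cart(theta, phi):
    """Unit vector for polar angle theta and azimuth phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

print("fiber 1 orientation:", sphere2cart(0.72, 1.57))
print("fiber 2 orientation:", sphere2cart(0.72, 1.57))
print("theta in degrees:", np.degrees(0.72))   # ~41.25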

These are the 11 parameters that the model estimates… Let's see how the estimates look on the simulated data!

[Note: We expect the estimates to be almost the same as the input values…]

RESULTS:

The estimated volume fractions are:

Intracellular Volume Fraction 1: 0.19739277
Intracellular Volume Fraction 2: 0.1947075
Extracellular Volume Fraction 1: 0.20260892
Extracellular Volume Fraction 2: 0.2052905
CSF Volume Fraction: 0.20000128

The volume fractions are estimated almost perfectly! Let's take a look at the ODs, Thetas and Phis:

OD1: 0.20045016
Theta1: 0.72000357
Phi1: 1.56999562

OD2: 0.24062461
Theta2: 0.71999553
Phi2: 1.5700052

Yep, they look almost the same!
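To make "almost the same" concrete, here is a quick comparison of the input values against the estimates listed above (plain numpy, using only the numbers from this post):

import numpy as np

names = ["f_ic1", "f_ic2", "f_ec1", "f_ec2", "f_csf",
         "OD1", "theta1", "phi1", "OD2", "theta2", "phi2"]
truth = np.array([0.2, 0.2, 0.2, 0.2, 0.2,
                  0.2, 0.72, 1.57, 0.24, 0.72, 1.57])
est = np.array([0.19739277, 0.1947075, 0.20260892, 0.2052905, 0.20000128,
                0.20045016, 0.72000357, 1.56999562,
                0.24062461, 0.71999553, 1.5700052])

for name, t, e in zip(names, truth, est):
    print(f"{name:>7s}  truth={t:.5f}  est={e:.5f}  abs err={abs(t - e):.1e}")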

So basically, the NODDIx Model is up and running!

[Note: Code for the above simulation and fit can be found here]

The speed of the fit needs to be taken care of though!

In the next post, I will try to speed up the code and do some rigorous profiling!

End of 2 Weeks of GSoC!

GSoC with the Python Software Foundation has been an incredible journey so far and has made me a much more productive programmer. Here are the links to the work that I have done so far, which is obviously still in progress!

I maintain a repo called dipy-tests where I perform all my experiments. Here is the link to it:
https://github.com/ShreyasFadnavis/dipy-tests
I have started working on implementing the NODDIx model for Microstructure Imaging which was mentioned in the previous post!
Here is the link to the code that I have written so far, along with the commits that I have made to the master branch of my dipy fork over the past 2 weeks: https://github.com/ShreyasFadnavis/dipy

I have also made some commits and got some pull requests merged into the main nipy/dipy repo of my PSF sub-organization. Here is the link to it:

Watch out for my next post!
I will be working on simulating data using the NODDIx model and performing some model fitting on it (some rigorous testing expected!)
GitHub: https://github.com/ShreyasFadnavis

Neurite Orientation Dispersion and Density Imaging

We need to look at Neurite Orientation Dispersion and Density Imaging (NODDI) first!

NODDI is a method for quantifying the morphology of neurites (the projections of neurons, i.e. axons and dendrites, collectively), such as the branching complexity of dendritic trees, in terms of neurite density and orientation dispersion.

Remember that we are looking at dMRI, which measures the displacement pattern of water molecules undergoing diffusion. NODDI is a tissue model that uses the Orientation Dispersion Index as a summary statistic, which quantifies the angular variation of neurite orientations.

The NODDI tissue model:

The diagram above gives the signal equation for each of the three types of microstructural environments. Water diffusion behaves differently in each environment, and we will look at them in a little more depth below. (BTW: CSF == CerebroSpinal Fluid.)

Intra-cellular Model

Intra-cellular compartment → the space bounded by the membranes of neurites. We model this as a set of sticks. The normalized signal for this compartment is given by:

A_ic = ∫ f(n) exp(−b d∥ (q·n)²) dn,   integrated over the unit sphere.

Where,

q → Gradient Direction

b → b-value

f(n)dn → probability of finding sticks along orientation ‘n’

The exponential term exp(−b d∥ (q·n)²) → signal attenuation due to the intrinsic diffusivity d∥ of the sticks

Now, the orientation distribution function f(n) is modeled as a Watson distribution, the simplest distribution that can capture dispersion in orientations:

f(n) = M(1/2, 3/2, қ)⁻¹ exp(қ (𝛍·n)²)

where, 

M is a confluent hypergeometric function. 

𝛍 = mean orientation of the neurites

қ = concentration parameter that measures the dispersion about 𝛍 
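As a minimal sketch (not the NODDIx implementation), the Watson ODF can be evaluated directly with scipy's confluent hypergeometric function; the Monte Carlo check below confirms that, with this normalisation, the density averages to 1 over uniformly sampled directions:

import numpy as np
from scipy.special import hyp1f1

def watson_odf(n, mu, kappa):
    """Watson density w.r.t. the uniform measure on the sphere."""
    return np.exp(kappa * (n @ mu) ** 2) / hyp1f1(0.5, 1.5, kappa)

# Sanity check: the density should average to ~1 over uniform directions.
rng = np.random.default_rng(0)
n = rng.normal(size=(100000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
mu = np.array([0.0, 0.0, 1.0])
print(watson_odf(n, mu, kappa=4.0).mean())   # ~1.0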
Extra-cellular Model

This, as the name suggests, takes care of the space around the neurites. The neurites hinder diffusion but do not restrict it, so we can model this compartment with a 'Gaussian Anisotropic Distribution'.

The signal for this environment of the model is:

log A_ec = −b qᵀ ( ∫ f(n) D(n) dn ) q

Here, D(n) is a cylindrically symmetric tensor with principal diffusion direction n. We now need to consider two diffusivities, perpendicular and parallel: d⊥ and d∥.

The parallel diffusivity is the same as the intrinsic free diffusivity of the intra-cellular compartment; the perpendicular diffusivity is set with a simple tortuosity model as d⊥ = d∥ (1 − ν_ic), where ν_ic is the intra-cellular volume fraction.
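Here is a small illustrative sketch of the hindered signal for a single orientation n, using the tortuosity relation above (the full model additionally averages D(n) over the Watson distribution f(n); the function name and parameter values are my own choices for illustration):

import numpy as np

def extra_cellular_signal(bval, q, n, d_par=1.7e-9, v_ic=0.5):
    """exp(-b q^T D(n) q) with a cylindrically symmetric tensor D(n)."""
    n = n / np.linalg.norm(n)
    d_perp = d_par * (1.0 - v_ic)          # tortuosity model
    D = d_perp * np.eye(3) + (d_par - d_perp) * np.outer(n, n)
    return np.exp(-bval * q @ D @ q)

q = np.array([1.0, 0.0, 0.0])              # unit gradient direction
n = np.array([0.0, 0.0, 1.0])              # fibre orientation
print(extra_cellular_signal(1.0e9, q, n))  # b in s/m^2, d in m^2/s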


CSF Model

The CSF compartment models the space occupied by cerebrospinal fluid and is modeled as isotropic Gaussian diffusion with diffusivity d_iso. Finally, instead of reporting қ directly, NODDI uses the Orientation Dispersion index (OD), defined as:

OD = 2/pi * arctan(1/қ)
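Since OD is a monotonic function of қ, the conversion (and its inverse) is a one-liner:

import numpy as np

def kappa_to_od(kappa):
    return (2.0 / np.pi) * np.arctan(1.0 / kappa)

def od_to_kappa(od):
    return 1.0 / np.tan(od * np.pi / 2.0)

print(kappa_to_od(16.0))               # tightly packed sticks -> OD near 0
print(od_to_kappa(kappa_to_od(16.0)))  # round trip -> 16.0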

________________________________________________________

References:

[1] Zhang, H., Schneider, T., Wheeler-kingshott, C. A., & Alexander, D. C. (2012). NeuroImage NODDI : Practical in vivo neurite orientation dispersion and density imaging of the human brain. NeuroImage, 61(4), 1000–1016. https://doi.org/10.1016/j.neuroimage.2012.03.072

[2] Farooq, H., Xu, J., Nam, J. W., Keefe, D. F., Yacoub, E., Georgiou, T., & Lenglet, C. (2016). Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI. Scientific Reports, 6(September), 1–9. https://doi.org/10.1038/srep38927

[3] Ferizi, U., Schneider, T., Panagiotaki, E., Nedjati-Gilani, G., Zhang, H., Wheeler-Kingshott, C. A. M., & Alexander, D. C. (2014). A ranking of diffusion MRI compartment models with in vivo human brain data. Magnetic Resonance in Medicine, 72(6), 1785–1792. https://doi.org/10.1002/mrm.25080

Simulating dMRI Data using CAMINO


Data simulation forms a crucial component of any data-driven experiment that deals with model fitting, since it provides the ground truth we compare against. In my project, I will be working with the following two tools for data simulation:

  • UCL Camino Diffusion MRI Toolkit
  • DIPY Simulations (… Obviously!)

This post will cover Camino first, and I aim to get into DIPY in the next post!

The most confusing part of the Camino documentation is understanding what the 'scheme' file is really made of, because it needs to be passed as a parameter to the 'datasynth' command that we will use for data simulation.

Scheme files accompany DWI data and describe imaging parameters that are used in image processing. For most purposes, we need the gradient direction and b-value of each measurement in the sequence.

Once you have this information, you can use the CAMINO commands described below to generate scheme files.

  • Comments are allowed; a comment line must start with '#'.
  • The first non-comment line must be a header stating “VERSION: <version>”. In our case:
VERSION: BVECTOR
  • After removing comments and the header, measurements are described in order, one per line. The order must correspond to the order of the DWI data.
  • Entries on each line are separated by spaces or tabs.

The BVECTOR is the most common scheme format. Each line consists of four values: the (x, y, z) components of the gradient direction followed by the b-value. For example:

   # Standard 6 DTI gradient directions, [b] = s / mm^2
  VERSION: BVECTOR
   0.000000   0.000000   0.000000   0.0
   0.707107   0.000000   0.707107   1.000E03
  -0.707107   0.000000   0.707107   1.000E03
   0.000000   0.707107   0.707107   1.000E03
   0.000000   0.707107  -0.707107   1.000E03
   0.707107   0.707107   0.000000   1.000E03
  -0.707107   0.707107   0.000000   1.000E03

If the measurement is unweighted, its gradient direction should be zero. Otherwise, the gradient direction should be a unit vector, followed by a scalar b-value. The b-value can be in any units; the units are defined implicitly, and in the above example we have used s / mm^2. The choice of units affects the scale of the output tensors: if we used this scheme file, we would get tensors in units of mm^2 / s. We could change the units of b to s / m^2 by scaling the b-values by 1E6; our reconstructed tensors would then be specified in units of m^2 / s.

Finding the information for the scheme file

The best way to find the information for your scheme file is to talk to the person who programmed your MRI sequence. There is software that can help you recover them from DICOM or other scanner-specific data formats. The dcm2nii program will attempt to recover b-values and vectors in FSL format.

Converting to Camino format

If you have a list of gradient directions, you can convert them to Camino format by hand or by using pointset2scheme. If you have FSL style bval and bvec files, you can use fsl2scheme. See the man pages for more information.
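For completeness, here is what doing the conversion "by hand" could look like in Python, assuming FSL-style inputs (bvecs as a 3 x N array, bvals of length N); the helper name is mine and fsl2scheme remains the recommended route:

import numpy as np

def write_scheme(bvecs, bvals, fname):
    """Write a Camino BVECTOR scheme file from FSL-style bvecs/bvals."""
    with open(fname, "w") as f:
        f.write("# generated from FSL bvecs/bvals, [b] = s / mm^2\n")
        f.write("VERSION: BVECTOR\n")
        for (x, y, z), b in zip(np.asarray(bvecs).T, bvals):
            f.write(f"{x:11.6f} {y:11.6f} {z:11.6f}   {b:.3E}\n")

bvals = [0.0, 1000.0]
bvecs = [[0.0, 0.707107], [0.0, 0.0], [0.0, 0.707107]]  # 3 x N
write_scheme(bvecs, bvals, "example.scheme")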


Simulating the Data

Finally! Now that we know what the scheme files are, let's look at how to simulate the voxels…

I will be making use of the two utilities that I feel are relevant to my project, and will test the simulation functionality using the 59.scheme file from the Camino website tutorial.

1. Synthesis Using Analytic Models


This uses Camino to synthesize diffusion-weighted MRI data with the white matter analytic models.

The method is explained in detail in (Panagiotaki et al NeuroImage 2011, doi:10.1016/j.neuroimage.2011.09.081).

The following example synthesizes data using the three-compartment model “ZeppelinCylinderDot“, which has an intra-axonal compartment of single radius, a cylindrically symmetric tensor for the extra-axonal space and a stationary third compartment.

Example:

datasynth -synthmodel  compartment 3 CYLINDERGPD 0.6 1.7E-9 0.0  0.0  4E-6 zeppelin 0.1 1.7E-9 0.0 0.0 2E-10  Dot -schemefile 59.scheme -voxels 1 -outputfile ZCD.Bfloat
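If you prefer to drive Camino from Python, something like the following sketch works, assuming the Camino binaries are on your PATH: it runs the same command and then loads the synthesized voxel. Camino writes raw big-endian 4-byte floats by default, hence the '>f4' dtype; adjust if your setup differs.

import subprocess
import numpy as np

cmd = ("datasynth -synthmodel compartment 3 "
       "CYLINDERGPD 0.6 1.7E-9 0.0 0.0 4E-6 "
       "zeppelin 0.1 1.7E-9 0.0 0.0 2E-10 Dot "
       "-schemefile 59.scheme -voxels 1 -outputfile ZCD.Bfloat")
subprocess.run(cmd.split(), check=True)

signal = np.fromfile("ZCD.Bfloat", dtype=">f4")
print(signal.shape)   # one value per scheme-file measurement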
2. Crossing cylinders using Monte Carlo Diffusion Simulator

This simulator allows the simulation of diffusion from simple to extremely complex diffusion environments, called “substrates“. We will be looking at the Crossing fibres substrates as of now.

A substrate is envisaged to sit inside a single voxel, with spins diffusing across it. The boundaries of the voxel are usually periodic so that the substrate defines an environment made up of an infinite, 3D array of whatever you specify. The measurement model in the simulation does not capture the trade-off between voxel size and SNR and hence simulation "voxels" can be quite a bit smaller than those in actual scans. This simulation is, and has always been, intended as a tool to simulate signals due to sub-voxel structure, rather than large spatially-extended structures. [- UCL Camino Docs]
Crossing Cylinders

A situation that is often of interest in diffusion MR research is where we have more than one principal fibre direction. The simulation is able to model crossing fibres with a specified crossing angle. This substrate contains two populations of fibres in interleaved planes: one population is parallel to the z-axis and the other is rotated about the y-axis by a given angle with respect to the first.

Cylinders on this substrate are arranged in parallel layers in the xz-plane, each layer one cylinder thick: a plane of cylinders parallel to the z-axis, then a plane rotated with respect to the first, then another parallel to the z-axis, and so on. Cylinders are all of a constant radius.

An example command to use here is:
datasynth -walkers 100000 -tmax 1000 -voxels 1 -p 0.0 -schemefile 59.scheme -initial uniform -substrate crossing -crossangle 0.7854 -cylinderrad 1E-6 -cylindersep 2.1E-6 > crossingcyls45.bfloat


Here we've specified a crossing substrate. The crossing angle is specified in radians (NOT degrees) via -crossangle 0.7854, which is approximately pi/4, or 45 degrees. The crossing angle can take any value; just make sure you use radians!
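A tiny helper for sweeping crossing angles (degrees to radians, ready to paste into -crossangle):

import numpy as np

for deg in (30, 45, 60, 90):
    print(f"{deg} deg -> -crossangle {np.deg2rad(deg):.4f}")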


REFERENCES:

[1] http://camino.cs.ucl.ac.uk/index.php

[2] Panagiotaki et al NeuroImage 2011, doi:10.1016/j.neuroimage.2011.09.081

Understanding Neuroimaging Data


I realized that before jumping into any kind of data analyses or modeling, I should devote a small post about what ‘Brain Data’ looks like.

In future posts, we will be looking at a lot of neuro data, and I felt that this post would be a crucial foundation for that. A NIPY/DIPY (and Nibabel) image is the association of three things:

  • The image data array: a 3D or 4D array of image data
  • An affine array that tells you the position of the image array data in a reference space.
  • Image metadata (data about the data) describing the image, usually in the form of an image header.
import nibabel as nib
import matplotlib.pyplot as plt

epi_img = nib.load('C:/Users/Shreyas/Desktop/DATA_understanding/someones_epi.nii')
epi_img_data = epi_img.get_data()
epi_img_data.shape


""" Function to display row of image slices """
def show_slices(slices):
 fig, axes = plt.subplots(1, len(slices))
 for i, slice in enumerate(slices):
 axes[i].imshow(slice.T, cmap="gray", origin="lower")
 
slice_0 = epi_img_data[26, :, :]
slice_1 = epi_img_data[:, 30, :]
slice_2 = epi_img_data[:, :, 16]
show_slices([slice_0, slice_1, slice_2])
plt.suptitle("Center slices for EPI image")

In the grayscale image above, which shows cross-sections of the brain, each pixel is a 'voxel', i.e. a pixel with volume, where black areas denote minimum values and white areas denote maximum values. A 3D array of such voxels forms the image data array. The following will help us understand how to get the voxel coordinates of the center slices and the value at that voxel:

n_i, n_j, n_k = epi_img_data.shape
center_i = (n_i - 1) // 2
center_j = (n_j - 1) // 2
center_k = (n_k - 1) // 2

centers = [center_i, center_j, center_k]
print("Co-ordinates in the voxel array: ", centers)

# O/P: Co-ordinates in the voxel array: [26, 30, 16]

center_vox_value = epi_img_data[center_i, center_j, center_k]
print(center_vox_value)
# O/P: 81.54928779602051

Note that the voxel coordinates do not tell us anything about where the data came from, e.g. the point of view of the scanner, the position of the subject, or whether we are looking at the left or right side of the brain. Therefore we need to apply an affine transformation to these coordinates to map them into the subject-scanner reference space:

(x, y, z) = f(i, j, k)
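Continuing from the snippet above (epi_img and centers are already defined there), nibabel's apply_affine implements exactly this mapping:

from nibabel.affines import apply_affine

# Map the centre voxel (i, j, k) into the scanner/subject reference space
xyz = apply_affine(epi_img.affine, centers)
print("Scanner-space coordinates (mm):", xyz)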

The above example, though of EPI (Echo Planar Imaging), gives a brief overview of what these data mean.

In Diffusion MRI (dMRI) we usually use three types of files: a Nifti file with the diffusion-weighted data, and two text files, one with the b-values and one with the b-vectors. In DIPY we provide tools to load and process these files, and we also provide access to publicly available datasets for those who haven't yet acquired their own.

There are some brilliant tutorials which provide an abundance of high quality information:

http://nipy.org/dipy/examples_index.html#examples

http://nipy.org/nibabel/tutorials.html#tutorials

Microstructure Imaging of Crossings: Diffusion Imaging in Python (Computational Neuroanatomy)

I am really proud to get an opportunity of working with the DIPY team of the Python Software Foundation as a Google Summer of Code Candidate.

Things I will be working on and writing about in the upcoming weeks:

  • Non-Linear Optimization
  • Model Fitting
  • Stochastic Methods and Machine Learning
  • Neuroscience using Python


DIPY is a free and open source software project for computational neuroanatomy, focusing mainly on diffusion magnetic resonance imaging (dMRI) analysis. It implements a broad range of algorithms for denoising, registration, reconstruction, tracking, clustering, visualization, and statistical analysis of MRI data.

Magnetic resonance imaging (MRI)… in 5 Lines:

MRI uses the body’s natural magnetic properties for imaging purposes. It makes use of the Hydrogen nucleus (a single proton) due to its abundance in water and fat: H+. When the body is placed in a strong magnetic field of the MRI, the protons’ axes all line up. This uniform alignment creates a magnetic vector oriented along the axis of the MRI scanner.

What does the lining up of protons mean? Courtesy: http://www.schoolphysics.co.uk/age16-19/Atomic%20physics/Atomic%20structure%20and%20ions/text/MRI/index.html

I feel that neuroscience, being closely tied to and having formed foundations for the Hebbian and Boltzmann paradigms of statistical learning, is an extremely important component of AI research from a variety of standpoints, a crucial one being connectivity. MRI has facilitated our understanding of brain mechanisms by acquiring and analyzing this 'brain data', and I will be working on one such technique, called 'Microstructure Imaging of Crossings'.

Diffusion MRI measures water diffusion in biological tissue, which can be used to probe its microstructure. The most common model for water diffusion in tissue is the diffusion tensor (DT), which assumes a Gaussian distribution. This assumption of Gaussian diffusion oversimplifies the diffusive behavior of water in complex media, and is known experimentally to break down for relatively large b-values. DT derived indices, such as mean diffusivity or fractional anisotropy, can correlate with major tissue damage, but lack sensitivity and specificity to subtle pathological changes.

Microstructure Imaging of Crossing (MIX) is versatile and thus suitable for a broad range of generic multicompartment models, in particular for brain areas where axonal pathways cross.

 These ‘multicompartment models’ assess the variability of subvoxel regions by enabling the estimation of more specific indices, such as axon diameter, density, orientation, and permeability, and so potentially give much greater insight into tissue architecture and sensitivity to pathology.

Goal of Model Fitting:

We want to identify which model compartments are essential to explain the data and which parameters are potentially estimable from a particular experiment, and then compare the models to each other using the Bayesian Information Criterion (BIC) or another model selection criterion (TIC, Cp, etc.), ranking them in order of how well they explain the acquired data.

This requires a novel regression method that is robust and versatile. It enables fitting existing biophysical models with improved accuracy by using the Variable Separation Method (VSM) to distinguish the parameters that enter the model linearly from those that enter non-linearly. The estimation of the non-linear parameters is a non-convex problem and is handled first, by a stochastic search that uses Genetic Algorithms (GA), since GAs are effective at approximating sums-of-exponentials models. Estimating the linear parameters amounts to a convex problem and can be solved using standard least-squares techniques. These parameter estimates then provide a starting point for a Trust Region method that searches for a refined solution.

4 Steps involved in Implementing MIX:

Step 1 – Variable Separation: The objective function has a separable structure that can be exploited to separate the variables using the variable separation method. We rewrite the objective function as a projection using the Moore-Penrose inverse (pseudoinverse) and obtain the variable projection functional.

Step 2 – Stochastic search for non-linear parameters 'x': The objective function is non-convex, specifically of non-linear least-squares form. Any gradient-based method employed to estimate the parameters depends critically on a good starting point, which is unknown. An alternative approach is a regular grid search, which is time consuming and adds computational burden. This type of problem therefore points towards stochastic search methods like GA. For time-series analysis, GA can be used efficiently on sums of exponential functions. The GA parameters can be varied for each selected biophysical model, and the time complexity may change with each choice. (GA method: elitism based.)

Step 3 – Constrained search for linear parameters 'f': After estimating the parameters 'x', the estimation of the linear parameters 'f' is a constrained linear least-squares problem.

Step 4 – Non-Linear Least Squares Estimation using a Trust Region Method: Steps 2 and 3 give a reliable initial guess for both 'x' and 'f', which is then refined with a Trust Region method. This is an unconstrained optimization method that works on a region around the current search point, where a quadratic model for local minimization is "trusted" to be correct and steps are chosen to stay within this region. The size of the region is modified during the search based on how well the quadratic model agrees with actual function evaluations; the estimates produced by the GA in steps 2 and 3 supply the starting point.
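To make the four steps concrete, here is a toy sketch on a bi-exponential signal (not the actual MIX code): the decay rates play the role of the non-linear parameters 'x', the fractions play the role of 'f', and scipy's differential_evolution stands in for the genetic algorithm.

import numpy as np
from scipy.optimize import differential_evolution, nnls, least_squares

rng = np.random.default_rng(1)
b = np.linspace(0, 3000, 30)                         # b-values, s/mm^2
d_true = np.array([2.0e-3, 0.4e-3])                  # non-linear params 'x'
f_true = np.array([0.6, 0.4])                        # linear params 'f'
y = np.exp(-np.outer(b, d_true)) @ f_true + 1e-3 * rng.normal(size=b.size)

def design(d):
    """Columns exp(-b * d_i); the signal is design(d) @ f."""
    return np.exp(-np.outer(b, d))

# Step 1: variable separation -- eliminate 'f' for any candidate 'd'
def projected_cost(d):
    f, _ = nnls(design(d), y)                        # Step 3: constrained 'f'
    return np.sum((y - design(d) @ f) ** 2)

# Step 2: stochastic search over the non-linear parameters
res = differential_evolution(projected_cost, bounds=[(1e-5, 5e-3)] * 2, seed=1)
d0 = res.x
f0, _ = nnls(design(d0), y)

# Step 4: trust-region refinement of all parameters together
def residuals(p):
    return design(p[:2]) @ p[2:] - y

fit = least_squares(residuals, np.r_[d0, f0], method="trf",
                    bounds=(0.0, [5e-3, 5e-3, np.inf, np.inf]))
print("d:", np.sort(fit.x[:2]), " f:", np.sort(fit.x[2:]))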

References:

[1] Farooq, H., Xu, J., Nam, J. W., Keefe, D. F., Yacoub, E., Georgiou, T., & Lenglet, C. (2016). Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI. Scientific Reports, 6(September), 1–9. https://doi.org/10.1038/srep38927

[2] Ferizi, U., Schneider, T., Panagiotaki, E., Nedjati-Gilani, G., Zhang, H., Wheeler-Kingshott, C. A. M., & Alexander, D. C. (2014). A ranking of diffusion MRI compartment models with in vivo human brain data. Magnetic Resonance in Medicine, 72(6), 1785–1792. https://doi.org/10.1002/mrm.25080

[3] Farooq, H., Xu, J., Nam, J. W., Keefe, D. F., Yacoub, E., & Lenglet, C. (n.d.). Microstructure Imaging of Crossing ( MIX ) White Matter Fibers from diffusion MRI Supplementary Note 1 : Tissue Compartment Model Functions, (Mix), 1–18.

[4] Manuscript, A., & Magnitude, S. (2013). NIH Public Access, 31(9), 1713–1723. https://doi.org/10.1109/TMI.2012.2196707