End of 2 Weeks of GSoC!

GSoC with the Python Software Foundation has been an incredible journey so far and has made me a much more productive programmer. Here are the links to the work that I have done so far; it is, of course, still work in progress!

I maintain a repo called dipy-tests where I perform all my experiments. Here is the link to it:
https://github.com/ShreyasFadnavis/dipy-tests
I have started working on implementing the NODDIx model for Microstructure Imaging which was mentioned in the previous post!
Here is the link to the code that I have written so far, including the commits made to my fork of the nipy/dipy repo over the past 2 weeks: https://github.com/ShreyasFadnavis/dipy

I have also made some commits and got some pull requests merged into the main nipy/dipy repo of my PSF sub-organization: https://github.com/nipy/dipy

Watch out for my next post!
I will be working on simulating data using the NODDIx model and performing some model fitting on it (some rigorous testing expected!)
GitHub: https://github.com/ShreyasFadnavis

Neurite Orientation Dispersion and Density Imaging

We need to look at Neurite Orientation Dispersion and Density Imaging (NODDI) first!

NODDI is a method for quantifying the morphology of neurites (the projections of neurons, i.e. axons and dendrites, collectively) via the branching complexity of the dendritic trees, in terms of dendritic density.

Remember that we are looking at dMRI, which measures the displacement pattern of water molecules undergoing diffusion. NODDI is a tissue model which makes use of the Orientation Dispersion Index as a summary statistic. This helps quantify the angular variation of neurite orientation.

The NODDI tissue model:

The above diagram has a small equation for each of the three types of microstructural environments. Water diffusion behaves differently in each of these environments, and we will look at them in a little more depth below. (BTW: CSF == CerebroSpinal Fluid).

Intra-cellular Model

Intra-cellular compartment → the space bounded by the membranes of neurites. We model this as a set of sticks. The normalized signal for this model is given by:
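As given in Zhang et al. [1], writing d∥ for the intrinsic diffusivity of the sticks, the normalized intra-cellular signal is:

```latex
A_{ic} = \int_{\mathbb{S}^2} f(\mathbf{n}) \, e^{-b \, d_{\parallel} (\mathbf{q} \cdot \mathbf{n})^2} \, d\mathbf{n}
```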

Where,

q → Gradient Direction

b → b-value

f(n)dn → probability of finding sticks along orientation ‘n’

The exponential term   → Signal Attenuation due to the intrinsic diffusivity of the sticks

Now, 

f(n), the orientation distribution function, is modeled as a Watson distribution: the simplest distribution that can capture the dispersion in orientations.
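In Zhang et al. [1], the Watson distribution takes the form:

```latex
f(\mathbf{n}) = M\!\left(\tfrac{1}{2}, \tfrac{3}{2}, \kappa\right)^{-1} e^{\kappa (\boldsymbol{\mu} \cdot \mathbf{n})^2}
```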

where, 

M is a confluent hypergeometric function. 

μ = the mean orientation

κ = concentration parameter that measures the dispersion about μ
Extra-cellular Model

This, as the name suggests, takes care of the space around the neurites. The neurites hinder the diffusion but do not restrict it in any way, so we can model it with anisotropic Gaussian diffusion.

The signal for this compartment of the model looks something like:
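Following Zhang et al. [1], the hindered extra-cellular signal is:

```latex
\log A_{ec} = -b \, \mathbf{q}^{T} \left( \int_{\mathbb{S}^2} f(\mathbf{n}) \, D(\mathbf{n}) \, d\mathbf{n} \right) \mathbf{q}
```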

Here, D(n) is a cylindrically symmetric tensor with principal diffusion direction n. But now we need to consider two diffusivities, perpendicular and parallel: d⊥ and d∥.

The parallel diffusivity is the same as the intrinsic free diffusivity of the intra-cellular compartment; the perpendicular diffusivity is set with a simple tortuosity model as d⊥ = d∥ (1 − ν_ic), where ν_ic is the intra-cellular volume fraction.


CSF Model

The CSF compartment models the space occupied by cerebrospinal fluid and is modeled as isotropic Gaussian diffusion with diffusivity d_iso. When reporting dispersion, we make use of the Orientation Dispersion Index instead of κ, as follows:

OD = (2/pi) * arctan(1/κ)
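Since the CSF compartment is isotropic Gaussian diffusion, its normalized signal is simply a mono-exponential decay:

```latex
A_{iso} = e^{-b \, d_{iso}}
```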

________________________________________________________

References:

[1] Zhang, H., Schneider, T., Wheeler-Kingshott, C. A., & Alexander, D. C. (2012). NODDI: Practical in vivo neurite orientation dispersion and density imaging of the human brain. NeuroImage, 61(4), 1000–1016. https://doi.org/10.1016/j.neuroimage.2012.03.072

[2] Farooq, H., Xu, J., Nam, J. W., Keefe, D. F., Yacoub, E., Georgiou, T., & Lenglet, C. (2016). Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI. Scientific Reports, 6(September), 1–9. https://doi.org/10.1038/srep38927

[3] Ferizi, U., Schneider, T., Panagiotaki, E., Nedjati-Gilani, G., Zhang, H., Wheeler-Kingshott, C. A. M., & Alexander, D. C. (2014). A ranking of diffusion MRI compartment models with in vivo human brain data. Magnetic Resonance in Medicine, 72(6), 1785–1792. https://doi.org/10.1002/mrm.25080

Simulating dMRI Data using CAMINO


Data simulation forms a crucial component of any data-driven experiment that deals with model fitting, as it provides the ground truth that we will be comparing against. In my project, I will be working with the following 2 tools for data simulation:

  • UCL Camino Diffusion MRI Toolkit
  • DIPY Simulations (… Obviously!)

This post will cover Camino first, and I aim to get into DIPY in the next post!

The most confusing part about the Camino documentation is understanding what the ‘scheme’ file is really made up of, because it needs to be passed as a parameter to the ‘datasynth’ command, which we will use for data simulation.

Scheme files accompany DWI data and describe the imaging parameters that are used in image processing. For most users, all that is needed is the gradient direction and b-value of each measurement in the sequence.

Once you have this information, you can use the CAMINO commands described below to generate scheme files.

  • Comments are allowed; comment lines must start with ‘#’
  • The first non-comment line must be a header stating “VERSION: <version>”. In our case:
VERSION: BVECTOR
  • After removing comments and the header, measurements are described in order, one per line. The order must correspond to the order of the DWI data.
  • Entries on each line are separated by spaces or tabs.

The BVECTOR is the most common scheme format. Each line consists of four values: the (x, y, z) components of the gradient direction followed by the b-value. For example:

   # Standard 6 DTI gradient directions, [b] = s / mm^2
  VERSION: BVECTOR
   0.000000   0.000000   0.000000   0.0
   0.707107   0.000000   0.707107   1.000E03
  -0.707107   0.000000   0.707107   1.000E03
   0.000000   0.707107   0.707107   1.000E03
   0.000000   0.707107  -0.707107   1.000E03
   0.707107   0.707107   0.000000   1.000E03
  -0.707107   0.707107   0.000000   1.000E03

If the measurement is unweighted, its gradient direction should be zero. Otherwise, the gradient directions should be unit vectors, followed by a scalar b-value. The b-value can be in any units; units are defined implicitly. In the above example we have used s / mm^2. The choice of units affects the scale of the output tensors: if we used this scheme file, we would get tensors in units of mm^2 / s. We could change the units of b to s / m^2 by scaling the b-values by 1E6; our reconstructed tensors would then be specified in units of m^2 / s.
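To make the format concrete, here is a minimal sketch (assuming numpy; the function name and file name are just illustrative) that writes a BVECTOR scheme file from arrays of gradient directions and b-values:

```python
import numpy as np

def write_scheme(filename, bvecs, bvals):
    """Write gradient directions (N x 3) and b-values (N,)
    as a Camino BVECTOR scheme file."""
    bvecs = np.asarray(bvecs, dtype=float)
    bvals = np.asarray(bvals, dtype=float)
    lines = ["# generated scheme file, [b] = s / mm^2",
             "VERSION: BVECTOR"]
    # One measurement per line: x y z b, in DWI acquisition order.
    # Unweighted (b=0) measurements get a zero gradient direction.
    for (x, y, z), b in zip(bvecs, bvals):
        lines.append("%f %f %f %f" % (x, y, z, b))
    with open(filename, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example: one b=0 measurement plus one diffusion-weighted one
write_scheme("example.scheme",
             [[0.0, 0.0, 0.0], [0.707107, 0.0, 0.707107]],
             [0.0, 1000.0])
```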

Finding the information for the scheme file

The best way to find the information for your scheme file is to talk to the person who programmed your MRI sequence. There is software that can help you recover these parameters from DICOM or other scanner-specific data formats; the dcm2nii program will attempt to recover b-values and b-vectors in FSL format.

Converting to Camino format

If you have a list of gradient directions, you can convert them to Camino format by hand or by using pointset2scheme. If you have FSL style bval and bvec files, you can use fsl2scheme. See the man pages for more information.

 

Simulating the Data

Finally! Now that we know what the scheme files are, let’s look at how to simulate the voxels…

I will be making use of the 2 utilities that I feel are relevant to my project, and will test the simulation functionality using the 59.scheme file from the Camino website tutorial.

1. Synthesis Using Analytic Models

 

This uses Camino to synthesize diffusion-weighted MRI data with the white matter analytic models.

The method is explained in detail in (Panagiotaki et al NeuroImage 2011, doi:10.1016/j.neuroimage.2011.09.081).

The following example synthesizes data using the three-compartment model “ZeppelinCylinderDot”, which has an intra-axonal compartment of cylinders with a single radius, a cylindrically symmetric tensor for the extra-axonal space, and a stationary third compartment.

Example:

datasynth -synthmodel  compartment 3 CYLINDERGPD 0.6 1.7E-9 0.0  0.0  4E-6 zeppelin 0.1 1.7E-9 0.0 0.0 2E-10  Dot -schemefile 59.scheme -voxels 1 -outputfile ZCD.Bfloat
2. Crossing cylinders using Monte Carlo Diffusion Simulator

This simulator allows the simulation of diffusion in environments ranging from simple to extremely complex, called “substrates”. We will be looking at the crossing-fibre substrates for now.

A substrate is envisaged to sit inside a single voxel, with spins diffusing across it. The boundaries of the voxel are usually periodic so that the substrate defines an environment made up of an infinite, 3D array of whatever you specify. The measurement model in the simulation does not capture the trade-off between voxel size and SNR and hence simulation "voxels" can be quite a bit smaller than those in actual scans. This simulation is, and has always been, intended as a tool to simulate signals due to sub-voxel structure, rather than large spatially-extended structures. [- UCL Camino Docs]
Crossing Cylinders

A situation that is often of interest in diffusion MR research is where we have more than one principal fibre direction. The simulation is able to model crossing fibres with a specified crossing angle. This substrate contains two populations of fibres in interleaved planes: one population is parallel to the z-axis and the other is rotated about the y-axis by a given angle with respect to the first.

Cylinders on this substrate are arranged in the xz-plane in parallel layers one cylinder thick: a plane of cylinders parallel to the z-axis, then a plane rotated with respect to the first, then another parallel to the z-axis, and so on. All cylinders have the same constant radius.

An example command to use here is:
datasynth -walkers 100000 -tmax 1000 -voxels 1 -p 0.0 -schemefile 59.scheme -initial uniform -substrate crossing -crossangle 0.7854 -cylinderrad 1E-6 -cylindersep 2.1E-6 > crossingcyls45.bfloat


Here we’ve specified a crossing substrate. The crossing angle is specified in radians (NOT degrees) using the -crossangle option: 0.7854 is approximately pi/4, or 45 degrees. The crossing angle can take any value; just make sure you use radians!
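Camino writes the synthesized signal as raw big-endian 32-bit floats. A minimal sketch for loading such a .Bfloat file in Python (assuming numpy; the helper name is mine, and the file name is just the one from the example above):

```python
import numpy as np

def load_bfloat(path):
    """Load a Camino .Bfloat file: raw 32-bit big-endian floats,
    one value per measurement, in scheme-file order."""
    return np.fromfile(path, dtype=">f4")

# Usage (file produced by the datasynth command above):
# signal = load_bfloat("crossingcyls45.bfloat")
# len(signal) == number of measurements * number of voxels
```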

 

REFERENCES:

[1] http://camino.cs.ucl.ac.uk/index.php

[2] Panagiotaki et al NeuroImage 2011, doi:10.1016/j.neuroimage.2011.09.081

Understanding Neuroimaging Data

        

I realized that before jumping into any kind of data analyses or modeling, I should devote a small post about what ‘Brain Data’ looks like.

In future posts, we will be looking at a lot of neuro data, and I felt that this post would be a crucial foundation for that. A NIPY/DIPY (and NiBabel) image is the association of three things:

  • The image data array: a 3D or 4D array of image data
  • An affine array that tells you the position of the image array data in a reference space.
  • Image metadata (data about the data) describing the image, usually in the form of an image header.
import nibabel as nib
import matplotlib.pyplot as plt

epi_img = nib.load('C:/Users/Shreyas/Desktop/DATA_understanding/someones_epi.nii')
epi_img_data = epi_img.get_data()
print(epi_img_data.shape)  # (53, 61, 33)


def show_slices(slices):
    """Display a row of image slices."""
    fig, axes = plt.subplots(1, len(slices))
    for i, slice in enumerate(slices):
        axes[i].imshow(slice.T, cmap="gray", origin="lower")

slice_0 = epi_img_data[26, :, :]
slice_1 = epi_img_data[:, 30, :]
slice_2 = epi_img_data[:, :, 16]
show_slices([slice_0, slice_1, slice_2])
plt.suptitle("Center slices for EPI image")

In the above grayscale image, which shows cross-sections of the brain, each pixel is a ‘voxel’, i.e. a pixel with volume, where black areas denote minimum values and white areas maximum values. A 3D array of such voxels forms a voxel array. The following will help us understand how to get the voxel coordinates of the center slices, and the value at that voxel!

n_i, n_j, n_k = epi_img_data.shape
center_i = (n_i - 1) // 2
center_j = (n_j - 1) // 2
center_k = (n_k - 1) // 2

centers = [center_i, center_j, center_k]
print("Co-ordinates in the voxel array: ", centers)

# O/P: Co-ordinates in the voxel array: [26, 30, 16]

center_vox_value = epi_img_data[center_i, center_j, center_k]
print(center_vox_value)
# O/P: 81.54928779602051

Note that the voxel coordinates do not tell us anything about where the data came from: the point of view of the scanner, the position of the subject, left-right orientation, etc. Therefore we need to apply an affine transformation to these coordinates to get them into the subject-scanner reference space:

(x, y, z) = f(i, j, k)
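In nibabel this f is the 4x4 image affine (epi_img.affine). A minimal sketch of applying it by hand with numpy (the affine values here are made up for illustration):

```python
import numpy as np

def voxel_to_scanner(affine, ijk):
    """Map voxel indices (i, j, k) to scanner coordinates (x, y, z)
    using a 4x4 affine in homogeneous coordinates."""
    i, j, k = ijk
    x, y, z, _ = affine.dot([i, j, k, 1])
    return (x, y, z)

# An illustrative affine: 3 mm isotropic voxels, shifted origin
affine = np.array([[3., 0., 0., -78.],
                   [0., 3., 0., -91.],
                   [0., 0., 3., -52.],
                   [0., 0., 0.,   1.]])
print(voxel_to_scanner(affine, (26, 30, 16)))  # (0.0, -1.0, -4.0)
```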

The above example, though of EPI (Echo Planar Imaging) data, gives a brief overview of what these data mean.

In Diffusion MRI (dMRI) we usually use three types of files: a NIfTI file with the diffusion-weighted data, and two text files, one with the b-values and one with the b-vectors. In DIPY we provide tools to load and process these files, and we also provide access to publicly available datasets for those who haven’t yet acquired their own.

There are some brilliant tutorials which provide an abundance of high quality information:

http://nipy.org/dipy/examples_index.html#examples

http://nipy.org/nibabel/tutorials.html#tutorials