### There is light…

Hey everyone!

The journey is almost over as this is probably one of my last, if not the last blog post related to this project.

I will try to summarize the project’s current state. Furthermore, I would like to briefly sum up what I think my personal benefits from this project were (idealistic benefits 😉 ).

But first let’s get into the code.

Download the necessary data for example 7 from GitHub. Replace the respective files in your mne root directory and run:

python setup.py install

to apply the changes and register the new class.

If you wish, you could navigate to mne-python/doc and run:

PATTERN=<pattern> make html_dev-pattern

Possible options (at least what concerns my work) are:

plot_background_freesurfer.py

plot_object_sourceestimate.py

plot_morph_stc.py

plot_morph_surface_stc.py

plot_morph_volume_stc.py

These are 3 tutorials and 2 examples (in that order). Besides adjusting the previous API for use with SourceMorph, the class itself was created, including the respective testing files.

Now let’s see what that’s all about:

plot_background_freesurfer.py

This is a tutorial explaining the basic concept of using FreeSurfer as tool for anatomical segmentation and reconstruction of MRI images. It gives an overview of which parts of FreeSurfer are relevant for MNE-Python users and relates the FreeSurfer logic to MNE.

plot_object_sourceestimate.py

This tutorial explains what stc is, namely a source estimate class representing surface source estimates. It first explains which parts a source analysis in MNE-Python consists of, and then gives a hands-on explanation of what the actual Python object represents.

Care is taken that the user understands the basic relationship between the 3D coordinates of an anatomical brain representation and the time series at those locations.

plot_morph_stc.py

Here the class SourceMorph is introduced, with a brief explanation of what it does and how to use it.

A small introduction to the usefulness of morphing is followed by an explanation for both types of source estimates – surface and volume. Details about the common and diverging functionality are pointed out, reading and saving are explained, and possible API shortcuts are mentioned.

plot_morph_surface_stc.py

Within this example, it is shown how a surface source estimate is morphed in MNE-Python. Example data is used and the result shown as a plot depicting the morphed source estimate on a FreeSurfer surface representation.

plot_morph_volume_stc.py

Basically the same example as above, only focusing on volume source estimates rather than surfaces.

morph.py

Here the actual class – SourceMorph – is implemented. The file carries several private helper functions as well as the exposed public API for morph related functions.

A SourceMorph is initialized like any core MNE class, by passing the respective parameters.

morph = mne.SourceMorph(src=src)

An actual source estimate can then be morphed by calling the newly created instance.

stc_morphed = morph(stc)

In case of a volume source estimate, the morphed data can also be converted into a NIfTI image.

img = morph.as_volume(stc_morphed)

I will not go into detail on all the small modifications and separate functions, but feel free to explore them (and report any bugs 😉 )

So what have I learned?

Well, it turns out the main experience that I will surely carry over to my own work – and did so already – concerns styling issues and communication in professional software development. In a nutshell, I would phrase it as:

Think before you push!

Furthermore – also related to more professional development – I finally found a way to organize everything locally in a more or less clean way, such that no data gets lost but version zombies like “morph_v4_final2_corrected_2.py” are still avoided effectively.

I guess I learned exactly what was expected, namely to “build better software”.

In that sense I hope you enjoyed the journey as much as I did!

See you next year 😉

Cheers,

Tommy

PS: depending on how the final two weeks go with handing in results etc., I might even post a last one, referring to the actually merged files…

### We’re getting there…

Hey there!

Today a really major update!! The class is in its final stage and so are an example and two tutorials – one containing more compact information about morphing, whereas the second provides detailed background info!

First download the necessary data for example 6 from GitHub. Replace the respective file in your mne root folder and run

python setup.py install

to apply the changes and register the new class.

If you wish, you could navigate to mne-python/doc and run:

PATTERN=plot_background_morph.py make html_dev-pattern

This will render the detailed tutorial. You can then open it in your browser from mne-python/doc/_build/html/auto_tutorials.

Here I will post very similar content to the above-mentioned tutorial (I will not post the part about surface morphing here, since I am not the creator of that part):

## Problem statement

Modern neuroimaging techniques, such as source reconstruction or fMRI analyses, make use of advanced mathematical models and hardware to map brain activation patterns into a subject-specific anatomical brain space. This enables the study of spatio-temporal brain activity. Amongst many others, spatio-temporal activation patterns are often represented by overlaying the actual anatomical brain structure with the respective activation at the respective anatomical location. Hereby volumetric anatomical MR images are often used as such or are transformed into an inflated surface.

It becomes obvious that in order to compute group-level statistics, data representations across subjects must be morphed to a common frame, such that anatomically/functionally similar structures are represented at the same spatial location.

## Morphing basics

Morphing describes the procedure of transforming a data representation in n-dimensional space into another data representation in the same space. In the context of neuroimaging data this space will mostly be 3-dimensional, and morphing is necessary to bring individual subject spaces into a common frame. In general, morphing operations can be split into two different kinds of transformation: linear and non-linear morphs or mappings.

A mapping is linear if it satisfies the following two conditions:

f(u+v)=f(u)+f(v) ,
f(cu)=cf(u) ,

where u and v are from the same vector space and c can be any scalar. This means that any linear transform is a mixture of additive and multiplicative operations and hence is often represented in terms of a transformation matrix.

In turn, a non-linear mapping can include linear components, but furthermore functions that are not limited by the above constraints. However, it needs to be understood that including non-linear operations will alter the relationship of data points within a vector and thus cannot be represented as a transformation matrix. Instead, every data point can be mapped independently of other data points. This becomes especially handy when morphing volumetric brain data: to achieve a mapping of structurally similar areas between subjects, it is necessary to employ non-linear operations to account for morphological differences that cannot be represented by a linear transform.
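
To make the distinction concrete, here is a small NumPy check (an illustration, not MNE-Python code) showing that a matrix transform obeys both linearity conditions, while a simple element-wise squaring map does not:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))  # arbitrary transformation matrix
u = rng.normal(size=3)
v = rng.normal(size=3)
c = 2.5

# Linear: f(x) = A @ x satisfies both conditions
assert np.allclose(A @ (u + v), A @ u + A @ v)  # f(u+v) = f(u) + f(v)
assert np.allclose(A @ (c * u), c * (A @ u))    # f(cu) = c f(u)


# Non-linear: g(x) = x ** 2 violates additivity, so it cannot be
# written as a single transformation matrix
def g(x):
    return x ** 2


print(np.allclose(g(u + v), g(u) + g(v)))  # False for generic u, v
```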

In MNE-Python “brain space” data is represented as source estimate data, obtained by one of the implemented source reconstruction methods. See Source localization with MNE/dSPM/sLORETA/eLORETA

It is thus represented as SourceEstimate, VectorSourceEstimate, VolSourceEstimate or a mixture of those.

The data in the first two cases is represented as “surfaces”. This means that the data is represented as vertices on an inflated brain surface representation. In the last case the data is represented in a 4-dimensional space, where the last dimension refers to the data’s sample points.
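
As a toy illustration of the two layouts (the shapes below are made up for demonstration, not MNE-Python’s internal ones): surface data is a 2D array of vertices × time, while volumetric data adds the three spatial dimensions plus time:

```python
import numpy as np

# Surface-style data: one time series per vertex of the surface mesh
n_vertices, n_times = 10242, 25
surf_data = np.zeros((n_vertices, n_times))

# Volume-style data: a full 3D grid per sample point (4-dimensional)
vol_data = np.zeros((16, 16, 16, n_times))

print(surf_data.shape, vol_data.shape)
```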

Computing an inflated surface representation can be accomplished using FreeSurfer. Thereby, spherical morphing of the surfaces can be employed to bring data from different subjects into a common anatomical frame. When dealing with volumetric data, a non-linear morph map – or better morph volume – based on two different anatomical reference MRIs can be computed.

## Volumetric morphing

The key difference of volumetric morphing compared to morphing surfaces is that the data is represented as a volume. A volume is a 3-dimensional data representation. It is necessary to understand that the difference to a mesh (what’s commonly meant when referring to a “3D model”) is that the mesh is “empty”, while the volume is not. Whereas the mesh is defined by the vertices of the outer hull, the volume is defined by that and the points it contains. Hence morphing volumetric data does not only require mapping the surface data, but also its content in the correct way.

It becomes easier to understand when thinking about morphing one brain to another: we not only want the cortices to overlap anatomically as well as possible, but also all sub-cortical structures.

In MNE-Python the implementation of volumetric morphing is achieved by wrapping the corresponding functionality from DiPy. See this dipy example for reference.

The volumetric morph is implemented as a two-stage approach: first the two reference brains are aligned using an affine linear registration, followed by a non-linear Symmetric Diffeomorphic Registration in 3D.

### Affine registration

See dipy affine example for reference.

Our goal is to pre-align both reference volumes as well as possible, to make it easier for the later non-linear optimization to converge to an acceptable minimum. The quality of the pre-alignment will be assessed using the mutual information, which we aim to maximize [3].

Mutual information can be defined as the amount of predictive value two variables share – that is, how much information about a random variable B can be obtained through variable A.

It can further be expressed in terms of entropy, namely as the difference between the sum of the individual entropies of A and B and their joint entropy. The lower the conditional entropy, the more predictive one variable is for the other. Aiming to maximize the mutual information can thus be seen as reducing the conditional entropy, and hence the amount of information required from a second variable to describe the system.

If we find a transformation such that both volumes overlap as well as possible, then the location of a particular area in one brain would be highly predictive for the same location in the second brain. In turn, the mutual information is high, whereas the conditional entropy is low.
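
The relation I(A; B) = H(A) + H(B) − H(A, B) can be illustrated with a toy discrete example (plain NumPy, not the actual registration code):

```python
import numpy as np


def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


# Toy joint distribution of two binary variables A and B
p_ab = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_a = p_ab.sum(axis=1)  # marginal of A
p_b = p_ab.sum(axis=0)  # marginal of B

# I(A; B) = H(A) + H(B) - H(A, B)
mi = entropy(p_a) + entropy(p_b) - entropy(p_ab)

# Perfectly "aligned" variables: knowing A fully determines B
p_aligned = np.array([[0.5, 0.0],
                      [0.0, 0.5]])
mi_aligned = (entropy(p_aligned.sum(axis=1)) + entropy(p_aligned.sum(axis=0))
              - entropy(p_aligned))

print(round(mi, 3), round(mi_aligned, 3))  # alignment raises MI to 1 bit
```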

The specific optimization algorithm used for mutual information driven affine registration is described in Mattes et al. 2003 [4].

In essence, a gradient descent is used to minimize the negative mutual information, while optimizing the set of parameters of the image discrepancy function.

### Symmetric Diffeomorphic Registration

See dipy sdr example for reference.

Symmetric Diffeomorphic Image Registration is described in Avants et al. 2009 [2].

A map between two objects (manifolds that need to be differentiable) is diffeomorphic if it is invertible (as is its inverse). Hence it can be seen as a locally linear, smooth map that describes how each point on one object relates to the corresponding point on a second object. Imagine a crumpled and an intact sheet of paper: there is a clear mapping between each point of the first and each point of the second object.

The introduced “symmetry” refers to symmetry after implementation: morphing A to B yields computationally the same result as morphing B to A.

As optimization criterion the cross-correlation was chosen, which is a measure of similarity between two data series.
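
A minimal NumPy sketch of normalized cross-correlation as a similarity measure (illustrative only; DiPy’s actual criterion is a locally windowed variant):

```python
import numpy as np


def ncc(a, b):
    """Normalized cross-correlation: ~1 for signals of identical shape."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)


t = np.linspace(0, 2 * np.pi, 200)
sig = np.sin(t)

print(ncc(sig, 2 * sig + 3))  # same shape, different scale/offset -> ~1
print(ncc(sig, np.cos(t)))    # phase-shifted, dissimilar -> ~0
```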

## SourceMorph

SourceMorph is MNE-Python’s source estimate morphing operator. It can perform all necessary computations to achieve the above transformations on surface source estimate representations as well as on volumetric source estimates. This includes SourceEstimate and VectorSourceEstimate for surface representations and VolSourceEstimate for volumetric representations.

SourceMorph can take general and type-specific keyword arguments. The following general keyword arguments are accepted:

• subject_from: string pointing to the respective subject folder representing the subject that is going to be morphed, e.g. subject_from=’Bert’. Within the respective subject folder (e.g. SUBJECTS_DIR/Bert), the result of an anatomical segmentation done with FreeSurfer should be present. More specifically, FreeSurfer surface data is employed to achieve surface morph operations, whereas for volume source estimates brain.mgz will be used by default. The default is subject_from=None, in which case subject_from will be derived from the source space or source estimate if present.
• subject_to: string pointing to the respective subject folder representing the subject that is used as reference for the morph – that is, the target space. Similar data as for subject_from is used. The default is subject_to=’fsaverage’, hence if not otherwise specified, fsaverage will be the target space. Note that it is also possible to specify a path pointing to any brain.mgz to be used as reference volume.
• subjects_dir: FreeSurfer subject directory. If SUBJECTS_DIR is not set in the environment, subjects_dir should be defined. The default is None, utilizing the path set in the environment.
• src: The source space corresponding to the source estimate that is targeted by the morph. While for VolSourceEstimate src must be set, it does not need to be set when attempting a surface morph; hence the default is src=None. If no src is provided (only possible for surface morphs), then the SourceMorph operator is only set up, and the computation of the morph takes place once the object is called. In the case of volumetric data, src must be provided in order to obtain the relevant information about how the data maps to anatomical data, which is used to compute the morph.
• spacing: This parameter is fundamentally different depending on the underlying morph. Please see surface source estimates for information related to surface morphs and volumetric source estimates for information related to volumetric morphs. The default is spacing=5.
• precomputed: If precomputed morph data is already present, or custom-made morph data will be used, it can be set via the keyword argument precomputed. If not None, the argument must be a dictionary carrying type-specific morphing information, set up in the same way as used by SourceMorph. Please see surface source estimates and volumetric source estimates for information about type-specific morph parameters. precomputed=morph_params will thus override my_morph.params. The default is precomputed=None.

A SourceMorph object can be created by initializing an instance and setting the desired keyword arguments like so: my_morph = mne.SourceMorph(...)

my_morph will have all arguments, or their default values, set as attributes. Furthermore, it indicates which kind of morph it is (my_morph.kind) and what the respective morphing parameters are (my_morph.params).

my_morph.params is a dictionary that varies depending on the type of morph and in which all relevant information is stored. See surface source estimates and volumetric source estimates for information about type-specific morph parameters.

### Surface morphing

In addition to the general keyword arguments, SourceMorph can take multiple arguments depending on the underlying morph. For (Vector)SourceEstimate, those keyword arguments include:

• spacing: In case of (Vector)SourceEstimate, spacing can be an integer, a list of 2 np.arrays, or None. The default is spacing=5. Spacing refers to what was known as grade in previous versions of MNE-Python. It defines the resolution of the icosahedral mesh (typically 5). If None, all vertices will be used (potentially filling the surface). If a list, then values will be morphed to the set of vertices specified in spacing[0] and spacing[1]. Note that specifying the vertices (e.g., spacing=[np.arange(10242), np.arange(10242)] for fsaverage on a standard grade 5 source space) can be substantially faster than computing vertex locations. Note that if subject=’fsaverage’ and spacing=5, this set of vertices will automatically be used (instead of computed) for speed, since this is a common morph.
• smooth: Number of iterations for the smoothing of the surface data. If None, smooth is automatically defined to fill the surface with non-zero values. The default is None.
• xhemi: If True, data can be morphed between hemispheres. The full cross-hemisphere morph matrix maps left to right and right to left. A matrix for cross-mapping only one hemisphere can be constructed by specifying the appropriate vertices, for example, to map the right hemisphere to the left: vertices_from=[[], vert_rh], vertices_to=[vert_lh, []]. Cross-hemisphere mapping requires appropriate sphere.left_right morph-maps in the subject’s directory. These morph maps are included with the fsaverage_sym FreeSurfer subject, and can be created for other subjects with the mris_left_right_register FreeSurfer command. The fsaverage_sym subject is included with FreeSurfer > 5.1 and can be obtained as described here. For statistical comparisons between hemispheres, use of the symmetric fsaverage_sym model is recommended to minimize bias [5].

### Volumetric source estimates

In addition to general keyword arguments SourceMorph can take multiple arguments depending on the underlying morph. For mne.VolSourceEstimate, those keyword arguments include:

• spacing: In case of VolSourceEstimate, spacing can be an integer, a float, a tuple of integers or floats, or None. The default is spacing=5. Spacing refers to the voxel size that is used to compute the volumetric morph. Since two volumes are compared “point-wise”, the number of slices in each orthogonal direction has a direct influence on the computation time and accuracy of the morph. See Symmetric Diffeomorphic Registration to understand why this is the case. Spacing can thus also be seen as the voxel size to which both reference volumes will be resliced before computing the symmetric diffeomorphic morph. An integer or float value will be interpreted as an isotropic voxel size in mm. Setting a tuple allows for anisotropic voxel sizes, e.g. (1., 1., 1.2). If None, the full resolution of the MRIs will be used. Note that this can cause long computation times.
• niter_affine: As described in Affine registration, an iterative process is used to find the transformation that maps one image to another. This iterative process is performed in multiple levels with a number of iterations per level. A level is a stage of iterative refinement with a certain level of precision. The later the level, the more refined the iterative optimization will be, requiring more computation time. The number of levels and the number of iterations per level are defined as a tuple of integers, where the length of the tuple defines the number of levels and the integer values themselves represent the number of iterations in the respective level. The default is niter_affine=(100, 100, 10), referring to a 3-stage optimization using 100, 100 and 10 iterations for the 1st, 2nd and 3rd level. Note that internally a 3-step approach is computed: a translation, followed by a rigid body transform (adding rotation), followed by an affine transform (adding scaling). Thereby the result of the first step will be the initial morph of the second, and so on. Thus the defined number of iterations actually applies to 3 different computations.
• niter_sdr: As described in Symmetric Diffeomorphic Registration an iterative process is used to find the transformation that maps one image to another. This iterative process is performed in multiple levels similar to the affine optimization (Affine registration). The default is niter_sdr=(5, 5, 3) referring to a 3 stage optimization using 5, 5 and 3 iterations for the 1st, 2nd and 3rd level.
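
The level/iteration logic of these tuples can be sketched as a simple coarse-to-fine loop (pure illustration of how the tuple is interpreted, not DiPy internals):

```python
niter_affine = (100, 100, 10)  # default: 3 levels

# Each entry is the iteration budget of one refinement level;
# the result of a level initializes the next, finer level.
for level, n_iter in enumerate(niter_affine, start=1):
    print(f"level {level}/{len(niter_affine)}: up to {n_iter} iterations")
```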

### SourceMorph’s methods

Once an instance of SourceMorph has been created, it exposes 3 methods:

• my_morph() Calling an instance of SourceMorph on a SourceEstimate, VectorSourceEstimate or VolSourceEstimate will apply the precomputed morph to the input data and return the morphed source estimate (stc_morphed = my_morph(stc)). If a surface morph was attempted and no source space was provided during instantiation of SourceMorph, then the actual computation of the morph will take place, using the input data as reference data, rather than precomputing it based on the source space data. Additionally, the method takes the same keyword arguments as my_morph.as_volume(), given that as_volume=True. This means that the result will not be a source estimate, but instead a NIfTI image representing the source estimate data in the specified way. If as_volume=False, all other volume-related arguments will be ignored.

• my_morph.as_volume() This method only works with VolSourceEstimate. It returns a NIfTI image of the source estimate. mri_resolution can be defined to change the resolution of the output image. mri_resolution=True will output an image in the same resolution as the MRI information stored in src. If mri_resolution=False, the output image will have the same resolution as defined in spacing when instantiating the morph. Furthermore, mri_resolution can be defined as an integer, float or tuple of integers or floats to refer to the desired voxel size in mm. A single value will be interpreted as an isotropic voxel size, whereas anisotropic dimensions can be defined using a tuple. Note that such a tuple must have a length of 3, referring to the 3 orthogonal spatial dimensions. The default is mri_resolution=False. The keyword argument mri_space specifies whether to use the reference space of the reference MRI rather than that of the source estimate. The default is mri_space=True. Furthermore, a keyword argument called apply_morph can be set, indicating whether to apply the precomputed morph. In combination with the keyword argument as_volume, this can be used to produce morphed and unmorphed NIfTIs. The default is apply_morph=False.

• my_morph.save() Saves the morph object to disk. The only input argument is the filename. Since the object is stored in HDF5 (‘.h5’) format, the filename will be extended by ‘-morph.h5’ if no file extension was initially defined. To read saved SourceMorph objects, use mne.read_source_morph().

• Shortcuts:

stc_fsaverage = SourceMorph(src=src)(stc)

img = SourceMorph(src=src)(stc, as_volume=True)

## Alternative API

Some operations can be performed using the respective source estimate itself. This is mostly to support the API of previous versions of MNE-Python.

Un-morphed VolSourceEstimate can be converted into NIfTI images using stc.as_volume

Un-morphed SourceEstimate and VectorSourceEstimate can be morphed using stc.morph

Note that in any of the above cases SourceMorph will be used under the hood to perform the requested operation.

## Step by step hands on tutorial

In this tutorial we will morph different kinds of source estimates between individual subject spaces using mne.SourceMorph.

We will use precomputed data and morph surface and volume source estimates to a common space. The common space of choice will be FreeSurfer’s “fsaverage”.

Furthermore we will convert our volume source estimate into a NIfTI image using morph.as_volume.

### Setup

We first import the required packages and define a list of file names for various data sets we are going to use to run this tutorial.

import os

import matplotlib.pyplot as plt
import nibabel as nib
from mne import read_evokeds, read_source_estimate, SourceMorph
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.image import index_img
from nilearn.plotting import plot_glass_brain

# We use the MEG and MRI setup from the MNE-sample dataset
sample_dir_raw = sample.data_path()
sample_dir = sample_dir_raw + '/MEG/sample'
subjects_dir = sample_dir_raw + '/subjects'

fname_evoked = sample_dir + '/sample_audvis-ave.fif'

fname_surf = os.path.join(sample_dir, 'sample_audvis-meg')
fname_vol = os.path.join(sample_dir,

fname_inv_surf = os.path.join(sample_dir,
                              'sample_audvis-meg-eeg-oct-6-meg-eeg-inv.fif')
fname_inv_vol = os.path.join(sample_dir,
                             'sample_audvis-meg-vol-7-meg-inv.fif')

fname_t1_fsaverage = subjects_dir + '/fsaverage/mri/brain.mgz'

### Data preparation

First we load the respective example data for surface and volume source estimates. In order to save computation time, we crop our time series to a short period around the peak time that we already know. For a real-case scenario this might apply as well, if a narrow time window of interest is known in advance.

stc_surf = read_source_estimate(fname_surf, subject='sample')

# The surface source space

# The volume inverse operator
inv_src = read_inverse_operator(fname_inv_vol)

# The volume source space
src_vol = inv_src['src']

# Ensure subject is not None
src_vol[0]['subject_his_id'] = 'sample'

# For faster computation we redefine tmin and tmax
stc_surf.crop(0.09, 0.1)  # our prepared surface source estimate

# Read pre-computed evoked data
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))

# Apply inverse operator
stc_vol = apply_inverse(evoked, inv_src, 1.0 / 3.0 ** 2, "dSPM")

# For faster computation we redefine tmin and tmax
stc_vol.crop(0.09, 0.1)  # our prepared volume source estimate

### Setting up SourceMorph for SourceEstimate

As explained in Surface morphing and surface source estimates, we have several options to instantiate SourceMorph. We know that if src is not provided, the morph will not be pre-computed, but instead will be prepared for morphing when calling the instance. This works only with (Vector)SourceEstimate. Below you will find a common setup that will apply to most use cases.

morph_surf = SourceMorph(subject_from='sample',  # Default: None
                         subject_to='fsaverage',  # Default
                         subjects_dir=subjects_dir,  # Default: None
                         src=None,  # Default
                         spacing=5,  # Default
                         smooth=None,  # Default
                         xhemi=False)  # Default

### Setting up SourceMorph for VolSourceEstimate

From Volumetric morphing and volumetric source estimates we know that src has to be provided when morphing a VolSourceEstimate. Furthermore, we can define the parameters of the in general very costly computation. Below, an example was chosen using a non-default spacing of isotropic 3 mm. The default is 5 mm and you will experience a noticeable difference in computation time when changing this parameter. Ideally, subject_from can be inferred from src, subject_to is ‘fsaverage’ by default, and subjects_dir is set in the environment. In that case, mne.SourceMorph can be initialized taking only src as a parameter. For demonstration purposes, all available keyword arguments were set nevertheless.

morph_vol = SourceMorph(subject_from='sample',  # Default: None
                        subject_to='fsaverage',  # Default
                        subjects_dir=subjects_dir,  # Default: None
                        spacing=(3., 3., 3.),  # Default: 5
                        src=src_vol,  # Default: None
                        niter_affine=(100, 100, 10),  # Default
                        niter_sdr=(5, 5, 3))  # Default

### Applying an instance of SourceMorph

Once we computed the morph for our respective dataset, we can morph the data by passing it as an argument to the SourceMorph instance. This operation applies pre-computed transforms to stc, or computes the morph if the instance was created without providing src. Default keyword arguments are valid for both types of morph. However, changing the defaults only really makes sense when morphing VolSourceEstimate. See SourceMorph’s methods for more information.

stc_surf_m = morph_surf(stc_surf,  # SourceEstimate | VectorSourceEstimate
                        as_volume=False,  # Default
                        mri_resolution=False,  # Default
                        mri_space=False,  # Default
                        apply_morph=True)  # Default

stc_vol_m = morph_vol(stc_vol)  # VolSourceEstimate

### Transforming VolSourceEstimate into NIfTI

In case of a VolSourceEstimate, we can further ask SourceMorph to output a volume of our data in the new space. We do this by calling morph.as_volume. Note that un-morphed source estimates can still be converted into a NIfTI by using stc.as_volume. The shape of the output volume can be modified by providing the argument mri_resolution, which can be a boolean, a tuple or an int. If mri_resolution=True, the MRI resolution that was stored in src will be used. Setting mri_resolution to False will export the volume with a voxel size corresponding to the spacing of the computed morph. Setting a tuple or single value will cause the output volume to have a voxel size of those values in mm. We can play around with those parameters and see the difference.

# Create full MRI resolution output volume
img_mri_res = morph_vol.as_volume(stc_vol_m, mri_resolution=True)

# Create morph resolution output volume
img_morph_res = morph_vol.as_volume(stc_vol_m, mri_resolution=False)

# Create output volume of manually defined voxel size directly from SourceMorph
img_any_res = morph_vol(stc_vol,  # use un-morphed source estimate and
                        as_volume=True,  # output NIfTI with
                        mri_resolution=2,  # isotropic voxel size of 2 mm
                        mri_space=True,  # in MRI space
                        apply_morph=True)  # after applying the morph

### Plot results

# Plot morphed volume source estimate

# Load fsaverage anatomical image
t1_fsaverage = nib.load(fname_t1_fsaverage)

# Initialize figure
fig, [axes1, axes2, axes3] = plt.subplots(1, 3)
fig.subplots_adjust(top=0.8, left=0.1, right=0.9, hspace=0.5)
fig.patch.set_facecolor('white')

for axes, img, res in zip([axes1, axes2, axes3],
                          [img_mri_res, img_morph_res, img_any_res],
                          ['Full MRI\nresolution', 'Morph\nresolution',
                           'Isotropic\n2 mm']):
    # Setup nilearn plotting
    display = plot_glass_brain(t1_fsaverage,
                               display_mode='x',
                               cut_coords=0,
                               draw_cross=False,
                               axes=axes,
                               figure=fig,
                               annotate=False)

    # Transform into volume time series and use first one
    overlay = index_img(img, 0)
    display.add_overlay(overlay, alpha=0.75)

    display.annotate(size=8)
    axes.set_title(res, color='black', fontsize=12)

plt.show()

# save some memory
del stc_vol_m, morph_vol, morph_surf, img_mri_res, img_morph_res, img_any_res

# Plot morphed surface source estimate

surfer_kwargs = dict(
    hemi='lh', subjects_dir=subjects_dir,
    clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
    initial_time=0.09, time_unit='s', smoothing_steps=5)
brain = stc_surf_m.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Morphed to fsaverage', 'title', font_size=16)


## Summary

Morphing is the process of transforming a representation in one space into another. This is particularly important for neuroimaging data, since individual differences across subjects’ brains have to be accounted for.

In MNE-Python, morphing is achieved using SourceMorph. This class can morph surface and volumetric source estimates alike.

Instantiate a new object and use the instance to morph the data:

morph = mne.SourceMorph(src=src)
stc_fsaverage = morph(stc)

Furthermore the data can be converted into a NIfTI image:

img_fsaverage = morph.as_volume(stc_fsaverage)


This was quite a bunch of info…

Fortunately there is a short version available 😉

Just render it and see yourself:

PATTERN=plot_morph.py make html_dev-pattern

Cheers,

Tommy

### A new class is born!

Hey everyone,

I’m proudly announcing that the MNE-Python class family got offspring!

After many API-related discussions I can now proudly present the new SourceMorph class in its more or less final-ish version.

The class has basically the following features:

# computing the morph by calling the __init__ method
instance_SM = SourceMorph(SourceSpace, 'brain_from', 'brain_to')

# morphing the data by calling the __call__ method
morphed_source_estimate = instance_SM(source_estimate)

# output the data as volume by calling the as_volume method
img = instance_SM.as_volume(morphed_source_estimate)

# saving the morph by calling the save method
instance_SM.save('fname')

# reading the morph by calling the respective IO function
instance_SM = read_source_morph('fname-morph.h5')

This is already pretty cool!

However, the class also infers automatically which type of source estimate should be morphed and is furthermore really flexible in input and output handling. For example, as_volume takes an argument called mri_resolution, which can be a boolean to morph to MRI space or morph space (the resolution of the MRIs the morph was computed on), but can also take a tuple to specify the respective voxel size in mm.

But let’s compute a proper example:

First download the necessary data for example 5 from GitHub. Replace the respective file in your mne root folder and run

python setup.py install to apply the changes and register the new class.

Now for the imports:

import matplotlib.pylab as plt
import nibabel as nib
import numpy as np
from mne import read_evokeds, SourceMorph
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_anat

Next we load some pre-computed example data:

###############################################################################
# Setup paths
sample_dir = sample.data_path() + '/MEG/sample'
subjects_dir = sample.data_path() + '/subjects'

fname_evoked = sample_dir + '/sample_audvis-ave.fif'
fname_inv = sample_dir + '/sample_audvis-meg-vol-7-meg-inv.fif'

fname_t1_fsaverage = subjects_dir + '/fsaverage/mri/brain.mgz'

###############################################################################
# Compute example data. For reference see
# :ref:<sphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py>

evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))

# Load the inverse operator
inverse_operator = read_inverse_operator(fname_inv)

# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")

# To save memory
stc.crop(0.087, 0.087)

If you would like to know more about the respective data and how it was computed, see the MNE Documentation.

… and now comes the magic:

source_morph = SourceMorph(inverse_operator['src'],
subject_from='sample',
subject_to='fsaverage',
subjects_dir=subjects_dir,
grid_spacing=(5., 5., 5.))

We set up our morpher by creating a SourceMorph instance. The inputs are our source space, the respective subject information, the subjects directory as used by MNE and the grid spacing, that is, the voxel size in mm that the reference MRIs are resliced to. Usually it is not necessary to compute the morph on the full-resolution MRI.

Interestingly, the default values of SourceMorph.__init__ are chosen such that the very same operation as above can be achieved by:

source_morph = SourceMorph(inverse_operator['src'])

This is because, when everything is set up correctly, subject_from can be inferred from src, subject_to defaults to ‘fsaverage’, subjects_dir is specified in the environment, and a 5 mm isotropic voxel size for grid_spacing turned out to be a rather good trade-off between time and performance.

However, to allow for maximum flexibility and to give the user a wider range of choices for the respective morphing computation, morph type specific arguments can be passed. Those can be for instance the number of iterations or a smoothing factor.

Once we computed our morph, applying it is just as easy:

# Obtain absolute value for plotting
# To not copy the data into a new memory location, out=stc.data is set
np.abs(stc.data, out=stc.data)
# Morph data
stc_fsaverage = source_morph(stc)

That’s it!

And the cool thing: it works just as easily for surface and vector source estimates!

stc_fsaverage now contains the morphed volume source estimate. If we would like to create a NIfTI image from this data, we can simply use the as_volume method in multiple ways:

# full mri resolution nifti (voxel size = voxel size of mri_to)
img = source_morph.as_volume(stc_fsaverage, mri_resolution=True)

# morph resolution nifti (voxel size = grid_spacing)
img = source_morph.as_volume(stc_fsaverage, mri_resolution=False)

# any morph resolution nifti - in that case 3 mm
img = source_morph.as_volume(stc_fsaverage, mri_resolution=(3., 3., 3.))

And as usual, if we plot the result:

# Load fsaverage anatomical image
t1_fsaverage = nib.load(fname_t1_fsaverage)
# Create mri-resolution volume of results
img_fsaverage = source_morph.as_volume(stc_fsaverage, mri_resolution=True)

fig, axes = plt.subplots()
fig.subplots_adjust(top=0.8, left=0.1, right=0.9, hspace=0.5)
fig.patch.set_facecolor('black')

display = plot_anat(t1_fsaverage, display_mode='ortho',
cut_coords=[0., 0., 0.],
draw_cross=False, axes=axes, figure=fig, annotate=False)

display.annotate(size=8)
axes.set_title('subject results to fsaverage', color='white', fontsize=12)

plt.text(plt.xlim()[1], plt.ylim()[0], 't = 0.087s', color='white')
plt.show()

… we obtain:

The newly implemented class will also come with a dedicated example, once the new MNE version is released.

Since now the bulk part is done, let’s go for the fine tuning 😉

Cheers,

Tommy

### Another one bites the dust ;)

Hey everyone,

this week’s blog post has some news that is already outdated.

I was playing around with the API the volumetric morph could get. However, due to recent discussions, things will certainly change by the next blog post. But for those of you who are interested:

Here is another attempt to the final API:

after importing necessary packages:

import matplotlib.pylab as plt
import mne
import nibabel as nib
import numpy as np
from dipy.align import imaffine, imwarp, reslice
from mne.beamformer import make_lcmv, apply_lcmv
from mne.datasets import sample
from mne.externals.h5io import write_hdf5, read_hdf5
from nilearn.image import index_img
from nilearn.plotting import plot_anat

we first of course compute some example data, just as last time:

def compute_lcmv_example_data(data_path, fname=None):
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'
# Get epochs
event_id, tmin, tmax = [1, 2], -0.2, 0.5

# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)

# Set up pick list: gradiometers and magnetometers, excluding bad channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')

# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])

# Re-normalize our empty-room projectors, so they are fine after
# subselection
raw.info.normalize_proj()
proj = False  # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()

# Read regularized noise covariance and compute regularized data covariance
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0,
method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
# Read forward model
forward = mne.read_forward_solution(fname_fwd)

# Compute weights of free orientation (vector) beamformer with weight
# normalization (neural activity index, NAI). Providing a noise covariance
# matrix enables whitening of the data and forward solution. Source
# orientation is optimized by setting pick_ori to 'max-power'.
# weight_norm can also be set to 'unit-noise-gain'. Source orientation can
# also be 'normal' (but only when using a surface-based source space) or
# None, which computes a vector beamfomer. Note, however, that not all
# combinations of orientation selection and weight normalization are
# implemented yet.
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='nai')

# Apply this spatial filter to the evoked data.
stc = apply_lcmv(evoked, filters, max_ori_out='signed')

# take absolute values for plotting
stc.data[:, :] = np.abs(stc.data)

# Save result in stc files if desired
if fname is not None:
    stc.save(fname)

# select time window (tmin, tmax) in s - consider changing for real data
# scenario, since those values were chosen to optimize computation time
stc.crop(0.087, 0.087)

src = forward['src']

# Save result in a 4D nifti file
img = mne.save_stc_as_volume(fname, stc, src,
mri_resolution=True)

return img, stc, src

This will yield a NIfTI file plus the corresponding volume source estimate (stc) and source space (src).

Next we define some reading and writing functions to save our morph:

###############################################################################
# Save non linear mapping data
def write_morph(fname, morph, overwrite=True):
    morph_out = dict()

    # dissolve object structure
    for key, value in morph.items():
        # save type for order independent decomposition
        if hasattr(value, '__dict__'):
            value = value.__dict__
        morph_out[key] = value

    if not fname.endswith('.h5'):
        fname += '.h5'
    write_hdf5(fname, morph_out, overwrite=overwrite)

###############################################################################
# Load non linear mapping data
def read_morph(fname):
    if not fname.endswith('.h5'):
        fname += '.h5'
    morph_in = read_hdf5(fname)

    morph = dict()
    # create new instances
    morph['mapping'] = imwarp.DiffeomorphicMap(None, [])
    morph['mapping'].__dict__ = morph_in.get('mapping')
    morph['affine'] = imaffine.AffineMap(None)
    morph['affine'].__dict__ = morph_in.get('affine')

    morph['affine_reg'] = morph_in.get('affine_reg')
    morph['domain_shape'] = morph_in.get('domain_shape')

    return morph

The last step is to execute the example and plot the result:

###############################################################################
# Execute example

# Settings
voxel_size = (5., 5., 5.)  # of the destination volume

# Setup path
data_path = sample.data_path()

t1_sample = data_path + '/subjects/sample/mri/brain.mgz'
t1_fsaverage = data_path + '/subjects/fsaverage/mri/brain.mgz'

# compute LCMV beamformer inverse example
img, stc, src = compute_lcmv_example_data(data_path)

###############################################################################
# Compute Morph Matrix and Dispersion Volume
morph = mne.source_estimate.compute_morph_sdr(
t1_sample,
t1_fsaverage,
voxel_size=voxel_size)

write_morph('SDR-sample-fsaverage', morph)

###############################################################################
# Morph Pre - Computed

stc_vol_to_preC = stc.morph_precomputed('sample', 'fsaverage', src,
morph,
as_volume=False)

img_vol_to_preC = stc.morph_precomputed('sample', 'fsaverage', src,
morph,
as_volume=True)

###############################################################################
# Morph
stc_vol_to_drct = stc.morph('sample',
'fsaverage',
t1_sample,
t1_fsaverage,
src,
voxel_size=voxel_size,
as_volume=False)

img_vol_to_drct = stc.morph('sample',
'fsaverage',
t1_sample,
t1_fsaverage,
src,
voxel_size=voxel_size,
as_volume=True)

###############################################################################
# Plot results

# load fsaverage brain (Static)
t1_s_img = nib.load(t1_fsaverage)

# reslice Static
t1_s_img_res, t1_s_img_res_affine = reslice.reslice(
t1_s_img.get_data(),
t1_s_img.affine,
voxel_size)

t1_s_img_res = nib.Nifti1Image(t1_s_img_res, t1_s_img_res_affine)

# select image overlay
imgs = [index_img(img_vol_to_preC, 0), index_img(img_vol_to_drct, 0)]

# select anatomical background images
t1_imgs = [t1_s_img_res, t1_s_img_res]

# slices to show for Static volume
slices_s = (0, 0, 0)

slices = [slices_s, slices_s]

# define titles for plots
titles = ['fsaverage brain precomputed', 'fsaverage brain direct morph']

# plot results
figure, (axes1, axes2) = plt.subplots(2, 1)
figure.subplots_adjust(top=0.8, left=0.1, right=0.9, hspace=0.5)
figure.patch.set_facecolor('black')

for axes, img, t1_img, cut_coords, title in zip([axes1, axes2],
imgs, t1_imgs, slices, titles):
display = plot_anat(t1_img,
display_mode='ortho',
cut_coords=cut_coords,
draw_cross=False,
axes=axes,
figure=figure,
annotate=False)

display.annotate(size=8)
axes.set_title(title, color='white', fontsize=12)

plt.text(plt.xlim()[1], plt.ylim()[0], 't = 0.087s', color='white')
plt.suptitle('morph subject results to fsaverage', color='white', fontsize=16)
plt.show()

“But where did the morph go???” you may ask.

The “trick” was to implement the same morph functionality for the VolSourceEstimate class as it was already for Surface source estimates. In fact I added the respective similarly named functions to source_estimate.py inside the VolSourceEstimate class.

As usual the necessary example data can be downloaded from GitHub.

There you will find the files “example4OOAPI.py” and “source_estimate_hacked.py”.

(renaming it to “source_estimate.py” of course 😉 ).

Afterwards execute the example and you should see the plot below:

As is clearly visible, both methods (pre-computed and direct morph) yield the same result.

This was already a quite comfortable implementation, so what is going to improve next?

We were thinking about making a new morpher class that could morph any object into any given space depending on the input data. This morpher would further have save and load methods, making it very comfortable to use. Furthermore it would be possible to incorporate these changes into the current API, so that the user would only experience minor changes while gaining a whole new functionality.

Looking forward to it!!

Cheers,

Tommy

### API pre-built

It has been a while since the last post, however I’m now able to proudly present a pre-build of the later functionality. To achieve this, parts of the script were wrapped into functions that in turn were set up such that they can easily be ported to become methods of an object in the final API. Furthermore the example will probably soon be available via the MNE-Python documentation.

All right! Let’s get into it!

from dipy.align import imaffine, imwarp, metrics, reslice, transforms

import matplotlib.pylab as plt

import mne
from mne.beamformer import make_lcmv, apply_lcmv
from mne.datasets import sample
from mne.externals.h5io import read_hdf5, write_hdf5

import nibabel as nib

from nilearn.image import index_img
from nilearn.plotting import plot_anat

import numpy as np

from os import path, makedirs

print(__doc__)

The first function will compute our example data according to the existing MNE-Python example and return a volumetric source space result of the example’s peak time:

def compute_lcmv_example_data(data_path, fname=None):
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'
# Get epochs
event_id, tmin, tmax = [1, 2], -0.2, 0.5

# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)

# Set up pick list: gradiometers and magnetometers, excluding bad channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')

# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])

# Re-normalize our empty-room projectors, so they are fine after
# subselection
raw.info.normalize_proj()
proj = False  # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()

# Read regularized noise covariance and compute regularized data covariance
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0,
method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
# Read forward model
forward = mne.read_forward_solution(fname_fwd)

# Compute weights of free orientation (vector) beamformer with weight
# normalization (neural activity index, NAI). Providing a noise covariance
# matrix enables whitening of the data and forward solution. Source
# orientation is optimized by setting pick_ori to 'max-power'.
# weight_norm can also be set to 'unit-noise-gain'. Source orientation can
# also be 'normal' (but only when using a surface-based source space) or
# None, which computes a vector beamfomer. Note, however, that not all
# combinations of orientation selection and weight normalization are
# implemented yet.
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='nai')

# Apply this spatial filter to the evoked data.
stc = apply_lcmv(evoked, filters, max_ori_out='signed')

# take absolute values for plotting
stc.data[:, :] = np.abs(stc.data)

# Save result in stc files if desired
if fname is not None:
    stc.save(fname)

# select time window (tmin, tmax) in s - consider changing for real data
# scenario, since those values were chosen to optimize computation time
stc.crop(0.087, 0.087)

# Save result in a 4D nifti file
img = mne.save_stc_as_volume(fname, stc, forward['src'],
mri_resolution=True)

return img

Next, we define our morph mapping that we want to compute. The function takes a Moving volume (img_m) that is morphed towards the Static volume (img_s). In general we compute the morph in two steps:

1. we compute an affine linear transform
2. we compute a non-linear transform on the pre-transformed image

The maximum number of iterations during the optimization can be set: niter_affine defines the number of optimization levels and the corresponding maximum number of iterations for the affine registration. The same applies to niter_sdr, but for the Symmetric Diffeomorphic Registration.

As a result the function will return mapping and affine, being the respective non-linear and affine mapping objects.

def compute_morph_map(img_m, img_s, niter_affine=(100, 100, 10),
niter_sdr=(5, 5, 3)):
# get Static to world transform
img_s_grid2world = img_s.affine

# output Static as ndarray
img_s = img_s.dataobj[:, :, :]

# normalize values
img_s = img_s.astype('float') / img_s.max()

# get Moving to world transform
img_m_grid2world = img_m.affine

# output Moving as ndarray
img_m = img_m.dataobj[:, :, :]

# normalize values
img_m = img_m.astype('float') / img_m.max()

# compute center of mass
c_of_mass = imaffine.transform_centers_of_mass(img_s, img_s_grid2world,
img_m, img_m_grid2world)

nbins = 32

# set up Affine Registration
affreg = imaffine.AffineRegistration(
metric=imaffine.MutualInformationMetric(nbins, None),
level_iters=list(niter_affine),
sigmas=[3.0, 1.0, 0.0],
factors=[4, 2, 1])

# translation
translation = affreg.optimize(img_s, img_m,
transforms.TranslationTransform3D(), None,
img_s_grid2world, img_m_grid2world,
starting_affine=c_of_mass.affine)

# rigid body transform (translation + rotation)
rigid = affreg.optimize(img_s, img_m,
transforms.RigidTransform3D(), None,
img_s_grid2world, img_m_grid2world,
starting_affine=translation.affine)

# affine transform (translation + rotation + scaling)
affine = affreg.optimize(img_s, img_m,
transforms.AffineTransform3D(), None,
img_s_grid2world, img_m_grid2world,
starting_affine=rigid.affine)

# apply affine transformation
img_m_affine = affine.transform(img_m)

# set up Symmetric Diffeomorphic Registration (metric, iterations)
sdr = imwarp.SymmetricDiffeomorphicRegistration(
metrics.CCMetric(3), list(niter_sdr))

# compute mapping
mapping = sdr.optimize(img_s, img_m_affine)

return mapping, affine
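The chaining above (center of mass, then translation, then rigid, then full affine, each stage seeded with the previous result via starting_affine) is a standard coarse-to-fine strategy. As a toy illustration of what the seeding amounts to mathematically, refining one transform estimate with another is just matrix multiplication in homogeneous coordinates (plain NumPy, independent of DIPY; the matrices are illustrative):

```python
import numpy as np

# translation by (2, 3) in homogeneous 2D coordinates
T = np.array([[1., 0., 2.],
              [0., 1., 3.],
              [0., 0., 1.]])

# isotropic scaling by 2 about the origin
S = np.array([[2., 0., 0.],
              [0., 2., 0.],
              [0., 0., 1.]])

# refining the translation estimate with the scaling estimate
# composes the two matrices
combined = S @ T

point = np.array([1., 1., 1.])  # the point (1, 1)
print(combined @ point)  # [6. 8. 1.] -> translate first, then scale
```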

In order to make the functionality useful in a real case scenario, one might want to save and load the respective mapping data. For this reason the following two functions were implemented:

def save_mapping(fname, data, overwrite=True):
out = dict()

# dissolve object structure
for d in data:
# save type for order independent decomposition
out[type(d).__name__] = d.__dict__

write_hdf5(fname + '.h5', out, overwrite=overwrite)

… saves the data by creating a dictionary with all data types and their respective sub-structure, which is obtained in the form of the internal dictionary of mapping or affine.

def load_mapping(fname):
# create new instances
mapping = imwarp.DiffeomorphicMap(None, [])
affine = imaffine.AffineMap(None)

data = read_hdf5(fname + '.h5')

mapping.__dict__ = data.get(type(mapping).__name__)
affine.__dict__ = data.get(type(affine).__name__)

return mapping, affine

… looks up the desired data types in the loaded dictionary and recomposes mapping and affine by creating an empty instance and reassigning the loaded data.
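The save/load pattern above relies on a generic Python trick: an object’s state lives in its __dict__, so it can be serialized as a plain dictionary and later reattached to a freshly created empty instance. A minimal sketch of the round trip, using a hypothetical stand-in class rather than DIPY’s actual AffineMap:

```python
class AffineMapLike:
    """Stand-in for an object like dipy's AffineMap (hypothetical)."""

    def __init__(self, affine):
        self.affine = affine


# "save": dissolve the object into a plain dict, keyed by type name
original = AffineMapLike(affine=[[1, 0], [0, 1]])
stored = {type(original).__name__: original.__dict__}

# "load": create an empty instance and reassign the stored state
restored = AffineMapLike(None)
restored.__dict__ = stored['AffineMapLike']

print(restored.affine)  # [[1, 0], [0, 1]]
```

In the real functions, stored would be written to and read back from an HDF5 file via write_hdf5 and read_hdf5.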

Now we are able to create, save and load our non-linear morphs. A crucial part however is still missing, namely applying the morph to our data. This step is done by the following function:

def morph_precomputed(img, affine, mapping):
# morph img data
img_sdr_affine = np.zeros(img.shape)
for vol in range(img.shape[3]):
img_sdr_affine[:, :, :, vol] = mapping.transform(
affine.transform(img.dataobj[:, :, :, vol]))

return img_sdr_affine

… the function takes 4D image data and loops over the 4th dimension (normally a time series). Each remaining 3D volume is morphed using the affine transform first, followed by the non-linear transform. The function returns a 4D volume in the new space.
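The per-volume looping pattern can be sketched in isolation: given a 4D array, apply some 3D operation to each volume along the last axis. Here a simple flip stands in for the affine plus non-linear warp (a toy sketch, independent of the actual image data):

```python
import numpy as np


def apply_per_volume(data_4d, func):
    """Apply a 3D operation to every volume of a 4D array."""
    out = np.zeros(data_4d.shape)
    for vol in range(data_4d.shape[3]):
        out[:, :, :, vol] = func(data_4d[:, :, :, vol])
    return out


data = np.arange(16.).reshape(2, 2, 2, 2)
# flip the first spatial axis of each volume
flipped = apply_per_volume(data, lambda v: v[::-1])
print(flipped[:, 0, 0, 0])  # [8. 0.]
```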

Because we also want to visualize our data later on, we want to select slices to show that correspond to the same brain structures, rather than to the same coordinates relative to the volume itself. Hence we might want to show different slices for Moving and Static respectively. Slices are defined as cut coordinates and shifted using a transformation matrix (in this example the inverse affine transform). The function returns an array containing the morphed 3D coordinates. Note that in order to perform a 3D transform using a 4×4 transformation matrix, a column of ones is appended to the Nx3 array containing the old coordinates.

def morph_slice_indices(slice_indices, transmat):
# make sure array is 2D
slice_indices = np.atleast_2d(slice_indices)

# add a column of ones
slice_indices = np.append(slice_indices,
np.ones((slice_indices.shape[0], 1)), axis=1)

# matrix multiplication with transformation matrix
slice_indices_out = np.einsum('...j,ij', slice_indices, transmat)

# round to select valid slices and remove last column
return np.round(slice_indices_out[:, :-1])
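To see the homogeneous-coordinate trick in isolation, here is a small self-contained check of the same einsum logic with a pure translation matrix (the shift value is illustrative only):

```python
import numpy as np


def morph_slice_indices(slice_indices, transmat):
    # same logic as above: append a column of ones, apply the 4x4 matrix
    slice_indices = np.atleast_2d(slice_indices)
    slice_indices = np.append(slice_indices,
                              np.ones((slice_indices.shape[0], 1)), axis=1)
    slice_indices_out = np.einsum('...j,ij', slice_indices, transmat)
    # round to select valid slices and remove the homogeneous column
    return np.round(slice_indices_out[:, :-1])


# a 4x4 affine that shifts x by 5 and leaves y and z untouched
shift_x = np.array([[1., 0., 0., 5.],
                    [0., 1., 0., 0.],
                    [0., 0., 1., 0.],
                    [0., 0., 0., 1.]])

print(morph_slice_indices((1., 2., 3.), shift_x))  # [[6. 2. 3.]]
```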

As a last step – mostly for convenience – we define a function that prepares our morph template data. This includes loading the example data and re-slicing. The first argument to the function is the result we obtained from compute_lcmv_example_data. The function returns the pre-processed volumes (functional, moving MRI, static MRI).

def prepare_volume_example_data(img_in, t1_m_path, t1_s_path,
voxel_size=(3., 3., 3.)):
# load lcmv inverse
if isinstance(img_in, str):
    img_vol = nib.load(img_in)
else:
    img_vol = img_in

# reslice lcmv inverse
img_vol_res, img_vol_res_affine = reslice.reslice(
img_vol.get_data(),
img_vol.affine,
voxel_size)

img_vol_res = nib.Nifti1Image(img_vol_res, img_vol_res_affine)

# load subject brain (Moving)
t1_m_img = nib.load(t1_m_path)

# reslice Moving
t1_m_img_res, t1_m_img_res_affine = reslice.reslice(
t1_m_img.get_data(),
t1_m_img.affine,
voxel_size)

t1_m_img_res = nib.Nifti1Image(t1_m_img_res, t1_m_img_res_affine)

# load fsaverage brain (Static)
t1_s_img = nib.load(t1_s_path)

# reslice Static
t1_s_img_res, t1_s_img_res_affine = reslice.reslice(
t1_s_img.get_data(),
t1_s_img.affine,
voxel_size)

t1_s_img_res = nib.Nifti1Image(t1_s_img_res, t1_s_img_res_affine)

return img_vol_res, t1_m_img_res, t1_s_img_res

Lastly we execute the example:

# Setup path
data_path = sample.data_path()
results_path = data_path + '/subjects/LCMV-results'

# create results directory if it doesn't exist
if not path.exists(results_path):
makedirs(results_path)

# compute LCMV beamformer inverse example
img = compute_lcmv_example_data(data_path)

# load and reslice volumes
img_vol_res, t1_m_img_res, t1_s_img_res = prepare_volume_example_data(
img,
data_path + '/subjects/sample/mri/brain.mgz',
data_path + '/subjects/fsaverage/mri/brain.mgz',
voxel_size=(5., 5., 5.))

# compute morph map from Moving to Static
mapping, affine = compute_morph_map(t1_m_img_res, t1_s_img_res)

save_mapping(results_path + '/volume_morph', [mapping, affine])

mapping, affine = load_mapping(results_path + '/volume_morph')

# apply morph map (test if saving and loading worked)
img_vol_morphed = morph_precomputed(img_vol_res, affine, mapping)

# make transformed ndarray a nifti
img_vol_morphed = nib.Nifti1Image(img_vol_morphed,
affine=t1_s_img_res.affine)

# save morphed result
nib.save(img_vol_morphed, results_path + '/lcmv-fsaverage.nii.gz')

The plotting can be done using:

# select image overlay
imgs = [index_img(img_vol_res, 0), index_img(img_vol_res, 0),
index_img(img_vol_morphed, 0)]

# select anatomical background images
t1_imgs = [t1_m_img_res, t1_s_img_res, t1_s_img_res]

# slices to show for Static volume
slices_s = (-10, 0, 0)

# slices to show for Moving volume
# to show roughly the same view, we transform the selected Static slices using
# the inverse affine transformation. Note that due to rotation and the
# non-linear transform, both views do not overlap perfectly (pre computed for
# this example)
slices_m = morph_slice_indices(np.atleast_2d(slices_s), affine.affine_inv)[0]

slices = [slices_s, slices_s, tuple(slices_m)]

# define titles for plots
titles = ['subject brain', 'fsaverage brain', 'fsaverage brain morphed result']

# plot results
figure, (axes1, axes2, axes3) = plt.subplots(3, 1)
figure.subplots_adjust(top=0.8, left=0.1, right=0.9, hspace=0.5)
figure.patch.set_facecolor('black')

for axes, img, t1_img, cut_coords, title in zip([axes1, axes2, axes3],
imgs, t1_imgs, slices, titles):
display = plot_anat(t1_img,
display_mode='ortho',
cut_coords=cut_coords,
draw_cross=False,
axes=axes,
figure=figure,
annotate=False)

display.annotate(size=8)
axes.set_title(title, color='white', fontsize=12)

plt.text(plt.xlim()[1], plt.ylim()[0], 't = 0.087s', color='white')
plt.suptitle('morph subject results to fsaverage', color='white', fontsize=16)
plt.show()

… but note that the results shown below are an animated time series, whereas the plot produced by the above code will only show a single time point.

As is clearly visible, the morph was as successful as before, but now has a much nicer API and can be used as a hands-on example.

Furthermore the above code computes within a minute, so don’t hesitate to try it out! 😉

As usual the source code will be provided via GitHub.

Stay tuned!

Cheers,

Tommy

### Code repository now available

Just a short note on where to find the code produced throughout the project.

The code will be uploaded to GitHub and can be accessed via the following link:

https://github.com/TommyClausner/GSoC2018MNE

Below each post you will also find a link to the respective code used in that specific blog post (if applicable).

### brain2brain – first example on real data

I proudly present: the first working example! It demonstrates how to use DIPY in order to morph your volumetric grid space obtained from an LCMV beamformer computed in MNE-Python.

The procedure is as follows:

•  compute the results we want to morph
•  select reference spaces (in our case the subject’s individual T1 and “fsaverage”, which comes with mne or FreeSurfer). Note that we only use the “brain”, excluding skull and other surrounding parts. A brain only image can be obtained from the FreeSurfer segmentation.
•  morph subject brain to average brain
•  apply the resulting transform to the volumetric grid obtained from the lcmv beamformer

The following Python packages should be installed beforehand: mne, dipy, nibabel, nilearn. This should work easily using pip:

pip install mne dipy nibabel nilearn

After that is done, we will use an existing example of MNE-Python to obtain the individual result we want to morph. The full example can be found here.

But first let’s import some packages:

import numpy as np

import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, apply_lcmv

from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import CCMetric
from dipy.align.reslice import reslice
from dipy.align.transforms import (TranslationTransform3D,
RigidTransform3D,
AffineTransform3D)
from dipy.align.imaffine import (transform_centers_of_mass,
MutualInformationMetric,
AffineRegistration)
import nibabel as nib

from nilearn.plotting import plot_anat
from nilearn.image import index_img, resample_img

Next we compute a slightly modified version of the example provided by MNE-Python:

# Setup paths
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'

# Get epochs
event_id, tmin, tmax = [1, 2], -0.2, 0.5

# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)

# Set up pick list: gradiometers and magnetometers, excluding bad channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')

# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])

# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
proj = False  # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()

# Read regularized noise covariance and compute regularized data covariance
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
###############################################################################
# Read forward model
forward = mne.read_forward_solution(fname_fwd)

# Compute weights of free orientation (vector) beamformer with weight
# normalization (neural activity index, NAI). Providing a noise covariance
# matrix enables whitening of the data and forward solution. Source orientation
# is optimized by setting pick_ori to 'max-power'.
# weight_norm can also be set to 'unit-noise-gain'. Source orientation can also
# be 'normal' (but only when using a surface-based source space) or None,
# which computes a vector beamfomer. Note, however, that not all combinations
# of orientation selection and weight normalization are implemented yet.
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='nai')

# Apply this spatial filter to the evoked data.
stc = apply_lcmv(evoked, filters, max_ori_out='signed')

# take absolute values for plotting
stc.data[:, :] = np.abs(stc.data)

# Save result in stc files
stc.save('lcmv-vol')

# select time window (tmin, tmax)
stc.crop(0.0, 0.0)

# Save result in a 4D nifti file
mne.save_stc_as_volume('lcmv_inverse.nii.gz', stc,
forward['src'], mri_resolution=True)

Note that we set mri_resolution to True. This helps us later on in morphing the grid, since the (non-linear) transform will be estimated based on the anatomical MRI(s).

Next, we will prepare our data for the morph. This includes loading the respective volumes, defining a new voxel size (note that because this is an example, the voxel size was chosen to optimize performance rather than resolution), and normalizing our two reference T1s. Furthermore we will define the number of iterations used for each optimization step (explained below). In order to keep things comprehensible, we will from now on call the fsaverage T1 image Static, because it is the volume we want to move to. The subject’s T1 in turn will be called Moving, since it will be moved towards Static:

# isometric voxel size of 3 mm - reconsider for real data
voxel_size = (3., 3., 3.)

# number of iterations for each optimisation level
niter_affine = [100, 100, 10]
niter_sdr = [5, 5, 3]

# load lcmv inverse
img_vol = nib.load('lcmv_inverse.nii.gz')

# reslice lcmv inverse
img_vol_res, img_vol_res_affine = reslice(img_vol.get_data(), img_vol.affine,
                                          img_vol.header.get_zooms()[:3],
                                          voxel_size)
img_vol = nib.Nifti1Image(img_vol_res, img_vol_res_affine)

img_vol_first = index_img(img_vol, 0)  # for plotting

# select first time volume and output ndarray
img = img_vol.dataobj[:, :, :, 0]

# load subject brain (Moving)
t1_fname = data_path + '/subjects/sample/mri/brain.mgz'
t1_m_img = nib.load(t1_fname)

# reslice Moving
t1_m_img_res, t1_m_img_res_affine = reslice(t1_m_img.get_data(),
                                            t1_m_img.affine,
                                            t1_m_img.header.get_zooms()[:3],
                                            voxel_size)
t1_m_img = nib.Nifti1Image(t1_m_img_res, t1_m_img_res_affine)

# get Moving to world transform
t1_m_grid2world = t1_m_img.affine

# output Moving as ndarray
t1_m = t1_m_img.dataobj[:, :, :]

# normalize values
t1_m = t1_m.astype('float') / t1_m.max()

# load fsaverage brain (Static)
t1_fname = data_path + '/subjects/fsaverage/mri/brain.mgz'
t1_s_img = nib.load(t1_fname)

# reslice Static
t1_s_img_res, t1_s_img_res_affine = reslice(t1_s_img.get_data(),
                                            t1_s_img.affine,
                                            t1_s_img.header.get_zooms()[:3],
                                            voxel_size)
t1_s_img = nib.Nifti1Image(t1_s_img_res, t1_s_img_res_affine)

# get Static to world transform
t1_s_grid2world = t1_s_img.affine

# output Static as ndarray
t1_s = t1_s_img.dataobj[:, :, :]

# normalize values
t1_s = t1_s.astype('float') / t1_s.max()

Now we can compute the transform. First we will use an affine transformation to get both images pre-aligned as well as possible. Based on that result, the non-linear transform will follow.

Step 1 – affine transform:

# compute center of mass
c_of_mass = transform_centers_of_mass(t1_s, t1_s_grid2world,
                                      t1_m, t1_m_grid2world)

nbins = 32

# prepare affine registration
affreg = AffineRegistration(metric=MutualInformationMetric(nbins, None),
                            level_iters=niter_affine,
                            sigmas=[3.0, 1.0, 0.0],
                            factors=[4, 2, 1])

# translation
translation = affreg.optimize(t1_s, t1_m, TranslationTransform3D(), None,
                              t1_s_grid2world, t1_m_grid2world,
                              starting_affine=c_of_mass.affine)

# rigid body transform (translation + rotation)
rigid = affreg.optimize(t1_s, t1_m, RigidTransform3D(), None,
                        t1_s_grid2world, t1_m_grid2world,
                        starting_affine=translation.affine)

# affine transform (translation + rotation + scaling)
affine = affreg.optimize(t1_s, t1_m, AffineTransform3D(), None,
                         t1_s_grid2world, t1_m_grid2world,
                         starting_affine=rigid.affine)

# apply affine transformation
t1_m_affine = affine.transform(t1_m)

We shift the center of mass of Moving to Static. Afterwards, we compute a rigid body transform, which preserves the distance between each set of points. Hence the object will be rotated and translated to match Static even better. In a third step, we finally compute a full affine transform, which additionally includes scaling and shearing.

We use mutual information as the optimization metric, as is common in the field (Mattes et al., 2003).

Our two brains should overlap quite nicely by now. However, we still face the problem of “individuality”: even though the brains overlap very well, structural differences across individuals are still not accounted for.

Step 2 – non-linear transform:

# set up Symmetric Diffeomorphic Registration (metric, iterations per level)
sdr = SymmetricDiffeomorphicRegistration(CCMetric(3), niter_sdr)

# compute mapping
mapping = sdr.optimize(t1_s, t1_m_affine)

# morph img data
img_sdr_affine = mapping.transform(affine.transform(img))

The result is a displacement map, telling exactly which voxel in Moving corresponds to which voxel in Static and how Moving has to be moved to match Static. We will use the cross-correlation optimization metric as described in Avants et al. (2009).
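
To build intuition for what such a displacement map does, here is a minimal numpy-only sketch (not DIPY’s actual implementation): for every output voxel, look up the source voxel in Moving indicated by the per-voxel offset, using nearest-neighbour rounding. The helper name `warp_nearest` and the toy volume are my own illustration.

```python
import numpy as np

def warp_nearest(moving, disp):
    """Warp `moving` with a per-voxel displacement field.

    `disp` has shape moving.shape + (3,): for each output voxel it stores
    the (dx, dy, dz) offset of the source voxel to read from `moving`.
    """
    shape = moving.shape
    idx = np.indices(shape)  # voxel coordinates, shape (3, x, y, z)
    # source coordinates = output coordinates + displacement (rounded)
    src = idx + np.round(np.moveaxis(disp, -1, 0)).astype(int)
    # clip so we never index outside the volume
    for ax in range(3):
        src[ax] = np.clip(src[ax], 0, shape[ax] - 1)
    return moving[src[0], src[1], src[2]]

# toy example: a single bright voxel, displacement "look one voxel along x"
vol = np.zeros((4, 4, 4))
vol[2, 1, 1] = 1.0
disp = np.zeros((4, 4, 4, 3))
disp[..., 0] = 1.0
warped = warp_nearest(vol, disp)  # bright voxel appears shifted to (1, 1, 1)
```

DIPY’s `mapping.transform` does the same job with proper interpolation and in world coordinates, but the bookkeeping — one offset vector per voxel — is the same idea.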

Step 3 – plotting, saving, happiness:

# make transformed ndarray a nifti
img_vol_sdr_affine = nib.Nifti1Image(img_sdr_affine, affine=t1_s_grid2world)

# save morphed grid
nib.save(img_vol_sdr_affine, 'lcmv_inverse_fsavg.nii.gz')

# plot result
imgs = [img_vol_first, img_vol_first, img_vol_sdr_affine]

t1_imgs = [t1_m_img, t1_s_img, t1_s_img]

titles = ['subject brain', 'fsaverage brain',
          'fsaverage brain\nmorphed source']

saveas = ['lcmv_subject.png', 'lcmv_fsaverage.png',
          'lcmv_fsaverage_morphed.png']

for img, t1_img, title, fname in zip(imgs, t1_imgs, titles, saveas):
    display = plot_anat(t1_img, display_mode='x', title=title)
    display.add_overlay(img)
    display.savefig(fname)
    display.close()

The result of the above code will vary slightly. In the example, 3 PNG files are saved in order to visualize the transform. For eye-candy reasons, I turned the results into GIF images:

Here you see how the subject’s anatomical T1 and the result of the source analysis overlap nicely:

When overlaying the FreeSurfer average brain with the very same overlay, we see that it does not fit: the spaces are off. The same would happen if we tried to apply the results of subject A to the brain of subject B. Hence, in order to make results comparable, they must fit a common space (in this case fsaverage):

After estimating the volumetric morph based on the anatomical scans, we can now transform our beamformer results to the same space. By first applying the affine transform and then the non-linear transform, we shift our individual results into a common space (fsaverage):

If we did this for a set of subjects, we could compute statistics as usual, because anatomical (and functional) areas would now overlap between subjects.
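
As a sketch of that group-level step (the simulated data and the choice of a one-sample t-test are my assumptions for illustration; in practice each subject array would come from a morphed volume like the `lcmv_inverse_fsavg.nii.gz` saved above):

```python
import numpy as np
from scipy import stats

# simulate 10 subjects on a tiny common grid; in practice each array would
# be a subject's morphed beamformer volume loaded via nibabel
rng = np.random.RandomState(42)
n_subjects, shape = 10, (5, 5, 5)
group = rng.randn(n_subjects, *shape)  # subjects x voxels, pure noise
group[:, 2, 2, 2] += 3.0               # a common "active" voxel

# voxel-wise one-sample t-test against 0 across subjects; this is only
# meaningful because all subjects live in the same (fsaverage) space
t_vals, p_vals = stats.ttest_1samp(group, popmean=0, axis=0)

# the shared effect shows up as the strongest voxel
peak = np.unravel_index(np.argmax(t_vals), shape)
```

Without the morph, the “active” voxel would sit at a different grid location per subject and the effect would wash out in the average.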

What’s next?

The next step would be to provide this example as an official example as part of the MNE-Python documentation.

Afterwards it’s time to work on the API that will make the workflow accessible from higher-level functions. Thereby it could be linked directly to the existing surface morph, such that the user experiences no difference in the actual handling of the function. The magic will be done “under the hood”.

Alright! Have fun trying it out and stay tuned 😉

Grid-ings,

Tommy

References

Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2009). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12(1), 26-41.

Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., & Eubank, W. (2003). PET-CT image registration in the chest using free-form deformations. IEEE transactions on medical imaging, 22(1), 120-128.

Source Code:

https://github.com/TommyClausner/GSoC2018MNE/blob/master/example2_brain2avg.py

### First moment of success!!

Hey together!

The digging for existing functions and implementations regarding the non-linear volumetric morph might turn out shorter than expected.

The basic functionality has already been implemented in DIPY, which uses the Symmetric Normalization (SyN) algorithm proposed by Avants et al. (2009).

Based on the provided example, I tested the algorithm on two T1-weighted anatomical MRIs. Furthermore, I tested how well the algorithm performs on arbitrarily distributed cubes in a larger, otherwise empty cuboid. The volume to be morphed was (in this case) created by slightly shifting each cube’s location in the initial random volume.
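
The cube simulation can be sketched like this (grid size, cube size, and shift magnitude are my assumptions; the actual morph is then computed with DIPY’s `SymmetricDiffeomorphicRegistration` as in the source code linked below):

```python
import numpy as np

rng = np.random.RandomState(0)

def random_cubes(centers, size=4, shape=(64, 64, 64)):
    """Place binary cubes of edge length `size` at the given centers."""
    vol = np.zeros(shape)
    half = size // 2
    for cx, cy, cz in centers:
        # clip so every cube stays fully inside the volume
        x = int(np.clip(cx - half, 0, shape[0] - size))
        y = int(np.clip(cy - half, 0, shape[1] - size))
        z = int(np.clip(cz - half, 0, shape[2] - size))
        vol[x:x + size, y:y + size, z:z + size] = 1.0
    return vol

shape = (64, 64, 64)
centers = rng.randint(8, 56, size=(10, 3))  # 10 random cube centers
static = random_cubes(centers, shape=shape)

# "moving" volume: the same cubes, each shifted slightly at random
jitter = rng.randint(-2, 3, size=centers.shape)
moving = random_cubes(centers + jitter, shape=shape)

# overlap before morphing: fraction of "on" voxels shared by both volumes;
# after a successful SyN morph this fraction should increase
overlap = np.logical_and(static > 0, moving > 0).sum() / (static > 0).sum()
```

Registering `moving` onto `static` with SyN and recomputing the overlap is exactly the quality check described above: more yellow, higher overlap.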

And here are the results:

While the red color indicates the moving volume, the green color represents the static one. Hence, the more areas are yellow, the better (higher overlap). Each frame of the respective GIF represents a slice through the corresponding volume.

For both simulations it turns out that the amount of overlap after morphing the “moving” volume towards the “static” volume increased. Hence a (non-linear) transformation morphing image landmarks towards each other was achieved.

Structural differences in my testing examples were quite significant (human head vs. random cubes), however the algorithm performed well in both scenarios.

I think that’s quite a good starter!

Let’s see what else is out there!

Cheers,

Tommy

References:

Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2009). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12(1), 26-41.

Source Code:

https://github.com/TommyClausner/GSoC2018MNE/blob/master/example1_dipyTest.py

### import GoogleSummerOfCode as GSoC

Hi everyone!

Summer is on its way and the Google Summer of Code is by no means an exception!

I’m very happy that my project with the Python Software Foundation and MNE-Python got accepted!

The project is titled: mne.set_volumeAverage(True) – preparing Group level analyses for volumetric data

So what’s it all about?

MNE-Python is an open-source software package for Python, providing tools for the analysis of (electrophysiological) brain data. Those tools include functions for neural source reconstruction of MEG/EEG data. In a nutshell, the goal is to infer the sources of brain activity given electrophysiological data recorded at multiple locations around the head. One approach to solve this so-called inverse problem is beamforming: the anatomical brain is partitioned into a volumetric grid, and by “scanning” through all grid points while trying to maximize the explanation of the recorded data, grid points can be identified as relevant generators of a certain set of brain data. Furthermore, time-resolved (LCMV beamformer) or frequency-resolved (DICS beamformer) data for those “virtual channels” can be obtained.

Due to anatomical differences across subjects, grid points might expose different relations. In other words: no two brains are alike. Imagine lining up all facial features of two faces. Even though they almost always share the same features, it is also almost always impossible to align all features at the same time.

In order to compute group-level statistics on volumetric brain data, we face the same problem. It is hence necessary to transform individual “brain grids” into a common grid space, where functionally (and anatomically) similar regions are represented at similar voxel locations in the volumetric grid.

This is where my project becomes relevant:
As a result of the proposed project, I would like to see a set of functions that allow for (non-)linear warping of volumetric grids and for creating pseudo-individual MR images based on a subject’s head shape. If time allows, the respective visualization would be a “nice to have” as well.

Source code produced throughout the project will be provided via GitHub.

All right! Let’s get started!

Stay tuned for any updates on the grid side of life!

Cheers,

Tommy