DiGyt's Blog

Week 8: Blog Post (#4)

DiGyt
Published: 07/22/2019

Last week, I was working on parametrizing the tests for SourceEstimates with tfr_morlet, and making sure the tests also pass for all possible combinations of function arguments. Overall, this worked quite smoothly for most parameters. In fact, I was already able to start implementing the same procedure for tfr_multitaper. However, tfr_multitaper has no reference function like tfr_morlet does, so for now its tests basically just run the function and check that everything ends up in the right place.
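The parametrization idea can be sketched in plain Python. MNE's test suite does this with pytest.mark.parametrize; the toy_tfr function below is purely hypothetical and only illustrates the pattern of sweeping every argument combination and checking the output each time:

```python
import itertools

import numpy as np


def toy_tfr(data, output="power", use_fft=True):
    """Hypothetical stand-in for a TFR function.

    Returns a (n_freqs, n_times) array of 'power' or 'complex' values;
    use_fft is accepted but ignored here, like a real keyword we'd sweep.
    """
    n_freqs = 4
    spec = np.stack([np.fft.fft(data) for _ in range(n_freqs)])
    return np.abs(spec) ** 2 if output == "power" else spec


# sweep every combination of arguments, checking shape and dtype each time
data = np.random.default_rng(1).standard_normal(32)
for output, use_fft in itertools.product(["power", "complex"], [True, False]):
    res = toy_tfr(data, output=output, use_fft=use_fft)
    assert res.shape == (4, 32)
    assert np.iscomplexobj(res) == (output == "complex")
print("all parameter combinations passed")
```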

However, some of the parameters were still causing problems. By far the most time this week was spent getting tfr_morlet to process data from 'vector' representations (i.e. instead of one virtual "sensor" in source space, three virtual "sensors" are created, each one representing one directional activation axis). Besides adding a further dimension (which was no big problem to adapt to), the data fields consistently showed a slight difference (of about 2-4%).
After noticing that, in this case, the two procedures taken in the functions are theoretically not expected to produce the same results, I'll have to find a case where a comparison between the two is possible.
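The extra dimension mentioned above can be illustrated with a small NumPy sketch. All sizes here are made up for illustration; only the orientation axis of length 3 mirrors the vector case:

```python
import numpy as np

rng = np.random.default_rng(42)
n_src, n_ori, n_freqs, n_times = 5, 3, 4, 100  # illustrative sizes

# vector source data: one time series per orientation axis (x, y, z)
vec_data = rng.standard_normal((n_src, n_ori, n_times))

# a TFR adds a frequency axis, so the output gains one more dimension
tfr = rng.standard_normal((n_src, n_ori, n_freqs, n_times)) ** 2

# one way to collapse the orientation axis afterwards: the L2 norm
collapsed = np.linalg.norm(tfr, axis=1)
print(vec_data.shape, tfr.shape, collapsed.shape)
# (5, 3, 100) (5, 3, 4, 100) (5, 4, 100)
```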

So this is one thing I'm going to do in the following week. But mainly, I'm going to continue covering cases for the tfr_multitaper function, as well as starting with the last function, tfr_stockwell.
Another thing to possibly look at now is a further option of these functions, which is to additionally return the inter-trial coherence of the time-frequency transforms.
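For reference, inter-trial coherence (ITC) is the length of the across-trials mean of the unit phase vectors of a complex TFR: 0 means random phases across trials, 1 means perfect phase locking. A minimal NumPy sketch with made-up trial data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_freqs, n_times = 50, 6, 80

# made-up complex single-trial TFR values, shape (n_trials, n_freqs, n_times)
tfr = rng.standard_normal((n_trials, n_freqs, n_times)) \
    + 1j * rng.standard_normal((n_trials, n_freqs, n_times))

# ITC: magnitude of the across-trials mean of the unit phase vectors
itc = np.abs(np.mean(tfr / np.abs(tfr), axis=0))
print(itc.min() >= 0.0 and itc.max() <= 1.0)  # True: ITC lies in [0, 1]

# perfectly phase-locked trials yield an ITC of exactly 1
locked = np.exp(0.3j) * np.ones((n_trials, n_freqs, n_times))
print(np.allclose(np.abs(np.mean(locked / np.abs(locked), axis=0)), 1.0))  # True
```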


Week 7: Weekly Check-In (#4)

DiGyt
Published: 07/15/2019

1. What did you do this week?

This week I finally got my first case of equivalence between the new SourceTFR API and the old source_induced_power function. As it turned out, the unequal results were not actually a problem in my code, but instead resulted from a tiny bug in the source_induced_power function, which I use to test my project. It came from a parameter that was not passed on properly, causing different wavelet functions to return different time-frequency transforms...

2. What is coming up next?

Now, the first thing I'll do is introduce fully parametrized tests to make sure that the wavelet function works for all kinds of SourceEstimates and all combinations of parameters. If I get through this quickly, I might even start to do the same thing for multitapers.

3. Did you get stuck anywhere?

Yeah, well: if you have to analyze and compare every line between two different (and heavily nested) functions until you notice that a rather unrelated part is causing the actual problem, I wouldn't exactly call that 'getting stuck'. Still, it results in the same loss of time ;)


Week 6: Blog Post (#3)

DiGyt
Published: 07/08/2019

This week I was doing several smaller things:

- Added some more commits to the read_curry function (which in my opinion is definitely ready to be merged now).
- Enhanced tests for the SourceTFR class, to push coverage to 89%.
- Further tried to get tfr_morlet() and source_induced_power() to produce 100% equal data fields.

For the last part I faced several problems. Often, the two named functions are not entirely equivalent. For example, when SourceEstimates are created, the vector length is calculated from the x, y, and z components of each dipole, using the plain old norm = sqrt(x**2 + y**2 + z**2). Since ready-made SourceEstimates are fed to the TFR functions, this step has already been performed by the time the time-frequency transform runs. Contrarily, when using source_induced_power, the norm is calculated after the TFR function. Since tfr(norm(data)) and norm(tfr(data)) are not equivalent in this case, I had to use VectorSourceEstimates instead. For VectorSourceEstimates, the x, y, and z components are returned separately, which allows handling the data separately.
Although the data looks more similar when performing the TFR functions on each component of the VectorSourceEstimate and then calculating the norm, there are still various other steps that may cause slight differences in the data. After trying to handle some of them, I decided that my next step is to go through the entire functions from start to end and check where their calculations differ.
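That tfr(norm(data)) and norm(tfr(data)) differ can be demonstrated with a hand-rolled Morlet wavelet on random data. This is only a sketch under my own assumptions (the wavelet construction and parameters are not MNE's code), comparing the power of the transformed norm against the combined per-component power:

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq, n_times = 100.0, 200
xyz = rng.standard_normal((3, n_times))  # x, y, z dipole components

# hand-rolled complex Morlet wavelet at 10 Hz with 7 cycles
freq, n_cycles = 10.0, 7.0
sigma_t = n_cycles / (2.0 * np.pi * freq)
wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / sfreq)
wavelet = np.exp(2j * np.pi * freq * wt - wt ** 2 / (2 * sigma_t ** 2))
wavelet /= np.linalg.norm(wavelet)


def tfr_power(sig):
    """Single-frequency wavelet power of a 1D signal."""
    return np.abs(np.convolve(sig, wavelet, mode="same")) ** 2


# tfr(norm(data)): collapse the orientations first, then transform
p_norm_first = tfr_power(np.sqrt((xyz ** 2).sum(axis=0)))
# norm(tfr(data)): transform each component, then combine the powers
p_tfr_first = np.stack([tfr_power(c) for c in xyz]).sum(axis=0)

print(np.allclose(p_norm_first, p_tfr_first))  # False: the steps don't commute
```

The norm is a nonlinear operation (it rectifies the signal), so applying it before or after the linear wavelet convolution genuinely changes the result.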

So, that's going to be an intensive task, but I hope it will work out fine.


Stay tuned and see you next week!


Week 5: Weekly Check-In (#3)

DiGyt
Published: 07/01/2019

1. What did you do this week?

Unfortunately, the read_curry PR is still not done, so I had to focus on this problem once again. However, I still had a little bit of time to work on the main project on the side. There, I started to introduce more structured tests for both multitaper- and Morlet-transformed source-level data.

2. What is coming up next?

I just REALLY hope that there are no more things in need of change in the currently running read_curry PR, and that I can finally focus entirely on the main task.

3. Did you get stuck anywhere?

There were some wishes to refactor the testing suite for more clarity (which is of course a good thing), but also some need for additional coverage. Additional coverage, however, in this case meant additional testing data. This testing data wouldn't be a problem if adding it weren't a relatively time-consuming process: the data first needs to be added to the testing repository, and then the version and download hash have to be adapted in the main repository before the changes that rely on this data can be committed. Of course, it's important to maintain a stable repo, but as said, it can really stretch out the development process.


Week 4: Blog Post (#2)

DiGyt
Published: 06/23/2019

This week, I was mainly focusing on another pull request, which allows MNE to read in data created by an external program, the Curry Neuroimaging Suite.
This pull request had been lingering for quite some time (I already mentioned it in Blog Post #2), but there was little progress because of a tough problem concerning the testing data.
After contacting one of Curry's developers, we could eliminate the problem for our Curry reader. Since my mentor asked me to focus on the Curry issue first before progressing, I worked to get it done this week, so that I can have my mind entirely set on the source-space TFR from now on. The PR is now about to be merged, and I hope I'll have some great progress on the SourceTFR stuff next week.
