DiGyt's Blog

Week 6: Blog Post (#3)

DiGyt
Published: 07/08/2019

This week I worked on several smaller things:

- Added some more commits to the read_curry function (which in my opinion is definitely ready to be merged now).
- Enhanced tests for the SourceTFR class, to push coverage to 89%.
- Continued trying to get `tfr_morlet()` and `source_induced_power()` to produce exactly equal data fields.

For the last part I faced several problems. The two named functions are often not entirely equivalent. For example, when SourceEstimates are created, the vector length is calculated from the x, y, and z components of each dipole. This is done with the plain old norm = sqrt(x**2 + y**2 + z**2). Since ready-made SourceEstimates are fed to the TFR functions, this step has already been performed by the time the time-frequency transform is applied. In contrast, source_induced_power calculates the norm after the TFR step. Since tfr(norm(data)) and norm(tfr(data)) are not equivalent in this case, I had to use VectorSourceEstimates instead. For VectorSourceEstimates, the x, y, and z components are returned separately, which allows handling them individually.
Although the data looks more similar when performing the TFR functions on each component of the VectorSourceEstimate and then calculating the norm, there are still various other steps that may cause slight differences in the data. After trying to handle some of them, I decided that my next step will be to go through both functions from start to end and check where their calculations differ.
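The non-commutativity of the norm and the TFR step can be sketched with a toy example in plain NumPy. The hand-rolled single-frequency Morlet convolution below is only an illustrative stand-in for MNE's `tfr_morlet`, and all names are hypothetical, not MNE's API:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy "dipole" time courses: x, y, z components, shape (3, n_times)
data = rng.standard_normal((3, 200))

def tfr_power(signal, sfreq=100.0, freq=10.0, n_cycles=7.0):
    """Amplitude of a single-frequency Morlet convolution (illustrative)."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    return np.abs(np.convolve(signal, wavelet, mode='same'))

# norm(tfr(data)): transform each component separately, then combine
tfr_then_norm = np.sqrt(sum(tfr_power(c)**2 for c in data))

# tfr(norm(data)): combine the components first, then transform
norm_then_tfr = tfr_power(np.sqrt((data**2).sum(axis=0)))

# The two orders of operation do not commute
print(np.allclose(tfr_then_norm, norm_then_tfr))  # False
```

The second pipeline rectifies the signal before the transform (the norm is nonnegative), destroying the phase information the wavelet convolution relies on, which is why the results diverge.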

So, that's going to be an intensive task, but I hope it will work out fine.


Stay tuned and see you next week!


Week 5: Weekly Check-In (#3)

DiGyt
Published: 07/01/2019

1. What did you do this week?

Unfortunately, the read_curry PR is still not done, so I had to focus on this problem once again. However, I still had a little time to work on the main project besides. There, I started to introduce more structured tests for both multitaper- and morlet-transformed source-level data.

2. What is coming up next?

I just REALLY hope that there are no more things in need of change in the currently running read_curry PR, and that I can finally focus entirely on the main task.

3. Did you get stuck anywhere?

There were some requests to refactor the testing suite for more clarity (which is of course a good thing), but also some need for additional coverage. Additional coverage, however, in this case meant additional testing data. This testing data wouldn't be a problem if adding it weren't a relatively time-consuming process: the data must first be added to the testing repository, and then the version and download hash have to be updated in the main repository, before the changes that rely on this data can be committed. Of course, it's important to maintain a stable repo, but as said, this can really stretch out the development process.


Week 4: Blog Post (#2)

DiGyt
Published: 06/23/2019

This week, I was mainly focusing on another pull request, which allows MNE to read data created by an external program, the Curry Neuroimaging Suite.
This pull request had been lingering for quite some time (I already mentioned it in Blog Post #1), but there was little progress because of a tough problem concerning the testing data.
After contacting one of Curry's developers, we could eliminate the problem for our Curry reader. Since my mentor asked me to focus on the Curry issue first before progressing,
I worked to get this issue done this week, so that I can have my mind entirely set on the source-space TFR from now on. The PR is now about to be merged, and I hope to make some great progress on the SourceTFR work next week.


Week 3: Weekly Check-In (#2)

DiGyt
Published: 06/19/2019

1. What did you do this week?

This week I tried to adapt the computation of the `tfr_multitaper` function to give out the correct data field when fed with `SourceEstimate` objects.
Later, I switched to adapting the `tfr_morlet` function in the same way, since I noticed that this would be a better starting point in terms of comparability (there is an already existing function for Morlet wavelet transforms in source space against which I can compare my work, while for multitaper there is none).

2. What is coming up next?

I'll now try to reproduce the exact results for the Morlet data. It already looks very good (let's say the data fields are about 80-90% similar, but there are still some small differences I haven't figured out yet).

3. Did you get stuck anywhere?

Yes. When getting `tfr_multitaper` to read `SourceEstimate`s, I thought that testing on artificially created frequency data was a good idea. However, I noticed that this is not an optimal solution, especially since it's better to use valid `SourceEstimate` data. This was also the reason I switched to working on `tfr_morlet` before getting `tfr_multitaper` done entirely.


Week 2: Blog Post (#1)

DiGyt
Published: 06/10/2019

Last week, I got the first `SourceTFR` draft far enough to actually use it in TFR functions, which was the other main task this week.
The process was more or less like force-feeding this alien object into a constantly complaining function, checking out every error it throws, and trying to correct it. Yay!
The main task for now was pretty much juggling data fields around until everything fits, and introducing new if cases for the `SourceEstimate` data types.
I don't really expect the function to produce correct time-frequency transforms at this point, but I think I'm getting closer and closer.

Unfortunately, I also had to work on another MNE pull request last week, which cost me A LOT of time, yet I wasn't able to make much progress on it.
So this week, I'm going to focus more on the GSoC project, which is of course the more important thing to work on.

So for this week, I'm going to continue working on the multitaper TFR function and hope to produce some first results of time-frequency transforms.
