This week I started porting over the GUI and figuring out integration questions, like how to set the rendering backend. I also laid the groundwork (raising an issue and discussing it) for smaller PRs that will help along the way, such as adding subcortical Freesurfer recon-all structures to the plotting, which will be needed to visualize everything nicely.

Things are moving along, and the (almost) merged PR that sorts out the registration and diffeomorphic warping is done, except for a few housekeeping items. It was very complicated to integrate into the existing code, but in the end I'm very happy that all the tests passed: this implementation is functionally equivalent to the previous one, and it also gives accurate MR-CT alignments that the last one did not. The API is simpler, though it required a bit of a workaround: after each step, the registration is applied to the moving image rather than being used as a seed for the next step, which is what Dipy's API suggests is the best approach. In fact, the seeding method gave trouble, while the apply method gave great results and was consistent in forward and back tests.

Next week, I'll really concentrate on the GUI. I hope to get everything working, and lightning quick, so that I can get feedback on the layout and usability and go from there.
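To make the seed-versus-apply distinction concrete, here is a minimal numpy sketch — not the actual PR code, and the stage matrices are made up for illustration. For affine stages, "seeding" means each optimizer starts from the previous cumulative transform, while "applying" means each stage warps the moving image first and only estimates the residual, so the total warp is the composition of the per-stage results. The sketch also mirrors the kind of forward-and-back consistency check mentioned above.

```python
import numpy as np

# Hypothetical per-stage results of a multi-step registration
# (translation -> rigid -> full affine), as 4x4 homogeneous matrices.
translation = np.eye(4)
translation[:3, 3] = [2.0, -1.0, 0.5]

rigid = np.eye(4)
theta = np.deg2rad(5.0)
rigid[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]

affine = np.eye(4)
affine[0, 0] = 1.05  # small scaling along x

# "Seed" bookkeeping: each stage starts from the previous transform,
# so the last stage reports one cumulative moving-to-static matrix.
seeded_total = affine @ rigid @ translation

# "Apply" bookkeeping: each stage's transform is applied to the moving
# image before the next stage runs; each stage estimates only the
# residual, and the total warp is the composition of per-stage results.
applied_total = np.eye(4)
for step in (translation, rigid, affine):
    applied_total = step @ applied_total

# For affine stages the two schemes describe the same total transform;
# in practice they differ in how well each stage's optimization behaves.
assert np.allclose(seeded_total, applied_total)

# Forward-and-back consistency check: mapping a point through the total
# transform and then its inverse should return the original point.
pt = np.array([10.0, 20.0, 30.0, 1.0])
roundtrip = np.linalg.inv(applied_total) @ (applied_total @ pt)
assert np.allclose(roundtrip, pt)
```

The real pipeline adds a diffeomorphic (nonlinear) step on top of the affine stages, where the two strategies are no longer trivially equivalent — which is where the apply workaround earned its keep.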