Weekly blog #3 (week 6): 01/07 to 07/07

tomasb
Published: 07/07/2019

Hello there. It’s time for a weekly blog. One thing to note right away is that half of GSoC is already gone (week 6 is over)! I really do hope to put in some solid work before everything finishes.


Let me start by saying what I did this week. Firstly, the PR once again got closer to being finished. To mention a few of the interesting changes: with the help of my mentor I added a check, though not a perfect one, for whether a network outputs ‘scores’ (unconstrained values) or ‘probabilities’ (values between 0 and 1). I also had to make one argument optional on a common object in the library, without breaking too many things and without needing too many fixes (mypy helped massively). An interesting addition to the Grad-CAM code was a check for whether differentiation failed (i.e. whether any of the gradients came back as None), along with an automated test for that.
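For the curious, here is a rough sketch of what those two checks could look like for a Keras model. This is not the exact code from the PR; the helper names and the “probabilities” heuristic are illustrative.

```python
from keras.activations import sigmoid, softmax


def outputs_probabilities(model):
    """Guess whether `model` outputs probabilities rather than raw scores.

    Heuristic only (and not perfect): assume probabilities if the final
    layer's activation is softmax or sigmoid, unconstrained scores otherwise.
    """
    activation = getattr(model.layers[-1], 'activation', None)
    return activation in (softmax, sigmoid)


def validate_grads(grads):
    """Raise if differentiation failed.

    Keras/TensorFlow can return None entries instead of raising when a
    gradient cannot be computed, so this has to be checked explicitly.
    """
    if any(g is None for g in grads):
        raise ValueError('Gradient calculation failed: got a None gradient. '
                         'The graph may contain non-differentiable operations.')
    return grads
```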


Outside the PR, the mentors made a new ELI5 release, and I merged those changes into my fork.


Regarding text, I managed to add some code to ELI5 and get a working end-to-end text explanation example (trust me, you don’t want to see the code under the hood - a bunch of hardcoding and if statements!). Still, improvements were made.
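To give an idea of what “end-to-end” means here, the goal is roughly the snippet below. The text-specific details of the call are still in flux, so treat this as a sketch of the intended usage rather than the finished API.

```python
import eli5

# `model` is a trained Keras text classifier and `doc` is a single
# tokenised, padded input of shape (1, maxlen).  The aim is for the usual
# ELI5 entry points to work for text; extra text-specific arguments
# (e.g. passing the document's tokens for display) are still being decided.
explanation = eli5.explain_prediction(model, doc)
eli5.show_prediction(model, doc)  # rich HTML view when run in a notebook
```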


Throughout the week I came across a few issues.


For example, at the start of Tuesday, just as I was about to do some productive work, I realised that either I or Jupyter Notebook had not saved some of the work I had done on Monday. A big shout-out to the IPython %history magic, which let me recover the commands I had run. I’ll be hitting CTRL+S all the time from now on!
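If you ever end up in the same situation: IPython keeps an input history database across sessions, so something along these lines can dump a previous session back into a file (the session range and search pattern below are just examples).

```python
# Inside IPython / Jupyter: write the whole previous session's input to a file.
%history ~1/ -f recovered_monday.py

# Or grep across all stored sessions for lines you remember typing.
%history -g pad_sequences
```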


I felt that a big issue this week was a lack of productivity, especially around the ELI5 text code. Sometimes I found myself making random changes to the code without much planning. I was overwhelmed by how much there is to refactor and implement, and I still have to deal with the problem of reusing existing code from the image explanations. A couple of things helped:

  1. Again, make small changes and test. Don’t make changes “into the future” that you can’t test right now.

  2. Create a simple list of TODO items in a text editor, like I did for the image code before. This is great for seeing the big picture, taking tasks one by one instead of everything at once, and marking tasks as done.

Writing library code - code that interacts with existing code and that may expose an API to users - is much harder than writing quick experimental snippets in a Jupyter Notebook. Organisation helps.


On Sunday I got down to adding virtualenvwrapper to my workflow. Goodbye “source big-relative-or-absolute-path-to-my-virtualenv/bin/activate” and hello “workon eli5gradcam”! I also tried to train a ConvNet on a multi-class classification problem (the Reuters newswire topics dataset), but the model kept getting stuck at an appalling 30% accuracy. Perhaps the hyperparameters need a minor tweak, or maybe ConvNets are just not that good for multi-class text classification tasks?
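For reference, the kind of setup I was experimenting with looks roughly like this. The architecture and hyperparameters below are illustrative guesses, not the exact ones that got stuck at 30%.

```python
from keras.datasets import reuters
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

# Load the Reuters newswire topics dataset (46 classes), keeping the 10,000
# most frequent words, and pad every document to a fixed length.
max_words, max_len = 10000, 500
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

# A small 1D ConvNet over word embeddings.
model = Sequential([
    Embedding(max_words, 128, input_length=max_len),
    Conv1D(64, 5, activation='relu'),
    GlobalMaxPooling1D(),
    Dense(46, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```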


I believe that next week will be much the same as week 6, topic-wise: small PR changes, library code for text, and experimentation to make the code cover more possible models (RNNs will come - someday!).


Thank you for reading once again, and see you next time!

Tomas Baltrunas