Hi all. Hope we all passed evaluation 2 and our work continues!
What did you do this week?
The text PR saw a few updates: support for more ways to specify certain arguments (accepting more types), a change to the underlying method used for 1D array resizing (scipy.signal replaced with scipy.interpolate), and unit tests for the text helpers.
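The PR itself isn't shown here, but the idea behind the resizing change can be sketched with a minimal, hypothetical helper (the name `resize_1d` is my own, not from the PR): stretch or shrink a 1D array by linearly interpolating it onto a new grid with scipy.interpolate.

```python
import numpy as np
from scipy.interpolate import interp1d

def resize_1d(arr, new_len):
    """Resize a 1D array to new_len points via linear interpolation."""
    old_x = np.linspace(0.0, 1.0, num=len(arr))
    new_x = np.linspace(0.0, 1.0, num=new_len)
    return interp1d(old_x, arr)(new_x)

resized = resize_1d(np.array([0.0, 1.0, 2.0, 3.0]), 7)
# length is now 7; the endpoints of the original array are preserved
```

Unlike scipy.signal.resample, which resamples in the Fourier domain and can ring at the edges, plain interpolation keeps the endpoints exact, which matters when the array represents per-token weights.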
I moved the code that could be reused across different neural network packages (Keras, PyTorch, TensorFlow, etc.) into a new ‘nn’ package.
A major feature was adding PyTorch support in a new branch off the text PR. Images already work (torchvision and the reusable numpy code helped a lot), but text support is still to be added.
What is coming up next?
Work will continue as usual: the text PR needs fixes, docs, and tests. In particular, I will need to train better models for the integration tests.
I should open a PyTorch WIP PR once text support is added; that includes obtaining a text model I can test manually. A stretch goal is to also get started on docs and the other things that come with PRs.
Since one of my mentors is back from holidays, I will incorporate any feedback, and we can also plan out the last 3 weeks of GSoC. Besides the current tasks, there is some room in the schedule for other things.
Did you get stuck anywhere?
As mentioned, I need to train text models designed specifically for integration tests: small, covering many cases (kinds of layers used, character/word-level tokenization, number of output classes, etc.), and producing sensible explanations. I tried to add an integration test, but the model I used gave dubious explanations for simple sentences, so swapping in a better model is one fix I can try.
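One way to keep those test models small while still covering the cases above is to enumerate the combinations explicitly and train one tiny model per combination. A sketch (the specific layer kinds, tokenizations, and class counts here are hypothetical examples, not the project's actual test matrix):

```python
from itertools import product

# Hypothetical coverage matrix: each combination would correspond to a
# small trained model used by one integration test.
layer_kinds = ["conv1d", "lstm", "gru"]
tokenizations = ["char", "word"]
n_classes = [2, 4]

cases = list(product(layer_kinds, tokenizations, n_classes))
# 3 * 2 * 2 = 12 small models to train and check explanations against
```

Making the matrix explicit also makes it obvious which combination a failing test belongs to.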
I was keen on opening a PyTorch PR this week, but text support was a blocker: specifically, I didn’t have an off-the-shelf pretrained text model to test with. The simplest solution I tried was saving a text classifier from a public Kaggle notebook, but I got an error when saving, so I will need to retry.
I shall see you next week!