Hello, I am Suhaas Neel. I am currently pursuing Electronics and Communications Engineering at Jawaharlal Nehru University, New Delhi. The project I am a part of aims to automate the time-intensive process of manually searching for augmentation strategies. Research in automating this search has even led to improved results on various benchmark datasets, so this summer I will focus on building the infrastructure to implement these algorithms natively with Hub.
What did I do this week?
Including the community bonding period, I started by looking through, and comparing, the different algorithms that can discover better augmentation strategies at a lower computational cost. The pioneering work in this area uses reinforcement learning, which requires a lot of compute. Newer techniques achieve similar accuracy gains with much less compute; these include algorithms like Faster AutoAugment and DADA, and both repositories are licensed in a way that lets us use them in our project. Another algorithm, Deep AutoAugment, gives better results than these two but at a higher computational cost. Despite the higher cost this method is still worth exploring, because even a small improvement in accuracy can matter more than 100 GPU hours.
Apart from this, I started working on the augmentation API and read the codebases of Albumentations and PyTorch to see how they implement theirs. I also began implementing augmentations for Hub datasets. Since this project is fairly open-ended, given the variety of algorithms we could use to auto-augment datasets, it will help to have a robust augmentation implementation that works easily with the different policies we will need. The higher-level API takes either a Hub dataset or a dataloader and an augmentation pipeline (an array of transformations along with their parameters), and returns a generator of the same size as the input dataset/dataloader, with images augmented according to the specified pipeline. To test this function I also implemented some basic image transformations, such as scaling, rotating, and flipping.
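To make the shape of this API concrete, here is a minimal sketch of what it could look like. The names `augment`, the `(transform, kwargs)` pipeline format, and the transform helpers are my own illustrative choices, not Hub's actual interface:

```python
import numpy as np

def flip_horizontal(image):
    """Flip an image of shape (H, W, C) left to right."""
    return image[:, ::-1, :]

def rotate_90(image, times=1):
    """Rotate an image by 90 degrees, `times` times."""
    return np.rot90(image, k=times)

def augment(samples, pipeline):
    """Yield one augmented image per input sample.

    `samples` is any iterable of images (e.g. a dataset or dataloader);
    `pipeline` is a list of (transform, kwargs) pairs applied in order.
    """
    for image in samples:
        for transform, kwargs in pipeline:
            image = transform(image, **kwargs)
        yield image

# Usage: flip then rotate every image in a toy three-image dataset.
dataset = [np.zeros((4, 4, 3), dtype=np.uint8) for _ in range(3)]
pipeline = [(flip_horizontal, {}), (rotate_90, {"times": 1})]
augmented = list(augment(dataset, pipeline))
```

Returning a generator rather than a materialized list keeps memory use flat even for large datasets, since each image is augmented only when consumed.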
What I plan to do next week
This week my mentor mentioned the need for a more general API that can handle multiple tensors and multiple augmentation pipelines for the same tensor. I also want to build some more transformations, after which I plan to start experimenting with different auto-augmentation algorithms. Looking at the comparisons provided in the papers is reliable enough, but for some algorithms the authors open-sourced only a simplified version of the original, so those would benefit from some experimentation of our own.
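One way the "multiple tensors, multiple pipelines per tensor" requirement could be expressed is with a mapping from tensor names to lists of pipelines. This is purely my own sketch of the idea, not a finalized design:

```python
import numpy as np

def augment_multi(samples, pipelines):
    """Yield augmented samples for multi-tensor data.

    `samples` is an iterable of dicts mapping tensor names to arrays;
    `pipelines` maps a tensor name to a *list* of pipelines, so each
    input sample yields one output per pipeline attached to that tensor.
    """
    for sample in samples:
        for name, tensor_pipelines in pipelines.items():
            for pipeline in tensor_pipelines:
                out = dict(sample)  # untouched tensors pass through
                value = sample[name]
                for transform, kwargs in pipeline:
                    value = transform(value, **kwargs)
                out[name] = value
                yield out

# Usage: two pipelines on the "image" tensor -> two outputs per sample.
flip = lambda img: img[:, ::-1, :]
identity = lambda img: img
data = [{"image": np.ones((2, 2, 3)), "label": 0}]
out = list(augment_multi(data, {"image": [[(flip, {})], [(identity, {})]]}))
```

The dict-of-pipelines signature keeps the single-tensor case a strict special case: passing one pipeline in a one-element list recovers the simpler behavior.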
Where I got stuck
While working out the API, I was unsure whether to start from the current implementation of transformations for Hub, which can only return one output sample per input sample because it uses the PyTorch dataloader API under the hood. In the end I started with a fresh implementation. Apart from this I faced a few bugs that just needed a little time to resolve.
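The one-output-per-input constraint is exactly what a plain generator sidesteps: nothing stops it from yielding several augmented copies of each input. A hypothetical sketch (the `expand` name is mine) of the idea behind the fresh implementation:

```python
import numpy as np

def expand(samples, transforms):
    """Yield one augmented copy per transform for every input sample,
    so N inputs with K transforms produce N * K outputs."""
    for image in samples:
        for transform in transforms:
            yield transform(image)

# Usage: 2 input images with 2 transforms each -> 4 output images.
dataset = [np.ones((2, 2, 3)) for _ in range(2)]
transforms = [lambda im: im, lambda im: im[:, ::-1, :]]
expanded = list(expand(dataset, transforms))
```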