Week-2: Open Source Rocks!

jaladh-singhal
Published: 06/10/2019

Hello folks,

I am pleased to share that today my number of contributions crossed 100! I know it's not a huge achievement, but at least it brings happiness to someone who stepped into the Open Source world just 4 months ago. Thanks to Google Summer of Code 2019 ❤️


What did I do this week?

  1. I resumed my work on an old PR that integrates filter curves & information into our Sphinx docs by auto-generating them each time the docs are built. It is meant to provide a simple static interface to users within the docs themselves. For this I'm using Jinja templating to generate Sphinx docs from the dataset.
    • I devised & implemented a better approach to integrate lots of data: generating the required ~80 RST files at build time, instead of storing them by default in the docs/ directory.
    • Using Jinja, I converted Pandas dataframes (part of our dataset) into RST tables and made sure the filter tables appear in the docs.
  2. Then my mentors told me to put it on hold and start working on setting up Azure pipelines for our repo. They are required for Continuous Integration & Deployment (CI/CD), which takes priority over feature development in any project. Earlier we were using Travis for CI/CD, but due to some of its shortcomings we decided to move to Azure DevOps. I was absolutely new to this, so my mentor told me to get help from another GSoC student, Youssef, who is setting up Azure pipelines for Tardis (another PSF sub-org that my mentor is mentoring).
    • I created my Azure DevOps account and communicated with Youssef extensively about the Azure docs I needed to read to understand this. He helped me a lot by sharing his code & the steps that are not listed in the docs.
    • After figuring out the process I needed to follow and customizing his code for our repository, I successfully set up both pipelines on a fork of our repo. One pipeline is for testing (CI) and the other is for building & deploying the docs to GitHub Pages (CD).
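The Jinja-to-RST idea above can be sketched roughly like this. This is a minimal, hypothetical template (not the actual code from my PR); a Pandas dataframe's rows could be fed in via `df.values.tolist()` and its columns via `list(df.columns)`:

```python
from jinja2 import Template

# Hypothetical template rendering tabular data as an RST csv-table directive.
# A sketch of the approach only - the real PR generates ~80 such RST files.
TEMPLATE = Template("""\
.. csv-table:: {{ title }}
   :header: {{ columns | join(', ') }}

{% for row in rows %}   {{ row | join(', ') }}
{% endfor %}""")

def dataframe_to_rst(title, columns, rows):
    """Render rows (a list of lists) into an RST csv-table directive."""
    return TEMPLATE.render(title=title, columns=columns, rows=rows)

rst = dataframe_to_rst(
    "Filter list",
    ["Filter ID", "Center (nm)"],
    [["Bessell/U", 365], ["Bessell/B", 445]],
)
print(rst)
```

Writing the rendered string to a `.rst` file inside the Sphinx source tree at build time (rather than committing it) keeps the generated tables out of version control.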


What is coming up next?

My mentor has already added me to our Org's Azure account as a member, so I will start setting up these pipelines for our main repositories - starkit & wsynphot. And if time allows, I'll resume my work on the PR that integrates filter curves into the docs.


Did I get stuck anywhere?

While setting up the Azure pipeline for docs deployment, I felt quite stuck because there was so much that was new to me. The process required generating SSH keys and setting them up with Azure by configuring the pipeline's variables, secure files, etc. But by patiently reading the Azure docs & clarifying my doubts with Youssef, I completed it successfully.


What was something new and exciting I learned?

📝 PEP8 (Python Style Guide): While reviewing my PR, my mentor pointed out that I should adopt the PEP8 style in my added code to keep it consistent. I never knew that a standard style guide for Python existed. I read it; it explains almost every formatting & styling choice we need to make in our code, and which option is preferable!
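As a toy before/after illustration of the kind of thing PEP8 covers (my own made-up example, not code from the PR):

```python
# Non-PEP8 (hypothetical): camelCase name, cramped spacing, one-line body
def computeFlux(w,t ): return sum( [wi*ti for wi,ti in zip(w,t)] )

# PEP8-compliant version: snake_case names, spaces around operators,
# 4-space indentation, and a docstring (per PEP 8 and PEP 257)
def compute_flux(wavelengths, transmissions):
    """Return the dot product of wavelength and transmission samples."""
    return sum(w * t for w, t in zip(wavelengths, transmissions))

print(compute_flux([1, 2], [3, 4]))  # → 11
```

Both versions behave identically; PEP8 is purely about readability and consistency across a codebase.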

🔐 SSH RSA keys are used as authentication credentials for single sign-on, without needing a username & password. I came to know about this while setting up the docs deployment pipeline, where we need SSH to push the commits made by the VM to our GitHub repo. These keys always exist in pairs - a private key & a public key. I read about this last semester in my coursebook, but I understood it better once I implemented it.
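The key-pair idea can be illustrated with textbook RSA on tiny primes (these numbers are chosen purely for readability - real SSH keys use moduli of 2048+ bits, plus padding schemes this sketch omits):

```python
# Toy RSA to illustrate the public/private key pair - NOT for real use.
p, q = 61, 53
n = p * q                    # modulus, shared by both keys
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent (coprime with phi)
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
decrypted = pow(ciphertext, d, n)  # only the private key can decrypt
print(decrypted)  # → 42

# Authentication (as in SSH) works the other way around:
signature = pow(message, d, n)          # "sign" with the private key
assert pow(signature, e, n) == message  # verify with the public key
```

This is why the pipeline only needs the public key registered with GitHub, while the private key stays protected as an Azure secure file.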

❤️ Open Source = Fast Completion: This setup of Azure pipelines, which I completed in only 2 days, would not have been possible if I were not working in an Open Source community. Since Youssef had already spent plenty of time figuring things out and writing code for the pipelines, I didn't need to start from scratch - I used his code & quickly customized it for our repo. Whenever I needed to understand the process, I reached out to him & quickly got the relevant sources. A big thanks to him & the community. More power to Open Source!



Thank you for reading. Stay tuned to know about my upcoming experiences!
