Week-3: Travelling through Pipelines

jaladh-singhal
Published: 06/23/2019

Hello folks,

So quite a lot happened this week - I got some stability in my schedule and succeeded in setting up 3 different kinds of pipelines for our package.😊 The last 2 weeks were quite hectic for me, which is why I am writing this blog post (for Week-3) much later. Let me share why ...

 

What did I do this week?

  1. After setting up the CI & docs CD Azure pipelines last week, I began setting them up for both of our main repos (starkit & wsynphot).
    • Since in this case the Azure project & repo belonged to my mentor, I didn't have many of the access rights required for the setup. We discussed the process & the access rights I needed, and my mentors gave me the permissions.
    • Still, there were a couple more authorization problems at the Azure project, and it took time to figure them out. Finally, I successfully set up both of the pipelines (CI and docs CD) for both of our repos. Now we have officially switched from Travis & doctr to Azure completely.
  2. The next task was to create another, schedule-triggered pipeline for executing the notebook (which ingests the filter data fetched from SVO into an HDF file) and deploying it on a server where we can see whether any error happened while generating the HDF (filter data). This way we want to automate the generation of filter data from time to time using a pipeline, since the filter data fetched from SVO keeps changing!
    • I figured out the basic workflow of how I can implement such a pipeline. I also set up the SSH connection to the server my mentor provided me for deploying the files.
    • Since there was a lot to do here, my mentor suggested that I first create a very simple pipeline for a test notebook (instead of the actual one) which is capable of deploying stuff on the server (see the sketch after this list).
  3. There were several other tasks I did this week:
    • I helped Youssef, who had helped me set up the CD pipeline last week, in documenting the process of setting up an Azure pipeline.
    • I improved my PR integrating the filter list into the docs, as per the review given by my mentor. I also discussed extensively with him to understand what the problem is in my approach of generating RST files (since he suggested finding a more elegant way to do it).
    • I also reported the errors in the SVO FPS web interface by mailing the SVO team; they were quick to fix them.
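To give an idea, here is a minimal sketch of what that simple pipeline job could run - the notebook name, server address and remote path are placeholders, and I'm assuming key-based SSH auth and that jupyter is available on the pipeline VM:

```python
import subprocess

NOTEBOOK = "test_notebook.ipynb"            # placeholder test notebook
EXECUTED = "test_notebook_executed.ipynb"   # executed copy to deploy
SERVER = "user@example-server"              # placeholder for the server my mentor provided
REMOTE_DIR = "~/public_html/"               # Apache serves whatever lands here

# Execute the notebook; nbconvert exits with a non-zero code if a cell errors,
# so check=True makes the pipeline job fail right there.
subprocess.run(
    ["jupyter", "nbconvert", "--to", "notebook", "--execute",
     NOTEBOOK, "--output", EXECUTED],
    check=True,
)

# Deploy the executed notebook to the server over SSH.
subprocess.run(["scp", EXECUTED, f"{SERVER}:{REMOTE_DIR}"], check=True)
```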

 

What is coming up next?

Now that I've set up this auto-ingestion pipeline for the test notebook, the next plan is to evolve it from a prototype into everything we want it to do (like stopping on the 1st error in the notebook, conditionally deploying the HDF, etc.), ultimately making it work for our actual ingestion notebook.
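For instance, the conditional-deploy part could look something like this sketch (function and file names are hypothetical; the idea is to upload the HDF only when the notebook ran through without errors):

```python
import subprocess

def run_and_deploy(notebook, hdf_file, server, remote_dir):
    """Execute the ingestion notebook; deploy the generated HDF only on success."""
    result = subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute", notebook],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Execution stopped at the 1st failing cell - don't deploy a broken HDF.
        print("Notebook execution failed:\n", result.stderr)
        return False
    subprocess.run(["scp", hdf_file, f"{server}:{remote_dir}"], check=True)
    return True
```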

 

Did I get stuck anywhere?

As I shared above, there were various authorization problems while setting up the pipeline on a repo in which I don't have admin rights - so I did feel terribly stuck, as I could not proceed without my mentors' aid. But after some patience & discussions (when they got time), we fixed it.

 

What was something new and exciting I learned?

🤖 Automation using Pipelines: Earlier I was only doing CI & docs CD with Azure pipelines, but when I started working on the auto-ingest pipeline, I found out that we can automate any process using a pipeline - that's their real power! Whatever we do locally, we tell a Virtual Machine (VM) to do the same by means of scripts, and we set the desired triggers for initiating it. This saves not only our time but also our resources, as the entire work is carried out on a VM without any effort on our part.

🖥️ The server my mentor gave me access to is actually a Linux machine which can serve as a web host using Apache. I learned how to communicate with a remote machine over SSH by setting up an SSH key pair. I also got to know the basics of Apache web hosting - we just put files in a public_html folder on the machine and they automatically become available on the web under the root URL.
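Just to illustrate (assuming the paramiko library, with placeholder host, user and file names), talking to the server and putting a file under public_html from Python looks roughly like this:

```python
import paramiko

HOST = "example-server.org"       # placeholder for the server my mentor provided
USER = "deploy"                   # placeholder username
KEY = "/home/me/.ssh/id_rsa"      # private key of the SSH key pair I set up

# Authenticate with the SSH key pair instead of a password.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEY)

# Anything placed under public_html gets served by Apache at the root URL.
sftp = client.open_sftp()
sftp.put("filter_data.h5", "public_html/filter_data.h5")  # placeholder file name
sftp.close()
client.close()
```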

🗒️ Importance of creating a Prototype 1st: After planning, when we start to develop a complex project, we should not forget to break it down & identify its core functionality - because the other tasks won't yield any good result if that main task doesn't work. This is what my mentors told me to clear up the confusion I had amid the overwhelming number of choices in development. Also, a prototype means we use a smaller test input instead of the big actual input, so that the output obtained is smaller and debugging problems becomes a lot easier!

 


Thank you for reading. Stay tuned to know about my upcoming experiences!
