Week 12 Review

TedLawson
Published: 08/21/2023

Accomplishments:

This week I had 6 PRs merged into master. These PRs covered the majority of the unit test folder revisions and included:

  • Reducing code duplication by implementing pytest fixtures for common test setup/teardown
  • Reducing code duplication and improving readability through test parametrization (a short sketch of both patterns follows this list)
  • Increasing total codebase test coverage from 72% to 78% (so far, more to come!) by adding additional unit tests
  • Writing additional docstrings to improve readability and help future contributors understand each test
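
As an aside, here is a minimal sketch of the two patterns mentioned above. It is not taken from the real test suite; the fixture, the helper function, and the test cases are made up purely for illustration.

```python
import pytest


def is_valid_archive_name(name: str) -> bool:
    """Toy stand-in for a function under test (not from the real codebase)."""
    return bool(name.strip())


@pytest.fixture
def archive_dir(tmp_path):
    """Shared setup: each test gets its own scratch 'archive' directory.
    pytest removes tmp_path automatically, so no explicit teardown is needed."""
    d = tmp_path / "archive"
    d.mkdir()
    return d


def test_archive_dir_starts_empty(archive_dir):
    # The fixture replaces duplicated mkdir/cleanup code in every test.
    assert list(archive_dir.iterdir()) == []


@pytest.mark.parametrize(
    "name, expected",
    [
        ("backup-2023-08-21", True),
        ("", False),
        ("   ", False),
    ],
)
def test_is_valid_archive_name(name, expected):
    # One parametrized test replaces three near-identical test functions.
    assert is_valid_archive_name(name) == expected
```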

Challenges:

In addition to getting these PRs merged, I also began working on a side project brought to my attention by my mentor Manu. He mentioned that our automated GitHub workflow is currently set up to run the integration and unit tests on too large a matrix of [Python versions x operating systems x Borg versions], resulting in 39 jobs being created for every push and pull request. He tasked me with trimming this down by creating different matrices based on the type of event: shorter runs for a push or pull request, and full runs for a manual workflow dispatch or a push to master. I had never worked with YAML files before, so this was something new for me, but I stepped up to the task and learned a lot in the process.

The main challenge I faced was defining the matrix dynamically rather than statically. In its current form, the matrix is hard-coded in the YAML file and is the same for every trigger (push, pull request, workflow dispatch). My task was to instead build the matrix dynamically based on the type of event. The best approach I found was to write a script that generates the matrix as JSON and writes it to the `GITHUB_OUTPUT` environment file, where later jobs in the workflow can reference it (a sketch of this idea follows the list below). I had two main struggles along the way:

  1. Deprecation warnings: Originally, I was setting the matrices using `::set-output`, but GitHub warned me that this command is deprecated and that I needed to change my approach. After reading the GitHub documentation on the issue, I discovered that the newer `GITHUB_OUTPUT` environment file is now the recommended way to set outputs, so I switched to it.
  2. Handling the exclude: In the integration test matrix, we exclude Borg version 2.0.0b5 with Python 3.8 because that combination isn't supported. This was challenging at first because I wasn't sure how to express the exclusion when building the matrices dynamically rather than statically. I tried several approaches, including adding a step to the `test_integration` job that checked whether the combination was invalid and, if so, skipped the remaining steps. That approach was somewhat clunky, though, and one of the project maintainers recommended defining the exclude within the script that creates the matrices. In hindsight this seems like the obvious approach, but I think I was hung up on trying to handle the exclude within the `test_integration` job, which is where the original exclude was defined. With the exclusion added to the matrix inside the script, everything worked as intended and I marked the PR as 'ready for review'.
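
To make the approach concrete, here is a minimal sketch of the kind of matrix-generating script described above. It is not the actual code from my PR: the version lists, the reduced-matrix choices, and the output name `matrix` are assumptions for illustration. The script reads the triggering event from the `GITHUB_EVENT_NAME` and `GITHUB_REF` environment variables, drops the unsupported Borg 2.0.0b5 / Python 3.8 combination, and appends the resulting matrix as JSON to the `GITHUB_OUTPUT` file.

```python
import json
import os
from itertools import product

# Assumed version lists, for illustration only.
PYTHON_VERSIONS = ["3.8", "3.9", "3.10", "3.11"]
OSES = ["ubuntu-latest", "macos-latest"]
BORG_VERSIONS = ["1.2.4", "2.0.0b5"]


def build_matrix(event_name: str, ref: str) -> dict:
    """Full matrix for manual dispatches and pushes to master,
    reduced matrix for ordinary pushes and pull requests."""
    full_run = event_name == "workflow_dispatch" or (
        event_name == "push" and ref == "refs/heads/master"
    )
    if full_run:
        pythons, oses, borgs = PYTHON_VERSIONS, OSES, BORG_VERSIONS
    else:
        # Reduced matrix: latest Python only, one OS, every Borg version.
        pythons, oses, borgs = PYTHON_VERSIONS[-1:], OSES[:1], BORG_VERSIONS

    include = [
        {"python-version": py, "os": runner, "borg-version": borg}
        for py, runner, borg in product(pythons, oses, borgs)
        # Define the exclude here instead of in the test_integration job.
        if not (py == "3.8" and borg == "2.0.0b5")
    ]
    return {"include": include}


if __name__ == "__main__":
    matrix = build_matrix(
        os.environ.get("GITHUB_EVENT_NAME", "push"),
        os.environ.get("GITHUB_REF", ""),
    )
    # Append a step output via the GITHUB_OUTPUT environment file
    # (the replacement for the deprecated ::set-output command).
    with open(os.environ["GITHUB_OUTPUT"], "a") as fh:
        fh.write(f"matrix={json.dumps(matrix)}\n")
```

In the workflow YAML, a small setup job would typically run a script like this, expose `matrix` as a job output, and the test jobs would then consume it with `fromJSON(...)` in their `strategy.matrix`.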


The Week Ahead:

With these PRs merged, the unit test folder is almost complete, with some work left in the Misc tests and treemodel. After that I will move on to the integration tests and finish up the Testsuite Improvements project.
