This blog post covers my work in the fourth week of my GSoC experience. I spent the week fixing 7 skipped or xfailing tests and adding a coverage report to the GitHub workflow. The number of passing tests has now reached 209, and the code coverage produced by the tests can be viewed in the coverage report.
What did I do?
As mentioned above, I fixed some tests that were marked as xfailing or skipped and added support for code coverage. This work can be found in PRs #67 and #68.

Fixed Tests
The 7 tests that I fixed mainly had two issues. The first was related to how the str object is handled differently in Python 2 and Python 3. The second was that some tests used datetime objects to verify values or certificates. This was fine as long as the current date was within the expiration range of the ticket or value, but once those tickets and values expired, the tests would fail.

To solve the first issue I used the following imports, which fixed it on both Python versions:
from past.builtins import basestring
from builtins import str
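As an illustration only (the helper below is hypothetical and not the project's actual code), these imports are typically used so that string-type checks behave the same on both interpreters:

from past.builtins import basestring  # str + unicode on Python 2, str on Python 3
from builtins import str              # Python 3 style str, even on Python 2

def is_text(value):
    # Hypothetical helper: True for any textual value on either interpreter.
    return isinstance(value, basestring)

assert is_text(str("hello"))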
To solve the second issue I used the pytest-freezegun plugin, which lets you freeze time for particular marked tests or an entire test script. With the clock frozen, the same request is made to the servers every time, so the expiration issues mentioned above no longer arise.
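A minimal sketch of how such a test can be marked, assuming the pytest-freezegun plugin is installed (the date and test body here are illustrative, not taken from the project):

import datetime
import pytest

@pytest.mark.freeze_time("2021-06-30")
def test_request_date_is_stable():
    # Every run sees the same "current" date, so the request built from it
    # (and the recorded cassette it is matched against) never changes.
    assert datetime.datetime.now().date() == datetime.date(2021, 6, 30)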
Added coverage report
The second task that I completed this week was to add a coverage report to the workflow. It shows the detailed coverage per script as well as the overall percentage. This matters because my work in the coming weeks is to raise the code coverage to 85%. The report also makes it easy to see which code is and is not covered, so the tests can be designed more effectively.
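For reference, this is one common way to generate such a report with pytest-cov; the exact command and flags used in the workflow may differ:

$ pytest --cov=. --cov-report=term-missing

The term-missing report lists, for each script, the line numbers that no test exercises, alongside the overall percentage.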
Did I get stuck anywhere?
Yes, I definitely got stuck in the early part of the week while figuring out why the tests were failing. The issue was tricky to find because it was flaky: it was not a constant error that occurred every time I ran the tests. The fact that the tests had to make the same request every time for the assertions to hold was not obvious at first, and it took a lot of debugging and constant help from my mentors to find and solve the issue completely.
Another issue that cost me time and effort was re-recording the cassettes for the xfailing tests. As mentioned above, they would work perfectly once and then, some time later, cause the tests to fail again.
Results
Fixed Tests
Now, with the above fixes, the number of passing tests rose from 202 to 209.

For Python 2:
$ pytest
============================= test session starts =============================
platform win32 -- Python 2.7.12, pytest-4.6.11, py-1.10.0, pluggy-0.13.1
rootdir: C:\Users\shiva\OneDrive\Desktop\Pyafipaws_Utkarsh\pyafipws, inifile: pytest.ini
plugins: html-1.22.1, metadata-1.11.0, vcr-1.0.2
collected 236 items

tests\test_ws_sr_padron.py s.sx                                        [  1%]
tests\test_wsaa.py .........                                           [  5%]
tests\test_wsbfev1.py ..........                                       [  9%]
tests\test_wscdc.py s.......                                           [ 13%]
tests\test_wsct.py ..........xx..xxxxxxxxxxssxxxxxx                    [ 26%]
tests\test_wsfev1.py ......                                            [ 29%]
tests\test_wsfev1_dummy.py .                                           [ 29%]
tests\test_wsfexv1.py ..........                                       [ 33%]
tests\test_wslsp.py ......................................x.           [ 50%]
tests\test_wsltv.py ............................x.                     [ 63%]
tests\test_wslum.py ........................x.                         [ 74%]
tests\test_wsmtx.py .................................                  [ 88%]
tests\test_wsremcarne.py ...........................                   [100%]

======= 209 passed, 5 skipped, 22 xfailed, 1 warnings in 21.75 seconds ========

For Python 3:
$ pytest
========================= test session starts =========================
platform win32 -- Python 3.9.2, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: C:\Users\shiva\OneDrive\Desktop\Pyafipaws_Utkarsh\pyafipws, configfile: pytest.ini
plugins: html-3.1.1, metadata-1.11.0, vcr-1.0.2
collected 236 items

tests\test_ws_sr_padron.py s.sx                                        [  1%]
tests\test_wsaa.py .........                                           [  5%]
tests\test_wsbfev1.py ..........                                       [  9%]
tests\test_wscdc.py s.......                                           [ 13%]
tests\test_wsct.py ..........xx..xxxxxxxxxxssxxxxxx                    [ 26%]
tests\test_wsfev1.py ......                                            [ 29%]
tests\test_wsfev1_dummy.py .                                           [ 29%]
tests\test_wsfexv1.py ..........                                       [ 33%]
tests\test_wslsp.py ......................................x.           [ 50%]
tests\test_wsltv.py ............................x.                     [ 63%]
tests\test_wslum.py ........................x.                         [ 74%]
tests\test_wsmtx.py .................................                  [ 88%]
tests\test_wsremcarne.py ...........................                   [100%]

========= 209 passed, 5 skipped, 22 xfailed, 3 warnings in 13.89s =========