Niraj-Kamdar's Blog

GSoC: Week 6: class InputEngine

Niraj-Kamdar
Published: 07/06/2020

What did I do this week?

I started working on the input engine this week. Until now we only had csv2cve, which accepts a CSV file of vendor, product and version as input and produces a list of CVEs as output, and it lives in a separate module with its own command line entry point. I have created a module called input_engine that can process data from any supported input format (currently CSV and JSON). Users can now add a remarks field to the CSV or JSON file, which can take any of the following values (the values in parentheses are aliases for that type):

  1. NewFound (1, n, N)
  2. Unexplored (2, u, U)
  3. Mitigated (3, m, M)
  4. Confirmed (4, c, C)
  5. Ignored (5, i, I)
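
For example, a triage file with remarks might look like the following (the column layout here is illustrative, based on the vendor, product and version columns csv2cve already uses plus the new remarks column):

vendor,product,version,remarks
haxx,curl,7.59.0,Mitigated
gnu,binutils,2.31.1,NewFound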

I have added an --input-file (-i) option to cli.py to specify the input file. input_engine parses it and creates an intermediate data structure that output_engine uses to display data according to the remarks, in the same priority order as the remarks listed above. I have also created a dummy csv2cve which just calls cli.py with the -i option set to the file passed to csv2cve. Here is an example of using -i alone to produce CVEs: cve-bin-tool -i=test.csv. Users can also use -i to supplement remarks data while scanning a directory, so that the output is sorted according to the remarks. Here is an example usage for that: cve-bin-tool -i=test.csv /path/to/scan.
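
To give an idea of what the parsing boils down to, here is a minimal sketch, assuming a CSV with vendor, product, version and remarks columns; the function name, alias table and default remark are illustrative, not the actual input_engine code:

import csv

# Illustrative alias table: maps every accepted spelling to its canonical remark.
REMARK_ALIASES = {
    "newfound": "NewFound", "1": "NewFound", "n": "NewFound",
    "unexplored": "Unexplored", "2": "Unexplored", "u": "Unexplored",
    "mitigated": "Mitigated", "3": "Mitigated", "m": "Mitigated",
    "confirmed": "Confirmed", "4": "Confirmed", "c": "Confirmed",
    "ignored": "Ignored", "5": "Ignored", "i": "Ignored",
}

def parse_triage_csv(path):
    """Map (vendor, product, version) -> normalized remark."""
    triage = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            raw = (row.get("remarks") or "").strip().lower()
            # Unknown or missing remarks fall back to NewFound (an assumption).
            remark = REMARK_ALIASES.get(raw, "NewFound")
            triage[(row["vendor"], row["product"], row["version"])] = remark
    return triage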

I have also added test cases for input_engine and removed the old csv2cve test cases.

What am I doing this week? 

I have exams this week, from today until 9th July, so I won't be able to do much during the week. I will spend the weekend improving input_engine, for example by giving users more fine-grained control over remarks and custom severity.

Have I got stuck anywhere?

No, I didn't get stuck anywhere this week :)

View Blog Post

GSoC: Week 5: improve CVEDB

Niraj-Kamdar
Published: 06/29/2020

What did I do this week?

I finished my work on improving cvedb this week. I am now using aiohttp to download the NVD dataset instead of making requests through a multiprocessing pool. This has improved our download speed, since every download now runs concurrently in the same thread instead of four tasks at a time in a process pool. I also measured the performance of aiosqlite, but it was significantly slower when writing to the database, so I decided to keep the writing process synchronous. I have also added a progress bar with the help of the rich module, so users now get feedback on the progress of downloading and updating the database. Here is a demo of how it looks now:

[demo: animated progress bar rendered with rich]

Downloading and updating the database used to take 2 minutes with multiprocessing; now it takes only 1 minute, so we have roughly doubled the speed just by converting IO-bound tasks into asynchronous coroutines. I have also fixed an event loop bug that we were sometimes hitting due to parallel execution of pytest, fixed a small bug in the extractor in PR #767, and created some utility functions to reduce code repetition.
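
For reference, the concurrent download pattern described above looks roughly like this (a sketch with hypothetical names, not the actual cvedb code):

import asyncio
import aiohttp

async def download_feeds(urls):
    """Download all NVD JSON feeds concurrently in a single thread."""
    async with aiohttp.ClientSession() as session:

        async def fetch(url):
            async with session.get(url) as response:
                return await response.read()

        # gather() schedules every download at once instead of 4 at a time.
        return await asyncio.gather(*(fetch(url) for url in urls))

# feeds = asyncio.run(download_feeds(feed_urls))   # feed_urls is hypothetical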

What am I doing this week? 

I have started working on the input engine and I am hoping to provide basic triage support by the end of this week.

Have I got stuck anywhere?

No, I didn't get stuck anywhere this week :)

View Blog Post

GSoC: Week 4: Status - 300 Multiple Choice

Niraj-Kamdar
Published: 06/22/2020

Hello everyone,

What did I do this week?

I have fixed several bugs in my PR: Asynchronous File module. I have also started working on making cvedb run asynchronously. Currently, we have a cache_update function which downloads the JSON dataset from the NVD site and stores it in the user's local cache directory. The cvedb module contains a CVEDB class with a method named nist_scrape that scrapes the NVD site to find the appropriate links for cache_update. It also has the following methods:

  • init_database - Creates tables if the database is empty.
  • get_cvelist_if_stale - Updates if the local database is more than one day old, avoiding a full slow update on every execution.
  • populate_db - Populates the database from the JSON dataset.
  • refresh - Refreshes the CVE database and updates it if it is stale.
  • clear_cached_data - Removes all data from the cache directory.

It also has other methods, some of which aren't related to updating the database and some of which are just helpers. We are currently using multiprocessing to download data, which isn't necessary: downloading is an IO-bound task, and asyncio is a good fit for IO-bound work. I am not yet sure exactly how I am going to implement it, since there are several ways to achieve the same result, so I am experimenting with and benchmarking each approach. I think storing the JSON dataset is unnecessary, since we have already populated the sqlite database from it. After populating the database, we only use the cached files to check whether the dataset is stale, and for that we compute the SHA sum of the cached dataset and compare it to the latest SHA sum listed in the metadata on the NVD site. We could save a significant amount of space by storing just the SHA sum of each dataset and comparing that instead. I am also thinking about splitting the CVEDB class into three classes: 1) NVDDownloader, which will handle downloading and pre-processing of data, 2) CVEDB, which will create the database if there isn't one and populate it with the data it gets from NVDDownloader, and 3) CVEScanner, which scans the database and finds CVEs for a given vendor, product and version.
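
As a rough sketch of the metadata-only idea (the helper names are mine, and the actual .meta handling may differ), we would only need to keep one hash per feed:

import hashlib

def sha256_hex(feed_bytes):
    """SHA-256 of a downloaded feed; this hash could be stored instead of the JSON itself."""
    return hashlib.sha256(feed_bytes).hexdigest().upper()

def feed_is_stale(stored_sha, latest_meta_sha):
    """Compare our stored hash with the SHA-256 value advertised in NVD's metadata."""
    return stored_sha.upper() != latest_meta_sha.upper()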

In the new architecture I am considering, we don't store the dataset on disk. So how does the populate_db function get the data it needs to populate the sqlite database? We can use a very popular technique we learnt in OS classes: the producer-consumer pattern. In our case, NVDDownloader acts as the producer, CVEDB acts as the consumer, and a Queue acts as the pipeline connecting them. There are several benefits to this architecture: 1) we only need to wait if the queue is either full or empty (a queue without a size limit isn't practical, because in our case the producer is too fast), and 2) we get a performance improvement since we use RAM as intermediate storage instead of disk.
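
A minimal sketch of that pipeline, assuming hypothetical download_and_parse and write_to_db helpers rather than the real cvedb code, could look like this:

import asyncio

async def producer(queue, feeds):
    """NVDDownloader role: download, pre-process and push entries into the queue."""
    for feed in feeds:
        entries = await download_and_parse(feed)   # hypothetical coroutine
        await queue.put(entries)                   # waits only when the queue is full
    await queue.put(None)                          # sentinel: no more data

async def consumer(queue):
    """CVEDB role: pop batches from the queue and write them to sqlite."""
    while True:
        entries = await queue.get()
        if entries is None:
            break
        write_to_db(entries)                       # hypothetical synchronous helper

async def pipeline(feeds):
    queue = asyncio.Queue(maxsize=8)               # bounded, since the producer is faster
    await asyncio.gather(producer(queue, feeds), consumer(queue))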

First I wrote the code for NVDDownloader and benchmarked it: with asyncio queues and a dummy consumer it took 33 seconds to complete the whole task. I then tried to improve its performance by using a ProcessPoolExecutor with 4 workers to pre-process the data, which brought it down to 22 seconds. But all this work to optimize the producer is in vain, because our code is only as fast as its slowest part, and in my case that's the consumer. Database transactions are very slow, and no matter how fast my producer is, I need to improve the performance of writing to the database. sqlite can handle around 50,000 insertions per second, but only a few transactions per second, and we are currently committing after every execute statement. We can instead batch thousands of insert statements into a single transaction and commit it once. We can also improve write performance by sacrificing some durability of the database. I think that won't be a problem for us, since we only need the database for CVE lookups: we can do an integrity check when the application starts and refresh the database if it is corrupted (which will be rare). I have to do several experiments before I finalize the best solution.
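
The batching idea is simple; a sketch (with an illustrative table schema, not the real cvedb one) looks like this:

import sqlite3

def populate(connection, rows):
    """Insert many rows inside a single transaction instead of committing per statement."""
    connection.executemany(
        "INSERT OR REPLACE INTO cves (cve_number, vendor, product, version) VALUES (?, ?, ?, ?)",
        rows,
    )
    connection.commit()   # one commit for thousands of inserts

# Durability can also be traded for write speed (acceptable for a rebuildable cache):
# connection.execute("PRAGMA synchronous = OFF")
# connection.execute("PRAGMA journal_mode = MEMORY")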

What am I doing this week? 

I am going to discuss with my mentors how I should implement the above: should I keep the whole dataset file or just the metadata? I will run several experiments, benchmark them, and choose the best solution. I am also thinking about improving the UX by displaying a progress bar.

Have I got stuck anywhere?

Yes, I need to confirm with my mentors whether we want to keep the cached JSON files or not. I have informed them about this on Gitter and I am also going to discuss it in this week's meeting.

View Blog Post

GSoC: Week 3: Awaiting the Future

Niraj-Kamdar
Published: 06/15/2020

Hello everyone,

What did I do this week?

I have started working on optimizing the concurrency of CVE Binary Tool. I am going to use asyncio for IO-bound tasks and a process pool for long CPU-bound tasks. I have converted the IO-bound synchronous functions of the extractor (PR#741), strings (PR#746) and file (PR#750) modules into asynchronous coroutines. I have also created an async_utils module which provides the asynchronous utility functions and classes every module needs. Since asyncio's event loop doesn't support file IO directly, I searched for an external library that might provide the functionality I need and found aiofiles, but it lacks many features such as an asynchronous tempfile and shutil, and it has issues and PRs that have been open for more than a year. So I decided to write one myself. After 2-3 days of research and coding I created an asynchronous FileIO class with all the methods a synchronous file object provides, and implemented tempfile's TemporaryFile, NamedTemporaryFile and SpooledTemporaryFile classes on top of it. I have also created an asynchronous run_command coroutine that runs a command in a non-blocking manner, since we use subprocess in many places. Finally, I converted the synchronous unittests to asynchronous ones using pytest's pytest-asyncio plugin.
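
The general pattern behind such a wrapper is to push blocking calls onto a thread pool and to use asyncio's subprocess support for external commands. The sketch below uses my own illustrative names, not the actual async_utils code:

import asyncio
from functools import partial

async def run_blocking(func, *args, **kwargs):
    """Run a blocking call in the default thread pool so it doesn't block the event loop."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, partial(func, *args, **kwargs))

class AsyncFile:
    """Sketch of an awaitable wrapper around a regular (blocking) file object."""

    def __init__(self, file):
        self._file = file

    async def read(self, size=-1):
        return await run_blocking(self._file.read, size)

    async def write(self, data):
        return await run_blocking(self._file.write, data)

    async def close(self):
        return await run_blocking(self._file.close)

async def run_command(*args):
    """Run an external command without blocking the event loop and return its stdout."""
    process = await asyncio.create_subprocess_exec(
        *args, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    )
    stdout, _ = await process.communicate()
    return stdout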

What am I doing this week? 

I am going to refactor the scanner into two separate modules: 1) version_scanner and 2) cve_scanner. I am considering calling the latter cve_fetcher to avoid confusion, but since I mentioned cve_scanner in my proposal and issues, let's keep that name for now. I will merge the get_cves methods of cvedb and scanner into a single cve_scanner module that uses cvedb. This will make the code more maintainable and readable once I convert it to asynchronous code.

Have I got stuck anywhere?

I couldn't decide whether I should build on aiofiles and implement the functions it lacks, or implement everything on my own. I was torn because I don't want to reinvent the wheel, and the aiofiles codebase was scary at first glance, but then I realized its code is unnecessarily complicated. So I borrowed some of its logic and wrote all the functionality it provides, plus the tempfile functionality I need, in a compact form.

I am also thinking about turning this into my own library as an alternative to aiofiles, one that also implements other file IO functionality such as shutil and os wrappers, and publishing it on PyPI.

View Blog Post

GSoC 20: Week 2: del legacy.c

Niraj-Kamdar
Published: 06/08/2020

Hello everyone!

It's Niraj again. Today I will be sharing my code contributions for this week.

What did I do this week?

I have completed my work on removing the compiler dependency for testing this week and opened a PR. We had been using C files to create binary files containing the same version strings found in the product each checker targets, so that we can assert that our checker and scanner modules work correctly; we call this the mapping test. Most of the strings generated by compiling a C file are just compiler dump that we ignore anyway, so why not use struct (as suggested by @pdxjohnny) or plain binary strings, which saves both time and space? I experimented with struct and found that the binary file produced using struct is the same as the one we get from simply writing binary strings to a file.

To make the basic test suite run quickly, we create "faked" binary files to test the CVE mappings. However, we also want to test real files to confirm that the signatures work on real-world data. We have a _file_test function that takes a URL, a package name and a version, downloads the file and runs the scanner against it; we call this the package test.

Initially, I proposed a file named mapping_test_data.py for the mapping test of test_scanner, containing a list of dictionaries of version, checker_name (module_name) and version_strings, and a package_test_data.py file for the package test of test_scanner, containing a list of tuples of url, package_name, module_name and version. For example:

mapping_test_data = [
    {
        "module": "bash",
        "version": "1.14.0",
        "version_strings": ["Bash version 1.14.0"],
    },
    {
        "module": "binutils",
        "version": "2.31.1",
        "version_strings": [
            "Using the --size-sort and --undefined-only options together",
            "libbfd-2.31.1-system.so",
            "Auxiliary filter for shared object symbol table",
        ],
    },
]
import itertools

package_test_data = itertools.chain(
    [
        # Filetests for bash checker
        (
            "https://kojipkgs.fedoraproject.org/packages/bash/4.0/1.fc11/x86_64/",
            "bash-4.0-1.fc11.x86_64.rpm",
            "bash",
            "4.0.0",
        ),
        (
            "http://rpmfind.net/linux/mageia/distrib/4/x86_64/media/core/updates/",
            "bash-4.2-53.1.mga4.x86_64.rpm",
            "bash",
            "4.2.53",
        ),
    ],
    [
        # Filetests for binutils checker
        (
            "http://security.ubuntu.com/ubuntu/pool/main/b/binutils/",
            "binutils_2.26.1-1ubuntu1~16.04.8_amd64.deb",
            "binutils",
            "2.26.1",
        ),
        (
            "http://mirror.centos.org/centos/7/os/x86_64/Packages/",
            "binutils-2.27-43.base.el7.x86_64.rpm",
            "binutils",
            "2.27",
        ),
    ],
)

Although this format is better than creating a C file and adding test data to the test_scanner file, in this week's virtual conference my mentors pointed out that keeping test data for all checkers in one file will become hard to navigate, since the number of checkers will only grow over time. So they asked me to create a separate test_data file for each checker, containing two attributes: 1) mapping_test_data, the test data for our mapping test, and 2) package_test_data, the test data for our package test. I did that, and the test_data file for the bash checker, for example, looks like this:

mapping_test_data = [
    {"module": "bash", "version": "1.14.0", "version_strings": ["Bash version 1.14.0"]}
]
package_test_data = [
    {
        "url": "https://kojipkgs.fedoraproject.org/packages/bash/4.0/1.fc11/x86_64/",
        "package_name": "bash-4.0-1.fc11.x86_64.rpm",
        "module": "bash",
        "version": "4.0.0",
    },
    {
        "url": "http://rpmfind.net/linux/mageia/distrib/4/x86_64/media/core/updates/",
        "package_name": "bash-4.2-53.1.mga4.x86_64.rpm",
        "module": "bash",
        "version": "4.2.53",
    },
]

We also have to add a new entry to the __all__ list in the __init__.py file of the test_data module for the checker we are writing tests for, if it isn't there already, because I use this list to load the test_data files at runtime.
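
Roughly, the runtime loading works like the sketch below (the package path test.test_data is my assumption; the real layout and loader may differ):

import importlib

from test.test_data import __all__ as checker_names   # e.g. ["bash", "binutils", ...]

mapping_test_data = []
package_test_data = []
for name in checker_names:
    # Each test_data/<checker>.py file exposes these two attributes.
    module = importlib.import_module(f"test.test_data.{name}")
    mapping_test_data.extend(module.mapping_test_data)
    package_test_data.extend(module.package_test_data)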

Once this PR is merged, a checker developer only needs to create two files: 1) a checker class file under the checkers directory and 2) a test_data file under the test_data directory. This spares them from navigating the whole test_scanner file (around 2500 lines) just to add test data for the checker they have written.

What am I doing this week?

I am going to make the extractor module asynchronous this week. I have started working on it and created some functions for it. By the end of the week I want to have an asynchronous extractor module and an asynchronous test_extractor.

Have I got stuck anywhere?

As I mentioned in my previous blog, the Unix file utility wasn't flagging the binaries I generated as executable binary files. After some research, I learned about the magic signature that the file utility uses to identify a binary file and added it to the binary files I was creating. Here is the magic hex signature found at the beginning of most executable (ELF) files:

b"\x7f\x45\x4c\x46\x02\x01\x01\x03"

 

View Blog Post