Published: 06/24/2022

I finished my exams this week and have returned to working on the benchmark.


  • Usage:
    • The benchmark works in a generic way: it can compare LPython against Python at any stage you want. For now I have implemented only the parsing stage, but if anyone wants to add another stage to compare, it is easy to add.
    • The file structure is similar to the testing structure:
      ├── benchmark
      │   ├── benchmark.toml
      │   ├── long_statment.py
      │   ├── report_file.dat
      │   ├── test_math.py
      │   └── very_long_statment.py
      └── run_benchmark.py

      We add the files we want to compare to the benchmark directory, then list each file in the toml file and specify which stages we want to compare:

      # Possible comparisons
      # parser: Parsing time
      [[benchmark]]
      filename = 'test_math.py'
      parser = true
      other_stage = true

      [[benchmark]]
      filename = 'long_statment.py'
      parser = true

      [[benchmark]]
      filename = 'very_long_statment.py'
      parser = true
      other_stage = true

      Here every file lists the comparison stages it takes part in, so we can imagine the files grouped into comparison sets:


    • The parser comparison set has very_long_statment.py, long_statment.py and test_math.py; that means we will compare LPython against Python at the (parsing) stage using these files.

    • The other_stage comparison set has test_math.py and very_long_statment.py; that means we will compare LPython against Python at the (other_stage) stage using these files.
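
      The grouping described above can be sketched in a few lines. This is only an illustrative sketch: the `[[benchmark]]` table name and the `comparison_sets` helper are my own assumptions, not necessarily the real toml layout or runner code.

      ```python
      import tomllib  # Python 3.11+; the third-party `toml` package behaves similarly


      def comparison_sets(config_text):
          """Group benchmark files into sets keyed by comparison stage."""
          data = tomllib.loads(config_text)
          sets = {}
          for entry in data.get("benchmark", []):
              for key, enabled in entry.items():
                  # every key other than `filename` is treated as a stage flag
                  if key != "filename" and enabled is True:
                      sets.setdefault(key, []).append(entry["filename"])
          return sets


      config = """
      [[benchmark]]
      filename = 'test_math.py'
      parser = true

      [[benchmark]]
      filename = 'long_statment.py'
      parser = true
      """
      print(comparison_sets(config))  # {'parser': ['test_math.py', 'long_statment.py']}
      ```
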

    • The output is a numerical table or a bar chart, and argument flags let you choose which one you want:

      $ python3 run_benchmark.py 
      		 parser comparison 
      LPython is the first row, Python is the second one.
      Time in ms.
      benchmark/test_math.py         : ▏ 237.24µs
                                       ▏ 741.59µs
      benchmark/long_statment.py     : ▏ 3.77 
                                       ▇▇ 22.98
      benchmark/very_long_statment.py: ▇▇▇ 39.04
                                       ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 570.24
      File Name                          LPython      Python
      -------------------------------  ---------  ----------
      benchmark/test_math.py            0.237236    0.741587
      benchmark/long_statment.py        3.77516    22.9778
      benchmark/very_long_statment.py  39.0398    570.236
      $ python3 run_benchmark.py --help
      usage: run_benchmark.py [-h] [-n] [-p] [-c COMPARE [COMPARE ...]]
      LPython benchmark
        -h, --help            show this help message and exit
        -n, --numerical       show results as numerical table
        -p, --plots           show results as graph of bars
        -c COMPARE [COMPARE ...], --compare COMPARE [COMPARE ...]
                              What stages you want to compare; for now we have ['parser']
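
      The bars in the output above appear to be scaled against the slowest result. A hedged sketch of how such bars can be rendered; the `bar` function name and the 50-character width are my assumptions, not the actual implementation:

      ```python
      def bar(value, max_value, width=50):
          """Render `value` as a row of block characters scaled to `max_value`."""
          blocks = int(round(width * value / max_value))
          # show a thin marker for values that round down to zero blocks
          return ("▇" * blocks if blocks else "▏") + f" {value:.2f}"


      for name, ms in [("long_statment.py", 22.98), ("very_long_statment.py", 570.24)]:
          print(f"{name:24}: {bar(ms, 570.24)}")
      ```

      With these inputs the function reproduces the `▇▇ 22.98` and full-width `570.24` rows shown in the sample output.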
  • Implementation:
    • For every stage we want to support, we make a class for that stage with two classmethods that get the time (or any other value we want to compare) for LPython and for Python:
      import ast
      import re
      import subprocess
      from timeit import default_timer as clock

      class Parser:
          @classmethod
          def get_lpython_result(cls, file_path):
              lpython_run = subprocess.Popen("lpython --new-parser --time-report " + file_path,
                                             shell=True,
                                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
              stdout = lpython_run.communicate()[0].decode('utf-8')
              try:
                  parsing_value = re.search(r"\bParsing: .*ms\b", stdout)[0]
                  parsing_value = parsing_value.replace("Parsing: ", '')
                  parsing_value = parsing_value.replace('ms', '')
                  parsing_value = float(parsing_value)
              except TypeError:
                  # the --time-report output did not contain a parsing time
                  parsing_value = None
              return parsing_value

          @classmethod
          def get_cpython_result(cls, file_path):
              source = open(file_path).read()
              t1 = clock()
              ast.parse(source, type_comments=True)
              t2 = clock()
              return float(t2 - t1) * 1000
      Here is the Parser class implementation.
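
      Adding a new stage then just means writing another class with the same two classmethods. As an illustration (entirely hypothetical, not part of the actual benchmark), a stage that times CPython's tokenizer could look like this:

      ```python
      import io
      import tokenize
      from time import perf_counter


      class TokenizeStage:
          """Hypothetical extra stage following the same interface as Parser."""

          @classmethod
          def get_lpython_result(cls, file_path):
              # would shell out to `lpython --time-report` here, as Parser does;
              # omitted in this sketch
              return None

          @classmethod
          def get_cpython_result(cls, file_path):
              source = open(file_path, "rb").read()
              t1 = perf_counter()
              # consume the token stream so the whole file is tokenized
              list(tokenize.tokenize(io.BytesIO(source).readline))
              t2 = perf_counter()
              return (t2 - t1) * 1000
      ```

      The runner can then map the `--compare` names to these classes, e.g. a dict like `{'parser': Parser, 'tokenize': TokenizeStage}`.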
    • I used LPython's --time-report flag to get the times of the stages in LPython.
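      The --time-report output is plain text, so extracting a stage's time is a small regex job. A sketch with a made-up sample string; the exact report format here is an assumption inferred from the regex used in the Parser class above:

      ```python
      import re


      def extract_stage_ms(report, stage):
          """Pull a stage's time in ms out of --time-report style output."""
          match = re.search(rf"\b{stage}: ([0-9.]+)ms\b", report)
          return float(match.group(1)) if match else None


      sample = "Parsing: 3.775ms\nSemantic analysis: 12.100ms"  # made-up sample
      print(extract_stage_ms(sample, "Parsing"))  # 3.775
      ```

      Returning None when the stage is missing lets the runner mark a result as unavailable instead of crashing.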
  • Related pull requests: