Week-06
AbdelrahmanKhaled
Published: 07/23/2022
PRs:
Now we have a generating section in the benchmark:
benchmark/
├── benchmark.toml
├── generated_code
│   ├── long_statement.py
│   └── very_long_statement.py
├── generating_scripts
│   ├── long_statement.py
│   └── very_long_statement.py
├── report_file.dat
└── test_math.py
That will help us benchmark big source files (and the generated code will be ignored by git).
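As an illustration, a generating script for generated_code/ might look like this. This is a hypothetical sketch (the function name and statement shape are my own); the real scripts in generating_scripts/ may build their long statements differently:

```python
import os

def generate_long_statement(path, n_terms=1000):
    """Emit a Python file containing one very long expression statement,
    e.g. "x = 1 + 2 + 3 + ... + n", for parser benchmarking."""
    statement = "x = " + " + ".join(str(i) for i in range(1, n_terms + 1))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(statement + "\n")

generate_long_statement("generated_code/long_statement.py", n_terms=1000)
```

Scripts like this make it cheap to regenerate arbitrarily large inputs, which is why the generated files themselves can stay out of git.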
Week-05
AbdelrahmanKhaled
Published: 07/16/2022
- I've finished StringCompersion.
- I fixed a bug in the str() function; I tried to fix it by implementing string elements for lists, and I'll continue with that next week.
- Benchmark: ready to be reviewed and merged.
Week-04
AbdelrahmanKhaled
Published: 07/08/2022
Opened PR:
1 - StringsCompare: changed the comparison to work on full strings, but there are still some problems.
def f():
    assert "ahmed" == "ahmed"
    assert "a" == "a"
    assert "ab" != "a"
    assert "a" < "ab"
    assert "ab" > "a"
    assert "a" >= "a"
Week-03
AbdelrahmanKhaled
Published: 07/08/2022
I didn't get anything done this week due to my graduation project.
Week-02
AbdelrahmanKhaled
Published: 06/24/2022
I've finished my exams this week and returned to working on the benchmark.
Benchmark:
- Usage:
- The benchmark works in a generic way to compare the LPython and CPython compilers at any stage you want (for now I implemented the parsing stage; if anyone wants to add another stage to compare, it will be easy to add).
- The file structure is similar to the testing structure:
.
├── benchmark
│   ├── benchmark.toml
│   ├── long_statment.py
│   ├── report_file.dat
│   ├── test_math.py
│   └── very_long_statment.py
└── run_benchmark.py
We add the files we want to compare to the benchmark directory, register each file in the TOML file, and specify which stages we want to compare:
#Possible comparisons
# parser: Parsing time
[[benchmark]]
filename = 'test_math.py'
parser = true
other_stage = true
[[benchmark]]
filename = 'long_statment.py'
parser = true
[[benchmark]]
filename = 'very_long_statment.py'
parser = true
other_stage = true
Here we have the files, and every file has its comparison stages. We can picture the comparison sets like this:
- The parser comparison set has very_long_statment.py, long_statment.py, and test_math.py; that means we will compare LPython and CPython on the parsing stage using these files.
- The other_stage comparison set has test_math.py and very_long_statment.py (the files with other_stage = true above); that means we will compare LPython and CPython on the other_stage stage using those files.
The output can be a numerical table or a bar graph, and there are argument flags to choose which you want:
$ python3 run_benchmark.py
parser comparison
LPython is first raw, Python is second one.
Time in ms.
benchmark/test_math.py         : ▏ 237.24T
                                 ▏ 741.59T
benchmark/long_statment.py     : ▏ 3.77
                                 ▇▇ 22.98
benchmark/very_long_statment.py: ▇▇▇ 39.04
                                 ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 570.24
File Name                       LPython   Python
------------------------------- --------- ----------
benchmark/test_math.py          0.237236  0.741587
benchmark/long_statment.py      3.77516   22.9778
benchmark/very_long_statment.py 39.0398   570.236
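From the numerical table we can compute the parsing speedups directly (the numbers are copied from the table; times are in ms):

```python
# Parse times from the table above: (LPython, CPython), in ms.
times = {
    "benchmark/test_math.py":          (0.237236, 0.741587),
    "benchmark/long_statment.py":      (3.77516,  22.9778),
    "benchmark/very_long_statment.py": (39.0398,  570.236),
}

# Speedup = CPython time / LPython time.
for name, (lpython_ms, cpython_ms) in times.items():
    print(f"{name}: {cpython_ms / lpython_ms:.1f}x faster")
# → benchmark/test_math.py: 3.1x faster
# → benchmark/long_statment.py: 6.1x faster
# → benchmark/very_long_statment.py: 14.6x faster
```

Notably, the speedup grows with input size: LPython parses the largest file about 14.6x faster than CPython.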
$ python3 run_benchmark.py --help
usage: run_benchmark.py [-h] [-n] [-p] [-c COMPARE [COMPARE ...]]
Lpython benchmark
options:
-h, --help show this help message and exit
-n, --numerical show results as numerical table
-p, --plots show results as graph of pars
-c COMPARE [COMPARE ...], --compare COMPARE [COMPARE ...]
What stages you want to compare, for now we have['parser']
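The flag handling shown in the help text could be wired up with argparse roughly like this (a sketch of one plausible implementation; run_benchmark.py's actual code may differ):

```python
import argparse

parser = argparse.ArgumentParser(description="Lpython benchmark")
parser.add_argument("-n", "--numerical", action="store_true",
                    help="show results as numerical table")
parser.add_argument("-p", "--plots", action="store_true",
                    help="show results as graph of bars")
parser.add_argument("-c", "--compare", nargs="+", default=["parser"],
                    help="What stages you want to compare, for now we have ['parser']")

# Parse a sample command line instead of sys.argv, for illustration.
args = parser.parse_args(["-n", "-c", "parser"])
print(args.numerical, args.plots, args.compare)
```

nargs="+" is what produces the `-c COMPARE [COMPARE ...]` shape in the help output: it collects one or more stage names into a list.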
- Implementation:
- For every stage we want to support, we make a class for the stage with two methods that get the time (or any other value we want to compare) for LPython and CPython:
import ast
import re
import subprocess
from time import perf_counter as clock  # time.clock() was removed in Python 3.8

class Parser:
    @classmethod
    def get_lpython_result(cls, file_path):
        lpython_run = subprocess.Popen("lpython --new-parser --time-report " + file_path,
                                       shell=True,
                                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        try:
            stdout = lpython_run.communicate()[0].decode('utf-8')
            parsing_value = re.search(r"\bParsing: .*ms\b", stdout)[0]
            parsing_value = parsing_value.replace("Parsing: ", '')
            parsing_value = parsing_value.replace('ms', '')
            parsing_value = float(parsing_value)
        except (TypeError, ValueError):
            # No "Parsing: ...ms" line was found, or it was not a number
            parsing_value = None
        return parsing_value

    @classmethod
    def get_cpython_result(cls, file_path):
        source = open(file_path).read()
        t1 = clock()
        ast.parse(source, type_comments=True)
        t2 = clock()
        return float(t2 - t1) * 1000  # seconds -> ms
Here is the Parser class implementation.
- I used the --time-report flag to get the timings of LPython's stages.
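The regex extraction in get_lpython_result works like this (the sample --time-report output below is made up for illustration; only the "Parsing: ...ms" line matters for the parser comparison):

```python
import re

# Made-up sample of what --time-report output might contain.
stdout = """\
Tokenizing: 1.203ms
Parsing: 3.775ms
Total: 4.978ms
"""

# Pull out the "Parsing: ...ms" line, then strip the label and unit.
match = re.search(r"\bParsing: .*ms\b", stdout)[0]
parsing_ms = float(match.replace("Parsing: ", "").replace("ms", ""))
print(parsing_ms)  # → 3.775
```

If the regex finds no match, `re.search` returns None and the subscript raises TypeError, which is why the class wraps the extraction in a try/except and falls back to None.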
- Related pull requests: