Comments (7)
from .benchmarks import BENCHS
Relative Python imports depend on how you run your code. sys.argv[0] can be different from your command line. The Runner class has a program_args argument to work around the sys.argv[0] issue:
http://perf.readthedocs.io/en/latest/api.html#runner-class
Why not use absolute imports?
See also how http://github.com/python/performance handles this issue.
Reminder: perf spawns worker child processes, so it has to build a command line to run them.
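To see concretely why sys.argv[0] is not a reliable way to rebuild that command line, here is a stdlib-only sketch (no perf involved; the pkg package is made up for illustration):

```python
import os
import subprocess
import sys
import tempfile

# Stdlib-only illustration: the value of sys.argv[0] depends on how the
# interpreter was invoked. A tool that re-executes sys.argv[0] to spawn
# workers (as perf does) therefore cannot assume it matches the original
# command line. The "pkg" package below is made up for the demo.
with tempfile.TemporaryDirectory() as tmp:
    pkg = os.path.join(tmp, "pkg")
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "__main__.py"), "w") as f:
        f.write("import sys; print(sys.argv[0])\n")

    # Under "python -m pkg", sys.argv[0] is the resolved path to
    # pkg/__main__.py, not "-m pkg" -- so re-running sys.argv[0] directly
    # would bypass the -m machinery and lose the package context that
    # relative imports need.
    out = subprocess.run(
        [sys.executable, "-m", "pkg"],
        cwd=tmp, capture_output=True, text=True,
    ).stdout.strip()
    print(out.endswith(os.path.join("pkg", "__main__.py")))  # True
```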
from pyperf.
Hmm, I can run it via $ python -m format.run. But if I change the relative import from .bench ... to from bench ... and then run via $ python format, it works.
How can we enhance the documentation to handle this issue?
@Haypo after using absolute imports inside the project, I still can do like this python -m formats
https://github.com/mlouielu/python-string-format-microperformance
@Haypo after using absolute imports inside the project, I still can do like this python -m formats
That's why I consider that it's more a documentation issue than a bug.
Can I close this bug report? Or do you want to write a PR for the doc?
Ah.. That is a typo, I *can not* run python -m formats; it gives me this output:
Traceback (most recent call last):
  File "/Users/louielu/dev/python-string-format-microperformance/formats/__main__.py", line 1, in <module>
    from formats.run import run
ModuleNotFoundError: No module named 'formats'
Traceback (most recent call last):
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/louielu/dev/python-string-format-microperformance/formats/__main__.py", line 3, in <module>
    run()
  File "/Users/louielu/dev/python-string-format-microperformance/formats/run.py", line 8, in run
    runner.bench_func(bm_name, bm_func)
  File "/usr/local/lib/python3.6/site-packages/perf/_runner.py", line 485, in bench_func
    return self._main(task)
  File "/usr/local/lib/python3.6/site-packages/perf/_runner.py", line 420, in _main
    bench = self._master()
  File "/usr/local/lib/python3.6/site-packages/perf/_runner.py", line 543, in _master
    bench = Master(self).create_bench()
  File "/usr/local/lib/python3.6/site-packages/perf/_master.py", line 221, in create_bench
    worker_bench, run = self.create_worker_bench()
  File "/usr/local/lib/python3.6/site-packages/perf/_master.py", line 120, in create_worker_bench
    suite = self.create_suite()
  File "/usr/local/lib/python3.6/site-packages/perf/_master.py", line 110, in create_suite
    suite = self.spawn_worker(self.calibrate_loops, 0)
  File "/usr/local/lib/python3.6/site-packages/perf/_master.py", line 97, in spawn_worker
    % (cmd[0], exitcode))
RuntimeError: /usr/local/Cellar/python3/3.6.1/bin/python3.6 failed with exit code 1
I am also unable to use absolute imports when attempting to run benches with python -m
with the following directory structure:
mypkg/
mypkg/__init__.py
mypkg/myfunc.py
mypkg/bench/
mypkg/bench/bench_myfunc.py
The contents of bench_myfunc.py are:
import perf
from mypkg.myfunc import myfunc

if __name__ == '__main__':
    runner = perf.Runner()
    runner.bench_func("myfunc_foo", myfunc)
and when I run python -m mypkg.bench.bench_myfunc
I get the same error as above:
Traceback (most recent call last):
  File "/tmp/perf_repro/mypkg/bench/bench_myfunc.py", line 1, in <module>
    from mypkg.myfunc import myfunc
ModuleNotFoundError: No module named 'mypkg'
Traceback (most recent call last):
  File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/perf_repro/mypkg/bench/bench_myfunc.py", line 6, in <module>
    runner.bench_func("bench_myfunc", myfunc)
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_runner.py", line 493, in bench_func
    return self._main(task)
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_runner.py", line 428, in _main
    bench = self._master()
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_runner.py", line 551, in _master
    bench = Master(self).create_bench()
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_master.py", line 221, in create_bench
    worker_bench, run = self.create_worker_bench()
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_master.py", line 120, in create_worker_bench
    suite = self.create_suite()
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_master.py", line 110, in create_suite
    suite = self.spawn_worker(self.calibrate_loops, 0)
  File "/home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/lib/python3.7/site-packages/perf/_master.py", line 97, in spawn_worker
    % (cmd[0], exitcode))
RuntimeError: /home/hcs/.local/share/virtualenvs/perf_repro-bPiNGceB/bin/python failed with exit code 1
I am using python 3.7 and perf 1.5.1 inside a virtualenv created by pipenv.
I have attached a tar containing the repro case, with an additional file that shows that importing is working:
python -m mypkg.bench.import_works
perf_absolute_import_repro.tar.gz
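One workaround (a sketch, not an official pyperf recipe) is to have the benchmark script itself insert the project root into sys.path before the absolute import, so the script still works when a worker process re-runs it directly by path. The self-contained demo below rebuilds the mypkg layout from this comment in a temporary directory and runs the bench file the way a worker would:

```python
import os
import subprocess
import sys
import tempfile

# Workaround sketch: the bench script puts its project root on sys.path
# before the absolute import, so "python .../mypkg/bench/bench_myfunc.py"
# (the way a spawned worker re-runs it) can still import mypkg.
bench_src = """\
import os, sys
# bench_myfunc.py lives in <root>/mypkg/bench/, so the root is two levels up.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
from mypkg.myfunc import myfunc
print(myfunc())
"""

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "mypkg", "bench"))
    open(os.path.join(tmp, "mypkg", "__init__.py"), "w").close()
    with open(os.path.join(tmp, "mypkg", "myfunc.py"), "w") as f:
        f.write("def myfunc():\n    return 'ok'\n")
    bench_path = os.path.join(tmp, "mypkg", "bench", "bench_myfunc.py")
    with open(bench_path, "w") as f:
        f.write(bench_src)

    # Run the script directly by path, as a spawned worker would.
    out = subprocess.run(
        [sys.executable, bench_path], capture_output=True, text=True
    ).stdout.strip()
    print(out)  # ok
```

The real bench file would call perf.Runner() after the import instead of printing; the key point is only that the sys.path fix has to run before the absolute import.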