Comments (10)
@stavros11 thanks, could you please profile again after commenting out all kernels that are not used in this example?
from qibojit.
I tried removing all kernels except the tiny initial_state_kernel, and this reduces the dry-run time from 0.7 s to 0.33 s on my laptop. The cumulative compilation time in the profiling file is also 0.33 s.
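The measurement being discussed (first call pays the JIT compilation cost, subsequent calls hit an in-process cache) can be sketched without a GPU. This is a stdlib stand-in, not cupy itself: `fake_compile` and the 0.2 s sleep are hypothetical placeholders for the `RawKernel` compilation step.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def fake_compile(kernel_name):
    # Stand-in for cupy's JIT compilation of a RawKernel.
    time.sleep(0.2)  # pretend compilation takes ~0.2 s
    return f"binary-for-{kernel_name}"

def timed_call(name):
    # Wall-clock time of a single kernel "compilation" request.
    start = time.perf_counter()
    fake_compile(name)
    return time.perf_counter() - start

first = timed_call("initial_state_kernel")   # pays the compilation cost
second = timed_call("initial_state_kernel")  # served from the in-process cache
print(f"first: {first:.3f} s, second: {second:.6f} s")
```

The same pattern (time the dry run, then time a repeated call) is what the profiling numbers above reflect.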
Thanks, could you please measure the difference between the first call and subsequent calls after deleting the cache folder? See the last paragraph in https://docs.cupy.dev/en/stable/overview.html
I added the following:
import shutil

kernel_dir = "/home/stavros/.cupy/kernel_cache"  # I can verify that this location is correct on my machine
shutil.rmtree(kernel_dir)
which removes this folder between the dry run and the simulation call, but there is no change in the numbers. The dry run remains around 0.7 s while the simulation is <1 ms.
Also, the simulation call does not recreate the cache: even though the rmtree call happens before the simulation, I cannot find the kernel cache folder after the script finishes, so I guess cupy caches compiled kernels in memory when the calls are in the same script and does not rewrite the cache to disk.
Also, the dry-run time is the same regardless of whether the kernel cache folder exists or was removed before executing the script.
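The observed behavior (deleting the disk cache mid-run changes nothing, because an in-process cache serves repeated calls) can be mimicked with a small stdlib sketch. `compile_kernel`, `memory_cache`, and the `.cubin` file naming here are illustrative stand-ins, not cupy internals.

```python
import os
import shutil
import tempfile

cache_dir = tempfile.mkdtemp()  # stand-in for ~/.cupy/kernel_cache
memory_cache = {}               # stand-in for the in-process module cache

def compile_kernel(name):
    # Return from the in-process cache if available; otherwise "compile"
    # and persist the binary to the on-disk cache.
    if name in memory_cache:
        return memory_cache[name]
    binary = f"cubin-for-{name}"
    with open(os.path.join(cache_dir, name + ".cubin"), "w") as f:
        f.write(binary)
    memory_cache[name] = binary
    return binary

compile_kernel("initial_state_kernel")           # writes the .cubin to disk
shutil.rmtree(cache_dir)                         # delete the disk cache mid-run
result = compile_kernel("initial_state_kernel")  # still a hit: in-memory cache
print(result, os.path.exists(cache_dir))
```

This matches the report above: once a kernel is compiled in a running process, removing the disk cache neither slows subsequent calls nor triggers the folder to be recreated.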
Ok, if I understand correctly, the caching data is never generated, correct? If so, then this explains why the dry run is always slow, and we should figure out why the cache is disabled.
Yes, that is my understanding too. There is a .cubin file generated in the kernel_cache folder after I execute a script, but having this file there makes no difference in subsequent dry-run times compared to not having it.
Ok, so the cache is working. Could you please measure how long the initial_state_kernel compilation takes if you pass 1 vs 25 qubits? (Just to check whether a light tracing could help here.)
For the initial_state_kernel the dry-run time is constant at around 0.7 s regardless of the number of qubits. For more complicated kernels there is still a constant difference between the dry run and the simulation that is independent of the number of qubits.
I can do some kind of tracing by running the kernel for 1 qubit first and then for some higher number, but it will still take the same total time: the one-qubit dry run will take 0.7 s and then subsequent calls will be fast regardless of the number of qubits. In principle we could hide this 0.7 s overhead in the import step using such tracing.
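The warm-up idea above can be sketched as follows. Since the compilation cost is paid once per kernel and is independent of the problem size, triggering it on the smallest case at import time makes later large-case calls fast. Again a stdlib stand-in: `compile_kernel`, `apply_kernel`, and the sleep are hypothetical placeholders for the cupy compile-and-launch path.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def compile_kernel(name):
    # Compilation cost is paid once per kernel, independent of problem size.
    time.sleep(0.2)  # stand-in for the ~0.7 s cupy JIT compilation
    return f"binary-for-{name}"

def apply_kernel(name, nqubits):
    binary = compile_kernel(name)  # cached after the first call
    return f"{binary} applied to {nqubits} qubits"

# Warm-up at import time: run the kernel once on the smallest case so the
# compilation overhead is hidden before any user-facing simulation starts.
apply_kernel("initial_state_kernel", 1)

start = time.perf_counter()
result = apply_kernel("initial_state_kernel", 25)  # large case: no recompile
elapsed = time.perf_counter() - start
print(f"25-qubit call after warm-up: {elapsed:.6f} s")
```

The total compilation time is unchanged, as noted above; the warm-up only moves it from the first user-facing call to the import step.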
Thanks. Moving the compilation to the initialization is probably an interesting suggestion; on the other hand, this overhead will disappear as soon as we use the GPU for an appropriately large system.
I know this is a closed issue, but I just want to add that, if we want to move the compilation to the import step, we can simply try adding self.gates.compile() in the __init__() method of the CupyBackend class.
from qibojit.
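The suggestion in the last comment can be sketched as a minimal class skeleton. Only the names gates.compile(), __init__(), and CupyBackend come from the comment above; the FakeGates class and its internals are hypothetical stand-ins for the real qibojit gate-kernel module.

```python
class FakeGates:
    """Hypothetical stand-in for the qibojit gate-kernel module."""

    def __init__(self):
        self.compiled = False

    def compile(self):
        # In the real backend this would trigger cupy JIT compilation of
        # every kernel; here we only record that compilation happened.
        self.compiled = True

class CupyBackend:
    def __init__(self):
        self.gates = FakeGates()
        # Compile all kernels eagerly at construction time, so the
        # ~0.7 s overhead is paid on import rather than on first use.
        self.gates.compile()

backend = CupyBackend()
print(backend.gates.compiled)
```

Eager compilation in __init__ trades a slower backend construction for fast first simulation calls, which is the same trade-off as the warm-up tracing discussed earlier in the thread.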