qutech / filter_functions

Efficient numerical calculation of generalized filter functions

Home Page: https://www.quantuminfo.physik.rwth-aachen.de/cms/Quantuminfo/Forschung/Quantum-Technology-Group/~zcsx/code/lidx/1/

License: GNU General Public License v3.0

Python 100.00%
physics quantum-computing quantum-information quantum-control python qutip filter-functions

filter_functions's Issues

Docs fail to build

Readthedocs sometimes (it's unclear to me why only sometimes) kills the build process due to excessive memory consumption of the conda environment solver.

This could be solved by

  • switching to a pip-only build w/o a conda environment. However, in this case qutip needs to be compiled, which at the moment fails because it requires cython at install time (which is not installed by default on the readthedocs docker image). See qutip/qutip#1174.
  • making conda skip solving the environment, since an environment.yml file with pinned versions is already passed to the command that creates it. For now, conda still seems to solve the environment despite the pinned version numbers. See here for the conda Google Groups question.

Make PulseSequence modular by subclassing

Right now, when concatenating, remapping or extending PulseSequences, information about the constituent instances is not retained in the resulting PulseSequence. This has several drawbacks:

  1. It is impossible to retroactively compute the pulse correlation filter function of a composite pulse after the concatenation has been carried out as the control matrices of the concatenated pulses are not copied over to the new pulse.
  2. Similarly, the user has to decide at the moment of concatenation if they want to efficiently compute the filter function. Exploiting the concatenation property of the filter functions is not possible after the fact.
  3. Periodic concatenation of a pulse stores all time steps and coefficients explicitly in the new PulseSequence. This can take up a significant amount of memory for basically redundant information.

Extending the current structure by subclassing PulseSequence would address these issues on top of being more readable and transparent. Additionally, this would enable interfacing with qupulse in a very straightforward manner by mirroring its class structure:

  • Concatenating periodic PulseSequences could be implemented in analogy to RepetitionPT so that only the atomic pulse needs to be stored.
  • Remapping and extending pulses to different qubits, as well as joining different instances into a single one, could be implemented in analogy to MappingPT and AtomicMultiChannelPT.
  • Regular concatenation could be implemented in analogy to SequencePT.

This structure would also allow for a PulseSequence instance (or rather subclass thereof) to intelligently parse its composition and decide the most efficient way of calculating the filter function (from scratch, by concatenation, etc). Accordingly, it should be possible without too much effort to derive a PulseSequence from a PulseTemplate and connect it to a virtual_awg, enabling live introspection of pulses designed with qupulse (including for example AWG transfer functions).
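
A rough structural sketch of what such a hierarchy could look like; the subclass names and attributes below are hypothetical placeholders, not part of the current package:

from filter_functions import PulseSequence

class RepeatedPulseSequence(PulseSequence):
    # Analog of qupulse's RepetitionPT: store only the atomic pulse and a
    # repetition count instead of materializing all time steps.
    def __init__(self, pulse: PulseSequence, repetitions: int):
        self.pulse = pulse
        self.repetitions = repetitions

class ConcatenatedPulseSequence(PulseSequence):
    # Analog of SequencePT: keep references to the constituent pulses so that
    # pulse correlation filter functions can still be computed after the fact
    # and the cheapest evaluation strategy can be chosen later.
    def __init__(self, pulses):
        self.pulses = list(pulses)

class MappedPulseSequence(PulseSequence):
    # Analog of MappingPT / AtomicMultiChannelPT: remap or extend a pulse to
    # different qubits without copying its coefficients.
    def __init__(self, pulse: PulseSequence, qubit_mapping: dict):
        self.pulse = pulse
        self.qubit_mapping = qubit_mapping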

ConnectionError in util._get_notebook_name()

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\filter_functions-0.4.0-py3.7.egg\filter_functions\__init__.py", line 23, in <module>
    from . import analytic, basis, numeric, pulse_sequence, superoperator, util

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\filter_functions-0.4.0-py3.7.egg\filter_functions\basis.py", line 53, in <module>
    from . import util

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\filter_functions-0.4.0-py3.7.egg\filter_functions\util.py", line 158, in <module>
    _NOTEBOOK_NAME = _get_notebook_name()

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\filter_functions-0.4.0-py3.7.egg\filter_functions\util.py", line 144, in _get_notebook_name
    params={'token': ss.get('token', '')})

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\requests\api.py", line 76, in get
    return request('get', url, params=params, **kwargs)

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\requests\sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\requests\sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)

  File "C:\Users\lablocal\Anaconda3\envs\forge\lib\site-packages\requests\adapters.py", line 516, in send
    raise ConnectionError(e, request=request)

ConnectionError: HTTPConnectionPool(host='localhost', port=8888): Max retries exceeded with url: /api/sessions?token=ae9de937d44a0d80494a6d95d1459e1ed1f406cb70bf41cd (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001D67A300048>: Failed to establish a new connection: [WinError 10061] Es konnte keine Verbindung hergestellt werden, da der Zielcomputer die Verbindung verweigerte'))

Use opt_einsum for all einsum calls

While numpy.einsum() is, at the moment, faster than opt_einsum.contract() in many situations, opt_einsum dispatches all possible contractions to BLAS and thereby makes use of multithreading, whereas numpy.einsum() always runs on a single core. opt_einsum should therefore scale much better with large dimensions. It also makes the code cleaner by doing away with case distinctions etc.
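
A minimal illustration of the drop-in replacement (opt_einsum.contract accepts the same subscript syntax as numpy.einsum):

import numpy as np
import opt_einsum as oe

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 50, 60))
B = rng.standard_normal((60, 50, 70))

# numpy.einsum evaluates the contraction single-threaded
C_np = np.einsum('ijk,kjl->il', A, B)

# opt_einsum finds an optimized contraction path and dispatches pairwise
# contractions to BLAS, which can use multiple threads
C_oe = oe.contract('ijk,kjl->il', A, B)

assert np.allclose(C_np, C_oe)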

Use xarray as data structure

xarray provides labelled multidimensional arrays that add labels to the axes of NumPy arrays, very similar to pandas DataFrames but with more than two dimensions.

Implementing this as the standard data structure would probably make adoption of the package, and writing code that interfaces with it, easier, since the many dimensions of e.g. a control matrix or a filter function that correspond to different physical entities would be labelled in a human-readable way.

The possibility of incorporating it probably stands or falls with whether xarray provides an einsum implementation, since we rely heavily on optimized paths when contracting multidimensional arrays.
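
As a rough illustration of what labelled arrays could look like (the dimension and coordinate names here are made up, not something the package already defines):

import numpy as np
import xarray as xr

omega = np.geomspace(1e-2, 1e2, 200)
data = np.random.rand(3, omega.size)        # placeholder filter function values

F = xr.DataArray(
    data,
    dims=['noise_operator', 'omega'],
    coords={'noise_operator': ['X', 'Y', 'Z'], 'omega': omega},
    name='filter_function',
)

# Axes can be addressed by name instead of position ...
F_X = F.sel(noise_operator='X')
# ... while contractions would still drop down to the underlying ndarray:
raw = F.values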

Better data structures for operators and coefficients

Control and noise operators are at the moment stored as NumPy arrays, which has a few disadvantages:

  1. Multi-qubit operators are explicit tensor products whose single-qubit components are not stored. While it is possible to manipulate the product chain (implemented by util.tensor_insert, util.tensor_transpose, util.tensor_merge), accessing its individual elements is not. This is however very useful when extending a pulse to a larger Hilbert space. In this situation, the additional qubits idle and their filter function is very easy to compute (just a FID mostly). However, when an explicit tensor product is given as an additional noise Hamiltonian, we cannot know if this new operator is only non-trivial on an idling qubit (in which case we could trivially compute the filter function) or not. Explicitly calculating the filter function might be very costly in such a case since the pulse might have a large number of time steps.
  2. Coefficients are stored as arrays of shape (n_ops, n_dt) even if some might be very repetitive (e.g. only zeros). This of course introduces a lot of unnecessary overhead when calculating filter functions.

Most of these issues might be addressed by making the code more object-oriented:

  1. Introduce an Operator class with an identifier property. This might be a subclass of ndarray such that einsum and other NumPy operations still work with it.
  2. Introduce a TensorProduct class which implicitly stores the elements of the tensor product and has an evaluate() method to compute the explicit matrix. This would immediately make a lot of code redundant (all the tensor_* functions above) on top of giving access to the underlying structure of a tensor product.
  3. Introduce a Coefficients class with an overloaded __getitem__ method that allows einsum to work with coefficient arrays of different lengths (I am not sure how straightforward this would be to implement).

The downside of all these changes would be a larger degree of complexity when constructing a PulseSequence. Moreover, at this point it is unclear whether the package will actually be used for algorithms, i.e. more than two qubits, so this may fall under premature optimization.
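
A rough sketch of items 1 and 2; all names and signatures are hypothetical:

from functools import reduce
import numpy as np

class Operator(np.ndarray):
    # ndarray subclass carrying an identifier so einsum etc. keep working
    def __new__(cls, input_array, identifier=''):
        obj = np.asarray(input_array, dtype=complex).view(cls)
        obj.identifier = identifier
        return obj

    def __array_finalize__(self, obj):
        if obj is not None:
            self.identifier = getattr(obj, 'identifier', '')

class TensorProduct:
    # Implicitly stores the factors; the explicit matrix is only built on demand
    def __init__(self, *factors):
        self.factors = factors

    def evaluate(self):
        return reduce(np.kron, self.factors)

X = Operator([[0, 1], [1, 0]], identifier='X')
I = Operator(np.eye(2), identifier='I')
XI = TensorProduct(X, I)      # factors stay accessible via XI.factors
XI_matrix = XI.evaluate()     # explicit 4x4 matrix only when needed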

Use cached_property

One could use cached_property for many of the cached attributes of Basis and PulseSequence. On the other hand, it is not helpful for those that are frequency-dependent, so we would have to mix cached_property with a dedicated caching implementation.
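
A minimal sketch, assuming Python >= 3.8, of how a frequency-independent cached attribute could look (the class and attribute shown are illustrative, not the package's actual implementation):

from functools import cached_property
import numpy as np

class DemoBasis:
    def __init__(self, elements):
        self.elements = np.asarray(elements)

    @cached_property
    def istraceless(self):
        # Computed once on first access, then stored on the instance
        return bool(np.allclose(self.elements.trace(axis1=-2, axis2=-1), 0))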

Drop QuTiP dependency

In the core package, QuTiP is only used for plotting Bloch sphere trajectories. Reimplementing this (or copying their code should the licensing allow it) should significantly lower the installation barrier.

At first glance, this would require

  • reimplementing plotting.plot_bloch_vector_evolution() and
  • dynamically checking whether QuTiP is available for the types.Operator and types.State types.

Keeping the dependency for the examples / documentation is okay since we still want to interface easily with QuTiP.
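
A minimal sketch of such a dynamic check (the helper name is made up and not part of the package):

import numpy as np

try:
    import qutip as qt
except ImportError:
    qt = None

def as_ndarray(operator):
    # Accept either a qutip.Qobj (if QuTiP happens to be installed) or a
    # plain array-like and return a NumPy array
    if qt is not None and isinstance(operator, qt.Qobj):
        return operator.full()
    return np.asarray(operator)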

TypeError in util at import

At import, util throws an uncaught TypeError. The error is raised inside a try/except block, so my hotfix was to include TypeError in the except statement (see the sketch after the traceback below).

The same issue occurred on two different machines. The environment was set up with:

conda create -n myenv python=3.7
conda activate myenv
conda install numpy cython matplotlib pytest pytest-cov jupyter notebook spyder scipy
conda config --append channels conda-forge
conda install qutip
conda install pandas simanneal
pip install filter_functions

Error Message:


TypeError Traceback (most recent call last)
<ipython-input-...> in <module>
1 import numpy as np
2 from qsim.matrix import DenseOperator
----> 3 from qsim.solver_algorithms import SchroedingerSolver
4
5
c:\users\programmer\documents\qsim\labcourseroot\qsim\qsim\solver_algorithms.py in <module>
63 from abc import ABC, abstractmethod
64
---> 65 from filter_functions import pulse_sequence
66 from filter_functions import plotting
67 from filter_functions import basis
~\AppData\Local\conda\conda\envs\jupyter_server\lib\site-packages\filter_functions\__init__.py in <module>
21 """Package for efficient calculation of generalized filter functions"""
22
---> 23 from . import analytic, basis, numeric, plotting, pulse_sequence, util
24 from .basis import Basis
25 from .numeric import (error_transfer_matrix, infidelity,
~\AppData\Local\conda\conda\envs\jupyter_server\lib\site-packages\filter_functions\basis.py in <module>
52 from sparse import COO
53
---> 54 from .util import P_np, remove_float_errors, tensor
55
56 __all__ = ['Basis', 'expand', 'ggm_expand', 'normalize']
~\AppData\Local\conda\conda\envs\jupyter_server\lib\site-packages\filter_functions\util.py in <module>
153 return ''
154
--> 155 _NOTEBOOK_NAME = _get_notebook_name()
156 except ImportError:
157 _NOTEBOOK_NAME = ''
~\AppData\Local\conda\conda\envs\jupyter_server\lib\site-packages\filter_functions\util.py in _get_notebook_name()
147 params={'token': ss.get('token', '')})
148 for nn in json.loads(response.text):
--> 149 if nn['kernel']['id'] == kernel_id:
150 relative_path = nn['notebook']['path']
151 return os.path.join(ss['notebook_dir'], relative_path)
TypeError: string indices must be integers
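
For reference, a minimal sketch of the hotfix described above; the surrounding code is inferred from the traceback and may differ from the actual util.py:

try:
    _NOTEBOOK_NAME = _get_notebook_name()
except (ImportError, TypeError):    # previously only ImportError was caught
    _NOTEBOOK_NAME = ''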

Function to assemble algorithms from atomic pulses

A neat way of computing the filter function for an algorithm, given elementary pulses, would be a function that takes a matrix-like structure, with one dimension representing the qubit registers and the other time, and assembles the total pulse, intelligently calculating the filter function.

from typing import List, Optional

def algorithm(pulses: List[List[Optional[PulseSequence]]]) -> PulseSequence:
    # Assemble the total pulse from a (qubit registers) x (time) matrix of
    # pulses; entries may be None where a register idles.
    ...

alg = algorithm(
    [[qubit_1_pulse_1, qubit_1_pulse_2, ... ],
     [qubit_2_pulse_1, ...            , ... ],
     [None           , qubit_3_pulse_2, None],
     [qubits_45_pulses_12             , ... ]] 
)
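
A hedged sketch of one possible internal structure for such a function; combine_simultaneous() is a hypothetical, unimplemented helper for merging the pulses of one time step onto the full register, while ff.concatenate is the package's existing concatenation function:

import filter_functions as ff

def combine_simultaneous(pulses):
    # Hypothetical helper: merge pulses that run in parallel on different
    # registers into one PulseSequence on the full register (e.g. by extending
    # idling qubits with identity evolution). Left unimplemented here.
    raise NotImplementedError

def algorithm(pulses):
    segments = []
    for step in zip(*pulses):                 # iterate over the time dimension
        active = [p for p in step if p is not None]
        segments.append(combine_simultaneous(active))
    # Concatenate the time steps, exploiting the concatenation property
    return ff.concatenate(segments)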
