q-optimize / c3

Toolset for control, calibration and characterization of physical systems

Home Page: https://c3-toolset.readthedocs.io/

License: Apache License 2.0

Python 100.00%
calibration characterization quantum-computing quantum-information machine-learning control-systems optimal-control

c3's People

Contributors

alex-simm, ashutosh-mishra2, dependabot[bot], fedroy, frosati1, glasern, kepack, lazyoracle, man6o, maxmaw, maxnaeg, nwittler, shaimach, slephnirr, superplay1, yonatangideoni


c3's Issues

Add Contributor Licence Agreement auto signing

Is your feature request related to a problem? Please describe.
Currently there is no way to verify that new (external) contributors to the codebase are free of conflicts (e.g. with their current employers) and to properly establish code ownership and licensing.

Describe the solution you'd like
Every PR from a new contributor should automatically trigger a digital signing of the CLA

Describe alternatives you've considered
Manually require every external contributor to provide an email (otherwise written) confirmation clarifying no conflicts

Additional context
Suggested solution: cla-bot

Propagation Performance

Is your feature request related to a problem? Please describe.
In the propagation and the creation of the propagators (the Us), everything is written using Python for loops. These should be replaced by their TensorFlow equivalents to increase performance and actually make use of TensorFlow's workload distribution.

c3/c3/utils/tf_utils.py

Lines 219 to 224 in ef95330

for ii in range(cflds[0].shape[0]):
    cf_t = []
    for fields in cflds:
        cf_t.append(tf.cast(fields[ii], tf.complex128))
    dUs.append(tf_dU_of_t(h0, hks, cf_t, dt))
return dUs

c3/c3/utils/tf_utils.py

Lines 140 to 145 in ef95330

ii = 0
while ii < len(hks):
    h += cflds_t[ii] * hks[ii]
    ii += 1
terms = int(1e12 * dt) + 2
dU = tf_expm(-1j * h * dt, terms)

Currently every dU is calculated by itself and no parallelization is done, which could and should be improved.

Describe the solution you'd like
Implement the propagation with native TensorFlow functions.
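As a sketch of the batching idea (written with NumPy for brevity; the real implementation would use the TensorFlow equivalents such as tf.einsum and batched tf.matmul, and the helper name batch_du is made up):

```python
import numpy as np

def batch_du(h0, hks, cflds, dt, terms=20):
    """Compute the propagators dU for all time slices at once via
    array operations instead of a per-slice Python loop.

    h0:    (n, n) drift Hamiltonian
    hks:   (k, n, n) control Hamiltonians
    cflds: (k, t) control fields, one row per control, one column
           per time slice
    """
    # h[t] = h0 + sum_k cflds[k, t] * hks[k], for every slice at once
    h = h0[None, :, :] + np.einsum("kt,kij->tij", cflds, hks)
    a = -1j * h * dt
    # truncated exponential series exp(a) = sum_n a^n / n!,
    # batched over the leading time axis
    dus = np.broadcast_to(np.eye(h0.shape[0]), h.shape).astype(complex)
    term = dus
    for n in range(1, terms):
        term = np.matmul(term, a) / n
        dus = dus + term
    return dus
```

With this, the outer Python loop over time slices in the snippet above disappears; only a short loop over series terms remains.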

Describe alternatives you've considered

Additional context

Add a CHANGELOG

Is your feature request related to a problem? Please describe.
There is no easy and centralised way to check the changes being introduced in a new release. Sometimes the API is broken, and deprecation warnings need to be issued at least a few minor releases ahead.

Describe the solution you'd like
A version-controlled CHANGELOG included with the repository (as a markdown file) which gets updated incrementally during all the PRs that lead up to a particular release. It gets finalised in the release/x.y.z PR before merging into master. Some points:

  • Reverse chronological order (newest first)
  • Each contribution/feature should be linked to corresponding PR
  • Each release date should also be included
  • Changes should be grouped by types:
    • Added for new features.
    • Changed for changes in existing functionality.
    • Deprecated for soon-to-be removed features.
    • Removed for now removed features.
    • Fixed for any bug fixes.
    • Security in case of vulnerabilities.
  • After a new release, create an Unreleased section at the top of the changelog, which then gets updated both during PRs by individual contributors and when the team decides to work on and implement a certain feature. This allows for a simple Coming soon... and makes it easy to prepare the final CHANGELOG during the release PR. More details here.
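A minimal fragment following the grouping above (versions, dates and entries are placeholders):

```markdown
## [Unreleased]
### Added
- Coming soon...

## [x.y.z] - YYYY-MM-DD
### Added
- New feature description, linked to the corresponding PR
### Deprecated
- Soon-to-be removed feature, announced ahead of removal
### Fixed
- Bug-fix description, linked to the corresponding PR
```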

Describe alternatives you've considered
Discussions on Slack/Rocket.chat, fragmented notes in various PRs, Issues and meeting docs, some minimal points in the Github Releases section. None of these have archival or reference value, and they are also difficult to maintain, aggregate and version control.

Additional context
Related: As discussed in the comments on PR #87 we need a well-structured commit message for the merge commits when we merge a branch into dev during the usual development cycle. The expected template for this is as below:

Short 1-line (50 chars) description of key feature/bug-fix with associated issue numbers if any (can go in the commit heading in Github PR merge UI)

(The part below goes in the merge commit message details)
Contributors: Full name (and not Github handle) of all contributors to this PR, irrespective of who made the code commits.
Longer multi-line description, typically in the form of a detailed list of various features implemented with any additional insights/remarks on the implementations etc.

It should be possible to implement this through an automated Github bot that notifies on every PR the requirements of updated tests, updated docs, changelog, commit message, CLA etc. Either enforce this through PR templates or through a github bot.

Store all signals and improve the structure of the signal chain

At the moment, most devices have a self.signal where the last signal that was produced gets stored.
For example:

c3/c3/generator/devices.py

Lines 926 to 931 in 815bf32

cos = tf.cos(omega_lo * ts)
sin = tf.sin(omega_lo * ts)
self.signal["inphase"] = cos
self.signal["quadrature"] = sin
self.signal["ts"] = ts
return self.signal

We should instead store all signals generated in the generator object by doing something like

signal_stack: List[tf.Variable] = []
for dev_id in self.chains[chan]:
    dev = self.devices[dev_id]
    ...
final_signal[chan] = copy.deepcopy(signal_stack.pop())
signal_stages[chan] = signal_stack

For this to work, we would have to make sure that elements read from the signal stack are not popped.

However, this creates a problem:
Imagine your generator stack has the following devices:
LO, LO_noise, AWG, AWG_noise, Mixer
where the LO_noise and AWG_noise take 1 input and have 1 output.
Then the popping works because the mixer will find the noisy AWG output and the noisy LO output on the stack.
If we don't pop, by the time we get to the mixer the stack would hold the outputs
[LO, LO_noise, AWG, AWG_noise]
and the mixer won't know that it needs to take the 2nd and 4th elements of the stack.

This leads to the point that we need to implement a more general (and more intuitive) signal generation chain that is a directed graph.
One way to do this would be when specifying the chain you specify the inputs.
What is currently:

chains={
        "TC":["lo", "lo_noise" , "awg", "dac", "resp", "mixer", "fluxbias"],
        "Q1":["lo", "awg", "dac", "resp", "mixer", "v2hz"],
    } 

could become:

chains={
        "TC": [("lo",), ("lo_noise", "lo"), ("awg",), ("dac", "awg"), ("resp", "awg"), ("mixer", "lo_noise", "resp"), ("fluxbias", "mixer")],
        "Q1": [("lo",), ("awg",), ("dac", "awg"), ("resp", "awg"), ("mixer", "lo", "resp"), ("v2hz", "mixer")],
}

where the first element of each tuple is the device that needs to make a signal and the others are the inputs.
This is to get closer to the point where we truly have a directed graph.
The tuple structure is just a suggestion but anything would do even:

chains={
        "TC": [
            {"device": "lo", "inputs": []},
            {"device": "lo_noise", "inputs": ["lo"]},
            ...,
            {"device": "mixer", "inputs": ["awg", "lo"]},
            ...
        ],
        "Q1": [{...}, ..., {...}]
}

I'm aware this is more annoying to set up, but it makes more sense to someone who isn't familiar with the stack and how to make it work for new devices.
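To illustrate, a minimal sketch of how such (device, inputs) entries could be resolved (resolve_chain is a made-up helper, and devices are modeled as plain callables rather than real Device objects):

```python
def resolve_chain(chain, devices):
    """Evaluate a chain given as (device_id, *input_ids) tuples.

    Every device's output is kept under its id, so a later device
    (e.g. the mixer) can name exactly which outputs it consumes
    instead of relying on stack positions."""
    outputs = {}
    for entry in chain:
        dev_id, *input_ids = entry
        inputs = [outputs[i] for i in input_ids]
        outputs[dev_id] = devices[dev_id](*inputs)
    # the last device in the chain produces the final signal
    return outputs[chain[-1][0]], outputs
```

Note that in Python a single-element entry needs a trailing comma, ("lo",), since ("lo") is just the string "lo".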

[thanks @GlaserN for the input]

No complete notebook showing Model Learning using `c3-toolset`

Is your issue related to a problem? Please describe.
There is no demonstration notebook showing how one would perform the whole cycle of c3-toolset including the model learning phase.

  • Bug Fix: Model Learning is broken
  • Feature Req: No Model Learning notebook

Describe the solution you'd like

  • Define a simple model (configs and hjsons) - Can we use one of the existing ones that are present for two_qubits.ipynb/Simulated_calibration.ipynb or in one of the tests?
  • Run C1 and C2 on this model (or just run C2 at least?)
  • Store the data being generated in the C2 step (How? Similar to c3-paper? Just store a pickle file? numpy arrays?)
  • Read the data stored in previous step and parse all the values (including reloading the current model from a config file)
  • Run C3 model learning with this data on the previously defined model

If beyond the scope, it might be adequate to start directly from step 4 with some dummy dataset from a previous C2 run and perform model learning on that data.

TO-DO/Status

  • Trace current function paths
  • Check points where API is broken due to updates elsewhere
  • Explore and understand code/API design choices (data structure, FOMs, sampling, data assumptions)
  • Change/Update code/implementation structure as required
  • Change/Update API design as required
  • Update data structure and storage format to something portable and sensible ( might relate to data saving in C2, save as dataframes instead of list of dicts)
  • Check/update integration with main.py and Optimizer
  • Check/update integration with config files
  • Run Model Learning from CLI
  • Update Docstrings and Type Annotations
  • Update config files
  • Add tests
  • Create Model Learning Notebook as outlined above
  • Add docs based on examples

Note
Model Learning Experimentation being tracked in a separate issue #105

Undefined references in parsers.py

Describe the bug
parsers.py contains undefined references. See the screenshot below.

Screenshots
(screenshot omitted)

branch = master and dev
git rev = 2b41aac7f9df227a423f7cacdaa943731c7e40e7 and 7991c8062576c339c4f4ce18e72338c1a71df068

Tensorflow: Performance bottleneck due to tf.function retracing

Describe the bug
Possible tensorflow performance bottleneck due to repeated tracing of tf.function decorated modules

To Reproduce
In Simulated_Calibration.ipynb:

C3:STATUS:Saving as: /tmp/tmpaz0djjuu/c3logs/ORBIT_cal/2021_03_16_T_16_57_04/calibration.log
(5_w,10)-aCMA-ES (mu_w=3.2,w_1=45%) in dimension 4 (seed=1004483, Tue Mar 16 16:57:04 2021)
C3:STATUS:Adding initial point to CMA sample.
WARNING:tensorflow:5 out of the last 14 calls to <function tf_matmul_left at 0x7f97602a8dd0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

Suggested workarounds

Possible causes and solutions:

(1) creating @tf.function repeatedly in a loop, -- please define your @tf.function outside of the loop.
(2) passing tensors with different shapes, -- @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing.
(3) passing Python objects instead of tensors. -- please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

C3QasmPerfectSimulator output uses numpy types

Describe the bug

qiskit.Result objects created by the C3 qiskit adapter can't be serialized to JSON because counts are represented as numpy.int32.

probably happens here
shots_data = (np.round(pop_t.T[-1] * shots)).astype("int32")

To Reproduce

import numpy as np
import json
from c3.qiskit import C3Provider
from qiskit import transpile, execute, QuantumCircuit, Aer
qc = QuantumCircuit(6, 6)
qc.rx(np.pi/2, 0)
qc.rx(np.pi/2, 1)
qc.measure([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5])
c3_provider = C3Provider()
c3_backend = c3_provider.get_backend("c3_qasm_perfect_simulator")
c3_backend.set_device_config('quickstart.hjson')
c3_backend.disable_flip_labels()
c3_job = execute(qc, c3_backend, shots=1000)
result = c3_job.result()

# here crash!
json.dumps(result.to_dict())

Workaround: use a custom json.JSONEncoder to serialize numpy types.
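The suggested workaround could look like this (a sketch; NumpyEncoder is not part of c3 or qiskit):

```python
import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """JSONEncoder that converts NumPy scalar and array types to
    native Python types so qiskit result dicts can be dumped."""

    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)
```

json.dumps(result.to_dict(), cls=NumpyEncoder) then works without touching the adapter, though the cleaner fix is casting to native int in the simulator itself.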

Expected behavior

Qiskit objects should be serializable.

Environment (please complete the following information)

  • OS: WSL2/Ubuntu 20.04
  • Python Version: 3.8.5
  • c3-toolset Version /ref/dev

Inconsistent Gate Names

Describe the bug
Naming of gates in C3 is inconsistent with the standard naming convention followed in quantum computing textbooks and community.

Details
What we call X90p (and all similarly named gates for X, Y and Z) is essentially RX(90) as usually referenced elsewhere.

Expected behavior

Current Name    Expected Name
X90p            RX90p
Xp              RXp
X90m            RX90m
(same for Y, Z) (same for Y, Z)
CNOT            CRXp
CZ              CRZp
CR              What does this do?
CR90            Seems fine?
iSWAP           Can someone check this?

Better and faster lindblad simulation of transmons.

Is your feature request related to a problem? Please describe.
Higher levels of the transmon have not been thoroughly tested in Lindblad simulation.
The Lindblad simulation takes a very long time, especially for long sequences. It is therefore necessary to have a faster approximate version that does not involve full matrix exponentiation of the superoperator.

Describe the solution you'd like
Implement how noise scales for higher levels of the transmon.
Add an approximate, faster calculation of the Lindblad propagation.

Describe alternatives you've considered

Additional context

Flexible Dependency Management

Is your feature request related to a problem? Please describe.
Currently our dependencies involve version pinning, e.g.:

c3/setup.py

Lines 37 to 47 in 6e7e7a0

install_requires=[
    "adaptive==0.11.1",
    "cma==3.0.3",
    "gast==0.3.3",
    "hjson==3.0.2",
    "rich==9.2.0",
    "numpy==1.19.5",
    "scipy==1.5.2",
    "tensorflow==2.4.1",
    "tensorflow-estimator==2.4.0",
    "tensorflow-probability==0.12.1",

The problem is that this will sooner or later lead to conflicts and issues when c3-toolset exists in environments with other packages that have even slightly differing requirements.

Describe the solution you'd like

  1. We should probably move to more flexible >=-style requirements, OR
  2. We release pre-built docker images to run everything in containers only
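For option 1, the pins above could become lower bounds, e.g. (the upper bound shown here is purely illustrative and would only be added where API breaks are known):

```python
install_requires=[
    "adaptive>=0.11.1",
    "cma>=3.0.3",
    "hjson>=3.0.2",
    "numpy>=1.19.5",
    "scipy>=1.5.2",
    "tensorflow>=2.4.1,<2.5",
    "tensorflow-probability>=0.12.1",
],
```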

Qiskit and OpenQasm Integration for C3

Is your feature request related to a problem? Please describe.

There is no seamless way to integrate C3 with high level interfaces such as cirq or qiskit. We would like to change that.

Describe the solution you'd like

What

Support for gate level simulation of circuits defined using Qiskit with the C3 tensorflow simulator as a backend

How

Take a circuit transpiled by the Qiskit compiler (based on the system architecture and gateset provided by C3) into QASM, parse it and, if required, create an intermediary output to be read by the C3 model_parser, which is then simulated as usual by the C3 simulator.

Why

Qiskit provides a decent high-level interface for defining quantum circuits, and it's useful to support using Qiskit to define circuits that can initially have a gate-level simulation (OpenQasm) and later a full physics simulation (based on OpenPulse).

Describe alternatives you've considered

None. This is essential.

Additional context

TODO:

  • Figure out API used by Qiskit to call a backend simulator such as the QasmSimulator

  • Inherit the qiskit Provider, Backend and Job Classes to write our own simulator which is a wrapper around the C3 tensorflow simulator

  • Implement basic backend framework with C3Provider, C3Backend, C3Job and C3QiskitError

  • Implement framework in c3_openqasm_simulator to accept Qobj and return Result

  • Create C3 physical qubits from OpenQasm Qobj

  • Convert instructions between Qobj and C3, essentially map the gateset

  • Map execute() in qiskit to internal C3 interfacing

  • Map shots/memory to experiment/population (measurement and readout)

  • Create dictionary of results

  • Achieve the following consistent interface:

    from c3.qiskit import C3Provider
    from qiskit import execute, QuantumCircuit
    qc = QuantumCircuit(2, 2)
    # add quantum gates
    c3_provider = C3Provider()
    c3_backend = c3_provider.get_backend("c3_qasm_simulator")
    c3_backend.set_device_config("device.hjson")
    c3_job = execute(qc, c3_backend, shots=10)
    result = c3_job.result()
    res_counts = result.get_counts(qc)
    print(res_counts)
  • Update tests

  • Update examples

  • Update docs

  • Update workflows with requirements

References:

Missing Documentation

This is a continuously updated issue to keep track of missing documentation in various parts of the code. Please update the issue description or add a comment when you find a section of the code that is poorly documented, not intuitive to use, or has quirks that need to be flagged to the user/developer. Make sure you go through the points listed below before adding something new. (Refs for writing good software docs here and here.)

  • How does one use evaluate() and process() as defined here? The Simulated_Calibration.ipynb is out of date and the other notebooks don't seem to use them. Which experiment parameters need to be set before running these two functions? (Updated notebook with usage)
  • Given an object of the Model class or, e.g., a model config, how does one go about programmatically identifying the qubits (or the number of qubits) or which qubits are coupled (the couplings)? It seems like these subsystems don't have different types. Are they all stored in an unstructured manner inside the Model datatype, with no way to programmatically distinguish a qubit from a coupling except by names like Q1 and Q1-Q2? What would be the C3-preferred way to, e.g., implement get_num_of_qubits(), get_connected_qubits() and get_qubit_levels()?
  • How is the average gate fidelity calculated and which of the several functions defined here do we need to use? Is there a naming convention or catalogue that better outlines what these functions do?
  • Regarding gate definition: in quickstart.hjson, we define single_qubit_gates between Q1 and Q2 as "X90p:Id" and two_qubit_gates as CR90. The system model that this quick-start uses also has couplings for Q4-Q6; how would one go about defining that gate, either single-qubit or two-qubit? Since all these gates seem to be discriminated only by their name, it's not immediately clear how to define two-qubit gates simultaneously between Q1-Q2 and Q4-Q6. Also, are the quotes for the gate names required or not? (Pending restructuring of gate names, qubits and indices)
  • exp.get_gates() - What is the expected usage of this function? Where do I use the output of this function? Is it just a proxy to call exp.propagation()?
  • How and when are the dUs created? Do I need to run exp.get_gates() for the dUs?
  • Docs and example notebooks don't explain the C3-style vs Qiskit-style qubit state labelling and how to use both when working with the C3-Qiskit interface
  • Missing updated documentation on parameters for instantiating Transmon class objects -

    c3/c3/libraries/chip.py

    Lines 289 to 302 in 7b1ccc4

class Transmon(PhysicalComponent):
    """
    Represents the element in a chip functioning as a tunable transmon qubit.

    Parameters
    ----------
    freq: np.float64
        base frequency of the Transmon
    phi_0: np.float64
        half period of the phase dependent function
    phi: np.float64
        flux position
    """
  • #73 (update example notebook in docs)
  • #91 (missing notebook for Model Learning)
  • Missing Introductory Section explaining what the different parts of codebase encapsulate and how they work together (fixed in #110 )
  • #121 (missing notebooks and docs)
  • Create an FAQ or wiki from various questions and discussions in internal user chat
  • Missing docstrings and examples for usage of command line tools, eg, the log reader. #151
  • Missing notebook for Entangling Gate Optimization (closed in #157)
  • Compatibility with qasm style gates both through the qiskit interface and otherwise (closed in #165)
  • Setting a seed for reproducible simulations

Modification of parameter with default

Describe the bug
Modifying the default value of a parameter can lead to unexpected results.

The default value of a parameter is computed once when the function is created, not for every invocation. The "pre-computed" value is then used for every subsequent call to the function. Consequently, if you modify the default value for a parameter this "modified" default value is used for the parameter in future calls to the function. This means that the function may not behave as expected in future calls and also makes the function more difficult to understand.

some_dict.pop() is used in several places in algorithms.py which leads to modification of defaults (ref here and here).
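The pitfall described above is easy to reproduce (function names here are hypothetical, not the ones in algorithms.py):

```python
def run_buggy(options={"maxiters": 10}):
    # pop() mutates the shared default dict, so the key is gone
    # for every later call that relies on the default
    return options.pop("maxiters", None)

def run_fixed(options=None):
    # standard fix: use a None sentinel and copy per call
    options = dict(options) if options is not None else {"maxiters": 10}
    return options.pop("maxiters", None)

assert run_buggy() == 10      # first call consumes the default's key
assert run_buggy() is None    # the shared default was mutated
assert run_fixed() == 10
assert run_fixed() == 10      # default is recreated on every call
```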

Remark
Is there some reason why pop() is used in place of standard dictionary access routines?

Integrate native Tensorflow Learning API to C3 Model Learning

Is your feature request related to a problem? Please describe

The current implementation of Model Learning (the C_3 step) involves duplicated code and a lack of integration with or reuse of existing solutions (a possible case of NIH syndrome).

Describe the solution you'd like

Integrate C3 Model Learning with the Tensorflow ML ecosystem by extending Model and Layer to wrap C3 computations and state.

Advantages of this approach

  1. Easy access to the whole suite of developments in the ML ecosystem in and around Tensorflow
  2. Standardized way to try out various Loss Functions
  3. Standardized way to try out various Optimization functions
  4. Use readily available Hyperparameter Optimization tools for systematic tuning
  5. Easily integrate Tensorboard and associated logging tools
  6. Allow for mixed-design composable models (visualised below)
  7. Provide serialised models ready to make predictions allowing better tune-up cycles
  8. Easily distribute and reuse pre-trained models for prediction or fine-tuning for Transfer Learning
  9. Easily and systematically experiment by turning on/off different layers, training/freezing parameters etc

Describe alternatives you've considered

  1. Current implementation - possibly works but difficult to extend, iterate and experiment
  2. Ditch C3 Model Learning (JK lol no)

Status/Steps

Useful refs here and here

  • Define a custom layer to encapsulate the computations and state (model parameters) of the c3-tf-simulator
    • Map the parameters in (QPU + Electronics) model, eg, qubit frequency, anharmonicity, Hilbert dimensions, couplings, LO/AWG properties etc to typical neural net weights and biases (w and b)
    • Define a forward pass for our simulator layer (call()) based on the gateset and the sequences as defined in the experimental data
    • Optionally define non-trainable weights (parameters in c3 language)
    • Delayed weight creation with lazy building
    • Check if recursive layer stacking works as expected
    • Provide loss tensors to be used during training by providing an add_loss() function
    • Provide add_metric() function to track a FOM during training
  • Define a custom model to expose the fit(), evaluate() and predict() (ref here)
  • Check if a custom training loop is more useful (ref here)
  • Optionally expose save(), save_weights() etc
  • Optionally check if composing models with a mix of c3-tf-simulator and general neural network layers as defined in Tensorflow works as expected
  • Explore choice of Loss Function (L1 vs L2, ref here and here)
  • Explore choice of Optimization Algorithms (SGD, Adam, AdaDelta eg)

Possible Gotchas in this approach

  • Given how tightly coupled state (C3-style model parameters) and computations (the actual computational operations eg matrix exponentiation) are in the current codebase, it might be tricky wrapping them up efficiently in a proper Layer style class
  • There is a lot of ad-hoc hacky approaches involving the use of eg tf.function decorators without adequate reasoning, tf.Variable vs tf.Constant usage, GradientTape etc which might make integration with the standard native TF learning API possibly inefficient, buggy and difficult to keep up with TF changes and developments. However, adopting the TF API ensures we are continuously able to build on and fully tap into the large ecosystem of ML tools and solutions.

c3-tf-model-learning-svg

Add writeup about Hamiltonian bases and frame transformations

Is your feature request related to a problem? Please describe.

There is a non-trivial issue relating to the basis and frame which are used to simulate dynamics:

  • The Hamiltonian can be in either the dressed or product basis:

    • Thermalization is (probably) always defined in the dressed basis (not sure what happens if different qubits are coupled to different baths with different temperatures).
    • Readout is currently assumed to be in the dressed basis (until Thorge will implement something more refined).
  • The Hamiltonian can be represented in one of (at least) 3 frames:

    • The lab frame
    • The frame rotating with the control drive LO (local oscillator)
    • The frame rotating with the qubit
  • "The frame rotating with the qubit" can refer to either the qubit as defined in the product basis or the dressed basis

This whole subject is quite confusing.

Describe the solution you'd like
We need a good write-up to explain this to ourselves.

Describe alternatives you've considered
We find a pre-existing write-up which exactly addresses this issue

Additional context
After we have everything laid out in a clear fashion, we can decide how we want this understanding to be reflected in the code. For example:

  • The code may support doing the dynamics in one of several frames, as preferred by the user; or
  • We can choose one frame and allow the user to transform a state to any of the other frames for plotting; or ...

But that will be a separate discussion

Optimizer API Mismatch between Tensorflow and Scipy Optimizers

Describe the bug
The API for optimizer algorithms defined in our library takes as optional argument a dict where one can specify things such as the maximum number of function evaluations or algorithm iterations. In the current implementation, the tensorflow optimizers treat maxfun like maxiter and this creates inconsistencies when comparing/benchmarking these algorithms.

To Reproduce
Steps to reproduce the behavior:

  1. Run examples/two_qubits.ipynb
  2. In cell line 40, change to algorithms.tf_sgd
  3. Check the optimization run and notice it runs maxfun number of epochs

Expected behavior

  • Pass maxiter instead of maxfun to the TensorFlow optimizer, and it should be processed correctly by the implementation instead of the current erroneous handling.
  • Raise an error if maxfun is passed, noting that these optimizers should instead be used with maxiter

Additional context
Relevant code:

iters = options["maxfun"]
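A sketch of the expected behavior (the helper name get_iteration_budget and the default value are made up; only the maxiter/maxfun handling is the point):

```python
def get_iteration_budget(options):
    """Return the iteration budget for a TensorFlow-style optimizer.

    Accept 'maxiter' and refuse 'maxfun' with a pointer to the
    right key, instead of silently treating maxfun like maxiter.
    """
    if "maxfun" in options:
        raise ValueError(
            "TensorFlow optimizers count iterations, not function "
            "evaluations; pass 'maxiter' instead of 'maxfun'."
        )
    return options.get("maxiter", 100)
```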

Misleading Quantity units with pi

Is your feature request related to a problem? Please describe.
Entering a value in radians requires the unit specifier "Hz", but entering a value in Hz requires the unit specifier "Hz 2pi". This is rather counterintuitive and should be changed.

Currently, if you use the unit Hz 2pi, the input value is multiplied by 2pi, converting from Hz to rad. It would be more intuitive the other way around: enter a value in Hz with the unit Hz, and transform this value to radians only where necessary, e.g. when creating the Hamiltonian.

Describe the solution you'd like
Do not transform the input value of a quantity if 2pi is in the unit name.
Objects which need a value in radians (e.g. during creation of the Hamiltonian) should be able to request a value that is in radians, including a scaling factor dependent on the given unit.
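A sketch of the proposed handling (this Quantity API is hypothetical): the value is stored exactly as the user entered it, and the conversion to radians happens only on request:

```python
import math

class Quantity:
    """Stores a value in the unit the user supplied, never rescaling
    on input; consumers that need radians (e.g. Hamiltonian
    construction) request the conversion explicitly."""

    def __init__(self, value, unit):
        self.value = value  # never rescaled on input
        self.unit = unit

    def in_radians(self):
        # the scaling factor depends on the stored unit
        if self.unit == "Hz":
            return 2 * math.pi * self.value
        return self.value  # assumed to be an angular value already
```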

Describe alternatives you've considered
Transform an initialized Hz unit to Hz 2pi, changing both the unit specifier and the value.

Additional context

Handling of qasm compatible gate ids

Describe the bug
Gate identifiers consist of a (user specified) arbitrary name, e.g. RX90p and the index of the targeted qubit. Handling of this is inconsistent when simulating.

To Reproduce
Using compute_propagators() of the Experiment class, an instruction named RX90p acting on qubit 0 is stored as "RX90p", whereas the lookup_gate method expects "RX90p[0]".
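One way to avoid the mismatch would be to route both storage and lookup through a single canonical-id helper (a sketch; the exact bracket format is an assumption):

```python
def gate_id(name, targets):
    """Build a canonical gate identifier like 'RX90p[0]'.

    If both compute_propagators() (storage) and lookup_gate()
    (retrieval) went through one helper like this, 'RX90p' vs
    'RX90p[0]' mismatches could not occur.
    """
    return f"{name}[{', '.join(str(t) for t in targets)}]"
```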

C3-Qiskit interface produces results with incompatible state labels

Describe the bug
Qiskit's ordering of qubits differs from what is commonly used in physics textbooks. C3 follows the general physics-textbook style. However, when producing output through the C3-Qiskit interface, the expectation is to get results with Qiskit-compatible qubit ordering.

To Reproduce
The following code snippet checks C3 output with Qiskit output. This fails due to qubit indexing mismatch:

    c3_qiskit = C3Provider()
    backend = c3_qiskit.get_backend('c3_qasm_perfect_simulator')
    backend.set_device_config("test/quickstart.hjson")
    qc = get_6_qubit_circuit
    job_sim = execute(qc, backend, shots=1000)
    result_sim = job_sim.result()

    # Test results with qiskit style qubit indexing
    qiskit_simulator = Aer.get_backend("qasm_simulator")
    qiskit_counts = execute(qc, qiskit_simulator, shots=1000).result().get_counts(qc)
    assert result_sim.get_counts(qc) == qiskit_counts

Expected behavior
By default, the c3-qiskit interface should produce qiskit compatible labels with an option to disable this feature if required by the user.

Additional context
Importantly, this change in the representation of multi-qubit states affects the way multi-qubit gates are represented in Qiskit. More details are available here and it might be useful to check if a possible fix goes beyond mere relabelling of qubit state names in the experiment result.
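The relabelling part of a fix could be as simple as reversing each state label (a sketch only; as noted above, multi-qubit gate representations may need more than this, and an opt-out flag would simply skip this step):

```python
def flip_labels(counts):
    """Reverse the bit order of each state label, converting between
    physics-textbook ordering and Qiskit's little-endian qubit
    ordering in a counts dictionary."""
    return {label[::-1]: n for label, n in counts.items()}
```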

Incorrect instantiation of classes

Describe the bug
In c3/utils/parsers.py, multiple class instantiations have not been updated along with changes elsewhere in the code.

c3/c3/utils/parsers.py

Lines 181 to 189 in a4ab975

opt = C2(
    dir_path=cfg["dir_path"],
    run_name=run_name,
    eval_func=eval,
    gateset_opt_map=gateset_opt_map,
    algorithm=algorithm,
    exp_right=exp,
    options=options,
)

does not match

c3/c3/optimizers/c2.py

Lines 12 to 30 in a4ab975

class C2(Optimizer):
    """
    Object that deals with the closed loop optimal control.

    Parameters
    ----------
    dir_path : str
        Filepath to save results
    eval_func : callable
        Infidelity function to be minimized
    pmap : ParameterMap
        Identifiers for the parameter vector
    algorithm : callable
        From the algorithm library
    options : dict
        Options to be passed to the algorithm
    run_name : str
        User specified name for the run, will be used as root folder
    """

Again,

c3/c3/utils/parsers.py

Lines 260 to 272 in a4ab975

opt = C3(
    dir_path=cfg["dir_path"],
    sampling=sampling_func,
    batch_sizes=batch_sizes,
    seqs_per_point=seqs_per_point,
    opt_map=exp_opt_map,
    state_labels=state_labels,
    callback_foms=callback_foms,
    callback_figs=callback_figs,
    algorithm=algorithm,
    options=options,
    run_name=run_name,
)

does not match

c3/c3/optimizers/c3.py

Lines 21 to 47 in a4ab975

class C3(Optimizer):
    """
    Object that deals with the model learning.

    Parameters
    ----------
    dir_path : str
        Filepath to save results
    sampling : str
        Sampling method from the sampling library
    batch_sizes : list
        Number of points to select from each dataset
    seqs_per_point : int
        Number of sequences that use the same parameter set
    pmap : ParameterMap
        Identifiers for the parameter vector
    state_labels : list
        Identifiers for the qubit subspaces
    callback_foms : list
        Figures of merit to additionally compute and store
    algorithm : callable
        From the algorithm library
    run_name : str
        User specified name for the run, will be used as root folder
    options : dict
        Options to be passed to the algorithm
    """

Suggestions

  • Update the parsers module to match changes elsewhere in the code
  • Add tests to prevent regression

Changing optimization solutions

Describe the bug
The optimization in examples/two_qubits.ipynb did not converge to high accuracy after 8e747d5

To Reproduce
Rerun the notebook.

Workaround
The issue is fixed by extending the search bounds of the optimization, as done in 71c4c09

Open questions

  • What changed to cause a different solution?
  • Why don't the existing tests catch this? We test propagation and signal generation, but perhaps not carefully enough.
  • How do we test against this?

Redesign parameter handling

Is your feature request related to a problem? Please describe.

Describe the solution you'd like
Previously parameters of both model and control components were managed in a object.params dictionary. This should be modified so that the property is directly a field of the object. E.g. qubit.params["frequency"] becomes qubit.frequency.
Example:

c3/c3/system/chip.py

Lines 77 to 99 in ef95330

def __init__(
    self,
    name,
    hilbert_dim,
    desc=None,
    comment=None,
    freq=None,
    anhar=None,
    t1=None,
    t2star=None,
    temp=None,
    params=None,
):
    # TODO Cleanup params passing and check for conflicting information
    super().__init__(
        name=name,
        desc=desc,
        comment=comment,
        hilbert_dim=hilbert_dim,
        params=params,
    )
    if freq:
        self.params['freq'] = freq

For this to work each object needs to implement a get_parameters() method that exposes properties to the parametermap to be used instead of comp.params.items() in the following:

c3/c3/parametermap.py

Lines 38 to 45 in ef95330

def __initialize_parameters(self) -> None:
    par_lens = {}
    pars = {}
    for comp in self.__components.values():
        for par_name, par_value in comp.params.items():
            par_id = "-".join([comp.name, par_name])
            par_lens[par_id] = par_value.length
            pars[par_id] = par_value
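A minimal sketch of the proposed change (hypothetical class and method names, for illustration only): parameters become plain attributes, and a get_parameters() method exposes them to the ParameterMap in place of iterating over comp.params.items():

```python
class Qubit:
    """Sketch only: parameters as direct fields instead of a params dict."""

    def __init__(self, name: str, freq: float = 0.0, anhar: float = 0.0):
        self.name = name
        self.freq = freq      # was self.params["freq"]
        self.anhar = anhar    # was self.params["anhar"]

    def get_parameters(self) -> dict:
        # Expose optimizable properties to the ParameterMap.
        return {"freq": self.freq, "anhar": self.anhar}

# The ParameterMap would then build its ids from get_parameters():
qubit = Qubit("Q1", freq=5.0e9, anhar=-210e6)
pars = {f"{qubit.name}-{name}": value
        for name, value in qubit.get_parameters().items()}
print(pars)
```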

Describe alternatives you've considered

Additional context

Optimizer for experiment design

Is your feature request related to a problem? Please describe.
Experiment design is a new type of optimization procedure.

Solution
Following the approach in the literature, the class implements the following procedure:

  • A distribution of model parameters
  • An initial distribution (guess) of control parameters
  • Implement and compute the required goal function(s)
  • Maybe update the model distribution

Missing test coverage for `c3/optimizers/c2.py`

Is your feature request related to a problem? Please describe.
We don't have good test coverage for c3/optimizers/c2.py. Currently it is at 33%

Describe the solution you'd like
Add tests to check the full C2 Calibration workflow

Additional context
A mock experiment must be provided to mimic the behaviour of hardware calibration. This could be something very rudimentary since we already have a more sophisticated workflow in the examples/Simulated_calibration.ipynb that is also checked as part of our CI.
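A very rudimentary mock experiment could look like the following sketch (hypothetical function; assumes the eval_func only needs to return an infidelity for a parameter vector):

```python
import numpy as np

def mock_experiment(params: np.ndarray) -> float:
    """Stand-in for hardware: a quadratic 'infidelity' landscape with a
    known optimum plus a little deterministic noise, so the full C2
    calibration loop can be exercised and its convergence asserted."""
    optimum = np.array([0.5, -0.2])
    rng = np.random.default_rng(seed=1234)
    noise = rng.normal(scale=1e-6)
    return float(np.sum((np.asarray(params) - optimum) ** 2) + abs(noise))

print(mock_experiment(np.array([0.5, -0.2])))  # close to 0, within noise
```

A test could then assert that the calibration loop drives this mock infidelity below a threshold.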

Typing of functions with Quantity arguments

Describe the bug
The type annotations of most objects that take Quantity arguments are wrong: they mostly declare plain float. To get the right typing behavior, we should either make Quantity substitutable for float/array types or annotate the methods as requiring Quantity objects in the first place.

Example of the code:

c3/c3/system/chip.py

Lines 57 to 89 in ef95330

class Qubit(PhysicalComponent):
    """
    Represents the element in a chip functioning as qubit.

    Parameters
    ----------
    freq: np.float64
        frequency of the qubit
    anhar: np.float64
        anharmonicity of the qubit. defined as w01 - w12
    t1: np.float64
        t1, the time decay of the qubit due to dissipation
    t2star: np.float64
        t2star, the time decay of the qubit due to pure dephasing
    temp: np.float64
        temperature of the qubit, used to determine the Boltzmann distribution
        of energy level populations
    """

    def __init__(
        self,
        name,
        hilbert_dim,
        desc=None,
        comment=None,
        freq=None,
        anhar=None,
        t1=None,
        t2star=None,
        temp=None,
        params=None,
    ):

Related: https://github.com/shaimach/c3po/issues/6 and https://github.com/shaimach/c3po/issues/58
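One direction, sketched here with a stand-in Quantity class (not the real c3 one), is to annotate the constructor with Quantity directly, so that type checkers flag callers passing bare floats; supporting __float__ eases the float/Quantity duality:

```python
from typing import Optional

class Quantity:
    """Stand-in for c3's Quantity, for illustration only."""

    def __init__(self, value: float):
        self.value = value

    def __float__(self) -> float:
        # Supporting __float__ lets a Quantity be used where the plain
        # numeric value is needed.
        return float(self.value)

class Qubit:
    # Annotating with Optional[Quantity] instead of np.float64 lets a
    # type checker catch bare floats where a bounded Quantity is meant.
    def __init__(self, name: str, hilbert_dim: int,
                 freq: Optional[Quantity] = None,
                 anhar: Optional[Quantity] = None):
        self.name = name
        self.hilbert_dim = hilbert_dim
        self.freq = freq
        self.anhar = anhar

q = Qubit("Q1", 3, freq=Quantity(5e9))
print(float(q.freq))  # 5000000000.0
```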

Extend and unify libraries

Is your feature request related to a problem? Please describe

Describe the solution you'd like
Chip components should become a library with a more general name, e.g. quantum_elements, physical_elements, physical_components.
Tasks should also become two libraries: readout and initialization.
Devices can also become a library.

All elements of libraries should follow a single implementation, i.e. all be initialized and declared the same way (see issue #21).
All elements of libraries provided should be covered by a very basic test.

Describe alternatives you've considered

Additional context
Only elements of general interest should stay in libraries, all elements specific to a single use case should be added in local environment after import.

Container image for development

Is your feature request related to a problem? Please describe.

A container image for development and/or deployment of c3-toolset is essential as we move towards containerised remote development/simulations using Kubernetes clusters

Describe the solution you'd like

  • A Dockerfile to create an image and deploy a container with all the development dependencies pre-installed
  • A workflow to rebuild and push the docker image to a public registry every time our dependencies change

Describe alternatives you've considered

Manually setting up all dependencies by installing them in the environment/container after it is created

Additional context

Check the microsoft/vscode-dev-container, okteto/python for reference images. Also automate the installation of various VS-Code extensions that are generally used in our dev process. Recommended extensions can also be added by providing an extensions.json file.

`Sensitivity` code is stale and broken

Describe the bug

Code for Sensitivity Analysis in c3/optimizers/sensitivity.py is broken since it has not been updated with the rest of the codebase.

To Reproduce

python c3/main.py test/sensitivity.cfg will throw a bunch of errors due to stale config files and stale code.

Expected behavior

  • Sensitivity Analysis has significant overlap with Model Learning code, in that they use the same goal function. Ideally, the SET class should be derived from the Model Learning C3 class, re-implementing and extending as required
  • run Sensitivity Analysis from the CLI with a working config file
  • run Sensitivity Analysis in an interactive notebook
  • Tests to check that code works as expected (this would require some way to quantify that sensitivity analysis is working, because right now it seems the only way to do that is look at plots)
  • Module level docstrings
  • Tutorial style documentation (either as a part of or following the Model Learning docs)

Additional context

Implementing SET as derived from C3 might either involve making a new class derived from the base Optimizer which is then derived by both C3 and SET or leaving C3 as is and just deriving SET from it. This will depend on how different the two classes are.

Relevant sections in literature -

Introducing a Qobj like in QuTiP? More QuTiP compatibility?

In many places in the code we need to pass information between classes and functions in libraries.
The information we pass includes (non-comprehensive list): the dimensions of the Hilbert subspaces, their order, the computationally relevant states, and the representation (superoperator/operator, state/density matrix/density vector).
For example, when doing propagation the experiment class asks the model class whether lindblad is true, in which case the assumption is that we are using the superoperator + density vector formalism, and hence different types of propagation and population calls are made.

Qutip solves some of these issues by having some of this information in the Qobj class and then you can just make the relevant calls, like population or fidelity calls, on the object and the underlying implementation will change based on the information the object itself provides.
https://qutip.org/docs/latest/guide/guide-basics.html#the-quantum-object-class

I think this is the right way to move forward to allow the code to stay modular.

In general, with the expansion of the propagation library and the recent improvements in QuTiP to model noise (https://arxiv.org/abs/2105.09902), do we want to aim for more compatibility with QuTiP?
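To illustrate the idea, here is a minimal sketch of such a carrier object (hypothetical names, far simpler than QuTiP's Qobj): it bundles the data with its dimensions and representation, so calls like population dispatch on the object itself instead of querying the model:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class StateObj:
    """Bundles state data with subspace dims and representation info."""
    data: np.ndarray
    dims: list = field(default_factory=list)  # e.g. [3, 3] for two qutrits
    rep: str = "ket"                          # "ket" or "dm"

    def population(self, level: int) -> float:
        # Dispatch on the representation carried by the object, instead
        # of the caller asking the model whether lindblad is enabled.
        if self.rep == "ket":
            return float(np.abs(self.data[level]) ** 2)
        if self.rep == "dm":
            return float(np.real(self.data[level, level]))
        raise NotImplementedError(f"Unknown representation: {self.rep}")

psi = StateObj(np.array([1.0, 0.0, 0.0]), dims=[3])
rho = StateObj(np.diag([0.9, 0.1, 0.0]), dims=[3], rep="dm")
print(psi.population(0), rho.population(1))  # 1.0 0.1
```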

Memory leakage due to tf.Variables

Currently, memory usage during optimization increases strongly with every iteration. A memory profiler traces this down to redefining tf.Variables in the propagation method:

c3/c3/experiment.py

Lines 333 to 345 in ebb13e4

hks.append(hctrls[key])
dt = tf.Variable(ts[1].numpy() - ts[0].numpy(), dtype=tf.complex128)
if model.lindbladian:
    col_ops = model.get_Lindbladians()
    dUs = tf_utils.tf_propagation_lind(h0, hks, col_ops, signals, dt)
else:
    dUs = tf_utils.tf_propagation(h0, hks, signals, dt)
self.dUs[gate] = dUs
self.ts = ts
U = tf_utils.tf_matmul_left(dUs)
self.U = U
return U

Here are memory profiles of 20 iterations using

a) dt = tf.Variable(...):
[screenshot: memory profile with tf.Variable]

and b) dt = tf.constant(...):
[screenshot: memory profile with tf.constant]

It therefore seems that there is a memory leak due to the change to tf.Variable introduced in 46be03f by @lazyoracle, which was the main point of that change. If there was no specific reason for it, except for having to explicitly watch constants in the gradient tape, I would suggest reverting tf.Variable to tf.constant.
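A sketch of the suggested revert (illustrative only, not the actual experiment.py code): dt as a tf.constant is not allocated as a persistent Variable on every call, and can still enter a gradient tape via tape.watch(dt) when gradients with respect to dt are needed:

```python
import tensorflow as tf

def step_propagator(h0: tf.Tensor, dt_value: complex) -> tf.Tensor:
    # tf.constant does not allocate a persistent Variable on every call,
    # avoiding the memory growth seen when this runs once per iteration.
    # If gradients w.r.t. dt are needed, use tape.watch(dt) explicitly.
    dt = tf.constant(dt_value, dtype=tf.complex128)
    return tf.linalg.expm(-1j * dt * h0)

u = step_propagator(tf.zeros((2, 2), dtype=tf.complex128), 0.1)
print(u.numpy())  # identity, since H = 0
```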

Explicit tests for tf_utils and qt_utils

Is your feature request related to a problem? Please describe.
We need explicit and extensive tests for the functions in tf_utils and qt_utils. The current tests do not provide full code coverage, and several functions are not currently working correctly, e.g. tf_superoper_average_fidelity.
https://github.com/q-optimize/c3/blob/dev/c3/utils/tf_utils.py#L802-L807
As those functions are the backbone of all calculations, good tests are essential.

Describe the solution you'd like
Write tests

Describe alternatives you've considered

Additional context

Simulated_Calibration Notebook and Docs are outdated

Describe the bug
Simulated_Calibration Notebook and Docs are outdated and throw errors when attempting to execute

To Reproduce

jupyter nbconvert --to=notebook --in-place --execute examples/Simulated_calibration.ipynb

from single_qubit_blackbox_exp import create_experiment

blackbox = create_experiment()
------------------

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-4c1c0a78a5a2> in <module>
      1 from single_qubit_blackbox_exp import create_experiment
      2 
----> 3 blackbox = create_experiment()

~/c3/examples/single_qubit_blackbox_exp.py in create_experiment()
     34         name="Q1",
     35         desc="Qubit 1",
---> 36         freq=Qty(
     37             value=freq,
     38             min=4.995e9 * 2 * np.pi,

TypeError: __init__() got an unexpected keyword argument 'min'

Additional context
We should add a test to ensure all notebooks are running since they often get missed when breaking changes are introduced in the code.

Move `c3.qiskit` outside `c3-toolset`

Is your feature request related to a problem? Please describe.
The qiskit interface need not exist within the core c3-toolset. Its development can be decoupled and would benefit from existing independently of the core codebase. This would also reduce the dependencies of c3-toolset.

Describe the solution you'd like
A separate c3-qiskit repository that acts as a plugin to the c3-toolset, with possible backend support for various hardware control stacks as well.

  • Move c3.qiskit to a c3-qiskit repository
  • c3-qiskit imports c3-toolset and qiskit as dependencies and implements Qiskit Provider, Backend and Job classes.

To-Do/Status

  • Create a c3-qiskit package that works as a drop-in replacement for the current c3.qiskit
  • Add tests to c3-qiskit to check integration with c3-toolset
  • Update c3-toolset to use c3_qiskit in place of c3.qiskit
  • Ensure integration tests for c3_qiskit are present in c3-toolset

`read_config()` does not parse tasks when creating Model objects

Describe the bug
read_config() does not parse tasks when creating a Model object

To Reproduce

  • Create a config file with a tasks component
  • Read config file with read_config()
  • Tasks not parsed and included in Model object

Expected behavior
When tasks are included in the config file, they should get instantiated along with the rest of model attributes

c3/c3/system/model.py

Lines 121 to 154 in ef95330

def read_config(self, filepath: str) -> None:
    """
    Load a file and parse it to create a Model object.

    Parameters
    ----------
    filepath : str
        Location of the configuration file
    """
    with open(filepath, "r") as cfg_file:
        cfg = hjson.loads(cfg_file.read())
    for name, props in cfg["Qubits"].items():
        props.update({"name": name})
        dev_type = props.pop("c3type")
        self.subsystems[name] = device_lib[dev_type](**props)
    for name, props in cfg["Couplings"].items():
        props.update({"name": name})
        dev_type = props.pop("c3type")
        self.couplings[name] = device_lib[dev_type](**props)
        if dev_type == "Drive":
            for connection in self.couplings[name].connected:
                try:
                    self.subsystems[connection].drive_line = name
                except KeyError as ke:
                    raise Exception(
                        f"C3:ERROR: Trying to connect drive {name} to unkown "
                        f"target {connection}."
                    ) from ke
    if "use_dressed_basis" in cfg:
        self.dressed = cfg["use_dressed_basis"]
    self.__create_labels()
    self.__create_annihilators()
    self.__create_matrix_representations()
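A possible fix is to parse the tasks section analogously to Qubits and Couplings. The sketch below is illustrative only; the "Tasks" section name and the task_lib mapping are assumptions mirroring the existing handling:

```python
def parse_tasks(cfg: dict, task_lib: dict) -> dict:
    """Instantiate task objects from a config dict, mirroring how
    Qubits and Couplings are handled in read_config()."""
    tasks = {}
    for name, props in cfg.get("Tasks", {}).items():
        props.update({"name": name})
        task_type = props.pop("c3type")
        tasks[name] = task_lib[task_type](**props)
    return tasks

# Usage with a dummy task class:
class InitialiseGround:
    def __init__(self, name, init_temp=0.0):
        self.name = name
        self.init_temp = init_temp

cfg = {"Tasks": {"init_ground": {"c3type": "InitialiseGround",
                                 "init_temp": 50e-3}}}
tasks = parse_tasks(cfg, {"InitialiseGround": InitialiseGround})
print(tasks["init_ground"].init_temp)  # 0.05
```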

Add callback_fids option to Calibrate(...)

Describe the missing feature

OptimalControl(...) supports a callback_fids=[...] option.
Calibrate(...) does not.
It should.

Describe the solution you'd like

Add a callback_fids=[...] option to Calibrate(...)

Describe alternatives you've considered

Death

Additional context

Generally, the interfaces to all 3 optimizations should be as similar as possible

pip installation error on Windows

Problem Description

Installing c3-toolset with pip throws the following error:

$ pip install c3-toolset
.
.
.
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.

We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.

tensorflow 2.3.1 requires gast==0.3.3, but you'll have gast 0.4.0 which is incompatible.

System Description

  • Windows 10 Build 19042.685
  • Miniconda 4.9.0
  • Python 3.8.5
  • pip 20.2.4

Suggested fix

Freeze the version of gast

Add physics write-ups to the git

Is your feature request related to a problem? Please describe.
We have generated, and are generating, write-ups of relevant physics ahead of coding (e.g. non-Markovian dynamics, SU(4) structure for co-design, etc.). The code isn't really understandable without this supporting material, and unless we keep these in the git, putting the write-up alongside the code which implements it, we'll lose track over time.

Describe the solution you'd like
Create the write-up in whatever tool you like.
When code based on this write-up goes into the repository, so should the write-up.

Describe alternatives you've considered
Wiki. But it needs some version control that matches the code versioning.

Missing line in 'Open-loop optimal control' docs

Describe the bug
On readthedocs there is a line missing in the code, which leads to ValueError: not enough values to unpack (expected 2, got 0) (https://c3-toolset.readthedocs.io/en/latest/optimal_control.html)

To Reproduce
Steps to reproduce the behavior:

  1. Execute the code from Setup of a two-qubit chip with C3 and Open-loop optimal control on readthedocs
  2. See error

Expected behavior
No error

Screenshots
[screenshot of the error traceback]

Desktop (please complete the following information):

  • OS: Windows 10
  • PyCharm 2020.3.4
  • Anaconda 4.9.2 / Python 3.8.5

Solution
Add 'exp.pmap.set_opt_map(gateset_opt_map)' before 'opt.optimize_controls()'

Support for Python 3.9

Describe the missing feature

We are unable to use c3-toolset with Python 3.9. Several dependencies of c3-toolset also require at least Python 3.9 for their latest versions.

Describe the solution you'd like

We would like to use c3-toolset with Python 3.9

Describe alternatives you've considered

N/A

Additional context

Adding support for Python 3.9 should be trivial once #95 is merged.

AWG options ineffective

Bug
Calling one of these setters after creating the AWG instance does not affect the actual function call.

c3/c3/generator/devices.py

Lines 1225 to 1232 in 31e96be

def enable_drag(self):
    self.__options = "drag"

def enable_drag_2(self):
    self.__options = "drag_2"

def enable_pwc(self):
    self.__options = "pwc"

Previously, we checked the option when executing generate_signal() calls; now we bind the appropriate function handle.

Fix
Bind the function handle in the setters.
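A sketch of the fix (simplified, with hypothetical method names): the setter binds the handle that generate_signal() will call, instead of only storing a string that is never consulted:

```python
class AWG:
    def __init__(self):
        self.process = self._process_iq  # default handle

    def _process_iq(self) -> str:
        return "iq"

    def _process_drag(self) -> str:
        return "drag"

    def enable_drag(self) -> None:
        # Bind the function handle so that subsequent generate_signal()
        # calls actually use the DRAG processing.
        self.process = self._process_drag

    def generate_signal(self) -> str:
        return self.process()

awg = AWG()
awg.enable_drag()
print(awg.generate_signal())  # drag
```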

Testing priority and abort on failure

Is your feature request related to a problem? Please describe.
Wasting computing time on heavy simulations when simple tests already have failed.

Describe the solution you'd like
Flag computationally light tests and run them first. Only if they are passed, run the resource intensive ones.
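One way to implement this with pytest (illustrative; the marker names are assumptions and would need registering in pytest.ini or setup.cfg) is to mark the heavy tests and stage the invocations:

```python
import pytest

@pytest.mark.unit  # light, fast check
def test_parameter_roundtrip():
    assert int("42") == 42

@pytest.mark.slow  # heavy simulation, only run once light tests pass
def test_full_propagation():
    assert sum(range(10)) == 45

# Staged CI invocation, aborting on first failure of the light stage:
#   pytest -m "not slow" -x && pytest -m slow
```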

Reoccurring pattern of usage `try-except`

Describe the issue

I just came across the extensive usage of the following pattern (see screenshot): try-except to check whether a key exists in a dictionary. I want to propose using configuration validators or strictly defined record types such as dataclasses-json.

This can be re-written as follows to be more explicit and readable to the user.

if "algorithm" not in cfg:
    raise KeyError("C3:ERROR: No algorithm specified in config")

algorithm_name = cfg["algorithm"]
if algorithm_name not in algorithms:
    raise KeyError(f"C3:ERROR: Unknown algorithm: {algorithm_name}")

There is also a typo: Unkown should be Unknown in C3:ERROR:Unkown sampling method

Screenshots

[screenshot of the try-except pattern in the parsers module]

Add Property based Tests

Is your feature request related to a problem? Please describe.

Now that we have some foundational testing framework in place, we need to augment it to better catch edge cases as well as prevent regressions or new errors popping up when changes are made in related sections of the code. One way of making the test suite broader and more robust is Property Based Testing - testing that relies on properties of the function as opposed to example test cases.

Describe the solution you'd like

Hypothesis is a property based testing framework for Python that also works with pytest. The framework calls the test function thousands of times with generated data, specified loosely by types and bounds, and ensures the properties we’ve defined hold true. If an assertion fails, Hypothesis will keep searching to find the minimal example required to violate the assumptions and show it to us.
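A sketch of what such a test could look like (a hypothetical property, assuming hypothesis is installed): for any drawn Hilbert-space dimension and level, a computational basis state must have unit norm:

```python
from hypothesis import given, strategies as st
import numpy as np

@given(st.integers(min_value=2, max_value=16),
       st.integers(min_value=0, max_value=15))
def test_basis_state_is_normalised(dim, level):
    # Property: every computational basis state has norm 1, for any
    # dimension and any valid level index.
    level = level % dim  # keep the index valid for the drawn dimension
    state = np.eye(dim)[:, level]
    assert np.isclose(np.linalg.norm(state), 1.0)

test_basis_state_is_normalised()  # hypothesis runs many generated cases
```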

Describe alternatives you've considered

Sit and think about every possible edge case and hope to God that you didn't miss any of them.

Additional context
