q-optimize / c3
Toolset for control, calibration and characterization of physical systems
Home Page: https://c3-toolset.readthedocs.io/
License: Apache License 2.0
Is your feature request related to a problem? Please describe.
Currently there is no way to ascertain that new (external) contributors to the codebase are free of conflicts (e.g. with their current employers), or to properly establish code ownership and licensing.
Describe the solution you'd like
Every PR from a new contributor should automatically trigger a digital signing of the CLA
Describe alternatives you've considered
Manually require every external contributor to provide an email (or otherwise written) confirmation clarifying that there are no conflicts.
Additional context
Suggested solution: cla-bot
Is your feature request related to a problem? Please describe.
In the propagation and creation of the propagators (Us), everything is written using Python for loops. These should be replaced by their TensorFlow equivalents to increase performance and actually make use of TensorFlow's workload distribution.
Lines 219 to 224 in ef95330
Lines 140 to 145 in ef95330
Currently every dU is calculated by itself with no parallelization, which could and should be improved.
Describe the solution you'd like
Implement the propagation with native TensorFlow functions.
Describe alternatives you've considered
Additional context
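As a sketch of what loop-free propagation could look like: the cumulative product of per-timeslice propagators can be expressed with `tf.scan` instead of a Python for loop, which lets TensorFlow fuse and distribute the work. The variable `dUs` here is dummy data standing in for the per-slice propagators; names are illustrative, not the actual C3 API.

```python
import tensorflow as tf

# Illustrative sketch: accumulate U_k = dU_k @ U_{k-1} with tf.scan
# instead of a Python for-loop. `dUs` is dummy data (4 identity slices)
# standing in for the per-timeslice propagators computed by C3.
dUs = tf.eye(2, batch_shape=[4], dtype=tf.complex128)

Us = tf.scan(
    lambda U_prev, dU: tf.matmul(dU, U_prev),
    dUs,
    initializer=tf.eye(2, dtype=tf.complex128),
)
# Us[k] is the cumulative product of the first k+1 slices.
final_U = Us[-1]
```

Since all slices are identities here, the final propagator is the identity as well.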
Is your feature request related to a problem? Please describe.
There is no easy and centralised way to check the changes being introduced in a new release. Sometimes the API is broken, and deprecation warnings need to be issued at least a few minor releases ahead.
Describe the solution you'd like
A version-controlled CHANGELOG included with the repository (as a markdown file) which gets updated incrementally during all the PRs that lead up to a particular release. It gets finalised in the release/x.y.z PR before merging into master. Some points:
- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for soon-to-be removed features.
- `Removed` for now removed features.
- `Fixed` for any bug fixes.
- `Security` in case of vulnerabilities.
- An `Unreleased` section at the top of the changelog which then gets updated both during PRs by individual contributors as well as when the team decides to work on and implement a certain feature. This allows for a simple "Coming soon..." as well as makes it easy to prepare the final CHANGELOG during the release PR. More details here.

Describe alternatives you've considered
Discussions on Slack/Rocket.chat, fragmented notes in various PRs, issues and meeting docs, and some minimal points in the GitHub Releases section. None of these have archival or reference value, and they are also difficult to maintain, aggregate and version control.
Additional context
Related: As discussed in the comments on PR #87, we need a well-structured commit message for the merge commits when we merge a branch into dev during the usual development cycle. The expected template for this is as below:
- Short 1-line (50 chars) description of the key feature/bug-fix with associated issue numbers, if any (can go in the commit heading in the GitHub PR merge UI)
- (The part below goes in the merge commit message details)
- Contributors: Full name (and not GitHub handle) of all contributors to this PR, irrespective of who made the code commits.
- Longer multi-line description, typically in the form of a detailed list of various features implemented, with any additional insights/remarks on the implementations etc.
It should be possible to implement this through an automated Github bot that notifies on every PR the requirements of updated tests, updated docs, changelog, commit message, CLA etc. Either enforce this through PR templates or through a github bot.
At the moment, most devices have a `self.signal` where the last signal that was produced gets stored.
For example:
Lines 926 to 931 in 815bf32
We should instead store all signals generated in the generator object by doing something like
signal_stack: List[tf.Variable] = []
for dev_id in self.chains[chan]:
    dev = self.devices[dev_id]
    ...
final_signal[chan] = copy.deepcopy(signal_stack.pop())
signal_stages[chan] = signal_stack
For this to work we would have to make sure that when taking elements from the signal stack they aren't popped.
However, this creates a problem:
Imagine your generator stack has the following devices:
LO, LO_noise, AWG, AWG_noise, Mixer
where the `LO_noise` and `AWG_noise` take 1 input and have 1 output.
Then the popping works because the mixer will find in the stack the noisy AWG output and the noisy LO output.
If we don't pop, the stack when we get to the mixer would have the outputs:
[LO, LO_noise, AWG, AWG_noise]
and the mixer won't know that it needs to take the 2nd and 4th elements of the stack.
This leads to the point that we need to implement a more general (and more intuitive) signal generation chain that is a directed graph.
One way to do this would be when specifying the chain you specify the inputs.
What is currently:
chains={
"TC":["lo", "lo_noise" , "awg", "dac", "resp", "mixer", "fluxbias"],
"Q1":["lo", "awg", "dac", "resp", "mixer", "v2hz"],
}
could become:
chains={
    "TC": [("lo"), ("lo_noise", "lo"), ("awg"), ("dac", "awg"), ("resp", "awg"), ("mixer", "lo_noise", "resp"), ("fluxbias", "mixer")],
    "Q1": [("lo"), ("awg"), ("dac", "awg"), ("resp", "awg"), ("mixer", "lo", "resp"), ("v2hz", "mixer")],
}
where the first element of each tuple is the device that needs to make a signal and the others are the inputs.
This is to get closer to the point where we truly have a directed graph.
The tuple structure is just a suggestion but anything would do even:
chains={
    "TC": [
        {"device": "lo", "inputs": []},
        {"device": "lo_noise", "inputs": ["lo"]},
        ...,
        {"device": "mixer", "inputs": ["awg", "lo"]},
        ...
    ],
    "Q1": [{...}, ..., {...}]
}
I'm aware this is more annoying to set up, but it makes more sense to someone who isn't familiar with the stack and how to make it work for new devices.
[thanks @GlaserN for the input]
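To make the proposal concrete, a minimal sketch of how such a graph-style chain could be evaluated: each node names its device and its inputs, and outputs are looked up by name rather than popped off a stack. The device callables and names here are toy stand-ins, not the real C3 devices, and the chain is assumed to be topologically ordered.

```python
# Minimal sketch of evaluating the proposed graph-style chain spec.
# Device callables are illustrative stand-ins, not real C3 devices.
def evaluate_chain(chain, devices):
    """chain: list of {"device": name, "inputs": [names]}, topologically ordered."""
    outputs = {}
    for node in chain:
        # Look up each named input instead of popping a shared stack,
        # so a device output can feed multiple consumers.
        inputs = [outputs[name] for name in node["inputs"]]
        outputs[node["device"]] = devices[node["device"]](*inputs)
    # The last entry in the chain produces the final signal.
    return outputs[chain[-1]["device"]]

devices = {
    "lo": lambda: 1.0,
    "awg": lambda: 2.0,
    "mixer": lambda a, b: a * b,
}
chain = [
    {"device": "lo", "inputs": []},
    {"device": "awg", "inputs": []},
    {"device": "mixer", "inputs": ["lo", "awg"]},
]
signal = evaluate_chain(chain, devices)
```

Because outputs live in a dictionary keyed by device name, the mixer can always find the LO and AWG branches regardless of how many noise devices sit in between.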
Is your feature request related to a problem? Please describe.
Add tests for ensuring framechange works as expected.
Describe the solution you'd like
As described by @fedroy here in https://github.com/shaimach/c3po/issues/15#issuecomment-727599976
Additional context
Overall discussion in https://github.com/shaimach/c3po/issues/15
Is your issue related to a problem? Please describe.
There is no demonstration notebook showing how one would perform the whole cycle of c3-toolset
including the model learning phase.
Describe the solution you'd like
If beyond the scope, it might be adequate to start directly from step 4 with some dummy dataset from a previous C2 run and perform model learning on that data.
TO-DO/Status
- `C2` (save as dataframes instead of list of dicts)
- `main.py` and `Optimizer` config files

Note: Model Learning Experimentation is being tracked in a separate issue #105
Describe the bug
Possible tensorflow performance bottleneck due to repeated tracing of tf.function
decorated modules
To Reproduce
In `Simulated_Calibration.ipynb`:
C3:STATUS:Saving as: /tmp/tmpaz0djjuu/c3logs/ORBIT_cal/2021_03_16_T_16_57_04/calibration.log
(5_w,10)-aCMA-ES (mu_w=3.2,w_1=45%) in dimension 4 (seed=1004483, Tue Mar 16 16:57:04 2021)
C3:STATUS:Adding initial point to CMA sample.
WARNING:tensorflow:5 out of the last 14 calls to <function tf_matmul_left at 0x7f97602a8dd0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Suggested workarounds
Possible causes and solutions:
(1) creating @tf.function repeatedly in a loop, -- please define your @tf.function outside of the loop.
(2) passing tensors with different shapes, -- @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing.
(3) passing Python objects instead of tensors. -- please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
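A minimal illustration of cause (1) and its fix: a `@tf.function` defined once at module level is traced a single time for a given input signature, so repeated calls in a loop reuse the traced graph instead of retracing. The counter here is only a demonstration device, not part of the C3 code.

```python
import tensorflow as tf

# Demonstration: a @tf.function defined once (outside any loop) is
# traced once per input signature; the Python body (and this counter)
# only runs during tracing, not on every call.
trace_count = 0

@tf.function
def matmul_once(a, b):
    global trace_count
    trace_count += 1  # executes at trace time only
    return tf.matmul(a, b)

x = tf.eye(2)
for _ in range(5):
    matmul_once(x, x)  # same shapes/dtypes -> no retracing
```

Re-creating the decorated function inside the loop instead would trigger the "tracing is expensive" warning quoted above.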
if nq == 1:
    op = op_tuple[0]
if nq == 2:
    op = np.kron(op_tuple[0], op_tuple[1])
if nq == 3:
    op = np.kron(np.kron(op_tuple[0], op_tuple[1]), op_tuple[2])
This should be solved by replacing the code above in the function with this general snippet:
op = op_tuple[0]
for i in range(nq - 1):
    op = np.kron(op, op_tuple[i + 1])
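The loop above is a left fold over `np.kron`, so `functools.reduce` expresses the same computation directly and works for any number of subsystems:

```python
import numpy as np
from functools import reduce

# The general snippet is a fold over np.kron; reduce expresses it
# directly for an arbitrary number of operators.
op_tuple = (np.eye(2), np.array([[0, 1], [1, 0]]), np.eye(2))
op = reduce(np.kron, op_tuple)  # kron(kron(op_tuple[0], op_tuple[1]), op_tuple[2])
```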
The `qiskit.Result` created by the C3 qiskit adapter can't be serialized to JSON because counts are represented as `numpy.int32`.
This probably happens here:
shots_data = (np.round(pop_t.T[-1] * shots)).astype("int32")
import numpy as np
import json
from c3.qiskit import C3Provider
from qiskit import transpile, execute, QuantumCircuit, Aer
qc = QuantumCircuit(6, 6)
qc.rx(np.pi/2, 0)
qc.rx(np.pi/2, 1)
qc.measure([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5])
c3_provider = C3Provider()
c3_backend = c3_provider.get_backend("c3_qasm_perfect_simulator")
c3_backend.set_device_config('quickstart.hjson')
c3_backend.disable_flip_labels()
c3_job = execute(qc, c3_backend, shots=1000)
result = c3_job.result()
# here crash!
json.dumps(result.to_dict())
Workaround: Use a custom `json.JSONEncoder` to serialize numpy types.
Qiskit objects should be serializable.
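The workaround mentioned above can be sketched as a small `json.JSONEncoder` subclass that converts numpy scalars and arrays to native Python types before serialization; the `counts` dict here is dummy data standing in for the adapter's output.

```python
import json
import numpy as np

# Sketch of the suggested workaround: convert numpy scalars/arrays to
# native Python types so json.dumps can handle them.
class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

# Dummy counts standing in for result.to_dict() content.
counts = {"000000": np.int32(478), "000001": np.int32(522)}
serialized = json.dumps(counts, cls=NumpyEncoder)
```

A proper fix would convert the counts to `int` where they are produced, so the default encoder suffices.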
c3-toolset version: /ref/dev

Describe the bug
Naming of gates in C3
is inconsistent with the standard naming convention followed in quantum computing textbooks and community.
Details
What we call `X90p` (and all similarly named gates for `X`, `Y` and `Z`) is essentially `RX(90)` as usually referenced elsewhere.
Expected behavior
| Current Name | Expected Name |
|---|---|
| X90p | RX90p |
| Xp | RXp |
| X90m | RX90m |
| Same for Y, Z | Same for Y, Z |
| CNOT | CRXp |
| CZ | CRZp |
| CR | What does this do? |
| CR90 | Seems fine? |
| iSWAP | Can someone check this? |
The propagation methods can be used to propagate density matrices, state vectors and/or get unitaries. That needs to be organized.
Add decorators to highlight what the specific method is propagating.
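One possible shape for such decorators, purely illustrative and not an existing C3 API: a tagging decorator that records what a propagation method acts on (unitary, state vector, density matrix), so callers, tests and docs can introspect it.

```python
# Hypothetical sketch: a decorator that tags propagation methods with
# what they propagate. Function names here are illustrative only.
def propagates(kind):
    def decorator(func):
        func.propagates = kind  # attach metadata for introspection
        return func
    return decorator

@propagates("unitary")
def compute_unitaries(model):
    ...

@propagates("density_matrix")
def compute_lindblad(model):
    ...
```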
Is your feature request related to a problem? Please describe.
Higher levels of the transmon have not been thoroughly tested in Lindblad simulation.
The Lindblad simulation takes a very long time, especially for long sequences. It is therefore necessary to have a faster approximate version that does not involve full matrix exponentiation of the superoperator.
Describe the solution you'd like
Implement how noise scales for higher levels of the transmon.
Approximate faster calculation of the Lindblad propagation
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
Currently our dependencies involve version pinning, e.g.:
Lines 37 to 47 in 6e7e7a0
The problem is that this will sooner or later lead to conflicts and issues when c3-toolset
exists in environments with other packages that have even slightly differing requirements.
Describe the solution you'd like
`>=` style requirements. OR

There is no seamless way to integrate C3 with high level interfaces such as cirq or qiskit. We would like to change that.
Support for gate level simulation of circuits defined using Qiskit with the C3 tensorflow simulator as a backend
Take the circuit transpiled by the Qiskit compiler (based on the system architecture and gateset provided by C3) into QASM, parse it and create, if required, an intermediary output to be read by the C3 `model_parser`, which is then simulated as usual by our C3 simulator.
Qiskit provides a decent high level interface for defining quantum circuits, and it's useful to have support for using Qiskit to define circuits which can initially have a gate level simulation (`OpenQasm`) and then a full physics simulation (based on `OpenPulse`).
None. This is essential.
- Figure out the API used by Qiskit to call a backend simulator such as the `QasmSimulator`
- Inherit the qiskit `Provider`, `Backend` and `Job` classes to write our own simulator which is a wrapper around the C3 tensorflow simulator
- Implement the basic backend framework with `C3Provider`, `C3Backend`, `C3Job` and `C3QiskitError`
- Implement the framework in `c3_openqasm_simulator` to accept a `Qobj` and return a `Result`
- Create C3 physical qubits from the OpenQasm `Qobj`
- Convert instructions between `Qobj` and C3, essentially mapping the gateset
- Map `execute()` in `qiskit` to the internal C3 interfacing
- Map shots/memory to experiment/population (measurement and readout)
- Create a dictionary of results
Achieve the following consistent interface:
from c3.qiskit import C3Provider
from qiskit import execute, QuantumCircuit
qc = QuantumCircuit(2, 2)
# add quantum gates
c3_provider = C3Provider()
c3_backend = c3_provider.get_backend("c3_qasm_simulator")
c3_backend.set_device_config("device.hjson")
c3_job = execute(qc, c3_backend, shots=10)
result = c3_job.result()
res_counts = result.get_counts(qc)
print(res_counts)
- Update tests
- Update examples
- Update docs
- Update workflows with requirements

Related: `QasmSimulator`, `transpile`, OpenQasm 3, `Qobj` dicts

This is a continuously updated issue to keep track of missing documentation in various parts of the code. Please update the issue description or add a comment when you find a section of the code that is poorly documented, not intuitive to use, or has some quirks that need to be flagged to the user/developer. Make sure you go through the points listed below before adding something new. (Refs for writing good software docs here and here.)
- `evaluate()` and `process()` as defined here? The `Simulated_Calibration.ipynb` is out of date and the other notebooks don't seem to use it. What experiment parameters need to be set before running these two functions? (Updated notebook with usage)
- Given a `Model` class object or, e.g., a model config, how does one go about programmatically identifying the qubits (or the number of qubits) or identifying which qubits are coupled (or the couplings)? It seems like these subsystems don't have different types. Are they all stored in an unstructured manner inside the `Model` datatype, with no way to programmatically tell a qubit from a coupling except by the names `Q1` and `Q1-Q2`? What would be the C3 preferred way to, e.g., `get_num_of_qubits()`, `get_connected_qubits()` and `get_qubit_levels()`?
- `single_qubit_gates` between `Q1` and `Q2` as `"X90p:Id"` and `two_qubit_gates` as `CR90`. The system model that this quick-start uses also has couplings for `Q4-Q6`; how would one go about defining that gate, either single qubit or 2 qubit? Since all these gates seem to be discriminated only by their name, it's not immediately clear how to have two qubit gates defined simultaneously between `Q1-Q2` and `Q4-Q6`. Also, are the quotes for the gate names required or not? (Pending restructuring of gate names, qubits and indices)
- `exp.get_gates()` - What is the expected usage of this function? Where do I use its output? Is it just a proxy to call `exp.propagation()`?
- How are the `dUs` created? Do I need to run `exp.get_gates()` for the `dUs`?
- `Transmon` class objects - Lines 289 to 302 in 7b1ccc4
Describe the bug
Modifying the default value of a parameter can lead to unexpected results.
The default value of a parameter is computed once when the function is created, not for every invocation. The "pre-computed" value is then used for every subsequent call to the function. Consequently, if you modify the default value for a parameter this "modified" default value is used for the parameter in future calls to the function. This means that the function may not behave as expected in future calls and also makes the function more difficult to understand.
`some_dict.pop()` is used in several places in `algorithms.py`, which leads to modification of defaults (ref here and here).
Remark
Is there some reason why pop()
is used in place of standard dictionary access routines?
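The pitfall described above in minimal form: the default dict is created once at function definition, so a `pop()` on it permanently removes the key for all later calls. The function names and the "algorithm" key are illustrative, not the actual `algorithms.py` code.

```python
# Minimal reproduction of the mutable-default pitfall (names are
# illustrative). The default dict is created once at def time, so
# pop() empties it for every subsequent call.
def run(options={"algorithm": "cmaes"}):
    return options.pop("algorithm", None)

first = run()   # the shared default still holds the key
second = run()  # the key was already popped: returns None

# The standard fix: a None sentinel plus a per-call copy.
def run_fixed(options=None):
    options = {"algorithm": "cmaes"} if options is None else dict(options)
    return options.pop("algorithm", None)
```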
The current implementation of Model Learning (the C3 step) suggests duplication of code and a lack of integration/reuse with existing solutions (a possible case of NIH syndrome).
Integrate C3 Model Learning with the Tensorflow ML ecosystem by extending `Model` and `Layer` to wrap C3 computations and state.
- A `layer` to encapsulate the computations and state (model parameters) of the `c3-tf-simulator` (analogous to `w` and `b`)
- A forward pass (`call()`) based on the gateset and the sequences as defined in the experimental data
- An `add_loss()` function
- An `add_metric()` function to track a FOM during training
- A `model` to expose `fit()`, `evaluate()` and `predict()` (ref here)
- `save()`, `save_weights()` etc
- Check that mixing the `c3-tf-simulator` and general neural network layers as defined in Tensorflow works as expected
- `Layer` style class
- `tf.function` decorators without adequate reasoning, `tf.Variable` vs `tf.Constant` usage, `GradientTape` etc, which might make integration with the standard native TF learning API possibly inefficient, buggy and difficult to keep up with TF changes and developments. However, adopting the TF API ensures we are continuously able to build on and fully tap into the large ecosystem of ML tools and solutions.

Is your feature request related to a problem? Please describe.
There is a non-trivial issue relating to the basis and frame which are used to simulate dynamics:
The Hamiltonian can be in either the dressed or product basis:
The Hamiltonian can be represented in one of (at least) 3 frames:
"The frame rotating with the qubit" can refer to either the qubit as defined in the product basis or the dressed basis
This whole subject is quite confusing.
Describe the solution you'd like
We need a good write-up to explain this to ourselves.
Describe alternatives you've considered
We find a pre-existing write-up which exactly addresses this issue
Additional context
After we have everything laid out in a clear fashion, we can decide how we want this understanding to be reflected in the code. For example:
But that will be a separate discussion
Describe the bug
The API for optimizer algorithms defined in our library takes as optional argument a dict where one can specify things such as the maximum number of function evaluations or algorithm iterations. In the current implementation, the tensorflow optimizers treat maxfun
like maxiter
and this creates inconsistencies when comparing/benchmarking these algorithms.
To Reproduce
Steps to reproduce the behavior:
- `examples/two_qubits.ipynb`
- `algorithms.tf_sgd`
- `maxfun` is treated as the number of epochs

Expected behavior
- Pass `maxiter` instead of `maxfun` to the Tensorflow optimizer, and it should be processed by the implementation instead of the current erroneous one.
- If `maxfun` is passed, note that these optimizers should instead be used with `maxiter`.
Additional context
Relevant code:
Line 283 in b6dd98f
Is your feature request related to a problem? Please describe.
Inputting a number in radian values needs the unit specifier "Hz",
but inputting a value in Hz needs the unit specifier "Hz 2pi".
This is rather counterintuitive and should be changed.
However, if you currently use the unit `Hz 2pi`, the input value will be multiplied by 2pi, converting from Hz to rad. It would be more intuitive the other way around: insert a value in Hz with the unit Hz, and transform this value to a radian value if necessary, e.g. when creating the Hamiltonian.
Describe the solution you'd like
Do not transform the input value of a quantity if 2pi is in the quantity name.
Objects which need a value in radians (e.g. creation of the Hamiltonian) should be able to request a value that is in radians, including a scaling factor dependent on the given unit.
Describe alternatives you've considered
Transform an initialized `Hz` unit to `Hz 2pi`, changing both the unit specifier and the value.
Additional context
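The proposed convention can be sketched with a tiny value holder, illustrative only and not the C3 `Quantity` API: the user-facing value stays in Hz, and the conversion to angular frequency happens only where a radian value is requested (e.g. when building the Hamiltonian).

```python
import numpy as np

# Hypothetical sketch (not the C3 Quantity class): store the value as
# given in Hz; convert to rad/s only where a radian value is needed.
class Freq:
    def __init__(self, value_hz):
        self.value_hz = value_hz

    def in_hz(self):
        return self.value_hz

    def in_rad(self):
        # conversion happens at the point of use, e.g. Hamiltonian creation
        return 2 * np.pi * self.value_hz

f = Freq(5e9)
```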
Describe the bug
Gate identifiers consist of an arbitrary (user specified) name, e.g. `RX90p`, and the index of the targeted qubit. Handling of this is inconsistent when simulating.
To Reproduce
Using `compute_propagators()` of the `Experiment` class, an instruction with name `RX90p` acting on qubit 0 is stored as "RX90p", whereas the `lookup_gate` method expects "RX90p[0]".
Describe the bug
Qiskit ordering of qubits is different from what is commonly used in Physics textbooks. C3 follows the general Physics textbook style. However when producing output through the C3-Qiskit interface, the expectation is to get results with compatible qubit ordering.
To Reproduce
The following code snippet checks C3 output with Qiskit output. This fails due to qubit indexing mismatch:
c3_qiskit = C3Provider()
backend = c3_qiskit.get_backend('c3_qasm_perfect_simulator')
backend.set_device_config("test/quickstart.hjson")
qc = get_6_qubit_circuit
job_sim = execute(qc, backend, shots=1000)
result_sim = job_sim.result()
# Test results with qiskit style qubit indexing
qiskit_simulator = Aer.get_backend("qasm_simulator")
qiskit_counts = execute(qc, qiskit_simulator, shots=1000).result().get_counts(qc)
assert result_sim.get_counts(qc) == qiskit_counts
Expected behavior
By default, the c3-qiskit interface should produce qiskit compatible labels with an option to disable this feature if required by the user.
Additional context
Importantly, this change in the representation of multi-qubit states affects the way multi-qubit gates are represented in Qiskit. More details are available here and it might be useful to check if a possible fix goes beyond mere relabelling of qubit state names in the experiment result.
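For the relabelling part of a fix, a minimal sketch: Qiskit orders bitstrings little-endian (qubit 0 rightmost), while the textbook convention puts qubit 0 leftmost, so reversing each key converts between the two. The counts dict is dummy data, and, as noted above, a complete fix may need more than relabelling.

```python
# Minimal relabelling sketch: reverse each bitstring key to convert
# textbook ordering (qubit 0 leftmost) to Qiskit's little-endian
# convention (qubit 0 rightmost). Counts below are dummy data.
def flip_labels(counts):
    return {key[::-1]: value for key, value in counts.items()}

textbook_counts = {"110000": 750, "000000": 250}  # qubits 0 and 1 excited
qiskit_counts = flip_labels(textbook_counts)
```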
Describe the bug
In c3/utils/parsers.py
, multiple class instantiations have not been updated along with changes elsewhere in the code.
Lines 181 to 189 in a4ab975
Lines 12 to 30 in a4ab975
Again,
Lines 260 to 272 in a4ab975
Lines 21 to 47 in a4ab975
Suggestions
- Update the `parsers` module to match changes elsewhere in the code

Describe the bug
The optimization in examples/two_qubits.ipynb did not converge to high accuracy after 8e747d5
To Reproduce
Rerun the notebook.
Workaround
The issue is fixed by extending the search bounds of the optimization, as done in 71c4c09
Open questions
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
Previously, parameters of both model and control components were managed in an `object.params` dictionary. This should be modified so that the property is directly a field of the object, e.g. `qubit.params["frequency"]` becomes `qubit.frequency`.
Example:
Lines 77 to 99 in ef95330
For this to work, each object needs to implement a `get_parameters()` method that exposes properties to the ParameterMap, to be used instead of `comp.params.items()` in the following:
Lines 38 to 45 in ef95330
Describe alternatives you've considered
Additional context
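A minimal sketch of the proposed shape, with illustrative names rather than the actual C3 classes: parameters become plain attributes, and `get_parameters()` exposes them to the parameter map in place of `comp.params.items()`.

```python
# Illustrative sketch (not the actual C3 classes): parameters are
# plain attributes, and get_parameters() exposes them for the
# ParameterMap instead of comp.params.items().
class Qubit:
    def __init__(self, frequency, anharmonicity):
        self.frequency = frequency
        self.anharmonicity = anharmonicity

    def get_parameters(self):
        return {
            "frequency": self.frequency,
            "anharmonicity": self.anharmonicity,
        }

q = Qubit(frequency=5e9, anharmonicity=-300e6)
```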
Is your feature request related to a problem? Please describe.
Experiment design is a new type of optimization procedure.
Solution
Similar to literature, the class implements the procedure
Is your feature request related to a problem? Please describe.
We don't have good test coverage for c3/optimizers/c2.py
. Currently it is at 33%
Describe the solution you'd like
Add tests to check the full C2 Calibration workflow
Additional context
A mock experiment must be provided to mimic the behaviour of hardware calibration. This could be something very rudimentary since we already have a more sophisticated workflow in the examples/Simulated_calibration.ipynb
that is also checked as part of our CI.
Describe the bug
The typing of most objects requiring Quantity objects is wrong, mostly requiring plain floats. To get the right typing behavior, we should either find a way to make Quantity objects count as float/array, or let the methods require Quantity objects in the first place.
Example of the code:
Lines 57 to 89 in ef95330
Related: https://github.com/shaimach/c3po/issues/6 and https://github.com/shaimach/c3po/issues/58
Is your feature request related to a problem? Please describe
Describe the solution you'd like
Chip components should become a library with a more general name, e.g. quantum_elements, physical_elements, physical_components.
Tasks should also become two libraries: readout and initialization.
Devices can also become a library.
All elements of libraries should follow a single implementation, i.e. all be initialized and declared the same way (see issue #21).
All elements of libraries provided should be covered by a very basic test.
Describe alternatives you've considered
Additional context
Only elements of general interest should stay in libraries, all elements specific to a single use case should be added in local environment after import.
A container image for development and/or deployment of c3-toolset
is essential as we move towards containerised remote development/simulations using Kubernetes clusters
A `Dockerfile` to create an image and deploy a container with all the development dependencies pre-installed

Describe alternatives you've considered
Manually setting up all dependencies by installing them in the environment/container after it is created
Check `microsoft/vscode-dev-container` and `okteto/python` for reference images. Also automate the installation of various VS Code extensions that are generally used in our dev process. Recommended extensions can also be added by providing an `extensions.json` file.
Code for Sensitivity Analysis in c3/optimizers/sensitivity.py
is broken since it has not been updated with the rest of the codebase.
Running `python c3/main.py test/sensitivity.cfg` will throw a bunch of errors due to stale config files and stale code.
- The `SET` class should be derived from the Model Learning `C3` class, re-implementing and extending as required
- Implementing `SET` as derived from `C3` might either involve making a new class derived from the base `Optimizer` which is then derived by both `C3` and `SET`, or leaving `C3` as is and just deriving `SET` from it. This will depend on how different the two classes are.
Relevant sections in literature -
In many places in the code we need to pass information between classes and functions in libraries.
The information we pass includes (not a comprehensive list): the dimensions of the subspaces of the Hilbert space, their order, the computationally relevant states, and the representation (superoperator/operator, state/density matrix/density vector).
For example, when doing propagation the experiment class asks the model class if lindblad is true, in which case the assumption is that we are using the superoperator + density vector formalism, and hence different types of propagation and population calls are made.
Qutip solves some of these issues by having some of this information in the Qobj class and then you can just make the relevant calls, like population or fidelity calls, on the object and the underlying implementation will change based on the information the object itself provides.
https://qutip.org/docs/latest/guide/guide-basics.html#the-quantum-object-class
I think this is the right way to move forward to allow the code to stay modular.
In general, with the expansion of the propagation library and the recent improvements made in qutip to have noise (https://arxiv.org/abs/2105.09902), do we want to aim for more compatibility with QuTiP?
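A Qobj-style container as described could look like the following, purely hypothetical sketch: the array travels together with its subspace dimensions and representation, so library code can dispatch on the object (e.g. superoperator vs. unitary propagation) instead of querying the model class.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

# Hypothetical sketch of a Qobj-like container (names illustrative):
# the data carries its subspace dimensions and representation, so
# population/fidelity/propagation calls can dispatch on the object.
@dataclass
class QuantumObject:
    data: np.ndarray
    dims: List[int]       # dimensions of the Hilbert subspaces, in order
    rep: str = "state"    # "state", "density_matrix", "superoper", ...

    def is_vectorized(self):
        # e.g. Lindblad propagation would branch on this instead of
        # asking the model class whether lindblad is enabled
        return self.rep in ("density_vector", "superoper")

psi = QuantumObject(data=np.array([1.0, 0.0]), dims=[2], rep="state")
```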
Currently, memory use in optimization increases strongly with every iteration. Using a memory profiler, this tracks down to redefining tf.Variables in the `propagation` method in tensorflow:
Lines 333 to 345 in ebb13e4
dt = tf.variable...
It therefore seems that there is memory leakage due to the change to tf.Variables introduced in 46be03f by @lazyoracle, which has been the main point of changing the definition. If there was no specific reason, except for explicitly having to watch constants in the gradient tape, I would suggest reverting tf.Variables to tf.constant.
Is your feature request related to a problem? Please describe.
We need explicit and extensive tests for the functions in tf_utils and qt_utils. Code coverage is not given by the current tests, and several functions are not currently working correctly, e.g. tf_superoper_average_fidelity.
https://github.com/q-optimize/c3/blob/dev/c3/utils/tf_utils.py#L802-L807
As those functions are the backbone of all calculations, good tests are essential for all computations.
Describe the solution you'd like
Write tests
Describe alternatives you've considered
Additional context
Describe the bug
Simulated_Calibration Notebook and Docs are outdated and throw errors when attempting to execute
To Reproduce
jupyter nbconvert --to=notebook --in-place --execute examples/Simulated_calibration.ipynb
from single_qubit_blackbox_exp import create_experiment
blackbox = create_experiment()
------------------
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-4c1c0a78a5a2> in <module>
1 from single_qubit_blackbox_exp import create_experiment
2
----> 3 blackbox = create_experiment()
~/c3/examples/single_qubit_blackbox_exp.py in create_experiment()
34 name="Q1",
35 desc="Qubit 1",
---> 36 freq=Qty(
37 value=freq,
38 min=4.995e9 * 2 * np.pi,
TypeError: __init__() got an unexpected keyword argument 'min'
Additional context
We should add a test to ensure all notebooks are running since they often get missed when breaking changes are introduced in the code.
Describe the bug
Line 229 in d571137
Is your feature request related to a problem? Please describe.
The `qiskit` interface is an independent interface that need not exist within the core `c3-toolset`. Its development can be decoupled and would benefit from existing independently of the core codebase. This would also reduce the dependencies of `c3-toolset`.
Describe the solution you'd like
A separate `c3-qiskit` repository that acts as a plugin to `c3-toolset`, with possible backend support for various hardware control stacks as well.
- Move `c3.qiskit` to a `c3-qiskit` repository
- `c3-qiskit` imports `c3-toolset` and `qiskit` as dependencies and implements the Qiskit `Provider`, `Backend` and `Job` classes.

To-Do/Status
- A `c3-qiskit` package that works as a drop-in replacement for the current `c3.qiskit`
- Test `c3-qiskit` to check integration with `c3-toolset`
- Update `c3-toolset` to use `c3_qiskit` in place of `c3.qiskit`
- Ensure no leftovers of `c3_qiskit` are present in `c3-toolset`
Describe the bug
`read_config()` does not parse tasks when creating a `Model` object
To Reproduce
- Create a `Model` object from a config file via `read_config()`

Expected behavior
When tasks are included in the config file, they should get instantiated along with the rest of model attributes
Lines 121 to 154 in ef95330
OptimalControl(...) supports a callback_fids=[...] option.
Calibrate(...) does not.
It should.
Add a callback_fids=[...] option to Calibrate(...)
Death
Generally, the interface to all 3 optimizations should be as similar as possible.
Installing c3-toolset
with pip throws the following error:
$ pip install c3-toolset
.
.
.
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
tensorflow 2.3.1 requires gast==0.3.3, but you'll have gast 0.4.0 which is incompatible.
`pip` version: 20.2.4

Freeze the version of `gast`
Is your feature request related to a problem? Please describe.
We have been/are generating write-ups of relevant physics ahead of coding (e.g. non-Markovian dynamics, SU(4) structure for co-design, etc.). The code isn't really understandable without this supporting material, and unless we have these in git, putting the write-up alongside the code which implements it, we'll lose track over time.
Describe the solution you'd like
Create the write-up in whatever tool you like.
When code based on this write-up goes into the repository, so should the write-up.
Describe alternatives you've considered
Wiki. But it needs some version control that matches the code versioning.
Describe the bug
On readthedocs there is a line missing in the code example, which leads to `ValueError: not enough values to unpack (expected 2, got 0)` (https://c3-toolset.readthedocs.io/en/latest/optimal_control.html)
To Reproduce
Steps to reproduce the behavior:
Expected behavior
No error
Desktop (please complete the following information):
Solution
Add `exp.pmap.set_opt_map(gateset_opt_map)` before `opt.optimize_controls()`.
We are unable to use `c3-toolset` with Python 3.9. Various dependencies of `c3-toolset` also require at least Python 3.9 to be able to use their latest version.
We would like to use `c3-toolset` with Python 3.9.
N/A
Adding support for Python 3.9 should be trivial once #95 is merged.
Bug
Using one of these setters after creating the AWG instance does not affect the actual function call.
Lines 1225 to 1232 in 31e96be
Previously, we checked the option when executing `generate_signal()` calls; now we bind the appropriate function handle.
Fix
Bind the function handle in the setters.
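The fix can be sketched as follows. This is a schematic stand-in, not the real c3 `AWG` class; the shape functions and setter name are invented for illustration.

```python
import math


class AWG:
    """Schematic AWG showing handle binding in a setter."""

    def __init__(self, shape="gaussian"):
        self._shapes = {"gaussian": self._gaussian, "square": self._square}
        self.set_shape(shape)

    def set_shape(self, name):
        # Bind the function handle here, instead of checking the option
        # inside generate_signal() on every call. Calling the setter after
        # construction now actually changes what generate_signal() does.
        self._shape_fn = self._shapes[name]

    def generate_signal(self, t):
        return self._shape_fn(t)

    def _gaussian(self, t):
        return math.exp(-t * t)

    def _square(self, t):
        return 1.0 if abs(t) < 1.0 else 0.0
```

With this pattern, `awg.set_shape("square")` after construction re-binds `_shape_fn`, so later `generate_signal()` calls pick up the change, which is exactly what the reported bug prevented.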
Is your feature request related to a problem? Please describe.
Wasting computing time on heavy simulations when simple tests have already failed.
Describe the solution you'd like
Flag computationally light tests and run them first. Only if they pass, run the resource-intensive ones.
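One common way to implement this with pytest is a custom marker; the marker name `slow` below is a suggestion, not an existing convention in this repository:

```ini
# pytest.ini — register a marker for resource-intensive tests
[pytest]
markers =
    slow: marks a test as computationally heavy (deselect with '-m "not slow"')
```

Heavy simulation tests get decorated with `@pytest.mark.slow`, and CI runs `pytest -m "not slow"` first, followed by `pytest -m slow` only if the cheap tests pass.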
Describe the issue
I just came across extensive usage of the following pattern (see screenshot): `try`-`except` to check whether a key exists in a dictionary. I want to propose the usage of configuration validators, or strictly defined record types such as dataclasses-json.
This can be re-written to the following to be more explicit and readable to the user:

```python
if "algorithm" not in cfg:
    raise ...
algorithm_name = cfg["algorithm"]
if algorithm_name not in algorithms:
    raise ...
```
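The record-type variant of the same idea can be sketched with the stdlib `dataclasses` module (the issue suggests dataclasses-json, which adds JSON loading on top of this). The field and registry names below are invented; the real c3 config schema may differ.

```python
from dataclasses import dataclass

# Illustrative registry of known algorithm names, not the real c3 one.
ALGORITHMS = {"lbfgs", "cmaes"}


@dataclass
class OptimizerConfig:
    algorithm: str

    def __post_init__(self):
        # Validation lives in one place, next to the schema definition.
        if self.algorithm not in ALGORITHMS:
            raise ValueError(f"C3:ERROR:Unknown algorithm: {self.algorithm}")


def from_cfg(cfg: dict) -> OptimizerConfig:
    # Explicit membership checks instead of try/except around key lookups.
    if "algorithm" not in cfg:
        raise KeyError("C3:ERROR:No algorithm specified")
    return OptimizerConfig(algorithm=cfg["algorithm"])
```

The advantage over scattered `try`-`except` blocks is that the set of required keys and the error messages are declared once, in the record type, rather than re-implemented at every lookup site.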
There is also a typo: `Unkown` should be `Unknown` in `C3:ERROR:Unkown sampling method`.
Screenshots
Currently, the AWG class contains a lot of code related to computing the shapes for Inphase and Quadrature signals that should be moved to pulse shapes.
More description TBD.
Now that we have some foundational testing framework in place, we need to augment it to better catch edge cases as well as prevent regressions or new errors popping up when changes are made in related sections of the code. One way of making the test suite broader and more robust is Property Based Testing - testing that relies on properties of the function as opposed to example test cases.
Hypothesis is a property based testing framework for Python that also works with pytest. The framework calls the test function thousands of times with generated data, specified loosely by types and bounds, and ensures the properties we’ve defined hold true. If an assertion fails, Hypothesis will keep searching to find the minimal example required to violate the assumptions and show it to us.
Sit and think about every possible edge case and hope to God that you didn't miss any of them.
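To make the idea concrete, here is a hand-rolled illustration using only the stdlib; Hypothesis automates the generation, shrinking, and reporting that this loop only hints at. The property checked (sorting is idempotent and length-preserving) is just an example, not a real c3 test.

```python
import random


def check_property(prop, gen, n_examples=200, seed=0):
    """Call `prop` on many generated examples and fail on the first violation."""
    rng = random.Random(seed)
    for _ in range(n_examples):
        example = gen(rng)
        assert prop(example), f"property violated for {example!r}"


def sorted_is_idempotent(xs):
    # Sorting twice gives the same result, and no elements are lost.
    s = sorted(xs)
    return sorted(s) == s and len(s) == len(xs)


check_property(
    sorted_is_idempotent,
    lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))],
)
```

With Hypothesis, the generator lambda is replaced by a strategy, e.g. `@given(st.lists(st.integers()))` from `hypothesis.strategies`, and failing examples are automatically shrunk to a minimal counterexample.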