sandialabs / pygsti
A python implementation of Gate Set Tomography
Home Page: http://www.pygsti.info
License: Apache License 2.0
I'm having problems running the HTML report files in Firefox on Ubuntu: I get the "report loading failed" screen, even though that screen claims Firefox doesn't have any such issue. I continue to see the problem when running through a Jupyter notebook, even with the suggested patch on the page. Serving the files through a Python HTTP server does appear to work, but that has all manner of caching issues when I switch between different reports.
I suppose this means that Firefox is no longer allowing locally-loaded html files to load other files? I can't seem to find details of this/a workaround anywhere, perhaps this can be updated in the report loading failed screen?
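One workaround, sketched here using only the Python standard library (the served directory and port handling are illustrative, not anything pyGSTi ships), is to serve the report directory over local HTTP instead of opening the file directly, which sidesteps Firefox's restrictions on locally-loaded files:

```python
import threading
import urllib.request
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory (e.g. the folder containing the report's
# index.html) on an OS-assigned free port.
handler = partial(SimpleHTTPRequestHandler, directory=".")
server = HTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The report can now be opened at http://127.0.0.1:<port>/ in Firefox.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
print(status)  # 200 means the server is up
```

This avoids the caching problems of leaving one long-lived server running, since each report can get its own throwaway port.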
Report generation hangs on systems where latex packages (etoolbox, in my case) are missing. It'd be nice if we aborted with an error in this case, or at least let the user see the output from latex so it's evident what's happening.
When creating a brief pdf report using pygsti, using a default directory name with underscores creates a LaTeX error. This is caused by a failing math interpretation of the directory name in the \hypersetup{pdfinfo={ \putfield{pdfinfo}{} }} section in the GST report template. Using \detokenize{directory_name} in the generated .tex file solves the problem, however this is not practical since the error is recreated every time a new report is generated. For now, my workaround is to pop the key 'defaultDirectory' before calling the function _to_pdfinfo in pygsti's results.py. When using several computers, this becomes cumbersome.
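A sketch of the suggested fix (the helper name here is mine, not pyGSTi's): when building the pdfinfo block, any user-supplied value could be wrapped in \detokenize so underscores are taken literally rather than triggering math mode:

```python
def texify_pdfinfo_value(value):
    """Wrap a pdfinfo value in \\detokenize so characters like '_'
    are treated literally instead of causing LaTeX math-mode errors."""
    return r"\detokenize{%s}" % value

line = texify_pdfinfo_value("tutorial_files/my_report_dir")
print(line)  # \detokenize{tutorial_files/my_report_dir}
```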
If I use the different functions in jamiolkowski.py to calculate the negative e-vals I get inconsistent values. I think the problem is that mags_of_negative_choi_evals() does not call jamiolkowski_iso() with the basis of the gates in the gateset.
It occurred when I ran the tutorial files 00 Quick and easy GST.ipynb and 07 Report Generation.ipynb. The error is the same in both files. When it reaches .create_presentation_ppt, it can't find the progressTable.png file.
Here is the traceback:
IOError Traceback (most recent call last)
<ipython-input-14-dac62d848da6> in <module>()
1 #create GST slides (tables and figures of full report in Powerpoint slides; best for folks familiar with GST)
----> 2 results.create_presentation_ppt(confidenceLevel=95, filename="tutorial_files/easy_slides.pptx", verbosity=2)
/home/wujizhou/pyGSTi/packages/pygsti/report/results.py in create_presentation_ppt(self, confidenceLevel, filename, title, datasetLabel, suffix, debugAidsAppendix, pixelPlotAppendix, whackamoleAppendix, m, M, verbosity, pptTables)
2338 #body_shape = slide.shapes.placeholders[1]; tf = body_shape.text_frame
2339 add_text_list(slide.shapes, 1, 2, 8, 2, ['Ns is the number of gate strings', 'Np is the number of parameters'], 15)
-> 2340 drawTable(slide.shapes, 'progressTable', 1, 3, 8.5, 4, ptSize=10)
2341
2342 slide = add_slide(SLD_LAYOUT_TITLE_NO_CONTENT, "Detailed %s Analysis" % plotFnName)
/home/wujizhou/pyGSTi/packages/pygsti/report/results.py in draw_table_latex(shapes, key, left, top, width, height, ptSize)
2279
2280 pathToImg = _os.path.join(fileDir, "%s.png" % key)
-> 2281 return draw_pic(shapes, pathToImg, left, top, width, height)
2282
2283
/home/wujizhou/pyGSTi/packages/pygsti/report/results.py in draw_pic(shapes, path, left, top, width, height)
2283
2284 def draw_pic(shapes, path, left, top, width, height):
-> 2285 pxWidth, pxHeight = Image.open(open(path)).size
2286 pxAspect = pxWidth / float(pxHeight) #aspect ratio of image
2287 maxAspect = width / float(height) #aspect ratio of "max" box
IOError: [Errno 2] No such file or directory: 'tutorial_files/easy_slides_files/progressTable.png'
Travis CI fails with the output:
Running setup.py install for scipy
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
The build has been terminated
When running two-qubit GST using MPI I am getting the following error, on the 5th iteration of MLGST
Traceback (most recent call last):
File "pygsti_2q_mpi.py", line 58, in <module>
memLimit=memLim, verbosity=3, comm=comm)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/drivers/longsequence.py", line 466, in do_long_sequence_gst
output_pkl, printer)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/drivers/longsequence.py", line 685, in do_long_sequence_gst_base
gs_lsgst_list = _alg.do_iterative_mlgst(**args)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/algorithms/core.py", line 2845, in do_iterative_mlgst
memLimit, comm, distributeMethod, profiler, evt_cache)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/algorithms/core.py", line 1439, in do_mc2gst
verbosity=printer-1, profiler=profiler)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/optimize/customlm.py", line 209, in custom_leastsq
new_f = obj_fn(new_x)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/algorithms/core.py", line 1207, in _objective_func
gs.bulk_fill_probs(probs, evTree, probClipInterval, check, comm)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/objects/gateset.py", line 2637, in bulk_fill_probs
evalTree, clipTo, check, comm)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/objects/gatematrixcalc.py", line 2067, in bulk_fill_probs
mySubTreeIndices, subTreeOwners, mySubComm = evalTree.distribute(comm)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/objects/evaltree.py", line 441, in distribute
_mpit.distribute_indices(list(range(nSubtreeComms)), comm)
File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/tools/mpitools.py", line 79, in distribute_indices
loc_comm = comm.Split(color=color, key=rank)
File "MPI/Comm.pyx", line 199, in mpi4py.MPI.Comm.Split (src/mpi4py.MPI.c:91864)
mpi4py.MPI.Exception: Other MPI error, error stack:
PMPI_Comm_split(471)..........: MPI_Comm_split(MPI_COMM_WORLD, color=0, key=11, new_comm=0x7faa10d37178) failed
PMPI_Comm_split(453)..........:
MPIR_Comm_split_impl(222).....:
MPIR_Get_contextid_sparse(752): Too many communicators
This is with pyGSTi v0.9.5, and mpi4py v2.0.0, run with mpiexec -n 16 python3 pygsti_2q_mpi.py
Here's the script: pygsti_2q_mpi.py
Any hints as to what is going wrong would be appreciated. I'm rerunning this with mpi4py v3.0.0 right now...
See https://travis-ci.org/pyGSTio/pyGSTi/jobs/505328727, for instance
Pickling datasets in python 3.7 includes uuid.SafeUUID, which isn't present in earlier python versions. A dataset saved in 3.7 can't be opened in 3.5 or 2.7.
We should serialize datasets some other way, probably using numpy's builtin serialization.
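A minimal sketch of the numpy-based approach (the flattened circuits/counts layout below is illustrative, not pyGSTi's actual dataset schema): np.savez writes plain .npy arrays, so no Python-version-specific objects like uuid.SafeUUID ever enter the file.

```python
import io
import numpy as np

# Hypothetical flattened dataset: circuit labels plus outcome counts.
circuits = np.array(["Gx", "GxGy", "GyGxGy"])
counts = np.array([[55, 45], [72, 28], [50, 50]], dtype=np.int64)

# np.savez produces a zip of .npy files; loading with
# allow_pickle=False guarantees no pickled Python objects are involved.
buf = io.BytesIO()
np.savez(buf, circuits=circuits, counts=counts)

buf.seek(0)
loaded = np.load(buf, allow_pickle=False)
assert (loaded["circuits"] == circuits).all()
assert (loaded["counts"] == counts).all()
```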
Instead of hard-coding the version, just get it from the latest git tag; this streamlines automated deployment a bit. See setuptools-scm.
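A minimal sketch of what that could look like in setup.py (assuming setuptools-scm's standard usage; everything else stays unchanged):

```python
# setup.py (sketch)
from setuptools import setup

setup(
    use_scm_version=True,               # version derived from the latest git tag
    setup_requires=['setuptools_scm'],
    # ... existing arguments unchanged ...
)
```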
@kmrudin, as discussed over Skype a bug report for the CZ construction.
When generating a CZ gate using the following code I get a CZ gate that is not symmetric with respect to which qubit is the target (which it should be), and applying it twice does not give the identity.
CZ_01 = pygsti.construction.build_gate([4],[('Q0', 'Q1')], 'CZ(pi, Q0, Q1)',basis='pp')
CZ_10 = pygsti.construction.build_gate([4],[('Q0', 'Q1')], 'CZ(pi, Q1, Q0)',basis='pp')
Plotting this using matplotlib's plt.matshow(CZ_01) gives the following.
Constructing the gate by hand using:
myUnitary = np.diag([1,1,1,-1])
mySuperOp_stdbasis = pygsti.unitary_to_process_mx(myUnitary)
mySuperOp_ppbasis = pygsti.std_to_pp(mySuperOp_stdbasis)
Gcz = mySuperOp_ppbasis
gives exactly what is expected.
Here Gcz@Gcz is equal to the identity, as expected.
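For reference, the two expected properties can be checked numerically without pyGSTi at all. This sketch (pure numpy, my own construction rather than pyGSTi's) builds the Pauli transfer matrix of the ideal CZ directly and verifies symmetry under swapping the qubits and squaring to the identity:

```python
import itertools
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
paulis = [np.kron(a, b) for a, b in itertools.product([I, X, Y, Z], repeat=2)]

def ptm(U):
    """Pauli transfer matrix R_ij = Tr(P_i U P_j U^dag)/d for a 2-qubit unitary."""
    return np.real(np.array([[np.trace(Pi @ U @ Pj @ U.conj().T) / 4
                              for Pj in paulis] for Pi in paulis]))

CZ = np.diag([1, 1, 1, -1])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

R = ptm(CZ)
# CZ is symmetric under exchanging control and target...
assert np.allclose(ptm(SWAP @ CZ @ SWAP), R)
# ...and is its own inverse, so its superoperator squares to the identity.
assert np.allclose(R @ R, np.eye(16))
```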
Commit a4a93eb introduces a new function, get_transition, for automatically computing the variable linlog_trans which gets fed into the LinLogNorm class. The purpose of this issue is to work through the following questions (perhaps among others). Feedback from @enielse @kmrudin and @jarthurgross would be appreciated.
Should linlog_trans be set automatically, and should users not be able to override that value?
What should the default linlog_trans value be? (As you'll see in the commit, it is currently defaulted to None. This currently conflicts with the documentation... :disappointed:)
get_transition currently accepts an eps parameter, which determines the quantile. However, eps has not been "bubbled up" through the rest of the code.

Our automated deployment uses a username/password stored as CI environment variables for deployment. They're marked as secure variables in Travis and I don't know what protection that affords, but since I don't see an automated deployment user in the pyGSTio org I'm a little nervous it's somebody's github login... Related comment.
I don't think we have deploy keys available through travis-ci.org, but we can migrate to travis-ci.com. See #47
Error when trying to import module functions:
from pygsti.construction import make_lsgst_experiment_list, std1Q_XY, std1Q_XYI, std2Q_XYCNOT
File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/__init__.py", line 15, in <module>
from . import report as rpt
File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/__init__.py", line 12, in <module>
from .factory import *
File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/factory.py", line 26, in <module>
from . import workspace as _ws
File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/workspace.py", line 26, in <module>
from . import plotly_plot_ex as _plotly_ex
File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/plotly_plot_ex.py", line 11, in <module>
from plotly.offline.offline import _plot_html
ImportError: cannot import name '_plot_html'
Turns out plotly released v3.8.0 like 11 hours ago, which removed the function _plot_html. The easy fix is to pin the plotly version in your requirements.txt and setup.py, then tag and release a new version.
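As a sketch of the pin (the exact bound is an assumption; anything below 3.8.0 should still have _plot_html):

```python
# setup.py / requirements.txt (sketch): keep plotly below the release
# that removed _plot_html
install_requires = [
    "plotly>=3.7,<3.8",
    # ...
]
```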
I am using do_stdpractice_gst() on two-qubit GST data, and with default parameters the single-2QUR gauge optimisation is only run on the Target model data, not on the results of the TP/CPTP modes.
Only the "-- Performing 'single' gauge optimization" message is printed for the TP/CPTP steps, and indeed the generated report is missing the single-2QUR data for all but the Target model.
(This sounds like a trivial bug that would be easier to fix for me than to write up, but then, you couldn't accept a PR from me unless the legal situation has changed in the meantime.)
We've got undefined names in a couple of spots around the codebase. See this linter log. I'm calling this low-priority because if neither tests nor users have caught these then apparently these units aren't terribly important.
In Tutorial 01, section "Creating a GateSet from scratch", how do you set the dimension? When I write the file "MyTargetGateset.txt" the basis is "UNKNOWN".
Python 2.7 will reach end-of-life on January 1st, 2020 (see this melodramatic countdown timer). Additionally, many major Python projects have pledged to drop support for Python 2 on or before that date.
The next major release of pyGSTi (v0.9.9) will drop support for Python 2. In other words, we'll be limiting our future support to Python 3.5 and 3.7. Users running pyGSTi on Python 2 should consult the official guide for more information on porting their environment to Python 3.
As of ace6d65 our CI no longer builds for Python 2.7. Developers are no longer required to ensure Python 2.7 compatibility in new contributions. Here's what should be done before v0.9.9:
- Set python_requires in setup.py
- Remove sys.version_info checks for python 2
- Remove __future__ imports
- Remove pygsti.tools.compattools
(sorry in advance for abusing math terminology)
Describe the bug
In pygsti.baseobjs.objectivefn see _spam_penalty_jac_fill. I'm pretty sure that the assignments here should be adding another term to the first axis index (possibly the key for effectvec?) because as it is, each iteration of the loop will overwrite the same row. Additional space is allocated in the ObjectiveFunction __init__ but never written to, so the returned jacobian will have garbage data at the end!
Additional context
See this failed build from the test refactor branch, and compare to this subsequent build. In custom_leastsq, one uninitialized row of the jacobian meant there was a garbage value in the diagonal, which would rarely (and nondeterministically) cause mu to grow very very fast and fill the diagonal with infs, causing the error. The only change between builds was adding a CI step to print out the build environment so I could debug the failed test 🤣
Report generation can fail due to plot inlining being enabled in Jupyter (this came up in #6)
Note that this can happen without the user enabling inlining in the notebook or in the configuration. The docker image from Jupyter has a hook that enables inlining implicitly whenever matplotlib is imported in the notebook.
Here is a notebook with a small example: Minimal+example+plot+inlining.ipynb.zip
If report generation crucially depends on having inlining disabled, I think pyGSTi should internally disable inlining and restore it to the original state (or perhaps find a different workaround). Users are likely to need inlining for their own analysis beyond pyGSTi, and it seems onerous to expect them to keep it disabled just for pyGSTi.
When a data set contains 0-counts (i.e. one or more circuit outcomes are never observed in an experiment) then the GST optimization reports an incorrectly low log-likelihood value.
Just wanted to document a problem that cvxopt seems to have with Macs running El Capitan. On running one of the tests, test_bootstrap (__main__.TestDriversMethods), I get:
ImportError: dlopen(~/anaconda/envs/py27/lib/python2.7/site-packages/cvxopt/lapack.so, 2): Symbol not found: _dgesv_
Referenced from: ~/anaconda/envs/py27/lib/python2.7/site-packages/cvxopt/lapack.so
Expected in: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
The issue cvxopt/cvxopt#45 documents what I think is the same problem, and it may be due to my use of anaconda. I'll try some fixes and see if I come up with a way to get it working.
Obviously we need to be checking that patches aren't breaking the tutorial/example notebooks. Programmatic execution of notebooks looks pretty straightforward so we can just whip up a quick script to smoke-test our notebooks in CI.
Version 0.9.8.1 contains a bug in the function unitary_superoperator_matrix_log, which causes the assertion assert(_np.linalg.norm(_spl.expm(logM) - M) < 1e-8) within that function to fail in numerous cases.
This can be reproduced by running
import pygsti
from pygsti.construction import std2Q_XYICNOT
from pygsti.construction import std1Q_XYI
gs = std1Q_XYI.target_model()
pygsti.tools.unitary_superoperator_matrix_log(gs['Gx'], 'pp')
This was caused by the addition of a sqrt(d)/2 scaling factor in the hamiltonian_to_lindbladian function (to give the return value more meaningful units) in commit 434866d. The fix for this issue is to compensate for this scaling factor within unitary_superoperator_matrix_log by changing the lines
logM_std = _lt.hamiltonian_to_lindbladian(H) # rho --> -i*[H, rho]
logM = change_basis(logM_std, "std", mxBasis)
to
logM_std = _lt.hamiltonian_to_lindbladian(H) # rho --> -i*[H, rho]* sqrt(d)/2
logM = change_basis(logM_std * (2.0/_np.sqrt(H.shape[0])), "std", mxBasis)
(only the comment is changed in the first line). @robpkelly, please apply this change as a hot fix.
@enielse
When trying to install PyGSTi from PyPI (https://pypi.python.org/pypi/pyGSTi) using pip install pygsti --upgrade, I run into an error because of a missing .pyx file. This occurs both on my laptop (screenshot below) and on our online test builds that have PyGSTi as a dependency.
A few thoughts regarding the aesthetics of the RB plots:
Could we display the legend by default? I was a bit confused when the legend didn't show up.
The original color scheme was developed to ensure good contrast between the data (dots) and the fit (line). Now that there are two possible fits, as well as the possibility that all three (data, zeroth order, and first order) could be displayed on the same plot, I'd like to suggest we use a triadic color scheme to help ensure maximum contrast between the colors when order is set to 'all'. To help preserve contrast when we plot either the first or zeroth order fit and the data, I'd recommend the following colors:
- cmap(30)
- cmap(110) to cmap(120) (goes from dark red-orange to more vibrant orange)
- cmap(50) to cmap(169) (goes from green to bright-ish yellow)
It would be less confusing to use the same kind of line style for the "fit" plots, and a different one for the "analytic" plots. That way the line style groups together what is fitted and what is analytic. (This does raise some problems if the fits are too similar.)
Similarly, the colors associated with the zeroth order (first order) fit should be the same between the "fit" and "analytic" plots. That way it's possible to compare between the fit and analytic plots by color.
(Note: If the above suggestions don't jibe with whatever scientific interpretations we're supposed to draw from the data -- i.e., does it make sense to compare the "fit" and "analytic" parts of the plot with each other? -- then we should certainly change the suggestions!)
When plotting the data, we could use the zorder parameter to put the data on top of everything else. (Setting it to some number, like 10, should ensure we plot the data above all the other lines.)
Set the x-axis label by using the capitalized name of the gate:
xlabel = 'RB Sequence Length ({0}s)'.format(gstyp.capitalize())
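For instance (the gstyp value here is a made-up sequence-type label, just to show the formatting):

```python
gstyp = "clifford"  # hypothetical sequence-type label
xlabel = 'RB Sequence Length ({0}s)'.format(gstyp.capitalize())
print(xlabel)  # RB Sequence Length (Cliffords)
```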
An example plot showing these suggestions is below, where I made up some numbers for analytic_params. (pdf downloads to your computer.)
I am able to run linear GST on a dataset using result = pygsti.do_lgst(ds, ...). Even for 2-qubit GST this runs in ~2.5 secs, as opposed to the multiple hours it takes me to do standard practice GST with result = pygsti.do_stdpractice_gst(ds, ...).
However, when I try to create a report from the results object that do_lgst gives me, this is not possible, because the result object does not contain the estimates attribute (see error messages below). I understand that the report generated by LGST is not to be considered reliable; however, it does provide a very valuable sanity check.
Is there any chance this bug will be addressed in the future or is this behavior that is not supported?
I'm running on the latest version of the beta branch: 472a06d.
pygsti.report.create_standard_report(
results=result, title=a.measurementstring+'_'+a.timestamp,
filename=join(a.proc_data_dict['folder'], a.measurementstring+'_'+a.timestamp +'line_inv_GST_report.html'),
confidenceLevel=95)
*** Creating workspace ***
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-14479d2824b1> in <module>()
2 results=result, title=a.measurementstring+'_'+a.timestamp,
3 filename=join(a.proc_data_dict['folder'], a.measurementstring+'_'+a.timestamp +'line_inv_GST_report.html'),
----> 4 confidenceLevel=95)
~/GitHubRepos/DiCarloLab_Repositories/pyGSTi/packages/pygsti/report/factory.py in create_standard_report(results, filename, title, confidenceLevel, comm, ws, auto_open, link_to, brevity, advancedOptions, verbosity)
669
670 results_dict = results if isinstance(results, dict) else {"unique": results}
--> 671 toggles = _set_toggles(results_dict, brevity, combine_robust)
672
673 #DEBUG
~/GitHubRepos/DiCarloLab_Repositories/pyGSTi/packages/pygsti/report/factory.py in _set_toggles(results_dict, brevity, combine_robust)
186 toggles["ShowScaling"] = False
187 for res in results_dict.values():
--> 188 for est in res.estimates.values():
189 weights = est.parameters.get("weights",None)
190 if weights is not None and len(weights) > 0:
AttributeError: 'GateSet' object has no attribute 'estimates'
On the feature-dashboards branch in Tutorial 20, running this code
from pygsti.report import workspace
w = workspace.Workspace()
gave rise to this SyntaxError:
File "/Users/tlschol/Desktop/pyGSTi/packages/pygsti/report/workspace.py", line 158
exec(factory_func_def, exec_globals) #Python 3
SyntaxError: function 'makefactory' uses import * and bare exec, which are illegal because it is a nested function
In pygsti/report/workspace.py, the relevant lines are
exec_globals = {'cls' : cls, 'self': self}
if _sys.version_info > (3, 0):
exec(factory_func_def, exec_globals) #Python 3
else:
exec("""exec factory_func_def in exec_globals""") #Python 2
A similar issue arises if you comment out the first part of the if/then, and have the interpreter check the other line:
File "/Users/tlschol/Desktop/pyGSTi/packages/pygsti/report/workspace.py", line 161
exec("""exec factory_func_def in exec_globals""") #Python 2
SyntaxError: unqualified exec is not allowed in function 'makefactory' because it is a nested function
This syntax error prevents the creation of the workspace object, which in turn blocks the user from executing any of the remaining code in Tutorials 20, 21, or 22.
The system I am using has pygsti 0.9.3, and Python 2.7.12 :: Anaconda custom (x86_64).
I tried to use pygsti on Windows with anaconda python distribution, and I got the following error.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\__init__.py", line 12, in <module>
from . import algorithms as alg
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\algorithms\__init__.py", line 12, in <module>
from .core import *
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\algorithms\core.py", line 16, in <module>
from .. import optimize as _opt
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\optimize\__init__.py", line 12, in <module>
from .customlm import *
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\optimize\customlm.py", line 14, in <module>
from ..tools import mpitools as _mpit
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\tools\__init__.py", line 11, in <module>
from .jamiolkowski import *
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\tools\jamiolkowski.py", line 10, in <module>
from ..baseobjs.basis import basis_matrices as _basis_matrices
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\baseobjs\__init__.py", line 13, in <module>
from .profiler import Profiler
File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\baseobjs\profiler.py", line 24, in <module>
import resource as _resource
ModuleNotFoundError: No module named 'resource'
>>>
It seems the resource module doesn't exist on the Windows platform. Is there a way to use pyGSTi on a Windows system?
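A common pattern for this, sketched below (this is my suggestion for how profiler.py could guard the import, not necessarily what pyGSTi ends up doing), is to fall back gracefully when resource is unavailable, as it is on Windows:

```python
try:
    import resource as _resource
except ImportError:      # the resource module is POSIX-only
    _resource = None

def max_rss_kb():
    """Peak resident set size in kB, or None where unavailable (e.g. Windows)."""
    if _resource is None:
        return None
    return _resource.getrusage(_resource.RUSAGE_SELF).ru_maxrss
```

On POSIX systems this behaves as before; on Windows the profiler would simply report no memory statistics instead of crashing at import time.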
Hi Erik,
I recently tried updating to the latest version of the beta branch of pyGSTi. Whenever I try to check out the latest version, I run into the problem that some example notebooks are not present (see screenshot below). These are flagged as file deletions by git.
It appears that the problem is that Windows does not allow colons (:) in filenames. A simple solution would be to rename the offending notebooks.
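A sketch of such a rename (the notebook name below is made up, and the replacement character is a judgment call):

```python
import os
import tempfile

def windows_safe(name, repl="-"):
    """Replace characters Windows forbids in filenames (here just ':')."""
    return name.replace(":", repl)

# demo on a throwaway directory with a hypothetical offending name
with tempfile.TemporaryDirectory() as d:
    bad = "Example: GST on 2 qubits.ipynb"
    open(os.path.join(d, windows_safe(bad)), "w").close()
    assert os.path.exists(os.path.join(d, "Example- GST on 2 qubits.ipynb"))
```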
Describe the bug
Simulating RB data using the simulate.rb_with_pauli_errors function results in bogus, identically-zero "success counts".
To Reproduce
When running the RB analysis tutorial, RBAnalysis.ipynb, if you set runsims=True and try to generate the "MySimulatedDRBData.txt" file from the line:
rb.simulate.rb_with_pauli_errors(pspec, errormodel, lengths, k, counts,
rbtype='DRB', filename=filename, verbosity=1)
The output counts (2nd column of the file) are all zeros, e.g.:
# Results from a DRB simulation
# Number of qubits
5
# RB length // Success counts // Total counts // Circuit depth // Circuit two-qubit gate count
0 0 50 106 90
10 0 50 799 784
20 0 50 1473 1385
30 0 50 2128 2059
...
Expected behavior
Counts should not all be zero.
When I tried running "RBAnalysis.ipynb" with "runsims = True", the new RB data that was generated has success counts = 0 for all RB lengths for every iteration. Any ideas why this would be the case? I've attached the ".txt" file for reference.
See error messages in the attached (as PNG and zipped Jupyter notebook). This happens in the 00 Quick and easy GST.ipynb file included with pyGSTi.
It seems to indicate that using the default configuration, matplotlib will try to use the QtAgg backend to plot, but then it can't plot because there is no DISPLAY variable set (X is not running).
This error can be reproduced by using a docker image built from BBN-Q/pygsti-docker (so it runs on Ubuntu 14.04, and all python library dependencies are explicitly listed there).
Almost certainly an issue with the generated runTravisTests.sh script. I can revise this to run tests directly without a regression.
two_qubit_gate() in basistools.py needs an option to pass in the ii Pauli.
After installing via pip, importing pygsti.extras.idletomography fails with an error something like:
ImportError: cannot import name 'idletomography' from 'pygsti.extras'
In particular, this bug will surface when trying to generate HTML reports. This is due to a bug in setup.py that omits the idletomography package when copying files during pip's install process.
Hi there, I'm using pyGSTi in a project where I'd also like to use Plotly 4.1. I noticed #55 where there's a remark about unpinning the Plotly version in the future, and it looks like there was an attempt to do that before it was pinned again to 3.10. Just curious what the path to unpinning this looks like?
I believe there have been two errors associated with report generation and FPR. One I believe was already spotted by Erik, when using global fiducial pair reduction.
When trying to create a report on master with per-germ fiducial pair reduction there is a similar bug in:
report/results.py at Line 905
elif isinstance(fidPairs,dict) or hasattr(fidPairs,"keys"):
    #Assume fidPairs is a dict indexed by germ
    fidpair_filters = { (x,y): fidPairs[germ]
                        for x in Ls[st:] for y in germs }
I believe that should be fidPairs[y].
The same error shows up at line 741.
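To illustrate the bug in miniature (toy data, not pyGSTi's actual structures): indexing the dict with the loop variable y picks up each germ's own fiducial pairs, while indexing with a fixed germ repeats one entry everywhere.

```python
# Toy per-germ fiducial-pair dict
fidPairs = {'g1': [(0, 1)], 'g2': [(2, 3)]}
germs = ['g1', 'g2']
Ls = [1, 2]

# buggy: every key maps to the same germ's pairs
germ = 'g1'
buggy = {(x, y): fidPairs[germ] for x in Ls for y in germs}
# fixed: each key maps to its own germ's pairs
fixed = {(x, y): fidPairs[y] for x in Ls for y in germs}

assert buggy[(1, 'g2')] == [(0, 1)]   # wrong germ's pairs
assert fixed[(1, 'g2')] == [(2, 3)]   # correct
```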
pyGSTi has started failing during report generation. It does not seem to be related to any specific data set (it has started failing on any data I throw at it, including data generated by pyGSTi itself).
Failure seems to happen because CVXOPT cannot solve a convex optimization problem needed for the report (the convex problem related to the diamond norm). In essence, the result CVXOPT generates is not optimal, nor is the problem infeasible or unbounded, which indicates serious problems.
Here is a minimal example that leads to such failure
import matplotlib
import time, pickle, os
import scipy as sp
import numpy as np
import pygsti
from pygsti.construction import std1Q_XYI
gs1Q = std1Q_XYI.gs_target
gs1Q_test = std1Q_XYI.gs_target
fiducials1Q = std1Q_XYI.fiducials
germs1Q = std1Q_XYI.germs
maxLengths1Q = [0,1,2,4,8,16,32,64,128,256]
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(
    gs1Q_test.gates.keys(),
    fiducials1Q,
    fiducials1Q,
    germs1Q,
    maxLengths1Q)
gs_datagen = gs1Q_test.depolarize(gate_noise=0.003, spam_noise=0.05)
ds = pygsti.construction.generate_fake_data(gs_datagen, listOfExperiments, nSamples=2000,
sampleError="binomial", seed=2015)
pygsti.io.write_dataset("test.gst", ds)
results_test = pygsti.do_long_sequence_gst("test.gst",
    gs1Q_test,
    fiducials1Q,
    fiducials1Q,
    germs1Q,
    maxLengths1Q,
    mxBasis="pp",
    gaugeOptRatio=1e-7,
    advancedOptions={'memoryLimitInBytes': 10*(1024)**3,
                     'depolarizeLGST': 0.2,
                     'verbosity': 3})
results_test.create_full_report_pdf(verbosity=3)
The result error message is
---------------------------------------------------------------------------
SolverError Traceback (most recent call last)
<ipython-input-1-4edd2b634e2b> in <module>()
42 'verbosity' : 3} )
43
---> 44 results_test.create_full_report_pdf(verbosity=3)
/home/jovyan/work/pyGSTi/packages/pygsti/report/results.pyc in create_full_report_pdf(self, confidenceLevel, filename, title, datasetLabel, suffix, debugAidsAppendix, gaugeOptAppendix, pixelPlotAppendix, whackamoleAppendix, m, M, tips, verbosity)
1375
1376 for key in tables_to_compute:
-> 1377 qtys[key] = self.tables.get(key, verbosity=v).render('latex')
1378 qtys["tt_"+key] = tooltiptex(".tables['%s']" % key)
1379
/home/jovyan/work/pyGSTi/packages/pygsti/report/resultcache.pyc in get(self, key, confidence_level, verbosity)
76 _sys.stdout.flush()
77
---> 78 self._data[level][key] = computeFn(key, level, verbosity)
79 except ResultCache.NoCRDependenceError:
80 assert(level is not None)
/home/jovyan/work/pyGSTi/packages/pygsti/report/results.pyc in fn(key, confidenceLevel, vb)
457 cri = self._get_confidence_region(confidenceLevel)
458 return _generation.get_gateset_vs_target_table(
--> 459 gsBest, gsTgt, fmts, tblCl, longT, cri, mxBasis)
460 fns['bestGatesetVsTargetTable'] = (fn, validate_essential)
461
/home/jovyan/work/pyGSTi/packages/pygsti/report/generation.pyc in get_gateset_vs_target_table(gateset, targetGateset, formats, tableclass, longtable, confidenceRegionInfo, mxBasis)
423 qtys_to_compute = [ '%s %s' % (gl,qty) for qty in qtyNames for gl in gateLabels ]
424 qtys = _cr.compute_gateset_gateset_qtys(qtys_to_compute, gateset, targetGateset,
--> 425 confidenceRegionInfo, mxBasis)
426
427 table = _ReportTable(formats, colHeadings, formatters,
/home/jovyan/work/pyGSTi/packages/pygsti/report/reportables.pyc in compute_gateset_gateset_qtys(qtynames, gateset1, gateset2, confidenceRegionInfo, mxBasis)
729 try:
730 ret[key] = _getGateQuantity(half_diamond_norm, gateset1, gateLabel,
--> 731 eps, confidenceRegionInfo)
732 except ImportError: #if failed to import cvxpy (probably b/c it's not installed)
733 ret[key] = ReportableQty(_np.nan) # report NAN for diamond norms
/home/jovyan/work/pyGSTi/packages/pygsti/report/reportables.pyc in _getGateQuantity(fnOfGate, gateset, gateLabel, eps, confidenceRegionInfo, verbosity)
76
77 if confidenceRegionInfo is None: # No Error bars
---> 78 return ReportableQty(fnOfGate(gateset.gates[gateLabel]))
79
80 # make sure the gateset we're given is the one used to generate the confidence region
/home/jovyan/work/pyGSTi/packages/pygsti/report/reportables.pyc in half_diamond_norm(gate)
724
725 def half_diamond_norm(gate):
--> 726 return 0.5 * _tools.diamonddist(gate, gateset2.gates[gateLabel]) #Note: default 'gm' basis
727 #vary elements of gateset1 (assume gateset2 is fixed)
728
/home/jovyan/work/pyGSTi/packages/pygsti/tools/gatetools.pyc in diamonddist(A, B, mxBasis, dimOrStateSpaceDims)
236 prob = _cvxpy.Problem(objective, constraints)
237 # try:
--> 238 prob.solve(solver="CVXOPT")
239 # prob.solve(solver="ECOS")
240 # prob.solve(solver="SCS")#This always fails
/opt/conda/envs/python2/lib/python2.7/site-packages/cvxpy/problems/problem.pyc in solve(self, *args, **kwargs)
171 return func(self, *args, **kwargs)
172 else:
--> 173 return self._solve(*args, **kwargs)
174
175 @classmethod
/opt/conda/envs/python2/lib/python2.7/site-packages/cvxpy/problems/problem.pyc in _solve(self, solver, ignore_dcp, warm_start, verbose, parallel, **kwargs)
282 results_dict = {s.STATUS: sym_data.presolve_status}
283
--> 284 self._update_problem_state(results_dict, sym_data, solver)
285 return self.value
286
/opt/conda/envs/python2/lib/python2.7/site-packages/cvxpy/problems/problem.pyc in _update_problem_state(self, results_dict, sym_data, solver)
394 else:
395 raise SolverError(
--> 396 "Solver '%s' failed. Try another solver." % solver.name())
397 self._status = results_dict[s.STATUS]
398
SolverError: Solver 'CVXOPT' failed. Try another solver.
Here are the packages I have installed (and their versions).
In reviewing recent updates to the RB tutorial on develop, I noticed there appears to be some code duplication between rbresults.py and rbobjs.py. rbobjs contains the RBResults object, which is also present in rbresults. Given the commit history, I'm assuming rbobjs is meant to supersede rbresults. (rbresults was last worked on in commit 9689a74 on 10/6, while rbobjs came into existence as part of the refactor in commit 025bf1b on 10/10.) If this is indeed the case, I would propose we delete rbresults.
If I create a Pauli channel, I don't get consistent values between the equation for the diamond norm of a Pauli channel (e.g. https://arxiv.org/abs/1109.6887, eqn. 5.4) and the pygsti function.
Ex.
import numpy as np
pr1 = np.array([0.2,0.3,0.2,0.3])
pr2 = np.array([0.5,0.4,0.05,0.05])
The Pauli channel diamond norm is:
np.sum(np.abs(pr1-pr2))
This gives 0.8
If I input this into pygsti
import pygsti
pygsti.gatetools.diamonddist(np.diag(pr1), np.diag(pr2), mxBasis='pp')
I get 0.3
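One likely source of the discrepancy: a Pauli channel's Pauli transfer matrix (the 'pp'-basis superoperator) is diagonal, but its entries are not the Pauli probabilities themselves; they are ±1 combinations of them (λ_Q = Σ_P p_P, with a minus sign when P and Q anticommute). So `np.diag(pr)` hands `diamonddist` a different channel than intended. A plain-numpy sketch of the conversion, assuming the ordering (I, X, Y, Z):

```python
import numpy as np

# Character table of the 1-qubit Pauli group: W[j, i] = +1 if P_j and P_i
# commute, -1 if they anticommute, ordering (I, X, Y, Z).
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]])

def pauli_channel_ptm_diag(probs):
    """Diagonal of the Pauli transfer matrix for Pauli error probs (I, X, Y, Z)."""
    return W @ np.asarray(probs)

pr1 = np.array([0.2, 0.3, 0.2, 0.3])
pr2 = np.array([0.5, 0.4, 0.05, 0.05])

lam1 = pauli_channel_ptm_diag(pr1)   # [1.0, 0.0, -0.2, 0.0]
lam2 = pauli_channel_ptm_diag(pr2)   # [1.0, 0.8,  0.1, 0.1]

# The analytic value quoted above (eqn. 5.4) is the total variation
# distance of the probability vectors:
analytic = np.sum(np.abs(pr1 - pr2))   # 0.8
```

Presumably the intended comparison is `diamonddist(np.diag(lam1), np.diag(lam2), mxBasis='pp')` against 0.8, though I have not verified that output.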
In the example germ generation notebook, checking if a 1Q germ set is AC seems to be fine, but for 2Q it takes a while. (I can add more quantitative statements in a bit.)
As of May 2018, Travis CI has been restructuring their handling of open-source projects. Projects on travis-ci.org are being migrated to travis-ci.com. See https://docs.travis-ci.com/user/migrate/open-source-on-travis-ci-com/
We should opt in to migrate ours. We'd get some nice features like deploy keys. It should be mostly painless -- I think the only thing we'll need to update is the build badge URL in the README. We may temporarily lose build history, but I think that should still be accessible through travis-ci.org
@enielse & I have been discussing what practices we want to enforce, so I ought to write them down.
Hi! I'm trying to build a noise model using ExplicitOpModel. I want to add an 'XX' gate to the set of basis gates.
import numpy as np
from numpy import sqrt
import pygsti

model1 = pygsti.objects.ExplicitOpModel([0,1], 'pp')
# Populate the Model object with states, effects, gates,
# all in the *normalized* Pauli basis: { I/sqrt(2), X/sqrt(2), Y/sqrt(2), Z/sqrt(2) }
# where I, X, Y, and Z are the standard Pauli matrices.
model1['rho0'] = np.kron([1/sqrt(2), 0, 0, 1/sqrt(2)], [1/sqrt(2), 0, 0, 1/sqrt(2)])  # density matrix [[1, 0], [0, 0]] in Pauli basis
model1['Mdefault'] = pygsti.objects.UnconstrainedPOVM(
    {'00': np.kron([1/sqrt(2), 0, 0,  1/sqrt(2)], [1/sqrt(2), 0, 0,  1/sqrt(2)]),  # projector onto [[1, 0], [0, 0]] in Pauli basis
     '01': np.kron([1/sqrt(2), 0, 0, -1/sqrt(2)], [1/sqrt(2), 0, 0,  1/sqrt(2)]),
     '10': np.kron([1/sqrt(2), 0, 0,  1/sqrt(2)], [1/sqrt(2), 0, 0, -1/sqrt(2)]),
     '11': np.kron([1/sqrt(2), 0, 0, -1/sqrt(2)], [1/sqrt(2), 0, 0, -1/sqrt(2)])
    })  # projector onto [[0, 0], [0, 1]] in Pauli basis

angle = np.pi/4
U1_xx = [[np.cos(angle), 0, 0, -np.sin(angle)*1j],
         [0, np.cos(angle), -np.sin(angle)*1j, 0],
         [0, -np.sin(angle)*1j, np.cos(angle), 0],
         [-np.sin(angle)*1j, 0, 0, np.cos(angle)]]
XX = np.kron(U1_xx, np.conjugate(U1_xx))
model1['Gxx', 0, 1] = XX
I get an error back pointing to the XX gate. I'm not sure why pygsti thinks 'XX' is of evolution type 'statevec'.
Cannot add an object with evolution type 'statevec' to a model with one of 'densitymx'
Also, is it possible to add Molmer Sorensen to the basis set of gates so that it can be used for circuit simulations?
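On the error itself: `np.kron(U, np.conjugate(U))` is the superoperator in the *standard* (column-stacking) basis, and it is complex-valued, whereas a 'densitymx'-type model expects the real superoperator in the model's 'pp' basis. A plain-numpy sketch of the change of basis, computing R[i,j] = Tr(P_i† U P_j U†) over normalized Paulis (the helper name here is mine, not pygsti API):

```python
import numpy as np

# 1-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Normalized 2-qubit Pauli basis: (P_a x P_b) / 2, so Tr(B_i^dag B_j) = delta_ij
basis = [np.kron(A, B) / 2.0 for A in (I2, X, Y, Z) for B in (I2, X, Y, Z)]

def unitary_to_ptm(U):
    """Real Pauli transfer matrix of rho -> U rho U^dag (hypothetical helper)."""
    d = len(basis)
    R = np.empty((d, d))
    for i, Pi in enumerate(basis):
        for j, Pj in enumerate(basis):
            R[i, j] = np.real(np.trace(Pi.conj().T @ U @ Pj @ U.conj().T))
    return R

angle = np.pi / 4
U_xx = np.cos(angle) * np.eye(4) - 1j * np.sin(angle) * np.kron(X, X)
R_xx = unitary_to_ptm(U_xx)   # real 16x16 matrix, suitable for 'densitymx' evolution
```

Assigning `model1['Gxx', 0, 1] = R_xx` (a real matrix) should then match the model's 'densitymx' evolution type, though I haven't verified this against pygsti's own basis-conversion utilities.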
@enielse
Going over the RB tutorial (pyGSTi/jupyter_notebooks/Tutorials/15 Randomized Benchmarking.ipynb), I wanted to take a look at the example datafile. However, the files are missing (on the master branch).
Describe the bug
CI tests on beta have caught a few apparent python2.7 incompatibilities. See this build log.
test_stdgst_matrix checks a gaugeopt estimate against one from disk for equivalence to 2 decimal places. This precision was lowered from 3 by c78ea56, but under python2.7 it still seems to be too strict -- probably due to some version-specific numpy behavior. We could lower it again, but perhaps a difference at 1 decimal place is a bug?
test_stdgst_terms seems to fail when calling scipy.linalg.solve in customlm.custom_leastsq with arguments containing inf and/or NaN. Not sure what's going on there; I will look into it further shortly.
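On the precision question: comparing to a fixed number of decimal places mixes absolute and relative error, so large entries fail first. A relative-tolerance comparison (numpy's `assert_allclose`) may be more robust across python/numpy versions. A sketch, with made-up numbers standing in for the gaugeopt estimates:

```python
import numpy as np

# Two estimates that agree well in relative terms but whose larger entries
# differ in the 4th decimal place (made-up stand-ins for the gaugeopt results).
a = np.array([0.70712, 1.41421, 0.001000])
b = np.array([0.70705, 1.41429, 0.001001])

# Decimal-place comparison: passes at 3 decimals but fails at 4.
decimals_ok = np.all(np.round(a - b, 3) == 0)

# Relative comparison: tolerant for large entries, still strict for small ones.
np.testing.assert_allclose(a, b, rtol=2e-3)
```

Switching the test to `rtol`-based comparison would avoid repeatedly loosening the decimal count per platform.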
To Reproduce
$ virtualenv /tmp/venv-2.7 && source /tmp/venv-2.7/bin/activate
$ pip install -e .[testing]
$ nosetests test.test_packages.drivers.testCalcMethods1Q:CalcMethods1QTestCase.test_stdgst_matrix
[...]
$ nosetests test.test_packages.drivers.testCalcMethods1Q:CalcMethods1QTestCase.test_stdgst_terms
[...]
Expected behavior
Should match python3.5 and 3.7 results
Environment (please complete the following information):
0.9.7.3.post118+g5309a408
(branch develop)
In working through how we can splice together colormaps using the plotting.splice_cmaps() function and the plotting.LinLogNorm() class (both being developed on the colormap_fix branch), I am running into problems with the resulting cmap object. In particular, we would like to use splice_cmaps() to join together two colormaps, where the splicing takes place at the normalization of the trans variable of the LinLogNorm() class.
Preliminary imports and basic declarations:
import pygsti
import matplotlib.cm as cm
linlog_trans = 11
norm = pygsti.report.plotting.LinLogNorm(trans=linlog_trans)
Code which works
cmap = pygsti.report.plotting.splice_cmaps([cm.Greys, cm.Reds_r],\
splice_points=[.1])
cmap(1)
>> (0.98154556050020103, 0.98154556050020103, 0.98154556050020103, 1.0)
Code which throws an error
cmap = pygsti.report.plotting.splice_cmaps([cm.Greys, cm.Reds_r],\
splice_points=[norm(11)])
cmap(1)
>> ValueError Traceback (most recent call last)
<ipython-input-90-4078524cc28b> in <module>()
----> 1 cmap(1)
/Users/tlschol/anaconda/lib/python2.7/site-packages/matplotlib/colors.pyc in __call__(self, X, alpha, bytes)
548 # See class docstring for arg/kwarg documentation.
549 if not self._isinit:
--> 550 self._init()
551 mask_bad = None
552 if not cbook.iterable(X):
/Users/tlschol/anaconda/lib/python2.7/site-packages/matplotlib/colors.pyc in _init(self)
730 self._lut = np.ones((self.N + 3, 4), np.float)
731 self._lut[:-3, 0] = makeMappingArray(
--> 732 self.N, self._segmentdata['red'], self._gamma)
733 self._lut[:-3, 1] = makeMappingArray(
734 self.N, self._segmentdata['green'], self._gamma)
/Users/tlschol/anaconda/lib/python2.7/site-packages/matplotlib/colors.pyc in makeMappingArray(N, data, gamma)
467 if x[0] != 0. or x[-1] != 1.0:
468 raise ValueError(
--> 469 "data mapping points must start with x=0. and end with x=1")
470 if np.sometrue(np.sort(x) - x):
471 raise ValueError(
ValueError: data mapping points must start with x=0. and end with x=1
The only difference between these two calls is the splice_points declaration, where we use [.1] and [norm(11)]. Curiously, norm(11) returns nan. Ideas as to what is going on would be welcome.
The behavior we are trying to achieve would set the splice point for the colormap at the normalization value of the transition point. That way, when the normalization goes from logarithmic to linear, the colormap will change as well.
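Whatever the fix in LinLogNorm, splice_cmaps would fail less cryptically if it validated its splice points up front: matplotlib requires colormap mapping points to run from 0 to 1, so a nan splice point can never work. A hypothetical validation sketch (stdlib only; the function name is mine, not pygsti's):

```python
import math

def validate_splice_points(splice_points):
    """Check that colormap splice points are finite, strictly increasing,
    and strictly inside (0, 1), as matplotlib's mapping data requires."""
    prev = 0.0
    for p in splice_points:
        if not isinstance(p, (int, float)) or math.isnan(p) or math.isinf(p):
            raise ValueError("splice point %r is not a finite number "
                             "(did a norm return nan?)" % (p,))
        if not (prev < p < 1.0):
            raise ValueError("splice points must be strictly increasing "
                             "within (0, 1); got %r" % (p,))
        prev = p
    return list(splice_points)

validate_splice_points([0.1])   # ok: returns [0.1]
```

Called at the top of splice_cmaps, this would turn the opaque "data mapping points must start with x=0" error into a message pointing at the nan from LinLogNorm.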
I get an assertion error when I generate the standard report for 2 qubit model testing. It works fine for single qubit model testing. I'm not sure if it is a bug or if I am doing something wrong, so decided to open a regular issue.
To Reproduce
Steps to reproduce the behavior:
I couldn't upload a Jupyter notebook file here, so here is a GitHub link to the code I'm running: https://github.com/newsma/pygsti_work/blob/master/2QubitModelTesting.ipynb
Expected behavior
Generate the standard report.
Environment (please complete the following information):
At the end of Tutorial 16, we demonstrate fitting different-order RB models. I noticed some issues related to the error-handling based on different combinations of the input parameters.
Currently, after instantiating the figure and extracting some variables, we check how gstyp and analytic_params play with analytic around line 413:
if analytic != None:
    if gstyp != 'clifford':
        print("Analytical curve is for Clifford decay!")
    if analytic_params==None:
        print("Function must be given the analytical parameters!")
    f_an = analytic_params['f']
This code snippet raises two questions:
1. If gstyp is not "clifford" (i.e., is "primitive"), does it make sense to plot the analytic decay curve? If not, maybe a warning which indicates this discrepancy occurred, and sets analytic = None, would be a good solution:
if gstyp != 'clifford':
    print("Analytical curve is for Clifford decay only. Setting analytic to None.")
    analytic = None
2. The analytical parameters must be supplied via the analytic_params dictionary. However, the original snippet doesn't raise any kind of error when analytic != None but analytic_params = None. It is true that the very next line which executes (f_an = analytic_params['f']) will raise an error, since analytic_params is None, but it's not a very helpful error:
TypeError: 'NoneType' object has no attribute '__getitem__'
As such, I'd like to propose adding the following code (or some variant thereof) on line 399 to do this check before we start making the plot:
if (analytic is not None) and (gstyp != 'clifford'):
    print("Analytical curve is for Clifford decay only. Setting analytic to None.")
    analytic = None
if (analytic is not None) and (analytic_params is None):
    raise ValueError("No input analytic parameters specified. "
                     "Please specify analytic_params, or set analytic to None.")
This would help us resolve some potential contradictions between the input parameters before running the rest of the function.
I am getting the following error while trying the example from Getting started quickly with Gate Set Tomography. I would appreciate it if anyone could look into the issue. Thanks.
Traceback (most recent call last):
File "gst.py", line 62, in
gaugeOptRatio=1e-3, constrainToTP=True)
TypeError: do_long_sequence_gst() got an unexpected keyword argument 'gaugeOptRatio'
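A TypeError like this usually means the tutorial predates the installed pygsti: the keyword arguments of do_long_sequence_gst have changed across versions, and this install apparently no longer accepts gaugeOptRatio. A quick, version-agnostic way to list which keywords the installed function actually accepts, sketched here against a stand-in function (the stand-in's signature is made up, not pygsti's real one):

```python
import inspect

def accepted_kwargs(func):
    """Names of parameters that `func` will accept as keyword arguments."""
    params = inspect.signature(func).parameters
    return {name for name, p in params.items()
            if p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY)}

# Stand-in for the installed pygsti.do_long_sequence_gst (signature invented
# for illustration only):
def do_long_sequence_gst(dataFilename, target_model, fiducials,
                         germs, maxLengths, gauge_opt_params=None):
    pass

kws = accepted_kwargs(do_long_sequence_gst)
# 'gaugeOptRatio' is not in kws, so passing it raises the TypeError above.
```

Running `accepted_kwargs(pygsti.do_long_sequence_gst)` against the real install shows which name the gauge-optimization option now goes by, so the tutorial call can be updated to match.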