sandialabs / pygsti

A python implementation of Gate Set Tomography

Home Page: http://www.pygsti.info

License: Apache License 2.0

tomography qcvv quantum-computing characterization gst snl-quantum-computing scr-2018

pygsti's Introduction


pyGSTi 0.9.12.1



pyGSTi

pyGSTi is open-source software for modeling and characterizing noisy quantum information processors (QIPs), i.e., systems of one or more qubits. It is licensed under the Apache License, Version 2.0. Copyright information can be found in NOTICE, and the license itself in LICENSE.

There are three main objects in pyGSTi:

  • Circuit: a quantum circuit (can have many qubits).
  • Model: a description of a QIP's gate and SPAM operations (a noise model).
  • DataSet: a dictionary-like container holding experimental data.

You can do various things with these objects:

  • Circuit simulation: compute the outcome probabilities of a Circuit using a Model.
  • Data simulation: simulate experimental data (a DataSet) using a Model.
  • Model testing: Test whether a given Model fits the data in a DataSet.
  • Model estimation: Estimate a Model from a DataSet (e.g. using GST).
  • Model-less characterization: Perform Randomized Benchmarking on a DataSet.
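
The circuit-simulation idea in the first bullet can be illustrated without pyGSTi at all: a circuit's outcome probabilities are just squared matrix elements of the product of its gate unitaries. Below is a minimal pure-Python sketch (hand-written 2x2 unitaries, not pyGSTi's API) for a three-gate X(pi/2), Y(pi/2), X(pi/2) circuit:

```python
import math

def mat_mul(a, b):
    """Product of two 2x2 complex matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 1 / math.sqrt(2)
Gxpi2 = [[s, -1j * s], [-1j * s, s]]  # X(pi/2) rotation unitary
Gypi2 = [[s, -s], [s, s]]             # Y(pi/2) rotation unitary

# Circuit Gx -> Gy -> Gx; the first gate applied sits rightmost in the product
U = mat_mul(Gxpi2, mat_mul(Gypi2, Gxpi2))

# Prepare |0>, measure in the computational basis: p(k) = |<k|U|0>|^2
p0 = abs(U[0][0]) ** 2  # probability of outcome '0'
p1 = abs(U[1][0]) ** 2  # probability of outcome '1' (this circuit gives 1)
```

A real Model additionally describes SPAM and noise, which is what makes the pyGSTi version more than matrix multiplication.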

In particular, there are a number of characterization protocols currently implemented in pyGSTi:

  • Gate Set Tomography (GST) is the most complex and is where the software derives its name (a "python GST implementation").
  • Randomized Benchmarking (RB) is a well-known method for assessing the quality of a QIP in an average sense. PyGSTi implements standard "Clifford" RB as well as the more scalable "Direct" RB methods.
  • Robust Phase Estimation (RPE) is a method designed for quickly learning a few noise parameters of a QIP that is particularly useful for tuning up qubits.
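
The RB protocols above fit data to an exponential decay model, P(m) = A + B * f**m, where m is the RB sequence length and f is the decay constant. As a toy numeric illustration (made-up numbers, not pyGSTi code), with equally spaced depths the decay constant can be recovered from successive differences:

```python
# Toy illustration of the RB decay model P(m) = A + B * f**m
# (made-up values; this is not pyGSTi code).
A, B, f = 0.5, 0.5, 0.98

depths = [0, 8, 16]                       # equally spaced RB depths
probs = [A + B * f ** m for m in depths]  # average success probabilities

# With equal spacing, successive differences cancel A and isolate f:
#   (P(16) - P(8)) / (P(8) - P(0)) = f**8
spacing = depths[1] - depths[0]
f_est = ((probs[2] - probs[1]) / (probs[1] - probs[0])) ** (1 / spacing)

# Standard conversion to the average error rate for a single qubit (d = 2):
r = (1 - f_est) * (2 - 1) / 2
print(round(f_est, 6))  # recovers 0.98
```

In practice pyGSTi fits f from many noisy circuit repetitions rather than from exact probabilities, but the decay model is the same.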

PyGSTi is designed with a modular structure so as to be highly customizable and easily integrated into new or existing python software. It runs under python 3.8 or higher. To facilitate integration with software for running cloud-QIP experiments, pyGSTi Circuit objects can be converted to IBM's OpenQASM and Rigetti Quantum Computing's Quil circuit description languages.

Installation

Apart from several optional Cython modules, pyGSTi is written entirely in Python. To install pyGSTi and only its required dependencies run:

pip install pygsti

Or, to install pyGSTi with all its optional dependencies too, run:

pip install pygsti[complete]

The disadvantage of these approaches is that the numerous tutorials included in the package will be buried within your Python site-packages directory, which you'll likely want to access later on. Alternatively, you can install pyGSTi locally using the following commands:

cd <install_directory>
git clone https://github.com/pyGSTio/pyGSTi.git
cd pyGSTi
pip install -e .[complete]

As above, you can leave off the .[complete] if you only want the minimal set of dependencies installed. You could also replace the git clone ... command with unzip pygsti-0.9.x.zip, where the latter file is a downloaded pyGSTi source archive. Any of the above installations should build the set of optional Cython extension modules if a working C/C++ compiler and the Cython package are present. If, however, compilation fails or you later decide to add Cython support, and you've followed the local installation approach above, you can rebuild the extension modules (without reinstalling) using the command:

python setup.py build_ext --inplace

Finally, Jupyter notebook is highly recommended, as it is generally convenient and is the format of the included tutorials and examples. It is installed automatically when [complete] is used; otherwise it can be installed separately.

Getting Started

Here are a couple of simple examples to get you started.

Circuit simulation

To compute the outcome probabilities of a circuit, you just need to create a Circuit object (describing your circuit) and a Model object containing the operations used in your circuit. Here we use a "stock" single-qubit Model containing Idle, X(π/2), and Y(π/2) gates, the latter two labelled ('Gxpi2', 0) and ('Gypi2', 0), respectively:

import pygsti
from pygsti.modelpacks import smq1Q_XYI

mycircuit = pygsti.circuits.Circuit([('Gxpi2',0), ('Gypi2',0), ('Gxpi2',0)])
model = smq1Q_XYI.target_model()
outcome_probabilities = model.probabilities(mycircuit)

Gate Set Tomography

Gate Set Tomography is used to characterize the operations performed by hardware designed to implement a (small) system of quantum bits (qubits). Here's the basic idea:

  1. you tell pyGSTi what gates you'd ideally like to perform

  2. pyGSTi tells you what circuits it wants data for

  3. you perform the requested experiments and place the resulting data (outcome counts) into a text file that looks something like:

    ## Columns = 0 count, 1 count
    {} 0 100  # the empty sequence (just prep then measure)
    Gx 10 90  # prep, do a X(pi/2) gate, then measure
    GxGy 40 60  # prep, do a X(pi/2) gate followed by a Y(pi/2), then measure
    Gx^4 20 80  # etc...
    
  4. pyGSTi takes the data file and outputs a "report" - currently an HTML web page.
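
For illustration, a toy parser for the two-column count format sketched in step 3 might look like the following (this is not pyGSTi's own loader; use pygsti.io.load_dataset for real work):

```python
def parse_counts(text):
    """Toy parser for the two-column count format shown above
    (not pyGSTi's loader; circuit strings are kept as plain text)."""
    data = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and whitespace
        if not line:                          # skips the '## Columns' header and blanks too
            continue
        circuit, zeros, ones = line.split()
        data[circuit] = {'0': int(zeros), '1': int(ones)}
    return data

raw = """## Columns = 0 count, 1 count
{} 0 100
Gx 10 90
GxGy 40 60
"""
counts = parse_counts(raw)
print(counts['Gx'])  # {'0': 10, '1': 90}
```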

In code, running GST looks something like this:

import pygsti
from pygsti.modelpacks import smq1Q_XYI

# 1) get the ideal "target" Model (a "stock" model in this case)
mdl_ideal = smq1Q_XYI.target_model()

# 2) generate a GST experiment design
edesign = smq1Q_XYI.create_gst_experiment_design(4) # user-defined: how long do you want the longest circuits?

# 3) write a data-set template
pygsti.io.write_empty_dataset("MyData.txt", edesign.all_circuits_needing_data, "## Columns = 0 count, 1 count")

# STOP! "MyData.txt" now has columns of zeros where actual data should go.
# REPLACE THE ZEROS WITH ACTUAL DATA, then proceed with:
ds = pygsti.io.load_dataset("MyData.txt") # load data -> DataSet object

# OR: Create a simulated dataset with:
# ds = pygsti.data.simulate_data(mdl_ideal, edesign, num_samples=1000)

# 4) run GST (now using the modern object-based interface)
data = pygsti.protocols.ProtocolData(edesign, ds) # Step 1: Bundle up the dataset and circuits into a ProtocolData object
protocol = pygsti.protocols.StandardGST() # Step 2: Select a Protocol to run
results = protocol.run(data) # Step 3: Run the protocol!

# 5) Create a nice HTML report detailing the results
report = pygsti.report.construct_standard_report(results, title="My Report", verbosity=1)
report.write_html("myReport", auto_open=True, verbosity=1) # Can also write out Jupyter notebooks!

Tutorials and Examples

There are numerous tutorials (meant to be pedagogical) and examples (meant to demonstrate how to do some particular thing) in the form of Jupyter notebooks beneath the pyGSTi/jupyter_notebooks directory. The root "START HERE" notebook will direct you where to go based on what you're most interested in learning about. You can view the read-only GitHub version of this notebook, or you can explore the tutorials interactively using JupyterHub via Binder. Note the existence of a FAQ, which addresses common issues.

Running notebooks locally

While it's possible to view the notebooks on GitHub using the links above, it's usually nicer to run them locally so you can mess around with the code as you step through it. To do this, you'll need to start up a Jupyter notebook server using the following steps (this assumes you've followed the local installation directions above):

  • Change to the notebook directory by running: cd jupyter_notebooks/Tutorials/

  • Start up the Jupyter notebook server by running: jupyter notebook

The Jupyter server should open up your web browser to the server root, from where you can start the first "START_HERE.ipynb" notebook. Note that the key command to execute a cell within the Jupyter notebook is Shift+Enter, not just Enter.

Documentation

Online documentation is hosted on Read the Docs.

License

PyGSTi is licensed under the Apache License Version 2.0.

Questions?

For help and support with pyGSTi, please contact the authors at [email protected].

pygsti's People

Contributors

adhumu colibri-coruscans coreyostrove dhothem dnadlinger eendebakpt enielse jarthurgross johnkgamble jordanh6 kevincyoung kmrudin lnmaurer lsaldyt msarovar pyioncontrol rileyjmurray robinbk robpkelly sserita tjproct travis-s


pygsti's Issues

Automated testing for notebooks

Obviously we need to be checking that patches aren't breaking the tutorial/example notebooks. Programmatic execution of notebooks looks pretty straightforward so we can just whip up a quick script to smoke-test our notebooks in CI.

Calculating negative eigenvalues of the choi matrix

If I use the different functions in jamiolkowski.py to calculate the negative e-vals I get inconsistent values. I think the problem is that mags_of_negative_choi_evals() does not call jamiolkowski_iso() with the basis of the gates in the gateset.

Problems creating GST pdf report

When creating a brief pdf report using pygsti, using a default directory name with underscores creates a LaTeX error. This is caused by a failing math interpretation of the directory name in the \hypersetup{pdfinfo={ \putfield{pdfinfo}{} }} section of the GST report template. Using \detokenize{directory_name} in the generated .tex file solves the problem; however, this is not practical since the error occurs every time a new report is created. For now, my workaround is to pop the key 'defaultDirectory' before calling the function _to_pdfinfo in pygsti's results.py. When using several computers, this becomes cumbersome.

Colon in filename of example notebooks creates problem on Windows

@enielse

Hi Erik,
I recently tried updating to the latest version of the beta branch of pyGSTi. Whenever I try to check out the latest version I run into the problem that some example notebooks are not present (see screenshot below). This is flagged as file deletions by git.

It appears that the problem is because windows does not allow colons (:) in filenames. A simple solution would be to rename the offending notebooks.

[screenshot omitted]

Error when adding a process matrix to the noise model

Hi! I'm trying to build a noise model using ExplicitOpModel. I want to add 'XX' gate to the basis gates.

import numpy as np
from numpy import sqrt
import pygsti

model1 = pygsti.objects.ExplicitOpModel([0,1], 'pp')

#Populate the Model object with states, effects, gates,
# all in the *normalized* Pauli basis: { I/sqrt(2), X/sqrt(2), Y/sqrt(2), Z/sqrt(2) }
# where I, X, Y, and Z are the standard Pauli matrices.
model1['rho0'] = np.kron([ 1/sqrt(2), 0, 0, 1/sqrt(2) ],[ 1/sqrt(2), 0, 0, 1/sqrt(2) ]) # density matrix [[1, 0], [0, 0]] in Pauli basis
model1['Mdefault'] = pygsti.objects.UnconstrainedPOVM(
    {'00': np.kron([ 1/sqrt(2), 0, 0, 1/sqrt(2) ],[ 1/sqrt(2), 0, 0, 1/sqrt(2) ]),   # projector onto [[1, 0], [0, 0]] in Pauli basis
     '01': np.kron([ 1/sqrt(2), 0, 0, -1/sqrt(2) ],[ 1/sqrt(2), 0, 0, 1/sqrt(2) ]),
    '10': np.kron([ 1/sqrt(2), 0, 0, 1/sqrt(2) ],[ 1/sqrt(2), 0, 0, -1/sqrt(2) ]) ,
     '11': np.kron([ 1/sqrt(2), 0, 0, -1/sqrt(2) ],[ 1/sqrt(2), 0, 0, -1/sqrt(2) ])
    }) # projector onto [[0, 0], [0, 1]] in Pauli basis

angle=np.pi/4
U1_xx= [[np.cos(angle),0,0,-np.sin(angle)*1j],
             [0,np.cos(angle),-np.sin(angle)*1j,0],
              [0,-np.sin(angle)*1j,np.cos(angle),0],
              [-np.sin(angle)*1j,0,0,np.cos(angle)]]
XX= np.kron(U1_xx,np.conjugate(U1_xx))

model1['Gxx', 0, 1] = XX

I get an error back pointing to the XX gate. I'm not sure why pygsti thinks 'XX' is of evolution type 'statevec'.

Cannot add an object with evolution type 'statevec' to a model with one of 'densitymx'

Also, is it possible to add a Mølmer-Sørensen gate to the basis set of gates so that it can be used for circuit simulations?

do_stdpractice_gst() only runs single-2QUR for Target mode

I am using do_stdpractice_gst() on two-qubit GST data, and with default parameters, the single-2QUR gauge optimisation is only run on the Target model data, not on the results of the TP/CPTP modes.

Only the "-- Performing 'single' gauge optimization" message is printed for the TP/CPTP steps, and indeed the generated report is missing the single-2QUR data for all but the Target model.

Environment (please complete the following information):

  • pyGSTi version: master (v0.9.8.2, f143ca1)
  • python version: 3.5, 3.7
  • OS: macOS, Linux

(This sounds like a trivial bug that would be easier to fix for me than to write up, but then, you couldn't accept a PR from me unless the legal situation has changed in the meantime.)

IOError: [Errno 2] No such file or directory

This occurred when I ran the tutorial files 00 Quick and easy GST.ipynb and 07 Report Generation.ipynb; the error is the same in both. When the code reaches .create_presentation_ppt, it can't find the progressTable.png file.
Here is the traceback:

IOError                                   Traceback (most recent call last)
<ipython-input-14-dac62d848da6> in <module>()
      1 #create GST slides (tables and figures of full report in Powerpoint slides; best for folks familiar with GST)
----> 2 results.create_presentation_ppt(confidenceLevel=95, filename="tutorial_files/easy_slides.pptx", verbosity=2)

/home/wujizhou/pyGSTi/packages/pygsti/report/results.py in create_presentation_ppt(self, confidenceLevel, filename, title, datasetLabel, suffix, debugAidsAppendix, pixelPlotAppendix, whackamoleAppendix, m, M, verbosity, pptTables)
   2338             #body_shape = slide.shapes.placeholders[1]; tf = body_shape.text_frame
   2339             add_text_list(slide.shapes, 1, 2, 8, 2, ['Ns is the number of gate strings', 'Np is the number of parameters'], 15)
-> 2340             drawTable(slide.shapes, 'progressTable', 1, 3, 8.5, 4, ptSize=10)
   2341 
   2342             slide = add_slide(SLD_LAYOUT_TITLE_NO_CONTENT, "Detailed %s Analysis" % plotFnName)

/home/wujizhou/pyGSTi/packages/pygsti/report/results.py in draw_table_latex(shapes, key, left, top, width, height, ptSize)
   2279 
   2280             pathToImg = _os.path.join(fileDir, "%s.png" % key)
-> 2281             return draw_pic(shapes, pathToImg, left, top, width, height)
   2282 
   2283 

/home/wujizhou/pyGSTi/packages/pygsti/report/results.py in draw_pic(shapes, path, left, top, width, height)
   2283 
   2284         def draw_pic(shapes, path, left, top, width, height):
-> 2285             pxWidth, pxHeight = Image.open(open(path)).size
   2286             pxAspect = pxWidth / float(pxHeight) #aspect ratio of image
   2287             maxAspect = width / float(height) #aspect ratio of "max" box

IOError: [Errno 2] No such file or directory: 'tutorial_files/easy_slides_files/progressTable.png'

RB Results Object Plotting - plot aesthetics

A few thoughts regarding the aesthetics of the RB plots:

  • Could we display the legend by default? I was a bit confused when the legend didn't show up.

  • The original color scheme was developed to ensure good contrast between the data (dots) and the fit (line). Now that there are two possible fits, as well as the possibility that all three (data, zeroth order, and first order) could be displayed on the same plot, I'd like to suggest we use a triadic color scheme to help ensure that maximum contrast between the colors when order is set to 'all'. To help preserve contrast when we plot either the first or zeroth order fit and the data, I'd recommend the following colors:

    • Data: Keep at cmap(30)
    • Zeroth order fit: Go from cmap(110) to cmap(120) (goes from dark red-orange to more vibrant orange)
    • First order fit: Go from cmap(50) to cmap(169) (goes from green to bright-ish yellow)
  • It would be less confusing to use the same kind of line style for the "fit" plots, and a different one for the "analytic". That way the line style groups together what is fitted and analytic. (This does raise some problems if the fits are too similar.)

  • Similarly, the colors associated with the zeroth order (first order) fit should be the same between the "fit" and "analytic" plots. That way it's possible to compare between the fit and analytic plots by color.

(Note: If the above suggestions don't fit with whatever scientific interpretation we're supposed to draw from the data -- i.e., does it make sense to compare the "fit" and "analytic" parts of the plot with each other? -- then we should certainly change the suggestions!)

  • When plotting the data, we could use the zorder parameter to put the data on top of everything else. (Setting it to some number, like 10, should ensure we plot the data above all the other lines.)

  • Set the x-axis label by using the capitalized name of the gate:

xlabel = 'RB Sequence Length ({0}s)'.format(gstyp.capitalize())

An example plot showing these suggestions is below, where I made up some numbers for analytic_params.

example.pdf

Missing template datafiles.txt /broken link

@enielse
Going over the RB tutorial (pyGSTi/jupyter_notebooks/Tutorials/15 Randomized Benchmarking.ipynb), I wanted to take a look at the example datafile. However, the files are missing (on the master branch).

RB package - code overlap?

In reviewing recent updates to the RB tutorial on develop, I noticed there appears to be some code duplication between rbresults.py and rbobjs.py. rbobjs contains the RBResults object, which is also present in rbresults. Given the commit history, I'm assuming rbobjs is meant to supersede rbresults. (rbresults was last worked on in commit 9689a74 on 10/6, while rbobjs came into existence as part of the refactor in commit 025bf1b on 10/10.)

If this is indeed the case, I would propose we delete rbresults.

Objective functions do not correctly fill SPAM penalty jacobian

(sorry in advance for abusing math terminology)

Describe the bug
In pygsti.baseobjs.objectivefn, see _spam_penalty_jac_fill. I'm pretty sure the assignments here should be adding another term to the first axis index (possibly the key for effectvec?), because as written each iteration of the loop will overwrite the same row. Additional space is allocated in the ObjectiveFunction __init__ but never written to, so the returned jacobian will have garbage data at the end!

Additional context
See this failed build from the test refactor branch, and compare to this subsequent build. In custom_leastsq, one uninitialized row of the jacobian meant there was a garbage value in the diagonal, which would rarely (and nondeterministically) cause mu to grow very very fast and fill the diagonal with infs, causing the error. The only change between builds was adding a CI step to print out the build environment so I could debug the failed test 🤣
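
The overwrite bug described here is a common fill-loop pattern. The following self-contained sketch (made-up shapes and values, not pyGSTi's actual code) shows how writing every loop iteration's contribution to the same row both loses earlier terms and leaves untouched rows holding stale data:

```python
# Schematic of the reported failure mode (made-up shapes; not pyGSTi code).
n_terms, n_params = 3, 4
contributions = [[float(i + j) for j in range(n_params)] for i in range(n_terms)]

# Buggy fill: every term lands in row 0; rows 1..2 keep their stale values
jac_buggy = [[99.0] * n_params for _ in range(n_terms)]  # "uninitialized" buffer
for i in range(n_terms):
    jac_buggy[0] = contributions[i][:]   # overwrites the same row each pass

# Correct fill: index the destination row by the loop (term) index
jac_ok = [[0.0] * n_params for _ in range(n_terms)]
for i in range(n_terms):
    jac_ok[i] = contributions[i][:]

print(jac_buggy[1][0])  # 99.0 -- a stale "garbage" row survives
print(jac_ok[1][0])     # 1.0
```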

Dropping Python 2 support

Python 2.7 will reach end-of-life on January 1st, 2020 (see this melodramatic countdown timer). Additionally, many major Python projects have pledged to drop support for Python 2 on or before that date.

The next major release of pyGSTi (v0.9.9) will drop support for Python 2. In other words, we'll be limiting our future support to Python 3.5 and 3.7. Users running pyGSTi on Python 2 should consult the official guide for more information on porting their environment to Python 3.


As of ace6d65 our CI no longer builds for Python 2.7. Developers are no longer required to ensure Python 2.7 compatibility in new contributions. Here's what should be done before v0.9.9:

  • Drop Python 2.7 builds from TravisCI
  • Specify python_requires in setup.py
  • Remove Python 2 backports from dependencies in setup.py
    • Not sure we actually have any strictly for Python 2...
  • Remove Python 2 compatibility duct-tape throughout the source tree, including:
    • sys.version_info checks for python 2
    • most if not all __future__ imports
    • Python 2 compatibility patches in pygsti.tools.compattools
  • Remove Python 2 static test fixture resources

Reports no longer loading on firefox

I'm having problems running the HTML files in Firefox on Ubuntu. I'm getting the report-loading-failed screen, but this claims that Firefox doesn't have any such issue. I continue to get the issue when I run through a Jupyter notebook, even with the suggested patch on the page. Serving the files through a Python HTTP server does appear to work, but this has all manner of caching issues when I switch between different reports.

I suppose this means that Firefox is no longer allowing locally-loaded HTML files to load other files? I can't seem to find details of this or a workaround anywhere; perhaps this could be noted on the report-loading-failed screen?

pygsti cannot run on Windows

I tried to use pygsti on Windows with anaconda python distribution, and I got the following error.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\__init__.py", line 12, in <module>
    from . import algorithms as alg
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\algorithms\__init__.py", line 12, in <module>
    from .core import *
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\algorithms\core.py", line 16, in <module>
    from .. import optimize     as _opt
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\optimize\__init__.py", line 12, in <module>
    from .customlm import *
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\optimize\customlm.py", line 14, in <module>
    from ..tools import mpitools as _mpit
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\tools\__init__.py", line 11, in <module>
    from .jamiolkowski import *
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\tools\jamiolkowski.py", line 10, in <module>
    from ..baseobjs.basis import basis_matrices as _basis_matrices
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\baseobjs\__init__.py", line 13, in <module>
    from .profiler import Profiler
  File "C:\MiniConda\envs\IonControl\lib\site-packages\pygsti\baseobjs\profiler.py", line 24, in <module>
    import resource as _resource
ModuleNotFoundError: No module named 'resource'
>>>

It seems the resource module doesn't exist on the Windows platform. Is there a way to use pygsti on a Windows system?

Release on pypi is broken

Error when trying to import module functions:

  from pygsti.construction import make_lsgst_experiment_list, std1Q_XY, std1Q_XYI, std2Q_XYCNOT
  File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/__init__.py", line 15, in <module>
    from . import report as rpt
  File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/__init__.py", line 12, in <module>
    from .factory import *
  File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/factory.py", line 26, in <module>
    from . import workspace as _ws
  File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/workspace.py", line 26, in <module>
    from . import plotly_plot_ex as _plotly_ex
  File "/home/schuyler/.conda/envs/arb-p/lib/python3.6/site-packages/pygsti/report/plotly_plot_ex.py", line 11, in <module>
    from plotly.offline.offline import _plot_html
ImportError: cannot import name '_plot_html'

Turns out plotly released v3.8.0 about 11 hours ago, which removed the function _plot_html. An easy fix is to pin the plotly version in your requirements.txt and setup.py, then tag and release a new version.

Diamond Norm for a Pauli Channel

If I create a Pauli channel I don't seem to be getting consistency between the equation for the diamond norm of a Pauli channel (e.g. https://arxiv.org/abs/1109.6887 eqn. 5.4) and the pygsti function.

Ex.

import numpy as np
pr1 = np.array([0.2, 0.3, 0.2, 0.3])
pr2 = np.array([0.5, 0.4, 0.05, 0.05])

The Pauli channel diamond norm is np.sum(np.abs(pr1 - pr2)), which gives 0.8.

If I input this into pygsti:

import pygsti
pygsti.gatetools.diamonddist(np.diag(pr1), np.diag(pr2), maxBasis='pp')

I get 0.3

Report generation fails due to plot inlining in Jupyter

Report generation can fail due to plot inlining being enabled in Jupyter (this came up in #6).

Note that this can happen without the user enabling inlining in the notebook or in the configuration. The docker image from Jupyter has a hook that enables inlining implicitly whenever matplotlib is imported in the notebook.

Here is a notebook with a small example: Minimal+example+plot+inlining.ipynb.zip

If report generation crucially depends on having inlining disabled, I think pyGSTi should internally disable inlining and restore it to the original state (or perhaps find a different workaround). Users likely need inlining to do their own analysis beyond pyGSTi, and it seems onerous to expect them to have it disabled just for pyGSTi.

RB Results Object Plotting - Error Handling

At the end of Tutorial 16, we demonstrate fitting different-order RB models. I noticed some issues related to the error-handling based on different combinations of the input parameters.

Currently, after instantiating the figure and extracting some variables, we check how gstyp and analytic_params play with analytic around line 413:

if analytic != None:
    if gstyp != 'clifford':
        print("Analytical curve is for Clifford decay!")
    if analytic_params == None:
        print("Function must be given the analytical parameters!")
    f_an = analytic_params['f']

This code snippet raises two questions:

  • If gstyp is not "clifford" (i.e., is "primitive"), then does it make sense to plot the analytic decay curve? If not, maybe a warning which indicates this discrepancy occurred, and which sets analytic = None, would be a good solution:
if gstyp != 'clifford':
    print("Analytical curve is for Clifford decay only. Setting analytic to None.")
    analytic = None
  • Just after this code snippet, we attempt to extract out the analytic parameters from the analytic_params dictionary. However, the original snippet doesn't raise any kind of error when analytic != None, but analytic_params = None. It is true that the very next line which executes (f_an = analytic_params['f']) will raise an error, since analytic_params is None, but it's not a very helpful error:
TypeError: 'NoneType' object has no attribute '__getitem__'

As such, I'd like to propose adding the following code (or some variant thereof) on line 399 to do this check before we start making the plot:

if (analytic is not None) and (gstyp != 'clifford'):
    print("Analytical curve is for Clifford decay only. Setting analytic to None.")
    analytic = None
if (analytic is not None) and (analytic_params is None):
    raise ValueError("No input analytic parameters specified. "
                     "Please specify analytic_params, or set analytic to None.")

This would help us resolve some potential contradictions between the input parameters before running the rest of the function.

Unpinning Plotly

Hi there, I'm using pyGSTi in a project where I'd also like to use Plotly 4.1. I noticed #55 where there's a remark about unpinning the Plotly version in the future, and it looks like there was an attempt to do that before it was pinned again to 3.10. Just curious what the path to unpinning this looks like?

Error when generating report for 2 qubit model testing

I get an assertion error when I generate the standard report for 2 qubit model testing. It works fine for single qubit model testing. I'm not sure if it is a bug or if I am doing something wrong, so decided to open a regular issue.

To Reproduce
Steps to reproduce the behavior:
I couldn't upload a Jupyter notebook file here, so here is a GitHub link to the code I'm running: https://github.com/newsma/pygsti_work/blob/master/2QubitModelTesting.ipynb

Expected behavior
Generate the standard report.

Environment (please complete the following information):

  • pyGSTi version 0.9.7
  • python version 3.7
  • OS macOS Mojave 10.14.5

All zero counts when simulating RB data in v0.9.7.2-0.9.7.5

Describe the bug
Simulating RB data using the simulate.rb_with_pauli_errors function results in bogus identically-zero "success counts".

To Reproduce
When running the RB analysis tutorial, RBAnalysis.ipynb, if you set runsims=True and try to generate the "MySimulatedDRBData.txt" file from the line:

rb.simulate.rb_with_pauli_errors(pspec, errormodel, lengths, k, counts,
                                 rbtype='DRB', filename=filename, verbosity=1)

The output counts (2nd column of the file) are all zeros, e.g.:

# Results from a DRB simulation
# Number of qubits
5
# RB length // Success counts // Total counts // Circuit depth // Circuit two-qubit gate count
0 0 50 106 90
10 0 50 799 784
20 0 50 1473 1385
30 0 50 2128 2059
...

Expected behavior
Counts should not all be zero.

Environment (please complete the following information):

  • pyGSTi version 0.9.7.2 to 0.9.7.5
  • python version 3.7
  • OS X v10.13.6

AssertionError when generating report: _np.linalg.norm(_spl.expm(logM) - M) < 1e-8)

Version 0.9.8.1 contains a bug in the function unitary_superoperator_matrix_log, which causes the assertion assert(_np.linalg.norm(_spl.expm(logM) - M) < 1e-8) within that function to fail in numerous cases.

This can be reproduced by running

import pygsti
from pygsti.construction import std2Q_XYICNOT
from pygsti.construction import std1Q_XYI
gs = std1Q_XYI.target_model()
pygsti.tools.unitary_superoperator_matrix_log(gs['Gx'], 'pp')

This was caused by the addition of a sqrt(d)/2 scaling factor in the hamiltonian_to_lindblad function (to give the return value more meaningful units) in commit 434866d. The fix for this issue is to compensate for this scaling factor within unitary_superoperator_matrix_log by changing the lines

logM_std = _lt.hamiltonian_to_lindbladian(H)  # rho --> -i*[H, rho]                                                                                                                   
logM = change_basis(logM_std, "std", mxBasis)

to

logM_std = _lt.hamiltonian_to_lindbladian(H)  # rho --> -i*[H, rho]* sqrt(d)/2                                                                                                                      
logM = change_basis(logM_std * (2.0/_np.sqrt(H.shape[0])), "std", mxBasis)

(only the comment is changed in the first line). @robpkelly, please apply this change as a hot fix.

MPI error when running LGST

When running two-qubit GST using MPI I am getting the following error, on the 5th iteration of MLGST

Traceback (most recent call last):
  File "pygsti_2q_mpi.py", line 58, in <module>
    memLimit=memLim, verbosity=3, comm=comm)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/drivers/longsequence.py", line 466, in do_long_sequence_gst
    output_pkl, printer)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/drivers/longsequence.py", line 685, in do_long_sequence_gst_base
    gs_lsgst_list = _alg.do_iterative_mlgst(**args)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/algorithms/core.py", line 2845, in do_iterative_mlgst
    memLimit, comm, distributeMethod, profiler, evt_cache)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/algorithms/core.py", line 1439, in do_mc2gst
    verbosity=printer-1, profiler=profiler)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/optimize/customlm.py", line 209, in custom_leastsq
    new_f = obj_fn(new_x)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/algorithms/core.py", line 1207, in _objective_func
    gs.bulk_fill_probs(probs, evTree, probClipInterval, check, comm)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/objects/gateset.py", line 2637, in bulk_fill_probs
    evalTree, clipTo, check, comm)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/objects/gatematrixcalc.py", line 2067, in bulk_fill_probs
    mySubTreeIndices, subTreeOwners, mySubComm = evalTree.distribute(comm)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/objects/evaltree.py", line 441, in distribute
    _mpit.distribute_indices(list(range(nSubtreeComms)), comm)
  File "/home/gribeill/GitHub/pyGSTi/packages/pygsti/tools/mpitools.py", line 79, in distribute_indices
    loc_comm = comm.Split(color=color, key=rank)
  File "MPI/Comm.pyx", line 199, in mpi4py.MPI.Comm.Split (src/mpi4py.MPI.c:91864)
mpi4py.MPI.Exception: Other MPI error, error stack:
PMPI_Comm_split(471)..........: MPI_Comm_split(MPI_COMM_WORLD, color=0, key=11, new_comm=0x7faa10d37178) failed
PMPI_Comm_split(453)..........:
MPIR_Comm_split_impl(222).....:
MPIR_Get_contextid_sparse(752): Too many communicators

This is with pyGSTi v0.9.5, and mpi4py v2.0.0, run with mpiexec -n 16 python3 pygsti_2q_mpi.py

Here's the script: pygsti_2q_mpi.py

Any hints as to what is going wrong would be appreciated. I'm rerunning this with mpi4py v3.0.0 right now...

workspace object + exec() command

On the feature-dashboards branch in Tutorial 20, running this code

from pygsti.report import workspace
w = workspace.Workspace()

gave rise to this SyntaxError:

File "/Users/tlschol/Desktop/pyGSTi/packages/pygsti/report/workspace.py", line 158
    exec(factory_func_def, exec_globals) #Python 3
SyntaxError: function 'makefactory' uses import * and bare exec, which are illegal because it is a nested function

In pygsti/report/workspace.py, the relevant lines are

exec_globals = {'cls' : cls, 'self': self}
if _sys.version_info > (3, 0):
    exec(factory_func_def, exec_globals) #Python 3
else:
    exec("""exec factory_func_def in exec_globals""") #Python 2

A similar issue arises if you comment out the first part of the if/then, and have the interpreter check the other line:

File "/Users/tlschol/Desktop/pyGSTi/packages/pygsti/report/workspace.py", line 161
    exec("""exec factory_func_def in exec_globals""") #Python 2
SyntaxError: unqualified exec is not allowed in function 'makefactory' because it is a nested function

This syntax error prevents the creation of the workspace object, which in turn blocks the user from executing any of the remaining code in Tutorials 20, 21, or 22.

The system I am using has pygsti 0.9.3, and Python 2.7.12 :: Anaconda custom (x86_64).
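One possible workaround (a sketch, not the actual fix adopted in pyGSTi) is to move the exec into a module-level helper: exec(code, globs) is valid in both Python 2 (where the two-argument tuple form of the exec statement is special-cased) and Python 3, as long as it is not inside a nested function:

```python
# Hypothetical workaround sketch: a module-level helper avoids the
# "bare exec in a nested function" SyntaxError, because exec(code, globs)
# parses as the exec statement in Python 2 and a function call in Python 3.
def _exec_in_globals(code, globs):
    exec(code, globs)

# The nested makefactory code would then call _exec_in_globals instead
# of exec directly.
factory_func_def = "def makefactory():\n    return 42\n"
exec_globals = {}
_exec_in_globals(factory_func_def, exec_globals)
assert exec_globals["makefactory"]() == 42
```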

Python 2.7 compatibility issues failing builds

Describe the bug
CI tests on beta have caught a few apparent python2.7 incompatibilities. See this build log.

test_stdgst_matrix checks a gaugeopt estimate against one from the disk for equivalence to 2 decimal places. This precision was lowered from 3 in commit c78ea56, but under python2.7 it still seems to be too strict -- probably due to some version-specific numpy behavior. We could lower it again, but perhaps agreement to only 1 decimal place signals a real bug?
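For reference, a quick illustration of what decimal-place precision means, assuming the test uses numpy's decimal-based almost-equal helper (the tolerance at decimal=d is 1.5 * 10**-d):

```python
import numpy as np

# decimal=2 tolerates absolute differences up to 1.5e-2
np.testing.assert_almost_equal(1.234, 1.237, decimal=2)   # passes

failed = False
try:
    np.testing.assert_almost_equal(1.2, 1.3, decimal=2)   # off by 0.1: fails
except AssertionError:
    failed = True
assert failed
```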

test_stdgst_terms seems to fail when calling scipy.linalg.solve in customlm.custom_leastsq with arguments containing inf and/or NaN. Not sure what's going on there, will look into it further shortly.

To Reproduce

$ virtualenv /tmp/venv-2.7 && source /tmp/venv-2.7/bin/activate
$ pip install -e .[testing]
$ nosetests test.test_packages.drivers.testCalcMethods1Q:CalcMethods1QTestCase.test_stdgst_matrix
[...]
$ nosetests test.test_packages.drivers.testCalcMethods1Q:CalcMethods1QTestCase.test_stdgst_terms
[...]

Expected behavior
Results should match those obtained under Python 3.5 and 3.7.

Environment (please complete the following information):

  • pyGSTi 0.9.7.3.post118+g5309a408 (branch develop)
  • python 2.7

Report Generation Issues with FPR

I believe there have been two errors associated with report generation and fiducial pair reduction (FPR). One, affecting global fiducial pair reduction, I believe was already spotted by Erik.

When trying to create a report on master with per-germ fiducial pair reduction, there is a similar bug in report/results.py at line 905:

            elif isinstance(fidPairs,dict) or hasattr(fidPairs,"keys"):
                #Assume fidPairs is a dict indexed by germ
                fidpair_filters = { (x,y): fidPairs[germ] 
                                    for x in Ls[st:] for y in germs }

I believe that should be fidPairs[y].

The same error shows up at line 741.
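For concreteness, a standalone toy version of the comprehension with the proposed fix applied (the dict contents and loop inputs here are made up):

```python
# Toy stand-ins for the real objects; fidPairs is a dict indexed by germ.
fidPairs = {"g0": [(0, 1)], "g1": [(1, 2)]}
Ls = [1, 2, 4]
germs = ["g0", "g1"]
st = 0

# Proposed fix: index by the loop variable y, not a fixed germ.
fidpair_filters = {(x, y): fidPairs[y]
                   for x in Ls[st:] for y in germs}

assert fidpair_filters[(1, "g1")] == [(1, 2)]
```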

Migrate to travis-ci.com (low priority)

As of May 2018, Travis CI has been restructuring their handling of open-source projects. Projects on travis-ci.org are being migrated to travis-ci.com. See https://docs.travis-ci.com/user/migrate/open-source-on-travis-ci-com/

We should opt-in to migrate ours. We'd get some nice features like deploy keys. Should be mostly painless -- I think the only thing we'll need to update is the build badge URL in the README. We may temporarily lose build history, but I think that should still be accessible through travis-ci.org

Undefined names (low priority)

We've got undefined names in a couple of spots around the codebase. See this linter log. I'm calling this low-priority because if neither tests nor users have caught these then apparently these units aren't terribly important.

cvxopt issues on El Capitan

Just wanted to document a problem that cvxopt seems to have with Macs running El Capitan. On running one of the tests, test_bootstrap (__main__.TestDriversMethods), I get:

ImportError: dlopen(~/anaconda/envs/py27/lib/python2.7/site-packages/cvxopt/lapack.so, 2): Symbol not found: _dgesv_
  Referenced from: ~/anaconda/envs/py27/lib/python2.7/site-packages/cvxopt/lapack.so
  Expected in: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib

The issue cvxopt/cvxopt#45 documents what I think is the same problem, and it may be due to my use of anaconda. I'll try some fixes and see if I come up with a way to get it working.

Report generation hangs on missing latex packages

Report generation hangs on systems where latex packages (etoolbox, in my case) are missing. It'd be nice if we aborted with an error in this case, or at least let the user see the output from latex so it's evident what's happening.
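One way to avoid the hang, sketched as a hypothetical helper rather than pyGSTi code: latex prompts interactively on errors by default, so passing -interaction=nonstopmode and running under a timeout both help, and surfacing the captured output makes the missing package evident:

```python
import subprocess

def run_latex_or_fail(cmd, timeout=120):
    """Run a latex command, aborting with its output instead of hanging.

    Hypothetical helper: cmd would be something like
    ["pdflatex", "-interaction=nonstopmode", "-halt-on-error", "report.tex"].
    """
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout)
    except subprocess.TimeoutExpired as e:
        raise RuntimeError("latex timed out; output so far:\n%s" % (e.stdout or ""))
    if result.returncode != 0:
        raise RuntimeError("latex failed:\n%s" % result.stdout)
    return result.stdout
```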

Automatically Setting LinLog Transition Point

Commit a4a93eb introduces a new function, get_transition, for automatically computing the variable linlog_trans which gets fed into the LinLogNorm class. The purpose of this issue is to work through the following questions (perhaps among others). Feedback from @enielse @kmrudin and @jarthurgross would be appreciated.

  • Should linlog_trans be set automatically, and should users not be able to override that value?
    • If the value can be set by the user, what is the correct way to declare the default linlog_trans value? (As you'll see in the commit, it is currently defaulted to None. This currently conflicts with the documentation...:disappointed:)
    • If the value is not to be set by the user, should we allow them to determine the quantile? get_transition currently accepts an eps parameter, which determines the quantile. However, eps has not been "bubbled up" through the rest of the code.
  • Should the plots themselves, or perhaps their caption, indicate what quantile is actually being used? (That would allow for easy comparison amongst plots, I think...or, at the very least, help people avoid getting confused - "What exactly is this plot showing us again?".)
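As a purely illustrative candidate for an automatic default -- assuming linlog_trans is meant to separate statistically-expected deviations from significant ones; get_transition may compute something different -- one natural choice is a chi-squared quantile:

```python
from scipy.stats import chi2

def transition_candidate(eps=0.05, dof=1):
    # Value exceeded by a chi^2_dof variable with probability eps;
    # deviations above it are "surprising" at level eps, so the colormap
    # could switch from linear to log there.
    return chi2.ppf(1.0 - eps, dof)

# transition_candidate() is about 3.84 for eps=0.05, dof=1
```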

Report generation for LGST results object

@kmrudin @enielse

I am able to run linear GST on a dataset using result = pygsti.do_lgst(ds, ... ). Even for 2-qubit GST this runs in ~2.5 secs, as opposed to the multiple hours it takes me to run the standard-practice GST result = pygsti.do_stdpractice_gst(ds, ...) .

However, when I try to create a report from the object that do_lgst returns, it is not possible, because that object does not contain the estimates attribute (see error messages below). I understand that a report generated from LGST alone is not to be considered reliable; however, it does provide a very valuable sanity check.

Is there any chance this bug will be addressed in the future, or is this behavior simply not supported?

I'm running on the latest version of the beta branch: 472a06d .

pygsti.report.create_standard_report(
    results=result, title=a.measurementstring+'_'+a.timestamp, 
    filename=join(a.proc_data_dict['folder'], a.measurementstring+'_'+a.timestamp +'line_inv_GST_report.html'),
    confidenceLevel=95)
*** Creating workspace ***

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-14479d2824b1> in <module>()
      2     results=result, title=a.measurementstring+'_'+a.timestamp,
      3     filename=join(a.proc_data_dict['folder'], a.measurementstring+'_'+a.timestamp +'line_inv_GST_report.html'),
----> 4     confidenceLevel=95)

~/GitHubRepos/DiCarloLab_Repositories/pyGSTi/packages/pygsti/report/factory.py in create_standard_report(results, filename, title, confidenceLevel, comm, ws, auto_open, link_to, brevity, advancedOptions, verbosity)
    669 
    670     results_dict = results if isinstance(results, dict) else {"unique": results}
--> 671     toggles = _set_toggles(results_dict, brevity, combine_robust)
    672 
    673     #DEBUG

~/GitHubRepos/DiCarloLab_Repositories/pyGSTi/packages/pygsti/report/factory.py in _set_toggles(results_dict, brevity, combine_robust)
    186     toggles["ShowScaling"] = False
    187     for res in results_dict.values():
--> 188         for est in res.estimates.values():
    189             weights = est.parameters.get("weights",None)
    190             if weights is not None and len(weights) > 0:

AttributeError: 'GateSet' object has no attribute 'estimates'

splicing colormaps + LinLogNorm

In working through how we can splice together colormaps using the plotting.splice_cmaps() function and the plotting.LinLogNorm() class (both being developed on the colormap_fix branch), I am running into problems with the resulting cmap object. In particular, we would like to use splice_cmaps() to join together two colormaps, where the splicing takes place at the normalization of the trans variable of the LinLogNorm() class.

Preliminary imports and basic declarations:

import pygsti
import matplotlib.cm as cm

linlog_trans = 11
norm = pygsti.report.plotting.LinLogNorm(trans=linlog_trans)

Code which works

cmap = pygsti.report.plotting.splice_cmaps([cm.Greys, cm.Reds_r],\
                                           splice_points=[.1])
cmap(1)

>> (0.98154556050020103, 0.98154556050020103, 0.98154556050020103, 1.0)

Code which throws an error

cmap = pygsti.report.plotting.splice_cmaps([cm.Greys, cm.Reds_r],\
                                           splice_points=[norm(11)])

cmap(1)

>> ValueError                                Traceback (most recent call last)
<ipython-input-90-4078524cc28b> in <module>()
----> 1 cmap(1)

/Users/tlschol/anaconda/lib/python2.7/site-packages/matplotlib/colors.pyc in __call__(self, X, alpha, bytes)
    548         # See class docstring for arg/kwarg documentation.
    549         if not self._isinit:
--> 550             self._init()
    551         mask_bad = None
    552         if not cbook.iterable(X):

/Users/tlschol/anaconda/lib/python2.7/site-packages/matplotlib/colors.pyc in _init(self)
    730         self._lut = np.ones((self.N + 3, 4), np.float)
    731         self._lut[:-3, 0] = makeMappingArray(
--> 732             self.N, self._segmentdata['red'], self._gamma)
    733         self._lut[:-3, 1] = makeMappingArray(
    734             self.N, self._segmentdata['green'], self._gamma)

/Users/tlschol/anaconda/lib/python2.7/site-packages/matplotlib/colors.pyc in makeMappingArray(N, data, gamma)
    467     if x[0] != 0. or x[-1] != 1.0:
    468         raise ValueError(
--> 469             "data mapping points must start with x=0. and end with x=1")
    470     if np.sometrue(np.sort(x) - x):
    471         raise ValueError(

ValueError: data mapping points must start with x=0. and end with x=1

The only difference between these two calls is the splice_points declaration, where we use [.1] and [norm(11)]. Curiously, norm(11) returns nan. Ideas as to what is going on would be welcome.

The behavior we are trying to achieve would set the splice point for the colormap at the normalization value of the transition point. That way, when the normalization goes from logarithmic to linear, the colormap will change as well.
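A plausible mechanism for the symptom (an assumption about the failure mode, not a confirmed diagnosis): since trans=11, norm(11) sits exactly at the transition point, and once it evaluates to NaN that NaN poisons the mapping-point checks matplotlib runs when building the colormap, because NaN fails every comparison:

```python
import numpy as np

p = float("nan")
assert not (0.0 <= p <= 1.0)   # a NaN splice point fails every range check
assert p != p                  # and never compares equal, even to itself

# So segment data built from [0, nan, 1] cannot satisfy matplotlib's
# "must start with x=0 and end with x=1" monotonic-mapping requirement.
xs = np.sort([0.0, p, 1.0])
assert np.isnan(xs).any()
```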

Cannot import "idletomography"

After installing via pip, importing pygsti.extras.idletomography fails with an error something like:
ImportError: cannot import name 'idletomography' from 'pygsti.extras'

In particular, this bug will surface when trying to generate HTML reports. This is due to a bug in setup.py that omits the idletomography package when copying files during pip's install process.
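This class of bug (a hand-enumerated package list that silently omits a subpackage) can be avoided with setuptools.find_packages. A self-contained sketch using a throwaway directory tree, not pyGSTi's actual setup.py:

```python
import os
import tempfile
from setuptools import find_packages

with tempfile.TemporaryDirectory() as root:
    # Build a miniature package tree mimicking pygsti/extras/idletomography.
    for pkg in ("pygsti", "pygsti/extras", "pygsti/extras/idletomography"):
        os.makedirs(os.path.join(root, pkg))
        open(os.path.join(root, pkg, "__init__.py"), "w").close()
    found = find_packages(where=root)

# find_packages discovers the subpackage automatically, so it cannot
# be dropped the way an entry in a hand-maintained list can.
assert "pygsti.extras.idletomography" in found
```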

Unexpected CZ gate

@kmrudin, as discussed over Skype a bug report for the CZ construction.

When generating a CZ gate using the following code, I get a CZ gate that is not symmetric with respect to which qubit is the target (which it should be), and applying it twice does not give the identity.

CZ_01 = pygsti.construction.build_gate([4],[('Q0', 'Q1')], 'CZ(pi, Q0, Q1)',basis='pp')
CZ_10 = pygsti.construction.build_gate([4],[('Q0', 'Q1')], 'CZ(pi, Q1, Q0)',basis='pp')

Plotting these using matplotlib (plt.matshow(CZ_01)) gives the following:

[images: matshow plots of CZ_01 and CZ_10]

Constructing the gate by hand using:

import numpy as np

myUnitary = np.diag([1, 1, 1, -1])
mySuperOp_stdbasis = pygsti.unitary_to_process_mx(myUnitary)
mySuperOp_ppbasis = pygsti.std_to_pp(mySuperOp_stdbasis)
Gcz = mySuperOp_ppbasis

gives exactly what is expected.

[image: matshow plot of the hand-constructed Gcz]

With Gcz@Gcz equal to the identity as expected.

[image: matshow plot of Gcz@Gcz]
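The two properties at issue can be checked with plain numpy, independent of pyGSTi (CZ here is the textbook diag(1,1,1,-1) unitary):

```python
import numpy as np

CZ = np.diag([1.0, 1.0, 1.0, -1.0])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# CZ is symmetric under exchanging control and target qubits...
assert np.allclose(SWAP @ CZ @ SWAP, CZ)
# ...and applying it twice gives the identity.
assert np.allclose(CZ @ CZ, np.eye(4))
```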

pyGSTi fails to generate plots without X

See error messages in the attached (as PNG and zipped Jupyter notebook). This happens in the 00 Quick and easy GST.ipynb file included with pyGSTi.

It seems to indicate that using the default configuration, matplotlib will try to use the QtAgg backend to plot, but then it can't plot because there is no DISPLAY variable set (X is not running).

This error can be reproduced by using a docker image built from BBN-Q/pygsti-docker (so it runs on Ubuntu 14.04, and all python library dependencies are explicitly listed there).

[image: display-error screenshot]

00 Quick and easy GST MPS.ipynb.zip
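A standard matplotlib workaround for headless machines (ordinary matplotlib usage, not pyGSTi-specific) is to force the non-interactive Agg backend before pyplot is imported, so no DISPLAY is needed:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")          # must happen before importing pyplot
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
out = os.path.join(tempfile.gettempdir(), "pygsti_headless_test.png")
fig.savefig(out)               # succeeds with no X server running
```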

Should be using a deploy key for CI/CD

Our automated deployment uses a username/password stored as CI environment variables. They're marked as secure variables in Travis, and I don't know what protection that affords, but since I don't see an automated-deployment user in the pyGSTi org, I'm a little nervous it's somebody's github login... Related comment.

I don't think we have deploy keys available through travis-ci.org, but we can migrate to travis-ci.com. See #47

Report generation fails during diamond norm computation

pyGSTi has started failing during report generation. It does not seem to be related to any specific data set (it has started failing on any data I throw at it, including data generated by pyGSTi itself).

Failure seems to happen because CVXOPT cannot solve a convex optimization problem needed for the report (the convex problem related to the diamond norm). In essence, the result CVXOPT generates is not optimal, nor is the problem infeasible or unbounded, which indicates serious problems.

Here is a minimal example that leads to such a failure:

import matplotlib

import time, pickle, os

import scipy as sp
import numpy as np

import pygsti

from pygsti.construction import std1Q_XYI

gs1Q = std1Q_XYI.gs_target

gs1Q_test = std1Q_XYI.gs_target

fiducials1Q = std1Q_XYI.fiducials
germs1Q = std1Q_XYI.germs
maxLengths1Q = [0,1,2,4,8,16,32,64,128,256]

listOfExperiments = pygsti.construction.make_lsgst_experiment_list(
            gs1Q_test.gates.keys(), 
            fiducials1Q, 
            fiducials1Q, 
            germs1Q, 
            maxLengths1Q)

gs_datagen = gs1Q_test.depolarize(gate_noise=0.003, spam_noise=0.05)
ds = pygsti.construction.generate_fake_data(gs_datagen, listOfExperiments, nSamples=2000,
                                            sampleError="binomial", seed=2015)
pygsti.io.write_dataset("test.gst", ds)

results_test = pygsti.do_long_sequence_gst("test.gst", 
                                        gs1Q_test, 
                                        fiducials1Q, 
                                        fiducials1Q, 
                                        germs1Q,
                                        maxLengths1Q, 
                                        mxBasis="pp", 
                                        gaugeOptRatio=1e-7,
                                        advancedOptions ={ 'memoryLimitInBytes' : 10*(1024)**3,
                                                           'depolarizeLGST' : 0.2,
                                                           'verbosity' : 3} )

results_test.create_full_report_pdf(verbosity=3)

The resulting error message is:

---------------------------------------------------------------------------
SolverError                               Traceback (most recent call last)
<ipython-input-1-4edd2b634e2b> in <module>()
     42                                                            'verbosity' : 3} )
     43 
---> 44 results_test.create_full_report_pdf(verbosity=3)

/home/jovyan/work/pyGSTi/packages/pygsti/report/results.pyc in create_full_report_pdf(self, confidenceLevel, filename, title, datasetLabel, suffix, debugAidsAppendix, gaugeOptAppendix, pixelPlotAppendix, whackamoleAppendix, m, M, tips, verbosity)
   1375 
   1376         for key in tables_to_compute:
-> 1377             qtys[key] = self.tables.get(key, verbosity=v).render('latex')
   1378             qtys["tt_"+key] = tooltiptex(".tables['%s']" % key)
   1379 

/home/jovyan/work/pyGSTi/packages/pygsti/report/resultcache.pyc in get(self, key, confidence_level, verbosity)
     76                         _sys.stdout.flush()
     77 
---> 78                     self._data[level][key] = computeFn(key, level, verbosity)
     79                 except ResultCache.NoCRDependenceError:
     80                     assert(level is not None)

/home/jovyan/work/pyGSTi/packages/pygsti/report/results.pyc in fn(key, confidenceLevel, vb)
    457             cri = self._get_confidence_region(confidenceLevel)
    458             return _generation.get_gateset_vs_target_table(
--> 459                 gsBest, gsTgt, fmts, tblCl, longT, cri, mxBasis)
    460         fns['bestGatesetVsTargetTable'] = (fn, validate_essential)
    461 

/home/jovyan/work/pyGSTi/packages/pygsti/report/generation.pyc in get_gateset_vs_target_table(gateset, targetGateset, formats, tableclass, longtable, confidenceRegionInfo, mxBasis)
    423     qtys_to_compute = [ '%s %s' % (gl,qty) for qty in qtyNames for gl in gateLabels ]
    424     qtys = _cr.compute_gateset_gateset_qtys(qtys_to_compute, gateset, targetGateset,
--> 425                                             confidenceRegionInfo, mxBasis)
    426 
    427     table = _ReportTable(formats, colHeadings, formatters,

/home/jovyan/work/pyGSTi/packages/pygsti/report/reportables.pyc in compute_gateset_gateset_qtys(qtynames, gateset1, gateset2, confidenceRegionInfo, mxBasis)
    729             try:
    730                 ret[key] = _getGateQuantity(half_diamond_norm, gateset1, gateLabel,
--> 731                                             eps, confidenceRegionInfo) 
    732             except ImportError: #if failed to import cvxpy (probably b/c it's not installed)
    733                 ret[key] = ReportableQty(_np.nan) # report NAN for diamond norms

/home/jovyan/work/pyGSTi/packages/pygsti/report/reportables.pyc in _getGateQuantity(fnOfGate, gateset, gateLabel, eps, confidenceRegionInfo, verbosity)
     76 
     77     if confidenceRegionInfo is None: # No Error bars
---> 78         return ReportableQty(fnOfGate(gateset.gates[gateLabel]))
     79 
     80     # make sure the gateset we're given is the one used to generate the confidence region

/home/jovyan/work/pyGSTi/packages/pygsti/report/reportables.pyc in half_diamond_norm(gate)
    724 
    725             def half_diamond_norm(gate):
--> 726                 return 0.5 * _tools.diamonddist(gate, gateset2.gates[gateLabel]) #Note: default 'gm' basis
    727                   #vary elements of gateset1 (assume gateset2 is fixed)
    728 

/home/jovyan/work/pyGSTi/packages/pygsti/tools/gatetools.pyc in diamonddist(A, B, mxBasis, dimOrStateSpaceDims)
    236     prob = _cvxpy.Problem(objective, constraints)
    237 #    try:
--> 238     prob.solve(solver="CVXOPT")
    239 #        prob.solve(solver="ECOS")
    240 #       prob.solve(solver="SCS")#This always fails

/opt/conda/envs/python2/lib/python2.7/site-packages/cvxpy/problems/problem.pyc in solve(self, *args, **kwargs)
    171             return func(self, *args, **kwargs)
    172         else:
--> 173             return self._solve(*args, **kwargs)
    174 
    175     @classmethod

/opt/conda/envs/python2/lib/python2.7/site-packages/cvxpy/problems/problem.pyc in _solve(self, solver, ignore_dcp, warm_start, verbose, parallel, **kwargs)
    282             results_dict = {s.STATUS: sym_data.presolve_status}
    283 
--> 284         self._update_problem_state(results_dict, sym_data, solver)
    285         return self.value
    286 

/opt/conda/envs/python2/lib/python2.7/site-packages/cvxpy/problems/problem.pyc in _update_problem_state(self, results_dict, sym_data, solver)
    394         else:
    395             raise SolverError(
--> 396                 "Solver '%s' failed. Try another solver." % solver.name())
    397         self._status = results_dict[s.STATUS]
    398 

SolverError: Solver 'CVXOPT' failed. Try another solver.

Here are the packages I have installed (and their versions).

conda_packages.txt
pip_packages.txt
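Independent of the root cause, a fallback pattern would let report generation degrade gracefully by trying the commented-out alternative solvers (ECOS, SCS) before giving up. A generic sketch with stand-in callables, not pyGSTi's or CVXPY's API:

```python
class SolverError(Exception):
    pass

def solve_with_fallback(solvers):
    """Try (name, callable) pairs in order; return the first success."""
    errors = []
    for name, solve in solvers:
        try:
            return name, solve()
        except SolverError as err:
            errors.append((name, str(err)))
    raise SolverError("all solvers failed: %r" % errors)

def cvxopt_stub():
    raise SolverError("Solver 'CVXOPT' failed.")   # mimics the traceback above

name, value = solve_with_fallback([("CVXOPT", cvxopt_stub),
                                   ("SCS", lambda: 42)])
assert (name, value) == ("SCS", 42)
```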
