projectq-framework / fermilib

FermiLib: Open source software for analyzing fermionic quantum simulation algorithms
Home Page: https://projectq.ch/
License: Apache License 2.0
We are already working to integrate scripts for running Psi4 into FermiLib as plugins. Expect this to be finished by the end of the weekend.
@jarrodmcc and I merged a PR yesterday to update the OpenFermion citation, which now includes more authors in arXiv v2. This PR was subsequently reverted and deleted without explanation or review by @damiansteiger. Was there a problem with the updated citation?
We might want to consider making a new non-alpha release. Now that OpenFermion is out, if we are going to keep FermiLib around, moving it out of alpha makes some sense to me. @damiansteiger and @thomashaener, thoughts?
Eventually we should include an implementation of these classes in C++ for performance reasons. This should help with faster TimeEvolution calls in ProjectQ, faster transformations, etc.
Very often we would like to analyze properties of operators on a restricted particle-number manifold. For instance, many (in fact nearly all) simulation algorithms depend in some way on the norm ||H||. But if you know that your simulation will occur on the fixed electron-number manifold (almost always the case), then you are really only concerned with the highest eigenvalue of H that has an eigenvector confined to that manifold. Another example: you'd like to simulate a particular state, perhaps the ground state of a jellium or Hubbard model Hamiltonian. But the ground states of these models are the vacuum; what you really want is the ground state in a particular particle sector. You want to use the Lanczos algorithm for efficiency, but that's not easy to do unless you first project the operator into the desired particle-number manifold.
Usually, the task is to obtain a state on a fixed particle number manifold and then take expectation values of that state with other operators. A very concrete example is that we'd like to know the effect that Trotter error has on the eigenvalues estimated by phase estimation. So what we can do is to take the expectation value of the Trotter error operator with the ground state of our Hamiltonian. But again, perhaps we are looking at a Hubbard model and want the ground state on that particular manifold.
Since we usually use the sparse representation of states and operators for such purposes, that is where this code will be most relevant. But what is the best strategy for performing this task? For instance, one could delete all the rows and columns of the matrix on the wrong sector to make the operator more compact. At least in the occupation-number basis (the one associated with Jordan-Wigner), I think this is not too complicated. I suppose you need to look at the Hamming weight of the binary representation of the row/column number. Right?
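As a concrete sketch of the row/column-deletion idea (assuming the scipy sparse representation and the Jordan-Wigner occupation-number basis, where basis state i occupies exactly the modes set in the binary expansion of i; the function name is illustrative, not existing FermiLib API):

```python
import numpy as np
import scipy.sparse


def restrict_to_particle_number(sparse_operator, n_electrons):
    # In the occupation-number basis, the particle number of basis state i
    # is the Hamming weight (popcount) of i, so we keep exactly the
    # rows/columns whose index has the desired Hamming weight.
    n_states = sparse_operator.shape[0]
    keep = [i for i in range(n_states) if bin(i).count('1') == n_electrons]
    return sparse_operator[keep, :][:, keep]


# Toy check: the diagonal operator diag(0..7) on 3 modes, restricted to the
# one-electron sector, keeps indices 1, 2 and 4 (the popcount-1 states).
operator = scipy.sparse.diags(np.arange(8)).tocsr()
restricted = restrict_to_particle_number(operator, 1)
```

Deleting the other rows and columns like this makes the operator compact enough that dense or Lanczos-style eigensolvers only ever see the desired sector.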
@jarrodmcc any thoughts on this? What strategy would you suggest?
This isn't really our fault, but it is our problem, because a lot of people won't actually open the notebook; they'll just look at it on GitHub. It seems to break whenever we write the string initializers.
@thomashaener @damiansteiger any idea why this is happening?
Currently, the class MolecularData can save and store information using an HDF5 container. However, some information, in particular arrays that can grow large (such as the integrals, RDMs, and CC amplitudes), is stored independently as numpy arrays. The logic behind this made sense for earlier versions of the code where we used pickle, because sometimes you want to load some information but not all of it. However, the HDF5 container already has the property that it only loads the data you ask for. Accordingly, we should make sure that all data is saved in one HDF5 file.
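A sketch of the single-file layout using h5py directly (the dataset names here are illustrative, not the actual MolecularData schema):

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'molecule.hdf5')

# Write everything, including the large arrays, into one container.
with h5py.File(path, 'w') as f:
    f['n_electrons'] = 2
    f['one_body_integrals'] = np.zeros((4, 4))
    f['two_body_integrals'] = np.zeros((4, 4, 4, 4))

# HDF5 is lazy: opening the file reads no array data, and indexing a
# dataset pulls in only the requested slice.
with h5py.File(path, 'r') as f:
    first_row = f['one_body_integrals'][0, :]
```

Because slicing a dataset only reads the requested portion from disk, keeping the integrals in the same file costs nothing for users who never touch them.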
We discussed this issue before the release and decided to switch to pytest.
For example code, browse the tests in ProjectQ.
Our continuous integration testing has therefore used pytest instead of unittest from the beginning. Pytest can run standard unittest files for backwards compatibility, but unfortunately our tests still use unittest.
Could we decide that, if we write completely new test files, these use pytest and not unittest?
Obviously it would also be great if the other tests were refactored at some point; that is not urgent, but we should start somewhere now. I know this means some people will have to learn pytest, but it is easy (see https://docs.pytest.org/en/latest/) and there are already great test examples in ProjectQ for pretty much all cases.
It would also help new users to see that the preferred way to test is with pytest. For example, in #123
self.assertTrue(evensector == 2**(n_qubits - 1))
was used, and naturally it doesn't give a sensible error message (False is not true). Of course one can use self.assertEqual instead to get a better error message, but such a thing would not have happened with pytest, as it needs no boilerplate:
assert evensector == 2**(n_qubits - 1)
@maffoo: I remember you were also in favour of pytest. Any more good reasons to switch?
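For reference, a complete pytest-style test file needs no TestCase boilerplate at all. A sketch (the helper function is a stand-in I made up, not FermiLib code):

```python
import pytest


def count_even_parity_states(n_qubits):
    # Stand-in for whatever computes evensector: half of all 2**n
    # computational basis states have an even number of set bits.
    return sum(1 for i in range(2 ** n_qubits) if bin(i).count('1') % 2 == 0)


@pytest.mark.parametrize('n_qubits', [1, 2, 3, 4])
def test_even_sector(n_qubits):
    # On failure, pytest reports both sides of the comparison automatically.
    assert count_even_parity_states(n_qubits) == 2 ** (n_qubits - 1)
```

The parametrize decorator also replaces hand-written loops over test cases, so each n_qubits value reports as a separate test.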
The third test in _plane_wave_hamiltonian_test seems to fail occasionally in Python 3.5. This is shown here: https://travis-ci.org/ProjectQ-Framework/FermiLib/jobs/230235580
The origin of this failure is not clear. We should get to the bottom of it.
The docstring says it's the real length of a single cubie, when actually it's the length of grid.length cubies.
... I think.
This also affects the docstring of volume_scale.
We should aim for 100% test coverage. Coveralls reveals which modules have the least coverage; those with the least, such as molecular_data, should be improved first.
Assuming modes must be non-negative integers, the bitwise complement will be unambiguous and significantly easier to read/write/code.
For example:
```python
def get_fermion_operator(interaction_operator):
    n_qubits = count_qubits(interaction_operator)
    identity_term = FermionOperator((), interaction_operator.constant)
    one_body_terms = FermionOperator.sum(
        FermionOperator.ladder(~p, q,  # annihilate p, create q
                               coefficient=interaction_operator[p, q])
        for p, q in itertools.product(range(n_qubits), repeat=2))
    two_body_terms = FermionOperator.sum(
        FermionOperator.ladder(~p, ~q, r, s,  # annihilate p and q, create r and s
                               coefficient=interaction_operator[p, q, r, s])
        for p, q, r, s in itertools.product(range(n_qubits), repeat=4))
    return identity_term + one_body_terms + two_body_terms
```
The downsides here are:
- ~ looks like -, so typos will be hard to spot and misreadings are somewhat likely.
- A malformed term like ((p, 0), 1) will probably make the code fail, but accidentally creating ~~p will likely run to completion and create garbage. The -p instead of ~p typo is also at high risk of making something that almost works and eats an hour of your time to debug.

The upsides are:
Please remove all \ characters used for line continuation and use parentheses () instead.
In an earlier version of FermiLib our core data structures supported easier manipulation and access. For instance, the iter method was defined on QubitOperator and FermionOperator so that one could type:
for coefficient, terms in my_operator:
instead of
for term in my_operator.terms:
    coefficient = my_operator[term]
Also, we had overloaded the set and slice methods for easier access to terms. I found this design to be more intuitive and user-friendly.
Additionally, we need an easy way to extract the term and coefficient from an operator that we know has only one of each (a pattern that comes up a lot), because currently there are some places in the code where we do this:
term = list(my_term.terms.keys())[0]
coefficient = my_term[term]
It would be much better to be able to write:
term, coefficient = my_term.pop_single_term()
And have an error be raised if there is more than one term in my_term.
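A minimal sketch of both conveniences on a hypothetical operator class whose terms dict maps term tuples to coefficients (the class and method names are illustrative, not existing FermiLib/ProjectQ API):

```python
class Operator:
    """Toy stand-in for FermionOperator/QubitOperator."""

    def __init__(self, terms):
        self.terms = dict(terms)  # term tuple -> coefficient

    def __iter__(self):
        # Proposed: iterate over (coefficient, term) pairs directly.
        for term, coefficient in self.terms.items():
            yield coefficient, term

    def pop_single_term(self):
        # Proposed: extract the unique (term, coefficient) pair, raising
        # if the operator does not contain exactly one term.
        if len(self.terms) != 1:
            raise ValueError('Operator does not have exactly one term.')
        term, coefficient = next(iter(self.terms.items()))
        return term, coefficient
```

With this, `for coefficient, terms in my_operator:` works directly, and the single-term extraction pattern collapses to one readable line.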
@damiansteiger how do you feel about this? We would want to keep the parallel structure between FermionOperator and QubitOperator and thus we should update the ProjectQ QubitOperator class as well if we do make these changes.
Two people today have accidentally pulled into master. Also, the develop code is fairly stable. Let's please make that change.
FermionOperator has a terms field with a complicated dict-of-tuple-of-tuples-to-complex type. The dictionary's key type should be a class LadderOperators, and LadderOperators should be a collection of LadderOperator instances. Both should support arithmetic operators for combining them, so that for example you could write p/q*r/s instead of ((p, 1), (q, 0), (r, 1), (s, 0)).
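One way the proposal could be prototyped. All names here, and the convention that / marks its right operand as an annihilation operator, are assumptions on my part, not a settled design:

```python
class Ladder:
    """Hypothetical ordered string of ladder operators.

    ops is a tuple of (mode, action) pairs with 1 = create, 0 = annihilate.
    """

    def __init__(self, ops):
        self.ops = tuple(ops)

    @classmethod
    def create(cls, mode):
        return cls([(mode, 1)])

    def __truediv__(self, other):
        # p / q: append q's modes as annihilation (lowering) operators.
        return Ladder(self.ops + tuple((m, 0) for m, _ in other.ops))

    def __mul__(self, other):
        # Plain concatenation of ladder-operator strings.
        return Ladder(self.ops + other.ops)


p, q, r, s = (Ladder.create(i) for i in range(4))
# p/q*r/s reproduces the raw key ((0, 1), (1, 0), (2, 1), (3, 0)).
assert (p / q * r / s).ops == ((0, 1), (1, 0), (2, 1), (3, 0))
```

Since / and * share precedence and associate left to right, ((p/q)*r)/s yields the same tuple, so the expression reads naturally without extra parentheses.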
We have a nice framework for saving and loading instances of the MolecularData class. We should also have something similar for FermionOperator and QubitOperator. In fact, there is a project going on right now which will benefit from this significantly in the coming weeks. A good example of an operator we might want to save and load is an error operator from the Trotter error code. These operators are expensive to compute and complex to analyze so there is good reason that one might want to save the output.
I suggest we continue to use HDF5 and try to loosely parallel the system by which MolecularData is saved. However, automatically generated names for arbitrary FermionOperators and QubitOperators are not a good idea since these classes are quite broad. Naming should then be left up to the user. The directory should perhaps be specified optionally with the default option being the same place where MolecularData is saved by default. I suggest that save() and load() are external functions, kept in utils/. We should anticipate automatic naming functions that will use these primitive save/load functions as subroutines. A good example where this would be helpful would be in saving and loading plane wave Hamiltonians.
We should think about the most efficient way to save both types of operators. An easy (but not necessarily optimal) solution involves calling the str() method that is already implemented in these operators. To load these operators one will need to write a parser. Since these are classes with a small number of attributes that are unlikely to change, it might make sense to use pickle (yes, I know about the security issue). A bigger concern with that is the discrepancy between pickling in python 2 and 3. Is there a standard way to store python dictionaries? That could work since a dictionary essentially defines QubitOperator and FermionOperator. We may also want to think about writing the builtin eval method on these classes. Keep in mind that if somebody is going to the trouble of saving these operators, they are likely rather large and performance should be a priority.
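Since a dictionary essentially defines the operator, one standard-library option is JSON, which also sidesteps the Python 2/3 pickle discrepancy. A sketch (assuming terms maps tuples of (index, action) pairs to numeric coefficients, as for FermionOperator; function names are placeholders):

```python
import json
import os
import tempfile


def save_terms(terms, filename):
    # JSON has no tuples or complex numbers, so encode each term as nested
    # lists and each coefficient as a [real, imag] pair.
    payload = [[[list(factor) for factor in term], [c.real, c.imag]]
               for term, c in terms.items()]
    with open(filename, 'w') as f:
        json.dump(payload, f)


def load_terms(filename):
    with open(filename) as f:
        payload = json.load(f)
    return {tuple(tuple(factor) for factor in term): complex(re, im)
            for term, (re, im) in payload}


# Round-trip a small FermionOperator-style terms dict:
path = os.path.join(tempfile.mkdtemp(), 'operator.json')
original = {((0, 1), (1, 0)): 1.5 + 0.5j, (): 2.0}
save_terms(original, path)
reloaded = load_terms(path)
```

For very large operators a binary format (e.g. HDF5 with parallel index and coefficient arrays) would likely be faster than text parsing, so this is only a baseline.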
I am curious to hear the opinions of @jarrodmcc, @damiansteiger, @thomashaener, and @Strilanc. We should discuss and agree on a solution prior to any pull requests being opened.
Hi!
I'm running some calculations on various molecules by counting how many terms there are in the JW operator (jw_qubit_operator.terms), where jw_qubit_operator = jordan_wigner(molecular_hamiltonian_from_MolecularData).
When I run the same calculation on different machines, I get different answers (different terms, with different lengths, different number of terms, etc). Why does this happen? Is this expected?
Thanks!
Once the electronic structure plugins are available we will advertise FermiLib on the ProjectQ website and update the "Teams" page to include all of the FermiLib contributors.
We've had a few problems with people submitting pull requests that break the demo file. The demo is quite important because it is likely the first thing that new users will read. However, we currently have no method to test whether it runs without errors. Such a test would save us a lot of trouble!
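One possible approach, assuming jupyter/nbconvert is available in CI (the notebook path is a placeholder): execute the demo headlessly and fail the build on any error.

```python
import subprocess


def test_demo_notebook_executes(notebook_path='examples/fermilib_demo.ipynb'):
    # --execute runs every cell in order; nbconvert exits nonzero if any
    # cell raises, which check_call converts into a test failure.
    subprocess.check_call([
        'jupyter', 'nbconvert', '--to', 'notebook', '--execute',
        '--output', 'executed_demo.ipynb', notebook_path])
```

Dropping this into the existing test suite would make any PR that breaks the demo fail CI before review.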
I notice that travis tests four versions of python, all on linux. Is it possible to also test on mac?
The asymptotic complexity of all functions in FermiLib should be written into the doc strings. Understanding where the bottlenecks are is quite important for the ultimate scalability of the library. When code is unacceptably slow, people should add an example to performance_benchmarks.py, in the examples folder, so we can work on speeding those things up.
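A sketch of the kind of helper such benchmark entries could share (performance_benchmarks.py and the examples folder are from the text above; the helper itself is my assumption):

```python
import time


def benchmark(function, *args, **kwargs):
    """Run one call and return (result, elapsed_seconds).

    Entries in performance_benchmarks.py could use this, while the
    benchmarked function's docstring states its asymptotic complexity.
    """
    start = time.perf_counter()
    result = function(*args, **kwargs)
    return result, time.perf_counter() - start


# Stand-in workload to show the calling convention:
total, seconds = benchmark(sum, range(10 ** 6))
```

Timing with time.perf_counter rather than time.time avoids clock-adjustment artifacts on short runs.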
In _jellium_test.py, if one changes "grid_length" in test_kinetic_integration to 3 or 4, my computer runs out of memory and freezes. I have no idea why this is, since at 3 qubits it's still only an 8 by 8 matrix. @Spaceenter, any ideas what is happening here?
Tests shouldn't be making changes to source files.
If this indicates a problem, then the test should be failing instead of editing the file.