
itensor / ndtensors.jl


A Julia package for n-dimensional sparse tensors.

License: Apache License 2.0

Julia 100.00%
block-sparsity julia sparse sparse-tensors tensor

ndtensors.jl's Introduction

Homepage: http://itensor.org/

An efficient and flexible C++ library for performing tensor network calculations.

The foundation of the library is the Intelligent Tensor or ITensor. Contracting ITensors is no harder than multiplying scalars: matching indices automatically find each other and contract. This makes it easy to transcribe tensor network diagrams into correct, efficient code.

Installation instructions can be found in the INSTALL file.

Citation

If you use ITensors.jl in your work, for now please cite the arXiv preprint:

@misc{fishman2020itensor,
    title={The \mbox{ITensor} Software Library for Tensor Network Calculations},
    author={Matthew Fishman and Steven R. White and E. Miles Stoudenmire},
    year={2020},
    eprint={2007.14822},
    archivePrefix={arXiv},
    primaryClass={cs.MS}
}

ndtensors.jl's People

Contributors

christopherdavidwhite, emstoudenmire, giggleliu, github-actions[bot], kshyatt, mtfishman, orialb


ndtensors.jl's Issues

Contracting DiagTensor with DenseTensor may be a bit slow

I noticed when comparing CTMRG benchmarks that ITensors.jl is a bit slower than C++ ITensor (maybe by 20% with no BLAS threading). Based on profiling, it looks like the contraction between DiagTensor and DenseTensor is probably the culprit.
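A rough benchmark sketch of the contraction in question (the sizes are arbitrary, and the storage/tensor constructors follow the patterns used elsewhere in these issues, so treat the exact calls as assumptions):

using NDTensors, BenchmarkTools

d = 100
D = NDTensors.tensor(NDTensors.Diag(randn(d)), (d, d))       # diagonal tensor
T = NDTensors.tensor(NDTensors.Dense(randn(d * d)), (d, d))  # dense tensor
@btime contract($D, (-1, 1), $T, (-1, 2))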

Bug in DiagBlockSparseTensor show method

On line 569 of blocksparse/diagblocksparse.jl, the code which unpacks the iterated value seems out-of-date:

for (n, (block, _)) in enumerate(diagblockoffsets(T))

whereas it now seems that the values returned by diagblockoffsets are plain integers rather than Tuples or some other iterable type. I tried replacing (block,_) with just block, but the rest of the code inside the loop apparently still expects block to be something other than an integer.

Replace `fullfidelity(::MPO, ::MPO)` with `fidelity(::ITensor, ::ITensor)`

fullfidelity is not my favorite name. Instead, we could define fidelity(::ITensor, ::ITensor), and then tell people to use fidelity(prod(rho), prod(sigma)). We can also define fidelity(::MPO, ::MPO) and just make it throw an error, telling people to use fidelity(::ITensor, ::ITensor) instead.
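For reference, a plain-matrix sketch of the quantity that fidelity(prod(rho), prod(sigma)) would compute, F(ρ, σ) = (tr √(√ρ σ √ρ))². Converting the ITensors to matrices and the actual ITensor-level method are not shown, and the function name here is just illustrative:

using LinearAlgebra

function fidelity_matrix(ρ::AbstractMatrix, σ::AbstractMatrix)
  s = sqrt(Hermitian(ρ))
  return real(tr(sqrt(Hermitian(s * σ * s))))^2
end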

Complex type not being propagated in blocksparse svd

Passing a complex block-sparse Tensor into the svd function can lead to an InexactError. Changing lines 153 and 154 of blocksparse/linearalgebra.jl:

U = BlockSparseTensor(undef, nzblocksU, indsU)
V = BlockSparseTensor(undef, nzblocksV, indsV)

to the following:

U = BlockSparseTensor(ElT, undef, nzblocksU, indsU)
V = BlockSparseTensor(ElT, undef, nzblocksV, indsV)

appears to fix the issue. However, when trying to write a unit test I'm getting very large differences between U*S*V' and the original tensor; perhaps I'm expecting the wrong output from the SVD?

using NDTensors, LinearAlgebra, Test

indsA = ([2,3],[4,5])
locs = [(1,2),(2,1)]
A = BlockSparseTensor(ComplexF64,locs,indsA)
randn!(A)
U,S,V = svd(A)
@test isapprox(norm(array(U)*array(S)*array(V)'-array(A)),0.0; atol=1e-14)

Improve Unit Tests

We should set up CI for NDTensors.jl.

Unfortunately the tests are not very comprehensive right now. Also, a lot of the functionality is tested through ITensors.jl, and it is important to make sure that NDTensors works with ITensor indices. Possibly we could have NDTensors run ITensors tests (and force it to use the current branch of NDTensors?), as suggested here: #16.
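One possible sketch of that workflow, run from a checkout of NDTensors.jl (treat the exact Pkg commands as an assumption about how the CI script would be set up):

using Pkg

Pkg.develop(path=".")   # make ITensors use the current NDTensors branch
Pkg.add("ITensors")
Pkg.test("ITensors")    # run the ITensors test suite against this branch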

Drop zero blocks during uncombining

Uncombining can "generate" new structurally nonzero blocks whose entries are all zero:

using ITensors

i = Index(QN(0,2)=>2,QN(1,2)=>2; tags="i")
j = settags(i,"j")
A = emptyITensor(i,j,dag(i'))
A[1,1,1] = 1.0 
C = combiner(i, j; tags="c")
AC = A * C
Ap = AC * dag(C)

@show A
@show Ap

gives:

A = ITensor ord=3
Dim 1: (dim=4|id=251|"i") <Out>
 1: QN(0,2) => 2
 2: QN(1,2) => 2
Dim 2: (dim=4|id=251|"j") <Out>
 1: QN(0,2) => 2
 2: QN(1,2) => 2
Dim 3: (dim=4|id=251|"i")' <In>
 1: QN(0,2) => 2
 2: QN(1,2) => 2
NDTensors.BlockSparse{Float64,Array{Float64,1},3}
 4×4×4
Block: Block{3}((0x0000000000000001, 0x0000000000000001, 0x0000000000000001), 0x8324cf5639de3e67)
 [1:2, 1:2, 1:2]
[:, :, 1] =
 1.0  0.0
 0.0  0.0

[:, :, 2] =
 0.0  0.0
 0.0  0.0

Ap = ITensor ord=3
Dim 1: (dim=4|id=251|"i") <Out>
 1: QN(0,2) => 2
 2: QN(1,2) => 2
Dim 2: (dim=4|id=251|"j") <Out>
 1: QN(0,2) => 2
 2: QN(1,2) => 2
Dim 3: (dim=4|id=251|"i")' <In>
 1: QN(0,2) => 2
 2: QN(1,2) => 2
NDTensors.BlockSparse{Float64,Array{Float64,1},3}
 4×4×4
Block: Block{3}((0x0000000000000001, 0x0000000000000001, 0x0000000000000001), 0x8324cf5639de3e67)
 [1:2, 1:2, 1:2]
[:, :, 1] =
 1.0  0.0
 0.0  0.0

[:, :, 2] =
 0.0  0.0
 0.0  0.0

Block: Block{3}((0x0000000000000002, 0x0000000000000002, 0x0000000000000001), 0x8681cd10e233a627)
 [3:4, 3:4, 1:2]
[:, :, 1] =
 0.0  0.0
 0.0  0.0

[:, :, 2] =
 0.0  0.0
 0.0  0.0

We could check for zero blocks here and drop them. However, checking every block for zeros may not be a cost we want to impose every time an uncombiner is used.

Since it is inconvenient to pass tolerances to contract, we could be very strict with the tolerance there, and then provide a dropzeros[!] function for NDTensors and ITensors.
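A rough sketch of the check such a function could perform (nzblocks and blockview are used here as in the other issues; the pruning/rebuilding step itself is only outlined, since how best to do it is part of the question):

using NDTensors, LinearAlgebra

function zero_blocks(T::NDTensors.BlockSparseTensor; tol = 0.0)
  return [b for b in nzblocks(T) if norm(blockview(T, b)) <= tol]
end

# dropzeros[!] would then rebuild (or prune) T without the blocks returned above.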

Small contraction optimizations

Oddly enough, for very small tensor contractions, using reshape to turn the Tensor storage data into a Matrix that gets passed to BLAS.gemm! takes a non-negligible amount of time. This appears to be a known issue: JuliaLang/julia#36313

The most common place this is used is the array(::Tensor) constructor, here: https://github.com/ITensor/NDTensors.jl/blob/master/src/dense.jl#L355, but it is also used directly here: https://github.com/ITensor/NDTensors.jl/blob/master/src/dense.jl#L652

An alternative might be to use a Base.ReshapedArray instead. It looks like it can still be passed to BLAS, for example:

julia> using LinearAlgebra, BenchmarkTools

julia> V = randn(6)
6-element Array{Float64,1}:
 -0.26695949899994226
 -0.13359169415563152
  0.37079765916385093
  0.6749298904891178
 -0.8679197316877514
  0.2343964793982253

julia> @btime reshape($V, (3, 2));
  31.608 ns (1 allocation: 64 bytes)

julia> @btime Base.ReshapedArray($V, (3, 2), ());
  4.781 ns (1 allocation: 32 bytes)

julia> A = Base.ReshapedArray(V, (3, 2), ())
3×2 reshape(::Array{Float64,1}, 3, 2) with eltype Float64:
 -0.266959   0.67493
 -0.133592  -0.86792
  0.370798   0.234396

julia> A isa StridedArray
true

julia> BLAS.gemm('N', 'T', A, A)
3×3 Array{Float64,2}:
  0.526798   -0.550121   0.0592132
 -0.550121    0.771131  -0.252973
  0.0592132  -0.252973   0.192433

Change SVD convention

Right now, U,S,V = svd(A) has the convention U*S*transpose(V) == A, while Julia uses the convention that it outputs V such that U*S*V' == A. We should consider changing it to the Julia convention.
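A matrix-level illustration of the relationship implied by the two conventions (V_nd is just a local name here for what an NDTensors-style V corresponds to):

using LinearAlgebra

A = randn(ComplexF64, 3, 3)
F = svd(A)
F.U * Diagonal(F.S) * F.V' ≈ A                 # Julia convention
V_nd = conj(F.V)
F.U * Diagonal(F.S) * transpose(V_nd) ≈ A      # current NDTensors-style convention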

Think about the names of sparse storage types/tensors

It may be good to rethink the names of the sparse storage tensors and constructors. Right now there is:

  • Dense
  • Diag
  • BlockSparse
  • DiagBlockSparse

DiagBlockSparse may not be a good name, however. I think it is a bit misleading, because naively one might expect that it has a general block sparse structure (where any blocks may be nonzero), where the blocks are diagonal. However, it is actually a block diagonal tensor (the only nonzero blocks are along the diagonal), where the nonzero blocks themselves are diagonal. Therefore, it may be better to call it DiagBlockDiag. This looks like a strange name, but it is anticipating that we may want to have a BlockDiag storage where the diagonal blocks are dense (many block diagonal matrices already show up in the code, for example as inputs to eigen).

Additionally, we may want to rename some of the constructors to use UniformDiag explicitly instead of Diag, so that it is clearer when a uniform Diag storage is being made.

Try to simplify the overload requirements for inds

We should reevaluate the interface requirements for the inds object of the Tensor type, and try to make it as simple as possible. For example, if we assume the inds object is indexable (which is true for Dims and IndexSet) then it should just require having the elements of inds have a dim or blockdim overload, instead of requiring inds itself to have a dim or blockdim overload.
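A toy illustration of the simplification being proposed: if the inds object is indexable, generic fallbacks only need dim on its elements, not on the collection itself (MyIndex and these dim methods are made up for illustration, not NDTensors code):

struct MyIndex
  dim::Int
end
dim(i::MyIndex) = i.dim

dim(inds) = prod(dim(i) for i in inds)   # generic fallback for any indexable inds
dim(inds, n::Integer) = dim(inds[n])

example_inds = (MyIndex(2), MyIndex(3), MyIndex(4))
dim(example_inds)      # 24
dim(example_inds, 2)   # 3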

Extend Spectrum definition

As discussed in ITensor/ITensors.jl/pull/427, it would be helpful to extend the type parametrization of Spectrum to Union{<:AbstractVector, Nothing} so that the eigs field can be nothing for cases like qr where the eigenvalues are not easy to compute.

Also, we might consider changing the name of the eigs(...) function to eigvals(...) to match Julia's eigvals function.
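A sketch of the extended parametrization (field names follow the existing eigs/truncerr accessors, but treat the exact definition as an assumption):

struct Spectrum{VecT<:Union{AbstractVector,Nothing}}
  eigs::VecT
  truncerr::Float64
end

Spectrum([0.9, 0.1], 0.0)   # e.g. from an svd/eigen-based factorization
Spectrum(nothing, 0.0)      # e.g. from qr, where eigenvalues are not computed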

Borrow some interface ideas from BlockArrays and other packages

It could be useful to adopt some interface ideas from other Julia packages that implement blocked arrays/matrices, such as BlockArrays.jl.

In BlockArrays.jl, the "block index" (the label of a block) has a special type, Block. This allows notation like:

julia> using BlockArrays

julia> A = PseudoBlockArray(ones(3,3), [1,2], [2,1]) # A dense array split into 4 blocks
2×2-blocked 3×3 PseudoBlockArray{Float64,2}:
 1.0  1.0  │  1.0
 ──────────┼─────
 1.0  1.0  │  1.0
 1.0  1.0  │  1.0

julia> @view A[Block(2,1)]  # Similar to our `blockview` function
2×2 view(::PseudoBlockArray{Float64,2,Array{Float64,2},Tuple{BlockedUnitRange{Array{Int64,1}},BlockedUnitRange{Array{Int64,1}}}}, BlockSlice(Block(2),2:3), BlockSlice(Block(1),1:2)) with eltype Float64:
 1.0  1.0
 1.0  1.0

julia> A[BlockIndex((2,1), (2,2))] = 2  # Set the (2,2) index of the (2,1) block
2

julia> A
2×2-blocked 3×3 PseudoBlockArray{Float64,2}:
 1.0  1.0  │  1.0
 ──────────┼─────
 1.0  1.0  │  1.0
 1.0  2.0  │  1.0

It would be nice to use similar concepts in NDTensors for the BlockSparseTensor and DiagBlockSparseTensor.

Simplify `_contract_scalar!` logic by using `LoopVectorization.jl`

Simplify the _contract_scalar! logic by using LoopVectorization.jl instead of BLAS calls. I believe in general LoopVectorization.jl is as fast as MKL for any BLAS level 1 and 2 operations (and even BLAS level 3 for smaller matrices), and can be used very simply through the broadcast syntax like @avx Rᵈ .= α .* Tᵈ .+ Rᵈ instead of the ugly syntax BLAS.axpy!(α, Tᵈ, Rᵈ).

This should simplify that logic a lot, and allow us to merge the logic of _contract_scalar! and permutedims!!, which could also use LoopVectorization.jl for the trivial permutation branch. In principle, _contract_scalar! should be able to just call permutedims!!.

EDIT: Note that a main disadvantage of changing from BLAS calls to @avx calls is that they use different kinds of multithreading, so to get both matrix multiplications (which of course right now use BLAS.gemm!) and vectorized operations to use multithreading, a user would need to have both BLAS and Julia multithreading enabled. I have found that even if they are not nested, they can still cause problems with each other, so that needs to be investigated. Looking forward to a world where this is all in Julia!
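For concreteness, the two spellings being compared, in a standalone form (Rᵈ, Tᵈ, and α are defined here just for illustration):

using LinearAlgebra, LoopVectorization

Rᵈ = randn(1000); Tᵈ = randn(1000); α = 2.0

BLAS.axpy!(α, Tᵈ, Rᵈ)       # current approach: Rᵈ .= α .* Tᵈ .+ Rᵈ via a BLAS level 1 call
@avx Rᵈ .= α .* Tᵈ .+ Rᵈ    # proposed: the same operation via a LoopVectorization broadcast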

Make a lazily permuted Tensor wrapper

It might be nice to have a lazily permuted Tensor wrapper, similar to Julia's PermutedDimsArray. We could define it as something like PermutedDims{<:Tensor} that stores a view of the Tensor as well as the permutation of the dimensions. A first use case would be to help with compatibility with Strided.jl, where we could have a conversion from our lazy permuted Tensor type to their lazy permuted array type.
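A rough sketch of the wrapper (the name, fields, and the bridge function are assumptions, not existing NDTensors API):

using NDTensors

struct LazyPermutedTensor{TensorT<:Tensor,N}
  parent::TensorT
  perm::NTuple{N,Int}
end

# One possible bridge to Julia's lazy type, e.g. for handing the data to
# Strided.jl without materializing the permutation:
to_permuteddimsarray(P::LazyPermutedTensor) = PermutedDimsArray(array(P.parent), P.perm)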

Change `permutedims!!` interface

Currently, permutedims!!(R::Tensor, T::Tensor, perm, f::Function) accepts only a 2-argument function that applies pairwise to the elements of R and permutedims(T, perm).

This could be replaced by a more general function map_permutedims!!(f::Function, R::Tensor, T1::Tensor, perm1, T2::Tensor, perm2, ...), where f is an n-argument function applied elementwise to permutedims(T1, perm1), ..., permutedims(Tn, permn). This could act as a useful backend for more general ITensor broadcasting, and could be written directly in terms of Strided.jl broadcast calls like:

@strided R .= f.(permutedims(T1, perm1), permutedims(T2, perm2), ...)

Generalize `contract` to accept any valid labels

The contract function, at least for the case of Dense tensors, currently only succeeds when the labels arguments obey certain conventions, as shown in the example code below. It would be good to generalize contract so that any valid labels (meaning distinct integers for non-contracted indices, and matching integers for contracted indices) would be accepted.

In this sample code, the call to contract making C1 succeeds, but the second call fails with the error shown below:

using NDTensors

let
  A = Tensor(2,3,4)
  B = Tensor(4,2)
  randn!(A)
  randn!(B)

  C1 = contract(A,(-1,1,2),B,(3,-1))
  @show C1[1,1,1] ≈ A[1,1,1]*B[1,1]+A[2,1,1]*B[1,2]

  C2 = contract(A,(1,2,3),B,(4,3))
  @show C2[1,1,1] ≈ A[1,1,1]*B[1,1]+A[2,1,1]*B[1,2]
end

The error:

[1] error(::String) at ./error.jl:33
[2] is_trivial_permutation(::NTuple{5,Int64}) at /mnt/home/mstoudenmire/.julia/packages/NDTensors/mBPTL/src/tupletools.jl:84
[3] contract!!(::Tensor{Float64,5,Dense{Float64,Array{Float64,1}},NTuple{5,Int64}}, ::NTuple{5,Int64}, ::Tensor{Float64,3,Dense{Float64,Array{Float64,1}},Tuple{Int64,Int64,Int64}}, ::Tuple{Int64,Int64,Int64}, ::Tensor{Float64,2,Dense{Float64,Array{Float64,1}},Tuple{Int64,Int64}}, ::Tuple{Int64,Int64}, ::Int64, ::Int64) at /mnt/home/mstoudenmire/.julia/packages/NDTensors/mBPTL/src/dense.jl:440
[4] contract!! at /mnt/home/mstoudenmire/.julia/packages/NDTensors/mBPTL/src/dense.jl:419 [inlined]
[5] contract(::Tensor{Float64,3,Dense{Float64,Array{Float64,1}},Tuple{Int64,Int64,Int64}}, ::Tuple{Int64,Int64,Int64}, ::Tensor{Float64,2,Dense{Float64,Array{Float64,1}},Tuple{Int64,Int64}}, ::Tuple{Int64,Int64}, ::NTuple{5,Int64}) at /mnt/home/mstoudenmire/.julia/packages/NDTensors/mBPTL/src/dense.jl:407
[6] contract(::Tensor{Float64,3,Dense{Float64,Array{Float64,1}},Tuple{Int64,Int64,Int64}}, ::Tuple{Int64,Int64,Int64}, ::Tensor{Float64,2,Dense{Float64,Array{Float64,1}},Tuple{Int64,Int64}}, ::Tuple{Int64,Int64}) at /mnt/home/mstoudenmire/.julia/packages/NDTensors/mBPTL/src/dense.jl:404
[7] top-level scope at /mnt/home/mstoudenmire/ndtensors_test.jl:12

NDTensors.jl incompatible with Optim.jl

Hello everybody,

I am trying to minimise a function of some variables and of a quantum state, using ITensors.jl to measure an MPS. However, the Optim.jl package depends on FiniteDiff.jl, which in turn depends on ArrayInterface.jl, which defines a setindex method that is ambiguous with the one in NDTensors.jl. This causes an error when using the elementary method setelt() in ITensors.jl.

An example:

julia> using ITensors, Optim

julia> i = Index(4)
(dim=4|id=70)

julia> setelt(i=>1)
ERROR: MethodError: setindex(::NDTensors.Tensor{Float64,1,NDTensors.Empty{Float64,NDTensors.Dense{Float64,Array{Float64,1}}},IndexSet{1,Index{Int64},Tuple{Index{Int64}}}}, ::Float64, ::Int64) is ambiguous. Candidates:
  setindex(T::NDTensors.Tensor{var"#s79",N,StoreT,IndsT} where StoreT<:NDTensors.Empty where IndsT where var"#s79"<:Number, x, I...) where N in NDTensors at ~/.julia/packages/NDTensors/YYVnF/src/empty.jl:99
  setindex(x::AbstractArray{T,1} where T, v, i::Int64) in ArrayInterface at ~/.julia/packages/ArrayInterface/hIHGL/src/ArrayInterface.jl:106
Possible fix, define
  setindex(::NDTensors.Tensor{T,1,StoreT,IndsT} where StoreT<:NDTensors.Empty where IndsT where T<:Number, ::Any, ::Int64)
Stacktrace:
 [1] setindex!!(::NDTensors.Tensor{Float64,1,NDTensors.Empty{Float64,NDTensors.Dense{Float64,Array{Float64,1}}},IndexSet{1,Index{Int64},Tuple{Index{Int64}}}}, ::Float64, ::Int64) at ~/.julia/packages/NDTensors/YYVnF/src/empty.jl:106
 [2] setindex!(::ITensor{1}, ::Float64, ::Int64) at ~/.julia/packages/ITensors/ka2Nl/src/itensor.jl:706
 [3] setelt(::Pair{Index{Int64},Int64}, ::Vararg{Pair{Index{Int64},Int64},N} where N) at ~/.julia/packages/ITensors/ka2Nl/src/itensor.jl:473
 [4] top-level scope at REPL[22]:1

I think it would be great to be able to use Optim.jl in conjunction with ITensors.jl. However, I have no idea whether the fix proposed by the compiler (Possible fix, define setindex(::NDTensors.Tensor{T,1,StoreT,IndsT} where StoreT<:NDTensors.Empty where IndsT where T<:Number, ::Any, ::Int64)) would be suitable.

Cheers
Jan

Introduce `setstorage` and `setinds`

It would be helpful to have generic functions setstorage and setinds that create a copy of a tensor with the specified storage or indices. For example, right now complex(::Tensor) is defined as:

complex(T::Tensor) = tensor(complex(store(T)), inds(T))

which could be replaced by:

complex(T::Tensor) = setstorage(T, complex(store(T)))

Also a note that I would like to deprecate store(T) in favor of storage(T).
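A minimal sketch of the two proposed helpers (assumed definitions, not existing API; names are qualified to avoid relying on what NDTensors exports):

using NDTensors

setstorage(T::Tensor, newstorage) = NDTensors.tensor(newstorage, NDTensors.inds(T))
setinds(T::Tensor, newinds) = NDTensors.tensor(NDTensors.store(T), newinds)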

A similar pattern would be to add a setdata function for the storage types, which could help with writing generic code across different storage types. For example, right now there is a generic complex defined for all storage types:

Base.complex(S::T) where {T <: TensorStorage} = complex(T)(complex(data(S)))

but also more specialized versions for sparse storage types:

complex(D::BlockSparse{T}) where {T} =
  BlockSparse{complex(T)}(complex(data(D)), blockoffsets(D))

Many functions like this only act on the data part of the storage and not on the block offset list, so they could be replaced by a single generic version:

Base.complex(S::T) where {T <: TensorStorage} = setdata(T, complex(data(S)))

where the generic function setdata would have to be defined for each storage type.
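For instance, a sketch of per-storage-type setdata definitions (written here with setdata taking the storage instance so that BlockSparse can reuse blockoffsets(S); whether it should instead take the storage type, as in the snippet above, is an open choice):

using NDTensors

setdata(S::NDTensors.Dense, newdata) = NDTensors.Dense(newdata)
setdata(S::NDTensors.BlockSparse, newdata) = NDTensors.BlockSparse(newdata, NDTensors.blockoffsets(S))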

Contraction logic optimizations

There are some more contraction logic optimizations to do, following up on #25. For example:

  • Have ContractionProperties only store Tuples, not MVectors. MVectors can be used as temporaries in compute_perms! and compute_contraction_properties!. This should decrease allocation requirements.
  • Provide a way to pass just the number of output indices statically to contract!!. This will help with type stability for out-of-place contraction, since in that case the number of output indices is not known at compile time. In general, this type instability is not too bad, but for small tensor contractions it can add some overhead (I think on the order of 1 microsecond). This type instability shows up in the function contract_labels, where it outputs a Tuple of the labels of the new tensor but it is created dynamically.
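A toy illustration of the type-stability point in the last bullet: the length of the output label tuple cannot be inferred from the input labels alone, but passing it statically as a Val fixes the output type (these helper names are made up for illustration, not NDTensors functions):

noncommon_labels(l1, l2) = Tuple(setdiff((l1..., l2...), intersect(l1, l2)))

noncommon_labels_static(l1, l2, ::Val{NR}) where {NR} =
  NTuple{NR,Int}(setdiff((l1..., l2...), intersect(l1, l2)))

noncommon_labels((-1, 1, 2), (3, -1))                 # return type is a Tuple of unknown length
noncommon_labels_static((-1, 1, 2), (3, -1), Val(3))  # return type is NTuple{3,Int}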

Use nonzeros as an alternative name for data

Right now, a view of the vector storage of a Tensor can be obtained with the function data. We could consider adopting the Julia SparseArrays function name nonzeros for this purpose (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nonzeros), which returns a view of the structural nonzeros of a sparse array.

This is helpful for iteration, any time the iteration is only supposed to work on the elements that are already nonzero.

The line for what data should be touched is a bit blurry from the ITensor perspective. For example, for a QN ITensor, what is supposed to be meant by A .+= 2? It could be interpreted in 3 ways:

  1. Modifying only the structurally nonzero elements.
  2. Modifying any element consistent with the flux (with a reminder that even determining which blocks are consistent with the flux can be slow in the limit of many blocks).
  3. Modifying every element of the ITensor (which would be inconsistent with the flux in general, so would necessitate making the storage dense).

(1) is probably the most common, but likely it is best to make it explicit using nonzeros(A) .+= 2, and possibly it is best to disallow A .+= 2 unless a use case is found where the operations are meant to act on something besides the current structurally nonzero elements.
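An illustration of option (1), using the existing data function; nonzeros would be the proposed, more explicit alias (the constructor call follows the BlockSparseTensor examples elsewhere in these issues):

using NDTensors

T = BlockSparseTensor(Float64, [(1, 1), (2, 2)], ([2, 2], [2, 2]))
randn!(T)
NDTensors.data(T) .+= 2   # touches only the stored (structurally nonzero) elements
# nonzeros(T) .+= 2       # proposed spelling of the same operation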

Allow specifying the element type in `random_unitary`/`random_orthog`

Allow specifying the element type in random_unitary, such as random_unitary(ComplexF64, 4, 4) or random_unitary(Float64, 4, 4). Then, random_orthog(m, n) could be defined as random_unitary(Float64, m, n). Allows for more generic code (for example see the usage in randomCircuitMPS).
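A sketch of one standard QR-based way such a method could be written (whether this matches the existing random_unitary implementation is an assumption; the sign correction makes the distribution uniform):

using LinearAlgebra

function random_unitary(::Type{ElT}, n::Int, m::Int) where {ElT<:Number}
  F = qr(randn(ElT, n, m))
  return Matrix(F.Q) * Diagonal(sign.(diag(F.R)))
end

random_unitary(n::Int, m::Int) = random_unitary(ComplexF64, n, m)
random_orthog(n::Int, m::Int) = random_unitary(Float64, n, m)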

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
