block-hczhai / block2-preview

Efficient parallel quantum chemistry DMRG in MPO formalism

License: GNU General Public License v3.0

CMake 0.34% Roff 1.21% Python 17.53% C++ 80.88% C 0.02% Shell 0.01%
ab-initio bose-hubbard density-matrix-renormalization-group dmrg fermi-hubbard heisenberg-model matrix-product-states mrci pyscf quantum-chemistry tj-model

block2-preview's People

Contributors: bogdanoff, brianz98, chillenb, dependabot[bot], h-larsson, hczhai, seunghoonlee89, zhcui

block2-preview's Issues

DMRG-SCF error with pyscf+block2

Hi All,

I would like to use block2 as an FCI solver in a DMRG-SCF scheme with PySCF 2.0.1. Both packages were installed via pip and can run some examples.

To connect PySCF and block2, I manually added a settings.py script:

import os
from pyscf import lib

BLOCKEXE = '/home/cuiys/.conda/envs/cuiys/bin/block2main'
BLOCKEXE_COMPRESS_NEVPT = '/home/cuiys/.conda/envs/cuiys/bin/block2main'
#BLOCKSCRATCHDIR = os.path.join('./scratch', str(os.getpid()))
BLOCKSCRATCHDIR = os.path.join(lib.param.TMPDIR, str(os.getpid()))
#BLOCKRUNTIMEDIR = '.'
BLOCKRUNTIMEDIR = str(os.getpid())
MPIPREFIX = 'mpirun'  # change to srun for SLURM job system
BLOCKVERSION = '0.4.10'

So now I can correctly do 'from pyscf import dmrgscf'. I tried to run an example:

from pyscf import gto, scf, mcscf, dmrgscf, mrpt
dmrgscf.settings.MPIPREFIX = 'mpirun -n 3'

mol = gto.M(atom='C 0 0 0; C 0 0 1', basis='631g', verbose=5)
mf = scf.RHF(mol).run()
mc = dmrgscf.DMRGSCF(mf, 4, 4)
mc.kernel()

But it terminated with an error, starting with:

Intel MKL ERROR: Parameter 5 was incorrect on entry to DGEMM .
Intel MKL ERROR: Parameter 5 was incorrect on entry to DGEMM .

If I remove dmrgscf.settings.MPIPREFIX = 'mpirun -n 3', it also terminates with an error.

If I use

mc.fcisolver = dmrgscf.DMRGCI(mol) 
dmrgscf.dryrun(mc)

then I can run 'block2main dmrg.conf' normally. Can you give me any clues?

Thanks in advance!
yunshu

Installation on Apple M2

I'm attempting to install block2 on an Apple M2 and am running into an MKL dependency issue. Can you please help resolve it?

ERROR: Cannot install block2==0.1.10, block2==0.1.4, block2==0.1.5, block2==0.1.6, block2==0.1.7, block2==0.1.8, block2==0.2.0, block2==0.3.0, block2==0.4.0, block2==0.4.1, block2==0.4.10, block2==0.4.12, block2==0.4.13, block2==0.4.14, block2==0.4.2, block2==0.4.5, block2==0.4.6, block2==0.4.8, block2==0.4.9, block2==0.5.0 and block2==0.5.1 because these package versions have conflicting dependencies.

The conflict is caused by:
block2 0.5.1 depends on mkl==2021.4
block2 0.5.0 depends on mkl==2019
block2 0.4.14 depends on mkl==2019
block2 0.4.13 depends on mkl==2019
block2 0.4.12 depends on mkl==2019
block2 0.4.10 depends on mkl==2019
block2 0.4.9 depends on mkl==2019
block2 0.4.8 depends on mkl==2019
block2 0.4.6 depends on mkl==2019
block2 0.4.5 depends on mkl==2019
block2 0.4.2 depends on mkl==2019
block2 0.4.1 depends on mkl==2019
block2 0.4.0 depends on mkl==2019
block2 0.3.0 depends on mkl==2019
block2 0.2.0 depends on mkl==2019
block2 0.1.10 depends on mkl
block2 0.1.8 depends on mkl
block2 0.1.7 depends on mkl
block2 0.1.6 depends on mkl
block2 0.1.5 depends on mkl
block2 0.1.4 depends on mkl

I have the following packages installed in my conda environment:

Name Version Build Channel
bzip2 1.0.8 h1de35cc_0
ca-certificates 2023.01.10 hecd8cb5_0
intel-openmp 2021.4.0 hecd8cb5_3538
libffi 3.4.2 hecd8cb5_6
mkl 2021.4.0 hecd8cb5_637
ncurses 6.4 hcec6c5f_0
openssl 1.1.1t hca72f7f_0
pip 23.0.1 py310hecd8cb5_0
python 3.10.11 h218abb5_2
readline 8.2 hca72f7f_0
setuptools 66.0.0 py310hecd8cb5_0
sqlite 3.41.2 h6c40b1e_0
tk 8.6.12 h5d9f67b_0
tzdata 2023c h04d1e81_0
wheel 0.38.4 py310hecd8cb5_0
xz 5.4.2 h6c40b1e_0
zlib 1.2.13 h4dc903c_0

I am on an Apple M2 Mac, running macOS Ventura 13.3.1 (a).
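
A possible workaround, untested: Intel does not provide MKL builds for Apple Silicon, so the pinned mkl wheels can never resolve there. Building from source without MKL might work; the sketch below reuses the cmake options shown in another issue on this page, and the exact flags should be treated as assumptions:

git clone https://github.com/block-hczhai/block2-preview.git
cd block2-preview && mkdir build && cd build
# USE_MKL=OFF (assumed flag) so a non-MKL BLAS/LAPACK is picked up instead
cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_MKL=OFF -DBUILD_LIB=ON -DLARGE_BOND=ON
cmake --build . --config Release -- --jobs=4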

Seg fault on Hubbard dimer

Hi,

I'm trying to use block2 for DMRG as the high-level solver within some DMET calculations on the Hubbard model. As a minimal example, to compare the results with DMRG to the results with FCI, I tried to run both DMRG and FCI on a Hubbard dimer. However, block2main throws a segfault on this system. Details are below - and any help is appreciated!

The configuration file I'm using is as follows. The segfault doesn't seem to be sensitive to the schedule used.

nelec 2
spin 0
hf_occ integral

schedule
0 100 1e-06 1e-06
5 200 1e-07 1e-07
8 200 1e-07 0e+00
end

maxiter 28
twodot_to_onedot 11
sweep_tol 1e-06
orbitals ./FCI_dump
warmup local_4site
nroots 1
outputlevel 2
prefix ./tmp
mkl_thrds 2
noreorder

The FCI_dump orbitals file is as follows:

&FCI NORB= 2,NELEC= 2,MS2= 0,
  ORBSYM=1,1,
  ISYM=1,
  IUHF=1,
 &END
  0.0000000000000000   0   0   0   0
  0.0000000000000000   0   0   0   0
  1.0000000000000000   1   1   1   1
  1.0000000000000000   2   2   2   2
  0.0000000000000000   0   0   0   0
 -1.0000000000000000   2   1   0   0
 -1.0000000000000000   2   2   0   0
  0.0000000000000000   0   0   0   0
 -1.0000000000000000   2   1   0   0
 -1.0000000000000000   2   2   0   0
  0.0000000000000000   0   0   0   0
  0.0000000000000000   0   0   0   0
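
For reference, the FCI answer for this dimer can be cross-checked directly in PySCF; a minimal sketch, assuming hopping t = 1 and on-site U = 1 as encoded above:

import numpy as np
from pyscf import fci

n = 2
h1 = np.zeros((n, n))
h1[0, 1] = h1[1, 0] = -1.0               # hopping t = 1
eri = np.zeros((n, n, n, n))
eri[0, 0, 0, 0] = eri[1, 1, 1, 1] = 1.0  # on-site U = 1

e, civec = fci.direct_spin1.kernel(h1, eri, n, (1, 1))
print("FCI ground state:", e)            # expect U/2 - sqrt((U/2)**2 + 4*t**2)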

With these files, when I run block2main dmrg.conf.seg, I get this output:


********************************** INPUT START **********************************
nelec                                                            2
spin                                                             0
hf_occ                                                    integral
maxiter                                                         28
twodot_to_onedot                                                11
sweep_tol                                                    1e-06
orbitals                                                ./FCI_dump
warmup                                                 local_4site
nroots                                                           1
outputlevel                                                      2
prefix                                                       ./tmp
mkl_thrds                                                        2
noreorder
schedule                  Sweep   0-   4 : Mmps =   100 Noise =     1e-06 DavTol =     1e-06
                          Sweep   5-   7 : Mmps =    10 Noise =     1e-07 DavTol =     1e-07
                          Sweep   8-  27 : Mmps =    10 Noise =         0 DavTol =     1e-07
irrep                                                            1
********************************** INPUT END   **********************************

SPIN ADAPTED - REAL DOMAIN
qc mpo type =  QCTypes.Conventional
 UseMainStack = 0 MinDiskUsage = 1 MinMemUsage = 0 IBuf = 0 OBuf = 0
 FPCompression: prec = 1.00e-16 chunk = 1024
 IMain = 0 B / 134 MB DMain = 0 B / 687 MB ISeco = 0 B / 57.2 MB DSeco = 0 B / 1.01 GB
 OpenMP = 1 TBB = 0 MKL = INTEL 2019.0.0 SeqType = Tasked MKLIntLen = 4
 THREADING = 2 layers : Global | Operator BatchedGEMM
 NUMBER : Global = 56 Operator = 28 Quanta = 0 MKL = 2
 COMPLEX = 1 KSYMM = 0
read integral finished 0.011580886999999984
integral sym error =            0
MinMPOMemUsage =  False
MPS =  CC 0 2 < N=2 S=0 PG=0 >
GS INIT MPS BOND DIMS =       1     3     1
pre-mpo memory usage =  69.7 MB
build mpo start ...
zsh: segmentation fault  block2main dmrg.conf.seg

Thanks for the help.
Gil

Particle Number Not Conserved in Triplets at Low M Value?

Hello,

I am running the code below for a state-averaged DMRGCI calculation based on the example here: https://block2.readthedocs.io/en/latest/user/dmrg-scf.html. Everything seems to work properly at a high enough M value and for the singlet state. However, at low M values, the CAS 2rdm written to disk in 2pdm.npy appears not to conserve particle number, as demonstrated by the code below:

from pyscf import gto, scf, lib, dmrgscf, mcscf
import numpy as np
import os

dmrgscf.settings.BLOCKEXE = os.popen("which block2main").read().strip()
dmrgscf.settings.MPIPREFIX = ''

mol = gto.Mole()
mol.charge = 0
mol.atom = [('O', [0.0, 0.0, -0.13209669380597672]),
            ('H', [0.0, 1.4315287853817316, 0.9797000689025815]),
            ('H', [0.0, -1.4315287853817316, 0.9797000689025815])]
mol.unit = "bohr"
mol.basis = "ccpvdz"
mol.spin = 0
mol.symmetry = "c2v"
mol.build()
mf = scf.RHF(mol)
mf.kernel()

nactorb = 13
nactelec = (4,4)

coeff = np.array([[ 1.00084509e+00,  9.27715182e-03,  9.45150653e-03,
                  0.00000000e+00,  0.00000000e+00, -4.37852318e-02,
                  5.48903114e-01, -2.22213228e-01,  2.54298029e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -2.24384618e-01,
                  0.00000000e+00,  2.89705169e-01,  0.00000000e+00,
                  0.00000000e+00, -3.53989539e-01,  2.62744542e-02],
                [-5.66273145e-03,  2.69813624e-01,  4.00958797e-01,
                  0.00000000e+00,  0.00000000e+00,  2.69584693e-02,
                  1.41758767e+00, -4.43008260e-01,  4.98743651e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -5.09191969e-01,
                  0.00000000e+00,  6.14396791e-01,  0.00000000e+00,
                  0.00000000e+00, -6.15268261e-01, -3.25434272e-02],
                [-8.76366930e-03,  8.20907335e-02,  4.77632681e-01,
                  0.00000000e+00,  0.00000000e+00,  6.48631908e-01,
                 -1.07080173e+00,  9.07299269e-01, -1.17139965e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  6.33951532e-01,
                  0.00000000e+00, -1.61238643e-01,  0.00000000e+00,
                  0.00000000e+00,  3.37259951e+00, -1.27583564e+00],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  3.03629495e-17,  6.33214473e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  1.00610184e-17,  5.68648991e-17,
                 -9.64796430e-01, -1.83302999e-02,  1.79163104e-17,
                 -3.70689064e-18,  3.99289627e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  5.89669167e-02,
                 -2.73127055e-17,  0.00000000e+00,  0.00000000e+00],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                 -4.95864595e-01,  3.87732039e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -1.64308899e-01, -9.28674278e-01,
                 -5.90767430e-17, -1.12240715e-18, -2.92595554e-01,
                  6.05381183e-02, -6.52089446e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  3.61068229e-18,
                  4.46050330e-01,  0.00000000e+00,  0.00000000e+00],
                [-2.10212458e-04,  3.97698207e-01, -3.96475966e-01,
                  0.00000000e+00,  0.00000000e+00, -5.06347268e-01,
                  3.51840661e-01,  7.63247726e-01, -1.82626023e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -3.85388211e-02,
                  0.00000000e+00,  4.65225575e-01,  0.00000000e+00,
                  0.00000000e+00,  3.02543052e-01, -1.89898318e-01],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  1.34753105e-17,  4.93308371e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  3.62059746e-17, -1.98805914e-17,
                  9.75614804e-01, -1.43689136e-01, -7.44559036e-17,
                 -4.78581300e-17, -5.13883950e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -7.59686750e-01,
                 -9.95901622e-17,  0.00000000e+00,  0.00000000e+00],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                 -2.20068521e-01,  3.02064259e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -5.91288438e-01,  3.24674697e-01,
                  5.97391774e-17, -8.79842203e-18,  1.21595718e+00,
                  7.81582575e-01,  8.39236179e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -4.65173973e-17,
                  1.62643078e+00,  0.00000000e+00,  0.00000000e+00],
                [-3.30166518e-06,  1.97883502e-01, -2.94229966e-01,
                  0.00000000e+00,  0.00000000e+00,  6.76328330e-01,
                  9.36785000e-02, -5.45430645e-01, -3.54567819e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -2.55074266e-01,
                  0.00000000e+00, -7.02403255e-01,  0.00000000e+00,
                  0.00000000e+00,  1.67096710e+00, -4.17870936e-01],
                [-3.09749044e-21, -1.08665333e-18,  7.06643330e-19,
                  0.00000000e+00,  0.00000000e+00, -1.05861807e-18,
                  2.96093237e-18, -3.43209923e-17, -8.02511562e-17,
                  8.65253956e-01,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  2.87579016e-19,
                 -6.35351096e-01,  2.52449210e-17,  0.00000000e+00,
                  0.00000000e+00, -5.20617655e-17,  1.03300009e-16],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                 -2.69511519e-02,  1.05256609e-18,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  6.63648500e-01, -2.66303784e-01,
                 -2.24067754e-18,  5.65080978e-17,  2.27833762e-02,
                 -6.42967256e-02,  2.64491933e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -3.12053372e-17,
                  1.09750613e+00,  0.00000000e+00,  0.00000000e+00],
                [-1.14334576e-04,  9.21752059e-03, -1.17500402e-02,
                  0.00000000e+00,  0.00000000e+00,  2.51192262e-02,
                  1.35999580e-01,  9.93273917e-02,  4.91052138e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  1.32831213e-01,
                  0.00000000e+00, -5.33143043e-02,  0.00000000e+00,
                  0.00000000e+00,  6.15612163e-01,  7.44170540e-01],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  1.65028209e-18,  1.71897087e-02,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -4.06367505e-17,  1.63064039e-17,
                 -3.65930413e-02,  9.22847270e-01, -1.39507944e-18,
                  3.93703896e-18, -1.61954600e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -5.09621831e-01,
                 -6.72028686e-17,  0.00000000e+00,  0.00000000e+00],
                [-2.52929289e-05, -8.87319780e-03,  5.77018068e-03,
                  0.00000000e+00,  0.00000000e+00, -8.64427255e-03,
                  2.41778477e-02, -2.80252170e-01, -6.55300420e-01,
                 -1.05963049e-16,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  2.34826087e-03,
                  7.78080686e-17,  2.06140423e-01,  0.00000000e+00,
                  0.00000000e+00, -4.25116577e-01,  8.43508587e-01],
                [-2.75674287e-03,  2.84767956e-01, -5.53715541e-02,
                 -3.31435431e-01,  0.00000000e+00,  2.50231419e-01,
                 -5.22473785e-01, -5.35501134e-02,  3.69020476e-01,
                  0.00000000e+00,  5.08886426e-01,  3.44444869e-01,
                  0.00000000e+00,  0.00000000e+00,  1.73368850e-01,
                 -2.87537401e-01, -4.95658143e-01, -1.41630692e-01,
                  0.00000000e+00,  5.73329171e-01,  0.00000000e+00,
                 -1.30292118e+00, -1.50177138e+00,  6.89881360e-01],
                [ 3.30436524e-04,  2.95982078e-02, -8.38872625e-03,
                 -7.73415641e-02,  0.00000000e+00, -8.13406706e-01,
                  2.92053975e-01, -3.95346023e-01,  2.12586037e-01,
                  0.00000000e+00, -6.04125717e-02,  2.30618075e-01,
                  0.00000000e+00,  0.00000000e+00, -1.77640500e+00,
                 -1.41097653e-01,  2.49641499e-01, -2.24348062e-02,
                  0.00000000e+00, -3.51667001e-01,  0.00000000e+00,
                 -3.73703761e-01, -4.07576393e-01,  1.39185848e-01],
                [-5.42122252e-20,  2.98741780e-18, -3.11080906e-19,
                 -1.51984174e-18,  3.14215016e-02,  3.61114657e-18,
                 -3.40949970e-18,  7.95276913e-19,  4.34413513e-18,
                  2.01238892e-01,  6.11750162e-18, -1.44636673e-18,
                  6.73576892e-02,  1.54958426e-01, -1.07796738e-17,
                  2.36941471e-17,  3.37771509e-17,  2.61106693e-17,
                  7.52670802e-01, -2.32514640e-17,  8.46714639e-01,
                 -5.16798376e-17, -6.16823634e-17,  1.35795943e-17],
                [ 8.85352825e-04, -4.87882351e-02,  5.08033673e-03,
                  2.48208992e-02,  1.92401207e-18, -5.89744989e-02,
                  5.56813557e-02, -1.29878576e-02, -7.09451105e-02,
                  1.23223282e-17, -9.99063832e-02,  2.36209612e-02,
                  4.12446893e-18,  9.48846701e-18,  1.76045433e-01,
                 -3.86954787e-01, -5.51622736e-01, -4.26419590e-01,
                  4.60877944e-17,  3.79725223e-01,  5.18463187e-17,
                  8.43995798e-01,  1.00734944e+00, -2.21771605e-01],
                [ 6.53444999e-04, -1.40115819e-02, -1.77791562e-02,
                  3.26371215e-02,  0.00000000e+00, -3.62689964e-04,
                  9.58494786e-02, -7.28541503e-02,  1.27016002e-02,
                  0.00000000e+00, -3.43692146e-03, -2.60528004e-02,
                  0.00000000e+00,  0.00000000e+00,  1.37651568e-01,
                  7.53162591e-01, -3.01536678e-01,  4.68299324e-01,
                  0.00000000e+00,  5.45580122e-01,  0.00000000e+00,
                  6.42305117e-01,  2.93877061e-01, -6.12929973e-01],
                [-2.75674287e-03,  2.84767956e-01, -5.53715541e-02,
                  3.31435431e-01,  0.00000000e+00,  2.50231419e-01,
                 -5.22473785e-01, -5.35501134e-02,  3.69020476e-01,
                  0.00000000e+00, -5.08886426e-01, -3.44444869e-01,
                  0.00000000e+00,  0.00000000e+00, -1.73368850e-01,
                  2.87537401e-01,  4.95658143e-01, -1.41630692e-01,
                  0.00000000e+00,  5.73329171e-01,  0.00000000e+00,
                  1.30292118e+00, -1.50177138e+00,  6.89881360e-01],
                [ 3.30436524e-04,  2.95982078e-02, -8.38872625e-03,
                  7.73415641e-02,  0.00000000e+00, -8.13406706e-01,
                  2.92053975e-01, -3.95346023e-01,  2.12586037e-01,
                  0.00000000e+00,  6.04125717e-02, -2.30618075e-01,
                  0.00000000e+00,  0.00000000e+00,  1.77640500e+00,
                  1.41097653e-01, -2.49641499e-01, -2.24348062e-02,
                  0.00000000e+00, -3.51667001e-01,  0.00000000e+00,
                  3.73703761e-01, -4.07576393e-01,  1.39185848e-01],
                [ 5.42122252e-20, -2.98741780e-18,  3.11080906e-19,
                 -1.51984174e-18,  3.14215016e-02, -3.61114657e-18,
                  3.40949970e-18, -7.95276913e-19, -4.34413513e-18,
                 -2.01238892e-01,  6.11750162e-18, -1.44636673e-18,
                  6.73576892e-02,  1.54958426e-01, -1.07796738e-17,
                  2.36941471e-17,  3.37771509e-17, -2.61106693e-17,
                 -7.52670802e-01,  2.32514640e-17,  8.46714639e-01,
                 -5.16798376e-17,  6.16823634e-17, -1.35795943e-17],
                [-8.85352825e-04,  4.87882351e-02, -5.08033673e-03,
                  2.48208992e-02,  1.92401207e-18,  5.89744989e-02,
                 -5.56813557e-02,  1.29878576e-02,  7.09451105e-02,
                 -1.23223282e-17, -9.99063832e-02,  2.36209612e-02,
                  4.12446893e-18,  9.48846701e-18,  1.76045433e-01,
                 -3.86954787e-01, -5.51622736e-01,  4.26419590e-01,
                 -4.60877944e-17, -3.79725223e-01,  5.18463187e-17,
                  8.43995798e-01, -1.00734944e+00,  2.21771605e-01],
                [ 6.53444999e-04, -1.40115819e-02, -1.77791562e-02,
                 -3.26371215e-02,  0.00000000e+00, -3.62689964e-04,
                  9.58494786e-02, -7.28541503e-02,  1.27016002e-02,
                  0.00000000e+00,  3.43692146e-03,  2.60528004e-02,
                  0.00000000e+00,  0.00000000e+00, -1.37651568e-01,
                 -7.53162591e-01,  3.01536678e-01,  4.68299324e-01,
                  0.00000000e+00,  5.45580122e-01,  0.00000000e+00,
                 -6.42305117e-01,  2.93877061e-01, -6.12929973e-01]])

lib.param.TMPDIR = os.path.abspath(lib.param.TMPDIR)

solvers = [dmrgscf.DMRGCI(mol, maxM=2, tol=1E-10) for _ in range(2)]
weights = [1.0 / len(solvers)] * len(solvers)

solvers[0].spin = 0
solvers[1].spin = 2

for i, mcf in enumerate(solvers):
    mcf.runtimeDir = lib.param.TMPDIR + "/%d" % i
    mcf.scratchDirectory = lib.param.TMPDIR + "/%d" % i
    mcf.threads = 8
    mcf.memory = int(mol.max_memory / 1000) # mem in GB

mc = mcscf.CASSCF(mf, nactorb, nactelec)
mcscf.state_average_mix_(mc, solvers, weights)

mc.max_cycle = 0
returned = mc.kernel(coeff)

def analyze_casdm2(mc,casdm2_fn):
    casdm2s = np.load(casdm2_fn) 

    casdm2s = np.einsum("ciklj->cijkl",casdm2s) #Switch to chemist notation
    casdm2aa, casdm2ab, casdm2bb = casdm2s
    casdm2 = casdm2aa + 2*casdm2ab + casdm2bb

    nactel = np.sum(mc.nelecas)
    casdm1 = np.einsum('ikjj->ki', casdm2)
    casdm1 /= (nactel-1)
    
    dm2tr = np.round(np.einsum('iijj->',casdm2),8)
    dm1tr = np.round(np.einsum('ii->',casdm1),8)
    print("CASDM2 Trace:",dm2tr,"(Expected:",nactel*(nactel-1),")")
    print("CASDM1 Trace:",dm1tr,"(Expected:",nactel,")")

    neleca, nelecb = mc.nelecas
    casdm1n = (2-(neleca+nelecb)/2.) * casdm1 - np.einsum('pkkq->pq', casdm2)
    casdm1n *= 1./(neleca-nelecb+1)
    casdm1a, casdm1b = (casdm1+casdm1n)*.5, (casdm1-casdm1n)*.5
    print("CASDM1A Trace:",np.round(np.trace(casdm1a),8))
    print("CASDM1B Trace:",np.round(np.trace(casdm1b),8))
    
print("--- Singlet Analysis ---")
singlet_casdm2 = lib.param.TMPDIR + "/0/node0/2pdm.npy"
analyze_casdm2(mc,singlet_casdm2)

print("--- Triplet Analysis ---")
triplet_casdm2 = lib.param.TMPDIR + "/1/node0/2pdm.npy"
analyze_casdm2(mc,triplet_casdm2)

Here I have used the code in dmrgci.py to read the density matrices from disk and obtain the spin 1rdms. The output I get for the above is:

"""
converged SCF energy = -76.0267027987298
CASSCF energy = -74.0005790322444
CASCI E = -74.0005790322444 E(CI) = -21.8658428098696 S^2 = 1.0000000
CASCI state-averaged energy = -74.0005790322444
CASCI energy for each state
State 0 weight 0.5 E = -74.7504817835604 S^2 = 0.0000000
State 1 weight 0.5 E = -73.2506762809283 S^2 = 2.0000000
--- Singlet Analysis ---
CASDM2 Trace: 56.0 (Expected: 56 )
CASDM1 Trace: 8.0 (Expected: 8 )
CASDM1A Trace: 4.0
CASDM1B Trace: 4.0
--- Triplet Analysis ---
CASDM2 Trace: 54.19404306 (Expected: 56 )
CASDM1 Trace: 7.74200615 (Expected: 8 )
CASDM1A Trace: 5.88646361
CASDM1B Trace: 1.85554254
"""

Additionally, when this occurs, I am unable to reproduce the DMRG-CASCI energy reported by block2 using the triplet casdm2 read from disk (I am able to reproduce the value for the singlet). I am wondering whether this lack of particle-number conservation is due to some approximation in block2 for computing the 2rdm that fails for triplets/excited states, etc., at low M value. Is there a reason that the DMRG-CASCI energy reported by block2 would differ from the energy obtained using the rdm2 written to disk by block2?
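
For reference, the energy reconstruction mentioned above can be sketched as follows, assuming PySCF's mc.get_h1eff() / mc.get_h2eff() for the active-space integrals and the chemist-notation casdm1 / casdm2 produced by analyze_casdm2 above:

from pyscf import ao2mo
import numpy as np

h1eff, ecore = mc.get_h1eff()                    # active-space h1 and core energy
eri = ao2mo.restore(1, mc.get_h2eff(), mc.ncas)  # unpack to full 4-index ERIs

e_elec = (np.einsum('ij,ij->', h1eff, casdm1)
          + 0.5 * np.einsum('ijkl,ijkl->', eri, casdm2))
print("reconstructed E(CASCI):", e_elec + ecore)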

Thank you and apologies for the lengthy numpy array above.

Segfault in CSF sampling

Dear @hczhai,

I have a bug that I'm honestly not sure how to further analyze.
The error message is:

[xxx:129821:0:129821] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffffffffffc)
[xxx:129821:1:129866] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffffffffffc)
[xxx:129821:2:129864] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffffffffffc)
==== backtrace (tid: 129866) ====
 0 0x00000000000219c3 ucs_debug_print_backtrace()  /dev/shm/UCX/1.10.0/GCCcore-10.3.0/ucx-1.10.0/src/ucs/debug/debug.c:656
 1 0x000000000538e43d block2::DeterminantTRIE<block2::SU2Long, double, void>::evaluate()  determinant.cpp:0
 2 0x000000000000e0d5 GOMP_taskgroup_end()  ???:0
 3 0x0000000000007ea5 start_thread()  pthread_create.c:0
 4 0x00000000000fe96d __clone()  ???:0
=================================
[xxx:129821] *** Process received signal ***
[xxx:129821] Signal: Segmentation fault (11)
[xxx:129821] Signal code:  (-6)
[xxx:129821] Failing at address: 0x6d950001fb1d
[xxx:129821] [ 0] /lib64/libpthread.so.0(+0xf630)[0x2b6389123630]
[xxx:129821] [ 1] /usr/bin/Block2/p0.5.1rc4/block2.cpython-39-x86_64-linux-gnu.so(+0x538e43d)[0x2b639617343d]
[xxx:129821] [ 2] /usr/bin/Block2/p0.5.1rc4/block2_mpi.libs/libgomp-f7e03b3e.so.1.0.0(+0xe0d5)[0x2b63988ad0d5]
[xxx:129821] [ 3] /lib64/libpthread.so.0(+0x7ea5)[0x2b638911bea5]
[xxx:129821] [ 4] /lib64/libc.so.6(clone+0x6d)[0x2b6389dad96d]
[xxx:129821] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 129821 on node exited on xxx signal 11 (Segmentation fault).
--------------------------------------------------------------------------

The input file is:

nelec 42
spin 2
orbitals FCIDUMP
schedule default
maxiter 100
maxM 500
sweep_tol 1.0000e-07
outputlevel 3
hf_occ integral
prefix  /scratch/
memory, 700, g
num_thrds 36
mem 200 g
restart_sample 0.1
mps_tags KET1

which I run with mpirun --bind-to core --map-by ppr:1:node:pe=36 block2main sample.conf with the mpi-enabled version.

The weird thing is that it only happens for 1 of my 2 Hamiltonians.
So the exact same script with exactly the same number of orbitals/electrons (43o, 42e) but slightly different Hamiltonian/MPS values does end normally.
I've also tried to lower the value of the sampling cutoff as I've previously bumped into a divide by zero, but that doesn't seem to help.
At first I suspected an out-of-memory bug, but lowering the bond dimension did not change anything; also, my other calculation fits in memory.
Any thoughts? I understand that there is not much to go on, so let me know if you need other config data or need me to run a different test.
If this is a user caused problem, do you know what I am doing wrong?
The DeterminantTRIE struct seems fairly indifferent to the Hamiltonian and should, I would think, work for any MPS.

Memory issues of DMRG with mpirun

Hi All,
I met with a problem when I was doing a DMRG-SCF run with block2.

ERROR: mpirun -n 40 /home/cuiys/.conda/envs/test/bin/block2main dmrg.conf > dmrg.out 2>&1
Traceback (most recent call last):
  File "/home/SCRATCH/PYSCF_59/dmrg_restart.py", line 41, in <module>
    mc.kernel(mo1)
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/mcscf/mc1step.py", line 812, in kernel
    _kern(self, mo_coeff,
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/mcscf/mc1step.py", line 349, in kernel
    e_tot, e_cas, fcivec = casscf.casci(mo, ci0, eris, log, locals())
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/mcscf/mc1step.py", line 834, in casci
    e_tot, e_cas, fcivec = casci.kernel(fcasci, mo_coeff, ci0, log)
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/mcscf/casci.py", line 546, in kernel
    e_tot, fcivec = casci.fcisolver.kernel(h1eff, eri_cas, ncas, nelecas,
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/dmrgscf/dmrgci.py", line 730, in kernel
    executeBLOCK(self)
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/dmrgscf/dmrgci.py", line 948, in executeBLOCK
    raise err
  File "/home/cuiys/.conda/envs/test/lib/python3.9/site-packages/pyscf/dmrgscf/dmrgci.py", line 943, in executeBLOCK
    check_call(cmd, cwd=DMRGCI.runtimeDir, shell=True)
  File "/home/cuiys/.conda/envs/test/lib/python3.9/subprocess.py", line 373, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'mpirun -n 40 /home/cuiys/.conda/envs/test/bin/block2main dmrg.conf > dmrg.out 2>&1' returned non-zero exit status 137.

[input_output.zip](https://github.com/block-hczhai/block2-preview/files/8525294/input_output.zip)

dmrg.conf, dmrg.out and FCIDUMP can be found attached. Could you tell me where the problem is? Thanks in advance.

Sample determinants instead of CSFs

The documentation says:

The keyword sample or restart_sample can be used to sample CSFs or determinants after DMRG or from an MPS loaded from disk. The value associated with the keyword sample or restart_sample is the threshold for sampling.

However, sample always samples CSFs. Is there a possibility to directly sample Slater determinants instead of CSFs at the end of a DMRG calculation?
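
(A guess, untested: CSFs are tied to the spin-adapted (SU2) mode, so repeating the calculation in the non-spin-adapted mode and sampling there might yield Slater determinants instead, e.g. with input lines along these lines:

nonspinadapted
sample 0.1

where 0.1 is an illustrative threshold.)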

Thank you!

ltdmrg.py cannot import "get_partition_weights"

I recently installed block2 via pip install block2 and then tried to run ltdmrg.py in pyblock2; it seems the name get_partition_weights cannot be imported:

ImportError: cannot import name 'get_partition_weights' from 'block2'

Example from documentation gives WARN: DMRG executable file for nevptsolver is the same to the executable file for DMRG solver.

Hi, first of all thanks for the detailed response here: pyscf/pyscf#1601

So I went ahead and installed block2, and it seems to be running. I am getting a warning when running the compress_approx nevpt2 example from here

WARN: DMRG executable file for nevptsolver is the same to the executable file for DMRG solver. If they are both compiled by MPI compilers, they may cause error or random results in DMRG-NEVPT calculation.

The code seems to run fine despite the warning, and it seems clear that it is intentional that the DMRG and DMRG-NEVPT2 executables are the same. I just wanted to check that this is all working correctly and that I didn't make some mistake in my installation that would give me incorrect NEVPT2 results.

No Orbital Rotation

Hello,

I have noticed that when using the PySCF interface with mcscf.CASCI() and using dmrgscf.DMRGCI() fcisolvers, the orbitals I enter (mc.mo_coeff before mc.kernel()) are not identical to the orbitals I get out (mc.mo_coeff after mc.kernel()). It is not clear to me from the documentation which orbitals I am getting out, especially when doing a state-averaged calculation (e.g. using mcscf.state_average_mix_).

I presume that mc.mo_coeff[mc.ncore:mc.ncore+mc.ncas] should correspond to the active orbitals associated with the 1pdm.npy and 2pdm.npy files. But is there a way to get the 1pdm and 2pdm printed out in the basis I put in? I have tried playing with mc.canonicalization and mc.natorb, but this does not seem to do anything.

I can of course undo any rotation of the active space, but it appears that even in SA-DMRGCI some orbitals are being rotated out of the active space to form the final set of coefficients Cfinal (as I have found by evaluating the SVD of Cfinal^T S Cinitial), maybe due to some canonicalization?
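
For what it's worth, the undo-the-rotation step mentioned above can be sketched like this, assuming coeff0 holds the input mo_coeff, dm1 is the active-space 1pdm from 1pdm.npy, and the names are illustrative:

import numpy as np

s = mf.get_ovlp()                                 # AO overlap matrix
act = slice(mc.ncore, mc.ncore + mc.ncas)
# active-active overlap between output and input orbitals
u = mc.mo_coeff[:, act].T @ s @ coeff0[:, act]
# rotate the 1pdm from the output active basis back to the input one;
# if orbitals were rotated out of the active space, u will not be unitary
dm1_input = u.T @ dm1 @ u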

Thank you for any help you can provide.

dmrgscf freezes after start (too large integral ?)

Hello,
I come to you again with a question. I am trying to perform a state-averaged dmrgscf with pyscf (2.1.1) and the latest release of block2 (not the MPI one), installed through pip, on a rather big first-row transition-metal complex (triplet). I couldn't tell whether I was hitting the limit of what is feasible with the current implementation or whether I'm doing something wrong.
I have a "moderately" large active space (18e,20o) with a large basis set 745 cGTO (and no density fitting) and 30 atoms and a max starting bond dimension of 200 to try and get decent orbitals.
So far the program seems to freeze when calling block2main after the ROHF step.

The last test was on 16 threads with an exceedingly large amount of memory (1 TB), as memory issues were my first guess. First, all threads are used for the ERI calculations and throughout the ROHF part. When entering the DMRG part, only one CPU remains active and the ~230 GB of used memory stays in use (I'm guessing the integrals are stored in memory). After that nothing happens and it remains in that state (the longest I've let it run is about 12 hours; I ended up killing it). I couldn't diagnose what is happening at that step, since it seems to sit at the boundary between the two programs. I would expect a long integral-transformation step, but the fact that it runs on only one core makes me doubt what's really going on.

And the Slurm scheduler seems to be lost too:
slurmstepd: error: slurm_get_node_energy: Socket timed out on send/recv operation
slurmstepd: error: _get_joules_task: can't get info from slurmd
This is where it stops:
******** <class 'pyscf.mcscf.addons._state_average_mcscf_solver.<locals>.StateAverageMCSCF'> ********
CAS (10e+8e, 20o), ncore = 54, nvir = 671
frozen orbitals 17
max_cycle_macro = 50
max_cycle_micro = 4
conv_tol = 1e-07
conv_tol_grad = None
orbital rotation max_stepsize = 0.02
orbital rotation threshold for CI restart = 0.01
augmented hessian ah_max_cycle = 30
augmented hessian ah_conv_tol = 1e-12
augmented hessian ah_linear dependence = 1e-14
augmented hessian ah_level shift = 1e-08
augmented hessian ah_start_tol = 2.5
augmented hessian ah_start_cycle = 3
augmented hessian ah_grad_trust_region = 3
kf_trust_region = 3
kf_interval = 4
ci_response_space = 4
ci_grad_trust_region = 3
with_dep4 0
natorb = True
canonicalization = True
sorting_mo_energy = False
ao2mo_level = 2
chkfile = model_tpa.chk
max_memory 900000 MB (current use 309642 MB)
internal_rotation = False

******** Block flags ********
executable = /home/lchaussy/.local/bin/block2main
BLOCKEXE_COMPRESS_NEVPT= /home/lchaussy/.local/bin/block2main
input.txt

I attach the minimal working example input that I used for debugging.

I have successfully run the same script with a minimal basis set, and have also tested smaller molecules with that same basis and input (with outstanding efficiency, I have to say). Is this the limit of what is actually feasible, or is my strategy wrong here? I am very new to the PySCF ecosystem and block2 and might be missing something obvious.
I did also see several strategies to reduce memory use in block2, which I have not used here yet, thinking that if a bond dimension as small as this one (200) doesn't work out, there is no need to push harder anyway.

DMRG-SC-NEVPT2 result seems problematic for CAS(2,2)

Hi, I'm doing some tests and found that the DMRG-CASSCF(2,2)-based SC-NEVPT2 result from block2 differs from the result of a standard CASSCF(2,2)-NEVPT2. (Of course, there is no need to perform DMRG calculations for such a small active space, so this is just a test.)

This problem only occurs in CAS(2,2). I've tried two versions (block2-preview-0.5.1 and block2-preview-p0.5.2rc4) and this problem can be reproduced. For larger active spaces like (4,4) or (8,8), numerical results of two methods are almost identical. Here is one example: the CAS(2,2) of methane (one C-H bonding orbital and one C-H antibonding orbital)
ch4_cas22.zip

The DMRG-SC-NEVPT2 energy components calculated by block2 are

Sr    (-1)',   E = -0.00000000008124
Si    (+1)',   E = -0.00000000005290
Sijrs (0)  ,   E = -0.09775925767618
Sijr  (+1) ,   E = -0.00000000000000
Srsi  (-1) ,   E = -0.00000000000000
Srs   (-2) ,   E = -0.00000000000000
Sij   (+2) ,   E = -0.00000000000000
Sir   (0)' ,   E = -0.00000000000000
Nevpt2 Energy = -0.097759257810315

And there is one warning in the output (note that for CAS(2,2), nelectrons - 2 = 0, so this division by zero may well be related):

/home/jxzou/software/dmrgscf/pyscf/dmrgscf/dmrgci.py:419: RuntimeWarning: invalid value encountered in true_divide
  twopdm /= (nelectrons-2)

The SC-NEVPT2 energy components calculated by CASSCF-NEVPT2 are

Sr    (-1)',   E = -0.00000000007663
Si    (+1)',   E = -0.00000000004147
Sijrs (0)  ,   E = -0.09775932714861
Sijr  (+1) ,   E = -0.00337510290839
Srsi  (-1) ,   E = -0.02314602815303
Srs   (-2) ,   E = -0.00952348805530
Sij   (+2) ,   E = -0.00024465569697
Sir   (0)' ,   E = -0.01382402538095
Nevpt2 Energy = -0.147872627461348

So, it seems that Sijr, Srsi, Srs, Sij and Sir components of block2 are problematic when calculating a very small active space like CAS(2,2).

Again, thanks for this excellent package!

A bug in the Green's function when orbital transformation is used

Hello there,

In pyblock2/gfdmrg.py, there is a method called greens_function of the GFDMRG class. The greens_function method takes a parameter called mo_coeff which is set to None by default. However, when I try to supply a matrix to this parameter, the following error occurs:

python3: <path_to_block2>/block2-preview/src/instantiation/core/../../core/parallel_tensor_functions.hpp:122: void block2::ParallelTensorFunctions<S, FL>::right_assign(const std::shared_ptr<block2::OperatorTensor<S, FL> >&, std::shared_ptr<block2::OperatorTensor<S, FL> >&) const [with S = block2::SU2Long; FL = double]: Assertion `a->rmat->data[i] == c->rmat->data[i]' failed.

My debugging attempt indicates that the error happens at line 647: rme.init_environments(False).

How to reproduce the error
At lines 1023 and 1035 of pyblock2/gfdmrg.py, change mo_coeff=None to mo_coeff=np.eye(len(idxs)), then just run the test calculation at the end of the file (python gfdmrg.py).

State specific dmrgci orbitals for multiple roots in pyblock2

Hello,

I would like to retrieve the natural orbitals for each state following a state-averaged dmrgscf calculation. As I understand it, a second DMRGCI should be performed on the averaged density to get a set of orthogonal orbitals. Is this currently possible via pyblock2, or is the only option to use the "statespecific" keyword via block2main as described in the documentation? Alternatively, I suppose I can follow that procedure and diagonalise the 1pdm for each state, but do I then also have to do the preceding state-averaged calculation with block2main (not via pyblock2)?
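
(A sketch of the diagonalise-the-1pdm route mentioned above, assuming dm1 is one state's active-space 1pdm in the active MO basis; names are illustrative:)

import numpy as np

occ, rot = np.linalg.eigh(dm1)
occ, rot = occ[::-1], rot[:, ::-1]   # sort by descending natural occupation
# natural orbitals of this state expressed in the AO basis
natorb = mc.mo_coeff[:, mc.ncore:mc.ncore + mc.ncas] @ rot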

Any advice most welcome

not finding shared MKL object

I am trying to run dmrgscf to get CASSCF with DMRG as the active-space solver for a molecule with 9 heavy atoms. While running the code, I am facing several issues: one related to "not finding shared MKL objects" and another related to "running out of scratch space in the /tmp folder". Can you please help me with how to solve these?
Thank you in advance.
Thank you in advance.

von Neumann entropy

Hi, I am trying to get the von Neumann entropy (one-orbital and two-orbital) and eventually also the mutual information between pairs of orbitals. Is this printed somewhere by block2 already, or would I need to calculate it myself? I noticed there is a keyword that may control this, 'store_wfn_spectra'; the documentation says this may output the one-orbital entropy to a file called sweep_wfn_entropy.npy, but I couldn't find this file when running with this keyword. Was this deprecated (git grep doesn't find this file in the current version)?

Any advice appreciated.

installation of block2 on mac system

Greetings,

Many compliments for putting together the block2 package!
I have tried to install it on a macOS Big Sur machine through the pip install block2 command, but encountered an error message, documented in the enclosed txt file. Could I ask if there are any thoughts/advice about how to resolve this issue? I remain available for any information I may supply.

Thank you for the attention,
Mario

block2_installation.txt

openmpi lib issues

Hi Huanchen,

We have updated to the newest version of block2. While running CASSCF calculations, I encountered the following issue.

  File "/lcrc/project/SEI/Naveen/DMRG/dmrgscf/new_test/test.py", line 96, in <module>
    mc.kernel([coeff, coeff])
  File "/soft/anaconda3/2023.03/envs/DMRG-mpi/lib/python3.10/site-packages/pyscf/mcscf/umc1step.py", line 460, in kernel
    _kern(self, mo_coeff,
  File "/soft/anaconda3/2023.03/envs/DMRG-mpi/lib/python3.10/site-packages/pyscf/mcscf/umc1step.py", line 247, in kernel
    e_tot, e_cas, fcivec = casscf.casci(mo, ci0, eris, log, locals())
  File "/soft/anaconda3/2023.03/envs/DMRG-mpi/lib/python3.10/site-packages/pyscf/mcscf/umc1step.py", line 492, in casci
    e_tot, e_cas, fcivec = ucasci.kernel(fcasci, mo_coeff, ci0, log,
  File "/soft/anaconda3/2023.03/envs/DMRG-mpi/lib/python3.10/site-packages/pyscf/mcscf/ucasci.py", line 110, in kernel
    e_tot, fcivec = casci.fcisolver.kernel(h1eff, eri_cas, ncas, nelecas,
  File "/soft/anaconda3/2023.03/envs/DMRG-mpi/lib/python3.10/site-packages/pyscf/dmrgscf/dmrgci.py", line 713, in kernel
    writeIntegralFile(self, h1e, eri, norb, nelec, ecore)
  File "/lcrc/project/SEI/Naveen/DMRG/dmrgscf/new_test/test.py", line 17, in write_uhf_fcidump
    from block2 import FCIDUMP, VectorUInt8
ImportError: /soft/openmpi/4.0.7/bdw/gcc-9.2/lib/libmpi.so: undefined symbol: opal_hwloc201_hwloc_get_type_depth

Can you please help us with this issue or suggest a fix? The complete output file is attached for your review.

slurm-2825656.txt

Thank you,
Naveen

spin correlation calculations

Dear Huanchen,
I am trying to test 3 Ru atoms in a chain, each 5 Angstrom apart. It is taking forever to finish calculating the integrals. Can you please let me know how to speed up the calculation? Please find the input and output files attached.
Thank you,
Naveen

Transition Density Matrices with state_average_mix?

Hello,

I am trying to calculate transition density matrices between states in different irreps using state_average_mix in PySCF (e.g. see here: https://block2.readthedocs.io/en/latest/user/dmrg-scf.html for the energy calculation)

Averaging over two irreps appears to create two separate scratch directories and run directories ("solver0" and "solver1"), each with its own dmrg.conf input with the respective number of roots specified in each passed fcisolver.

I am now trying to calculate transition density matrices between the computed ground state (e.g. in the 1A1 irrep) and a number of computed excited states (e.g. in the 1B1 irrep), following the documentation here: https://block2.readthedocs.io/en/latest/user/basic.html#note1

There is some suggestion of this in the documentation, which states:

"""
The transition density matrices between states with different point group irreducible representations are also available by simply adding the keyword tran_twopdm after the corresponding multi-target state-averaged calculation.
"""

But adding this keyword to either dmrg.conf and then running does not interact at all with the other irrep. I have tried to find a workaround by post-hoc "tagging" the matrices, renaming the files / doing follow-up state-specific calculations in the excited-state irrep and then following the information in the "Load MPS for Density Matrix Calculation" section of the documentation, but I have run into several issues trying to make this approach work as well.

Is there any officially supported method for computing transition density matrices from state_average_mix?

Thanks,

-Daniel

Mixing casci and orbital ordering

I am trying to use the casci keyword with three integers specifying the occupied, active, and virtual space sizes to use block2 as a CAS solver. I do get the correct energies out for some small test cases, but only if I also set noreorder in the input file. With any other orbital-reordering option I get completely bogus energies.
Maybe the reordering is applied before the active space is selected?

[Question] Get CI coefficients

Hi. I am trying to get the determinant coefficients from a DMRG-CI calculation using PySCF.

I was first using StackBlock, but someone at this GitHub page recommended that I use Block2, as it would be easier to do with it:

pyscf/pyscf#357

Unlike with the standard PySCF FCI solver, when I use the DMRG solver the object mycas.ci does not contain the FCIVector objects with the determinant coefficients. It is just an iterator, range(0, nroots).

How can I get the determinant coefficients using Block2?
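
(One route that might work, as a guess based on the sampling keywords discussed in other issues here: after convergence, the block2main keywords sample or restart_sample extract the largest CSF/determinant coefficients from the MPS above a threshold, e.g. adding

restart_sample 0.01

to the input file, where 0.01 is an illustrative threshold; note that in the spin-adapted mode the output is CSFs rather than determinants.)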

Block2@OpenMolcas

It is great that Block2 works with OpenMolcas now.

Unfortunately, the patched version of OpenMolcas is rather old and for instance ECPs are not available.
Is there a chance that you will adapt your code to a recent version of OpenMolcas?

Many thx in advance

Peter

Installing/building with intelpython and intel oneapi mkl+tbb

I am trying to set up block2 in a containerized environment based on rockylinux with intelpython and Intel oneAPI MKL and TBB. I have verified with another OMP-parallelization-enabled program that the setup works in principle, but I am still struggling to get block2 to work (despite the useful information in this issue).

First, I tried a simple pip install block2. As expected, this pulls in the mkl-include, cmake, mkl, and intel-openmp python packages, and block2 then uses these libraries rather than the ones provided by the system, as a simple ldd call shows:

        $ ldd /opt/intel/intelpython3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so
        linux-vdso.so.1 (0x00007ffc5e19e000)
        librt.so.1 => /lib64/librt.so.1 (0x00007fb82b06e000)
        libgomp-f7e03b3e.so.1.0.0 => /opt/intel/intelpython3/lib/python3.7/site-packages/block2.libs/libgomp-f7e03b3e.so.1.0.0 (0x00007fb82ae57000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fb82ac37000)
        libmkl_intel_lp64-7fce225f.so => /opt/intel/intelpython3/lib/python3.7/site-packages/block2.libs/libmkl_intel_lp64-7fce225f.so (0x00007fb82a09f000)
        libmkl_core-fbf20eba.so => /opt/intel/intelpython3/lib/python3.7/site-packages/block2.libs/libmkl_core-fbf20eba.so (0x00007fb825e9a000)
        libmkl_gnu_thread-22473446.so => /opt/intel/intelpython3/lib/python3.7/site-packages/block2.libs/libmkl_gnu_thread-22473446.so (0x00007fb8245d5000)
        libmkl_avx2.so => /opt/intel/intelpython3/lib/python3.7/site-packages/block2.libs/libmkl_avx2.so (0x00007fb820a2a000)
        libmkl_avx512.so => /opt/intel/intelpython3/lib/python3.7/site-packages/block2.libs/libmkl_avx512.so (0x00007fb81c877000)
        libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fb81c4e2000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fb81c160000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fb81bf48000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fb81bb83000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fb82f599000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007fb81b97f000)

This is of course not what I would ideally want, but I expected it to at least work. However, while block2main can be run and yields correct results, it only ever uses a single core, even when starting it with, e.g., export OMP_NUM_THREADS=4; block2main .... Just out of curiosity: do you have any idea why?

Second, I tried to have pip compile block2 from source for me via pip install block2 --no-binary :all:. I expected this to still pull in the duplicate libraries but was hoping that then maybe somehow OMP would work. This leads to: ERROR: Could not find a version that satisfies the requirement mkl==2019 (from block2) (from versions: none). The same happens when doing pip install . from a clone of the repository. Strangely, the plain pip install block2 above installed mkl-2019.0-py2.py3-none-manylinux1_x86_64.whl just fine when instructed to install mkl==2019. Any idea why this behaves differently? Why would pip be able to find a matching version and install it in one case but not the other?

I could finally make pip install . from a git clone work by (i) manually doing pip install pybind11 (without this, the compilation fails because it cannot find some pybind11 header files; I see that pybind11 is listed under install_requires, but that does not seem to be enough) and (ii) removing the version pinning of mkl via sed -i 's/mkl==2019/mkl/g' setup.py. This then installs a 2022 mkl and openmp along with a 2021 tbb.

Third, I tried "the real thing" of compiling block2 manually from source via cmake against the system oneapi libraries. The first problem here was that cmake seems to look for tbb and related libraries in $ENV{MKLROOT}/lib $ENV{MKLROOT}/lib/intel64 /usr/local/lib $ENV{TBBROOT}/lib but with openapi they actually are in $TBBROOT/lib/intel64/gcc4.8/. This could be easily fixed by passing -DTBB_LIB=$TBBROOT/lib/intel64/gcc4.8/libtbb.so -DTBB_LIBS_MALP=$TBBROOT/lib/intel64/gcc4.8/libtbbmalloc_proxy.so -DTBB_LIB_MAL=$TBBROOT/lib/intel64/gcc4.8/libtbbmalloc.so to cmake. With this I could build the python extension manually via:

cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_MKL=ON -DBUILD_LIB=ON -DLARGE_BOND=ON -DUSE_KSYMM=OFF -DUSE_COMPLEX=ON -DUSE_SG=ON -DOMP_LIB=TBB -DTBB=ON -DTBB_LIB=$TBBROOT/lib/intel64/gcc4.8/libtbb.so -DTBB_LIBS_MALP=$TBBROOT/lib/intel64/gcc4.8/libtbbmalloc_proxy.so -DTBB_LIB_MAL=$TBBROOT/lib/intel64/gcc4.8/libtbbmalloc.so
cmake --build . --config Release -- --jobs=2

Looking with ldd:

        $ ldd /opt/intel/intelpython3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so
        linux-vdso.so.1 (0x00007fc0dc17d000)
        librt.so.1 => /lib64/librt.so.1 (0x00007fc0d7b73000)
        libmkl_intel_lp64.so.2 => /opt/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_intel_lp64.so.2 (0x00007fc0d6cd3000)
        libmkl_core.so.2 => /opt/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_core.so.2 (0x00007fc0d291e000)
        libmkl_tbb_thread.so.2 => /opt/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_tbb_thread.so.2 (0x00007fc0d05b8000)
        libmkl_avx2.so => /opt/intel/intelpython3/lib/libmkl_avx2.so (0x00007fc0cd65f000)
        libmkl_avx512.so => /opt/intel/intelpython3/lib/libmkl_avx512.so (0x00007fc0c9afc000)
        libtbb.so.12 => /opt/intel/oneapi/tbb/2021.5.1/env/../lib/intel64/gcc4.8/libtbb.so.12 (0x00007fc0c9884000)
        libtbbmalloc.so.2 => /opt/intel/intelpython3/lib/libtbbmalloc.so.2 (0x00007fc0c9629000)
        libtbbmalloc_proxy.so.2 => /opt/intel/intelpython3/lib/libtbbmalloc_proxy.so.2 (0x00007fc0c9425000)
        libstdc++.so.6 => /opt/intel/intelpython3/lib/libstdc++.so.6 (0x00007fc0dbffd000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fc0c90a3000)
        libgcc_s.so.1 => /opt/intel/intelpython3/lib/libgcc_s.so.1 (0x00007fc0dbfe9000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fc0c8cde000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fc0dbf53000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fc0c8abe000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007fc0c88ba000)

I see that now the system mkl libraries were used - great (there is some shadowing of libtbbmalloc* but that should be fine)!
Unfortunately, now the library is no longer linked to libgomp at all, and consequently, when I try to load it, I get ImportError: /opt/intel/intelpython3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so: undefined symbol: GOMP_single_start. This happens despite libgomp being installed in a standard location:

        $ ldconfig -p | grep gomp
        libgomp.so.1 (libc6,x86-64) => /lib64/libgomp.so.1

Unfortunately, my knowledge of cmake is not sufficient to easily understand what is going wrong here. Any help would be highly appreciated! And apologies in case I am overlooking something obvious.

segmentation fault with big_site keyword

Hi, I am trying to use MPS-MRCI with a large site to calculate naphthalene with CAS(10,10) and 199 external orbitals. The calculation always gives a segmentation fault before constructing the MPO, no matter which value I choose (without the keyword "big_site" it runs normally, but very slowly). Since I have previously calculated several smaller systems with this keyword successfully, and it does not seem very memory-consuming, I wonder what the reason for this problem is.
Here are the input and output messages that may help with your advice; thanks in advance!

The head of FCIDUMP reads like this:

 &FCI NORB= 209, NELEC= 10, MS2= 0
  ORBSYM=1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
  ISYM=1
 &END
 -4.387049478353499676e-02      138      154        0        0
 -2.318294020226800006e-02      137      154        0        0
 -1.355618241139899989e-02      136      154        0        0
  8.583600312781700103e-03      135      154        0        0
 -8.894048845712499915e-03      127      154        0        0
  1.542896170191200046e-02      126      154        0        0

and the input file I'm using is as follows:

mem 300g
sym c1
orbitals FCIDUMP

nelec 10
spin 0
irrep 1

mrci 0 10 199
big_site fock
nonspinadapted
full_integral

num_thrds 24
schedule default
maxM 1000
maxiter 30

With the command block2main input, I get this output:

********************************** INPUT START **********************************
mem                                                           300g
sym                                                             c1
orbitals                                                   FCIDUMP
nelec                                                           10
spin                                                             0
irrep                                                            1
mrci                                                      0 10 199
big_site                                                      fock
nonspinadapted
full_integral
num_thrds                                                       24
schedule                  Sweep   0-   7 : Mmps =   250 Noise =     0.001 DavTol =    0.0001
                          Sweep   8-  15 : Mmps =   500 Noise =    0.0001 DavTol =     1e-05
                          Sweep  16-  23 : Mmps =  1000 Noise =    0.0001 DavTol =     1e-05
                          Sweep  24-  29 : Mmps =  1000 Noise =         0 DavTol =     1e-06
maxM                                                          1000
maxiter                                                         30
sweep_tol                                                     1E-5
twodot_to_onedot                                                26
********************************** INPUT END   **********************************

NON SPIN ADAPTED - REAL DOMAIN - DOUBLE PREC
qc mpo type =  QCTypes.NC
 UseMainStack = 0 MinDiskUsage = 1 MinMemUsage = 0 IBuf = 0 OBuf = 0
 FPCompression: prec = 1.00e-16 chunk = 1024
 IMain = 0 B / 19.6 GB DMain = 0 B / 101 GB ISeco = 0 B / 8.38 GB DSeco = 0 B / 151 GB
 OpenMP = 1 TBB = 0 MKL = GNU 2019.0.0 SeqType = None MKLIntLen = 4
 THREADING = 2 layers : Global | Operator BatchedGEMM
 NUMBER : Global = 24 Operator = 24 Quanta = 0 MKL = 1
 COMPLEX = 1 SINGLE-PREC = 1 KSYMM = 0
dynamic correlation space : inactive = 0, cas = 10, external = 199
using fiedler reorder =  [ 74 121  73  11 120 161  10 162 164  13 123 122  76  15 110  61  75  14
  60  79 109 197  50  80 163 126  12 168 167  49 196  17 129  62  18 169
  78 125 198 165 127  77 130 170  19 124 166  20  21 128  81  16  85  23
 172 173  82 131 171 132 135  84  22 133  83 134 180  32 175  92  25  86
  26 176 181 144  33  93  87 136 142 177  34  88  27 182   0   5 143 138
   2   7   1  31  90  89   8   6   3 174 140   9  94  29   4  42 190  38
 189 147  41  97 151  40 183 100 187 188 152 179 103  24 139 186  45 150
 206  30  57 193  99 148 184 156  39 105 153  35 192  44  98 149 137 106
 191 155 116  28 157 178 145 104  36  71  69  43 102 207 101  95 117  59
  91 118  56 208  58 119 199 141  51  63 154 111  68 146 115 185  53 205
  72  52  67 112  70 201  64  96 200 113  65  54  55 114 202 203  37  66
 204  46 194 107  47 195 158 108 159  48 160]
reorder indices adjusted for dynamic correlation =  [  0   5   2   7   1   8   6   3   9   4  74 121  73  11 120 161  10 162
 164  13 123 122  76  15 110  61  75  14  60  79 109 197  50  80 163 126
  12 168 167  49 196  17 129  62  18 169  78 125 198 165 127  77 130 170
  19 124 166  20  21 128  81  16  85  23 172 173  82 131 171 132 135  84
  22 133  83 134 180  32 175  92  25  86  26 176 181 144  33  93  87 136
 142 177  34  88  27 182 143 138  31  90  89 174 140  94  29  42 190  38
 189 147  41  97 151  40 183 100 187 188 152 179 103  24 139 186  45 150
 206  30  57 193  99 148 184 156  39 105 153  35 192  44  98 149 137 106
 191 155 116  28 157 178 145 104  36  71  69  43 102 207 101  95 117  59
  91 118  56 208  58 119 199 141  51  63 154 111  68 146 115 185  53 205
  72  52  67 112  70 201  64  96 200 113  65  54  55 114 202 203  37  66
 204  46 194 107  47 195 158 108 159  48 160]
read integral finished 17.656113924458623
integral sym error =            0
#--- init SCIFockBigSite ---
# ASSERTIONS ARE DISABLED!
# WITHOUT OMP
# nAlpha nBeta -> N  2Sz | nStates
# based on nOccs
#   0   0  ->     0    0 |     1
# max El: Alpha, Beta, Tot=0 0 0
sizes:1 1
# nDet=1
#--- init SCIFockBigSite ---
# ASSERTIONS ARE DISABLED!
# WITHOUT OMP
# nAlpha nBeta -> N  2Sz | nStates
# based on nOccs
--- ATTENTION! SORT INPUT nOccs! ---
#   0   0  ->     0    0 |     1
#   1   0  ->     1    1 |   199
#   0   1  ->     1   -1 |   199
#   1   1  ->     2    0 |  39601
#   2   0  ->     2    2 |  19701
#   0   2  ->     2   -2 |  19701
# max El: Alpha, Beta, Tot=2 2 2
sizes:6 6
# nDet=79402
MinMPOMemUsage =  True
MPS =  CCRRRRRRRRR 0 2 < N=10 SZ=0 PG=0 >
GS INIT MPS BOND DIMS =       1     4    16    64   256   276   269   263   261   257   254     1
pre-mpo memory usage =  2.98 GB
build mpo start ...
Segmentation fault (core dumped)

and if I change the big_site value to csf or folding, and delete the keyword nonspinadapted, the segmentation fault occurs even earlier (in both cases at the same place),

********************************** INPUT START **********************************
mem                                                           300g
sym                                                             c1
orbitals                                                   FCIDUMP
nelec                                                           10
spin                                                             0
irrep                                                            1
mrci                                                      0 10 199
big_site                                                       csf
full_integral
num_thrds                                                       24
schedule                  Sweep   0-   7 : Mmps =   250 Noise =     0.001 DavTol =    0.0001
                          Sweep   8-  15 : Mmps =   500 Noise =    0.0001 DavTol =     1e-05
                          Sweep  16-  23 : Mmps =  1000 Noise =    0.0001 DavTol =     1e-05
                          Sweep  24-  29 : Mmps =  1000 Noise =         0 DavTol =     1e-06
maxM                                                          1000
maxiter                                                         30
sweep_tol                                                     1E-5
twodot_to_onedot                                                26
********************************** INPUT END   **********************************

SPIN ADAPTED - REAL DOMAIN - DOUBLE PREC
qc mpo type =  QCTypes.NC
 UseMainStack = 0 MinDiskUsage = 1 MinMemUsage = 0 IBuf = 0 OBuf = 0
 FPCompression: prec = 1.00e-16 chunk = 1024
 IMain = 0 B / 19.6 GB DMain = 0 B / 101 GB ISeco = 0 B / 8.38 GB DSeco = 0 B / 151 GB
 OpenMP = 1 TBB = 0 MKL = GNU 2019.0.0 SeqType = None MKLIntLen = 4
 THREADING = 2 layers : Global | Operator BatchedGEMM
 NUMBER : Global = 24 Operator = 24 Quanta = 0 MKL = 1
 COMPLEX = 1 SINGLE-PREC = 1 KSYMM = 0
dynamic correlation space : inactive = 0, cas = 10, external = 199
using fiedler reorder =  [ 74 121  73  11 120 161  10 162 164  13 123 122  76  15 110  61  75  14
  60  79 109 197  50  80 163 126  12 168 167  49 196  17 129  62  18 169
  78 125 198 165 127  77 130 170  19 124 166  20  21 128  81  16  85  23
 172 173  82 131 171 132 135  84  22 133  83 134 180  32 175  92  25  86
  26 176 181 144  33  93  87 136 142 177  34  88  27 182   0   5 143 138
   2   7   1  31  90  89   8   6   3 174 140   9  94  29   4  42 190  38
 189 147  41  97 151  40 183 100 187 188 152 179 103  24 139 186  45 150
 206  30  57 193  99 148 184 156  39 105 153  35 192  44  98 149 137 106
 191 155 116  28 157 178 145 104  36  71  69  43 102 207 101  95 117  59
  91 118  56 208  58 119 199 141  51  63 154 111  68 146 115 185  53 205
  72  52  67 112  70 201  64  96 200 113  65  54  55 114 202 203  37  66
 204  46 194 107  47 195 158 108 159  48 160]
reorder indices adjusted for dynamic correlation =  [  0   5   2   7   1   8   6   3   9   4  74 121  73  11 120 161  10 162
 164  13 123 122  76  15 110  61  75  14  60  79 109 197  50  80 163 126
  12 168 167  49 196  17 129  62  18 169  78 125 198 165 127  77 130 170
  19 124 166  20  21 128  81  16  85  23 172 173  82 131 171 132 135  84
  22 133  83 134 180  32 175  92  25  86  26 176 181 144  33  93  87 136
 142 177  34  88  27 182 143 138  31  90  89 174 140  94  29  42 190  38
 189 147  41  97 151  40 183 100 187 188 152 179 103  24 139 186  45 150
 206  30  57 193  99 148 184 156  39 105 153  35 192  44  98 149 137 106
 191 155 116  28 157 178 145 104  36  71  69  43 102 207 101  95 117  59
  91 118  56 208  58 119 199 141  51  63 154 111  68 146 115 185  53 205
  72  52  67 112  70 201  64  96 200 113  65  54  55 114 202 203  37  66
 204  46 194 107  47 195 158 108 159  48 160]
read integral finished 18.12559413537383
integral sym error =            0
Segmentation fault (core dumped)

Thanks for the help.
Ivan

Discrepancy Between DMRG Energy and Saved Density Matrix?

Hello, I am experiencing a discrepancy between the energy printed by DMRG (contained within mc.fcisolver.e_states) and the energy calculated from the CAS 2rdm saved to disk. Specifically, this seems to occur when solvers with multiple roots are used within the mcscf.state_average_mix_ API. Below is a working example:

Code for running the DMRG calculation:

from pyscf import gto, scf, lib, dmrgscf, mcscf, ao2mo
import numpy as np
import os

dmrgscf.settings.BLOCKEXE = os.popen("which block2main").read().strip()
dmrgscf.settings.MPIPREFIX = ''

mol = gto.Mole()
mol.charge = 0
mol.atom = [('O', [0.0, 0.0, -0.13209669380597672]),
            ('H', [0.0, 1.4315287853817316, 0.9797000689025815]),
            ('H', [0.0, -1.4315287853817316, 0.9797000689025815])]
mol.unit = "bohr"
mol.basis = "ccpvdz"
mol.spin = 0
mol.symmetry = "c2v"
mol.build()
mf = scf.RHF(mol)
mf.kernel()

nactorb = 13
nactelec = (4,4)

coeff = np.array([[ 1.00087816e+00,  4.04012324e-03,  1.29054540e-02,
                  0.00000000e+00,  0.00000000e+00, -1.42424448e-01,
                  5.82036900e-01,  1.02600545e-01,  2.44432886e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -1.65358912e-01,
                  0.00000000e+00,  3.07107371e-01,  0.00000000e+00,
                  0.00000000e+00,  3.55438584e-01,  1.18200012e-02],
                [-4.65901938e-03,  2.61269236e-01,  4.11574787e-01,
                  0.00000000e+00,  0.00000000e+00, -1.87831968e-01,
                  1.49185744e+00,  1.97106219e-01,  4.68530237e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -3.86826374e-01,
                  0.00000000e+00,  6.50578224e-01,  0.00000000e+00,
                  0.00000000e+00,  6.11214800e-01, -4.26411954e-02],
                [-7.90255240e-03,  9.48752972e-02,  4.60468836e-01,
                  0.00000000e+00,  0.00000000e+00,  1.00307111e+00,
                 -1.17219647e+00, -4.19418668e-01, -1.17257596e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  3.34918030e-01,
                  0.00000000e+00, -2.39817633e-01,  0.00000000e+00,
                  0.00000000e+00, -3.48446411e+00, -9.82050558e-01],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  3.04585713e-17,  6.33575554e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  9.30701274e-18,  5.67524067e-17,
                 -9.64374880e-01, -2.06411907e-02,  1.79614973e-17,
                 -4.27752761e-18,  4.00480398e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  6.11789733e-02,
                 -2.74199171e-17,  0.00000000e+00,  0.00000000e+00],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                 -4.97426217e-01,  3.87953137e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -1.51995053e-01, -9.26837138e-01,
                 -5.90509305e-17, -1.26390841e-18, -2.93333512e-01,
                  6.98573272e-02, -6.54034124e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  3.74613169e-18,
                  4.47801230e-01,  0.00000000e+00,  0.00000000e+00],
                [-3.54505646e-04,  3.98283190e-01, -4.04828617e-01,
                  0.00000000e+00,  0.00000000e+00, -1.69130468e-01,
                  2.12816328e-01, -9.42972923e-01, -1.54529113e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -4.40705224e-03,
                  0.00000000e+00,  4.63833948e-01,  0.00000000e+00,
                  0.00000000e+00, -3.39814136e-01, -1.29713869e-01],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  1.34118652e-17,  4.92712898e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  3.68608778e-17, -2.01405172e-17,
                  9.74261498e-01, -1.42472923e-01, -7.31027815e-17,
                 -4.64057519e-17, -5.23210225e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -7.62035380e-01,
                 -1.00506226e-16,  0.00000000e+00,  0.00000000e+00],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                 -2.19032381e-01,  3.01699637e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -6.01983818e-01,  3.28919608e-01,
                  5.96563113e-17, -8.72395045e-18,  1.19385902e+00,
                  7.57863441e-01,  8.54467141e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -4.66612095e-17,
                  1.64139124e+00,  0.00000000e+00,  0.00000000e+00],
                [-1.53059217e-04,  1.97240923e-01, -2.87392833e-01,
                  0.00000000e+00,  0.00000000e+00,  3.82934374e-01,
                  2.12101897e-01,  7.48360313e-01, -3.91209087e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -4.15088124e-01,
                  0.00000000e+00, -7.09880138e-01,  0.00000000e+00,
                  0.00000000e+00, -1.66372871e+00, -2.64357040e-01],
                [-2.47585471e-21, -1.31990799e-18,  1.28440241e-18,
                  0.00000000e+00,  0.00000000e+00, -1.63400747e-17,
                  5.99217840e-19,  2.90581965e-17, -8.58828938e-17,
                  8.66994854e-01,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -2.14415023e-18,
                 -6.32973416e-01,  2.57762698e-17,  0.00000000e+00,
                  0.00000000e+00,  6.54448695e-17,  9.05837236e-17],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                 -2.71918277e-02,  1.03634817e-18,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  6.72605783e-01, -2.66728351e-01,
                 -2.46358705e-18,  5.64468218e-17,  2.58427406e-02,
                 -6.93600028e-02,  2.69403521e-01,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -3.12998361e-17,
                  1.09035012e+00,  0.00000000e+00,  0.00000000e+00],
                [-1.11063954e-04,  9.01606307e-03, -1.17107791e-02,
                  0.00000000e+00,  0.00000000e+00,  6.92424119e-02,
                  1.16363515e-01, -3.33345189e-02,  4.15581841e-01,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  1.49394886e-01,
                  0.00000000e+00, -4.43447731e-02,  0.00000000e+00,
                  0.00000000e+00, -5.48113189e-01,  8.40545340e-01],
                [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  1.66501924e-18,  1.69248500e-02,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -4.11852260e-17,  1.63324011e-17,
                 -4.02334298e-02,  9.21846558e-01, -1.58241148e-18,
                  4.24707527e-18, -1.64962080e-17,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -5.11165115e-01,
                 -6.67646892e-17,  0.00000000e+00,  0.00000000e+00],
                [-2.02168880e-05, -1.07778667e-02,  1.04879416e-02,
                  0.00000000e+00,  0.00000000e+00, -1.33426835e-01,
                  4.89298499e-03,  2.37278181e-01, -7.01287048e-01,
                 -1.06176247e-16,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00, -1.75083153e-02,
                  7.75168868e-17,  2.10479216e-01,  0.00000000e+00,
                  0.00000000e+00,  5.34397914e-01,  7.39672236e-01],
                [-2.46847655e-03,  2.87959515e-01, -5.71722033e-02,
                 -3.30675233e-01,  0.00000000e+00,  2.25826632e-01,
                 -4.78517598e-01,  2.18224099e-01,  3.81271586e-01,
                  0.00000000e+00,  4.97477684e-01,  3.39877832e-01,
                  0.00000000e+00,  0.00000000e+00,  1.92523799e-01,
                 -2.75261247e-01, -4.93780741e-01, -4.88471928e-02,
                  0.00000000e+00,  5.84051615e-01,  0.00000000e+00,
                 -1.30937830e+00,  1.57146847e+00,  5.16378179e-01],
                [ 2.72628548e-04,  2.15849397e-02,  4.11358948e-04,
                 -7.83155358e-02,  0.00000000e+00, -9.18053285e-01,
                  2.77577759e-01,  5.74055344e-03,  2.23041876e-01,
                  0.00000000e+00, -3.53030846e-02,  2.33851515e-01,
                  0.00000000e+00,  0.00000000e+00, -1.77724287e+00,
                 -1.38572783e-01,  2.32417923e-01,  2.63188921e-02,
                  0.00000000e+00, -3.27926747e-01,  0.00000000e+00,
                 -3.82669350e-01,  4.08943335e-01,  1.15960021e-01],
                [-4.93086906e-20,  3.08224788e-18, -3.85044573e-19,
                 -1.49212666e-18,  3.16806581e-02,  3.28212378e-18,
                 -3.50125708e-18,  1.67463834e-18,  3.72947133e-18,
                  1.99171910e-01,  5.35975882e-18, -1.44471034e-18,
                  6.90751203e-02,  1.56636053e-01, -1.06510171e-17,
                  2.34852293e-17,  3.40616351e-17,  2.87111219e-17,
                  7.53220404e-01, -2.36678107e-17,  8.46257806e-01,
                 -5.17001494e-17,  6.15152550e-17,  6.83334167e-18],
                [ 8.05272028e-04, -5.03369278e-02,  6.28825509e-03,
                  2.43682777e-02,  1.93988083e-18, -5.36011490e-02,
                  5.71798674e-02, -2.73489195e-02, -6.09068890e-02,
                  1.21957621e-17, -8.75315042e-02,  2.35939103e-02,
                  4.22963125e-18,  9.59119207e-18,  1.73944310e-01,
                 -3.83542900e-01, -5.56268715e-01, -4.68888203e-01,
                  4.61214479e-17,  3.86524681e-01,  5.18183457e-17,
                  8.44327514e-01, -1.00462035e+00, -1.11596938e-01],
                [ 5.92192217e-04, -1.52837007e-02, -1.61268570e-02,
                  3.24998023e-02,  0.00000000e+00, -1.41490633e-02,
                  9.60824268e-02,  5.85891719e-02, -6.60585595e-03,
                  0.00000000e+00,  1.17152149e-03, -2.41478948e-02,
                  0.00000000e+00,  0.00000000e+00,  1.39000963e-01,
                  7.54358784e-01, -2.90248822e-01,  4.60381274e-01,
                  0.00000000e+00,  5.28295429e-01,  0.00000000e+00,
                  6.45890289e-01, -3.84351802e-01, -5.84809633e-01],
                [-2.46847655e-03,  2.87959515e-01, -5.71722033e-02,
                  3.30675233e-01,  0.00000000e+00,  2.25826632e-01,
                 -4.78517598e-01,  2.18224099e-01,  3.81271586e-01,
                  0.00000000e+00, -4.97477684e-01, -3.39877832e-01,
                  0.00000000e+00,  0.00000000e+00, -1.92523799e-01,
                  2.75261247e-01,  4.93780741e-01, -4.88471928e-02,
                  0.00000000e+00,  5.84051615e-01,  0.00000000e+00,
                  1.30937830e+00,  1.57146847e+00,  5.16378179e-01],
                [ 2.72628548e-04,  2.15849397e-02,  4.11358948e-04,
                  7.83155358e-02,  0.00000000e+00, -9.18053285e-01,
                  2.77577759e-01,  5.74055344e-03,  2.23041876e-01,
                  0.00000000e+00,  3.53030846e-02, -2.33851515e-01,
                  0.00000000e+00,  0.00000000e+00,  1.77724287e+00,
                  1.38572783e-01, -2.32417923e-01,  2.63188921e-02,
                  0.00000000e+00, -3.27926747e-01,  0.00000000e+00,
                  3.82669350e-01,  4.08943335e-01,  1.15960021e-01],
                [ 4.93086906e-20, -3.08224788e-18,  3.85044573e-19,
                 -1.49212666e-18,  3.16806581e-02, -3.28212378e-18,
                  3.50125708e-18, -1.67463834e-18, -3.72947133e-18,
                 -1.99171910e-01,  5.35975882e-18, -1.44471034e-18,
                  6.90751203e-02,  1.56636053e-01, -1.06510171e-17,
                  2.34852293e-17,  3.40616351e-17, -2.87111219e-17,
                 -7.53220404e-01,  2.36678107e-17,  8.46257806e-01,
                 -5.17001494e-17, -6.15152550e-17, -6.83334167e-18],
                [-8.05272028e-04,  5.03369278e-02, -6.28825509e-03,
                  2.43682777e-02,  1.93988083e-18,  5.36011490e-02,
                 -5.71798674e-02,  2.73489195e-02,  6.09068890e-02,
                 -1.21957621e-17, -8.75315042e-02,  2.35939103e-02,
                  4.22963125e-18,  9.59119207e-18,  1.73944310e-01,
                 -3.83542900e-01, -5.56268715e-01,  4.68888203e-01,
                 -4.61214479e-17, -3.86524681e-01,  5.18183457e-17,
                  8.44327514e-01,  1.00462035e+00,  1.11596938e-01],
                [ 5.92192217e-04, -1.52837007e-02, -1.61268570e-02,
                 -3.24998023e-02,  0.00000000e+00, -1.41490633e-02,
                  9.60824268e-02,  5.85891719e-02, -6.60585595e-03,
                  0.00000000e+00, -1.17152149e-03,  2.41478948e-02,
                  0.00000000e+00,  0.00000000e+00, -1.39000963e-01,
                 -7.54358784e-01,  2.90248822e-01,  4.60381274e-01,
                  0.00000000e+00,  5.28295429e-01,  0.00000000e+00,
                 -6.45890289e-01, -3.84351802e-01, -5.84809633e-01]])

lib.param.TMPDIR = os.path.abspath(lib.param.TMPDIR)

solver = dmrgscf.DMRGCI(mol, maxM=3, tol=1E-10)
solver.spin = 0
solver.nroots = 2
solver.wfnsym = "A1"
solver.runtimeDir = lib.param.TMPDIR + "/0"
solver.scratchDirectory = lib.param.TMPDIR + "/0"
solver.threads = 8
solver.memory = int(mol.max_memory / 1000)
solver.block_extra_keyword = ["singlet_embedding"]

mc = mcscf.CASCI(mf, nactorb, nactelec)

solvers = [solver]
weights = np.ones(2)/2
mcscf.state_average_mix_(mc,solvers,weights)

returned = mc.kernel(coeff)

Code for calculating the energy from the active space 2rdm:

def energy_casdm2(mc, casdm2):
    # 1-RDM as the partial trace of the 2-RDM (chemist notation), divided by N-1
    casdm1 = np.einsum('ikjj->ki', casdm2)
    casdm1 /= (np.sum(mc.nelecas) - 1)
    # Effective one-electron Hamiltonian and core energy in the active space
    h1, e0 = mc.get_h1eff(mc.mo_coeff)
    h2 = ao2mo.restore(1, mc.get_h2eff(mc.mo_coeff), mc.ncas)
    e1 = np.tensordot(h1, casdm1, axes=2)
    e2 = .5 * np.tensordot(h2, casdm2, axes=4)
    return e0 + e1 + e2

casdm2_fn = f"{solver.runtimeDir}/node0/2pdm-0-0.npy"
casdm2s = np.load(casdm2_fn) 
casdm2s = np.einsum("ciklj->cijkl",casdm2s) #Switch to chemist notation
casdm2aa, casdm2ab, casdm2bb = casdm2s
casdm2 = casdm2aa + 2*casdm2ab + casdm2bb

dm2_energy = energy_casdm2(mc,casdm2)
dmrg_energy = mc.fcisolver.e_states[0]
print("Calculated Energy from DM2:",dm2_energy)
print("Energy for GS Reported by DMRG:",mc.fcisolver.e_states[0])
print("Agreement:",np.allclose(dm2_energy,dmrg_energy))

I have tested this code for several cases, and it works perfectly well when each solver in state_average_mix_ contains one root (i.e. each state has a different symmetry). However, there is a discrepancy when a solver has more than one root. The code above should print:

"""
Calculated Energy from DM2: -73.03952692675198
Energy for GS Reported by DMRG: -73.26438043830856
Agreement: False
"""

Strangely, the above example works perfectly well if maxM=2, but the deviation becomes very large at maxM=3 (or 5, 10, etc.).

Is it the case that the DMRG energy cannot be evaluated from the active space 2rdm in this fashion if there is more than one root of the same symmetry in the fcisolver (i.e. in cases with 2pdm-0-0.npy and 2pdm-1-1.npy instead of 2pdm.npy)?

Segmentation fault from twopdm flag

I have tried to run DMRG for the 2D Hubbard model (6 by 6, U=8.0, nelec=32e-) with a smaller bond dimension (100). However, it returns a segmentation fault error after building the 2pdm (see log.txt), regardless of flags (e.g. nonspinadapted/noreorder). On the other hand, it works without error if I remove the twopdm flag. I have tested on large-memory systems (200G/400G/800G), but that did not resolve the issue.

Here I have attached a minimal working example for the problematic case. I would appreciate it if you could check it and share your feedback.
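For scale, a rough size estimate (assuming the 2pdm is stored as the three spin components aa/ab/bb, as the 2pdm-*.npy files in other reports on this page suggest): 3 x 36^4 x 8 bytes is about 40 MB, so the raw 2pdm array itself is nowhere near exhausting a 200-800 GB node.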

To run the mwe, execute as follows:
block2main mwe/dmrg.conf.000

mwe.tar.gz
log.txt

Cannot find libiomp5.dylib on mac os

It was installed with "pip install block2"

Here is some output from running block2 on macOS. It seems that block2 is not looking for libiomp5.dylib in the right place.

Reason: tried: '/Users/runner/hostedtoolcache/Python/3.8.12/x64/lib/libiomp5.dylib' (no such file), '/Users/runner/hostedtoolcache/Python/3.8.12/x64/lib/libiomp5.dylib' (no such file), '/usr/local/lib/libiomp5.dylib' (no such file), '/usr/lib/libiomp5.dylib' (no such file)
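A generic workaround that may be worth trying (an assumption, not an official fix: it supposes the intel-openmp wheel has placed libiomp5.dylib under the Python environment's lib directory) is to point dyld at that directory:

export DYLD_LIBRARY_PATH="$(python3 -c 'import sysconfig; print(sysconfig.get_config_var("LIBDIR"))'):$DYLD_LIBRARY_PATH"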

Controlling the Krylov space dimension for the TDVP (tangent space) time-evolution from a Python code

Hi Huanchen,

this is more of a request than a bug report: could you possibly make the size of the Krylov space (used in TDVP time-evolution) an input to the TimeEvolution Python class? I see that there is a C++ function expo_apply (defined in src/core/complex_matrix_functions.hpp) that accepts a parameter called deflation_max_size, which I suppose is the Krylov space size. But there doesn't seem to be a way to control the Krylov space dimension from the Python side.
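To make the request concrete, here is a hypothetical sketch of the desired interface (the attribute name krylov_subspace_size is made up; the point is only that something like it should forward to the deflation_max_size parameter of expo_apply):

# Hypothetical usage sketch - krylov_subspace_size does not exist yet.
te = TimeEvolution(me, VectorUBond(bond_dims), TETypes.TangentSpace)
te.krylov_subspace_size = 20  # desired: Krylov dimension of each local exponential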

Thanks.

Quick question about memory allocation issue

I am seeing the following memory error and would like to understand the background:

Sweep =   24 | Direction =  forward | Bond dimension = 1000 | Noise =  1.00e-04 | Dav threshold =  1.00e-05
 --> Site =    0-   1 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 1.70e+05 Tdav = 0.00 T = 0.00
 --> Site =    1-   2 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 3.45e+05 Tdav = 0.00 T = 0.00
 --> Site =    2-   3 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 5.54e+05 Tdav = 0.00 T = 0.01
 --> Site =    3-   4 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 7.96e+05 Tdav = 0.00 T = 0.01
 --> Site =    4-   5 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 1.11e+06 Tdav = 0.00 T = 0.01
 --> Site =    5-   6 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 1.51e+06 Tdav = 0.00 T = 0.01
 --> Site =    6-   7 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 1.78e+06 Tdav = 0.00 T = 0.01
 --> Site =    7-   8 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 2.25e+06 Tdav = 0.00 T = 0.01
 --> Site =    8-   9 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 2.65e+06 Tdav = 0.00 T = 0.01
 --> Site =    9-  10 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 2.83e+06 Tdav = 0.00 T = 0.01
 --> Site =   10-  11 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 3.39e+06 Tdav = 0.00 T = 0.01
 --> Site =   11-  12 .. Mmps =    1 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 1.40e+08 Tdav = 0.00 T = 0.01
 --> Site =   12-  13 .. Mmps =    3 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 3.53e+09 Tdav = 0.00 T = 0.02
 --> Site =   13-  14 .. Mmps =   10 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 2.16e+10 Tdav = 0.01 T = 0.09
 --> Site =   14-  15 .. Mmps =   34 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 4.31e+10 Tdav = 0.03 T = 0.21
 --> Site =   15-  16 .. Mmps =  100 Ndav =   1 E =   -229.4613379355 Error = 0.00e+00 FLOPS = 6.64e+10 Tdav = 0.06 T = 0.24
 --> Site =   16-  17 .. Mmps =  299 Ndav =   1 E =   -229.4613379355 Error = 5.97e-14 FLOPS = 1.26e+11 Tdav = 0.10 T = 0.39
 --> Site =   17-  18 .. Mmps =  880 Ndav =   1 E =   -229.4613379355 Error = 1.87e-13 FLOPS = 2.20e+11 Tdav = 0.22 T = 0.66
 --> Site =   18-  19 .. Mmps = 1000 Ndav =   2 E =   -229.4613446557 Error = 4.93e-09 FLOPS = 2.92e+11 Tdav = 1.38 T = 3.60
 --> Site =   19-  20 .. Mmps = 1000 Ndav =   2 E =   -229.4614121851 Error = 2.79e-06 FLOPS = 3.58e+11 Tdav = 5.05 T = 10.21
 --> Site =   20-  21 .. Mmps = 1000 Ndav =   2 E =   -229.4614134905 Error = 3.88e-07 FLOPS = 3.48e+11 Tdav = 3.19 T = 8.22
 --> Site =   21-  22 .. Mmps = 1000 Ndav =   1 E =   -229.4614129209 Error = 2.67e-08 FLOPS = 2.26e+11 Tdav = 0.60 T = 4.20
 --> Site =   22-  23 .. Mmps = 1000 Ndav =   1 E =   -229.4614129088 Error = 5.55e-10 FLOPS = 1.31e+11 Tdav = 0.21 T = 3.63
 --> Site =   23-  24 .. Mmps =  997 Ndav =   1 E =   -229.4614129088 Error = 3.66e-13 FLOPS = 5.69e+10 Tdav = 0.19 T = 4.44
 --> Site =   24-  25 .. exceeding allowed memory (size=135000000, trying to allocate 99692)  (double)

This is during a hybrid multi-node MPI/OMP job with 16 orbitals, on nodes with 32 cores and 512GB of RAM per node, using 2 MPI processes per node and num_thrds 16 (which has turned out to be optimal in terms of runtime). Slurm reports a peak memory usage of ~150GB, which is far below the available 512GB.

I am thus wondering: am I really running out of memory here, or is this some artificial limit I am hitting that can be raised by means of a configuration setting? Many thanks in advance!
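My own hedged guess: size=135000000 doubles is about 1.08 GB, which looks like a preallocated internal stack limit rather than the node's physical RAM. If that is right, the limit should be controllable with the mem keyword of the input file (as used in other inputs on this page), e.g.

mem 100g
num_thrds 16

but please correct me if that keyword does not govern this particular allocation.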

ImportError for newer pybind11 versions

Hi Huanchen,

It looks like pybind11 versions >= 2.10.2 introduce the error

ImportError: generic_type: cannot initialize type "KeysView[Tuple[str, int]]": an object with that name is already defined

Which seems to be due to pybind/pybind11#4353
Do you have an idea how to fix this?

Different energy for re-evaluation after DMRG run

Hi all,
I ran into rather strange behavior when running DMRG with block2 - this might point towards a bug or a lack of understanding of how the code works on my end, so I thought I'd quickly reach out via this channel.

In short, the problem is as follows: having run DMRG for a molecular system, re-evaluating the energy with the optimized state gives a different (worse) energy than the one obtained in the DMRG run. This seems to be somewhat setup-dependent, but the minimal example below should explain what is going on. The agreement between the two energies improves as the bond dimension of the MPS is allowed to grow - but still, I would expect the two values to agree regardless of the chosen bond dimension (or am I wrong here?).

This is the minimal example:

from pyscf import gto, scf, ao2mo

from pyblock2.driver.core import DMRGDriver, SymmetryTypes

import numpy as np


# Set up H chain (8 atoms, minimal basis set)
dist = 3.0
norb = 8

mol = gto.Mole()

mol.build(
    atom=[("H", (x, 0.0, 0.0)) for x in dist * np.arange(8)],
    basis="sto-6g",
    symmetry=True,
    unit="Bohr",
)

# We are running the calculation in a canonical basis, which is set up here.
myhf = scf.RHF(mol)
ehf = myhf.scf()
basis = myhf.mo_coeff

h1 = np.linalg.multi_dot((basis.T, scf.hf.get_hcore(mol), basis))
h2 = ao2mo.restore(1, ao2mo.kernel(mol, basis), basis.shape[0])


# Set up the DMRG calc
mps_solver = DMRGDriver(symm_type=SymmetryTypes.SU2)
mps_solver.initialize_system(h1.shape[0], n_elec=np.sum(mol.nelec))
mpo = mps_solver.get_qc_mpo(h1e=h1, g2e=h2, iprint=1, reorder=None)

ket = mps_solver.get_random_mps("KET", bond_dim=50, nroots=1)

# Run DMRG
en = mps_solver.dmrg(mpo, ket, n_sweeps=100, iprint=1, bond_dims=[50])

print("DMRG final energy: {}".format(en))

# Now re-evaluate the energy
print("Re-evaluated energy: {}".format(mps_solver.expectation(ket, mpo, ket)))

For my version of block2 (latest PyPI version) this gives the following result:

DMRG final energy: -8.559546134243703
Re-evaluated energy: -8.55220532772357

So there is a bit of a discrepancy between the two results, which surprises me - and I'm not sure where it is coming from.
Could you maybe look into this and check whether it is actually a bug?
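One check that might help localize this (a minimal sketch, assuming the DMRGDriver exposes a get_identity_mpo method - treat that name as an assumption): if the MPS norm drifts away from one during the noisy sweeps, the raw expectation value has to be divided by the squared norm.

# Sketch: normalize the re-evaluated energy by the MPS norm.
impo = mps_solver.get_identity_mpo()  # assumed API: identity MPO on the same sites
norm2 = mps_solver.expectation(ket, impo, ket)
print("MPS norm^2: {}".format(norm2))
print("Normalized energy: {}".format(mps_solver.expectation(ket, mpo, ket) / norm2))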

Thanks a lot and best wishes!
Yannic

Issues building on NERSC's Cori System

Hi,

I am trying to build on NERSC's Cori Cray-XC system on behalf of a user. I have tried to build using the Intel and GNU compilers, getting different errors (detailed below).

The TL;DR is that:

  1. If compiling with Intel, then icpc consumes too many resources for our login nodes and is killed.
  2. If compiling with GNU, the compiler complains about a syntax error.

Please advise on how to compile on Cray systems, e.g. which compiler you recommend. You can find the detailed errors and CMake output below.

Using Intel

zhcui@cori02:/global/cscratch1/sd/zhcui/.consulting/block2-preview/build>  CC=cc CXX=CC cmake .. -DUSE_MKL=ON -DBUILD_LIB=ON -DMPI=ON -DOMP_LIB=GNU -DLARGE_BOND=ON -DUSE_SG=ON -DUSE_BIG_SITE=ON -DCMAKE_CXX_FLAGS=-qoverride-limits
-- The C compiler identification is Intel 19.1.2.20200623
-- The CXX compiler identification is Intel 19.1.2.20200623
-- Cray Programming Environment 2.7.10 C
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/cray/pe/craype/2.7.10/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Cray Programming Environment 2.7.10 CXX
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/cray/pe/craype/2.7.10/bin/CC - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/bin/python3 (found version "3.9.7")
-- Found PythonLibs: /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libpython3.so (found suitable version "3.9.7", minimum required is "3.9")
-- PROJECT_NAME = block2
-- PYTHON_VERSION_MAJOR = 3
-- PYTHON_VERSION_MINOR = 9
-- PYTHON_LIBRARIES = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libpython3.so
-- PYTHON_EXECUTABLE = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/bin/python3
-- PYTHON_EXECUTABLE_HINT =
-- PYTHON_INCLUDE_DIRS = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/include/python3.9
-- PYLIB_SUFFIX = .cpython-39-x86_64-linux-gnu.so
-- PYBIND_INCLUDE_DIRS = /global/homes/z/zhcui/.local/cori/3.9-anaconda-2021.11/lib/python3.9/site-packages/pybind11/include
-- BUILD_LIB = ON
-- MKL_INCLUDE_DIR = /opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/include
-- MKL_LIBS = -Wl,--no-as-needed;/global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libiomp5.so;/opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/
intel64/libmkl_intel_lp64.so;/opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64/libmkl_core.so;/opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64/li
bmkl_intel_thread.so;/opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64/libmkl_avx2.so;/opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64/libmkl_avx
512.so
-- Found MPI_C: /opt/cray/pe/craype/2.7.10/bin/cc (found version "3.1")
-- Found MPI_CXX: /opt/cray/pe/craype/2.7.10/bin/CC (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- MPI_COMPILE_FLAGS =
-- MPI_LINK_FLAGS =
-- MPI_INCLUDE_PATH =
-- MPI_CXX_LIBRARIES =
-- OPT_FLAG = -O3;-funroll-loops;-qopenmp;-Werror;-Werror=return-type;-fvisibility=hidden;-Wno-error=attributes;-Wno-attributes
-- BOND_FLAG = -D_LARGE_BOND
-- MKL_FLAG = -D_HAS_INTEL_MKL=2
-- CORE_FLAG = -D_USE_CORE
-- DMRG_FLAG = -D_USE_DMRG
-- BIG_SITE_FLAG = -D_USE_BIG_SITE
-- SP_DMRG_FLAG = -D_USE_SP_DMRG
-- IC_FLAG = -D_USE_IC
-- KSYMM_FLAG =
-- SG_FLAG = -D_USE_SG
-- COMPLEX_FLAG =
-- SINGLE_PREC_FLAG =
-- SCI_FLAG =
-- TBB_FLAG =
-- MPI_FLAG = -D_HAS_MPI
-- OMP_LIB = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libiomp5.so
-- MKL_OMP_LIB_NAME = mkl_intel_thread
-- TBB_LIBS =
-- Configuring done
CMake Warning at CMakeLists.txt:455 (ADD_LIBRARY):
  Cannot generate a safe runtime search path for target block2 because files
  in some directories may conflict with libraries in implicit directories:

    runtime library [libmkl_intel_lp64.so] in /opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64 may be hidden by files in:
      /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib
    runtime library [libmkl_core.so] in /opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64 may be hidden by files in:
      /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib
    runtime library [libmkl_intel_thread.so] in /opt/intel/compilers_and_libraries_2020.2.254/linux/mkl/lib/intel64 may be hidden by files in:
      /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib

  Some of these libraries may not be found correctly.


-- Generating done
-- Build files have been written to: /global/cscratch1/sd/zhcui/.consulting/block2-preview/build

This then compiles for a while until it is killed by the operating system because icpc consumes too many resources:

icpc: error #10106: Fatal error in /opt/intel/compilers_and_libraries_2020.2.254/linux/bin/intel64/mcpcom, terminated by kill signal 

Using GNU

zhcui@cori02:/global/cscratch1/sd/zhcui/.consulting/block2-preview/build> CC=cc CXX=CC cmake .. -DBUILD_LIB=ON -DMPI=ON -DOMP_LIB=GNU -DLARGE_BOND=ON -DUSE_SG=ON -DUSE_BIG_SITE=ON
-- The C compiler identification is GNU 11.2.0
-- The CXX compiler identification is GNU 11.2.0
-- Cray Programming Environment 2.7.10 C
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/cray/pe/craype/2.7.10/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Cray Programming Environment 2.7.10 CXX
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/cray/pe/craype/2.7.10/bin/CC - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/bin/python3 (found version "3.9.7")
-- Found PythonLibs: /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libpython3.so (found suitable version "3.9.7", minimum required is "3.9")
-- PROJECT_NAME = block2
-- PYTHON_VERSION_MAJOR = 3
-- PYTHON_VERSION_MINOR = 9
-- PYTHON_LIBRARIES = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libpython3.so
-- PYTHON_EXECUTABLE = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/bin/python3
-- PYTHON_EXECUTABLE_HINT =
-- PYTHON_INCLUDE_DIRS = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/include/python3.9
-- PYLIB_SUFFIX = .cpython-39-x86_64-linux-gnu.so
-- PYBIND_INCLUDE_DIRS = /global/homes/z/zhcui/.local/cori/3.9-anaconda-2021.11/lib/python3.9/site-packages/pybind11/include
-- BUILD_LIB = ON
-- Looking for sgemm_
-- Looking for sgemm_ - found
-- Found BLAS: implicitly linked
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Looking for cheev_
-- Looking for cheev_ - found
-- Found LAPACK: implicitly linked
-- Found MPI_C: /opt/cray/pe/craype/2.7.10/bin/cc (found version "3.1")
-- Found MPI_CXX: /opt/cray/pe/craype/2.7.10/bin/CC (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- MPI_COMPILE_FLAGS =
-- MPI_LINK_FLAGS =
-- MPI_INCLUDE_PATH =
-- MPI_CXX_LIBRARIES =
-- OPT_FLAG = -O3;-funroll-loops;-fopenmp;-Werror;-Werror=return-type;-Wno-psabi;-fvisibility=hidden;-Wno-error=attributes;-Wno-attributes
-- BOND_FLAG = -D_LARGE_BOND
-- MKL_FLAG =
-- CORE_FLAG = -D_USE_CORE
-- DMRG_FLAG = -D_USE_DMRG
-- BIG_SITE_FLAG = -D_USE_BIG_SITE
-- SP_DMRG_FLAG = -D_USE_SP_DMRG
-- IC_FLAG = -D_USE_IC
-- KSYMM_FLAG =
-- SG_FLAG = -D_USE_SG
-- COMPLEX_FLAG =
-- SINGLE_PREC_FLAG =
-- SCI_FLAG =
-- TBB_FLAG =
-- MPI_FLAG = -D_HAS_MPI
-- OMP_LIB = /global/common/software/nersc/cori-2022q1/sw/python/3.9-anaconda-2021.11/lib/libgomp.so
-- MKL_OMP_LIB_NAME = mkl_gnu_thread
-- TBB_LIBS =
-- Configuring done
-- Generating done
-- Build files have been written to: /global/cscratch1/sd/zhcui/.consulting/block2-preview/build

When trying to compile with make I get:

/global/cscratch1/sd/zhcui/.consulting/block2-preview/src/instantiation/core/../../core/csr_matrix_functions.hpp:759:21: error: no match for 'operator!=' (operand types are 'std::complex<double>' and 'int')

Symmetry related problem with compressed DMRG-SC-NEVPT2

I'm trying to carry out a compressed DMRG-SC-NEVPT2 calculation on a symmetrized Fe-porphyrin with active space (11o, 8e). I encountered a symmetry-related error during the NEVPT2 step. Could you please help me with this problem?
Traceback (most recent call last):
  File "/hpc-cache-pfs/ailab/qiuyunze//venv/python38/bin//block2main", line 3555, in <module>
    pmpo.build()
RuntimeError: Hamiltonian may contain multiple total symmetry blocks (small integral elements violating point group symmetry)!
Traceback (most recent call last):
  File "/hpc-cache-pfs/ailab/qiuyunze/venv/python38/lib/python3.8/site-packages/pyscf/dmrgscf/nevpt_mpi.py", line 509, in <module>
    nevpt_integral_mpi(sys.argv[1],sys.argv[2],sys.argv[3],sys.argv[4])
  File "/hpc-cache-pfs/ailab/qiuyunze/venv/python38/lib/python3.8/site-packages/pyscf/dmrgscf/nevpt_mpi.py", line 306, in nevpt_integral_mpi
    f = open(os.path.join(nevpt_scratch_mpi, 'node0', 'Va_%d'%root), 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/hpc-cache-pfs/ailab/qiuyunze/work/FePor_no_simplified/DMRG_11o_8e/nevpt2_1/node0/Va_0'

DMRG calculation for open shell species in active space

Greetings,

I have been trying to run some basic DMRG calculations for a small open-shell molecule, namely OH, in an active space. Unfortunately I am encountering some issues, documented below, and would be grateful if you could help me understand and overcome them. The script I use to generate 'fcidump.txt' is below, and the fcidump file is enclosed,

from functools import reduce
import numpy
from pyscf import gto,scf,ao2mo,mcscf
from pyscf import symm
from pyscf.tools import fcidump

mol = gto.M(atom=[['O', (0, 0, 0)],
                  ['H', (0, 0, 1)]],
            basis='sto-6g',
            verbose=4,
            symmetry='c2v',
            spin=1)

myhf = scf.ROHF(mol)
myhf.kernel()
myhf.analyze()

c = myhf.mo_coeff[:,2:]
mc = mcscf.CASCI(myhf,c.shape[1],(3,2))
h1e,ecore = mc.get_h1cas()
eri = ao2mo.restore(8,mc.get_h2cas(),mc.ncas)

fcidump.from_integrals('fcidump.txt',h1e,eri,mc.ncas,5,nuc=ecore,ms=1)
<<<<<<

the dmrg.conf file is below,

sym c1
orbitals fcidump.txt
nelec 5
spin 1
irrep 1
hf_occ integral
schedule default
maxM 500
<<<<<<

but the calculation returns an energy higher than the HF value. Could you please help me understand why?
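For reference, here is a cross-check that should reproduce the exact active-space energy for such a small problem (a sketch using plain pyscf FCI on the same file; I believe fcidump.read returns the integrals in this dict form, but please treat the exact keys as an assumption):

from pyscf import fci
from pyscf.tools import fcidump

# Exact diagonalization of the same integrals, for comparison with the DMRG energy.
data = fcidump.read('fcidump.txt')
norb, nelec, ms2 = data['NORB'], data['NELEC'], data['MS2']
na, nb = (nelec + ms2) // 2, (nelec - ms2) // 2
e, ci = fci.direct_spin1.kernel(data['H1'], data['H2'], norb, (na, nb),
                                ecore=data['ECORE'])
print('FCI energy:', e)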

Many thanks,
Mario

fcidump.txt
error.txt

"exceeding allowed memory" error

Hi Huanchen,
I'm running a dmrgscf calculation using the prebuilt serial version of block2 0.5.2-rc4. I got an 'exceeding allowed memory' error at sweep 8, which is different from the 'exceeding allowed memory' case described in the debugging-hints part of the documentation page. The input file and error information are listed below. Do you have any idea what's going on?

from pyscf import gto, scf, lib, dmrgscf
import tempfile
import os
from pyscf.dmrgscf import dmrgci
dmrgscf.settings.BLOCKEXE = os.popen("which block2main").read().strip()
dmrgscf.settings.MPIPREFIX = ''

atomstring = '''
 C                 -0.72238100    0.45069800   -0.00153500
 H                 -0.35779400    1.03067700   -0.86954700
 C                 -0.20526400   -0.98461200   -0.02009100
 H                 -0.55945400   -1.53218700    0.85944800
 H                 -0.55962200   -1.50932900   -0.91339000
 H                  0.89096200   -1.00436400   -0.02045100
 O                 -2.13659900    0.56509800    0.00011700
 H                 -0.35761100    1.00810100    0.88106900
'''

mol = gto.M(
    atom= atomstring,
    basis='ccpvdz',
    verbose=4,
    #unit='bohr',
    spin=1,
    charge = 0)
mf = scf.ROHF(mol).newton()
mf.chkfile = 'mf.chk'
mf.kernel()

norb_frozen = 6
norb_act = 61
nelec_act = 13

#mc = shci.SHCISCF(mf, norb_act, nelec_act, frozen=norb_frozen)
mc = dmrgscf.DMRGSCF(mf, norb_act, nelec_act, maxM=800, tol=1E-5, frozen=norb_frozen)
mc.chkfile='dmrgscf.chk'
mc.internal_rotation = True
mc.fcisolver.runtimeDir = '.'
mc.fcisolver.scratchDirectory = tempfile.mkdtemp()

mc.fcisolver.threads = 48
from pyscf import lo
loc_occ = lo.PM(mol, mf.mo_coeff[:,6:13]).kernel()
loc_val_vir = lo.PM(mol, mf.mo_coeff[:,13:20]).kernel()
loc_vir = lo.PM(mol, mf.mo_coeff[:,20:]).kernel()
mo_core = mf.mo_coeff[:,:6]
import numpy as np
mo = np.hstack((mo_core, loc_occ, loc_val_vir, loc_vir))
mc.max_cycle_macro = 10
mc.mc2step(mo)

The error output is:

 --> Site =   28-  29 .. exceeding allowed memory (size=135000000, trying to allocate 70942)  (double)
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x6dba00f) [0x14a0a63b800f]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x749df9a) [0x14a0a6a9bf9a]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x764194f) [0x14a0a6c3f94f]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x76e84a4) [0x14a0a6ce64a4]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x667555a) [0x14a0a5c7355a]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x667861d) [0x14a0a5c7661d]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x683965b) [0x14a0a5e3765b]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x6820b8a) [0x14a0a5e1eb8a]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x683d148) [0x14a0a5e3b148]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x1ce5f3b) [0x14a0a12e3f3b]
/home/xuwa0145/tools/anaconda3/lib/python3.7/site-packages/block2.cpython-37m-x86_64-linux-gnu.so(+0x3ab59b) [0x14a09f9a959b]
/home/xuwa0145/tools/anaconda3/bin/python(_PyMethodDef_RawFastCallKeywords+0x237) [0x4ba8f7]
/home/xuwa0145/tools/anaconda3/bin/python(_PyCFunction_FastCallKeywords+0x26) [0x4ba6a6]
/home/xuwa0145/tools/anaconda3/bin/python() [0x4ba379]
/home/xuwa0145/tools/anaconda3/bin/python(_PyEval_EvalFrameDefault+0x4652) [0x4b6cc2]
/home/xuwa0145/tools/anaconda3/bin/python(_PyEval_EvalCodeWithName+0x201) [0x4b1411]
/home/xuwa0145/tools/anaconda3/bin/python(PyEval_EvalCodeEx+0x39) [0x4b1209]
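A hedged observation on my side: size=135000000 doubles is again roughly 1 GB, and the DMRGCI wrapper exposes a memory attribute in GB (compare solver.memory = int(mol.max_memory / 1000) in the state-average example further up this page), which this script never sets. A minimal sketch of what I mean, assuming that attribute controls the preallocated stack:

mc.fcisolver.memory = 20  # in GB; assumption: enlarges the stack block2 preallocates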

Is there a distributed scratch option?

Is it possible to make each MPI process use its own scratch space?
In a configuration of 1 MPI process per node, distributed scratch would improve disk I/O speed and reduce the required size of the shared scratch.
I'm not sure, but it looks like block2 uses a shared disk scratch, which forces the use of network filesystems that are inevitably slower than local scratch.

1-step DMRGSOC

I would like to try out the 1-step DMRGSOC. Is there an example somewhere in this repo of how to do 1-step DMRGSOC? If not, could we have access to the input files for the systems studied in the paper https://doi.org/10.1063/5.0107805, where you compare the 1-step and 2-step ways of performing DMRGSOC.

Thanks and Regards
Vamshi

How to output the MPO in the tensor form ?

As I understand it, an MPO is the product of a series of 3- or 4-way tensors (one tensor per site).
I can only output the MPO to the "mpo.bin" file following https://block2.readthedocs.io/en/latest/developer/mpo-reloading.html.
So is there a way to get the MPO in a more explicit form, where the output is one tensor per site for all n_orb sites?
Or can the "mpo.bin" file be transformed into that form?
Thank you very much!
(The goal I want to reach is to directly get the energy scalar by contracting random MPS tensors with the MPO tensors.)
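Regarding that goal, and independent of how block2 stores its MPOs internally, the energy scalar is a standard boundary contraction. Here is a minimal numpy sketch, assuming per-site MPS tensors A[i] of shape (left, phys, right) and dense MPO tensors W[i] of shape (left, phys_bra, phys_ket, right), all with dummy boundary bonds of dimension 1 (the mpo.bin contents would first have to be converted into such dense arrays):

import numpy as np

def mps_mpo_mps(As, Ws):
    # <A|W|A> for an open-boundary MPS/MPO given as lists of numpy tensors.
    env = np.ones((1, 1, 1))  # (bra bond, mpo bond, ket bond)
    for A, W in zip(As, Ws):
        env = np.einsum('abc,ais->bcis', env, A.conj())  # absorb bra tensor
        env = np.einsum('bcis,bitj->cstj', env, W)       # absorb MPO tensor
        env = np.einsum('cstj,ctk->sjk', env, A)         # absorb ket tensor
    return env[0, 0, 0]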

GPU implementation of block2

Hello,

Thank you for the great library.
I am not very familiar with the DMRG algorithm or the block2 implementation, so I am curious whether there is a GPU implementation of the code. Is there any effort on this from other branches/developers, or would a GPU-accelerated code perhaps not help even if it existed?
Thank you for your time.

Restart doesn't work after 2pdm are calculated

I found an issue with restarting a simple DMRG calculation after the 2pdm has been calculated.
After one runs twopdm or restart_twopdm, a fullrestart job gets a segfault at the me.init_environments() step with the following error:

python: /home/bogdanov/src/block2/src/pybind/../core/matrix_functions.hpp:307: static void block2::GMatrixFunctions<double>::multiply(const MatrixRef&, uint8_t, const MatrixRef&, uint8_t, const MatrixRef&, double, double): Assertion `a.n >= b.m && c.m == a.m && c.n >= b.n' failed.
python: /home/bogdanov/src/block2/src/pybind/../core/matrix_functions.hpp:307: static void block2::GMatrixFunctions<double>::multiply(const MatrixRef&, uint8_t, const MatrixRef&, uint8_t, const MatrixRef&, double, double): Assertion `a.n >= b.m && c.m == a.m && c.n >= b.n' failed.

Interestingly, calculations with two roots are not affected.
It also seems that bmps.info.bond_dim and kmps.info.bond_dim are not set properly. I don't know if that matters, but they are equal to startM rather than the maxM I would expect.

Minimal example inputs:
restart_bug.zip.
Running

block2main 01.inp > 01.out
block2main 02.inp > 02.out

should reproduce the error.
If one comments out twopdm line in 01.inp, calculation from 02.inp runs normally.

data race in mps_from_tag

Dear @hczhai,

I think there is a data race in the mps_from_tag method.
It occurs when I try to obtain an excited state through projection onto the previous ground state.
It happens only when I use many MPI processes (~30), and it's really hard to reproduce.
The other possibility is that the first job has not finished writing the data, but everything happens in the same scratch dir, which should have fast IO.

The output ends as follows:

init .. R = 1
init .. R = 1
env init finished 0.519297809980344
init .. R = 0
----- proj =   0 tag = KET1 -----
init .. R = 1
init .. R = 0
init .. R = 0
init .. R = 1
init .. R = 0
init .. R = 0
init .. R = 1
init .. R = 1
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 1
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 1
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 0
init .. R = 0
Traceback (most recent call last):
  File "/bin/Block2/p0.5.1rc2/bin/block2main", line 2161, in <module>
    xmps, xmps_info, _ = get_mps_from_tags(ipj, True, mps.center)
  File "/bin/Block2/p0.5.1rc2/bin/block2main", line 1814, in get_mps_from_tags
    smps_info.load_mutable()
RuntimeError: StateInfo::load_data on '/scratch/F.MPS.INFO.KET1-0.RIGHT.11' failed.
MPI FINALIZE: rank 17 of 30

The input file for the KET1 is:

nelec 16
spin 0
schedule
0     50  1.0000e-04  1.0000e-04
4    100  1.0000e-04  1.0000e-04
8    125  1.0000e-04  1.0000e-04
10    125  1.0000e-05  1.0000e-05
12    125  1.0000e-06  1.0000e-06
14    125  1.0000e-07  1.0000e-07
16    125  1.0000e-08  0.0000e+00
end
twodot_to_onedot 20
orbitals FCIDUMP
maxiter 100
sweep_tol 1.0000e-07
outputlevel 2
hf_occ integral
twopdm
prefix  /scratch/
mps_tags KET1

and for KET2:

nelec 16
spin 0
schedule
0     50  1.0000e-04  1.0000e-04
4    100  1.0000e-04  1.0000e-04
8    125  1.0000e-04  1.0000e-04
10    125  1.0000e-05  1.0000e-05
12    125  1.0000e-06  1.0000e-06
14    125  1.0000e-07  1.0000e-07
16    125  1.0000e-08  0.0000e+00
end
twodot_to_onedot 20
orbitals FCIDUMP
maxiter 100
sweep_tol 1.0000e-07
outputlevel 2
hf_occ integral
twopdm
prefix  /scratch/
mps_tags KET2
proj_mps_tags KET1
proj_weights 10

Any thoughts?

Segmentation errors when calculating the 1RDM (onepdm) of a complex MPS

Hi Huanchen, this is me, Imam.
I tried to calculate the 1RDM of a complex MPS, but the code apparently has a bug when asked to calculate an RDM from a complex MPS. As a minimal input that reproduces the error, I used the following:

sym d2h
orbitals RHF_FCIDUMP

nelec 11
spin 1
irrep 1
maxM 500
schedule default

mps_tags KET-CPX

hf_occ integral
complex_mps
noreorder

restart_onepdm
outputlevel 1

KET-CPX is a complex MPS produced by a previous job, in my case a simple trans_mps_to_complex task. At the end of the output using the above input file, I got the following segmentation error:

/usr/bin/python3(PyCFunction_Call+0x59) [0x5f6489]
/usr/bin/python3(_PyObject_MakeTpCall+0x296) [0x5f7056]
/usr/bin/python3() [0x50b993]
/usr/bin/python3(_PyEval_EvalFrameDefault+0x57f2) [0x570ac2]
/usr/bin/python3(_PyEval_EvalCodeWithName+0x26a) [0x569cea]
/usr/bin/python3(_PyFunction_Vectorcall+0x393) [0x5f6a13]
/usr/bin/python3(_PyEval_EvalFrameDefault+0x1901) [0x56cbd1]
/usr/bin/python3(_PyEval_EvalCodeWithName+0x26a) [0x569cea]
/usr/bin/python3(PyEval_EvalCode+0x27) [0x68e7b7]
/usr/bin/python3() [0x680001]
/usr/bin/python3() [0x68007f]
/usr/bin/python3() [0x680121]
/usr/bin/python3(PyRun_SimpleFileExFlags+0x197) [0x680db7]
/usr/bin/python3(Py_RunMain+0x212) [0x6b8122]
/usr/bin/python3(Py_BytesMain+0x2d) [0x6b84ad]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fa373a86083]
/usr/bin/python3(_start+0x2e) [0x5fb39e]
Aborted (core dumped)

When I comment out the trans_mps_to_complex line in the previous job, and also comment out complex_mps in the above input, hence making all the MPSs involved real-valued, the simulation with the above input runs to completion, with the last output line showing the occupations of the sites/orbitals. So it really looks like the code has a problem when calculating the 1RDM from a complex MPS.

pip installation is not working

Hello block2 developers,

I'm trying to install block2 using pip, but it has been stuck at the wheel-building step for over an hour. My Python version is 3.9.12. Here are the printout lines:

Collecting block2
Using cached block2-0.5.0.tar.gz (762 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: mkl==2019 in /opt/anaconda3/lib/python3.9/site-packages (from block2) (2019.0)
Requirement already satisfied: mkl-include in /opt/anaconda3/lib/python3.9/site-packages (from block2) (2022.1.0)
Requirement already satisfied: intel-openmp in /opt/anaconda3/lib/python3.9/site-packages (from block2) (2022.1.0)
Requirement already satisfied: numpy in /opt/anaconda3/lib/python3.9/site-packages (from block2) (1.21.5)
Requirement already satisfied: cmake==3.17 in /opt/anaconda3/lib/python3.9/site-packages (from block2) (3.17.0)
Requirement already satisfied: scipy in /opt/anaconda3/lib/python3.9/site-packages (from block2) (1.8.0)
Requirement already satisfied: psutil in /opt/anaconda3/lib/python3.9/site-packages (from block2) (5.8.0)
Requirement already satisfied: pybind11 in /opt/anaconda3/lib/python3.9/site-packages (from block2) (2.10.0)
Building wheels for collected packages: block2
Building wheel for block2 (setup.py) ... |
I'm not sure what the exact issue is, since there is no printout showing the build progress. I would appreciate it if someone could take a look at the pip installation binaries.

I also tried pip install block2 --no-binary block2, and it runs into the same issue.

Skipping wheel build for block2, due to binaries being disabled for it.
Installing collected packages: block2
Running setup.py install for block2 ...
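For what it's worth, pip can print the underlying build log, which should show where the compilation hangs:

pip install block2 -v --no-binary block2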

I'm aware of the alternative manual installation route and am trying that now. I just wanted to report this issue to you. Thank you for your time and help!
Best,
Sherry

ValueError: Unrecognized keys ({'bogoliubov'})

Hi @hczhai @zhcui,
I am wondering whether the code cannot handle the case where bogoliubov is set to True in the input file.
test2dHubbard.txt
Thank you in advance,

Traceback (most recent call last):
File "/opt/tiger/miniconda3/envs/python38/bin/block2main", line 61, in
dic = parse(fin)
File "/opt/tiger/miniconda3/envs/python38/lib/python3.8/site-packages/pyblock2/driver/parser.py", line 231, in parse
raise ValueError("Unrecognized keys (%s)" % diff)
ValueError: Unrecognized keys ({'bogoliubov'})

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[1493,1],0]

Exit code: 1
