
beddalumia / kmhproject


A collection of programs and scripts to solve and analyze the Kane-Mele-Hubbard model in a variety of (dynamical) mean-field settings

MATLAB 33.86% Shell 5.61% Fortran 36.93% Python 22.19% Gnuplot 0.09% Makefile 1.32%
dmft slurm matlab python fortran mean-field-theory kane-mele topological-insulators hubbard-model strongly-correlated-systems


kmhproject's Issues

Numerical instabilities in Dyson Equation [KMH-MF]

Background

With commit cc0f13b we introduced an evaluation of the "kinetic energy" $\langle H_0(k) \rangle$, exploiting the dmft_kinetic_energy(Hk,Sigma) function provided by DMFTtools. The strategy was:

  1. Build the noninteracting GFs by defining a "dummy" vanishing self-energy and feeding it, together with $H_0(k)$, to dmft_gloc_matsubara / dmft_gloc_realaxis.
  2. Build the Hartree-Fock GFs by feeding again a vanishing self-energy to the same routines, but this time paired with $H_\mathrm{mf}(k)$ (since the interaction effects are treated at the mean-field level, they enter a corrected single-particle Hamiltonian, not a frequency-dependent self-energy).
  3. Compute the effective Hartree-Fock self-energy by means of a Dyson equation connecting the two Green's functions (i.e. formally acknowledge $H_\mathrm{mf}(k)$ to be interacting, hence transfer the mean-field correction from the single-particle Hamiltonian to a frequency-independent self-energy); see the worked identity right after this list.
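As a sanity check on step 3, a worked identity (written here for the k-resolved propagators, whereas the code below works with the local, k-summed ones): since the mean-field correction is a static shift of the single-particle Hamiltonian, the extracted self-energy should carry no frequency dependence at all,

$$G_0^{-1}(k,z) = z - H_0(k), \qquad G_\mathrm{mf}^{-1}(k,z) = z - H_\mathrm{mf}(k) \quad\Longrightarrow\quad \Sigma_\mathrm{mf} = H_\mathrm{mf}(k) - H_0(k).$$

This is the expectation against which the printed self-energies are checked further down.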

Such a Dyson's equation reads:

$$\Sigma_\mathrm{mf}(z) = G_0^{-1}(z) - G_\mathrm{mf}^{-1}(z)$$

and has been implemented by:

function dyson_eq(G0,G) result(S)
    complex(8),dimension(Nlat,Nspin,Nspin,Norb,Norb,L) :: G0,G,S
    complex(8),dimension(Nlat,Nspin,Nspin,Norb,Norb)   :: Gnnn, G0nnn, Snnn
    complex(8),dimension(Nlso,Nlso)                    :: Glso, G0lso, Slso
    integer                                            :: i
    !
    do i = 1,L
       ! reshape the [Nlat,Nspin,Nspin,Norb,Norb] blocks into flat [Nlso,Nlso] matrices
       G0nnn = G0(:,:,:,:,:,i) ; Gnnn = G(:,:,:,:,:,i)
       G0lso = nnn2lso(G0nnn)  ; Glso = nnn2lso(Gnnn)
       ! invert both propagators in place and take the difference (Dyson equation)
       call inv(G0lso)         ; call inv(Glso)
       Slso = G0lso - Glso     ; Snnn = lso2nnn(Slso)
       S(:,:,:,:,:,i) = Snnn
    enddo
    !
end function dyson_eq

The resulting kinetic energy appeared to be "good enough" when compared to the analogous DMFT result, so we called it a day.

The problem

But inspecting the actual printed self-energies we realized that something is seriously affecting our computation: since the mean-field correction is static, we expect frequency-independent self-energies, yet what we get is a total mess.

And by total mess I mean:

[Plots of the offending self-energy: wrong_sigma_realw (real axis) and wrong_sigma_iw (Matsubara axis).]

Arguably the Matsubara axis is quite a lot less crazy, but still:

  • it is not even causal (a minimal check is sketched right below)
  • it is certainly not really constant
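For reference, a minimal sketch of such a causality check, assuming the plotDMFT.spectral_load helper and the struct fields (.zeta, .imag) used in the script further down: the diagonal components of a causal self-energy must have non-positive imaginary part on the positive Matsubara axis.

import plotDMFT.*
% load one diagonal component of the printed mean-field self-energy
S11 = spectral_load('Smats_l11_s1_iw__indx000001.dat');
% causality: Im[Sigma_11(iw_n)] <= 0 for all w_n > 0 (tiny tolerance for round-off)
iw_pos    = S11.zeta > 0;
is_causal = all(S11.imag(iw_pos) <= 1e-12);
fprintf('Sigma_11 causal on the positive Matsubara axis: %d\n', is_causal);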

First Guess

I tried to check whether the problem lies in how we implement the Dyson equation, by re-implementing it in MATLAB and reading all the components of $G_0(z)$, $G_\mathrm{mf}(z)$ from file, by means of the following script:

import plotDMFT.*

G011 = spectral_load('G0mats_l11_s1_iw__indx000001.dat');  
G012 = spectral_load('G0mats_l11_s1_iw__indx000002.dat');
G021 = spectral_load('G0mats_l11_s2_iw__indx000001.dat');
G022 = spectral_load('G0mats_l11_s2_iw__indx000002.dat');

G11 = spectral_load('Gmats_l11_s1_iw__indx000001.dat');
G12 = spectral_load('Gmats_l11_s1_iw__indx000002.dat');
G21 = spectral_load('Gmats_l11_s2_iw__indx000001.dat');
G22 = spectral_load('Gmats_l11_s2_iw__indx000002.dat');

S11 = spectral_load('Smats_l11_s1_iw__indx000001.dat');
S12 = spectral_load('Smats_l11_s1_iw__indx000002.dat');
S21 = spectral_load('Smats_l11_s2_iw__indx000001.dat');
S22 = spectral_load('Smats_l11_s2_iw__indx000002.dat');

domain = G11.zeta;

G0_matrix = zeros(2);
G_matrix  = zeros(2);
S_matrix  = zeros(2);

for iw = 1:length(domain)

    fprintf('iw = %f\n\n',domain(iw));

    G0_matrix(1,1) = G011.real(iw) + 1j * G011.imag(iw);
    G0_matrix(1,2) = G012.real(iw) + 1j * G012.imag(iw);
    G0_matrix(2,1) = G021.real(iw) + 1j * G021.imag(iw);
    G0_matrix(2,2) = G022.real(iw) + 1j * G022.imag(iw);
    
    G_matrix(1,1) = G11.real(iw) + 1j * G11.imag(iw);
    G_matrix(1,2) = G12.real(iw) + 1j * G12.imag(iw);
    G_matrix(2,1) = G21.real(iw) + 1j * G21.imag(iw);
    G_matrix(2,2) = G22.real(iw) + 1j * G22.imag(iw);

    S_matrix(1,1) = S11.real(iw) + 1j * S11.imag(iw);
    S_matrix(1,2) = S12.real(iw) + 1j * S12.imag(iw);
    S_matrix(2,1) = S21.real(iw) + 1j * S21.imag(iw);
    S_matrix(2,2) = S22.real(iw) + 1j * S22.imag(iw);

    D_matrix = inv(G0_matrix) - inv(G_matrix);

    if not(isequal(D_matrix,S_matrix))
        disp(D_matrix-S_matrix)
    else
        disp('ok!')
    end

end  % tried with real-axis too: same warning, similar pop-up frequency

And I instantly got a rather unsettling warning:

>> Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = xxxxxxxx.

where typical values of RCOND fell in the $10^{-20}$ to $10^{-16}$ range.

SciFortran's inv() does not complain at all, but I see no reason for the situation to differ between the two languages. Most probably the problem lies in the zeros, for real frequencies, and in the exponentially vanishing tail, for the Matsubara axis.

Shame on me for not foreseeing such an obvious numerical pitfall.
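For what it's worth, a minimal sketch of a possible mitigation (my own assumption, not yet tested): inside the frequency loop of the script above, avoid forming explicit inverses and monitor the conditioning, using the algebraically equivalent form $\Sigma = G_0^{-1}(G - G_0)\,G^{-1}$ so that the backslash/slash solvers are used instead of inv:

% equivalent to inv(G0_matrix) - inv(G_matrix), but solved with \ and /
% rather than explicit inverses; rcond() flags the ill-conditioned frequencies
if rcond(G0_matrix) < 1e-14 || rcond(G_matrix) < 1e-14
    fprintf('ill-conditioned Green''s functions at iw = %f\n', domain(iw));
end
D_matrix = (G0_matrix \ (G_matrix - G0_matrix)) / G_matrix;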

Reduce MATLAB boilerplate

We have a lot of boilerplate within our MATLAB scripts and functions. We can drastically reduce it by collecting all the elementary routines in suitable MATLAB packages, which will then become required dependencies for this repository; a sketch of the idea follows below.

Target repository for the new packages: https://github.com/bellomia/DMFT-LAB
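As an illustration only (the helper name and its package location are hypothetical, nothing with this interface exists in DMFT-LAB yet), the repeated spectral_load calls of the Dyson-equation script above could collapse into a single packaged function along these lines:

function M = load_spin_matrix(prefix, iw)
% LOAD_SPIN_MATRIX  Hypothetical helper: assemble the 2x2 complex matrix at
% Matsubara index iw from the four components written to file by the Fortran
% code, e.g. prefix = 'Gmats_l11' reads 'Gmats_l11_s1_iw__indx000001.dat', etc.
    import plotDMFT.*
    M = zeros(2);
    for i = 1:2
        for j = 1:2
            data   = spectral_load(sprintf('%s_s%d_iw__indx%06d.dat', prefix, i, j));
            M(i,j) = data.real(iw) + 1j * data.imag(iw);
        end
    end
end

With something like this in place, the loop body of the script above would reduce to G0_matrix = load_spin_matrix('G0mats_l11', iw); and so on for G and S.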

Tail correction for Hartree-Fock potential energy calculation

The computation of the potential energy is based on the Matsubara formalism, starting from Fetter-Walecka eq. 23.14 and transforming to imaginary frequency under the assumption of a local self-energy, $\Sigma(k,i\omega) = \Sigma_{\{A,B\}}(i\omega)$. This would simply give:

$$ E_\mathrm{pot} = \frac{2}{\beta} \sum_{\omega} \mathrm{Tr}[\Sigma(i\omega)G(i\omega)] $$

but we also need a semi-analytic tail correction, since we cannot compute enough Matsubara points to get an accurate summation.

The customary way is to assume the product $\Sigma(i\omega)G(i\omega)$ to have a $\frac{U^2}{4\omega^2}$ tail, but this won't work here, since the self-energy is just a constant. So instead we tried

$$ \Sigma(i\omega)G(i\omega) = \Sigma_0\, G(i\omega) \propto \frac{U}{2\omega}\times\Sigma_0 $$

which unfortunately does not work. I believe this is the right idea, but some detail might be off. To be checked when I have time.
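For the record, the generic tail-subtraction bookkeeping this should reduce to (standard Matsubara-summation practice, not specific to this code): given an assumed asymptotic form $T(i\omega_n)$ of the summand, split the sum as

$$ \frac{2}{\beta}\sum_{n} \mathrm{Tr}[\Sigma(i\omega_n)G(i\omega_n)] \simeq \frac{2}{\beta}\sum_{|n|\le N} \Big( \mathrm{Tr}[\Sigma(i\omega_n)G(i\omega_n)] - T(i\omega_n) \Big) + \frac{2}{\beta}\sum_{n} T(i\omega_n), $$

where the first (truncated) sum converges quickly because the summand now decays faster than $T$, and the second sum runs over all frequencies and is evaluated analytically. The open question above is precisely which $T(i\omega_n)$ is appropriate when the self-energy reduces to a constant $\Sigma_0$.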
