
Comments (6)

kunwuz commented on June 17, 2024

Hi, if possible, could you please share some example data along with the hyperparameters used in both runs?


ikarmann commented on June 17, 2024

I attached the data to this comment. The parameters were the defaults: alpha = 0.05 and the Fisher-Z independence test. I also observed differences using alpha = 0.001.

Tx

006S0731_v21_schaefer100.csv
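For reference, the run looks roughly like this in causal-learn (a sketch; the assumption here is that the attached CSV loads as rows = volumes and columns = the 100 parcels):

```python
import pandas as pd
from causallearn.search.ConstraintBased.PC import pc

# Load the attached time series (rows = volumes, columns = Schaefer parcels).
data = pd.read_csv("006S0731_v21_schaefer100.csv").to_numpy()

# Default settings mentioned above: alpha = 0.05, Fisher-Z independence test.
cg = pc(data, alpha=0.05, indep_test="fisherz")

# Inspect the estimated CPDAG.
print(cg.G)
```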


jdramsey commented on June 17, 2024

Hmm... this doesn't look like fMRI data... -jdramsey


ikarmann commented on June 17, 2024

It is data from a subject in ADNI, preprocessed with fMRIPrep and extracted and normalized with nilearn after detrending (using a Gaussian kernel) and denoising. Originally it had 140 volumes; 5 were excluded due to nonstationarity and 20 due to scrubbing. It was sampled to the Schaefer (2018) 100-parcel parcellation.
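A rough sketch of that kind of nilearn extraction (the input file name, smoothing width, and confound handling here are placeholders, not the actual pipeline):

```python
from nilearn.datasets import fetch_atlas_schaefer_2018
from nilearn.maskers import NiftiLabelsMasker

# Schaefer (2018) 100-parcel atlas.
atlas = fetch_atlas_schaefer_2018(n_rois=100)

# Detrend, standardize, and smooth while averaging within parcels.
masker = NiftiLabelsMasker(
    labels_img=atlas.maps,
    detrend=True,
    standardize=True,
    smoothing_fwhm=6.0,  # placeholder Gaussian kernel width
)

# "sub-XX_task-rest_bold.nii.gz" is a placeholder for the fMRIPrep output.
time_series = masker.fit_transform("sub-XX_task-rest_bold.nii.gz")
print(time_series.shape)  # (n_volumes, 100)
```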


kunwuz commented on June 17, 2024

Interesting, thanks for sharing the data. Not sure what causes the difference, but I guess it might be related to some potential difference in the default settings, perhaps the priority of orientation.
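If it is the orientation handling, one way to probe that on the causal-learn side is to vary the collider-orientation settings explicitly. A rough sketch (`uc_rule`/`uc_priority` are the parameter names in recent causal-learn releases; see the PC docs for their exact meanings):

```python
import pandas as pd
from causallearn.search.ConstraintBased.PC import pc

data = pd.read_csv("006S0731_v21_schaefer100.csv").to_numpy()

# uc_rule selects the unshielded-collider orientation rule (sepset / maxP / definiteMaxP);
# uc_priority controls how conflicting collider orientations are resolved.
for uc_rule in (0, 1, 2):
    cg = pc(data, alpha=0.05, indep_test="fisherz", uc_rule=uc_rule)
    n_arrowheads = int((cg.G.graph == 1).sum())  # arrowhead endpoint marks
    print("uc_rule =", uc_rule, "-> arrowheads:", n_arrowheads)
```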


jdramsey commented on June 17, 2024

@ikarmann Sorry, I got distracted with several projects and am finally at a pause point.

I actually did a test of PC with Tetrad vs. causal-learn on 10 variables with linear Gaussian data, and the results were the same. You have more variables, which could be an issue. Also, I had previously downloaded the wrong dataset and looked at that; now that I've looked at your dataset, it looks considerably more like fMRI data. Sorry! But in general, PC is known to be bad at analyzing fMRI data. There are much better alternatives.

One problem with PC is that it's very sensitive to the order of the variables, so if the implementations of the algorithm differ slightly, the resulting graphs can differ a lot.
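A quick way to see that order sensitivity empirically is to permute the columns and re-run PC (a sketch, assuming the attached CSV):

```python
import numpy as np
import pandas as pd
from causallearn.search.ConstraintBased.PC import pc

df = pd.read_csv("006S0731_v21_schaefer100.csv")
rng = np.random.default_rng(0)

def skeleton(cg, order):
    """Adjacency skeleton with columns mapped back to the original variable order."""
    adj = cg.G.graph != 0          # any endpoint mark means the pair is adjacent
    adj = adj | adj.T
    inv = np.argsort(order)        # position of each original variable in `order`
    return adj[np.ix_(inv, inv)]

base = skeleton(pc(df.to_numpy(), alpha=0.05, indep_test="fisherz"),
                np.arange(df.shape[1]))

# Shuffle the variable order a few times and count how many adjacencies change.
for _ in range(3):
    order = rng.permutation(df.shape[1])
    cg = pc(df.to_numpy()[:, order], alpha=0.05, indep_test="fisherz")
    diff = int((skeleton(cg, order) != base).sum() // 2)
    print("edges differing from the original order:", diff)
```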

You've tried FGES in Tetrad, I take it; that's a better alternative, though FGES really only does well when the number of variables is large, and not so well on dense graphs. I was actually unaware that GES in causal-learn was taking so long for a dataset like yours; I'll try it out sometime.
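For reference, GES in causal-learn can be run along these lines (a sketch, using the BIC score listed in the docs):

```python
import pandas as pd
from causallearn.search.ScoreBased.GES import ges

data = pd.read_csv("006S0731_v21_schaefer100.csv").to_numpy()

# BIC-scored GES; returns a record with the estimated graph under 'G'.
record = ges(data, score_func="local_score_BIC")
print(record["G"])
```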

Another option in Tetrad that works quite well for fMRI data is BOSS, which we published at NeurIPS this past year. Here's the BOSS paper, by the way:

Andrews, B., Ramsey, J., Sanchez Romero, R., Camchong, J., & Kummerfeld, E. (2023). Fast scalable and accurate discovery of DAGs using the best order score search and grow shrink trees. Advances in Neural Information Processing Systems, 36.

BOSS is pretty good even on dense graphs and is a good candidate for fMRI data. We actually tried it on some dense fMRI data in the paper, so you can look and see if you're convinced. We also compare BOSS to FGES, PC, and other algorithms from the literature, so you can look at the comparisons.
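If you want to try BOSS from Python rather than the Tetrad GUI, the py-tetrad wrapper exposes it roughly as below. This is a sketch from memory; the JVM setup and the method names (use_sem_bic, run_boss, get_string) should be checked against the py-tetrad README for your version:

```python
import jpype.imports
import pandas as pd

# py-tetrad needs a JDK and a JVM with the Tetrad jar on the classpath;
# the jar path below is a placeholder -- see the py-tetrad README.
jpype.startJVM(classpath=["resources/tetrad-current.jar"])

import pytetrad.tools.TetradSearch as ts

df = pd.read_csv("006S0731_v21_schaefer100.csv")

search = ts.TetradSearch(df)
search.use_sem_bic(penalty_discount=2)  # linear-Gaussian BIC score
search.run_boss()
print(search.get_string())  # text rendering of the estimated CPDAG
```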

We have another algorithm called GRaSP, from UAI 2022 I think, that works on similar principles to BOSS. Let me find the reference for you:

Lam, W. Y., Andrews, B., & Ramsey, J. (2022, August). Greedy relaxations of the sparsest permutation algorithm. In Uncertainty in Artificial Intelligence (pp. 1052-1062). PMLR.

You can also find these papers on arXiv, BTW. I mention GRaSP because we did a translation of it into Python (which is slower), and that's included in causal-learn. You may be able to use GRaSP in Python and compare it to GRaSP in Tetrad and BOSS in Tetrad. Your data is pretty Gaussian, so a linear non-Gaussian algorithm like Direct LiNGAM will probably give essentially arbitrary results, but BOSS and GRaSP should both be able to handle it. Anyway, you can try and see.
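The causal-learn GRaSP translation is exposed roughly like this (a sketch; the BIC score is assumed since the data looks Gaussian):

```python
import pandas as pd
from causallearn.search.PermutationBased.GRaSP import grasp

data = pd.read_csv("006S0731_v21_schaefer100.csv").to_numpy()

# GRaSP with a linear-Gaussian BIC score; returns the estimated graph.
G = grasp(data, score_func="local_score_BIC")
print(G)
```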

FGES should be fairly insensitive to the order of the variables, though it can get caught in local minima fairly easily. BOSS and GRaSP are a lot better at popping out of local minima; they're not impervious to them, but they do much better.

Hope this helps.

