
lsa-pucrs / acerta-abide

44 stars · 11 watchers · 25 forks · 895 KB

Deep learning using the ABIDE data

License: GNU General Public License v2.0

Languages: Python 99.72%, Dockerfile 0.28%
Topics: neuroimaging, tensorflow, deep-learning, abide-data, inscer

acerta-abide's People

Contributors

anibalsolon, meneguzzi, shekhardewan, siddharth-shrivastava7


acerta-abide's Issues

HDF5

I am trying to do model training with HDF5. I am using Python 3.6. My code is:

import contextlib

import h5py


def hdf5_handler(filename, mode="r"):
    # Create the file if it does not exist yet.
    h5py.File(filename, "a").close()
    propfaid = h5py.h5p.create(h5py.h5p.FILE_ACCESS)
    # Disable the raw-data chunk cache (slot count and byte size set to 0).
    settings = list(propfaid.get_cache())
    settings[1] = 0
    settings[2] = 0
    propfaid.set_cache(*settings)
    # Open via the low-level API so the cache settings take effect; note that
    # h5py.h5f.open expects a bytes filename under Python 3.
    with contextlib.closing(h5py.h5f.open(filename, fapl=propfaid)) as fid:
        return h5py.File(fid, mode)

The function call: hdf5 = hdf5_handler("./data/abide.hdf5".encode('utf-8'), "a".encode('utf-8'))
I get the following error:
Traceback (most recent call last):
  File "prepare_data.py", line 139, in <module>
    prepare_folds(hdf5, folds, pheno, derivatives, experiment="{derivative}_whole")
  File "prepare_data.py", line 83, in prepare_folds
    fold["train"] = ids[train_index].tolist()
  File "C:\Users\jfsra\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\_hl\group.py", line 385, in __setitem__
    ds = self.create_dataset(None, data=obj, dtype=base.guess_dtype(obj))
  File "C:\Users\jfsra\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\_hl\group.py", line 136, in create_dataset
    dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
  File "C:\Users\jfsra\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\_hl\dataset.py", line 118, in make_new_dset
    tid = h5t.py_create(dtype, logical=1)
  File "h5py\h5t.pyx", line 1630, in h5py.h5t.py_create
  File "h5py\h5t.pyx", line 1652, in h5py.h5t.py_create
  File "h5py\h5t.pyx", line 1713, in h5py.h5t.py_create
TypeError: No conversion path for dtype: dtype('<U16')
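
A possible workaround (an editor's sketch, not a fix from the maintainers): HDF5 has no conversion path for NumPy Unicode arrays such as dtype('<U16'), so encoding the subject IDs to bytes before assigning them to the group avoids the error. Only ids, train_index, and fold come from the traceback above; everything else is assumed.

# Hypothetical fix inside prepare_folds: encode the Unicode IDs to bytes,
# which HDF5 can store, before writing them into the group.
fold["train"] = [subject_id.encode("utf-8") for subject_id in ids[train_index]]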

Instruction to install requirements fails to install tensorflow-gpu

The instruction in README.md to install all requirements:

pip install -r requirements.txt

fails while trying to install tensorflow-gpu:

Downloading/unpacking tensorflow-gpu (from -r requirements.txt (line 4))
  Could not find any downloads that satisfy the requirement tensorflow-gpu (from -r requirements.txt (line 4))
Cleaning up...
No distributions at all found for tensorflow-gpu (from -r requirements.txt (line 4))
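
An editor's aside, not a diagnosis confirmed in the thread: this output typically comes from a pip that is too old to resolve tensorflow-gpu's manylinux wheels, which in any case exist only for specific Python versions. Upgrading pip first (pip install --upgrade pip) or installing a tensorflow-gpu version published for your Python is the usual way past this error.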

How did you filter out the missing data?

Hello Team,
I really appreciate your work. I have one question regarding the CSV dataset: how did your team filter out the missing data? I appreciate your help.

Looking forward to your response.
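
For readers with the same question, a generic starting point is sketched below. This is illustrative only, not the authors' actual pipeline; the file name follows the public ABIDE phenotypic CSV, and the column choices are assumptions.

import pandas as pd

# Hypothetical example of filtering missing phenotypic data; the authors'
# real criteria may differ. ABIDE marks subjects without imaging data with
# FILE_ID == "no_filename".
pheno = pd.read_csv("Phenotypic_V1_0b_preprocessed1.csv")
pheno = pheno[pheno["FILE_ID"] != "no_filename"]
pheno = pheno.dropna(subset=["DX_GROUP"])  # drop rows missing a diagnosis label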

Something about prepare_data

[screenshot of the error; image not available]
I'm sorry to bother you. When I was running prepare_data I ran into the problem in the screenshot above. I hope you can help me answer it, thank you!

About prepare_data

[screenshot 20180821153238.png failed to upload]
When I run prepare_data.py, I can't solve the problem below. Could you help me? Thank you!

Problem:
Traceback (most recent call last):
  File "D:/Project/lsa-pucrs-acerta-abide-cc28c56/prepare_data.py", line 128, in <module>
    hdf5 = hdf5_handler("./data/abide.hdf5", "a")
  File "D:\Project\lsa-pucrs-acerta-abide-cc28c56\utils.py", line 50, in hdf5_handler
    with contextlib.closing(h5py.h5f.open(filename, fapl=propfaid)) as fid:
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 65, in h5py.h5f.open
TypeError: expected bytes, str found
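
As the HDF5 issue above shows, the low-level h5py.h5f.open call expects a bytes filename under Python 3, so encoding the arguments resolves this particular TypeError. A sketch mirroring the call from that issue:

# Encode the arguments to bytes, as in the HDF5 issue above.
hdf5 = hdf5_handler("./data/abide.hdf5".encode("utf-8"), "a".encode("utf-8"))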

Instruction in README fails to execute in any path

The instruction to run

nvidia-docker run -it --rm -v $(realpath .):/opt/acerta-abide acerta-abide /bin/bash

Fails to run on LSA (so the instruction may not be completely generic).

The following error message appears (when run from the acerta-abide git clone folder):

meneguzzi@lsa:~/code/acerta-abide$ nvidia-docker run -it --rm -v $(realpath .):/opt/acerta-abide acerta-abide /bin/bash
Using default tag: latest
nvidia-docker | 2017/03/29 22:00:45 Error: Error response from daemon: repository acerta-abide not found: does not exist or no pull access
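
An editor's note, not a reply from the thread: Docker falls back to pulling from a registry when no local image named acerta-abide exists, so this error usually means the image was never built locally. Building it first, e.g. docker build -t acerta-abide . from the repository root (the command from the next issue, plus the build-context path), should let the run command find the image.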

"docker build -t acerta-abide"-Not working

The following error occurs while running the docker build. The latest version of tensorflow-gpu is installed.

Sending build context to Docker daemon 1.349GB
Step 1/5 : FROM gcr.io/tensorflow/tensorflow:latest-gpu
manifest for gcr.io/tensorflow/tensorflow:latest-gpu not found
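
An editor's note, not a confirmed fix: Google stopped publishing TensorFlow images to gcr.io; they now live on Docker Hub as tensorflow/tensorflow. Changing the Dockerfile's FROM line to something like FROM tensorflow/tensorflow:latest-gpu is a plausible workaround, assuming nothing else in the image depends on the gcr.io variant.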

Question about cross-validation

Sorry to bother you!
I am wondering: did you train an MLP for each fold?
If there were 10 MLP models for 10 folds, why could the models fit well on the entire dataset?
I am really confused about it. Looking forward to your reply, thank you!
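
For context, standard k-fold cross-validation trains a fresh model per fold and reports the average held-out score; no single model is fit on the entire dataset. The sketch below illustrates the idea with scikit-learn; X, y, and make_mlp are placeholders, and this is not the repository's exact training loop.

import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(X, y, make_mlp, n_folds=10):
    # One fresh model per fold; the reported metric is the mean fold score.
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = make_mlp()                      # new, untrained model
        model.fit(X[train_idx], y[train_idx])   # fit on 9/10 of the data
        scores.append(model.score(X[test_idx], y[test_idx]))  # score the held-out 1/10
    return np.mean(scores)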

Prepare_data.py

Whenever I run prepare_data.py, this error occurs.
I have tried applying all the answers from the previous issues about the same file, but it is still not fixed.
Is there any solution to my problem?
[screenshot of the error; image not available]

Steps missing to run SVM and Random Forest

Hi there!
I am an enthusiast, so I just wanted to check out the results and run the repository. I wasn't able to find the steps to reproduce the SVM or Random Forest classifier results. Any guidance regarding that would be really great!
Also, I would like to mention that in some places the code is still written for Python 2.

Thanks!
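
For anyone else looking for a baseline while those steps are missing, a generic scikit-learn sketch is below. X and y are placeholder feature and label arrays; this is not the repository's actual SVM/Random Forest pipeline.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder baselines on precomputed features X and labels y.
svm_scores = cross_val_score(SVC(kernel="linear"), X, y, cv=10)
rf_scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=10)
print("SVM: %.3f  RF: %.3f" % (svm_scores.mean(), rf_scores.mean()))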

Problem detected during the evaluation

Hi,

I am writing because I have reviewed and tested your code, following the "cc200" derivative with 10 folds on the entire data set. However, at evaluation time all cases are predicted as ASD (0). I have also tried other configurations and obtained the same result.
I really don't understand why this is happening; it seems that the final weights and biases of the autoencoders are not being saved. I have run all the tests from Google Colaboratory. I would appreciate any instructions or suggestions on this.

Thank you very much!
