
rsatoolbox's People

Contributors

aceticia, adkipnis, benjamin-peters, caiw, charlesbmi, doerlbh, g14r, heikoschuett, iancharest, jaspervandenbosch, jdiedrichsen, jooh, lajnd, mshahbazi1997, nkriegeskorte, saarbuckle, smazurchuk, tal-golan


rsatoolbox's Issues

Import Meadows MA (multiple arrangement) data as Dataset

Should follow after #78

Instead of RDMs, this would create Dataset objects. It could be used for various Meadows tasks:

  • detection of file type
  • ma trials could be an obs_desc
  • scope: multiple tasks
  • add descriptors from task
  • add descriptors from stimuli
  • scope: one participant / multiple (MA) tasks (json only)
  • scope: multiple participants / multiple (MA) tasks (json only)
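As a starting point, here is a minimal sketch of what such an importer could produce (the function name and trial record fields are hypothetical; the actual Meadows JSON layout may differ):

```python
import numpy as np

# Hypothetical sketch: turn a list of MA trial records into a measurements
# array plus observation descriptors suitable for a Dataset object.
def ma_trials_to_arrays(trials):
    measurements = np.array([t['positions'] for t in trials])  # one row per trial
    obs_descriptors = {
        'trial': np.array([t['trial'] for t in trials]),
        'stim': np.array([t['stim'] for t in trials]),
    }
    return measurements, obs_descriptors
```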

Bootstrapping over patterns -> NaNs instead of 0s

Previous toolboxes set the values for comparisons of a stimulus with itself to NaN (i.e. ignored them), whereas pyrsa currently sets them to 0. The NaN convention seems sensible to avoid inflated model performance, since all models predict 0 for those comparisons.
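A minimal sketch of the NaN-based alternative (function name hypothetical): after resampling patterns, any entry comparing a pattern with itself is set to NaN rather than 0:

```python
import numpy as np

# Hypothetical sketch: after bootstrap-resampling patterns, mark every entry
# that compares a pattern with itself (i.e. the same original pattern drawn
# twice) as NaN instead of 0, so it can be ignored downstream.
def mask_self_comparisons(rdm_matrix, sample_idx):
    same = sample_idx[:, None] == sample_idx[None, :]  # True for identical patterns
    out = rdm_matrix.astype(float)
    out[same] = np.nan
    return out
```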

Accessing the documentation

Hello,
I cannot access the documentation with the steps provided under "documentation." Is there a way you could upload a PDF or another way to access the documentation?
Thank you,
CJ

various docstring issues

data/dataset.py:docstring of pyrsa.data.dataset.Dataset.subset_channel:3: WARNING: Unexpected indentation.
data/dataset.py:docstring of pyrsa.data.dataset.Dataset.subset_channel:4: WARNING: Block quote ends without a blank line; unexpected unindent.
data/dataset.py:docstring of pyrsa.data.dataset.Dataset.subset_obs:3: WARNING: Unexpected indentation.
data/dataset.py:docstring of pyrsa.data.dataset.Dataset.subset_obs:4: WARNING: Block quote ends without a blank line; unexpected unindent.
data/dataset.py:docstring of pyrsa.data.dataset.DatasetBase:10: WARNING: Unexpected indentation.
data/dataset.py:docstring of pyrsa.data.dataset.DatasetBase:12: WARNING: Block quote ends without a blank line; unexpected unindent.
data/dataset.py:docstring of pyrsa.data.dataset.DatasetBase.subset_channel:3: WARNING: Unexpected indentation.
data/dataset.py:docstring of pyrsa.data.dataset.DatasetBase.subset_channel:4: WARNING: Block quote ends without a blank line; unexpected unindent.
data/dataset.py:docstring of pyrsa.data.dataset.DatasetBase.subset_obs:3: WARNING: Unexpected indentation.
data/dataset.py:docstring of pyrsa.data.dataset.DatasetBase.subset_obs:4: WARNING: Block quote ends without a blank line; unexpected unindent.
util/data_utils.py:docstring of pyrsa.util.data_utils:6: WARNING: Definition list ends without a blank line; unexpected unindent.
util/data_utils.py:docstring of pyrsa.util.data_utils:7: WARNING: Definition list ends without a blank line; unexpected unindent.
util/indicator.py:docstring of pyrsa.util.indicator:5: WARNING: Definition list ends without a blank line; unexpected unindent.
util/indicator.py:docstring of pyrsa.util.indicator.allpairs:3: WARNING: Unexpected indentation.
util/indicator.py:docstring of pyrsa.util.indicator.allpairs:4: WARNING: Block quote ends without a blank line; unexpected unindent.
util/rdm_utils.py:docstring of pyrsa.util.rdm_utils:4: WARNING: Definition list ends without a blank line; unexpected unindent.

RDM pattern_descriptors - not auto-generated

The RDM pattern_descriptors field seems to be incorrect in its current form. There should be one descriptor per RDM value, and ideally each would carry information about the condition pair (e.g. condition 1 vs 2) that the corresponding dissimilarity value reflects. However, these are not auto-generated; instead the field contains an index of length equal to the number of conditions.

I don't think the user should be responsible for generating these descriptors. Thoughts?
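One way to auto-generate them would be to label each entry with its condition pair, e.g. (a sketch, with a hypothetical function name):

```python
from itertools import combinations

# Hypothetical sketch: one label per RDM entry, derived from the
# per-condition pattern labels, in upper-triangular order.
def pair_descriptors(pattern_labels):
    return ['%s vs %s' % (a, b) for a, b in combinations(pattern_labels, 2)]
```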

Implement DatasetList options (including sliding window operations)

As discussed and seconded in pull request #16, one design decision is to add a supplementary class, DatasetList, built on a list of dataset objects. This includes implementing other principled segmentation operations, such as sliding windows for dynamic analyses.
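A rough sketch of such a class (name and semantics hypothetical, for illustration only):

```python
# Hypothetical sketch: a DatasetList built on a plain list of dataset
# objects, with a sliding-window segmentation over the list.
class DatasetList(list):
    def sliding_window(self, width, step=1):
        """Yield successive windows of `width` datasets, advancing by `step`."""
        for start in range(0, len(self) - width + 1, step):
            yield DatasetList(self[start:start + width])
```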

Import issue

(env) charesti@colles-12904:~/Documents/GitHub/pyrsa$ ipython
Python 3.6.8 (default, Jan 14 2019, 11:02:34) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import pyrsa.rdm as rsr                                                                                                                                                                                                                               
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-1-54040479812f> in <module>
----> 1 import pyrsa.rdm as rsr

~/Documents/GitHub/pyrsa/pyrsa/__init__.py in <module>
      4 """
      5 
----> 6 import pyrsa.data as data
      7 import pyrsa.inference as inference
      8 import pyrsa.model as model

~/Documents/GitHub/pyrsa/pyrsa/data/__init__.py in <module>
----> 1 from pyrsa.data.dataset import Dataset

~/Documents/GitHub/pyrsa/pyrsa/data/dataset.py in <module>
      8 import numpy as np
      9 import pyrsa as rsa
---> 10 from pyrsa.util.data_utils import check_descriptors_dimension
     11 from pyrsa.util.data_utils import extract_dict
     12 from pyrsa.util.data_utils import get_unique_unsorted

~/Documents/GitHub/pyrsa/pyrsa/util/__init__.py in <module>
----> 1 import pyrsa.util.indicator as indicator

AttributeError: module 'pyrsa' has no attribute 'util'

fix readthedocs

  • use a config file
  • separate docs requirements file, requiring ipykernel and nbsphinx etc

Expanded indexing for RDMs.

Currently the RDMs object supports indexing by integers and bools.

It would be possible to expand this to support indexing by slices (e.g. rdms[2:]), as well as setting by index (e.g. rdms[2:2] = RDMs(…) or something), as well as indexing by descriptors (e.g. rdms['type'], or something pandas-like rdms[rdms['type'] == 'animate']). Also related magic methods such as __delitem__.

Which of these would be useful?

Which of these would violate the principle of least astonishment?

When should we return copies of RDM objects versus references?
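For reference, a minimal sketch of slice and boolean-mask support (the class is trimmed down to the dissimilarities array; the real class holds descriptors too):

```python
import numpy as np

# Minimal sketch: numpy indexing already handles ints, slices and boolean
# masks uniformly; returning a copy answers the copy-vs-reference question
# conservatively.
class RDMs:
    def __init__(self, dissimilarities):
        self.dissimilarities = np.atleast_2d(dissimilarities)

    def __getitem__(self, idx):
        return RDMs(self.dissimilarities[idx].copy())
```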

remove dummy test

We don't need the dummy test anymore, as we now have several regular tests.

fix RDMs.sort_by()

The current version refers to a property called measurements, which does not seem to be in use.

And once that is removed, it is unclear how the following line:

dissimilarities[:, order][:, :, order]

would sort the pattern order
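A sketch of how the pattern sort could work on the vector representation (function name hypothetical): expand to square matrices, reorder both axes, then flatten back. The expression above only works on the (n_rdm, n_cond, n_cond) stack, not on the stored upper-triangular vectors:

```python
import numpy as np

# Hypothetical sketch: sort the pattern order of RDMs stored as
# upper-triangular vectors (shape n_rdm x n_pairs).
def sort_patterns(utv, order, n_cond):
    r, c = np.triu_indices(n_cond, 1)
    mats = np.zeros((utv.shape[0], n_cond, n_cond))
    mats[:, r, c] = utv
    mats = mats + mats.transpose(0, 2, 1)   # symmetrise
    mats = mats[:, order][:, :, order]      # reorder rows and columns
    return mats[:, r, c]                    # back to vector form
```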

Reenable unit test, codeclimate and lint checks for all PR commits

At this point, the automatic unit tests, codeclimate and lint style checks run by GitHub don't seem to be working any more. Any expert looking into this would be appreciated.

It would be great to have them as a coding guideline for contributors, especially because eventually we will officially launch this software package to the general public.

get_unique_unsorted() in data_utils.py resorts data incorrectly

The code should be:

import numpy as np

def get_unique_unsorted(array):
    """Return the unique values of `array` in order of first appearance."""
    # np.unique returns the unique values sorted, plus the index of each
    # value's first occurrence; argsort on those indices restores the
    # original order of appearance
    u, indices = np.unique(array, return_index=True)
    return u[indices.argsort()]

Otherwise the initial order of observation descriptors does not get restored.

alternative to indicator allpairs if nconditions too large

When dealing with a large number of conditions, e.g. with the Natural Scenes Dataset, indicator will fail with a memory error.

Should we have a fallback that computes diff_1 and diff_2 the slow (but memory-friendly) way when measurements1.shape[0] exceeds some threshold?

c_matrix = allpairs(np.arange(measurements1.shape[0]))
diff_1 = np.matmul(c_matrix, measurements1)
diff_2 = np.matmul(c_matrix, measurements2)
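One possible fallback (a sketch; the function name is hypothetical) computes the pairwise differences row by row, so only the result itself is materialised, never the full (n_pairs x n_cond) contrast matrix:

```python
import numpy as np

# Hypothetical sketch: pairwise differences without the contrast matrix.
# `measurements` is an (n_cond x n_channel) array, as in the snippet above.
def pairwise_diffs_lowmem(measurements):
    n = measurements.shape[0]
    rows = []
    for i in range(n - 1):
        # differences of condition i against all later conditions
        rows.append(measurements[i] - measurements[i + 1:])
    return np.concatenate(rows, axis=0)  # shape: (n*(n-1)/2, n_channel)
```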

Indicator module only handles row vectors

My understanding is that the index_vector passed to any function in indicator.py is a column vector (of same size as the number of rows in data.measurements), but it seems the indicator functions only work with row vectors.
Given my limited python experience, I don't see a fully satisfactory fix to allow dynamic handling of row or column vectors. Any insights? Or should we always assume a specific dimensionality?
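One simple convention (a sketch) is to flatten at the top of each indicator function, so that (n,), (n, 1) and (1, n) inputs all behave the same:

```python
import numpy as np

# Sketch: normalise the input so row vectors, column vectors and flat
# arrays are all accepted.
def as_index_vector(index_vector):
    return np.asarray(index_vector).ravel()  # (n, 1) and (1, n) become (n,)
```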

Stability to unbalanced designs

The first implementations of the different RDM calculations largely assume that the RDMs all contain the same objects. We should change that if we want to be robust to unbalanced designs later.

alternative to np.corrcoef for correlation distance

Hi @HeikoSchuett ,

Given that we use einsum for all other RDM calculation cases, I figured we could also use it for the correlation distance. That speeds up performance massively compared to numpy's corrcoef.

def correlation_distance(X):
    """Correlation distance via einsum.

    Args:
        X: conditions x voxels data array

    Returns:
        RDM in upper-triangular (vector) form.
    """
    X = X - X.mean(axis=1, keepdims=True)  # center each row
    X /= np.sqrt(np.einsum('ij,ij->i', X, X))[:, None]  # scale rows to unit norm

    m = X.shape[0]
    r, c = np.triu_indices(m, 1)

    # rows are centered and unit-norm, so the dot products are correlations
    return 1 - np.einsum('ik,jk', X, X)[r, c]

saving of results object fails (pyrsa.inference.result.Result)

i tried to save a results object using its save method and it didn't immediately work.
i haven't investigated. here's the error and traceback...

File "C:\Users\Niko Kriegeskorte\Google Drive\programming\pyRSA\pyrsa\demos\exercise_all.py", line 311, in
results_3_full.save('results_alexnetSim2')

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\inference\result.py", line 67, in save
write_dict_hdf5(filename, result_dict)

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\util\file_io.py", line 25, in write_dict_hdf5
_write_to_group(file, dictionary)

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\util\file_io.py", line 41, in _write_to_group
_write_to_group(subgroup, value)

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\util\file_io.py", line 41, in _write_to_group
_write_to_group(subgroup, value)

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\util\file_io.py", line 41, in _write_to_group
_write_to_group(subgroup, value)

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\util\file_io.py", line 41, in _write_to_group
_write_to_group(subgroup, value)

File "c:\users\niko kriegeskorte\google drive\programming\pyrsa\pyrsa\pyrsa\util\file_io.py", line 38, in _write_to_group
group[key] = value

File "C:\Users\Niko Kriegeskorte\Anaconda3\envs\PyRSA\lib\site-packages\h5py-2.10.0-py3.7-win-amd64.egg\h5py\_hl\group.py", line 387, in __setitem__
ds = self.create_dataset(None, data=obj, dtype=base.guess_dtype(obj))

File "C:\Users\Niko Kriegeskorte\Anaconda3\envs\PyRSA\lib\site-packages\h5py-2.10.0-py3.7-win-amd64.egg\h5py\_hl\group.py", line 136, in create_dataset
dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)

File "C:\Users\Niko Kriegeskorte\Anaconda3\envs\PyRSA\lib\site-packages\h5py-2.10.0-py3.7-win-amd64.egg\h5py\_hl\dataset.py", line 118, in make_new_dset
tid = h5t.py_create(dtype, logical=1)

File "h5py\h5t.pyx", line 1634, in h5py.h5t.py_create

File "h5py\h5t.pyx", line 1656, in h5py.h5t.py_create

File "h5py\h5t.pyx", line 1717, in h5py.h5t.py_create

TypeError: No conversion path for dtype: dtype('<U5')
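The error comes from h5py being unable to store numpy unicode arrays (dtype '<U5'). One possible workaround (a sketch; the helper name is hypothetical) is to encode string descriptors to bytes before writing:

```python
import numpy as np

# Hypothetical helper: h5py cannot serialise numpy unicode ('<U...') arrays;
# encoding them to fixed-width bytes ('S...') avoids the TypeError above.
def encode_for_hdf5(value):
    arr = np.asarray(value)
    if arr.dtype.kind == 'U':                # unicode string array
        return np.char.encode(arr, 'utf-8')  # bytes array h5py can store
    return arr
```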

fMRI data to pyrsa

Hi,

In order to define our data set, could you please provide a step-by-step guide for how to import fMRI data and put it in the right format for the toolbox?
