mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python

Home Page: https://mne.tools

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 99.29%, Shell 0.20%, Makefile 0.03%, Csound Document 0.22%, JavaScript 0.08%, Jinja 0.16%, jq 0.01%, CSS 0.01%
Topics: python, neuroscience, electroencephalography, magnetoencephalography, electrocorticography, machine-learning, statistics, visualization, eeg, meg

mne-python's Introduction

MNE-Python

MNE-Python is an open-source Python package for exploring, visualizing, and analyzing human neurophysiological data such as MEG, EEG, sEEG, ECoG, and more. It includes modules for data input/output, preprocessing, visualization, source estimation, time-frequency analysis, connectivity analysis, machine learning, statistics, and more.
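
For orientation, here is a minimal sketch of a typical workflow using the bundled sample dataset (the calls shown are illustrative, not a prescription):

import mne

sample_dir = mne.datasets.sample.data_path()  # downloads the sample data if needed
raw = mne.io.read_raw_fif(f'{sample_dir}/MEG/sample/sample_audvis_raw.fif', preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)                    # band-pass the continuous data
events = mne.find_events(raw, stim_channel='STI 014')  # trigger events from the stim channel
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5)
evoked = epochs.average()                              # average epochs into an evoked response
evoked.plot()                                          # butterfly plot of the result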

Documentation

Documentation for MNE-Python encompasses installation instructions, tutorials, and examples for a wide variety of topics, contributing guidelines, and an API reference.

Forum

The user forum is the best place to ask questions about MNE-Python usage or the contribution process. The forum also features job opportunities and other announcements.

If you find a bug or have an idea for a new feature that should be added to MNE-Python, please use the issue tracker of our GitHub repository.

Installation

To install the latest stable version of MNE-Python with minimal dependencies only, use pip in a terminal:

$ pip install --upgrade mne

The current MNE-Python release requires Python 3.9 or higher. MNE-Python 0.17 was the last release to support Python 2.7.

For more complete instructions, including our standalone installers and more advanced installation methods, please refer to the installation guide.

Get the development version

To install the latest development version of MNE-Python using pip, open a terminal and type:

$ pip install --upgrade git+https://github.com/mne-tools/mne-python@main

To clone the repository with git, open a terminal and type:

$ git clone https://github.com/mne-tools/mne-python.git

Dependencies

The minimum required dependencies to run MNE-Python are:

For full functionality, some functions require:

Contributing

Please see the contributing guidelines on our documentation website.

License

MNE-Python is licensed under the BSD-3-Clause license.

mne-python's People

Contributors

adykstra, agramfort, britta-wstnr, cbrnr, choldgraf, christianbrodbeck, dengemann, drammock, guillaumefavelier, hoechenberger, jaeilepp, jasmainak, jona-sassenhagen, kaichogami, kingjr, larsoner, leggitta, lorenzo-desantis, massich, mluessi, mmagnuski, mscheltienne, rgoj, rob-luke, sappelhoff, teonbrooks, trachelr, wmvanvliet, wronk, yousrabk


mne-python's Issues

Same file-name error is back

Hi all,

I just noticed that the current raw.save method is not failsafe against the error that occurs when you try to save a raw object using the same file name (closed some weeks ago).
Just to refresh our grey cells: we added an exception for the case where the filename passed to save is already in use. But then we added concatenation support and added info['filenames']. Now when you pass a file it is checked against the files in this list. So far so good. But the filenames in the info are absolute paths, and many users, I suppose, would just pass a name, not a path. If that file name happens to exist in the current directory --- e.g., when you modify data and want to overwrite the file --- the case is not caught by our current exception handler:

if any([fname == f for f in self.info['filenames']]):
    raise ValueError('You cannot save data to the same file.'
                     ' Please use a different filename.')

I tried to resolve / handle relative paths and added a test, but in the process this destroyed my example fiff.
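
One shape the resolution could take (a sketch, not the eventual PR's code):

import os.path as op

fname_abs = op.realpath(op.abspath(fname))   # resolve relative names and symlinks
used = [op.realpath(f) for f in self.info['filenames']]
if fname_abs in used:
    raise ValueError('You cannot save data to the same file.'
                     ' Please use a different filename.')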

I just post to make us aware of this issue; I'll get back with a PR as soon as I have a grasp on what's going on here.

D

raw.proj is not (ever?) set

I just noticed that in a Raw instance, raw.proj remains None even when raw.add_proj() has been called, since add_proj() modifies raw.info['proj'] but not raw.proj. This causes raw.read_raw_segment() not to apply the projection. It seems like saner behavior would be to have add_proj set the raw.proj variable, since this is what raw.read_raw_segment() examines. Is there a good way to add this?

It might make sense to have it do this, since you can always set the projections "inactive" if you don't want this behavior. I noticed that there is currently a "proj=True/False" option in the Epochs instance---are there other places this behavior would have to be changed (i.e., making it so projection is always done in the Raw class based on the current projection status)?

This could come up if we build other functions that read raw data beyond Evoked. For example, if we computed continuous (raw) projections and added them to raw, and did this repeatedly, we'd get the same projection over and over, which doesn't seem like the desired behavior.
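
A sketch of the suggested behavior (make_projector_info here is a stand-in for whatever builds the projection matrix from info['projs'], not an exact API):

def add_proj(self, projs):
    """Add SSP vectors (sketch of the behavior suggested above)."""
    self.info['projs'].extend(projs)
    # keep the attribute consulted by read_raw_segment() in sync with info,
    # so subsequent reads actually apply the projection
    self.proj, _ = make_projector_info(self.info)  # hypothetical helper
    return self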

Failing nosetest

Hey folks,

On the latest master, I now get a failing nosetest in line 314, in test_epochs_to_nitime:

assert_true(epochs_ts.ch_names == [epochs.ch_names[k] for k in picks])

Anyone else getting this one? If not, I'll do some debugging on my end.

Sample Dataset Missing Files

Thought I'd start an issue for this to keep track: when the sample dataset is downloaded automatically for me it's missing:

  • the file sample_audvis_set1.dip
  • the labels folder

If this might be a problem specific to the way I downloaded it let me know what you'd need to know.

save epochs to fif

It would be nice to be able to store epochs in a fif file.
There is support for 3D matrices in MNE.

ICA rejection for Raw and Epochs

I would like to work on including ICA rejection functionality based on the FAST ICA from scikits-learn.
Who else is interested in that?
In the course of the next days I would like to start doing some first drafts on the API.
One question certainly is whether to 'sex up' the objects or whether to provide a more module-like API, e.g. mne.ica or mne.signal.ica.
Another issue would be how to handle the visual checking and component (de)selection. I guess the numpy plotting is too slow for that. One way, however, might be to write fiffs in such a way that mne_browse_raw could be used for it: channel names would then be components, and bad channels would be bad components.
The data dispatch could certainly be handled either by modifying the object's data in place (interactive python session) or via modified Raw / Epochs constructors for interoperation with mne_browse_raw.

Wdyt?

Spatio-temporal clustering code

Now that I'm getting the hang of mne-python's codebase, I'd like to port over the non-parametric spatio-temporal clustering code I've written in MATLAB/C (mex files). It's going to be a huge pain, but probably worth it. At least that's what KC tells me...

Anyway, I want to discuss how to implement it here so that I can hopefully construct it correctly from the start.

Let me start with what I have in MATLAB.

The code takes data in an nsrc x ntimes x nsubjects x 2 array and computes a non-parametric permutation test over the 2 conditions using a paired, 2-tailed t-test. In each permutation, a 2-tailed t-test is performed and thresholded at p < 0.05 to obtain a set of putatively significant indices (from the nsrc x ntimes possible locations). The heart of the code, the spatio-temporal clustering, takes these indices and a cell array of "neighbors" and returns the N clusters that result. To get the "neighbors", I read in a source space after running "mne_add_patch_info --dist 7 --src %s --srcp %s" to add vertex distances (usually on fsaverage).
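
To make the structure concrete, here is a rough Python sketch of that permutation scheme; find_clusters stands in for the neighbor-based clustering step and is not an existing function:

import numpy as np
from scipy import stats

def permutation_cluster_sketch(X, find_clusters, alpha=0.05, n_perm=1000, seed=0):
    """X: (nsrc, ntimes, nsubjects, 2); find_clusters groups supra-threshold
    indices into clusters using the neighbor structure."""
    rng = np.random.RandomState(seed)
    diff = X[..., 0] - X[..., 1]                        # paired differences
    n_subj = diff.shape[-1]
    t_thresh = stats.t.ppf(1 - alpha / 2., n_subj - 1)  # 2-tailed threshold
    t_obs, _ = stats.ttest_1samp(diff, 0, axis=-1)
    clusters, sums = find_clusters(np.abs(t_obs) > t_thresh, t_obs)
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1., 1.], size=n_subj)      # permute condition order per subject
        t_p, _ = stats.ttest_1samp(diff * flips, 0, axis=-1)
        _, s = find_clusters(np.abs(t_p) > t_thresh, t_p)
        max_null[p] = np.max(np.abs(s)) if len(s) else 0.
    p_vals = [(max_null >= abs(s)).mean() for s in sums]  # cluster-level p-values
    return clusters, p_vals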

So the question is, how should I structure the functions in python?

There are (at least) 3 that are essential:

  1. A nonparametric test function
  2. A spatio-temporal clustering function
  3. A source-space neighbor loading/calculation function

What should I call these functions, what arguments should they accept, what .py files should I stick them in?

Incidentally, calculating the geodesic neighbor distances can take some time, so I think we should offer people the ability to save those.

Smoothing / filling labels?

I want to write a function (since I don't see one that exists) to allow for smoothing / filling in of labels. It's really convenient to be able to make .label files of significant clusters that have been filled in for plotting purposes (n_smooth=2 seems to work well in my experience).

If it's alright, I'm going to add a method to the Label class called smooth() that would take in the source space the label was made in (defaulting to fsaverage) and number of smoothing steps, thus smoothing the labels with a given number of steps. I'll default it to use fsaverage's source space and 2 smoothing steps since that'll be my (and presumably other people's?) most frequent use case.
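
A hypothetical signature for the proposed method (names and defaults are illustrative only):

class Label:
    def smooth(self, src=None, n_smooth=2):
        """Fill in a label by spreading it to neighboring source-space vertices.

        src : the source space the label was made in (default: fsaverage's).
        n_smooth : number of smoothing steps (2 works well in practice).
        """
        ...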

I'm going to take a stab at it and add it to PR #131 unless I hear otherwise... going once... going twice...

2-Way Repeated Measures ANOVA + contrasts

In fact, to this day nothing equivalent to R's anova function seems to exist in Python. As this reflects one of the most commonly used designs in cognitive neuroscience, it would be nice to have it in mne-python, or at least in statsmodels, rather than using R, SPM, etc. This is also somewhat related to #133. I think it would be nice to stick with a GLM framework and to think about the R formula language, as currently provided by the Python package patsy, to set contrasts, etc. I would also like this to be general enough to cover ICA, sensor, and source space.

Wdyt?

mne.Epochs returns "Unknown channel type" when using reject

I ran into an issue making epochs with the following raw file:

http://faculty.washington.edu/larsoner/Eric_Loc_010_01_allclean_fil55_raw_sss.fif

Using these commands:

import numpy as np
import mne

filename = 'Eric_Loc_010_01_allclean_fil55_raw_sss.fif'
reject = dict(grad=np.inf, mag=np.inf, eeg=np.inf)
epochs = mne.Epochs(mne.fiff.Raw(filename, preload=True, verbose=False),
                    events=np.array([[62085, 0, 1], [65086, 0, 1]]),
                    event_id=None, tmin=-0.2, tmax=1, reject=reject)

I get this error:
...
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/epochs.py", line 210, in init
self._reject_setup()
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/epochs.py", line 338, in _reject_setup
idx = channel_indices_by_type(self.info)
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/fiff/pick.py", line 408, in channel_indices_by_type
if channel_type(info, k) == key:
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/fiff/pick.py", line 50, in channel_type
raise Exception('Unknown channel type')
Exception: Unknown channel type

Notably, the error does not occur with the reject option removed as:
epochs = mne.Epochs(mne.fiff.Raw(filename, preload=True, verbose=False),
                    events=np.array([[62085, 0, 1], [65086, 0, 1]]),
                    event_id=None, tmin=-0.2, tmax=1)

Thoughts?

spatio_temporal_src_connectivity failure

It looks like spatio_temporal_src_connectivity (which calls spatio_temporal_tris_connectivity) doesn't handle some source spaces correctly.

import mne
data_path = mne.datasets.sample.data_path('..')
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = mne.minimum_norm.read_inverse_operator(fname_inv)
connectivity = mne.spatio_temporal_src_connectivity(inverse_operator['src'], 
                                                    n_times=1)
a = connectivity.shape[0] 
b = sum([s['nuse'] for s in inverse_operator['src']])
assert a == b

a is 8196 while b is 7498, so this fails. I assume this is because it isn't designed to handle oct source spaces (?) --- in which case, should we have it throw an error? Or is there some way to make it work with oct source spaces?

Spatial and temporal center of mass?

Is there currently code for calculating the (weighted) spatial or temporal center-of-mass? If not, I have MATLAB code I can port over. I find it useful for writing papers and such when I can pass in a label (or weighted set of vertices) that I get from clustering and get MNI / Talairach coordinates of the spatial center of mass (weighted by the number of time points each vertex is active) and the temporal center of mass (weighted by the number of vertices active at each time point).

I'd like to use / add three functions:

  1. Weighted spatial center of mass
  2. Weighted temporal center of mass
  3. Vertex to talairach (and MNI) coordinates

I think I can work out the necessary inputs / outputs from my MATLAB code, but what should I name the functions / where should I put them (if they don't exist already)?
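
For reference, a sketch of the two weighted centers of mass as described above (pos: vertex coordinates, act: an (n_vertices, n_times) activation matrix; purely illustrative):

import numpy as np

def spatial_center_of_mass(pos, act):
    w = (act != 0).sum(axis=1).astype(float)   # time points active, per vertex
    return (pos * w[:, None]).sum(axis=0) / w.sum()

def temporal_center_of_mass(times, act):
    w = (act != 0).sum(axis=0).astype(float)   # vertices active, per time point
    return (times * w).sum() / w.sum()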

Reduce memory usage for cluster level statistics when n_jobs > 1

Currently, the permutation cluster test (mne.stats.permutation_cluster_test) requires large amounts of memory when n_jobs > 1, since the data is copied for every parallel instance. There is currently an interesting PR for joblib to allow shared memory between processes (joblib/joblib#44). We should adapt the cluster stats code such that it can make use of this feature when it is available in joblib.

One problem is that we will need to change the way the data is shuffled (as we cannot modify X_full in the function _one_permutation). I suggest we create an index array of size X_full.shape[1], shuffle the index array, and apply stat_fun to a subset of variables at a time.
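
A sketch of that scheme (names follow this issue, not any final API); X_full itself is never copied or modified, so it could live in shared memory:

import numpy as np

def one_permutation_sketch(X_full, n_samples_per_group, stat_fun, rng):
    idx = np.arange(X_full.shape[1])
    rng.shuffle(idx)                             # shuffle indices, not the data
    groups = np.split(idx, np.cumsum(n_samples_per_group)[:-1])
    return stat_fun(*(X_full[:, g] for g in groups))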

I can start working on this but I think it makes sense to discuss it first.

(Epoch) resampling

I'd like to implement a method for resampling epoched data---it saves a great deal of processing power and storage space for our analysis.

I noticed that the issue of temporal resampling has come up before, so I'd like to implement a general resampling function that operates on a one-dimensional numpy array (or along columns/rows if we want), and build resample methods for epoched data on top of that.

I propose replicating the MATLAB resample function, which makes careful use of upfirdn, which has been ported to Python:

http://code.google.com/p/upfirdn/

The MATLAB code takes care to temporally align the output signal with the input (to avoid inducing delay), and I should in theory be able to replicate this functionality pretty easily. What do people think?
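
With today's scipy this could be sketched via resample_poly, which wraps upfirdn (the careful alignment handling described above is omitted here):

from scipy.signal import resample_poly

def resample_epochs_sketch(data, up, down):
    """data: (n_epochs, n_channels, n_times); resample along the last axis."""
    return resample_poly(data, up, down, axis=-1)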

Issue with drop_log

Hey,

I have a question about the functionality of mne.Epochs. The documentation says that the rejected channels are returned as a list of all offending channels, but when I look at the code, it seems that it would only return one channel of a particular type (magnetometer, gradiometer, etc.) instead of all of the offending channels.

I found the source of this issue in the function _is_good of mne/epochs.py, line 673. Here, it is looping through the different types in key instead of the channels themselves. Is this a bug or is this the intended behavior?

Computing grand covariance with different projectors across runs

I'm working on computing a grand covariance across several runs by using calculate_covariance(list_of_epochs), and I ran into a "ValueError: Epochs must have same projectors". When calculating a grand covariance in mne_process_raw, having the same projectors across runs was not a requirement. Is it possible to remove this restriction here? If there is support for averaging forward solutions (because of different head positions on each run), it seems like there should be the equivalent here for covariances (since the MEG projectors would change, for example).

epochs event_id as dict and support for multiple marker numbers

I'd like to be able to write

epochs = mne.Epochs(raw, events, event_id=dict(auditory=1, visual=3), ...)
evoked_auditory = epochs.average('auditory')
evoked_visual = epochs.average('visual')
evoked = epochs.average()

and evoked_auditory.comment would then contain "Evoked auditory" and not "unknown" as it is now.

cc/ @Eric89GXL @mluessi @christianmbrodbeck @dengemann if you want to give it a try...

IPython shows decorator docstring instead of function docstring

We recently implemented the verbose option using a decorator. I just noticed that IPython now shows the decorator docstring instead of the function docstring; e.g., mne.minimum_norm.apply_inverse?? shows the docstring of verbose().

This seems to be a problem with IPython but it would be good if we can find a way to fix it before the release.
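
The usual fix for this class of problem is functools.wraps, which copies the wrapped function's metadata onto the wrapper; whether it interacts cleanly with our verbose machinery would need checking:

import functools

def verbose(function):
    @functools.wraps(function)   # preserves __doc__, __name__, etc.
    def dec(*args, **kwargs):
        return function(*args, **kwargs)
    return dec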

read_events and write_events don't (optionally) use text files

I noticed that the read_events and write_events functions only operate on .fif files. mne_browse_raw can read both .fif (binary) files and .eve files (text files). If people are okay with it, I'd like to extend read_events and write_events to parse the filename extension: if it's .fif, use binary, and if it's .eve/.lst/.txt, use simple text I/O. Our lab makes extensive use of .lst files, and I imagine other labs might as well, so I think this could get some use. Is it okay if I add this functionality? If so, do you think that read(/write)_events() is the correct place, or should I make alternative functions read(/write)_events_text()? I lean toward the former because it's simpler from a user standpoint.
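
A sketch of the proposed dispatch (_read_events_fif is a stand-in for the existing binary reader, not an actual function name):

import os
import numpy as np

def read_events(fname):
    ext = os.path.splitext(fname)[1].lower()
    if ext == '.fif':
        return _read_events_fif(fname)             # existing binary path
    elif ext in ('.eve', '.lst', '.txt'):
        return np.loadtxt(fname).astype(np.int64)  # simple text I/O
    else:
        raise ValueError('Unrecognized events file extension: %r' % ext)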

Help computing inverse operator for MEG-only, EEG-only, etc

I noticed that in make_inverse_operator, the comments say that the first argument "info" should specify which channels to use to construct the inverse. However, if I load my forward solution and covariance and run something like:

picks = mne.fiff.pick_types(fwd['info'], meg=meg_call_flags[fi][0],
                            eeg=meg_call_flags[fi][1], eog=False)
info['chs'] = [fwd['info']['chs'][ci] for ci in picks]

followed by:

inv = mne.minimum_norm.make_inverse_operator(info, fwd, empty_cov, loose=None, depth=None)

I get an error on line 154 of pick_types that "info['chs'][k][kind]" has an index out of range. I assume, then, that this is not the correct way to make EEG-only or MEG-only inverses.

How do I make an EEG-only or MEG-only inverse solution? Do I need to make an EEG-only or MEG-only evoked file first? If so, I think we should implement a way to make EEG-only or MEG-only without having to do this, because in principle these steps can be separate (and were in the C implementation).
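
A guess at a safer way to restrict the channels, assuming a helper (called pick_info here, hypothetically) that subsets all info fields consistently rather than overwriting info['chs'] alone:

picks = mne.fiff.pick_types(fwd['info'], meg=True, eeg=False, eog=False)
info_meg = pick_info(fwd['info'], picks)   # hypothetical consistent subsetting
inv = mne.minimum_norm.make_inverse_operator(info_meg, fwd, empty_cov,
                                             loose=None, depth=None)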

compute_proj_ecg throws a warning

I get a warning when using the default parameters for compute_proj_ecg:

/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/filter.py:234: UserWarning: Attenuation at stop frequency 4.5Hz is only 15.1dB. Increase filter_length for higher attenuation.

I assume this has to do with the 5 Hz high-pass frequency requiring a long FIR filter.

Would it make sense to set the filter_length argument to something larger by default for compute_proj_ecg?

Numpy version >= 1.7: error reading fiff-file

With NumPy version 1.7 and above, reading fiff files is no longer possible because of TypeError: data type ">h2" not understood.

Solution:
Replace all occurrences of "h2" in fiff/tag.py with "i2".
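
For illustration, the distinction as seen from numpy (the second line is exactly what the reader trips over):

import numpy as np

np.dtype('>i2')  # big-endian int16 -- accepted by all numpy versions
np.dtype('>h2')  # TypeError: data type ">h2" not understood (numpy >= 1.7)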

Nosetests fail because sample dataset is outdated

Would it be possible to add a link to the newest version of the sample dataset in the development section? I just downloaded the one that is linked on the FTP server, and I keep getting errors/warnings like this in nosetests:

UserWarning: Sample dataset (version 0.3) is older than mne-python (version 0.4.git). If the examples fail, you may need to update the sample dataset by using force_update=True

No such file or directory: '/home/larsoner/custombuilds/mne-python/mne/simulation/tests/../../../examples/MNE-sample-data/MEG/sample/labels/Aud-lh.label'

I got errors of this sort even when I ran this from scratch (without downloading from the FTP site). I'm also not sure where I can set force_update=True to get it to download the newest version, but having a link would be preferable for me in any case.

ENH: Preprocessing module instead of artifacts and preprocessing

Hi all,

I wondered why we actually have two modules for very interrelated operations. Also, ICA for example is about more than just artifacts. For the sake of consistency and simplicity, I'd suggest merging artifacts into preprocessing. What do you think?

Denis

Epoch concatenation

Let me know if this is implemented and I already missed it, but I'd like to implement a method for concatenating Epoch objects. We typically use 6-9 runs for our experiments, and it's useful to be able to treat these as being equal at some point. To concatenate epochs, I think all I'd need to do is check to make sure these properties were equal across epochs before concatenating data:

  1. info['chs']
  2. info['ch_names']
  3. info['nchan']
  4. picks
  5. proj
  6. info['projs']
  7. times

If any of these were not equal across epoch objects, an error would be thrown stating that only objects with equivalent values could be concatenated.

Then the following would need to be concatenated appropriately:

  1. events
  2. _data

One tricky part is what to do with data preloading. An easy (lazy) solution is to require preloading, so that the above concatenation of _data will work. The other option is to implement multiple-source-file support in _get_data_from_disk(), but maybe as a first pass this could be disabled and left to a future improvement?
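
A sketch of the preload-only variant described above (equality checks are simplified; a real implementation would also compare info['chs'], picks, proj, and info['projs']):

import numpy as np

def concatenate_epochs_sketch(epochs_list):
    first = epochs_list[0]
    for other in epochs_list[1:]:
        if (first.info['ch_names'] != other.info['ch_names']
                or first.info['nchan'] != other.info['nchan']
                or not np.array_equal(first.times, other.times)):
            raise ValueError('Epochs are not compatible; cannot concatenate')
    first.events = np.concatenate([e.events for e in epochs_list])
    first._data = np.concatenate([e._data for e in epochs_list], axis=0)  # requires preload
    return first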

Cannot save inverse operator because 'use_tris' in source space is None

I hit a problem saving an inverse operator:

...
Write inverse operator decomposition in Eric_Loc_01055-SSS-meg-ERM-Fixed-inv.fif...
Writing inverse operator info... [done]
Writing noise covariance matrix.
Writing source covariance matrix.
Writing orientation priors.
Write a source space...
Traceback (most recent call last):
...
File "/home/larsoner/Documents/python/utils/mneFun.py", line 50, in gen_inverses
mne.minimum_norm.write_inverse_operator(inv_name, inv)
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/minimum_norm/inverse.py", line 348, in write_inverse_operator
write_source_spaces(fid, inv['src'])
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/source_space.py", line 404, in write_source_spaces
_write_one_source_space(fid, s)
File "/home/larsoner/.local/lib/python2.7/site-packages/mne-0.4.git-py2.7.egg/mne/source_space.py", line 461, in _write_one_source_space
this['use_tris'] + 1)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'

Looks like the source space saving bombed out because this['use_tris'] was None instead of an integer. I assume this means we need to fix how read_source_space or write_source_space works, because running mne.source_spaces.read_source_spaces on the following file yields None for src[0]['use_tris'], which will cause the later error trying to write it out:

http://faculty.washington.edu/larsoner/AKCLEE_110-7-src.fif

Covariance calculation speed

I noticed that the covariance calculations take a number of minutes, and I was thinking of trying to parallelize them. Would it be alright if I made a cov-parallel branch utilizing parallel_func (like is done in raw.py)?

An annoying side-effect of this would be that you'd need more memory, as we won't be able to add the results of the dot() operations right away [I don't think "data += np.dot(e, e.T)" when iterating across "e" is parallel-friendly], and would instead have to store each result separately in memory and then sum across epochs after the fact. This could get a little hairy for long recordings; for 400 sensors and 6 minutes of empty-room recording broken into 200 ms chunks, we'd need 400 x 400 x 6 x 60 x 5 = 288 million numbers. Using float64s this would be a couple gigs of data. My preferred choice would be to:

  1. default to using 1 "job", and in that case, don't use any extra memory, so it "just works" for people.
  2. add a warning in the function comments saying that to use multiple cores / parallelize, they will need lots of memory
  3. if multiple jobs are requested, use the new store-and-add-afterward parallel method.

Thoughts? For people with lots of memory, this will make things much faster. For people with less memory, they could probably still do it with evoked covariance calculations, for example.
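
A sketch of the store-then-sum scheme with joblib; with n_jobs=1 nothing extra is allocated, matching option 1 above:

import numpy as np
from joblib import Parallel, delayed

def cov_from_epochs_sketch(epochs_data, n_jobs=1):
    if n_jobs == 1:
        data = np.zeros((epochs_data[0].shape[0],) * 2)
        for e in epochs_data:
            data += np.dot(e, e.T)       # accumulate in place, no extra memory
        return data
    parts = Parallel(n_jobs=n_jobs)(delayed(np.dot)(e, e.T) for e in epochs_data)
    return np.sum(parts, axis=0)         # memory cost: one matrix per epoch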

connectivity issues with numpy 1.3

    691             if indices is None:
    692                 # only compute r for lower-triangular region

--> 693                 indices_use = np.tril_indices(n_signals, -1)
    694             else:
    695                 indices_use = check_indices(indices)

AttributeError: 'module' object has no attribute 'tril_indices'
WARNING: Failure executing file: <plot_sensor_connectivity.py>

we need to backport this...
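
A possible backport for numpy < 1.4, where np.tril_indices is missing:

import numpy as np

if not hasattr(np, 'tril_indices'):
    def tril_indices(n, k=0):
        """Indices of the lower triangle of an (n, n) array."""
        return np.nonzero(np.tril(np.ones((n, n), dtype=bool), k))
else:
    tril_indices = np.tril_indices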

RandomizedPCA before ICA

Hi folks,

I would like to include RandomizedPCA from sklearn as one stage before the ICA. This would have at least two obvious advantages. First, it would speed up the decomposition; second, we could pass an explained-variance criterion instead of n_components, which I guess is more systematic / controlled than deciding the number of components heuristically at best. The rPCA would then do the whitening.
One issue I see, however --- I guess a minor one --- is that we would have to slightly change the API due to the different init requirements, i.e., for the rPCA we need to specify the number of components in advance. If we want to use the rPCA to inform our n_components choice on init of ICA, this won't work. I see two options.
A) Moving the picks arg from the ICA.decompose_XXX methods to ICA.__init__ --> len(picks) will tell rPCA how many components there are, rPCA will tell ICA the n_components, and the n_components arg can then carry, if a float between 0 and 1, the explained-variance selection criterion.
B) Putting rPCA inside the ICA.decompose_XXX methods and doing basically the same.

I have a preference for A) because the entire ICA workflow is fixed to the channel structure passed on decomposition. As far as I can see, nothing would be lost with this move. In fact the code would become more compact. B) could do it as well, but the rPCA would then appear in two methods, and I think with regard to parameters and the interface it would take effort to keep it consistent.
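
A hypothetical shape of option A (names and defaults illustrative only):

class ICA(object):
    def __init__(self, picks, n_components=0.99):
        self.picks = picks            # fixes the channel structure up front
        # int -> number of components; float in (0, 1) -> keep enough
        # randomized-PCA components to explain this fraction of variance
        self.n_components = n_components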

What do you think?

Denis

DICS

Hi all,

Is anyone of you intending to work on a DICS implementation in the near future?
A few days ago I sat down with a colleague of mine (Cao Tri Do, now an engineer) who just implemented DICS in... well -- IDL -- for his master's thesis.
We sat down and studied both his code and the mne-python LCMV and came to the conclusion that it should be pretty straightforward to implement DICS, especially now that there is the PSD...

What do you think about this?
Of course if someone almost got there, this wouldn't make sense...

Best,
Denis

Epochs attribute exposure | private attributes

@agramfort In the course of iterating toward our final ICA code we once had a discussion about epochs.picks and epochs.raw etc. I think just now shortly before 0.5 might be a good moment to get back to this.

So the proposal would be to hide legacy / upstream-parent attributes on child / downstream objects in order to make things neater and more user-friendly. The obviously relevant candidates are Epochs and Evoked, for which we would keep e.g. ch_names on the surface but would hide picks and raw, for example. First, @christianmbrodbeck @Eric89GXL and @mluessi -- wdyt? And if you agree, do you see other objects / attributes which could be subsumed under this umbrella?

Continuous SSP projection

Currently there are options for making SSP projectors for epochs and from averages, but none for doing a continuous SSP projection vector for an entire (raw) run like there is in mne_process_raw. I have tried "faking it" by pulling in the entire run as an epoch, but I run into an error doing this. I am probably doing something wrong, but at least having a wrapper that accomplished this correctly would be very useful.

generic msi/4D/bti support

Hi all,

I just wanted to share recent enhancements of my BTi/4D importer example.

https://github.com/dengemann/mne-python/tree/juelich/juelich/fiff_from_4D/fiff_handler

I recently wrote a BTi ASCII header parser, and the RawFrom4D class now takes as input the header and data files produced by the msi / 4D routines for ASCII export (a binary file you can read with numpy.fromfile, plus a header), whereas the former Juelich example has moved to RawFromJuelich.
I could polish this and we would get close to generic 4D support.

What do you think?

Denis

Equivalent for mne_mark_bad_channels

Currently making system calls to accomplish this. Perhaps it could currently be done by reading a raw fif and writing it out again, but I imagine this will be slow as it will have to re-write all the channel data (I assume this is not what the C code does?).
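
For reference, what the read-modify-write route looks like (API of the era discussed here; file and channel names illustrative):

import mne

raw = mne.fiff.Raw('run_raw.fif')
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # mark the offenders
raw.save('run_bads_raw.fif')                # re-writes all channel data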

Store ICA matrices in fiff file

Our current ICA class allows saving sources in a fiff file. Saving the ICA matrices in a fiff file as well would be very nice for different reasons, e.g. saving the entire ICA session / sharing between programs, but also sharing with colleagues, and finally toggling between mixed and unmixed data in a regular sensor-space raw object. We already started discussing this earlier, but I think our discussion didn't converge satisfyingly. As we have had epochs.save for some time now, this one came back to mind.
I would like to create a new tag, e.g. FIFF_MNE_MISC_DATA : 123, and then save the matrices using write_float_matrix.

Wdyt?

Adding extra information in epoch dropping?

Hey, I want to extend the functionality of Epochs such that it keeps track of why each epoch was dropped. We use this in our code to help find channels that we potentially should have marked as bad, but possibly missed.

I noticed that the drop_bad_epochs() function goes through and determines which epochs are good. I'd like to extend this in two ways:

  1. Store the variable good_events as self.good_events so it can be deduced after the fact which of the original epochs were valid.

  2. Store an nepochs x nchannels x 2 bool array (or similar dimensionality if it's more consistent with the code) recording whether, in each epoch, each channel violated the reject or the flat parameter (thus the x 2). This could presumably be easily incorporated in the _is_good_epoch function.
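
A sketch of the per-epoch bookkeeping proposed in point 2 (shapes and names illustrative):

import numpy as np

def epoch_violations(data, reject, flat):
    """data: (n_channels, n_times); returns an (n_channels, 2) bool array
    marking (violated_reject, violated_flat) per channel."""
    ptp = data.max(axis=1) - data.min(axis=1)        # peak-to-peak amplitude
    return np.stack([ptp > reject, ptp < flat], axis=1)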

Thoughts?

Epoch Rejection with t-start

I have a use case where my baseline is 700 to 600 ms before my actual epoch starts. I would still like to load my actual epoch starting at -100 ms for visualization, but I would not want to reject trials because of something that occurs in that unused baseline time period. Since the reject Epochs kwarg is already a dict, how would you feel about adding an optional 'tmin' and 'tmax' entry to that dictionary?
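
A hypothetical form of the proposal (this is not existing API --- the tmin/tmax entries would restrict which samples are tested; thresholds illustrative):

reject = dict(grad=4000e-13, mag=4e-12, eeg=40e-6,
              tmin=-0.1, tmax=1.0)                  # skip the early baseline window
epochs = mne.Epochs(raw, events, event_id, tmin=-0.7, tmax=1.0,
                    baseline=(-0.7, -0.6), reject=reject)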

Failing nosetest

I have a failing nosetest in test_source_space.py:

  File "/home/larsoner/custombuilds/mne-python/mne/tests/test_source_space.py", line 44, in test_write_source_space
    assert_true(s0[name] == s1[name])
AssertionError: False is not true
    'False is not true' = self._formatMessage('False is not true', "%s is not true" % safe_repr(False))
>>  raise self.failureException('False is not true')

A little bit of exploration shows this is due to the "nearest" variable, as adding this little code in test_source_space.py at line 42:

            if not s0[name] == s1[name]:
                print name
                print (s0[name], s1[name])

yields this output just before the error:

nearest
(array([    59,    228,    228, ..., 153929, 154064, 153933], dtype=int32), None)

Am I the only one getting this error? If so, it must be because my version of the "sample" dataset has inter-vertex distances computed up to 7mm. It would be good if we changed the sample dataset to include this source space:

http://faculty.washington.edu/larsoner/sample-oct-6-src.fif

@mluessi, what's the procedure for it?

Failing nosetest

I have a failing nosetest:

Test computation of ECG SSP projectors ... ERROR

======================================================================
ERROR: Test computation of ECG SSP projectors
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/larsoner/custombuilds/mne-python/mne/preprocessing/tests/test_ssp.py", line 30, in test_compute_proj_ecg
    qrs_threshold=0.5)
  File "/home/larsoner/custombuilds/mne-python/mne/utils.py", line 418, in dec
    return function(*args, **kwargs)
  File "/home/larsoner/custombuilds/mne-python/mne/preprocessing/ssp.py", line 257, in compute_proj_ecg
    ecg_l_freq, ecg_h_freq, tstart, qrs_threshold)
  File "/home/larsoner/custombuilds/mne-python/mne/utils.py", line 418, in dec
    return function(*args, **kwargs)
  File "/home/larsoner/custombuilds/mne-python/mne/preprocessing/ssp.py", line 169, in _compute_exg_proj
    n_eeg=n_eeg, n_jobs=n_jobs)
  File "/home/larsoner/custombuilds/mne-python/mne/utils.py", line 418, in dec
    return function(*args, **kwargs)
  File "/home/larsoner/custombuilds/mne-python/mne/proj.py", line 126, in compute_proj_epochs
    desc_prefix = "%-d-%-.3f-%-.3f" % (event_id, epochs.tmin, epochs.tmax)
TypeError: %d format: a number is required, not dict

Anyone else getting this? I think this has to do with the changes made to events. I'll try to debug and submit a PR...
