ohba-analysis / osl
OHBA Software Library - MEG/EEG Analysis Tools
Home Page: https://osl.readthedocs.io/en/latest/
License: BSD 3-Clause "New" or "Revised" License
One of the first steps in coregistration copies the preprocessed fif file into the rhino directory:
https://github.com/OHBA-analysis/osl/blob/main/osl/source_recon/rhino/coreg.py#L219-L241
Rather than copying the whole fif file, wouldn't it be more efficient to just save the Info object?
Not sure whether this is a bug or a feature, but when we run (batch) preprocessing, reports are created for each file in a directory like this (for a raw (maxfiltered) file named sub-001_task-rest_tsss.fif):
report/sub-001_task-rest_tsss
However, if the creation of the report fails for whatever reason and you re-run the report from scratch using osl.report.gen_report_from_fif, it will by default create files in directories named report/sub-001_task-rest_tsss_preproc_raw. I think the options should be consistent, and probably should use the latter naming convention.
In the preprocessing README, we should include a list of the functions that people can use, along with their options.
In the README in the maxfilter folder, all the examples use osl_maxfilter.py, but I believe the file was renamed to maxfilter.py.
mne.preprocessing.annotate_flat isn't available in the latest version of MNE (1.1.0). This causes the following line to error:
https://github.com/OHBA-analysis/oslpy/blob/main/osl/preprocessing/mne_wrappers.py#L180
This also causes some of the tests to fail which contain an "- annotate_flat : {}" line in the config. For now, I've just removed this line from the config in the tests so the tests don't fail (#30), but we should probably fix this function.
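A likely fix is to switch the wrapper to mne.preprocessing.annotate_amplitude, which replaced annotate_flat in MNE 1.0 and covers the same use case via its flat argument. Assuming the osl wrapper follows the MNE function name (an assumption about the wrapper naming, not confirmed here), the config line would become something like:

```yaml
preproc:
  # annotate_flat was removed in MNE 1.x; annotate_amplitude covers the
  # same use case: flat=0 marks channels/segments with no signal variation
  - annotate_amplitude: {flat: 0, bad_percent: 5}
```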
When you're using osl.report.gen_report_from_fif(list_of_files, outdir='/path/to/save/dir'), it will fail if your files do not end in preproc_raw.fif (e.g. raw.fif, or anything else; in my case clean_raw.fif).
This is caused by read_dataset in https://github.com/OHBA-analysis/osl/blob/main/osl/preprocessing/batch.py#L401, where the preproc_raw extension is hardcoded. We should make the extension an option, perhaps with the default raw (but when run from within the batch preprocessing, the default should be preproc_raw).
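A sketch of how the extension could become an option; base_name and the ftype parameter are hypothetical names for illustration, not the actual read_dataset signature:

```python
from pathlib import Path

def base_name(infile, ftype="preproc_raw"):
    """Strip a configurable '_<ftype>.fif' suffix instead of a hardcoded one.

    ftype is a hypothetical parameter: 'preproc_raw' when called from the
    batch preprocessing, anything else (e.g. 'clean_raw') when called directly.
    """
    name = Path(infile).name
    suffix = f"_{ftype}.fif"
    if name.endswith(suffix):
        return name[: -len(suffix)]
    # Fall back to just dropping the .fif extension
    return Path(infile).stem

print(base_name("sub-001_task-rest_preproc_raw.fif"))  # sub-001_task-rest
print(base_name("sub-001_clean_raw.fif", ftype="clean_raw"))  # sub-001
```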
It would be good to have a directory structure that is more consistent with MNE. The proposed structure (for source reconstruction) is:
-> base_directory
---> src [by default the source reconstructed data will be put in to a directory with this name in the base_directory]
-----> sub-001
---------> bem
---------> rhino
-----------> coreg
-----------> surfaces
---------> parc.npy [parcellated time series data]
-----> sub-002
....
-----> sub-003
....
In this issue let's fix the structure of the source reconstruction then add a new directory for sign flipped data in a separate issue.
The order of the arguments for run_proc_chain is:
def run_proc_chain(infile, config, ...):
whereas for run_proc_batch it is:
def run_proc_batch(config, files, ...):
Note that the config and input file(s) are swapped.
We should maybe change the order of the arguments in one of these functions for consistency. This will break existing scripts, but it's probably worth doing now rather than later.
What do you think @AJQuinn @matsvanes
By default we want the output of oslpy to be written to subject specific directories. This is already the case with source reconstruction and maxfiltering. We need to change the default output structure of preprocessing.
The default output structure should be:
outdir
-> report
-> sub-001
---> sub-001_preproc_raw.fif
---> sub-001_preproc_raw.log
-> sub-002
---> ...
Hi,
MNE recommends a highpass filter at 1Hz before doing ICA:
https://mne.tools/stable/auto_tutorials/preprocessing/40_artifact_correction_ica.html#filtering-to-remove-slow-drifts
However, this is not currently present in the code:
https://github.com/OHBA-analysis/oslpy/blob/main/osl/preprocessing/_mne_wrappers.py#L312
My solution would be to check what highpass filtering has been done on the raw object and, if it's less than 1 Hz, automatically apply a highpass filter to a copy of the raw object, as recommended by MNE.
Let me know what you think, I can implement it if you agree.
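A sketch of that check, using a plain dict to stand in for raw.info (needs_ica_highpass is a hypothetical helper name; the filter/fit/apply lines in the comment follow MNE's documented pattern of fitting ICA on a filtered copy):

```python
ICA_HIGHPASS_HZ = 1.0  # MNE's recommended highpass cutoff before ICA

def needs_ica_highpass(info, threshold=ICA_HIGHPASS_HZ):
    """Return True if the existing highpass is below the recommended
    cutoff, so ICA should be fitted on a temporarily filtered copy.

    `info` stands in for raw.info here; raw.info['highpass'] holds the
    highpass cutoff already applied to the data.
    """
    return info.get("highpass", 0.0) < threshold

# In the wrapper this would translate to something like (sketch):
#   if needs_ica_highpass(raw.info):
#       filt_raw = raw.copy().filter(l_freq=1.0, h_freq=None)
#       ica.fit(filt_raw)   # fit on the filtered copy...
#       ica.apply(raw)      # ...but apply the solution to the original data
print(needs_ica_highpass({"highpass": 0.1}))  # True
print(needs_ica_highpass({"highpass": 1.0}))  # False
```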
fsl.utils.filetree is deprecated according to the fslpy changelog. It's now a separate package hosted here: https://pypi.org/project/file-tree/
The new config style (i.e. not starting with method) breaks plot_preproc_flowchart:
config_text= """
meta:
event_codes:
preproc:
- crop: {tmin: 30}
- find_events: {min_duration: 0.005}
"""
osl.preprocessing.batch.plot_preproc_flowchart(config_text)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-10-2b9eecfee5cf> in <module>
----> 1 osl.preprocessing.batch.plot_preproc_flowchart(config_text)
/ohba/pi/mwoolrich/mvanes/software/oslpy/osl/preprocessing/batch.py in plot_preproc_flowchart(config, outname, show)
329 for idx, stage in enumerate(stages):
330
--> 331 b = box if stage['method'] not in ['input', 'output'] else startbox
332 stage = stage.copy()
333 method = stage.pop('method')
KeyError: 'method'
It would be great if the coreg panel displayed a legend.
Separate the batch "coregister" step into separate surfaces, coregister and forward steps to match the underlying functions:
compute_surfaces()
coreg()
forward_model()
Usually when running an experiment, the experimenter will write down some notes, e.g. "eye tracker only working on one eye" or "participant moving a lot". It might be helpful if these could be added to the report on the info tab.
I understand we can't add an unlimited amount of features to the report (or period). But I was wondering whether it would be feasible to have a helper function that can enable the user to add extra things to the report?
What do you think @AJQuinn @cgohil8 (and let me know if I should just shut up)?
At the moment osl_maxfilter only works from the command line. We should enable the user to call osl_maxfilter from a script.
When running osl_maxfilter, most of the files that are created have the extension you'd expect, like sss_autobad_err.log, sss_autobad.log, tsss_headpos.log and _tsss.fif. However, the tsss log files have a double "tsss" extension, e.g. _tsss_tsss.log and _tsss_tsss_err.log.
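One way to avoid the duplicated extension is a helper that only appends a suffix when it isn't already present (a sketch; the actual log-name construction in maxfilter.py may differ):

```python
def add_suffix(stem, suffix):
    """Append '_<suffix>' to a file stem unless it already ends with it.

    Avoids e.g. 'sub-001_tsss' + 'tsss' -> 'sub-001_tsss_tsss'.
    """
    return stem if stem.endswith("_" + suffix) else f"{stem}_{suffix}"

print(add_suffix("sub-001_tsss", "tsss") + ".log")  # sub-001_tsss.log
print(add_suffix("sub-001_autobad", "sss") + ".log")  # sub-001_autobad_sss.log
```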
It would be good to have a plot of the variance of each parcel (as a brain plot) and the correlation between parcels after orthogonalisation in the source reconstruction report.
User interface:
Report:
Bad channels are not annotated in the report
There should be panels which summarise the entire dataset (rather than just subject-specific panels) in both the preprocessing and source reconstruction report.
Steps to reproduce the error:
python 3_manual_ica.py
via a terminal on hbaws46.
Full terminal output:
Loading dataset:
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_preproc_raw.fif
Opening raw data file /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_preproc_raw.fif...
Range : 4750 ... 139749 = 19.000 ... 558.996 secs
Ready.
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_events.npy
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_ica.fif
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_ica.fif ...
Now restoring ICA solution ...
Ready.
Creating RawArray with float64 data, n_channels=55, n_times=135000
Range : 4750 ... 139749 = 19.000 ... 558.996 secs
Ready.
Using matplotlib as 2D backend.
Traceback (most recent call last):
File "/home/cgohil/local/miniconda3/envs/osl/lib/python3.8/site-packages/matplotlib/cbook/__init__.py", line 287, in process
func(*args, **kwargs)
File "/home/cgohil/packages/oslpy/osl/preprocessing/plot_ica.py", line 1164, in _close
tmp = tmp[0]
IndexError: list index out of range
In osl:
- osl.source_recon.rhino.utils._get_vol_info_from_nii sets mri_depth=dims[2] in this function.
- In osl.source_recon.rhino.forward_model.setup_volume_source_space, vol_info is passed to the MNE function mne.source_space._make_volume_source_space here: https://github.com/OHBA-analysis/osl/blob/main/osl/source_recon/rhino/forward_model.py#L375.

Links:
- osl.source_recon.rhino.utils._get_vol_info_from_nii: https://github.com/OHBA-analysis/osl/blob/main/osl/source_recon/rhino/utils.py#L154.
- osl.source_recon.rhino.forward_model.setup_volume_source_space: https://github.com/OHBA-analysis/osl/blob/main/osl/source_recon/rhino/forward_model.py#L228.

In MNE:
- mne.source_space.setup_volume_source_space calls mne.source_space._make_volume_source_space here: https://github.com/mne-tools/mne-python/blob/main/mne/source_space.py#L1746.
- The vol_info passed to _make_volume_source_space is created here using mne._freesurfer._get_mri_info_data, which sets mri_depth=dims[1].

Links:
- mne.source_space.setup_volume_source_space: https://github.com/mne-tools/mne-python/blob/main/mne/source_space.py#L1504.
- mne._freesurfer._get_mri_info_data: https://github.com/mne-tools/mne-python/blob/main/mne/_freesurfer.py#L125.

So the question is: what should be used for the vol_info passed to mne.source_space._make_volume_source_space?

Note:
- The mri argument passed to mne._freesurfer._get_mri_info_data should be a mgh or mgz file according to their documentation: https://github.com/mne-tools/mne-python/blob/main/mne/source_space.py#L1528-L1535. The mgh/mgz file is created using freesurfer.
- An example of a nifti file where the volume dimensions do not have mri_depth=mri_height is any of the structurals in the Wakeman-Henson dataset.
The latest version of [neurokit2](https://github.com/neuropsychology/NeuroKit/releases/tag/v0.2.0) doesn't return a figure when using ecg_plot (neuropsychology/NeuroKit@496ebf7). Therefore, raw_report gets None back from fig = nk.ecg_plot(signals, sampling_rate=raw.info['sfreq']), and the next line (fig.set_size_inches(16, 7)) subsequently raises an error:
https://github.com/OHBA-analysis/oslpy/blob/main/osl/report/raw_report.py#L513
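A defensive fix on the osl side could fall back to matplotlib's plt.gcf() when ecg_plot returns nothing, since recent neurokit2 still draws on the current figure. In this sketch, plot_signals is a stand-in for nk.ecg_plot:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

def plot_signals():
    """Stand-in for nk.ecg_plot: draws on the current figure, returns None."""
    plt.plot([0, 1, 0, -1, 0])

fig = plot_signals()
if fig is None:
    # neurokit2 >= 0.2.0 no longer returns the figure, so grab it ourselves
    fig = plt.gcf()
fig.set_size_inches(16, 7)  # now safe to resize
```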
When trying to load SPM data using the spmio, this will fail if the data contains event values that are strings. The data I'm using was preprocessed in osl (matlab) and contains events with type 'artefact_OSL' and value 'VE', and throws the following error:
Traceback (most recent call last):
File "/Users/matsvanes/venv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-74-6aa0cb1a8070>", line 1, in <module>
D = osl.utils.spmio.SPMMEEG(
File "/Volumes/T5_OHBA/software/oslpy/osl/utils/spmio/spmmeeg.py", line 52, in __init__
self.condition_values = [int(_get_trial_trigger_value(t)) for t in self.trials]
File "/Volumes/T5_OHBA/software/oslpy/osl/utils/spmio/spmmeeg.py", line 52, in <listcomp>
self.condition_values = [int(_get_trial_trigger_value(t)) for t in self.trials]
ValueError: invalid literal for int() with base 10: 'VE'
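A tolerant fix in spmio would be to fall back to the original value when it can't be parsed as an integer. A sketch (parse_trigger_value is a hypothetical helper; in the real code it would wrap _get_trial_trigger_value):

```python
def parse_trigger_value(value):
    """Return the trigger value as an int when possible, otherwise keep
    the original (e.g. string) value rather than raising ValueError."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return value

values = [1, "2", "VE"]  # mix of numeric and string event values
print([parse_trigger_value(v) for v in values])  # [1, 2, 'VE']
```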
MNE's plot_psd uses a default window of n_fft = 2048, i.e. specified in samples. This means that depending on how much you resample your data, the spectral resolution can be higher or lower. I think it'd be better if we used a default of a number of seconds, e.g. 2 seconds, i.e. raw.plot_psd(n_fft=int(raw.info['sfreq']*2)).
As a side note, we could decide to adopt @AJQuinn's square root of f plot, which I quite like.
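The change is a one-liner; with a 2-second window the spectral resolution is fixed at 0.5 Hz regardless of the sampling rate (n_fft_from_seconds is a hypothetical helper name for illustration):

```python
def n_fft_from_seconds(sfreq, window_seconds=2.0):
    """Convert a window length in seconds to an n_fft in samples, so the
    spectral resolution (1 / window_seconds) is independent of sfreq."""
    return int(sfreq * window_seconds)

for sfreq in (250.0, 1000.0):
    print(sfreq, n_fft_from_seconds(sfreq))  # 500 and 2000 samples
# Usage: raw.plot_psd(n_fft=n_fft_from_seconds(raw.info['sfreq']))
```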
For datasets that include multiple data modalities (e.g. NTAD data with simultaneous M/EEG recordings), users might want to apply ICA denoising to each data type. For example:
- ica_raw: {n_components: 60, picks: meg}
- ica_autoreject: {picks: meg, apply: true}
- ica_raw: {n_components: 30, picks: eeg}
- ica_autoreject: {picks: eeg, apply: true}
However, using ICA preprocessing steps sequentially led to the following problems:
- The *_ica.fif MNE object file that was generated first gets overwritten by the latter one.
- gen_html_data() only accepts one ICA object. This forces our report to only summarise and plot the most recent ICA step.
- Applying plot_bad_ica() to ICAs of EEG leads to an error at L838-842, because raw objects contain all of mag, grad and eeg in NTAD data, but we only detected ICA components from EEG channels.

This issue is to help us keep track of documentation that should be added somewhere.
report.gen_report fails if there is only one channel type. Traceback below.
Similar problem in plot_channel_dists()
Traceback (most recent call last):
File "", line 171, in <cell line: 167>
report.gen_report(preproc_fif_files, outdir=preproc_dir)
File "/Users/woolrich/oslpy/osl/report/raw_report.py", line 182, in gen_report
data = gen_html_data(raw, ica, outdir, level)
File "/Users/woolrich/oslpy/osl/report/raw_report.py", line 112, in gen_html_data
data['plt_temporalsumsq'] = plot_channel_time_series(raw, savebase)
File "/Users/woolrich/oslpy/osl/report/raw_report.py", line 279, in plot_channel_time_series
ax[row].plot(t, ss)
TypeError: 'AxesSubplot' object is not subscriptable
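The likely cause is that plt.subplots returns a bare Axes (not an array) when there is only one row, so ax[row] fails. A sketch of the usual fix, passing squeeze=False so ax is always indexable:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

nrows = 1  # e.g. only one channel type in the data
fig, ax = plt.subplots(nrows, 1, squeeze=False)  # ax is always a 2D array now
for row in range(nrows):
    ax[row, 0].plot([0, 1, 2], [0, 1, 0])  # indexing works even for nrows=1
```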
If there are rogue dirs in the preprocessing report directory then this line:
Line 260 in 1b60ab3
means that the ensuing report code will attempt to generate a report using that dir.
When running maxfilter.py the following error occurs:
Traceback (most recent call last):
  File "indir/maxfilter/maxfilter.py", line 14, in <module>
    from ..utils import validate_outdir, add_subdir
ImportError: attempted relative import with no known parent package
Also, there are no validate_outdir and add_subdir functions in utils.
Thanks!
Maxfilter sometimes zeros data. This should always be picked up by bad segment detection, which is currently not the case. So we should explicitly look for zeros in the data and annotate those segments as bad.
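A sketch of what the explicit check could look like, assuming the data is available as a (n_channels, n_times) numpy array; the minimum run length and the annotation label are assumptions, not osl's actual values:

```python
import numpy as np

def find_zero_segments(data, min_samples=10):
    """Return (onset, offset) sample indices of segments where every
    channel is exactly zero for at least min_samples samples."""
    all_zero = np.all(data == 0, axis=0)  # True where all channels are 0
    # Locate the rising/falling edges of runs of True
    edges = np.diff(all_zero.astype(int), prepend=0, append=0)
    onsets = np.where(edges == 1)[0]
    offsets = np.where(edges == -1)[0]
    return [(int(on), int(off)) for on, off in zip(onsets, offsets)
            if off - on >= min_samples]

data = np.ones((3, 100))
data[:, 40:60] = 0.0  # maxfilter-style zeroed segment
print(find_zero_segments(data))  # [(40, 60)]
```

The returned sample ranges could then be turned into annotations, e.g. mne.Annotations(onset=on / sfreq, duration=(off - on) / sfreq, description='bad_segment') (the description label is an assumption).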
The html file embedding for coregistration doesn't load when opening a source reconstruction report on hbaws42 using firefox.
Currently, the report function creates an html with only the files that were given as input files to the function call. That means that if you've done a big batch process before, for which you have a collective report, and you get new data, you would need to recreate all previous plots if you want to "update" the collective report.
It would be good if we could add an overwrite option to the report that, if False, will first check whether the plots for this specific data already exist and, if so, skip the creation of those plots (saving computation and time), and in the end generate the report with all existing and new plots in it.
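The skip logic could be as simple as checking for existing output files before plotting. A sketch (should_make_plots and the plot filename are hypothetical; the real report code will differ):

```python
import os
import tempfile

def should_make_plots(subject_dir, expected=("temporalsumsq.png",), overwrite=False):
    """Skip plot generation when overwrite=False and every expected plot
    file already exists for this dataset (filenames here are hypothetical)."""
    if overwrite:
        return True
    return not all(os.path.exists(os.path.join(subject_dir, f)) for f in expected)

outdir = tempfile.mkdtemp()
print(should_make_plots(outdir))                  # True: nothing exists yet
open(os.path.join(outdir, "temporalsumsq.png"), "w").close()
print(should_make_plots(outdir))                  # False: reuse existing plots
print(should_make_plots(outdir, overwrite=True))  # True: forced regeneration
```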
When using 'adjacent' as the output directory for osl_maxfilter (i.e. files are supposed to be stored in the same directory as the input file), I get an error about the log file not being found (see below). Probably something is going wrong in the initialization of the log file(s). FYI I'm running the following in an active osl environment on the latest version of OSL.
The code I'm running:
osl_maxfilter tmp.txt 'adjacent' --maxpath /neuro/bin/util/maxfilter --scanner Neo --tsss --mode multistage --headpos --movecomp
The message printed to screen:
['tmp.txt', 'adjacent', '--maxpath', '/neuro/bin/util/maxfilter', '--scanner', 'Neo', '--tsss', '--mode', 'multistage', '--headpos', '--movecomp']
Namespace(files='tmp.txt', outdir='adjacent', maxpath='/neuro/bin/util/maxfilter', mode='multistage', headpos=True, movecomp=True, movecompinter=False, autobad=False, autobad_dur=None, bad=None, badlimit=None, trans=None, origin=None, frame=None, force=False, tsss=True, st=10, corr=0.98, inorder=None, outorder=None, hpie=None, hpig=None, scanner='Neo', ctc=None, cal=None, overwrite=False, dryrun=False)
OHBA-Maxfilter
Processing 1 files
Outputs will be saved alongside inputs
Processing run 1/1 : /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET.fif
/neuro/bin/util/maxfilter -f /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET.fif -o /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss.fif -autobad on -cal /net/aton/meg_pool/data/TriuxNeo/system/sss/sss_cal.dat -ctc /net/aton/meg_pool/data/TriuxNeo/system/ctc/ct_sparse.fif -v > >(tee -a /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad.log) 2> >(tee -a /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad_err.log >&2)
tee: /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad_err.log: Permission denied
tee: /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad.log: Permission denied
**************************************************
*** This is MaxFilter(TM) version 2.2.20 ***
*** Compilation date: Dec 3 2019 13:06:57 ***
*** Please report problems to [email protected] ***
**************************************************
No processing records found.
SSS origin (head) 0.0, 0.0, 40.0 mm.
Time scale: #f = 12.000 s, #l = 836.000 s
Applying the fine-calibration in /net/aton/meg_pool/data/TriuxNeo/system/sss/sss_cal.dat.
Using the cross-talk matrix in /net/aton/meg_pool/data/TriuxNeo/system/ctc/ct_sparse.fif.
Opened FIFF file /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET.fif (824 data buffers).
EXIT 4: The output file already exists. Output file /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss.fif was not written!
Failed to create file /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss.fif for writing!
Traceback (most recent call last):
File "/home/mvanes/anaconda3/envs/osl/bin/osl_maxfilter", line 8, in <module>
sys.exit(main())
File "/home/mvanes/anaconda3/envs/osl/lib/python3.9/site-packages/osl/maxfilter/maxfilter.py", line 615, in main
run_multistage_maxfilter(infifs[idx], outbase, vars(args))
File "/home/mvanes/anaconda3/envs/osl/lib/python3.9/site-packages/osl/maxfilter/maxfilter.py", line 332, in run_multistage_maxfilter
with open(outlog, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad.log'
Create a wrapper for just doing beamforming that can be called in the config when doing source reconstruction. (Currently there is only one wrapper, for beamform_and_parcellate.) This option should be usable as:
config = """
source_recon:
- beamform: {...}
"""
When a file is already present in batch processing and overwrite=False, osl throws the following error:
File "/ohba/pi/mwoolrich/mvanes/software/oslpy/osl/preprocessing/batch.py", line 551, in run_proc_chain
osl_logger.critical('Skipping preprocessing - existing output detected')
AttributeError: module 'osl.utils.logger' has no attribute 'critical'
osl_logger should be replaced with logger.
The report shows properties of all "bad" ICA components, but for Elekta data (which has two sensor types), only the topography for the magnetometers is shown. Add the topo for the gradiometers.
I have a few info files that could be useful to add to oslpy. These are descriptions of Elekta channels that I believe can't be found in the raw.info structure or anything like that. They are:
- division of the sensor array into left/right, frontal/temporal/parietal/occipital and anterior/posterior lists of channel names
- division of gradiometers into longitude and latitude
What is the best place to put these, and in which format (.txt?)?
The report has an ICA tab, but no plots are shown.
In source recon reports, on the sign flipping tab, the sign flip plots are not displayed when switching between subjects using the arrow keys.
This issue is used to keep track of typos or improvements to the documentation to be made.
Currently when we use the batch processing we specify a single output directory. Both the preprocessed fif files and log files are written to this directory. E.g. if I list the contents of my output directory, I get:
osl_batch.log
sub-004_task-restEO_preproc_raw.fif
sub-004_task-restEO_preproc_raw.log
sub-005_task-restEO_preproc_raw.fif
sub-005_task-restEO_preproc_raw.log
With one big list I find it hard to see which files failed and to pick out error logs. I propose the output directory by default contains a /logs and a /report subdirectory. So if I list the output directory I see:
/logs
/report
sub-004_task-restEO_preproc_raw.fif
sub-005_task-restEO_preproc_raw.fif
Here, because no subject numbers should be repeated, I find it easy to scan across the subject numbers and would be able to pick out missing subjects. The /logs directory contains all the .log files:
osl_batch.log
sub-004_task-restEO_preproc_raw.log
sub-005_task-restEO_preproc_raw.log
and the /report directory contains the files needed to generate a report and the html file. Here, I find it much easier to pick out error log files.
What do you think?
I'm about to add a directory to hold the files needed for the report, so if we don't change the logs directory then the default output would be:
/reports
osl_batch.log
sub-004_task-restEO_preproc_raw.fif
sub-004_task-restEO_preproc_raw.log
sub-005_task-restEO_preproc_raw.fif
sub-005_task-restEO_preproc_raw.log
An error is raised if we cannot get the orientation of a sMRI file here: https://github.com/OHBA-analysis/osl/blob/main/osl/source_recon/rhino/surfaces.py#L215. The message asks the user to check:
fslorient -orient {}
This should be:
fslorient -getorient {}
Hi,
I thought it could be useful to add the EOG timeseries in the timeseries panel of the report.
This would allow direct comparison with the MEG timeseries.
The source reconstruction code uses system calls to run FSL. E.g. source_recon.rhino.utils._get_orient calls
fslorient -getorient {}
to get the orientation of a structural MRI file. However, if you have spaces in the path to the sMRI file, this will fail. E.g. this raises an error:
fslorient -getorient /Users/cgohil/OneDrive - Nexus365/datasets/wakeman_henson/src/sub-02/rhino/surfaces/smri.nii.gz
This can be fixed by wrapping the path in the system call in quotes:
fslorient -getorient '{}'
or by escaping the spaces with \.
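In Python, the robust version of the quoting fix is shlex.quote, which handles spaces and any other shell metacharacters (the path below is shortened for illustration; fslorient_cmd is a hypothetical helper):

```python
import shlex

def fslorient_cmd(smri_path):
    """Build the fslorient call with the path safely quoted for the shell."""
    return f"fslorient -getorient {shlex.quote(smri_path)}"

print(fslorient_cmd("/Users/cgohil/OneDrive - Nexus365/smri.nii.gz"))
# fslorient -getorient '/Users/cgohil/OneDrive - Nexus365/smri.nii.gz'
```

An even safer option is to avoid the shell entirely and pass an argument list to subprocess.run, which needs no quoting at all.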
In recon report, when coreg and beamform_and_parcellate are run using run_src_batch, followed by running fix_sign_ambiguity in a separate call to run_src_batch (which typically needs to be done separately so that the template subject can be chosen), then the sign flipping erroneously appears before coreg and beamform_and_parcellate in the workflow plot found on the config tab of recon/report/summary_report.html.
Example code:
config = """
source_recon:
- coregister:
include_nose: false
use_nose: false
use_headshape: true
model: Single Layer
already_coregistered: true
- beamform_and_parcellate:
freq_range: [4, 40]
chantypes: mag
rank: {mag: 100}
parcellation_file: fmri_d100_parcellation_with_PCC_reduced_2mm_ss5mm_ds8mm.nii.gz
method: spatial_basis
orthogonalisation: symmetric
"""
source_recon.run_src_batch(
config,
src_dir=recon_dir,
subjects=subjects,
preproc_files=preproc_fif_files,
smri_files=smri_files,
)
# Find a good template subject to align other subjects to
template = source_recon.find_template_subject(
recon_dir, subjects, n_embeddings=15, standardize=True
)
config = f"""
source_recon:
- fix_sign_ambiguity:
template: {template}
n_embeddings: 15
standardize: True
n_init: 3
n_iter: 2500
max_flips: 20
"""
# Do the sign flipping
source_recon.run_src_batch(config,
recon_dir,
subjects,
)