
YASA (Yet Another Spindle Algorithm): a Python package to analyze polysomnographic sleep recordings.

Home Page: https://raphaelvallat.com/yasa/

License: BSD 3-Clause "New" or "Revised" License

sleep sleep-analysis sleep-spindles signal-processing eeg-analysis stft deep-sleep peak-detection numba spectral-analysis

yasa's Introduction

YASA (Yet Another Spindle Algorithm) is a command-line sleep analysis toolbox in Python. The main functions of YASA are:

  • Automatic sleep staging of polysomnography data (see preprint article).
  • Event detection: sleep spindles, slow-waves and rapid eye movements, on single or multi-channel EEG data.
  • Artefact rejection, on single or multi-channel EEG data.
  • Spectral analyses: bandpower, phase-amplitude coupling, 1/f slope, and more!
  • Hypnogram analysis: sleep statistics and stage transitions.

For more details, try the quickstart or read the FAQ.


Installation

To install YASA, simply open a terminal or Anaconda command prompt and enter:

pip install --upgrade yasa

Alternatively, YASA can be installed with conda:

conda config --add channels conda-forge
conda config --set channel_priority strict
conda install yasa

What are the prerequisites for using YASA?

To use YASA, all you need is:

  • Some basic knowledge of Python, especially the NumPy, Pandas and MNE packages.
  • A Python editor: YASA works best with Jupyter Lab, a web-based interactive user interface.
  • Some sleep EEG data and optionally a sleep staging file (hypnogram).

I have sleep data in European Data Format (.edf), how do I load the data in Python?

If you have sleep EEG data in standard formats (e.g. EDF or BrainVision), you can use the MNE package to load and preprocess your data in Python. A simple preprocessing pipeline using MNE is shown below:

import mne
# Load the EDF file
raw = mne.io.read_raw_edf('MYEDFFILE.edf', preload=True)
# Downsample the data to 100 Hz
raw.resample(100)
# Apply a bandpass filter from 0.1 to 40 Hz
raw.filter(0.1, 40)
# Select a subset of EEG channels
raw.pick(['C4-A1', 'C3-A2'])

How do I get started with YASA?

If you want to dive right in, you can simply go to the main documentation and try to apply YASA's functions on your own EEG data. However, for most users, we strongly recommend that you first try running the example Jupyter notebooks to get a sense of how YASA works and what it can do! The notebooks also come with example datasets, so they should work right out of the box as long as you've installed YASA first. The notebooks and datasets can be found on GitHub (make sure that you download the whole notebooks/ folder). A short description of all notebooks is provided below:

Automatic sleep staging

Event detection

Spectral analysis

  • bandpower: calculate spectral band power, optionally averaged across channels and sleep stages.
  • IRASA: separate the aperiodic (= fractal = 1/f) components of the EEG power spectrum using the IRASA method.
  • spectrogram: plot a multi-taper full-night spectrogram on single-channel EEG data with the hypnogram on top.
  • nonlinear_features: calculate non-linear EEG features on 30-seconds epochs and perform a naive sleep stage classification.
  • SO-sigma_coupling: slow-oscillations/spindles phase-amplitude coupling and data-driven comodulogram.
  • EEG-HRV coupling: overnight coupling between EEG bandpower and heart rate variability.
  • topoplot: plot a topographic map of the data across channels.

Gallery

Below are some plots demonstrating the functionalities of YASA. To reproduce these, check out the tutorial (Jupyter notebooks).

The top plot shows an overlay of the detected spindles on real EEG data. The middle left panel shows a time-frequency representation of the whole-night recording (spectrogram), plotted with the hypnogram (sleep stages) on top. The middle right panel shows the sleep stage probability transition matrix, calculated across the entire night. The bottom row shows, from left to right: a topographic plot, the average template of all detected slow-waves across the entire night stratified by channels, and a phase-amplitude coupling comodulogram.


Development

YASA was created and is maintained by Raphael Vallat, a former postdoctoral researcher in Matthew Walker's lab at UC Berkeley. Contributions are more than welcome so feel free to contact me, open an issue or submit a pull request!

To see the code or report a bug, please visit the GitHub repository.

Note that this program is provided with NO WARRANTY OF ANY KIND.

Citation

To cite YASA, please use the eLife publication:


yasa's Issues

Entropy analysis on all channels

Hello,

I'm trying to use your entropy analysis algorithm on 2-minute resting-state EEG with 64 channels. Is there a way to create a df_feat that includes all the channels in its columns or rows?

This is the code I have thus far.

#P1

# Load data as an EEGLAB file

raw = mne.io.read_raw_eeglab('E:/study/EEGLAB/restingstate/1_restingstate.set', montage=None, eog=(), preload=True, uint16_codec=None, verbose=None)

# Load data

data = raw._data * 1e6

sf = raw.info['sfreq']
chan = raw.ch_names
#ch_names = chan
#times = np.arange(data.size) / sf
times = np.arange(data.shape[1]) / sf
print(data.shape, chan, times)

data = data[0, :]  # 0 selects the first electrode; change the number to select another electrode (see the variable 'chan' for the channel order)
print(data.shape, np.round(data[0:5], 3))

# Convert the EEG data to 120-sec data

times, data_win = yasa.sliding_window(data, sf, window=120)

# Convert times to minutes

times /= 60

data_win.shape

from numpy import apply_along_axis as apply

df_feat = {
    # Entropy
    'perm_entropy': apply(ent.perm_entropy, axis=1, arr=data_win, normalize=True),
    'svd_entropy': apply(ent.svd_entropy, 1, data_win, normalize=True),
    'spec_entropy': apply(ent.spectral_entropy, 1, data_win, sf=sf, nperseg=data_win.shape[1], normalize=True),
    'sample_entropy': apply(ent.sample_entropy, 1, data_win),
    # Fractal dimension
    'dfa': apply(ent.detrended_fluctuation, 1, data_win),
    'petrosian': apply(ent.petrosian_fd, 1, data_win),
    'katz': apply(ent.katz_fd, 1, data_win),
    'higuchi': apply(ent.higuchi_fd, 1, data_win),
}

df_feat = pd.DataFrame(df_feat)
df_feat.head()

def lziv(x):
    """Binarize the EEG signal and calculate the Lempel-Ziv complexity."""
    return ent.lziv_complexity(x > x.mean(), normalize=True)

df_feat['lziv'] = apply(lziv, 1, data_win)  # Slow

Any help will be greatly appreciated!
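One straightforward approach is to loop over channels, apply each feature function per channel, and stack the results into a long-format DataFrame with one row per channel. A minimal sketch, with synthetic data in place of the real recording and np.std standing in for any antropy feature function:

```python
import numpy as np
import pandas as pd

# Hypothetical shapes: 64 channels, 2 minutes at 250 Hz
rng = np.random.default_rng(42)
data = rng.standard_normal((64, 2 * 60 * 250))
chan = [f"ch{i}" for i in range(64)]

# Apply a per-signal feature function channel by channel and stack the
# results into a DataFrame (one row per channel). np.std is a stand-in
# for e.g. ent.perm_entropy here.
rows = [{"chan": c, "std": np.std(data[i])} for i, c in enumerate(chan)]
df_feat = pd.DataFrame(rows).set_index("chan")
print(df_feat.shape)  # (64, 1)
```

Extra feature columns can be added to each row dict, or the 120-s windowing applied inside the loop to get one row per channel and window.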

Error on import: No module named 'entropy'

Thank you for providing such a useful set of tools to the community! I was keen to try out YASA, and I have just installed YASA via the pip installer as documented in README.md. Unfortunately, on 'import yasa' I get the following error:

File "[...]\yasa\staging.py", line 8, in
import entropy as ent

ModuleNotFoundError: No module named 'entropy'

I see you have recent commits changing 'entropy' to 'antropy', so I guess that code update may only be partially reflected in the pip install?

Bad channel detection

Nice job and very honest comparison to other models in the article.
Do you plan to implement some bad channel detection ?
Improperly connected or highly noisy leads are currently not identified by the artifact detection (the same holds for pyprep and MNE).

Feed MNE epochs

Great work from the author.

I understand that spindles_detect accepts MNE Raw data, in which case I assume the signal is continuous.

However, I wonder whether YASA has the built-in capability to process MNE Epochs data?

issue with psd

My guess is that where_NREM is messing things up, but I don't understand why... The hypnogram seems totally fine; see the output below. Thank you for your help in the matter.

import mne
import yasa
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style='white', font_scale=1.2)

# Load data as an EDF Raw file

raw = mne.io.read_raw_edf('EEG_LR_RS.edf', montage=None, eog=None, misc=None, stim_channel='auto', exclude=(), preload=True, verbose=None)

# Keep only the EEG channels

raw.pick_types(eeg=True)

# Apply a bandpass filter between 0.5 - 45 Hz

raw.filter(0.5, 45)

# Extract the data and convert from V to uV

data = raw._data * 1e6
sf = raw.info['sfreq']
chan = raw.ch_names

# Let's have a look at the data

print('Chan =', chan)
print('Sampling frequency =', sf, 'Hz')
print('Data shape =', data.shape)

Extracting EDF parameters from EEG_LR_RS.edf
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 23039 = 0.000 ... 89.996 secs...
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 0.5 - 45 Hz

FIR filter parameters

Designing a one-pass, zero-phase, non-causal bandpass filter:

  • Windowed time-domain design (firwin) method
  • Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
  • Lower passband edge: 0.50
  • Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 0.25 Hz)
  • Upper passband edge: 45.00 Hz
  • Upper transition bandwidth: 11.25 Hz (-6 dB cutoff frequency: 50.62 Hz)
  • Filter length: 1691 samples (6.605 sec)

Chan = ['L', 'R']
Sampling frequency = 256.0 Hz
Data shape = (2, 23040)

# Load the hypnogram

hypno = np.loadtxt('HYPNO_LR_WHOLE_LETTER.CSV', dtype=str)
hypno

array(['W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W',
'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W',
'W', 'W', 'W', 'W', 'W', 'W', 'W', 'N1', 'N1', 'N1', 'N1', 'N1',
'N1', 'N1', 'N1', 'N1', 'N1', 'N1', 'N1', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N2', 'N2', 'N2', 'W', 'N2', 'N2', 'N2', 'W', 'W', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'R', 'N1', 'N1',
'N1', 'N1', 'N1', 'N1', 'N1', 'R', 'R', 'R', 'R', 'R', 'R', 'R',
'R', 'R', 'R', 'W', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R',
'R', 'R', 'R', 'R', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'W', 'W', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3', 'N3',
'N3', 'N3', 'N2', 'N2', 'N2', 'R', 'R', 'R', 'R', 'W', 'W', 'R',
'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'W', 'W', 'R', 'R', 'R',
'R', 'R', 'R', 'R', 'W', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R',
'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R',
'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'W', 'N1', 'N1', 'N1',
'N1', 'N1', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'W', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'W', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'W', 'N2', 'N2', 'W', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'R', 'R', 'R', 'W', 'W', 'W', 'W', 'W', 'W', 'W',
'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W',
'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W',
'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W', 'W',
'W', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'W', 'W', 'N1', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'W', 'W', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2',
'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'N2', 'R',
'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'R', 'N1'], dtype='<U2')

# Convert hypnogram to numerical values

hypno_int = yasa.hypno_str_to_int(hypno)
hypno_int

array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 2, 2,
2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 0, 2, 2,
2, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 4, 1, 1, 1, 1, 1, 1, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 0, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
2, 2, 2, 4, 4, 4, 4, 0, 0, 4, 4, 4, 4, 4, 4, 4, 4, 4, 0, 0, 4, 4,
4, 4, 4, 4, 4, 0, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 0, 1, 1, 1, 1, 1, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2,
2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 4,
4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 1], dtype=int64)

# Upsample hypnogram to fit frequency of data

hypno_up = yasa.hypno_upsample_to_data(hypno_int, sf_hypno=(1/30), data=data, sf_data=sf)
print(hypno_up.size == data.shape[1]) # Does the hypnogram have the same number of samples as data?
print(hypno_up.size, 'samples:', hypno_up)

True
23040 samples: [0 0 0 ... 0 0 0]

where_NREM = np.isin(hypno_up, [2, 3]) # True if sample is in N2 / N3, False otherwise
data_NREM = data[:, where_NREM]
print(data_NREM.shape)

(2, 0) <--- I'm pretty sure this shouldn't be zero

from scipy.signal import welch

win = int(4 * sf) # Window size is set to 4 seconds
freqs, psd = welch(data_NREM, sf, nperseg=win, average='median') # Works with single or multi-channel data

print(freqs.shape, psd.shape) # psd has shape (n_channels, n_frequencies)

# Plot

plt.plot(freqs, psd[1], 'k', lw=2)
plt.fill_between(freqs, psd[1], cmap='Spectral')
plt.xlim(0, 50)
plt.yscale('log')
sns.despine()
plt.title(chan[1])
plt.xlabel('Frequency [Hz]')
plt.ylabel('PSD log($uV^2$/Hz)');

(2, 0) (2, 0)

ValueError Traceback (most recent call last)
in
7
8 # Plot
----> 9 plt.plot(freqs, psd[1], 'k', lw=2)
10 plt.fill_between(freqs, psd[1], cmap='Spectral')
11 plt.xlim(0, 50)

~\Anaconda3\lib\site-packages\matplotlib\pyplot.py in plot(scalex, scaley, data, *args, **kwargs)
2793 return gca().plot(
2794 *args, scalex=scalex, scaley=scaley, **({"data": data} if data
-> 2795 is not None else {}), **kwargs)
2796
2797

~\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in plot(self, scalex, scaley, data, *args, **kwargs)
1664 """
1665 kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D._alias_map)
-> 1666 lines = [*self._get_lines(*args, data=data, **kwargs)]
1667 for line in lines:
1668 self.add_line(line)

~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in __call__(self, *args, **kwargs)
223 this += args[0],
224 args = args[1:]
--> 225 yield from self._plot_args(this, kwargs)
226
227 def get_next_color(self):

~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs)
389 x, y = index_of(tup[-1])
390
--> 391 x, y = self._xy_from_xy(x, y)
392
393 if self.command == 'plot':

~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _xy_from_xy(self, x, y)
268 if x.shape[0] != y.shape[0]:
269 raise ValueError("x and y must have same first dimension, but "
--> 270 "have shapes {} and {}".format(x.shape, y.shape))
271 if x.ndim > 2 or y.ndim > 2:
272 raise ValueError("x and y can be no greater than 2-D, but have "

ValueError: x and y must have same first dimension, but have shapes (2, 0) and (0,)
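As a side note, the np.isin masking pattern itself behaves as expected on synthetic data (minimal sketch below), so the empty result likely reflects the hypnogram content: the MNE log above shows the recording is only ~90 s long (0.000 ... 89.996 secs), and yasa.hypno_upsample_to_data presumably crops the hypnogram to match the data, leaving only the initial Wake epochs and hence no N2/N3 samples.

```python
import numpy as np

# A fake upsampled hypnogram (8 samples) and matching 2-channel data
hypno_up = np.array([0, 0, 1, 2, 2, 3, 4, 2])
data = np.arange(16).reshape(2, 8)

where_NREM = np.isin(hypno_up, [2, 3])  # True for N2/N3 samples
data_NREM = data[:, where_NREM]
print(data_NREM.shape)  # (2, 4)
```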

Negative values in PSD lead to negative relative bandpower

Using the yasa.bandpower_from_psd function with a power spectrum that contains negative values leads to negative relative bandpowers, which is wrong. This can occur, for example, when removing the 1/f component of the PSD with the yasa.irasa function, as illustrated below:

image

Useful resources:

Error with welch code

Hello,

Running your example gives the following error

from scipy.signal import welch

win = int(4 * sf) # Window size is set to 4 seconds
freqs, psd = welch(data_NREM, sf, nperseg=win, average='median') # Works with single or multi-channel data

print(freqs.shape, psd.shape) # psd has shape (n_channels, n_frequencies)

# Plot

plt.plot(freqs, psd[1], 'k', lw=2)
plt.fill_between(freqs, psd[1], cmap='Spectral')
plt.xlim(0, 50)
plt.yscale('log')
sns.despine()
plt.title(chan[1])
plt.xlabel('Frequency [Hz]')
plt.ylabel('PSD log($uV^2$/Hz)');


TypeError Traceback (most recent call last)
in
2
3 win = int(4 * sf) # Window size is set to 4 seconds
----> 4 freqs, psd = welch(data_NREM, sf, nperseg=win, average='median') # Works with single or multi-channel data
5
6 print(freqs.shape, psd.shape) # psd has shape (n_channels, n_frequencies)

TypeError: welch() got an unexpected keyword argument 'average'

Please advise. Thank you for your help in the matter.
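To the best of my knowledge, the average keyword was added to scipy.signal.welch in SciPy 1.2, so older SciPy versions raise this TypeError; upgrading SciPy should fix it. A hedged sketch of a version-tolerant call, using synthetic data in place of data_NREM:

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for data_NREM: 2 channels, 60 s at 256 Hz
sf = 256.0
rng = np.random.default_rng(0)
data_NREM = rng.standard_normal((2, int(sf * 60)))

win = int(4 * sf)  # 4-second windows
try:
    # 'average' was added to scipy.signal.welch in SciPy 1.2
    freqs, psd = welch(data_NREM, sf, nperseg=win, average='median')
except TypeError:
    # Older SciPy: fall back to the default mean averaging
    freqs, psd = welch(data_NREM, sf, nperseg=win)

print(freqs.shape, psd.shape)  # psd has shape (n_channels, n_frequencies)
```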

Entropy value outputs are not per sleep stage but per epoch

Hello Raphael,

I'm trying to output entropy values per sleep stage but it is outputting them per epoch. I've not had this issue with other analyses of YASA. I'm not sure what I am doing wrong. Any help would be greatly appreciated. Below is the code I am using:

import yasa
import mne
import numpy as np
import pandas as pd
import entropy as ent
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1.2)


#P1

# Load data as a eeglab file
raw = mne.io.read_raw_eeglab('D:/test/1_EEG_LR_THREEHRS.set', eog=(), preload=True, uint16_codec=None, verbose=None)
# Load data

data = raw._data * 1e6

sf = raw.info['sfreq']
chan = raw.ch_names
#ch_names = chan
#times = np.arange(data.size) / sf
times = np.arange(data.shape[1]) / sf
print(data.shape, chan, times)

data = data[0, :]
#print(data.shape, np.round(data[0:5], 3))


hypno = np.loadtxt('D:/test/HYPNO_LR_THREEHRS_LETTER.txt', dtype=str)
hypno


hypno_int = yasa.hypno_str_to_int(hypno)
hypno_int


hypno_up = yasa.hypno_upsample_to_data(hypno=hypno_int, sf_hypno=(1/30), data=data, sf_data=sf)
#print(np.unique(hypno_up))
#print(hypno_up.size == data.shape[1])  # Does the hypnogram have the same number of samples as data?
#print(hypno_up.size, 'samples:', hypno_up)


# Convert the EEG data to 30-sec data
times, data_win = yasa.sliding_window(data, sf, window=30)

# Convert times to minutes
times /= 60

data_win.shape


from numpy import apply_along_axis as apply

df_feat = {
    # Entropy
    'perm_entropy': apply(ent.perm_entropy, axis=1, arr=data_win, normalize=True),
    'svd_entropy': apply(ent.svd_entropy, 1, data_win, normalize=True),
    'spec_entropy': apply(ent.spectral_entropy, 1, data_win, sf=sf, nperseg=data_win.shape[1], normalize=True),
    'sample_entropy': apply(ent.sample_entropy, 1, data_win),
    # Fractal dimension
    'dfa': apply(ent.detrended_fluctuation, 1, data_win),
    'petrosian': apply(ent.petrosian_fd, 1, data_win),
    'katz': apply(ent.katz_fd, 1, data_win),
    'higuchi': apply(ent.higuchi_fd, 1, data_win),
}

df_feat = pd.DataFrame(df_feat)
df_feat.head()


def lziv(x):
    """Binarize the EEG signal and calculate the Lempel-Ziv complexity.
    """
    return ent.lziv_complexity(x > x.mean(), normalize=True)

df_feat['lziv'] = apply(lziv, 1, data_win)  # Slow
df_feat.to_csv('D:/test/P1_nonlinear.csv')

ZeroDivisionError at 01_spindles_detection.ipynb

I download code and run 01_spindles_detection.ipynb , then I got an error:

ZeroDivisionError                         Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_28896/14777483.py in <module>
      1 # Apply the detection using yasa.spindles_detect
----> 2 sp = yasa.spindles_detect(data, sf)
      3 
      4 # Display the results using .summary()
      5 sp.summary()

S:\Anaconda\envs\yasa_env\lib\site-packages\yasa\detection.py in spindles_detect(data, sf, ch_names, hypno, include, freq_sp, freq_broad, duration, min_distance, thresh, multi_only, remove_outliers, verbose)
    675             _, mcorr = moving_transform(x=data_sigma[i, :], y=data_broad[i, :],
    676                                         sf=sf, window=.3, step=.1,
--> 677                                         method='corr', interp=True)
    678         if do_rms:
    679             _, mrms = moving_transform(x=data_sigma[i, :], sf=sf,

S:\Anaconda\envs\yasa_env\lib\site-packages\yasa\others.py in moving_transform(x, y, sf, window, step, method, interp)
    192     if method in ['covar', 'corr']:
    193         for i in range(idx.size):
--> 194             out[i] = func(x[beg[i]:end[i]], y[beg[i]:end[i]])
    195     else:
    196         for i in range(idx.size):

S:\Anaconda\envs\yasa_env\lib\site-packages\yasa\others.py in func(x, y)
    183     elif method == 'corr':
    184         def func(x, y):
--> 185             return _corr(x, y)
    186 
    187     else:

ZeroDivisionError: division by zero

Then I changed sp = yasa.spindles_detect(data, sf) to sp = yasa.spindles_detect(data, sf, thresh={'rel_pow': 0.2, 'corr': None, 'rms': 1.5}) and the code ran successfully.

I would like to know why I encountered this problem. I hope you can help.

automatic_sleep_staging: No module named 'lightgbm'

When I tried running the 5th code block in the notebook 14_automatic_sleep_staging, I got the error: ModuleNotFoundError: No module named 'lightgbm'
Here is my traceback:
ModuleNotFoundError Traceback (most recent call last)
in
1 # Getting the predicted sleep stages is now as easy as:
----> 2 y_pred = sls.predict()
3 y_pred

~\PyMOL\lib\site-packages\yasa\staging.py in predict(self, path_to_model)
425 self.fit()
426 # Load and validate pre-trained classifier
--> 427 clf = self._load_model(path_to_model)
428 # Now we make sure that the features are aligned
429 X = self.features.copy()[clf.feature_name]

~\PyMOL\lib\site-packages\yasa\staging.py in _load_model(self, path_to_model)
398 assert os.path.isfile(path_to_model), "File does not exist."
399 # Load using Joblib
--> 400 clf = joblib.load(path_to_model)
401 # Validate features
402 self._validate_predict(clf)

~\PyMOL\lib\site-packages\joblib\numpy_pickle.py in load(filename, mmap_mode)
583 return load_compatibility(fobj)
584
--> 585 obj = _unpickle(fobj, filename, mmap_mode)
586 return obj

~\PyMOL\lib\site-packages\joblib\numpy_pickle.py in _unpickle(fobj, filename, mmap_mode)
502 obj = None
503 try:
--> 504 obj = unpickler.load()
505 if unpickler.compat_mode:
506 warnings.warn("The file '%s' has been generated with a "

~\PyMOL\lib\pickle.py in load(self)
1086 raise EOFError
1087 assert isinstance(key, bytes_types)
-> 1088 dispatch[key[0]](self)
1089 except _Stop as stopinst:
1090 return stopinst.value

~\PyMOL\lib\pickle.py in load_stack_global(self)
1383 if type(name) is not str or type(module) is not str:
1384 raise UnpicklingError("STACK_GLOBAL requires str")
-> 1385 self.append(self.find_class(module, name))
1386 dispatch[STACK_GLOBAL[0]] = load_stack_global
1387

~\PyMOL\lib\pickle.py in find_class(self, module, name)
1424 elif module in _compat_pickle.IMPORT_MAPPING:
1425 module = _compat_pickle.IMPORT_MAPPING[module]
-> 1426 __import__(module, level=0)
1427 if self.proto >= 4:
1428 return _getattribute(sys.modules[module], name)[0]

ModuleNotFoundError: No module named 'lightgbm'

How do I fix this error? Do I need to install something else? I already tried upgrading yasa with the command pip install --upgrade yasa, and it told me all requirements were already satisfied.
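Upgrading yasa does not necessarily pull in the missing package; lightgbm can be installed separately (e.g. pip install lightgbm). A quick check for whether it is present in the current environment:

```python
import importlib.util

# True if lightgbm is importable from this environment; if False, install it
# separately (e.g. `pip install lightgbm`) rather than upgrading yasa alone.
has_lgbm = importlib.util.find_spec("lightgbm") is not None
print("lightgbm installed:", has_lgbm)
```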

Error during automatic sleep staging

Hi. I was trying to get the automatically extracted sleep stages, but, while executing sls.predict() command, Python (Jupyter Lab) returned the following error.
image
I used an EDF file loaded via the mne.io.read_raw_edf function (...preload=True...): 4 channels (2 EEG, 2 EMG) with the same sf (200 Hz). Then I executed sls = yasa.SleepStaging(raw, eeg_name="EEG1", emg_name="EMG1"), which worked without errors for ~5 seconds, though the file is 26 hours long. I use the newest version of yasa (0.4.1); lightGBM (3.2.1) and antropy (0.1.4) are also installed. It seems like it cannot find the files containing the pretrained classifiers. What could this be due to? Manually, I could find the files in the file system.
image
Thanks in advance.

Over-classification of wakefulness?

We are getting many (way too many?) epochs classified as wakefulness. We see frequent jumps from N3 or N2 to W, which doesn't seem right.

Have you previously encountered a problem of over-classification of wakefulness with yasa? If not, do you have any suggestions as to where we could be making an error?

This is an example of our results for one night of sleep:
unknown

Extra eeg channels in automatic sleep staging

Fantastic work! I was wondering if it's possible to feed extra EEG channels (e.g., frontal and occipital channels) to yasa.SleepStaging.
Will this further improve the staging accuracy? I read the preprint paper but only central EEG is mentioned.
Thanks!

ImportError when running yasa.art_detect and fix

Hi Raphael,

Thanks for developing such an amazing algorithm to analyze sleep data on Python, making life much easier!

However, I encountered an import error when using yasa.art_detect, regarding the module sklearn.cluster._kmeans, and I want to report this bug and its fix here in case anyone else encounters it. (I'm not sure if I am doing the right thing in the right place because I am new to the community, so please forgive me if I am doing it wrong.)

ImportError: no module named 'sklearn.cluster._kmeans'

This error seems related to the scikit-learn version. I installed the latest version (0.24.1), which contains the sklearn.cluster._kmeans module, but I still got the import error:

File "/opt/anaconda3/lib/python3.7/site-packages/pyriemann/clustering.py", line 5, in
from sklearn.cluster._kmeans import _init_centroids

ImportError: cannot import name '_init_centroids' from 'sklearn.cluster._kmeans' (/opt/anaconda3/lib/python3.7/site-packages/sklearn/cluster/_kmeans.py)

This issue is actually because scikit-learn 0.24 breaks pyriemann (see the linked issue and fix). I think the fix has not been released on pyriemann yet, so one needs to manually adjust the code accordingly.

Now the function yasa.art_detect can run properly!

OS version: 10.14.6
Python: 3.7.4

Difficulties implementing the package SleepStaging

Hi. I am trying to implement YASA to analyse EEG data obtained from birds. I was trying to use the SleepStaging package, but when applying the ".predict" method I am getting the error "File does not exist".
This is an image of the complete error warning. Could you help me with this? I haven't been able to solve the issue, and I checked that all required packages are properly installed.

image

Kind regards from Groningen, the Netherlands.

Machine learning model

Hello Raphael Vallat, what amazing work! Where can I find the machine learning model used by YASA's sleep stage scoring? Is it a convolutional deep neural network or something else? I couldn't find the model architecture. What should I do?

[ENH] Speed up `_index_to_events`

The function _index_to_events in others.py could be implemented in another fashion, avoiding the use of a for loop.

Here is my proposal:

def _index_to_events(x):
    x_copy = np.copy(x)
    x_copy[:,1] += 1 
    split_idx = x_copy.reshape(-1).astype(int)
    full_idx = np.arange(split_idx.max())
    index = np.split(full_idx, split_idx)[1::2]
    index = np.concatenate(index)
    return index

Please tell me what you think of it. If (pre)approved, I could create a PR :)
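For reference, a quick equivalence check of the proposed vectorized version against a naive loop, assuming x holds inclusive [start, end] sample pairs:

```python
import numpy as np

def _index_to_events_loop(x):
    """Naive reference: expand each inclusive [start, end] pair."""
    index = []
    for start, end in x:
        index.extend(range(start, end + 1))
    return np.array(index)

def _index_to_events_vec(x):
    """Proposed vectorized version (no Python loop over events)."""
    x_copy = np.copy(x)
    x_copy[:, 1] += 1
    split_idx = x_copy.reshape(-1).astype(int)
    full_idx = np.arange(split_idx.max())
    index = np.split(full_idx, split_idx)[1::2]
    return np.concatenate(index)

x = np.array([[2, 4], [7, 9]])
a = _index_to_events_loop(x)
b = _index_to_events_vec(x)
print(a, b)  # both [2 3 4 7 8 9]
```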

Conversion to microvolts

I am working with a np.array that I convert to mne.raw to pass to SleepStaging().
I was getting huge power values and I traced it back to this line:

data = raw_pick.get_data() * 1e6

In the comments for SleepStaging(), you have

    .. important:: The PSG data should be in micro-Volts. Do NOT transform (e.g. z-score) or filter
        the signal before running the sleep staging algorithm.

The data that I am entering is in microvolts, and the transformation to Raw is not affecting it:

eeg
array([[-76., -29., -30., ...,  -5., -12., -44.]])
raw_array.get_data()[0]
array([-76., -29., -30., ...,  -5., -12., -44.])

The sls object has the scaled data stored

sls.data[0]
array([-75994247.75916173, -37654850.96502215, -54616406.07807892, ...,
       -65886683.36596085, -81338678.52364798, -19189291.22413166])

If I calculate the spectrum by hand, I obtain the same result using either the eeg object or raw_array.get_data(), but with the * 1e6 factor I get an obviously inflated number.

>>> simps(scipy.signal.welch(eeg, 512, nperseg=512*2)[1])
21057.190104166664
>>> simps(scipy.signal.welch(raw_array.get_data()[0], 512, nperseg=512*2)[1])
21057.208596347984
>>> simps(scipy.signal.welch(raw_array.get_data()[0] * 1e6, 512, nperseg=512*2)[1])
2.105720859634799e+16

Should there be a way to tell SleepStaging() not to rescale the data? I am thinking of something like:

sls = yasa.SleepStaging(..., units=None) # rescales as before
sls = yasa.SleepStaging(..., units="uV") # does not rescale

This issue may or may not affect the other bands, because the default behavior is:

yasa.bandpower_from_psd_ndarray(..., relative=True)

But if I understand the code correctly, it may also be computing a scaled spectrum and then applying the relative normalization on top of it.
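The factor of 1e12 between the two integrated spectra above is consistent with YASA's internal * 1e6 step: MNE Raw objects are assumed to store data in volts, so data that is already in microvolts gets scaled once too often, and power scales with the square of the amplitude factor. A small NumPy sketch with synthetic data (amplitudes hypothetical) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)
eeg_uv = rng.normal(0.0, 30.0, 51_200)  # synthetic signal already in microvolts

# SleepStaging multiplies raw.get_data() by 1e6 (volts -> microvolts), so
# microvolt data ends up inflated by an extra factor of 1e6 in amplitude.
scaled = eeg_uv * 1e6

# Power (variance, and likewise the integrated Welch PSD) scales with the
# square of the amplitude factor: (1e6) ** 2 = 1e12
ratio = np.var(scaled) / np.var(eeg_uv)
```

Until a units argument exists, a workaround consistent with MNE's convention is to build the RawArray from eeg * 1e-6, so that get_data() returns volts as MNE expects and YASA's * 1e6 brings it back to microvolts.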

Detecting sleep spindles

Hi,

I preprocessed the sleep data, which was in .edf format, in MNE. I renamed the channels to match the 10-20 montage in MNE and saved the result as a .set file. I am currently interested in detecting sleep spindles in the N2, N3 and REM stages for the C3 and C4 channels. I used the function spindles_detect and got the error below (image). Could it be that the data is not being converted to the right units?

Screenshot 2021-11-04 at 12 47 48

Thanks,
Apoorva.

Add the transition frequency as slow-waves parameters

From Bouchard et al., 2021:

We calculated the transition frequency extracted from the filtered slow wave in the delta band. For each slow wave, the transition frequency characterizes the half-wave associated with the depolarization transition. If t denotes the delay of the transition from the maximum negative point to the maximum positive point of the slow wave, then the transition frequency is defined as f=1/(2t).
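For concreteness, a minimal sketch of that definition, assuming the detector exposes the negative- and positive-peak timestamps in seconds (as YASA's slow-wave output appears to do via its NegPeak and PosPeak columns):

```python
import numpy as np

def transition_frequency(neg_peak_sec, pos_peak_sec):
    """Bouchard et al. 2021: f = 1 / (2 * t), where t is the delay from the
    maximum negative point to the maximum positive point of the slow wave."""
    t = np.asarray(pos_peak_sec, dtype=float) - np.asarray(neg_peak_sec, dtype=float)
    return 1.0 / (2.0 * t)

# A 0.25 s negative-to-positive transition corresponds to a 2 Hz half-wave
freq = transition_frequency(1.00, 1.25)
```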

Error when using an MNE Raw object to get the power spectrum; I want to use the EDF imported into MNE in tutorial (A) below.

Dear Professor Vallat,

First of all, many thanks for taking the time to bring us this very useful YASA package.

I already use EDF files in MNE for some tasks. Now I am learning YASA to extract information from resting-state awake EEG without events.

I've managed to import and use mne raw in this tutorial:

08_bandpower.ipynb

but when I try this very interesting tutorial:
https://raphaelvallat.com/bandpower.html (A)

I receive an error:


sample_data_raw_file = 'C:\\000_tmp\\00000034_s001_t002.edf'  
raw = mne.io.read_raw_edf(sample_data_raw_file, preload=True, verbose=False)

sns.set(font_scale=1.2)
# data =yasa.bandpower(raw.copy().pick_channels(['F3']), bandpass=True)
# Define sampling frequency and time vector

data = raw[:][0]   #to get the numpy array. 
sf = 100.
time = np.arange(data.size) / sf

# Plot the signal
fig, ax = plt.subplots(1, 1, figsize=(12, 4))
plt.plot(time, data, lw=1.5, color='k')
plt.xlabel('Time (seconds)')
plt.ylabel('Voltage')
# plt.xlim([time.min(), time.max()])
plt.xlim([time.min(), time.max()])
plt.title('N3 sleep EEG data (F3)')
sns.despine()


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-148-cf49d519d2c3> in <module>
      9 # Plot the signal
     10 fig, ax = plt.subplots(1, 1, figsize=(12, 4))
---> 11 plt.plot(time, data, lw=1.5, color='k')
     12 plt.xlabel('Time (seconds)')
     13 plt.ylabel('Voltage')

c:\python\python38\lib\site-packages\matplotlib\pyplot.py in plot(scalex, scaley, data, *args, **kwargs)
   3017 @_copy_docstring_and_deprecators(Axes.plot)
   3018 def plot(*args, scalex=True, scaley=True, data=None, **kwargs):
-> 3019     return gca().plot(
   3020         *args, scalex=scalex, scaley=scaley,
   3021         **({"data": data} if data is not None else {}), **kwargs)

c:\python\python38\lib\site-packages\matplotlib\axes\_axes.py in plot(self, scalex, scaley, data, *args, **kwargs)
   1603         """
   1604         kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D)
-> 1605         lines = [*self._get_lines(*args, data=data, **kwargs)]
   1606         for line in lines:
   1607             self.add_line(line)

c:\python\python38\lib\site-packages\matplotlib\axes\_base.py in __call__(self, data, *args, **kwargs)
    313                 this += args[0],
    314                 args = args[1:]
--> 315             yield from self._plot_args(this, kwargs)
    316 
    317     def get_next_color(self):

c:\python\python38\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs, return_kwargs)
    499 
    500         if x.shape[0] != y.shape[0]:
--> 501             raise ValueError(f"x and y must have same first dimension, but "
    502                              f"have shapes {x.shape} and {y.shape}")
    503         if x.ndim > 2 or y.ndim > 2:

ValueError: x and y must have same first dimension, but have shapes (202000,) and (1, 202000)

Could you show me a path to follow?

Thanks in advance
Paulo Kanda
University of São Paulo
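For what it's worth, the traceback points at a shape mismatch rather than a YASA problem: raw[:][0] returns a 2D array of shape (n_channels, n_samples), while time is 1D. A minimal sketch of the likely fix (shapes taken from the error message):

```python
import numpy as np

sf = 100.0
data_2d = np.zeros((1, 202000))  # what raw[:][0] returns: (n_channels, n_samples)
data_1d = np.squeeze(data_2d)    # or data_2d[0], or raw.get_data()[0]
time = np.arange(data_1d.size) / sf
# Shapes now match, so plt.plot(time, data_1d) no longer raises ValueError
```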

`get_sync_events` throws an error when only one event is found and its indices exceed data

traceback:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-33-5fd9957c9aba> in <module>
     13 #         print(spt)
     14         eventst = yasa.get_sync_events(epochs_data.copy(), sf, detection=spt, center='Peak',
---> 15                                        time_before=0.5, time_after=0.5)
     16         spt["Epoch"] = i
     17         eventst["Epoch"] = i

~/.virtualenvs/sleepy3/lib/python3.7/site-packages/yasa/main.py in get_sync_events(data, sf, detection, center, time_before, time_after)
    210             df_tmp = get_sync_events(data[idx_chan, :], sf, k.iloc[:, :-2],
    211                                      center=center, time_before=time_before,
--> 212                                      time_after=time_after)
    213             df_tmp['Channel'] = c
    214             df_tmp['IdxChannel'] = idx_chan

~/.virtualenvs/sleepy3/lib/python3.7/site-packages/yasa/main.py in get_sync_events(data, sf, detection, center, time_before, time_after)
    237         idx = np.ma.compress_rows(idx_mask)
    238         print(idx)
--> 239         amps = data[idx]
    240         time = rng(0) / sf
    241 

IndexError: arrays used as indices must be of integer (or boolean) type

This happens when there is only one event (in my case a spindle, but that doesn't matter) and that event lies at the very beginning or end of the data. The line idx_mask = np.ma.mask_rows(np.ma.masked_outside(idx, 0, data.shape[0])) then discards everything, leaving idx empty.

For now I work around it manually: since I always center around the Peak, I check whether Peak - time_before or Peak + time_after exceeds the limits of the time series and, if so, shrink time_before or time_after accordingly.

It would be nice if yasa offered such an option; an option to skip such events entirely would also work.
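The manual workaround described above can be sketched as a small helper (names hypothetical, not part of YASA's API):

```python
def clamp_window(center_idx, time_before, time_after, sf, n_samples):
    """Shrink time_before/time_after so the event window stays inside the data.

    center_idx : sample index of the event center (e.g. the spindle Peak)
    sf         : sampling frequency in Hz
    n_samples  : total number of samples in the recording
    """
    before = min(time_before, center_idx / sf)
    after = min(time_after, (n_samples - 1 - center_idx) / sf)
    return before, after
```

For an event only 10 samples from the start of a 1000-sample, 100 Hz recording, clamp_window(10, 0.5, 0.5, 100, 1000) shrinks time_before to 0.1 s while leaving time_after at 0.5 s.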

Automatic Sleep Staging Error

Hello,

About 5-6 days ago I started getting the error below when running the following line of code:

y_pred = yasa.SleepStaging(raw, eeg_name="EEG L").predict()
y_pred

I have been using this application for months, nothing has changed on my side.


ValueError Traceback (most recent call last)
in
----> 1 y_pred = yasa.SleepStaging(raw, eeg_name="EEG L").predict()
2 y_pred

~\anaconda3\lib\site-packages\yasa\staging.py in predict(self, path_to_model)
425 self.fit()
426 # Load and validate pre-trained classifier
--> 427 clf = self._load_model(path_to_model)
428 # Now we make sure that the features are aligned
429 X = self.features.copy()[clf.feature_name]

~\anaconda3\lib\site-packages\yasa\staging.py in _load_model(self, path_to_model)
400 clf = joblib.load(path_to_model)
401 # Validate features
--> 402 self._validate_predict(clf)
403 return clf
404

~\anaconda3\lib\site-packages\yasa\staging.py in _validate_predict(self, clf)
    372         f_diff = np.setdiff1d(clf.feature_name_, self.feature_name_)
    373         if len(f_diff):
--> 374             raise ValueError("The following features are present in the "
    375                              "classifier but not in the current features set:",
    376                              f_diff)

ValueError: ('The following features are present in the classifier but not in the current features set:', array(['eeg_abspow_c5min_norm', 'eeg_abspow_p5min_norm',
'eeg_alpha_c5min_norm', 'eeg_alpha_p5min_norm',
'eeg_at_c5min_norm', 'eeg_at_p5min_norm', 'eeg_beta_c5min_norm',
'eeg_beta_p5min_norm', 'eeg_db_c5min_norm', 'eeg_db_p5min_norm',
'eeg_ds_c5min_norm', 'eeg_ds_p5min_norm', 'eeg_dt_c5min_norm',
'eeg_dt_p5min_norm', 'eeg_fdelta_c5min_norm',
'eeg_fdelta_p5min_norm', 'eeg_hcomp_c5min_norm',
'eeg_hcomp_p5min_norm', 'eeg_higuchi_c5min_norm',
'eeg_higuchi_p5min_norm', 'eeg_hmob_c5min_norm',
'eeg_hmob_p5min_norm', 'eeg_iqr_c5min_norm', 'eeg_iqr_p5min_norm',
'eeg_kurt_c5min_norm', 'eeg_kurt_p5min_norm', 'eeg_nzc_c5min_norm',
'eeg_nzc_p5min_norm', 'eeg_perm_c5min_norm', 'eeg_perm_p5min_norm',
'eeg_petrosian_c5min_norm', 'eeg_petrosian_p5min_norm',
'eeg_sdelta_c5min_norm', 'eeg_sdelta_p5min_norm',
'eeg_sigma_c5min_norm', 'eeg_sigma_p5min_norm',
'eeg_skew_c5min_norm', 'eeg_skew_p5min_norm', 'eeg_std_c5min_norm',
'eeg_std_p5min_norm', 'eeg_theta_c5min_norm',
'eeg_theta_p5min_norm'], dtype='<U24'))

Interactive plot to visually inspect the detection?

Add a .plot() method to the DetectionResults class, which would allow users to scroll through the data and visually check the detection. In the longer term, add support for editing the detection dataframe based on manual rejection.

What's the best interactive library for plotting? Plotly? Matplotlib? Bokeh? Will it be able to handle a full-night PSG recording? We might need to downsample.

Can also use mne.viz.plot_raw tool: https://mne.tools/stable/generated/mne.viz.plot_raw.html#mne.viz.plot_raw

Understanding `win_sec` in `bandpower`

For a second, I was expecting win_sec to operate as a rolling window, but after running:

yasa.bandpower(eeg_mean100,
               sf=100,
               ch_names=None,
               hypno=None,
               include=None,
               win_sec=4,
               relative=False,
               bandpass=False,
               #bands=[(1, 4, 'Delta'), (5, 11, 'Theta')],
               kwargs_welch={'average': 'median', 'window': 'hamming'})

I get a df that has only one row. The same applies to other arbitrary subsets of the data. The only thing that yields more than one power value (i.e., more than one row) is having more channels or hypnogram stages. I was hoping to calculate bandpower for arbitrary time windows, so I hacked it together with this for loop, but I am not sure the results make sense, since I don't fully understand what win_sec does.

import timeit
import pandas as pd
import yasa

start_time = timeit.default_timer()
temp_list = []
sliding_window_secs = 4
# signal downsampled to 100 Hz
window = sliding_window_secs * 100
windows_to_calculate = eeg_mean100.shape[0] // window
for i in range(windows_to_calculate):
    eeg_sub = eeg_mean100[window * i:window * (i + 1)]
    yasa_bands = yasa.bandpower(eeg_sub,
                                sf=100,
                                ch_names=None,
                                hypno=None,
                                include=None,
                                win_sec=4,
                                relative=False,
                                bandpass=False,
                                bands=[(1, 4, 'Delta'), (5, 11, 'Theta')],
                                kwargs_welch={'average': 'median', 'window': 'hamming'})
    temp_list.append(yasa_bands)

yasa_df = pd.concat(temp_list)
print(timeit.default_timer() - start_time)

Update

Upon inspecting the docs/code, I see what win_sec actually does and understand it has nothing to do with what I wanted, so I adjusted my win_sec accordingly. My question still holds, though: what would be the recommended way of getting bandpower values every nth data sample?
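The loop above can be replaced by a single reshape that epochs the signal into consecutive, non-overlapping windows; each row can then be passed to yasa.bandpower (where win_sec only controls the Welch segment length, not epoching). A sketch, assuming a 1D NumPy signal:

```python
import numpy as np

def split_into_windows(data, sf, win_sec):
    """Split a 1D signal into consecutive, non-overlapping windows.

    Returns an array of shape (n_windows, win_sec * sf); any trailing
    samples that do not fill a complete window are dropped.
    """
    n_per_win = int(win_sec * sf)
    n_win = data.shape[0] // n_per_win
    return data[:n_win * n_per_win].reshape(n_win, n_per_win)

# 10 s of data at 100 Hz -> two complete 4 s windows (last 2 s dropped)
windows = split_into_windows(np.arange(1000.0), sf=100, win_sec=4)
```

If I recall correctly, YASA also ships a yasa.sliding_window helper that does this epoching (with an optional overlap/step), which may be preferable to rolling your own.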

Feature request: Micro-arousal detection

It would be amazing if MA detection (different types, such as autonomic, cortical, behavioural, etc.) were added to YASA. It should be relatively easy to code as well; I imagine small modifications to the spindle detection algorithm would do the trick!

Found spindles dramatically reduced after downsampling

Thanks for creating such a cool tool!

I have a comprehension question: intuitively, downsampling from 256 to 128 Hz should not significantly reduce the number of detected spindles. However, in my case it drops from 69 detected spindles to 38 (limited to N3 here).

Is this to be expected? Which results should be regarded as trustworthy?

256 Hz
image
128 Hz
image

Additionally, the spindle counts do not add up between analyzing N2+N3 together and analyzing N2 and N3 separately. Is this due to different thresholds being computed for different segments, or due to spindles being detected on the edge between two windows?

Example:

spindles_NREM = yasa.spindles_detect(raw, hypno=hypno,  include=(2, 3)) 
spindles_N2 = yasa.spindles_detect(raw, hypno=hypno,  include=2) 
spindles_N3 = yasa.spindles_detect(raw, hypno=hypno,  include=3) 

spindles_NREM = 880 events
spindles_N2 = 816 events
spindles_N3 = 38 events

missing = 26 events (880 - 816 - 38) not found in either separate analysis
