neurotechx / eeg-expy

EEG Experiments in Python

Home Page: https://neurotechx.github.io/EEG-ExPy/

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 96.94%, HTML 1.91%, CSS 0.37%, Makefile 0.78%

eeg-expy's People

Contributors

ayrusgit, computerscienceiscool, danielemarinazzo, div12345, erikbjare, hubertjb, hvjay, jadintredup, jartuso, jnaulty, johngriffiths, kylemath, orehga, pellet, retiutut, tmorshed


eeg-expy's Issues

Number of events 0

ℹ Computer information

  • Platform OS: Windows 10
  • Python Version: Python 3.7.10
  • Brain Interface Used: Muse 2

📝 Provide detailed reproduction steps (if any)

  1. switch on the Muse 2 and start streaming with Petal Metrics
  2. run the P300 experiment with the command eegnb runexp -ip, input the necessary parameters, and complete the experiment
  3. fill in the missing values in the experimental data CSV file (see #108, #111; without filling in the values you get a tokenizing error)
  4. analyze your data as explained in https://neurotechx.github.io/eeg-notebooks/auto_examples/visual_p300/01r__p300_viz.html using Jupyter notebooks, up to and including the epoching step ("Create an array containing the timestamps and type of each stimulus")

✔️ Expected result

Based on the marker column, a number of events is detected, including both target and non-target events. The output resulting from running this cell shows the corresponding results.

❌ Actual result

The output shows that 23 events were found; however, in the table below the number of events is 0, and there are 0 occurrences of both target and non-target events.

📷 Screenshots

image

I believe this issue is connected to #108 / #111. If I use the example data (subject 1, session 1) from https://neurotechx.github.io/eeg-notebooks/auto_examples/visual_p300/01r__p300_viz.html, target and non-target events are found, although there is still a discrepancy between the number of events found (1161) and the number of events in the table below (1143: 959 non-target and 184 target events).

image

By visual inspection I do not see any differences between my experimental data file and the example data file (except that I only recorded a single 120 s run, while the example data had six runs, all of longer duration).

I am attaching my experimental data file (I had to change the extension from csv to txt, since otherwise I received a "file type not supported" error when trying to upload it here).
recording_2021-06-28-14.34.59.txt
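
For reference, a quick way to inspect the marker column of a recording programmatically (a sketch; the "Marker" column name and the 1 = non-target / 2 = target convention are assumptions based on the P300 example notebook):

import pandas as pd

# Load the recording (rename the attached .txt back to .csv first).
df = pd.read_csv("recording_2021-06-28-14.34.59.csv")
print(df.columns.tolist())          # confirm a "Marker" column is present
print(df["Marker"].value_counts())  # counts per marker value, e.g. 1 = non-target, 2 = target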

PyWinhook building wheel error

On Win10 using Anaconda 4.8.4

conda create -n eeg-notebooks python=3.6 pip
conda activate eeg-notebooks
pip install -r requirements.txt

 Building wheel for pyWinhook (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Users\Morgan Hough\.conda\envs\eeg-notebooks\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\MORGAN~1\\AppData\\Local\\Temp\\pip-install-ntayfg3e\\pywinhook\\setup.py'"'"'; __file__='"'"'C:\\Users\\MORGAN~1\\AppData\\Local\\Temp\\pip-install-ntayfg3e\\pywinhook\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\MORGAN~1\AppData\Local\Temp\pip-wheel-yrqp9qeq'
       cwd: C:\Users\MORGAN~1\AppData\Local\Temp\pip-install-ntayfg3e\pywinhook\
  Complete output (16 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-3.6
  creating build\lib.win-amd64-3.6\pyWinhook
  copying pyWinhook\aa hook.py -> build\lib.win-amd64-3.6\pyWinhook
  copying pyWinhook\doc.py -> build\lib.win-amd64-3.6\pyWinhook
  copying pyWinhook\example.py -> build\lib.win-amd64-3.6\pyWinhook
  copying pyWinhook\HookManager.py -> build\lib.win-amd64-3.6\pyWinhook
  copying pyWinhook\__init__.py -> build\lib.win-amd64-3.6\pyWinhook
  running build_ext
  building 'pyWinhook._cpyHook' extension
  swigging pyWinhook/cpyHook.i to pyWinhook/cpyHook_wrap.c
  swig.exe -python -o pyWinhook/cpyHook_wrap.c pyWinhook/cpyHook.i
  error: command 'swig.exe' failed: No such file or directory
  ----------------------------------------
  ERROR: Failed building wheel for pyWinhook
  Running setup.py clean for pyWinhook

Swig needs to be installed with conda, as there is no compatible package on PyPI:

conda install swig

    running build_ext
    building 'pyWinhook._cpyHook' extension
    swigging pyWinhook/cpyHook.i to pyWinhook/cpyHook_wrap.c
    C:\Users\Morgan Hough\.conda\envs\eeg-notebooks\Library\bin\swig.exe -python -o pyWinhook/cpyHook_wrap.c pyWinhook/cpyHook.i
    error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
    ----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\Morgan Hough\.conda\envs\eeg-notebooks\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\MORGAN~1\\AppData\\Local\\Temp\\pip-install-cb193sym\\pywinhook\\setup.py'"'"'; __file__='"'"'C:\\Users\\MORGAN~1\\AppData\\Local\\Temp\\pip-install-cb193sym\\pywinhook\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\MORGAN~1\AppData\Local\Temp\pip-record-hrg_h6jz\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Morgan Hough\.conda\envs\eeg-notebooks\Include\pyWinhook' Check the logs for full command output.

Now looking into the required Visual Studio build tools:

https://stackoverflow.com/questions/48541801/microsoft-visual-c-14-0-is-required-get-it-with-microsoft-visual-c-build-t

I assume this is because of the Python 3.6 requirement. Installing just the Desktop C++ build tools using Visual Studio 2017 solves the remaining issue building the wheel.

Visual Studio 2019 is the current version; requiring Python 3.7, or pinning PyWinhook==1.6.1 in requirements.txt, could be alternative workarounds.

test installation

We have worked to clarify and streamline the installation process and instructions for eeg-notebooks.

However, we know there are still some problems that people occasionally encounter.

These may be due to (a) a lack of clarity in some of the instructions, or (b) actual installation errors.

We are asking all new contributors to run through the installation instructions here and leave feedback on this issue thread. Note that this includes installation of BlueMuse for Windows users.

We would like to hear any general feedback on things that are unclear or incorrect, and any suggested improvements.

We will also help you with installation errors you encounter here.

Loading FreeEEG32 recorded data

Hi,
I ran an SSVEP experiment using eeg-notebooks and then loaded the recording by slightly modifying the SSVEP visualization example, but the loading step fails.
The recording was from a 2-channel (FP1 and FP2) headset, with the FreeEEG32 board doing the acquisition.
This was the code:

import os
from eegnb.analysis.utils import load_data  # imports added for completeness

eegnb_data_path = os.path.join(os.path.expanduser('~/'), '.eegnb', 'data')
ssvep_data_path = os.path.join(eegnb_data_path, 'visual-SSVEP', 'local', 'freeeeg32')

subject = 1
session = 1
raw = load_data(subject, session,
                experiment='visual-SSVEP', site='local', device_name='freeeeg32',
                data_dir=eegnb_data_path)

But this is the error I get:

['eeg_0', 'eeg_1', 'eeg_2', 'eeg_3', 'eeg_4', 'eeg_5', 'eeg_6', 'eeg_7', 'eeg_8', 'eeg_9', 'eeg_10', 'eeg_11', 'eeg_12', 'eeg_13', 'eeg_14', 'eeg_15', 'eeg_16', 'eeg_17', 'eeg_18', 'eeg_19', 'eeg_20', 'eeg_21', 'eeg_22', 'eeg_23', 'eeg_24', 'eeg_25', 'eeg_26', 'eeg_27', 'eeg_28', 'eeg_29', 'eeg_30', 'eeg_31', 'stim']
Creating RawArray with float64 data, n_channels=33, n_times=31428
    Range : 0 ... 31427 =      0.000 ...    61.381 secs
Ready.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-f9984a7bc1ef> in <module>
     11 raw = load_data(subject, session, 
     12                 experiment='visual-SSVEP', site='local', device_name='freeeeg32',
---> 13                 data_dir = eegnb_data_path)

d:\softwares\github\eeg-notebooks\eegnb\analysis\utils.py in load_data(subject_id, session_nb, device_name, experiment, replace_ch_names, verbose, site, data_dir)
    164             ch_ind=ch_ind,
    165             replace_ch_names=replace_ch_names,
--> 166             verbose=verbose,
    167         )
    168 

d:\softwares\github\eeg-notebooks\eegnb\analysis\utils.py in load_csv_as_raw(fnames, sfreq, ch_ind, aux_ind, replace_ch_names, verbose)
     77     raws = concatenate_raws(raw, verbose=verbose)
     78     montage = make_standard_montage("standard_1005")
---> 79     raws.set_montage(montage)
     80 
     81     return raws

<decorator-gen-22> in set_montage(self, montage, match_case, on_missing, verbose)

d:\softwares\anaconda3\envs\eeg-notebooks\lib\site-packages\mne\io\meas_info.py in set_montage(self, montage, match_case, on_missing, verbose)
    160         from ..channels.montage import _set_montage
    161         info = self if isinstance(self, Info) else self.info
--> 162         _set_montage(info, montage, match_case, on_missing)
    163         return self
    164 

d:\softwares\anaconda3\envs\eeg-notebooks\lib\site-packages\mne\channels\montage.py in _set_montage(info, montage, match_case, on_missing)
    772                 'in your analyses.'
    773             )
--> 774             _on_missing(on_missing, missing_coord_msg)
    775 
    776             # set ch coordinates and names from digmontage or nan coords

d:\softwares\anaconda3\envs\eeg-notebooks\lib\site-packages\mne\utils\check.py in _on_missing(on_missing, msg, name)
    730     _check_option(name, on_missing, ['raise', 'warn', 'ignore'])
    731     if on_missing == 'raise':
--> 732         raise ValueError(msg)
    733     elif on_missing == 'warn':
    734         warn(msg)

ValueError: DigMontage is only a subset of info. There are 32 channel positions not present in the DigMontage. The required channels are:

['eeg_0', 'eeg_1', 'eeg_2', 'eeg_3', 'eeg_4', 'eeg_5', 'eeg_6', 'eeg_7', 'eeg_8', 'eeg_9', 'eeg_10', 'eeg_11', 'eeg_12', 'eeg_13', 'eeg_14', 'eeg_15', 'eeg_16', 'eeg_17', 'eeg_18', 'eeg_19', 'eeg_20', 'eeg_21', 'eeg_22', 'eeg_23', 'eeg_24', 'eeg_25', 'eeg_26', 'eeg_27', 'eeg_28', 'eeg_29', 'eeg_30', 'eeg_31'].

Consider using inst.set_channel_types if these are not EEG channels, or use the on_missing parameter if the channel positions are allowed to be unknown in your analyses.

What change do I need to make to get past this problem?
Thanks in advance!
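
One possible direction, following the hint in the error message itself (a sketch, not a verified fix): the FreeEEG32 channels are named eeg_0..eeg_31, which the standard_1005 montage does not contain. In eegnb/analysis/utils.py, load_csv_as_raw calls raws.set_montage(montage); relaxing that call, and optionally renaming the two wired electrodes first, would let loading proceed:

# Inside load_csv_as_raw (eegnb/analysis/utils.py), instead of raws.set_montage(montage):
raws.rename_channels({"eeg_0": "Fp1", "eeg_1": "Fp2"})  # assumed wiring; adjust to your setup
raws.set_montage(montage, on_missing="ignore")          # unknown channels keep no position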

Implement code vs prose comprehension task

As part of my thesis work, I want to replicate (and outperform) a 2019 study (https://ieeexplore.ieee.org/abstract/document/8813259) by @dfucci et al. that attempted to classify whether the subject is reading code or prose. Their classifier exhibited very modest performance, no doubt in part due to the limited 1-channel EEG equipment they were using, which I'm confident I can improve upon for my thesis.

The study by Fucci et al. is in turn a replication of a 2017 fMRI study (https://ieeexplore.ieee.org/abstract/document/7985660), which clearly showed that there are differences between reading code and prose.

More recently (yesterday), MIT researchers published an fMRI study on the differences in brain activity between reading code and prose: https://news.mit.edu/2020/brain-reading-computer-code-1215 (the actual paper: https://elifesciences.org/articles/58906)

I think this task would be a nice addition to the notebooks, and I'm creating this issue to signal my intent to implement it 🙂

EEG experiments not working with Muse2 via EEG Notebooks on Mac

I have created a virtual environment and I am trying to run the SSVEP experiment via EEG notebooks. On my command line, I execute the command "python run_notebook.py" and choose the appropriate device and configuration. It connects to the Muse and the stimulus presentation appears, but it doesn't seem to record or store any of the data, and I also get an error at the end: "ValueError: need at least one array to concatenate".
Screen Shot 2020-10-02 at 6 42 26 PM

Screen Shot 2020-10-02 at 6 43 00 PM

Merge changes to the device abstraction ("EEG") class from my repo

My stuff is in: https://github.com/ErikBjare/thesis/blob/master/src/eegwatch/devices/

Includes a bunch of minor changes, including:

  • Added type annotations
  • Support for recording Muse PPG/ACC/GYRO streams

TODO

  • start/stop is different for Brainflow vs Muse devices; this should maybe be refactored, but should at least be documented.
  • I've run the black formatter on all my code, so merging would be easiest after similar formatting has been applied to the eeg-notebooks project (like in #30)
  • Needs a way to enable PPG/ACC/GYRO streams (without hardcoding)

sound fades away in ssaep experiments

Hi

In both SSAEP experiments the sound fades away after a few instances, becoming barely audible.
Maybe it's my perception, but it seems somewhat less evident in the two-frequency experiment (4a); it is still very noticeable, though.

Add option to select OpenBCI usb dongle

It would be nice to be able to select the Cyton serial port, instead of having to type it each time.

Example COM ports:

  • Mac (default OpenBCI port name): /dev/cu.usbserial-DM00D7TW
  • Windows: COM4
  • Linux: /dev/ttyUSB0

This applies to Ganglion, Cyton, and CytonDaisy Boards.
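
A minimal sketch of how auto-detection could work (an assumption, not existing project code), using pyserial's port listing:

from serial.tools import list_ports

def find_openbci_port():
    """Return the first serial port that looks like an OpenBCI dongle."""
    for port in list_ports.comports():
        # The Cyton dongle uses an FTDI USB-serial chip; 0x0403 is FTDI's vendor ID.
        if port.vid == 0x0403:
            return port.device
    raise OSError("No OpenBCI dongle found")

print(find_openbci_port())  # e.g. '/dev/cu.usbserial-DM00D7TW', 'COM4', or '/dev/ttyUSB0'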

error after changing psychopy PTB sound preferences

So there's this warning:

3.6146 WARNING We strongly recommend you activate the PTB sound engine in PsychoPy prefs as the preferred audio engine. Its timing is vastly superior. Your prefs are currently set to use ['sounddevice', 'PTB', 'pyo', 'pygame'] (in that order).

If I do as requested, I then get an error:

Traceback (most recent call last):
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\Scripts\eegnb-script.py", line 33, in <module>
    sys.exit(load_entry_point('eeg-notebooks', 'console_scripts', 'eegnb')())
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\__main__.py", line 74, in main
    cli = CLI(args.command)
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\cli.py", line 9, in __init__
    getattr(self, command)()
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\cli.py", line 70, in runexp
    run_introprompt()
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\introprompt.py", line 155, in main
    run_experiment(experiment, record_duration, eeg_device, save_fn)
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\utils.py", line 62, in run_experiment
    ssaep_onefreq.present(duration=record_duration, eeg=eeg_device, save_fn=save_fn)
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\experiments\auditory_ssaep\ssaep_onefreq.py", line 104, in present
    aud1 = sound.Sound(am1)
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\lib\site-packages\psychopy\sound\backend_ptb.py", line 334, in __init__
    hamming=self.hamming)
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\lib\site-packages\psychopy\sound\backend_ptb.py", line 413, in setSound
    _SoundBase.setSound(self, value, secs, octave, hamming, log)
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\lib\site-packages\psychopy\sound\_base.py", line 198, in setSound
    self._setSndFromArray(numpy.array(value))
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\lib\site-packages\psychopy\sound\backend_ptb.py", line 479, in _setSndFromArray
    self.stopTime = self._nSamples / float(self.sampleRate)
TypeError: float() argument must be a string or a number, not 'NoneType'
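
A guess at a workaround (not verified, and the sample-rate value is an assumption): with the PTB backend, self.sampleRate appears to end up None when a raw numpy array is passed, which is what breaks the stopTime computation, so supplying it explicitly in ssaep_onefreq.py may help:

# In eegnb/experiments/auditory_ssaep/ssaep_onefreq.py, instead of aud1 = sound.Sound(am1):
aud1 = sound.Sound(am1, sampleRate=44100)  # match the rate the am1 array was generated at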

Error: name 'fetch_dataset' is not defined

I was trying to run the N170 experiment from https://neurotechx.github.io/eeg-notebooks/experiments/vn170.html, but when I run the code block for downloading the data I get an error saying "name 'fetch_dataset' is not defined", which makes me think I'm missing a library, and I don't know where to find it. I'm on Linux and I've already installed the requirements and libraries. I would appreciate any thoughts from anyone who may have had a similar issue. When installing the libraries in my virtual env, I did get a build error for wxWidgets and Phoenix, but everything else worked fine.

Code:

import os
from eegnb.datasets.datasets import fetch_dataset  # this import was missing, hence the NameError
from eegnb.analysis.utils import load_data

eegnb_data_path = os.path.join(os.path.expanduser('~/'), '.eegnb', 'data')
n170_data_path = os.path.join(eegnb_data_path, 'visual-N170', 'eegnb_examples')

if not os.path.isdir(n170_data_path):
    fetch_dataset(data_dir=eegnb_data_path, experiment='visual-N170', site='eegnb_examples')

subject = 1
session = 1
raw = load_data(subject, session, experiment='visual-N170', site='eegnb_examples',
                device_name='muse2016', data_dir=eegnb_data_path)

Missing Marker Column in Output (Muse2 with BlueMuse)

ℹ Computer information

  • Platform OS (e.g Windows, Mac, Linux etc): Windows 10
  • Python Version: 3.7
  • Brain Interface Used (e.g Muse, OpenBCI, Notion etc): Muse2 with Bluemuse

📝 Provide detailed reproduction steps (if any)

  1. Connected to BlueMuse and began streaming.
  2. Ran the demonstration code for running an experiment as per the documentation. I tried 30s, 120s, visual_n170, and visual_p300.
  3. Additionally, I ran a snippet of code for "signal quality" and sent a couple of test markers.

✔️ Expected result

The fetched data looked like this:

image

I expected all the columns except the "Right AUX" column.

❌ Actual result

(1) The result of running the experiment:
There is no marker column at all:
image

(2) Signal snippet:
Initially, there was no marker column, as above. This morning it began to show up, but it has no header or data for the first few lines, which causes import errors. The other markers I sent in the snippet do show up.

image

run n170 experiment

After you have run through the first 'good first issue' and completed the installation, the next thing to do is to run one of the experiments.

Note, this only applies to users/developers with access to an EEG device.

As with the installation issue, we would like to gather feedback and any errors encountered from this process.

Please follow through the instructions here. Be sure to select the correct EEG device, choose the visual N170 experiment, and, for this test run, just enter '30' (30 seconds) for the duration option.

Please leave comments on this thread with any issues encountered, or suggestions for improvement, from your experience of this process.

Visual P300 event markers not recorded

ℹ Computer information

  • Platform OS: Windows 10
  • Python Version: Python 3.7.10
  • Brain Interface Used: Muse 2

📝 Provide detailed reproduction steps (if any)

  1. switch on your Muse device
  2. start streaming using BlueMuse or Petal Metrics (I tried with both and had the same issues)
  3. run the experiment with the command eegnb runexp -ip, and input the necessary parameters
  4. complete the experiment

✔️ Expected result

The last column of the output/experimental data file is titled "Marker", and below it, starting from the next row, are zero markers denoting that no picture is presented at that time.

❌ Actual result

The title of the last column is missing, and the zeros start from data row 13 (row 14 of the file).

📷 Screenshots (this experiment was done using BlueMuse for streaming)

image

Add a CLI experiment run to the pre check-in tests

📝 Provide a description of the new feature

What is the expected behavior of the proposed feature? What is the scenario this would be used?

Add a mock run of the CLI to allow for an end-to-end test of an eeg-notebooks experiment. The code can live in the /test folder.

Eventually, these test methods should run successfully before PR completion.

  • You can initialize a mock device with brainflow (see the sketch below).
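
A minimal sketch of what such a mock run could look like (an assumed test shape, not existing project code), using brainflow's built-in synthetic board:

import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# Prepare the synthetic board as the mock device.
params = BrainFlowInputParams()
board = BoardShim(BoardIds.SYNTHETIC_BOARD.value, params)
board.prepare_session()
board.start_stream()
time.sleep(1)                   # stand-in for running the actual experiment presentation
data = board.get_board_data()   # drain everything recorded so far
board.stop_stream()
board.release_session()
assert data.shape[1] > 0, "mock device produced no samples"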

If you'd like to see this feature implemented, add a 👍 reaction to this post.

Docs Review - Audience

I am assuming the “audience” we are writing for includes educators at the high school or university level, in addition to students at those levels? How much background with these technologies do we expect our audience to have?

(Some of the pain points I am noting related to code or explication might be specific to my lack of experience, and unnecessary for our particular audience.)

Raise minimum supported Python to 3.7 (at least)

There are a few things I've been working on where I'd really benefit from the minimum version being raised to 3.7 (time_ns, dataclasses, etc.)

Are there any blockers for this?

For the record: I've been running 3.8 just fine locally (although, since #32 is unmerged, there might be some remaining issues; we could just go ahead and merge it and hold off on 3.9).

For discussion around maximum supported version, see #50

issue with installation

Hi
I am trying to install the library on Windows, starting from scratch with Miniconda (Python 3.9).

I get a series of errors; I paste the first few lines below. I think it might be a version issue.

Any hint?

thanks!

ERROR: Command errored out with exit status 1:
command: 'C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks\python.exe' 'C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\ricercasc\AppData\Local\Temp\pip-build-env-8aype7ym\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'Cython>=0.28.5' 'numpy==1.13.3; python_version=='"'"'3.6'"'"' and platform_system!='"'"'AIX'"'"' and platform_python_implementation == '"'"'CPython'"'"'' 'numpy==1.14.0; python_version=='"'"'3.6'"'"' and platform_system!='"'"'AIX'"'"' and platform_python_implementation != '"'"'CPython'"'"'' 'numpy==1.14.5; python_version=='"'"'3.7'"'"' and platform_system!='"'"'AIX'"'"'' 'numpy==1.17.3; python_version>='"'"'3.8'"'"' and platform_system!='"'"'AIX'"'"'' 'numpy==1.16.0; python_version=='"'"'3.6'"'"' and platform_system=='"'"'AIX'"'"'' 'numpy==1.16.0; python_version=='"'"'3.7'"'"' and platform_system=='"'"'AIX'"'"'' 'numpy==1.17.3; python_version>='"'"'3.8'"'"' and platform_system=='"'"'AIX'"'"'' 'scipy>=0.19.1'
cwd: None
Complete output (641 lines):
Ignoring numpy: markers 'python_version == "3.6" and platform_system != "AIX" and platform_python_implementation == "CPython"' don't match your environment

select port prompt

Hi,

IMO, this is a minor bug.
I am following the available instructions (GitHub/Sphinx, or the outdated gdoc), and when I come to the serial port declaration, i.e.

from eegnb.devices.eeg import EEG  # import added for completeness

board_name = 'cyton'
port = '/dev/cu.something'
eeg_device = EEG(device=board_name, serial_port=port)

the code still prompts the user to enter the port name again. However, the port-name entry is not required, and if Enter is pressed the code executes correctly.
Just aesthetics, but it'd be great to remove the prompt.

cueing notebook prose updates

There's a set of sphinx-gallery examples for the four notebooks that you contributed

https://neurotechx.github.io/eeg-notebooks/experiments/cueing.html

which are slightly modified, but largely unchanged from your original notebooks.

What we could do with is a go-over of those four pages

https://neurotechx.github.io/eeg-notebooks/auto_examples/visual_cueing/01r__cueing_singlesub_analysis.html#

https://neurotechx.github.io/eeg-notebooks/auto_examples/visual_cueing/02r__cueing_group_analysis.html

https://neurotechx.github.io/eeg-notebooks/auto_examples/visual_cueing/03r__cueing_behaviour_analysis_winter2019.html

https://neurotechx.github.io/eeg-notebooks/auto_examples/visual_cueing/04r__cueing_group_analysis_winter2019.html

Specifically: I was wondering if you, or someone from your team, could spare maybe an hour or so going through those, and get back to me with the following:

  • Some additional explanatory text for the experiment(s) in general
  • Explanatory text for each code block section
  • Explanatory text describing what the figures are showing
  • A short concluding paragraph for each page
  • Citations of relevant papers (yours and others) in the above additions highly encouraged

If you guys don't have bandwidth for this atm, no worries. I thought I would ask as you folks would be best placed to do this, and I have a lot of other tasks to do with the rest of the documentation. Trying to get better at delegating!

If you think you could give this a stab:

You can send the above info however is most convenient.

A pull request on the source code would be OK - although the relationship of the sphinx-gallery Python source code (with embedded .rst) to the webpage HTML isn't super obvious, and you have to compile the website to see it. So you might prefer to e.g. just grab the Jupyter notebooks from each webpage (cf. the download link at the bottom) and email me updated versions, or even just a text file/email. Whatever you prefer.

extra electrode not visible with Muse2 and Bluemuse?

ℹ Computer information

  • Platform OS (e.g Windows, Mac, Linux etc): Windows
  • Python Version: 3.7
  • Brain Interface Used (e.g Muse, OpenBCI, Notion etc): Muse2 with Bluemuse

📝 Provide detailed reproduction steps (if any)

  1. Created an extra electrode with a 5-pin mini-USB connector (https://hackaday.io/project/162169-muse-eeg-headset-making-extra-electrode/details)
  2. Connected the Muse2 with BlueMuse
  3. Started muse-lsl

✔️ Expected result

See the AUX channel

❌ Actual result

The AUX channel is not visible.

The BlueMuse page, https://github.com/kowalej/BlueMuse, says:

Muse 2 has AUX channel disabled - if I try to stream from this channel I get errors. It looks like no data comes from the channel when debugging Bluetooth inside a sniffing tool, so I'm making the assumption that Muse 2 doesn't actually support the AUX (secret electrode) input - it just has a (non functioning) GATT characteristic which is the same UUID as the Muse (2016).

Can anyone confirm? Or is there a workaround? Is there an alternative to BlueMuse for Windows? Has anyone else encountered this issue?

Add code coverage analysis

Another improvement to the CI pipeline.

We don't have much in the way of tests, but we do have the auto examples, which run a fair bit of the code. It'd be nice to know exactly which code is somewhat tested in CI and which isn't (and, when new code is added, how well it is covered by tests/examples).

This should be easy enough to do with coverage.py.

Add signal quality checks before recording starts

📝 Provide a description of the new feature

What is the expected behavior of the proposed feature?

This will be based on @ErikBjare's take on the problem: https://erik.bjareholt.com/thesis/Signal.html

  • We will add a new flag to command line to check signal quality : eegnb runexp [flag for quality check]

Results will be shown at the command-line prompt, with suggestions on changes to make to improve quality.

What is the scenario this would be used?

Before the recording starts, the quality check runs to make the user aware of how valuable their recording will be for the analysis moving forward.

Initially we are targeting Muse devices, but we aim to generalize to other device types.
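
A rough sketch of the kind of per-channel metric such a check could compute (the thresholds and API here are assumptions, not the planned implementation):

import numpy as np

def check_signal_quality(samples: np.ndarray, ch_names: list) -> dict:
    """samples: array of shape (n_channels, n_samples), in microvolts."""
    report = {}
    for name, ch in zip(ch_names, samples):
        std = np.std(ch)
        # Heuristic: a near-flat channel is likely disconnected; a very noisy
        # one likely has poor electrode contact.
        report[name] = "ok" if 1.0 < std < 100.0 else "check electrode contact"
    return report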


If you'd like to see this feature implemented, add a 👍 reaction to this post.

EEG Analysis Utils plotting doesn't support variable channels

The current eegnb/analysis/utils.py module has a plot_conditions() method that is hard-coded to plot results from only 4 channels.

To support devices with more than 4 channels, the implementation needs to be updated to support varying channel counts.
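
A sketch of what a variable-channel version could look like (assumed data shape; the real plot_conditions draws ERP conditions, this only shows the dynamic-subplot idea):

import numpy as np
import matplotlib.pyplot as plt

def plot_channels(data: np.ndarray, ch_names: list):
    """data: (n_channels, n_times) averages; one subplot per channel."""
    n_ch = len(ch_names)
    fig, axes = plt.subplots(nrows=int(np.ceil(n_ch / 2)), ncols=2, squeeze=False)
    for ax, name, ch in zip(axes.flat, ch_names, data):
        ax.plot(ch)
        ax.set_title(name)
    fig.tight_layout()
    plt.show()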

Support newer Python versions

There seem to be a lot of issues when installing on Python versions other than 3.7. Since I have a pretty extensive history of battling issues like these, I thought I might give it a shot (especially now that we have a CI pipeline).

Documenting known issues here:

  • Installing pywinhook on Windows on Python 3.9 requires extra build deps (swig, VS C++ tools) due to no prebuilt wheels being available on PyPI
    • Fixed by referencing wheels built elsewhere in requirements.txt
  • Build errors for numpy/pandas
    • It seems this is merely due to the versions in requirements.txt being old. Using newer versions works just fine. (another reason for #22)
  • tables 3.6.1 has no wheels available for Python 3.9 on platforms other than Linux (see: https://github.com/NeuroTechX/eeg-notebooks/actions/runs/400623523)

pylsl missing library?

Hi again, another (newbie) issue, with a missing library.
Here is the output when running python run_notebooks.py:

Traceback (most recent call last):
  File "run_notebooks.py", line 4, in <module>
    from eegnb.devices.eeg import EEG
  File "C:\Users\ricercasc\eeg-notebooks\eegnb\devices\eeg.py", line 18, in <module>
    from muselsl import stream, list_muses, record
  File "C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks_py38\lib\site-packages\muselsl\__init__.py", line 1, in <module>
    from .stream import stream, list_muses
  File "C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks_py38\lib\site-packages\muselsl\stream.py", line 2, in <module>
    from pylsl import StreamInfo, StreamOutlet
  File "C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks_py38\lib\site-packages\pylsl\__init__.py", line 2, in <module>
    from .pylsl import IRREGULAR_RATE, DEDUCED_TIMESTAMP, FOREVER, cf_float32, \
  File "C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks_py38\lib\site-packages\pylsl\pylsl.py", line 1217, in <module>
    lib = CDLL(libpath)
  File "C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks_py38\lib\ctypes\__init__.py", line 381, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Users\ricercasc\Miniconda3\envs\eeg-notebooks_py38\lib\site-packages\pylsl\lib\liblsl64.dll' (or one of its dependencies). Try using the full path with constructor syntax.

Clearing old examples

We should go through the old examples and delete the ones that reference the old API.

@JohnGriffiths are there any reasons we need to keep any of these examples?

Dependency versions are overly strict

I'm trying to use eegnb as a dependency in my own code (https://github.com/ErikBjare/thesis) to reuse the EEG class for device abstraction, but I am stumbling into issues with eegnb pinning old, specific versions of mne (which I can't use, due to other dependencies that require newer versions).

I'd prefer it if the dependencies were managed/expressed with poetry/pyproject.toml (with specific versions locked with poetry.lock), but it would necessitate moving away from using setup.py.

If this is of interest, I could submit a PR. (I could also put together a basic CI setup at the same time, since I notice one is missing)

Lots of code duplication/inconsistency for getting the data path

Running git grep path.join yields a whopping 62+ results; here's a sample:

eegnb/__init__.py:DATA_DIR = path.join(path.expanduser("~/"), ".eegnb", "data")
eegnb/analysis/utils_old.py:    data_path = os.path.join(data_dir, experiment, site, device, subsess)
eegnb/datasets/datasets.py:    exp_dir = os.path.join(data_dir, experiment, site, device)
eegnb/datasets/datasets.py:        destination = os.path.join(data_dir, "downloaded_data.zip")
eegnb/datasets/datasets.py:        pth = os.path.join(
eegnb/datasets/datasets.py:                        pth = os.path.join(
eegnb/experiments/auditory_oddball/diaconescu.py:    mcond_file = os.path.join(
eegnb/experiments/visual_codeprose/codeprose.py:    outname = os.path.join(
eegnb/experiments/visual_cueing/cueing.py:    directory = os.path.join(
eegnb/experiments/visual_cueing/cueing.py:    outname = os.path.join(
eegnb/experiments/visual_n170/n170.py:    faces = list(map(load_image, glob(os.path.join(FACE_HOUSE, "faces", "*_3.jpg"))))
eegnb/experiments/visual_n170/n170.py:    houses = list(map(load_image, glob(os.path.join(FACE_HOUSE, "houses", "*.3.jpg"))))
eegnb/experiments/visual_n170/n170_fixedstimorder.py:fso_list_file = os.path.join(exp_dir, "visual_n170", "n170_fixedstimorder_list.csv")
eegnb/experiments/visual_n170/n170_fixedstimorder.py:        filename = os.path.join(stim_dir, filename)
eegnb/experiments/visual_n170/n170_old.py:faces_dir = os.path.join(stim_dir, "visual", "face_house", "faces")
eegnb/experiments/visual_n170/n170_old.py:houses_dir = os.path.join(stim_dir, "visual", "face_house", "houses")
eegnb/experiments/visual_p300/p300.py:    targets = list(map(load_image, glob(os.path.join(CAT_DOG, "target-*.jpg"))))
eegnb/experiments/visual_p300/p300.py:    nontargets = list(map(load_image, glob(os.path.join(CAT_DOG, "nontarget-*.jpg"))))
eegnb/stimuli/__init__.py:FACE_HOUSE = path.join(path.dirname(__file__), "visual", "face_house")
eegnb/stimuli/__init__.py:CAT_DOG = path.join(path.dirname(__file__), "visual", "cats_dogs")
examples/auditory_oddball/auditory_oddball_diaconescu.ipynb:    "output_dir = os.path.join(eegnb_data_dir, \"auditory-oddball_diaconescu\", \"jg\")\n",
examples/auditory_oddball/auditory_oddball_diaconescu.ipynb:    "recording_path = os.path.join(output_dir, \"subject\" + str(subject), \"session\" + str(session),\n",
examples/auditory_oddball/auditory_oddball_diaconescu.ipynb:    "recordings_files = sorted(glob.glob(os.path.join(recordings_dir, '*.csv')))\n",
examples/misc/mac_run_exp.py:	recording_path = os.path.join(os.path.expanduser("~"), "eeg-notebooks", "data", "visual", "P300", "subject" + str(subject), "session" + str(run), ("recording_%s.csv" % strftime("%Y-%m-%d-%H.%M.%S", gmtime())))
examples/misc/mac_run_exp.py:	recording_path = os.path.join(os.path.expanduser("~"), "eeg-notebooks", "data", "visual", "N170", "subject" + str(subject), "session" + str(run), ("recording_%s.csv" % strftime("%Y-%m-%d-%H.%M.%S", gmtime())))
examples/misc/mac_run_exp.py:	recording_path = os.path.join(os.path.expanduser("~"), "eeg-notebooks", "data", "visual", "SSVEP", "subject" + str(subject), "session" + str(run), ("recording_%s.csv" % strftime("%Y-%m-%d-%H.%M.%S", gmtime())))
examples/misc/mac_run_exp.py:  recording_path = os.path.join(os.path.expanduser("~"), "eeg-notebooks", "data", "visual", "cueing", "subject" + str(subject), "session" + str(run), ("subject" + str(subject) + "_session" + str(run) + "_recording_%s.csv" % strftime("%Y-%m-%d-%H.%M.%S", gmtime()) ) )
examples/misc/neurobrite_datasets.py:basedir = os.path.join(os.getcwd(),'stimulus_presentation/stim')
examples/misc/neurobrite_datasets.py:stimdir = os.path.join(basedir, 'olivetti_faces')
examples/misc/neurobrite_datasets.py:stimdir = os.path.join(basedir,'faces_in_wild')
examples/misc/neurobrite_datasets.py:stimdir = os.path.join(basedir, 'digits')
examples/sandbox/SSVEP_linux.ipynb:    "recording_path = os.path.join(os.path.expanduser(\"~\"), \"eeg-notebooks\", \"data\", \"visual\", \"SSVEP\", \"subject\" + str(subject), \"session\" + str(session), (\"recording_%s.csv\" %\n",
examples/sandbox/auditory_oddball_erp_arrayin.ipynb:    "recording_path = os.path.join(os.path.split(os.getcwd())[0],'data', 'auditory', 'oddball_erp_arrayin', \n",
examples/sandbox/auditory_oddball_erp_arrayin.ipynb:    "recordings_files = glob.glob(os.path.join(recordings_dir, '*'))\n",
examples/sandbox/auditory_stim_with_aux.ipynb:    "#sys.path.append(os.path.join(os.path.expanduser(\"~\"), \"eeg-notebooks\", 'utils'))\n",
examples/sandbox/auditory_stim_with_aux.ipynb:    "#recording_path = os.path.join(os.path.expanduser(\"~\"), \"eeg-notebooks\", \"data\", \"visual\", \"N170\", \"subject\" + str(subject), \"session\" + str(session), (\"recording_%s.csv\" %\n",
examples/sandbox/old_notebooks/Auditory P300 with Muse.ipynb:    "sys.path.append(os.path.join(os.path.expanduser(\"~\"), \"eeg-notebooks\", 'utils'))\n",
examples/sandbox/old_notebooks/Cross-subject classification.ipynb:    "sys.path.append(os.path.join(os.path.expanduser(\"~\"), \"eeg-notebooks\", 'utils'))\n",
examples/sandbox/old_notebooks/Go No Go with Muse.ipynb:    "sys.path.append(os.path.join(os.path.expanduser(\"~\"), \"eeg-notebooks\", 'utils'))\n",
examples/sandbox/test_muse_markers.ipynb:    "save_fn = os.path.join(os.getcwd(),'test_record_file.csv')\n",
examples/visual_cueing/01r__cueing_singlesub_analysis.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_cueing/01r__cueing_singlesub_analysis.py:cueing_data_path = os.path.join(eegnb_data_path, 'visual-cueing', 'kylemathlab_dev')
examples/visual_cueing/02r__cueing_group_analysis.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')
examples/visual_cueing/02r__cueing_group_analysis.py:cueing_data_path = os.path.join(eegnb_data_path, 'visual-cueing', 'kylemathlab_dev')
examples/visual_cueing/03r__cueing_behaviour_analysis_winter2019.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')
examples/visual_cueing/03r__cueing_behaviour_analysis_winter2019.py:cueing_data_path = os.path.join(eegnb_data_path, 'visual-cueing', 'kylemathlab_dev')
examples/visual_cueing/04r__cueing_group_analysis_winter2019.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')
examples/visual_cueing/04r__cueing_group_analysis_winter2019.py:cueing_data_path = os.path.join(eegnb_data_path, 'visual-cueing', 'kylemathlab_dev')
examples/visual_cueing/cueing_group_analysis.ipynb:    "eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')\n",
examples/visual_cueing/cueing_group_analysis.ipynb:    "cueing_data_path = os.path.join(eegnb_data_path, 'visual-cueing', 'kylemathlab_dev')\n",
examples/visual_n170/01r__n170_viz.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_n170/01r__n170_viz.py:n170_data_path = os.path.join(eegnb_data_path, 'visual-N170', 'eegnb_examples')
examples/visual_n170/02r__n170_decoding.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_n170/02r__n170_decoding.py:n170_data_path = os.path.join(eegnb_data_path, 'visual-N170', 'eegnb_examples')
examples/visual_p300/01r__p300_viz.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_p300/01r__p300_viz.py:p300_data_path = os.path.join(eegnb_data_path, 'visual-P300', 'eegnb_examples')
examples/visual_p300/02r__p300_decoding.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_p300/02r__p300_decoding.py:p300_data_path = os.path.join(eegnb_data_path, 'visual-P300', 'eegnb_examples')
examples/visual_ssvep/01r__ssvep_viz.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_ssvep/01r__ssvep_viz.py:ssvep_data_path = os.path.join(eegnb_data_path, 'visual-SSVEP', 'eegnb_examples')
examples/visual_ssvep/02r__ssvep_decoding.py:eegnb_data_path = os.path.join(os.path.expanduser('~/'),'.eegnb', 'data')    
examples/visual_ssvep/02r__ssvep_decoding.py:ssvep_data_path = os.path.join(eegnb_data_path, 'visual-SSVEP', 'eegnb_examples')

As you can see, many do the exact same thing (i.e. get a path to the ~/.eegnb/experiment/site/board/subject/session directory), but do it in various different ways (and sometimes inconsistently!).

Since we have a nice generate_save_fn (used in 33+ places), it should probably be adopted more widely. I'll likely do this as I stumble into these spots in the codebase.
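
For illustration, the kind of single helper these call sites could share (a hypothetical function; generate_save_fn already exists and its actual signature may differ):

from os import path

DATA_DIR = path.join(path.expanduser("~/"), ".eegnb", "data")

def recording_dir(experiment: str, site: str, device: str,
                  subject: int, session: int) -> str:
    """Build ~/.eegnb/data/<experiment>/<site>/<device>/subjectN/sessionM."""
    return path.join(DATA_DIR, experiment, site, device,
                     "subject" + str(subject), "session" + str(session))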

Implement ExperimentSpec to reduce redundancy

A lot of the code/experiments needs access to static information such as:

  • device name
  • experiment name
  • subject id
  • session id
  • data directory
  • record duration

It would probably be a good idea to add a dataclass for this, that'll be easier to pass around without having to repeat the same set of arguments every time.

The idea would be that once you've constructed an ExperimentSpec, you can:

  • Run the experiment: expspec.run()
  • Get an EEG instance: expspec.device (generated from input arguments at initialization)
  • Generate a savefile: expspec.generate_output_file() (see #74)
  • Write new experiments by simply having their present functions accept a single spec argument, which contains all the metadata the experiment could possibly need.

Essentially:

from dataclasses import dataclass, field
from pathlib import Path

from eegnb.devices.eeg import EEG  # import shown for completeness

@dataclass
class ExperimentSpec:
    device_name: str
    experiment_name: str
    subject_id: int
    session_id: int
    device_options: dict = field(default_factory=dict)

    def __post_init__(self):
        self.device = EEG(self.device_name, **self.device_options)

    def run(self) -> None:
        ...

    def get_output_file(self) -> Path:
        ...

installation error on windows

Installation on windows just stopped working for me.

Trying to figure out why.

Collecting soundfile
  Using cached SoundFile-0.10.3.post1-py2.py3.cp26.cp27.cp32.cp33.cp34.cp35.cp36.pp27.pp32.pp33-none-win_amd64.whl (689 kB)
Collecting sounddevice
  Using cached sounddevice-0.4.1-py3.cp32.cp33.cp34.cp35.cp36.cp37.cp38.cp39.pp32.pp33.pp34.pp35.pp36.pp37-none-win_amd64.whl (167 kB)
Collecting python-bidi
  Using cached python_bidi-0.4.2-py2.py3-none-any.whl (30 kB)
Collecting arabic_reshaper
  Using cached arabic_reshaper-2.1.1.tar.gz (18 kB)
Collecting future
  Using cached future-0.18.2.tar.gz (829 kB)
    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\john_griffiths\Miniconda3\envs\eeg-notebooks_test\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\john_griffiths\\AppData\\Local\\Temp\\pip-install-c_csvdrr\\future\\setup.py'"'"'; __file__='"'"'C:\\Users\\john_griffiths\\AppData\\Local\\Temp\\pip-install-c_csvdrr\\future\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\john_griffiths\AppData\Local\Temp\pip-pip-egg-info-k3w3_qc8'
         cwd: C:\Users\john_griffiths\AppData\Local\Temp\pip-install-c_csvdrr\future\
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\john_griffiths\Miniconda3\envs\eeg-notebooks_test\lib\tokenize.py", line 392, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\john_griffiths\\AppData\\Local\\Temp\\pip-install-c_csvdrr\\future\\setup.py'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

(eeg-notebooks_test) C:\Users\john_griffiths\Code\libraries_of_mine\github\eeg-notebooks>

Error when installing dependency dukpy on macOS

  • MacBook Pro 1 inch 2019 - Big Sur
  • OpenBCI Cyton and Ultra Cortex 4

I was planning on testing out the P300 and SSVEP stimuli. However, when setting up my Anaconda environment, I kept getting an error on the "dukpy" lib. I tried to pip install dukpy, but it still doesn't work. I also tried Binder but got another error - from my understanding it hasn't been updated in a while anyway.

Any Help Would Be Greatly Appreciated!

📷 Screenshot

Screen Shot 2021-04-24 at 1 00 01 AM

Notebook for Riemannian Geometric Classification

Start a notebook explaining the differences between Riemannian Geometric Classification (RGC) methods. Topics to include:

  • Cell for fetching datasets for each individual task using the eegnb.datasets.datasets.fetch_dataset
  • Cell for loading datasets using eegnb.analysis.utils.load_dataset
  • Differences between Minimum Distance to Mean (MDM) and Tangent Space (TS) classification (see the sketch after this list)
  • Process for applying RGC to SSVEP
  • Process for applying RGC to Motor Imagery
  • Process for applying RGC to ERP
  • Compare methods to standard vectorization + logistic regression results
  • Add Notion2 data to google drive
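
A minimal sketch of the MDM vs Tangent Space comparison (stand-in random data; the shapes and estimator choice are assumptions, not notebook code):

import numpy as np
from sklearn.model_selection import cross_val_score
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM, TSclassifier

X = np.random.randn(40, 4, 256)  # (n_epochs, n_channels, n_times), stand-in epochs
y = np.random.randint(0, 2, 40)  # stand-in binary labels

covs = Covariances(estimator="lwf").fit_transform(X)  # one SPD matrix per epoch
for clf in (MDM(), TSclassifier()):
    scores = cross_val_score(clf, covs, y, cv=5)
    print(type(clf).__name__, scores.mean())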

how to visualize

Dear community:
My question is about how to run the code from the eegnb examples, e.g. with the command "python 01r__n170_viz.py". When I run it I of course have to change certain things inside the code, so I change 'eegnb_examples' to 'local' and 'muse2016' to 'muse2'. Still, that doesn't do the trick. What should I do further?

File "01r__n170_viz.py", line 63, in
data_dir = eegnb_data_path)
File "/Users/andraderenew/Downloads/eeg-notebooks-master/eegnb/analysis/utils.py", line 163, in load_data
verbose=verbose,
File "/Users/andraderenew/Downloads/eeg-notebooks-master/eegnb/analysis/utils.py", line 84, in load_csv_as_raw
raws = concatenate_raws(raw, verbose=verbose)
File "", line 22, in concatenate_raws
File "/Users/andraderenew/opt/anaconda3/envs/eeg-notebooks/lib/python3.7/site-packages/mne/io/base.py", line 2465, in concatenate_raws
raws[0].append(raws[1:], preload)
IndexError: list index out of range
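
One thing worth checking first (a sketch; 'muse2' and the path layout follow the ~/.eegnb/data/<experiment>/<site>/<device>/subjectN/sessionM convention): whether load_data can find any recording files at all, since an empty file list would produce exactly this "list index out of range" in concatenate_raws:

import os
from glob import glob

data_dir = os.path.join(os.path.expanduser('~/'), '.eegnb', 'data')
pattern = os.path.join(data_dir, 'visual-N170', 'local', 'muse2',
                       'subject*', 'session*', '*.csv')
print(glob(pattern))  # an empty list means load_data has nothing to concatenate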

finalize requirements.txt and pip install

The top priority remaining from the version 0.2 refactor is a clear, fully tested, and fully documented installation process.

The focus point for this is the requirements.txt.


Side note: the developer installation method should be

pip install -e .

from the repo top level folder.

This command calls the setup.py file, which reads dependencies from the requirements file.


The requirements.txt needs to be extended to include the following:

  • Version numbers for all packages (e.g. pip install matplotlib==3.0.3)
  • Operating system dependencies (e.g. pyreadline==2.1; platform_system == "Windows" - see here and here)
  • Testing of all this across all operating systems and multiple devices
  • Removal of any redundancies

Finalizing this is the number-one priority.

Machine learning functionality - clustering

Use signal processing metrics to create machine learning functionality for EEG-Notebooks and compare clustering algorithms. We can use scikit-learn's t-SNE optimization techniques (https://scikit-learn.org/stable/modules/manifold.html#optimizing-t-sne), pyRiemann's Embedding (https://pyriemann.readthedocs.io/en/latest/auto_examples/ERP/plot_embedding_EEG.html#sphx-glr-auto-examples-erp-plot-embedding-eeg-py), diffusion maps (https://pydiffmap.readthedocs.io/en/master/reference/diffusion_map.html), and other ideas (anything else?).
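
As a starting point, the t-SNE step could be as simple as this sketch (stand-in features; nothing EEG-specific yet):

import numpy as np
from sklearn.manifold import TSNE

features = np.random.randn(100, 16)  # e.g. band-power features per epoch (stand-in data)
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
print(embedding.shape)               # (100, 2) points ready to plot or cluster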

Follow my work on Twitch: https://www.twitch.tv/hussainather

Pre-processing pipeline for low density EEG system

The attached code proposes one prospective preprocessing pipeline that helps handle artifacts in EEG data collected using the Muse system. The text contains some additional preprocessing steps which I typically use when working with denser EEG systems, but some of these steps are not applicable to Muse. The code focuses on the sections without an asterisk.

  1. Load the required libraries
  2. Load the EEG and channel locations into EEGLAB format; downsample if needed
  3. High-pass filter (0.1-2 Hz, depending on noise level & study)
  4. *** H-infinity adaptive ocular artifact removal *** (Note: not applicable for Muse and not included)
  5. Low-pass filter (depending on the study)
  6. Remove bad segments (visual inspection)
  7. Remove bad channels (visual inspection with Muse)
  8. Remove burst artifacts using ASR
  9. *** ICA cleaning *** (Note: not applicable for Muse and not included)
  10. *** Interpolating removed channels *** (Note: not applicable for Muse and not included)

Additional commands:

  11. Notch filter (depending on the study)
  12. Common average reference (Note: apply after ICA, or add an averaged channel if referencing before ICA, to account for rank deficiency)
  13. Multiple plots for sanity checks

Pre_processing_tutorial.zip

kernel error - again dll

Hi

I am trying to run the SSVEP_load_and_visualize.ipynb notebook, and I get a kernel error (Python 3.8, Windows 10); see below.

I tried the same on Ubuntu 18 and got no error there.

Traceback (most recent call last):
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\web.py", line 1704, in _execute
    result = await result
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\gen.py", line 769, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\notebook\services\sessions\handlers.py", line 69, in post
    model = yield maybe_future(
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\gen.py", line 762, in run
    value = future.result()
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\gen.py", line 769, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 88, in create_session
    kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name)
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\gen.py", line 762, in run
    value = future.result()
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\gen.py", line 769, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 100, in start_kernel_for_session
    kernel_id = yield maybe_future(
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\tornado\gen.py", line 762, in run
    value = future.result()
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 176, in start_kernel
    kernel_id = await maybe_future(self.pinned_superclass.start_kernel(self, **kwargs))
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_client\multikernelmanager.py", line 185, in start_kernel
    km.start_kernel(**kwargs)
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_client\manager.py", line 309, in start_kernel
    kernel_cmd, kw = self.pre_start_kernel(**kw)
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_client\manager.py", line 256, in pre_start_kernel
    self.write_connection_file()
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_client\connect.py", line 468, in write_connection_file
    self.connection_file, cfg = write_connection_file(self.connection_file,
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_client\connect.py", line 138, in write_connection_file
    with secure_write(fname) as f:
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_core\paths.py", line 435, in secure_write
    win32_restrict_file_to_user(fname)
  File "c:\users\ricercasc\miniconda3\envs\eeg-notebooks_py38\lib\site-packages\jupyter_core\paths.py", line 361, in win32_restrict_file_to_user
    import win32api
ImportError: DLL load failed while importing win32api: The specified module could not be found.

startup error

Hi

after updating my local repository, I am no longer able to start the CLI. I have a Python 3.7 virtual environment on Windows 10.

(eegnb_py37) C:\Users\daniele\Dropbox\code\eeg-notebooks>eegnb runexp -ip
run command line prompt script
Traceback (most recent call last):
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\Scripts\eegnb-script.py", line 33, in <module>
    sys.exit(load_entry_point('eeg-notebooks', 'console_scripts', 'eegnb')())
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\__main__.py", line 74, in main
    cli = CLI(args.command)
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\cli.py", line 9, in __init__
    getattr(self, command)()
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\cli.py", line 68, in runexp
    from .introprompt import main as run_introprompt
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\cli\introprompt.py", line 4, in <module>
    from eegnb.devices.eeg import EEG
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\devices\eeg.py", line 21, in <module>
    from eegnb.devices.utils import get_openbci_usb, create_stim_array
  File "c:\users\daniele\dropbox\code\eeg-notebooks\eegnb\devices\utils.py", line 41, in <module>
    "freeeeg32": BoardShim.get_eeg_channels(BoardIds.FREEEEG32_BOARD.value),
  File "C:\Users\daniele\miniconda3\envs\eegnb_py37\lib\enum.py", line 354, in __getattr__
    raise AttributeError(name) from None
AttributeError: FREEEEG32_BOARD

Conda 1st time install conflicts with existing python install

I tried:

conda create -n "eeg-notebooks"
conda activate "eeg-notebooks"
conda install git
conda install pip
git clone https://github.com/NeuroTechX/eeg-notebooks
cd eeg-notebooks
pip install -r requirements.txt

but I encountered errors on the last step, since pip tried to use my existing Python 2 install on my Mac.

@m9h helped me using a different set of instructions.

error downloading cueing data during doc build

Traceback (most recent call last):
  File "/Users/kylemathewson/eeg-notebooks/venv/lib/python3.7/site-packages/sphinx_gallery/gen_gallery.py", line 159, in call_memory
    return 0., func()
  File "/Users/kylemathewson/eeg-notebooks/venv/lib/python3.7/site-packages/sphinx_gallery/gen_rst.py", line 466, in __call__
    exec(self.code, self.fake_main.__dict__)
  File "/Users/kylemathewson/eeg-notebooks/examples/visual_cueing/02r__cueing_group_analysis.py", line 38, in <module>
    datasets.fetch_dataset(data_dir=eegnb_data_path, experiment='visual-cueing', site='kylemathlab_dev')
  File "/Users/kylemathewson/eeg-notebooks/eegnb/datasets/datasets.py", line 85, in fetch_dataset
    with zipfile.ZipFile(destination, 'r') as zip_ref:
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 1258, in __init__
    self._RealGetContents()
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 1325, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

Docs Review - Issues List

Hi all - please find below my points following a comprehensive docs review. [Please note that if a point does not make sense or is unnecessary, it is definitely reflective of a lack of experience or understanding on my part :) Feel free to ignore.] I hope this is helpful. Please let me know if I can provide any clarification or if anything needs to be reformatted.

 

EEG Notebook - Docs Review

Organization: Left-Hand Menu Section Title / Subtitle / Paragraph Title / Problem

Missing Content

  • Installation - Introduction - Usage Mode Two - How/where does one download an existing data set? 
    • (Perhaps include in Supported Devices?)
  • Initiating an EEG Stream - Open BCI Ganglion - Finding the Ganglion’s MAC Address
  • Initiating an EEG Stream - Open BCI Cyton - Needed Parameters/Optional Parameters
  • Initiating an EEG Stream - Open BCI Cyton + Daisy - Needed Parameters/Optional Parameters
  • Analyzing Data
  • All Notebook Examples - Cueing Group Analysis - Provide More Context (after each header, ahead of the code.)
  • All Notebook Examples - Cueing Behavioral Analysis Winter 2019 - Provide More Context
  • All Notebook Examples - Cueing Group Analysis Winter 2019 - Provide More Context 

 

Further Explication Needed

  • Running Experiments - Command Line Interface - Perhaps consider giving an example of how the syntax ought to look when the command and flags are combined? (Introprompt Flag)
  • Available Notebooks - Visual P300 - Add line for context
    • Draft Copy: The visual P300 is a spike in brain activity associated with decision making and cognitive processing; it occurs roughly 300 ms after perceiving a visual stimulus. It was validated on the Muse by Alexandre Barachant using the oddball paradigm, in which low-probability target items are interspersed with high-probability non-target items. (Essentially, two different stimuli are presented and one of them occurs infrequently - this is the oddball.)
  • Available Notebooks - N170 - Add line for context
    • The N170 is an ERP (event-related brain potential) specifically related to the perception of faces; it is a negative waveform that peaks about 170ms after stimulus presentation.
  • Available Notebooks - SSVEP - Add line for context
    • The steady-state visual evoked potential (SSVEP) is a response produced by visual stimulation at specific frequencies. It was validated by Hubert in a 12-minute experiment (6 x 2-minute trials). Stimulation frequencies of 30 Hz and 20 Hz were used, and an extra electrode at POz was added. Clear peaks were found in the PSD at the stimulation frequencies.
  • All Notebook Examples - P300 Load and Visualize Data
    • It is possible that those who load data might lack experience (as opposed to someone with an EEG device who runs their own experiment). It might be appropriate to explain some of the key terms in one section between N170 and P300: (I chose the P300 here because it is the first in the “Available Notebooks” Section.)
      • Draft Copy
        • Visualize the Power Spectrum: Viewing the power spectrum is a standard method for quantifying EEG. It measures how a signal's power is distributed across its frequencies.
        • Filtering: Filtering is a useful technique for producing interpretable EEG tracings - essentially, filters clean up the data. You can learn more about filters here.
        • Epoching: This is a protocol in which specific time-windows are extracted from the continuous EEG signal (see the sketch after this list).
        • Epoch Average: 
          • The 2-1 Wave Shows the ERP Amplitude (I think?)
          • The Green Wave Shows the Target Stimuli
          • The Red Wave Shows the Non-Target Stimuli
  • All Notebook Examples - Cueing Single Subject Analysis - Further clarify intro sentence.
    • “Here, we are computing and plotting differences between time frequency window for analysis.” 
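
To make the epoching and epoch-average definitions above concrete, here is a minimal MNE sketch that could sit alongside the draft copy (illustrative only: raw is assumed to be an already-loaded mne.io.Raw recording, and the marker codes are assumptions, not the notebook's exact pipeline):

# Illustrative epoching with MNE: cut fixed time-windows around stimulus markers.
import mne

# raw = an mne.io.Raw object already loaded from a recording (assumed)
events = mne.find_events(raw)                # stimulus markers as (sample, 0, code)
event_id = {"non-target": 1, "target": 2}    # example marker codes

# Keep -100 ms to +800 ms around each marker, baseline-corrected on the pre-stimulus interval
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.1, tmax=0.8, baseline=(-0.1, 0), preload=True)

erp = epochs["target"].average()             # the "epoch average" is the ERP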

 

Clarification

  • Available Notebooks - Unvalidated Experiments and Other Phenomena - More Context
    • Do we include this? There is no further mention of it elsewhere in the notebooks...

 

Spelling and Grammar

  • Initiating an EEG Stream - Introduction First Paragraph - “Before running an experiment”
  • Initiating an EEG Stream - Introduction Fourth Paragraph - “Various support devices”
  • Available Notebooks - Visual P300 with Oddball Paradigm - “attain”
  • All Notebook Examples - P300 Load and Visualize Data - Introduction Paragraph 1: “This experiment uses an oddball paradigm: Images of cats and dogs are shown in a rapid serial visual presentation (RSVP) stream, with cats and dogs categorized as ‘targets’ or ‘non-targets’ according to whether they occur with low or high probability, respectively.”
  • All Notebook Examples - Cueing Single Subject Analysis - Spectogram - Add period after “time.”
  • P300 Load and Visualize Data - Setup, Load Data - Remove Reference to N170 example data set and Replace
  • P300 Load and Visualize Data - Filteriing - “Filtering”

 

Formatting

  • Initiating an EEG Stream - Supported Devices - Organization
  • Initiating an EEG Stream - Supported Devices - Add Photo for Interaxon Muse
  • Available Notebooks - All Titles - Match order of paragraphs to order of experiments menu
  • All Notebook Examples - Cueing Single Subject Analysis - Retitle “Now we compute and plot the differences” to “Computing and Plotting Differences.”
  • All Notebook Examples - N170 run experiment - Capitalize, “Run Experiment”
  • All Notebook Examples - P300 run experiment  - Capitalize, “Run Experiment”
  • All Notebook Examples - SSVEP run experiment - Capitalize, “Run Experiment”
  • All Notebook Examples - SSVEP run experiment - Pull intro/outro text (the blue # i.e “Next, we will chunk (epoch) the data into segments…”) out of the code in Sections “Epoching,” “Stimuli-Specific PSD,” and “Spectogram” and convert it into standard formatted text so that it is entirely visible to the user.
  • All Notebook Examples - SSVEP Decoding - ^Same Note as above. Convert intro text into standard formatted text for Sections “Epoching,” “Decoding,” and “Decoding.” 
  • All Notebook Examples - Cueing Single Subject Studies - ^Same Note as above. Convert intro text into standard formatted text for Section “Now we compute and plot the differences.”

 

Possible Pain Points

  • Take out spaces before code in the Installation section - Not a huge problem, but having to remove the indentation at the start of every line when copying code from GitHub to Jupyter is a little cumbersome.
  • Test Installation - If you are trying to run an experiment on existing data, the "Initiate EEG device" code seems to hinder full execution of this test on my end. This might be an area where we “fork” a separate installation test for those who are using existing data.



Readme doesn't mention conda in installation

1) The readme currently doesn't mention the conda install in the installation section, but mentions it later in troubleshooting;
2) it also mentions pip install ., but doesn't say which requirements.txt file to use;
3) I think we want the readme to describe installing within a virtual environment, like:

"To install dependencies within a virtual environment move into directory you cloned and run:

.. code-block:: shell

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip
$ pip install ."

[Docs] Clarify Saved Data Location on Mac

Currently, this section of the docs exists but is blank. New users might not understand how to view hidden folders, but @JohnGriffiths explained that this is done as part of an existing convention.

Location in Docs: https://neurotechx.github.io/eeg-notebooks/getting_started/loading_and_saving.html?highlight=saving#macos

Full Clarification for location on Mac:

You can find the recording files for all experiments in a folder called .eegnb in the user's home folder, as seen in the following screenshot.

[Screenshot: Finder window showing the hidden .eegnb folder in the user's home directory]
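
For readers who prefer the terminal or Python over unhiding folders in Finder, a small stdlib-only sketch to list the recordings (assuming the default ~/.eegnb location described above):

# List saved recording files under the hidden ~/.eegnb folder (macOS/Linux).
from pathlib import Path

eegnb_dir = Path.home() / ".eegnb"
for csv in sorted(eegnb_dir.rglob("*.csv")):
    print(csv)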

OpenBCI Stream Not Detected on Windows

I first started the OpenBCI live stream using the GUI via the dongle. I then went to eeg-notebooks and ran the following cell:

# Imports needed for this cell to run on its own
from eegnb import generate_save_fn
from eegnb.devices.eeg import EEG

board_name = 'cyton_daisy'
experiment = 'auditory_oddball'
subject_id = 1
session_nb = 1
# record_duration = 1000
record_duration = 30

eeg_device = EEG(device=board_name)

# Create save file name
save_fn = generate_save_fn(board_name, experiment, subject_id, session_nb)
print(save_fn)

I am then given a prompt for the USB port for Windows and after entering COM3 I get the following error:

---------------------------------------------------------------------------
BrainFlowError                            Traceback (most recent call last)
<ipython-input-18-591d2bca8dd7> in <module>
      6 record_duration=30
      7 
----> 8 eeg_device = EEG(device=board_name)
      9 
     10 # Create save file name

c:\users\surya\documents\eegnote\eegnotesep2020\eeg-notebooks-master\eegnb\devices\eeg.py in __init__(self, device, serial_port, serial_num, mac_addr, other, ip_addr)
     48         self.other = other
     49         self.backend = self._get_backend(self.device_name)
---> 50         self.initialize_backend()
     51 
     52     def initialize_backend(self):

c:\users\surya\documents\eegnote\eegnotesep2020\eeg-notebooks-master\eegnb\devices\eeg.py in initialize_backend(self)
     52     def initialize_backend(self):
     53         if self.backend == 'brainflow':
---> 54             self._init_brainflow()
     55         elif self.backend == 'muselsl':
     56             self._init_muselsl()

c:\users\surya\documents\eegnote\eegnotesep2020\eeg-notebooks-master\eegnb\devices\eeg.py in _init_brainflow(self)
    181         self.sfreq = BoardShim.get_sampling_rate(self.brainflow_id)
    182         self.board = BoardShim(self.brainflow_id, self.brainflow_params)
--> 183         self.board.prepare_session()
    184 
    185     def _start_brainflow(self):

c:\users\surya\anaconda3\envs\eegnbsep2020\lib\site-packages\brainflow\board_shim.py in prepare_session(self)
    808         res = BoardControllerDLL.get_instance ().prepare_session (self.board_id, self.input_json)
    809         if res != BrainflowExitCodes.STATUS_OK.value:
--> 810             raise BrainFlowError ('unable to prepare streaming session', res)
    811 
    812     def start_stream (self, num_samples: int = 1800 * 250, streamer_params: str = None) -> None:

BrainFlowError: UNABLE_TO_OPEN_PORT_ERROR:2 unable to prepare streaming session
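
UNABLE_TO_OPEN_PORT almost always means another process still owns the serial port: the OpenBCI GUI keeps COM3 open while its stream is running, so brainflow cannot claim it. A minimal sketch of the likely fix: close the GUI (or stop its stream) first, then pass the port explicitly via the serial_port argument visible in the EEG.__init__ signature in the traceback above:

# With the OpenBCI GUI closed so COM3 is free, hand the port to eegnb directly.
from eegnb.devices.eeg import EEG

eeg_device = EEG(device="cyton_daisy", serial_port="COM3")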

Issues pertaining to Brainflow API

From speaking with Andrey in the OpenBrainTalk Slack channel, I gathered a few things that I want to post here for my own reference and for the sake of documentation. I will try to knock these out gradually:

  1. The latest version of Brainflow supports auto-discovery of the WiFi shield's IP address. I can integrate this when I return to troubleshooting the WiFi shield.

  2. Hardcoding of the COM port for Windows should be fixed here.

  3. Getting the channel and timestamp names can be done more cleanly with the brainflow API (see the sketch below). See here and here.
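
A sketch of what point 3 could look like (the board id is just an example; these BoardShim getters exist in the brainflow API, but the exact integration into eegnb is left open here):

# Query the channel layout from brainflow instead of hardcoding it.
from brainflow.board_shim import BoardShim, BoardIds

board_id = BoardIds.CYTON_DAISY_BOARD.value
print(BoardShim.get_eeg_channels(board_id))       # data-row indices of the EEG channels
print(BoardShim.get_eeg_names(board_id))          # channel names, e.g. ['Fp1', 'Fp2', ...]
print(BoardShim.get_timestamp_channel(board_id))  # row index holding timestamps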

make html on mac throws error

Still working on debugging, but to keep track of bugs, here is one; as @JohnGriffiths suspected, this may not work on Mac natively:

(venv) Kyles-MacBook-Pro:doc kylemathewson$ make html
/bin/sh: sphinx-build: command not found
make: *** [html] Error 127
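
"sphinx-build: command not found" just means Sphinx is not installed in the active venv rather than anything Mac-specific; a hedged fix (sphinx-gallery is also needed, as the doc-build traceback earlier shows it driving the examples):

(venv) $ pip install sphinx sphinx-gallery
(venv) $ make html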
