neurotechx / eeg-notebooks_v0.1

Previous version of eeg-notebooks

Home Page: https://neurotechx.github.io/eeg-notebooks

License: BSD 3-Clause "New" or "Revised" License

Jupyter Notebook 98.60% Python 1.37% MATLAB 0.03%

eeg-notebooks_v0.1's Introduction

(Note: eeg-notebooks is now at version 0.2, with some major changes to the API and code base. The current, up-to-date code base can be found here. This repo is the original one, which has been frozen at version 0.1 and renamed. Why do this, rather than putting the legacy code on a separate branch? Because for version 0.2 we stripped out a lot of old files (e.g. example data, which now lives in a separate location) to get the repo down to a more manageable size (~30 MB, as opposed to the present repo's 700+ MB). So please go check out the shiny new and improved library, and do get in touch (ideally in the form of an issue in the new repo) if you have any issues or questions!)

EEG Notebooks

A collection of classic EEG experiments implemented in Python and Jupyter notebooks. This repo is a work in progress with the goal of making it easy to perform classical EEG experiments and automatically analyze data.

Currently, all experiments are implemented for the Muse EEG device and based on work done by Alexandre Barachant and Hubert Banville for the muse-lsl library.

Please see the documentation for advanced installation instructions and complete info about the project.

Getting Started

Installation

If you are a Mac user, follow the installation instructions here

You will need a Muse 2016 and Python installed on your computer. Psychopy, the stimulus presentation library that underlies most of the experiments, officially only supports Python 2. However, some users, especially those on Linux, have been able to work entirely in Python 3 without any issues.

git clone https://github.com/neurotechx/eeg-notebooks

Install all requirements.

pip install -r requirements.txt

See here for more detailed setup instructions for Windows operating systems.

Running Experiments

Open the experiment you are interested in running from the notebooks folder. Notebooks can be opened either with the Jupyter Notebook browser environment (run jupyter notebook) or in the nteract desktop application.

All experiments should be able to be performed entirely within the notebook environment. On Windows 10, you will want to skip the Bluetooth connection step and instead start an EEG data stream through the BlueMuse GUI.

*Note: if errors are encountered while viewing the EEG data, try starting the viewer directly from the command line (muselsl view). Version 2 of the viewer may work better on Windows computers (muselsl view -v 2).*

The basic steps of each experiment are as follows:

  1. Open an LSL stream of EEG data.
  2. Ensure that EEG signal quality is excellent and that there is very little noise. The standard deviation of the signal (displayed next to the raw traces) should ideally be below 10 for all channels of interest.
  3. Define subject and session ID, as well as trial duration. Note: sessions are analyzed independently. Each session can contain multiple trials or 'run-throughs' of the experiments.
  4. Simultaneously run stimulus presentation and recording processes to create a data file with both EEG and event marker data.
  5. Repeat step 4 to collect as many trials as needed (4-6 trials of two minutes each are recommended in order to see the clearest results)
  6. Load experimental data into an MNE Raw object.
  7. Apply a band-pass filter to remove noise
  8. Epoch the data, removing epochs where amplitude of the signal exceeded a given threshold (removes eye blinks)
  9. Generate averaged waveforms from all channels for each type of stimulus presented

Notebooks in the old_notebooks folder contain only the data analysis steps (6-9). They can be used via the run_experiments.py script (e.g. python run_eeg_experiment.py Auditory_P300 15 1).
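The analysis steps (6-9) above can be sketched in a library-agnostic way. The notebooks themselves use MNE; the following is a minimal NumPy/SciPy illustration only, and the function name, sampling rate, and rejection threshold are illustrative assumptions, not the repo's API:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def analyze(eeg, events, fs=256, lo=1.0, hi=30.0,
            tmin=-0.1, tmax=0.8, reject_uv=100.0):
    """Band-pass filter, epoch around events, reject noisy epochs, average.

    eeg:    1-D signal (one channel, in microvolts)
    events: sample indices of stimulus markers
    """
    # 7. Band-pass filter to remove drift and high-frequency noise
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    # 8. Epoch the data, dropping epochs whose amplitude exceeds the threshold
    start, stop = int(tmin * fs), int(tmax * fs)
    epochs = [filtered[i + start:i + stop] for i in events
              if 0 <= i + start and i + stop <= len(filtered)]
    kept = [e for e in epochs if np.abs(e).max() < reject_uv]
    # 9. Average the surviving epochs into an evoked waveform
    return np.mean(kept, axis=0)
```

In the real notebooks these steps operate on multi-channel MNE Raw and Epochs objects, with one evoked waveform per stimulus type.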

Currently available experiments:

  • N170 (Faces & Houses)
  • SSVEP
  • Visual P300
  • Cueing (Kyle Mathewson)
  • Baseline (Kyle, Eyes Open vs. Closed, needs notebook made)

eeg-notebooks_v0.1's People

Contributors

amandakeasson, caydenpierce, existentialist-robot, gsajko, hubertjb, jdpigeon, johngriffiths, kylemath, nvitucci


eeg-notebooks_v0.1's Issues

SSVEP Frequency functions

I am testing the SSVEP stimulus presentation on the new branch and am getting some strange numbers for the frequencies. The problem seems to be due to the following line of code:

frame_rate = np.round(mywin.getActualFrameRate()) # Frame rate, in Hz

For some reason, on my laptop with a 60 Hz monitor, that value returns at above 2000, producing flickers that are way too fast for my monitor.
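A defensive workaround is to sanity-check the measured frame rate and fall back to a nominal value when the measurement is missing or implausible. This is a hypothetical helper, not part of the repo:

```python
def safe_frame_rate(measured, nominal=60.0, lo=30.0, hi=240.0):
    """Return the measured frame rate if plausible, else the nominal one.

    getActualFrameRate() can return None (measurement failed) or wildly
    implausible values (e.g. >2000 Hz) on some machines.
    """
    if measured is None or not (lo <= measured <= hi):
        return float(nominal)
    return float(measured)

# Usage with PsychoPy (assumption: mywin is a psychopy.visual.Window):
# frame_rate = np.round(safe_frame_rate(mywin.getActualFrameRate()))
```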

video?

Hello,
I was able to get the scripts to run - such a cool project! thank you.
I tried to get a video presentation to work, but I failed. After overcoming psychopy-video difficulties, I am able to run a video-presentation script (essentially the version that is in psychopy's examples), but when I try to interface that script with the run_experiment.py files, which call the multiprocessing/pool functions, then it fails. I tried to refactor the code, but it seems that the video + multiprocessing doesn't work (i have an intuition why that might be the case, but couldn't find much on the web). Does anybody have deeper insights or ideas for how to make it work?
best r

bluemuse and psychopy timestamps in different units

@jdpigeon @kowalej @derekbeaton

I think I've got to the bottom of the problem observed by Derek that the markers column in the data##.csv file is all empty when connecting with bluemuse.

The problem seems to be due to the units of the timestamps.

Example: here are the first few lines of an output data.csv file

timestamps,TP9,AF7,AF8,TP10,Right AUX,Marker0
1527976488650.125,999.512,-912.598,-915.527,-690.918,-603.516,2
1527976488654.031,-991.699,-794.434,-822.266,-475.586,-398.926,0
1527976488657.938,953.613,-875.488,-916.504,-496.582,-496.094,0
1527976488661.844,-913.574,-988.770,-990.234,-634.277,-632.324,0
1527976488665.750,811.523,-881.348,-893.555,-813.965,-727.051,0

The problem is that, apart from the first row, the final column contains all zeros.

To check this I added in the following line underneath the while loop in lsl-record.py

open('jg_markers_test.csv', 'w').writelines(['%s %s\n' %(m[0][0], m[1]) for m in markers])

Here are the first few lines of this output file

2 1848.45728328
1 1849.20626896
2 1849.79690854
1 1850.48569322
2 1851.05877687
2 1851.61607933
1 1852.19659235

Basically, it looks like the timestamps from psychopy are in different units from the ones output by bluemuse.

This confuses the final few lines of lsl-record.py, which look for the closest value in the eeg timestamps matching each psychopy marker

https://github.com/alexandrebarachant/muse-lsl/blob/master/lsl-record.py#L99

Now trying to figure out what to do about this.
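For reference, the nearest-timestamp matching that lsl-record.py performs can be sketched like this. This is a hypothetical helper that assumes both streams have already been converted to the same units and clock, which is exactly the step that is failing here:

```python
import numpy as np

def align_markers(eeg_ts, marker_ts, marker_vals):
    """Match each marker to the nearest EEG sample and build a marker column.

    Assumes eeg_ts is sorted and both timestamp arrays share units and
    clock; if one stream is in ms-since-epoch and the other in seconds
    on a local clock, convert and offset-correct first.
    """
    eeg_ts = np.asarray(eeg_ts)
    marker_ts = np.asarray(marker_ts)
    # Candidate insertion points, then pick the nearer neighbour
    idx = np.searchsorted(eeg_ts, marker_ts)
    idx = np.clip(idx, 1, len(eeg_ts) - 1)
    nearer_left = (np.abs(marker_ts - eeg_ts[idx - 1])
                   < np.abs(marker_ts - eeg_ts[idx]))
    idx = idx - nearer_left
    markers = np.zeros(len(eeg_ts), dtype=int)
    markers[idx] = marker_vals
    return markers
```

If the clocks differ, nearest-match against the wrong timeline silently produces the all-zeros marker column described above.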

Convert dataset to BIDS format

As we gather more and more data it'll be important to have a clear specification for how it'll look.

Fortunately, the hard work of designing a structure for organizing neuroimaging data has already been done by the BIDS project.

I propose we refactor our current data folder to the BIDS format shown here in this template: https://github.com/INCF/BIDS-Starter-Kit/tree/master/templates

We'll also have to change the notebooks to record data in the appropriate dirs
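As a sketch, a hypothetical helper for building BIDS-style paths could look like the following. The sub/ses/task entities follow the BIDS naming pattern; the exact labels and extension here are assumptions:

```python
from pathlib import Path

def bids_eeg_path(root, subject, session, task, ext=".csv"):
    """Build a BIDS-style EEG path, e.g.
    root/sub-01/ses-01/eeg/sub-01_ses-01_task-n170_eeg.csv
    """
    sub, ses = f"sub-{subject:02d}", f"ses-{session:02d}"
    name = f"{sub}_{ses}_task-{task}_eeg{ext}"
    return Path(root) / sub / ses / "eeg" / name
```

The notebooks would then write recordings to these paths instead of the current flat data folder.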

Issues running on Mac

This issue will host advanced instructions, common issues, and discussion related to setting up and running EEG notebooks on Mac computers.

Common Issues

  1. Running muse-lsl.py in terminal leads to The command was not found or was not executable: gatttool
  2. Library "c" not found: alexandrebarachant/muse-lsl#63
  • Change directory in import to get ctypes library (see instructions)

Issues running on Windows

This issue will host advanced instructions, common issues, and discussion related to setting up and running EEG notebooks on Windows computers.

Common Issues

  1. pygatt.exceptions.NotConnectedError: No BGAPI compatible device detected
  • If you are using a dongle and getting the error you may need to make a quick fix to your system's version of the pygatt library. First, find the folder where bgapi.py is running with the following:
import os
import pygatt
print(os.path.join(os.path.dirname(pygatt.__file__), 'backends', 'bgapi', 'bgapi.py'))

Then add the following to line 191 of bgapi.py and save:
time.sleep(.25)

  • Another, dumber solution that can often fix this issue when it is persistent is to restart Windows, making sure that you keep the dongle inserted and the Muse turned on.
  2. Psychopy scripts failing with errors like

Exception AttributeError: "'NoneType' object has no attribute 'close'" in <bound method Window.__del__ of <psychopy.visual.window.Window object at 0x0000000012136898>> ignored

and/or

es\pyglet\libs\win32\__init__.py", line 237, in <module> _user32.GetRawInputData.argtypes = [HRAWINPUT, UINT, LPVOID, PUINT, UINT] NameError: name 'PUINT' is not defined

This occurs if an unsupported pyglet version is being used.

Currently, pyglet versions 1.3.1+ don't seem to work on Windows. Discussion of this here

The fix is:

pip install pyglet==1.2

(which presumably you didn't already do when installing)

  3. Psychopy hangs and cannot be quit

This is a tricky one. You need to kill the terminal, which psychopy can make difficult.

The best trick I currently have when dealing with this problem is to swipe left on my touchscreen, and then kill the terminal by touching the 'x' to kill the anaconda terminal running psychopy.

This is of course not ideal.

  4. Psychopy running slowly

The visual tasks (N170, P300) are rapid visual presentation paradigms. Sometimes psychopy appears to run much slower than it should (e.g. taking several seconds for each picture).

This appears to be primarily a RAM issue. This can be helped by:

  • Closing all concurrently running processes (web browser, etc.)
  • Closing Windows desktop sticky notes
  • Closing the anaconda terminal, starting a new one, and trying again

This issue requires further assessment, quantification, and solutions.

  5. pygatt.exceptions.BLEError: No characteristic found matching 273e0003-4c4d-454d-96be-f03bac821358

This is an error message that comes after python muse-lsl.py has successfully detected a headband.

At a similar time I observed issues with the same headband connecting to BlueMuse and to the phone app.

I have observed that the solution described here appears to fix this (though I'm not 100% sure it was the reason):

To solve the error, I had to connect the headband once with the official phone app; then it worked again.

  6. Conda installations give a 'permission denied' error

I have noticed this on several occasions. It appears that simply re-running the installation command a second time will complete with no errors.

  7. Persistent disconnects with BlueMuse
  • Try downgrading BlueMuse to version 1.0.7.0; it's the version we've found to be most stable

SSVEP stimulus presentation crashes on Windows Surface Tablet Pro

SSVEP stimulus presentation crashes on Windows due to the mywin.getActualFrameRate function returning None. Unfortunately, even increasing the nMaxFrames parameter doesn't seem to allow the function to resolve with a return value.

This is likely a Surface Pro-specific issue caused by its unusual display properties.

Cannot open input file 'hdf5.lib'

Issue encountered on Windows using a new Python 3 Miniconda install.

Installing the HDF5 tools didn't solve the issue (and required an annoying account registration).

Switching to Python 2 has allowed us to bypass the issue.

[HowTo]EEG Analysis across many subjects with cross validation

Hi,
Your examples are very helpful. In particular, I am looking at this one: https://github.com/NeuroTechX/eeg-notebooks/blob/master/notebooks/SSVEP.ipynb

I see that the dataset actually contains data from only one subject. I am unable to find any examples of MNE and ML analysis applied to a group of subjects and used for classification and prediction.
Can you please give an example of this? For example, if we have resting-state EEG from 10 normal and 10 MCI subjects, each with n_channels * n_times (where n_times is the number of sample points, different for every subject), is it possible to fit a model and predict (by cross-validation) MCI or Normal for a given individual's EEG signal?
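One common approach for group-level questions like this is leave-one-subject-out cross-validation on fixed-length per-trial features (e.g. band power), which sidesteps the varying n_times. A minimal NumPy sketch, where the nearest-class-mean classifier is a toy stand-in for a real model, might look like:

```python
import numpy as np

def leave_one_subject_out(features, labels, subjects, fit, predict):
    """Leave-one-subject-out cross-validation (sketch).

    features: (n_trials, n_features) per-trial feature array
    labels:   (n_trials,) class labels
    subjects: (n_trials,) subject ID for each trial
    fit/predict: classifier callables (any model with this shape works)
    """
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        model = fit(features[~test], labels[~test])
        accs.append(np.mean(predict(model, features[test]) == labels[test]))
    return float(np.mean(accs))

# Toy nearest-class-mean classifier, so the sketch is self-contained
def fit_means(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_means(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]
```

In practice you would replace the toy classifier with something like a regularized linear model, and derive the features per trial so every subject contributes arrays of the same width.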

Behavioural EEG Feedback

  1. Design a really good set of instructions that the subject will undergo before running the experiment: "relax your face, keep your jaw slack" etc.
  2. Show the participants the EEG stream beforehand, including blinks so they get a good idea of what the experiment involves
  3. Prod the user to really focus on the oddball stimulus. If need be, lie to them about the importance of counting the oddballs. However, make sure they also don't physically react to the stimuli
  4. Consider changing the 'no-stimuli' event to a gray box that is the same size as the stimuli in order to minimize the 'flashing' effect that can cause eye strain and possibly affect results.

Issues running on Linux

This issue will host advanced instructions, common issues, and discussion related to setting up and running EEG notebooks on Linux computers.

Common Issues

  1. pygatt.exceptions.BLEError: Unexpected error when scanning: Set scan parameters failed: Operation not permitted
  • This is an issue with pygatt requiring root privileges to run a scan. Make sure you have libcap installed (sudo apt install libcap-dev) and run sudo setcap 'cap_net_raw,cap_net_admin+eip' `which hcitool`
  2. pygatt.exceptions.BLEError: No characteristic found matching 273e0003-4c4d-454d-96be-f03bac821358
  • There is a problem with the most recent version of pygatt. Work around this by downgrading to 3.1.1: pip install pygatt==3.1.1
  3. After connecting to Muse, BLEError: No characteristic found matching UUID
  • No explanation for this yet. Trying a different headband or charging the current one seems to fix the issue often enough, though.
  4. Package gtk+-3.0 was not found in the pkg-config search path
  • Had to install the GTK dev package: sudo apt-get install build-essential libgtk-3-dev
  5. GStreamer not available
  • Had to install wxPython with pip install -U -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-16.04 wxPython
  6. libSDL-1.2.so.0: cannot open shared object file: No such file or directory
  • You might encounter this trying to run the stimulus presentation scripts on a 64-bit computer. To install the 32-bit libraries necessary to run this code, see https://askubuntu.com/questions/226613/how-do-i-install-the-library-libsdl-image-1-2-so-0-required-to-run-dwarf-fortres/226762

auditory stim presentation not working on windows

Am currently getting the following error with auditory stim presentation in windows

> python stimulus_presentation/generate_Auditory_P300.py

...

...

(base) C:\Users\John\GitBash\eeg-notebooks>python stimulus_presentation/generate_Auditory_P300.py
Traceback (most recent call last):
  File "stimulus_presentation/generate_Auditory_P300.py", line 40, in <module>
    aud1 = sound.Sound('C', octave=5, sampleRate=44100, secs=0.2, bits=8)
TypeError: __init__() got an unexpected keyword argument 'bits'
2.0765  INFO    Loaded SoundDevice with PortAudio V19.6.0-devel, revision 396fe4b6699ae929d3a685b3ef8a7e97396139a4
2.0772  INFO    sound is using audioLib: sounddevice

I have put an attempted fix on a new branch, removing the bits argument, but it doesn't really fix the problem (a few sounds play and then errors occur).

Synchronization issue when recording

Hi all,

I've tried running the P300 notebook with a Muse 2 using Python 3 and muse-lsl 2.0.1 on a Linux machine. Besides a couple of fixes on imports that I can add with a PR, I couldn't get the stimulus.start() + recording.start() section to work.
The Muse is working correctly and I can view the stream, but when I get to that section it either fails with a "Cannot find EEG stream" message or, if I invert the order of the calls, with an exception related to an empty list (sorry, I don't have the full exception handy at the moment, but I can attach it later).
The interesting bit is that, if I start the stimulus and press "Esc" within a couple seconds, the EEG stream will actually be found; the markers will not be found anyway, but the recording will go on. Any ideas on what it could be?

lsl-record.py on mac

lsl-record.py, line 35:
marker_streams = resolve_byprop('name', 'Markers', timeout=2)

The timeout period was not long enough when I ran this on my Macbook, so the marker stream was not found.

Increasing the timeout period to 10s fixed the problem.
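More generally, a short retry loop can be more robust than a single long timeout. Here is a hypothetical wrapper that works with any zero-argument resolver, e.g. lambda: resolve_byprop('name', 'Markers', timeout=2):

```python
import time

def resolve_with_retry(resolve, attempts=5, delay=2.0):
    """Call a stream-resolving function until it returns a non-empty list.

    `resolve` is any callable returning a (possibly empty) list of streams.
    Raises RuntimeError if nothing is found after all attempts.
    """
    for _ in range(attempts):
        streams = resolve()
        if streams:
            return streams
        time.sleep(delay)
    raise RuntimeError(f"no stream found after {attempts} attempts")
```

This keeps each individual resolve call fast while still waiting long enough for slow-to-appear streams like the Markers outlet.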
