kylemath / eegedu
Interactive Brain Playground - browser-based tutorials on EEG with Web Bluetooth and Muse
Home Page: http://eegedu.com
License: MIT License
Need better control over the p5.js animation frequency.
One clue is to add p.frameRate(60); another is to use Date.now() instead of counting draw cycles.
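The Date.now() approach can be sketched as follows (a minimal sketch; flickerState and flickerHz are hypothetical names, not from the repo):

```javascript
// Derive the animation state from wall-clock time instead of the draw-cycle
// count, so the flicker frequency stays correct even when p5.js drops frames.
function flickerState(startMs, nowMs, flickerHz) {
  const elapsedSec = (nowMs - startMs) / 1000;
  // on for the first half of each cycle, off for the second half
  return Math.floor(elapsedSec * flickerHz * 2) % 2 === 0;
}

// Inside a p5 sketch this might be used as:
// const start = Date.now();
// p.draw = () => {
//   p.frameRate(60);
//   p.background(flickerState(start, Date.now(), 10) ? 255 : 0);
// };
```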
Currently older versions load on locally served and deployed versions. @korymath suggested two fixes; I implemented this one in the index.html file, which doesn't fix the problem:
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
@korymath also suggested doing something else in the .js file, not sure how
References: here, here, and the YAML file here
Only the steps under: "Uninstalling the Google Cloud Build app" -- let me know if you are able to do this.
I think this one is ready to go right now, want to add a record button
Describe the bug
The saved CSV file does not contain any markers (stimulus or response). Tried on a Windows tablet and an Android smartphone.
To Reproduce
Steps to reproduce the behavior:
Desktop (please complete the following information):
Smartphone (please complete the following information):
We should write up the readme for clarity, and in doing so compile a draft of a paper for CAN-ACN 2020.
Link to poster submission
There are a few changes which should be made before this repository goes public:
@kylemath noted some testing paradigms in the Slack channel for how to properly test a PR / new feature. We should add some of those details more formally to the README to make sure that at least we are all on the same page ...
Currently we are outputting a mock data stream which is a sequence of random numbers. We might be curious to control the data stream by outputting a sine wave or a wave that is corrupted by NaN values to see how the various modules might react to these modifications.
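One way to sketch such a controllable mock stream (assuming a Muse-like 256 Hz sample rate; mockSample is a hypothetical helper, not the repo's current mock):

```javascript
// Emit a sine wave instead of random numbers, optionally corrupting every
// nth sample with NaN, to see how downstream modules react.
function mockSample(i, { freqHz = 10, sampleRate = 256, nanEvery = 0 } = {}) {
  if (nanEvery > 0 && i % nanEvery === 0) return NaN;
  return Math.sin(2 * Math.PI * freqHz * (i / sampleRate));
}

// Two seconds of a 10 Hz sine with a NaN every 64 samples:
const samples = Array.from({ length: 512 }, (_, i) =>
  mockSample(i, { freqHz: 10, nanEvery: 64 }));
```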
Now that we are saving .csv data files of raw data with markers, perhaps we could make analysis modules to load that data, change settings, and look at results in real time
Is your feature request related to a problem? Please describe.
data files are made with no analysis code
Describe the solution you'd like
another module to load data and analyze it
Describe alternatives you've considered
analyzing it offline, less fun
Want to change all references to MuseFFT to be EEGEdu, including the main .js file names, the tab names, and the class name in the tabs
the FFT is just a subset of what EEGEdu does
MuseFFTSpectra
MuseFFTRaw
export class MuseFFTRaw extends Component {
becomes
EEGEduSpectra
EEGEduRaw
export class EEGEduRaw extends Component {
Jonathan Kuziek 3:27 PM
Hmmm well not really a setting but maybe have way to show the different plots we've been making for the stroke study? Brain symmetry, background spectra, spectra - background, lateralised plots (Tp9 and Tp10 on same plot), frequency ratios (delta+theta/alpha+beta). Would be interesting to see these in real time and then have a summary report after the 3 minutes.
Also I suppose a way to indicate impedance/variability in the data would be nice without having to look at the raw data (maybe a number to indicate variability like muselsl?).
Ummm, I suppose another thing that would be useful would be a way to customise the AUX channel so that you can tell the app where on the brain (or body) the AUX channel is connected, and it could process the data differently based on that?
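The frequency-ratio idea from the chat could be sketched like this (a minimal sketch; bandRatio and its band-power inputs are hypothetical, e.g. mean powers taken from the spectra module):

```javascript
// Compute (delta + theta) / (alpha + beta) from mean band powers.
function bandRatio({ delta, theta, alpha, beta }) {
  const denom = alpha + beta;
  return denom === 0 ? NaN : (delta + theta) / denom;
}
```

Plotted over time, this single number could back both a real-time ratio display and the summary report after the 3 minutes.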
For future grant applications, etc., we want to start tracking the number of visitors and their locations; setting up analytics soon would be good.
Describe the bug
When adjusting sliders, the data shows the second-to-last value selected instead of the last.
To Reproduce
Go to the raw data module and adjust the sliders.
Expected behavior
Charts should update to the newest settings.
I think it is a race condition: the pipe gets recreated with the old settings before the state update has completed, similar to this problem: https://stackoverflow.com/questions/30550314/react-input-range-slider-not-propagating-last-value-to-its-owner-in-chrome
function handleIntervalRangeSliderChange(value) {
setSettings(prevState => ({...prevState, interval: value}));
resetPipeSetup();
}
so resetPipeSetup() runs before setSettings has applied. I tried something like this, but the hooks setter doesn't accept a second callback argument:
function handleIntervalRangeSliderChange(value) {
setSettings(prevState => ({...prevState, interval: value}), function () {
resetPipeSetup();
});
}
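The race can be modelled outside React (a minimal Node sketch, not the app's code; in the app, the equivalent fix would be to move resetPipeSetup() into a useEffect whose dependency array contains the affected setting):

```javascript
// `setSettings` applies asynchronously, like a React state update, so a
// synchronous resetPipeSetup() still reads the old value.
let settings = { interval: 100 };
let pipeInterval = null;

function setSettings(updater) {
  // defer the update to a microtask, standing in for React's batching
  return Promise.resolve().then(() => { settings = updater(settings); });
}
function resetPipeSetup() { pipeInterval = settings.interval; }

function buggyChange(value) {
  setSettings(prev => ({ ...prev, interval: value }));
  resetPipeSetup(); // races: still sees interval = 100
}

async function fixedChange(value) {
  await setSettings(prev => ({ ...prev, interval: value }));
  resetPipeSetup(); // runs only after the update has applied
}
```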
We should start adding tests for the functions we are adding to the code base. This will help us unit test consistently as the project grows.
Line 126 in 2382d0a
Present two training sessions where individual does behaviour A in training session 1, and behaviour B in training session 2, then live classification can begin to classify into A or B.
Buttons for train A, train B, run training, run classification. Similar to Gene Kogan webcam examples, or eeg101.
Could also do continuous instead of discrete prediction with training along various parts of a continuous slider as input.
An image plot of the 2D frequency-by-time spectrogram, scrolling across the screen.
Modulate to show the time-domain and, hopefully simultaneously, the frequency-domain data while someone holds two of the Muse electrodes with both hands to see their ECG. Peak picking on the time-domain signal could estimate the period and instantaneous heart rate, which can then be compared to the frequency-domain estimate.
currently working on the frequency domain here: https://github.com/kylemath/EEGEdu/tree/heartrate
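Time-domain peak picking could be sketched like this (a simple local-maximum detector under an assumed sample rate; findPeaks and bpmFromPeaks are hypothetical names, not the heartrate branch's code):

```javascript
// Find indices of local maxima above a threshold.
function findPeaks(signal, threshold) {
  const peaks = [];
  for (let i = 1; i < signal.length - 1; i++) {
    if (signal[i] > threshold &&
        signal[i] >= signal[i - 1] && signal[i] > signal[i + 1]) {
      peaks.push(i);
    }
  }
  return peaks;
}

// Estimate heart rate from the mean inter-peak interval.
function bpmFromPeaks(peakIndices, sampleRate) {
  if (peakIndices.length < 2) return NaN;
  const intervals = peakIndices.slice(1).map((p, i) => p - peakIndices[i]);
  const meanSamples = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  return (60 * sampleRate) / meanSamples;
}
```

The resulting bpm estimate is what would be checked against the frequency-domain one.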
I’d love to be able to use my Notion for this! NotionJS Docs
Are the maintainers open to that? Shouldn't be too bad, seeing as no Bluetooth drivers are needed. The only big difference is that you need authorization into the device.
This software should be running on a live website front-end.
This would allow for users to test the system's performance on a front end without having to deal with installing node.js packages and the like.
Need to test the evoked module with photocells
All of the modules are listed here: https://github.com/kylemath/EEGEdu#eegedu-curriculum
We should link them to the corresponding files so that we might easily jump into the corresponding code to update/edit/pull ideas.
Start of an issue to work on continuation of recording features introduced in #77
We want to change a few settings on the fly in the browser so people can see the effects of different filtering settings, different fft settings, etc.
so two easy ones are
can we change the filter in raw to toggle on vs off?
can we change the filter cutoffs with inputs from user?
can we change the fft bins: 256 number in the spectra?
relevant closed issue from pipes:
neurosity/eeg-pipes#3
What are the lessons we're trying to teach? What are the applications for this?
Let's start with:
Basic functionality is covered in #7
https://github.com/neurosity/eeg-pipes/blob/master/src/utils/createEEG.js
need to change the input of the pipe to this
need some way to turn this mode on and off that is not client-facing? Perhaps we use this as a default if their Muse doesn't connect
Is your feature request related to a problem? Please describe.
It would be helpful to expose the classifier decision in a variety of ways and means so that the EEGEdu prediction backend could be used as a controller for other output tasks.
Describe the solution you'd like
add some sort of simple interface to collect short (1s?) measurements for user-defined states. The experimenter could add as many states as desired, and for each state, store as many EEG snippets as wanted. Then at test time let the user try to invoke one of those states (using a nearest neighbour classifier) and expose the output of the classifier as a MIDI message. This way the experimenter could try to train a brain-wave based musical instrument, in the spirit of Rebecca Fiebrink's work, or a controller.
Describe alternatives you've considered
Currently there is the ability to predict brain states, but the output is limited to the visual playground. One potential interface-centric library for making such MIDI interaction straightforward would be MIDI.js.
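The nearest-neighbour step could be sketched like this (a minimal sketch; the feature vectors stand in for the 1 s EEG snippets, and all names are hypothetical):

```javascript
// Euclidean distance between two equal-length feature vectors.
function dist(a, b) {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

// Classify x by the label of the closest stored example.
function nearestNeighbour(examples, x) { // examples: [{ label, features }]
  let best = null;
  for (const ex of examples) {
    const d = dist(ex.features, x);
    if (best === null || d < best.d) best = { label: ex.label, d };
  }
  return best && best.label;
}
```

The winning label could then be mapped onto a MIDI message by whatever MIDI library is chosen.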
Is your feature request related to a problem? Please describe.
Currently EEGEdu is not measuring the performance of the app using the full extent of Firebase tools.
Solution Path
As introduced in #102
TODO: introduce logic so that the predict button is only active after x samples of each class, the sample buttons are inactive during prediction, and there is a button to stop prediction.
Classic ERP experiment
Either auditory or visual stimuli
~700 ms apart with random jitter
2 types
red vs blue
high vs low tone
we collect 30-50 trials, 20% of which are one condition
we save a snip of data locked to the onset for each presentation
200 ms before and 1000 ms after
we append this to an array 1200 ms by number of trials, one for each condition
we subtract the 200 ms baseline average from the whole trial for each channel and trial
we average over the trials for each channel, average over hemispheres and get a single time series for each condition
plot this and save it
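The baseline-correction and averaging steps above could be sketched like this (plain arrays standing in for one channel's epochs; names and sample counts are hypothetical):

```javascript
// Subtract the mean of the first `baselineSamples` points (the 200 ms
// pre-stimulus window) from the whole trial.
function baselineCorrect(trial, baselineSamples) {
  const mean = trial.slice(0, baselineSamples)
    .reduce((a, b) => a + b, 0) / baselineSamples;
  return trial.map(v => v - mean);
}

// Average equal-length trials pointwise into one time series per condition.
function averageTrials(trials) {
  const n = trials.length;
  return trials[0].map((_, i) =>
    trials.reduce((sum, t) => sum + t[i], 0) / n);
}
```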
All CSS rules should be listed in index.html
public/index.html
We do not need a set of css that will be included from the src/ directory.
There is an extra channel people can get by plugging an electrode into the auxiliary port, with the metal of the electrode attached to a certain pin of the port. We will want to include an option to use that auxiliary channel.
Also, we will want the ability to plot just a single channel for some of these; not sure of the right UX for selection and will think about this more.
from: https://github.com/urish/muse-js/blob/master/README.md
Auxiliary Electrode
The Muse 2016 EEG headset contains four electrodes, and you can connect an additional auxiliary electrode through the Micro USB port. By default, muse-js does not read data from the auxiliary electrode channel. You can change this behavior and enable the auxiliary electrode by setting the enableAux property to true, just before calling the connect method:
async function main() {
let client = new MuseClient();
client.enableAux = true;
await client.connect();
}
buttons to save csv files
After viewing the spectra, people should pick certain frequency bands and view them over time with visual or auditory feedback.
The page connects to the Muse but no data is plotted.
The axis of the plot is updated with new labels, so it is getting that part.
want to have ability to control variables in p5.js animations with brain waves (alpha power, etc.)
This is a little disingenuous. The disconnect button just refreshes the page, which works to disconnect the data stream, but it also destroys any work the user might have done in the sliders/textbox.
I would recommend removing the button; the user can choose to refresh as desired. As it stands, they might develop with mock data, disconnect the mock data to switch to the Muse, and then lose their cool animation.
Chrome
OSX
Muse 1 2016
Muse 2
big flow questions
how does the page navigation work
if you connect with web bluetooth can you move to a new page
and stay connected
or do the parts of the lesson all happen in one page, scrolling, ...
show the raw data scrolling in, auto-scale the axis, some measure of data quality with a colour number?