neurosity / eeg-pipes

Digital signal processing utilities as RxJS operators for working with EEG data in Node and the Browser

Home Page: https://neurosity.github.io/eeg-pipes

License: MIT License
eeg-pipes's Introduction

EEG Pipes

By Neurosity

Blazing fast EEG transformers implemented as "Pipeable" RxJS operators for Node and the Browser.

Features include:

  • FFT
  • PSD and Power Bands
  • Buffering and Epoching
  • IIR Filters
  • Signal Quality (new)
  • and more.

Read the full documentation: https://neurosity.github.io/eeg-pipes

Getting started

Install the library:

npm install @neurosity/pipes

Then import the module:

ESM
import { epoch } from "@neurosity/pipes";
Node
const { epoch } = require("@neurosity/pipes");
Browser
<script type="module">
  import { epoch } from "./node_modules/@neurosity/pipes/esm/eeg-pipes.mjs";
</script>
<script nomodule src="./node_modules/@neurosity/pipes/browser/eeg-pipes.js"></script>
Electron
import { epoch } from "@neurosity/pipes/dist/electron";

Usage

An Observable of EEG data is required to work with pipes. This can be done by using fromEvent from RxJS in order to push callback events into an Observable stream.

Given a callback-driven API such as:

bci.on("data", () => { ... });

Then...

import { fromEvent } from "rxjs";

const eeg$ = fromEvent(bci, "data");

Now we have an Observable of EEG data that supports Pipeable operators.

eeg$.pipe().subscribe();

Some libraries, such as muse-js (used in the issues below), provide EEG data as RxJS Observables out of the box.

Pipes can be added to an Observable of EEG data samples with the following data structure:

{
  data: [Number, Number, Number, Number], // channels
  timestamp: Date,
  info?: {
    samplingRate?: Number,
    channelNames?: [String, String, String, String],
    ...
  }
};

Individual samples of EEG data contain an array of values for each EEG channel as well as a timestamp. An additional info object containing metadata about the EEG stream such as sampling rate and channel names can also be included or added with the addInfo operator.
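For example, metadata could be attached near the top of a pipe. A minimal sketch, assuming addInfo takes the info object to merge into each sample (check the full documentation for the exact signature; the channel names here are illustrative):

import { fromEvent } from "rxjs";
import { addInfo } from "@neurosity/pipes";

const eeg$ = fromEvent(bci, "data");

// Attach stream metadata to every sample.
eeg$
  .pipe(
    addInfo({
      samplingRate: 256,
      channelNames: ["TP9", "AF7", "AF8", "TP10"]
    })
  )
  .subscribe((sample) => console.log(sample.info));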

Import the pipes from the module:

import { epoch, fft, alphaPower } from "@neurosity/pipes";

Add to RxJS observable pipe:

eeg$
  .pipe(
    epoch({ duration: 256, interval: 100 }),
    fft({ bins: 256 }),
    alphaPower()
  )
  .subscribe((alphaPower) => console.log(alphaPower));

Pipes

Filtering (IIR)

Filter pipes can be applied to both individual samples and buffers of samples. Filters are linear IIR filters using a digital biquad implementation.

  • lowpassFilter({ nbChannels, cutoffFrequency })
  • highpassFilter({ nbChannels, cutoffFrequency })
  • bandpassFilter({ nbChannels, cutoffFrequencies: [lowBound, highBound] })
  • notchFilter({ nbChannels, cutoffFrequency })

Optional parameters:

  • characteristic: 'butterworth' or 'bessel'. Defaults to butterworth because of its steeper cutoff.
  • order: the number of 2nd-order biquad filters applied to the signal. Defaults to 2.
  • samplingRate: should match the sampling rate of your EEG device. Defaults to 250.
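A sketch of a filter in a pipe, assuming a 4-channel headset sampling at 256 Hz (parameter names as listed above; cutoff values are illustrative):

import { fromEvent } from "rxjs";
import { bandpassFilter } from "@neurosity/pipes";

const eeg$ = fromEvent(bci, "data");

eeg$
  .pipe(
    bandpassFilter({
      nbChannels: 4, // must match your device
      cutoffFrequencies: [2, 30], // [lowBound, highBound] in Hz
      samplingRate: 256, // defaults to 250
      characteristic: "butterworth", // or "bessel"
      order: 2 // number of cascaded 2nd-order biquads
    })
  )
  .subscribe((sample) => console.log(sample.data));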

Frequency

  • bufferFFT({ bins, window, samplingRate })
  • alphaPower()
  • betaPower()
  • deltaPower()
  • gammaPower()
  • thetaPower()
  • averagePower()
  • sliceFFT([ min, max ])
  • powerByBand()
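A sketch combining the frequency pipes; the output shape of powerByBand (an object keyed by band, one value per channel) is assumed from the output quoted in the issues below:

import { fromEvent } from "rxjs";
import { bufferFFT, powerByBand, sliceFFT } from "@neurosity/pipes";

const eeg$ = fromEvent(bci, "data");

// Average power per classic EEG band, per channel.
eeg$
  .pipe(bufferFFT({ bins: 256 }), powerByBand())
  .subscribe((bands) => console.log(bands.alpha));

// Or keep only the 8-12 Hz slice of the spectrum.
eeg$
  .pipe(bufferFFT({ bins: 256 }), sliceFFT([8, 12]))
  .subscribe((slice) => console.log(slice));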

Unit conversion

  • voltToMicrovolts({ useLog })

Utility

  • epoch({ duration, interval, samplingRate })
  • bufferCount()
  • bufferTime()
  • bufferToEpoch({ samplingRate })
  • pickChannels({ channels: [c1, c2, c3] })
  • removeChannels({ channels: [c1, c2, c3] })
  • addInfo()
  • addSignalQuality()
    • signal quality is represented as the standard deviation value for each channel
  • samples() // epoch to samples
  • concatEpochs()
  • dynamicBuffer({ minSamples: number, maxSamples: number, incrementCountBy: number })
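A sketch chaining two of the utility pipes; selecting channels by name is an assumption here (the channels parameter may also accept indices), and the channel names are illustrative:

import { fromEvent } from "rxjs";
import { pickChannels, epoch } from "@neurosity/pipes";

const eeg$ = fromEvent(bci, "data");

eeg$
  .pipe(
    // Keep only the channels of interest (selection by name assumed).
    pickChannels({ channels: ["AF7", "AF8"] }),
    // Emit 256-sample epochs every 100 ms.
    epoch({ duration: 256, interval: 100, samplingRate: 256 })
  )
  .subscribe((ep) => console.log(ep.info.startTime));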

Coming soon

Filtering

  • vertScaleFilter()
  • vertAgoFilter()
  • smoothFilter()
  • polarityFilter()
  • maxFrequencyFilter()

Data Structures

Sample

This is the simplest, core data structure for individual samples of EEG data. Samples have a data array containing a single reading from a number of different EEG electrodes, along with a single timestamp. Samples can also contain other optional parameters added by the addInfo operator. The design for this comes directly from the discussion on the EEG stream data models repo.

{
    data: [Number, ..., Number], // length == nbChannels
    timestamp: Number,
    info?: {
        samplingRate?: Number,
        channelNames?: [String, ..., String],
        ...
    }
}

Epoch

An Epoch represents the EEG data that has been collected over a specific period of time. They can be produced by using either the epoch operator or by using a standard RxJS buffering operator such as bufferTime followed by the bufferToEpoch operator. Collecting data in this way is necessary for performing frequency-based analyses and, in many cases, will improve performance of filtering and other downstream operations. Epochs contain a 2D data array (channels x samples) and an info object that always includes samplingRate and startTime data so that the timestamps of individual samples can be derived.

{
    data: [
        [Number, ... , Number], // length == duration
        [Number, ... , Number]
    ], // length == nbChannels
    info: {
        samplingRate: Number,
        startTime: Number,
        channelNames?: [String, ..., String ]
    }
}
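As described above, an Epoch stream can also be produced with a standard RxJS buffering operator followed by bufferToEpoch; a minimal sketch:

import { fromEvent } from "rxjs";
import { bufferTime } from "rxjs/operators";
import { bufferToEpoch } from "@neurosity/pipes";

const eeg$ = fromEvent(bci, "data");

eeg$
  .pipe(
    bufferTime(1000), // collect one second of samples
    bufferToEpoch({ samplingRate: 256 }) // convert to channels x samples
  )
  .subscribe((ep) => console.log(ep.info.startTime));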

Power Spectral Density or PSD: Proposed Work in Progress

A PSD represents the absolute power of different frequency bins in an Epoch of EEG data. PSDs are produced by applying the fft operator to Epochs or using the bufferFFT operator directly on a stream of EEG Samples. PSDs contain an array of frequencies and a corresponding array of spectral power at each of those frequencies, as well as an info object that contains samplingRate and startTime info similarly to Epochs.

{
    psd: [
        [Number, ... , Number] // spectral power; length = freqs.length
    ], // length = numChannels
    freqs: [Number, ... , Number], // length = fftLength / 2
    info: {
        samplingRate: Number,
        startTime: Number,
        channelNames?: [String, ..., String ]
    }
}
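The structure can be consumed directly; a sketch that reads the spectral power nearest 10 Hz on the first channel (the psd and freqs fields are as documented above):

import { fromEvent } from "rxjs";
import { epoch, fft } from "@neurosity/pipes";

const eeg$ = fromEvent(bci, "data");

eeg$
  .pipe(
    epoch({ duration: 256, interval: 100, samplingRate: 256 }),
    fft({ bins: 256 })
  )
  .subscribe(({ psd, freqs }) => {
    // Find the first frequency bin at or above 10 Hz.
    const bin = freqs.findIndex((f) => f >= 10);
    console.log(`~10 Hz power, channel 0: ${psd[0][bin]}`);
  });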

Generating documentation

To generate the docs, run npm run build:docs

eeg-pipes's People

Contributors

alexcastillo, andrewjaykeller, biasedbit, dependabot[bot], jaykan, jdpigeon, yannickadam

eeg-pipes's Issues

'Uncaught Package not found fili' error when trying to use eeg-pipes 2.0.2 with muse-js example

I've been trying to use bufferFFT() example code with muse-js example (link).

How to replicate the issue:

  • Run yarn add eeg-pipes in muse-js directory
  • Add import { bufferFFT, alphaPower } from "eeg-pipes"; to top of main.ts file
  • After this line: client.eegReadings.subscribe(reading => {console.log(reading);});, add client.eegReadings.pipe(bufferFFT({ bins: 256 }), alphaPower()).subscribe(buffer => console.log(buffer));
  • Run yarn start and load http://localhost:4445/
  • Open js console and see Uncaught Package not found fili

@jdpigeon suggested using v1.x. I've found v1.2.0 would give an Uncaught Package not found babel-runtime error, but v1.1.0 seems to load fine.

Compute initial state when creating filter operators

Currently, whenever data starts flowing through a pipe with a filter operator the outputs of the filter fluctuate wildly for a few seconds due to 'ringing' as the filter state adjusts to the signal.

In python projects such as muse-lsl, we've overcome this issue by computing an initial state for the filter before the signal is applied. This is easy to do in scipy with the lfilter_zi function, but does not appear to be implemented in fili.

Implementing this might require significant changes to our filtering code so we should probably wait on it until we've figured out how we're going to proceed with filtering, either sticking to fili or forking and developing our own library.
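In the meantime, one possible stopgap (an assumption, not something fili provides) is to warm each filter up on a constant signal equal to the first incoming value, so its internal state starts near steady state before real data flows:

// Hypothetical helper: prime a fili IirFilter before real data flows.
const warmUpFilter = (iirFilter, firstValue, nSamples = 1000) => {
  iirFilter.multiStep(new Array(nSamples).fill(firstValue));
  return iirFilter;
};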

Refactor all power band functions

  1. Convert filterByRange to sliceFFT, which slices an FFTBuffer based on an array of ranges given to it
  2. Write an averagePower operator that takes an FFTBuffer and returns the average power for each channel
  3. Write alpha, beta, theta, etc. power operators that perform the sliceFFT and averagePower operators in sequence with predetermined ranges (see the sketch below)
  4. Write a powerByBand operator that returns an object with average band powers for all the classic EEG bands. Should also be able to take an object which defines names and ranges of custom bands,
    e.g. { delta: [c1, c2, c3, c4], theta: [c1, c2, c3, c4], ... }
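A sketch of item 3, assuming the operators compose as ordinary pipeable functions:

import { sliceFFT, averagePower } from "@neurosity/pipes";

// A band power operator is just sliceFFT plus averagePower with a
// predetermined range (8-12 Hz for alpha).
const alphaPower = () => (source$) =>
  source$.pipe(sliceFFT([8, 12]), averagePower());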

Add smoothPSD operator

Smoothed PSD buffers are quite a bit nicer to look at and work with than raw PSDs. For spectral analysis, they can be more accurate too, as they're less sensitive to noise.

Once we have a solid PSD operator implemented, adding a smoothed version shouldn't be too hard. All we really have to do is collect a list of the last few PSDs and average them (i.e. reduce) to a single array
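A sketch of that reduction, assuming PSDs arrive with the psd[channel][bin] shape described elsewhere in this repo:

import { scan, map } from "rxjs/operators";

// Keep the last n PSDs and emit their element-wise mean.
const smoothPSD = (n = 4) => (source$) =>
  source$.pipe(
    scan((acc, curr) => [...acc, curr].slice(-n), []),
    map((recent) => ({
      ...recent[recent.length - 1], // keep freqs/info of the newest PSD
      psd: recent[recent.length - 1].psd.map((channel, c) =>
        channel.map(
          (_, b) =>
            recent.reduce((sum, p) => sum + p.psd[c][b], 0) / recent.length
        )
      )
    }))
  );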

Missing dependencies in packaged Electron app

There's an error that is popping up in the release builds of my Electron app using the library: Error: Cannot find module 'babel-runtime/helpers/extends'

I think this is related to the babel and webpack mess you got embroiled in when we introduced the createEEG operator, @alexcastillo, but I can't remember exactly where we left it. Wasn't there a fix for the issue we encountered? Maybe it hasn't been published to the npm versions of webpack yet?

Write filtering algorithms in Web Assembly

Not necessary for initial version, but high performance real-time filtering in a JS application seems like the perfect use case for WASM. We should look into implementing FIR and IIR filters in C (or Rust), compiling to WASM, and adding to the library

Feature: eeg-pipes in React Native

I am utilizing eeg-pipes in conjunction with a React Native Android app. I love eeg-pipes's API, but Babel doesn't run unless I have a debugger window open in the browser:

[Screenshot attached: debugger window]

I presume the transpiler needs a browser window to run? I am quite new to front-end and am more than willing to add this feature myself and do a pull request; I may need a suggestion regarding where to start, however.

Info object disappears after PowerByBand

It seems some pipes, like powerByBand(), drop the info object with metadata that is available earlier in the buffer.

You can follow the data in this example:

  • epoch and fft keep the info object.
  • powerByBand removes it (possibly caused by the reducer?)
const eeg = fromEvent(self.eventEmitter, 'server:bci-file:data');

const stream = eeg.pipe(
    epoch({ duration: 256, interval: 100, samplingRate: 250 }),
    fft({ bins: 256 }), // info object with timestamp data is still here
    powerByBand() // after this pipe, the info is gone :(
).subscribe((buffer) => {
    console.log(buffer);
});

I could use an addInfo() after the powerByBand() pipe, but I do not have access to that information anymore.

Is this by design, or possibly an extension that could be made to the functionality?

PS: Thank you for the awesome library, it already helped me a lot with my project.

Make CircularBuffer operator

In order to make the existing frequency operators work with our new data structure we'll have to do some tweaking of the existing bufferFFT operator. Hopefully, this can be done without disrupting too much existing work.

For the majority of EEG use cases, developers will just be interested in getting the Power Spectral Density returned from the FFT function. I think we should hide the inner workings of the fili fft functions as much as possible and just make sure to return the psd and the parameters that were used to calculate it (e.g. samplingRate, binSize, windowLength, etc.). I'm not sure because it's not mentioned in the docs, but I think fft.spectrum, which is what's currently implemented, returns the psd.

If we can also find a way to return an array of frequency bins, that would be nice. If fili doesn't provide that, though, we might not need to worry about it.

I think we should also throw the FFT objects in the bufferFFT pipe into a const array that can be reused for each event (exactly like in the filter operators).

Here's my proposed data structure for the bufferFFT result:

{
    "psd": [
        [Number, Number, ... ] // frequencies
    ], // channels
    "info": {
        "samplingRate": Number,
        "binSize": Number,
        "windowLength": Number,
        "epochLength": Number,
        "startTime": Number
    }
}

Filter ringing from dropped data points

One unfortunate side effect of bluetooth EEG is the occasional dropped data point. Standard procedure in EEG analysis is to mark dropped samples as data is collected and make sure those time points are excluded from analysis.

When using linear filters in real time, though, dropped data points can cause conspicuous ringing artifacts.

Unfortunately, the fili library doesn't handle NaN or null values well. If any NaNs are passed through a filter function, the entire array will return as NaNs. null is treated as 0, which is undesirable as well.

It's up to users of this library to decide how they want to deal with dropped data. Smoothing around dropped data points by averaging points before and after is a good solution. Maybe we could write an operator to handle that.
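A sketch of such an operator, operating on Epochs (data[channel][sample]) and replacing isolated NaNs with the mean of their neighbours; runs of consecutive NaNs would need more care:

import { map } from "rxjs/operators";

const interpolateDropped = () => (source$) =>
  source$.pipe(
    map((ep) => ({
      ...ep,
      data: ep.data.map((channel) =>
        channel.map((value, i, arr) => {
          if (!Number.isNaN(value)) return value;
          // Average whichever immediate neighbours are usable.
          const neighbours = [arr[i - 1], arr[i + 1]].filter(
            (v) => v !== undefined && !Number.isNaN(v)
          );
          return neighbours.length
            ? neighbours.reduce((a, b) => a + b, 0) / neighbours.length
            : 0;
        })
      )
    }))
  );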

Proposed Data Structures

As we develop the library and try to make it easier for others to use, we should formalize the data structures that are used, in order to make the library easier to understand and interoperable with other EEG software (e.g. LSL, MNE).

Here, I've presented some examples of EEG data structures in JS that could work for eeg pipes internally. Mostly, these are adapted from LSL and MNE Python.

Overarching questions

  1. What are we going to do with metadata about the data stream (e.g. sampleRate, channelNames, etc.)? It's not necessary at the basic level for most operators, but for some it is. If we ignore it entirely, then the developer is stuck manually entering things like sampleRate and channelCount into each operator as parameters. Is there some smart way to pass metadata through the pipes, or should that be taken care of outside the pipes library in some other part of the developer's application?

  2. Epoching EEG data is very important for advanced EEG analysis. Usually for real-time data epoching would be implemented by a Circular Buffer or some other form of dynamic data storage that is continually being drawn from to produce epochs with various degrees of overlap for processing. Should we leave creating a persistent data buffer up to the developer or create a circular buffer operator of our own? Importantly, in contrast with the current bufferCount and bufferTime operators, it would have to maintain some internal state so that it can hold on to some of the data within it every time it fires.
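A sketch of what question 2 could look like with scan (size and step in samples; the name and parameters are hypothetical):

import { scan, filter, map } from "rxjs/operators";

const slidingBuffer = ({ size, step }) => (source$) =>
  source$.pipe(
    scan(
      ({ buffer, count }, sample) => ({
        buffer: [...buffer, sample].slice(-size), // retain the last `size` samples
        count: count + 1
      }),
      { buffer: [], count: 0 }
    ),
    // Emit once full, then every `step` samples, keeping the overlap.
    filter(({ buffer, count }) => buffer.length === size && count % step === 0),
    map(({ buffer }) => buffer)
  );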

Sample: unit of continuous data

This will be our core data structure for continuous EEG data (both filtered and unfiltered). The design for this comes directly from the discussion on the EEG stream data models repo and the LSL sample.

{
    "data": [Number, ..., Number], // length == numChannels
    "timestamp": <number>
}

Buffer or Chunk or Raw: chunk of continuous data

In many cases, it will be useful to pool a bunch of data together without processing it. Filtering, for example, works best when it's run on a whole chunk of data at once. Based on MNE's example, it might be best to shape this data structure by channel so that time series operations can be performed easily on arrays that have already been put in the correct order. This structure could also become leaner by exchanging individual timestamps for startTime and sampleRate data.

{
    "data": [
        [Number, ... , Number], // length == numSamples
        [Number, ... , Number]
    ], // length == numChannels
    "info": {
        "sampleRate": Number,
        "startTime": Number,
        "chNames": [String, ..., String ] // length == numChannels
    }
}

Power Spectral Density or FFT Buffer: FFT'd data

Probably important to keep all FFT info for the developer downstream. However, I'm not very familiar with exactly how the fili FFT works

{
    "psds": [
        [Number, ... , Number] // spectral power; length = freqs.length
    ], // length = numChannels
    "freqs": [Number, ... , Number], // length = fftLength / 2
    "info": {
        "sampleRate": Number,
        "binSize": Number,
        "windowLength": Number,
        "epochLength": Number,
        "startTime": Number
    }
}

Values without electrode connected

Hi,

First of all, thank you for the awesome library! This provides just exactly what I want.
I am trying to interpret the values from the eeg-pipes library and the OpenBCI. What I am trying to do is to build a neurofeedback system that specifically targets alpha waves.

When I tried this library without connecting any electrode on my head, I could see some values on my terminal. Is it safe to consider those values as noise values? Attached is the screenshot of the terminal and my code. Thanks for help!

[Screenshots attached: terminal output and code]

Are the absolute bandpower values in log(2)?

The absolute band power values I am getting seem to be log(2) as they are too small to be uV^2 values... Is this the case and if so where can I change it so that I get 'raw' absolute band power values? Thank you for your help in the matter

Persistent filter objects in filter operators

I've designed two different ways to keep track of filters between buffers in the data stream: one storing the filter objects in a variable and one using the scan operator. Both of these implementations seem to eliminate the artifact issue that occurred due to new filter objects being created every time a new buffer came in.

Variable storage:

source$ => {
  var options = {
    order,
    characteristic,
    Fs,
    Fc,
    BW
  };
  var notchArray = new Array(nbChannels)
    .fill(0)
    .map(x => createNotchIIR(options));
  return createPipe(
    source$,
    map(channelGroupBuffer =>
      channelGroupBuffer.map((channel, index) =>
        notchArray[index].multiStep(channel)
      )
    )
  );
};

Execution time for this on 10ch x 1000 samples: 1.14ms

source =>
  createPipe(
    source,
    scan(
      (acc, curr) => {
        return [
          curr.map((channel, index) => acc[1][index].multiStep(channel)),
          acc[1]
        ];
      },
      [
        new Array(nbChannels).fill(0),
        new Array(nbChannels)
          .fill(0)
          .map(x =>
            createNotchIIR({ order, characteristic, Fs, Fc, gain, preGain, BW })
          )
      ]
    ),
    map(dataAndFilter => dataAndFilter[0]) // pluck just the data array to emit
  );

Execution time for this on 10ch x 1000 samples: 1.33ms

Both use a simple function I created for readability:

const createNotchIIR = options => {
  const calc = new CalcCascades();
  const coeffs = calc.bandstop(options);
  return new IirFilter(coeffs);
};

In my opinion, declaring the variable to hold filter arrays is much easier to understand. However, it feels a little bit less like 'the RxJS way' since the operator technically isn't pure anymore.

Curious which one we should use

Add real EEG data samples to MockStream

If we use real EEG data in testing and developing the library, we'll be better able to catch domain-specific peculiarities.

It would also be great if this real data could include some dropped samples (at an appropriate frequency)

Being able to use data from several different devices could also be cool.

Add option to keep data in PSD Data Structure

Currently, when going from Epoch to PSD, we leave behind the data array that contains the raw EEG data. Some people may want to keep this around in their processing pipeline, but for the sake of efficiency, maybe we should just make this an option?

Changing filter operators on the fly

So, I've spent quite a bit of time trying to find a way to change the type of filter applied to a stream of EEG data while it's being collected. Hopefully, we get to the point where we can swap operators in and out of an EEG pipe based on business logic.

Here I have an EEG epic (redux-observable) that uses switchMap to change the filter operator that is applied to an EEG stream based on emitted actions. The insight that let me do this was to place the actual EEG data observable inside an observable listening for filter info actions (itself inside another observable listening for graph start actions). As far as I can tell, the filter is getting applied immediately and I'm not losing any data. However, either the inner EEG data observable doesn't unsubscribe or the filter functions stack on top of each other, because after switching filters once the graph gets all messed up, as if two listeners are writing to the same graph.

const setupGraphUpdate = (action$, store) =>
  action$.ofType(START_GRAPHING).pipe(
    filter(() => store.getState().device.eegObservable != null),
    switchMap(() =>  // switchMap to only propagate events from the most recent filter type observable
      action$.ofType(SET_FILTER_INFO).pipe(
        tap(() => console.log("filter changed")),
        pluck("payload"), //Extracts filterInfo
        switchMap(filterInfo =>
          store.getState().device.eegObservable // Upstream observable stored in Redux
          .pipe(
            groupByChannel(),
            defineFilter(filterInfo), // Switch function returns appropriate filter operator
            map(updateEEGBuffer),
            takeUntil(action$.ofType(RESET_FILTER))
          )
        )
      )
    ),
    catchError(error =>
      Observable.of(error).pipe(
        tap(p => console.log("setupGraphUpdate", p)),
        map(viewerError)
      )
    ),
    takeUntil(action$.ofType(STOP_GRAPHING, VIEWER_CLEANUP))
  );

Any idea what might be going wrong here and what the best solution will be going forward?

Question about correct parameters for recording data

Hello,

My protocol is either 15 minutes of EEG recording or 30 minutes during a relaxation session. I want to get absolute bandpower values (delta,.... gamma) for each minute of recording (0-1, 1-2,... 29-30).

I'm confused on how the EEG recording should be sampled...

For example, you have the following example in your code:

eeg$.pipe(epoch({ duration: 1024, interval: 100, samplingRate: 256 }))

In your example does this mean that the recording is 4 seconds (1024/256)? Is the interval a small "pause" between each sampling? E.g., in the first second of recording 256 samples are acquired, then there is a pause of 100ms and then another 256 samples are acquired etc?? If so can the interval be set to continuous (e.g., by putting 0)?

If this logic is correct, then in order to record 30 minutes of EEG data, the values should be:

eeg$.pipe(epoch({ duration: 460,800, interval: 0, samplingRate: 256 }))

?????

I basically want to record and analyse bandpower of 30 minutes (in 1 minute intervals) using settings similar to this: https://raphaelvallat.com/bandpower.html

e.g., 256 samples per second, 0.5-45 bandpass filter with a 4-second sliding window.

Any help would be super appreciated

Clarification of Epoch interval parameter

Hi, and thanks for this library!

Looking at the doc for Epoch, it is mentioned:
* @param {number} [options.interval=100] Time (ms) between emitted Epochs

However, a bufferCount(interval) is used to accumulate the samples. To really emit every 100ms, in the case of a sampling rate of 256Hz, an interval of ~26 would be needed.

Could you confirm?
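For reference, the reporter's arithmetic converting an interval in milliseconds to a count in samples:

const samplingRate = 256; // Hz
const intervalMs = 100;
// 256 samples/s * 0.1 s ≈ 26 samples between emissions
const intervalSamples = Math.round((samplingRate * intervalMs) / 1000);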

Can anyone explain why the values I get are vastly different?

Hello everyone,

Can anyone tell me why the values are so vastly different?

The zip file includes

  1. the raw recording file (recording.csv) that is a 12-minute long recording of EEG activity using Muse for locations AF7 and AF8
  2. Bandpower analysis of the first minute of recording using YASA, a Python package for EEG analysis: https://raphaelvallat.com/yasa/build/html/generated/yasa.bandpower.html#yasa.bandpower

I apply a 2nd-order Butterworth 0.5-45 Hz bandpass filter and chop the recording in EEGLAB into 12 one-minute chunks. To output bandpower values I use the following YASA code:

#Timepoint 1

# Load data as a set Raw file
raw1 = mne.io.read_raw_eeglab('D:/DavidHugh LTD/Muse/MUSE EXPO/SESSION_1.set',  eog=(), preload=True, uint16_codec=None, verbose=None)
# Load data
data1 = raw1._data * 1e6

sf1 = raw1.info['sfreq']
chan1 = raw1.ch_names
times1 = np.arange(data1.shape[1]) / sf1
print(data1.shape, chan1, times1)

abspower_whole = yasa.bandpower(data1, sf1, win_sec=4, ch_names= chan1, bands=[(25, 42, 'HighFreq'), (4, 13, 'LowFreq'), (0.5, 4, 'Delta'), (1, 2.5, 'DeltaLow'), (2.5, 4, 'DeltaHigh'), (4, 8, 'Theta'), (4, 6, 'ThetaLow'), (6, 8, 'ThetaHigh'), (8,12, 'Alpha'), (8, 10, 'AlphaLow'), (10, 12, 'AlphaHigh'),(12, 15, 'SMR/Mu'),(12, 30,'Beta'),(12, 20, 'BetaLow'), (20, 30, 'BetaHigh'), (30, 45, 'Gamma'), (30, 37.5, 'GammaLow'), (37.5, 45, 'GammaHigh')], kwargs_welch={'average': 'median', 'window': 'hamming'}, relative=False)

These are the values I use from neurosity

bandpassFilter({
  cutoffFrequencies: [0.5, 45],
  nbChannels: window.nchans
}),
epoch({
  duration: 1024,
  interval: 100,
  samplingRate: 256
}),
powerByBand({
  delta: [0.5, 4],
  theta: [4, 8],
  alpha: [8, 12],
  beta: [12, 30],
  gamma: [30, 45],
  high: [25, 42],
  low: [4, 13]
})

The two electrode values are then averaged together to output one value

Keep in mind I don't need the subbands at the moments that's why I don't include them in neurosity yet.

These are the values I get from the neurosity output for minute 1:
alpha":[3.762604556733285
"beta":[1.6242335911733488
"theta":[6.85495889467235
"gamma":[0.809086470853267
"delta":[13.7046152939892
"high":[0.9377043598324029
"low":[5.282685355879077

These are the values I get from YASA for minute 1:

HighFreq | 12.04025
LowFreq | 258.9931
Delta | 632.0063
Theta | 206.5474
Alpha | 47.12576
Beta | 41.51031
Gamma | 8.44468

Why are they so different? Any ideas? It is driving me absolutely nuts.

files.zip
