
ohba-analysis / osl


OHBA Software Library - MEG/EEG Analysis Tools

Home Page: https://osl.readthedocs.io/en/latest/

License: BSD 3-Clause "New" or "Revised" License

Python 96.70% Makefile 0.04% HTML 3.27%
eeg meg python

osl's People

Contributors

ajquinn, cgohil8, evanr70, hqian96, marcofabus, matsvanes, mlfarinha, ricsinaruto, sbraeut, scho97, woolrich


osl's Issues

report directory naming convention

Not sure whether this is a bug or a feature, but when we run (batch) preprocessing, reports will be created for each file in a directory like this (for the raw (maxfiltered) file named sub-001_task-rest_tsss.fif):
report/sub-001_task-rest_tsss

However, if report creation fails for whatever reason and you rerun it from scratch using osl.report.gen_report_from_fif, it will by default create files in directories named report/sub-001_task-rest_tsss_preproc_raw. I think the two options should be consistent, and probably should use the latter naming convention.

maxfilter README example

In the README in the maxfilter folder, all the examples use osl_maxfilter.py, but I believe the script was renamed to maxfilter.py.

mne.preprocessing.annotate_flat no longer available

mne.preprocessing.annotate_flat isn't available in the latest version of MNE (1.1.0). This causes the following line to error:

https://github.com/OHBA-analysis/oslpy/blob/main/osl/preprocessing/mne_wrappers.py#L180

This also causes some of the tests that contain an "- annotate_flat : {}" line in the config to fail. For now, I've just removed this line from the config in the tests so they don't fail (#30), but we should probably fix this function; newer MNE versions appear to provide mne.preprocessing.annotate_amplitude (with a flat argument) as a replacement.

Report error when extension is not preproc_raw.fif

When you use osl.report.gen_report_from_fif(list_of_files, outdir='/path/to/save/dir'), it will fail if your files do not end in preproc_raw.fif (e.g. raw.fif or anything else; in my case, clean_raw.fif).
This is caused by read_dataset in https://github.com/OHBA-analysis/osl/blob/main/osl/preprocessing/batch.py#L401, where the preproc_raw extension is hardcoded. We should make the extension configurable, perhaps with the default raw (but when run from within the batch preprocessing, the default should be preproc_raw).
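A minimal sketch of how the extension could be inferred instead of hardcoded; the helper name and the list of known types are illustrative assumptions, not the actual osl API:

```python
def infer_ftype(fname, default="raw"):
    # Hypothetical helper: derive the dataset type from the filename instead of
    # hardcoding "preproc_raw"; falls back to `default` when nothing matches.
    stem = fname[:-len(".fif")] if fname.endswith(".fif") else fname
    for ftype in ("preproc_raw", "clean_raw", "raw"):
        if stem.endswith(ftype):
            return ftype
    return default
```

read_dataset could then accept an ftype argument that defaults to this inference, while the batch preprocessing passes preproc_raw explicitly.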

Restructure output files in source reconstruction

It would be good to have a directory structure that is more consistent with MNE. The proposed structure (for source reconstruction) is:

-> base_directory
---> src [by default the source reconstructed data will be put in to a directory with this name in the base_directory]
-----> sub-001
---------> bem
---------> rhino
-----------> coreg
-----------> surfaces
---------> parc.npy [parcellated time series data]
-----> sub-002
....
-----> sub-003
....

In this issue let's fix the structure of the source reconstruction then add a new directory for sign flipped data in a separate issue.

@woolrich

Argument ordering in preprocessing functions

The order of the arguments for run_proc_chain is:
def run_proc_chain(infile, config, ...):
whereas for run_proc_batch it is:
def run_proc_batch(config, files, ...):
note the config and input file(s) have swapped.

We should maybe change the order of the arguments in one of these functions to make them consistent. This will break existing scripts, but it is probably worth doing now rather than later.

What do you think @AJQuinn @matsvanes

Change default output structure of preprocessing

By default we want the output of oslpy to be written to subject specific directories. This is already the case with source reconstruction and maxfiltering. We need to change the default output structure of preprocessing.

The default output structure should be:
outdir
-> report
-> sub-001
---> sub-001_preproc_raw.fif
---> sub-001_preproc_raw.log
-> sub-002
---> ...
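As a sketch, the proposed layout could be produced by a small path helper (all names here are hypothetical, standing in for whatever osl actually implements):

```python
from pathlib import Path

def subject_paths(outdir, subject):
    # Hypothetical helper for the proposed per-subject output structure:
    # outdir/<subject>/<subject>_preproc_raw.{fif,log}, plus a shared report dir.
    subj_dir = Path(outdir) / subject
    return {
        "fif": subj_dir / f"{subject}_preproc_raw.fif",
        "log": subj_dir / f"{subject}_preproc_raw.log",
        "report": Path(outdir) / "report",
    }
```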

Filtering before ICA

Hi,

MNE recommends a highpass filter at 1Hz before doing ICA:
https://mne.tools/stable/auto_tutorials/preprocessing/40_artifact_correction_ica.html#filtering-to-remove-slow-drifts

However, this is not currently present in the code:
https://github.com/OHBA-analysis/oslpy/blob/main/osl/preprocessing/_mne_wrappers.py#L312

My solution would be to check what highpass filtering had been done on the raw object and if it's less than 1Hz to automatically do a highpass filter on a copy of the raw object as recommended by MNE.
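The check could look something like this (a sketch; `info` stands in for `raw.info`, which exposes the applied highpass cutoff in Hz, and the function name is made up):

```python
def needs_ica_highpass(info, cutoff=1.0):
    # True when the recording was highpassed below `cutoff` Hz, i.e. when
    # fitting ICA on a 1 Hz highpassed copy of the raw object is advisable.
    return float(info["highpass"]) < cutoff
```

If this returns True, the ICA could be fitted on `raw.copy().filter(l_freq=1.0, h_freq=None)`, as in the MNE tutorial.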

Let me know what you think, I can implement it if you agree.

new config style breaks plot_preproc_flowchart

The new config style (i.e. not starting with method) breaks plot_preproc_flowchart

config_text= """
meta:
  event_codes:
preproc:
  - crop:               {tmin: 30}
  - find_events:        {min_duration: 0.005}
"""

osl.preprocessing.batch.plot_preproc_flowchart(config_text)

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-10-2b9eecfee5cf> in <module>
----> 1 osl.preprocessing.batch.plot_preproc_flowchart(config_text)

/ohba/pi/mwoolrich/mvanes/software/oslpy/osl/preprocessing/batch.py in plot_preproc_flowchart(config, outname, show)
    329     for idx, stage in enumerate(stages):
    330 
--> 331         b = box if stage['method'] not in ['input', 'output'] else startbox
    332         stage = stage.copy()
    333         method = stage.pop('method')

KeyError: 'method'

Scan log notes in report

Usually when running an experiment, the experimenter will write some notes down, e.g. "eye tracker only working on one eye", or "participant moving a lot". It might be helpful if these can be added to the report on the info tab.

I understand we can't add an unlimited amount of features to the report (or period). But I was wondering whether it would be feasible to have a helper function that can enable the user to add extra things to the report?
What do you think @AJQuinn @cgohil8 (and let me know if I should just shut up)?

osl_maxfilter returns log filenames with _tsss_tsss extension

When running osl_maxfilter, all the files that are created have the extension you'd expect, like sss_autobad_err.log, sss_autobad.log, tsss_headpos.log, and _tsss.fif. However, the tsss log files have a double "tsss" extension, e.g. _tsss_tsss.log and _tsss_tsss_err.log.
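One way to avoid the duplication would be to strip an already-present stage suffix when building the log name. A sketch with a hypothetical helper name, not the actual maxfilter code:

```python
def log_name(fif_name, stage):
    # Build "<base>_<stage>.log", avoiding a doubled suffix such as
    # "_tsss_tsss.log" when the fif name already ends in the stage name.
    base = fif_name[:-len(".fif")] if fif_name.endswith(".fif") else fif_name
    suffix = "_" + stage
    if base.endswith(suffix):
        base = base[: -len(suffix)]
    return f"{base}{suffix}.log"
```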

Source Reconstruction Improvements - User Interface/Report

User interface:

  • We should have more informative error messages / a validation function for the source reconstruction config.
  • We should align the wrappers for source reconstruction to the underlying low-level functions.
  • We should also document the options for the config somewhere.

Report:

  • We should add a section describing the parcellation/beamforming in the report. This includes:
    • The eigenspectrum of the parcellated data.

Add global panel to reports

There should be panels which summarise the entire dataset (rather than just subject-specific panels) in both the preprocessing and source reconstruction report.

Manual ICA Issues

Steps to reproduce the error:

python 3_manual_ica.py

via a terminal on hbaws46.

  • A pop up appears.
  • Initially, I can only see the 1st topo map on the left. If I move around the time series using the view, the remaining topo maps appear.
  • I can select the components I want to remove.
  • When I close the pop-up, a blank 'Figure 1' box appears.
  • I get an index error.

Full terminal output:

Loading dataset:
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_preproc_raw.fif
Opening raw data file /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_preproc_raw.fif...
    Range : 4750 ... 139749 =     19.000 ...   558.996 secs
Ready.
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_events.npy
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_ica.fif
Reading /ohba/pi/knobre/cgohil/dg_int_ext/preproc/InEx_s01_block_01_tsss/InEx_s01_block_01_tsss_ica.fif ...
Now restoring ICA solution ...
Ready.
Creating RawArray with float64 data, n_channels=55, n_times=135000
    Range : 4750 ... 139749 =     19.000 ...   558.996 secs
Ready.
Using matplotlib as 2D backend.
Traceback (most recent call last):
  File "/home/cgohil/local/miniconda3/envs/osl/lib/python3.8/site-packages/matplotlib/cbook/__init__.py", line 287, in process
    func(*args, **kwargs)
  File "/home/cgohil/packages/oslpy/osl/preprocessing/plot_ica.py", line 1164, in _close
    tmp = tmp[0]
IndexError: list index out of range

Extracting volume info from structural MRIs

In osl:

  • We use this function to extract the dimensions from a nifti file: osl.source_recon.rhino.utils._get_vol_info_from_nii.
  • Note, mri_depth=dims[2] in this function.
  • This is used in the forward modelling to setup the source space grid in the function osl.source_recon.rhino.forward_model.setup_volume_source_space.
  • The vol_info is passed to the MNE function mne.source_space._make_volume_source_space here: https://github.com/OHBA-analysis/osl/blob/main/osl/source_recon/rhino/forward_model.py#L375.

Links:

In MNE:

  • There is an equivalent function for setting up the source space grid: mne.source_space.setup_volume_source_space.
  • It too calls mne.source_space._make_volume_source_space here: https://github.com/mne-tools/mne-python/blob/main/mne/source_space.py#L1746.
  • However, the vol_info passed to _make_volume_source_space is created here using mne._freesurfer._get_mri_info_data.
  • Note, mri_depth=dims[1].

Links:

So the question is what should be used for the vol_info passed to mne.source_space._make_volume_source_space?

Note:

An example of a nifti file where the volume dimensions do not have mri_depth=mri_height is any of the structurals in the Wakeman-Henson dataset.

Source reconstruction report improvements

  • Add a legend to the coreg plot
  • Clarify what the metric is in the sign flipping panel
  • Correlation between parcels after orthogonalisation in the parcellation panel.

Error loading in SPM data when event-value is str

When trying to load SPM data using the spmio module, this will fail if the data contains event values that are strings.
The data I'm using has been preprocessed in OSL (Matlab) and contains events with type 'artefact_OSL' and value 'VE', which throws the following error:

Traceback (most recent call last):
  File "/Users/matsvanes/venv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-74-6aa0cb1a8070>", line 1, in <module>
    D = osl.utils.spmio.SPMMEEG(
  File "/Volumes/T5_OHBA/software/oslpy/osl/utils/spmio/spmmeeg.py", line 52, in __init__
    self.condition_values = [int(_get_trial_trigger_value(t)) for t in self.trials]
  File "/Volumes/T5_OHBA/software/oslpy/osl/utils/spmio/spmmeeg.py", line 52, in <listcomp>
    self.condition_values = [int(_get_trial_trigger_value(t)) for t in self.trials]
ValueError: invalid literal for int() with base 10: 'VE'

use a default window (in sec) for preproc report spectrum

MNE's plot_psd uses a default window of n_fft = 2048, i.e. specified in samples. This means that depending on how much you resample your data, the spectral resolution can be higher or lower. I think it would be better to use a default expressed in seconds, e.g. 2 seconds.

i.e. raw.plot_psd(n_fft=int(raw.info['sfreq']*2))
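The conversion is trivial but worth wrapping so that every report uses the same default (a sketch; the function name is made up):

```python
def n_fft_for_window(sfreq, win_sec=2.0):
    # Convert a window length in seconds into the sample count plot_psd
    # expects for n_fft, so the spectral resolution is fixed in Hz
    # (1 / win_sec) regardless of the sampling rate.
    return int(round(sfreq * win_sec))
```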

As a side note, we could decide to adopt @AJQuinn 's square root of f plot, which I quite like.

Using ICA sequentially leads to overwrites in the report

For datasets that include multiple modalities (e.g. NTAD data with simultaneous M/EEG recordings), users might want to apply ICA denoising to each data type separately. For example,

- ica_raw: {n_components: 60, picks: meg}
- ica_autoreject: {picks: meg, apply: true}
- ica_raw: {n_components: 30, picks: eeg}
- ica_autoreject: {picks: eeg, apply: true}

However, using the ICA preprocessing steps sequentially led to the following problems:

  1. The *_ica.fif MNE object file that was generated first gets overwritten by the latter one.
  2. gen_html_data() only accepts one ICA object. This forces the report to summarise and plot only the most recent ICA step.
  3. Applying plot_bad_ica() to ICAs of EEG leads to an error at L838-842, because raw objects contain all of mag, grad, and eeg channels in NTAD data, but the ICA was only fitted on the EEG channels.

report.gen_report fails if there is only one channel type

report.gen_report fails if there is only one channel type. Traceback below.

Similar problem in plot_channel_dists()


Traceback (most recent call last):
  File "", line 171, in <cell line: 167>
    report.gen_report(preproc_fif_files, outdir=preproc_dir)
  File "/Users/woolrich/oslpy/osl/report/raw_report.py", line 182, in gen_report
    data = gen_html_data(raw, ica, outdir, level)
  File "/Users/woolrich/oslpy/osl/report/raw_report.py", line 112, in gen_html_data
    data['plt_temporalsumsq'] = plot_channel_time_series(raw, savebase)
  File "/Users/woolrich/oslpy/osl/report/raw_report.py", line 279, in plot_channel_time_series
    ax[row].plot(t, ss)
TypeError: 'AxesSubplot' object is not subscriptable
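A likely cause is that `plt.subplots` returns a bare `Axes` object (not an array) when only one row is requested. Wrapping the return value makes `ax[row]` indexing safe in both cases; a sketch of the guard:

```python
import numpy as np

def as_axes_array(ax):
    # plt.subplots(n, 1) returns a bare Axes when n == 1 and an array of
    # Axes otherwise; np.atleast_1d makes ax[row] work in both cases.
    return np.atleast_1d(ax)
```

Alternatively, `plt.subplots(..., squeeze=False)` always returns a 2D array of Axes regardless of shape.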

maxfilter.py Import Error

When running maxfilter.py the following error occurs:

Traceback (most recent call last):
  File "indir/maxfilter/maxfilter.py", line 14, in <module>
    from ..utils import validate_outdir, add_subdir
ImportError: attempted relative import with no known parent package

Also there are no validate_outdir & add_subdir functions in utils.

Thanks!

Preproc bad segment detection doesn't detect zeros

Maxfilter sometimes zeros out data. This should always be picked up by bad segment detection, which is currently not the case. So we should explicitly look for zeros in the data and annotate those segments as bad.
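A sketch of explicit zero-run detection on a 1D channel, assuming numpy; the resulting sample indices could then be converted to mne Annotations:

```python
import numpy as np

def zero_segments(x, min_len=1):
    # Return (onset, offset) sample-index pairs for runs where the signal
    # is exactly zero and at least `min_len` samples long.
    flat = np.concatenate(([False], x == 0, [False]))
    edges = np.flatnonzero(np.diff(flat.astype(int)))
    onsets, offsets = edges[::2], edges[1::2]
    keep = (offsets - onsets) >= min_len
    return [(int(a), int(b)) for a, b in zip(onsets[keep], offsets[keep])]
```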

Feature request: generate report with existing plots

Currently, the report function creates an HTML file containing only the files that were given as inputs to the function call. That means that if you've done a big batch process before, for which you have a collective report, and you get new data, you would need to recreate all the previous plots to "update" the collective report.
It would be good to add an overwrite option to the report that, if False, first checks whether the plots for a given file already exist and, if so, skips creating them (saving computation time), and finally generates the report from all existing and new plots.
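A sketch of the check; the `_timeseries.png` naming and the function name are assumptions standing in for whatever files the report actually writes:

```python
from pathlib import Path

def files_needing_plots(fif_files, plot_dir, overwrite=False):
    # Skip input files whose report plots already exist, unless overwrite=True.
    todo = []
    for f in fif_files:
        expected = Path(plot_dir) / (Path(f).stem + "_timeseries.png")
        if overwrite or not expected.exists():
            todo.append(f)
    return todo
```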

osl_maxfilter 'adjacent' output returns error

When using 'adjacent' as the output directory for osl_maxfilter (i.e. files are supposed to be stored in the same directory as the input file), I get an error about the log file not being found (see below). Probably something is going wrong in the initialization of the log file(s). FYI, I'm running the following in an active osl environment on the latest version of OSL.
The code I'm running:

osl_maxfilter tmp.txt 'adjacent' --maxpath /neuro/bin/util/maxfilter --scanner Neo --tsss --mode multistage --headpos --movecomp

The message printed to screen:

['tmp.txt', 'adjacent', '--maxpath', '/neuro/bin/util/maxfilter', '--scanner', 'Neo', '--tsss', '--mode', 'multistage', '--headpos', '--movecomp']
Namespace(files='tmp.txt', outdir='adjacent', maxpath='/neuro/bin/util/maxfilter', mode='multistage', headpos=True, movecomp=True, movecompinter=False, autobad=False, autobad_dur=None, bad=None, badlimit=None, trans=None, origin=None, frame=None, force=False, tsss=True, st=10, corr=0.98, inorder=None, outorder=None, hpie=None, hpig=None, scanner='Neo', ctc=None, cal=None, overwrite=False, dryrun=False)


OHBA-Maxfilter


Processing 1 files
Outputs will be saved alongside inputs


Processing run 1/1 : /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET.fif
/neuro/bin/util/maxfilter -f /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET.fif -o /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss.fif -autobad on -cal /net/aton/meg_pool/data/TriuxNeo/system/sss/sss_cal.dat -ctc /net/aton/meg_pool/data/TriuxNeo/system/ctc/ct_sparse.fif -v > >(tee -a /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad.log) 2> >(tee -a /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad_err.log >&2)


tee: /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad_err.log: Permission denied
tee: /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad.log: Permission denied
**************************************************
*** This is MaxFilter(TM) version 2.2.20       ***
*** Compilation date: Dec  3 2019 13:06:57     ***
*** Please report problems to [email protected] ***
**************************************************
No processing records found.
SSS origin (head) 0.0, 0.0, 40.0 mm.
Time scale: #f = 12.000 s, #l = 836.000 s 
Applying the fine-calibration in /net/aton/meg_pool/data/TriuxNeo/system/sss/sss_cal.dat.
Using the cross-talk matrix in /net/aton/meg_pool/data/TriuxNeo/system/ctc/ct_sparse.fif.
Opened FIFF file /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET.fif (824 data buffers).
EXIT 4: The output file already exists. Output file /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss.fif was not written!
Failed to create file /ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss.fif for writing!
Traceback (most recent call last):
  File "/home/mvanes/anaconda3/envs/osl/bin/osl_maxfilter", line 8, in <module>
    sys.exit(main())
  File "/home/mvanes/anaconda3/envs/osl/lib/python3.9/site-packages/osl/maxfilter/maxfilter.py", line 615, in main
    run_multistage_maxfilter(infifs[idx], outbase, vars(args))
  File "/home/mvanes/anaconda3/envs/osl/lib/python3.9/site-packages/osl/maxfilter/maxfilter.py", line 332, in run_multistage_maxfilter
    with open(outlog, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/ohba/pi/knobre/datasets/cmore/rawbids/sub-005/meg/sub-005_task-CTET_autobad_sss_autobad.log'

Option to just beamform via the config in source reconstruction

Create a wrapper for just doing beamforming that can be called in the config when doing source reconstruction. (Currently there's only one wrapper for beamform_and_parcellate). This option should be usable as

config = """
    source_recon:
    - beamform: {...}
"""

logger error in batch processing

When a file is already present in batch processing, and overwrite=False, osl throws the following error:

File "/ohba/pi/mwoolrich/mvanes/software/oslpy/osl/preprocessing/batch.py", line 551, in run_proc_chain
    osl_logger.critical('Skipping preprocessing - existing output detected')
AttributeError: module 'osl.utils.logger' has no attribute 'critical'

osl_logger should be replaced with logger
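The fix is to call .critical() on a logger object rather than on the logging module itself; a minimal sketch of the pattern (the function here is a made-up stand-in for the run_proc_chain guard):

```python
import logging

logger = logging.getLogger(__name__)  # logger objects have .critical(); modules do not

def maybe_skip(output_exists, overwrite):
    # Warn and skip when output is already present and overwrite is disabled.
    if output_exists and not overwrite:
        logger.critical("Skipping preprocessing - existing output detected")
        return True
    return False
```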

report ICA only shows single topo for Elekta data

The report shows properties of all "bad" ICA components, but for Elekta data (which has two sensor types), only the topography for the magnetometers is shown. Add the topo for the gradiometers.

adding Elekta channel descriptions

I have a few info files that could be useful to add to oslpy. These are descriptions of Elekta channels that I believe can't be found in the raw.info structure or anything like that. They are:

  • division of the sensor array into left/right, frontal/temporal/parietal/occipital, and anterior/posterior (lists of channel names)
  • division of gradiometers into longitude and latitude

What is the best place to put these and in which format (.txt?)?

File structure of preprocessing output

Currently, when we use batch processing, we specify a single output directory. Both the preprocessed fif files and the log files are written to this directory. E.g. if I list the contents of my output directory, I get:

osl_batch.log
sub-004_task-restEO_preproc_raw.fif
sub-004_task-restEO_preproc_raw.log
sub-005_task-restEO_preproc_raw.fif
sub-005_task-restEO_preproc_raw.log

With one big list I find it hard to see which files failed and to pick out error logs. I propose the output directory by default contains:

  • The preprocessed fif files
  • A directory containing the log files
  • A directory containing the report files

So if I list the output directory I see:

/logs
/report
sub-004_task-restEO_preproc_raw.fif
sub-005_task-restEO_preproc_raw.fif

Here, because no subject numbers should be repeated, I find it easy to scan across the subject numbers and pick out missing subjects. The /logs directory contains all the .log files:

osl_batch.log
sub-004_task-restEO_preproc_raw.log
sub-005_task-restEO_preproc_raw.log

and the /report directory contains the files needed to generate a report and the html file. Here, I find it much easier to pick out error log files.

What do you think?

I'm about to add a directory to hold the files needed for the report, so if we don't change the logs directory then the default output would be:

/reports
osl_batch.log
sub-004_task-restEO_preproc_raw.fif
sub-004_task-restEO_preproc_raw.log
sub-005_task-restEO_preproc_raw.fif
sub-005_task-restEO_preproc_raw.log

Suggestions for report improvement

  • remember the tab in the previous file when you go to the next (or have a toggle that can control whether to do so or not)
  • still have a list of all files somewhere so you can quickly go to a specific file
  • in the window indicate how many files there are (e.g. 63/114 files)

Feature request: EOG timeseries

Hi,

I thought it could be useful to add the EOG timeseries in the timeseries panel of the report.
This would allow direct comparison with the MEG timeseries.

Issues with FSL if filepath contains a space

The source reconstruction code uses system calls to run FSL. E.g. source_recon.rhino.utils._get_orient calls

fslorient -getorient {}

to get the orientation of a structural MRI file. However, if you have spaces in the path to the sMRI file, this will fail. E.g. this raises an error:

fslorient -getorient /Users/cgohil/OneDrive - Nexus365/datasets/wakeman_henson/src/sub-02/rhino/surfaces/smri.nii.gz

This can be fixed if we wrap the path in the system call with quotes:

fslorient -getorient '{}'

or by escaping the spaces with a backslash.
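In Python, shlex.quote handles this generically; a sketch of how the system call string could be built (the wrapper name is illustrative):

```python
import shlex

def fsl_orient_cmd(smri_path):
    # Quote the path so spaces (e.g. in "OneDrive - Nexus365") survive the shell.
    return f"fslorient -getorient {shlex.quote(smri_path)}"
```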

fix_sign_ambiguity appears before coreg, beamform_and_parcellate in the config workflow report plot

In the recon report, when coreg and beamform_and_parcellate are run using run_src_batch, followed by fix_sign_ambiguity in a separate call to run_src_batch (which typically needs to be done separately so that the template subject can be chosen), the sign flipping erroneously appears before coreg and beamform_and_parcellate in the workflow plot on the config tab of recon/report/summary_report.html.

Example code:

config = """
    source_recon:
    - coregister:
        include_nose: false
        use_nose: false
        use_headshape: true
        model: Single Layer
        already_coregistered: true
    - beamform_and_parcellate:
        freq_range: [4, 40]
        chantypes: mag
        rank: {mag: 100}
        parcellation_file: fmri_d100_parcellation_with_PCC_reduced_2mm_ss5mm_ds8mm.nii.gz
        method: spatial_basis
        orthogonalisation: symmetric
"""

source_recon.run_src_batch(
    config,
    src_dir=recon_dir,
    subjects=subjects,
    preproc_files=preproc_fif_files,
    smri_files=smri_files,
)

# Find a good template subject to align other subjects to
template = source_recon.find_template_subject(
    recon_dir, subjects, n_embeddings=15, standardize=True
)

config = f"""
    source_recon:
    - fix_sign_ambiguity:
        template: {template}
        n_embeddings: 15
        standardize: True
        n_init: 3
        n_iter: 2500
        max_flips: 20
"""

# Do the sign flipping
source_recon.run_src_batch(config,
                           recon_dir,
                           subjects,
                           )
