
API access to Open Tree of Life treestore

License: BSD 2-Clause "Simplified" License


phylesystem-api's Introduction

Code style: black

The Open Tree Of Life API


This repository holds the code that runs The Open Tree Of Life API, which talks to the backend datastore phylesystem.

Introduction

This repo uses peyotl to interact with a copy of the phylesystem data on a server. The code here provides a web services API to that data store. The best description of the phylesystem is in the published paper in Bioinformatics.

See docs/ for examples of how to use the API with curl.

Installation

There are dependencies installable from PyPI using pip, and the Open Tree of Life client-side Python library (peyotl) is also used on the server side for handling some aspects of NexSON.

$ pip install -r requirements.txt
$ cd ..

The first time you run, you'll need to:

$ git clone https://github.com/OpenTreeOfLife/peyotl.git
$ cd peyotl
$ pip install -r requirements.txt
$ python setup.py develop

Subsequently changing to the peyotl directory and running

$ git pull origin master

should be sufficient to get the latest changes.

Configuration

$ cp private/config.example private/config

then open private/config in a text editor and tweak it.

  • repo_parent should be the path of a directory that holds one or more phylesystem-# repositories containing the data.

  • git_ssh and pkey configure the SSH command and private key used for git operations.

Logging configuration

The behavior of the log for functions run from within a request is determined by the config file. Specifically, the

[logging]
level = debug
filepath = /tmp/my-api.log
formatter = rich

section of that file.

If you are a developer of the phylesystem-api and you want to see logging for functions that are not called in the context of a request, you can use these environment variables:

  • OT_API_LOG_FILE_PATH filepath of log file (StreamHandler if omitted)
  • OT_API_LOGGING_LEVEL (NotSet, debug, info, warning, error, or critical)
  • OT_API_LOGGING_FORMAT "rich", "simple" or "None" (None is default)
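A minimal sketch of how these variables might be consumed, using only the standard logging module (the helper name and format strings are assumptions, not the actual phylesystem-api code):

import logging
import os

def get_api_logger(name="ot_api"):
    # Level: one of the names listed above; NOTSET if unrecognized.
    level_name = os.environ.get("OT_API_LOGGING_LEVEL", "NOTSET").upper()
    level = getattr(logging, level_name, logging.NOTSET)
    # Format: "rich", "simple", or "None" (the default).
    fmt_name = os.environ.get("OT_API_LOGGING_FORMAT", "None")
    if fmt_name == "rich":
        fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    elif fmt_name == "simple":
        fmt = "%(levelname)s: %(message)s"
    else:
        fmt = "%(message)s"
    log_path = os.environ.get("OT_API_LOG_FILE_PATH")
    # StreamHandler if no file path is given, as described above.
    handler = logging.FileHandler(log_path) if log_path else logging.StreamHandler()
    handler.setFormatter(logging.Formatter(fmt))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger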

Deploying

This git repository is meant to be a "web2py application", so you need to create a symlink in $WEB2PY_ROOT/applications pointing to the API repo directory:

$ cd $WEB2PY_ROOT/applications
$ ln -sf /dir/with/api.opentreeoflife.org api

This will make the app available under /api.

Using the API from the command-line

See docs/ for examples of how to use the API with curl.

Using the API from Python

peyotl has wrappers for accessing phylesystem web services. See the peyotl wiki for details.
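A minimal sketch of fetching a study this way (assuming peyotl's PhylesystemAPI wrapper; check the wiki for the current names and arguments):

# Sketch: fetch a study as NexSON via peyotl's wrappers (assumed API).
from peyotl.api import PhylesystemAPI

pa = PhylesystemAPI(get_from="api")  # talk to the remote web services
nexson = pa.get("pg_10")             # study id "pg_10" is just an example
print(type(nexson))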

Authors

See the CREDITS file

Code formatting

Recommended .git/hooks/pre-commit:

#!/bin/sh
# Reformat with black if any target files fail the style check, then
# abort the commit so the reformatted files can be staged and committed.
if ! black --check phylesystem_api/phylesystem_api phylesystem_api/setup.py phylesystem_api/tests ; then
    black phylesystem_api/phylesystem_api phylesystem_api/setup.py phylesystem_api/tests
    echo "code reformatted. commit again!"
    exit 1
fi


phylesystem-api's Issues

Allow for partial updates to a study

[MOVED here from OpenTreeOfLife/treenexus/issues]

Given the size of existing (and likely future) study data, I'd love to see a mechanism for partial updates. Here are some thoughts on how we might handle updates to, e.g., a single tree, just the study metadata, or new annotations.

A common REST practice is to expose child resources at their own URLs. So we might fetch study 23 like so:
GET http://api.opentreeoflife.org/1/study/23.json
... while just one of its trees (also contained in the response to above) has its own URL:
GET http://api.opentreeoflife.org/1/study/23/tree/987.json
We could also use this URL to PUT changes to just this tree.
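For illustration, a client-side sketch of that pattern using Python's requests library (the URLs follow the example above; the payload handling is an assumption):

import requests

BASE = "http://api.opentreeoflife.org/1"

# Fetch a single tree subresource rather than the whole study.
tree = requests.get("%s/study/23/tree/987.json" % BASE).json()

# ...modify the tree locally, then PUT just this subresource back.
resp = requests.put("%s/study/23/tree/987.json" % BASE, json=tree)
resp.raise_for_status()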

Of course, lots of other people have encountered the problem of needing partial updates to large, composite resources. Another approach that seems to be gaining favor is to patch the target resource, either by sending a partial document (with rules for resolution / deletion / etc) or using a more robust patch format like this:
http://tools.ietf.org/html/draft-ietf-appsawg-json-patch-08

Here are some other interesting posts on partial updates in REST APIs:
http://www.mnot.net/blog/2012/09/05/patch
http://tools.ietf.org/html/draft-dusseault-http-patch-16
http://stackoverflow.com/questions/8132445/how-to-support-partial-updates-patch-in-rest
http://wisdomofganesh.blogspot.com/2012/08/best-practice-for-use-of-patch-in.html
http://stackoverflow.com/questions/232041/how-to-submit-restful-partial-updates

Support for JSONP (cross-domain API calls)

The OTOL API should support cross-domain requests using JSONP. This is required even for the OpenTree curation tool, since we're planning to host the API service on a separate domain (api.opentreeoflife.org).

This means that each API method should recognize 'jsoncallback', 'callback', and '_' as expected arguments. If a callback is specified, we should wrap the JSON response in a function body and a call to that function.

I'm doing this for the initial GET call that fetches studies.
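A minimal sketch of that wrapping logic (function and argument names are hypothetical):

import json

def jsonp_wrap(data, request_args):
    # Honor 'jsoncallback' or 'callback' if the client supplied one.
    callback = request_args.get("jsoncallback") or request_args.get("callback")
    body = json.dumps(data)
    if callback:
        return "%s(%s);" % (callback, body)  # wrap response in a function call
    return body  # plain JSON otherwise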

Remove hard-wired references to treenexus repo

It is useful to be able to build API instances that access the treenexus_test repo (or perhaps some other repo). Yet there seem to be several hard-wired references to treenexus in the sources. These should all be parameterized. Here is some grep output:

./repo/api.opentreeoflife.org/controllers/compare.py: return github_client.compare("OpenTreeOfLife/treenexus",base,head, auth_token)
./repo/api.opentreeoflife.org/.gitmodules:[submodule "treenexus"]
./repo/api.opentreeoflife.org/.gitmodules: path = treenexus
./repo/api.opentreeoflife.org/.gitmodules: url = git://github.com/OpenTreeOfLife/treenexus.git
./repo/api.opentreeoflife.org/modules/github_client.py: which calls out to https://api.github.com/repos/OpenTreeOfLife/treenexus/compare/c3608e5...master
./repo/api.opentreeoflife.org/modules/github_client.py: # find the 'treenexus' repo (confirm it exists and is available)
./repo/api.opentreeoflife.org/modules/github_client.py: repo_path = 'OpenTreeOfLife/treenexus' # TODO: pull from config file?
./repo/api.opentreeoflife.org/modules/githubsearch.py: self.repo = "OpenTreeOfLife/treenexus"

modified study on the wrong WIP

Jim reported this: note that study 9 is modified on a WIP branch for study 10: https://gist.github.com/jimallman/9275020
Could be a failure to keep the lock across os.chdir, or a failure to back out a failed commit.
Note that MTH was flailing a bit with the install and testing, so it is possible that this was a one-off corruption rather than normal usage. But it still should not be possible for client tests to do this to the repo...

support for multiple repos

From the Git TODO doc. Features:

  • Detect the study-ID-to-repo mapping on launch.
  • Create a Python dictionary kept in memory, with mutex locks to synchronize access (see the sketch below).

Safe to assume:

  • Studies won't move.
  • One parent dir for all repos.
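A minimal sketch of that in-memory mapping, assuming a threading.Lock suffices for synchronization (the class and method names are hypothetical):

import threading

class StudyRepoMap:
    def __init__(self, id_to_repo):
        self._map = dict(id_to_repo)   # e.g. {"pg_10": "phylesystem-1"}
        self._lock = threading.Lock()  # synchronize access across requests

    def repo_for(self, study_id):
        with self._lock:
            return self._map[study_id]  # safe to cache: studies won't move

    def register(self, study_id, repo):
        with self._lock:
            self._map.setdefault(study_id, repo)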

Add org/repo of datastore to configuration file

Right now the "OpenTreeOfLife/treenexus" repo is hard-coded into our app, which makes testing inconvenient.

We should be reading the org and repo from private/config like we do for our github_client_id and github_client_secret

Assign alphabetic prefix to study IDs

This should prevent collisions between studies created in phylografter (which uses simple integers) and those created through the OTOL API, including studies created in the OpenTree webapp.

Possible prefixes would be pg for phylografter, ot for the OTOL API, and maybe more if someone else does batch study creation and submits a pull request.

Getting a study happens on the current branch (not chosen)

I believe that when we GET a study, the Open Tree API currently doesn't check to see whether we're on a sensible branch (the curator's WIP branch, or master if no WIP branch is found). Instead, it simply grabs the study Nexson from whatever branch it happens to be on, which might be someone else's WIP branch for another study.

Implement GET /v1/study/N.json

This is the most basic API method, and implementing it will involve writing a lot of the internal infrastructure that talks to the central datastore, i.e. treenexus.git. Other API methods will be easier to implement once this scaffolding is in place.

Retry on a Github API 502

The Github API returns HTTP error code 502 "Bad gateway" when there is a time-out in their backend. During testing I see this happen occasionally.

The API code should detect a 502 return code on any API call and retry that call at least once.
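A minimal sketch of such a retry, assuming the requests library (the backoff policy is an assumption):

import time
import requests

def github_get_with_retry(url, retries=2, **kwargs):
    # Retry on 502 Bad Gateway; return any other response immediately.
    for attempt in range(retries + 1):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 502:
            return resp
        time.sleep(2 ** attempt)  # brief backoff before retrying
    return resp  # still 502 after all retries; let the caller decide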

Finalize IDs of ad-hoc rooting elements when PUTting a study

Re-rooting a tree on an edge requires adding a node (and usually an edge) to the tree. Subsequent re-rooting uses the same elements. On saving the study, these ad-hoc elements may appear with IDs in the form {TREE_ID}_ROOT and {TREE_ID}_ROOT_EDGE. For example, a tree tree3 might include these elements:

  • a node with the id tree3_ROOT
  • an edge with the id tree3_ROOT_EDGE

(The tree ID is a necessary component here, since element IDs should be unique throughout the document.) If found, these should be reified by converting their IDs to the next available "permanent" IDs in the Nexson document, e.g. node320 and edge321. Ideally, we'd also find and convert any reference to these IDs, which are likely in annotations or properties of the tree.

This should probably be handled by our eventual "smart merge" tool, since one goal is to avoid ID collisions when merging changes to a Nexson study.

N.B. Once these elements have a permanent ID, they will be retained in the Nexson (even after subsequent re-rooting) in case they are the subject of an annotation, etc.
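A minimal sketch of the reification step, operating on the serialized Nexson so that references in annotations and tree properties are converted too (the numbering scheme and function name are assumptions):

def reify_adhoc_root_ids(nexson_str, tree_id, next_node_num, next_edge_num):
    # Replace the longer _ROOT_EDGE id first so its _ROOT substring is
    # not clobbered by the second replacement.
    nexson_str = nexson_str.replace("%s_ROOT_EDGE" % tree_id,
                                    "edge%d" % next_edge_num)
    nexson_str = nexson_str.replace("%s_ROOT" % tree_id,
                                    "node%d" % next_node_num)
    return nexson_str

# e.g. reify_adhoc_root_ids(doc, "tree3", 320, 321) converts tree3_ROOT
# to node320 and tree3_ROOT_EDGE to edge321 throughout the document.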

Support for CORS (cross-domain requests)

As stated in #9:

The OTOL API should support cross-domain requests using JSONP. This is required even for the OpenTree curation tool, since we're planning to host the API service on a separate domain (api.opentreeoflife.org).

JSONP is choking on large request bodies (Nexson) since it forces all data to the query string. So we need to use CORS instead.
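A minimal sketch of the response headers CORS requires (a permissive policy for illustration; production would restrict origins):

def add_cors_headers(response_headers):
    # Allow cross-domain callers such as the curation webapp.
    response_headers["Access-Control-Allow-Origin"] = "*"
    response_headers["Access-Control-Allow-Methods"] = "GET, POST, PUT, DELETE, OPTIONS"
    response_headers["Access-Control-Allow-Headers"] = "Content-Type, Authorization"
    return response_headers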

Support pure JSON or text requests in PUT (update) method

In testing very large studies, one bottleneck occurs during saves (study updates), when the browser tries to build an AJAX request with contentType='application/x-www-form-urlencoded; charset=UTF-8'. This forces it to encode (escape) the entire Nexson string, which kills the browser for large studies.

If we modify the AJAX request to use contentType 'application/json; charset=utf-8', it's very quick to build. But we need to modify the API's PUT method to recognize this and the other values in the request:

  • nexson
  • author_name
  • author_email
  • auth_token

The easiest approach is probably to simply wrap the Nexson along with the other values into a tiny JSON structure, but this forces web2py to parse the (potentially huge) Nexson. Is this OK? It looks like we're doing this anyway during validation, so maybe we're just moving the JSON parsing a little earlier in the process.

I had thought about simply prepending a few lines of text with the other expected values and splitting the request.body on newlines, or prepending a tiny dict with just the other values in it, all to avoid having to parse the monster. But this is all non-standard hackery, so a proper JSON request body would be best.
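For illustration, the proper-JSON approach might look like this on each side (field names taken from the list above; function names are hypothetical):

import json

def build_put_body(nexson, author_name, author_email, auth_token):
    # Client side: one JSON document carries the NexSON plus metadata.
    return json.dumps({
        "nexson": nexson,
        "author_name": author_name,
        "author_email": author_email,
        "auth_token": auth_token,
    })

def parse_put_body(request_body):
    # Server side: a single json.loads recovers everything, including
    # the (potentially huge) NexSON.
    data = json.loads(request_body)
    return (data["nexson"], data["author_name"],
            data["author_email"], data["auth_token"])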

Thoughts?

Prepare for using oti search from a different server

The OTOL API makes assumptions about oti's location and base URL. To match the flexibility elsewhere, we should take advantage of the server-config variables used in install-web2py-apps.sh, preferably using the same OTI_BASE_URL (which of course incorporates the host and port plus leading path information).

I'm putting this ticket here because we'll need to modify the oti_search module (or its caller) to use this standard form. It looks very close already; I'm checking now to see whether they agree on what the "base URL" means in this case.

Use conventional HTTP methods for a RESTful API

Currently the OTOL API is expecting a POST request for any method "relating to writing". Based on previous conversations, we should follow RESTful convention and use:

  • GET to view a resource or collection
  • POST to create a resource (incl. by importing its data)
  • PUT (and someday PATCH) to update a resource
  • DELETE to remove a resource

See also #20 for notions on how to override the request method, if we hit firewall or other obstacles down the road...

Can we authenticate the user, for proper attribution in the datastore?

Beyond an app-specific API key, it seems like we need to capture the user's identity to properly attribute submissions, edits, etc. in the datastore.

I realize there's a Contributor field in the study data, but this only captures the study's initial submitter. Subsequent edits will need some kind of attribution, and it seems a shame not to take advantage of the user identity and permissions provided by Github.

Pull from upstream before committing changes to studies

I'm seeing some commits (saving study changes) fail with this error:

RAN: '/usr/bin/git push originssh jimallman_study_9'

STDOUT:

STDERR:
To git@github.com:OpenTreeOfLife/treenexus.git
! [rejected]        jimallman_study_9 -> jimallman_study_9 (non-fast-forward)
error: failed to push some refs to 'git@github.com:OpenTreeOfLife/treenexus.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Merge the remote changes (e.g. 'git pull')
hint: before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

This particular case is probably due to outside edits to the annotations in this study. But I think we should allow for other tools that might be working from the same repo and branch. Perhaps we should always do a fresh pull from upstream when GETting the study as well?

avoid os.chdir in git operations

@snacktavish and I looked into this a bit. From looking at the code for sh (http://amoffat.github.io/sh/), it looks like:

  • there is a _cwd arg rather than changing the web2py process dir; however,
  • sh.py uses os.chdir internally.

We need to dig around a bit more to be confident that the wrapper around git is not changing the launching process's dir.

The current implementation is a roadblock to supporting multiple repos.

Write operations should take a SHA argument that identifies their parent

We are using lazy branch creation (create WIP on first write not on the GET), which seems good.
But the change set should be applied to the parent that corresponds to the version of the NexSON that the curator was looking at when he/she made the edits.
On write, we should:

  1. Check for a WIP branch for this study, user, and parent commit SHA.
  2. If one is not found:
    A. check out the parent commit,
    B. create a WIP branch from that commit,
    C. check out the WIP branch.
  3. Write the new content on the WIP branch, then commit and return the new SHA.

The change we need is in step 2A (instead of checking out master, we should check out the correct parent).
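A minimal sketch of that branching logic using plain git subprocess calls (the branch-naming convention here is an assumption):

import subprocess

def checkout_wip(repo_dir, study_id, user, parent_sha):
    branch = "%s_study_%s_%s" % (user, study_id, parent_sha[:7])
    found = subprocess.run(["git", "branch", "--list", branch],
                           cwd=repo_dir, capture_output=True, text=True)
    if found.stdout.strip():
        subprocess.run(["git", "checkout", branch], cwd=repo_dir, check=True)
    else:
        # Step 2A: branch from the correct parent commit, not from master.
        subprocess.run(["git", "checkout", "-b", branch, parent_sha],
                       cwd=repo_dir, check=True)
    return branch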

local mirror of master on deployment machine

We have some operations that will always be done on master.

We could have a second local copy of the phylesystem repos on the deployed server. Every time we alter master in the working copy, we'd pull those changes to a local "master mirror" repo. This would allow read-only, master-only operations to read from that mirror, cutting down on contention for the lock in the working dir and on large checkouts for a GET.

It is unclear if we need this, so it is probably a long term potential enhancement.

See #56 which describes the need for a short-term fix.

Storage and retrieval of supporting files

I've started to implement this part of the OTOL API in branch 'subresource-files', but stopped short of actually storing or retrieving file data. Regardless, it does include some useful logic in the API controller for recognizing and handling "subresource" requests in the API, e.g. GET or PUT to /api/v1/study/123/file/456

We might opt to create a single database table just to enable web2py's default file-upload and storage behavior. In this case, the files end up on the filesystem, in web2py's uploads/ folder, which would need to be shared if we scale to multiple app servers.

I think it might be wise to instead push each file to a third-party, dumb datastore and return its URL to the OTOL API caller. Most likely, this will be the web curation UI, which will store the URL in its supporting-files annotation. But I'd like to maintain feature parity for all clients, so I'm not sure how best to proceed. I suppose if the API service also remembers the external file URL, that's good, but it seems we keep all other "state" in the git repo.

Maybe this is a good use case for git-fat or similar? This would drop placeholders (with SHA for retrieval) into git while keeping the files in a separate store.

This entire feature is kind of ancillary and short-lived (pending data deposition) for each study, so I don't want to gild the lily here. But we need a solid, working solution. Suggestions welcome!

Forward study-creation request to a (long-running?) call to treemachine

Rather than the curation UI calling treemachine directly to create / import a new Nexson study, I'd much prefer we call the OTOL API and have it (in turn) call treemachine for just the Nexson template (+ imported data, if any). This is because there's a lot we should do in the data repo beyond the actual creation of the Nexson:

  • assign a new ID for this study, create its folder in repo(?)
  • forward ID and info to treemachine, expect to get study JSON back
  • IF treemachine returns JSON, save as {ID}.json and return URL as '201 Created'
  • or perhaps '303 See other' w/ redirect?
  • (should we do this on a WIP branch? save empty folder in 'master'?)
  • IF treemachine throws an error, return error info as '500 Internal Server Error'

This is also outlined in a TODO comment in the study-creation code for the OTOL API:

https://github.com/OpenTreeOfLife/api.opentreeoflife.org/blob/restful-api-methods/controllers/default.py

There you can also see the arguments passed from the study-creation page:
http://dev.opentreeoflife.org/curator/study/create

@mtholder, please disregard any of these import options if they don't make sense, or suggest new ones that we can support. Here's what I'm currently sending in the creation request to OTOL API that can probably be forwarded to treemachine as-is:

cc0_agreement signals their agreement to CC0 terms. This should always be true (but let's confirm)

import_option should be either 'IMPORT_FROM_TREEBASE', 'IMPORT_FROM_PUBLICATION_DOI', or 'IMPORT_FROM_MANUAL_ENTRY' (in which case, just return an empty Nexson template)

treebase_id only matters if import_option is 'IMPORT_FROM_TREEBASE'

publication_DOI only matters if import_option is 'IMPORT_FROM_PUBLICATION_DOI'

Other less important bits include jsoncallback and auth_token (ignore these). author_name and author_email could be used to populate the Curator field in study metadata.

API URL to receive Github post-receive hooks and ask OTI to reindex studies

A "push" system is wanted so that, for example, when study 10 is updated and merged into the master branch, the OTI service can be asked to reindex just that study.

This can be accomplished by looking at the data in a Github post-receive hook, sifting out commits to the master branch, examining the list of modified and added files in each commit, and then asking the OTI service to reindex those files (see the sketch below).

https://help.github.com/articles/post-receive-hooks
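A minimal sketch of sifting a post-receive payload for study files changed on master (payload fields follow GitHub's push-event format; the filtering is an assumption):

import json

def studies_to_reindex(payload_json):
    payload = json.loads(payload_json)
    if not payload.get("ref", "").endswith("/master"):
        return set()  # only pushes to master should trigger reindexing
    changed = set()
    for commit in payload.get("commits", []):
        for path in commit.get("modified", []) + commit.get("added", []):
            if path.endswith(".json"):  # study Nexson files
                changed.add(path)
    return changed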

Some thoughts on the OTL API

@kcranston asked about our (ropensci) thoughts on the API. Our perspective is of course from working in R...here goes:

  • Write methods would be useful for R users. There is a huge community of people working with phylogenies in R who could contribute back if write methods were available.
  • Phylogeny formats: A variety of formats would be nice: NeXML, Newick, etc., but we could convert from one to others easily if only one format available.
  • Data compression would be nice to have so that data comes back faster (not R specific of course)
  • Authentication: For our use case, using web APIs via a programming interface (in this case R), using OAuth is a bit of an awkward experience since you have to go out of R command line to a browser to authenticate. API keys are easier to deal with for users and our own development. If you prefer OAuth, we can definitely make that work.
  • Fuzzy search on node/tip names: Don't know if this is built in, but would be awesome to have.
  • Web interface behaves the same as the API: For example, in the old GBIF API, data returned was only for the exact taxonomic match, while in the web interface, they returned data for the taxonomic match plus its synonyms. This results in users not trusting the API, so they want to stick with using the web interface. The new GBIF API fixes this I think. Anyway, it's a more seamless experience if R users can get the same data back when using the browser and R.
  • Taxonomy: Can be tricky because no one stop shop for taxonomy, and the trusted source differs by taxonomic group. Perhaps taxonomic identifiers can be provided for a variety of different taxonomy sources (ITIS, NCBI, GBIF backbone, etc.).
  • Multiple input fields (e.g., study_id='4,5,6' or study_id=4&study_id=5&study_id=6 instead of having to request 4, 5 and 6 in 3 separate calls): This may be harder on your side, but the APIs we work with that allow passing multiple identifiers in a single call (e.g., the PLOS altmetrics API) make for faster data acquisition.
  • Will there be Solr or ElasticSearch endpoints to hit? Or perhaps text search is available in the graph database?
  • Pagination (allowing us to decide how much to retrieve) - obviously better for you too
  • Time stamps - nice metadata to have
  • Data aggregation where suitable (so it is done on server side rather than on our end [R is slow as you probably know])

Support prefixed study ids

We need to have multiple study id namespaces, so use a CURIE, i.e. a short prefix, a colon, and a suffix. A study id would be, for example, 'o:123' or 'ot:123' in contexts that expect a study id, but converted to 'o_123' or 'ot_123' in contexts such as file system calls that expect a file name.

Continue to accept unprefixed ids for now for phylografter-originated studies. Feature to consider: allow prefixed 'pg:456' as a synonym for '456' ? Not sure, we need to think about this.
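A minimal sketch of the two conversions (helper names are hypothetical):

def id_to_filename(study_id):
    # 'ot:123' -> 'ot_123'; unprefixed ids like '456' pass through unchanged.
    return study_id.replace(":", "_")

def filename_to_id(name):
    # 'ot_123' -> 'ot:123'; a bare integer like '456' is left as-is.
    prefix, _, suffix = name.partition("_")
    return "%s:%s" % (prefix, suffix) if suffix else name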

API documentation update needed

Documentation is currently in two places:

https://github.com/OpenTreeOfLife/api.opentreeoflife.org/blob/master/DESIGN.md
https://github.com/OpenTreeOfLife/opentree/wiki/Open-Tree-of-Life-APIs

Some of the information is stale and I suspect not all API services are documented.

I recommend moving the documentation from the opentree wiki into the api repo, leaving a forwarding address behind. I would call it something other than DESIGN.md, which does not have the right connotation for me. If there are multiple files, create a doc/ directory. Leave a pointer to the documentation in the README.md file.

Jonathan

Remove web2py

Currently our repo has a copy of web2py in it, which means Travis CI doesn't have to download it on every commit.

But recently @jimallman and I decided that this is suboptimal, so I will remove web2py from this repo and just have Travis CI do a bit of extra work each time.

This will make our repo much smaller, cleaner and easier to navigate.

Implement searching for a property across multiple NexSON files

To implement searching inside NexSON files, a full-text index of our NexSON files will be essential.

Currently, without an index, we can use the Github Search API to search for a string in our NexSON files, but that API has no idea about the structure of NexSON, so we cannot do a search like "find all studies that have a focalClade = X".

Additionally, the Github Search API can tell us the studies that a search term appears in, but not much else. We would need to do multiple Github API requests to grab the relevant metadata for each study returned in the search results. This would be quite slow and would quickly use up all our API credits.

On the most recent Opentree call @chinchliff mentioned that there was some kind of full-text index API that we can use. Can you add some details about that here, @chinchliff ?

Tagged releases of OTOL data repo

If we want to offer periodic "blessed" versions of the OTOL data (studies and/or the synthetic tree), we should probably use the existing tagging facilities in Github, as described here:
http://git-scm.com/book/en/Git-Basics-Tagging

Based on a quick look, I think we should:

  • date these in a sortable order, e.g., 'OTOL.2014-11-29.a'
  • consider annotated tags (for checksum and explanation) vs. lightweight tags
  • watch for any additional requirements for "proper publication"

Define and spec authentication needs

Recently I learned that while the curation app will have Github OAuth tokens, the main opentree app uses Janrain.

More specification is needed around how those will tie into the OTOL API.

Implement POST /v1/study/N.json

This allows users of the API to submit an updated study, completely overwriting the previously stored study data.

This will not support partial updates, but that feature is planned for the near future.

Branches and pull requests for pending changes

I've added a test branch modified-study-10-jimallman to the treenexus repo, and an accompanying pull request:
https://github.com/OpenTreeOfLife/treenexus/pull/5

Assuming that these actions can be largely automatic, our curation tool might work something like this:

  1. When a user edits a study, we pull from their "pending changes" branch if one exists (based on naming convention above), else from master branch.
  2. When this user saves changes in the curation UI, we push them to Github. If they pulled the study from the master branch, we implicitly create the "pending changes" branch named as above, and commit to that branch. Until that branch is merged back into master, this is where they'll work on this study.
  3. Once the user (and Travis?) agree that this data is ready for sharing and synthesis, a pull request is generated automatically and appears to the OTOL synthesis team.
  4. Acceptance of the pull request could be handled automatically (via hooks and validation) or manually by the synthesis team or curators in a taxonomic area(?). Once the changes are merged back into the master branch, they potentially enter into synthesis.

Pending branches aren't truly private, but still pretty useful. They can serve as a holding area for experimental work, etc. We can accommodate other use cases by altering the root name for a branch, or perhaps using git notes.

default controller do_commit does pull, commit, and push

Because we are committing both the raw NexSON from the client and the annotated form, each (successful) POST and PUT operation does 2 git pulls, 2 git commits, and 2 git pushes.
We should probably be doing the pulls and pushes as background operations to make the POST and PUT operations more responsive.

Run validation code on POSTed NexSON

@mtholder has written code which will validate NexSON and produce errors (a JSON blob that maps to XML that violates the NeXML schema) and warnings (files that are not ideal with respect to OpenTree's needs).

More details on the validator are here: https://github.com/OpenTreeOfLife/api.opentreeoflife.org/blob/master/nexson-validator/README.md

If there is an error, the POST should be rejected and the errors should be returned as JSON, so users of the API can report what is wrong. If no errors are present, the POSTed data should be committed, as it is now.

Additionally, the warning annotations (if any exist) should be made as an additional commit on the given WIP branch for the given user and study.

minimize time that repo-level lock is held

Currently, while a write operation holds the lock, it (among other things) writes the NexSON to a file. The following would cut down on lock contention (see the sketch below):

  1. Write to a tempfile location before obtaining the lock.
  2. Get the repo-level lock.
  3. Move the file into place.
  4. Complete the procedure and release the lock.
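A minimal sketch of that ordering, assuming a repo-level lock object that works as a context manager:

import os
import tempfile

def write_nexson_with_minimal_lock(nexson_str, dest_path, repo_lock):
    # Slow part first: serialize to a tempfile before taking the lock.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest_path))
    with os.fdopen(fd, "w") as out:
        out.write(nexson_str)
    with repo_lock:  # lock held only for the cheap, atomic rename
        os.replace(tmp_path, dest_path)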
