orcasound / orcasite

Live-listening web app -- http://live.orcasound.net

Home Page: http://live.orcasound.net

License: GNU Affero General Public License v3.0

JavaScript 1.05% Elixir 56.63% HTML 0.36% Dockerfile 0.47% Shell 0.33% TypeScript 41.11% Procfile 0.02% CSS 0.02%

orcasite's Introduction

Orcasite


This repo contains the web backend and frontend for the Orcasound app, which plays a live audio stream in the user's browser of choice. The backend is an Elixir app using the Phoenix framework. The frontend is built in Next.js.

Contributing

Please check out the CONTRIBUTING doc for tips on making a successful contribution, as well as learning resources!

Quick Start (e.g. at hackathons):

  • Start the Docker containers: docker-compose up
  • Wait for the containers to start up
  • Navigate to localhost:3000 to view the website
  • Navigate to localhost:4000 to access the API server

Note This assumes you have installed docker and docker-compose.

Running in Docker

Docker is the quickest way to get the project up and running, especially if you haven't already set up Erlang/Elixir/Node. The only requirement is that you have both docker and docker-compose installed on your machine.

Once you clone the repository, you can just run docker-compose in the root directory:

docker-compose up

This will build an image locally with all the dependencies you need. It will also pull a pre-built database image from Docker Hub, automatically configure everything, and start both the Phoenix and Next.js servers. The orcasite page will be accessible at http://localhost:3000 as soon as the web container finishes starting up.

Developing

The default Docker configuration is great for getting the project up and running, but if you want to do development, you'll want to be able to modify the source code without re-building an entire Docker image.

Setup options

There are several options for how to set up your dev environment:

  • Using VS Code dev containers
  • Using docker-compose directly
  • Setting up directly on your machine

Once you have one of these up and running, see the Getting everything running section for how to start the project.

Using VS Code

This project comes with a devcontainer.json configuration which can be used with VS Code. This takes care of all the docker-compose stuff in the background so you don't have to worry about it. When you open the project in VS Code, it should prompt you to start it in a dev container (assuming you've installed docker and the dev containers extension). Once the dev container starts, you can open a new terminal window in VS Code to run commands. See the commands below for how to get everything started.

Using docker-compose directly

If you prefer not to use VS Code dev containers, the easiest way to develop in docker is by starting up docker-compose manually (using the dev compose file):

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d

Note You can also set the COMPOSE_FILE environment variable to avoid having to specify the compose files every time. This one-liner will do that for you no matter which subdirectory you're in:

export COMPOSE_FILE=`git rev-parse --show-toplevel`/docker-compose.yml:`git rev-parse --show-toplevel`/docker-compose.dev.yml

After setting COMPOSE_FILE, you can just run docker-compose up -d from anywhere in the project.

Once you have the services started, you can start a session inside the web container:

docker-compose exec web bash

From here you can run the commands below to get everything started.

Note The docker-compose setup uses bind mounting for the source files, which means if you edit the source on your host file system, the changes will get picked up and live reloaded in your browser.

Set up directly on machine

If Docker doesn't suit your needs, you can follow these instructions to get everything running directly on your machine.

Language

You will need to install Erlang, Elixir, and Node.js. You can use a tool like asdf to manage your language dependencies.

Language-level dependencies can be found under .tool-versions.

Database

You will need to install Postgres and set up the postgres user with a password. The default connection details are:

username: "postgres"
password: "postgres"
database: "orcasite_dev"
hostname: "localhost"
port: 5432

You can pass in custom values using env variables. Full details can be found in dev.exs.
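
For illustration, the override pattern in dev.exs looks roughly like the following sketch (the env variable names here are hypothetical; dev.exs itself is the authoritative source):

# Hypothetical sketch of reading Postgres overrides from env variables;
# variable names are illustrative, not necessarily those used in dev.exs.
import Config

config :orcasite, Orcasite.Repo,
  username: System.get_env("POSTGRES_USER", "postgres"),
  password: System.get_env("POSTGRES_PASSWORD", "postgres"),
  database: System.get_env("POSTGRES_DB", "orcasite_dev"),
  hostname: System.get_env("POSTGRES_HOST", "localhost"),
  port: String.to_integer(System.get_env("POSTGRES_PORT", "5432"))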

Orcasite uses PostGIS for location data inside of Postgres. To install it on macOS, run

brew install postgis
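
For reference, enabling the extension from Ecto typically looks like this migration sketch (module name hypothetical; the project's own migrations are authoritative):

# Sketch of a migration enabling PostGIS; not copied from the codebase.
defmodule Orcasite.Repo.Migrations.EnablePostgis do
  use Ecto.Migration

  def up, do: execute("CREATE EXTENSION IF NOT EXISTS postgis")
  def down, do: execute("DROP EXTENSION IF EXISTS postgis")
end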

Getting everything running

Once you have your environment setup via one of the options above, you can start the project. You'll need to run both the Phoenix server and the Next.js server.

Server

In a new terminal session, from the root directory:

> cd server/
> mix deps.get
> mix ecto.setup
> iex -S mix phx.server

Note For future runs, you can skip running the mix commands and just start the server with iex -S mix phx.server.

The server should soon be available at http://localhost:4000.

UI

Open another terminal session and run these commands to start Next.js:

> cd ui/
> npm i
> npm run dev

Once everything finishes starting up, you'll be able to access the UI at http://localhost:3000.

Tests

UI

The new version (v3) is currently under development, rapidly changing, and has no tests yet.

Deployment

For the moment, this app is running in a Heroku instance with mix phx.server. To access the console, run:

heroku run FEED_STREAM_QUEUE_URL="" REDIS_URL="" POOL_SIZE=2 iex -a <app name> -- -S mix

The POOL_SIZE config var is necessary because the current Postgres database is limited to 20 connections. You can read more about it here.
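
As a sketch of how a POOL_SIZE var typically feeds into an Ecto repo in a Phoenix app (illustrative only; the project's runtime config is authoritative):

# Hypothetical fragment of config/runtime.exs (or prod config):
# cap the Ecto pool so all dynos stay under the 20-connection limit.
config :orcasite, Orcasite.Repo,
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")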

Emails

Orcasite uses MJML for email templating. There are a few online MJML renderers, including: mjml.io and grapes.js

API

An API is available using the JSON:API spec. For documentation and examples of the available endpoints, navigate to the server's /api/json/swaggerui or /api/json/redoc path. As an example, you may fetch the full list of feeds with:

curl -s https://beta.orcasound.net/api/json/feeds \
-H "Content-Type: application/vnd.api+json" \
-H "Accept: application/vnd.api+json"

orcasite's People

Contributors

akasprzok, axelson, dependabot[bot], dhananjaypurohit, mrose15, paulcretu, rahulsnkr, samanthaberk, scottveirs, shawnvita, skanderm, vibecastor


orcasite's Issues

Add notifications for candidate and confirmed detection events

Is your feature request related to a problem? Please describe.
Admins are currently not notified when there are new candidates and must regularly check the admin panel. Admins must also manually send email notifications when there is confirmed activity on a node.

Describe the solution you'd like

  1. Automated emails can go to admins when there are new candidates.
  2. Notifications can then go out to subscribers when there is a confirmed detection.
  3. Allow management of notifications through the admin panel.

Describe alternatives you've considered
The alternative is to keep the system as-is: admins send manual notifications.

Additional context
Notifications can be later expanded to browser push notifications, partner organizations, etc. I'd like to make use of Ash's Notifiers API for notification hooks that go out to various channels: https://hexdocs.pm/ash/notifiers.html
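
A minimal sketch of what such a notifier could look like (module name and dispatch logic are hypothetical, not from the codebase):

# Hedged sketch of an Ash notifier; see https://hexdocs.pm/ash/notifiers.html
defmodule Orcasite.Notifiers.NotifySubscribers do
  use Ash.Notifier

  # Ash calls this after an action completes on a resource that
  # declares this notifier.
  @impl true
  def notify(%Ash.Notifier.Notification{resource: resource, action: action}) do
    # e.g. email admins when a candidate is created, or fan out to
    # subscribers when a detection is confirmed
    IO.inspect({resource, action.name}, label: "ash notification")
    :ok
  end
end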

Add listener count to feed list

It would be nice if we could show the number of listeners next to each feed in the feed list.

This is harder than it looks because of how Presence works—we'll probably need a new websocket channel for getting listener counts without actually joining each feed's channel.
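
One possible shape for that channel, as a hedged sketch (module, topic, and Presence names are hypothetical):

# Sketch: one shared channel reporting per-feed Presence counts, so
# clients never have to join each feed's own channel.
defmodule OrcasiteWeb.ListenerCountsChannel do
  use Phoenix.Channel

  def join("listener_counts", _params, socket), do: {:ok, socket}

  # Reply with the current listener count for a single feed
  def handle_in("count", %{"feed_id" => feed_id}, socket) do
    count = OrcasiteWeb.Presence.list("feed:" <> feed_id) |> map_size()
    {:reply, {:ok, %{feed_id: feed_id, count: count}}, socket}
  end
end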

Make map button more obvious on mobile

  • caret in upper right is easy to miss, non-obvious that it shows a map
  • instead, add a button that clearly indicates there's a map
  • button could be in upper right or even bottom center (above player)
  • see airbnb map button for ideas

Current button

(screenshot omitted)

Could look something like this instead

(mockup screenshot omitted)

Incorporate timestamps from orcanode

The player should be able to take advantage of the timestamped .ts files from orcanode. For example, if someone hears something and moves back to re-listen, the player could make a note of the time.

Reset map zoom level on node change

Expected

When navigating between feeds, the map should zoom back out (or in) to a preset zoom level

Actual

The map stays on the last zoom level

Selecting new node for listening does not change feed

On live.orcasound.net you can currently click on a hydrophone node and listen to its live feed (e.g. Bush Point). But if you then select a new node (e.g. Orcasound Lab), the old feed continues to play, even upon subsequent clicks of the "Listen to" button or the play/pause button.

If you reload live.orcasound.net, however, and then first select Orcasound Lab for listening, its feed plays successfully. So, it seems the inter-linking is broken, rather than one of the feeds being down...

Add landing page

  • move feed list to /feeds
  • check designs and v2 site for potential landing page
  • what's the CTA? make it easy to get started listening

Report Sound button not accessible upon playing feed in v3 on iOS

Describe the bug
While the "REPORT SOUND" button is visible and functioning well in on the desktop version of the v3 UI, on iOS devices it is not accessible to the user.

To Reproduce
Steps to reproduce the behavior on iOS devices (screenshots below from Chrome, Safari, and Firefox -- in that order):

  1. Point browser to beta.orcasound.net
  2. Select a hydrophone location (aka feed). In this case I chose Orcasound Lab.
  3. After scrolling down a bit and then back up, pull down from upper right white space to reveal dynamic map behind drawer.
  4. Press play button for active feed (Orcasound Lab, its hydrophone icon blue on map)
  5. You may very briefly see the "REPORT SOUND" button overlaid on the map, but it is almost instantly covered by the default iOS audio player which blacks out the phone screen and renders typical audio controls.

Expected behavior

As a Concerned Community Scientist,
I want to be able to view the dynamic map UI and interact with the "REPORT SOUND" button,
while listening to the live audio stream from the feed I've most recently selected to be played.

Screenshots

(Three screenshots omitted: Chrome, Safari, Firefox)

Smartphone:

  • Device: iPhone13
  • OS: iOS 16.6
  • Browsers: Chrome (v. 116.0.5845.103), Safari (v. ? ), Firefox (v. 116.3 )

Additional context
An unexpected benefit of this bug is that the default iOS players -- both the black active audio screen and the lock screen audio player UI -- allow scrubbing back through the HLS segments. That is a feature we've not yet implemented for the Orcasound web app player, but it is one that is often used with the Lime Kiln and Point Robinson hydrophone audio via the YouTube scrubber by Facebook community scientists to review audio recorded in the last ~12 hours.

(Two screenshots of the default iOS players omitted)

Real Time Listener Sound Identification

From ROADMAP Trello Board:
https://trello.com/c/u3c13GKf/46-human-classification-ui-real-time

⭐ New Feature! ⭐

As a community scientist, I want to be able to select whether I am hearing a "whale", "ship", or "fish" when I am using the Orcasound Live Listening App, so that I can assist with identification and learn to identify sounds.

⭐ Acceptance Criteria ⭐

  • Options should appear below play button
  • Once an option is clicked, a pop-up should appear to gauge certainty with "Rather Not Say", "Beginner", "Competent", "Proficient", and "Expert" as options
  • Should fit with current wireframes of Orcasite V3.0 (contact Zarema Ross in Orcasound Slack for most up to date UX wireframes)

⭐ Technical Requirements ⭐

  • Document APIs used for this feature (GraphQL? AWS?)
  • Determine if modifications are needed
  • TBD- Dev to fill reqs

🐳 First Time Contributing Code to Orcasound? 🐳

Check out Orcasite's README.md and CONTRIBUTING Documents to learn how to get started!

Chrome+Safari begins playback at first segment in .m3u8 manifest

It would be ideal if player initiated playback closer to real-time...

In the case I examined, the downloaded .m3u8 entry had hundreds of segments listed, but watching in the JavaScript Console, playback began at live000.ts -- no matter if the page was reloaded, with or without first clearing the local Safari cache and cookies.

Ensure human detection timestamp accurately indicates when the user hears something interesting

The problem

The timestamps of human detections are not always well-aligned with the audio signals that inspired them. This can create confusion when trying to use the labels to find signals of interest, especially short ones like echolocation clicks.

In version 1 the detection timestamp was determined when the user selected the "I hear something interesting" button. This worked well. In version 2, the timestamp was based on when the user completed their free-form text annotation and selected the "SUBMIT" button. Because some users take a while (e.g. minutes) to type in their annotation and hit the SUBMIT button, the annotations were less synchronized with their associated signals, and therefore were less useful.


This feature request involves returning to the version 1 method, or something akin to it.

The proposed solution

In version 3 of the live-listening app and going forward, it would be optimal if human detections were timestamped based on the first indication by the user that they are hearing a signal and want to annotate it. As a community scientist helping label audio data, I want the timestamp for my detection to be established when I first begin the annotation process.

Assuming the v3 annotation UI looks something like what is in the recent mockups (screenshots omitted), the ideal solution would include generating a timestamp as soon as the "I hear something" button is selected. The timestamp should not be affected by how long it takes the user to select default labels (e.g. via the labeling buttons for whale, ship, fish, other), add any comments, and ultimately select the submit button.

Describe alternatives you've considered
None

Additional context

Skander and Scott came up with the idea of detections vs candidates. A detection occurs whenever any user annotates the live audio data stream. Candidates are temporal groups of detections. The benefit of this nomenclature is that review of a candidate table is more efficient; instead of listing every record, the algorithm at least begins the process of grouping interesting signals into an acoustic "bout."

Here's an example of the Candidates list in the Orcasound admin UI (as of Dec 2021): (screenshot omitted)

And here's a screenshot of the same data records, but exported from the Heroku Postgres database to provide an Orcasound data visualization opportunity: (screenshot omitted)

Notes from an Orcasound Slack thread on 12/15/21 related to this feature request:

Paul:

Just took a look at this too, turns out that yes the timestamp is calculated properly on the backend using the playlist epoch + offset. The problem then is if the offset being sent to the server by the UI is correct. In the DetectButton component, the offset is indeed calculated right when the button is pressed. Unfortunately we don't use DetectButton anymore, I think that's the button from v1. For v2 it looks like we moved over to DetectionDialog component, which only grabs the offset when the submit button gets clicked in the dialog/modal.

DetectionDialog.js
<Button
  onClick={() => {
    this.onDetect(submitDetection)
  }}
  color="secondary"
>
  SUBMIT
</Button>

So in short, for v1 detections corresponded to the exact time the button was clicked, and for v2 that's no longer the case and now is dependent on when modal dialog is submitted.

Skander:

inserted_at is when the database entry was made. playlist_timestamp is the epoch of the .ts file, and then player_offset is actually when the detection happened relative to the .ts file. Like e.g. 5 seconds into a 10-second HLS segment.

Valentina, while reviewing functions within the orcasite code, shared a screenshot (omitted).

Scott (scientist, not dev!):

My understanding is that the DateTime is derived from the audio stream by the player knowing two things:

  1. the Unix timestamp (UTC time zone) when the current stream of HLS segments began
  2. the offset (in seconds, I think) between that Unix timestamp and the current position of the player in the stream

So, to get the timestamp of the audio data when a user is hearing a signal, you would add 1+2 (and then convert to whatever format and time zone you want)

Confirming that offset should have units of seconds, per https://www.unixtimestamp.com/
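
In code form, the arithmetic Scott describes looks roughly like this (values invented for illustration):

# Illustrative values only, not from a real detection
playlist_timestamp = 1_600_000_000  # 1. Unix epoch (UTC) when the HLS stream began
player_offset = 125.0               # 2. seconds from that epoch to the player position

# Add 1 + 2, then convert to a UTC DateTime
detection_time = DateTime.from_unix!(playlist_timestamp + trunc(player_offset))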

Skander:

"Lockout stuff is mostly about not getting spammed by bots. The timestamps should be stored as UTC in the db, but it’s possible it’s being displayed in a local timezone in the admin panel

Specifically field(:timestamp, :utc_datetime) in
https://github.com/orcasound/orcasite/blob/58d0b89e8fc6aca79fd1ff14c8367c50f7e92a92/lib/orcasite/radio/detection.ex

detection.ex
defmodule Orcasite.Radio.Detection do
  use Ecto.Schema
  import Ecto.Changeset

  alias Orcasite.Radio.{Feed, Candidate}

  # ...excerpt truncated; the schema includes field(:timestamp, :utc_datetime)
end

Make detections API publicly accessible

Currently, the detections endpoint requires authorization:

(GraphiQL demo query screenshot omitted)

It would be nice to make it publicly accessible so the data is available to use. Alternatively, if public access isn't feasible, at least add a way of generating API keys. Potential problems with public access:

  • no rate limiting currently
  • don't expose sensitive data (sourceIp)

Add UX research opt in (to build separate email list)

Is your feature request related to a problem?
V3 UI mockup and prototype do not yet have a way for concerned community scientists to opt in to UX research.

Describe the solution you'd like
Possible solutions:

  1. A CTA that links to a Google or MailChimp form.
  2. Add a checkbox to whatever form is used to input new subscribers. (May require porting from one form to two Mailchimp accounts, one for subscribers and one for UX research purposes.)

Describe alternatives you've considered

Could recycle old "Feedback" button (at bottom of V2 UI), but the "Subscription" button seems more applicable.

Additional context
In fall 2022, we are paying for and using Mailchimp accounts. We could theoretically change to some other means of managing email subscriptions, or a CRM solution.

In fall 2023, we are beta-testing custom emails sent via the Ash admin UI and AWS. It may be possible to add a checkbox to the Google form that then triggers a new subscriber to be added to a UX-research subscription (which we would need to create).

Move feed urls to /listen

  • to avoid future issues with feed names conflicting with routes (especially if we ever allow self-serve node creation)
  • now is the time for breaking changes

Add analytics

  • what events to track? (see v2 for ideas)
  • alternatives to Google Analytics?

CPU peaks after long runs in Chrome

I've been noticing that if I leave the player running for a long time (e.g. hours), the CPU usage associated with the Chrome tab increases, eventually reaching near/past 100%. While the audio performance doesn't seem to be impacted -- even if working on other stuff in other tabs or applications while listening -- one resulting non-optimal behavior in the UI is that the player isn't visible for extended periods (many seconds) when the tab is re-selected. So, if I hear something interesting and switch back to the tab to tag the audio data, the UI isn't immediately available.

Redo feed page content and styling

  • todo: update this issue with screenshots of designs


Player loops in old folder instead of detecting new folder

NOTE: I think this behavior occurs on live.orcasound.net, but this issue report is based on listening (via Macbook+Chrome) to beta.orcasound.net/orcasound-lab through one of the automated (cronjob) re-boots on the half hour.

I listened from ~10 minutes before the re-boot until about 20 minutes after the re-boot. The player got a 404 on segment live0329.ts, one of the final 6 segments listed in the manifest but nonexistent in the associated data folder: streaming-orcasound-net/rpi_orcasound_lab/hls/1569799936. Then it seemed to cycle back some amount (30 seconds? 90?) and play again while appearing to download the now-aging latest.txt and manifest.m3u8 files, repeatedly getting the 404 error on live0329.ts.

I failed to check whether the newest latest.txt file contained the old timestamp (1569799936) or the new one (1569803534), but I suspect the new timestamp is being downloaded and the player is not responding to it -- due to a bug?

As of 18:24:00 local time (Seattle), the latest.txt file contains 1569803534 and I'll check it after the next re-boot in ~5 minutes...

Should license be more permissive? Copyright assigned?

The current license is AGPL with no copyright assigned. Most recent Orcasound repositories have utilized the MIT license; a few have specified Orcasound as a copyright holder, e.g.:

Describe the solution you'd like
A discussion of open source licensing in #dev-general (Orcasound Slack channel). Then, possibly, a shift in licensing here.

Describe alternatives you've considered

  • Copyright held by contributors
  • Copyright held by an over-arching foundation
  • Pre-agreements about licensing and copyright within on-boarding materials?

"I hear something interesting" prompt is accepting empty values on listen live page.

Describe the bug
"I hear something interesting" prompt is accepting empty values on listen live page.

To Reproduce
Steps to reproduce the behavior:

  1. Go to the listen live button
  2. Click on the listen live tab
  3. Select any option from the drop-down menu
  4. Click on the play icon, then "I hear something interesting"
  5. Submit it empty
  6. See error

Expected behavior
It should validate the field before submitting and display an error message if it is empty.

Screenshots
(animated GIF omitted)

Download Past Live Listening Recordings from Recent Activities on Orcasite Page

From ROADMAP Trello Board:
https://trello.com/c/68KGGm5a/27-user-access-to-recordings

⭐ New Feature! ⭐

As a community scientist, I want to be able to download recordings of past Live Listening sessions in which interesting sounds were captured on the same page as the Live Listening button, so that I can use them on future community projects.

⚠️ Playback ticket on Orcasite Issue #76 must be completed before work on this ticket can begin. ⚠️

⭐ Acceptance Criteria ⭐

  • "Record Download" button added to each item in the Recent Activities table
  • Recordings should download in the .wav format onto the user's desktop ❌ DEVS PLEASE VERIFY
  • Should fit with current wireframes of Orcasite V3.0 (contact Zarema Ross in Orcasound Slack for most up to date UX wireframes)

🐳 First Time Contributing Code to Orcasound? 🐳

Check out Orcasite's README.md and CONTRIBUTING Documents to learn how to get started!

Google Analytics data isn't being collected for the website

Google Analytics data isn't being collected for the website. Brendan got this report:

No Hits
First detected: May 22, 2020 7:57:17 AM
Last detected: Aug 2, 2020 1:07:29 PM
Property Orcasound May 2020 Launch is not receiving hits. Either your site is not receiving any sessions or it is not tagged correctly.
Google Tag Assistant Recordings can help you ensure that your site is tagged correctly.
Learn more: https://support.google.com/analytics/bin/answer.py?answer=1008080&hl=en-US

Stream Past Live Listening Recordings on Orcasite Page

From ROADMAP Trello Board:
https://trello.com/c/68KGGm5a/27-user-access-to-recordings

⭐ New Feature! ⭐

As a community scientist, I want to be able to listen to recordings of past Live Listening sessions in which interesting sounds were captured on the same page as the Live Listening button, so that I can listen to captured sounds even if I receive the notification too late.

⚠️ This ticket must be finished before Orcasite Issue #77 for downloading recording work can begin. ⚠️

⭐ Acceptance Criteria ⭐

  • Past 5 recordings should display on the Live Listening page in a "Recent Activity" table, with options to load more
  • Each recording should include "Date of Recording", "Time" in hh:mm:ss format, "Sounds Identified"
  • Should note that recordings can be used for the listener's own purposes (show Orcasound Creative Commons License) or shared with peers.
  • Should fit with current wireframes of Orcasite V3.0 (contact Zarema Ross in Orcasound Slack for most up to date UX wireframes)

🐳 First Time Contributing Code to Orcasound? 🐳

Check out Orcasite's README.md and CONTRIBUTING Documents to learn how to get started!

Polish landing page

  • Re-add learning section
  • Add an image or something to make mobile look less austere (ideally snap point to show map)
  • Edit copy


Admin UI fails to play candidate with time close to restart of streaming container

Describe the bug
This is an edge case (rare) where playback of recorded HLS segments fails within the Orcasound admin UI.

To Reproduce
Steps to reproduce the behavior:

  1. Go to live.orcasound.net/admin (logged in as an admin user)
  2. View candidate dated 9/1/2021 6:30:09AM
  3. Select Play button
  4. See spinning progress indicator, but hear no playback

Expected behavior
Normally, candidates play back the recorded HLS segments.

Screenshots
(screenshot omitted)

Desktop (please complete the following information):

  • OS: OSX 10.15.5
  • Browser: Chrome 92.0.4515.159 (Official Build) (x86_64)

Additional context
The manifest (.m3u8 file) associated with this bug has 2158 segments (each supposedly 10 seconds long). The Raspberry Pi at this hydrophone location (Orcasound Lab) has a cron job that restarts the streaming container every 6 hours, starting at 00:30 each day. The expected seconds of data in those six hours, less the aggregate seconds of data in the relevant S3 bucket (Unix datetime stamp = ), shows the amount of time the restart takes: ~20 seconds [(3600 × 6) − (2158 × 10) = 20].

The root of this playback failure is probably that the time of the human annotation (06:30:09 in this case) is within a restart period where there was no streaming happening... i.e. within a data gap. Perhaps this happened because the annotator selected the "I hear something interesting" button just after the container had restarted, and as their player reached the final segment available in the latest (then penultimate) S3 folder?

Possible solutions:

  1. Long-term: Shift strategy within orcanode repo to avoid any data gaps. For example just stream to a S3 folder continuously without any streaming container restarts.
  2. Short-term: Write custom playback code for this edge case
  3. Short-term: For each annotation, capture and utilize the time of the button push, not the submit button that is selected after an unknown period during which the user is typing in their annotation text.

Detection dialog closes on stream stop

Expected

When submitting a detection, the detection dialog/modal should stay open until the user submits the detection or closes the dialog/modal

Actual

The detection dialog closes if the stream stops or errors, because the DetectionDialog component gets unmounted when the stream URL is undefined
