
microsoft / synapseml


Simple and Distributed Machine Learning

Home Page: http://aka.ms/spark

License: MIT License

Languages: Shell 0.75%, Scala 85.36%, Python 10.76%, R 0.02%, JavaScript 1.99%, Dockerfile 0.78%, CSS 0.34%
Topics: spark, pyspark, azure, scala, microsoft, ml, machine-learning, databricks, cognitive-services, lightgbm

synapseml's Introduction

SynapseML

Synapse Machine Learning

SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. SynapseML provides simple, composable, and distributed APIs for a wide variety of machine learning tasks such as text analytics, vision, anomaly detection, and many others. SynapseML is built on the Apache Spark distributed computing framework and shares the same API as the SparkML/MLLib library, allowing you to seamlessly embed SynapseML models into existing Apache Spark workflows.

With SynapseML, you can build scalable and intelligent systems to solve challenges in domains such as anomaly detection, computer vision, deep learning, text analytics, and others. SynapseML can train and evaluate models on single-node, multi-node, and elastically resizable clusters of computers. This lets you scale your work without wasting resources. SynapseML is usable across Python, R, Scala, Java, and .NET. Furthermore, its API abstracts over a wide variety of databases, file systems, and cloud data stores to simplify experiments no matter where data is located.

SynapseML requires Scala 2.12, Spark 3.4+, and Python 3.8+.

Topics and Links:
  • Build: Build Status, codecov, Code style: black
  • Version: Version, Release Notes, Snapshot Version
  • Docs: Website, Scala Docs, PySpark Docs, Academic Paper
  • Support: Gitter, Mail
  • Binder: Binder
  • Usage: Downloads

Features

  • Vowpal Wabbit on Spark: Fast, Sparse, and Effective Text Analytics
  • The Cognitive Services for Big Data: Leverage the Microsoft Cognitive Services at Unprecedented Scales in your existing SparkML pipelines
  • LightGBM on Spark: Train Gradient Boosted Machines with LightGBM
  • Spark Serving: Serve any Spark Computation as a Web Service with Sub-Millisecond Latency
  • HTTP on Spark: An Integration Between Spark and the HTTP Protocol, enabling Distributed Microservice Orchestration
  • ONNX on Spark: Distributed and Hardware Accelerated Model Inference on Spark
  • Responsible AI: Understand Opaque-box Models and Measure Dataset Biases
  • Spark Binding Autogeneration: Automatically Generate Spark bindings for PySpark and SparklyR
  • Isolation Forest on Spark: Distributed Nonlinear Outlier Detection
  • CyberML: Machine Learning Tools for Cyber Security
  • Conditional KNN: Scalable KNN Models with Conditional Queries

Documentation and Examples

For quickstarts, documentation, demos, and examples please see our website.

Setup and installation

First, select the platform you are installing SynapseML into:

Microsoft Fabric

In Microsoft Fabric notebooks, SynapseML is already installed. To change the version, place the following in the first cell of your notebook.

%%configure -f
{
  "name": "synapseml",
  "conf": {
      "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:<THE_SYNAPSEML_VERSION_YOU_WANT>",
      "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
      "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
      "spark.yarn.user.classpath.first": "true",
      "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

Synapse Analytics

In Azure Synapse notebooks, place the following in the first cell of your notebook.

  • For Spark 3.4 Pools:
%%configure -f
{
  "name": "synapseml",
  "conf": {
      "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.4",
      "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
      "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
      "spark.yarn.user.classpath.first": "true",
      "spark.sql.parquet.enableVectorizedReader": "false"
  }
}
  • For Spark 3.3 Pools:
%%configure -f
{
  "name": "synapseml",
  "conf": {
      "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.4-spark3.3",
      "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
      "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
      "spark.yarn.user.classpath.first": "true",
      "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

To install at the pool level instead of the notebook level, add the Spark properties listed above to the pool configuration.

Databricks

To install SynapseML on the Databricks cloud, create a new library from Maven coordinates in your workspace.

For the coordinates use: com.microsoft.azure:synapseml_2.12:1.0.4 with the resolver: https://mmlspark.azureedge.net/maven. Ensure this library is attached to your target cluster(s).

Finally, ensure that your Spark cluster has at least Spark 3.2 and Scala 2.12. If you encounter Netty dependency issues, please use DBR 10.1.

You can use SynapseML in both your Scala and PySpark notebooks. To get started with our example notebooks, import the following Databricks archive:

https://mmlspark.blob.core.windows.net/dbcs/SynapseMLExamplesv1.0.4.dbc

Python Standalone

To try out SynapseML on a Python (or Conda) installation, first install Spark via pip with pip install pyspark. You can then use pyspark as in the above example, or from Python:

import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
            .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:1.0.4") \
            .getOrCreate()
import synapse.ml
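
Once the session above is created, a toy training run can confirm the package resolved. This is a minimal sketch only, assuming the synapseml package downloaded correctly; the inline data and column names are illustrative, and real LightGBM workloads need far more rows than this.

# Minimal smoke-test sketch (illustrative data; not a realistic training set)
from pyspark.ml.feature import VectorAssembler
from synapse.ml.lightgbm import LightGBMClassifier

df = spark.createDataFrame(
    [(0.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.5, 0.4, 0.0), (0.9, 0.1, 1.0)],
    ["f1", "f2", "label"])
assembled = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
model = LightGBMClassifier(featuresCol="features", labelCol="label").fit(assembled)
model.transform(assembled).select("label", "prediction").show()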

Spark Submit

SynapseML can be conveniently installed on existing Spark clusters via the --packages option, for example:

spark-shell --packages com.microsoft.azure:synapseml_2.12:1.0.4
pyspark --packages com.microsoft.azure:synapseml_2.12:1.0.4
spark-submit --packages com.microsoft.azure:synapseml_2.12:1.0.4 MyApp.jar

SBT

If you are building a Spark application in Scala, add the following lines to your build.sbt:

libraryDependencies += "com.microsoft.azure" % "synapseml_2.12" % "1.0.4"

Apache Livy and HDInsight

To install SynapseML from within a Jupyter notebook served by Apache Livy, use the following configure magic. You will need to start a new session after this configure cell is executed.

Excluding certain packages from the library may be necessary due to current issues with Livy 0.5.

%%configure -f
{
    "name": "synapseml",
    "conf": {
        "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.4",
        "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind"
    }
}

Docker

The easiest way to evaluate SynapseML is via our pre-built Docker container. To do so, run the following command:

docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release jupyter notebook

Navigate to http://localhost:8888/ in your web browser to run the sample notebooks. See the documentation for more on Docker use.

To read the EULA for using the Docker image, run: docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release eula

R

To try out SynapseML using the autogenerated R wrappers, see our instructions. Note: this feature is still under development, and some necessary custom wrappers may be missing.

Building from source

SynapseML has recently transitioned to a new build infrastructure. For detailed developer docs, please see the Developer Readme.

If you are an existing SynapseML developer, you will need to reconfigure your development setup. We now support platform-independent development and better integrate with IntelliJ and SBT. If you encounter issues, please reach out to our support email!

Papers

Learn More

Contributing & feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

See CONTRIBUTING.md for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.

Other relevant projects

Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.


synapseml's Issues

Need PySpark method to access the raw CNTK model

This is needed to inspect the neural network models, especially to get the shape of the input and output layers. There appears to be a Scala method for this purpose, but it is not available in the PySpark API.
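
Until such a method exists, one possible workaround is reaching through the PySpark wrapper to its underlying JVM object via py4j. This is only a sketch of an unsupported pattern, and getModel below is a hypothetical accessor name, not a confirmed part of the Scala API.

# Sketch of a py4j workaround (unsupported); "getModel" is a hypothetical accessor name.
def underlying_java_object(pyspark_stage):
    # Autogenerated PySpark wrappers keep a handle to their JVM counterpart.
    return pyspark_stage._java_obj

# java_model = underlying_java_object(cntk_model)
# raw_model = java_model.getModel()  # hypothetical: whatever accessor the Scala side exposes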

Document Pip install

  • Is this currently possible?
  • If so, how?
  • This might make life easier for those using PyCharm.

GPU Work

  • README.md link to the 401 notebook as a demonstration of the GPU functionality.
  • Possibly figure out a way to still include the 401 notebook so it's visible.
  • Add to the release procedure an item for testing the 401 notebook.

Support cross-validation based FindBestModel

Add a new Estimator - FindBestModelCV - that is like FindBestModel, but uses cross-validation instead of evaluation against a dataset.

FindBestModelCV would take a list of untrained models, run cross-validation for each of them using the same fold splits, and compare metrics. It would then train the best model on the full data and return that as the output Model.

Additionally, FindBestModelCV should produce a table of metrics for all sweeps.
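
Until such an estimator exists, a rough approximation with stock SparkML is to wrap each candidate in its own CrossValidator run with a shared seed so every candidate sees the same fold splits. This is a sketch only; train_df, the candidate list, and the column defaults are assumptions.

# Sketch: approximate FindBestModelCV with stock SparkML (train_df and candidates assumed).
from pyspark.ml.classification import LogisticRegression, RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

evaluator = BinaryClassificationEvaluator(labelCol="label")
candidates = [LogisticRegression(labelCol="label"),
              RandomForestClassifier(labelCol="label")]

results = []
for estimator in candidates:
    cv = CrossValidator(estimator=estimator,
                        estimatorParamMaps=ParamGridBuilder().build(),
                        evaluator=evaluator,
                        numFolds=3,
                        seed=0)  # same seed => same fold splits for every candidate
    results.append((estimator, cv.fit(train_df).avgMetrics[0]))

best_estimator, best_metric = max(results, key=lambda r: r[1])
best_model = best_estimator.fit(train_df)  # retrain the winner on the full data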

How to read CIFAR images with Scala in mmlspark

Currently, example 301, which evaluates a pre-trained CNTK model with CIFAR-10 images, is written entirely in Python. This example uses pickle.load to read cifar-10-batches-py/test_batch and then parallelizes the result into a distributed RDD; however, I cannot directly use pickle to read the data in a Scala application.

I tried spark.readImages in mmlspark, but it does not seem to handle the cifar-10-batches-bin data well. I finally chose cookie-datasets to read the CIFAR data in Scala (the master branch of cookie-datasets still uses Spark 1.5, and I upgraded it to Spark 2.1 with the necessary changes).

By the way, since you only have Python examples and I have already ported example 101 and part of example 301 to Scala, would you like me to contribute this example code?

Install script action fails on HDInsight cluster

Hi folks,

I recently provisioned an HDInsight Spark 2.1 cluster and tried to install MMLSpark using the script action URI and instructions, as I've previously done for MMLSpark 0.6 without any issues. The script action fails, showing the following "Debug information" in Azure Portal:

{
    "href": "http://10.0.0.23:8080/api/v1/clusters/mawahwasb3/requests/40",
    "tasks": [
        {
            "href": "http://10.0.0.23:8080/api/v1/clusters/mawahwasb3/requests/40/tasks/156",
            "Tasks": {
                "attempt_cnt": 1,
                "command": "ACTIONEXECUTE",
                "command_detail": "run_customscriptaction ACTIONEXECUTE",
                "end_time": 1503679947306,
                "error_log": "/var/lib/ambari-agent/data/errors-156.txt",
                "exit_code": 1,
                "host_name": "hn0-mawahw.3ejwtsbjuzpurdzrrmda4wm3nd.gx.internal.cloudapp.net",
                "id": "156",
                "output_log": "/var/lib/ambari-agent/data/output-156.txt",
                "request_id": "40",
                "role": "run_customscriptaction",
                "stage_id": "0",
                "start_time": 1503679944176,
                "status": "FAILED",
                "stderr": null,
                "stdout": null,
                "structured_out": null
            }
        }, [...and all other tasks "COMPLETED"]

The last few lines printed to the mentioned output log (/var/lib/ambari-agent/data/output-156.txt) are:

Setting up ocl-icd-libopencl1:amd64 (2.2.8-1) ...
Setting up libhwloc-plugins (1.11.2-3) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
[azureml_327951dc2df6f88e104edcd22c5f680e] ('Start downloading script locally: ', u'https://mmlspark.azureedge.net/buildartifacts/0.7/install-mmlspark.sh')
Fromdos line ending conversion successful
('Unexpected error:', "('Execution of custom script failed with exit code', 1)")
Removing temp location of the script

And the last few lines of the mentioned error log (/var/lib/ambari-agent/data/errors-156.txt) are:

/tmp/tmpyhsA9w: line 50: CNTK_WHEELS[$env]: Unknown conda env for CNTK: azureml_327951dc2df6f88e104edcd22c5f680e
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/run_customscriptaction.py", line 194, in <module>
    ExecuteScriptAction().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/run_customscriptaction.py", line 179, in actionexecute
    ExecuteScriptAction.execute_bash_script(bash_script, scriptpath, scriptparams)
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/run_customscriptaction.py", line 149, in execute_bash_script
    raise Exception("Execution of custom script failed with exit code",exitcode)
Exception: ('Execution of custom script failed with exit code', 1)

Is there any other info I can provide to help identify the problem? Thanks in advance for your help!

Additional R wrappers work

  • Documentation
  • Tests (this will also require adding R to the stuff that gets installed on dev environments.)

Python AutoGen does not append "_" to nested classes.

I have a program, ALS.scala:

class ALS {}

@InternalWrapper
class ALSModel {}

I have methods in a program called ALSModel.py.

In the autogenerated _ALS.py:

def _ALS(self):
def ALSModel(self):

which conflicts with the name in my provided .py file.

MMLSpark model.save changes dfs working directory

With the following diagnostic code,

hdi_wd = get_hdispark_working_dir()
print("AFTER STATS TRANSFORM, BEFORE WRITE")
print(hdi_wd)

# save model
model.save("outputs/aot-mmlspark.model")
hdi_wd = get_hdispark_working_dir()
print("AFTER WRITE")

AFTER STATS TRANSFORM, BEFORE WRITE
wasb://snip/testhdi_1505503317753
Running HDI/Spark job in wasb://snip
AFTER WRITE

Regular spark model:
Running HDI/Spark job in wasb://snip/testhdi_1505505091869
BEFORE WRITE
wasb://snip/testhdi_1505505091869
Running HDI/Spark job in wasb://snip/testhdi_1505505091869
AFTER WRITE
wasb://snip/testhdi_1505505091869

Improve CNTKTrainer Style

src/cntk-train/src/main/scala/Builder

  • Should use immutable classes with constructor arguments instead of setFoos.
  • Rename printOutput -> runWithOutput
  • Go over bogus quotes in CommandBuilders.scala

Maven Dependency

I added the following dependency to my pom.xml:

<dependency>
    <groupId>com.microsoft.ml.spark</groupId>
    <artifactId>mmlspark</artifactId>
    <version>0.6</version>
    <scope>test</scope>
</dependency>

and also added the repo:

<repository>
    <id>azureedge.net</id>
    <name>MS Azure Maven Repo</name>
    <url>https://mmlspark.azureedge.net/maven</url>
</repository>

Still, Maven cannot find the artifact...

Provide sample on how to deploy a model in Spark

MMLSpark provides a way to save the model. It would be nice to take this to the next step and show how to deploy the model to Spark using one of the methods - a Spark UDF and/or a Livy endpoint. It would be great to update one of the notebooks to demonstrate this.
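
As a stopgap, the simplest "deployment" is batch scoring by reloading the saved model inside another Spark job. A minimal sketch under the assumption that the saved artifact loads as a standard SparkML PipelineModel; the path and new_data_df are placeholders, and this is not the Spark UDF or Livy approach the issue asks for.

# Batch-scoring sketch; assumes the saved artifact is a standard SparkML PipelineModel.
from pyspark.ml import PipelineModel

model = PipelineModel.load("path/to/saved-model")  # placeholder path
scored = model.transform(new_data_df)              # new_data_df: placeholder DataFrame
scored.show()                                      # inspect whichever output columns the model adds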

Add support for missing value cleaner

Add an Estimator that computes a replacement value for missing data, such as the mean, median, or mode, from training data. The Estimator then produces a Model that can be applied to replace missing values.

The missing value cleaner should support one or more input columns. Different types should be supported as follows:

  • Floating point numbers: mean, median, mode
  • Integer numbers: median, mode
  • Strings and categoricals: mode
  • Vectors and other composite types: not supported

The missing value cleaner should be a PipelineStage so it is compatible with SparkML pipelines.
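
For the numeric cases, stock SparkML's Imputer already follows the proposed Estimator/Model split and is itself a PipelineStage. A sketch for reference; the column names and the train_df/test_df DataFrames are assumptions, and the "mode" strategy needs a newer Spark release than mean/median.

# Sketch using pyspark.ml.feature.Imputer for numeric columns (names assumed).
from pyspark.ml.feature import Imputer

imputer = Imputer(strategy="median",  # "mean" also works; "mode" on newer Spark versions
                  inputCols=["age", "income"],
                  outputCols=["age_filled", "income_filled"])
imputer_model = imputer.fit(train_df)       # Estimator: learns the replacement values
cleaned = imputer_model.transform(test_df)  # Model: applies them in a SparkML pipeline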

Train Classifier and Train Regressor need API review

  • Relying on metadata and magic numbers is not idiomatic and makes it hard to parameterize these pipeline stages
  • our automatic compute-model-statistics modules rely on this metadata, making the two unnecessarily coupled and unfriendly to the rest of the ecosystem
  • metadata wrangling makes the code unnecessarily complex
  • the two share a huge amount of code, and both should just output pipeline models
  • the num features default is not set idiomatically
  • verify whether the train classifier's test needs to be run in a specific directory

`ZipIterator` issues

The current implementation of ZipIterator has a bunch of issues:

  1. If it's possible, it would be much better to implement a plain
    iterator that returns the entries as lazy values that actually do the
    reading when needed. This would make it possible to drop the
    sampling completely, and use something like
    .filter(_ => r.nextDouble < someRatio) instead of baking it in.

  2. Reading the quick description, it's not clear to me that it always
    returns the same elements (ie, the setSeed(0)) -- but maybe this is
    idiomatic and shouldn't be documented?

  3. Also, there is the known algorithm that returns N random elements,
    maybe it's also useful to do that? (This would be easy if the first
    point is done.)

  4. The implementation is not too great -- it looks like there are too
    many vars, and the return inside the while loop is making it
    hard to follow. Again, doing the first point would make all of this
    complexity go away.

Add pre-trained DNNs for text data

Add support for pre-trained DNN models that can be used to extract features from free-form text data, such as word embedding vectors.

These features could then be used as inputs, for example, for document classification models.

Finding the mapping of string class labels to integer indices used by a TrainedClassifierModel

Hi folks,

I created a TrainedClassifierModel using a training dataset in which the label column is string-valued. I've noticed that when I apply the TrainedClassifierModel's transform method to a validation dataset, the resulting scored_labels column is integer-valued: presumably those are integer indices for the predicted labels. To produce a human-interpretable result, I'd like to map the scored_labels values back to their corresponding strings.

  • How can I find the label-to-index mapping that the TrainedClassifierModel has learned? Is it exposed as an attribute?
  • Suppose that I manually mapped my string-valued labels to consecutive integer indices beginning at zero, then used that integer column as my label column during training. Will MMLSpark adopt my integer-valued labels as its own class indices, or is there a potential for permutation?
    • If the former is true, then I would know exactly how to map scored_labels indices back to strings.
    • The latter may be true e.g. if MMLSpark simply assigns indices to labels in the order it encounters them.

Thanks!
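
For reference, the mapping can be rebuilt explicitly with stock SparkML primitives. This is a sketch only, since TrainClassifier's internal indexer may behave differently; the column names and DataFrames are assumptions.

# Sketch: rebuild the string-to-index mapping with stock SparkML (column names assumed).
from pyspark.ml.feature import StringIndexer, IndexToString

indexer_model = StringIndexer(inputCol="label", outputCol="label_idx").fit(train_df)
print(indexer_model.labels)  # index i corresponds to indexer_model.labels[i]

converter = IndexToString(inputCol="scored_labels", outputCol="scored_label_str",
                          labels=indexer_model.labels)
readable = converter.transform(scored_df)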

Trying to run the sample notebook results in an error

Hi, I just included the package using

pyspark --packages com.microsoft.ml.spark:mmlspark_2.11:0.5 \
 --repositories=https://mmlspark.azureedge.net/maven

However, when I run the sample notebook example 302, the following code

import mmlspark
import numpy as np
from mmlspark import toNDArray

IMAGE_PATH = "datasets/CIFAR10/test"
images = spark.readImages(IMAGE_PATH, recursive = True, sampleRatio = 0.1).cache()
images.printSchema()
print(images.count())

results in an error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-1-4aed556d1de7> in <module>()
      6 images = spark.readImages(IMAGE_PATH, recursive = True, sampleRatio = 0.1).cache()
      7 images.printSchema()
----> 8 print(images.count())

/home/wonglab/spark_install/spark/python/pyspark/sql/dataframe.py in count(self)
    378         2
    379         """
--> 380         return int(self._jdf.count())
    381 
    382     @ignore_unicode_prefix

/home/wonglab/spark_install/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/home/wonglab/spark_install/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/home/wonglab/spark_install/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling o45.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.UnsatisfiedLinkError: org.opencv.imgcodecs.Imgcodecs.imdecode_0(JI)J

Any idea? Thanks in advance!

Need to Organize Repo Structure

  • we should organize our additional pipeline stages into a project: src/stages
  • all other additions to Spark can be their own projects, either in core or in src

Make ImageReader fail fast if file path is invalid

Currently, if the path to images is incorrect, the ImageReader fails lazily when it tries to read images. This makes debugging hard because the failure might happen much later in the pipeline during some different operation.

Add a check to ImageReader that validates the correctness and existence of the file path upon instantiation, for a better debugging experience.
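
A sketch of the kind of eager check this would add, done here from the Python side through the Hadoop FileSystem API via py4j; the helper name is hypothetical and not part of ImageReader.

# Hypothetical eager path check via the Hadoop FileSystem API (not part of ImageReader).
def validate_image_path(spark, path):
    sc = spark.sparkContext
    hadoop_path = sc._jvm.org.apache.hadoop.fs.Path(path)
    fs = hadoop_path.getFileSystem(sc._jsc.hadoopConfiguration())
    if not fs.exists(hadoop_path):
        raise ValueError("Image path does not exist: " + path)

# validate_image_path(spark, "datasets/CIFAR10/test")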

`/tmp` cleanups

There is still a pile of stuff left in /tmp, all of it directories. The offending name patterns that need to be cleaned up are:

  • SavedModels-<N> (I just cleaned almost 19 thousand of these)
  • MML-Test-<N> (about 700)
  • MML-Test-<N>powerBI.parquet (about 160)

Most of these are empty directories, so perhaps there is some broken cleanup that removes files but not the directories. The last pattern is the only one that has files left in it.

Featurize needs API review

  • The featurize estimator has a strange API; namely, it takes a map from a string to a sequence of strings as a parameter.
  • The featurize estimator uses assemble features, which for some reason is an estimator (it should be a transformer).
  • Featurize should have a helper function that maps column types to featurization pipelines; the logic of the estimator should be very simple and just compose a pipeline out of these pipelines (see the sketch below).
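
A sketch of the kind of helper described in the last point, mapping column types to small featurization pipelines and composing them. This is not the actual Featurize implementation; the function name, column handling, and defaults are assumptions.

# Sketch of a type-to-pipeline helper (not the actual Featurize code; names assumed).
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler

def featurization_pipeline(df, input_cols, output_col="features"):
    stages, assembled = [], []
    dtypes = dict(df.dtypes)
    for col in input_cols:
        if dtypes[col] == "string":
            stages += [StringIndexer(inputCol=col, outputCol=col + "_idx"),
                       OneHotEncoder(inputCol=col + "_idx", outputCol=col + "_vec")]
            assembled.append(col + "_vec")
        else:
            assembled.append(col)  # numeric columns pass straight to the assembler
    stages.append(VectorAssembler(inputCols=assembled, outputCol=output_col))
    return Pipeline(stages=stages)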
