apache / kyuubi

Apache Kyuubi is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.

Home Page: https://kyuubi.apache.org/

License: Apache License 2.0

Shell 1.50% Scala 69.74% Python 2.87% Java 23.37% Dockerfile 0.17% ANTLR 0.39% Smarty 0.03% HTML 0.02% Vue 0.93% TypeScript 0.89% SCSS 0.07% Batchfile 0.03%
spark hive sql thrift jdbc spark-sql data-lake hadoop kubernetes hacktoberfest

kyuubi's Introduction

Kyuubi logo

Project - Documentation - Who's using

Apache Kyuubi

Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.

What is Kyuubi?

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
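Because Kyuubi speaks the HiveServer2 Thrift JDBC protocol, any Hive JDBC client can connect with an ordinary `jdbc:hive2://` URL. A minimal sketch of assembling such a URL (the host name, database, and session properties are illustrative assumptions; 10009 is Kyuubi's default frontend port):

```scala
// Sketch: building a HiveServer2-compatible JDBC URL for Kyuubi.
// Host, database, and session properties here are illustrative only.
def buildKyuubiJdbcUrl(
    host: String,
    port: Int,
    database: String,
    sessionConf: Map[String, String] = Map.empty): String = {
  val base = s"jdbc:hive2://$host:$port/$database"
  if (sessionConf.isEmpty) base
  else base + sessionConf.map { case (k, v) => s"$k=$v" }.mkString(";", ";", "")
}

val url = buildKyuubiJdbcUrl("kyuubi.example.com", 10009, "default",
  Map("hive.server2.proxy.user" -> "alice"))
println(url)
```

The resulting string can be passed straight to `java.sql.DriverManager.getConnection` with the Hive JDBC driver on the classpath.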

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and perhaps other engines soon) and to let users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background, just SQL, a human-readable language. Sometimes even SQL skills are unnecessary, as when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there should be system administrators and end-users.

  • System administrators: A small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: Focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while achieving even better performance.

HiveServer2 can identify and authenticate a caller, and then, if the caller also has permissions for the YARN queue and HDFS files, it succeeds; otherwise, it fails. However, on the one hand, STS is a single Spark application. The user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor control access for callers through the single user inside the whole system. On the other hand, the Thrift Server is coupled within the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or apply high-availability techniques such as load balancing, as it is stateful.

Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers, finally gaining the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engine dramatically improves the client concurrency and the stability of the service itself.

DataLake/Lakehouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.

The Kyuubi Ecosystem (present and future)

The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some are in development, and others would not be possible without your help.

Online Documentation

Quick Start

Ready? Getting Started with Kyuubi.

Project & Community Status

Aside

The project took its name from a character in the popular Japanese manga Naruto. The character is named Kyuubi Kitsune/Kurama, a nine-tailed fox in mythology. Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for this project's end-to-end multi-tenancy support.

kyuubi's People

Contributors

a49a, angerszhuuuu, bowenliang123, byyue, cfmcgrady, charlesy6, cxzl25, dnskr, hddong, iodone, jiaoqingbo, lightning-l, link3280, lsm1, pan3793, simon824, stenicholas, turbofei, ulysses-you, wforget, yanghua, yaooqinn, yikf, zhang1002, zhaohehuhu, zhaomin1423, zhenjiaguo, zhouyifan279, zml1206, zwangsheng


kyuubi's Issues

Make Kyuubi directly support different Spark versions without patching them

So far, in order to enable Kyuubi's multi-tenancy feature, users have to apply a provided patch to the specified Spark source code, which is not convenient.

I am considering a plan to support this at compile/build time of the Kyuubi project, and maybe someday to support this at runtime via configuration.

releasing kyuubi 0.2.0 as the first release

expected behavior

releasing kyuubi 0.2.0 👍

actual behavior

no release packages published before 👎

steps to reproduce

  • PASS. Ran the 99 TPC-DS queries concurrently with multiple users on Kyuubi against Spark 2.1.2 💯
  • PASS. Ran the 99 TPC-DS queries concurrently with multiple users on Kyuubi against Spark 2.2.1 💯
  • PASS. Ran the 99 TPC-DS queries concurrently with multiple users on Kyuubi against Spark 2.3.0 💯

specifications

  • version of the project

  • operating system

  • hardware

bring handles to inside kyuubi

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

support random service port

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

Support FetchOrientation.FetchFirst

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

making kyuubi server a sub-project

Is your feature request related to a problem? Please describe.
no
Describe the solution you'd like

This PR makes the current Kyuubi work a sub-project of the whole Kyuubi repository, named kyuubi-server.
Then we can implement more interesting features in Kyuubi without breaking compatibility.
Describe alternatives you've considered
no
Additional context
no

Preparing release v0.1.0

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

open session with exception not handled right

Expected behavior

When opening a session with a wrong database, an unexpected message shows; the actual error is a
no-database-found exception

Actual behavior.

18/05/03 13:55:42 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hzadg-jenkins.server.163.org:10009/default2;principal=hive/[email protected];hive.server2.proxy.user=hzyaoqin#spark.yarn.queue=default;spark.sql.haha=hehe
Index: 15, Size: 15

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

endless iterator causes client OOM

Expected behavior

Actual behavior.

Steps to reproduce the problem.

show tables

Specifications like the version of the project, operating system, or hardware.

handle token during open session

Expected behavior

tokens should be handled during open session

Actual behavior.

they are actually dropped.

Steps to reproduce the problem.

none

Specifications like the version of the project, operating system, or hardware.

nothing related.

initial db switch fails with privileges check

Expected behavior

session established or show correct exception

Actual behavior.

session fails with no status and no error code

Steps to reproduce the problem.

specify an unauthorized database to connect to as a proxy user

bring fetch type into Kyuubi

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

support incrementally get result from executors to reduce oom risks

As a multi-tenant server, Kyuubi plays the driver/client role for many "Spark apps", so it may need more memory than the Spark Thrift Server. The Dataset.toLocalIterator API can receive partial results from executors incrementally, which helps reduce the server's OOM risk.
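The memory difference can be illustrated with plain Scala collections: a collect-style call materializes every row at once, while an iterator pulls one chunk at a time (Dataset.toLocalIterator behaves analogously, fetching results partition by partition). This is only a conceptual sketch, not Spark code:

```scala
// Conceptual sketch: materializing vs. iterating incrementally.
// In Spark, Dataset.collect() pulls all rows to the driver at once,
// while Dataset.toLocalIterator() fetches results partition by partition.
val partitions: Iterator[Seq[Int]] = Iterator(Seq(1, 2), Seq(3, 4), Seq(5))

// collect-like: partitions.flatten.toArray would hold all 5 rows at once.
// toLocalIterator-like: only one partition's rows are resident at a time.
var maxResident = 0
var rowsSeen = 0
for (part <- partitions) {   // one partition in memory per step
  maxResident = math.max(maxResident, part.size)
  rowsSeen += part.size
}
println(s"rows=$rowsSeen maxResident=$maxResident")
```

Here the peak resident size is the largest partition (2 rows), not the full result set (5 rows), which is exactly the property that reduces the server's OOM risk.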

bring info value / type to kyuubi

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

support cluster deploy mode

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

runtime support for running on different versions of Spark

#67 makes Kyuubi support the original Spark release packages with specified Maven options. For instance, -Pspark-2.2 gives us a Kyuubi runnable package towards Spark 2.2.x.

But still, we need to offer a more compatible way to support different versions of Spark at runtime.

Kyuubi server fails to start in ha mode for NodeExistsException

Describe the bug

When HA mode is on, Kyuubi fails to start:

18/06/29 16:19:43 ERROR server.KyuubiServer$: Error starting Kyuubi Server
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /kyuubi
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
	at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:688)
	at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:672)
	at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
	at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:668)
	at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:453)
	at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:443)
	at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:423)
	at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
	at yaooqinn.kyuubi.ha.HighAvailabilityUtils$.addServerInstanceToZooKeeper(HighAvailabilityUtils.scala:83)
	at yaooqinn.kyuubi.server.KyuubiServer$.main(KyuubiServer.scala:85)
	at yaooqinn.kyuubi.server.KyuubiServer.main(KyuubiServer.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.KyuubiSubmit$.runMain(KyuubiSubmit.scala:259)
	at org.apache.spark.deploy.KyuubiSubmit$.submit(KyuubiSubmit.scala:93)
	at org.apache.spark.deploy.KyuubiSubmit$.main(KyuubiSubmit.scala:79)
	at org.apache.spark.deploy.KyuubiSubmit.main(KyuubiSubmit.scala)

To Reproduce
Steps to reproduce the behavior:

bin/start-kyuubi.sh --conf spark.kyuubi.authentication=KERBEROS --conf spark.kyuubi.ha.enabled=true --conf spark.kyuubi.ha.zk.quorum=<host:port> --conf spark.kyuubi.ha.zk.namespace=kyuubi
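A common fix pattern for this class of failure is to treat "node already exists" as success when registering the namespace znode. A minimal sketch with an in-memory stand-in for the ZooKeeper namespace (real code would go through Curator's create/check-exists builders; the names below are illustrative):

```scala
import scala.collection.mutable

// In-memory stand-in for a ZooKeeper namespace.
val znodes = mutable.Set("/kyuubi") // namespace left over from a previous run

// Naive create: throws if the node already exists (the reported failure).
def create(path: String): Unit =
  if (!znodes.add(path)) throw new IllegalStateException(s"NodeExists for $path")

// Tolerant create: existing namespace node is not an error.
def ensureExists(path: String): Unit = { znodes.add(path); () }

val naiveFails =
  try { create("/kyuubi"); false }
  catch { case _: IllegalStateException => true }

ensureExists("/kyuubi") // succeeds whether or not the node was already there
val stillThere = znodes.contains("/kyuubi")
println(s"naiveFails=$naiveFails stillThere=$stillThere")
```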


fails with java.io.IOException: No space left on device when initializing the Spark context

Expected behavior

Actual behavior.

server side

18/03/19 11:18:14 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:326)
	at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
	at java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:255)
	at java.util.zip.ZipOutputStream.finish(ZipOutputStream.java:360)
	at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:238)
	at java.util.zip.ZipOutputStream.close(ZipOutputStream.java:377)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:574)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:895)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:171)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
	at yaooqinn.kyuubi.session.KyuubiSession$$anon$1.run(KyuubiSession.scala:132)

client side

Error: Get SparkSession for [hzyaoqin] failed: java.util.concurrent.TimeoutException: Futures timed out after [60 seconds] (state=08S01,code=0)
Beeline version 1.2.1.spark2 by Apache Hive

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

bring operation state to kyuubi

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

support codecov

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

sql standard authorization support with spark-authorizer plugin

Is your feature request related to a problem? Please describe.
no
Describe the solution you'd like

Currently, Kyuubi with officially released Spark versions offers no SQL standard authorization ability.

Whoever wants to use Kyuubi with column-level SQL access control needs to modify the Spark source code.

Here is an example I did to support this feature for spark source tag v2.1.2 yaooqinn/spark@eb9e1a2

Together with spark-authorizer's new WIP feature -- Support multi-tenant servers such as Kyuubi with authorization (yaooqinn/spark-authorizer#2) --

we intend to support that inside Kyuubi directly, to achieve "FINE-GRAINED ROW/COLUMN LEVEL SQL STANDARD ACCESS CONTROL".

Describe alternatives you've considered
no

Additional context

no

Kyuubi Server Web UI

Is your feature request related to a problem? Please describe.
Due to the problem we described in issue #72, there are some UI display incompatibilities when supporting the original Spark releases.

If we maintain the UIUtils class in the Kyuubi source, we may easily solve this problem, but this module is evolving and more problems are foreseeable. It is not worthwhile to maintain such a non-core module inside Kyuubi only to break compatibility with different Spark releases.

If we have to let users modify Spark first to solve this problem, it will be a disaster for Kyuubi as an open-source project and as a member of the Spark ecosystem.

Describe the solution you'd like

Here we implement a server-side Web UI for Kyuubi.

Describe alternatives you've considered

At the session level, the UI problem may still need to be solved, although a server-side UI reduces the urgency somewhat.

Additional context

NONE

support for structured streaming jobs

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

column based set not handle null values properly

Expected behavior

| 482 | NULL |
| 174 | NULL |

Actual behavior.

| 482 | NULL |
| 174 | 0 |
| 413 | 0 |
| 494 | 0 |
| 207 | 0 |
| 199 | 0 |
| 466 | 0 |
| 208 | 0 |
| 174 | 0 |
| 399 | 0 |
| 396 | 0 |
| 247 | 0 |
| 417 | 0 |
| 489 | 0 |
| 162 | 0 |
| 377 | 0 |
| 397 | 0 |
| 309 | 0 |
| 365 | 0 |
| 266 | 0 |
| 439 | 0 |
| 342 | 0 |
| 367 | 0 |
| 325 | 0 |
| 167 | 0 |
| 195 | 0 |
| 475 | 0 |
| 17 | 0 |

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

spark.ui.proxyBase will be overwritten each time a YARN app is submitted

This conf is used by Spark's live web UI to map the UI root address for tabs, pages, and any other links on the Spark web UI:
https://github.com/apache/spark/blob/v2.3.0/core/src/main/scala/org/apache/spark/ui/UIUtils.scala#L153

Kyuubi submits multiple YARN apps in a single JVM, but Spark treats this conf as a system property, so it is overwritten on each submission, which causes a UI mapping issue when visiting the UI.
This means you need to correct the proxy address with the right app_id on each click.

A suitable solution may be found here: yaooqinn/multi-tenancy-spark@aeb0a83

I will work on this later.
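The clash is a property of JVM-wide system properties themselves, which is how Spark reads spark.ui.proxyBase: one shared slot cannot hold per-application values. A sketch of the symptom and of the illustrative fix direction (tracking the proxy base per application; the app IDs below are made up):

```scala
// Sketch: a JVM-wide system property is shared by all apps in one process,
// so each new submission overwrites the previous app's proxy base.
System.setProperty("spark.ui.proxyBase", "/proxy/application_1") // app 1 submits
System.setProperty("spark.ui.proxyBase", "/proxy/application_2") // app 2 submits
val shared = System.getProperty("spark.ui.proxyBase") // app 1's value is gone

// Illustrative fix direction: keep the proxy base per application instead.
val perApp = Map(
  "application_1" -> "/proxy/application_1",
  "application_2" -> "/proxy/application_2")
println(s"shared=$shared app1=${perApp("application_1")}")
```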

doc should be updated so far

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

bring fetch orientation to kyuubi

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

SparkContext promise timeout in Kyuubi session does not stop its instantiation

Expected behavior

SparkContext initialization should fail when the timeout is reached

Actual behavior.

SparkContext initialization continues

Steps to reproduce the problem.

We can reproduce this by setting spark.kyuubi.backend.session.init.timeout to a short timeout:

bin/beeline -u "jdbc:hive2://hzadg-jenkins.server.163.org:10000/;principal=hive/[email protected];hive.server2.proxy.user=hzyaoqin#spark.sql.haha=hehe;heihei=huhu;spark.kyuubi.backend.session.init.timeout=5s"
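The reported behavior matches how Scala futures work: awaiting with a timeout only abandons the wait, it does not cancel the underlying computation, so the initialization work keeps running unless it cooperatively checks a cancellation signal. A minimal sketch (the flag and names are illustrative, not Kyuubi's actual code):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import java.util.concurrent.atomic.AtomicBoolean

// Flag the "SparkContext" work must consult to actually stop on timeout.
val cancelled = new AtomicBoolean(false)

val init: Future[String] = Future {
  // Simulated slow SparkContext instantiation, cooperatively cancellable.
  for (_ <- 1 to 100 if !cancelled.get()) Thread.sleep(10)
  if (cancelled.get()) "aborted" else "sc-ready"
}

val result =
  try Await.result(init, 50.millis)
  catch {
    case _: java.util.concurrent.TimeoutException =>
      cancelled.set(true) // without this, initialization just continues
      "timed out"
  }
println(result)
```

Without the `cancelled` check inside the future, `Await.result` would time out and the simulated instantiation would still run to completion in the background, which is the bug this issue describes.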

make distribution script

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

Start-SparkContext-xxx thread throws java.lang.AbstractMethodError

A Kyuubi distribution made by ./build/dist --tgz actually uses Spark 2.1.2. When I ran this distribution on Spark 2.3.0, an incompatibility error showed as below:

Exception in thread "Start-SparkContext-xxx" java.lang.AbstractMethodError
	at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
	at org.apache.spark.SparkEnv$.initializeLogIfNecessary(SparkEnv.scala:141)
	at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
	at org.apache.spark.SparkEnv$.log(SparkEnv.scala:141)
	at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
	at org.apache.spark.SparkEnv$.logInfo(SparkEnv.scala:141)
	at org.apache.spark.SparkEnv$.<init>(SparkEnv.scala:142)
	at org.apache.spark.SparkEnv$.<clinit>(SparkEnv.scala)
	at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:423)
	at yaooqinn.kyuubi.spark.SparkSessionWithUGI$$anon$1.run(SparkSessionWithUGI.scala:53)

support for http frontend service

Expected behavior

Actual behavior.

only the binary mode is supported now

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.

OperationManager log schema should be a val

Expected behavior

Actual behavior.

endless parsing of the log schema

18/04/08 15:02:20 INFO CatalystSqlParser: Parsing command: string
18/04/08 15:02:20 INFO CatalystSqlParser: Parsing command: string
18/04/08 15:02:20 INFO CatalystSqlParser: Parsing command: string
... (the same line repeats endlessly)

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.
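The fix follows from plain Scala semantics: a def re-evaluates its body on every access (re-parsing the schema and logging each time), while a val is evaluated once. A minimal sketch with an illustrative parse counter standing in for CatalystSqlParser:

```scala
// Sketch: why the log schema should be a val, not a def.
var parseCount = 0
def parseSchema(ddl: String): String = { parseCount += 1; ddl } // stand-in for CatalystSqlParser

object AsDef { def logSchema: String = parseSchema("string") } // re-parses on every access
object AsVal { val logSchema: String = parseSchema("string") } // parses exactly once

AsDef.logSchema; AsDef.logSchema; AsDef.logSchema
val afterDef = parseCount // three accesses, three parses

AsVal.logSchema; AsVal.logSchema; AsVal.logSchema
val afterVal = parseCount // three accesses, only one more parse
println(s"afterDef=$afterDef afterVal=$afterVal")
```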

use toLocalIterator as action

Expected behavior

Actual behavior.

Steps to reproduce the problem.

Specifications like the version of the project, operating system, or hardware.
