
apache / camel-kafka-connector

Stars: 148 · Watchers: 31 · Forks: 99 · Size: 52.01 MB

Camel Kafka Connector allows you to use all Camel components as Kafka Connect connectors

Home Page: https://camel.apache.org

License: Apache License 2.0

Languages: Java 99.81%, Shell 0.16%, Batchfile 0.04%
Topics: camel, integration, kafka, java

camel-kafka-connector's Introduction

Camel Kafka Connector


Introduction

Note

This project is still in its infancy. It is relatively stable and leverages more than a decade of Apache Camel experience and maturity. At the same time, it is in constant evolution: almost every week new features are implemented and current ones adjusted to integrate better with both Camel and Kafka Connect.

This is a "Camel Kafka connector adapter" that aims to provide a user-friendly way to use all Apache Camel components in Kafka Connect. For more information about Kafka Connect, take a look here.

Build the project

mvn clean package

Run integration tests

To run the integration tests it is required to:

  • have Docker version 17.05 or higher running

  • then run:

    mvn -DskipIntegrationTests=false clean verify package

It is also possible to point the tests to external services. Please check the testing guide.

Documentation

camel-kafka-connector's People

Contributors

aemiej, apupier, celebrate-future, claudio4j, codexetreme, cunningt, davsclaus, dependabot[bot], djencks, duanasq, evgen1000end, ffang, github-actions[bot], jakubmalek, janstey, lburgazzoli, luigidemasi, mathieux51, mureinik, nicolaferraro, niteshkoushik, omarsmak, orpiske, oscerd, rgannu, scholzj, tadayosi, unsortedhashsets, valdar, zregvart


camel-kafka-connector's Issues

Find a way to manage dependencies

In order to support Camel components, the proper dependencies must be on the classpath. We need to decide how to handle this; there are some alternatives:

  • trivial option: put all the supported camel components as project dependencies
  • generate a Kafka connector for each component, more or less like what is done for the Camel Spring Boot starters

comment 1 by @oscerd: Maybe the second one, and then release a BOM with all the Kafka connectors we have.

comment 2 by @omarsmak: @valdar @oscerd a question wondering in my head: IMHO there are components that don't make sense as Kafka Connect connectors (e.g. activemq, OSGi-related, etc.). How do we plan to tackle these?

Test coverage: Spring Boot autoconfiguration vs Kafka Connect configuration format

We need to ensure test coverage for the different configuration formats. There are minor, intentional differences between the format used by Spring Boot autoconfiguration [1] (i.e. camel.component.aws-s3.secret-key) and the Kafka Connect configuration format [2] (camel.component.aws-s3.secretKey); see the comparison after the references.

  1. https://camel.apache.org/components/latest/aws-kinesis-component.html
  2. https://github.com/apache/camel-kafka-connector/blob/master/connectors/camel-aws-s3-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/awss3/CamelAwss3SourceConnectorConfig.java#L179
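
For illustration, here is the same option written in both formats; the value is just a placeholder:

# Spring Boot autoconfiguration format (dashed option names)
camel.component.aws-s3.secret-key=<placeholder>

# Kafka Connect configuration format (camelCase option names)
camel.component.aws-s3.secretKey=<placeholder>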

Unable to initialize REST resources in tests

Test logs contain:

org.apache.kafka.connect.errors.ConnectException: Unable to initialize REST resources
	at org.apache.kafka.connect.runtime.rest.RestServer.initializeResources(RestServer.java:301) ~[connect-runtime-2.4.1.jar:?]
	at org.apache.kafka.connect.runtime.Connect.start(Connect.java:54) ~[connect-runtime-2.4.1.jar:?]
	at org.apache.camel.kafkaconnector.services.kafkaconnect.KafkaConnectRunner.run(KafkaConnectRunner.java:198) ~[test-classes/:?]
	at org.apache.camel.kafkaconnector.services.kafkaconnect.KafkaConnectRunnerService.lambda$start$0(KafkaConnectRunnerService.java:107) ~[test-classes/:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Caused by: java.lang.NullPointerException
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:802) ~[jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:276) ~[jetty-servlet-9.4.20.v20190813.jar:9.4.20.v20190813]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72) ~[jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
	at org.apache.kafka.connect.runtime.rest.RestServer.initializeResources(RestServer.java:299) ~[connect-runtime-2.4.1.jar:?]
This is most probably due to how the Kafka Connect environment is initialized: it mimics what Kafka Connect standalone does but misses the REST server init part.

ERROR - Plugin class loader for connector: was not found.

Test logs show this error:

2020-04-03 17:10:52,295 [main           ] ERROR org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader - Plugin class loader for connector: 'org.apache.camel.kafkaconnector.CamelSinkConnector' was not found. Returning: org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@78aa7b77

It seems to indicate that the connector is getting loaded in the same classloader as the Connect worker, probably because the connector is in the Kafka lib directory. The suggestion is to move the connector out of that directory to avoid this exception and have it loaded only through the plugin.path instead (see confluentinc/kafka-connect-jdbc#766).

It seems not to be an issue for the tests themselves, but it is still better to investigate. A sketch of the suggested worker-side layout follows.
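
A minimal sketch of the relevant worker configuration, assuming a hypothetical /opt/kafka/plugins directory; plugin.path is a standard Kafka Connect worker option:

# Kafka Connect worker configuration (paths are hypothetical)
plugin.path=/opt/kafka/plugins
# Place the connector under /opt/kafka/plugins/camel-kafka-connector/
# rather than inside the Kafka lib directory, so the plugin classloader
# can isolate it from the worker's own classpath.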

Replace `confluentinc/cp-kafka` image with an upstream version

Here are the images pulled as a result of a fresh mvn clean install:

REPOSITORY                                          TAG                 IMAGE ID            CREATED             SIZE
centos                                              8                   0f3e07c0138f        2 months ago        220MB
docker.elastic.co/elasticsearch/elasticsearch-oss   7.3.2               876f2f753cc8        3 months ago        612MB
localstack/localstack                               0.9.4               5c6c5a293024        6 months ago        1.29GB
confluentinc/cp-kafka                               5.2.1               af7cf4356c58        8 months ago        569MB
quay.io/testcontainers/ryuk                         0.2.3               64849fd2d464        10 months ago       10.7MB
alpine                                              3.5                 f80194ae2e0c        10 months ago       4MB

It would be nice to replace the confluentinc/cp-kafka image with an upstream version.

Allow bridging between a user's custom Camel Processor and a Kafka Connect SMT

If the user has a custom Camel Processor, IMHO it would be nice to bridge it into Kafka Connect as an SMT wrapper: the user adds the Processor class to the config, we load it via reflection into the wrapper, and it processes the data accordingly. Of course, the processing is limited to the messages of the exchange, nothing else. A sketch of the idea follows below.

author @omarsmak
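
A minimal sketch of such a wrapper, assuming a hypothetical "processor.class" config option and value-only processing; the class and option names are illustrative, not part of the project:

import java.util.Map;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.ExchangeBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

// Hypothetical SMT that wraps a user-supplied Camel Processor.
public class CamelProcessorTransform<R extends ConnectRecord<R>> implements Transformation<R> {

    private CamelContext camelContext;
    private Processor processor;

    @Override
    public void configure(Map<String, ?> configs) {
        camelContext = new DefaultCamelContext();
        camelContext.start();
        try {
            // Load the user's Processor class by reflection, as the issue suggests.
            String className = (String) configs.get("processor.class");
            processor = (Processor) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException("Cannot instantiate user Processor", e);
        }
    }

    @Override
    public R apply(R record) {
        try {
            // Wrap the record value in a Camel Exchange, run the Processor on it,
            // and copy the processed body into a new record.
            Exchange exchange = ExchangeBuilder.anExchange(camelContext)
                    .withBody(record.value())
                    .build();
            processor.process(exchange);
            return record.newRecord(record.topic(), record.kafkaPartition(),
                    record.keySchema(), record.key(),
                    record.valueSchema(), exchange.getMessage().getBody(),
                    record.timestamp());
        } catch (Exception e) {
            throw new RuntimeException("Processor failed", e);
        }
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("processor.class", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Fully qualified Camel Processor class name");
    }

    @Override
    public void close() {
        camelContext.stop();
    }
}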

Support Camel type converters

Provide a way to support Camel type converters and/or integrate them into the Kafka Connect key.converter and value.converter.

comment 1 @omarsmak: Hello @valdar, what is the plan for this issue? If no one is working on it currently, I can work on it.

comment 2 @valdar: Sure @omarsmak, I will be busier with #4 in the near future.
I would appreciate it if you could outline your idea before starting to code, so we can be sure we are on the same page; I admit the issue's description is not the most detailed... it was initially made as a self note/brain dump...

comment 3 @valdar: Actually, the first thing to do here is to understand whether it is better to implement a custom Kafka Connect transformation (https://docs.confluent.io/current/connect/concepts.html#transforms) for this, as opposed to using connector converters (https://docs.confluent.io/current/connect/concepts.html#converters). @omarsmak

comment 4 @omarsmak: Sure. One option off the top of my head: we could try to utilize the Camel type converter registry to serialize the data when we have information about the from -> to types. This will need some experimentation though.

comment 5 @omarsmak: Actually it depends on what we want to achieve here: do we want to convert the data from/to bytes before sending it to Kafka, or do we just want to transform the data? The way I think about it:

  • Kafka Connect Transformation can work with camel type converters
  • Kafka Connect Converter can work with camel data format
    However, let's take an example: say we have the S3 component, which includes a type converter that converts from S3ObjectInputStream to bytes and vice versa. In that case, wouldn't it be better suited as a Kafka Connect converter?

comment 6 @omarsmak: @valdar I need to dump my thoughts before I forget :D
Probably you are right, an SMT wrapper would be suitable for this. We can use the Camel TypeConverter registry that can be obtained via DefaultCamelContext. What I can propose is the following:

  • We add a TypeConverterTransform SMT class, which is a wrapper around the Camel TypeConverter registry. Since we don't have enough information about the types we need to convert from -> to, I suggest the user adds this to the config; then, in the wrapper, we add the schema info to the Struct object and convert the value according to the user's data type configs.
  • Once we have the above SMT, we can also use it as a 'default' converter inside a Kafka Connect converter wrapper, to be used as a Kafka Connect converter before sending the data to Kafka as byte[].

Please feel free to discuss :)
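
For reference, a minimal sketch of obtaining and using the type converter registry discussed above; the String-to-byte[] conversion is only an illustrative pair:

import org.apache.camel.CamelContext;
import org.apache.camel.TypeConverter;
import org.apache.camel.impl.DefaultCamelContext;

// Illustration only: convert a String body to byte[] through the registry.
public class TypeConverterSketch {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();
        // The registry is reachable through the context, as noted in comment 6.
        TypeConverter converter = context.getTypeConverter();
        byte[] bytes = converter.convertTo(byte[].class, "hello");
        System.out.println(bytes.length);
        context.stop();
    }
}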

Integration tests fail if confluentinc/cp-kafka:5.2.1 has not been pulled

If one runs mvn clean verify package without having pulled confluentinc/cp-kafka:5.2.1, this is the result:

[INFO] Running org.apache.camel.kafkaconnector.source.timer.CamelSourceTimerITCase
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.141 s <<< FAILURE! - in org.apache.camel.kafkaconnector.source.timer.CamelSourceTimerITCase
[ERROR] testLaunchConnector(org.apache.camel.kafkaconnector.source.timer.CamelSourceTimerITCase)  Time elapsed: 0.14 s  <<< ERROR!
org.testcontainers.containers.ContainerLaunchException: Container startup failed
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
Caused by: java.lang.reflect.UndeclaredThrowableException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: com.github.dockerjava.api.exception.NotFoundException: 
{"message":"No such image: confluentinc/cp-kafka:5.2.1"}


[INFO] Running org.apache.camel.kafkaconnector.sink.jms.CamelSinkJMSITCase
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.155 s <<< FAILURE! - in org.apache.camel.kafkaconnector.sink.jms.CamelSinkJMSITCase
[ERROR] testBasicSendReceive(org.apache.camel.kafkaconnector.sink.jms.CamelSinkJMSITCase)  Time elapsed: 0.155 s  <<< ERROR!
org.testcontainers.containers.ContainerLaunchException: Container startup failed
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
Caused by: java.lang.reflect.UndeclaredThrowableException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: com.github.dockerjava.api.exception.NotFoundException: 
{"message":"No such image: confluentinc/cp-kafka:5.2.1"}

author @valdar

comment 1 by @valdar : @orpiske can you take a look?

comment 2 by @orpiske : @valdar absolutely. I will take a look at this one.

comment 3 by @orpiske: @valdar are there any more details you could share about your environment? I haven't been able to reproduce it yet. I have tried a couple of things, always using a clean environment, but none worked so far (different Docker versions, OSes (Fedora/RHEL), system configurations, etc).

comment 4 by @valdar : @orpiske sure:

$ cat /etc/fedora-release 
Fedora release 30 (Thirty)
$ docker --version
Docker version 19.03.2, build 6a30dfc

no particular configs that I am aware of that might affect this issue

comment 5 by @orpiske: Thanks @valdar. When you get a chance, can you also attach the tests.log for when this issue happens, please? My guess is that this might be related to an unreliable or slow network, and I think I can get some clues from that log.

Add a Camel-like DSL to translate between DSL and properties

If we have all the components together, I thought we could add a DSL very similar to the Camel DSL, to make things easier for our users. For a start, it could be limited to a number of processors. Example:

from("file://name.txt?...")
      .process(()->{})
      .to("sql://my-table?...);

This would be translated into three sets of properties: one for the Kafka Connect source for the Camel file consumer, one for a Kafka Connect SMT for the processor, and one for the Kafka Connect sink for the Camel sql producer; a rough sketch follows below.
Even further in the future, we could utilize Kafka Streams to bridge some of the EIP patterns implemented in Camel into this project.

author @omarsmak
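
A rough sketch of what the translation could produce, assuming the source and sink halves live in two separate connector configs; the SMT class name is hypothetical (the wrapper sketched in the Processor/SMT issue above):

# source connector config: Camel file consumer
camel.source.url=file://name.txt?...
# SMT wrapping the process() step (class name is hypothetical)
transforms=process
transforms.process.type=org.apache.camel.kafkaconnector.transforms.CamelProcessorTransform

# sink connector config: Camel sql producer
camel.sink.url=sql://my-table?...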

Kafka/Strimzi Connect - Camel example, where to put it?

Hi,

I've been thinking of adding an example of Kafka Connect Component for Strimzi and I see that there is already a doc created here: https://github.com/apache/camel-kafka-connector/blob/master/docs/modules/ROOT/pages/try-it-out-on-openshift-with-strimzi.adoc

Having had a look at the examples, they seem to be just the properties that the connector uses. So should we create a folder per example in the examples folder, like in the camel-spring-boot or camel-quarkus examples?

Or should we create the Kafka Connect example for both Kafka and Strimzi in the per-framework repositories like camel, camel-spring-boot or camel-quarkus?

What is your suggestion or thoughts?

Release 0.1.0

We need to set up zip and tar.gz packaging and cut a first release.

Create an ad hoc connector for syslog

Since the Camel syslog component is not really a component (it is not listed in the camel-catalog but is a collection of a data format and a Netty codec), its Kafka connector needs to be created manually.

I think there are just a few components like this one (the most notable being camel-hl7), so it is not worth handling them generally in the automatic Kafka connector generation.

Better support for value Schema in the SourceTask

Currently we build the SourceRecord with the Schema.BYTES_SCHEMA schema. However, this can be misleading: for example, the message body can be of type String while the schema is always Schema.BYTES_SCHEMA regardless. Since it is hard to cover all types, we could keep BYTES_SCHEMA as an optional fallback schema; a sketch of this idea follows. This is just an idea, and the downstream implications are unclear.
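
A minimal sketch of the fallback idea, assuming only String bodies are special-cased; the helper name is hypothetical:

import org.apache.kafka.connect.data.Schema;

// Hypothetical helper: pick a more accurate schema when possible,
// keep BYTES_SCHEMA as the fallback for uncovered types.
final class SchemaHelper {
    static Schema schemaFor(Object body) {
        if (body instanceof String) {
            return Schema.STRING_SCHEMA;
        }
        return Schema.BYTES_SCHEMA;
    }
}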

Ordering of endpoint path options is not taken into account

If a component has more than one path option, like org.apache.camel.kafkaconnector.cql.CamelCqlSinkConnector, the order is not taken into account. So for configuration properties like:

camel.sink.path.hosts=localhost
camel.sink.path.port=34719
camel.sink.path.keyspace=ckc_ks

the correctly constructed URL should look like cql:localhost:34719:ckc_ks?otherOptions=.., but that is not guaranteed: it could end up being cql:34719:localhost:ckc_ks?otherOptions=..., making the connector crash at startup.

Unclean shutdown causing lots of warning messages in the logs

When the test is shutting down, there are a lot of messages like this in the logs (~9k per test run):

2019-12-08 20:24:49,992 [7c-6916da6c7843] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=6a23d856-c3d2-40b6-be7c-6916da6c7843] Connection to node 1 (localhost/127.0.0.1:34649) could not be established. Broker may not be available.

This doesn't really affect anything, so it is probably low priority, but it seems to be fixed by a separate patch I am working on, so I may refer to this issue when I send that one.

Not able to run aws s3 source connector

I am trying to run the aws-s3 source connector. I have followed this documentation to install the connector:
https://camel.apache.org/camel-kafka-connector/latest/try-it-out-on-openshift-with-strimzi.html

I have installed the connectors correctly, as I can list them:

kubectl exec -i kafka-cluster-kafka-0 -n rucvan -- curl -X GET http://my-connect-cluster-connect-api:8083/connector-plugins
Defaulting container name to kafka.
Use 'kubectl describe pod/kafka-cluster-kafka-0 -n rucvan' to see all of the containers in this pod.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0[{"class":"io.confluent.connect.s3.source.S3SourceConnector","type":"source","version":"1.2.1"},{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.0.1-SNAPSHOT"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.0.1-SNAPSHOT"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"2.3.0"},{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"sink","version":"2.3.0"}]

This is the JSON for the source connector:

{
"name": "s3-connector-camel",
"config": {
  "connector.class": "org.apache.camel.kafkaconnector.CamelSourceConnector",
  "tasks.max": "1",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "org.apache.camel.kafkaconnector.converters.S3ObjectConverter",
  "camel.source.kafka.topic": "s3-topic",
  "camel.source.url": "aws-s3://{BucketName}?autocloseBody=false",
  "camel.component.aws-s3.configuration.access-key": "*****",
  "camel.component.aws-s3.configuration.secret-key": "*****",
  "camel.source.maxPollDuration": 10000
  }
}

Now when I try to print the status of the connector, I get the following error:
{"name":"s3-connector-camel","connector":{"state":"RUNNING","worker_id":"100.80.22.122:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"100.80.22.122:8083","trace":"org.apache.kafka.connect.errors.ConnectException: Failed to create and start Camel context\n\tat org.apache.camel.kafkaconnector.CamelSourceTask.start(CamelSourceTask.java:98)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:199)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: java.lang.IllegalArgumentException: Error configuring property: camel.component.aws-s3.configuration.secret-key because cannot find component with name aws-s3. Make sure you have the component on the classpath\n\tat org.apache.camel.main.BaseMainSupport.lambda$autoConfigurationFromProperties$14(BaseMainSupport.java:895)\n\tat org.apache.camel.main.BaseMainSupport.computeProperties(BaseMainSupport.java:1084)\n\tat org.apache.camel.main.BaseMainSupport.autoConfigurationFromProperties(BaseMainSupport.java:892)\n\tat org.apache.camel.main.BaseMainSupport.postProcessCamelContext(BaseMainSupport.java:545)\n\tat org.apache.camel.main.BaseMainSupport.initCamelContext(BaseMainSupport.java:422)\n\tat org.apache.camel.main.Main.doInit(Main.java:108)\n\tat org.apache.camel.support.service.ServiceSupport.init(ServiceSupport.java:80)\n\tat org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:108)\n\tat org.apache.camel.main.MainSupport.run(MainSupport.java:77)\n\tat org.apache.camel.kafkaconnector.utils.CamelMainSupport$CamelContextStarter.run(CamelMainSupport.java:214)\n\t... 3 more\n"}],"type":"source"}

The camel-kafka-connector artifact pulls in a number of dependencies

I am trying to use this project to integrate with Hazelcast Jet. It works pretty nicely, well done!

// Imports assumed for Hazelcast Jet and the hazelcast-jet-contrib kafka-connect module.
import java.util.Properties;

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.Job;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.contrib.connect.KafkaConnectSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;

public final class KafkaConnectTest {
    private static final String BROKER_URL = "tcp://10.0.0.113";
    private static final String TOPIC = "test";

    public static void main(String[] args) {
        JetInstance jet = Jet.newJetInstance();

        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(KafkaConnectSources.connect(kafkaConnectProps()))
                .withoutTimestamps()
                .map(record -> new String((byte[])record.value()))
                .writeTo(Sinks.logger());

        JobConfig jobConfig = new JobConfig();
        Job job = jet.newJob(pipeline, jobConfig);
        job.join();
    }

    private static Properties kafkaConnectProps() {
        Properties properties = new Properties();
        properties.setProperty("name", "camel-source-connector");
        properties.setProperty("connector.class", "org.apache.camel.kafkaconnector.CamelSourceConnector");
        properties.setProperty("camel.source.url", "paho:" + TOPIC + "?brokerUrl=" + BROKER_URL);
        return properties;
    }
}

There is one thing I don't quite understand: the main artifact (camel-kafka-connector) pulls in a number of dependencies which I don't need and have to exclude manually. Is there any reason for this?

Unstable test: CamelSourceTaskTest.testSourcePollingTimeout

I am noticing some sporadic failures on the CamelSourceTaskTest.testSourcePollingTimeout test. From time to time I get this:

[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.009 s <<< FAILURE! - in org.apache.camel.kafkaconnector.CamelSourceTaskTest
[ERROR] org.apache.camel.kafkaconnector.CamelSourceTaskTest.testSourcePollingTimeout  Time elapsed: 3.033 s  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
	at org.apache.camel.kafkaconnector.CamelSourceTaskTest.testSourcePollingTimeout(CamelSourceTaskTest.java:196)

I did not have time to investigate it in depth, but I can take a look when I get back.

Allow bridging between Camel DataFormat and Kafka Connect SMT

It would be nice if we could somehow bridge whatever we have for Camel DataFormat into a Kafka SMT (Single Message Transform). However, my gut feeling is that we might need to add a wrapper for each DataFormat we have, because some DataFormats (say, Protobuf) may need some extra processing, for example for the header.

author @omarsmak

Allow the user to set the key of the SourceRecord explicitly

Currently, when messages are polled, the record is built via this constructor:

SourceRecord record = new SourceRecord(sourcePartition, sourceOffset, topic, Schema.BYTES_SCHEMA, exchange.getMessage().getBody());

We don't set the key of the record in the SourceRecord constructor, which could be dangerous: the data is sent downstream non-keyed. To tackle this, I'd suggest allowing the user to set the key explicitly: say the key of the exchange is available as a header, the user would just supply the name of that header to key the record with.
By default, I'd suggest keying the message by the messageId if the user didn't set any key configuration. A sketch follows below.

author @omarsmak
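
A minimal sketch of what the keyed variant could look like, mirroring the fragment above; the config option name is hypothetical, and config, exchange, sourcePartition, sourceOffset, and topic are assumed to be in scope as in the original snippet:

// Hypothetical option name; the header-based key and the messageId fallback
// follow the suggestion above.
String keyHeader = config.getString("camel.source.recordKeyHeader");
Object key = keyHeader != null
        ? exchange.getMessage().getHeader(keyHeader)
        : exchange.getMessage().getMessageId();

SourceRecord record = new SourceRecord(sourcePartition, sourceOffset, topic,
        Schema.STRING_SCHEMA, key,
        Schema.BYTES_SCHEMA, exchange.getMessage().getBody());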
