
rancher / catalog-dockerfiles

131 stars · 17 watchers · 102 forks · 22.11 MB

Dockerfiles for Rancher Catalog containers

License: Apache License 2.0

Shell 82.23% Smarty 1.36% Go 15.19% XSLT 1.22%

catalog-dockerfiles's Introduction

Dockerfiles for Rancher Catalog Items.


Purpose

These are the Dockerfiles for the containers we use for Rancher entries in the community catalog. You will also find compose files here for development purposes.

Layout

Each app has its own folder. Within it, a containers directory holds the various containers for that app. We try to version the directories to match the version of the container.

License

Copyright (c) 2014-2016 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

catalog-dockerfiles's People

Contributors

benfavre, cloudnautique, galal-hussein


catalog-dockerfiles's Issues

logstash-config: how to configure patterns

I can't figure out how to use the confd patterns key.

Is it supposed to work yet? It is not documented ATM.

I noticed that the patterns template uses config keys that do not have the /self/service/metadata part like in the config template.

I tried configuring Rancher like this:

logstash:
  metadata:
    logstash:
      patterns:
        - ${indexer_patterns}

and like this:

logstash:
  metadata:
    logstash:
      patterns:
        - |
          ${indexer_patterns}

But the patterns file stays empty… @cloudnautique, maybe you can help?
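For reference, and assuming the patterns key is meant to render Logstash grok pattern definitions, the rendered patterns file would just contain one named pattern per line; the names and regexes below are purely illustrative:

  POSTFIX_QUEUEID [0-9A-F]{10,11}
  MYAPP_TIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}

So whatever goes into ${indexer_patterns} would need to come through verbatim as lines like these.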

Isn't the placement of Dockerfiles here bad for CD and security?

Software is always evolving; that's why SemVer is adopted almost everywhere, including for most Docker images. So my questions are:

Why are you placing the images' Dockerfiles and their assets here?
Are they maintained in this repo?
Does docker-compose.yml, despite the image references it contains, build from these Dockerfiles?

Thanks :)

If you update the image galera-conf

Hi,

Can you add this to the start_galera script:

printenv > /etc/environment

This will make the environment variables available to cron.

thanks
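A minimal sketch of where the line could sit, assuming start_galera is a bash script that ends by exec'ing the database process (the surrounding lines are assumptions, not the actual script):

  #!/bin/bash
  # dump the container environment so cron jobs see the same variables
  printenv > /etc/environment

  # ... existing Galera bootstrap logic ...
  exec mysqld "$@"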

kibana from elk: nginx-proxy_kibana4_1 permanently restarting

Hey,
I set up the ELK stack as proposed, and the kibana_nginx-proxy_kibana4_1 container is permanently restarting, which prevents kibana-vip from coming up.
I'm using:

  Component         Version
  Rancher           v1.1.0-dev5
  Cattle            v0.164.1
  User Interface    v1.1.10
  Rancher Compose   v0.8.4

The stack in question is running on the latest RancherOS.
image: rancher/nginx:v1.9.4-3

No logs are available from "kibana_nginx-proxy_1".
Log from "kibana_nginx-proxy_kibana4_1":

28.6.2016 15:39:44 {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n    at null.<anonymous> (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-06-28T13:39:44.520Z","v":0}
28.6.2016 15:39:44 {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":60,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n    at null.<anonymous> (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-06-28T13:39:44.524Z","v":0}
(the same level-50/level-60 "Request Timeout after 5000ms" pair repeats roughly every 9 seconds through 15:40:56)

Add ntp to etcd image

I get the following error from etcd in a Kubernetes environment:
rafthttp: the clock difference against peer a5307ab2af371d91 is too high [5.028292803s > 1s]
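A minimal sketch of the requested change, assuming a Debian/Ubuntu-based etcd image (the base image and package manager are assumptions); note that an in-container ntpd usually needs the SYS_TIME capability, and clock sync is more commonly handled on the host:

  # hypothetical addition to the etcd image's Dockerfile
  RUN apt-get update \
   && apt-get install -y --no-install-recommends ntp \
   && rm -rf /var/lib/apt/lists/*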

MongoDB cluster splits up replica set after 1st node in cluster is removed

I have an issue with the MongoDB cluster after the first node in the cluster gets removed. It seems to cause a new node to be created, which initializes its own new, empty replica set.

For example, when the host with the first node is brought down, a new node will be created. MongoDB does take care of electing a new primary, but the new node apparently can't find the existing cluster, and the scaling script starts to create a new replica set for it, losing all the data from the existing node.

This issue might occur whenever the host running the first node is replaced.

edit: updated findings

MongoDB does not shut down gracefully

MongoDB does not shut down gracefully; the start script should use "trap" to propagate signals properly to the mongod process. Currently, after a restart there is a recovery session. Log excerpt:

2018-02-05T17:44:12.083+0000 W - [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
2018-02-05T17:44:12.088+0000 I - [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-02-05T17:44:12.088+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.

rancher/community-catalog#732
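A minimal sketch of the signal forwarding being asked for, assuming a bash start script that currently launches mongod in the foreground (the flags and surrounding script are assumptions):

  #!/bin/bash
  # run mongod in the background and forward SIGTERM/SIGINT to it
  mongod --replSet rs0 &      # illustrative flags
  pid=$!
  trap 'kill -TERM "$pid"; wait "$pid"' TERM INT
  wait "$pid"

Alternatively, ending the script with exec mongod ... makes mongod PID 1, so it receives the container's stop signal directly without a trap.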

LogStash version - getting logs in, from non-docker source

Is there any plan for a newer version of Logstash? This template runs 1.5.

Also, what would be a good way to get IIS logs from outside my Docker environment into Logstash? The server in question is already set up to use GELF, but I don't know how to configure the inputs on this build. Won't any settings I add to the config file be overwritten if the containers move from one server to another?
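For reference, a GELF input in a Logstash pipeline configuration normally looks like the snippet below (the port is the GELF default; the type tag is illustrative). In this template such settings would have to flow in through the confd-managed config rather than a hand-edited file, otherwise they are regenerated when the container is rescheduled:

  input {
    gelf {
      port => 12201      # default GELF UDP port
      type => "iis"      # illustrative tag for the IIS server's logs
    }
  }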

connection problem of mongodb with docker

Hi,

I am trying to run MongoDB in Docker and connect to it, but it is showing an error.

$ docker pull mongo
$ docker run -it mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/PMSDB"'
MongoDB shell version v3.6.2
connecting to: mongodb://:/PMSDB
2018-01-30T10:31:34.696+0000 E QUERY [thread1] Error: Empty host component parsing HostAndPort from ":" :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

user2@system2 MINGW64 ~/sample
$

How can I resolve this?
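For what it's worth, the empty host in "mongodb://:/PMSDB" means the MONGO_PORT_27017_TCP_* variables were never set. With the legacy Docker links mechanism, those variables are only injected when the client container is linked to a running mongo container, roughly like this (container names are illustrative):

  $ docker run --name some-mongo -d mongo
  $ docker run -it --rm --link some-mongo:mongo mongo \
      sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/PMSDB"'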

Elasticsearch 2.0

I am currently playing around with the ES template attempting to get 2.0.0 up and running. I changed all the ES images to use 2.0.0 but the stack seems to get stuck in an upgrading service loop when I bring it up and I'm not sure where to start troubleshooting. Any ideas?

MariaDB Galera Cluster : Add a sidekick for backup

Hi,

Is there a way to have a sidekick for backing up the Galera cluster?
Or to mount /var/lib/mysql from the Galera data onto the host?

I had a big DDoS last week which completely crushed the Galera cluster, and I was unable to access it or restart it.
And when I did restart it, it wiped the databases clean.
(I had backups, but from 6 hours earlier, so a lot of data was lost.)

So having the possibility of an immediate backup outside the containers, or a volume, would be great.

Thanks
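A rough rancher-compose style sketch of the sidekick idea, where every name, image, and path below is an assumption rather than something this repo ships:

  galera:
    image: rancher/galera                  # illustrative
    labels:
      io.rancher.sidekicks: galera-backup
  galera-backup:
    image: example/mysql-backup:latest     # hypothetical backup image
    volumes_from:
      - galera
    volumes:
      - /backups/galera:/backup            # write dumps out to the host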

Unable to load native-hadoop library for your platform

I am using Rancher to manage an environment, with Hadoop + YARN (Experimental) for Flink and ZooKeeper in Rancher.

I am trying to configure HDFS in flink-conf.yaml.
These are the changes I made for the HDFS connection:

  1. fs.hdfs.hadoopconf: /etc/hadoop
  2. recovery.zookeeper.storageDir: hdfs://:8020/flink/recovery
  3. state.backend.fs.checkpointdir: hdfs://:8020/flink/checkpoints

And I get an error that says:

2016-09-06 14:10:44,965 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

What did I do wrong?

Best regards

MongoDB - cluster running on one machine

I run mongo-cluster on one machine, but the cluster does not seem to be ready.

Here is one mongo instance:

rs0:PRIMARY> rs.conf()
{
	"_id" : "rs0",
	"version" : 1,
	"protocolVersion" : NumberLong(1),
	"members" : [
		{
			"_id" : 0,
			"host" : "MongoDB-mongo-cluster-1:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {

			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		}
	],
	"settings" : {
		"chainingAllowed" : true,
		"heartbeatIntervalMillis" : 2000,
		"heartbeatTimeoutSecs" : 10,
		"electionTimeoutMillis" : 10000,
		"getLastErrorModes" : {

		},
		"getLastErrorDefaults" : {
			"w" : 1,
			"wtimeout" : 0
		},
		"replicaSetId" : ObjectId("589ae4c47a9700c0bd526102")
	}
}

and another

> rs.conf()
2017-02-08T10:56:30.593+0000 E QUERY    [thread1] Error: Could not retrieve replica set config: {
	"info" : "run rs.initiate(...) if not yet done for the set",
	"ok" : 0,
	"errmsg" : "no replset config has been received",
	"code" : 94
} :
rs.conf@src/mongo/shell/utils.js:1197:11

@galal-hussein
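For reference, if the second container never received a replica set config, it would normally have to be added from the primary, roughly like this (the hostname is assumed by analogy with the first member):

  rs0:PRIMARY> rs.add("MongoDB-mongo-cluster-2:27017")
  rs0:PRIMARY> rs.status()    # verify both members now appear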

[Request] HDFS volume mapping.

I am using this template as a proof of concept for deploying Hadoop with Rancher. Is there an easy way to map in external volumes for persistent data storage?
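A minimal sketch of the kind of mapping being asked about, assuming the datanode keeps its blocks under /hadoop/dfs/data inside the container (the service name, image, and both paths are assumptions that depend on the actual template):

  datanode:
    image: rancher/hadoop-datanode       # illustrative
    volumes:
      - /data/hdfs:/hadoop/dfs/data      # host path : container data dir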

[Request] Prometheus monitoring

It would be cool to have a stack that runs some combination of a stats collector (cAdvisor), an aggregator (Prometheus), and a dashboard (Grafana or PromDash) for viewing.
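A rough docker-compose sketch of the stack being requested, using the upstream images and default ports (the Prometheus scrape config that points at cadvisor is omitted; paths and tags are illustrative):

  prometheus:
    image: prom/prometheus
    volumes:
      - /etc/prometheus:/etc/prometheus   # expects a prometheus.yml scraping cadvisor:8080
    ports:
      - "9090:9090"
    links:
      - cadvisor
  cadvisor:
    image: google/cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    links:
      - prometheus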

Service upgrade and create_index

Hi,

I deployed a zookeeper cluster made of 3 nodes.

As I wanted to update the configuration and do a rolling upgrade, I opted for the following command:

rancher-compose -p zookeeper up --upgrade --batch-size 1 --interval 60000 

The problem is that Rancher creates a new container which has a different index from the original container we wanted to upgrade.

The same issue applies to any cluster that makes use of node IDs, e.g. kafka.

How can we make sure that we retain the same index when doing an upgrade?

Cheers,
Matteo

Errors in galera-conf/Dockerfile cause build failures on RHEL 7.x

There are 2 errors in https://github.com/rancher/catalog-dockerfiles/blob/master/galera/containers/0.2.0/galera-conf/Dockerfile that prevent a successful build on RHEL 7.x:

  1. The "ADD ./run ./start_galera /" is not allowed on RHEL 7 because it tries to handle both ./run and /run (which is a folder in Alpine). This can easily be solved by renaming ./run to something unique (see the sketch after this list).

  2. Giddyup is no longer available as a binary download. This line returns a 404:
    ADD https://github.com/cloudnautique/giddyup/releases/download/v0.1.0/giddyup /giddyup
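A minimal sketch of the first fix, splitting the combined ADD into separate, unambiguous instructions (the renamed target path is an assumption, and any other reference to /run in the image would need updating to match):

  # instead of: ADD ./run ./start_galera /
  COPY ./run /run.sh                 # renamed so it cannot clash with the /run directory
  COPY ./start_galera /start_galera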

elasticsearch using glusterfs

when using glusterfs as the data volume, I see the following:

4/3/2016 8:53:55 AM [2016-04-03 12:53:55,083][WARN ][cluster.action.shard ] [Nicole St. Croix] [.kibana][0] received shard failed for target shard [[.kibana][0], node[MlquNusiR2O-9Lc5x2dLeQ], [P], v[1], s[INITIALIZING], a[id=j_vQF1yPRMWPQR56c3Th_w], unassigned_info[[reason=INDEX_CREATED], at[2016-04-03T12:53:50.855Z]]], indexUUID [_4mgkEHzRxawb_2dE3vihA], message [failed recovery], failure [IndexShardRecoveryException[failed recovery]; nested: AlreadyClosedException[Underlying file changed by an external force at 2016-04-03T12:53:54.102783Z, (lock=NativeFSLock(path=/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/.kibana/0/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],ctime=2016-04-03T12:53:54.102783Z))]; ]
4/3/2016 8:53:55 AM [.kibana][[.kibana][0]] IndexShardRecoveryException[failed recovery]; nested: AlreadyClosedException[Underlying file changed by an external force at 2016-04-03T12:53:54.102783Z, (lock=NativeFSLock(path=/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/.kibana/0/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],ctime=2016-04-03T12:53:54.102783Z))];
4/3/2016 8:53:55 AM at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:179)

It appears to be a similar issue to https://www.gluster.org/pipermail/gluster-users/2015-September/023676.html

I also created a forum post about it. https://forums.rancher.com/t/glusterfs-and-elasticsearch/2293

Wondering if anyone else has seen this or knows of a workaround. I've considered using the setting recommended here: https://www.gluster.org/pipermail/gluster-users/2015-September/023676.html , (gluster volume set cluster.consistent-metadata on), but it might not be supported yet.
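For reference, that setting is applied per volume, e.g. (the volume name is illustrative):

  gluster volume set es-data cluster.consistent-metadata on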

Elasticsearch confd template parsing failing

So, I'm having trouble getting zen discovery working with the confd configuration provided by the elasticsearch-conf package. There seem to be two problems:

  1. The {{range ls "/self/service/metadata/elasticsearch/yml"}} command produces a value called "self", which causes the {{getv (printf "/self/service/metadata/elasticsearch/yml/%s" .)}} call to fail since that key does not exist in the Rancher metadata (a possible guard is sketched after this list).
  2. The other issue is that this call, {{ $containerName := getv (printf "/self/service/containers/%s" .)}}, fails. This is odd considering that if I make a curl request to http://rancher-metadata/latest/self/service/containers I do in fact see the container that the template is trying to use.
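Purely as a sketch of the guard mentioned in point 1, the range body could skip the spurious key before calling getv; this is not the repo's actual template, just an illustration of the idea:

  {{range ls "/self/service/metadata/elasticsearch/yml"}}{{if ne . "self"}}
  {{.}}: {{getv (printf "/self/service/metadata/elasticsearch/yml/%s" .)}}{{end}}{{end}}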

How to connect a client?

I'm confused about how to connect a client to the MongoDB catalog replica set offering. I understand how to create it, but how would a client connect to the primary of the replica set? I've tried setting up an HAProxy LB in front of it, but it just round-robins between the nodes in the cluster. Since the cluster nodes don't expose 27017, you can't set up the usual MongoDB URL to connect to the primary consistently.
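For reference, MongoDB drivers discover the primary themselves when given all replica set members in the connection string, rather than going through a load balancer; the hostnames, database, and set name below are illustrative:

  mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/mydb?replicaSet=rs0

That still requires the member hostnames and ports to be reachable from the client, which is exactly the difficulty described here.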

MongoDB - Missing Bash

Hey guys,
I just noticed that the scripts in the mongodb config container are not working because /bin/bash is missing!

MongoDB cluster should have service for generating connection string

The main difficulty of using a MongoDB cluster on Rancher is that the connection string must list all replica set members. It is hard for a non-pro Rancher user like me to implement this kind of thing, so I suggest adding a service, integrated into the catalog, that returns the running service's connection string.

elasticsearch increase JVM heap size

I have already deployed an Elasticsearch cluster from the Rancher catalog. I found that the Elasticsearch node JVM heap size defaults to 1g, which is too small. Is there a way to increase the JVM heap size for the Elasticsearch nodes?
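For the Elasticsearch versions this template targets (1.x/2.x), the heap is normally controlled with the ES_HEAP_SIZE environment variable, so in compose terms the idea would look roughly like this (the service name and value are illustrative):

  elasticsearch-datanodes:
    environment:
      ES_HEAP_SIZE: 2g    # raise from the 1g default; keep it well under the container's memory limit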

kopf's port of elasticsearch 0.1 is not open

Hi there,

I'm trying to replay your demo: http://rancher.com/container-logging-with-docker-and-elk-september-2015-recorded-online-meetup/
So I'm using the elasticsearch 0.1.0 docker-compose file: https://github.com/rancher/compose-templates/blob/master/elasticsearch/0.1.0/docker-compose.yml

But port 80 of kopf is not open outside the container.
Any clue? Where should I start investigating this problem?

Thanks for your help

I like rancher by the way ;)

Upgrade / contribution documentation.

It would be great if you could document a few things:

  • Will you upgrade these catalog entries, for example when a new version of the underlying software is released?
  • If not, should we open PRs, or issues and PRs? What's the process?
