blind-oracle / cortex-tenant

Prometheus remote write proxy that adds Cortex/Mimir tenant ID based on metric labels

License: Mozilla Public License 2.0

Makefile 6.26% Shell 0.51% Go 85.14% Dockerfile 2.74% Smarty 5.35%
cortex tenant proxy prometheus metrics labels timeseries cortex-tenant kubernetes mimir

cortex-tenant's People

Contributors

adberger, ajpauwels, alexdcraig, arempter, automatedops, blind-oracle, dandydeveloper, df-cgdm, giedriuss, jeroen-nijssen, ksrt12, lenglet-k, marcinfigiel, matthewjstanford, ronan-wescale, sentoz, till, vincentfree

cortex-tenant's Issues

Make a release

Hello,

Can you create a tag 1.11.1 with the max_conns_per_host configuration?

Thank you!

Suggestion: Create a helm chart

Good afternoon !
I find your project very useful and wanted to deploy it, since I currently have this use case with Mimir. However, I noticed that there are only manifest files for the k8s resources, and I couldn't help but think that a Helm chart would allow more people to easily set up and customize your solution!
It is possible to publish one and claim ownership of it on Artifact Hub.
If you are interested, I could create a merge request with a basic Helm chart of templated manifests; let me know!

Issues writing to Grafana Mimir

Hi there, promising project.
Unfortunately I have a problem with the configuration.

Prometheus configuration (deployed via https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack):

prometheus:
  prometheusSpec:
    remoteWrite:
      # doesn't work
      - name: cortex-tenant
        url: http://cortex-tenant.default.svc:8080/push
        writeRelabelConfigs:
          - targetLabel: namespace
            replacement: tenant-a
      # works
      - name: direct
        url: http://mimir-nginx.mimir-test.svc:80/api/v1/push
        writeRelabelConfigs:
          - targetLabel: namespace
            replacement: tenant-a

Cortex-Tenant configuration:

listen: 0.0.0.0:8080
listen_pprof: 0.0.0.0:7008
target: http://mimir-nginx.mimir-test.svc:80/api/v1/push
log_level: debug
timeout: 10s
timeout_shutdown: 10s
concurrency: 1000
metadata: false
log_response_errors: true

tenant:
  label: namespace
  label_remove: false
  header: X-Scope-OrgID
  default: cortex-tenant-default
  accept_all: false

In Mimir and also in cortex-tenant I see the following errors popping up:

Mimir:

10.244.0.87 - - [24/May/2023:10:58:40 +0000]  405 "GET /api/v1/push HTTP/1.1" 0 "-" "cortex-tenant" "-"
10.244.0.87 - - [24/May/2023:10:58:40 +0000]  405 "GET /api/v1/push HTTP/1.1" 0 "-" "cortex-tenant" "-"
10.244.0.87 - - [24/May/2023:10:58:40 +0000]  405 "GET /api/v1/push HTTP/1.1" 0 "-" "cortex-tenant" "-"
10.244.0.87 - - [24/May/2023:10:58:41 +0000]  405 "GET /api/v1/push HTTP/1.1" 0 "-" "cortex-tenant" "-"

cortex-tenant:

time="2023-05-24T11:11:56Z" level=error msg="proc: src=10.244.0.86:54214 req_id=30cb3077-f5bd-4861-80fa-e485a56995f4 HTTP code 405 ()"
time="2023-05-24T11:11:56Z" level=error msg="proc: src=10.244.0.86:54214 req_id=90bcdf68-4c83-45ad-a68b-d23d3e4c6f53 HTTP code 405 ()"
time="2023-05-24T11:11:56Z" level=error msg="proc: src=10.244.0.86:54214 req_id=1d746aa7-fcf7-4b6b-9715-9adbf6313453 HTTP code 405 ()"
time="2023-05-24T11:12:01Z" level=error msg="proc: src=10.244.0.86:54214 req_id=c126f0be-0350-4afc-82e4-08497bbe1690 HTTP code 405 ()"

HTTP method GET doesn't seem right (see https://grafana.com/docs/mimir/latest/references/http-api/#remote-write).

Maybe you can help me?

Support matching on multiple labels

In our environment we have several different metric labels that could all indicate what tenant the metric should belong to. Could we add support for multiple label matches, in a sort of hierarchy?

For example, here is the current configuration:

tenant:
  label: namespace

But we would like to prefer a tenant label if it exists; also, some of our metric labels get rewritten with an exported_ prefix, so we'd prefer those as well.

Looking for something like this:

tenant:
  label_list:
    - tenant
    - exported_namespace
    - namespace

This wouldn't need to be a breaking change. We can use the list logic only if the list is set, otherwise it will default to the original behavior.

Does this sound like a good addition? If so I can get a PR together for this.
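The fallback lookup proposed above can be sketched as follows (label_list, the ordering semantics, and the function name are assumptions from this proposal, not the project's actual code):

```go
package main

import "fmt"

// tenantFromLabels returns the tenant for a timeseries by trying each
// label in labelList in order and falling back to def when none match.
// labelList mirrors the proposed `tenant.label_list` config option.
func tenantFromLabels(labels map[string]string, labelList []string, def string) string {
	for _, l := range labelList {
		if v, ok := labels[l]; ok && v != "" {
			return v
		}
	}
	return def
}

func main() {
	labelList := []string{"tenant", "exported_namespace", "namespace"}
	ts := map[string]string{"namespace": "app1", "exported_namespace": "team-a"}
	// "tenant" is absent, so "exported_namespace" wins over "namespace".
	fmt.Println(tenantFromLabels(ts, labelList, "cortex-tenant-default"))
}
```

Keeping the single `label` key as the default path would preserve backward compatibility, as suggested.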

Propagate metrics metadata to Cortex

Hi, we came across a problem when trying to get information about the metrics metadata in Cortex.
Currently, metadata requests from Prometheus are dropped, as they don't include timeseries. From line 119 in processor.go:

// If there's metadata - just accept the request and drop it
if len(wrReqIn.Metadata) > 0 {
    return
}

Could we discuss why this is the case, and whether it is possible to add functionality to forward metadata to Cortex?
Documentation from Cortex: https://cortexmetrics.io/docs/proposals/support-metadata-api/

Thanks in advance 👍
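Since metadata requests carry no timeseries to key on, one possible approach (a sketch of the idea being requested, with illustrative type and function names, not the project's actual code) is to forward them under the default tenant when one is configured, and only drop them otherwise:

```go
package main

import "fmt"

// writeRequest is a simplified stand-in for the Prometheus remote-write
// request handled in processor.go (illustrative, not the real type).
type writeRequest struct {
	Timeseries []string
	Metadata   []string
}

// metadataTenant decides how to handle a metadata-only request: forward it
// under the configured default tenant, or drop it when no default is set,
// since there is no label to divide metadata between tenants.
func metadataTenant(req writeRequest, defaultTenant string) (tenant string, drop bool) {
	if len(req.Metadata) == 0 || len(req.Timeseries) > 0 {
		return "", false // not a metadata-only request; normal path applies
	}
	if defaultTenant == "" {
		return "", true // nothing to key on: accept the request but drop it
	}
	return defaultTenant, false
}

func main() {
	req := writeRequest{Metadata: []string{"go_goroutines"}}
	t, drop := metadataTenant(req, "cortex-tenant-default")
	fmt.Println(t, drop)
}
```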

Not an issue actually

Hello,

Just wanted to thank you a lot for this tool; it totally resolves my issue where I need to monitor different blackbox services for various tenants from the same Prometheus instance.
Really a great tool, saves me a lot of headaches ;)

HTTP 405 error

prometheus-server ts=2023-05-17T07:33:56.434Z caller=dedupe.go:112 component=remote level=error remote_name=24a04d url=http://a3.ap.elb.amazonaws.com:8080/push msg="non-recoverable error" count=10 err="server returned HTTP status 405 Method Not Allowed: "

Prometheus ConfigMap:

    remote_write:
    - url: "http://a3.ap.elb.amazonaws.com:8080/push"
    scrape_configs:
    - job_name: tenant-job
      scrape_interval: 60s
      static_configs:
      - targets:
          - localhost:9090
        labels:
          tenant: foobar

cortex-tenant ConfigMap:

apiVersion: v1
    log_level: debug
    # HTTP request timeout
    timeout: 10s
    # Timeout to wait on shutdown to allow load balancers detect that we're going away.
    # During this period after the shutdown command the /alive endpoint will reply with HTTP 503.
    # Set to 0s to disable.
    timeout_shutdown: 10s
    # Max number of parallel incoming HTTP requests to handle
    concurrency: 1000
    # Whether to forward metrics metadata from Prometheus to Cortex
    # Since metadata requests have no timeseries in them - we cannot divide them into tenants
    # So the metadata requests will be sent to the default tenant only, if one is not defined - they will be dropped
    metadata: false
    tenant:
      # Which label to look for the tenant information
      label: tenant
      # Optional hard-coded prefix with delimiter for all tenant values.
      # Delimiters allowed for use:
      # https://grafana.com/docs/mimir/latest/configure/about-tenant-ids/
      prefix: ""
      # Whether to remove the tenant label from the request
      label_remove: true
      # To which header to add the tenant ID
      header: X-Scope-OrgID
      # Which tenant ID to use if the label is missing in any of the timeseries
      # If this is not set or empty then the write request with missing tenant label
      # will be rejected with HTTP code 400
      default: cortex-tenant-default
      # Enable if you want all metrics from Prometheus to be accepted with a 204 HTTP code
      # regardless of the response from Cortex. This can lose metrics if Cortex is
      # throwing rejections.

Update - Implement InsecureSkipVerify

Hi,

For my environment I need to set InsecureSkipVerify on the fasthttp client side to allow an unknown certificate on my route.

I can check on my side to propose a pull request, but it may be simpler for you to implement this parameter more quickly.

Thanks
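A sketch of what this could look like, assuming a new boolean config option (the option and function names here are hypothetical); fasthttp's Client exposes a TLSConfig field that such an option could populate:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// tlsConfigFor builds the TLS client config from a hypothetical
// `tls_skip_verify` option. In cortex-tenant this would be assigned
// to the fasthttp.Client's TLSConfig field.
func tlsConfigFor(skipVerify bool) *tls.Config {
	return &tls.Config{
		// Accept certificates signed by unknown CAs (e.g. self-signed
		// routes). This disables server certificate validation, so it
		// should be opt-in and off by default.
		InsecureSkipVerify: skipVerify,
	}
}

func main() {
	cfg := tlsConfigFor(true)
	fmt.Println(cfg.InsecureSkipVerify)
}
```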

[chart] Add health/readiness probes

To make sure that the service is ready to receive requests, it would be nice to be able to configure liveness and readiness probes in the Helm chart.

These could (by default) be set up such that the liveness probe checks for TCP liveness by attempting to open a connection to a specified port (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe), and the readiness probe could check the /alive endpoint using httpGet (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request).

Currently, the workaround is to manually create a post-renderer which attaches the probes, but it would be nice if this were added to the Helm chart.
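A sketch of what the chart values could look like (key names and ports are assumptions about the chart's values schema, not its actual contents):

```yaml
livenessProbe:
  tcpSocket:
    port: 8080          # the listen port of cortex-tenant
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /alive        # replies 503 during shutdown, per timeout_shutdown
    port: 8080
  periodSeconds: 5
```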

Support ipv6 clusters

When running cortex-tenant in an ipv6 cluster, I'm hitting this error:

time="2022-12-15T15:15:36Z" level=error msg="proc: src=127.0.0.1:45666 couldn't find DNS entries for the given domain. Try using DialDualStack"

Does cortex-tenant only support querying for A records and not AAAA?

Help needed in configuring Prometheus for Cortex

Hi all,

I need to push metrics from Prometheus to Mimir along with a tenant ID based on the deployed pod/resource label workspaceId.

$ kubectl get pod/deployment-afb57b22d92b4d46bdd201e437e23d50-76f7f67f45-m8wg6 -n 2ee8544f99ba4cedaf627e4bccaa7abe -o jsonpath='{$.metadata.labels}' | jq
{
  "app.kubernetes.io/name": "vault",
  "deploymentId": "afb57b22d92b4d46bdd201e437e23d50",
  "pod-template-hash": "76f7f67f45",
  "region": "eb1ce3b00d024e27ad620ee8e74ed691",
  "workspaceId": "2ee8544f99ba4cedaf627e4bccaa7abe"
}


So I deployed cortex-tenant using the config below,

# only changed values are shown
data:
  cortex-tenant.yml: |
    tenant:
      # custom label name to be used as tenant information
      label: workspaceId


While deploying the prometheus/kube-prometheus-stack chart, I tried using the config below, but still couldn't get those labels into the metrics.

# only corresponding values are shown
kube-state-metrics:
  metricLabelsAllowlist: 
    - pods=[workspaceId]


How do I inject a specific pod label into the Prometheus time series so that it can be used in cortex-tenant?
Could someone help with configuring Prometheus for this use case?
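One thing to note: metricLabelsAllowlist only makes kube-state-metrics expose the pod label on its own kube_pod_labels series; it does not copy the label onto every pod metric. For metrics scraped directly from the pods, a common approach (a sketch; the job name and layout are illustrative) is to copy the pod label onto every scraped series via service-discovery relabeling:

```yaml
# Prometheus scrape config sketch: Kubernetes SD exposes each pod label
# as a __meta_kubernetes_pod_label_<name> meta label, which relabeling
# can turn into a regular label on every series of that target.
scrape_configs:
  - job_name: pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_workspaceId]
        target_label: workspaceId
```

With the label present on every series, cortex-tenant's `tenant.label: workspaceId` setting can then route them.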

Release of the current Helm Chart

The latest version of the Helm Chart in the https://blind-oracle.github.io/cortex-tenant/ repository is 0.4.0. Can you please release the newer versions of the Helm Chart?

For tenant prefix, prefer the prometheus tenant id

In our environment we'd like to use a single deployment of cortex-tenant to manage input from multiple prometheus instances. Each prometheus instance is already setting a tenant id on the request, but that tenant id is lost when it hits cortex-tenant.

It would be nice if we could preserve the source tenant ID and use it as the tenant prefix when cortex-tenant writes to the backend.

I'm thinking something along these lines:

tenant:
  prefix: default-prefix
  prefix_prefer_source: true

Example:

Prom-A sends metrics to cortex-tenant with X-Scope-OrgID: Prom-A, containing metrics with namespace labels that will be translated to a tenant, e.g. namespace: app1 (assuming we're using namespace as the tenant label in cortex-tenant). These metrics would be mapped to a new tenant called Prom-A-app1.

Prom-B also sends to the same cortex-tenant instance with X-Scope-OrgID: Prom-B and metric label namespace: app1. This would be translated to Prom-B-app1.

A third prometheus that has no tenant set would be mapped to default-prefix-app1.

And so on.

The reason this is preferred over simply having multiple cortex-tenant deployments is that it keeps the infra as simple as possible. We're decoupling the cortex-tenant deployments from the prometheus deployments.

Thoughts? I've made this change locally and it works as expected. Would be happy to create a PR if this is something that would be useful for other folks.
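The composition rule described above can be sketched like this (prefix_prefer_source and the helper name follow this proposal, not the project's actual config keys):

```go
package main

import "fmt"

// tenantFor composes the final tenant ID: the prefix is the incoming
// X-Scope-OrgID when present and prefix_prefer_source is enabled,
// otherwise the configured default prefix.
func tenantFor(sourceOrg, defaultPrefix, labelTenant string, preferSource bool) string {
	prefix := defaultPrefix
	if preferSource && sourceOrg != "" {
		prefix = sourceOrg
	}
	return prefix + "-" + labelTenant
}

func main() {
	// Prom-A with namespace app1 -> Prom-A-app1
	fmt.Println(tenantFor("Prom-A", "default-prefix", "app1", true))
	// No source tenant set -> default-prefix-app1
	fmt.Println(tenantFor("", "default-prefix", "app1", true))
}
```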

Prefix params doesn't work

I deployed on OpenShift with the K8S sample config and ran into this error:

level=fatal msg="Unable to parse config: yaml: unmarshal errors:\n line 28: field prefix not found in type struct { Label string; LabelRemove bool \"yaml:\\\"label_remove\\\"\"; Header string; Default string; AcceptAll bool \"yaml:\\\"accept_all\\\"\" }"

When I comment out this parameter it works, but I need to prefix the tenant name.

Cortex Authentication Enabled

Hi,

I am using multiple Prometheus deployments on multiple Kubernetes (k8s) clusters that use the remote_write HTTP API to push metrics to Cortex deployed in a target/destination k8s cluster. Currently I have auth_enabled set to true and use basic_auth with a username and password to authenticate each Prometheus deployment per tenant against the remote Cortex through an nginx reverse proxy.

However, I am finding this entire setup quite cumbersome. If I were to use cortex-tenant, could I achieve multi-tenancy per physical tenant (a single Prometheus deployed into each tenant's k8s cluster), with each using its own remote_write API to write to Cortex? Is this even possible? Would I still need to enable auth in Cortex? (I am using a Helm chart to deploy Cortex.)

Kindly let me know.

Issue with Docker image

I'm getting the error below when trying to run the Docker container. Any idea how to fix this?

(screenshot: docker image issue)

[chart] Set `autoscaling.minReplica=2` by default

By default, autoscaling.enabled=true, autoscaling.minReplica=1, podDisruptionBudget.enabled=true, and podDisruptionBudget.minAvailable=1.

This means that since (by default) the HPA allows autoscaling to 1 replica, you might get errors when patching nodes, since kubectl drain will try to remove the single Cortex Tenant pod, which the PDB doesn't allow.

The workaround is simple; set autoscaling.minReplica=2 or disable the PDB.
However, it would be nice if the chart could be deployed to a cluster with as few changes as possible, so I would like the chart to have autoscaling.minReplica=2 by default. What do you think of this?
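The suggested default, expressed as chart values (assuming the key names quoted in this issue):

```yaml
autoscaling:
  enabled: true
  minReplica: 2   # leaves one pod evictable during `kubectl drain`
podDisruptionBudget:
  enabled: true
  minAvailable: 1 # the PDB can now always keep one pod available
```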

Cortex-tenant K8s manifests

@blind-oracle I have created deployment, service, and configmap (configuration file) manifests to deploy the cortex-tenant Docker image in a Kubernetes cluster. Is it okay if I push those to this repo? I cannot attach them to this issue.

Docker image

Would you be up for publishing a docker image? E.g. using GH packages? I can PR a workflow.

Nil pointer dereference in error handling code

We had a situation when the traffic got so high it was too much for the proxy instances we were running in our Kubernetes cluster - the default 1000 concurrent connections handled by fasthttp server was not enough. That situation revealed a bug in the proxy code, which resulted in pods restarting due to a SIGSEGV:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x976a2e]

goroutine 836 [running]:
main.(*processor).handle(0xc000170300, 0xc000219800)
    /build/processor.go:191 +0x42e
github.com/valyala/fasthttp.(*Server).serveConn(0xc00015c400, {0xb62818?, 0xc0000ac6f0})
    /go/pkg/mod/github.com/valyala/[email protected]/server.go:2359 +0x120d
github.com/valyala/fasthttp.(*workerPool).workerFunc(0xc0000f4000, 0xc0001a9a40)
    /go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:224 +0xa9
github.com/valyala/fasthttp.(*workerPool).getCh.func1()
    /go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:196 +0x38
created by github.com/valyala/fasthttp.(*workerPool).getCh
    /go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:195 +0x1b0

This seems to be caused by this code:

if r.err != nil {
	ctx.Error(err.Error(), fh.StatusInternalServerError)
	p.Errorf("src=%s req_id=%s: unable to proxy metadata: %s", clientIP, reqID, r.err)
	return
}

in the processor.go file. It checks whether r.err is not nil, but on the next line calls err.Error() on the separate err variable, which is nil at that point.

[helm] Add service monitor in charts

The service monitor is missing from the Helm chart; pod and service port configuration doesn't exist either.

I have made it on my fork and will forward you a PR.

Thanks for your work!

1.12.3 No docker image

Hi,

thanks for tag 1.12.3, but have you built a Docker image? When I run a docker pull I get this error:

docker pull ghcr.io/blind-oracle/cortex-tenant:v1.12.3
manifest unknown

And when I run the latest image I get this error:

level=fatal msg="Unable to parse config: yaml: unmarshal errors:\n  line 8: field max_conns_per_host not found in type main.config"

Maybe 1.12.3 is not in the latest tag?

Dropping bad series data?

I have a situation where some Cassandra exporters are outputting bad data for me which I need to resolve.

In the meantime, as a workaround, I'd like to be able to just drop these series on a certain number of rejections.

Example of what I'm seeing:

time="2021-05-12T04:51:15Z" level=error msg="proc: src=10.244.3.46:60714 req_id=e67dc8e2-eaf7-48a8-95b9-977b3f3e34a6 HTTP code 400 (user=xyz: sample with repeated timestamp but different value; last value: NaN, incoming value: 0 for series {__name__=\"cassandra_table_readlatency_mean\", app=\"rook-cassandra\", app_kubernetes_io_managed_by=\"rook-cassandra-operator\", app_kubernetes_io_name=\"rook-cassandra\", cassandra_rook_io_cluster=\"rook-cassandra\", keyspace=\"cortex\", kubernetes_namespace=\"rook-cassandra\", kubernetes_pod_name=\"rook-cassandra-aus1-aus1rack1-0\",table=\"chunks_2670\"}\n)"
time="2021-05-12T04:51:15Z" level=error msg="proc: src=10.244.3.46:60714 req_id=562fbe5b-f593-4159-bc3c-155a58f53012 HTTP code 400 (user=xyz: sample with repeated timestamp but different value; last value: NaN, incoming value: 0 for series {__name__=\"cassandra_table_readlatency_mean\", app=\"rook-cassandra\", app_kubernetes_io_managed_by=\"rook-cassandra-operator\", app_kubernetes_io_name=\"rook-cassandra\", cassandra_rook_io_cluster=\"rook-cassandra\", keyspace=\"system\", kubernetes_namespace=\"rook-cassandra\", kubernetes_pod_name=\"rook-cassandra-aus1-aus1rack1-0\",table=\"batches\"}\n)"

So in this case, I'm getting a 400, and it looks like these effectively bottleneck the tenant because nothing is being done with this series on rejection.

Does this have any inherent logic to remove "bad" series or rejected series from Cortex?
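There doesn't appear to be such logic built in; a hedged sketch of the workaround being asked for (the threshold, names, and fingerprint scheme are all assumptions, not project code):

```go
package main

import "fmt"

// seriesDropper counts HTTP 400 rejections per series fingerprint and
// starts dropping a series once it has been rejected maxRejects times,
// so known-bad series stop bottlenecking the tenant.
type seriesDropper struct {
	maxRejects int
	rejects    map[string]int
}

func newSeriesDropper(maxRejects int) *seriesDropper {
	return &seriesDropper{maxRejects: maxRejects, rejects: map[string]int{}}
}

// onReject records a rejection for the series identified by fp.
func (d *seriesDropper) onReject(fp string) { d.rejects[fp]++ }

// shouldDrop reports whether the series should be silently dropped
// instead of being sent to Cortex again.
func (d *seriesDropper) shouldDrop(fp string) bool {
	return d.rejects[fp] >= d.maxRejects
}

func main() {
	d := newSeriesDropper(3)
	fp := `cassandra_table_readlatency_mean{keyspace="cortex"}`
	for i := 0; i < 3; i++ {
		d.onReject(fp)
	}
	fmt.Println(d.shouldDrop(fp))
}
```

In practice this could also be handled upstream with Prometheus writeRelabelConfigs dropping the offending series before they reach the proxy.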

[K8S] HPA working but loadbalancing not working as expected

I have HPA enabled on my cortex-tenant deployment, but after scaling, the other pods don't receive timeseries.
When I check the cortex_tenant_timeseries_received metric, only the first pod has a count.
But when I check the cortex_tenant_timeseries_batches_received metric, the other pods receive some batches, though far fewer than the first pod:

Result series: 3

pod 1: 25353
pod 2: 4
pod 3: 15

Maybe we can add an nginx proxy (or another app) to load-balance the requests between the backends?
Tell me and I can try to implement it.

Security: Do i need to put nginx or anything in front of cortex-tenant?

I was thinking about deploying cortex-tenant on the public web and just configuring my tenants to send data to it. Beyond being at risk of a bunch of useless metrics data being shipped to me, are there any other risks to be aware of? As far as I can tell, this project is a write-only endpoint, so no one should be able to read the data that they send to me?

Tenant label for non static configs

In the Prometheus example config there are only references for adding the tenant label to static configs.

How does one add it for other jobs? Would something like the following work for all services from a Prometheus server?

global:
  external_labels:
    tenant: foobar

Thanks for making this btw, very clean.

Basic Auth support

Hi, our central Cortex/Mimir cluster is protected by basic auth; could cortex-tenant be configured to read a k8s secret containing a username/password?

Topology would be like:
(local (prometheus) -> (cortex-tenant)) -----> (remote (cortex basic-auth ingress))

cortex-tenant proxy ignores Cortex error codes

The proxy always returns a 500 error (https://github.com/blind-oracle/cortex-tenant/blob/main/processor.go#L153) to Prometheus, even when Cortex returns a 400.

The problem is with Line 143:

    } else if r.code < 200 || r.code >= 300 {
      errs = me.Append(errs, fmt.Errorf("HTTP code %d (%s)", r.code, string(r.body)))
      p.Errorf("src=%s req_id=%s HTTP code %d (%s)", clientIP, reqID, r.code, string(r.body))
    }

An error is added to the collection whenever Cortex returns a non-2xx code, so the code below it basically doesn't make sense: the method always returns code 500 in line 150.

	if errs.ErrorOrNil() != nil {
		ctx.Error(errs.Error(), fh.StatusInternalServerError)
		return
	}

This also means that flag AcceptAll in line 154 doesn't work.

This leads to Prometheus always retrying requests, even when by default it should not. See:

https://github.com/prometheus/prometheus/blob/fe06f16c116a109330cce0bcb61c2c6728ab7227/storage/remote/client.go#L228

Prometheus only retries on 5XX errors.

Problem

When trying to send out-of-order metrics through the proxy, Prometheus gets stuck in the retry loop, so no data is sent. From the proxy I would expect the same behaviour as the normal Prometheus -> Cortex path.

Prometheus:

ts=2021-11-09T08:50:06.236Z caller=dedupe.go:112 component=remote level=warn remote_name=cortex_tenant url=http://127.0.0.1:8070/push msg="Failed to send batch, retrying" err="server returned HTTP status 500 Internal Server Error: 1 error occurred:"
ts=2021-11-09T08:51:06.313Z caller=dedupe.go:112 component=remote level=warn remote_name=cortex_tenant url=http://127.0.0.1:8070/push msg="Failed to send batch, retrying" err="server returned HTTP status 500 Internal Server Error: 1 error occurred:"
ts=2021-11-09T08:52:06.321Z caller=dedupe.go:112 component=remote level=warn remote_name=cortex_tenant url=http://127.0.0.1:8070/push msg="Failed to send batch, retrying" err="server returned HTTP status 500 Internal Server Error: 1 error occurred:"
ts=2021-11-09T08:53:06.366Z caller=dedupe.go:112 component=remote level=warn remote_name=cortex_tenant url=http://127.0.0.1:8070/push msg="Failed to send batch, retrying" err="server returned HTTP status 500 Internal Server Error: 1 error occurred:"
ts=2021-11-09T08:54:06.385Z caller=dedupe.go:112 component=remote level=warn remote_name=cortex_tenant url=http://127.0.0.1:8070/push msg="Failed to send batch, retrying" err="server returned HTTP status 500 Internal Server Error: 1 error occurred:"

Cortex-proxy:

time="2021-11-09T08:55:48Z" level=error msg="proc: src=127.0.0.1:41428 req_id=5fe80dbd-06ec-41a6-987d-f3d0960fcfd5 HTTP code 400 (user=user-default: err: out of order sample. timestamp=2021-11-09T06:41:54.143Z, series= ...
time="2021-11-09T08:55:48Z" level=error msg="proc: src=127.0.0.1:41798 req_id=f2e31318-940c-4bdd-bbff-8434b00da2e6 HTTP code 400 (user=user-default: err: out of order sample. timestamp=2021-11-09T06:41:43.816Z, series= ...

Cortex-distributor:

level=error ts=2021-11-09T08:55:12.142835681Z caller=push.go:51 org_id=org-id traceID=368b869738f1dffe msg="push error" err="rpc error: code = Code(400) desc = user=user-default: err: out of order sample. timestamp=2021-11-09T06:41:43.816Z, series= ...
level=error ts=2021-11-09T08:55:12.154785238Z caller=push.go:51 org_id=org-id traceID=2a175fef0e882923 msg="push error" err="rpc error: code = Code(400) desc = user=user-default: err: out of order sample. timestamp=2021-11-09T06:42:02.956Z, series= ...
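The fix being described can be sketched as a status-mapping helper: pass 4xx upstream codes through so Prometheus stops retrying permanent failures, while transport errors and 5xx still surface as retryable 500s (a hypothetical helper, not the project's code):

```go
package main

import "fmt"

// responseCode maps the code returned by Cortex to the code the proxy
// should send back to Prometheus. Prometheus only retries 5xx, so
// forwarding 4xx unchanged stops the out-of-order retry loop above.
func responseCode(upstream int, transportErr bool) int {
	if transportErr {
		return 500 // couldn't reach Cortex at all: retryable
	}
	if upstream >= 400 && upstream < 500 {
		return upstream // permanent rejection: do not retry
	}
	if upstream < 200 || upstream >= 300 {
		return 500 // other non-2xx (e.g. 502): retryable
	}
	return upstream
}

func main() {
	fmt.Println(responseCode(400, false)) // out-of-order sample: don't retry
	fmt.Println(responseCode(502, false)) // upstream outage: retry
}
```

The same mapping would also give the AcceptAll flag a place to short-circuit with 204 before any error is returned.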
