telegraf-operator helps monitor applications on Kubernetes with Telegraf

License: Apache License 2.0



Telegraf-operator


The motto

Easy things should be easy. Adding monitoring to your application has never been easier.

Does your application expose Prometheus metrics? Then adding the telegraf.influxdata.com/port: "8080" annotation to the pod is all you need to enable Telegraf scraping for it.

Why telegraf-operator?

No one likes setting up monitoring and observability; everybody wants to deploy applications. The burden of adding monitoring, fixing it, and maintaining it should not weigh that much.

Getting started with telegraf-operator

Docker images are released at Quay.

Installing telegraf-operator in your Kubernetes cluster

Helm chart

An up-to-date version of telegraf-operator can be installed using the InfluxData Helm Repository.

Simply run:

helm repo add influxdata https://helm.influxdata.com/
helm upgrade --install telegraf-operator influxdata/telegraf-operator

To change one or more settings, please use the --set option - such as:

helm upgrade --install telegraf-operator influxdata/telegraf-operator \
  --set certManager.enable=true

The certManager.enable setting will use cert-manager CRDs to generate TLS certificates for the webhook admission controller used by telegraf-operator. Please note that this requires cert-manager to be installed in the cluster to work.

It is recommended to use a values file instead of setting individual name-value pairs on the command line.

It's also recommended to configure the classes.data values, which specify the telegraf-operator classes and how gathered data should be stored or persisted. Classes are described in more detail in the Global configuration - classes section.

For example:

classes:
  data:
    default: |
      [[outputs.file]]
        files = ["stdout"]

With this configuration, the telegraf sidecar for workloads in the default class will write its data to the standard output of the telegraf container.

All of the available settings can be found in the values.yaml file bundled with the Helm chart.

Information about the Helm chart can also be found at https://artifacthub.io/packages/helm/influxdata/telegraf-operator.
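As a sketch of the recommended values-file approach (the file name below is an arbitrary assumption), the class configuration above can be saved locally and passed to Helm with -f:

```shell
# Write the class configuration to a local values file
# (the name telegraf-operator-values.yaml is arbitrary).
cat > telegraf-operator-values.yaml <<'EOF'
classes:
  data:
    default: |
      [[outputs.file]]
        files = ["stdout"]
EOF

# Then install or upgrade the chart with the values file, e.g.:
#   helm upgrade --install telegraf-operator influxdata/telegraf-operator \
#     -f telegraf-operator-values.yaml
```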

OperatorHub

An up-to-date version of telegraf-operator is also available from OperatorHub.io.

Please follow instructions at https://operatorhub.io/operator/telegraf-operator for installing telegraf-operator.

Adding annotations to workloads

In order for telegraf-operator to monitor a workload, one or more annotations need to be added to its pods. The telegraf.influxdata.com/class annotation specifies which class the workload belongs to. The operator also needs to know how to scrape data: for Prometheus metrics, the telegraf.influxdata.com/ports annotation specifies the port or ports to scrape. The default path is /metrics and can be changed.

By default telegraf-operator comes with an example default class configured to write to an in-cluster instance of InfluxDB.

For Deployment, StatefulSet and most other Kubernetes objects, this should be added to .spec.template.metadata.annotations section - such as:

apiVersion: apps/v1
kind: Deployment
# ...
spec:
  # ...
  template:
    metadata:
      annotations:
        telegraf.influxdata.com/class: "default"
        telegraf.influxdata.com/ports: "8080"
    spec:
      # ...

Please see Pod-level annotations for more details on all annotations telegraf-operator supports.

Adding telegraf-operator in development mode

For development purposes, the repository provides a development version that can be installed by running:

kubectl apply -f https://raw.githubusercontent.com/influxdata/telegraf-operator/master/deploy/dev.yml 

The command above deploys telegraf-operator, using a separate telegraf-operator namespace and registering webhooks that will inject a telegraf sidecar into all newly created pods.

In order to use telegraf-operator, you also need to define where metrics should be sent. The examples/classes.yml file provides a set of classes to get started with.

To create a sample set of classes, simply run:

kubectl apply -f https://raw.githubusercontent.com/influxdata/telegraf-operator/master/examples/classes.yml

Installing InfluxDB for data retrieval

In order to see the data, you can also deploy InfluxDB v1 in your cluster; the deployment also includes Chronograf, which provides a web UI for InfluxDB v1.

To set it up in your cluster, simply run:

kubectl apply -f https://raw.githubusercontent.com/influxdata/telegraf-operator/master/deploy/influxdb.yml 

After that, every new pod in your cluster (created directly, or via a deployment or statefulset) will include a telegraf container for retrieving data.

Installing a sample application with telegraf-operator based monitoring set up

You can try it out by running one of our samples, such as a Redis server. Simply run:

kubectl apply -f https://raw.githubusercontent.com/influxdata/telegraf-operator/master/examples/redis.yml

You can verify the telegraf container is present by running:

kubectl describe pod -n redis redis-0

The output should include a telegraf container.

In order to see the results in InfluxDB and Chronograf, you will need to set up port-forwarding and then access Chronograf from your browser:

kubectl port-forward --namespace=influxdb svc/influxdb 8888:8888

Next, go to http://localhost:8888 and continue to the Explore section to see your data.

Configuration and usage

Telegraf-operator consists of the following:

  • Global configuration - definition of where the metrics should be sent and other auxiliary configuration, specified as classes
  • Pod-level configuration - definition of how a pod can be monitored, such as ports for Prometheus scraping and additional configurations

Global configuration - classes

Telegraf-operator is based on concepts of globally defined classes. Each class is a subset of Telegraf configuration and usually defines where Telegraf should be sending its outputs, along with other settings such as global tags.

Usually classes are defined as a secret - such as in classes.yml file - and each class maps to a key in a secret. For example:

stringData:
  basic: |+
    [[outputs.influxdb]]
      urls = ["http://influxdb.influxdb:8086"]
    [[outputs.file]]
      files = ["stdout"]
    [global_tags]
      hostname = "$HOSTNAME"
      nodename = "$NODENAME"
      type = "app"

The above defines that any pod whose Telegraf class is basic will have its metrics sent to a specific URL, which in this case is an InfluxDB v1 instance deployed in same cluster. Its metrics will also be logged by telegraf container for convenience. The data will also have hostname, nodename and type tags added for all metrics.

Hot reload

As of version 1.3.0, telegraf-operator supports detecting when the classes configuration has changed and updating the telegraf configuration for affected pods.

This functionality requires telegraf version 1.19 or newer, the first version that supports the --watch-config option required for this feature.

The development deployment example has hot reload enabled. For the Helm chart, version 1.3.0 or newer has to be used and hotReload should be set to true. It is set to false by default to avoid issues when using a version of telegraf prior to 1.19.0.

If deploying telegraf-operator in a different way, it should be run with the --telegraf-watch-config=inotify option. The args section of the telegraf-operator Deployment should be added or modified to include that option - such as:

          args:
            - --enable-default-internal-plugin=true
            - --telegraf-default-class=basic
            - --telegraf-classes-directory=/config/classes
            - --enable-istio-injection=true
            - --telegraf-watch-config=inotify

Pod-level annotations

Each pod, whether standalone or managed by a deployment or statefulset, may also specify how it should be monitored using metadata annotations.

The redis.yml example adds an annotation that enables the Redis input plugin so that Telegraf automatically retrieves metrics from Redis.

apiVersion: apps/v1
kind: StatefulSet
  # ...
spec:
  template:
    metadata:
      annotations:
        telegraf.influxdata.com/inputs: |+
          [[inputs.redis]]
            servers = ["tcp://localhost:6379"]
        telegraf.influxdata.com/class: basic
      # ...
    spec:
      containers:
      - name: redis
        image: redis:alpine

Please see redis input plugin documentation for more details on how the plugin can be configured.

The telegraf.influxdata.com/class annotation specifies that the basic class defined above should be used.

Users can configure the inputs.prometheus plugin by setting the following annotations. Below is an example configuration, and the expected output.

  • telegraf.influxdata.com/port : is used to configure which port telegraf should scrape
  • telegraf.influxdata.com/ports : is used to configure a comma-separated list of ports to scrape
  • telegraf.influxdata.com/path : is used to configure the path to scrape metrics from (a port must also be configured); applies to all ports if multiple are configured
  • telegraf.influxdata.com/scheme : is used to configure the scheme used for scraping (only http or https are allowed as values); applies to all ports if multiple are configured
  • telegraf.influxdata.com/interval : is used to configure the interval for telegraf scraping (a Go-style duration, e.g. 5s, 30s, 2m)
  • telegraf.influxdata.com/metric-version : is used to configure which metrics parsing version to use (1 or 2)
  • telegraf.influxdata.com/namepass : is used to configure which scraped metrics to preserve, as a TOML value added to the telegraf configuration; all metrics are passed if not specified

NOTE: all annotations should be formatted as strings - for example telegraf.influxdata.com/port: "8080", telegraf.influxdata.com/metric-version: "2" or telegraf.influxdata.com/namepass: "['metric1','metric2']".
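As an illustration, the annotations above might be combined as follows; the port numbers and metric name are hypothetical:

```yaml
# Hypothetical pod-template metadata combining the scraping annotations;
# all values are quoted strings, as required for annotations.
metadata:
  annotations:
    telegraf.influxdata.com/class: "default"
    telegraf.influxdata.com/ports: "8080,9090"
    telegraf.influxdata.com/path: "/metrics"
    telegraf.influxdata.com/scheme: "http"
    telegraf.influxdata.com/interval: "30s"
    telegraf.influxdata.com/metric-version: "2"
    telegraf.influxdata.com/namepass: "['http_requests_total']"
```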

Example Prometheus Scraping

apiVersion: apps/v1
kind: StatefulSet
  # ...
spec:
  template:
    metadata:
      annotations:
        telegraf.influxdata.com/class: influxdb # User defined output class
        telegraf.influxdata.com/interval: 30s
        telegraf.influxdata.com/path: /metrics
        telegraf.influxdata.com/port: "8086"
        telegraf.influxdata.com/scheme: http
        telegraf.influxdata.com/metric-version: "2"
      # ...
    spec:
      containers:
      - name: influxdb
        image: quay.io/influxdb/influxdb:v2.0.4

Configuration Output

[[inputs.prometheus]]
  urls = ["http://127.0.0.1:8086/metrics"]
  interval = "30s"
  metric_version = 2

[[inputs.internal]]

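If the namepass annotation value is spliced into the generated prometheus input (an assumption about the rendering, not taken from documented operator output), an annotation such as telegraf.influxdata.com/namepass: "['http_requests_total']" might yield:

```toml
# Hypothetical generated config; namepass limits which scraped
# metrics are passed on to the outputs. The metric name is illustrative.
[[inputs.prometheus]]
  urls = ["http://127.0.0.1:8086/metrics"]
  namepass = ['http_requests_total']
```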
Additional pod annotations that can be used to configure the Telegraf sidecar:

  • telegraf.influxdata.com/inputs : is used to configure custom inputs for telegraf
  • telegraf.influxdata.com/internal : is used to enable the telegraf "internal" input plugin
  • telegraf.influxdata.com/image : is used to configure telegraf image to be used for the telegraf sidecar container
  • telegraf.influxdata.com/class : configures which kind of class to use (classes are configured on the operator)
  • telegraf.influxdata.com/secret-env : allows adding secrets to the telegraf sidecar in the form of environment variables
  • telegraf.influxdata.com/env-configmapkeyref-<VARIABLE_NAME> : allows adding configmap key references to the telegraf sidecar in the form of an environment variable
  • telegraf.influxdata.com/env-fieldref-<VARIABLE_NAME> : allows adding fieldref references to the telegraf sidecar in the form of an environment variable
  • telegraf.influxdata.com/env-literal-<VARIABLE_NAME> : allows adding a literal to the telegraf sidecar in the form of an environment variable
  • telegraf.influxdata.com/env-secretkeyref-<VARIABLE_NAME> : allows adding secret key references to the telegraf sidecar in the form of an environment variable
  • telegraf.influxdata.com/requests-cpu : allows specifying resource requests for CPU
  • telegraf.influxdata.com/requests-memory : allows specifying resource requests for memory
  • telegraf.influxdata.com/limits-cpu : allows specifying resource limits for CPU
  • telegraf.influxdata.com/limits-memory : allows specifying resource limits for memory
  • telegraf.influxdata.com/istio-requests-cpu : allows specifying resource requests for CPU for istio sidecar
  • telegraf.influxdata.com/istio-requests-memory : allows specifying resource requests for memory for istio sidecar
  • telegraf.influxdata.com/istio-limits-cpu : allows specifying resource limits for CPU for istio sidecar
  • telegraf.influxdata.com/istio-limits-memory : allows specifying resource limits for memory for istio sidecar
  • telegraf.influxdata.com/volume-mounts : allows specifying extra volume mounts for the telegraf sidecar; the value should be JSON formatted, e.g. {"volumeName": "mountPath"}
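As an illustration of the resource annotations above (the request and limit values below are arbitrary examples):

```yaml
# Illustrative resource annotations for the telegraf sidecar;
# the actual values should be chosen per workload.
metadata:
  annotations:
    telegraf.influxdata.com/requests-cpu: "50m"
    telegraf.influxdata.com/requests-memory: "64Mi"
    telegraf.influxdata.com/limits-cpu: "200m"
    telegraf.influxdata.com/limits-memory: "128Mi"
```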
Example of additional options
apiVersion: apps/v1
kind: StatefulSet
  # ...
spec:
  template:
    metadata:
      labels:
        app: redis
      annotations:
        telegraf.influxdata.com/env-fieldref-NAMESPACE: metadata.namespace
        telegraf.influxdata.com/env-fieldref-APP: metadata.labels['app']
        telegraf.influxdata.com/env-configmapkeyref-REDIS_SERVER: configmap-name.redis.url
        telegraf.influxdata.com/env-secretkeyref-PASSWORD: app-secret.redis.password
        telegraf.influxdata.com/env-literal-VERSION: "1.0"
        telegraf.influxdata.com/volume-mounts: '{"xxx-3080bfa7-log":"/opt/xxx/log"}'
        telegraf.influxdata.com/inputs: |+
          [[inputs.redis]]
            servers = ["$REDIS_SERVER"]
            password = "$PASSWORD"
      # ...
    spec:
      containers:
      # ...

These annotations make additional environment variables available in the telegraf container, which can then be used, for example, in global tags, or in the additional input configuration provided via annotations as shown above.

stringData:
  basic: |+
    [global_tags]
      hostname = "$HOSTNAME"
      nodename = "$NODENAME"
      namespace = "$NAMESPACE"
      app = "$APP"
      version = "$VERSION"

Support

This operator is community supported. InfluxData provides no official support for its use.

Pull requests and issues are the responsibility of the project's moderator(s) which may include vetted individuals outside of the InfluxData organization. All issues should be reported and managed via GitHub (not via InfluxData's standard support process).

Contributing to telegraf-operator

Please read the CONTRIBUTING file for more details on how to get started with contributing to telegraf-operator.


Contributors

bondanthony, cbos, dependabot[bot], gitirabassi, goller, jaymebrd, jdstrand, lamebear, russorat, wojciechka, ymatsiuk, zak-pawel


telegraf-operator's Issues

Add support for storing configuration in configmap

Today, the operator stores the configuration of the telegraf sidecar in a secret. Our environments restrict who can create secrets, so it is preferable to use a configmap instead. As our configuration never has credentials stored in it, this would be safe for us to use. We would like a feature enhancement to enable storing configuration in a configmap instead.

Relevant URLs

Would work well with #122

What products and version are you using?

Telegraf Operator 1.3.8

LICENSE Mismatch

The LICENSE.md file is MIT, but the code comments in the files mention Apache 2

Not possible to connect to external Influxdb?

Our Influxdb instance is running in another setup with its own domain name. When adding the telegraf operator to a deployment, it seems that the sidecar can't connect to the external influxdb at all. And curling a random address from the pod gives no result (if that is relevant). Do I need something in place to solve this?

What products and version are you using?

v1.1.1

Add ability to provide per-namespace classes

The telegraf.influxdata.com/class annotation allows a user to choose which class out of the pre-configured classes telegraf-operator is configured with.

However, there are many reasonable uses of a k8s cluster that involve N teams sharing the same cluster, with one team handling the "infrastructure" part and N-1 teams handling each their own stuff in their own namespace(s). It's quite a pain when a new service needs a custom telegraf config and you need to update the central secret that contains all the classes definition for the telegraf operator.

Proposal

  1. A CRD called TelegrafClass, which contains a telegraf config. It also contains references to Secrets which can be mapped to env vars, so that the body of the config doesn't have to be a secret.
  2. A CRD called ClusterTelegrafClass which would be the non-namespaced version of (1) (similar to Role vs ClusterRole)
  3. A pod with a telegraf class name referenced in telegraf.influxdata.com/class will match a class defined in a TelegrafClass CR with the same name found in the same namespace or, if that's not found, it will match a ClusterTelegrafClass resource and if that's not found either, it will fallback to the current behaviour (the classes files mounted in the telegraf-operator container)

Write README.md

We should have a README.md that describes what this project is all about, and a recording from asciinema with an example of the inner workings

Access to classes from secrets fails

Seems there is an issue on accessing data from the classes file:

2020-05-14T13:01:38.674Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": ":8080"}
2020-05-14T13:01:38.674Z        INFO    setup.entrypoint        setting up webhook server
2020-05-14T13:01:38.674Z        INFO    setup.entrypoint        registering webhooks to the webhook server
2020-05-14T13:01:38.674Z        INFO    setup.podInjector       validating class data from directory /etc/telegraf-operator
2020-05-14T13:01:38.674Z        INFO    setup.podInjector       unable to retrieve class data from file ..data: read /etc/telegraf-operator/..data: is a directory

My values.yml file is as below:

classes:
  secretName: "telegraf-operator-classes"
  default: "infra"
  data:
    infra: |+
      [[outputs.influxdb_v2]]
        urls = ["<REDACTED>"]
        token = "<REDACTED>"
        organization = "<REDACTED>"
        bucket = "<REDACTED>"
      [[outputs.file]]
        files = ["stdout"]
      [global_tags]
        hostname = "$HOSTNAME"
        nodename = "$NODENAME"

This lead to:

# kubectl get secret/telegraf-operator-classes -o yaml         
apiVersion: v1
data:
  infra: <REDACTED>
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: telegraf-operator
    meta.helm.sh/release-namespace: telegraf-operator
  creationTimestamp: "2020-05-14T13:01:32Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: telegraf-operator-classes
  namespace: telegraf-operator
  resourceVersion: "20994828901"
  selfLink: /api/v1/namespaces/telegraf-operator/secrets/telegraf-operator-classes
  uid: d8a8f3d0-6471-4a02-8a26-864ea26a952b
type: Opaque

Ok so it should try to read either (same file, 1st is a symlink to second) :

  • /etc/telegraf-operator/infra
  • /etc/telegraf-operator/..data/infra

Ok, looking at the code: https://github.com/influxdata/telegraf-operator/blob/master/class_data.go#L45-L58

I guess that filtering on directories is not enough; it should also exclude the ..data link from the results to parse

What you have in the pod:

ls /etc/telegraf-operator/ -al
total 4
drwxrwxrwt    3 root     root           100 May 14 14:13 .
drwxr-xr-x    1 root     root          4096 May 14 14:13 ..
drwxr-xr-x    2 root     root            60 May 14 14:13 ..2020_05_14_14_13_56.563495811
lrwxrwxrwx    1 root     root            31 May 14 14:13 ..data -> ..2020_05_14_14_13_56.563495811
lrwxrwxrwx    1 root     root            12 May 14 14:13 infra -> ..data/infra

So you should add an exclusion for ..data in the list of non-directories. It would also avoid files being parsed twice (once in the root of /etc/telegraf-operator and once in /etc/telegraf-operator/..data/).
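The mounted-secret layout and the proposed exclusion can be sketched in shell (the directory names mimic the listing above):

```shell
# Recreate the layout Kubernetes produces when mounting a Secret:
# a timestamped payload directory, a ..data symlink pointing to it,
# and per-key symlinks at the top level.
dir=$(mktemp -d)
mkdir "$dir/..2020_05_14_14_13_56.563495811"
echo '[[outputs.file]]' > "$dir/..2020_05_14_14_13_56.563495811/infra"
ln -s "..2020_05_14_14_13_56.563495811" "$dir/..data"
ln -s "..data/infra" "$dir/infra"

# Listing only top-level regular files (following symlinks) whose names
# do not start with ".." yields each class exactly once:
find -L "$dir" -maxdepth 1 -type f ! -name '..*' -exec basename {} \;
```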

What products and version are you using?
helm list                                   
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
telegraf-operator       telegraf-operator       2               2020-05-14 15:15:13.41696274 +0200 CEST deployed        telegraf-operator-1.0.0 v1.0.6

Add documentation on how to develop telegraf-operator locally

Telegraf-operator is not a complex piece of software, but it needs to integrate with a bunch of things, which are not obvious at first

A CONTRIBUTING.md should be created with basic commands on how to run the whole thing locally with either kind or minikube

Issue(telegraf-operator): sidecar pods runs as root and should not

When adding a securityContext with runAsNonRoot: true to a deployment, telegraf will prevent the deployment from coming up, since it runs as root; the Dockerfile would need to run as a non-root user. This means that in any environment using the telegraf operator, all sidecar containers are running as root, which is very bad.

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Warning  Failed     9m50s (x10 over 11m)  kubelet            Error: container has runAsNonRoot and image will run as root
  Normal   Pulled     82s (x49 over 11m)    kubelet            Container image "docker.io/library/telegraf:1.14" already present on machine

Add support for mounted secrets

With the telegraf 1.27 release, there is support for reading credentials from secrets mounted on the file system. influxdata/telegraf#13035. We would like to be able to utilize a kubernetes secret for credentials, which would allow the configuration to be stored in a configmap rather than a secret. (Another issue will be entered to see if we can support configuration stored in a configmap.)

Relevant URLs

influxdata/telegraf#13035

What products and version are you using?

Telegraf operator 1.3.8

Ability to set Basic Auth credentials for the default prometheus input plugin

At the moment I can't figure out a way to set the username and password for basic authentication for the default prometheus input plugin.

Having a look at:
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus

It seems that this is possible by setting username and password, however this does not seem currently possible via pod annotations.

To work around this I currently redefine the entire prometheus input via the annotations as follows:

podAnnotations:  
  telegraf.influxdata.com/class: "default"    
  telegraf.influxdata.com/inputs: |+
    [[inputs.prometheus]]
      urls = ["http://127.0.0.1:8080/metrics"]
      username = "MyUsername"
      password = "MyPassword"

However this results in two prometheus inputs in the /etc/telegraf/telegraf.conf file as below.

[[inputs.prometheus]]
  urls = ["http://127.0.0.1:8080/metrics"]

[[inputs.prometheus]]
  urls = ["http://127.0.0.1:8080/metrics"]
  username = "MyUsername"
  password = "MyPassword"
...

Of course the first / default one, which I cannot disable or set basic auth credentials on, spews out 401 errors.

So either the username and password should be settable via annotations, or just being able to disable the default generated input would work.

Deprecation warnings

A bunch of warnings are displayed during deployment:

❯ helm upgrade --install telegraf-operator influxdata/telegraf-operator --set replicaCount=1
Release "telegraf-operator" does not exist. Installing it now.
W0521 16:34:59.774738  383781 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0521 16:34:59.776759  383781 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W0521 16:34:59.803823  383781 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0521 16:34:59.812655  383781 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
NAME: telegraf-operator
LAST DEPLOYED: Fri May 21 16:34:59 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"archive", BuildDate:"1980-01-01T00:00:00Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
What products and version are you using?

v1.1.1 / master

I can't find how to mount a volume?

Describe the issue here.

I need to mount a directory from my own container into the telegraf container in the Pod via an emptyDir volume, but I can't find the annotations for this.

Is there a suitable annotation for this use case?

What products and version are you using?

latest version

Weird behaviour of cpu&memory limits&requests?

I have these annotations on a deployment:

        telegraf.influxdata.com/requests-cpu: 100m
        telegraf.influxdata.com/requests-memory: 500Mi
        telegraf.influxdata.com/limits-cpu: 500m
        telegraf.influxdata.com/limits-memory: 500Mi

And this leads to this pod description:

...
  telegraf:
    Container ID:  containerd://695348bbc7dfbdfaa1c7a99058226a9034b084c02bb6fca98f65c87a3dee0510
    Image:         docker.io/library/telegraf:1.19
    Image ID:      docker.io/library/telegraf@sha256:53f70f9e91c21c110912d622b7a92731e848c6288591d95564c386aa1c61c4e5
    Port:          <none>
    Host Port:     <none>
    Command:
      telegraf
      --config
      /etc/telegraf/telegraf.conf
      --watch-config
      inotify
    State:          Running
      Started:      Tue, 10 May 2022 12:14:09 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                100m
      ephemeral-storage:  1Gi
      memory:             500Mi
    Requests:
      cpu:                100m
      ephemeral-storage:  1Gi
      memory:             500Mi
    Environment:
...

Note the limits there. They don't seem to match what I specified in my annotations. Is this to be expected?

Telegraf secret already exist

Hi guys,
I'm just trying to look around with telegraf-operator and I hit this issue: the telegraf operator logs that the secret created by the operator already exists.

It happens only in the :latest version, not v1.1.1

2021-01-07T17:58:40.519Z	DEBUG	controller-runtime.webhook.webhooks	received request	{"webhook": "/mutate-v1-pod", "UID": "4a031063-0907-4ade-b42d-58d8d89c1e86", "kind": "/v1, Kind=Pod", "resource": {"group":"","version":"v1","resource":"pods"}}
2021-01-07T17:58:40.519Z	INFO	setup.podInjector	adding sidecar container
2021-01-07T17:58:40.549Z	DEBUG	controller-runtime.webhook.webhooks	wrote response	{"webhook": "/mutate-v1-pod", "UID": "4a031063-0907-4ade-b42d-58d8d89c1e86", "allowed": true, "result": {}, "resultError": "got runtime.Object without object metadata: &Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,RemainingItemCount:nil,},Status:,Message:,Reason:,Details:nil,Code:200,}"}
2021-01-07T17:58:40.554Z	DEBUG	controller-runtime.webhook.webhooks	received request	{"webhook": "/mutate-v1-pod", "UID": "d161bfe4-5f42-4321-88fb-6708c615de09", "kind": "/v1, Kind=Pod", "resource": {"group":"","version":"v1","resource":"pods"}}
2021-01-07T17:58:40.555Z	INFO	setup.podInjector	adding sidecar container
2021-01-07T17:58:40.594Z	ERROR	setup.podInjector	unable to create secret	{"error": "secrets \"telegraf-config-redis-master-0\" already exists"}
github.com/go-logr/zapr.(*zapLogger).Error
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
main.(*podInjector).Handle
	/workspace/handler.go:114
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).Handle
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/webhook.go:135
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).ServeHTTP
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/http.go:87
sigs.k8s.io/controller-runtime/pkg/webhook.instrumentedHook.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:117
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2007
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2387
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2802
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:1890

Reproduce

I deployed dev operator using standard example from https://raw.githubusercontent.com/influxdata/telegraf-operator/master/deploy/dev.yml

I have redis deployed from helm, with the following values.yml:

## Global Docker image parameters

## Cluster settings
cluster:
  enabled: false

master:
  podAnnotations:
    telegraf.influxdata.com/inputs: |+
      [[inputs.redis]]
        servers = ["tcp://localhost:6379"]
  persistence:
    enabled: false

kubectl create namespace redis
helm repo add "stable" "https://charts.helm.sh/stable"
helm -n redis install -f values.yml stable/redis

I tried it with the mysql image as well, with the same problem.

Maybe I'm doing something wrong?

Add support for Telegraf's ability to pull remote configurations

Describe the issue here:
InfluxDB 2.X users will hopefully be taking advantage of the feature that allows for storing Telegraf configurations centrally in their InfluxDB instance. Telegraf currently supports pulling its configuration from a URL so it can grab these configs.

Since this is tooling designed to automate the deployment and configuration of Telegraf, I imagine supporting this form of pulling configuration will be desired for the operator as well.

Obviously this operator is in its very early stages so I have no data to back up this hypothesis but figured I'd log this in case others felt it would be a necessary addition as well!

Support istio integration

With Istio 1.5 and newer, Mixer is removed from the equation and we need to scrape all pods running the Istio sidecar at port 15090 with path /stats/prometheus

To make this as automatic as possible, we need to make some changes:

  • make telegraf-operator idempotent to multiple requests for the same pod, where we might already have injected the sidecar and created the secret. This is needed because we have to use a MutatingWebhookConfiguration with reinvocationPolicy: IfNeeded: Kubernetes provides no ordering between mutating webhooks, so we wouldn't know whether the telegraf-operator webhook ran before or after the Istio webhook
  • add 2 more flags: --enable-istio-injection and --istio-output-class. These would enable the functionality described below. If the specified class doesn't exist, this feature will not be enabled. It is turned off by default.
  • if the pod has the annotation sidecar.istio.io/status, then we create a container called telegraf-istio which has only 2 inputs: internal and prometheus, scraping the endpoint described above. This container will have minimal requests and limits, and the secret name used will be of the form telegraf-istio-<pod_name>

Scraping istiod, ingressgateway and egressgateway will follow the standard telegraf-operator workflow
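A sketch of the configuration the proposed telegraf-istio container might render, assuming the port and path described above:

```toml
# Sketch only: inputs for the proposed telegraf-istio sidecar.
[[inputs.internal]]

# Scrape the istio-proxy merged metrics endpoint.
[[inputs.prometheus]]
  urls = ["http://localhost:15090/stats/prometheus"]
```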

cc @wojciechka @BondAnthony

Specify custom Certificate Authority

We export metrics to an internal proxy that uses a self-hosted CA. Is there a way to specify a custom CA for the telegraf sidecar to use?

What products and version are you using?

Telegraf 1.22.4

Container Image: Latest is out of date

When a release is made, the latest image is not updated. Latest currently points to image v1.0.6; I think we need to revisit the image build process and settle on a multi-version tag process.

Maybe something like this?

image_templates:
    - "quay.io/influxdb/telegraf-operator:latest"
    - "quay.io/influxdb/telegraf-operator:{{ .Tag }}"
    - "quay.io/influxdb/telegraf-operator:v{{ .Major }}"
    - "quay.io/influxdb/telegraf-operator:v{{ .Major }}.{{ .Minor }}"

Allow removing resource requests/limits on sidecar

In my case I'd like to remove the CPU resource limit on the telegraf sidecar. If I leave out the annotation, it will use the default value. It would be nice if there were either 1) no resource requests/limits on the sidecar, or 2) a way to disable setting the default value
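For overriding (though not removing) the defaults, the operator's resource annotations can be used; a sketch, assuming the limits/requests annotation names from the operator README:

```yaml
annotations:
  # Override the sidecar's default resources; there is currently no
  # value that removes a limit entirely, which is what this issue asks for.
  telegraf.influxdata.com/limits-cpu: "500m"
  telegraf.influxdata.com/limits-memory: "300Mi"
  telegraf.influxdata.com/requests-cpu: "50m"
  telegraf.influxdata.com/requests-memory: "100Mi"
```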

Support multiple Telegraf into the same pod

Is it possible to support multiple Telegraf instances in the same pod?
Currently, I would like to use Telegraf to collect both metrics and logs from one pod.
But if I write one input for metrics and another input for logs in the same Telegraf config, the collected data will be mixed into the same output.

Or does a solution for this situation already exist?

Support setting global tags from labels, annotations, etc

I'd like to be able to set global tags based on annotations/labels/k8s namespace/etc. More specifically, I'd like to define a class like this:

classes:
  data:
    foo: |+
      [global_tags]
        hostname = "$HOSTNAME"
        nodename = "$NODENAME"
        app = "$KUBERNETES_LABEL_APP"

To be clear, obviously I can already define a class like that, but it won't work because there's no way to get the environment variable $KUBERNETES_LABEL_APP populated in the Telegraf container. If I was creating the container myself I'd be able to do something like,

env:
  - name: KUBERNETES_LABEL_APP
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['app']

But there's no way to set up anything like that when the container is injected automatically by the Operator.

As an alternative, being able to just specify telegraf.influxdata.com/include-labels: 'app,whatever' or telegraf.influxdata.com/tags: 'app=foo,whatever=bar' as an annotation and have the Operator automatically construct...

[global_tags]
app = "foo"
whatever = "bar"

...would also be pretty good, but I get the feeling that could get awkward to implement since you can't define the same table multiple times in a TOML document, so if the class defines a [global_tags] table as well then they'd clash.

Sidecar not injecting on AKS

I installed the official helm release of telegraf-operator and annotated one of my deployments with "telegraf.influxdata.com/class: infra", hoping a sidecar would be injected. But it's not working as expected.

Upon checking the telegraf-operator logs, I am seeing the error logs below.

2023-01-31T00:39:55.238Z        DEBUG   controller-runtime.webhook.webhooks     wrote response  {"webhook": "/mutate-v1-pod", "code": 200, "reason": "telegraf-injector has no power over this pod", "UID": "a79cbe67-ce7d-4ba8-adf1-8b79c3e713b8", "allowed": true}
2023-01-31T00:39:56.997Z        DEBUG   controller-runtime.webhook.webhooks     received request        {"webhook": "/mutate-v1-pod", "UID": "478f04ec-4f86-408e-bcba-9eba440a65a6", "kind": "/v1, Kind=Pod", "resource": {"group":"","version":"v1","resource":"pods"}}
2023-01-31T00:39:56.999Z        INFO    setup.inject-handler    Deleting secret=telegraf-config-clustercheck-8667d796b-ccxkm/agys-stay
2023-01-31T00:39:57.029Z        INFO    setup.inject-handler    secret=telegraf-config-clustercheck-8667d796b-ccxkm/agys-stay error:secrets "telegraf-config-clustercheck-8667d796b-ccxkm" not found
2023-01-31T00:39:57.029Z        INFO    setup.inject-handler    Deleting secret=telegraf-istio-config-clustercheck-8667d796b-ccxkm/agys-stay
2023-01-31T00:39:57.034Z        INFO    setup.inject-handler    secret=telegraf-istio-config-clustercheck-8667d796b-ccxkm/agys-stay error:secrets "telegraf-istio-config-clustercheck-8667d796b-ccxkm" not found
2023-01-31T00:39:57.034Z        DEBUG   controller-runtime.webhook.webhooks     wrote response  {"webhook": "/mutate-v1-pod", "code": 200, "reason": "telegraf-injector couldn't delete one or more secrets", "UID": "478f04ec-4f86-408e-bcba-9eba440a65a6", "allowed": true}
2023-01-31T00:39:58.113Z        DEBUG   controller-runtime.webhook.webhooks     received request        {"webhook": "/mutate-v1-pod", "UID": "b49319e0-672c-4e45-a41e-d35553131964", "kind": "/v1, Kind=Pod", "resource": {"group":"","version":"v1","resource":"pods"}}
2023-01-31T00:39:58.113Z        INFO    setup.inject-handler    Deleting secret=telegraf-config-clustercheck-8667d796b-ccxkm/agys-stay
2023-01-31T00:39:58.118Z        INFO    setup.inject-handler    secret=telegraf-config-clustercheck-8667d796b-ccxkm/agys-stay error:secrets "telegraf-config-clustercheck-8667d796b-ccxkm" not found
2023-01-31T00:39:58.118Z        INFO    setup.inject-handler    Deleting secret=telegraf-istio-config-clustercheck-8667d796b-ccxkm/agys-stay
2023-01-31T00:39:58.128Z        INFO    setup.inject-handler    secret=telegraf-istio-config-clustercheck-8667d796b-ccxkm/agys-stay error:secrets "telegraf-istio-config-clustercheck-8667d796b-ccxkm" not found
2023-01-31T00:39:58.128Z        DEBUG   controller-runtime.webhook.webhooks     wrote response  {"webhook": "/mutate-v1-pod", "code": 200, "reason": "telegraf-injector couldn't delete one or more secrets", "UID": "b49319e0-672c-4e45-a41e-d35553131964", "allowed": true}
2023-01-31T00:39:58.146Z        DEBUG   controller-runtime.webhook.webhooks     received request        {"webhook": "/mutate-v1-pod", "UID": "da959bdf-8a50-42fd-bf49-aee13f238268", "kind": "/v1, Kind=Pod", "resource": {"group":"","version":"v1","resource":"pods"}}
2023-01-31T00:39:58.146Z        INFO    setup.inject-handler    Deleting secret=telegraf-config-clustercheck-8667d796b-ccxkm/agys-stay
2023-01-31T00:39:58.152Z        INFO    setup.inject-handler    secret=telegraf-config-clustercheck-8667d796b-ccxkm/agys-stay error:secrets "telegraf-config-clustercheck-8667d796b-ccxkm" not found
2023-01-31T00:39:58.152Z        INFO    setup.inject-handler    Deleting secret=telegraf-istio-config-clustercheck-8667d796b-ccxkm/agys-stay
2023-01-31T00:39:58.157Z        INFO    setup.inject-handler    secret=telegraf-istio-config-clustercheck-8667d796b-ccxkm/agys-stay error:secrets "telegraf-istio-config-clustercheck-8667d796b-ccxkm" not found
2023-01-31T00:39:58.157Z        DEBUG   controller-runtime.webhook.webhooks     wrote response  {"webhook": "/mutate-v1-pod", "code": 200, "reason": "telegraf-injector couldn't delete one or more secrets", "UID": "da959bdf-8a50-42fd-bf49-aee13f238268", "allowed": true}

I read some articles online and tried adding a ClusterRoleBinding and service account; it didn't help.

I also tried adding the below snippet to the "telegraf-operator-classes" secret, which also doesn't seem to help. Need help and advice.

[[inputs.kubernetes]]
      url = "https://kubernetes.default.svc"
      bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
      insecure_skip_verify = true

Telegraf Operator should use a custom resource type instead of Secret

Running telegraf operator currently requires a cluster role with full permissions for secrets for any targeted namespaces within a k8s cluster.

The issues:

  1. Requiring full permissions for all secrets in all namespaces is not acceptable from a data-security perspective; these permissions are far beyond what the application should require
  2. Programmatically generated objects pollute the user view of secrets

A custom, namespaced resource type like TelegrafConfig would keep this application data out of the user view and provide for much more reasonable permissioning for telegraf-operator
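A hypothetical sketch of what such a namespaced resource could look like (the apiVersion, kind, and fields below are invented for illustration, not an existing API):

```yaml
# Hypothetical custom resource -- not implemented by telegraf-operator today.
apiVersion: telegraf.influxdata.com/v1alpha1
kind: TelegrafConfig
metadata:
  name: myapp-telegraf
  namespace: myapp
spec:
  config: |
    [[inputs.prometheus]]
      urls = ["http://127.0.0.1:8080/metrics"]
```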

How to set `metric_version` on `inputs.prometheus`

I am looking for how to set metric_version=2 on the generated config for inputs.prometheus.

Current output:

[[inputs.prometheus]]
  urls = ["http://127.0.0.1:8086/metrics"]
  interval = "30s"

Desired output:

[[inputs.prometheus]]
  urls = ["http://127.0.0.1:8086/metrics"]
  interval = "30s"
  metric_version=2
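Assuming the telegraf.influxdata.com/metric-version annotation described in the operator README, the desired output above should be achievable with the following pod annotations (note the quoted value, since Kubernetes annotation values must be strings):

```yaml
annotations:
  telegraf.influxdata.com/port: "8086"
  telegraf.influxdata.com/interval: "30s"
  # Quoted: annotation values must be strings.
  telegraf.influxdata.com/metric-version: "2"
```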

InfluxDBv2 authorization fails if docker secrets are used

Hi folks,

As soon as I switch to Docker secrets (file), Telegraf authorization to InfluxDB fails. If I hardcode the token into the config file, it works.

The bind-mount of the Docker secret is (IMO) correct:

alexander@ubuntu:~/home-automation$ docker exec -it telegraf sh
# telegraf --version
Telegraf 1.29.2 (git: HEAD@d92d7073)
# cat /run/secrets/telegraf_influxdb_token
jCSA****Ong==
# 

Relevant parts of telegraf config:

# Secret-store to access Docker Secrets
[[secretstores.docker]]
  ## Unique identifier for the secretstore.
  ## This id can later be used in plugins to reference the secrets
  ## in this secret-store via @{<id>:<secret_key>} (mandatory)
  id = "docker_secretstore"

and

# Configuration for sending metrics to InfluxDB 2.0
[[outputs.influxdb_v2]]
  urls = ["http://influx.zimmermann.XXX:8086"]
  token = "@{docker_secretstore:telegraf_influxdb_token}"
  #token = "jCSA****Ong=="
  organization = "zimmermann.XXX"
  bucket = "telegraf/autogen"

Any idea?

Missing telegraf sidecar after pod destroy

When telegraf is running as a sidecar inside a pod (our app container + telegraf), and we destroy the pod (for example using k9s), a new pod is created but there is no telegraf sidecar anymore. The same happens if we change the number of replicas to 0 and then back to 1 or more (sometimes required for DB operations).
Using version 1.3.10

Add `global_tags` annotation

Oftentimes you don't need a brand new class. All you want is to add a common tag to all the metrics emitted by a given pod.

Proposal

annotations:
  telegraf.influxdata.com/global-tag-literal-region: us-east1
  telegraf.influxdata.com/global-tag-fieldref-ns: metadata.namespace
  telegraf.influxdata.com/global-tag-configmapkeyref-foo: bar.baz

equivalent to use this generated config:

    [global_tags]
      region = "us-east1"
      ns = "myns"
      foo = "quz"

(the implementation may choose to use some intermediate env vars if so desired)

Support Default Environment Variables on Sidecar

It would be great if there was a way to support a default set of environment variables for sidecar containers.

For example, I would like to add namespace as a global tag, which requires that the $NAMESPACE environment variable exists. Currently the only way to support this is to ensure that every pod in the cluster is deployed with a telegraf.influxdata.com/env-fieldref-NAMESPACE: metadata.namespace annotation. If an application developer fails to include this, then all their metrics are instead tagged with namespace = $NAMESPACE.

It would be great if there was a way to configure some default variables on a global level so that some global tags that require dynamic values like fieldref can be supported at a platform level rather than just hoping devs remember to add the right annotation.

Should metric-version be a string?

I'm using the telegraf operator to get prometheus metrics from my pod into influxdb. With telegraf.influxdata.com/metric-version: "2" things work, but with telegraf.influxdata.com/metric-version: 2 (as the readme currently says), I get an error on kubectl apply -f. The error I get:

cannot convert int64 to string

Version info:

$ kubectl version -o yaml
clientVersion:
  buildDate: "2022-03-16T15:58:47Z"
  compiler: gc
  gitCommit: c285e781331a3785a7f436042c65c5641ce8a9e9
  gitTreeState: clean
  gitVersion: v1.23.5
  goVersion: go1.17.8
  major: "1"
  minor: "23"
  platform: linux/amd64
serverVersion:
  buildDate: "2022-01-25T21:19:12Z"
  compiler: gc
  gitCommit: 816c97ab8cff8a1c72eccca1026f7820e93e0d25
  gitTreeState: clean
  gitVersion: v1.23.3
  goVersion: go1.17.6
  major: "1"
  minor: "23"
  platform: linux/amd64

Using dev.yml from v1.3.6 of the telegraf-operator.

Multiple classes for the single pod

There is a Kafka node. Kafka has its own metrics, which can be exposed with a kafka class (the number of consumers etc.), and it also has standard JVM metrics, which can be observed with a jvm class. The jvm class can be reused for other JVM-based pods.
I'm thinking of something like telegraf.influxdata.com/classes: jvm,kafka

Support `name_override` for `metric_version: 2`

When using metric_version: 2 with the prometheus plugin, all metrics are stored in InfluxDB under the measurement name prometheus (see here).

The operator should support a simple method of overriding this name for telegraf sidecars via the name_override property on the prometheus input plugin. I suggest adding an annotation:

telegraf.influxdata.com/metric-v2-name-override

Currently the only way to achieve this is to use the telegraf.influxdata.com/inputs annotation to configure the prometheus plugin and ignore the other annotations.
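The workaround mentioned above, sketched as a pod annotation (the measurement name myapp is an example; name_override is a standard Telegraf input option):

```yaml
annotations:
  telegraf.influxdata.com/inputs: |+
    [[inputs.prometheus]]
      urls = ["http://127.0.0.1:8080/metrics"]
      metric_version = 2
      name_override = "myapp"
```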

Telegraf-istio sidecar is killed with OOM

We use --enable-istio-injection=true to inject sidecars for Envoy instances. The default memory limit is 200Mi. The OOM killer kills the sidecar approximately every 10 minutes, and the sidecar instance fails with it.
For workloads we can tune limits with metadata annotations. It would be good to support annotations on workloads to tune resources for the telegraf-istio sidecar as well. Of course, it would be even better if we could tune memory/CPU/ephemeral storage globally in the Istio-operator config (not sure if that's possible).

Telegraf Operator prevents pods creation when the secret already exists

If a secret for a pod already exists, an exception is raised and the pod is not created (with or without the sidecar)

telegraf-operator logs

2020-10-09T12:44:53.473Z        ERROR   setup.podInjector       unable to create secret {"error": "secrets \"telegraf-config-demo-redis-bitnami-slave-0\" already exists"}
github.com/go-logr/zapr.(*zapLogger).Error
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
main.(*podInjector).Handle
        /workspace/handler.go:123
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).Handle
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/webhook.go:135
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).ServeHTTP
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/http.go:87
sigs.k8s.io/controller-runtime/pkg/webhook.instrumentedHook.func1
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/server.go:117
net/http.HandlerFunc.ServeHTTP
        /usr/local/go/src/net/http/server.go:2007
net/http.(*ServeMux).ServeHTTP
        /usr/local/go/src/net/http/server.go:2387
net/http.serverHandler.ServeHTTP
        /usr/local/go/src/net/http/server.go:2802
net/http.(*conn).serve
        /usr/local/go/src/net/http/server.go:1890

statefulset events

Events:
  Type     Reason        Age                From                    Message
  ----     ------        ----               ----                    -------
  Warning  FailedCreate  2s (x14 over 43s)  statefulset-controller  create Pod demo-redis-bitnami-slave-0 in StatefulSet demo-redis-bitnami-slave failed error: admission webhook "telegraf.influxdata.com" denied the request: secrets "telegraf-config-demo-redis-bitnami-slave-0" already exists

I expect the secret to be automatically reloaded (if it was created by telegraf-operator) or the pods to run without the sidecar

Relevant URLs

N/A

What products and version are you using?
  • telegraf-operator helm chart - 1.1.4
  • telegraf-operator - quay.io/influxdb/telegraf-operator:v1.1.0

How to enable output prometheus

Nothing is listening on port 9273.

classes:
  secretName: "telegraf-operator-classes"
  default: "infra"
  data:
    infra: |
      [[outputs.prometheus_client]]
        listen = ":9273"
        string_as_label = false

When OLM updates the telegraf-operator version, the classes secret data is reset to null

[Bug] After OLM updates the telegraf-operator version, the classes secret's data is set to an empty value.

bcops@test-deployment:~$ kubectl -n operators get secret classes 
NAME      TYPE     DATA   AGE
classes   Opaque   3      176m

This causes the telegraf sidecar container injection to fail.

What products and version are you using?

telegraf-operator 1.3.9 -> 1.3.10

Make `internal` enabled by default with flag

Right now, if you want to add the internal input plugin, you need to add the annotation to every single deployment.

I'd like to add a deployment flag like --enable-default-internal-plugin=true so that every telegraf has the internal plugin enabled by default

telegraf.influxdata.com/interval option does not work

Hello, we are using the telegraf-operator and telegraf sidecar image versions below

telegraf-operator:1.1.1
telegraf:1.14.2-alpine

annotation we added to the pod below

                      telegraf.influxdata.com/inputs:
                        [[inputs.prometheus]]
                          urls = ["http://localhost:9153/metrics"]
                      telegraf.influxdata.com/interval: 30s

but the interval parameter is ignored and metrics collection runs with the 10s default interval.
Config output from the sidecar container:

/ # cat etc/telegraf/telegraf.conf

[[inputs.internal]]

[[inputs.prometheus]]
  urls = ["http://localhost:9153/metrics"]

[[outputs.influxdb]]
  urls = ["http://dev.influxdb-data.service.consulzone:8086"]
  database = "infra-metrics"
  retention_policy = "7d"
[global_tags]
  cluster = "dev-general"

InfluxDB also shows metric collection every 10 seconds instead of 30s. Can you suggest why the interval parameter is ignored?
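For context (not a confirmed diagnosis of the operator behavior): Telegraf's default collection interval is 10s, and the global interval lives in the [agent] table, which is absent from the generated config shown above. A config that would actually collect every 30s looks like:

```toml
# Global collection interval; Telegraf defaults to 10s when this is absent.
[agent]
  interval = "30s"

[[inputs.prometheus]]
  urls = ["http://localhost:9153/metrics"]
  ## interval can also be overridden per input:
  # interval = "30s"
```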
