dash0hq / otelbin

Web-based tool to facilitate OpenTelemetry collector configuration editing and verification

Home Page: https://www.otelbin.io

License: Apache License 2.0

JavaScript 8.06% TypeScript 85.82% CSS 5.28% Dockerfile 0.72% Shell 0.12%
logging metrics opentelemetry opentelemetry-collector tracing editor pipeline observability otel

otelbin's Introduction

OTelBin

OTelBin is a configuration tool for OpenTelemetry collector pipelines.

Introduction · Features · Badges · Tech Stack · Contributing · License


Introduction

OTelBin is a configuration tool to help you get the most out of the OpenTelemetry Collector.

OTelBin is hosted with ❤️ by the Dash0 people at otelbin.io.

Features

OTelBin enables you to:

  1. Visualize your configured OpenTelemetry Collector pipelines as swimlanes
  2. Validate your configuration and highlight errors
  3. Share your OpenTelemetry Collector configurations online (requires login with a GitHub or Google account)

Badges

Use shields.io-powered badges within documentation to reference a collector configuration.

OpenTelemetry collector configuration on OTelBin

  • URL
    https://www.otelbin.io/badges/collector-config
    
  • Markdown
    ![OpenTelemetry collector configuration on OTelBin](https://www.otelbin.io/badges/collector-config)
  • HTML
    <img src="https://www.otelbin.io/badges/collector-config" alt="OpenTelemetry collector configuration on OTelBin">
    

Tech Stack

Contributing

We love our contributors! Here's how you can contribute:

Acknowledgements

OTelBin makes use of the output of cfgmetadatagen and specifically a post-processed version of it that is part of nimbushq/otel-validator.

otelbin's People

Contributors

bripkens · dependabot[bot] · github-actions[bot] · haoqixu · marcelbirkner · mehrnooshakbarizadeh · mirko-novakovic · mmanciop · roshan-gh · samueljrz · snyk-bot


otelbin's Issues

Parse Otel config from K8s ConfigMap

Why

Since all our Otel components run in Kubernetes, the config is stored in K8s ConfigMaps. This is also a very common setup among customers. It would be nice if a user could copy/paste the K8s ConfigMap into OTelBin and have it automatically extract the actual config data.

Currently I need to copy/paste the ConfigMap into an editor, manually extract the config, and fix the indentation of the YAML file. This is a bit cumbersome.
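
A minimal sketch of what the extraction could look like, assuming the pasted text is parsed with the yaml package and that the collector config sits under a single data key (the opentelemetry-collector Helm chart uses relay, as in the examples below); the helper name and heuristics are hypothetical:

import { parse } from "yaml";

// Hypothetical helper: if the pasted text is a Kubernetes ConfigMap, return the
// embedded collector config; otherwise hand the text back unchanged.
export function extractCollectorConfig(pasted: string): string {
  let doc: unknown;
  try {
    doc = parse(pasted);
  } catch {
    return pasted; // not valid YAML, let the normal validation report it
  }
  const cm = doc as { kind?: string; data?: Record<string, unknown> };
  if (cm?.kind !== "ConfigMap" || !cm.data) {
    return pasted;
  }
  // The opentelemetry-collector Helm chart stores the config under "relay";
  // fall back to the first data entry otherwise.
  const candidate = cm.data["relay"] ?? Object.values(cm.data)[0];
  return typeof candidate === "string" ? candidate : pasted;
}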

Here is an example of a K8s ConfigMap for an Otel component:

Examples

Minimal Configmap

apiVersion: v1
data:
  relay: |
    # Learn more about the OpenTelemetry Collector via
    # https://opentelemetry.io/docs/collector/
    
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    
    processors:
      batch:
    
    exporters:
      otlp:
        endpoint: otelcol:4317
    
    extensions:
      health_check:
      pprof:
      zpages:
    
    service:
      extensions: [health_check, pprof, zpages]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: dummy-release-name
    meta.helm.sh/release-namespace: dummy-release-name
  creationTimestamp: "2023-09-08T08:58:14Z"
  labels:
    app.kubernetes.io/instance: dummy-release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dummy-name
    app.kubernetes.io/version: 0.77.0
    helm.sh/chart: opentelemetry-collector-0.57.1
  name: receiver
  namespace: dummy-release-name
  resourceVersion: "25387039"
  uid: 77016485-f2c2-489c-a4f1-be41834bc9f7

Larger Configmap

apiVersion: v1
data:
  relay: |
    connectors:
      spanmetrics: {}
    exporters:
      logging: {}
      otlp:
        endpoint: 'opentelemetry-demo-jaeger-collector:4317'
        tls:
          insecure: true
        auth:
          authenticator: bearertokenauth/withscheme
        endpoint: opentelemetry-demo.eu-west-1.aws.example.com:4317
      prometheus:
        enable_open_metrics: true
        endpoint: 0.0.0.0:9464
        resource_to_telemetry_conversion:
          enabled: true
    extensions:
      bearertokenauth/withscheme:
        scheme: Bearer
        token: auth_eFS0iwMVqmp4MmD7jnCYzQxv5E
      health_check: {}
      memory_ballast:
        size_in_percentage: 40
    processors:
      batch: {}
      filter/ottl:
        error_mode: ignore
        metrics:
          metric:
          - name == "queueSize"
      k8sattributes:
        extract:
          metadata:
          - k8s.namespace.name
          - k8s.deployment.name
          - k8s.statefulset.name
          - k8s.daemonset.name
          - k8s.cronjob.name
          - k8s.job.name
        filter:
          node_from_env_var: K8S_NODE_NAME
        passthrough: false
        pod_association:
        - sources:
          - from: resource_attribute
            name: k8s.pod.ip
        - sources:
          - from: resource_attribute
            name: k8s.pod.uid
        - sources:
          - from: connection
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25
      resourcedetection:
        detectors:
        - ec2
      transform:
        metric_statements:
        - context: metric
          statements:
          - set(description, "Measures the duration of inbound HTTP requests") where name
            == "http.server.duration"
    receivers:
      jaeger:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:14250
          thrift_compact:
            endpoint: ${env:MY_POD_IP}:6831
          thrift_http:
            endpoint: ${env:MY_POD_IP}:14268
      kubeletstats:
        auth_type: serviceAccount
        collection_interval: 20s
        endpoint: ${K8S_NODE_NAME}:10250
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            cors:
              allowed_origins:
              - http://*
              - https://*
            endpoint: 0.0.0.0:4318
      prometheus:
        config:
          scrape_configs:
          - job_name: opentelemetry-collector
            scrape_interval: 10s
            static_configs:
            - targets:
              - ${env:MY_POD_IP}:8888
      zipkin:
        endpoint: ${env:MY_POD_IP}:9411
    service:
      extensions:
      - health_check
      - memory_ballast
      - bearertokenauth/withscheme
      pipelines:
        logs:
          exporters:
          - logging
          processors:
          - k8sattributes
          - memory_limiter
          - batch
          receivers:
          - otlp
        metrics:
          exporters:
          - prometheus
          processors:
          - k8sattributes
          - memory_limiter
          - filter/ottl
          - resourcedetection
          - transform
          - batch
          receivers:
          - otlp
          - kubeletstats
        traces:
          exporters:
          - otlp
          processors:
          - k8sattributes
          - memory_limiter
          - resourcedetection
          - batch
          receivers:
          - otlp
          - jaeger
          - zipkin
      telemetry:
        logs:
          level: debug
        metrics:
          address: ${env:MY_POD_IP}:8888
kind: ConfigMap
metadata:
  creationTimestamp: "2023-05-16T14:33:24Z"
  labels:
    app.kubernetes.io/instance: opentelemetry-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: otelcol
    app.kubernetes.io/version: 0.76.1
    argocd.argoproj.io/instance: opentelemetry-demo
    helm.sh/chart: opentelemetry-collector-0.55.1
  name: opentelemetry-demo-otelcol-agent
  namespace: otel-demo
  resourceVersion: "71060066"
  uid: 4c9cee55-7fe2-4752-8f77-e81d9d11c983

Connect connector exporters and receivers

The connection between a connector's exporter and receiver sides should be visualised as a directed arrow from the exporter to the receiver, see the attached screenshot. When hovering over any of the connector's components in the visualisation, as well as when hovering over the definition of the connector, the connector's components and the link between them in the visualisation view should be highlighted.
Screenshot 2023-10-09 at 10 11 52
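
A hedged sketch of how the extra edges could be derived from an already-parsed config: a connector acts as an exporter in one pipeline and as a receiver in another, so each such pair yields one directed arrow. The type and function names are illustrative only:

// Hypothetical shape of the parsed service.pipelines section.
type Pipelines = Record<string, { receivers?: string[]; exporters?: string[] }>;

interface ConnectorEdge {
  from: string; // pipeline using the connector as an exporter
  to: string;   // pipeline using the connector as a receiver
  connector: string;
}

// For every connector, emit one directed edge per (exporting pipeline, receiving pipeline) pair.
export function connectorEdges(pipelines: Pipelines, connectors: string[]): ConnectorEdge[] {
  const edges: ConnectorEdge[] = [];
  for (const connector of connectors) {
    const sources = Object.keys(pipelines).filter((p) => pipelines[p].exporters?.includes(connector));
    const targets = Object.keys(pipelines).filter((p) => pipelines[p].receivers?.includes(connector));
    for (const from of sources) {
      for (const to of targets) {
        edges.push({ from, to, connector });
      }
    }
  }
  return edges;
}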

Track feature usage of OTelBin

Why

To understand which features are actually useful and which might have been unnecessary.

What

Leverage Vercel's analytics to track, for example:

  • pasting a config
  • opening an OTelBin link with a config
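
A minimal sketch of what the tracking calls could look like with the track() helper from @vercel/analytics; the event names and properties are assumptions, not an agreed taxonomy:

import { track } from "@vercel/analytics";

// Hypothetical event names; the actual taxonomy is up for discussion.
export function trackConfigPasted(lineCount: number) {
  track("config-pasted", { lines: lineCount });
}

export function trackSharedLinkOpened(shareId: string) {
  track("shared-link-opened", { shareId });
}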

Automatic Refactoring Support

  • Enable/disable debug logging of the collector itself (not to be confused with the logging/debug exporter)
  • Automatic migration from logging to debug exporter

Improve editor word pattern

Currently the editor word pattern is the '/\w+/\w+|\w+/' regex. It should also consider words with an "xxx-xxx-xxx" pattern so that zooming and focusing on nodes works correctly.
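
A possible fix, sketched against the Monaco API: widen the word pattern so hyphenated component names are captured as one word. The "yaml" language id and the exact regex are assumptions:

import * as monaco from "monaco-editor";

// Treat hyphenated component IDs (e.g. "my-otlp-exporter") and type/name pairs
// (e.g. "filter/ottl") as single words so that zoom/focus can resolve them.
monaco.languages.setLanguageConfiguration("yaml", {
  wordPattern: /[\w-]+\/[\w-]+|[\w-]+/,
});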

Show details when selecting components in the graph view

What

I am not sure if this is already planned in other tickets.

When the user selects a component in the graph view, it would be nice to see details about this component. Here are two examples of what it could look like for the "memory_limiter" and "transform" processors.

Mockup

image

Improve edges UI

Improve the edges' UI design (the margins around the edges), as shown in the screenshot:
Screenshot 2023-10-17 at 00 46 39

Update welcome modal

  • Update screenshots
  • Add sharing
  • Add collector distribution validation
  • Revise texts

How are the autocompleted options for a component chosen?

First off thanks for working on this, it's pretty cool.

I'm wondering how the autocompleted options for a component are chosen. When I type googlecloud<TAB> in the exporters section, it automatically adds the following config keys:

  googlecloud:
    project: 
    user_agent: opentelemetry-collector-contrib {{version}}
    destination_project_quota: false
    timeout: "12s"

All but project are options that users would not normally want to tweak. It seems like the defaults maybe come from the mapconfig default struct tag? I would guess that options with a default are usually the least relevant to users.

Support OtelCol builds other than the default one

Why

Different OtelCol builds, e.g., those shipped by vendors like the AWS Distro for OpenTelemetry or the Splunk Distribution of OpenTelemetry Collector, support a different subset of extensions, exporters, receivers and processors than the default distribution from OpenTelemetry.

We should allow the user of OTelBin to select a specific release (as in "GitHub release", e.g.: https://github.com/aws-observability/aws-otel-collector/releases or https://github.com/signalfx/splunk-otel-collector/releases). When the user selects a GitHub repository and a specific GitHub release, we should download the binary, extract the schema of the configurations it accepts, and use that for validation.
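
As a sketch of the release-selection part, the public GitHub REST API already exposes the releases of each distribution repository; the registry mapping and function name below are hypothetical:

// Hypothetical distribution registry; the repository slugs are taken from the links above.
const distributions = {
  "aws-otel-collector": "aws-observability/aws-otel-collector",
  "splunk-otel-collector": "signalfx/splunk-otel-collector",
} as const;

interface GitHubRelease {
  tag_name: string;
}

// List the release tags of a distribution via the public GitHub REST API so the
// user can pick the exact build to validate against.
export async function listReleases(distro: keyof typeof distributions): Promise<string[]> {
  const res = await fetch(`https://api.github.com/repos/${distributions[distro]}/releases`);
  if (!res.ok) {
    throw new Error(`GitHub API responded with ${res.status}`);
  }
  const releases = (await res.json()) as GitHubRelease[];
  return releases.map((release) => release.tag_name);
}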

Make Clerk and Upstash optional

We should not require community members who want to host or develop OTelBin to have a Clerk and/or Upstash API key. OTelBin cannot even open without these keys set in .env.

Features that require 3rd-party services like Clerk and Upstash must be made optional and the UI should hide them away if the 3rd-party services are not configured.
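
One possible approach, sketched here with hypothetical environment variable names: derive feature flags from the presence of the third-party keys and let the UI hide the dependent features when the flags are off:

// Hypothetical feature flags derived from the presence of the third-party keys;
// the environment variable names are assumptions, not the project's actual ones.
export const features = {
  // Sharing needs auth (Clerk) and short-link storage (Upstash).
  sharing:
    Boolean(process.env.CLERK_SECRET_KEY) &&
    Boolean(process.env.UPSTASH_REDIS_REST_URL),
};

// The UI can then hide the "Share" button when features.sharing is false instead
// of failing at startup because the keys are missing.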

Support automatic migration to the debug exporter

The logging exporter is being phased out in favour of the debug exporter¹, which provides more or less the same functionality, but avoids confusion with logging pipelines and changes the naming of log levels.

OtelBin should show a warning associated with logging exporters and, if the target version of the OpenTelemetry Collector is recent enough to include the debug exporter, offer a one-click migration step.
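
A minimal sketch of the migration step on the parsed config object; it only handles a plain logging exporter name (not logging/<name> variants) and leaves the verbosity-level mapping out:

// Rename the "logging" exporter to "debug" and update every pipeline referencing it.
type CollectorConfig = {
  exporters?: Record<string, unknown>;
  service?: { pipelines?: Record<string, { exporters?: string[] }> };
};

export function migrateLoggingToDebug(config: CollectorConfig): CollectorConfig {
  if (!config.exporters || !("logging" in config.exporters)) return config;
  const { logging, ...rest } = config.exporters;
  config.exporters = { ...rest, debug: logging };
  for (const pipeline of Object.values(config.service?.pipelines ?? {})) {
    pipeline.exporters = pipeline.exporters?.map((e) => (e === "logging" ? "debug" : e));
  }
  return config;
}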

Footnotes

  1. https://words.boten.ca/byebye-logging-exporter/

What are the prerequisites / requirements for adding a new distro?

Jaeger is working on v2, which will be based on the OTel Collector infrastructure, so we could point to this tool for config validation. I have a couple of questions:

  1. what are the prerequisites that a distro must meet to be compatible with this tool?
  2. I assume extensions are also validated - are there plans for visualizing them (including inter-dependencies that will be supported in the next collector version)?

Resolve build-time / dev-time warnings

./node_modules/vscode-languageserver-textdocument/lib/umd/main.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted

Import trace for requested module:
./node_modules/vscode-languageserver-textdocument/lib/umd/main.js
./node_modules/monaco-yaml/yaml.worker.js
./src/contexts/EditorContext.tsx
./src/app/page.tsx

./node_modules/vscode-languageserver-types/lib/umd/main.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted

Import trace for requested module:
./node_modules/vscode-languageserver-types/lib/umd/main.js
./node_modules/monaco-yaml/yaml.worker.js
./src/contexts/EditorContext.tsx
./src/app/page.tsx

Click behavior for visualization items with no definition

Currently, a normal click on a visualization item moves the cursor to the item's definition in the editor. Also, since the pipeline visualization remains functional even alongside the custom validation errors, what should happen when the user clicks on a pipeline visualization item that is not yet defined in the editor's config?

example config:
https://www.otelbin.io/s/d0e264da-9b18-4e37-9fbc-46e12518bf75

image

In this config, "prometheus" is referenced as an exporter in the pipelines, but there is no definition for it. What should happen when the user clicks on the "prometheus" visualization item?

Add a cookie overlay

What

Add a simple cookie overlay asking our users to accept cookies. Once they have accepted, set a cookie called otelbin-cookies-accepted with the value true and then reload the page via window.location.reload().

We should use our toast component. Visually, we should have an Opt-out and an Accept button in the dialog.
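
The accept handler itself is small; a sketch (the max-age value is an assumption):

// Persist the consent cookie described above and reload so the cookie-dependent
// features pick it up; the max-age of one year is an assumption.
export function acceptCookies() {
  document.cookie = "otelbin-cookies-accepted=true; path=/; max-age=31536000";
  window.location.reload();
}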

Inspiration

image

image

Allow multiple collector configs to be visualized together

It would be great if someone could input more than one collector config at the same time and either visualize them side by side to understand the differences, OR combine them if they refer to one another. I.e., if I have a collector sidecar on a pod whose exporter points to another collector in gateway mode, I would be able to combine the visualizations so that the sidecar's OTLP exporter points to the gateway's OTLP receiver. Let me know if you need more details or some examples :)

Implement jumping within the editor

What

For jumping within the editor from the pipelines to the exporter/processor definitions, do the following (see the sketch after the examples):

  • a context menu with "Go to Definition" (see screenshot)
  • CMD+Click (for Intellij users on MacOS) / CTRL+Click (for Intellij users on Windows) / OPT+Click (for VSCode users)
  • CMD+B / CTRL+B as keyboard shortcut for Intellij users
  • F12 as keyboard shortcut for VSCode users

Examples

image

And for the opposite direction, add a context menu with a list of the pipelines in which the element is used

image
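
A sketch of the "Go to Definition" part against the Monaco API; the plain-text lookup below is a stand-in for a proper YAML-aware resolution, and the "yaml" language id is an assumption:

import * as monaco from "monaco-editor";

// Resolve the word under the cursor (e.g. "batch" inside service.pipelines) to its
// first occurrence as a key; a plain-text search stands in for a YAML-aware lookup.
monaco.languages.registerDefinitionProvider("yaml", {
  provideDefinition(model, position) {
    const word = model.getWordAtPosition(position);
    if (!word) return null;
    const matches = model.findMatches(`${word.word}:`, true, false, true, null, false);
    if (matches.length === 0) return null;
    return { uri: model.uri, range: matches[0].range };
  },
});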

Missing validation of referenced but undeclared exporters, receivers, processors, extensions

The pipelines may reference components that do not exist, and OTelBin does not raise those as issues. In the screenshot, the health_check/i_do_not_exist extension is not declared (only health_check is), so this configuration is invalid. The same applies, for example, to a traces pipeline referencing an extension that is not declared in the extensions list.
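
A sketch of the missing check: walk service.extensions and every pipeline, and report each reference that has no matching declaration in the corresponding top-level section. The types and function name are illustrative:

type Config = {
  receivers?: Record<string, unknown>;
  processors?: Record<string, unknown>;
  exporters?: Record<string, unknown>;
  extensions?: Record<string, unknown>;
  service?: {
    extensions?: string[];
    pipelines?: Record<string, { receivers?: string[]; processors?: string[]; exporters?: string[] }>;
  };
};

// Report every component referenced from the service section that has no matching
// declaration in the corresponding top-level section.
export function findUndeclaredReferences(config: Config): string[] {
  const errors: string[] = [];
  const declared = (section?: Record<string, unknown>) => new Set(Object.keys(section ?? {}));
  const check = (refs: string[] | undefined, names: Set<string>, where: string) => {
    for (const ref of refs ?? []) {
      if (!names.has(ref)) errors.push(`${where} references undeclared component "${ref}"`);
    }
  };
  check(config.service?.extensions, declared(config.extensions), "service.extensions");
  for (const [name, pipeline] of Object.entries(config.service?.pipelines ?? {})) {
    check(pipeline.receivers, declared(config.receivers), `pipeline "${name}" receivers`);
    check(pipeline.processors, declared(config.processors), `pipeline "${name}" processors`);
    check(pipeline.exporters, declared(config.exporters), `pipeline "${name}" exporters`);
  }
  return errors;
}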
