qubitproducts / exporter_exporter

A reverse proxy designed for Prometheus exporters

License: Apache License 2.0

Go 97.40% Makefile 1.81% Dockerfile 0.79%
prometheus cadvisor prometheus-exporter proxy nginx ceh infra

exporter_exporter's Introduction

Exporter Exporter - prometheus exporter proxy

"you mean apache/nginx" - bbrazil

This provides a simple reverse proxy for Prometheus exporters. It is intended as a single-binary alternative to nginx/apache for use in environments where opening multiple TCP ports to all servers might be difficult (technically or politically).

The advantages are:

  • A single port can be used to query multiple exporters (to ease firewall configuration concerns).
  • Can provide TLS with optional client certificate authentication.
  • Provides verification that the target is serving prometheus metrics.
  • Can be used to execute scripts that produce prometheus metrics.
  • up behaviour is the same as for querying individual collectors.
  • Small code size, minimal external dependencies, easily auditable.

The exporter has three endpoints.

  • /: displays a list of all exporters with links to their metrics.

    • Returns JSON if the header "Accept: application/json" is passed
  • /proxy: which takes the following parameters:

    • module: the name of the module from the configuration to execute. (a default module can be selected using the defaultModule config option)
    • args: (only for exec modules): additional arguments to the backend command.
    • all other query string parameters are passed on to any http backend module. (excluding the first module parameter value).
  • /metrics: this exposes the metrics for the collector itself.

Features that will NOT be included:

  • merging of module outputs into one query (this would break up behaviour)

Installation

You can build directly using a plain go get github.com/QubitProducts/exporter_exporter. The provided Makefile is primarily used for releases.

Pre-built binaries and a debian package are available on the GitHub release page.

An ansible recipe is also available (kindly provided by one of our users).

TODO:

  • Config reload on HUP (or POST, or config file change?)
  • route to a docker/rocket container by name

Windows Service

The binary can be installed as a Windows service by supplying the -winsvc install arg. All other arguments passed along with -winsvc install will be added to the service startup and can only be changed by uninstalling/installing it again (or modifying the Windows registry directly).

Configuration

In expexp.yaml list each exporter listening on localhost with its known port.

defaultModule: node # called if "module" param is not supplied
modules:
  node:
    method: http
    http:
       port: 9100

  mtail:
    method: http
    http:
       port: 3903
       headers:
          foo: bar

  cadvisor:
    method: http
    http:
       verify: false
       port: 4194

  netdata:
    method: http
    http:
       port: 19999
       path: '/api/v1/allmetrics?format=prometheus'

  blackbox:
    method: http
    http:
       port: 9115
       path: '/probe'

  somescript:
    method: exec
    timeout: 1s
    exec:
      command: /tmp/myscript.sh
      args:
        - "myarg1"
        - "myarg2"
      env:
        THING: "1"
        THING2: "2"

  somefile:
    method: file
    file:
      path: /tmp/myfile.prometheus.txt

In your prometheus configuration

scrape_configs:
  - job_name: 'expexp_metrics'
    scrape_interval: 1s
    static_configs:
      - targets: ['host:9999']
  - job_name: 'cadvisor'
    scrape_interval: 5s
    metrics_path: /proxy
    params:
      module:
        - cadvisor
    static_configs:
      - targets: ['host:9999']
  - job_name: 'mtail'
    scrape_interval: 5s
    metrics_path: /proxy
    params:
      module:
        - mtail
    static_configs:
      - targets: ['host:9999']
  - job_name: 'somescript'
    scrape_interval: 5s
    metrics_path: /proxy
    params:
      module:
        - somescript
    static_configs:
      - targets: ['host:9999']
  - job_name: 'blackbox'
    metrics_path: /proxy
    params:
      module:
        - blackbox
        - icmp_example
    static_configs:
      - targets:
        - 8.8.8.8
        - 8.8.4.4
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: host:9999

Blackbox Exporter

The blackbox exporter also uses the "module" query string parameter. To query it via exporter_exporter we rely on the stripping of the initial "module" parameter. For example

curl http://localhost:9999/proxy\?module\=blackbox\&module\=icmp_example\&target\=8.8.8.8

Will query the icmp_example module in your blackbox configuration.
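This first-value stripping can be illustrated with Go's net/url (a simplified sketch, not exporter_exporter's actual code; stripFirstModule is a hypothetical helper):

```go
package main

import (
	"fmt"
	"net/url"
)

// stripFirstModule removes the first "module" value (consumed to pick
// the backend module) and keeps any remaining query parameters, which
// are forwarded to the backend exporter.
func stripFirstModule(rawQuery string) (module string, forwarded string, err error) {
	q, err := url.ParseQuery(rawQuery)
	if err != nil {
		return "", "", err
	}
	vals := q["module"]
	if len(vals) > 0 {
		module = vals[0]
		if len(vals) > 1 {
			q["module"] = vals[1:] // later values are passed through to the backend
		} else {
			q.Del("module")
		}
	}
	return module, q.Encode(), nil
}

func main() {
	mod, fwd, _ := stripFirstModule("module=blackbox&module=icmp_example&target=8.8.8.8")
	fmt.Println(mod) // blackbox
	fmt.Println(fwd) // module=icmp_example&target=8.8.8.8
}
```

With the curl example above, the proxy itself consumes module=blackbox and the blackbox exporter still receives its own module=icmp_example parameter.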

Directory-based configuration

You can also specify -config.dirs to break the configuration into separate files. The module name is taken from the name of the file (minus the yml/yaml extension), and the configuration for that module goes in at the top level.

Note that if you want to use only this configuration method and not the file-based configuration (-config.file option), you must provide an empty string for the file option: ./exporter_exporter -config.file "" -config.dirs "/etc/exporter_exporter/"

==> expexp.yaml <==
modules: {}

==> expexp.d/node.yaml <==
method: http
http:
   port: 9100

==> expexp.d/mtail.yaml <==
method: http
http:
   port: 3903
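The filename-to-module-name mapping described above could be sketched like this (an illustration only; moduleName is a hypothetical helper, not the project's code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// moduleName derives the module name from a config file name by
// dropping the yml/yaml extension, as described above.
func moduleName(file string) string {
	base := filepath.Base(file)
	ext := filepath.Ext(base)
	if ext == ".yml" || ext == ".yaml" {
		return strings.TrimSuffix(base, ext)
	}
	return base
}

func main() {
	fmt.Println(moduleName("/etc/exporter_exporter/expexp.d/node.yaml")) // node
	fmt.Println(moduleName("expexp.d/mtail.yml"))                        // mtail
}
```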

TLS configuration

You can use exporter_exporter with TLS to encrypt the traffic, and at the same time enforce strong mutual authentication between the nodes and the prometheus server.

Note that -web.tls.verify will accept any certificate signed by the -web.tls.ca, so you need to create a separate CA for this purpose - or use a self-signed certificate, which acts as its own CA.

Here is a simple configuration example, using one key/cert for the prometheus server and one key/cert shared between all the remote nodes. Firstly, create the keys and certs:

openssl req -x509 -newkey rsa:2048 -keyout prom_node_key.pem -out prom_node_cert.pem -days 29220 -nodes -subj /commonName=prom_node/ -addext "subjectAltName=DNS:prom_node"
openssl req -x509 -newkey rsa:2048 -keyout prometheus_key.pem -out prometheus_cert.pem -days 29220 -nodes -subj /commonName=prometheus/ -addext "subjectAltName=DNS:prometheus"

Create an /etc/prometheus/ssl/ directory on the prometheus server and all the remote nodes. Install both cert.pem files everywhere. It is safe for them to be world-readable.

Install prom_node_key.pem only on the nodes, and set file permissions to protect it so that only exporter_exporter can read it. Similarly, install prometheus_key.pem only on the prometheus server, and set permissions so that only the prometheus process can read it.

Configuration for exporter_exporter on the nodes (here it also disables plain HTTP):

EXPEXP_FLAGS='-web.listen-address= -web.tls.listen-address=:9998
 -web.tls.cert=/etc/prometheus/ssl/prom_node_cert.pem
 -web.tls.key=/etc/prometheus/ssl/prom_node_key.pem
 -web.tls.ca=/etc/prometheus/ssl/prometheus_cert.pem
 -web.tls.verify'

To test, use curl to make a scrape, replacing x.x.x.x with the IP address of the target:

curl --cert /etc/prometheus/ssl/prometheus_cert.pem \
     --key /etc/prometheus/ssl/prometheus_key.pem \
     --cacert /etc/prometheus/ssl/prom_node_cert.pem \
     --resolve prom_node:9998:x.x.x.x \
     -v https://prom_node:9998/proxy?module=node

When this is working, configure your prometheus server to use https. Example:

  - job_name: node
    scrape_interval: 1m
    scrape_timeout: 50s
    file_sd_configs:
      - files:
        - /etc/prometheus/targets.d/node_targets.yml
    scheme: https
    tls_config:
      # Verifying remote identity
      ca_file: /etc/prometheus/ssl/prom_node_cert.pem
      server_name: prom_node
      # Asserting our identity
      cert_file: /etc/prometheus/ssl/prometheus_cert.pem
      key_file: /etc/prometheus/ssl/prometheus_key.pem
    metrics_path: /proxy
    params:
      module: [ node ]
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
      - source_labels: [__address__]
        regex: '([^:]+)'
        target_label: __address__
        replacement: '${1}:9998'

Example /etc/prometheus/targets.d/node_targets.yml:

- labels: []
  targets:
  - 192.0.2.1
  - 192.0.2.2

exporter_exporter's People

Contributors

alexandary, brian-brazil, britcey, candlerb, cyril-s, dallemon, danpoltawski, ebik-economia, ebikt, joneug, ldaneliukas, steinbrueckri, sysboy, tcolgate, theredcat, thomasf, tykling

exporter_exporter's Issues

utf-8 encoded labels

Hi,
first, thank you for this awesome project, it's working very well.

I have the following issue: A gitlab-rails exporter exports a metric with the following line:

curl -sk -o - https://localhost/-/metrics | grep -E '^rack_requests_total.*sessions'
rack_requests_total{action="new",controller="sessions",route="g\u003c/w",state="completed"} 2

and exporter exporter breaks with the following error:

level=error msg="Verification for module 'gitlab-rails' failed: Failed to decode metrics from proxied server: text format parsing error in line 37055: invalid escape sequence '\\u'"

AFAIK labels should support utf-8. Is this reproducible and a bug in exporter-exporter?
Thank you,
keachi

Clarify TLS server config

I plan to run exporter_export as a TLS server.

Issue 1: --help output says:

  -web.tls.verify
    	Disable client verification

Looking at the code:

        verify   = flag.Bool("web.tls.verify", false, "Disable client verification")
...
                        if *verify {
                                pool := x509.NewCertPool()
                                cabs, err := ioutil.ReadFile(*caPath)
                                if err != nil {
                                        log.Fatalf("Could not open ca file,, " + err.Error())
                                }
                                ok := pool.AppendCertsFromPEM(cabs)
                                if !ok {
                                        log.Fatalf("Failed loading ca certs")
                                }
                                tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
                                tlsConfig.ClientCAs = pool
                        }

So it seems that this option is to enable client verification (and defaults to false). Should the help text output be updated to reflect this?

(Aside: two commas in error message?)

Issue 2: it's unclear what sort of verification is done on client certs. My best assumption is: the server will accept any client cert, as long as it's signed by any CA in the web.tls.ca file. That is: it does not check the certificate identity or fingerprint.

If that's true, you'd have to set up a separate dummy CA for client authentication, rather than using any existing PKI. I don't have a problem with this, I just want to ensure I understand it properly.

curl: (52) Empty reply from server

Hello,
Is there a limit to scrape metrics with exporter_exporter ?

I have an exporter with 5102 metrics lines, but when I request it:

curl: (52) Empty reply from server

Config :

modules:
  gitlab:
    method: http
    http:
      address: 10.0.0.152
      port: 8080

All other modules work, but this one always fails.

Any idea ?

Regards

[Feature request] Add HTTP basic auth ?

Hi,

First of all: I was looking for a small & simple, easy-to-deploy (golang, yay!), TLS-enabled HTTP proxy to secure all Prometheus exporters for some time, and your exporter_exporter does exactly that! Thanks a lot :-)
(and the consistency with other prometheus components is very welcome)

I was wondering if you would consider adding basic http-auth to remove the burden of using tls-client cert verification to secure the access to the metrics?

Cheers

params for proxied modules

Hello,

I am trying to use blackbox_exporter which needs among other things a "module" parameter, but I cant figure out how to send it.

README mentions that the /proxy endpoint supports a "params" parameter:

params (optional): named parameter to pass to the module (either as CLI args, or http parameters).

Is there an example of how to use this somewhere? I tried to find it in the code but I couldn't find anything. Trying it out, it seems to only accept a (list of) strings, not a dict/map, so how does one give a named parameter to the proxied module?

Thanks! :)

Transition option to keep the route for a default module

It's often the case that there's already an exporter in place, and one might want to add another. exporter_exporter has the nice property of listing all the exporters it reverse proxies, listen on a single port, etc.

But at scale, changing the address of all targets from /metrics to /proxy?module=previous-exporter at the same time is often not feasible. It might be desirable to keep the existing route working during a transition.

Would you be open to a pull request for this? The goal would be to add an option for overriding the /metrics (and maybe /) route with the route of a module that is being replaced.

manipulate pushgateway through this exporter

I have many nodes that should push and delete metrics to pushgateway on a remote server. Since the labels are parsed from the url, I am wondering if it is possible to do it through exporter exporter or if I should access pgw directly.

Thanks

exporter_exporter windows service functionality

Currently, exporter_exporter does not have means to run as a windows service. Running it as a service on windows without a service wrapper a la nssm/winsw would be ideal.

I also investigated the possibility of building it as an MSI package with INSTALL_OPTIONS a la wmi_exporter.

Is this worth investigating?

Support additional metadata

There is already an issue for this here: #36. Adding an additional use case for it here.

I have a setup where a host has

  • blackbox exporter
  • node exporter
  • haproxy exporter
  • and few other exporters

I also have exporter exporter as a proxy to all the exporters on the host.

the same setup is deployed in dev, staging and production environments.

I see a lot of value in adding common properties like "environment" (as extra metadata) to all the exported metrics. One immediate benefit is that the same prometheus job can be used to scrape all environments.

Make -web.tls.ca optional when using TLS without client verification

When using TLS without client verification, the parameter -web.tls.ca is not required (as far as I understand). However, it is required to pass a (valid?) CA path, otherwise startup fails with

main.go:168] Could not open ca file,, open : no such file or directory

Current workaround is to pass some dummy CA.
It should be possible to load the CA only when -web.tls.verify is set.

Support running as a socks5 proxy

I think this is doable when prometheus is compiled with go 1.9 and gets socks5
proxy support for its proxy_url argument.

something like this could work where node:1 is resolved by the socks proxy to
match a module named node in exporter_exporter as the first entry in the node
exporter section.

  - job_name: 'node'
    scrape_interval: 1s
    static_configs:
      - targets: ['node:1']
    scheme: https
    proxy_url: "socks5://localhost:3000"
    tls_config:
      ca_file: ca.crt
      key_file: client.key
      cert_file: client.crt
      server_name: localhost

It gets interesting when we suddenly have a process cluster running on a node
which exporter_exporter needs to pick up.

One could imagine something like this:

modules:
  myapp:
    method: http
    http:
      ports: 7000-7032

which would map to scrapable myapp targets 1-16

  - job_name: 'myapp'
    scrape_interval: 1s
    static_configs:
      - targets: ['myapp:1', 'myapp:2', 'myapp:3', 'myapp:4', 'myapp:5', 'myapp:6', ....
    scheme: https
    proxy_url: "socks5://localhost:3000"
    tls_config:
      ca_file: ca.crt
      key_file: client.key
      cert_file: client.crt
      server_name: localhost

As exporter_exporter works right now, I need one job for each myapp process in that host-local application cluster, which is the main problem I want to solve.

Switch logger

I like glog as well, I even have my own fork with some additional stuff but "github.com/prometheus/common/log" should probably be used to behave more like the "stock" exporters.

basic auth?

Hello,

How do I configure if I need exporter_exporter to proxy to something protected by basic auth?

Thanks! :)

Conflict on 'module' query string?

Several exporters already make use of a 'module' query argument: e.g.

  • snmp_exporter uses it to select the SNMP configuration
  • blackbox_exporter uses it to select what sort of probe to do

This means you might have to do:

localhost:9999/proxy?module=snmp&target=10.12.255.1&module=if_mib_secret
                     ^^^^^^^^^^^                    ^^^^^^^^^^^^^^^^^^^^
                  exporter_exporter                    snmp_exporter

This is confusing at first glance, and exporter_exporter has to strip off the first instance of module and leave the remaining instances in place.

I have tested this (with 0.2.9, prior to the parsing changes in #19), and it does work: tcpdump shows the above example proxies to

GET /snmp?module=if_mib_secret&target=10.12.255.1 HTTP/1.1
Host: localhost:9116
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip

However, I wonder if this could be done in a cleaner way? Suggestions:

  1. Make the exporter_exporter module selection be part of the path, REST-style

    localhost:9999/proxy/snmp?target=10.12.255.1&module=if_mib
    
  2. Use a query string parameter which is unlikely to conflict

    localhost:9999/proxy?exporter_exporter=snmp&target=10.12.255.1&module=if_mib
    

    (This might be better for dynamic rewrite rules: setting __param_exporter_exporter is easy, but I'm not sure if __metrics_path__ can be rewritten. Edit: __metrics_path__ can be set in relabelling, but __param_module cannot be set to a list.)

Both could be done in a backwards-compatible way by falling back to the existing 'module' query parameter.

"path" prevents query strings

Goal:
I wish to use exporter_exporter to safely export metrics about the local consul daemon. Consul already cooperates with prometheus by providing a prometheus-compatible metrics output if I ask properly.

What I did:

modules:
  consul:
    http: { path: '/v1/agent/metrics?format=prometheus', port: 8500 }
    method: http

What I expected:
curl http://localhost:9999/proxy?module=consul would return the same data that I get from curl http://localhost:8500/v1/agent/metrics?format=prometheus

What I got:
An error has occurred during metrics gathering: text format parsing error in line 1: invalid metric name

Extra information:
Running strace on exporter_exporter (I don't have tcpdump on this machine), I can see that exporter_exporter is sending an HTTP request "GET /v1/agent/metrics%3Fformat..."
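Go's net/url shows why the ? ends up percent-encoded: a query string embedded in the path is escaped, while a separate query string is kept intact (a standalone demonstration, unrelated to the project's internals):

```go
package main

import (
	"fmt"
	"net/url"
)

// naiveURL puts the whole string, query included, into the path:
// the '?' gets percent-encoded, which matches the strace output above.
func naiveURL() string {
	u := url.URL{Scheme: "http", Host: "localhost:8500",
		Path: "/v1/agent/metrics?format=prometheus"}
	return u.String()
}

// splitURL keeps path and query separate, so the '?' survives.
func splitURL() string {
	u := url.URL{Scheme: "http", Host: "localhost:8500",
		Path: "/v1/agent/metrics", RawQuery: "format=prometheus"}
	return u.String()
}

func main() {
	fmt.Println(naiveURL()) // http://localhost:8500/v1/agent/metrics%3Fformat=prometheus
	fmt.Println(splitURL()) // http://localhost:8500/v1/agent/metrics?format=prometheus
}
```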

Assistance about TLS

Hi,
After a lot of tests with certificate/TLS, I need some help.
Most of my nodes answer on the same domain xxx.uman-it.fr, so I generated a node cert with *.uman-it.fr as Common Name.

Now I need to monitor one node which answers on another domain, let's say uman-it.infra. If I try to use the same certificate, it fails (invalid certificate).

I tried to create a SAN certificate (Common Name = *.uman-it.fr ; SAN DNS = *.uman-it.infra) but it doesn't work.

Then I found issue #48 about subjectAltName verification and tried to use web.tls.certmatch, but with no more success...

Is it possible to make it work, or must I generate multiple node certificates?

Thanks a lot

Binaries built with a go version containing obsolete mlock kernel issue checks

Hi, I believe the (latest) v0.4.0 binaries were built with go v1.14.0 that contains old/imprecise checks for a specific kernel issue: golang/go#37436

When we used the prebuilt binaries on a fresh Ubuntu 20.04 (kernel 5.4.0-65-generic), the binary crashed after a few minutes with the mlock issue described in the golang/go thread, even though the kernel is already patched for this issue. When we rebuilt the binaries from source with the latest go (go1.15.8), the crashes went away.

I think this can be mitigated by releasing a new version with a more recent go version, or compiling the source manually; I wanted to raise this in case someone else encounters this issue.

Thanks for the exporter!

Verify does not work with OpenMetrics

Wanted to make a PR, but it seems there is no solution near: I did not find a parser for OpenMetrics, so I could not create a PR here.

prometheus/client_golang#829

Workaround for now is to skip verify:

modules:
  mimir:
    method: http
    http:
      verify: false
      port: 8080

Error code from exporter_exporter is:
ERRO[0003] Verification for module 'mimir' failed: Failed to decode metrics from proxied server: text format parsing error in line 89: unknown metric type "unknown"

ed25519

Is ed25519 supported?

I noticed this module is using an older version of crypto and I wasn't able to find details of when support was added.

Proxying metrics with auth via headers

We're trying to scrape the control plane metrics of Kubernetes, e.g. the kube-apiserver that exposes metrics on localhost:6443/metrics but that requires authorization via the bearer-token header when using RBAC. (Metrics For The Kubernetes Control Plane)

This could be done by a dedicated job in Prometheus, however, when exporter_exporter is already used to proxy other exporters on the same machine, we'd prefer to do the same here.

That's only a single example derived from an actual use case that we've encountered. Implementing this would require a simple modification to the reverse proxy so that headers from the module config (if any) are added when proxying, would this be acceptable?

Better use 2048bit for the TLS-certificate-example

At least on Centos 7 the curl fails with:
curl: (58) could not load PEM client certificate, OpenSSL error error:140AB18F:SSL routines:SSL_CTX_use_certificate:ee key too small, (no key found, wrong pass phrase, or wrong file format?)

If you create the certificates with -newkey rsa:2048 then everything is fine.

sockets leak when fetching module with verify=false

Description

exporter_exporter leaks sockets when serving metrics for module with verify=false.

How to reproduce

Run exporter_exporter with some module configured like this:

modules:
  nodeNoVerify:
    method: http
    http:
      verify: false
      port: 9100

Inspect current statistics on sockets in your system:

$ ss -s
Total: 1295
TCP:   71 (estab 20, closed 16, orphaned 0, timewait 2)

Transport Total     IP        IPv6
RAW	  1         0         1        
UDP	  31        18        13       
TCP	  55        38        17       
INET	  87        56        31       
FRAG	  0         0         0        

Generate some load and check statistics again e.g.:

$ go-wrk -c=4 -n=2000 http://localhost:9999/proxy?module=nodeNoVerify
...
$ ss -s
Total: 3329
TCP:   6283 (estab 2054, closed 4193, orphaned 0, timewait 4180)

Transport Total     IP        IPv6
RAW	  1         0         1        
UDP	  31        18        13       
TCP	  2090      1054      1036     
INET	  2122      1072      1050     
FRAG	  0         0         0  

Checkout lots of open connections between exporter_exporter and node_exporter:

$ ss -etnp | grep 9100

Checkout logs of exporter_exporter:

2020/01/15 12:15:01 http: Accept error: accept tcp [::]:9999: accept4: too many open files; retrying in 5ms
2020/01/15 12:15:01 http: proxy error: dial tcp: lookup localhost: device or resource busy

Expected behavior

One open connection to node_exporter

Actual behavior

Approx. 2000 open connections to node_exporter

Possible solution

This happens because the http.Transport struct must be reused (as the docs suggest), but a new instance is created every time the httpConfig.ServeHTTP method is invoked. Creating the http.Transport with DisableKeepAlives: true also solves the issue.
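The fix can be demonstrated in isolation: with a single shared http.Transport, sequential requests reuse one TCP connection, whereas a fresh Transport per request opens a new socket every time (a standalone sketch; countDials is a hypothetical helper, not project code):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

// countDials performs n sequential requests through ONE shared
// http.Transport and returns how many TCP connections were opened.
func countDials(n int) int32 {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	}))
	defer srv.Close()

	var dials int32
	shared := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			atomic.AddInt32(&dials, 1) // count new TCP connections
			return (&net.Dialer{}).DialContext(ctx, network, addr)
		},
	}
	client := &http.Client{Transport: shared}

	for i := 0; i < n; i++ {
		resp, err := client.Get(srv.URL)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection is reusable
		resp.Body.Close()
	}
	return atomic.LoadInt32(&dials)
}

func main() {
	fmt.Println("dials for 3 requests:", countDials(3)) // dials for 3 requests: 1
}
```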

SSL related vulnerabilities

Hi Team,

Is there any option with exporter_exporter to fix the below-mentioned vulnerabilities?

Expecting an immediate response. Thanks in advance.

  1. SSL Version 2 and 3 protocol detection.
  2. Red Protocol detection.
  3. SSL medium-strength cipher suites supported (SWEET32).

Thanks and Regards,
Vinod M V

Support for cached scraping

Sometimes, I need to time the scraping of an endpoint or to run a script less frequently than the minimum scrape interval (~5min in prometheus before declaring the data stale). This may happen when scraping has a cost (contention, costly operations, or weak servers) and

  • you have multiple prometheus servers scraping for resiliency reasons
  • a costly script, such as one fetching a big database, is run

The idea would be to add the possibility to cache the result of the scrape and set a stamp on it. When the next scrape arrives

  • if the elapsed time is less than the configured value, serve the cached data
  • otherwise scrape as usual and memorize the result

Currently, the only way to achieve that is to write the result in a promfile, execute the script at the wanted interval and use node_exporter to collect it. It adds a lot of setup for such a simple use case.

In terms of configuration, it could be something similar to HTTP configuration:

 somescript:
    method: exec
    ...
    cache:
        enabled: True
        max_age: 30m
    exec:
      command: /tmp/don_t_launch_that_too_often.sh

Another nice addition for the HTTP method would be the possibility to make a HEAD request on the endpoint to at least check if it is up, but this would depend on the exporter supporting it.

Error responses not passed through

When a backend gives an error status, it appears to confuse exporter_exporter.

Here is an example using snmp_exporter. If I talk to it directly with bad arguments, it returns a 400 status code and a useful message.

root@prometheus:~# curl 'localhost:9116/snmp?target=10.12.255.1&module=WRONG'
Unknown module 'WRONG'

root@prometheus:~# curl -v 'localhost:9116/snmp?target=10.12.255.1&module=WRONG'
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9116 (#0)
> GET /snmp?target=10.12.255.1&module=WRONG HTTP/1.1
> Host: localhost:9116
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Sat, 19 Oct 2019 09:11:42 GMT
< Content-Length: 23
<
Unknown module 'WRONG'
* Connection #0 to host localhost left intact

But using exporter_exporter:

modules:
  snmp:
    method: http
    http:
       port: 9116
       path: snmp

Here is what I get:

root@prometheus:~# curl 'localhost:9999/proxy?module=snmp&target=10.12.255.1&module=WRONG'
An error has occurred while serving metrics:

text format parsing error in line 1: expected float as value, got "module"

root@prometheus:~# curl -v 'localhost:9999/proxy?module=snmp&target=10.12.255.1&module=WRONG'
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9999 (#0)
> GET /proxy?module=snmp&target=10.12.255.1&module=WRONG HTTP/1.1
> Host: localhost:9999
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Sat, 19 Oct 2019 09:13:07 GMT
< Content-Length: 121
<
An error has occurred while serving metrics:

text format parsing error in line 1: expected float as value, got "module"
* Connection #0 to host localhost left intact

It looks like exporter_exporter has attempted to parse the error message as a metric, and this in turn has caused a 500 internal server error.

I am not actually sure why exporter_exporter needs to parse metrics - why not pass the body through unchanged?

But in any case, when the backend returns a non-2xx code then I think that the backend response code and body should be passed through unchanged.
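For comparison, a stock httputil.ReverseProxy does pass a backend 400 through unchanged, which is the behaviour requested here (a standalone sketch using httptest servers; passThrough is a hypothetical helper):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// passThrough proxies a request to a backend that answers 400 and
// returns the status code and body the client actually sees.
func passThrough() (int, string) {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "Unknown module 'WRONG'", http.StatusBadRequest)
	}))
	defer backend.Close()

	target, _ := url.Parse(backend.URL)
	proxy := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer proxy.Close()

	resp, err := http.Get(proxy.URL + "/snmp?target=10.12.255.1&module=WRONG")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body)
}

func main() {
	code, body := passThrough()
	fmt.Println(code, body) // 400 Unknown module 'WRONG'
}
```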

Windows release contains a linux binary

The exe contained in https://github.com/QubitProducts/exporter_exporter/releases/download/v0.4.0/exporter_exporter-0.4.0.windows-amd64.zip is not a windows executable:

$ unzip ../exporter_exporter-0.4.0.windows-amd64.zip
Archive:  ../exporter_exporter-0.4.0.windows-amd64.zip
  inflating: build/exporter_exporter-0.4.0.windows-amd64/exporter_exporter.exe
$ file build/exporter_exporter-0.4.0.windows-amd64/exporter_exporter.exe
build/exporter_exporter-0.4.0.windows-amd64/exporter_exporter.exe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=-Q9NPMunilMXaOGIuByt/_eiyrk3CQNfYU62rUW7n/VKYarLwOWrQMst-s20fd/BrW2dD2LRqFZcSBwgZze, not stripped

Limit queries by IP

Hello,

Is there any way to allow queries only from a specific IP, or network range (maybe something like mrtg's allowed_hosts option)? Using TLS is a perfect option, but I would also like to be able to limit access to expexp's port by source ip address (without having to configure an iptables rule, or use an external firewall), so it will only respond to queries from my prometheus server.

Thanks in advance!

Document build process

I can't find any documentation related to the build process. While the Makefile is fairly self-explanatory, there's some docker magic that needs to be figured out.

Alternatively, it would be helpful if you'd provide a recent build. The windows binaries don't have support for --web.bearer.token or --web.bearer.token-file.

Additional metadata

We're looking for utilizing exporter_exporter to expose additional metadata about each module that would then be consumed by third-party systems via the JSON response.

Currently, what we're using is labels. I.e. each module has a section of labels that is also exposed via JSON and can then be used by another system which constructs scrape targets for file_sd. Note, these aren't actually added to the metrics themselves by exporter_exporter as it still only acts as a proxy. Would that be something that could be added to exporter_exporter?

E.g.

modules:
  node:
    method: http
    http:
       port: 9100
    labels:
       labelname: labelval
       labelname2: labelval2

and

type moduleConfig struct {
	Method  string            `yaml:"method"`
	Timeout time.Duration     `yaml:"timeout"`
	Labels  map[string]string `yaml:"labels"`
	// ...
}
That's pretty much it, we're not actually doing anything more than exposing it in the json response.

build info missing from `--version`

Using the published released binary:

# /usr/local/bin/exporter_exporter --version
Version: 0.4.5- (from , built by  on )

I don't think that's expected?

ARM Release

Hi *,
I need an ARM release for a project. Can you adjust the build to also produce an ARM binary on every release?

Cheers 🍻
Richard

Support info metric type

Hello,

Thank you for a great product, I use it on all my servers and VMs.

When trying to use https://github.com/prometheus/client_python#info metric type in an exporter behind expexp I get this error:

Mar 9 06:51:35 blackbox3 supervisord[64200]: exporter_exporter time="2023-03-09T06:51:35Z" level=error msg="Verification for module 'dnsexp' failed: Failed to decode metrics from proxied server: text format parsing error in line 20: unknown metric type \"info\""

I think it will be the same issue for https://github.com/prometheus/client_python#enum type metrics.

It would be nice to support both types.
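For context: the classic Prometheus text exposition format (version 0.0.4) only defines counter, gauge, histogram, summary and untyped; info and stateset/enum are OpenMetrics types, which is why a strict text-format parser rejects them. A toy stdlib sketch of that kind of TYPE check (not expexp's actual verifier, which relies on the Prometheus client libraries):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// validTypes are the metric types the classic text exposition format defines.
var validTypes = map[string]bool{
	"counter": true, "gauge": true, "histogram": true,
	"summary": true, "untyped": true,
}

// checkTypes scans exposition text and reports the first unsupported
// "# TYPE <name> <type>" token it finds, if any.
func checkTypes(body string) (string, bool) {
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 4 && fields[0] == "#" && fields[1] == "TYPE" {
			if !validTypes[fields[3]] {
				return fields[3], false
			}
		}
	}
	return "", true
}

func main() {
	bad, ok := checkTypes("# TYPE target info\ntarget_info{version=\"1\"} 1\n")
	fmt.Println(bad, ok)
	// → info false
}
```

Serving such exporters through expexp currently requires either switching them to classic-format output or setting `verify: false` on the module.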

Question - how to understand/debug verify issue

I was running into an issue where exporter_exporter couldn't verify the metrics being exported by https://github.com/prometheus/jmx_exporter (v 0.18.0 running the standalone http server) and was able to resolve it by setting verify: false in the exporter config in expexp.yaml.

Though disabling verify works, I'm hoping to understand the underlying issue; a sample log entry from the exporter_exporter service looks like this:

Apr 17 16:45:14 api0 exporter_exporter[677967]: time="2023-04-17T16:45:14-06:00" level=error msg="Verification for module 'core_api-jmx' failed: Failed to decode metrics from proxied server: text format parsing error in line 12: unknown metric type \"unknown\""

And the (truncated) output from the endpoint:

runofthemill@api0:~$ curl http://localhost:9999/proxy?module=core_api-jmx
# HELP jmx_config_reload_success_total Number of times configuration have successfully been reloaded.
# TYPE jmx_config_reload_success_total counter
jmx_config_reload_success_total 0.0
# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.18.0",name="jmx_prometheus_httpserver",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP java_lang_MemoryPool_UsageThresholdSupported java.lang:name=Metaspace,type=MemoryPool,attribute=UsageThresholdSupported
# TYPE java_lang_MemoryPool_UsageThresholdSupported untyped
java_lang_MemoryPool_UsageThresholdSupported{name="Metaspace",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="Code Cache",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="Compressed Class Space",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="G1 Eden Space",} 0.0
java_lang_MemoryPool_UsageThresholdSupported{name="G1 Old Gen",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="G1 Survivor Space",} 0.0

Line 12 seems to be untyped from the preceding annotation, which (I think) should be okay? Any suggestions on how I can better understand why the verification is failing?

Thank you!

http without any restriction is always open

Hi,
I am running the exporter with this command line:

EXTERNAL_IP=$(curl -s ifconfig.me)
cajetan/bin/exporter_exporter \
  --config.file  /home/mon/cajetan/etc/exporter_exporter/expexp.yml \
  --log.level debug \
  --web.tls.cert cajetan/etc/exporter_exporter/prom_node_cert.pem \
  --web.tls.key cajetan/etc/exporter_exporter/prom_node_key.pem \
  --web.tls.ca cajetan/etc/exporter_exporter/prometheus_cert.pem \
  --web.tls.listen-address ${EXTERNAL_IP}:9999 \
  --web.listen-address ${EXTERNAL_IP}:1234 \
  --web.tls.verify \
  --web.tls.certmatch=^prometheus$

HTTPS is fine:

OMD[cajetan@admin]:~$ curl --cert ~/clients/Debian/20/x86_64/etc/exporter_exporter/prometheus_cert.pem \
     --key ~/etc/prometheus/ssl/prometheus_key.pem \
    --cacert ~/clients/Debian/20/x86_64/etc/exporter_exporter/prom_node_cert.pem \
    --resolve prom_node:9999:11.203.192.54 -vvv https://prom_node:9999/metrics
...
# HELP build_info A metric with a constant '1' value labeled by version, revision, branch and goversion from which exporter_exporter was built.
# TYPE build_info gauge
build_info{branch="",goversion="go1.17",revision="",version="0.4.5"} 1

Using any other certificate/key fails, as expected:

OMD[cajetan@admin]:~$ curl --cert /tmp/prometheus_cert.pem \
    --key /tmp/prometheus_key.pem \
    --cacert ~/clients/Debian/20/x86_64/etc/exporter_exporter/prom_node_cert.pem \
    --resolve prom_node:9999:11.203.192.54 \
    -vvv https://prom_node:9999/metrics
...
* TLSv1.3 (IN), TLS alert, bad certificate (554):
* OpenSSL SSL_read: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate, errno 0
* Closing connection 0

But: the http-port is still wide open for everybody:

OMD[cajetan@admin]:~$curl http://11.203.192.54:1234/metrics
# HELP build_info A metric with a constant '1' value labeled by version, revision, branch and goversion from which exporter_exporter was built.
# TYPE build_info gauge
build_info{branch="",goversion="go1.17",revision="",version="0.4.5"} 1
...

I don't see a way to close the HTTP port. If I omit --web.listen-address, it opens the default port 9999. The only way to block HTTP access is a firewall rule. Am I missing something here?

Returning JSON of modules broken

It seems that 0.3.1, or more specifically #38 broke the ability to return JSON of all modules and their configuration on request.

Executing curl -XGET -H "Accept: application/json" localhost:9999 now logs ERRO[0013] json: unsupported type: func(*http.Request)

Duplicate help item with missing metric causes loss of all metrics

I'm using the ceph exporter, and having just started investigating rbd mirroring, the following error occurred:

An error has occurred while serving metrics:

text format parsing error in line 2614: second HELP line for metric name "ceph_rbd_mirror_replay"

When looking at the ceph endpoint, it looks like the problematic HELP item is for an item without a metric to go with it:

# HELP ceph_rocksdb_get_latency_count Get latency Count
# TYPE ceph_rocksdb_get_latency_count counter
ceph_rocksdb_get_latency_count{ceph_daemon="mon.link"} 4236013.0
ceph_rocksdb_get_latency_count{ceph_daemon="mon.yoshi"} 4188788.0
ceph_rocksdb_get_latency_count{ceph_daemon="mon.bowser"} 4158142.0
# HELP ceph_rbd_mirror_replay Replays
# TYPE ceph_rbd_mirror_replay counter
# HELP ceph_prioritycache:meta_pri0_bytes bytes allocated to pri0
# TYPE ceph_prioritycache:meta_pri0_bytes gauge
ceph_prioritycache:meta_pri0_bytes{ceph_daemon="osd.6"} 0.0
ceph_prioritycache:meta_pri0_bytes{ceph_daemon="osd.14"} 0.0

I understand this may be a bug in the ceph exporter, but is there a way to avoid this causing the loss of all metrics in exporter_exporter?

Expose list of configured exporters

Description

exporter_exporter currently returns a 404 when accessing its HTTP root. Looking at other exporters, e.g. blackbox_exporter, it might be better to return some information about the endpoints the exporter can proxy.

Proposal

Display a list of configured module names that link to /proxy?module=$name when accessing /.
PR: #16

Proxied 'path' is URL-encoded, which leads to a different backend request than configured

Example config

modules:
  consul:
    method: http
    http:
      port: 8500
      path: "/v1/agent/metrics?format=prometheus"

Expected behaviour:
proxy request /proxy?module=consul must return metrics from consul agent

Actual behaviour:
proxy request /proxy?module=consul returns 404

The actual request to consul is URL-encoded by exporter_exporter, which leads to the following:
/v1/agent/metrics%3Fformat=prometheus
On that request consul returns 404

Querying consul directly confirms it: the encoded path returns 404, while the plain path works fine.

$ curl -v '0:8500/v1/agent/metrics%3Fformat=prometheus'
*   Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0 (127.0.0.1) port 8500 (#0)
> GET /v1/agent/metrics%3Fformat=prometheus HTTP/1.1
> Host: 0:8500
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Tue, 31 Mar 2020 14:34:19 GMT
< Content-Length: 0
<
* Connection #0 to host 0 left intact
$ curl -v '0:8500/v1/agent/metrics?format=prometheus'
*   Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0 (127.0.0.1) port 8500 (#0)
> GET /v1/agent/metrics?format=prometheus HTTP/1.1
> Host: 0:8500
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; version=0.0.4; charset=utf-8
< Vary: Accept-Encoding
< Date: Tue, 31 Mar 2020 14:34:33 GMT
< Transfer-Encoding: chunked
<
# HELP consul_client_api_catalog_datacenters consul_client_api_catalog_datacenters
# TYPE consul_client_api_catalog_datacenters counter
...

Race condition in error reporting?

Seeing something strange, this is with the official exporter_exporter 0.3.0 binary. Sometimes error reports just show "context canceled" with no other message.

root@ldex-mon2:~# exporter_exporter -config.file=/etc/prometheus/exporter_exporter.yml -config.dirs=/etc/prometheus/exporter_exporter.d -web.tls.listen-address '127.0.0.1:9998' -web.tls.verify
FATA[0000] context canceled                              source="main.go:270"

But then a little later when I ran the exact same command:

root@ldex-mon2:~# exporter_exporter -config.file=/etc/prometheus/exporter_exporter.yml -config.dirs=/etc/prometheus/exporter_exporter.d -web.tls.listen-address '127.0.0.1:9998' -web.tls.verify
FATA[0000] Could not parse key/cert, open cert.pem: no such file or directory  source="main.go:234"

I'm not sure what's going on, but it seems like some sort of race condition.

It's a bit of a pain because I know something is going on here, but I can't see the error message:

root@ldex-mon2:~# exporter_exporter -config.file=/etc/prometheus/exporter_exporter.yml -config.dirs=/etc/prometheus/exporter_exporter.d -web.tls.listen-address ':9998' -web.tls.cert=/path/to/cert.pem -web.tls.key=/path/to/privkey.pem
FATA[0000] context canceled                              source="main.go:270"

Basic Auth for the exporter_exporter binary itself

Hello,

I think the exporter_exporter binary should provide basic authentication support. Coupled with the TLS config, it would help better secure the exporters that are still unsecured, without relying on Apache or nginx. Serving an unsecured exporter via exporter_exporter is already more secure than exposing the exporter itself, but I believe authentication would add stronger security.

Best Regards,
Christophe

Use as library package

I have 29 microservices and want to provide their metrics from a single endpoint.
Right now I'm writing code by hand to fetch metrics from the instances and produce the output.

Optionally load scrape configs from discrete files in a directory

In a more dynamic environment it would be very nice to be able to drop a configuration file into a directory, restart exporter_exporter, and have it append those configs to scrape_configs. One could also imagine these being reloaded on HUP. Would such a feature be of interest?

For example:

/etc/exporter_exporter/exporter.yml
/etc/exporter_exporter/scrape_config.d/service1.yml
/etc/exporter_exporter/scrape_config.d/service2.yml
...

Where exporter.yml has some directive for pointing out the scrape_config.d path.

Return multiple module data in one call

I am not sure whether this is the proper way or I am doing something wrong, but I configured my Prometheus (v2.33.5) as:

  - job_name: 'myjob'
    scheme: 'https'
    metrics_path: '/proxy'
    params:
      module:
      - 'node'
      - 'nginx'
      - 'mysqld'
      - 'php-fpm'
      - 'postgres'
    static_configs:
      - targets:
        - 'targethost:9998'
        labels:
          environment: 'staging'

I see on the target:

May 11 07:09:05 targethost prometheus-exporter-exporter[1422569]: time="2023-05-11T07:09:05+02:00" level=info msg="10.28.1.1 - GET \"/proxy?module=node&module=process&module=nginx&module=mysqld&module=php-fpm&module=postgres\" 200 OK (took 108.894097ms)"

But only the node metrics showed up in Prometheus, so I executed the following on the node that runs Prometheus:

wget -O - 'https://treebeard.login.hu:9998/proxy?module=node&module=nginx&module=mysqld&module=php-fpm&module=postgres' 

and it looks like only the first module (e.g. node in this case) is included in the response:

wget -q -O - 'https://treebeard.login.hu:9998/proxy?module=node&module=nginx&module=mysqld&module=php-fpm&module=postgres' | grep -E '^(node_load1|nginx_connections_writing) '
node_load1 0.71

If I switch the order, I get only the nginx data:

wget -q -O - 'https://treebeard.login.hu:9998/proxy?module=nginx&module=node&module=mysqld&module=php-fpm&module=postgres' | grep -E '^(node_load1|nginx_connections_writing) '
nginx_connections_writing 1

Is this the expected behaviour? Do I have to configure it like the following instead?

  - job_name: 'myjob-node'
    scheme: 'https'
    metrics_path: '/proxy'
    params:
      module: [ 'node' ]
    static_configs:
      - targets:
        - 'targethost:9998'
        labels:
          module: 'node'
          environment: 'staging'

  - job_name: 'myjob-nginx'
    scheme: 'https'
    metrics_path: '/proxy'
    params:
      module: [ 'nginx' ]
    static_configs:
      - targets:
        - 'targethost:9998'
        labels:
          module: 'nginx'
          environment: 'staging'
...
  - job_name: 'myjob-{module}'
    scheme: 'https'
    metrics_path: '/proxy'
    params:
      module: [ '{module}' ]
    static_configs:
      - targets:
        - 'targethost:9998'
        labels:
          module: '{module}'
          environment: 'staging'

Support to drop few metrics(whitelist/blacklist)

Can we add support for a list of metrics to be whitelisted/blacklisted, so that we don't have to wait for Prometheus to drop metrics at the relabel stage?

It would also be useful to support adding extra labels.
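Until something like this exists in expexp, filtering can be done on the Prometheus side with metric_relabel_configs, e.g. (job name and regex are illustrative):

```yaml
scrape_configs:
  - job_name: 'expexp-node'
    metrics_path: '/proxy'
    params:
      module: ['node']
    metric_relabel_configs:
      # Drop unwanted metrics after scraping, before ingestion.
      - source_labels: [__name__]
        regex: 'node_scrape_collector_.*'
        action: drop
```

Note this still transfers the dropped metrics over the wire; filtering at the exporter side would need the requested feature.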
