cloudposse / prometheus-to-cloudwatch

Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch

Home Page: https://cloudposse.com/accelerate

License: Apache License 2.0

Makefile 2.06% Go 96.09% Dockerfile 1.85%
prometheus prometheus-exporter cloudwatch metrics aws kubernetes

prometheus-to-cloudwatch's Introduction



This project is part of our comprehensive "SweetOps" approach towards DevOps.

It's 100% Open Source and licensed under the APACHE2.

Screenshots

Screenshot: kube-state-metrics metrics published to CloudWatch

Usage

NOTE: The utility accepts parameters as command-line arguments or as environment variables (or any combination of the two). Command-line arguments take precedence over environment variables.

| Command-line argument | ENV var | Description |
|---|---|---|
| aws_access_key_id | AWS_ACCESS_KEY_ID | AWS access key ID with permissions to publish CloudWatch metrics |
| aws_secret_access_key | AWS_SECRET_ACCESS_KEY | AWS secret access key with permissions to publish CloudWatch metrics |
| cloudwatch_namespace | CLOUDWATCH_NAMESPACE | CloudWatch namespace |
| cloudwatch_region | CLOUDWATCH_REGION | CloudWatch AWS region |
| cloudwatch_publish_timeout | CLOUDWATCH_PUBLISH_TIMEOUT | CloudWatch publish timeout in seconds |
| prometheus_scrape_interval | PROMETHEUS_SCRAPE_INTERVAL | Prometheus scrape interval in seconds |
| prometheus_scrape_url | PROMETHEUS_SCRAPE_URL | The URL to scrape Prometheus metrics from |
| cert_path | CERT_PATH | Path to SSL certificate file (when using SSL for prometheus_scrape_url) |
| key_path | KEY_PATH | Path to key file (when using SSL for prometheus_scrape_url) |
| accept_invalid_cert | ACCEPT_INVALID_CERT | Accept any certificate during the TLS handshake. Insecure; use only for testing |
| additional_dimension | ADDITIONAL_DIMENSION | Additional dimension specified as NAME=VALUE |
| replace_dimensions | REPLACE_DIMENSIONS | Dimensions to replace, specified as NAME=VALUE,... |
| include_metrics | INCLUDE_METRICS | Only publish the specified metrics (comma-separated list of glob patterns) |
| exclude_metrics | EXCLUDE_METRICS | Never publish the specified metrics (comma-separated list of glob patterns) |
| include_dimensions_for_metrics | INCLUDE_DIMENSIONS_FOR_METRICS | Only publish the specified dimensions for metrics (semicolon-separated METRIC=dim1,dim2 pairs, e.g. 'flink_jobmanager=job_id') |
| exclude_dimensions_for_metrics | EXCLUDE_DIMENSIONS_FOR_METRICS | Never publish the specified dimensions for metrics (semicolon-separated METRIC=dim1,dim2 pairs, e.g. 'flink_jobmanager=job,host;zk_up=host,pod') |
| force_high_res | FORCE_HIGH_RES | Whether to publish all metrics to CloudWatch at high resolution, or only those labeled with __cw_high_res |
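As a rough sketch of how the comma-separated glob filtering in include_metrics/exclude_metrics behaves (using Go's standard path.Match here; the project itself may use a different glob library):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// shouldPublish reports whether a metric name passes a comma-separated
// include list and a comma-separated exclude list of glob patterns.
// An empty include list means "include everything"; excludes win.
func shouldPublish(name, include, exclude string) bool {
	matchAny := func(patterns string) bool {
		for _, p := range strings.Split(patterns, ",") {
			if ok, _ := path.Match(strings.TrimSpace(p), name); ok {
				return true
			}
		}
		return false
	}
	if exclude != "" && matchAny(exclude) {
		return false
	}
	return include == "" || matchAny(include)
}

func main() {
	fmt.Println(shouldPublish("jvm_memory_used", "jvm_*", "jvm_buffer_*"))  // true
	fmt.Println(shouldPublish("jvm_buffer_total", "jvm_*", "jvm_buffer_*")) // false
}
```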

NOTE: If AWS credentials are not provided in the command-line arguments (aws_access_key_id and aws_secret_access_key) or ENV variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY), the chain of credential providers will search for credentials in the shared credential file and EC2 Instance Roles. This is useful when deploying the module in AWS on Kubernetes with kube2iam, which will provide IAM credentials to containers running inside a Kubernetes cluster, allowing the module to assume an IAM Role with permissions to publish metrics to CloudWatch.

Examples

Build Go program

go get

CGO_ENABLED=0 go build -v -o "./dist/bin/prometheus-to-cloudwatch" *.go

Run locally

export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export CLOUDWATCH_NAMESPACE=kube-state-metrics
export CLOUDWATCH_REGION=us-east-1
export CLOUDWATCH_PUBLISH_TIMEOUT=5
export PROMETHEUS_SCRAPE_INTERVAL=30
export PROMETHEUS_SCRAPE_URL=http://xxxxxxxxxxxx:8080/metrics
export CERT_PATH=""
export KEY_PATH=""
export ACCEPT_INVALID_CERT=true
# Optionally, restrict the subset of metrics to be exported to CloudWatch
# export INCLUDE_METRICS='jvm_*'
# export EXCLUDE_METRICS='jvm_memory_*,jvm_buffer_*'
# export INCLUDE_DIMENSIONS_FOR_METRICS='jvm_memory_*=pod_id'
# export EXCLUDE_DIMENSIONS_FOR_METRICS='jvm_memory_*=pod;jvm_buffer=job,pod'

./dist/bin/prometheus-to-cloudwatch

Build Docker image

NOTE: it will download all Go dependencies and then build the program inside the container (see Dockerfile)

docker build --tag prometheus-to-cloudwatch  --no-cache=true .

Run in a Docker container

docker run -i --rm \
        -e AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXX \
        -e AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
        -e CLOUDWATCH_NAMESPACE=kube-state-metrics \
        -e CLOUDWATCH_REGION=us-east-1 \
        -e CLOUDWATCH_PUBLISH_TIMEOUT=5 \
        -e PROMETHEUS_SCRAPE_INTERVAL=30 \
        -e PROMETHEUS_SCRAPE_URL=http://xxxxxxxxxxxx:8080/metrics \
        -e CERT_PATH="" \
        -e KEY_PATH="" \
        -e ACCEPT_INVALID_CERT=true \
        -e INCLUDE_METRICS="" \
        -e EXCLUDE_METRICS="" \
        -e INCLUDE_DIMENSIONS_FOR_METRICS="" \
        -e EXCLUDE_DIMENSIONS_FOR_METRICS="" \
        prometheus-to-cloudwatch

Run on Kubernetes

To run on Kubernetes, we will deploy two Helm charts:

  1. kube-state-metrics - to generate metrics about the state of various objects inside the cluster, such as deployments, nodes and pods

  2. prometheus-to-cloudwatch - to scrape metrics from kube-state-metrics and publish them to CloudWatch

Install kube-state-metrics chart

helm install stable/kube-state-metrics

Find the running services

kubectl get services

Copy the name of the kube-state-metrics service (e.g. gauche-turtle-kube-state-metrics) into the ENV var PROMETHEUS_SCRAPE_URL in values.yaml.


It should look like this:

PROMETHEUS_SCRAPE_URL: "http://gauche-turtle-kube-state-metrics:8080/metrics"

Deploy prometheus-to-cloudwatch chart

cd chart
helm install .

prometheus-to-cloudwatch will start scraping the /metrics endpoint of the kube-state-metrics service and send the Prometheus metrics to CloudWatch


Share the Love

Like this project? Please give it a ★ on our GitHub! (it helps us a lot)

Are you using this project or any of our other projects? Consider leaving a testimonial. =)

Related Projects

Check out these related projects.

Help

Got a question? We got answers.

File a GitHub issue, send us an email or join our Slack Community.

Commercial Support

DevOps Accelerator for Startups

We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.

Learn More

Work directly with our team of DevOps experts via email, slack, and video conferencing.

We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
  • Site Reliability Engineering. You'll have total visibility into your apps and microservices.
  • Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
  • GitOps. You'll be able to operate your infrastructure via Pull Requests.
  • Training. You'll receive hands-on training so your team can operate what we build.
  • Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
  • Troubleshooting. You'll get help to triage when things aren't working.
  • Code Reviews. You'll receive constructive feedback on Pull Requests.
  • Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.

Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

Newsletter

Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.

Office Hours

Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!


Contributing

Bug Reports & Feature Requests

Please use the issue tracker to report any bugs or file feature requests.

Developing

If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Fork the repo on GitHub
  2. Clone the project to your own machine
  3. Commit changes to your own branch
  4. Push your work back up to your fork
  5. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

Copyright

Copyright © 2017-2020 Cloud Posse, LLC

License


See LICENSE for full details.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.

About

This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!

Cloud Posse

We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.

We offer paid support on all of our projects.

Check out our other projects, follow us on twitter, apply for a job, or hire us to help with your cloud strategy and implementation.

Contributors

Erik Osterman
Andriy Knysh
Igor Rodionov
yufukui-m
Satadru Biswas
Austin ce

prometheus-to-cloudwatch's People

Contributors

aknysh, austince, bernata, edfungus, hangxie, mattellis, pantuza, sbiswas-suplari, sergei-ivanov, vadim-hleif, yufukui-m


prometheus-to-cloudwatch's Issues

Long name metrics cannot be submitted to Cloudwatch

Thank you for this chart... We have it working for the most part, but many metrics are missing, and they map to the logs we are seeing from the exporter pod:

2019/06/21 11:45:10 prometheus-to-cloudwatch: error publishing to CloudWatch: InvalidParameterValue: The value for parameter MetricData.member.10.Dimensions.member.3.Value contains non-ASCII characters.
The parameter MetricData.member.10.Dimensions.member.3.Value must be shorter than 257 characters.
	status code: 400, request id: 06754578-941a-11e9-a7de-0dbbdca37bd2
2019/06/21 11:45:11 prometheus-to-cloudwatch: error publishing to CloudWatch: InvalidParameterValue: The value for parameter MetricData.member.1.Dimensions.member.4.Value contains non-ASCII characters.
The parameter MetricData.member.1.Dimensions.member.4.Value must be shorter than 257 characters.
	status code: 400, request id: 0677688e-941a-11e9-8b66-85751c18c2cd
2019/06/21 11:45:15 prometheus-to-cloudwatch: published 135 metrics to CloudWatch

To be clear, we are seeing most metrics, but are just missing the ones that have long names. I suspect that when the name is chopped, that is when the non-ASCII problems happen. We are not using non-ASCII chars so I think that problem is a red herring.

We have retrieved the metrics from the server:

Side note: the one that is working (foofoob) is not scheduled, i.e. not running (due to node exhaustion during scale-up; you can see this because the "node" attribute is just "", so it is not running anywhere). This, funnily enough, means the metric line is short enough to be submitted to CloudWatch. Perversely, this means we are getting metrics for pods that aren't yet running anywhere, but as soon as they land on a node, their metric line gets too long and they stop reporting.

EDIT: Apologies, the missing metrics in the original post are actually present; it was a CloudWatch graphing error!

The problem of (other) statistics failing to be sent is still occurring, however (see the error log lines above). We are now not even sure which ones are being missed. What is the best solution here? Can the metric lines be made shorter somehow, or can we log which ones are not being sent?

only one additional dimension possible?

An issue that has come up for me is that I want to send metrics to CloudWatch from many different sources for consumption in Grafana later. To filter my metrics and construct sensible dashboards I want labels for categories, called "servergroup" and "environment"; within these categories metrics can also come from multiple instances, so I also want a label for "instance_id".

However I found that when I specified multiple -additional_dimension arguments to the program it would only label my metrics with the last argument specified. I looked at the code.

var additionalDimensions = map[string]string{}
if *additionalDimension != "" {
	key, val := keyValMustParse(*additionalDimension, "-additionalDimension must be formatted as NAME=VALUE")
	additionalDimensions[key] = val
}

var replaceDims = map[string]string{}
if *replaceDimensions != "" {
	kvs := strings.Split(*replaceDimensions, ",")
	if len(kvs) > 0 {
		for _, rd := range kvs {
			key, val := keyValMustParse(rd, "-replaceDimensions must be formatted as NAME=VALUE,...")
			replaceDims[key] = val
		}
	}
}

It looks like a map was used, just like with replaceDimensions, yet the actual code only ever populates that map with at most a single key-value pair. This seems like a disappointing limitation?

Would there be any drawback to changing additional_dimension to additional_dimensions (or adding additional_dimensions for backwards compatibility) and copying the same pattern for parsing it as is already in use for replace_dimensions? It looks at a glance like the rest of the code is already there.
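The pattern suggested above (reusing the replace_dimensions parsing for multiple additional dimensions) could be sketched like this (a hypothetical helper, not the project's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDimensions splits "NAME=VALUE,NAME=VALUE,..." into a map,
// mirroring how -replace_dimensions is already parsed.
func parseDimensions(s string) (map[string]string, error) {
	dims := map[string]string{}
	if s == "" {
		return dims, nil
	}
	for _, kv := range strings.Split(s, ",") {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("dimension %q must be formatted as NAME=VALUE", kv)
		}
		dims[parts[0]] = parts[1]
	}
	return dims, nil
}

func main() {
	d, err := parseDimensions("servergroup=web,environment=prod,instance_id=i-0abc")
	fmt.Println(d, err)
}
```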

Include/Exclude metrics not respected in values.yaml

I'm trying to only include one metric to send to CloudWatch, so I set the include value to INCLUDE_METRICS: 'kube_pod_container_resource_requests*' and helm-installed the chart. However, the exporter is still exporting every single metric in Prometheus.

Update README

what

  • Update to use latest README.yaml template

why

  • Consistency / commercial support notice

Secrets doesn't recognise role_arn to switch roles

We have a payer account with an access key ID and secret access key for it. We use role_arn to switch to a custom role, but the role_arn value doesn't get picked up while the container is running, which gives the error "access denied for root user" because the role switch hasn't been performed.

Is there a limit to the number of metrics I can push?

Hi guys,

First of all great work, I really like this project.

My Prometheus is scraping a lot of metrics from different exporters, however your tool is only sending 84 of them to CloudWatch. Is there a limit, or maybe a config option?

From the logs I can't see much; maybe there is a way to enable debug?

2018/09/17 05:26:30 prometheus-to-cloudwatch: published 84 metrics to CloudWatch
2018/09/17 05:27:30 prometheus-to-cloudwatch: published 84 metrics to CloudWatch
2018/09/17 05:28:30 prometheus-to-cloudwatch: published 84 metrics to CloudWatch

The thing I noticed is that the metrics which are being pushed are from the local host where Prometheus itself is running; nothing from the other nodes where I'm exporting metrics, mainly using node_exporter and cAdvisor.

Can you guys please help me out here?

Regards,
Ashish

Image missing

The image or image tag in the values file cannot be pulled.

Ability to send metrics to VPC endpoint

Have a question? Please check out our Slack Community or visit our Slack Archive.

Describe the Feature

I would like to send metrics to a VPC endpoint instead of sending metrics to the public API.

Expected Behavior

I expect the ability to ship metrics to CloudWatch over a private network where the instance does not have the ability to reach the public endpoint over the Internet.

Use Case

I have EC2 instances that do not have access to the public internet. The EC2 instances are in a private subnet and they do not have a NAT gateway in order to reach the Internet. This EC2 instance has a service that is exposing prometheus styled metrics that I would like to ship to CloudWatch, but would need to be done over the private network through a VPC endpoint.

Describe Ideal Solution

I would like the ability to add the VPC endpoint as a parameter so that the cloudwatch client is able to put metrics into CloudWatch over the private network. I believe this would be achieved by setting the Endpoint option linked below.

https://docs.aws.amazon.com/sdk-for-go/api/aws/client/#ConfigProvider

It appears that the CloudWatch client is being set here, and this is where the Endpoint option might have to be set.
https://github.com/cloudposse/prometheus-to-cloudwatch/blob/master/prometheus_to_cloudwatch.go#L136-L196


RabbitMQ prometheus endpoint returns status code 406

Current implementation performs HTTP request with Accept header including quality factor (q parameter) and extensions (see code).

RabbitMQ prometheus endpoint returns HTTP status code 406 Not Acceptable (see RabbitMQ issue) with current Accept header. The problem is related to the version=0.0.4 extension of the text/plain media type, removing the version=0.0.4 from the header value resolves the issue.

Is there any specific reason to have the version=0.0.4 extension specified for the text/plain media type?

Would it be acceptable to have it removed? (I can provide PR)

Thanks,

RJB

[Feature] Fetch alerts or recording rules from a Prometheus server

First of all hello and thank you very much for sharing this very cool project!

I think it would be nice to support the ability to fetch metrics held in the Prometheus server which are not usually exposed any other way. For instance alerts and recording rules.

I have found a workaround for this use case, if anyone is interested, to scrape the /federate endpoint and passing the correct "instant vector selector" and url-encoding the query.

For instance, let's say that your metric is named candies (could be a recording rule aggregating and manipulating other primitive metrics), then you can invoke prometheus-to-cloudwatch as:

prometheus_to_cloudwatch -prometheus_scrape_url "http://localhost:9090/federate?match[]=%7B__name__%3D~%22candies.%2A%22%7D"

where the query is simply the url-encode of match[]={__name__=~"candies.*"}.

While this workaround seems to work, it makes the code not very readable and a bit hard to maintain, so perhaps this feature could be integrated directly into the code?

If you think it's a good idea to include the feature I might be able to open a PR for it.

Thank you for your feedback.

text format parsing error in line 1: invalid metric name

Deployed the docker image in AWS ECS. The PROMETHEUS_SCRAPE_URL is set to "http://10.33.10.133:9090/api/v1/query?query=nsq_depth", but the log shows the error: 2019/01/15 20:13:55 reading text format failed: text format parsing error in line 1: invalid metric name. Just to check, I SSHed into my EC2 instance (running my container) and I can run curl http://10.33.10.133:9090/api/v1/query?query=nsq_depth and it works fine.
Please let me know if you have any clues about what might be going on here?

Error: found in requirements.yaml, but missing in charts/ directory: common

Trying to install via helm (cd chart; helm install .) I get Error: found in requirements.yaml, but missing in charts/ directory: common.
I added the repo via helm repo add common https://kubernetes-charts-incubator.storage.googleapis.com/ but it made no difference. Next I tried the route of just downloading https://kubernetes-charts-incubator.storage.googleapis.com/common-0.0.4.tgz and installing it, but that gives me Error: release kind-saola failed: no objects visited. I would prefer not to go down the path of installing one dependency at a time. Any clues on how to resolve the original error?

Ability to set Dimensions per Metric

Hey there, would anyone be interested in a PR that allows you to set the dimensions on a per-metric, or perhaps per-metric-glob, basis? We're running into the issue of the 10 metric limit truncating the actual dimensions we care about.

I'm thinking about updating the Bridge and passing in a flag like:

	dimensionsForMetrics     = flag.String("dimensions_for_metrics", os.Getenv("DIMENSIONS_FOR_METRICS"), "Hard-coded dimensions for metrics (semi-colon-separated list of comma-separated dimensions, e.g. 'flink_jobmanager=job,host;zk_up=host,pod;')")

// ...

type Bridge struct {
	dimensionsForMetricMap map[string][]string
	// or, keyed by compiled glob patterns:
	dimensionsForMetrics   map[glob.Glob][]string
}

Or even the ability to set the max # of dimensions per metric might solve our use case.

HTTP connections not being closed

We're running prometheus-to-cloudwatch in a Kubernetes pod to scrape our Selenium endpoint for metrics. This works, but after a couple of hours the Selenium exporter becomes unavailable, because the HTTP connections are held open and no new ones can be created. We're having to restart the exporter to get everything working again. Is there a way to get prometheus-to-cloudwatch to close the HTTP connections after it has made them, so that it isn't leaving them hanging?

Error returned HTTP status 400 Bad Request

Hello!
I have a problem with prometheus to cloudwatch error: returned HTTP status 400 Bad Request

My prometheus collect metrics by blackbox exporter icmp on path http://url:9115/probe but when i export PROMETHEUS_SCRAPE_URL env i receive error: prometheus-to-cloudwatch: Error: GET request for URL "http://url:9115/probe" returned HTTP status 400 Bad Request
If i using http://url:9115/metrics it's work but i dont have metrics their what i need

Do you have any advice ?

Thank you

Text format parsing error in line 79

Hello,

I'm trying to put OpenDJ metrics into CloudWatch. OpenDJ has a prometheus exporter, so this app felt like a good fit. I tried it and got the following message before the application exits.

$ prometheus-to-cloudwatch  -cloudwatch_publish_timeout 5 -prometheus_scrape_interval 60 -prometheus_scrape_url http://localhost:8080/metrics/prometheus -cloudwatch_namespace OpenDJ -cloudwatch_region us-east-2
2021/03/13 11:15:11 prometheus-to-cloudwatch: Starting prometheus-to-cloudwatch bridge
2021/03/13 11:16:11 reading text format failed: text format parsing error in line 79: second HELP line for metric name "ds_backend_entry_count"

Here's a snippet from http://localhost:8080/metrics/prometheus. The fourth line in this snippet is line 79 in the payload.

# HELP ds_backend_ttl_thread_count Number of active time-to-live threads
# TYPE ds_backend_ttl_thread_count gauge
ds_backend_ttl_thread_count{backend="userRoot",type="db",} 0.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="__config.ldif__",base_dn="cn=config",type="local",} 188.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="__config.ldif__",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="adminRoot",base_dn="cn=admin data",type="local",} 8.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="adminRoot",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="ads-truststore",base_dn="cn=ads-truststore",type="local",} 3.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="ads-truststore",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="backup",base_dn="cn=backups",type="local",} 1.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="backup",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="monitor",base_dn="cn=monitor",type="local",} 36.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="monitor",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="rootUser",base_dn="cn=Directory Manager",type="local",} 1.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="rootUser",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="schema",base_dn="cn=schema",type="local",} 1.0
# HELP ds_backend_is_private Whether the base DNs of this backend should be considered public or private
# TYPE ds_backend_is_private gauge
ds_backend_is_private{backend="schema",type="local",} 1.0
# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN

This also appears around line 45:

# HELP ds_backend_entry_count Number of subordinate entries of the base DN, including the base DN
# TYPE ds_backend_entry_count gauge
ds_backend_entry_count{backend="userRoot",base_dn="dc=movies,dc=tv",type="db",} 0.0
