One idea would be to do this via console templates: we could add a function that takes the output of a query and produces text/protobuf format. We'd also need some hook to set the content type.
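For illustration only, such a console template might look something like this, using the existing `query` template function over a hypothetical recording-rule metric; each sample's `.Labels` and `.Value` are rendered into a line of the text exposition format:

```
{{ range query "job:http_requests:rate5m" }}
job:http_requests:rate5m{job="{{ .Labels.job }}"} {{ .Value }}
{{ end }}
```

The missing piece is the hook mentioned above for setting the response content type, which console templates don't currently provide.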
from prometheus.
@brian-brazil This would certainly be possible, but since this is an integral feature, it arguably deserves its own specialized and optimized implementation and endpoint, no?
A separate endpoint would be best.
A way to do this via console templates, until we've got a full-on solution: https://github.com/prometheus/prometheus/blob/master/consoles/federation_template_example.txt
The solution might include "streaming" as in "transfer more than one timestamped sample per time series during one scrape by a higher-level Prometheus server of a lower-level Prometheus server".
I'm a little wary of doing more than one value. The main reason you'd need that would be if a previous scrape failed, and requesting more data from a server that failed last time may lead to a cascading failure.
There are two common use-cases for federation:
- Scaling, as folks have mentioned. Given prometheus' scaling this is actually probably the rarer use-case
- Aggregating data across zones of some form
It's generally important to monitor a target from "nearby": you want to run prometheus as close to the target, in the network sense, as possible. It's generally a good idea to run it in the same failure domain as well, since then your monitoring goes down exactly when your system goes down, instead of alternating with it. This helps avoid your system being up while your monitoring is down, minimizes the impact of netsplits on monitoring, etc.
In the case of multiple zones though it's often useful to cross-correlate data across those zones. So you'd use the federation to pull the data in to a "global" level prometheus. In this case it'd be fairly common for a scrape to fail due to a network-level event (fiber cut, router failure, etc.)... and it kind of sucks to just lose that data from your global level prometheus instance when it still exists in the lower level monitoring.
I should note here that in the prometheus model there isn't a global store to pull from, so if the data isn't in that top-level right now, you'll never get it there. You'd end up having to do periodic dumps and imports from your lower-level promethei to fill in holes for network outages... ick :(.
I'd suggest pulling data in a more "streaming" fashion with a bound on the window. The default bound can be relatively small to avoid the cascading problem; this way it should at least be able to bridge small network "glitches" like those frequently seen on intercontinental links. If someone wants to expose themselves to cascade failures to handle a cruddy network, they could extend the window if desired.
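As a sketch of the idea only (the names and structure here are hypothetical, not Prometheus code), the bounded window amounts to: pull everything since the last successful scrape, but never reach back further than a fixed limit, so a single missed scrape is bridged while a long outage can't trigger an unbounded backfill.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float  # seconds since epoch
    value: float

def samples_to_pull(series, last_success, now, max_window=120.0):
    """Select samples newer than the last successful scrape, but never
    reach back further than max_window seconds. A single failed scrape
    is bridged; a long outage cannot cause an unbounded backfill that
    cascades load onto the lower-level server."""
    window_start = max(last_success, now - max_window)
    return [s for s in series if window_start < s.timestamp <= now]
```

With `max_window` kept small by default, a missed scrape or two is bridged; an operator willing to accept extra load after longer outages could raise it.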
Oh, also, this way you can handle high-frequency data without having a high-frequency poll at the federation layer.
I don't think a bounding window is sufficient to prevent cascading failures: even if it requests at most two data points, that means the load on the slave prometheus server could double during an outage - which would be bad.
My experience is that gaps from small network blips don't usually cause problems in practice. I'd try to avoid putting anything critical in a global prometheus, due to the fundamental unreliability of the WAN (and data appearing a bit back in time may cause weirdness with rules) - it's more for general information, with the per-cluster/failure-domain prometheus servers being the place you usually go first.
What about higher frequency data? It seems the scrapes will have to happen at least as fast as the fastest scrape that the lower-level prometheus is doing. Which, assuming prometheus is as well written as I think it is (I'm new to the community)... could be very very fast.
At the global level, high frequency data is much less useful than at a local level.
High-frequency data (on the order of seconds) is primarily useful for debugging things like microbursts for which you usually want to look at a handful of variables in roughly one datacenter at a time to figure things out, and reduce the impact of the various race conditions inherent in monitoring.
At a global level you tend to want a wide range of metrics at no more than a minute granularity. A well-instrumented server will tend to have hundreds to thousands of metrics, and many thousands of time series. Doing scrapes more often will make you run into performance problems sooner without much benefit from the increased frequency; rather, it's the breadth of instrumentation that helps you pin down all bar the microburst-level issues. If anything you'd be looking at downsampling a bit at the global level.
This has been implemented. http://prometheus.io/docs/operating/federation/
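Per those docs, the `/federate` endpoint is scraped like any other target. A minimal scrape config along the lines of the documented example (hostnames and `match[]` selectors below are placeholders) looks like:

```yaml
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true        # keep the source's job/instance labels
    metrics_path: '/federate'
    params:
      'match[]':              # series selectors to pull
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'   # e.g. aggregated recording rules
    static_configs:
      - targets:
        - 'source-prometheus:9090'
```

Federating only aggregated recording rules, rather than raw series, keeps the global-level scrape small, in line with the granularity discussion above.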
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.