Comments (8)

plangdale-roblox commented on June 6, 2024

Thanks @hagen1778. What you've described here should work. Let me describe how we use relabel configs today to make sure there's nothing there that could be problematic in this scheme.

Here is the pattern we use for the agent command line (note the real cluster here has 14 "shards"):

-remoteWrite.relabelConfig=/local/relabel_config.yml
-remoteWrite.url=http://shard-1/insert/0/prometheus/api/v1/write
-remoteWrite.url=http://shard-2/insert/0/prometheus/api/v1/write
-remoteWrite.url=http://shard-3/insert/0/prometheus/api/v1/write
-remoteWrite.url=http://shard-4/insert/0/prometheus/api/v1/write
-remoteWrite.urlRelabelConfig=/local/1_url_relabel.yml,/local/2_url_relabel.yml,/local/3_url_relabel.yml,/local/4_url_relabel.yml

The first, global relabel config looks at tags which identify the sample source and then adds a tag identifying which shard the sample should go to. We ensure the rules here tag every sample (there's a catch-all rule for any samples that don't match any other rule). Each file in the second set of relabel configs then corresponds to one of the remote write URLs and simply keeps samples tagged with the matching shard id.
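
For reference, a minimal sketch of the shape of these files (the label names "source" and "shard" and the regexes below are placeholders, not our real rules):

  # /local/relabel_config.yml (global): tag every sample with a target shard.
  # "source" and "shard" are placeholder label names for this sketch.
  - source_labels: [source]
    regex: "payments.*"
    target_label: shard
    replacement: "1"
  # catch-all: any sample that still has no shard tag goes to the last shard
  - source_labels: [shard]
    regex: "^$"
    target_label: shard
    replacement: "14"

  # /local/1_url_relabel.yml (per-URL): keep only samples routed to shard 1.
  - source_labels: [shard]
    action: keep
    regex: "1"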

So, with the syntax you have proposed:

  • the contents of the relabel config files do not change
  • each remoteWrite.url is replaced by a set of URLs corresponding to a group
  • each of the per-group urlRelabelConfig file locations is prefixed with the group name (e.g. gr1), as you described, so that it is applied to the whole group

If it works that way, then we are good.
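
For concreteness, a rough sketch of how this might look for us under that syntax (the vminsert hostnames and the literal gr<N>/ prefix form are assumptions on my part; the relabel files are our existing ones):

  # sketch only - hostnames and the gr<N>/ prefix form are assumptions
  -remoteWrite.relabelConfig=/local/relabel_config.yml
  -remoteWrite.url=gr1/http://vminsert-1-1/insert/0/prometheus/api/v1/write
  -remoteWrite.url=gr1/http://vminsert-1-2/insert/0/prometheus/api/v1/write
  -remoteWrite.urlRelabelConfig=gr1//local/1_url_relabel.yml
  -remoteWrite.url=gr2/http://vminsert-2-1/insert/0/prometheus/api/v1/write
  -remoteWrite.url=gr2/http://vminsert-2-2/insert/0/prometheus/api/v1/write
  -remoteWrite.urlRelabelConfig=gr2//local/2_url_relabel.yml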

As a minor point, I think the gr1/ prefix syntax might not work in that literal form due to ambiguity with file paths, but obviously a different prefix pattern can be used.

Thanks!


plangdale-roblox commented on June 6, 2024

Currently, streaming aggregation is handled by a second set of agents, so part of the relabel config sends aggregation input samples to these dedicated agents. Those agents then write their output to a single "shard" (i.e. a single group). So everything we've discussed so far should work just fine for those, and if we ever did find ourselves writing aggregation results to multiple groups, that should still work unless I'm missing something.
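
For context, a minimal sketch of one of these dedicated aggregation agents, assuming it is a vmagent using -remoteWrite.streamAggr.config (the URL, file path and aggregation rule below are illustrative placeholders, not our real config):

  # sketch only - URL, path and rule are placeholders
  ./bin/vmagent \
    -remoteWrite.url=http://agg-shard/insert/0/prometheus/api/v1/write \
    -remoteWrite.streamAggr.config=/local/stream_aggr.yml

  # /local/stream_aggr.yml (placeholder rule)
  - match: '{__name__=~"http_requests_total"}'
    interval: 1m
    by: [service]
    outputs: [total]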


hagen1778 commented on June 6, 2024

The only alternative we have today is we would need to run dedicated sets of vmagents in front of each of these storage groups, and our agents would need to be aware of which set of collection agents (not always vmagent) to send which samples to.

If I understand it correctly, the alternative would look like the following:
[topology diagram attached in the original comment]

  1. A shared set of vmagents (L1) that receives all the writes from all other agents/services. L1 is responsible for routing the data stream via relabeling rules.
  2. A fault-domain set of vmagents (L2) that receives writes from the L1 vmagents. L2 is responsible for sharding and replicating data across the storage groups (a rough flag sketch follows this list).
  3. A fault-domain set of vmselects. These vmselects are responsible for reading and deduplicating data from the fault-domain sets of storage groups.
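
A minimal sketch of these two layers, assuming one L2 set per storage group. Hostnames, paths and replica counts are invented for illustration, and the sharding flags assume a vmagent version that supports -remoteWrite.shardByURL / -remoteWrite.shardByURLReplicas:

  # L1: routes samples to the right storage group via relabeling (sketch only)
  ./bin/vmagent \
    -remoteWrite.relabelConfig=/local/relabel_config.yml \
    -remoteWrite.url=http://l2-vmagent-group1/api/v1/write \
    -remoteWrite.urlRelabelConfig=/local/1_url_relabel.yml \
    -remoteWrite.url=http://l2-vmagent-group2/api/v1/write \
    -remoteWrite.urlRelabelConfig=/local/2_url_relabel.yml

  # L2, one such set per storage group: shards and replicates across that group's
  # vminserts; no scraping, relabeling or aggregation at this layer (sketch only)
  ./bin/vmagent \
    -remoteWrite.url=http://vminsert-1-1/insert/0/prometheus/api/v1/write \
    -remoteWrite.url=http://vminsert-1-2/insert/0/prometheus/api/v1/write \
    -remoteWrite.url=http://vminsert-1-3/insert/0/prometheus/api/v1/write \
    -remoteWrite.shardByURL=true \
    -remoteWrite.shardByURLReplicas=2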

Currently, the information about where samples go is fully encapsulated in the configuration of the shared vmagent tier.

According to the scheme above, the routing info still remains encapsulated in the L1 layer.

If we moved to dedicated vmagent sets, we would need to size them independently

Indeed, the L2 vmagents need to be sized and provisioned independently, as do the vminsert/vmselect/vmstorage services. Capacity planning should be pretty straightforward, as these vmagents are responsible only for the replication/sharding process and already receive pre-processed data in an optimal form.

as well as configure certain external services that push samples to us to become aware of which vmagent set is for which metrics.

I don't get this. According to the scheme above, the external services will remain aware of L1 vmagents only.

At a high level, this would mean being able to assign remoteWrite.urls to identifiable groups, and then have the relabelling configs be mapped to specific groups.

I'm afraid of making the configuration params overcomplicated. This topology is already pretty complex; making the L1 layer do everything could be too complicated and rarely used by anyone in the community. The topology with multi-level vmagents seems more transparent to me and provides better flexibility. It also doesn't require overcomplicating the vmagent configuration.


plangdale-roblox commented on June 6, 2024

@hagen1778 Thanks for your reply. I understand why you are proposing this approach, and it is one I considered internally. But there is a real cost associated with running an additional vmagent tier: these agents consume CPU and RAM resources. As we don't need more vmagents to handle load, adding more to handle this distribution work increases our cost to serve. From that perspective, increased vmagent configuration complexity (which I agree will happen) is the preferable option.


hagen1778 commented on June 6, 2024

But there is a real cost associated with running an additional VM Agent tier - these consume CPU and RAM resources.

The L2 vmagents should consume significantly fewer resources than the L1 vmagents, as they don't do scraping/relabeling/aggregation. Resource usage should be about 1 vCPU core and 500MiB of memory for serving 300K samples/s. If we assume overall ingestion of 100Mil samples/s into the system, the L2 layer will require about 330 CPUs and about 160GiB of RAM. With some extra provisioning it would be 400 CPUs and 200GiB of RAM. Or 40 CPUs and 20GiB of RAM for 10Mil samples/s, which is still multiple times above the average ingestion rate of most VM setups.
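
Spelled out, using the same assumed baseline of roughly 1 vCPU + 500MiB per 300K samples/s:

  100M samples/s / 300K samples/s ≈ 333  →  ~333 vCPU,  333 × 500MiB ≈ 163GiB RAM
   10M samples/s / 300K samples/s ≈  33  →   ~33 vCPU,   33 × 500MiB ≈  17GiB RAM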

For your case, how significant would the resource cost of adding the L2 layer be, based on your calculations?


plangdale-roblox commented on June 6, 2024

For the clusters that have this architecture (we call them "sharded clusters", but they are a set of storage/insert/select nodes with a shared agent layer), our aggregate throughput is about 50M samples in, 100M samples out. In practice, there are six of these sharded clusters, so we would need to run a separate L2 for each of them, and we would want some amount of physical isolation, so keep in mind that we would not be able to deploy the L2 as densely as theoretically possible. When you combine that with the operational overhead of additional moving parts that we have to monitor and deal with failures for, it is not something we would want to do casually. Keep in mind that the one-cluster-per-fault-domain design will itself already increase the operational overhead because of the more complex deployment topology.

Thanks.


hagen1778 commented on June 6, 2024

@plangdale-roblox Do you think something like the following would work for your case?

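  # Proposed syntax sketch: the gr1/ and gr2/ prefixes assign each flag value to a
  # group, so the URLs, relabel config, sharding and replication settings that share
  # a prefix apply together to that group.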
./bin/vmagent \
  -remotewrite.url=gr1/vminsert-1-1 \
  -remotewrite.url=gr1/vminsert-1-2 \
  -remotewrite.url=gr1/vminsert-1-3 \
  -remotewrite.urlRelabelConfig=gr1/cfg1 \
  -remotewrite.urlShardByURL=gr1/true \
  -remotewrite.urlShardByURLReplicas=gr1/2 \
  -remotewrite.url=gr2/vminsert-2-1 \
  -remotewrite.url=gr2/vminsert-2-2 \
  -remotewrite.url=gr2/vminsert-2-3 \
  -remotewrite.urlRelabelConfig=gr2/cfg2 \
  -remotewrite.urlShardByURL=gr2/true \
  -remotewrite.urlShardByURLReplicas=gr2/2
[diagram attached in the original comment]

Also, do you plan to use stream aggregation per group?


hagen1778 commented on June 6, 2024

Thanks! What about stream aggregation? Are you going to use it per-group?
