Comments (23)

benner commented on May 26, 2024

Tested in semi-prod. Now it has more than 2M targets. Some stats (from Cloudprober exported metrics):
[screenshots of exported metrics]

manugarg commented on May 26, 2024

@benner If all probes are using the same targets, you can consider using shared_targets.

Sorry, it's not documented, but here is what you can do:

Add the following config stanza at the config file level:

shared_targets {
  name: "my_file_targets"

  targets {
    file_targets {
      ..
    }
  }
}

and then, at the probe level:

probe {
  ...
  targets {
    shared_targets: "my_file_targets"
  }
}

This way we create only one targets structure in memory, and hence only one file refresh loop.

manugarg commented on May 26, 2024

Ah, but you're using different filters. Can these filters be a regex on target names?

targets {
  shared_targets: "my_file_targets"
  regex: ".."
}

benner commented on May 26, 2024

I saw shared_targets and regex filtering, but in my case all probes have different targets. Now I'm using label filters.

benner commented on May 26, 2024

Example config. An entry in the targets file:

resource {
  name: "<target>"
  labels {
    key: "probe"
    value: "<probe name>-<HTTPS|HTTP>"
  }
}

and the file_targets stanza in the probe:

file_targets {
  file_path: "/tmp/targets.textpb"
  filter {
    key: "labels.probe",
    value: "<probe name>-HTTP"
  }
}

manugarg commented on May 26, 2024

I see. I guess rds_targets semantics would have been better for file_targets as well. The RDS server could run in the same process (all transparent to the probes and configuration), and all probes could just use rds_targets.

From a configuration standpoint, it would look like this:

probe {
  ...
  targets {
    rds_targets {
      resource_path: "file://<file_name>"
      filter {
        key: "labels.probe",
        value: "<probe name>-HTTP"
      }
    }
  }
  ...
}

...

rds_server {
  provider {
    id: "file"
    file_config {
      file_path: ....
    }
  }
}

Does that make sense at all? I can try implementing it. It may take a few days to get it reviewed and submitted though.

benner commented on May 26, 2024

I will look into that

manugarg commented on May 26, 2024

@benner, fyi: I am also looking at adding the "file" provider to RDS. I'll try to create a PR soonish, so that you can test with it if you want.

manugarg commented on May 26, 2024

I still need to get the change reviewed internally, but here is the proposed change: #635 (branch rds_file).

With this change you can do the following:

  1. Add the following stanza to your cloudprober.cfg:

rds_server {
  provider {
    file_config {
      file_path: "/tmp/targets.json"
      re_eval_sec: 60
    }
  }
}

  2. Then specify targets in probes like this:

probe {
  ...
  targets {
    rds_targets {
      resource_path: "file://"
      filter {
        key: "labels.probe",
        value: "<probe name>-HTTP"
      }
    }
  }
  ...
}

If you'd like to test, you can download pre-built binaries from here: linux, all.

benner commented on May 26, 2024

I tried rds_targets and it works as expected and achieves what I wanted. What I found missing, or could not find how to change, is rds.ClientConf.re_eval_sec, which has a default of 30s (at this refresh rate Cloudprober grows to ~40 GiB instead of 14 GiB with the static version).

From what I see, rds.ClientConf.re_eval_sec is not reachable from the configuration file:

% rg ClientConf --glob '*.proto'
rds/client/proto/config.proto
18:// ClientConf represents resource discovery service (RDS) based targets.
20:message ClientConf {

targets/proto/targets.proto
22:  optional rds.ClientConf.ServerOptions rds_server_options = 1;
129:  optional rds.ClientConf.ServerOptions rds_server_options = 4;

targets/lameduck/proto/config.proto
43:  optional rds.ClientConf.ServerOptions rds_server_options = 6;

Only the inner ServerOptions can be configured.

manugarg commented on May 26, 2024

Good point regarding not being able to control the client-side re_eval_sec through rds_targets. We can fix it, but I am not sure it will help a lot. Client-side refreshing causes a data copy between server and client, but only of the filtered data.

To change the refresh interval on the server side (where the file is actually reloaded), you can modify your rds_server config:

rds_server {
  provider {
    file_config {
      file_path: "/tmp/targets.json"
      re_eval_sec: 60  # <-- here
    }
  }
}

I am wondering if the 14 GB vs 40 GB difference is only due to temporary objects in memory, or if something else is going on. At the time of a refresh, new objects are created, and old objects should automatically be garbage collected.

I've thought about implementing some improvements to the RDS protocol to track whether resources have changed and to signal to the client when a refresh is not needed, but I've largely avoided that as it may make things more complex and may introduce new failure modes.
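
For illustration only, here is a rough sketch of what such a protocol tweak could look like (all names and types here are invented, not the actual RDS client/server code): the client reports the timestamp of its last successful refresh, and the server skips copying resources when nothing has changed since then.

package main

import "fmt"

// resource is a hypothetical stand-in for an RDS resource.
type resource struct{ name string }

// listRequest and listResponse model the hypothetical exchange.
type listRequest struct {
	ifModifiedSince int64 // client's last refresh time; 0 means "always send"
}

type listResponse struct {
	resources    []resource // left empty when the client is already up to date
	lastModified int64      // timestamp of the server's current data
}

type server struct {
	resources    []resource
	lastModified int64
}

// listResources copies the resource list only if the data changed after the
// client's last refresh; otherwise it returns just the timestamp.
func (s *server) listResources(req listRequest) listResponse {
	if req.ifModifiedSince >= s.lastModified {
		return listResponse{lastModified: s.lastModified}
	}
	return listResponse{resources: s.resources, lastModified: s.lastModified}
}

func main() {
	s := &server{resources: []resource{{"t1"}, {"t2"}}, lastModified: 100}
	upToDate := s.listResources(listRequest{ifModifiedSince: 100})
	stale := s.listResources(listRequest{ifModifiedSince: 0})
	fmt.Println(len(upToDate.resources), len(stale.resources)) // prints: 0 2
}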

Was it 14GB with file_targets or targets directly embedded in the config file?

benner commented on May 26, 2024

I am wondering if 14GB vs 40GB difference is only due to temporary objects in the memory or something else is going on.

I think it is directly related. I actually did some heap dumps (pprof) during the RSS growth and looked at the SVGs.

Attaching one from when RSS was 5 GB (fewer probes). I don't have an SVG from when it was 40 GB, but the situation was the same, only the numbers were bigger.

[heap profile SVG]

Next week I'll redo the test with a hand-modified and recompiled rds.ClientConf.re_eval_sec.

Either way, the situation with the RDS file solution is like 5-6 times better than it was before :-)

benner commented on May 26, 2024

Was it 14GB with file_targets or targets directly embedded in the config file?

Yes

manugarg commented on May 26, 2024

Thanks for sharing the heap profile. A few observations:

  • A large amount of memory is being used while parsing the textproto file - 60% of the total (earlier you were using JSON, right?).
  • 13% is going to runtime.malg -- that's basically for goroutines. I think these must be HTTP probe goroutines.
  • 10% + 3.54% is being used for running probes. Interestingly, ListEndpoints is using only 1.64% of the heap.
  • fileLister.ListResources is using 8.4% of the heap space.

Thoughts:

  1. Parsing textproto is obviously the most expensive operation -- it seems that it uses a lot of intermediate storage for the TextProto->Proto conversion. One thing that could improve this usage is to somehow parse the text proto one resource at a time and re-use intermediate data structures. Not sure if it's possible to do that in textproto (I can think of how we could do something like this in JSON, using json.RawMessage to do lazy decoding; see the sketch after this list). If we can do it, it will improve the memory usage, but may increase compute time a little bit, though hopefully not by much, because behind the scenes the TextProto->Proto decoder is also parsing one message at a time.

  2. Goroutine heap usage and probes' heap usage is largely unavoidable.

  3. We can likely improve fileLister.ListResources's performance. Currently we are allocating an array as big as the total number of resources (to avoid doing too many allocs), but that may be the wrong strategy. We could start with a minimum size, say 10 or even 100, and then let append handle the growth of the slice.

I'll look at implementing 1 (at least for JSON) & 3.
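
As a minimal sketch of the json.RawMessage idea from point 1 (the {"resources": [...]} layout, field names, and code are assumptions for illustration, not Cloudprober's actual file format or implementation), the outer unmarshal only captures raw bytes per resource, and each one is then decoded lazily into a single reused struct:

package main

import (
	"encoding/json"
	"fmt"
)

// resource is a hypothetical stand-in for a targets-file entry.
type resource struct {
	Name   string            `json:"name"`
	Labels map[string]string `json:"labels"`
}

// targetsFile defers per-resource decoding: every array element stays as
// raw bytes until we explicitly unmarshal it.
type targetsFile struct {
	Resources []json.RawMessage `json:"resources"`
}

func main() {
	data := []byte(`{"resources": [{"name": "t1"}, {"name": "t2"}]}`)

	var tf targetsFile
	if err := json.Unmarshal(data, &tf); err != nil {
		panic(err)
	}

	var res resource // reused across iterations
	for _, raw := range tf.Resources {
		res = resource{} // reset instead of allocating a fresh struct
		if err := json.Unmarshal(raw, &res); err != nil {
			panic(err)
		}
		fmt.Println(res.Name)
	}
}

The loop is also where the append strategy from point 3 would apply: grow the output slice from a small initial capacity instead of preallocating one slot per resource.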

benner commented on May 26, 2024

I was always using TextProto

benner commented on May 26, 2024

Tested the full configuration with the default 30s and with 60 min:

rds/client/client.go:
+	// reEvalInterval := time.Duration(client.c.GetReEvalSec()) * time.Second
+	reEvalInterval := 30 * 60 * time.Second

With a 60 min re_eval_sec it works as expected. RSS grows almost the same as with the static configuration.

With the default 30s interval, the main driver of memory usage is the large number of probes. In my setup that means the RDS server receives around 3 requests per second. Result: all CPUs are busy (20 logical CPUs) and memory grows to ~40 GB.

30s, ~900 probes:
[metrics screenshot]

60 min, ~900 probes:
[metrics screenshot]

One drawback of a longer re_eval_sec is that actual probing of updated targets can be delayed by up to that interval, but I can live with that, because the whole point of this is to keep Cloudprober running without restarts :-)

manugarg commented on May 26, 2024

Interesting. Thanks again for the heap profiles @benner!

So ListResources should get better with the change that I just committed in the main branch: #640. There was inefficient resource allocation in the previous change.

Another improvement I am planning to make is to reload the file "only if" it has been modified since the last load. With that, you won't have to worry about re_eval_sec so much.
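
A minimal sketch of that check (assumed for illustration; not the actual Cloudprober code), using the file's modification time to skip redundant reloads:

package main

import (
	"fmt"
	"os"
	"time"
)

// fileLister is a hypothetical stand-in for the RDS file provider's lister.
type fileLister struct {
	path    string
	lastMod time.Time
}

// refresh re-reads and re-parses the file only if its mtime has advanced
// since the last successful load.
func (l *fileLister) refresh() error {
	fi, err := os.Stat(l.path)
	if err != nil {
		return err
	}
	if !fi.ModTime().After(l.lastMod) {
		return nil // unchanged: skip the expensive read and parse
	}
	data, err := os.ReadFile(l.path)
	if err != nil {
		return err
	}
	l.lastMod = fi.ModTime()
	fmt.Printf("reloaded %d bytes\n", len(data)) // parse resources here
	return nil
}

func main() {
	l := &fileLister{path: "/tmp/targets.json"} // path from the example config
	for range time.Tick(30 * time.Second) {     // a low re_eval_sec stays cheap
		if err := l.refresh(); err != nil {
			fmt.Println("refresh error:", err)
		}
	}
}

With this, a low re_eval_sec only costs a stat() per interval until the file actually changes.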

manugarg commented on May 26, 2024

Even though the resource allocation efficiency improvement mentioned in the previous update will help quite a bit, I think #641 will have the most impact for a scenario like yours.

manugarg commented on May 26, 2024

With all the changes so far, I think you should see a substantial improvement in performance (much better than the default case earlier). You can even keep re_eval_sec very low in the RDS file provider config -- it won't reload the file unless it has changed.

Next up is #641. I'm working on it.

manugarg commented on May 26, 2024

@benner, I think we should be good here now. You should be able to use a lower re_eval_sec (30s) without any performance impact. Until you update the targets file, no new large data allocations should happen.

Let me know how it goes.

benner commented on May 26, 2024

Thank you. Tested for a few hours. Everything seems even better.

[memory usage graph]

P.S. I tested on my laptop with the full config but without actual HTTP requests. After vacation I will test on the real system.

manugarg commented on May 26, 2024

Awesome. Great to hear. Thanks for checking!

manugarg commented on May 26, 2024

@benner Awesome, great to see.
