
Comments (4)

da2ce7 commented on July 20, 2024

@josecelano there was a bug in my patch that allowed the test to fail; now I yield before I return the new "filled" ring-buffer.

Here is the updated patch:

da2ce7@d030030
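For context, this is a minimal sketch of the "yield before returning the filled ring-buffer" idea, assuming a tokio runtime. It is not the actual patch: `VecDeque` stands in for the ring-buffer and `Entry` is a hypothetical item type.

```rust
use std::collections::VecDeque;

type Entry = u64; // placeholder for whatever the buffer actually stores

async fn fill_ring_buffer(capacity: usize) -> VecDeque<Entry> {
    let mut buffer = VecDeque::with_capacity(capacity);
    for i in 0..capacity as u64 {
        buffer.push_back(i); // fill the buffer with new entries
    }
    // Cooperatively yield back to the scheduler before returning the
    // "filled" buffer, so other tasks get a chance to make progress.
    tokio::task::yield_now().await;
    buffer
}
```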


da2ce7 commented on July 20, 2024

@josecelano I think that you are correct:

da2ce7@ff9b43d

This commit should fix the problem. However, now I don't see any advantage of using the ring-buffer over a standard vector, since we are not inserting or removing entries from the buffer, but just swapping them for new entries.
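To illustrate that point, here is a rough, hypothetical sketch (not the project's code) of "just swapping entries": a plain, fixed-size `Vec` plus a wrapping index already does the job, with no queue semantics needed.

```rust
/// A fixed-size buffer whose slots are only ever overwritten,
/// never inserted into or removed from.
struct SlotBuffer<T> {
    slots: Vec<Option<T>>,
    next: usize, // next slot to overwrite; wraps around
}

impl<T> SlotBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self {
            slots: (0..capacity).map(|_| None).collect(),
            next: 0,
        }
    }

    /// Swap a new entry into the oldest slot and return the evicted one.
    fn swap_in(&mut self, value: T) -> Option<T> {
        let evicted = self.slots[self.next].replace(value);
        self.next = (self.next + 1) % self.slots.len();
        evicted
    }
}
```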


josecelano commented on July 20, 2024

> @josecelano I think that you are correct:
>
> da2ce7@ff9b43d
>
> This commit should fix the problem. However, now I don't see any advantage of using the ring-buffer over a standard vector, since we are not inserting or removing entries from the buffer, but just swapping them for new entries.

Hi @da2ce7, after this issue came up, I restarted the live demo. Now I can get this graph with Datadog by parsing the "request-aborted" lines in the logs.

[image: Datadog graph built from the "request-aborted" log lines]

Regarding your patch, I guess the socket now acts as a buffer for all incoming requests, and we only handle them when the previous ones have finished. I suppose that will consume more memory, but it's a trade-off between rejecting the requests or consuming more memory and increasing the response time.
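For illustration only, this is a minimal sketch of that "socket as buffer" behaviour, assuming a tokio runtime: the loop only reads the next datagram when there is room among the in-flight requests, so unread datagrams queue up in the kernel's socket buffer. The semaphore limit and the `handle_request` helper are hypothetical names, not the tracker's actual code.

```rust
use std::sync::Arc;
use tokio::{net::UdpSocket, sync::Semaphore};

async fn run(socket: Arc<UdpSocket>, max_in_flight: usize) -> std::io::Result<()> {
    let permits = Arc::new(Semaphore::new(max_in_flight));
    let mut buf = [0u8; 2048];
    loop {
        // Do not read the next datagram until a slot is free; meanwhile,
        // pending datagrams pile up in the kernel's socket buffer.
        let permit = permits.clone().acquire_owned().await.expect("semaphore closed");
        let (len, from) = socket.recv_from(&mut buf).await?;
        let payload = buf[..len].to_vec();
        let socket = socket.clone();
        tokio::spawn(async move {
            handle_request(&socket, &payload, from).await;
            drop(permit); // the slot is free again
        });
    }
}

async fn handle_request(socket: &UdpSocket, payload: &[u8], from: std::net::SocketAddr) {
    // Parse the request and build a real response here; echo as a placeholder.
    let _ = socket.send_to(payload, from).await;
}
```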

NOTE: Remember, we still don't have a timeout for processing the UDP requests.

I have to think about this, but I prefer directly controlling that buffer in the application. I mean, I would read all incoming packets as soon as possible and keep those pending tasks in the app. That way, we could implement policies to handle the overload (see the sketch after this list). For example, we could:

  • Reject new requests when the buffer is full. With the current solution, we don't know when it is full (I guess), or even if we could find out, it would be less readable.
  • Ignore new requests when the buffer is full. We don't even send a response in the current version. That doesn't have to be bad. Maybe it's also a way to save bandwidth, and with UDP, the client shouldn't have a problem with that.
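Here is a hedged sketch of those two policies, assuming the app keeps pending requests in a bounded tokio mpsc channel; `PendingRequest`, `OverloadPolicy` and `enqueue` are made-up names for illustration. Because the app owns the channel, `try_send` tells it exactly when the buffer is full, and it can then either answer with a "busy" packet or stay silent.

```rust
use std::net::SocketAddr;
use tokio::sync::mpsc;

struct PendingRequest {
    payload: Vec<u8>,
    from: SocketAddr,
}

enum OverloadPolicy {
    Reject, // answer with a "busy" response
    Ignore, // silently drop the request to save bandwidth
}

/// Returns `Some(request)` when the caller should send a "busy" reply.
fn enqueue(
    queue: &mpsc::Sender<PendingRequest>,
    request: PendingRequest,
    policy: &OverloadPolicy,
) -> Option<PendingRequest> {
    match queue.try_send(request) {
        Ok(()) => None,
        // The buffer is full: the app knows it and applies a policy.
        Err(mpsc::error::TrySendError::Full(request)) => match policy {
            OverloadPolicy::Reject => Some(request),
            OverloadPolicy::Ignore => None,
        },
        Err(mpsc::error::TrySendError::Closed(_)) => None,
    }
}
```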

Maybe a more common solution would be more straightforward for newcomers to understand. For example, we could have a pool of workers processing requests. The main loop could get all incoming requests from the socket and send them to the workers by using a channel (sketched below, after the list). The channel would be the buffer. Maybe under the hood, this implementation is the same, but it has two advantages:

  1. The app controls the buffer. We can probably get info from the socket buffer, but I guess the API is going to be harder to understand.
  2. The worker pattern is well-known and documented. ChatGPT can create the code for that :-) and can explain it to other developers.
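As mentioned above, this is a minimal, hypothetical sketch of the worker-pool approach with a tokio mpsc channel as the buffer; the pool size, queue size and `process` function are made-up names rather than the tracker's actual code.

```rust
use std::{net::SocketAddr, sync::Arc};
use tokio::{net::UdpSocket, sync::{mpsc, Mutex}};

const NUM_WORKERS: usize = 4;   // assumed pool size
const QUEUE_SIZE: usize = 1024; // assumed channel capacity: this is "the buffer"

async fn serve(socket: Arc<UdpSocket>) -> std::io::Result<()> {
    let (tx, rx) = mpsc::channel::<(Vec<u8>, SocketAddr)>(QUEUE_SIZE);
    let rx = Arc::new(Mutex::new(rx)); // the receiver is shared by the workers

    // Spawn the pool: each worker pulls pending requests from the channel.
    for _ in 0..NUM_WORKERS {
        let rx = rx.clone();
        let socket = socket.clone();
        tokio::spawn(async move {
            loop {
                // Hold the lock only while waiting for the next job, then
                // release it so the other workers can receive in parallel.
                let job = rx.lock().await.recv().await;
                match job {
                    Some((payload, from)) => process(&socket, payload, from).await,
                    None => break, // channel closed
                }
            }
        });
    }

    // Main loop: read datagrams as fast as possible and push them into the buffer.
    let mut buf = [0u8; 2048];
    loop {
        let (len, from) = socket.recv_from(&mut buf).await?;
        if tx.send((buf[..len].to_vec(), from)).await.is_err() {
            break; // all workers are gone
        }
    }
    Ok(())
}

async fn process(socket: &UdpSocket, payload: Vec<u8>, from: SocketAddr) {
    // Parse and answer the request here; echo as a placeholder.
    let _ = socket.send_to(&payload, from).await;
}
```

With `try_send` instead of `send` in the main loop, this variant could also apply the reject/ignore policies from the previous list.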

Finally, limiting the number of active requests with the current version (without your new patch) is a way of limiting resources. In this case, we limit CPU usage (without even sending a "busy" response to clients) and memory usage, because task processors consume memory. I'm not sure we should limit resources without providing a way to monitor the actual server load. I mean, this limitation was probably hiding an overload problem. I have been monitoring the memory consumption in the demo, and I didn't see this problem (we were rejecting a lot of requests). I only saw a CPU problem when it was caused by swapping.

In conclusion, I think we should disable all limits (your new patch disables them, unless there is a limit for the socket buffer) and monitor the servers to scale up. If we want to introduce some kind of limitation, we have to think about how to monitor it so sysadmins can detect overload.

I've been discussing these limitations here.

I will merge your patch for the time being.


josecelano commented on July 20, 2024

Just for the record, this is the Datadog dashboard where I put the number of handled requests (bottom) and the number of aborted requests (top):

[image: Datadog dashboard showing handled requests (bottom) and aborted requests (top)]

