Comments (4)
@josecelano there was a bug in my patch that caused the test to fail; now I yield before I return the new "filled" ring-buffer.
Here is the updated patch:
from torrust-tracker.
@josecelano I think that you are correct:
This commit should fix the problem. However, now I don't see any advantage of using the ring-buffer over a standard vector, since we are not inserting or removing entries from the buffer, but just swapping them for new entries.
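The swap-only usage described above can be sketched in a few lines. This is a hypothetical illustration (the slot type and names are placeholders, not the tracker's real types): when entries are only ever overwritten in place, a plain `Vec` with `std::mem::replace` gives the same behavior as a ring-buffer, with none of the queue semantics.

```rust
// Hypothetical sketch: a fixed-size pool of slots where entries are
// only swapped for new ones, never inserted or removed. A plain Vec
// is enough; no ring-buffer (VecDeque) semantics are needed.
fn main() {
    // Fixed pool of "active request" slots (placeholder u32 entries).
    let mut slots: Vec<u32> = vec![0; 4];

    // Swap the entry in slot 2 for a new one; the length and the
    // positions of all other entries are unchanged.
    let old = std::mem::replace(&mut slots[2], 42);

    println!("old={old}, slots={slots:?}"); // old=0, slots=[0, 0, 42, 0]
}
```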
Hi @da2ce7, after this issue emerged:
I restarted the live demo. Now, I can get this graph with Datadog by parsing the "request-aborted" lines in the logs.
Regarding your patch, I guess the socket now acts as a buffer for all incoming requests, and we only handle them when the previous ones have finished. I suppose that will consume more memory, but it's a trade-off: either we reject requests, or we consume more memory and increase the response time.
NOTE: Remember, we still don't have a timeout for processing the UDP requests.
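One std-only way to sketch such a timeout (the tracker itself is async, so a real implementation would look different, e.g. wrapping the handler future in a timeout; `handle_request` and `process_with_timeout` here are hypothetical names, not tracker code):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical placeholder for real UDP request processing.
fn handle_request(payload: u64) -> u64 {
    payload + 1
}

// Run the handler on a worker thread and wait a bounded time for its
// result; `None` means the request took too long.
fn process_with_timeout(payload: u64, limit: Duration) -> Option<u64> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore send errors: the receiver may have timed out and gone away.
        let _ = tx.send(handle_request(payload));
    });
    rx.recv_timeout(limit).ok()
}

fn main() {
    let answer = process_with_timeout(41, Duration::from_millis(100));
    println!("{answer:?}"); // Some(42) here, since the handler returns instantly
}
```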
I have to think about this, but I prefer directly controlling that buffer in the application. I mean, I would read all incoming packets as soon as possible and keep those pending tasks in the app. That way, we could implement policies to handle the overload. For example, we could:
- Reject new requests when the buffer is full. With the current solution, we don't know when it is full (I guess), and even if we could find out, the code would be less readable.
- Ignore new requests when the buffer is full. We don't even send a response in the current version. That isn't necessarily bad: it also saves bandwidth, and with UDP the client shouldn't have a problem with that.
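Both policies fall out naturally if the application owns a bounded buffer. A minimal sketch, assuming a bounded channel stands in for that buffer (capacity and request type are made up for illustration): `try_send` fails when the buffer is full, and at that point we can either answer "busy" (reject) or drop the packet silently (ignore).

```rust
use std::sync::mpsc;

fn main() {
    // Bounded channel as the application-controlled buffer:
    // at most 2 pending requests (tiny capacity, for illustration).
    let (tx, _rx) = mpsc::sync_channel::<u64>(2);

    for request_id in 0..4u64 {
        match tx.try_send(request_id) {
            Ok(()) => println!("queued request {request_id}"),
            // Buffer full: this is where we choose the policy, e.g.
            // send a "busy" response (reject) or do nothing (ignore).
            Err(mpsc::TrySendError::Full(_)) => {
                println!("buffer full, dropping request {request_id}");
            }
            Err(mpsc::TrySendError::Disconnected(_)) => break,
        }
    }
}
```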
Maybe a more common solution would be easier for newcomers to understand. For example, we could have a pool of workers processing requests. The main loop would get all incoming requests from the socket and send them to the workers through a channel; the channel would be the buffer. Under the hood this implementation may be the same, but it has two advantages:
- The app controls the buffer. We can probably get info from the socket buffer, but I guess that API would be harder to understand.
- The worker pattern is well-known and documented. ChatGPT can generate the code for it :-) and explain it to other developers.
Finally, limiting the number of active requests with the current version (without your new patch) is a way of limiting resources. In this case, we limit CPU usage (not even sending a "busy" response to clients) and memory usage, because task processors consume memory. I'm not sure we should limit resources without providing a way to monitor the actual server load; this limitation was probably hiding an overload problem. I have been monitoring the memory consumption in the demo, and I didn't see this problem (we were rejecting a lot of requests). I only saw a CPU problem when it was caused by swapping. In conclusion, I think we should disable all limits (your new patch disables them, unless there is a limit on the socket buffer) and monitor the servers so we can scale up. If we want to introduce some kind of limitation, we have to think about how to monitor it so sysadmins can detect overload.
I've been discussing these limitations here.
I will merge your patch for the time being.
Just for the record, this is the Datadog dashboard where I put the number of handled requests (bottom) and the number of aborted requests (top).