
Comments (11)

atomashpolskiy avatar atomashpolskiy commented on July 1, 2024

This is a good observation. However, tracking individual block requests and responses would be overkill. How about increasing the default timeout to 2 minutes?

from bt.

ckovorodkin avatar ckovorodkin commented on July 1, 2024

However, tracking individual block requests and responses would be overkill.

Actually, it is already tracked; it's just not used.

Just replace

long duration = System.currentTimeMillis() - started;

with
long duration = System.currentTimeMillis() - max(started, checked);

and


with
shouldAssign = !timeoutedPeers.containsKey(peer);

and for more accurate timeout tracking

move

if (connectionState.getCurrentAssignment().isPresent()) {
    Assignment assignment = connectionState.getCurrentAssignment().get();
    if (pieceIndex == assignment.getPiece()) {
        assignment.check();
    }
}

to line 64:

// check that this block was requested in the first place
if (!checkBlockIsExpected(peer, connectionState, piece)) {
    return;
}
// discard blocks for pieces that have already been verified
if (bitfield.isVerified(piece.getPieceIndex())) {

and finally

replace

this.maxPieceReceivingTime = Duration.ofSeconds(30);

with
this.maxPieceReceivingTime = Duration.ofSeconds(10);

(and rename maxPieceReceivingTime to a more fitting name)
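The effect of the `max(started, checked)` change above can be sketched as follows. This is a hypothetical standalone model, not bt's actual classes; timestamps are passed in explicitly for testability. Inactivity is measured from the later of the assignment start and the last block receipt, so a peer that keeps delivering blocks is never timed out mid-piece:

```java
import static java.lang.Math.max;

// Hedged sketch; field names mirror the snippets in this thread,
// not necessarily the current bt sources.
final class AssignmentSketch {
    private final long started; // when the piece was assigned, epoch millis
    private long checked;       // last block receipt for this piece, 0 if none yet

    AssignmentSketch(long started) {
        this.started = started;
        this.checked = 0;
    }

    // Called whenever a block belonging to the assigned piece arrives
    void check(long now) {
        checked = now;
    }

    // Time since the last sign of activity, not since assignment start
    long inactivityMillis(long now) {
        return now - max(started, checked);
    }
}
```

With the original `now - started`, a slow but steadily delivering peer would eventually exceed any fixed piece timeout; with `max(started, checked)`, only genuine silence counts against it.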

PS

All my changes related to this issue can be seen in my fork:
ckovorodkin@6252697


ckovorodkin avatar ckovorodkin commented on July 1, 2024

BTW, is it OK that during timeoutedAssignmentPeerBanDuration, Have messages are still being sent to the peer (and PEX exchange still occurs)?


atomashpolskiy avatar atomashpolskiy commented on July 1, 2024

On the surface, this looks good. But there might be a problem with time-tracking individual blocks: the sender will not always send the requested blocks at equal intervals. Say we request several blocks at a time (by sending multiple requests from the request queue); the sender may optimize as well and send all requested blocks at once, and this may happen well after the individual block receipt timeout has elapsed. Compare the following timelines of receiving blocks from two peers:

  • request B1..4 from peer X -----3sec----> B1 -----3sec----> B2 -----3sec----> B3 -----3sec----> B4
  • request B1..4 from peer Y -----12sec----> B1..4

The average download rate is the same for both peers (peer Y may even be slightly faster than peer X by sending the blocks after 11 seconds), but with your changes the second peer (Y) would be considered to have timed out.
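The two timelines above can be replayed numerically. This is an illustrative sketch with hypothetical numbers, not bt code: both peers deliver four blocks in 12 seconds, but a fixed 10-second per-block timeout only trips for the peer that batches its responses.

```java
final class BlockTimingDemo {
    // Longest gap between consecutive block deliveries,
    // given cumulative delivery times in seconds (requests sent at t=0)
    static int maxGapSeconds(int[] deliveryTimes) {
        int prev = 0, maxGap = 0;
        for (int t : deliveryTimes) {
            maxGap = Math.max(maxGap, t - prev);
            prev = t;
        }
        return maxGap;
    }

    public static void main(String[] args) {
        int[] peerX = {3, 6, 9, 12};    // one block every 3 seconds
        int[] peerY = {12, 12, 12, 12}; // all four blocks at once after 12 s
        int blockTimeoutSec = 10;
        System.out.println("peer X times out: " + (maxGapSeconds(peerX) > blockTimeoutSec)); // false
        System.out.println("peer Y times out: " + (maxGapSeconds(peerY) > blockTimeoutSec)); // true
    }
}
```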

BTW, is it OK that during timeoutedAssignmentPeerBanDuration, Have messages are still being sent to the peer (and PEX exchange still occurs)?

Yeah, I think it's OK. BEP-3 requires sending Have messages to all connected peers, because the most important thing is to keep everybody's view of the state of the swarm up to date.


atomashpolskiy avatar atomashpolskiy commented on July 1, 2024

I think we can just replace the existing option with a new relative metric: the max amount of time to receive one unit of data (e.g. 1 MiB). Then calculate a session-specific timeout for receiving a piece based on the size of a "standard" piece in the torrent.
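The relative metric suggested above could be applied like this. A minimal sketch with illustrative names and values, not an actual bt option:

```java
final class RelativeTimeout {
    // Per-piece timeout derived from a configured "max time per MiB"
    // and the torrent's standard piece size
    static long pieceTimeoutMillis(long pieceSizeBytes, long maxMillisPerMiB) {
        double pieceSizeMiB = pieceSizeBytes / (1024.0 * 1024.0);
        return (long) (pieceSizeMiB * maxMillisPerMiB);
    }
}
```

For a torrent with 2 MiB pieces and an assumed limit of 10 seconds per MiB, this yields a 20-second piece timeout; a torrent with 256 KiB pieces would get 2.5 seconds from the same single configured number.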


ckovorodkin avatar ckovorodkin commented on July 1, 2024

Metric: minimum download speed, for example 1.0 KiB/sec.

And use it to calculate the inactivity timeout via the equation pending block count * block size / minimum download speed.
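That equation can be sketched as follows (illustrative names, not a bt API; the minimum-speed value is an assumed configuration parameter):

```java
final class SpeedBasedTimeout {
    // Inactivity timeout = outstanding bytes / configured minimum speed
    static long inactivityTimeoutMillis(int pendingBlocks,
                                        long blockSizeBytes,
                                        long minSpeedBytesPerSec) {
        long pendingBytes = (long) pendingBlocks * blockSizeBytes;
        return pendingBytes * 1000 / minSpeedBytesPerSec;
    }
}
```

For example, 4 pending 8 KiB blocks at a 1 KiB/s minimum speed gives a 32-second timeout; the timeout shrinks as the request queue drains, which is what enables the quicker inactivity detection argued for below.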


atomashpolskiy avatar atomashpolskiy commented on July 1, 2024

Metric: minimum download speed, for example 1.0 KiB/sec.

This is basically the same thing as "max amount of time to receive one unit of data", but differently worded and, when expressed in code directly (for instance, as some Rate object), harder to serialize/deserialize than a simple number.

And use it to calculate the inactivity timeout via the equation pending block count * block size / minimum download speed.

This is just an approximation, so I don't think it's incredibly useful to constantly perform this recalculation... piece_size * max_data_receive_time will do just fine (both values should be normalized to the same units, most likely KiBs)


ckovorodkin avatar ckovorodkin commented on July 1, 2024

It is more efficient to track a smaller amount of data, because that makes it possible to detect inactivity quickly.
Usually the piece size is about 2 MiB, while the block size is 8 KiB, a 256:1 ratio. With a 10-second block timeout, the equivalent piece timeout is 2560 sec = ~42 min. This is too long.
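The arithmetic behind the 256:1 figure, spelled out (sizes taken from the comment above):

```java
final class PieceTimeoutScale {
    static long blocksPerPiece(long pieceSizeBytes, long blockSizeBytes) {
        return pieceSizeBytes / blockSizeBytes;
    }

    // Worst case: every block arrives just inside its own timeout
    static long worstCasePieceTimeoutSec(long blocksPerPiece, long blockTimeoutSec) {
        return blocksPerPiece * blockTimeoutSec;
    }

    public static void main(String[] args) {
        long blocks = blocksPerPiece(2L * 1024 * 1024, 8L * 1024); // 256 blocks
        long sec = worstCasePieceTimeoutSec(blocks, 10);           // 2560 s, ~42.7 min
        System.out.println(blocks + " blocks per piece, worst case " + sec + " s");
    }
}
```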


atomashpolskiy avatar atomashpolskiy commented on July 1, 2024

It is more efficient to track a smaller amount of data, because that makes it possible to detect inactivity quickly.

Efficient from what perspective? Is quickly detecting inactivity of one of the many active peers really that important? I'd rather say the opposite is true: punishing occasional pauses too quickly is detrimental in the longer run.


atomashpolskiy avatar atomashpolskiy commented on July 1, 2024

Instead of short-term micromanagement, we would eventually like to maintain long-term statistics for all encountered peers (perhaps even persisted between sessions) and prefer assigning pieces to peers that have proven more responsive and timely (while still keeping a reasonable, but not too punishing, cap on receiving time).


ckovorodkin avatar ckovorodkin commented on July 1, 2024

Efficient from what perspective? Is quickly detecting inactivity of one of the many active peers really that important?

For real-time streaming applications it is very important. One inactive peer can stall the stream.

On the surface, this looks good. But there might be a problem with time-tracking individual blocks: the sender will not always send the requested blocks at equal intervals. Say we request several blocks at a time (by sending multiple requests from the request queue); the sender may optimize as well and send all requested blocks at once, and this may happen well after the individual block receipt timeout has elapsed.

Just a note: if the whole channel bandwidth is utilized by all downloads, then it makes sense to download blocks from a peer one by one. This is also the preferred behaviour for real-time streaming (where a peer gets its task at block scope, not piece scope).

Anyway, I plan to implement simultaneous loading of a piece by several peers in normal mode (like endgame, but without random() and duplicates).
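One way the duplicate-free multi-peer idea could work is sketched below. This is purely illustrative and not bt's actual API: deal the blocks of one piece out to several peers deterministically, so that, unlike endgame mode, no block is requested twice.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

final class MultiPeerAssignment {
    // Round-robin the block indices of one piece across the given peers;
    // each block is assigned to exactly one peer (no duplicates)
    static Map<String, List<Integer>> assign(List<String> peers, int blockCount) {
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        for (String peer : peers) {
            result.put(peer, new ArrayList<>());
        }
        for (int block = 0; block < blockCount; block++) {
            result.get(peers.get(block % peers.size())).add(block);
        }
        return result;
    }
}
```

A real implementation would also need to reassign blocks when a peer goes silent, but the invariant (each block outstanding at exactly one peer) is the part that distinguishes this from endgame.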

