
Comments (4)

vamsiDT commented on July 17, 2024

Hi, are you using abm-evaluation.cc? If so, the RED queue disc is used only for marking packets. Its queue size is set to 100 MB in order to bypass RED's drop actions, so setting HardDrop to true or false should not affect packet drops here.

It would help if you pasted your configuration for DCTCP and RED, e.g., the switch case starting at line 548 in abm-evaluation.cc.

As a final check: whether a retransmission is triggered really depends on the nature of the drop. Not all drops cause retransmissions. If the setup is simple enough, you can also check whether the drops you mentioned are recovered in fast recovery. It should be possible to log fast-recovery events from tcp-socket-base.cc.
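One way to log those events, sketched against the standard ns-3 "CongState" trace source on TcpSocketBase (this is my suggestion, not code from abm-evaluation.cc; adapt the config path to your topology and connect only after the sockets exist):

```cpp
// Sketch: log entries into fast recovery via the "CongState" trace source.
// Assumes standard ns-3 headers (ns3/core-module.h, ns3/internet-module.h).
static void
CongStateChange (ns3::TcpSocketState::TcpCongState_t oldState,
                 ns3::TcpSocketState::TcpCongState_t newState)
{
  if (newState == ns3::TcpSocketState::CA_RECOVERY)
    {
      std::cout << ns3::Simulator::Now ().GetSeconds ()
                << "s: socket entered fast recovery" << std::endl;
    }
}

// Call this from a callback scheduled after the applications have started,
// so that the sockets are already in the SocketList:
void
ConnectCongStateTraces ()
{
  ns3::Config::ConnectWithoutContext (
      "/NodeList/*/$ns3::TcpL4Protocol/SocketList/*/CongState",
      ns3::MakeCallback (&CongStateChange));
}
```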

Regarding "only with HardDrop being false": my guess is that the RED queue is initialized with its default size of "25p" in your case, and there are frequent/aggressive drops which eventually cause a retransmission. Further, setting HardDrop to true essentially turns the RED queue into a drop-tail FIFO, and DCTCP falls back to Reno behaviour (probably why there are more losses and retransmissions).

from ns3-datacenter.

xenchieng1 commented on July 17, 2024

Yes, I don't think I changed anything, but I'll paste from my code here:

```cpp
case DCTCP:
    Config::SetDefault ("ns3::TcpL4Protocol::SocketType", TypeIdValue (ns3::TcpDctcp::GetTypeId ()));
    Config::SetDefault ("ns3::RedQueueDisc::UseEcn", BooleanValue (true));
    Config::SetDefault ("ns3::RedQueueDisc::QW", DoubleValue (1.0));
    Config::SetDefault ("ns3::RedQueueDisc::MinTh", DoubleValue (RedMinTh * PACKET_SIZE));
    Config::SetDefault ("ns3::RedQueueDisc::MaxTh", DoubleValue (RedMaxTh * PACKET_SIZE));
    Config::SetDefault ("ns3::RedQueueDisc::MaxSize", QueueSizeValue (QueueSize ("100MB"))); // This is just for initialization. The buffer management algorithm will take care of the rest.
    Config::SetDefault ("ns3::TcpSocketBase::UseEcn", StringValue ("On"));
    Config::SetDefault ("ns3::RedQueueDisc::LInterm", DoubleValue (0.0));
    Config::SetDefault ("ns3::RedQueueDisc::UseHardDrop", BooleanValue (false));
    Config::SetDefault ("ns3::RedQueueDisc::Gentle", BooleanValue (false));
    Config::SetDefault ("ns3::RedQueueDisc::MeanPktSize", UintegerValue (PACKET_SIZE));
    Config::SetDefault ("ns3::Ipv4GlobalRouting::FlowEcmpRouting", BooleanValue (true));
    UseEcn = 1;
    ecnEnabled = "EcnEnabled";
    Config::SetDefault ("ns3::GenQueueDisc::nPrior", UintegerValue (nPrior));
    Config::SetDefault ("ns3::GenQueueDisc::RoundRobin", UintegerValue (1));
    Config::SetDefault ("ns3::GenQueueDisc::StrictPriority", UintegerValue (0));
    handle = tc.SetRootQueueDisc ("ns3::GenQueueDisc");
    cid = tc.AddQueueDiscClasses (handle, nPrior, "ns3::QueueDiscClass");
    for (uint32_t num = 0; num < nPrior; num++)
      {
        tc.AddChildQueueDisc (handle, cid[num], "ns3::RedQueueDisc",
                              "MinTh", DoubleValue (RedMinTh * PACKET_SIZE),
                              "MaxTh", DoubleValue (RedMaxTh * PACKET_SIZE));
      }
    break;
```

Also, here is my TcpSocketBase config, which appears above the DCTCP case:

```cpp
Config::SetDefault ("ns3::TcpSocket::ConnTimeout", TimeValue (MilliSeconds (10))); // SYN retry interval
Config::SetDefault ("ns3::TcpSocketBase::MinRto", TimeValue (MicroSeconds (rto))); // (MilliSeconds (5))
Config::SetDefault ("ns3::TcpSocketBase::RTTBytes", UintegerValue (RTTBytes));
Config::SetDefault ("ns3::TcpSocketBase::ClockGranularity", TimeValue (NanoSeconds (10))); // (MicroSeconds (100))
Config::SetDefault ("ns3::RttEstimator::InitialEstimation", TimeValue (MicroSeconds (200))); // (MicroSeconds (80))
Config::SetDefault ("ns3::TcpSocket::SndBufSize", UintegerValue (1073725440));
Config::SetDefault ("ns3::TcpSocket::RcvBufSize", UintegerValue (1073725440));
Config::SetDefault ("ns3::TcpSocket::ConnCount", UintegerValue (6)); // SYN retry count
Config::SetDefault ("ns3::TcpSocketBase::Timestamp", BooleanValue (true));
Config::SetDefault ("ns3::TcpSocket::SegmentSize", UintegerValue (PACKET_SIZE));
Config::SetDefault ("ns3::TcpSocket::DelAckCount", UintegerValue (0));
Config::SetDefault ("ns3::TcpSocket::PersistTimeout", TimeValue (Seconds (20)));
```

My intention was not to focus on hard drop and its effect. If I understand correctly, enabling hard drop causes a packet to be dropped (instead of marked) when the queue size goes above MaxTh. Here, MaxTh is RedMaxTh * PACKET_SIZE, which is significantly smaller than 100 MB, so this threshold can be reached even though the MaxSize of each RED queue is very large.

My intention was more to understand why, from the congestion-control point of view, a drop caused by the buffer management algorithm (in the root queue disc) behaves differently from a drop in a child queue disc. Both happen in the switch, so logically, if one of them leads to a timeout, the other should as well.

But here, if the drop happens at the RedQueueDisc it is handled by a timeout, whereas if it happens at the GenQueueDisc it is detected via three duplicate ACKs (and yes, if I disable fast retransmit, it is again detected by a timeout). I was wondering whether this is rooted in networking concepts or is just an ns-3 design artifact of its traffic-control layering.


vamsiDT commented on July 17, 2024

Hi, I see. Thanks for the elaborate description.

I am quite sure this has nothing to do with ns-3 queue discs.

Before going into why the drops have different effects, let me make sure the context is clear (if you already know this, please ignore).

So, RED queue discs, or any other queue discs in ns-3 (except GenQueueDisc), act as AQMs. They are local to a port, and each typically has a MaxSize beyond which packets are dropped.

DT, ABM, or any other buffer management scheme defined in GenQueueDisc is device-wide. The device has a shared memory with a MaxSize. The buffer management algorithms assign and adjust thresholds for each queue (in ns-3, the internal queue discs) at each port. A threshold can be thought of as a per-queue "MaxSize"; crucially, these thresholds change dynamically.

I think what's happening in your case is this: when HardDrop is set (making RED behave like drop-tail), the drop threshold sits at RedMaxTh, a static value, which amounts to a very small drop-tail queue. As a result, the drop rate might be higher. In this case, if the thresholds calculated by DT/ABM are higher than RedMaxTh, the drops are due to the underlying queue disc and not to the buffer management algorithm.

In the paper we actually discuss this effect. The effective drop threshold is essentially min(buffer management, AQM) when both are used. ABM is conceptually buffer management × AQM ;)

Given your configuration, the buffer management thresholds will most likely be much larger than the AQM threshold (RedMaxTh in this case), so buffer management does nothing at all here.

Now, about the different effects of the drops: aggressive dropping → more losses → ACKs may also be lost → more frequent retransmissions. I think this is the case when HardDrop is set with a small RedMaxTh.

To cross-check, please keep HardDrop set but raise RedMaxTh to a very large value (e.g., 1000 packets). Most likely there will be fewer retransmissions.
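For that cross-check, something along these lines should do (values illustrative; PACKET_SIZE as in your script):

```cpp
// Keep hard drop on, but push MaxTh far above the operating region, so that
// any drops come from buffer management rather than from RED itself.
Config::SetDefault ("ns3::RedQueueDisc::UseHardDrop", BooleanValue (true));
Config::SetDefault ("ns3::RedQueueDisc::MaxTh", DoubleValue (1000.0 * PACKET_SIZE)); // ~1000 pkts
```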

To get more aggressive dropping with low thresholds calculated by the buffer management algorithm, the alpha value can be lowered (e.g., to 0.001, just as a test).


xenchieng1 commented on July 17, 2024

Nice explanation! Thank you. Setting HardDrop with a large threshold works as expected! Regarding lowering alpha values for testing: I see multiple values in the alphas file. Are they ordered from high to low priority? I'm confused because there are several identical values (like 0.5) before the values go down. In abm-evaluation.cc, both the incast and the other applications get a random priority drawn from the same range. Does that mean incasts can sometimes have equal or even lower priority? Or did you use a different alphas file for the evaluation?

