ns3-datacenter's Issues
initWienRate?
What is this error? I cannot find initWienRate anywhere in the code:
msg="Could not set default value for ns3::TcpSocketState::initWienRate", +0.000000000s -1 file=../src/core/model/config.cc, line=854
terminate called without an active exception
Why is the normalized drain rate set to 1 when the drain rate is less than b*(1/nPrior)?
Could you please clarify the reason for setting 'th' to 1 when the drain rate is less than b*(1/nPrior)? Is this adjustment related to the scheduling algorithm? Is it because the parameter 'ap' already restricts the lower-priority queue? Could you provide more insight into this decision?
build failed
I ran into the following build failure during the initial build with ./waf:
../src/point-to-point/model/qbb-net-device.cc:91:32: error: ‘class ns3::RdmaEgressQueue’ has no member named ‘GetNode’; did you mean ‘GetNBytes’?
std::cout << "node" << this->GetNode().GetId() << " packetSize " << p->GetSize() << " time " << Simulator::Now().GetNanoSeconds() << " unsched " << unsched << std::endl;
^~~~~~~
GetNBytes
Incast Experiment
Hello, I want to reproduce the incast experiment from the PowerTCP paper.
The result of the 10:1 incast matches the paper well, but when I add more flows (I tried a 96:1 incast), the result differs from the paper (the queue is about 100KB-200KB after convergence).
The flows I use are:
97
0 16 3 10000 1000000000000 0.13
32 16 3 10001 1000000000000 0.15
33 16 3 10033 1000000000000 0.15
...
127 16 3 10127 1000000000000 0.15
Is there any problem with my experiment? The paper says:
in addition to the 10 : 1 incast, the 256th server sends a query request (§4.1) to all the other 255 servers which then respond at the same time, creating a 255:1 incast.
How can I create that experiment?
Scheduling algorithms
I have a question. When I print the simulator time in the scheduling algorithms, I see a for loop over
GetNQueueDiscClasses
that dequeues items, but sometimes when I print
Simulator::Now()
the time is the same for different dequeued items. I would expect the dequeue operation to advance the simulated time once an item is successfully dequeued, but I don't see that in the code.
Also, who calls
GenQueueDisc::DoDequeue()
? I couldn't find its caller.
Thank you!
MinRto not effective when AcceptPacket returns false
I'm experimenting with DT and ABM, and I noticed that when using DCTCP with the settings from your evaluation (ECN on, HardDrop off), the FCTs in the output file do not reflect the MinRto setting even when AcceptPacket in GenQueueDisc returns false (for simplicity, consider DT with a low threshold when an incast arrives).
I checked that DropBeforeEnqueue from QueueDisc is called, but I'm not sure why the retransmission timeout is not triggered. To verify, I set a large MinRto (at least ten times the standalone FCT), but it is not reflected in the FCT, unless I turn on HardDrop, in which case I guess REDQueueDisc drops the packet for exceeding MaxTh and GenQueueDisc does not drop it due to buffer overflow. This is interesting because if a limited buffer rejects a packet for any reason, the packet has to be dropped regardless of the congestion-control settings.
What might be wrong here? How can I make sure that a packet retransmission is triggered when GenQueueDisc calls DropBeforeEnqueue? I confirmed that DoRetransmit in TcpSocketBase is not called when GenQueueDisc rejects the packet (only with HardDrop off).
When will this project be updated?
Hi:
I want to ask when all the code in this repo will be updated.
Thanks a lot.
BR
topology file
I see 128 nodes and 10 switches in the topology.txt file, which seem to be arranged in 2 pods, where each pod has 64 servers with 2 ToRs and 2 aggregation switches. On the other hand, the PowerTCP paper says it uses 256 servers distributed over 4 pods.
Is the topology.txt file uploaded here the one used to generate the results in the paper?