scylladb / seastar

High performance server-side application framework

Home Page: http://seastar.io

License: Apache License 2.0

seastar's Introduction

Seastar

Introduction

SeaStar is an event-driven framework allowing you to write non-blocking, asynchronous code in a relatively straightforward manner (once understood). It is based on futures.
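
To give a flavor of the futures model, here is a minimal sketch of a complete Seastar program (an illustration only, assuming a recent Seastar with the seastar/ include layout):

#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>
#include <iostream>

int main(int argc, char** argv) {
    seastar::app_template app;
    // app.run() starts the reactor and drives the returned future to
    // completion before the process exits.
    return app.run(argc, argv, [] {
        return seastar::make_ready_future<int>(42).then([] (int value) {
            // .then() chains a continuation that runs once the value is
            // ready, without ever blocking the thread.
            std::cout << "computed " << value << "\n";
        });
    });
}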

Building Seastar

For more details and alternative workflows, read HACKING.md.

Assuming that you would like to use system packages (RPMs or DEBs) for Seastar's dependencies, first install them:

$ sudo ./install-dependencies.sh

then configure (in "release" mode):

$ ./configure.py --mode=release

then compile:

$ ninja -C build/release

In case of compilation issues, especially errors like g++: internal compiler error: Killed (program cc1plus), try giving gcc more memory, either by limiting the number of parallel jobs (-j1) and/or by making at least 4 GB of RAM available to the machine.
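
For example, to restrict the build to one compile job at a time:

$ ninja -C build/release -j1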

If you're missing a dependency of Seastar, then it is possible to have the configuration process fetch a version of the dependency locally for development.

For example, to fetch fmt locally, configure Seastar like this:

$ ./configure.py --mode=dev --cook fmt

--cook can be repeated many times for selecting multiple dependencies.
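
For example, to cook both fmt and c-ares (assuming c-ares is among the dependencies of your Seastar checkout):

$ ./configure.py --mode=dev --cook fmt --cook c-ares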

Build modes

The configure.py script is a wrapper around CMake. The --mode argument maps to CMAKE_BUILD_TYPE and supports the following modes:

| Mode     | CMake mode        | Debug info | Optimizations | Sanitizers  | Allocator | Checks  | Use for                                |
| -------- | ----------------- | ---------- | ------------- | ----------- | --------- | ------- | -------------------------------------- |
| debug    | Debug             | Yes        | -O0           | ASAN, UBSAN | System    | All     | gdb                                    |
| release  | RelWithDebInfo    | Yes        | -O3           | None        | Seastar   | Asserts | production                             |
| dev      | Dev (Custom)      | No         | -O1           | None        | Seastar   | Asserts | build and test cycle                   |
| sanitize | Sanitize (Custom) | Yes        | -Os           | ASAN, UBSAN | System    | All     | second level of tests, track down bugs |

Note that Seastar is more sensitive to allocators and optimizations than typical code. As a quick rule of thumb for relative performance: release is about 2 times faster than dev, 150 times faster than sanitize, and 300 times faster than debug.
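
Since --mode is just CMAKE_BUILD_TYPE, the same selection can be made when invoking CMake directly (a sketch, assuming CMake >= 3.13 for -B and that the custom Dev/Sanitize build types are provided by Seastar's own CMake scripts):

$ cmake -B build/dev -DCMAKE_BUILD_TYPE=Dev $seastar_dir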

Using Seastar from its build directory (without installation)

It's possible to consume Seastar directly from its build directory with CMake or pkg-config.

We'll assume that the Seastar repository is located in a directory at $seastar_dir.

Via pkg-config:

$ g++ my_app.cc $(pkg-config --libs --cflags --static $seastar_dir/build/release/seastar.pc) -o my_app

and with CMake using the Seastar package:

CMakeLists.txt for my_app:

find_package (Seastar REQUIRED)

add_executable (my_app
  my_app.cc)
  
target_link_libraries (my_app
  Seastar::seastar)
$ mkdir $my_app_dir/build
$ cd $my_app_dir/build
$ cmake -DCMAKE_PREFIX_PATH="$seastar_dir/build/release;$seastar_dir/build/release/_cooking/installed" -DCMAKE_MODULE_PATH=$seastar_dir/cmake $my_app_dir

The CMAKE_PREFIX_PATH values ensure that CMake can locate Seastar and its compiled submodules. The CMAKE_MODULE_PATH value ensures that CMake uses Seastar's CMake scripts for locating its dependencies.
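
In both cases, the my_app.cc being compiled can be as small as the following sketch (seastar::sleep comes from seastar/core/sleep.hh; the same recent include layout as above is assumed):

#include <seastar/core/app-template.hh>
#include <seastar/core/sleep.hh>
#include <chrono>
#include <iostream>

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        using namespace std::chrono_literals;
        // seastar::sleep() yields to the reactor instead of blocking the thread.
        return seastar::sleep(100ms).then([] {
            std::cout << "Hello from my_app\n";
        });
    });
}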

Using an installed Seastar

You can also consume Seastar after it has been installed to the file-system.

Important:

  • Seastar works with a customized version of DPDK, so by default it builds and installs the DPDK submodule to $build_dir/_cooking/installed

First, configure the installation path:

$ ./configure.py --mode=release --prefix=/usr/local

then run the install target:

$ ninja -C build/release install

then consume it from pkg-config:

$ g++ my_app.cc $(pkg-config --libs --cflags --static seastar) -o my_app

or consume it with the same CMakeLists.txt as before but with a simpler CMake invocation:

$ cmake ..

(If Seastar has not been installed to a "standard" location like /usr or /usr/local, then you can invoke CMake with -DCMAKE_PREFIX_PATH=$my_install_root.)

There are also instructions for building on any host that supports Docker.

Use of the DPDK is optional.

Seastar's C++ standard: C++17 or C++20

Seastar supports both C++17 and C++20. The build defaults to the latest standard supported by your compiler, but it can be explicitly selected with the --c++-standard configure option, e.g., --c++-standard=17, or, if using CMake directly, by setting the CMAKE_CXX_STANDARD CMake variable.
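
For example, to build in release mode against C++20:

$ ./configure.py --mode=release --c++-standard=20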

See the compatibility statement for more information.

Getting started

There is a mini tutorial and a more comprehensive one.

The documentation is available on the web.

Resources

Ask questions and post patches on the development mailing list. Subscription information and archives are available here, or just send an email to [email protected].

Information can be found on the main project website.

File bug reports on the project issue tracker.

The Native TCP/IP Stack

Seastar comes with its own userspace TCP/IP stack for better performance.

Recommended hardware configuration for SeaStar

  • CPUs - As many as you need; SeaStar is highly friendly to multi-core and NUMA.
  • NICs - As fast as possible; we recommend 10G or 40G cards. 1G cards can be used too, but you may be limited by their capacity. In addition, the more hardware queues per CPU the better for SeaStar; otherwise we have to emulate queue distribution in software.
  • Disks - Fast SSDs with a high number of IOPS.
  • Client machines - Usually a single client machine can't saturate our servers. Both memaslap (memcached) and wrk (httpd) cannot overload their matching server counterparts. We recommend running the clients on different machines than the servers, and using several of them.

Projects using Seastar

  • cpv-cql-driver: C++ driver for Cassandra/Scylla based on the seastar framework
  • cpv-framework: A web framework written in C++ based on the seastar framework
  • redpanda: A Kafka replacement for mission critical systems
  • Scylla: A fast and reliable NoSQL data store compatible with Cassandra and DynamoDB
  • smf: The fastest RPC in the West

seastar's People

Contributors

amnonh, argenet, asias, avikivity, balusch, bhalevy, denesb, duarten, elazarl, elcallio, emaxerrno, espindola, gleb-cloudius, havaker, kbr-scylla, michoecho, nyh, pdziepak, psarna, raphaelsc, rzarzynski, slivne, stephandollberg, syuu1228, tchaikov, tgrabiec, travisdowns, vladzcloudius, wmitros, xemul

seastar's Issues

AWS: Memcached using DHCP, SMP throws an Assert on AWS while load is running

Using commit 74f9f1f

on AWS c3.8xlarge

running

sudo ./build/release/apps/memcached/memcached --network-stack native --dpdk-pmd -c 2

Checking link status
Created DPDK device
done
Port 0 Link Up - speed 10000 Mbps - full-duplex
DHCP sending discover
DHCP Got offer for 172.30.2.49
DHCP sending request for 172.30.2.49
DHCP Got ack on request
DHCP ip: 172.30.2.49
DHCP nm: 255.255.255.0
DHCP gw: 172.30.2.1
seastar memcached v1.0
DHCP timeout
memcached: ./core/future.hh:145: void future_state::set(A&& ...) [with A = {bool, net::dhcp::lease}; T = {bool, net::dhcp::lease}]: Assertion `_state == state::future' failed.

This does not happen when memcached is started with -c 1

seawreck does not work with DPDK with smp

sudo build/release/apps/seawreck/seawreck --server 192.168.20.101:10000 --host-ipv4-addr 192.168.20.185 --dhcp 0 --gw-ipv4-addr 192.168.20.101  --netmask-ipv4-addr 255.255.255.0 --smp 2 --conn 4 --network-stack native --dpdk-pmd  --collectd 0  --duration 1

with 8ca0f21

this works on the same server with posix stack (--network-stack posix)

I get at times

[shlomi@dpdk1 ~]$ sudo tshark -i enp129s0
Running as user "root" and group "root". This could be dangerous.
Capturing on 'enp129s0'
1 0.000000 IntelCor_2d:3a:00 -> Broadcast ARP 60 Who has 192.168.20.101? Tell 192.168.20.185
2 0.000343 IntelCor_2d:3a:00 -> Broadcast ARP 60 Who has 192.168.20.101? Tell 192.168.20.185
3 1.000410 IntelCor_2d:3a:00 -> Broadcast ARP 60 Who has 192.168.20.101? Tell 192.168.20.185
4 1.000416 IntelCor_27:d3:ea -> IntelCor_2d:3a:00 ARP 42 192.168.20.101 is at 68:05:ca:27:d3:ea
5 1.000471 IntelCor_2d:3a:00 -> Broadcast ARP 60 Who has 192.168.20.101? Tell 192.168.20.185
6 1.000475 IntelCor_27:d3:ea -> IntelCor_2d:3a:00 ARP 42 192.168.20.101 is at 68:05:ca:27:d3:ea

and at times (after 4,5 restarts)

29 56.363824 192.168.20.185 -> 192.168.20.101 TCP 62 47476→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
30 56.363829 192.168.20.185 -> 192.168.20.101 TCP 62 46368→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
31 56.363858 192.168.20.101 -> 192.168.20.185 TCP 62 10000→47476 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
32 56.363864 192.168.20.185 -> 192.168.20.101 TCP 62 46786→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
33 56.363871 192.168.20.101 -> 192.168.20.185 TCP 62 10000→46368 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
34 56.363873 192.168.20.101 -> 192.168.20.185 TCP 62 10000→46786 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
35 56.363878 192.168.20.185 -> 192.168.20.101 TCP 62 54821→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
36 56.363887 192.168.20.101 -> 192.168.20.185 TCP 62 10000→54821 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
37 56.363891 192.168.20.185 -> 192.168.20.101 TCP 60 47476→10000 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
38 56.363898 192.168.20.185 -> 192.168.20.101 TCP 60 46368→10000 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
39 56.363916 192.168.20.185 -> 192.168.20.101 TCP 60 46786→10000 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
40 56.363921 192.168.20.185 -> 192.168.20.101 TCP 60 54821→10000 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
41 57.374729 192.168.20.185 -> 192.168.20.101 TCP 62 [TCP Spurious Retransmission] 46786→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
42 57.374750 192.168.20.101 -> 192.168.20.185 TCP 62 [TCP Previous segment not captured] 10000→46786 [SYN, ACK] Seq=15794908 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
43 57.374753 192.168.20.185 -> 192.168.20.101 TCP 62 [TCP Spurious Retransmission] 46368→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
44 57.374763 192.168.20.101 -> 192.168.20.185 TCP 62 [TCP Previous segment not captured] 10000→46368 [SYN, ACK] Seq=15795290 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
45 57.374767 192.168.20.185 -> 192.168.20.101 TCP 62 [TCP Spurious Retransmission] 54821→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
46 57.374774 192.168.20.101 -> 192.168.20.185 TCP 62 [TCP Previous segment not captured] 10000→54821 [SYN, ACK] Seq=15795110 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
47 57.374776 192.168.20.185 -> 192.168.20.101 TCP 62 [TCP Spurious Retransmission] 47476→10000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 WS=128
48 57.374783 192.168.20.101 -> 192.168.20.185 TCP 62 [TCP Previous segment not captured] 10000→47476 [SYN, ACK] Seq=15795830 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
49 57.374784 192.168.20.185 -> 192.168.20.101 TCP 60 [TCP ACKed unseen segment] 46786→10000 [RST, ACK] Seq=1 Ack=15794909 Win=0 Len=0
50 57.374788 192.168.20.185 -> 192.168.20.101 TCP 60 [TCP ACKed unseen segment] 46368→10000 [RST, ACK] Seq=1 Ack=15795291 Win=0 Len=0
51 57.374791 192.168.20.185 -> 192.168.20.101 TCP 60 [TCP ACKed unseen segment] 54821→10000 [RST, ACK] Seq=1 Ack=15795111 Win=0 Len=0
52 57.374820 192.168.20.185 -> 192.168.20.101 TCP 60 [TCP ACKed unseen segment] 47476→10000 [RST, ACK] Seq=1 Ack=15795831 Win=0 Len=0

this is similar to the issue reported for tcp_client with dpdk (#23)

DPDK: support a TX queue for each cpu irrespective of RX queue type

On Intel cards we are limited by RSS to 16 RX queues; however, we can use additional TX queues if they are available.

For example on bare metal we can have 128 queues and 24 cpus

  • 16 cpus will use physical rx queues and distribute the traffic to the 24 cpus
  • 8 cpus will use software rx queues
  • 24 cpus will use a physical tx queue

File descriptor leak in memcache

After several memcache slaps we crash with:

$ ./memcached
terminate called after throwing an instance of 'std::system_error'
  what():  Too many open files

tests/udp_zero_copy server prints 0 pps with --smp 2 even when there is flow

iptraf shows no outgoing traffic on tap0 for --smp 2.

Works fine with --smp 1.

$ sudo build/release/tests/udp_zero_copy --netw native --smp 2
DHCP sending discover
DHCP Got offer for 192.168.122.18
DHCP sending request for 192.168.122.18
DHCP Got ack on request
DHCP  ip: 192.168.122.18
DHCP  nm: 255.255.255.0
DHCP  gw: 192.168.122.1
Listening on 0.0.0.0:10000
Out: 0.00 pps
Out: 0.00 pps
Out: 0.00 pps
Out: 0.00 pps
Out: 0.00 pps
Out: 0.00 pps
$ sudo build/release/tests/udp_zero_copy --netw native --smp 1
DHCP sending discover
DHCP Got offer for 192.168.122.18
DHCP sending request for 192.168.122.18
DHCP Got ack on request
DHCP  ip: 192.168.122.18
DHCP  nm: 255.255.255.0
DHCP  gw: 192.168.122.1
Listening on 0.0.0.0:10000
Out: 85035.00 pps
Out: 86964.00 pps
Out: 82538.00 pps

Create virtio-net rx packet in one go

From Avi:
'''
Note, this leaves some performance on the table, for two reasons:

  1. packet::append() is less efficient than packet::packet(packet&&, fragment, deleter);
  2. even that is inefficient in that every append allocates a new deleter, and chains it as a linked list into the packet. This both requires one extra allocation per fragment, and walking a linked list when the time comes to kill the packet.

A better approach would be to accumulate the fragments in a vector<>, then create the packet in one go (the closest thing we have is packet(std::vector, deleter) but even better is using two fragment iterators instead of the vector), and creating a deleter that deletes all fragments in one go.
'''
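
A hedged sketch of the suggested shape (completed_buffers and its element type are hypothetical stand-ins for the virtio ring state; packet(std::vector<fragment>, deleter) and make_deleter are the existing names the quote refers to):

std::vector<net::fragment> frags;
// First accumulate every fragment of the received buffer chain...
for (auto& buf : completed_buffers) {            // hypothetical buffer source
    frags.push_back(net::fragment{buf.data(), buf.size()});
}
// ...then build the packet once, with one deleter that releases the whole
// chain, instead of one packet::append() and one chained deleter per fragment.
net::packet p(std::move(frags),
              make_deleter([bufs = std::move(completed_buffers)] () mutable {
                  bufs.clear();                  // frees all fragments in one go
              }));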

running memcached on intel with SMP=14 does not distribute inbound traffic to all cores: cores 0-7 receive traffic, cores 8-13 do not

running from seastar-dev/linearize

on intel1

sudo build/release/apps/memcached/memcached --network-stack native --dhcp 0 --host-ipv4-addr 192.168.20.101 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 192.168.20.185 --smp 14 --dpdk-pmd --collectd 1 --collectd-address 192.168.20.185:25826 --collectd-hostname=intel1-1

on intel2

for i in {1..48} ; do memaslap -s 192.168.20.101:11211 -t 12000s -X 64 -T 1 -c 64 & done

checking http://system.cloudius:18080/dashboard/

  • cores 0-7: network-X->total-operations-rx-packets shows inbound traffic of ~225K packets each
  • cores 8-13: network-X->total-operations-rx-packets shows inbound traffic of 0 packets each

this may be related to the previous memcached SMP reported issue

Seastar httpd incorrectly handles HTTP 0.9 requests

If you do

telnet www.google.com 80
GET /

You'll immediately be sent the response. This is the trivial HTTP 0.9 protocol, where the HTTP/.. version isn't specified on the GET line and the server does not wait for any headers in the request beyond the URL.

Unfortunately, Seastar's httpd does not handle this case correctly, and seems to just hang after getting the "GET /" line. Not even another input line, even an empty line suggesting the end of headers, appears to end this hang.

Error in seastar.pc written by build.ninja (hence configure.py)

Currently, seastar.pc gets written as follows:

-e Name: Seastar
URL: http://seastar-project.org/
Description: Advanced C++ framework for high-performance server applications on modern hardware.
Version: 1.0
Libs: -L/home/br/Dev/foo/seastar/build/release -Wl,--whole-archive -lseastar -Wl,--no-whole-archive -g -Wl,--no-as-needed   -fvisibility=hidden -pthread  -laio -lboost_program_options -lboost_system -lstdc++ -lm -lboost_unit_test_framework -lboost_thread -lcryptopp -lrt -lxenstore -lhwloc -lnuma -lpciaccess -lxml2 -lz 
Cflags: -std=gnu++1y -g  -Wall -Werror -fvisibility=hidden -pthread -I/home/br/Dev/foo/seastar -I/home/br/Dev/foo/seastar/build/release/gen   -DHAVE_XEN -DHAVE_HWLOC -DHAVE_NUMA  -O2

Note the "-e" in the first line. The result when running pkg-config is:

$ pkg-config --cflags --libs ../../seastar/build/release/seastar.pc
Package '/seastar' has no Name: field

The "-e" is due to line 382 in configure.py.

when running with SMP, reactor.queue_length-tasks-pending of reactor 0 has values while all the rest (id != 0) stay at zero

run httpd

sudo build/release/apps/httpd/httpd --network-stack native --dhcp 0 --host-ipv4-addr 192.168.20.101 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 192.168.20.185 --smp 14 --dpdk-pmd --collectd 1 --collectd-address 192.168.20.185:25826 --collectd-hostname=test-1

run load from wrk to stress the system

check reactor.queue_length-tasks-pending: the value for reactor 0 is != 0, while all the other reactors (1..13) have a value of 0

seastar tcpserver - goclient rxrx test with DPDK at times underperforms by x1000

http://jenkins.cloudius-systems.com:8080/job/seastar-private-tcpserver-perf/8/PerfPublisher/

this happens at least 1 out of 5 runs

On Thor-Loki:

| Goserver    | tcpserver stack-native-dpdk171 | tcpserver stack-native-dpdk18 | tcpserver stack-posix |
| ----------- | ------------------------------ | ----------------------------- | --------------------- |
| 8.919001802 | 982.790393248                  | 1034.080868611                | 9.70277449            |

values are seconds to complete the tests - lower is better

To reproduce, you can run the tests in the following manner:

  • install + configure dpdk
  • start tcpserver
  • clone tests.git
  • cd tests
  • scripts/tester.py run --config_param sut.ip: --config_param tester.ip:x apps/tcp/tests/test_1_goclient/tester
  • check the output at apps/tcp/tests/test_1_goclient/tester/out

or

  • install + configure dpdk
  • start tcpserver
  • clone tests.git
  • cd tests/apps/tcp/tests/test_1_goclient/tester
  • go run client.go -h

Please note you can use the client.go -conn 1 option to limit the number of connections

seawreck does not work with DPDK UP

sudo build/release/apps/seawreck/seawreck --server 192.168.20.101:10000 --host-ipv4-addr 192.168.20.185 --dhcp 0 --gw-ipv4-addr 192.168.20.101  --netmask-ipv4-addr 255.255.255.0 --smp 1 --conn 4 --network-stack native --dpdk-pmd  --collectd 0  --duration 1

with 8ca0f21

this works on the same server with posix stack (--network-stack posix)

throws an assert

this is the output

ports number: 1
Port 0: max_rx_queues 316 max_tx_queues 316
Port 0: using 1 queue
RX checksum offload supported
TX ip checksum offload supported
TX TCP&UDP checksum offload supported
Port 0 init ... done:
Creating Tx mbuf pool 'dpdk_net_pktmbuf_pool0_tx' [1024 mbufs] ...
Creating Rx mbuf pool 'dpdk_net_pktmbuf_pool0_rx' [1536 mbufs] ...
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: i40e_dev_tx_queue_setup(): Using full-featured tx path

Checking link status
Created DPDK device
.done
Port 0 Link Up - speed 40000 Mbps - full-duplex
========== http_client ============
Server: 192.168.20.101:10000
Connections: 4
Requests/connection: dynamic (timer based)
seawreck: net/dpdk.cc:219: virtual unsigned int dpdk::dpdk_device::hash2qid(uint32_t): Assertion `_redir_table.size()' failed.

this is similar to the issue I reported for running tcp_client with dpdk (#22)

apachebench hangs on httpd

When running Apachebench (ab) on seastar's httpd application, it hangs immediately.

I think the problem is that httpd doesn't correctly implement the finer details of the HTTP protocol.

The concrete issue is that our server doesn't close the connection, as it should, when connected to by an HTTP/1.0 client. RFC2616 states:

Clients and servers SHOULD NOT assume that a persistent connection is maintained for HTTP versions less than 1.1 unless it is explicitly signaled. See section 19.6.2 for more information on backward compatibility with HTTP/1.0 clients. 

As can be seen with "ab -v4", ApacheBench does use HTTP 1.0.

There seem to be other problems in our support of the HTTP protocol. E.g., I noticed the connection hangs if there is no HTTP/... on the GET line. This is wrong - HTTP/0.9 should be assumed, and the request definitely should not hang.

seastar tcpserver - goclient txtx test on DPDK is underperforming x2

http://jenkins.cloudius-systems.com:8080/job/seastar-private-tcpserver-perf/8/PerfPublisher/

increasing the size of user_queue_space x2 does not help (user_queue_space = {212992 * 2}) - the associated results are from a build using asias/txtx branch in seastar-dev that has this change.

On Thor-Loki:

| Goserver    | tcpserver stack-native-dpdk171 | tcpserver stack-native-dpdk18 | tcpserver stack-posix |
| ----------- | ------------------------------ | ----------------------------- | --------------------- |
| 8.915581406 | 17.087416845                   | 17.867953145                  | 8.914382154           |

values are seconds to complete the tests - lower is better

To reproduce, you can run the tests in the following manner:

  • install + configure dpdk
  • start tcpserver
  • clone tests.git
  • cd tests
  • scripts/tester.py run --config_param sut.ip: --config_param tester.ip:x apps/tcp/tests/test_1_goclient/tester
  • check the output at apps/tcp/tests/test_1_goclient/tester/out

or

  • install + configure dpdk
  • start tcpserver
  • clone tests.git
  • cd tests/apps/tcp/tests/test_1_goclient/tester
  • go run client.go -h

Please note you can use the client.go -conn 1 option to limit the number of connections

finally() is non-atomic

Given

bool x = false;
f = make_ready_future<>().finally([] {
    x = true;
});

The assertion

assert(!(x && !f.available())); // x implies f.available()

currently fails, though it should not.

Can't build completely

In Rev 2e041c2,
I can't build some components of seastar, as shown below.

Would you resolve this problem?

(07:04:59) [email protected]:~/seastar ]
 ./configure.py
(07:05:02) [email protected]:~/seastar ]
 ninja-build
[1/70] LINK build/release/apps/httpd/httpd
FAILED: g++  -O2 -I build/release/gen -g -Wl,--no-as-needed   -fvisibility=hidden -pthread  -o build/release/apps/httpd/httpd build/release/apps/httpd/main.o build/release/http/transformers.o build/release/http/json_path.o build/release/http/file_handler.o build/release/http/common.o build/release/http/routes.o build/release/json/json_elements.o build/release/json/formatter.o build/release/http/matcher.o build/release/http/mime_types.o build/release/http/httpd.o build/release/http/reply.o build/release/http/api_docs.o build/release/net/proxy.o build/release/net/virtio.o build/release/net/dpdk.o build/release/net/ip.o build/release/net/ethernet.o build/release/net/arp.o build/release/net/native-stack.o build/release/net/ip_checksum.o build/release/net/udp.o build/release/net/tcp.o build/release/net/dhcp.o build/release/net/xenfront.o build/release/core/reactor.o build/release/core/fstream.o build/release/core/posix.o build/release/core/memory.o build/release/core/resource.o build/release/core/scollectd.o build/release/core/app-template.o build/release/core/thread.o build/release/core/dpdk_rte.o build/release/util/conversions.o build/release/net/packet.o build/release/net/posix-stack.o build/release/net/net.o build/release/rpc/rpc.o build/release/core/xen/xenstore.o build/release/core/xen/gntalloc.o build/release/core/xen/evtchn.o -laio -lboost_program_options -lboost_system -lstdc++ -lm -lboost_unit_test_framework -lboost_thread -lcryptopp -lrt -lxenstore -lhwloc -lnuma -lpciaccess -lxml2 -lz
build/release/core/xen/gntalloc.o: In function `xen::userspace_gntalloc::userspace_gntalloc(unsigned int)':
/home/yusuke/seastar/core/xen/gntalloc.cc:59: undefined reference to `vtable for xen::userspace_gntalloc'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

My environment is below.

(07:04:53) [email protected]:~/seastar ]
 uname -a
Linux localhost.localdomain 4.0.4-202.fc21.x86_64 #1 SMP Wed May 27 22:28:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
(07:04:57) [email protected]:~/seastar ]
 cat /etc/redhat-release
Fedora release 21 (Twenty One)

Need to check UDP checksum on incoming UDP packets

We need to check the UDP checksum on incoming UDP packets. As we do in TCP (tcp::received()), we should avoid calculating the UDP checksum if the card already did this for us thanks to rx checksum offloading.
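
For reference, the verification itself is the standard RFC 768 one's-complement sum over the IPv4 pseudo-header plus the whole datagram; a standalone sketch of the check (not seastar's checksummer API, which the real fix would reuse) looks like:

#include <cstdint>
#include <cstddef>

// 16-bit one's-complement accumulation over a byte range.
static uint32_t sum16(const uint8_t* p, size_t len, uint32_t acc) {
    size_t i = 0;
    for (; i + 1 < len; i += 2) {
        acc += (uint32_t(p[i]) << 8) | p[i + 1];
    }
    if (i < len) {                       // odd trailing byte, zero-padded
        acc += uint32_t(p[i]) << 8;
    }
    return acc;
}

// Sum the pseudo-header (addresses, protocol 17, UDP length) plus the datagram
// (header including its checksum field + payload); the folded sum must be
// 0xffff. A checksum field of zero means the sender did not compute one,
// which is legal for UDP over IPv4.
bool udp_checksum_ok(uint32_t src_ip, uint32_t dst_ip,
                     const uint8_t* udp, size_t udp_len) {
    uint16_t csum_field = (uint16_t(udp[6]) << 8) | udp[7];
    if (csum_field == 0) {
        return true;
    }
    uint32_t acc = 0;
    acc += (src_ip >> 16) + (src_ip & 0xffff);
    acc += (dst_ip >> 16) + (dst_ip & 0xffff);
    acc += 17;                           // IPPROTO_UDP
    acc += uint32_t(udp_len);            // UDP length field
    acc = sum16(udp, udp_len, acc);
    while (acc >> 16) {
        acc = (acc & 0xffff) + (acc >> 16);
    }
    return acc == 0xffff;
}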

or_terminate() makes it hard to debug exceptions

I'm not sure there will ever be a satisfactory solution for this problem in C++, but just in case someone has an idea I want to lay out this issue.

The ".or_terminate()" continuation is necessary for exiting on unhandled exception. Otherwise, an unhandled exception will be silently ignored instead of causing the application to abort.

But there's one very annoying thing about how or_terminate() works: Because it catches the exception, it makes it impossible to debug where the uncaught exception was actually generated. Usually you end up with a mysterious error message like

terminate called after throwing an instance of 'std::system_error'
  what():  No such file or directory
zsh: abort (core dumped)  build/release/seastar

And if you try to debug this with gdb, the backtrace you get is not the interesting one (where the uncaught exception was thrown) but just the place where or_terminate() called std::terminate(), which is not very informative (although it can be of some help if the application has several uses of or_terminate()).
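
One partial workaround when debugging is gdb's catch throw, which stops at the point where the exception is thrown rather than where or_terminate() ends up calling std::terminate():

$ gdb build/release/seastar
(gdb) catch throw
(gdb) run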

Memcached is not working with DPDK on Intel servers

DPDK 1.8.0
seastar : 89763c95c92b44d3482e7500ddd5c14b8c47b76e
also seastar: 29366cb
(same results)

  • boot seastar on dpdk1 with
    '''
    sudo build/release/apps/memcached/memcached --network-stack native --dhcp 0 --host-ipv4-addr 192.168.20.101 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 192.168.20.185 --smp 1 --dpdk-pmd
    '''
  • run memaslap on dpdk2 with
    '''
    memaslap -s 192.168.20.101:11211 -t 20s
    '''
  • after memaslap ends on dpdk2 send pings
    '''
    ping 192.168.20.101
    '''

seastar httpd with dpdk does work on the servers

'''
sudo build/release/apps/httpd/httpd --network-stack native --dhcp 0 --host-ipv4-addr 192.168.20.101 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 192.168.20.185 --smp 1 --dpdk-pmd
'''
from packet 75 it seems that seastar has lost network connectivity

sending pings after memaslap ends confirms this

I'll attach a wireshark capture
[shlomi@dpdk2 ~]$ sudo tshark -i enp129s0 -r /tmp/memached_capture.cap
Running as user "root" and group "root". This could be dangerous.
1 0.000000000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
2 0.000032000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
3 1.000040000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
4 1.000068000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
5 2.000068000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
6 2.000098000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
7 3.000085000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
8 3.000112000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
9 4.000116000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
10 4.000145000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
11 5.000132000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
12 5.000158000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
13 6.000160000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
14 6.000192000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
15 7.000172000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
16 7.000201000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
17 8.000211000 192.168.20.101 -> 239.192.74.66 collectd 1041 Host=dpdk1, 15 values for 4 plugins, 0 messages
18 8.000240000 192.168.20.101 -> 239.192.74.66 collectd 246 Host=dpdk1, 3 values for 1 plugin, 0 messages
19 8.279197000 192.168.20.185 -> 192.168.20.101 TCP 74 51693→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032647 TSecr=0 WS=128
20 8.279263000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51693 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
21 8.279337000 192.168.20.185 -> 192.168.20.101 TCP 54 51693→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
22 8.280717000 192.168.20.185 -> 192.168.20.101 TCP 74 51694→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032649 TSecr=0 WS=128
23 8.280774000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51694 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
24 8.280822000 192.168.20.185 -> 192.168.20.101 TCP 54 51694→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
25 8.281946000 192.168.20.185 -> 192.168.20.101 TCP 74 51695→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032650 TSecr=0 WS=128
26 8.281992000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51695 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
27 8.282038000 192.168.20.185 -> 192.168.20.101 TCP 54 51695→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
28 8.283147000 192.168.20.185 -> 192.168.20.101 TCP 74 51696→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032651 TSecr=0 WS=128
29 8.283192000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51696 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
30 8.283233000 192.168.20.185 -> 192.168.20.101 TCP 54 51696→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
31 8.284336000 192.168.20.185 -> 192.168.20.101 TCP 74 51697→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032652 TSecr=0 WS=128
32 8.284380000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51697 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
33 8.284430000 192.168.20.185 -> 192.168.20.101 TCP 54 51697→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
34 8.285531000 192.168.20.185 -> 192.168.20.101 TCP 74 51698→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032654 TSecr=0 WS=128
35 8.285576000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51698 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
36 8.285618000 192.168.20.185 -> 192.168.20.101 TCP 54 51698→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
37 8.286725000 192.168.20.185 -> 192.168.20.101 TCP 74 51699→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032655 TSecr=0 WS=128
38 8.286761000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51699 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
39 8.286802000 192.168.20.185 -> 192.168.20.101 TCP 54 51699→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
40 8.287924000 192.168.20.185 -> 192.168.20.101 TCP 74 51700→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032656 TSecr=0 WS=128
41 8.287966000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51700 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
42 8.288007000 192.168.20.185 -> 192.168.20.101 TCP 54 51700→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
43 8.289107000 192.168.20.185 -> 192.168.20.101 TCP 74 51701→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032657 TSecr=0 WS=128
44 8.289152000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51701 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
45 8.289194000 192.168.20.185 -> 192.168.20.101 TCP 54 51701→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
46 8.290280000 192.168.20.185 -> 192.168.20.101 TCP 74 51702→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032658 TSecr=0 WS=128
47 8.290325000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51702 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
48 8.290366000 192.168.20.185 -> 192.168.20.101 TCP 54 51702→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
49 8.291402000 192.168.20.185 -> 192.168.20.101 TCP 74 51703→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032659 TSecr=0 WS=128
50 8.291442000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51703 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
51 8.291483000 192.168.20.185 -> 192.168.20.101 TCP 54 51703→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
52 8.292512000 192.168.20.185 -> 192.168.20.101 TCP 74 51704→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032661 TSecr=0 WS=128
53 8.292557000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51704 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
54 8.292598000 192.168.20.185 -> 192.168.20.101 TCP 54 51704→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
55 8.293634000 192.168.20.185 -> 192.168.20.101 TCP 74 51705→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032662 TSecr=0 WS=128
56 8.293679000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51705 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
57 8.293720000 192.168.20.185 -> 192.168.20.101 TCP 54 51705→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
58 8.294760000 192.168.20.185 -> 192.168.20.101 TCP 74 51706→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032663 TSecr=0 WS=128
59 8.294805000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51706 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
60 8.294846000 192.168.20.185 -> 192.168.20.101 TCP 54 51706→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
61 8.295890000 192.168.20.185 -> 192.168.20.101 TCP 74 51707→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032664 TSecr=0 WS=128
62 8.295926000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51707 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
63 8.295967000 192.168.20.185 -> 192.168.20.101 TCP 54 51707→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
64 8.297002000 192.168.20.185 -> 192.168.20.101 TCP 74 51708→11211 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=11032665 TSecr=0 WS=128
65 8.297047000 192.168.20.101 -> 192.168.20.185 TCP 62 11211→51708 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 WS=128
66 8.297089000 192.168.20.185 -> 192.168.20.101 TCP 54 51708→11211 [ACK] Seq=1 Ack=1 Win=29312 Len=0
67 8.297145000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
68 8.297168000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
69 8.297184000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
70 8.297198000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
71 8.297215000 192.168.20.101 -> 192.168.20.185 MEMCACHE 62 STORED
72 8.297222000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
73 8.297224000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
74 8.297225000 192.168.20.101 -> 192.168.20.185 MEMCACHE 62 STORED
75 8.297235000 192.168.20.185 -> 192.168.20.101 MEMCACHE 1159 set \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf. 0 0 1024
76 8.297243000 192.168.20.185 -> 192.168.20.101 TCP 54 51694→11211 [ACK] Seq=1106 Ack=9 Win=29312 Len=0
77 8.297457000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020\020\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
78 8.297467000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020\260\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
79 8.297475000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020P\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
80 8.297484000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020\360\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
81 8.297493000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020\220\027\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
82 8.297516000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \0200\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
83 8.297529000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020\320\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
84 8.297540000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020p\031\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
85 8.497421000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
86 8.497432000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
87 8.497437000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
88 8.497441000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
89 8.498412000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
90 8.498419000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
91 8.498424000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
92 8.498429000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Previous segment not captured] get \020p\024\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
93 8.498434000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\020\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
94 8.498440000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\260\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
95 8.498445000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
96 8.900453000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
97 8.900467000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
98 8.900472000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
99 8.900477000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
100 8.901460000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
101 8.901475000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
102 8.901480000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
103 8.901484000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020p\024\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
104 8.901489000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\020\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
105 8.901493000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\260\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
106 8.901498000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
107 9.707471000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
108 9.707485000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
109 9.707490000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
110 9.707494000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
111 9.707499000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
112 9.707503000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
113 9.707508000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
114 9.707522000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\260\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
115 9.707527000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
116 9.707533000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
117 9.707538000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\027\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
118 9.707543000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
119 9.707555000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
120 9.707560000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020p\031\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
121 11.319491000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
122 11.319506000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
123 11.319510000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
124 11.319515000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
125 11.319521000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
126 11.319525000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
127 11.319530000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
128 11.319545000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\260\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
129 11.319550000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
130 11.319557000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
131 11.319562000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\027\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
132 11.319567000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
133 11.319578000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
134 11.319583000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020p\031\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
135 11.535564000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020P\021\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
136 11.535583000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020\360\021\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
137 11.535589000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020\220\022\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
138 11.535594000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \0200\023\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
139 11.535599000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020\320\023\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
140 11.535605000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020p\024\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
141 11.535610000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020\020\025\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
142 11.535621000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 get \020P\026\020\020\020\020\020StOIPA4ySHgoaLAOv1KiqE.MhgnpRkwtbIA.TEwvWah4vrTpsnVg-T1h
143 14.543471000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
144 14.543487000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
145 14.543492000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
146 14.543497000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
147 14.543503000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
148 14.543507000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
149 14.543513000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
150 14.543527000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\260\025\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
151 14.543531000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
152 14.543536000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\026\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
153 14.543543000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\027\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
154 14.543547000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
155 14.543560000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\030\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
156 14.543565000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020p\031\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
157 20.991488000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
158 20.991503000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
159 20.991508000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
160 20.991513000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
161 20.991518000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
162 20.991522000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
163 20.991527000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
164 29.301230000 192.168.20.185 -> 192.168.20.101 TCP 54 51693→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
165 29.301290000 192.168.20.185 -> 192.168.20.101 TCP 54 51694→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
166 29.301312000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51695→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
167 29.301327000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51696→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
168 29.301341000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51697→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
169 29.301345000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51698→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
170 29.301348000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51699→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
171 29.301386000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51700→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
172 29.301388000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51701→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
173 29.301389000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51702→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
174 29.301391000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51703→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
175 29.301395000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51704→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
176 29.301608000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51705→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
177 29.301609000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51706→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
178 29.301611000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51707→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
179 29.301611000 192.168.20.185 -> 192.168.20.101 TCP 54 [TCP ACKed unseen segment] 51708→11211 [FIN, ACK] Seq=1176 Ack=9 Win=29312 Len=0
180 33.871513000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\020\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
181 33.871530000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP Retransmission] get \020\260\020\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
182 33.871537000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020P\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
183 33.871545000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\360\021\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
184 33.871551000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\220\022\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
185 33.871558000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \0200\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
186 33.871565000 192.168.20.185 -> 192.168.20.101 MEMCACHE 124 [TCP ACKed unseen segment] [TCP Retransmission] get \020\320\023\020\020\020\020\020wQngU18jgs8nK-7r1WKvqcCar.jxZvRTJD7oEFVu5dfPcmFeHZX59jf.
187 45.301172000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=1/256, ttl=64
188 46.300493000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=2/512, ttl=64
189 47.300532000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=3/768, ttl=64
190 48.300682000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=4/1024, ttl=64
191 49.300541000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=5/1280, ttl=64
192 50.300491000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=6/1536, ttl=64
193 50.319452000 IntelCor_2d:3a:00 -> IntelCor_27:d3:ea ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
194 51.300489000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=7/1792, ttl=64
195 51.321471000 IntelCor_2d:3a:00 -> IntelCor_27:d3:ea ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
196 52.300480000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=8/2048, ttl=64
197 52.323450000 IntelCor_2d:3a:00 -> IntelCor_27:d3:ea ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
198 53.300465000 192.168.20.185 -> 192.168.20.101 ICMP 98 Echo (ping) request id=0x1f9d, seq=9/2304, ttl=64
199 54.300466000 IntelCor_2d:3a:00 -> Broadcast ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
200 55.303461000 IntelCor_2d:3a:00 -> Broadcast ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
201 56.305472000 IntelCor_2d:3a:00 -> Broadcast ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
202 59.663499000 IntelCor_2d:3a:00 -> Broadcast ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
203 60.665480000 IntelCor_2d:3a:00 -> Broadcast ARP 42 Who has 192.168.20.101? Tell 192.168.20.185
204 61.667452000 IntelCor_2d:3a:00 -> Broadcast ARP 42 Who has 192.168.20.101? Tell 192.168.20.185

net: missing support for a "backlog" parameter of a listen()

Currently we don't support this parameter and use a hard-coded value (100).

This value is too small for httpd testing on AWS, and we have to manually change it to 1000.

We need to prioritize implementing support for a backlog parameter to listen(), as sketched below.
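For reference, the backlog is just the second argument to the POSIX listen(2) call; a minimal sketch of threading a configurable value down to it (the helper and its parameter are hypothetical, not seastar's API):

#include <sys/socket.h>

// Minimal sketch: thread a user-supplied backlog down to the POSIX
// listen(2) call instead of the hard-coded 100.
int listen_with_backlog(int fd, int backlog /* e.g. 1000 for AWS httpd runs */) {
    // backlog caps the kernel's queue of pending connections; the kernel
    // additionally clamps it to net.core.somaxconn.
    return listen(fd, backlog);
}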

DHCP timeout on AWS guest

When running with DHCP on AWS we get the assert below after about 30 seconds.
This happens only in an SMP configuration and doesn't reproduce in a UP configuration.
master hash is: 24d5c31

DHCP timeout
httpd: ./core/future.hh:145: void future_state<T>::set(A&& ...) [with A = {bool, net::dhcp::lease}; T = {bool, net::dhcp::lease}]: Assertion `_state == state::future' failed.

The bisect shows that the patch responsible for the breakage is:

ff4aca2ee0787b98d64090546adb63ef23b4dc7d is the first bad commit
commit ff4aca2ee0787b98d64090546adb63ef23b4dc7d
Author: Gleb Natapov <[email protected]>
Date:   Sun Jan 25 14:35:28 2015 +0200

    core: prefetch work items before processing

:040000 040000 2a28c23f48931e81723d025bc496a3a8a368e9cd 76f92f274bdf437f67c2842654460816f9fb4672 M      core

To reproduce run:
sudo ./build/release/apps/httpd/httpd --network-stack native --dpdk-pmd -m 512M -c 4

And wait for about 30 seconds.

SeaStar pinning to CPUs is not optimal: it does not utilize the maximum available cores in case of hyperthreading (at least not on the Intel DPDK servers)

Running with --smp 2 and doing 'top + 1', it can be seen that Seastar pins its threads to CPUs 0 and 14.
Checking /proc/cpuinfo:

CPUs 0 and 14 are actually on the same physical id / core id (0,0): so they are sharing the same core while an additional 27 cores are available.

It seems deterministic: I restarted the process 5 times and in all cases CPUs 0 and 14 were selected.

The complete command line:
sudo build/release/apps/httpd/httpd --network-stack native --dpdk-pmd 1 --dhcp 0 --host-ipv4-addr 192.168.10.101 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 192.168.10.185 --smp 2 --collectd 1 --collectd-address 192.168.10.185:25826 --collectd-hostname intel-dpdk1
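A sketch of the selection policy we likely want, using hwloc (illustrative; not the actual resource.cc logic): hand out one PU per physical core before doubling up on hyperthread siblings:

#include <hwloc.h>
#include <vector>

// Sketch: hand out one processing unit (PU) per physical core before
// using hyperthread siblings, so --smp 2 lands on two distinct cores
// rather than on CPUs 0 and 14 (two siblings of core 0).
// Assumes PUs are direct children of cores, as on typical x86.
std::vector<unsigned> pick_pus(unsigned wanted) {
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(&topo);
    unsigned ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    std::vector<unsigned> pus;
    unsigned pass = 0, before;
    do {
        before = pus.size();
        for (unsigned c = 0; c < ncores && pus.size() < wanted; ++c) {
            hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, c);
            if (pass < core->arity) {               // pass 0: first PU of each core
                pus.push_back(core->children[pass]->os_index);
            }
        }
        ++pass;                                     // pass 1+: remaining siblings
    } while (pus.size() < wanted && pus.size() > before);
    hwloc_topology_destroy(&topo);
    return pus;
}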

handling a large amount of parallel connections (~1M and higher)

It is interesting to see the maximum number of concurrent connections we can handle.

Bump queue length to 10000 and run httpd + wrk:

loki$ ./httpd
Seastar HTTP server listening on port 10000 ...

thor$ ./wrk -c 2000 -t 4 http://192.168.10.101:10000/
Running 10s test @ http://192.168.10.101:10000/
  4 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.32ms    1.31ms   16.29ms   91.47%
    Req/Sec    50.11k    14.47k    88.60k    75.44%
  1904931 requests in 10.00s, 374.24MB read
  Socket errors: connect 983, read 0, write 0, timeout 3932
Requests/sec: 190548.03
Transfer/sec:     37.43MB

Even with 2000 connections, we have a large number of connection errors.

Then, I tried something simpler: tcp_server + a go client (sending 1 ping/pong per connection).

[asias@hjpc goclient]$ go run ./client.go -conn 20000 -test ping
========== ping ============
Server: 192.168.66.123:10000
Connections: 20000
Total PingPong: 20000
Total Time(Secs): 0.9693877270000001
Requests/Sec: 20631.57954546705
Connections Erros: 11465

[asias@hjpc goclient]$ go run ./client.go -conn 10000 -test ping
========== ping ============
Server: 192.168.66.123:10000
Connections: 10000
Total PingPong: 10000
Total Time(Secs): 0.6466077090000001
Requests/Sec: 15465.327525193485
Connections Erros: 3983

[asias@hjpc goclient]$ go run ./client.go -conn 5000 -test ping
========== ping ============
Server: 192.168.66.123:10000
Connections: 5000
Total PingPong: 5000
Total Time(Secs): 0.5134876780000001
Requests/Sec: 9737.332002736002
Connections Erros: 0

Note, the go client closes the connection first, so we will have a lot of connections in the TIME_WAIT state. Before each run, wait until the number goes down:

   $ sudo netstat  -natp|grep 123|grep TIME_WAIT|wc -l

Also, I used

   $ sudo sysctl net.ipv4.ip_local_port_range="1500    65000"

to increase the range of available local ports.
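For reference, a self-contained C++ sketch approximating what the go client does (sequential for brevity; the real client presumably opens the connections concurrently; the address and port are the ones from the runs above):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const unsigned nconn = 5000;                    // the -conn argument
    unsigned errors = 0;
    for (unsigned i = 0; i < nconn; ++i) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(10000);
        inet_pton(AF_INET, "192.168.66.123", &addr.sin_addr);
        char buf[4];
        if (fd < 0 || connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0 ||
            write(fd, "ping", 4) != 4 || read(fd, buf, sizeof(buf)) <= 0) {
            ++errors;                               // the "Connections Erros" count above
        }
        if (fd >= 0) {
            close(fd);                              // leaves the port in TIME_WAIT
        }
    }
    printf("Connection errors: %u\n", errors);
}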

DPDK + i40e does not work for outbound connections

We have multiple problems with the DPDK integration and i40e RSS; some of them are on our side, and some are DPDK shortcomings, but we should work around them anyway. The DPDK problems are described at http://patchwork.dpdk.org/ml/archives/dev/2015-February/013300.html and as far as I can tell are still not addressed. The problem on our side is that we need to configure which RSS hash algorithm to use (i40e supports two, and the default is not the one the seastar code assumes).
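For context, the assumption in question is the Microsoft Toeplitz hash, which the native stack uses to predict which hardware queue a flow will land on. A self-contained sketch of that hash (the standard algorithm; not seastar's actual code):

#include <cstddef>
#include <cstdint>

// Standard Toeplitz hash over an input tuple (e.g. src/dst IPv4 address
// and ports, in network byte order). 'key' must be at least len + 4
// bytes long (the usual RSS key is 40 bytes).
uint32_t toeplitz_hash(const uint8_t* key, const uint8_t* data, size_t len) {
    uint32_t hash = 0;
    // Sliding 32-bit window over the key, advanced one bit per data bit.
    uint32_t window = (uint32_t(key[0]) << 24) | (uint32_t(key[1]) << 16) |
                      (uint32_t(key[2]) << 8)  |  uint32_t(key[3]);
    size_t key_byte = 4;
    for (size_t i = 0; i < len; ++i, ++key_byte) {
        for (int bit = 7; bit >= 0; --bit) {
            if (data[i] & (1u << bit)) {
                hash ^= window;
            }
            window <<= 1;
            if (key[key_byte] & (1u << bit)) {      // shift in the next key bit
                window |= 1;
            }
        }
    }
    return hash;
}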

httpd RST storm

$ wrk http://192.168.122.2:10000
Running 10s test @ http://192.168.122.2:10000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   840.01ms  142.60us 840.28ms   70.00%
    Req/Sec     5.40      1.26     6.00     80.00%
  110 requests in 10.00s, 364.43KB read
Requests/sec:     11.00
Transfer/sec:     36.44KB

With commit 2f42ea15e97f8f6ee002876a6233031f0fdad690 plus the following patch to send a large HTTP response:

--- a/apps/httpd/httpd.cc
+++ b/apps/httpd/httpd.cc
@@ -140,7 +140,8 @@ public:
             auto resp = std::make_unique<response>();
             resp->_response_line = "HTTP/1.1 200 OK\r\n";
             resp->_headers["Content-Type"] = "text/html";
-            resp->_body = "<html><head><title>this is the future</title></head><body><p>Future!!</p></body></html>";
+            resp->_body = std::string(1024*3, 'X');
             respond(std::move(resp));
         }
         future<size_t> write_body() {

I saw the following:

610  12.009139 192.168.122.1 -> 192.168.122.2 TCP 54 57381 > 10000 [RST] Seq=1 Win=0 Len=0
611  12.009143 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57382 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
612  12.009149 192.168.122.1 -> 192.168.122.2 TCP 54 57382 > 10000 [RST] Seq=1 Win=0 Len=0
613  12.009152 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57384 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
614  12.009159 192.168.122.1 -> 192.168.122.2 TCP 54 57384 > 10000 [RST] Seq=1 Win=0 Len=0
615  12.009162 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57386 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
616  12.009169 192.168.122.1 -> 192.168.122.2 TCP 54 57386 > 10000 [RST] Seq=1 Win=0 Len=0
617  12.009173 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57385 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
618  12.009180 192.168.122.1 -> 192.168.122.2 TCP 54 57385 > 10000 [RST] Seq=1 Win=0 Len=0
619  12.009184 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57383 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
620  12.009191 192.168.122.1 -> 192.168.122.2 TCP 54 57383 > 10000 [RST] Seq=1 Win=0 Len=0
621  12.409304 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57377 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
622  12.409343 192.168.122.1 -> 192.168.122.2 TCP 54 57377 > 10000 [RST] Seq=1 Win=0 Len=0
623  12.409358 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57378 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
624  12.409371 192.168.122.1 -> 192.168.122.2 TCP 54 57378 > 10000 [RST] Seq=1 Win=0 Len=0
625  12.409380 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57379 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
626  12.409389 192.168.122.1 -> 192.168.122.2 TCP 54 57379 > 10000 [RST] Seq=1 Win=0 Len=0
627  12.409393 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57380 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
628  12.409401 192.168.122.1 -> 192.168.122.2 TCP 54 57380 > 10000 [RST] Seq=1 Win=0 Len=0
629  12.409407 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57381 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
630  12.409415 192.168.122.1 -> 192.168.122.2 TCP 54 57381 > 10000 [RST] Seq=1 Win=0 Len=0
631  12.409419 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57382 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
632  12.409426 192.168.122.1 -> 192.168.122.2 TCP 54 57382 > 10000 [RST] Seq=1 Win=0 Len=0
633  12.409428 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57384 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
634  12.409435 192.168.122.1 -> 192.168.122.2 TCP 54 57384 > 10000 [RST] Seq=1 Win=0 Len=0
635  12.409438 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57386 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
636  12.409445 192.168.122.1 -> 192.168.122.2 TCP 54 57386 > 10000 [RST] Seq=1 Win=0 Len=0
637  12.409449 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57385 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
638  12.409455 192.168.122.1 -> 192.168.122.2 TCP 54 57385 > 10000 [RST] Seq=1 Win=0 Len=0
639  12.409459 192.168.122.2 -> 192.168.122.1 TCP 54 [TCP ZeroWindow] 10000 > 57383 [ACK] Seq=1 Ack=1 Win=0 [TCP CHECKSUM INCORRECT] Len=0
640  12.409466 192.168.122.1 -> 192.168.122.2 TCP 54 57383 > 10000 [RST] Seq=1 Win=0 Len=0
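In the trace above the server keeps sending zero-window ACKs to connections the client has already reset, and every such ACK provokes another RST. This suggests inbound RST segments are not tearing connection state down. A sketch of the RFC 793 rule (simplified; not seastar's tcp.hh):

#include <cstdint>

// Simplified RFC 793 RST processing: an acceptable RST silently destroys
// the connection; replying to it (e.g. with a zero-window ACK) just
// provokes another RST, producing the storm above.
struct segment { uint32_t seq; bool rst; };

enum class action { deliver, drop_connection, ignore };

action on_segment(const segment& s, uint32_t rcv_nxt, uint32_t rcv_wnd) {
    if (s.rst) {
        // Acceptability check simplified: the RST's seq must fall in the
        // receive window (mod-2^32 arithmetic).
        if (uint32_t(s.seq - rcv_nxt) <= rcv_wnd) {
            return action::drop_connection;   // drop state, send nothing back
        }
        return action::ignore;
    }
    return action::deliver;
}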

Tx hangs when TSO is enabled

master HEAD 934c6ac

The issue was found when running the "txtx" test of the tests/tcp_server application on top of DPDK in an SMP environment (4 CPUs) with default parameters (100 connections).

In a back-to-back setup the partner (a Linux box) consistently reports "rx_long_length_errors":

$ ethtool -S enp3s0f1 | grep err
     rx_errors: 0
     tx_errors: 0
     rx_over_errors: 0
     rx_crc_errors: 0
     rx_frame_errors: 0
     rx_fifo_errors: 0
     rx_missed_errors: 0
     tx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     rx_long_length_errors: 1188
     rx_short_length_errors: 0
     rx_csum_offload_errors: 0

and the transmitter stops due to Tx window shrinkage on some of the connections.

The issue reproduces with ~99% probability.
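The peer's rx_long_length_errors counter means over-sized frames are reaching the wire, i.e. the hardware is not actually segmenting. For TSO, DPDK requires the transmit path to mark each head mbuf roughly as in the sketch below (DPDK 2.x field names; the values are illustrative):

#include <rte_mbuf.h>

// Metadata a DPDK TSO transmit must set on the head mbuf (values are
// for plain Ethernet/IPv4/TCP without options). If any of this is
// missing, the un-segmented super-frame goes out as-is and the peer
// counts it as rx_long_length_errors.
void mark_for_tso(struct rte_mbuf* m, uint16_t mss) {
    m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IP_CKSUM | PKT_TX_IPV4;
    m->tso_segsz = mss;        // e.g. 1448 for a 1500-byte MTU
    m->l2_len = 14;            // Ethernet header
    m->l3_len = 20;            // IPv4 header
    m->l4_len = 20;            // TCP header
}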

Better distribution of Rx traffic to software queues

When we are running in an unbalanced mode, the distribution of Rx traffic from the CPUs that own a physical queue to the software queues should be optimized.

For example: with 16 physical queues and 24 cores, each of the 16 CPUs with a physical queue needs to distribute the same amount of traffic to the software queues, as sketched below.
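A sketch of the forwarding rule this implies (illustrative): a CPU that owns a hardware queue spreads flows over all smp cores by hash, rather than only over the queue owners:

#include <cstdint>

// Illustrative: map a packet's RSS hash to a destination core, spreading
// evenly over all cores (24) rather than only the 16 owning a HW queue.
unsigned dest_cpu(uint32_t rss_hash, unsigned smp_count /* 24 */) {
    return rss_hash % smp_count;
}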

running tcp_client on dpdk with smp (2 and up) does not execute

sudo build/release/tests/tcp_client --test ping --server 192.168.10.101:10000 --host-ipv4-addr 192.168.10.185 --dhcp 0 --gw-ipv4-addr 192.168.10.101 --dpdk-pmd --netmask-ipv4-addr 255.255.255.0 --smp 2 --conn 2 --network-stack native --collectd 0

Checking the wireshark traces, we only see:

Running as user "root" and group "root". This could be dangerous.
1 0.000000000 IntelCor_2d:39:88 -> Broadcast ARP 60 Who has 192.168.10.101? Tell 192.168.10.185
2 0.000028000 IntelCor_2d:39:88 -> Broadcast ARP 60 Who has 192.168.10.101? Tell 192.168.10.185
3 1.000059000 IntelCor_2d:39:88 -> Broadcast ARP 60 Who has 192.168.10.101? Tell 192.168.10.185
4 1.000074000 IntelCor_27:d3:72 -> IntelCor_2d:39:88 ARP 42 192.168.10.101 is at 68:05:ca:27:d3:72

Running tcp_client with the posix stack with the same params on the same setup (without dpdk) works.

httpd crash

This is a spinoff from issue #3, but unrelated to ApacheBench.

When running Seastar httpd and sending a request to it with this simple command, we get a crash:

echo GET / | nc 192.168.122.2 10000
#0  0x0000601000044750 in ?? ()
#1  0x0000000000416032 in close (this=0x60100000e3b0) at ./core/reactor.hh:564
#2  close (this=0x60100000e3b0) at ./core/reactor.hh:613
#3  http_server::connection::generate_response (this=0x60100000e380, 
    req=std::unique_ptr<http_request> containing 0x601000085f00)
    at apps/httpd/httpd.cc:166
#4  0x0000000000416439 in http_server::connection::read()::{lambda()#1}::operator()() const (__closure=0x7ffff77fca00) at apps/httpd/httpd.cc:77
#5  0x000000000041670f in _ZZN7promiseIIEE8scheduleIZN6futureIIEE4thenIZN11http_server10connection4readEvEUlvE_EENSt9result_ofIFT_vEE4typeEOS9_NSt9enable_ifIXsr9is_futureIISC_EE5valueEPvE4typeEEUlRT_E_EEvSD_EN15task_with_state3runEv ()
    at ./core/apply.hh:36
#6  0x000000000044a608 in reactor::run (this=0x7ffff77fdb20)
    at core/reactor.cc:334

This stack trace is using the network stack, but exactly the same bug (with a different frame #0) happens also for the Posix stack. The bug is also unrelated to the fact that I sent an HTTP 0.9 request (I tried specifying HTTP/1.0 on the request line and got the same bug).

SeaStar+DPDK tries to access an unbound NIC

Looks like SeaStar+DPDK tries to access a NIC that is not bound, and it causes a panic.
Confirmed on KVM with virtio-net.
This may be a DPDK problem rather than ours, but we had better make sure.

The case where you have 2 NICs but only the first NIC is bound:

Option: 18


Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:00:03.0 'Virtio network device' if= drv=virtio-pci unused=virtio_pci,igb_uio 
0000:00:0a.0 'Virtio network device' if= drv=virtio-pci unused=virtio_pci,igb_uio 

Other network devices
=====================
<none>

Enter PCI address of device to bind to IGB UIO driver: 0000:00:03.0
[syuu@f21 seastar]$ ifconfig -a
eth1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:15:03:c9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Seems EAL detects both NICs:

[syuu@f21 seastar]$ sudo env LD_LIBRARY_PATH=/home/syuu/dpdk/x86_64-native-linuxapp-gcc/lib/ ./build/release/apps/httpd/httpd --dpdk-pmd --nat-adapter --network-stack native --csum-offload off --collectd 0
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up memory...
[ 2380.984759] Bits 55-60 of /proc/PID/pagemap entries are about to stop being page-shift some time soon. See the linux/Documentation/vm/pagemap.txt for details.
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fc531e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fc531a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x7c00000 bytes
EAL: Virtual area found at 0x7fc529c00000 (size = 0x7c00000)
EAL: Requesting 64 pages of size 2MB from socket 0
EAL: TSC frequency is ~1895573 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: Master lcore 0 is ready (tid=39b33700;cpuset=[0])
EAL: lcore 1 is ready (tid=29bff700;cpuset=[1])
EAL: lcore 3 is ready (tid=28bfd700;cpuset=[3])
EAL: lcore 2 is ready (tid=293fe700;cpuset=[2])
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI device 0000:00:0a.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
ports number: 2
Port 0: max_rx_queues 1 max_tx_queues 1
Port 0: using 1 queue
LRO is off
Port 0 init ... done: 
Creating Tx mbuf pool 'dpdk_net_pktmbuf_pool0_tx' [1024 mbufs] ...
Creating Rx mbuf pool 'dpdk_net_pktmbuf_pool0_rx' [1024 mbufs] ...
Port 0: Changing HW FC settings is not supported

Checking link status 
Created DPDK device
done
Port 0 Link Up - speed 10000 Mbps - full-duplex
[ 2381.787992] tun: Universal TUN/TAP device driver, 1.6
[ 2381.789015] tun: (C) 1999-2004 Max Krasnyansky <[email protected]>
DHCP sending discover
DHCP sending discover
DHCP sending discover
DHCP Got offer for 192.168.122.226
DHCP sending request for 192.168.122.226
DHCP Got ack on request
DHCP  ip: 192.168.122.226
DHCP  nm: 255.255.255.0
DHCP  gw: 192.168.122.1
Seastar HTTP server listening on port 10000 ..

It causes a panic when rebooting:

[ 2508.097002] NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [avahi-daemon:416]
[ 2508.097002] Modules linked in: vhost_net tun vhost macvtap macvlan igb_uio(OE) uio snd_hda_codec_generic snd_hda_intel snd_hda_controller snd_hda_codec snd_hwdep snd_seq iosf_mbi snd_seq_device crct10dif_pclmul snd_pcm qxl ppdev crc32_pclmul ttm crc32c_intel drm_kms_helper snd_timer ghash_clmulni_intel serio_raw virtio_net virtio_console virtio_balloon virtio_rng parport_pc drm snd parport i2c_piix4 soundcore nfsd auth_rpcgss nfs_acl lockd grace sunrpc virtio_blk virtio_pci virtio_ring ata_generic virtio pata_acpi
[ 2508.097002] CPU: 2 PID: 416 Comm: avahi-daemon Tainted: G           OE  3.19.5-200.fc21.x86_64 #1
[ 2508.097002] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
[ 2508.097002] task: ffff8800b9431360 ti: ffff8800ba004000 task.ti: ffff8800ba004000
[ 2508.097002] RIP: 0010:[<ffffffffa01c031f>]  [<ffffffffa01c031f>] virtnet_send_command+0xff/0x160 [virtio_net]
[ 2508.097002] RSP: 0018:ffff8800ba007af8  EFLAGS: 00000246
[ 2508.097002] RAX: 0000000000000000 RBX: ffff8800ba3e7200 RCX: ffff8800ba631000
[ 2508.097002] RDX: 000000000000c150 RSI: ffff8800ba007afc RDI: ffff8800ba634000
[ 2508.097002] RBP: ffff8800ba007b98 R08: 0000000000000004 R09: ffff8800ba3e7200
[ 2508.097002] R10: ffff8800ba3e7200 R11: 0000000000008000 R12: ffff8800ba3e7200
[ 2508.097002] R13: ffff8800ba007b98 R14: ffff8800ba007b00 R15: ffff8800ba007ae8
[ 2508.097002] FS:  00007efd38a7b700(0000) GS:ffff8800bfd00000(0000) knlGS:0000000000000000
[ 2508.097002] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2508.097002] CR2: 0000000000953002 CR3: 0000000037375000 CR4: 00000000001406e0
[ 2508.097002] Stack:
[ 2508.097002]  000000000000ff00 ffff8800ba007b20 ffff8800ba007bb0 ffff8800ba007b40
[ 2508.097002]  0000000000000000 ffffea0002e801c2 0000000200000afa 0000000000000000
[ 2508.097002]  0000000000000000 ffffea0002e801c2 0000000100000af9 0000000000000000
[ 2508.097002] Call Trace:
[ 2508.097002]  [<ffffffffa01c2073>] virtnet_set_rx_mode+0xb3/0x300 [virtio_net]
[ 2508.097002]  [<ffffffff81662969>] ? __hw_addr_del_entry+0xa9/0xe0
[ 2508.097002]  [<ffffffff8165d607>] __dev_set_rx_mode+0x57/0xa0
[ 2508.097002]  [<ffffffff81662be9>] __dev_mc_del+0x59/0x70
[ 2508.097002]  [<ffffffff81662c10>] dev_mc_del+0x10/0x20
[ 2508.097002]  [<ffffffff816da830>] igmp_group_dropped+0x1c0/0x250
[ 2508.097002]  [<ffffffff816dadd3>] ip_mc_dec_group+0xc3/0x130
[ 2508.097002]  [<ffffffff816daee6>] ip_mc_leave_group+0xa6/0x120
[ 2508.097002]  [<ffffffff816a36d7>] do_ip_setsockopt.isra.11+0x347/0xe30
[ 2508.097002]  [<ffffffff81222403>] ? pipe_write+0x393/0x450
[ 2508.097002]  [<ffffffff8125aa0c>] ? fsnotify+0x3ac/0x580
[ 2508.097002]  [<ffffffff81322072>] ? sock_has_perm+0x72/0x90
[ 2508.097002]  [<ffffffff816a41f0>] ip_setsockopt+0x30/0xa0
[ 2508.097002]  [<ffffffff816ca7bb>] udp_setsockopt+0x1b/0x30
[ 2508.097002]  [<ffffffff81640c44>] sock_common_setsockopt+0x14/0x20
[ 2508.097002]  [<ffffffff8163f840>] SyS_setsockopt+0x80/0xf0
[ 2508.097002]  [<ffffffff817752c9>] system_call_fastpath+0x12/0x17
[ 2508.097002] Code: c5 68 ff ff ff e8 a2 f4 e4 ff 48 8b 7b 08 e8 d9 f3 e4 ff 84 c0 75 14 eb 27 0f 1f 00 48 8b 7b 08 e8 f7 ec e4 ff 84 c0 75 17 f3 90 <48> 8b 7b 08 48 8d b5 64 ff ff ff e8 81 f1 e4 ff 48 85 c0 74 dc 
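One way to isolate whether DPDK is at fault is to restrict the EAL to the bound device only, so the PMD never probes 0000:00:0a.0. A sketch using the EAL's PCI whitelist option (-w/--pci-whitelist is a standard EAL option; the surrounding plumbing here is illustrative):

#include <rte_eal.h>

// Whitelist only the NIC bound to igb_uio, so rte_eal_init() never
// probes 0000:00:0a.0 (the kernel-owned virtio device).
int init_eal_bound_only() {
    const char* argv[] = { "httpd", "-c", "0xf", "-n", "4",
                           "-w", "0000:00:03.0" };
    return rte_eal_init(sizeof(argv) / sizeof(argv[0]),
                        const_cast<char**>(argv));
}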

TCP PUSH

@asias

Need to ensure we set the PSH bit if we're emptying the tx queue. PSH ensures that the remote side will terminate LRO and DMA the packet to the remote processor.
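A sketch of the rule (illustrative; not seastar's tcp.hh):

#include <cstdint>

// Set PSH on the segment that drains everything currently queued for
// transmit, so the receiver terminates LRO and pushes the data up
// immediately.
uint8_t tcp_flags_for_segment(bool tx_queue_now_empty, uint8_t flags) {
    constexpr uint8_t PSH = 0x08;   // TCP PSH flag bit
    if (tx_queue_now_empty) {
        flags |= PSH;
    }
    return flags;
}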

Assertion `nr_pages' failed

I ran into the following assertion failure during startup of a server using seastar.

core/memory.cc:371: void memory::cpu_pages::free_span_no_merge(uint32_t, uint32_t): Assertion `nr_pages' failed.
gdb$ bt
#0  0x00007ffff4244cc9 in __GI_raise (sig=sig@entry=0x6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffff42480d8 in __GI_abort () at abort.c:89
#2  0x00007ffff423db86 in __assert_fail_base (fmt=0x7ffff438e830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0xe7425d "nr_pages", file=file@entry=0xe7424e "core/memory.cc", line=line@entry=0x173, function=function@entry=0xe745c0 <memory::cpu_pages::free_span_no_merge(unsigned int, unsigned int)::__PRETTY_FUNCTION__> "void memory::cpu_pages::free_span_no_merge(uint32_t, uint32_t)") at assert.c:92
#3  0x00007ffff423dc32 in __GI___assert_fail (assertion=0xe7425d "nr_pages", file=0xe7424e "core/memory.cc", line=0x173, function=0xe745c0 <memory::cpu_pages::free_span_no_merge(unsigned int, unsigned int)::__PRETTY_FUNCTION__> "void memory::cpu_pages::free_span_no_merge(uint32_t, uint32_t)") at assert.c:101
#4  0x00000000004d983c in memory::cpu_pages::free_span_no_merge (this=<optimized out>, span_start=<optimized out>, nr_pages=<optimized out>) at core/memory.cc:371
#5  0x00000000004d06de in ~temporary_buffer (this=0x7fffffffccd0, __in_chrg=<optimized out>) at ./core/temporary_buffer.hh:58
#6  apply (args=<unknown type in ..., CU 0x2e70f1, DIE 0x392847>, func=<unknown type in ..., CU 0x2e70f1, DIE 0x392835>) at core/apply.hh:34
#7  apply<file::dma_read_bulk(uint64_t, size_t)::<lambda(size_t)> mutable::<lambda()> mutable [with CharType = char]::<lambda(auto:10)>, temporary_buffer<char> > (args=<unknown type in ..., CU 0x2e70f1, DIE 0x367884>, func=<unknown type in ..., CU 0x2e70f1, DIE 0x367870>) at core/apply.hh:42
#8  apply<file::dma_read_bulk(uint64_t, size_t)::<lambda(size_t)> mutable::<lambda()> mutable [with CharType = char]::<lambda(auto:10)>, temporary_buffer<char> > (args=<unknown type in ..., CU 0x2e70f1, DIE 0x377c96>, func=<unknown type in ..., CU 0x2e70f1, DIE 0x377c83>) at core/future.hh:1083
#9  operator()<future_state<temporary_buffer<char> > > (state=<unknown type in ..., CU 0x2e70f1, DIE 0x37f08b>, __closure=0x600000e023a8) at core/future.hh:779
#10 _ZN12continuationIZN6futureII16temporary_bufferIcEEE4thenIZZZN4file13dma_read_bulkIcEES0_IIS1_IT_EEEmmENUlmE_clEmENUlvE0_clEvEUlS7_E_S0_IIEEEET0_OT_EUlOS7_E_IS2_EE3runEv (this=0x600000e02380) at core/future.hh:359
#11 0x000000000046f9f2 in reactor::run_tasks (this=0x7423, this@entry=0x6000001f1000, tasks=..., quota=0x6) at core/reactor.cc:1093
#12 0x000000000049650b in reactor::run (this=0x6000001f1000) at core/reactor.cc:1190
#13 0x00000000004e8c4a in app_template::run_deprecated(int, char**, std::function<void ()>&&) (this=this@entry=0x7fffffffd530, ac=ac@entry=0xd, av=av@entry=0x7fffffffd778, func=func@entry=<unknown type in ..., CU 0x5001c9, DIE 0x5a7095>) at core/app-template.cc:122
#14 0x000000000041dd62 in main (ac=0xd, av=0x7fffffffd778) at main.cc:279

Infinite recursion when throwing std::bad_alloc from seastar's allocator

When throwing std::bad_alloc, the runtime calls __cxa_allocate_exception, which in turn calls malloc() to allocate the exception object. If we are throwing std::bad_alloc because a small allocation failed, that malloc() will likely fail too. libstdc++ has a fallback emergency pool which it uses in case malloc() fails:

__cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) _GLIBCXX_NOTHROW
{
  void *ret;

  thrown_size += sizeof (__cxa_refcounted_exception);
  ret = malloc (thrown_size);

  if (!ret)
    ret = emergency_pool.allocate (thrown_size);

  if (!ret)
    std::terminate ();

...

Because seastar's malloc() is built on top of the throwing allocate(), we never let this emergency pool kick in, but rather recurse until SIGSEGV.
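The shape of the fix, as a minimal sketch (try_allocate is hypothetical and stands in for seastar's internal allocation path):

#include <cstddef>
#include <new>

// Hypothetical non-throwing internal entry point; here it just
// simulates exhaustion.
static void* try_allocate(std::size_t n) noexcept {
    (void)n;
    return nullptr;
}

// malloc() must fail by returning nullptr, never by throwing, so that
// __cxa_allocate_exception can fall back to libstdc++'s emergency pool.
extern "C" void* malloc(std::size_t n) {
    return try_allocate(n);
}

// The throwing C++ path sits on top of the non-throwing one, not the
// other way around:
void* operator new(std::size_t n) {
    if (void* p = try_allocate(n)) {
        return p;
    }
    throw std::bad_alloc();   // safe: malloc() above cannot recurse here
}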

Running memcached on intel with SMP=24 distributes incoming traffic only to CPUs 0..15

I ran memcached from seastar-dev/linearize on Intel servers

on intel1

sudo build/release/apps/memcached/memcached --network-stack native --dhcp 0 --host-ipv4-addr 192.168.20.101 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 192.168.20.185 --smp 24 --dpdk-pmd --collectd 1 --collectd-address 192.168.20.185:25826 --collectd-hostname=intel1-1

on intel2

for i in {1..48} ; do memaslap -s 192.168.20.101:11211 -t 12000s -X 64 -T 1 -c 64 & done 

When checking incoming traffic via http://system.cloudius:18080/ for machine intel1-1:

  • network-18->total-operations-rx-packets: no traffic is received on this cpu, it is 0
  • network-12->total-operations-rx-packets is ~200K

Please note that not all collectd information is available for all CPUs; it seems that some packets are dropped along the way.

When booting the memcached process the following is printed:

.
.
PMD: eth_i40e_dev_init(): FW 4.1 API 1.1 NVM 04.01.00 eetrack 80001121
PMD: eth_i40e_dev_init(): Failed to stop lldp
PMD: i40e_pf_parameter_init(): Max supported VSIs:130
PMD: i40e_pf_parameter_init(): PF queue pairs:64
PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
ports number: 1
Port 0: max_rx_queues 316 max_tx_queues 316
Port 0: using 24 queues
Port 0: RSS table size is 512
RX checksum offload supported
TX ip checksum offload supported
TX TCP&UDP checksum offload supported
Port 0 init ... done: 
Creating mbuf pool 'dpdk_net_pktmbuf_pool0' [3072 mbufs] ...
.
.
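The boot log above shows "RSS table size is 512" with 24 queues, so one suspect is the redirection table contents: if the RETA only references queues 0..15, CPUs 16..23 never see traffic. A sketch of reprogramming the table round-robin over all queues with the DPDK RETA API (illustrative values):

#include <rte_ethdev.h>

// Spread the 512-entry redirection table round-robin over all 24 Rx
// queues, so CPUs 16..23 also receive traffic.
int spread_reta(uint8_t port, uint16_t reta_size /* 512 */,
                uint16_t nr_queues /* 24 */) {
    struct rte_eth_rss_reta_entry64 reta[512 / RTE_RETA_GROUP_SIZE] = {};
    for (uint16_t i = 0; i < reta_size; ++i) {
        reta[i / RTE_RETA_GROUP_SIZE].mask |= 1ULL << (i % RTE_RETA_GROUP_SIZE);
        reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = i % nr_queues;
    }
    return rte_eth_dev_rss_reta_update(port, reta, reta_size);
}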

Replace libuv in node.js

This looks to be an excellent replacement for the single-threaded libuv event loop in node.js. Are there any plans to integrate this? It should deliver some massive performance gains.

futurize<void>::apply may return immediately

There is a problem in futurize::apply that happens when the function being passed returns a future.

Because the code reads as func(args); return make_ready_future<>();, if func itself returns a future, we won't wait for that future.
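Reduced to a sketch (simplified; not the actual futurize code), assuming func returns future<>:

#include "core/future.hh"

// Simplified sketch of the bug:
template <typename Func>
future<> apply_buggy(Func&& func) {
    func();                         // the returned future<> is dropped,
    return make_ready_future<>();   // so the caller resumes immediately
}

// Sketch of the fix: propagate the future that func returns.
template <typename Func>
future<> apply_fixed(Func&& func) {
    return func();                  // caller now waits for func's future
}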

variables with static storage duration do not work well with seastar

Seastar's allocator has an assertion which checks that the memory block is freed on the same CPU on which it was allocated. This is reasonable because all allocations in seastar need to be CPU-local.

The problem is that boost libraries (program_options, unit_testing_framework) make heavy use of lazily-allocated static variables, for instance:

    const variable_value&
    variables_map::get(const std::string& name) const
    {
        static variable_value empty;
        // ...
    }

Such a variable will be allocated in the current thread but freed in the main thread when exit handlers are executed.

This results in:

output_stream_test: core/memory.cc:415: void memory::cpu_pages::free(void*): Assertion `((reinterpret_cast<uintptr_t>(ptr) >> cpu_id_shift) & 0xff) == cpu_id' failed.
Aborted (core dumped)

This can be worked around with a really nasty hack: trigger the initialization in the main thread by making dummy calls. This solution is not satisfactory, to say the least.
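The hack looks roughly like this (illustrative; the dummy key is hypothetical):

#include <boost/program_options.hpp>

// Workaround sketch: operator[] on a missing key returns a reference to
// the function-local 'static variable_value empty', so touching it here
// constructs (and registers the exit-time destructor for) that static
// on the main thread instead of on some other reactor thread.
void touch_boost_statics(const boost::program_options::variables_map& vm) {
    (void)vm["__dummy__"];
}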

Incorrect default processor count logic

When Seastar is run on a machine with hyperthreads with default parameters, it fails with:

terminate called after throwing an instance of 'std::runtime_error'
  what():  insufficient processing units

The problem is that if you have, for example, 12 cores and 24 hyperthreads, configuration.cpus is set, by default, to 24, yet resource::allocate() (resource.cc) checks, and fails with the above exception, if that is higher than 12.
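One possible direction (a sketch; 'topo' is assumed to be loaded elsewhere): derive the default from the same hwloc quantity that resource::allocate() later validates against, so the default can never exceed it:

#include <hwloc.h>
#include <algorithm>

// Sketch: default to (and clamp by) the hwloc PU count that
// resource::allocate() checks against.
unsigned default_nr_cpus(hwloc_topology_t topo, unsigned requested /* 0 = auto */) {
    unsigned available = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
    return requested ? std::min(requested, available) : available;
}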

running tcp_client with dpdk in UP mode throws an assert in dpdk

The following assert is thrown

tcp_client: net/dpdk.cc:184: virtual unsigned int dpdk::dpdk_device::hash2qid(uint32_t): Assertion `_redir_table.size()' failed.

sudo build/release/tests/tcp_client --test ping --server 192.168.10.101:10000 --host-ipv4-addr 192.168.10.185 --dhcp 0 --gw-ipv4-addr 192.168.10.101 --dpdk-pmd --netmask-ipv4-addr 255.255.255.0 --smp 1 --conn 2 --network-stack native --collectd 0

Running tcp_client with the posix stack works on the same setup (without dpdk).
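The assert suggests hash2qid() indexes a redirection table that is never populated in the single-queue UP case. A defensive sketch (illustrative; not the actual dpdk.cc code):

#include <cstdint>
#include <vector>

// With a single queue there is nothing to redirect, so fall back to
// queue 0 instead of asserting on an empty table.
unsigned hash2qid_sketch(const std::vector<uint8_t>& redir_table, uint32_t hash) {
    if (redir_table.empty()) {
        return 0;
    }
    return redir_table[hash % redir_table.size()];
}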

Seastar crashes when DHCP renews lease

I have run Seastar's httpd on OSv on KVM (these details hopefully don't matter), with native network stack, and ran "wrk" on it (./wrk -c 100 -t1 --latency http://192.168.122.89:10000/) which worked fine, and then forgot all about this server. When I came back to it after a long time of inactivity, I got the following message:

Assertion failed: _state == state::future (./core/future.hh: void future_state::set(A&& ...) [with A = {}; T = {}]: 145)

In GDB, I see this stack trace. It seems the DHCP lease renew code reuses a future, causing this bug.

#7  set_value<> (this=0xffff900002daea60) at ./core/future.hh:222
#8  net::native_network_stack::on_dhcp (this=0xffff900002dae000,
    success=<optimized out>, res=...) at net/native-stack.cc:142
#9  0x0000100000cd88ac in operator() (res=..., success=<optimized out>,
    __closure=0x2000002ff450) at net/native-stack.cc:157
#10 apply (func=...,
    args=<error reading variable: access outside bounds of object referenced via synthetic pointer>) at ./core/apply.hh:36
#11 apply<net::native_network_stack::on_dhcp(bool, const net::dhcp::lease&)::<lambda()>::<lambda(bool, const net::dhcp::lease&)>, bool, net::dhcp::lease>(net::native_network_stack::<lambda()>::<lambda(bool, const net::dhcp::lease&)>, <unknown type in /home/nyh/seastar/build/release/apps/httpd/httpd, CU 0x3d19eb, DIE 0x47a7d0>) (func=..., args=<optimized out>) at ./core/apply.hh:43
#12 0x0000100000cd904f in _ZZN6futureIIbN3net4dhcp5leaseEEE4thenIZZNS0_20native_network_stack7on_dhcpEbRKS2_ENKUlvE0_clEvEUlbS7_E_EES_IIEEOT_NSt9enable_ifIXsrSt7is_sameINSt9result_ofIFSB_ObOS2_EE4typeEvE5valueEPvE4typeEENUlRT_E_clI12future_stateIIbS2_EEEEDaRSB_ () at ./core/future.hh:364
#13 0x0000100000cf9f59 in reactor::run (this=0xffff900002908018)
    at core/reactor.cc:408
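The mechanism, reduced to a minimal sketch against seastar's promise API (names here are illustrative, not net/dhcp.cc's):

#include "core/future.hh"

// Sketch of the suspected bug: one promise fulfilled twice.
promise<> dhcp_done;

void on_lease_acquired() {
    dhcp_done.set_value();   // fine the first time
}

void on_lease_renewed() {
    on_lease_acquired();     // second set_value() on the same promise:
                             // Assertion `_state == state::future' failed
}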

ubsan makes debug-build test executables very large

As explained in commit 3dd4bd2 we strip the test executables, because the debug information makes each one unacceptably large. We strip both the "release" and "debug" tests.

Stripping made the release executables acceptably small, but the debug-mode tests are still very large. For example, for the tcp_client test, we have

1.4M    build/release/tests/tcp_client
38M build/debug/tests/tcp_client

It turns out that the reason for most of the debug executable's size is ubsan (the undefined behavior sanitizer), which is enabled in the "debug" build. I don't know why, but ubsan adds a very large data section to the executable, which explains much of the executable's increased size. For example:

   text    data     bss     dec     hex filename
12360358    26861464      11409 39233231    256a6cf build/debug/tests/tcp_client
1424669    4512   10584 1439765  15f815 build/release/tests/tcp_client

As you can see above, the text section of the executable is much larger in the debug build. That was to be expected, because the debug build removes all optimizations, which (especially in C++) keeps tons of code that should have been optimized away. But this increase is dwarfed by the dramatically increased data section, from 4 KB to 26 MB, storing something related to ubsan. I don't know what it stores, or whether it can somehow be reduced or eliminated (without eliminating ubsan, of course).

The very large data section can be seen in the individual object files too. For example:

   text    data     bss     dec     hex filename
1803825 3153480     368 4957673  4ba5e9 build/debug/tests/tcp_client.o
 216723       8     101  216832   34f00 build/release/tests/tcp_client.o

I checked the different sanitizer options, and "-fsanitize=undefined" is responsible for all of the huge data segment (with only the other sanitizers enabled, no such increase is seen).

Excessive retransmits due to TSO

A minor issue with retransmits: if we lose the tail end of a large TSO packet, we retransmit the entire thing, so we can cause even more congestion.

A fix would involve not storing the TCP headers in the retransmit queue, so we can trim the acknowledged part of the packet and generate a new packet containing only unacknowledged data, as sketched below.

This is probably best done together with selective acknowledgement support.
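A sketch of the data-structure change this implies (illustrative; not the actual tcp.hh): keep header-less payload plus sequence bounds in the retransmit queue, and trim on every ACK:

#include <cstdint>
#include <deque>
#include <vector>

// Header-less retransmit queue entries that can be trimmed on ACK; the
// surviving bytes are re-packetized with fresh TCP headers when (and if)
// they are retransmitted.
struct unacked_chunk {
    uint32_t seq;                 // sequence number of payload[0]
    std::vector<uint8_t> payload;
};

void on_ack(std::deque<unacked_chunk>& q, uint32_t ack) {
    while (!q.empty()) {
        auto& c = q.front();
        uint32_t end = c.seq + c.payload.size();
        if (int32_t(ack - end) >= 0) {          // fully acked (mod-2^32)
            q.pop_front();
        } else if (int32_t(ack - c.seq) > 0) {  // partially acked: trim prefix
            uint32_t n = ack - c.seq;
            c.payload.erase(c.payload.begin(), c.payload.begin() + n);
            c.seq = ack;
            break;
        } else {
            break;
        }
    }
}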

posix_file_impl::list_directory is not working on XFS

I have the following directory structure on XFS

-sh-4.3$ ls -lstr
total 0
0 drwxrwxrwx 4 root    root  32 Jul 12 07:33 shlomi
0 drwxrwxrwx 3 jenkins root 133 Aug  3 21:06 jenkins
0 drwxr-xr-x 2 shlomi  1015   6 Aug  4 06:31 a
0 drwxr-xr-x 2 shlomi  1015   6 Aug  4 06:31 b
0 drwxr-xr-x 2 shlomi  1015   6 Aug  4 06:31 c

When running seastar/tests/directory_test:

-sh-4.3$ cd /data2
-sh-4.3$ /home/shlomi/urchin/seastar/build/release/tests/directory_test 
DE . DT 0x4
DE .. DT 0x4
DE shlomi DT 0x0
DE shlomi DT 0x0
shlomi
DE jenkins DT 0x0
DE jenkins DT 0x0
jenkins
DE a DT 0x0
DE a DT 0x0
a
DE b DT 0x0
DE b DT 0x0
b
DE c DT 0x0
DE c DT 0x0
c
WARNING: closing file in reactor thread

The entries are found, but their type is not extracted correctly.

I used the following patch to print the raw d_type byte:

-sh-4.3$ git diff
diff --git a/core/reactor.cc b/core/reactor.cc
index 8cdcbed..f7c98e5 100644
--- a/core/reactor.cc
+++ b/core/reactor.cc
@@ -801,6 +801,8 @@ posix_file_impl::list_directory(std::function<future<> (directory_entry de)> nex
             auto start = w->buffer + w->current;
             auto de = reinterpret_cast<linux_dirent*>(start);
             std::experimental::optional<directory_entry_type> type;
+            unsigned char dt_type = start[de->d_reclen - 1];
+            std::cout << "DE " << de->d_name << " DT 0x" << std::hex << (int)dt_type << "\n";
             switch (start[de->d_reclen - 1]) {
             case DT_BLK:
                 type = directory_entry_type::block_device;
@@ -821,6 +823,7 @@ posix_file_impl::list_directory(std::function<future<> (directory_entry de)> nex
                 type = directory_entry_type::socket;
                 break;
             default:
+                std::cout << "DE " << de->d_name << " DT 0x" << std::hex << (int)dt_type << "\n";
                 // unknown, ignore
                 ;
             }
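For what it's worth, XFS (unlike ext4) returns DT_UNKNOWN (0x0) from getdents for these entries, and the switch's default case silently drops the type. The usual fix is to fall back to a stat call for DT_UNKNOWN entries; a sketch (in seastar proper the stat would have to be routed through the syscall thread, since it blocks):

#include <dirent.h>
#include <fcntl.h>
#include <sys/stat.h>

#include <experimental/optional>

// Resolve DT_UNKNOWN (as XFS returns here) with a stat on the entry
// instead of silently dropping the type.
std::experimental::optional<unsigned char>
resolve_d_type(int dirfd, const char* name, unsigned char d_type) {
    if (d_type != DT_UNKNOWN) {
        return d_type;
    }
    struct stat st;
    if (fstatat(dirfd, name, &st, AT_SYMLINK_NOFOLLOW) < 0) {
        return {};                   // still unknown on error
    }
    return IFTODT(st.st_mode);       // glibc macro mapping S_IF* to DT_*
}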

This was on HEAD:

-sh-4.3$ git log -1
commit 947619e
Merge: 3cf6bee d48477b
Author: Avi Kivity [email protected]
Date: Sun Aug 2 19:09:02 2015 +0300

Merge "net: add statistics counter for linearization events" from Vlad

New counters have been added to net::qp, net::tcp and net::ipv4.

The new counters are exposed via collectd interface.

For comparison, running directory_test on ext4:

-sh-4.3$ cd /boot/
-sh-4.3$ /home/shlomi/urchin/seastar/build/release/tests/directory_test 
DE . DT 0x4
DE .. DT 0x4
DE initramfs-3.18.7-200.fc21.x86_64kdump.img DT 0x8
initramfs-3.18.7-200.fc21.x86_64kdump.img
DE config-3.18.6-200.fc21.x86_64 DT 0x8
config-3.18.6-200.fc21.x86_64
DE .vmlinuz-3.18.7-200.fc21.x86_64.hmac DT 0x8
.vmlinuz-3.18.7-200.fc21.x86_64.hmac
DE .vmlinuz-3.18.6-200.fc21.x86_64.hmac DT 0x8
.vmlinuz-3.18.6-200.fc21.x86_64.hmac
DE vmlinuz-0-rescue-2e99f913871b4635bcca4758b6de2f2d DT 0x8
vmlinuz-0-rescue-2e99f913871b4635bcca4758b6de2f2d
DE vmlinuz-4.0.4-202.fc21.x86_64+debug DT 0x8
vmlinuz-4.0.4-202.fc21.x86_64+debug
DE grub DT 0x4
grub
DE initramfs-3.18.6-200.fc21.x86_64.img DT 0x8
initramfs-3.18.6-200.fc21.x86_64.img
DE initramfs-0-rescue-2e99f913871b4635bcca4758b6de2f2d.img DT 0x8
initramfs-0-rescue-2e99f913871b4635bcca4758b6de2f2d.img
DE System.map-3.18.6-200.fc21.x86_64 DT 0x8
System.map-3.18.6-200.fc21.x86_64
DE extlinux DT 0x4
extlinux
DE config-4.0.4-202.fc21.x86_64 DT 0x8
config-4.0.4-202.fc21.x86_64
DE initramfs-4.0.4-202.fc21.x86_64kdump.img DT 0x8
initramfs-4.0.4-202.fc21.x86_64kdump.img
DE config-3.18.7-200.fc21.x86_64 DT 0x8
config-3.18.7-200.fc21.x86_64
DE initramfs-4.0.4-202.fc21.x86_64.img DT 0x8
initramfs-4.0.4-202.fc21.x86_64.img
DE vmlinuz-4.0.4-202.fc21.x86_64 DT 0x8
vmlinuz-4.0.4-202.fc21.x86_64
DE lost+found DT 0x4
lost+found
DE System.map-3.18.7-200.fc21.x86_64 DT 0x8
System.map-3.18.7-200.fc21.x86_64
DE grub2 DT 0x4
grub2
DE initramfs-3.18.6-200.fc21.x86_64kdump.img DT 0x8
initramfs-3.18.6-200.fc21.x86_64kdump.img
DE vmlinuz-3.18.6-200.fc21.x86_64 DT 0x8
vmlinuz-3.18.6-200.fc21.x86_64
DE .vmlinuz-4.0.4-202.fc21.x86_64+debug.hmac DT 0x8
.vmlinuz-4.0.4-202.fc21.x86_64+debug.hmac
DE initrd-plymouth.img DT 0x8
initrd-plymouth.img
DE System.map-4.0.4-202.fc21.x86_64 DT 0x8
System.map-4.0.4-202.fc21.x86_64
DE vmlinuz-3.18.7-200.fc21.x86_64 DT 0x8
vmlinuz-3.18.7-200.fc21.x86_64
DE initramfs-3.18.7-200.fc21.x86_64.img DT 0x8
initramfs-3.18.7-200.fc21.x86_64.img
DE config-4.0.4-202.fc21.x86_64+debug DT 0x8
config-4.0.4-202.fc21.x86_64+debug
DE .vmlinuz-4.0.4-202.fc21.x86_64.hmac DT 0x8
.vmlinuz-4.0.4-202.fc21.x86_64.hmac
DE System.map-4.0.4-202.fc21.x86_64+debug DT 0x8
System.map-4.0.4-202.fc21.x86_64+debug
WARNING: closing file in reactor thread

support running on hyperthreads

Do we want to support running seastar on hyperthreads?

Running seastar on a bare metal machine with # queues > # cpus, one gets:

Terminate called after throwing an instance of 'std::runtime_error'
what(): insufficient processing units

s1% sudo ./build/release/apps/httpd/httpd --network-stack native --dpdk-pmd 1 --dhcp 0 --host-ipv4-addr 10.120.6.132 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 10.120.6.134
terminate called after throwing an instance of 'std::runtime_error'
what(): insufficient processing units
s1% sudo apt-getistall hwlock
sudo: apt-getistall: command not found
s1% sudo apt-get install hwlock
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package hwlock
s1% sudo apt-get install hwlock-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package hwlock-dev
s1% sudo gdb --args ./build/release/apps/httpd/httpd --network-stack native --dpdk-pmd 1 --dhcp 0 --host-ipv4-addr 10.120.6.132 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 10.120.6.134
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./build/release/apps/httpd/httpd...done.
(gdb) run
Starting program: /home/shlomi/seastar/build/release/apps/httpd/httpd --network-stack native --dpdk-pmd 1 --dhcp 0 --host-ipv4-addr 10.120.6.132 --netmask-ipv4-addr 255.255.255.0 --gw-ipv4-addr 10.120.6.134
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
terminate called after throwing an instance of 'std::runtime_error'
what(): insufficient processing units

Program received signal SIGABRT, Aborted.
0x00007ffff4dfbbb9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) up
#1 0x00007ffff4dfefc8 in __GI_abort () at abort.c:89

89 abort.c: No such file or directory.
(gdb) up
#2 0x00007ffff74bac1d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6

(gdb) up
#3 0x00007ffff74b8ca6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6

(gdb) up
#4 0x00007ffff74b8cf1 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6

(gdb) up
#5 0x00007ffff74b8f08 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6

(gdb) up
#6 0x0000000000486ba7 in resource::allocate (c=...) at core/resource.cc:66

66 throw std::runtime_error("insufficient processing units");
(gdb) list
61 throw std::runtime_error("insufficient physical memory");
62 }
63 auto available_procs = hwloc_get_nbobjs_by_depth(topology, HWLOC_OBJ_PU);
64 unsigned procs = c.cpus.value_or(available_procs);
65 if (procs > available_procs) {
66 throw std::runtime_error("insufficient processing units");
67 }
68 auto mem_per_proc = align_down<size_t>(mem / procs, 2 << 20);
69 std::vector<hwloc_cpuset_t> cpu_sets{procs};
70 auto root = hwloc_get_root_obj(topology);
(gdb) print procs
$1 =
(gdb) print available_procs
$2 =
(gdb) exit
Undefined command: "exit". Try "help".
(gdb) quit
A debugging session is active.

Inferior 1 [process 17983] will be killed.

Quit anyway? (y or n) y
