iqiyi / dpvs

DPVS is a high performance Layer-4 load balancer based on DPDK.

License: Other

Makefile 0.85% C 86.21% Shell 1.58% Roff 2.30% Dockerfile 0.15% M4 1.77% Python 0.34% CSS 0.05% Perl 0.97% Go 5.79%
dpdk lvs fullnat snat balancer kernel-bypass load-balancer nat64 ipv6

dpvs's Introduction

Build Run

dpvs-logo.png

Introduction

DPVS is a high performance Layer-4 load balancer based on DPDK. It's derived from the Linux Virtual Server (LVS) project and its modification alibaba/LVS.

Notes: The name DPVS comes from "DPDK-LVS".

dpvs.png

Several techniques are applied for high performance:

  • Kernel by-pass (user space implementation).
  • Share-nothing, per-CPU for key data (lockless).
  • RX Steering and CPU affinity (avoid context switch).
  • Batching TX/RX.
  • Zero Copy (avoid packet copy and syscalls).
  • Polling instead of interrupt.
  • Lockless message for high performance IPC.
  • Other techs enhanced by DPDK.

Major features of DPVS include:

  • L4 Load Balancer, including FNAT, DR, Tunnel, DNAT modes, etc.
  • SNAT mode for Internet access from internal network.
  • NAT64 forwarding in FNAT mode for quick IPv6 adaptation without application changes.
  • Various scheduling algorithms like RR, WLC, WRR, MH (Maglev Hashing), Conhash (Consistent Hashing), etc.
  • User-space Lite IP stack (IPv4/IPv6, Routing, ARP, Neighbor, ICMP ...).
  • Support for KNI, VLAN, Bonding, and Tunneling for different IDC environments.
  • Security features, including TCP syn-proxy, Conn-Limit, and black-list/white-list.
  • QoS: Traffic Control.

DPVS feature modules are illustrated in the following picture.

modules

Quick Start

Test Environment

This quick start is tested with the environment below.

  • Linux Distribution: CentOS 7.2
  • Kernel: 3.10.0-327.el7.x86_64
  • CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
  • NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 03)
  • Memory: 64G with two NUMA nodes.
  • GCC: gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)

Other environments should also work as long as DPDK does; please check dpdk.org for more info.

Notes: For DPVS to work properly with multiple cores, the NIC's rte_flow implementation must support at least the four items "ipv4, ipv6, tcp, udp" and the actions "drop, queue".

Clone DPVS

$ git clone https://github.com/iqiyi/dpvs.git
$ cd dpvs

Well, let's start from DPDK then.

DPDK setup.

Currently, dpdk-stable-20.11.1 is recommended for DPVS, and DPDK versions earlier than dpdk-20.11 are no longer supported. If you are still using an earlier DPDK version, such as dpdk-stable-17.11.2, dpdk-stable-17.11.6 or dpdk-stable-18.11.2, please use an earlier DPVS release, such as v1.8.10.

Notes: You can skip this section if you are experienced with DPDK, and refer to the link for details.

$ wget https://fast.dpdk.org/rel/dpdk-20.11.1.tar.xz   # download from dpdk.org if link failed.
$ tar xf dpdk-20.11.1.tar.xz

DPDK patches

There are some patches for DPDK to support extra features needed by DPVS. Apply them if needed. For example, there's a patch for the DPDK kni driver that enables hardware multicast; apply it if you plan to launch ospfd on a kni device.

Notes: The steps below assume we are in the DPVS root directory and dpdk-stable-20.11.1 is under it. Please note this layout is not mandatory, just convenient.

$ cd <path-of-dpvs>
$ cp patch/dpdk-stable-20.11.1/*.patch dpdk-stable-20.11.1/
$ cd dpdk-stable-20.11.1/
$ patch -p1 < 0001-kni-use-netlink-event-for-multicast-driver-part.patch
$ patch -p1 < 0002-pdump-change-dpdk-pdump-tool-for-dpvs.patch
$ ...

Tips: It's advised to apply all the patches if you are not sure what they are for.
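
For convenience, a short shell loop like the sketch below (assuming the patches were copied into the current dpdk-stable-20.11.1 directory as above) applies them all in order:

$ for p in *.patch; do patch -p1 < "$p"; done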

DPDK build and install

Use meson and ninja to build the DPDK libraries, and export the environment variable PKG_CONFIG_PATH for the DPDK application (DPVS). The dpdk.mk in DPVS checks for the presence of libdpdk.

$ cd dpdk-stable-20.11.1
$ mkdir dpdklib                 # user desired install folder
$ mkdir dpdkbuild               # user desired build folder
$ meson -Denable_kmods=true -Dprefix=dpdklib dpdkbuild
$ ninja -C dpdkbuild
$ cd dpdkbuild; ninja install
$ export PKG_CONFIG_PATH=$(pwd)/../dpdklib/lib64/pkgconfig/
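
To confirm that PKG_CONFIG_PATH points at the right place, you can query libdpdk with pkg-config; if everything is set up correctly it should print the DPDK version:

$ pkg-config --modversion libdpdk    # expect 20.11.1 here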

Tips: You can use the script dpdk-build.sh to facilitate the DPDK build. Run dpdk-build.sh -h for the usage of the script.

Next, set up DPDK hugepages. Our test environment is a NUMA system; for a single-node system please refer to the link.

$ # for NUMA machine
$ echo 8192 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ echo 8192 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

$ mkdir /mnt/huge
$ mount -t hugetlbfs nodev /mnt/huge
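
The hugepage settings above do not survive a reboot. A minimal sketch of making the mount persistent, assuming the /mnt/huge mount point used above (the nr_hugepages values still need to be set at boot, e.g. via kernel parameters or an rc script):

$ echo "nodev /mnt/huge hugetlbfs defaults 0 0" >> /etc/fstab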

Install the kernel modules and bind the NIC with the uio_pci_generic driver. This quick start uses only one NIC; normally we use two for an FNAT cluster, or even four for bonding mode. For example, suppose the NIC we will use to run DPVS is eth0, while we keep another standalone NIC eth1 for debugging.

$ modprobe uio_pci_generic

$ cd dpdk-stable-20.11.1
$ insmod dpdkbuild/kernel/linux/kni/rte_kni.ko carrier=on

$ ./usertools/dpdk-devbind.py --status
$ ifconfig eth0 down          # assuming eth0 is 0000:06:00.0
$ ./usertools/dpdk-devbind.py -b uio_pci_generic 0000:06:00.0

Notes:

  1. An alternative to uio_pci_generic is igb_uio, which has been moved to a separate repository, dpdk-kmods.
  2. The kernel module parameter carrier was added to rte_kni.ko in DPDK v18.11, with a default value of "off". We need to load rte_kni.ko with the extra parameter carrier=on to make KNI devices work properly.

dpdk-devbind.py -u can be used to unbind the driver and switch the NIC back to a Linux driver such as ixgbe. You can also use lspci or ethtool -i eth0 to find the NIC's PCI bus-id. Please refer to the DPDK site for more details.
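
For example, a sketch of switching the NIC back to the kernel ixgbe driver, assuming the bus-id 0000:06:00.0 used above:

$ ./usertools/dpdk-devbind.py -u 0000:06:00.0          # unbind from uio_pci_generic
$ ./usertools/dpdk-devbind.py -b ixgbe 0000:06:00.0    # rebind to the kernel driver
$ ifconfig eth0 up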

Notes: The PMD of Mellanox NICs is built on top of libibverbs using the Raw Ethernet Accelerated Verbs API. It doesn't rely on a UIO/VFIO driver, so Mellanox NICs should not be bound to the igb_uio driver. Refer to Mellanox DPDK for details.

Build DPVS

It's simple, just set PKG_CONFIG_PATH and build it.

$ export PKG_CONFIG_PATH=<path-of-libdpdk.pc>  # normally located at dpdklib/lib64/pkgconfig/
$ cd <path-of-dpvs>

$ make              # or "make -j" to speed up
$ make install

Notes:

  1. Build dependencies may be needed, such as pkg-config (version 0.29.2+), automake, libnl3, libnl-genl-3.0, openssl, popt and numactl. You can install the missing dependencies with the system's package manager, e.g., yum install popt-devel automake (CentOS) or apt install libpopt-dev autoconf (Ubuntu); see the sketch after these notes.
  2. Early pkg-config versions (before v0.29.2) may cause the dpvs build to fail. If so, please upgrade this tool.
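
As a sketch of note 1 for CentOS (the exact package names are an assumption; adjust for your distribution, and build pkg-config from source per note 2 if the packaged version is older than 0.29.2):

$ yum install -y pkgconfig automake libnl3-devel openssl-devel popt-devel numactl-devel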

Output files are installed to dpvs/bin.

$ ls bin/
dpip  dpvs  ipvsadm  keepalived
  • dpvs is the main program.
  • dpip is the tool to set IP address, route, vlan, neigh, etc.
  • ipvsadm and keepalived come from LVS; both have been modified for DPVS.
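
Once DPVS is running (see the next section), a few quick sanity checks with these tools, all of which are used later in this document:

$ ./dpip link show     # list DPDK ports managed by DPVS
$ ./dpip addr show     # list addresses configured through dpip
$ ./ipvsadm -ln        # list virtual services and real servers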

Launch DPVS

Now, dpvs.conf must be located at /etc/dpvs.conf; just copy it from conf/dpvs.conf.single-nic.sample.

$ cp conf/dpvs.conf.single-nic.sample /etc/dpvs.conf

and start DPVS,

$ cd <path-of-dpvs>/bin
$ ./dpvs &

Check whether it has started:

$ ./dpip link show
1: dpdk0: socket 0 mtu 1500 rx-queue 8 tx-queue 8
    UP 10000 Mbps full-duplex fixed-nego promisc-off
    addr A0:36:9F:9D:61:F4 OF_RX_IP_CSUM OF_TX_IP_CSUM OF_TX_TCP_CSUM OF_TX_UDP_CSUM

If you see this message, well done: DPVS is working with NIC dpdk0!

Don't worry if you see this error:

EAL: Error - exiting with code: 1
  Cause: ports in DPDK RTE (2) != ports in dpvs.conf(1)

It means the number of NICs seen by DPDK does not match /etc/dpvs.conf. Please use dpdk-devbind to adjust the NIC count, or modify dpvs.conf. We'll improve this part to make DPVS "cleverer", so the config file does not need to be modified when the NIC count does not match.

What config items does dpvs.conf support, and how are they configured? DPVS maintains a config item file, conf/dpvs.conf.items, which lists all supported config entries and their feasible values. Besides, the sample config files ./conf/dpvs.*.sample show DPVS configurations for some typical cases.

Test Full-NAT (FNAT) Load Balancer

The test topology looks like the following diagram.

fnat-single-nic

Set the VIP and Local IP (LIP, needed by FNAT mode) on DPVS. Let's put the commands into setup.sh. You can then verify them with ./ipvsadm -ln and ./dpip addr show.

$ cat setup.sh
VIP=192.168.100.100
LIP=192.168.100.200
RS=192.168.100.2

./dpip addr add ${VIP}/24 dev dpdk0
./ipvsadm -A -t ${VIP}:80 -s rr
./ipvsadm -a -t ${VIP}:80 -r ${RS} -b

./ipvsadm --add-laddr -z ${LIP} -t ${VIP}:80 -F dpdk0
$
$ ./setup.sh
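
After running setup.sh, a quick verification sketch (output will differ by environment):

$ ./ipvsadm -ln       # expect the TCP service 192.168.100.100:80 (rr) with real server 192.168.100.2
$ ./dpip addr show    # expect 192.168.100.100/24 configured on dpdk0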

Access the VIP from the client; it looks good!

client $ curl 192.168.100.100
Your ip:port : 192.168.100.3:56890

Tutorial Docs

More configuration examples can be found in the Tutorial Document, including:

  • WAN-to-LAN FNAT reverse proxy.
  • Direct Route (DR) mode setup.
  • Master/Backup model (keepalived) setup.
  • OSPF/ECMP cluster model setup.
  • SNAT mode for Internet access from internal network.
  • Virtual Devices (Bonding, VLAN, kni, ipip/GRE).
  • UOA module to get real UDP client IP/port in FNAT.
  • ... and more ...

We also listed some frequently asked questions in the FAQ Document. It may help when you run into problems with DPVS.

Performance Test

Our tests show the forwarding speed (pps) of DPVS is several times that of LVS and as good as Google's Maglev.

performance

Click here for the latest performance data.

License

Please refer to the License file for details.

Contributing

Please refer to the CONTRIBUTING file for details.

Community

Currently, DPVS has been widely adopted by dozens of community partners, who have successfully used it and contributed a lot to it. We list some of them alphabetically below.

CMSoft
IQiYi
NetEase
Shopee
Xiaomi

Contact Us

DPVS has been developed by the iQiYi QLB team since April 2016. It is widely used in iQiYi IDCs for L4 load balancing and SNAT clusters, and we have already replaced nearly all our LVS clusters with DPVS. We open-sourced DPVS in October 2017 and are excited to see more people get involved in this project. You are welcome to try it, report issues, and submit pull requests. Please feel free to contact us through Github or Email.

  • github: https://github.com/iqiyi/dpvs
  • email: iig_cloud_qlb # qiyi.com (Please remove the white-spaces and replace # with @).

dpvs's People

Contributors

0xsgl, adamyyan, azura27, beacer, chion82, danielybl, dongzerun, haosdent, icymoon, kldeng, liwei42, mfhw, mscbg, roykingz, simonczw, sun-ao-1125, vipinpv85, wdjwxh, wenjiejiang, xiaguiwu, xiangp126, yangxingwu, yfming, you-looks-not-tasty, ytwang0320, yuwenjun-cmss, ywc689, zhouyangchao, zos43, zycode667


dpvs's Issues

Build error when compiling and installing dpvs on CentOS 7, please help.

The problems are as follows:

  1. Compiling dpdk 16.07 fails on CentOS 7.4 with kernel 3.10.0-693.el7.x86_64, while compiling the latest dpdk version works fine.
  2. With the latest dpdk built, compiling dpvs produces the following errors:
/root/dpvs/src/netif.c:1418:9: error: too few arguments to function ‘rte_ring_enqueue_bulk’
         res = rte_ring_enqueue_bulk(isol_rxq->rb, (void *const * )mbufs, rx_len);
......
/root/dpvs/src/netif.c:3886:34: error: dereferencing pointer to incomplete type
                 dev_info->pci_dev->addr.domain,
                                  ^
/root/dpvs/src/netif.c:3887:34: error: dereferencing pointer to incomplete type
                 dev_info->pci_dev->addr.bus,
                                  ^
/root/dpvs/src/netif.c:3888:34: error: dereferencing pointer to incomplete type
                 dev_info->pci_dev->addr.devid,
                                  ^
/root/dpvs/src/netif.c:3889:34: error: dereferencing pointer to incomplete type
                 dev_info->pci_dev->addr.function);

My guess is that the kernel version or dpdk version is incompatible, causing type-conflict errors at compile time. Could the project maintainers give some guidance? Thanks.

Why does the performance data only use 7 cores?

A dual-socket E5-2650 v3 has 20 cores in total, so why is the data only for 7 cores?
16 million pps is not that high; our kernel-based LVS already reaches 30 million pps using 32 cores.

README.md: it's better to notice user that we're on NUMA system

% echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
% echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
% mkdir /mnt/huge
% mount -t hugetlbfs nodev /mnt/huge

On a single node system there isn't a node1 I think.

dpvs startup error: fail to bring up dpdk0

All previous steps went fine, but starting dpvs reports an error. Has anyone run into the same problem who can help?

[root@localhost bin]# ./dpvs &
[1] 18431
[root@localhost bin]# EAL: Detected 40 lcore(s)
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL: probe driver: 8086:1528 rte_ixgbe_pmd
EAL: PCI device 0000:81:00.1 on NUMA socket 1
EAL: probe driver: 8086:1528 rte_ixgbe_pmd
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:1528 rte_ixgbe_pmd
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1528 rte_ixgbe_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
NETIF: netif_port_start: fail to bring up dpdk0
DPVS: Start dpdk0 failed, skipping ...

No 'CONTRIBUTING.md' was found.

Hi all,
What is the agreement for contributions?

It seems that I didn't find the inbound agreement.
In general, the inbound license should be 'Developer Certificate of Origin, version 1.1'.
The outbound license should be 'Apache License 2.0'
And the legal information of inbound license should be in CONTRIBUTING.md.

My boss wants to know the inbound license of this project and the link.

Please see this link as reference:
https://github.com/hyperhq/runv/blob/master/CONTRIBUTING.md

Thanks.

DPVS performance evaluation

Hi,
I would like to replicate your performance test results on our servers.
Would you mind sharing more details about your test setup (such as the test diagram, how to set up the test, how to run the test ...)?
Thank you!

dpvs startup error, another error message

[root@localhost bin]# ./dpvs &
[1] 2902
[root@localhost bin]# EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:08:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
EAL: Error - exiting with code: 1
Cause: [netif_lcore_init] bad lcore configuration (err=-5), exit ...

[1]+ Exit 1 ./dpvs

snat sapool error

dpip addr add 172.20.2.122/24 dev dpdk0 sapool
[sockopt_msg_recv] errcode set in socket msg#400 header: failed dpdk api(-11)
dpip: failed dpdk api

[root@dpvs bin]# MSGMGR: [sockopt_msg_send:msg#400] errcode set in sockopt msg reply: failed dpdk api

where is the kni patch?

There's a patch for DPDK kni driver for hardware multicast, apply it if needed (for example, launch ospfd on kni device).

assuming we are in DPVS root dir and dpdk-stable-16.07.2 is under it, pls note it's not mandatory, just for convenience.

% cd <path-of-dpvs>
% cp patch/dpdk-16.07/0001-kni-use-netlink-event-for-multicast-driver-part.patch dpdk-stable-16.07.2/

where is the patch?

dpdk error

Running ./tools/dpdk-devbind.py --st reports an error; could someone take a look at what's wrong?
[root@localhost dpdk-stable-16.07.2]# ./tools/dpdk-devbind.py --st
Traceback (most recent call last):
File "./tools/dpdk-devbind.py", line 577, in
main()
File "./tools/dpdk-devbind.py", line 573, in main
get_nic_details()
File "./tools/dpdk-devbind.py", line 249, in get_nic_details
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
File "./tools/dpdk-devbind.py", line 125, in check_output
stderr=stderr).communicate()[0]
File "/usr/lib64/python2.7/subprocess.py", line 711, in init
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory

How is the black-list used?

Is the black-list populated automatically with the IPs of malicious requests, or is it configured manually?
I only see the --get-blklst option in ipvsadm.

Error running ./bin/dpvs

EAL: Error - exiting with code: 1
Cause: [netif_lcore_init] bad lcore configuration (err=-5), exit ...

Is there something wrong with the configuration? The config file has not been modified.

EAL: Detected 32 lcore(s)
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
froad:rte_eth_driver_register
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
EAL: Error - exiting with code: 1
Cause: [netif_lcore_init] bad lcore configuration (err=-5), exit ...

rte_eth_dev_configure error

The function call at netif.c:3002 fails:
ret = rte_eth_dev_configure(port->id, port->nrxq, port->ntxq, &port->dev_conf);
port->id=0, port->nrxq=1, port->ntxq=1
It returns -22.
The machine is a KVM virtual machine.
My NIC is:
Ethernet controller: Red Hat, Inc Virtio network device
The dpdk website says this NIC is indeed supported, so what could be the reason?

Problems getting dpvs to run

Dear iQiYi developers:
      Hello, and first of all thank you very much for providing and open-sourcing this program. My team and I followed the instructions and tested the installation on both virtual machines and physical machines, but both report the error "EAL: Error - exiting with code: 1 Cause: Cannot init mbuf pool on socket 1", which prevents the program from proceeding. I would like to ask: does dpvs depend too heavily on the environment and underlying hardware? Is it unable to run correctly in a virtual machine? Is it unable to run when the physical machine's hardware differs from yours? We look forward to your reply; many thanks!

TX descriptor number issues.

  1. Does port->txq_desc_nb relate to the tx descriptor_number value of netif_defs in the configuration file? If it does, changing tx descriptor_number does NOT affect port->txq_desc_nb, since I set tx descriptor_number to 4096 and dpvs passes 128 to dpdk when setting up tx queues in netif_port_start() in the netif.c file.

  2. Passing 128 as the tx descriptor number to dpdk will cause problems for some devices such as the VMXNET3 NIC, since port->txq_desc_nb will be set as the tx ring size of the device and VMXNET3 requires it to be in the range 512-4096.

Refers to: http://dpdk.org/ml/archives/dev/2015-November/028809.html

netif_port_start() fail to config tx queue

As the title says, netif_port_start() fails to configure the tx queue, with rte_eth_tx_queue_setup() in dpdk returning -22. After inspection, I found port id = 0, queue id = 0, tx descriptor number = 1024, socket id = 0 and a non-null txconf pointer passed to rte_eth_tx_queue_setup().

Here are the dpvs logs.

EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
NETIF: netif_port_start: fail to config dpdk0:tx-queue-0
DPVS: Start dpdk0 failed, skipping ...
Kni: update maddr of dpdk0 Failed!

Report DPVS fdir bug

My configuration is as follows:

  • ./dpip addr add 10.114.249.201/24 dev dpdk0
  • ./dpip route add default via 10.114.249.254 dev dpdk0
  • ./ipvsadm -A -t 10.114.249.201:80 -s rr
  • ./ipvsadm -a -t 10.114.249.201:80 -r 10.112.95.3 -b
  • ./ipvsadm --add-laddr -z 10.114.249.202 -t 10.114.249.201:80 -F dpdk0

I am only using one NIC (X710).

My dpvs config file is dpvs.conf.single-nic.sample.
I noticed a strange phenomenon: only when the NIC queue number is set to 1 does curl succeed.

Debugging the code, I found that when the queue number is 2 and I curl the service (client IP: 10.112.95.3),

Topology: client 10.112.95.3, vip 10.114.249.201, lip 10.114.249.202, server 10.112.95.3

the log has the following output:

lcore 2 port0 ipv4 hl 5 tos 0 tot 60 id 35328 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.201
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 2 port0 ipv4 hl 5 tos 0 tot 60 id 35329 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.201
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 2 port0 ipv4 hl 5 tos 0 tot 60 id 35330 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.201
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 2 port0 ipv4 hl 5 tos 0 tot 60 id 35331 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.201
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202
lcore 2 port0 ipv4 hl 5 tos 0 tot 60 id 35332 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.201
lcore 1 port0 ipv4 hl 5 tos 0 tot 52 id 0 ttl 60 prot 6 src 10.112.95.3 dst 10.114.249.202

fdir not working

Set up as in the readme, but not working as expected.

Built with CONFIG_DPVS_IPVS_DEBUG; log:

IPVS: conn lookup: [5] TCP - miss
IPVS: conn lookup: [6] TCP - miss

how to config ospf?

I want to use ospf. In ospfd.conf I need to configure the interface as:
interface eth0
ip ospf message-digest-key 1 md5 hellotencent
...
but eth0 is down, and setting up dpdk0 fails:
ifconfig dpdk0 192.168.1.140 broadcast 192.168.1.128 netmask 255.255.255.128
dpdk0: ERROR while getting interface flags: No such device
SIOCSIFBRDADDR: No such device

Build error

make -j4
== Build lib
== Build lib/librte_compat
== Build lib/librte_eal
== Build lib/librte_net
== Build lib/librte_eal/common
== Build lib/librte_eal/linuxapp
== Build lib/librte_eal/linuxapp/eal
== Build lib/librte_eal/linuxapp/igb_uio
== Build lib/librte_eal/linuxapp/kni
(cat /dev/null; echo kernel//root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko;) > /root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/igb_uio/modules.order
Building modules, stage 2.
MODPOST 1 modules
CC [M] /root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.o
CC [M] /root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_vmdq.o
CC [M] /root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/kni_misc.o
CC [M] /root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/kni_net.o
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c: In function 'igb_ndo_bridge_getlink':
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2290:2: error: too few arguments to function 'ndo_dflt_bridge_getlink'
return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 0, 0);
^
In file included from /usr/src/kernels/3.10.0-514.el7.x86_64/include/net/dst.h:13:0,
from /usr/src/kernels/3.10.0-514.el7.x86_64/include/net/sock.h:72,
from /usr/src/kernels/3.10.0-514.el7.x86_64/include/linux/tcp.h:23,
from /root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:34:
/usr/src/kernels/3.10.0-514.el7.x86_64/include/linux/rtnetlink.h:100:12: note: declared here
extern int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
^
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c: At top level:
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2339:2: error: initialization from incompatible pointer type [-Werror]
.ndo_fdb_add = igb_ndo_fdb_add,
^
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2339:2: error: (near initialization for 'igb_netdev_ops.<anonymous>.ndo_fdb_add') [-Werror]
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2346:2: error: initialization from incompatible pointer type [-Werror]
.ndo_bridge_setlink = igb_ndo_bridge_setlink,
^
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2346:2: error: (near initialization for 'igb_netdev_ops.<anonymous>.ndo_bridge_setlink') [-Werror]
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2347:2: error: initialization from incompatible pointer type [-Werror]
.ndo_bridge_getlink = igb_ndo_bridge_getlink,
^
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2347:2: error: (near initialization for 'igb_netdev_ops.<anonymous>.ndo_bridge_getlink') [-Werror]
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c: In function 'igb_ndo_bridge_getlink':
/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2295:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
cc1: all warnings being treated as errors
make[8]: *** [/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni/igb_main.o] Error 1
make[8]: *** Waiting for unfinished jobs....
make[7]: *** [module/root/slb/dpvs/dpdk-stable-16.07.2/build/build/lib/librte_eal/linuxapp/kni] Error 2
make[6]: *** [sub-make] Error 2
make[5]: *** [rte_kni.ko] Error 2
make[4]: *** [kni] Error 2
make[3]: *** [linuxapp] Error 2
make[2]: *** [librte_eal] Error 2
make[1]: *** [lib] Error 2
make: *** [all] Error 2

Stress tests have a lot of errors

We used tsung to test dpvs and got lots of errors.

Name | Highest Rate | Total number

error_abort_max_conn_retries | 99.3 / sec | 2557
error_abort_max_send_retries | 3.6 / sec | 73
error_connect_eaddrinuse | 21.7 / sec | 779
error_connect_etimedout | 133 / sec | 20857
error_connection_closed | 17.2 / sec | 360
error_timeout | 39.5 / sec | 811

DPDK error and exit after stopping the keepalived service

Manually stop the keepalived service:
. /etc/rc.d/init.d/functions
killproc keepalived

dpdk reports an error and the dpvs process exits:

[319825.257437] lcore-slave-7[19509]: segfault at fffffffeffffffe8 ip 00000000004accb2 sp 00007fc2205fa890 error 5 in dpvs[400000+237000]
[319828.309061] KNI: /dev/kni closed

configure DPVS via dr mode

Does dpvs support DR mode? How should /etc/keepalived/keepalived.conf be configured? Should the interface here be dpdk0 or dpdk0.kni? I set it up, but it cannot work! No keepalived packets are sent out!

frag table init failed ?

Hello, I tried to install your dpvs and everything works fine except when I launch dpvs with ./dpvs &.

Output:
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: eth2:rx_queue_number = 8
NETIF: worker cpu1:eth2 rx_queue_id += 0
NETIF: worker cpu1:eth2 tx_queue_id += 0
NETIF: worker cpu2:eth2 rx_queue_id += 1
NETIF: worker cpu2:eth2 tx_queue_id += 1
NETIF: worker cpu3:eth2 rx_queue_id += 2
NETIF: worker cpu3:eth2 tx_queue_id += 2
NETIF: worker cpu4:eth2 rx_queue_id += 3
NETIF: worker cpu4:eth2 tx_queue_id += 3
NETIF: worker cpu5:eth2 rx_queue_id += 4
NETIF: worker cpu5:eth2 tx_queue_id += 4
NETIF: worker cpu6:eth2 rx_queue_id += 5
NETIF: worker cpu6:eth2 tx_queue_id += 5
NETIF: worker cpu7:eth2 rx_queue_id += 6
NETIF: worker cpu7:eth2 tx_queue_id += 6
NETIF: worker cpu8:eth2 rx_queue_id += 7
NETIF: worker cpu8:eth2 tx_queue_id += 7
USER1: rte_ip_frag_table_create: allocation of 25165952 bytes at socket 0 failed do
IP4FRAG: [22] fail to create frag table.
EAL: Error - exiting with code: 1
Cause: Fail to init inet: failed dpdk api

Do you have any idea about it?

README.md: unnecessary cd command

% cd dpdk-stable-16.07.2/
% export RTE_SDK=$PWD
% cd <path-of-dpvs>

% make # or "make -j40" to speed up.
% make install

I think the first cd command is not needed?

Installation error, please help

CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_i210.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_api.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_mac.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_manage.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_mbx.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_nvm.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/e1000_phy.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_ethtool.o
CC [M] /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.o
/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c: In function 'igb_ndo_bridge_getlink':
/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2288:2: error: too few arguments to function 'ndo_dflt_bridge_getlink'
return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 0, 0, nlflags);
^
In file included from /usr/src/kernels/3.10.0-693.5.2.el7.x86_64/include/net/dst.h:13:0,
from /usr/src/kernels/3.10.0-693.5.2.el7.x86_64/include/net/sock.h:72,
from /usr/src/kernels/3.10.0-693.5.2.el7.x86_64/include/linux/tcp.h:23,
from /home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:34:
/usr/src/kernels/3.10.0-693.5.2.el7.x86_64/include/linux/rtnetlink.h:115:12: note: declared here
extern int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
^
/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c: At top level:
/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2317:2: error: unknown field 'ndo_set_vf_vlan' specified in initializer
.ndo_set_vf_vlan = igb_ndo_set_vf_vlan,
^
/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c: In function 'igb_ndo_bridge_getlink':
/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2296:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
cc1: all warnings being treated as errors
make[8]: *** [/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni/igb_main.o] Error 1
make[7]: *** [module/home/yinfan/dpvs/dpdk-stable-16.11.3/build/build/lib/librte_eal/linuxapp/kni] Error 2
make[6]: *** [sub-make] Error 2
make[5]: *** [rte_kni.ko] Error 2
make[4]: *** [kni] Error 2
make[3]: *** [linuxapp] Error 2
make[2]: *** [librte_eal] Error 2
make[1]: *** [lib] Error 2
make: *** [all] Error 2

CPU usage too high after dpvs starts

After DPVS starts, 8 CPU cores show 100% usage; after running for about 5 minutes the system crashes and becomes unreachable.

Nov 21 21:48:11 localhost dpvs: dpvs_running: Remove a zombie pid file /var/run/dpvs.pid
Nov 21 21:48:15 localhost dpvs[2991]: PMD: bnxt_rte_pmd_init() called for (null)
Nov 21 21:48:15 localhost dpvs[2991]: EAL: PCI device 0000:01:00.0 on NUMA socket 0
Nov 21 21:48:15 localhost dpvs[2991]: EAL: probe driver: 8086:1521 rte_igb_pmd
Nov 21 21:48:15 localhost dpvs[2991]: EAL: PCI device 0000:01:00.1 on NUMA socket 0
Nov 21 21:48:15 localhost dpvs[2991]: EAL: probe driver: 8086:1521 rte_igb_pmd
Nov 21 21:48:15 localhost dpvs[2991]: EAL: PCI device 0000:01:00.2 on NUMA socket 0
Nov 21 21:48:15 localhost dpvs[2991]: EAL: probe driver: 8086:1521 rte_igb_pmd
Nov 21 21:48:15 localhost dpvs[2991]: EAL: PCI device 0000:01:00.3 on NUMA socket 0
Nov 21 21:48:15 localhost dpvs[2991]: EAL: probe driver: 8086:1521 rte_igb_pmd
Nov 21 21:48:15 localhost dpvs[2991]: CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
Nov 21 21:48:15 localhost dpvs[2991]: CFG_FILE: log_level = WARNING
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: dpdk0:rx_queue_number = 8
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu1:dpdk0 rx_queue_id += 0
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu1:dpdk0 tx_queue_id += 0
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu2:dpdk0 rx_queue_id += 1
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu2:dpdk0 tx_queue_id += 1
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu3:dpdk0 rx_queue_id += 2
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu3:dpdk0 tx_queue_id += 2
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu4:dpdk0 rx_queue_id += 3
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu4:dpdk0 tx_queue_id += 3
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu5:dpdk0 rx_queue_id += 4
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu5:dpdk0 tx_queue_id += 4
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu6:dpdk0 rx_queue_id += 5
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu6:dpdk0 tx_queue_id += 5
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu7:dpdk0 rx_queue_id += 6
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu7:dpdk0 tx_queue_id += 6
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu8:dpdk0 rx_queue_id += 7
Nov 21 21:48:15 localhost dpvs[2991]: NETIF: worker cpu8:dpdk0 tx_queue_id += 7
Nov 21 21:48:16 localhost kernel: KNI: Creating kni...

System reboots at the end of dpvs startup

Hi,

After the build completes I start dpvs; it looks like it is about to come up, but in the end the system reboots, and the journal system log shows nothing abnormal.

kernel 3.10.0-327 + dpdk 16.07.2
nic: intel 82599 ixgbe

40c

The config file is the default, with only the dpdk1-related content removed.
The complete dpvs startup log is below; please take a look.

EAL: Detected 40 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:154d rte_ixgbe_pmd
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:154d rte_ixgbe_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = DEBUG
NETIF: pktpool_size = 524287 (round to 2^n-1)
NETIF: pktpool_cache_size = 256 (round to 2^n)
NETIF: netif device config: dpdk0
NETIF: dpdk0:rx_queue_number = 8
NETIF: dpdk0:nb_rx_desc = 1024 (round to 2^n)
NETIF: dpdk0:rss = tcp
NETIF: dpdk0:tx_queue_number = 8
NETIF: dpdk0:nb_tx_desc = 1024 (round to 2^n)
NETIF: dpdk0: kni_name = dpdk0.kni
NETIF: netif worker config: cpu0
NETIF: cpu0:type = master
NETIF: cpu0:cpu_id = 0
NETIF: netif worker config: cpu1
NETIF: cpu1:type = slave
NETIF: cpu1:cpu_id = 1
NETIF: worker cpu1:dpdk0 queue config
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: netif worker config: cpu2
NETIF: cpu2:type = slave
NETIF: cpu2:cpu_id = 2
NETIF: worker cpu2:dpdk0 queue config
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: netif worker config: cpu3
NETIF: cpu3:type = slave
NETIF: cpu3:cpu_id = 3
NETIF: worker cpu3:dpdk0 queue config
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: netif worker config: cpu4
NETIF: cpu4:type = slave
NETIF: cpu4:cpu_id = 4
NETIF: worker cpu4:dpdk0 queue config
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: netif worker config: cpu5
NETIF: cpu5:type = slave
NETIF: cpu5:cpu_id = 5
NETIF: worker cpu5:dpdk0 queue config
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: netif worker config: cpu6
NETIF: cpu6:type = slave
NETIF: cpu6:cpu_id = 6
NETIF: worker cpu6:dpdk0 queue config
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: netif worker config: cpu7
NETIF: cpu7:type = slave
NETIF: cpu7:cpu_id = 7
NETIF: worker cpu7:dpdk0 queue config
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: netif worker config: cpu8
NETIF: cpu8:type = slave
NETIF: cpu8:cpu_id = 8
NETIF: worker cpu8:dpdk0 queue config
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
DTIMER: sched_interval = 500
NEIGHBOUR: arp_unres_qlen = 128
NEIGHBOUR: arp_pktpool_size = 1023(round to 2^n-1)
NEIGHBOUR: arp_pktpool_cache = 32(round to 2^n)
NEIGHBOUR: arp_timeout = 60
IPV4: inet_def_ttl = 64
IP4FRAG: ip4_frag_buckets = 4096
IP4FRAG: ip4_frag_bucket_entries = 16 (round to 2^n)
IP4FRAG: ip4_frag_max_entries = 4096
IP4FRAG: ip4_frag_ttl = 1
MSGMGR: msg_ring_size = 4096 (round to 2^n)
MSGMGR: msg_mc_qlen = 256 (round to 2^n)
MSGMGR: sync_msg_timeout_us = 2000
MSGMGR: ipc_unix_domain = /var/run/dpvs_ctrl
IPVS: conn_pool_size = 2097152 (round to 2^n)
IPVS: conn_pool_cache = 256 (round to 2^n)
IPVS: conn_init_timeout = 3
IPVS: defence_udp_drop ON
IPVS: udp_timeout_normal = 300
IPVS: udp_timeout_last = 3
IPVS: defence_tcp_drop ON
IPVS: tcp_timeout_none = 2
IPVS: tcp_timeout_established = 90
IPVS: tcp_timeout_syn_sent = 3
IPVS: tcp_timeout_syn_recv = 30
IPVS: tcp_timeout_fin_wait = 7
IPVS: tcp_timeout_time_wait = 7
IPVS: tcp_timeout_close = 3
IPVS: tcp_timeout_close_wait = 7
IPVS: tcp_timeout_last_ack = 7
IPVS: tcp_timeout_listen = 120
IPVS: tcp_timeout_synack = 30
IPVS: tcp_timeout_last = 2
IPVS: synack_mss = 1452
IPVS: synack_ttl = 63
IPVS: synproxy_synack_options_sack ON
IPVS: rs_syn_max_retry = 3
IPVS: ack_storm_thresh = 10
IPVS: max_ack_saved = 3
IPVS: synproxy_conn_reuse ON
IPVS: synproxy_conn_reuse: CLOSE
IPVS: synproxy_conn_reuse: TIMEWAIT
KNI: pci: 04:00:01       8086:154d
MSGMGR: [msg_init] built-in msg registered:
lcore 0     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 0     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 1     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 1     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 1     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 2     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 2     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 2     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 3     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 3     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 3     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 4     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 4     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 4     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 5     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 5     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 5     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 6     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 6     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 6     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 7     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 7     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 7     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)
lcore 8     hash 1         type 1         mode UNICAST       unicast_cb 0x44e2a0    multicast_cb (nil)
lcore 8     hash 2         type 2         mode UNICAST       unicast_cb 0x44e4b0    multicast_cb (nil)
lcore 8     hash 5         type 5         mode UNICAST       unicast_cb 0x4359d0    multicast_cb (nil)

USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 1
NETIF: dpdk0:dst_port_mask=700
NETIF: device dpdk0 configuration:
RSS: ETH_RSS_TCP
ipv4_src_ip:        0
ipv4_dst_ip: 0xffffffff
src_port:    0
dst_port: 0x700

NETIF: Waiting for dpdk0 link up, be patient ...
NETIF: >> dpdk0: link up - speed 10000 Mbps - full-duplex
DPVS: 
port-queue-lcore relation array: 
                dpdk0: A0:36:9F:E0:76:72 
    rx0-tx0     cpu1-cpu1                
    rx1-tx1     cpu2-cpu2                
    rx2-tx2     cpu3-cpu3                
    rx3-tx3     cpu4-cpu4                
    rx4-tx4     cpu5-cpu5                
    rx5-tx5     cpu6-cpu6                
    rx6-tx6     cpu7-cpu7                
    rx7-tx7     cpu8-cpu8                

NETIF: [netif_loop] Lcore 9 has nothing to do.
NETIF: [netif_loop] Lcore 11 has nothing to do.
NETIF: [netif_loop] Lcore 12 has nothing to do.
NETIF: [netif_loop] Lcore 13 has nothing to do.
NETIF: [netif_loop] Lcore 15 has nothing to do.
NETIF: [netif_loop] Lcore 10 has nothing to do.
NETIF: [netif_loop] Lcore 14 has nothing to do.
NETIF: [netif_loop] Lcore 16 has nothing to do.
NETIF: [netif_loop] Lcore 17 has nothing to do.
NETIF: [netif_loop] Lcore 18 has nothing to do.
NETIF: [netif_loop] Lcore 19 has nothing to do.
NETIF: [netif_loop] Lcore 29 has nothing to do.
NETIF: [netif_loop] Lcore 22 has nothing to do.
NETIF: [netif_loop] Lcore 23 has nothing to do.
NETIF: [netif_loop] Lcore 39 has nothing to do.
NETIF: [netif_loop] Lcore 26 has nothing to do.
NETIF: [netif_loop] Lcore 28 has nothing to do.
NETIF: [netif_loop] Lcore 30 has nothing to do.
NETIF: [netif_loop] Lcore 32 has nothing to do.
NETIF: [netif_loop] Lcore 33 has nothing to do.
NETIF: [netif_loop] Lcore 35 has nothing to do.
NETIF: [netif_loop] Lcore 24 has nothing to do.
NETIF: [netif_loop] Lcore 38 has nothing to do.
NETIF: [netif_loop] Lcore 27 has nothing to do.
NETIF: [netif_loop] Lcore 31 has nothing to do.
NETIF: [netif_loop] Lcore 34 has nothing to do.
NETIF: [netif_loop] Lcore 37 has nothing to do.
NETIF: [netif_loop] Lcore 21 has nothing to do.
NETIF: [netif_loop] Lcore 36 has nothing to do.
NETIF: [netif_loop] Lcore 20 has nothing to do.
NETIF: [netif_loop] Lcore 25 has nothing to do.
Kni: kni_mc_list_cmp_set: add mc addr: 01:00:5e:00:00:01 dpdk0 OK
Kni: kni_mc_list_cmp_set: add mc addr: 33:33:00:00:00:01 dpdk0 OK
Kni: kni_mc_list_cmp_set: add mc addr: 33:33:ff:e0:76:72 dpdk0 OK

ipvsadm --add-laddr error

./ipvsadm --add-laddr -z <private IP> -t <public IP>:80 -F dpdk0
MSGMGR: [sockopt_msg_send:msg#400] errcode set in sockopt msg reply: failed dpdk api
[sockopt_msg_recv] errcode set in socket msg#400 header: failed dpdk api(-11)

The IP on DPDK is not accessable

Following the guide, everything looks fine. The dpvs is working in the background. But when I add the IP to the interface driven by dpdk, the IP is not accessible from the client. Are there any other configurations I missed?

When I display the dpip link, forward2kni-off is there; does that matter? Please help.

Here's some output for reference.
[root@localhost bin]# ./dpip link show
1: dpdk0: socket 1 mtu 1500 rx-queue 16 tx-queue 16
UP 10000 Mbps full-duplex fixed-nego promisc-off forward2kni-off
addr 00:E0:ED:57:08:BA OF_RX_IP_CSUM OF_TX_IP_CSUM OF_TX_TCP_CSUM OF_TX_UDP_CSUM

[root@localhost bin]# ./dpip addr show
inet 10.219.100.6/22 scope global dpdk0
broadcast 10.219.103.255 valid_lft forever preferred_lft forever
inet 10.219.100.7/32 scope global dpdk0
valid_lft forever preferred_lft forever sa_used 0 sa_free 1032176 sa_miss 0

[root@localhost bin]# ./dpip route show
inet 10.219.100.6/32 via 0.0.0.0 src 0.0.0.0 dev dpdk0 mtu 1500 tos 0 scope host metric 0 proto auto
inet 10.219.100.7/32 via 0.0.0.0 src 0.0.0.0 dev dpdk0 mtu 1500 tos 0 scope host metric 0 proto auto
inet 10.219.100.0/22 via 0.0.0.0 src 10.219.100.6 dev dpdk0 mtu 1500 tos 0 scope link metric 0 proto auto
inet 0.0.0.0/0 via 10.219.103.254 src 0.0.0.0 dev dpdk0 mtu 1500 tos 0 scope global metric 0 proto auto

[root@localhost bin]# ./ipvsadm -ln
IP Virtual Server version 0.0.0 (size=0)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.219.100.6:80 rr
-> 10.219.100.4:80 FullNat 1 0 0

[root@localhost ~]# curl 10.219.100.6
curl: (7) Failed connect to 10.219.100.6:80; No route to host
[root@localhost ~]#

dpvs is not forwarding traffic; could someone please help, it's urgent

Hello, iQiYi developers,

Thank you very much for open-sourcing dpvs, this excellent project. I have recently been learning to use it, and in the process I have run into some problems that I hope you can help me solve.

My configuration is as follows:

  • ./dpip addr add 10.114.249.201/24 dev dpdk0
  • ./dpip route add default via 10.114.249.254 dev dpdk0
  • ./ipvsadm -A -t 10.114.249.201:80 -s rr
  • ./ipvsadm -a -t 10.114.249.201:80 -r 10.112.95.3 -b
  • ./ipvsadm --add-laddr -z 10.114.249.202 -t 10.114.249.201:80 -F dpdk0

I am only using one NIC (X710).

My dpvs config file is dpvs.conf.single-nic.sample.
I noticed a strange phenomenon: only when the NIC queue number is set to 1 does curl succeed.

With any other value the service is unreachable. Reading the code, I originally thought fdir was not set up successfully, but debugging with gdb showed that fdir was configured successfully.

Could you please help me figure this out?

Many thanks!

Error running ./dpvs &

Running ./dpvs & reports the following error.
Please help ^_^:

[root@localhost bin]# ./dpvs &
[1] 1187
[root@localhost bin]# EAL: Detected 5 lcore(s)
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI device 0000:00:09.0 on NUMA socket -1
EAL:   probe driver: 8086:100e rte_em_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 2
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
EAL: Error - exiting with code: 1
  Cause: Cannot init mbuf pool on socket 1
[1]+  Exit 1                  ./dpvs

Can not capture with device as dpdk.0 or dpdk.bond0

The dpvs test in my environment is OK. It's a very nice job. Great!
But it can't capture packets in userspace.
My test environment:
CentOS release 6.5
2.6.32
CPU E5-2650
gcc version 4.4.7

And there are some small bugs like this:

  1. Redefinition:
    typedef uint8_t lcoreid_t;
    typedef uint8_t portid_t;
    typedef uint16_t queueid_t;
    are defined in both include/dpdk.h and include/conf/netif.h, so they get redefined in my build. I think they should be placed in include/common.h.

diff --git a/include/conf/netif.h b/include/conf/netif.h
index 2f445ab..8d6851e 100644
--- a/include/conf/netif.h
+++ b/include/conf/netif.h
@@ -18,6 +18,7 @@
#ifndef NETIF_CONF_H
#define NETIF_CONF_H
#include <linux/if_ether.h>
+#include "common.h"

#define NETIF_MAX_PORTS 64
#define NETIF_MAX_LCORES 64
@@ -30,9 +31,9 @@
#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16
#define NETIF_MAX_BOND_SLAVES 32

-typedef uint8_t lcoreid_t;
-typedef uint8_t portid_t;
-typedef uint16_t queueid_t;
+//typedef uint8_t lcoreid_t;
+//typedef uint8_t portid_t;
+//typedef uint16_t queueid_t;

#define NETIF_PORT_FLAG_RX_IP_CSUM_OFFLOAD 0x1<<3
#define NETIF_PORT_FLAG_TX_IP_CSUM_OFFLOAD 0x1<<4
[root@lvs-master dpvs]# git diff include/conf/netif.h include/dpdk.h include/common.h
diff --git a/include/common.h b/include/common.h
index 5643f0d..af87f5a 100644
--- a/include/common.h
+++ b/include/common.h
@@ -21,6 +21,10 @@
#include <stdint.h>
#include <linux/if_ether.h>

+typedef uint8_t lcoreid_t;
+typedef uint8_t portid_t;
+typedef uint16_t queueid_t;
+
#ifndef NELEMS
#define NELEMS(a) (sizeof(a) / sizeof((a)[0]))
#endif
diff --git a/include/conf/netif.h b/include/conf/netif.h
index 2f445ab..8d6851e 100644
--- a/include/conf/netif.h
+++ b/include/conf/netif.h
@@ -18,6 +18,7 @@
#ifndef NETIF_CONF_H
#define NETIF_CONF_H
#include <linux/if_ether.h>
+#include "common.h"

#define NETIF_MAX_PORTS 64
#define NETIF_MAX_LCORES 64
@@ -30,9 +31,9 @@
#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16
#define NETIF_MAX_BOND_SLAVES 32

-typedef uint8_t lcoreid_t;
-typedef uint8_t portid_t;
-typedef uint16_t queueid_t;
+//typedef uint8_t lcoreid_t;
+//typedef uint8_t portid_t;
+//typedef uint16_t queueid_t;

#define NETIF_PORT_FLAG_RX_IP_CSUM_OFFLOAD 0x1<<3
#define NETIF_PORT_FLAG_TX_IP_CSUM_OFFLOAD 0x1<<4
diff --git a/include/dpdk.h b/include/dpdk.h
index c8ff11a..5fc6ae9 100644
--- a/include/dpdk.h
+++ b/include/dpdk.h
@@ -55,9 +55,10 @@
#include <rte_ip_frag.h>
#include <rte_eth_bond.h>
#include "mbuf.h"
+#include "common.h"

-typedef uint8_t lcoreid_t;
-typedef uint8_t portid_t;
-typedef uint16_t queueid_t;
+//typedef uint8_t lcoreid_t;
+//typedef uint8_t portid_t;
+//typedef uint16_t queueid_t;

#endif /* DPVS_DPDK_H */
2. Cannot show connections with ./ipvsadm -ln -c

EAL: Error - exiting with code: 1 Cause: Cannot init mbuf pool on socket 1

[root@localhost bin]# ./dpvs &
[1] 3718
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: probe driver: 8086:100f rte_em_pmd
EAL: PCI device 0000:02:02.0 on NUMA socket -1
EAL: probe driver: 8086:100f rte_em_pmd
EAL: PCI device 0000:02:03.0 on NUMA socket -1
EAL: probe driver: 8086:100f rte_em_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
EAL: Error - exiting with code: 1
Cause: Cannot init mbuf pool on socket 1
[1]+ Exit 1 ./dpvs

dpvs reports "No free hugepages reported in hugepages-1048576kB" at startup

./dpvs &
[1] 8690
[@bx_20_165 ~/dpvs/bin]# EAL: Detected 24 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:01:00.2 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:01:00.3 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7

EAL: Error - exiting with code: 1 Cause: Cannot init mbuf pool on socket 1

Can dpvs run on a vmxnet3 virtual network device?
I tried hugepage sizes of 4096, 8192 and 16384, but the result is the same, like this:

[root@localhost ~]# bin/dpvs
EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 4
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
EAL: Error - exiting with code: 1
  Cause: Cannot init mbuf pool on socket 1

When this error is reported, the free hugepage count is always zero, even when set to 16384.
I used the newer version.
