Comments (14)
from dpvs.
I allocated 1024 hugepages, following the documentation, but setting it to 2048 gives the same problem.
The mbuf size in /etc/dpvs.conf is unchanged from the downloaded configuration, which should be enough.
But I found:
[root@localhost bin]# dmesg |grep -i numa
[ 0.000000] No NUMA configuration found
This seems to show that NUMA is not enabled. Could that be the reason?
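For reference, one quick way to see how many NUMA nodes the kernel exposes and where the hugepages were actually reserved (standard sysfs/procfs paths, assuming the default 2 MB hugepage size):
```
# NUMA nodes the kernel exposes; only node0 means a non-NUMA layout, matching the dmesg output
ls -d /sys/devices/system/node/node*

# Overall hugepage accounting
grep -i huge /proc/meminfo

# Per-node reservation (2 MB pages assumed)
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
```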
from dpvs.
Are you running dpvs on a virtual machine? DPVS has not been tuned for virtual servers.
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI device 0000:00:09.0 on NUMA socket -1
EAL: probe driver: 8086:100e rte_em_pmd
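A NUMA socket of -1 usually just means the platform reports no locality information, which is common on VMs. One way to double-check, using standard sysfs and lscpu (the PCI address is taken from the log above):
```
# NUMA node reported for the NIC; -1 means no locality info is exposed
cat /sys/bus/pci/devices/0000:00:03.0/numa_node

# On a VM, lscpu normally reports a hypervisor vendor; virtio NICs (1af4:1000) also imply a VM
lscpu | grep -i hypervisor
```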
from dpvs.
Please try modifying DPVS_MAX_SOCKET to 1 in include/common.h.
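If it helps, a minimal sketch of that change plus a rebuild; the exact line in include/common.h may differ between versions, so adjust accordingly:
```
# Set the per-socket limit to 1 (macro name as suggested above), then rebuild
sed -i 's/^#define DPVS_MAX_SOCKET.*/#define DPVS_MAX_SOCKET 1/' include/common.h
make clean && make
```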
from dpvs.
@beacer After changing DPVS_MAX_SOCKET to 1 and recompiling, I still get the error above.
from dpvs.
Please update to the latest code; this bug was fixed a few days ago.
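In other words, pull the latest code and rebuild; a rough sketch assuming a git checkout and the usual make-based build:
```
git pull
make clean && make
# reinstall the binaries the same way you did for the first build
```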
from dpvs.
from dpvs.
@beacer
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
USER1: rte_ip_frag_table_create: allocation of 25165952 bytes at socket 0 failed do
IP4FRAG: [21] fail to create frag table.
EAL: Error - exiting with code: 1
Cause: Fail to init inet: failed dpdk api
./tools/dpdk-devbind.py --st
Network devices using DPDK-compatible driver
0000:06:00.2 'I350 Gigabit Network Connection' drv=igb_uio unused=
Network devices using kernel driver
0000:06:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=igb_uio Active
0000:06:00.1 'I350 Gigabit Network Connection' if=eth1 drv=igb unused=igb_uio
0000:06:00.3 'I350 Gigabit Network Connection' if=eth3 drv=igb unused=igb_uio
Other network devices
cat /etc/dpvs.conf
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! This is the dpvs default configuration file.
!
! The attribute "<init>" denotes a configuration item used at the initialization stage. Items of
! this type are configured once and are not reloadable. If an invalid value is configured in the
! file, dpvs will use its default value.
!
! Note that the dpvs configuration file supports the following comment types:
!   * line comment: using '#' or '!'
!   * inline range comment: using '<' and '>', put the comment in between
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! global config
global_defs {
    log_level WARNING
    ! log_file /var/log/dpvs.log
}
! netif config
netif_defs {
    pktpool_size 524287
    pktpool_cache 256
    <init> device dpdk0 {
        rx {
            queue_number 8
            descriptor_number 1024
            rss tcp
        }
        tx {
            queue_number 8
            descriptor_number 1024
        }
        ! promisc_mode
        kni_name dpdk0.kni
    }
}
! worker config (lcores)
worker_defs {
    worker cpu0 {
        type master
        cpu_id 0
    }
    <init> worker cpu1 {
        type slave
        cpu_id 1
        port dpdk0 {
            rx_queue_ids 0
            tx_queue_ids 0
            ! isol_rx_cpu_ids 9
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu2 {
        type slave
        cpu_id 2
        port dpdk0 {
            rx_queue_ids 1
            tx_queue_ids 1
            ! isol_rx_cpu_ids 10
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu3 {
        type slave
        cpu_id 3
        port dpdk0 {
            rx_queue_ids 2
            tx_queue_ids 2
            ! isol_rx_cpu_ids 11
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu4 {
        type slave
        cpu_id 4
        port dpdk0 {
            rx_queue_ids 3
            tx_queue_ids 3
            ! isol_rx_cpu_ids 12
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu5 {
        type slave
        cpu_id 5
        port dpdk0 {
            rx_queue_ids 4
            tx_queue_ids 4
            ! isol_rx_cpu_ids 13
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu6 {
        type slave
        cpu_id 6
        port dpdk0 {
            rx_queue_ids 5
            tx_queue_ids 5
            ! isol_rx_cpu_ids 14
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu7 {
        type slave
        cpu_id 7
        port dpdk0 {
            rx_queue_ids 6
            tx_queue_ids 6
            ! isol_rx_cpu_ids 15
            ! isol_rxq_ring_sz 1048576
        }
    }
    <init> worker cpu8 {
        type slave
        cpu_id 8
        port dpdk0 {
            rx_queue_ids 7
            tx_queue_ids 7
            ! isol_rx_cpu_ids 16
            ! isol_rxq_ring_sz 1048576
        }
    }
}
! timer config
timer_defs {
    # cpu job loops to schedule dpdk timer management
    schedule_interval 500
}
! dpvs neighbor config
neigh_defs {
    unres_queue_length 128
    pktpool_size 1023
    pktpool_cache 32
    timeout 60
}
! dpvs ipv4 config
ipv4_defs {
    default_ttl 64
    fragment {
        bucket_number 4096
        bucket_entries 16
        max_entries 4096
        ttl 1
    }
}
! control plane config
ctrl_defs {
    lcore_msg {
        ring_size 4096
        multicast_queue_length 256
        sync_msg_timeout_us 2000
    }
    ipc_msg {
        unix_domain /var/run/dpvs_ctrl
    }
}
! ipvs config
ipvs_defs {
    conn {
        conn_pool_size 2097152
        conn_pool_cache 256
        conn_init_timeout 3
        ! expire_quiescent_template
        ! fast_xmit_close
    }
    udp {
        defence_udp_drop
        timeout {
            normal 300
            last 3
        }
    }
    tcp {
        defence_tcp_drop
        timeout {
            none 2
            established 90
            syn_sent 3
            syn_recv 30
            fin_wait 7
            time_wait 7
            close 3
            close_wait 7
            last_ack 7
            listen 120
            synack 30
            last 2
        }
        synproxy {
            synack_options {
                mss 1452
                ttl 63
                sack
                ! wscale
                ! timestamp
            }
            ! defer_rs_syn
            rs_syn_max_retry 3
            ack_storm_thresh 10
            max_ack_saved 3
            conn_reuse_state {
                close
                time_wait
                ! fin_wait
                ! close_wait
                ! last_ack
            }
        }
    }
}
! sa_pool config
sa_pool {
    pool_hash_size 16
}
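The failing call above (rte_ip_frag_table_create asking for about 24 MB on socket 0) normally indicates that the hugepage memory reserved for DPDK has run out rather than a bad mbuf setting. Before retrying, it may help to check how much hugepage memory is actually free (standard kernel interfaces):
```
# If HugePages_Free is near zero, allocations like the frag table will fail
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# Verify a hugetlbfs mount exists for DPDK to use
mount | grep -i huge
```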
from dpvs.
@liuflylove666 What's the hugepage size? Please use a bigger one, e.g. echo 4096 > ... or 10240.
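For completeness, a minimal sketch of reserving more 2 MB hugepages at runtime and mounting hugetlbfs (standard kernel paths; pick the count that fits your memory):
```
# Reserve 10240 x 2 MB hugepages (~20 GB); adjust to your machine
echo 10240 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount hugetlbfs if it is not already mounted
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
```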
from dpvs.
I used 10240. Error info:
rte_eal_init 2 arc:1,*argv:./bin/dpvs
EAL: Detected 32 lcore(s)
ready rte_eal_pci_init
EAL: Probing VFIO support...
ready rte_eal_memory_init
rte_eal_memzone_init
ready rte_eal_dev_init
driver->type:0
driver->type:0
driver->type:1
PMD: bnxt_rte_pmd_init() called for (null)
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:1
froad:rte_eth_driver_register
driver->type:0
driver->type:0
driver->type:1
froad:rte_eth_driver_register
driver->type:0
driver->type:0
driver->type:1
froad:rte_eth_driver_register
driver->type:0
ready rte_eal_intr_init
EAL: PCI device 0000:06:00.0 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:06:00.1 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:06:00.2 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:06:00.3 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
PANIC in rte_kni_init():
Can not open /dev/kni
8: [./bin/dpvs() [0x42f321]]
7: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f0402d0aaf5]]
6: [./bin/dpvs(main+0x102) [0x42e8f2]]
5: [./bin/dpvs(netif_init+0x102) [0x43bd72]]
4: [./bin/dpvs() [0x439ba0]]
3: [./bin/dpvs(rte_kni_init+0x388) [0x46b598]]
2: [./bin/dpvs(__rte_panic+0xbe) [0x42b0de]]
1: [./bin/dpvs(rte_dump_stack+0x1a) [0x4a42ea]]
Aborted (core dumped)
from dpvs.
/dev/kni does not exist. Is rte_kni.ko loaded? If yes, please try DPDK's kni sample to see whether there is a compatibility issue between kni and the NIC:
http://dpdk.org/doc/guides/sample_app_ug/kernel_nic_interface.html
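A quick way to check; the module path below is just the common DPDK build-output location, so substitute your own build directory:
```
# Load the KNI kernel module built together with DPDK
insmod /path/to/dpdk/x86_64-native-linuxapp-gcc/kmod/rte_kni.ko

# Confirm the module is loaded and the device node now exists
lsmod | grep rte_kni
ls -l /dev/kni
```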
from dpvs.
@beacer thanks, now it is running
from dpvs.
@beacer Could you add driver support for the Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe NIC? Adding the following to bnxt_ethdev.c has no effect:
#define BROADCOM_DEV_ID_5720 0x165f
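For what it's worth, the device ID itself can be confirmed from the shell; whether the bnxt PMD in your DPDK version actually lists 14e4:165f in its supported-device table is a separate question for the DPDK community:
```
# The BCM5720 should show up with vendor:device ID 14e4:165f
lspci -nn | grep -i 'BCM5720'
```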
from dpvs.
@liuflylove666 NIC drivers belong to DPDK; please ask the DPDK community at www.dpdk.org. There should be a mailing list there.
from dpvs.