sflow / host-sflow

host-sflow agent

Home Page: http://sflow.net

License: Other

Shell 1.32% Makefile 1.54% C 91.21% C++ 0.43% Python 0.67% CMake 0.48% Dockerfile 0.31% Ruby 3.58% Go 0.45%

host-sflow's Introduction

This software is distributed under the following license:
http://sflow.net/license.html

Welcome to the project!

Host-sFlow: http://sflow.net/about.php
Documentation and examples: http://sflow.net/documentation.php
Related Links: http://sflow.net/relatedlinks.php

Please port this agent to every OS that you care about.

If you don't see the binary download you are looking for, such as
for Debian, Ubuntu, Solaris, FreeBSD, AIX, Darwin (perhaps with
Docker or KVM extensions) you can compile it yourself from the
sources:

git clone https://github.com/sflow/host-sflow

Discussion group is here:
  https://groups.google.com/group/host-sflow

Was formerly here:
  https://sourceforge.net/p/host-sflow/mailman/

AUTHORS
Neil McKee ([email protected])
Sonia Panchen ([email protected])
Stuart Johnston ([email protected])
Robert Jordan
Nicolas Satterly ([email protected])
Collaboration with iozone project (http://iozone.org)
Johnny Johnson ([email protected])
Robert Alexander ([email protected])
Hubert Chu ([email protected])
Don Bollinger ([email protected])
Corey Hickey

host-sflow's People

Contributors: bugfood, bun, jfieber, sflow

host-sflow's Issues

Configuration file

It would be nice if the Host sFlow exporter had a configuration file that could be edited.

hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device

I'm using hsflowd to export performance metrics using the sFlow protocol, but I'm facing the error "hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device" in /var/log/syslog. hsflowd is working properly, but the error is logged roughly every 23 minutes.

cat /etc/hsflowd.conf
  sflow {
    collector { ip = 127.0.0.1 UDPPort=6343 }
    sampling=100
    sampling.10G=100
    pcap { speed = 1- }
    tcp {}
  }
cat /etc/hsflowd.auto
  rev_start=1
  hostname=test
  sampling=100
  header=128
  datagram=1400
  polling=30
  sampling.10G=100
  agentIP=xxx
  agent=bond0
  ds_index=1
  collector=127.0.0.1 6343
  rev_end=1
journalctl -f -u hsflowd.service
  Oct 30 09:03:38 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:03:38 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:03:38 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:26:24 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:26:24 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:26:24 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:49:08 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:49:08 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 09:49:08 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 10:11:54 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 10:11:54 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 10:11:54 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 10:34:38 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  Oct 30 10:34:38 test hsflowd[2344]: SFF8036 ethtool ioctl failed: No such device
  ...

Any ideas?

Missing docker vir_* metrics when using hsflowd as a container

Hello,
I'm trying to use hsflowd as a container on coreos using this Dockerfile

The hsflowd package hsflowd-ubuntu16_2.0.5-7_amd64.deb was built (commit f7bcfac) by running:

sudo ./docker_build_on ubuntu16

I'm using the command below to start the container:

/usr/bin/docker run --cap-add=NET_ADMIN --pid=host --uts=host --net=host \
-v /var/run/docker.sock:/var/run/docker.sock -v /sys/fs/cgroup/:/sys/fs/cgroup/:ro \
--name hsflowd hsflowd

I don't have vir_* metrics available in sflow-rt from hsflowd running as a container:

core@core-1 ~ $ curl http://localhost:8008/dump/10.x.x.81/ALL/json|grep metricName|grep vir
core@core-1 ~ $ curl http://localhost:8008/dump/10.x.x.82/ALL/json|grep metricName|grep vir
core@core-1 ~ $ curl http://localhost:8008/dump/10.x.x.83/ALL/json|grep metricName|grep vir

I see them from hsflowd on normal ubuntu vm:

core@core-1 ~ $ curl http://localhost:8008/dump/10.x.x.84/ALL/json|grep metricName|grep vir|wc -l
140

I attached hsflowd logs from both (container and vm) below:

I have other containers running on nodes 10.x.x.81-83 where I use hsflowd as a container:

core@core-1 ~ $ dkc
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
b0f75c9f95fa        nginx                    "nginx -g 'daemon off"   58 minutes ago      Up 58 minutes       80/tcp, 443/tcp                                  awesome_sammet
c7467a33eb0b        localhost:5000/hsflowd   "/bin/sh -c '/etc/ini"   About an hour ago   Up About an hour                                                     hsflowd
c88f6c3dca15        sflow/sflow-rt           "/sflow-rt/start.sh"     7 hours ago         Up 7 hours          0.0.0.0:6343->6343/udp, 0.0.0.0:8008->8008/tcp   fervent_brattain
c12de84bf3cb        registry:2               "/entrypoint.sh /etc/"   26 hours ago        Up 26 hours         127.0.0.1:5000->5000/tcp                         registry
core@core-1 ~ $ dke -it hsflowd bash

root@core-1:/# export DOCKER_API_VERSION=1.22

root@core-1:/# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
b0f75c9f95fa        nginx                    "nginx -g 'daemon off"   58 minutes ago      Up 58 minutes       80/tcp, 443/tcp                                  awesome_sammet
c7467a33eb0b        localhost:5000/hsflowd   "/bin/sh -c '/etc/ini"   About an hour ago   Up About an hour                                                     hsflowd
c88f6c3dca15        sflow/sflow-rt           "/sflow-rt/start.sh"     7 hours ago         Up 7 hours          0.0.0.0:6343->6343/udp, 0.0.0.0:8008->8008/tcp   fervent_brattain
c12de84bf3cb        registry:2               "/entrypoint.sh /etc/"   26 hours ago        Up 26 hours         127.0.0.1:5000->5000/tcp                         registry

I could have missed something along the way, though.

outputPort is always 0 in flow samples

I'm testing hsflowd version 2.0.11 on Cumulus Linux 4.2.1, and in my simple setup

receiver ---(index3) switch (index5)----- sender/collector

the flow samples are always sent with outputPort 0, as evident from the sflowtool dump:

startSample ----------------------
sampleType_tag 0:1
sampleType FLOWSAMPLE
sampleSequenceNo 3093
sourceId 0:5
meanSkipCount 1000
samplePool 3093000
dropEvents 0
inputPort 5
outputPort 0
flowBlock_tag 0:1
flowSampleType HEADER
headerProtocol 1
sampledPacketSize 1530
strippedBytes 4
headerLen 128
headerBytes 08-00-27-71-E3-C0-08-00-27-3D-B7-CA-08-00-45-00-05-DA-E9-BC-40-00-40-11-3C-F1-0A-01-03-64-01-00-00-01-E6-8A-13-89-05-C6-54-0D-00-01-20-0D-5F-F9-64-A4-00-00-72-1A-32-33-34-35-00-00-00-00-30-31-32-33-34-35-36-37-38-39-30-31-32-33-34-35-36-37-38-39-30-31-32-33-34-35-36-37-38-39-30-31-32-33-34-35-36-37-38-39-30-31-32-33-34-35-36-37-38-39-30-31-32-33-34-35-36-37-38-39-30-31-32-33-34-35
dstMAC 08002771e3c0
srcMAC 0800273db7ca
IPSize 1512
ip.tot_len 1498
srcIP 10.1.3.100
dstIP 1.0.0.1
IPProtocol 17
IPTOS 0
IPTTL 64
IPID 48361
UDPSrcPort 59018
UDPDstPort 5001
UDPBytes 1478
endSample   ----------------------
endDatagram   =================================

Please let me know if you need more information.

Linux port does not compile 2.0.2/2.0.3

e612d67 seems to have introduced some bugs that break the build:

readInterfaces.c: In function ‘ethtool_get_GLINKSETTINGS’:
readInterfaces.c:276:25: warning: passing argument 1 of ‘setAdaptorSpeed’ from incompatible pointer type
setAdaptorSpeed(adaptor, 0);
^
In file included from readInterfaces.c:9:0:
hsflowd.h:534:8: note: expected ‘struct HSP ’ but argument is of type ‘struct SFLAdaptor *’
void setAdaptorSpeed(HSP *sp, SFLAdaptor *adaptor, uint64_t speed);
^
readInterfaces.c:276:9: error: too few arguments to function ‘setAdaptorSpeed’
setAdaptorSpeed(adaptor, 0);
^
In file included from readInterfaces.c:9:0:
hsflowd.h:534:8: note: declared here
void setAdaptorSpeed(HSP *sp, SFLAdaptor *adaptor, uint64_t speed);
^
readInterfaces.c:283:25: warning: passing argument 1 of ‘setAdaptorSpeed’ from incompatible pointer type
setAdaptorSpeed(adaptor, ifSpeed_bps);
^
In file included from readInterfaces.c:9:0:
hsflowd.h:534:8: note: expected ‘struct HSP *’ but argument is of type ‘struct SFLAdaptor *’
void setAdaptorSpeed(HSP *sp, SFLAdaptor *adaptor, uint64_t speed);
^
readInterfaces.c:283:34: warning: passing argument 2 of ‘setAdaptorSpeed’ makes pointer from integer without a cast
setAdaptorSpeed(adaptor, ifSpeed_bps);
^
In file included from readInterfaces.c:9:0:
hsflowd.h:534:8: note: expected ‘struct SFLAdaptor *’ but argument is of type ‘uint64_t’
void setAdaptorSpeed(HSP *sp, SFLAdaptor *adaptor, uint64_t speed);
^
readInterfaces.c:283:9: error: too few arguments to function ‘setAdaptorSpeed’
setAdaptorSpeed(adaptor, ifSpeed_bps);
^
In file included from readInterfaces.c:9:0:
hsflowd.h:534:8: note: declared here
void setAdaptorSpeed(HSP *sp, SFLAdaptor *adaptor, uint64_t speed);
^
readInterfaces.c: At top level:
readInterfaces.c:296:15: warning: ‘ethtool_get_GSET’ defined but not used [-Wunused-function]
static bool ethtool_get_GSET(HSP *sp, struct ifreq *ifr, int fd, SFLAdaptor *adaptor)
^
make[1]: *** [Makefile:363: readInterfaces.o] Error 1

No flowsamples sent on Debian 9.12

Hello,
I installed hsflowd on a server (Debian 9.12) to monitor traffic on it (packet sampling). After configuring the hsflowd.conf file, I fetched the datagrams with a collector using sflowtool. I receive counter samples but never flow samples.
Here is my hsflowd.conf file:

sflow {
  agent = eth0
  DNSSD = off
  sampling = 10
  polling = 20
  collector { ip=138.195.139.11 udpport=6343 }
  nflog { group = 5  probability = 0.0025 }
}

I ran the commands to configure NFLOG in iptables beforehand, as explained in the documentation. I also restarted hsflowd after modifying the conf file.
I also tried another configuration (after making sure eth0 is the name of the network interface):

sflow {
  agent = eth0
  DNSSD = off
  sampling = 10
  polling = 20
  collector { ip=138.195.139.11 udpport=6343 }
  pcap = { dev=eth0 }
}

Is packet sampling not supported on my server, or did I miss something?

Thanks in advance for your answer,

AlberichVR

FreeBSD Makefile problem with json.

I just built host-sflow from sources for FreeBSD 11.0

The "make" command produces these errors for the json Makefile: lines 24 and 27, "Missing dependency operator"; line 30, "Need an operator".

I tried running "gmake", but after leaving the json directory successfully ("Nothing to be done for 'all'") it doesn't understand the FreeBSD Makefile.

I commented out "cd src/json; $(MAKE)" in the main Makefile, ran "make" again, and the installation completed successfully.

Broken line parsing in readInterfaces

Hi,

The parsing of /proc/net/dev is broken if the lines are longer than MAX_PROC_LINE_CHARS. There is a comment saying that fgets will chop the line off if it's longer; however, the next call to fgets returns the rest of the line.

Relevant code: https://github.com/sflow/host-sflow/blob/master/src/Linux/readInterfaces.c#L653-L673

My /proc/net/dev looks something like this:

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
            eth0: 359401051 4518683    0    0    0     0          0    513091 144790237  522864    0    0    0     0       0          0
bond-interface-a: 286042294609005 1770579283801    0 180681106    0     0          0  20252445 23210149855391943 15484818667769    0  213    0     0       0          0
bond-interface-b: 129287069135225 1719490381267    5 2143401    0     0          0  14099075 20773782419083321 13758778336373    0 10414    0     0       0          0

Since the lines for bond-interface-a and bond-interface-b are longer than MAX_PROC_LINE_CHARS, the parser treats the leftover interface counters as device names and then tries to call the SIOCGIFFLAGS ioctl on a device named "0". This generates a lot of errors :)

I'm not sure what the best way to fix this would be. Should parseNextTok return NULL if it can't find the separator? Or should the while loop make sure we skip trailing line data?

The same read pattern seems to be used in multiple places in the source code.

Thanks,
Anton

Sampling rate doesn't get programmed if sflow is enabled on pre-provisioned bonds

This issue is observed in both the Base and Premium versions of OS10.

The sampling rate config doesn't get programmed into the hardware for LAGs (bonds) if sFlow is enabled on a pre-provisioned LAG (say, a static LAG with member slave ports already added).

Would uncommenting the adaptorNIO->bond_master check help solve this issue?
In this config scenario, updateBondCounters might be called only from agentCB_getCounters_interface.

hsflowd version 2.0.8-1:
https://github.com/sflow/host-sflow/blob/v2.0.8-1/src/Linux/readPackets.c#123
if(/*adaptorNIO->bond_master ||*/ adaptorNIO->bond_slave) {
  updateBondCounters(sp, adaptor);

The updateBondCounters function would likely quit on a failed if (procFile) check for the bond_slave (member port name) at https://github.com/sflow/host-sflow/blob/v2.0.8-1/src/Linux/readNioCounters.c#41.

Please check.

How to modify and collect custom protocols?

Hello, I have installed Host sFlow on CentOS 6 and would like to use it to collect custom protocols. The first 14 bytes of the custom packet structure are the source and destination MAC addresses plus a 2-byte protocol label. Next come 44 bytes of other content, followed by 20 bytes containing the source and destination IPs. When I use the flow render app and choose ipsource and ipdestination, I cannot see the content. How should I modify the source code?

cannot start hsflowd on /proc hardened folder

I was testing hsflowd on one of my CentOS VMs, and it seems that the service cannot be started on systems where /proc is not the usual 0755. On public-facing servers, the procedure is to harden folders and files to break kernel exploits.

When I started hsflowd with /proc set to 0550, it crashed; strace showed that it was trying to run as user nobody and the process wanted to read interface information from /proc.

Is there an (easy) way to change the user through the /etc/ config? If not, I'll live with it, as I realize my environment is not a typical one :)

Feel free to close this ticket if the answer is no :)

Thank you!

Dynamic sampling rate for 10Gbps

I have a problem with a very inaccurate sampling rate on a 10 Gbps link with low traffic (less than 500 Mbps). Would it be possible to implement a dynamically adjusted sampling rate in hsflowd?

Support configuration to allow picking discrete set of interfaces to turn on sflow in addition to the regex based approach

With the switchport regex-based approach, it is almost impossible to turn sFlow on for a set of discrete interfaces without inadvertently turning it on for unintended interfaces.

Supporting a list of discrete interfaces alongside the current regex-based approach would allow selectively choosing the interfaces to enable sFlow on, which is especially useful on switches.

E.g.

switchport = e101-001-1, e101-109-0, e102-123-2

or multiple switchport configurations:

switchport = e101-001-1
switchport = e101-109-0

etc.

[linux] agent.cidr ignores secondary IPv4 addresses

hsflowd appears to use the ancient netdevice ioctls in order to obtain the list of candidate IPv4 agent IPs on the system:

// Try to get the IP address for this interface
if(ioctl(fd,SIOCGIFADDR, &ifr) < 0) {

This ioctl assumes that a single network device has only a single IPv4 address. That assumption has been false for decades (since Linux v2.2, IIRC).

As a practical consequence this means that hsflowd is only able to select the first IPv4 address added to an interface as its agent IP.

For example, consider the following network setup, extremely common on routers, where there is a second globally scoped IP address (192.0.2.10) added to the loopback interface. This address serves as the primary address of the router.

rlnc-user@hsflowd:~/host-sflow$ ip -4 address list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 192.0.2.10/32 scope global lo
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc fq_codel state UP group default qlen 1000
    inet 10.20.211.88/26 brd 10.20.211.127 scope global dynamic ens3
       valid_lft 84048sec preferred_lft 84048sec

Given this configuration, it appears impossible to make hsflowd use 192.0.2.10 as its agent IP.

Setting agent = lo yields an agent IP of 127.0.0.1 (not entirely unexpected, although one could reasonably have expected the globally scoped address to be given higher precedence than a host-scoped one by default).

Setting agent.cidr = 192.0.2.10/32 (or agent.cidr = 192.0.2.0/24) unexpectedly yields an agent IP of 10.20.211.88. Presumably this happens because the CIDR filtering logic never gets to evaluate the 192.0.2.10 address, therefore reverting to the default behaviour (i.e., behaving as if agent.cidr was not specified).

To fix this I believe it is necessary to rewrite readInterfaces() to use a more current API to enumerate local IP addresses, such as getifaddrs(3).

Another approach would be to make it possible for the user to hard-code the agent IP in hsflowd.conf directly, thus disabling the auto-detection logic completely. As far as I can tell, the agent IP is just an identifier; it should not have to correspond to an address configured on a local network device.

Total traffic Values are not accurate

We are testing fastnetmon.

fcli show total_traffic_counters
incoming traffic 50413 pps
incoming traffic 269 mbps
incoming traffic 17 flows
outgoing traffic 7160 pps
outgoing traffic 4 mbps
outgoing traffic 2 flows
internal traffic 0 pps
internal traffic 0 mbps
other traffic 9 pps
other traffic 0 mbps

The outgoing traffic shows as just 4 mbps, but when we checked the MRTG connected to the switches, it shows outgoing traffic of over 1 Gbps. Do you have any idea why FastNetMon displays incorrect values?

fcli show system_counters

total_simple_packets_processed 1752718
total_ipv4_packets 1752718
total_ipv6_packets 0
unknown_ip_version_packets 0
total_unparsed_packets 0
total_unparsed_packets_speed 0
total_remote_whitelisted_packets_packets 0
total_flowspec_filtered_packets 0
total_flowspec_filtered_bytes 0
total_flowspec_whitelist_packets 0
traffic_db_errors 0
traffic_db_pushed_messages 1752718
traffic_db_sampler_seen_packets 0
traffic_db_sampler_selected_packets 0
speed_recalculation_time_seconds 0
speed_recalculation_time_microseconds 4120
all_traffic_calculation_delay_shorter 0
all_traffic_calculation_delay_negative 0
all_traffic_calculation_delay_longer 0
total_number_of_hosts 17408
remote_hosts_hash_load_factor_integer 0
remote_hosts_hash_load_factor_fraction 320
remote_hosts_hash_size 3296
remote_hosts_hash_bucket_count 10273
hosts_hash_load_factor_integer 0
hosts_hash_load_factor_fraction 371
hosts_hash_size 875
hosts_hash_bucket_count 2357
hosts_hash_load_factor_ipv6_integer 0
hosts_hash_load_factor_ipv6_fraction 0
hosts_hash_size_ipv6 0
hosts_hash_ipv6_bucket_count 1
influxdb_writes_total 664387
influxdb_writes_failed 0
clickhouse_metrics_writes_total 479992
clickhouse_metrics_writes_failed 0
netflow_all_protocols_total_flows_speed 0
sflow_raw_packet_headers_total_speed 40
entries_flow_tracking 25
flow_exists_for_ip 25
flow_does_not_exist_for_ip 850
traffic_buffer_duration_seconds_ipv4 0
traffic_buffer_duration_seconds_ipv6 0
total_flexible_thresholds_matched_bytes_ipv4 0
total_flexible_thresholds_matched_packets_ipv4 0
total_flexible_thresholds_matched_bytes_ipv6 0
total_flexible_thresholds_matched_packets_ipv6 0
sflow_raw_udp_packets_received 357498
sflow_udp_receive_errors 0
sflow_udp_receive_eagain 0
sflow_total_packets 357498
sflow_bad_packets 0
sflow_flow_samples 1752718
sflow_bad_flow_samples 0
sflow_padding_flow_sample 0
sflow_with_padding_at_the_end_of_packet 357498
sflow_parse_error_nested_header 0
sflow_counter_sample 6154
sflow_raw_packet_headers_total 1752718
sflow_ipv4_header_protocol 0
sflow_ipv6_header_protocol 0
sflow_unknown_header_protocol 0
sflow_extended_router_data_records 1752718
sflow_extended_switch_data_records 1752718
sflow_extended_gateway_data_records 1751724
global_system_ignoredmulti 180794
global_system_incsumerrors 0
global_system_indatagrams 38167788
global_system_inerrors 0
global_system_noports 196348
global_system_outdatagrams 30489262
global_system_rcvbuferrors 0
global_system_sndbuferrors 0

===========================================

fcli show main

af_packet_extract_tunnel_traffic: false
af_packet_read_packet_length_from_ip_header: false
af_packet_use_new_generation_parser: false
afpacket_strict_cpu_affinity: false
api_host: 127.0.0.1
api_host_counters_max_hosts_in_response: 100
api_port: 50052
asn_lookup: true
average_calculation_time: 5
ban_details_records_count: 25
ban_status_delay: 20
ban_status_updates: false
ban_time: 1900
ban_time_total_hostgroup: 1900
build_total_hostgroups_from_per_host_hostgroups: false
cache_path: /var/cache/fastnetmon
clickhouse_metrics: true
clickhouse_metrics_database: fastnetmon
clickhouse_metrics_host: 127.0.0.1
clickhouse_metrics_password:
clickhouse_metrics_per_protocol_counters: true
clickhouse_metrics_port: 9000
clickhouse_metrics_push_period: 1
clickhouse_metrics_username: default
collect_attack_pcap_dumps: false
collect_simple_attack_dumps: true
connection_tracking_skip_ports: false
country_lookup: false
do_not_ban_incoming: false
do_not_ban_outgoing: true
do_not_cap_ban_details_records_count: false
do_not_withdraw_flow_spec_announces_on_restart: false
do_not_withdraw_unicast_announces_on_restart: false
drop_root_permissions: false
dump_all_traffic: false
dump_all_traffic_json: false
dump_internal_traffic: false
dump_other_traffic: false
email_notifications_add_simple_packet_dump: true
email_notifications_auth: true
email_notifications_auth_method:
email_notifications_disable_certificate_checks: false
email_notifications_enabled: false
email_notifications_from: [email protected]
email_notifications_hide_flow_spec_rules: false
email_notifications_host: smtp.gmail.com
email_notifications_password: ********
email_notifications_port: 587
email_notifications_recipients:
email_notifications_tls: true
email_notifications_username: [email protected]
email_subject_blackhole_block: FastNetMon blocked host {{ ip }}
email_subject_blackhole_unblock: FastNetMon unblocked host {{ ip }}
email_subject_partial_block: FastNetMon partially blocked traffic for host {{ ip }}
email_subject_partial_unblock: FastNetMon partially unblocked traffic for host {{ ip }}
enable_api: true
enable_asn_counters: true
enable_ban: false
enable_ban_hostgroup: false
enable_ban_ipv6: false
enable_ban_remote_incoming: true
enable_ban_remote_outgoing: true
enable_connection_tracking: true
enable_total_hostgroup_counters: false
flexible_thresholds: false
flexible_thresholds_disable_multi_alerts: false
flow_spec_ban_time: 1900
flow_spec_detection_prefer_simple_packets: false
flow_spec_do_not_process_ip_fragmentation_flags_field: false
flow_spec_do_not_process_length_field: false
flow_spec_do_not_process_source_address_field: false
flow_spec_do_not_process_tcp_flags_field: false
flow_spec_execute_validation: true
flow_spec_fragmentation_options_use_match_bit: false
flow_spec_ignore_do_not_fragment_flag: false
flow_spec_tcp_options_use_match_bit: false
flow_spec_unban_enabled: true
force_asn_lookup: false
force_native_mode_xdp: false
generate_attack_traffic_samples: false
generate_attack_traffic_samples_delay: 60
generate_hostgroup_traffic_baselines: false
generate_hostgroup_traffic_baselines_delay: 60
generate_hostgroup_traffic_samples: false
generate_hostgroup_traffic_samples_delay: 60
generate_max_talkers_report: false
generate_max_talkers_report_delay: 300
gobgp: false
gobgp_announce_host: true
gobgp_announce_host_ipv6: true
gobgp_announce_hostgroup_networks: false
gobgp_announce_hostgroup_networks_ipv4: false
gobgp_announce_hostgroup_networks_ipv6: false
gobgp_announce_remote_host: false
gobgp_announce_whole_subnet: false
gobgp_announce_whole_subnet_custom_ipv6_prefix_length: 48
gobgp_announce_whole_subnet_custom_prefix_length: 24
gobgp_announce_whole_subnet_force_custom_ipv6_prefix_length: false
gobgp_announce_whole_subnet_force_custom_prefix_length: false
gobgp_announce_whole_subnet_ipv6: false
gobgp_api_host: localhost
gobgp_api_port: 50051
gobgp_bgp_listen_port: 179
gobgp_communities_host_ipv4:
gobgp_communities_hostgroup_networks_ipv4:
gobgp_communities_hostgroup_networks_ipv6:
gobgp_communities_subnet_ipv4:
gobgp_communities_subnet_ipv6:
gobgp_community_host: 65001:668
gobgp_community_host_ipv6: 65001:668
gobgp_community_remote_host: 65001:669
gobgp_community_subnet: 65001:667
gobgp_community_subnet_ipv6: 65001:667
gobgp_do_not_manage_daemon: false
gobgp_flow_spec_announces: false
gobgp_flow_spec_default_action: discard
gobgp_flow_spec_next_hop_ipv4:
gobgp_flow_spec_next_hop_ipv6:
gobgp_flow_spec_rate_limit_value: 1024
gobgp_flow_spec_v6_announces: false
gobgp_flow_spec_v6_default_action: discard
gobgp_flow_spec_v6_rate_limit_value: 1024
gobgp_ipv6: false
gobgp_next_hop: 0.0.0.0
gobgp_next_hop_hostgroup_networks_ipv4: 0.0.0.0
gobgp_next_hop_hostgroup_networks_ipv6: 100::1
gobgp_next_hop_ipv6: 100::1
gobgp_next_hop_remote_host: 0.0.0.0
gobgp_router_id:
graphite: false
graphite_host: 127.0.0.1
graphite_port: 2003
graphite_prefix: fastnetmon
graphite_push_period: 1
influxdb: true
influxdb_attack_notification: true
influxdb_auth: true
influxdb_custom_tags: true
influxdb_database: fastnetmon
influxdb_host: 127.0.0.1
influxdb_kafka: false
influxdb_kafka_brokers:
influxdb_kafka_partitioner: consistent
influxdb_kafka_topic: fastnetmon
influxdb_password: ********
influxdb_per_protocol_counters: true
influxdb_port: 8086
influxdb_push_host_ipv4_flexible_counters: true
influxdb_push_host_ipv6_counters: true
influxdb_push_host_ipv6_flexible_counters: true
influxdb_push_period: 1
influxdb_skip_host_counters: true
influxdb_tag_name: server
influxdb_tag_value: fastnetmon5
influxdb_tags_table: foo=bar
influxdb_user: fastnetmon
interfaces:
interfaces_xdp:
ipfix_parse_datalink_frame_section: false
ipfix_per_router_sampling_rate:
ipv4_automatic_data_cleanup: true
ipv4_automatic_data_cleanup_delay: 300
ipv4_automatic_data_cleanup_threshold: 300
ipv4_remote_automatic_data_cleanup: true
ipv4_remote_automatic_data_cleanup_delay: 300
ipv4_remote_automatic_data_cleanup_threshold: 300
ipv6_automatic_data_cleanup: true
ipv6_automatic_data_cleanup_delay: 300
ipv6_automatic_data_cleanup_threshold: 300
keep_blocked_hosts_during_restart: false
keep_flow_spec_announces_during_restart: false
keep_traffic_counters_during_restart: false
license_use_port_443: true
logging_level: info
logging_local_syslog_logging: false
logging_remote_syslog_logging: false
logging_remote_syslog_port: 514
logging_remote_syslog_server: 10.10.10.10
microcode_xdp_path: /etc/fastnetmon/xdp_kernel.o
mirror_af_external_packet_sampling: false
mirror_af_packet_disable_multithreading: true
mirror_af_packet_fanout_mode: cpu
mirror_af_packet_sampling: true
mirror_af_packet_sampling_rate: 100
mirror_af_packet_socket_stats: true
mirror_af_packet_workers_number: 1
mirror_af_packet_workers_number_override: false
mirror_afpacket: false
mirror_external_af_packet_sampling_rate: 100
mirror_xdp: false
mongo_store_attack_information: false
monitor_local_ip_addresses: false
netflow: false
netflow_count_packets_per_device: false
netflow_custom_sampling_ratio_enable: false
netflow_host: 0.0.0.0
netflow_ignore_long_duration_flow_enable: false
netflow_ignore_sampling_rate_from_device: false
netflow_ipfix_inline: false
netflow_long_duration_flow_limit: 1
netflow_mark_zero_next_hop_and_zero_output_as_dropped: false
netflow_multi_thread_processing: false
netflow_ports: 2055
netflow_process_only_flows_with_dropped_packets: false
netflow_rx_queue_overflow_monitoring: false
netflow_sampling_cache: false
netflow_sampling_ratio: 1
netflow_socket_read_mode: recvfrom
netflow_templates_cache: false
netflow_threads_per_port: 1
netflow_v5_custom_sampling_ratio_enable: false
netflow_v5_per_router_sampling_rate:
netflow_v5_sampling_ratio: 1
netflow_v9_lite: false
netflow_v9_per_router_sampling_rate:
networks_list: 11.22.33.0/22
64.235.32.0/19
72.18.192.0/20
216.108.224.0/20
beef::1/64
networks_whitelist:
networks_whitelist_remote:
notify_script_enabled: false
notify_script_format: text
notify_script_hostgroup_enabled: false
notify_script_hostgroup_path: /etc/fastnetmon/scripts/notify_about_attack.sh
notify_script_path: /etc/fastnetmon/scripts/notify_about_attack.sh
override_internal_traffic_as_incoming: false
override_internal_traffic_as_outgoing: true
per_direction_hostgroup_thresholds: true
pid_path: /var/run/fastnetmon.pid
poll_mode_xdp: false
process_incoming_traffic: true
process_ipv6_traffic: true
process_outgoing_traffic: true
prometheus: false
prometheus_export_host_ipv4_counters: false
prometheus_export_host_ipv6_counters: false
prometheus_export_network_ipv4_counters: true
prometheus_export_network_ipv6_counters: true
prometheus_host: 127.0.0.1
prometheus_port: 9209
redis_enabled: false
redis_host: 127.0.0.1
redis_port: 6379
redis_prefix: fastnetmon
remote_host_tracking: true
sflow: true
sflow_count_packets_per_device: false
sflow_extract_tunnel_traffic: false
sflow_host: 64.235.40.29
sflow_ports: 6343
sflow_read_packet_length_from_ip_header: false
sflow_track_sampling_rate: true
sflow_use_new_generation_parser: false
slack_notifications_add_simple_packet_dump: true
slack_notifications_enabled: false
slack_notifications_url: https://hooks.slack.com/services/TXXXXXXXX/BXXXXXXXXX/LXXXXXXXXX
speed_calculation_delay: 1
system_group: fastnetmon
system_user: fastnetmon
telegram_notifications_add_simple_packet_dump: true
telegram_notifications_bot_token: xxx:xxx
telegram_notifications_enabled: false
telegram_notifications_recipients:
tera_flow: false
tera_flow_host: 0.0.0.0
tera_flow_ports:
threshold_specific_ban_details: false
traffic_buffer: false
traffic_buffer_port_mirror: false
traffic_buffer_size: 100000
traffic_db: true
traffic_db_host: 127.0.0.1
traffic_db_port: 8100
traffic_db_sampling_rate: 512
unban_enabled: true
unban_only_if_attack_finished: true
unban_total_hostgroup_enabled: true
web_api_host: 127.0.0.1
web_api_login: admin
web_api_password: ********
web_api_port: 10007
web_api_ssl: false
web_api_ssl_certificate_path: ********
web_api_ssl_host: 127.0.0.1
web_api_ssl_port: 10443
web_api_ssl_private_key_path: ********
web_api_trace_queries: false
web_callback_enabled: false
web_callback_url: http://127.0.0.1:8080/attack/notify
xdp_extract_tunnel_traffic: false
xdp_read_packet_length_from_ip_header: false
xdp_set_promisc: false
xdp_use_new_generation_parser: false
zero_copy_xdp: false

fcli show sflow_sampling_rates

10.255.0.1_1_0_65 2048
10.255.0.2_1_0_65 2048

We are using Brocade CER routers with recent 6.0x firmware.

mod_psample, mod_dropmon drop samples larger than 8124 bytes

Hi,

Currently mod_psample and mod_dropmon read packet samples from netlink using buffers of HSP_PSAMPLE_READNL_RCV_BUF and HSP_DROPMON_READNL_RCV_BUF bytes, both set to 8192. This is not enough to hold a packet at MTU 9000; with the extra metadata, the cut-off ends up at packets larger than 8124 bytes.

Currently the read from netlink is implemented like this:

static void readNetlink_PSAMPLE(EVMod *mod, EVSocket *sock, void *magic)
{
  HSP_mod_PSAMPLE *mdata = (HSP_mod_PSAMPLE *)mod->data;
  uint8_t recv_buf[HSP_PSAMPLE_READNL_RCV_BUF];
  int batch = 0;
  for( ; batch < HSP_PSAMPLE_READNL_BATCH; batch++) {
    int numbytes = recv(sock->fd, recv_buf, sizeof(recv_buf), 0);
    int numbytes = recv(sock->fd, recv_buf, sizeof(recv_buf), 0);
    if(numbytes <= 0)
      break;
    struct nlmsghdr *nlh = (struct nlmsghdr*) recv_buf;
    while(NLMSG_OK(nlh, numbytes)){
      // [..]
    }
  }
  // [..]
}

When a larger sample arrives, the recv() is truncated, NLMSG_OK() rejects the message, and the event is dropped.

I suggest increasing these buffers to 32 KiB.
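The proposed change can be sketched as a patch. The macro names come from the description above; the exact files and the form of the current definitions are assumptions:

```diff
-#define HSP_PSAMPLE_READNL_RCV_BUF 8192
+#define HSP_PSAMPLE_READNL_RCV_BUF 32768
```

and likewise for HSP_DROPMON_READNL_RCV_BUF in mod_dropmon. A 32 KiB buffer leaves ample headroom for a 9000-byte payload plus netlink headers and metadata.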

Invalid flow data

It seems the tool is not sending valid sFlow data from Windows. Tested with Nagios Network Analyzer.

No ulog/nflog sampling on arm7

Hello,
I managed to build hsflowd on arm7 for my Raspberry Pi using Docker ubuntu:18.04. I tried to configure nflog sampling, but it fails (pcap sampling works fine).

$ sudo hsflowd -dddd -f ./hsflowd.conf
[...]
dbg1: dlopen(/etc/hsflowd/modules/mod_nflog.so) failed : /etc/hsflowd/modules/mod_nflog.so: undefined symbol: __aeabi_uidiv
dlsym(mod_nflog) failed : hsflowd: undefined symbol: mod_nflog
[...]
Linux raspberrypi 4.9.35-v7+ #1014 SMP Fri Jun 30 14:47:43 BST 2017 armv7l GNU/Linux

I think that, since it was built on a fully-featured OS, the build assumes functions the Pi's toolchain does not provide. I managed to find some documentation,
but I'm not sure how to proceed, or whether I can force a specific GCC (arm-eabi-none-gcc vs arm-eabi-gcc?).

Thank you for your help

32bit/i686 __stack_chk_fail_local compile error

Greetings,

On an up-to-date 32-bit Gentoo (Gentoo Base System release 2.3) I am getting the following error:

gcc -std=gnu99 -I. -I../json -I../sflow -fPIC -g -O2 -D_GNU_SOURCE -DHSP_VERSION=2.0.9 -DUTHEAP -DHSP_OPTICAL_STATS -DHSP_MOD_DIR=/etc/hsflowd/modules -Wall -Wstrict-prototypes -Wunused-value -Wunused-function -c mod_json.c
ld -o mod_json.so mod_json.o -shared
mod_json.o: In function `evt_packet_tock': /inet/netflow/host-sflow/src/Linux/mod_json.c:1280: undefined reference to `__stack_chk_fail_local'
mod_json.o: In function `getApplication': /inet/netflow/host-sflow/src/Linux/mod_json.c:284: undefined reference to `__stack_chk_fail_local'
mod_json.o: In function `readJSON': /inet/netflow/host-sflow/src/Linux/mod_json.c:1250: undefined reference to `__stack_chk_fail_local'
ld: mod_json.so: hidden symbol `__stack_chk_fail_local' isn't defined
ld: final link failed: Bad value
make[1]: *** [Makefile:251: mod_json.so] Error 1

The following patch resolves this issue:

src/Linux/Makefile
-LD=ld
+LD=gcc

I am also getting this warning:

util.c: In function ‘hashHash’:
util.c:1545:58: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
  else if(oh->options & UTHASH_IDTY) return (uint32_t)((uint64_t)obj);

Question for the sampling rate

Hi, thank you for this great open-source project. I have a couple of questions I couldn't answer from the docs:

  1. How do I set the sampling rate to 100% (i.e. 1-in-1)? I used sampling = 1 and it doesn't seem to work.
  2. How should I interpret the sampling log info? "SamplingRate":1000,"SamplePool":9000

Here are the full settings:

agent.cidr = 192.168.0.0/16
polling = 60
sampling = 1
collector { ip=127.0.0.1 udpport=6343 }
pcap { speed = 1- }
tcp { }

FreeBSD pcap conf error

I am trying to generate sFlow from my FreeBSD network interface (rl0) and use hsflowd to send it to my collector (sflowtool).
I always get zeroed data in my collector, so I thought it would be necessary to activate PCAP to send data from my interface.

First I compiled using ports:

/usr/ports/net/hsflowd # make FEATURES="PCAP"
#make install
#make clean

I try to run hsflowd using this conf on FreeBSD:

sflow {
  DNSSD = off
  polling = 5
  sampling = 512
  collector {
    ip = X.X.X.X
    udpport = 6343
  }
  pcap {
    dev = rl0
  }
}

When I run in debug mode:

#hsflowd -ddd

I get this error:

parse error at <pcap><{> on line 17 of /usr/local/etc/hsflowd/hsflowd.conf : unexpected sFlow setting

Can you help?

Note: sflowtool currently works normally, receiving flows from my switches.

not building on Darwin

versions: host-sflow-2.0.7-4, Darwin Kernel Version 15.6.0: Thu Sep 1 15:01:16 PDT 2016; root:xnu-3248.60.11~2/RELEASE_X86_64 x86_64

compiler:

Apple LLVM version 7.3.0 (clang-703.0.31)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin

I am getting a compile-time error on Darwin: a missing field in a struct.

PLATFORM=`uname`; \
	MYVER=`./getVersion`; \
        MYREL=`./getRelease`; \
        cd src/$PLATFORM; /Library/Developer/CommandLineTools/usr/bin/make VERSION=$MYVER RELEASE=$MYREL
gcc -std=gnu99 -I. -I../sflow -O3 -DNDEBUG -Wall -D_GNU_SOURCE -c hsflowconfig.c
hsflowconfig.c:532:56: error: no member named 'ipAddr' in 'struct _SFLAdaptor'
         if(sp->sFlow->agentDevice && sp->sFlow->agentDevice->ipAddr.addr) {
                                      ~~~~~~~~~~~~~~~~~~~~~~  ^
hsflowconfig.c:534:65: error: no member named 'ipAddr' in 'struct _SFLAdaptor'
            sp->sFlow->agentIP.address.ip_v4 = sp->sFlow->agentDevice->ipAddr;
                                               ~~~~~~~~~~~~~~~~~~~~~~  ^
hsflowconfig.c:542:29: error: no member named 'ipAddr' in 'struct _SFLAdaptor'
            if(adaptor && adaptor->ipAddr.addr) {
                          ~~~~~~~  ^
hsflowconfig.c:544:53: error: no member named 'ipAddr' in 'struct _SFLAdaptor'
               sp->sFlow->agentIP.address.ip_v4 = adaptor->ipAddr;
                                                  ~~~~~~~  ^
4 errors generated.
make[1]: *** [hsflowconfig.o] Error 1
make: *** [hsflowd] Error 2

In the struct, agentDevice is of type SFLAdaptor. The reference to the ipAddr field makes me wonder if it was intended to be an HSPCollector instead.

Building on Darwin presently fails.

No VM disk statistics

Hi,

I'm testing a setup to monitor kvm with ganglia, everything seems fine except for VM disk statistics (empty graph).
Some detail:

  • RHEL 7.2 3.10.0-327.el7.x86_64
  • libvirt-1.2.17-13.el7.x86_64
  • ganglia-3.7.2-2.el7 (from epel)
  • hsflowd-2.0.1-1.x86_64.rpm (from here)

Can you help me? Thanks a lot.

FR: set source ip

Sometimes a device has multiple interfaces and dynamic routing is used. The statistics can then be sent via different interfaces. It would be nice to be able to specify the source IP (or source interface) for the exported packets.

Centos 8 support

Hi,

Is CentOS 8 supported? I have tried recompiling https://centos.pkgs.org/7/puias-unsupported-x86_64/hsflowd-1.27.3-1.x86_64.rpm.html but it errors out on an iptables/netfilter dependency:

make[1]: Leaving directory '/services/nginx/rpmbuild/sd8.com/BUILD/hsflowd-1.27.3/src/json'
PLATFORM=`uname`;
MYVER=`./getVersion`;
MYREL=`./getRelease`;
cd src/$PLATFORM; make VERSION=$MYVER RELEASE=$MYREL
make[1]: Entering directory '/services/nginx/rpmbuild/sd8.com/BUILD/hsflowd-1.27.3/src/Linux'
gcc -std=gnu99 -I. -I../sflow -O3 -DNDEBUG -Wall -Wstrict-prototypes -Wunused-value -D_GNU_SOURCE -DHSP_VERSION=1.27.3 -DUTHEAP -DHSF_ULOG -DHSF_JSON -I../json -DHSF_SYSTEM_SLICE -c hsflowconfig.c
In file included from hsflowconfig.c:9:
hsflowd.h:116:10: fatal error: linux/netfilter_ipv4/ipt_ULOG.h: No such file or directory
#include <linux/netfilter_ipv4/ipt_ULOG.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thank you

sys uptime should be more accurate

Hi,
The member "bootTime" in struct SFLAgent is defined as time_t, and it is multiplied by 1000 when filling in the sysUpTime field of the datagram.
In the sFlow v5 spec, sysUpTime is expressed in milliseconds,
so I think this affects the accuracy of counter-rate computation in the collector.
I suggest changing it to a struct timeval.

Crashing on some nodes

I have sFlow deployed in containers in a Kubernetes cluster. It's crashing on 4 out of 130 nodes, and I can't find what's different about those.

The log is:

/usr/sbin/hsflowd(log_backtrace+0x20)[0x40cde0]
/usr/sbin/hsflowd[0x40cf83]
/lib64/libpthread.so.0(+0xf5d0)[0x7fd390ecc5d0]
/etc/hsflowd/modules/mod_tcp.so(+0x1594)[0x7fd3902a2594]
SIGSEGV, faulty address is 0x72
current bus: packet

I'm using the https://github.com/sflow/host-sflow/releases/download/v2.0.19-1/hsflowd-centos7-2.0.19-1.x86_64.rpm RPM

How to handle TAP Traffic

Hello,

I am currently trying to use hsflowd to generate sFlow data from tapped fiber connections. As a result, I have multiple interfaces receiving the rx and tx traffic of an uplink.

As an example, I have the following network interfaces:

4: ntt1_tx: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
5: ntt1_rx: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
6: level3_rx: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
7: level3_tx: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

Can I configure hsflowd so that it reports incoming traffic from such an interface as outgoing traffic? Or better, is it possible to "merge" two interfaces so that they have the same id?

Version 2.0.50-3 not compatible with Debian 10

The latest version introduces INET_DIAG_SOCKOPT, which came with Linux kernel 5.10. However, the buster repository is stuck at kernel 4.19 (https://packages.debian.org/buster/linux-image-amd64), so the compilation fails.

mod_tcp.c: In function 'parse_diag_msg':
mod_tcp.c:256:7: error: 'INET_DIAG_SOCKOPT' undeclared (first use in this function); did you mean 'INET_DIAG_LOCALS'?
  case INET_DIAG_SOCKOPT: {
       ^~~~~~~~~~~~~~~~~
       INET_DIAG_LOCALS
mod_tcp.c:256:7: note: each undeclared identifier is reported only once for each function it appears in

Could you make it compatible with older kernel versions?

Ability to use tunnel interfaces like GRE for sFlow

Currently, tunnel interfaces (like GRE) do not seem to be supported.
Checked on VyOS 1.5-rolling-202403250019

Simple to reproduce:

set interfaces ethernet eth1 address 192.0.2.1/30
set interfaces tunnel tun0 address '203.0.113.1/27'
set interfaces tunnel tun0 encapsulation 'gre'
set interfaces tunnel tun0 remote '192.0.2.2'
set interfaces tunnel tun0 source-address '192.0.2.1'

set system sflow agent-interface 'tun0'
set system sflow interface 'tun0'
set system sflow sampling-rate '1000'
set system sflow server 192.0.2.254

Generated configuration:

vyos@r4# cat /run/sflow/hsflowd.conf 
# Genereated by /usr/libexec/vyos/conf_mode/system_sflow.py
# Parameters http://sflow.net/host-sflow-linux-config.php

sflow {
  polling=30
  sampling=1000
  sampling.bps_ratio=0
  agent=tun0
  collector { ip = 192.0.2.254 udpport = 6343 }
  pcap { dev=tun0 }
  dbus { }
}

Logs:

Mar 29 12:45:27 r4 systemd[1]: hsflowd.service: Scheduled restart job, restart counter is at 4.
Mar 29 12:45:27 r4 systemd[1]: Stopped hsflowd.service - Host sFlow.
Mar 29 12:45:27 r4 systemd[1]: Started hsflowd.service - Host sFlow.
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(log_backtrace+0x2e)[0x56016e5f2cce]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x11ed9)[0x56016e5f2ed9]
Mar 29 12:45:27 r4 hsflowd[11295]: /lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7f85bdcdc050]
Mar 29 12:45:27 r4 hsflowd[11295]: /etc/hsflowd/modules/mod_pcap.so(+0x1954)[0x7f85bde84954]
Mar 29 12:45:27 r4 hsflowd[11295]: /etc/hsflowd/modules/mod_pcap.so(+0x1d3f)[0x7f85bde84d3f]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(EVEventTx+0xf7)[0x56016e5f3bb7]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x12d59)[0x56016e5f3d59]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x1316a)[0x56016e5f416a]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x13340)[0x56016e5f4340]
Mar 29 12:45:27 r4 hsflowd[11295]: /lib/x86_64-linux-gnu/libc.so.6(+0x89134)[0x7f85bdd29134]
Mar 29 12:45:27 r4 hsflowd[11295]: /lib/x86_64-linux-gnu/libc.so.6(+0x1097dc)[0x7f85bdda97dc]
Mar 29 12:45:27 r4 hsflowd[11295]: SIGSEGV, faulty address is (nil)
Mar 29 12:45:27 r4 hsflowd[11295]: current bus: packet
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(log_backtrace+0x2e)[0x56016e5f2cce]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x11eeb)[0x56016e5f2eeb]
Mar 29 12:45:27 r4 hsflowd[11295]: /lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x7f85bdcdc050]
Mar 29 12:45:27 r4 hsflowd[11295]: /etc/hsflowd/modules/mod_pcap.so(+0x1954)[0x7f85bde84954]
Mar 29 12:45:27 r4 hsflowd[11295]: /etc/hsflowd/modules/mod_pcap.so(+0x1d3f)[0x7f85bde84d3f]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(EVEventTx+0xf7)[0x56016e5f3bb7]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x12d59)[0x56016e5f3d59]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x1316a)[0x56016e5f416a]
Mar 29 12:45:27 r4 hsflowd[11295]: /usr/sbin/hsflowd(+0x13340)[0x56016e5f4340]
Mar 29 12:45:27 r4 hsflowd[11295]: /lib/x86_64-linux-gnu/libc.so.6(+0x89134)[0x7f85bdd29134]
Mar 29 12:45:27 r4 hsflowd[11295]: /lib/x86_64-linux-gnu/libc.so.6(+0x1097dc)[0x7f85bdda97dc]
Mar 29 12:45:27 r4 hsflowd[11295]: PCAP: tun0 has no supported datalink encapsulaton
Mar 29 12:45:27 r4 hsflowd[11295]: Received signal 11
Mar 29 12:45:27 r4 hsflowd[11295]: SIGSEGV, faulty address is (nil)
Mar 29 12:45:27 r4 hsflowd[11295]: current bus: packet
Mar 29 12:45:27 r4 systemd[1]: hsflowd.service: Main process exited, code=exited, status=11/n/a
Mar 29 12:45:27 r4 systemd[1]: hsflowd.service: Failed with result 'exit-code'.

Version:

vyos@r4# run show version all | match hsflo
ii  hsflowd                              2.0.52-1                         all          sFlow(R) monitoring agent

An additional report: https://vyos.dev/T6033

The agentIP selection is wrong when multiple devices have the same IPv4/IPv6 address

We used hsflowd in SONiC.

  • The agent is set to Loopback0
  • Loopback0 has both an IPv4 and an IPv6 address

Issue:
Per the hsflowd design ("EnumIPSelectionPriority"), the IPv4 address should take priority over IPv6.

But the IPv6 address is selected instead:

root@MC-54:/# cat /etc/hsflowd.auto
# WARNING: Do not edit this file. It is generated automatically by hsflowd.
rev_start=2
hostname=MC-54
sampling=400
header=128
datagram=1400
polling=20
agentIP=fd00:0:201::5
agent=Loopback0
ds_index=1
collector=26.34.15.106/6343//
rev_end=2

The devices Loopback0, Loopback1001, and Loopback1002 belong to different VRFs, so they can have the same IPv4/IPv6 address:

root@MC-54:~# ip addr show dev Loopback0
35: Loopback0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 8a:5f:78:e1:3b:5d brd ff:ff:ff:ff:ff:ff
    inet 10.145.240.15/32 scope global Loopback0
       valid_lft forever preferred_lft forever
    inet6 fd00:0:201::5/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::885f:78ff:fee1:3b5d/64 scope link
       valid_lft forever preferred_lft forever
root@MC-54:~#
root@MC-54:~# ip addr show dev Loopback1001
212: Loopback1001: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue master Vrf10002 state UNKNOWN group default qlen 1000
    link/ether 1a:4d:0d:10:8d:35 brd ff:ff:ff:ff:ff:ff
    inet 10.145.240.15/32 scope global Loopback1001
       valid_lft forever preferred_lft forever
    inet6 fd00:0:201::5/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::184d:dff:fe10:8d35/64 scope link
       valid_lft forever preferred_lft forever
root@MC-54:~#
root@MC-54:~# ip addr show dev Loopback1002
213: Loopback1002: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue master Vrf10006 state UNKNOWN group default qlen 1000
    link/ether 02:1f:9f:c1:6d:25 brd ff:ff:ff:ff:ff:ff
    inet 10.145.240.15/32 scope global Loopback1002
       valid_lft forever preferred_lft forever
    inet6 fd00:0:201::5/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::1f:9fff:fec1:6d25/64 scope link
       valid_lft forever preferred_lft forever
root@MC-54:~#

But in the function "readInterfaces", the hash key of localIP/localIP6 is only the IPv4/IPv6 address, without the dev/ifname:

  // keep v4 and v6 separate to simplify HT logic
  UTHash *newLocalIP = UTHASH_NEW(HSPLocalIP, ipAddr.address.ip_v4, UTHASH_DFLT);
  UTHash *newLocalIP6 = UTHASH_NEW(HSPLocalIP, ipAddr.address.ip_v6, UTHASH_DFLT);

In our example, Loopback0, Loopback1001, and Loopback1002 have the same IPv4 address "10.145.240.15/32".
But after "readInterfaces", "localIP" holds only one entry for "10.145.240.15/32", the one for "Loopback1001".
The agent "Loopback0" therefore can't select the correct agentIP.

Version 2.0.51-17 does not work with bonding

When using version 2.0.51-17, hsflowd crashes when listening on bonding interfaces.

Configuration:

sflow {
  ....
  sampling.100M=1000
  sampling.1G=1000
  sampling.2G=1000
  sampling.10G=1000
  sampling.20G=1000
  sampling.40G=1000
  sampling=1000
  ...
  pcap { dev=bond0 }
}

The error:

....
dbg1:updateBondCounters: bond bond0 slave XXX found
Received signal 11
hsflowd(log_backtrace+0x24)[0x56077fbcb8f4]
hsflowd(+0x11afa)[0x56077fbcbafa]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x13140)[0x7f336fdcb140]
/lib/x86_64-linux-gnu/libc.so.6(+0x3d518)[0x7f336fc21518]
hsflowd(updateBondCounters+0x1f4)[0x56077fbd3f84]
hsflowd(readBondState+0x65)[0x56077fbd44c5]
hsflowd(configSwitchPorts+0xc)[0x56077fbd73dc]
hsflowd(EVEventTx+0xe7)[0x56077fbcc827]
hsflowd(EVEventTxAll+0x98)[0x56077fbccaa8]
hsflowd(EVEventTx+0xe7)[0x56077fbcc827]
hsflowd(+0x129d9)[0x56077fbcc9d9]
hsflowd(+0x12e40)[0x56077fbcce40]
hsflowd(+0x13010)[0x56077fbcd010]
hsflowd(main+0xbd9)[0x56077fbc43f9]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7f336fc07d0a]
hsflowd(_start+0x2a)[0x56077fbc479a]
SIGSEGV, faulty address is (nil)
current bus: poll
hsflowd(log_backtrace+0x24)[0x56077fbcb8f4]
hsflowd(+0x11b0c)[0x56077fbcbb0c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x13140)[0x7f336fdcb140]
/lib/x86_64-linux-gnu/libc.so.6(+0x3d518)[0x7f336fc21518]
hsflowd(updateBondCounters+0x1f4)[0x56077fbd3f84]
hsflowd(readBondState+0x65)[0x56077fbd44c5]
hsflowd(configSwitchPorts+0xc)[0x56077fbd73dc]
hsflowd(EVEventTx+0xe7)[0x56077fbcc827]
hsflowd(EVEventTxAll+0x98)[0x56077fbccaa8]
hsflowd(EVEventTx+0xe7)[0x56077fbcc827]
hsflowd(+0x129d9)[0x56077fbcc9d9]
hsflowd(+0x12e40)[0x56077fbcce40]
hsflowd(+0x13010)[0x56077fbcd010]
hsflowd(main+0xbd9)[0x56077fbc43f9]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7f336fc07d0a]
hsflowd(_start+0x2a)[0x56077fbc479a]
SIGSEGV, faulty address is (nil)
current bus: poll

When using strace:

...
setuid(65534)                           = 0
openat(AT_FDCWD, "/proc/net/bonding/bond0", O_RDONLY) = 12
fstat(12, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
read(12, "Ethernet Channel Bonding Driver:"..., 1024) = 869
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} ---
write(1, "Received signal 11\n", 19Received signal 11
)    = 19
...

Support for musl

Not all Linux distributions use glibc; support for musl (used by e.g. Alpine) would be great.

So far I've had to make the following adjustments to the source to get it to compile:

  • Remove the "backtrace" functionality
  • Remove the "malloc_stats" functionality
  • Replace (comparison_fn_t)strcmp with (int (*)(const void*, const void*))strcmp in readDiskCounters

Since these changes mostly remove functionality, I am not sure what the acceptable solution would be in this case. Maybe simply put some of these features behind #ifdef __GLIBC__? I can submit a trivial patch for this if desired.

I can't see container graphs in Ganglia

Hi,

I have installed Host sFlow in order to monitor my Docker containers, built with:

make DOCKER=yes LIBVIRT=yes VRTDSKPATH=yes
sudo make install
sudo make schedule

After hsflowd is running I can see specific statistics for virtual machines, but not for containers. Is there any extra configuration needed in order to see these graphs?

This is my hsflowd.conf:
sflow {
  DNSSD = off
  polling = 10
  sampling = 512
  sampling.http = 100
  sampling.memcache = 400

  collector {
    ip = 127.0.0.1
    udpport = 6343
  }
}

My gmond.conf:
globals {
daemonize = yes
setuid = yes
user = nobody
debug_level = 0
max_udp_msg_len = 1472
mute = yes /* don't send metrics */
deaf = no /* listen for metrics */
allow_extra_data = yes
host_dmax = 0
host_tmax = 20
cleanup_threshold = 300
gexec = no
send_metadata_interval = 0
}
cluster {
name = "DoCluster"
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
host {
location = "unspecified"
}
udp_recv_channel {
port = 8649
}
tcp_accept_channel {
port = 8649
}

udp_recv_channel {
port = 6343
}

sflow {
udp_port = 6343
accept_vm_metrics = yes
}

And my gmetad.conf:
data_source "DoCluster" 10 localhost
all_trusted on
case_sensitive_hostnames 0

Thanks in advance,

It is not possible to install the hsflowd-eos-2.0.11-1.x86_64.rpm on an Arista switch

tsc1-leaf1-1.11#extension hsflowd-eos-2.0.11-1.x86_64.rpm
% Error installing hsflowd-eos-2.0.11-1.x86_64.rpm: RPM install error: Transaction failed: package hsflowd-2.0.11-1.x86_64 is intended for a different architecture

tsc1-leaf1-1.11#bash
Arista Networks EOS shell

[martin@tsc1-leaf1-1 ~]$ sudo -i

Arista Networks EOS shell

-bash-4.3# uname -a
Linux tsc1-leaf1-1.11 3.4.43.Ar-4773753.4182F #1 SMP PREEMPT Thu Apr 20 11:27:27 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux

The hsflowd-eos-2.0.9-1.i686.rpm is working fine.

Incorrect data (CPU utilization) for KVM and host on CentOS 7

I am testing Host sFlow in the following environment:

  • Host OS : CentOS 7.3 with KVM (4 core)
  • Guest OS : CentOS 7.3 ( 2 vcore )
  • hsflowd-2.0.15-1.x86_64 is installed in Host OS

I generate load with the stress tool (yum install stress).

  • Guest OS CPU is shown 50% with top.
  • Host OS CPU is shown 25% with top.

I checked the counter information collected from hsflowd with sflowtool.

In case of Host OS
cpu_load_one 1.320
cpu_load_five 1.330
cpu_load_fifteen 1.090

In case of Guest OS
delta of CPU time(30 sec) = 1060630 - 1095300 = 69340
69340 / 30000 = 2.315

Is this correct? I'm not sure this is the right way to get CPU utilization.

hsflowd does not work on Debian 11

I compiled with "make deb", then installed with "dpkg -i hsflowd_2.0.36-2_amd64.deb". The service is running but does not work:
nothing is being sent to UDP 6343 (checked via "tcpdump -n -i lo port 6343").

cat /etc/hsflowd.conf
sflow {
collector { ip = 127.0.0.1 UDPPort=6343 }
sampling=100
sampling.10G=100
pcap { speed = 1- }
tcp {}
}

cat /etc/hsflowd.auto
rev_start=1
hostname=xxxxxxxxxx
sampling=100
header=128
datagram=1400
polling=30
sampling.10G=100
agentIP=xxxxxxxxxxx
agent=ens1f0
ds_index=1
collector=127.0.0.1 6343
rev_end=1

No out interfaces on flows

Hello,

I use host-sflow on a virtual machine (Debian 11.7 with kernel 5.10.0-23) which acts as a network edge router.
I see strange behaviour where the out interface is never set (ifIndex = 0) on any flow.

With debug enabled on the process I get this log (extract):

takeSample: hook=0 tap=enp1s4 in=enp1s4 out=<not found> pkt_len=111 cap_len=115 mac_len=14 (BC241178846C -> D6E08A0CB402 et=0x0800)
dbg2:selected sampler enp1s4 ifIndex=11
dbg1:psample netlink (type=29) CMD = 0
dbg2:psample: grp=1 in=11 out=0 n=10 seq=27448 drops=0 pktlen=106
takeSample: hook=0 tap=enp1s4 in=enp1s4 out=<not found> pkt_len=92 cap_len=96 mac_len=14 (BC241178846C -> D6E08A0CB402 et=0x0800)
dbg2:selected sampler enp1s4 ifIndex=11
dbg1:psample netlink (type=29) CMD = 0
dbg2:psample: grp=1 in=11 out=0 n=10 seq=27449 drops=0 pktlen=66
takeSample: hook=0 tap=enp1s4 in=enp1s4 out=<not found> pkt_len=52 cap_len=56 mac_len=14 (BC241178846C -> D6E08A0CB402 et=0x0800)
dbg2:selected sampler enp1s4 ifIndex=11
dbg1:psample netlink (type=29) CMD = 0
dbg2:psample: grp=1 in=11 out=0 n=10 seq=27450 drops=0 pktlen=70
takeSample: hook=0 tap=enp1s4 in=enp1s4 out=<not found> pkt_len=56 cap_len=60 mac_len=14 (BC241178846C -> D6E08A0CB402 et=0x0800)
dbg2:selected sampler enp1s4 ifIndex=11

my configuration :

$ cat /etc/hsflowd.conf
sflow {
  DNSSD = off
  polling = 10
  sampling = 1000

  agent = ens18

  collector {
    ip=XX.XX.XX.XX
    udpport=6343
  }

  # ====== Local configuration ======
  psample { group=1 }
  #  dent { sw=off switchport=enp[0-9]+s[0-9]* }
  # tcp { }
  # systemd { markTraffic = on }
}

The tc_psample script (here) is applied on all interfaces (tc_psample $DEV 1000 1).

Is there something I don't understand? Do you have any idea?

KVM monitoring does not work on CentOS 7

Hi. I've installed the latest RPM (hsflowd-centos7-2.0.15-1.x86_64.rpm) on CentOS 7 to monitor resources of the host and KVM guests.

When I start hsflowd with /usr/sbin/hsflowd -dd
I get the error below.

libvirt: XML-RPC error : Cannot create user runtime directory '/run/user/0/libvirt': Permission denied
virConnectOpenReadOnly() failed

Does somebody know why? How can I resolve this issue?

Does Host sFlow follow the sFlow v5 spec?

There are many records which cannot be recognized by the Logstash sFlow plugin or Wireshark. I checked the counter polling format values, which differ from the sFlow spec.
Which collectors are suitable for Host sFlow?

sflowtool is not showing any samples output

I need some help: I am not seeing any sFlow sample output in sflowtool. Below is the hsflowd config file; I do see takeSample output in the debug log, but nothing on the UDP port. What could prevent sFlow samples from being sent out when they made it all the way up to takeSample()?

The only errors I see in the debug output are 'Get SIOCGIFADDR failed' errors for every interface, since no IP is assigned to them. Any debugging tips would be helpful.

rev_start=1
hostname=myHostName
sampling=400
header=128
datagram=1400
polling=20
agentIP=2000:f8b0:8096:2e30::6
agent=bond0
ds_index=1
collector=127.0.0.1/6343//
rev_end=1

Debugs

dbg1: psample netlink (type=23) CMD = 0
dbg3: psample: grp=1
dbg2: psample: grp=1 in=272 out=278 n=100 seq=757 drops=0 pktlen=106
takeSample: hook=0 tap=Ethernet1_1_1 in=Ethernet1_1_1 out=Ethernet1_5_1 pkt_len=92 cap_len=92 mac_len=14 (00000000007B -> 0000000000EA et=0x8100)
dbg2: selected sampler Ethernet1_1_1 ifIndex=272
dbg1: psample netlink (type=23) CMD = 0
dbg3: psample: grp=1
dbg2: psample: grp=1 in=272 out=275 n=100 seq=758 drops=0 pktlen=106
takeSample: hook=0 tap=Ethernet1_1_1 in=Ethernet1_1_1 out=Ethernet1_3_1 pkt_len=92 cap_len=92 mac_len=14 (00000000007B -> 0000000000EA et=0x8100)
dbg2: selected sampler Ethernet1_1_1 ifIndex=272
dbg1: psample netlink (type=23) CMD = 0
dbg3: psample: grp=1
dbg2: psample: grp=1 in=272 out=278 n=100 seq=759 drops=0 pktlen=106

PCAP + Linux possible race condition

I've been experiencing issues with running host-sflow on Linux in combination with privilege dropping enabled. Sometimes (but often enough) hsflowd will refuse to start because it is unable to open some of the interfaces.

I've modified the code to log the actual PCAP error:

--- host-sflow-2.0.4.orig/src/Linux/mod_pcap.c
+++ host-sflow-2.0.4/src/Linux/mod_pcap.c
@@ -267,7 +267,7 @@
                                0, /* timeout==poll */
                                bpfs->pcap_err);
     if(bpfs->pcap == NULL) {
-      myLog(LOG_ERR, "PCAP: device %s open failed", bpfs->deviceName);
+      myLog(LOG_ERR, "PCAP: device %s open failed: %s", bpfs->deviceName, bpfs->pcap_err);
       return;
     }

This results in the following errors, when it fails:

Nov 29 15:16:43 localhost user.err hsflowd: PCAP: device eth2 open failed: eth2: You don't have permission to capture on that device (socket: Operation not permitted)
Nov 29 15:16:43 localhost user.err hsflowd: PCAP: device eth3 open failed: eth3: You don't have permission to capture on that device (socket: Operation not permitted)

Restarting the host-sflow service in a loop will eventually result in success, so this must be a race condition somewhere.

NIC sflow stream per collector

The current hsflowd allows multiple network interfaces and multiple collectors. However, it doesn't allow allocating a per-NIC sFlow stream to a specific collector; I would like to have that implemented to allow separate streams for data collection.

Current:
collector { ip=x.x.x.1 udpport=6343, ip=x.x.x.2 udpport=6343}
pcap { dev = eth1 }
pcap { dev = eth2 }

Enhancement:
pcap { dev = eth1, collector_ip=x.x.x.1:6343 }
pcap { dev = eth2, collector_ip=x.x.x.2:6343 }

Incorrect tag version

It is more of a cosmetic bug with tag versions.
I guess the expected tag was v2.0.50-4 instead of v20.0.50-4.

double disk stats computed

Hello,
Looking at src/Linux/readDiskCounters.c and digging into the /proc filesystem, I think there is an error in the computations.
From my point of view, using kernel 3.10.0-1127.10.1.el7.x86_64, if I cat /proc/diskstats I get:

[root@host ~]# cat /proc/diskstats
 253       0 vda 58221223 571170 7363640901 993650042 550607954 7206892 10253583588 3165244408 0 877360678 1357257008
 253       1 vda1 58221189 571170 7363638525 993649854 503206454 7206892 10253583588 3159062750 0 1057691671 2927779264
  11       0 sr0 23 0 164 1 0 0 0 0 0 1 1
 253      16 vdb 15425961 3561433 151905576 85938760 10812323 15883365 213623952 242215861 0 8357465 232539678
 253      32 vdc 43128310 226919 2239536699 349343297 39897393 4096268 1491074702 1888861345 0 515622209 2105718800
 252       0 dm-0 43382933 0 2239535459 352386729 42223176 0 1491074702 2163504538 0 519122815 2518555451

On line 1 we have /dev/vda and on line 2 /dev/vda1, which is a partition of /dev/vda:

[root@host ~]# fdisk -l /dev/vda

Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0000aebb

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    20971486    10484719+  83  Linux

So, looking at the code, I am pretty sure that the read/write totals are vda + vda1, but the reads of vda1 are already counted in vda's figures.
So the result seems to be incorrect.
I tried to fix this myself, but it seems to be much more complicated.

The code that computes the total is:

// report the sum over all disks - except software RAID devices and logical volumes
// because that would cause double-counting.   We identify those by their
// major numbers:
// Software RAID = 9
// Logical Vol = 253
if (majorNo != 9 && majorNo != 253) {
    dsk->reads += reads;
    total_sectors_read += sectors_read;
    dsk->read_time += read_time_ms;
    dsk->writes += writes;
    total_sectors_written += sectors_written;
    dsk->write_time += write_time_ms;
}

(lines 94 -> 106)

And I think the second problem is that, in this particular case, VirtIO devices (major 253 here) are excluded from the computation entirely?
