
pmacct / pmacct

pmacct is a small set of multi-purpose passive network monitoring tools [NetFlow IPFIX sFlow libpcap BGP BMP RPKI IGP Streaming Telemetry].

Home Page: http://www.pmacct.net

License: Other

C 98.08% Shell 0.16% Makefile 0.31% M4 0.87% PLpgSQL 0.48% Dockerfile 0.10%
netflow ipfix sflow bgp bmp kafka rabbitmq libpcap nflog geoip2

pmacct's Introduction


DOCUMENTATION

  • Online:

  • Distribution tarball:

    • ChangeLog: History of features version by version
    • CONFIG-KEYS: Available configuration directives explained
    • QUICKSTART: Examples, command-lines, quickstart guides
    • FAQS: FAQ document
    • docs/: Miscellaneous internals, UNIX signals, SQL triggers documents
    • examples/: Sample configs, maps, AMQP/Kafka consumers, clients
    • sql/: SQL documentation, default SQL schemas and customization tips

DOCKER IMAGES

Official pmacct Docker images can be found on Docker Hub. To use them (e.g. sfacctd):

 ~# docker pull pmacct/sfacctd:latest
 ~# docker run -v /path/to/sfacctd.conf:/etc/pmacct/sfacctd.conf pmacct/sfacctd

For more details, options and troubleshooting, please read the Docker documentation section.

BUILDING

Resolve dependencies, e.g.:

  • [Debian/Ubuntu]: apt-get install libpcap-dev pkg-config libtool autoconf automake make bash libstdc++-dev g++
  • [CentOS/RHEL]: yum install libpcap-devel pkgconfig libtool autoconf automake make bash libstdc++-devel gcc-c++

Build GitHub code:

 ~# git clone https://github.com/pmacct/pmacct.git
 ~# cd pmacct
 ~# ./autogen.sh
 ~# ./configure # check out available configure knobs via ./configure --help
 ~# make
 ~# make install # with super-user permission

RELICENSE INITIATIVE

The pmacct project is looking to make its code base available under a more permissive BSD-style license. More information about the motivation and process can be found in this announcement.

CONTRIBUTING

pmacct's People

Contributors

aaronfinney-openx, bolemo, claudio-ortega, dcaba, edge-intelligence, emil-palm, floatingstatic, fvdxxx, graf3net, jaredmauch, jbj, jccardonar, job, jwestfall69, matt-texier, msune, mxyns, paololucente, pldubouilh, pothier-peter, rbarazzutti, rodonile, scuzzilla, tbearma1, ustorbeck, vadimtk, vincentbernat, vittoriofoschi, vphatarp, zephyre777


pmacct's Issues

Remove labels' 15-char limit?

Would it be possible to remove the current limitation of 15 characters for labels in the networks file?

! NOTE: labels can be up to 15 characters long.
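In the meantime, a quick pre-flight check can flag over-long labels before the map is fed to pmacct. This is a minimal sketch that assumes a hypothetical `label%prefix` layout; check the real networks map syntax in CONFIG-KEYS:

```python
MAX_LABEL = 15  # the limit quoted in the note above

def overlong_labels(lines, sep="%"):
    """Return (line_no, label) pairs whose label exceeds MAX_LABEL chars.

    Assumes a hypothetical 'label<sep>prefix' layout; adapt to the real map syntax.
    """
    bad = []
    for n, line in enumerate(lines, 1):
        line = line.strip()
        if not line or line.startswith("!"):  # '!' starts a comment in pmacct maps
            continue
        if sep in line:
            label = line.split(sep, 1)[0]
            if len(label) > MAX_LABEL:
                bad.append((n, label))
    return bad

print(overlong_labels(["short%10.0.0.0/8", "a-very-long-label-name%10.1.0.0/16"]))
```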

Tags are missing

git describe --tags
fatal: No names found, cannot describe anything.

Tags would help us know which commit corresponds to which release version number on the master branch,
for instance v1.6.1.
Otherwise we have no way of discovering that information when a stable version is ready, since even the commit messages do not detail it.
Tags are also much easier to parse.
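The request above can be sketched end-to-end in a scratch repository; this only illustrates how `git describe` starts working once a tag exists, and assumes a local `git` binary:

```python
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command inside repo and return its stripped stdout."""
    res = subprocess.run(["git", "-C", repo, *args],
                         capture_output=True, text=True)
    return res.stdout.strip()

repo = tempfile.mkdtemp()
git(repo, "init", "-q")
git(repo, "-c", "user.email=ci@example.invalid", "-c", "user.name=ci",
    "commit", "--allow-empty", "-q", "-m", "release 1.6.1")
git(repo, "tag", "v1.6.1")  # the step this issue asks the maintainers to take
print(git(repo, "describe", "--tags"))  # → v1.6.1
```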

[SFACCT] ARP traffic not displayed in JSON logs

Hi,

I have a problem getting sfacctd to display ARP traffic in the JSON logs.
I use version 1.6.2 with Alcatel-Lucent Enterprise OmniSwitch equipment, which supports sFlow v5.

I can see multicast (VRRP) and unicast flows captured by sfacctd, but not ARP traffic.
I verified the .pcap with Wireshark and the sFlow datagrams do contain the ARP traffic.

sfacct.conf

daemonize: true
syslog: daemon
pidfile: /var/run/sfacctd.pid

plugins: print
print_output: json

sfacctd_counter_file: /pmacct/TMP/sFlow-Counters.txt
sfacctd_counter_output: json

interface: eth0

aggregate: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, src_port, dst_port, proto, tos, src_mask, dst_mask, tcpflags,

print_refresh_time: 60
print_history: 1m
print_output_file: /pmacct/LOGS/sFlow-file-%Y%m%d-%H%M.txt
print_history_roundoff: m

JSON output

{"event_type": "purge", "peer_ip_src": "0.0.0.0", "peer_ip_dst": "", "iface_in": 1024, "iface_out": 0, "ip_src": "10.55.51.253", "ip_dst": "224.0.0.18", "mask_src": 0, "mask_dst": 0, "port_src": 0, "port_dst": 0, "tcp_flags": "0", "ip_proto": "vrrp", "tos": 192, "packets": 1, "bytes": 68}

sFlow header

sflow_arp_header

Do you want the .pcap file for further investigation?

Best Regards
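One possible explanation (an assumption, not a confirmed diagnosis): the aggregates in the config above are IP-level primitives, and ARP is a non-IP Ethernet frame (EtherType 0x0806), so there is no IP header to populate src_host/dst_host from. Checking the EtherType of a raw sampled header is a quick way to confirm what the datagrams actually carry:

```python
import struct

ETHERTYPE_ARP = 0x0806
ETHERTYPE_IPV4 = 0x0800

def ethertype(frame: bytes) -> int:
    """Return the EtherType of an untagged Ethernet frame (offset 12)."""
    (et,) = struct.unpack_from("!H", frame, 12)
    return et

# 12 bytes of MAC addresses, then the 2-byte EtherType field
arp_frame = b"\xff" * 6 + b"\x02\x00\x00\x00\x00\x01" + struct.pack("!H", ETHERTYPE_ARP)
print(hex(ethertype(arp_frame)))  # → 0x806
```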

uacctd exports far more flows than pmacctd

Hey!

When using pmacctd and exporting flows, I get NetFlow packets with several long-running flows each. When using uacctd in a similar setup, I get a lot of NetFlow packets with one or two PDUs each; each PDU accounts for very few packets, and the start time is always equal to the end time.

uacctd uses pcap_cb to handle packets, so I wonder why the difference.

With uacctd:

Cisco NetFlow/IPFIX
    Version: 5
    Count: 1
    SysUptime: 245.613000000 seconds
    Timestamp: Dec 14, 2016 10:19:38.478043000 CET
    FlowSequence: 33122
    EngineType: RP (0)
    EngineId: 0
    00.. .... .... .... = SamplingMode: No sampling mode configured (0)
    ..00 0000 0000 0000 = SampleRate: 0
    pdu 1/1
        SrcAddr: x.x.x.x
        DstAddr: y.y.y.y
        NextHop: 0.0.0.0
        InputInt: 965
        OutputInt: 6
        Packets: 1
        Octets: 52
        [Duration: 0.000000000 seconds]
            StartTime: 56784.255000000 seconds
            EndTime: 56784.255000000 seconds
        SrcPort: 22
        DstPort: 60484
        Padding: 00
        TCP Flags: 0x10
        Protocol: TCP (6)
        IP ToS: 0x08
        SrcAS: 0
        DstAS: 0
        SrcMask: 0 (prefix: 159.100.251.194/32)
        DstMask: 0 (prefix: 212.41.212.84/32)
        Padding: 0000

With pmacctd:

Cisco NetFlow/IPFIX
    Version: 5
    Count: 30
    SysUptime: 333.920000000 seconds
    Timestamp: Dec 14, 2016 08:53:01.000213000 CET
    FlowSequence: 70
    EngineType: RP (0)
    EngineId: 0
    00.. .... .... .... = SamplingMode: No sampling mode configured (0)
    ..00 0000 0000 0000 = SampleRate: 0
    pdu 1/30
        SrcAddr: x.x.x.x
        DstAddr: y.y.y.y
        NextHop: 0.0.0.0
        InputInt: 0
        OutputInt: 0
        Packets: 27
        Octets: 3905
        [Duration: 18.579000000 seconds]
            StartTime: 137.086000000 seconds
            EndTime: 155.665000000 seconds
        SrcPort: 22
        DstPort: 6273
        Padding: 00
        TCP Flags: 0x1f
        Protocol: TCP (6)
        IP ToS: 0x00
        SrcAS: 0
        DstAS: 0
        SrcMask: 0 (prefix: 159.100.251.236/32)
        DstMask: 0 (prefix: 218.87.109.151/32)
        Padding: 0000
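For context, classic flow export folds packets sharing a 5-tuple into a single record with cumulative counters and first/last timestamps; when each packet becomes its own PDU, start equals end and the packet count stays near 1, as in the uacctd capture above. A toy aggregator showing the expected folding (illustrative only, not pmacct's actual code path):

```python
def aggregate(packets):
    """Fold packets into flows keyed by the 5-tuple.

    packets: iterable of (src, dst, sport, dport, proto, ts, length).
    """
    flows = {}
    for src, dst, sport, dport, proto, ts, length in packets:
        key = (src, dst, sport, dport, proto)
        f = flows.setdefault(key, {"packets": 0, "bytes": 0, "start": ts, "end": ts})
        f["packets"] += 1
        f["bytes"] += length
        f["start"] = min(f["start"], ts)
        f["end"] = max(f["end"], ts)
    return flows

flows = aggregate([
    ("10.0.0.1", "10.0.0.2", 22, 60484, 6, 100.0, 52),
    ("10.0.0.1", "10.0.0.2", 22, 60484, 6, 118.5, 1500),
])
rec = flows[("10.0.0.1", "10.0.0.2", 22, 60484, 6)]
print(rec["packets"], rec["bytes"], rec["end"] - rec["start"])  # → 2 1552 18.5
```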

Next-hop inconsistency between advertised routes and the BGP table dumped to a file.

Hi again :),

I noticed some of the dumped BGP routes have inconsistencies in the next-hop attribute: as you can see below, we are announcing one next hop but see another in the dumped file.

Printout from the advertised routes:

  • 10.20.255.8:1:10.20.5.0/24 (2 entries, 1 announced)
    BGP group IBGP_pmacct type Internal
    Route Distinguisher: 10.20.255.8:1
    VPN Label: 16
    Nexthop: 10.20.255.8
    Localpref: 100
    AS path: [64021] I
    Communities: target:64021:1
    Cluster ID: 10.20.255.1
    Originator ID: 10.20.255.8

  • 10.20.255.8:8/:10.7.9.128/25 (2 entries, 1 announced)
    BGP group IBGP_pmacct type Internal
    Route Distinguisher: 10.20.255.8:8
    VPN Label: 19
    Nexthop: 10.20.255.8
    Localpref: 100
    AS path: [64021] I
    Communities: 64021:8 target:64021:8
    Cluster ID: 10.20.255.1
    Originator ID: 10.20.255.8

{"timestamp": "2016-12-10 11:15:00", "peer_ip_src": "10.20.1.114", "event_type": "dump", "ip_prefix": "10.20.5.0/24", "bgp_nexthop": "10.20.255.4", "as_path": "", "ecomms": "RT:64021:1", "origin": 0, "local_pref": 100, "rd": "1:10.20.255.8:1"}

{"timestamp": "2016-12-10 11:15:00", "peer_ip_src": "10.20.1.114", "event_type": "dump", "ip_prefix": "10.7.9.0/25", "bgp_nexthop": "10.20.255.8", "as_path": "", "comms": "64021:8", "ecomms": "RT:64021:8 ", "origin": 0, "local_pref": 100, "rd": "1:10.20.255.8:8"}
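To quantify how widespread the mismatch is, the dumped JSON can be diffed against the expected next hops mechanically. A small sketch using the field names from the sample records above:

```python
import json

def nexthop_mismatches(dump_lines, expected):
    """expected: {ip_prefix: bgp_nexthop}; return records whose next hop differs."""
    out = []
    for line in dump_lines:
        rec = json.loads(line)
        want = expected.get(rec.get("ip_prefix"))
        if want is not None and rec.get("bgp_nexthop") != want:
            out.append((rec["ip_prefix"], want, rec["bgp_nexthop"]))
    return out

# The first sample record above, trimmed to the relevant fields
dump = ['{"ip_prefix": "10.20.5.0/24", "bgp_nexthop": "10.20.255.4"}']
print(nexthop_mismatches(dump, {"10.20.5.0/24": "10.20.255.8"}))
```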

Broken SQL reference schemas

The reference schemas on 1.6 appear to be broken. I was seeing the following on 1.6 with a version 9 table config:
nfacctd[2966]: ERROR ( sir/mysql ): Unknown column 'net_dst' in 'where clause'

So after a little digging, I've turned off tmp_net_own_field, and now it is failing with:
ERROR ( sir/mysql ): Unknown column 'mask_dst' in 'where clause'

It seems like the expected schema changed in 1.6 but the reference schemas were never updated.

Additionally, from an operator standpoint it would be helpful if the reference schemas were idempotent, i.e.:

create database if not exists pmacct;
use pmacct;

create table if not exists acct_v9 (

(no drop database/table statements)
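The requested idempotency can be demonstrated with any engine supporting `IF NOT EXISTS`; here with Python's bundled sqlite3 purely for illustration (the column set is invented, not the pmacct v9 reference schema):

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS acct_v9 (
    ip_src   TEXT,
    ip_dst   TEXT,
    packets  INTEGER,
    bytes    INTEGER
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.executescript(DDL)  # safe to run twice: no DROP, no error
rows = conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE name = 'acct_v9'"
).fetchone()
print(rows[0])  # → 1
```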

amqp_avro_schema_routing_key appears not to be working

Version:

pmacct, pmacct client 1.6.1 (20161001-00+c5)
 '--enable-rabbitmq' '--enable-jansson' '--enable-geoip' '--enable-avro'

For suggestions, critics, bugs, contact me: Paolo Lucente <[email protected]>.

When amqp_avro_schema_routing_key is set, I get the following error:
Unknown key: amqp_avro_schema_routing_key. Ignored.

Is this known?

Tag releases

It would be cool to tag releases, e.g.:

git tag 1.6.0 <commit>

:)

Not all prefixes in the BGP table are dumped to a file.

Hi,

I'm dumping the BGP table every 3 minutes and I realized we are missing some prefixes in the dumped file.
We are currently advertising 47761 prefixes, but not all of them get dumped. Comparing the file size with the number of prefixes we send, it looks like we are missing a lot of routes in the BGP file.

Number of prefixes sent:
RIB State: VPN restart is complete
Advertised prefixes: 47761

Number of lines in the BGP file:
[root@linux-9dc70 pmacct]# wc -l bgp-10_197_1_114-2016_12_05T16_21_00.txt
38762 bgp-10_197_1_114-2016_12_05T16_21_00.txt

Here is the config we are using today:
[root@linux-9dc70 etc]# cat pmacct.conf

interface: eno33559296
bgp_daemon: true
bgp_daemon_ip: 10.17.1.18
bgp_agent_map: /opt/pmacct/peers.map

bgp_table_dump_file: /opt/pmacct/log/bgp-$peer_src_ip-%Y_%m_%dT%H_%M_%S.txt
bgp_table_dump_refresh_time: 180

Printout from the router:
user1@hostname-re0> show bgp neighbor 10.17.1.18
Peer: 10.17.1.18+179 AS 65001 Local: 10.17.1.14+50967 AS 65001
Description: pmacct
Type: Internal State: Established (route reflector client)Flags:
Last State: EstabSync Last Event: RecvKeepAlive
Last Error: Hold Timer Expired Error
Export: [ BGP_EXPORT ] Import: [ BGP_IMPORT ]
Options:
Options:
Address families configured: inet-vpn-unicast inet6-vpn-unicast
Holdtime: 90 Preference: 170
Number of flaps: 30
Last flap event: Closed
Error: 'Hold Timer Expired Error' Sent: 1 Recv: 0
Peer ID: 10.17.1.18 Local ID: 10.17.25.1 Active Holdtime: 90
Keepalive Interval: 30 Group index: 2 Peer index: 0
BFD: disabled, down
NLRI for restart configured on peer: inet-vpn-unicast inet6-vpn-unicast
NLRI advertised by peer: inet-vpn-unicast inet6-vpn-unicast
NLRI for this session: inet-vpn-unicast inet6-vpn-unicast
Peer does not support Refresh capability
Stale routes from peer are kept for: 300
Peer does not support Restarter functionality
Peer does not support Receiver functionality
Peer supports 4 byte AS extension (peer-as 65001
Peer does not support Addpath
Table bgp.l3vpn.0 Bit: 10002
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: in sync
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Advertised prefixes: 47761
Table bgp.l3vpn-inet6.0 Bit: 20002
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: in sync
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Advertised prefixes: 5022
Table VRF1.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF2.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF3.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF4.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF5.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF6.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF7.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF8.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF9.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF10.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF11.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF12.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF13.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF14.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF15.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF16.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF17.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF18.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF19.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF20.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Last traffic (seconds): Received 26 Sent 1 Checked 24
Input messages: Total 11941 Updates 0 Refreshes 0 Octets 226879
Output messages: Total 107333 Updates 95391 Refreshes 0 Octets 18099491
Output Queue[0]: 0
Output Queue[1]: 0
Output Queue[2]: 0
Output Queue[3]: 0
Output Queue[4]: 0
Output Queue[5]: 0
Output Queue[6]: 0
Output Queue[7]: 0
Output Queue[8]: 0
Output Queue[9]: 0
Output Queue[10]: 0
Output Queue[11]: 0
Output Queue[12]: 0
Output Queue[13]: 0
Output Queue[14]: 0
Output Queue[15]: 0
Output Queue[16]: 0
Output Queue[17]: 0
Output Queue[26]: 0
Output Queue[27]: 0
Output Queue[29]: 0
Output Queue[30]: 0

postgresql support - compilation error

Can you please help me with the following problem?

CFLAGS='-I/usr/pgsql-9.4/include/' LDFLAGS='-L/usr/pgsql-9.4/lib/' ./configure --enable-pgsql
gmake
  CC     pmacctd.o
  CCLD   pmacctd
./.libs/libdaemons.a(libdaemons_la-pgsql_plugin.o): In function `PG_create_dyn_table':
pgsql_plugin.c:(.text+0x664): undefined reference to `PQexec'
pgsql_plugin.c:(.text+0x66f): undefined reference to `PQresultStatus'
pgsql_plugin.c:(.text+0x67c): undefined reference to `PQresultStatus'
pgsql_plugin.c:(.text+0x689): undefined reference to `PQresultErrorMessage'
./.libs/libdaemons.a(libdaemons_la-pgsql_plugin.o): In function `PG_DB_Close':
pgsql_plugin.c:(.text+0x734): undefined reference to `PQfinish'
./.libs/libdaemons.a(libdaemons_la-pgsql_plugin.o): In function `PG_Lock':
pgsql_plugin.c:(.text+0x78b): undefined reference to `PQexec'
pgsql_plugin.c:(.text+0x796): undefined reference to `PQresultStatus'
pgsql_plugin.c:(.text+0x7a3): undefined reference to `PQresultErrorMessage'
pgsql_plugin.c:(.text+0x7bf): undefined reference to `PQclear'
pgsql_plugin.c:(.text+0x7e5): undefined reference to `PQexec'
pgsql_plugin.c:(.text+0x7f0): undefined reference to `PQresultStatus'
pgsql_plugin.c:(.text+0x7fd): undefined reference to `PQresultErrorMessage'
./.libs/libdaemons.a(libdaemons_la-pgsql_plugin.o): In function `PG_DB_Connect':
pgsql_plugin.c:(.text+0x89d): undefined reference to `PQconnectdb'
pgsql_plugin.c:(.text+0x8a8): undefined reference to `PQstatus'
./.libs/libdaemons.a(libdaemons_la-pgsql_plugin.o): In function `PG_cache_purge':
pgsql_plugin.c:(.text+0x1bb1): undefined reference to `PQexec'
pgsql_plugin.c:(.text+0x1bbc): undefined reference to `PQresultStatus'
pgsql_plugin.c:(.text+0x1bdf): undefined reference to `PQclear'
pgsql_plugin.c:(.text+0x1c28): undefined reference to `PQexec'
pgsql_plugin.c:(.text+0x1c33): undefined reference to `PQresultStatus'
pgsql_plugin.c:(.text+0x1c4c): undefined reference to `PQclear'
pgsql_plugin.c:(.text+0x2295): undefined reference to `PQputCopyEnd'
pgsql_plugin.c:(.text+0x22d6): undefined reference to `PQputCopyEnd'
./.libs/libdaemons.a(libdaemons_la-pgsql_plugin.o): In function `PG_cache_dbop':

Tag releases

It would help if releases were tagged.

By looking at the history of ChangeLog, I could get the most recent ones:

git tag v1.6.1 7702361b938344e56ec1fc2db83f9957c0d91c61
git tag v1.6.0 a86c805a3838dfd158d546c358e1fb928bdd3bf2
git tag v1.5.3 2eb7432f4a9e99f863a409ee35c0b85f8c4d94b0~1
git tag v1.5.2 2021a8849fe6366c7bc741bc0a4ecad6df8c0937
git tag v1.5.1 181b22d734adf32dcdb92404a90bdf98ce82d8c4~1
git tag v1.5.0 7815a7f9f95c7a676b2fbeac501a0c6941cdc574
git tag v0.14.3 e5df23390ab975937c1f84f4f6b87172d2c3cc88~1
git tag v0.14.2 d0391bfa5469fbb43ef9ebf14fd6c07464a889e2~1
git tag v0.14.1 9a83d5985b89a7a2746975f1cd203002b2827cb7
git tag v0.14.0 1f9d84bb9cd19aae9fc2fe0a2cfcfae64fc05925

I am using ~1 to refer to the previous commit when the current commit both sets the date for the new version and bumps the number to the next version. I suppose you generate the tarball between the two steps.

make fails for 1.6.1 and master branch

Hello,
I have a problem compiling pmacct 1.6.1 or the master branch (= 1.6.2):

sh automake.sh and ./configure are OK, but "make" returns this error:

gmake[2]: Entering directory `/usr/local/src/pmacct-1.6.1/src'
  CC     libdaemons_la-nl.lo
In file included from bgp/bgp.h:24,
                 from nl.c:37:
bgp/bgp_table.h:33: error: redefinition of typedef ‘afi_t’
isis/isis.h:34: note: previous declaration of ‘afi_t’ was here
bgp/bgp_table.h:34: error: redefinition of typedef ‘safi_t’
isis/isis.h:35: note: previous declaration of ‘safi_t’ was here
gmake[2]: *** [libdaemons_la-nl.lo] Error 1
gmake[2]: Leaving directory `/usr/local/src/pmacct-1.6.1/src'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/pmacct-1.6.1/src'
make: *** [all-recursive] Error 1

Any tips?
Regards,
Cédric

Can I use pmacct to measure consumption per program and month?

Hi,

I'm evaluating pmacct.

It's not super clear to me what its precise set of functionalities is. Maybe some examples, use cases or screenshots would help.

Anyway, I'm looking for a permanently running monitor which can emit a monthly (or daily, etc.) bandwidth report, in a per-program manner.

Example desired output:

Bandwidth consumption: last 30 days
==============
Program     Downloaded   Uploaded
/usr/bin/ssh  30MB       100MB
/usr/bin/java 9000MB     3000MB

I actually don't mind the output format (plain text or store to DB, etc), but those are the key metrics for me.

Can pmacct gather this data? So far I haven't found any such monitor, so I suspect this may not be possible under Linux.

Any help appreciated!

Thanks - Victor
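A note of caution: pmacct aggregates on network primitives (hosts, ports, ASNs, interfaces), not on local processes, so per-program totals would need socket-to-process correlation from some other source. That caveat aside, formatting the desired report from hypothetical per-program byte counters is straightforward:

```python
def report(counters):
    """counters: {program: (downloaded_bytes, uploaded_bytes)} — hypothetical input."""
    lines = ["Program        Downloaded  Uploaded"]
    for prog, (down, up) in sorted(counters.items()):
        lines.append(f"{prog:<14} {down // 2**20:>7}MB {up // 2**20:>6}MB")
    return "\n".join(lines)

print(report({"/usr/bin/ssh": (30 * 2**20, 100 * 2**20)}))
```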

SFLFLOW_EX_TAG, CLASS Documentation

I'm in the process of finalizing code for an sFlow collector in the Manito Networks Flow Analyzer project, and I'm seeing some records that I can't figure out how to parse: Enterprise 8800, Format 2. These come from VyOS and Ubiquiti devices, and there's nothing in their documentation about it. sFlow code comments mention that 8800, 2 comes from pmacct, and that's it. The folks on the sFlow mailing list directed me to you. I've dug through the pmacct code and found SFLFLOW_EX_TAG, SFLFLOW_EX_CLASS, etc., but can't find a structure definition that shows how to parse them.

I've unpacked an 8800, 2 record and gotten "4", but then the Python XDR unpacker errors out, indicating that unpacked data still remains; I have no way of knowing what it is. I assume "4" corresponds to a pre-defined value somewhere.

Can you point me to the structure documentation for records sent by enterprise 8800? If the documentation hasn't been written yet, I'd be happy to do it. Thanks!

My previous conversation with the sFlow folks: https://groups.google.com/forum/#!topic/sflow/nyT4KcnO6DM
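Without a structure definition, the record can at least be dumped as big-endian 32-bit words, since XDR encodes scalars on 4-byte boundaries. This is a decoding aid only; the field meanings for enterprise 8800, format 2 remain undocumented here:

```python
import struct

def xdr_words(record: bytes):
    """Split an XDR-encoded opaque record into unsigned 32-bit big-endian words."""
    if len(record) % 4:
        raise ValueError("XDR data is always 4-byte aligned")
    return list(struct.unpack(f"!{len(record) // 4}I", record))

# e.g. a record whose first word is 4, as described above
print(xdr_words(struct.pack("!3I", 4, 0xDEADBEEF, 7)))  # → [4, 3735928559, 7]
```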

Slow to send sFlow counters to Kafka

Hiya,

I'm attempting to send sFlow counters at 1-second intervals to pmacct, and have pmacct forward these to Kafka.

I have pmacct (sfacctd) successfully receiving the sFlow data and sending data to Kafka. However, sending each counter to Kafka takes a long time, and while the counters are being sent to Kafka, any incoming sFlow packets are not captured.

i.e.

Jan 10 10:07:48 INFO ( default/core ): sFlow Accounting Daemon, sfacctd 1.6.1 (20161001-00+c5)
Jan 10 10:07:48 INFO ( default/core ):  '--enable-jansson' '--enable-rabbitmq' '--enable-kafka' '--enable-ipv6' '--enable-plabel' '--enable-64bit' '--enable-threads' '--prefix=/opt/pmacct'
Jan 10 10:07:48 INFO ( default/core ): Reading configuration file '/etc/pmacct/sfacctd-23503.conf'.
Jan 10 10:07:48 WARN ( default/kafka ): defaulting to SRC HOST aggregation.
Jan 10 10:07:48 INFO ( default/kafka ): plugin_pipe_size=9096000 bytes plugin_buffer_size=4096 bytes
Jan 10 10:07:48 INFO ( default/kafka ): ctrl channel: obtained=212992 bytes target=17760 bytes
Jan 10 10:07:48 INFO ( default/core ): waiting for sFlow data on :::23503
Jan 10 10:07:48 INFO ( default/kafka ): cache entries=16411 base cache memory=53434216 bytes
Jan 10 10:07:48 INFO ( default/core ): 3 brokers successfully added.
Jan 10 10:07:49 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283419]
Jan 10 10:07:49 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283420]
Jan 10 10:07:49 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283421]
Jan 10 10:07:49 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283422]
...
<snip> - lots more sflow packets
...
Jan 10 10:07:50 DEBUG ( default/core ): readv5CountersSample(): element tag 0:1.
Jan 10 10:07:51 DEBUG ( default/core ): Kafka message delivery successful (111 bytes): {"seq": 0, "timestamp": "2017-01-10 10:07:49.869972", "peer_src_ip": "<ipaddress>", "event_type": "log_init"}
Jan 10 10:07:51 DEBUG ( default/core ): readv5CountersSample(): element tag 0:2.
Jan 10 10:07:52 DEBUG ( default/core ): Kafka message delivery successful (665 bytes): {"seq": 1, "timestamp": "2017-01-10 10:07:50.395681", "peer_ip_src": "<ipaddress>", "event_type": "log", "source_id_in
dex": 1, "sflow_seq": 1471283580, "sflow_cnt_seq": 2022332, "sf_cnt_type": "sflow_cnt_generic", "ifIndex": 1, "ifType": 6, "ifSpeed": 1000000000, "ifDirection": 1, "ifStatus": 3, "ifInOctets": 15076004816257
, "ifInUcastPkts": 2506335772, "ifInMulticastPkts": 18978967, "ifInBroadcastPkts": 2041661, "ifInDiscards": 0, "ifInErrors": 0, "ifInUnknownProtos": 0, "ifOutOctets": 25695757687509, "ifOutUcastPkts": 279292
9131, "ifOutMulticastPkts": 19847164, "ifOutBroadcastPkts": 15577452, "ifOutDiscards": 0, "ifOutErrors": 0, "ifPromiscuousMode": 1}
Jan 10 10:07:52 DEBUG ( default/core ): readv5CountersSample(): element tag 0:1.
Jan 10 10:07:53 DEBUG ( default/core ): Kafka message delivery successful (651 bytes): {"seq": 2, "timestamp": "2017-01-10 10:07:50.395681", "peer_ip_src": "<ipaddress>", "event_type": "log", "source_id_in
dex": 1, "sflow_seq": 1471283580, "sflow_cnt_seq": 2022332, "sf_cnt_type": "sflow_cnt_ethernet", "dot3StatsAlignmentErrors": 0, "dot3StatsFCSErrors": 0, "dot3StatsSingleCollisionFrames": 0, "dot3StatsMultipl
eCollisionFrames": 0, "dot3StatsSQETestErrors": 0, "dot3StatsDeferredTransmissions": 0, "dot3StatsLateCollisions": 0, "dot3StatsExcessiveCollisions": 0, "dot3StatsInternalMacTransmitErrors": 0, "dot3StatsCar
rierSenseErrors": 0, "dot3StatsFrameTooLongs": 0, "dot3StatsInternalMacReceiveErrors": 0, "dot3StatsSymbolErrors": 0}
Jan 10 10:07:53 DEBUG ( default/core ): readv5CountersSample(): element tag 0:2.
Jan 10 10:07:54 DEBUG ( default/core ): Kafka message delivery successful (665 bytes): {"seq": 3, "timestamp": "2017-01-10 10:07:50.395681", "peer_ip_src": "<ipaddress>", "event_type": "log", "source_id_in
dex": 2, "sflow_seq": 1471283580, "sflow_cnt_seq": 2022332, "sf_cnt_type": "sflow_cnt_generic", "ifIndex": 2, "ifType": 6, "ifSpeed": 1000000000, "ifDirection": 1, "ifStatus": 3, "ifInOctets": 23306961007723
1, "ifInUcastPkts": 2013855039, "ifInMulticastPkts": 12462760, "ifInBroadcastPkts": 4173839, "ifInDiscards": 0, "ifInErrors": 0, "ifInUnknownProtos": 0, "ifOutOctets": 59364348042262, "ifOutUcastPkts": 23465
75762, "ifOutMulticastPkts": 37158770, "ifOutBroadcastPkts": 5023965, "ifOutDiscards": 0, "ifOutErrors": 0, "ifPromiscuousMode": 1}
Jan 10 10:07:54 DEBUG ( default/core ): readv5CountersSample(): element tag 0:1.
Jan 10 10:07:55 DEBUG ( default/core ): Kafka message delivery successful (651 bytes): {"seq": 4, "timestamp": "2017-01-10 10:07:50.395681", "peer_ip_src": "<ipaddress>", "event_type": "log", "source_id_in
dex": 2, "sflow_seq": 1471283580, "sflow_cnt_seq": 2022332, "sf_cnt_type": "sflow_cnt_ethernet", "dot3StatsAlignmentErrors": 0, "dot3StatsFCSErrors": 0, "dot3StatsSingleCollisionFrames": 0, "dot3StatsMultipl
eCollisionFrames": 0, "dot3StatsSQETestErrors": 0, "dot3StatsDeferredTransmissions": 0, "dot3StatsLateCollisions": 0, "dot3StatsExcessiveCollisions": 0, "dot3StatsInternalMacTransmitErrors": 0, "dot3StatsCar
rierSenseErrors": 0, "dot3StatsFrameTooLongs": 0, "dot3StatsInternalMacReceiveErrors": 0, "dot3StatsSymbolErrors": 0}
Jan 10 10:07:55 DEBUG ( default/core ): readv5CountersSample(): element tag 0:2.
Jan 10 10:07:56 DEBUG ( default/core ): Kafka message delivery successful (628 bytes): {"seq": 5, "timestamp": "2017-01-10 10:07:50.395681", "peer_ip_src": "<ipaddress>", "event_type": "log", "source_id_in
dex": 3, "sflow_seq": 1471283580, "sflow_cnt_seq": 2022332, "sf_cnt_type": "sflow_cnt_generic", "ifIndex": 3, "ifType": 6, "ifSpeed": 0, "ifDirection": 1, "ifStatus": 1, "ifInOctets": 120758166, "ifInUcastPk
ts": 416574, "ifInMulticastPkts": 0, "ifInBroadcastPkts": 1188, "ifInDiscards": 0, "ifInErrors": 0, "ifInUnknownProtos": 0, "ifOutOctets": 1052639508, "ifOutUcastPkts": 493524, "ifOutMulticastPkts": 10931783
, "ifOutBroadcastPkts": 4193566, "ifOutDiscards": 0, "ifOutErrors": 0, "ifPromiscuousMode": 1}
...
<snip> and so on, until
...
Jan 10 10:08:02 DEBUG ( default/core ): readv5CountersSample(): element tag 0:2.
Jan 10 10:08:02 DEBUG ( default/core ): Kafka message delivery successful (658 bytes): {"seq": 11, "timestamp": "2017-01-10 10:07:50.395681", "peer_ip_src": "<ipaddress>", "event_type": "log", "source_id_i
ndex": 6, "sflow_seq": 1471283580, "sflow_cnt_seq": 2022332, "sf_cnt_type": "sflow_cnt_generic", "ifIndex": 6, "ifType": 6, "ifSpeed": 1000000000, "ifDirection": 1, "ifStatus": 3, "ifInOctets": 144448666074,
 "ifInUcastPkts": 425681217, "ifInMulticastPkts": 244209791, "ifInBroadcastPkts": 1955105, "ifInDiscards": 0, "ifInErrors": 0, "ifInUnknownProtos": 0, "ifOutOctets": 66168815367, "ifOutUcastPkts": 441955780,
 "ifOutMulticastPkts": 9140271, "ifOutBroadcastPkts": 8642416, "ifOutDiscards": 0, "ifOutErrors": 0, "ifPromiscuousMode": 1}
Jan 10 10:08:02 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283585]
Jan 10 10:08:02 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283586]
Jan 10 10:08:02 DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:8888] version [5] seqno [1471283587]

If I write to a file instead of Kafka, everything gets collected and written out successfully, so this looks to be a problem specific to writing to Kafka.

Is there any other tuning/configuration available which could speed up writing to Kafka? My config looks like:

sfacctd_port: 23503
daemonize: false
plugin_pipe_size: 9096000
plugin_buffer_size: 4096

plugins: kafka

sfacctd_counter_kafka_broker_host: kafkaip1,kafkaip2,kafkaip3
sfacctd_counter_kafka_topic: pmacct-counters
sfacctd_counter_kafka_partition: -1

Thanks!

How to get the source IP address string before the INSERT statement in MySQL

Hi,

I have added a new column in MySQL as per my requirement.

I want to get the source IP address before the INSERT query is executed in MySQL.
Then I will look up the new column's value from the source IP address, and append this value to the INSERT statement.

How can I get the source IP address from the callback handlers?

Regards,
Mehul
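For what it's worth, one way to do this without touching pmacct's callback handlers at all is a BEFORE INSERT trigger on the accounting table; the sketch below assumes the stock schema's acct table and ip_src column, while the custom column my_label and the lookup table ip_labels are hypothetical names:

```sql
-- Hypothetical sketch: fill a custom column from ip_src at insert time.
DELIMITER //
CREATE TRIGGER acct_fill_label
BEFORE INSERT ON acct
FOR EACH ROW
BEGIN
  SET NEW.my_label = (SELECT label FROM ip_labels
                      WHERE ip = NEW.ip_src LIMIT 1);
END//
DELIMITER ;
```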

Examples of tracker in classification engine

Hi,

I have plugged in a shared object for classification of the RADIUS protocol and I am able to decode its stream.
Now I want to maintain some state according to the classified protocol stream.

I have studied the architecture of pmacct and I think the tracker will do the work I require, but I am not able to find any sample code for tracking.

Can anyone suggest some links?

Also, is there any other way to maintain state after classifying a protocol stream?

Regards,
Mehul

pmacct & as-stats

Greetings,

On a Gentoo (rolling/latest, 32-bit) box, I've set up a freshly git-compiled pmacct as a NetFlow exporter to feed as-stats for our private 10.0.0.0/8 IP/BGP/AS AWMN network.

I followed this guide.

My pmacctd-nfprobe-simple.conf

daemonize: false
promisc: true
interface: eth0
plugins: nfprobe
nfprobe_receiver: 127.0.0.1:9000
nfprobe_version: 9
nfacctd_net: bgp
nfacctd_as_new: bgp
nfprobe_peer_as: true
bgp_peer_src_as_type: bgp
bgp_src_as_path_type: bgp
bgp_src_std_comm_type: bgp
bgp_src_ext_comm_type: bgp
bgp_peer_src_as_map: /etc/pmacct/peers.map
bgp_daemon_pipe_size: 1310710
bgp_daemon: true
bgp_daemon_ip: 10.2.19.10
bgp_daemon_id: 10.2.19.10
bgp_agent_map: /etc/pmacct/agent_to_peer.map
bgp_daemon_port: 17917
bgp_daemon_max_peers: 2
bgp_daemon_msglog: false
bgp_follow_nexthop: 10.2.19.0/24, 10.2.146.0/24, 10.0.0.0/8
pcap_filter: net 10.0.0.0/8
aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, as_path, peer_src_as, peer_dst_as, proto

my /etc/pmacct/peers.map
id=bgp ip=10.2.146.10 in=100

my /etc/pmacct/agent_to_peer.map
bgp_ip=10.2.146.10 ip=10.0.0.0/8

pmacctd -d -f pmacctd-nfprobe-simple.conf

DEBUG: [pmacctd-nfprobe-simple.conf] plugin name/type: 'default'/'core'.
DEBUG: [pmacctd-nfprobe-simple.conf] plugin name/type: 'default'/'nfprobe'.
DEBUG: [pmacctd-nfprobe-simple.conf] daemonize:false
DEBUG: [pmacctd-nfprobe-simple.conf] promisc:true
DEBUG: [pmacctd-nfprobe-simple.conf] interface:eth0
DEBUG: [pmacctd-nfprobe-simple.conf] nfprobe_receiver:127.0.0.1:9000
DEBUG: [pmacctd-nfprobe-simple.conf] nfprobe_version:9
DEBUG: [pmacctd-nfprobe-simple.conf] nfacctd_net:bgp
DEBUG: [pmacctd-nfprobe-simple.conf] nfacctd_as_new:bgp
DEBUG: [pmacctd-nfprobe-simple.conf] nfprobe_peer_as:true
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_peer_src_as_type:bgp
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_src_as_path_type:bgp
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_src_std_comm_type:bgp
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_src_ext_comm_type:bgp
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_peer_src_as_map:/etc/pmacct/peers.map
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon_pipe_size:1310710
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon:true
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon_ip:10.2.19.10
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon_id:10.2.19.10
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_agent_map:/etc/pmacct/agent_to_peer.map
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon_port:17917
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon_max_peers:2
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_daemon_msglog:false
WARN: [pmacctd-nfprobe-simple.conf:39] Unknown key: bgp_daemon_msglog. Ignored.
DEBUG: [pmacctd-nfprobe-simple.conf] bgp_follow_nexthop:10.2.19.0/24, 10.2.146.0/24, 10.0.0.0/8
DEBUG: [pmacctd-nfprobe-simple.conf] pcap_filter:net 10.0.0.0/8
DEBUG: [pmacctd-nfprobe-simple.conf] debug:true
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 1.6.2-git (20170221-01)
INFO ( default/core ):
INFO ( default/core ): Reading configuration file '/etc/pmacctd/pmacctd-nfprobe-simple.conf'.
INFO ( default/nfprobe ): plugin_pipe_size=4096000 bytes plugin_buffer_size=220 bytes
INFO ( default/nfprobe ): ctrl channel: obtained=163840 bytes target=74472 bytes
INFO ( default/nfprobe ): NetFlow probe plugin is originally based on softflowd 0.9.7 software, Copyright 2002 Damien Miller [email protected] All rights reserved.
INFO ( default/nfprobe ): TCP timeout: 3600s
INFO ( default/nfprobe ): TCP post-RST timeout: 120s
INFO ( default/nfprobe ): TCP post-FIN timeout: 300s
INFO ( default/nfprobe ): UDP timeout: 300s
INFO ( default/nfprobe ): ICMP timeout: 300s
INFO ( default/nfprobe ): General timeout: 3600s
INFO ( default/nfprobe ): Maximum lifetime: 604800s
INFO ( default/nfprobe ): Expiry interval: 60s
INFO ( default/nfprobe ): Exporting flows to [127.0.0.1]:9000
INFO ( default/core ): link type is: 1
INFO ( default/core ): [/etc/pmacct/agent_to_peer.map] (re)loading map.
INFO ( default/core ): [/etc/pmacct/agent_to_peer.map] map successfully (re)loaded.
DEBUG ( default/core/BGP ): 1 thread(s) initialized
INFO ( default/core/BGP ): maximum BGP peers allowed: 2
INFO ( default/core/BGP ): bgp_daemon_pipe_size: obtained=327680 target=1310710.
INFO ( default/core/BGP ): waiting for BGP data on 10.2.19.10:17917
INFO ( default/core/BGP ): [10.2.146.10] BGP peers usage: 1/2
INFO ( default/core/BGP ): [10.2.146.10] Capability: MultiProtocol [1] AFI [1] SAFI [1]
INFO ( default/core/BGP ): [10.2.146.10] Capability: 4-bytes AS [41] ASN [22128]
INFO ( default/core/BGP ): [10.2.146.10] BGP_OPEN: Local AS: 22128 Remote AS: 22128 HoldTime: 180
DEBUG ( default/core/BGP ): [10.2.146.10] BGP_KEEPALIVE received
DEBUG ( default/core/BGP ): [10.2.146.10] BGP_KEEPALIVE sent
DEBUG ( default/core/BGP ): [10.2.146.10] BGP_KEEPALIVE received
DEBUG ( default/core/BGP ): [10.2.146.10] BGP_KEEPALIVE sent
DEBUG ( default/nfprobe ): ADD FLOW seq:1 [10.2.146.10]:41317 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:2 [10.2.146.10]:37399 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:3 [10.2.19.4]:50843 <> [192.168.1.69]:53 proto:17
DEBUG ( default/nfprobe ): ADD FLOW seq:4 [10.2.146.10]:46665 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:5 [10.2.146.10]:41227 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:6 [10.2.146.10]:39279 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:7 [10.2.146.10]:44941 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:8 [10.2.146.10]:39761 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:9 [10.2.146.10]:43403 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:10 [10.2.146.10]:43069 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:11 [10.2.146.10]:38175 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:12 [10.2.146.10]:38129 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:13 [10.2.146.10]:33419 <> [10.2.168.5]:179 proto:6
DEBUG ( default/core/BGP ): [10.2.146.10] BGP_KEEPALIVE received
DEBUG ( default/core/BGP ): [10.2.146.10] BGP_KEEPALIVE sent
DEBUG ( default/nfprobe ): ADD FLOW seq:14 [10.2.146.10]:34519 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:15 [10.2.146.10]:34677 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:16 [10.2.146.10]:35381 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:17 [10.2.146.10]:35919 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:18 [10.2.146.10]:40011 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:19 [10.2.146.10]:45617 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:20 [10.2.146.10]:45333 <> [10.74.80.1]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:21 [10.2.146.10]:33561 <> [10.2.168.5]:179 proto:6
DEBUG ( default/nfprobe ): ADD FLOW seq:22 [10.2.146.10]:34705 <> [10.74.80.1]:179 proto:6
^CWARN ( default/nfprobe ): Shutting down on user request.
DEBUG ( default/nfprobe ): Starting expiry scan: mode -1
DEBUG ( default/nfprobe ): Queuing flow seq:3 (0x927d628) for expiry
DEBUG ( default/nfprobe ): Queuing flow seq:1 (0x927d3c8) for expiry
DEBUG ( default/nfprobe ): Queuing flow seq:2 (0x927d4f8) for expiry
DEBUG ( default/nfprobe ): Queuing flow seq:4 (0x927d758) for expiry
DEBUG ( default/nfprobe ): Queuing flow seq:5 (0x927d888) for expiry
DEBUG ( default/nfprobe ): Queuing flow seq:6 (0x927d9b8) for expiry

netstat -nap | grep 9000
udp 0 0 127.0.0.1:52016 127.0.0.1:9000 ESTABLISHED 18010/pmacctd: Netf

tcpdump -n -i lo port 9000

11:30:25.323368 IP 127.0.0.1.60543 > 127.0.0.1.9000: UDP, length 504
11:30:25.323424 IP 127.0.0.1.60543 > 127.0.0.1.9000: UDP, length 480

There is no NetFlow data generated and/or fed to port 9000.

What am I missing?

Results exported via Kafka are different from using print method

Using "nfacctd -c timestamp_start, in_iface, out_iface -P print -l 50020" I get the correct (expected) results, which is 6000 packets/sec. However, if I use Kafka, the results are 1000-2000 packets/sec (analyzing the JSON messages produced by nfacctd). Any ideas why I get wrong results using Kafka?
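One thing to rule out (an assumption on my part, not a confirmed diagnosis) is that the two plugins are aggregating over different time windows; pinning both to the same refresh interval makes their outputs directly comparable. Both directives below are documented pmacct keys, and the 60s value is arbitrary:

```
print_refresh_time: 60
kafka_refresh_time: 60
```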

pmbgpd doesn't write pidfile

Neither via the command line nor via the config file.

I believe it is missing:
if (config.pidfile) write_pid_file(config.pidfile);

nfacctd segfaults in get_ipfix_vlen

Hello,

I have an issue with nfacctd 1.6.0, which segfaults frequently. My configuration is as follows:

CONFIG:

daemonize: true
syslog: daemon
interface: eth0
plugins: print
nfacctd_port: 5678
nfacctd_time_new: true
! nfacctd checks the sequence number of each Netflow packet unless this is true
nfacctd_disable_checks: false
!
aggregate: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, tcpflags, src_mac,dst_mac
print_output: json
print_output_file: /opt/pmacct/data/nfacctd-%Y%m%d_%H%M.json
print_latest_file: /opt/pmacct/data/nfacctd-latest.json
! flows are stored in a different file every hour...
print_history: 1h
print_history_roundoff: m
! flush the write buffer every 30 seconds
print_refresh_time: 30
print_output_file_append: true
!
pre_tag_map: /opt/pmacct/pretag.map

This machine runs CentOS 7 with the following rpms:

[mpenning@that-server pmacct]$ uname -a
Linux that-server.company.local 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[mpenning@that-server pmacct]$ sudo rpm -qa | grep -E "libpcap|jansson"
libpcap-1.5.3-8.el7.x86_64
jansson-devel-2.4-6.el7.x86_64
libpcap-devel-1.5.3-8.el7.x86_64
jansson-2.4-6.el7.x86_64

SEGFAULTS:

[mpenning@that-server data]$ dmesg -e |grep nfacctd
[Jan27 19:09] nfacctd[2274]: segfault at 7ffe0ae10c75 ip 000000000042487a sp 00007ffe0adf11f8 error 4 in nfacctd[400000+df000]
[Jan27 19:19] nfacctd[2381]: segfault at 7ffe88602dd7 ip 000000000042487a sp 00007ffe885e36c8 error 4 in nfacctd[400000+df000]
[Jan27 20:19] nfacctd[2416]: segfault at 7ffe74f6b7f1 ip 000000000042487a sp 00007ffe74f4d5d8 error 4 in nfacctd[400000+df000]
[Jan27 20:55] nfacctd[2564]: segfault at 7ffcd849b73f ip 000000000042487a sp 00007ffcd847cc08 error 4 in nfacctd[400000+df000]
[Jan27 23:16] nfacctd[2652]: segfault at 7ffc1c42d741 ip 000000000042487a sp 00007ffc1c40eec8 error 4 in nfacctd[400000+df000]
[Jan28 01:43] nfacctd[3218]: segfault at 7fff6059ce3f ip 000000000042487a sp 00007fff6057dbc8 error 4 in nfacctd[400000+df000]
[Jan28 02:51] nfacctd[3859]: segfault at 7ffccaa91ed9 ip 000000000042487a sp 00007ffccaa72218 error 4 in nfacctd[400000+df000]
[Jan28 03:25] nfacctd[4094]: segfault at 7ffccc5f12b6 ip 000000000042487a sp 00007ffccc5d1738 error 4 in nfacctd[400000+df000]
[Jan28 03:51] nfacctd[4208]: segfault at 7ffd86f221d3 ip 000000000042487a sp 00007ffd86f03f28 error 4 in nfacctd[400000+df000]
[Jan28 03:53] nfacctd[4264]: segfault at 7ffc20932165 ip 000000000042487a sp 00007ffc20913fa8 error 4 in nfacctd[400000+df000]
[Jan28 04:34] nfacctd[4270]: segfault at 7ffc5e7b742b ip 000000000042487a sp 00007ffc5e798bf8 error 4 in nfacctd[400000+df000]
[Jan28 06:15] nfacctd[4372]: segfault at 7fffd40851a8 ip 000000000042487a sp 00007fffd40659d8 error 4 in nfacctd[400000+df000]
[Jan28 09:50] nfacctd[4613]: segfault at 7ffd7ec48c22 ip 000000000042487a sp 00007ffd7ec2abb8 error 4 in nfacctd[400000+df000]
[Jan28 10:09] nfacctd[5130]: segfault at 7ffec8b2498f ip 000000000042487a sp 00007ffec8b05308 error 4 in nfacctd[400000+df000]
[Jan28 10:17] nfacctd[5188]: segfault at 7ffeb0888f76 ip 000000000042487a sp 00007ffeb086a408 error 4 in nfacctd[400000+df000]
[Jan28 11:48] nfacctd[5207]: segfault at 7ffc4d1952a8 ip 000000000042487a sp 00007ffc4d176fd8 error 4 in nfacctd[400000+df000]
[Jan28 13:58] nfacctd[5414]: segfault at 7fff59003ba3 ip 000000000042487a sp 00007fff58fe5658 error 4 in nfacctd[400000+df000]
[Jan28 13:59] nfacctd[5755]: segfault at 7ffc808635cc ip 000000000042487a sp 00007ffc808466b8 error 4 in nfacctd[400000+df000]
[Jan28 14:24] nfacctd[5761]: segfault at 7ffd4ff4179f ip 000000000042487a sp 00007ffd4ff21ff8 error 4 in nfacctd[400000+df000]
[Jan28 14:27] nfacctd[5833]: segfault at 7ffd24297c35 ip 000000000042487a sp 00007ffd24278568 error 4 in nfacctd[400000+df000]
[Jan28 15:35] nfacctd[5841]: segfault at 7ffe4c9aa57a ip 000000000042487a sp 00007ffe4c98cea8 error 4 in nfacctd[400000+df000]
[ +21.169715] nfacctd[6027]: segfault at 7fffc8eb3df8 ip 000000000042487a sp 00007fffc8e962b8 error 4 in nfacctd[400000+df000]
[Jan28 16:44] nfacctd[6031]: segfault at 7ffde66a37e6 ip 000000000042487a sp 00007ffde6685868 error 4 in nfacctd[400000+df000]
[Jan28 17:38] nfacctd[6264]: segfault at 7ffc022de27a ip 000000000042487a sp 00007ffc022bfff8 error 4 in nfacctd[400000+df000]
[Jan28 17:59] nfacctd[6399]: segfault at 7ffec4e15bde ip 000000000042487a sp 00007ffec4df7478 error 4 in nfacctd[400000+df000]
[Jan28 18:09] nfacctd[6462]: segfault at 7fffa232e09b ip 000000000042487a sp 00007fffa2310b88 error 4 in nfacctd[400000+df000]
[Jan28 18:20] nfacctd[6501]: segfault at 7ffdbfced1f7 ip 000000000042487a sp 00007ffdbfccf7d8 error 4 in nfacctd[400000+df000]
[Jan28 21:09] nfacctd[6543]: segfault at 7fff7f5833cf ip 000000000042487a sp 00007fff7f565348 error 4 in nfacctd[400000+df000]
[Jan28 22:11] nfacctd[7032]: segfault at 7ffc11f7a295 ip 000000000042487a sp 00007ffc11f5c908 error 4 in nfacctd[400000+df000]
[Jan28 22:37] nfacctd[7187]: segfault at 7fffd2b6c384 ip 000000000042487a sp 00007fffd2b4d7a8 error 4 in nfacctd[400000+df000]
[Jan28 23:04] nfacctd[7262]: segfault at 7ffdb018283f ip 000000000042487a sp 00007ffdb01639c8 error 4 in nfacctd[400000+df000]
[Jan28 23:06] nfacctd[7338]: segfault at 7fff5095c83e ip 000000000042487a sp 00007fff5093ecb8 error 4 in nfacctd[400000+df000]
[Jan28 23:07] nfacctd[7345]: segfault at 7ffdcc5486bf ip 000000000042487a sp 00007ffdcc52a538 error 4 in nfacctd[400000+df000]
[Jan28 23:25] nfacctd[7350]: segfault at 7ffc3dc9a5de ip 000000000042487a sp 00007ffc3dc7bbf8 error 4 in nfacctd[400000+df000]
[Jan28 23:40] nfacctd[7392]: segfault at 7ffdcedbb025 ip 000000000042487a sp 00007ffdced9b3b8 error 4 in nfacctd[400000+df000]
[Jan29 01:30] nfacctd[7426]: segfault at 7fff0d3e626d ip 000000000042487a sp 00007fff0d3c7388 error 4 in nfacctd[400000+df000]
[Jan29 01:56] nfacctd[7691]: segfault at 7ffde3272c93 ip 000000000042487a sp 00007ffde3253528 error 4 in nfacctd[400000+df000]
[Jan29 03:22] nfacctd[7748]: segfault at 7ffddacbd35f ip 000000000042487a sp 00007ffddac9fd38 error 4 in nfacctd[400000+df000]
[Jan29 03:26] nfacctd[7986]: segfault at 7ffc4396976a ip 000000000042487a sp 00007ffc4394a768 error 4 in nfacctd[400000+df000]
[Jan29 04:25] nfacctd[7999]: segfault at 7ffcaa36ec5a ip 000000000042487a sp 00007ffcaa34f458 error 4 in nfacctd[400000+df000]
[Jan29 06:17] nfacctd[8138]: segfault at 7ffcb936aa1c ip 000000000042487a sp 00007ffcb934cb08 error 4 in nfacctd[400000+df000]
[Jan29 07:02] nfacctd[8433]: segfault at 7ffefed0fc18 ip 000000000042487a sp 00007ffefecf2148 error 4 in nfacctd[400000+df000]
[Jan29 07:56] nfacctd[8546]: segfault at 7ffce8be5fe5 ip 000000000042487a sp 00007ffce8bc8688 error 4 in nfacctd[400000+df000]
[Jan29 08:39] nfacctd[8663]: segfault at 7fff1eb380a0 ip 000000000042487a sp 00007fff1eb1a368 error 4 in nfacctd[400000+df000]
[Jan29 08:58] nfacctd[8781]: segfault at 7ffebfb81bb6 ip 000000000042487a sp 00007ffebfb63738 error 4 in nfacctd[400000+df000]
[Jan29 10:25] nfacctd[8841]: segfault at 7fff68985d7e ip 000000000042487a sp 00007fff68967018 error 4 in nfacctd[400000+df000]
[Jan29 10:39] nfacctd[9039]: segfault at 7ffd2a45b00c ip 000000000042487a sp 00007ffd2a43b428 error 4 in nfacctd[400000+df000]
[Jan29 16:33] nfacctd[9074]: segfault at 7ffe08240433 ip 000000000042487a sp 00007ffe082218f8 error 4 in nfacctd[400000+df000]
[Jan29 16:40] nfacctd[9983]: segfault at 7fff9c896c58 ip 000000000042487a sp 00007fff9c878098 error 4 in nfacctd[400000+df000]
[Jan29 16:56] nfacctd[10000]: segfault at 7fff7173243f ip 000000000042487a sp 00007fff71712cc8 error 4 in nfacctd[400000+df000]
[Jan29 17:35] nfacctd[10038]: segfault at 7ffd95d65236 ip 000000000042487a sp 00007ffd95d472b8 error 4 in nfacctd[400000+df000]
[Jan29 18:34] nfacctd[10135]: segfault at 7fff67003a62 ip 000000000042487a sp 00007fff66fe5188 error 4 in nfacctd[400000+df000]
[Jan29 19:29] nfacctd[10285]: segfault at 7ffd89ba20ea ip 000000000042487a sp 00007ffd89b845a8 error 4 in nfacctd[400000+df000]
[Jan29 19:31] nfacctd[10442]: segfault at 7ffd172ac3d5 ip 000000000042487a sp 00007ffd1728d338 error 4 in nfacctd[400000+df000]
[Jan29 19:38] nfacctd[10450]: segfault at 7ffc0299fc86 ip 000000000042487a sp 00007ffc029809d8 error 4 in nfacctd[400000+df000]
[Jan29 20:15] nfacctd[10468]: segfault at 7fffb7f72f6a ip 000000000042487a sp 00007fffb7f53b28 error 4 in nfacctd[400000+df000]
[Jan29 20:32] nfacctd[10558]: segfault at 7ffe55db31eb ip 000000000042487a sp 00007ffe55d95b68 error 4 in nfacctd[400000+df000]
[Jan29 20:34] nfacctd[10597]: segfault at 7ffcb33e8563 ip 000000000042487a sp 00007ffcb33c9918 error 4 in nfacctd[400000+df000]
[Jan29 21:26] nfacctd[10605]: segfault at 7ffeed108716 ip 000000000042487a sp 00007ffeed0eb248 error 4 in nfacctd[400000+df000]
[Jan29 22:03] nfacctd[10739]: segfault at 7ffffe82ce0a ip 000000000042487a sp 00007ffffe80db68 error 4 in nfacctd[400000+df000]
[Jan29 22:19] nfacctd[10837]: segfault at 7ffed2daf12d ip 000000000042487a sp 00007ffed2d8f7a8 error 4 in nfacctd[400000+df000]
[Jan30 00:11] nfacctd[10872]: segfault at 7ffd1219893d ip 000000000042487a sp 00007ffd1217aae8 error 4 in nfacctd[400000+df000]
[Jan30 00:32] nfacctd[11141]: segfault at 7ffd303910e1 ip 000000000042487a sp 00007ffd30371498 error 4 in nfacctd[400000+df000]
[Jan30 01:39] nfacctd[11203]: segfault at 7ffe597b04a7 ip 000000000042487a sp 00007ffe59792648 error 4 in nfacctd[400000+df000]
[Jan30 03:36] nfacctd[11398]: segfault at 7ffc5f0f9902 ip 000000000042487a sp 00007ffc5f0da3e8 error 4 in nfacctd[400000+df000]
[Jan30 03:56] nfacctd[11721]: segfault at 7ffed0290cef ip 000000000042487a sp 00007ffed0273918 error 4 in nfacctd[400000+df000]
[Jan30 04:20] nfacctd[11786]: segfault at 7ffcf66a2f50 ip 000000000042487a sp 00007ffcf6684058 error 4 in nfacctd[400000+df000]
[Jan30 04:46] nfacctd[11853]: segfault at 7ffde674d4b9 ip 000000000042487a sp 00007ffde672e318 error 4 in nfacctd[400000+df000]
[Jan30 04:51] nfacctd[11911]: segfault at 7fff0d5ca34e ip 000000000042487a sp 00007fff0d5acca8 error 4 in nfacctd[400000+df000]
[Jan30 06:04] nfacctd[11923]: segfault at 7ffea9d513f1 ip 000000000042487a sp 00007ffea9d32468 error 4 in nfacctd[400000+df000]
[Feb 7 08:59] nfacctd[11130]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:00] nfacctd[11131]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.193729] nfacctd[11147]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:01] nfacctd[11148]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.671122] nfacctd[11149]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:02] nfacctd[11150]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.199901] nfacctd[11151]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:03] nfacctd[11153]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.742845] nfacctd[11154]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:04] nfacctd[11156]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.640117] nfacctd[11168]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:05] nfacctd[11169]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.859184] nfacctd[11170]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:06] nfacctd[11171]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.940520] nfacctd[11172]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:07] nfacctd[11173]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.982958] nfacctd[11174]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:08] nfacctd[11175]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.206963] nfacctd[11176]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:09] nfacctd[11177]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.182197] nfacctd[11178]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:10] nfacctd[11179]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.220034] nfacctd[11180]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:11] nfacctd[11182]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.092931] nfacctd[11183]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:12] nfacctd[11184]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:13] nfacctd[11186]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.102824] nfacctd[11187]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:14] nfacctd[11189]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.163839] nfacctd[11190]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:15] nfacctd[11191]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.169042] nfacctd[11192]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:16] nfacctd[11193]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.202644] nfacctd[11194]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:17] nfacctd[11195]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.208013] nfacctd[11196]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:18] nfacctd[11197]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.194564] nfacctd[11198]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:19] nfacctd[11199]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.190151] nfacctd[11200]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:20] nfacctd[11201]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.929310] nfacctd[11202]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:21] nfacctd[11204]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.998629] nfacctd[11205]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:22] nfacctd[11206]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.209384] nfacctd[11207]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:23] nfacctd[11208]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.209872] nfacctd[11210]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:24] nfacctd[11211]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.246587] nfacctd[11212]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:25] nfacctd[11213]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.286707] nfacctd[11214]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:26] nfacctd[11215]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.193716] nfacctd[11216]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:27] nfacctd[11217]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.460367] nfacctd[11218]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:28] nfacctd[11219]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.273432] nfacctd[11220]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:29] nfacctd[11221]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.421717] nfacctd[11222]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:30] nfacctd[11223]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.272611] nfacctd[11224]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:31] nfacctd[11225]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.300394] nfacctd[11226]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:32] nfacctd[11229]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.246894] nfacctd[11230]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:33] nfacctd[11231]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.354510] nfacctd[11232]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:34] nfacctd[11233]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.235925] nfacctd[11234]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:35] nfacctd[11235]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.228944] nfacctd[11236]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:36] nfacctd[11237]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.241627] nfacctd[11238]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:37] nfacctd[11239]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.216629] nfacctd[11240]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:38] nfacctd[11241]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.075345] nfacctd[11242]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:39] nfacctd[11243]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.105328] nfacctd[11244]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:40] nfacctd[11245]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.006159] nfacctd[11246]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:41] nfacctd[11247]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.261707] nfacctd[11248]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:42] nfacctd[11249]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.947082] nfacctd[11250]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:43] nfacctd[11251]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.792465] nfacctd[11252]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:44] nfacctd[11253]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.229061] nfacctd[11254]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:45] nfacctd[11255]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.994791] nfacctd[11256]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:46] nfacctd[11257]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.216666] nfacctd[11258]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:47] nfacctd[11259]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.221619] nfacctd[11260]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:48] nfacctd[11261]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.213583] nfacctd[11263]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:49] nfacctd[11264]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.274975] nfacctd[11265]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:50] nfacctd[11266]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.184051] nfacctd[11267]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:51] nfacctd[11268]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.315544] nfacctd[11269]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:52] nfacctd[11270]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.213199] nfacctd[11271]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:53] nfacctd[11272]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.248600] nfacctd[11273]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:54] nfacctd[11275]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.960985] nfacctd[11276]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:55] nfacctd[11277]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.284766] nfacctd[11278]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:56] nfacctd[11279]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.295094] nfacctd[11280]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:57] nfacctd[11282]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +30.230835] nfacctd[11283]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:58] nfacctd[11284]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[ +29.178239] nfacctd[11285]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]
[Feb 7 09:59] nfacctd[11286]: segfault at 0 ip 00007fbe4865a71d sp 00007ffcd7376b70 error 4 in libc-2.17.so[7fbe485ed000+1b6000]

pmacct doesn't export NetFlow data even though it says it is sending

I configured it following QUICKSTART, and I get this in the log:

DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 480, template ID = 1024, total len = 460, # elements = 8
DEBUG ( default/nfprobe ): Sending NetFlow v9/IPFIX packet: len = 480
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 138, template ID = 1024, total len = 118, # elements = 2
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 252, template ID = 1024, total len = 232, # elements = 4
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 366, template ID = 1024, total len = 346, # elements = 6
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 480, template ID = 1024, total len = 460, # elements = 8
DEBUG ( default/nfprobe ): Sending NetFlow v9/IPFIX packet: len = 480
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 138, template ID = 1024, total len = 118, # elements = 2
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 252, template ID = 1024, total len = 232, # elements = 4
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 366, template ID = 1024, total len = 346, # elements = 6
DEBUG ( default/nfprobe ): Building NetFlow v9 packet: offset = 480, template ID = 1024, total len = 460, # elements = 8

But my collector gets nothing; if I run netcat -l, it doesn't see any incoming connections at all.
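One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): nfprobe exports NetFlow over UDP, and netcat -l without -u waits for TCP connections, so it would report nothing even while datagrams are arriving. A minimal Python sketch of checking a port with a UDP socket instead:

```python
import socket

def udp_probe(payload: bytes = b"probe") -> bytes:
    """Bind a UDP socket on an ephemeral loopback port, send one datagram
    to it, and return what arrives -- proving the listener sees UDP."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))      # ephemeral port, avoids collisions
    rx.settimeout(2.0)
    port = rx.getsockname()[1]
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(payload, ("127.0.0.1", port))
    data, _ = rx.recvfrom(65535)
    tx.close()
    rx.close()
    return data

if __name__ == "__main__":
    print(udp_probe())  # b'probe'
```

To watch a real collector port, bind the receiving socket to that port and skip the self-send.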

stdout contention between Log() and Print plugin

Hi

I originally observed this with 1.5.2 but just confirmed it also happens with master from a few minutes ago. Log() and the Print plugin can both be writing to stdout at the same time, which can result in corrupt lines. It's easiest to trigger the issue with debug enabled, since it generates a large number of Log() lines.

Here is an example:

$ cat pmacct.pl 
#!/usr/bin/perl
use strict;
use JSON;

my $count = 0;
while(my $line = <STDIN>) {
  $count++;
  next if($line =~ /^(DEBUG|INFO|WARN)/);
  eval { my $derp = from_json($line); };
  print "JSON ERROR[$count]: $line" if($@);
}
 $ ./nfacctd -P print -O json -r 5 -l 2557 -d | perl pmacct.pl 
JSON ERROR[2420]: {"bytes": 52, "ip_src": "70.193.DEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320330]
JSON ERROR[2421]: 211.140", "packets": 1}
JSON ERROR[2494]: {"bytes": 200, "ip_src": "204.124.15.67", "packDEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320331]
JSON ERROR[2547]: ets": 1}
JSON ERROR[2621]: {"bytes": 56, "ip_src": "74.81.98.247",DEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320355]
JSON ERROR[2627]:  "packets": 1}
JSON ERROR[2701]: {"bytes": 253, "ip_src":DEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320361]
JSON ERROR[2705]:  "209.254.40.254", "packets": 1}
JSON ERROR[2778]: {"bytes": 40, "ip_src": "76.65.204.120", "paDEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320365]
JSON ERROR[2782]: ckets": 1}
JSON ERROR[9020]: {"bytes": 71, "DEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.143:21978] version [9] seqno [4579318]
JSON ERROR[9056]: ip_src": "70.210.48.180", "packets": 1}
JSON ERROR[13405]: {"bytes": 60, "DEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320556]
JSON ERROR[13412]: ip_src": "172.56.26.245", "packets": 1}
JSON ERROR[13485]: {"bytes": 126, "ip_src": "181.211.161.124", "packets": DEBUG ( default/core ): Received NetFlow/IPFIX packet from [64.94.0.158:6319] version [9] seqno [157320563]
JSON ERROR[13521]: 2}
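The corruption pattern above is consistent with two writers sharing stdout without line-atomic writes. Purely as an illustration of the mitigation class (not pmacct code): when stdout is a pipe, POSIX makes write()s of up to PIPE_BUF bytes atomic, so emitting each JSON record with a single write() keeps concurrent writers from interleaving mid-line.

```python
import json
import os

def emit_record(fd: int, record: dict) -> None:
    """Serialize a record and emit it with one write(); pipe writes of
    <= PIPE_BUF bytes are atomic, so whole lines cannot interleave."""
    line = json.dumps(record, sort_keys=True) + "\n"
    os.write(fd, line.encode())

# Demo on a local pipe.
r, w = os.pipe()
emit_record(w, {"bytes": 52, "packets": 1})
os.close(w)
print(os.read(r, 4096).decode(), end="")  # {"bytes": 52, "packets": 1}
```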

add new type of aggregation

Among the many things we do with netflow is count how much IPv4 versus IPv6 traffic is going across our routers. It'd make my processing a lot easier if there were an aggregate type that distinguished between v4 and v6 rather than me having to look at each IP and decide.
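Until such an aggregate exists, here is a post-processing workaround sketch (my own, not a pmacct feature) using Python's standard ipaddress module to split totals by IP version:

```python
import ipaddress

def ip_version(addr: str) -> int:
    """Return 4 or 6 for an address string."""
    return ipaddress.ip_address(addr).version

# Hypothetical records in the shape of pmacct's JSON output.
records = [
    {"ip_src": "192.0.2.1", "bytes": 100},
    {"ip_src": "2001:db8::1", "bytes": 250},
]
totals = {4: 0, 6: 0}
for r in records:
    totals[ip_version(r["ip_src"])] += r["bytes"]
print(totals)  # {4: 100, 6: 250}
```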

Segmentation fault with 32 plugins

I would like to configure more than 31 plugins, but I get a segmentation fault on start.
Example configuration:

nfacctd_ip: 0.0.0.0
nfacctd_port: 9998

plugin_pipe_size: 32576000
plugin_buffer_size: 325760
sql_max_writers: 99

!debug: true
!daemonize: true
!logfile: /space/logs/pmacct/nfaccttd.log

nfacctd_disable_checks: true

plugins: amqp[in_1], amqp[in_2], amqp[in_3], amqp[in_4], amqp[in_5], amqp[in_6], amqp[in_6], amqp[in_7], amqp[in_8], amqp[in_9], amqp[in_10], amqp[in_11], amqp[in_12], amqp[in_13], amqp[in_14], amqp[in_15], amqp[in_16], amqp[in_17], amqp[in_18], amqp[in_19], amqp[in_20], amqp[in_21], amqp[in_22], amqp[in_23], amqp[in_24], amqp[in_25], amqp[in_26], amqp[in_26], amqp[in_27], amqp[in_28], amqp[in_29], amqp[in_30], amqp[in_31], amqp[in_32]

!! amqp
amqp_host: localhost
amqp_user: pmacct
amqp_passwd: pmacct

Can you help me?

Best regards

nfacctd segfault print plugin

Running nfacctd 1.6.1 on CentOS 7 with the following config:
daemonize: true
plugins: kafka[dsthost],print[all]
aggregate[dsthost]: dst_host
kafka_output[dsthost]: json
kafka_topic[dsthost]: pmacct.acct
kafka_refresh_time[dsthost]: 300
kafka_history[dsthost]: 5m
kafka_history_roundoff[dsthost]: m
nfacctd_port: 9992
!
aggregate[all]: dst_host, src_host, dst_port, src_port
print_refresh_time[all]: 300
print_history[all]: 300
print_history_roundoff[all]: m
print_output_file_append[all]: true
print_output_file[all]: /opt/Data/as/netflow-%Y-%m-%d-%H-%M.csv
print_latest_file[all]: /opt/Data/as/netflow.latest
print_output[all]: csv

compile info:
NetFlow Accounting Daemon, nfacctd 1.6.1 (20161001-00+c5)
'--enable-sqlite3' '--enable-jansson' '--enable-ipv6' '--enable-kafka' '--enable-rabbitmq' '--enable-pgsql' '--enable-mysql' '--enable-plabel' '--enable-geoipv2'

After ~5 minutes the nfacctd print process crashes with the following message:
segfault at 31 ip 00007f5114a0938c sp 00007ffcff007798 error 4 in libc-2.17.so[7f5114989000+1b6000]
Please advise.

route distinguisher (RD) looks weird when dumping the BGP table

Hi,

I started playing with pmacct and I noticed that the route distinguisher (RD) looks weird when dumping the BGP table to a file. Is there any reason why?

Here is an example:

"rd": "1:40.0.17.10:256

The IP address is inverted: it should be 10.17.0.40, and the 2nd part should be 1806 instead.
Is this a bug? Please find the complete output from the file below.

[root@hostname]# cat bgp-10_197_1_114-2016_11_29T20_05_00.txt | grep 10.12.95.16
{"timestamp": "2016-11-29 20:05:00", "peer_ip_src": "10.17.1.14", "event_type": "dump", "ip_prefix": "10.12.95.16/28", "bgp_nexthop": "10.17.1.14", "as_path": "65346", "comms": "65346:39", "ecomms": "RT:65432:2", "origin": 0, "local_pref": 100, "rd": "1:40.0.17.10:256"}
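For comparison, here is how a type-1 RD (RFC 4364: 2-byte type, 4-byte IPv4 administrator, 2-byte assigned number, all in network byte order) decodes; the sample bytes below are constructed from the values the reporter expects, not taken from the dump:

```python
import socket
import struct

def decode_rd(raw: bytes) -> str:
    """Decode an 8-byte route distinguisher, type 1 only (IPv4:number)."""
    rd_type, = struct.unpack("!H", raw[:2])
    if rd_type == 1:
        ip = socket.inet_ntoa(raw[2:6])       # administrator (IPv4)
        num, = struct.unpack("!H", raw[6:8])  # assigned number
        return f"{rd_type}:{ip}:{num}"
    raise ValueError(f"unhandled RD type {rd_type}")

raw = struct.pack("!H", 1) + socket.inet_aton("10.17.0.40") + struct.pack("!H", 1806)
print(decode_rd(raw))  # 1:10.17.0.40:1806
```

An output like 40.0.17.10 from the same wire bytes would indeed suggest the 4-byte administrator field being read in the wrong byte order.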

BR

pmacct 1.6.0 - Failing to build with jansson plugin while libs are in non standard location

I am trying to build the pmacct package with jansson enabled. I am using a non-standard location for the library (as shown below), but I get a make error claiming that jansson.h does not exist. Please advise ...

STEPS

wget http://www.digip.org/jansson/releases/jansson-2.7.tar.gz
tar -xvzf jansson-2.7.tar.gz
cd jansson-2.7/
./configure
make
DESTDIR=/tmp/jansson/ make install
export JANSSON_LIBS="-L/tmp/jansson/usr/local/lib/ -ljansson"
export JANSSON_CFLAGS="-I/tmp/jansson/usr/local/include/"

cd ../pmacct-1.6.0/
./configure --enable-jansson

OUTPUT

.
.
.
PLATFORM ..... : x86_64
OS ........... : Linux 3.13.0-92-generic
COMPILER ..... : gcc
CFLAGS ....... : -O2 -g -O2 
LIBS ......... : -lpcap  -ldl -L/usr/local/lib -lz -lpthread
LDFLAGS ...... : -Wl,--export-dynamic 
PLUGINS ...... :  jansson

MAKE

/tmp/pmacct-1.6.0$ make
Making all in src
gmake[1]: Entering directory `/tmp/pmacct-1.6.0/src'
Making all in nfprobe_plugin
gmake[2]: Entering directory `/tmp/pmacct-1.6.0/src/nfprobe_plugin'
  CC     libnfprobe_plugin_la-nfprobe_plugin.lo
In file included from common.h:30:0,
                 from nfprobe_plugin.c:53:
./../pmacct.h:338:21: fatal error: jansson.h: No such file or directory
 #include <jansson.h>
                     ^
compilation terminated.
gmake[2]: *** [libnfprobe_plugin_la-nfprobe_plugin.lo] Error 1
gmake[2]: Leaving directory `/tmp/pmacct-1.6.0/src/nfprobe_plugin'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/tmp/pmacct-1.6.0/src'
make: *** [all-recursive] Error 1

v2 sflow counters not being processed

Hello,

I am successfully collecting sFlow counters with sfacctd, but some of my devices are sending sFlow v2 datagrams, and these are not being processed.

I've tried the simplest possible config:

sfacctd_port: 23503
daemonize: false
sfacctd_counter_file: /tmp/counters

Running in debug mode, I see the packets being received:

DEBUG ( default/core ): Received sFlow packet from [<ipaddress>:4815] version [2] seqno [315334925]

I can also successfully receive the data with sflowtool. But I only get the following records being output to /tmp/counters:

{"seq": 0, "timestamp": "2017-01-26 09:18:33.108082", "peer_src_ip": "<ipaddress>", "event_type": "log_init"}

When I configure sfacctd to export the flows from these devices, that works fine; it's only the counters that aren't being processed.

I'm at a loss as to what to do next to debug this; I would appreciate any advice.
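One quick way to audit what each device actually sends: an sFlow datagram starts with a 4-byte version field in network byte order, so peeking at the first bytes of a capture tells you the version. A sketch, with a synthetic datagram standing in for a real capture:

```python
import struct

def sflow_version(datagram: bytes) -> int:
    """Read the leading 4-byte, big-endian version field of an sFlow datagram."""
    return struct.unpack("!I", datagram[:4])[0]

# Synthetic example: a datagram whose header claims version 2.
datagram = struct.pack("!I", 2) + b"\x00" * 16
print(sflow_version(datagram))  # 2
```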

Thanks!

v6 not supported by nfprobe_receiver in config?

Specifying nfprobe_receiver: [ipv6address]:port and then running pmacctd -f /etc/netflow/config -i eth0 results in this error: Syntax error: not weighted brackets. Exiting.

My goal is to generate IPv6 NetFlow packets from a Linux server.
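For reference, the bracketed form the reporter is trying is the conventional way to disambiguate an IPv6 address from the port separator. A sketch of that splitting (illustrative only, not pmacct's parser):

```python
def split_hostport(s: str):
    """Split 'host:port' or '[v6addr]:port' into (host, port)."""
    if s.startswith("["):
        host, _, port = s[1:].partition("]:")
        return host, int(port)
    host, _, port = s.partition(":")
    return host, int(port)

print(split_hostport("[2001:db8::1]:2100"))  # ('2001:db8::1', 2100)
```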

MPLS VPN v6 prefixes are not dumped to a file

Hi,
As you can see below, I'm dumping the BGP table to a file, and I'm not able to see MPLS VPN IPv6 prefixes in the dumped file.

Here is the configuration I'm using today.

[root@linux-9dc70 etc]# cat pmacct.conf

interface: eno33559296
bgp_daemon: true
bgp_daemon_ip: 10.17.1.18
bgp_agent_map: /opt/pmacct/peers.map

bgp_table_dump_file: /opt/pmacct/log/bgp-$peer_src_ip-%Y_%m_%dT%H_%M_%S.txt
bgp_table_dump_refresh_time: 180

[root@linux-9dc70 log]# cat bgp-10_197_1_114-2016_12_05T16_18_00.txt | grep 2001
[root@linux-9dc70 log]#

Here is a printout from my router, where I peer with the BGP daemon.
Table bgp.l3vpn-inet6.0 Bit: 20002
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Advertised prefixes: 5022

user1@hostname-re0> show bgp neighbor 10.17.1.18
Peer: 10.17.1.18+179 AS 65001 Local: 10.17.1.14+50967 AS 65001
Description: pmacct
Type: Internal State: Established (route reflector client)Flags:
Last State: EstabSync Last Event: RecvKeepAlive
Last Error: Hold Timer Expired Error
Export: [ BGP_EXPORT ] Import: [ BGP_IMPORT ]
Options:
Options:
Address families configured: inet-vpn-unicast inet6-vpn-unicast
Holdtime: 90 Preference: 170
Number of flaps: 30
Last flap event: Closed
Error: 'Hold Timer Expired Error' Sent: 1 Recv: 0
Peer ID: 10.17.1.18 Local ID: 10.17.25.1 Active Holdtime: 90
Keepalive Interval: 30 Group index: 2 Peer index: 0
BFD: disabled, down
NLRI for restart configured on peer: inet-vpn-unicast inet6-vpn-unicast
NLRI advertised by peer: inet-vpn-unicast inet6-vpn-unicast
NLRI for this session: inet-vpn-unicast inet6-vpn-unicast
Peer does not support Refresh capability
Stale routes from peer are kept for: 300
Peer does not support Restarter functionality
Peer does not support Receiver functionality
Peer supports 4 byte AS extension (peer-as 65001
Peer does not support Addpath
Table bgp.l3vpn.0 Bit: 10002
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: in sync
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Advertised prefixes: 47761
Table bgp.l3vpn-inet6.0 Bit: 20002
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: in sync
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Advertised prefixes: 5022
Table VRF1.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF2.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF3.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF4.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF5.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF6.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF7.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF8.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF9.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF10.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF11.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF12.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF13.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF14.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF15.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF16.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF17.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF18.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF19.inet.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Table VRF20.inet6.0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Send state: not advertising
Active prefixes: 0
Received prefixes: 0
Accepted prefixes: 0
Suppressed due to damping: 0
Last traffic (seconds): Received 26 Sent 1 Checked 24
Input messages: Total 11941 Updates 0 Refreshes 0 Octets 226879
Output messages: Total 107333 Updates 95391 Refreshes 0 Octets 18099491
Output Queue[0]: 0
Output Queue[1]: 0
Output Queue[2]: 0
Output Queue[3]: 0
Output Queue[4]: 0
Output Queue[5]: 0
Output Queue[6]: 0
Output Queue[7]: 0
Output Queue[8]: 0
Output Queue[9]: 0
Output Queue[10]: 0
Output Queue[11]: 0
Output Queue[12]: 0
Output Queue[13]: 0
Output Queue[14]: 0
Output Queue[15]: 0
Output Queue[16]: 0
Output Queue[17]: 0
Output Queue[26]: 0
Output Queue[27]: 0
Output Queue[29]: 0
Output Queue[30]: 0

Compilation problem

Hi,

I am trying to compile pmacct with RabbitMQ support.

I have pmacct 1.5.1, rabbitmq-server 3.5.7, and the latest rabbitmq-c release.
./configure works great, but make fails:

amqp_common.c: In function ‘p_amqp_connect’:
amqp_common.c:175:77: error: incompatible type for argument 7 of ‘amqp_exchange_declare’
amqp_cstring_bytes(amqp_host->exchange_type), 0, 0, amqp_empty_table);

^
In file included from /usr/local/include/amqp.h:765:0,
from amqp_common.h:23,
from amqp_common.c:27:
/usr/local/include/amqp_framing.h:798:11: note: expected ‘amqp_boolean_t {aka int}’ but argument is of type ‘amqp_table_t {aka const struct amqp_table_t_}’
AMQP_CALL amqp_exchange_declare(amqp_connection_state_t state, amqp_channel_t channel, amqp_bytes_t exchange, amqp_b
^
amqp_common.c:174:3: error: too few arguments to function ‘amqp_exchange_declare’
amqp_exchange_declare(amqp_host->conn, 1, amqp_cstring_bytes(amqp_host->exchange),

^
In file included from /usr/local/include/amqp.h:765:0,
from amqp_common.h:23,
from amqp_common.c:27:
/usr/local/include/amqp_framing.h:798:11: note: declared here
AMQP_CALL amqp_exchange_declare(amqp_connection_state_t state, amqp_channel_t channel, amqp_bytes_t exchange, amqp_b
^
amqp_plugin.c: In function ‘amqp_plugin’:
amqp_plugin.c:65:3: warning: implicit declaration of function ‘pm_setproctitle’ [-Wimplicit-function-declaration]
pm_setproctitle("%s [%s]", "RabbitMQ/AMQP Plugin", config.name);
^
amqp_plugin.c: In function ‘amqp_handle_routing_key_dyn_strings’:
amqp_plugin.c:455:5: warning: implicit declaration of function ‘addr_to_str’ [-Wimplicit-function-declaration]
addr_to_str(ip_address, &elem->pbgp->peer_src_ip);
^
amqp_plugin.c:477:27: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 4 has type ‘pm_id_t {aka long unsigned int}’ [-Wformat=]
snprintf(buf, newlen, "%u", config.post_tag);
^
amqp_plugin.c:477:27: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 4 has type ‘pm_id_t {aka long unsigned int}’ [-Wformat=]
amqp_plugin.c:497:27: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 4 has type ‘pm_id_t {aka long unsigned int}’ [-Wformat=]
snprintf(buf, newlen, "%u", elem->primitives.tag);
^
amqp_plugin.c:497:27: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 4 has type ‘pm_id_t {aka long unsigned int}’ [-Wformat=]

Can anyone help?

Which data structure field stores payload data ?

Hi,

I have set up FreeRADIUS on a server.
I am able to run the pmacct daemon, which stores accounting information in a MySQL DB.

Now I want to decode the RADIUS packets.

I am looking at the src/pkt_handlers.c file. I have identified the RADIUS packet based on the dst_port number in the dst_port_handler function.

I am dumping the data pointed to by pptrs->payload_ptr, which is the payload.

I am getting 26 bytes of valid data, and after that it is garbage.

Actual payload captured in wireshark
01 26 00 2f 18 80 1a d2 13 95 a9 7c 73 01 29 9a ec 29 60 bb 01 09 74 65 73 74 69 6e 67 02 12 48 72 db 80 56 9d 7f 58 fb 9b fd 75 fc 6a c4 67

Dump of pptrs->payload_ptr
01 26 00 2f 18 80 1a d2 13 95 a9 7c 73 01 29 9a ec 29 60 bb 01 09 74 65 73 74 00 00 00 00 00 00 a0 00 00 00 b4 f3 60 57 54 1b 83 2b 44 00 00

Does anyone know in which element the whole packet payload is stored?
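To quantify the divergence, the two dumps can be compared byte by byte; the hex strings below are the first 32 bytes of each dump as posted above:

```python
def common_prefix_len(a: bytes, b: bytes) -> int:
    """Number of leading bytes on which the two buffers agree."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

wireshark = bytes.fromhex(
    "01 26 00 2f 18 80 1a d2 13 95 a9 7c 73 01 29 9a"
    " ec 29 60 bb 01 09 74 65 73 74 69 6e 67 02 12 48")
captured = bytes.fromhex(
    "01 26 00 2f 18 80 1a d2 13 95 a9 7c 73 01 29 9a"
    " ec 29 60 bb 01 09 74 65 73 74 00 00 00 00 00 00")
print(common_prefix_len(wireshark, captured))  # 26
```

A clean cut-off like this often points at capture truncation (e.g. a short snaplen) rather than at reading the wrong field, though that is only a guess.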

Regards,
Mehul

Connection failed to Kafka: p_kafka_check_outq_len() / p_kafka_close()

Hi!

I am running a self-compiled version of nfacctd (1.6.0) on my server:

root@myserver:~ # nfacctd -V
NetFlow Accounting Daemon, nfacctd 1.6.0 (20160607-00)
 '--enable-kafka' '--enable-jansson' '--enable-ipv6'

For suggestions, critics, bugs, contact me: Paolo Lucente <[email protected]>.

When starting it with a configuration like this:

nfacctd_ip: <ip>
nfacctd_port: <port>
interface: <interface>

!
! Configuration for Apache Kafka
!
plugins: kafka
aggregate: src_mac, dst_mac, vlan, cos, etype, src_host, dst_host, src_net, dst_net, src_mask, dst_mask, src_as, dst_as, src_port, dst_port, tos, proto, none, flows, tag, tag2, label, class, tcpflags, in_iface, out_iface, ext_comm, as_path, peer_src_ip, peer_dst_ip, peer_src_as, peer_dst_as, local_pref, med, src_ext_comm, src_as_path, src_local_pref, src_med, mpls_vpn_rd, mpls_label_top, mpls_label_bottom, mpls_stack_depth, sampling_rate, src_host_country, dst_host_country, nat_event, post_nat_src_host, post_nat_dst_host, post_nat_src_port, post_nat_dst_port, timestamp_start, timestamp_end, timestamp_arrival, export_proto_seqno, export_proto_version
kafka_topic: ipfix
kafka_broker_host: <server>
kafka_broker_port: <port>
kafka_refresh_time: 1

I get the following output:

root@myserver:~ # nfacctd -f /etc/pmacct/conf.conf 
INFO ( default/core ): Reading configuration file '/etc/pmacct/conf.conf'.
INFO ( default/core ): waiting for NetFlow data on <IP>:<port>
INFO ( default/kafka ): cache entries=16411 base cache memory=53434216 bytes
INFO ( default/kafka ): *** Purging cache - START (PID: 97597) ***
INFO ( default/kafka ): *** Purging cache - END (PID: 97597, QN: 0/0, ET: 0) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97600) ***
INFO ( default/kafka ): *** Purging cache - END (PID: 97600, QN: 0/0, ET: 0) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97603) ***
INFO ( default/kafka ): *** Purging cache - END (PID: 97603, QN: 0/0, ET: 0) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97606) ***
INFO ( default/kafka ): *** Purging cache - END (PID: 97606, QN: 0/0, ET: 0) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97609) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97614) ***
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_check_outq_len()
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_close()
INFO ( default/kafka ): *** Purging cache - END (PID: 97609, QN: 87/195, ET: 1) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97619) ***
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_check_outq_len()
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_close()
INFO ( default/kafka ): *** Purging cache - END (PID: 97614, QN: 52/120, ET: 1) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97624) ***
INFO ( default/kafka ): *** Purging cache - END (PID: 97619, QN: 106/106, ET: 2) ***
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_check_outq_len()
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_close()
INFO ( default/kafka ): *** Purging cache - END (PID: 97624, QN: 68/127, ET: 1) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97629) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97634) ***
^C
INFO ( default/kafka ): *** Purging cache - START (PID: 97634) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97629) ***
INFO ( default/kafka ): *** Purging cache - START (PID: 97596) ***
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_check_outq_len()
ERROR ( default/kafka ): Connection failed to Kafka: p_kafka_close()
INFO ( default/kafka ): *** Purging cache - END (PID: 97596, QN: 78/130, ET: 1) ***
INFO ( default/kafka ): *** Purging cache - END (PID: 97634, QN: 259/259, ET: 2) ***
INFO ( default/core ): OK, Exiting ...

Note: I also get those errors when setting kafka_refresh_time to something higher, like 30.

It seems like the plugin cannot connect to Kafka. However, the Kafka broker is up and running stably. Is this something I should worry about? How would I go about investigating this further?

Apparently this comes from

When running nfacctd with -d in Debug mode, these errors do not show up 😲
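A first investigative step (my suggestion, nothing pmacct-specific) is to separate basic broker reachability from plugin behaviour, e.g. with a plain TCP connect to the broker port:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Plain TCP reachability check: it does not speak the Kafka protocol,
    it only tells you whether something answers on the broker port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Demo against a throwaway local listener; substitute <server>:<port>.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    print(tcp_reachable(*srv.getsockname()))  # True
    srv.close()
```

If this succeeds while the plugin still logs connection errors, the problem is more likely in the Kafka handshake or librdkafka configuration than in basic networking.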

bgp_logdump.c : Kafka / AMQP typo?

Hi Paolo,

I haven't been able to test this, but am surprised to see that in bgp_logdump.c, p_kafka_set_topic() is called on an AMQP host pointer at line 282 instead of a Kafka host pointer.
I thought I would report this in case it was a typo and fixing it could prevent unexpected behaviours.
In case it isn't, please feel free to close this issue and accept my apologies (and ideally let me know or comment the reason for this) :)

make error

hi

I am getting the following error while running make:

/usr/local/lib/libpcap.a(bpf_filter.o): In function `bpf_validate':
(.text+0x0): multiple definition of `bpf_validate'
./.libs/libdaemons.a(libdaemons_la-bpf_filter.o):/root/pmacct/src/bpf_filter.c:528: first defined here
/usr/local/lib/libpcap.a(bpf_filter.o): In function `bpf_filter':
(.text+0x590): multiple definition of `bpf_filter'
./.libs/libdaemons.a(libdaemons_la-bpf_filter.o):/root/pmacct/src/bpf_filter.c:201: first defined here
collect2: ld returned 1 exit status
gmake[2]: *** [pmacctd] Error 1
gmake[2]: Leaving directory `/root/pmacct/src'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/root/pmacct/src'
make: *** [all-recursive] Error 1

Using this for configure:
./configure --enable-rabbitmq --enable-jansson

Release: CentOS release 6.6 (Final)

Question: how can I limit maximum entries?

Hi @paololucente,
I installed pmacct on an embedded Linux box (OpenWrt); the box has 128 MBytes of total RAM, so I will reserve only 30-40 MBytes for pmacct.
pmacct, with nprobe and nfacctd, will monitor only 2-4 clients.

nfacctd uses the memory settings below:
............
imt_buckets: 6550
imt_mem_pools_size: 8192
imt_mem_pools_number: 0

With imt_mem_pools_number: 0, it works for a long time.
With the command pmacct -s, the result "For a total of: .... entries" shows the entry count increasing, then decreasing, then increasing again. When it reaches ~28k entries, I get the error below:

ERROR ( out/memory ): We are missing data.
If you see this message once in a while, discard it. Otherwise some solutions follow:

  • increase shared memory size, 'plugin_pipe_size'; now: '0'.
  • increase buffer size, 'plugin_buffer_size'; now: '0'.
  • increase system maximum socket size.

I can use crond to erase entries with pmacct -e.
But in case crond has a problem, how can I limit the maximum number of entries (e.g. 2000)? Or is there any other advice for a small-memory device?
Thank you,
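On sizing: my reading of CONFIG-KEYS (worth double-checking) is that a finite imt_mem_pools_number caps the in-memory table at roughly imt_mem_pools_number * imt_mem_pools_size bytes, so the entry count is bounded indirectly through the memory budget. A quick back-of-the-envelope, using the pool size from the config above and a hypothetical 30 MB budget:

```python
# Figures are examples; only imt_mem_pools_size matches the config above.
imt_mem_pools_size = 8192   # bytes per pool
budget = 30 * 1024 * 1024   # ~30 MB reserved for pmacct

max_pools = budget // imt_mem_pools_size
print(max_pools)  # 3840 pools fit the budget
```

Setting imt_mem_pools_number to a value like this, instead of 0 (unlimited), should make the daemon stop growing at the budget rather than exhausting the box.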

Issues with kafka_topic_rr

Hi,

We're currently using nfacctd to export flows to Kafka, where we consume them using Logstash to push them into Elasticsearch. The whole system works like a charm; the only downside at the moment is that nfacctd only outputs the flow data to a single topic with a single partition in Kafka. This means only a single Logstash instance can connect to the Kafka topic to consume the data. I didn't see an option to output to multiple partitions, so the kafka_topic_rr config statement seemed to suffice. I'm trying to run the following config:

daemonize: true

plugins: kafka
nfacctd_port: 5678
aggregate: src_host, dst_host, src_port, dst_port, proto, tos, tcpflags, in_iface, out_iface, peer_src_ip, sampling_rate, timestamp_start, timestamp_end, timestamp_arrival
kafka_output: json
kafka_topic: pmacct.acct
kafka_topic_rr: 4
kafka_refresh_time: 10
kafka_history: 1m
kafka_history_roundoff: m
logfile: /var/log/nfacctd.log

Using this, the logs look as follows:

Dec 16 14:39:18 INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.6.2-git (20161214-01)
Dec 16 14:39:18 INFO ( default/core ):  '--enable-kafka' '--enable-jansson' '--enable-ipv6'
Dec 16 14:39:18 INFO ( default/core ): Reading configuration file '/etc/pmacct/nfacctd.conf'.
Dec 16 14:39:18 INFO ( default/core ): waiting for NetFlow data on :::5678
Dec 16 14:39:18 INFO ( default/kafka ): cache entries=16411 base cache memory=53434216 bytes
Dec 16 14:39:22 INFO ( default/kafka ): *** Purging cache - START (PID: 121530) ***
Dec 16 14:39:22 INFO ( default/kafka ): *** Purging cache - END (PID: 121530, QN: 0/0, ET: 0) ***
Dec 16 14:39:32 INFO ( default/kafka ): *** Purging cache - START (PID: 121554) ***
Dec 16 14:39:32 INFO ( default/kafka ): *** Purging cache - END (PID: 121554, QN: 0/0, ET: 0) ***
Dec 16 14:39:42 INFO ( default/kafka ): *** Purging cache - START (PID: 121582) ***
Dec 16 14:39:52 INFO ( default/kafka ): *** Purging cache - START (PID: 121608) ***
Dec 16 14:40:02 INFO ( default/kafka ): *** Purging cache - START (PID: 121632) ***
Dec 16 14:40:12 INFO ( default/kafka ): *** Purging cache - START (PID: 121663) ***
Dec 16 14:40:22 INFO ( default/kafka ): *** Purging cache - START (PID: 121687) ***
Dec 16 14:40:32 INFO ( default/kafka ): *** Purging cache - START (PID: 121712) ***
Dec 16 14:40:42 INFO ( default/kafka ): *** Purging cache - START (PID: 121829) ***
Dec 16 14:40:44 INFO ( default/kafka ): *** Purging cache - START (PID: 121521) ***
Dec 16 14:40:44 INFO ( default/kafka ): *** Purging cache - END (PID: 121521, QN: 0/0, ET: 0) ***

As you can see, it ends up locking up and all the Kafka writers hang; in the end I have to manually kill the core process plus all the writers.
I do see the topics being created in Kafka, and some messages are being inserted, but the numbers are very low: where I'd expect 10K+ messages every 10 seconds, I'm seeing maybe 400 or 500.

Simply removing the kafka_topic_rr option makes the whole thing work properly:

Dec 16 15:01:29 INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.6.2-git (20161214-01)
Dec 16 15:01:29 INFO ( default/core ):  '--enable-kafka' '--enable-jansson' '--enable-ipv6'
Dec 16 15:01:29 INFO ( default/core ): Reading configuration file '/etc/pmacct/nfacctd.conf'.
Dec 16 15:01:29 INFO ( default/core ): waiting for NetFlow data on :::5678
Dec 16 15:01:29 INFO ( default/kafka ): cache entries=16411 base cache memory=53434216 bytes
Dec 16 15:01:32 INFO ( default/kafka ): *** Purging cache - START (PID: 145069) ***
Dec 16 15:01:32 INFO ( default/kafka ): *** Purging cache - END (PID: 145069, QN: 0/0, ET: 0) ***
Dec 16 15:01:42 INFO ( default/kafka ): *** Purging cache - START (PID: 145160) ***
Dec 16 15:01:45 INFO ( default/kafka ): *** Purging cache - END (PID: 145160, QN: 31835/31835, ET: 1) ***
Dec 16 15:01:52 INFO ( default/kafka ): *** Purging cache - START (PID: 145182) ***
Dec 16 15:01:55 INFO ( default/kafka ): *** Purging cache - END (PID: 145182, QN: 30793/30793, ET: 1) ***
Dec 16 15:02:02 INFO ( default/kafka ): *** Purging cache - START (PID: 145207) ***
Dec 16 15:02:05 INFO ( default/kafka ): *** Purging cache - END (PID: 145207, QN: 31888/31888, ET: 1) ***

Version info
nfacctd: 1.6.2-git (20161214-01)
kafka: 0.10.1.0 (scala 2.11)
librdkafka: 0.9.2

kafka-plugin Writer failed to start: undefined symbol: rd_kafka_conf_set_log_cb

Hi,

I'm using sfacctd to export flows to Kafka. I noticed that the latest version of sfacctd doesn't work with the Kafka plugin, failing with this message:
undefined symbol: rd_kafka_conf_set_log_cb
I'm trying to solve this issue.

ENV:

# sfacctd -V
sFlow Accounting Daemon, sfacctd 1.6.2-git (20170106-00)
 '--enable-rabbitmq' '--enable-kafka' '--enable-jansson'

For suggestions, critics, bugs, contact me: Paolo Lucente <[email protected]>.
# cat /etc/sfacctd.conf 
aggregate: src_mac, dst_mac, vlan, cos, etype, src_as, dst_as, peer_src_ip, peer_dst_ip, in_iface, out_iface, src_host, src_net, dst_host, dst_net, src_mask, dst_mask, src_port, dst_port, tcpflags, proto, tos, sampling_rate, timestamp_start, timestamp_end, timestamp_arrival

plugins: kafka
kafka_output: json
kafka_topic: pmacct.test
kafka_refresh_time: 60
kafka_history: 1m
kafka_history_roundoff: m
kafka_broker_host: <IP>
kafka_broker_port: 9092

# sfacctd -f /etc/sfacctd.conf 
INFO ( default/core ): sFlow Accounting Daemon, sfacctd 1.6.2-git (20170106-00)
INFO ( default/core ):  '--enable-rabbitmq' '--enable-kafka' '--enable-jansson'
INFO ( default/core ): Reading configuration file '/etc/sfacctd.conf'.
INFO ( default/core ): waiting for sFlow data on 0.0.0.0:6343
INFO ( default/kafka ): cache entries=16411 base cache memory=44769208 bytes
sfacctd: kafka Plugin -- Writer [default]: symbol lookup error: sfacctd: kafka Plugin -- Writer [default]: undefined symbol: rd_kafka_conf_set_log_cb
sfacctd: kafka Plugin -- Writer [default]: symbol lookup error: sfacctd: kafka Plugin -- Writer [default]: undefined symbol: rd_kafka_conf_set_log_cb
sfacctd: kafka Plugin -- Writer [default]: symbol lookup error: sfacctd: kafka Plugin -- Writer [default]: undefined symbol: rd_kafka_conf_set_log_cb

Large BGP communities: treat-as-withdraw error handling

I would like to ask for your support in analyzing a behaviour that seems unexpected to me: pmacct does not appear to handle malformed Large BGP Communities attributes the way I understand the draft requires.

Draft:

A BGP UPDATE message with a malformed Large Communities attribute SHALL be handled using the approach of "treat-as-withdraw" as described in section 2 [RFC7606].

RFC7606:

Treat-as-withdraw: In this approach, the UPDATE message containing the path attribute in question MUST be treated as though all contained routes had been withdrawn just as if they had been listed in the WITHDRAWN ROUTES field (or in the MP_UNREACH_NLRI attribute if appropriate) of the UPDATE message, thus causing them to be removed from the Adj-RIB-In according to the procedures of [RFC4271].

I used ExaBGP to announce a sequence of UPDATEs toward pmacct:

  1. 203.0.113.11/32 with 2 valid large communities (1:2:3, 4:5:6)
  2. 203.0.113.12/32 with above valid large communities (1:2:3, 4:5:6)
  3. 203.0.113.12/32 (same prefix as above) with an invalid Large BGP Communities attribute (length: 21 bytes)

I found that, after ExaBGP completed its job, pmacct still had the 203.0.113.12/32 prefix in its table, while it should have been removed when the malformed UPDATE was received:

# cat pmacct/bgp.log
{..., "event_type": "dump_init", "dump_period": 60}
{..., "event_type": "dump", "ip_prefix": "203.0.113.11/32", ..., "lcomms": "1:2:3 4:5:6", "origin": 0, "local_pref": 100}
{..., "event_type": "dump", "ip_prefix": "203.0.113.12/32", ..., "origin": 0, "local_pref": 100}
{..., "event_type": "dump_close", "entries": 2, "tables": 1}
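
For reference, this is the behaviour I would expect, sketched in Python (hypothetical helper names, not pmacct code). Each Large Community is 12 bytes, so an attribute length that is not a positive multiple of 12 — like the 21 bytes announced above — makes the attribute malformed:

```python
# Treat-as-withdraw sketch for the Large Communities attribute:
# a malformed attribute must cause the UPDATE's routes to be
# removed from the Adj-RIB-In, not ignored.
def large_comms_malformed(attr_len: int) -> bool:
    # Each large community is 12 bytes (3 x 4-byte values).
    return attr_len == 0 or attr_len % 12 != 0

rib = {"203.0.113.11/32", "203.0.113.12/32"}

def handle_update(rib, prefixes, lcomm_len):
    if lcomm_len is not None and large_comms_malformed(lcomm_len):
        rib -= set(prefixes)          # treat-as-withdraw
    else:
        rib |= set(prefixes)          # normal announcement

handle_update(rib, ["203.0.113.12/32"], 21)
print(sorted(rib))  # only 203.0.113.11/32 should remain
```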

timestamp_arrival fails to work for sfacctd

I am using a really basic configuration and trying to use the timestamp_arrival primitive. It always produces a zero value: either the start of the epoch, or "0.0" if an epoch timestamp is requested. This only seems to happen with sfacctd. Previously I was using timestamp_start in 1.5.2 and I see no issues with that aggregation primitive.

comma is not an ideal CSV delimiter

The non-configurable CSV delimiter is ",", but this is far from ideal: if one configures the aggregate to dump as_path, route aggregation may come into play (an AS_SET), which separates the aggregated ASNs with "," as well.

As can be seen from this example:

0,0,5650,65130,2914:370_2914:1205_2914:2204_2914:3200,8928_29286_{7155,64513,64646,65100,65104,65105,65130},120,0,8928,129.250.0.112,81.25.197.186,299,462,50.32.0.0,95.210.0.0,12,16,1,40

Correctly parsing 8928_29286_{7155,64513,64646,65100,65104,65105,65130} within the context of comma-separated fields puts an unnecessary burden on the file parser.

I think there are some ways forward:

  • make the default delimiter not a , but a | (but this might break people's scripts)
  • make the delimiter configurable: introduce a csv_delimiter
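
To illustrate the problem, a quick Python check on a shortened version of the record above (csv_delimiter here names the proposed, not yet existing, knob):

```python
import csv
import io

# With '|' as field delimiter the AS_SET stays intact; with ',' the
# same record splits mid-AS_SET.
row = "8928_29286_{7155,64513,64646,65100}|120|0|8928"
by_pipe = next(csv.reader(io.StringIO(row), delimiter="|"))
by_comma = next(csv.reader(io.StringIO(row.replace("|", ","))))

print(by_pipe[0])     # 8928_29286_{7155,64513,64646,65100}
print(len(by_pipe))   # 4 fields, as intended
print(len(by_comma))  # 7 fields: the AS_SET got shredded
```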

timestamp_end always 0 for IPFIX

Hi,

I'm using nfacctd to collect IPFIX datagrams from OVS for traffic logging. I've tried the timestamp_start and timestamp_end primitives, but timestamp_end always seems to be 0 (1970-01-01 08:00:00.0).

IPFIX datagrams captured with Wireshark show that there actually is a flowEndDeltaMicroseconds value in the flow data (and this value should eventually be parsed as timestamp_end).

I've traced into the NF_timestamp_end_handler function in pkt_handlers.c, but the code looks fine.

Can anyone tell me what the problem might be?

Thanks.
Neo
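
For what it's worth, my reading of the IANA IPFIX registry is that flowEndDeltaMicroseconds (IE 159) encodes the flow end as a delta before the export time in the message header. The expected mapping would then be (how pmacct implements this internally is an assumption on my part):

```python
# flowEndDeltaMicroseconds is the difference, in microseconds,
# between the flow end and the export time carried in the IPFIX
# message header; so timestamp_end should never decode to 1970 when
# the element is present and the header export time is sane.
def timestamp_end(export_secs: int, end_delta_us: int) -> float:
    return export_secs - end_delta_us / 1_000_000

print(timestamp_end(1485072000, 250_000))  # 1485071999.75
```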

no need to NULL terminate the shutdown communication

The trailing 00 that pmacct sends with a Shutdown Communication is not needed: the communication is length-prefixed, so the NUL terminator only wastes a byte.

example:

    kali.meerval.net.bgp > pxtr-2.meerval.net.45050: Flags [P.], cksum 0x4a83 (incorrect -> 0x508a), seq 84:145, ack 175, win 453, options [nop,nop,TS val 2682828165 ecr 613091155],
 length 61: BGP, length: 61
        Notification Message (3), length: 61, Cease (6), subcode Administratively Shutdown (2)
        0x0000:  4500 0071 48f8 4000 4006 a76f a5fe ff11  E..qH.@[email protected]....
        0x0010:  a5fe ff10 00b3 affa 269c f9b8 82db 518d  ........&.....Q.
        0x0020:  8018 01c5 4a83 0000 0101 080a 9fe8 b585  ....J...........
        0x0030:  248b 0753 ffff ffff ffff ffff ffff ffff  $..S............
        0x0040:  ffff ffff 003d 0306 0227 706d 6163 6374  .....=...'pmacct
        0x0050:  2072 6563 6569 7665 6420 5349 4749 4e54  .received.SIGINT
        0x0060:  202d 2073 6875 7474 696e 6720 646f 776e  .-.shutting.down
        0x0070:  **00** <<<--- NOT NEEDED
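
A minimal sketch of what I mean, assuming the RFC 8203 framing (16-byte marker, 2-byte length, type, Cease/Administratively Shutdown, then a length-prefixed UTF-8 string). Because the text carries its own length octet, nothing needs the NUL:

```python
import struct

# Build a BGP NOTIFICATION: Cease (6) / Administratively Shutdown (2)
# with a Shutdown Communication per RFC 8203. The text is length-
# prefixed, so no trailing 0x00 is needed (or wanted).
def cease_shutdown(text: str) -> bytes:
    comm = text.encode("utf-8")              # no NUL terminator
    body = bytes([6, 2, len(comm)]) + comm   # code, subcode, len, text
    # 19 = 16-byte marker + 2-byte length + 1-byte type (3 = NOTIFICATION)
    return b"\xff" * 16 + struct.pack("!HB", 19 + len(body), 3) + body

msg = cease_shutdown("pmacct received SIGINT - shutting down")
print(len(msg))   # 60 bytes, vs the 61 bytes in the capture above
print(msg[-1:])   # b'n' - the message ends on the text itself
```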

Kafka-Plugin: Support Failover

Problem

As far as I can tell, if I currently use the Kafka plugin and my configured kafka_broker_host goes down or has some other issue, I am losing data.

Is that correct or am I missing something?

Solution

If it is correct, there are several ways to solve this issue as far as I can tell:

  • Run pmacct twice with different kafka_broker_host values. This will create duplicates that need to be removed.
  • Add failover support to pmacct's Kafka plugin. I guess this is the cleanest and most valuable option for the Open Source community.
  • (Maybe: run a Kafka broker locally that we expect NOT to go down? If it's running on the same host, we at least have control over the broker.)
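
To sketch the failover idea in isolation (conceptual Python, not pmacct code; in practice this logic would live inside the plugin's librdkafka connection handling), the plugin would accept a list of brokers and fall through to the next one on connection failure:

```python
# Conceptual broker-failover sketch. connect_fn is injected so the
# selection logic is testable without a real network; the names here
# are hypothetical, not pmacct configuration directives.
def first_reachable(brokers, connect_fn):
    for broker in brokers:
        try:
            connect_fn(broker)   # raises OSError if unreachable
            return broker
        except OSError:
            continue             # fail over to the next broker
    raise OSError("no broker reachable")

def fake_connect(broker):
    if broker != "kafka2:9092":
        raise OSError("down")

print(first_reachable(["kafka1:9092", "kafka2:9092"], fake_connect))
# kafka2:9092
```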

Interface with timestamp_end was 1970

Dear pmacct Team,
I am monitoring the eth1 interface of a CentOS machine and timestamp_end is always '1970-01-01 08:00:00.0'.
If my configuration has a problem, please help me.

P.S.: my goal is to calculate the exact duration of a connection: timestamp_end - timestamp_start.
Thanks,

Server: CentOS 6.3, pmacctd 1.6.0-git (20160606-00) and also tested with pmacctd 1.6.2-git (20170203-00)

Command and config file: pmacctd -f /etc/pmacct2.conf
daemonize: false
plugins: memory[in], memory[out]
interface: eth1

aggregate[in]: dst_host,src_host,timestamp_start,timestamp_end,dst_port,src_port,proto
aggregate[out]: dst_host,src_host,timestamp_start,timestamp_end,dst_port,src_port,proto
aggregate_filter[in]: src net 192.168.0.0/16
aggregate_filter[out]: dst net 192.168.0.0/16
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe

Show: pmacct -p /tmp/pmacct_in.pipe -s
SRC_IP DST_IP SRC_PORT DST_PORT PROTOCOL TIMESTAMP_START TIMESTAMP_END PACKETS BYTES
192.168.111.138 192.168.111.1 22 62071 tcp 2017-02-03 22:56:57.585742 1970-01-01 08:00:00.0 1 712
192.168.111.138 192.168.111.1 22 62071 tcp 2017-02-03 22:54:40.578268 1970-01-01 08:00:00.0 1 1336
...................
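
Until the zero timestamp_end is explained, a defensive duration calculation (a hypothetical post-processing helper, not pmacct code) avoids the huge negative durations the epoch value would otherwise produce:

```python
# Guard against an unset timestamp_end (which renders as 1970-01-01):
# fall back to timestamp_start so the duration degrades to 0 instead
# of a large negative number.
def flow_duration(ts_start: float, ts_end: float) -> float:
    if ts_end == 0:
        ts_end = ts_start
    return ts_end - ts_start

print(flow_duration(1486162617.585742, 0.0))  # 0.0
```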
