
Comments (8)

paololucente commented on May 28, 2024

This could work, yes. Can we switch to unicast email for the details?

from pmacct.

paololucente commented on May 28, 2024

Hi Sander ( @SanderDelden ),

I had a quick try at this and I seem unable to reproduce the issue. Is the config in Issue 769 valid for this issue as well? Although I am sure it is innocent, can you post the content of the /etc/pmacct/mappings/bgp.map map? Also, can you check the log for anything suspicious? Any warning / error messages?

Paolo

SanderDelden commented on May 28, 2024

Hi Paolo,

My apologies, I should've included the configuration in my initial comment. I've stripped the configuration down to the bare minimum for testing purposes; here you go:

nfacctd.conf:

plugins: print[TEST]

bgp_daemon: true
bgp_daemon_port: 179
nfacctd_as: bgp
bgp_daemon_max_peers: 1
bgp_agent_map: /etc/pmacct/mappings/bgp.map
nfacctd_port: 5009

aggregate[TEST]: dst_as
print_output_file[TEST]: /tmp/pmacct/1m_TEST.json
print_output[TEST]: json
print_history[TEST]: 1m
print_history_roundoff[TEST]: m
print_refresh_time[TEST]: 60
print_output_file_append[TEST]: true

bgp.map:

bgp_ip=x.x.x.x  ip=0.0.0.0/0

All entries in 1m_TEST.json look as follows:

{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:48:00", "stamp_updated": "2024-03-15 09:50:01", "packets": 101479, "bytes": 100798205}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:49:00", "stamp_updated": "2024-03-15 09:50:01", "packets": 144579, "bytes": 143524105}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:49:00", "stamp_updated": "2024-03-15 09:51:01", "packets": 99910, "bytes": 99144753}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:50:00", "stamp_updated": "2024-03-15 09:51:01", "packets": 140996, "bytes": 139505374}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:50:00", "stamp_updated": "2024-03-15 09:52:01", "packets": 102410, "bytes": 102121809}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:51:00", "stamp_updated": "2024-03-15 09:52:01", "packets": 142988, "bytes": 141700933}

The configuration above works in 1.7.8, but the first purge of the cache has all the AS numbers listed as "0". I assume this has to do with the BGP session not being established instantly. This is no problem, I just thought I'd mention it.
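(As a quick sanity check, assuming the JSON lines are exactly as emitted above, a throwaway Python snippet confirms the records are unresolved — counting how many have as_dst == 0:)

```python
import json

# Two of the purge records from 1m_TEST.json above
lines = [
    '{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:48:00", "stamp_updated": "2024-03-15 09:50:01", "packets": 101479, "bytes": 100798205}',
    '{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:49:00", "stamp_updated": "2024-03-15 09:50:01", "packets": 144579, "bytes": 143524105}',
]

records = [json.loads(line) for line in lines]
# Count records where the BGP lookup did not resolve a destination AS
unresolved = [r for r in records if r["as_dst"] == 0]
print(f"{len(unresolved)}/{len(records)} records have as_dst == 0")
```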

I've checked the (debug) logging and nothing strange is observed.

paololucente commented on May 28, 2024

Hi Sander ( @SanderDelden ),

I did manage to reproduce the scenario but unfortunately not the issue - both 1.7.8 and the latest commit work fine. Can you try setting nfacctd_net: bgp too and see if it makes any difference? Also, is there any ADD-PATH capability involved in the BGP feed?

Paolo

SanderDelden commented on May 28, 2024

Hi Paolo,

Setting nfacctd_net: bgp unfortunately does not change the output. We are not using ADD-PATH. If it is of use to you I can provide a PCAP of the BGP traffic.

paololucente commented on May 28, 2024

It would help, yes. A PCAP of both BGP and flows (maybe in two separate traces). Unfortunately BGP traffic can't be replayed, so I could only inspect the traces; what would help much, much more (also having #769 in mind) would be if I could access the container where flows and BGP are pointed to -- so as to debug, recompile, troubleshoot, etc. both 1.7.8 and the latest master code.
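Something along these lines would do for the two traces (the interface name eth0 is a placeholder; the ports are taken from the config you posted, BGP on 179 and NetFlow on 5009):

```shell
# capture the BGP session and the NetFlow export in separate traces
tcpdump -i eth0 -w bgp.pcap   'tcp port 179'
tcpdump -i eth0 -w flows.pcap 'udp port 5009'
```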

SanderDelden commented on May 28, 2024

Hi Paolo,

Would it be possible to debug this over a Teams session (or any other application of your preference)?

doup123 commented on May 28, 2024

Hello @paololucente, did you by any chance come to a conclusion on this?
I am facing something similar to what @SanderDelden mentioned.

I have configured pmacct to receive NetFlow v9 messages (including ingress and egress VRFID packet fields) from a Cisco router and have also established iBGP peering between them. The router sends both IPv4 and VPNv4 routes to pmacct which are correctly received.

I have also configured:

  • flow_to_rd_map: to associate interfaces with RDs
  • bgp_peer_src_as_map: to specify the src_as of specific interfaces
  • pre_tag_map: for enriching the flows with some selected data passed as labels (encoded as map)

Below you may find the corresponding config:

bgp_daemon: true
bgp_daemon_ip: 0.0.0.0
bgp_daemon_max_peers: 100
bgp_daemon_as: XXXXX
nfacctd_as: bgp
nfacctd_net: bgp


#bgp_table_dump_file: /var/log/pmacct/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 120
bgp_table_dump_kafka_broker_host: XXXXX
bgp_table_dump_kafka_topic: pmacct-bgp-dump

# https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L2833
# necessary for defining where the src peering AS should be taken from
bgp_peer_src_as_type: map

nfacctd_port: 2055
! Set the plugin buffers and timeouts for performance tuning
aggregate: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, timestamp_start, timestamp_end, src_as, dst_as, peer_src_as, peer_dst_as, label
plugins: kafka
plugin_buffer_size: 204800
plugin_pipe_size: 20480000
nfacctd_pipe_size: 20480000

! Configure the Kafka plugin
kafka_output: json
kafka_broker_host: XXXXX
kafka_topic: pmacct-enriched2
kafka_refresh_time: 60
kafka_history: 5m
kafka_history_roundoff: m

! MAPS DEFINITION
maps_entries: 2000000
!bgp_table_per_peer_buckets: 12
!aggregate_primitives: /etc/pmacct/primitives.lst
sampling_map: /etc/pmacct/sampling.map
pre_tag_map: pretag.map
pre_tag_label_encode_as_map: true
flow_to_rd_map: flow_to_rd.map
bgp_peer_src_as_map: peers.map
logfile: /var/log/pmacct1.log
daemonize: false
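For completeness, the peers.map referenced above follows the usual bgp_peer_src_as_map shape; the values below are placeholders, not my actual entries:

```
! placeholder entry: flows entering ifindex 111 on agent 1.2.3.4
! get peer_src_as 65001
id=65001  ip=1.2.3.4  in=111
```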

pmacct version

nfacctd -V
NetFlow Accounting Daemon, nfacctd 1.7.10-git [20240405-1 (6362a2c9)]

Arguments:
 'CFLAGS=-fcommon' '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

Libs:
cdada 0.5.0
libpcap version 1.10.3 (with TPACKET_V3)
rdkafka 2.0.2
jansson 2.14

Plugins:
memory
print
nfprobe
sfprobe
tee
kafka

System:
Linux 5.4.0-155-generic #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023 x86_64

Compiler:
gcc 12.2.0

I have run into a very strange problem, though:
The dst_as for flows related to VPNv4 routes is correctly identified and injected into the aggregated result, but the dst_as for flows related to IPv4 routes is set to 0.

The dst_as in the original NetFlow pcap is 0 in both cases (in the NetFlow packets), but pmacct substitutes its value only in the VPNv4 case.

Shouldn't routes that do not correspond to any RD (i.e. IPv4 routes) be used to enrich all flows not matching the flow_to_rd_map criteria?

I am posting the way I have constructed the flow_to_rd_map:

id=0:AS:1234	ip=1.2.3.4 in=111
id=0:AS:1235	ip=1.2.3.4 in=112
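Spelled out, each entry maps flows entering a given ifindex on the exporting agent to a route distinguisher (the AS placeholder stands in for our actual ASN):

```
! id=<route distinguisher>  ip=<NetFlow agent>  in=<input ifindex>
id=0:AS:1234  ip=1.2.3.4  in=111
id=0:AS:1235  ip=1.2.3.4  in=112
```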

Am I missing anything?
P.S.
The rest of the maps (pretag.map and the bgp_peer_src_as_map) work as expected, enriching the flows appropriately.
