
IPFIXcol framework

IPFIXcol is an implementation of an IPFIX (RFC 7011) collector.

⚠️ IPFIXcol2 has been released!

The next generation of the collector is more stable, up to 2x faster, and adds support for new features (e.g., biflow and structured data types). The code was completely rewritten, and some plugins might not be available.

Since the release of the new collector, this old framework is no longer supported! Please consider upgrading to the new release.

Table of Contents

  1. Framework description
  2. IPFIXcol
  3. Plugins
  4. Built-in tools
  5. External tools
  6. How to install
  7. How to build
  8. Docker
  9. Ansible
  10. RPM
  11. FastBit
  12. Contact us
    • Reporting bugs
    • Contributing to IPFIXcol
    • Forum

Framework description

IPFIXcol framework is a set of:

  • IPFIXcol - a collector for capturing IPFIX and NetFlow data
  • input, intermediate, and storage plugins for the collector
  • tools for data processing, etc.

To generate data for the IPFIXcol, see the list of supported flow exporters.

IPFIXcol

Described in its README

Plugins

The IPFIX collector comes with several built-in plugins, described on IPFIXcol's page.

There are also external plugins that are installed separately.

External input plugins

External intermediate plugins

  • geoip - adds country codes into the metadata structure
  • profiler - fills in metadata information about profiles and channels
  • profile_stats - counts statistics per profile and channel
  • stats - counts statistics per ODID
  • uid - fills in user identity information

External storage plugins

  • fastbit - uses FastBit library to store and index data
  • fastbit_compression - uses FastBit library to store and index data with optional compression support
  • json - converts data into JSON format
  • nfdump - stores data in NFDUMP file format
  • postgres - stores data into PostgreSQL database
  • statistics - uses RRD library to generate statistics for collected data
  • unirec - stores data in UniRec format

Built-in tools

ipfixviewer and ipfixconf

Described in IPFIXcol's README

External tools

fbitconvert

Converts data from the NFDUMP file format into FastBit. Uses IPFIXcol, the nfdump input plugin, and the fastbit storage plugin.

More info in its README

fbitdump

Tool for manipulating IPFIX data in the FastBit database format. It uses the FastBit library to read and index data.

More info in its README

fbitexpire

Daemon for removing old data.

More info in its README

fbitmerge

Tool for merging FastBit data (saves disk space, reduces the number of files, etc.)

More info in its README

profilesdaemon

Tool for profile management and distribution

More info

How to install

Individual packages of the IPFIXcol framework can be installed from the Fedora Copr repository. Just add the repository to your system:

dnf copr enable @CESNET/IPFIXcol 

And install the packages you need (e.g. IPFIXcol framework and JSON output plugin):

dnf install ipfixcol ipfixcol-json-output

If you are not using one of the supported operating systems, you can build IPFIXcol from sources.

How to build

Dependencies must be installed first. For Fedora, CentOS, and RHEL, the list of necessary packages is as follows:

autoconf bison docbook-style-xsl doxygen flex 
gcc gcc-c++ git libtool libxml2 libxml2-devel 
libxslt lksctp-tools-devel lzo-devel make 
openssl-devel GeoIP-devel rrdtool-devel
sqlite-devel postgresql-devel corosync corosync-devel rpm-build
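
For example, they can all be installed in one step with dnf (a sketch using the package list above; substitute yum on older systems):

sudo dnf install autoconf bison docbook-style-xsl doxygen flex \
    gcc gcc-c++ git libtool libxml2 libxml2-devel \
    libxslt lksctp-tools-devel lzo-devel make \
    openssl-devel GeoIP-devel rrdtool-devel \
    sqlite-devel postgresql-devel corosync corosync-devel rpm-build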

Debian and Ubuntu distributions use different names for some of the packages:

autoconf bison build-essential docbook-xsl doxygen flex
git liblzo2-dev libtool libsctp-dev libssl-dev libxml2
libxml2-dev pkg-config xsltproc libgeoip-dev librrd-dev
libsqlite3-dev libpq-dev libcpg-dev corosync-dev

IPFIXcol does not support OpenSSL 1.1; therefore, you need to use libssl1.0-dev on Debian Jessie.

Moreover, you need to build the FastBit library.

First, download the IPFIXcol git repository (do NOT forget to use the --recursive option):

git clone --recursive https://github.com/CESNET/ipfixcol.git

Note: If you have updated from a previous revision of the repository without the submodule, or if you forgot to add the --recursive option, you can download the submodule manually:

git submodule update --init --recursive

After installing all dependencies and downloading the repository, the whole framework can be built at once with

autoreconf -i 

to generate the configure script from configure.ac and Makefile.in from Makefile.am, and to install missing files.

./configure

to configure packages in subdirectories and generate Makefiles.

make
sudo make install
sudo ldconfig

to build and install all projects. The call to ldconfig is necessary to update the dynamic linker cache so that the dependency libraries can be found.

Or you can build each part (collector, tool(s), external plugin(s)) separately.

Projects that depend on ipfixcol headers check the relative path to the base/headers directory for the headers. When a project is built separately from this structure, it needs the headers installed (the ipfixcol-devel package).
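
For example, a separate build of the collector core alone might look like this (a sketch; base is the collector's subdirectory in the repository):

cd base
autoreconf -i
./configure
make
sudo make install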

Docker

IPFIXcol can be used with Docker. See Docker howto.
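
A minimal containerized run might look like this (a sketch, not the official howto; the image tag is arbitrary, and 4739 is the standard IPFIX port used elsewhere in this document):

docker build -t ipfixcol .
docker run --rm -p 4739:4739/udp ipfixcol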

Ansible

IPFIXcol can also be installed using Ansible orchestration. See Ansible howto.

RPM

Each part of the framework supports building RPM packages by running

make rpm

RPMs can be built only for the individual parts, not for the whole project at once.
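
For example, to build the RPM for the collector core only (a sketch; any subproject directory such as base or tools/fbitdump works the same way):

cd base
make rpm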

FastBit

Plugins and tools that use the FastBit file format need the FastBit library installed. The IPFIXcol framework uses its own fork of the FastBit library to maintain compatibility.

IPFIXcol's FastBit library can be found here.

Contact us

Reporting bugs

If you find any bug, you can report it in the issue tracker here on GitHub.

Contributing to IPFIXcol

We are open to contributions to IPFIXcol which improve the stability and functionality of the collector. To keep the code readable and consistent, please adhere to the coding style document.

Forum

If you have any questions, or if you just want to share your ideas about useful features, please use this forum.


ipfixcol's Issues

fbitdump: path concatenation failures (-M and -r)

It seems that fbitdump is performing a strict concatenation of the paths provided using the '-M' and '-r' parameters. This can easily lead to errors and should be very simple to fix. Example:

user@machine:~$fbitdump -C fbitdump.xml -M /<path>/0 -r 2014/10/28/ic20141028132900
Cannot open directory "/<path>/02014/10/28/ic20141028132900/"
No tables found in specified directory

To fix this kind of error, fbitdump should check whether an additional '/' must be inserted.
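
A minimal sketch of such a check (a hypothetical helper, not fbitdump's actual code):

#include <stdio.h>
#include <string.h>

/* Join a base directory and a relative path, inserting '/' only when
 * the base does not already end with one. */
static void join_paths(char *out, size_t outlen, const char *base, const char *rel)
{
    size_t len = strlen(base);
    const char *sep = (len > 0 && base[len - 1] != '/') ? "/" : "";
    snprintf(out, outlen, "%s%s%s", base, sep, rel);
}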

json SEND crashes collector when listener recycles

Hi guys. Again, great work on this project, I love it.

One thing I noticed: I have my ipfixcol using the json writer; it writes files and also SENDs to a logstash listener on another machine.

As I play with logstash and reconfigure the filter to process the ipfixcol JSON data (by the way, I had to modify the code to get rid of "ipfix." on all the outgoing JSON elements so that elasticsearch doesn't barf), when the logstash listener recycles the connection after a config change, ipfixcol just crashes without any messages (nothing in syslog). I'm running on RHEL7, sending to logstash on another RHEL7.

Not sure if you guys have witnessed this behavior or not.

Broken test: 'one input + forwarding'

The output of the test suite in your devel branch is currently as follows:

Testing 'basic - one input file'
Testing 'basic - two input files'
Testing 'basic - withdraw message'
Testing 'ipfixsend - enterprise fields,sctp'
Testing 'ipfixsend - two inputs sctp'
Testing 'ipfixsend - two inputs tcp'
Testing 'ipfixsend - two inputs udp'
Testing 'malformed - bad template field count'
Testing 'malformed - missing template 258'
Testing 'malformed - missingTmpl258,BadSetLength'
Testing 'malformed - missingTmpl258,BadSetLength(long)'
Testing 'malformed - no templates'
Testing 'malformed - valid header and zeros'
Testing 'one input + forwarding'
diff: out-second*: No such file or directory
Testing 'two inputs + filter by ipVersion'
Testing 'two inputs + joinflows'
Testing 'two inputs with same ODID'

Test 'basic - one input file': OK
Test 'basic - two input files': OK
Test 'basic - withdraw message': OK
Test 'ipfixsend - enterprise fields,sctp': OK
Test 'ipfixsend - two inputs sctp': OK
Test 'ipfixsend - two inputs tcp': OK
Test 'ipfixsend - two inputs udp': OK
Test 'malformed - bad template field count': OK
Test 'malformed - missing template 258': OK
Test 'malformed - missingTmpl258,BadSetLength': OK
Test 'malformed - missingTmpl258,BadSetLength(long)': OK
Test 'malformed - no templates': OK
Test 'malformed - valid header and zeros': OK
Test 'one input + forwarding': OK
Test 'two inputs + filter by ipVersion': OK
Test 'two inputs + joinflows': OK
Test 'two inputs with same ODID': OK

Testing done; 17/17 tests passed

As you can see, although no errors are reported, diff reports an error about a missing file. By the way, the fact that the test succeeds while an error occurs is already suspicious in itself...
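
One way the test script could catch this (a sketch with hypothetical file names; the real script may be structured differently) is to propagate diff's exit status instead of discarding it:

if ! diff expected-output actual-output; then
    echo "Test 'one input + forwarding': FAILED"
    exit 1
fi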

Debian: compilation error as of 5b0eadd

The IPFIXcol base package does not compile anymore on Debian (v7.8) systems as of 5b0eadd:

configurator.c: In function 'config_compare_xml':
configurator.c:574:2: error: unknown type name 'xmlBufPtr'
configurator.c:574:19: warning: initialization makes integer from pointer without a cast [enabled by default]
configurator.c:576:2: warning: implicit declaration of function 'xmlBufGetNodeContent' [-Wimplicit-function-declaration]
configurator.c:577:2: warning: implicit declaration of function 'xmlBufContent' [-Wimplicit-function-declaration]
configurator.c:577:25: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
configurator.c:580:2: error: unknown type name 'xmlBufPtr'
configurator.c:580:19: warning: initialization makes integer from pointer without a cast [enabled by default]
configurator.c:583:25: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
configurator.c:598:16: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
configurator.c:599:16: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]

So far, I've not been able to discover what in 5b0eadd is actually causing this behavior. It works fine on Ubuntu (v14.04.1). Do you have any clue?

fbitdump: option '-N' (plainLevel) insufficiently explained in help

In the nfdump era, the '-N' option would cause all numbers to be printed 'as plain as possible'. fbitdump, however, expects an additional parameter to '-N', indicating the 'plainLevel'. How this 'plainLevel' is actually supposed to work is not explained anywhere in the help. Also, judging by the source of Configuration.cpp, it seems that the 'plainLevel' is related to the various plugins, but not really to the basic fields in fbitdump's output.

Concrete example: in a situation where I want to print the protocol number in plain form (so '6' instead of 'TCP'), it seems I have to use 'plainLevel' 10. The logic behind this is, however, explained nowhere.
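
Based on that observation, printing the protocol number plainly would look like this (the path is a placeholder):

fbitdump -R path/to/fbit_flows -N 10 -c10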

[SCTP] ERROR Unable to unregister the source from the main template manager!

Hi,

Could you help me with following case:

Problem description:
Storage plugin is not closed properly if SCTP exporter disconnects without sending any ipfix data.

Ipfixcol version:
0.9.3 buildID: 520399

Test scenario

  1. start ipfixcol with dummy storage plugin (implementation attached):
    $ ipfixcol -s -v3

  2. establish new SCTP association e.g. with sctp_darn:
    $ sctp_darn -H 127.0.0.1 -h 127.0.0.1 -p 4739 -s
    sctp_darn ready to send...
    127.0.0.1:0-127.0.0.1:4739>
    Recieved SCTP_COMM_UP
    New connection, peer addresses
    127.0.0.1:4739
    10.0.2.15:4739
    192.168.99.101:4739
    172.17.0.1:4739
    127.0.0.1:0-127.0.0.1:4739>

  3. close association with Ctrl + C

Expected result:
Storage plugin is closed properly after the exporter disconnects (or is not initialized at all, since no data was sent)

Actual result:
Storage plugin initializes after the exporter is disconnected and cannot be closed. To close the plugin, a main application shutdown is required:

INFO: SCTP input: New SCTP association from ::ffff:127.0.0.1
WARNING: SCTP input: Packet header is incomplete; skipping message...
INFO: SCTP input: SCTP input plugin: Exporter disconnected
DEBUG: dummy intermediate process: [0] Received IPFIX message
INFO: test_storage: TEST_STORAGE STARTING
INFO: output manager: [0] Data Manager created
DEBUG: output manager: [0] Closed source
ERROR: output manager: [0] Unable to unregister the source from the main template manager!
...
Signal detected (2); exiting as soon as possible...
INFO: test_storage: STORAGE PLUGIN CLOSED

Attachments:

  1. Storage plugin implementation
  2. Console output
  3. Message flow

Kind Regards
Krzysztof

Last flow seemingly having all bits in all fields set to 1

There seems to be a problem with the 'last' flow in a result set, e.g.:

fbitdump -R path/to/fbit_flows -A%scrip6,%dstip6 -m%flDESC -c10

sometimes yields:

        Date flow start  Duration                                Dst IPv6:Packets    Bytes Flows
584556019-04-03 15:.615         0 ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff:214.7 T  214.7 T 50000
(...)

This also happens when not sorting DESC and not limiting the number of results.

Centralize non-IPFIX to IPFIX conversion code

The code for converting non-IPFIX PDUs to IPFIX (base/src/input/{sctp,tcp,udp}/convert.{c,h}) is currently duplicated among the various input plugins. This is error-prone. Could we think about a way to centralize this code, perhaps under the 'util' umbrella?
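
One possible shape for such a shared utility (purely hypothetical names and signature; the real convert.c API may differ):

/* util/convert.h (hypothetical): a single conversion entry point shared by
 * the sctp, tcp, and udp input plugins instead of three duplicated copies. */
#include <sys/types.h>

enum nf_transport { NF_SCTP, NF_TCP, NF_UDP };

/* Convert a NetFlow v5/v9 PDU in *packet (of length *len) to IPFIX in place,
 * reallocating the buffer and updating *len as needed. */
int convert_to_ipfix(char **packet, ssize_t *len, enum nf_transport transport);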

fbitdump: memory leak when aggregation is used

Any form of aggregation in fbitdump results in memory leaks:

556 (144 direct, 412 indirect) bytes in 1 blocks are definitely lost in loss record 21 of 22
        at 0x4C286E7: operator new(unsigned long) (vg_replace_malloc.c:287)
        by 0x4183DD: fbitdump::Configuration::getAggregateColumns() const (Configuration.cpp:662)
        by 0x4099E9: fbitdump::AggregateFilter::AggregateFilter(fbitdump::Configuration&) (AggregateFilter.cpp:52)
        by 0x41FB9E: main (fbitdump.cpp:109)
596 (144 direct, 452 indirect) bytes in 1 blocks are definitely lost in loss record 22 of 22
        at 0x4C286E7: operator new(unsigned long) (vg_replace_malloc.c:287)
        by 0x4183DD: fbitdump::Configuration::getAggregateColumns() const (Configuration.cpp:662)
        by 0x420170: main (fbitdump.cpp:165)

Cannot get individual json files per ODID

PS: What I'm being told is that there is 1 router exporting, but there are 6 ODIDs incoming; not sure I believe it..., but could you validate my startup.xml please? I would like one IPFIX file to be created along with a JSON file for each ODID; I'm not sure I'm doing it correctly. It looks like I only get 1 JSON file per exporter, NOT per ODID.

I know I'm doing something wrong, but I've checked and rechecked and cannot figure this out. Sorry to pester you so much.

I have this startup.xml below. I have 2 exporting processes: 1 for the ipfix file format, which I want to contain all ODIDs, and another for JSON, for which I want a separate file for each ODID.

I don't get any ipfix file and only 1 JSON file, even though I see debug messages that indicate several JSON exporters were created and 1 ipfix exporter was created.

Am I doing something stupid?

<?xml version="1.0" encoding="UTF-8"?>
<ipfix xmlns="urn:ietf:params:xml:ns:yang:ietf-ipfix-psamp">

    <!--## Every collecting process will be started as new process -->
    <collectingProcess>
            <!--## Arbitrary collecting process name  -->
            <name>UDP collector</name>
            <!--## Type of the collector. Supported types are defined in internalcfg.xml -->
            <udpCollector>
                    <!--## Arbitrary udp collector name -->
                    <name>Listening port 4739</name>
                    <!--## Local listening port -->
                    <localPort>4739</localPort>
                    <!--## Template lifetime in seconds -->
                    <templateLifeTime>1800</templateLifeTime>
                    <!--## Options template lifetime in seconds -->
                    <optionsTemplateLifeTime>1800</optionsTemplateLifeTime>
                    <!--## Template lifetime in packets (for how many packets is template valid) -->
                    <!-- <templateLifePacket>5</templateLifePacket>  -->
                    <!--## Options template lifetime in packets -->
                    <!-- <optionsTemplateLifePacket>100</optionsTemplateLifePacket>  -->
                    <!--## Local address to listen on. If empty, bind to all interfaces -->
                    <localIPAddress></localIPAddress>
            </udpCollector>
            <!--## Name of the exporting process. Must match exporting process name -->
            <exportingProcess>File writer UDP</exportingProcess>
            <exportingProcess>JSON writer UDP</exportingProcess>
            <!--## File for exporting status information to (combined with -S) -->
            <statisticsFile>/prod/eclapp/log/ipfixcol_stat.log</statisticsFile>
    </collectingProcess>

    <!--## Exporting process configuration -->
    <exportingProcess>
            <!--## Name of the exporting process, must match <exportingProcess> element
                       in <collectingProcess> -->
            <name>File writer UDP</name>
             <singleManager>yes</singleManager>
            <!--## Specification of storage plugin -->
            <destination>
                    <name>all ODIDs</name>
                    <fileWriter>
                            <fileFormat>ipfix</fileFormat>
                            <file>file://prod/test/flow/collected-records-udp-all-odid.ipfix</file>
                    </fileWriter>
            </destination>
    </exportingProcess>

    <exportingProcess>
      <name>JSON writer UDP</name>
      <destination>
          <name>ODID 9</name>
          <observationDomainId>9</observationDomainId>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.9.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
      <destination>
          <name>ODID 256</name>
          <observationDomainId>256</observationDomainId>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.256.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
      <destination>
          <name>ODID 259</name>
          <observationDomainId>259</observationDomainId>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.259.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
      <destination>
          <name>ODID 261</name>
          <observationDomainId>261</observationDomainId>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.261.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
      <destination>
          <name>ODID 65797</name>
          <observationDomainId>65797</observationDomainId>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.65797.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
         <destination>
          <name>ODID 65795</name>
          <observationDomainId>65795</observationDomainId>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.67595.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
         <destination>
          <name>UNKNOWN ODID</name>
          <fileWriter>
             <fileFormat>json</fileFormat>
             <metadata>no</metadata>
             <tcpFlags>formated</tcpFlags>
             <timestamp>epoc</timestamp>
             <protocol>formated</protocol>
             <ignoreUnknown>no</ignoreUnknown>
             <!-- output>
               <type>print</type>
             </output -->
             <!--output>
               <type>send</type>
               <ip>127.0.0.1</ip>
               <port>4444</port>
               <protocol>udp</protocol>
             </output -->
             <output>
               <type>file</type>
               <path>/prod/test/flow/</path>
               <prefix>json.unknown.</prefix>
               <dumpInterval>
                 <timeWindow>300</timeWindow>
                 <timeAlignment>yes</timeAlignment>
               </dumpInterval>
             </output>
             <!-- output>
               <type>server</type>
               <port>4800</port>
               <blocking>no</blocking>
             </output -->
           </fileWriter>
         </destination>
    </exportingProcess>

    <!-- List of active Intermediate Plugins -->
    <intermediatePlugins>
            <!-- Dummy Intermediate Plugin - does nothing -->
            <dummy_ip>
            </dummy_ip>

    </intermediatePlugins>
</ipfix>

Here is debug (./configure --enable-debug) and -v3 output that shows I've actually created multiple json storage....

INFO: dummy Intermediate Process: Successfully initialized
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-json-output.so
INFO: configurator: [26698] Opening storage xml_conf: /usr/share/ipfixcol/plugins/ipfixcol-ipfix-output.so

One thing I notice is that I only see 1 of these records:

...
INFO: UDP input: New UDP exporter connected (unique port and address)
DEBUG: ipfix_message: [259] Processing data
DEBUG: json_storage: initialized
INFO: output manager: [259] Data Manager created
DEBUG: output manager: [259] New source
DEBUG: json_storage(file): Thread started...
DEBUG: ipfix_message: [259] Processing data
....

I don't see a "New UDP exporter for any of the other ODID's I'm expecting, but I do see records being processed for them:

....
DEBUG: ipfix_message: [261] Processing data
WARNING: preprocessor: [261] Data template with ID 256 not found
<NOTE: that warning goes away when I get the template>
DEBUG: dummy Intermediate Process: got IPFIX message!
....

So it looks like I have 1 exporter incoming with multiple ODIDs (9, 256, 259, 261, 65797, ...).

A single router can have multiple ODIDs as I understand it, and different processes can have their own? I'm confused as to what I'm seeing: no IPFIX file and only one JSON file.

[test@VH-BLT0 flow]$ pwd
/prod/test/flow
[test@VH-BLT0 flow]$ ls -lt
total 7604
-rw-rw-r--. 1 test test 1019026 Feb 29 17:57 json.259.201602292255
-rw-rw-r--. 1 test test 2171132 Feb 29 17:55 json.259.201602292250
-rw-rw-r--. 1 test test 2031868 Feb 29 17:50 json.259.201602292245
-rw-rw-r--. 1 test test 2129968 Feb 29 17:45 json.259.201602292240
-rw-rw-r--. 1 test test  418509 Feb 29 17:40 json.259.201602292235

Duplicate data written to fields when a field is blank

I have been using IPFIXcol very successfully, but have been having a strange issue: if a blank entry is sent, it gets written to the JSON file and repeated in the next few elements; see below. ID 204 has 0x3b, but it gets repeated in the following elements instead of the data they should contain. This is on the latest dev branch code.

"ipfix.e1XXXXid204": "0x3b", "ipfix.e1XXXXid210": "0x3b", "ipfix.e1XXXXid211": "0x3b",

Cheers
S.

NetFlow conversion test cases

The test suite as currently present in IPFIXcol only supports IPFIX files as input traffic. An integral part of IPFIXcol is, however, the code that converts NetFlow v5 and NetFlow v9 to IPFIX. Hence, non-IPFIX input traffic should be tested and supported by the test suite too.

The most straightforward solution would be to use tcpreplay for replaying NetFlow traffic. In case tcpreplay is not available on the system, a warning should be printed and the test should be marked successful, as sketched below.
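
A sketch of the proposed guard in the test script (the trace file name is hypothetical):

if ! command -v tcpreplay >/dev/null 2>&1; then
    echo "WARNING: tcpreplay not found; skipping NetFlow conversion tests"
    exit 0
fi
tcpreplay -i lo netflow_v9_sample.pcap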

As a concrete example, I've prepared a test case based on NetFlow traffic for #118.

Fill ipfix_message components

We had several intermediate plugins that were running perfectly fine, also when they were chained. Last week, however, we found out that adding the ODIP plugin to the plugin chain made IPFIXcol crash. The problem was actually rather simple: in March 2015, some components, such as metadata, were added to struct ipfix_message. Our plugins were developed before those components were introduced and completely ignored them, also when duplicating and editing IPFIX messages. As such, metadata, for example, was no longer available in IPFIX messages, while the ODIP plugin expects it to be there: SEGFAULT.

To solve or work-around this, I was wondering what would be a good approach. Some ideas:

  • Add a check to intermediate_process.c, right after calling intermediate_process_message, and draw an error if the new components are missing.
  • Split the functionality of the function message_create_from_mem. Funny enough, this function accepts some (older) ipfix_message components (i.e., input_info and source_status) as arguments, while the new components (e.g., metadata) seem to have been forgotten. Regarding a functional split, we could think about a method that duplicates an IPFIX message without data (i.e., without (option) template sets and data sets) and one that includes data. The split could also be achieved using function arguments, of course.

Any other ideas are welcome.

cannot compile lnfstore

Hello, please note the tail of the compile log down below. The ipfixcol plugin lnfstore cannot be compiled anymore; the last working version comes from Dec15 (b4599aa).

ipfixcol-lnfstore-output version 0.2.5
Prefix........: /usr/local
Distribution..: debian
C Compiler....: gcc -Wall -std=gnu99 -I/usr/include/libxml2 -I./../../../base/headers
Linker........: -lnf -lxml2
rpmbuild......: rpmbuild
Build doc.....: yes
xsltproc......: xsltproc
xsltmanstyle..: /usr/share/xml/docbook/stylesheet/docbook-xsl/manpages/docbook.xsl

make all-am
make[1]: Entering directory '/tmp/build_area/ipfixcol/plugins/storage/lnfstore'
/bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I/usr/include/libxml2 -I./../../../base/headers -Wall -std=gnu99 -MT lnfstore.lo -MD -MP -MF .deps/lnfstore.Tpo -c -o lnfstore.lo lnfstore.c
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I/usr/include/libxml2 -I./../../../base/headers -Wall -std=gnu99 -MT lnfstore.lo -MD -MP -MF .deps/lnfstore.Tpo -c lnfstore.c -fPIC -DPIC -o .libs/lnfstore.o
mv -f .deps/lnfstore.Tpo .deps/lnfstore.Plo
/bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I/usr/include/libxml2 -I./../../../base/headers -Wall -std=gnu99 -MT storage.lo -MD -MP -MF .deps/storage.Tpo -c -o storage.lo storage.c
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I/usr/include/libxml2 -I./../../../base/headers -Wall -std=gnu99 -MT storage.lo -MD -MP -MF .deps/storage.Tpo -c storage.c -fPIC -DPIC -o .libs/storage.o
mv -f .deps/storage.Tpo .deps/storage.Plo
/bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I/usr/include/libxml2 -I./../../../base/headers -Wall -std=gnu99 -MT translator.lo -MD -MP -MF .deps/translator.Tpo -c -o translator.lo translator.c
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I/usr/include/libxml2 -I./../../../base/headers -Wall -std=gnu99 -MT translator.lo -MD -MP -MF .deps/translator.Tpo -c translator.c -fPIC -DPIC -o .libs/translator.o
translator.c: In function 'tr_general':
translator.c:186:38: error: 'LNF_FLD_INFO_SIZE' undeclared (first use in this function)
if (lnf_fld_info(item_info->lnf_id, LNF_FLD_INFO_SIZE, &size,
^
translator.c:186:38: note: each undeclared identifier is reported only once for each function it appears in
Makefile:491: recipe for target 'translator.lo' failed
make[1]: *** [translator.lo] Error 1
make[1]: Leaving directory '/tmp/build_area/ipfixcol/plugins/storage/lnfstore'
Makefile:371: recipe for target 'all' failed
make: *** [all] Error 2

fbitdump filtering on da6/dstip6 gives empty result set

Selecting an IPv6 address that is definitely in the data, via

fbitdump -R path/to/fbit_flows -o long6 -c10

and using that to filter the data on, like

fbitdump -R path/to/fbit_flows '%dstip6 == 2001:db8:1911:b111::148:14:0'

does not yield any results, while they are expected. There is no difference in using da6, dstip6, =, ==.
If needed, I can provide the dataset for this.

fbitdump does not build with libfastbit (May 16, 2016)

Since the update of libfastbit on May 16, 2016, fbitdump does not build anymore:

g++ -DHAVE_CONFIG_H -I. -I..  -I../3rdparty  -Wall -O2 -std=gnu++11 -fno-strict-aliasing  -MT AggregateFilter.o -MD -MP -MF .deps/AggregateFilter.Tpo -c -o AggregateFilter.o AggregateFilter.cpp
In file included from Column.h:45:0,
                 from Configuration.h:45,
                 from Filter.h:44,
                 from AggregateFilter.h:44,
                 from AggregateFilter.cpp:40:
Cursor.h:98:2: error: 'typeList' in 'class ibis::table' does not name a type
  ibis::table::typeList columnTypes; /**< Column types of the table */
  ^

json plugin - last element in json output is a single double quote if string is empty

I'll provide examples soon. I'm noticing that if a string is empty and the empty string is the last element in the JSON array, the output will contain 1 " (a single double quote)... I believe the spec indicates empty strings should always be "" (two double quotes)...

example of what I mean:

....., ipfix.somevariable: " }

My downstream customers will have issues parsing this I think.

The spec at http://www.json.org, about halfway down, describes the "string" data flow...

When I get a chance, I'll see if I can work out a hotfix.

Test 'ipfixsend - two inputs udp' fails

The test ipfixsend - two inputs udp always fails. By looking at the test's output files, it can be concluded that a significant amount of data is missing (compared to the expected output).

IPFixcol

Hello,

Would appreciate your help: when I start ipfixcol, there is the following message:
"INFO: configurator: No profile configuration" - what does this message mean?
I'm trying to get the collector working for Cisco routers, but it seems it does not work; there is no .ipfix file in tmp. The exporter works correctly on the Cisco, and I see tcpdump data hitting UDP port 2055. Please can you send me a simple startup.xml?

Thanks
idsteiner

Memory leak in UDP input plugin

From time to time, I get the following report from Valgrind:

==26923== 10 bytes in 2 blocks are definitely lost in loss record 12 of 153
==26923==    at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26923==    by 0x69E04B0: input_init (udp_input.c:159)
==26923==    by 0x408F14: config_add_input (configurator.c:355)
==26923==    by 0x409F4A: config_process_new_startup (configurator.c:681)
==26923==    by 0x40A2A7: config_reconf (configurator.c:1133)
==26923==    by 0x406EDF: main (ipfixcol.c:340)

There's no clear way to reproduce this issue (it just happens from time to time), and I cannot pinpoint the source of the problem yet. What I can say, though, is that the issue is not solved by checking whether the variables in udp_input.c:181,183,185,187 are already set before (over)writing the pointer.

Storage plugin initialization on startup

The current implementation initializes storage plugins only at the moment the plugin is actually needed, i.e., once there is something to be stored. If initialization fails, though, IPFIXcol just continues operation. This behavior is extremely error-prone, especially in unattended setups. I therefore propose the following changes (sketched below):

  • Perform storage plugin initialization on IPFIXcol startup
  • Stop IPFIXcol execution if storage plugin initialization fails and only one storage plugin has been configured/enabled
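
A sketch of what the startup-time check could look like (hypothetical descriptor and function names; the real structures live in IPFIXcol's configurator):

#include <stdio.h>

/* Hypothetical storage plugin descriptor */
struct storage_plugin {
    const char *name;
    int (*init)(void **config); /* returns 0 on success */
    void *config;
};

/* Initialize every configured storage plugin at startup; report each failure
 * and abort (-1) when no plugin could be initialized at all. */
static int init_storage_plugins(struct storage_plugin *plugins, size_t count)
{
    size_t ok = 0;
    for (size_t i = 0; i < count; ++i) {
        if (plugins[i].init(&plugins[i].config) == 0) {
            ok++;
        } else {
            fprintf(stderr, "ERROR: storage plugin '%s' failed to initialize\n",
                    plugins[i].name);
        }
    }
    return ok > 0 ? 0 : -1;
}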

Unirec storage plugin: libtrap not available for Debian/Ubuntu

The libtrap library, which is required for the Unirec storage plugin, is not available on Debian/Ubuntu-based systems. As such, building IPFIXcol from the top-level directory always fails. I therefore propose not to build the Unirec plugin if 1) it is compiled from the top-level directory and 2) Debian/Ubuntu is detected. Do you agree, or do you propose another solution?

Issue with commit 50433958ad9b8417858abae166afae158a6be817

There was 1 file that was left broken I think:

plugins/input/nfdump/nfinput.c

316: - for (i = 0; i < MSG_MAX_TEMPLATES; i++) {
316: + for (i = 0; i < MSG_MAX_TEMPL_SETS; i++) {

Sorry - I would fix and issue a pull request but my system's git is tied to my personal github not my public one.

Question - How to get the ipfix filewriter to rotate

I did some capacity tests writing out received packets via JSON and via an ipfix file. The ipfix native format is way better: I don't start losing data until >15K packets per second, whereas I start losing data using JSON output at 4K packets per second.

No worries, I can store to ipfix and post-process the ipfix file to JSON later. The question is, how do I get it to rotate out the ipfix files (or do I need to make my own plugin to do that)?

Thanks guys, this code is awesome, really making it work for me.

fastbit issues when compiling ipfixcol

I get the following error when trying to compile ipfixcol:

checking for library containing fastbit_init... -lfastbit
./configure: line 16768: syntax error near unexpected token `LIBFASTBIT,'
./configure: line 16768: `PKG_CHECK_MODULES(LIBFASTBIT, fastbit >= 2.0.3.2,,'
configure: error: ./configure failed for tools/fbitdump
configure: error: ./configure failed for tools/fbitdump

I've installed fastbit, and pkg-config is installed:

# fastbit-config --version
FastBit 2.0.3.2

# pkg-config --version
0.28

This is on ubuntu 16.04

fbitdump: combining -a and filter gives 'Select (...) with filter 1=1 failed'

Aggregating over the standard 5-tuple using -a results in aggregated flows as expected, but when a filter is added, e.g. on a source and destination address that were in the aggregated result set, fbitdump errors out and stops.

$ fbitdump  -R .  '%sa6 2001:db8::521:6954:d379:d3c5:7947 and %da6 2a00:db8:1111:c01::7d'  -c3
( 3 flows returned, so we're sure these addresses are in the dataset)

# adding -a to aggregate over 5-tuple:
$ fbitdump  -R .  '%sa6 2001:db8::521:6954:d379:d3c5:7947 and %da6 2a00:db8:1111:c01::7d' -a
Select 'e0id28p0,e0id28p1,e0id4,e0id27p0,e0id27p1, flows, e0id152,( e0id153 - e0id152 ) as td,e0id2,e0id1' with filter '1 = 1' failed

# but using -a without filter does return what we're looking for
$ fbitdump  -R .  -a | grep "2001:db8::521:6954:d379:d3c5:7947" | grep "2a00:db8:1111:c01::7d"
(2 unidirectional flows returned)

Using -P for post-aggregation filtering does not throw the error, but shows an empty result set.
I have a dataset available to reproduce this, if need be. At this moment, I'm not sure whether this occurs only for v6 data, or also in v4 data.

Last template before options template not parsed properly

We currently have a trace (basically the same one as for issue #14) that contains NetFlow v9 datagrams. Some of those datagrams feature four template records (IDs 257-260), followed by an options template (ID 261). However, when running IPFIXcol in high-verbosity mode (-v 3), I see the following output:

DEBUG: ipfix_message: [97] Processing data.
WARNING: preprocessor: [97] Template ID 256 already exists. Rewriting.
DEBUG: template manager: Creating new template 257
WARNING: preprocessor: [97] Template ID 257 already exists. Rewriting.
DEBUG: template manager: Creating new template 258
WARNING: preprocessor: [97] Template ID 258 already exists. Rewriting.
DEBUG: template manager: Creating new template 259
WARNING: preprocessor: [97] Template ID 1 is reserved and not valid for data set!
WARNING: preprocessor: [97] Cannot parse template set, skipping to next set
WARNING: preprocessor: [97] Options template ID 259 already exists. Rewriting.
DEBUG: template manager: Creating new template 261

As you can see, the last template record (with ID '260') is not parsed properly; IPFIXcol reports ID '1' and skips further processing. I've prepared a short trace for you that triggers the behavior. Please contact me at kirc&secdorks.net to get the trace.

Data records for options templates (converted from NFv9) do not increase sequence number

Although we discovered the issue in the particular case of converting options templates from NFv9, the issue is more generic: the code in insert_timestamp_template and insert_timestamp_data, which keeps track of the sequence numbers during conversion, only works if data records feature timestamps (field IDs 21 and 22). This is, however, not a given, especially not for options templates.

Traces for reproduction can be provided upon request.

NetFlow v9 traffic from nProbe causes segfault (FastBit storage)

When exporting flow data using NetFlow v9 from nProbe,* IPFIXcol sooner or later always crashes. Valgrind reports the following:

==3922== Invalid read of size 8
==3922==    at 0x690D35D: element::get_part_info() (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x690AB80: template_table::update_part(std::string) (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x690B586: template_table::flush(std::string) (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x6901DB5: flush_data(fastbit_config*, unsigned int, std::map<unsigned short, template_table*, std::less<unsigned short>, std::allocator<std::pair<unsigned short const, template_table*> > >*, bool) (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x6903BF2: store_packet (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x405578: storage_plugin_thread (data_manager.c:108)
==3922==    by 0x53A9181: start_thread (pthread_create.c:312)
==3922==    by 0x56B9EFC: clone (clone.S:111)
==3922==  Address 0x18032bb88 is not stack'd, malloc'd or (recently) free'd
==3922== 
==3922== 
==3922== Process terminating with default action of signal 11 (SIGSEGV)
==3922==  Access not within mapped region at address 0x18032BB88
==3922==    at 0x690D35D: element::get_part_info() (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x690AB80: template_table::update_part(std::string) (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x690B586: template_table::flush(std::string) (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x6901DB5: flush_data(fastbit_config*, unsigned int, std::map<unsigned short, template_table*, std::less<unsigned short>, std::allocator<std::pair<unsigned short const, template_table*> > >*, bool) (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x6903BF2: store_packet (in /usr/local/share/ipfixcol/plugins/ipfixcol-fastbit-output.so)
==3922==    by 0x405578: storage_plugin_thread (data_manager.c:108)
==3922==    by 0x53A9181: start_thread (pthread_create.c:312)
==3922==    by 0x56B9EFC: clone (clone.S:111)
==3922==  If you believe this happened as a result of a stack
==3922==  overflow in your program's main thread (unlikely but
==3922==  possible), you can try to increase the size of the
==3922==  main thread stack using the --main-stacksize= flag.
==3922==  The main thread stack size used in this run was 8388608.
ssing data sets...

My conclusion is that it is related to the FastBit storage plugin and to flushing/writing the data to disk. I've prepared a short trace for you that triggers the behavior. Please contact me at kirc&secdorks.net to get the trace.

*) Although I mention nProbe specifically in this ticket, I cannot say whether the issue is only triggered by traffic from nProbe. I've used the following command for exporting the traffic:

nprobe -G -V9 -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %IPV4_NEXT_HOP %INPUT_SNMP %OUTPUT_SNMP %IN_PKTS %IN_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT %TCP_FLAGS %PROTOCOL %SRC_TOS %SRC_AS %DST_AS %IPV4_SRC_MASK %IPV4_DST_MASK %HTTP_HOST %BIFLOW_DIRECTION %HTTP_URL" -n 127.0.0.1:3055 -i eth0

fbitdump voip_rtpcodec plugin: unreachable code fragment

In voip_rtpcodec.c, there is a code fragment that can never be reached: the call to snprintf on line 134. Perhaps one of the conditions just before line 134 shouldn't feature a return statement. In case they all do belong there, line 134 can be removed completely.

fbitdump filtering on link-local fe80:: IPv6 adresses

Filtering on link-local IPv6 addresses does not work, e.g.:

fbitdump -R ic20151108023000 '%sa6 == fe80::22f4:21ff:fea3:1a00'

yields more than just the records having that source address, although all of the results do have link-local source addresses.
Using prefixes results in no records at all, e.g.:

fbitdump -R ic20151108023000 '%sa6 fe80::/64'

This is happening with the latest version of fbitdump, which includes the fixes for the IPv6 issues reported earlier. Filtering on global v6 addresses (and prefixes) behaves as expected.

autoreconf -i from the top directory fails

autoreconf -ivf

autoreconf: Leaving directory `plugins/input/nfdump'
autoreconf: configure.ac: adding subdirectory plugins/storage/json to autoreconf
autoreconf: Entering directory `plugins/storage/json'
aclocal: warning: couldn't open directory 'm4': No such file or directory
configure.ac:162: error: AM_COND_IF: no such condition "HAVE_XSLTPROC"
/usr/share/aclocal-1.14/cond-if.m4:23: AM_COND_IF is expanded from...
/usr/share/aclocal-1.14/cond-if.m4:23: AM_COND_IF is expanded from...
configure.ac:162: the top level
autom4te: /usr/bin/m4 failed with exit status: 1
aclocal: error: echo failed with exit status: 1
autoreconf: aclocal failed with exit status: 1

Components installed in /usr/local/, while expected in /etc/ipfixcol/

IPFIXcol expects many files to be installed in /etc/ipfixcol/, while they are actually installed in /usr/local/. When checking './configure -h' (for many components), it even says the following:

By default, `make install' will install all the files in `/usr/local/bin',
`/usr/local/lib' etc.  You can specify an installation prefix other than
`/usr/local' using `--prefix’, for instance `--prefix=$HOME’.

However, when running 'ipfixcol -h', it looks for the configuration files (internalcfg.xml, ipfix-elements.xml, and startup.xml) in /etc/ipfixcol/. This inconsistency should be fixed. Workaround:

'./configure --prefix=/etc'

Data sets not forwarded to storage plugin if previous data set features no template

While developing and debugging intermediate plugins for IPFIXcol, I observed the following behaviour. While looping over the data sets of a particular IPFIX message, I found out that one of the data sets had no template associated with it:

for (i = 0; i < 1024 && msg->data_couple[i].data_set; ++i) {
    templ = msg->data_couple[i].data_template;
    if (!templ) {
        // No template associated with this data set
    }
}

By verifying the trace used, I realised that this was not a problem of IPFIXcol in general or of a previous intermediate plugin, but an artifact in the trace. The IPFIX message referred to in this ticket features four data sets, set-0 to set-3, and set-2 is the data set with no template associated with it. Set-3 is, however, perfectly fine. The problem is that, without any hacks, set-3 is never forwarded to storage plugins. This seems to be a generic problem related to data sets with a higher set ID than the set with no template associated with it (see the sketch below).

Datasets can be provided upon request.
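
A sketch of one possible fix in the forwarding logic (hypothetical handler name; the data_couple fields are as in the snippet above):

/* Forward every data set that has a template, instead of dropping all sets
 * that follow a couple without one. */
static void forward_data_sets(struct ipfix_message *msg)
{
    for (int i = 0; i < 1024 && msg->data_couple[i].data_set; ++i) {
        if (!msg->data_couple[i].data_template) {
            continue; /* skip the orphaned set, keep processing later ones */
        }
        forward_to_storage(&msg->data_couple[i]); /* hypothetical */
    }
}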

SIGUSR1 doesn't seem to reload configuration

Hi all. First, let me congratulate you guys on such a nice piece of work. IPFIXcol is really flexible and working nicely for me. I'm testing it now and noticed that when I add entries to ipfix-elements.xml and issue a SIGUSR1 to the process, it doesn't seem to accept it. Watching /var/log/messages doesn't show any logs of reloading the configs, and my tests indicate that it didn't happen.

I have a test v5 stream incoming that converts to this JSON:
{"@type": "ipfix.entry", "ipfix.sourceIPv4Address": "10.0.0.2", "ipfix.destinationIPv4Address": "10.0.0.3", "ipfix.ipNextHopIPv4Address": "0.0.0.0", "ipfix.ingressInterface": 3, "ipfix.egressInterface": 5, "ipfix.packetDeltaCount": 1, "ipfix.octetDeltaCount": 64, "ipfix.flowStartMilliseconds": 12748589591756800, "ipfix.flowEndMilliseconds": 6924086267256832000, "ipfix.sourceTransportPort": 4242, "ipfix.destinationTransportPort": 80, "ipfix.paddingOctets": "0", "ipfix.tcpControlBits": "......", "ipfix.protocolIdentifier": "UDP", "ipfix.ipClassOfService": 1, "ipfix.bgpSourceAsNumber": 2, "ipfix.bgpDestinationAsNumber": 3}

If I take a random ID above, comment it out of ipfix-elements.xml, and restart ipfixcol, I see that the ID I commented out is in fact replaced, e.g. ipfix.e0id14 instead of ipfix.egressInterface. So far so good.

Now if I add that element back into ipfix-elements.xml and issue a SIGUSR1 to the running process, it doesn't pick it up or write any additional messages to /var/log/messages indicating that it received the signal.

Just wondering if I am doing something wrong, would love for this to work.

I'm starting the process with the ipfixcol start script, but have modified it to give it more parameters. here is my running view:
ipfixcol -v 2 -M -S 30 -s -d -e/mydir/ipfixcol-ipfix-elements.xml -i/mydir/ipfixcol-internalcfg.xml -c /mydir/ipfixcol-startup.xml -d

ipfixcol-internalcfg.xml is unchanged from original
ipfixcol-ipfix-elements.xml is unchanged with the exception of commenting out or adding back in a single element
ipfixcol-startup.xml has been modified to use the JSON output, this works wonderfully!

Thanks guys, again, thanks for this awesome piece of work!!!!

fbitdump: IPv6 address prefix filter not working

Filtering on v6 prefixes does not work; fbitdump returns all flows in the input files (i.e., no filtering is applied at all).

For example:
fbitdump -R /my/fbitdata/ '%sa6 2001:db8::/32'
will return flows that are not coming from 2001:db8::/32.

Long closing time of storage plugin can result in removal of valid templates

Hi,
I'm trying to write a storage plugin for ipfixcol, and I came across some odd behavior. In the case of a short disconnection of the ipfix exporter (shorter than the closing time of the storage plugin), we can lose templates that were received after the reconnect and thus should not be removed.

ipfixcol version: 0.9.2

I've prepared a simple storage plugin implementation just for testing purposes (see the attachment test_plugin.zip) and tested it in the following scenario:

  1. start ipfixcol with plugin:
    $ ipfixcol -s -v3
    ...
    Jan 23 15:18:23 ipfixcol[5863]: INFO: configurator: [5863] Opening storage xml_conf: /usr/local/share/ipfixcol/plugins/libtest_storage_plugin.so
    Jan 23 15:18:23 ipfixcol[5863]: INFO: configurator: No profile configuration
    ...
  2. start sending ipfix data with exporter:
    ...
    Jan 23 15:19:53 ipfixcol[5863]: INFO: SCTP input: New SCTP association from ::ffff:127.0.0.1
    Jan 23 15:19:53 ipfixcol[5863]: DEBUG: ipfix_message: [0] Processing data
    Jan 23 15:19:53 ipfixcol[5863]: INFO: preprocessor: [0] New template ID 65535
    Jan 23 15:19:54 ipfixcol[5863]: DEBUG: ipfix_message: [0] Processing data
    Jan 23 15:19:54 ipfixcol[5863]: DEBUG: dummy intermediate process: [0] Received IPFIX message
    Jan 23 15:19:54 ipfixcol[5863]: INFO: test_storage: GOT PACKET | ODID : 0 | SEQUENCE_NUMBER : 1
    ...
  3. disconnect exporter for short period of time:
    ...
    Jan 23 15:19:58 ipfixcol[5863]: INFO: SCTP input: SCTP input plugin: Exporter disconnected
    Jan 23 15:19:58 ipfixcol[5863]: DEBUG: output manager: [0] Closed source
    Jan 23 15:19:58 ipfixcol[5863]: DEBUG: output manager: [0] No source; releasing templates...
    Jan 23 15:19:58 ipfixcol[5863]: INFO: storage plugin thread: [0] No more data from Data Manager
    Jan 23 15:19:58 ipfixcol[5863]: INFO: storage plugin thread: [0] Closing storage plugin thread
    Jan 23 15:19:58 ipfixcol[5863]: INFO: test_storage: CLOSING STORAGE PLUGIN
    Jan 23 15:19:59 ipfixcol[5863]: INFO: SCTP input: New SCTP association from ::ffff:127.0.0.1
    Jan 23 15:19:59 ipfixcol[5863]: DEBUG: ipfix_message: [0] Processing data
    Jan 23 15:19:59 ipfixcol[5863]: INFO: preprocessor: [0] New template ID 65535
    ...
  4. wait for storage plugin to close:
    ...
    Jan 23 15:20:03 ipfixcol[5863]: INFO: test_storage: STORAGE PLUGIN CLOSED
    Jan 23 15:20:03 ipfixcol[5863]: INFO: template manager: [0] Removing all templates
    Jan 23 15:20:03 ipfixcol[5863]: DEBUG: template manager: Removing all templates
    Jan 23 15:20:03 ipfixcol[5863]: DEBUG: template manager: Removing all option templates
    Jan 23 15:20:03 ipfixcol[5863]: DEBUG: template manager: Removing all templates
    Jan 23 15:20:03 ipfixcol[5863]: DEBUG: template manager: Removing all option templates
    ...
  5. after that, it can be observed that "Removing all templates" also removed the ones that were received after the reconnect:
    ...
    Jan 23 15:20:03 ipfixcol[5863]: DEBUG: ipfix_message: [0] Processing data
    Jan 23 15:20:03 ipfixcol[5863]: WARNING: preprocessor: [0] Data template with ID 65535 not found
    Jan 23 15:20:03 ipfixcol[5863]: DEBUG: dummy intermediate process: [0] Received IPFIX message
    Jan 23 15:20:03 ipfixcol[5863]: INFO: test_storage: NO METADATA
    Jan 23 15:20:04 ipfixcol[5863]: DEBUG: ipfix_message: [0] Processing data
    Jan 23 15:20:04 ipfixcol[5863]: WARNING: preprocessor: [0] Data template with ID 65535 not found
    Jan 23 15:20:04 ipfixcol[5863]: DEBUG: dummy intermediate process: [0] Received IPFIX message
    Jan 23 15:20:04 ipfixcol[5863]: INFO: test_storage: NO METADATA
    ...

I think this is not desired behavior, since it's not hard to imagine that some storage plugins have prolonged closing times (especially if they are connected to remote databases).
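
To make the suspected ordering explicit, here is a minimal sketch with invented names (this is not IPFIXcol code); the point is that the deferred template cleanup is keyed only on the data source, not on a connection "generation", so templates registered by a reconnect that happens before the slow close finishes are wiped together with the old ones:

#include <cstdint>
#include <map>

std::map<uint16_t, int> templates;   // per-ODID template store (dummy value type)

void on_exporter_disconnect() {      // 15:19:58 in the log above
    // schedules remove_all_templates() to run only AFTER the storage
    // plugin has finished closing
}

void on_exporter_reconnect() {       // 15:19:59, while the close is still pending
    templates[65535] = 1;            // template from the new session
}

void on_storage_plugin_closed() {    // 15:20:03
    templates.clear();               // also drops the 15:19:59 template
}

A possible fix would be to tag each template with a connection generation and have the deferred cleanup remove only templates belonging to the generation being closed.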

I've also tried this with the TCP input plugin, and the results were the same. I'm attaching the logs for review:
logs.zip

Kind Regards
Krzysztof

fbitdump: filtering on TCP flags

Filtering on TCP flags via %flg, for example '%flg APS', yields no results.

After trying to fix this, it seems the following is happening (though I'm not sure how or where to fix it):
after parsing the string input, e.g. 'S' for flows containing a SYN packet, the resulting numerical value is parsed again. Since the parser (parseFlags() in DefaultPlugin.cpp) expects the flags in string format, it returns a numerical 0, resulting in no flows.
This can be observed by simply adding a printf() to parseFlags() and printing the value of the strFlags argument:

printf("in parseFlags, input: %s\n", strFlags);

latest convert.c still cannot convert options templates

@thorgrin Hi Petr, I've merged your latest devel branch into my fork and started testing it. Your latest convert.c still throws "WARNING: UDP input: Message conversion error; skipping message...".

I know my convert.c is ugly :). I've copied my version over yours, recompiled my fork against the latest CESNET/devel with my convert.c, and messages are processed OK.

When I get a chance (weekend work), I'll try to debug your convert.c and see where it is failing. I like yours better; you fixed all my bad memory access hacks.

tm_template_reference_dec: double free or corruption

We have had a rare crash with IPFIXcol recently, which we are still trying to reproduce (so far, without any 'luck'). This is the stack trace:

*** glibc detected *** ./ipfixcol: double free or corruption (!prev): 0x00007fc728eae8c0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x76aa6)[0x7fc73dd4eaa6]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x6c)[0x7fc73dd5384c]
./ipfixcol(tm_template_reference_dec+0x2d)[0x40a02d]
./ipfixcol[0x408f2e]
./ipfixcol[0x4058ae]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50)[0x7fc73e069b50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fc73ddb3e6d]
======= Memory map: ========
00400000-0040f000 r-xp 00000000 08:11 284590582                          <path>/ipfixcol/base/src/ipfixcol
0060f000-00610000 rw-p 0000f000 08:11 284590582                          <path>/ipfixcol/base/src/ipfixcol
00b31000-11e96000 rw-p 00000000 00:00 0                                  [heap]
7fc6f8000000-7fc6f9354000 rw-p 00000000 00:00 0
7fc6f9354000-7fc6fc000000 ---p 00000000 00:00 0
7fc700000000-7fc704000000 rw-p 00000000 00:00 0
7fc704000000-7fc70bfff000 rw-p 00000000 00:00 0
7fc70bfff000-7fc70c000000 ---p 00000000 00:00 0
7fc70c000000-7fc70fffe000 rw-p 00000000 00:00 0
7fc70fffe000-7fc710000000 ---p 00000000 00:00 0
7fc710000000-7fc714000000 rw-p 00000000 00:00 0
7fc714000000-7fc717ffd000 rw-p 00000000 00:00 0
7fc717ffd000-7fc718000000 ---p 00000000 00:00 0
7fc718000000-7fc71c000000 rw-p 00000000 00:00 0
7fc720000000-7fc723ff1000 rw-p 00000000 00:00 0
7fc723ff1000-7fc724000000 ---p 00000000 00:00 0
7fc724000000-7fc727ffa000 rw-p 00000000 00:00 0
7fc727ffa000-7fc728000000 ---p 00000000 00:00 0
7fc728000000-7fc72c000000 rw-p 00000000 00:00 0
7fc72c000000-7fc72fffb000 rw-p 00000000 00:00 0
7fc72fffb000-7fc730000000 ---p 00000000 00:00 0
7fc730000000-7fc734000000 rw-p 00000000 00:00 0
7fc734000000-7fc737ffe000 rw-p 00000000 00:00 0
7fc737ffe000-7fc738000000 ---p 00000000 00:00 0
7fc73a5cd000-7fc73a5e2000 r-xp 00000000 08:02 42336260                   /lib/x86_64-linux-gnu/libgcc_s.so.1
7fc73a5e2000-7fc73a7e2000 ---p 00015000 08:02 42336260                   /lib/x86_64-linux-gnu/libgcc_s.so.1
7fc73a7e2000-7fc73a7e3000 rw-p 00015000 08:02 42336260                   /lib/x86_64-linux-gnu/libgcc_s.so.1
7fc73a7e3000-7fc73a7e4000 ---p 00000000 00:00 0
7fc73a7e4000-7fc73afe4000 rw-p 00000000 00:00 0
7fc73afe4000-7fc73afe5000 ---p 00000000 00:00 0
7fc73afe5000-7fc73b7e5000 rw-p 00000000 00:00 0
7fc73b7e5000-7fc73b7e6000 ---p 00000000 00:00 0
7fc73b7e6000-7fc73bfe6000 rw-p 00000000 00:00 0
7fc73bfe6000-7fc73bfe7000 ---p 00000000 00:00 0
7fc73bfe7000-7fc73c7e7000 rw-p 00000000 00:00 0
7fc73c7e7000-7fc73c7f7000 r-xp 00000000 08:02 234895476                  /usr/lib/x86_64-linux-gnu/libcares.so.2.0.0
7fc73c7f7000-7fc73c9f6000 ---p 00010000 08:02 234895476                  /usr/lib/x86_64-linux-gnu/libcares.so.2.0.0
7fc73c9f6000-7fc73c9f7000 r--p 0000f000 08:02 234895476                  /usr/lib/x86_64-linux-gnu/libcares.so.2.0.0
7fc73c9f7000-7fc73c9f8000 rw-p 00010000 08:02 234895476                  /usr/lib/x86_64-linux-gnu/libcares.so.2.0.0
7fc73c9f8000-7fc73ca01000 r-xp 00000000 08:11 284623469                  <path>/ipfixcol/plugins/intermediate/proxy/.libs/ipfixcol-proxy-inter.so
7fc73ca01000-7fc73cc01000 ---p 00009000 08:11 284623469                  <path>/ipfixcol/plugins/intermediate/proxy/.libs/ipfixcol-proxy-inter.so
7fc73cc01000-7fc73cc02000 rw-p 00009000 08:11 284623469                  <path>/ipfixcol/plugins/intermediate/proxy/.libs/ipfixcol-proxy-inter.so
7fc73cc02000-7fc73cc07000 r-xp 00000000 08:11 284623430                  <path>/ipfixcol/plugins/intermediate/httpfieldmerge/.libs/ipfixcol-httpfieldmerge-inter.so
7fc73cc07000-7fc73ce06000 ---p 00005000 08:11 284623430                  <path>/ipfixcol/plugins/intermediate/httpfieldmerge/.libs/ipfixcol-httpfieldmerge-inter.so
7fc73ce06000-7fc73ce07000 rw-p 00004000 08:11 284623430                  <path>/ipfixcol/plugins/intermediate/httpfieldmerge/.libs/ipfixcol-httpfieldmerge-inter.so
7fc73ce07000-7fc73ce09000 r-xp 00000000 08:02 234881059                  /usr/lib/libsctp.so.1.0.11
7fc73ce09000-7fc73d008000 ---p 00002000 08:02 234881059                  /usr/lib/libsctp.so.1.0.11
7fc73d008000-7fc73d009000 rw-p 00001000 08:02 234881059                  /usr/lib/libsctp.so.1.0.11
7fc73d009000-7fc73d00d000 r-xp 00000000 08:11 284623394                  <path>/ipfixcol/base/src/storage/forwarding/.libs/ipfixcol-forwarding-output.so
7fc73d00d000-7fc73d20c000 ---p 00004000 08:11 284623394                  <path>/ipfixcol/base/src/storage/forwarding/.libs/ipfixcol-forwarding-output.so
7fc73d20c000-7fc73d20d000 rw-p 00003000 08:11 284623394                  <path>/ipfixcol/base/src/storage/forwarding/.libs/ipfixcol-forwarding-output.so
7fc73d20d000-7fc73d214000 r-xp 00000000 08:02 42336514                   /lib/x86_64-linux-gnu/librt-2.13.so
7fc73d214000-7fc73d413000 ---p 00007000 08:02 42336514                   /lib/x86_64-linux-gnu/librt-2.13.so
7fc73d413000-7fc73d414000 r--p 00006000 08:02 42336514                   /lib/x86_64-linux-gnu/librt-2.13.so
7fc73d414000-7fc73d415000 rw-p 00007000 08:02 42336514                   /lib/x86_64-linux-gnu/librt-2.13.so
7fc73d415000-7fc73d41b000 r-xp 00000000 08:11 284623280                  <path>/ipfixcol/base/src/input/udp/.libs/ipfixcol-udp-input.so
7fc73d41b000-7fc73d61b000 ---p 00006000 08:11 284623280                  <path>/ipfixcol/base/src/input/udp/.libs/ipfixcol-udp-input.so
7fc73d61b000-7fc73d61c000 rw-p 00006000 08:11 284623280                  <path>/ipfixcol/base/src/input/udp/.libs/ipfixcol-udp-input.so
7fc73d61c000-7fc73d69d000 r-xp 00000000 08:02 42336297                   /lib/x86_64-linux-gnu/libm-2.13.so
7fc73d69d000-7fc73d89c000 ---p 00081000 08:02 42336297                   /lib/x86_64-linux-gnu/libm-2.13.so
7fc73d89c000-7fc73d89d000 r--p 00080000 08:02 42336297                   /lib/x86_64-linux-gnu/libm-2.13.so
7fc73d89d000-7fc73d89e000 rw-p 00081000 08:02 42336297                   /lib/x86_64-linux-gnu/libm-2.13.so
7fc73d89e000-7fc73d8c0000 r-xp 00000000 08:02 42336288                   /lib/x86_64-linux-gnu/liblzma.so.5.0.0
7fc73d8c0000-7fc73dabf000 ---p 00022000 08:02 42336288                   /lib/x86_64-linux-gnu/liblzma.so.5.0.0
7fc73dabf000-7fc73dac0000 r--p 00021000 08:02 42336288                   /lib/x86_64-linux-gnu/liblzma.so.5.0.0
7fc73dac0000-7fc73dac1000 rw-p 00022000 08:02 42336288                   /lib/x86_64-linux-gnu/liblzma.so.5.0.0
7fc73dac1000-7fc73dad7000 r-xp 00000000 08:02 42336346                   /lib/x86_64-linux-gnu/libz.so.1.2.7
7fc73dad7000-7fc73dcd6000 ---p 00016000 08:02 42336346                   /lib/x86_64-linux-gnu/libz.so.1.2.7
7fc73dcd6000-7fc73dcd7000 r--p 00015000 08:02 42336346                   /lib/x86_64-linux-gnu/libz.so.1.2.7
7fc73dcd7000-7fc73dcd8000 rw-p 00016000 08:02 42336346                   /lib/x86_64-linux-gnu/libz.so.1.2.7
7fc73dcd8000-7fc73de59000 r-xp 00000000 08:02 42336289                   /lib/x86_64-linux-gnu/libc-2.13.so
7fc73de59000-7fc73e059000 ---p 00181000 08:02 42336289                   /lib/x86_64-linux-gnu/libc-2.13.so
7fc73e059000-7fc73e05d000 r--p 00181000 08:02 42336289                   /lib/x86_64-linux-gnu/libc-2.13.so
7fc73e05d000-7fc73e05e000 rw-p 00185000 08:02 42336289                   /lib/x86_64-linux-gnu/libc-2.13.so
7fc73e05e000-7fc73e063000 rw-p 00000000 00:00 0
7fc73e063000-7fc73e07a000 r-xp 00000000 08:02 42336506                   /lib/x86_64-linux-gnu/libpthread-2.13.so
7fc73e07a000-7fc73e279000 ---p 00017000 08:02 42336506                   /lib/x86_64-linux-gnu/libpthread-2.13.so
7fc73e279000-7fc73e27a000 r--p 00016000 08:02 42336506                   /lib/x86_64-linux-gnu/libpthread-2.13.so
7fc73e27a000-7fc73e27b000 rw-p 00017000 08:02 42336506                   /lib/x86_64-linux-gnu/libpthread-2.13.so
7fc73e27b000-7fc73e27f000 rw-p 00000000 00:00 0
7fc73e27f000-7fc73e3d4000 r-xp 00000000 08:02 234885316                  /usr/lib/x86_64-linux-gnu/libxml2.so.2.8.0
7fc73e3d4000-7fc73e5d4000 ---p 00155000 08:02 234885316                  /usr/lib/x86_64-linux-gnu/libxml2.so.2.8.0
7fc73e5d4000-7fc73e5dc000 r--p 00155000 08:02 234885316                  /usr/lib/x86_64-linux-gnu/libxml2.so.2.8.0
7fc73e5dc000-7fc73e5de000 rw-p 0015d000 08:02 234885316                  /usr/lib/x86_64-linux-gnu/libxml2.so.2.8.0
7fc73e5de000-7fc73e5df000 rw-p 00000000 00:00 0
7fc73e5df000-7fc73e5e1000 r-xp 00000000 08:02 42336295                   /lib/x86_64-linux-gnu/libdl-2.13.so
7fc73e5e1000-7fc73e7e1000 ---p 00002000 08:02 42336295                   /lib/x86_64-linux-gnu/libdl-2.13.so
7fc73e7e1000-7fc73e7e2000 r--p 00002000 08:02 42336295                   /lib/x86_64-linux-gnu/libdl-2.13.so
7fc73e7e2000-7fc73e7e3000 rw-p 00003000 08:02 42336295                   /lib/x86_64-linux-gnu/libdl-2.13.so
7fc73e7e3000-7fc73e803000 r-xp 00000000 08:02 42336283                   /lib/x86_64-linux-gnu/ld-2.13.so
7fc73e9f0000-7fc73e9f5000 rw-p 00000000 00:00 0
7fc73e9ff000-7fc73ea02000 rw-p 00000000 00:00 0
7fc73ea02000-7fc73ea03000 r--p 0001f000 08:02 42336283                   /lib/x86_64-linux-gnu/ld-2.13.so
7fc73ea03000-7fc73ea04000 rw-p 00020000 08:02 42336283                   /lib/x86_64-linux-gnu/ld-2.13.so
7fc73ea04000-7fc73ea05000 rw-p 00000000 00:00 0
7fffb022e000-7fffb024f000 rw-p 00000000 00:00 0                          [stack]
7fffb02cc000-7fffb02cd000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
Aborted

Although we were running several plugins at the time of the crash, it really seems to be related to IPFIXcol's base code. tm_template_reference_dec is called only once in the codebase, namely from base/src/queues.c:253, and the double free or corruption appears to have occurred in base/src/template_manager.c:739. Can a double free perhaps be caused by the last templ->next in tm_template_reference_dec being freed both at line 736 and at line 739? Or can you think of another scenario that could cause the double free?
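
For what it's worth, here is a purely hypothetical reconstruction of the pattern the question above describes (invented structure; not the actual template_manager.c code), which would abort with exactly this glibc message:

#include <cstdlib>

struct tmpl_ref { struct tmpl_ref *next; };

void broken_release(struct tmpl_ref *templ)
{
    struct tmpl_ref *last = nullptr;
    while (templ->next) {
        last = templ->next;
        templ->next = last->next;
        free(last);   // "line 736": frees every node, including the last one
    }
    free(last);       // "line 739": frees the last node a second time -> abort
}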

NetFlow v9 input: enterprise-specific fields not parsed correctly

We currently have a trace (basically the same one as for issue #14) that contains NetFlow v9 datagrams. Some fields are non-standard (in this case, defined by ntop), and cause the FastBit storage plugin to go crazy:

DEBUG: fastbit storage: Received new template: 258
ERROR: fastbit storage: Wrong element size (e-516227040id24884 - 0)!
ERROR: fastbit storage: Wrong element size (e0id260 - 0)!
ERROR: fastbit storage: Wrong element size (e-516227040id24884 - 0)!
ERROR: fastbit storage: Wrong element size (e0id260 - 0)!

As you can see, this results in integer overflows; also, e0id260 is a field that is not present in the template/data record. There are actually two fields whose IDs exceed 32767, which means the top bit, reserved by IPFIX as the enterprise bit, is set (see the sketch after this list):

  • HTTP_URL (57652)
  • HTTP_HOST (57659)
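
The arithmetic is consistent with a sign-extension bug; a small hedged illustration (not IPFIXcol code):

#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t raw = 57652;                 // HTTP_URL from the ntop template

    // Field IDs above 32767 have the top bit set, which IPFIX reserves
    // as the enterprise bit. Masking it off explains the "id24884" in
    // the log: 57652 & 0x7FFF == 24884.
    if (raw & 0x8000)
        std::printf("enterprise element, id %u\n", raw & 0x7FFFu);

    // If the conversion code widens such an ID through a signed 16-bit
    // type, negative "enterprise numbers" like e-516227040 can appear.
    return 0;
}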

I've prepared a short trace for you that triggers the behavior. Please contact me at kirc&secdorks.net to get the trace.

FastBit storage plugin: meaning of renamed table directories?

In one of our deployment setups, where we receive flow data from a single INVEA-TECH FlowMon Probe over IPFIX per IPFIXcol instance with FastBit storage, we notice that table directories (i.e., those named after template IDs) are renamed quite regularly. This results in table directories with alphabetic postfixes, such as 'a' and 'b'. I'm wondering what exactly the reason for those postfixes is.

While checking the code, I found that the dir_check routine in plugins/storage/fastbit/fastbit_table.cpp is where the renaming is done. This routine creates a new directory with one of the mentioned postfixes if the (original) directory already exists. So now I'm wondering: what is the meaning of the renamed directories? Why isn't the data simply merged? Is it an indication of a problem, given that I'm receiving data from just a single ODID?
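
For context, a rough sketch of what dir_check appears to do, with simplified names (the real routine is in fastbit_table.cpp):

#include <string>
#include <sys/stat.h>

static bool dir_exists(const std::string &path)
{
    struct stat st;
    return stat(path.c_str(), &st) == 0 && S_ISDIR(st.st_mode);
}

std::string pick_table_dir(const std::string &base)
{
    std::string dir = base;                  // e.g. ".../258"
    for (char suffix = 'a'; dir_exists(dir) && suffix <= 'z'; ++suffix)
        dir = base + suffix;                 // ".../258a", ".../258b", ...
    return dir;                              // first directory that is free
}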

autoreconf fails on CentOS 5

Building IPFIXcol on CentOS 5 fails with the following errors:

[root@localhost base]# autoreconf -if
aclocal:configure.ac:132: warning: macro 'AM_COND_IF' not found in library
aclocal:configure.ac:150: warning: macro 'AM_COND_IF' not found in library
aclocal:configure.ac:216: warning: macro 'AM_COND_IF' not found in library
aclocal:configure.ac:221: warning: macro 'AM_COND_IF' not found in library
configure.ac:39: error: Autoconf version 2.60 or higher is required
configure.ac:39: the top level
autom4te: /usr/bin/m4 failed with exit status: 63
aclocal: autom4te failed with exit status: 63
autoreconf: aclocal failed with exit status: 63
[root@localhost base]# autoreconf --version
autoreconf (GNU Autoconf) 2.59
<snip>
[root@localhost base]# aclocal --version
aclocal (GNU automake) 1.9.6
<snip>
[root@localhost base]#

Similar errors, although for a different macro and without a reference to Autoconf version 2.60, occur when trying to build CESNET's libfastbit fork.

Even though a potential solution to the problem is provided in the error messages (i.e., installing a more recent version of Autoconf), this is sometimes far from straightforward to accomplish, for example due to dependency issues. My questions are as follows:

  1. Is the requirement for Autoconf 2.60 (or perhaps 2.68) a hard one, or would it be possible to have different code paths depending on the version of Autoconf available on the system?
  2. Do you have any experience with building IPFIXcol (and libfastbit) on CentOS 5 systems?
  3. Any other suggestions on how to solve this problem?

Copyright notices outdated

For the v0.8 release later this month, it may be good to update the years in the copyright notices at the head of the source files. Several notices are as old as 2011.

Profiles.cpp: Wrapper object use after free

Several methods in profiles.cpp, namely profile_get_name and profile_get_path, as well as channel_get_name and channel_get_path, return a pointer to an internal member of a Profile or Channel object. As such, once the object is destroyed, the pointer becomes invalid, yielding unexpected behavior.
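
A minimal illustration of the reported pattern, using hypothetical types (the real getters operate on IPFIXcol's Profile/Channel classes):

#include <string>

struct Profile { std::string name; };

// Returns a pointer into the Profile's internal string buffer.
const char *profile_get_name(const Profile *p)
{
    return p->name.c_str();
}

void example()
{
    const char *name;
    {
        Profile p{"live"};
        name = profile_get_name(&p);
    }   // p is destroyed here; 'name' now dangles

    // Reading 'name' at this point is undefined behavior. Returning a
    // copy (e.g. via strdup) or documenting that callers must copy the
    // string immediately would avoid the problem.
}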

NetFlow v9 to IPFIX conversion: no padding inserted

The current code for converting NFv9 PDUs to IPFIX (convert.c:541-598) takes the template and data sets in NFv9 PDUs as-is: if padding is present, it is carried over into the IPFIX PDU, and if not, no padding is inserted. This works well as long as exporters adhere to the protocol specifications when it comes to padding. However, we have recently come across a (Cisco) exporter that does not insert padding in NFv9 PDUs at all, even though RFC 3954 dictates the following with respect to padding:

The Exporter SHOULD insert some padding bytes so that the subsequent FlowSet starts at a 4-byte aligned boundary.

In a situation where plugins perform PDU manipulation and insert padding to obtain 4-byte-aligned boundaries (as recommended for IPFIX in RFC 7011), this can lead to memory errors. Although these errors are fixable in the plugins themselves, it may be good to work around problems introduced by poor exporter implementations already in IPFIXcol's input plugins, to avoid interoperability issues further down the processing chain. Fixing this would also make the resulting IPFIX PDUs (especially when used together with the forwarding plugin, for example) more consistent with IPFIX PDUs from other vendors.
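
A minimal sketch of the missing step, under assumed names (this is not the actual convert.c code): round each copied set up to a 4-byte boundary and zero-fill, updating the set header's length accordingly:

#include <cstdint>
#include <cstring>

static uint16_t padded_length(uint16_t set_len)
{
    return static_cast<uint16_t>((set_len + 3u) & ~3u);  // round up to a multiple of 4
}

void copy_set_with_padding(uint8_t *dst, const uint8_t *src, uint16_t set_len)
{
    uint16_t out_len = padded_length(set_len);           // e.g. 13 -> 16
    std::memcpy(dst, src, set_len);
    std::memset(dst + set_len, 0, out_len - set_len);    // zero the padding bytes
    // the caller must also patch the 16-bit length field in the set
    // header to out_len, since the set length includes the padding
}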

fbitdump: expansion of time specifications broken

According to fbitdump's man page,

any parts of the time spec may be omitted e.g YYYY/MM/dd expands to YYYY/MM/dd.00:00:00-infinity and processes all flow from a given day onwards

However, when specifying an abbreviated timestamp like 2016/07/12.05:35, fbitdump reports the following error:

Warning -- ibis::whereParser encountered syntax error, unexpected floating-point number, expecting name string or string literal at location...

[Proposal] Add ipfix-elements.xml configuration file command-line argument

I propose to add an "-e" argument to ipfixcol, which would allow one to specify the path to
the ipfix-elements.xml file. This would work analogously to the "-c" and "-i" arguments for the startup
and internalcfg files, respectively. It would come in handy if a user does not have write access to the default file (/etc/ipfixcol/ipfix-elements.xml).
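
Hypothetical usage, mirroring the existing "-c" convention (the flag does not exist yet; this is only the proposed form):

ipfixcol -c /etc/ipfixcol/startup.xml -e /home/user/ipfix-elements.xml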
