
Attention! Legacy! This repo will be replaced with https://github.com/containernet/vim-emu

Home Page: https://github.com/containernet/vim-emu

License: Apache License 2.0

mininet containernet emulation nfv sdn sonata docker

son-emu's Introduction

vim-emu: An NFV multi-PoP emulation platform

This emulation platform was created to help network service developers locally prototype and test their network services in realistic end-to-end multi-PoP scenarios. It allows the execution of real network functions, packaged as Docker containers, in emulated network topologies running locally on the developer's machine. The emulation platform also offers OpenStack-like APIs for each emulated PoP so that it can integrate with MANO solutions, like OSM. The core of the emulation platform is based on Containernet.

The emulation platform vim-emu is developed as part of OSM's DevOps MDG.

Acknowledgments

This software was originally developed by the SONATA project, funded by the European Commission under grant number 671517 through the Horizon 2020 and 5G-PPP programs.

Cite this work

If you use the emulation platform for your research and/or other publications, please cite the following paper to reference our work:

Bibtex:

@inproceedings{peuster2016medicine, 
    author={M. Peuster and H. Karl and S. van Rossem}, 
    booktitle={2016 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN)}, 
    title={MeDICINE: Rapid prototyping of production-ready network services in multi-PoP environments}, 
    year={2016}, 
    volume={}, 
    number={}, 
    pages={148-153}, 
    doi={10.1109/NFV-SDN.2016.7919490},
    month={Nov}
}

Installation

There are multiple ways to install and use the emulation platform. The easiest way is the automated installation using the OSM installer. The bare-metal installation requires a freshly installed Ubuntu 16.04 LTS system and is driven by an Ansible playbook. Another option is to use a nested Docker environment to run the emulator inside a Docker container.

Automated installation (recommended)

./install_osm.sh --lxdimages --vimemu

This command will install OSM (as LXC containers) as well as the emulator (as a Docker container) on a local machine. It is recommended to use a machine with Ubuntu 16.04.

Manual installation

Option 1: Bare-metal installation

# install prerequisites
sudo apt-get install ansible git aptitude

Step 1. Containernet installation
cd
git clone https://github.com/containernet/containernet.git
cd ~/containernet/ansible
sudo ansible-playbook -i "localhost," -c local install.yml
cd ..
sudo python setup.py install
Step 2. vim-emu installation
cd
git clone https://osm.etsi.org/gerrit/osm/vim-emu.git
cd ~/vim-emu/ansible
sudo ansible-playbook -i "localhost," -c local install.yml
cd ..
sudo python setup.py install

Option 2: Nested Docker Deployment

This option requires a Docker installation on the host machine on which the emulator should be deployed.

git clone https://osm.etsi.org/gerrit/osm/vim-emu.git
cd ~/vim-emu
# build the container:
docker build -t vim-emu-img .
# run the (interactive) container:
docker run --name vim-emu -it --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock vim-emu-img /bin/bash

Usage

Example

This simple example shows how to start the emulator with a simple topology (first terminal) and how to start some empty VNF containers in the emulated datacenters (PoPs) using the vim-emu CLI (second terminal).

  • First terminal (start the emulation platform):
    • sudo python examples/default_single_dc_topology.py
  • Second terminal (use docker exec vim-emu <command> for nested Docker deployment):
    • vim-emu compute start -d dc1 -n vnf1
    • vim-emu compute start -d dc1 -n vnf2
    • vim-emu compute list
  • First terminal:
    • containernet> vnf1 ifconfig
    • containernet> vnf1 ping -c 2 vnf2

A more advanced example that includes OSM can be found in the official vim-emu documentation in the OSM wiki.

Further documentation and useful links

Development

How to contribute?

Please check this OSM wiki page to learn how to contribute to an OSM module.

Testing

To run the unit tests do:

  • cd ~/vim-emu
  • sudo pytest -v
  • (To force Python2: sudo python2 -m pytest -v)

License

The emulation platform is published under Apache 2.0 license. Please see the LICENSE file for more details.

Contact

Manuel Peuster (Paderborn University) [email protected]


son-emu's Issues

Implement simple model for container resource limitations

Implement a solution to dynamically limit CPU/Memory of containers to approximate the behavior of a real cloud data center.

These models should be encapsulated and designed as pluggable components.
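A pluggable resource model could be sketched as a small abstract interface that the emulator calls at allocation time. The class and method names below are illustrative assumptions, not son-emu's actual API:

```python
from abc import ABC, abstractmethod


class ResourceModel(ABC):
    """Hypothetical base class for pluggable resource limitation models."""

    @abstractmethod
    def allocate(self, container_name, cpu_share, mem_mb):
        """Return the concrete (cpu, memory) limits to apply to a container."""


class SimpleCloudModel(ResourceModel):
    """Approximates a cloud data center: divides a fixed resource pool
    among the deployed containers and rejects over-allocation."""

    def __init__(self, total_cpu=1.0, total_mem_mb=4096):
        self.total_cpu = total_cpu
        self.total_mem_mb = total_mem_mb
        self.allocations = {}

    def allocate(self, container_name, cpu_share, mem_mb):
        used_mem = sum(m for _, m in self.allocations.values())
        if used_mem + mem_mb > self.total_mem_mb:
            raise RuntimeError("data center out of memory")
        limits = (self.total_cpu * cpu_share, mem_mb)
        self.allocations[container_name] = limits
        return limits
```

Encapsulating the policy this way would let different models (proportional share, hard limits, oversubscription) be swapped without touching the emulator core.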

UPB is preparing a research paper about this.

Mininet host naming limits

There seem to be some special cases when it comes to names used for Mininet hosts:

  • if the name contains a "-", ".", ":", etc., the Mininet command line breaks, e.g. d1.1 ping d2 does not work
  • if the name is too long (more than 8 characters?), Mininet stops with an exception
  • possible solution: map user-given names to internal names, like UUID/name -> dN
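The mapping idea from the last bullet could be a small bidirectional table; this is a sketch of the proposed workaround, not existing son-emu code:

```python
class NameMapper:
    """Maps arbitrary user-given names to short, Mininet-safe internal
    names of the form dN, and remembers the reverse mapping."""

    def __init__(self):
        self._to_internal = {}
        self._to_user = {}
        self._counter = 0

    def internal_name(self, user_name):
        if user_name not in self._to_internal:
            self._counter += 1
            internal = "d%d" % self._counter  # short, alphanumeric, unique
            self._to_internal[user_name] = internal
            self._to_user[internal] = user_name
        return self._to_internal[user_name]

    def user_name(self, internal):
        return self._to_user[internal]
```

Repeated lookups of the same user name return the same internal name, so the emulator can translate in both directions at the CLI/API boundary.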

Chaining code cleanup

I fixed some bugs in the chaining code this morning and noticed that the doc strings are missing for the corresponding methods.

In particular setChain and _chainAddFlow are not documented very well and it took me some time to figure out how to fix the problems.

Can we add a better documentation to them? Clean them up?

dummy gatekeeper vnf naming

The dummy GK currently assigns its own names to the deployed VNFs in the service: vnf%d.
After deployment, there is no way to link the vnf id in the NSD to the name of the actually deployed Docker container.
This would be needed, e.g., when the user wants to monitor a VNF after deployment: only the id in the NSD is known to the user, not the actual name of the deployed container.

Would it be possible to use the vnf id from the NSD as the name of the deployed container?
Or is this related to the Mininet host name issue: [https://github.com//issues/2]?

As a workaround, the dummy GK could return the mapping of the NSD vnf ids to the real container instances via its REST API.

Which option do you think would work best?
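The REST-API workaround boils down to recording the mapping at deployment time. A minimal sketch (class name, endpoint path, and naming scheme are assumptions for illustration):

```python
import json


class DummyGatekeeper:
    """Records the NSD vnf_id -> container name mapping at deployment
    time so a REST endpoint can expose it afterwards. Illustrative
    sketch, not son-emu's actual dummy gatekeeper."""

    def __init__(self):
        self.vnf_mapping = {}
        self._counter = 0

    def deploy(self, nsd_vnf_id):
        self._counter += 1
        container = "vnf%d" % self._counter  # GK-assigned container name
        self.vnf_mapping[nsd_vnf_id] = container
        return container

    def mapping_json(self):
        """Body for a hypothetical GET /instantiations/<id>/vnf_mapping."""
        return json.dumps(self.vnf_mapping)
```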

Push monitor metrics to Prometheus Push Gateway

To be compatible with the SP, metrics monitored in son-emu should be pushed to the Prometheus Pushgateway. When the emulator is attached to the SP, it will then be easy to export the metrics to the SP's Monitoring Framework.
In the meantime, the Prometheus Pushgateway can be deployed as a Docker container in the SDK:
https://github.com/prometheus/pushgateway
Prometheus will then scrape only the Pushgateway.
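Pushing to the Pushgateway is just an HTTP PUT/POST of metrics in the Prometheus text exposition format against a job-scoped URL. The helpers below sketch the payload and URL construction (host, port 9091, and job name are assumptions matching the defaults mentioned in this thread):

```python
def pushgateway_payload(metrics):
    """Render metrics in the Prometheus text exposition format.
    `metrics` maps metric names to (value, labels-dict) pairs."""
    lines = []
    for name, (value, labels) in sorted(metrics.items()):
        label_str = ",".join('%s="%s"' % kv for kv in sorted(labels.items()))
        lines.append("%s{%s} %s" % (name, label_str, value))
    return "\n".join(lines) + "\n"


def pushgateway_url(host="localhost", port=9091, job="sonemu"):
    # the Pushgateway groups pushed metrics by job (and optional labels)
    return "http://%s:%d/metrics/job/%s" % (host, port, job)
```

The resulting body could be sent with any HTTP client; Prometheus then scrapes the Pushgateway's /metrics endpoint as usual.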

RemoteController/Ryu not properly cleaned

This was reported multiple times and has to be fixed ASAP.

Running unittests doing:

sudo py.test -v src/emuvim/test/unittests

often ends with the following if one of the tests fails. This problem stays and breaks all following test executions:

self = <Controller c0: 127.0.0.1:6653 pid=25914> 

    def checkListening( self ):
        "Make sure no controllers are running on our port"
        # Verify that Telnet is installed first:
        out, _err, returnCode = errRun( "which telnet" )
        if 'telnet' not in out or returnCode != 0:
            raise Exception( "Error running telnet to check for listening "
                             "controllers; please check that it is "
                             "installed." )
        listening = self.cmd( "echo A | telnet -e A %s %d" %
                              ( self.ip, self.port ) )
        if 'Connected' in listening:
            servers = self.cmd( 'netstat -natp' ).split( '\n' )
            pstr = ':%d ' % self.port
            clist = servers[ 0:1 ] + [ s for s in servers if pstr in s ]
            raise Exception( "Please shut down the controller which is"
                             " running on port %d:\n" % self.port +
>                            '\n'.join( clist ) )
E           Exception: Please shut down the controller which is running on port 6653:
E           Active Internet connections (servers and established)
E           tcp        0      0 0.0.0.0:6653            0.0.0.0:*               LISTEN      22424/python    
E           tcp        0      0 127.0.0.1:46135         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46138         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46132         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46143         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46124         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46123         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46126         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46130         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46139         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46125         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46140         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46142         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46134         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46136         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46141         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46131         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46128         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46129         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46133         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46127         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46122         127.0.0.1:6653          TIME_WAIT   -               
E           tcp        0      0 127.0.0.1:46137         127.0.0.1:6653          TIME_WAIT   -

The reason for this is that ryu is not killed properly. This has to be fixed. We have to ensure that ryu is always killed after the emulator is stopped! Maybe we always kill old ryu instances if we start the emulator again?

Workarounds:

  • reboot entire system
  • sudo pkill ryu-manager
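Automating the second workaround is straightforward: run a best-effort pkill both when the emulator starts (to clean up after a crashed run) and when it stops. A sketch, assuming ryu-manager is always started under that process name:

```python
import subprocess


def kill_stale_ryu(dry_run=False):
    """Best-effort cleanup of leftover ryu-manager processes, meant to
    run at emulator start and stop so a crashed run cannot leave a
    controller bound to port 6653."""
    cmd = ["pkill", "-f", "ryu-manager"]
    if dry_run:
        return cmd
    # pkill exits 1 when nothing matched; that is fine here
    return subprocess.call(cmd)
```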

RM: Dynamic CPU bandwidth control

Find a way to dynamically change the CFS CPU bandwidth control at container runtime. Is this possible? Should be implemented in the Dockernet layer.
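Dynamic changes are possible: CFS bandwidth control is configured via the cgroup files cpu.cfs_period_us and cpu.cfs_quota_us, and Docker exposes them at runtime via `docker update --cpu-period/--cpu-quota`. The only computation needed is translating a CPU share into a quota (the helper name is illustrative):

```python
def cfs_quota_us(cpu_fraction, period_us=100000):
    """Translate a CPU share (e.g. 0.5 = half a core) into the
    cfs_quota_us value for a given cfs_period_us. A quota of -1
    disables the limit entirely."""
    if cpu_fraction is None:
        return -1
    return int(cpu_fraction * period_us)
```

Writing the computed quota to the container's cgroup (or calling `docker update`) while the container runs should achieve the desired dynamic control.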

Interface name collisions among VNFs

Using the same interface names inside the VNF containers can lead to problems with VNFs that need to run in privileged mode, like the vTC (PF_RING).

A workaround is to modify the VNFDs to not have duplicated interface names (e.g., not have input or output twice in a service).

We should think about a better naming scheme (e.g. include the vnf_id in the interface name). Note that Linux limits interface names to 15 characters (IFNAMSIZ minus the terminating NUL), so long combinations need truncation.
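A sketch of such a scheme, assuming the 15-character Linux limit and a hash suffix to keep truncated names unique (the helper is hypothetical, not son-emu code):

```python
import hashlib

IFNAMSIZ_MAX = 15  # Linux interface names: 15 chars + NUL terminator


def unique_ifname(vnf_id, port_name):
    """Build a per-VNF interface name; when the combination exceeds the
    kernel limit, truncate and append a short hash so names stay unique."""
    name = "%s-%s" % (vnf_id, port_name)
    if len(name) <= IFNAMSIZ_MAX:
        return name
    digest = hashlib.md5(name.encode()).hexdigest()[:4]
    return name[:IFNAMSIZ_MAX - 5] + "-" + digest
```

This would avoid the duplicated "input"/"output" names across VNFs while staying within what the kernel accepts.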

This is an issue that should be investigated after the Y1 demo.

monitor feature: Passive packet monitoring

Monitor a particular link (tcpdump-style) in the deployed service chain by passing the traffic through an intermediate VNF and filtering/dumping/bypassing the packets there.

service endpoints deployed from dummy gatekeeper

Currently, the service endpoints are disregarded by the dummy gatekeeper. If we want to send/receive test traffic in a deployed service, we need some way to access these endpoints.

  • create a veth interface (in a separate network namespace) that links to the datacenter switch and routes to the service endpoint?
  • something similar to Mininet hosts that attach to an endpoint?
  • a dedicated Docker container per endpoint?

dummy GK mgmt network

The dummy GK should not deploy mgmt networks/connection points from the VNFD.
The default connection to docker0 in the emulator can be considered the mgmt network.

starting son-emu from docker-compose

When starting son-emu with a topology file from docker-compose, it does not wait at net.CLI() because a container cannot be started in interactive mode using docker-compose.
One solution is to modify the topology file, as in this example:
https://github.com/sonata-nfv/son-emu/blob/master/src/emuvim/examples/son-monitor_test_topo.py

Instead of starting the CLI, a wait loop keeps son-emu running.
The SIGINT and SIGTERM signals are trapped to gracefully shut down son-emu when the container is stopped via the Docker engine.

This should also not break starting the topology outside a Docker container using Python, so it could serve as a more general template for constructing topology files.
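The wait-loop-plus-signal-trap pattern described above might look roughly like this (class name is illustrative; a topology script would call wait() in place of net.CLI()):

```python
import signal
import time


class EmulatorRunner:
    """Replacement for net.CLI() when running under docker-compose:
    block until the container receives SIGINT/SIGTERM, then return so
    the topology script can call net.stop() for a clean shutdown."""

    def __init__(self):
        self.running = True
        signal.signal(signal.SIGINT, self._handle)
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        self.running = False

    def wait(self, poll_interval=1.0):
        while self.running:
            time.sleep(poll_interval)
```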

Advanced networking interface: Multiple NICs per container

Advanced networking:

  • improve container network configuration
  • allow multiple interfaces per container (important for VNFs in/out style communication)
  • extend API to support this
  • extend CLI to support this

Note: even if containers can have multiple interfaces, we stick with a single switch per logical data center. We completely rely on SDN to do the appropriate traffic steering.

RM: Full test case

Add a full test case that starts real resource-limited containers.

How can we check the limitations? CLI commands for cgroups?

FGK: implement tests

If the son-schema repo is public, we could automatically test with the latest example package.

Can we trigger this with PRs to the son-schema repo? :)

start extra monitoring containers (cAdvisor, Prometheus Pushgateway)

To enable son-monitor to get metrics from son-emu, cAdvisor and Prometheus' Pushgateway are used.
It makes more sense to start these containers together with son-emu, so they run in the same environment (e.g. inside the same VM). The ports used by cAdvisor (8080) and the Pushgateway (9091) should be mapped correctly.
Starting this monitoring framework is done from the topology file with the 'monitor' option:
DCNetwork(monitor=True)

monitor feature: Test packet insertion

Insert a packet at a particular point in the deployed service function chain, either via a dedicated VNF or via a Packet-Out message from an OpenFlow controller connected to the emulator's vSwitches. The test packet can be intercepted later, when passing a vSwitch. It can, for example, include a timestamp, which allows delay measurements.

FGK: Finalize "fake" gatekeeper to deploy our example service package

As long as we do not have WP4's orchestrator available, it would be nice to have a hardcoded script that can deploy/control some example services on the emulator.

As far as I can see, this is the way to go for the Y1 prototype. Let's create a script that can deploy the example package defined in son-schema.

Chaining does not work on Ubuntu 16.04

Problem might be caused by a new OVS version.

I use this example to debug: https://github.com/sonata-nfv/son-examples/wiki/Other-Demo-Scripts
On 14.04 I get:

sudo ovs-ofctl dump-flows dc1.s1
[sudo] password for manuel: 
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=24.058s, table=0, n_packets=5, n_bytes=386, idle_age=16, priority=0,in_port=1,dl_vlan=1 actions=strip_vlan,output:2
 cookie=0x0, duration=24.068s, table=0, n_packets=7, n_bytes=462, idle_age=16, priority=0,in_port=2 actions=load:0->OXM_OF_VLAN_VID[],output:1
manuel@dev-vm:~/son-emu$ sudo ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=36.122s, table=0, n_packets=5, n_bytes=386, idle_age=28, priority=0,in_port=1,dl_vlan=0 actions=output:2
 cookie=0x0, duration=36.113s, table=0, n_packets=5, n_bytes=386, idle_age=28, priority=0,in_port=2,dl_vlan=1 actions=output:1
manuel@dev-vm:~/son-emu$ sudo ovs-ofctl dump-flows dc2.s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=51.009s, table=0, n_packets=5, n_bytes=386, idle_age=43, priority=0,in_port=1,dl_vlan=0 actions=strip_vlan,output:2
 cookie=0x0, duration=51.009s, table=0, n_packets=5, n_bytes=378, idle_age=43, priority=0,in_port=2 actions=load:0x1->OXM_OF_VLAN_VID[],output:1

On 16.04 I get the following:

sonata@son-demo-vm:~$ sudo ovs-ofctl dump-flows dc1.s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=57.032s, table=0, n_packets=0, n_bytes=0, idle_age=57, priority=0,in_port=1,dl_vlan=1 actions=strip_vlan,output:2
sonata@son-demo-vm:~$ sudo ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=64.612s, table=0, n_packets=0, n_bytes=0, idle_age=64, priority=0,in_port=1,dl_vlan=0 actions=output:2
 cookie=0x0, duration=64.592s, table=0, n_packets=0, n_bytes=0, idle_age=64, priority=0,in_port=2,dl_vlan=1 actions=output:1
sonata@son-demo-vm:~$ sudo ovs-ofctl dump-flows dc2.s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=67.620s, table=0, n_packets=0, n_bytes=0, idle_age=67, priority=0,in_port=1,dl_vlan=0 actions=strip_vlan,output:2

The difference is this rule, which is missing on 16.04:

 cookie=0x0, duration=51.009s, table=0, n_packets=5, n_bytes=378, idle_age=43, priority=0,in_port=2 actions=load:0x1->OXM_OF_VLAN_VID[],output:1

On 16.04 traffic (e.g. ping) is NOT forwarded by flow rules created with the setChain command.

Automated son-emu VM creation

We need an automated way to create an emulator VM that can be distributed.

Yes, we have an install script and a Docker image, but both need sudo rights and fiddle with the host machine's network to emulate topologies. This is something a developer does not want to do in their dev OS. We should provide a VM for it (like Mininet does). This VM can be executed with e.g. VirtualBox on the developer's machine, and son-push can push services to it.

Plan:

Create a Vagrant file with the Ansible provisioning plugin that installs son-emu in an empty Ubuntu 16.04 VM
=> For now we have to use an Ubuntu 14.04 base image because of a bug in the 16.04 Vagrant base box

son-emu-cli: Replace zerorpc

Instead of using zerorpc to communicate between son-emu-cli and the running emulator, we should use standard HTTP REST.

Benefits:

  • emulator offers a consistent API that can be used by any other tool
  • library/programming language independent
  • easy to document (Swagger etc.)

Remarks:

  • it has to be ensured that son-emu-cli is python3 compatible

Full cloud-like API (API stage 2)

API to which SONATA's SP can connect (VIM/WIM) to control compute resources. E.g. OpenStack/Heat like.

(milestone not fixed, depends on WP4 outcomes)

apt-get update fail in Dockerfile

This is the log of the issue

fatal: [localhost]: FAILED! => {"changed": false, "cmd": "apt-get update", "failed": true, "msg": "W: Size of file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages.gz is not what the server reported 978820 978828\nW: Failed to fetch http://ppa.launchpad.net/ansible/ansible/ubuntu/dists/trusty/main/binary-amd64/Packages 404 Not Found\n\nE: Some index files failed to download. They have been ignored, or old ones used instead.", "rc": 100, "stderr": "W: Size of file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages.gz is not what the server reported 978820 978828\nW:

son-emu compatible VNF containers

We need a standard way to execute the VNF software inside the containers after the dummy GK has instantiated them.

(We cannot rely on the CMD field in the Dockerfile since the emulator starts each container with /bin/bash as its command in order to configure it.)

The idea is that the dummy GK always tries to call start.sh in the root of the container after it has started and connected the container.
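The GK side of this convention could be as simple as a detached `docker exec` that runs the script only if it exists, so containers without a start.sh still deploy cleanly. A sketch (the helper and the guard logic are assumptions, not son-emu code):

```python
def start_command(container_name, entry="/start.sh"):
    """Command the dummy GK could run after connecting a container:
    execute the VNF's start script detached, but only if it exists
    and is executable, so a missing script does not break deployment."""
    shell = "[ -x %s ] && %s" % (entry, entry)
    return ["docker", "exec", "-d", container_name, "/bin/bash", "-c", shell]
```

The returned list can be handed to subprocess.call(); `docker exec -d` detaches immediately, so the GK does not block on long-running VNF processes.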

FIX: Stop Ryu when network is stopped

Without this, we always have an old Ryu instance running after the first execution of the emulator.

This causes bugs that are extremely hard to find.

FGK: Web GUI

OK, it's not strictly needed and we could use curl, but why not have a nice web page with a file-upload button for the packages? :-)
