srl-labs / containerlab
container-based networking labs
Home Page: https://containerlab.dev
License: BSD 3-Clause "New" or "Revised" License
Tried the Graph option with the example topology of lab-examples/br01
go run . graph -t lab-examples/br01/br01.yml -d
The truncated log is as follows:
DEBU[0000] [lab-examples br01 br01.yml]br01.yml[br01 yml]br01
DEBU[0000] File : &{br01.yml br01 br01}
INFO[0000] Parsing topology information ...
DEBU[0000] Prefix: br01
DEBU[0000] DockerInfo: {clab 172.20.20.0/24 172.20.20.1 2001:172:20:20::/80 2001:172:20:20::1}
DEBU[0000] License key: /home/ubuntu/container-lab/clab-br01/license.key
DEBU[0000] Config: /home/ubuntu/container-lab/clab-br01/srl1/config/
DEBU[0000] Env Config: /home/ubuntu/container-lab/clab-br01/srl1/srlinux.conf
DEBU[0000] Topology File: /home/ubuntu/container-lab/clab-br01/srl1/topology.yml
DEBU[0000] License key: /home/ubuntu/container-lab/clab-br01/license.key
DEBU[0000] Config: /home/ubuntu/container-lab/clab-br01/srl2/config/
DEBU[0000] Env Config: /home/ubuntu/container-lab/clab-br01/srl2/srlinux.conf
DEBU[0000] Topology File: /home/ubuntu/container-lab/clab-br01/srl2/topology.yml
DEBU[0000] License key: /home/ubuntu/container-lab/clab-br01/license.key
DEBU[0000] Config: /home/ubuntu/container-lab/clab-br01/srl3/config/
DEBU[0000] Env Config: /home/ubuntu/container-lab/clab-br01/srl3/srlinux.conf
DEBU[0000] Topology File: /home/ubuntu/container-lab/clab-br01/srl3/topology.yml
FATA[0000] Bridge br-clab is referenced in the endpoints section but was not found in the default network namespace
exit status 1
Code is being executed here that should not run for simple topology parsing and graphing.
Currently, the lab topologies reuse the management network provided by the docker network containerlab.
When we destroy the topo, we try to delete this network and its entities, but that errors out, since containers from other labs might still be attached:
INFO[0001] Deleting docker bridge ...
ERRO[0001] Error response from daemon: error while removing network: network containerlab id 1a050256ae16f2f7e73d2c29a336b04b133698a17fc77230fdea34b29bc5a2cd has active endpoints
The proposal is to first check whether the network has active endpoints attached, and only if it does not, attempt to delete it.
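The check above could be sketched as a small helper that compares the network's attached endpoints against the containers owned by the lab being destroyed. Names and shapes here are hypothetical; the real code would pull the endpoint list from docker's network-inspect response:

```go
package main

import "fmt"

// hasForeignEndpoints reports whether the network still has endpoints that do
// not belong to the lab being destroyed. attached holds the names of
// containers currently connected to the network; owned holds the names of
// containers created by this lab.
func hasForeignEndpoints(attached, owned map[string]struct{}) bool {
	for name := range attached {
		if _, ok := owned[name]; !ok {
			return true
		}
	}
	return false
}

func main() {
	attached := map[string]struct{}{"clab-br01-srl1": {}, "other-lab-node": {}}
	owned := map[string]struct{}{"clab-br01-srl1": {}}
	if hasForeignEndpoints(attached, owned) {
		fmt.Println("skip network deletion: other labs still attached")
	}
}
```

With this in place, the destroy path would only call the network-remove API when the helper returns false.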
Save the world by not running tests for changes that touch only the docs.
When a link's kind is set to bridge and the bridge does not exist, create the bridge automatically with TCP offload turned off and the default MTU set to 9212.
When we start SRL containers, we can look at which endpoints are defined and template a bare interface config for them: just create the interfaces and admin-enable them, for starters.
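A minimal sketch of such templating with Go's text/template, assuming SR Linux set-command syntax; renderIfaceConfig is a hypothetical helper, not clab's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// ifaceTpl emits one admin-enable set command per endpoint.
const ifaceTpl = `{{- range . }}
set / interface ethernet-{{ . }} admin-state enable
{{- end }}
`

// renderIfaceConfig renders bare interface config for the given endpoints.
func renderIfaceConfig(endpoints []string) (string, error) {
	t, err := template.New("iface").Parse(ifaceTpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, endpoints); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// e1-1 and e1-2 from the topology's links map to ethernet-1/1 and 1/2
	cfg, err := renderIfaceConfig([]string{"1/1", "1/2"})
	if err != nil {
		panic(err)
	}
	fmt.Print(cfg)
}
```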
Before we properly handle bridge LCM (#84), we should not attempt to create links/containers if one of the links references a non-existing bridge.
We would need to change the import "docker.io/go-docker" to https://github.com/docker/engine/tree/master/client, since docker.io/go-docker is unmaintained (ref docker/go-docker#21).
containerlab deploy should merge the generated hosts file with the system's hosts file.
containerlab destroy should remove the entries specific to the deleted fabric
feature added in #127
As discussed with @karimra, the proposal is to make the docker network name optional (auto-generated based on the lab prefix).
This issue will track the design decisions on the new "topology definition" format clab should adopt.
The recent proposal was:
name: wan-topo
mgmt:
  # network is not mandatory; if not specified it defaults to $name-network
  network: $name-network
  ipv4_range: <ipv4 mgmt range>
  ipv6_range: <ipv6 mgmt range>
topology:
  defaults:
    kind: srl
  kinds:
    srl:
      image: srlinux:20.6.1-286
      type: ixr6
    alpine:
      image: henderiw/client-alpine:1.0.0
  nodes:
    node1:
    node2:
    client1:
      kind: alpine
    client2:
      kind: alpine
  links:
    - endpoints: ["node1:e1-1", "node2:e1-1"]
    - endpoints: ["node1:e1-2", "client1:eth1"]
    - endpoints: ["node2:e1-2", "client2:eth1"]
The areas that we need to have a discussion on:
Currently, clab doesn't allow different license files for different nodes/kinds, as it copies the license file to the lab directory - https://github.com/srl-wim/container-lab/blob/81e0f749835e25d6d1f0360b38f96a67b7f9b8f0/clab/file.go#L146
The proposal is to copy the license file into the node directory explicitly, and mount it from there.
A config directive points to a precise config file that can be used as-is with a certain node.
Making config available for kinds does not make sense, since every instance of a kind should have its own config?
two issues here:
[root@srl-centos7 tmp]# sudo curl -sL https://github.com/srl-wim/container-lab/raw/master/get.sh | \
> sudo bash
containerlab 0.7.0 is available. Changing from version .
Downloading https://github.com/srl-wim/container-lab/releases/download/v0.7.0/containerlab_0.7.0_linux_amd64.rpm
Preparing to install containerlab 0.7.0 from package
file /etc/containerlab/templates/srl/srlconfig.tpl from install of containerlab-0:0.7.0-1.x86_64 conflicts with file from package containerlab-0:0.6.1-1.x86_64
file /usr/local/bin/containerlab from install of containerlab-0:0.7.0-1.x86_64 conflicts with file from package containerlab-0:0.6.1-1.x86_64
Failed to install containerlab
For support, go to https://github.com/srl-wim/container-lab/issues
sudo/root privileges are required
In addition to the existing checks, the following checks need to be added before we attempt to create a lab:
- config files are present at the provided paths (added in #268)
- the license element must have a file at that path (added in #268)

InitVirtualWiring is broken: if you delete one end of the virtual wire, the other end also disappears.
We should set the default IP ranges only if both ranges are missing.
This will allow users to create networks with only IPv4 or only IPv6 ranges.
Currently, if a user specifies IPv4 only, clab adds the default IPv6 range, which could fail if it's already in use by another docker network.
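The rule can be captured in a small helper. The default ranges below are the ones seen in the debug log earlier in this page; applyDefaultRanges is a hypothetical name:

```go
package main

import "fmt"

// applyDefaultRanges fills in the default mgmt ranges only when the user
// specified neither family; if either is set, the other stays empty so clab
// does not claim an address range the user never asked for.
func applyDefaultRanges(ipv4, ipv6 string) (string, string) {
	const (
		defaultIPv4 = "172.20.20.0/24"
		defaultIPv6 = "2001:172:20:20::/80"
	)
	if ipv4 == "" && ipv6 == "" {
		return defaultIPv4, defaultIPv6
	}
	return ipv4, ipv6
}

func main() {
	// user asked for IPv4 only: no IPv6 range is forced on the network
	v4, v6 := applyDefaultRanges("10.0.0.0/24", "")
	fmt.Printf("ipv4=%s ipv6=%q\n", v4, v6)
}
```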
Likely dependent on the deletion by name.
Hi @steiler
I think it's better to use log.Fatalf rather than panic here.
It would also be lovely to print out the incorrect type name, so the user can see where they made the typo.
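A sketch of the suggested message: since log.Fatalf exits the process, the validation itself is written to return a descriptive error naming the offending value, which the caller can then pass to log.Fatalf. The knownKinds set and checkKind helper are hypothetical:

```go
package main

import (
	"fmt"
	"log"
)

// knownKinds is an illustrative subset of the kinds clab understands.
var knownKinds = map[string]struct{}{"srl": {}, "linux": {}, "bridge": {}}

// checkKind returns an error naming both the node and the bad kind value,
// so the user can see exactly where the typo is.
func checkKind(node, kind string) error {
	if _, ok := knownKinds[kind]; !ok {
		return fmt.Errorf("node %q has unknown kind %q", node, kind)
	}
	return nil
}

func main() {
	if err := checkKind("client1", "alpne"); err != nil {
		log.Printf("%v", err) // would be log.Fatalf in clab
	}
}
```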
In this PR weaveworks moved from shell-based container attachment to Go/syscall-based.
This can be used for inspiration, for instance to remove the ethtool subshelling and/or to create veth pairs without shelling out.
Currently it's hard to understand which topo file was used to spin up containers after the creation.
I propose we add a label that will point to the abs file path of the topology that was used to create those nodes.
What I think can also be added is a containerlab inspect command which can be used without arguments and will output all containers from all the labs with their topo file paths. This will make it possible to see which labs were deployed on the host and how to destroy them.
Right now, we generate certificates before the container deployments.
This means the mgmt IP addresses cannot be part of the cert SAN.
What about delaying the certificate generation till after the deployment and using SRL's json-rpc to set the certificates and enable the gnmi-server?
This also solves the issue of overwriting certificates on disk even if they are present in config.
For other nodes, we can check how certificates can be set other than at boot
The table that appears once the lab has been created is not sorted:
+---------------------------------+---------------------------------+-------+-------+---------+-----------------+----------------------+
| Name | Image | Kind | Group | State | IPv4 Address | IPv6 Address |
+---------------------------------+---------------------------------+-------+-------+---------+-----------------+----------------------+
| containerlab-clos02-spine1 | srlinux | srl | | running | 172.20.20.14/24 | 2001:172:20:20::e/80 |
| containerlab-clos02-client3 | ghcr.io/hellt/network-multitool | linux | | running | 172.20.20.15/24 | 2001:172:20:20::f/80 |
| containerlab-clos02-spine2 | srlinux | srl | | running | 172.20.20.13/24 | 2001:172:20:20::d/80 |
| containerlab-clos02-superspine1 | srlinux | srl | | running | 172.20.20.11/24 | 2001:172:20:20::b/80 |
| containerlab-clos02-client1 | ghcr.io/hellt/network-multitool | linux | | running | 172.20.20.12/24 | 2001:172:20:20::c/80 |
| containerlab-clos02-leaf3 | srlinux | srl | | running | 172.20.20.10/24 | 2001:172:20:20::a/80 |
| containerlab-clos02-leaf2 | srlinux | srl | | running | 172.20.20.6/24 | 2001:172:20:20::6/80 |
| containerlab-clos02-client4 | ghcr.io/hellt/network-multitool | linux | | running | 172.20.20.8/24 | 2001:172:20:20::8/80 |
| containerlab-clos02-client2 | ghcr.io/hellt/network-multitool | linux | | running | 172.20.20.7/24 | 2001:172:20:20::7/80 |
| containerlab-clos02-superspine2 | srlinux | srl | | running | 172.20.20.5/24 | 2001:172:20:20::5/80 |
| containerlab-clos02-leaf4 | srlinux | srl | | running | 172.20.20.9/24 | 2001:172:20:20::9/80 |
| containerlab-clos02-spine3 | srlinux | srl | | running | 172.20.20.4/24 | 2001:172:20:20::4/80 |
| containerlab-clos02-leaf1 | srlinux | srl | | running | 172.20.20.3/24 | 2001:172:20:20::3/80 |
| containerlab-clos02-spine4 | srlinux | srl | | running | 172.20.20.2/24 | 2001:172:20:20::2/80 |
+---------------------------------+---------------------------------+-------+-------+---------+-----------------+----------------------+
The proposal is to sort it on the container name
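Sorting the rows before rendering is a one-liner with sort.Slice; the row struct below is a reduced, hypothetical version of the table's data:

```go
package main

import (
	"fmt"
	"sort"
)

// row holds a reduced view of one inspect-table line.
type row struct{ Name, IPv4 string }

// sortRows orders the table rows by container name, as proposed.
func sortRows(rows []row) {
	sort.Slice(rows, func(i, j int) bool { return rows[i].Name < rows[j].Name })
}

func main() {
	rows := []row{
		{"containerlab-clos02-spine1", "172.20.20.14/24"},
		{"containerlab-clos02-client3", "172.20.20.15/24"},
		{"containerlab-clos02-leaf1", "172.20.20.3/24"},
	}
	sortRows(rows)
	for _, r := range rows {
		fmt.Println(r.Name, r.IPv4)
	}
}
```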
A node.port and kind.port option must take in a string like 80:8080, which will expose the container's port 8080 via host port 80.
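Parsing such a port string could look like the sketch below. splitPortSpec is a hypothetical helper; a real implementation may also need to handle a protocol suffix like 80:8080/tcp, which this sketch ignores:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitPortSpec parses a "host:container" port string such as "80:8080".
func splitPortSpec(s string) (host, container int, err error) {
	parts := strings.SplitN(s, ":", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid port spec %q, want host:container", s)
	}
	if host, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, fmt.Errorf("invalid host port in %q: %v", s, err)
	}
	if container, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, fmt.Errorf("invalid container port in %q: %v", s, err)
	}
	return host, container, nil
}

func main() {
	h, c, err := splitPortSpec("80:8080")
	fmt.Println(h, c, err) // 80 8080 <nil>
}
```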
Having fixed kinds can bring problems.
The env vars might change across releases, so ENV vars must be overridable.
It seems it not only needs to be changed to "output format", but should also list the accepted values (table, the default, and json).
Graceful stop is now on by default; that causes delays when multiple client containers take time to stop gracefully.
--graceful-stop
some leftovers still visible in the code after #128
we need to keep user defined topologies untouched when doing upgrade
The task is to identify where users should store their custom data and retain this data during the upgrade
Container names now start with the containerlab prefix. I think it's a bit long; clab makes it way shorter: faster to type, less space on the screen.
The sysctl commands https://github.com/srl-wim/container-lab/blob/master/clab/docker.go#L99 and https://github.com/srl-wim/container-lab/blob/master/clab/docker.go#L108 are identical, shouldn't the second one be:
b, err = exec.Command("sudo", "sysctl", "-w", "net.ipv4.conf.default.rp_filter=0").CombinedOutput()
?
Quite often it's needed to see which labs are running on the system.
containerlab inspect -all
could be a good candidate that will list all labs/containers by using a common tag.
Currently, certificates are generated regardless of whether (SRL) config files exist or not.
In case of subsequent deploys of the same lab (with existing config), this results in SRL certificates not being verifiable with the new rootCA file.
The proposal is to add a few features to be able to avoid this situation.
To set sysctl values use
ioutil.WriteFile(path.Join(sysctlBase, sysctl), []byte(strconv.Itoa(newVal)), 0640)
ref - https://github.com/kubernetes/kubernetes/blob/v1.19.2/pkg/util/sysctl/sysctl.go#L97
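A sketch of that approach: convert the dotted sysctl name to its /proc/sys path and write the value directly, avoiding the exec of the sysctl binary. os.WriteFile is the modern stdlib equivalent of the ioutil.WriteFile call quoted above; writing these files requires root at runtime:

```go
package main

import (
	"fmt"
	"os"
	"path"
	"strconv"
	"strings"
)

const sysctlBase = "/proc/sys"

// sysctlPath converts a dotted sysctl name to its /proc/sys file path,
// e.g. net.ipv4.conf.default.rp_filter -> /proc/sys/net/ipv4/conf/default/rp_filter.
func sysctlPath(name string) string {
	return path.Join(sysctlBase, strings.Replace(name, ".", "/", -1))
}

// setSysctl writes the value to the corresponding /proc/sys file.
func setSysctl(name string, val int) error {
	return os.WriteFile(sysctlPath(name), []byte(strconv.Itoa(val)), 0640)
}

func main() {
	fmt.Println(sysctlPath("net.ipv4.conf.default.rp_filter"))
	// setSysctl("net.ipv4.conf.default.rp_filter", 0) // needs root to run
}
```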
For the kinds where certs can be locally generated or are not needed, it might be nice to have a config option to skip cert generation at the global/kind/node level.
To allow overriding the Cmd that is used by a kind/node.
The feature was delivered in #124.
By default, docker disables IPv6 networking for containers unless the docker bridge is configured with an IPv6 CIDR.
But for the containers which do not connect to the docker bridge, we need to provide a runtime parameter to enable IPv6 networking.
Consider the case of the linux containers which we use as test clients connected to SRL containers: for them to have IPv6 networking enabled, we need to launch the container with --sysctl net.ipv6.conf.all.disable_ipv6=0.
I propose we add this parameter for all kind: linux workloads.
background: moby/moby#32433
So for a newly launched clab alpine container with a single additional interface, we will have the following picture, where the eth1 interface has IPv6 disabled:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
283: eth0@if284: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:14:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.20.4/24 brd 172.20.20.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2001:172:20:20::4/80 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:1404/64 scope link
valid_lft forever preferred_lft forever
300: eth1@if299: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3a:07:48:7b:e8:aa brd ff:ff:ff:ff:ff:ff link-netnsid 1
/ # sysctl -a | grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.eth1.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 0
To allow mounting files into the containerlab nodes, a mount option has to be created for kinds and nodes.
The use case is to add custom binaries or configs to testing containers (like adding gobgp to a vanilla alpine):
kind_defaults:
  my_tools:
    type: custom
    image: pklepikov/ubuntu-tools
    mounts:
      - $(pwd)/exabgp /home/admin/exabgp
      - $(pwd)/gobgp /home/admin/gobgp
I propose using a more general label in the example topologies.
Right now we are using:
kind_defaults:
  srl:
    image: srlinux:20.6.1-286
I propose to change this to either srlinux:latest or srlinux:containerlab, which would decouple the config from the actual GA release.
In #94 I introduced a netns symlink removal, but the symlink removal should not be attempted if the node is of type bridge.