ec2-net-utils contains a set of utilities for managing elastic network interfaces on Amazon EC2.

License: Apache License 2.0


amazon-ec2-net-utils's Introduction

amazon-ec2-net-utils

Background

The amazon-ec2-net-utils package provides functionality needed to configure a Linux instance for optimal performance in a VPC environment. It handles:

  • Per-interface policy routing rules to accommodate VPC source/dest restrictions
  • Configuration of secondary IPv4 addresses
  • Configuration of ENIs upon hotplug
  • Routing configuration for delegated prefixes

The version 1.x branch of the amazon-ec2-net-utils package was used in Amazon Linux 2 and earlier releases. It has a long history and is tightly coupled to ISC dhclient and initscripts network configuration. Both of these components are deprecated and will not serve as the primary network configuration framework in future releases of Amazon Linux or other distributions. The 2.x branch (released from the main branch in git) is a complete rewrite targeting a more modern network management framework. The rest of this document describes the 2.x branch.

Implementation

amazon-ec2-net-utils leverages systemd-networkd for most of the actual interface configuration, and is primarily responsible for mapping configuration information available via IMDS to systemd-networkd input configuration. It provides event-based configuration via udev rules, with timer based actions in order to detect non event based changes (e.g. secondary IP address assignment). Generated configuration is stored in the /run/ ephemeral filesystem and is not persisted across instance reboots. The generated configuration is expected to be regenerated from scratch upon reboot. Customers can override the behavior of the package by creating configuration files in the local administration network directory /etc/systemd/network as described in systemd-networkd's documentation.
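For example, a customer who wants static addressing on a particular interface could drop an override into the local administration directory. The file name, interface name, and addresses below are illustrative only:

```ini
# /etc/systemd/network/70-eth1.network (illustrative name and addresses).
# Files under /etc/systemd/network take precedence over same-named files
# generated under /run/systemd/network, so this replaces the generated
# configuration for the matched interface.
[Match]
Name=eth1

[Network]
DHCP=no
Address=10.0.1.50/24
Gateway=10.0.1.1
```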

By utilizing a common framework in the form of systemd, the amazon-ec2-net-utils package should be able to integrate with any systemd-based distribution. This allows us to provide customers with a common baseline behavior regardless of whether they choose Amazon Linux or a third-party distribution. Testing has been performed on Debian, Fedora, and Amazon Linux 2023.

Usage

amazon-ec2-net-utils is expected to be pre-installed on Amazon Linux 2023 and future releases. In the common case, customers should not need to be aware of its operation. Configuration of network interfaces should occur following the principle of least astonishment. That is, traffic should be routed via the ENI associated with the source address. Custom configuration should be respected. New ENI attachments should be used automatically, and associated resources should be cleaned up on detachment. Manipulation of an ENI attachment should not impact the functionality of any other ENIs.

Build and install

The recommended way to install amazon-ec2-net-utils is by building a package for your distribution. A spec file and a debian subdirectory are provided and should be reasonably suitable for modern rpm- or dpkg-based distributions. Build dependencies are declared in debian/control and in amazon-ec2-net-utils.spec and can be installed using standard distribution tools (e.g. dpkg-checkbuilddeps and apt, or dnf builddep).

The post-installation scripts in the spec file and/or .deb package will stop NetworkManager or ifupdown, if running, and initialize systemd-networkd and systemd-resolved. The expectation is that amazon-ec2-net-utils will take over and reconfigure a running system, without a reboot, such that it is indistinguishable from a system that booted with amazon-ec2-net-utils.

rpm build and installation

$ mkdir -p rpmbuild/BUILD
$ git -C amazon-ec2-net-utils/ archive main | (cd rpmbuild/BUILD/ && tar xvf -)
$ rpmbuild -bb rpmbuild/BUILD/amazon-ec2-net-utils.spec
$ sudo dnf install rpmbuild/RPMS/noarch/amazon-ec2-net-utils-2.0.0-1.al2022.noarch.rpm

dpkg build and installation

$ dpkg-buildpackage -uc -us -b
$ sudo apt install ../amazon-ec2-net-utils_2.0.0-1_all.deb

Installation verification

$ # inspect the state of the system to verify that networkd is running:
$ networkctl # should report all physical interfaces as "routable" and "configured"
$ networkctl status eth0 # should report "/run/systemd/network/70-eth0.network" as the network conf file
$ resolvectl # show status of systemd-resolved

Example:

[ec2-user@ip-10-0-0-114 ~]$ networkctl
IDX LINK TYPE     OPERATIONAL SETUP
 1 lo   loopback carrier     unmanaged
 2 eth0 ether    routable    configured

2 links listed.
[ec2-user@ip-10-0-0-114 ~]$ networkctl status eth0
● 2: eth0
                    Link File: /usr/lib/systemd/network/99-default.link
                 Network File: /run/systemd/network/70-eth0.network
                         Type: ether
                        State: routable (configured)
            Alternative Names: enp0s5
                               ens5
                         Path: pci-0000:00:05.0
                       Driver: ena
                       Vendor: Amazon.com, Inc.
                        Model: Elastic Network Adapter (ENA)
                   HW Address: 02:c9:76:e3:18:0b
                          MTU: 9001 (min: 128, max: 9216)
                        QDisc: mq
 IPv6 Address Generation Mode: eui64
         Queue Length (Tx/Rx): 2/2
                      Address: 10.0.0.114 (DHCP4 via 10.0.0.1)
                               fe80::c9:76ff:fee3:180b
                      Gateway: 10.0.0.1
                          DNS: 10.0.0.2
            Activation Policy: up
              DHCP4 Client ID: IAID:0xed10bdb8/DUID
            DHCP6 Client DUID: DUID-EN/Vendor:0000ab11a9aa54876c81082a0000

Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: Link UP
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: Gained carrier
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: Gained IPv6LL
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: DHCPv4 address 10.0.0.114/24 via 10.0.0.1
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: Re-configuring with /run/systemd/network/70-eth0.net>
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: DHCP lease lost
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: DHCPv6 lease lost
Sep 01 17:44:54 ip-10-0-0-114.us-west-2.compute.internal systemd-networkd[2042]: eth0: DHCPv4 address 10.0.0.114/24 via 10.0.0.1
[ec2-user@ip-10-0-0-114 ~]$ resolvectl
Global
      Protocols: LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: uplink

Link 2 (eth0)
   Current Scopes: DNS
        Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.0.2
      DNS Servers: 10.0.0.2

Getting help

If you're using amazon-ec2-net-utils as packaged by a Linux distribution, please consider using your distribution's support channels first. Your distribution may have modified the behavior of the package to facilitate better integration, and may have more specific guidance for you.

Alternatively, if you don't believe your issue is distribution specific, please feel free to open an issue on GitHub.

Contributing

We are happy to review proposed changes. If you're considering introducing any major functionality or behavior changes, you may wish to consider opening an issue where we can discuss the details before you proceed with implementation. Please refer to CONTRIBUTING.md for additional expectations.

amazon-ec2-net-utils's People

Contributors

brddan, keithaws, markmartirosian, nmeyerhans, vigh-m


amazon-ec2-net-utils's Issues

Circle CI is failing to clone this repo

Circle CI reports that it is unable to clone the repository, with the following error:

error cloning repository: unknown error: ERROR: You're using an RSA key with SHA-1, which is no longer allowed. Please use a newer client or a different key type.

This is discussed in the Circle CI forums.

Git and OpenSSH can clone this repository from GitHub within an amazonlinux:2 container without issue, so it's not entirely clear where the problem needs to be addressed.

Package should use systemd presets instead of enabling/disabling services

The spec currently contains these lines:

    systemctl enable systemd-networkd.service
    systemctl enable systemd-resolved.service
    systemctl disable NetworkManager-wait-online.service
    systemctl disable NetworkManager.service

Rather than do this, the package should create a systemd preset file at %{_presetdir}/80-amazon-ec2-net-utils.preset (where %{_presetdir} is defined to be /usr/lib/systemd/system-preset on Enterprise Linux) which looks like this:

enable systemd-networkd.service
enable systemd-resolved.service
disable NetworkManager-wait-online.service
disable NetworkManager.service

The script in the spec would then need to run systemctl preset on each of those services instead (and probably run systemctl daemon-reload beforehand), since it is also expected to start the services.
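A hypothetical sketch of such a scriptlet (unit names taken from the preset above; not the project's actual spec):

```
%post
# Reload unit files so the newly installed preset is visible, then apply
# the preset policy to each affected unit instead of enabling/disabling
# them directly.
systemctl daemon-reload
for unit in systemd-networkd.service systemd-resolved.service \
            NetworkManager-wait-online.service NetworkManager.service; do
    systemctl preset "$unit"
done
```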

The reason for using presets instead of having the script directly enable/disable services is that it persists the preference/requirement that systemd-networkd is used instead of NetworkManager. If another preset on the system with a lower priority (which may be provided by the OS vendor, a package, or the sysadmin) expresses the preference to enable NetworkManager either directly or indirectly in a preset, then this package will stop working correctly once the preset is applied (which could happen when packages are installed/upgraded or deliberately via systemctl preset-all). Presets are how the Fedora packaging guidelines now recommend doing this (and these guidelines would apply to Enterprise Linux as well). With a preset in place, NetworkManager will not be automatically enabled, and only an explicit enable would cause that to happen.

(Edited: I typed the priority as 98 originally because that's what I had been using all day, but this would have to be <90 since the OS defaults are 90-* and systemd uses whatever comes first and doesn't allow later ones to override it)

1.x: repeatedly deletes and creates rules for delegated IPv6 prefixes

When monitoring the routing state with ip mon, we can see that the prefix rules are deleted and recreated with every DHCPv6 lease renewal:

Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal dhclient[9263]: PRC: Renewing lease on eth1.
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal dhclient[9263]: XMT: Renew on eth1, interval 10710ms.
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal dhclient[9263]: RCV: Reply message on eth1 from fe80::444:bbff:fe4e:186b.
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 3: eth1    inet6 2600:1f14:eeb:22ff:e08b:498b:18eb:9ba4/128 scope global dynamic
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: valid_lft 450sec preferred_lft 140sec
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted fe80::444:bbff:fe4e:186b dev eth1 lladdr 06:44:bb:4e:18:6b router STALE
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted ff02::1:2 dev eth1 lladdr 33:33:00:01:00:02 NOARP
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted ff02::16 dev eth1 lladdr 33:33:00:00:00:16 NOARP
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ec2net[25668]: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/06:89:d4:90:7a:e9/ipv6-prefix
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ec2net[25669]: [get_meta] Getting token for IMDSv2.
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ec2net[25672]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/06:89:d4:90:7a:e9/ipv6-prefix
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28158:        from 2600:1f14:eeb:22ff:3a73::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28157:        from 2600:1f14:eeb:22ff:4a0::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28156:        from 2600:1f14:eeb:22ff:4c62::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28155:        from 2600:1f14:eeb:22ff:4e62::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28154:        from 2600:1f14:eeb:22ff:5ade::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28153:        from 2600:1f14:eeb:22ff:5cbc::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28152:        from 2600:1f14:eeb:22ff:5d27::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28151:        from 2600:1f14:eeb:22ff:5eac::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28150:        from 2600:1f14:eeb:22ff:6d07::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28149:        from 2600:1f14:eeb:22ff:6f06::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28148:        from 2600:1f14:eeb:22ff:6f7a::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28147:        from 2600:1f14:eeb:22ff:6f68::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28146:        from 2600:1f14:eeb:22ff:7a3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28145:        from 2600:1f14:eeb:22ff:7bf6::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28144:        from 2600:1f14:eeb:22ff:7c41::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28143:        from 2600:1f14:eeb:22ff:8bd3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28142:        from 2600:1f14:eeb:22ff:9de4::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28141:        from 2600:1f14:eeb:22ff:32fe::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28140:        from 2600:1f14:eeb:22ff:60ff::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28139:        from 2600:1f14:eeb:22ff:71a4::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28138:        from 2600:1f14:eeb:22ff:72cd::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28137:        from 2600:1f14:eeb:22ff:90fe::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28136:        from 2600:1f14:eeb:22ff:97c6::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28135:        from 2600:1f14:eeb:22ff:220e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28134:        from 2600:1f14:eeb:22ff:882e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28133:        from 2600:1f14:eeb:22ff:3203::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28132:        from 2600:1f14:eeb:22ff:4568::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28131:        from 2600:1f14:eeb:22ff:8873::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28130:        from 2600:1f14:eeb:22ff:a7d3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28129:        from 2600:1f14:eeb:22ff:ac51::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28128:        from 2600:1f14:eeb:22ff:af07::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28127:        from 2600:1f14:eeb:22ff:af82::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28126:        from 2600:1f14:eeb:22ff:b0a0::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28125:        from 2600:1f14:eeb:22ff:b473::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28124:        from 2600:1f14:eeb:22ff:c3b3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28123:        from 2600:1f14:eeb:22ff:c9b4::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28122:        from 2600:1f14:eeb:22ff:d2ac::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28121:        from 2600:1f14:eeb:22ff:d6ac::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28120:        from 2600:1f14:eeb:22ff:d8ce::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28119:        from 2600:1f14:eeb:22ff:d8df::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28118:        from 2600:1f14:eeb:22ff:d16b::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28117:        from 2600:1f14:eeb:22ff:d41e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28116:        from 2600:1f14:eeb:22ff:d889::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28115:        from 2600:1f14:eeb:22ff:dabd::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28114:        from 2600:1f14:eeb:22ff:dd91::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28113:        from 2600:1f14:eeb:22ff:f8cc::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28112:        from 2600:1f14:eeb:22ff:fa73::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28111:        from 2600:1f14:eeb:22ff:fe0e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: 28110:        from 2600:1f14:eeb:22ff:ffa8::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28205:        from 2600:1f14:eeb:22ff:4c62::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28196:        from 2600:1f14:eeb:22ff:6f68::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28178:        from 2600:1f14:eeb:22ff:ac51::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28177:        from 2600:1f14:eeb:22ff:af07::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28163:        from 2600:1f14:eeb:22ff:dd91::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28181:        from 2600:1f14:eeb:22ff:4568::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28201:        from 2600:1f14:eeb:22ff:5d27::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28195:        from 2600:1f14:eeb:22ff:7a3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28179:        from 2600:1f14:eeb:22ff:a7d3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28192:        from 2600:1f14:eeb:22ff:8bd3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28198:        from 2600:1f14:eeb:22ff:6f06::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28162:        from 2600:1f14:eeb:22ff:f8cc::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28200:        from 2600:1f14:eeb:22ff:5eac::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28165:        from 2600:1f14:eeb:22ff:d889::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28160:        from 2600:1f14:eeb:22ff:fe0e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28194:        from 2600:1f14:eeb:22ff:7bf6::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28188:        from 2600:1f14:eeb:22ff:71a4::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28183:        from 2600:1f14:eeb:22ff:882e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28173:        from 2600:1f14:eeb:22ff:c3b3::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28159:        from 2600:1f14:eeb:22ff:ffa8::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28167:        from 2600:1f14:eeb:22ff:d16b::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28206:        from 2600:1f14:eeb:22ff:4a0::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28191:        from 2600:1f14:eeb:22ff:9de4::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28182:        from 2600:1f14:eeb:22ff:3203::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28202:        from 2600:1f14:eeb:22ff:5cbc::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28180:        from 2600:1f14:eeb:22ff:8873::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28203:        from 2600:1f14:eeb:22ff:5ade::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28207:        from 2600:1f14:eeb:22ff:3a73::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28171:        from 2600:1f14:eeb:22ff:d2ac::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28193:        from 2600:1f14:eeb:22ff:7c41::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28204:        from 2600:1f14:eeb:22ff:4e62::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28170:        from 2600:1f14:eeb:22ff:d6ac::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28185:        from 2600:1f14:eeb:22ff:97c6::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28187:        from 2600:1f14:eeb:22ff:72cd::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28168:        from 2600:1f14:eeb:22ff:d8df::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28175:        from 2600:1f14:eeb:22ff:b0a0::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28161:        from 2600:1f14:eeb:22ff:fa73::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28176:        from 2600:1f14:eeb:22ff:af82::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28184:        from 2600:1f14:eeb:22ff:220e::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28189:        from 2600:1f14:eeb:22ff:60ff::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28172:        from 2600:1f14:eeb:22ff:c9b4::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28174:        from 2600:1f14:eeb:22ff:b473::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28169:        from 2600:1f14:eeb:22ff:d8ce::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28164:        from 2600:1f14:eeb:22ff:dabd::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28199:        from 2600:1f14:eeb:22ff:6d07::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28197:        from 2600:1f14:eeb:22ff:6f7a::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28190:        from 2600:1f14:eeb:22ff:32fe::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28186:        from 2600:1f14:eeb:22ff:90fe::/80 lookup 10001
Sep 20 21:00:36 ip-10-0-1-115.us-west-2.compute.internal ip[25632]: Deleted 28166:        from 2600:1f14:eeb:22ff:d41e::/80 lookup 10001

Only one secondary private IP configured after assigning multiple secondary private IPs to an ENI

I assigned multiple secondary private IPs to my EC2 instance (Amazon Linux 2) and restarted the network service (service network restart), but "ip addr" shows only one private IP.

I found that get_secondary_ipv4s returns only part of the array via ${ipv4s[@]:1}; if I change it to ${ipv4s[@]:2} or another index, other private IPs are returned. I think this may be a bug.

get_secondary_ipv4s() {
    ipv4s=($(get_ipv4s))
    ec=$?
    # Skip element 0 (the primary address) and return the rest.
    echo "${ipv4s[@]:1}"
    return $ec
}
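For what it's worth, the ${ipv4s[@]:1} expansion should already return every element after the first, not just one. A standalone sketch with made-up addresses demonstrates the slicing behavior:

```shell
#!/bin/bash
# Illustration with made-up addresses (not the installed code): bash's
# ${arr[@]:1} expands to all elements from index 1 onward, i.e. every
# address except the primary one.
ipv4s=(10.0.0.10 10.0.0.11 10.0.0.12)
echo "${ipv4s[@]:1}"  # prints: 10.0.0.11 10.0.0.12
```

If all addresses are present in IMDS but only one appears on the interface, the slice itself is unlikely to be the culprit.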

1.x: Race condition at boot can cause instance to miss ENI attachments

Amazon Linux 2 instances run cloud-init at launch with a default configuration that installs pending security updates. In the case where systemd is one of the packages being updated, there is a short period of time in which systemd-udevd is not listening for kernel uevents. If an ENI is attached during that time window, the ec2-net-utils udev rule is never triggered and the interface is not configured.

The following log was taken from an instance that experienced impact from this scenario. A potential solution is to run the ec2net-scan service with a systemd timer.

Nov  3 23:08:42 ip-172-16-58-243 yum[2437]: Updated: systemd-libs-219-78.amzn2.0.20.x86_64
Nov  3 23:08:42 ip-172-16-58-243 systemd: Closed udev Control Socket.
Nov  3 23:08:42 ip-172-16-58-243 systemd: Closed udev Kernel Socket.
Nov  3 23:08:42 ip-172-16-58-243 systemd: Stopping udev Kernel Device Manager...
Nov  3 23:08:42 ip-172-16-58-243 systemd: Stopped udev Kernel Device Manager.
Nov  3 23:08:42 ip-172-16-58-243 dbus[1778]: [system] Reloaded configuration
Nov  3 23:08:43 ip-172-16-58-243 kernel: pci 0000:00:06.0: [1d0f:ec20] type 00 class 0x020000
Nov  3 23:08:43 ip-172-16-58-243 kernel: pci 0000:00:06.0: reg 0x10: [mem 0x00000000-0x00003fff]
Nov  3 23:08:43 ip-172-16-58-243 kernel: pci 0000:00:06.0: BAR 0: assigned [mem 0x80000000-0x80003fff]
Nov  3 23:08:43 ip-172-16-58-243 kernel: ena 0000:00:06.0: enabling device (0000 -> 0002)
Nov  3 23:08:43 ip-172-16-58-243 kernel: ena 0000:00:06.0: ENA device version: 0.10
Nov  3 23:08:43 ip-172-16-58-243 kernel: ena 0000:00:06.0: ENA controller version: 0.0.1 implementation version 1
Nov  3 23:08:43 ip-172-16-58-243 kernel: ena 0000:00:06.0: LLQ is not supported Fallback to host mode policy.
Nov  3 23:08:43 ip-172-16-58-243 kernel: ena 0000:00:06.0: Elastic Network Adapter (ENA) found at mem 80000000, mac addr 06:38:13:48:56:f3
Nov  3 23:08:43 ip-172-16-58-243 dbus[1778]: [system] Reloaded configuration
Nov  3 23:08:43 ip-172-16-58-243 dbus[1778]: [system] Reloaded configuration
Nov  3 23:08:43 ip-172-16-58-243 cloud-init: Updating   : systemd-libs-219-78.amzn2.0.20.x86_64                       2/12
Nov  3 23:08:43 ip-172-16-58-243 systemd: Reexecuting.
Nov  3 23:08:43 ip-172-16-58-243 systemd: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Nov  3 23:08:43 ip-172-16-58-243 systemd: Detected virtualization amazon.
Nov  3 23:08:43 ip-172-16-58-243 systemd: Detected architecture x86-64.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Unset automount Arbitrary Executable File Formats File System Automount Point.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Stopping Arbitrary Executable File Formats File System Automount Point.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Listening on udev Control Socket.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Listening on udev Kernel Socket.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Starting udev Kernel Device Manager...
Nov  3 23:08:44 ip-172-16-58-243 systemd-udevd: starting version 219
Nov  3 23:08:44 ip-172-16-58-243 systemd-udevd: Network interface NamePolicy= disabled on kernel command line, ignoring.
Nov  3 23:08:44 ip-172-16-58-243 systemd: Started udev Kernel Device Manager.
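The suggested mitigation could take the form of a timer unit along these lines (hypothetical sketch; the ec2net-scan service name comes from the proposal above):

```ini
# ec2net-scan.timer (hypothetical)
[Unit]
Description=Periodically scan for ENIs missed by udev

[Timer]
OnBootSec=1min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```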

2.4.0-1.amzn2023.0.1 breaks docker connectivity inside host

While updating to 2.4.0 of amazon-ec2-net-utils (via a newly released AMI), I found that docker containers using bridge networking with published ports are no longer reachable from the host itself. They can be reached just fine from outside the host. Downgrading to 2.3.0 fixed the problem.

At first glance, I do not understand what might be misconfigured.

How to reproduce:
Instance type: t4g.micro
AMI: amazon/al2023-ami-minimal-2023.2.20231002.0-kernel-6.1-arm64 (amazon-ec2-net-utils-2.4.0-1.amzn2023.0.1.noarch)

$ dnf install docker
$ docker run -p 5678:80 ealen/echo-server
$ curl -I --header 'X-ECHO-CODE: 404' <host ip>:5678

The connection hangs. Connections from outside the host work with both versions of ec2-net-utils.

I'd be happy to give any additional details.

ec2-net-utils deleting custom ip rules upon state change

OS: Debian 12.1
Version: 2.3.0

Any custom ip rules that I've created seem to get deleted whenever an interface or secondary IP is added/removed from the instance.

For example, I'm using the following rule to mark packets from a specific source IP so I can take action in iptables based on that mark.

ip rule add from 10.0.254.116 fwmark 10001 table 10001

Whenever there's a state change on the interfaces (additional interfaces added or removed, or additional IP addresses added or removed) it looks like all of the ip rules are deleted and rebuilt.

Is there a way to make the rule persist or at least be recreated when ec2-net-utils refreshes all the policies?
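One possible workaround (an assumption on my part, not confirmed by the maintainers): express the rule declaratively in a networkd drop-in under /etc/systemd/network so that networkd itself reapplies it. The file name below is illustrative:

```ini
# /etc/systemd/network/70-eth0.network.d/fwmark.conf (illustrative name)
[RoutingPolicyRule]
From=10.0.254.116
FirewallMark=10001
Table=10001
```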

add support for multiple NetworkCardIndex values

ec2-net-utils currently has a built-in assumption that the per-interface device-number value is unique across the instance. On most instance types, where the per-interface network-card API returns a 404 HTTP response, that is the case. However, certain instance types with EFA support, such as the p5 family or the c6in.32xlarge, support ENI attachments with defined NetworkCardIndex values. On these instance types, network-card is defined for each interface exposed in IMDS, and device-number is only unique within the scope of a given network-card value. In other words:

[ec2-user@ip-10-0-0-186 ~]$ for a in $(cat /sys/class/net/ens*/address); do echo $a; echo -n "device-number: " ; get_iface_imds "$a" device-number; echo -n "network-card: "; get_iface_imds "$a" network-card;  done
02:5e:c3:d7:22:4b
device-number: 0   <-- device-number 0
network-card: 0
02:8f:5f:f9:92:55
device-number: 1
network-card: 0
02:94:bf:87:a6:53
device-number: 0   <-- also device-number 0
network-card: 1

In the above example, there are two interfaces with device-number == 0, which is not handled properly by amazon-ec2-net-utils.
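
One way to remove the uniqueness assumption would be to key interfaces on the (network-card, device-number) pair rather than device-number alone. A minimal sketch; the combining scheme and function name are hypothetical, not the project's actual approach:

```shell
#!/bin/sh
# Derive an instance-unique index from the (network-card, device-number)
# pair. Assumes fewer than 100 devices per network card.
unique_device_index() {
    network_card="$1"
    device_number="$2"
    echo $(( network_card * 100 + device_number ))
}

# The two device-number==0 interfaces above now map to distinct values:
unique_device_index 0 0   # -> 0
unique_device_index 1 0   # -> 100
```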

Going from v1.7.0 to v1.7.1 caused instance to be unreachable

After updating a t3 instance (AWS Linux 2) to v1.7.1 (and rebooting) it started as normal but was unreachable, either from private or public (elastic) IP addresses.

Logging in via the serial tty, I could connect from the instance to the public internet without any trouble; it was only inbound connections that seemed affected.

Tried disassociating/re-associating the instance's elastic IP, as well as a full stop-then-start cycle, but that had no effect.

Tried restoring a backup copy of the instance -- this came online fine, but after a yum update it showed the exact same behaviour (reachable only via serial tty).

Performing a yum downgrade to v. 1.7.0 restored the instance's visibility.

Note that this is not a general bug, as I tested the update on a near-identical t3 dev server first and it seemed to work fine. It only caused the issue with the live server (of course). I lack the networking smarts to pinpoint the exact combination of circumstances that triggers the issue, but I'm reporting it in case it helps someone else.

1.x: calls `ec2ifup` for vlan interfaces

While investigating #48, I noticed that vlan interfaces trigger other issues related to the udev rules. In particular, the rules from 53-ec2-network-interfaces.rules still fire for vlan devices, leading to systemd invoking ec2ifup for the device. However, since most of the work that ec2ifup does is driven by data from IMDS, the resulting configuration is incomplete. Unfortunately, it gets far enough into the interface configuration that it starts dhclient, which makes me suspect that fixing this problem may break things for some people.

Expected behavior

Undefined

Actual behavior

The interface is partially configured. Because it is configured far enough that dhclient is invoked, some customers may be relying on this behavior, despite it not being explicitly intended.

Logs when creating a vlan interface are similar to the following:

$ sudo ip link add link eth0 eth0.5 type vlan id 5
$ journalctl
...
Apr 25 23:09:26 ip-10-0-0-164.us-west-2.compute.internal systemd[1]: Started Enable elastic network interfaces eth0.5.
Apr 25 23:09:26 ip-10-0-0-164.us-west-2.compute.internal ec2ifup[32521]: /etc/sysconfig/network-scripts/ec2net-functions-lib: line 61: let: 0.5: syntax error: invalid arithmetic operator (error token is ".5")
Apr 25 23:09:26 ip-10-0-0-164.us-west-2.compute.internal ec2net[32525]: [plug_interface] eth0.5 plugged
Apr 25 23:09:26 ip-10-0-0-164.us-west-2.compute.internal ec2net[32526]: [rewrite_primary] Rewriting configs for eth0.5
Apr 25 23:09:26 ip-10-0-0-164.us-west-2.compute.internal ec2net[32530]: [activate_primary] Activating eth0.5
Apr 25 23:09:26 ip-10-0-0-164.us-west-2.compute.internal dhclient[32548]: DHCPDISCOVER on eth0.5 to 255.255.255.255 port 67 interval 6 (xid=0x7747ae7e)

ec2ifup has generated an ifcfg-eth0.5:

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0.5 
DEVICE=eth0.5
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
USERCTL=yes
PEERDNS=no
IPV6INIT=yes
DHCPV6C=yes
DHCPV6C_OPTIONS=-nw
PERSISTENT_DHCLIENT=yes
HWADDR=02:76:d3:3f:ec:db
DEFROUTE=no
EC2SYNC=yes
MAINROUTETABLE=yes

But it hasn't created any /etc/sysconfig/network-scripts/route* files.
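
One possible direction for a fix, as an untested sketch: short-circuit the 1.x rules for vlan devices, which the kernel tags with DEVTYPE=vlan. The GOTO label is hypothetical and would need to match whatever 53-ec2-network-interfaces.rules actually uses:

```
# Skip virtual vlan devices; they have no IMDS-backed configuration
SUBSYSTEM=="net", ENV{DEVTYPE}=="vlan", GOTO="ec2net_end"
```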

cloud-init does not support IPv6 interfaces on AWS EC2 instances

While testing the same cloud-init binary, we see that the IPv6 address assigned to the AWS EC2 instance is not reflected inside the OS. To make it visible we had to add the following two entries to ifcfg-eth0:

echo "DHCPV6C=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "IPV6INIT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network

After that the IP is visible on eth0, but since the proper routes are not added it's not pingable on the network.

So we see this as a bug in cloud-init. I do not see this issue in Oracle Enterprise Linux 7.x.

Current version of cloud-init is 19.4.

[Feature Request] - refresh-policy-routes systemd timer emits lots of noisy journald log entries

The refresh-policy-routes systemd timer unit triggers the refresh-policy-routes service unit (which runs /usr/bin/setup-policy-routes) roughly once every 2 minutes. That adds up to a lot of noisy, repeated log messages in journald, making it harder to scan the logs for our important system and application log messages.

systemd itself emits many messages due to the starting and ending of various services (including itself), and the setup-policy-routes script also emits debug logs that are mostly unnecessary. Here's what shows up in our logs every 2 minutes:

Starting refresh-policy-routes@ens5.service - Set up policy routes for ens5...
Got IMDSv2 token from http://169.254.169.254/latest
Starting configuration for ens5
/lib/systemd/systemd-networkd-wait-online ens5
[get_meta] Querying IMDS for mac
Configuring ens5 as primary
Using existing cfgfile /run/systemd/network/70-ens5.network
[get_meta] Querying IMDS for network/interfaces/macs/XX:XX:XX:XX:XX:XX/local-ipv4s
refresh-policy-routes@ens5.service: Deactivated successfully.
Finished refresh-policy-routes@ens5.service - Set up policy routes for ens5.
refresh-policy-routes@ens5.service: Consumed 31ms CPU time, received 1.5K IP traffic, sent 1.3K IP traffic.

I can think of a few possible solutions, not sure if applicable:

  • Run the script/program as a long running systemd service, instead of invoking repeatedly on a timer, to stop the repeated messages from systemd itself about the lifecycle of the service.
  • Only emit warning/error messages from the script, not debug/info messages, by default.

In the meantime, is it safe to disable this service? I'm not sure what it does and I couldn't find any documentation/references online or a description in the script itself, and I haven't had time to dive deep into the code yet.
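
As a possible interim mitigation (a sketch; the template unit name is inferred from the log excerpt above), journald noise from the unit's own processes can be capped with a drop-in rather than disabling the timer outright:

```
# /etc/systemd/system/refresh-policy-routes@.service.d/quiet.conf
[Service]
# Discard journal messages below warning priority for this unit
LogLevelMax=warning
```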

Improve testing in the main branch

The 1.x branch has a fairly large test suite that has been helpful in avoiding regressions in some recent development work. The main branch needs something similar beyond just shellcheck.

interface disappearing

Hi,
We're running version 1.6.1 on Amazon Linux 2, with instances provisioned with static private IPs through CloudFormation. Sometimes an interface gets into a bad state: ip addr displays the interface but without an assigned IP address. Restarting the network service brings it back. I have instances running 1.5.3 and I'm not seeing this issue there. I'm not sure if this has already been reported. Is the tool causing this issue? It feels like IMDSv2 is failing because of the token. How is that provided to the tool?

Each interface config file has this entry.
BOOTPROTO=dhcp

from /var/log/messages
May 25 09:41:56 test-server dhclient[2953]: DHCPREQUEST on eth1 to xxx.xxx.xxx.xxx port 67 (xid=0xb7ed6ba)
May 25 09:41:56 test-server dhclient[2953]: DHCPACK from xxx.xx.xx.x (xid=0xb7ed6ba)
May 25 09:41:56 test-server ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/xx:xx:xx:xx:xx:xx/local-ipv4s
May 25 09:41:56 test-server ec2net: [get_meta] Getting token for IMDSv2.
May 25 09:42:26 test-server ec2net: [get_meta] Failed to get IMDSv2 metadata token after 60 attempts... Aborting
May 25 09:42:26 test-server ec2net: [rewrite_rules] Could not get IPv4 addresses for eth1 from IMDS. Aborting
May 25 09:42:26 test-server ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/xx:xx:xx:xx:xx:xx/local-ipv4s
May 25 09:42:26 test-server ec2net: [get_meta] Getting token for IMDSv2.
May 25 09:42:26 test-server dhclient[2953]: bound to xxx.xx.xx.xxx -- renewal in 1588 seconds.
May 25 09:42:57 test-server ec2net: [get_meta] Failed to get IMDSv2 metadata token after 60 attempts... Aborting
May 25 09:42:57 test-server ec2net: [rewrite_aliases] Failed to get secondary IPs from IMDS. Aborting

Recent changes to this repository

With the availability of Amazon Linux 2022 in the amazonlinux GitHub organization, I'd like to propose changes to this repository to make it more consistent with the other packages included with Amazon Linux.

  1. The repository should move from aws to amazonlinux in GitHub. Moving a repository automatically creates redirects from the old location, so existing references will continue to work.
  2. The existing master branch will be retired. A new 1.x branch will be created from this branch, and releases targeting Amazon Linux 2 will be made from this branch.
  3. A new main branch will be created. This will be the default destination for pull requests and will be the branch from which 2.x tags, targeting Amazon Linux 2022 and other reasonably modern Linux distributions, will be created.

Any additional suggestions for structural or organizational changes to this repository are welcome at this time.

CC @fred-lefebvre @halfdime-code @stewartsmith

Using device number from IMDS before propagation

When a secondary ENI is attached to an EC2 instance shortly after instance launch, setup_interface() in lib.sh will sometimes fetch the wrong device number from IMDS for the newly-attached interface. This results in conflicting default routes.

There is a comment calling out this possibility in the start condition:

# Ideally we'd use the device-number interface property from IMDS,
# but interface details take some time to propagate, and IMDS
# reports 0 for the device-number prior to propagation...

But then in setup_interface we proceed to do exactly what we were supposed to avoid:

device_number=$(get_iface_imds "$ether" device-number)

changes+=$(create_interface_config "$iface" "$device_number" "$ether")

This leads to identical eni.conf contents for the two interfaces:

[root@ip-172-31-65-182 ~]# cat /run/systemd/network/70-eth0.network.d/eni.conf
# Configuration for eth0 generated by policy-routes@eth0.service
[Match]
MACAddress=0e:63:0e:1f:54:e9
[Network]
DHCP=yes
[DHCPv4]
RouteMetric=512
[DHCPv6]
RouteMetric=512

[root@ip-172-31-65-182 ~]# cat /run/systemd/network/70-eth1.network.d/eni.conf
# Configuration for eth1 generated by policy-routes@eth1.service
[Match]
MACAddress=0e:62:4f:22:be:57
[Network]
DHCP=yes
[DHCPv4]
RouteMetric=512
[DHCPv6]
RouteMetric=512

And then conflicting default routes, which breaks a whole bunch of things.

[root@ip-172-31-65-182 ~]# ip route show table main to default
default via 198.19.64.1 dev eth0 proto dhcp src 198.19.93.32 metric 512
default via 10.0.32.1 dev eth1 proto dhcp src 10.0.34.196 metric 512
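
A defensive sketch of the retry the comment alludes to: keep polling until IMDS reports a nonzero device-number. This is only valid for secondary (hotplugged) ENIs, where the real device-number is never 0; the fetcher is passed in as a command so it can be stubbed here, whereas the real code would call get_iface_imds:

```shell
#!/bin/sh
# Poll until the supplied fetcher returns a nonzero device-number, treating
# 0 as "not yet propagated". Applies only to secondary ENIs.
wait_for_device_number() {
    max_tries="$1"; shift
    try=0
    while [ "$try" -lt "$max_tries" ]; do
        val=$("$@")
        if [ -n "$val" ] && [ "$val" != "0" ]; then
            echo "$val"
            return 0
        fi
        try=$((try + 1))
        sleep 0   # the real loop would sleep/back off between attempts
    done
    return 1
}
```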

2.5.0 reloads network when running userdata scripts

When an EC2 instance boots with userdata that downloads a file from S3, amazon-ec2-net-utils is started, along with the credential refresher loop.
cloud-init runs aws s3 cp to fetch the file, and while the download is running, amazon-ec2-net-utils reloads networkd and the download fails.

Support of CentOS7

Hi,

Currently we do not see support available for the CentOS 7 AMI. Can we get the steps to build the RPM and install it manually in the AMI, so that the interfaces are configured properly when we create the EC2 instance?

Regards,
Gaurav

1.x: Add optional provisioning of ipv4 delegated prefix IPs

When support for prefix delegation was added to 1.x, it was limited to updating routing rules, with the comment "Assignment of individual addresses within the prefix is out of scope for this package, and would instead be handled by a local IPAM system. A typical such system might provision IP addresses for use by locally running containers, etc."

The problem is that if we are using prefixes on e.g. an ec2 iptables NAT instance and provisioning IPs from the delegated range locally, they are removed whenever the ec2net functions run, multiple times an hour. It would be preferable if there was an option to have at least ipv4 addresses inside delegated prefixes detected and managed like other secondary addresses are.

PR incoming.
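
For reference, expanding an IPv4 delegated prefix into its individual addresses is simple arithmetic; a sketch of the kind of expansion such an option would perform (pure POSIX shell, assumes a well-formed CIDR; not the project's actual code):

```shell
#!/bin/sh
# Expand an IPv4 CIDR prefix (e.g. 10.0.4.16/28) into its addresses,
# the way an IPAM layer might before assigning them as secondaries.
expand_ipv4_prefix() {
    cidr="$1"
    ip=${cidr%/*}
    len=${cidr#*/}
    # Convert the dotted quad to a 32-bit integer
    oldIFS=$IFS; IFS=.
    set -- $ip
    IFS=$oldIFS
    base=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    count=$(( 1 << (32 - len) ))
    i=0
    while [ "$i" -lt "$count" ]; do
        n=$((base + i))
        echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
        i=$((i + 1))
    done
}

expand_ipv4_prefix 10.0.4.16/30
```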

Persistent naming rules are not generated for ENA devices

It looks like the latest ec2-net-utils in Amazon Linux 2 does not correctly generate /etc/udev/rules.d/70-persistent-net.rules when new interfaces are added.

Bad case

On an m5.large running the latest AL2 AMI, the persistent naming rules don't run in response to attaching two interfaces:

[ec2-user@ip-172-31-4-222 ~]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0a:49:16:62:c5:1f brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0a:11:9f:00:f4:8b brd ff:ff:ff:ff:ff:ff
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0a:94:de:7d:70:79 brd ff:ff:ff:ff:ff:ff
[ec2-user@ip-172-31-4-222 ~]$ ll /etc/udev/rules.d/  
total 32  
-rw-r--r-- 1 root root  294 Feb 14 20:22 51-ec2-hvm-devices.rules  
-rw-r--r-- 1 root root  575 Feb 14 20:22 51-ec2-xen-vbd-devices.rules  
-rw-r--r-x 1 root root  300 Feb 14 20:22 52-ec2-vcpu.rules  
-rw-r--r-- 1 root root  442 Mar 18 21:00 53-ec2-network-interfaces.rules  
-rw-r--r-- 1 root root  487 Feb 14 20:22 53-ec2-read-ahead-kb.rules  
-rw-r--r-- 1 root root  337 Feb 14 20:22 60-cdrom_id.rules  
-rw-r--r-- 1 root root 1603 Feb 14 20:22 70-ec2-nvme-devices.rules  
-rw-r--r-- 1 root root 1218 Mar 18 21:00 75-persistent-net-generator.rules

I suspect this is due to the change introduced in #21, i.e.


# do not create rule if interface uses unrecognized driver
ENV{ID_NET_DRIVER}!="vif|ena|ixgbevf", GOTO="persistent_net_generator_end"

These devices use the ena driver, but the udev event does not include that information:

[ec2-user@ip-172-31-4-222 ~]$ lspci -k | grep -A 2 Ethernet
00:05.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA)
        Kernel driver in use: ena
        Kernel modules: ena
00:06.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA)
        Kernel driver in use: ena
        Kernel modules: ena
00:07.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA)
        Kernel driver in use: ena
        Kernel modules: ena
KERNEL[368.302913] add      /devices/pci0000:00/0000:00:06.0/net/eth1 (net)  
ACTION=add  
DEVPATH=/devices/pci0000:00/0000:00:06.0/net/eth1  
IFINDEX=4  
INTERFACE=eth1  
SEQNUM=1432  
SUBSYSTEM=net

Good case

If I downgrade ec2-net-utils to 1.5-3.amzn2, the rules are generated:

[ec2-user@ip-172-31-12-135 ~]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0a:c6:95:75:f4:81 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0a:11:9f:00:f4:8b brd ff:ff:ff:ff:ff:ff
[ec2-user@ip-172-31-12-135 ~]$ ls /etc/udev/rules.d/  
51-ec2-hvm-devices.rules      52-ec2-vcpu.rules                53-ec2-read-ahead-kb.rules  70-ec2-nvme-devices.rules  75-persistent-net-generator.rules  
51-ec2-xen-vbd-devices.rules  53-ec2-network-interfaces.rules  60-cdrom_id.rules           70-persistent-net.rules  
[ec2-user@ip-172-31-12-135 ~]$ cat /etc/udev/rules.d/70-persistent-net.rules    
# This file was automatically generated by the /usr/lib/udev/write_net_rules  
# program, run by the persistent-net-generator.rules rules file.  
#  
# You can modify it, as long as you keep each rule on a single  
# line, and change only the value of the NAME= key.  
  
# elastic network interface  
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:11:9f:00:f4:8b", KERNEL=="eth*", NAME="eth1"  
  
# elastic network interface  
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:94:de:7d:70:79", KERNEL=="eth*", NAME="eth2"
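
A possible direction for a fix, as an untested sketch: have the generator rules populate ID_NET_DRIVER themselves before the driver check, since udev's net_setup_link builtin derives the driver from the device rather than relying on the hotplug event's environment:

```
# Ensure ID_NET_DRIVER is set even when the uevent does not carry it
SUBSYSTEM=="net", ACTION=="add", IMPORT{builtin}="net_setup_link"
ENV{ID_NET_DRIVER}!="vif|ena|ixgbevf", GOTO="persistent_net_generator_end"
```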

Intermittent IPv4 setup failures with 2.1.0

Issue was observed during testing prior to packaging for Amazon Linux 2022. Occasionally, instances would fail to boot properly with the following messages logged:

Jun 10 20:00:05 localhost systemd-networkd[1485]: ens5: DHCPv4 address 10.0.0.11/24, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 10 20:00:05 localhost systemd-networkd[1485]: ens5: Could not set DHCPv4 route: Nexthop has invalid gateway. Network is unreachable
Jun 10 20:00:05 localhost systemd-networkd[1485]: ens5: Failed

When this happens, routes are not properly added to the main route table:

[root@ip-10-0-0-75 ~]# ip ro show table main
[root@ip-10-0-0-75 ~]# ip ro get 169.254.169.254
RTNETLINK answers: Network is unreachable
[root@ip-10-0-0-75 ~]# ip ro get 169.254.169.254 from 10.0.0.75
169.254.169.254 from 10.0.0.75 via 10.0.0.1 dev ens5 table 10005 uid 0 
    cache 

Although the rule and secondary table are set up correctly, the lack of a route in the main table means a lookup with an unspecified source will fail before the policy configuration is ever consulted.

This happens intermittently.

[1.x] ec2-net-utils probes for nonexistent local-ipv4s key in IPv6-only subnets

Instance is Amazon Linux 2. Secondary ENI is in an IPv6-only subnet and attached as a secondary ENI. The following is logged:

Mar 24 22:01:27 ip-10-0-0-244.us-west-2.compute.internal ec2net[13788]: [plug_interface] eth1 plugged
Mar 24 22:01:27 ip-10-0-0-244.us-west-2.compute.internal ec2net[13789]: [rewrite_primary] Rewriting configs for eth1
Mar 24 22:01:27 ip-10-0-0-244.us-west-2.compute.internal ec2net[13793]: [activate_primary] Activating eth1
Mar 24 22:01:28 ip-10-0-0-244.us-west-2.compute.internal kernel: ena 0000:00:06.0 eth1: Local page cache is disabled for less than 16 channels
Mar 24 22:01:28 ip-10-0-0-244.us-west-2.compute.internal kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Mar 24 22:01:28 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPREQUEST on eth1 to 255.255.255.255 port 67 (xid=0x1dda62f9)
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPREQUEST on eth1 to 255.255.255.255 port 67 (xid=0x1dda62f9)
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPNAK from 169.254.0.1 (xid=0x1dda62f9)
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal ec2net[13848]: [remove_aliases] Removing aliases of eth1
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal ec2net[13850]: [remove_rules] Removing rules for eth1
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 7 (xid=0x329a8de1)
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPREQUEST on eth1 to 255.255.255.255 port 67 (xid=0x329a8de1)
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPOFFER from 169.254.0.1
Mar 24 22:01:36 ip-10-0-0-244.us-west-2.compute.internal dhclient[13811]: DHCPACK from 169.254.0.1 (xid=0x329a8de1)
Mar 24 22:01:37 ip-10-0-0-244.us-west-2.compute.internal ec2net[13914]: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:37 ip-10-0-0-244.us-west-2.compute.internal ec2net[13915]: [get_meta] Getting token for IMDSv2.
Mar 24 22:01:37 ip-10-0-0-244.us-west-2.compute.internal ec2net[13918]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:37 ip-10-0-0-244.us-west-2.compute.internal ec2net[13922]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:38 ip-10-0-0-244.us-west-2.compute.internal ec2net[13926]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:38 ip-10-0-0-244.us-west-2.compute.internal ec2net[13930]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:39 ip-10-0-0-244.us-west-2.compute.internal ec2net[13934]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:39 ip-10-0-0-244.us-west-2.compute.internal ec2net[13938]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:40 ip-10-0-0-244.us-west-2.compute.internal ec2net[13942]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:40 ip-10-0-0-244.us-west-2.compute.internal ec2net[13946]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:41 ip-10-0-0-244.us-west-2.compute.internal ec2net[13950]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
Mar 24 22:01:41 ip-10-0-0-244.us-west-2.compute.internal ec2net[13954]: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:c0:eb:86:d0:cf/local-ipv4s
...

The interface is still configured with the expected link-local IPv4 and global IPv6 addresses, but the extra noise in the logs is unnecessary and unhelpful.

Add device-index as additional interface properties

We recently added a feature that adds the ENI ID as an altname on interfaces (fa68f01ba3d6a). That lets us do nice things like ip addr show dev eni-00e3071eba3d8b83c, or just quickly map the ENI ID to the local interface name in the ip output.

Another field that would be useful to add as an altname property is the device index, available at http://169.254.169.254/latest/meta-data/network/interfaces/macs/<mac address>/device-number/. That would give the ability to refer to interfaces by their device index, and would also allow easy visual correlation between ENI ID and device index in the ip output.

Currently we manage the ENI altname by directly calling ip link property add .... Systemd does have support for altname configuration in .link files, and it would probably be better to replace the ip invocation with another drop-in for consistency with what we're doing elsewhere.
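
If the drop-in route is taken, recent systemd supports altnames in .link files; a hypothetical fragment (the path, interface name, ENI ID, and the device-index naming scheme are all illustrative):

```
# /run/systemd/network/70-ens5.link.d/altnames.conf
[Link]
AlternativeName=eni-00e3071eba3d8b83c
AlternativeName=device-number-1
```

Note that .link files are applied by udev when the device is initialized, so a change typically needs a replug or a udevadm re-trigger to take effect.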

Debian Package

Hello!
Is there a pre-released deb for debian or the only available option is to build it ourself?

amazon-ec2-net-utils incompatible with amazon-vpc-cni-k8s

Following up on problem noted here:
awslabs/amazon-eks-ami#1528 (comment)

Context:
amazon-vpc-cni-k8s is a networking plugin for Kubernetes. It is used by many installers for Kubernetes on EC2 instances, including community tools like kops, and it is developed and maintained by Amazon EKS team members, with EKS as its primary use case. The link above points to the issue and discussion where we identified the problem and removed the package. This happened on AL2 a while ago and cropped up again in AL2023 because the package name changed slightly (the amazon- prefix was added), so we had to hunt for the root cause again and ended up identifying the package maintained in this repo as the cause. An overview of how amazon-vpc-cni-k8s works is here. Essentially, the IP address routing rules set up by kubernetes/containerd/amazon-vpc-cni-k8s get stomped on by amazon-ec2-net-utils.

amazon-ec2-net-utils is not upgradable from ec2-net-utils

amazon-ec2-net-utils is intended to replace the ec2-net-utils package, so it should be possible to upgrade from ec2-net-utils to amazon-ec2-net-utils. Right now, however, this produces file conflict errors from RPM:

[ec2-user@ip-10-0-0-39 ~]$ rpm -qi ec2-net-utils
Name        : ec2-net-utils
Version     : 1.4
Release     : 2.amzn2
Architecture: noarch
Install Date: Mon 21 Sep 2020 09:11:38 PM UTC
Group       : System Tools
Size        : 25775
License     : MIT and GPLv2
Signature   : RSA/SHA256, Tue 14 Jul 2020 04:46:12 PM UTC, Key ID 11cf1f95c87f5b1a
Source RPM  : ec2-net-utils-1.4-2.amzn2.src.rpm
Build Date  : Mon 13 Jul 2020 11:41:11 PM UTC
Build Host  : build.amazon.com
Relocations : (not relocatable)
Packager    : Amazon Linux
Vendor      : Amazon Linux
URL         : http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1825
Summary     : A set of network tools for managing ENIs
Description :
ec2-net-utils contains a set of utilities for managing elastic network
interfaces.
[ec2-user@ip-10-0-0-39 ~]$ sudo rpm -U amazon-ec2-net-utils-1.4-2.amzn2.noarch.rpm 
        file /etc/sysconfig/network-scripts/ec2net-functions from install of amazon-ec2-net-utils-1.4-2.amzn2.noarch conflicts with file from package ec2-net-utils-1.4-2.amzn2.noarch

It looks like the spec file is missing an Obsoletes: ec2-net-utils entry.
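
The conventional RPM fix is a paired Obsoletes/Provides in the spec, roughly as follows (a sketch; the version bounds are illustrative):

```
# amazon-ec2-net-utils.spec (sketch)
Obsoletes: ec2-net-utils < 1.5
Provides:  ec2-net-utils = %{version}-%{release}
```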

1.x: regression in the handling of `/etc/sysconfig/network-scripts/route-*` files

Customizations to /etc/sysconfig/network-scripts/route-ethX are overwritten on ifup ethX. This is a change from behavior prior to 1.6.1 and thus constitutes a regression.

To reproduce:

  1. launch an instance, and attach a second ENI. This will create /etc/sysconfig/network-scripts/route-eth1.
  2. Modify that file, e.g. echo "# some custom changes" >> /etc/sysconfig/network-scripts/route-eth1
  3. Cycle the interface, e.g. sudo ifdown eth1; sudo ifup eth1

Expected behavior, based on the behavior of the package prior to version 1.6.1: Changes from (2) above are preserved in route-eth1

Actual behavior as of 1.6.1: /etc/sysconfig/network-scripts/route-eth1 is recreated from scratch and the local customization is lost.

Note that ec2ifup, as invoked by udev on ENI attachment, is expected to overwrite route-ethX. The change in behavior only occurs with ifup.

Secondary IPs missed in aliases depending on sort

After a bit of debugging, it looks like I came across an issue where, when secondary IPs associated with an ENI sort "lesser" than the primary IP, the logic mistakenly aliases the wrong IP.

To reproduce, attach an ENI with multiple IPs where a secondary IP comes earlier in the CIDR than the primary, e.g.:

172.20.4.123/22 [primary]
172.20.2.34/22 [secondary]

addresses=$(get_iface_imds $mac local-ipv4s | tail -n +2)

This assumes that the first IP address is the primary IP (correctly so), so it does not create an extra alias for the primary IP in the ec2net_alias.conf that gets generated.

However, due to the sort in get_iface_imds, it will get missed and then the primary IP instead gets added to the aliases file.

get_imds network/interfaces/macs/${mac}/${key} $max_tries | sort

I'm not sure if the sort is being used for other purposes, though it may suffer from some other logic failures since it is a plain sort not accounting for address notation. I believe local-ipv4s will always return primary first and then secondaries in order of assignment on the ENI, so the alias logic just needs to receive the list unsorted.
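
The misordering is easy to demonstrate with the two addresses from the example above; a lexical sort moves the secondary ahead of the primary, so `tail -n +2` then discards the secondary and keeps the primary:

```shell
#!/bin/sh
# IMDS returns the primary address first, then secondaries in assignment
# order; this stub mimics that (addresses from the reproduction above).
imds_local_ipv4s() {
    printf '172.20.4.123\n172.20.2.34\n'
}

# With the sort, the "secondary" list wrongly contains the primary:
imds_local_ipv4s | sort | tail -n +2    # prints 172.20.4.123 (the primary!)

# Without the sort, the list is correct:
imds_local_ipv4s | tail -n +2           # prints 172.20.2.34
```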

Let me know if there's any other details I can help with.

policy-routes@.service: Restart= setting other than no

Hi,

I've built and installed an amazon-ec2-net-utils-2.0.0 RPM on Alma Linux 8, but there's an issue with the policy-routes@.service unit file.

I'm at commit d908637ada3aac35c3e180f6f355a0a2483faa8a (HEAD -> main, origin/main, origin/HEAD).

In the systemd service file, apparently Type=oneshot is not allowed with Restart=on-failure.

Is this a known issue? What is the fix for it?

Below are further notes for your reference:

dnf localinstall amazon-ec2-net-utils-2.0.0-1.el8.noarch.rpm reports the following near the end of the installation:

Failed to start policy-routes@eth0.service: Unit policy-routes@eth0.service has a bad unit file setting.

Here it is again in context:

---8<---
  Installing       : amazon-ec2-net-utils-2.0.0-1.el8.noarch                                                                            2/2 
  Running scriptlet: amazon-ec2-net-utils-2.0.0-1.el8.noarch                                                                            2/2 
Created symlink /etc/systemd/system/dbus-org.freedesktop.resolve1.service → /usr/lib/systemd/system/systemd-resolved.service.
Created symlink /etc/systemd/system/multi-user.target.wants/systemd-resolved.service → /usr/lib/systemd/system/systemd-resolved.service.
Removed /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
Removed /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Failed to start policy-routes@eth0.service: Unit policy-routes@eth0.service has a bad unit file setting.
See system logs and 'systemctl status policy-routes@eth0.service' for details.
Skipping lo with driver 

  Verifying        : systemd-networkd-250.3-1.el8.x86_64                                                                                1/2 
  Verifying        : amazon-ec2-net-utils-2.0.0-1.el8.noarch                                                                            2/2 

Installed:
  amazon-ec2-net-utils-2.0.0-1.el8.noarch                                systemd-networkd-250.3-1.el8.x86_64                               

Complete!

Showing the status of policy-routes@eth0:

# systemctl status -l policy-routes@eth0
● policy-routes@eth0.service - Set up policy routes for eth0
   Loaded: bad-setting (Reason: Unit policy-routes@eth0.service has a bad unit file setting.)
   Active: inactive (dead)

Apr 08 02:54:08 am177.motec.com.au systemd[1]: policy-routes@eth0.service: Service has Restart= setting other than no, which isn't allowed for Type=oneshot services. Refusing.

Apparently Type=oneshot is not allowed with Restart=on-failure.

# cat policy-routes\@.service 
[Unit]
Description=Set up policy routes for %I
StartLimitIntervalSec=10
StartLimitBurst=5

[Service]
Type=oneshot
RemainAfterExit=yes
PrivateTmp=yes
AmbientCapabilities=CAP_NET_ADMIN
NoNewPrivileges=yes
User=root
Environment="EC2_IF_INITIAL_SETUP=true"
ExecStart=/usr/bin/setup-policy-routes %i start
ExecStop=/usr/bin/setup-policy-routes %i stop
Restart=on-failure
RestartSec=1
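
Newer systemd releases (such as the one in AL2022, which this unit targets) accept Restart=on-failure for Type=oneshot units. On versions that refuse the combination, a local drop-in can reset the settings so the unit at least loads, at the cost of losing automatic retries; a sketch:

```
# /etc/systemd/system/policy-routes@.service.d/override.conf
[Service]
# Empty assignments reset Restart= and RestartSec= to their defaults
Restart=
RestartSec=
```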

ec2-net-utils 1.6 is sensitive to IMDS propagation latency

There have been reports that, under some circumstances, ec2-net-utils 1.6 as bundled with Amazon Linux 2 can fail to configure a hot-plugged interface.

If the IMDS data has not yet fully propagated to the service, queries by the client may return 404 errors. ec2-net-utils prior to 1.6 would retry all 404 errors for several seconds, which insulated it from this situation, but caused problems in IPv6-only subnets, where some of the fields it queried would never appear, leading to long delays in configuring the interface. 1.6 tried to correct this by querying one level up from the specified key in the URL path with retries, but not retrying the key itself. There are, however, cases where, for example,

$ curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:3f:ef:3b:20:71/

would return data, but

$ curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:3f:ef:3b:20:71/subnet-ipv4-cidr-block

would still return a 404 error.

Symptoms of the problem are that secondary ENIs fail to configure, with messages similar to the following logged in the journal:

[rewrite_primary] Error 0 contacting IMDS for eth1. Bailing out.
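
A middle-ground sketch: retry the specific key a bounded number of times, so brief propagation delays are absorbed without reintroducing the long IPv6-only stalls. The fetcher is passed in as a command so it can be stubbed; the real 1.x helpers differ in detail:

```shell
#!/bin/sh
# Retry a fetcher command that fails (non-zero exit, e.g. on a 404) until
# it succeeds or the attempt budget is exhausted.
retry_imds_key() {
    max_tries="$1"; shift
    try=1
    while [ "$try" -le "$max_tries" ]; do
        if out=$("$@"); then
            echo "$out"
            return 0
        fi
        try=$((try + 1))   # the real loop would also sleep between attempts
    done
    echo "IMDS key still missing after $max_tries attempts" >&2
    return 1
}
```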

Add ShellCheck to CI

89ff608#diff-3aa0e32e6448cbc26df6ab19057db2c2bed15a6ca163a658eb601c0d656ec18eR299
This commit changed a flat whitespace-delimited string to a Bash array, which was (is) a serious bug until #18 updated the code that receives and interprets the array (by the present tense, I mean I'm still waiting for 1.5.1 to be available via amzn2-core Yum). Bugs like this would easily have been caught with a popular static-analysis tool like https://www.shellcheck.net/ (see also https://news.ycombinator.com/item?id=22212338 )

ShellCheck raises tons of warnings for this repository in its current shape (my VS Code automatically displays them with red squigglies), but I believe tools like reviewdog can limit its scope to the changed portion of the code in a CI environment, so that you can gradually improve code quality without being overwhelmed by a pile of warnings
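The string-vs-array bug class mentioned above can be illustrated in a few lines (a hypothetical example, not the repository's code): the number of arguments a consumer receives depends entirely on how the value is expanded, which is exactly the sort of mismatch ShellCheck flags (e.g. SC2086 for unquoted expansions).

```shell
#!/usr/bin/env bash
# Hypothetical illustration: a producer switches from a flat string
# to an array, and the expansion style decides what the consumer sees.
count_args() { echo "$#"; }   # stand-in for the receiving code

flat="a b c"
count_args $flat              # unquoted string word-splits: prints 3

arr=(a b c)
count_args "${arr[@]}"        # element-wise expansion: prints 3
count_args "${arr[*]}"        # joined into one word: prints 1
```

If the receiving code was written against one of these shapes and the producer silently switches to another, the argument count changes and the logic breaks, which is the failure mode the linked commit introduced.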

Please add a git tag for version 1.2

Judging by the RPM spec file, I believe this package is version 1.2. Would you mind adding a git tag so this can be referenced externally? As an example, the aws-cli repo does this. Thanks!
