Proxmox with BGP+EVPN+VXLAN

Proxmox does not support BGP+EVPN+VXLAN out of the box, but there is a small piece of documentation on the Proxmox Wiki.

Using the routing daemon FRRouting, a Proxmox cluster can also be configured to use BGP with EVPN+VXLAN for its routing, allowing for very flexible networks.

I won’t go into all the details of FRR, BGP, EVPN and VXLAN as the internet already offers more than enough resources about them. I’ll get right to it.

FRRouting

After installing Proxmox (7.1) on the node I temporarily connected the node via an ad-hoc connection to the internet to be able to install FRR.

curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
echo deb https://deb.frrouting.org/frr bullseye frr-7 | sudo tee -a /etc/apt/sources.list.d/frr.list
apt install frr frr-pythontools
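
One thing that is easy to miss: FRR only starts the daemons that are enabled in /etc/frr/daemons, so bgpd has to be switched on there before the BGP configuration below will load. A quick way to do that:

sed -i 's/^bgpd=no/bgpd=yes/' /etc/frr/daemons
systemctl restart frr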

After installing FRR I configured the /etc/frr/frr.conf config file:

frr version 7.5.1
frr defaults traditional
hostname infra-72-45-36
log syslog informational
no ip forwarding
no ipv6 forwarding
service integrated-vtysh-config
!
interface enp101s0f0np0
  no ipv6 nd suppress-ra
!
interface enp101s0f1np1
  no ipv6 nd suppress-ra
!
interface lo
  ip address 10.255.254.5/32
  ipv6 address 2a05:1500:xxx:xx::5/128
!
router bgp 4200400036
  bgp router-id 10.255.254.5
  no bgp ebgp-requires-policy
  no bgp default ipv4-unicast
  no bgp network import-check
  neighbor core peer-group
  neighbor core remote-as external
  neighbor core ebgp-multihop 255
  neighbor enp101s0f0np0 interface peer-group core
  neighbor enp101s0f1np1 interface peer-group core
  !
  address-family ipv4 unicast
    redistribute connected
    neighbor core activate
    exit-address-family
  !
  address-family ipv6 unicast
    redistribute connected
    neighbor core activate
    exit-address-family
  !
  address-family l2vpn evpn
    neighbor core activate
    advertise-all-vni
    exit-address-family
  !
line vty
!

In this case the host will be connecting to two Cumulus Linux routers using BGP Unnumbered.

The interfaces enp101s0f0np0 and enp101s0f1np1 are the uplinks of this node to the two routers.
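
Once FRR is up and the BGP sessions with both routers are established, this can be verified from vtysh. A few commands that are useful for a quick sanity check (output omitted here):

vtysh -c 'show bgp summary'
vtysh -c 'show bgp l2vpn evpn summary'
vtysh -c 'show evpn vni'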

/etc/network/interfaces

Now we need to make sure the /etc/network/interfaces file is populated with the proper information.

auto lo
iface lo inet loopback

auto enp101s0f1np1
iface enp101s0f1np1 inet manual
mtu 9216

auto enp101s0f0np0
iface enp101s0f0np0 inet manual
mtu 9216

This makes sure the interfaces (Mellanox ConnectX-5, 2x 25Gb SFP28) are up and running with an MTU of 9216.

The MTU of 9216 is needed so I can transport traffic with an MTU of 9000 within my VXLAN packets. VXLAN has an overhead of 50 bytes. So to transport an Ethernet packet of 1500 bytes you need to make sure you have at least an MTU of 1550 on your VXLAN underlay network.
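
A quick way to verify that the underlay really passes such large packets is a ping with the don't-fragment bit set: 9188 bytes of payload plus 20 bytes of IPv4 header and 8 bytes of ICMP header works out to exactly 9216. The loopback address of another node (10.255.254.6) is just an example here:

ping -M do -s 9188 10.255.254.6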

VXLAN bridges

I now created a bunch of devices in the interfaces file:

auto vxlan201
iface vxlan201 inet manual
    mtu 1500
    pre-up ip link add vxlan201 type vxlan id 201 dstport 4789 local 10.255.254.5 nolearning
    up ip link set vxlan201 up
    down ip link set vxlan201 down
    post-down ip link del vxlan201

auto vmbr201
iface vmbr201 inet manual
    bridge_ports vxlan201
    bridge-stp off
    bridge-fd 0

auto vxlan202
iface vxlan202 inet manual
    mtu 1500
    pre-up ip link add vxlan202 type vxlan id 202 dstport 4789 local 10.255.254.5 nolearning
    up ip link set vxlan202 up
    down ip link set vxlan202 down
    post-down ip link del vxlan202

auto vmbr202
iface vmbr202 inet static
    address 192.168.202.36/24
    bridge_ports vxlan202
    bridge-stp off
    bridge-fd 0

I created vmbr201 and vmbr202, which correspond to VNI 201 and 202 on the network. These bridges can now be used in Proxmox to connect VMs to.

The IP address (10.255.254.5) passed to the local argument of the ip command is the address configured on the loopback interface and advertised using BGP.

This will be the address used for the VTEP in EVPN/VXLAN communication.
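
Whether EVPN is doing its job can be checked on the host itself: the VXLAN forwarding database should contain entries pointing at the VTEP addresses of the other nodes, and FRR can show the MAC addresses it learned per VNI. For example (output omitted):

bridge fdb show dev vxlan201
vtysh -c 'show evpn mac vni 201'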

In reality, however, I have many more bridges:

  • vmbr200
  • vmbr201
  • vmbr202
  • vmbr601
  • vmbr602
  • vmbr603
  • vmbr604
Overview of interfaces in Proxmox

Proxmox cluster network

To be able to create a cluster with Proxmox you need a Layer 2 network between the hosts on which corosync can be used for cluster communication.

In this case I’m using vmbr202, which has IP address 192.168.202.36/24 on this node. The other nodes in the cluster have an IPv4 address in the same network, which allows them to communicate with each other.
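
With that in place the cluster itself is created over this network. A minimal sketch, assuming the cluster still has to be formed, "cluster1" is just an example name and the other node has 192.168.202.37 on its vmbr202:

On the first node:

pvecm create cluster1 --link0 192.168.202.36

On the other nodes:

pvecm add 192.168.202.36 --link0 192.168.202.37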

Using fail2ban to block unauthorized calls to CloudStack API

Apache CloudStack does not have a built-in mechanism to rate-limit failed authentication attempts on the API. This potentially allows an attacker to brute-force credentials and gain access.

The api.allowed.source.cidr.list configuration option in CloudStack can be used, globally or per account, to limit the source IPs from which the API accepts requests. This is always good to do (if possible), but it does not cover every use-case.

Sometimes you just want to keep malicious traffic outside the door and fail2ban can help there.

Nginx proxy in front of CloudStack

A common setup is that the Apache CloudStack management server is not directly exposed to the network, but placed behind a reverse proxy such as Nginx.

This proxy can then also handle SSL termination.

In this example we’re using Nginx as a proxy.

fail2ban

Using fail2ban we can scan the access logs of Nginx and block IP addresses that are abusing our API. In this case we filter on two HTTP status codes:

  • 401
  • 531

This means we create the following files:

  • /etc/fail2ban/jail.d/nginx-401.conf
  • /etc/fail2ban/jail.d/nginx-531.conf
  • /etc/fail2ban/filter.d/nginx-401.conf
  • /etc/fail2ban/filter.d/nginx-531.conf

jail.d/nginx-401.conf

[nginx-401]
enabled = true
port = http,https
filter = nginx-401
action = iptables-allports
logpath = %(nginx_access_log)s
bantime = 3600
findtime = 600
maxretry = 25
ignoreip = 127.0.0.1/8

filter.d/nginx-401.conf

[Definition]
failregex = ^<HOST> -.*"(GET|POST|HEAD).*HTTP.*" 401
ignoreregex =

Change 401 to 531 where needed to also block HTTP status code 531.

iptables

The action taken by fail2ban is iptables-allports, which causes iptables to block all traffic from the particular source IP when it is banned.
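
To see whether this is actually doing something you can ask fail2ban for the status of the jail and look at the chain it maintains in iptables (on recent fail2ban versions the chain is named f2b-nginx-401):

fail2ban-client status nginx-401
iptables -L f2b-nginx-401 -n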

Creating a Management Routing Instance (VRF) on Juniper QFX5100

For a Ceph cluster I have two Juniper QFX5100 switches running as a Virtual Chassis.

This Virtual Chassis is currently only performing L2 forwarding, but I want to move this to a L3 setup where the QFX switches use Dynamic Routing (BGP) and thus become the gateway(s) for the Ceph servers.

This should work, but one of the things I was missing was a dedicated management port which uses a different routing table/instance.

Starting with JunOS 17.3R1 you can create a Management Routing Instance as described on the website of Juniper.

set system management-instance

This now creates the Routing Instance called mgmt_junos.

I try to run as much as possible IPv6-only or at least prefer IPv6 over IPv4.

I ran into the problem that configuring an IPv6 address on my em0 interface just wouldn’t work. It kept saying that the IPv6 address was Duplicate.

This probably happens because both QFX switches are connected to the same Out of Band switch, which causes the switch to see its own DAD (Duplicate Address Detection) probes coming in over a different link. I had to disable DAD on interface em0 to make it work.

In addition I configured all DNS lookups to be performed using this routing instance.

The end result for my configuration (snippets):

system {
    management-instance;
    name-server {
        2a00:f10:ff04:153::53 routing-instance mgmt_junos;
        2a00:f10:ff04:253::53 routing-instance mgmt_junos;
        93.180.70.22 routing-instance mgmt_junos;
        93.180.70.30 routing-instance mgmt_junos;
    }
}
interfaces {
    em0 {
        unit 0 {
            family inet {
                address 172.17.5.10/24;
            }
            family inet6 {
                address 2a00:f10:XXX:XXX::100/64 {
                    dad-disable;
                }
            }
        }
    }
}
routing-instances {
    mgmt_junos {
        routing-options {
            rib mgmt_junos.inet6.0 {
                static {
                    route ::/0 next-hop 2a00:f10:XXX:XXX::1;
                }
            }
            static {
                route 0.0.0.0/0 next-hop 172.17.5.1;
            }
        }
    }
}

This now allows me to SSH to my Juniper QFX Virtual Chassis over interface em0 which uses a different routing instance/table.
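
To check that the management traffic really uses the separate table, the routing instance can also be referenced in operational commands, for example:

show route table mgmt_junos.inet6.0
ping 2a00:f10:ff04:153::53 routing-instance mgmt_junos count 3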

Should I make a mistake in the default routing instance, for example a BGP misconfiguration or some other routing error, I can still SSH to the switches over the management instance.

Renaming a network interface with systemd-networkd on Ubuntu 18.04

On an Ubuntu system where I’m creating a VXLAN Proof of Concept with CloudStack I wanted to rename the interface enp5s0 to cloudbr0.

I found a lot of documentation on the internet on how to do this with *.link files, but I was missing the golden tip: you need to re-generate your initramfs.

/etc/systemd/network/50-cloudbr0.link

[Match]
MACAddress=00:25:90:4b:81:54

[Link]
Name=cloudbr0

After you create this file, re-generate your initramfs:

update-initramfs -c -k all

You can now use cloudbr0 in *.network files to use it like any other network interface.
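
A matching *.network file could then look something like this (the address matches the output below; the gateway is just an example):

/etc/systemd/network/50-cloudbr0.network

[Match]
Name=cloudbr0

[Network]
Address=192.168.0.11/24
Gateway=192.168.0.1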

In my case this is what my interfaces look like:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
6: cloudbr0:  mtu 9000 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:25:90:4b:81:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global cloudbr0
       valid_lft forever preferred_lft forever
    inet6 2a00:f10:114:0:225:90ff:fe4b:8154/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 2591993sec preferred_lft 604793sec
    inet6 fe80::225:90ff:fe4b:8154/64 scope link 
       valid_lft forever preferred_lft forever
8: cloudbr1:  mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 86:fa:b6:31:6e:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.11/24 brd 172.16.0.255 scope global cloudbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::84fa:b6ff:fe31:6ec1/64 scope link 
       valid_lft forever preferred_lft forever
9: vxlan100:  mtu 1450 qdisc noqueue master cloudbr1 state UNKNOWN group default qlen 1000
    link/ether 56:df:29:8d:db:83 brd ff:ff:ff:ff:ff:ff

VXLAN with VyOS and Ubuntu 18.04

VXLAN

Virtual Extensible LAN (VXLAN) is an encapsulation technique that wraps OSI layer 2 Ethernet frames in layer 4 UDP datagrams. More on this can be found via the link provided.

For a Ceph and CloudStack environment I needed to set up a Proof of Concept using VXLAN and some refurbished hardware. The main purpose of this PoC is to verify that VXLAN works with CloudStack, Ceph and Ubuntu 18.04.

VyOS

VyOS is an open source network operating system based on Debian Linux. It supports VXLAN, so we were able to use it to test VXLAN in this setup.

In production another VXLAN-capable router would be used, but for a PoC VyOS works just fine running on a regular server.

Configuration

The VyOS router is connected to ‘the internet’ with one NIC and the other NIC is connected to a switch.

Using static routes an IPv4 subnet (/24) and an IPv6 subnet (/48) are routed towards the VyOS router. These are then split up and sent to multiple VLANs.

As it took me a while to configure VXLAN under VyOS, I’m only posting that configuration.

interfaces {
    ethernet eth0 {
        address 31.25.96.130/30
        address 2a00:f10:100:1d::2/64
        duplex auto
        hw-id 00:25:90:80:ed:fe
        smp-affinity auto
        speed auto
    }
    ethernet eth5 {
        duplex auto
        hw-id a0:36:9f:0d:ab:be
        mtu 9000
        smp-affinity auto
        speed auto
        vif 300 {
            address 192.168.0.1/24
            description VXLAN
            mtu 9000
        }
    }
    vxlan vxlan1000 {
        address 10.0.0.1/23
        address 2a00:f10:114:1000::1/64
        group 239.0.3.232
        ip {
            enable-arp-accept
            enable-arp-announce
        }
        ipv6 {
            dup-addr-detect-transmits 1
            router-advert {
                cur-hop-limit 64
                link-mtu 1500
                managed-flag false
                max-interval 600
                name-server 2a00:f10:ff04:153::53
                name-server 2a00:f10:ff04:253::53
                other-config-flag false
                prefix 2a00:f10:114:1000::/64 {
                    autonomous-flag true
                    on-link-flag true
                    valid-lifetime 2592000
                }
                reachable-time 0
                retrans-timer 0
                send-advert true
            }
        }
        link eth5.300
        mtu 1500
        vni 1000
    }
    vxlan vxlan2000 {
        address 109.72.91.1/26
        address 2a00:f10:114:2000::1/64
        group 239.0.7.208
        ipv6 {
            dup-addr-detect-transmits 1
            router-advert {
                cur-hop-limit 64
                link-mtu 1500
                managed-flag false
                max-interval 600
                name-server 2a00:f10:ff04:153::53
                name-server 2a00:f10:ff04:253::53
                other-config-flag false
                prefix 2a00:f10:114:2000::/64 {
                    autonomous-flag true
                    on-link-flag true
                    valid-lifetime 2592000
                }
                reachable-time 0
                retrans-timer 0
                send-advert true
            }
        }
        link eth5.300
        mtu 1500
        vni 2000
    }
}

VLAN 300 on eth5 is used to transport VNI 1000 and 2000, each in its own multicast group.

The MTU of eth5 is set to 9000 so that the VXLAN-encapsulated traffic can still carry 1500-byte frames.

Ubuntu 18.04

To test if VXLAN was actually working on the Ubuntu 18.04 host I made a very simple script:

ip link add vxlan1000 type vxlan id 1000 dstport 4789 group 239.0.3.232 dev vlan300 ttl 5
ip link set up dev vxlan1000
ip addr add 10.0.0.11/23 dev vxlan1000
ip addr add 2a00:f10:114:1000::101/64 dev vxlan1000
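
The script assumes a vlan300 interface on top of the physical NIC already exists; if not, it can be created with iproute2 as well (eno1 is just a placeholder for the actual uplink):

ip link add link eno1 name vlan300 type vlan id 300
ip link set mtu 9000 dev vlan300
ip link set up dev vlan300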

That works! I can ping 10.0.0.1 and 2a00:f10:114:1000::1 from my Ubuntu 18.04 machine!

Docker containers with IPv6 behind NAT

WARNING

In production IPv6 should always be used without NAT. Only use IPv6 with NAT for testing purposes. There is no valid reason to use IPv6 with NAT in any production environment.

IPv6 and NAT

IPv6 is designed to remove the need for NAT, and that is a very, very good thing. NAT breaks peer-to-peer connections, and restoring those is exactly one of the great things about IPv6: every device on the internet gets its own public IP address again.

Docker and IPv6

Support for IPv6 in Docker has been there for a while now, but it is disabled by default. The documentation describes how to enable it.

I wanted to enable IPv6 on my Docker setup on my laptop running Ubuntu, but as my laptop is a mobile device the IPv6 prefix I have changes when I move to a different location. IPv6 Prefix Delegation isn’t available at every IPv6-enabled location either, so I wanted to figure out if I could enable IPv6 in my Docker setup locally and use NAT to have my containers reach the internet over IPv6.

At home I have IPv6 via ZeelandNet and at the office we have a VDSL connection from XS4All. When I’m on a remote location I enable our OpenVPN tunnel which has IPv6 enabled. This way I always have IPv6 available.

The Docker documentation shows that enabling IPv6 is very easy. I modified the systemd service file of docker and added a fixed IPv6 CIDR:

ExecStart=/usr/bin/dockerd --ipv6 --fixed-cidr-v6="fd00::/64" -H fd://

fd00::/64 is part of the Unique Local Address (ULA) range fc00::/7 (the successor to the deprecated site-local addresses) and can be safely used.
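
On newer Docker versions the same can be configured through /etc/docker/daemon.json instead of editing the systemd unit; a minimal sketch:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/64"
}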

I then added a NAT rule into ip6tables so that it would NAT for me:

sudo ip6tables -t nat -A POSTROUTING -s fd00::/64 -j MASQUERADE

Result

My Docker containers now get an IPv6 address, as can be seen below:

root@da80cf3d8532:~# ip -6 a
1: lo:  mtu 65536 state UNKNOWN qlen 1
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
15: eth0@if16:  mtu 1500 state UP 
    inet6 fd00::242:ac11:2/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever
root@da80cf3d8532:~#

In this case the address is fd00::242:ac11:2, which was assigned by Docker.

Since my laptop has IPv6 I can now ping pcextreme.nl from my Docker container.

root@da80cf3d8532:~# ping6 -c 3 pcextreme.nl -n
PING pcextreme.nl (2a00:f10:101:0:46e:c2ff:fe00:93): 56 data bytes
64 bytes from 2a00:f10:101:0:46e:c2ff:fe00:93: icmp_seq=0 ttl=61 time=14.368 ms
64 bytes from 2a00:f10:101:0:46e:c2ff:fe00:93: icmp_seq=1 ttl=61 time=16.132 ms
64 bytes from 2a00:f10:101:0:46e:c2ff:fe00:93: icmp_seq=2 ttl=61 time=15.790 ms
--- pcextreme.nl ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 14.368/15.430/16.132/0.764 ms
root@da80cf3d8532:~#

Again, this should ONLY be used for testing purposes. For production, IPv6 Prefix Delegation is the way to go.

Calculating DS record from DNSKEY with Python 3

While working on DNSSEC for PCextreme’s Aurora DNS I had to convert a DNSKEY to a DS-record which could be set in the parent zone for proper delegation.

The foundation for Aurora DNS is PowerDNS together with Python 3.

The API for Aurora DNS has to return the DS records so that an end-user can use these in the parent zone. I had the DNSKEY, but I didn’t have the DS record, so I had to calculate it using Python 3.

I eventually ended up with this Python code which you can find on my Github Gists page.

"""
Generate a DNSSEC DS record based on the incoming DNSKEY record

The DNSKEY can be found using for example 'dig':

$ dig DNSKEY secure.widodh.nl

The output can then be parsed with the following code to generate a DS record
for in the parent DNS zone

Author: Wido den Hollander 

Many thanks to this blogpost: https://www.v13.gr/blog/?p=239
"""

import struct
import base64
import hashlib


DOMAIN = 'secure.widodh.nl'
DNSKEY = '257 3 8 AwEAAckZ+lfb0j6aHBW5AanV5A0V0IfF99vAKFZd6+fJfEChpZtjnItWDnJLPa3/LAFec/tUhLZ4jgmzaoEuX3EQQgI1V4kp9SYf8HMlFPP014eO+AnjkYFGLE2uqHPx/Tu7/pO3EyKwTXi5fMadROKuo/mfat5AEIhGjteGGO93DhnOa6kcqj5RHYJBh5OZ/GoZfbeYHK6Muur1T16hHiI12rYGoqJ6ZW5+njYprG6qwp6TZXxJyE7wF1JdD+Zhbjhf0Md4zMEysP22wBLghBaX6eDIBh/7jU7dw1Ob+I42YWk+X4NSiU3sRYPaq1R13JEK4zVqQtL++UVtgRPEbfj5RQ8='


def _calc_keyid(flags, protocol, algorithm, dnskey):
    st = struct.pack('!HBB', int(flags), int(protocol), int(algorithm))
    st += base64.b64decode(dnskey)

    cnt = 0
    for idx in range(len(st)):
        s = struct.unpack('B', st[idx:idx+1])[0]
        if (idx % 2) == 0:
            cnt += s << 8
        else:
            cnt += s

    return ((cnt & 0xFFFF) + (cnt >> 16)) & 0xFFFF


def _calc_ds(domain, flags, protocol, algorithm, dnskey):
    if domain.endswith('.') is False:
        domain += '.'

    signature = bytes()
    for i in domain.split('.'):
        signature += struct.pack('B', len(i)) + i.encode()

    signature += struct.pack('!HBB', int(flags), int(protocol), int(algorithm))
    signature += base64.b64decode(dnskey)

    return {
        'sha1':    hashlib.sha1(signature).hexdigest().upper(),
        'sha256':  hashlib.sha256(signature).hexdigest().upper(),
    }


def dnskey_to_ds(domain, dnskey):
    dnskeylist = dnskey.split(' ', 3)

    flags = dnskeylist[0]
    protocol = dnskeylist[1]
    algorithm = dnskeylist[2]
    key = dnskeylist[3].replace(' ', '')

    keyid = _calc_keyid(flags, protocol, algorithm, key)
    ds = _calc_ds(domain, flags, protocol, algorithm, key)

    ret = list()
    ret.append(str(keyid) + ' ' + str(algorithm) + ' ' + str(1) + ' '
               + ds['sha1'].lower())
    ret.append(str(keyid) + ' ' + str(algorithm) + ' ' + str(2) + ' '
               + ds['sha256'].lower())
    return ret


print(dnskey_to_ds(DOMAIN, DNSKEY))
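
To double-check the output, the same DS records can be generated with BIND’s dnssec-dsfromkey tool and compared, for example:

dig DNSKEY secure.widodh.nl | dnssec-dsfromkey -f - secure.widodh.nl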

VirtualBox images to experiment with IPv6

Around me I noticed that a lot of people don’t have hands-on experience with IPv6. The networks they work in do not support IPv6 nor does their ISP provide them with native IPv6 connectivity at home.

On my local systems I often use VirtualBox to set up (IPv6) testing environments. I thought I’d create some Virtual Machine images to get some hands-on experience with IPv6.

The images and README can be found on Github and are aimed to be easy to install and work with.

Requirements

To run the images you need to have VirtualBox installed. You should also be able to use the Linux command line, as the Virtual Machines are based on Ubuntu 16.04.

More information can be found in the repository on Github in the README file.

Download

You can download the images here.

How to use

Please take a look at the README on Github. It tells you how to use them.

Happy testing!

AnyIP: Bind a whole subnet to your Linux machine

IPv6 Prefix Delegation

In my previous post I wrote how you can use Docker with IPv6 and Prefix Delegation.

An IPv6 subnet routed to a Linux machine can be used for other things than Docker. That’s where the AnyIP feature of the kernel comes in.

Linux Kernel AnyIP

The AnyIP feature of the Linux kernel allows you to bind a complete IPv4 or IPv6 subnet to your system.

Instead of adding all addresses manually to the kernel you can tell it to bind a complete subnet.

Configuring

IPv4

ip -4 route add local 192.168.0.0/24 dev lo

In this case the Linux kernel will now respond to ARP requests for any IPv4 address in the 192.168.0.0/24 subnet.

IPv6

ip -6 route add local 2001:db8:100::/64 dev lo

In this case the kernel will respond to Neighbor Solicitations for any IPv6 address in the 2001:db8:100::/64 subnet.

Example usage

Let’s assume that you have the IPv6 prefix 2001:db8:100::/60 routed to your Linux machine through IPv6 prefix delegation.

From that /60 subnet we take the first /64 subnet and attach it to lo.

ip -6 route add local 2001:db8:100::/64 dev lo

You can now ping any of the addresses in that subnet:

  • 2001:db8:100::1
  • 2001:db8:100::100
  • 2001:db8:100::200
  • 2001:db8:100::dead:b33f

If you would start a webserver which listens on port 80 you can use any of the IPv6 addresses in that subnet and the webserver will respond to it.
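
A quick way to see this in action is to start a throw-away webserver and fetch a page from an arbitrary address in the subnet (Python 3.8 or newer accepts an IPv6 bind address here):

python3 -m http.server 80 --bind ::

And from another machine that has a route to the prefix:

curl -g 'http://[2001:db8:100::dead:b33f]/'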

Use cases

It could be that you want to do mass shared hosting on a system where you assign each hostname/domain name its own IPv6 address. Instead of attaching single IPs to an interface you can simply attach a complete subnet and point traffic at any of the IPs in that subnet.

Demo

On PCextreme’s Aurora Compute I deployed an Instance with Prefix Delegation enabled.

After running ‘dhclient’ I got the subnet 2a00:f10:500:40::/60 assigned to my Instance.

It was then just one line to attach a /64 subnet:

ip -6 route add local 2a00:f10:500:40::/64 dev lo

Random address generator

I wrote a small piece of Python code to generate a random IPv6 address:

#!/usr/bin/env python3
"""
Generate a random IPv6 address for a specified subnet
"""

from random import seed, getrandbits
from ipaddress import IPv6Network, IPv6Address

subnet = '2a00:f10:500:40::/64'

seed()
network = IPv6Network(subnet)
address = IPv6Address(network.network_address + getrandbits(network.max_prefixlen - network.prefixlen))

print(address)

Using a small loop in Bash I could now ping random addresses in that subnet:

while [ true ]; do ping6 -c 2 `./random-ipv6.py`; done

Some example output:

--- 2a00:f10:500:40:d142:1092:ea84:74b4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 10.252/11.680/13.108/1.428 ms
PING 2a00:f10:500:40:4e50:f264:6ea9:d184(2a00:f10:500:40:4e50:f264:6ea9:d184) 56 data bytes
64 bytes from 2a00:f10:500:40:4e50:f264:6ea9:d184: icmp_seq=1 ttl=56 time=10.0 ms
64 bytes from 2a00:f10:500:40:4e50:f264:6ea9:d184: icmp_seq=2 ttl=56 time=10.0 ms

--- 2a00:f10:500:40:4e50:f264:6ea9:d184 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 10.085/10.087/10.089/0.002 ms
PING 2a00:f10:500:40:d831:1f89:b06d:fe12(2a00:f10:500:40:d831:1f89:b06d:fe12) 56 data bytes
64 bytes from 2a00:f10:500:40:d831:1f89:b06d:fe12: icmp_seq=1 ttl=56 time=9.77 ms
64 bytes from 2a00:f10:500:40:d831:1f89:b06d:fe12: icmp_seq=2 ttl=56 time=10.1 ms

--- 2a00:f10:500:40:d831:1f89:b06d:fe12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1005ms
rtt min/avg/max/mdev = 9.777/9.958/10.140/0.207 ms
PING 2a00:f10:500:40:2c45:26ee:5b93:fa2(2a00:f10:500:40:2c45:26ee:5b93:fa2) 56 data bytes
64 bytes from 2a00:f10:500:40:2c45:26ee:5b93:fa2: icmp_seq=1 ttl=56 time=10.2 ms
64 bytes from 2a00:f10:500:40:2c45:26ee:5b93:fa2: icmp_seq=2 ttl=56 time=10.0 ms

Docker and IPv6 Prefix Delegation

As posted earlier I have IPv6 Prefix Delegation working at our office to test with Docker.

One of the missing links was to automatically configure Docker to use the prefix obtained through DHCPv6+PD. I manually configured the prefix in Docker, but I also had to run dhclient manually.

I figured this could be automated so I gave it a try.

Ubuntu Networking

At first I tried to figure out if Ubuntu’s networking was somehow able to request a prefix through DHCPv6. Long story short: neither Ubuntu nor CentOS is able to do so. You have to script this manually.

dhclient

To obtain a prefix I had to run dhclient manually. That wasn’t too hard. Simply run:

dhclient -6 -P -d -v eth0

This resulted in obtaining a prefix:

Bound to *:546
Listening on Socket/eth0
Sending on   Socket/eth0
PRC: Confirming active lease (INIT-REBOOT).
XMT: Forming Rebind, 0 ms elapsed.
XMT:  X-- IA_PD d5:68:28:08
XMT:  | X-- Requested renew  +3600
XMT:  | X-- Requested rebind +5400
XMT:  | | X-- IAPREFIX 2001:980:XXXX:140::/60
XMT:  | | | X-- Preferred lifetime +7200
XMT:  | | | X-- Max lifetime +7500
XMT:  V IA_PD appended.
XMT: Rebind on eth0, interval 940ms.
RCV: Reply message on eth0 from fe80::da67:d9ff:fe81:bcec.
RCV:  X-- IA_PD d5:68:28:08
RCV:  | X-- starts 1457617054
RCV:  | X-- t1 - renew  +604800
RCV:  | X-- t2 - rebind +967680
RCV:  | X-- [Options]
RCV:  | | X-- IAPREFIX 2001:980:XXXX:140::/60
RCV:  | | | X-- Preferred lifetime 1209600.
RCV:  | | | X-- Max lifetime 2592000.
RCV:  X-- Server ID: 00:03:00:01:d8:67:d9:81:bc:f0
PRC: Bound to lease 00:03:00:01:d8:67:d9:81:bc:f0.
PRC: Renewal event scheduled in 604800 seconds, to run for 362880 seconds.
PRC: Depreference scheduled in 1209600 seconds.
PRC: Expiration scheduled in 2592000 seconds.

As you can see, I got a /60 prefix. Now I had to automate this and configure Docker to use it.

Upstart

Since I was testing with Docker 1.10 under Ubuntu 14.04 I had to use Upstart to run dhclient.

The /etc/init/dhclient6-pd.conf Upstart script I created was rather simple:

description     "DHCPv6 Prefix Delegation client"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
respawn limit 30 3
umask 022

console log

exec dhclient -6 -P -d eth0

DHCP hook

dhclient has hooks which it can execute when something happens. I wrote a hook which extracts the delegated IPv6 prefix and restarts Docker.

I placed the hook in the default location for DHCP hooks: /etc/dhcp/dhclient-enter-hooks.d/docker-ipv6:

#!/bin/bash
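# Note: dhclient exports the old/new delegated prefix to its hooks via the
# old_ip6_prefix and new_ip6_prefix environment variables, and this hook
# needs 'sipcalc' to be installed to carve a /80 subnet for Docker out of
# the delegated prefix.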

SUBNET_SIZE=80
DOCKER_ETC_DIR="/etc/docker"
DOCKER_PREFIX_FILE="${DOCKER_ETC_DIR}/ipv6.prefix"

if [ ! -z "$new_ip6_prefix" ]; then
    SUBNET=$(sipcalc -S $SUBNET_SIZE $new_ip6_prefix|grep Network|head -n 1|awk '{print $3}')
    echo "${SUBNET}/${SUBNET_SIZE}" > $DOCKER_PREFIX_FILE

    if [ "$old_ip6_prefix" != "$new_ip6_prefix" ]; then
        service docker restart
    fi
fi

For this to work you need to modify /etc/default/docker so that this line reads:

DOCKER_OPTS="--ipv6 --fixed-cidr-v6=`cat /etc/docker/ipv6.prefix`"

The result

Docker was now running properly with an IPv6 subnet configured and my containers have an IPv6 address as well.

wido@wido-desktop:~$ docker exec -ti 94c8f02 ip addr show dev eth0
13: eth0:  mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:980:XXXX:140:0:242:ac11:2/80 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever
wido@wido-desktop:~$

Native IPv6 in my Docker containers fully automated and dynamic!

All the scripts I used can be found on Github.