Using L3 (BGP) routing for your Ceph storage

Many Ceph storage environments out there are deployed using an L2 underlay.

This means that the Ceph servers (MON, OSD, etc.) are connected to a pair of switches using LACP/bonding. On their bond device ('bond0', for example) they are assigned an IPv4/IPv6 address which is used for connectivity between the Ceph nodes and the Ceph clients.
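
Such an L2 setup typically looks something like this in ifupdown terms (a minimal sketch; NIC names and the address are hypothetical):

auto bond0
iface bond0 inet static
    # 802.3ad/LACP bond across two switch-facing NICs
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    address 192.168.10.11/24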

Although this works fine, I try to avoid L2 as much as possible in datacenter deployments. L2 scales up to a certain point, but it has its limitations. Modern Top-of-Rack (ToR) switches can easily route traffic at wire speed, something that used to be a limitation of switches in the past. When designing environments I prefer an L3 approach.

This blogpost shows the rough concept. It is NOT a copy-and-paste tutorial; you will need to adapt it to your situation.

Network setup and BGP configuration

Using Juniper QFX5100 switches and FRRouting on the Ceph nodes I've established BGP sessions between the ToR switches and the Ceph nodes according to the diagram below.

Each node has two independent BGP sessions with the Top-of-Rack switches in its rack. Via these BGP sessions it advertises its local IPv6 /128 address, and via the same sessions it receives a default ::/0 IPv6 route.
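
To double-check what a node actually advertises to a peer, vtysh can list the routes sent over a session; it should be just the loopback /128:

ceph01# show bgp ipv6 unicast neighbors enp196s0f0np0 advertised-routes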

ceph01# sh bgp summary 

IPv6 Unicast Summary (VRF default):
BGP router identifier 1.2.3.4, local AS number 65101 vrf-id 0
BGP table version 10875
RIB entries 511, using 96 KiB of memory
Peers 2, using 1448 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor        V    AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
enp196s0f0np0   4 65002    487385    353917        0    0    0 3d18h17m            1        1 N/A
enp196s0f1np1   4 65002    558998    411452        0    0    0 01:38:55            1        1 N/A

Total number of neighbors 2
ceph01#

Here we see two BGP sessions active over both NICs of the Ceph node. We can also see that a default IPv6 route is received via BGP.

ceph01# sh ipv6 route ::/0
Routing entry for ::/0
  Known via "bgp", distance 20, metric 0
  Last update 01:42:00 ago
    fe80::e29:efff:fed7:4719, via enp196s0f0np0, weight 1
    fe80::7686:e2ff:fe7c:a19e, via enp196s0f1np1, weight 1

ceph01# 
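
The same ECMP default route ends up in the Linux kernel routing table, so traffic is balanced across both uplinks. A quick sanity check with iproute2 (the output should list both link-local next-hops):

ip -6 route show default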

The FRRouting configuration (/etc/frr/frr.conf) is fairly simple:

frr defaults traditional
hostname ceph01
log syslog informational
no ip forwarding
no ipv6 forwarding
service integrated-vtysh-config
!
interface enp196s0f0np0
 no ipv6 nd suppress-ra
exit
!
interface enp196s0f1np1
 no ipv6 nd suppress-ra
exit
!
interface lo
 ipv6 address 2001:db8:100::1/128
exit
!
router bgp 65101
 bgp router-id 1.2.3.4
 no bgp ebgp-requires-policy
 no bgp default ipv4-unicast
 no bgp network import-check
 neighbor upstream peer-group
 neighbor upstream remote-as external
 neighbor enp196s0f0np0 interface peer-group upstream
 neighbor enp196s0f1np1 interface peer-group upstream
 !
 address-family ipv6 unicast
  redistribute connected
  neighbor upstream activate
 exit-address-family
exit
!
end
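
Note that on a packaged FRR installation bgpd is disabled by default, so it must be enabled before this configuration does anything. On Debian/Ubuntu that would look like:

sed -i 's/^bgpd=no/bgpd=yes/' /etc/frr/daemons
systemctl restart frr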

On the Juniper switches a configuration was defined for BGP Unnumbered (RFC 5549) as well. This blogpost explains very well how BGP Unnumbered works on JunOS, so I am not going to repeat it here. I will highlight a couple of pieces of configuration.

root@tor01# show interfaces xe-0/0/1
description ceph01;
unit 0 {
    mtu 9216;
    family inet6;
}

root@tor01# show protocols router-advertisement 
interface xe-0/0/1.0;
root@tor01# show | compare 
[edit]
+  policy-options {
+      as-list bgp_unnumbered_as_list members 65101-65199;
+  }
[edit protocols]
+   bgp {
+       group ceph {
+           family inet6 {
+               unicast;
+           }
+           multipath;
+           export default-v6;
+           import ceph-loopback;
+           dynamic-neighbor bgp_unnumbered {
+               peer-auto-discovery {
+                   family inet6 {
+                       ipv6-nd;
+                   }
+                   interface xe-0/0/1.0;
+                   interface xe-0/0/2.0;
+                   interface xe-0/0/3.0;
+               }
+           }
+           peer-as-list bgp_unnumbered_as_list;
+       }
+   }
[edit policy-options]
+ policy-statement default-v6 {
+    from {
+        route-filter ::/0 exact;
+   }
+   then accept;
+}
+ policy-statement ceph-loopback {
+    from {
+        route-filter 2001:db8:100::/64 upto /128;
+   }
+   then accept;
+}

This sets up the BGP sessions on interfaces xe-0/0/1 through xe-0/0/3 using IPv6 neighbor discovery for peer auto-discovery.

The Ceph nodes should now be able to ping the other nodes:

PING 2001:db8:100::2(2001:db8:100::2) 56 data bytes
64 bytes from 2001:db8:100::2: icmp_seq=1 ttl=63 time=0.058 ms
64 bytes from 2001:db8:100::2: icmp_seq=2 ttl=63 time=0.063 ms
64 bytes from 2001:db8:100::2: icmp_seq=3 ttl=63 time=0.071 ms

--- 2001:db8:100::2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2037ms
rtt min/avg/max/mdev = 0.058/0.064/0.071/0.005 ms

Ceph configuration

From Ceph’s perspective there is not much to do. We just need to specify the IPv6 subnet Ceph is allowed to use and bind to.

[global]
	 mon_host = 2001:db8:100::1, 2001:db8:100::2, 2001:db8:100::3
	 ms_bind_ipv4 = false
	 ms_bind_ipv6 = true
	 public_network = 2001:db8:100::/64

This is all the configuration needed for Ceph 🙂

wdh@ceph01:~$ sudo ceph health
HEALTH_OK
wdh@infra-04-01-17:~$ sudo ceph mon dump
election_strategy: 1
0: [v2:[2001:db8:100::1]:3300/0,v1:[2001:db8:100::1]:6789/0] mon.ceph01
1: [v2:[2001:db8:100::2]:3300/0,v1:[2001:db8:100::2]:6789/0] mon.ceph02
2: [v2:[2001:db8:100::3]:3300/0,v1:[2001:db8:100::3]:6789/0] mon.ceph03
dumped monmap epoch 6
wdh@infra-04-01-17:~$ 

Creating a Management Routing Instance (VRF) on Juniper QFX5100

For a Ceph cluster I have two Juniper QFX5100 switches running as a Virtual Chassis.

This Virtual Chassis is currently only performing L2 forwarding, but I want to move this to an L3 setup where the QFX switches use dynamic routing (BGP) and thus become the gateway(s) for the Ceph servers.

This should work, but one of the things I was missing was a dedicated management port using a different routing table/instance.

Starting with JunOS 17.3R1 you can create a Management Routing Instance as described on the website of Juniper.

set system management-instance

This creates the routing instance called mgmt_junos.

I try to run as much as possible IPv6-only, or at least prefer IPv6 over IPv4.

I ran into the problem that configuring an IPv6 address on my em0 interface just wouldn't work: it kept reporting the IPv6 address as duplicate.

This probably happens because both QFX switches are connected to the same out-of-band switch, causing a switch to receive its own Duplicate Address Detection (DAD) probes over a different link. I had to disable DAD on interface em0 to make it work.

In addition I configured all DNS lookups to be performed using this routing instance.

The end result for my configuration (snippets):

system {
    management-instance;
    name-server {
        2a00:f10:ff04:153::53 routing-instance mgmt_junos;
        2a00:f10:ff04:253::53 routing-instance mgmt_junos;
        93.180.70.22 routing-instance mgmt_junos;
        93.180.70.30 routing-instance mgmt_junos;
    }
}
interfaces {
    em0 {
        unit 0 {
            family inet {
                address 172.17.5.10/24;
            }
            family inet6 {
                address 2a00:f10:XXX:XXX::100/64 {
                    dad-disable;
                }
            }
        }
    }
}
routing-instances {
    mgmt_junos {
        routing-options {
            rib mgmt_junos.inet6.0 {
                static {
                    route ::/0 next-hop 2a00:f10:XXX:XXX::1;
                }
            }
            static {
                route 0.0.0.0/0 next-hop 172.17.5.1;
            }
        }
    }
}

This allows me to SSH to my Juniper QFX Virtual Chassis over interface em0, which uses a separate routing instance/table.

Should I make a mistake in the default routing instance, for example a BGP misconfiguration or other routing error, I can still SSH to my switch(es) via the management instance.
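
A quick way to verify that the management instance works is to source a ping from it, using the default gateway from the configuration above as a target:

ping 2a00:f10:XXX:XXX::1 routing-instance mgmt_junos count 3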

VXLAN with VyOS and Ubuntu 18.04

VXLAN

Virtual Extensible LAN (VXLAN) is an encapsulation technique that wraps OSI layer 2 Ethernet frames in layer 4 UDP datagrams. More on this can be found via the link provided.

For a Ceph and CloudStack environment I needed to set up a Proof-of-Concept using VXLAN and some refurbished hardware. The main purpose of this PoC is to verify that VXLAN works with CloudStack, Ceph and Ubuntu 18.04.

VyOS

VyOS is an open source network operating system based on Debian Linux. It supports VXLAN, so we were able to use it to test VXLAN in this setup.

In production another VXLAN-capable router would be used, but for a PoC VyOS works just fine running on a regular server.

Configuration

The VyOS router is connected to ‘the internet’ with one NIC and the other NIC is connected to a switch.

Using static routes an IPv4 subnet (/24) and an IPv6 subnet (/48) are routed towards the VyOS router. These are then split up and routed to multiple VLANs.

As it took me a while to configure VXLAN under VyOS, I'm only posting that configuration.

interfaces {
    ethernet eth0 {
        address 31.25.96.130/30
        address 2a00:f10:100:1d::2/64
        duplex auto
        hw-id 00:25:90:80:ed:fe
        smp-affinity auto
        speed auto
    }
    ethernet eth5 {
        duplex auto
        hw-id a0:36:9f:0d:ab:be
        mtu 9000
        smp-affinity auto
        speed auto
        vif 300 {
            address 192.168.0.1/24
            description VXLAN
            mtu 9000
        }
    }
    vxlan vxlan1000 {
        address 10.0.0.1/23
        address 2a00:f10:114:1000::1/64
        group 239.0.3.232
        ip {
            enable-arp-accept
            enable-arp-announce
        }
        ipv6 {
            dup-addr-detect-transmits 1
            router-advert {
                cur-hop-limit 64
                link-mtu 1500
                managed-flag false
                max-interval 600
                name-server 2a00:f10:ff04:153::53
                name-server 2a00:f10:ff04:253::53
                other-config-flag false
                prefix 2a00:f10:114:1000::/64 {
                    autonomous-flag true
                    on-link-flag true
                    valid-lifetime 2592000
                }
                reachable-time 0
                retrans-timer 0
                send-advert true
            }
        }
        link eth5.300
        mtu 1500
        vni 1000
    }
    vxlan vxlan2000 {
        address 109.72.91.1/26
        address 2a00:f10:114:2000::1/64
        group 239.0.7.208
        ipv6 {
            dup-addr-detect-transmits 1
            router-advert {
                cur-hop-limit 64
                link-mtu 1500
                managed-flag false
                max-interval 600
                name-server 2a00:f10:ff04:153::53
                name-server 2a00:f10:ff04:253::53
                other-config-flag false
                prefix 2a00:f10:114:2000::/64 {
                    autonomous-flag true
                    on-link-flag true
                    valid-lifetime 2592000
                }
                reachable-time 0
                retrans-timer 0
                send-advert true
            }
        }
        link eth5.300
        mtu 1500
        vni 2000
    }
}

VLAN 300 on eth5 is used to transport VNI 1000 and 2000, each in its own multicast group.

The MTU of eth5 is set to 9000 so that VXLAN-encapsulated frames can still carry 1500 bytes of payload; the encapsulation adds roughly 50 bytes of overhead (outer Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8 bytes).

Ubuntu 18.04

To test if VXLAN was actually working on the Ubuntu 18.04 host I made a very simple script:

# Join VNI 1000 via multicast group 239.0.3.232 on VLAN interface vlan300
ip link add vxlan1000 type vxlan id 1000 dstport 4789 group 239.0.3.232 dev vlan300 ttl 5
ip link set up dev vxlan1000
# Addresses inside the VXLAN segment, matching the VyOS configuration
ip addr add 10.0.0.11/23 dev vxlan1000
ip addr add 2a00:f10:114:1000::101/64 dev vxlan1000

That works! I can ping 10.0.0.1 and 2a00:f10:114:1000::1 (both on the VyOS router) from my Ubuntu 18.04 machine!

Docker containers with IPv6 behind NAT

WARNING

In production IPv6 should always be used without NAT. Only use IPv6 and NAT for testing purposes. There is no valid reason to use IPv6 with NAT in any production environment.

IPv6 and NAT

IPv6 is designed to remove the need for NAT, and that is a very, very good thing: NAT breaks peer-to-peer connections, and restoring those is exactly one of the great things about IPv6. Every device on the internet gets its own public IP address again.

Docker and IPv6

Support for IPv6 in Docker has been there for a while now; however, it is disabled by default. The documentation describes how to enable it.

I wanted to enable IPv6 on my Docker setup on my laptop running Ubuntu, but as my laptop is a mobile device the IPv6 prefix I have changes when I move to a different location. IPv6 Prefix Delegation isn’t available at every IPv6-enabled location either, so I wanted to figure out if I could enable IPv6 in my Docker setup locally and use NAT to have my containers reach the internet over IPv6.

At home I have IPv6 via ZeelandNet and at the office we have a VDSL connection from XS4All. When I’m on a remote location I enable our OpenVPN tunnel which has IPv6 enabled. This way I always have IPv6 available.

The Docker documentation shows that enabling IPv6 is very easy. I modified the systemd service file of docker and added a fixed IPv6 CIDR:

ExecStart=/usr/bin/dockerd --ipv6 --fixed-cidr-v6="fd00::/64" -H fd://

fd00::/64 is a subnet from the Unique Local Address (ULA) range fc00::/7 and can safely be used for this.

I then added a masquerade rule to ip6tables to perform the NAT:

sudo ip6tables -t nat -A POSTROUTING -s fd00::/64 -j MASQUERADE
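
Whether the rule actually matches traffic can be seen in the packet counters. Note that the rule does not survive a reboot unless it is persisted (for example with ip6tables-save):

sudo ip6tables -t nat -L POSTROUTING -n -v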

Result

My Docker containers now get an IPv6 address, as can be seen below:

root@da80cf3d8532:~# ip -6 a
1: lo:  mtu 65536 state UNKNOWN qlen 1
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
15: eth0@if16:  mtu 1500 state UP 
    inet6 fd00::242:ac11:2/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever
root@da80cf3d8532:~#

In this case the address is fd00::242:ac11:2, which was assigned by Docker.

Since my laptop has IPv6 I can now ping pcextreme.nl from my Docker container.

root@da80cf3d8532:~# ping6 -c 3 pcextreme.nl -n
PING pcextreme.nl (2a00:f10:101:0:46e:c2ff:fe00:93): 56 data bytes
64 bytes from 2a00:f10:101:0:46e:c2ff:fe00:93: icmp_seq=0 ttl=61 time=14.368 ms
64 bytes from 2a00:f10:101:0:46e:c2ff:fe00:93: icmp_seq=1 ttl=61 time=16.132 ms
64 bytes from 2a00:f10:101:0:46e:c2ff:fe00:93: icmp_seq=2 ttl=61 time=15.790 ms
--- pcextreme.nl ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 14.368/15.430/16.132/0.764 ms
root@da80cf3d8532:~#

Again, this should ONLY be used for testing purposes. For production IPv6 Prefix Delegation is the way to go.

VirtualBox images to experiment with IPv6

Around me I noticed that a lot of people don't have hands-on experience with IPv6. The networks they work in do not support IPv6, nor do their ISPs provide them with native IPv6 connectivity at home.

On my local systems I often use VirtualBox to set up (IPv6) testing environments. I thought I'd create some Virtual Machine images so people can get some hands-on experience with IPv6.

The images and README can be found on Github and aim to be easy to install and work with.

Requirements

To run the images you need to have VirtualBox installed. You should also be able to use the Linux command line, as the Virtual Machines are based on Ubuntu 16.04.

More information can be found in the repository on Github in the README file.

Download

You can download the images here.

How to use

Please take a look at the README on Github. It tells you how to use them.

Happy testing!

AnyIP: Bind a whole subnet to your Linux machine

IPv6 Prefix Delegation

In my previous post I wrote about how you can use Docker with IPv6 and Prefix Delegation.

An IPv6 subnet routed to a Linux machine can be used for more things than Docker. That's where the AnyIP feature of the kernel comes in.

Linux Kernel AnyIP

The AnyIP feature of the Linux kernel allows you to bind a complete IPv4 or IPv6 subnet to your system.

Instead of adding all addresses manually to the kernel you can tell it to bind a complete subnet.

Configuring

IPv4

ip -4 route add local 192.168.0.0/24 dev lo

In this case the Linux kernel will now respond to ARP requests for any IPv4 address in the 192.168.0.0/24 subnet.

IPv6

ip -6 route add local 2001:db8:100::/64 dev lo

In this case the kernel will respond to Neighbor Solicitations for any IPv6 address in the 2001:db8:100::/64 subnet.

Example usage

Let’s assume that you have the IPv6 prefix 2001:db8:100::/60 routed to your Linux machine through IPv6 prefix delegation.

From that /60 subnet we take the first /64 subnet and attach it to lo.

ip -6 route add local 2001:db8:100::/64 dev lo

You can now ping any of the addresses in that subnet:

  • 2001:db8:100::1
  • 2001:db8:100::100
  • 2001:db8:100::200
  • 2001:db8:100::dead:b33f

If you start a webserver listening on port 80, it will respond on any of the IPv6 addresses in that subnet.
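
A quick way to see this in action, assuming Python 3.8+ (the first version whose http.server module supports IPv6) is available on the machine holding the AnyIP route:

# Serve on all addresses, port 8080 (port 80 would require root)
python3 -m http.server 8080 --bind '::'

# From another host: any address within the attached /64 responds
curl -g 'http://[2001:db8:100::dead:b33f]:8080/'
curl -g 'http://[2001:db8:100::200]:8080/'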

Use cases

It could be that you want to do mass shared hosting on a system where you assign each hostname/domain name its own IPv6 address. Instead of attaching single IPs to an interface you can simply attach a complete subnet and point traffic at any of the IPs in that subnet.

Demo

On PCextreme's Aurora Compute I deployed an Instance with Prefix Delegation enabled.

After running ‘dhclient’ I got the subnet 2a00:f10:500:40::/60 assigned to my Instance.

It was then just one line to attach a /64 subnet:

ip -6 route add local 2a00:f10:500:40::/64 dev lo

Random address generator

I wrote a small piece of Python code to generate a random IPv6 address:

#!/usr/bin/env python3
"""
Generate a random IPv6 address for a specified subnet
"""

from random import seed, getrandbits
from ipaddress import IPv6Network, IPv6Address

subnet = '2a00:f10:500:40::/64'

seed()
network = IPv6Network(subnet)
address = IPv6Address(network.network_address + getrandbits(network.max_prefixlen - network.prefixlen))

print(address)

Using a small loop in Bash I could now ping random addresses in that subnet:

while true; do ping6 -c 2 $(./random-ipv6.py); done

Some example output:

--- 2a00:f10:500:40:d142:1092:ea84:74b4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 10.252/11.680/13.108/1.428 ms
PING 2a00:f10:500:40:4e50:f264:6ea9:d184(2a00:f10:500:40:4e50:f264:6ea9:d184) 56 data bytes
64 bytes from 2a00:f10:500:40:4e50:f264:6ea9:d184: icmp_seq=1 ttl=56 time=10.0 ms
64 bytes from 2a00:f10:500:40:4e50:f264:6ea9:d184: icmp_seq=2 ttl=56 time=10.0 ms

--- 2a00:f10:500:40:4e50:f264:6ea9:d184 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 10.085/10.087/10.089/0.002 ms
PING 2a00:f10:500:40:d831:1f89:b06d:fe12(2a00:f10:500:40:d831:1f89:b06d:fe12) 56 data bytes
64 bytes from 2a00:f10:500:40:d831:1f89:b06d:fe12: icmp_seq=1 ttl=56 time=9.77 ms
64 bytes from 2a00:f10:500:40:d831:1f89:b06d:fe12: icmp_seq=2 ttl=56 time=10.1 ms

--- 2a00:f10:500:40:d831:1f89:b06d:fe12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1005ms
rtt min/avg/max/mdev = 9.777/9.958/10.140/0.207 ms
PING 2a00:f10:500:40:2c45:26ee:5b93:fa2(2a00:f10:500:40:2c45:26ee:5b93:fa2) 56 data bytes
64 bytes from 2a00:f10:500:40:2c45:26ee:5b93:fa2: icmp_seq=1 ttl=56 time=10.2 ms
64 bytes from 2a00:f10:500:40:2c45:26ee:5b93:fa2: icmp_seq=2 ttl=56 time=10.0 ms

Docker and IPv6 Prefix Delegation

As posted earlier I have IPv6 Prefix Delegation working at our office to test with Docker.

One of the missing links was to automatically configure Docker to use the prefix obtained through DHCPv6+PD. I manually configured the prefix in Docker, but I also had to run dhclient manually.

I figured this could be automated so I gave it a try.

Ubuntu Networking

At first I tried to figure out if Ubuntu's networking was somehow able to request a prefix through DHCPv6. Long story short: neither Ubuntu nor CentOS is able to do so. You have to script this manually.

dhclient

To obtain a prefix I had to run dhclient manually. That wasn't too hard. Simply run (-6 for DHCPv6, -P to request prefix delegation, -d to stay in the foreground, -v for verbose output):

dhclient -6 -P -d -v eth0

This resulted in obtaining a prefix:

Bound to *:546
Listening on Socket/eth0
Sending on   Socket/eth0
PRC: Confirming active lease (INIT-REBOOT).
XMT: Forming Rebind, 0 ms elapsed.
XMT:  X-- IA_PD d5:68:28:08
XMT:  | X-- Requested renew  +3600
XMT:  | X-- Requested rebind +5400
XMT:  | | X-- IAPREFIX 2001:980:XXXX:140::/60
XMT:  | | | X-- Preferred lifetime +7200
XMT:  | | | X-- Max lifetime +7500
XMT:  V IA_PD appended.
XMT: Rebind on eth0, interval 940ms.
RCV: Reply message on eth0 from fe80::da67:d9ff:fe81:bcec.
RCV:  X-- IA_PD d5:68:28:08
RCV:  | X-- starts 1457617054
RCV:  | X-- t1 - renew  +604800
RCV:  | X-- t2 - rebind +967680
RCV:  | X-- [Options]
RCV:  | | X-- IAPREFIX 2001:980:XXXX:140::/60
RCV:  | | | X-- Preferred lifetime 1209600.
RCV:  | | | X-- Max lifetime 2592000.
RCV:  X-- Server ID: 00:03:00:01:d8:67:d9:81:bc:f0
PRC: Bound to lease 00:03:00:01:d8:67:d9:81:bc:f0.
PRC: Renewal event scheduled in 604800 seconds, to run for 362880 seconds.
PRC: Depreference scheduled in 1209600 seconds.
PRC: Expiration scheduled in 2592000 seconds.

As you can see, I got a /60 prefix. Now I had to somehow get this automated and configure Docker to use it.

Upstart

Since I was testing with Docker 1.10 under Ubuntu 14.04 I had to use Upstart to run dhclient.

The /etc/init/dhclient6-pd.conf Upstart script I created was rather simple:

description     "DHCPv6 Prefix Delegation client"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
respawn limit 30 3
umask 022

console log

exec dhclient -6 -P -d eth0

DHCP hook

dhclient has hooks which it executes when something happens. I wrote a hook which extracts the delegated IPv6 prefix and restarts Docker when it changes.

I placed the hook in the default location for DHCP hooks: /etc/dhcp/dhclient-enter-hooks.d/docker-ipv6:

#!/bin/bash
# dhclient enter hook: store the delegated prefix for Docker and
# restart Docker when the prefix changes.
# dhclient exports $new_ip6_prefix and $old_ip6_prefix; requires sipcalc.

SUBNET_SIZE=80
DOCKER_ETC_DIR="/etc/docker"
DOCKER_PREFIX_FILE="${DOCKER_ETC_DIR}/ipv6.prefix"

if [ ! -z "$new_ip6_prefix" ]; then
    # Take the first /80 out of the delegated prefix for the containers
    SUBNET=$(sipcalc -S $SUBNET_SIZE $new_ip6_prefix|grep Network|head -n 1|awk '{print $3}')
    echo "${SUBNET}/${SUBNET_SIZE}" > $DOCKER_PREFIX_FILE

    if [ "$old_ip6_prefix" != "$new_ip6_prefix" ]; then
        service docker restart
    fi
fi

For this to work you need to modify /etc/default/docker so that this line reads:

DOCKER_OPTS="--ipv6 --fixed-cidr-v6=`cat /etc/docker/ipv6.prefix`"
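
One caveat: Docker will likely fail to start if /etc/docker/ipv6.prefix does not exist yet, for example on the very first boot before dhclient has run. Seeding the file once with a placeholder avoids that; fd00::/80 here is an arbitrary ULA prefix of my own choosing which the hook overwrites:

# One-time seed; overwritten by the dhclient hook on the first lease
echo "fd00::/80" > /etc/docker/ipv6.prefix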

The result

Docker was now running properly with an IPv6 subnet configured and my containers got an IPv6 address as well.

wido@wido-desktop:~$ docker exec -ti 94c8f02 ip addr show dev eth0
13: eth0:  mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:980:XXXX:140:0:242:ac11:2/80 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever
wido@wido-desktop:~$

Native IPv6 in my Docker containers fully automated and dynamic!

All the scripts I used can be found on Github.

MariaDB Galera cluster on IPv6

MariaDB Galera

I try to set up as much IPv6-only infrastructure as possible, and the same goes for a new MariaDB Galera cluster I had to build.

According to the release notes MariaDB 10.1 should have IPv6 support, but it didn't work out for me: I couldn't get my Galera cluster to work IPv6-only.

Galera

I tracked the root cause down to Galera not parsing the IPv6 addresses properly; the configuration had to be tweaked a bit.

Configuration

With the configuration posted below I was able to get a MariaDB 10.1 setup working on IPv6-only.

[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_type=0

bind-address = ::

wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

wsrep_cluster_name="ns01"
wsrep_cluster_address="gcomm://ns011.XXX.eu,ns012.XXX.nl,ns013.XXX.info"

wsrep_sst_method=rsync

wsrep_node_name="ns011"

wsrep_provider_options = "gmcast.listen_addr=tcp://[::]:4567; ist.recv_addr=[2a00:f10:121:XX:XX:a0ff:fe00:1bc7]:4568"
wsrep_node_address = "[2a00:f10:121:XX:XX:a0ff:fe00:1bc7]:4567"
wsrep_sst_receive_address = "[2a00:f10:121:XX:XX:a0ff:fe00:1bc7]:4444"

This resulted in the Galera cluster functioning properly on an IPv6-only network. It's indeed a bit more configuration than with IPv4, but that will probably be resolved in a future release.

The MariaDB status properly shows the nodes being connected over IPv6:

MariaDB [(none)]> show status like 'wsrep_incoming_addresses';
+--------------------------+--------------------------------------------------------------------------------------------------------------------+
| Variable_name            | Value                                                                                                              |
+--------------------------+--------------------------------------------------------------------------------------------------------------------+
| wsrep_incoming_addresses | [2a00:f10:121:XX:XX:a0ff:fe00:1bc7]:3306,[2a00:f10:400:XX:XX:d8ff:fe00:2ef]:3306,[2a00:1d20:3:XX:XX:c01:3:53]:3306 |
+--------------------------+--------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

MariaDB [(none)]>

IPv6 Prefix Delegation on a Cisco 887VA behind a XS4All VDSL2 connection

XS4All connection

At the PCextreme office we have an XS4All VDSL2 connection with native IPv6. We get a /48 from XS4All.

I wrote two earlier blogposts about setting up the Cisco 887VA router which might be of interest before you continue reading.

IPv6 Prefix Delegation

From XS4All we get a /48 routed to our office using DHCPv6 Prefix Delegation. We are experimenting with Docker at the office, where we also want to test its IPv6 capabilities.

The goal was to sub-delegate /60 subnets out of a /56 towards clients internally. I had to figure out how to get this configured on Cisco IOS.

  • We get a /48 delegated from XS4All
  • The first /56 is used for our local networks (LAN, Guest and Servers)
  • The second /56 is used as a pool to delegate /60 subnets from

Sipcalc

To calculate the IPv6 subnets I used the tool 'sipcalc'. I needed to find the second /56 in our /48:

sipcalc -S 56 2001:980:XX::/48

The output is rather long, so I trimmed it a bit:

-[ipv6 : 2001:980:XX::/48] - 0

[Split network]
Network			- 2001:0980:XX:0000:0000:0000:0000:0000 -
			  2001:0980:XX:00ff:ffff:ffff:ffff:ffff
Network			- 2001:0980:XX:0100:0000:0000:0000:0000 -
			  2001:0980:XX:01ff:ffff:ffff:ffff:ffff
Network			- 2001:0980:XX:0200:0000:0000:0000:0000 -
			  2001:0980:XX:02ff:ffff:ffff:ffff:ffff
...
...
Network			- 2001:0980:XX:ff00:0000:0000:0000:0000 -
			  2001:0980:XX:ffff:ffff:ffff:ffff:ffff

-

In this case 2001:0980:XX:0100:0000:0000:0000:0000/56 is the second /56 in our /48.

Cisco IOS

Some searching brought me to cisco.com which had some examples.

Eventually it was actually quite easy to get it working.

Configuration

You need to define a DHCPv6 pool on the Cisco and tell it to start a DHCPv6 server on the proper interface.

ipv6 dhcp pool local-ipv6
 prefix-delegation pool local-ipv6-pd-pool lifetime 3600 1800
 dns-server 2001:888:0:6::66
 dns-server 2001:888:0:9::99
 domain-name pcextreme.nl
interface Vlan1
 ip address 192.168.5.1 255.255.255.0
 ip nat inside
 ip virtual-reassembly in
 ipv6 address xs4all-prefix ::1/64
 ipv6 enable
 ipv6 nd other-config-flag
 ipv6 nd ra interval 30
 ipv6 nd ra dns server 2001:888:0:6::66
 ipv6 nd ra dns server 2001:888:0:9::99
 ipv6 dhcp server local-ipv6 rapid-commit
 ipv6 mld query-interval 60
ipv6 local pool local-ipv6-pd-pool 2001:980:XX:100::/56 60

That’s all!

Asking for a Prefix

On my Ubuntu desktop I could now request a subnet:

wido@wido-desktop:~$ sudo dhclient -6 -P -v eth0
Internet Systems Consortium DHCP Client 4.2.4
Copyright 2004-2012 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Bound to *:546
Listening on Socket/eth0
Sending on   Socket/eth0
PRC: Soliciting for leases (INIT).
XMT: Forming Solicit, 0 ms elapsed.
XMT:  X-- IA_PD d5:68:28:08
XMT:  | X-- Request renew in  +3600
XMT:  | X-- Request rebind in +5400
XMT: Solicit on eth0, interval 1060ms.
RCV: Advertise message on eth0 from fe80::da67:d9ff:fe81:bcec.
RCV:  X-- IA_PD d5:68:28:08
RCV:  | X-- starts 1455279332
RCV:  | X-- t1 - renew  +900
RCV:  | X-- t2 - rebind +1440
RCV:  | X-- [Options]
RCV:  | | X-- IAPREFIX 2001:980:XX:100::/60
RCV:  | | | X-- Preferred lifetime 1800.
RCV:  | | | X-- Max lifetime 3600.
RCV:  X-- Server ID: 00:03:00:01:d8:67:d9:81:bc:f0
RCV:  Advertisement recorded.
PRC: Selecting best advertised lease.

As you can see I got 2001:980:XX:100::/60 delegated to my desktop.

IPv6 routes

After I asked for a subnet on my desktop this is what the routes look like. You can see a /60 being routed to my link-local address.

firewall-vdsl-veldzigt#show ipv6 route
IPv6 Routing Table - default - 8 entries
Codes: C - Connected, L - Local, S - Static, U - Per-user Static route
       B - BGP, HA - Home Agent, MR - Mobile Router, R - RIP
       H - NHRP, D - EIGRP, EX - EIGRP external, ND - ND Default
       NDp - ND Prefix, DCE - Destination, NDr - Redirect, O - OSPF Intra
       OI - OSPF Inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2, ON1 - OSPF NSSA ext 1
       ON2 - OSPF NSSA ext 2, la - LISP alt, lr - LISP site-registrations
       ld - LISP dyn-eid, a - Application
S   ::/0 [1/0]
     via Dialer0, directly connected
S   2001:980:XX::/48 [1/0]
     via Null0, directly connected
C   2001:980:XX::/64 [0/0]
     via Vlan1, directly connected
L   2001:980:XX::1/128 [0/0]
     via Vlan1, receive
C   2001:980:XX:1::/64 [0/0]
     via Vlan300, directly connected
L   2001:980:XX:1::1/128 [0/0]
     via Vlan300, receive
S   2001:980:XX:100::/60 [1/0]
     via FE80::C23F:D5FF:FE68:XX, Vlan1
L   FF00::/8 [0/0]
     via Null0, receive
firewall-vdsl-veldzigt#

The subnet is working now and I can hand it out to my Docker containers.

ISC Kea DHCPv6 server

DHCPv6

In most situations StateLess Address AutoConfiguration (SLAAC) works just fine when you work with simple clients in an IPv6 network. But in other cases you want to assign pre-defined addresses or prefixes to clients, and that is where DHCPv6 comes into play.

While working on the IPv6 implementation for Apache CloudStack I found Kea, a DHCPv6 server from ISC.

DHCPv6 DUID

With IPv4 you could easily identify a client based on the MAC address it sent the DHCP request from. With DHCPv6 there is the DUID, the "DHCP Unique Identifier". This is generated by the client and then used by the DHCPv6 server. A few DUID types the clients can choose from:

  • DUID-LL: DUID Based on Link-layer Address
  • DUID-LLT: Link-layer Address Plus Time
  • DUID-EN: Assigned by Vendor Based on Enterprise Number

While the DUID seems nice, it can't be dictated by the DHCPv6 server: the client generates the DUID itself and sends it towards the server. Not something you prefer if you are not in control of the clients.

In a cloud you are in control of the MAC address, so that is what you want to use where possible; unlike the DUID, it can't be spoofed by the client.
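
As an aside, the DUID-LLT which shows up in the Kea log further down in this post decodes as follows:

00:01             - DUID type 1 (DUID-LLT)
00:01             - hardware type 1 (Ethernet)
1e:47:7e:66       - timestamp (seconds since 2000-01-01)
52:54:00:d6:c2:a9 - link-layer address (the client's MAC)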

ISC Kea

Kea is a DHCPv4/DHCPv6 server being developed by the Internet Systems Consortium. It is an extensible and flexible DHCP server; Facebook uses it in their datacenters.

My goal was very simple: set up Kea and see if I could use it to hand out an address to a client.

Configuration

I downloaded the tarball and tested it with this configuration between two simple KVM VMs on my desktop.

{
    "Dhcp6": {
        "renew-timer": 1000,
        "rebind-timer": 2000,
        "preferred-lifetime": 3000,
        "valid-lifetime": 4000,
        "lease-database": {
            "type": "memfile",
            "persist": true,
            "name": "/tmp/kea-leases6.csv",
            "lfc-interval": 1800
        },
        "interfaces-config": {
            "interfaces": [ "eth1/2001:db8::1" ]
        },
        "mac-sources": ["duid"],
        "subnet6": [
            {
                "subnet": "2001:db8::/64",
                "id": 1024,
                "interface": "eth1",
                "pools": [
                    { "pool": "2001:db8::100-2001:db8::ffff" }
                ],
                "pd-pools": [
                    {
                        "prefix": "2001:db8:fff::",
                        "prefix-len": 48,
                        "delegated-len": 60
                    }
                ],
                "reservations": [
                    {
                        "hw-address": "52:54:00:d6:c2:a9",
                        "ip-addresses": [ "2001:db8::5054:ff:fed6:c2a9" ]
                    }
                ]
            }
        ]
    }
}

Starting Kea with this configuration was rather simple:

Starting Kea

$ kea-dhcp6 -c /etc/kea.json -d
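
Recent Kea versions can also check a configuration file without starting the daemon, which is useful before restarting a production server:

$ kea-dhcp6 -t /etc/kea.json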

Logs

When it starts you see some interesting bits in the log:

DHCP6_CONFIG_NEW_SUBNET a new subnet has been added to configuration: 2001:db8::/64 with params t1=1000, t2=2000, preferred-lifetime=3000, valid-lifetime=4000, rapid-commit is disabled
DHCPSRV_CFGMGR_ADD_SUBNET6 adding subnet 2001:db8::/64
HOSTS_CFG_ADD_HOST add the host for reservations: hwaddr=52:54:00:d6:c2:a9 ipv6_subnet_id=1024 hostname=(empty) ipv4_reservation=(no) ipv6_reservation0=2001:db8::5054:ff:fed6:c2a9
HOSTS_CFG_GET_ONE_SUBNET_ID_HWADDR_DUID get one host with IPv6 reservation for subnet id 1024, HWADDR hwtype=1 52:54:00:d6:c2:a9, DUID (no-duid)
HOSTS_CFG_GET_ALL_HWADDR_DUID get all hosts with reservations for HWADDR hwtype=1 52:54:00:d6:c2:a9 and DUID (no-duid)
HOSTS_CFG_GET_ALL_IDENTIFIER get all hosts with reservations using identifier: hwaddr=52:54:00:d6:c2:a9
HOSTS_CFG_GET_ALL_IDENTIFIER_COUNT using identifier hwaddr=52:54:00:d6:c2:a9, found 0 host(s)
HOSTS_CFG_GET_ONE_SUBNET_ID_HWADDR_DUID_NULL host not found using subnet id 1024, HW address hwtype=1 52:54:00:d6:c2:a9 and DUID (no-duid)
HOSTS_CFG_GET_ONE_SUBNET_ID_ADDRESS6 get one host with reservation for subnet id 1024 and including IPv6 address 2001:db8::5054:ff:fed6:c2a9
HOSTS_CFG_GET_ALL_SUBNET_ID_ADDRESS6 get all hosts with reservations for subnet id 1024 and IPv6 address 2001:db8::5054:ff:fed6:c2a9
HOSTS_CFG_GET_ALL_SUBNET_ID_ADDRESS6_COUNT using subnet id 1024 and address 2001:db8::5054:ff:fed6:c2a9, found 0 host(s)
HOSTS_CFG_GET_ONE_SUBNET_ID_ADDRESS6_NULL host not found using subnet id 1024 and address 2001:db8::5054:ff:fed6:c2a9
DHCPSRV_MEMFILE_DB opening memory file lease database: lfc-interval=1800 name=/tmp/kea-leases6.csv persist=true type=memfile universe=6
DHCPSRV_MEMFILE_LEASE_FILE_LOAD loading leases from file /tmp/kea-leases6.csv

You can see it handed out the one reservation, based on the MAC address of the client, after the client booted:

ALLOC_ENGINE_V6_HR_ADDR_GRANTED reserved address 2001:db8::5054:ff:fed6:c2a9 was assigned to client duid=[00:01:00:01:1e:47:7e:66:52:54:00:d6:c2:a9], tid=0xe7899a

Ubuntu client

The client was a simple Ubuntu 14.04 client with this network configuration:

auto eth0
iface eth0 inet dhcp
iface eth0 inet6 dhcp

And indeed, it obtained the correct address:

root@ubuntu1404:~# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:d6:c2:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 brd 192.168.100.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8::5054:ff:fed6:c2a9/64 scope global deprecated dynamic 
       valid_lft 62sec preferred_lft 0sec
    inet6 fe80::5054:ff:fed6:c2a9/64 scope link 
       valid_lft forever preferred_lft forever
root@ubuntu1404:~#

Lease database

Kea can store the leases in a CSV file or a MySQL database if you want. In this test I used /tmp/kea-leases6.csv as the CSV file to store the leases in.

In production a MySQL database is probably easier to use, but for the test CSV worked just fine.
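
Switching to MySQL only changes the lease-database section, along these lines (a sketch based on Kea's documented backend options; database name and credentials are placeholders):

"lease-database": {
    "type": "mysql",
    "name": "kea",
    "host": "localhost",
    "user": "kea",
    "password": "secret"
}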