Using the Link-Local Address of IPv6

Link Local

One of the things not widely known is the functionality a Link-Local Address provides with IPv6.

You might have seen them on your Linux (or any other) system. For example, on my Linux system:

wido@desktop:~$ ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:8f:9f:af:62 brd ff:ff:ff:ff:ff:ff
    inet 10.0.199.15/16 brd 10.0.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:8fff:fe9f:af62/64 scope link 
       valid_lft forever preferred_lft forever
wido@desktop:~$

As you can see, my Link-Local Address in this case is fe80::5054:8fff:fe9f:af62. What can I do with it?

What is it used for?

With IPv6 the Link-Local Address is used for multiple purposes:

  • Finding Routers using a Router Solicitation
  • Performing Duplicate Address Detection
  • Finding Neighbors

The Link-Local Address is, however, a fully functional address which you can use for more than just these protocol tasks.
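
The Neighbor Discovery mentioned above, for instance, runs entirely over these Link-Local Addresses. The neighbors an interface has learned that way end up in the kernel's neighbor cache, which you can inspect on Linux with:

$ ip -6 neigh show dev eth1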

Using Link-Local

Here at the office my colleague has a desktop and his Link-Local Address is fe80::821f:2ff:fed6:5f08.

So can I ping the address?

wido@wido-desktop:~$ ping6 fe80::821f:2ff:fed6:5f08
connect: Invalid argument
wido@wido-desktop:~$

No, that doesn’t seem to work. Or does it?

wido@wido-desktop:~$ ping6 -I eth0 -c 2 fe80::821f:2ff:fed6:5f08
PING fe80::821f:2ff:fed6:5f08(fe80::821f:2ff:fed6:5f08) from fe80::c23f:d5ff:fe68:2808 eth0: 56 data bytes
64 bytes from fe80::821f:2ff:fed6:5f08: icmp_seq=1 ttl=64 time=0.566 ms
64 bytes from fe80::821f:2ff:fed6:5f08: icmp_seq=2 ttl=64 time=0.612 ms

--- fe80::821f:2ff:fed6:5f08 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.566/0.589/0.612/0.023 ms
wido@wido-desktop:~$

So when I specify the interface I can ping his desktop!

You can also specify the interface by appending it to the address with a '%' (the so-called zone index): fe80::821f:2ff:fed6:5f08%eth0

wido@wido-desktop:~$ ping6 -c 2 fe80::821f:2ff:fed6:5f08%eth0
PING fe80::821f:2ff:fed6:5f08%eth0(fe80::821f:2ff:fed6:5f08) 56 data bytes
64 bytes from fe80::821f:2ff:fed6:5f08: icmp_seq=1 ttl=64 time=0.539 ms
64 bytes from fe80::821f:2ff:fed6:5f08: icmp_seq=2 ttl=64 time=0.481 ms

--- fe80::821f:2ff:fed6:5f08%eth0 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.481/0.510/0.539/0.029 ms
wido@wido-desktop:~$

So can I SSH to it or do anything else with it?

wido@wido-desktop:~$ ssh fe80::821f:2ff:fed6:5f08%eth0
The authenticity of host 'fe80::821f:2ff:fed6:5f08%eth0 (fe80::821f:2ff:fed6:5f08%eth0)' can't be established.
ECDSA key fingerprint is d8:d7:d0:bd:3c:6a:18:31:e5:26:b1:13:96:a8:e1:89.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'fe80::821f:2ff:fed6:5f08%eth0' (ECDSA) to the list of known hosts.
wido@fe80::821f:2ff:fed6:5f08%eth0's password: 

wido@wido-desktop:~$

Indeed, I can! I can also telnet to the address:

wido@wido-desktop:~$ telnet fe80::821f:2ff:fed6:5f08%eth0 22
Trying fe80::821f:2ff:fed6:5f08%eth0...
Connected to fe80::821f:2ff:fed6:5f08%eth0.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.9
^]quit

telnet> quit
Connection closed.
wido@wido-desktop:~$

It is a fully functional address which you can use on your local network, as long as you remember to specify the interface (zone index).

Security

Even if you think IPv6 is disabled on your system because you haven't configured it, it usually isn't: a Link-Local Address is configured automatically on every IPv6-capable interface.
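
A quick check is to look for an inet6 address on your interfaces, or to query the kernel setting (a value of 0 means IPv6 is enabled):

$ ip -6 addr show
$ sysctl net.ipv6.conf.all.disable_ipv6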

Should you disable IPv6 then? No! Learn to work with it. IPv4 space is running out very quickly, so disabling it is not a wise thing to do.

Just make sure your firewall policies for both IPv4 and IPv6 are up to date. I’ve seen many systems where IPv6 isn’t firewalled at all, which makes them open to anybody on the local network.

Link-Local Addresses are not routed over the internet, so somebody has to gain access to the local Layer 2 LAN before they can connect via Link-Local, but still, keep it in mind.
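
As a rough sketch (adapt interfaces and ports to your own setup), a minimal IPv6 input policy could look like this. Note that ICMPv6 has to stay open, since Neighbor Discovery and Router Advertisements depend on it:

# Allow ICMPv6 (Neighbor Discovery, Router Advertisements, ping)
$ ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
# Allow return traffic and SSH, drop everything else
$ ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$ ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
$ ip6tables -P INPUT DROP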

Protecting your Ceph pools against removal or property changes

One of the dangers of Ceph was that you could accidentally remove a multi-terabyte pool and lose all the data. Although the CLI tools asked you for confirmation, librados and all its bindings did not.

Imagine explaining that you just removed a 200TB pool from your storage system due to a typo in your Python code…

So I suggested that we come up with a mechanism to prevent pools from being deleted from a Ceph cluster, and Sage quickly came up with something!

Hammer v0.94

Ceph version 0.94, aka ‘Hammer’, came out a couple of weeks ago and it has some fancy features which prevent you from removing a pool by accident or on purpose.

Monitors denying pool removal

A new configuration setting for the monitors has been introduced:

mon_allow_pool_delete = false

If you add that to ceph.conf (in the [mon] section) and restart your MONs, you will not be able to remove any pool from your Ceph cluster, either via the CLI or directly via librados. The Monitors will simply refuse it:

root@admin:~# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
root@admin:~#
root@admin:~# rados rmpool rbd rbd --yes-i-really-really-mean-it
pool rbd does not exist
error 1: (1) Operation not permitted
root@admin:~#

This is a cluster-wide configuration setting and can only be changed by restarting your Monitors. A good way to prevent anybody from removing a pool by accident or on purpose.
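
For reference, the ceph.conf snippet is simply:

[mon]
mon_allow_pool_delete = false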

Pool flags

A different way to achieve this is by setting the new nodelete flag on a pool. Setting this flag prevents the pool from being removed.

In addition to this flag, a couple of other flags were introduced:

  • nodelete
  • nosizechange
  • nopgchange

The flags speak for themselves. Once a flag is set, the corresponding operation is no longer allowed:

root@admin:~# ceph osd pool set rbd nosizechange true
set pool 0 nosizechange to true
root@admin:~# ceph osd pool set rbd size 5
Error EPERM: pool size change is disabled; you must unset nosizechange flag for the pool first
root@admin:~#

I’m not allowed to change the size (aka the replication level) of the pool ‘rbd’ while that flag is set.
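
Should you legitimately need to change the size later, you can unset the flag again (set it to false), make the change and re-enable it (the size of 3 here is just an example):

$ ceph osd pool set rbd nosizechange false
$ ceph osd pool set rbd size 3
$ ceph osd pool set rbd nosizechange true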

Applying all flags

To apply these flags quickly to all your pools, simply execute these three one-liners:

$ for pool in $(rados lspools); do ceph osd pool set $pool nosizechange true; done
$ for pool in $(rados lspools); do ceph osd pool set $pool nopgchange true; done
$ for pool in $(rados lspools); do ceph osd pool set $pool nodelete true; done
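
To verify that the flags were applied, the pool entries in the OSDMap should now list them (exact output differs per version):

$ ceph osd dump | grep ^pool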

Your Ceph cluster just became a lot safer! No data loss or downtime due to fat fingers anymore 🙂

Rebuilding libvirt under CentOS 7.1 with RBD storage pool support

If you want to use CentOS 7.1 for your hypervisors with Apache CloudStack and Ceph’s RBD as Primary Storage, you need to rebuild libvirt.

CloudStack requires libvirt to be built with RBD storage pool support, since it uses libvirt to manage RBD volumes. By default, libvirt under CentOS is not built with this support (on Ubuntu it is, by the way).

Rebuilding from source

First we need to install a couple of packages:

$ yum install -y rpm-build gcc make ceph-devel

Now we need to download the source RPM (SRPM):

$ wget http://vault.centos.org/centos/7.1.1503/os/Source/SPackages/libvirt-1.2.8-16.el7.src.rpm

Create an rpmbuild directory:

$ mkdir /root/rpmbuild

Now edit /root/.rpmmacros so that it contains:

%_topdir    /root/rpmbuild

Install the SRPM:

$ rpm -i libvirt-1.2.8-16.el7.src.rpm

Open the /root/rpmbuild/SPECS/libvirt.spec file and look for:

%else
    %define with_storage_rbd      0
%endif

Change this to:

%else
    %define with_storage_rbd      1
%endif
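
If you prefer not to edit the file by hand, a sed one-liner can flip the define as well (assuming the whitespace matches your spec file exactly):

$ sed -i 's/with_storage_rbd      0/with_storage_rbd      1/' /root/rpmbuild/SPECS/libvirt.spec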

Now build the RPM:

$ cd /root/rpmbuild
$ rpmbuild -ba SPECS/libvirt.spec
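
Should rpmbuild complain about missing build dependencies, yum-builddep (from the yum-utils package) can install them based on the spec file:

$ yum install -y yum-utils
$ yum-builddep SPECS/libvirt.spec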

After a couple of minutes you should have RPMs with RBD storage pool support enabled!
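
The resulting packages land under /root/rpmbuild/RPMS/. Installing them is then a matter of something like this (the exact file list depends on your build):

$ yum localinstall /root/rpmbuild/RPMS/x86_64/libvirt*.rpm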