Enhanced RBD support for CloudStack 4.2

About an hour ago the new storage subsystem was merged into the master branch of CloudStack. That is wonderful news for everybody out there who wants to use features like snapshotting with RBD in CloudStack.

In pre-4.2 CloudStack a snapshot was the same as a backup: as soon as you created a snapshot, it was also copied to the secondary storage. Not only could this lead to high network utilization when talking about 1TB RBD volumes, it also caused problems with the underlying ‘qemu-img’ tool. To make a long story short: snapshots with RBD just wouldn’t work in CloudStack 4.0 or 4.1 without resorting to dirty hacks, which we didn’t want to do.

The new storage subsystem separates the backup and snapshot processes. Snapshots are handled by the primary storage, and they can be copied to the ‘backup storage’ on request. This allows us to use the full snapshot potential of RBD.
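To give an idea of what the primary storage can now use under the hood, these are the plain ‘rbd’ commands for creating, listing and removing a snapshot of a volume. The pool and image names are made up for the example:

rbd snap create cloudstack/vm-volume-1@snap1
rbd snap ls cloudstack/vm-volume-1
rbd snap rm cloudstack/vm-volume-1@snap1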

I was waiting for the storage subsystem to be merged into the master branch before I could start working on this. About two weeks ago I already wrote a small functional spec in CloudStack’s wiki describing what has to be done.

A couple of choices still have to be made. Traditionally we could do everything through libvirt and ‘qemu-img’, but from what I can see now we’ll run into some trouble there. We might have to wrap librbd in a Java library to get it all done, but I’m not completely sure about that yet. Some patches for libvirt(-java) could probably also do the job, but it would take a lot of time and work to get those upstream and into the distribution repositories. The goal is to have this new RBD code work natively on an Ubuntu 13.04 system.

The expectation is that CloudStack 4.2 will be released in mid-July this year, but if you are a daredevil you can always track the master branch and play around with it.

I’ll post updates about the progress to the cloudstack-dev list on a regular basis, but you can also watch the master branch and search for commits with ‘RBD’ in the message.

100% CPU utilization on a Cisco 887VA

Some time ago I wrote a blogpost about using a Cisco 887VA router on an XS4All (Dutch ISP) connection. The original article is mostly in Dutch, but I’ll keep this one in English, since it will probably help users all over the world.

A couple of days ago I got an e-mail from somebody who had read my blogpost and asked me whether the 887VA was able to handle more than 25Mbit. I had never really tested it, since I thought the copper cabling in our office wasn’t that good. During a download I logged into the router and saw that the CPU was 94% utilized!

The VDSL line had, however, synced at 38Mbit, so how could this happen? Was the router underpowered?

I couldn’t wrap my head around it. A brand new VDSL router from Cisco couldn’t handle just 25Mbit? Something had to be wrong.

Some searching brought me to the Cisco Support Forums, where one of the suggestions was to turn on CEF (Cisco Express Forwarding), a Cisco technology that improves Layer 3 forwarding performance.

Logging in to the router indeed showed that CEF was disabled for both IPv4 and IPv6:

no ip cef
no ipv6 cef

Enabling CEF was simple:

conf t
ip cef
ipv6 cef

And voilà! I was suddenly able to use the full 38Mbit with just ~50% CPU load.
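If you want to verify that CEF is actually active, and keep the setting across reloads, the following standard IOS commands can be used (the exact output varies per IOS release):

show ip cef summary
show ipv6 cef summary
write memory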

My EVSE is online!

It took some work and tuning, but my own Open EVSE is online!

After connecting the Advanced Power Supply cabling, it automatically switches to Level 2 charging at 30A.

I made a small change to the Open EVSE code, since in the EU we have 230/400V instead of 110/220V. The change can be found on my GitHub account.

Today another Roadster owner already came by, and I helped him install OVMS in his 2.0 Roadster Sport. His car charged nicely at 30A for about 4 hours.

To get Open EVSE working with the Roadster I had to add a 2.4k resistor on top of R1 to bring the resistance back to 650 ~ 700 Ohm, as mentioned in the Open EVSE issue tracker.

Below are two pictures of both Roadsters charging at the newly installed EVSE.
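As a side note on the resistor modification mentioned above: putting the 2.4k resistor ‘on top of’ R1 places the two in parallel, so the combined value follows the standard parallel-resistor formula. A quick sanity check in Python; note that the 1000 Ohm value for R1 is an assumption for the example, not taken from the Open EVSE schematic, so check your own board revision:

```python
def parallel(r1: float, r2: float) -> float:
    # Equivalent resistance of two resistors in parallel: R1*R2 / (R1+R2)
    return r1 * r2 / (r1 + r2)

# Assumed value for R1 (1000 Ohm) plus the added 2.4k resistor
print(round(parallel(1000, 2400), 1))  # 705.9 Ohm, near the 650 ~ 700 Ohm target
```

With these assumed values the result lands just above the target range, which matches the "back to 650 ~ 700 Ohm" ballpark from the issue tracker.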

If you have any questions, feel free to contact me!