Exploring the CAN bus of my Tesla Model S

The CAN bus of a Tesla vehicle can show some interesting information about the state of different components in the vehicle.

Using a CAN bus cable, a Bluetooth adapter and an app on your mobile phone you can gain much more insight into your Tesla vehicle.

I own two Tesla vehicles:

  • S85 from September 2013 (pre face-lift)
  • S100D from September 2018

Somewhere around 2015 Tesla switched to a different connector for the CAN bus so I needed two different cables. I bought my cables in Germany at EMDS.

EMDS also sells a cable for Model 3. I haven’t used this one as I don’t own a Model 3.

The CAN bus connector in a Model S can be found under the MCU's main screen in the vehicle. You need to pull down the 'cubby' and there you will find the connector:

Cable connected to my Model S85

I am using the TM-Spy app on iOS for reading the values on my iPhone.

Screenshot of TM-Spy on iOS

For Android there is Scan My Tesla which also seems to be a very good app. I don’t have an Android device, so I was not able to test it.

I was mainly looking for these values:

  • Usable Full
  • DC Charge Total
  • AC Charge Total

After 253.543km of driving my battery has 75.6kWh of usable capacity left, whereas this was ~81kWh when it was new. (The 85kWh battery actually was an 81kWh battery…)

Tesla also throttles a vehicle's SuperCharging capabilities after more than a certain amount of DC charging (I don't know the exact threshold). My car seems to be affected, as I SuperCharged a lot.

Charge Total is not an exact sum of AC + DC; from what I've read, early firmware versions did not count AC and DC charging as separate values.

Interesting information though! I encourage everybody to use these tools to gain more insight into their vehicle's state.

Happy exploring!

Replacing the eMMC in my Tesla Model S

Tesla's vehicles are awesome. I own a S85 from 2013 and a S100D from 2018. I've driven 260.000km and 70.000km respectively with these two vehicles and I love them.

There is however a design flaw in the early Tesla models which can turn into a very expensive repair if not addressed in time.

Version 1 of the MCU (Media Control Unit), which was installed in the Tesla Model S/X up until early 2018, is a ticking time bomb.

The problem is the Flash Memory (eMMC) which holds the Operating System of the computer. This wears out over time due to writing data to it.

Writing happens when you use the car: it caches Spotify, Google Maps and many more things. Even when the car is parked, the MCU stays running and keeps writing to the eMMC chip.

Eventually this chip will wear out. Before it does it becomes very slow, which results in the MCU becoming super sluggish and unresponsive, the screen rebooting at random moments, Bluetooth issues, etc.
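As an aside: on a regular Linux machine (not the MCU itself, which you would need to open up to get to the chip), the wear level of an eMMC device can be inspected with mmc-utils. A minimal sketch, assuming the device shows up as /dev/mmcblk0:

# print the 'Life Time Estimation' fields from the extended CSD register
mmc extcsd read /dev/mmcblk0 | grep -i 'life time'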

Inside of the Tesla MCU1
The eMMC chip

A lot has been written about this, so I won't write too much. In short: Tesla will charge you around 2000 EUR/USD for a new MCU.

I chose to replace this eMMC chip myself and this was a lot cheaper! Total cost was <500 EUR.

Around the world there are a couple of companies doing these replacements. I replaced my eMMC chip together with Loek from Laadkabelwinkel.nl (Netherlands): https://www.laadkabelwinkel.nl/tesla-mcu1-emmc-vervangen-reparatie

On Tesla Motors Club there are various topics about these replacements, one for example: https://teslamotorsclub.com/tmc/threads/preventive-emmc-replacement-on-mcu1.152489/

I highly recommend that everybody with a Model S/X replaces this eMMC chip before it fails.

The chip will wear out and this causes all kinds of problems. I can't stress this enough: replace the chip before it's too late!

In Europe I would recommend going to Laadkabelwinkel.nl and having them replace the chip.

Enjoy your Tesla!

Creating a Management Routing Instance (VRF) on Juniper QFX5100

For a Ceph cluster I have two Juniper QFX5100 switches running as a Virtual Chassis.

This Virtual Chassis is currently only performing L2 forwarding, but I want to move this to a L3 setup where the QFX switches use Dynamic Routing (BGP) and thus become the gateway(s) for the Ceph servers.

This should work, but one of the things I was missing was a dedicated management port which uses a different routing table/instance.

Starting with JunOS 17.3R1 you can create a Management Routing Instance as described on the website of Juniper.

set system management-instance

This now creates the Routing Instance called mgmt_junos.

I try to run as much as possible IPv6-only or at least prefer IPv6 over IPv4.

I ran into the problem that configuring an IPv6 address on my em0 interface just wouldn't work: it kept reporting the IPv6 address as duplicate.

This probably happens because both QFX switches are connected to the same Out of Band switch, so a switch receives its own DAD (Duplicate Address Detection) probes over a different link. I had to disable DAD on interface em0 to make it work.
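For reference, disabling DAD is a per-address knob in JunOS; using the placeholder address from my configuration below, it is a single set command:

set interfaces em0 unit 0 family inet6 address 2a00:f10:XXX:XXX::100/64 dad-disable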

In addition I configured all DNS lookups to be performed using this routing instance.

The end result for my configuration (snippets):

system {
    management-instance;
    name-server {
        2a00:f10:ff04:153::53 routing-instance mgmt_junos;
        2a00:f10:ff04:253::53 routing-instance mgmt_junos;
        93.180.70.22 routing-instance mgmt_junos;
        93.180.70.30 routing-instance mgmt_junos;
    }
}
interfaces {
    em0 {
        unit 0 {
            family inet {
                address 172.17.5.10/24;
            }
            family inet6 {
                address 2a00:f10:XXX:XXX::100/64 {
                    dad-disable;
                }
            }
        }
    }
}
routing-instances {
    mgmt_junos {
        routing-options {
            rib mgmt_junos.inet6.0 {
                static {
                    route ::/0 next-hop 2a00:f10:XXX:XXX::1;
                }
            }
            static {
                route 0.0.0.0/0 next-hop 172.17.5.1;
            }
        }
    }
}

This now allows me to SSH to my Juniper QFX Virtual Chassis over interface em0 which uses a different routing instance/table.

Should I make a mistake in the default routing instance, for example a BGP misconfiguration or another routing error, I can still SSH to my switch(es) over the management instance.
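To verify that the management instance behaves as expected, the routing tables can be inspected and tested from operational mode, for example:

show route table mgmt_junos.inet.0
show route table mgmt_junos.inet6.0
ping 93.180.70.22 routing-instance mgmt_junos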

Allowing SSH login for user without a password

To start with: This is something you should NOT use in most cases. It’s only intended to be used in very specific situations.

In my situation I want to allow some remote systems to create a reverse SSH tunnel without a password or key. It's for hobby purposes, and through firewalling I make sure that only those systems are allowed to connect to my 'SSH proxy'.
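For context, such a reverse tunnel is typically created from the remote system like this (sshproxy.example.org and port 2222 are hypothetical):

ssh -N -R 2222:localhost:22 user1@sshproxy.example.org

This exposes the remote system's own SSH daemon on port 2222 of the 'SSH proxy'.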

I started by creating a group and a few users with that as their primary group:

# create the group and add users with reversessh as their primary group
groupadd reversessh
useradd -g reversessh user1
useradd -g reversessh user2
useradd -g reversessh user3
# delete the passwords so the accounts have an empty password
passwd -d user1
passwd -d user2
passwd -d user3

I then modified my /etc/ssh/sshd_config so that it only allows specific groups and permits users with an empty password:

PermitEmptyPasswords yes
AllowGroups root reversessh
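Optionally these accounts can be locked down further with a Match block. The following is a sketch using standard sshd_config directives; the exact policy (tunnels only, no TTY, no X11) is my own assumption:

# only allow tunnels for this group: no TTY, no X11, no shell
Match Group reversessh
    PermitEmptyPasswords yes
    AllowTcpForwarding yes
    PermitTTY no
    X11Forwarding no
    ForceCommand /usr/sbin/nologin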

I also needed to modify PAM to make sure it allows this login. Therefore, modify /etc/pam.d/common-auth so that it contains:

auth    [success=1 default=ignore]      pam_unix.so nullok

After restarting the SSH daemon, users user1 through user3 were able to log in without a password or key.

Is this very secure? No! But it does serve a purpose in some use-cases.

Setting noout flag per Ceph OSD

Prior to Ceph Luminous you could only set the noout flag cluster-wide which means that none of your OSDs will be marked as out.

On large(r) clusters this isn't always what you want, as you might be performing maintenance on a part of the cluster but still want other OSDs that go down to be marked as out.

You can however also set the noout flag on a per-OSD basis using this command:

ceph osd add-noout 0

This means that osd.0 will not be marked as out. You can verify this by looking at the OSDMap:

root@alpha:~# ceph osd dump|grep osd.0
osd.0 up in weight 1 up_from 5 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:[2001:db8::100]:6800/1618,v1:[2001:db8::100]:6801/1618,v2:0.0.0.0:6802/1618,v1:0.0.0.0:6803/1618] [v2:[2001:db8::100]:6804/1618,v1:[2001:db8::100]:6805/1618,v2:0.0.0.0:6806/1618,v1:0.0.0.0:6807/1618] exists,noout,up c0c3e5c3-918b-4f3e-a48c-ea8e7c014a3b
root@alpha:~#

Here you can see the noout flag has been set for osd.0.

Removing the flag for this OSD is the reverse process:

ceph osd rm-noout 0

Other flags you can set per OSD (see the sketch after this list):

  • nodown
  • noup
  • noin
  • noout
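As a sketch of how this can be used during maintenance: flag all OSDs on a single host before taking it down and clear the flags afterwards (node01 is a hypothetical CRUSH host bucket):

for id in $(ceph osd ls-tree node01); do ceph osd add-noout "$id"; done
# ... perform the maintenance ...
for id in $(ceph osd ls-tree node01); do ceph osd rm-noout "$id"; done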

Use this to your advantage!

Comparing two Ceph CRUSH maps

Sometimes you want to test if changes you are about to make to a CRUSH map will cause data to move or not.

In this case I wanted to change a rule in CRUSH to use device classes, but I didn't want any of the ~1PB of data in that cluster to move.

By swapping IDs I could prevent data from moving:

# Before:
root default {
    id -50              # do not change unnecessarily
    id -53 class hdd    # do not change unnecessarily
    id -122 class ssd   # do not change unnecessarily
    ...
}

# After:
root default {
    id -53              # do not change unnecessarily
    id -50 class hdd    # do not change unnecessarily
    id -122 class ssd   # do not change unnecessarily
    ...
}

Notice how I swapped the IDs of the default root (-50) and its hdd shadow tree (-53): the bucket selected via 'class hdd' now carries the ID the placements were originally computed with, so the mappings stay identical. After this I updated the rule:

rule rgw {
    id 6
    type replicated
    min_size 1
    max_size 10
    step take ams02-objects class hdd
    step chooseleaf firstn 0 type host
    step emit
}
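For completeness, the usual fetch/decompile/edit/compile cycle looks like this (the file names match the ones used below):

ceph osd getcrushmap -o crushmap
crushtool -d crushmap -o crushmap.txt
# edit crushmap.txt as shown above
crushtool -c crushmap.txt -o crushmap.new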

I then compiled the new CRUSH map and ran crushtool to see if there were any differences:

root@mon01:~# crushtool -i crushmap --compare crushmap.new 
rule 0 had 0/10240 mismatched mappings (0)
rule 1 had 0/10240 mismatched mappings (0)
rule 2 had 0/10240 mismatched mappings (0)
rule 3 had 0/10240 mismatched mappings (0)
rule 4 had 0/10240 mismatched mappings (0)
rule 5 had 0/3072 mismatched mappings (0)
rule 6 had 0/10240 mismatched mappings (0)
maps appear equivalent
root@mon01:~#

No changes! So it was safe to inject this map:

root@mon01:~# ceph osd setcrushmap -i crushmap.new

HAProxy in front of Ceph Manager dashboard

The Ceph Mgr dashboard plugin provides an easy dashboard which can show you how your Ceph cluster is performing.

In certain situations you can’t contact the Mgr daemons directly and you have to place a Proxy server between your computer and the Mgr daemons.

This can be done easily with HAProxy using the following configuration, which assumes that:

  • SSL has been disabled in the Dashboard plugin
  • the Dashboard plugin listens on port 8080
  • the Mgr is running on the hosts mon01, mon02 and mon03
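
Depending on your Ceph release the dashboard can be configured to match these assumptions; on recent releases it would look something like this (verify the exact option names for your release):

ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 8080

The HAProxy configuration: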
global
  log         127.0.0.1 local1
  log         127.0.0.1 local2 notice

  chroot      /var/lib/haproxy
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  user        haproxy
  group       haproxy
  daemon

  stats socket /var/lib/haproxy/stats

defaults
  log                     global
  mode                    http
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
  option                  httplog
  no option               httpclose
  no option               http-server-close
  no option               forceclose

  stats enable
  stats hide-version
  stats refresh 30s
  stats show-node
  stats uri /haproxy?stats
  stats auth admin:haproxy

frontend http
  bind *:80
  default_backend ceph-dashboard

backend ceph-dashboard
  balance roundrobin
  option httpchk GET /
  http-check expect status 200
  server mon01 mon01:8080 check
  server mon02 mon02:8080 check
  server mon03 mon03:8080 check

You can now point your browser to the URL/IP of your HAProxy and use your Ceph dashboard.

In case a Mgr machine fails, the health checks of HAProxy will make sure traffic fails over to one of the other Mgr daemons.

Renaming a network interface with systemd-networkd on Ubuntu 18.04

On an Ubuntu system where I'm creating a VXLAN Proof of Concept with CloudStack, I wanted to rename the interface enp5s0 to cloudbr0.

I found a lot of documentation on the internet on how to do this with *.link files, but I was missing the golden tip: you need to re-generate your initramfs.

/etc/systemd/network/50-cloudbr0.link

[Match]
MACAddress=00:25:90:4b:81:54

[Link]
Name=cloudbr0

After you create this file, re-generate your initramfs:

update-initramfs -c -k all

You can now reference cloudbr0 in *.network files and use it like any other network interface.
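For example, a minimal *.network file for the renamed interface could look like this (the file name and gateway are assumptions; the address matches my setup below):

/etc/systemd/network/50-cloudbr0.network

[Match]
Name=cloudbr0

[Network]
Address=192.168.0.11/24
Gateway=192.168.0.1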

In my case this is what my interfaces look like:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
6: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:25:90:4b:81:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global cloudbr0
       valid_lft forever preferred_lft forever
    inet6 2a00:f10:114:0:225:90ff:fe4b:8154/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 2591993sec preferred_lft 604793sec
    inet6 fe80::225:90ff:fe4b:8154/64 scope link 
       valid_lft forever preferred_lft forever
8: cloudbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 86:fa:b6:31:6e:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.11/24 brd 172.16.0.255 scope global cloudbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::84fa:b6ff:fe31:6ec1/64 scope link 
       valid_lft forever preferred_lft forever
9: vxlan100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cloudbr1 state UNKNOWN group default qlen 1000
    link/ether 56:df:29:8d:db:83 brd ff:ff:ff:ff:ff:ff

Replacing Xenon headlight bulb on Tesla Model S (2013)

Headlight bulb replacement

On my Tesla Model S (September 2013) the left headlight bulb failed and it had to be replaced. As I like to do such things myself I started to figure out how to do so.

On YouTube I found a great video showing how to do so (you'll find it below), but I didn't know which bulbs I needed to order.

I called Tesla and they told me my European Model S from 2013 uses Osram Xenon D8S 25W bulbs. I searched for a local dealer and ordered them. The total was EUR 150,00 for two bulbs.

Order two

You should always replace both bulbs at the same time. As both bulbs have been on for the same number of hours, they usually fail within a short time of each other.

Replacing them

It took me about 90 minutes to replace the bulbs. You need to remove the front wheel to access them. I was also swapping my summer tires for winter tires anyway, so I could do two jobs at once.

Video

I could try to explain everything in writing, but there is a great video on YouTube that shows the whole procedure:

VXLAN with VyOS and Ubuntu 18.04

VXLAN

Virtual Extensible LAN (VXLAN) is an encapsulation technique that wraps OSI layer 2 Ethernet frames within layer 4 UDP datagrams. More on this can be found via the link provided.

For a Ceph and CloudStack environment I needed to set up a Proof of Concept using VXLAN and some refurbished hardware. The main purpose of this PoC is to verify that VXLAN works with CloudStack, Ceph and Ubuntu 18.04.

VyOS

VyOS is an open source network operating system based on Debian Linux. It supports VXLAN, so we were able to use it to test VXLAN in this setup.

In production another VXLAN-capable router would be used, but for a PoC VyOS works just fine running on a regular server.

Configuration

The VyOS router is connected to 'the internet' with one NIC; the other NIC is connected to a switch.

Using static routes, an IPv4 subnet (/24) and an IPv6 subnet (/48) are routed towards the VyOS router. These are then split up and routed to multiple VLANs.

As it took me a while to configure VXLAN under VyOS, I'm only posting that part of the configuration.

interfaces {
    ethernet eth0 {
        address 31.25.96.130/30
        address 2a00:f10:100:1d::2/64
        duplex auto
        hw-id 00:25:90:80:ed:fe
        smp-affinity auto
        speed auto
    }
    ethernet eth5 {
        duplex auto
        hw-id a0:36:9f:0d:ab:be
        mtu 9000
        smp-affinity auto
        speed auto
        vif 300 {
            address 192.168.0.1/24
            description VXLAN
            mtu 9000
        }
    }
    vxlan vxlan1000 {
        address 10.0.0.1/23
        address 2a00:f10:114:1000::1/64
        group 239.0.3.232
        ip {
            enable-arp-accept
            enable-arp-announce
        }
        ipv6 {
            dup-addr-detect-transmits 1
            router-advert {
                cur-hop-limit 64
                link-mtu 1500
                managed-flag false
                max-interval 600
                name-server 2a00:f10:ff04:153::53
                name-server 2a00:f10:ff04:253::53
                other-config-flag false
                prefix 2a00:f10:114:1000::/64 {
                    autonomous-flag true
                    on-link-flag true
                    valid-lifetime 2592000
                }
                reachable-time 0
                retrans-timer 0
                send-advert true
            }
        }
        link eth5.300
        mtu 1500
        vni 1000
    }
    vxlan vxlan2000 {
        address 109.72.91.1/26
        address 2a00:f10:114:2000::1/64
        group 239.0.7.208
        ipv6 {
            dup-addr-detect-transmits 1
            router-advert {
                cur-hop-limit 64
                link-mtu 1500
                managed-flag false
                max-interval 600
                name-server 2a00:f10:ff04:153::53
                name-server 2a00:f10:ff04:253::53
                other-config-flag false
                prefix 2a00:f10:114:2000::/64 {
                    autonomous-flag true
                    on-link-flag true
                    valid-lifetime 2592000
                }
                reachable-time 0
                retrans-timer 0
                send-advert true
            }
        }
        link eth5.300
        mtu 1500
        vni 2000
    }
}

VLAN 300 on eth5 is used to transport VNIs 1000 and 2000, each in its own multicast group.

The MTU of eth5 is set to 9000 so that VXLAN-encapsulated traffic can still carry 1500-byte inner frames; the outer Ethernet, IP, UDP and VXLAN headers add roughly 50 bytes of overhead.

Ubuntu 18.04

To test if VXLAN was actually working on the Ubuntu 18.04 host I made a very simple script:

# vlan300 is the host's interface in VLAN 300, the VXLAN transport network
ip link add vxlan1000 type vxlan id 1000 dstport 4789 group 239.0.3.232 dev vlan300 ttl 5
ip link set up dev vxlan1000
ip addr add 10.0.0.11/23 dev vxlan1000
ip addr add 2a00:f10:114:1000::101/64 dev vxlan1000
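
The script assumes a vlan300 interface (VLAN 300 towards the VyOS router) already exists on the host. If it doesn't, it could be created like this (eno1 is a hypothetical parent NIC):

ip link add link eno1 name vlan300 type vlan id 300
ip link set mtu 9000 dev vlan300
ip link set up dev vlan300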

That works! I can ping 10.0.0.1 and 2a00:f10:114:1000::1 from my Ubuntu 18.04 machine!