Using a CJDNS Tunnel for Site-to-Site Encryption

The next task on my list for switching to a new VPS is adding it to my IPv6 ULA network. I'm going to use CJDNS for the time being.

CJDNS Versus IPsec

Two months ago I was writing an article about creating an IPsec tunnel between my new VPS (vps3) and my home server (home).

Time, however, is not on my side. If I went down the IPsec route, I would probably run out of time to give notice to cancel my old VPS (vps2).

The 18th August 2015 is the end of my minimum notice period—"14 days (before 18-May-2015)"—at which point some of my add-ons for vps2 will be auto-renewed.

On the plus side, those add-ons are the ones I was given free due to an outage (15 GB HDD, 1 IP address), so auto-renewal will not cost me anything.

The next pertinent date is 31st May 2015, when my add-on for vps3 (an extra IP address) will "automatically terminate". This is an issue because, upon trying to cancel the add-on within the OpenITC control panel, I am greeted with the following:

This order is permanently attached to a parent product. You cannot cancel this order without cancelling the parent order.

Hopefully, by disabling "service auto renewing" on the add-on, the IP address will simply be removed from vps3 on 1st June. Hopefully vps3 will still be functioning on 1st June, and hopefully on the 17th or 18th I will be invoiced for only £2.50 instead of £3.50.

Here is the problem with the timing: although my 5 GB add-on for vps2 needs to be cancelled before 18th August 2015, my package for vps2 needs to be cancelled before 17th June 2015.

That means I will not know if the invoicing for vps3 is correct before I initiate cancellation of service for vps2.

My body is still repairing itself after surgery, and I need to finish reorganising my room after moving things so the electrician could add a double socket on a spur. I can't do that until I finish putting my new sit/stand computer desk together (I tried last week and my wound dressing ripped off), so doing any major coding or securely signing certificates just isn't comfortable at the moment. Besides, I still need to buy a computer chair.

Therefore, sticking with what I know works (for the most part) would be best for the time being. I'll probably move to IPsec at a later date.

CJDNS

CJDNS is an encrypted networking protocol that routes packets over a private IPv6 network, with each node's address derived from its public key. Hyperboria is the largest known CJDNS network and, because you can only join by peering with someone already on the network, it can be considered a darknet.

Because a CJDNS node can peer with multiple other nodes, such a network is also considered a meshnet: data packets can take multiple routes between any two nodes. A node doesn't need to join the Hyperboria network, though; my home and vps2 nodes are only peered with each other, creating a point-to-point darknet.

All that would be needed to create a meshnet would be to make vps3 a CJDNS node and peer it with both home and vps2. Traffic between home and vps3 could then either travel directly or go via vps2. Hyperboria, having many more nodes, is a much larger meshnet than my three-node one would be, with correspondingly more routes traffic can take between nodes.

Anyway, I have been using a tunnel on top of CJDNS on top of IPv6 for a site-to-site link between home and vps2. It does occasionally have problems; restarting my Hurricane Electric tunnels, my CJDNS tunnels, and my ULA (IPv6 Unique Local Address) tunnels usually fixes things, although sometimes the problem turns out to be with Varnish or MySQL instead.

Native IPv6

One of the bonuses of my new VPS is that vps3 has native IPv6. Rather than having two HE tunnels, I will only need the one at home, removing one of the layers that can cause routing outages.

Alternatively, I could move my HE tunnel endpoint from vps2 to vps3, keeping the public IP addresses the same. I probably won't, though, because when I start moving things to my new VPS I will need a way to test the services one by one rather than moving the whole subnet and hoping everything continues to function.

One advantage I do have is that the vast majority of my backend services use ULA addresses rather than publicly routable ones. For example, the MySQL server on vps2 asks my home (replication master) server for updates at IP address fdd7:5938:e2e6:1::3306:1.
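For illustration, pointing a replication slave at that ULA address looks something like the following, run on vps2 (the replication user and password here are hypothetical placeholders, and MASTER_PORT is left at its default of 3306):

mysql -u root -p <<'SQL'
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='fdd7:5938:e2e6:1::3306:1', -- home, over the ULA network
  MASTER_USER='repl',                     -- hypothetical replication user
  MASTER_PASSWORD='examplePassword';
START SLAVE;
SQL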

Although my home MySQL instance is bound to all IP addresses (MySQL can bind to one address or to all of them, but not to an arbitrary subset), firewall rules mean it is effectively not listening on publicly routable IP addresses.
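Those rules amount to something like this in ip6tables (a sketch rather than my exact rules):

### MySQL: ULA sources only ###
-A INPUT -s fdd7:5938:e2e6::/48 -p tcp --dport 3306 -j ACCEPT
-A INPUT -p tcp --dport 3306 -j DROP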

Although SNI has reduced the need for unique IP addresses for HTTPS (TLS) connections, native IPv6 means I can give each Web site hosted on vps3 its own IP address without relying on an IPv6 tunnel broker for routing. It also means more services can share my IPv4 address, so I can cut the number of IPv4 addresses I need from three down to just one.

Installing CJDNS

On vps3, I ran the following commands:

sudo apt-get update
sudo apt-get install nodejs git build-essential
cd /usr/local/src
sudo mkdir cjdns
sudo chown thejc:root cjdns
git clone https://github.com/cjdelisle/cjdns.git cjdns
cd cjdns
./do
sudo mkdir /usr/local/etc/cjdns
sudo chown thejc:root /usr/local/etc/cjdns
umask 077 && ./cjdroute --genconf > /usr/local/etc/cjdns/cjdroute.conf

Configuring CJDNS

The first thing to do is to open /usr/local/etc/cjdns/cjdroute.conf and make a few modifications.

Change the UDPInterface bind from 0.0.0.0:27177 to the public IP address, 149.255.108.141:27177.

Change beacon from 2 to 1.

Uncomment the tunDevice line and set it to tun0.

Uncomment the logTo line, and set noBackground to 1.
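After those edits, the relevant parts of cjdroute.conf look roughly like this (the exact keys and their placement vary between cjdns versions, so treat this as a sketch; the addresses are the real ones used above):

"interfaces": {
    "UDPInterface": [
        {
            // Bind to the public address rather than all interfaces.
            "bind": "149.255.108.141:27177"
        }
    ],
    "ETHInterface": [
        {
            // 1 = accept beacons only, rather than 2 = send and accept.
            "beacon": 1
        }
    ]
},
"router": {
    "interface": {
        "type": "TUNInterface",
        // Fixed device name so the firewall rules can reference it.
        "tunDevice": "tun0"
    }
},
// While testing: log to stdout and stay in the foreground.
"logging": { "logTo": "stdout" },
"noBackground": 1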

Finally, we need to add the following rules to our firewall.

In iptables:

### CJDNS ###
-A INPUT -d 149.255.108.141 -p udp --dport 27177 -j ACCEPT

And in ip6tables we need to accept traffic arriving on tun0 from home's tun0 address destined for vps3's tun0 address, and forward traffic from vps3's tun0 address destined for home's:

-A INPUT -i tun0 -d fc…[vps3]… -s fc…[home]… -j ACCEPT
-A FORWARD -i tun0 -o ula-net -d fc…[home]… -s fc…[vps3]… -j ACCEPT

Next, copy the line from vps3's cjdroute.conf that starts // "your.external.ip.goes.here: into the connectTo section of cjdroute.conf on my home server, uncommenting it and substituting in vps3's public IP address. Then restart CJDNS on home.
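The resulting entry on home takes this general form (the password and public key below are placeholders in the format --genconf produces; the real values come from vps3's cjdroute.conf):

"connectTo": {
    "149.255.108.141:27177": {
        "password": "passwordFromVps3sConf",
        "publicKey": "vps3sPublicKey.k"
    }
}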

Fire up cjdroute on vps3:

sudo /usr/local/src/cjdns/cjdroute < /usr/local/etc/cjdns/cjdroute.conf

Attempt pinging the vps3 CJDNS IP address from my home server. There should be very verbose activity in CJDNS on vps3. If CJDNS keeps repeating "No nodes in routing table, check network connection and configuration." every so often, double-check the firewall rules on vps3.
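The test from home is just a ping6 against vps3's CJDNS address (elided here as elsewhere):

ping6 -c 4 fc…[vps3]…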

Assuming everything is working (i.e. home is getting ping responses from vps3's CJDNS IP) comment out the logTo line in vps3's cjdroute.conf, set noBackground to 0, and restart CJDNS on vps3.

At this point CJDNS is working between home and vps3, but it is not automatically started on reboots.

Automatically Starting CJDNS

sudo nano /etc/init.d/cjdns

Copy the contents of https://github.com/ProjectMeshnet/CJDNS-init.d-Script/blob/master/setup.sh minus the first line into /etc/init.d/cjdns on vps3.

Change GIT_PATH and PROG_PATH to /usr/local/src/cjdns.

Change CJDNS_CONFIG to /usr/local/etc/cjdns/cjdroute.conf.

Change the first test in the stop() function so that it checks for 1 cjdroute process instead of 2 (the version of CJDNS at the time of writing only creates one process):

if [ $(pgrep cjdroute | wc -l) != 1 ];
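In context, the check ends up in a function shaped something like this (a sketch of the function's logic rather than the script's exact contents):

stop() {
    # Current cjdns spawns a single cjdroute process, so anything
    # other than exactly 1 means it is not running.
    if [ $(pgrep cjdroute | wc -l) != 1 ]; then
        echo "cjdns is not running"
    else
        killall cjdroute
    fi
}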

Save the file, make it executable, and then make it automatically start on boot:

sudo chmod +x /etc/init.d/cjdns
sudo update-rc.d cjdns defaults

Now, test that the init script works:

sudo killall cjdroute
sudo service cjdns start
sudo service cjdns stop

Start pinging the vps3 CJDNS IP from the home server indefinitely, and then start the cjdns service using sudo service cjdns start.

If everything is working correctly, ping responses should start to be returned after approximately 30 seconds (the point at which home's CJDNS instance will tear down and attempt to re-establish a non-responsive connection to a node).

When home starts getting ping responses from vps3, abort the indefinite pinging.

As a final test, reboot vps3. Reconnect to vps3 over SSH and run sudo service cjdns status.

If "Cjdns is running" is returned, try pinging vps3 from home again. Remember, it might take 30 seconds for the CJDNS instance on home to reconnect to vps3.

Assuming everything is working, the next stage is to add vps3 to my ULA network.

ULA Routing

The last 4 hexadecimal digits of the eth0 interface's MAC address on vps3 are 6c:8d. I use these digits to determine the ULA subnet allocated to a machine. vps2, for example, has been given subnet fdd7:5938:e2e6:9660::/64, and vps3 will be given subnet fdd7:5938:e2e6:6c8d::/64.
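For reference, deriving that suffix on a Linux box can be done like so (assuming the interface is eth0 and sysfs is available):

# Last four hex digits of eth0's MAC address, colons stripped.
mac=$(cat /sys/class/net/eth0/address)
suffix=$(echo "$mac" | tr -d ':' | tail -c 5)
echo "ULA subnet: fdd7:5938:e2e6:${suffix}::/64"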

My ULA /48 is registered in the sixxs.net IPv6 ULA (Unique Local Address) RFC4193 Registration List to avoid potential addressing conflicts in any future network mergers.

The odds of two of my IPv6 ULA /64 subnets clashing when I'm using the last 4 digits of the primary MAC address of a device are fairly low, given there are 65,534 possible combinations (I'm excluding 0000 and FFFF from the count).

I am using the subnet fdd7:5938:e2e6:3::/64 for ULA tunnel endpoints, so fdd7:5938:e2e6:3::6c8d (a /128) is where my home server will route traffic for fdd7:5938:e2e6:6c8d::/64.

In order to create a tunnel, I duplicated /etc/init.d/ula-ipv6-tunnel (copying it to /etc/init.d/ula-ipv6-tunnel-vps3) on my home server, and made some modifications to IPv6 subnets in both scripts so that, for example, fdd7:5938:e2e6:3::9660/64 was changed to a /128.

My home server now has two ULA interfaces: ula-net between home and vps2, and ula-net-vps3 between home and vps3.
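The core of what home's script for vps3 sets up looks roughly like this; the ip6ip6 tunnel mode is an assumption on my part, and the fc… endpoints are the CJDNS addresses, elided as before:

# ULA-over-CJDNS tunnel between home and vps3 (mode assumed).
ip -6 tunnel add ula-net-vps3 mode ip6ip6 \
    local fc…[home]… remote fc…[vps3]…
ip link set ula-net-vps3 up

# vps3's tunnel endpoint, then its whole /64 via that endpoint.
ip -6 route add fdd7:5938:e2e6:3::6c8d/128 dev ula-net-vps3
ip -6 route add fdd7:5938:e2e6:6c8d::/64 via fdd7:5938:e2e6:3::6c8d dev ula-net-vps3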

With some further firewall rules in place, I am able to ping from vps3 to home and vice-versa.

For the time being,