The day

Friday night: Cloudflare has an ongoing outage, some of their services are not working, and my household is in blackout due to a local electricity failure. The best time to do something, right?

Migrating this blog from a home server to a cloud provider was already in my plans, but according to the priorities it was supposed to happen much later. And I decided that it could be an interesting case to test my infrastructure design and data strategies.

The migration

While my server’s UPS can beep for many hours thanks to the low-wattage PC Engines masterpiece, the hardware chain of the ISP used for the blog cannot last for long. Let’s check: yes, Cloudflare already reports that the origin host is offline. I still have a laptop and other ISPs running, but for the sake of testing I would like to imagine that my server is offline as well. Then the situation can be depicted like this:

[diagram 01: the situation]

So I need two data sources:

  • the blog sources git repo

  • the server config git repo

I keep my data backed up with Tarsnap, so I can fetch the required archives when I run into trouble like this. And I have ways to get the respective Tarsnap keys even if I start with zero data in my hands (for obvious reasons I cannot write about the details here).

Okay, I have the sources. My old plan was to give Vultr a try after DigitalOcean announced the discontinuation of FreeBSD support. I also wanted to try an IPv6-only host, since Cloudflare can proxy clients to an IPv6-only origin. So, conceptually, it is going to be the following transition:

[diagram 02: concept]

As long as one of my ISPs supports IPv6 traffic, setting up the new origin server is a breeze, with no additional tunnels:

  • create a Vultr account

  • add my SSH public key

  • spin up a FreeBSD instance

  • update my ~/.ssh/config on the laptop for this new host

  • blog# pkg install nginx

  • upload nginx configuration from my home server config git repo

  • blog# sysrc nginx_enable=yes

  • blog# service nginx start
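The nginx side is tiny for a static blog. A minimal sketch of what such a configuration could look like, assuming the site is plain static files served over IPv6; the server_name and root path are placeholders, the real values live in the server config repo:

```nginx
# Hypothetical minimal config for a static blog on an IPv6-only host;
# the real server_name and paths differ.
server {
    listen       [::]:80;
    server_name  blog.example.com;

    root   /usr/local/www/blog;
    index  index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

Plain port 80 is enough here because Cloudflare sits in front of the origin and terminates TLS for visitors.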

The blog itself is cooked in a very simple way:

  • spin up Docker on my laptop

  • laptop$ make clean install
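For illustration, the make clean install pair could boil down to something like this hypothetical Makefile; the generator image, output directory, and destination host are assumptions, not the real setup:

```make
# Hypothetical sketch; the real targets differ.
clean:
	rm -rf public

build:
	# run the static site generator inside Docker (image name is a placeholder)
	docker run --rm -v "$(CURDIR)":/site -w /site my-ssg-image build

install: build
	# push the rendered site to the origin host (alias from ~/.ssh/config)
	rsync -az --delete public/ blog:/usr/local/www/blog/
```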

The domain is already delegated to Cloudflare for the magic proxying, so the last step is tinkering with the zone (usually, the scariest part):

  • add AAAA record

  • remove the existing A record

  • wait for the TTL :)
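Waiting for the TTL does not have to be guesswork: a small check against a public resolver shows when the new record is visible. A sketch, with the zone name as a placeholder:

```shell
# Succeeds once an AAAA record for the given name is visible via
# Cloudflare's public resolver; blog.example.com below is a
# placeholder, not the real zone.
check_aaaa() {
    dig +short AAAA "$1" @1.1.1.1 | grep -q ':'
}

# usage: poll every 30 seconds until the record shows up
# until check_aaaa blog.example.com; do sleep 30; done
```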


Unfortunately, it migrated very well. Thus, if my configuration and assumptions still have issues, I have not spotted them during this test case. The only thing left is to move the nginx configuration into the blog git repository itself; right now it is a completely separate thing, and it would be nice to have all the needed files together in a single repo.


Copyright © Igor Ostapenko
(handmade content)
