Metafarce Update -> systemd, man pages, and TLS

I've recently had time to update the guts of metafarce.com. This post is about the updates to those guts, including what I tried that didn't work out so well. The first section is full of personal opinion about the state of free UNIX OSes.1 The second section concerns my adventures in getting TLS to work, and thoughts on the state of free TLS certificate signing services.

Background

I wanted to have IPv6 connectivity, DNSSEC and TLS for metafarce.com and a few other domains I host. The provider I had been using for VPS did not offer IPv6, so I found a VPS provider that did. The provider I had been using for DNS did not support DNSSEC, so I found a DNS provider that did.

Switching VPS providers meant I had to set up a new machine anyway. I had been running Debian for years, but I decided to switch to OpenBSD. My Debian VPS had been fine over the years; I kept it updated with apt-get and generally never had any major problems with it. The next section deals with why I switched.

Because Reasons

Actually, two reasons.

The first reason is systemd. I simply didn't want to deal with it. I didn't want to learn it, I didn't see the value in it, and it has crappy documentation. This isn't me saying systemd is crap. I don't know if it's crap because I haven't spent any time evaluating it. This is me saying I don't care about systemd, and it isn't worth my time to investigate. There are other places on the web where one can argue the [de]merits of systemd; this is not the place for that.

One of the key things missing from the assorted arguments surrounding systemd is historical context. It's as if many of the systemd combatants aren't aware of how sacred init systems are to UNIX folk. One of the first big splits in UNIX history was between those who wanted a BSD-style init and those who wanted a SysV-style init. There is a long history of UNIX folk arguing about how to start their OS, yet I saw very little recognition of that fact in the arguments for or against systemd.

The second reason is that Debian man pages suck. Debian probably has the highest quality man pages of any Linux distro, but they still suck. They're often outdated, incomplete, or incorrect, and it doesn't seem like Linux users care all that much that their man pages suck. Most users only read man pages during troubleshooting, and then only after failing to find a solution on the web. I read the man pages for every application I install. I want to know how the application works, what files it uses, what signals it accepts, and so on.

The BSD UNIXes have excellent man pages, and they get the attention they deserve during release cycles. Unlike in most Linux distributions, updates to man pages in the BSDs are listed in changelogs and treated as development work on par with software changes. This is as it should be. Documentation acts as a kind of contract between user and programmer. It sets user expectations. If a man page says a program should behave in a certain fashion and the program doesn't, then we know it's a bug.

There is a trend in the UNIX world toward thinking man pages are outdated. Some newer UNIX applications don't even ship man pages. This is stupid. Documentation is part of the program and should not be relegated to an afterthought. Also, you might not always have web access when troubleshooting.

TLS and StartSSL

Metafarce.com and the other domains I run on this VPS now have both IPv6 and DNSSEC. Metafarce does not yet have TLS (i.e. HTTPS) because I refuse to pay for it. StartSSL.com offers free certificates, so in theory I should be able to get one for free. The problem is that I cannot convince StartSSL that I control metafarce.com. To validate that a user controls a domain, StartSSL requires access to an email address listed in the domain's whois record, or to one of postmaster@, hostmaster@, or webmaster@ for that domain.

I don't control any of the email addresses in my whois record. It's not that I use a whois privacy service; my registrar simply doesn't allow me to edit them. I'm also not willing to create an MX record for metafarce.com and then set up mail forwarding for postmaster@, hostmaster@, or webmaster@. Therefore I cannot convince StartSSL that I control metafarce.com. I shouldn't be in this situation. We have DNS SOA records for reasons, and one of those reasons is to publish the zone's admin email address. At the very least, the address listed in metafarce.com's SOA record should be usable for domain validation purposes.
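For what it's worth, that admin address is trivial to fetch programmatically. Here's a minimal sketch, assuming the third-party dnspython library; it just applies the standard rule that the first label of the SOA RNAME field stands in for the '@' sign.

    # Minimal sketch: read a zone's admin contact out of its SOA record.
    # Assumes the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def soa_admin_email(zone):
        # Return the admin email encoded in the zone's SOA RNAME field.
        answer = dns.resolver.resolve(zone, "SOA")
        rname = answer[0].rname.to_text(omit_final_dot=True)
        # The first label of RNAME stands in for '@', e.g.
        # hostmaster.metafarce.com. -> hostmaster@metafarce.com
        # (escaped dots in the local part are ignored here for simplicity).
        local, _, domain = rname.partition(".")
        return local + "@" + domain

    # Example: print(soa_admin_email("metafarce.com"))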

Also, how does StartSSL know that whoever controls the domain is the only one with access to these email addresses? The list, while not arbitrary, is not enforced across all mail setups.2 There are plenty of email-accepting domains that forward these addresses straight to /dev/null.

Another method I have seen used to confirm control of a zone is to create a TXT record containing a unique string. StartSSL could provide me with a unique string, and I would then add a TXT record with that string as its value. This method assumes that someone who can create TXT records for a domain controls the domain, which is probably a fair assumption.
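To make that concrete, here's a rough sketch of what the CA's side of such a check could look like, again assuming dnspython. The _validation label and the token format are purely hypothetical, not anything StartSSL actually specifies.

    # Hypothetical TXT-record validation, sketched with dnspython.
    # The "_validation" label and token format are made up for illustration.
    import secrets
    import dns.resolver

    def issue_challenge():
        # CA side: hand the applicant a random string to publish in DNS.
        return secrets.token_hex(16)

    def check_challenge(domain, token):
        # CA side: look for the token in a TXT record under the domain.
        name = "_validation." + domain
        try:
            answer = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        published = {b"".join(rdata.strings).decode() for rdata in answer}
        return token in published

    # The applicant would publish something like:
    #   _validation.metafarce.com. IN TXT "<token>"
    # and the CA would later call check_challenge("metafarce.com", token).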

I think StartSSL has chosen a poor method for tying users to domains. Whois records should not be relied upon as a way of proving control. Not only does this break for people who use whois privacy services, but many users cannot directly edit their whois records and don't have the skills or resources to set up email forwarding for their domain.

The outcome of all this is that I don't support HTTPS for metafarce.com. Without a CA-signed cert, users have to wade through piles of buttons and dialogs designed to scare them away. Thus it remains unencrypted.3 Proving that a given user controls a given domain is a tough problem, and I don't mean to suggest otherwise. StartSSL offers a free signing service and should be commended for it. I just hope the situation improves so that I and others can start hosting more content over TLS.

Let's Encrypt to the Rescue

Let's Encrypt is a soon-to-be-launched certificate authority run by the Internet Security Research Group (ISRG), a public benefit corporation backed by a few concerned corporate sponsors and the EFF. They're going to sign web TLS certs for free at launch, which is great in and of itself. Even greater is the Internet draft they've written for their new automagic TLS cert creation and signing. We'll see how it works out, but if they get it right this will be a huge boon for TLS adoption. At the very least I can then start running TLS everywhere without having to pay for it.
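As I understand the draft, the core idea is the same challenge-and-verify loop as above, just automated end to end: the CA hands the client a fresh token, the client proves control of the domain by publishing it over HTTP or DNS, and the CA checks for it before signing. Below is a toy sketch of the HTTP flavor; the challenge path and token handling are illustrative only and don't follow the draft's actual wire format.

    # Toy sketch of automated HTTP-based domain validation in the spirit
    # of the draft. The path and token handling are illustrative only.
    import secrets
    import urllib.request

    def make_token():
        # CA side: random challenge for the applicant to publish.
        return secrets.token_urlsafe(32)

    def publish(webroot, token):
        # Applicant side: drop the token where the CA said it will look.
        with open(webroot + "/challenge/" + token, "w") as f:
            f.write(token)

    def verify(domain, token):
        # CA side: fetch the token over plain HTTP and compare.
        url = "http://" + domain + "/challenge/" + token
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip() == token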


  1. I use the term UNIX very generally, as a super-category: any OS that embodies the concepts of UNIX is a type of UNIX. Linux, Minix, Mac OS X, *BSD, and Solaris are all types of UNIX. I'm not sure about QNX, but Windows and VxWorks are definitely not UNIX.

  2. RFC 2142 does actually reserve these addresses, but that doesn't mean mail admins always honor them.

  3. Another site I host on this VPS, synonomic.com, supports TLS. The cert for synonomic.com is not signed by any CA, so the user has to click through some scary-looking dialogs in order to view content. The cert is tied to synonomic.com via DANE, yet no browsers currently support DANE out-of-the-box.