Home NAS build, 2017 Edition

Many years ago, before WiFi or even routers were common in homes, I was faced with the problem of how to connect multiple computers in my home to the internet when our cable provider only provided a single IP address. In the late 1990s and early 2000s this was a problem faced largely by super-nerds like me, and the accepted solution was to build your own router: typically a Linux box with two Ethernet ports, running ipchains (or iptables, if you were bleeding-edge) for NAT, plus DHCP and DNS servers. I had an old computer lying around, so that’s what I did. This box served me well for many years, ultimately being replaced by a Linksys WRT54G.

While the Linux box was in service, it took on a few other roles, the most useful of which was in-house file server. Long before services like Dropbox — or even Gmail — existed, it was a real pain to share large files with other people in your house. Streaming music wasn’t really a thing back then; the only way to share your music collection with someone was to keep your MP3s on a central server somewhere, or just re-download them from Napster.

Eventually my need for a dedicated file server mostly dried up. The power in my basement was so flaky that the machine would shut off randomly without my noticing, so when I actually wanted to get something off it, it wasn’t turned on, which was annoying. That, plus the fact that the thing had a 750 MHz Athlon CPU filled with cobwebs and dust and a single 40 GB drive, meant it was more hassle than it was worth. The final nail was my house flooding during Hurricane Sandy, which left the server submerged in brackish sea water.

And so, for several years, I made do without a dedicated file server in my house, backing all my important stuff up to a single USB 3.0 WD My Book. As a sysadmin, trusting it all to a single spinning disk never sat well with me, but it was the cheapest option.

A problem that arose in the absence of a file server was how to share the 100,000 photos and videos I’d taken over the past 15 years with the rest of my family. I enabled file sharing on my Mac and exported the USB drive, but that required the drive to always be plugged in, which was annoying; a laptop tethered to a USB disk isn’t very portable.

Finally, about a year ago, I decided I needed a new file server, to hold personal items like family photos as well as to serve as a DLNA media server for my movie and music collection. Plex makes this formerly clunky task pretty smooth. The question I faced was whether to build my own home file server — a more modern version of my ancient Athlon box — or buy one of the off-the-shelf home NAS devices that have come to market over the past few years. I know several people who have Synology, QNAP or Drobo devices and they all speak highly of them. Ultimately, about a month ago, I decided to go with a Synology DS416. I already detailed my Synology experience in a separate post, but the short story is it was just too slow for my needs and didn’t really simplify anything I wanted to do, so I returned it and decided to build my own server once again.

Ultimately I settled on the parts in this list. While this came out slightly more expensive than the Synology plus 4 drives, it’s far more powerful. I should note that that’s not the exact motherboard I got; for some reason PC Part Picker doesn’t list the Asus Prime B250M-C. I also went with 16 GB of memory, which may be overkill for a file server, but I’d like this thing to last at least 10 years, so a bit of future-proofing won’t hurt, especially if I want to play around with running VMs or something.

I was surprised to see how powerful the Kaby Lake i3-7100 CPU is: two 64-bit cores (plus Hyper-Threading) at 3.9 GHz with a 51 W TDP. It really puts the Synology’s 32-bit Annapurna Labs Alpine AL-212 dual-core 1.4 GHz CPU to shame. I also threw in a Crucial 275 GB SATA M.2 SSD for the OS drive. I couldn’t find a case I really liked, but fortunately I had an old Core 2 Duo desktop collecting dust in a Cooler Master case, and everything fit inside it without issue (though I had to buy a bracket to mount one of the 3.5″ drives in a 5.25″ bay).

I debated running FreeNAS, but ultimately went with Ubuntu 16.04 (Xenial), since I’ve been working with it for almost 4 years and have grown to love it. Configuring the storage drives was something I’d been thinking about for a long time. My original plan was to use mdadm to create a RAID5 array from the 4 spinning drives, which would give me about 11–12 TB usable, more than enough for the foreseeable future. But ZFS support was one of the headline features of Xenial, so I decided to go with a raidz, the ZFS equivalent of RAID5.
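Creating the pool is a one-liner with ZFS on Linux. This is a sketch rather than a transcript: the pool name lunix1 matches the snapshot listing later in this post, but the device paths are hypothetical (in practice, /dev/disk/by-id paths are safer, since they survive device reordering):

root@lunix:~# zpool create lunix1 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde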

I ran into a couple of bumps along the way — one of the HGST drives bricked itself after a couple of days and I had to RMA it, and another reported I/O errors in syslog on every reboot, so I had to RMA that one also. I should point out that I steered away from Seagate and toward HGST on the basis of the Backblaze hard disk report. While I didn’t go with the specific model of HGST drive they used, I figured the brand might be worth something. Having had a 50% defect rate on my batch of 4 drives, it appears I was wrong.

With all the RMA’d drives replaced, I performed a 24-hour run of badblocks to make sure nothing was wrong with the new drives. Fortunately all looked good and I’m now back up and running. After having read this and this, and after my own painful experience with failed disks, I decided to rebuild my ZFS pool as a striped set of mirrors (RAID10) rather than a raidz (RAID5). While this means I lose 50% of the capacity, I gain much shorter rebuild times and better performance when a disk fails and the array is in a degraded state. I also enabled lz4 compression on the ZFS volume based on some of the benchmarks in those articles; in this box, CPU power far exceeds disk speed, so minimizing disk reads and writes via compression is a big win.
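For the curious, the command-line version of all that looks roughly like this. Again a sketch with hypothetical device paths; note that badblocks -w is a destructive write test, so it has to run before the disks join the pool:

root@lunix:~# badblocks -wsv /dev/sdb
root@lunix:~# zpool create lunix1 mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
root@lunix:~# zfs set compression=lz4 lunix1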

On the software side, I currently have the file server running Samba for general “file server” stuff, vsftpd for my Foscam dog camera to store videos, and netatalk for the Apple Filing Protocol (AFP) so I can back up my Mac to it via Time Machine. I installed Plex, which, as expected, performs far better now than it did on the Synology and has no problem transcoding videos on the fly. I also wrote a Python script to transcode all my old .AVI home videos into iPhone-compatible MP4s (H.264 video + AAC audio), and the i3 plowed through them pretty quickly.
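The script is nothing fancy; here’s a minimal sketch of the approach, assuming ffmpeg is installed (the directory path is hypothetical):

#!/usr/bin/env python3
"""Transcode .AVI home videos to iPhone-compatible MP4 (H.264 + AAC) with ffmpeg."""
import pathlib
import subprocess

SRC_DIR = pathlib.Path("/lunix1/data1/videos")  # hypothetical path

for avi in sorted(SRC_DIR.rglob("*.avi")):
    mp4 = avi.with_suffix(".mp4")
    if mp4.exists():
        continue  # already transcoded on a previous run
    subprocess.run(
        ["ffmpeg", "-i", str(avi),
         "-c:v", "libx264", "-preset", "medium", "-crf", "20",  # H.264 video
         "-c:a", "aac", "-b:a", "160k",                         # AAC audio
         "-movflags", "+faststart",  # moov atom up front for streaming
         str(mp4)],
        check=True,  # stop if ffmpeg fails on a file
    )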

Being a sysadmin at heart, I wanted to make sure I had some decent monitoring in place. Lately I’ve rediscovered collectd, which I consider by far the simplest way to see relevant metrics for a Linux system, and which is fairly trivial to configure. The collection3 web UI provides an easy way to see everything from available disk space to current CPU speed — see below.
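For reference, the handful of collectd.conf lines behind those graphs looks something like this (the plugin selection is my assumption, based on the graphs described below; collection3 reads the RRD files that the rrdtool plugin writes):

LoadPlugin cpu
LoadPlugin cpufreq
LoadPlugin df
LoadPlugin disk
LoadPlugin load
LoadPlugin rrdtool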

[Graph: CPU speed for the past 24 hours. It was pegged at 3.9 GHz during the badblocks test.]
[Graph: The newly created ZFS volume filling up as I copy my stuff onto it.]

I also installed Webmin, which I hadn’t looked at in quite a while. I’m not completely sold on it, but it does give a very nice dashboard of overall system health, with metrics ranging from CPU usage to drive temperature:

[Screenshot: Webmin dashboard]

I wanted to be able to access my server from the Internet via a browser, but over a TLS connection rather than plain HTTP, for reasons that are hopefully obvious. I was initially planning to use a self-signed SSL certificate, but this seemed like a good time to try out Let’s Encrypt. Installation and setup were pretty simple, and in under 5 minutes I had a trusted cert installed with Nginx fronting it, for free. Sweet! The last time I bought an SSL cert it cost $200 for one year and involved annoying phone verification.
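Once the client is installed, the whole process boils down to a single command. A sketch, assuming certbot’s nginx plugin and a hypothetical hostname (the plugin edits the nginx config and reloads it for you):

root@lunix:~# certbot --nginx -d nas.example.com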

One really neat feature this file server provides is the ability to present ZFS snapshots to Windows clients as “Previous Versions” using the volume shadow copy service. ZFS, as a copy-on-write filesystem, makes creating snapshots trivial, and a few Samba config lines present them to Windows under the Previous Versions tab in file info. A cron job on the server generates snapshots every 8 hours. The Samba config snippet and the cron job are below.
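A minimal sketch of both pieces, assuming Samba’s shadow_copy2 VFS module and the snapshot naming format visible in the zfs list output further down (the share name and path are my assumptions):

# /etc/samba/smb.conf share definition
[data]
   path = /lunix1/data1
   read only = no
   vfs objects = shadow_copy2
   shadow:snapdir = .zfs/snapshot
   shadow:sort = desc
   shadow:format = snap_%Y-%m-%d-%H%M

# /etc/cron.d/zfs-snapshot: snapshot every 8 hours (cron requires the % signs to be escaped)
0 */8 * * * root /sbin/zfs snapshot lunix1/data1@snap_$(date +\%Y-\%m-\%d-\%H\%M)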

Here’s what a file with previous versions looks like: I created it at 23:33, snapshotted it, then edited it again at 23:34, and the 23:33 version shows up under the Previous Versions tab.

The snapshots are visible using this command:

root@lunix:~# zfs list -t snapshot
NAME                                USED  AVAIL  REFER  MOUNTPOINT
lunix1/data1@snap_2017-02-09-0113   284K      -  3.02M  -
lunix1/data1@snap_2017-02-09-0432  4.84M      -   482G  -
lunix1/data1@snap_2017-02-09-0433  4.79M      -   489G  -

I’ve only had this thing up a couple of days, but so far I’m pretty happy with how it turned out overall. If I had to do it again I’d probably opt for 2x 6 TB or 8 TB drives rather than 4x 4 TB drives: I don’t really need the striping performance, and two fewer spinning drives would reduce overall power consumption, failure probability, and cost, and open up more options for smaller cases. The case I have is nice, but it’s a full desktop chassis.

Thanks for reading.
