Home Server Part 1: I need ZFS!

With more and more devices in my life, I soon started to feel the need for some kind of private, centralized data storage. I don’t want to buy desktop computers, laptops and phones each equipped with huge local storage and copy things from A to B all the time. Cloud providers work great for this, but once the amount of data you have exceeds a few hundred gigabytes they get expensive, and they are certainly way too slow unless you access the internet at Gigabit speeds.

Having a small server at home solves this problem for me. My devices all have rather limited local storage and by now access all the important files remotely. To be honest, this could be achieved with any store-bought NAS system out there. But that would steal a lot of the fun, right? (And also, you get a lot more bang for your buck when building the system yourself.)

Last fall, after about 7 years, my current system started to show its age. It was built on a RAID-5 with 3x 3TB disks, which had filled to over 95% over time. The backup disks had even less free space, and I had to start excluding less important stuff from my backups. Bit rot ate up one of my oldest files. And simple tasks like zipping a file for download over Nextcloud would bring the old dual-core CPU to its limits.

It was time for a new server, and I put a lot more thought into building the new one than I did 7 years ago.

Data Storage

I need more space. That is not a problem: single hard drive capacities exceed 10TB by now, and I can easily double or triple the space for the new system if I want to. But the ext4 RAID-5 wasn’t cutting it. I mentioned that one of my oldest video files “rotted” away, basically meaning it got corrupted just by lying around (check out Data Degradation on Wikipedia). As a result, I wanted to do more than a simple RAID-5.

Which inevitably led me to ZFS. There is a variety of filesystems out there, and I learned that some put more effort into keeping data safe than others. I was using ext4 on Linux, which works perfectly fine but does not do too much for reliability. There is Btrfs, which brings a lot of great features to the table and is part of the Linux ecosystem. And then there is ZFS, which was originally developed at Sun Microsystems (later acquired by Oracle) and brought to the open source community with OpenZFS. Many features between the two are quite similar, and my decision towards ZFS was more a matter of convenience regarding the operating system I decided to use. But I’ll get to that. Here are the features that made me want to run ZFS:

Checksums on multiple levels of the data structure

Good old ext4 also uses checksums, but only for its metadata and journal, not for the file contents themselves. ZFS does it on a completely different level (literally): there are checksums for the data on all the hierarchically organized levels of the file system, which makes checking data integrity both fast AND thorough. Every time ZFS reads something from disk, the corresponding checksums are verified. So if something does not check out, you’ll know right away.
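Nothing needs to be configured for this; checksumming is on by default. As a quick sketch (the pool name “tank” is a placeholder), you can inspect the algorithm in use and watch for failed verifications:

    # The checksum algorithm is a regular dataset property (on by default):
    zfs get checksum tank
    # Checksum failures found during normal reads are counted in the
    # CKSUM column of the pool status output:
    zpool status tank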

Scrubbing

ZFS offers a tool to perform a checkup on your data and verify its integrity. It’s called a scrub and should be part of your data housekeeping, since it lets you notice problems with your storage even if you’re not accessing every single file all the time. Given some kind of redundancy, the filesystem will heal itself if a file is corrupt. Now that sounds very cool! I felt like a fool relying on fsck to check for errors, which turns out to be far less thorough than a ZFS scrub.
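A scrub is kicked off with a single command and runs in the background; again, the pool name “tank” is just a placeholder:

    # Start a scrub and check on its progress:
    zpool scrub tank
    zpool status tank
    # A typical setup runs this periodically, e.g. from a monthly cron job:
    # 0 3 1 * * /sbin/zpool scrub tank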

Redundancy and RAID

Running a RAID-5 also offered redundancy in my old system. But ZFS does a little more (again). Besides the usual striping and mirroring to build a storage architecture from a number of disks, it is possible to automatically store multiple copies of the same file. They might end up on the same physical device, but this gives you another copy to repair corrupted data from if a drive sector goes bad. Storing two copies will of course double the effective storage requirements for a given file.
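Both layers are easy to set up. A minimal sketch, with placeholder pool/dataset names and device paths:

    # Create a mirrored pool from two disks:
    zpool create tank mirror /dev/sda /dev/sdb
    # Keep two copies of every block in this dataset (doubles its space
    # usage; this complements, but does not replace, redundancy across disks):
    zfs set copies=2 tank/photos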

Caching and Compression

Data can be compressed on the fly, which can conveniently save a lot of space for well-compressible data. Deduplication will store the same block of data only once, no matter how often it is referenced. Also very handy. But what has the most impact for me are the caching mechanisms. ZFS offers two levels of read cache: a first one in RAM (the ARC) and an optional second one on a small but fast device called the L2ARC. This gives you the possibility to further ramp up the speed of a hard disk RAID with a high performance SSD. Something similar exists for writing: a separate log device (SLOG) can hold the ZFS intent log on fast storage, which greatly speeds up synchronous writes.
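All of this is enabled via dataset properties or by attaching devices to the pool; a sketch with placeholder names and device paths:

    # Transparent compression and deduplication per dataset
    # (dedup needs a lot of RAM for its tables, so use it with care):
    zfs set compression=lz4 tank/data
    zfs set dedup=on tank/data
    # Add a fast SSD partition as L2ARC read cache, and another as SLOG:
    zpool add tank cache /dev/nvme0n1p1
    zpool add tank log /dev/nvme0n1p2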

Snapshots

Of course, one of the main things that makes ZFS ideal for data storage are snapshots. The filesystem is copy-on-write, which allows it to keep a number of older states of your data: only blocks that have changed since a snapshot was taken consume additional space, so this comes with relatively low overhead. This gives you the possibility to roll back to an older version of a file, or even recover deleted files if you need to.
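Taking and using snapshots is a one-liner each; the dataset and snapshot names below are placeholders:

    # Take a snapshot, list existing snapshots, roll back to the latest one:
    zfs snapshot tank/documents@before-cleanup
    zfs list -t snapshot
    zfs rollback tank/documents@before-cleanup
    # Individual files can also be copied read-only out of the hidden
    # .zfs/snapshot/ directory of the dataset.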

DevOps