ZFS for home NAS?
I have been doing some research on NASes for home use. Basically, I want a NAS that offers redundancy (some form of RAID) and the ability to add disks as I go. It should also support at least SMB as a file sharing protocol (preferably others as well), and of course not be too expensive. All the home NASes I have found so far have been lacking in at least one of these criteria.
I have read about people using ZFS on FreeBSD or OpenSolaris for their storage servers. ZFS is an open source file system developed by Sun Microsystems with features that make it very compelling for a file storage server. Unfortunately, ZFS is not available on Linux at the time of writing (I believe licensing issues are preventing a port); if it were, I would definitely go for it.
To give it a try, I downloaded OpenSolaris 2009.06 and installed it as a virtual machine in VMware Fusion. Instead of adding several virtual disks to the VM, I decided to test the features of ZFS using regular files (ZFS can use files as disk devices). An easy way to create some “disks” is the mkfile command, which creates a file that can be used as a disk device:
# mkfile 100m /tmp/disk1
# mkfile 100m /tmp/disk2
# mkfile 100m /tmp/disk3
# mkfile 100m /tmp/disk4
ZFS has three levels. The highest level is the ZFS pool, which can contain several ZFS filesystems. The pool itself is built from one or more devices (whole disks, partitions, or even files). Filesystems within a pool share its resources and are not restricted to a fixed size. You can add devices to a pool (for example to increase your storage space) while the pool is running. Devices in a pool can be configured in mirrored mode or in RAIDZ mode to offer redundancy. ZFS also supports filesystem-level snapshots and cloning of existing filesystems. The two main ZFS commands are:
zpool - Manages the pools and the devices within them
zfs - Manages ZFS filesystems
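For example, growing a running pool later on is a single command (just a sketch; the pool name tank and the device names are illustrative):
# zpool add tank mirror c2t0d0 c2t1d0
This attaches a new mirrored pair to the pool, and the extra space becomes available immediately. Keep in mind that top-level devices generally cannot be removed again once added.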
OK, so let's create a pool from the disks we created earlier:
# zpool create storage /tmp/disk1 /tmp/disk2
# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool    7.94G  4.28G  3.66G  53%  ONLINE  -
storage   191M  74.5K   191M   0%  ONLINE  -
As you can see, we combined two disks into one pool. The filesystem automatically gets mounted at /storage (this is the default mount point; it can be changed). No volume management, configuration or formatting is needed.
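For example, relocating the filesystem is a single property change (a quick sketch; /export/storage is just an illustrative path):
# zfs set mountpoint=/export/storage storage
Let's destroy this pool to create a more interesting one.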
# zpool destroy storage
# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool    7.94G  4.36G  3.58G  54%  ONLINE  -
As you can see, it is gone. Let's create a new pool using RAIDZ (a form of RAID similar to RAID-5):
# zpool create storage raidz /tmp/disk1 /tmp/disk2 /tmp/disk3
# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool    7.94G  4.38G  3.56G  55%  ONLINE  -
storage   286M   140K   286M   0%  ONLINE  -
One thing that's a little different about a ZFS raidz pool compared to other RAID-5 setups is that zpool list reports the raw disk space without subtracting the space required for parity. Parity will of course take up space, so this is something to keep in mind when monitoring the disks.
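To see the usable space with parity already deducted, query the filesystem layer instead of the pool layer; for this three-disk raidz, the AVAIL column should show roughly two disks' worth:
# zfs list storage
We can monitor the status of the pool by using the zpool status command: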
# zpool status storage
  pool: storage
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        storage         ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            /tmp/disk1  ONLINE       0     0     0
            /tmp/disk2  ONLINE       0     0     0
            /tmp/disk3  ONLINE       0     0     0

errors: No known data errors
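The zfs command deserves a quick look as well. Creating filesystems within the pool and taking snapshots of them is just as painless (a sketch; the filesystem name media and the snapshot name are only examples):
# zfs create storage/media
# zfs snapshot storage/media@before-cleanup
# zfs list -t snapshot
A snapshot is a read-only, point-in-time copy of a filesystem that initially takes up almost no extra space.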
After playing around with ZFS for a while, I certainly think it would be a great choice for a storage server. It is way easier to use than the software RAID/LVM solutions I have tried on Linux. The biggest drawback would be OpenSolaris itself; I just find the GNU application userland easier to use than the Solaris one. Maybe I should give Nexenta (OpenSolaris kernel, GNU application userland) a chance?
Read more:
ZFS on Wikipedia
RAID-Z