Consider for a moment that ZFS is a cross-platform RAID filesystem.
I must confess that I have never trusted hardware RAID.
Best practices dictate that to properly prepare for the inevitable hardware failure you should stock up on all of the spares you might need up front: spare drives (preferably identical), spare drive trays (if proprietary) and above all, spare controllers (also identical and with identical firmware). You should also practice your disaster recovery strategy (assuming you have one) and hope that you'll never need to deploy it.
Therein lies the problem with proprietary RAID solutions. They place you and your data at the mercy of a vendor's proprietary hardware, utilities and disk layouts.
Software RAID on the BSDs has long provided a good alternative, but never a cross-platform one. This is where ZFS offers a simple, revolutionary notion: portable software RAID arrays.
Or so the saying goes. I have long been astonished that few RAID solutions support the use of mirrored drives outside of their original software and/or hardware contexts. Even though mirrored arrays consist of independent copies of the same data by definition, member disks are often dependent on the hardware controllers that created them. When that old Pentium III server finally goes up in smoke, you'll probably discover that you need a few key files that you swear were being backed up like clockwork but are now trapped on an array that will not install on your new PCIe-based hardware. Because ZFS is both cross-platform and entirely software-based, you finally have hope of removing the minimum number of drives from an array and plugging them into your laptop with a USB or eSATA interface. Your RAID-Z level will determine how many drives you need but believe it or not, ZFS is simply ZFS.
I have found a few examples where people have migrated ZFS pools between operating systems but only as one-off experiences with their fingers firmly crossed. Here is what I discovered while moving two zpools created in FreeNAS to several other operating systems.
My zpools consist of a pair of mirrored 500GB drives named cft and a single 2TB drive named single. The first thing I learned is that you should keep careful track of your pool names because the ZFS utilities will not always recognize and import them automatically.
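If you do forget a pool's name, a bare zpool import will scan for importable pools and report them by name. A minimal sketch; the -d flag is only needed if your devices live somewhere unusual:

zpool import            # scan the default device paths and list importable pools
zpool import -d /dev    # scan an explicit device directory instead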
As a point of reference, the FreeNAS shell reports:
zpool list
NAME     SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
cft      460G   93.5K  460G   0%   ONLINE  /mnt
single   1.81T  400K   1.81T  0%   ONLINE  /mnt

zpool status
  pool: cft
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        cft         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  ONLINE       0     0     0

errors: No known data errors

  pool: single
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        single      ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors

zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
cft       85K  453G     22K  /mnt/cft
single   364K  1.78T   112K  /mnt/single
In all cases, I deliberately did not export the zpools prior to testing, in order to simulate a more realistic situation.
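For reference, the clean way to hand a pool from one system to another is to export it first; skipping that step is precisely what forces the -f imports you will see below. A minimal sketch using the cft pool:

zpool export cft    # on the source system: unmount the pool and mark it exported
zpool import cft    # on the destination system: no -f needed after a clean export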
I recently saw an off-hand comment complimenting the Ubuntu ZFS FUSE package and decided to try it first. It lives in its own PPA repository and can be installed with:
sudo apt-add-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs
Upon reboot, my pools were mounted:
sudo zpool list
NAME     SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
cft      460G   94.5K  460G   0%   ONLINE  -
single   1.81T  408K   1.81T  0%   ONLINE  -

sudo zpool status
  pool: cft
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        cft         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            sdc2    ONLINE       0     0     0
            sdd2    ONLINE       0     0     0

errors: No known data errors

  pool: single
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        single      ONLINE       0     0     0
          sda2      ONLINE       0     0     0

errors: No known data errors

sudo zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
cft       86K  453G     23K  /cft
single   368K  1.78T   116K  /single

ls /cft
HelloWorld.txt
Astonishingly, ZFS FUSE recognized the pools and automatically mounted them upon boot. The mount points changed from /mnt/cft/ and /mnt/single/ under FreeNAS to /cft and /single, but everything simply worked.
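If you prefer the FreeNAS-style paths, the mountpoint property can be changed after import. A sketch, assuming the pools from above:

sudo zfs set mountpoint=/mnt/cft cft        # relocate the pool's root dataset
sudo zfs set mountpoint=/mnt/single single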
Also quite astonishingly, this works using a Xubuntu live CD. Open a terminal, type the apt-get commands, hit Enter when prompted and then start using zfs(8) and zpool(8) as you normally would.
Here are the results if the system is booted with only one member of the mirrored pool:
sudo zpool list
NAME  SIZE  USED  AVAIL  CAP  HEALTH    ALTROOT
cft   460G  198K  460G   0%   DEGRADED  -

sudo zpool status
  pool: cft
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-20
config:

        NAME                     STATE     READ WRITE CKSUM
        cft                      DEGRADED     0     0     0
          mirror                 DEGRADED     0     0     0
            5776137213146340217  UNAVAIL      0     0     0  was /dev/ada1p2
            sdb2                 ONLINE       0     0     0

sudo zpool import cft
cannot import 'cft': pool may be in use from other system
use '-f' to import anyway

sudo zpool import -f cft

ls /tmp/cft
helloworld.txt
You may need to issue a sudo zpool scrub cft, but this worked in all of my tests, even if the broken array reported FAULTED status. That was a surprise.
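For completeness, here is a sketch of reuniting the mirror once the missing disk is reattached; the device name is illustrative and will differ on your system:

sudo zpool online cft sdc2   # bring the reattached member back online (device name assumed)
sudo zpool status cft        # watch the resilver progress
sudo zpool scrub cft         # then verify every block against its checksum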
As you have probably noticed when installing FreeBSD 9.0, the installer now offers a choice of a shell and a live CD. I found that the live CD and the PC-BSD command-line ZFS tools behaved the same, which should be no surprise considering that they are both essentially FreeBSD 9.0. What was unexpected, however, was that FreeBSD 9.0 did not automatically recognize the zpools for import, even after I reformatted the single drive in case the Ubuntu ZFS FUSE tools had made changes to it. A mismatch between ZFS versions is probably the cause: the leap from pool version 15 under FreeNAS to version 28 in FreeBSD 9.0 likely explains it, though the gap was not a problem under Ubuntu's version 16. The backwards compatibility could be slightly more elegant.
zpool list
ZFS filesystem version 5
ZFS storage pool version 28
no pools available

zpool import single
cannot import 'single': pool may be in use from other system, it was last
accessed by freenas.local (hostid: 0xe49a62c8) on Wed Jul 11 10:24:11 2012
use '-f' to import anyway

zpool import -f single
cannot mount '/single': failed to create mountpoint

zpool list
NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
single  1.81T  496K   1.81T  0%   1.00x  ONLINE  -

zpool status
  pool: single
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        single    ONLINE       0     0     0
          ada0p2  ONLINE       0     0     0
The version warnings are to be expected, and cannot mount '/single': failed to create mountpoint is caused by the fact that the FreeBSD 9.0 live CD has root mounted read-only. You can work around this by mounting the pool under /var:
zfs set mountpoint=/var/single single
zfs mount single

mount
...
single on /var/single (zfs, local, nfsv4acls)

ls /var/single/
helloworld.txt
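An alternative worth noting, though not one I used here: importing with a temporary altroot avoids persisting a new mountpoint on the pool. A sketch:

zpool export single            # detach first, since the pool is already imported above
zpool import -R /var single    # -R sets a temporary altroot that is not written to the pool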
If you know you will be staying with FreeBSD/PC-BSD 9.0, you can upgrade the FreeNAS-created zpools from v15 to v28 with zpool upgrade -a.
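A sketch of what that looks like, assuming the pools are imported; the exact versions reported will depend on your system:

zpool upgrade -v    # list every on-disk format version this system supports
zpool upgrade -a    # irreversibly upgrade all imported pools to the newest version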
As you may know, the last public version of OpenSolaris was forked into the Illumos project, and one of its variations is the OpenIndiana operating system. I tested version 151a of OpenIndiana and couldn't wait to see pure, upstream ZFS in action. Much to my surprise, I could not get the OpenIndiana live CD to recognize any FreeNAS-created zpools. The same is true if OpenIndiana is installed to hard disk. I have tried exporting the pools prior to import and upgrading them to v28. Either way, OpenIndiana refuses to see them, and I will keep investigating.
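If you want to dig in yourself, these are reasonable scans to try; a sketch only, with no guarantee they will fare better than my attempts did. Note that Solaris-derived systems look for devices under /dev/dsk by default:

zpool import              # scan the default /dev/dsk device directory
zpool import -d /dev/dsk  # name the device directory explicitly
zpool import -D           # include destroyed pools in the scan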
There were many rumors that ZFS would be included in Mac OS X, but I suspect that licensing concerns prevented this from becoming a reality. Proprietary vendor Ten's Complement has stepped in with a ZFS v28 driver for Mac OS X called ZEVO. The "Silver" edition is the only version released to date and is limited to non-RAID volumes:
sudo zpool list
no pools available

sudo zpool import -f -d /dev -o readonly=off single

sudo zpool list
NAME    SIZE    ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
single  1.81Ti  3.55Mi  1.81Ti  0%   1.00x  ONLINE  -

mount
...
single on /private/tmp/single (zfs, local, nodev, nosuid, journaled, mounted by dexter)
With one member of the broken mirror attached:
sudo zpool list
no pools available

sudo zpool import -f -d /dev cft
  pool: cft
    id: 7148641423518626295
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices. The
        fault tolerance of the pool may be compromised if imported.
config:

        cft               DEGRADED
          mirror-0        DEGRADED
            /dev/disk4s2  ONLINE
            /dev/ada1p2   UNAVAIL  cannot open

sudo zpool list
no pools available
Be sure to "Ignore" the drives upon insertion rather than initializing them, and as you can see, the broken mirror could not be imported.
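One avenue left to try for broken-mirror interrogation is a read-only import, which pool version 28 supports; a sketch only, as I have not confirmed whether ZEVO honors it for degraded pools:

sudo zpool import -f -d /dev -o readonly=on cft   # import without allowing any writes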
OpenSolaris 2009.06 includes zpool version 14, which will not mount our version 15 pools from FreeNAS. The commercial evaluation version of Oracle Solaris 11 11/11 includes zpool version 33 and worked predictably:
su root

zpool list
(no mention of the cft pool)

zpool import cft
cannot import 'cft': pool may be in use from other system, it was last
accessed by freenas.local (hostid: 0xe49a62c8) on Wed Jul 11 11:44:03 2012
use '-f' to import anyway

zpool import -f cft

ls /cft
helloworld.txt

zpool status
  pool: cft
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are not available.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
   see: http://www.sun.com/msg/ZFS-8000-20
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        cft           ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t1d0s1  ONLINE       0     0     0
            c4t2d0s1  ONLINE       0     0     0

errors: No known data errors
I am including SmartOS separately because it is not configured like any other OS mentioned here, except for FreeNAS itself: like FreeNAS, SmartOS is designed to be booted from a flash device. It can also be booted from CD-ROM, but the disc will remain your boot device even after you have formatted a hard drive. The logic is that you should not sacrifice a full drive to a task that can be achieved with 1GB of flash. Smart. No pun intended.
zpool list
NAME   SIZE   ALLOC  FREE   EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
cft    460G   81K    460G   -         0%   1.00x  ONLINE  -
zones  1.81T  4.01G  1.81T  -         0%   1.00x  ONLINE  -

zpool status cft
  pool: cft
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        cft           ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s1  ONLINE       0     0     0
            c1t2d0s1  ONLINE       0     0     0

errors: No known data errors

ls /cft
helloworld.txt
That was a pleasant surprise! The pool cft was automatically imported on boot and behaves as expected. SmartOS includes zpool version 28.
Now that my ZFS pools have gone where no pools have gone before, let's see how they do coming home to FreeNAS.
I cannot tell which experiment was to blame, but fortunately the problem was easily resolved by exporting and re-importing the pools, as sketched below.
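A minimal sketch of that recovery from the FreeNAS shell, assuming the pool names used throughout; the forced import is only needed if another system's hostid is still recorded on the pool:

zpool export cft       # cleanly detach the pool and clear its active state
zpool import cft       # re-import; add -f if the pool still claims another host
zpool export single
zpool import single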
I encourage you to experiment with ZFS journeys between more systems and will gladly publish your findings. The potential for universal ZFS pool portability just might take the pain out of RAID administration and broken mirror data interrogation once and for all.