Using ZFS as the root filesystem in Slackware
This is a guide to configuring Slackware (15.0 or -current, on any supported architecture) to boot using a ZFS filesystem as the root (/) partition. The steps on this page assume that you already have the OpenZFS software installed and have set up your ZFS filesystem(s), but just need to adjust the system boot process to support using one of those ZFS filesystems as /. If you'd first like more information on what ZFS is, why you might want to use it, how to install OpenZFS, how to create ZFS filesystems, etc., you might wish to start at my ZFS Overview guide. For a brief guide to setting up ZFS on non-root partitions only, you might also want to check out David Chmelik's ZFS Home HOWTO.
Of note, another “Slackware Root on ZFS” HOWTO exists at https://openzfs.github.io/openzfs-docs/Getting%20Started/Slackware/Root%20on%20ZFS.html. That guide provides some good background info, but it's brief, somewhat dated (e.g. does not cover bootloaders other than ELILO), and does not detail all of the changes that are required to a “vanilla” Slackware system (e.g. in /etc/rc.d). This HOWTO aims to fill that gap with additional details, updates to reflect the latest Slackware versions, and scripts that provide maximum flexibility over a wide range of possible configurations, including continuing to boot without issues if they're copied back to a non-ZFS filesystem.
The steps below should also be useful to anyone trying to set up ZFS on root on other Linux distributions or UNIX-like operating systems, although some modifications will be required to the /etc/rc.d/ scripts. I am particularly grateful to the Arch Linux project for (as always) providing high-quality documentation that helped me get started on the original versions of the scripts below.
What is tricky about using ZFS for the root filesystem? Why this guide?
There's really nothing that difficult about it, but there are several interrelated steps. Because the CDDL license prevents ZFS from being shipped as an internal part of the Linux kernel, you'll need to manually install extra packages (such as https://slackbuilds.org/repository/15.0/system/openzfs) to get ZFS support. Because ZFS isn't a core part of Linux, most bootloaders don't know how to recognize and make use of ZFS volumes, so you'll need to either install a special ZFS-aware bootloader such as ZFSBootMenu, or else keep a custom startup image (initrd or initramfs) on a filesystem that your existing bootloader /does/ understand, and then use that to access the true ZFS filesystem. Finally, because most users don't go to the trouble of all of the above, the default Slackware system management scripts aren't written with some of the quirks of ZFS (for example, the need to load encryption keys before mounting an encrypted volume) in mind. The required changes are actually relatively small and are detailed below. I have submitted them to Pat as proposed changes to Slackware's `sysvinit-scripts` package, which would simplify the experience for Slackware's ZFS users, but unless and until that happens you will just have to manually manage the small changes relative to the defaults.
Conventions
In the command examples below, I'll assume your intended root filesystem lives in the pool named testpool and is itself named slackroot, i.e. testpool/slackroot. You should change these to whatever names you chose for your own system. If you're using the GRUB bootloader, the root= parameter in the /boot/grub/grub.cfg snippet will also have to be adjusted. All of the other scripts are written to be name-independent, i.e. they will automatically use either the root= name passed in from your bootloader (during the early startup process), or automatically discover the ZFS volume name that has been mounted at / (after you pass that point in your init scripts).
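To illustrate that name-independence, this is roughly how the early-boot scripts below split a root= value into the pool to import and the dataset to mount (the names shown are the placeholder values used throughout this guide):

```shell
# Illustration only: split a ZFS root= value into pool and dataset,
# the same way the initrd init script below does with dirname.
ROOTDEV=testpool/slackroot        # value passed as root= on the kernel command line
ROOTPOOL=$(dirname "$ROOTDEV")    # strip the last path component; for a top-level
                                  # dataset this leaves just the pool name
echo "pool to import:   $ROOTPOOL"
echo "dataset to mount: $ROOTDEV"
```

Note that for a deeply nested dataset (e.g. testpool/os/slackroot), dirname strips only the last component, so the pool import command still works because zpool import ignores everything after the pool name would need to be the first component; the scripts below use top-level datasets.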
If you choose to use ZFSBootMenu, it will automatically discover and use any ZFS filesystem whose mountpoint property is set to / (or that is flagged via org.zfsbootmenu:active, as described below), but a specific filesystem can be designated using the pool's bootfs property. That can be useful if you plan to keep different ZFS filesystems with mountpoint=/ (perhaps for different OSes) visible to that machine.
All code samples have been tested on Slackware64 versions 15.0 and -current (as of August 2025). Where necessary, minor differences between the 15.0 and -current versions are indicated. The same scripts should work without modification on other hardware architectures, and require only minor adjustments for older or newer versions of Slackware.
Throughout this document,
- The term “ramdisk” should be understood to mean either an initrd or initramfs-formatted ramdisk where appropriate. I personally use initrd, which is reflected in a few spots in the scripts, but the changes for initramfs (if supported by your bootloader) should be straightforward.
- The term “root filesystem” means the ZFS dataset that will be mounted at / when your system boots. The term “boot drive” means the filesystem that holds your kernel and ramdisk. Unless you are using the ZFSBootMenu bootloader, the boot drive cannot be a ZFS partition (i.e. will have to be different from the root filesystem, and in a different format, such as FAT32 or ext2/3/4, that your bootloader natively supports).
- The term “GRUB” refers to GRUB 2. Legacy GRUB can probably be made to work with appropriate changes to the scripts in the GRUB section below.
Filesystem configuration requirements and recommendations
Mountpoint
- For the best compatibility with existing Slackware scripts, the root filesystem should be set to use legacy mountpoints. This will allow use of traditional mount/umount command syntax to manage the volume, and will prevent the ZFS system utilities from trying to automatically mount or unmount it.
sudo zfs set mountpoint=legacy testpool/slackroot
- If you'll be using ZFSBootMenu and want to auto-detect your root filesystem, you can either use the legacy mountpoint setting with org.zfsbootmenu:active:
sudo zfs set mountpoint=legacy org.zfsbootmenu:active=on testpool/slackroot
- or (again, for ZFSBootMenu only) use ZFS' built-in automount support with a manual mountpoint of / and org.zfsbootmenu:active set off:
sudo zfs set mountpoint=/ org.zfsbootmenu:active=off testpool/slackroot
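Once a dataset uses a legacy mountpoint, it is managed with the traditional tools. For a non-root dataset, that means an ordinary /etc/fstab entry works; this example uses a hypothetical testpool/home dataset (the root filesystem itself is mounted by the initrd, not by fstab):

```
# Illustrative /etc/fstab line for a legacy-mountpoint ZFS dataset:
testpool/home   /home   zfs   defaults   0   0
```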
Encryption
Yes, you can boot into an encrypted root filesystem, as long as your ramdisk takes care of loading the key before mounting the encrypted root. Note that storing the key inside the ramdisk would defeat most of the purpose of encryption, unless the ramdisk itself is also protected by some (non-ZFS) encryption method. For most use cases, it's probably better to either store the keyfile on a removable device (such as a USB drive that is securely locked away after the system boots), or to use ZFS' keyformat=passphrase feature (with keylocation=prompt), which will prompt the user to enter a passphrase when the key is loaded at pool import time.
If you want to boot into the root filesystem directly from ZFSBootMenu, you /must/ use a passphrase key (keyformat=passphrase), since ZFSBootMenu can only prompt interactively. However, if you really need or want to use another method, you could set up an intermediate ZFS boot volume (either unencrypted, or passphrase-protected), and then have the ramdisk on that volume load the “real” key and springboard into your final root filesystem.
The modified /etc/rc.d scripts below include steps to load additional keys for any other encrypted ZFS filesystems during the Slackware startup process, and may be helpful even when the root filesystem is not ZFS.
Performance tuning
The most important ZFS performance parameter is getting your pool's ashift property set correctly, but I assume you already did that before starting this guide.
You almost certainly want to disable file access time tracking. If you don't disable it, ZFS will have to write back changed metadata blocks every time a file is opened, which will cause a massive increase in disk write traffic, reduce cache performance, and increase the wear rate on SSDs. The downside is that some system tools might fail to detect files as “newly read” (for example, a mail client might always report that “You've got new mail!” if the spool access time doesn't change when you read your messages), but this is usually at most a minor annoyance. As a compromise, you could enable relatime to update the on-disk metadata only if the previously recorded access time was before the last change to the file, or if it's been at least 24 hours since the last access time update. That will solve things like the “new mail!” notification problem at the cost of only a small increase in disk traffic. On my system, I leave access time tracking completely off to minimize SSD wear and unnecessary snapshot size growth, and have not noticed any adverse effects. For a traditional spinning-rust disk, I might enable both atime and relatime. Setting atime=on and relatime=off (not recommended) would provide the traditional UNIX behavior of filesystem writes every time a file is read.
- To fully avoid disk writes that would change only file (read) access times, use
sudo zfs set atime=off testpool/slackroot
- For the compromise relatime method, use
sudo zfs set atime=on relatime=on testpool/slackroot
- Note that the relatime option is only effective when atime=on, i.e. setting relatime=on has no effect if atime=off (which is why the command above sets both).
Another controversial ZFS performance parameter is l2arc_noprefetch. Note that this will have no effect if you have not dedicated a device to the Level-2 Adaptive Replacement Cache (L2ARC) role. L2ARC is probably unnecessary for most workstation use cases, but can be useful on high-throughput servers, especially when available buffer RAM is small relative to the amount of data served. In my testing, I found that enabling the prefetch (i.e. setting l2arc_noprefetch=0) significantly improved read performance, so I adjusted the /etc/rc.d/rc.S script below to make that the boot-time default. You may wish to comment that out if benchmarking shows a different result on your particular system.
Early boot process
Bootloader choice
For the purposes of this guide, you can use any bootloader that can find and start a Linux kernel and ramdisk image. You can also boot either from UEFI or from legacy BIOS. In fact, if your system is already booting into Linux from non-ZFS partitions, there are probably no changes necessary to your firmware settings or bootloader location to support ZFS. If you're already booting via a ramdisk (initrd or initramfs), then all you need to do is provide a new ramdisk image with ZFS support. Even if you're not already using a ramdisk, most modern bootloaders (including GRUB, rEFInd and Syslinux / PXELINUX) have this support built in, and you'll just need to change a few options in your boot menu entries but shouldn't have to reinstall anything. If you're still using LILO, you'll need to specify the initrd path in /etc/lilo.conf and then re-run the lilo command.
Of course, you also have the option of switching to a fully ZFS-aware bootloader. ZFSBootMenu can automatically discover and mount any zpools and ZFS filesystems on your machine, and will automatically scan for bootable kernel/ramdisk pairs. I personally primarily use GRUB since I'm familiar with it and appreciate the highly programmable menu options, but ZFSBootMenu would let you avoid the need to maintain any non-ZFS partitions other than for UEFI. Note that GRUB happily supports operating systems and root partitions hosted either on ZFS (using an appropriate ramdisk) or other filesystems, while ZFSBootMenu will boot only into ZFS-based systems. Of course, ZFSBootMenu will happily coexist alongside other bootloader options (selectable from your UEFI menu) if you need to also occasionally boot into non-ZFS systems.
Boot drive
If you're using ZFSBootMenu, you can keep your kernels and ramdisks in the /boot directory of any ZFS filesystem (not necessarily the same partition that the ramdisk will mount as your system root!). Otherwise, they'll need to be in a filesystem that your bootloader /does/ understand, such as FAT or ext4. If you're using UEFI boot, you'll need a FAT-formatted EFI partition anyway for the bootloaders, and can store the kernel and ramdisk there too if necessary. If you're still using legacy BIOS, you will at least need a Master Boot Record (MBR), as well as space to store the main GRUB or ZFSBootMenu binaries. On my system, I use a separate boot disk with a FAT partition for my EFI data and bootloader binaries, and an ext4 partition holding the kernels and initrds. If you don't want to install an extra permanent drive, you could store the boot drive on a removable USB device, in a non-ZFS hard disk partition, or even load it over a network using PXE or similar protocols. Some bootloaders, including GRUB, will let you store the bootloader (but usually not kernel/ramdisk) images in unallocated sectors between ordinary disk partitions. Note also that if you add an entire physical disk to a zpool (instead of just specific partitions), you will not be able to use that disk's MBR for the bootloader, as ZFS will keep its own data in sector zero.
Boot menu entry
GRUB
To use the initrd described further down in this guide, any ZFS-aware menu entries in /boot/grub/grub.cfg must meet a few requirements:
- (OPTIONAL but recommended) Provide a search command that finds the correct boot drive. You can find the UUID of a given partition (e.g. /dev/sda2) with sudo blkid /dev/sdXX. Do not include the UUID= prefix in the GRUB search command parameter. If you don't want to search for the boot drive, you can specify it with GRUB's partition syntax in subsequent commands (e.g. linux (hd0,msdos2)/vmlinux-custom-latest), but you run the risk of boot failure if subsequent hardware changes (such as swapping cables, or adding or removing a drive from the system) change the firmware drive assignments.
- Specify an initrd line that names the correct ramdisk image inside that boot drive.
- Pass the ZFS name of the root filesystem as the root= parameter on the kernel command line.
A sample entry would be:
menuentry "Slackware with ZFS support" {
insmod all_video
search --no-floppy --fs-uuid YOUR-DISK-UUID-GOES-HERE
linux /vmlinuz-custom-latest root=testpool/slackroot ro pci=nommconf pcie_aspm=off
initrd /initrd-custom-latest.gz
}
ZFSBootMenu
By default, ZFSBootMenu will scan for candidate kernels and ramdisks in any filesystem meeting the criteria described in the Mountpoint section above, pair them off based on filename, and auto-generate a menu of all candidates found. To turn off scanning for certain filesystems, to concentrate the scan only on a single designated filesystem, or even to set up automatic boot into a particular kernel and ramdisk combination, follow the configuration directions on the ZFSBootMenu website.
ZFSBootMenu's scanning looks only at the /boot subdirectory of each ZFS filesystem, and the ZFS filesystem providing a kernel/ramdisk pair in /boot need not be the same filesystem that will be set as root during the boot process.
Building a ZFS-capable kernel
Kernel configuration
One simple option if you're new to kernel configuration, or expect to run only one kernel at a time, is the OpenZFS SlackBuild, which provides loadable kernel modules and the userspace toolchain. Note that the kernel modules will need to be rebuilt and reinstalled every time you move to a new kernel. The SlackBuild script will include the kernel version as part of the package version string, so that tools like sbopkg can automatically detect the need to rebuild when your kernel version increases (even if the version of openzfs itself remains unchanged). Since the package manages the files under /lib/modules but not /boot/initrd-tree/lib/modules, Slackware's upgradepkg will not remove older copies of zfs.ko and spl.ko from the initrd tree; this ensures that future initrds will still be able to boot older kernels with ZFS support.

For example, if you build the openzfs-2.3.3 modules for kernel version 6.15.2, and then build an initrd that bundles zfs.ko and spl.ko for kernel version 6.15.2, you'll have copies of the ZFS modules under both /lib/modules/6.15.2 and /boot/initrd-tree/lib/modules/6.15.2. Upgrading the openzfs package for kernel version 6.15.3 (even if openzfs stays at version 2.3.3) will delete the copies under /lib/modules/6.15.2 and install new ones under /lib/modules/6.15.3, but will not delete the copies under /boot/initrd-tree/lib/modules/6.15.2. If you then upgrade openzfs again for kernel version 6.15.4 before ever building an initrd for kernel 6.15.3, the new package will delete the copies under /lib/modules/6.15.3 and install new ones under /lib/modules/6.15.4. Since you never built a 6.15.3 initrd, you won't have a /boot/initrd-tree/lib/modules/6.15.3, and therefore will have lost all copies of your 6.15.3 ZFS modules unless you made a manual backup (or can find them in an older ZFS snapshot!).
If you then build an initrd for 6.15.4, the new initrd will contain the necessary ZFS modules to be paired with either kernel 6.15.2 or 6.15.4, but cannot be used with a 6.15.3 kernel.
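The bookkeeping described above can be checked at a glance by comparing the two module trees. This sketch demonstrates the idea on throwaway directories standing in for /lib/modules and /boot/initrd-tree/lib/modules (the version numbers are the hypothetical ones from the example above):

```shell
# Demo only: substitute /lib/modules and /boot/initrd-tree/lib/modules for real use.
SYS=$(mktemp -d)      # stands in for /lib/modules
INITRD=$(mktemp -d)   # stands in for /boot/initrd-tree/lib/modules
mkdir -p "$SYS/6.15.2" "$SYS/6.15.4" "$INITRD/6.15.2"   # 6.15.4 has no initrd copy yet
for d in "$SYS"/*/; do
  v=$(basename "$d")
  if [ -d "$INITRD/$v" ]; then
    echo "$v: ZFS modules present in both system and initrd tree"
  else
    echo "$v: system tree only -- build an initrd before the next openzfs upgrade"
  fi
done
```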
If you prefer to manually configure your kernel, see the https://zfsonlinux.org documentation or my separate ZFS Overview. Note that all of the scripts and procedures in this HOWTO work equally well regardless of whether your kernel ZFS support is built-in or in the form of loadable modules.
Preparing the initrd
The directions in OpenZFS' "Slackware Root on ZFS" HOWTO provide all the necessary hints, but are based on older versions of Slackware's /boot/initrd-tree/init script. I recommend using the steps and scripts below.
Working with Slackware's mkinitrd package
The mkinitrd package places a compressed copy of Slackware's standard initrd directory tree in /usr/share/mkinitrd/initrd-tree.tar.gz. The first time you run mkinitrd, that file will be unzipped under /boot/initrd-tree. Any changes you make under /boot/initrd-tree will be reflected in subsequent mkinitrd builds. Note that when the mkinitrd package is upgraded, the new compressed directory tree is not automatically unzipped into /boot/initrd-tree. This means that your local changes will be preserved, but it also means that new features or bugfixes in the initrd-tree.tar.gz bundle will not automatically be applied to your system; if the ChangeLog indicates any major benefits, you will need to manually merge them into your local version.
Slackware's initrd-tree uses BusyBox to provide the most commonly used UNIX utilities, and uses /boot/initrd-tree/init (which will be automatically run by the kernel after its own startup process is complete) as the master script for setting up and loading your final Slackware root filesystem.
Files to place under /boot/initrd-tree
/etc/HOSTNAME
The zpool layer uses your system hostname to ensure that two different machines (or virtual machines) are not simultaneously trying to access the same zpool. This can cause the zpool command to fail in environments where no hostname is set, which would include Slackware's standard initrd environment. Although several workarounds are possible, including command-line switches to zpool or using the zgenhostid utility shipped with OpenZFS, I opted to just modify /boot/initrd-tree/init to manually load the hostname from /boot/initrd-tree/etc/HOSTNAME (the same method used later by the “real” Slackware startup scripts). According to the documentation, ZFS tools expect a fully-qualified domain name, but in my testing an unqualified name also appeared to work. Set your desired name with
echo my-system-name.some-domain.org > /boot/initrd-tree/etc/HOSTNAME
and ensure you apply the hostname section of the /boot/initrd-tree/init patches shown below. You can of course adjust /boot/initrd-tree/init to instead set a hostname from a different source, e.g. a reverse lookup on an assigned network interface address, if more appropriate to your network, but you'll want to make sure it's set to /something/ before the first zpool command is invoked.
Note that for the best protection against user error, you should not let multiple ZFS-enabled hosts (either physical servers or virtual machines) ever try to access the same pool using the same hostname. That means that if you're using my /boot/initrd-tree/etc/HOSTNAME method, different physical servers or virtual machines should not try to share the same boot-up ramdisk images (instead, set a unique HOSTNAME and build a custom ramdisk for each instance, or move to a more sophisticated name assignment method).
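If you do build per-instance ramdisks, stamping a unique name into each tree is easy to script. A minimal sketch, in which the vm1/vm2/vm3 names, the domain, and the directory layout are illustrative assumptions (for real use, each tree would be a per-machine copy of /boot/initrd-tree):

```shell
# Demo only: writes per-instance HOSTNAME files into throwaway initrd trees.
BASE=$(mktemp -d)
for host in vm1 vm2 vm3; do
  mkdir -p "$BASE/initrd-$host/etc"
  echo "$host.some-domain.org" > "$BASE/initrd-$host/etc/HOSTNAME"
done
cat "$BASE/initrd-vm2/etc/HOSTNAME"
```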
Kernel modules
Your bootloader will let you choose and load the main kernel image separately from the ramdisk file, but any modules that the kernel tries to load in the early stages of the boot process (before handoff to the “real” root filesystem) will need to be part of the ramdisk image. That obviously includes the ZFS drivers themselves (unless built directly into the kernel image, which is not the default), as well as anything the kernel will need to access devices that are part of the zpool, network drivers if you're using a remote zpool or a network scheme to set the hostname, and any display and keyboard drivers that you might need in order to see and respond to a passphrase prompt for an encrypted ZFS root. It can also include more subtle dependencies, such as modules the kernel might need for any compression or encryption algorithms on the selected ZFS root filesystem.
You can set the MODULE_LIST= parameter in /etc/mkinitrd.conf to have mkinitrd automatically place the necessary modules into your ramdisk image every time it is rebuilt. If for some reason you don't want to do that, then you will be responsible for copying the appropriate .ko files into /boot/initrd-tree/lib/modules.
I prefer to keep the ramdisk fairly minimal, so my module list is just
MODULE_LIST="zfs:spl:r8169:xhci:xhci-pci:xhci-hcd:hid:usbhid:i2c-hid:hid_generic"
zfs and spl are the ZFS drivers, r8169 is for my Ethernet card (useful for remote logging of kernel messages via the netconsole driver in case there's ever a boot problem), and the hci/hid modules allow use of my keyboard to enter the encryption passphrase. I defer all other device drivers until the initrd-to-ZFS handoff is complete. The required list for your system will almost certainly be different, and depends on your hardware as well as the choices you made about which drivers to include in the main kernel image (=y) instead of as loadable modules (=m). As a more conservative alternative, you could build a list of all of the modules your kernel currently has loaded with
echo MODULE_LIST=\"`lsmod | tail -n +2 | cut -d ' ' -f1 | tr \\\\n :`\"
and use that in your mkinitrd.conf. (Note the backquotes `…`, not ordinary apostrophes, inside the escaped double quotes \".)
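To see what that pipeline actually does, here it is applied to a canned three-module sample of lsmod output (the module names are placeholders for illustration, not a recommended list):

```shell
# Demo of the lsmod -> MODULE_LIST transformation on sample data.
sample='Module                  Size  Used by
zfs                  4558848  6
spl                   114688  1 zfs
r8169                 108544  0'
MODULE_LIST=$(echo "$sample" | tail -n +2 | cut -d ' ' -f1 | tr '\n' :)
echo "MODULE_LIST=\"$MODULE_LIST\""
```

(The trailing colon left by tr is harmless to mkinitrd's colon-separated list parsing.)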
Firmware files
Any firmware files needed by device drivers that load in the early phase of your kernel startup (before the ramdisk remounts your final root device as /) will need to be included in /boot/initrd-tree/lib/firmware. By default, mkinitrd will manage this process for you, but you may wish to manually inspect it if you're having boot problems, especially if any of your ZFS-related hardware (e.g. drive or network controllers) relies on loadable firmware blobs. You can use sudo dmesg | grep firmware to look for evidence of firmware files in use by your kernel drivers. Any “load failed with error -2” messages usually indicate that the necessary files were not copied from /lib/firmware to /boot/initrd-tree/lib/firmware; you can manually correct that and rebuild the ramdisk.
ZFS tooling
Naturally, you will need some of the ZFS userspace utilities accessible from inside the ramdisk. Neither the upstream openzfs Makefile, the openzfs.SlackBuild, nor the zfs packages from most other distributions will copy these into /boot/initrd-tree for you, so you'll need to do it manually:
sudo cp -ai /sbin/{fsck.zfs,mount.zfs,zfs,zpool} /boot/initrd-tree/sbin
I believe those four files are all that's required to be inside the initrd for most configurations, but if you want to also include the rest of the userspace utilities, it certainly can't hurt:
sudo cp -ai /sbin/{zdb,zed,zfs_ids_to_path,zgenhostid,zhack,zinject,zstream,ztest} /boot/initrd-tree/sbin/
sudo mkdir -p /boot/initrd-tree/usr/bin && sudo cp -ai /usr/bin/{arc_summary,arcstat,dbufstat,raidz_test,zvol_wait} /boot/initrd-tree/usr/bin/
Libraries
Of course, if your ZFS utilities are dynamically linked (the default), all of their shared library dependencies will need to be included too. You will definitely need to manually copy the libnvpair.so, libuutil.so, libzfs.so, libzfs_core.so, libzfsbootenv.so, and libzpool.so that are shipped with OpenZFS:
sudo cp -a /lib64/lib{nvpair,uutil,zfs,zpool}* /boot/initrd-tree/lib64/
(replace lib64 with lib for 32-bit systems). You might have other dependencies that also need to be manually copied; run
sudo mkdir -p /boot/initrd-tree/usr/bin && sudo cp -ai /usr/bin/ldd /boot/initrd-tree/usr/bin/
sudo chroot /boot/initrd-tree
ldd /sbin/{zfs,zpool}
exit
to check from the “inside the initrd” point of view. If chroot or ldd report any errors, those must be resolved before your ramdisk can be used to boot into ZFS filesystems.
On my system, this command was sufficient to copy all of the non-ZFS library dependencies for zfs and zpool:
sudo cp -ai /usr/lib64/lib{c.so.6,c-2,m.so.6,m-2,zstd.so.1,lzma.so.5,z.so.1,crypto.so.3,tirpc.so.3,unwind.so.8,udev.so.1,blkid.so.1,uuid.so.1,gssapi_krb5.so.2,krb5.so.3,k5crypto.so.3,com_err.so.2,krb5support.so.0,keyutils.so.1,resolv.so.2}* /usr/lib64/libgcc_s.so.1* /lib64/ld-linux* /boot/initrd-tree/lib64/
Check your local versions with ldd /sbin/zpool and ldd /sbin/zfs.
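Rather than maintaining that list by hand, you can let ldd drive the copy. This is only a sketch, not this guide's official method: the TREE and BIN defaults are deliberately safe demo values (a local directory and /bin/sh); for real use, set TREE=/boot/initrd-tree and run it (as root) over /sbin/zfs and /sbin/zpool.

```shell
# Sketch: copy every shared-library dependency of $BIN into $TREE,
# preserving the original directory layout.
TREE=${TREE:-./initrd-tree-demo}   # demo default; use /boot/initrd-tree for real
BIN=${BIN:-/bin/sh}                # demo default; use /sbin/zfs, /sbin/zpool for real
ldd "$BIN" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' |
while read -r lib; do
  mkdir -p "$TREE$(dirname "$lib")"
  cp -a "$lib" "$TREE$(dirname "$lib")/"   # -a preserves symlinks and permissions
done
```

Afterwards, re-run the chroot/ldd check described above to confirm nothing was missed.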
Note that ZFS' userspace tooling does /not/ necessarily have to be kept in sync with the version of openzfs used in your kernel drivers, although it's generally a good idea. This means that you don't necessarily need to refresh the ZFS binaries and shared libraries inside your initrd every time you upgrade your kernel drivers to a new openzfs version, and that an initrd built using one version of openzfs should suffice to boot kernels built against a range of openzfs versions (as long as the loadable modules for each such kernel are present within the initrd image). However, if you don't frequently update the tool versions inside /boot/initrd-tree, you will also need to guard against Slackware shipping new system library versions and removing older dynamic libraries on which the ZFS tools depend; I was once bitten by exactly that when the version number on libunwind.so increased and I upgraded the libraries and busybox inside the initrd tree, but failed to anticipate that zfs and zpool would also be affected. That's one of several reasons that I recommend keeping several different choices of kernel and ramdisk, from different “vintages”, on your boot partition so that you can easily revert to old, known-working combinations when necessary.
Event triggers
The ZFS Event Daemon (zed) is normally not required within the ramdisk; it can be started later in the system startup process if desired. However, if you do want to have zed support in the early boot process, you'll also need to copy additional OpenZFS files into the ramdisk:
sudo mkdir -p /boot/initrd-tree/usr/libexec
sudo cp -a /etc/zfs /boot/initrd-tree/etc/
sudo cp -a /usr/libexec/zfs /boot/initrd-tree/usr/libexec/
(OPTIONAL) Additional software
If there are any problems mounting your designated root partition, Slackware's /boot/initrd-tree/init will dump you out to a command prompt. You may wish to have additional disk and network utilities (and their shared library dependencies) pre-installed into the ramdisk to assist with troubleshooting and recovery. Some of the tools I like to install include cfdisk, kmod, mdadm, sfdisk, udevadm, and udevd. Again, use ldd to identify their shared libraries, and verify you can ldd and run these files from within a chroot /boot/initrd-tree.
Using initramfs
I personally use initrd instead of initramfs, but the OpenZFS software package (including versions built by the openzfs.SlackBuild from https://slackbuilds.org) installs initramfs setup scripts under /usr/share/initramfs-tools. If you experiment with ZFS-capable initramfs on Slackware, please contribute your lessons learned to this HOWTO.
Adapting Slackware's default startup and shutdown scripts
Slackware's startup scripts expect to use /bin/mount to access whatever filesystem was passed as root= on the kernel command line, as well as anything designated to be auto-mounted in your /etc/fstab. That requires adjustment for ZFS, since the associated pool has to be imported (manually or automatically) before a filesystem can be mounted, encryption keys may have to be manually loaded before importing the pool, and /bin/mount can only be used on ZFS filesystems with mountpoint=legacy. Additionally, Slackware's default scripts do not know about ZFS and thus do not make any attempt to export a zpool when shutting down the system.
All of the patches below are unified diffs that can be applied with patch -p1.
/boot/initrd-tree/init
For the initrd, we need to ensure a hostname is set in order to use the ZFS utilities, we need to modify the script to allow root “devices” that aren't actually device filenames, and we need to check /proc/mounts to see whether the root partition appears to be ZFS, and if so use the ZFS utilities to manage the filesystem.
This patch should apply cleanly against both slackware64-15.0 and slackware64-current:
- initrd-init-zfs.patch
--- slack-current/init	2025-05-10 14:46:30.000000000 -0400
+++ zfs-current/init	2025-08-15 19:04:52.547321653 -0400
@@ -86,6 +86,9 @@
 
 PATH="/sbin:/bin:/usr/sbin:/usr/bin"
 
+# BCS: Set hostname for the benefit of the ZFS tools
+/bin/hostname -F /etc/HOSTNAME || /bin/hostname darkstar
+
 # Mount /proc and /sys:
 mount -n proc /proc -t proc
 mount -n sysfs /sys -t sysfs
@@ -132,6 +135,10 @@
   root=UUID=*)
     ROOTDEV=$(echo $ARG | cut -f2- -d=)
   ;;
+  root=*)
+    # BCS: handle root devices that aren't in /dev, e.g. ZFS or network pathnames
+    ROOTDEV=$(echo $ARG | cut -f2- -d=)
+  ;;
   rootfs=*|rootfstype=*)
     ROOTFS=$(echo $ARG | cut -f2 -d=)
   ;;
@@ -334,7 +341,26 @@
 # Switch to real root partition:
 /sbin/udevadm settle --timeout=10
 echo 0x0100 > /proc/sys/kernel/real-root-dev
-mount -o ro${ROOTFLAGS:+,$ROOTFLAGS} -t $ROOTFS $ROOTDEV /mnt
+
+if [ "$ROOTFS" = "zfs" ]; then
+  # BCS: for ZFS root, first import the storage pool and any necessary encryption keys
+  ROOTPOOL=`dirname $ROOTDEV`
+  echo "Importing ZFS pool $ROOTPOOL"
+  /sbin/zpool import $ROOTPOOL
+  /sbin/zfs load-key $ROOTDEV
+  /sbin/zfs set readonly=on $ROOTDEV
+
+  if [ "$(/sbin/zfs get -H -o value mountpoint $ROOTDEV)" = "legacy" ]; then
+    mount -o ro${ROOTFLAGS:+,$ROOTFLAGS} -t $ROOTFS $ROOTDEV /mnt
+  else
+    echo "WARNING: ZFS root device $ROOTDEV is not set to use legacy mountpoints; initrd scripts may not function as expected"
+    echo -n "Mountpoint will be: "
+    /sbin/zfs get -H -o value mountpoint $ROOTDEV
+    /sbin/zfs mount $ROOTDEV
+  fi
+else
+  mount -o ro${ROOTFLAGS:+,$ROOTFLAGS} -t $ROOTFS $ROOTDEV /mnt
+fi
 
 if [ ! -r /mnt/sbin/init ]; then
   echo "ERROR: No /sbin/init found on rootdev (or not mounted). Trouble ahead."
/etc/rc.d/rc.S
For rc.S, we never need to fsck ZFS as it always has a consistent on-disk image, and we need to use zfs set to toggle read-write mode, instead of mount -o remount. Additionally, I like to ensure L2ARC prefetch is enabled (i.e. noprefetch is off), but that's not strictly required and might not ensure the best performance in your use case:
Patch against slackware64-current:
- rc.S-zfs-current.patch
diff --color -Naur slack-current/rc.S zfs-current/rc.S
--- slack-current/rc.S	2025-01-26 15:09:51.000000000 -0500
+++ zfs-current/rc.S	2025-08-04 20:42:35.422359127 -0400
@@ -238,6 +238,18 @@
   elif grep -q ' / bcachefs ' /proc/mounts ; then
     echo "Remounting bcachefs root device(s) with read-write enabled."
     /sbin/mount -o remount,rw,fsck,fix_errors /
+  # BCS: Same for ZFS (no fsck required, ZFS should always have a consistent on-disk image):
+  elif grep -q ' / zfs ro' /proc/mounts ; then
+    echo "Remounting ZFS root device with read-write enabled."
+    # Improves startup performance on my system; YMMV
+    echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch
+    if [ -e /sbin/zfs ] ; then
+      # BCS: if present, use ZFS tools to manage ZFS mounts
+      /sbin/zfs set readonly=off `grep ' / zfs ' /proc/mounts | cut -d ' ' -f1`
+    else
+      # If ZFS tools weren't present, this probably won't work, but give it a shot:
+      /sbin/mount -o remount,rw /
+    fi
   elif [ ! $READWRITE = yes ]; then
     # Check the root filesystem:
     RETVAL=0
Patch against slackware64-15.0:
- rc.S-zfs-15.0.patch
diff --color -Naur slack-15.0/rc.S zfs-15.0/rc.S
--- slack-15.0/rc.S	2022-01-27 17:32:52.000000000 -0500
+++ zfs-15.0/rc.S	2025-08-15 19:24:50.326407790 -0400
@@ -251,6 +251,18 @@
   if grep -q ' / f2fs ' /proc/mounts ; then
     echo "Remounting root device with read-write enabled."
     /sbin/mount -w -v -n -o remount /
+  # BCS: Same for ZFS (no fsck required, ZFS should always have a consistent on-disk image):
+  elif grep -q ' / zfs ro' /proc/mounts ; then
+    echo "Remounting ZFS root device with read-write enabled."
+    # Improves startup performance on my system; YMMV
+    echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch
+    if [ -e /sbin/zfs ] ; then
+      # BCS: if present, use ZFS tools to manage ZFS mounts
+      /sbin/zfs set readonly=off `grep ' / zfs ' /proc/mounts | cut -d' ' -f1`
+    else
+      # If ZFS tools weren't present, this probably won't work, but give it a shot:
+      /sbin/mount -o remount,rw /
+    fi
   elif [ ! $READWRITE = yes ]; then
     # Check the root filesystem:
     RETVAL=0
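The grep/cut pipeline that rc.S uses to recover the dataset name backing / can be tried standalone; this sketch feeds it simulated /proc/mounts content (the dataset and device names are illustrative):

```shell
# Simulated /proc/mounts content; "tank/slackroot" is a made-up dataset name.
mounts='tank/slackroot / zfs rw,relatime,xattr,posixacl 0 0
/dev/sda1 /boot ext4 rw 0 0'

# The same pipeline rc.S uses: match the line mounted at / with type zfs,
# then take the first space-separated field (the dataset name):
rootds=$(echo "$mounts" | grep ' / zfs ' | cut -d' ' -f1)
echo "$rootds"   # tank/slackroot
```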
/etc/rc.d/rc.6
For rc.6, we again prefer to use the ZFS toolchain to toggle read-only mode instead of mount -o remount,ro, and we try to export all non-busy pools during a system shutdown. This operation is expected to fail on the pool containing the root filesystem, since it's still mounted.
Patch against slackware64-current:
- rc.6-zfs-current.patch
diff --color -Naur slack-current/rc.6 zfs-current/rc.6
--- slack-current/rc.6	2025-08-03 18:49:16.000000000 -0400
+++ zfs-current/rc.6	2025-08-04 20:56:57.827272519 -0400
@@ -320,7 +320,12 @@
   # (and found to be clean) at boot:
   /bin/sync
   echo "Remounting root filesystem read-only:"
-  /bin/mount -v -o remount,ro /
+  if [ -e /sbin/zfs ] && grep -q ' / zfs ' /proc/mounts ; then
+    # Since / appears to be a ZFS dataset, use the ZFS toolchain to mark it as read-only
+    /sbin/zfs set readonly=on `grep ' / zfs ' /proc/mounts | cut -d' ' -f1`
+  else
+    /bin/mount -v -n -o remount,ro /
+  fi
 fi
 
 # This never hurts:
@@ -381,6 +393,17 @@
   sleep 15
 fi
 
+# Attempt to export all active ZFS pools.
+# Note that this is expected to fail for the pool still hosting the root filesystem.
+if [ -e /sbin/zpool ]; then
+  for pool in `awk '$3=="zfs" { split($1,piece,"/"); print piece[1] }' < /proc/mounts | sort | uniq`; do
+    /sbin/zpool export $pool
+    if [ $? -gt 0 ]; then
+      echo "ZFS pool export failed on $pool, possibly due to busy root filesystem"
+    fi
+  done
+fi
+
 # Now halt (poweroff with APM or ACPI enabled kernels) or reboot.
 if [ "$shutdown_command" = "reboot" ]; then
   echo "Rebooting."
Patch against slackware64-15.0:
- rc.6-zfs-15.0.patch
diff --color -Naur slack-15.0/rc.6 zfs-15.0/rc.6
--- slack-15.0/rc.6	2021-11-10 16:50:54.000000000 -0500
+++ zfs-15.0/rc.6	2025-08-15 19:23:45.018403093 -0400
@@ -255,7 +255,12 @@
   # (and found to be clean) at boot:
   /bin/sync
   echo "Remounting root filesystem read-only:"
-  /bin/mount -v -n -o remount,ro /
+  if [ -e /sbin/zfs ] && grep -q ' / zfs ' /proc/mounts ; then
+    # Since / appears to be a ZFS dataset, use the ZFS toolchain to mark it as read-only
+    /sbin/zfs set readonly=on `grep ' / zfs ' /proc/mounts | cut -d' ' -f1`
+  else
+    /bin/mount -v -n -o remount,ro /
+  fi
 fi
 
 # This never hurts:
@@ -318,6 +323,17 @@
   fi
 fi
 
+# Attempt to export all active ZFS pools.
+# Note that this is expected to fail for the pool still hosting the root filesystem.
+if [ -e /sbin/zpool ]; then
+  for pool in `awk '$3=="zfs" { split($1,piece,"/"); print piece[1] }' < /proc/mounts | sort | uniq`; do
+    /sbin/zpool export $pool
+    if [ $? -gt 0 ]; then
+      echo "ZFS pool export failed on $pool, possibly due to busy root filesystem"
+    fi
+  done
+fi
+
 if [ "$container" = "lxc" ]; then
   # Confirm successful shutdown of the container:
   echo "LXC container stopped."
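The awk pipeline that collects pool names for export can also be exercised on its own; this sketch runs it over simulated /proc/mounts data (all pool and dataset names are made up):

```shell
# Simulated /proc/mounts with datasets from two pools plus a non-ZFS mount.
mounts='tank/slackroot / zfs rw 0 0
tank/home /home zfs rw 0 0
backup/archive /mnt/archive zfs rw 0 0
/dev/sda1 /boot ext4 rw 0 0'

# The same pipeline rc.6 uses: keep zfs entries, split the dataset name
# on "/" to get the pool, then deduplicate:
echo "$mounts" | awk '$3=="zfs" { split($1,piece,"/"); print piece[1] }' | sort | uniq
# backup
# tank
```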
Handling system upgrades
sysvinit-scripts
Upgrades to the sysvinit-scripts package, if applied blindly, would overwrite the /etc/rc.d patches above. By default, Slackware's package tools will give the updated files a .new suffix so the administrator can manually merge in local changes (such as my ZFS patches).
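A hedged sketch of that merge workflow, using a throwaway directory instead of the real /etc/rc.d (the diff commands in the comments are one reasonable approach, not the only one):

```shell
# Throwaway directory standing in for /etc/rc.d after an upgrade:
demo=$(mktemp -d)
touch "$demo/rc.S" "$demo/rc.6"
touch "$demo/rc.S.new"    # the upgraded script awaiting a merge

# Find .new files that still need attention:
pending=$(find "$demo" -name '*.new' -printf '%f\n')
echo "$pending"   # rc.S.new

# Against the real files, a typical workflow would be roughly:
#   diff -u /etc/rc.d/rc.S /etc/rc.d/rc.S.new   # review upstream changes
#   # ...carry the upstream changes into the patched rc.S, then remove rc.S.new

rm -r "$demo"
```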
mkinitrd
Similarly, updates to mkinitrd will replace the compressed initrd tree template at /usr/share/mkinitrd/initrd-tree.tar.gz; if that archive is extracted (as happens when rebuilding the initrd tree from scratch), it will overwrite the local changes to /boot/initrd-tree/init. Manually maintaining the files and merging occasional changes is not difficult, but if you're not comfortable with it you should choose a different root filesystem.
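One way to guard against that overwrite is to keep a copy of the patched init and re-merge after unpacking the new template. This sketch simulates the sequence in a scratch directory (the real paths would be /boot/initrd-tree and /usr/share/mkinitrd/initrd-tree.tar.gz):

```shell
# Scratch directory standing in for /boot/initrd-tree:
tree=$(mktemp -d)
echo 'patched init with ZFS support' > "$tree/init"

# 1. Save the locally patched init before unpacking the new template:
cp "$tree/init" "$tree/init.zfs"

# 2. Stand-in for extracting initrd-tree.tar.gz over the tree, which
#    would replace init with the stock version:
echo 'stock init' > "$tree/init"

# 3. Re-merge; a plain copy suffices here, though after a real upgrade
#    you'd diff the two versions and re-apply the ZFS changes:
cp "$tree/init.zfs" "$tree/init"
cat "$tree/init"   # patched init with ZFS support

rm -r "$tree"
```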
kernel
See the corresponding section in my ZFS Overview.
Potential issues and troubleshooting
If you're having trouble with the boot process, double-check the following:
- The correct root= (ZFS dataset name) is passed on the kernel command line (automatic with ZFSBootMenu; must be set manually in GRUB menu entries)
- If using GRUB, the search line or manual partition specification (e.g. (hd0,gpt2)) points to the filesystem that holds the desired kernel and ramdisk files
- If using ZFSBootMenu, the filesystem's mountpoint and org.zfsbootmenu:active properties are set correctly
- /sbin/zpool and /sbin/zfs both run without errors when chroot-ed into your /boot/initrd-tree
- The kernel you're trying to boot really was built with ZFS support
- If using loadable modules, they are present under /boot/initrd-tree/lib/modules for the desired kernel version
- If trying to mount an encrypted filesystem, any cryptographic algorithm modules used by your kernel are present under /boot/initrd-tree/lib/modules/[KERNEL VERSION]; double-check with lsmod from a known good boot
- Firmware files for any device your early boot depends upon are present in /boot/initrd-tree/lib/firmware; double-check against dmesg | grep firmware from a known good boot
- The BIOS/UEFI is loading the bootloader that you think it is
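A few of these checks can be scripted. This sketch simulates the directory layout in a scratch tree (the kernel version 6.6.50 is illustrative) and shows the kind of test you'd run against the real /boot/initrd-tree:

```shell
# Scratch tree standing in for /boot/initrd-tree; KVER is illustrative.
tree=$(mktemp -d)
KVER=6.6.50
mkdir -p "$tree/lib/modules/$KVER" "$tree/lib/firmware"

# The checks you'd run against the real tree before rebooting:
for d in "lib/modules/$KVER" "lib/firmware"; do
  if [ -d "$tree/$d" ]; then
    echo "OK: $d"
  else
    echo "MISSING: $d"
  fi
done
# prints:
#   OK: lib/modules/6.6.50
#   OK: lib/firmware

rm -r "$tree"
```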
Sources
Originally written by spillner, 2025. References: