ZFS Overview
This HOWTO provides background information on the Zettabyte File System (ZFS) and links and instructions to software and documentation that are useful for running ZFS under Linux, particularly Slackware. It does not attempt to explain how to manage your ZFS datasets (there's already plenty of great documentation on that all over the web), just to help you figure out how to get your Linux system into a state where you can execute those management commands.
This page was originally written as a complement to my Root on ZFS HOWTO, which lays out the steps for booting Slackware on a ZFS filesystem. You may also be interested in David Chmelik's ZFS Home HOWTO. General ZFS administration guides applicable to all operating systems are available at
If you're already familiar with ZFS, you can safely skip over the “Background” section below.
Background
What is ZFS?
ZFS started in the early 2000s as an internal project at Sun Microsystems, aiming to create a practical filesystem that would never require an fsck or repair after a system crash. The lead engineers behind the project (Jeff Bonwick, Bill Moore, and Matt Ahrens)1) also aimed to make the system as scalable as possible, up to the size of the largest conceivable datacenter cluster filesystems; to offer full flexibility to use any kind of local or networked storage, with minimal configuration required when adding new devices; and (perhaps most importantly) to be as resistant as possible to hardware failures, with multiple data copies distributed across independent devices so that individual device failures do not even interrupt the overall filesystem. ZFS was released to the public in 2005 as part of the Solaris operating system, and released as open source a few months later, with ports to FreeBSD, Linux, and MacOS quickly following. The original intellectual property rights to ZFS transferred to Oracle when it acquired Sun in 2010; Oracle shifted further internal development to a closed-source model, but continued to respect the open source rights granted under Sun's CDDL for earlier versions of the code. The Linux and FreeBSD ports are now managed by the OpenZFS project. Its internal code has diverged significantly from the original Sun/Oracle implementations, but the command-line interface remains largely unchanged, and Sun or Oracle documentation and user guides remain at least 95% applicable to OpenZFS administration.
Due to its flexibility, reliability, and scalability, ZFS is widely used in data centers, cluster computing, and other “enterprise” applications. It's also the core of several commercial products, including https://proxmox.com backup systems and https://qnap.com network-attached storage (NAS) devices. However, it's admittedly a niche choice for individual users, despite its many advantages. This means that while there are some Linux distributions, such as Ubuntu, that now offer first-class support for installing to and using ZFS filesystems, on many other distributions, including Slackware, you'll have to install additional packages and do a little extra work yourself; that's the motivation for this HOWTO.
Concepts
ZFS is implemented as a three-layer stack:
- Virtual devices (vdevs) abstract the details of physical devices (traditional hard disks, solid-state drives, RAID controllers, network shares, etc.) in order to provide a consistent API to the next higher layer (zpool).
- Zpools are composed of an arbitrary number of vdevs, with each vdev assigned a particular role: main storage, cache, an “intent log” that minimizes the window of time (milliseconds) during which transient data might be lost if a crash occurs during a write operation, etc.
- Vdevs can also internally implement features such as RAID-like mirroring or striping, encryption, etc.
- Storage pools (zpools) provide the most generic possible abstraction of a storage system:
- A write() operation that accepts a chunk of data and returns a special token that can be used to retrieve it later
- A read() operation that takes the storage token and returns the original chunk of data.
- Zpools can be enlarged at any time by simply adding additional vdevs, without any need to copy data or reformat disks. However, once added, vdevs can generally not be removed until the pool is “destroyed” (deleted).
- The zpool layer implements optional data compression (which is almost always a performance win) and deduplication (which usually is not, due to requiring tremendous amounts of RAM to operate efficiently).
- Zpools also offer RAID-like features, such as mirroring and/or striping data across multiple vdevs for reliability and performance. Zpools can also store multiple copies of data at different locations within a single vdev, offering transparent recovery in the event some disk sectors go bad.
- Zpool scrub operations can run in the background, comparing on-disk data blocks to their checksums, and automatically marking disk (vdev) sectors as suspect and creating alternate copies when the checksums do not match. This provides early detection and avoidance of hardware failure, instead of waiting for an I/O error as in traditional filesystems.
- The zfs layer offers a full UNIX-style filesystem on top of the zpool layer.
- It can leverage the copy-on-write feature (described below) to provide instant, zero-cost snapshots of the filesystem at any point in time, and even mount these snapshots alongside the “live” filesystem for file comparison and recovery, or as the starting point for a new filesystem (diverging from the original at the moment of the snapshot).
- ZFS datasets (filesystems and snapshots) can be sent and received between zpools, either within the local system or across a network. This is particularly useful for maintaining online backups at physically distributed locations, or for cloning systems (e.g. individual workstations built from the same master image) across a network.
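As a brief, hedged illustration of how the layers fit together (these commands must be run as root; “tank” and the device names are placeholders chosen for this sketch, not anything prescribed by this HOWTO):

```shell
# vdev + zpool layer: build a pool from a two-disk mirror vdev
zpool create tank mirror /dev/sdb /dev/sdc

# pool-wide features: enable compression for everything in the pool
zfs set compression=lz4 tank

# zfs layer: create independent datasets, each with its own tunables
zfs create tank/home
zfs create -o recordsize=16K tank/db   # e.g. smaller records for a database

# grow the pool later by adding another vdev (remember: hard to remove!)
zpool add tank mirror /dev/sdd /dev/sde

# inspect the result
zpool status tank
zfs list -r tank
```

Datasets inherit properties (such as compression) from their parent unless explicitly overridden, which is why setting a property on the pool's root dataset affects everything beneath it.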
Special features
Copy-on-write
Many traditional filesystems are structured as a mapping from filenames onto specific disk sectors. When you save changes to a file, they are written back to the same disk sectors that held the previous version (or to new sectors added to the mapping when the file size increases). The filename-to-sector mapping is itself stored as a lookup table spread across other specially reserved disk sectors. This is simple and has low overhead, although it places a burden on the filesystem to choose efficient mappings (usually using contiguous blocks of sectors wherever possible) to ensure good read and write performance. More importantly, if problems such as a power interruption, hardware error, or software crash occur during a write operation, the file might be left in an inconsistent state, with some disk sectors reflecting the newest update and others still holding data from the previous version that should have been overwritten. Even worse, if the fault occurred while the filesystem driver was writing one of the “superblocks” holding part of that filename-to-sector mapping, an exhaustive examination of the disk contents might be required to determine which sectors still hold valid file data, and 100% recovery is not guaranteed.
More advanced filesystems often mitigate these issues by maintaining a “journal” of recent changes. In the event of a system crash, all journal entries that were successfully written to disk can be “replayed” to reconstruct a valid and internally consistent filesystem. This significantly enhances the speed and accuracy of crash recovery, although it generally entails some costs in performance and code complexity.
ZFS embraces a third approach known as “copy-on-write.” In this model, a fresh copy of a data block is written to disk (instead of overwriting the old location) whenever that block's contents are changed, and only when they are changed. For example, when you run cp old_file new_file within a copy-on-write filesystem, the command completes nearly instantaneously, regardless of the file size, since the only on-disk change is to add a new directory entry that maps the name new_file to the same on-disk sectors that old_file was already taking up. The key difference in comparison to a traditional UNIX hard link (in which two or more different filenames refer to the exact same collection of data) is that subsequent edits to the content of new_file will not change the data read when accessing old_file. Instead, when you make a change to new_file, ZFS will create fresh copies of all changed sectors, and update the new_file directory entry to reference those new copies instead of the sectors previously shared with old_file. This process is completely transparent to the user, except that changes to very large files will be noticeably faster than in traditional filesystems, since only the changed data blocks are actually updated.
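The semantic difference between a hard link and a copy can be demonstrated with ordinary shell commands (this demo runs on any filesystem; what copy-on-write changes is only that the cp completes without physically duplicating the unchanged data):

```shell
cd "$(mktemp -d)"
echo "version 1" > old_file

cp old_file new_file          # on ZFS: nearly instant, even for huge files
ln old_file hard_link         # contrast: a traditional UNIX hard link

echo "version 2" > new_file   # editing the copy leaves old_file untouched
cat old_file                  # prints "version 1"

echo "version 3" > hard_link  # editing through a hard link changes old_file too
cat old_file                  # prints "version 3"
```

The copy and the original diverge at the first write, exactly as described above; the hard link never diverges, because both names refer to the same underlying data.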
The filesystem layer of ZFS uses this copy-on-write functionality to guarantee that there is always a consistent on-disk representation of the filesystem as of a particular moment in time, without any need to fsck. Directories and files are stored in a tree structure of linked blocks (analogous to a B-tree). When a file is created or edited, its data is written into new copies of the existing blocks; if additional space is required then new leaf blocks are added to the tree. Child blocks are always committed to disk before the parent blocks (containing pointers to all copies of the child block, along with a checksum of the expected child contents) are themselves updated. The parent block updates are again performed by writing a new copy of the block at a different disk location, instead of overwriting the previous copy. This process continues recursively up to the top of the tree, where two copies of the filesystem's “master block” are stored at different known locations on disk. One copy of the master block will be written and verified before the other is updated. At filesystem mount time, ZFS checks both master blocks for checksum integrity and to determine which has a newer timestamp. If the system is recovering from a crash during a previous write operation, then at least one of the master blocks is guaranteed to be valid, either pointing to a copy of the filesystem tree that reflects that write operation, if the master block update finished just in time before the crash, or else pointing to the older copy that reflects the filesystem as it existed before that last write operation. Other than potentially the changes being made during that interrupted write operation, all other on-disk data is unaffected, and even the file(s) that were being updated at the time of the crash still have consistent on-disk copies reflecting an older version. 
The amount of transient data that could be “lost” when a write operation is interrupted can be further reduced by designating a vdev as the “ZFS Intent Log” (ZIL), although this is strictly optional and probably unnecessary for most users.
Snapshots
If you're familiar with Apple's Time Machine, ZFS snapshots are a more powerful (and efficient!) version of the same idea. Copy-on-write is the core mechanism behind ZFS snapshots; taking a snapshot simply marks the disk blocks comprising the snapshot as ineligible for automatic reclamation (i.e. they won't be overwritten with new data) until the snapshot is destroyed. This is essentially an instantaneous operation, regardless of filesystem size, and initially ties up no additional disk sectors compared to the “live” filesystem. As further filesystem changes are made after taking the snapshot, new blocks will be written to disk while the old “snapshot version” blocks are retained, causing disk usage to slowly grow. However, since only changed files are duplicated (and really only the specific blocks that have changed within those files, when the updated files are large enough to span multiple ZFS blocks), the space “taken up” by the snapshot is proportional to the rate of filesystem changes, not the overall filesystem size. For example, a filesystem with few overwrites or deletions, such as an archive, will have essentially zero-cost snapshots, while a filesystem in which files are frequently edited or deleted, such as /tmp, would pay a higher space cost. When looking at filesystems or snapshots with the zfs list command, the “USED” column indicates how much space could be reclaimed if that particular filesystem or snapshot were deleted (i.e. the “cost” of keeping it), while “REFER” indicates the total size of all files referenced by that snapshot. In other words, copying the filesystem or snapshot to a new zpool would use up “REFER” bytes, while deleting the filesystem or snapshot would return “USED” bytes to the current zpool.
Snapshots can be browsed at any time via the hidden /.zfs directory at the root of every ZFS filesystem; this allows you to use traditional UNIX tools such as find, diff, diffoscope, or rsync to identify changes and recover old versions if required. Snapshots can also be mounted as a separate read-only filesystem, or used as the starting point for a new read-write copy of the archived filesystem. zfs list -t snapshot will list all snapshots on your system (or a given ZFS filesystem, if specified at the end of the command line).
The ability to mount snapshots as live filesystems is particularly useful in performing a bisect to see whether recent system changes (e.g. Slackware package updates or configuration file changes) might be responsible for a given problem. It's often advisable to take a system snapshot right before performing a major configuration change, and then delete it only after you're sure the updated system is behaving as expected; if something goes wrong, simply revert to the snapshot and it's as if your changes never happened. This is also useful in migrating live filesystems to a different storage pool with minimal downtime: you can take a snapshot of the live system, zfs send that snapshot to the new pool (which might take a considerable amount of time for large filesystems, especially over a network transfer), and then when ready to complete the migration, perform a much smaller incremental zfs send of only the changes since the snapshot.
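In command form, the snapshot-before-upgrade and low-downtime migration workflows described above look roughly like this (“tank/root” and “newpool” are placeholder names for this sketch):

```shell
# Before a risky change, snapshot the system dataset:
zfs snapshot tank/root@pre-upgrade

# ...perform the upgrade and test the system...

# If something broke, discard everything since the snapshot:
zfs rollback tank/root@pre-upgrade
# If all is well, reclaim the snapshot's space instead:
zfs destroy tank/root@pre-upgrade

# Low-downtime migration: a full send first (slow), then a small
# incremental send of only the changes since the first snapshot:
zfs snapshot tank/root@migrate1
zfs send tank/root@migrate1 | zfs receive newpool/root
zfs snapshot tank/root@migrate2
zfs send -i @migrate1 tank/root@migrate2 | zfs receive newpool/root
```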
https://github.com/zfsonlinux/zfs-auto-snapshot (SlackBuild script at https://slackbuilds.org/repository/15.0/system/zfs-auto-snapshot) uses the snapshot feature and cron to take automatic scheduled backups of your ZFS filesystems. For example, you could configure it to keep a rotating pool of the last 24 hourly backups, seven daily backups, eight weekly backups, and twelve monthly backups, and have a pretty good chance of being able to find old versions of any file you might have overwritten in the past year.
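That retention policy corresponds to invocations along these lines, one per cron schedule (a sketch based on the zfs-auto-snapshot README; the exact paths and flags installed by the package may differ, and “//” means every dataset with the com.sun:auto-snapshot property enabled):

```shell
# /etc/cron.hourly, cron.daily, cron.weekly, cron.monthly respectively:
zfs-auto-snapshot --quiet --syslog --label=hourly  --keep=24 //
zfs-auto-snapshot --quiet --syslog --label=daily   --keep=7  //
zfs-auto-snapshot --quiet --syslog --label=weekly  --keep=8  //
zfs-auto-snapshot --quiet --syslog --label=monthly --keep=12 //
```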
https://zrepl.github.io leverages snapshots to provide an even more flexible automated backup system, including using zfs send to archive your backups to remote servers with minimal network traffic (since ZFS already has the internal change history for every dataset).
Data checksumming
This is probably the real “killer feature” of ZFS for most home and small office users. (The fsck-free instant crash recovery is also nice, but ext4 and other modern journaling filesystems offer something close to it.) All of your data can be automatically saved at multiple disk locations, with checksums stored in yet another location. Every zpool read() operation compares the data obtained from the vdev against its expected checksum, and it will automatically and transparently find and use (and then clone for further protection) a different “good” copy when the checksum verification fails. You can safely throw consumer-grade disks into a ZFS pool and not really worry about hardware degradation or failure. Of course, you will still want to run regular scrubs on the pool to detect any impending failures as early as possible, keep a healthy amount of free space in the pool so that ZFS is not constrained in how many data copies it can keep, and make regular backups to another (ideally off-site) storage system, probably using ZFS' excellent snapshot and send capabilities, to guard against all of the failures (e.g. accidentally reformatting some of your pool disks, or physical theft or destruction of the entire server) that ZFS /can't/ protect against. But even if you don't take those prudent steps, you're still pretty unlikely to actually lose data. For important records or memories that you really don't want to lose, ZFS is generally the most robust filesystem you could possibly choose, which is why it's used so heavily in enterprise backends and online transaction processing scenarios.
Encryption and compression
ZFS offers built-in compression and strong encryption, configurable on a per-dataset basis. On modern hardware, the CPU cost of these features is negligible, and enabling compression actually improves read/write performance in most cases, because less total disk I/O (the real time-cost driver) is required. Encrypted datasets can be unlocked at mount time via a manually typed passphrase, or via a keyfile.
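For example, an encrypted, compressed dataset unlocked by passphrase at mount time might be set up like this (“tank/secure” is a placeholder name; run as root):

```shell
# Create an encrypted, compressed dataset; you'll be prompted to set
# a passphrase at creation time:
zfs create -o encryption=on -o keyformat=passphrase \
           -o compression=lz4 tank/secure

# After a reboot, load the key and mount:
zfs load-key tank/secure     # prompts for the passphrase
zfs mount tank/secure

# Or use a keyfile instead of an interactive prompt (keyformat=raw
# expects a 32-byte key at the given location):
zfs create -o encryption=on -o keyformat=raw \
           -o keylocation=file:///root/secure.key tank/secure2
```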
Size limits
ZFS uses 128-bit addresses, meaning that its address space could index 2^64 separate filesystems, each as large as the largest possible 64-bit filesystem. The estimated volume of worldwide internet traffic in 2024 was slightly in excess of 2^63 bytes, so a ZFS filesystem could theoretically archive traffic at that rate for about 37 quintillion years. In other words, it's more practical to spend your time worrying about the eventual heat death of our universe than about running out of logical address space in your ZFS pool.
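The arithmetic behind that claim is easy to check: 2^128 bytes of address space divided by roughly 2^63 bytes of traffic per year leaves 2^65 years of archiving.

```shell
# Total ZFS address space (2^128 bytes) divided by one year of
# worldwide internet traffic (~2^63 bytes) = years of archive capacity:
awk 'BEGIN { printf "%.0f\n", 2^128 / 2^63 }'
# prints 36893488147419103232, i.e. about 3.7 x 10^19 years
```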
Portability
OpenZFS for Linux currently builds on x86, x86_64, arm, aarch64, ppc, ppc64, riscv, sparc, and sparc64 architectures (the list at https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#hardware-requirements is not always up-to-date). 32-bit x86 support requires the i686 (Pentium Pro/II and beyond) architecture; i586 (original Pentium) is unsupported, which seems fair as it's more than 30 years old at this point.
OpenZFS for Linux filesystems can be directly imported in FreeBSD, and vice versa. Furthermore, you can zfs send and zfs receive datasets between different OpenZFS implementations and versions. The on-disk layout is stable between OpenZFS versions: newer tooling can always work on older pools, and even older tooling can generally import newer pools, so long as no feature flags unknown to the older version have been enabled. The zfs send format is not interoperable between OpenZFS and Oracle ZFS, but you are unlikely to encounter the latter in the wild anymore, and other filesystem transfer tools (such as rsync) can bridge the gap.
ZFS is very conservative about the data it accepts, so any versions that /do/ have an incompatibility are likely to generate an error message. Note, however, that these error messages usually do not occur until the transfer is complete (even if OpenZFS could have recognized earlier in the data stream that there would be a problem), which might waste considerable time and bandwidth for larger datasets. Additionally, in some cases OpenZFS will unpack as much of the data stream as it can before generating the error, so even though you get an error message you might also have a mountable filesystem, with only a subset (but at least a consistent and accessible subset!) of the intended contents.
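A hedged sketch of cross-system replication (host and dataset names are placeholders; the receiving side can be Linux, FreeBSD, or any other OpenZFS platform):

```shell
# Replicate a dataset snapshot to another machine over ssh;
# -u on the receive side leaves the new dataset unmounted:
zfs snapshot tank/data@xfer
zfs send tank/data@xfer | ssh backup-host zfs receive -u backuppool/data

# A dry run (-n) with verbose output (-v) reports what would be
# created without committing anything, which can catch compatibility
# problems before spending the full transfer time:
zfs send tank/data@xfer | ssh backup-host zfs receive -n -v backuppool/data
```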
Potential disadvantages
License
OpenZFS is distributed under the Oracle (originally Sun) Common Development and Distribution License (https://opensource.org/license/CDDL-1.0). This grants fairly broad rights to modify, share/distribute, and use the software, including for commercial purposes, but it is /not/ fully compatible with the GNU General Public License (https://www.gnu.org/licenses/old-licenses/gpl-2.0.html) used by the Linux kernel. In particular, the CDDL does not appear to require contributors to release all derivative works under CDDL terms, only to continue to make the original CDDL-covered code available under those terms. For example, a vendor might be able to make a modification to OpenZFS and sell it commercially without releasing the source of their changes, which is not something that could be done with the GPL-covered Linux kernel.
I am not a lawyer and nothing in this document should be construed as specific legal advice. However, the generally accepted interpretation of the CDDL and GPL is that CDDL-covered code can be provided in source form (obviously), or in binary form as separate modules that can be loaded into GPL code such as the Linux kernel, but a single binary that integrates both CDDL and GPL code cannot be publicly distributed (whether commercially or free-of-charge) without violating the terms of the GPL. In other words, a Linux distribution packager is probably free to distribute the ZFS drivers as binary modules, but not a standalone kernel with built-in ZFS support.2) This restriction means that the ZFS drivers will always live /outside/ of the main Linux source tree, at least until Oracle and other contributors agree to re-license their code under the GPL.
If you're simply using or contributing changes to OpenZFS, and not trying to publicly distribute a product that somehow integrates Linux and ZFS, then all this really means for you is that there will always be an extra step to obtain and install your ZFS drivers apart from the main kernel. Additionally (although this isn't directly due to the license issues), the out-of-tree ZFS driver won't always be up to date with the very latest Linux kernel releases. To ensure maximum performance and runtime predictability, the ZFS drivers make heavy use of internal kernel data structures and methods, which frequently change as the kernel evolves, and might cause older versions of the ZFS driver to fail to build against newer kernels. In practice, these kinds of breaking changes are rare with Linux point releases (e.g. 6.15.7 to 6.15.8), but very common with minor version number bumps (e.g. 6.15 to 6.16). After a new Linux minor version is released, there's usually a 7-10 day lag before a new ZFS version is released with full support. This generally isn't a problem unless you insist on constantly running the very latest kernels, in which case you're welcome to contribute to the OpenZFS git repository.
Performance
ZFS was designed from the ground up with a focus on reliability and scalability, not raw speed. That doesn't mean that it's slow; it actually performs fairly well on a wide range of benchmarks. In fact, the built-in compression algorithms can reduce total disk traffic and often outperform uncompressed filesystems. It also tends to exhibit more consistent behavior as loads increase, where other filesystems sometimes “fall off a cliff” when buffers get saturated. The sheer number of potential ZFS configurations and tunables makes benchmarking somewhat complex; it's possible to configure a ZFS system to be slower than it needs to be, but with a little bit of tuning it can usually keep up with almost anything else. Nonetheless, it needs to be said that there are a lot of common use cases where a simpler filesystem will consistently hit a slightly higher throughput than ZFS; if you really have a use case where that last few percent of read/write speed outweighs the risk of losing data and/or access to other ZFS features (such as instant snapshotting or native encryption), you might want to shop around a bit. https://www.enterprisedb.com/blog/postgres-vs-file-systems-performance-comparison offers some good but dated benchmarks as a starting point.
Complexity
ZFS' design arguably violates the “UNIX philosophy” of small, independent tools that each do one thing well and interoperate well with each other, instead of trying to be an all-in-one, one-size-fits-all solution. ZFS very much attempts to address a wide range of storage problems with a single integrated solution, and in many cases duplicates functionality already available elsewhere in Linux. For example, the ability to enlarge your storage pools at any time by simply adding new disks could also be achieved with LVM. Filesystem- or device-level encryption was already available in LUKS. RAID-like mirroring and striping (although not the on-disk checksumming and multiple data copies) are provided by the multi-disk (md) driver. Moving to ZFS to get these features does feel a bit like reinventing the wheel.
Of course, ZFS wasn't originally developed with Linux in mind, and the inventors' relentless focus on reliability made them skeptical of third-party solutions for these problems anyway. There's little point in protecting so zealously against hardware failures if your data integrity could be equally undermined by software issues. The copy-on-write and checksumming model provided a solid, easily auditable framework for building reliable drivers, and these features were easily integrated directly into it, so there was little need to depend on outside drivers. For the most part, I'd say that ZFS' three-layer model generally reflects the overall UNIX philosophy, with relatively little direct coupling between the modules except for a few instances where the zfs layer needs to understand the enabled zpool capabilities to achieve the best possible performance. Of course, this is a matter of personal opinion, and it's inarguable that you can't just install “a part of” ZFS: if you want any ZFS features, you're going to have to embrace the whole stack, and adjust your system administration to suit. The all-in-one approach means that you will need to learn new tools (principally the zfs and zpool commands) to accomplish system administration tasks that you may be used to accomplishing with more traditional UNIX utilities. Fortunately, the ZFS tooling is user-friendly and well documented.
How can I install ZFS on Slackware?
You can find tarballs of ZFS releases for Linux at https://zfsonlinux.org, or access the Git repository at https://github.com/openzfs/zfs. For non-Linux specific information, start at https://openzfs.org/wiki/Main_Page for general ZFS documentation and links to releases for other operating systems.
Note that ZFS' userspace tooling does /not/ necessarily have to be kept in sync with the version of OpenZFS used in your kernel drivers, although keeping them matched is generally a good idea.
Using the SlackBuild script
https://slackbuilds.org/repository/15.0/system/openzfs provides a script that will automatically download the latest ZFS on Linux tarball, compile it against your running kernel (or another kernel version that you specify via the `KERNEL` environment variable), and package the ZFS drivers and toolchain as a .txz file that you can manage with traditional Slackware package tools. For many users, this will be the most straightforward way to add ZFS support to Slackware. Note that the script is set up to build the ZFS drivers as loadable modules; if you want them built into the main kernel image, you can modify the script or follow the manual build directions below.
Note that the kernel modules will need to be rebuilt and reinstalled every time you move to a new kernel. The SlackBuild script will include the kernel version as part of the package version string, so that tools like sbopkg can automatically detect the need to rebuild when your kernel version increases (even if the version of openzfs itself remains unchanged). However, this also means that Slackware's upgradepkg will delete the old modules (zfs.ko and spl.ko) from /lib/modules/[old kernel version] every time you upgrade, unless you make manual backups. There are several options if you want to preserve your old modules in place (if you occasionally boot back into old kernels):
- The simplest is probably just to rename /var/lib/pkgtools/packages/openzfs-[version]_SBo to something else (e.g. /var/lib/pkgtools/packages/manual-openzfs-[version]_SBo). Upgradepkg will then not consider the newly-built openzfs package and the older manual-openzfs to be the same software, and so it will regard the old /lib/modules files as belonging to a separate package and will not auto-remove them when upgrading openzfs. Since everything else generally keeps the same filename when a new openzfs package is built, all other files will be overwritten with the new versions as intended. You simply need to remember to perform this renaming every time openzfs is installed, at least for any kernel versions whose modules you wish to keep.
- Alternatively, you could manually edit /var/lib/pkgtools/packages/openzfs-[version]_SBo to remove the /lib/modules entries; this will remove those files from automated package manager control. Like renaming the package file, this manual editing step will need to be re-performed every time you reinstall the openzfs package.
- A less-recommended approach that doesn't require periodic manual re-editing, but does blur the lines between “building” and “installing” an SBo package (so use with caution!), would be to revise openzfs.SlackBuild to manually copy the kernel modules (zfs.ko and spl.ko) directly into /lib/modules/[version] instead of into $PKG/lib/modules.
- If you will be building the ZFS modules into an initrd or initramfs, see the pertinent notes in my Root on ZFS HOWTO. (TL;DR: mkinitrd will have the side-effect of automatically saving your old modules under /boot/initrd-tree/lib/modules, and including them in all future initrd builds, so if you're booting from an initrd and load ZFS before switching back to the real root partition, you don't need to worry about keeping the original copies of the modules under /lib/modules.)
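As a concrete illustration of the first (rename) option, the commands would be along these lines ([version] stands for the actual version/build string of your installed package, which this sketch does not attempt to guess):

```shell
# Rename the package database entry so upgradepkg treats future
# openzfs builds as a different package and leaves the old
# /lib/modules files alone ([version] is a placeholder):
cd /var/lib/pkgtools/packages
mv openzfs-[version]_SBo manual-openzfs-[version]_SBo
```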
Building the ZFS driver and tools yourself
If you don't want to use openzfs.SlackBuild (perhaps because you don't want to worry about manually manipulating which kernel modules are under package manager control), the basic process is:
- Download and unpack the source to your desired kernel version
- If necessary, download and unpack the source to the latest version of https://zfsonlinux.org. Note that the latest OpenZFS release might not support the very latest kernel versions, but will usually catch up within a week or two.
- Copy your existing kernel config file (e.g. the output of gzip -dc /proc/config.gz) into the kernel source directory as .config.
- In the kernel source directory, run make prepare to create some configuration artifacts that will be required by OpenZFS.
- In the OpenZFS source directory, run ./configure --with-linux=/path/to/new/kernel --prefix=/usr --enable-linux-builtin and then ./copy-builtin /path/to/new/kernel
- Back in the kernel source directory, run make oldconfig and answer questions about any new hardware. You should also be prompted for CONFIG_ZFS.
- Build your kernel and copy the modules to /lib/modules/[version] with make && make modules_install
- If desired, sign the kernel with your Secure Boot keys (e.g. using https://github.com/Foxboron/sbctl)
- Back in the OpenZFS directory, build and install the userspace tools and libraries with make && make install
- Copy the new kernel (for x86, it will be /path/to/new/kernel/arch/x86/boot/bzImage) to /boot with a suitable filename. It's not strictly required to copy the System.map and .config files, but I consider it a good practice to archive those too.
- If you use an initrd, build a corresponding ramdisk (substituting the correct kernel version for “6.x.x”) with mkinitrd -F -k 6.x.x -o /boot/initrd-filename-desired.gz
- If using GRUB, add an appropriate boot menu entry to /boot/grub/grub.cfg
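Putting those steps together, the whole build looks something like this (a sketch only; /path/to/new/kernel, /path/to/openzfs, and “6.x.x” are placeholders for your actual source trees and version):

```shell
# Prepare the kernel tree, starting from the running kernel's config:
cd /path/to/new/kernel
gzip -dc /proc/config.gz > .config
make prepare

# Configure OpenZFS against that tree and graft it in as a built-in:
cd /path/to/openzfs
./configure --with-linux=/path/to/new/kernel --prefix=/usr --enable-linux-builtin
./copy-builtin /path/to/new/kernel

# Build and install the kernel (say yes to CONFIG_ZFS at oldconfig):
cd /path/to/new/kernel
make oldconfig
make && make modules_install

# Build and install the ZFS userspace tools and libraries:
cd /path/to/openzfs
make && make install

# Install the kernel image and build a matching initrd:
cd /path/to/new/kernel
cp arch/x86/boot/bzImage /boot/vmlinuz-custom-6.x.x
mkinitrd -F -k 6.x.x -o /boot/initrd-custom-6.x.x.gz
```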
I use the following script to automate that process:
- installkernel.sh
#!/bin/bash
# Modified from the script at http://www.ohse.de/uwe/software/installkernel.html
# Arguments (all optional):
#   $1 - kernel version
#   $2 - release tag
#   $3 - kernel image file
#   $4 - map file
#   $5 - config file
#   $6 - destination directory (/boot if omitted)

function setdefaults {
    kernelsrc="/usr/src/linux-$1"
    defaulttag=-custom
    defaultimage="$kernelsrc/arch/x86/boot/bzImage"
    defaultmap="$kernelsrc/System.map"
    defaultconfig="$kernelsrc/.config"
    defaultdest=/boot
}

if [[ $(basename `pwd`) =~ ^linux-.*$ ]]; then
    defaultversion=$(pwd | cut -d- -f2)
else
    defaultversion=`uname -r`
fi
setdefaults $defaultversion

if [ "$1" = "--help" -o "$1" = "-help" -o "$1" = "-h" -o "$1" = "-H" -o "$1" = "-?" ]; then
    echo Syntax: $0 '[kernel version]' '[release tag]' '[kernel image file]' '[map file]' '[config file]' '[destination directory]'
    echo Default values are $defaultversion, $defaulttag, $defaultimage, $defaultmap, $defaultconfig, $defaultdest respectively
    exit
fi

if [ $# -ge 1 ]; then version=$1 ; setdefaults $version ; else version=$defaultversion ; fi
if [ $# -ge 2 ]; then tag=$2 ; else tag=$defaulttag ; fi
if [ $# -ge 3 ]; then image=$3 ; else image=$defaultimage ; fi
if [ $# -ge 4 ]; then map=$4 ; else map=$defaultmap ; fi
if [ $# -ge 5 ]; then config=$5 ; else config=$defaultconfig ; fi
if [ $# -ge 6 ]; then destination=$6 ; else destination=$defaultdest ; fi

kernel=$destination/vmlinuz$tag-$version

if [ -f $kernel ] ; then
    echo "WARNING: $kernel already exists; press enter to continue anyway or ^C to abort and supply new version/tag as \$1/\$2"
else
    echo "will install as $kernel, press enter to continue or ^C to abort"
fi
read

mapdest=$destination/System.map$tag-$version
configdest=$destination/config$tag-$version
initrddest=$destination/initrd$tag-$version.gz

cp -a $image $kernel
if [ $? -ne 0 ] ; then
    echo "error installing new kernel from $image to $kernel" >&2
    exit 1
fi
sbctl sign $kernel
if [ $? -ne 0 ] ; then
    echo "WARNING: error signing $kernel for Secure Boot, press enter to continue" >&2
    read
fi
cp -a $map $mapdest
if [ $? -ne 0 ] ; then
    echo "error installing new System.map from $map to $mapdest" >&2
    exit 1
fi
cp -a $config $configdest
if [ $? -ne 0 ] ; then
    echo "error copying configuration from $config to $configdest" >&2
    exit 1
fi
mkinitrd -F -k $version -o $initrddest
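For reference, a typical invocation of the script above, assuming the new kernel was built in /usr/src/linux-6.x.x and you want the installed files tagged “-zfs” (both placeholders):

```shell
cd /usr/src/linux-6.x.x
/path/to/installkernel.sh 6.x.x -zfs
# installs /boot/vmlinuz-zfs-6.x.x, /boot/System.map-zfs-6.x.x, and
# /boot/config-zfs-6.x.x, then builds /boot/initrd-zfs-6.x.x.gz
```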
How can I set up ZFS filesystems?
There are already plenty of good guides to that elsewhere; I would start with https://wiki.archlinux.org/title/ZFS#Storage_pools as the most readable and comprehensive.
Useful administration tools
- https://github.com/zfsonlinux/zfs-auto-snapshot (SlackBuild script at https://slackbuilds.org/repository/15.0/system/zfs-auto-snapshot) for automated periodic snapshots
- https://zrepl.github.io for more sophisticated backup capabilities
- https://zfsbootmenu.org for a bootloader that can directly load ZFS filesystems (see my HOWTO)
Potential issues and troubleshooting
TODO
Sources
Originally written by Brent Spillner, 2025. References: