[2025-jun-17] The SlackDocs mailing lists at https://lists.alienbase.nl/mailman/listinfo have been retired. No one has been using these lists for years and it's time to say goodbye. The list archives remain available at https://scalzi.slackware.nl/mailman/listinfo/slackdocs

[2025-jun-17] The SlackDocs Wiki has moved to a new server in order to improve performance.

Welcome to the Slackware Documentation Project

Virtualizing Slackware AArch64 on the HoneyComb LX2

Target
Platform AArch64/ARM64
Hardware Model Solid-Run Honeycomb LX2K (lx2160Acex7) Workstation
Document Version 1.09 July 2025
Author Brenton Earl <el0226@slackware>
This guide assumes the Honeycomb will be used as a virtual machine server. Some decisions on implementation are specific to this hardware.

Before you Begin

  1. Verify your target host machine supports virtualization.
  2. Important information is found in the Installing Slackware: EFI-based AArch64 Hardware Model guide as well.
  3. Some related articles are found in the Slackware ARM / AArch64 table of contents.

Getting Help

Please reach out on the Slackware ARM Forum for help.

Support the Slackware ARM Project

If you like what we're doing here, please consider becoming a patron.

Software Requirements

Application        Purpose
Slackware AArch64  Full installation is recommended
Qemu               Virtualization software, built with the aarch64-softmmu target
Libvirt            Daemon and management platform to control virtual machines
Virt-Manager       A graphical interface to monitor and configure local and remote virtual machines
netcat-openbsd     Establishes remote connections using Unix sockets within libvirt

Only the minimal dependencies are included in the build scripts.

Build Scripts and Packages

It is recommended that you follow this guide and use the build scripts to install your own packages from source:

  • The build scripts are here

Alternatively, you can install packages and skip building from source:

  • Pre-built packages here.

In the future the pre-built packages will become outdated and you will need to rebuild them with the included build scripts. The current package set was built on the AArch64 port of Slackware-current. If you run Slackware-current you will notice that the operating system is constantly upgraded; what works today may need a rebuild tomorrow. It is recommended that you rebuild the packages as often as you update your Slackware-current installation.

A stable Slackware release in the future will not require as many rebuilds. In that case the pre-built packages will become a much more attractive option.

Download

The whole bundle can be downloaded from the Slackware.uk public mirror like so:

# export REPO_DIR=/tmp/slackware-aarch64-virtualization
# mkdir -pv $REPO_DIR
# cd $REPO_DIR
Do not forget the trailing dot at the end of the rsync command.
# rsync -Paav slackware.uk::slackwarearm/people/brent/slackware-aarch64-virtualization/ .
Change the value of $REPO_DIR if you decide to download the assets to a different directory.

You need to modify the configuration if you use a different directory path to store this project. The REPOSITORY value is set in the arm/build and tools/refresh.source files:

# vim tools/refresh.source
# vim arm/build 

Set the directory path by editing this line in both files:

REPOSITORY=${REPOSITORY:-/tmp/slackware-aarch64-virtualization/virtualization-scripts}

Build

Enter the build script directory:

# cd $REPO_DIR/virtualization-scripts

Verify the SlackBuild scripts are all marked executable.

# chmod -v +x arm/build  */arm/build */*SlackBuild tools/refresh.source
The Slackware official package for netcat is removed by the build scripts automatically. It is swapped out for netcat-openbsd.

Replacing netcat will enable the virt-manager application to connect remotely to a virtual machine host running the libvirt daemon. Every virtual machine on such a remote host is accessible over the SSH protocol. This is much more secure than VNC for remote access.

To verify the checksum of included source code:

# CHECK=1 ./tools/refresh.source 

Output should look similar to:

spice-protocol-0.14.4.tar.xz: OK
spice-0.15.2.tar.bz2: OK
spice-gtk-0.42.tar.xz: OK
gtk-vnc-1.5.0.tar.xz: OK
lloyd-yajl-2.1.0-0-ga0ecdde.tar.gz: OK
libvirt-11.3.0.tar.xz: OK
libvirt-python-11.3.0.tar.gz: OK
libvirt-glib-5.0.0.tar.xz: OK
osinfo-db-tools-1.12.0.tar.xz: OK
osinfo-db-20240701.tar.xz: OK
libosinfo-1.12.0.tar.xz: OK
virt-manager-5.0.0.tar.xz: OK
libmd-1.1.0.tar.xz: OK
libbsd-0.11.7.tar.xz: OK
netcat-openbsd-7.4-2.tar.gz: OK
qemu-10.0.2.tar.xz: OK

Execute the build script in the virtualization-scripts directory:

# ./arm/build

Then you wait. Depending on your system, the build could take a significant amount of time. Each package will be built and installed automatically. The build script will exit if a package fails to build.

Updates

Build scripts can be updated by editing the arm/build script in each directory.

Modify the VERSION variable. Qemu is used as an example here, but the process is the same for the other build scripts.

# vim qemu/arm/build

You also need to update the source URL in the download.info file in each directory. Edit the file by deleting the old link and pasting in the new one.

# vim qemu/download.info 

When you have edited all of the build scripts you wish to update, execute:

# REFRESH=1 ./tools/refresh.source 
Each invocation of the refresh.source script will delete the existing source tarball for each build script. It will then download the new tarball and create a SHA256 hash for every build script. Note that unchanged source tarballs will be downloaded again as well, so run a refresh sparingly if you have bandwidth limitations.

Configure

This section describes how to configure the virtual machine daemon for system boot and manual control. We will also configure the daemon to allow the correct permissions for remote access management.

Add to rc.local:

# vim /etc/rc.d/rc.local 
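As a sketch, the lines to append could look like the following, assuming the rc.libvirt init script installed by the build scripts:

```shell
# Start the libvirt daemon at boot, if the init script is present
# and executable (installed by the virtualization build scripts)
if [ -x /etc/rc.d/rc.libvirt ]; then
  /etc/rc.d/rc.libvirt start
fi
```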

Create rc.local_shutdown:

# vim /etc/rc.d/rc.local_shutdown 
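A minimal rc.local_shutdown along these lines would stop the daemon cleanly at shutdown, again assuming the rc.libvirt script from the build scripts:

```shell
#!/bin/sh
# Stop the libvirt daemon during system shutdown
if [ -x /etc/rc.d/rc.libvirt ]; then
  /etc/rc.d/rc.libvirt stop
fi
```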

Mark the daemon and boot scripts as executable:

# chmod -v +x /etc/rc.d/rc.libvirt  /etc/rc.d/rc.local /etc/rc.d/rc.local_shutdown

Finally, edit libvirtd.conf to set the users group to have permissions to manage virtual machines over unix sockets.

# vim /etc/libvirt/libvirtd.conf 
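The relevant directives are unix_sock_group and unix_sock_rw_perms, both standard libvirtd.conf options; the specific values below (the users group with 0770 permissions) are one possible choice matching this guide:

```
unix_sock_group = "users"
unix_sock_rw_perms = "0770"
```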

The virtual machine daemon (libvirt) will automatically start on system boot and stop during system shut down.

You can also control the daemon manually. This is useful if you do not wish to leave the daemon running all the time.

To Start:

# /etc/rc.d/rc.libvirt start

To Stop:

# /etc/rc.d/rc.libvirt stop

At this point in the guide you can manage virtual machines locally. Next, we describe how to access and manage virtual machines over the network.
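As a quick check of local management, you can list virtual machines from the command line with virsh (part of libvirt); qemu:///system is the standard local system connection URI:

```shell
# List all local virtual machines, running or shut off
virsh -c qemu:///system list --all
```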

Virtual Machine Management

In this section virtual machine management is discussed using the virt-manager application. This guide is for Slackware AArch64 on the HoneyComb; however, this tool can be used to remotely access AArch64 virtual machines from any type of Slackware system, regardless of network size. Remote access only requires the SSH protocol to log in as your user, after which you launch the virt-manager GUI on your machine.

Remote Qemu/KVM Management

Launch virt-manager from your software menu.

Use the mouse to select File → Add Connection.

Configure the connection for remote access using the SSH protocol. Enter the User name and Host name and click the Connect button.

This is the list of connections you have available. The “HC” instance is for remotely managing the Honeycomb.

Details
  • The values for User name and Host name are specific to the configuration of your systems.
  • The OpenSSH daemon should be configured on the virtual machine host to allow your user to connect on port 22 for access to libvirtd.
  • The user name “x” should be replaced with the user account on your remote virtual machine host and on the local client.
  • The host name is configured as “hc”, but can be an IP address or a host in /etc/hosts.
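The connection virt-manager builds from these fields is a standard libvirt URI; assuming the example user “x” and host “hc” above, the equivalent command-line forms would be:

```shell
# Open virt-manager directly against the remote host over SSH
virt-manager --connect qemu+ssh://x@hc/system

# Or query the same remote host with virsh
virsh -c qemu+ssh://x@hc/system list --all
```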

Slackware ARM32

On 32-bit ARM (ARM32), virtual machines are not used due to hardware constraints. However, remote access to virtual machines on other hosts is available using virt-manager. The libvirt daemon does not need to be active on client machines that connect to a remote virtual machine host. This applies to local hosts, remote hosts, and any mixture of hosts of other architectures supported by Qemu/KVM on Linux.

Manual Installation

The purpose of this section is to explain how to boot the kernel and ramdisk directly. This process removes the necessity to create new installation media. This is a great option to efficiently test the kernel or to try out Slackware ARM.

Storage Pools

First, add a new storage pool that will be accessible to your virtual machines. The kernels and installer directories store the kernel and installer ramdisk within a Slackware tree. I store my files on the HoneyComb in /root/kernels/armv8 and /root/installer.

Right click the Qemu session in the main virt-manager window → Details → Storage
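For reference, the same directory-backed pools can also be created from the command line with virsh; the pool name and the /root/kernels/armv8 path below follow the example locations above:

```shell
# Define a directory-backed storage pool for the kernels,
# then start it and mark it to start automatically
virsh -c qemu:///system pool-define-as kernels dir --target /root/kernels/armv8
virsh -c qemu:///system pool-start kernels
virsh -c qemu:///system pool-autostart kernels
```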

Manual Installation Steps

With your storage pools added and started we can now begin. Create a new virtual machine and select Manual Install.

Configure your settings for the new virtual machine.

On the final dialogue check the Customize Configuration Before Install box. Then click the Finish button.

Next, go to the Boot Options section. Click Enable Direct Kernel Boot. Enter the file path to the kernel and installer. They are located in the storage pools configured earlier in this section.

Add the following to the kernel args input:

mitigations=off earlyprintk root=/dev/ram rw kbd=us nic=auto:eth0:dhcp

Click Apply. In the upper left corner of the window, click Begin Installation.

Install Slackware ARM as usual. The setup process is the same as any of the supported hardware models. In fact, the hardware model will show up as the hypervisor:

KVM Virtual Machine virt-10.0

Next, you need to adjust the boot options. Shut down the virtual machine entirely. Enter the file path on your system for the stock Slackware ramdisk in the “Initrd Path” input. Then adjust the kernel args.

Add the following to the kernel args input:

mitigations=off earlyprintk root=/dev/vda3 rootfstype=ext4

The root=/dev/vda3 setting points to the root partition within the virtual machine. Depending on your partition scheme, you may need to adjust this.
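If you are unsure which partition holds the root filesystem, you can check from within the installer or the running guest; the virtio disk typically appears as vda:

```shell
# Show the partitions and filesystems on the virtual disk
lsblk -f /dev/vda
```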

Summary

We discussed how to install a virtualization stack, direct kernel booting, and storage pools. The virt-manager application removes a lot of the manual labor of managing virtual machines.

Using this software in this way is applicable to any AArch64 hardware designed with virtualization in mind. Finally, it extends the Slackware ARM platform by enabling users to run other Linux distributions on top of Slackware.

Screenshots

Slackware64-current with LILO locally. Remote Slackware AArch64 installer.

Controlling virtual machines remotely and locally from a Slackware x86_64 host.

Managing a Debian 12 install.


Installing a Debian 12 virtual machine on the HoneyComb.  Virt-manager is running on a Slackware x86_64 host and managing the Debian installation remotely.

Errata

None at this time.

QR Code
QR Code slackwarearm:virtualization_slackware_aarch64 (generated for current page)