
Welcome to the Slackware Documentation Project

Slackware on AWS EC2

Despite the fact that Slackware Linux predates virtualization and 'Cloud Computing' by many years, it is not normally considered for Amazon Web Services Elastic Compute Cloud (AWS EC2). A possible reason is the lack of up-to-date, comprehensive documentation on how this can be done.

After some research, the author managed to identify only a few scattered sources on the subject.

This wiki is an attempt to provide a comprehensive tutorial on how Slackware may be built and deployed on AWS EC2.


The steps below give you a good overview of the whole process:

  1. Learn some core concepts that apply to deploying an OS on a particular Infrastructure as a Service (IaaS), namely AWS EC2;
  2. Check the requirements for the whole thing;
  3. Create a virtual machine we can rely on to build our kernels;
  4. Prepare the set of file trees that will drive the final Slackware initialization;
  5. Create the virtual machine that will become our Slackware, ready for its AWS EC2 cloud debut;
  6. Launch our first Slackware in the cloud, built from scratch!

The 4th and 5th steps require some expansion. We live in times when human rights and individual freedoms are treated as threats by states and governments around the world. As our Slackware is going to live in the cloud, which is run entirely by private businesses, we'll never know what practices those businesses actually follow. As such, our premise is: do not trust anyone.

This fact of life will lead to a set of configuration files and a ready-to-deploy Slackware image that is not a standard Slackware setup like the one you might have on your notebook/desktop. All those steps are optional and will be clearly marked with the [OPTIONAL] prefix. They are documented here so you at least know how to stay on the bright side of the cloud.

Before we get into the real thing, a mandatory word of advice: this is going to take a looong time.

No kidding! It's boring and your patience will be tested. There are many multi-hour steps, and a mistake in the wrong place could cost you hours of rework and lots of frustration. Don't worry if it takes a couple of days before you can SSH into your first Slackware instance that has booted just fine. Things can go a little faster with enough computing power and bandwidth, but don't expect miracles. Grab yourself a mug of hot coffee, sit comfortably and let it go.

The good news is: once you have mastered the process, you are going to deploy as many Slackware instances as you wish in a snap.

That being said, shall we?


'Cloud' is an umbrella term covering a huge number of technologies used by different IaaS vendors; by itself it says nothing. As such, we need to get familiar with some core concepts used by the IaaS vendor we care about here: AWS EC2.

  • Hypervisor: as stated in Wikipedia, 'a hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines.' AWS EC2 uses Xen as its hypervisor, and it is on top of it that your Slackware instances are going to live.
  • Virtualization Spectrum: there's no easy explanation for this one, but in simple terms it refers to the way a hypervisor exposes machine features to the virtual machine. For an in-depth explanation of this concept, please read this article by Brendan Gregg. At the time of this writing, the only known way to make Slackware work on AWS EC2 is by using PV kernels.
  • PV-GRUB: packaged as AKIs (Amazon Kernel Images), this little piece of software acts as a grub bootloader. It allows you to boot your own kernel inside your Slackware image. For more information about PV-GRUB AKIs, please read this article.
  • dom0 Kernel: the Xen Wiki states that 'Dom0, or domain zero to expand the abbreviation, is the initial domain started by the Xen hypervisor on boot. The Xen hypervisor is not usable without Domain-0 (“dom0”).' This is true, but specifically at AWS EC2 our Slackware image is also going to boot a dom0 kernel (called by PV-GRUB), as opposed to a domU kernel. Even so, you can't use your instance to virtualize anything on top of it.
  • Image: as per Rackspace's definition, an image 'is a single file that contains a virtual disk that has a bootable operating system on it.' This is the file that is going to hold your ready-to-deploy Slackware. In the AWS EC2 domain, this is called an AMI.
  • Volume: it's a block level storage volume, just like your HDD or SSD. You count on volumes to hold your data, OS included. AWS service for volumes is EBS (Elastic Block Store).
  • Snapshot: as most of you probably know, a snapshot is a kind of 'immutable picture' of a storage volume at a certain point in time. That's no different here, but it's worth mentioning because it is a core concept in the process. The final, ready-to-deploy Slackware image is going to be created from a snapshot and will only exist while that snapshot also exists, even if the volume that originated the snapshot has been removed.

We're going to get pretty used to those terms in the course of our learning here, so don't worry if you haven't got the big picture by now.


There are some requirements to complete this how to. You must fulfill each one of them before continuing. This how to will not teach you how to get there, but you can find almost all the knowledge you need right here in the Slackware Documentation Project.

  1. A local, bare metal and working Slackware64 14.1 (or current) install with multilib support.
  2. VirtualBox AMD64 (look for 'All distributions' links).
  3. At least one instance with any Linux OS in your AWS EC2 account, up and running. At the appropriate moment, we'll call this server ec2-inst.
  4. A fully functional AWS Command Line Interface, which includes its configuration.
  5. About 35GB of local storage; an SSD is strongly recommended to speed things up a lot.
  6. A very good internet connection.
  7. Installation media for Slackware64 14.1 or, [OPTIONAL] as strongly recommended, a PXE/TFTP and NFS server with the Slackware64 14.1 tree connected to a local, wired Gigabit Ethernet network. This speeds things up a lot too! This how to assumes you have it. If you don't, you'll have to adapt the relevant steps by yourself.
  8. CPU power and memory are good to have, but not a requirement. This how to was prepared (and all steps taken) on a spartan Intel Core i3 2365M 1.4GHz with 10GB RAM 1333MHz. A 3rd gen Core i7 or a Xeon with more and faster RAM will shorten compilation times, but not exponentially so, as much of the build does not parallelize. The most impressive improvement areas are an SSD, Gigabit Ethernet with a PXE/TFTP/NFS server and an internet connection with very high throughput.
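Before diving in, part of requirement 5 can be sanity-checked from a shell. A minimal sketch, assuming GNU coreutils df; /tmp is a placeholder path, point it at wherever your VirtualBox VM files will actually live:

```shell
#!/bin/sh
# Pre-flight: is there ~35GB free for the VM disk images?
need_gb=35
free_gb=$(df -BG --output=avail /tmp | tail -1 | tr -dc '0-9')
if [ "$free_gb" -ge "$need_gb" ]; then
  echo "disk space: ok (${free_gb}GB free)"
else
  echo "disk space: only ${free_gb}GB free, ${need_gb}GB recommended"
fi
```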

Now that we have everything in place, let's brush our bits.

The SlackFactory

We're going to need a local VirtualBox virtual machine to provide us with kernels and packages for the final image, so we're going to call it the slackfactory. We'll keep this machine after the process, as it'll be pretty useful for creating new images or updating existing ones.

Install and Initial Setup

First, create a new VirtualBox virtual machine with the following settings:

Name: slackfactory
Type: Linux
Version: Other Linux (64-bit)
Memory size: 512MB
Hard drive: Create a virtual hard drive now
Hard drive file type: VDI (VirtualBox Disk Image)
Storage on physical hard drive: Dynamically allocated
File location and size: leave 'slackfactory', 15GB

Now let's configure it before the first boot. Select your slackfactory VM and hit Settings. Only things that need changes are described.

  Boot Order: enable Network, then Hard Disk (put them in this order with the arrows), disable Floppy and CD/DVD
  Pointing Device: PS/2 Mouse
  Uncheck 'Enable Audio'
  In 'Adapter 1' tab, set 'Attached to: Bridged Adapter' and 'Name: eth0' (or your equivalent)
  Uncheck 'Enable USB 2.0 (EHCI) Controller'
  Uncheck 'Enable USB Controller'

Hit OK to close the dialog.

Boot the VM by hitting Start. A dialog about selecting a virtual optical disc will appear. Just dismiss it by clicking on Start. Your Slackware setup will boot from PXE.

Hit enter at the boot prompt, wait for the kernel to load, hit enter again at the keyboard map message and type root to log in.

Let us create one partition spanning the entire virtual disk. Do not create a swap partition. This differs from a standard Slackware install, where a swap partition is recommended.

root@slackware:/# fdisk /dev/sda
Command (m for help): n
Partition type:
   p primary (0 primary, 0 extended, 4 free)
   e extended
Select (default p): [enter]
Partition number (1-4, default 1): [enter]
First sector (2048-31457279, default 2048): [enter]
Last sector, +sectors or +size{K,M,G} (2048-31457279, default 31457279): [enter]
Partition 1 of type Linux and of size 15 GiB is set
Command (m for help): w
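For the record, the same dialog answers can be fed to fdisk non-interactively, which is handy if you rebuild this VM often. A sketch demonstrated on a scratch image file so nothing real is touched (on the slackfactory itself you would target /dev/sda instead):

```shell
# Scratch 15GB sparse file standing in for the virtual disk
truncate -s 15G /tmp/demo-disk.img
# n=new, p=primary, 1=first partition, blank lines take the defaults, w=write
printf 'n\np\n1\n\n\nw\n' | fdisk /tmp/demo-disk.img > /dev/null
fdisk -l /tmp/demo-disk.img     # should list one Linux partition
rm -f /tmp/demo-disk.img
```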

Then type setup and hit enter. As we do not have a swap partition, we can start at TARGET option. Highlight it and press Enter to begin installation.

Select /dev/sda1 as the installation partition, Format to quick format the partition, ext4 as the filesystem type, OK.

As the source media, we'll choose our NFS server, option 4 and hit OK. You may change this if you are using another kind of installation media.

Setup will ask you to enable DHCP, so say Yes. Next it'll ask for the NFS server IP; type yours and continue.

Next it'll ask for the source directory. This is the entire path for the Slackware package series in your NFS server, for instance, /volume1/mirrors/slackware64-14.1/slackware64. Type this path then OK twice.

If everything went fine with the NFS mount, you should see no errors. If it's not clean, check your NFS server settings and try again. Let's assume all is fine: select No and hit enter.

Now it's time to select the packages we need. Remember we are going to need Xen, to compile the kernel and other stuff in this virtual machine. This imposes some package series requirements. You must install series A, AP, D, F, K, L, N, T, TCL and X. Select those series and deselect all others, namely E, KDE, KDEI, XAP and Y. Hit OK when done.

Then select full, go grab a mug of coffee, sit back and relax while the packages are being installed. This is going to take about 17 minutes.

While you wait, it's worth mentioning that all the waiting-time estimates are approximate measures taken on the reference bare metal system where this how to was written: Intel Core i3 2365M 1.4GHz, 10GB RAM 1333MHz, SATA III SSD, Gigabit Ethernet and an NFS server capable of delivering about 90MB/sec of read throughput. This how to will provide a time estimate for the long-running tasks based on that system.

Now that the package series are installed, choose Skip at both the USB flash boot and install LILO dialogs. Select ps2 mouse, say No to GPM configuration and Yes to configure the network.

Setup will then ask for a hostname. Type slackfactory and, for the domain, local. Just accept DHCP, which is the default, leave the DHCP hostname empty, then say Yes to confirm everything.

Now we have to select the daemons we are going to run in this Slackware. We do not need too many of them. Check rc.fuse, rc.inetd, rc.messagebus, rc.rpc, rc.syslog and rc.sshd. From that list, only rc.rpc is not a default one. We need it so we can connect to the NFS server later from this machine. If you're not using NFS, you don't need rc.rpc.

Say No to the console font dialog and Yes to UTC hardware clock (VirtualBox's default). Now we hit an important configuration: the timezone.

The important thing here is to set the timezone of where you live, even for machines that are not going to be deployed in the same country (and timezone) you live in. Always stick with a single timezone on all servers you administer. You can even go straight to UTC if you feel adventurous, but do not mix timezones.

So, as the reference system is at Sao Paulo, Brazil, we're going to set the timezone in this how to to America/Sao_Paulo.
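If you ever regret the choice later, the timezone can be changed after installation without rerunning setup; this is essentially what Slackware's timeconfig does under the hood (the zone name here is just the example used above):

```shell
# Re-point the system timezone (run as root on the target machine)
ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
date    # should now report the new zone's offset
```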

Moving on, the default window manager is not important, so just hit OK and say Yes to set a root password. Choose a secure one; we assume you know what a secure root password is.

Then say OK, select EXIT in the main menu and STOP!

You do remember that we don't have LILO, right? If we reboot now, we'll not be able to boot straight into that machine anymore (at least not easily). When the dialog asks 'Would you like to reboot your system?', say a loud No.

We're back at the warm prompt. Now we're going to take another non-standard setup step: installing grub.

root@slackware:/# chroot /mnt
root@slackware:/# grub-install /dev/sda
Installation finished. No error reported.
root@slackware:/# grub-mkconfig -o /boot/grub/grub.cfg
Generating grub.cfg
Found linux image: /boot/vmlinuz-huge-3.10.17
Found linux image: /boot/vmlinuz-generic-3.10.17
  No volume groups found
root@slackware:/# exit

If you have never tried to boot into Slackware by any other means but LILO, well, you just did it.

Now our brand new Slackware can boot, but not by simply rebooting. Remember that we set VirtualBox to boot via network (PXE) in the first place? If you just reboot now, the PXE boot will take precedence and you'll see the regular Slackware setup boot prompt again. We need to change that.

At the virtual machine window menu, select Machine, then Close. In the dialog that appears, select Power off the machine. Now go to Settings, System and move Network down in the Boot Order box. Click OK and you're all set. Just start the machine again.

This of course does not apply to those using another installation medium. In that case, just remove the medium before rebooting, then:

root@slackware:/# reboot

You'll see the grub dialog and a countdown that will take you into your new slackfactory in 10 seconds. Log in as root and we're done with slackfactory basic install. So far, so good.

But let's move to a more comfortable place to work: SSH. There we can copy and paste commands. To get into our slackfactory, we need to know its IP. Type:

root@slackfactory:~# ip -4 addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet brd scope global eth0
       valid_lft forever preferred_lft forever

With the IP in hand, let's SSH into it:

user@localhost:~$ ssh root@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is 9f:60:d9:d2:63:6d:5a:d0:8d:a3:a9:47:b1:bf:84:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
root@'s password:
Last login: Fri Jan 16 03:49:57 2015
Linux 3.10.17.

Cool! So far, so good. Let's also make things a little bit easier within the command prompt by installing an extra Slackware package for bash completion.

root@slackfactory:~# mount -t nfs /mnt/
root@slackfactory:~# installpkg /mnt/extra/bash-completion/bash-completion-2.1-noarch-1.txz
root@slackfactory:~# umount /mnt

Now let's dive into the serious stuff. You can move all your open VirtualBox windows away. We're not going to need them for a while.

Oh, from now on this how to will no longer show the full server command prompt or command outputs, unless they're really necessary to understand the context. All commands in the virtual machines must run as root, so there will be no # (root) or $ (user) prefix either. This makes copying and pasting easier, although you're strongly encouraged to type the commands yourself so you learn what you're doing.

Add a Swap File

Now that we have a fully functional Slackware to play with, let's put it in good shape by adding a swap file and upgrading it.

Remember that we skipped the swap partition when setting things up? So that our slackfactory does not run out of memory, let's add its counterpart: a swap file. This is just a precaution, as this machine will be very light on memory. It requires less than 50MB to boot!

Anyway, as this is quite easy, better safe than sorry.

dd if=/dev/zero of=/swapfile1 bs=1024 count=524288
chmod 0600 /swapfile1
mkswap /swapfile1 && swapon /swapfile1
cat /proc/swaps
Filename                                Type            Size    Used    Priority
/swapfile1                              file            524284  0       -1

We just added a 512MB swap file to our slackfactory and it's already working. Now we just need to persist it so it gets loaded at boot time. To do so, edit /etc/fstab and add to it:

/swapfile1       none             swap        sw               0   0

Swap files perform worse than their partition incarnation, but that is not important here, as we're inside a VM with all its restrictions anyway. Besides, this is a good chance to get used to them: AWS EBS volumes are paid per GB, and you can save some bucks by using swap files inside your already provisioned volume. Done with swap for now.
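As an aside, the arithmetic behind that dd invocation is easy to double-check offline. This throwaway sketch (it creates a plain file in /tmp, not a real swap file) confirms that bs=1024 with count=524288 really yields 512MB:

```shell
# 524288 blocks * 1024 bytes/block = 536870912 bytes = 512MB
dd if=/dev/zero of=/tmp/swapdemo bs=1024 count=524288 2>/dev/null
stat -c %s /tmp/swapdemo    # prints 536870912
rm -f /tmp/swapdemo
```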

Upgrading and sbopkg install

Before upgrading our slackfactory, we need to configure the Slackware package management tool, aka slackpkg. To do so, edit the file /etc/slackpkg/blacklist. It should contain (just the useful bits are listed):


# This one will blacklist sbopkg:
# This one will blacklist all SBo packages:

Also, edit /etc/slackpkg/mirrors and uncomment (or add) exactly one Slackware64-14.1 mirror near you, under a comment like:

# Active Slackware64-14.1 Mirror

Now run:

slackpkg -batch=on -default_answer=y update gpg && \
  slackpkg -batch=on -default_answer=y update && \
  slackpkg -batch=on -default_answer=y upgrade-all

A lot is going to happen here. Basically, your system will be upgraded unattended. Just watch it. This is going to take about 15 minutes.

Now, run:

slackpkg new-config && \
  slackpkg -batch=on -default_answer=y check-updates && \
  slackpkg -batch=on -default_answer=y clean-system

When it tells you that You have four choices:, just say O to overwrite files. We have nothing important to keep yet.

Your system is now in a good shape. Moving on.

Let's install sbopkg. This non-standard Slackware tool will make it a lot easier to build our packages from SlackBuilds.org. In a single step, it'll get installed and the mainline SlackBuilds.org repository will be created and populated with tons of SlackBuild scripts.

mkdir packages && cd packages
wget http://sbopkg.googlecode.com/files/sbopkg-0.37.0-noarch-1_cng.tgz && \
  installpkg sbopkg-0.37.0-noarch-1_cng.tgz && \
  sbopkg -r

When sbopkg asks you (C)reate or (A)bort?:, just type C and hit enter. This should take less than a minute.

Now comes our first really AWS EC2 related task: Xen install.

Xen Install

Installing a fully functional Xen hypervisor on our Slackware is a matter of a little trick and patience. Like almost everything from now on, it must be compiled from source. This is where sbopkg helps us a lot!

So, let's do it. First, the little trick:

ln -s /usr/include/gnu/stubs-64.h /usr/include/gnu/stubs-32.h

This command deceives Xen's 32bit routines into thinking they have everything they need, including support for 32bit guest operating systems. We don't need that, as our Slackware is going to be 64bit, so we fool Xen with the above command. If you don't, you'll get the error below, but only after a long wait.

Building package for xen...

  For a 32bit guest support 32bit libs are required to build the
  stub-domain. This is usualy done by installing multilib, but
  there is also a chance that you do not need 32bit guest support,
  so you might want to try symlinking stubs-64.h to stubs-32.h
  like this:

  # ln -s /usr/include/gnu/stubs-64.h /usr/include/gnu/stubs-32.h

Now we can move on to downloading, building and installing Xen and all its dependencies, all in a single command line. The time command will tell you how long everything took in the end.

time sbopkg -B -i "yajl acpica xen"

This is going to take about 40 minutes. Grab a cup of tea.

When it's done, it's time to build our dom0 kernel.

Compiling dom0 Kernel

As we said before, the dom0 kernel is the one that will run our Slackware AWS EC2 instance. It's the most important bit of software to achieve the whole goal of this how to.

The problem is that this will be long, very long. Quantifying long: about 2 hours and 50 minutes!

So, to make things easier and let you go find something nice to do while you wait, we'll do this inside a screen session. That way you can disconnect from SSH and the build will keep running. Use that time to take a shower, give some love to your companion, ride a bike, whatever you want.

Here are the commands:

screen
cd /usr/share/doc/xen-4.3.3/dom0/
time BOOTLOADER=grub ROOTDEV=/dev/sda1 sh ./kernel-xen.sh

To disconnect from the screen session without killing the process, just type CTRL-A D. That is, press CTRL-A, release it, then press D, for detach. After that, you can just type exit to leave SSH.

If you want to check on the process, just SSH into the box again and run htop. Look for high processor usage and make or other long-named processes in the first lines. While you see them, the kernel compilation is still going on.

You can also check compilation activity by reattaching to the detached screen: type screen -r. If it's running, detach again or stay there; it's up to you. Just don't type CTRL-C, as that would stop everything and you'd have to start over. We don't want that, right?

When it finishes, just type CTRL-D or exit to close the screen.

Now we have to install and boot with this kernel to make sure it's able to drive our Slackware AWS EC2 instances.

Installing and Booting with dom0 Kernel

You've got a dom0 kernel! Congratulations!

Now we have to boot with it, which means updating the grub menu. There are two ways of doing that: the automated way, which is kind of bloated because it adds many entries to the grub boot menu that are totally useless in our context; and the custom way, which gives us only what we're going to need.

The automated way:

grub-mkconfig -o /boot/grub/grub.cfg

When you reboot, you'll see the following entries:

Slackware-14.1 GNU/Linux
Advanced options for Slackware-14.1 GNU/Linux
Slackware-14.1 GNU/Linux, with Xen hypervisor
Advanced options for Slackware-14.1 GNU/Linux (with Xen hypervisor)
Slackware Linux (Slackware 14.1)
Advanced options for Slackware Linux (Slackware 14.1)

To boot with Xen kernel, you have to select Slackware-14.1 GNU/Linux, with Xen hypervisor. The default option, Slackware-14.1 GNU/Linux, is going to fail.

If you select the advanced options in the grub menu, there will be many other boot options. The fact is that we only need to boot with Xen hypervisor/kernel by default and, as a fallback for when things go wrong, to boot with Slackware's huge kernel. We'll never need the generic kernel from default Slackware install.

So, let's clean things up and provide a custom way to boot from grub.

removepkg kernel-generic
vi /boot/grub/grub.cfg

Replace all its content with the code below:

set default=0
set fallback=1
set timeout=1

menuentry 'Slackware64 14.1, Linux 3.10.17 Xen Kernel' {
  insmod part_msdos
  insmod ext2
  set root=(hd0,msdos1)
  echo 'Loading Slackware64 14.1, Linux 3.10.17 Xen Hypervisor...'
  multiboot /boot/xen.gz
  echo 'Loading Slackware64 14.1, Linux 3.10.17 Xen Kernel...'
  module /boot/vmlinuz-xen root=/dev/sda1 ro
  echo 'Loading Slackware64 14.1, Linux 3.10.17 Xen Ramdisk ...'
  module /boot/initrd-xen.gz
}

menuentry 'Slackware64 14.1, Linux 3.10.17 Huge Kernel' {
  insmod part_msdos
  insmod ext2
  set root=(hd0,msdos1)
  echo 'Loading Slackware64 14.1, Linux 3.10.17 Huge Kernel...'
  linux /boot/vmlinuz root=/dev/sda1 ro
}
With this configuration, your slackfactory's default boot kernel is going to be Slackware64 14.1, Linux 3.10.17 Xen Kernel, and you'll wait just one second for the boot to start. If things go wrong, grub will let you know and fall back to the standard Slackware64 14.1, Linux 3.10.17 Huge Kernel. That way you'll never lose the ability to boot and fix what went wrong.

Go ahead and reboot. If you pay attention to the details, you'll see that the boot sequence has changed. If not, the login banner will clearly state:

Welcome to Linux 3.10.17-xen (tty1)

This is the sign that all went well and we've got a Xen kernel up and running.

[OPTIONAL] Custom Packages Building

We're going to need some packages in the final image, and slackfactory will provide us those packages too. As many of them are dependencies of others, we need to build and install them on our slackfactory, although they are not going to be that useful there.

Let's put our Xen package and its dependencies in a safe place. Those packages are not going to be installed to the final image, so we also need to put them in their own directory.

cd ~/packages
mkdir -p xen
mv /tmp/*.tgz xen

Before we get into the building, and for the sake of transparency, these are the packages we're going to build:

  • AWS Command Line Interface: the final image is going to run on AWS EC2, so it's only natural that we have its main tool in place in case we need.
  • Amazon CloudWatch Monitoring Scripts for Linux: this is needed so we can monitor our AWS EC2 instance resources.
  • Tor - The Onion Router: From Tor SlackBuilds README, 'using Tor can help you anonymize web browsing and publishing, instant messaging, IRC, SSH, and other applications that use the TCP protocol.' This will improve our security by adding a conundrum layer that may help to keep our SSH server keys and user key pairs away from the State hungry for souls and other bad guys out there.
  • torsocks: it's going to make our life easier if we need to open an SSH connection from our AWS EC2 instance to another via Tor.

This is a pretty basic (albeit unusual) set of packages, although they pull in many dependencies. Just enough to start. If you need more, you're free to expand the list at will. This is the right moment to do it.

But keep in mind something that is very important to know: less software, less bugs.

The advice here is to keep a pretty minimal set of software for the base image then add just what you need for each instance role after they are launched with the base image.

The final instances are going to sit in an Internet data center you do not control, with their Internet connection usually open to the world. Keep on each instance just what you really need to provide your service. This simple measure avoids potential security issues with software you don't even use.
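On a Slackware system that advice is even measurable, since every installed package leaves exactly one entry in /var/log/packages (a Slackware-specific path):

```shell
# Count installed packages: fewer entries, smaller attack surface
ls /var/log/packages | wc -l
```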

Now to the package building. First the easy ones.

AWS Command Line Interface. Takes about 3 minutes:

time sbopkg -B -b "mock pysetuptools nose pyasn1 rsa six docutils bcdoc \
  colorama jmespath python-dateutil botocore MarkupSafe Jinja2 Pygments Sphinx \
  awscli"

Torsocks and Tor. Takes about 7 minutes:

groupadd -g 220 tor
useradd -u 220 -g 220 -c "The Onion Router" -d /dev/null -s /bin/false tor
time sbopkg -B -i "torsocks tor"

Then the most complicated, Amazon CloudWatch Monitoring Scripts for Linux. Another 6 minutes.

This one needs an unofficial SlackBuild for perl-switch. Luckily, PhantomX has done this for us and provides the SlackBuild script in his awesome slackbuilds repository.

time sbopkg -B -i "perl-www-robotrules perl-net-http perl-IO-HTML \
  perl-lwp-mediatypes perl-http-date perl-html-tagset perl-html-parser \
  perl-encode-locale perl-http-message perl-http-negotiate perl-http-daemon \
  perl-http-cookies perl-file-listing libwww-perl Crypt-SSLeay \
  perl-Mozilla-CA Net-SSLeay perl-Net-LibIDN perl-IO-Socket-SSL"
mkdir -p perl-switch && cd perl-switch
wget https://raw.githubusercontent.com/PhantomX/slackbuilds/master/perl-modules/perl-switch/perl-switch.SlackBuild
wget https://raw.githubusercontent.com/PhantomX/slackbuilds/master/perl-modules/perl-switch/slack-desc
sh perl-switch.SlackBuild
mv perl-switch-2.17-noarch-1root.tgz perl-switch-2.17-noarch-1_SBo.tgz
installpkg perl-switch-2.17-noarch-1_SBo.tgz
cd ..

Now we need to move all the packages to our package directory so we can use them later.

mv /tmp/*.tgz perl-switch/*.tgz .

Right on! By now we have all the packages we're going to need. Moving on to building the image payload trees.

Payload Trees

The payload trees are a set of tarballs holding all the files necessary to build our Slackware AWS EC2 image. These files, provided by slackfactory and by you, are the ones that will drive the final instances.

Let's get into the details of each one of them.

Kernel Tree

The kernel payload tree contains all the Xen kernel files needed to boot the instance at AWS EC2.

It is made from the kernel files themselves (System.map, config, initrd and vmlinuz), the kernel modules loaded at boot time and the Xen kernel source tree produced by the dom0 kernel build. The source files are needed because you might want to build packages that require the kernel headers or other source files.

Let's build it. It's quite simple, but takes a bit long, about 19 minutes.

cd /usr/src
rm -f linux && ln -sf linux-3.10.17-xen linux
cd ~/ && mkdir -p payload && cd payload
time tar --owner=0 --group=0 -pcJvf xen-kernel.tar.xz \
  /boot/System.map-3.10.17-xen /boot/System.map-xen \
  /boot/config-3.10.17-xen /boot/config-xen /boot/initrd-3.10.17-xen.gz \
  /boot/initrd-tree/ /boot/initrd-xen.gz /boot/vmlinuz-3.10.17-xen \
  /boot/vmlinuz-xen /boot/xen-4.3.3.gz /boot/xen-syms-4.3.3 \
  /boot/xen.gz /lib/modules/3.10.17-xen /usr/src/linux-3.10.17-xen \
  /usr/src/linux /usr/src/xenlinux

Package Tree

Now for the packages. As we already have built all the required packages and placed them to a common location, it's quite easy to build this payload too.

cd ~/
time tar --owner=0 --group=0 -pcJvf payload/packages.tar.xz packages/*.tgz

Configuration Tree

The configuration payload tree is the smallest, but at the same time the most complex and security sensitive one.

The security concern comes from the fact that the files you drop here are going to drive ALL THE INSTANCES YOU DEPLOY FROM THE IMAGES BUILT WITH THEM. So, if you make a mistake here, it is going to propagate to all your instances. Bad, too bad!

For instance: say you made a weak SSH server configuration that anybody could easily crack, then created an image with this bug in it. If you deploy 100 instances from this image, all 100 instances are going to be vulnerable. Got it?

So, to make your life a little easier, the author of this guide has kindly provided a starting point, so you do not have to figure out everything to change.

But before using it, pay close attention to the following statement:

Me, Deny Dias, the author of this article, and The Slackware Documentation Project, herein represented by the SlackDocs admin team, state that:


In simple terms: if you screw something up by using the files this how to and its author provided, you are screwed alone. It's nobody's fault but yours!

If you understood that, please continue. If not, write your own set of configuration files from scratch.

Now, let's suppose you are going to use the provided configuration tree. First thing is to obtain it.

cd ~/
git clone https://github.com/denydias/slackami.git
chmod 440 slackami/etc/sudoers

Now it's a good time to revise all the configuration files individually and change anything you might want to.

You might also want to compare the files with the standard ones provided by Slackware default install. You could use diff for this task. For instance:

diff -u /etc/sudoers slackami/etc/sudoers

When you're happy with your changes, it's time to create the payload tarball.

time tar --owner=0 --group=0 -pcJvf payload/config.tar.xz slackami/boot \
  slackami/etc slackami/tmp

Now we have everything in place so we can build our actual Slackware 64 AWS EC2 image. Just to make sure:

ls -lah payload/
total 124M
drwxr-xr-x 2 root root 4.0K Jan 17 03:01 ./
drwx--x--- 9 root root 4.0K Jan 17 01:36 ../
-rw-r--r-- 1 root root  17K Jan 17 01:44 config.tar.xz
-rw-r--r-- 1 root root  13M Jan 17 03:01 packages.tar.xz
-rw-r--r-- 1 root root 111M Jan 17 01:08 xen-kernel.tar.xz

To give slackfactory some well deserved rest, let's copy those tarballs to our bare metal machine and turn slackfactory off, as it's not necessary anymore.

user@localhost:~$ scp -r root@ Downloads/
user@localhost:~$ ls -lah Downloads/payload/*.xz
-rw-r--r-- 1 user users  17K Jan 17 03:08 config.tar.xz
-rw-r--r-- 1 user users  13M Jan 17 03:08 packages.tar.xz
-rw-r--r-- 1 user users 111M Jan 17 03:08 xen-kernel.tar.xz

Then in the slackfactory prompt:

root@slackfactory:~# halt

Great! We're half way there.

The SlackAMI

Now we have to create another local VirtualBox virtual machine: slackami.

This one is going to have a short life: we'll just install Slackware, reboot, SSH into it and make some minor OS adjustments, which include loading the payload trees. After the initial adjustments, it will never boot again under VirtualBox, so we have to make it all happen at once. The good news is that this is quite fast, as we already prepared almost everything within our slackfactory.


The installation and setup process is very similar to the one for slackfactory. Thus, this how to is going to show only the differences between that process and this one.

Create the new VirtualBox virtual machine in the same way you have done for slackfactory, with one difference:

File location and size: leave 'slackami', 8GB

Move on and follow the same initial procedures. Stop only in the package series selection.

The package series required for slackami are: A, AP, D, F, L and N. Select those series and deselect all the others, namely E, K, KDE, KDEI, T, TCL, X, XAP and Y. Continue.

All the rest is exactly the same up to the point where setup finishes installing packages. As we are installing a lot less stuff this time, the package installation should now take only about 11 minutes.

After that you just need to change the following steps:

Hostname: slackami
Startup services: default (do not enable rc.rpc this time)

Install grub as usual and reboot slackami. Get the IP as we've already done for slackfactory and SSH into your slackami for the first time. Move VirtualBox windows away again. We're done with them.

Image Preparation

The initial setup is quite different from the one we did to slackfactory. Remember that we're now preparing the final image. What you do here is going to appear in your Slackware instance at AWS EC2 and will replicate to any instance you launch with this image.

First, let's create the swap file. This one is the same as we did to slackfactory, but do not add the entry to /etc/fstab, as this will come from the config payload. Just create and activate it.

dd if=/dev/zero of=/swapfile1 bs=1024 count=524288
chmod 0600 /swapfile1
mkswap /swapfile1 && swapon /swapfile1
cat /proc/swaps
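As a quick sanity check on those dd numbers (just worked arithmetic, not part of the setup):

```shell
# bs=1024 bytes x count=524288 blocks = 536870912 bytes, i.e. a 512 MiB swap file
echo "$((1024 * 524288)) bytes = $((1024 * 524288 / 1048576)) MiB"
# prints: 536870912 bytes = 512 MiB
```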

Let's make the Slackware password hashing mechanism a little bit harder to crack.

sed -i 's/ENCRYPT_METHOD SHA256/ENCRYPT_METHOD SHA512/g' /etc/login.defs
grep "ENCRYPT_METHOD" /etc/login.defs

You should see:

ENCRYPT_METHOD SHA512
Change your root password so it gets hashed with the new settings. You can use the same password.

passwd root
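If you are curious what actually changes: SHA-512 crypt hashes in /etc/shadow start with $6$, while SHA-256 ones start with $5$. A quick way to generate one for inspection, assuming a reasonably recent OpenSSL (1.1.1 or later); the salt and password below are just placeholders:

```shell
# Generate a SHA-512 crypt hash; the output starts with $6$<salt>$
openssl passwd -6 -salt saltsalt example-password
```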

Let's enable NFS just to install bash-completion, then turn it off once and for all. After that, you might want to log out and back in so bash completion loads.

sh /etc/rc.d/rc.rpc start
mount -t nfs /mnt/
installpkg /mnt/extra/bash-completion/bash-completion-2.1-noarch-1.txz
umount /mnt
sh /etc/rc.d/rc.rpc stop

[OPTIONAL] Time to remove some standard Slackware packages we really don't need in a cloud environment.

removepkg httpd kernel-generic kernel-huge kernel-modules mariadb \
  mozilla-nss openvpn samba seamonkey-solibs wpa_supplicant

'HEY! ARE YOU NUTS?', we hear you screaming! This is really going to remove the standard Slackware kernels, Apache and MariaDB. Yes, indeed! Remember: less software, less bugs.

As for the Slackware kernels, except for the kernel-headers package, which is required to compile other software, they have no use in AWS EC2 instances. You can't boot with them. That's precisely why we built a dom0 Xen kernel.

As for Apache, MariaDB and the others: if you need them later, you can easily install them again according to the role your instance plays. You don't need them on every instance you have, even though those services are not enabled by default by Slackware. Chances are you might use AWS RDS for the database or nginx for your web server. So why keep these bloated packages? Just get rid of 'em in your base image. As a bonus, this will free up some space in your tiny 8GB filesystem.

Ok, we have no kernel anymore. DO NOT REBOOT YOUR SLACKAMI MACHINE FROM THIS POINT ON. The Xen kernel is a no-go here because we have no grub boot configuration to load it.

Now, from your bare metal host, send the payload files to the slackami.

user@localhost:~$ scp -r Downloads/payload root@

And extract them. Please note that the extraction commands are different for each payload. Do not make mistakes here.

cd payload
tar -C ~/ -xvf packages.tar.xz
tar -C / -xvf xen-kernel.tar.xz --no-same-owner --same-order
tar -C / -xvf config.tar.xz --strip-components=1 --no-same-owner --same-order

Now it's a good time to check that everything is in place. cat some files to see if they match what you wrote at slackfactory. Also make sure that the /etc/sudoers permissions are -r--r-----. List /boot, /lib/modules and /usr/src to check that the Xen kernel files are there.
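To make those checks less error-prone, here is a hedged sanity-check sketch. The file list mirrors this guide's payload and the paths are assumptions, so adjust it if you changed the configuration tree:

```shell
# Verify the payload trees landed where expected; pass an alternate root to test.
check_payload() {
  root="${1:-/}"; rc=0
  for f in boot/grub/menu.lst etc/fstab etc/rc.d/rc.local tmp/firstrun etc/sudoers; do
    [ -e "$root/$f" ] || { echo "MISSING: $f"; rc=1; }
  done
  if [ -e "$root/etc/sudoers" ]; then
    # sudoers was set to 0440 (-r--r-----) earlier in this guide
    mode="$(stat -c '%a' "$root/etc/sudoers")"
    [ "$mode" = "440" ] || { echo "WRONG MODE on etc/sudoers: $mode"; rc=1; }
  fi
  return $rc
}
# run it on the slackami machine with: check_payload /
```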

There are three very important pieces for the image to boot alright at AWS EC2. Two files, /boot/grub/menu.lst and /etc/fstab, are listed below together with their permissions and contents. The third piece is all the kernel files, including the modules. Without those pieces, there will be no boot at all.

-rw-r--r-- 1 root root 154 Jan 17 01:36 /boot/grub/menu.lst
default 0
timeout 0

title Slackware64 Xen
  root (hd0,0)
  kernel /boot/vmlinuz-xen root=/dev/xvda1 console=hvc0
  initrd /boot/initrd-xen.gz

-rw-r--r-- 1 root root 392 Jan 17 01:36 /etc/fstab
# Default system volumes
/dev/xvda1       /                ext4        defaults         1   1
devpts           /dev/pts         devpts      gid=5,mode=620   0   0
proc             /proc            proc        defaults         0   0
tmpfs            /dev/shm         tmpfs       defaults         0   0
/swapfile1       none             swap        sw               0   0

# Additional volumes

Also check that /tmp/firstrun and /etc/rc.d/rc.local are there and that their contents look fine. They are essential to give you access to your booted instances.

If everything looks fine, move on.

Let's upgrade our Slackware. This time we do not have to configure slackpkg as its configuration files already came with the config payload, so we just upgrade now.

slackpkg -batch=on -default_answer=y update gpg && \
slackpkg -batch=on -default_answer=y update && \
time slackpkg -batch=on -default_answer=y upgrade-all

This is going to take about 8 minutes. Less software, less bugs.

Now we have to be careful with the configuration files, as we already have the final ones in place and we do not want them to be replaced by the upgrade process. Let's check what has changed then.

find /etc -name "*.new" -o -name "*.orig"

From that list, we do not want /etc/login.defs.new, /etc/ssh/sshd_config.new and /etc/ssh/ssh_config.new to replace our files, as the final versions are already in our config payload. So we just:

rm -rf /etc/login.defs.new /etc/ssh/sshd_config.new /etc/ssh/ssh_config.new
find /etc -name "*.new" -o -name "*.orig"

Looks better now. Run:

slackpkg new-config && \
  slackpkg -batch=on -default_answer=y check-updates && \
  slackpkg -batch=on -default_answer=y clean-system

When it tells you that You have four choices:, just say O to overwrite the remaining configuration files. Then clean up:

find /etc -name "*.orig" -delete

Even though you have managed to protect the configuration payload files, some other files might have been rewritten. For instance, slackpkg does not check for changes in files under /etc/rc.d. So, as our openssh package was updated, its /etc/rc.d/rc.sshd was rewritten. Just grab the file from your original packages and restore the contents you need in the final image.
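One hedged way to do that restore, sketched as a tiny helper. The /var/cache/packages path in the example call is an assumption about where slackpkg keeps downloaded packages; adapt it to wherever your openssh package actually lives:

```shell
# Pull a single file back out of a package tarball and make it executable.
extract_one() {
  pkg="$1"; member="$2"; dest="$3"
  tar -xOf "$pkg" "$member" > "$dest" && chmod 755 "$dest"
}
# e.g.: extract_one /var/cache/packages/n/openssh-*.txz etc/rc.d/rc.sshd.new /etc/rc.d/rc.sshd
```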

We also need to recreate the user for Tor. This is the right place to do it.

groupadd -g 220 tor
useradd -u 220 -g 220 -c "The Onion Router" -d /dev/null -s /bin/false tor

Time to install our packages payload.

cd ..
installpkg packages/*.tgz
find /etc -name "*.new" -delete

Now let's complete the installation of the Amazon CloudWatch Monitoring Scripts for Linux. Quite simple:

cd /opt
wget http://ec2-downloads.s3.amazonaws.com/cloudwatch-samples/CloudWatchMonitoringScripts-v1.1.0.zip
unzip CloudWatchMonitoringScripts-v1.1.0.zip
rm -v CloudWatchMonitoringScripts-v1.1.0.zip

Remember that we are not going to initialize sbopkg this time. We can do this later within the running instance. This will save some space.

We have to clean up all the SSH server keys, as they are going to be recreated for each new instance deployed, which means they will all be different.

rm -v /etc/ssh/ssh_host_*key* /etc/ssh/moduli
ls -la /etc/ssh/ssh*
-rw-r--r-- 1 root root 2453 Jan 17 01:36 /etc/ssh/ssh_config
-rw-r--r-- 1 root root 4153 Jan 17 01:36 /etc/ssh/sshd_config
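Don't worry about leaving the machine without host keys: sshd's startup regenerates any that are missing (Slackware's rc.sshd does roughly the equivalent of ssh-keygen -A at boot; the exact invocation may differ). You can watch the mechanism safely in a scratch directory:

```shell
# -A creates any missing default host keys; -f points at an alternate root
# so this demo touches nothing real.
demo=$(mktemp -d)
mkdir -p "$demo/etc/ssh"
ssh-keygen -A -f "$demo"
ls "$demo/etc/ssh/"
```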

Let's also create the user we'll use to log in to our new live instances. The devops user is not optional if you have used the provided config payload: the script /tmp/firstrun is going to need it to load your AWS EC2 key pairs. Also set an initial password for it (different from the root one).

useradd -m -g users -G wheel,power,netdev -s /bin/bash devops
passwd devops

We need to clean up all the mess. We don't need that /boot/grub/grub.cfg from the installation anymore, so let's get rid of it too. AWS EC2 will use /boot/grub/menu.lst instead, which is part of our configuration payload.

rm -vrf ~/.bash_history ~/.gnupg ~/packages ~/payload /boot/grub/grub.cfg

Now a little trick that will save you a lot of time in the next step. This fills all the empty space in your slackami partition with zeroes. That way, when we compress the image to send to AWS EC2, it'll get smaller and transfer faster.

This does not take long. Less than a minute.

time cat /dev/zero > /tmp/zerofill; \
  sleep 1 && sync && \
  rm -v /tmp/zerofill && \
  sleep 1 && sync
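To see why this pays off, compare how xz handles zeroed blocks versus random (i.e. leftover) data. The gap is enormous:

```shell
# 8 MB of zeroes squeezes down to a couple of kilobytes at most...
dd if=/dev/zero bs=1M count=8 2>/dev/null | xz -c | wc -c
# ...while 8 MB of random data barely shrinks at all
dd if=/dev/urandom bs=1M count=8 2>/dev/null | xz -c | wc -c
```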

Now, a final touch to the image. In case you decide to make this image publicly available, or if others at your work are going to use it, you do not want them to see the commands you've typed before, right? Send your bash history to limbo with:

export HISTFILE=/dev/null

Time to say goodbye to your short-lived Slackware installation. If you made something wrong up to here, you'll discover it only at AWS EC2. If you did everything right, that's where you'll find out too.

root@slackami:~# halt
Slackware is dead. Long live Slackware.

Creating and Uploading the Image

Now we are back at the bare metal machine, ready to create our image file. This is done with VirtualBox files and utilities, with the help of other standard tools.

Right to the commands:

user@localhost:~$ cd ~/VirtualBox\ VMs/slackami && \
  time VBoxManage internalcommands converttoraw slackami.vdi slackami.img && \
  time tar -cJf slackami.tar.xz slackami.img && \
  rm slackami.img
user@localhost:~$ time scp slackami.tar.xz ec2-user@ec2-inst:

These are long steps. The conversion of the VirtualBox vdi disk to raw is quite fast, about a minute, but the others take quite long to complete: compression takes about 1:15h, plus another 31 minutes for the scp.

This is where CPU power and enough bandwidth can really help. But hey, it could be much worse if we hadn't filled the image with zeroes above. If you've done everything right, your compressed, ready-to-upload image weighs about 700MB.

As you can see, the scp command is going to send your image to an EC2 instance you have already prepared at AWS. This is part of the requirements, and this how to will not teach you how to set that one up.

Your image is finally sitting there. We can now do something useful with it so it gets ready to become an AMI, an Amazon Machine Image.

Preparing the EBS Volume and Taking a Snapshot

We are close to a happy ending. Now it's time to make our image fit the AWS EC2 dos and don'ts. The first thing is to make it live inside an EBS, Elastic Block Store, volume.

Open your AWS EC2 console if you haven't yet and log in to your account. Go to Volumes in the left menu and click the Create Volume button. A configuration dialog is going to open. It must be set as follows:

Type: Magnetic
Size (GiB): 8
IOPS: leave blank
Availability Zone: default (current) should be fine
Snapshot ID: leave blank
Encryption: unchecked

The important thing here is the volume size. It must match our slackami volume, which is 8GB. The difference between GB (1000-based) and GiB (1024-based) is irrelevant here, so don't bother yourself with useless conversions.
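For the record, the sizes should line up exactly: the 8GB slackami disk and the console's 8 GiB both come out to 8 × 1024³ bytes, the same figure fdisk reports for the volume later on:

```shell
# 8 GiB in bytes; matches fdisk's "8589934592 bytes" for /dev/xvdf
echo $((8 * 1024 * 1024 * 1024))
# prints: 8589934592
```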

When done, click Create. Give it some time to roll the engines.

When the new volume shows the 'available' state, select it, click the Actions button at the top and choose the Attach Volume function.

In the Instance field, just put your cursor there and a list with the available instances should appear. If you have many instances and it gets difficult to choose one, just type the instance ID right in the field.

For the Device field, leave the default (/dev/sdf) unless this device is already in use by the instance. In that case, you have to change it to a suitable one, from f to p.

Now just click in Attach.

Time to login to your instance:

user@localhost:~$ ssh ec2-user@ec2-inst
user@ec2-inst:~$ sudo -i
root@ec2-inst:~# cd /home/ec2-user

The volume we've just created has no filesystem. As such, we're not able to mount it. But it's attached, and what we want is to 'burn' our Slackware image onto it. So, type this:

time tar -xf slackami.tar.xz -O | dd of=/dev/xvdf bs=10M

This is another long step with no output; it should take about 10 minutes. But you can monitor it from the outside. Open a new terminal on your bare metal host, log in to your ec2-inst as we just did up there and type (no need to sudo):

watch -n5 'sudo kill -USR1 $(pgrep ^dd)'

Now go back to the terminal where dd is running. You'll see something like:

0+39225 records in
0+39225 records out
865392128 bytes (865 MB) copied, 67.3646 s, 12.8 MB/s

Do not get fooled: the volume size is 8GB, not the 700MB of the compressed image file we've uploaded to EC2. The task finishes when all of that 8GB gets transferred.
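As an aside: if the dd on your instance is new enough (GNU coreutils 8.24 or later, an assumption about your instance, so check first), status=progress gives you a live byte count without the kill -USR1 trick. The demo below exercises the flag on a harmless copy; on the instance you would simply append it to the dd that writes /dev/xvdf:

```shell
# status=progress makes dd print its own transfer statistics as it runs
dd if=/dev/zero of=/dev/null bs=1M count=8 status=progress
```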

After the image has been burned to the volume, you can close this other SSH session.

The volume has our data, but the system does not see its partition yet. So let's have fdisk write the partition table. Quite simple and quick, although it might seem scary.

fdisk /dev/xvdf

Disk /dev/xvdf: 8589 MB, 8589934592 bytes
86 heads, 1 sectors/track, 195083 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc69ef5d3

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdf1            2048    16777215     8387584   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now you can mount it!

mount /dev/xvdf1 /mnt
ls -la /mnt/boot
cat /mnt/boot/grub/menu.lst
ls -la /mnt/etc/fstab
ls -la /mnt/etc/sudoers
cat /mnt/etc/sudoers
ls -la /mnt/lib/modules
ls -la /mnt/usr/src
ls -la /mnt/tmp/firstrun

Did you recognize that stuff? It's your image!

This is the perfect moment to fix anything you got wrong during the slackami creation steps. Change whatever configuration you'd like, delete files, remove stuff.

This how to leaves it as an exercise to the reader's imagination: all the possibilities that this ability to mount the volume image into an existing instance opens up for AMI maintenance over time.

All set? Let's unmount and go for the snapshot.

umount /mnt

Go back to the EC2 console, right click the image volume and select Detach Volume. You can give it a name tag if you'd like, for instance, 'slackware64-14.1-image'.

Right click it again and choose Create Snapshot. Fill the dialog with:

Name: Slackware64 14.1
Description: Snapshot for Slackware64 14.1 AMI

Then click Create. EC2 console will tell you 'Snapshot Creation Started' and give you a clickable snapshot ID. Click on it and you'll go to the Snapshots panel.

In that panel, take note of the Snapshot ID, for instance, snap-1234abcd. Also pay attention to the Status column. When it says 'complete' for our snapshot task, you may continue. The snapshot details also bring a Progress indicator; the number is the percentage of the task done. Wait for it to finish. It should take a couple of minutes.

When it's done, go back to your bare metal host, as we are ready for the final act with the image before launching it.


Almost there! This last step must happen at your bare metal host prompt, where you have AWS CLI already set up and properly configured to access your AWS account with enough privileges to do what you need. By now you should already have this knowledge.

In this step, we are going to register the previously created snapshot (immutable), taken from a regular volume (mutable), as a ready-to-deploy AMI, Amazon Machine Image. Although you can do it straight from the EC2 console (select the snapshot, right click, Create Image), it's much easier to do it with AWS CLI. So let's stick with it.

We need some extra information to complete this step: the PV-GRUB kernel ID and the snapshot ID. This last one we already have from the previous step. Let's get the PV-GRUB kernel ID.

Our image is 64bit, so we need to focus on a 64bit PV-GRUB too. Our PV-GRUB should also live in the AWS EC2 region we chose for our image (sa-east-1 in the example; change it accordingly if you chose a different one). So, let's search for the chosen one with AWS CLI:

aws ec2 describe-images --region sa-east-1 --owners amazon --filters \
  "Name=architecture,Values=x86_64" "Name=name,Values=pv-grub-*.gz" \
  --output table

A nice table is going to pop up in front of you. It shows all the 64bit PV-GRUB kernels available in the region sa-east-1. You are looking for the ImageId of the PV-GRUB that is 'configured for (hd0)/boot/grub/menu.lst', as written in the Description field. In this region's case, the ImageId we want is aki-5553f448.

Now that we have all we need to register our image, it's just a matter of doing it.

aws ec2 register-image --region sa-east-1 --name "Slackware64 14.1" \
  --description "Slackware64 14.1 AMI, default user: devops" \
  --architecture x86_64 --kernel-id aki-5553f448 \
  --root-device-name "/dev/sda1" \
  --block-device-mappings "[{\"DeviceName\": \"/dev/sda\",\"Ebs\": {\"SnapshotId\": \"snap-1234abcd\"}}]" \
  --output json

The output of this command is going to be something similar to:

    "ImageId": "ami-5678efab"

This is AWS telling you that it got your command, understood it right, and your AMI registration is in progress.

Let's verify it:

aws ec2 describe-images --region sa-east-1 --image-ids ami-5678efab --output json

In that output, there are four important fields to check:

"BlockDeviceMappings": [
    {
        "DeviceName": "/dev/sda",
        "Ebs": {
            "SnapshotId": "snap-1234abcd",
            ...
"KernelId": "aki-5553f448",
"RootDeviceName": "/dev/sda1",

DeviceName and RootDeviceName must match the device and partition from our VirtualBox slackami image, /dev/sda and /dev/sda1, respectively. SnapshotId and KernelId must match the ones we obtained earlier.

If all this info looks fine, run to the EC2 console, click AMIs in the left menu and your AMI is going to gloriously appear there, under the 'Owned by me' category. Yes, it's all yours!

You can give it a tag name. Go ahead, you deserve it!


The moment of the truth has finally arrived. Time to launch a new instance from our brand new, made out of the blue, AMI!

In the EC2 console, AMIs menu, select your new private AMI and click Launch. In the Choose Instance Type step, just leave the defaults, as we are just testing for now. A t1.micro instance is just enough.

For the Configure Instance step, defaults are fine too.

In Add Storage, leave it 8GB in size and set General Purpose for Volume Type. Fortunately, AWS has recently changed its behavior for root volumes, leaving them out of that insane 'Ephemeral Storage' thing that wiped your root partition data on every reboot. General Purpose is faster and as reliable as Magnetic these days. Leave Delete on Termination checked.

Add a name tag to the instance if you wish.

If you already have security groups set up, choose one. If not, add a new one (the default) and allow at least the default rule (SSH, TCP, 22, Anywhere). If you don't do that, you'll have no access to your instance's SSH server. Security groups are AWS terminology for firewall.

Then, review and…. LAUNCH.

Aha! Gotcha! Not yet! Choose your key pair so you can SSH into the instance. If you already have key pairs created, choose one. If not, create a new one here. Just don't choose 'Proceed without a key pair', as you'll lose the ability to log in to your instance. Mark the checkbox saying that you know what you are doing and…

LAUNCH!!! (for real this time)

On the next page, click View Instances. The engine is rolling. Give it some time. While you wait, it would be nice to understand what is happening under the hood.

  1. As soon as you hit the Launch button for the second time, you've sent a signal to AWS EC2 to build a new instance using the AMI you've just created.
  2. AWS EC2 started to crunch the bits: setting up a virtual machine inside its Xen domain on a bare metal server you'll never see in your entire life, creating a new volume to be the root volume of this new instance, copying the data from the AMI to that new volume, attaching the volume to the new instance and, finally, booting it.
  3. As soon as the new instance passes the so-called POST phase, PV-GRUB takes control as the boot loader. It'll read the contents of (hd0)/boot/grub/menu.lst to check what kernels are available to boot and where they are. Well, we know what kernels are available, as we've built that file. (hd0) in this context is the volume attached by Xen as /dev/sda.
  4. As there is a kernel to boot, the same one we built as the dom0 Xen kernel, that kernel starts to boot.
  5. From now on, this is just a normal boot as you have seen many times.
  6. Things get different again when Slackware's SysVinit calls /etc/rc.d/rc.local. At this moment, rc.local is going to call an external script named /tmp/firstrun.
  7. firstrun is going to call the AWS EC2 instance metadata web service two times. First, it'll set the hostname based on the Private DNS name you can see at the AWS EC2 console. Second, it is going to retrieve the public key you've set in the last launch step and save it to /home/devops/.ssh/authorized_keys. That's why the devops user is not optional (although you can change that if you also change the firstrun script). firstrun will then delete itself, wait ten seconds and reboot the instance so the hostname gets applied.
  8. After the first reboot, there will be no firstrun, so rc.local is going to run normally up to its end, which in turn will start Tor.
  9. Tor is going to create a circuit in the Tor network and expose the hidden service for the instance's SSH server.
  10. And that will be just the first of many boots of your instances. From now on, they are all going to look exactly the same.
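To make steps 6 and 7 more concrete, here is a hedged sketch of the metadata calls a firstrun-style script performs. The real script ships in the config payload; only the metadata URL layout below is the standard EC2 one, while the function and variable names are illustrative. It is written as a function so the endpoint can be pointed at something other than the live metadata service:

```shell
# Fetch the two pieces firstrun needs from the instance metadata service.
# On EC2 you would call: firstrun_fetch http://169.254.169.254/latest/meta-data
firstrun_fetch() {
  md="$1"
  new_hostname="$(curl -s "$md/local-hostname")"
  pubkey="$(curl -s "$md/public-keys/0/openssh-key")"
  # the real script then runs: hostname "$new_hostname", writes $pubkey to
  # /home/devops/.ssh/authorized_keys, deletes itself and reboots
}
```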

Congratulate yourself. You did it! You have a Slackware instance built by yourself out of the blue.


When you defined a key pair for the instance just before the launch, by creating a new one or using an existing one, you also managed to save the private key, right? You're going to need it now.

Also, in the AWS EC2 console, click on your new instance and get the Public DNS assigned to it.

To connect to the instance, just type:

user@localhost:~$ ssh -i path/to/private.key devops@public.dns
The authenticity of host 'public.dns (ip.ip.ip.ip)' can't be established.
ED25519 key fingerprint is so:me:th:in:g.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'public.dns' (ED25519) to the list of known hosts.
Linux 3.10.17-xen.

And that proves that you really did it.

Next Steps

You do remember that we installed Tor and the Amazon CloudWatch Monitoring Scripts for Linux in our image, right?

Well, Tor is already running. Get the hidden hostname for your instance:

sudo -i
cat /var/lib/tor/hidden_service/hostname

Now, get Tor and torsocks up and running on your local host (just like we did here) and connect to your instance. Just like that:

user@localhost:~$ torsocks ssh -i path/to/private.key devops@crazything.onion

Oh, and remember: Tor slowness is not a bug.

As for the Amazon CloudWatch Monitoring Scripts for Linux, we'll leave that to you. It's pretty simple to configure.

Where do you go from here? Well, up to the sky. You are in the cloud.


The author of this how to would like to personally thank some people who made it possible to gather in one piece all the knowledge you have just read.

  • Patrick Volkerding, for Slackware.
  • Soydaner Ulker, Skaperen et al. from the LQ post, who wrote the articles already mentioned at the start of this how to and which were fundamental in sparking my first thoughts on the whole process.
  • Eric Hameleers, who gave me the initial steps on how to write to The Slackware Documentation Project.
  • Wagner Hiendlmayer, who gave me the opportunity to run a shitty business made out of even shittier software. This forced me out of the comfort zone I was in and made me learn so much new stuff that, with his valuable help, the business came to be cool (at least I think it is) and may even become profitable in the near future.

