Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at Dipartimento di Informatica, Università di Torino, Italy. He has a master's degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and journals.

Installing EndeavourOS ARM on a PineBook Pro

I have already blogged about installing Arch on a PineBook Pro: the first article and the second article.

In this blog post, I’ll describe how to install EndeavourOS on a PineBook Pro.

As detailed at https://arm.endeavouros.com/endeavouros-arm-install/, there are three ways to install EndeavourOS on an ARM device like the PineBook Pro. In this blog post, I’ll experiment with the first one.

This method consists of a two-step installation process:

  1. use the standard EndeavourOS ISO, booting that from a PC, to install the installation image on an external device (in this example, I will use a USB stick);
  2. then boot the PineBook Pro with the created USB stick and use Calamares to finalize the installation on the very same device you booted from.

Note that I will install EndeavourOS for Arm on an external device, NOT on the eMMC of the PineBook Pro. In this article, I’ll leave a few hints on how to do that on the internal eMMC.

First step

On a standard PC, boot the EndeavourOS ISO (in this example, I’m using the Cassini 2023-03 R2):

After adjusting the keyboard layout and connecting to the Internet, choose “EndeavourOS ARM Image Installer”.

As noted, you first need to insert a USB stick. If you plan to install it on the PineBook Pro’s internal eMMC, you must extract the eMMC and place it in a USB adapter. Then, choose “Start ARM Installer”. This is a textual installation procedure, so the installer will open a terminal in full-screen mode.

After pressing OK, you must select the ARM computer (in this case, “PineBook Pro”):

Concerning the file system, in all my experiments, BTRFS has never worked: when rebooting the USB stick (see later), the screen stays blank forever after selecting the boot media. So, the only working solution is EXT4:

Then, you have to type the device where you want to install the image; the dialog shows all the devices, and you must enter the path of the whole device, NOT of a possibly existing single partition (in this case, it’s “/dev/sdb”):

Small note: unfortunately, the colors of this textual installer are not ideal 😉

Then, the procedure will prepare the device and download an archive from the Internet for the image to put on the USB stick (it’s a big image, so be patient):

Ultimately, it tells you about the temporary username and password for the installer copied on the USB stick. It also suggests unmounting the USB stick with a file manager. In the live environment, you can use Thunar to unmount it. You can recognize the USB stick to unmount because it should show two mounted partitions (the first one is about 128 MB):

Unmounting one of them will also unmount the other one.

Second step

It’s time to boot the PineBook Pro with the USB stick we created with the abovementioned process. If, in the previous procedure, you created the installer on the eMMC (connected with a USB adapter), you should put the eMMC back inside the PineBook Pro.

When the PineBook Pro starts, you should find a way to boot from the USB stick. If you’ve always used the Manjaro installation that comes with the PineBook Pro, you have U-Boot as the bootloader (see my previous blog post for a screenshot of U-Boot booting from the USB stick). If you’re lucky, it should give precedence to the USB stick (I’ve read that this is not always the case, depending on the version of the installed U-Boot). In this example, I have Tow-Boot as the bootloader, so when you see the message telling you to press ESCAPE (or Ctrl-C) to enter the boot menu, please do so:

And then, select the USB as the boot media (of course, if you installed the image on an SD, choose accordingly):

After some textual logs, you should get to the graphical environment for the actual installation. The window manager is Openbox, so, unlike the standard EndeavourOS installer for PC, you don’t have a fully-fledged desktop environment (Xfce):

Now, you can choose whether to install an “Official” (e.g., KDE or GNOME) or a “Community” edition (e.g., Sway).

Remember: the installation will be performed on the same media you have just booted. In this example, it’s a USB stick. Again, if you want to install EndeavourOS on the internal eMMC, you first need to extract the eMMC, put it on a USB adapter, do the first procedure described above, put the eMMC back into the PineBook Pro, and start the installation from the eMMC.

As you can see from the screenshots above, there’s no section for partitioning the disk. The partitions have already been created during the first procedure. This installation procedure only finalizes the installation.

I’ve tried both KDE and GNOME.

Enjoy your EndeavourOS installation 🙂

If you like it on a USB stick (remember, it should be a fast USB), you might want to install it on the eMMC (see the notes in this blog post about that). I have already done that, and it works much better than the original Manjaro!

My Ansible Role for “Oh My Zsh” and other CLI programs

I have already started blogging about Ansible; in particular, I have shown how to develop and test an Ansible role with Molecule and Docker, also on Gitpod.

This blog post will describe my Ansible role for installing “Oh My Zsh” and several command line programs. The role also installs the starship prompt or the p10k theme. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.

This role is for my personal installation and configuration and is not meant to be reusable.

The role can be found here: https://github.com/LorenzoBettini/ansible-molecule-oh-my-zsh-example.

My other post has already described many parts related to zsh installation, configuration, and verification with Ansible and Molecule.

The main file “tasks/main.yml” is as follows:
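
Since the file itself is not reproduced here, this is a minimal sketch of its structure (hypothetical excerpt; the included file name is an assumption, the real file is in the linked repository):

# hypothetical excerpt of tasks/main.yml
- name: Install zsh and git
  ansible.builtin.package:
    name:
      - zsh
      - git
    state: present
  become: true

- name: Install Oh My Zsh and its plugins
  ansible.builtin.include_tasks: oh-my-zsh.yml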

Besides “zsh” and “git” (which are needed for installing other things, and, in general, I need it daily), this installs several command line tools, like ripgrep, procs, dust, exa, bat, zoxide. Note that, depending on the operating system, these tools must be installed differently (e.g., from the package manager or by downloading a binary distribution). In a few cases, the package names differ depending on the operating system. In such cases, the default names are defined in “vars/main.yml” and properly overridden depending on the operating system:
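
The usual pattern (sketched here; the actual variable names are in the repository) defines the default name in “vars/main.yml”:

# hypothetical excerpt of vars/main.yml
fd_package: fd

and then loads an OS-specific variable file (e.g., a “vars/Ubuntu.yml” with “fd_package: fd-find”) when one exists:

# hypothetical task: load OS-specific variables when present
- name: Load OS-specific variables
  ansible.builtin.include_vars: "{{ item }}"
  with_first_found:
    - files:
        - "{{ ansible_distribution }}.yml"
      skip: true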

The task also installs a few fonts (nerd fonts, fonts with emoji, and fonts with icon characters), which are needed because “starship” and “p10k” use a few icon characters; the same holds for other tools like exa.

“Oh My Zsh” is installed by cloning its Git repository; the same holds for some external plugins. The task also sets “zsh” as the default shell for the current user.

Finally, depending on the variable with_starship, which defaults to true, it installs the starship prompt or the p10k theme. These are handled in the corresponding included files “starship.yml” and “p10k.yml”, respectively.
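
The selection can be sketched like this (the file names are the ones mentioned above; the task names are assumptions):

# choose the prompt depending on with_starship
- name: Install the starship prompt
  ansible.builtin.include_tasks: starship.yml
  when: with_starship

- name: Install the p10k theme
  ansible.builtin.include_tasks: p10k.yml
  when: not with_starship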

Note that both files copy the corresponding template for “.zshrc” (depending on starship or p10k, the contents of “.zshrc” are slightly different). For “p10k”, it also copies my theme configuration; for “starship”, I’m OK with the default prompt. The copied “.zshrc” contains several aliases for the command line programs installed by this role (e.g., “ls” is aliased to “exa” commands).

Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. In such cases, I test the “starship” installation and verify that the tools that differ for their installations in different operating systems are installed correctly. This is the “verify.yml” (remember that this installs “starship” and NOT “p10k”, so it ensures that only the former is installed):

Concerning “p10k,” I have a separate scenario with a different “verify.yml” (I test this only on Arch since “starship” and “p10k” installations and configurations are the same in all three operating systems):

However, this “verify.yml” could also be used for the other operating systems since it performs the same verifications concerning installed programs. It differs only in the final part.

Of course, this is tested on GitHub Actions and can be developed directly on the web IDE Gitpod.

I hope you find this post useful for inspiration on how to use Ansible to automatize your Linux installations 🙂

TLP: Limiting Battery Charge on LG Gram in Linux

I had already blogged about how to limit battery charge on an LG Gram in Linux. In that post, you had to manually set the threshold “80” in the file “/sys/devices/platform/lg-laptop/battery_care_limit”.

With TLP, the procedure is easier and more automatic.

First, you must install tlp (remember that tlp conflicts with power-profiles-daemon, so you have to disable the latter first or uninstall it). In Arch-based distros:
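
For example:

sudo pacman -S tlp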

Ensure that the tlp service is enabled at boot, and start it the first time (“sudo systemctl start tlp”).

By running “sudo tlp-stat”, you should see near the end this line:

Edit the file “/etc/tlp.conf” and uncomment the following lines (note there’s one also for the start of charging, but that option doesn’t seem to be supported in this laptop):
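
They should look like this sketch (the exact values in the shipped file may differ; on this laptop only the stop threshold is effective):

#START_CHARGE_THRESH_BAT0=75
STOP_CHARGE_THRESH_BAT0=80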

Restart the service (“sudo systemctl restart tlp.service”), and it should be already active (run “tlp-stat” again):

That’s all. This will persist on reboot. However, this will not persist if you hibernate and return from hibernation (unless you restart the tlp service as shown above).

Customizing KDE in Arch Linux on a PineBook Pro

In a previous blog post (and another one), I showed how to install Arch Linux on a PineBook Pro.

In this blog post, I’m showing how I customize KDE on that installation (thus, it is similar to this other one for GNOME).

I’m not using Wayland because I still don’t find it usable in Plasma. Indeed, I haven’t even installed the Wayland session: I’m using X11.

First of all, the default screen resolution of KDE is too tiny for my eyes. Thus, I set 150% (fractional) scaling:

After that, you have to log out and log in to see the setting in effect.

I also enable “Tap-to-click” in the “Touchpad” settings.

Then, I set “Ctrl+Alt+T” as a shortcut for opening the terminal (Konsole). Actually, the shortcut is already configured in the “Custom Shortcuts”, but it’s not enabled by default in this distribution. It’s just a matter of selecting the corresponding checkbox (the “Examples” checkbox must be selected first to enable the other checkbox):

Then, I install an AUR helper. I like “yay,” so I first installed the needed dependencies:
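
For example:

sudo pacman -S --needed base-devel git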

And then
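
the usual AUR build steps (a sketch):

git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si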

I install, by using “yay”, the “touchegg” program for touchpad gestures. (Plasma Wayland already provides touchpad gestures, but I prefer to use the X11 session, as I said above). For ARM, we need to install the AUR package:
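
Presumably something like (assuming the AUR package is named “touchegg”):

yay -S touchegg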

You will get this warning, but proceed anyway: it compiles and works fine:

I customize touchegg touchpad gestures for KDE by creating the file “~/.config/touchegg/touchegg.conf” with the following contents:

In particular, note the 3-finger gestures (for the “Expose” effect, hide all windows, switch workspace, etc.).

Let’s start “touchegg” and verify that gestures work
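
A sketch (touchegg ships a systemd service):

sudo systemctl start touchegg.service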

And then let’s enable it so that it automatically starts on the subsequent boots:
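
For example:

sudo systemctl enable touchegg.service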

Let’s move on to ZSH, which I prefer as a shell:
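
For example:

sudo pacman -S zsh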

Since I’m going to install “Oh My Zsh” and other Zsh plugins, I install these fonts (remember from the previous post that I had already installed “noto-fonts” and “noto-fonts-emoji”) and finder tool (“curl” is required for the installation of “Oh My Zsh”):

Let’s install “Oh My Zsh” by running the following command as documented on its website:
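
The command documented on the “Oh My Zsh” website is:

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"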

When asked, I agreed to change my default shell to Zsh. In the end, we should see the prompt changed to the default one of “Oh My Zsh”:

I then install some external plugins:
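
Most likely the ones mentioned below (suggestions and syntax highlighting), cloned into the Oh My Zsh custom plugins directory as their READMEs document:

git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting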

And I enable them by editing the ~/.zshrc, in particular, the “plugins” line (I also enable other plugins that are part of the OMZ distribution):
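
For example (assuming the two plugins cloned above):

plugins=(git zsh-autosuggestions zsh-syntax-highlighting)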

Once saved, you have to start a new terminal with zsh to see the plugins in action (remember that, until you log out and log in, the default shell is still BASH, so you might have to run “zsh” manually to switch to ZSH in the currently logged session).

Besides the syntax highlighting for commands, you have completion after “cd” (press TAB), excellent command history (with Ctrl+R), suggestions, etc.

Let’s switch to the “Starship” prompt. Let’s run the documented installation program:
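
As documented on the Starship website:

curl -sS https://starship.rs/install.sh | sh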

Now, let’s edit the ~/.zshrc file again; we comment out the line starting with “ZSH_THEME”, and we add to the end of the file:
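
The line to add is the documented Starship initialization:

eval "$(starship init zsh)"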

Opening another ZSH shell, we should see the fantastic Starship prompt in action, e.g.,

To quickly search for file names from the command line, I install “locate”, enable its periodic indexing and run the indexing once the first time (if you’re on a BTRFS file system, you might want to have a look at this older post of mine):
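
A sketch (on Arch, “locate” is nowadays provided by the plocate package):

sudo pacman -S plocate
sudo systemctl enable plocate-updatedb.timer
sudo updatedb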

Then, you should be able to look for files with the command “locate” quickly.

KDE uses “Baloo” for file indexing and searching, e.g., from “KRunner” (Alt+Space) or the application launcher (Alt+F1 or simply Meta). I like it, and it quickly keeps the index up to date. However, by default, Baloo also indexes the file contents, which uses too many resources, so I disable the content indexing feature. I blogged about configuring Baloo in another post.

Speaking about KRunner, I find it extremely slow on this laptop; especially the first time you run it, it might take a few seconds to show up. I often use it to search for files or programs, and I need such a mechanism to be fast. I found that the application launcher is instead fast to start (Meta key). However, when you start typing to search, the results are all mixed: applications, files, and settings are all together, and it might not be easy to find what you need:

For this reason, I right-click on the KDE icon and select the “Alternative” “Application Menu”:

This launcher has a valuable feature to categorize the search results. For example, with the exact search string as above, I get results clearly separated:

I also use the “yakuake” drop-down terminal a lot:
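
For example:

sudo pacman -S yakuake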

I run it once (by default, it’s invoked by pressing “F12”), and it will start automatically upon logging in.

That’s all! I hope you enjoyed this post. 🙂

You might also look at my other posts on this PineBook Pro laptop.

Xtext, monorepo and Maven/Tycho

TL;DR: Xtext sources are now in a single Git monorepo (https://github.com/eclipse/xtext), and the build infrastructure is based entirely on Maven/Tycho (Gradle is not used anymore).

Background

A few years ago, Xtext sources were split into 6 separate GitHub repositories. I did not take part in that decision (I guess at that time, I wasn’t even a committer).

I think that at that time, the splitting was carefully thought out, aiming at making contributions and maintenance easier. In fact, while Xtext is mainly based on Eclipse, it has several core parts that are independent of Eclipse (including LSP support and plain Maven/Gradle projects support, without the Eclipse UI).

Those core parts were built with Gradle. I’m not a fan of Gradle: I’ve always found Gradle gives you too much power as a build tool, not to mention that, typically, the build breaks with new releases of Gradle. On the contrary, Maven has been around for several years and is rock solid. It’s rigid, so you have to obey its rules. But if you do, everything works.
Moreover, its files (POM) and build infrastructures always look the same in Maven projects, so you know where to look when you deal with a Maven project. For Eclipse projects, Tycho complements Maven. Tycho is not perfect, but everything works if you stick with its rules.

The rigidness of Maven/Tycho usually forces you to keep the build simple. And that’s good! 🙂

Of course, the Eclipse projects of Xtext already had to be built with Maven/Tycho. Thus, the build of Xtext was split across 6 repositories with a mixture of Gradle and Maven/Tycho builds.

As a result, Xtext was a nightmare to build and maintain. That’s my opinion, of course, but if you read Xtext GitHub issues, PRs, and discussions, you see I wasn’t the only one to say that.

For example, the main problem was that if you changed something in one of the core projects (i.e., on its Git repository), you had to make sure that all the other projects did not break:

  1. once the core project was built successfully, archive its artifacts (p2 repository and Maven artifacts) in Jenkins (JIRO in Eclipse)
  2. trigger downstream jobs, making sure to use the archived artifacts corresponding to the branch with the changes
  3. wait for all downstream jobs to finish
  4. if anything breaks, go back from the beginning and start again!

Moreover, while it was easy to parametrize the Maven and Gradle files to use the archived Maven artifacts, it was a bit harder to use the archived p2 repositories in the target platforms of the Eclipse downstream projects. Typically, you had to temporarily modify and commit the target platform files to refer to the archived p2 repositories corresponding to the current feature branch. Before merging, you had to remember to revert the changes to the target platform files.

In the end, the code in the separate Git repositories was actually coupled, so splitting it across repositories has always sounded wrong to me.

The release process used to be complex as well. Again, that was due to Git repository splitting and a mixture of Maven/Gradle builds.

A big change

I had always wanted to bring Xtext sources back into a single Git repository and to use Maven/Tycho only. I told the project lead Christian Dietrich about that many times, but he was always reluctant due to the time and effort it would have taken. In January 2023, I finally managed to convince him to give me a chance 🙂

I first started porting all single repositories to Maven/Tycho, removing the Gradle build.

Things were not so easy, but not even impossible. A few tests were failing in the new build infrastructure, but Christian and Sebastian Zarnekow helped me to bring everything to a working situation. We also took the chance to fix a few things in that respect.

In this stage, I also took the chance to set up CI builds on GitHub Actions as well (our primary CI is, of course, the Eclipse Jenkins instance, “JIRO”).

Then, I moved to the repo merging. This part was easy. I had prepared the single Git repositories thinking about repository merging, so fixing the foreseen merge conflicts was just a matter of adjusting a few things in only a few files (mainly the parent POM).

Finally, I simplified the release infrastructure a lot. We used to have a vast Jenkinsfile for the release, with many complex operations based on Git cloning all the repositories, tagging, and tons of shell operations.

Now, the whole release is carried on during the Maven build (as it should be, in my opinion), including the uploading to the Eclipse web directories. The Jenkinsfile is very small.

The Oomph setup has been simplified as well. Getting a development workspace still takes time, but that is due to dependency resolution and the first Eclipse build, and it all takes place automatically.

Now, the complete Maven/Tycho build of Xtext, which consists of compiling and running all the tests (almost 40k tests!), takes about an hour and a half on Linux (Jenkins and GitHub Actions) and about 2 hours on macOS in GitHub Actions. That is much less than before, when you had to wait for the builds of the single Git repositories and downstream projects. The build of a single Git repository was, of course, faster, but the overall time for building all the Git repositories was much longer in the end.

Final thoughts

I hope this vast change improves the maintainability of the project. It should also help contributions from the community. Maybe I’m biased, but personally, I find working and contributing to Xtext much better now, as it used to be in the old days 😉

To me, the effort was very worthwhile!

In the end, it didn’t take long. Of course, I don’t measure that in the months passed from the start of the porting (I started in the first days of January, and the first release with the new infrastructure, version 2.31.0, was at the end of May, though the first milestone with the new infrastructure was at the beginning of April). The time I actually spent working on it was much less: I didn’t work on it full-time, but in my spare time 🙂

In my humble opinion: if your project’s infrastructure and build are too complex, don’t change the build tools: simplify the build itself and keep it simple. 😉 A single big Git monorepo is easier to deal with than 6 small Git repositories, especially when those repositories contain code meant to belong together.

Many thanks to Christian and Sebastian for letting me work on this restructuring and promptly helping me during the process! 🙂

Hyprland and the Variety wallpaper manager

I’ve just started experimenting with the Wayland compositor Hyprland and wanted to use my favorite wallpaper manager, Variety. Unfortunately, Variety does not support Hyprland out of the box. However, it’s easy to make it work also on Wayland.

I’m going to use Arch Linux in this blog post.

First of all, you must install “swaybg”, a wallpaper tool for Wayland compositors, and “variety”:
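
For example:

sudo pacman -S swaybg variety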

Now, start variety and do the first-time configuration. Currently, trying to change the wallpaper will not work.

Variety creates the directory “~/.config/variety/scripts”. Edit the file “set_wallpaper” inside that directory and search for the block starting like this:

Change it like that (you could also remove the part about SWAYSOCK if you want or if you don’t plan to use “sway” at all):
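
A sketch of the kind of change (assuming the script receives the wallpaper path as its first argument; the surrounding lines in the real script will differ):

# also match Hyprland, via XDG_CURRENT_DESKTOP, besides sway (SWAYSOCK)
if [ "$XDG_CURRENT_DESKTOP" = "Hyprland" ] || [ -n "$SWAYSOCK" ]; then
    # replace a previously started swaybg with the new wallpaper
    killall swaybg 2>/dev/null
    swaybg -i "$1" -m fill &
    exit 0
fi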

This relies on the XDG_CURRENT_DESKTOP environment variable to be set accordingly, which should be like that automatically; you might want to check that:
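
For example:

echo $XDG_CURRENT_DESKTOP   # should print "Hyprland"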

Restart Variety, and now you can change the wallpaper!

Stay tuned for more posts on Hyprland 🙂


Installing Arch Linux with BTRFS on a PineBook Pro (external storage)

This is a follow-up to the article Installing Arch Linux on a PineBook Pro (external storage); differently from the previous post, this one is based on more automatic mechanisms, so it will be easier to copy and paste commands once a few variables have been correctly and carefully set. Moreover, in this post, I’ll install KDE instead of GNOME. Finally, we’ll use BTRFS for the main partition, instead of EXT4.

This post will describe my experience installing Arch Linux on a PineBook Pro on external storage (a micro SD card in this example). Thus, the Manjaro default installation on the eMMC will not be touched. You should use a fast card, or the overall experience will be extremely slow.

The post is based on the instructions found at https://wiki.pine64.org/wiki/Pinebook_Pro_Software_Release#Arch_Linux_ARM.

The installation process consists of two steps:

  • First, install the official Arch Linux ARM distribution; this will not be enough to have many hardware parts working (e.g., WiFi, battery management, and sound).
  • Then, add the repositories with kernels and drivers for the PineBook Pro.

The first part must be performed from an existing Linux installation on the PineBook Pro. I will use the Manjaro installation that comes with the PineBook Pro. The second part will be performed on the installed Arch Linux system on an external drive (a USB stick in this example). Since after the Arch Linux installation, the WiFi is not working, for this part, you need to connect to the wired network, e.g., with an ethernet USB adapter.

Finally, I’ll also show how to install KDE.

First part

This is based on https://wiki.pine64.org/wiki/Installing_Arch_Linux_ARM_On_The_Pinebook_Pro.

I insert my SD card, which is /dev/sda. (Use “lsblk” to detect that.) By the way, typically, an SD card is detected with a device name of the form “/dev/mmcblkX”, but in this example, the SD card is inserted in a USB adapter, so its device name has the typical form “/dev/sdX”.

From now on, I’m using this device. In your case, the device name might be different.

From now on, all the instructions are executed as “root” from a terminal window; thus, I first run:
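
For example (or an equivalent way to get a root shell):

sudo su -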

I will do the following steps in a directory of the root’s home:

We need to download and extract the latest release of Tow-Boot for the Pinebook Pro from https://github.com/Tow-Boot/Tow-Boot/releases. At the time of writing, the latest one is “2021.10-005”:
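
A sketch (take the exact URL and file name from the releases page):

wget https://github.com/Tow-Boot/Tow-Boot/releases/download/release-2021.10-005/pine64-pinebookPro-2021.10-005.tar.xz
tar xf pine64-pinebookPro-2021.10-005.tar.xz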

Now we flash Tow-Boot to /dev/sda (replace this with the device you are using).

Remember: this will wipe all the contents of the specified device. Moreover, make sure you specify the correct device, or you might overwrite the eMMC of the computer.

To make things easily reproducible and minimize the chances of specifying the wrong device name (which is extremely dangerous), I will use environment variables:
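
A sketch (the image name inside the extracted directory is an assumption; double-check the device!):

export DEVICE=/dev/sda    # ADJUST THIS: the target device, as detected with lsblk
dd if=pine64-pinebookPro-2021.10-005/shared.disk-image.img of=$DEVICE bs=1M conv=fsync status=progress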

The process creates the partition table for the device, with the first partition for Tow-Boot. This first partition must not be modified further. As you’ll see in a minute, we skip the first partition when we further partition the disk.

The output should be something like this:

Now, we must create the partitions on the USB stick. The process is documented step-by-step here https://wiki.pine64.org/wiki/Installing_Arch_Linux_ARM_On_The_Pinebook_Pro#Creating_the_partitions, and must be followed strictly:

The instructions must be followed strictly concerning, at least, the very first partition (the boot partition) that will be created, which must NOT touch the one created in the previous step. Then, after creating the boot partition, I’ll do things slightly differently: I will create a SWAP partition, which is not contemplated in the above instructions (the PineBook Pro has only 4 GB of RAM, which is easy to exhaust, so it’s better to have a SWAP partition to avoid system hangs). Then, I’ll create the root partition.

These are the essential contents of my terminal where I follow the above-mentioned instructions (since I had already used this USB stick for some experiments before writing this blog post, fdisk detects an existing ext4 signature). Remember, though, that I created a SWAP partition that was not described in the above-mentioned instructions:

Now I format the boot, the swap, and the root partitions. I will use EXT4 for the boot partition and BTRFS for the root partition.

Again, to increase reproducibility and avoid possible mistakes, I’m going to define additional environment variables, based on the one I have already created above, to refer to the 3 partitions:
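
Assuming the partitions ended up numbered 2 (boot), 3 (swap), and 4 (root), after the Tow-Boot partition:

export BOOTPART=${DEVICE}2
export SWAPPART=${DEVICE}3
export ROOTPART=${DEVICE}4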

It’s worthwhile to double-check that all the environment variables refer to the right partitions:

Remember that I’m using the environment variables set above:
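
A sketch of the formatting commands:

mkfs.ext4 $BOOTPART
mkswap $SWAPPART
mkfs.btrfs $ROOTPART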

Now we mount the root partition to create the BTRFS subvolumes, following a standard scheme, and we unmount it:
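
One common scheme (assumed here and in the fstab below) uses “@” for “/”, plus subvolumes for home, logs, and cache:

mount $ROOTPART /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@log
btrfs subvolume create /mnt/@cache
umount /mnt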

Now we have to mount all the subvolumes to the corresponding directories (the “-m” flag creates the mounting subdirectory if it does not exist); I’m enabling BTRFS compression (by default, the compression level for zstd will be 3):
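
A sketch, using the subvolumes created above:

mount -m -o subvol=@,compress=zstd $ROOTPART /mnt
mount -m -o subvol=@home,compress=zstd $ROOTPART /mnt/home
mount -m -o subvol=@log,compress=zstd $ROOTPART /mnt/var/log
mount -m -o subvol=@cache,compress=zstd $ROOTPART /mnt/var/cache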

Then, we mount the boot partition on “/mnt/boot” (again, by creating that):
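
For example:

mount -m $BOOTPART /mnt/boot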

Let’s verify that the layout of the destination filesystem is as expected:
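
For example:

findmnt -R /mnt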

Now, we download the tarball for the rootfs of our USB stick installation. The instructions are once again taken from the link mentioned above, and they also include the verification of the contents of the downloaded archive:
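
A sketch, following the linked instructions (the archive is the generic aarch64 tarball):

wget http://os.archlinuxarm.org/os/ArchLinuxARM-aarch64-latest.tar.gz
wget http://os.archlinuxarm.org/os/ArchLinuxARM-aarch64-latest.tar.gz.md5
md5sum --check ArchLinuxARM-aarch64-latest.tar.gz.md5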

And we extract the root filesystem onto the mounted root partition of our USB stick:
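
As in the official instructions:

bsdtar -xpf ArchLinuxARM-aarch64-latest.tar.gz -C /mnt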

This is another operation that takes time.

Now, we must create the “/etc/fstab” on the mounted partition. To do that, we need to know the UUID of the two partitions by using “blkid”. You need to take note of the UUID from the output (which will be completely different according to the used external device):
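
For example:

blkid $BOOTPART $SWAPPART $ROOTPART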

Let’s take note of the UUIDs (remember, they will be different in your case) and create the corresponding environment variables:
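
For example (paste the values printed by blkid; these are placeholders):

export BOOTUUID=...    # UUID of the boot partition
export SWAPUUID=...    # UUID of the swap partition
export ROOTUUID=...    # UUID of the root partition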

We create the file “/etc/fstab” in “/mnt” according to the BTRFS subvolumes and to the other two partitions. This can be done by running the following command, which relies on the values of the 3 environment variables that we have just created:
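
A sketch of such a command (with mount options matching the ones used above):

cat >> /mnt/etc/fstab <<EOF
UUID=$ROOTUUID  /           btrfs  rw,noatime,compress=zstd,subvol=@      0 0
UUID=$ROOTUUID  /home       btrfs  rw,noatime,compress=zstd,subvol=@home  0 0
UUID=$ROOTUUID  /var/log    btrfs  rw,noatime,compress=zstd,subvol=@log   0 0
UUID=$ROOTUUID  /var/cache  btrfs  rw,noatime,compress=zstd,subvol=@cache 0 0
UUID=$BOOTUUID  /boot       ext4   rw,noatime                             0 2
UUID=$SWAPUUID  none        swap   defaults                               0 0
EOF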

Finally, we need to create the file “/mnt/boot/extlinux/extlinux.conf”; the directory must be created first:
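
mkdir -p /mnt/boot/extlinux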

Once again, the contents are generated by the following command that relies on the environment variable for the UUID of the root partition:
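
A sketch, based on the linked wiki instructions (kernel image and DTB paths as shipped in the Arch Linux ARM rootfs):

cat > /mnt/boot/extlinux/extlinux.conf <<EOF
LABEL Arch Linux ARM
KERNEL /Image
FDT /dtbs/rockchip/rk3399-pinebook-pro.dtb
APPEND initrd=/initramfs-linux.img console=tty1 rootflags=subvol=@ root=UUID=$ROOTUUID rw rootwait
EOF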

Note that we must specify “rootflags=subvol=@” because the “/” of the installed system is on the subvolume “@”. Otherwise, the system can boot, but then nothing else will work.

We can now unmount the filesystems
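
For example:

umount -R /mnt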

And we can reboot into the (hopefully) installed Arch Linux on the USB stick to finish a few operations. Remember that we need a wired connection for the next steps.

Upon rebooting, you should see the two entries (if you let the timeout expire, it will automatically boot the first entry):

After we get to the prompt, we can log in with “root” and password “root” (needless to say: change the password immediately).

Let’s connect a network cable (you need a USB adapter for that), and after a few seconds, we should be online. We verify that with “ifconfig”, which should show the assigned IP address for “eth0”.

Since there’s no DE yet, I suggest you keep following the official web pages (and this blog post) by connecting to the PineBook Pro via SSH so that it will be easy to copy and paste commands into the terminal window of another computer. Moreover, when logged into the PineBook Pro directly, you will see lots of logging information directly on the console (I guess this could be prevented by passing specific options to the kernel, but we’ll install a DE later, so I don’t care about that much). The SSH server is already up and running in the PineBook Pro installed system, so once we know the IP address from the output of “ifconfig”, we can connect via SSH. However, root access via SSH is disabled, so we must connect with the other predefined account “alarm” and password “alarm” (again, you might want to change this password right away):

Once we’re logged in, since “sudo” is not yet configured, we switch to root:

We have to initialize the pacman keyring:
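
As per the Arch Linux ARM instructions:

pacman-key --init
pacman-key --populate archlinuxarm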

The guide https://wiki.pine64.org/wiki/Installing_Arch_Linux_ARM_On_The_Pinebook_Pro ends at this point.

What follows are my own instructions I usually run when installing Arch.

In particular, I configure time, network time synchronization, and timezone (Italy, in my case):
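
A sketch:

timedatectl set-timezone Europe/Rome
timedatectl set-ntp true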

The next step is required to make gnome-terminal work (and it’s also part of the Arch standard installation instructions):

Edit “/etc/locale.gen” and uncomment “en_US.UTF-8 UTF-8” and other needed locales.

Generate the locales by running:
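
locale-gen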

Edit the “/etc/locale.conf” file, and set the LANG variable accordingly, for example, for the UTF-8 locale above:
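
LANG=en_US.UTF-8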

We could run a first system upgrade
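
That is:

pacman -Syu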

I don’t know if that’s strictly required because we’ll add the additional repository for the PineBook Pro in a minute. However, just in case, it might be better to update the system.

Let’s reboot and verify that everything still works.

The kernel at the time of writing is

NOTE: By the way, I noted that if I want to boot from the USB stick, it’s better to use the right-hand side USB port (which is USB 2) instead of the left-hand side port (USB 3). Otherwise, the system seems to ignore the system on the USB stick and boots directly to the installed Manjaro system.

Second part

As mentioned above, I suggest connecting to the PineBook Pro via SSH. In any case, what follows must be executed as “root” (“su -“).

Let’s now add the repositories with kernels and drivers specific to PineBook Pro.

The project is documented here: https://github.com/SvenKiljan/archlinuxarm-pbp, and these are the contents of the additional repository that we’ll add in a minute https://pacman.kiljan.org/archlinuxarm-pbp/os/aarch64/.

Note that this project also provides a way to install Arch Linux directly with these repositories, with a procedure similar to the one in the first part. I prefer to install official Arch Linux first and then add the additional repositories, though.

The addition of the PineBook Pro repository to an existing Arch Linux installation and the installation of specific kernel and drivers is documented as a FAQ: https://github.com/SvenKiljan/archlinuxarm-pbp/blob/main/FAQ.md#how-do-i-migrate-from-other-arch-linux-arm-releases-for-the-pinebook-pro.

The addition of the PGP key and the repositories to “/etc/pacman.conf” is done by pasting the following commands (remember, as the user “root”):
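
A sketch (see the linked FAQ for the exact key ID and commands):

pacman-key --recv-keys <key ID from the FAQ>
pacman-key --lsign-key <key ID from the FAQ>
cat >> /etc/pacman.conf <<'EOF'

[archlinuxarm-pbp]
Server = https://pacman.kiljan.org/archlinuxarm-pbp/os/$arch
EOF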

Let’s now synchronize the repositories
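
That is:

pacman -Syu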

And let’s install the packages specific to the PineBook Pro (note that we’re going to install the Linux kernel patched by Manjaro for the PineBook Pro):

Of course, we’ll have to answer “y” to the following question:

Let’s reboot and verify that everything still works (again, by connecting via SSH).

Now, we should be using the new kernel:

Before installing a DE, I prefer creating a user for myself (“bettini”) and configuring it as a “sudoer”. (We must install “sudo” first).
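
A sketch:

pacman -S sudo
useradd -m -G wheel bettini
passwd bettini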

Then (by simply running “visudo”), we enable the users of the group “wheel” in “/etc/sudoers”; that is, we uncomment this line:
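
That is, the line:

%wheel ALL=(ALL:ALL) ALL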

Then, I try to re-connect with my user and verify that I can run commands with “sudo” (e.g., “sudo pacman -Syu”).

Install KDE

As usual, I’m still doing these steps via SSH.

I’m going to install KDE with some fonts, pipewire media session, firefox, and the NetworkManager:
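
Something along these lines (the exact package list is not reproduced here, so this is an assumption):

pacman -S plasma konsole dolphin firefox networkmanager pipewire pipewire-media-session noto-fonts noto-fonts-emoji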

It’s about 680 MB of packages to install, so please be patient.

Now, I enable the primary services (the login manager, the NetworkManager to select a network from KDE, and the profile daemon for switching between power profiles, e.g., “Balanced” and “Powersave”):
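
A sketch:

systemctl enable sddm.service
systemctl enable NetworkManager.service
systemctl enable power-profiles-daemon.service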

OK, time to reboot.

The graphical SDDM login manager should greet us this time, allowing us to get into KDE and select a WiFi connection.

NOTE: I always hear a strange noise when the login manager or the KDE DE starts. It also happens with the pre-installed Manjaro. It must be the sound card that gets activated…

IMPORTANT NOTE: Upon rebooting, the WiFi does not always work (it looks like the WiFi card is not seen at all); that also happens with Manjaro. The only solution is to shut down the computer (i.e., NOT simply rebooting it) and boot it again.

Here’s the KDE About dialog:

And of course, once installed, let’s run “neofetch”:

That’s all for now!

In a future blog post, I’ll describe my customizations to KDE (installed programs and configurations).

Stay tuned! 🙂

My script for automated Arch Linux installation

In a previous post, I reported the procedure for installing Arch Linux. The procedure is basically the one shown in the official Arch Wiki.

After a few manual steps, this post will show my installation script for automatically installing Arch Linux. I took inspiration from https://github.com/ChrisTitusTech/ArchTitus, but, differently from that project, my script is NOT meant to be reusable. The script is heavily tailored to my needs. I describe it in this post in case it might inspire others to follow a similar approach 🙂

The script (which actually consists of several scripts called from the main script) is available here: https://github.com/LorenzoBettini/my-archlinux-install-script.

I’ll describe the script by demonstrating its use for installing Arch Linux on a virtual machine (VirtualBox). However, I use the script for my computers. Also, for real computers, I perform the installation via SSH from another computer for the same reasons I have already explained.

The virtual machine preparation is the same as in my previous post, so I’ll start from the already configured machine.

I start the virtual machine with the Arch ISO mounted:

Inside the live environment, the SSH server is already up and running. However, since we’ll connect with the root account (the only one present), we must give the root account a password. By default, it’s empty, and SSH will not allow you to log in with a blank password. Choose a password. This password is temporary; if you’re in a trusted local network, you can choose an easy one.

Then, I connect to the virtual machine via SSH.

From now on, I’ll insert all the commands from a local terminal connected to the virtual machine.

Initial manual steps

First, I ensure the system clock is accurate by enabling network synchronization NTP:
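
That is:

timedatectl set-ntp true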

Then, I partition the disk according to my needs. My script heavily relies on this partitioning scheme consisting of four partitions:

  • the one for booting in UEFI mode, formatted as FAT32, 300 MB (it should be enough for UEFI, but if unsure, go with 512 MB)
  • a swap partition, 20 GB (I have 16 GB of RAM, and if I want to enable hibernation, i.e., suspend to disk, that should be enough)
  • a partition meant to host common data that I want to share among several Linux installations on the same machine (maybe I’ll blog about that in the future), formatted as EXT4, 30 GB
  • the root partition, formatted as BTRFS, the rest of the disk

To do that, I’m using cfdisk, a textual partition manager, which I find easy to use. In the virtual machine, the disk is “/dev/sda”:

The partitions must be manually formatted:
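
A sketch, matching the scheme above:

mkfs.fat -F 32 /dev/sda1
mkswap /dev/sda2
mkfs.ext4 /dev/sda3
mkfs.btrfs /dev/sda4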

Sometimes, I have problems with the keyring, so I first run the following commands that ensure the keyring is up-to-date:
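
Typically something like:

pacman -Sy archlinux-keyring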

I’m going to clone the installation script from GitHub, so I need to install “git”:
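
That is:

pacman -Sy git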

And now, I’m ready to use the installation script.

Running the installation script

First, I clone the installation script from GitHub:
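
That is:

git clone https://github.com/LorenzoBettini/my-archlinux-install-script.git
cd my-archlinux-install-script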

The script has no parameters but relies on a few crucial environment variables that must be set appropriately. The first four variables refer to the partitions I created above. The last one is the name for the machine (in this example, it will be “arch-gnome”):
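
For example (the variable names here, except INST_HOSTNAME, are assumptions; the script’s checks show the exact names it expects):

export INST_EFI_PART=/dev/sda1
export INST_SWAP_PART=/dev/sda2
export INST_DATA_PART=/dev/sda3
export INST_ROOT_PART=/dev/sda4
export INST_HOSTNAME=arch-gnome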

The script will check that all these variables are set. However, it does not check whether the specified partitions are correct, so I always double-check the environment variables.

And now, let’s run it:
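
./install.sh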

The script will do all the Arch Linux installation steps. These automatic steps correspond to the ones I showed in my previous post, where I ran them manually.

When the script finishes (it takes a few minutes), I have to perform a few additional manual operations before rebooting. I’ll detail these latter manual operations at the end of the post. In the next section, I’ll describe the script’s parts.

The installation script(s)

As I anticipated, the script actually consists of several scripts.

The main one, install.sh, is as follows:
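
A hypothetical sketch of its overall structure (the real script is in the repository):

#!/bin/bash
set -e
./00_check.sh
./01_mount-partitions.sh
./02_pacstrap.sh
./03_prepare-for-arch-chroot.sh
# logs end up in the new user's home (the user itself is created by the last script)
mkdir -p /mnt/home/bettini
arch-chroot /mnt /root/04_configuration.sh |& tee /mnt/home/bettini/04_configuration.log
arch-chroot /mnt /root/05_bootloader.sh |& tee /mnt/home/bettini/05_bootloader.log
arch-chroot /mnt /root/06_user.sh |& tee /mnt/home/bettini/06_user.log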

Note that the installation logs are saved in the “bettini” user’s home directory (the last run script will create the user). These can be inspected later.

The main script calls the other scripts.

We have the script for checking that all the needed environment variables are set (00_check.sh):
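
A sketch (assuming the hypothetical variable names used earlier):

for var in INST_EFI_PART INST_SWAP_PART INST_DATA_PART INST_ROOT_PART INST_HOSTNAME; do
  if [ -z "${!var}" ]; then
    echo "$var is not set!"
    exit 1
  fi
done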

The script 01_mount-partitions.sh mounts the partitions and, for the main BTRFS partition, also creates the BTRFS subvolumes:

The script 02_pacstrap.sh performs the “pacstrap” (it also sets the mirrors) and generates the /etc/fstab:
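
Besides setting the mirrors, the two key commands are sketched here (the package list is an assumption):

pacstrap -K /mnt base linux linux-lts linux-firmware btrfs-progs grub efibootmgr sudo
genfstab -U /mnt >> /mnt/etc/fstab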

Then, 03_prepare-for-arch-chroot.sh prepares the script for arch-chroot: it copies all the shell scripts into the /mnt/root:

In fact, by looking at the main script, you see that further shell scripts are executed using arch-chroot.

The script 04_configuration.sh takes care of all the configuration steps:
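
A hypothetical excerpt of the kind of steps it performs:

ln -sf /usr/share/zoneinfo/Europe/Rome /etc/localtime
hwclock --systohc
sed -i 's/^#en_US.UTF-8/en_US.UTF-8/' /etc/locale.gen
locale-gen
echo "LANG=en_US.UTF-8" > /etc/locale.conf
echo "$INST_HOSTNAME" > /etc/hostname
cat >> /etc/hosts <<EOF
127.0.0.1  localhost
::1        localhost
127.0.1.1  $INST_HOSTNAME
EOF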

Note the use of the environment variable INST_HOSTNAME for creating the file /etc/hosts. I’m using en_US.UTF-8 for the language, but other local configurations are for Italy.

The script 05_bootloader.sh configures and installs GRUB. It also configures GRUB for the “mem_sleep_default” parameter (for suspend) and for hibernation; in that respect, it also configures mkinitcpio accordingly (note the “resume” hook):
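
A hypothetical excerpt (the real script is in the repository):

# take the swap partition UUID from the generated /etc/fstab
SWAP_UUID=$(awk '$3 == "swap" { print $1 }' /etc/fstab)
# kernel parameters for suspend and hibernation
sed -i "s|GRUB_CMDLINE_LINUX_DEFAULT=\"|&mem_sleep_default=deep resume=$SWAP_UUID |" /etc/default/grub
# add the "resume" hook to mkinitcpio and regenerate the initramfs
sed -i 's/filesystems/filesystems resume/' /etc/mkinitcpio.conf
mkinitcpio -P
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Arch
grub-mkconfig -o /boot/grub/grub.cfg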

Note that it uses the generated /etc/fstab to retrieve the UUID of the swap partition.

Finally, the script 06_user.sh creates my user and configures it so that I can use “sudo”:
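
A hypothetical excerpt:

useradd -m -G wheel bettini
sed -i 's/^# %wheel ALL=(ALL:ALL) ALL/%wheel ALL=(ALL:ALL) ALL/' /etc/sudoers
# hypothetical mount point of the shared partition
chown bettini:bettini /media/common
echo "Remember to set the password for user bettini!"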

It also sets the right permissions for my user in the mount point where I want the shared partition.

That’s all. The script also prints a message to remind me to set the password for my user.

Final manual steps

I execute a few manual steps to finalize the installation when the script finishes.

First of all, I once again use arch-chroot:
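
arch-chroot /mnt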

And I set the password for my user:
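
passwd bettini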

Then, I install KDE or GNOME (not both).

For KDE, I would run the following:
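
Something like (an assumption; the exact package list isn’t reproduced here):

pacman -S plasma konsole dolphin sddm
systemctl enable sddm.service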

For GNOME, I would run the following:
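
Something like (again, an assumption):

pacman -S gnome gdm
systemctl enable gdm.service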

And that ends the installation.

I exit chroot and unmount /mnt:
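
exit
umount -R /mnt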

As you see, most of the steps are performed by the script! 🙂

I can restart the system (in this example, the virtual machine) and enjoy the installed Arch!

That’s another reason why I love Arch Linux so much: the installation can be easily scripted!

It took me some time to finalize all the scripts, but using a virtual machine, especially with snapshots, wasn’t that hard. I encourage you to bake your installation script. It’ll be fun 🙂

By the way, before exiting chroot and rebooting, I usually run my Ansible playbook for installing other programs (either KDE or GNOME) and configuring the system and user according to my needs. I’ll blog about such a playbook in the future.

KVM Virtual Machine Manager and Virtual Machines on external drives

Last year, I blogged about my first experiences with KVM and Virtual Machine Manager.

Then, I stopped using KVM because I’ve always found VirtualBox easier for my experiments. In particular, with VirtualBox, it is trivial to store virtual machines on an external drive (I mean, a fast external SSD, of course): you specify a directory on the external drive, and all information about the virtual machine will be stored there. Then, you attach the drive to another computer with VirtualBox and open the virtual machine from the external drive. Easy!

Things are more complicated with KVM, QEMU, and Virtual Machine Manager. Even making QEMU access an external drive requires additional configuration steps.

In this blog post, I’ll summarize the steps to achieve that.

I’ll first show the manual export/import procedure for the machines’ metadata information. Then, I’ll show a different approach based on symlinks.

It was time to try KVM again because it’s faster than VirtualBox.

I’ll describe the installation steps for EndeavourOS and pure Arch Linux. I guess the steps for other distributions are similar.

Installation and configuration

Let’s install a few packages for KVM, QEMU, and the Virtual Machine Manager:
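
For example (iptables-nft will replace iptables, which explains the question below):

sudo pacman -S virt-manager qemu-desktop libvirt edk2-ovmf dnsmasq iptables-nft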

If you get this message, accept to remove “iptables”:

To use your user without entering the root password, we need to edit the file “/etc/libvirt/libvirtd.conf” and uncomment the following lines:

Or, append them at the end of the file:
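
The lines in question should be these (as in the libvirt documentation):

unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"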

Add your user account to the “libvirt” group.
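
For example:

sudo usermod -aG libvirt $USER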

Now comes the crucial part for letting QEMU handle machines on external drives: we need to add our user to “/etc/libvirt/qemu.conf”. This can be done by setting the appropriate entries in the file or by simply appending the entries at the end of the file:
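
With my username, the entries are:

user = "bettini"
group = "bettini"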

If you want to start the virtualization service and the default virtual network automatically at boot, you run:
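
That is:

sudo systemctl enable libvirtd.service
sudo virsh net-autostart default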

Since I’m not using virtual machines daily, I prefer to start them when needed, so I don’t run the above commands. Of course, I must remember to run these commands (note that for the network it’s “start” instead of “autostart”) before starting the “Virtual Machine Manager”:
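
sudo systemctl start libvirtd.service
sudo virsh net-start default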

Remember you can always use:
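
sudo systemctl status libvirtd.service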

to see the service status and possible errors shown when running this command.

OK, time to reboot now.

Let’s create a virtual machine on an external drive

I created a directory “kvm/images” on my external USB SSD to store the virtual machine images.

Let’s start the “Virtual Machine Manager” program. We should see “QEMU/KVM”:

Let’s create a new virtual machine with the leftmost toolbar button.

I specify a local ISO.

I don’t create a pool for ISOs and use “Browse Local” to select an ISO in my external drive.

In this example, I will install EndeavourOS on the virtual machine. I have to select the operating system manually (start typing, and you get completions):

Time to allocate resources for the virtual machine. I’m giving the VM half my RAM and half my cores:

Now here’s the essential part of disk selection. Remember, I want to use my external drive, so I select custom storage and press “Manage”:

In the following dialog, I use the “+” button in the bottom left corner to create a new pool:

I give the pool the name “images” and specify the directory I mentioned above on my external drive:

After pressing “Finish”, I select the created pool and add a “Volume” (with the other “+” button)

I give the disk image a proper name and enough size (recall that the image will NOT allocate all the size immediately, but only on-demand):

Select the volume and press “Choose Volume”:

On the final dialog, make sure the default network is selected and that you check “Customize configuration before install” (note that I also changed the name for the virtual machine):

Let’s press “Finish,” and get to the configuration dialog. I changed the Firmware from “BIOS” to “UEFI”, pressed “Apply,” and finally, we can start the installation with “Begin Installation”.

We should not get any errors from QEMU about accessing the external drive, thanks to the configuration shown above in the qemu.conf file!

After the GRUB menu, we should see the installer log:

And then, the EndeavourOS installer dialog:

Since I’ve already blogged about EndeavourOS installation, I’ll skip the detailed steps. I’ll install the GNOME desktop environment and let the installer use the whole disk space with the BTRFS filesystem and SWAP with hibernate (later, I might want to check whether hibernate works in the VM).

In a few minutes, the installation finishes! We get to the GRUB menu of the installed system:

And to the installed GNOME desktop:

The disk image is correctly created in the external drive:

And the information about the virtual machine is in the directory “/etc/libvirt/qemu”:

Export the virtual machine

First, let’s shut down the machine.

Let’s export the virtual machine to use it from another computer. I understand that having the same software on the other host is crucial. Since I’m using EndeavourOS or Arch on my main computers, that is not a problem.

But isn’t the virtual machine already in an external drive? Why do I have to export it?

That’s the main difference with VirtualBox I mentioned at the beginning. The disk image is on an external drive, but the virtual machine information (configuration and metadata) is on a local XML file (see the listing of “/etc/libvirt/qemu” above; the XML file of the virtual machine is “eos-kvm-gnome.xml”, after the name I gave to the virtual machine when I created it).

Remember that the XML has an absolute path pointing to the disk image on the external drive:

So, again, in the other computers, the mount point of the external drive must be the same; otherwise, the absolute path must be manually adapted.

We could copy the XML file directly on the external drive (somewhere near the disk image to be easily found), e.g.:

Alternatively, if we don’t remember the location of the XML file, we can use the “dump” command.

For example, we can first list the current machines (in the example, I have only one):
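
For example:

sudo virsh list --all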

And then, we dump its XML configuration:
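
For example, writing the file near the disk image on the external drive (the path is illustrative):

sudo virsh dumpxml eos-kvm-gnome > /path/to/external/kvm/eos-kvm-gnome.xml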

We’re ready to import and use the VM on another computer

Import the virtual machine

I have already installed and configured KVM on another computer, following the same procedure at the beginning of the post.

Since I haven’t enabled the services at boot time, I run the following:

I connect the external drive and ensure it’s mounted (remember, on the same mount point as in the other computer).

Then, I create the virtual machine information locally by using the XML file on the drive I created above:
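
For example (using the illustrative path from before):

sudo virsh define /path/to/external/kvm/eos-kvm-gnome.xml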

We can verify that the XML is now in the directory of QEMU:

Let’s start “Virtual Machine Manager,” and we can see the virtual machine:

We can start it, and it should work as on the other computer.

Cloning and Snapshots

Let’s create a clone of this virtual machine, e.g., with the context menu of the machine in the main user interface.

The destination path is based on the path of the current machine, the external drive, which is good.

Let’s wait for the clone to finish, and then we have two virtual machines:

If I want this clone to be usable on other computers, I repeat the export procedure for this new virtual machine:

I’ll leave this clone virtual machine as it is for now, and I’ll create a snapshot in the other virtual machine, the original one.

Snapshot information is stored somewhere else, NOT in the XML of the virtual machine:

So we need them as well if we want to use them on another computer.

To add the snapshot to the other computer, I have to run:
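
Presumably something like this (the snapshot XML file name is illustrative):

sudo virsh snapshot-create eos-kvm-gnome /path/to/external/kvm/snapshot1.xml --redefine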

However, keep in mind that if you try to start a snapshot, you get this warning:

So if you don’t want to lose the current state, create another snapshot for the current state before restoring a previous one. Moreover, if the snapshot’s state is “Shutoff”, “starting” the snapshot only restores it. Then, you must start the virtual machine.

A different approach: symlinks

In the previous sections, I showed how machine information (including snapshots) and images can be put on external drives. While the machine images reside on the external drive from the beginning, the machine metadata is still on your hard disk. In fact, you must first export it (e.g., on the external drive) and then import it on another computer.

A more radical approach consists of keeping the metadata on the external drive only and creating symlinks in each computer’s libvirt/qemu directories.

On the first computer, the XML files of machine information and snapshots have to be copied onto the external drive. IMPORTANT: don’t dump information as we did above; you need to copy the original XML files themselves. Dumping does not generate the exact XML files stored on the libvirt/qemu directories. In fact, as shown above, the dumped XML files must be imported with dedicated commands.

In my case, on the first computer, I run:
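
A sketch (the external drive path is illustrative):

sudo cp -a /etc/libvirt/qemu /path/to/external/kvm/etc-qemu
sudo cp -a /var/lib/libvirt/qemu/snapshot /path/to/external/kvm/snapshot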

So, on the external drive, I end up with these contents:

On the same computer, I run the following commands (make sure the “libvirtd.service” is not running):
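
A sketch (again with an illustrative mount point; these commands replace the local directories with symlinks to the drive):

sudo rm -rf /etc/libvirt/qemu
sudo ln -s /path/to/external/kvm/etc-qemu /etc/libvirt/qemu
sudo rm -rf /var/lib/libvirt/qemu/snapshot
sudo ln -s /path/to/external/kvm/snapshot /var/lib/libvirt/qemu/snapshot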

Now, I can start the “libvirtd.service” and the default network, and I make sure I can still access all my machines stored on the external drive, including all the machine information.

Of course, if you have never created virtual machines and want to start creating them on the external drive, it is enough to run the above commands. Then, start creating machines. Remember to select the external drive for the image location.

Then, on the other computers where I have already installed the same software for KVM, QEMU, etc., I first ensure the “libvirtd.service” is not running (in case stop it). Then, I connect my external drive and run the above commands (these will remove possible existing machines’ information, so be careful).

Of course, the above commands must be run only the first time.

Now, I can start the “libvirtd.service” and the default network, and I can access all my machines stored on the external drive, including all the machine information. Every modification (an image content or a machine configuration) will be stored on the external drive.

This approach works if you want to store ALL your machines on the external drive. You won’t have to keep the information in sync because they are stored in a single place.

If you need to keep some machines on your computers and others on different external drives, you must use the above-shown manual procedure for exporting and importing. It is then up to you to remember to re-export/re-import if you change a machine’s configuration or a snapshot.

Happy virtualization! 🙂

Customizing Gnome in Arch Linux on a PineBook Pro

In a previous blog post, I showed how to install Arch Linux on a PineBook Pro.

In this blog post, I’m showing how I customize Gnome on that installation.

First, Gnome 43 has “Gnome Console” as the default terminal application. I wouldn’t say I like it since it’s too basic. So I install the traditional “Gnome Terminal”:
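
For example:

sudo pacman -S gnome-terminal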

Then, I set “Ctrl+Alt+T” as a shortcut for opening the terminal:

Then, I install an AUR helper. I like “yay,” so I first installed the needed dependencies:

And then

Using “yay”, I install the “Gnome Browser Connector” to install Gnome extensions from Firefox (some extensions are already installed by default as system extensions; you can use the “Extensions” application to enable/disable extensions):
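
Presumably (at the time, from AUR; it may be in the official repositories now):

yay -S gnome-browser-connector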

Now I can navigate to https://extensions.gnome.org and install and enable a few extensions (you also need to install the Firefox extension add-on when asked). For example, “AppIndicator and KStatusNotifierItem Support” and “X11 Gestures”.

The last extension helps enable Touchpad gestures in the X11 session (Gnome Wayland already provides touchpad gestures, but I prefer to use the X11 session). This extension relies on “touchegg” that must be installed. For ARM, we need to install the AUR package:

You will get this warning, but proceed anyway: it compiles and works fine:

Let’s start “touchegg” and verify that gestures work

And then let’s enable it so that it automatically starts on the subsequent boots:

Let’s move on to ZSH, which I prefer as a shell:

Since I’m going to install “Oh My Zsh” and other Zsh plugins, I install these fonts (remember from the previous post that I had already installed “noto-fonts” and “noto-fonts-emoji”) and finder tool (“curl” is required for the installation of “Oh My Zsh”):

Let’s install “Oh My Zsh” by running the following command as documented on its website:

When asked, I agreed to change my default shell to Zsh. In the end, we should see the prompt changed to the default one of “Oh My Zsh”:

I then install some external plugins:

And I enable them by editing the ~/.zshrc, in particular, the “plugins” line (I also enable other plugins that are part of the OMZ distribution):

Once saved, you have to start a new terminal with zsh to see the plugins in action (remember that, until you log out and log in, the default shell is still BASH, so you might have to run “zsh” manually to switch to ZSH in the currently logged session).

Besides the syntax highlighting for commands, you have completion after “cd” (press TAB), excellent command history (with Ctrl+R), suggestions, etc.

Let’s switch to the “Starship” prompt. Let’s run the documented installation program:

Now, let’s edit the ~/.zshrc file again; we comment out the line starting with “ZSH_THEME,” and we add to the end of the file:

Opening another ZSH shell, we should see the fantastic Starship prompt in action, e.g.,

To quickly search for file names from the command line, I install “locate”, enable its periodic indexing and run the indexing once the first time (if you’re on a BTRFS file system, you might want to have a look at this older post of mine):

Then, you should be able to look for files with the command “locate” quickly.

Gnome uses “Tracker” (in the current version, the command is “tracker3”) for file indexing and searching, e.g., from the “Activities” view. I like it, and it quickly keeps the index up to date. However, the “tracker extract” service also indexes the file contents, and that uses too many resources, so I disable that service:
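
One way to do that (assuming the service name in this Gnome version) is masking the user service:

systemctl --user mask tracker-extract-3.service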

I also use the “guake” drop-down terminal a lot:
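
For example:

sudo pacman -S guake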

I run it once (by default, it’s invoked by pressing “F12”), and I configure it to start automatically when Gnome starts (by running “Guake Preferences” -> “Start Guake at login”).

I hope you enjoyed this post! 🙂

Installing EndeavourOS Linux on an Acer Aspire Vero

I have already blogged about my new computer Acer Aspire Vero and how to install Ubuntu on that.

In this blog post, I’ll briefly discuss installing EndeavourOS on the same computer. I wrote it some months ago, so it’s not based on the new EndeavourOS Cassini version but on EndeavourOS Nova. However, the procedure and the results should be the same with the current version of the EndeavourOS installer.

First of all, the installer detected my Ethernet card and nicely proposed using a working driver:

I choose the default.

Then, after the WiFi connection has been established, it’s time to start the installation:

I still haven’t tried “Customizing the install process”, https://discovery.endeavouros.com/installation/customizing-the-endeavouros-install-process/2022/03/. I’ll have a look at it in the future, maybe.

First, I updated the mirrors, choosing my country (actually, it had already been detected by the installer):

I started the installer and chose the “Online” method to install KDE, not Xfce (the default DE).

I choose American English (though I’m Italian, I always prefer to have my OS in English). The location has been automatically detected again, and I’ll stick with the proposed settings:

I choose “Manual partitioning” because I want to keep Windows and my current two other Linux installations.

I mount the EFI partition to “/boot/efi” (the “boot” flag is automatically selected):

I create a new partition for the root partition on the free space, choosing BTRFS:

I also mount the existing EXT4 partition to share some common work data (including Docker images, containers, and Java-related stuff). The final layout is as follows:

When I continue, I get a warning because of the EFI partition, which is expected to be at least 300Mb; mine is smaller, but I’m sure there’s enough space, so I continue:

For the desktop, I select “Plasma KDE”.

Now we get to the package selection. Some packages are already selected by default:

I deselect from “Desktop Base” => “GPU drivers” the “xf86-video-intel” since it’s known to give a few problems (including the screenshot tool Spectacle capturing old screen contents), and I’ll rely on the default mesa. I also select the LTS kernel in addition, since I prefer to have an LTS kernel besides the latest one (in case of problems, the LTS kernel usually works best).

Moreover, I also select everything concerning printing:

After the user details, it’s time to review the partitioning, which looks reasonable.

Let’s start the installation! Remember to “Toggle log” to see what the installer is doing under the hood.

In a matter of minutes, the installation finished successfully.

Before rebooting, you might want to save the “endeavour-install.log” file generated by the installer in the home folder of the “liveuser”.

And here’s the installed system:

I set the fonts to 120 (that is, 25% bigger) so that I could read better.

The sound does not work well. I tried to play a video on YouTube, and it worked, but now and then, I get no sound at all (even if I increase/decrease the volume, I get no sound from the DE). I guess that’s due to the “wireplumber” installed by default. On Arch News, they suggest using “pipewire-media-session” instead of “wireplumber”. So I do as suggested:
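Installing the replacement makes pacman offer to remove the conflicting “wireplumber”:

  sudo pacman -S pipewire-media-session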

And reboot (you have to accept the removal of “wireplumber”).

EndeavourOS works great on this laptop! 🙂

Snapper and grub-btrfs in Arch Linux

Up to now, I’ve been using Timeshift and grub-btrfs in my Linux installations because I found Timeshift easy to use and straightforward to install. I was scared by Snapper because I thought it was hard to use and complex to install. I had been fooled by many tutorials I found online, but maybe they were obsolete, or they were not using the right packages. I was wrong: using the right packages provided in Arch and AUR repositories makes it straightforward to use Snapper and grub-btrfs. You also get a program that automatically takes a snapshot when installing/updating your system.

This is more of a report than a tutorial.

I tried this procedure on EndeavourOS and Arch, and, as expected, the final result was the same. However, as shown later, Arch requires a few adjustments in the /etc/fstab file.

The BTRFS subvolume layout of EndeavourOS is ideal for snapper snapshots and for booting them with grub: the subvolume “@” for “/”, “@home” for “/home”, and separate subvolumes for “/var/log” and “/var/cache”. That’s basically the same as I use for Arch installations.

If you already have grub-btrfs because you use it with Timeshift (e.g., with the procedure described in one of my previous posts), it’s better to remove the package so that it will also remove possible enabled services and the custom configurations for timeshift:
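A sketch:

  sudo pacman -R grub-btrfs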

If you were using Timeshift, you also provided a custom configuration for grub-btrfsd, which is not automatically removed by the previous command. The files must be removed explicitly:
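Assuming the custom configuration was a systemd drop-in created with “systemctl edit grub-btrfsd” (as in my previous post), it would live here:

  sudo rm -rf /etc/systemd/system/grub-btrfsd.service.d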

Also, remove timeshift and timeshift-autosnap:
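For example:

  sudo pacman -Rns timeshift timeshift-autosnap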

If you don’t do that now, you will be asked to do that when installing the packages for snapper anyway.

If you installed Arch the Arch way, the command you used for generating /etc/fstab added “subvolid=…” entries, which will get in the way when restoring snapper snapshots. For example, if you tried to restore a snapshot with btrfs-assistant (which we’ll install in a minute), you’d get such a warning dialog:

Since the generated /etc/fstab contains both “subvolid=…” and “subvol=…”, I find it safe to remove the “subvolid” parts. I do that with this sed command (of course, do that at your own risk and take a backup of the file first):
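A minimal sketch of such a sed command (again, at your own risk):

  sudo sed -i 's/subvolid=[0-9]\+,//g' /etc/fstab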

If you installed another Arch-based distro, like EndeavourOS, the /etc/fstab should already contain only “subvol=…” entries, so the above command is not required.

Install the following two packages with an AUR helper (e.g., “yay” in my case). The first one is a meta-package that will install snapper and other utilities like “snap-pac” (“Pacman hooks that use snapper to create pre/post btrfs snapshots when installing/upgrading/removing packages”) and “grub-btrfs” (the default configuration of grub-btrfs works already with snapshots created by snapper).
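If I remember correctly, the meta-package is “snapper-support”, so:

  yay -S snapper-support btrfs-assistant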

If you haven’t previously uninstalled timeshift and timeshift-autosnap, you’ll get this message, as I mentioned above:

During the installation of the above two packages, we can see a few interesting things in the log:

The installation creates a configuration for snapper for the root subvolume and configures the service to automatically update the grub menu for booting snapshots. It also creates the very first snapshot, “1”. Since we then install btrfs-assistant, it also creates a “pre” snapshot, “2”, and when the installation of btrfs-assistant finishes, it creates a “post” snapshot, “3”.

Let’s run btrfs-assistant:

Let’s explore its tabs:

Note the existing subvolumes and the newly created subvolume for snapshots “.snapshots”.

The next tab shows the snapshots taken during the installation command we issued to install the programs. Note the numbers of the snapshots and compare them with the installation log shown above. Moreover, unlike “timeshift-autosnap”, “snap-pac” creates meaningful and comprehensible names for the snapshots.

Note that you act on a single configuration in this and in the next tab. By default, we have the one created during the installation for the root subvolume (see the “Select config” drop-down menu). If you have other configurations (e.g., for snapshots of the home subvolume), you must select the intended configuration.

With the first tab, you can create/delete snapshots. With the second tab, you can browse them or restore them:

On the last tab, you can see the enabled services and possibly perform further configuration. For the moment, I’m not touching that part: I’m OK without automatic snapshots (since I know they will be taken when installing/upgrading/removing packages) and with the automatic cleanup of old snapshots:

From btrfs-assistant, you can also select the checkbox to show existing Timeshift snapshots:

You might want to remove them once you’re sure the new setup with snapper and grub-btrfs works correctly.

Let’s do some experiments browsing the current snapshots. For example, let’s select the second one and click “Browse”:

Navigating to “/usr/bin,” we can verify that “btrfs-assistant” is not there. In fact, snapshot “2” was taken before installing btrfs-assistant.

Let’s browse snapshot “3”:

This time, “btrfs-assistant” is present in “/usr/bin”. In fact, snapshot “3” was taken after the installation of btrfs-assistant (it’s a “post” snapshot).

From the screenshots above, we can see that snapshots are also browsable from the file system: they are all inside “/.snapshots” (for the root subvolume configuration), each one with the corresponding number. You must be root to browse them.

Let’s experiment with booting snapshots from grub.

Before installing snapper and the other programs, I had previously installed “neofetch” on this machine. I’m going to remove it:

Two new snapshots have been automatically created by “snap-pac” (one before the removal and one afterward):

Let’s reboot the machine and navigate through the snapshots menus, selecting the snapshot corresponding to the state before the removal of neofetch:

Now, we’re inside that snapshot, and we can verify that neofetch is still there:

Let’s say that we want to restore this snapshot for good. Let’s run btrfs-assistant, select the snapshot we have just booted, and press “Restore”:

We get a confirmation dialog, and we can specify a name for the backup that will be taken (in this example, I’ll specify “before-restoring”):

Upon confirmation, we get a warning that urges us to reboot as soon as possible:

Let’s reboot. This time we select the default grub menu entry (not a snapshot).

We can verify once again that neofetch is still there.

From btrfs-assistant, we can see the subvolume with the backup, which we can delete once we’re sure that everything is still working:

If you are using “plocate” or “locate” (see also my older post about locate and BTRFS), you should also exclude “.snapshots” from indexing via the “PRUNENAMES” variable in the configuration file (this should already contain some directories like “.git .hg .svn”):

And add “.snapshots” to “PRUNENAMES”, e.g.,
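For example, assuming the stock “/etc/updatedb.conf”:

  PRUNENAMES = ".git .bzr .hg .svn .snapshots"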

Configuration files are in the directory “/etc/snapper/configs/“. Currently, we have only one configuration, “root” (for the root subvolume), created during the installation.

In that file, we can see the line
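That should be the standard snapper setting:

  TIMELINE_CREATE="no"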

corresponding to the setting in btrfs-assistant that disables the automatic timeline snapshots.

Moreover, we have the following:
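Presumably the cleanup-related settings, something like:

  NUMBER_CLEANUP="yes"
  TIMELINE_CLEANUP="yes"
  EMPTY_PRE_POST_CLEANUP="yes"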

which again corresponds to the setting in the btrfs-assistant screenshot shown above.

For further configurations, I suggest looking at Snapper’s great Arch wiki page.

To summarize, snapper with these additional programs looks nice and is more flexible than Timeshift and timeshift-autosnap.

You might want to give it a try! As usual, you might start with a virtual machine 😉

Using Dropbox on a PineBook Pro with Maestral

Dropbox does not provide a client for the Linux Arm architecture, so you don’t have a client for a PineBook Pro.

However, you can use the open-source project Maestral:

Maestral is a lightweight Dropbox client for macOS and Linux. It provides powerful command line tools, supports gitignore patterns to exclude local files from syncing and allows syncing multiple Dropbox accounts.

The Arch AUR repository provides packages for Maestral:

So I’m going to install the “Qt interface for Maestral” with the “yay” AUR helper I already have on my PineBook Pro (this will install a lot of Python packages, and the installation will take a few minutes):
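The package should be “maestral-qt”:

  yay -S maestral-qt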

Let’s run the Maestral GUI from the command line:
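That is, per the Maestral CLI:

  maestral gui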

And the app appears:

Let’s click on the button to link to the Dropbox Account

And click on “here” to retrieve the authorization token. This should open the browser; alternatively, you’ll be asked to select an application to open the full URL in KDE. Unfortunately, selecting a browser in KDE does not work, and it will keep asking for the application.

Thus, in KDE, I run from the command line:
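That is, the command also mentioned below:

  maestral auth link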

This allows me to print the auth URL to the console, copy it and paste it into a browser. I can then authorize Maestral on the Dropbox site.

The Dropbox website then shows me a token that I copy and paste into the Maestral window above and press “Link”. (The previously run command, “maestral auth link”, can now be interrupted.)

The setup proposes to select a local folder to synchronize with Dropbox. Note that by default, it proposes “Dropbox (Maestral)”, but I prefer the standard one “Dropbox”, so I modify it accordingly.

And now, it’s time to select the folders to synchronize. I start with a very minimal subset of my Dropbox folders. As noted below, the initial synchronization will take a lot of time (depending on the size of all the Dropbox contents, not the folders you select).

In the taskbar, you can see the Maestral icon that started to synchronize. The icon provides a context menu.

From the command line, you can see the status of the synchronization with:
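  maestral status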

The first run takes a long time, especially for the initial “Indexing”. That’s due to a known issue, https://github.com/samschott/maestral/issues/832:

It does index the entire Dropbox, even if only a few items are selected in Selective Sync.

For example, my Dropbox usage is

It took more than an hour just for “indexing”.

I guess that for the time being, we’ll have to accept that if we want to use Dropbox on the PineBook Pro.

Exa and icon fonts in Arch Linux

I finally took the time to try exa, “a modern replacement for ls”.

This is a brief article about installing exa in Arch Linux, with an additional package for the icon fonts (in a few installations, boxes were shown instead of icons; that’s why I’m writing this blog article, hoping to save you some time).

Installing exa in Arch is just a matter of running:
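Namely:

  sudo pacman -S exa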

However, you need a “Nerd” font to get the icon symbols. This is the one I install:
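I believe it was the symbols-only Nerd font package:

  sudo pacman -S ttf-nerd-fonts-symbols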

In EndeavourOS KDE, this should already be installed. I seem to understand that this is not the case for EndeavourOS GNOME. If these fonts are not installed, you can install them with the command above and make sure to reboot.

The output is excellent, and I aliased many of my previous ls commands to exa:
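A sketch of such aliases (the exact flags are a matter of taste):

  alias ls='exa --icons'
  alias ll='exa --icons -l'
  alias la='exa --icons -la'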

This is the beautiful colored output you get, and note the icons for directories and known file types in Gnome (in particular, a “cup of coffee” for Java files):

The same holds for KDE:

I also have another alias for the tree output of exa:
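Something like this (the alias name is hypothetical; note the flag discussed below):

  alias ltree='exa --icons --tree --git-ignore'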

And this is the output:

Note the “--git-ignore” command line argument to ask exa to skip all the files that match the patterns in the current “.gitignore” file.

Beautiful, isn’t it? 🙂

Network Printers Discovery in Arch Linux

In Arch Linux (and Arch-based distros like EndeavourOS), it’s easy to add a network printer if you already know its address. Still, network printer discovery does not work out of the box as it does on other distributions like Fedora or Ubuntu.

The procedure to enable network printer discovery is, of course, documented in the Arch wiki. Still, in this post, I’d like to detail the steps to achieve that just as a confirmation or as additional help documentation.

First of all, let’s install the packages for printing:
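A sketch:

  sudo pacman -S cups cups-pdf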

I also install the following packages for drivers and HP (because I have HP printers):
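Presumably something like this (package names from the standard repositories):

  sudo pacman -S gutenprint hplip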

Of course, we must enable the CUPS service:
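  sudo systemctl enable --now cups.service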

We must also make sure the following packages (“avahi” and “nss-mdns”) are installed:
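  sudo pacman -S --needed avahi nss-mdns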

And that the “avahi-daemon.service” is running and enabled:
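  sudo systemctl enable --now avahi-daemon.service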

Then, we must edit the file “/etc/nsswitch.conf” and change the line
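On my system, the “hosts” line looked something like this (yours may differ slightly):

  hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns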

into
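per the Arch wiki, adding “mdns_minimal [NOTFOUND=return]” before “resolve”:

  hosts: mymachines mdns_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] files myhostname dns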

Now, we should be able to discover local network printers.

I prefer the “system-config-printer” package for this purpose (in case you want to install it).
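  sudo pacman -S system-config-printer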

You can run it by searching for the application “Print Settings”. I’m showing an example in KDE:

“Unlock” by providing the password, press “Add,” and expand the “Network Printer”. If you have a firewall, like “firewalld”, you’ll be asked again for the password to change the firewall settings to enable the services for printer discovery:

Of course, you have to accept to adjust the firewall.

Then, the local network printer(s) should be discovered. In my example, my HP printer is discovered with the possible network protocols:

I chose the second one (the one with the local IP address) and HPLIP as the connection protocol (remember I had already installed the corresponding packages):

By pressing “Forward”, you wait for the drivers to be selected. You can print a “test page” and configure the printer as you see fit.

Ansible, Molecule, Docker and GitHub Actions

UPDATES:

  • 19 February 2023: exclude ansible-lint problems on tests/test.yml
  • 27 April 2023: updated the molecule docker plugin

Last year, I got familiar with Ansible, the automation platform I now use to install and configure my Linux installations. I must thank Jeff Geerling and his excellent book “Ansible for DevOps“, which I highly recommend!

I have already started blogging about Ansible and its testing framework, Molecule. However, in the first blog post, I used Ansible and Molecule to demonstrate Gitpod with a minimal example.

In this blog post, I’d like to document the use of Ansible and Molecule with a slightly more advanced example and how to test an Ansible role against 3 main Linux distributions, Fedora, Ubuntu, and Arch. To test the Ansible role, we will use Molecule and Docker. Finally, I’ll show how to implement a continuous integration process with GitHub Actions. The example consists of a role for installing zsh, setting it as the user’s default shell, and creating an initial “.zshrc” file. It will be a long post because it will be step-by-step.

The source code used in this tutorial can be found here: https://github.com/LorenzoBettini/ansible-role-zsh. The GitHub repository is configured to be used with Gitpod (see my other blog post concerning using the online IDE Gitpod).

Install ansible and molecule

I’m assuming Docker, Python, and Pip are already installed.

First, let’s install Ansible and Molecule (with Docker support). We’ll use pip to install these tools. This method works on all distributions since it’s independent of the ansible and molecule packages provided by the distribution (Ubuntu does not even provide a package for molecule):
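Given the April 2023 update mentioned above, the pip packages should be “ansible”, “molecule”, and the docker plugin:

  pip install --user ansible molecule "molecule-plugins[docker]"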

This will install ansible and molecule in “$HOME/.local/bin,” so this path must be in your PATH (it should already be the case in most distributions).

Create the role

This is the command to initialize a role with the directories and files also for molecule (with docker):

In this example, I’ll run:
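The role name here is hypothetical (use your own Galaxy namespace and role name):

  molecule init role lorenzobettini.zsh --driver-name docker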

This is the resulting directory structure of the created project (note that, at the time of writing, the official guide, https://molecule.readthedocs.io/en/latest/getting-started.html, is not updated with the directory and file structure):

Next, I enter the directory, remove “.travis.yml” (since we want to build on GitHub Actions), and create a Git repository (with “git init”). I also push to GitHub.

First, let’s adjust the file meta/main.yml with the information about this role and author:

The role’s name should be the same as the one specified in the “init” command (I don’t know why this file has not been generated with the role_name already set). Otherwise, the other generated files for Molecule will not work.

The role’s main tasks are defined in tasks/main.yml. Currently, the generated file does not execute any task.

Manual tests

The “init” command also created a tests directory to manually and locally test the role. We are interested in automatically testing the role. However, since the role is currently empty, it is safe to try to run it against our own machine. At least, we can check that the syntax of the role is OK, and we can perform a “dry-run” without modifying anything on our machine.

The current contents of the files generated in the “tests” directory will not work out of the box.

First, the tests/test.yml playbook:

It correctly refers to our role, but Ansible will not be able to find the role in the default search path (because the role is the project’s path itself).

We can change the role reference with a relative path (the use of a relative path will require a few configurations to make linting happy, as we will see later):

Then, we can try to run it, checking the syntax and doing a “dry-run”:
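A sketch of the two commands:

  ansible-playbook tests/test.yml -i tests/inventory --syntax-check
  ansible-playbook tests/test.yml -i tests/inventory --check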

The “dry-run” (--check) fails because, on my machine, there’s no SSH server, and by default, the tests/inventory file (specifying “localhost”) would imply an SSH connection:

To avoid SSH, we can change the file as follows:
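The standard trick is to force a local connection:

  localhost ansible_connection=local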

Let’s try again with the “--check” argument, and now it works.

Run the complete Molecule default scenario

The “init” command created a default Molecule scenario in the file default/molecule.yml:

As we can see from this file, the Docker image used by Molecule is centos:stream8. For the moment, we’ll stick with this image.

Molecule will execute a playbook against a Docker container of this Docker image. We’re implementing a role, not a playbook. The playbook is defined in the file default/converge.yml:

In fact, “converge” is the action of performing the playbook against the Docker image, the “instance”. As you see, the “init” command generated this file automatically based on the role that we created.

There’s also a default/verify.yml file that is used to verify that some expected conditions are true once we run the playbook against the Docker instance. We’ll get back to this file later to write our own assertions. The contents of this generated file are as follows (the assertion is always verified):

To check that the scenario already works, we can run it end-to-end with the command “molecule test” issued from the project’s root. Remember that Molecule will download the Docker image during the first run, which takes time, depending on your Internet connection. This is the simplified output:

As reported in the first line, this is the entire lifecycle sequence:

dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy

Thus running the entire scenario always implies starting from scratch, that is, from a brand new Docker container (of course, the pulled image will be reused). Note that after “converge,” the scenario checks “idempotence,” which is a desired property of Ansible roles and playbooks. After verification, the Docker instance is also destroyed. Of course, if any of these actions fail, the lifecycle stops with failure.

Setup the CI on GitHub Actions

Our role doesn’t do anything yet, but we verified that we could run the complete Molecule scenario. Before going on, let’s set up the GitHub Actions CI workflow. We’ll use the Ubuntu runner, where Docker and Python are already installed. We’ll have first to install ansible and molecule with pip, and then we run the “molecule test”.

Concerning the pip installation step, I created the file pip/requirements.txt in the project with these contents (they correspond to the pip packages we installed on our machine):
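That is, something like:

  ansible
  molecule
  molecule-plugins[docker]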

Then, I create the file .github/workflows/molecule-ci.yml with these contents:
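A minimal sketch of such a workflow (the action versions are assumptions):

  name: Molecule CI
  on: push
  jobs:
    test:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3
        - uses: actions/setup-python@v4
        - name: Install ansible and molecule
          run: pip install -r pip/requirements.txt
        - name: Run molecule
          run: molecule test
          env:
            PY_COLORS: "1"
            ANSIBLE_FORCE_COLOR: "1"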

Now that our CI is in place, GitHub Actions will run the complete Molecule test scenario at each pushed commit. The environment variables at the end of the file will allow for colors in the GitHub Actions build output:

Familiarize with Molecule commands

While implementing our role, we could run single Molecule commands instead of the whole scenario (which, in any case, will be executed by the CI).

With “molecule create,” we create the Docker instance. Unless we run “molecule destroy” (which is executed by the entire scenario at the beginning), the Docker container will stay on our machine. Once the instance is created, you can enter the container with “molecule login“. This is useful to inspect the state of the container after running the playbook (with “molecule converge“) or to run a few commands before writing the tasks for our role:

The “login” command is more straightforward than running a “docker” command to enter the container (you don’t need to know its name). Remember that unless you run “molecule destroy”, you’ll find the same state if you exit the container and enter it again.

Once you run “molecule converge“, you can run “molecule verify” to check that the assertions hold.

To get rid of the instance, just run “molecule destroy“.

Let’s start implementing our role’s tasks

To start experimenting with Molecule for testing Ansible roles, the official Fedora Docker image is probably the easiest. In fact, such an image comes with “python” already installed (and that’s required to run Ansible playbooks). Moreover, it also contains “sudo”, another command typically used in Ansible tasks (when using “become: yes”).

Thus, let’s change the image in the file default/molecule.yml:

You can commit, push, and let GitHub Actions verify that everything is still OK.

Now it’s time to edit the primary role’s file, tasks/main.yml. Let’s add the task to install ZSH. In this example, I’m using “ansible.builtin.package module – Generic OS package manager” so that we are independent of the target OS. This is useful later because we want to test our role against different Linux distributions. This Ansible module is less powerful than the specific package manager modules, but for our goals, it is sufficient. Moreover, in the Linux distributions that we’ll test, the name of the package for ZSH is always the same, “zsh”.
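The task should look something like this:

  - name: Install zsh
    become: yes
    ansible.builtin.package:
      name: zsh
      state: present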

If we had already created the instance, we first need to run “molecule destroy” to avoid errors due to the previous Docker container.

Let’s run “molecule converge”. If you don’t have the “fedora:36” Docker image already in your cache, this command will take some time the first time. Moreover, the task of installing the “zsh” package might take some time since the package must be downloaded from the Internet, not to mention that dnf is not the fastest package manager on earth. In fact, the Ansible package module will use the distribution package manager, that is, dnf in Fedora. Here’s the output:

Let’s enter the container with “molecule login“. Now, zsh should be installed in the container:

Of course, you could always run the entire “molecule test”, but that takes more time, and for the moment, we don’t have anything to verify yet. Moreover, the idempotence check will pass thanks to the idempotency of the Ansible package module.

Change the user’s shell and verify it

Now, we want to change the user’s shell to zsh, and we will verify it. Let’s follow a Test-Driven Development approach, which I’m a big fan of. We first write the verification tasks in verify.yml, make sure that “molecule verify” fails, and then implement the task in our role to make the test succeed.

First, how to get the user’s shell? In the Docker container, the $SHELL environment variable is not necessarily set, so we directly inspect the contents of the file “/etc/passwd” with some shell commands to get the user’s current shell. To write the shell commands, we can enter the container (molecule login), assuming we have already created the instance, and perform some experiments there. Remember that when we’re inside the container, we are “root”, so in our experiments, we’ll try to get the root’s shell.

So, we have our shell piped command to get the root’s shell:
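A sketch of such a piped command:

  grep "^root:" /etc/passwd | cut -d: -f7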

In verify.yml, we want to get the shell of the user executing Ansible. In our molecule tests, it will be root, but the user will be different in the general use case. Thus, we use Ansible’s fact “ansible_user_id”:

Then, we’ll compare it against the desired value, NOT “/bin/bash”, but “/bin/zsh”. Note that, by default, the generated molecule/verify.yml has “gather_facts: false”. We need to remove or set that line to true so that Ansible populates the variable with the current user. Here are the contents (we must use the module “shell” and not “command” because we need the “|”):
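A sketch of the verification playbook under these assumptions:

  - name: Verify
    hosts: all
    gather_facts: true
    tasks:
      - name: Get the current user's shell
        ansible.builtin.shell: |
          grep "^{{ ansible_user_id }}:" /etc/passwd | cut -d: -f7
        register: user_shell
        changed_when: false
      - name: Check that the shell is zsh
        ansible.builtin.assert:
          that: user_shell.stdout == "/bin/zsh"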

Since we have already created the instance and converged that, let’s run “molecule verify“:

As expected, it fails.

Let’s add the task in our role to set the current user’s shell to zsh (we rely on the Ansible user module):
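Something like:

  - name: Set zsh as the user's shell
    become: yes
    ansible.builtin.user:
      name: "{{ ansible_user_id }}"
      shell: /bin/zsh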

Let’s run “molecule converge” (since we had already converged before adding this task, the installation of zsh does not change anything):

And let’s run “molecule verify“, and this time it succeeds!

As usual, we commit, push, and let the CI run the whole scenario.

The verification would not really be required since we should be able to rely on the correctness of the Ansible user module. However, I thought this could be the moment to experiment with Molecule verification.

Note that if you enter the container, the “/etc/passwd” has been modified, but you’re still on bash. That’s because the change becomes effective when you log out and log in as a user. In a Docker container, that’s not possible, as far as I know. However, since log out and login are expected in a real system, as long as the shell is modified in “/etc/passwd”, we’re fine.

Add the file .zshrc

Since we want to set up zsh for the user, we should also add to the converged system a “.zshrc” with some reasonable defaults. For example, if you enter the container and run zsh, you’ll see that you have no command history. The history should be enabled in the file “.zshrc”.

Files are searched for in the directory “files” of the project, which the “init” command created for us. I had an existing small “.zshrc” with the enabled history, command completion, and a few aliases:

I put such a file in files/zshrc (I prefer not to have hidden source files, so I removed the “.”). In the role, I added this task, which copies the source file into the converged system in the current user’s home directory with the name “.zshrc”:
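A sketch of the copy task (the file mode is an assumption):

  - name: Copy zshrc
    ansible.builtin.copy:
      src: zshrc
      dest: "~/.zshrc"
      mode: "0644"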

Of course, the “copy module” is idempotent and performs the action only if the source and the target files differ.

Let’s converge, enter the Docker container and run “zsh”. Now, the command history works.

Linting

Let’s enable linting to the Molecule scenario. Remember that the scenario has an initial phase for linting.

First of all, we have to install two additional pip packages, yamllint and ansible-lint. In our system, we run the following:
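  pip install --user yamllint ansible-lint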

Of course, for the CI, we have to update pip/requirements.txt accordingly, adding these two packages.

Then, we have to enable the “lint:” section in default/molecule.yml:
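In Molecule versions of that time, the section was a shell snippet like:

  lint: |
    set -e
    yamllint .
    ansible-lint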

Before going on, let’s exclude our “tests/test.yml” file from ansible-lint: as mentioned above, using a relative path will make ansible-lint complain. However, for that simple test file we don’t care. So, let’s create a file “.ansible-lint” in the root directory with these contents:
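That is:

  exclude_paths:
    - tests/test.yml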

Now we can run “molecule lint“:

The output also suggests how to skip some of these issues. However, let’s try to fix these problems.

Concerning the meta/main.yml this modified version fixes the reported issues (we’ll deal with Ubuntu and Arch later):

Then, we adjust the other two files, “converge.yml” and “verify.yml”. These are the relevant changed parts, respectively:

Now, “molecule lint” should be happy.

Commit and push. Everything should work on GitHub Actions.

Note: when we created the project with “molecule init”, the command also created a “.yamllint” configuration file in the root:

This configuration file enables and disables linting rules. This is out of the scope of this post. However, to experiment a bit, if we remove the last line “truthy: disable” and run “molecule lint,” we get new linting violations:

That’s because “become: yes” should be changed to “become: true”. I guess it’s a matter of taste whether to enable such a linting rule or not. I’ve seen many examples of Ansible files with “become: yes”. After fixing “main.yml”, there is still a warning (not an error) on the YAML file of our GitHub Actions in correspondence with the “on” line:

You can find a few issues on this being considered false positive or not. A simple workaround is to add a comment in the “molecule-ci.yml” file to make yamllint skip that:

Testing with Ubuntu, the “prepare” step

Let’s say that, besides “fedora:36“, we also want to test our role against the Docker image “ubuntu:jammy“. Instead of creating a new scenario (i.e., another directory inside the directory “molecule”), let’s parameterize the molecule.yml with an environment variable, e.g., MOLECULE_DISTRO, which defaults to “fedora:36”, but that can be passed on the command line with a different value. This is the interesting part:
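The key part is an environment variable with a default, e.g.:

  platforms:
    - name: instance
      image: "${MOLECULE_DISTRO:-fedora:36}"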

Nothing changes if we run molecule commands as we did before: we still use the Fedora Docker image. If we want to try with another image, like “ubuntu:jammy”, we prefix the molecule command with that value for our environment variable.

IMPORTANT: Before trying with another Docker image, make sure you run “molecule destroy” since now we want to use a different Docker image.

Let’s try to converge with the Ubuntu Docker image…

What could go wrong?

As I anticipated, while the Fedora Docker image comes with python preinstalled, the Ubuntu Docker image does not. The latter does not even have “sudo” installed, which is required for running our tasks with “become: yes”. The converge failed, but the Ubuntu instance has been created, so you can enter the Docker container and verify that these packages are not pre-installed.

One could try to add the tasks in the role to install python and sudo (not by using “package” because such Ansible modules require python already installed). However, this would not make sense: our role is meant to be executed against an actual distribution, where these two packages are already installed as base ones. Jeff Geerling provides a few Docker images meant for Ansible, where python and sudo are already installed. However, instead of using his Ubuntu image, let’s explore another Molecule step: prepare.

As documented:

The prepare playbook executes actions which bring the system to a given state prior to converge. It is executed after create, and only once for the duration of the instances life. This can be used to bring instances into a particular state, prior to testing.

So, let’s modify this part in molecule.yml (this modification is not strictly required because if the directory of molecule/default contains a file “prepare.yml,” it will be automatically executed; it might still be good to know how to specify such a file, in case it’s in a different directory or it has a different name):

Now, in molecule/prepare.yml we create the preparation playbook. This is kind of challenging because we cannot rely on Ansible facts nor on most of its modules (remember: they require python, which we want to install in this playbook). We can rely on the “ansible.builtin.raw module – Executes a low-down and dirty command”. And looking at its documentation, we can see that it fits our needs:

This is useful and should only be done in a few cases. A common case is installing python on a system without python installed by default.

So, here’s the prepare.yml playbook:
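A sketch of such a playbook (the “when” condition on the environment variable is my reconstruction):

  - name: Prepare
    hosts: all
    gather_facts: false
    tasks:
      - name: Install python in Ubuntu
        ansible.builtin.raw: apt-get update && apt-get install -y python3 sudo
        changed_when: false
        when: lookup('env', 'MOLECULE_DISTRO') is match("ubuntu.*")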

Of course, we must run this task only when we are using Ubuntu (see the condition). We also specify “changed_when: false” to avoid linting problems (“no-changed-when # Commands should not change things if nothing needs doing.”).

Running “molecule converge” now succeeds (note the “prepare” step):

Of course, also “MOLECULE_DISTRO=ubuntu:jammy molecule verify” should succeed:

But it doesn’t. That’s because the “pipefail” we added to make lint happy works in bash but not in sh, which is used by default in Ubuntu (in Fedora, it was bash). It’s just a matter of adjusting that verification’s task accordingly:

And now verification succeeds in Ubuntu as well.

If you run this against Fedora (remember that you must destroy the Ubuntu instance first), the task “Install python in Ubuntu” will be skipped.

Note that if you run

  MOLECULE_DISTRO=ubuntu:jammy molecule converge

and then execute

  molecule converge

Molecule will reuse the instance just created: it does not recreate the Docker container even if you haven’t specified any environment variable. This means that, as mentioned above, if you want to test with another value of the environment variable (including the default case), you first have to destroy the current instance. By defining several scenarios, as we will see in a minute, there’s no such limitation.

Add a GitHub Actions build matrix

Let’s modify the GitHub Actions workflow to test our role with Fedora and Ubuntu in two jobs using a build matrix. These are the relevant parts to change to use the environment variable MOLECULE_DISTRO that we introduced in the previous section:
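A sketch of the relevant parts, reusing the job shown above:

  strategy:
    matrix:
      molecule_distro: ["fedora:36", "ubuntu:jammy"]

and, in the molecule step:

        - name: Run molecule
          run: molecule test
          env:
            MOLECULE_DISTRO: ${{ matrix.molecule_distro }}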

Now GitHub Actions will execute two jobs for each pushed commit:

Using different scenarios

We now see a different technique to test with different Linux distributions. Instead of using an environment variable to parameterize Molecule, we create another Molecule scenario. To do that, it’s enough to create another subdirectory inside the “molecule” directory. We’ll use the “Ubuntu” example to see this technique in action. (Before doing that, remember to run “molecule destroy” first).

First, let’s undo the modification we did in the file “molecule.yml”:

And let’s create another subdirectory, say “ubuntu”, inside “molecule”, where we create this “molecule.yml” file (it’s basically the same as the one inside “default” where we specify “ubuntu:jammy” and a different name for the “image”):

Let’s copy the “default/verify.yml” and “default/converge.yml” into this new directory, and let’s move the “default/prepare.yml” into this new directory, where we change the contents as follows (that is, we get rid of the “when” condition since this will be used only in this new scenario):

To summarize, this should be the layout of the “molecule” directory (we’ll get rid of duplicated contents in a minute):

Now, running any molecule command will use the “default” scenario. If we want to execute molecule commands against the “ubuntu” scenario, we must use the argument “-s ubuntu” (where “-s” is the short form of the command line argument “--scenario-name”).

For example

So we can converge, verify, and experiment with the two scenarios without destroying a previously created instance.

Of course, we adapt the GitHub Actions workflow accordingly to use scenarios instead of environment variables:

Now, let’s clean up our files to avoid duplications. For example, “verify.yml” and “converge.yml” are duplicated in the “default” and “ubuntu” directories. We take inspiration from the official documentation https://molecule.readthedocs.io/en/latest/examples.html#sharing-across-scenarios.

Let’s move the shared files “verify.yml” and “converge.yml” to a new subdirectory, say “shared”. So the layout should be as follows:

The last part of both “molecule.yml” files in “default” and “ubuntu” must be changed to refer to files in another directory (note that the “verifier” part has been removed since it’s specified in the “provisioner” part):
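Per the Molecule documentation, the provisioner can point to shared playbooks, along these lines:

  provisioner:
    name: ansible
    playbooks:
      converge: ../shared/converge.yml
      verify: ../shared/verify.yml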

Now we reused the common files between the two scenarios. Of course, we verify that everything still works in both scenarios.

Testing with Arch, a custom Dockerfile

Let’s now test this simple role also with Arch Linux. The idea is to create another scenario, e.g., another subdirectory, say “arch”. We could follow the same technique that we used for Ubuntu because also the Arch Docker image has to be “prepared” with “python” and “sudo”. However, to try something different, let’s rely on a custom Docker image specified with a Dockerfile.

The “molecule.yml” in the “arch” directory is as follows:

NOTE: the specification “platform: linux/amd64” is not required because we use a custom Dockerfile. It is required if you want to test this scenario on a Mac m1 (by the way, see my other blog post about Docker on a Mac m1): while Ubuntu and Fedora also provide Docker images for the aarch64 (arm) architecture, Arch Linux does not. So we must force the use of the Intel platform on Arm architectures (of course, on Mac m1, the Docker container will be emulated).

And in the same directory, we create the Dockerfile for our Arch Docker image:
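A minimal Dockerfile sketch:

  FROM archlinux
  RUN pacman -Syu --noconfirm python sudo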

For this example, the Docker image for Arch is simple because we only need “python” and “sudo” to test our role.

Now the directory layout should be as follows:

Now, when Molecule creates the instance, it will use our custom Dockerfile.

We verify that the “arch” scenario also works and update the GitHub Actions workflow by adding “arch” to the scenario matrix.

I hope you find this post helpful in getting started with Ansible and Molecule.

Stay tuned for more posts on Ansible! 🙂

Installing Ubuntu Linux on an Acer Aspire Vero

As I wrote in this other article, I recently bought an Acer Aspire Vero, which I greatly enjoy. Of course, I immediately installed Linux on this machine. In this article, I’ll report my experience installing Ubuntu (Ubuntu 22.10 “Kinetic Kudu”) on this Acer Aspire Vero.

Although nowadays I’m mainly an Arch Linux user, when installing Linux on a brand new laptop, I typically prefer to start with Ubuntu.

Preparation

I have already downloaded the Ubuntu ISO and copied it into a USB stick with Ventoy.

I need the F12 boot menu to boot from the USB stick. This is also useful later because I typically have several Linux distributions installed on the same computer. To enter the BIOS, you must press F2 while the laptop is booting (when you see the “Acer” logo). Make sure the “F12 Boot Menu” is enabled (by default, it’s disabled):

I also disabled “Secure Boot”. If you go to the BIOS “boot” tab, you see that you cannot change the boot entries.

The official documentation describes the procedure to make those entries changeable: https://community.acer.com/en/kb/articles/88-enable-or-disable-secure-boot-on-an-acer-notebook. The idea is to (at least temporarily) set a supervisor password (take note of that password!):

This will allow you to change the entry of “Secure Boot”:

Then, you can disable the Supervisor Password (you have to use the password you had previously chosen).

Then, it’s time to prepare some room in the SSD for Linux. I do that by shrinking the Windows partition from Windows itself. After installing a few programs on Windows (and performing the system updates), that’s the amount of used space:

I opened “Disk Management”, selected the primary partition, and used the context menu “Shrink Volume…”; since I’m not planning to use Windows much, 137Gb should be enough for the Windows partition after shrinking:

And that’s the result:

Before installing Linux, I also disabled “fast startup” in Windows: this will allow me later to access the Windows partition from Linux (otherwise, the Windows partition would be in an inconsistent state):

OK, let’s reboot with the USB stick (I’m using Ventoy) and press F12 to get to the boot menu to choose to boot from the USB:

Ubuntu boots fine.

I decided first to try Ubuntu and see whether everything works in the live environment:

The sound works. I’d say that WiFi and Bluetooth are also working from the new GNOME 43 menu in the top-right corner  (in fact, I can connect to my WiFi). Moreover, the “Balanced” profile is automatically selected, meaning power profiles also work.

Installation

The overall installation process went smoothly and fast.

I prefer to manually partition the disk because I want a swap partition (for hibernation), a standard EXT4 partition mounted on a directory that I will share with other Linux installations, and the root partition as BTRFS.

The installed system

So here we are on the installed system; as usual, I’m greeted by the initial setup dialog:

Wayland works and touchpad gestures work as well.

Grub detected my existing Windows installation so that I could boot Windows from the grub menu.

Usually, I have to increase the font size on my computers. This laptop provides 1920×1080 (16:9) on a 15.6-inch screen. Typically, I have to use Gnome Tweaks, but in this case, using the “Accessibility” menu and selecting “Large Text” was enough for having a readable screen (this corresponds to a font scaling factor of 1.25):

Power consumption

I selected “Power Saver” as the power profile in the Gnome menu, and I have installed “powertop”. I ran “sudo powertop --auto-tune”, and then I ran “sudo powertop” to see the power consumption without further interacting with the computer:

If I decrease the brightness a bit, it looks even better:

Since 9 hours is the declared time in the computer spec, I’d say Linux works great on this computer in this respect (even better than Windows).

See also later in this article another mechanism to improve power consumption.

Other configurations

I had to perform some additional tweaks, which I had already blogged about:

Finally, I read on the Arch Wiki and in other articles that it’s better to disable the VMD controller in the BIOS to optimize power consumption.

WARNING: if you disable VMD in the BIOS, Windows will refuse to boot. To avoid this problem:

  • boot Windows and configure it to boot in safe mode and reboot;
  • disable VMD in the BIOS (as shown in the following);
  • boot Windows (in safe mode) and reboot Windows in normal mode.

To disable VMD in the BIOS, go to the “Main” section:

Press Ctrl+S to show the advanced hidden entries (including the VMD Controller) and disable it:

After rebooting into Ubuntu, I notice that the fan is almost always off, so maybe disabling VMD does something concerning power consumption.

That’s all!

Linux runs fine on this laptop! 🙂

Stay tuned for other blog posts about other Linux distributions installed on this laptop.

Installing Amarok on Arch Linux

I have always liked Amarok, the (initially) default KDE media player. It’s very feature-rich; Elisa doesn’t compare. Moreover, it has two crucial features that I haven’t found in any other players:

  • it saves statistics (play count and stars) directly into the music file
  • it synchronizes statistics with iPod

Unfortunately, although it is still maintained, you won’t find pre-built packages of Amarok in mainstream distributions (e.g., Ubuntu). Thus, you must install it from sources, which is problematic. However, for Arch Linux, there’s an AUR package, which takes care of the compilation and, most of all, its dependencies.

In this blog post, I’ll summarize the steps for installing Amarok to access iPods (I still have an iPod classic).

First, you need to install the Phonon backend required by Amarok:
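It should be one of the Qt5 Phonon backends, e.g.:

  sudo pacman -S phonon-qt5-gstreamer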

If you want to use Amarok with an iPod, you must first install

IMPORTANT: the iPod library (this must be present when Amarok is compiled from sources; if you forget about that, you’ll need to recompile Amarok, e.g., by specifying “--rebuild” as a command line argument to the AUR helper):
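The library should be “libgpod”:

  sudo pacman -S libgpod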

Then, we’re ready to install (i.e., compile from sources) Amarok from the AUR repository (I’m using the “yay” AUR helper here, but if you use another one, use your preferred one):
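  yay -S amarok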

Now be patient: it will take several minutes for the compilation to finish (about 20 minutes on a decent machine)!

If you’re on KDE, you can now enjoy Amarok.

If you’re on GNOME, there’s still something to fix. In particular, you’ll see Amarok lacks lots of icons:

You need to install Breeze icons:
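  sudo pacman -S breeze-icons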

And now you can also enjoy icons:

Concerning the iPod: first, you have to mount it, and then you start Amarok so that Amarok can see the mounted iPod.

Enjoy your music! 🙂

Docker on macOS M1

Although I’m a Linux user, I also recently bought a Mac Air M1, and I wanted to use Docker (a big part of my TDD book) to ensure that my projects based on Docker work on m1 as well.

I then went to the Docker website for macOS and downloaded the version for the Apple m1 chip:

Then, I continued with the installation:

Let’s start it. Although m1 is fast, starting Docker Desktop takes some time.

Although I’m already familiar with Docker (on Linux), I decided to follow the “getting started” tutorial, which is well done:

At least, I’m sure that Docker is working on this machine.

The desktop app is well done, with a few sections to inspect Images, Containers, etc.

And, of course, there’s the “Preferences” section. For the moment, I stick with the defaults.

From the terminal, I ran the usual “hello-world” image:

I also tried to run a Ubuntu container. Inside the container, I verified that it’s running an “aarch64” version instead of the x86 one (“amd64”).

I also installed “file” to verify that it’s using aarch64 binaries (“arm64”):

From the Desktop application, you can quickly enter a container with a terminal:

Now, it’s time to verify that my Java projects based on Docker work as expected.

Java & Maven

This is a simple Maven example (a pom.xml file) that uses the https://github.com/fabric8io/docker-maven-plugin to start and stop a MySql container:

Run “mvn docker:start” to start a MySql container with a random mapped port. The command will wait for the container to be ready (it looks for a “ready” string within 20 seconds). After the command succeeds, the container will be running in the background. Run “mvn docker:stop” to stop the started container.

To avoid errors of this shape:

You need a recent version of the https://github.com/fabric8io/docker-maven-plugin. For example, 0.38.1. It also works with the current latest version, 0.40.2.

Actually, the first time I tried this project, it did not work (not because of the above error), but after a recent update, it started to work, maybe because of this added link:

Note that not all the images are available for this architecture “aarch64”. For example, if you try to use this older version of “mysql”, you get this error:

However, you can force the intel architecture for an image, as documented here https://docs.docker.com/engine/reference/commandline/run/, e.g., with this environment variable correctly set:
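The documented environment variable is:

  export DOCKER_DEFAULT_PLATFORM=linux/amd64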

For example, for the above Maven project:

If you now enter the container, you can verify that you’re running an x86_64 image:

However, such images will run slower because they are emulated:

Testcontainers

I also use Testcontainers to start Docker containers from the JUnit tests in my projects.

For example, I’m using this project from my TDD book, https://github.com/LorenzoBettini/it-docker-mongo-example, and it works out of the box.

Eclipse Docker Tooling

Currently, the Eclipse plugin for Docker, Docker Tooling, does not work: it cannot connect to Docker. This has been reported (https://github.com/eclipse-linuxtools/org.eclipse.linuxtools/issues/61), and a patch is available to make it work: follow the instructions detailed here https://github.com/eclipse-linuxtools/org.eclipse.linuxtools/issues/61#issuecomment-1326838270. I tested it, and it works:

To summarize, using Docker on macOS m1 seems to work fine! 🙂

Installing Arch Linux the (not so) hard way

After using EndeavourOS, an Arch-based distro, for some time with much pleasure and appreciating Arch mechanisms (packages and AUR), I decided it was time to try the “real thing” and install Arch the “hard way” 🙂 Spoiler: it’s not that hard!

I thought it was hard. For sure, it’s more complicated than other distro installation procedures, but, to be honest, after using Linux for more than 20 years, I thought there was not much to be scared of 😉

I now use Arch (besides other distros) on my machines with great pleasure. Of course, I did many experiments with virtual machines before installing Arch on bare metal. I know there are many guides and tutorials, but I’d like to summarize my steps for installing Arch (with a SWAP partition, an EXT4 partition for data to be shared among distros, and a BTRFS primary partition). In particular, in this blog post, I’ll describe my steps for installing Arch on a virtual machine, which, as I’ve just said, is the best way to get confident with Arch and not be scared of installing it on a real computer. Moreover, many guides I noticed miss a few points that are, instead, essential.

Of course, the best reference is the excellent official guide, and I’ll use the official guide as a reference while following along, https://wiki.archlinux.org/title/installation_guide. Note that there are still a few parts in the guide that refer to other parts of the excellent Arch wiki, and I had a few minor problems the first time I tried the Arch installation.

Here we go!

Create and configure the virtual machine with enough disk space (dynamically allocated so you won’t waste space on your disk), let’s say 100Gb. Make sure you enable EFI in the virtual machine configuration. Of course, insert the Arch Linux ISO as a live CD in the virtual machine. I’m going to use archlinux-2022.09.03-x86_64.iso.

As described in a previous post, I’d suggest performing the installation by connecting via SSH to the virtual machine. This way, you’re using a local terminal, so copy and paste will work. In particular, since the Arch installer is textual, being able to copy and paste commands from a local terminal makes everything easier. Moreover, the keyboard layout will be the host system’s, so it will already be configured correctly; inside the virtual machine, you’d have to configure it yourself.

(On a side note, even when installing Arch on a real computer, I prefer doing that via SSH, of course, from another computer.)

Before starting the virtual machine, we must map the SSH port of the virtual machine to a local port to connect from our computer. This requires knowing the name you gave to your virtual machine. In this example, I called the virtual machine “Arch Gnome” (because I’ll then install Gnome on the Arch installation). We must run these instructions from the host computer:
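For VirtualBox, a NAT port-forwarding rule along these lines:

  VBoxManage modifyvm "Arch Gnome" --natpf1 "guestssh,tcp,,2522,,22"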

Port 2522 is the one we’ll have to use later for connecting to the virtual machine via localhost. Of course, feel free to use another free port number as long as you’ll use it consistently from now on.

Start the virtual machine:
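E.g., from the command line:

  VBoxManage startvm "Arch Gnome"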

Inside the live environment, the SSH server is already up and running. However, since we’ll connect with the root account (the only one present), we must give the root account a password. By default, it’s empty, and SSH will not allow you to log in with a blank password. Choose a password. This password is temporary, and if you’re in a trusted local network, you can choose an easy one.
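Inside the live environment, that is simply:

  passwd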

Now, we can connect via SSH to the virtual machine through localhost. If you have already connected via SSH to localhost on this port, you might get an error of this shape:

All you have to do is edit the known_hosts file by removing the offending lines and try again. You will have to remove all the lines that start with “[127.0.0.1]:2522”.
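Alternatively, ssh-keygen can do that for you:

  ssh-keygen -R "[127.0.0.1]:2522"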

Note that we’re using port 2522 because we previously used that for creating the port mapping. Let’s connect to the virtual machine and type the password we have previously specified for the root account inside the virtual machine (Accept the fingerprint when asked.):
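  ssh -p 2522 root@localhost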

In your local terminal, you see that you get the colors of the virtual machine (now, you’re inside the virtual machine):

Let’s set the console keyboard layout (the default layout is US; if that’s fine with you, skip the next step). This step is not strictly required for our local terminal: even if we’re inside the virtual machine, we’re using our local terminal, so we already use the correct layout. However, let’s do that anyway since we want to simulate an actual installation. Moreover, having the proper layout is good if we want to run commands directly from the VirtualBox window.

I already know the layout I want for my Italian keyboard, so I run:
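  loadkeys it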

If you don’t know the exact layout, you can list the available ones with
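  localectl list-keymaps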

You can verify in the VirtualBox window that the layout is applied correctly.

Since we are in a virtual machine, the machine should already be able to access the Internet if your host is correctly connected (and that’s required to install Arch Linux). However, if you want to simulate what you would do with an actual installation on bare metal, you can ping a remote host and verify that everything’s OK:
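For example:

  ping -c 3 archlinux.org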

Before going on, as suggested in the official guide, it’s better to make sure the system clock is accurate by enabling network synchronization NTP:
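As in the official guide:

  timedatectl set-ntp true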

Partitioning the disk

How to partition the disk is your choice. In this example, I will partition the disk according to my needs. However, you need at least two partitions: one for booting in UEFI mode and one for the root filesystem.

In this example, I’ll create four partitions:

  • the one for booting in UEFI mode, formatted as FAT32, 300 MB (it should be enough for UEFI, but if unsure, go with 512 MB)
  • a swap partition, 20 GB (I have 16 GB of RAM, and if I want to enable hibernation, i.e., suspend to disk, that should be enough)
  • a partition meant to host common data that I want to share among several Linux installations on the same machine (maybe I'll blog about that in the future), formatted as EXT4, 30 GB
  • the root partition, formatted as BTRFS, taking the rest of the disk

To do that, I’m using cfdisk, a textual partition manager, which I find easy to use.

Now, it is time to get to know the device name of our disk using the command lsblk:
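  lsblk

On a VirtualBox machine with a single virtual disk, the output will look something like this (the sizes are illustrative; here I assume a 100 GB virtual disk):

  NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
  loop0    7:0    0  685M  1 loop /run/archiso/airootfs
  sda      8:0    0  100G  0 disk
  sr0     11:0    1  1.8G  0 rom  /run/archiso/bootmnt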

As you might guess, sda is the disk we are installing Arch Linux on. In a virtual machine, that's almost always the case. On a real machine, it might be different (for example, if you have an NVMe SSD, it will be something like nvme0n1). Needless to say, using the correct device name is crucial, especially on a real machine, or you might end up wiping essential data. The nice thing about a virtual machine is that you're in a "sandbox," so, at worst, you'll break your virtual machine.

So I run
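  cfdisk /dev/sda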

If it is a new virtual machine, you'll be asked to select a partition table type: choose gpt.

Start creating your partitions. Just use the menus of cfdisk; it’s easy (on the bottom, you will find some help). Once you create a partition, set the “Type” correctly. By default, the type is “Linux filesystem”. For UEFI, you have to specify the type “EFI System,” and for the swap partition, “Linux swap”.

That’s my final result:

Let’s “Write” the partition table to disk and “Quit”. We can also verify with lsblk that the result is as expected:
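  NAME     MAJ:MIN RM  SIZE  RO TYPE MOUNTPOINTS
  sda        8:0    0   100G  0 disk
  ├─sda1     8:1    0   300M  0 part
  ├─sda2     8:2    0    20G  0 part
  ├─sda3     8:3    0    30G  0 part
  └─sda4     8:4    0  49.7G  0 part

(The sizes reflect my choices above, on the 100 GB disk I assumed.)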

Of course, you must know which partition is meant for what. In my example, sda1 is for UEFI, sda2 for swap, sda3 for my shared data, and sda4 for root.

Format the partitions

According to my intended layout shown above, I’ll format the four partitions with the following commands:
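  mkfs.fat -F 32 /dev/sda1   # the UEFI partition
  mkswap /dev/sda2           # the swap partition
  mkfs.ext4 /dev/sda3        # the shared data partition
  mkfs.btrfs /dev/sda4       # the root partition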

Mount the partitions

This is also delicate, so you must use the correct device names. What follows is, of course, correct according to my layout.

First of all, let’s deal with the swap partition:
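  swapon /dev/sda2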

The presence of the BTRFS file system for the root partition makes things a bit more interesting (or a bit more complicated, as you prefer 😉).

First, we must mount the BTRFS filesystem on /mnt. Note that we are mounting it on /mnt only temporarily, to create the subvolumes (in a minute, we will mount the subvolumes in their final positions inside /mnt, together with the other partitions):
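  mount /dev/sda4 /mnt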

This will allow us to create the subvolumes. Again, what follows is the BTRFS subvolume layout I prefer. You might want to choose a different one. To use Timeshift, you must have at least @ for / and @home for /home. This is how I create the subvolumes I want:
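  btrfs subvolume create /mnt/@
  btrfs subvolume create /mnt/@home
  btrfs subvolume create /mnt/@snapshots
  btrfs subvolume create /mnt/@cache
  btrfs subvolume create /mnt/@log

(@ and @home are the ones required by Timeshift; @snapshots, @cache, and @log are a sketch of my usual extra subvolumes, for snapshots, /var/cache, and /var/log; adapt them to your taste.)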

As I said, this mount was temporary, just for creating subvolumes. In fact, we now unmount /mnt:
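  umount /mnt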

And we mount every single subvolume in its final "position" inside /mnt, also specifying a few additional options, like the general "noatime" and the BTRFS-specific "compress" to enable zstd compression:
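  mount -o noatime,compress=zstd,subvol=@ /dev/sda4 /mnt
  mount -m -o noatime,compress=zstd,subvol=@home /dev/sda4 /mnt/home
  mount -m -o noatime,compress=zstd,subvol=@snapshots /dev/sda4 /mnt/.snapshots
  mount -m -o noatime,compress=zstd,subvol=@cache /dev/sda4 /mnt/var/cache
  mount -m -o noatime,compress=zstd,subvol=@log /dev/sda4 /mnt/var/log

(The subvolume set is the one I sketched above; if you chose different subvolumes, adjust the commands accordingly.)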

The “-m” option makes mount create the target directory if it does not exist.

Finally, we can mount the remaining partitions. The UEFI one should be mounted to “/boot/efi” inside “/mnt”. I like to mount the “common” partition in “/media/bettini/common” inside “/mnt” because that’s where I’ll use it (relying on the fact that I’ll create a user “bettini” for myself). Again, choose something else for yourself. These are the commands:
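  mount -m /dev/sda1 /mnt/boot/efi
  mount -m /dev/sda3 /mnt/media/bettini/common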

This is the final layout of /mnt which, remember, is where our system will be installed (you can inspect it, e.g., with findmnt):
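  findmnt -R /mnt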

Select the mirrors

This part, documented in the official installation guide, is skipped in several blog posts I found online. However, this step is essential.

The mirrors are specified in the file /etc/pacman.d/mirrorlist.

The guide says:

On the live system, after connecting to the internet, reflector updates the mirror list by choosing 20 most recently synchronized HTTPS mirrors and sorting them by download rate.

The higher a mirror is placed in the list, the more priority it is given when downloading a package. You may want to inspect the file to see if it is satisfactory. If it is not, edit the file accordingly, and move the geographically closest mirrors to the top of the list, although other criteria should be taken into account.

You can verify that by inspecting the file /etc/pacman.d/mirrorlist. In my case, the Italian mirror is the last one, so it will be given the lowest priority. This sounds wrong to me. In particular, the documentation also points out:

This file will later be copied to the new system by pacstrap, so it is worth getting right.

Thus, I prefer to run the program reflector myself (see the reflector documentation for the individual arguments; of course, I'm using "Italy" as the country because that's where I live; I could also specify several values separated by a comma, e.g., "Italy,Germany"):
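  reflector --country Italy --age 24 --protocol https --sort rate --save /etc/pacman.d/mirrorlist

(The option values, mirrors synchronized in the last 24 hours, HTTPS only, sorted by download rate, are just a reasonable choice.)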

The following step does not seem to be required by the installation guide. However, to make sure we have an up-to-date PGP keyring (for checking the signatures of packages), at this point, I also run:
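  pacman -Sy archlinux-keyring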

Running pacstrap

Now, it's time to install the base packages, the Linux kernel, and firmware for common hardware using the pacstrap script. You specify the target directory, which, as you might guess, is /mnt, followed by the packages.

This is the command I run (I prefer the LTS kernel; if you want the latest kernel, use the "linux" package instead of "linux-lts"; you can also install both and then select one from the grub menu):
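  pacstrap -K /mnt base linux-lts linux-firmware btrfs-progs

(This is a sketch of the package list: btrfs-progs is there because the root filesystem is BTRFS, and you might also want to add an editor, e.g., nano. The -K option makes pacstrap initialize a fresh pacman keyring in the target.)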

This command will download about 500 MB, which might take some time depending on your Internet speed.

Configuring the system

Since we have already manually mounted all our partitions (on /mnt), the Arch ISO can generate for us the file fstab automatically through the command genfstab:
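  genfstab -U /mnt >> /mnt/etc/fstab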

The “-U” option tells genfstab to use UUID to refer to partitions (alternatively, “-L” can be used to use labels instead).

You can have a look at the result (of course, UUID will be different in your case):
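  cat /mnt/etc/fstab

For the BTRFS subvolumes, you should see entries of roughly this shape (UUIDs and some mount options elided):

  UUID=...  /      btrfs  rw,noatime,compress=zstd:3,...,subvol=/@      0 0
  UUID=...  /home  btrfs  rw,noatime,compress=zstd:3,...,subvol=/@home  0 0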

Note that although we used the option "compress=zstd" when mounting our BTRFS subvolumes, genfstab turned that into "compress=zstd:3" because "3" is the default compression level for zstd in BTRFS. If we wanted to make the compression level explicit, e.g., "1", we should have done that when mounting the subvolumes. Of course, you can always tweak the generated fstab as you see fit.

Now, we can “enter” our installation with “chroot” or, better, with the enhanced arch-chroot, which automatically binds other things like /dev and /proc:
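  arch-chroot /mnt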

Inside the chroot, the root directory is what used to be /mnt, so / now refers to our new installation. Also, the prompt changes to reflect this:

We now set the timezone of the installed system. You must use Region/City according to your location. Timezones are available in the directory /usr/share/zoneinfo/. In my case (Italy), I run:
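  ln -sf /usr/share/zoneinfo/Europe/Rome /etc/localtime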

Then, we use hwclock to set the Hardware Clock from the System Clock:
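  hwclock --systohc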

Then, we edit /etc/locale.gen and uncomment the locales we need; in my case, en_US.UTF-8 UTF-8 and it_IT.UTF-8 UTF-8. Since I already know the two locales, instead of editing the file (where all locales are commented out), I append them to the end of that file:
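  echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
  echo "it_IT.UTF-8 UTF-8" >> /etc/locale.gen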

And we generate the locales:
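  locale-gen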

We must also create the /etc/locale.conf file, and set the LANG variable accordingly. This can be done as follows:
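  echo "LANG=en_US.UTF-8" > /etc/locale.conf

(I'm assuming English as the main system language here; use LANG=it_IT.UTF-8 if you prefer Italian.)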

We also make permanent the initial changes to the console layout (remember, I used “it”; in your case, you need to use the code you previously specified):
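  echo "KEYMAP=it" > /etc/vconsole.conf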

We’ll deal with the network configuration (of the installed system) in a minute. But we can already create the files /etc/hostname and /etc/hosts. You have to choose your preferred hostname. In this example, I’m going to use “arch-vm-gnome”. So I generate the two files with the following two commands:
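  echo "arch-vm-gnome" > /etc/hostname
  printf "127.0.0.1\tlocalhost\n::1\t\tlocalhost\n127.0.1.1\tarch-vm-gnome\n" > /etc/hosts

(The /etc/hosts entries are the typical minimal ones: localhost for IPv4 and IPv6, plus the machine's own hostname.)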

The boot loader

Since we use BTRFS, we might want to tweak the file /etc/mkinitcpio.conf by adding these two modules:
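  MODULES=(btrfs crc32c)

(btrfs and crc32c are my assumption of the two modules meant here; they are the ones commonly suggested for BTRFS setups. Edit the MODULES line of /etc/mkinitcpio.conf accordingly.)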

And regenerate the initramfs images:
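  mkinitcpio -P

(The -P option regenerates the images for all installed kernel presets.)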

Let’s now install the bootloader. I prefer GRUB. So let’s install a few packages:
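  pacman -S grub efibootmgr

(efibootmgr is needed by grub-install on UEFI systems; you might also want the optional os-prober if you plan to multi-boot.)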

During the installation, you might want to take note of the recommendation for installing and configuring grub and the optional dependencies:

Now we install grub in the UEFI partition. Note that, unlike standard GUI Linux installations, you can specify the "--bootloader-id", which will be the identifier of this grub installation in UEFI. This is useful if you have several bootloaders on your machine. In this example, I'm using ArchGnome:
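  grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ArchGnome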

Hopefully, the installation should succeed, and we can generate the grub menu:
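  grub-mkconfig -o /boot/grub/grub.cfg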

Here’s the output of these two commands:

User accounts

Don’t forget to set the following passwords, or you will not be able to log in to the installed system once you reboot later.

We can now set the root password. This is the effective password for root in the installed system. This should be chosen carefully:
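  passwd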

I prefer to use "sudo", so I first install that:
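  pacman -S sudo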

This is also the moment to create your own user account; in my case, it is “bettini”.

I add the user to a few essential groups, in particular "wheel", which makes my user a superuser account. However, just relying on the group "wheel" is not enough: we must allow members of the group wheel to execute any command. This is done by uncommenting this line in /etc/sudoers: "%wheel ALL=(ALL:ALL) ALL". A sed command will accomplish that:
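  sed -i 's/^# \(%wheel ALL=(ALL:ALL) ALL\)/\1/' /etc/sudoers

As for creating the account itself and adding it to the groups, this is a sketch (the groups besides wheel are just a typical choice for a desktop user):

  useradd -m -G wheel,audio,video,storage bettini
  passwd bettini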

That would be enough to reboot and try our installation. However, we would have no desktop environment and no networking configuration (actually, since this is a virtual machine, networking should work out of the box: you don't have to configure any WiFi network, for example). So let's go on with some further installations and configurations:

Install Gnome

In this example, I'm going to install the GNOME desktop environment. Besides GNOME, I'm installing other necessary packages, like "NetworkManager" (for easily configuring networking in GNOME), "firewalld", "firefox", the package for choosing a power profile (useful for laptops), and other base packages, including the kernel headers (here I'm using "linux-lts-headers" because I installed the LTS kernel; otherwise, use "linux-headers"):
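  pacman -S gnome networkmanager firewalld firefox power-profiles-daemon linux-lts-headers base-devel

(The list is a sketch: "gnome" is the package group with the whole desktop, including GDM; the other names are the typical packages for what is described above; adapt them as needed.)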

About 700 MB will be downloaded.

Once done, we have to enable the services at boot (in particular, GDM, the login manager, and NetworkManager):
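  systemctl enable gdm.service
  systemctl enable NetworkManager.service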

Time to reboot!

Now, it’s time to leave the environment and unmount all the partitions:
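  exit
  swapoff -a
  umount -R /mnt
  reboot

(exit leaves the chroot; umount -R recursively unmounts everything mounted under /mnt.)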

And reboot into our new Arch Linux installation.

If everything goes fine, we should see the login manager, and we can enter Gnome:

So, in the end, it’s not so hard to install Arch 😉

Maybe it's a long procedure, but… most of it can be scripted! That will be the subject of another blog post, so stay tuned! 🙂