Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and international journals.

My Ansible Role for GNOME

I have already started blogging about Ansible; in particular, I have shown how to develop and test an Ansible role with Molecule and Docker, also on Gitpod.

This blog post will describe my Ansible role for installing the GNOME desktop environment with several programs and configurations. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.

This role is for my personal installation and configuration and is not meant to be reusable.

The role can be found here: https://github.com/LorenzoBettini/my_gnome_role.

The role assumes that at least the basic GNOME DE is already installed in the Linux distribution. The role then installs several programs I’m using on a daily basis and performs a few configurations (it also installs a few extensions I use).

At the time of writing, the role has the following directory structure, which is standard for Ansible roles tested with Molecule.

The role has a few requirements, listed in “requirements.yml”:
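Based on the modules and roles used later in this post (the community "dconf" module and "petermosmans.customize-gnome"), a hypothetical sketch of such a requirements file might look like this (the entries are assumptions; the real file may list more):

```yaml
# Hypothetical sketch of "requirements.yml"; the real file may list more entries.
collections:
  - name: community.general            # provides the "dconf" module
roles:
  - name: petermosmans.customize-gnome # used below to install GNOME extensions
```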

These requirements must also be present in playbooks using this role; my playbooks (which I’ll write about in future articles) have such dependencies in the requirements.

The main file “tasks/main.yml” is as follows:
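As a minimal sketch (the actual tasks in the role may differ), the debug output mentioned below could be produced with something like:

```yaml
# Sketch: print the distribution facts the role's conditionals rely on
- name: Show distribution information
  ansible.builtin.debug:
    msg: "{{ ansible_distribution }} {{ ansible_distribution_version }} ({{ ansible_os_family }})"
```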

This prints some debug information about the current Linux distribution. Indeed, the whole role has conditional tasks and variables depending on the current Linux distribution.

The file installs a few programs, mainly Gnome programs, but also other programs I’m using in GNOME.

The “vars/main.yml” only defines a few default variables used above:

As seen above, the package for Python "psutil" has a different name on Arch, so it is overridden there.

For Arch, we have to install a few additional packages, which are not required in the other distributions (file “gnome-arch.yml”):

The Guake drop-down terminal is installed in the corresponding YAML file.

The file “gnome-templates.yml” creates the template for “New File”, which, otherwise, would not be available in recent versions of GNOME, at least in the distributions I’m using.

For the search engine GNOME Tracker, I performed a few configurations concerning the exclusion mechanisms. This is done by using the Community “dconf” module:
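As a hedged sketch (the actual keys and values in the role may differ), a task excluding directories from Tracker's indexing with the community "dconf" module could look like:

```yaml
# Sketch: configure GNOME Tracker (tracker3) exclusions via dconf;
# the key path and value here are assumptions, not the role's actual settings
- name: Limit Tracker indexing to a few directories
  community.general.dconf:
    key: /org/freedesktop/tracker3/miner/files/index-recursive-directories
    value: "['&DESKTOP', '&DOCUMENTS', '&DOWNLOAD']"
    state: present
```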

This also ensures that older versions of Tracker, if present, are not installed. Moreover, while I use Tracker to quickly look for files (e.g., with the GNOME Activities search bar), I don't want to use "Tracker extract", which also indexes file contents. For indexing file contents, I prefer "Recoll", which is installed and configured in my dedicated playbooks for specific Linux distributions (I'll blog about them in the future).

Then, the file "gnome-configurations.yml" configures a few aspects (the comments should be self-documenting), including some custom keyboard shortcuts (such as the one for Guake, which, in Wayland, must be set explicitly as a GNOME shortcut):

Then, by using the "petermosmans.customize-gnome" role (see the requirements file above), I install a few GNOME extensions, which are specified by their identifiers (these can be found on the GNOME extensions website). I leave a few of them commented out, since I don't use them anymore, but I might need them in the future:

Then, we have the files for installing and configuring Flatpak, which I use only to install the GNOME Extension manager:

I installed them system-wide (the “user” option is commented out).

Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. The “default” scenario is Arch, which nowadays is my daily driver.

For Ubuntu, we have a “prepare.yml” file:

The reason for this is explained in my previous posts on Ansible and Molecule.

By default, I verify that "flatpak" is installed (see the default variables above: Flatpak is installed by default).

But I also have a scenario (in Arch) where I run the role without Flatpak:

For this scenario, the “verify.yml” verifies Flatpak is not installed:

Of course, this is tested on GitHub Actions and can be developed directly on the web IDE Gitpod.

I hope you find this post useful for inspiration on how to use Ansible to automate your Linux installations 🙂

Ubuntu, Oh My Zsh, Powerlevel10k and Meslo fonts

I haven’t been using Ubuntu for a while, but I wanted to give it another try. I’m using my Ansible playbook for installing ZSH, Oh My Zsh, and p10k (Powerlevel10k), so I thought everything would work like a charm.

However, after running the playbook and restarting, the terminal did not look quite right:

As you can see, the OS logo before the ">" is not displayed, and other icon fonts (I'm using exa/eza instead of "ls") are missing, too (e.g., the ones for YAML and Markdown files). In Arch, I knew how to solve icon problems for exa; in Ubuntu, I had never experimented in that respect.

However, the p10k GitHub repository provides many hints in that respect. Ubuntu does not provide packages for Nerd fonts, but the p10k repository links some Meslo fonts that can be downloaded directly.

The commands to solve the problem (provided you already have "fontconfig" and "wget" installed; otherwise, install them first) are:
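A sketch of such commands, assuming the four MesloLGS NF font files published in the p10k project's media repository (file names and URLs as recommended by p10k; adjust them if they change):

```shell
# Download the four recommended MesloLGS NF fonts into the user font directory
mkdir -p ~/.local/share/fonts
cd ~/.local/share/fonts
for style in "Regular" "Bold" "Italic" "Bold Italic"; do
  wget "https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20${style// /%20}.ttf"
done
```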

And then issue
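That is, refresh the font cache so the new fonts are picked up:

```shell
# Rebuild the font cache (requires "fontconfig")
fc-cache -f -v
```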

You can verify that they are now installed:
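For example, by listing the installed fonts and filtering for Meslo:

```shell
# List installed fonts matching "MesloLGS"
fc-list | grep -i "MesloLGS"
```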

Now, reboot (this seems to be required), and the next time you open the terminal, everything looks fine (note the OS icon and the icons for YAML and Markdown files):

Of course, you could also download another Nerd font from the corresponding GitHub repository, but this procedure seems to work like a charm, and you use the p10k recommended font (Meslo).

By the way, the Gnome Text Editor automatically uses the new icon fonts. Other programs like Kate (which I use in Gnome as well) have to be configured to use the Meslo font.

Dell OptiPlex 5040 MiniTower: upgrading SSD

I am writing this report about my (nice) experience upgrading the SSD (1 TB) to my Dell OptiPlex 5040 MiniTower. That’s an old computer (I bought it in 2016), but it’s still working great. However, its default SSD of 256 GB was becoming too small for Windows and my Linux distributions. This computer also came with a secondary mechanical hard disk (1 TB).

DISCLAIMER: This is NOT meant to be a tutorial; it's just a report. If you perform these operations, you do that at your own risk! Ensure you do not void the warranty by opening your computer.

I wrote this blog post as a reminder for myself in case I have to open this desktop again in the future!

To be honest, my plan was to add the new SSD as an additional SSD, but, as described later, I found out that the mechanical hard disk was a 2.5” one, so I replaced the old SSD with the new one (after cloning it). I've used a "FIDECO YPZ220C" to perform the offline cloning, which worked great!

This is the BIOS status BEFORE the upgrade:

I seem to remember that “RAID” is required to have Linux installed on such a machine.

This is the new SSD (a Samsung 870 EVO, 1 TB, SATA 2.5”):

The cool thing about this desktop PC, similar to other Dell computers I had in the past, is that you don’t need a screwdriver: you disassemble it just with your hands. However, I suggest you have a look at a disassembling video like the one I’ve used: https://www.youtube.com/watch?v=gXePa1N_8iI. I know the video is about a Dell Optiplex 7040 MT, while mine is a Dell Optiplex 5040 MT, but their shapes and internals look the same. On the contrary, the Dell Optiplex 5040 SmallFactor videos are not useful because there’s a huge difference between my MiniTower and a SmallFactor 5040.

These are a few photos of the disassembling, showing the handles to use to open the computer, disconnect a few parts, and access the part holding the 2.5” drives.

This is the part holding the two 2.5” drives (as I said, at this point I realized that the mechanical hard disk also occupies one of these places):

The SSD (the one I will replace) is the first one on top.

It’s easy to remove that: just use the handles to pull it off:

There are no screws to remove: you just enlarge the container to remove the SSD and insert the new one.

As I said above, I inserted the new one after performing the offline cloning.

Once I closed the desktop, the BIOS confirmed that the new SSD was recognized! 🙂

Now, some bad news (which is easy to fix, though): if you use a partition manager, e.g., in Linux, the SSD is seen as 1 TB, but the partitions are based on the original source SSD, so you end up with lots of free space that you cannot use!

For example, here’s the output of fdisk, which understands there’s something wrong with the partition table:

It also suggests that it’s not a good idea to try to fix it when one of the partitions is mounted.

Using a live ISO, e.g., the one from EndeavourOS, it is just a matter of fixing the partition table as follows.
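A possible way to do this (a sketch; the device name /dev/sda is an assumption, so double-check it with "lsblk" first) is to relocate the backup GPT data structures to the end of the larger disk with "sgdisk":

```shell
# Move the backup GPT header/table to the actual end of the new, larger disk
sudo sgdisk -e /dev/sda
# Afterwards, the free space can be used, e.g., by growing or adding partitions with gparted
```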

Now, you have access to the whole space in the disk.

For example, this is the output of “gparted” (Yes, I have a few Linux distributions installed on this PC):

That’s all! 🙂

Hyprland and Waybar

Up to now, I have shown how to get started with Hyprland with the initial configurations.

Now, I’ll show how to install the mainstream status bar in Hyprland: Waybar. Again, I’m going to do that for Arch Linux. As in the previous posts, I will NOT focus on the look and feel configuration.

Before continuing, the Waybar module for keyboard status (Caps lock and Num lock) requires the user to be part of the “input” group. Ensure your user is part of that group by running “groups”. If not, then add it with
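For example:

```shell
# Add the current user to the "input" group (required by Waybar's keyboard-state module);
# the change takes effect after logging out and back in
sudo usermod -aG input "$USER"
```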

Then, you must log out and log in.

First of all, the official package waybar already supports Hyprland (in the past, you had to install an AUR package). So, let’s install the main packages (as usual, you might want to make sure your packages are up-to-date before going on):
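On Arch, this boils down to something like:

```shell
# Install Waybar from the official repositories (after updating the system)
sudo pacman -Syu waybar
```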

Let's open a terminal and start Waybar:

The result is not that good-looking:

Waybar heavily relies on Nerd fonts for icons, and, currently, we don’t have any installed (unless you have already installed a few yourself).

The terminal will also be filled with a few warnings about a few missing things (related to Sway) and errors about failures to connect to MPD.

Let’s quit Waybar (close the terminal from where you launched it), and let’s fix the font problem by installing a few font packages:
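A possible set of packages (names from the Arch repositories; pick the icon/Nerd fonts you prefer):

```shell
# Nerd font symbols and Font Awesome, commonly used by Waybar's default configuration
sudo pacman -S ttf-nerd-fonts-symbols ttf-nerd-fonts-symbols-mono otf-font-awesome
```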

Let’s start Waybar again, and this time it looks better:

Try to click on the modules and see what happens. For some of them, the information shown will change (e.g., the time will turn into the date).

Let’s quit Waybar again, and let’s start configuring it. We must create the configuration files for Waybar (by default, they are searched for in “~/.config/waybar”). We can do that by using the default ones:
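On Arch, the defaults are installed under "/etc/xdg/waybar", so this might look like the following (in newer Waybar versions, the default file may be named "config.jsonc"):

```shell
# Copy the default Waybar configuration files into the user config directory
mkdir -p ~/.config/waybar
cp /etc/xdg/waybar/config ~/.config/waybar/
cp /etc/xdg/waybar/style.css ~/.config/waybar/
```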

The above command will copy "config" (with the configuration of Waybar modules, i.e., the "boxes" shown in the bar; the configuration uses the JSON file format) and "style.css" (for the style).

Let’s edit “config”. At the time of writing, this is the initial part of the configuration file:

The initial parts specify the position and other main configurations. This part must be enabled:
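That is, the "layer" setting (commented out in the default file):

```jsonc
{
    "layer": "top", // show the bar above windows
    "position": "top",
    // ...
}
```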

Otherwise, Waybar popups render behind the windows.

Let’s edit the modules that must be shown on the bar’s left, center, and right. Of course, this is subjective; here, I show a few examples. The modules starting with “sway” are for the Sway Window Manager, while we’re using Hyprland, and we must use the corresponding ones:
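For example (a sketch: "hyprland/workspaces" and "hyprland/window" replace the corresponding "sway/…" modules; the right-hand modules are just a possible selection):

```jsonc
"modules-left": ["hyprland/workspaces"],
"modules-center": ["hyprland/window"],
"modules-right": ["pulseaudio", "network", "cpu", "memory", "battery", "clock", "tray"],
```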

You might want to have a look at the Waybar Wiki for an explanation of the modules.

Remember that for each module you mention here, you can have a configuration in the rest of the file, e.g.:

Otherwise, you get the defaults for that module.

Each module, together with its configuration, is documented in the Waybar Wiki.

Let’s focus on the left and center modules. I’ve opened three workspaces, and here’s the result (note the workspace indicator on the left, and on the center, we see the currently focused window’s title, in this case, Firefox):

By editing the “style.css” file, we can change the workspace indicator so that we better highlight the current workspace:
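For example, with a rule like the following (a sketch: Waybar's Hyprland workspaces module marks the current workspace button with the "active" class; the colors are just placeholders):

```css
/* Highlight the currently focused workspace button */
#workspaces button.active {
    background-color: #64727d;
    color: #ffffff;
}
```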

Restart Waybar, and now the current workspace is well distinguished:

The "tray" module is useful to show applications running in the background, like Skype, Dropbox, or the network-manager-applet.

Let’s now define a custom module, for example, one for showing a menu for locking the screen, logging out, rebooting, etc. To do that, first, we need to install the AUR package “wlogout”:
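Assuming "yay" as the AUR helper:

```shell
# Install wlogout from the AUR
yay -S wlogout
```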

Let’s say we want to add it as the last module on the right. We edit the Waybar config file like this:

Then, in the same file, before closing the last JSON element, we define such a module (remember to add a comma after the previously existing last module):
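A hypothetical sketch of both changes (the module name "custom/power" and the glyph are my choices, not fixed names):

```jsonc
// 1) add the custom module as the last entry on the right
"modules-right": [ /* ...existing modules... */ "custom/power"],

// 2) define the module itself (before the closing brace of the configuration)
"custom/power": {
    "format": "⏻",          // a Nerd-font power glyph; any character works
    "tooltip": false,
    "on-click": "wlogout"   // show the wlogout menu when clicked
}
```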

Note that I’ve used a character using the installed Nerd font. Of course, you can choose anything you like. The “wlogout” menu will appear when you click on that module. Let’s restart Waybar and verify that:

By editing the “style.css”, you can customize the style of this custom module, e.g.,

When we’re happy with the configuration, we modify the Hyprland configuration file to start Waybar automatically when we enter Hyprland:
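That is, with an "exec-once" directive:

```conf
# In hyprland.conf: start Waybar on login
exec-once = waybar
```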

Restart Hyprland and Waybar will now appear automatically.

Finally, you can have several Waybar bars, i.e., instances, in different parts of the screen, each one with a different configuration.

For example, let's create another Waybar configuration with a Taskbar showing all the running applications from all workspaces. This can be useful to quickly look at all the running applications and quickly switch to any of them, especially in the presence of many workspaces.

I create another configuration file in the “~/.config/waybar” directory, e.g., “config-taskbar”, with these contents (you could also configure several Waybar instances in the same configuration file, but I prefer to have one configuration file for each instance):
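A minimal sketch of such a file, using Waybar's "wlr/taskbar" module (the options here are examples, not the exact configuration):

```jsonc
{
    "position": "bottom",
    "modules-center": ["wlr/taskbar"],
    "wlr/taskbar": {
        "format": "{icon}",
        "icon-size": 20,
        "on-click": "activate",      // clicking a task focuses its window
        "tooltip-format": "{title}"
    }
}
```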

We can call it from the command line as follows:
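With Waybar's "-c" option:

```shell
# Launch a second Waybar instance with the alternative configuration file
waybar -c ~/.config/waybar/config-taskbar
```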

Here’s the second instance of Waybar on the bottom, showing all the running applications (from all the workspaces):

Unfortunately, in my experience, not all icons for all running applications are correctly shown: for example, for “nemo”, you get an empty icon in the taskbar. You can click it, but visually, you don’t see that… maybe it has something to do with the icon set. I still have to investigate.

You can run both instances when Hyprland starts by putting these two lines in the “hyprland.conf” file:
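That is:

```conf
# In hyprland.conf: start both Waybar instances on login
exec-once = waybar
exec-once = waybar -c ~/.config/waybar/config-taskbar
```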

Stay tuned for more posts about Hyprland. 🙂

Maven, parents, aggregators and version update

In this blog post, I will describe a few scenarios where you want to update versions in a multi-module Maven project consistently. In these examples, you have both a parent POM, which is meant to contain common properties and configurations to be inherited throughout the other projects, and aggregator POMs, which are meant to be used only to build the multi-module project. Thus, the parent itself is NOT meant to be used to build the multi-module project.

The source code of the projects used in this post can be found here: https://github.com/LorenzoBettini/maven-versions-example.

I'm not saying that such a POM configuration and structure is ideal. I mean that separating the concepts of parent and aggregator POMs can be seen as a good practice (though, typically, they are implemented in the same POM). In complex multi-module projects, you might want to separate them. In particular, as in this example, we can have several separate aggregator POMs because we want to be able to build different sets of child projects. To make things a bit more complex and interesting, the aggregators inherit from the parent. Again, this is not strictly required, but it allows the aggregators to inherit the shared configurations and properties from the parent. However, this adds a few more (interesting?) problems, which we'll examine in this article.

We’re going to use the standard Maven plugin’s goal: org.codehaus.mojo:versions-maven-plugin:set:

Description:
Sets the current project’s version and based on that change propagates that change onto any child modules as necessary.

This is the aggregator1 POM (note that it inherits from the parent and also mentions the parent as a child because we want to build the parent during the reactor, e.g., to deploy it):

Let’s make sure we can build it:

The aggregator2 is similar; here’s the build result:

Note that parent2 has parent1 as a parent:

Let’s say we want to update the version of parent1 consistently. Where do we run the following Maven command?
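A sketch of the command, using the plugin's "set" goal (the new version number is just an example):

```shell
# Set a new version consistently; skip the generation of backup POM files
mvn org.codehaus.mojo:versions-maven-plugin:set \
  -DnewVersion=0.0.2-SNAPSHOT -DgenerateBackupPoms=false
```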

Let’s try to do that on the aggregator1:

Maybe on the parent1?

It looks like it worked… or it didn’t?

As you can see, it updated only the version of parent1, and now all the other children will refer to an old version. It didn’t work!

In fact, if we try to build the whole project with the aggregator1, it fails:

Before going on, let’s revert the change of version in parent1.

Let’s add references to aggregators in parent1

Let’s run the Maven command on parent1:

That makes sense: the aggregator has the parent as a child, and the parent has the aggregator as a child.

But what if we help Maven a little to detect all the children without a cycle?

It looks like it is enough to “hide” the references to children inside a profile that is NOT activated:
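A sketch of the idea in parent1's POM (the module names follow the projects in this post, the relative paths are assumptions, the profile id is arbitrary, and the profile is never activated):

```xml
<profiles>
  <profile>
    <!-- never activated: only used so versions:set can find all children -->
    <id>update-versions-only</id>
    <activation>
      <activeByDefault>false</activeByDefault>
    </activation>
    <modules>
      <module>../aggregator1</module>
      <module>../aggregator2</module>
      <module>../parent2</module>
    </modules>
  </profile>
</profiles>
```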

And the update works. All the versions are consistently updated in all the Maven modules:

The important thing is that aggregator2 does not have parent1 as a module (just parent2), or the Maven command will not terminate.

We can also consistently update the version of a single artifact; if the artifact is a parent POM, the references to that parent will also be updated in children. For example, let’s update only the version of parent2 by running this command from the parent1 project and verify that the versions are updated consistently:

Unfortunately, this is not the correct result: the version of parent2 has not been updated. Only the references to parent2 in the children have been updated to a new version that will not be found.

For this strategy to work, parent2 must have its own version, not the one inherited from parent1.

Let’s verify that: let’s manually change the version of parent2 to the one we have just set in its children:

And let’s try to update to a new version the parent2:

Nothing has changed… it did not work.

Let’s try and be more specific by specifying the old version (after all, we’re running this command from parent1 asking to change the version of a specific child):
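A sketch of such a command (the groupId and the version numbers are examples):

```shell
# Run from parent1: update only parent2 (and the references in parent2's children)
mvn org.codehaus.mojo:versions-maven-plugin:set \
  -DgroupId=com.example -DartifactId=parent2 \
  -DoldVersion=0.0.2-SNAPSHOT -DnewVersion=0.0.3-SNAPSHOT
```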

This time it worked! It updated the version in parent2 and all the children of parent2.

Let’s reset all the versions to the initial state.

Let’s remove the “hack” of child modules from parent1 and create a brand new aggregator that does not inherit from any parent (in fact, it configures the versions plugin itself) but serves purely as an aggregator:

Let’s try to run the version update from this aggregator:

It updated the version of the aggregator only! That’s not what we want.

Let’s revert the change.

We know that we can use the artifactId

What if the same child is included in our aggregators aggregator1 and aggregator2? For example:

We get an error if we try to update the version as above because the same module is present twice in the same reactor:

But what if we apply the same trick of the modules inside a profile in this new aggregator project, which is meant to be used only to update versions consistently?

For example,

This time, the version update works even when the same module is present in both our aggregator1 and aggregator2! Moreover, versions are updated only once in the module mentioned in both our aggregators:

Maybe, this time, this is not to be considered a hack because we use this aggregator only as a means to keep track of version updates consistently in all the children of our parent POMs.

As I said, these might be seen as complex configurations; however, I think it’s good to experiment with “toy” examples before applying version changes to real-life Maven projects, which might share such complexity.

LG GRAM 16 (16Z90P): Adding a second SSD

I am writing this report about my (nice) experience adding a second SSD (an NVMe, 1 TB) to my LG GRAM 16, my main laptop, which I've enjoyed for two years.

DISCLAIMER: This is NOT meant to be a tutorial; it's just a report. If you perform these operations, you do that at your own risk! Ensure you do not void the warranty by opening your laptop.

I wrote this blog post as a reminder for myself in case I have to open the laptop again in the future!

I also decided to describe my experience because there seems to be some confusion and doubts about which kind of SSD you can add in the second slot (SATA? or PCI?). At least for this model, LG GRAM 16 (16Z90P), I could successfully and seamlessly insert an NVMe M.2 PCIe3. This is the SSD I added, Samsung SSD 970 EVO Plus (NOTE: this SSD does NOT include a screw for securing the SSD to the board; however, the LG GRAM has a screw in the second slot, so no problem!):

You can also see the internals of the laptop below and what's written in the second slot.

These are the tools I’ve used:

It's time to open the back cover. That's the first time I did that, and I found it not very easy… not impossible, but not even as easy as I thought. Fortunately, there are several videos that show this procedure. In particular, there's an "official" one from LG, which I suggest following: https://www.youtube.com/watch?v=55gM-r2xtmM.

Removing the rubber feet (there are three types) was not easy from the beginning: they are “sticky”, but with a proper tool (the grey one in the picture above), I managed to remove the bigger ones and the one at the top.

For the other smaller ones, I had to use a cutter. Be careful because they tend to jump in your face (mind your eyes). 😉 Moreover, you have to be careful not to use too much force because you might break the bigger ones (in the linked video, you can see that the person breaks the one in the lower-left corner).

And here’s the back cover with all the screws revealed:

The screws are not all of the same type either, so I made sure to remember their positions:

Again, removing the cover after removing all the screws might not be straightforward: take inspiration from the linked video above! It took me some effort and a few attempts, but I finally made it! Here's the removed cover and the internals of the laptop (well organized and neat, isn't it?):

Now, let’s zoom in on the second SLOT:

You can see that it says NVME and SATA3. I haven’t tried with a SATA3, but, as I said, I had no problem with the Samsung NVME!

IMPORTANT: unplug the battery cable before continuing (at least, that’s what the LG video says). That’s easy: pull it gently.

Remove the screw from the second slot:

Insert the SSD (that’s easy):

And secure it with the screw:

Of course, now we have to reconnect the battery cable!

Let’s take a final look at the result:

OK! Let’s close the laptop: putting the cover back is easier than removing it, but you still have to ensure you close the cover correctly. Put the screws back and the rubber feet (that’s also easy because they are still sticky).

The moment of truth… will the computer recognize the added SSD? Let’s enter the BIOS and… suspense… 😀

IT DOES!!! (see that at the bottom. Yes, I have several Linux distributions installed on this laptop)

I booted into Linux (sorry, I almost never use Windows, and, to be honest, I still haven’t checked whether Windows is happy with the new SSD) and used the KDE partition manager to create a new partition to make a few experiments:

Everything looks fine! In the meantime, I’ve also created another partition for virtual machines on the new SSD, which works like a charm! I haven’t checked whether it’s faster than the primary SSD, which comes with the laptop.

That’s all! 🙂

Hyprland and ssh-agent

In this post, I’d like to document how to use ssh-agent in Hyprland to store SSH key passphrases.

This is part of my blog series on Hyprland.

Assuming you use SSH keys protected with a passphrase, each time you use an SSH connection with the SSH key, you are prompted for the passphrase.

You can use ssh-agent and ssh-add.

First, start the agent:
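For example, in a terminal:

```shell
# Start an ssh-agent and export SSH_AUTH_SOCK/SSH_AGENT_PID in the current shell
eval "$(ssh-agent -s)"
```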

Then, use ssh-add to add a specific key (see the documentation) or all the keys:
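For example, to add all the default keys:

```shell
# Add the default keys (e.g., ~/.ssh/id_ed25519, ~/.ssh/id_rsa);
# you are prompted for the passphrase once
ssh-add
```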

You are prompted for the passphrase, but then, the passphrase is remembered, and you are not asked anymore (unless the lifetime expires, by default, 1 hour).

Unfortunately, this holds only in the current terminal. It works for other applications started from that terminal. For example, if you start Visual Studio Code from that terminal and access a Git repository with your SSH key, the passphrase is reused without prompting you. If you start another terminal or a program from its launcher, you are prompted for the passphrase again; moreover, in such a situation, the passphrase is not remembered, since you would have to rerun ssh-add.

Instead, I’d like to be prompted for the passphrase only the first time I use ssh; for the current desktop session, I don’t want to enter the passphrase again. Of course, if I reboot, I’m OK with re-entering the passphrase the first time I need it.

In GNOME, you can rely on its keyring to prompt you for the passphrase and store it for the current session or permanently. In KDE, you have a similar mechanism, which, however, has to be appropriately configured (that’s out of the scope of this post).

In Hyprland, you have to set up such mechanisms manually.

The Arch Wiki, as usual, documents an easy solution, which I’ll report here (I haven’t tried alternatives, but this one is pretty easy to set up).

First (https://wiki.archlinux.org/title/SSH_keys#ssh-agent), add this option to your “~/.ssh/config”:
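That is:

```
# ~/.ssh/config
AddKeysToAgent yes
```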

This way, all SSH clients, including Git, store keys in the agent on first use.

We must ensure an ssh-agent is automatically started when you enter Hyprland.

Again, the Arch Wiki (https://wiki.archlinux.org/title/SSH_keys#Start_ssh-agent_with_systemd_user) tells you how to do that by starting ssh-agent as a systemd user service.

Create this file “~/.config/systemd/user/ssh-agent.service” with these contents:
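As reported in the Arch Wiki, the unit should look like this:

```ini
[Unit]
Description=SSH key agent

[Service]
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK

[Install]
WantedBy=default.target
```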

Then ensure the environment variable "SSH_AUTH_SOCK" is set to "$XDG_RUNTIME_DIR/ssh-agent.socket". For example, in the Hyprland configuration file:
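Using Hyprland's "env" keyword:

```conf
# In hyprland.conf
env = SSH_AUTH_SOCK,$XDG_RUNTIME_DIR/ssh-agent.socket
```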

Now, start the service for your user at boot:
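With systemd's user instance:

```shell
# Enable (and start) the ssh-agent user service
systemctl --user enable --now ssh-agent.service
```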

Reboot to ensure the environment variable is set correctly and the service is started.

Try to use ssh, and you will be prompted for your passphrase. Use ssh again, and you should not be asked for the passphrase. Start a new terminal, use SSH again, even with Git, and you will not be asked for the passphrase. This also works for other programs that need SSH, for example, Visual Studio Code when accessing a Git repository, or Unison when connecting through SSH.

From now on, you'll be asked for the passphrase only the first time you use ssh from any program, and never again for that session.

Stay tuned for more posts about Hyprland. 🙂

KDE Plasma and Wayland: one year later

I had blogged about KDE Plasma and Wayland, and now I’m evaluating that again one year later.

In general, it looks like it improved a lot, even though it looked already promising last year.

Again, I’m testing this on EndeavourOS.

When I logged into the Plasma Wayland session with a brand-new user, the system automatically switched to 150% scaling. This is good because my LG GRAM 16 needs at least that scaling level.

Unlike my experiments last year, this is enough to have a nice-looking environment, and even GTK applications look great without blurring! For example, this is the EndeavourOS Welcome application (GTK-based):

Usually, a 150% scaling on this computer is not enough for my eyes, and I prefer 175%. I then configured the new scaling, pressed “Apply,” and everything was applied immediately: no logout was required (instead, on X11, this is usually required to have everything scaled correctly):

Touchpad gestures still work great, but they are still not configurable:

  • 4 Finger Swipe Left –> Next Virtual Desktop.
  • 4 Finger Swipe Right –> Previous Virtual Desktop.
  • 4 Finger Swipe Up –> Desktop Grid.
  • 4 Finger Swipe Down –> Available Window Grid.

Context menus of desktop and other Plasma widgets (e.g., the applications menu) in the presence of fractional scaling look nicer in Wayland than in X11!

For example, in X11, menu entries look too crowded:

While on Wayland, they look nice (the fonts also look better on Wayland):

Concerning GTK applications, my main one is Eclipse. As it was happening when I tried Plasma Wayland last year, the Eclipse splash screen has the title bar and window buttons, which looks strange:

Besides that, Eclipse looks nice:

Note, however, that there are still a few bad things: the Eclipse icon is not recognized, and you get a generic Wayland icon. This also happens in GNOME Wayland. There's an open bug, but still no solution. Moreover, things like hover pop-ups are dismissed too soon in Wayland.

One last bad thing I noted is that the login manager SDDM does not remember the X11 or Wayland session per user: it just uses the last used session globally. I’m pretty sure it wasn’t like that in the past. I don’t know, though, whether SDDM or Plasma is to blame here 😉

I guess it’s time to use KDE Plasma with Wayland daily and see how it goes. 🙂

Hyprland: getting started (part 2)

This is the second blog post on getting started with Hyprland (see the first post here).

In this article, we install and configure a few other tools. We will also look at the customization of keyboard shortcuts.

Other tools

As noted here https://wiki.hyprland.org/Useful-Utilities/Must-have/, you need an Authentication Agent:

Authentication agents are the things that pop up a window asking you for a password whenever an app wants to elevate its privileges.

Let’s install the suggested one:
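On Arch, the KDE polkit agent can be installed with:

```shell
sudo pacman -S polkit-kde-agent
```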

And then we start it in the Hyprland configuration file with the “exec-once” directive:
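For example (the binary path is the usual one on Arch):

```conf
# In hyprland.conf: start the KDE polkit authentication agent on login
exec-once = /usr/lib/polkit-kde-authentication-agent-1
```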

Let’s restart Hyprland (such a change in the configuration file needs a restart), e.g., with the default shortcut SUPER + M, we exit Hyprland, and then we can log back in. When a program needs to elevate its privileges, we get the KDE dialog. For example, if we use the EndeavourOS Welcome App to update the mirrors, we get the dialog as soon as the mirror file must be saved:

The same happens if we run from a terminal a “systemctl” command that needs superuser privileges:

Having the authentication dialog tiled as the other windows is not ideal. So let’s create a Window rule in the Hyprland configuration to make it floating:
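A sketch of such a rule, using the "windowrulev2" syntax (verify the actual class on your system with "hyprctl clients"):

```conf
# Make the polkit authentication dialog float instead of being tiled
windowrulev2 = float, class:(org.kde.polkit-kde-authentication-agent-1)
```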

TIP: to know the values for “class”, you can use “hyprctl clients” when the desired application is running and inspect its output by looking for the “class:” part.

Keyboard shortcuts

Hyprland is about using keyboard shortcuts a lot. You might want to take some time to get familiar with the main keyboard shortcuts for launching and closing (look at the configuration file). Change them as you see fit if you don’t like the default ones.

These are the default ones as set in the example configuration we started with:
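For reference, the example configuration at the time of writing contained bindings along these lines ($mainMod is defined as SUPER in the same file); treat this as a sketch, since the exact set may differ between Hyprland versions:

```
bind = $mainMod, Q, exec, kitty
bind = $mainMod, C, killactive,
bind = $mainMod, M, exit,
bind = $mainMod, E, exec, dolphin
bind = $mainMod, V, togglefloating,
bind = $mainMod, R, exec, wofi --show drun
```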

I prefer these (note that SUPER+Q now has an entirely different behavior):

Some additional shortcuts might be helpful as well, such as the following (“grouping” has to do with tabbed windows):
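As a sketch (the dispatcher names “togglegroup” and “changegroupactive” are Hyprland’s; the key choices here are mine):

```
bind = SUPER, G, togglegroup
bind = SUPER, TAB, changegroupactive, f
```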

And for moving tiled windows:
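A possible sketch, using the “movewindow” dispatcher with a direction (the modifier choice is mine):

```
bind = SUPER SHIFT, left, movewindow, l
bind = SUPER SHIFT, right, movewindow, r
bind = SUPER SHIFT, up, movewindow, u
bind = SUPER SHIFT, down, movewindow, d
```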

Mouse gestures

Hyprland provides mouse gestures (swipe) for switching among workspaces. This is not enabled by default, but it’s easy to do: change the existing “gestures” section as follows:
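The change amounts to enabling the swipe option in that section (a minimal sketch):

```
gestures {
    workspace_swipe = on
}
```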

Screenshots

Let’s configure the system to take screenshots.

First, we install “grim” (A screenshot utility for Wayland)
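```shell
sudo pacman -S grim
```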

Let’s also install an image viewer, like “Eye of Gnome”:
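The Arch package for Eye of GNOME is “eog”:

```shell
sudo pacman -S eog
```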

You can try running “grim” from a terminal to see how it works: by default, it takes a screenshot of the whole screen and saves the corresponding image, with a name containing the date and time, in the “Pictures” folder. For example, after running “grim” twice, I get the following:

What if we want to take a screenshot of a region? We need another program, “slurp” (Select a region in a Wayland compositor)
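```shell
sudo pacman -S slurp
```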

And we configure a few key bindings (note the last one, which takes a screenshot of the currently active window: this requires several commands to get the active window through Hyprland and then compute a few screen coordinates to pass to “grim”):
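A sketch of possible bindings (the key choices are mine; the last one assumes “jq” is installed to parse the JSON output of “hyprctl activewindow” and build the “X,Y WxH” geometry that “grim -g” expects):

```
bind = , Print, exec, grim
bind = SHIFT, Print, exec, grim -g "$(slurp)"
bind = SUPER, Print, exec, grim -g "$(hyprctl activewindow -j | jq -r '"\(.at[0]),\(.at[1]) \(.size[0])x\(.size[1])"')"
```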

Brightness and volume

How do we set the screen’s brightness and the volume through the corresponding special keys?

First, install “brightnessctl”:
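```shell
sudo pacman -S brightnessctl
```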

You can get the current brightness by simply running the program (or with “get” or “-m”) and change it with “set” followed by a value (e.g., increase/decrease by a percentage). For example:
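```shell
brightnessctl            # show current brightness information
brightnessctl set +5%    # increase brightness by 5%
brightnessctl set 5%-    # decrease brightness by 5%
```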

So, we need to bind the appropriate special keys to such commands:
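A sketch of the bindings (the 5% step is my choice):

```
bind = , XF86MonBrightnessUp, exec, brightnessctl set +5%
bind = , XF86MonBrightnessDown, exec, brightnessctl set 5%-
```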

For volume, we do something similar: assuming that “wireplumber” is installed, we use “wpctl”:
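A sketch of the bindings (again, the 5% step is my choice):

```
bind = , XF86AudioRaiseVolume, exec, wpctl set-volume -l 1.0 @DEFAULT_AUDIO_SINK@ 5%+
bind = , XF86AudioLowerVolume, exec, wpctl set-volume @DEFAULT_AUDIO_SINK@ 5%-
bind = , XF86AudioMute, exec, wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle
```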

Note the use of “-l 1.0”, meaning that we don’t allow WirePlumber to raise the volume above 100%.

Screen locking

If we want to have screen locking (using a keyboard shortcut), we need these two programs:

  • swayidle, Idle management daemon for Wayland
  • swaylock, Screen locker for Wayland
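Both are in the official Arch repositories:

```shell
sudo pacman -S swayidle swaylock
```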

And then, configure the shortcuts (note that we define a variable, $lock, in the configuration file):
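A sketch of the configuration (the 300/330-second timeouts and the SUPER + L key are consistent with what is described below, but the details are my choices):

```
$lock = swaylock
bind = SUPER, L, exec, $lock
exec-once = swayidle -w timeout 300 '$lock -f' timeout 330 'hyprctl dispatch dpms off' resume 'hyprctl dispatch dpms on'
```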

Now, when we press SUPER + L, the screen is locked (swaylock can be configured with colors and the like, but I won’t discuss that). You have to type your password: when you start doing that, you’ll see a circle with some parts changing. If you get the password wrong, swaylock will notify you.

The “exec-once” (remember, you need to restart Hyprland for that) locks the screen after 300 seconds of inactivity and then also turns it off using a “hyprctl” dispatch command. When that happens, pressing a key or moving the mouse turns the screen back on. Of course, then you’ll have to type your password.

That’s all for now! Stay tuned for more posts about Hyprland 🙂

Hyprland: getting started (part 1)

In the past few months, I’ve heard (i.e., read articles and seen videos) many good things about the Wayland compositor Hyprland. I decided to try it, and I’ve been using it for almost one month as my daily driver. I’m still not into “tiling” that much, but in Hyprland, you can also switch to classic “stack” window management. I like Hyprland; it feels fast and reactive (also on a PineBook Pro; I’ll blog about Hyprland on a PineBook Pro in the future).

By the way, if you don’t already know:

Hyprland is a dynamic tiling Wayland compositor based on wlroots that doesn’t sacrifice on its looks. It supports multiple layouts, fancy effects, has a very flexible IPC model allowing for a lot of customization, a powerful plugin system and more.

This post is the first of a few articles showing how to install, configure and use Hyprland and additional tools. You can find many GitHub repositories with installation scripts and configuration files for Hyprland, but you end up with the configurations of those repositories, probably without understanding the basic details of Hyprland. I found starting from scratch (following the Hyprland wiki) much more helpful, taking inspiration from some of the above-mentioned GitHub repositories.

By the way, most Hyprland configurations you find on GitHub are primarily about “ricing” (i.e., heavy aesthetic customizations of the desktop). While I love good-looking desktops, I won’t blog about aesthetic customizations much. I’ll focus mostly on configurations and tools for usability.

This first post is only about getting started and having a usable environment with minimal helpful tools: there will be follow-up posts for installing other tools (like a bar and notification system) and configuring other programs (actually, I have already blogged about Variety in Hyprland).

Moreover, all these posts are about Hyprland in Arch Linux since that’s the only OS where I experimented with Hyprland. In particular, I’m using EndeavourOS.

First, install EndeavourOS without a desktop environment (choose the “No Desktop” option when you reach the part of the installer where you select a desktop environment).

I will use the AUR helper “yay”, which is already installed in EndeavourOS. On Arch, you’ll have to install it yourself, e.g., with the following commands:
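The commands documented in the yay README:

```shell
sudo pacman -S --needed git base-devel
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si
```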

Let’s start from https://wiki.hyprland.org/Getting-Started/Master-Tutorial/ and install Hyprland from the official Arch repositories:
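```shell
sudo pacman -S hyprland
```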

As suggested, let’s install the terminal “Kitty” (the default Hyprland configuration has a shortcut to run that).
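```shell
sudo pacman -S kitty
```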

Of course, later, you can also install another terminal.

Now, you can execute “Hyprland” in your tty. (Remember, I haven’t installed any desktop environment or a login manager).

Note for virtual machines: If you test this in a virtual machine, ensure that 3D is enabled. Moreover, it’s crucial to start Hyprland with the following environment variables so that the mouse is usable; please, remember that the experience in a virtual machine will not be optimal anyway:
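That is, from the tty, start Hyprland like this:

```shell
WLR_NO_HARDWARE_CURSORS=1 WLR_RENDERER_ALLOW_SOFTWARE=1 Hyprland
```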

When Hyprland starts, you see a warning and a few pieces of information:

To make the warning go away, we edit the generated default configuration file (using either the “vi” or “nano” text editor, both already installed in EndeavourOS). To do that, we must start a terminal: by default, the keyboard shortcut is “SUPER + Q” (as shown in the yellow warning):

Now we can edit the file .config/hypr/hyprland.conf and remove the following line:
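At the time of writing, the generated configuration marked itself with this line, which is what triggers the warning (newer versions may differ):

```
autogenerated = 1
```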

Save the file, and the warning will go away. In fact, one of the cool features of Hyprland is that it automatically applies changes to that file.

Let’s change the configuration file further. By default, the configuration uses a US keyboard layout. I had to change it to use the Italian layout: Edit that file and change the following part accordingly (in my case, I have an Italian keyboard):
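Only the relevant option of the “input” section is shown in this sketch:

```
input {
    kb_layout = it
}
```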

Save the file, and the new keyboard layout will be immediately set.

You might want to install “neofetch” and run it in a terminal (in this example, I’m running inside a KVM virtual machine):

The default configuration uses the shortcut SUPER + E to start the file manager “Dolphin”, which is not installed by default. You could install it. Here, I’m doing something different: Let’s install the file manager “nemo”:
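```shell
sudo pacman -S nemo
```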

and change the line
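This is how the line presumably looks in the example configuration ($mainMod is defined there as SUPER):

```
bind = $mainMod, E, exec, dolphin
```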

into
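That is, assuming the default bind uses $mainMod:

```
bind = $mainMod, E, exec, nemo
```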

Let’s save the file, press SUPER + E, and Nemo appears (tiled automatically)

Let’s install the application launcher “wofi” (personally, I prefer “rofi”, but I’ll blog about that in the future):
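```shell
sudo pacman -S wofi
```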

Wofi is already configured with the following keyboard shortcut:
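In the example configuration, the bind presumably looks like this:

```
bind = $mainMod, R, exec, wofi --show drun
```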

For example, let’s use SUPER + R and run Firefox (already installed in EndeavourOS) using Wofi: just start typing “fir” until it appears in the list, move the cursor down to select it, and press ENTER (or keep typing the other letters until “firefox” is the only choice).

Let’s exploit the blur effects of Hyprland: let’s modify the Kitty configuration file (create it if it doesn’t exist) ~/.config/kitty/kitty.conf by adding this line:
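The kitty option for this is:

```
background_opacity 0.5
```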

Save it and start another instance of Kitty and enjoy the blur effect with the default Hyprland background:

If “0.5” is too much transparency, make the value a bit bigger.

Let’s make Nemo transparent as well with a Hyprland window rule. By default, Nemo is not transparent:

Let’s modify the Hyprland configuration file by adding this line:
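A sketch of the rule (the “0.5” values are my example; verify Nemo’s class with “hyprctl clients”):

```
windowrulev2 = opacity 0.5 0.5,class:^(nemo)$
```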

Save and restart Nemo, which is now transparent:

The two values in “opacity” set the opacity for the window when it’s focused and not, respectively. By changing the above line as follows:
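For example (the concrete values are my choice):

```
windowrulev2 = opacity 0.9 0.5,class:^(nemo)$
```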

The Nemo window will be less transparent when active and more transparent when not focused.

Monitor(s) configurations are specified in the Hyprland configuration and are applied on the fly as soon as you save the configuration file. This is the default configuration:
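At the time of writing, the default line in the example configuration is along these lines:

```
monitor=,preferred,auto,1
```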

The last value is the scale value. Try to change it to “1.5” or “1.75”, save, and see the scaling automatically applied.

Note that, by default, when running on a real computer (not a virtual machine), Hyprland already scales the display for high resolutions (e.g., it sets it to “1.5” by default).

Running from a Display Manager

The default installation already created a file in the appropriate folder to let SDDM start the Hyprland session.

Let’s install the AUR package “sddm-git” (we need the Git version to avoid a bug that has been fixed but not in the current release; when reading this post, the official package might have already been fixed) with yay:
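```shell
yay -S sddm-git
```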

Then, we enable the service at boot:
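```shell
sudo systemctl enable sddm.service
```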

If we want to start it without rebooting, the first time we run:
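```shell
sudo systemctl start sddm.service
```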

And now you can enter Hyprland from here.

If you’re running inside a virtual machine, you lose the environment variables we saw above: “WLR_NO_HARDWARE_CURSORS=1 WLR_RENDERER_ALLOW_SOFTWARE=1”. To restore them, you must modify the “/usr/share/wayland-sessions/hyprland.desktop” accordingly, in particular, the “Exec” line:
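A sketch of the modified line, using “env” to set the variables (only the “Exec” line of the desktop file is shown):

```
Exec=env WLR_NO_HARDWARE_CURSORS=1 WLR_RENDERER_ALLOW_SOFTWARE=1 Hyprland
```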

Then, restart “sddm” (by switching to a tty):
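```shell
sudo systemctl restart sddm.service
```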

Or by rebooting the system.

That’s all for now! Stay tuned for more posts about Hyprland 🙂

Installing EndeavourOS ARM on a PineBook Pro

I have already blogged about installing Arch on a PineBook Pro: the first article and the second article.

In this blog post, I’ll describe how to install EndeavourOS on a PineBook Pro.

As detailed here, https://arm.endeavouros.com/endeavouros-arm-install/, there are 3 ways to install EndeavourOS on an Arm device like the PineBook Pro. In this blog post, I’ll experiment with the first one.

This method consists of a two-step installation process:

  1. use the standard EndeavourOS ISO, booting that from a PC, to install the installation image on an external device (in this example, I will use a USB stick);
  2. then boot the PineBook Pro with the created USB stick and use Calamares to finalize the installation on the very same device you booted from.

Note that I will install EndeavourOS for Arm on an external device, NOT on the eMMC of the PineBook Pro. In this article, I’ll leave a few hints on how to do that on the internal eMMC.

First step

On a standard PC, boot the EndeavourOS ISO (in this example, I’m using the Cassini 2023-03 R2):

After adjusting the keyboard layout and connecting to the Internet, choose “EndeavourOS ARM Image Installer”.

As noted, you first need to insert a USB stick. If you plan to install on the PineBook Pro’s internal eMMC, you must extract the eMMC and place it in a USB adapter. Then, choose “Start ARM Installer”. This is a textual installation procedure, so the installer will open a terminal in full-screen mode.

After pressing OK, you must select the ARM computer (in this case, “PineBook Pro”):

Concerning the file system, in all my experiments, BTRFS never worked: when rebooting from the USB stick (see later), the screen stays blank forever after selecting the boot media. So, the only working choice is EXT4:

Then, you have to type the device where you want to write the installer image; the dialog shows all the devices, and you must enter the path of the whole device, NOT of a possibly existing single partition (in this case, it’s “/dev/sdb”):

Small note: unfortunately, the colors of this textual installer are not ideal 😉

Then, the procedure will prepare the device and download an archive from the Internet for the image to put on the USB stick (it’s a big image, so be patient):

Ultimately, it tells you the temporary username and password for the installer copied to the USB stick. It also suggests unmounting the USB stick with a file manager; in the live environment, you can use Thunar for that. You can recognize the mounted USB stick to unmount because it should show two mounted partitions (the first one is about 128 MB):

Unmounting one of them will also unmount the other.

Second step

It’s time to boot the PineBook Pro with the USB stick we created with the above-mentioned process. If, in the previous procedure, you created the installer on the eMMC (connected with a USB adapter), you should put the eMMC back inside the PineBook Pro.

When the PineBook Pro starts, you should find a way to boot from the USB stick. If you’ve always used the Manjaro installation that comes with the PineBook Pro, you have U-Boot as the bootloader (see my previous blog post for a screenshot of U-Boot booting from the USB stick). If you’re lucky, it should give precedence to the USB stick (I’ve read that this is not always the case, depending on the version of the installed U-Boot). In this example, I have Tow-Boot as the bootloader, so when you see the message telling you to press ESCAPE (or Ctrl-C) to enter the boot menu, please do so:

And then, select the USB as the boot media (of course, if you installed the image on an SD, choose accordingly):

After some textual logs, you should get to the graphical environment for the actual installation. The window manager is Openbox, so, unlike the standard EndeavourOS installer for PCs, you don’t have a fully-fledged desktop environment (Xfce):

Now, you can choose whether to install an “Official” (e.g., KDE or GNOME) or a “Community” edition (e.g., Sway).

Remember: the installation will be performed on the same media you have just booted. In this example, it’s a USB stick. Again, if you want to install EndeavourOS on the internal eMMC, you first need to extract the eMMC, put it on a USB adapter, do the first procedure described above, put the eMMC back into the PineBook Pro, and start the installation from the eMMC.

As you can see from the screenshots above, there’s no section for partitioning the disk. The partitions have already been created during the first procedure. This installation procedure only finalizes the installation.

I’ve tried both KDE and GNOME.

Enjoy your EndeavourOS installation 🙂

If you like it on a USB stick (remember, it should be a fast one), you might want to install it on the eMMC (see the notes in this blog post about that). I have already done that, and it works much better than the original Manjaro!

My Ansible Role for “Oh My Zsh” and other CLI programs

I have already started blogging about Ansible; in particular, I have shown how to develop and test an Ansible role with Molecule and Docker, also on Gitpod.

This blog post will describe my Ansible role for installing “Oh My Zsh” and several command line programs. The role also installs the starship prompt or the p10k theme. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.

This role is for my personal installation and configuration and is not meant to be reusable.

The role can be found here: https://github.com/LorenzoBettini/ansible-molecule-oh-my-zsh-example.

My other post has already described many parts related to zsh installation, configuration, and verification with Ansible and Molecule.

The main file “tasks/main.yml” is as follows:

Besides “zsh” and “git” (which are needed for installing other things and which, in general, I use daily), this installs several command line tools, like ripgrep, procs, dust, exa, bat, and zoxide. Note that, depending on the operating system, these tools must be installed differently (e.g., from the package manager or by downloading a binary distribution). In a few cases, the package names differ depending on the operating system; in such cases, the default names are defined in “vars/main.yml” and properly overridden depending on the operating system:

The task also installs a few fonts (nerd fonts, fonts with emoji, and fonts with icon characters), which are needed because “starship” and “p10k” use a few icon characters; the same holds for other tools like exa.

“Oh My Zsh” is installed by cloning its Git repository; the same holds for some external plugins. The task also sets “zsh” as the default shell for the current user.

Finally, depending on the variable with_starship, which defaults to true, it installs the starship prompt or the p10k theme. These are handled in the corresponding included files “starship.yml” and “p10k.yml”, respectively.

Note that both files copy the corresponding template for “.zshrc” (depending on starship or p10k, the contents of “.zshrc” are slightly different). For “p10k”, it also copies my theme configuration; for “starship”, I’m OK with the default prompt. The copied “.zshrc” contains several “aliases” for the command line programs installed by this role (e.g., “ls” is aliased to “exa” commands).

Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. In these scenarios, I test the “starship” installation and verify that the tools whose installation differs across operating systems are installed correctly. This is the “verify.yml” (remember that this installs “starship” and NOT “p10k”, so it ensures that only the former is installed):

Concerning “p10k,” I have a separate scenario with a different “verify.yml” (I test this only on Arch since “starship” and “p10k” installations and configurations are the same in all three operating systems):

However, this “verify.yml” could also be used for the other operating systems since it performs the same verifications concerning installed programs. It differs only in the final part.

Of course, this is tested on GitHub Actions and can be developed directly on the web IDE Gitpod.

I hope you find this post useful for inspiration on how to use Ansible to automate your Linux installations 🙂

TLP: Limiting Battery Charge on LG Gram in Linux

I had already blogged on how to limit battery charge on LG Gram in Linux. In that post, you had to manually set the threshold “80” in the file “/sys/devices/platform/lg-laptop/battery_care_limit”.

With TLP, the procedure is easier and more automatic.

First, you must install tlp (remember that tlp conflicts with power-profiles-daemon, so you have to disable the latter first or uninstall it). In Arch-based distros:
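```shell
sudo pacman -S tlp
```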

Ensure that the tlp service is enabled at boot; the first time, you should start it manually (“sudo systemctl start tlp”).

By running “sudo tlp-stat”, you should see near the end this line:

Edit the file “/etc/tlp.conf” and uncomment the following lines (note there’s one also for the start of charging, but that option doesn’t seem to be supported in this laptop):
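A sketch of the relevant line after uncommenting (TLP also defines START_CHARGE_THRESH_BAT0, which, as noted, doesn’t seem to be supported on this laptop):

```
STOP_CHARGE_THRESH_BAT0=80
```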

Restart the service (“sudo systemctl restart tlp.service”), and it should be already active (run “tlp-stat” again):

That’s all. This setting persists across reboots. However, it does not persist if you hibernate and resume (unless you restart the tlp service as shown above).

Customizing KDE in Arch Linux on a PineBook Pro

In a previous blog post (and another one), I showed how to install Arch Linux on a PineBook Pro.

In this blog post, I’m showing how I customize KDE on that installation (thus, it is similar to this other one for GNOME).

I’m not using Wayland because I still don’t find it usable in Plasma. Indeed, I haven’t even installed the Wayland session: I’m using X11.

First of all, the default screen resolution of KDE is too tiny for my eyes. Thus, I set 150% (fractional) scaling:

After that, you have to log out and log in to see the setting in effect.

I also enable “Tap-to-click” in the “Touchpad” settings.

Then, I set “Ctrl+Alt+T” as a shortcut for opening the terminal (Konsole). Actually, the shortcut is already configured in the “Custom Shortcuts”, but it’s not enabled by default in this distribution. It’s just a matter of selecting the corresponding checkbox (the “Examples” checkbox must be selected first to enable the other checkbox):

Then, I install an AUR helper. I like “yay”, so I first install the needed dependencies:
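```shell
sudo pacman -S --needed base-devel git
```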

And then
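The usual build from the AUR:

```shell
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si
```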

Using “yay”, I install the “touchegg” program for touchpad gestures. (Plasma Wayland already provides touchpad gestures, but I prefer the X11 session, as I said above.) For ARM, we need to install the AUR package:
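```shell
yay -S touchegg
```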

You will get this warning, but proceed anyway: it compiles and works fine:

I customize touchegg touchpad gestures for KDE by creating the file “~/.config/touchegg/touchegg.conf” with the following contents:

In particular, note the 3-finger gestures (for the “Expose” effect, hide all windows, switch workspace, etc.).

Let’s start “touchegg” and verify that gestures work
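A sketch of the idea: touchegg has a daemon part (run as a systemd service) and a client part run in the user session:

```shell
sudo systemctl start touchegg.service   # the daemon
touchegg &                              # the client, in the user session
```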

And then let’s enable it so that it automatically starts on the subsequent boots:
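```shell
sudo systemctl enable touchegg.service
```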

Let’s move on to ZSH, which I prefer as a shell:
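```shell
sudo pacman -S zsh
```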

Since I’m going to install “Oh My Zsh” and other Zsh plugins, I install these fonts (remember from the previous post that I had already installed “noto-fonts” and “noto-fonts-emoji”) and finder tool (“curl” is required for the installation of “Oh My Zsh”):

Let’s install “Oh My Zsh” by running the following command as documented on its website:
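This is the command documented on the Oh My Zsh website:

```shell
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```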

When asked, I agreed to change my default shell to Zsh. In the end, we should see the prompt changed to the default one of “Oh My Zsh”:

I then install some external plugins:
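Which plugins exactly is my assumption; these two popular ones are cloned as documented in their READMEs:

```shell
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
```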

And I enable them by editing the ~/.zshrc, in particular, the “plugins” line (I also enable other plugins that are part of the OMZ distribution):
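A sketch of the resulting line (the exact plugin list is an assumption):

```
plugins=(git zsh-autosuggestions zsh-syntax-highlighting)
```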

Once saved, you have to start a new terminal with zsh to see the plugins in action (remember that, until you log out and log in, the default shell is still BASH, so you might have to run “zsh” manually to switch to ZSH in the currently logged session).

Besides the syntax highlighting for commands, you have completion after “cd” (press TAB), excellent command history (with Ctrl+R), suggestions, etc.

Let’s switch to the “Starship” prompt. Let’s run the documented installation program:
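As documented on the Starship website:

```shell
curl -sS https://starship.rs/install.sh | sh
```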

Now, let’s edit the ~/.zshrc file again: we comment out the line starting with “ZSH_THEME”, and we add to the end of the file:
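This is the line documented by Starship for zsh:

```
eval "$(starship init zsh)"
```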

Opening another ZSH shell, we should see the fantastic Starship prompt in action, e.g.,

To quickly search for file names from the command line, I install “locate”, enable its periodic indexing and run the indexing once the first time (if you’re on a BTRFS file system, you might want to have a look at this older post of mine):
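On current Arch, “locate” is provided by the “plocate” package; a sketch of the three steps (package and timer names may differ on your system):

```shell
sudo pacman -S plocate
sudo systemctl enable --now plocate-updatedb.timer
sudo updatedb
```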

Then, you should be able to look for files with the command “locate” quickly.

KDE uses “Baloo” for file indexing and searching, e.g., from “KRunner” (Alt+Space) or the application launcher (Alt+F1 or simply Meta). I like it, and it quickly keeps the index up to date. However, by default, Baloo also indexes file contents, which uses too many resources, so I disable the content indexing feature. I blogged about configuring Baloo in another post.

Speaking about KRunner, I find it extremely slow on this laptop; especially the first time you run it, it might take a few seconds to show up. I often use it to search for files or programs, and I need such a mechanism to be fast. I found that the application launcher is instead fast to start (Meta key). However, when you start typing to search, the results are all mixed: applications, files, and settings are all together, and it might not be easy to find what you need:

For this reason, I right-click on the KDE icon and select the “Alternative” “Application Menu”:

This launcher has a valuable feature to categorize the search results. For example, with the exact search string as above, I get results clearly separated:

I also use the “yakuake” drop-down terminal a lot:

I run it once (by default, it is activated by pressing “F12”), and it will start automatically upon logging in.

That’s all! I hope you enjoyed this post. 🙂

You might also look at my other posts on this PineBook Pro laptop.

Xtext, monorepo and Maven/Tycho

TL; DR: Xtext sources are now in a single Git monorepo (https://github.com/eclipse/xtext), and the build infrastructure is based entirely on Maven/Tycho (Gradle is not used anymore).

Background

A few years ago, Xtext sources were split into 6 separate GitHub repositories. I did not take part in that decision (I guess at that time, I wasn’t even a committer).

I think that at that time, the splitting was carefully thought out, aiming at making contributions and maintenance easier. In fact, while Xtext is mainly based on Eclipse, it has several core parts that are independent of Eclipse (including LSP support and plain Maven/Gradle projects support, without the Eclipse UI).

Those core parts were built with Gradle. I’m not a fan of Gradle: I’ve always found Gradle gives you too much power as a build tool, not to mention that, typically, the build breaks with new releases of Gradle. On the contrary, Maven has been around for several years and is rock solid. It’s rigid, so you have to obey its rules. But if you do, everything works.
Moreover, its files (POM) and build infrastructures always look the same in Maven projects, so you know where to look when you deal with a Maven project. For Eclipse projects, Tycho complements Maven. Tycho is not perfect, but everything works if you stick with its rules.

The rigidness of Maven/Tycho usually forces you to keep the build simple. And that’s good! 🙂

Of course, also previously, the Eclipse projects of Xtext had to be built with Maven/Tycho. Thus, the build of Xtext was split into 6 repositories with a mixture of Gradle builds and Maven/Tycho builds.

As a result, Xtext was a nightmare to build and maintain. That’s my opinion, of course, but if you read Xtext GitHub issues, PRs, and discussions, you see I wasn’t the only one to say that.

For example, the main problem was that if you changed something in one of the core projects (i.e., on its Git repository), you had to make sure that all the other projects did not break:

  1. once the core project was built successfully, archive its artifacts (p2 repository and Maven artifacts) in Jenkins (JIRO in Eclipse)
  2. trigger downstream jobs, making sure to use the archived artifacts corresponding to the branch with the changes
  3. wait for all downstream jobs to finish
  4. if anything breaks, go back from the beginning and start again!

Moreover, while it was easy to parametrize the Maven and Gradle files to use the archived Maven artifacts, it was a bit harder to use the archived p2 repositories in the target platforms of the Eclipse downstream projects. Typically, you had to temporarily modify and commit the target platform files to refer to the archived p2 repositories corresponding to the current feature branch. Before merging, you must remember to revert the changes to the target platform files.

In the end, the code in the single Git repositories was actually coupled, so the splitting in single Git repositories has always sounded wrong to me.

The release process used to be complex as well. Again, that was due to Git repository splitting and a mixture of Maven/Gradle builds.

A big change

I had always wanted to bring Xtext sources back into a single Git repository and to use Maven/Tycho only. I told the project lead Christian Dietrich about that many times, but he was always reluctant due to the time and effort it would have taken. On January 2023, I finally managed to convince him to give me a chance 🙂

I first started porting all single repositories to Maven/Tycho, removing the Gradle build.

Things were not so easy, but not even impossible. A few tests were failing in the new build infrastructure, but Christian and Sebastian Zarnekow helped me to bring everything to a working situation. We also took the chance to fix a few things in that respect.

In this stage, I also took the chance to set up the CI building also on GitHub Actions (our primary CI is, of course, Eclipse Jenkins “JIRO”).

Then, I moved to the repo merging. This part was easy. I had prepared the single Git repositories thinking about repository merging, so fixing the foreseen merge conflicts was just a matter of adjusting a few things in only a few files (mainly the parent POM).

Finally, I simplified the release infrastructure a lot. We used to have a vast Jenkinsfile for the release, with many complex operations based on Git cloning all the repositories, tagging, and tons of shell operations.

Now, the whole release is carried on during the Maven build (as it should be, in my opinion), including the uploading to the Eclipse web directories. The Jenkinsfile is very small.

The Oomph setup has been simplified as well. Getting a development workspace takes some time, but that is due to dependency resolution and the first Eclipse build; everything takes place automatically.

Now, the complete Maven/Tycho build of Xtext, which consists of compiling and running all the tests (almost 40k tests!), takes about an hour and a half on Linux (Jenkins and GitHub Actions) and about 2 hours on macOS in GitHub Actions. That is much less than before, when you had to wait for the builds of the single Git repositories and the downstream projects. The builds of some single Git repositories were, of course, faster, but the overall time for building all of them was much longer in the end.

Final thoughts

I hope this vast change improves the maintainability of the project. It should also help contributions from the community. Maybe I’m biased, but personally, I find working and contributing to Xtext much better now, as it used to be in the old days 😉

To me, the effort was very worthwhile!

In the end, it didn’t take long. Of course, I don’t measure that in calendar months from the start of the porting (I started in the first days of January, and the first release with the new infrastructure, version 2.31.0, was at the end of May, though the first milestone with the new infrastructure was at the beginning of April). The time I actually spent working on it was much less: I didn’t work on it full-time, but in my spare time 🙂

In my humble opinion: if your project’s infrastructure and build are too complex, don’t change the build tools: simplify the build itself. 😉 A single big Git monorepo is easier to deal with than 6 small Git repositories, especially when those repositories contain code meant to belong together.

Many thanks to Christian and Sebastian for letting me work on this restructuring and promptly helping me during the process! 🙂

Hyprland and the Variety wallpaper manager

I’ve just started experimenting with the Wayland compositor Hyprland and wanted to use my favorite wallpaper manager, Variety. Unfortunately, Variety does not support Hyprland out of the box. However, it’s easy to make it work also on Wayland.

I’m going to use Arch Linux in this blog post.

First of all, you must install “swaybg”, a wallpaper tool for Wayland compositors, and “variety”:
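A sketch for Arch (if “variety” is not in the official repositories on your system, use an AUR helper instead):

```shell
sudo pacman -S swaybg variety
```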

Now, start variety and do the first-time configuration. Currently, trying to change the wallpaper will not work.

Variety creates the directory “~/.config/variety/scripts”. Edit the file “set_wallpaper” inside that directory and search for the block starting like this:

Change it like that (you could also remove the part about SWAYSOCK if you want or if you don’t plan to use “sway” at all):
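As a rough sketch of the idea (the actual content of “set_wallpaper” may differ between Variety versions; $WP holds the wallpaper path in that script):

```shell
if [ "$SWAYSOCK" ] || [ "$XDG_CURRENT_DESKTOP" = "Hyprland" ]; then
    # kill a previously running swaybg instance and set the new wallpaper
    killall swaybg 2>/dev/null
    swaybg -i "$WP" -m fill &
    exit 0
fi
```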

This relies on the XDG_CURRENT_DESKTOP environment variable to be set accordingly, which should be like that automatically; you might want to check that:

Restart Variety, and now you can change the wallpaper!

Stay tuned for more posts on Hyprland 🙂


Installing Arch Linux with BTRFS on a PineBook Pro (external storage)

This is a follow-up to the article Installing Arch Linux on a PineBook Pro (external storage); differently from the previous post, this one is based on more automatic mechanisms, so it will be easier to copy and paste commands once a few variables have been correctly and carefully set. Moreover, in this post, I’ll install KDE instead of GNOME. Finally, we’ll use BTRFS for the main partition, instead of EXT4.

This post will describe my experience installing Arch Linux on a PineBook Pro on external storage (a micro SD card in this example). Thus, the Manjaro default installation on the eMMC will not be touched. You should use a fast card, or the overall experience will be extremely slow.

The post is based on the instructions found at https://wiki.pine64.org/wiki/Pinebook_Pro_Software_Release#Arch_Linux_ARM.

The installation process consists of two steps:

  • First, install the official Arch Linux ARM distribution; this will not be enough to have many hardware parts working (e.g., WiFi, battery management, and sound).
  • Then, add the repositories with kernels and drivers for the PineBook Pro.

The first part must be performed from an existing Linux installation on the PineBook Pro. I will use the Manjaro installation that comes with the PineBook Pro. The second part will be performed on the installed Arch Linux system on an external drive (a USB stick in this example). Since after the Arch Linux installation, the WiFi is not working, for this part, you need to connect to the wired network, e.g., with an ethernet USB adapter.

Finally, I’ll also show how to install KDE.

First part

This is based on https://wiki.pine64.org/wiki/Installing_Arch_Linux_ARM_On_The_Pinebook_Pro.

I insert my SD card, which is /dev/sda. (Use “lsblk” to detect that.) By the way, typically, an SD card should be detected with a device name of the shape “/dev/mmcblkX”, but in this example, the SD card is inserted in a USB adapter, so its device name has the typical shape “/dev/sdX”.

From now on, I’m using this device. In your case, the device name might be different.

From now on, all the instructions are executed as “root” from a terminal window; thus, I first run:

I will do the following steps in a directory of the root’s home:

We need to download and extract the latest release of Tow-Boot for the Pinebook Pro from https://github.com/Tow-Boot/Tow-Boot/releases. At the time of writing, the latest one is “2021.10-005”:

Now we flash Tow-Boot to /dev/sda (replace this with the device you are using).

Remember: this will wipe all the contents of the specified device. Moreover, make sure you specify the correct device, or you might overwrite the eMMC of the computer.

To make things easily reproducible and minimize the chances of specifying the wrong device name (which is extremely dangerous), I will use environment variables:
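For example (the device is the one from this example; the name of the image file inside the Tow-Boot release archive may differ, so check it after extracting):

```shell
# DANGER: double-check with lsblk; a wrong device can wipe the eMMC!
export DEVICE=/dev/sda

# flash the Tow-Boot disk image extracted from the release archive
sudo dd if=shared.disk-image.img of=$DEVICE bs=1M conv=fsync status=progress
```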

The process creates the partition table for the device, with the first partition for Tow-Boot. This first partition must not be modified further. As you will see in a minute, we skip the first partition when we further partition the disk.

The output should be something like this:

Now, we must create the partitions on the USB stick. The process is documented step-by-step here https://wiki.pine64.org/wiki/Installing_Arch_Linux_ARM_On_The_Pinebook_Pro#Creating_the_partitions, and must be followed strictly:

The instructions must be followed strictly, at least concerning the very first partition to be created (the boot partition), which must NOT touch the one created in the previous step. After creating the boot partition, I’ll do things slightly differently: I will create a SWAP partition (not contemplated in the above instructions): the PineBook Pro has only 4 GB of RAM and is likely to exhaust it, so it’s better to have a SWAP partition to avoid system hangs. Then, I’ll create the root partition.

These are the essential contents of my terminal where I follow the above-mentioned instructions (since I had already used this USB stick for some experiments before writing this blog post, fdisk detects an existing ext4 signature). Remember, though, that I created a SWAP partition that was not described in the above-mentioned instructions:

Now I format the boot, the swap, and the root partitions. I will use EXT4 for the boot partition and BTRFS for the root partition.

Again, to increase reproducibility and avoid possible mistakes, I’m going to define additional environment variables, based on the one I have already created above, to refer to the 3 partitions:
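For instance (the first partition belongs to Tow-Boot, so ours start at 2; the variable names here are mine):

```shell
export BOOTPART=${DEVICE}2   # boot partition
export SWAPPART=${DEVICE}3   # swap partition
export ROOTPART=${DEVICE}4   # root partition

# double-check that everything points to the right partitions
echo "boot: $BOOTPART, swap: $SWAPPART, root: $ROOTPART"
```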

It’s worthwhile to double-check that all the environment variables refer to the right partitions:

Remember that I’m using the environment variables set above:
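A sketch of the formatting commands:

```shell
mkfs.ext4 "$BOOTPART"     # EXT4 for the boot partition
mkswap "$SWAPPART"        # initialize the swap partition
mkfs.btrfs -f "$ROOTPART" # BTRFS for the root partition
```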

Now we mount the root partition to create the BTRFS subvolumes, following a standard scheme, and we unmount it:
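A typical scheme looks like this (the subvolume names are the ones I use; adjust to taste):

```shell
mount "$ROOTPART" /mnt
btrfs subvolume create /mnt/@           # will be mounted as /
btrfs subvolume create /mnt/@home       # /home
btrfs subvolume create /mnt/@log        # /var/log
btrfs subvolume create /mnt/@pkg        # /var/cache/pacman/pkg
btrfs subvolume create /mnt/@.snapshots # /.snapshots
umount /mnt
```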

Now we have to mount all the subvolumes to the corresponding directories (the “-m” flag creates the mounting subdirectory if it does not exist); I’m enabling BTRFS compression (by default, the compression level for zstd will be 3):
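With the subvolumes created in the previous step, the mounts look roughly like this:

```shell
mount -m -o subvol=@,compress=zstd "$ROOTPART" /mnt
mount -m -o subvol=@home,compress=zstd "$ROOTPART" /mnt/home
mount -m -o subvol=@log,compress=zstd "$ROOTPART" /mnt/var/log
mount -m -o subvol=@pkg,compress=zstd "$ROOTPART" /mnt/var/cache/pacman/pkg
mount -m -o subvol=@.snapshots,compress=zstd "$ROOTPART" /mnt/.snapshots
```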

Then, we mount the boot partition on “/mnt/boot” (again, by creating that):

Let’s verify that the layout of the destination filesystem is as expected:

Now, we download the tarball for the rootfs of our USB stick installation. The instructions are once again taken from the link mentioned above, and they also include the verification of the contents of the downloaded archive:

And we extract the root filesystem onto the mounted root partition of our USB stick:
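The download, verification, and extraction are along these lines (the URL is the official Arch Linux ARM generic aarch64 tarball):

```shell
wget http://os.archlinuxarm.org/os/ArchLinuxARM-aarch64-latest.tar.gz
wget http://os.archlinuxarm.org/os/ArchLinuxARM-aarch64-latest.tar.gz.md5
md5sum -c ArchLinuxARM-aarch64-latest.tar.gz.md5   # verify the download

# extract the root filesystem, preserving permissions and attributes
bsdtar -xpf ArchLinuxARM-aarch64-latest.tar.gz -C /mnt
```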

This is another operation that takes time.

Now, we must create the “/etc/fstab” on the mounted partition. To do that, we need to know the UUID of the two partitions by using “blkid”. You need to take note of the UUID from the output (which will be completely different according to the used external device):

Let’s take note of the UUIDs (remember, they will be different in your case) and create the corresponding environment variables:
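The values below are placeholders; replace them with the actual UUIDs printed by blkid:

```shell
export BOOTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # boot partition
export SWAPUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # swap partition
export ROOTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # root partition
```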

We create the file “/etc/fstab” in “/mnt” according to the BTRFS subvolumes and to the other two partitions. This can be done by running the following command, which relies on the values of the 3 environment variables that we have just created:
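A sketch of the generated file, based on the subvolume scheme and the three UUID variables:

```shell
cat > /mnt/etc/fstab <<EOF
UUID=$ROOTUUID /                     btrfs rw,compress=zstd,subvol=@           0 0
UUID=$ROOTUUID /home                 btrfs rw,compress=zstd,subvol=@home       0 0
UUID=$ROOTUUID /var/log              btrfs rw,compress=zstd,subvol=@log        0 0
UUID=$ROOTUUID /var/cache/pacman/pkg btrfs rw,compress=zstd,subvol=@pkg        0 0
UUID=$ROOTUUID /.snapshots           btrfs rw,compress=zstd,subvol=@.snapshots 0 0
UUID=$BOOTUUID /boot                 ext4  rw,relatime                         0 2
UUID=$SWAPUUID none                  swap  defaults                            0 0
EOF
```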

Finally, we need to create the file “/mnt/boot/extlinux/extlinux.conf” (the directory must be created first with “mkdir -p”).

Once again, the contents are generated by the following command that relies on the environment variable for the UUID of the root partition:
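A sketch of the generated file (the kernel, initramfs, and device-tree paths follow the Arch Linux ARM boot layout for the PineBook Pro; double-check them on your /boot):

```shell
mkdir -p /mnt/boot/extlinux
cat > /mnt/boot/extlinux/extlinux.conf <<EOF
LABEL Arch Linux ARM
KERNEL /Image
FDT /dtbs/rockchip/rk3399-pinebook-pro.dtb
APPEND initrd=/initramfs-linux.img console=tty1 root=UUID=$ROOTUUID rw rootflags=subvol=@
EOF
```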

Note that we must specify “rootflags=subvol=@” because the “/” of the system is on the subvolume “@”. Otherwise, the system boots, but then nothing else will work.

We can now unmount the filesystems:

And we can reboot into the (hopefully) installed Arch Linux on the USB stick to finish a few operations. Remember that we need a wired connection for the next steps.

Upon rebooting, you should see the two entries (if you let the timeout expire, it will automatically boot the first entry):

After we get to the prompt, we can log in with “root” and password “root” (needless to say: change the password immediately).

Let’s connect a network cable (you need a USB adapter for that), and after a few seconds, we should be online. We verify that with “ifconfig”, which should show the assigned IP address for “eth0”.

Since there’s no DE yet, I suggest you keep following the official web pages (and this blog post) by connecting to the PineBook Pro via SSH so that it will be easy to copy and paste commands into the terminal window of another computer. Moreover, when logged into the PineBook Pro directly, you will see lots of logging information directly on the console (I guess this could be prevented by passing specific options to the kernel, but we’ll install a DE later, so I don’t care about that much). The SSH server is already up and running in the PineBook Pro installed system, so once we know the IP address from the output of “ifconfig”, we can connect via SSH. However, root access via SSH is disabled, so we must connect with the other predefined account “alarm” and password “alarm” (again, you might want to change this password right away):

Once we’re logged in, since “sudo” is not yet configured, we switch to root:

We have to initialize the pacman keyring:
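These are the standard Arch Linux ARM commands:

```shell
pacman-key --init
pacman-key --populate archlinuxarm
```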

The guide https://wiki.pine64.org/wiki/Installing_Arch_Linux_ARM_On_The_Pinebook_Pro ends at this point.

What follows are my own instructions I usually run when installing Arch.

In particular, I configure time, network time synchronization, and timezone (Italy, in my case):
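With systemd, this boils down to something like (Europe/Rome in my case):

```shell
timedatectl set-ntp true
timedatectl set-timezone Europe/Rome
```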

The next step is required to make gnome-terminal work (and it’s also part of the standard Arch installation instructions):

Edit “/etc/locale.gen” and uncomment “en_US.UTF-8 UTF-8” and other needed locales.

Generate the locales by running:

Edit the “/etc/locale.conf” file, and set the LANG variable accordingly, for example, for the UTF-8 locale above:

We could run a first system upgrade:

I don’t know if that’s strictly required because we’ll add the additional repository for the PineBook Pro in a minute. However, just in case, it might be better to update the system.

Let’s reboot and verify that everything still works.

The kernel at the time of writing is

NOTE: By the way, I noticed that if I want to boot from the USB stick, it’s better to use the right-hand side USB port (which is USB 2) instead of the left-hand side port (USB 3). Otherwise, the system seems to ignore the system on the USB stick and boots directly to the installed Manjaro system.

Second part

As mentioned above, I suggest connecting to the PineBook Pro via SSH. In any case, what follows must be executed as “root” (“su -“).

Let’s now add the repositories with kernels and drivers specific to PineBook Pro.

The project is documented here: https://github.com/SvenKiljan/archlinuxarm-pbp, and these are the contents of the additional repository that we’ll add in a minute https://pacman.kiljan.org/archlinuxarm-pbp/os/aarch64/.

Note that this project also provides a way to install Arch Linux directly with these repositories, with a procedure similar to the one in the first part. I prefer to install official Arch Linux first and then add the additional repositories, though.

The addition of the PineBook Pro repository to an existing Arch Linux installation and the installation of specific kernel and drivers is documented as a FAQ: https://github.com/SvenKiljan/archlinuxarm-pbp/blob/main/FAQ.md#how-do-i-migrate-from-other-arch-linux-arm-releases-for-the-pinebook-pro.

The addition of the PGP key and the repositories to “/etc/pacman.conf” is done by pasting the following commands (remember, as the user “root”):

Let’s now synchronize the repositories

And let’s install the packages specific to the PineBook Pro (note that we’re going to install the Linux kernel patched by Manjaro for the PineBook Pro):

Of course, we’ll have to answer “y” to the following question:

Let’s reboot and verify that everything still works (again, by connecting via SSH).

Now, we should be using the new kernel:

Before installing a DE, I prefer creating a user for myself (“bettini”) and configuring it as a “sudoer”. (We must install “sudo” first).
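Something along these lines (the username is mine):

```shell
pacman -S sudo
useradd -m -G wheel bettini
passwd bettini
```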

Then (by simply running “visudo”), we enable the users of the group “wheel” in “/etc/sudoers”; that is, we uncomment this line:
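That is, this line of “/etc/sudoers”:

```
%wheel ALL=(ALL:ALL) ALL
```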

Then, I try to re-connect with my user and verify that I can run commands with “sudo” (e.g., “sudo pacman -Syu”).

Install KDE

As usual, I’m still doing these steps via SSH.

I’m going to install KDE with some fonts, pipewire media session, firefox, and the NetworkManager:
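The package selection is roughly the following (a sketch; the exact list is a matter of taste):

```shell
pacman -S plasma konsole dolphin firefox networkmanager \
  pipewire pipewire-pulse pipewire-media-session \
  noto-fonts noto-fonts-emoji
```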

It’s about 680 MB of packages to install, so please be patient.

Now, I enable the primary services (the login manager, the NetworkManager to select a network from KDE, and the profile daemon for switching between power profiles, e.g., “Balanced” and “Powersave”):
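Assuming SDDM as the login manager, the services are:

```shell
systemctl enable sddm.service
systemctl enable NetworkManager.service
systemctl enable power-profiles-daemon.service
```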

OK, time to reboot.

The graphical SDDM login manager should greet us this time, allowing us to get into KDE and select a WiFi connection.

NOTE: I always hear a strange noise when the login manager or the KDE DE starts. It also happens with the pre-installed Manjaro. It must be the sound card that gets activated…

IMPORTANT NOTE: Upon rebooting, the WiFi does not always work (it looks like the WiFi card is not seen at all); that also happens with Manjaro. The only solution is to shut down the computer (i.e., NOT simply rebooting it) and boot it again.

Here’s the KDE About dialog:

And of course, once installed, let’s run “neofetch”:

That’s all for now!

In a future blog post, I’ll describe my customizations to KDE (installed programs and configurations).

Stay tuned! 🙂

My script for automated Arch Linux installation

In a previous post, I reported the procedure for installing Arch Linux. The procedure is basically the one shown in the official Arch Wiki.

After a few manual steps, this post will show my installation script for automatically installing Arch Linux. I took inspiration from https://github.com/ChrisTitusTech/ArchTitus, but, differently from that project, my script is NOT meant to be reusable. The script is heavily tailored to my needs. I describe it in this post in case it might inspire others to follow a similar approach 🙂

The script (which actually consists of several scripts called from the main script) is available here: https://github.com/LorenzoBettini/my-archlinux-install-script.

I’ll describe the script by demonstrating its use for installing Arch Linux on a virtual machine (VirtualBox). However, I use the script for my computers. Also, for real computers, I perform the installation via SSH from another computer for the same reasons I have already explained.

The virtual machine preparation is the same as in my previous post, so I’ll start from the already configured machine.

I start the virtual machine with the Arch ISO mounted:

Inside the live environment, the SSH server is already up and running. However, since we’ll connect with the root account (the only one present), we must give the root account a password. By default, it’s empty, and SSH will not allow you to log in with a blank password. Choose a password. This password is temporary; if you’re in a trusted local network, you can choose an easy one.

Then, I connect to the virtual machine via SSH.

From now on, I’ll insert all the commands from a local terminal connected to the virtual machine.

Initial manual steps

First, I ensure the system clock is accurate by enabling network synchronization NTP:
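That is:

```shell
timedatectl set-ntp true
```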

Then, I partition the disk according to my needs. My script heavily relies on this partitioning scheme consisting of four partitions:

  • the one for booting in UEFI mode, formatted as FAT32, 300 MB (it should be enough for UEFI, but if unsure, go with 512 MB)
  • a swap partition, 20 GB (I have 16 GB of RAM, and if I want to enable hibernation, i.e., suspend to disk, that should be enough)
  • a partition meant to host common data that I want to share among several Linux installations on the same machine (maybe I’ll blog about that in the future), formatted as EXT4, 30 GB
  • the root partition, formatted as BTRFS, the rest of the disk

To do that, I’m using cfdisk, a textual partition manager, which I find easy to use. In the virtual machine, the disk is “/dev/sda”:

The partitions must be manually formatted:
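Given the partitioning scheme above on /dev/sda, the formatting is along these lines:

```shell
mkfs.fat -F 32 /dev/sda1 # UEFI boot partition
mkswap /dev/sda2         # swap
mkfs.ext4 /dev/sda3      # shared data partition
mkfs.btrfs /dev/sda4     # root partition
```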

Sometimes, I have problems with the keyring, so I first run the following commands that ensure the keyring is up-to-date:

I’m going to clone the installation script from GitHub, so I need to install “git”:

And now, I’m ready to use the installation script.

Running the installation script

First, I clone the installation script from GitHub:

The script takes no parameters but relies on a few crucial environment variables that must be set appropriately. The first four variables refer to the partitions I created above. The last one is the name for the machine (in this example, it will be “arch-gnome”):
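For example (INST_HOSTNAME is the variable actually used by the scripts; the names of the partition variables here are illustrative — check the repository for the real ones):

```shell
export INST_EFI_PARTITION=/dev/sda1
export INST_SWAP_PARTITION=/dev/sda2
export INST_COMMON_PARTITION=/dev/sda3
export INST_ROOT_PARTITION=/dev/sda4
export INST_HOSTNAME=arch-gnome
```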

The script will check that all these variables are set. However, it does not check whether the specified partitions are correct, so I always double-check the environment variables.

And now, let’s run it:

The script will do all the Arch Linux installation steps. These automatic steps correspond to the ones I showed in my previous post, where I ran them manually.

When the script finishes (it takes a few minutes), I have to perform a few additional manual operations before rebooting. I’ll detail these latter manual operations at the end of the post. In the next section, I’ll describe the script’s parts.

The installation script(s)

As I anticipated, the script actually consists of several scripts.

The main one, install.sh, is as follows:

Note that the installation logs are saved in the “bettini” user’s home directory (the last run script will create the user). These can be inspected later.

The main script calls the other scripts.

We have the script for checking that all the needed environment variables are set (00_check.sh):

The script 01_mount-partitions.sh mounts the partitions and, for the main BTRFS partition, also creates the BTRFS subvolumes:

The script 02_pacstrap.sh performs the “pacstrap” (it also sets the mirrors) and generates the /etc/fstab:

Then, 03_prepare-for-arch-chroot.sh prepares for arch-chroot: it copies all the shell scripts into /mnt/root:

In fact, by looking at the main script, you see that further shell scripts are executed using arch-chroot.

The script 04_configuration.sh takes care of all the configuration steps:

Note the use of the environment variable INST_HOSTNAME for creating the file /etc/hosts. I’m using en_US.UTF-8 for the language, but the other locale settings are for Italy.

The script 05_bootloader.sh configures and installs GRUB. It also configures GRUB for the “mem_sleep_default” parameter (for suspend) and for hibernation; in that respect, it also configures mkinitcpio accordingly (note the “resume” hook):

Note that it uses the generated /etc/fstab to retrieve the UUID of the swap partition.

Finally, the script 06_user.sh creates my user and configures it so that I can use “sudo”:

It also sets the right permissions for my user in the mount point where I want the shared partition.

That’s all. The script also prints a message to remind me to set the password for my user.

Final manual steps

I execute a few manual steps to finalize the installation when the script finishes.

First of all, I once again use arch-chroot:

And I set the password for my user:

Then, I install KDE or GNOME (not both).

For KDE, I would run the following:
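A sketch of the KDE installation (the exact package list is a matter of taste):

```shell
pacman -S plasma konsole dolphin sddm
systemctl enable sddm.service
```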

For GNOME, I would run the following:
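A sketch of the GNOME installation (the “gnome” group pulls in GDM and the core applications):

```shell
pacman -S gnome
systemctl enable gdm.service
```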

And that ends the installation.

I exit chroot and unmount /mnt:
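That is:

```shell
exit              # leave the chroot
umount -R /mnt
```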

As you see, most of the steps are performed by the script! 🙂

I can restart the system (in this example, the virtual machine) and enjoy the installed Arch!

That’s another reason why I love Arch Linux so much: the installation can be easily scripted!

It took me some time to finalize all the scripts, but using a virtual machine, especially with snapshots, wasn’t that hard. I encourage you to bake your installation script. It’ll be fun 🙂

By the way, before exiting chroot and rebooting, I usually run my Ansible playbook for installing other programs (either KDE or GNOME) and configure the system and user according to my needs. I’ll blog about such a playbook in the future.

KVM Virtual Machine Manager and Virtual Machines on external drives

Last year, I blogged about my first experiences with KVM and Virtual Machine Manager.

Then, I stopped using KVM because I’ve always found VirtualBox easier for my experiments. In particular, with VirtualBox, it is trivial to store virtual machines on an external drive (I mean, a fast external SD, of course): you specify to use a directory on the external drive, and all information about the virtual machine will be stored there. Then, you attach the drive to another computer with VirtualBox and open the virtual machine from the external drive. Easy!

Things are more complicated with KVM, QEMU, and Virtual Machine Manager. Even making QEMU access an external drive requires additional configuration steps.

In this blog post, I’ll summarize the steps to achieve that.

I’ll first show the manual export/import procedure for the machines’ metadata information. Then, I’ll show a different approach based on symlinks.

It was time to try KVM again because it’s faster than VirtualBox.

I’ll describe the installation steps for EndeavourOS and pure Arch Linux. I guess the steps for other distributions are similar.

Installation and configuration

Let’s install a few packages for KVM, QEMU, and the Virtual Machine Manager:
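On Arch/EndeavourOS, a typical set is (a sketch of what I’d install; package names are from the Arch repositories):

```shell
sudo pacman -S virt-manager qemu-desktop libvirt edk2-ovmf dnsmasq iptables-nft
```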

If you get this message, accept to remove “iptables”:

To use your user without entering the root password, we need to edit the file “/etc/libvirt/libvirtd.conf” and uncomment the following lines:

Or, append them at the end of the file:
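The two settings are:

```shell
echo 'unix_sock_group = "libvirt"'  | sudo tee -a /etc/libvirt/libvirtd.conf
echo 'unix_sock_rw_perms = "0770"'  | sudo tee -a /etc/libvirt/libvirtd.conf
```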

Add your user account to the “libvirt” group.

Now comes the crucial part for letting QEMU handle machines on external drives: we need to add our user to “/etc/libvirt/qemu.conf”. This can be done by setting the appropriate entries in the file or by simply appending the entries at the end of the file:
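These are the “user” and “group” entries; of course, use your own username:

```shell
echo 'user = "bettini"'  | sudo tee -a /etc/libvirt/qemu.conf
echo 'group = "bettini"' | sudo tee -a /etc/libvirt/qemu.conf
```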

If you want to start the virtualization service and the default virtual network automatically at boot, you run:
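That is:

```shell
sudo systemctl enable libvirtd.service
sudo virsh net-autostart default
```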

Since I’m not using virtual machines daily, I prefer to start them when needed, so I don’t run the above commands. Of course, I must remember to run these commands (note that for the network it is “start” instead of “autostart”) before starting the “Virtual Machine Manager”:
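```shell
sudo systemctl start libvirtd.service
sudo virsh net-start default
```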

Remember you can always use:

to see the service status and possible errors shown when running this command.

OK, time to reboot now.

Let’s create a virtual machine on an external drive

I created a directory “kvm/images” on my external USB SD to store the virtual machine images.

Let’s start the “Virtual Machine Manager” program. We should see “QEMU/KVM”:

Let’s create a new virtual machine with the leftmost toolbar button.

I specify a local ISO.

I don’t create a pool for ISOs and use “Browse Local” to select an ISO in my external drive.

In this example, I will install EndeavourOS on the virtual machine. I have to select the operating system manually (start typing, and you get completions):

Time to allocate resources for the virtual machine. I’m giving the VM half my RAM and half my cores:

Now here’s the essential part of disk selection. Remember, I want to use my external drive, so I select custom storage and press “Manage”:

In the following dialog, I use the “+” button in the bottom left corner to create a new pool:

I give the pool the name “images” and specify the directory I mentioned above on my external drive:

After pressing “Finish”, I select the created pool and add a “Volume” (with the other “+” button)

I give the disk image a proper name and enough size (recall that the image will NOT allocate all the size immediately, but only on-demand):

Select the volume and press “Choose Volume”:

On the final dialog, make sure the default network is selected and that you check “Customize configuration before install” (note that I also changed the name for the virtual machine):

Let’s press “Finish,” and get to the configuration dialog. I changed the Firmware from “BIOS” to “UEFI”, pressed “Apply,” and finally, we can start the installation with “Begin Installation”.

We should not get any error from QEMU because it cannot access the external drive, thanks to the configuration shown above in the qemu.conf file!

After the GRUB menu, we should see the installer log:

And then, the EndeavourOS installer dialog:

Since I’ve already blogged about EndeavourOS installation, I’ll skip the detailed steps. I’ll install the GNOME desktop environment and let the installer use the whole disk space with the BTRFS filesystem and SWAP with hibernate (later, I might want to check whether hibernate works in the VM).

In a few minutes, the installation finishes! We get to the GRUB menu of the installed system:

And to the installed GNOME desktop:

The disk image is correctly created in the external drive:

And the information about the virtual machine is in the local libvirt directory:

Export the virtual machine

First, let’s shut down the machine.

Let’s export the virtual machine to use it from another computer. I understand that having the same software on the other host is crucial. Since I’m using EndeavourOS or Arch on my main computers, that is not a problem.

But isn’t the virtual machine already in an external drive? Why do I have to export it?

That’s the main difference with VirtualBox I mentioned at the beginning. The disk image is on an external drive, but the virtual machine information (configuration and metadata) is on a local XML file (see the listing of “/etc/libvirt/qemu” above; the XML file of the virtual machine is “eos-kvm-gnome.xml”, after the name I gave to the virtual machine when I created it).

Remember that the XML has an absolute path pointing to the disk image on the external drive:

So, again, in the other computers, the mount point of the external drive must be the same; otherwise, the absolute path must be manually adapted.

We could copy the XML file directly on the external drive (somewhere near the disk image to be easily found), e.g.:

Alternatively, if we don’t remember the location of the XML file, we can use the “dump” command.

For example, we can first list the current machines (in the example, I have only one):

And then, we dump its XML configuration:
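With virsh, the two steps might look like this (the machine name is the one I chose at creation time; the destination path on the external drive is illustrative):

```shell
virsh list --all
virsh dumpxml eos-kvm-gnome > /media/ext-drive/kvm/eos-kvm-gnome.xml
```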

We’re ready to import and use the VM on another computer

Import the virtual machine

I have already installed and configured KVM on another computer, following the same procedure at the beginning of the post.

Since I haven’t enabled the services at boot time, I run the following:

I connect the external drive and ensure it’s mounted (remember, on the same mount point as in the other computer).

Then, I create the virtual machine information locally by using the XML file on the drive I created above:
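Again, the path is illustrative:

```shell
sudo virsh define /media/ext-drive/kvm/eos-kvm-gnome.xml
```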

We can verify that the XML is now in the directory of QEMU:

Let’s start “Virtual Machine Manager,” and we can see the virtual machine:

We can start it, and it should work as on the other computer.

Cloning and Snapshots

Let’s create a clone of this virtual machine, e.g., with the context menu of the machine in the main user interface.

The destination path is based on the path of the current machine, the external drive, which is good.

Let’s wait for the clone to finish, and then we have two virtual machines:

If I want this clone to be usable on other computers, I repeat the export procedure for this new virtual machine:

I’ll leave this clone virtual machine as it is for now, and I’ll create a snapshot in the other virtual machine, the original one.

Snapshot information is stored somewhere else, NOT in the XML of the virtual machine:

So we need them as well if we want to use them on another computer.

To add the snapshot to the other computer, I have to run:

However, keep in mind that if you try to start a snapshot, you get this warning:

So if you don’t want to lose the current state, create another snapshot for the current state before restoring a previous one. Moreover, if the snapshot’s state is “Shutoff”, “starting” the snapshot only restores it. Then, you must start the virtual machine.

A different approach: symlinks

In the previous sections, I showed how machine information (including snapshots) and images could be put on external drives. Besides the machine images residing on external drives from the beginning, the machine metadata is still on your hard disk. In fact, you must first export them (e.g., on the external drive) and then import them on another computer.

A more radical approach consists of keeping the metadata on the external drive only and creating symlinks in each computer’s libvirt/qemu directories.

On the first computer, the XML files of machine information and snapshots have to be copied onto the external drive. IMPORTANT: don’t dump information as we did above; you need to copy the original XML files themselves. Dumping does not generate the exact XML files stored on the libvirt/qemu directories. In fact, as shown above, the dumped XML files must be imported with dedicated commands.

In my case, on the first computer, I run:

So, on the external drive, I end up with these contents:

On the same computer, I run the following commands (make sure the “libvirtd.service” is not running):
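A sketch of the idea (the external-drive paths are illustrative; make sure “libvirtd.service” is NOT running while you do this):

```shell
# back up the local metadata directories, then replace them with
# symlinks to the copies on the external drive
sudo mv /etc/libvirt/qemu /etc/libvirt/qemu.bak
sudo ln -s /media/ext-drive/kvm/qemu /etc/libvirt/qemu
sudo mv /var/lib/libvirt/qemu/snapshot /var/lib/libvirt/qemu/snapshot.bak
sudo ln -s /media/ext-drive/kvm/snapshot /var/lib/libvirt/qemu/snapshot
```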

Now, I can start the “libvirtd.service” and the default network, and I make sure I can still access all my machines stored on the external drive, including all the machine information.

Of course, if you have never created virtual machines and want to start creating them on the external drive, it is enough to run the above commands. Then, start creating machines. Remember to select the external drive for the image location.

Then, on the other computers where I have already installed the same software for KVM, QEMU, etc., I first ensure the “libvirtd.service” is not running (in case stop it). Then, I connect my external drive and run the above commands (these will remove possible existing machines’ information, so be careful).

Of course, the above commands must be run only the first time.

Now, I can start the “libvirtd.service” and the default network, and I can access all my machines stored on the external drive, including all the machine information. Every modification (an image content or a machine configuration) will be stored on the external drive.

This approach works if you want to store ALL your machines on the external drive. You won’t have to keep the information in sync because they are stored in a single place.

If you need to keep some machines on your computers and others on different external drives, you must use the above-shown manual procedure for exporting and importing. It is then up to you to remember to re-export/re-import if you change a machine’s configuration or a snapshot.

Happy virtualization! 🙂

Customizing Gnome in Arch Linux on a PineBook Pro

In a previous blog post, I showed how to install Arch Linux on a PineBook Pro.

In this blog post, I’m showing how I customize Gnome on that installation.

First, Gnome 43 has “Gnome Console” as the default terminal application. I wouldn’t say I like it since it’s too basic. So I install the traditional “Gnome Terminal”:

Then, I set “Ctrl+Alt+T” as a shortcut for opening the terminal:

Then, I install an AUR helper. I like “yay,” so I first installed the needed dependencies:
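The dependencies are the usual ones for building AUR packages:

```shell
sudo pacman -S --needed base-devel git
```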

And then
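```shell
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si
```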

Using “yay”, I install the “Gnome Browser Connector” to install Gnome extensions from Firefox (some extensions are already installed by default as system extensions; you can use the “Extensions” application to enable/disable them):

Now I can navigate to https://extensions.gnome.org and install and enable a few extensions (you also need to install the Firefox extension add-on when asked). For example, “AppIndicator and KStatusNotifierItem Support” and “X11 Gestures”.

The last extension helps enable Touchpad gestures in the X11 session (Gnome Wayland already provides touchpad gestures, but I prefer to use the X11 session). This extension relies on “touchegg”, which must be installed. For ARM, we need to install the AUR package:

You will get this warning, but proceed anyway: it compiles and works fine:

Let’s start “touchegg” and verify that gestures work

And then let’s enable it so that it automatically starts on the subsequent boots:

Let’s move on to ZSH, which I prefer as a shell:

Since I’m going to install “Oh My Zsh” and other Zsh plugins, I install these fonts (remember from the previous post that I had already installed “noto-fonts” and “noto-fonts-emoji”) and a finder tool (“curl” is required for the installation of “Oh My Zsh”):

Let’s install “Oh My Zsh” by running the following command as documented on its website:
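This is the command from the Oh My Zsh site:

```shell
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```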

When asked, I agreed to change my default shell to Zsh. In the end, we should see the prompt changed to the default one of “Oh My Zsh”:

I then install some external plugins:
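These are zsh-autosuggestions and zsh-syntax-highlighting, cloned into the Oh My Zsh custom plugins directory as documented in their READMEs:

```shell
git clone https://github.com/zsh-users/zsh-autosuggestions \
  ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting \
  ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
```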

And I enable them by editing the ~/.zshrc, in particular, the “plugins” line (I also enable other plugins that are part of the OMZ distribution):
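For the two external plugins above, the line becomes something like (plus whatever bundled plugins you want):

```
plugins=(git zsh-autosuggestions zsh-syntax-highlighting)
```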

Once saved, you have to start a new terminal with zsh to see the plugins in action (remember that, until you log out and log in, the default shell is still BASH, so you might have to run “zsh” manually to switch to ZSH in the currently logged session).

Besides the syntax highlighting for commands, you have completion after “cd” (press TAB), excellent command history (with Ctrl+R), suggestions, etc.

Let’s switch to the “Starship” prompt. Let’s run the documented installation program:
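This is the installer documented on the Starship site:

```shell
curl -sS https://starship.rs/install.sh | sh
```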

Now, let’s edit the ~/.zshrc file again; we comment out the line starting with “ZSH_THEME,” and we add to the end of the file:
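The relevant changes in ~/.zshrc:

```
# ZSH_THEME="robbyrussell"   # commented out

eval "$(starship init zsh)"
```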

Opening another ZSH shell, we should see the fantastic Starship prompt in action, e.g.,

To quickly search for file names from the command line, I install “locate”, enable its periodic indexing and run the indexing once the first time (if you’re on a BTRFS file system, you might want to have a look at this older post of mine):
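On current Arch, “locate” is provided by plocate; the steps are along these lines:

```shell
sudo pacman -S plocate
sudo systemctl enable plocate-updatedb.timer # periodic re-indexing
sudo updatedb                                # first indexing, run once
```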

Then, you should be able to look for files with the command “locate” quickly.

Gnome uses “Tracker” (in the current version, the command is “tracker3”) for file indexing and searching, e.g., from the “Activities” view. I like it, and it quickly keeps the index up to date. However, the “tracker extract” service also indexes the file contents, and that uses too many resources, so I disable that service:
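Assuming the current service name (it has the “3” suffix in recent Gnome versions):

```shell
systemctl --user mask tracker-extract-3.service
```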

I also use the “guake” drop-down terminal a lot:

I run it once (it’s enabled by default by pressing “F12”), and I configure it to start automatically when Gnome starts (by running “Guake Preferences” -> “Start Guake at login”).

I hope you enjoyed this post! 🙂