Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and international journals.

Timeshift and grub-btrfs in Linux Arch

UPDATED 02/Jan/2023, ChangeLog:

  • 02/Jan/2023: documented that the new version of grub-btrfs is now an official package (you still have to install another package: inotify-tools);
  • 02/Dec/2022: documented the new version of grub-btrfs and its new grub-btrfsd daemon; the configuration for Timeshift is much simpler, but you have to install another package: inotify-tools.

After looking at the very nice videos of Stephen’s Tech Talks, in particular, this one https://www.youtube.com/watch?v=6wUtRkEWBwE, I decided to try to set up Timeshift, Timeshift-autosnap, and grub-btrfs in my Linux Arch installation, where I’m using BTRFS as the filesystem. These three packages allow a timeshift snapshot to be automatically created each time you update your system; moreover, a new grub entry is automatically generated to boot into a specific snapshot.

The video mentioned above is handy, but unfortunately, some recent changes in Timeshift itself broke the behavior of the two other packages. In this post, I’ll try to show how to fix the problem and go back to a working behavior. I’ll also show an experiment using the snapshots so that, hopefully, it’s clear what’s going on in the presence of such snapshots and how to use them in case you want to revert your system.

Install timeshift and timeshift-autosnap

First of all, let’s install timeshift and timeshift-autosnap (the latter depends on the former, and they are both available from AUR; I’m using the yay AUR helper here):

The programs will be installed from sources; thus, they will be compiled (it might take some time).

Let’s create a new Timeshift snapshot to make sure it works (the first time, you will have to configure Timeshift; of course, it is crucial that you choose “BTRFS”).

You can configure timeshift-autosnap with the number of snapshots to keep (in this example, I specify 10):
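The setting lives in /etc/timeshift-autosnap.conf; assuming the default file layout, the relevant line is:

```ini
# /etc/timeshift-autosnap.conf (excerpt)
maxSnapshots=10
```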

Install grub-btrfs (new version, installation from the official repository, 02/Jan/2023)

The new version of grub-btrfs is now available as an official package (please remove the old AUR version if you still have it installed):

So you can simply install it with pacman:
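The installation command is the usual one:

```shell
sudo pacman -S grub-btrfs
```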

Now, let’s make sure grub-btrfs can find Timeshift’s snapshots (remember, we’ve just created one). So let’s update the grub configuration, and we should see in the end something like the following output:

The last lines prove that grub-btrfs can detect snapshots.

We now need to configure it to monitor the Timeshift snapshot directory instead of the default one (/.snapshots).

Automatically update the grub menu upon snapshot creation or deletion (2 December 2022)

What follows is based on the new version of grub-btrfs. At the bottom of the post, there are still the old instructions, which are to be considered stale and left there only for “historical reasons”.

Grub-btrfs provides a daemon that watches the snapshot directory and updates the GRUB menu automatically every time a snapshot is created or deleted.

Important: This daemon requires an additional package:
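The additional package is inotify-tools, mentioned in the ChangeLog at the top:

```shell
sudo pacman -S inotify-tools
```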

By default, this daemon watches the directory “/.snapshots” (the default directory for Snapper). Since Timeshift uses a different directory, we have to tweak the configuration for the daemon.

Let’s run:

We must change the daemon's ExecStart line so that, instead of watching a fixed directory, it automatically detects Timeshift's snapshot directory.
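Assuming the default unit file shipped by grub-btrfs, the change would look like this:

```ini
# before:
ExecStart=/usr/bin/grub-btrfsd --syslog /.snapshots
# after:
ExecStart=/usr/bin/grub-btrfsd --syslog --timeshift-auto
```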

This is required for Timeshift version 22.06 and later because Timeshift creates a new directory named after its process ID in /run/timeshift every time it is started. Since the PID will be different every time, the directory will also be different. Grub-btrfs provides the command-line argument --timeshift-auto to correctly detect the current snapshot directory (in previous versions of grub-btrfs, we had to tweak /etc/fstab to deal with that, as shown later in the old section).

Let’s start the daemon:

In the journalctl log, we should see something like (where the date and time have been stripped off):

Let’s start Timeshift. In the journalctl log, we should see something like this:

Let’s verify that if we create a new snapshot, grub-btrfs automatically updates the GRUB menu: in a terminal window, run “journalctl -f” to look at the log, then create a new snapshot in Timeshift. In the log, you should see something like the following lines:

Similarly, if we delete an existing snapshot, we should see something similar in the log.

Remember that it takes a few seconds for grub-btrfs to recreate the grub menu.

Once we’re sure everything works, we can enable the daemon to always start at boot:

The next time we boot, our grub menu will also show a submenu to boot snapshots.

For some experiments with booting a snapshot and restoring it, please look at the next section.

IMPORTANT: If you have several Linux distributions on your computer and you use a multiboot system like the one I blogged about, and this distribution is not the main one, you will have to manually tweak the entry in your main distribution’s GRUB menu. See the linked blog post near the end.

Some experiments

Let’s do some experiments with this configuration.

Here’s the kernel I’m currently running:

I’m updating the system (I’m skipping some output below, and you can ignore the “stale mount” errors):

So it created a snapshot before updating the system (in particular, it installed a new kernel version). Let’s reboot and verify we are running the new kernel (5.18.8 instead of 5.18.7):

Let’s reboot and select from GRUB the latest snapshot (remember, the one before applying the upgrade), so timeshift-btrfs/snapshots/2022-07-02_15-35-53 (snapshots are presented in the grub submenu from the most recent to the oldest one). We do that by pretending that the update broke the system (it’s not the case), and we want to get back to a working system before the update we have just performed.

You see that the “Authentication Required” dialog greets us, and in the background, you can see the notification that we “booted into Timeshift Snapshot, please restore the snapshot”:

The password is required because it’s trying to run Timeshift:

In the screenshot, you can see that we are now using the older kernel since we booted in that snapshot, where the update has not yet been performed. We have to restore the snapshot manually; otherwise, on the next boot, we’ll get back to the updated system version and not in the snapshot anymore.

So, let’s restore the snapshot:

You see, Timeshift has created another snapshot ([LIVE]). We now reboot normally (that is, using the main grub entry, NOT the snapshot entries).

Once rebooted normally, we can verify again that we are running the old kernel:

Let’s have a look at Timeshift, and we can see the last snapshot is an effective one, not a LIVE one:

Yes, we are now in a system where the update above has never been applied.

Let’s try to rerun the update command (we don’t effectively execute the update, it’s just an experiment):

Why? Because the snapshot had been created automatically by timeshift-autosnap while the package manager was running (just before applying the updates), so the package manager's lock file is still there.

Let’s remove the lock and try to rerun the update:

The output is similar to the one shown above (unless there are even more new updates in the meantime, which might happen in a rolling release), but something is missing:

Why? Because the downloaded packages in the cache are NOT part of the saved snapshot: they are still present in the current system, even though we restored the snapshot. Why are the cached packages still there, but the lock has been restored with the snapshot? That's due to the way subvolumes are specified in the /etc/fstab:

You see, the cache of downloaded packages and the logs are NOT part of the snapshots, while /var/lib (including the pacman lock) is part of the snapshots.

Let’s now revert the snapshot: we select the one with “Before restoring…”.

Again, we are now in a LIVE situation, and Timeshift tells us again to reboot to make it effective.

Let’s reboot (by using the main grub entry).

We’re back to the updated system, and there’s nothing to update (again, unless new updates have been made available in the meantime):

If we’re happy with the updated system, we can also remove those two snapshots (remember that grub-btrfs monitors the snapshots so that it will update its grub submenu entries):

I hope you find this blog post helpful, and I hope it complements the beautiful video of Stephen’s Tech Talks mentioned above.

Old version (with old release 4.11 of grub-btrfs)

UPDATE 02/Dec/2022: These are the older instructions for the previous version of grub-btrfs, where there was no “grub-btrfsd.service” and there was another systemd program (“grub-btrfs.path”).

I leave these instructions here just for “historical reasons”.

The first problem is that timeshift has recently changed the strategy for creating snapshots. Instead of creating them in /run/timeshift/backup/timeshift-btrfs/snapshots, it now creates them in /run/timeshift/<PID>/backup/timeshift-btrfs/snapshots, where <PID> is the PID of the Timeshift process. Each time you run Timeshift, the directory will be different, breaking grub-btrfs (which expects to find the snapshots always in the same directory).

Fortunately, there’s a workaround: we add an entry to /etc/fstab in order to mount explicitly the path /run/timeshift/backup/timeshift-btrfs/snapshots:

where, of course, <UUID> has to be replaced with the UUID of the actual physical disk partition.

Reboot, and then Timeshift will also put the snapshot in that directory (besides the one with the PID, as mentioned above). You can try to create a snapshot to verify that (this also allows us to use the Timeshift wizard so that we specify to create BTRFS snapshots).

Let’s make sure the mount point is active (and note the unit name)

Let’s now install grub-btrfs

We need to configure it to monitor the Timeshift snapshot directory instead of the default one (/.snapshots).

The default contents of the grub-btrfs.path unit should be changed so that the monitored path points to the Timeshift snapshot directory.
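Assuming the default grub-btrfs.path unit of that release, the change concerns the watched path:

```ini
# before:
PathModified=/.snapshots
# after:
PathModified=/run/timeshift/backup/timeshift-btrfs/snapshots
```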

Let’s reload and re-enable the monitoring service:

If we have already created a few snapshots, we can run update-grub (or, if you have not installed the package update-grub, use the command “grub-mkconfig -o /boot/grub/grub.cfg”) and verify that new grub entries are created for the found snapshots:

We can also restart the system and prove that we can access the GRUB submenu with the generated entries for the snapshots.

KDE Plasma 5.25 in Arch

After the recent release of KDE Plasma 5.25, this version landed a few days ago in Arch-based distros like EndeavourOS (the one I’m writing from).

Although I’m mostly a GNOME user, I also have a few distributions installed where I’m using KDE Plasma.

The new features that impressed me most are related to eye candy 🙂

First, the “Present Windows” effect now looks the same as the new “Overview” effect. If we compare the “Present Windows” effect in the previous version (5.24):

with the new one:

we can see a significant improvement: in the earlier versions, the windows not selected were too dark, making it hard to distinguish them. This behavior relates to an old bug (10 years old): https://bugs.kde.org/show_bug.cgi?id=303438. This bug has been fixed by rewriting this effect “to use the same modern, maintainable backend technology found in the Overview effect.”

I use this effect a lot (I also configured the “Super” key to use this effect, simulating what happens in Gnome for its “Activities” view), and I use the filter to filter the open windows quickly. So I appreciate this usability change a lot!

One detail I do not like in this new version of "Present Windows" is that the filter textbox remembers the entered text: the next time you use it, the presented windows are already filtered according to the previously entered text.

The other cool thing introduced is the automatic accent color! Accent colors were introduced a few versions ago in Plasma, but now you can have Plasma automatically adjust the accent color from the current wallpaper:

If you use a wallpaper changer (like the one provided by Plasma, or one that also downloads new wallpapers, like Variety), you will get nice accent colors throughout the day. Here are a few examples produced by running Variety to change the wallpaper:

Maybe it’s not an important feature, but, as we say in Italy, “Anche l’occhio vuole la sua parte” 😉

The last new feature that positively impressed me is that now KRunner also shows Java files (and probably files related to other programming languages) when you search for a string. Previously, although "Baloo" (the file indexing and file search framework for KDE) knew about these files, KRunner only showed .txt files and a few others, but not Java files.

Concerning Wayland, one thing I noted is that if I start a Plasma Wayland session using a brand new user, it automatically scales the display in case of an HDPI screen. Wayland usability in Plasma has not improved since my last experiments (see KDE Plasma and Wayland: usability).

 

Xtext 2.27.0: update your Xbase compiler tests

If you update to Xtext 2.27.0 and have compiler tests for your Xbase DSL that assert the output of the compilation, you’ll get lots of failures after the update.

I am guilty of that 😉
Well, for a good reason, at least 🙂

In fact, I worked on this issue: https://github.com/eclipse/xtext-extras/issues/772 and its fix is included in Xtext 2.27.0.

Now, the Xbase compilation mechanism does not generate useless empty lines anymore (before, it added lines with two spaces). Your compiler tests will fail because the output is different.

I personally fixed my tests in my DSLs by simply using the Find/Replace mechanism of Eclipse with this substitution pattern (there are two space characters between the tab character and the newline character):
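This is my reconstruction of the pattern, with Eclipse's "Regular expressions" option enabled (\t is the tab character; an empty replacement deletes the whole matched line, newline included):

```
Find:    \t  \n
Replace: (empty)
```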

If you have deep nesting in your compilation output, you might have to repeat this substitution with more than two characters, but this should not be required unless you generate nested classes or something like that.

With the above substitution a test like the following one:

will become like the following one (you see the difference: no empty line with two spaces between the two generated constructors):

Now your tests should be fixed 🙂

Configure Arch Pacman

Pacman is the package manager of Arch Linux and Arch-based Linux distributions.

I’ve been using EndeavourOS for some time, and I enjoy it. EndeavourOS is pretty close to vanilla Arch. I also experimented with pure Arch (more on that in future blog posts). However, the output of pacman in EndeavourOS is much more excellent and “eye candy” than in Arch. However, it’s just a matter of configuring /etc/pacman.conf a bit in Arch to have the “eye candy” output.

These are the options to enable in the [options] section of that file (ParallelDownloads has nothing to do with the output, but it's a nice optimization):
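In my /etc/pacman.conf, the relevant entries look like this (the ParallelDownloads value is just an example):

```ini
# /etc/pacman.conf (excerpt)
[options]
Color
ILoveCandy
VerbosePkgLists
ParallelDownloads = 5
```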

Without these options, this is the output of pacman (e.g., during an upgrade):

And this is the output with the options above enabled:

Besides the colors, you can spot the C's in the progress bar representing "Pacman," the video-game character, eating candies (that's the aim of the ILoveCandy option)… waka waka waka! 🙂

The colors are also helpful when searching for packages:

Happy Pacman! 🙂

macOS: switch between different windows of the same application

Maybe this is well-known to macOS users, but it wasn’t clear to me as a Linux user.

As a Linux user, I’m used to using Alt+Tab to switch between different windows. But I also use the shortcut to switch between different windows of the same application. In Gnome, the shortcut is Alt+<the key above Tab>, which is cool because it works with any keyboard layout. In KDE it is Alt+backtick (`), which has to be changed in Italian keyboards, like mine to Alt+\. Indeed, in the Italian keyboard layout, the key over tab is \.

In macOS it’s the same as in KDE: the shortcut is bound by default to ⌘+`, which of course it’s unusable in Italian keyboards (you should use a complex combination of keys only to insert the backtick ` character). You then have to configure the shortcut “Move focus to next window”, which is quite counterintuitive to me (I had always thought that it wasn’t possible in macOS to switch between windows of the same application if not by using the touchpad gesture or by pressing the down key after using the standard switcher):

Change it to something suitable for your keyboard layout. For the Italian layout I change it to ⌘+\:

And then you’re good to go! 🙂

KDE Plasma and Wayland: usability

It looks like KDE Plasma is getting usable with Wayland!

This is my current testing environment for this blog post:

Operating System: EndeavourOS
KDE Plasma Version: 5.24.5
KDE Frameworks Version: 5.94.0
Qt Version: 5.15.4
Kernel Version: 5.15.41-1-lts (64-bit)
Graphics Platform: Wayland
Processors: 8 × Intel® Core™ i7-8550U CPU @ 1.80GHz
Memory: 15,3 GiB of RAM
Graphics Processor: Mesa Intel® UHD Graphics 620

I had tested KDE Plasma with Wayland in the past, and the main problem I was experiencing, which made it unusable to me, was that I had to scale the display. I could scale the display, but the main problem was that, while KDE applications looked nice, the GTK applications looked blurred. This problem is still there, as you can see from this screenshot (here, I scaled the display to 150%):

You can see that the System settings dialog and Dolphin (in the background) look nice, but the EndeavourOS Welcome app and Firefox (in the background), which are GTK applications, look blurred!

Thus, I tried another way: I went back to 100% display scaling and worked on the font DPI scaling instead, though Plasma discourages doing that (it suggests using the display scaling). I tried with both 120 and 140; the result is satisfactory, as you can see from these screenshots:

IMPORTANT: You have to log out and log in to apply these changes. At least, I had to do that in my experiments.

There’s still one caveat to solve: GTK4 applications, like Gedit (the Gnome text editor) and Eye of Gnome (the Gnome image viewer), which, in this version of EndeavourOS, are already provided in their 42 version (using libadwaita). These applications are not considering font scaling. To solve that, you have to install Gnome Tweaks and adjust the “Scaling Factor” from there. Then, everything works also for those applications (Gedit is the one with “Untitled Document 1,” and Eye of Gnome is the dark window in the foreground):

With the Wayland session in Plasma, you can enjoy the default touchpad gestures (which, at the moment, are not configurable):

  • 4 Finger Swipe Left –> Next Virtual Desktop.
  • 4 Finger Swipe Right –> Previous Virtual Desktop.
  • 4 Finger Swipe Up –> Desktop Grid.
  • 4 Finger Swipe Down –> Available Window Grid.

Moreover, the scrolling speed for the touchpad can be configured (while, on X11, I wasn’t able to):

There are still a few strange things happening: the splash screen has the title bar and window buttons if you start Eclipse! 😀

I’ll try to experiment with this configuration also in other distributions.

Let’s cross our fingers! 😉

Dropbox and Gnome 42

Now that Gnome 42 has been released and available in most Linux distributions, I started experiencing problems with the Dropbox icon in the system tray.

First of all, I have no problem with Ubuntu 22.04, which comes with the extension “AppIndicator and KStatusNotifierItem Support” https://extensions.gnome.org/extension/615/appindicator-support/. Moreover, I think the problem is not there because, while Ubuntu 22.04 ships Gnome 42, it still ships Nautilus in version 41.

In Fedora and EndeavourOS, I usually install the same extension in the Gnome DE, and it has been working quite well.

Unfortunately, with Gnome 42 (provided by Fedora 36 and currently by EndeavourOS), I started experiencing problems, even with the extension above installed and activated.

If you had already installed Dropbox in your Gnome 41 DE and upgraded to Gnome 42 (e.g., you upgraded Fedora 35 to Fedora 36 after installing Dropbox), the icon is clickable, but you get a context menu that always says "Connecting…"

At least you can access “Preferences…”.

However, if you had never installed Dropbox in that Gnome 42 environment, the icon in the system tray appears (again, after installing the above extension), but no matter how you click on it, no context menu appears at all. That's a disgrace because you cannot access Dropbox preferences, like "selective sync" (you have to use the command line, as I suggested in the previous post).

Instead of the extension “AppIndicator and KStatusNotifierItem Support” (disable it if you had already activated that), you can use the extension “Tray Icons: Reloaded,” https://extensions.gnome.org/extension/2890/tray-icons-reloaded/. Install it, activate it, logout and login, and now the context menu works as expected:

Remember that this extension does not seem to support all system tray icons. For example, Variety does not seem to be supported.

At least you can use this extension to set up Dropbox (e.g., selective sync) and then go back to the previous extension!

Testing the new Fedora 36

Fedora 36 has just been released, and I couldn’t resist trying it right away. I had already started using Fedora 35 daily (though I have several Linux distributions installed), and I’ve been enjoying it so far.

Before upgrading my Fedora 35 installations, I decided to install Fedora 36 on a virtual machine with VirtualBox.

These are a few screenshots of the installation procedure.

As usual, you’re greeted by a dialog for installing or trying Fedora, and I went for the latter.

The installation procedure is available from the dock:

To be honest, I’m not a big fan of the Fedora installer: compared to other installers like Ubuntu and EndeavourOS or Manjaro, I find the Fedora installer much more confusing. Maybe it’s just that I’m not used to such an installer, but I never had problems with Calamares in EndeavourOS or Manjaro, not even the very first time I tried Calamares.

For example, once a subsection is selected, the "Done" button is in the upper left corner, while I would expect buttons at the bottom (right).

I appreciate that you can select the NTP server for time synchronization (at my University, I cannot use external NTP servers, and in fact, the default one does not work: I have to use the one provided by my University). Unfortunately, this setting did not seem to be persisted in the installed system. UPDATE 12/May: Actually, it is persisted: I thought I'd find it in the file /etc/systemd/timesyncd.conf, but instead it is in /etc/chrony.conf. Well done!

Since I’m installing the system on a VM hard disk for the partitioning, I chose the “Automatic” configuration. On a real computer, I’d go for manual partitioning. Even in this task, the Fedora installer is a bit confusing. Maybe the “Advanced Custom (Blivet GUI)” is more accessible than the default “Custom” GUI, or, at least, it’s much similar to what I’m accustomed to.

Finally, we’re ready to start the installation.

Even on a virtual machine, the installation does not take that long.

Once rebooted (actually, in the virtual machine, the first reboot did not succeed, and I had to force the shutdown of the VM), you’re greeted by a Welcome program. This program allows you to configure a few things, including enabling 3rd party repositories and online accounts and specifying your user account.

Then, there is the Gnome welcome tour, which I’ll skip here.

Here is the information about the installed system. As you can see, Fedora ships with the brand new Gnome 42 and with Wayland by default:

Fedora uses offline updates, so once notified of updates, you have to restart the system, and the updates will be installed on the next boot:

The installation is not bloated with too much software. Gnome 42's new theme looks fine, with folder icons in blue (instead of the old-fashioned light brown). Fedora also ships with the new Gnome Text Editor. Unlike the old Gedit, the new text editor finally allows you to increase/decrease the font size with Ctrl and +/-, respectively. I cannot believe Gedit never provided such a mechanism. I used to install Kate in Gnome because of that missing Gedit feature.

On the other hand, Fedora does not install the new Gnome terminal (gnome-console) by default. I installed it with DNF, and I can't say I liked it that much: with Ctrl and +/-, you can zoom the terminal's font, but the terminal does not resize accordingly. For that reason, I prefer to stay with the good old Gnome terminal (gnome-terminal).

First impressions

First of all, although I tried this installation in a VM, Fedora 36 seems pretty responsive and efficient. I might even say that the guest Fedora 36 VM looked faster than my host (Ubuntu 22.04). Maybe that was just an impression 😉

Since I chose the Automatic partitioning, Fedora created two BTRFS subvolumes (one for / and one for /home) with compression, and a separate ext4 /boot partition:
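The subvolume layout and the compression option can be inspected, for example, with findmnt:

```shell
findmnt -t btrfs,ext4 -o TARGET,SOURCE,FSTYPE,OPTIONS
```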

It also uses swap on zram:
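This can be verified with:

```shell
swapon --show
zramctl
```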

I soon installed the extension “AppIndicator, KStatusNotifierItem and legacy Tray icons support to the Shell” by Ubuntu (https://extensions.gnome.org/extension/615/appindicator-support/) and it works in Gnome 42.

However, after installing Dropbox, while the icon shows on the system tray, clicking on that Dropbox icon (left or right-click or double-click) does not make the context menu appear, making that unusable. I seem to understand that it is a known problem, and maybe they are already working on that. For the time being, if you need the Dropbox context menu for settings like “selective sync,” you’re out of luck. However, you can use the dropbox command-line program for the settings. In that case, I first ignore all the folders and then remove the exclusion for the folders I want to have in sync.

For example, I only want “Screenshot” and “sync” from my Dropbox on my local computer, and I run:
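With the dropbox CLI, the two steps would look like this (the folder names come from my example; relying on shell globbing for the first step is an assumption):

```shell
# exclude everything currently in the Dropbox folder from syncing...
dropbox exclude add ~/Dropbox/*
# ...then re-include only the folders I want synced locally
dropbox exclude remove ~/Dropbox/Screenshot ~/Dropbox/sync
```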

On a side note, I find the Dropbox support for Linux kind of an insult…

I look forward to upgrading my existing Fedora 35 installations on my computers, and maybe I’ll get back with more impressions on Fedora 36 on real hardware.

For the moment, it looks promising 🙂

Mirror Eclipse p2 repositories with Tycho

I had previously written about mirroring Eclipse p2 repositories (see this blog tag), but I’ll show how to do that with Tycho and one of its plugins in this post.

The goal is always the same: speed up my Maven/Tycho builds that depend on target platforms and insulate me from external servers.

The source code of this example can be found here: https://github.com/LorenzoBettini/tycho-mirror-example.

I will show how to create a mirror of a few features and bundles from a few p2 repositories so that I can then resolve a target definition file against the mirror. In the POM, I will also create a version of the target definition file modified to use the local mirror (using Ant). Moreover, I will also use a Tycho goal to validate such a modified target definition file against the local mirror. The overall procedure is also automatized in the CI (GitHub Actions). This way, we are confident that we will create a mirror that can be used locally for our builds.

First of all, let’s see the target platform I want to use during my Maven/Tycho builds. The target platform definition file is taken from my project Edelta, based on Xtext.

As you see, it’s rather complex and relies on several p2 repositories. The last repository is the Orbit repository; although it does not list any installable units, that is still required to resolve dependencies of Epsilon (see the last but one location). We have to consider this when defining our mirroring strategy.

As usual, we define a few properties at the beginning of the POM for specifying the versions of the plugin and the parts of the p2 update site we will mirror from:

Let’s configure the Tycho plugin for mirroring (see the documentation of the plugin for all the details of the configuration):

The mirror will be generated in the user home subdirectory "eclipse-mirrors" (<destination> tag); we also define a few other mirroring options. Note that in this example, we cannot mirror only the latest versions of bundles (<latestVersionOnly>), as detailed in the comment in the POM. We also avoid mirroring the entire contents of the update sites (it would be too much). That's why we specify single installable units. Remember that the dependencies of the listed installable units will also be mirrored, so it is enough to list the main ones. You might note differences between the installable units specified in the target platform definition and those listed in the plugin configuration. Indeed, the target platform file could also be simplified accordingly, but I just wanted to have slight differences to experiment with.

If you write the above configuration in a POM file (a <packaging>pom</packaging> will be enough), you can already build the mirror running:
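The command was elided above; assuming no execution binding in the POM, invoking the plugin's mirror goal directly with its fully qualified name would look like this:

```shell
mvn org.eclipse.tycho.extras:tycho-p2-extras-plugin:mirror
```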

Remember that the mirroring process will take several minutes, depending on your Internet connection speed, since it has to download about 500 MB of data.

You can verify that all the specified repositories are needed to create the mirror correctly. For example, try to remove this part from the POM:

Try to create the mirror, and you should see this warning message because some requirements of Epsilon bundles cannot be resolved:

Those requirements are found in the Orbit p2 repository, which we have just removed for testing purposes.

Unfortunately, I found no way to make the build fail in such cases, also because it's just a warning, not an error. I guess this is a limitation of the Eclipse mirroring mechanism. However, we will now see how to verify that the mirror contains all the needed software using another mechanism.

We create a modified version of our target definition file pointing to our local mirror. To do that, we create an Ant file (create_local_target.ant):
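A simplified reconstruction of such an Ant file, using the replaceregexp task (the file names and mirror path are illustrative, and this sketch skips the Windows path-separator handling the original file performs):

```xml
<project name="create_local_target" default="create">
  <target name="create">
    <!-- copy the original target file... -->
    <copy file="example.target" tofile="local.target" overwrite="true"/>
    <!-- ...and point every repository location to the local mirror -->
    <replaceregexp file="local.target"
        match='location="https?://[^"]*"'
        replace='location="file://${user.home}/eclipse-mirrors"'
        flags="g"/>
  </target>
</project>
```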

Note that this also handles path separators on Windows correctly. The idea is to replace lines of the form <repository location="https://…"/> with <repository location="file:/…/eclipse-mirrors"/>. This file assumes the original target file is example.target, and the modified file is generated into local.target.

Let’s call this Ant script from the POM:

Finally, let’s use Tycho to validate the local.target file (see the documentation of the goal):

Now, if we run:

we build the mirror, and we create the local.target file.

Then, we can run the above goal explicitly to verify everything:

If this goal also succeeds, we managed to create a local mirror that we can use in our local builds. Of course, in the parent POM of your project, you must configure the build so that you can switch to local.target instead of using your standard .target file. (You might want to look at the parent POM of my Edelta project to take some inspiration.)

Since we should not trust a test we have never seen failing (see also my TDD book 🙂), let's verify with an incomplete mirror, which we learned to create above by removing the Orbit URL. We should see that our local target platform cannot be validated:

Alternatively, let’s try to build our mirror with <latestVersionOnly>true</latestVersionOnly>, and during the validation of the target platform, we get:

In fact, we mirror only the latest version of org.antlr.runtime (4.7.2.v20200218-0804), which does not satisfy that requirement. That's why we must use <latestVersionOnly>false</latestVersionOnly> in this example.

For completeness, this is the full POM:

And this is the YAML file to build and verify in GitHub Actions:
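A minimal workflow could look like the following sketch (it assumes Maven and Java 11; job and step names are illustrative):

```yaml
name: Build and verify the mirror
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: temurin
          java-version: 11
      # builds the mirror and generates local.target
      - run: mvn verify
      # validates the local target platform against the mirror
      - run: mvn org.eclipse.tycho.extras:target-platform-validation-plugin:validate-target-platform
```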

I hope you found this post valuable, and happy mirroring! 🙂

Multibooting with GRUB

4th July, updated with BTRFS installations.

There’s also a more recent and simpler version of this post.

Besides Windows (which I rarely use) on my computers, I have a few Linux distributions. Grub 2 does a good job booting Windows and Linux, especially thanks to os-prober, in autodetecting other operating systems in other partitions of the same computer. However, there are a few “buts” in this strategy:

  1. Typically, the last installed Linux distribution, say L1, installs its own grub as the main one, and when you upgrade the kernel in another Linux distribution, say L2, you have to boot into L1 and “update-grub” so that the main grub configuration learns about the new kernel of L2. Only then can you boot the new kernel of L2. Of course, you can change the main grub by reordering the EFI entries, e.g., by using the computer’s BIOS, but again, that’s far from optimal.
  2. Not all Linux distributions' grub configurations can boot other Linux distributions. For example, Arch-based distros like EndeavourOS and Manjaro can boot Ubuntu-based distros, but not the other way around (unless you fix a few things in Ubuntu's grub configuration)! Recently, I also started to use Fedora and found that os-prober in Ubuntu and EndeavourOS does not generate correct entries to boot it: Fedora recently switched to "blscfg" (https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault), and as a result, Ubuntu and EndeavourOS create grub configurations that ignore the changes you made in Fedora's /etc/default/grub.

That’s why I started to experiment with grub configurations. I still have a “main grub” in a Linux installation, which simply “delegates” to the grub configurations of the other Linux installations. This way, I can solve both the problems above!

In this blog post, I’ll show how I did that. Note that this assumes you use EFI to boot.

I have Windows 10, Kubuntu, EndeavourOS, and Fedora on the same computer in this example. I will configure the grub installation of Fedora so that it delegates to Windows, Kubuntu, and EndeavourOS without relying on os-prober.

This is the disk layout of my computer so that you understand the numbers in the grub configuration that I’ll show later (I omit other partitions like Windows recovery).

The key point is modifying the file /etc/grub.d/40_custom. I guess you already know that you should not modify grub.cfg directly, because a system update or a grub update (e.g., "update-grub") will overwrite that file.

The file /etc/grub.d/40_custom already has some contents that must be left as they are: you add your lines after the existing ones. For example, in Fedora, you have:

We will use grub's configfile command (see https://www.gnu.org/software/grub/manual/grub/grub.html#configfile): "Load file as a configuration file. If file defines any menu entries, then show a menu containing them immediately." The Arch wiki also explains it well:

If the other distribution has already a valid /boot folder with installed GRUB, grub.cfg, kernel and initramfs, GRUB can be instructed to load these other grub.cfg files on-the-fly during boot.

The idea is to put in /etc/grub.d/40_custom an entry for each Linux distribution, pointing to that distribution's grub.cfg after setting the root partition. Thus, the path to the grub.cfg must be interpreted as an absolute path within that partition. If you look at the partition numbers above, these are the two entries for booting EndeavourOS and Kubuntu:
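They could be sketched like this (the partition numbers are illustrative, take the actual ones from your own disk layout; the filesystem module to load depends on each distribution's root filesystem):

```
menuentry "EndeavourOS" {
    rmmod tpm
    insmod part_gpt
    insmod ext2          # grub's ext2 module also reads ext3/ext4
    set root=(hd0,gpt5)  # hypothetical: your EndeavourOS root partition
    configfile /boot/grub/grub.cfg
}

menuentry "Kubuntu" {
    rmmod tpm
    insmod part_gpt
    insmod ext2
    set root=(hd0,gpt6)  # hypothetical: your Kubuntu root partition
    configfile /boot/grub/grub.cfg
}
```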

NOTE: the “rmmod tpm” is required to avoid TPM errors when booting those systems (“Unknown TPM error”, “you need to load the kernel first”). It happened on my Dell XPS 13, for example. Adding that line (i.e., not loading the module “tpm”) solved the problem.

Remember that the path assumes that the /boot directory is not mounted on a separate partition. If, instead, that’s the case, you probably have to remove “/boot”, but I haven’t tried that.

Concerning the entry for Windows, here it is:
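A sketch of the Windows entry (again, the partition number is illustrative; the bootmgfw.efi path is the standard location of the Windows boot manager on the EFI partition):

```
menuentry "Windows" {
    insmod part_gpt
    insmod fat
    insmod chain
    set root=(hd0,gpt1)  # hypothetical: the EFI system partition
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
```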

In this entry, the root must correspond to the EFI partition, NOT to the partition of Windows.

Save the file and regenerate the grub configuration. In other Linux distributions, it would be a matter of running “update-grub,” but in Fedora, it is:
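As a sketch (guarded so that it is a no-op on non-Fedora systems):

```shell
# Regenerate the grub configuration on Fedora
# (on most other distros the equivalent is "update-grub" or "grub-mkconfig")
if command -v grub2-mkconfig >/dev/null 2>&1; then
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
else
    echo "grub2-mkconfig not found: this command is Fedora-specific"
fi
```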

Now reboot, and you should see the grub menu of Fedora and then, at the bottom, the entries for EndeavourOS, Kubuntu, and Windows. Choosing “EndeavourOS” or “Kubuntu” will NOT boot directly in these systems: it will show the grub menu of “EndeavourOS” or “Kubuntu.”

If you upgrade the kernel on one of these two systems, their grub configuration will be correctly updated. There’s no need to boot into Fedora to update its grub configuration 🙂

If you want to configure the grub in another Linux distribution, please remember that Fedora stores the grub.cfg in /boot/grub2 instead of /boot/grub, so you should write the entry for Fedora with the right path. However, if you plan to boot Fedora with this mechanism, you should disable “blscfg” in the Fedora grub configuration, or you will not be able to boot Fedora (errors “increment.mod” and “blscfg.mod” not found).

Now that we have verified that it works, we can remove the entries generated by os-prober. In /etc/default/grub, add the line:
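The line is the standard switch for disabling os-prober:

```
GRUB_DISABLE_OS_PROBER=true
```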

and regenerate the grub configuration.

If you want Grub to remember the last choice, you can look at this post.

On a side note, due to the way Fedora uses grub (https://fedoraproject.org/wiki/Changes/HiddenGrubMenu), without os-prober, you will not see the grub menu unless you press ESC; after the timeout, it will simply boot into the default entry. To avoid that and see the grub menu, just run:
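The command (from the Fedora wiki page linked above, here wrapped in a guard so it is a no-op on non-Fedora systems):

```shell
# Disable Fedora's hidden grub menu behavior
if command -v grub2-editenv >/dev/null 2>&1; then
    sudo grub2-editenv - unset menu_auto_hide
else
    echo "grub2-editenv not found: this applies to Fedora only"
fi
```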

And the grub menu will get back as usual.

Then, you can also remove os-prober from the other Linux installations since it is useless now.

These were the original grub menus of Fedora and EndeavourOS before applying the modifications described in this post:

Pretty crowded!

This is the result after the procedure described in this post (note that, from the Fedora grub menu, selecting EndeavourOS or Kubuntu lands you in that distribution's own grub menu):

Much better! 🙂

If you need to boot an installation in a BTRFS filesystem (which also includes the /boot directory and the grub.cfg), things are slightly more complex. In fact, BTRFS installations are typically based on subvolumes. The root subvolume is typically denoted by the label “@”. This must be taken into consideration when creating the menu entry.

For example, I’ve also installed Arch on my computer using BTRFS, and the root subvolume is denoted by “@”. The menu entry is as follows:

Note the presence of “/@” in the configfile specification.

That's not all. If the GRUB configuration specified in configfile has submenus, for example, automatically generated by grub-btrfs, you must also define the prefix variable appropriately (in fact, grub-btrfs generates entries relying on such a variable):
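A sketch of the entry with the prefix variable set (again, the partition number is illustrative):

```
menuentry "Arch Linux (with grub-btrfs snapshot submenus)" {
    rmmod tpm
    insmod part_gpt
    insmod btrfs
    set root=(hd0,gpt7)               # hypothetical partition number
    set prefix="($root)/@/boot/grub"  # needed by the grub-btrfs submenus
    configfile /@/boot/grub/grub.cfg
}
```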

 

Getting started with KVM and Virtual Machine Manager

After playing with VirtualBox (see my posts), I’ve decided to try also KVM (based on QEMU) and Virtual Machine Manager (virt-manager).

The installation is straightforward.

In Ubuntu systems:

In Arch-based systems:

Then, you need to add your user to the corresponding group:
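The group is typically libvirt (on some distributions you may also need kvm); a guarded sketch:

```shell
# Add the current user to the libvirt group
# (log out and back in, or reboot, for the change to take effect)
if getent group libvirt >/dev/null 2>&1; then
    sudo usermod -aG libvirt "$USER"
else
    echo "libvirt group not found: install the libvirt packages first"
fi
```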

Reboot, and you’re good to go.

In this post, I’m going to install Fedora 35 on a virtual machine through Virtual Machine Manager (based on KVM and QEMU).

So, first, download the ISO of this distribution if you want to follow along.

Let’s start Virtual Machine Manager (virt-manager):

Press the "+" button to create a new virtual machine and select the first entry, since we have downloaded an ISO.

Here, we select the ISO and let the manager detect the installed OS. Otherwise, we can choose the OS manually (the manager might not catch the OS correctly in some cases: it happened to me with ArcoLinux, for example).

Then, we allocate some resources. Since I have 16GB and a quad-core, I give the virtual machine 8GB and two cores.

Then, we allocate storage for the machine. Alternatively, we can select or create a custom image file in another location. By default, the image will NOT physically occupy the whole space on your disk. Thus, I will not lose 60GB (unless I actually use that much space in the virtual machine). The file will appear to be of the specified size on your drive, but if you check the free disk space, you will notice that you haven't lost that many gigabytes (more on that in the next steps).

In the last step, we can give a custom name to our machine and customize a few settings before starting the installation by selecting the appropriate checkbox (we also make sure that the network is configured correctly).

If we selected “Customize configuration before install,” by pressing “Finish,” we get to the settings of our virtual machine.

In this example, I’m going to change the chipset and specify a UEFI firmware:

We can also get other information, like the path of the disk image:

And we can click “Begin Installation.” After the boot menu, we’ll get to the live environment of the distribution ISO we chose:

You can also specify whether to resize the display of the VM automatically when you resize the window, and when to do that. (WARNING: this will work correctly only after installing the OS in the virtual machine, since this feature requires some software in the guest operating system. Typically, such software, spice-vdagent, is automatically installed in the guest during the OS installation, from what I've seen in my experiments.)

And we can start the installation of the distribution (or try it live before the actual installation), as usual. Of course, the whole installation process will be a bit slower than on real hardware.

I'll choose "Automatic" disk partitioning: since the disk image is allocated only to this machine, I will not bother customizing that.

While installing, you might want to check the disk image size and the effective space on the disk:
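For example, you can compare the apparent size of the image with the space it actually uses (the image path is hypothetical; if no image is found, the sketch falls back to a freshly created sparse file, which behaves like a qcow2 image in this respect):

```shell
# Compare a disk image's apparent size with the blocks actually allocated
IMG="$HOME/.local/share/libvirt/images/fedora35.qcow2"   # hypothetical path
if [ ! -f "$IMG" ]; then
    IMG=$(mktemp)
    truncate -s 1G "$IMG"   # demo: 1 GB apparent size, ~0 actual usage
fi
ls -lh "$IMG"   # apparent size
du -h "$IMG"    # blocks actually allocated (much smaller)
```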

After a few minutes, the installation should be complete, and we can reboot our virtual machine.

And upon reboot, we’ll get to our new installed OS on the virtual machine:

In the primary Virtual Machine Manager window, you can see your virtual machines, and, if they are running, a few statistics:

In the virtual machine window’s “View” menu, you can switch between the “Console” view (that is, the virtual machine installed and running OS) and the “Details” view, where you can see its settings, and change a few of them.

Note that now the automatic resize of the machine display and the window works: in the screenshot I resized the window (made it bigger) and the display of the machine resized accordingly.

When you later restart a virtual machine from the manager, you might have to double-click on the virtual machine element and possibly switch to the “Console” view.

After installing the OS, you might want to check the image file and the actual disk usage again. You will find that while the image file size did not change, the disk usage has:

What I’ve shown in this blog post was one of my first experiments with KVM and the Virtual Machine Manager. To be honest, I still prefer VirtualBox, but maybe that’s only because I’m more used to VirtualBox, while I’ve just started using virt-manager.

That’s all for now! Stay tuned for further posts on KVM and virt-manager, and happy virtualization! 🙂

Limiting Battery Charge on LG Gram in Linux

I’ve been using this laptop for some months now (see my other posts). In Windows, you can easily set the battery charge limit to 80% using the LG Gram control center. In Linux, I did not find any specific configuration in any system settings in any DE (not even in KDE Plasma, where, for some laptops, there’s support for setting the battery charge limit).

However, since kernel 5.15, you can do it yourself, thanks to some specific LG Gram kernel features, https://www.kernel.org/doc/html/latest/admin-guide/laptops/lg-laptop.html:

Writing 80/100 to /sys/devices/platform/lg-laptop/battery_care_limit sets the maximum capacity to charge the battery. Limiting the charge reduces battery capacity loss over time.

This value is reset to 100 when the kernel boots.

So you need to write '80' to that file. I do it like this:
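A guarded sketch (the sysfs path is the one from the kernel documentation quoted above; the guard just makes it safe to run on machines without the lg-laptop driver):

```shell
# Set the battery charge limit to 80% (resets to 100 at every boot)
LIMIT=/sys/devices/platform/lg-laptop/battery_care_limit
if [ -f "$LIMIT" ]; then
    echo 80 | sudo tee "$LIMIT"
else
    echo "lg-laptop driver not available (LG Gram with kernel >= 5.15 required)"
fi
```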

After that, you can see that when charging reaches 80%, the laptop no longer charges the battery. Depending on the DE, you either see the charging notice disappear or the charge stuck at 80%. The DE might even tell you that it still needs some time until fully charged, but you can ignore that: the notice will stay like that, as shown in these two screenshots (KDE Plasma), taken at different times:

Note that in the quotation shown above, you also read

This value is reset to 100 when the kernel boots.

If you reboot, the value in that file will go back to ‘100’, and charging will effectively continue. Note that this also holds if you hibernate (suspend to disk) the laptop since when you restart it from hibernate, you’ll boot it anyway, so that will reset the value in the file. However, if you put the laptop to sleep, the value of the file will not change.

Above, I said that you need kernel 5.15. I think the feature was introduced even earlier, but in kernel 5.13 it does not seem to work: no matter what you write to that file, the change is not persisted. In my experience, it only works starting from kernel 5.15.

With kernel 5.15, it works for me in EndeavourOS, Manjaro, and Kubuntu.

UPDATE: I’ve written another post based on TLP.

Linux EndeavourOS review

I’ve been using Linux EndeavourOS (the latest version, “Atlantis neo”) for a few days now, and I love it!

I mainly use Ubuntu and Kubuntu, but I recently enjoyed Manjaro, an Arch-based distro. I still haven't tried to install pure Arch, but I learned about EndeavourOS, another Arch-based distro that stays very close to pure Arch. It is certainly more Arch than Manjaro, since EndeavourOS uses the Arch repositories plus a small EndeavourOS repository, whereas Manjaro relies heavily on its own independent repositories (which also contain software packages not provided by Arch). They're both rolling releases, but EndeavourOS is essentially Arch with a much simpler installation procedure.

I’ll first briefly recap the installation procedure and then do a short review.

Installation

The installation starts with an XFCE desktop and a dialog where you can set a few things, including the screen resolution in case you need to:

Now it's time to connect to the Internet, e.g., via Wi-Fi (the setting will be remembered in the final installation, so you will not have to re-enter the Wi-Fi credentials).

Then, we can start the installer:

I prefer to choose “Online” so that I can select a different desktop environment (I don’t use XFCE, which is the only choice if you perform the “Offline” method):

One of the exciting aspects of the EndeavourOS installation process is that it automatically shows a terminal with the log. This terminal can be helpful to debug possible installation problems.

The installer is Calamares, which you might already know if you used Manjaro.

I’m going to show only the interesting parts of the installation.

The partitioning already found the main SSD drive.

Since I have a few Linux installations already on this computer, I choose to replace one of them with EndeavourOS.

In particular, I select the Manjaro Linux (21.2rc) checkbox to replace that installation (see the “Current:” and the “After:” parts):

Since I chose the “Online” installer, I can now select the software to install. Note the printing support software:

I also decide to install both KDE and GNOME (maybe I’ll blog in the future about the coexistence of the two desktop environments). That’s another exciting feature of EndeavourOS: it lets you install as many desktop environments as you want right during the installation. Other distributions typically only provide ISOs for specific desktop environments (the so-called “spins”).

If you expand the nodes in the tree, you can see the installed software for each DE. I can anticipate that for both KDE and GNOME, the installed programs are not so many.

Time to look at the summary, and then we're ready to start the installation, which takes only a few minutes on my computer.

Review

As I have already anticipated, I’m enjoying this distribution so far.

I mainly use the KDE Plasma desktop. In this distribution, Plasma looks very close to vanilla Plasma. It does not come with much preinstalled KDE software, but all the necessary KDE applications are there.

I had to install a few additional KDE applications I like to have. The corresponding packages are plasma-systemmonitor, kdeplasma-addons (for other task switchers), and kcalc.

Of course, pacman is already installed, but you also have yay already installed.

Since I like the GUI front-end pamac, I had to install that manually:

Remember that, besides an EndeavourOS repository, everything else comes from the official Arch repositories.

EndeavourOS ships with the latest Linux kernel 5.15, and on my computers, it works like a charm.

The “Welcome” application automatically appears when you log in, and it provides a few helpful buttons: for updating the mirrors, the packages, and configuring package cache cleaning:

For updating the software packages, yay will start in a terminal window. Indeed, EndeavourOS defines itself as a “terminal-centric distro.”

Speaking about software updates, you get a system tray notification when they are available:

But unfortunately, clicking on that does not do anything: you have to update the software manually (e.g., by using the above-mentioned “Welcome” app).

Another minor defect (if I have to find defects) is the empty icon on the panel: it refers to the KDE “Discover” application, which is not installed by default. That is confusing, and probably the installation should have taken care of not putting it there by default.

Besides that, I enjoy the KDE Plasma experience provided by EndeavourOS.

Concerning GNOME, again, the installed software is minimal, but you get the essential software, including Gnome Tweaks. No specific GNOME extensions are provided, but you can install them yourself. In the end, it’s vanilla GNOME.

All in all, I guess I’ll be using EndeavourOS as my daily driver in the next few days!

I hope you try EndeavourOS yourself and enjoy it as much as I do 🙂

How to install Linux on a USB drive with UEFI support using VirtualBox

This is the third post on installing Linux on a USB drive!

Remember that the idea is to have a USB drive that will work as a portable Linux operating system on any computer.

In the first post, How to install Linux on a USB drive using Virtualbox, the USB drive with Linux installed could be used when booting from a computer with "Legacy boot" enabled: it could not boot if UEFI was the only option on that computer.

In the second post, How to install Linux on a USB with UEFI support, I showed how to install Linux on the USB drive directly, without using VirtualBox, while creating a UEFI bootable device. However, you had to be careful during the installation to avoid overwriting the UEFI boot loader of your computer.

In this post, I’ll show how to install Linux on a USB drive, with UEFI support, using VirtualBox. In the end, we’ll get a UEFI bootable device, but without being scared of breaking the UEFI boot loader of your computer, since we’ll do that using a virtual machine.

The scenario

First of all, let's summarize what I want to do. I want to install Linux on a portable external USB SSD. I don't want a live distribution: a live distribution only gives you a limited testing experience, it's not easily maintainable or upgradable, and it's harder to keep your data on it. On the contrary, installing Linux on a USB drive will give you the whole experience (and if the USB drive is fast, it's almost like using Linux on a standard computer; that's undoubtedly the case for an external SSD, which is pretty cheap nowadays).

In the previous post, I described how to create such an installation from VirtualBox. As I said, you can boot the USB drive only in Legacy mode. This time, we’ll be able to boot the USB drive in any UEFI computer.

I’m going to perform this experiment:

  • I’m going to use VirtualBox installed on a Dell XPS 13 where I already have (in multi-boot, UEFI), Windows, Ubuntu, Kubuntu, and Manjaro GNOME
  • I'm going to install Ubuntu 21.10 on an external USB SanDisk SSD (256 GB)
  • then I’m going to install on the same external USB drive also EndeavourOS (an excellent distribution I’ve just started to enjoy) along with the installed Ubuntu

I have already downloaded the two distributions’ ISOs.

I’ve installed VirtualBox in Ubuntu following this procedure

and then rebooted.

By the way, since the second distribution will take precedence over an existing UEFI configuration on the USB, it’s better to start with Ubuntu and then proceed with EndeavourOS (Arch based). While an Arch GRUB configuration has no problem booting other distributions, Ubuntu cannot boot an Arch-based distribution. Of course, the second distribution’s GRUB menu will let you also boot the first one. We could solve the booting problem later, but I prefer to keep things easy and install them in the above order.

In the screenshots of the running virtual machine, the USB SanDisk is /dev/sda.

I will boot a virtual machine where I set the ISO of the current distribution as a LIVE CD. I’m going to use a different virtual machine for each distribution. Maybe that’s not strictly required, but since the two OSes are different (the first one is an Ubuntu OS, while the second one is an Arch Linux), I prefer to keep the two virtual machines separate, just in case.

Create the first virtual machine and install Ubuntu on the USB drive

I’m assuming you’re already familiar with VirtualBox, so I’ll post the main screenshots of the procedure.

Let’s create a virtual machine.

We don’t need a hard disk in the virtual machine since we’ll use it only for installing Linux on a USB drive, so we’ll ignore the warning.


Now it’s time to configure a few settings.

The important setting is “Enable EFI” to make our virtual machine aware of UEFI, and the booted Live OS will also be aware of it. As we will see later, the booted Live OS will correctly install GRUB in a UEFI partition.

We also specify to insert the ISO of the distribution (Ubuntu 21.10) so that when the virtual machine starts, it will boot the Live ISO.

Let’s start the virtual machine, and we will see the boot menu of the Live ISO.

We choose to Try Ubuntu, then we plug the external SanDisk into the computer, and we make the virtual machine aware of it by using the context menu of the USB connection icon and selecting the item corresponding to the USB hard disk (in your case, it will be different).

After that, the Ubuntu Live OS should notify about the connected disk. We can start the installation, and when it comes to the disk selection and partition, I chose to erase the entire disk and install Ubuntu:

Of course, you can choose to partition the hard disk manually, but then you’ll have to remember to create a GPT partition table, and you’ll also have to create the FAT32 partition for UEFI manually. By using “Erase disk and install Ubuntu,” I’ll let the installer do all this work.

You can see the summary before actually performing the partition creation. Note that we are doing such operations on the external USB drive, which, as I said above, corresponds to /dev/sda.

Now, we have to wait for the installation to finish. In the end, instead of restarting the virtual machine, we shut it down.

Let’s restart the computer with the USB drive connected. Depending on the computer setup, you’ll have to press some function key (e.g., F2 or, in my Dell XPS 13, F12, to choose to boot from a different device). Here’s the menu in my Dell XPS 13, where we can see that the external USB (SanDisk) is detected as a UEFI bootable device. It’s also detected as a Legacy boot device, but we’re interested in the UEFI one:

We can then verify that we can boot the Ubuntu distribution installed in the USB drive.

By the way, I also verified that, without the USB drive connected, I can always boot my computer: indeed, the existing UEFI Grub configuration is intact (remember, I have Windows, Ubuntu, Kubuntu, and Manjaro GNOME; the grub menu with higher priority is the one of Manjaro):

Create the second virtual machine and install EndeavourOS on the USB drive

Let’s create the second virtual machine to install on the same USB drive EndeavourOS, along with the Ubuntu we have just installed.

To speed things up, instead of creating a brand-new machine, we clone the previous one and change a few settings (the name, the Linux version, which is now Arch, and the Live ISO):

Let’s start the virtual machine and land into the EndeavourOS Live system

As before, we have to connect the USB drive to the computer and let the virtual machine detect that (see the procedure already shown in the first installation section).

We start the installer and choose the “Online” version so that we can choose what to install next (including several Desktop Environments). The installer is Calamares (if you used Manjaro before, you already know this installer).

When it comes to the partitioning part, we make sure we select the SanDisk external drive (as usual, /dev/sda). Note that the installer detects the existing Ubuntu installation. This time, we choose to install EndeavourOS alongside:

And we use the slider to specify how much space the new installation should take:

Let’s select a few packages to install (a cool feature of EndeavourOS)

And this is the summary before starting the installation:

Once the installation has finished, we shut down the virtual machine and reboot the computer with the USB drive inserted. This time we see the EndeavourOS grub configuration, including the previously installed Ubuntu. Remember, these are the installations in the USB drive (as usual, note the /dev/sda representing the USB drive):

And now we have a USB drive with two Linux distributions installed that we can use to boot our computers! However, some drivers for some specific computer configurations might not be installed in the Linux installation of the external USB. Also, other configurations like screen resolutions and scaling might depend on the computer you’re booting and might have to be adjusted each time you test the external USB drive in a different computer.

I hope you enjoyed the tutorial!

Happy installations and Happy New Year! 🙂

Using the Unison File Synchronizer on macOS

For ages, I’ve been using the excellent Unison file synchronizer to synchronize my directories across several Linux machines, using the SSH protocol. I love it! 🙂

Unison gives you complete control over the synchronization, and, most of all, it’s a two-way synchronizer.

Quoting from its home page:

Unison is a file-synchronization tool for OSX, Unix, and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.

On Linux, I never experienced problems with Unison, especially from the installation point of view: it’s available on most distributions’ package managers. If that’s not the case, you can download a binary package from https://github.com/bcpierce00/unison/releases.

However, I had never used Unison on a macOS computer, so today, I decided to try it.

Please keep in mind that you must use the same version of Unison on all the computers you want to synchronize (as far as I understand, at least the major.minor version numbers must match on all computers, and this also includes the version of OCaml, on which Unison relies).

For macOS, you go to https://github.com/bcpierce00/unison/releases and download the .app.tar.gz file according to the Unison (and OCaml) version you need. The other macOS .tar.gz archives, without the .app, contain the command-line binary and a GTK UI binary; the latter, however, requires the GTK libraries to be already installed on your system, and, to be honest, I have no idea how to do that in a compatible way. On the contrary, the .app.tar.gz contains the macOS application, which, as far as I understand, is self-contained.

By the way, there's also a brew package for Unison, but that provides only the command-line application, so you won't get the UI. The UI is quite helpful, especially when you want complete control over the elements to be synchronized, with a last chance to select or unselect files before the synchronization starts, and when you have conflicts to solve.

Then, you extract the archive, and you need to run this command (assuming you have extracted it in the Downloads folder):
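The command removes Gatekeeper's quarantine attribute (the path assumes the app was extracted into ~/Downloads; the sketch is wrapped in a guard so that it only runs on macOS):

```shell
# Remove the quarantine attribute so macOS will let the app run
if [ "$(uname)" = Darwin ]; then
    xattr -r -d com.apple.quarantine ~/Downloads/Unison.app
else
    echo "this step is needed on macOS only"
fi
```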

otherwise, macOS will complain (with an unhelpful error message about a damaged app) since it does not recognize the archive provider.

Move the Unison.app into your Applications, and you’re good to go, assuming you already know how to use Unison.

The first time you run the app, it will ask you to install also the command-line version of Unison, which is also helpful:

And here’s a screenshot showing the files that are going to be synchronized in an example of mine (from the direction of the arrows, you can see that this is a two-way synchronization):

I find the Linux UI of Unison much simpler to understand and deal with, but maybe that’s because I’ve been using it for ages, and I still do.

Happy synchronization! 🙂

Playing with KDE Plasma Themes

I want to share some of my experiences with KDE Plasma Themes in this post.

These themes are pretty powerful, but, as it often happens with KDE and its configuration capabilities, it might not be immediately clear how to benefit from all its power and all its themes’ power.

I'm assuming that you have already enabled the KWin Blur effect (in "Desktop Effects"), which is usually the case by default. Please remember that desktop effects like "blur," applied to menus, windows, etc., will use more CPU. This might increase battery usage (but, at least from my findings, not by much).

First, installing a theme using "Get New Global Themes…" is not ideal. In my experiments, the installation often makes System Settings crash, and the theme's artifacts might be out of date with respect to the current Plasma version. Moreover, the installation usually does not install other required artifacts, like icons and, most of all, the Kvantum theme corresponding to the Plasma theme. In particular, the themes I use in this post all come with a corresponding Kvantum theme, and using that additional theme configuration is crucial to enjoying the Plasma theme thoroughly.

Thus, I’ll always install themes and icons from sources in this post.

I mentioned Kvantum, which you have to install first. In recent Ubuntu distributions

In other distributions, the package(s) names might be different.

Quoting from Kvantum site:

Kvantum […] is an SVG-based theme engine for Qt, tuned to KDE and LXQt, with an emphasis on elegance, usability and practicality. Kvantum has a default dark theme, which is inspired by the default theme of Enlightenment. Creation of realistic themes like that for KDE was my first reason to make Kvantum but it goes far beyond its default theme: you could make themes with very different looks and feels for it, whether they be photorealistic or cartoonish, 3D or flat, embellished or minimalistic, or something in between, and Kvantum will let you control almost every aspect of Qt widgets. Kvantum also comes with many other themes that are installed as root and can be selected and activated by using Kvantum Manager.

As described in https://github.com/tsujan/Kvantum/blob/master/Kvantum/INSTALL.md,

The contents of theme folders (if valid) can also be installed manually in the user’s home. The possible installation paths are ~/.config/Kvantum/$THEME_NAME/, ~/.themes/$THEME_NAME/Kvantum/ and ~/.local/share/themes/$THEME_NAME/Kvantum/, each one of which takes priority over the next one, i.e. if a theme is installed in more than one path, only the instance with the highest priority will be used by Kvantum.

By contrast, the KDE theme artifacts are searched for in ~/.local/share. Since some of the themes we will install from source do not provide an installation script, we will have to copy artifacts manually. In the meantime, you might want to create the Kvantum config directory (though the installation commands we will see in this post will take care of that anyway):
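Creating the directory is a one-liner:

```shell
# create the per-user Kvantum configuration directory (no-op if it exists)
mkdir -p ~/.config/Kvantum
```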

After Kvantum is installed, going to “Appearance” -> “Application Style,” you’ll see the kvantum style that you can select as an application style (we won’t do that right now). Once that’s set, the application style will be configured through the Kvantum Manager, which we’ll see in a minute.

So let’s start installing and playing with a few themes. As I anticipated at the beginning, we’ll install the themes from source. You’ll need git for that; if it’s not already installed, install it now.

Nordic KDE

https://github.com/EliverLara/Nordic

This theme does not come with an installation script, so I’ll show all the commands to clone its source repository and manually copy its contents to the correct directories (see the note above concerning directories for Plasma and Kvantum themes):
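A sketch of the commands follows. The target directories are the standard per-user KDE locations; the kde/ subfolder names reflect the repository layout at the time of writing and are assumptions on my part, so check the cloned repository and adjust accordingly:

```shell
# clone the Nordic theme repository
git clone https://github.com/EliverLara/Nordic.git
cd Nordic

# standard per-user KDE locations (created if missing)
mkdir -p ~/.local/share/plasma ~/.local/share/aurorae/themes \
         ~/.local/share/color-schemes ~/.config/Kvantum

# copy the KDE artifacts (the kde/* subfolder names may differ across versions)
cp -r kde/plasma/* ~/.local/share/plasma/             # Plasma/global theme
cp -r kde/aurorae/* ~/.local/share/aurorae/themes/    # window decorations
cp -r kde/colorschemes/* ~/.local/share/color-schemes/ # color schemes
cp -r kde/kvantum/* ~/.config/Kvantum/                # Kvantum theme
```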

Now, we go to “Appearance” -> “Global Theme,” and we find two new entries for the Nordic theme we’ve just installed:

Select one of the Nordic global themes (I chose “Nordic”) and press “Apply.”

Here’s the result (this is not yet the final intended look of the theme):

We can see that the menus are nicely blurred (of course, if you like the blur effect 🙂).

If you go to “Application Style,” you’ll see that “kvantum” is selected (that happened automatically when selecting the “Nordic” global theme):

However, we still need to apply the Nordic Kvantum theme.

Launch Kvantum Manager, select one of the Nordic themes (Kvantum finds the Nordic Kvantum theme because we installed them in the correct position in the home folder), and press “Use this theme”:

and now everything looks consistent with the Nordic theme (the menu is still blurred). Keep in mind that applications have to be restarted to see the new theme applied to them:

Let’s make sure that applications like Dolphin are blurred themselves: go to the tab “Configure Active Theme” -> “Hacks” and make sure “Transparent Dolphin View” is selected. IMPORTANT: if you use fractional scaling in Plasma (e.g., I use 150% or 175%), you must ensure that “Disable translucency with non-integer scaling” is NOT selected.

Scroll down and press “Save”; remember to restart the applications. Now enjoy the nice translucent blurred effect in many applications (including the Kvantum manager itself); I changed the wallpaper to something lighter to appreciate the transparency better:

Of course, you can change a few of the Kvantum Nordic theme’s configuration parameters, including the opacity and other things.

You might also have to log out and log in to the Plasma session to see the theming applied to everything.

This theme also installs a Konsole color scheme, so you can create a new Konsole profile using such a color scheme: here’s the excellent result (this color scheme comes with blurred background by default):

In this example, I’m still using the standard Breeze icon theme, but of course, you might want to select a different icon theme.

Layan

https://github.com/vinceliuice/Layan-kde

Layan is one of my favorite themes (and one of the most appreciated in general). It’s based on the Tela icon theme (also very beautiful), https://github.com/vinceliuice/Tela-icon-theme, so we’ll have to install the icon theme first. Both come with an installation (and uninstallation) script, so everything will be much easier! We’ll have to clone their repositories and then run the installation scripts.

These are the command lines to run to set them both up (note the -a in the Tela installation command: this will install all the color variants; if you only want to install the default variant or just a subset, please have a look at the project site):
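A sketch of those command lines (repository and script names as found on the project pages; the -a flag installs all the Tela color variants, as noted above):

```shell
# Tela icon theme: -a installs all the color variants
git clone https://github.com/vinceliuice/Tela-icon-theme.git
cd Tela-icon-theme && ./install.sh -a && cd ..

# Layan KDE theme (global theme, window decoration, Kvantum theme, etc.)
git clone https://github.com/vinceliuice/Layan-kde.git
cd Layan-kde && ./install.sh
```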

Then, the procedure to apply the Global Theme and the corresponding Kvantum theme is the same as before. Once you have selected the Global Theme and the Kvantum theme for Layan, you should get something like this:

Look at all the beautiful transparency and blur effects on Dolphin, on the title bars, and in some parts of Kate and the System Settings, not to mention the blur on menus.

WhiteSur

https://github.com/vinceliuice/WhiteSur-kde

This theme is for macOS look-and-feel fans. I’m not one of those, but let’s try it as well 🙂

For this theme as well, we’ll install the recommended icons (we can rely on their installation scripts also in this case):
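In a sketch (repository names as found on the author’s GitHub; both scripts accept options for variants, so see each README):

```shell
# recommended WhiteSur icons
git clone https://github.com/vinceliuice/WhiteSur-icon-theme.git
cd WhiteSur-icon-theme && ./install.sh && cd ..

# the WhiteSur KDE theme itself
git clone https://github.com/vinceliuice/WhiteSur-kde.git
cd WhiteSur-kde && ./install.sh
```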

Once the corresponding Global Theme is selected, if you are using fractional scaling like me (as I also said before), you’ll get a nasty surprise: huge borders, as shown in the screenshot (independently of the selected Kvantum theme):

However, this theme comes with a few “Window Decorations” variants suitable for fractional scaling (ignore the previews shown in the selection, which do not look good: that does not matter for the final result). There’s no variant for my current 175% scaling; however, selecting the Window Decoration for 1.5, the borders are better, though still a bit too thick:

If you have a Window scaling factor for which there’s a specific Window Decoration of this theme, then everything looks fine. For example, with 150% scaling and by selecting the corresponding Window Decoration, the window borders look fine:

The rest of the screenshots are based on 175% scaling and the x1.5 variant. As I said, it does not look perfect, but that’s acceptable 😉

Note that if we apply this Global Theme, the installed WhiteSur icons are used as well.

After applying the Kvantum theme as before, changing the wallpaper to the one provided by this theme, and setting the Task Switcher to Large Icons, here’s the result:

Please keep in mind that if you end up with thick borders, the resizing point is not exactly on the edge, but slightly inside, as shown in this screenshot:

Edna

https://gitlab.com/jomada/edna.git

This theme does not come with an installation script either. We’ll have to copy all the artifacts manually to the correct folders (in the following commands, directories are created as well if not already there):
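A sketch of the manual installation follows. The source folder names inside the repository are assumptions on my part (only the aurorae “Edna” folder is confirmed by the Ednarc path mentioned below), so adapt them to what you find after cloning:

```shell
git clone https://gitlab.com/jomada/edna.git
cd edna

# standard per-user KDE locations (created if missing)
mkdir -p ~/.local/share/plasma ~/.local/share/aurorae/themes \
         ~/.local/share/color-schemes ~/.config/Kvantum

# adjust the source folder names to the actual repository layout
cp -r aurorae/Edna ~/.local/share/aurorae/themes/     # window decoration (contains Ednarc)
cp -r kvantum/* ~/.config/Kvantum/                    # Kvantum theme
cp -r plasma/* ~/.local/share/plasma/                 # Plasma/global theme
cp -r color-schemes/* ~/.local/share/color-schemes/   # color schemes
```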

This is another theme with the huge border problem when using fractional scaling:

Unfortunately, this one comes with a single version and no variant for fractional scaling. Thus, when using fractional scaling, we have to manually patch the theme: open the file ~/.local/share/aurorae/themes/Edna/Ednarc and change the lines

as follows (these values fit 150% scaling)

For 175% scaling, use the same values, except for this one

The windows already open will still show the huge border, but new windows will show smaller borders. Feel free to play with such values until you get a border size you like. In the following screenshot, I’m using this theme together with the Tela icons (the green variant) that we installed before, and I also created a new Konsole profile using the Konsole Edna color scheme (which comes with a transparent background):

This theme also provides a complex Latte dock layout, with several docks. We won’t cover this feature in this post, but you might want to experiment with it.

Conclusions

I hope you enjoyed this tutorial and that you’ll start playing with KDE themes as well 🙂

How to install Linux on a USB with UEFI support

I have already blogged about installing Linux on an external USB stick or drive (better if it’s an SSD) to make such an installation portable to any computer. In that old blog post, I was using VirtualBox to do the actual installation. I was relying on VirtualBox because, when I had tried to install Linux directly on an external USB drive after booting with another USB with a live image, I ended up breaking my computer’s grub configuration: if I tried to boot my computer without the newly created USB installation, I couldn’t select any OS to boot. At that time, I did not investigate further, because the VirtualBox solution was working like a charm for me.

However, the USB with Linux installed through VirtualBox could be used only when booting from a computer with “Legacy boot” enabled, that is, it could not be booted if UEFI was the only choice on that computer. Even that wasn’t a problem for me: it was enough to enable Legacy Boot in the computer’s BIOS. Unfortunately, when I tried to boot such a Linux USB on my new LG GRAM 16, I realized that this LG GRAM provides no way to enable Legacy boot! Then, I found this interesting post from “It’s FOSS” that explains both the problem with UEFI I had previously experienced (that is, the fact that I couldn’t boot my computer at all because the GRUB configuration was broken) and a way to circumvent it. I suggest you go and read that post!

In this post, I’d like to summarize my experience applying the suggested workaround and also report that you might still get into trouble in some circumstances, but fixing things will be easy.

As suggested in https://itsfoss.com/intsall-ubuntu-on-usb/, before you start experimenting with the procedure of this tutorial, read it entirely.

The scenario

First of all, let’s summarize what I want to do. I want to install Linux on a portable external USB SSD. I don’t want a live distribution: a live distribution only allows for a small testing experience; it’s not easily maintainable and upgradable; and it’s harder to keep your data in there. On the contrary, installing Linux on a USB drive will give you the full experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer; that’s surely the case for external SSDs, which are quite cheap nowadays).

In the previous post, I described how to create such an installation from VirtualBox. As I said, such a USB drive can only be booted in Legacy mode (I still have to investigate whether you can get a UEFI bootable USB drive by installing it through VirtualBox; there’s also a more recent post where I achieve the same goal, a UEFI bootable USB drive, by using VirtualBox).

Now I want to install Linux on a USB drive by performing a real installation: I’m going to boot my computer with a USB stick with a Linux live distribution, then I’m going to attach a USB external SSD drive, where I’m going to perform the actual installation. Thus, I’m NOT going to install Linux on the computer itself, but on the USB external drive.

I’m going to perform this experiment:

  • I’m going to use a Dell XPS 13 where I already have (in multi-boot, UEFI), Windows, Ubuntu, Kubuntu, and Manjaro GNOME
  • I’m going to install Manjaro KDE (I have already created a LIVE USB stick) into an external USB SanDisk SSD

The USB stick with the Live distribution is a SanDisk as well (just to let you know, in case you see SanDisk in the screenshots; I’ll try to make it clear when I’m talking about the Live USB stick and the USB SSD).

I know that I’ve said that I cannot boot with Legacy boot from the LG GRAM, while I can from the Dell XPS, but to make the experiment more interesting I decided to install Linux on the external USB drive using the Dell so that I can then test it both from the Dell and from the new LG GRAM.

The problem

That’s already well explained in the blog post https://itsfoss.com/intsall-ubuntu-on-usb/. I’ll briefly summarize it here: a system can only have one active ESP partition at a time. Even if you choose the USB as the destination for the bootloader while installing Linux, the EFI file for the new distribution (remember, installed on an external drive) will be put in the existing ESP partition (belonging to the computer you’re using just to perform the installation on the external drive). Thus, the computer you used for installing Linux on the external drive will not boot if you don’t have the Linux external USB drive plugged in.

I might add that the fact that the Linux installer lets you choose another device for the boot loader while silently using an existing ESP partition might be seen as a bug. Indeed, I read about that in many places. However, it looks like all the Linux installers share this behavior, so we’ll have to live with it and use a workaround.

The solution

The solution (workaround) for the problem above, as described in https://itsfoss.com/intsall-ubuntu-on-usb/, is simple and clever: you fool the installer by removing the ESP flag from the ESP partition (of the current computer’s SSD) before installing Linux on the external USB drive. Of course, it is crucial to put the ESP flag back after the installation, before rebooting (as instructed at the end of the installation procedure). The removal and re-addition of the ESP flag can be done, after booting into the live system, with a partition manager program, which is usually part of the live installation media.

As I anticipated, you might still get into little trouble. I’ll talk about that at the end of the post. However, fear not, the trouble is not as bad as breaking the whole booting procedure of your computer 😉

My experience

So I booted with the Live USB stick with Manjaro KDE. Remember, the Live USB must have been created appropriately with UEFI support, or everything from now on will not work at all. Remember that I’m booting the Live USB from the Dell XPS, where the Legacy boot is enabled. So I have to make sure to boot the Live USB with UEFI, NOT with Legacy:

Being KDE, the partition manager available in the live system is KDE’s own. I prefer GParted, so it’s just a matter of installing it in the live system using the package manager. Since I’m using Manjaro in this example, I just run
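something like the following (on the Manjaro live system, sudo is available out of the box):

```shell
# install GParted in the live session (not persisted across reboots)
sudo pacman -S gparted
```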

For Ubuntu, it will be a command based on apt (if it’s not already there) and for Fedora, you’ll have to use dnf (again, unless it’s already installed).

Then, I launch GParted; in my system, you can see the complex configuration of the internal SSD of my computer (/dev/nvme0n1). You can see a few reserved partitions for recovering the Windows installation (the one that came with this computer), 3 ext4 partitions for the 3 Linux OSes mentioned above, 1 ext4 partition that is mounted in all 3 installed Linuxes, the Windows installation, and, the one we’re interested in, the first one, with label ESP. You can see its flags, boot and esp. We have to remove those flags before starting the installation. Right-click on that partition, choose “Manage flags,” and unselect one of the two flags, boot or esp: the other one will be automatically unselected and a new flag, msftdata, will be selected, but that’s not important.

Let’s close GParted, and let’s run the Manjaro installer. This part is not documented here because I’m assuming you’re already familiar with the installation procedure of the Linux distribution you want to install on the external USB drive. The important part is when you have to partition the target drive. Of course, it is crucial to select the right one (the external USB drive where you want to install Linux), NOT the SSD of your computer, or you’ll be in real trouble, as you can imagine 😉

In my example, I selected /dev/sdb, because /dev/sda is the Live USB stick (as seen above, the internal SSD is /dev/nvme0n1). Then it’s up to you to partition the target drive appropriately. In this example, I decided to let the Manjaro installer erase the entire disk, specifying to create a SWAP partition with hibernate. Depending on the installer, you might choose something else. I chose this strategy because, this way, the installer will also automatically create the ESP partition for the boot manager on the target drive, with the right flags. If you want to partition the drive manually, again, depending on the installer, you might have to create the ESP partition manually (you can see an example of such manual partitioning in the mentioned article https://itsfoss.com/intsall-ubuntu-on-usb/). Remember that you still have a chance to review such changes before the actual modifications are made to the file system.

When the installation finishes, you see the message to reboot into the newly installed system… DON’T DO THAT YET. Remember: you have to reset the esp and boot flags of the ESP partition of the internal drive of the computer: simply use GParted again and follow a procedure similar to the one performed at the beginning.

Since you’re still in GParted, you might also want to verify that the external drive where you’ve just installed Linux looks correct; for example, in my case:

Now, it’s finally time to reboot and see what happens…

Small Trouble

First of all, I wanted to make sure that I could still boot the operating systems on my Dell XPS computer, so I made sure I booted with all the USB drives unplugged. Everything seemed to work but… wait… the main UEFI loader on my computer was the Manjaro GNOME one, which was automatically configured to boot the Ubuntu OSes as well, simply relying on os-prober, which is usually part of most Linux installations (apart from Pop!_OS, from what I know). However, there was no trace of the old Manjaro UEFI loader: the Ubuntu UEFI loader showed up instead. You know that you can have several UEFI loaders on the same machine, and you can also reorder them from the BIOS. Even getting into the BIOS, the Manjaro UEFI entry was gone! The problem, in this case, is that Ubuntu doesn’t seem to be able to boot a Manjaro distribution (I still don’t know why). The Manjaro installation was still there, but I could not boot it!

But wait… I still have the brand new installation on the external USB drive! I booted with that, and it shows both the new Manjaro KDE (the first one, of course, in the menu) and the entry for booting the Manjaro GNOME of my computer. Indeed, os-prober also kicked in during the installation on the external hard drive: it detected the OSes installed on my computer as well (that’s expected). You can see that in the photo (note the reference to the /dev/nvme0n1 partition):

Great! I could boot into the computer’s Manjaro GNOME, using the boot loader of the external drive. Once there, I disconnected the external drive and reinstalled the GRUB UEFI of Manjaro GNOME into my computer. It’s just a matter of running sudo grub-install (no need to specify anything else: the existing installed OS already knows where to install GRUB) and then sudo update-grub:
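The two commands, run from the booted Manjaro GNOME on the internal drive, with the external drive disconnected:

```shell
# reinstall this OS's GRUB UEFI loader into the internal ESP partition
sudo grub-install
# regenerate the GRUB menu (os-prober will re-detect the other OSes)
sudo update-grub
```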

Rebooted and everything went back to normal on my computer!

All’s well that ends well! 😉

Why did that happen anyway? To be honest, I’m not completely sure… What I noticed before, when I installed Ubuntu and then Kubuntu on this computer, was that, since the GRUB configurations of both systems use “Ubuntu” as the label, the Kubuntu installation, which was done after the Ubuntu installation, replaced the UEFI entry of the former; that had never been a problem because I can boot Ubuntu from Kubuntu, and vice versa. Maybe that happened also in my experiment, since I had a “Manjaro” (GNOME) UEFI entry on my computer and I installed another “Manjaro” (KDE) distribution on the external hard drive: both use the same “Manjaro” label. That shouldn’t have happened, because the ESP partition should not have been detected by the installer, but maybe that was a wrong assumption (after all, os-prober can still detect existing OS installations).

This situation is NOT described in https://itsfoss.com/intsall-ubuntu-on-usb/; indeed, in that article, the experiment was slightly different: the author installs an “Ubuntu” distribution on the external hard drive, while the computer already had a “Debian” distribution, so the labels were different.

Anyway, even in case of problems like the one I experienced, it was pretty easy to fix things!

Hope you find this tutorial useful for experimenting with installing Linux on portable USB hard drives, or even USB sticks, provided they are fast ones 😉

Concluding Remarks

Please keep in mind that the created Linux installation on the USB external drive is effectively portable: you can use it on several computers and laptops. However, some drivers for some specific computer configurations might not be installed in the Linux installation on the external USB drive. Also, other configurations, like screen resolution and scaling, might really depend on the computer you’re booting and might have to be adjusted each time you use the external USB drive on a different computer.

Problems with Linux 5.13 in LG GRAM 16

I recently bought an LG GRAM 16 and I really enjoy it (I’ll blog about that in the near future, hopefully). I had no problems installing Linux, neither with Manjaro GNOME (Pahvo) nor with Kubuntu.

However, in Manjaro GNOME I soon started to notice some lag, especially with the touchpad, and some repainting issues. I had no problems with Kubuntu (it was 21.04). The main difference was that Manjaro was using Linux kernel 5.13, while Kubuntu 21.04 was using Linux kernel 5.11. As soon as I updated to Kubuntu 21.10, which comes with Linux kernel 5.13, I started to have the same problems in Kubuntu as well.

Long story short: switching to Linux kernel 5.14 on both systems solved all the problems 🙂

In Manjaro you can use its kernel management system. Alternatively, from the command line, you can run
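something like the following, using Manjaro’s kernel tool (the linux514 package name follows Manjaro’s naming scheme for kernel 5.14):

```shell
# install Linux kernel 5.14 with Manjaro's kernel management tool
sudo mhwd-kernel -i linux514
```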

On (K)ubuntu things are slightly more complicated because the current version 21.10 does not provide a package for kernel 5.14.

However, you can manually download the DEB files of the kernel (and kernel headers) from the mainline repository https://kernel.ubuntu.com/~kernel-ppa/mainline/ and then run dpkg -i on all such downloaded files. Personally, I prefer to use a nice GUI for such mainline kernels, mainline, https://github.com/bkw777/mainline. It’s just a matter of adding the corresponding PPA repository and installing it:
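Something like the following (the PPA name is the one given in the project’s README; double-check it there):

```shell
# add the PPA for the mainline GUI and install it
sudo add-apt-repository ppa:cappelikan/ppa
sudo apt update
sudo apt install mainline
```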

The GUI application is called “Ubuntu Mainline Kernel Installer”. You select the kernel you want (in this case I’m choosing the latest version of the stable 5.14 version) and choose Install. Reboot and you’re good to go 🙂

Accessing Google Online Account from GNOME and KDE

In this post, I’d like to share my experience setting up a Google online account in GNOME and KDE. Actually, I have more than one Google account, and the procedures I show can be repeated for all of them.

First, a disclaimer: I’ve always loved KDE, and I’ve used it since version 3. Lately, though, I have started to appreciate GNOME. I’ve been using GNOME most of the time, on most of my computers, for a few years. But lately, I started to experiment with KDE again and installed it on some of my computers.

KDE is well-known for its customizability, while GNOME is known for the opposite. However, I must admit that in GNOME most settings are trivial to deal with, while in KDE you pay a lot for its customizability.

I think setting a Google Account is a good example of what I’ve just said. Of course, I might be wrong concerning the procedure I’ll show in this post, but, from what I’ve read around, especially for KDE, there doesn’t seem to be an easier way. Of course, if you know an easier procedure I’d like to know in the comments 🙂

In the following, I’m showing how to set a Google Account so that its features, mainly the calendar and access to Google Drive, get integrated into GNOME and KDE. I tested these procedures both in Ubuntu/Kubuntu and in Manjaro GNOME/KDE, but I guess that’s the same in other distributions.

TL;DR: in Gnome it’s trivial, in KDE you need some effort.

GNOME

Just open “Online Accounts” and choose Google. Use the web form to log in and grant the permissions so that GNOME can access your Google data. As I said, I’m focusing on the calendar and drive. Repeat the same procedure for all the Google accounts you want to connect.

Done! In Files (Nautilus), you can see, on the left, the links to your Google Drive (or drives, if you configured several accounts). In the GNOME Calendar, you can choose the Google calendars you want to show. The events will be automatically shown in the top GNOME Shell clock and calendar widget. Notifications will be automatically shown (by Evolution). For GNOME Contacts, things are similar. By the way, GNOME Tasks and other GNOME applications will also automatically be able to access your Google account data.

To summarize, one single configuration and everything else is automatically integrated.

KDE

Now be prepared for an overwhelming number of steps, most of which, I’m afraid, I find rather complex and counter-intuitive.

In particular, you won’t get access to your Google account data in a single step. In fact, I’ll first show how to mount a Google drive and then how to set up the calendar.

Mount your Google drive

Go to

System Settings -> Online Accounts -> Add New Account -> Google

As usual, you get redirected to the “Web authentication for google” page; log in and give the consent allowing “KDE Online Accounts” to access some of your Google information, including your drive, YouTube videos, contacts, and calendar. (This procedure can be repeated for all your Google accounts if you have many.) Note that, given all the permissions you grant, you’d expect everything to be automatically configured in KDE, but that’s not the case…

Back in System Settings, you get a “Google account,” not with your Google username or email, which is what I’d expect, but with a simple “google” and a progressive number (of course, you can rename it).

OK, now I can access my Google drive files from Dolphin and have my local calendar automatically connected to my Google calendar? Just like in Gnome? I’m afraid not… we’re still far away from that.

If you go to Dolphin’s Network place, you see no Google Drive mounted, nor a mechanism to do that… First, you have to install the package kio-gdrive (at least in Kubuntu and Manjaro KDE, that’s not installed by default…). After that, back in Dolphin’s Network place, you can expand the “Google Drive” folder, and you get asked for the Google account you had previously configured. Select it, and choose “Use This Account For” -> “Drive” in Accounts Details. Now you can access your Google Drive from Dolphin.
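Installing kio-gdrive is a one-liner; the package name should be the same in both distributions:

```shell
# Kubuntu
sudo apt install kio-gdrive
# Manjaro KDE
sudo pacman -S kio-gdrive
```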

Add your Google calendar

What about my Google Calendar? First, you have to install the package korganizer (or the full suite kontact); again, at least in Kubuntu and Manjaro KDE, that’s not installed by default… Great, once installed I can simply select my previously configured Google account? Ehm… no… you “just” have to go to

Settings -> Configure KOrganizer -> General -> Calendars -> Add… -> Google Groupware -> a dialog appears, click “Configure…”

Now the browser (not a web dialog as before) is opened to log in to your Google account. Then, give the permissions so that “This will allow Akonadi Resources for Google Services to…” (Again, you have to do the same for all the Google accounts you want to connect.) In the browser, you then see: “You can close this tab and return to the application now.” Go back to the dialog in KOrganizer, and your calendars and tasks should already be selected (unselect anything you don’t want). OK, now, in the previous dialog, you should see KOrganizer synchronizing with your Google calendar and tasks.

Now I should get notifications for Google calendar events, right? Ehm… not necessarily: you need to make sure that in the “Status and Notifications” system tray, by right-clicking on “KOrganizer Reminders,” “Enable Reminders” and “Start Reminder Daemon at Login” are selected (I see different default behaviors in that respect in different distributions). If not, enable them and log out and back in.

OK! But what about my Google calendar events in the standard “Digital Clock” widget in the corner of the system tray? Are they automatically shown just like in GNOME? No! There’s some more work to do! First, install kdepim-addons (guess what? At least in Kubuntu and Manjaro KDE, that’s not installed by default…). Now, go to “Digital Clock Settings” -> “Calendar” -> check “PIM Events Plugin” (quite counter-intuitive!) -> Apply; now a new “PIM Events Plugin” appears on the left, select that. Fortunately, this one will automatically propose to select all the calendars that have been previously configured in KOrganizer.

You’d have to do something similar for KAddressBook; with Kontact, the steps would probably be fewer, but I’ve always found Kontact chaotic…

Summary

Now, I like KDE’s customization possibilities (while GNOME is pretty rigid about customizations and most things cannot be customized at all), but the above steps are far too many! After a few weeks, I wouldn’t be able to remember them correctly… In KDE, even the number of steps of the above procedures is overwhelming: you have to follow complex, heterogeneous, and counter-intuitive procedures, with long menu chains. Maybe it’s the distribution’s fault? I doubt it. I guess it’s an issue with the overall organization and integration of the KDE desktop environment. In GNOME, the integration is just part of the desktop environment.

Eclipse p2 site references

Say you publish a p2 repository for your Eclipse bundles and features. Typically, your bundles and features will depend on something external (other Eclipse bundles and features). The users of your p2 repository will also have to use the p2 repositories of your software’s dependencies; otherwise, they won’t be able to install your software. If your software only relies on standard Eclipse bundles and features, that is, something that can be found in the standard Eclipse central update site, you should have no problem: your users will typically have the Eclipse central update site already configured in their Eclipse installations. So, unless your software requires a specific version of an Eclipse dependency, you should be fine.

What happens instead if your software relies on external dependencies that are available only in other p2 sites? Or, put it another way, you rely on an Eclipse project that is not part of the simultaneous release or you need a version different from the one provided by a specific Eclipse release.

You should tell your users to use those specific p2 sites as well. This, however, will worsen the user experience, at least from the installation point of view. One would like to use a p2 site and install from it without further configuration.

To overcome this issue, you should make your p2 repository somehow self-contained. I can think of 3 alternative ways to do that:

  • If you build with Tycho (which is probably the case if you don’t do releng stuff manually), you could use <includeAllDependencies> of the tycho-p2-repository plugin to “aggregate all transitive dependencies, making the resulting p2 repository self-contained.” Please keep in mind that your p2 repository itself will become pretty huge (likely a few hundred MB), so this might not be feasible in every situation.
  • You can put the required p2 repositories as children of your composite update site. This might require some more work and will force you to introduce composite update sites just for this. I’ve written about p2 composite update sites many times in this blog in the past, so I will not consider this solution further.
  • You can use p2 site references that are meant just for the task mentioned so far and that have been introduced in the category.xml specification for some time now. The idea is that you put references to the p2 sites of your software dependencies and the corresponding content metadata of the generated p2 repository will contain links to the p2 sites of dependencies. Then, p2 will automatically contact those sites when installing software (at least from Eclipse, from the command line we’ll have to use specific arguments as we’ll see later). Please keep in mind that this mechanism works only if you use recent versions of Eclipse (if I remember correctly this has been added a couple of years ago).
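To give a concrete idea of the third approach, a category.xml with site references looks schematically like this (the feature id, category name, and URL are purely illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="my.example.feature" version="0.0.0">
      <category name="main.category"/>
   </feature>
   <category-def name="main.category" label="Main Features"/>
   <!-- p2 sites of the dependencies; with enabled="true",
        p2 will contact them automatically during installation -->
   <repository-reference location="https://download.eclipse.org/releases/2021-12" enabled="true"/>
</site>
```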

In this blog post, I’ll describe such a mechanism, in particular, how this can be employed during the Tycho build.

The simple project used in this blog post can be found here: https://github.com/LorenzoBettini/tycho-site-references-example. You should be able to easily reuse most of the POM stuff in your own projects.

IMPORTANT: To benefit from this, you’ll have to use at least Tycho 2.4.0. In fact, Tycho started to support site references only a few versions ago, but only in version 2.4.0 was this implemented correctly. (I personally fixed this: https://github.com/eclipse/tycho/issues/141.) If you use a (not much) older version, e.g., 2.3.0, there’s a branch in the above GitHub repository, tycho-2.3.0, where some additional hacks have to be performed to make it work (rewriting metadata contents and re-compressing the XML files, just to mention a few), but I’d suggest you use Tycho 2.4.0.

There’s also another important aspect to consider: if your software switches to a different version of a dependency that is available on a different p2 repository, you have to update such information consistently. In this blog post, we’ll deal with this issue as well, keeping it as automatic (i.e., less error-prone) as possible.

The example project

The example project is very simple:

  • parent project with the parent POM;
  • a plugin project created with the Eclipse wizard with a simple handler (so it depends on org.eclipse.ui and org.eclipse.core.runtime);
  • a feature project including the plugin project. To make the example more interesting, this feature also requires (i.e., does NOT include) the external feature org.eclipse.xtext.xbase. We don’t actually use such an Xtext feature, but it’s useful to recreate a situation where we need a specific p2 site containing that feature;
  • a site project with category.xml that is used to generate during the Tycho build our p2 repository.

To make the example interesting, the dependency on the Xbase feature is specified as follows:
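The original listing is not reproduced in this extract; a sketch of the corresponding feature.xml fragment, assuming the standard feature requirement syntax, would be:

```xml
<!-- feature.xml of the example feature: Xbase is REQUIRED,
     not included, with a minimum version of 2.25.0 -->
<requires>
   <import feature="org.eclipse.xtext.xbase"
           version="2.25.0"
           match="greaterOrEqual"/>
</requires>
```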

So we require version 2.25.0.

The target platform is defined directly in the parent POM as follows (again, to keep things simple):
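The original listing is not reproduced here; a minimal sketch of the two p2 repositories declared in the parent POM (the Xtext URL follows the standard layout of Xtext release sites) could be:

```xml
<!-- parent POM: the two p2 repositories forming the target platform;
     Maven <repositories> entries with p2 layout are picked up by Tycho -->
<repositories>
  <repository>
    <id>2020-12</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/releases/2020-12</url>
  </repository>
  <repository>
    <id>xtext-2.25.0</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0</url>
  </repository>
</repositories>
```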

Note that I explicitly added the Xtext 2.25.0 site repository because, in the 2020-12 Eclipse site, Xtext is available only with the lower version 2.24.0.

This defines the target platform against which we built (and, in a real example, hopefully tested) our bundle and feature.

The category.xml is initially defined as follows:
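A sketch of such an initial category.xml (the feature id and category name are illustrative, since they are not shown in this extract):

```xml
<!-- site/category.xml: just our feature, no site references yet -->
<site>
   <feature id="com.example.feature" version="0.0.0">
      <category name="main.category"/>
   </feature>
   <category-def name="main.category" label="Example Category"/>
</site>
```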

The problem

If you generate the p2 repository with the Maven/Tycho build, you will not be able to install the example feature unless Xtext 2.25.0 and its dependencies can be found (actually, the standard Eclipse dependencies have to be found as well, but, as said above, the Eclipse update site is already part of the Eclipse distributions). You would then need to tell your users to first add the Xtext 2.25.0 update site. In the following, we’ll handle this.

A manual, and thus cumbersome, way to verify that is to try to install the example feature in an Eclipse installation pointing to the p2 repository generated during the build. Of course, we’ll keep this verification mechanism automatic and easy as well. So, before going on, following a Test-Driven approach (which I always love), let’s first reproduce the problem in the Tycho build by adding this configuration to the site project (plug-in versions are configured in the pluginManagement section of the parent POM):
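The original configuration is not reproduced in this extract; a sketch of it, where the property names and the feature id to install are illustrative assumptions, might look like this:

```xml
<!-- site project POM: verify installability by running the p2 director
     through the tycho-eclipserun-plugin -->
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <configuration>
    <repositories>
      <!-- the Eclipse site used to provision the director application -->
      <repository>
        <id>2020-12</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/releases/2020-12</url>
      </repository>
    </repositories>
    <dependencies>
      <!-- standard dependency for running a p2 application -->
      <dependency>
        <artifactId>org.eclipse.equinox.p2.core.feature</artifactId>
        <type>eclipse-feature</type>
      </dependency>
    </dependencies>
    <!-- install our feature from the built repository into a temporary
         directory; -followReferences will be crucial later -->
    <appArgLine>-application org.eclipse.equinox.p2.director
      -repository file:${project.build.directory}/repository
      -installIU ${p2.feature.to.install}
      -destination ${project.build.directory}/p2-director-output
      -followReferences</appArgLine>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```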

The idea is to run the standard Eclipse p2 director application through the tycho-eclipserun-plugin. The dependency configuration is standard for running such an Eclipse application. We try to install our example feature from our p2 repository into a temporary output directory (these values are defined as properties so that you can copy this plugin configuration in your projects and simply adjust the values of the properties). Also, the arguments passed to the p2 director are standard and should be easy to understand. The only non-standard argument is -followReferences that will be crucial later (for this first run it would not be needed).

Running mvn clean verify should now highlight the problem: the p2 director fails because the Xtext requirements cannot be found.

This would mimic the situation your users might experience.

The solution

Let’s fix this: we add to the category.xml the references to the same p2 repositories we used in our target platform. We can do that manually (or by using the Eclipse Category editor, in the tab Repository Properties):

The category.xml is now defined as follows:
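A sketch of the resulting category.xml, with repository-reference elements pointing to the same sites used in the target platform (the feature id is illustrative):

```xml
<site>
   <feature id="com.example.feature" version="0.0.0">
      <category name="main.category"/>
   </feature>
   <category-def name="main.category" label="Example Category"/>
   <!-- references to the p2 sites of our dependencies -->
   <repository-reference
      location="https://download.eclipse.org/releases/2020-12"
      enabled="true"/>
   <repository-reference
      location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0"
      enabled="true"/>
</site>
```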

Now when we create the p2 repository during the Tycho build, the content.xml metadata file will contain the references to the p2 repository (with a syntax slightly different, but that’s not important; it will contain a reference to the metadata repository and to the artifact repository, which usually are the same). Now, our users can simply use our p2 repository without worrying about dependencies! Our p2 repository will be self-contained.

Let’s verify that by running mvn clean verify; this time, everything is fine.

Note that this requires much more time: now the p2 director has to contact all the p2 sites defined as references and has to also download the requirements during the installation. We’ll see how to optimize this part as well.

In the corresponding output directory, you can find the installed plugins; you can’t do much with such installed bundles, but that’s not important. We just want to verify that our users can install our feature simply by using our p2 repository, that’s all!

You might not want to run this verification on every build, but, for instance, only during the build where you deploy the p2 repository to some remote directory (of course, before the actual deployment step). You can easily do that by appropriately configuring your POM(s).

Some optimizations

As we saw above, each time we run a clean build, the verification step has to access remote sites and download all the dependencies. Even though this is a very simple example, the dependencies downloaded during the installation amount to almost 100 MB, every single time you run the verification. (It might be the right moment to stress that the p2 director knows nothing about the Maven/Tycho cache.)

We can employ some caching by using a standard p2 mechanism: the bundle pool! This way, dependencies will have to be downloaded only the very first time; afterward, the cached versions will be used.

We simply introduce another property for the bundle pool directory (I’m using by default a hidden directory in the home folder) and the corresponding argument for the p2 director application:
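A sketch of the change (the property name and the default location are illustrative; -bundlepool and -profile are standard p2 director arguments):

```xml
<!-- property (e.g., in the parent POM): a hidden bundle pool
     directory in the home folder -->
<p2.director.bundlePool>${user.home}/.p2.director.bundlepool</p2.director.bundlePool>

<!-- the director invocation gains -bundlepool (and an explicit profile) -->
<appArgLine>-application org.eclipse.equinox.p2.director
  -repository file:${project.build.directory}/repository
  -installIU ${p2.feature.to.install}
  -destination ${project.build.directory}/p2-director-output
  -bundlepool ${p2.director.bundlePool}
  -profile DefaultProfile
  -followReferences</appArgLine>
```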

Note that now the plug-ins during the verification step will NOT be installed in the specified output directory (which will store only some p2 properties and caches): they will be installed in the bundle pool directory. Again, as said above, you don’t need to interact with such installed plug-ins, you only need to make sure that they can be installed.

In a CI server, you should cache the bundle pool directory as well if you want to benefit from this speed-up. E.g., this example comes with a GitHub Actions workflow that also stores the bundle pool in the cache, besides the .m2 directory.

This will also allow you to easily experiment with different configurations of the site references in your p2 repository. For example, up to now, we have put the same sites used for the target platform. Referring to the whole Eclipse releases p2 site might be too much, since it contains all the features and bundles of all the projects participating in the Eclipse SimRel. In the target platform, this might be OK, since we might want to use some dependencies only for testing. For our p2 repository, we could tweak the references so that they refer only to the minimal sites containing all our features’ requirements.

For this example, we can replace the 2 sites with 4 smaller sites providing all the requirements (actually, the Xtext 2.25.0 one is just the same as before):
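The exact minimal set depends on your feature’s requirements; a plausible sketch for Xbase (platform, EMF, Orbit for third-party libraries, and Xtext itself) could look like this, where all URLs except the Xtext one are illustrative and must be adapted to the actual release in use:

```xml
<!-- the Eclipse platform site for the 2020-12 release -->
<repository-reference
   location="https://download.eclipse.org/eclipse/updates/4.18"
   enabled="true"/>
<!-- EMF release site -->
<repository-reference
   location="https://download.eclipse.org/modeling/emf/emf/builds/release/2.24"
   enabled="true"/>
<!-- the Orbit drop matching your Eclipse release -->
<repository-reference
   location="https://download.eclipse.org/tools/orbit/downloads/drops/R20201130205003/repository"
   enabled="true"/>
<!-- the Xtext site, same as before -->
<repository-reference
   location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0"
   enabled="true"/>
```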

You can verify that removing any of them will lead to installation failures.

The first time this tweaking might require some time, but you now have an easy way to test this!

Keeping things consistent

When you update your target platform, i.e., the versions of your dependencies, you must make sure to update the site references in the category.xml accordingly. It would instead be nice to modify this information in a single place, so that everything else is kept consistent!

We can again use properties in the parent POM:
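For example (the name eclipse-version is used again later in this post; xtext-version is the analogous property for the Xtext site, and its name is an assumption):

```xml
<!-- parent POM: single place where dependency versions are recorded -->
<properties>
  <xtext-version>2.25.0</xtext-version>
  <eclipse-version>2020-12</eclipse-version>
</properties>
```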

We want to rely on such properties also in the category.xml, using the standard Maven mechanism of resource copying with filtering.

We create another category.xml in the templates subdirectory of the site project, using the above properties in the site references (at least in the ones where we want control over a specific version):
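A sketch of templates/category.xml (the feature id is still illustrative), where the site references use the Maven properties to be replaced by filtering:

```xml
<site>
   <feature id="com.example.feature" version="0.0.0">
      <category name="main.category"/>
   </feature>
   <category-def name="main.category" label="Example Category"/>
   <!-- ${eclipse-version} and ${xtext-version} are replaced
        during the build by resource filtering -->
   <repository-reference
      location="https://download.eclipse.org/releases/${eclipse-version}"
      enabled="true"/>
   <repository-reference
      location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/${xtext-version}"
      enabled="true"/>
</site>
```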

and in the site project we configure the Maven resources plugin appropriately:
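A sketch of such a configuration, assuming the template lives in templates/ and the filtered file must land in the project root before the p2 repository is generated:

```xml
<!-- site project POM: copy templates/category.xml to the project root,
     replacing the Maven properties via filtering -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>filter-category</id>
      <!-- any phase BEFORE the p2 repository is created will do -->
      <phase>validate</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${basedir}</outputDirectory>
        <resources>
          <resource>
            <directory>templates</directory>
            <includes>
              <include>category.xml</include>
            </includes>
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```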

Of course, we execute that in a phase that comes BEFORE the phase when the p2 repository is generated. This will overwrite the standard category.xml file (in the root of the site project) by replacing properties with the corresponding values!

By the way, you could use the property eclipse-version also in the configuration of the Tycho Eclipserun plugin seen above, instead of hardcoding 2020-12.

Happy releasing! 🙂