Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and journals.

Using the Unison File Synchronizer on macOS

For ages, I’ve been using the excellent Unison file synchronizer to synchronize my directories across several Linux machines, using the SSH protocol. I love it! 🙂

Unison gives you complete control over the synchronization, and, most of all, it’s a two-way synchronizer.

Quoting from its home page:

Unison is a file-synchronization tool for OSX, Unix, and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.

On Linux, I never experienced problems with Unison, especially from the installation point of view: it’s available on most distributions’ package managers. If that’s not the case, you can download a binary package from https://github.com/bcpierce00/unison/releases.

However, I had never used Unison on a macOS computer, so today, I decided to try it.

Please, keep in mind that you must use the same version of Unison on all the computers you want to synchronize (at least, as far as I understand, the major.minor version numbers must be the same on all computers, and this also includes the version of OCaml, on which Unison relies).

For macOS, you go to https://github.com/bcpierce00/unison/releases, and you download the .app.tar.gz file according to the Unison (and OCaml) version you need. The other macOS .tar.gz archives, without the .app, contain the command-line binary and a GTK UI binary, which, however, requires the GTK libraries to be already installed on your system; to be honest, I have no idea how to do that in a compatible way. On the contrary, the .app.tar.gz contains the macOS application, which, as far as I understand, is self-contained.

By the way, there’s also a brew package for Unison, but that’s only the command-line application, so you won’t get the UI, which is quite helpful, especially when you want complete control over the elements to be synchronized and a last chance to select or unselect files before the synchronization starts. Moreover, the UI is quite helpful when you have conflicts to solve.

Then, you extract the archive, and you need to run this command (assuming you have extracted it in the Downloads folder):
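Something along these lines should do (the command removes the quarantine attribute that macOS sets on downloaded files; adjust the path if you extracted the app elsewhere):

```bash
# remove the quarantine attribute from the extracted app bundle (recursively)
xattr -dr com.apple.quarantine ~/Downloads/Unison.app
```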

otherwise, macOS will complain (with an unhelpful error message about a damaged app) since it does not recognize the archive provider.

Move the Unison.app into your Applications, and you’re good to go, assuming you already know how to use Unison.

The first time you run the app, it will ask you to also install the command-line version of Unison, which is quite handy:

And here’s a screenshot showing the files that are going to be synchronized in an example of mine (from the direction of the arrows, you can see that this is a two-way synchronization):

I find the Linux UI of Unison much simpler to understand and deal with, but maybe that’s because I’ve been using it for ages, and I still do.

Happy synchronization! 🙂

Playing with KDE Plasma Themes

I want to share some of my experiences with KDE Plasma Themes in this post.

These themes are pretty powerful, but, as often happens with KDE and its configuration capabilities, it might not be immediately clear how to take full advantage of all its power and of its themes’ power.

I’m assuming that you already enabled the KWin Blur effect (In “Desktop Effects”), which is usually the case by default. Please remember that desktop effects, like “blur,” applied to menus, windows, etc., will use more CPU. This CPU usage might increase battery usage (but, at least from my findings, it’s not that much).

First, installing a theme using “Get New Global Themes…” is not ideal. In my experiments, the installation often makes System Settings crash, and the theme’s artifacts might be out of date with respect to the current Plasma version. Moreover, the installation usually does not install other required artifacts, like icons and, most of all, the Kvantum theme corresponding to the Plasma theme. In particular, the themes I use in this post all come with a corresponding Kvantum theme, and using that additional theme configuration is crucial to enjoying the Plasma theme fully.

Thus, in this post I’ll always install themes and icons from source.

I mentioned Kvantum, which you have to install first. In recent Ubuntu releases, it is available in the standard repositories.
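Something like the following should install it (the package names are from my recollection and might differ slightly between releases):

```bash
sudo apt install qt5-style-kvantum qt5-style-kvantum-themes
```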

In other distributions, the package names might be different.

Quoting from Kvantum site:

Kvantum […] is an SVG-based theme engine for Qt, tuned to KDE and LXQt, with an emphasis on elegance, usability and practicality. Kvantum has a default dark theme, which is inspired by the default theme of Enlightenment. Creation of realistic themes like that for KDE was my first reason to make Kvantum but it goes far beyond its default theme: you could make themes with very different looks and feels for it, whether they be photorealistic or cartoonish, 3D or flat, embellished or minimalistic, or something in between, and Kvantum will let you control almost every aspect of Qt widgets. Kvantum also comes with many other themes that are installed as root and can be selected and activated by using Kvantum Manager.

As described in https://github.com/tsujan/Kvantum/blob/master/Kvantum/INSTALL.md,

The contents of theme folders (if valid) can also be installed manually in the user’s home. The possible installation paths are ~/.config/Kvantum/$THEME_NAME/, ~/.themes/$THEME_NAME/Kvantum/ and ~/.local/share/themes/$THEME_NAME/Kvantum/, each one of which takes priority over the next one, i.e. if a theme is installed in more than one path, only the instance with the highest priority will be used by Kvantum.

On the contrary, the KDE theme artifacts are searched for in ~/.local/share. Since some of the themes we will install from source do not provide an installation script, we will have to copy the artifacts manually. In the meantime, you might want to create the Kvantum config directory (though the installation commands we will see in this post will take care of that anyway):
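For example:

```bash
mkdir -p ~/.config/Kvantum
```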

After Kvantum is installed, going to “Appearance” -> “Application Style,” you’ll see the kvantum style that you can select as an application style (we won’t do that right now). Once that’s set, the application style will be configured through the Kvantum Manager, which we’ll see in a minute.

So let’s start installing and playing with a few themes. As I anticipated at the beginning, we’ll install the themes from source. You’ll need git to do that; if it’s not already installed, install it now.

Nordic KDE

https://github.com/EliverLara/Nordic

This theme does not come with an installation script, so I’ll show all the commands to clone its source repository and manually copy its contents to the correct directories (see the note above concerning directories for Plasma and Kvantum themes):
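Here’s a sketch of what I mean; the destination directories are the standard KDE and Kvantum user locations, while the source folder names inside the repository are assumptions, so check the repository layout before copying:

```bash
git clone https://github.com/EliverLara/Nordic.git
cd Nordic

# Kvantum theme goes into the user's Kvantum config directory
mkdir -p ~/.config/Kvantum
cp -r kde/kvantum/* ~/.config/Kvantum/

# Plasma artifacts go into ~/.local/share
mkdir -p ~/.local/share/plasma/desktoptheme ~/.local/share/plasma/look-and-feel \
         ~/.local/share/color-schemes ~/.local/share/aurorae/themes ~/.local/share/konsole
cp -r kde/plasma/desktoptheme/* ~/.local/share/plasma/desktoptheme/
cp -r kde/plasma/look-and-feel/* ~/.local/share/plasma/look-and-feel/
cp -r kde/colorschemes/* ~/.local/share/color-schemes/
cp -r kde/aurorae/* ~/.local/share/aurorae/themes/
cp -r kde/konsole/* ~/.local/share/konsole/
```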

Now, we go to “Appearance” -> “Global Theme,” and we find two new entries for the Nordic theme we’ve just installed:

Select one of the Nordic global themes (I chose “Nordic”) and press “Apply.”

Here’s the result (this is not yet the final intended look of the theme):

We can see that the menus are nicely blurred (of course, if you like the blur effect 🙂).

Go to “Application Style,” and you’ll see that “kvantum” is selected (that happened automatically when selecting the “Nordic” global theme):

However, we still need to apply the Nordic Kvantum theme.

Launch Kvantum Manager, select one of the Nordic themes (Kvantum finds the Nordic Kvantum theme because we installed them in the correct position in the home folder), and press “Use this theme”:

and now everything looks consistent with the Nordic theme (the menu is still blurred). Keep in mind that applications have to be restarted to see the new theme applied to them:

Let’s make sure that applications like Dolphin are blurred themselves: go to the tab “Configure Active Theme” -> “Hacks” and make sure “Transparent Dolphin View” is selected. IMPORTANT: if you use fractional scaling in Plasma (e.g., I use 150% or 175%), you must ensure that “Disable translucency with non-integer scaling” is NOT selected.

Scroll down and press “Save”; remember to restart the applications. Now enjoy the nice translucent blurred effect in many applications (including the Kvantum manager itself); I changed the wallpaper to something lighter to appreciate the transparency better:

Of course, you can change a few Kvantum Nordic theme configuration parameters, including the opacity and other things.

You might also have to log out and log in to the Plasma session to see the theming applied to everything.

This theme also installs a Konsole color scheme, so you can create a new Konsole profile using such a color scheme: here’s the excellent result (this color scheme comes with blurred background by default):

In this example, I’m still using the standard Breeze icon theme, but of course, you might want to select a different icon theme.

Layan

https://github.com/vinceliuice/Layan-kde

Layan is one of my favorite themes (and one of the most appreciated in general). It’s based on the Tela icon theme (also very beautiful), https://github.com/vinceliuice/Tela-icon-theme, so we’ll have to install the icon theme first. Both come with an installation (and, if needed, an uninstallation) script, so everything will be much easier! We’ll have to clone their repositories and then run the installation scripts.

These are the command lines to run to set them both up (note the -a in the Tela installation command: this will install all the color variants; if you only want to install the default variant or just a subset, please have a look at the project site):
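Something along these lines (the script names are the ones the projects ship at the time of writing; double-check their READMEs in case they changed):

```bash
# Tela icon theme: -a installs all the color variants
git clone https://github.com/vinceliuice/Tela-icon-theme.git
cd Tela-icon-theme
./install.sh -a
cd ..

# Layan KDE theme (Plasma and Kvantum artifacts)
git clone https://github.com/vinceliuice/Layan-kde.git
cd Layan-kde
./install.sh
cd ..
```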

Then, the procedure to apply the Global Theme and the corresponding Kvantum theme is the same as before. Once you have selected the Global Theme and the Kvantum theme for Layan, you should get something like this:

Look at all the beautiful transparency and blur effects on Dolphin, on the title bars, and in some parts of Kate and the System Settings, not to mention the blur on menus.

WhiteSur

https://github.com/vinceliuice/WhiteSur-kde

This theme is for macOS look and feel fans. I’m not one of those but let’s try that as well 🙂

For this theme as well, we’ll install the recommended icons (we can rely on their installation scripts also in this case):
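For example (here I’m assuming the companion WhiteSur icon theme repository by the same author; again, double-check the script names on the project pages):

```bash
# WhiteSur icon theme
git clone https://github.com/vinceliuice/WhiteSur-icon-theme.git
cd WhiteSur-icon-theme
./install.sh
cd ..

# WhiteSur KDE theme
git clone https://github.com/vinceliuice/WhiteSur-kde.git
cd WhiteSur-kde
./install.sh
cd ..
```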

Once the corresponding Global Theme is selected, if you are using fractional scaling like me (as I also said before), you’ll get a nasty surprise: huge borders, as shown in the screenshot (independently of the selected Kvantum theme):

However, this theme comes with a few versions of the “Window Decorations” suitable for fractional scaling (ignore the previews shown in the selection, which do not look good: that does not matter for the final result). There’s no version for my current 175% scaling; however, by selecting the Window Decoration for 1.5, the borders get better, though they’re still a bit too thick:

If you have a Window scaling factor for which there’s a specific Window Decoration of this theme, then everything looks fine. For example, with 150% scaling and by selecting the corresponding Window Decoration, the window borders look fine:

The rest of the screenshots are based on 175% scaling and the x1.5 variant. As I said, it does not look perfect, but that’s acceptable 😉

Note that if we apply this Global Theme, the installed WhiteSur icons are used as well.

After applying the Kvantum theme as before, changing the wallpaper to the one provided by this theme, and setting the Task Switcher to Large Icons, here’s the result:

Please keep in mind that if you end up with thick borders, the resizing point is not exactly on the edge, but slightly inside, as shown in this screenshot:

Edna

https://gitlab.com/jomada/edna.git

This does not come with an installation script either. We’ll have to copy all the artifacts manually in the correct folders (in the following commands, directories are created as well if not already there):
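Here’s a sketch; the destination directories are the standard KDE and Kvantum user locations, while the source folder names are assumptions, so adapt them to the actual layout of the repository:

```bash
git clone https://gitlab.com/jomada/edna.git
cd edna

# create the target directories if they are not already there
mkdir -p ~/.config/Kvantum \
         ~/.local/share/plasma/desktoptheme ~/.local/share/plasma/look-and-feel \
         ~/.local/share/color-schemes ~/.local/share/aurorae/themes ~/.local/share/konsole

# copy the theme artifacts (adapt the source folder names to the repository layout)
cp -r kvantum/* ~/.config/Kvantum/
cp -r plasma/desktoptheme/* ~/.local/share/plasma/desktoptheme/
cp -r plasma/look-and-feel/* ~/.local/share/plasma/look-and-feel/
cp -r color-schemes/* ~/.local/share/color-schemes/
cp -r aurorae/* ~/.local/share/aurorae/themes/
cp -r konsole/* ~/.local/share/konsole/
```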

This is another theme with the huge border problem when using fractional scaling:

Unfortunately, this one comes with a single version and no variant for fractional scaling. Thus, when using fractional scaling, we have to manually patch the theme: open the file ~/.local/share/aurorae/themes/Edna/Ednarc and change the lines

as follows (these values fit 150% scaling)

For 175% scaling, use the same values but this one
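To give a rough idea of what to look for: an Aurorae theme rc file usually has a [Layout] section with padding and border keys along these lines (the values below are just placeholders, not the ones shipped with Edna):

```ini
[Layout]
# placeholder values: lower the paddings/borders until the frame
# looks right at your scaling factor
PaddingLeft=10
PaddingRight=10
PaddingTop=10
PaddingBottom=10
BorderLeft=2
BorderRight=2
BorderBottom=2
```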

The windows already opened will still show the huge borders, but new windows will show the smaller ones. Feel free to play with such values until you get a border size you like. In the following screenshot, I’m using this theme together with the Tela icons (the green variant) that we installed before, and I also created a new Konsole profile using the Edna Konsole color scheme (which comes with a transparent background):

This theme also provides a complex Latte dock layout, with several docks. We won’t cover this feature in this post, but you might want to experiment with it.

Conclusions

I hope you enjoyed this tutorial and that you’ll start playing with KDE themes as well 🙂

How to install Linux on a USB with UEFI support

I have already blogged about installing Linux on an external USB stick or drive (better if it’s an SSD) to make such an installation portable to any computer. In that old blog post, I was using VirtualBox to do the actual installation. I was relying on VirtualBox because, when I had tried to install Linux directly to an external USB drive after booting with another USB with a live image, I ended up breaking my current computer’s GRUB configuration: if I tried to boot my computer without the newly created USB installation, I couldn’t select any OS to boot. At that time, I did not investigate further, because the VirtualBox solution was working like a charm for me.

However, the USB with Linux installed through VirtualBox could be used only when booting from a computer with “Legacy boot” enabled; that is, it could not be booted if UEFI was the only choice on that computer. Even that wasn’t a problem for me: it was enough to enable Legacy Boot in the computer’s BIOS. Unfortunately, when I tried to boot such a Linux USB on my new LG GRAM 16, I realized that this LG GRAM provides no way to enable Legacy boot! Then, I found this interesting post from “It’s FOSS” that both explains the problem with UEFI I had previously experienced (that is, the fact I couldn’t boot my computer at all because the GRUB configuration was broken) and describes a way to circumvent it. I suggest you go and read that post!

In this post, I’d like to summarize my experience applying the suggested workaround and also report that you might still get into trouble in some circumstances, but fixing things will be easy.

As suggested in https://itsfoss.com/intsall-ubuntu-on-usb/, before you start experimenting with the procedure of this tutorial, read it entirely.

The scenario

First of all, let’s summarize what I want to do. I want to install Linux on a portable external USB SSD. I don’t want a live distribution: a live distribution only allows a small testing experience; it’s not easily maintainable and upgradable, and it’s harder to keep your data in there. On the contrary, installing Linux on a USB drive will give you the full experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer; that’s surely the case for an external SSD, which is quite cheap nowadays).

In the previous post, I described how to create such an installation from VirtualBox. As I said, such a USB drive can only be booted in Legacy mode (I still have to investigate whether you can get a UEFI-bootable USB drive by installing it through VirtualBox; there’s also a more recent post where I achieve the same goal, a UEFI-bootable USB drive, by using VirtualBox).

Now I want to install Linux on a USB drive by performing a real installation: I’m going to boot my computer with a USB stick with a Linux live distribution, then I’m going to attach a USB external SSD drive, where I’m going to perform the actual installation. Thus, I’m NOT going to install Linux on the computer itself, but on the external USB drive.

I’m going to perform this experiment:

  • I’m going to use a Dell XPS 13 where I already have (in multi-boot, UEFI), Windows, Ubuntu, Kubuntu, and Manjaro GNOME
  • I’m going to install Manjaro KDE (I have already created a LIVE USB stick) into an external USB SanDisk SSD

The USB stick with the Live distribution is a SanDisk as well (just to let you know, in case you see SanDisk in the screenshots; I’ll try to make it clear when I’m talking about the Live USB stick and the USB SSD).

I know that I’ve said that I cannot boot with Legacy boot from the LG GRAM, while I can from the Dell XPS, but to make the experiment more interesting I decided to install Linux on the external USB drive using the Dell so that I can then test it both from the Dell and from the new LG GRAM.

The problem

That’s already well explained in the blog post https://itsfoss.com/intsall-ubuntu-on-usb/. I’ll briefly summarize it here: a system can only have one active ESP partition at a time. Even though you choose the USB as the destination for the bootloader while installing Linux, the EFI file for the new distribution (remember, installed on an external drive) will be put in the existing ESP partition (belonging to the computer you’re using just for performing the installation in the external drive). Thus, the computer you used for installing Linux on the external drive will not boot if you don’t have the Linux external USB drive plugged in.

I might add that the fact that the Linux installer lets you choose another device for the boot loader, while silently using an existing ESP partition, might be seen as a bug. Indeed, I read about that in many places. However, it looks like all the Linux installers share this behavior, so we’ll have to live with that and use a workaround.

The solution

The solution (workaround) for the problem above, as described in https://itsfoss.com/intsall-ubuntu-on-usb/ is simple and clever: you fool the installer by removing the ESP flag from the ESP partition (of the current computer’s SSD) before installing Linux on the external USB drive. Of course, it is crucial to put the ESP back after installation, before rebooting (as instructed at the end of the installation procedure). The removal and addition of the ESP flag can be done after booting into the live system with a partition manager program, which is usually part of the live installation media.

As I anticipated, you might still get into a little trouble. I’ll talk about that at the end of the post. However, fear not: the trouble is not as bad as breaking the whole booting procedure of your computer 😉

My experience

So I booted with the Live USB stick with Manjaro KDE. Remember, the Live USB must have been created appropriately with UEFI support, or everything from now on will not work at all. Remember that I’m booting the Live USB from the Dell XPS, where the Legacy boot is enabled. So I have to make sure to boot the Live USB with UEFI, NOT with Legacy:

Since this is a KDE live system, the available partition manager is the KDE one. I prefer GParted, so it’s just a matter of installing it in the live system using the package manager. Since I’m using Manjaro in this example, I just run
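something along these lines (pamac, Manjaro’s own package manager, would work as well):

```bash
sudo pacman -S gparted
```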

For Ubuntu, it will be a command based on apt (if it’s not already there) and for Fedora, you’ll have to use dnf (again, unless it’s already installed).

Then, I launch GParted; in my system, you can see the complex configuration of the internal SSD of my computer (/dev/nvme0n1): a few reserved partitions for recovering the Windows installation (the one that came with this computer), 3 ext4 partitions for the 3 Linux OSes mentioned above, 1 ext4 partition that is mounted in all the 3 installed Linuxes, the Windows installation and, the one we’re interested in, the first one, with label ESP. You can see its flags, boot and esp. We have to remove those flags before starting the installation. Right-click on that partition, choose “manage flags” and unselect one of the two flags boot or esp: the other one will be automatically unselected and a new flag msftdata will be selected, but that’s not important.

Let’s close GParted, and let’s run the Manjaro installer. This part is not documented because I’m assuming you’re already familiar with the installation procedure of the Linux distribution you want to install in the external USB drive. The important part is when you have to partition the target drive. Of course, it is crucial to select the right one (the external USB drive where you want to install Linux), NOT the SSD of your computer or you’ll be in real trouble, as you can imagine 😉

In my example, I selected /dev/sdb, because /dev/sda is the Live USB stick (as seen above, the internal SSD is /dev/nvme0n1). Then it’s up to you to partition the target drive appropriately. In this example, I decided to let the Manjaro installer erase the entire disk, specifying to create a SWAP partition with hibernate. Depending on the installer, you might choose something else. I chose this strategy because, this way, the installer will also automatically create the ESP partition for the boot manager on the target drive, with the right flags. If you want to partition that manually, again, depending on the installer, you might have to create the ESP partition manually (you can see an example of such manual partitioning in the mentioned article https://itsfoss.com/intsall-ubuntu-on-usb/). Remember that you still have a chance to review such changes before the actual modifications on the file system start.

When the installation finishes, you see the message to reboot into the newly installed system… DON’T DO THAT YET. Remember: you have to reset the esp and boot flags of the ESP partition of the internal drive of the computer: simply use GParted again and follow a procedure similar to the one performed at the beginning.

Since you’re still in GParted, you might also want to verify that the external drive where you’ve just installed Linux looks correct; for example, in my case:

Now, it’s finally time to reboot and see what happens…

Small Trouble

First of all, I wanted to make sure that I could still boot the operating systems on my Dell XPS computer, so I made sure I booted with all the USB drives unplugged. Everything seemed to work but… wait… the main UEFI loader on my computer was the Manjaro GNOME one, which was automatically configured to also boot the Ubuntu OSes, simply relying on os-prober, which is usually part of most Linux installations (apart from PopOS, from what I know). However, there was no trace of the old Manjaro UEFI loader: the Ubuntu UEFI loader showed up instead. You know that you can have several UEFI loaders on the same machine, and you can also reorder them from the BIOS. Even getting into the BIOS, the Manjaro UEFI entry was gone! The problem, in this case, is that Ubuntu doesn’t seem to be able to boot a Manjaro distribution (I still don’t know why). The Manjaro installation was still there, but I could not boot it!

But wait… I still have the brand new installation on the external USB drive! I booted with that, and it showed both the new Manjaro KDE (the first one in the menu, of course) and the entry for booting the Manjaro GNOME of my computer. Indeed, os-prober also kicked in during the installation on the external drive: it detected the OSes installed on my computer as well (that’s expected). You can see that in the photo (note the reference to the /dev/nvme0n1 partition):

Great! I could boot into the computer’s Manjaro GNOME, using the boot loader of the external drive. Once there, I disconnected the external drive and reinstalled the GRUB UEFI of Manjaro GNOME into my computer. It’s just a matter of running sudo grub-install (no need to specify anything else: the existing installed OS already knows where to install GRUB) and then sudo update-grub:
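That is:

```bash
sudo grub-install
sudo update-grub
```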

Rebooted and everything went back to normal on my computer!

All’s well that ends well! 😉

Why did that happen anyway? To be honest, I’m not completely sure… What I had noted before, when I installed Ubuntu and then Kubuntu on this computer, was that since the GRUB configurations of both systems use “Ubuntu” as the label, the Kubuntu installation, which was done after the Ubuntu installation, replaced the UEFI entry of the former; that had never been a problem because I can boot Ubuntu from Kubuntu, and vice versa if needed. Maybe the same happened in my experiment, since I had a “Manjaro” (GNOME) UEFI entry on my computer and I installed another “Manjaro” (KDE) distribution on the external hard drive: both use the same “Manjaro” label. That shouldn’t have happened, because the ESP partition should not have been detected by the installer, but maybe that was a wrong assumption (after all, os-prober can still detect existing OS installations).

This situation is NOT described in https://itsfoss.com/intsall-ubuntu-on-usb/ and indeed in that article the experiment was slightly different: the author installs on the external hard drive an “Ubuntu” distribution, while the computer already had a “Debian” distribution, so the labels were different.

Anyway, even in case of problems like the one experienced, it was pretty easy to fix things!

Hope you find this tutorial useful for experimenting with installing Linux on portable USB hard drives, or even USB sticks, provided they are fast ones 😉

Concluding Remarks

Please keep in mind that the created Linux installation on the USB external drive is effectively portable: you can use it on several computers and laptops. However, some drivers for some specific computer configurations might not be installed in the Linux installation of the external USB. Also, other configurations like screen resolutions and scaling might really depend on the computer you’re booting and might have to be adjusted each time you test the external USB drive in a different computer.

Problems with Linux 5.13 in LG GRAM 16

I recently bought an LG GRAM 16 and I really enjoy it (I’ll blog about that in the near future, hopefully). I had no problems installing Linux, neither with Manjaro GNOME (Pahvo) nor with Kubuntu.

However, in Manjaro GNOME I soon started to notice some lags, especially with the touchpad, and some repainting issues. I had no problems with Kubuntu (it was 21.04). The main difference was that Manjaro was using Linux kernel 5.13, while Kubuntu 21.04 was using Linux kernel 5.11. As soon as I updated to Kubuntu 21.10, which comes with Linux kernel 5.13, I started to have the same problems in Kubuntu as well.

Long story short: switching to Linux kernel 5.14 on both systems solved all the problems 🙂

In Manjaro you can use its kernel management system. Alternatively, from the command line, you can run
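For kernel 5.14 that should be something like the following (the package name follows Manjaro’s linuxXY naming convention):

```bash
sudo mhwd-kernel -i linux514
```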

On (K)ubuntu things are slightly more complicated because the current version 21.10 does not provide a package for kernel 5.14.

However, you can manually download the DEB files of the kernel (and kernel headers) from the mainline repository https://kernel.ubuntu.com/~kernel-ppa/mainline/ and then run dpkg -i on all the downloaded files. Personally, I prefer to use a nice GUI for such mainline kernels: mainline, https://github.com/bkw777/mainline. It’s just a matter of adding the corresponding PPA repository and installing it:
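If I recall correctly, the PPA is cappelikan/ppa (double-check on the project page):

```bash
sudo add-apt-repository ppa:cappelikan/ppa
sudo apt update
sudo apt install mainline
```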

The GUI application is called “Ubuntu Mainline Kernel Installer”. You select the kernel you want (in this case I’m choosing the latest version of the stable 5.14 version) and choose Install. Reboot and you’re good to go 🙂

Accessing Google Online Account from GNOME and KDE

In this post, I’d like to share my experiences in setting a Google Online Account in GNOME and KDE. Actually, I have more than one Google account, and the procedures I show can be repeated for all your Google accounts.

First, a disclaimer: I’ve always loved KDE, and I’ve used it since version 3. Lately, though, I have started to appreciate GNOME. I’ve been using GNOME most of the time, on most of my computers, for a few years now. But lately, I started to experiment with KDE again and to install it on some of my computers.

KDE is well-known for its customizability, while GNOME is known for the opposite. However, I must admit that in GNOME most settings are trivial to deal with, while in KDE you pay a price for its customizability.

I think setting up a Google account is a good example of what I’ve just said. Of course, I might be wrong about the procedures I’ll show in this post, but, from what I’ve read around, especially for KDE, there doesn’t seem to be an easier way. Of course, if you know an easier procedure, I’d like to hear about it in the comments 🙂

In the following, I’m showing how to set a Google Account so that its features, mainly the calendar and access to Google Drive, get integrated into GNOME and KDE. I tested these procedures both in Ubuntu/Kubuntu and in Manjaro GNOME/KDE, but I guess that’s the same in other distributions.

TL;DR: in Gnome it’s trivial, in KDE you need some effort.

GNOME

Just open “Online Accounts”, and choose Google. Use the web form to log in and give the permissions so that GNOME can access all your Google data. As I said, I’m focusing on the calendar and drive. Repeat the same procedure for all your Google accounts you want to connect.

Done! In Files (Nautilus) you can see, on the left, the links to your Google Drive (or drives, if you configured several accounts). In the Gnome Calendar, you can choose the Google calendars you want to show. The events will be automatically shown in the top Gnome Shell clock and calendar widget. Notifications will be shown automatically (by Evolution). For Gnome Contacts, things are similar. By the way, Gnome Tasks and other Gnome applications will also be automatically able to access your Google account data.

To summarize, one single configuration and everything else is automatically integrated.

KDE

Now be prepared for an overwhelming number of steps, most of which, I’m afraid, I find rather complex and counter-intuitive.

In particular, you won’t get access to your Google account data in a single step. In fact, I’ll first show how to mount a Google drive and then how to set up the calendar.

Mount your Google drive

Go to

System Settings -> Online Accounts -> Add New Account -> Google

As usual, you get redirected to the “Web authentication for google”; log in and give consent, allowing “KDE Online Accounts” to access some of your Google information, including your drive, YouTube videos, contacts, and calendar. (This procedure can be repeated for all your Google accounts if you have many.) Note that, with all the permissions you give, you’d expect everything to be automatically configured in KDE, but that’s not the case…

Back in the system settings, you get a “Google account”, not with your Google username or email, which is what I’d expect, but with a simple “google” and a progressive number (of course, you can rename it).

OK, now I can access my Google drive files from Dolphin and have my local calendar automatically connected to my Google calendar? Just like in Gnome? I’m afraid not… we’re still far away from that.

If you go to Dolphin’s Network place you see no Google drive mounted, nor a mechanism to do that… First, you have to install the package kio-gdrive (at least in Kubuntu and Manjaro KDE that’s not installed by default…). After that, back to Dolphin’s Network place you can expand the “Google Drive” folder and you get asked for the Google account you had previously configured. Select that, and “Use This Account For” -> “Drive” in Accounts Details. Now you can access your Google drive from Dolphin.

Add your Google calendar

What about my Google Calendar? First, you have to install the package korganizer (or the full suite kontact); again, at least in Kubuntu and Manjaro KDE, that’s not installed by default… Great, once installed I can simply select my previously configured Google account? Ehm… no… you “just” have to go to

Settings -> Configure KOrganizer -> General -> Calendars -> Add… -> Google Groupware -> a dialog appears, click “Configure…”

Now the browser (not a web dialog as before) is opened to log into your Google account. Then, give the permissions so that “This will allow Akonadi Resources for Google Services to…” (again, you have to do the same for all the Google accounts you want to connect). In the browser, you then see: “You can close this tab and return to the application now.” Go back to the dialog in KOrganizer, and your calendars and tasks should already be selected (unselect anything you don’t want). OK, now in the previous dialog you should see KOrganizer synchronizing with your Google calendar and tasks.

Now I should get notifications for Google calendar events, right? Ehm… not necessarily: you need to make sure that, in the “Status and Notifications” system tray, by right-clicking on “KOrganizer Reminders”, both “Enable Reminders” and “Start Reminder Daemon at Login” are selected (I see different default behaviors in that respect in different distributions). If not, enable them and log out and log in.

OK! But what about my Google calendar events in the standard “Digital Clock” widget in the corner of the system tray? Are they automatically shown just like in GNOME? No! There’s some more work to do! First, install kdepim-addons (guess what? At least in Kubuntu and Manjaro KDE, that’s not installed by default…). Now, go to “Digital Clock Settings” -> “Calendar” -> check “PIM Events Plugin” (quite counter-intuitive!) -> Apply; now a new “PIM Events Plugin” appears on the left, select that. Fortunately, this one will automatically propose to select all the calendars that have been previously configured in KOrganizer.

Something similar holds for kaddressbook; probably with kontact there would be fewer steps, but I’ve always found Kontact chaotic…

Summary

Now, I like KDE’s customization possibilities (while GNOME is pretty rigid about customizations, and most things cannot be customized at all), but the above steps are far too many! After a few weeks, I wouldn’t be able to remember them correctly… In KDE, even the number of steps of the above procedures is overwhelming, and you have to follow complex, heterogeneous, and counter-intuitive procedures, with long menu chains. Maybe it’s the distribution’s fault? I doubt it. I guess it’s an issue with the overall organization and integration of the KDE desktop environment. In GNOME, the integration is just part of the desktop environment.

Eclipse p2 site references

Say you publish a p2 repository for your Eclipse bundles and features. Typically, your bundles and features will depend on something external (other Eclipse bundles and features). The users of your p2 repository will also have to use the p2 repositories of your software’s dependencies; otherwise, they won’t be able to install your software. If your software only relies on standard Eclipse bundles and features, that is, something that can be found in the standard Eclipse central update site, you should have no problem: your users will typically have the Eclipse central update site already configured in their Eclipse installations. So, unless your software requires a specific version of an Eclipse dependency, you should be fine.

What happens instead if your software relies on external dependencies that are available only in other p2 sites? Or, to put it another way, what if you rely on an Eclipse project that is not part of the simultaneous release, or you need a version different from the one provided by a specific Eclipse release?

You should tell your users to use those specific p2 sites as well. This, however, will decrease the user experience at least from the installation point of view. One would like to use a p2 site and install from it without further configurations.

To overcome this issue, you should make your p2 repository somehow self-contained. I can think of 3 alternative ways to do that:

  • If you build with Tycho (which is probably the case if you don’t do releng stuff manually), you could use <includeAllDependencies> of the tycho-p2-repository plugin to “to aggregate all transitive dependencies, making the resulting p2 repository self-contained.” Please keep in mind that your p2 repository itself will become pretty huge (likely a few hundred MB), so this might not be feasible in every situation.
  • You can put the required p2 repositories as children of your composite update site. This might require some more work and will force you to introduce composite update sites just for this. I’ve written about p2 composite update sites many times in this blog in the past, so I will not consider this solution further.
  • You can use p2 site references, which are meant just for the task mentioned so far and which have been part of the category.xml specification for some time now. The idea is that you put references to the p2 sites of your software dependencies, and the corresponding content metadata of the generated p2 repository will contain links to those p2 sites. Then, p2 will automatically contact those sites when installing software (at least from Eclipse; from the command line, we’ll have to use specific arguments, as we’ll see later). Please keep in mind that this mechanism works only if you use recent versions of Eclipse (if I remember correctly, it was added a couple of years ago).

In this blog post, I’ll describe such a mechanism, in particular, how this can be employed during the Tycho build.

The simple project used in this blog post can be found here: https://github.com/LorenzoBettini/tycho-site-references-example. You should be able to easily reuse most of the POM stuff in your own projects.

IMPORTANT: To benefit from this, you’ll have to use at least Tycho 2.4.0. In fact, Tycho started to support site references only a few versions ago, but only in version 2.4.0 has this been implemented correctly. (I personally fixed this: https://github.com/eclipse/tycho/issues/141.) If you use a (not much) older version, e.g., 2.3.0, there’s a branch in the above GitHub repository, tycho-2.3.0, where some additional hacks have to be performed to make it work (rewrite metadata contents and re-compress the XML files, just to mention a few), but I’d suggest you use Tycho 2.4.0.

There’s also another important aspect to consider: if your software switches to a different version of a dependency that is available on a different p2 repository, you have to update such information consistently. In this blog post, we’ll deal with this issue as well, keeping it as automatic (i.e., less error-prone) as possible.

The example project

The example project is very simple:

  • parent project with the parent POM;
  • a plugin project created with the Eclipse wizard with a simple handler (so it depends on org.eclipse.ui and org.eclipse.core.runtime);
  • a feature project including the plugin project. To make the example more interesting this feature also requires, i.e., NOT includes, the external feature org.eclipse.xtext.xbase. We don’t actually use such an Xtext feature, but it’s useful to recreate an example where we need a specific p2 site containing that feature;
  • a site project with category.xml that is used to generate during the Tycho build our p2 repository.

To make the example interesting the dependency on the Xbase feature is as follows
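In the feature.xml of the feature project, the requirement looks roughly like this (a sketch; the match rule is my choice, what matters here is the 2.25.0 version):

```xml
<requires>
   <import feature="org.eclipse.xtext.xbase" version="2.25.0" match="compatible"/>
</requires>
```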

So we require version 2.25.0.

The target platform is defined directly in the parent POM as follows (again, to keep things simple):
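Something along these lines, using Maven repositories with the p2 layout (the URLs are the ones I’d expect for the 2020-12 release and for Xtext 2.25.0; double-check them for your own dependencies):

```xml
<repositories>
  <repository>
    <id>eclipse-2020-12</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/releases/2020-12</url>
  </repository>
  <repository>
    <id>xtext-2.25.0</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0/</url>
  </repository>
</repositories>
```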

Note that I explicitly added the Xtext 2.25.0 site repository because in the 2020-12 Eclipse site Xtext is available with a lower version 2.24.0.

This defines the target platform against which we built (and, in a real example, hopefully tested) our bundle and feature.

The category.xml initially is defined as follows
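A minimal sketch of what it could look like (the feature id and category names are placeholders for this example project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="tycho-site-references-example.feature" version="0.0.0">
      <category name="main.category"/>
   </feature>
   <category-def name="main.category" label="Tycho Site References Example"/>
</site>
```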

The problem

If you generate the p2 repository with the Maven/Tycho build, you will not be able to install the example feature unless Xtext 2.25.0 and its dependencies can be found (actually, also the standard Eclipse dependencies have to be found, but as said above, the Eclipse update site is already part of the Eclipse distributions). You then need to tell your users to first add the Xtext 2.25.0 update site. In the following, we’ll handle this.

A manual, and thus cumbersome, way to verify that is to try to install the example feature in an Eclipse installation pointing to the p2 repository generated during the build. Of course, we’ll keep also this verification mechanism automatic and easy. So, before going on, following a Test-Driven approach (which I always love), let’s first reproduce the problem in the Tycho build, by adding this configuration to the site project (plug-in versions are configured in the pluginManagement section of the parent POM):

The idea is to run the standard Eclipse p2 director application through the tycho-eclipserun-plugin. The dependency configuration is standard for running such an Eclipse application. We try to install our example feature from our p2 repository into a temporary output directory (these values are defined as properties so that you can copy this plugin configuration in your projects and simply adjust the values of the properties). Also, the arguments passed to the p2 director are standard and should be easy to understand. The only non-standard argument is -followReferences that will be crucial later (for this first run it would not be needed).

Running mvn clean verify should now highlight the problem:

This would mimic the situation your users might experience.

The solution

Let’s fix this: we add to the category.xml the references to the same p2 repositories we used in our target platform. We can do that manually (or by using the Eclipse Category editor, in the tab Repository Properties):
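A sketch of the updated category.xml, with the added repository references mirroring the target platform above (same placeholders as before):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="tycho-site-references-example.feature" version="0.0.0">
      <category name="main.category"/>
   </feature>
   <category-def name="main.category" label="Tycho Site References Example"/>
   <repository-reference location="https://download.eclipse.org/releases/2020-12" enabled="true"/>
   <repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0/" enabled="true"/>
</site>
```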


Now, when we create the p2 repository during the Tycho build, the content.xml metadata file will contain the references to those p2 repositories (with a slightly different syntax, but that’s not important; it will contain a reference to the metadata repository and to the artifact repository, which usually are the same). Now, our users can simply use our p2 repository without worrying about dependencies! Our p2 repository will be self-contained.

Let’s verify that by running mvn clean verify; now everything is fine:

Note that this requires much more time: now the p2 director has to contact all the p2 sites defined as references and has to also download the requirements during the installation. We’ll see how to optimize this part as well.

In the corresponding output directory, you can find the installed plugins; you can’t do much with such installed bundles, but that’s not important. We just want to verify that our users can install our feature simply by using our p2 repository, that’s all!

You might not want to run this verification on every build, but, for instance, only during the build where you deploy the p2 repository to some remote directory (of course, before the actual deployment step). You can easily do that by appropriately configuring your POM(s).

Some optimizations

As we saw above, each time we run a clean build, the verification step has to access remote sites and download all the dependencies. Even though this is a very simple example, the dependencies downloaded during the installation amount to almost 100MB, every time you run the verification. (It might be the right moment to stress that the p2 director knows nothing about the Maven/Tycho cache.)

We can employ some caching mechanisms by using the standard mechanism of p2: bundle pool! This way, dependencies will have to be downloaded only the very first time, and then the cached versions will be used.

We simply introduce another property for the bundle pool directory (I’m using by default a hidden directory in the home folder) and the corresponding argument for the p2 director application:

Note that now the plug-ins during the verification step will NOT be installed in the specified output directory (which will store only some p2 properties and caches): they will be installed in the bundle pool directory. Again, as said above, you don’t need to interact with such installed plug-ins, you only need to make sure that they can be installed.

In a CI server, you should cache the bundle pool directory as well if you want to benefit from some speed. E.g., this example comes with a GitHub Actions workflow that stores also the bundle pool in the cache, besides the .m2 directory.

This will also allow you to easily experiment with different configurations of the site references in your p2 repository. For example, up to now, we put the same sites used for the target platform. Referring to the whole Eclipse releases p2 site might be too much since it contains all the features and bundles of all the projects participating in Eclipse Simrel. In the target platform, this might be OK since we might want to use some dependencies only for testing. For our p2 repository, we could tweak references so that they refer only to the minimal sites containing all our features’ requirements.

For this example we can replace the 2 sites with 4 small sites with all the requirements (actually the Xtext 2.25.0 is just the same as before):

You can verify that removing any of them will lead to installation failures.

The first time this tweaking might require some time, but you now have an easy way to test this!

Keeping things consistent

When you update your target platform, i.e., the versions of your dependencies, you must make sure to update the site references in the category.xml accordingly. It would instead be nice to modify this information in a single place so that everything else is kept consistent!

We can use again properties in the parent POM:
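For example (eclipse-version is the property name actually used later in this post; the Xtext one is just my guess for a name):

```xml
<properties>
  <eclipse-version>2020-12</eclipse-version>
  <xtext-version>2.25.0</xtext-version>
</properties>
```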

We want to rely on such properties also in the category.xml, relying on the Maven standard mechanism of copy resources with filtering.

We create another category.xml in the subdirectory templates of the site project using the above properties in the site references (at least in the ones where we want to have control on a specific version):
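The references in templates/category.xml would then look something like this fragment:

```xml
<repository-reference location="https://download.eclipse.org/releases/${eclipse-version}" enabled="true"/>
<repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/${xtext-version}/" enabled="true"/>
```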

and in the site project we configure the Maven resources plugin appropriately:
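A possible configuration is the following sketch (the chosen phase just has to precede package, where Tycho assembles the p2 repository):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>filter-category-xml</id>
      <!-- any phase before "package", where the p2 repository is generated -->
      <phase>process-resources</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${basedir}</outputDirectory>
        <resources>
          <resource>
            <directory>templates</directory>
            <includes>
              <include>category.xml</include>
            </includes>
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```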

Of course, we execute that in a phase that comes BEFORE the phase when the p2 repository is generated. This will overwrite the standard category.xml file (in the root of the site project) by replacing properties with the corresponding values!

By the way, you could use the property eclipse-version also in the configuration of the Tycho Eclipserun plugin seen above, instead of hardcoding 2020-12.

Happy releasing! 🙂

Fixing Right Click Touchpad in PineBook Pro

I recently bought a PineBook Pro (maybe I’ll review it in the future in another post). What annoyed me first was that the right click on the touchpad with a two-finger tap was basically unusable: you should be extremely fast.

In some forums, the solution is to issue this command

but then you have to make it somehow permanent.

Actually, it’s much easier than that: just use the KDE Settings (Tap Detection -> Maximum time), and this will make it permanent right away:

Publishing an Eclipse p2 composite repository on GitHub Pages

I had already described the process of publishing an Eclipse p2 composite update site:

Well, now that Bintray is shutting down, and Sourceforge is quite slow in serving an Eclipse update site, I decided to publish my Eclipse p2 composite update sites on GitHub Pages.

GitHub Pages might not be ideal for serving binaries, and it has a few limitations. However, such limitations (e.g., published sites may be no larger than 1 GB, sites have a soft bandwidth limit of 100GB per month and sites have a soft limit of 10 builds per hour) are not that crucial for an Eclipse update site, whose artifacts are not that huge. Moreover, at least my projects are not going to serve more than 100GB per month, unfortunately, I might say 😉

In this tutorial, I’ll show how to do that, so that you can easily apply this procedure also to your projects!

The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted). The idea is that you can run this Maven build (basically, “mvn deploy”) on any CI server (as long as you have write-access to the GitHub repository hosting the update site – more on that later). Thus, you will not depend on the pipeline syntax of a specific CI server (Travis, GitHub Actions, Jenkins, etc.), though, depending on the specific CI server you might have to adjust a few minimal things.

These are the main points:

The p2 children repositories and the p2 composite repositories will be published with standard Git operations since we publish them in a GitHub repository.

Let’s recap what p2 composite update sites are. Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

As repositories continually grow in size they become harder to manage. The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must be available, each one with its own p2 metadata that should never be overwritten. On the contrary, the metadata that we will overwrite will be the one for the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.

Directory Structure

I want to be able to serve these composite update sites:

  • the main one collects all the versions
  • a composite update site for each major version (e.g., 1.x, 2.x, etc.)
  • a composite update site for each major.minor version (e.g., 1.0.x, 1.1.x, 2.0.x, etc.)

What I aim at is to have the following paths:

  • releases: in this directory, all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20210307-2037, 1.1.0.v20210307-2104, etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory, the metadata for major and major.minor composite sites will be uploaded.
  • root: the main composite update site collecting all versions.

To summarize, we’ll end up with a remote directory structure like the following one
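Roughly like this (a sketch; the version directories are just examples):

```
p2composite-github-pages-example-updates/
├── compositeArtifacts.xml
├── compositeContent.xml
├── releases/
│   ├── 1.0.0.v20210307-2037/   (simple p2 repository)
│   └── 1.1.0.v20210307-2104/   (simple p2 repository)
└── updates/
    └── 1.x/
        ├── compositeArtifacts.xml
        ├── compositeContent.xml
        ├── 1.0.x/
        │   ├── compositeArtifacts.xml
        │   └── compositeContent.xml
        └── 1.1.x/
            ├── compositeArtifacts.xml
            └── compositeContent.xml
```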

Thus, if you want, you can provide these sites to your users (I’m using the URLs that correspond to my example):

  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates for the main global update site: every new version will be available when using this site;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x for all the releases with major version 1: for example, the user won’t see new releases with major version 2;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x/1.0.x for all the releases with major version 1 and minor version 0: the user will only see new releases of the shape 1.0.0, 1.0.1, 1.0.2, etc., but NOT 1.1.0, 1.2.3, 2.0.0, etc.

If you want to change this structure, you have to carefully tweak the ant file we’ll see in a minute.

Building Steps

During the build, before the actual deployment, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build are:

  • Clone the repository hosting the composite update site (in this example, https://github.com/LorenzoBettini/p2composite-github-pages-example-updates);
  • Create the p2 repository (with Tycho, as usual);
  • Copy the p2 repository in the cloned repository in a subdirectory of the releases directory (the name of the subdirectory has the same qualified version of the project, e.g., 1.0.0.v20210307-2037);
  • Update the composite update sites information in the cloned repository (using the p2 tools);
  • Commit and push the updated clone to the remote GitHub repository (the one hosting the composite update site).

First of all, in the parent POM, we define the following properties, which of course you need to tweak for your own projects:
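Something like the following; only github-update-repo and site.label are the names actually referenced later in this post, and the values are just examples (use whatever URL shape, SSH or HTTPS with credentials, suits you):

```xml
<properties>
  <!-- Git URL of the GitHub repository hosting the composite update site -->
  <github-update-repo>git@github.com:LorenzoBettini/p2composite-github-pages-example-updates.git</github-update-repo>
  <!-- label stored in the composite repository metadata -->
  <site.label>Composite Site Example</site.label>
</properties>
```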

It should be clear which properties you need to modify for your project. In particular, the github-update-repo is the URL (with authentication information) of the GitHub repository hosting the composite update site, and the site.label is the label that will be put in the composite metadata.

Then, in the parent POM, we configure in the pluginManagement section all the versions of the plugin we are going to use (see the sources of the example on GitHub).

The most interesting configuration is the one for the tycho-packaging-plugin, where we specify the format of the qualified version:
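The qualifier format that produces versions like 1.0.0.v20210307-2037 would be:

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <configuration>
    <format>'v'yyyyMMdd-HHmm</format>
  </configuration>
</plugin>
```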

Moreover, we create a profile release-composite (which we’ll also use later in the POM of the site project), where we disable the standard Maven plugins for install and deploy. Since we are going to release our Eclipse p2 composite update site during the deploy phase, but we are not interested in installing and deploying the Maven artifacts, we skip the standard Maven plugins bound to those phases:
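In the profile, that amounts to something like:

```xml
<profile>
  <id>release-composite</id>
  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-install-plugin</artifactId>
          <configuration>
            <skip>true</skip>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-deploy-plugin</artifactId>
          <configuration>
            <skip>true</skip>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</profile>
```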

The interesting steps are in the site project, the one with <packaging>eclipse-repository</packaging>. Here we also define the profile release-composite and we use a few plugins to perform the steps involving the Git repository described above (remember that these configurations are inside the profile release-composite, of course in the build plugins section):

Let’s see these configurations in detail. In particular, it is important to understand how the goals of the plugins are bound to the phases of the default lifecycle; remember that on the phase package, Tycho will automatically create the p2 repository and it will do that before any other goals bound to the phase package in the above configurations:

  • with the build-helper-maven-plugin we parse the current version of the project, in particular, we set the properties holding the major and minor versions that we need later to create the composite metadata directory structure; its goal is automatically bound to one of the first phases (validate) of the lifecycle;
  • with the exec-maven-plugin we configure the execution of the Git commands:
    • we clone the Git repository of the update site (with --depth=1 we only get the latest commit in the history; the previous commits are not interesting for our task); this is done in the phase prepare-package, that is, before the p2 repository is created by Tycho; the Git repository is cloned into the output directory target/checkout
    • in the phase verify (that is, after the phase package), we commit the changes (which will be done during the phase package as shown in the following points)
    • in the phase deploy (that is, the last phase that we’ll run on the command line), we push the changes to the Git repository of the update site
  • with the maven-resources-plugin we copy the p2 repository generated by Tycho into the target/checkout/releases directory in a subdirectory with the name of the qualified version of the project (e.g., 1.0.0.v20210307-2037);
  • with the tycho-eclipserun-plugin we create the composite metadata; we rely on the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant task for managing composite repositories (p2.composite.repository). The Ant tasks are defined in the Ant file packaging-p2composite.ant, stored in the site project. In this file, there are also a few properties that describe the layout of the directories described before. Note that we need to pass a few properties, including the site.label, the directory of the local Git clone, and the major and minor versions that we computed before.

Keep in mind that in all the above steps, non-existing directories will be automatically created on-demand (e.g., by the maven-resources-plugin and by the p2 Ant tasks). This means that the described process will work seamlessly the very first time when we start with an empty Git repository.

Now, from the parent POM on your computer, it’s enough to run
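a command like the following (the profile name release-composite is the one used in this example):

    mvn clean deploy -Prelease-composite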

and the release will be performed. When cloning you’ll be asked for the password of the GitHub repository, and, if not using an SSH agent or a keyring, also when pushing. Again, this depends on the URL of the GitHub repository; you might use an HTTPS URL that relies on the GitHub token, for example.

If you want to make a few local tests before actually releasing, you might stop at the phase verify and inspect the target/checkout to see whether the directories and the composite metadata are as expected.

You might also want to add another execution to the tycho-eclipserun-plugin to add a reference to another Eclipse update site that is required to install your software. The Ant file provides a task for that, p2.composite.add.external that will store the reference into the innermost composite child (e.g., into 1.2.x); here’s an example that adds a reference to the Eclipse main update site:

For example, in my Xtext projects, I use this technique to add a reference to the Xtext update site corresponding to the Xtext version I’m using in that specific release of my project. This way, my update site will be “self-contained” for my users: when using my update site for installing my software, p2 will be automatically able to install also the required Xtext bundles!

Releasing from GitHub Actions

The Maven command shown above can be used to perform a release from your computer. If you want to release your Eclipse update site directly from GitHub Actions, there are a few more things to do.

First of all, we are talking about a GitHub Actions workflow stored and executed in the GitHub repository of your project, NOT in the GitHub repository of the update site. In this example, it is https://github.com/LorenzoBettini/p2composite-github-pages-example.

In such a workflow, we need to push to another GitHub repository. To do that

  • create a GitHub personal access token (selecting repo);
  • create a secret in the GitHub repository of the project (where we run the GitHub Actions workflow), in this example it is called ACTIONS_TOKEN, with the value of that token;
  • when running the Maven deploy command, we need to override the property github-update-repo by specifying a URL for the GitHub repository with the update site using the HTTPS syntax and the encrypted ACTIONS_TOKEN; in this example, it is https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates;
  • we also need to configure in advance the Git user and email, with some values, otherwise, Git will complain when creating the commit.

To summarize, these are the interesting parts of the release.yml workflow (see the full version here: https://github.com/LorenzoBettini/p2composite-github-pages-example/blob/master/.github/workflows/release.yml):
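A sketch of what such a workflow might look like (action and JDK versions are illustrative; the full version linked above is the reference):

    name: Release
    on:
      push:
        branches:
          - release
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK 11
            uses: actions/setup-java@v1
            with:
              java-version: 11
          - name: Configure Git user
            run: |
              git config --global user.name 'GitHub Actions'
              git config --global user.email 'actions@github.com'
          - name: Build and deploy the composite update site
            run: >
              mvn deploy -Prelease-composite
              -Dgithub-update-repo=https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates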

The workflow is configured to be executed only when you push to the release branch.

Remember that we are talking about the Git repository hosting your project, not the one hosting your update site.

Final thoughts

With the procedure described in this post, you publish your update sites and the composite metadata during the Maven build, so you never deal manually with the GitHub repository of your update site. However, you can always do that! For example, you might want to remove a release. It’s just a matter of cloning that repository, making your changes (that is, removing a subdirectory of releases and manually updating the composite metadata accordingly), committing, and pushing. Now and then, you might also clean up the history of such a Git repository (the history is not important in this context) by pushing with --force after resetting the Git history. By the way, by tweaking the configurations above, you could also do that every time you do a release: just commit with amend and push with force!

Finally, you could also create an additional GitHub repository for snapshot releases of your update sites, or for milestones or release candidates.

Happy releasing! 🙂

Caching dependencies in GitHub Actions

I recently started to port all my Java projects from Travis CI to GitHub Actions, since Travis CI changed its pricing model. (I’ll soon also update my book on TDD and Build Automation in that respect.)

I’ve always used caching mechanisms during the builds in Travis CI, to speed up the builds: caching Maven dependencies, especially in big projects, can save a lot of time. In my case, I’m mostly talking about Eclipse plug-in projects, built with Maven/Tycho, where the target platform resolution might have to download a few hundred megabytes. Thus, I wanted to use caching also in GitHub Actions, and there’s an action for that.

In this post, I’ll show my strategies for using the cache, in particular, using different workflows based on different operating systems, which are triggered only on some specific events. I’ll use a very simple example, but I’m using this strategy currently on this Xtext project: https://github.com/LorenzoBettini/edelta, which uses more than 300 Mb of dependencies.

The post assumes that you’re already familiar with GitHub Actions.

Warning: Please keep in mind that caches will also be evicted automatically (currently, the documentation says that “caches that are not accessed within the last week will also be evicted”). However, we can still benefit from caches if we are working on a project for a few days in a row.

To experiment with building mechanisms, I suggest you use a very simple example. I’m going to use a simple Maven Java project created with the corresponding Maven archetype: a Java class and a trivial JUnit test. The Java code is not important in this context, and we’ll concentrate on the build automation mechanisms.

The final project can be found here:
https://github.com/LorenzoBettini/github-actions-cache-example.

This is the initial POM for this project:

This is the main workflow file (stored in .github/workflows/maven.yml):
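Here is a sketch of its contents (action and JDK versions are illustrative):

    name: Java CI with Maven
    on:
      push:
      pull_request:
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK 11
            uses: actions/setup-java@v1
            with:
              java-version: 11
          - name: Cache Maven packages
            uses: actions/cache@v2
            with:
              path: ~/.m2
              key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
              restore-keys: ${{ runner.os }}-m2-
          - name: Build with Maven
            run: mvn verify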

This is a pretty standard workflow for a Java project built with Maven. This workflow runs for every push on any branch and every PR.

Note that we specify to cache the directory where Maven stores all the downloaded artifacts, ~/.m2.

For the cache key, we use the OS where our build is running, a constant string “-m2-” and the result of hashing all the POM files (we’ll see how we rely on this hashing later in this post).

Remember that the cache key will be used in future builds to restore the files saved in the cache. When no cache is found with the given key, the action searches for alternate keys if the restore-keys has been specified. As you see, we specified as the restore key something similar to the actual key: the running OS and the constant string “-m2-” but no hashing. This way, if we change our POMs, the hashing will be different, but if a previous cache exists we can still restore that and make use of the cached values. (See the official documentation for further details.) We’ll then have to download only the new dependencies if any. The cache will then be updated at the end of the successful job.

I usually rely on this strategy for the CI of my projects:

  • build every push on any branch using a Linux build environment;
  • build PRs in any branch also on Windows and macOS environments (actually, I wasn’t using Windows with Travis CI, since it did not provide Java support on that environment; that’s another advantage of GitHub Actions, which provides Java support on Windows as well).

Thus, I have another workflow definition just for PRs (stored in .github/workflows/pr.yml):
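A sketch of this workflow (again, versions are illustrative):

    name: Java CI with Maven (PR)
    on:
      pull_request:
    jobs:
      build:
        strategy:
          matrix:
            os: [windows-latest, macos-latest]
        runs-on: ${{ matrix.os }}
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK 11
            uses: actions/setup-java@v1
            with:
              java-version: 11
          - name: Cache Maven packages
            uses: actions/cache@v2
            with:
              path: ~/.m2
              key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
              restore-keys: ${{ runner.os }}-m2-
          - name: Build with Maven
            run: mvn verify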

Besides the build matrix for OSes, that’s basically the same as the previous workflow. In particular, we use the same strategy for defining the cache key (and restore key). Thus, we have a different cache for each different operating system.

Now, let’s have a look at the documentation of this action:

A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch. For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch feature-b has the base branch feature-a, a workflow triggered on feature-b would have access to caches created in the default branch (main), feature-a, and feature-b.

Access restrictions provide cache isolation and security by creating a logical boundary between different workflows and branches. For example, a cache created for the branch feature-a (with the base main) would not be accessible to a pull request for the branch feature-b (with the base main).

What does that mean in our scenario? Since the workflow running on Windows and macOS is executed only in PRs, this means that the cache for these two configurations will never be saved for the master branch. In turn, this means that each time we create a new PR, this workflow will have no chance of finding a cache to restore: the branch for the PR is new (so no cache is available for such a branch) and the base branch (typically, “master” or “main”) will have no cache saved for these two OSes. Summarizing, the builds for the PRs for these two configurations will always have to download all the Maven dependencies from scratch. Of course, if we don’t immediately merge the PR and we push other commits on the branch of the PR, the builds for these two OSes will find a saved cache (if the previous builds of the PR succeeded), but, in any case, the first build for each new PR for these two OSes will take more time (actually, much more time in a complex project with lots of dependencies).

Thus, if we want to benefit from caching also on these two OSes, we need another workflow, running on Windows and macOS, that is executed when a PR is merged, so that the cache is stored also for the master branch (actually, we could use this strategy when merging any PR into any base branch, not necessarily the main one).

Here’s this additional workflow (stored in .github/workflows/pr-merge.yml):
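A sketch of it (the workflow name below is the one referenced later in this post; versions are illustrative):

    name: Updates Cache on Windows and macOS
    on:
      push:
        branches:
          - master
          - 'experiments**'
    jobs:
      build:
        if: contains(github.event.head_commit.message, 'Merge pull request')
        strategy:
          matrix:
            os: [windows-latest, macos-latest]
        runs-on: ${{ matrix.os }}
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK 11
            uses: actions/setup-java@v1
            with:
              java-version: 11
          - name: Cache Maven packages
            uses: actions/cache@v2
            with:
              path: ~/.m2
              key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
              restore-keys: ${{ runner.os }}-m2-
          - name: Update the Maven dependency cache
            run: mvn install -DskipTests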

Note that we intercept the push event (since a merge of a PR is actually a push), but we use an if statement that enables the workflow only when the commit message contains the string “Merge pull request”, which is the default message when merging a PR on GitHub. In this example, we are only interested in PRs merged into the master branch or into any branch starting with “experiments”, but you can adjust that as you see fit. Furthermore, since this workflow is only meant for updating the Maven dependency cache, we skip the tests (with -DskipTests) so that we save some time (especially in a complex project with lots of tests).

This way, after the first PR merged, the PR workflows running on Windows and macOS will find a cache (at least as a starting point).

We can also do better than that and avoid running the Maven build if there’s no need to update the cache. Remember that we use the result of hashing all the POM files in our cache key? We mean that if our POMs do not change then basically we don’t expect our dependencies to change (of course if we’re not using SNAPSHOT dependencies). Now, in the documentation, we also read

When key matches an existing cache, it’s called a cache hit, […] When key doesn’t match an existing cache, it’s called a cache miss, and a new cache is created if the job completes successfully.

The idea is to skip the Maven step in the above workflow “Updates Cache on Windows and macOS” if we have a cache hit, since in that case we expect that no new dependencies need to be downloaded (our POMs haven’t changed). This is the interesting part to change:
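A sketch of the changed steps:

    - name: Cache Maven packages
      uses: actions/cache@v2
      id: cache
      with:
        path: ~/.m2
        key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
        restore-keys: ${{ runner.os }}-m2-
    - name: Update the Maven dependency cache
      if: steps.cache.outputs.cache-hit != 'true'
      run: mvn install -DskipTests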

Note that we need to define an id for the cache to intercept the cache hit or miss and the id must match the id in the if statement.

This way, if we have a cache hit the workflow for updating the cache on Windows and macOS will be really fast since it won’t even run the Maven build for updating the cache.

If we change the POM, e.g., switch to JUnit 4.13.1, push the change, create a PR, and merge it, then, the workflow for updating the cache will actually run the Maven build since we have a cache miss: the key of the cache has changed. Of course, we’ll still benefit from the already cached dependencies (and all the Maven plugins already downloaded) and we’ll update the cache with the new dependencies for JUnit 4.13.1.

Final notes

One might think of intercepting the merge of a PR by using on: pull_request: (as we did in the pr.yml workflow). However, “There’s no way to specify that a workflow should be triggered when a pull request is merged”. In the official forum, you can find a solution based on the “closed” PR event and the inspection of the PR “merged” state. So one might think of updating the pr.yml workflow accordingly and getting rid of the additional pr-merge.yml workflow. However, from my experiments, this solution will not make use of caching, which is the main goal of this post. The symptoms of such a problem are this message when the workflow initially tries to restore a cache:

Warning: Cache service responded with 403

and this message when the cache should be saved:

Unable to reserve cache with key …, another job may be creating this cache.

Another experiment that I tried was to remove the running OS from the cache key, e.g., m2-${{ hashFiles('**/pom.xml') }} instead of ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}, and to use a corresponding restore key, like m2- instead of ${{ runner.os }}-m2-. I was hoping to reuse the same cache across different OS environments. This seems to work for macOS, which appears to be able to reuse the Linux cache. Unfortunately, this does not work for Windows. Thus, I gave up that solution.

Grub remembers the last choice

All my computers dual boot Ubuntu and Windows, though I use the former 99% of the time 😉 On one of my laptops, I also started to evaluate Manjaro (a blog post will probably come in the near future). I let Manjaro install the main EFI grub boot loader and noticed that, upon reboots, its grub configuration remembers the last choice! That is, if I booted Ubuntu (not the first choice in the menu) and I reboot, then the “Ubuntu” entry is the one selected by default. The same holds for Windows.

I find this feature really cool and useful:

  • if I had previously used Ubuntu and possibly hibernated the computer, then, even after a few days, when I boot the laptop I know which OS I had booted the last time;
  • if I boot Windows (…once a month?), I will probably have to install many updates that require a few reboots; if I left the computer unattended during the reboots, I would find myself back in Linux (the primary default choice in grub) and I had to reboot and choose Windows again so that the updates could be installed (if Windows updates require a few reboots, that’s quite annoying).

I thought that Manjaro had some special tweaks in the installed grub, but then I learned that’s a standard feature of Grub!

You just have to add these two lines in your /etc/default/grub:
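These are standard Grub settings, not something Manjaro-specific:

    GRUB_DEFAULT=saved
    GRUB_SAVEDEFAULT=true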

Save and run
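On Ubuntu, that’s:

    sudo update-grub

(on distributions without the update-grub helper, sudo grub-mkconfig -o /boot/grub/grub.cfg does the same job).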

And that’s all! From then on Grub will remember your last choice 🙂

Enabling Hibernation on Ubuntu 20.04

I have never been able to make hibernation (suspend to disk) work on my laptops (Dell M3800 and Dell XPS 13 9370) on Ubuntu with systemd. The symptom was that running

was making the system shut down, but then, upon restart, the system was not restored: it was just like booting the system from scratch.

I had also tried with uswsusp (which is installed if you install the package hibernate), with its program s2disk, but I experienced many problems: it wasn’t working reliably and it was making booting (even standard booting) much longer (several seconds more).

Then, after looking at several blog posts, I found that the solution is rather simple, and I’ll detail the steps here. I’ll also show how to use suspend-then-hibernate.

First, you need to have swap already set up, e.g., a swap partition (though I think a swap file would work as well; in that case, the configuration is slightly more complex). For example, in /etc/fstab you should have something like
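(the UUID below is just a placeholder; yours will be different):

    # swap partition
    UUID=0f32d6f7-1f32-4c2b-9a1c-bd6f2a1d88a0 none swap sw 0 0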

The UUID is important and you should take note of it.

How big should the swap be? You can find some hints here: https://help.ubuntu.com/community/SwapFaq. I have 16Gb of RAM and my swap partition is 20 Gb.

Then, you must make sure initramfs is “aware” of your swap partition and that it is already able to “resume” from it. This should already be the case, but you can try to run
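the command that regenerates the initramfs (on Ubuntu):

    sudo update-initramfs -u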

and after some time you should see something like:

The UUID must be the same as your swap UUID in the /etc/fstab.

Now, it’s just a matter of editing your /etc/default/grub and making sure you specify resume in GRUB_CMDLINE_LINUX_DEFAULT, with the UUID of your swap partition. So it should be something like the following (remember that <UUID of your swap partition> must be replaced with the actual UUID).
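For example, keeping the default Ubuntu options quiet splash:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=<UUID of your swap partition>"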

Save the file and update grub:
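On Ubuntu, that’s:

    sudo update-grub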

Reboot the system and now try to hibernate again (first you might want to start a few applications so that you’re sure that the system is effectively restored to the same state):
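With systemd, the command is:

    sudo systemctl hibernate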

Wait for the system to shut down and switch it on again. The splash screen should tell you something like “resuming from <your swap partition>”. If all goes well, you’ll have to enter your password to unlock the system, which you should find in the same state you left it in before hibernating! 🙂

suspend-then-hibernate

Another interesting mechanism provided by systemd is suspend-then-hibernate: the system is suspended (to RAM) and after some time it is hibernated (suspended to disk).

The amount of time before hibernating is defined in the file /etc/systemd/sleep.conf. Let’s have a look at the default contents:

By default everything is commented out, but the values, as stated at the beginning of the file, represent the default values. So you can see that suspend-then-hibernate is enabled and that the default delay time before hibernating is 180 minutes. If you’re not happy with that value, uncomment the line and change the value. For example, I set it to 10 minutes:
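That is, in /etc/systemd/sleep.conf:

    [Sleep]
    HibernateDelaySec=10min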

You can now test this functionality with this command:
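The systemd counterpart of the hibernate command above is:

    sudo systemctl suspend-then-hibernate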

The system will suspend to RAM and, if you don’t touch the computer, after 10 minutes you will hear some sounds: the system is effectively hibernating.

If you want to make this mechanism the default suspend mechanism, e.g., you close the lid and the system suspends and then, after some time, hibernates, you CANNOT set the value of SuspendMode in the file above, since that has another meaning. To make suspend-then-hibernate the default suspend mechanism, you have to create this symlink:
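A command along these lines should do (the exact path of the unit file may differ slightly across distributions; on Ubuntu it is under /lib/systemd/system):

    sudo ln -s /lib/systemd/system/systemd-suspend-then-hibernate.service \
      /etc/systemd/system/systemd-suspend.service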

No need to restart: close the lid and the laptop will suspend; after 10 minutes, it will hibernate.

Please, keep in mind that the above command will completely replace the behavior of suspend.

If you want finer-grained control, you might want to edit the file /etc/systemd/logind.conf, in particular, uncomment and set these entries (then you’ll have to reboot or restart the systemd-logind.service service):
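The entries in question are the Handle* ones; for example:

    HandleSuspendKey=suspend-then-hibernate
    HandleLidSwitch=suspend-then-hibernate
    HandleLidSwitchExternalPower=suspend-then-hibernate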

which should be self-explanatory, but I haven’t tested this approach.

Happy hibernating 🙂

Remove SNAPSHOT and Qualifier in Maven/Tycho Builds

Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in MANIFEST, feature.xml and other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don’t remove the .qualifier in the versions and you will get Eclipse bundle and feature versions automatically processed: the .qualifier is replaced with a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central), you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂), and you have to remove .qualifier in the Eclipse bundles accordingly.

To do that, in an automatic way, you can use a combination of Maven plugins and of tycho-versions-plugin.

I’m going to show two different ways of doing that. The example used in this post can be found here: https://github.com/LorenzoBettini/tycho-set-version-example.

First method

The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).

Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.

This is the Maven command to run:
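Something along these lines (the backslashes before ${…} prevent the shell from expanding the expressions, so that Maven resolves them using the properties set by parse-version):

    mvn \
      build-helper:parse-version \
      org.eclipse.tycho:tycho-versions-plugin:set-version \
      -DnewVersion=\${parsedVersion.majorVersion}.\${parsedVersion.minorVersion}.\${parsedVersion.incrementalVersion}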

The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.

Second method

Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.

This is the Maven command to run:
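Something like this (again, a sketch):

    mvn \
      org.codehaus.mojo:versions-maven-plugin:set -DremoveSnapshot=true \
      org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata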

The first goal will change the versions in POMs while the second one will change the versions in Eclipse projects’ metadata.

Configuring the plugins

As usual, it’s best practice to configure the used plugins (in this case, their versions) in the pluginManagement section of your parent POM.

For example, in the parent POM of https://github.com/LorenzoBettini/tycho-set-version-example we have:
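A sketch of that section (the versions shown here are only illustrative, and ${tycho-version} is assumed to be defined elsewhere in the POM):

    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>build-helper-maven-plugin</artifactId>
          <version>3.2.0</version>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>versions-maven-plugin</artifactId>
          <version>2.8.1</version>
        </plugin>
        <plugin>
          <groupId>org.eclipse.tycho</groupId>
          <artifactId>tycho-versions-plugin</artifactId>
          <version>${tycho-version}</version>
        </plugin>
      </plugins>
    </pluginManagement>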

 

Conclusions

In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle; that’s why we run them explicitly.

Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects is quite different from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions in different commands (e.g., first on the parent, then on the other modules of the aggregator, etc.).

Publishing a Maven Site to GitHub Pages

In this tutorial I’d like to show how to publish a Maven Site to GitHub Pages. You can find several documents on the web about this subject, but I decided to publish one myself because the documents I found refer to a deprecated plugin (https://github.com/github/maven-plugins/tree/master/github-site-plugin) or show complicated settings, also when using the official Maven plugin that I’m going to use myself in this post, Apache Maven SCM Publish Plugin, https://maven.apache.org/plugins/maven-scm-publish-plugin/index.html (probably because the documents I found are somehow old, and they refer to an old version of such a plugin, which has probably been improved a lot since then).

I’m going to use a very simple Java example. The final project is available here: https://github.com/LorenzoBettini/maven-site-github-example.

Create the Java Maven project

Let’s create a simple Java project using the archetype (of course, I’m going to use fake names for the groupId and artifactId, but such values should be used consistently from now on):
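For example (the groupId here is a made-up one; my-app is the artifactId used in the rest of the post):

    mvn archetype:generate \
      -DgroupId=com.example \
      -DartifactId=my-app \
      -DarchetypeArtifactId=maven-archetype-quickstart \
      -DinteractiveMode=false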

This will create the my-app folder with the Maven project. Let’s cd into that folder and make sure it compiles fine:

We can also generate the site of this project (in its default shape):
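That is:

    mvn site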

The site will be generated in the target/site folder and can be opened with a browser; for example, let’s open its index.html:

GitHub setup

Let’s publish the Maven project on GitHub; in my case it’s in https://github.com/LorenzoBettini/maven-site-github-example.

Now we have to set up the gh-pages branch in our Git repository. This step is required the first time only. (You might want to have a look at https://help.github.com/en/github/working-with-github-pages/about-github-pages if you’re not familiar with GitHub Pages.)

Let’s prepare the gh-pages branch: this will be a separate branch in the Git repository; we create it as an orphan branch (see also https://maven.apache.org/plugins/maven-scm-publish-plugin/various-tips.html). WARNING: the following commands will wipe out the current contents of the local Git repository, so make sure you don’t have any uncommitted changes! Let’s run the following commands in the main folder of the Git repository:

  1. git checkout --orphan gh-pages to create the branch locally,
  2. rm .git/index ; git clean -fdx to clean the branch content and let it empty,
  3. echo "It works" > index.html to create an initial site content
  4. git add . && git commit -m "initial site content" && git push origin gh-pages to commit and push to the branch gh-pages

The corresponding website will be published at your GitHub project’s corresponding URL; in my case it is https://lorenzobettini.github.io/maven-site-github-example, which should show “It works”.

Now we can go back to the master branch:
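That is:

    git checkout master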

and remove the index.html we previously created.

Maven POM configuration

Let’s see how to configure our POM to automatically publish the Maven site to GitHub pages.

First, we must specify the distribution management section:
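A sketch of it, using the scm:git URL of the repository (the exact URL shape depends on whether you use HTTPS or SSH to access GitHub):

    <distributionManagement>
      <site>
        <id>github</id>
        <url>scm:git:https://github.com/LorenzoBettini/maven-site-github-example.git</url>
      </site>
    </distributionManagement>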

Remember that you will have to use the URL corresponding to your own GitHub project (including your GitHub user id).

Then, we configure the maven-scm-publish-plugin configuration in the pluginManagement section (we configure it there, and then we will call its goals directly from the command line – such a plugin can also be configured to be bound to the Maven lifecycle, but that’s out of the scope of this tutorial, you might want to have a look here in case: https://maven.apache.org/plugins/maven-scm-publish-plugin/usage.html). Note that we specify the gh-pages branch:
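For example (the plugin version is illustrative):

    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-scm-publish-plugin</artifactId>
          <version>3.0.0</version>
          <configuration>
            <!-- publish to the gh-pages branch -->
            <scmBranch>gh-pages</scmBranch>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>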

Publish the site

Now, publishing the Maven site to GitHub Pages is just a matter of running
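a command of this shape:

    mvn clean site site:stage scm-publish:publish-scm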

Wait for the plugin to perform its job.

What the plugin does is

  1. It first checks out the contents of the gh-pages branch from GitHub into a local folder (by default, target/scmpublish-checkout);
  2. Then the locally staged site content is applied to the checkout, issuing appropriate Git commands to add and delete entries, followed by a commit and a push.

Now the site is on GitHub Pages:

Improve the site contents

Just for demonstration, let’s enrich the site a bit by using the Maven Archetype Site, https://maven.apache.org/archetypes/maven-archetype-site.

We layer this archetype upon our existing project, so we must run it from the directory containing our current Maven project and specify the same groupId and artifactId we specified when we created the Maven project:
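Something along these lines (using the same illustrative groupId as before):

    mvn archetype:generate \
      -DarchetypeGroupId=org.apache.maven.archetypes \
      -DarchetypeArtifactId=maven-archetype-site \
      -DgroupId=com.example \
      -DartifactId=my-app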

The archetype will update our project with a src/site directory containing a few example contents in Markdown, APT, etc. It will also update our POM with some configuration for maven-project-info-reports-plugin and for i18n localization. Moreover, a new skin will be used for the final site, maven-fluido-skin.

You might want to have a look locally by regenerating the site.

Let’s publish the new site, again by running
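the same command as before:

    mvn clean site site:stage scm-publish:publish-scm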

Now it’s on GitHub Pages (it might take some time for the new website to be updated on GitHub and browser reload might help):

To jump to the published website you could also use the GitHub web interface: click on the “environment” link and then on the “View deployment” button corresponding to the latest pushed commit on the gh-pages branch:

Now you could experiment by adding/changing/removing the contents of the directory src/site and then publish the website again through Maven.

Remember

Your Maven project (e.g., the master branch) contains the sources of the site, while the gh-pages branch on GitHub contains the generated website. You won’t modify the contents of the gh-pages branch manually: the Apache Maven SCM Publish Plugin will do that for you.

Hope you enjoy this tutorial 🙂

Installing KDE on top of Ubuntu

If you like to use KDE you probably install Kubuntu directly, instead of Ubuntu, which has been based on Gnome for a long time now.

However, I like to have several Desktop Environments, and, now and then, I like to switch from Gnome to KDE and then back. Currently, I’m using Gnome for most of the time, that’s why I install Ubuntu (instead of Kubuntu).

In any case, you can still install KDE Plasma on top of Ubuntu. The following has been tested on an Ubuntu Disco 19.04, but I guess it will work also on previous distributions.

For a reduced installation of KDE you might want to install only these packages
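A possible minimal set (package names as in recent Ubuntu releases; the exact list may vary):

    sudo apt install plasma-desktop kwin-addons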

In particular, kwin-addons includes some useful things: it contains additional KWin desktop and window switchers shipped in the Plasma 5 addons module.

When installation has finished you may want to reboot and then, on the Login screen, you can use the gear icon for specifying that you want to enter the KDE Plasma environment instead of the default Gnome environment.

The above packages should provide you with enough stuff to enjoy a Plasma experience, but they lack many (K)ubuntu configurations and addons for KDE.

If you want more Kubuntu stuff, you might want to install the “huge” package:
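That is:

    sudo apt install kubuntu-desktop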

And then you get a real Kubuntu KDE Plasma experience.

Note that this will replace the classic Ubuntu splash screen shown when booting the OS with the Kubuntu one. If you want to go back to the original splash screen, it’s just a matter of removing the following packages:
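Most likely the Kubuntu Plymouth themes (I’d double check with apt what kubuntu-desktop actually pulled in):

    sudo apt remove plymouth-theme-kubuntu-logo plymouth-theme-kubuntu-text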

Remember that you can also use the Kubuntu Backport PPA for enjoying more recent versions of KDE software.

Enjoy Gnome and KDE! 🙂

My new book on TDD, Build Automation and Continuous Integration

I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.

The title is indeed, “Test-Driven Development, Build Automation, Continuous Integration
(with Java, Eclipse and friends)
” and can be bought from https://leanpub.com/tdd-buildautomation-ci.

The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code). Using Java, Eclipse and their ecosystems.

The main subject of this book is software testing. The main premise is that testing is a crucial part of software development. You need to make sure that the software you write behaves correctly. You can manually test your software. However, manual tests require lots of manual work and are error prone.

On the contrary, this book focuses on automated tests, which can be done at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.

In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.

Throughout the book we will use Java as the main programming language. We use Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.

It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use one of the mainstream tools for build automation in the Java world, Maven.

We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration. In particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where each time we commit a change in our Git repository, the CI server will automatically run the automated build process, compiling all the code, running all the tests and possibly create additional reports concerning the quality of our code and of our tests.

The code quality of tests can be measured in terms of a few metrics using code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such a static analysis we will use SonarQube and its free cloud version SonarCloud.

When we need our application to connect to a service like a database, we will use Docker, a virtualization program based on containers that is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers will take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation and in the CI server.

Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.

The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.

I hope you enjoy the book 🙂

How to install Linux on a USB drive using Virtualbox

In this tutorial I’m going to show how to install Linux on a USB drive using Virtualbox. I find this useful for testing a Linux distribution. Note that using a live distribution only gives you a limited testing experience, while installing Linux on a USB drive will give you the full experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer).

You can use this procedure to install Linux on any USB drive, that is, either a USB stick or an external USB hard drive.

You could burn a live image on a USB stick, boot it, and then install Linux on a second USB stick from the live system, but using Virtualbox is faster and does not require you to create a live USB stick just to install it on a second USB stick.

I’m going to use Ubuntu (17.10) as the main system and install Fedora on a USB stick (I’ve already downloaded the Fedora 27 ISO).

First of all, let’s install Virtualbox:
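On Ubuntu, that’s simply:

    sudo apt install virtualbox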

Add your user to the Virtualbox users, or you won’t be able to use USB 3 in the virtual machine: run this command and reboot:
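The group is called vboxusers:

    sudo usermod -aG vboxusers $USER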

Then run Virtualbox and create a new Linux machine (increase the memory a bit, depending on your actual physical memory).

Since we’ll use this machine only for booting the live iso, there’s no need to create a disk

Now let’s go to Settings of the newly created machine, and in “USB” select USB 3.0:

In “Storage”, select the “Empty” disk icon, then in “Optical Drive” choose “Choose a virtual optical disk file…” and select the ISO image of the live distribution you want to boot (in my case, the Fedora 27 ISO I had already downloaded); then check the “Live CD/DVD” checkbox.

Now “Start” the virtual machine (which will boot the Live ISO).

You’ll be asked to confirm the virtual optical disk file you had previously specified in the settings.

From now on, you’ll boot the live system in the virtual machine; the details depend on the distribution you chose (in my case, Fedora). You should already be familiar with this procedure if you’ve ever installed a Linux distribution starting from a live system.

For example, we choose “Try Fedora” and we can perform some tests (for instance, that we can access the Internet from the virtual machine; being able to access the Internet might be crucial later when installing the distribution on the USB stick since the installation might want to download some upgrades).

Now let’s connect the USB stick where we want to install Linux; then in the Devices menu of the Virtualbox machine, you should select the USB stick you’ve just inserted (in my case it’s a SanDisk):

Now the USB stick is mounted in the virtual machine, and it will be the target disk of our installation.

We can now start the Fedora installation in the virtual machine; again, this assumes you’re familiar with the installation of this distribution. The important part will be to select the USB stick as the target of the installation. Actually, the USB stick should be the only available option (unless you manually mounted other drives in the virtual machine):

Continue the installation and once it’s done, you can reboot the computer and make sure to boot from the USB stick.

Enjoy your new Linux installation on the USB stick 🙂

Eclipse tested with a few Gnome themes

In this small blog post, I’ll show how Eclipse looks in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, makes Eclipse look not very nice… see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut “Filter Files” textbox in the “Git Staging” view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from Ubuntu default theme 😉

Analyzing Eclipse plug-in projects with Sonarqube

In this tutorial I’m going to show how to analyze multiple Eclipse plug-in projects with Sonarqube. In particular, I’m going to focus on peculiarities that have to be taken care of due to the standard way Sonarqube analyzes sources and to the structure of typical Eclipse plug-in projects (concerning tests and code coverage).

The code of this example is available on Github: https://github.com/LorenzoBettini/tycho-sonarqube-example

This can be seen as a follow-up to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects”. I’ll basically reuse almost the same structure of that example and a few other things. The Jacoco report part is not related to Sonarqube, but I’ll leave it there.

The structure of the projects is as follows:

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The project example.tests.parent contains all the common configurations for test projects (and the test report; please refer to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects” for the details of that report project, which is not strictly required for Sonarqube).

This is its pom

Note that this also shows a possible way of dealing with a custom argLine for the tycho-surefire configuration: tycho.testArgLine will be automatically set by the jacoco:prepare-agent goal, with the path of the jacoco agent (needed for code coverage); the property tycho.testArgLine is automatically used by tycho-surefire. But if you have a custom configuration of tycho-surefire with additional arguments you want to pass in argLine, you must be careful not to overwrite the value set by jacoco. If you simply refer to tycho.testArgLine in the custom tycho-surefire configuration’s argLine, it will work when the jacoco profile is active, but it will fail when it is not active, since that property will not exist. Don’t try to define it as an empty property by default, since when tycho-surefire runs it will use that empty value, ignoring the one set by jacoco:prepare-agent (argLine’s properties are resolved before jacoco:prepare-agent is executed). Instead, use another level of indirection: refer to a new property, e.g., additionalTestArgLine, which by default is empty. In the jacoco profile, set additionalTestArgLine referring to tycho.testArgLine (in that profile, that property is surely set by jacoco:prepare-agent). Then, in the custom argLine, refer to additionalTestArgLine. An example is shown in the project example.plugin2.tests pom:

You can check that code coverage works as expected by running (it’s important to verify that jacoco has been configured correctly in your projects before running Sonarqube analysis: if it’s not working in Sonarqube then it’s something wrong in the configuration for Sonarqube, not in the jacoco configuration, as we’ll see in a minute):

Make sure that example.tests.report/target/site/jacoco-aggregate/index.html reports some code coverage (in this example, example.plugin1 intentionally has some uncovered code).

Now I assume you already have Sonarqube installed.

Let’s run a first Sonarqube analysis with

This is the result:

So Unit Tests are correctly collected! What about Code Coverage? Something is shown, but if you click on that you see some bad surprises:

Code coverage only on tests (which is irrelevant) and no coverage for our SUT (Software Under Test) classes!

That’s because jacoco .exec files are by default generated in the target folder of the tests project; so, when Sonarqube analyzes the projects:

  • it finds the jacoco.exec file when it analyzes a tests project but can only see the sources of the tests project (not the ones of the SUT)
  • when it analyzes a SUT project it cannot find any jacoco.exec file.

We could fix this by configuring the maven jacoco plugin to generate jacoco.exec in the SUT project, but then the aggregate report configuration should be updated accordingly (while it works out of the box with the defaults). Another way of fixing the problem is to use the Sonarqube maven property sonar.jacoco.reportPaths and “trick” Sonarqube like that (we do that in the parent pom properties section):
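A sketch of such a property (the exact relative path depends on your layout; here I assume the foo / foo.tests naming convention mentioned below):

    <sonar.jacoco.reportPaths>${project.basedir}/../${project.artifactId}.tests/target/jacoco.exec</sonar.jacoco.reportPaths>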

This way, when it analyzes example.plugin1, it will use the jacoco.exec found in the example.plugin1.tests project (if you follow the convention foo and foo.tests, this works out of the box; otherwise, you have to list all the jacoco.exec paths of all the projects in that property, separated by commas).

Let’s run the analysis again:

OK, now code coverage is collected on the SUT classes as we wanted. Of course, now test classes appear as uncovered (remember, when it analyzes example.plugin1.tests it now searches for jacoco.exec in example.plugin1.tests.tests, which does not even exist).

This leads us to another problem: test classes should be excluded from Sonarqube analysis. This works out of the box in standard Maven projects because source folders of SUT and source folders of test classes are separate and in the same project (that’s also why code coverage for pure Maven projects works out of the box in Sonarqube); this is not the case for Eclipse projects, where SUT and tests are in separate projects.

In fact, issues are reported also on test classes:

We can fix both problems by putting in the tests.parent pom properties these two Sonarqube properties (note the link to the Eclipse bug about this behavior)

This will be inherited by our tests projects and for those projects, Sonarqube will not analyze test classes.

Run the analysis again and see the results: code coverage only on the SUT and issues only on the SUT (remember that, in this example, MyClass1 is intentionally not completely covered):

You might be tempted to use the property sonar.skip set to true for test projects, but then you would lose the JUnit test reports collection.

The final bit of customization is to exclude the Main.java class from code coverage. We have already configured the jacoco maven plugin to do so, but this won’t be picked up by Sonarqube (that configuration only tells jacoco to skip that class when it generates the HTML report).

We have to repeat that exclusion with a Sonarqube maven property, in the parent pom:
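Something like this:

    <sonar.coverage.exclusions>**/Main.java</sonar.coverage.exclusions>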

Note that in the jacoco maven configuration we had excluded a .class file, while here we exclude Java files.

Run the analysis again and Main is not considered in code coverage:

Now you can have fun in fixing Sonarqube issues, which is out of the scope of this tutorial 🙂

This example is also analyzed from Travis using Sonarcloud (https://sonarcloud.io/dashboard?id=example%3Aexample.parent).

Hope you enjoyed this tutorial and Happy new year! 🙂

 

Formatting Java method calls in Eclipse

Especially with lambdas, you may end up with a chain of method calls that you’d like to have automatically formatted with each invocation on its own line (except maybe for the very first invocation).

You can configure the Eclipse Java formatter in that respect; you just need to reach the right option (“Force split” is necessary to have each invocation on a separate line):

and then you can have method calls formatted automatically like that:

 

How to add Eclipse launcher in Gnome dock

In this post I’ll show how to add an Eclipse launcher as a favorite (pinned) application in the Gnome dock (I’m using Ubuntu Artful). This post is inspired by http://blog.ttoine.net/en/2016/06/30/how-to-add-eclipse-neon-launcher-in-gnu-linux-menus-and-launchers/.

First of all, you need to create a .desktop file, where you need to specify the full path of your Eclipse installation:
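A sketch of such a file (adjust the Exec and Icon paths to your own installation):

    [Desktop Entry]
    Type=Application
    Name=Eclipse Java
    Exec=/home/bettini/eclipse/java-latest-released/eclipse/eclipse
    Icon=/home/bettini/eclipse/java-latest-released/eclipse/icon.xpm
    Terminal=false
    Categories=Development;IDE;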

This is relative to my installation of Eclipse which is in the folder /home/bettini/eclipse/java-latest-released/eclipse, note the executable “eclipse” and the “icon.xpm”. The name “Eclipse Java” is what will appear as the launcher name both in Gnome applications and later in the dock.

Make this file executable.

Copy this file in your home folder in .local/share/applications.

Now in Gnome Activities search for such a launcher and it should appear:

Select it and make sure that Eclipse effectively runs.

Unfortunately, in the dock, there’s no contextual menu for you to add it as a favorite and pin it to the dock:

But you can still add it to the dock favorites (and thus pin it there) by using the corresponding contextual menu that is available when the launcher appears in the Activities:

And there you go: the Eclipse launcher is now on your dock and it’s there to stay 🙂