Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at Dipartimento di Informatica, Università di Torino, Italy.
He holds a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena).
His research interests cover the design, theory, and implementation of statically typed programming languages and Domain-Specific Languages.
He is also the author of about 90 research papers published in international conferences and journals.
I recently bought a PineBook Pro (maybe I'll review it in a future post). The first thing that annoyed me was that right-clicking with a two-finger tap on the touchpad was basically unusable: you had to be extremely fast.
In some forums, the solution is to issue this command
synclient MaxTapTime=225
but then you have to make it somehow permanent.
Actually, it’s much easier than that: just use the KDE Setting (Tap Detection, Maximum time), and this will make it permanent right away:
Well, now that Bintray is shutting down, and Sourceforge is quite slow in serving an Eclipse update site, I decided to publish my Eclipse p2 composite update sites on GitHub Pages.
GitHub Pages might not be ideal for serving binaries, and it has a few limitations. However, such limitations (e.g., published sites may be no larger than 1 GB, and sites have a soft bandwidth limit of 100 GB per month and a soft limit of 10 builds per hour) are not that crucial for an Eclipse update site, whose artifacts are not that huge. Moreover, at least my projects are not going to serve more than 100 GB per month (unfortunately, I might say 😉).
In this tutorial, I’ll show how to do that, so that you can easily apply this procedure also to your projects!
The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted). The idea is that you can run this Maven build (basically, “mvn deploy”) on any CI server (as long as you have write-access to the GitHub repository hosting the update site – more on that later). Thus, you will not depend on the pipeline syntax of a specific CI server (Travis, GitHub Actions, Jenkins, etc.), though, depending on the specific CI server you might have to adjust a few minimal things.
The p2 children repositories and the p2 composite repositories will be published with standard Git operations since we publish them in a GitHub repository.
As repositories continually grow in size they become harder to manage. The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.
To achieve this, all published p2 repositories must remain available, each one with its own p2 metadata that should never be overwritten. The only metadata we will overwrite is the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.
Directory Structure
I want to be able to serve these composite update sites:
the main one collects all the versions
a composite update site for each major version (e.g., 1.x, 2.x, etc.)
a composite update site for each major.minor version (e.g., 1.0.x, 1.1.x, 2.0.x, etc.)
What I aim at is to have the following paths:
releases: in this directory, all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20210307-2037, 1.1.0.v20210307-2104, etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
updates: in this directory, the metadata for major and major.minor composite sites will be uploaded.
root: the main composite update site collecting all versions.
To summarize, we’ll end up with a remote directory structure like the following one
├──compositeArtifacts.xml
├──compositeContent.xml
├──p2.index
├──README.md
├──releases
│ ├──1.0.0.v20210307-2037
│ │ ├──artifacts.jar
│ │ ├──...
│ │ ├──features...
│ │ └──plugins...
│ ├──1.0.0.v20210307-2046...
│ ├──1.1.0.v20210307-2104...
│ └──2.0.0.v20210308-1304...
└──updates
├──1.x
│ ├──1.0.x
│ │ ├──compositeArtifacts.xml
│ │ ├──compositeContent.xml
│ │ └──p2.index
│ ├──1.1.x
│ │ ├──compositeArtifacts.xml
│ │ ├──compositeContent.xml
│ │ └──p2.index
│ ├──compositeArtifacts.xml
│ ├──compositeContent.xml
│ └──p2.index
└──2.x
├──2.0.x
│ ├──compositeArtifacts.xml
│ ├──compositeContent.xml
│ └──p2.index
├──compositeArtifacts.xml
├──compositeContent.xml
└──p2.index
Thus, if you want, you can provide these sites to your users (I’m using the URLs that correspond to my example):
https://lorenzobettini.github.io/p2composite-github-pages-example-updates for the main global update site: every new version will be available when using this site;
https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x for all the releases with major version 1: for example, the user won’t see new releases with major version 2;
https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x/1.0.x for all the releases with major version 1 and minor version 0: the user will only see new releases of the shape 1.0.0, 1.0.1, 1.0.2, etc., but NOT 1.1.0, 1.2.3, 2.0.0, etc.
If you want to change this structure, you have to carefully tweak the ant file we’ll see in a minute.
Building Steps
During the build, before the actual deployment, we’ll have to update the composite site metadata, and we’ll have to do that locally.
The steps that we’ll perform during the Maven/Tycho build are:
Copy the p2 repository in the cloned repository in a subdirectory of the releases directory (the name of the subdirectory has the same qualified version of the project, e.g., 1.0.0.v20210307-2037);
Update the composite update sites information in the cloned repository (using the p2 tools);
Commit and push the updated clone to the remote GitHub repository (the one hosting the composite update site).
First of all, in the parent POM, we define the following properties, which of course you need to tweak for your own projects:
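A sketch of such a properties section (the values are only illustrative; adjust the repository URL and the label for your own project):

XHTML
<properties>
  <!-- The GitHub repository hosting the composite update site
       (an SSH URL is used here; an HTTPS URL with a token works as well) -->
  <github-update-repo>git@github.com:LorenzoBettini/p2composite-github-pages-example-updates.git</github-update-repo>
  <!-- The label that will appear in the composite metadata -->
  <site.label>Composite Site Example</site.label>
</properties>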
It should be clear which properties you need to modify for your project. In particular, the github-update-repo is the URL (with authentication information) of the GitHub repository hosting the composite update site, and the site.label is the label that will be put in the composite metadata.
Then, in the parent POM, we configure in the pluginManagement section all the versions of the plugin we are going to use (see the sources of the example on GitHub).
The most interesting configuration is the one for the tycho-packaging-plugin, where we specify the format of the qualified version:
XHTML
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <format>'v'yyyyMMdd'-'HHmm</format>
  </configuration>
</plugin>
Moreover, we create a profile release-composite (which we’ll also use later in the POM of the site project), where we disable the standard Maven plugins for install and deploy. Since we are going to release our Eclipse p2 composite update site during the deploy phase, but we are not interested in installing and deploying the Maven artifacts, we skip the standard Maven plugins bound to those phases:
XHTML
<profile>
  <!-- Activate this profile to perform the release to GitHub Pages -->
  <id>release-composite</id>
  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <artifactId>maven-install-plugin</artifactId>
          <executions>
            <execution>
              <id>default-install</id>
              <phase>none</phase>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <artifactId>maven-deploy-plugin</artifactId>
          <configuration>
            <skip>true</skip>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</profile>
The interesting steps are in the site project, the one with <packaging>eclipse-repository</packaging>. Here we also define the profile release-composite and we use a few plugins to perform the steps involving the Git repository described above (remember that these configurations are inside the profile release-composite, of course in the build/plugins section):
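A condensed sketch of the Git-related part (the exec-maven-plugin executions detailed below; the exact arguments are illustrative):

XHTML
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <!-- phase pre-package: clone the repository hosting the update site
         (before Tycho builds the p2 repository) -->
    <execution>
      <id>git-clone</id>
      <phase>pre-package</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>git</executable>
        <arguments>
          <argument>clone</argument>
          <argument>--depth=1</argument>
          <argument>${github-update-repo}</argument>
          <argument>${project.build.directory}/checkout</argument>
        </arguments>
      </configuration>
    </execution>
    <!-- two analogous executions (not shown here) run git inside
         ${project.build.directory}/checkout to add and commit the changes
         in the phase verify, and to push them in the phase deploy -->
  </executions>
</plugin>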
Let’s see these configurations in detail. In particular, it is important to understand how the goals of the plugins are bound to the phases of the default lifecycle; remember that on the phase package, Tycho will automatically create the p2 repository and it will do that before any other goals bound to the phase package in the above configurations:
with the build-helper-maven-plugin we parse the current version of the project, in particular, we set the properties holding the major and minor versions that we need later to create the composite metadata directory structure; its goal is automatically bound to one of the first phases (validate) of the lifecycle;
with the exec-maven-plugin we configure the execution of the Git commands:
we clone the Git repository of the update site (with --depth=1 we only get the latest commit in the history; the previous commits are not interesting for our task); this is done in the phase pre-package, that is, before the p2 repository is created by Tycho; the Git repository is cloned in the output directory target/checkout
in the phase verify (that is, after the phase package), we commit the changes (which will be done during the phase package as shown in the following points)
in the phase deploy (that is, the last phase that we’ll run on the command line), we push the changes to the Git repository of the update site
with the maven-resources-plugin we copy the p2 repository generated by Tycho into the target/checkout/releases directory in a subdirectory with the name of the qualified version of the project (e.g., 1.0.0.v20210307-2037);
with the tycho-eclipserun-plugin we create the composite metadata; we rely on the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant task for managing composite repositories (p2.composite.repository). The Ant tasks are defined in the Ant file packaging-p2composite.ant, stored in the site project. In this file, there are also a few properties that describe the layout of the directories described before. Note that we need to pass a few properties, including the site.label, the directory of the local Git clone, and the major and minor versions that we computed before.
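At its core, the composite metadata is generated with the p2.composite.repository Ant task; a minimal sketch of its usage follows (the property names are illustrative; the actual packaging-p2composite.ant of the example is more elaborate):

XHTML
<p2.composite.repository>
  <repository compressed="true" location="${composite.repository.directory}" name="${site.label}" />
  <add>
    <!-- the child repository to add, relative to the composite location -->
    <repository location="${child.repository}" />
  </add>
</p2.composite.repository>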
Keep in mind that in all the above steps, non-existing directories will be automatically created on-demand (e.g., by the maven-resources-plugin and by the p2 Ant tasks). This means that the described process will work seamlessly the very first time when we start with an empty Git repository.
Now, from the parent POM on your computer, it’s enough to run
mvn deploy -Prelease-composite
and the release will be performed. When cloning you’ll be asked for the password of the GitHub repository, and, if not using an SSH agent or a keyring, also when pushing. Again, this depends on the URL of the GitHub repository; you might use an HTTPS URL that relies on the GitHub token, for example.
If you want to make a few local tests before actually releasing, you might stop at the phase verify and inspect the target/checkout to see whether the directories and the composite metadata are as expected.
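For such a dry run, something along these lines should do (using the profile and layout of this example):

Shell
mvn clean verify -Prelease-composite
# then inspect the local clone inside the site project's output directory, e.g.:
ls <the site project>/target/checkout/releases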
You might also want to add another execution to the tycho-eclipserun-plugin to add a reference to another Eclipse update site that is required to install your software. The Ant file provides a task for that, p2.composite.add.external that will store the reference into the innermost composite child (e.g., into 1.2.x); here’s an example that adds a reference to the Eclipse main update site:
XHTML
<execution>
<!-- Add composite of required software update sites...
(if already present they won't be added again) -->
For example, in my Xtext projects, I use this technique to add a reference to the Xtext update site corresponding to the Xtext version I’m using in that specific release of my project. This way, my update site will be “self-contained” for my users: when using my update site for installing my software, p2 will be automatically able to install also the required Xtext bundles!
Releasing from GitHub Actions
The Maven command shown above can be used to perform a release from your computer. If you want to release your Eclipse update site directly from GitHub Actions, there are a few more things to do.
create a GitHub personal access token with permission to push to the repository hosting the update site, and store it as a secret in the GitHub repository of the project (where we run the GitHub Actions workflow); in this example, the secret is called ACTIONS_TOKEN;
when running the Maven deploy command, we need to override the property github-update-repo by specifying a URL for the GitHub repository with the update site using the HTTPS syntax and the encrypted ACTIONS_TOKEN; in this example, it is https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates;
we also need to configure in advance the Git user and email, with some values, otherwise, Git will complain when creating the commit.
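Putting it together, the relevant part of such a workflow might look like the following sketch (the Java setup and caching steps are omitted; the Git user name and email are just examples, while the secret and repository names are the ones of this example):

YAML
name: Release composite update site
on:
  push:
    branches:
      - release
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... set up Java and the Maven cache as usual ...
      - name: Configure Git user
        run: |
          git config --global user.name 'GitHub Actions'
          git config --global user.email 'actions@github.com'
      - name: Build and deploy with Maven
        run: >
          mvn deploy -Prelease-composite
          -Dgithub-update-repo=https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates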
The workflow is configured to be executed only when you push to the release branch.
Remember that we are talking about the Git repository hosting your project, not the one hosting your update site.
Final thoughts
With the procedure described in this post, you publish your update sites and the composite metadata during the Maven build, so you never deal manually with the GitHub repository of your update site. However, you can always do that! For example, you might want to remove a release. It's just a matter of cloning that repository, doing your changes (i.e., removing a subdirectory of releases and updating the composite metadata accordingly), committing, and pushing. Now and then you might also clean up the history of such a Git repository (the history is not important in this context) by pushing with --force after resetting the Git history. By the way, by tweaking the configurations above you could also do that every time you do a release: just commit with --amend and push with --force!
Finally, you could also create an additional GitHub repository for snapshot releases of your update sites, or for milestones, or release candidates.
I've always used caching mechanisms during the builds in Travis CI to speed up the builds: caching Maven dependencies, especially in big projects, can save a lot of time. In my case, I'm mostly talking about Eclipse plug-in projects, built with Maven/Tycho, where the target platform resolution might have to download a few hundred megabytes. Thus, I wanted to use caching also in GitHub Actions, and there's an action for that.
In this post, I'll show my strategies for using the cache, in particular, using different workflows based on different operating systems, which are triggered only on some specific events. I'll use a very simple example, but I'm currently using this strategy on this Xtext project: https://github.com/LorenzoBettini/edelta, which uses more than 300 MB of dependencies.
The post assumes that you’re already familiar with GitHub Actions.
Warning: Please keep in mind that caches will also be evicted automatically (currently, the documentation says that “caches that are not accessed within the last week will also be evicted”). However, we can still benefit from caches if we are working on a project for a few days in a row.
To experiment with building mechanisms, I suggest you use a very simple example. I’m going to use a simple Maven Java project created with the corresponding Maven archetype: a Java class and a trivial JUnit test. The Java code is not important in this context, and we’ll concentrate on the build automation mechanisms.
This is a pretty standard workflow for a Java project built with Maven. This workflow runs for every push on any branch and every PR.
Note that we specify to cache the directory where Maven stores all the downloaded artifacts, ~/.m2.
For the cache key, we use the OS where our build is running, a constant string “-m2-” and the result of hashing all the POM files (we’ll see how we rely on this hashing later in this post).
Remember that the cache key will be used in future builds to restore the files saved in the cache. When no cache is found with the given key, the action searches for alternate keys if the restore-keys has been specified. As you see, we specified as the restore key something similar to the actual key: the running OS and the constant string “-m2-” but no hashing. This way, if we change our POMs, the hashing will be different, but if a previous cache exists we can still restore that and make use of the cached values. (See the official documentation for further details.) We’ll then have to download only the new dependencies if any. The cache will then be updated at the end of the successful job.
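The cache step itself looks like this (a sketch matching the key and restore key just described):

YAML
- name: Cache Maven packages
  uses: actions/cache@v2
  with:
    path: ~/.m2
    key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
    restore-keys: ${{ runner.os }}-m2-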
I usually rely on this strategy for the CI of my projects:
build every push on any branch using a Linux build environment;
build PRs on any branch also on Windows and macOS environments (actually, I wasn't using Windows with Travis CI since it did not provide Java support in that environment; that's another advantage of GitHub Actions, which provides Java support also on Windows)
Thus, I have another workflow definition just for PRs (stored in .github/workflows/pr.yml):
YAML
# This workflow will build a Java project with Maven
Besides the build matrix for OSes, that’s basically the same as the previous workflow. In particular, we use the same strategy for defining the cache key (and restore key). Thus, we have a different cache for each different operating system.
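The relevant difference is the OS matrix; a sketch (whether to also include Linux here is up to you; the remaining steps mirror the previous workflow):

YAML
on:
  pull_request:
jobs:
  build:
    strategy:
      matrix:
        os: [windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      # ... same checkout, Java, cache and Maven steps as in the main workflow ...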
Keep in mind what the documentation says about cache access:
A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch. For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch feature-b has the base branch feature-a, a workflow triggered on feature-b would have access to caches created in the default branch (main), feature-a, and feature-b.
Access restrictions provide cache isolation and security by creating a logical boundary between different workflows and branches. For example, a cache created for the branch feature-a (with the base main) would not be accessible to a pull request for the branch feature-b (with the base main).
What does that mean in our scenario? Since the workflow running on Windows and macOS is executed only on PRs, the cache for these two configurations will never be saved for the master branch. In turn, this means that each time we create a new PR, this workflow will have no chance of finding a cache to restore: the branch of the PR is new (so no cache is available for that branch) and the base branch (typically, "master" or "main") will have no cache saved for these two OSes. Summarizing, the builds of the PRs for these two configurations will always have to download all the Maven dependencies from scratch. Of course, if we don't immediately merge the PR and we push other commits on the branch of the PR, the builds for these two OSes will find a saved cache (if the previous builds of the PR succeeded), but, in any case, the first build of each new PR for these two OSes will take more time (actually, much more time in a complex project with lots of dependencies).
Thus, if we want to benefit from caching also on these two OSes, we need another workflow for Windows and macOS that runs when a PR is merged, so that the cache will be stored also for the master branch (actually, we could use this strategy when merging any PR into any base branch, not necessarily the main one).
Here’s this additional workflow (stored in .github/workflows/pr-merge.yml):
YAML
# This workflow will build a Java project with Maven
# This runs only when a PR is merged, to create or update the cache
# This is useful because the builds on Windows and macOS are triggered
# only on PR, so, for new branches, the cache is always empty, since
# the builds on PR will not update the cache on the master branch.
# This workflow will update the cache for Windows and macOS on the master branch
# when the PR is merged (actually it will update on the base branch of a merged PR).
# The build consists in a Maven run but skipping tests
Note that we intercept the push event (since a merge of a PR is actually a push), but we have an if statement that enables the workflow only when the commit message contains the string "Merge pull request", which is the default message when merging a PR on GitHub. In this example, we are only interested in PRs merged into the master branch or into any branch starting with "experiments", but you can adjust that as you see fit. Furthermore, since this workflow is only meant to update the Maven dependency cache, we skip the tests (with -DskipTests) to save some time (especially in a complex project with lots of tests).
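A sketch of the corresponding bits of the workflow (the branch names and the Maven command are the ones of this example):

YAML
on:
  push:
    branches:
      - master
      - 'experiments*'
jobs:
  build:
    # run only for the merge commit of a PR (default GitHub merge commit message)
    if: contains(github.event.head_commit.message, 'Merge pull request')
    strategy:
      matrix:
        os: [windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      # ... checkout, Java and cache steps as before, then:
      - name: Build with Maven (skipping tests, just to update the cache)
        run: mvn verify -DskipTests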
This way, after the first PR merged, the PR workflows running on Windows and macOS will find a cache (at least as a starting point).
We can also do better than that and avoid running the Maven build if there's no need to update the cache. Remember that we use the result of hashing all the POM files in our cache key? The idea is that if our POMs do not change, we don't expect our dependencies to change either (as long as we're not using SNAPSHOT dependencies). Now, in the documentation, we also read:
When key matches an existing cache, it’s called a cache hit, […] When key doesn’t match an existing cache, it’s called a cache miss, and a new cache is created if the job completes successfully.
The idea is to skip the Maven step in the above workflow “Updates Cache on Windows and macOS” if we have a cache hit since we expect no new dependencies are needed to be downloaded (our POMs haven’t changed). This is the interesting part to change:
YAML
- name: Cache Maven packages
  uses: actions/cache@v2
  id: m2cache # note the cache id, see below for cache-hit
Note that we need to define an id for the cache to intercept the cache hit or miss and the id must match the id in the if statement.
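The Maven step is guarded accordingly (a sketch; note that steps.m2cache matches the id defined above):

YAML
- name: Build with Maven
  # skipped entirely on a cache hit: the POMs are unchanged, so no new dependencies are expected
  if: steps.m2cache.outputs.cache-hit != 'true'
  run: mvn verify -DskipTests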
This way, if we have a cache hit the workflow for updating the cache on Windows and macOS will be really fast since it won’t even run the Maven build for updating the cache.
If we change the POM, e.g., switch to JUnit 4.13.1, push the change, create a PR, and merge it, then, the workflow for updating the cache will actually run the Maven build since we have a cache miss: the key of the cache has changed. Of course, we’ll still benefit from the already cached dependencies (and all the Maven plugins already downloaded) and we’ll update the cache with the new dependencies for JUnit 4.13.1.
By the way, when two jobs try to save a cache with the same key at the same time, one of them may report a warning like "Unable to reserve cache with key …, another job may be creating this cache."
Another experiment I tried was to remove the running OS from the cache key, e.g., m2-${{ hashFiles('**/pom.xml') }} instead of ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}, and to use a corresponding restore key, like m2- instead of ${{ runner.os }}-m2-. I was hoping to reuse the same cache across different OS environments. This seems to work for macOS, which seems able to reuse the Linux cache. Unfortunately, it does not work for Windows, so I gave up on that solution.
All my computers dual boot Ubuntu and Windows, though I'm using the former 99% of the time 😉 On one of my laptops I also started to evaluate Manjaro (probably a blog post will come in the near future). I let Manjaro install the main EFI Grub boot loader and I noticed that, upon reboots, its Grub configuration remembers the last choice! That is, if I booted Ubuntu (not the first choice in the menu) and I reboot, then the "Ubuntu" entry is the one selected by default. The same holds for Windows.
I find this feature really cool and useful:
if I had previously used Ubuntu and possibly hibernated the computer, then, even after a few days, when I boot the laptop I know which OS I had booted the last time;
if I boot Windows (once a month?), I will probably get many updates that require a few reboots; if I left the computer unattended while rebooting, I used to find myself back in Linux (the primary default choice in Grub) and I had to reboot and choose Windows again so that the updates could be installed (quite annoying when Windows updates require a few reboots).
I thought that Manjaro had some special tweaks in the installed grub, but then I learned that’s a standard feature of Grub!
You just have to add these two lines in your /etc/default/grub:
Shell
# Remember the last choice
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
Save and run
sudo update-grub
And that’s all! From then on Grub will remember your last choice 🙂
I have never been able to make hibernation (suspend to disk) work on my laptops (Dell M3800 and Dell XPS 13 9370) on Ubuntu with systemd. The symptom was that running
sudo systemctl hibernate
was making the system shut down, but then, upon restart, the system was not restored: it was just like booting the system from scratch.
I had also tried with uswsusp (which is installed if you install the package hibernate), with its program s2disk, but I experienced many problems: it wasn’t working reliably and it was making booting (even standard booting) much longer (several seconds more).
Then, after looking at several blog posts, I found that the solution is rather simple, and I’ll detail the steps here. I’ll also show how to use suspend-then-hibernate.
First, you need to have swap already set up, e.g., a swap partition (I think a swap file would work as well, but in that case the configuration is slightly more complex). For example, in /etc/fstab you should have something like:
Shell
# swap was on /dev/<your swap partition> during installation
UUID=<the UUID of your swap partition> none swap sw 0 0
The UUID is important and you should take note of it.
Then, you must make sure initramfs is “aware” of your swap partition and that it is already able to “resume” from that. This should already be the case but you can try to run
sudo update-initramfs -u
and after some time you should see something like:
update-initramfs: Generating /boot/initrd.img-<your current kernel version>
I: The initramfs will attempt to resume from /dev/<your swap partition>
I: (UUID=<the UUID of your swap partition>)
I: Set the RESUME variable to override this.
The UUID must be the same as your swap UUID in the /etc/fstab.
Now, it’s just a matter of editing your /etc/default/grub and make sure you specify resume in GRUB_CMDLINE_LINUX_DEFAULT, with the UUID of your swap partition. So it should be something like (remember that <UUID of your swap partition> must be replaced with the UUID):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=<UUID of your swap partition>"
Save the file and update grub:
sudo update-grub
Reboot the system and now try to hibernate again (first you might want to start a few applications so that you’re sure that the system is effectively restored to the same state):
sudo systemctl hibernate
Wait for the system to shut down and then switch it on again. The splash screen should mention that it is "resuming from <your swap partition>". If all goes well, you'll have to enter your password to unlock the system, which you should find in the state you left it in before hibernating! 🙂
suspend-then-hibernate
Another interesting mechanism provided by systemd is suspend-then-hibernate: the system is suspended (to RAM) and after some time it is hibernated (suspended to disk).
The amount of time before hibernating is defined in the file /etc/systemd/sleep.conf. Let’s have a look at the default contents:
Shell
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See systemd-sleep.conf(5) for details
[Sleep]
#AllowSuspend=yes
#AllowHibernation=yes
#AllowSuspendThenHibernate=yes
#AllowHybridSleep=yes
#SuspendMode=
#SuspendState=mem standby freeze
#HibernateMode=platform shutdown
#HibernateState=disk
#HybridSleepMode=suspend platform shutdown
#HybridSleepState=disk
#HibernateDelaySec=180min
By default everything is commented out, but the values, as stated at the beginning of the file, represent the default values. So you can see that suspend-then-hibernate is enabled and that the default delay time before hibernating is 180 minutes. If you’re not happy with that value, uncomment the line and change the value. For example, I set it to 10 minutes:
Shell
HibernateDelaySec=10min
You can now test this functionality with this command:
Shell
sudo systemctl suspend-then-hibernate
The system will suspend to RAM and, if you don't touch the computer, after 10 minutes you will hear some sounds: the system will effectively hibernate.
If you want to make this mechanism the default suspend mechanism (e.g., you close the lid, the system suspends, and after some time it hibernates), you CANNOT set the value of SuspendMode in the file above, since that has another meaning. To make suspend-then-hibernate the default suspend mechanism you have to create this symlink:
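The command is something like the following (the exact path of the unit file may differ on your distribution, e.g., /lib/systemd/system):

Shell
sudo ln -s /usr/lib/systemd/system/systemd-suspend-then-hibernate.service \
  /etc/systemd/system/systemd-suspend.service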
No need to restart, try to close the lid and the laptop will suspend, after 10 minutes it will hibernate.
Please, keep in mind that the above command will completely replace the behavior of suspend.
If you want finer-grained control, you might want to edit the file /etc/systemd/logind.conf; in particular, uncomment and set these entries (then you'll have to reboot or restart the systemd-logind.service service):
Shell
#HandlePowerKey=poweroff
#HandleSuspendKey=suspend
#HandleHibernateKey=hibernate
#HandleLidSwitch=suspend
#HandleLidSwitchExternalPower=suspend
#HandleLidSwitchDocked=ignore
which should be self-explanatory, but I haven't tested this approach.
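For example (untested, as I said), to have the lid switch trigger suspend-then-hibernate:

Shell
# in /etc/systemd/logind.conf
HandleLidSwitch=suspend-then-hibernate
# then restart the service (or reboot)
sudo systemctl restart systemd-logind.service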
Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in the MANIFEST, feature.xml and other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don't remove the .qualifier from the versions, and the versions of Eclipse bundles and features are processed automatically: the .qualifier is replaced with a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central), you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂) and you have to remove the .qualifier in the Eclipse bundles accordingly.
To do that automatically, you can use a combination of Maven plugins and the tycho-versions-plugin.
The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).
Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.
The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.
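A sketch of the corresponding command line, run explicitly from the parent project (the quoting is needed so that the parsedVersion properties are not expanded by the shell; double-check the escaping for your shell):

Shell
mvn \
  build-helper:parse-version \
  org.eclipse.tycho:tycho-versions-plugin:set-version \
  -DnewVersion='${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.incrementalVersion}'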
Second method
Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.
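A sketch of the corresponding commands:

Shell
mvn versions:set -DremoveSnapshot=true
mvn org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata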
In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle; that's why we run them explicitly.
Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects differs from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions with different commands (e.g., first on the parent, then on the other modules of the aggregator, etc.).
In this tutorial I'd like to show how to publish a Maven site to GitHub Pages. You can find several documents on the web about this subject, but I decided to publish one myself because the documents I found either refer to a deprecated plugin (https://github.com/github/maven-plugins/tree/master/github-site-plugin) or show complicated settings, even when using the official Maven plugin that I'm going to use in this post, the Apache Maven SCM Publish Plugin, https://maven.apache.org/plugins/maven-scm-publish-plugin/index.html (probably because those documents are somewhat old and refer to an old version of the plugin, which has probably been improved a lot since then).
Let’s create a simple Java project using the archetype (of course, I’m going to use fake names for the groupId and artifactId, but such values should be used consistently from now on):
mvn archetype:generate \
  -DgroupId=com.mycompany.app \
  -DartifactId=my-app \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
This will create the my-app folder with the Maven project. Let’s cd into that folder and make sure it compiles fine:
mvn clean verify
We can also generate the site of this project (in its default shape):
mvn clean site
The site will be generated in the target/site folder and can be opened with a browser; for example, let’s open its index.html:
Let's prepare the gh-pages branch: this will be a separate branch in the Git repository, and we create it as an orphan branch (see also https://maven.apache.org/plugins/maven-scm-publish-plugin/various-tips.html). WARNING: the following commands will wipe out the current contents of the local Git repository, so make sure you don't have any uncommitted changes! Let's run the following commands in the main folder of the Git repository:
git checkout --orphan gh-pages to create the branch locally,
rm .git/index ; git clean -fdx to clean the branch content and leave it empty,
echo "It works" > index.html to create an initial site content
git add . && git commit -m "initial site content" && git push origin gh-pages to commit and push to the branch gh-pages
Remember that you will have to use the URL corresponding to your own GitHub project (including your GitHub user id).
Then, we configure the maven-scm-publish-plugin configuration in the pluginManagement section (we configure it there, and then we will call its goals directly from the command line – such a plugin can also be configured to be bound to the Maven lifecycle, but that’s out of the scope of this tutorial, you might want to have a look here in case: https://maven.apache.org/plugins/maven-scm-publish-plugin/usage.html). Note that we specify the gh-pages branch:
XHTML
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-scm-publish-plugin</artifactId>
  <version>3.0.0</version>
  <configuration>
    <scmBranch>gh-pages</scmBranch>
  </configuration>
</plugin>
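The plugin also has to know the URL to publish to; by default, it reads it from the project's distributionManagement section, so we also add something like this to the POM (a sketch: use your own GitHub user id and repository name):

XHTML
<distributionManagement>
  <site>
    <id>github</id>
    <url>scm:git:https://github.com/<your github user>/<your repository>.git</url>
  </site>
</distributionManagement>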
Publish the site
Now, publishing the Maven site to GitHub Pages is just a matter of running:
mvn clean site site:stage scm-publish:publish-scm
Wait for the plugin to perform its job.
What the plugin does is:
it first checks out the contents of the gh-pages branch from GitHub into a local folder (by default, target/scmpublish-checkout);
then the locally staged site content is applied to the checkout, issuing appropriate Git commands to add and delete entries, followed by a commit and a push.
We now layer the maven-archetype-site archetype upon our existing project, so we must run it from the directory containing our current Maven project and specify the same groupId and artifactId we specified when we created the Maven project:
mvn archetype:generate \
  -DgroupId=com.mycompany.app \
  -DartifactId=my-app \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-site \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
The archetype will update our project with a src/site directory containing a few example contents in Markdown, APT, etc. It will also update our POM with some configuration for maven-project-info-reports-plugin and for i18n localization. Moreover, a new skin will be used for the final site, maven-fluido-skin.
You might want to have a look locally by regenerating the site.
Let’s publish the new site, again by running
mvn clean site site:stage scm-publish:publish-scm
Now it’s on GitHub Pages (it might take some time for the new website to be updated on GitHub and browser reload might help):
To jump to the published website you could also use the GitHub web interface: click on the “environment” link and then on the “View deployment” button corresponding to the latest pushed commit on the gh-pages branch:
Now you could experiment by adding/changing/removing the contents of the directory src/site and then publish the website again through Maven.
Remember
What's in your Maven project (e.g., the master branch) contains the sources of the site, while the gh-pages branch on GitHub will contain the generated website. You won't modify the contents of the gh-pages branch manually: the Apache Maven SCM Publish plugin will do that for you.
If you like to use KDE you probably install Kubuntu directly, instead of Ubuntu, which has been based on Gnome for a long time now.
However, I like to have several Desktop Environments, and, now and then, I like to switch from Gnome to KDE and then back. Currently, I’m using Gnome for most of the time, that’s why I install Ubuntu (instead of Kubuntu).
In any case, you can still install KDE Plasma on top of Ubuntu. The following has been tested on an Ubuntu Disco 19.04, but I guess it will work also on previous distributions.
For a reduced installation of KDE, you might want to install only these packages:
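For example, something along these lines (the exact package names may vary between Ubuntu releases; kwin-addons is the one discussed below):

Shell
sudo apt install plasma-desktop kwin-addons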
In particular, kwin-addons includes some useful things: it contains additional KWin desktop and window switchers shipped in the Plasma 5 addons module.
When installation has finished you may want to reboot and then, on the Login screen, you can use the gear icon for specifying that you want to enter the KDE Plasma environment instead of the default Gnome environment.
The above packages should provide you with enough to enjoy a Plasma experience, but they lack many (K)ubuntu configurations and addons for KDE.
If you want more Kubuntu stuff, you might want to install the “huge” package:
sudo apt install kubuntu-desktop
And then you get a real Kubuntu KDE Plasma experience.
Note that this will replace the classic Ubuntu splash screen shown while booting the OS with the Kubuntu one. If you want to go back to the original splash screen, it's just a matter of removing the following packages:
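If I remember correctly, the packages are named along these lines (double-check with apt search plymouth-theme-kubuntu):

Shell
sudo apt remove plymouth-theme-kubuntu-logo plymouth-theme-kubuntu-text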
I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.
The title is indeed, “Test-Driven Development, Build Automation, Continuous Integration (with Java, Eclipse and friends)” and can be bought from https://leanpub.com/tdd-buildautomation-ci.
The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code). Using Java, Eclipse and their ecosystems.
The main subject of this book is software testing. The main premise is that testing is a crucial part of software development. You need to make sure that the software you write behaves correctly. You can manually test your software. However, manual tests require lots of manual work and it is error prone.
On the contrary, this book focuses on automated tests, which can be done at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.
In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.
Throughout the book we will use Java as the main programming language. We use Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.
It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use one of the mainstream tools for build automation in the Java world, Maven.
We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration. In particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where each time we commit a change in our Git repository, the CI server will automatically run the automated build process, compiling all the code, running all the tests and possibly create additional reports concerning the quality of our code and of our tests.
The code quality of tests can be measured in terms of a few metrics using code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such a static analysis we will use SonarQube and its free cloud version SonarCloud.
When we need our application to connect to a service like a database, we will use Docker, a virtualization program based on containers that is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers will take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation and in the CI server.
Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.
The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.
In this tutorial I'm going to show how to install Linux on a USB drive using VirtualBox. I find this useful for testing a Linux distribution. Note that using a live distribution only gives you a limited testing experience, while installing Linux on a USB drive gives you the full experience (and if the USB drive is fast, it's almost like using Linux on a standard computer).
You can use this procedure to install Linux on any USB drive, that is, both a USB stick or an external USB hard drive.
You could burn a live image on a USB stick, boot it, and then install Linux on a second USB stick from the live system, but using Virtualbox is faster and does not require you to create a live USB stick just to install it on a second USB stick.
I’m going to use Ubuntu (17.10) as the main system and install Fedora on a USB stick (I’ve already downloaded the 27 iso).
Add your user to the vboxusers group, or you won't be able to use USB 3 in the virtual machine; run this command and reboot:
sudo usermod -aG vboxusers ${USER}
Then run Virtualbox and create a new Linux machine (increase the memory a bit, depending on your actual physical memory).
Since we'll use this machine only for booting the live ISO, there's no need to create a disk.
Now let’s go to Settings of the newly created machine, and in “USB” select USB 3.0:
In "Storage", select the "Empty" disk icon and, in "Optical Drive", use "Choose a virtual optical disk file…" to select the ISO image of the live distribution you want to boot (in my case, the Fedora 27 ISO I had already downloaded); then check the "Live CD/DVD" checkbox.
Now “Start” the virtual machine (which will boot the Live ISO).
You’ll be asked to confirm the virtual optical disk file you had previously specified in the settings.
From now on, you'll boot the live system in the virtual machine, and this depends on the distribution you chose (in my case, Fedora). You should already be familiar with this procedure if you've ever installed a Linux distribution starting from a live system.
For example, we choose “Try Fedora” and we can perform some tests (for instance, that we can access the Internet from the virtual machine; being able to access the Internet might be crucial later when installing the distribution on the USB stick since the installation might want to download some upgrades).
Now let’s connect the USB stick where we want to install Linux; then in the Devices menu of the Virtualbox machine, you should select the USB stick you’ve just inserted (in my case it’s a SanDisk):
Now the USB stick is mounted in the virtual machine, and it will be the target disk of our installation.
We can now start the Fedora installation in the virtual machine; again, this assumes you’re familiar with the installation of this distribution. The important part will be to select the USB stick as the target of the installation. Actually, the USB stick should be the only available option (unless you manually mounted other drives in the virtual machine):
Continue the installation and once it’s done, you can reboot the computer and make sure to boot from the USB stick.
Enjoy your new Linux installation on the USB stick 🙂
In this small blog post I’ll show how Eclipse looks like in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.
First of all, the default Ubuntu theme, Ambiance, makes Eclipse look not very nice… see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut “Filter Files” textbox in the “Git Staging” view:
Numix has similar problems:
Adwaita (the default Gnome theme), instead, makes it look great:
The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:
So, in the end, stay away from Ubuntu default theme 😉
In this tutorial I’m going to show how to analyze multiple Eclipse plug-in projects with Sonarqube. In particular, I’m going to focus on peculiarities that have to be taken care of due to the standard way Sonarqube analyzes sources and to the structure of typical Eclipse plug-in projects (concerning tests and code coverage).
Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.
Note that this also shows a possible way of dealing with a custom argLine for the tycho-surefire configuration: tycho.testArgLine will be automatically set by the jacoco:prepare-agent goal, with the path of the jacoco agent (needed for code coverage); the property tycho.testArgLine is automatically used by tycho-surefire. But if you have a custom configuration of tycho-surefire with additional arguments you want to pass in argLine, you must be careful not to overwrite the value set by jacoco. If you simply refer to tycho.testArgLine in the custom tycho-surefire configuration's argLine, it will work when the jacoco profile is active, but it will fail when it is not active, since that property will not exist. Don't try to define it as an empty property by default, since when tycho-surefire runs it will use that empty value, ignoring the one set by jacoco:prepare-agent (argLine's properties are resolved before jacoco:prepare-agent is executed). Instead, use another level of indirection: refer to a new property, e.g., additionalTestArgLine, which by default is empty. In the jacoco profile, set additionalTestArgLine referring to tycho.testArgLine (in that profile, that property is surely set by jacoco:prepare-agent). Then, in the custom argLine, refer to additionalTestArgLine. An example is shown in the project example.plugin2.tests pom:
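A sketch of this indirection (excerpts of the POMs; the -Xmx setting just stands for whatever custom arguments you actually need):

XHTML
<!-- in the properties section: empty by default,
     so the custom argLine also works without the jacoco profile -->
<properties>
  <additionalTestArgLine></additionalTestArgLine>
</properties>

<!-- in the jacoco profile: tycho.testArgLine is set by jacoco:prepare-agent -->
<profile>
  <id>jacoco</id>
  <properties>
    <additionalTestArgLine>${tycho.testArgLine}</additionalTestArgLine>
  </properties>
</profile>

<!-- in the custom tycho-surefire configuration -->
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <configuration>
    <argLine>${additionalTestArgLine} -Xmx512m</argLine>
  </configuration>
</plugin>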
You can check that code coverage works as expected by running the following (it's important to verify that jacoco is configured correctly in your projects before running the Sonarqube analysis: this way, if coverage does not show up in Sonarqube, you know the problem is in the configuration for Sonarqube, not in the jacoco one, as we'll see in a minute):
mvn clean verify -Pjacoco
Make sure that example.tests.report/target/site/jacoco-aggregate/index.html reports some code coverage (in this example, example.plugin1 intentionally has some uncovered code).
Now I assume you already have Sonarqube installed.
Let’s run a first Sonarqube analysis with
mvn clean verify -Pjacoco sonar:sonar
This is the result:
So Unit Tests are correctly collected! What about Code Coverage? Something is shown, but if you click on that you see some bad surprises:
Code coverage only on tests (which is irrelevant) and no coverage for our SUT (Software Under Test) classes!
That's because jacoco .exec files are by default generated in the target folder of the tests project, so when Sonarqube analyzes the projects:
it finds the jacoco.exec file when it analyzes a tests project but can only see the sources of the tests project (not the ones of the SUT)
when it analyzes a SUT project it cannot find any jacoco.exec file.
We could fix this by configuring the maven jacoco plugin to generate jacoco.exec in the SUT project, but then the aggregate report configuration should be updated accordingly (while it works out of the box with the defaults). Another way of fixing the problem is to use the Sonarqube maven property sonar.jacoco.reportPaths and “trick” Sonarqube like that (we do that in the parent pom properties section):
XHTML
<!-- Always refer to the corresponding tests project (if it exists) otherwise
Sonarqube won't be able to collect code coverage. For example, when analyzing
project foo it wouldn't find code coverage information if it doesn't use
foo.tests jacoco.exec. -->
<sonar.jacoco.reportPaths>
../${project.artifactId}.tests/target/jacoco.exec
</sonar.jacoco.reportPaths>
This way, when it analyzes example.plugin1 it will use the jacoco.exec found in example.plugin1.tests project (if you follow the convention foo and foo.tests this works out of the box, otherwise, you have to list all the jacoco.exec paths in all the projects in that property, separated by comma).
Let’s run the analysis again:
OK, now code coverage is collected on the SUT classes as we wanted. Of course, now test classes appear as uncovered (remember, when it analyzes example.plugin1.tests, it now searches for jacoco.exec in example.plugin1.tests.tests, which does not even exist).
This leads us to another problem: test classes should be excluded from Sonarqube analysis. This works out of the box in standard Maven projects because source folders of SUT and source folders of test classes are separate and in the same project (that’s also why code coverage for pure Maven projects works out of the box in Sonarqube); this is not the case for Eclipse projects, where SUT and tests are in separate projects.
In fact, issues are reported also on test classes:
We can fix both problems by putting these two Sonarqube properties in the tests.parent POM properties section (note the link to the Eclipse bug about this behavior):
XHTML
<!-- Workaround for https://bugs.eclipse.org/bugs/show_bug.cgi?id=397015 -->
<sonar.sources></sonar.sources>
<sonar.tests>src</sonar.tests>
This will be inherited by our tests projects and for those projects, Sonarqube will not analyze test classes.
Run the analysis again and see the results: code coverage only on the SUT and issues only on the SUT (remember that in this example MyClass1 is intentionally not completely covered):
You might be tempted to set the property sonar.skip to true for test projects, but then you would lose the collection of JUnit test reports.
The final bit of customization is to exclude the Main.java class from code coverage. We have already configured the jacoco maven plugin to do so, but this won’t be picked up by Sonarqube (that configuration only tells jacoco to skip that class when it generates the HTML report).
We have to repeat that exclusion with a Sonarqube maven property, in the parent pom:
XHTML
<!-- Example of skipping code coverage (comma separated Java files). -->
<sonar.coverage.exclusions>
**/plugin1/Main.java
</sonar.coverage.exclusions>
Note that in the jacoco maven configuration we had excluded a .class file, while here we exclude Java files.
Run the analysis again and Main is not considered in code coverage:
Now you can have fun in fixing Sonarqube issues, which is out of the scope of this tutorial 🙂
Especially with lambdas, you may end up with a chain of method calls that you’d like to have automatically formatted with each invocation on each line (maybe except for the very first invocation).
You can configure the Eclipse Java formatter with that respect; you just need to reach the right option (the “Force split” is necessary to have each invocation on a separate line):
and then you can have method calls formatted automatically like that:
This is relative to my installation of Eclipse which is in the folder /home/bettini/eclipse/java-latest-released/eclipse, note the executable “eclipse” and the “icon.xpm”. The name “Eclipse Java” is what will appear as the launcher name both in Gnome applications and later in the dock.
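A sketch of such a launcher file (saved, for example, as eclipse-java.desktop; the file name is just an example):

Shell
[Desktop Entry]
Type=Application
Name=Eclipse Java
Icon=/home/bettini/eclipse/java-latest-released/eclipse/icon.xpm
Exec=/home/bettini/eclipse/java-latest-released/eclipse/eclipse
Terminal=false
Categories=Development;IDE;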
Make this file executable.
Copy this file in your home folder in .local/share/applications.
Now in Gnome Activities search for such a launcher and it should appear:
Select it and make sure that Eclipse effectively runs.
Unfortunately, in the dock, there’s no contextual menu for you to add it as a favorite and pin it to the dock:
But you can still add it to the dock favorites (and thus pin it there) by using the corresponding contextual menu that is available when the launcher appears in the Activities:
And there you go: the Eclipse launcher is now on your dock and it’s there to stay 🙂
This post is based on my Dell M3800 with Linux Neon.
KDE already does a good job with touchpad gestures (e.g., two-finger scrolling, three-finger tap for pasting, etc.), but it does not support three-finger swipe gestures as macOS does, e.g., for displaying all the windows or for showing the desktop.
Today I tried this utility, Libinput-gestures, which works like magic! The utility comes with good defaults for typical gestures (including pinch), but I configured it to fit my needs (in particular, I wanted to mimic the macOS behavior for three-finger swipes: up = display all windows, down = display all windows of the same class; and for pinch out = show desktop).
The installation of Libinput-gestures is really easy (just follow the instructions on its web page).
Remember that, first of all, your user must be in the input group, so first run
sudo gpasswd -a $USER input
Then logout from your current session, and login again.
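You can then start the utility with (this is the "start" command that the configuration note below refers to):

Shell
libinput-gestures-setup start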
and if you want it to be started at login time, then run
libinput-gestures-setup autostart
The default gestures are in /etc/libinput-gestures.conf. If you want to create your own custom gestures then copy that file to ~/.config/libinput-gestures.conf and edit it.
These are the lines I changed in my configuration (remember that each time you modify the configuration you need to restart libinput-gestures, i.e., run libinput-gestures-setup restart):
Shell
# KDE: Present Windows (current desktop)
gesture swipe up xdotool key ctrl+F9
# KDE: Present Windows (Window class)
gesture swipe down xdotool key ctrl+F7
# KDE: Show desktop
gesture pinch out xdotool key ctrl+F12
You only need to know the keyboard shortcuts of the actions you want to associate with the gestures. In that respect, you might want to have a look at the current shortcuts in KDE Settings (the interesting components are "KWin" and "Plasma"):
Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.
The Maven parent pom file is written such that Jacoco is enabled only when the profile “jacoco” is explicitly activated:
XHTML
<profile>
  <id>jacoco</id>
  <activation>
    <activeByDefault>false</activeByDefault>
  </activation>
  <build>
    <plugins>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>${jacoco-version}</version>
        <configuration>
          <excludes>
            <exclude>**/plugin1/Main.class</exclude>
          </excludes>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
This is just an example of configuration; you might want to tweak it as you see fit for your own projects (in this example I also created a Main.java with a main method, which I exclude from the coverage). By default, the jacoco-maven-plugin will "prepare" the Jacoco agent in the property tycho.testArgLine (since our test projects are Maven projects with packaging eclipse-plugin-test); tycho.testArgLine is automatically used by the tycho-surefire-plugin, and since we have no special test configuration, the pom.xml of our test projects is just as simple as this:
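Just to give an idea, a minimal sketch of such a test project pom could look like the following (the parent coordinates and artifact id are illustrative; adapt them to your own project):
XHTML
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <!-- illustrative parent coordinates -->
  <parent>
    <groupId>example</groupId>
    <artifactId>example.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../example.parent</relativePath>
  </parent>
  <artifactId>example.plugin1.tests</artifactId>
  <!-- Tycho test packaging: the prepared Jacoco agent ends up in tycho.testArgLine -->
  <packaging>eclipse-plugin-test</packaging>
</project>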
If you need a custom configuration, then you have to explicitly mention ${tycho.testArgLine} in the <argLine>.
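For instance, a custom test configuration that still picks up the Jacoco agent could look roughly like this (a sketch; the extra JVM argument is just an example):
XHTML
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <!-- keep ${tycho.testArgLine} so that the prepared Jacoco agent is not lost -->
    <argLine>${tycho.testArgLine} -Xmx1024m</argLine>
  </configuration>
</plugin>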
Now we want to create an aggregate Jacoco report for the classes in the plugin1 and plugin2 projects (tested by plugin1.tests and plugin2.tests, respectively); each test project will generate a jacoco.exec file with coverage information. Before Jacoco 0.7.7, creating an aggregate report wasn't that easy: it required storing all coverage data in a single .exec file and then using an ant task (with a manual configuration specifying all the source file and class file paths). In 0.7.7, the jacoco:report-aggregate goal has been added, which makes creating a report really easy!
Here’s an excerpt of the documentation:
Creates a structured code coverage report (HTML, XML, and CSV) from multiple projects within reactor. The report is created from all modules this project depends on. From those projects class and source files as well as JaCoCo execution data files will be collected. […] This also allows to create coverage reports when tests are in separate projects than the code under test. […]
Using the dependency scope allows to distinguish projects which contribute execution data but should not become part of the report:
compile: Project source and execution data is included in the report.
test: Only execution data is considered for the report.
So it’s just a matter of creating a separate project (I called that example.tests.report) where we:
configure the report-aggregate goal (in this example I bind that to the verify phase)
add as dependencies with scope compile the projects containing the actual code and with scope test the projects containing the tests and .exec data
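A sketch of the relevant parts of the example.tests.report pom might look like the following (coordinates and versions are illustrative; see the example project for the complete configuration):
XHTML
<dependencies>
  <!-- code under test: source and execution data included in the report (scope compile) -->
  <dependency>
    <groupId>example</groupId>
    <artifactId>example.plugin1</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>example</groupId>
    <artifactId>example.plugin2</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </dependency>
  <!-- test projects: only their execution data (.exec) is considered (scope test) -->
  <dependency>
    <groupId>example</groupId>
    <artifactId>example.plugin1.tests</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>example</groupId>
    <artifactId>example.plugin2.tests</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <scope>test</scope>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>${jacoco-version}</version>
      <executions>
        <execution>
          <id>report-aggregate</id>
          <phase>verify</phase>
          <goals>
            <goal>report-aggregate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>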
Now run this command in the example.parent project:
Shell
mvn clean verify -Pjacoco
and when the build terminates, you’ll find the HTML code coverage report for all your projects in the directory (again, you can configure jacoco with a different output path, that’s just the default):
Since, besides the HTML report, jacoco will create an XML report, you can use any tool that keeps track of code coverage, like the free online service Coveralls (https://coveralls.io/). Coveralls is automatically accessible from Travis (I assume that you know how to connect your github projects to Travis and Coveralls). So we just need to configure the coveralls-maven-plugin with the path of the Jacoco XML report (I'm doing this in the parent pom, in the pluginManagement section of the jacoco profile):
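A sketch of that configuration could be along these lines (the report path is an assumption based on the default report-aggregate output directory of the example.tests.report project; double-check it against your own layout):
XHTML
<plugin>
  <groupId>org.eluder.coveralls</groupId>
  <artifactId>coveralls-maven-plugin</artifactId>
  <version>4.3.0</version>
  <configuration>
    <jacocoReports>
      <!-- assumed location of the aggregate XML report -->
      <jacocoReport>${project.basedir}/example.tests.report/target/site/jacoco-aggregate/jacoco.xml</jacocoReport>
    </jacocoReports>
  </configuration>
</plugin>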
The second edition of the Xtext book should be published soon! In the meantime, it is already available for preorder. At the time of writing, you can benefit from discounts and preorder it for $10.
I’ll detail the differences and novelties of this second edition.
But, first things first! A huge thank you to Jan Köhnlein, for reviewing this second edition, and a special thank you to Sven Efftinge, for writing the foreword to this second edition. I am also grateful to itemis Schweiz, and in particular, to Serano Colameo for sponsoring the writing of this book.
While working on this second edition, I updated all the contents of the previous edition in order to make them up to date with respect to what Xtext provides in the most recent release (at the time of writing, it is 2.10).
All the examples have been rewritten from scratch. The main examples, Entities, Expressions and SmallJava, are still there, but many parts of the DSLs, including their features and implementations, have been modified and improved, focusing on efficient implementation techniques and the best practices I learned in these years. Thus, while the features of most of the main example DSLs of the book are the same as in the first edition, their implementation is completely new.
Moreover, in the last chapters, many more examples are introduced.
Chapter 11 on Continuous Integration, which in the previous edition was called "Building and Releasing", has been completely rewritten and is now based on Maven/Tycho and on Gradle, since Xtext now provides a project wizard that also creates a build configuration for these build tools. Building with Maven/Tycho is described in more detail in the chapter, and Gradle is briefly described. This new chapter also briefly describes the new Xtext features: the DSL editor on the web and on IntelliJ.
I also added a brand new chapter at the end of the book, Chapter 13 “Advanced Topics”, with much more advanced material and techniques that are useful when your DSL grows in size and features. For example, the chapter will show how to manually maintain the Ecore model for your DSL in several ways, including Xcore. This chapter also presents an advanced example that extends Xbase, including the customization of its type system and compiler. An introduction to Xbase is still presented in Chapter 12, as in the previous edition, but with more details.
As in the previous edition, the book fosters unit testing a lot. An entire chapter, Chapter 7 “Testing”, is still devoted to testing all aspects of an Xtext DSL implementation.
Most chapters, as in the previous edition, still have a tutorial nature.
Summarizing, while the title and the subject of most chapters are still the same, their contents have been completely reviewed, extended and, hopefully, improved.
If you enjoyed the first edition of the book and found it useful, I hope you’ll like this second edition even more.
HiDPI support in KDE Plasma has recently been improved! I'm afraid the procedure for enabling it has not improved, though. In this post I'll detail the steps to use HiDPI with KDE if you have a high-resolution display (for example, I have one on my Linux Dell M3800).
Remember that the settings you change will not be applied completely until you logout and login again into KDE.
First of all, you need to go into Settings, then
"Display and Monitor" -> "Display Configuration". If you scroll down, you see a "Scale Display" button.
Click on that and, in the "Screen Scaling" dialog, drag the "Scale" slider to the middle, corresponding to a scale factor of 2, and press OK.
Then go back to the main page of Settings, select "Font", and force the font DPI to 168 (or even more if you want).
Apply the settings, logout and login again into KDE and you’ll enjoy your HiDPI display with a scale factor of 2, which basically means it will be usable 🙂
Be warned: KDE applications will look correct, but there will still be other applications that might not have been implemented with HiDPI in mind… and they'll still look horrible even with the scaling you set.
In a previous post I showed how to manage an Eclipse composite p2 repository and how to publish an Eclipse p2 composite repository on Sourceforge. In this post I’ll show a similar procedure to publish an Eclipse p2 composite repository on Bintray. The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted).
First of all, this procedure is quite different from the ones shown in other blogs (e.g., this one, this one and this one): in those approaches the p2 metadata (i.e., artifacts.jar and content.jar) are uploaded independently from a version, always in the same directory, thus overwriting the existing metadata. This leads to the fact that only the latest version of published features and bundles will be available to the end user. This is quite against the idea that old versions should still be available, and in general, all the versions should be available for the end users, especially if a new version has some breaking change and the user is not willing to update (see p2’s do’s and do not’s). For this reason, I always publish p2 composite repositories.
The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.
In order to achieve this, all published p2 repositories must be available, each one with its own p2 metadata that should never be overwritten.
On the contrary, the metadata that we will overwrite will be the one for the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.
What I aim at is to have the following remote paths on Bintray:
releases: in this directory all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20160129-1616/ etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
updates: in this directory the composite metadata will be uploaded. The URL https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/ should be used by your Eclipse users to install the features in their Eclipse or for target platform resolution (depending on the kind of projects you’re developing). All versions will be available from this composite update site; I call this main composite. Moreover, you can provide the URL to a child composite update site that includes all versions for a given major.minor stream, e.g., https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.0/, https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.1/, etc. I call each one of these, child composite.
zipped: in this directory we will upload the zipped p2 repository for each version.
Summarizing we’ll end up with a remote directory structure like the following
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
root
|--releases
||--1.0.0.v2016...
|||--artifacts.jar
|||--content.jar
|||--features
||||--your feature.jar
|||--plugins
||||--your bundle.jar
||--1.1.0.v2016...
||--1.1.1.v2016...
||--2.0.0.v2016...
|...
|--updates
||--compositeContent.xml
||--compositeArtifacts.xml
||--1.0
|||--compositeContent.xml
|||--compositeArtifact.xml
||--1.1
|||--compositeContent.xml
|||--compositeArtifact.xml
||--2.0...
|...
|--zipped
|--your site1.0.0.v2016....zip
|--your site1.1.0.v2016....zip
...
Uploading using REST API
In the posts I mentioned above, the typical upload with the REST API targets a fixed remote path, not associated with any specific version.
Thanks to the Bintray Support, I managed to use a different scheme that allows me to store p2 metadata for a single p2 repository in the same directory of the p2 repository itself and to keep those metadata separate for each single release.
To achieve this, we need to use another URL scheme for uploading, using matrix params options or header options.
This means that we’ll upload everything with this URL
On the contrary, for uploading p2 composite metadata, we’ll use the schema of the other approaches, i.e., we will not associate it to any specific version; we just need to specify the desired remote path where we’ll upload the main and the child composite metadata.
Building Steps
During the build, we’ll have to update the composite site metadata, and we’ll have to do that locally.
The steps that we’ll perform during the Maven/Tycho build, which will rely on some Ant scripts can be summarized as follows:
Retrieve the remote composite metadata compositeContent/Artifacts.xml, both for the main composite and the child composite. If these metadata cannot be found remotely, we fail gracefully: it means that it is the first time we release, or, if only the child composite cannot be found, that we’re releasing a new major.minor version. These will be downloaded in the directories target/main-composite and target/child-composite respectively. These will be created anyway.
Preprocess the possibly downloaded composite metadata: if the property p2.atomic.composite.loading is present and set to true (see the snippet right after this list), we must temporarily set it to false, otherwise we will not be able to add additional elements to the composite site with the p2 ant tasks.
Update the composite metadata, using the version information passed from the Maven/Tycho build, with the p2 Ant tasks for composite repositories.
Post-process the composite metadata (i.e., set the property p2.atomic.composite.loading mentioned above back to true, see https://bugs.eclipse.org/bugs/show_bug.cgi?id=356561 for further details about this property). UPDATE: Please have a look at the comment section, in particular, the comments from pascalrapicault, about this property.
Upload everything to Bintray: the new p2 repository, its zipped version, and all the composite metadata.
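For reference, in compositeContent.xml/compositeArtifacts.xml this property appears in the <properties> section, roughly like this (only the relevant element is shown):
XHTML
<property name='p2.atomic.composite.loading' value='true'/>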
IMPORTANT: the pre- and post-processing of composite metadata that we'll implement assumes that such metadata is not compressed. Anyway, I always prefer not to compress the composite metadata, since it's easier, later, to manually change or review it.
Technical Details
You can find the complete example at https://github.com/LorenzoBettini/p2composite-bintray-example. Here I’ll sketch the main parts. First of all, all the mechanisms for updating the composite metadata and pushing to Bintray (i.e., the steps detailed above) are in the project p2composite.example.site, which is a Maven/Tycho project with eclipse-repository packaging.
The pom.xml has some properties that you should adapt to your project, and some other properties that can be left as they are if you’re OK with the defaults:
XHTML
<properties>
  <!-- The name of your own Bintray repository -->
  <bintray.repo>p2-composite-example</bintray.repo>
  <!-- The name of your own Bintray repository's package for releases -->
  <bintray.package>releases</bintray.package>
  <!-- The label for the Composite sites -->
  <site.label>Composite Site Example</site.label>
  <!-- If the Bintray repository is owned by someone different from your
       user, then specify the bintray.owner explicitly -->
  <bintray.owner>${bintray.user}</bintray.owner>
  <!-- Define bintray.user and bintray.apikey in some secret place,
       e.g., in your local settings.xml (see below) -->
  <!-- further properties (e.g., child.repository.path.prefix and the default
       remote paths) are omitted here; see the example project for the full list -->
</properties>
If you change the default remote paths, it is crucial that you update the child.repository.path.prefix property consistently. In fact, this is used to update the composite metadata for the composite children. For example, with the default properties, the composite metadata will look like the following (here we show only compositeContent.xml):
XHTML
<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example 1.0' type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'>
You can also see that two crucial properties, bintray.user and, in particular, bintray.apikey, should not be made public. You should keep these hidden; for example, you can put them in your local .m2/settings.xml file, associated with the Maven profile that you use for releasing (as illustrated in the following). This is an example of settings.xml:
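A sketch of what it could contain (the user and API key values are obviously placeholders; by giving the profile the same id as the one in the pom, it gets activated together with the release profile):
XHTML
<settings>
  <profiles>
    <profile>
      <id>release-composite</id>
      <properties>
        <!-- placeholders: use your real Bintray user and API key -->
        <bintray.user>YOUR_BINTRAY_USER</bintray.user>
        <bintray.apikey>YOUR_BINTRAY_APIKEY</bintray.apikey>
      </properties>
    </profile>
  </profiles>
</settings>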
In the pom.xml of this project there is a Maven profile, release-composite, that should be activated when you want to perform the release steps described above.
We also make sure that the generated zipped p2 repository has a name with the fully qualified version:
XHTML
<!-- make sure that zipped p2 repositories have the fully qualified version -->
In the release-composite Maven profile, we use the maven-antrun-plugin to execute some ant targets (note that the Maven properties are automatically passed to the Ant tasks): one to retrieve the remote composite metadata, if they exist, and the other one as the final step to deploy the p2 repository, its zipped version and the composite metadata to Bintray:
The Ant tasks are defined in the file bintray.ant. Please refer to the example for the complete file. Here we sketch the main parts.
This Ant file relies on some properties with default values, and other properties that are expected to be passed when running these tasks, i.e., from the pom.xml
To retrieve the existing remote composite metadata we execute the following, using the standard Ant get task. Note that if there is no composite metadata (e.g., it’s the first release that we execute, or we are releasing a new major.minor version so there’s no child composite for that version) we ignore the error; however, we still create the local directories for the composite metadata:
XHTML
<!-- Take from the remote URL the possible existing metadata -->
<macrodef name="get-file" description="Use the Ant Get task to retrieve the file">
  <attribute name="file" />
  <attribute name="url" />
  <attribute name="dest" />
  <sequential>
    <!-- If the remote file does not exist then fail gracefully -->
    <echo message="Getting @{file} from @{url} into @{dest}..." />
    <get dest="@{dest}" ignoreerrors="true">
      <url url="@{url}/@{file}" />
    </get>
  </sequential>
</macrodef>
For preprocessing/postprocessing composite metadata (in order to deal with the property p2.atomic.composite.loading as explained in the previous section) we have
XHTML
<!-- p2.atomic.composite.loading must be set to false otherwise we won't be able
     to add a child to the composite repository without having all the children available -->
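The actual pre/post-processing targets are in the example's bintray.ant; just to give an idea, flipping the property could be done with Ant's replaceregexp task, roughly as follows (a sketch that assumes uncompressed metadata and an illustrative directory property):
XHTML
<!-- sketch: temporarily set p2.atomic.composite.loading to false before adding children -->
<replaceregexp
    file="${composite.metadata.directory}/compositeContent.xml"
    match="name='p2.atomic.composite.loading' value='true'"
    replace="name='p2.atomic.composite.loading' value='false'"
    byline="true" />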
Finally, to push everything to Bintray, we execute curl with appropriate URLs, as described in the previous section about the REST API. The single tasks for pushing to Bintray are similar, so we only show the one for uploading the p2 repository associated with a specific version, and the one for uploading p2 composite metadata. As detailed at the beginning of the post, we use different URL shapes.
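Just to give the flavor of it, the upload of a single file of a given release could look roughly like the following (a sketch: the property names are illustrative, and the URL follows Bintray's content-upload REST API with matrix params for the package, the version and the publish flag):
XHTML
<!-- sketch: PUT one file, associating it with a Bintray package and version -->
<exec executable="curl" failonerror="true">
  <arg value="-u" />
  <arg value="${bintray.user}:${bintray.apikey}" />
  <arg value="-T" />
  <arg value="${file.to.upload}" />
  <arg value="https://api.bintray.com/content/${bintray.owner}/${bintray.repo}/${remote.release.dir}/${file.name};bt_package=${bintray.package};bt_version=${qualified.version};publish=1" />
</exec>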
To update composite metadata we execute an ant task using the tycho-eclipserun-plugin. This way, we can execute the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant tasks for managing composite repositories.
ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly one (long) line.
XHTML
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <!-- Update p2 composite metadata or create it -->
The file packaging-p2-composite.ant is similar to the one I showed in a previous post. We use the p2 Ant tasks for adding a child to a composite p2 repository (recall that if there is no existing composite repository, the task for adding a child also creates new compositeContent.xml/Artifacts.xml; if a child with the same name exists the ant task will not add anything new).
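To give an idea of those p2 Ant tasks, adding a child looks roughly like the following (a sketch based on the standard p2 composite repository tasks; the property names are illustrative):
XHTML
<!-- sketch: create the composite metadata if missing and add a child repository -->
<p2.composite.repository>
  <repository location="file:${composite.repository.directory}"
              name="${site.label}" compressed="false" />
  <add>
    <repository location="${child.repository.location}" />
  </add>
</p2.composite.repository>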
In case you want to remove an existing released version, since we upload the p2 repository and the zipped version as part of a package’s version, we just need to delete that version using the Bintray Web UI. However, this procedure will never remove the metadata, i.e., artifacts.jar and content.jar. The same holds if you want to remove the composite metadata. For these metadata files, you need to use the REST API, e.g., with curl. I put a shell script in the example to quickly remove all the metadata files from a given remote Bintray directory.
Performing a Release
For performing a release you just need to run
mvn clean verify -Prelease-composite
on the p2composite.example.tycho project.
Concluding Remarks
As I said, the procedure shown in this example is meant to be easily reusable in your projects. The Ant files can simply be copied as they are. The same holds for the Maven profile. You only need to specify the Maven properties that contain the values for your own project, and adjust your settings.xml with sensitive data like the Bintray API key.
Happy Releasing! 🙂