Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and journals.

My new book on TDD, Build Automation and Continuous Integration

I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.

The title is, indeed, “Test-Driven Development, Build Automation, Continuous Integration (with Java, Eclipse and friends)” and it can be bought from https://leanpub.com/tdd-buildautomation-ci.

The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code), using Java, Eclipse and their ecosystems.

The main subject of this book is software testing. The main premise is that testing is a crucial part of software development. You need to make sure that the software you write behaves correctly. You can test your software manually, but manual tests require lots of work and are error prone.

On the contrary, this book focuses on automated tests, which can be done at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.

In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.

Throughout the book we will use Java as the main programming language. We use Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.

It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use one of the mainstream tools for build automation in the Java world, Maven.

We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration. In particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where, each time we commit a change to our Git repository, the CI server will automatically run the automated build process, compiling all the code, running all the tests and possibly creating additional reports concerning the quality of our code and of our tests.

The quality of our code and of our tests can be measured with metrics such as code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such static analysis we will use SonarQube and its free cloud version SonarCloud.

When we need our application to connect to a service like a database, we will use Docker, a virtualization program based on containers, which is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers will take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation and on the CI server.

Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.

The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.

I hope you enjoy the book 🙂

How to install Linux on a USB drive using Virtualbox

In this tutorial I’m going to show how to install Linux on a USB drive using Virtualbox. I find this useful for testing a Linux distribution. Note that using a live distribution only gives you a limited testing experience, while installing Linux on a USB drive will give you the full experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer).

You can use this procedure to install Linux on any USB drive, that is, either a USB stick or an external USB hard drive.

You could burn a live image on a USB stick, boot it, and then install Linux on a second USB stick from the live system, but using Virtualbox is faster and does not require you to create a live USB stick just to install it on a second USB stick.

I’m going to use Ubuntu (17.10) as the main system and install Fedora on a USB stick (I’ve already downloaded the 27 iso).

First of all, let’s install Virtualbox:
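On Ubuntu this should be something along these lines (a sketch; you could also install the package downloaded from virtualbox.org instead):

    sudo apt install virtualbox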

Add your user to the Virtualbox users, or you won’t be able to use USB 3 in the virtual machine: run this command and reboot:
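For example (assuming the group created by the Virtualbox packages is named vboxusers, which is the usual name):

    sudo usermod -aG vboxusers $USER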

Then run Virtualbox and create a new Linux machine (increase the memory a bit, depending on your actual physical memory).

Since we’ll use this machine only for booting the live iso, there’s no need to create a disk.

Now let’s go to Settings of the newly created machine, and in “USB” select USB 3.0:

In “Storage” select the “Empty” disk icon and, in “Optical Drive”, use “Choose a virtual optical disk file…” to select the iso image of the Live distribution you want to boot the machine with (in my case the Fedora 27 ISO I had already downloaded); then check the “Live CD/DVD” checkbox.

Now “Start” the virtual machine (which will boot the Live ISO).

You’ll be asked to confirm the virtual optical disk file you had previously specified in the settings.

From now on, you’ll boot the live system in the virtual machine, and this depends on the distribution you chose (in my case Fedora). You should already be familiar with this procedure if you’ve ever installed a Linux distribution starting from a Live system.

For example, we choose “Try Fedora” and we can perform some tests (for instance, check that we can access the Internet from the virtual machine; being able to access the Internet might be crucial later, when installing the distribution on the USB stick, since the installation might want to download some updates).

Now let’s connect the USB stick where we want to install Linux; then in the Devices menu of the Virtualbox machine, you should select the USB stick you’ve just inserted (in my case it’s a SanDisk):

Now the USB stick is mounted in the virtual machine, and it will be the target disk of our installation.

We can now start the Fedora installation in the virtual machine; again, this assumes you’re familiar with the installation of this distribution. The important part will be to select the USB stick as the target of the installation. Actually, the USB stick should be the only available option (unless you manually mounted other drives in the virtual machine):

Continue the installation and once it’s done, you can reboot the computer and make sure to boot from the USB stick.

Enjoy your new Linux installation on the USB stick 🙂

Eclipse tested with a few Gnome themes

In this small blog post I’ll show how Eclipse looks in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, does not make Eclipse look very nice… see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut “Filter Files” text box in the “Git Staging” view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from Ubuntu’s default theme 😉

Analyzing Eclipse plug-in projects with Sonarqube

In this tutorial I’m going to show how to analyze multiple Eclipse plug-in projects with Sonarqube. In particular, I’m going to focus on peculiarities that have to be taken care of due to the standard way Sonarqube analyzes sources and to the structure of typical Eclipse plug-in projects (concerning tests and code coverage).

The code of this example is available on Github: https://github.com/LorenzoBettini/tycho-sonarqube-example

This can be seen as a follow-up of my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects”. I’ll basically reuse almost the same structure of that example and a few other things. The Jacoco report part is not related to Sonarqube, but I’ll leave it there.

The structure of the projects is as follows:
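Roughly something like this (the names are taken from the example repository; the exact list of projects may differ slightly):

    example.parent            (parent pom project)
    example.plugin1           (SUT bundle)
    example.plugin1.tests     (tests for example.plugin1)
    example.plugin2           (SUT bundle)
    example.plugin2.tests     (tests for example.plugin2)
    example.tests.parent      (common configuration for test projects)
    example.tests.report      (Jacoco aggregate report project)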

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The project example.tests.parent contains all the common configurations for test projects (and for the test report; please refer to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects” for the details of the report project, which is not strictly required for Sonarqube).

This is its pom

Note that this also shows a possible way of dealing with a custom argLine for the tycho-surefire configuration: tycho.testArgLine will be automatically set by the jacoco:prepare-agent goal, with the path of the jacoco agent (needed for code coverage); the property tycho.testArgLine is automatically used by tycho-surefire. But if you have a custom configuration of tycho-surefire with additional arguments you want to pass in argLine, you must be careful not to overwrite the value set by jacoco. If you simply refer to tycho.testArgLine in the custom tycho-surefire configuration’s argLine, it will work when the jacoco profile is active, but it will fail when it is not active, since that property will not exist. Don’t try to define it as an empty property by default, since when tycho-surefire runs it will use that empty value, ignoring the one set by jacoco:prepare-agent (argLine’s properties are resolved before jacoco:prepare-agent is executed). Instead, use another level of indirection: refer to a new property, e.g., additionalTestArgLine, which by default is empty. In the jacoco profile, set additionalTestArgLine referring to tycho.testArgLine (in that profile, that property is surely set by jacoco:prepare-agent). Then, in the custom argLine, refer to additionalTestArgLine. An example is shown in the project example.plugin2.tests pom:
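A minimal sketch of that indirection (property and profile names as described above; the -Xmx argument is just a placeholder for your custom arguments, and this is not the complete pom of the example):

    <properties>
      <!-- empty by default; redefined in the jacoco profile -->
      <additionalTestArgLine></additionalTestArgLine>
    </properties>

    <profiles>
      <profile>
        <id>jacoco</id>
        <properties>
          <!-- tycho.testArgLine is set by jacoco:prepare-agent -->
          <additionalTestArgLine>${tycho.testArgLine}</additionalTestArgLine>
        </properties>
      </profile>
    </profiles>

    <build>
      <plugins>
        <plugin>
          <groupId>org.eclipse.tycho</groupId>
          <artifactId>tycho-surefire-plugin</artifactId>
          <configuration>
            <!-- custom arguments plus, indirectly, whatever jacoco sets -->
            <argLine>${additionalTestArgLine} -Xmx512m</argLine>
          </configuration>
        </plugin>
      </plugins>
    </build>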

You can check that code coverage works as expected by running (it’s important to verify that jacoco has been configured correctly in your projects before running Sonarqube analysis: if it’s not working in Sonarqube then it’s something wrong in the configuration for Sonarqube, not in the jacoco configuration, as we’ll see in a minute):
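for example (assuming the Jacoco configuration is in a profile named jacoco, as above, and that example.parent is the aggregator project):

    cd example.parent
    mvn clean verify -Pjacoco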

Make sure that example.tests.report/target/site/jacoco-aggregate/index.html reports some code coverage (in this example, example.plugin1 has some code that is intentionally left uncovered).

Now I assume you already have Sonarqube installed.

Let’s run a first Sonarqube analysis with
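a command along these lines (a sketch; the host URL depends on where your Sonarqube instance runs):

    mvn clean verify -Pjacoco sonar:sonar -Dsonar.host.url=http://localhost:9000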

This is the result:

So Unit Tests are correctly collected! What about Code Coverage? Something is shown, but if you click on that you see some bad surprises:

Code coverage only on tests (which is irrelevant) and no coverage for our SUT (Software Under Test) classes!

That’s because jacoco .exec files are by default generated in the target folder of the tests project, so when Sonarqube analyzes the projects:

  • it finds the jacoco.exec file when it analyzes a tests project, but it can only see the sources of the tests project (not the ones of the SUT)
  • when it analyzes a SUT project, it cannot find any jacoco.exec file.

We could fix this by configuring the maven jacoco plugin to generate jacoco.exec in the SUT project, but then the aggregate report configuration would have to be updated accordingly (while it works out of the box with the defaults). Another way of fixing the problem is to use the Sonarqube maven property sonar.jacoco.reportPaths and “trick” Sonarqube like this (we do that in the parent pom properties section):
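A sketch of that property, assuming the foo/foo.tests naming convention (the exact relative path depends on your layout):

    <properties>
      <sonar.jacoco.reportPaths>../${project.artifactId}.tests/target/jacoco.exec</sonar.jacoco.reportPaths>
    </properties>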

This way, when it analyzes example.plugin1, it will use the jacoco.exec found in the example.plugin1.tests project (if you follow the convention foo and foo.tests, this works out of the box; otherwise, you have to list all the jacoco.exec paths of all the projects in that property, separated by commas).

Let’s run the analysis again:

OK, now code coverage is collected on the SUT classes as we wanted. Of course, now test classes appear as uncovered (remember, when it analyzes example.plugin1.tests it now searches for jacoco.exec in example.plugin1.tests.tests, which does not even exist).

This leads us to another problem: test classes should be excluded from Sonarqube analysis. This works out of the box in standard Maven projects because source folders of SUT and source folders of test classes are separate and in the same project (that’s also why code coverage for pure Maven projects works out of the box in Sonarqube); this is not the case for Eclipse projects, where SUT and tests are in separate projects.

In fact, issues are reported also on test classes:

We can fix both problems by putting these two Sonarqube properties in the tests.parent pom properties section (note the link to the Eclipse bug about this behavior):
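A possible pair of properties (a sketch; I’m assuming the test sources live in the src folder, adjust it to your layout):

    <properties>
      <!-- no main sources to analyze in test projects -->
      <sonar.sources></sonar.sources>
      <!-- treat the src folder as test sources -->
      <sonar.tests>src</sonar.tests>
    </properties>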

This will be inherited by our tests projects and for those projects, Sonarqube will not analyze test classes.

Run the analysis again and see the results: code coverage only on the SUT and issues only on the SUT (remember that in this example MyClass1 is intentionally not completely covered):

You might be tempted to set the property sonar.skip to true for test projects, but then you would lose the collection of JUnit test reports.

The final bit of customization is to exclude the Main.java class from code coverage. We have already configured the jacoco maven plugin to do so, but this won’t be picked up by Sonarqube (that configuration only tells jacoco to skip that class when it generates the HTML report).

We have to repeat that exclusion with a Sonarqube maven property, in the parent pom:
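For example (a sketch; note that the pattern refers to the Java source file, not the compiled class):

    <properties>
      <sonar.coverage.exclusions>**/Main.java</sonar.coverage.exclusions>
    </properties>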

Note that in the jacoco maven configuration we had excluded a .class file, while here we exclude Java files.

Run the analysis again and Main is not considered in code coverage:

Now you can have fun in fixing Sonarqube issues, which is out of the scope of this tutorial 🙂

This example is also analyzed from Travis using Sonarcloud (https://sonarcloud.io/dashboard?id=example%3Aexample.parent).

Hope you enjoyed this tutorial and Happy new year! 🙂

 

Formatting Java method calls in Eclipse

Especially with lambdas, you may end up with a chain of method calls that you’d like to have automatically formatted with each invocation on its own line (except, perhaps, for the very first invocation).

You can configure the Eclipse Java formatter in that respect; you just need to reach the right option (“Force split” is necessary to have each invocation on a separate line):

and then you can have method calls formatted automatically like that:
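For instance, a stream pipeline (a made-up example, not taken from the original settings) ends up formatted like this:

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class FormatterExample {
        public static void main(String[] args) {
            // the formatter puts each invocation of the chain on its own line
            List<String> result = Stream.of("eclipse", "java", "formatter")
                .filter(s -> s.length() > 4)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());
            System.out.println(result);
        }
    }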

 

How to add Eclipse launcher in Gnome dock

In this post I’ll show how to add an Eclipse launcher as a favorite (pinned) application in the Gnome dock (I’m using Ubuntu Artful). This post is inspired by http://blog.ttoine.net/en/2016/06/30/how-to-add-eclipse-neon-launcher-in-gnu-linux-menus-and-launchers/.

First of all, you need to create a .desktop file, where you need to specify the full path of your Eclipse installation:
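Here’s a sketch of such a .desktop file, based on the paths mentioned below (the Comment and Categories lines are my own additions; adjust everything to your installation):

    [Desktop Entry]
    Type=Application
    Name=Eclipse Java
    Comment=Eclipse IDE for Java Developers
    Exec=/home/bettini/eclipse/java-latest-released/eclipse/eclipse
    Icon=/home/bettini/eclipse/java-latest-released/eclipse/icon.xpm
    Terminal=false
    Categories=Development;IDE;Java;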

This refers to my installation of Eclipse, which is in the folder /home/bettini/eclipse/java-latest-released/eclipse; note the executable “eclipse” and the “icon.xpm”. The name “Eclipse Java” is what will appear as the launcher name, both in Gnome applications and later in the dock.

Make this file executable.

Copy this file in your home folder in .local/share/applications.
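For example (assuming you saved the file as eclipse-java.desktop in the current directory; the file name is just my assumption):

    chmod +x eclipse-java.desktop
    cp eclipse-java.desktop ~/.local/share/applications/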

Now in Gnome Activities search for such a launcher and it should appear:

Select it and make sure that Eclipse effectively runs.

Unfortunately, in the dock, there’s no contextual menu for you to add it as a favorite and pin it to the dock:

But you can still add it to the dock favorites (and thus pin it there) by using the corresponding contextual menu that is available when the launcher appears in the Activities:

And there you go: the Eclipse launcher is now on your dock and it’s there to stay 🙂

 

Touchpad gestures in Linux KDE with Libinput-gestures

This post is based on my Dell M3800 with Linux Neon.

KDE already does a good job with touchpad gestures (e.g., two-finger scrolling, 3-finger tap for pasting, etc.), but it does not support 3-finger swipe gestures as in macOS, e.g., for displaying all the windows or for showing the desktop.

Today I tried this utility, Libinput-gestures, and it works like magic! The utility comes with good defaults for typical gestures (including pinch), but I configured it to fit my needs (in particular, I wanted to mimic the macOS behavior for 3-finger swipes: up = display all windows, down = display all windows of the same class, and pinch out = show desktop).

The installation of Libinput-gestures is really easy (just follow the instructions on its web page).

Remember that, first of all, your user must be in the input group, so first run
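for example (this is the command I’d expect; the project’s README has the exact one for your distribution):

    sudo gpasswd -a $USER input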

Then logout from your current session, and login again.

Then, in Ubuntu, it’s just a matter of running
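presumably something like this, to install the tools libinput-gestures relies on (xdotool and wmctrl are what I’d expect it to need; check the project’s README for the exact list):

    sudo apt-get install libinput-tools xdotool wmctrl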

and install the software like this (you need git):
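for instance (a sketch based on the project’s GitHub instructions; the install command may differ depending on the version):

    git clone https://github.com/bulletmark/libinput-gestures.git
    cd libinput-gestures
    sudo ./libinput-gestures-setup install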

You can already start the program like this
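that is (the setup script name comes from the project’s documentation):

    libinput-gestures-setup start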

and if you want it to be started at login time, then run
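for example, with the same setup script:

    libinput-gestures-setup autostart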

The default gestures are in /etc/libinput-gestures.conf. If you want to create your own custom gestures then copy that file to ~/.config/libinput-gestures.conf and edit it.

These are the lines I changed in my configuration (remember that each time you modify the configuration you need to restart libinput-gestures, i.e., instead of start in the command line above, just use restart):
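They could look roughly like the following (a sketch: the gesture syntax follows libinput-gestures’ default configuration, and the KWin shortcuts used here, Ctrl+F10 for Present Windows, Ctrl+F7 for Present Windows of the current application, and Ctrl+F12 for Show Desktop, are assumptions you should check against your own KDE shortcut settings):

    # 3-finger swipe up: display all windows (Present Windows)
    gesture swipe up 3 xdotool key ctrl+F10
    # 3-finger swipe down: display all windows of the same class
    gesture swipe down 3 xdotool key ctrl+F7
    # pinch out: show desktop
    gesture pinch out xdotool key ctrl+F12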

You only need to know the keyboard shortcuts of the actions you want to associate with the gestures. In that respect, you might want to have a look at the current shortcuts in KDE Settings (the interesting components are “KWin” and “Plasma”):

This is a video demoing the gestures:

Happy gestures! 🙂

JaCoCo Code Coverage and Report of multiple Eclipse plug-in projects

In this tutorial I’ll show how to use Jacoco with Maven/Tycho to create a code coverage report of multiple Eclipse plug-in projects.

The code of the example is available here: https://github.com/LorenzoBettini/tycho-multiproject-jacoco-report-example.

This is the structure of the projects:

jacoco-report-projects

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The Maven parent pom file is written such that Jacoco is enabled only when the profile “jacoco” is explicitly activated:
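A sketch of such a profile (the version number and the Main exclusion are just examples):

    <profile>
      <id>jacoco</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.7.9</version>
            <configuration>
              <!-- exclude the class with the main method from coverage -->
              <excludes>
                <exclude>**/Main.class</exclude>
              </excludes>
            </configuration>
            <executions>
              <execution>
                <goals>
                  <goal>prepare-agent</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>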

This is just an example of configuration; you might want to tweak it as you see fit for your own projects (in this example I also created a Main.java with a main method that I exclude from the coverage). By default, the jacoco-maven-plugin will “prepare” the Jacoco agent in the property tycho.testArgLine (since our test projects are Maven projects with packaging eclipse-test-plugin); since tycho.testArgLine is automatically used by the tycho-surefire-plugin and since we have no special test configuration, the pom.xml of our test projects is as simple as this:
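something along these lines (groupIds and versions are placeholders):

    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>example</groupId>
        <artifactId>example.parent</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../example.parent</relativePath>
      </parent>
      <artifactId>example.plugin1.tests</artifactId>
      <packaging>eclipse-test-plugin</packaging>
    </project>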

If you need a custom configuration, then you have to explicitly mention ${tycho.testArgLine} in the <argLine>.

Now we want to create an aggregate Jacoco report for the classes in the plugin1 and plugin2 projects (tested by plugin1.tests and plugin2.tests, respectively); each test project will generate a jacoco.exec file with coverage information. Before Jacoco 0.7.7, creating an aggregate report wasn’t that easy: it required storing all coverage data in a single .exec file and then using an ant task (with a manual configuration specifying all the source file and class file paths). In 0.7.7, the jacoco:report-aggregate goal was added, which makes creating a report really easy!

Here’s an excerpt of the documentation:

Creates a structured code coverage report (HTML, XML, and CSV) from multiple projects within reactor. The report is created from all modules this project depends on. From those projects class and source files as well as JaCoCo execution data files will be collected. […] This also allows to create coverage reports when tests are in separate projects than the code under test. […]

Using the dependency scope allows to distinguish projects which contribute execution data but should not become part of the report:

  • compile: Project source and execution data is included in the report.
  • test: Only execution data is considered for the report.

So it’s just a matter of creating a separate project (I called it example.tests.report) where we do the following (a pom sketch is shown after the list):

  • configure the report-aggregate goal (in this example I bind that to the verify phase)
  • add as dependencies with scope compile the projects containing the actual code and with scope test the projects containing the tests and .exec data
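A sketch of the report project’s pom (groupIds, versions and module names are assumptions based on the example):

    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>example</groupId>
        <artifactId>example.parent</artifactId>
        <version>1.0.0-SNAPSHOT</version>
      </parent>
      <artifactId>example.tests.report</artifactId>
      <packaging>pom</packaging>

      <dependencies>
        <!-- scope compile: these sources and classes become part of the report -->
        <dependency>
          <groupId>example</groupId>
          <artifactId>example.plugin1</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <scope>compile</scope>
        </dependency>
        <!-- scope test: these only contribute jacoco.exec execution data -->
        <dependency>
          <groupId>example</groupId>
          <artifactId>example.plugin1.tests</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <scope>test</scope>
        </dependency>
        <!-- likewise for example.plugin2 and example.plugin2.tests -->
      </dependencies>

      <build>
        <plugins>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <executions>
              <execution>
                <id>report-aggregate</id>
                <phase>verify</phase>
                <goals>
                  <goal>report-aggregate</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </project>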

That’s all!

Now run this command in the example.parent project:
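that is, something like (assuming the jacoco profile defined above):

    mvn clean verify -Pjacoco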

and when the build terminates, you’ll find the HTML code coverage report for all your projects in the directory (again, you can configure jacoco with a different output path, that’s just the default):

/example.tests.report/target/site/jacoco-aggregate

jacoco-report

Since, besides the HTML report, jacoco will create an XML report, you can use any tool that keeps track of code coverage, like the online free solution Coveralls (https://coveralls.io/). Coveralls is automatically accessible from Travis (I assume that you know how to connect your github projects to Travis and Coveralls). So we just need to configure the coveralls-maven-plugin with the path of the Jacoco xml report (I’m doing this in the parent pom, in the pluginManagement section in the jacoco profile):
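A sketch of that configuration (the coordinates are those of the org.eluder.coveralls plugin; adjust the report path to where the aggregate jacoco.xml is generated in your layout):

    <plugin>
      <groupId>org.eluder.coveralls</groupId>
      <artifactId>coveralls-maven-plugin</artifactId>
      <version>4.3.0</version>
      <configuration>
        <jacocoReports>
          <!-- path of the XML aggregate report, relative to the build root -->
          <jacocoReport>${project.basedir}/example.tests.report/target/site/jacoco-aggregate/jacoco.xml</jacocoReport>
        </jacocoReports>
      </configuration>
    </plugin>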

And here’s the Travis file:
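A minimal sketch of it (the actual file in the example may contain more, e.g., caching and JDK selection):

    language: java

    script:
      - mvn clean verify -Pjacoco coveralls:report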

This is the coveralls page for the example project https://coveralls.io/github/LorenzoBettini/tycho-multiproject-jacoco-report-example. And an example of coverage information:

jacoco-coveralls

That’s all!

Happy coverage! 🙂

The second edition of the Xtext book has been published

The second edition of the Xtext book, Implementing Domain-Specific Languages with Xtext and Xtend, was published at the end of August: https://www.packtpub.com/web-development/implementing-domain-specific-languages-xtext-and-xtend-second-edition. So… get it while it’s hot 🙂

4965OS_5541_Implementing Domain Specific Languages with Xtext and Xtend - Second Edition

Please, see my previous post for details about the novelties in this edition.

Sources of the examples are on github: https://github.com/LorenzoBettini/packtpub-xtext-book-2nd-examples.

Hope you’ll enjoy the book!

The forthcoming second edition of the Xtext book

The second edition of the Xtext book should be published soon! In the meantime, it is already available for preorder. At the time of writing, you can benefit from discounts and preorder it for $10.

xtext-book-2nd-edition-forthcoming

I’ll detail the differences and novelties of this second edition.

But, first things first! A huge thank you to those who reviewed this second edition, and a special thank you to Sven Efftinge for writing the foreword. I am also grateful to itemis Schweiz and, in particular, to Serano Colameo for sponsoring the writing of this book.

While working on this second edition, I updated all the contents of the previous edition in order to make them up to date with respect to what Xtext provides in the most recent release (at the time of writing, it is 2.10).

All the examples have been rewritten from scratch. The main examples, Entities, Expressions and SmallJava, are still there, but many parts of the DSLs, including their features and implementations, have been modified and improved, focusing on efficient implementation techniques and the best practices I have learned over these years. Thus, while the features of most of the main example DSLs of the book are the same as in the first edition, their implementation is completely new.

Moreover, in the last chapters, many more examples are introduced.

Chapter 11 on Continuous Integration, which in the previous edition was called “Building and Releasing”, has been completely rewritten and is now based on Maven/Tycho and on Gradle, since Xtext now provides a project wizard that also creates a build configuration for these build tools. Building with Maven/Tycho is described in more detail in the chapter, while Gradle is described briefly. This new chapter also briefly describes the new Xtext features: the DSL editor on the web and on IntelliJ.

I also added a brand new chapter at the end of the book, Chapter 13 “Advanced Topics”, with much more advanced material and techniques that are useful when your DSL grows in size and features. For example, the chapter will show how to manually maintain the Ecore model for your DSL in several ways, including Xcore. This chapter also presents an advanced example that extends Xbase, including the customization of its type system and compiler. An introduction to Xbase is still presented in Chapter 12, as in the previous edition, but with more details.

As in the previous edition, the book fosters unit testing a lot. An entire chapter, Chapter 7 “Testing”, is still devoted to testing all aspects of an Xtext DSL implementation.

Most chapters, as in the previous edition, still have a tutorial nature.

Summarizing, while the titles and subjects of most chapters are still the same, their contents have been completely reviewed, extended and, hopefully, improved.
If you enjoyed the first edition of the book and found it useful, I hope you’ll like this second edition even more.

HiDPI in KDE Plasma

HiDPI support in KDE Plasma has been recently improved! I’m afraid what’s not improved is the procedure for using that. In this post I’ll detail the steps to use HiDPI with KDE if you have a high resolution display (for example, I have that in my Linux Dell M3800).

Remember that the settings you change will not be applied completely until you logout and login again into KDE.

First of all, you need to go into Settings, then “Display and Monitor” -> “Display Configuration”. If you scroll down, you’ll see a “Scale Display” button:

kde_hidpi_1

Click on that and, in the “Screen Scaling” dialog, drag the “Scale” slider to the middle, corresponding to a scale factor of 2, and press OK.

kde_hidpi_2

Then go back to the main page of Settings, select “Font”, and force the font DPI to 168 (or even more if you want).

kde_hidpi_3

Apply the settings, logout and login again into KDE and you’ll enjoy your HiDPI display with a scale factor of 2, which basically means it will be usable 🙂

Be warned: KDE applications will look correct, but there will still be other applications which might not have been implemented with HiDPI in mind… and they’ll still look horrible even with the scaling you set.

Publish an Eclipse p2 composite repository on Bintray

In a previous post I showed how to manage an Eclipse composite p2 repository and how to publish an Eclipse p2 composite repository on Sourceforge. In this post I’ll show a similar procedure to publish an Eclipse p2 composite repository on Bintray. The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted).

The complete example is at https://github.com/LorenzoBettini/p2composite-bintray-example.

First of all, this procedure is quite different from the ones shown in other blogs (e.g., this one, this one and this one): in those approaches the p2 metadata (i.e., artifacts.jar and content.jar) are uploaded independently from a version, always in the same directory, thus overwriting the existing metadata. This leads to the fact that only the latest version of published features and bundles will be available to the end user. This is quite against the idea that old versions should still be available, and in general, all the versions should be available for the end users, especially if a new version has some breaking change and the user is not willing to update (see p2’s do’s and do not’s). For this reason, I always publish p2 composite repositories.

Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must be available, each one with their own p2 metadata that should never be overwritten.

On the contrary, the metadata that we will overwrite will be the one for the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.

In this example, all the binary artifacts are meant to be made available from: https://dl.bintray.com/lorenzobettini/p2-composite-example/. (NOTE: the artifacts are not effectively available from that URL anymore).

Directory Structure

What I aim at is to have the following remote paths on Bintray:

  • releases: in this directory all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20160129-1616/ etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory the composite metadata will be uploaded. The URL https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/ should be used by your Eclipse users to install the features in their Eclipse or for target platform resolution (depending on the kind of projects you’re developing). All versions will be available from this composite update site; I call this the main composite. Moreover, you can provide the URL to a child composite update site that includes all versions for a given major.minor stream, e.g., https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.0/, https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.1/, etc. I call each one of these a child composite.
  • zipped: in this directory we will upload the zipped p2 repository for each version.

Summarizing, we’ll end up with a remote directory structure like the following:
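Something along these lines (a sketch, with example version numbers and annotations of mine):

    releases/
      1.0.0.v20160129-1616/     <- simple p2 repository for that version
      1.1.0.v20160310-1723/
      ...
    updates/
      compositeContent.xml      <- main composite metadata
      compositeArtifacts.xml
      1.0/
        compositeContent.xml    <- child composite for the 1.0.x stream
        compositeArtifacts.xml
      1.1/
        ...
    zipped/
      (zipped p2 repository for each version)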

Uploading using REST API

In the posts I mentioned above, the typical curl invocation for uploading contents with the REST API uses one URL shape for metadata and a different one for features and plugins.

But this has the drawback I was mentioning above.

Thanks to the Bintray Support, I managed to use a different scheme that allows me to store p2 metadata for a single p2 repository in the same directory of the p2 repository itself and to keep those metadata separate for each single release.

To achieve this, we need to use another URL scheme for uploading, using matrix params options or header options.

This means that we’ll upload everything with this URL

On the contrary, for uploading the p2 composite metadata, we’ll use the scheme of the other approaches, i.e., we will not associate it to any specific version; we just need to specify the desired remote path where we’ll upload the main and the child composite metadata.

Building Steps

During the build, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build, which will rely on some Ant scripts can be summarized as follows:

  • Retrieve the remote composite metadata compositeContent/Artifacts.xml, both for the main composite and the child composite. If these metadata cannot be found remotely, we fail gracefully: it means that it is the first time we release, or, if only the child composite cannot be found, that we’re releasing a new major.minor version. These will be downloaded in the directories target/main-composite and target/child-composite respectively. These will be created anyway.
  • Preprocess the possibly downloaded composite metadata: if the property p2.atomic.composite.loading is present, we must temporarily set it to false, otherwise we will not be able to add additional elements to the composite site with the p2 ant tasks.
  • Update the composite metadata using the version information passed from the Maven/Tycho build using the p2 Ant tasks for composite repositories
  • Post process the composite metadata (i.e., put the property p2.atomic.composite.loading above to true, see https://bugs.eclipse.org/bugs/show_bug.cgi?id=356561 for further details about this property). UPDATE: Please have a look at the comment section, in particular, the comments from pascalrapicault, about this property.
  • Upload everything to bintray: both the new p2 repository, its zipped version and all the composite metadata.

IMPORTANT: the pre- and post-processing of composite metadata that we’ll implement assumes that such metadata are not compressed. Anyway, I always prefer not to compress the composite metadata, since that makes it easier, later, to manually change or review them.

Technical Details

You can find the complete example at https://github.com/LorenzoBettini/p2composite-bintray-example. Here I’ll sketch the main parts. First of all, all the mechanisms for updating the composite metadata and pushing to Bintray (i.e., the steps detailed above) are in the project p2composite.example.site, which is a Maven/Tycho project with eclipse-repository packaging.

The pom.xml has some properties that you should adapt to your project, and some other properties that can be left as they are if you’re OK with the defaults:

If you change the default remote paths it is crucial that you update the child.repository.path.prefix consistently. In fact, this is used to update the composite metadata for the composite children. For example, with the default properties the composite metadata will look like the following (here we show only compositeContent.xml):

You can also see that two crucial properties, bintray.user and, in particular, bintray.apikey should not be made public. You should keep these hidden, for example, you can put them in your local .m2/settings.xml file, associated to the Maven profile that you use for releasing (as illustrated in the following). This is an example of settings.xml
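A sketch of such a settings.xml (the profile id and its activation are up to you; here I simply reuse the release-composite name mentioned below, and the user and API key are of course placeholders):

    <settings>
      <profiles>
        <profile>
          <id>release-composite</id>
          <properties>
            <bintray.user>YOUR_BINTRAY_USER</bintray.user>
            <bintray.apikey>YOUR_BINTRAY_APIKEY</bintray.apikey>
          </properties>
        </profile>
      </profiles>
    </settings>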

In the pom.xml of this project there is a Maven profile, release-composite, that should be activated when you want to perform the release steps described above.

We also make sure that the generated zipped p2 repository has a name with fully qualified version

In the release-composite Maven profile, we use the maven-antrun-plugin to execute some ant targets (note that the Maven properties are automatically passed to the Ant tasks): one to retrieve the remote composite metadata, if they exist, and the other one as the final step to deploy the p2 repository, its zipped version and the composite metadata to Bintray:

The Ant tasks are defined in the file bintray.ant. Please refer to the example for the complete file. Here we sketch the main parts.

This Ant file relies on some properties with default values, and other properties that are expected to be passed when running these tasks, i.e., from the pom.xml

To retrieve the existing remote composite metadata we execute the following, using the standard Ant get task. Note that if there is no composite metadata (e.g., it’s the first release that we execute, or we are releasing a new major.minor version so there’s no child composite for that version) we ignore the error; however, we still create the local directories for the composite metadata:

For preprocessing/postprocessing composite metadata (in order to deal with the property p2.atomic.composite.loading as explained in the previous section) we have

Finally, to push everything to Bintray we execute curl with appropriate URLs, as we described in the previous section about REST API. The single tasks for pushing to Bintray are similar, so we only show one for uploading the p2 repository associated to a specific version, and the one for uploading p2 composite metadata. As detailed at the beginning of the post, we use different URL shapes.

To update composite metadata we execute an ant task using the tycho-eclipserun-plugin. This way, we can execute the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant tasks for managing composite repositories.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly on one (long) line.

The file packaging-p2-composite.ant is similar to the one I showed in a previous post. We use the p2 Ant tasks for adding a child to a composite p2 repository (recall that if there is no existing composite repository, the task for adding a child also creates new compositeContent.xml/Artifacts.xml; if a child with the same name exists the ant task will not add anything new).

Removing Released artifacts

In case you want to remove an existing released version, since we upload the p2 repository and the zipped version as part of a package’s version, we just need to delete that version using the Bintray Web UI. However, this procedure will never remove the metadata, i.e., artifacts.jar and content.jar. The same holds if you want to remove the composite metadata. For these metadata files, you need to use the REST API, e.g., with curl. I put a shell script in the example to quickly remove all the metadata files from a given remote Bintray directory.

Performing a Release

For performing a release you just need to run
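presumably a command along these lines (the profile name is the one mentioned above; the exact goals may differ):

    mvn clean verify -Prelease-composite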

on the p2composite.example.tycho project.

Concluding Remarks

As I said, the procedure shown in this example is meant to be easily reusable in your projects. The ant files can be simply copied as they are. The same holds for the Maven profile. You only need to specify the Maven properties that contain values for your very project, and adjust your settings.xml with sensitive data like the bintray APIKEY.

Happy Releasing! 🙂

Flickering for Intel graphic card in Linux 4.2

After I upgraded my Dell Precision M3800 to the new Kubuntu Wily 15.10, I had a very bad surprise: the screen was continuously flickering, to the point of being unusable. This happens only if you are NOT using the default highest resolution 3200×1800, which, at least for me, is really too small.

I thought it was a problem with the new Plasma, but the culprit is the Intel i915 driver in the 4.2 kernel which comes with the new version of (K)ubuntu, as reported in this bug: https://bugs.freedesktop.org/show_bug.cgi?id=91393. In particular, two commits seem to be the cause, and reverting them fixes the problem (hopefully the whole bug will be fixed).

I’m detailing the procedure to get the kernel sources, revert the two commits, and compile your own fixed kernel:

  • You need git to revert the patches (though you’re not getting the kernel sources from the git repository), so you need to install that if it’s not already installed.
  • Install the kernel sources for your current kernel:
    apt-get source linux-image-$(uname -r)
    this will unpack the kernel sources in the current directory (you don’t need to use sudo for this; if you use sudo, you may want to change the owner of the sources’ directory to match your user, so that you won’t need to compile the kernel as root)
  • Install required packages to compile the kernel
    sudo apt-get build-dep linux-image-$(uname -r)
  • Install other required packages (needed when you install your compiled kernel later):
    sudo apt-get install linux-cloud-tools-common linux-tools-common
  • Save the above mentioned two commits into two local files, in the following order (e.g., name them patch1.txt and patch2.txt):
    https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/patch/?id=4e96c97742f4201edf1b0f8e1b1b6b2ac6ff33e7
    https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/patch/?id=5fa836a9d85975c5f0f1219669523c1f0ac64349
  • Enter in the directory where the kernel sources have been unpacked and revert the two commits in the reversed order:
    git apply -R patch2.txt
    git apply -R patch1.txt
  • Run the following commands in the kernel sources directory as described here:
    chmod a+x debian/scripts/*
    chmod a+x debian/scripts/misc/*
    fakeroot debian/rules clean
  • “In order to make your kernel “newer” than the stock Ubuntu kernel from which you are based you should add a local version modifier. Add something like “+test1″ to the end of the first version number in the debian.master/changelog file, before building. This will help identify your kernel when running as it also appears in uname -a.”
  • Compile the kernel (this will take some time, and require some free space on your hard disk):
    fakeroot debian/rules binary-headers binary-generic
  • This will create in the end some .deb files in the parent folder; install them all with dpkg, e.g., with
    sudo dpkg -i linux*4.2*.deb
  • reboot and enjoy your Linux without flickering 🙂

Oomph setup for Xtext projects

In this blog post I’ll describe my experience in preparing an Oomph setup for a non-trivial Xtext project, Xsemantics.

This setup was kind of challenging because of the following features of my project, but I guess most of them can be found in any Xtext project:

  • generated sources are not stored in the Git repository (these include Xtend generated Java files and Java files generated during the MWE2 workflow)
  • the MWE2 workflow(s) must be run during the workspace setup (I have several DSLs in this project)
  • one of the DSLs “inherits” from another DSL, so when running the MWE2 of the inheriting DSL, the parent DSL must have already been built (i.e., its Java classes must be compiled)

I hope this post can be useful for other Xtext developers.

This will not be a tutorial: it will be a collection of hints and procedures for preparing the final setup which can be found here: https://github.com/LorenzoBettini/xsemantics/blob/master/devtools/it.xsemantics.workspace/Xsemantics.setup.

By the way, Xsemantics setup is part of the official Oomph catalog, so you can try it yourself (it’s in the “Github projects” node).

This blog post assumes that you’re already familiar with Oomph and its authoring system.

The initial setup file can be created with the Oomph wizard, so I won’t talk about that.

Source folders in the repository

I found that it is better for all the source folders, including the source folders containing generated code, to be in the git repository. By “source folder” I mean a folder in an Eclipse project which is in the build path as a source folder. Thus, src-gen and xtend-gen should be in the git repository, but NOT their contents (at least, that’s what I want). Remember that git does not store empty folders, so you need to put in such folders a .gitignore stating to ignore everything but itself:
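That is, a .gitignore with contents like these (the usual trick for keeping an otherwise empty folder in git):

    *
    !.gitignore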

This way, when the containing projects are imported in Eclipse, you won’t risk the Java compiler stopping immediately because of a missing source folder.

Note that this does not seem to always be required: there are projects that can be built anyway, but I found it easier to always include them all.

If you put the .gitignore in more than one *-gen folder you’ll get a warning from Eclipse since it tries to copy those files to the bin folder and it would end up with duplicates. You can avoid this warning by setting the preference “Java Compiler” => “Building” => “Output folder” => “Filtered resources” as shown in the screenshot (I also avoid copying other files into the bin folder):

oomph-xsemantics1

Use platform URI in MWE2

You should change the grammarURI in your .mwe2 files: they should be platform URIs as opposed to classpath URIs. Otherwise, the MWE2 workflows will fail to find the Xtext grammars when run during the Oomph setup. An example is shown in the following screenshot

oomph-xsemantics2

Creating a “root” feature for Targlets task

This is not strictly related to Xtext. For the targlets task, in order to specify my own features and bundles, I prefer to specify one single feature which acts as a root for all my Eclipse projects that must be imported in the workspace and that participate in the targ(l)et platform via their requirements. Remember that Oomph will resolve dependencies transitively also for your projects.

To this aim, I define a feature project, e.g., it.xsemantics.workspace (which by the way also contains the Oomph setup file).

In this feature project I specify feature and bundle dependencies to all my other projects (using a feature project just makes the dependency specification easier) in the shape of included plug-ins and included features. Typically the included features are the installable features that you deploy to an update site, and the included plug-ins are the test projects (which are not part of installable features):

oomph-xsemantics3 oomph-xsemantics4

You only need to make sure that transitively these inclusions span all your project’s features and bundles.

However, this won’t help for projects that are neither plug-in projects nor feature projects, like, e.g., all releng projects. Of course you could use the “Project Import” task, but I prefer to create a new “Component Extension” file:

oomph-xsemantics5

Here you can specify additional dependencies, in particular, using the type “Project” to refer to Eclipse projects which are not plug-in projects (nor feature projects):

oomph-xsemantics6

Now, when you define your “Targlets” you can refer to this root feature project, representing all your source projects. Then you can specify additional features for your target platform as usual:

oomph-xsemantics7

Use variables for Xtext versions

Since I want to have separate Eclipses and workspaces for developing Xsemantics against the current version of Xtext, 2.8.4, and the development version, 2.9.0 (taken from the nightly update sites), I find it very important to refer to the Xtext update sites using Oomph variables (in my case xtext.site and mwe2.site):

oomph-xsemantics8

The values of such variables are defined in two separate Git branches specifications (you see I have variables also for API baseline settings, but I won’t talk about them since they’re not related to the aim of this post):

oomph-xsemantics9

I’ll use those variables also for the “P2 director” tasks; this will ensure that the Xtext plug-ins I have in Eclipse will be the same as the ones in the target platform:

oomph-xsemantics10

Running MWE2

This was the most challenging part: although Oomph provides a “Launch” task, running mwe2 workflows during the workspace setup has always been a problem (at least, that’s what I find in most places on the web).

First of all, you need to run the mwe2 launch AFTER the “Targlets” task and after a “Project Build” task

oomph-xsemantics11 oomph-xsemantics12

For the “Launch” task, you need to use the name of the .launch file, without .launch.

And here’s another small problem: of course the “Project Build” task will leave the workspace full of error markers after the execution, since the generated Java files are not there yet; so the launch of the mwe2 workflow will make the famous popup dialog appear, asking whether you want to cancel the launch because of errors in the workspace… this is very annoying.

To avoid this, you can put a “Preference” task to always disable that dialog (you may want to re-enable that check later manually, after the workspace is provisioned):

oomph-xsemantics13

Now the launch will start automatically without popup dialogs 🙂

By the way, don’t get fooled by the property name “cancel_launch…”; this actually corresponds to this preference “Continue launch…”:

oomph-xsemantics14

Dealing with DSL dependencies

One of the Xsemantics DSL examples, “FJ cached”, extends another DSL example, “FJ”; thus, before running the MWE2 for “FJ cached”, we must make sure that “FJ” has already been built, i.e., its MWE2 workflow has been executed and its Java sources have been compiled.

So we must insert another “Project Build” task at the right position:

oomph-xsemantics15

That’s all!

Now the whole setup procedure will run smoothly, and at the end all the projects will be imported and will show no sign of error (not even a warning) 😉

Other features

This setup also features API baseline setting, and Mylyn Github query.

You may want to try it yourself; as stated above, Xsemantics is part of the official Oomph catalog. The whole procedure might take a few minutes to conclude. During the procedure, as always, you might be asked a few passwords, depending on the choices you made before starting the setup.

Conclusions

Oomph is great great great! 🙂 Ed Merks and Eike Stepper really made a wonderful project 🙂

I now started to port all my Xtext projects to Oomph. By the way, if your Xtext project is simpler (i.e., no DSL dependencies) you may want to have a look at another example, Java–, which is also part of the official Oomph catalog.

Happy Oomphing! 😉

 

Running SWTBot tests in Travis

The problem I was having when running SWTBot tests in Travis CI was that I could not use the new container-based infrastructure of Travis, which allows caching things like the local Maven repository. This was not possible since, to run SWTBot tests, you need a window manager (in Linux, you can use metacity), and so you had to install it during the Travis build; this requires sudo, and using sudo prevents the use of the container-based infrastructure. Not using the cache means that each build would download all the Maven artifacts from scratch.

Now things have changed 🙂

When running in the container-based infrastructure, you’re still allowed to use Travis’ APT sources and packages extensions, as long as the package you need is in their whitelist. Metacity was not there, but I opened a request for that, and now metacity is available 🙂

Now you can use the container-based infrastructure and install metacity together (note that you won’t be able to cache installed apt packages, so each time the build runs, metacity will have to be reinstalled, but installing metacity is much faster than downloading all the Maven/Tycho artifacts).

The steps to run SWTBot tests in Travis can be summarized as follows:

I left the old steps “before_install” commented out, just as a comparison.

  • “sudo: false” enables the container based infrastructure
  • “cache:” ensures that the Maven repository is cached
  • “env:” enables the use of graphical display
  • “addons:apt:packages” uses the extensions that allow you to install whitelisted APT packages (metacity in our case).
  • “before_script:” starts the virtual framebuffer and then metacity.

Then, you can specify the Maven command to run your build (here are some examples):
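for instance, in the script section (just a sketch; your own goals and profiles will differ):

    script:
      - mvn clean verify
      # or, e.g., with code coverage enabled:
      # - mvn clean verify -Pjacoco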

Happy SWTBot testing! 🙂

 

 

Using the new Eclipse Installer

I’ve just started using the brand new Eclipse installer, and I’d like to report my experiences here. First of all, a big praise to Ed Merks and Eike Stepper for creating Oomph, on which the installer is based. 🙂

First of all, the installer is currently available in the “Developer Builds” section:

eclipse-installer1

Once you downloaded it and extracted it, just run the executable oomph:

eclipse-installer2

If you see an exclamation mark (on the top right corner), click on it: you’ll see that some updates are available, so update it right away, and when it’s done, press OK to restart it.

The very same top right corner also opens a menu for the customization of some features; the one I prefer is the Bundle Pool, a cool feature that has been in Eclipse for so many years and, I’m afraid, has been very badly advertised!

“p2 natively supports the notion of bundle pooling. When using bundle pooling, multiple applications share a common plugins directory where their software is stored. There is no duplication of content, and no duplicated downloads when upgrading software.”

One of the cool things of Oomph is that it natively and automatically supports bundle pools, it makes it really easy to manage them and makes installation faster and with less space requirements (what’s already been downloaded and installed won’t have to be downloaded and installed again for further Eclipse installations).

eclipse-installer3

If you select that menu item, you can manage your bundle pools; the installer already detected existing bundle pools (I’ve been using them myself, manually, for some time now, and it detected that):

eclipse-installer4

For this blog post I will create another bundle pool, just for testing. To create a new bundle pool, you first need to create a new p2 agent; the agent is responsible for managing the bundle pool and for keeping track of all the bundles that a specific Eclipse installation requires (this is also known as a p2 profile).

So I select “New Agent…” and choose a location in my hard disk; this will also set a bundle pool:

eclipse-installer5

Just for demonstration, I’ll select the “pool”, “Delete…”, and create a “New Bundle Pool…” for the new agent, in another directory:

eclipse-installer6

Then I select the new bundle pool, and press “OK”.

From now on, all the installations will be managed by the new agent, and all bundles will be stored in the new bundle pool.

OK, now, back to the main window, let’s start installing “Eclipse IDE for Java Developers”

In the next windows, I choose to install the new Eclipse in a different folder from the proposed default:

eclipse-installer7

Let’s press “INSTALL”, and accept the LICENSE, the installation starts:

eclipse-installer8

You’ll see that the installer is really quick (as far as I know, Oomph improved p2 internal mechanisms). It only took about a minute to install this Eclipse on my computer.

Then, you’re ready to launch this installation, or see the installation log.

eclipse-installer9

But first, let’s have a look at the directory layout:

eclipse-installer10

you see that the installed eclipse does not have the typical directory structure: it has no “features”/”plugins” directories: these are in the shared bundle pool. Also note that the p2 agent location has a directory representing the profile of the installed Eclipse.

Let’s try and install another Eclipse, e.g., the “Eclipse DSL Tools” (what else, if not the one with the cool Xtext framework? 😉)

The dialog proposes an installation directory based on my previous choice; I also select “Luna” as the platform:

eclipse-installer11

Let’s press “INSTALL”… WOW! This time it’s even faster! You know why: only the new bundles are downloaded, everything else is shared. This also means: less space wasted on your hard disk! 🙂

But there are cooler things: Bundle pool management!

Go back to the “Bundle Pool Management” dialog, select the checkbox “Show Profiles” and you see the profiles handled by the current agent:

eclipse-installer12

Select the agent and press “Analyze…”

You can see the bundles used by which profile:

eclipse-installer13

Hope you enjoy this new installer! 🙂

 

 

 

 

Deploy your own custom Eclipse

This is the follow-up of my previous post about building a custom Eclipse distribution. In this post I’ll show how to deploy the p2 site and the zipped products on Sourceforge. Concerning the p2 site, I’ll use the same technique I showed in another post, with some modifications, for building a composite update site and deploying it with rsync.

In particular, we’ll accomplish several tasks:

  • creating and deploying the update site with only the features (without the products)
  • creating and deploying the update site including product definition and the zipped provisioned products
  • creating a self-contained update site (including all the dependencies)
  • providing an ant script for installing your custom Eclipse from the net

The code of the example can be found at: https://github.com/LorenzoBettini/customeclipse-example. In particular, I’ll start from where I left off in the previous post.

The source code assumes a specific remote directory on Sourceforge, which is part of one of my Sourceforge projects and is writable only with my username and password. If you want to test this example, you can simply modify the property remote.dir in the parent pom, specifying a local path on your computer (or pass a value to the maven command with the syntax -Dremote.dir=<localpath>). Indeed, rsync can also synchronize two local directories.

Recall that when you perform a synchronization, specifying the wrong local directory might lead to the complete deletion of that directory. Moreover, source and destination URLs in rsync have different semantics depending on whether they terminate with a slash or not, so make sure you understand them if you need to customize this ant file or to pass special URLs.

Creating and Deploying the p2 composite site

This part reuses most of what I showed in the previous posts.

In this blog post we want to be able to add a new p2 site to the composite update site (and deploy it) for two different projects:

  • customeclipse.example.site: This is the update site with only our features and bundles
  • customeclipse.example.ide.site: This is the update site with our features and bundles and the Eclipse product definition.

To reuse the ant files for managing the p2 composite update site and syncing it with rsync, and the Maven executions that use such ant files, we put the ant files in the parent project customeclipse.example.tycho, and we configure the Maven executions in the pluginManagement section of the parent pom.

We also put in the parent pom all the properties we’ll use for the p2 composite site and for rsync (again, please have a look at the previous posts for their meaning).

The pluginManagement section contains the configuration for managing the composite update site.

ATTENTION: the <appArgLine> in the following snippet is quite long; in your pom.xml it must be written on exactly one (long) line.
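The original snippet is not reproduced here; as a rough sketch (assuming, as in the previous posts, that the composite site is managed by running the p2 ant tasks through the tycho-eclipserun-plugin; the ant file name packaging-p2composite.ant, its target p2.composite.add and the -D properties are placeholders to be aligned with your own ant file and parent pom properties), the configuration in pluginManagement might look like this:

<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <!-- sketch only: the buildfile, its target and the -D properties below are placeholders -->
    <appArgLine>-application org.eclipse.ant.core.antRunner -buildfile packaging-p2composite.ant p2.composite.add -Dsite.label="${site.label}" -Dproject.build.directory=${project.build.directory} -DunqualifiedVersion=${unqualifiedVersion} -DbuildQualifier=${buildQualifier}</appArgLine>
    <repositories>
      <repository>
        <id>luna</id>
        <layout>p2</layout>
        <url>http://download.eclipse.org/releases/luna</url>
      </repository>
    </repositories>
    <dependencies>
      <dependency>
        <artifactId>org.eclipse.ant.core</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.apache.ant</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.core.feature</artifactId>
        <type>eclipse-feature</type>
      </dependency>
    </dependencies>
  </configuration>
  <executions>
    <execution>
      <id>add-p2-composite-repository</id>
      <phase>package</phase>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
    </execution>
  </executions>
</plugin>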

The pluginManagement section also contains the configuration for updating and committing the composite update site to Sourceforge.

Now, we can simply activate such plugins in the build sections of our site projects described above.

In particular, we activate such plugins only inside profiles; for example, in the customeclipse.example.site project they are activated in the release-composite and deploy-composite profiles.

In customeclipse.example.ide.site we have similar sections, but the profiles are called differently, release-ide-composite and deploy-ide-composite, respectively.

So, if you want to update the p2 composite site with a new version containing only the features/bundles and deploy it on Sourceforge, you need to run maven as follows:
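The exact invocation depends on how you named things in your pom, but with the profiles mentioned above it is something like:

mvn clean verify -Prelease-composite -Pdeploy-composite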

If you want to do the same, including the custom product definitions, you need to run maven as follows (the additional build-ide profile is required because customeclipse.example.ide.site is included as a Maven module only when that profile is activated; this way, products are created only when that profile is activated, simply because provisioning a product requires some time and we don’t want to do that on normal builds):
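Again as a sketch, with the profile names used in this example:

mvn clean verify -Pbuild-ide -Prelease-ide-composite -Pdeploy-ide-composite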

NOTE: The remote directory on Sourceforge hosting  the composite update site will always be the same. This means that the local composite update site created and updated by both deploy-composite and deploy-ide-composite will be synchronized with the same remote folder.

In the customeclipse.example.ide.site, we added a p2.inf file with touchpoint instructions that add, to our Eclipse products, the update site hosted on Sourceforge: http://sourceforge.net/projects/eclipseexamples/files/customeclipse/updates.
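The p2.inf itself is not shown here; a minimal sketch of such touchpoint instructions, using the standard addRepository touchpoint action (type 0 is the metadata repository, type 1 the artifact repository; colons in the URL must be escaped as ${#58}), would be something like:

instructions.configure = \
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:0,location:http${#58}//sourceforge.net/projects/eclipseexamples/files/customeclipse/updates,name:customeclipse-updates);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:1,location:http${#58}//sourceforge.net/projects/eclipseexamples/files/customeclipse/updates,name:customeclipse-updates);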

Deploying the zipped products

To copy the zipped products on Sourceforge we will still use rsync; actually, we won’t use any synchronization features: we only want to copy the zip files. I could have used the Ant Scp or Sftp tasks, but I experienced many problems with such tasks, so let’s use rsync also for that.

The ant file for rsync is slightly different from the one shown in the previous post, since it has been refactored to pass more parameters to the rsync macro. We still have the targets for update/commit synchronization; we added another target that will be used to simply copy something (i.e., the zipped products) to the remote directory, without any real synchronization. You may want to have a look at the rsync documentation to fully understand the command line arguments.

In the customeclipse.example.ide.site, in the deploy-ide-composite profile, we configure another execution of the maven-antrun-plugin (recall that in this profile the rsync synchronization configured in the parent’s pom pluginManagement section is also executed); this further execution will copy the zipped products to a remote folder on Sourceforge (as detailed in the previous post, you first need to create such a folder using the Sourceforge web interface):
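A sketch of what this execution might look like (the ant-files-path property and the remote products directory are placeholders; the target name rsync-copy-dir-contents is the one discussed below):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>deploy-zipped-products</id>
      <phase>verify</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <ant antfile="${ant-files-path}/rsync.ant" target="rsync-copy-dir-contents">
            <!-- nested properties override the values defined in the parent pom -->
            <property name="rsync.remote.dir" value="${remote.products.dir}"/>
            <property name="rsync.local.dir" value="${project.build.directory}/products/"/>
          </ant>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>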

Note that when calling the rsync-copy-dir-contents target of the rsync.ant file, we pass the properties as nested elements, in order to override their values (such properties’ values are already defined in the parent’s pom, and for this run we need to pass different values).

Now, if we run the maven command shown above (with the build-ide, release-ide-composite and deploy-ide-composite profiles), many things will be executed:

  • rsync will synchronize our local composite update site with the remote composite update site
  • a new p2 site will be created, and added to our local composite update site
  • rsync will synchronize our local changes with the remote composite update site
  • Eclipse products will be created and zipped
  • the zipped products will be copied to Sourceforge

A self-contained p2 repository

Recall from the previous post that, since in customeclipse.example.ide.feature we added Eclipse features (such as the platform and jdt) as dependencies (and not as included features), the p2 update site we create will not contain such features: it will contain only our own features and bundles. And that was actually intentional.

However, this means that the users of our features and of our custom Eclipse will still need to add the standard Eclipse update site before installing our features or updating the installed custom Eclipse.

If you want your p2 repository to be self-contained, i.e., to include also the external dependencies, you can do so by setting includeAllDependencies to true in the configuration of the tycho-p2-repository-plugin.

It makes sense to do that in the customeclipse.example.ide.site, so that all the dependencies for our custom Eclipse product will end up in the p2 repository:
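For instance (a sketch; ${tycho-version} is assumed to be defined as usual in the parent pom):

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-p2-repository-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <includeAllDependencies>true</includeAllDependencies>
  </configuration>
</plugin>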

However, doing so every time we add a new p2 update site to the composite update site would make our composite update site grow really fast in size. A single p2 repository for this example, including all dependencies is about 110Mb. A composite update site with just two p2 repositories would be 220Mb, and so on.

I think a good rule of thumb is

  • include all dependencies the first time we release our product’s update site (setting the property includeAllDependencies to true, and then setting it to false right after the first release)
  • for further releases do not include dependencies
  • include the dependencies again when we change the target platform of our product (indeed, Tycho will take the dependencies from our target platform)

Provide a command line installer

Now that our p2 composite repository is on the Internet, our users can simply download the zip file according to their OS, unzip it and enjoy it. But we could also provide another way of installing our custom Eclipse: an ant file, so that the user only has to run it with ant.

The ant file will use the p2 director command line application to install our Eclipse product directly from the remote update site (the ant file is self-contained: if the director application is not already installed, it will be installed as the first task).

Here’s the install.ant file (note that we ask the director to install our custom Eclipse product, customeclipse.example.ide, and, explicitly, the main feature customeclipse.example.feature; this reflects what we specified in the product configuration, in particular the fact that customeclipse.example.feature must be a ROOT feature, so that it can be updated; see all the details in the previous post).
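What follows is a simplified sketch rather than the full script: it assumes you already have an Eclipse installation providing the p2 director application (the real install.ant also installs the director itself as its first task); eclipse.home, destination and the profile name are placeholders.

<project name="customeclipse.install" default="install">
  <!-- placeholders: point eclipse.home to an existing Eclipse and choose a destination -->
  <property name="eclipse.home" location="${user.home}/eclipse"/>
  <property name="destination" location="${user.home}/customeclipse"/>
  <property name="update.site"
            value="http://sourceforge.net/projects/eclipseexamples/files/customeclipse/updates"/>

  <target name="install">
    <exec executable="${eclipse.home}/eclipse" failonerror="true">
      <arg line="-nosplash -application org.eclipse.equinox.p2.director"/>
      <arg value="-repository"/>
      <arg value="${update.site}"/>
      <!-- install the product and, explicitly, the root feature -->
      <arg value="-installIU"/>
      <arg value="customeclipse.example.ide,customeclipse.example.feature.feature.group"/>
      <arg value="-destination"/>
      <arg value="${destination}"/>
      <!-- the profile name is arbitrary for a fresh installation -->
      <arg value="-profile"/>
      <arg value="SDKProfile"/>
      <arg value="-roaming"/>
    </exec>
  </target>
</project>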

Note that this will always install the latest version present in the remote composite update site.

For instance, consider that you created zipped products for version 1.0.0, then you deployed a small upgrade only for your features, version 1.0.1, i.e., without releasing new zipped products. The ant script will install the custom Eclipse including version 1.0.1 of your features.

Some experiments

You may want to try and download the zipped product for your OS from this URL: https://sourceforge.net/projects/eclipseexamples/files/customeclipse/products/

After I deployed the self-contained p2 repository and the zipped products (activating the profiles release-ide-composite and deploy-ide-composite, with the property includeAllDependencies set to true, using the project customeclipse.example.ide.site), I deployed another p2 repository into the composite site only for the customeclipse.example.feature (activating the profiles release-composite and deploy-composite, i.e., using the project customeclipse.example.site).

Unzip the downloaded product, and check for updates (recall that the product is configured with the update site hosted on Sourceforge, through the p2.inf file described before). You will find that there’s an update for the Example Feature:

customeclipse before upgrading customeclipse available updates

After the upgrade and restart you should see the new version of the feature installed:

customeclipse after upgrading

Now, try to install the product using the ant file shown above, that can be downloaded from https://raw.githubusercontent.com/LorenzoBettini/customeclipse-example/master/customeclipse.example.tycho/install.ant.

You’ll have to wait a few minutes (and don’t worry about cookie warnings); run this version of the custom Eclipse, and you’ll find no available updates: check the installation details and you’ll see you already have the latest version of the Example Feature.

That’s all! Hope you find this post useful and… Happy Easter 🙂

Build your own custom Eclipse

In this tutorial I’ll show how to build a custom Eclipse distribution with Maven/Tycho. We will create an Eclipse distribution including our own features/plugins and standard Eclipse features, trying to keep the size of the final distribution small.

The code of the example can be found at: https://github.com/LorenzoBettini/customeclipse-example

First of all, we want to mimic the Eclipse SDK product and Eclipse SDK feature; have a look at your Eclipse Installation details

eclipse SDK installation details

You see that “Eclipse SDK” is the product (org.eclipse.sdk.ide), and “Eclipse Project SDK” is the feature (org.eclipse.sdk.feature.group).

Moreover, we want to deal with a scenario such that:

  • our custom feature can be installed in an existing Eclipse installation, thus we can release it independently from our custom Eclipse distribution;
  • our custom Eclipse distribution must be updatable, e.g., when we release a new version of our custom feature.

The project representing our parent pom will be

  • customeclipse.example.tycho

The target platform is defined in

  • customeclipse.example.targetplatform

For this example we only need the org.eclipse.sdk feature and the native launcher feature.
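A minimal sketch of such a target definition (assuming the Luna release repository; unit versions are left open) could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<target name="customeclipse.example.targetplatform" sequenceNumber="1">
  <locations>
    <location includeAllPlatforms="false" includeMode="planner" includeSource="false" type="InstallableUnit">
      <unit id="org.eclipse.sdk.feature.group" version="0.0.0"/>
      <unit id="org.eclipse.equinox.executable.feature.group" version="0.0.0"/>
      <repository location="http://download.eclipse.org/releases/luna"/>
    </location>
  </locations>
</target>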

We created a plugin project and a feature project including such plugin (the plugin is nothing fancy, just a “Hello World Command” created with the Eclipse Plug-in project wizard):

  • customeclipse.example.plugin
  • customeclipse.example.feature

We also create another project for the p2 repository (Tycho packaging type: eclipse-repository) that distributes our plugin and feature (including the category.xml file)

  • customeclipse.example.site

All these projects are then configured with Maven/Tycho pom.xml files.

Then we create another feature that will represent our custom Eclipse distribution

  • customeclipse.example.ide.feature

This feature will then specify the features that will be part of our custom Eclipse distribution, i.e., our own feature (customeclipse.example.feature) and all the features taken from the Eclipse update sites that we want to include in our custom distribution.

Finally, we create another site project (Tycho packaging type: eclipse-repository) which is basically the same as customeclipse.example.site, but it also includes the product definition for our custom Eclipse product:

  • customeclipse.example.ide.site

NOTE: I’m using two different p2 repository projects because I want to be able to release my feature without releasing the product (see the scenario at the beginning of the post). This will also allow us to experiment with different ways of specifying the features for our custom Eclipse distribution.

Product Configuration

This is our product configuration file customeclipse.example.ide.product in the project customeclipse.example.ide.site and its representation in the Product Configuration Editor:

custom eclipse product configuration1

Note that we use org.eclipse.sdk.ide as the launching product extension identifier and org.eclipse.ui.ide.workbench as the application (we don’t have a custom application ourselves).

ATTENTION: Please pay attention to “uid” and “id” in the .product file, which correspond to “ID” and “Product” in the Product definition editor (quite confusing, isn’t it? 😉).

This product configuration includes our customeclipse.example.ide.feature; we also inserted at the end the standard start level configuration and other properties, like the standard workspace location.

The pom in this project will also activate the product materialization and archiving (we also specify the file name of the zip with our own pattern):
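(The following is only a sketch: the archiveFileName pattern is just an example of specifying our own file name for the zip.)

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-p2-director-plugin</artifactId>
  <version>${tycho-version}</version>
  <executions>
    <execution>
      <id>materialize-products</id>
      <goals>
        <goal>materialize-products</goal>
      </goals>
    </execution>
    <execution>
      <id>archive-products</id>
      <goals>
        <goal>archive-products</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <products>
      <product>
        <id>customeclipse.example.ide</id>
        <rootFolder>eclipse</rootFolder>
        <archiveFileName>customeclipse-example-ide-${unqualifiedVersion}</archiveFileName>
      </product>
    </products>
  </configuration>
</plugin>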

We chose NOT to include customeclipse.example.ide.site as a module in our parent pom.xml: we include it only when we enable the profile build-ide, since installing and provisioning a product takes some time, and you may not want to do that on every build invocation. In that profile we add the customeclipse.example.ide.site module; the relevant part of our parent pom is sketched below.
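A sketch of such a profile (the environments listed are just an example) might be:

<profile>
  <id>build-ide</id>
  <modules>
    <module>customeclipse.example.ide.site</module>
  </modules>
  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>target-platform-configuration</artifactId>
        <version>${tycho-version}</version>
        <configuration>
          <environments>
            <environment>
              <os>linux</os>
              <ws>gtk</ws>
              <arch>x86_64</arch>
            </environment>
            <environment>
              <os>win32</os>
              <ws>win32</ws>
              <arch>x86_64</arch>
            </environment>
            <environment>
              <os>macosx</os>
              <ws>cocoa</ws>
              <arch>x86_64</arch>
            </environment>
          </environments>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>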

In this profile, we also specify the environments for which we’ll build our custom Eclipse distribution. When this profile is not active, the target-platform-configuration will use only the current environment.

In the rest of the tutorial we’ll examine different ways of defining customeclipse.example.ide.feature. In my opinion, only the last one is the right one; but that depends on what you want to achieve. However, we’ll see the result and drawbacks of all the solutions.

You may want to try the options we detail in the following by cloning the example from https://github.com/LorenzoBettini/customeclipse-example and by modifying the corresponding files.

Include org.eclipse.sdk

The first solution is to simply include the whole org.eclipse.sdk feature in our customeclipse.example.ide.feature:
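(In feature.xml terms this corresponds, roughly, to the following includes; version 0.0.0 means the version is resolved at build time.)

<includes
      id="customeclipse.example.feature"
      version="0.0.0"/>
<includes
      id="org.eclipse.sdk"
      version="0.0.0"/>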

You can run the maven build specifying the profile build-ide to get the materialized products (and the corresponding zipped versions).
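The invocation is something like:

mvn clean verify -Pbuild-ide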

NOTE: if you enable the tycho-source-feature-plugin in the parent pom to also generate source features, you’ll get an error during the build.

That’s because it tries to include in customeclipse.example.ide.feature.source the source feature of org.eclipse.sdk, which does not exist (org.eclipse.sdk already includes sources of its included features). You need to tell the tycho plugin to skip the source of org.eclipse.sdk:
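A sketch of the relevant configuration (assuming the tycho-source-feature-plugin is the one already enabled in the parent pom):

<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-source-feature-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <excludes>
      <!-- org.eclipse.sdk has no separate source feature -->
      <feature id="org.eclipse.sdk"/>
    </excludes>
  </configuration>
</plugin>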

The build should succeed.

Let’s copy the installed product directory (choose the one for your OS platform) to another folder; we perform the copy because a subsequent build will wipe out the target directory, and we want to do some experiments. Let’s run the product: we see that our custom IDE shows our custom feature menu “Sample Menu” and the corresponding tool bar button.

If we check the installation details, we see the layout mimicking the one of the Eclipse SDK (which is included in our product):

custom eclipse sdk installation details

Now let’s run the build again with the above maven command.

If you have a look at the target directory, you see that, besides the products, in customeclipse.example.ide.site/target you also have a p2 repository:

custom ide site target

We will use this p2 repository to try to update the custom ide that we created in the first maven build (the one we copied to a different directory and ran in the previous step). So let’s add this built repository (in my case it is /home/bettini/work/eclipse/tycho/custom-eclipse/customeclipse.example.ide.site/target/repository/) in the custom ide’s “Install New Software” dialog.

You see our Example Feature, and if you uncheck “Group items by category” you also see the Custom Eclipse Project SDK feature (corresponding to customeclipse.example.ide.feature) and Custom Eclipse SDK (corresponding to our product definition uid customeclipse.example.ide).

custom ide install new software 1 custom ide install new software 2

But wait… only the product is updatable! Why? (You see that’s the only one with the icon for updatable elements; if you try “Check for updates” that’s the only one that’s updatable)

Why can’t I update my “Example Feature” by itself?

If you try to select “Example Feature” in the “Install” dialog to force the update, and press Next…

custom ide install new software force 1

you’ll get an error, and the proposed solution, i.e., also update the product itself:

custom ide install new software force 2

And if you have a look at the original error…

custom ide install new software force 3

…you get an idea of the underlying problem: since we INCLUDED our “customeclipse.example.feature” in our product’s feature “customeclipse.example.ide.feature”, the installed product has a strict version requirement on “customeclipse.example.feature”: it wants exactly the version the original product was built with. Long story short: you can’t update that feature by itself, you can only update the whole product.

Before going on, also note that in the target directory you have a zip of the p2 repository that has been created, customeclipse.example.ide.site-1.0.0-SNAPSHOT.zip: it’s about 200 MB! That’s because the created p2 repository contains ALL the features and bundles INCLUDED in your product (which, in our case, basically means all the features INCLUDED in “customeclipse.example.ide.feature”).

Require org.eclipse.sdk

Let’s try and modify “customeclipse.example.ide.feature” so that it does NOT include the features, but DEPENDS on them (we can also set a version range for required features).
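(Roughly, in feature.xml this means replacing the includes with imports in the requires section; the version and match rule below are just an example of a version range.)

<requires>
   <import feature="org.eclipse.sdk" version="4.4.0" match="greaterOrEqual"/>
</requires>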

Let’s build the product.

First of all, note that the p2 repository zip in the target folder of customeclipse.example.ide.site is quite small! Indeed, the repository contains ONLY our features, not all the requirements (if needed, you can also force Tycho to include all the requirements), since, as stated above, required features are not part of the repository.

Now let’s do the experiment once again:

  1. copy the built product for your OS into another directory
  2. run the product custom ide
  3. run another maven build
  4. add the newly created p2 repository in the custom ide’s “Install new software” dialog

Well… the Example Feature does not appear as updatable, but this time, if we select it and press Next, we are simply notified that it is already installed, and that it will be updated

custom ide install new software force 4

So we can manually update it, but not automatically (“Check for updates” will still propose to update the whole product).

To make a feature updatable in our product we must make it a “Root level feature” (see also http://codeandme.blogspot.com/2014/06/tycho-11-install-root-level-features.html).

At the time of writing the Eclipse product definition editor does not support this feature, so we must edit the .product definition manually and add the line for specifying that customeclipse.example.feature must be a root level feature:
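(The features section of the .product file would then look something like the following; installMode="root" is what makes the feature a root level feature.)

<features>
   <feature id="customeclipse.example.ide.feature"/>
   <feature id="customeclipse.example.feature" installMode="root"/>
</features>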

Let’s build again; note that this time the p2 director invocation explicitly installs customeclipse.example.feature.

Let’s do the experiment again; but before trying to update, let’s see that the installed software layout is now different: our Example Feature is now a root level feature (it’s also part of our Custom SDK IDE, since it’s still required by customeclipse.example.ide.feature, but that does no harm, and you may also want to remove that requirement from customeclipse.example.ide.feature).

custom eclipse sdk installation details 2

Hey! This time our “Example Feature” is marked as updatable

custom ide install new software 3

and also Check for updates proposes “Example Feature” as updatable independently from our product!

custom ide install new software 4

What happens if we also make “customeclipse.example.ide.feature” a root feature? You may want to try that; the layout of the installed software will then list 3 root elements: our product “Custom Eclipse SDK”, our ide.feature “Custom Eclipse Project SDK” (which is meant to require all the software from other providers, like, in this example, the org.eclipse.sdk feature itself) and our “Example Feature”.

This means that also “Custom Eclipse Project SDK” can be updated independently; this might be useful if we plan to release a new version of the ide.feature including (well, depending on) other software not included in Eclipse SDK itself (e.g., Mylyn, Xtext, or something else). At the moment, I wouldn’t see this as a priority so I haven’t set customeclipse.example.ide.feature as a root level feature in the product configuration.

Minimal Distribution

The problem with basing our distribution on org.eclipse.sdk is that the final product will include many features and bundles that you might not want in your custom distribution, e.g., the CVS features, not to mention all the sources of the platform and PDE and lots of documentation. Of course, if that’s what we want, then OK. But what if we want only the Java Development Tools in our custom distribution (besides our own features, of course)?

We can tweak the requirements in customeclipse.example.ide.feature and keep them minimal (note that the platform feature is really needed):
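(A sketch of the requires section of customeclipse.example.ide.feature; the exact set of imports may differ, but the idea is to depend only on the platform, on the JDT and on our own feature.)

<requires>
   <import feature="org.eclipse.platform"/>
   <import feature="org.eclipse.jdt"/>
   <import feature="customeclipse.example.feature"/>
</requires>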

Build the product now.

Note also that the installed software has been reduced a lot:

custom eclipse minimal 4

The size of the zipped products dropped to about 90Mb, from the roughly 200Mb we had before, when we were using the whole org.eclipse.sdk feature.

However, by running this product you may notice that we lost some branding

  1. There’s no Welcome Page
  2. Eclipse starts with “Resource” Perspective, instead of “Java” Perspective
  3. Help => About (note: it is now only “About”, no longer “About Eclipse SDK”) shows:

custom eclipse minimal 2

To recover the typical branding of Eclipse SDK, we have to know that such branding is implemented in the bundle org.eclipse.sdk (the bundle, NOT the homonymous feature).

So, all we have to do is to put that bundle in our feature’s dependencies:
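(In feature.xml, a bundle dependency is expressed with an import plugin element; again, only a sketch.)

<requires>
   <import feature="org.eclipse.platform"/>
   <import feature="org.eclipse.jdt"/>
   <import feature="customeclipse.example.feature"/>
   <!-- the org.eclipse.sdk BUNDLE carries the Eclipse SDK branding -->
   <import plugin="org.eclipse.sdk"/>
</requires>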

Rebuild, and try the product: we have all the branding back! 🙂

I hope you find this blog post useful 🙂

The sources of this example can be found here: https://github.com/LorenzoBettini/customeclipse-example

Publish an Eclipse p2 repository on Sourceforge with rsync

This can be seen as a follow-up of my previous post on building Eclipse p2 composite repositories. In this blog post I’ll show an automatic way of publishing an Eclipse p2 (composite) repository (a.k.a. update site) on Sourceforge, using rsync for synchronization. You may find online many posts about publishing update sites on Github pages and, recently, on Bintray. (As a reminder, rsync is a one-way synchronization tool, and we assume that the master replica is the one on Sourceforge; rsync, being a synchronization tool, will only transfer the changed files during synchronization.)

I prefer Sourceforge for several reasons:

  • you have full and complete access to the files upload system either with a shell or, most importantly for the technique I’ll describe here, with rsync. From what I understand, instead, bintray will manage the binary artifacts for you;
  • in order to create and update a p2 composite site you must have access to the current file system layout of the p2 update site, which I seem to understand is not possible with bintray;
  • you have download statistics, and your artifacts will automatically be mirrored on Sourceforge’s mirrors.

By the way: you can store your git repository anywhere you want, and publish the binaries on Sourceforge (see this page and this other page).

I’ll reuse the same example of the previous post, the repository found here https://github.com/LorenzoBettini/p2composite-example, where you find all the mechanisms for creating and updating a p2 composite repository.

The steps of the technique I’ll describe here can be summarized as follows: when it comes to releasing a new child in the p2 composite update site (possibly already published on Sourceforge), the following steps are performed during the Maven/Tycho build:

  1. Use rsync to get an updated local version of the published p2 composite repository somewhere in your file system (this includes the case when you have never released a version, so you’ll get a local empty directory)
  2. Build the p2 repository with Tycho
  3. Add the above created p2 repository as a new child in the local p2 composite repository (this includes the case where you create a new composite repository, since that’s your first release)
  4. Use rsync to commit the changes back to the remote p2 composite repository

Since we use rsync, we have many opportunities:

  • we’re allowed to manually modify (i.e., from outside the build infrastructure) the p2 composite repository, for instance by removing a child repository containing a wrong release, and commit the changes back;
  • we can release from any machine, notably from Jenkins or Hudson, since we always make sure to have a synchronized local version of the released p2 composite repository.

Prepare the directory on Sourceforge

This assumes that you have an account on Sourceforge and that you have registered a project. You need to create the directory that will host your p2 composite repository in the “Files” section.

For this example I created a new project, eclipseexamples (https://sourceforge.net/projects/eclipseexamples/), and I plan to store the p2 composite in the Sourceforge file system under this path: p2composite.example/updates.

So I’ll create the directory structure accordingly (using the “Add Folder” button):

sourceforge create folder structure 1 sourceforge create folder structure 2 sourceforge create folder structure 3

Ant script for rsync

I’m using an ant script since it’s easy to call it from Maven, and also manually from the command line. This assumes that you already have rsync installed on your machine (or on the CI server from which you plan to perform releases).

This ant file is meant to be completely reusable.

Here’s the ant file:
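(Shown here as a rough sketch: the macro, the target names rsync-update and rsync-commit, and the rsync options are plausible placeholders; adapt them to your needs after reading the rsync documentation.)

<project name="rsync-p2-composite" default="rsync-update">
  <!-- these properties are meant to be overridden by the caller (Maven or command line) -->
  <property name="rsync.remote.dir" value=""/>
  <property name="rsync.local.dir" value=""/>
  <!-- set dryrun to -n to only simulate the synchronization -->
  <property name="dryrun" value=""/>

  <macrodef name="rsync">
    <attribute name="source"/>
    <attribute name="dest"/>
    <sequential>
      <exec executable="rsync" failonerror="true">
        <arg line="${dryrun} -avzh --delete -e ssh @{source} @{dest}"/>
      </exec>
    </sequential>
  </macrodef>

  <!-- get an updated local copy of the remote repository -->
  <target name="rsync-update">
    <rsync source="${rsync.remote.dir}" dest="${rsync.local.dir}"/>
  </target>

  <!-- push the local changes back to the remote repository -->
  <target name="rsync-commit">
    <rsync source="${rsync.local.dir}" dest="${rsync.remote.dir}"/>
  </target>
</project>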

We have a macro for invoking rsync with the desired options (have a look at rsync documentation for understanding their meaning, but it should be straightforward to get an idea).

In particular, the transfer will be done with ssh, so you must have an ssh key pair, and you must have put the public key on your account on sourceforge. Either you created the key pair without a passphrase (e.g., for releasing from a CI server of your own), or you must make sure you have already unlocked the key pair on your local machine (e.g., with an ssh-agent, or with a keyring, depending on your OS).

The arguments source and dest will depend on whether we’re doing an update or a commit (see the two ant targets). If you define the property dryrun as -n then you can simulate the synchronization (both for update and commit); this is important at the beginning to make sure that you synchronize what you really mean to synchronize. Recall that when you perform an update, specifying the wrong local directory might lead to a complete deletion of that directory (the same holds for commit and the remote directory). Moreover, source and destinations URLs in rsync have a different semantics depending on whether they terminate with a slash or not, so make sure you understand them if you need to customize this ant file or to pass special URLs.

The properties rsync.remote.dir and rsync.local.dir will be passed from the Tycho build (or from the command line if you call the ant script directly). Once again, please use the dryrun property until you’re sure that you’re synchronizing the right paths (both local and remote).

Releasing during the Tycho build

Now we just need to call this ant file’s targets appropriately from the Tycho build; I’ll do that in the pom.xml of the project that builds and updates the composite p2 repository.

Since I don’t want to push a new release on the remote site on each build, I’ll configure the plugins inside a profile (it’s up to you to decide when to release): here’s the new part:
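The actual snippet is not reproduced here; the shape of the new part, as a sketch, is something like the following, where the profile id matches the one used later in this post, the rsync.remote.dir and rsync.local.dir values are the ones discussed below, and ant-files-path is a placeholder pointing to where rsync.ant is stored.

<profile>
  <id>release-composite</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <id>update-local-composite</id>
            <phase>prepare-package</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <target>
                <ant antfile="${ant-files-path}/rsync.ant" target="rsync-update">
                  <property name="rsync.remote.dir"
                            value="lbettini,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/p2composite.example/updates/"/>
                  <property name="rsync.local.dir" value="${user.home}/p2.repositories/updates/"/>
                </ant>
              </target>
            </configuration>
          </execution>
          <execution>
            <id>commit-remote-composite</id>
            <phase>verify</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <target>
                <ant antfile="${ant-files-path}/rsync.ant" target="rsync-commit">
                  <property name="rsync.remote.dir"
                            value="lbettini,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/p2composite.example/updates/"/>
                  <property name="rsync.local.dir" value="${user.home}/p2.repositories/updates/"/>
                </ant>
              </target>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>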

Now the URL to access a remote path on sourceforge with ssh has the following shape

<username>,<project>@frs.sourceforge.net:/home/frs/project/<project>/<path>

So in my case I specified (again, the final / is crucial for what we want to synchronize with rsync, see the note above):

lbettini,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/p2composite.example/updates/

The local URL specifies where the local p2 composite site is stored (see the previous post), in this example it defaults to

${user.home}/p2.repositories/updates/

Again, the final / is crucial.

We configured the maven-antrun-plugin with two executions:

  1. before updating the p2 composite update site (phase prepare-package) we make sure we have a synchronized local version of the repository
  2. after updating the p2 composite update site (phase verify) we commit the changes to the remote repository

That’s all 🙂

Let’s try it

Of course, if you want to try it, you need a project on sourceforge and a directory on that project’s Files section (and you’ll have to change the URLs accordingly in the pom file).

To perform a release we need to call the build enabling the profile release-composite, and specify at least verify as goal:
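For instance (adapting the goals as needed):

mvn clean verify -Prelease-composite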

Let’s say we still haven’t released anything.

Since the remote directory is empty, in our local file system we’ll simply have the directory created. At the end of the build, the composite site is created and the remote directory is synchronized with our local contents.

Let’s have a look at the remote directory: it will contain the created p2 composite site

sourceforge uploaded artifacts 1

sourceforge uploaded artifacts 2

Let’s perform another release; our local copy is up-to-date, so we won’t receive anything during the update phase, but then we’ll commit another release.

Let’s have a look at sourceforge and see the new release

sourceforge uploaded artifacts 3

Let’s remove our local copy and try to perform another release; this time the update phase will make sure our local composite repository is synchronized with the remote site (we’ll get the whole composite site we had already released), so that when we add another composite child we’ll update our local composite repository; then we’ll commit the changes to the server (again, by uploading only the modified files, i.e., compositeArtifacts.xml and compositeContent.xml, and the new directory with the new child repository).

Again, the remote site is correctly updated

sourceforge uploaded artifacts 4

Providing the URL of your p2 repository

Now that you have your p2 repository on sourceforge, you only need to give your users the URL to use for installing your features in Eclipse.

You have two forms for the URL

  • This will use the mirror infrastructure of sourceforge: http://sourceforge.net/projects/<project>/files/<path>
  • This will bypass mirrors: http://master.dl.sourceforge.net/project/<project>/<path>

If you use the mirror form, when installing in Eclipse (or provisioning a target platform) you’ll see some warnings on the console.

But it’s safe to ignore them.

For our example the URL can be one of the following:

  • With mirrors: http://sourceforge.net/projects/eclipseexamples/files/p2composite.example/updates/
  • Main site: http://master.dl.sourceforge.net/project/eclipseexamples/p2composite.example/updates/

You may want to try them both in Eclipse.

Please keep in mind that you may hit some unavailability errors now and then, if Sourceforge sites are down for maintenance or unreachable for any reason… but that’s not much different from hitting a bad Eclipse mirror, or the main Eclipse download site being down… I guess no hosting site is perfect anyway 😉

I hope you find this blog post useful, Happy releasing! 🙂