Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at Dipartimento di Informatica, Università di Torino, Italy.
He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena).
His research interests cover design, theory, and the implementation of statically typed programming languages and Domain Specific Languages.
He is also the author of about 90 research papers published in international conferences and journals.
As I have written, I’m using “Oh-My-Zsh” with Nerd icons and fonts. It has always worked perfectly in KDE.
A few days ago, KDE Plasma 6 landed in Arch (and thus, EndeavourOS), and after upgrading, the Nerd fonts were not displayed in Konsole and Kate (and, I guess, in other KDE applications).
For example, before upgrading, Konsole looked like this:
After upgrading all the nice Nerd fonts were gone:
Long story short: before the upgrade, when a font did not provide a "glyph", the missing glyph was looked up in other fonts; after the upgrade, that fallback no longer happens.
The default monospace font in KDE is “Hack”. I have installed other Nerd fonts, but not the “Hack Nerd” version, so what worked before the upgrade no longer works.
To fix the problem, I installed the Nerd font, e.g., for Hack (the default KDE font):
sudo pacman -S ttf-hack-nerd
Then, open “System Settings” -> “Fonts”, and change the “Fixed width” font from the default “Hack 10pt” to the corresponding Nerd font:
Restart Konsole and the Nerd fonts are back:
Note that this works if your Konsole profile does not have a custom font set; if you use another font, you’ll have to use the Nerd font corresponding to that font.
For example, I used JetBrains fonts in another Konsole profile, but I hadn’t installed the Nerd version:
extra/ttf-jetbrains-mono 2.304-1 [installed]
    Typeface for developers, by JetBrains
extra/ttf-jetbrains-mono-nerd 3.1.1-1 (nerd-fonts)
    Patched font JetBrains Mono from nerd fonts library
I installed the Nerd version and changed the font from simply "JetBrains" to the Nerd version, and this profile was also fixed:
The same holds for other KDE applications like Kate. If you haven’t set a custom font, then the Nerd version of Hack will be automatically used. Otherwise, you have to use the Nerd version of the specified font.
Note that other non-Qt applications will not be affected by this change. For example, for Alacritty, I have this section in its configuration:
TOML
[font]
size = 10.0

[font.bold]
family = "JetBrains Mono NL"
style = "Bold"

[font.bold_italic]
family = "JetBrains Mono NL"
style = "Bold Italic"

[font.italic]
family = "JetBrains Mono NL"
style = "Italic"

[font.normal]
family = "JetBrains Mono NL"
style = "Regular"
So, I simply specify JetBrains, not its Nerd version. Still, when icons and other glyphs are to be rendered, they are automatically taken from any Nerd font providing those glyphs:
At last, KDE Plasma 6 has landed in Arch Linux (and in EndeavourOS, of course), and you’re eager to try the return of the desktop effect “Desktop Cube”! 🙂
You try to enable that in the System Settings “Desktop Effects.” You try the default shortcut “Meta + C”, and… it doesn’t work 🙁
Oh, they say you need at least 4 virtual desktops! So you make sure you have 4 virtual desktops, you try again, and… it still doesn’t work 🙁
Actually, 3 virtual desktops are enough. What you really need is this package, so make sure you install that and reboot:
Now, it starts downloading into the current directory, creating a subdirectory. For example, I’m running that from a mounted drive. Here’s some output and some commands to show the layout of the directories and the created configuration file to start the virtual machine with the mounted ISO:
Thus, it should be easy to put it on an external drive.
Here’s the machine starting with the live ISO:
I pressed Ctrl+C to cancel md5sum checks.
Here’s Zorin starting (audio is working):
The screen resized automatically and became bigger.
I started the installation, mainly choosing default options (e.g., erase the entire disk). And here’s the login screen after the installation finished and the machine rebooted:
The impressive thing is that animations are really fluid and smooth in the virtual machine: you almost don’t realize you’re using a virtual machine:
Here’s the disk layout and memory (on this computer, I have 16 GB, and quickemu automatically selected half the memory for the virtual machine):
Even in this case, the desktop automatically resizes if I resize the Qemu window.
The installation went smoothly and fast in this case. The login screen is full of nice blurry effects:
On the first login, you’re welcomed by the Garuda assistant to perform some initial tasks.
Here’s the information about the installed system:
Animations and effects are smooth, e.g., the “Overview”:
To summarize, with quickemu, creating a new Qemu virtual machine is easy, starting from one of the many managed Linux distributions. It also works for macOS and Windows distributions, though I haven’t tried them.
Moreover, the performance of the virtual machine is fantastic. The virtual machine seems as smooth as the currently running system.
The only drawback I've experienced is that, with the default configuration, the shared clipboard does not work: you must start the virtual machine with the spice display ("--display spice"). For example:
quickemu --vm zorin-16-core64.conf --display spice
Remember to install the spice agent in the virtual machine. In the two examples above, the agent was already installed automatically in the guest during the installation.
First, at least in my experiments, the shared clipboard does not work anyway when the host is running on a Wayland session. Moreover, using the “spice” display, the virtual machine’s performance decreases significantly (see my reported issue: https://github.com/quickemu-project/quickemu/issues/933). Probably, to easily communicate and paste commands in the virtual machine, it is better to install the SSH server in the virtual machine and connect to the virtual machine via SSH.
In any case, this quick look at Quickemu impressed me a lot. 🙂
I am a big fan of KDE applications for many everyday tasks. Since Hyprland is not a full desktop environment, you can choose which applications to install for text editing, images, viewers, file managers, etc.
When I started using Hyprland, I was installing Thunar as a file manager; then I switched to Nemo because it’s more powerful, then to Nautilus (but it doesn’t look right in Hyprland). Finally, I decided to use Dolphin since I already used several KDE applications in Hyprland.
This is the list of Arch packages I install in Hyprland:
konsole (as a terminal, though I still use also Alacritty)
breeze-icons (to have nice icons in KDE applications)
kvantum (for Kate color schemes)
okular (for a better PDF viewer and annotator)
kcalc (as a calculator)
dolphin (for a powerful file manager)
dolphin-plugins (e.g., for Dropbox folder overlay)
ark (for Archive management, including Dolphin context menus)
Note that some of the above applications (namely, Dolphin and Gwenview) have "baloo" (the KDE file indexer and searcher) as a dependency. In Hyprland, that's pretty useless, and since it takes some resources for indexing, it's better to disable it for good right after installing the above packages:
balooctl disable
Some updates after the original post:
UPDATE (8 March): After the update to KDE Plasma 6, the name of the baloo command has changed:
balooctl6 disable
UPDATE (30 May): Dolphin cannot seem to open files anymore because it doesn’t see any association. Its “Open With” menu is also empty. I blogged about the solution.
Let’s look at a few features of KDE applications that I like.
Concerning Dolphin, it has several powerful features, too many to list here 😉 I'll just mention the better renaming of multiple files out of the box. This feature requires additional work in Thunar or Nemo, and I never liked the final result.
Let’s see the enabling of the Dropbox plugin (see the installed “dolphin-plugins” above):
After restarting Dolphin, you’ll get the nice overlay on the “Dropbox” folder:
Another reason I like KDE applications is that they have built-in HUD (Head Up Display), that is, a global searchable menu: use the keyboard shortcut Ctrl + Alt + i and you get the menu: start typing to quickly reach the intended item (in this example, I quickly switch to a custom Konsole profile):
You may want to create or change the keybinding for the file manager, in my case it is:
bind = $mainMod SHIFT, Return, exec, dolphin
Moreover, you’ll have to update the “~/.config/mimeapps.list” file accordingly, that is, specify this line and replace the corresponding existing ones:
inode/directory=org.kde.dolphin.desktop;
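Here is a hedged sketch of how that replacement could be scripted; it works on a temporary copy with made-up contents, so adapt it to your real "~/.config/mimeapps.list" before applying it for real:

```shell
# Work on a temporary copy first; the [Default Applications] content
# below is just an illustrative stand-in for your real mimeapps.list.
tmp=$(mktemp)
printf '[Default Applications]\ninode/directory=org.gnome.Nautilus.desktop;\n' > "$tmp"

# Replace any existing inode/directory association with Dolphin's:
sed -i 's|^inode/directory=.*|inode/directory=org.kde.dolphin.desktop;|' "$tmp"

grep '^inode/directory=' "$tmp"
```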
Concerning theming, some applications like Kate allow you to choose the color scheme. For example, since we installed Kvantum, we can choose the color scheme in Kate with “Settings” -> “Window Color Scheme”.
Konsole has profiles that you can create and customize.
On the other hand, Dolphin has no such functionality, so we should theme all KDE/Qt applications. That’s the subject of another possible future post.
Enjoy your KDE applications on Hyprland as well! 🙂
This is probably the beginning of a series of articles about testing Maven plugins.
I’ll start with the Maven Embedder, which allows you to run an embedded Maven from a Java program. Note that we’re not simply running a locally installed Maven binary from a Java program; we run Maven taken from a Java library. So, we’re not forking any process.
Whether this is useful or not for your integration tests is your decision 😉
I like to use the Maven Embedder when using the Maven Verifier Component (described in another blog post). Since it’s not trivial to get the dependencies to run the Maven Embedder properly, I decided to write this tutorial, where I’ll show a basic Java class running the Maven Embedder and a few JUnit tests that use this Java class to build (with the embedded Maven) a test Maven project.
This is the website of the Maven Embedder and its description:
Maven embeddable component, with CLI and logging support.
Remember: this post will NOT describe integration testing for Maven plugins; however, getting to know the Maven Embedder in a simpler context was helpful for me.
Let’s create a simple Java Maven project with the quickstart archetype
Shell
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DgroupId=com.examples \
  -DartifactId=maven-embedder-example \
  -DinteractiveMode=false
Let’s change the Java version in the POM to Java 17, use a more recent version of JUnit, and add another test dependency we’ll use later:
XML
<properties>
  ...
  <maven.compiler.source>17</maven.compiler.source>
  <maven.compiler.target>17</maven.compiler.target>
</properties>

<dependencies>
  <!-- Testing dependencies -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
  <!-- For the FileUtils.deleteDirectory that
       we use in tests. -->
  <dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
    <scope>test</scope>
  </dependency>
  ...
Let’s import the Maven Java project into Eclipse (assuming you have m2e installed in Eclipse).
Let’s add the dependencies for the Maven Embedder:
Getting all the needed dependencies right for the Maven Embedder is not trivial due to the dynamic nature of Maven components and dependency injection. The requirements are properly documented above.
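The exact dependency snippet is not reproduced here. As a hedged sketch only, a set of artifacts that typically makes the Maven Embedder resolve and run (the artifact list and versions are my assumptions; verify them against the Maven Embedder documentation for the Maven version you target) looks like this:

```xml
<dependency>
  <groupId>org.apache.maven</groupId>
  <artifactId>maven-embedder</artifactId>
  <version>3.9.6</version>
</dependency>
<dependency>
  <groupId>org.apache.maven</groupId>
  <artifactId>maven-compat</artifactId>
  <version>3.9.6</version>
</dependency>
<dependency>
  <groupId>org.apache.maven.resolver</groupId>
  <artifactId>maven-resolver-connector-basic</artifactId>
  <version>1.9.18</version>
</dependency>
<dependency>
  <groupId>org.apache.maven.resolver</groupId>
  <artifactId>maven-resolver-transport-http</artifactId>
  <version>1.9.18</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.36</version>
</dependency>
```

The resolver connector and transport are what let the embedded Maven actually download artifacts, and slf4j-simple provides a logging backend so the build output is visible.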
Let’s replace the “App.java” inside “src/main/java/” with this Java class:
Java
package com.examples;

import org.apache.maven.cli.MavenCli;

public class MavenEmbedderRunner {

    public int run(String baseDir, String... args) {
        MavenCli cli = new MavenCli();
        // Required to avoid the error:
        // "-Dmaven.multiModuleProjectDirectory system property is not set."
        System.setProperty("maven.multiModuleProjectDirectory", baseDir);
        return cli.doMain(args, baseDir, System.out, System.err);
    }
}
That’s just a simple example of using the Maven Embedder. We rely on its “doMain” method that takes the arguments to pass to the embedded Maven, the base directory from where we want to launch the embedded Maven, and the standard output/error where Maven will log all its information. In a more advanced scenario, we could store the logging in a file instead of the console by passing the proper “PrintStream” streams acting on files.
Let’s create the folder “src/test/resources” (it will be used by default as a source folder in Eclipse); this is where we’ll store the test Maven project to build with the Maven Embedder.
Inside that folder, let’s create another Maven project (remember, this will be used only for testing purposes: we’ll use the Maven Embedder to build that project from a JUnit test):
Shell
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DgroupId=com.examples \
  -DartifactId=maven-quickstart-example \
  -DinteractiveMode=false
We rely on the fact that the contents of “src/test/resources” are automatically copied recursively into the “target/test-classes” folder. Eclipse and m2e will take care of such a copy; during the Maven build, there’s a dedicated phase (coming before the phase “test”) that performs the copy: “process-test-resources”.
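That copy can be pictured with a small self-contained shell sketch (the directory names follow the post; the pom.xml content is a placeholder just to have something to copy):

```shell
# Simulate the resource copy in a scratch directory
work=$(mktemp -d)
mkdir -p "$work/src/test/resources/maven-quickstart-example"
echo '<project/>' > "$work/src/test/resources/maven-quickstart-example/pom.xml"

# What process-test-resources (and Eclipse/m2e) effectively do:
mkdir -p "$work/target/test-classes"
cp -r "$work/src/test/resources/." "$work/target/test-classes/"

ls "$work/target/test-classes/maven-quickstart-example"
```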
Let’s replace the “AppTest.java” inside “src/test/java/” with this JUnit class:
The first test is simpler: it runs the embedded Maven with the goals “clean” and “verify” on the test project we created above. The second one is more oriented to a proper integration test since it also passes the standard system property to tell Maven to use another local repository (not the default one “~/.m2/repository”). In such a test, we use a temporary local repository inside the target folder and always wipe its contents before the test. This way, Maven will always start with an empty local repository and download everything from scratch for building the test project in this test. On the contrary, the first test, when running the embedded Maven, will use the same local repository of your user.
The first test will be faster but will add Maven artifacts to your local Maven repository. This might be bad if you run the “install” phase on the test project because the test project artifacts will be uselessly stored in your local Maven repository.
The second test will be slower since it will always download dependencies and plugins from scratch. However, it will be completely isolated, which is good for tests and makes it more reproducible.
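The wiping step the second test performs corresponds to something like this shell sketch (the paths and file contents here are illustrative placeholders, not the post's exact code):

```shell
# Build a fake "populated" local repository in a scratch location
repo=$(mktemp -d)/local-repo
mkdir -p "$repo/com/examples"
echo 'stub' > "$repo/com/examples/artifact.jar"

# Wipe it so the embedded Maven starts from an empty local repository
# and downloads everything from scratch
rm -rf "$repo"
mkdir -p "$repo"

ls -A "$repo" | wc -l
```

In the JUnit test itself, the same effect is achieved with commons-io's FileUtils.deleteDirectory, which is why that test dependency was added to the POM earlier.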
Note that we are not running Maven on the test project stored in "src/test/resources" to avoid cluttering the test project with generated Maven artifacts: we build the test project copied into "target/test-classes".
In both cases, we expect success (as usual, a 0 return value means success).
In a more realistic integration test, we should also verify the presence of some generated artifacts, like the JAR and the executed tests. However, this is easier with the Maven Verifier Component, which I’ll describe in another post.
IMPORTANT: if you run these tests from Eclipse and they fail because the Embedded Maven cannot find the test project to build, run “Project -> Clean” so that Eclipse will force the copying of the test project from “src/test/resources” to “target/test-classes” directory, where the tests expect the test project. Such a copy should happen automatically, but sometimes Eclipse goes out of sync and removes the copied test resources.
If you run such tests, you’ll see the logging of the embedded Maven on the console while it builds the test project. For example, something like that (the log is actually full of additional information like the Java class of the current goal; I replaced such noise with “…” in the shown log below):
[main] INFO ... - Scanning for projects...
[main] INFO ... -
[main] INFO ... - ---------------< com.examples:maven-quickstart-example >----------------
[main] INFO ... - Building maven-quickstart-example 1.0-SNAPSHOT
[main] INFO ... - from pom.xml
[main] INFO ... - --------------------------------[ jar ]---------------------------------
[main] INFO ... -
[main] INFO ... - --- clean:3.1.0:clean (default-clean) @ maven-quickstart-example ---
[main] INFO ... -
[main] INFO ... - --- resources:3.0.2:resources (default-resources) @ maven-quickstart-example ---
[main] INFO ... - Using 'UTF-8' encoding to copy filtered resources.
[main] INFO ... - skip non existing resourceDirectory /Users/bettini/work/maven/maven-embedder-example/target/test-classes/maven-quickstart-example/src/main/resources
[main] INFO ... -
[main] INFO ... - --- compiler:3.8.0:compile (default-compile) @ maven-quickstart-example ---
[main] INFO ... - Changes detected - recompiling the module!
[main] INFO ... - Compiling 1 source file to /Users/bettini/work/maven/maven-embedder-example/target/test-classes/maven-quickstart-example/target/classes
[main] INFO ... -
[main] INFO ... - --- resources:3.0.2:testResources (default-testResources) @ maven-quickstart-example ---
[main] INFO ... - Using 'UTF-8' encoding to copy filtered resources.
[main] INFO ... - skip non existing resourceDirectory /Users/bettini/work/maven/maven-embedder-example/target/test-classes/maven-quickstart-example/src/test/resources
[main] INFO ... -
[main] INFO ... - --- compiler:3.8.0:testCompile (default-testCompile) @ maven-quickstart-example ---
[main] INFO org.apache.maven.plugin.compiler.TestCompilerMojo - Changes detected - recompiling the module!
[main] INFO ... - Compiling 1 source file to /Users/bettini/work/maven/maven-embedder-example/target/test-classes/maven-quickstart-example/target/test-classes
[main] INFO ... -
[main] INFO ... - --- surefire:2.22.1:test (default-test) @ maven-quickstart-example ---
[main] INFO ... -
[main] INFO ... - -------------------------------------------------------
[main] INFO ... - T E S T S
[main] INFO ... - -------------------------------------------------------
[ThreadedStreamConsumer] INFO ... - Running com.examples.AppTest
[ThreadedStreamConsumer] INFO ... - Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.013 s - in com.examples.AppTest
[main] INFO ... - --- jar:3.0.2:jar (default-jar) @ maven-quickstart-example ---
[main] INFO ... - Building jar: /Users/bettini/work/maven/maven-embedder-example/target/test-classes/maven-quickstart-example/target/maven-quickstart-example-1.0-SNAPSHOT.jar
[main] INFO ... - ------------------------------------------------------------------------
[main] INFO ... - BUILD SUCCESS
[main] INFO ... - ------------------------------------------------------------------------
[main] INFO ... - Total time: 1.340 s
[main] INFO ... - Finished at: 2024-02-14T20:32:32+01:00
[main] INFO ... - ------------------------------------------------------------------------
REMEMBER: this is not the output of the main project’s build; it is the embedded Maven running the build from our JUnit test on the test project.
Note that the two tests will build the same test project. In a more realistic integration test scenario, each test should build a different test project.
If you only run the second test after it finishes, you can inspect the “target/test-classes” to see the results of the build (note the “local-repo” containing all the downloaded dependencies and plugins for the test project and the generated artifacts, including test results, for the test project):
ls target/test-classes/local-repo/
backport-util-concurrent classworlds com commons-io junit org
tree target/test-classes/maven-quickstart-example/
target/test-classes/maven-quickstart-example/
├── pom.xml
├── src
│ ├── main
│ │ └── java
│ │ └── com
│ │ └── examples
│ │ └── App.java
│ └── test
│ └── java
│ └── com
│ └── examples
│ └── AppTest.java
└── target
├── classes
│ └── com
│ └── examples
│ └── App.class
├── generated-sources
│ └── annotations
├── generated-test-sources
│ └── test-annotations
├── maven-archiver
│ └── pom.properties
├── maven-quickstart-example-1.0-SNAPSHOT.jar
├── maven-status
│ └── maven-compiler-plugin
│ ├── compile
│ │ └── default-compile
│ │ ├── createdFiles.lst
│ │ └── inputFiles.lst
│ └── testCompile
│ └── default-testCompile
│ ├── createdFiles.lst
│ └── inputFiles.lst
├── surefire-reports
│ ├── 2024-02-17T10-26-50_728.dumpstream
│ ├── com.examples.AppTest.txt
│ └── TEST-com.examples.AppTest.xml
└── test-classes
└── com
└── examples
└── AppTest.class
28 directories, 14 files
Now, you can continue experimenting with the Maven Embedder.
In the next articles, we’ll see how to use the Maven Embedder when running Maven integration tests (typically, for integration tests of Maven plugins), e.g., together with the Maven Verifier Component.
The default screen locking uses a bright screen. Let’s make it darker:
mkdir ~/.config/swaylock
And let’s create and edit its configuration file “~/.config/swaylock/config”; in this example, I’m going to make it “dark green”, so I’m specifying:
color=032205FF
By looking at its “man page”, we can see:
-c, --color <rrggbb[aa]>
    Turn the screen into the given color instead of white. If -i is used, this sets the background of the image to the given color. Defaults to white (FFFFFF).
The "aa" in the hex notation above is the alpha value defining the color's opacity. In my example, FF means full opacity, i.e., no transparency.
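Since the alpha byte is just a hex number from 00 (fully transparent) to FF (fully opaque), printf can show the corresponding decimal values:

```shell
# FF = 255 (fully opaque), 80 = 128 (roughly half transparent), 00 = 0
printf '%d %d %d\n' 0xFF 0x80 0x00
```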
This is my complete configuration file for swaylock:
daemonize
show-failed-attempts
color=032205FF
Again, the “man page” explains these values:
-F, --show-failed-attempts
    Show the current count of failed authentication attempts.
-f, --daemonize
    Detach from the controlling terminal after locking.
In my Hyprland configuration file, I also use "swayidle" (the idle management daemon for Wayland):
# Screensaver and lock screen
# Swaylock configuration in ~/.config/swaylock/config
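The actual swayidle invocation is not shown in the snippet above; a hypothetical hyprland.conf line could look like the following (the timeouts and commands are illustrative assumptions, not the author's configuration):

```
# Hypothetical: lock after 5 minutes of idle, screen off after 10,
# and turn the screen back on upon activity
exec-once = swayidle -w timeout 300 'swaylock' timeout 600 'hyprctl dispatch dpms off' resume 'hyprctl dispatch dpms on'
```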
Note that I've used a character from the installed Nerd font. Of course, you can choose anything you like. The "wlogout" menu will appear when you click on that module.
This is the second post of this series, where I show a few interesting tools provided by the IDE. As I said in the previous post, this might be unrelated to Gitpod since we’re using the tools provided by Visual Studio Code and its Java extensions. However, I think the post still fits the Gitpod Java series.
Of course, this post assumes you have already followed the first post on Java and Gitpod, linked above.
We can enjoy several refactoring tools on Java projects. For example, in the “App.java”, let’s select the “Hello World!” string. We can access the available refactorings on the selected element via the context menu or by using “Ctrl+.” (Quick fix):
Let’s choose “Extract to method”:
The refactoring creates a new method returning the selected expression, and the original expression is replaced by the call to such a new method. We can use the text box to give the method a better name. (“Shift+Enter” will show a preview of the refactoring; however, if you’re used to Eclipse like me, the preview is not as visually appealing and informative as the one of Eclipse.)
Alternatively, we can accept the default name and then position the cursor on one of the “extracted” occurrences and choose F2 (“Rename symbol”) to rename the method and its references. A text box like the one above will appear to specify the name, for example, “getMessage”.
While on the method name, we can see other refactorings (“Inline” is the opposite of “Extract to method”) and actions:
Let’s choose “Change signature” and use the dialog to change a few details. For example, let’s make the method “public” (of course, that’s just an example: we could easily manually change “private” to “public”); if we haven’t renamed the method (e.g., to “getMessage”), we could do that right now with this dialog:
Let's see what happens in case of a test failure. Now that we have a public method, let's call it by changing the test like this:
Java
...
public class AppTest
{
    @Test
    public void shouldReturnTheExpectedMessage()
    {
        assertEquals("Hello", App.getMessage());
    }
}
Let’s run the test (e.g., by using the green arrow of the code lens):
As expected, we get the test failure; in particular, we get some information about the failure both on the editor and with an additional pop-up.
Let’s fix the test with the right expected message and re-run it (again, by using the now red cross of the code lens); this time, it should succeed.
Now that we have removed the “assertTrue”, we have an unused import in the test case. We can fix that by manually removing the import, but it’s better to use a fix from the context menu in the “Problems” tab:
Alternatively, we can select the “Organize Imports” command using F1 and start typing or the corresponding shortcut “Shift+Alt+O”.
We can now enrich our project with a README.md file (exploiting the Markdown editor available in Visual Studio Code) and create a GitHub Actions workflow (again, using the YAML support, which knows about the GitHub Actions workflow schema).
For Markdown, we can also use the preview pane:
For the GitHub Actions YAML file, we can use the code completion:
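For instance, a minimal workflow for a Maven project (the file name and steps are a typical sketch, not something prescribed by the post) could be:

```
# .github/workflows/ci.yml -- illustrative example
name: Java CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
      - name: Build with Maven
        run: mvn -B verify
```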
That’s all for this second post. Stay tuned for the third one! 🙂
First of all, install iTerm2 (because it provides a much better experience with Oh My Zsh and Powerlevel10k); either download it and install it from here https://iterm2.com/downloads.html or use “homebrew”:
brew install --cask iterm2
Then, install Oh My Zsh; since I have “curl” installed, I’m using this command (otherwise, see the Oh My Zsh URL for alternative options):
You should see something like the following output (if Zsh is not your current shell, at the end of the installation you’ll get asked whether to switch to Zsh):
To enable the Powerlevel10k theme, edit "~/.zshrc" and set the variable ZSH_THEME accordingly:
ZSH_THEME="powerlevel10k/powerlevel10k"
Now, either “source” the .zshrc file or open a new instance of iterm2 to see the initial configuration of p10k (remember you can always reconfigure it by running “p10k configure”):
Meslo fonts are recommended to have nice icon fonts, so it’s best to accept the proposal to install the Meslo fonts (in macOS, you have this nice automatic procedure, while in Linux distributions, you must install them manually). Let’s wait for the fonts to be downloaded:
And then, we must restart iterm2:
Now, we start a new iterm2 instance, and we start p10k from scratch, answering the questions for checking whether we can see the font icons correctly:
Then, we can start choosing our preferred options:
I like “Rainbow”.
In the question above, I chose "Unicode" to have lots of nice-looking icons (as we'll see in a minute, like the Git branch and OS icons).
Above, I chose two lines to have more space on the prompt.
Here are other options you can choose:
Note above the “many icons” I previously talked about (I chose to have many icons).
Note the “Transient Prompt” option, which is the one I prefer.
Here, I select the recommended option.
Again, I let the configuration process change the ~/.zshrc file. You can then inspect the changes made as suggested:
Here’s an example of a nice-looking prompt inside a directory with a GitHub repository:
Now, I have installed two other useful plugins (to have syntax highlighting on the command line and to have suggested commands as you type based on history and completions):
The plug-ins must be enabled in the proper section of ~/.zshrc:
plugins=(...existing plugins...
  zsh-syntax-highlighting
  zsh-autosuggestions
)
Here, you can see the two plugins in action (note the syntax highlighting in green for correct commands and suggestions to complete the command):
I also like to have fzf, a general-purpose command-line fuzzy finder. This must be first installed as a program, e.g., with homebrew:
brew install fzf
And then enable the corresponding plug-in:
plugins=(...existing plugins...
  fzf
)
I also enable a few more standard plugins. This is my list of plugins in ~/.zshrc:
plugins=(
  git
  zsh-syntax-highlighting
  zsh-autosuggestions
  zsh-interactive-cd
  zsh-navigation-tools
  fzf
)
Fzf has a few default shortcuts:
CTRL-T – Paste the selected files and directories onto the command line
CTRL-R – Paste the selected command from history onto the command line
ALT-C – cd into the selected directory
Unfortunately, the last one (which is one of my favorites) does not work out of the box in iterm2 because the “option/alt” key does not act like “Meta” (as in Linux). This is documented in the FAQ:
Q: How do I make the option/alt key act like Meta or send escape codes?
A: Go to Preferences > Profiles tab. Select your profile on the left, and then open the Keyboard tab. At the bottom is a set of buttons that lets you select the behavior of the Option key. For most users, Esc+ will be the best choice.
If you don’t want to perform that change, you can use “ESC c” to achieve the same result.
I tried KDE Plasma 6 (beta) by using the KDE Neon Unstable Edition.
This is a quick report.
I tried that in a KVM virtual machine. I had to disable 3D graphics, or the installer showed an empty window.
Here’s the live environment where I started the installer:
Note that it uses the Wayland session by default:
There are not many options when choosing to erase the disk:
The installation went smoothly.
Upon reboot, the login screen allows you to choose the X11 session, but Wayland is the default (that’s what I used):
Without 3D, you miss the blur and other effects; for example, you only get transparency without blurring:
Let’s enable 3D (“Display Spice”, “Listen Type = None” and check “OpenGL”, “Apply”, and then “Video Virtio”, check “3D acceleration”).
Everything seems to work this time (so the problem was only during the installation). We now have blur effects and smooth 3D effects:
The “Overview” effect (Alt+W) looks much nicer now (in the meantime, I switched to the dark theme), and it retains the features I had already blogged about:
The default Task Switcher (Thumbnail Grid) now makes sense (changing the default Task Switcher was the first thing I used to do in Plasma 5!):
From the visual point of view, you now also have a floating panel enabled by default.
There was a substantial system update (about 500 MB), which I applied. After rebooting, I was greeted like this:
Unfortunately, the links do not work: no browser opens…
After the update, logging out does not seem to work anymore: I get a blank screen. The same holds for the other menus like “Shut Down” and “Restart”. Welcome to beta software 😉
However, I did another upgrade the day after, and these issues were fixed.
By the way, if you want to upgrade the system, remember that in KDE Neon, you should not use "sudo apt upgrade" but "sudo pkcon update".
Here is the system information (remember: I'm on a virtual machine):
Speaking about desktop effects, we have the (useless but good-looking) Desktop Cube back! You have to enable it in the “Desktop Effects” and remember you must have at least 4 virtual desktops, or the effect will not kick in:
Cool effect 🙂
Speaking of the Desktop effects, the other effects seem to work fine, at least the ones I tried: Present Windows, Magic Lamp, Cover Flow (task switcher), and Blur.
In Wayland, there are some small quirks. The one I noticed most is the missing close/maximize/minimize icons in Firefox (you cannot see them, but if you hover where they should be, you can still press them):
This is a quick post about having nice fonts in Eclipse in Windows 11, based on my experience (maybe I had bad luck with the default configurations of Eclipse and/or Windows).
When I bought my Acer Aspire Vero, I found Windows 11 installed, and now and then, I’m using Windows 11 (though I’m using Linux most of the time). As an Eclipse user, I immediately installed Eclipse. However, I found the default fonts were really ugly:
Indeed, “Courier New” is not the most beautiful monospace font 😉
Other applications look nice in Windows 11, including text editors. They use, by default, “Lucida Console”, which looks OK:
Indeed, Eclipse uses “Consolas” for other Text parts:
“Consolas” looks even better than “Lucida”! I changed that in Eclipse also for the standard Text font, and the result looks nice to me:
I have already blogged about Gitpod, which allows you to spin up fresh development environments from your GitHub projects so that you can code with Visual Studio on the web (that’s just a very reductive definition, so you may want to look at its website for the complete set of features). I have already shown how to use it for Ansible and Molecule.
Today, I will show how to use Gitpod for Java/Maven projects. This is the first post of a series about Java, Maven, and Gitpod.
NOTE: Although the post focuses on Gitpod, most of the features we will see come from Visual Studio Code and the extensions we will install. Thus, the same mechanisms can also be used in a locally installed Visual Studio Code. In that respect, it is best to get familiar with the main keyboard shortcuts (these are shown in Visual Studio Code when no editor is open):
Gitpod provides an example for Java, but it relies on Spring Boot and is probably too complex, especially if you’re not interested in web applications.
In this post, instead, I’ll start with a very basic Java/Maven project. It is intended as a tutorial, so you might want to follow along doing these steps with your GitHub account.
I start by creating a Maven project with the quickstart archetype locally on my computer:
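The generation command might look like the following sketch (the "com.example" groupId and "example-project" artifactId are placeholders of mine; adjust them for your project):

```shell
# Generate a minimal Java/Maven project from the quickstart archetype.
# "com.example" and "example-project" are placeholder coordinates.
mvn archetype:generate \
  -DgroupId=com.example \
  -DartifactId=example-project \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
```

The "-DinteractiveMode=false" flag skips the interactive prompts so that the project is generated in one shot.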
Now, I can start Gitpod for this repository using the button (as I said, you need to use a browser extension; otherwise, you have to prefix the URL appropriately):
Let’s press the “Gitpod” button. (The first time you use Gitpod, you’ll have to accept a few authorizations.)
Press the “Continue with GitHub” button and wait for the workspace to be ready.
NOTE: I’m using the light theme of Visual Studio in Gitpod in this blog post.
Gitpod detected that this was a Maven project and automatically executed the command:
mvn install -DskipTests=false
Note that it also created the file “.gitpod.yml”, which we’ll later tweak to customize the default command and other things:
Moreover, it offers to install the Java extension pack:
Of course, we accept it because we want to have a fully-fledged Java IDE (this is based on the Eclipse JDT Language Server Protocol; you might want to have a look at what a Language Server Protocol, LSP, is). We use the arrow to choose “Install Do not Sync” (we don’t want that in all Gitpod workspaces, and we’ll configure the extensions for this project later).
Once that’s installed (note also the recommended extension GitLens, which we might want to install later), let’s use the gear icon to add the extension to our “.gitpod.yml” so that the extension will be automatically installed and available the next time we open Gitpod on this project:
Unfortunately, the “.gitpod.yml” is a bit messed up now (maybe a bug?), and we have to adjust it so that it looks as follows:
There’s also a warning on top of the file; by hovering, we can see a few warnings complaining that the transitive dependencies of the extension are not part of the file:
Let’s click on “Quick Fix…” and then apply the suggestions to add the extensions to the file (these are just warnings, but I prefer not to have warnings in my development environment):
In the end, the file should look like this:
YAML
tasks:
  - init: mvn install -DskipTests=false
vscode:
  extensions:
    - vscjava.vscode-java-pack
    - redhat.java
    - vscjava.vscode-java-debug
    - vscjava.vscode-java-dependency
    - vscjava.vscode-java-test
    - vscjava.vscode-maven
Note that we have “code lens” in the editor, and we can choose to let Gitpod validate this configuration:
TIP: another extension I always add is “eamodio.gitlens”.
This will rebuild the Docker image for our workspace (see the terminal view at the bottom):
This operation takes some time to complete, so you might want to avoid that for the moment. If you choose to do the operation, in the end, another browser tab will be opened with this new configuration. We can switch to the new browser tab (the “.gitpod.yml” is available in the new workspace, though we still haven’t committed that).
NOTE: I find “mvn install” an anti-pattern, and, especially in this context, it makes no sense to run the “install” phase and run the tests when the workspace starts. In fact, I changed the “init” task to a simpler “mvn test-compile”; this is enough to let Maven resolve the compile and test dependencies when the workspace starts. The Java LSP will not have to resolve them again and will find them in the local Maven cache.
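With that change, the resulting “.gitpod.yml” would look like this (a sketch: same extensions as before, only the init task differs):

```yaml
tasks:
  - init: mvn test-compile
vscode:
  extensions:
    - vscjava.vscode-java-pack
    - redhat.java
    - vscjava.vscode-java-debug
    - vscjava.vscode-java-dependency
    - vscjava.vscode-java-test
    - vscjava.vscode-maven
```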
We can take the chance to commit the file by using the corresponding tab in Visual Studio Code and then push it to GitHub (“Sync Changes”):
We could also close the Gitpod tabs and re-open Gitpod (the “.gitpod.yml” is now saved in the GitHub repository), but let’s continue on the open workspace.
Let’s now open a Java file in our project:
We get a notification that the IDE is loading the Java project (this might take a few seconds).
TIP: to quickly open a file knowing (part of) its name, press “Ctrl + P” (see the shortcuts above) and start typing:
We have a fully-fledged Java IDE with “code lens” for running/debugging and parameter names (see the argument passed to “System.out.println”):
For example, let’s use “Run” to run the application and see the output in the terminal view:
Though this project generated by the archetype is just a starting point, we also have a simple JUnit test. Let’s open it.
After a few seconds, the editor is decorated with some “code lens” that allows us to run all the tests or a single test (see the green arrow in the editor’s left ruler). Clicking on the arrow immediately runs the tests or a single test. Right-clicking on such arrows gives us more options, like debugging the test.
On the right pane, we can select the “Testing” tab (depicted as a chemical ampoule) that shows all the tests detected in the project (in this simple example, there’s a single one, but in more complex projects, we can see all the tests). We can run/debug them from there.
Let’s run them and see the results (in this case, it is a complete success); note the decorations showing the succeeded tests (in case of failures, the decorations will be different):
Of course, we could run the tests through Maven in the console, but this would be a more manual process, and the output would be harder to interpret in case of failures: we want to use an IDE to run the tests.
We could also run the tests by pressing “F1” and typing “Run tests” (we’ll then use the command “Java: Run Tests”): we need to do that when a JUnit test case is open in the editor.
Let’s hover on the “assertTrue”, which is a static method of the JUnit library. The IDE will resolve its Javadoc and will show it on a pop-up (the “code lens” for the parameter names is also updated):
We can use the menu “Go to definition” (or Ctrl+click) to jump to our project’s source code and libraries. For example, let’s do that on “assertTrue”. We can view the method’s source code in the class “Assert” of JUnit (note that this editor is read-only, and the name of the file ends with “.class”):
Note that the “JAVA PROJECTS” in the “Explorer” shows the corresponding file. In this case, it is a file in the referred test dependency “junit-4.11.jar” in the local Maven cache (see the POM where this dependency is explicit).
Of course, we have code completion by pressing “Ctrl+Space”; when the suggestions appear, we can start typing to filter them, and substring filtering works as well (see the screenshot below where typing “asE” shows completions matching):
With ENTER, we select the proposal. In this case, if we select one of the “assertEquals” proposals (static methods of “Assert”), the corresponding static import is also automatically added to the file.
That’s all for the first post! Stay tuned for more posts on Java, Maven, and Gitpod! 🙂
This blog post will describe my Ansible role for installing the KDE Plasma desktop environment with several programs and configurations. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.
This role is for my personal installation and configuration and is not meant to be reusable.
The role assumes that at least the basic KDE DE is already installed in the Linux distribution. The role then installs several programs I’m using daily and performs a few configurations (it also installs a few extensions I use).
At the time of writing, the role has the following directory structure, which is standard for Ansible roles tested with Molecule.
├── defaults
│ └── main.yml
├── files
│ ├── kde-ssh
│ │ ├── askpass.sh
│ │ ├── ssh-add.desktop
│ │ ├── ssh-agent-shutdown.sh
│ │ └── ssh-agent-startup.sh
│ └── konsole
│ ├── AplumaDark.colorscheme
│ ├── Apricot.colorscheme
│ ├── Apricot.profile
│ ├── Aritim Dark.colorscheme
│ ├── Aritim Dark.profile
│ ├── BlackOnWhite.profile
│ ├── Edna.colorscheme
│ ├── Edna.profile
│ ├── GreenOnBlack.profile
│ ├── Nordic.colorscheme
│ ├── Nordic.profile
│ └── XeroLinux.profile
├── handlers
│ └── main.yml
├── LICENSE
├── meta
│ └── main.yml
├── molecule
│ ├── default
│ │ ├── molecule.yml
│ │ └── prepare.yml
│ ├── fedora
│ │ └── molecule.yml
│ ├── shared
│ │ ├── converge.yml
│ │ └── verify.yml
│ └── ubuntu
│ ├── molecule.yml
│ └── prepare.yml
├── pip
│ └── requirements.txt
├── README.md
├── requirements.yml
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
The role has a few requirements, listed in “requirements.yml”:
YAML
---
collections:
- name: community.general
These requirements must also be present in playbooks using this role; my playbooks (which I’ll write about in future articles) have such dependencies in the requirements.
Let’s have a look at the main file “tasks/main.yml”, which is quite long, so I’ll show its parts and comment on the relevant parts gradually.
- name: Override spectacle package name for Ubuntu.
  ansible.builtin.set_fact:
    kde_spectacle: kde-spectacle
  when: ansible_os_family == 'Debian'

- name: Override kvantum package name for Ubuntu.
  ansible.builtin.set_fact:
    kvantum: qt5-style-kvantum
  when: ansible_os_family == 'Debian'

- name: Install Kde Packages
  become: true
  ansible.builtin.package:
    state: present
    name:
      - kate
      - "{{ kde_spectacle }}"
      - ark
      - konsole
      - dolphin
      - okular
      - gwenview
      - yakuake
      - korganizer
      - kaddressbook
      - kdepim-addons
      - kio-gdrive
      - dolphin-plugins
      - plasma-systemmonitor
      - kcalc
      - plasma-workspace-wallpapers
      - "{{ kvantum }}"
      # - latte-dock # it's not maintained anymore

# In Ubuntu it doesn't seem to be there;
# maybe it's not needed
- name: Install Kde Addons
  become: true
  ansible.builtin.package:
    state: present
    name:
      - kdeplasma-addons
  when: ansible_os_family != 'Debian'
This shows a few debugging details about the current Linux distribution. Indeed, the whole role has conditional tasks and variables depending on the current Linux distribution.
The file installs a few KDE programs I’m using in KDE.
The “vars/main.yml” only defines a few default variables used above:
YAML
---
# vars file for my_kde_role
kde_spectacle: spectacle
kvantum: kvantum
As seen above, a few packages have a different name in Ubuntu (Debian), which is overridden.
Then, I configure a few things in the KDE configuration (.ini) files and set a few keyboard shortcuts. The configuration should be self-explanatory.
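As a sketch of what such a configuration task can look like, here is a hypothetical task using the community.general.ini_file module (the file, section, and option below are illustrative examples, not necessarily the ones my role sets):

```yaml
# Hypothetical example: set the window-focus policy in KDE's
# window manager configuration file (~/.config/kwinrc).
- name: Configure KWin focus policy
  community.general.ini_file:
    path: '~/.config/kwinrc'
    section: Windows
    option: FocusPolicy
    value: FocusFollowsMouse
    mode: 0600
```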
Then, I ensure Kate is the default editor for textual files (including XML files, which otherwise, would be opened with the default browser); I also configure a few Kate preferences:
YAML
# In Fedora it's not installed by default
- name: Ensure xdg-mime is available
  become: true
  ansible.builtin.package:
    state: present
    name:
      - xdg-utils
  when: ansible_os_family == 'RedHat'

- name: Ensure xdg mime default application is set
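The body of that last task is omitted above; as an illustrative sketch (the MIME type and desktop file name are my assumptions), it could invoke “xdg-mime” like this:

```yaml
# Illustrative sketch: make Kate the default handler for XML files.
- name: Ensure xdg mime default application is set
  ansible.builtin.command:
    cmd: xdg-mime default org.kde.kate.desktop application/xml
  changed_when: false
```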
Then, I copy a few Konsole profiles (and the corresponding color schemes, see the directory “files/konsole”) and also configure the Yakuake drop-down terminal:
The final part deals with configuring the Kwallet manager to store SSH key passphrases, which, in KDE, has always been a pain to get correctly (at least, now, I have a configuration that I know works on all the distributions mentioned above):
YAML
- name: Install kwalletmanager and ksshaskpass
  become: true
  ansible.builtin.package:
    state: present
    name:
      - kwalletmanager
      - ksshaskpass

- name: Create autostart directory
  ansible.builtin.file:
    path: '~/.config/autostart'
    mode: 0755
    state: directory

- name: Copy ssh-add.desktop
  ansible.builtin.copy:
    src: "kde-ssh/ssh-add.desktop"
    dest: "~/.config/autostart/"
    mode: 0644

# inspired by AUR package plasma-workspace-agent-ssh
Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. The “default” scenario is Arch, which nowadays is my daily driver.
The reason for this is explained in my previous posts on Ansible and Molecule.
I have a similar “prepare.yml” for the default scenario, Arch.
I have nothing to verify for this role in the “verify.yml”. I just want to ensure that the Ansible role can be run (and is idempotent) in Arch, Ubuntu, and Fedora.
Of course, this is tested on GitHub Actions and can be developed directly on the web IDE Gitpod.
I hope you find this post useful as inspiration on how to use Ansible to automate your Linux installations 🙂
Nowadays, I mostly use Arch-based distributions (especially with EndeavourOS). So I haven’t been using Ubuntu for a while and decided to try it again now that the brand new release, 23.10 “Mantic Minotaur”, is available.
Let’s start the installation. This new version of Ubuntu features a new installer, which looks nice. I still feel comfortable with this new installer having already installed Ubuntu many times.
The initial steps are the language, keyboard, and network connection:
In the next step, the installer detected a new version available to download. I said yes. Then, you have to restart the installer, starting from scratch.
By default, Ubuntu proposes a minimal installation when choosing the installation type. However, I prefer to have most of the things installed during this stage, so I chose the “Full Installation”:
Then, we get to the partitioning. As usual, I prefer manual partitioning since I have several Linux distributions installed on my computer. I chose EXT4 as the file system. On Arch, I use BTRFS. However, Ubuntu does not come with good defaults for BTRFS. I dealt with such problems in the past, but now I prefer to stick with EXT4 in Ubuntu and give up on BTRFS snapshots.
Then, we get to the timezone selection (the installer automatically detected my location) and user details. This is as usual.
Interestingly, you can select during the installation the theme and the color accent (that’s nothing special, but it is a nice surprise):
The installation starts; by clicking on the small icon on the bottom right, you can also enable logging on the terminal:
The installation only took a few minutes on this laptop.
Time to restart. Of course, at the first login, you get some updates to install:
The touchpad is already configured with tap-to-click, but it defaults to “natural scrolling” (which I don’t like). That gave me the chance to see the new nice-looking Gnome setting for the touchpad:
I installed Dropbox, and with the Ubuntu extension for “app indicator”, the Dropbox icon appears in the tray bar. It mostly works, although sometimes it keeps showing the “synchronizing” state even though everything is up-to-date.
Note that the current icon theme does not show the “Dropbox” folder in Nautilus with an overlay icon.
Connecting an external HDMI monitor works perfectly (so Wayland is not a problem); I prefer to mirror the contents:
Also, GNOME extensions work fine. Despite the new GNOME Version (45), known to have broken all extensions due to an API breakage, the ones I use seem to have been ported and work correctly.
Despite a SWAP partition already being present on my disk, the installer did not pick it up: the result is a small SWAP file, which I don’t like.
.rw------- 4.3G root  4 Nov 18:06  swap.img
I removed this line from the “/etc/fstab”:
/swap.img  none  swap  sw  0  0
I then added a line referring to my existing SWAP partition.
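The added line looks something like the following (the UUID is a placeholder; use the one reported by "blkid" for your swap partition):

```
# /etc/fstab entry for an existing swap partition (placeholder UUID)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0
```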
I also enabled ZRAM, which will automatically have precedence over the SWAP partition:
sudo apt install systemd-zram-generator
sudo systemctl daemon-reload
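By default, systemd-zram-generator creates a zram device without any configuration file; if you want to tune it, you can create “/etc/systemd/zram-generator.conf”. A minimal sketch (the size expression is just an example):

```
# /etc/systemd/zram-generator.conf
[zram0]
# size the zram device as half of the physical RAM
zram-size = ram / 2
```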
❯ swapon
NAME           TYPE      SIZE USED PRIO
/dev/nvme0n1p4 partition  21G   2M   -2
/dev/zram0     partition   4G   0B  100
I don’t like the wallpapers shipped with this version (in the screenshot, you can easily tell the GNOME wallpapers from the Ubuntu ones):
However, I typically use Variety for wallpapers, so it’s not a big problem.
IMPORTANT: as I have already blogged, you need additional fonts for “Oh-My-Zsh” with the “p10k” prompt.
All in all, Ubuntu 23.10 seems pretty stable and smooth. I’m using it (not as my daily driver), and for the moment, I’m enjoying it.
Here’s another post on how to get started with Hyprland.
This time, we’ll see how to configure notifications with mako, a lightweight notification daemon for Wayland, which also works with Hyprland. (You might also want to consider and experiment with an alternative: dunst.)
If you followed my previous tutorials, you have no notification daemon installed. You can verify that by running the following command (to issue a notification manually) and by looking at the resulting errors:
$ notify-send "hello"
GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Notifications was not provided by any .service files
Let’s install “mako”:
sudo pacman -S mako
The nice thing about mako is that you don’t need to start it as a service manually: the first time a notification is emitted, mako will run automatically.
Let’s run the above notification command again; this time, we see the pop-up, by default, in the top-right corner of the screen:
You have to click the pop-up to make it disappear.
Each time a program emits a notification, mako will show it. For example, Thunderbird, Firefox, and Chrome will emit notifications that mako will display.
Let’s do some further experiments by manually emitting notifications:
notify-send "hello world\!" "This is a message"
will lead to
You can see that the first argument is the title and is formatted in boldface.
You can have a look at mako’s manual (5) about its configuration file and where it is searched for:
man 5 mako

NAME
       mako - configuration file

DESCRIPTION
       The config file is located at ~/.config/mako/config or at
       $XDG_CONFIG_HOME/mako/config. Option lines can be specified to
       configure mako like so:

       key=value

       Empty lines and lines that begin with # are ignored.
Each time you modify the configuration, you must reload mako by using one of the following commands:
killall mako
or
makoctl reload
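As an example configuration (a sketch based on mako’s documented options; the colors and timeouts are arbitrary choices of mine), you could put something like this in “~/.config/mako/config”:

```
# default appearance and timeout (5 seconds)
default-timeout=5000
background-color=#285577ff
border-color=#4c7899ff

# style notifications differently depending on their urgency
[urgency=low]
background-color=#666666ff

[urgency=critical]
background-color=#900000ff
default-timeout=0
anchor=top-center
```

Here, critical notifications stay on screen until dismissed (timeout 0) and appear at the top center instead of the default top-right corner.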
With that example configuration, we can emit a few notifications with different “urgencies”, and see the different colors and positions of the boxes:
notify-send -u low "hello world\!" "This is a low urgency message"
notify-send -u normal "hello world\!" "This is a normal message"
notify-send -u critical \
  "This is a critical message\!" \
  "OK, that was just a demo ;)"
If you use EndeavourOS, you will get notifications about new updates and when a reboot is required after a system update (the latter is a “critical” notification):
This blog post will describe my Ansible role for installing the GNOME desktop environment with several programs and configurations. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.
This role is for my personal installation and configuration and is not meant to be reusable.
The role assumes that at least the basic GNOME DE is already installed in the Linux distribution. The role then installs several programs I’m using on a daily basis and performs a few configurations (it also installs a few extensions I use).
At the time of writing, the role has the following directory structure, which is standard for Ansible roles tested with Molecule.
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── LICENSE
├── meta
│ └── main.yml
├── molecule
│ ├── default
│ │ ├── molecule.yml
│ │ └── prepare.yml
│ ├── fedora
│ │ └── molecule.yml
│ ├── no-flatpak
│ │ ├── converge.yml
│ │ ├── molecule.yml
│ │ └── verify.yml
│ ├── shared
│ │ ├── converge.yml
│ │ └── verify.yml
│ └── ubuntu
│ ├── molecule.yml
│ └── prepare.yml
├── pip
│ └── requirements.txt
├── README.md
├── requirements.yml
├── tasks
│ ├── flatpak.yml
│ ├── gnome-arch.yml
│ ├── gnome-configurations.yml
│ ├── gnome-extension-manager.yml
│ ├── gnome-extensions.yml
│ ├── gnome-templates.yml
│ ├── gnome-tracker.yml
│ ├── guake.yml
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
The role has a few requirements, listed in “requirements.yml”:
YAML
---
roles:
- name: petermosmans.customize-gnome
version: 0.2.10
collections:
- name: community.general
These requirements must also be present in playbooks using this role; my playbooks (which I’ll write about in future articles) have such dependencies in the requirements.
This shows some debug information about the current Linux distribution. Indeed, the whole role has conditional tasks and variables depending on the current Linux distribution.
The file installs a few programs, mainly Gnome programs, but also other programs I’m using in GNOME.
The “vars/main.yml” only defines a few default variables used above:
YAML
---
# vars file for my_gnome_role
python_psutil: python3-psutil
with_flatpak: true
As seen above, the package for “python-psutil” has a different name in Arch, and it is overridden.
For Arch, we have to install a few additional packages, which are not required in the other distributions (file “gnome-arch.yml”):
YAML
---
- name: Install Gnome Packages (Arch Linux)
  become: true
  ansible.builtin.package:
    state: present
    name:
      - gvfs-afc
      - gvfs-goa
      - gvfs-google
      - gvfs-gphoto2
      - gvfs-mtp
      - gvfs-nfs
      - gvfs-smb
The Guake drop-down terminal is installed by the corresponding task file (“guake.yml”).
The file “gnome-templates.yml” creates the template for “New File”, which, otherwise, would not be available in recent versions of GNOME, at least in the distributions I’m using.
YAML
- name: Create Templates directory
  ansible.builtin.file:
    path: '~/Templates'
    state: directory
    mode: 0755

- name: Create Templates
  ansible.builtin.copy:
    content: ""
    dest: '~/Templates/New File'
    mode: 0644
For the search engine GNOME Tracker, I performed a few configurations concerning the exclusion mechanisms. This is done by using the Community “dconf” module:
YAML
# the default was ['.trackerignore', '.git', '.hg', '.nomedia']
# but that way the contents of a git working directory are not indexed
- name: Customize Tracker Ignored directories with content

- name: Make sure Tracker 2 is NOT installed (Arch)
  become: true
  ansible.builtin.package:
    state: absent
    name:
      - tracker
      - tracker-miners
  when: ansible_os_family == 'Archlinux'

# In previous versions of Ubuntu the service file was
# tracker-extract.service
# In more recent versions it is
# tracker-extract-3.service
- name: Disable Tracker Extract at system level
  ansible.builtin.systemd:
    name: tracker-extract-3
    scope: global
    masked: yes
  # Better to mask it at the global level
  # so that it can be run also in a chroot environment
  # otherwise we get "Failed to connect to bus: No such file or directory"
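The body of the “Customize Tracker Ignored directories with content” task is not shown above; a sketch of how such a customization can be expressed with the dconf module follows (the value is an illustrative assumption, and the exact dconf key may differ between Tracker versions):

```yaml
# Illustrative sketch: drop '.git' from the list of directories whose
# contents Tracker refuses to index (the default list includes it).
- name: Customize Tracker Ignored directories with content
  community.general.dconf:
    key: /org/freedesktop/tracker/miner/files/ignored-directories-with-content
    value: "['.trackerignore', '.hg', '.nomedia']"
```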
This also ensures that previous versions of Tracker are not installed. Moreover, while I use Tracker to quickly look for files (e.g., with the GNOME Activities search bar), I don’t want to use “Tracker extract”, which also indexes file contents. For indexing file contents, I prefer “Recoll”, which is installed and configured in my dedicated playbooks for specific Linux distributions (I’ll blog about them in the future).
Then, the file “gnome-configurations.yml” configures a few aspects (the comments should be self-documented), including some custom keyboard shortcuts (including the one for Guake, which, in Wayland, must be set explicitly as a GNOME shortcut):
Then, by using the “petermosmans.customize-gnome” role (see the requirements file above), I install a few GNOME extensions, which are specified by their identifiers (these can be found on the GNOME extensions website). I leave a few of them commented out, since I don’t use them anymore, but I might need them in the future:
YAML
# Required for unzipping extension archives
- name: Install Unzip
  become: true
  ansible.builtin.package:
    state: present
    name:
      - unzip

# Downloading extensions is very flaky, so if it succeeds on 'converge'
# we skip it on 'idempotence'
- name: Install Gnome Extensions
  ansible.builtin.include_role:
    name: petermosmans.customize-gnome
  tags: molecule-idempotence-notest
  vars:
    gnome_extensions:
      - id: 19 # User Theme "user-theme@gnome-shell-extensions.gcampax.github.com"
      - id: 615 # AppIndicator and KStatusNotifierItem Support

# To remove the filters on flathub introduced by Fedora
# see https://ask.fedoraproject.org/t/ansible-flathub-repo-setup/19176
# see https://fedoraproject.org/wiki/Changes/Filtered_Flathub_Applications
- name: Remove filters from flathub
  become: true
  ansible.builtin.command:
    cmd: flatpak remote-modify --no-filter flathub
  changed_when: false
YAML
- name: Install Gnome Extension Manager
  become: true
  community.general.flatpak:
    name: com.mattjakeman.ExtensionManager
    state: present
    # method: user
I installed it system-wide (the “user” method is commented out).
Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. The “default” scenario is Arch, which nowadays is my daily driver.
However, after running the playbook and restarting, the terminal did not look quite right:
You can see that the OS logo before the “>” is not displayed, and other icon fonts (I’m using exa/eza instead of “ls”) are missing, too (e.g., the ones for YAML and Markdown files). In Arch, I knew how to solve icon problems for exa. Here in Ubuntu, I had never experimented in that respect.
However, the p10k GitHub repository provides many hints in that respect. Unfortunately, Ubuntu does not provide packages for Nerd fonts. However, the p10k GitHub repository provides some Meslo fonts that can be directly downloaded.
The commands to solve the problem (provided you already have “fontconfig” and “wget” installed, otherwise, do install them) are:
Now, reboot (this seems to be required), and the next time you open the terminal, everything looks fine (note the OS icon and the icons for YAML and Markdown files):
Of course, you could also download another Nerd font from the corresponding GitHub repository, but this procedure seems to work like a charm, and you use the p10k recommended font (Meslo).
By the way, the Gnome Text Editor automatically uses the new icon fonts. Other programs like Kate (which I use in Gnome as well) have to be configured to use the Meslo font.
I am writing this report about my (nice) experience upgrading the SSD (1 TB) to my Dell OptiPlex 5040 MiniTower. That’s an old computer (I bought it in 2016), but it’s still working great. However, its default SSD of 256 GB was becoming too small for Windows and my Linux distributions. This computer also came with a secondary mechanical hard disk (1 TB).
DISCLAIMER: This is NOT meant to be a tutorial; it’s just a report. You will do that at your own risk if you perform these operations! Ensure you did not void the warranty by opening your laptop.
I wrote this blog post as a reminder for myself in case I have to open this desktop again in the future!
To be honest, my plan was to add the new SSD as an additional SSD, but, as described later, I found out that the mechanical hard disk was a 2.5” one, so I replaced the old SSD with the new one (after cloning it). I’ve used a “FIDECO YPZ220C” to perform the offline cloning, which worked great!
This is the BIOS status BEFORE the upgrade:
I seem to remember that “RAID” is required to have Linux installed on such a machine.
This is the new SSD (a Samsung 870 EVO, 1 TB, SATA 2.5”):
The cool thing about this desktop PC, similar to other Dell computers I had in the past, is that you don’t need a screwdriver: you disassemble it just with your hands. However, I suggest you have a look at a disassembling video like the one I’ve used: https://www.youtube.com/watch?v=gXePa1N_8iI. I know the video is about a Dell Optiplex 7040 MT, while mine is a Dell Optiplex 5040 MT, but their shapes and internals look the same. On the contrary, the Dell Optiplex 5040 SmallFactor videos are not useful because there’s a huge difference between my MiniTower and a SmallFactor 5040.
These are a few photos of the disassembling, showing the handles to use to open the computer, disconnect a few parts, and access the part holding the 2.5 drives.
This is the part holding the two 2.5” drives (as I said, at this point, I realized that the mechanical hard disk also occupies one of these slots):
The SSD (the one I will replace) is the first one on top.
It’s easy to remove that: just use the handles to pull it off:
There are no screws to remove: you just enlarge the container to remove the SSD and insert the new one.
As I said above, I inserted the new one after performing the offline cloning.
Once I closed the desktop, the BIOS confirmed that the new SSD was recognized! 🙂
Now, some bad news (which is easy to fix, though): if you use a partition manager, e.g., in Linux, the SSD is seen as 1 TB, but the partitions are based on the original source SSD, so you end up with lots of free space that you cannot use!
For example, here’s the output of fdisk, which understands there’s something wrong with the partition table:
❯ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.39.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
GPT PMBR size mismatch (500118191 != 1953525167) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.
fdisk also warns that it's not a good idea to try to fix this while one of the partitions is mounted.
From a live ISO, e.g., the one from EndeavourOS, fixing the partition table is just a matter of running parted as follows.
$ parted -l
Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 10485760 blocks) or continue with
the current setting?
Fix/Ignore?
Answer: fix
That's it. The problem is fixed, and you can reboot.
Now, you have access to the whole space in the disk.
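If you prefer a non-interactive alternative to parted's prompt, sgdisk (from the gdisk package) can relocate the backup GPT to the end of the disk. Here is a sketch you can try safely on a scratch image file first; on a real disk, you run it at your own risk:

```shell
# Reproduce the "cloned to a bigger disk" situation on a scratch image:
truncate -s 10M disk.img        # small "source" disk
sgdisk -n 1:0:+4M disk.img      # create a GPT with one partition
truncate -s 20M disk.img        # "clone" onto a bigger disk
sgdisk -e disk.img              # move the backup GPT to the new end
sgdisk -v disk.img              # verify: it should report no problems
# On the real disk it would be: sudo sgdisk -e /dev/sda
```

The `-e` option is exactly what parted's "Fix" answer does: it moves the backup GPT data structures to the end of the (now bigger) device.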
Up to now, I have shown how to get started with Hyprland with the initial configurations.
Now, I’ll show how to install the mainstream status bar in Hyprland: Waybar. Again, I’m going to do that for Arch Linux. As in the previous posts, I will NOT focus on the look and feel configuration.
Before continuing, the Waybar module for keyboard status (Caps lock and Num lock) requires the user to be part of the “input” group. Ensure your user is part of that group by running “groups”. If not, then add it with
sudo usermod -aG input $USER
Then, you must log out and log in.
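The membership check can also be scripted; a small sketch (nothing here needs root — the usermod command is only printed as a reminder):

```shell
# Is the current user already in the "input" group?
if id -nG | grep -qw input; then
    echo "OK: already in the input group"
else
    echo "Run: sudo usermod -aG input \$USER  (then log out and back in)"
fi
```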
First of all, the official package waybar already supports Hyprland (in the past, you had to install an AUR package). So, let’s install the main packages (as usual, you might want to make sure your packages are up-to-date before going on):
sudo pacman -S waybar
Let's open a terminal and start Waybar:
waybar &
The result is not that good-looking:
Waybar heavily relies on Nerd fonts for icons, and, currently, we don’t have any installed (unless you have already installed a few yourself).
The terminal will also be filled with a few warnings about a few missing things (related to Sway) and errors about failures to connect to MPD.
Let’s quit Waybar (close the terminal from where you launched it), and let’s fix the font problem by installing a few font packages:
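The package names below are my assumption (verify with `pacman -Ss nerd`); on Arch, the symbols-only Nerd fonts plus Font Awesome usually cover Waybar's default icons:

```shell
# Assumed package names -- verify with 'pacman -Ss nerd' first
sudo pacman -S ttf-nerd-fonts-symbols otf-font-awesome
```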
Let’s start Waybar again, and this time it looks better:
Try to click on the modules and see what happens. For some of them, the information shown will change (e.g., the time will turn into the date).
Let’s quit Waybar again, and let’s start configuring it. We must create the configuration files for Waybar (by default, they are searched for in “~/.config/waybar”). We can do that by using the default ones:
mkdir -p ~/.config/waybar
cp /etc/xdg/waybar/* ~/.config/waybar/
The above commands copy "config" (the configuration of the Waybar modules, i.e., the "boxes" shown in the bar, in JSON format) and "style.css" (for the style).
Let’s edit “config”. At the time of writing, this is the initial part of the configuration file:
{
    // "layer": "top", // Waybar at top layer
    // "position": "bottom", // Waybar position (top|bottom|left|right)
    "height": 30, // Waybar height (to be removed for auto height)
The initial parts specify the position and other main configurations. This part must be enabled:
"layer": "top",
Otherwise, Waybar popups render behind the windows.
Let’s edit the modules that must be shown on the bar’s left, center, and right. Of course, this is subjective; here, I show a few examples. The modules starting with “sway” are for the Sway Window Manager, while we’re using Hyprland, and we must use the corresponding ones:
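A minimal replacement, based on the module names documented in the Waybar wiki (the right-hand list is an illustrative selection from the default config):

```json
"modules-left": ["hyprland/workspaces"],
"modules-center": ["hyprland/window"],
"modules-right": ["keyboard-state", "pulseaudio", "network", "cpu", "memory", "clock", "tray"],
```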
Each module, together with its configuration, is documented in the Waybar Wiki.
Let’s focus on the left and center modules. I’ve opened three workspaces, and here’s the result (note the workspace indicator on the left, and on the center, we see the currently focused window’s title, in this case, Firefox):
By editing the “style.css” file, we can change the workspace indicator so that we better highlight the current workspace:
#workspaces button.active {
    background-color: green;
    box-shadow: inset 0 -3px #ffffff;
}
Restart Waybar, and now the current workspace is well distinguished:
The "tray" module is useful for showing applications running in the background, like Skype, Dropbox, or the network-manager-applet.
Let’s now define a custom module, for example, one for showing a menu for locking the screen, logging out, rebooting, etc. To do that, first, we need to install the AUR package “wlogout”:
yay -S wlogout
Let’s say we want to add it as the last module on the right. We edit the Waybar config file like this:
"modules-right": [...as before..., "custom/power"],
Then, in the same file, before closing the last JSON element, we define such a module (remember to add a comma after the previously existing last module):
    ...
    },
    "custom/power": {
        "format": " ⏻ ",
        "tooltip": false,
        "on-click": "wlogout --protocol layer-shell"
    }
}
Note that I've used a glyph from the installed Nerd fonts. Of course, you can choose anything you like. The "wlogout" menu will appear when you click on that module. Let's restart Waybar and verify that:
By editing the “style.css”, you can customize the style of this custom module, e.g.,
#custom-power {
    background-color: #ffa000;
}
When we’re happy with the configuration, we modify the Hyprland configuration file to start Waybar automatically when we enter Hyprland:
exec-once = waybar
Restart Hyprland and Waybar will now appear automatically.
Finally, you can have several Waybar bars, i.e., instances, in different parts of the screen, each one with a different configuration.
For example, let’s create another Waybar configuration for showing a Taskbar to show all the running applications from all workspaces. This can be useful to quickly look at all the running applications and quickly switch to any of them, especially in the presence of many workspaces.
I create another configuration file in the “~/.config/waybar” directory, e.g., “config-taskbar”, with these contents (you could also configure several Waybar instances in the same configuration file, but I prefer to have one configuration file for each instance):
{
    "layer": "top", // Waybar at top layer
    "position": "bottom", // Waybar position (top|bottom|left|right)
    "height": 30, // Waybar height (to be removed for auto height)
    // "width": 1280, // Waybar width
    "spacing": 4, // Gaps between modules (4px)
    // Choose the order of the modules
    "modules-center": ["wlr/taskbar"],
    // Modules configuration
    "wlr/taskbar": {
        "format": "{icon}",
        "icon-size": 16,
        //"icon-theme": "Numix-Circle",
        "tooltip-format": "{title}",
        "on-click": "activate",
        "on-click-middle": "close"
    }
}
We can call it from the command line as follows:
waybar --config ~/.config/waybar/config-taskbar
Here’s the second instance of Waybar on the bottom, showing all the running applications (from all the workspaces):
Unfortunately, in my experience, not all icons for all running applications are correctly shown: for example, for “nemo”, you get an empty icon in the taskbar. You can click it, but visually, you don’t see that… maybe it has something to do with the icon set. I still have to investigate.
You can run both instances when Hyprland starts by putting these two lines in the “hyprland.conf” file:
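Based on the exec-once syntax and the --config option used earlier, the two lines look like this:

```
exec-once = waybar
exec-once = waybar --config ~/.config/waybar/config-taskbar
```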
In this blog post, I will describe a few scenarios where you want to update versions in a multi-module Maven project consistently. In these examples, you have both a parent POM, which is meant to contain common properties and configurations to be inherited throughout the other projects, and aggregator POMs, which are meant to be used only to build the multi-module project. Thus, the parent itself is NOT meant to be used to build the multi-module project.
I'm not saying that such a POM configuration and structure is ideal; rather, separating the concepts of parent and aggregator POMs (though they are typically implemented in the same POM) can be seen as good practice. In complex multi-module projects, you might want to keep them separate. In particular, as in this example, we can have several separate aggregator POMs because we want to be able to build different sets of child projects. To make things a bit more complex and interesting, the aggregators inherit from the parent. Again, this is not strictly required, but it allows the aggregators to inherit the shared configurations and properties from the parent. However, it also adds a few more (interesting?) problems, which we'll examine in this article.
Description:
Sets the current project’s version and based on that change propagates that change onto any child modules as necessary.
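That is the documented description of the versions:set goal; a typical invocation looks like this (the version number is illustrative):

```shell
mvn versions:set -DnewVersion=0.0.2-SNAPSHOT
```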
This is the aggregator1 POM (note that it inherits from the parent and also mentions the parent as a child because we want to build the parent during the reactor, e.g., to deploy it):
[ERROR] Failed to execute goal org.codehaus.mojo:versions-maven-plugin:2.14.2:set (default-cli) on project example.aggregator1: Project version is inherited from parent. -> [Help 1]
[ERROR] Non-resolvable parent POM for com.examples:example.aggregator1:0.0.1-SNAPSHOT: Could not find artifact com.examples:example.parent1:pom:0.0.1-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 10 -> [Help 2]
[ERROR]
Before going on, let’s revert the change of version in parent1.
Let's add references to the aggregators in parent1:
<modules>
    <module>../example.aggregator1</module>
    <module>../example.aggregator2</module>
</modules>
Let’s run the Maven command on parent1:
[ERROR] Child module example.parent1/pom.xml of example.aggregator1/pom.xml forms aggregation cycle example.parent1/pom.xml -> example.aggregator1/pom.xml -> example.parent1/pom.xml @
That makes sense: the aggregator has the parent as a child, and the parent has the aggregator as a child.
But what if we help Maven a little to detect all the children without a cycle?
It looks like it is enough to “hide” the references to children inside a profile that is NOT activated:
<profiles>
    <profile>
        <!-- DON'T activate it; it's only to let Maven
             detect the children -->
        <id>update-versions-only</id>
        <modules>
            <module>../example.aggregator1</module>
            <module>../example.aggregator2</module>
        </modules>
    </profile>
</profiles>
And the update works. All the versions are consistently updated in all the Maven modules:
The important thing is that aggregator2 does not have parent1 as a module (just parent2), or the Maven command will not terminate.
We can also consistently update the version of a single artifact; if the artifact is a parent POM, the references to that parent will also be updated in children. For example, let’s update only the version of parent2 by running this command from the parent1 project and verify that the versions are updated consistently:
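The command in question uses the groupId/artifactId parameters of versions:set to select the artifact to process; a sketch with the coordinates of this example (the version number is illustrative):

```shell
mvn versions:set -DgroupId=com.examples -DartifactId=example.parent2 -DnewVersion=0.0.3-SNAPSHOT
```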
Unfortunately, this is not the correct result: the version of parent2 has not been updated. Only the references to parent2 in the children have been updated to a new version that will not be found.
For this strategy to work, parent2 must have its own version, not the one inherited from parent1.
Let’s verify that: let’s manually change the version of parent2 to the one we have just set in its children:
<parent>
    <groupId>com.examples</groupId>
    <artifactId>example.parent1</artifactId>
    <version>0.0.2-SNAPSHOT</version>
    <relativePath>../example.parent1</relativePath>
</parent>
<artifactId>example.parent2</artifactId>
<packaging>pom</packaging>
<version>0.0.3-SNAPSHOT</version>
And let's try to update parent2 to a new version:
Let’s try and be more specific by specifying the old version (after all, we’re running this command from parent1 asking to change the version of a specific child):
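With the additional oldVersion parameter, the invocation becomes something like this (version numbers are illustrative):

```shell
mvn versions:set -DgroupId=com.examples -DartifactId=example.parent2 -DoldVersion=0.0.3-SNAPSHOT -DnewVersion=0.0.4-SNAPSHOT
```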
This time it worked! It updated the version in parent2 and all the children of parent2.
Let’s reset all the versions to the initial state.
Let’s remove the “hack” of child modules from parent1 and create a brand new aggregator that does not inherit from any parent (in fact, it configures the versions plugin itself) but serves purely as an aggregator:
But what if we apply the same trick of the modules inside a profile in this new aggregator project, which is meant to be used only to update versions consistently?
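A sketch of what such a standalone aggregator with the profile trick could look like (the coordinates and the plugin version are illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.examples</groupId>
    <artifactId>example.versions.aggregator</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>versions-maven-plugin</artifactId>
                <version>2.14.2</version>
            </plugin>
        </plugins>
    </build>
    <profiles>
        <profile>
            <!-- never activated: only lets Maven detect the children -->
            <id>update-versions-only</id>
            <modules>
                <module>../example.aggregator1</module>
                <module>../example.aggregator2</module>
            </modules>
        </profile>
    </profiles>
</project>
```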
This time, the version update works even when the same module is present in both our aggregator1 and aggregator2! Moreover, versions are updated only once in the module mentioned in both our aggregators:
Maybe, this time, this is not to be considered a hack because we use this aggregator only as a means to keep track of version updates consistently in all the children of our parent POMs.
As I said, these might be seen as complex configurations; however, I think it’s good to experiment with “toy” examples before applying version changes to real-life Maven projects, which might share such complexity.
I am writing this report about my (nice) experience adding a second SSD (an NVMe, 1 TB) to my LG GRAM 16, the main laptop I've enjoyed for two years.
DISCLAIMER: This is NOT meant to be a tutorial; it's just a report. If you perform these operations, you do so at your own risk! Also make sure that opening your laptop does not void the warranty.
I wrote this blog post as a reminder for myself in case I have to open the laptop again in the future!
I also decided to describe my experience because there seems to be some confusion and doubts about which kind of SSD you can add in the second slot (SATA? or PCI?). At least for this model, LG GRAM 16 (16Z90P), I could successfully and seamlessly insert an NVMe M.2 PCIe3. This is the SSD I added, Samsung SSD 970 EVO Plus (NOTE: this SSD does NOT include a screw for securing the SSD to the board; however, the LG GRAM has a screw in the second slot, so no problem!):
You can also see the internal of the laptop below and what’s written in the second slot.
These are the tools I’ve used:
It's time to open the back cover. This was the first time I did it, and I found it not very easy… not impossible, but not as easy as I thought. Fortunately, several videos show this procedure. In particular, there's an "official" one from LG, which I suggest you follow: https://www.youtube.com/watch?v=55gM-r2xtmM.
Removing the rubber feet (there are three types) was not easy at first: they are "sticky", but with a proper tool (the grey one in the picture above), I managed to remove the bigger ones and the one at the top.
For the smaller ones, I had to use a cutter. Be careful, because they tend to jump at your face (mind your eyes). 😉 Moreover, don't use too much force, because you might break the bigger ones (in the linked video, you can see the person break the one in the lower-left corner).
And here’s the back cover with all the screws revealed:
The screws are not all of the same type either, so I made sure to remember their places:
Again, removing the cover after removing all the screws might not be straightforward: take inspiration from the linked video above! It took me some effort and a few attempts, but I finally made it! Here’s the removed cover and the internal of the laptop (well organized and neat, isn’t it?):
Now, let’s zoom in on the second SLOT:
You can see that it says NVME and SATA3. I haven’t tried with a SATA3, but, as I said, I had no problem with the Samsung NVME!
IMPORTANT: unplug the battery cable before continuing (at least, that’s what the LG video says). That’s easy: pull it gently.
Remove the screw from the second slot:
Insert the SSD (that’s easy):
And secure it with the screw:
Of course, now we have to reconnect the battery cable!
Let’s take a final look at the result:
OK! Let’s close the laptop: putting the cover back is easier than removing it, but you still have to ensure you close the cover correctly. Put the screws back and the rubber feet (that’s also easy because they are still sticky).
The moment of truth… will the computer recognize the added SSD? Let’s enter the BIOS and… suspense… 😀
I booted into Linux (sorry, I almost never use Windows, and, to be honest, I still haven’t checked whether Windows is happy with the new SSD) and used the KDE partition manager to create a new partition to make a few experiments:
Everything looks fine! In the meantime, I’ve also created another partition for virtual machines on the new SSD, which works like a charm! I haven’t checked whether it’s faster than the primary SSD, which comes with the laptop.
That’s all! 🙂