
It is time. After almost 7 years of running around with a MacBook Pro (Late 2013 model), I decided to go back to Windows. I thought it would be a bit of a struggle... and it was. But still: here we go.

Reasons

The main reason to go Mac in 2013 was a friend's recommendation. Still grateful for that. And no other real option was available for what I was looking for: weight under 2kg, long battery life, a good-looking device - let's be honest here, good looks were number 1. Good build quality. Performance. Nothing else came even close. And I work a lot in terminals - that made it a good choice.

I never regretted going MacBook Pro - the build quality is fantastic. So is the MagSafe charger. Very good screen. Runs quietly. The touchpad is simply amazing. A lot of good things.


I decided to replace the MacBook for several reasons. The main one: the battery degraded and macOS started showing a warning. The tool coconutBattery showed a remaining capacity of 78%. Everything was still working fine. It was noticeable, but I could have lived with it. I just don't have to (smile)

Second: every other computer I use runs Windows. And switching back and forth made me use shortcuts with weird effects and maintain duplicated tool setups.

Since the Git installer comes with a very good Git Bash, and Microsoft has released the new Windows Terminal, the Terminal in macOS is no longer irreplaceable. There is also WSL (the Windows Subsystem for Linux) - great things happened.

Third: no more MagSafe. Fourth: I don't like the Touch Bar. I just don't.

So I can't really - or don't want to - buy a new MacBook Pro. And this has been bugging me for years! If I am to believe my Google search history (yes, I have that turned on), my first search for "dell xps 15" was in 2016. But I was looking for a device with a similarly good touchpad, weight, screen, battery life and good looks. It got more serious in 2018, when I had a lot of struggles with macOS upgrades. I had to reset the MacBook twice within a year because of weird root certificate issues. Since Dell now offers a good-sized touchpad with Microsoft Precision drivers, I thought that's my exit strategy.

The Pain of Ordering

The reviews of Dave Lee, Hardware Canucks and LTT, as well as The Everyday Dad (sort of more of the "Mac" perspective), all sounded very good. But they all mentioned a touchpad issue where the pad wobbles. They also all claimed Dell would replace those parts, and that as of June it should not be an issue anymore. Should not.

So in July 2020 I ordered a brand new and shiny Dell XPS 15 9500. It arrived in mid-August 2020. Nice. A nice box, well packaged, a very good experience. And here the good mood stops. The touchpad had that wobble issue. So I called Dell Premium Support the same day. The support employee was very nice and good to talk to (after the usual 2-5 phone redirects...), I answered all the questions and was promised a technician would show up the next day with a replacement part (the whole palm-rest needs to be replaced). Sounded good. Ticket closed.

The next morning I received an email that no replacement parts were available and that I would be contacted again.

Ten days later I called again. No progress; Dell support told me there is a wait time of 15 business days before anything else can be done. Ticket closed. Waited some more.

Called again seventeen days after the delivery. This time a new notebook was ordered for me, with a delivery date planned for the beginning of October 2020. Not ideal. But well... Ticket closed.

The delivery date was then moved back again, into November. The notebook with the defect was also picked up - I had to do nothing there, that's a plus. Ticket closed.


Waiting.


In the meantime I decided to order an iFixit kit and replace the MacBook battery myself, following the very good guide they provide. In 55 easy steps - ok, a bit messy, because batteries really need to be glued into a chassis, right, Apple?! - I removed the old battery, and in another 55 easy steps put everything back together. It took longer than expected: but it worked! Back to 100% capacity.


So I called Dell and asked to return the new notebook, as delivery was still a month away. I was told someone would need to approve this and that they would call me back. Ticket closed. They never called back.

The new XPS actually arrived earlier, at the end of October. Nice packaging, good experience. And: the same touchpad issue on the new device! Dell support again. Had another good call - after 3-5 redirects. A technician for tomorrow. Déjà vu. BUT: he actually showed up the next morning. Cool. Replaced the palm-rest. The issue was STILL there! I asked him if I'm nuts, but he confirmed the wobble, and that it's broken. Weirdly: the old part no longer had that wobble once it was out of the notebook. It only appeared when screwed into the laptop. He also stated he replaces this part a lot, more in the XPS series than in the Precision series. Well. Disappointing quality control - but ok, at least they replace the part and don't force you into discussions. New ticket.

A new replacement part! He showed up again the next day. Cool. Another palm-rest replacement. This time: SUCCESS! Finally. So the notebook I ordered at the end of July 2020 was usable at the end of October 2020. My very positive unboxing mood: gone by now. GONE! This is an expensive machine. The joy of buying one: GONE. That's one working palm-rest out of four.

Summary

I have now been using the XPS 15 for three weeks.

Management summary:

  • are you satisfied? no.
  • will you switch back? no. Because it's a nice machine. It really is.
  • does this make sense? no.
  • can you recommend the XPS? that decision will be yours. Given the support hassle... no?
  • regrets? maybe.
    Had I tried the battery replacement earlier, I might not have ordered one at all and waited until AMD Ryzen mobile CPUs are more widely available.
    But this operation was a bit risky - you have to "rip out" the glued-in old battery - and it could have left me with no notebook at all. Didn't want to risk that.
  • I thought I wouldn't, but I do: USB-A, still needed.
  • Dell Support: I'm sorry, but that just sucked!
    On every call with a "human", the person on the other side was friendly, patient and helpful. They tried their best. Good calls - after you got redirected 3-5 times...
    The technician that replaced the palm-rest also gets 10 out of 10.
    Dear Dell, you got the people. Give them a system to properly support them.

    But: why do I have to call Dell more than a dozen times (!!!) for an issue that I did not cause? 
    Why do you say the person who can authorize returns will call me back, and then never call? The technicians were able to do this.
    Around 5 closed support tickets, without the issue being resolved. There are no status updates anymore - you have to call again and get a new ticket. And Dell probably should have created even more tickets, because some contacts told me they would open a ticket - but did not.

    Calling back on that "direct number" in a support ticket NEVER WORKED! Not a single time! You just get kicked out of the line and you can start over!!! FFS!

    Around a dozen calls to the premium support - I don't want to imagine what the non-premium support is like - and five to seven closed support tickets without my issue actually being resolved.

    The email support just replied: please call, we are not responsible for issues like this. I was in contact with a support center in Germany, one in Austria, and one in Switzerland. Every email I received looked different. Every email contained different phone numbers or contact information, and different links to dead support sites.
    PLEASE: DO A BETTER JOB!! Because all the joy and fun of buying a new device is just blown away if you can't get this right.


Comparing the un-comparable


The touchpad

MacBook: It's just perfect. I like it a lot. Never used a mouse.

Dell XPS 15: The touchpad is very nice. Is it as good as the Mac one (from 2013!)? No, it is not. Even after 7 years, the Windows world has not been able to catch up! This is embarrassing.

I had to change some settings in the registry editor (see superuser.com):

Computer\HKEY_CURRENT_USER\Software\Microsoft\Wisp\Touch\Friction
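If you want to try the same tweak: this is roughly how it looks from an elevated command prompt. The value name comes from the path above; the number to set is pure experimentation on my side, so treat it as a suggestion, not a documented setting:

reg query "HKCU\Software\Microsoft\Wisp\Touch" /v Friction
reg add "HKCU\Software\Microsoft\Wisp\Touch" /v Friction /t REG_DWORD /d 2 /f

You will probably need to log off and back on before a change takes effect.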

The feeling is still 'weird'. When scrolling fast it's not really keeping up. Might need some more messing around. And the general sensitivity is not as good: sometimes touches (to click something) are not recognized. That never, ever happened on the MacBook. I never needed to tweak anything there. It is just perfect the way it comes. So it's possible. Hear that? Microsoft? Dell? Both? Anyone?

Physical size

MacBook: A bit bigger, but nothing that counts as a big plus or minus.

Dell XPS 15: The XPS is a bit smaller in size - mainly in width. But it has a good format.

Weight

MacBook: Around 2010g.

Dell XPS 15: The XPS feels quite a bit heavier. But it's around 2055g. From lifting it up I would have said it must be more.
Display

MacBook: Not too bright, but very nice to look at. Glass. No touch. Good scaling in macOS. After 7 years, some sort of coating on the display is gone - some sort of stain? It's not noticeable when the screen is on, but clearly visible when it's off.

Dell XPS 15: The screen has some good and some ugly sides.

The color and brightness: fantastic.

Touch: I like it. I use it a lot for scrolling and closing windows. I think it's a handy feature. Hear that, Apple? Probably not.

The automatic brightness changes in steps that I notice. Irritating. Had to turn it off (in the settings, not the registry).

And one thing I don't understand: Windows on high-res displays does not look as eye-friendly as macOS does. I can't tell what it is. It's all sharp and color accurate. But the font scaling still shows ugly dialogs and relative sizes that have not been fully "resolved".

Also, the brightness can only be adjusted down to some limit, not turned off entirely. I sometimes do that when I need the device to do something overnight.

Build quality

MacBook: Looks very good even after 7 years. The lid is a bit "tilted" from carrying it around. And from dropping it once. Nothing serious. As good as new. A robust machine. The lid closes with no gaps...

Dell XPS 15: On one hand (ignoring the touchpad issues for now) it's very well made. Very nice to look at. But then: there are "gaps". There is a rubber seal around the screen - as on the MacBook - to prevent dust from entering when it's closed. But the lid does not close well enough. There is a gap. The MacBook does not have that. Again: in 7 years (!!) you could not bring build quality up to this level??

Some small things I noted as well - and one better does not "note such things":

  • There are two little holes on top of the lid. I thought I had already broken something.
  • There is a white light in front that indicates charging. It is not fully illuminated.

    Update: I was able to fix this myself. Found a hint that maybe only a wire was blocking the LED - which it was. See this Dell Support topic.
  • The lid opens with one hand and has a perfect resistance. My MacBook requires two hands.
Speakers

MacBook: No complaints. If complaining: use headphones.

Dell XPS 15: At least the Dell speakers sound better than the 7-year-old MacBook speakers. I have colleagues with a new MacBook Pro: it does not really compete with that. But I don't need more - so I should not complain here.
Keyboard

MacBook: Got used to it. Hard to tell (smile)

Dell XPS 15: Feels better than the 7-year-old MacBook. Has a fingerprint reader. The arrow keys on the MacBook are somehow better to use - less accidental Shift hitting.

Also, the power button could be a bit further away from the other keys.

There is no Fn shortcut to disable the microphone? But neither has the Mac.

"Instant on"Open the lid to login: just a slight delay - sometimes the keyboard is not quite ready when the screen is. MacOS: just nice.Windows: differs. When not asleep it's very close. After some time its maybe 10 seconds. Not as good. Acceptable. Hope this may gets a bit better with time. No expectations, just hope.
Noise

MacBook: Mostly quiet. When doing something heavy for a while: well, there are no miracles. But YouTube and Chrome: no fans kick in. The MacBook is certainly on the warmer side, though. I will not say it's an issue; just, if I could choose, I would rather not have warm fingers. The air intakes are on the side. That's perfect for my couch position.

Dell XPS 15: The XPS is a bit more noisy than the MacBook. There are battery profiles to configure it to your preferences. A "migrate from Mac" profile might be good... because... why again do I configure something I never did on macOS?

But it is also a rather quiet machine. Not that it's a fair comparison - it has 64GB of RAM compared to the 16GB in my MacBook, and comes with double the CPU cores/threads and double the storage.

The air intakes are on the side and on the bottom. That feels not so perfect for my couch position (not sure if it's an issue).

Yet: when using Google Chrome, the fans tend to kick in, where on the MacBook I don't have that.

Battery life

MacBook: Still impressive. And after my battery swap: even more so. I use caffeine to keep the MacBook just "on". When left alone with only the screen running, it runs for hours and hours.

Dell XPS 15: Battery life felt bad at the beginning, but it holds up quite nicely. I did not measure anything, as I don't know how to compare the two. But I have no complaints.

At least charging technology has improved over the last few years. And the charger Dell includes is compact in size and nice - not a random ugly one.

Running Linux

MacBook: Tried it. You can't run both the internal and the dedicated GPU. Well, you can with some messing around. But: no, not going there. This is embarrassing. The notebook runs very hot and has very bad battery life. You're obviously not supposed to do this. It's not your device alone - Apple keeps its stake in it.

Dell XPS 15: Linux and battery life: an open topic. But I meet more and more people doing this.


So, as of November 2020, that's the situation: the device I would like to own does not exist and is not being built. So compromise it is. I still like the looks of MacBooks. The 16-inch screen size is also tempting. The restrictiveness of these devices when it comes to opening them and swapping parts... I don't like it, at all. And then there is the pricing.

But in summary: the MacBook advantages in size, looks, weight and features (like the underlying Unix, the terminal, MagSafe) are gone. I caught myself looking at the new MacBook 16 - but I still have the same issues. No MagSafe. Touch Bar. Oh, there is that new one: expensive! Even compared to the high Dell pricing.

Other vendors? Lenovo: I would consider that good hardware too. But I don't like the looks (sorry), or the touchpad. HP and Acer and Razer and... I never warmed to the idea.


So I decided to stick with this and give it a serious try. I mean: it has the power! (smile)

And also because I decided to never call that support number again. Ever.


Wouldn't it be amazing if you could deliver the right software to your customers - software they can understand and follow?


Agile software development has gained a lot of traction in recent years. Yet teams still struggle with freedom and responsibility. Behavior-driven development, also referred to as BDD, enables everyone involved in the project to easily engage with the product development cycle. It helps to stay on the path to the right decisions - to not only build the software right, but also to build the right software.


In BDD, users, testers and developers write test cases together in simple text files. These test cases are scenarios: examples of how the software is supposed to behave. The shared scenarios help team members understand what is going on. They are used over a long period of the cycle: they start as the specification, help during implementation and design, and can even fill feature completion reports.
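To make this tangible, here is what such a scenario could look like - a made-up example in Gherkin, the plain-text format that BDD tools like Cucumber read:

Feature: Account withdrawal

  Scenario: Withdrawing less than the balance
    Given an account with a balance of 100 CHF
    When the customer withdraws 40 CHF
    Then the remaining balance is 60 CHF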


BDD is a software development approach that brings users, testers, and developers together to create test cases in a simple text language. But it comes with methods that go beyond specification files and yet another test utility: methods that improve transparency and traceability in an agile team.


Do you want to learn how to deliver better software to your customers? 

Check out the workshop for further details.



Culture as Code

Welcome to culture as code! Now: obviously, this is a lie - culture is about social behaviour, about norms found among humans. Culture is more than what happens at your workplace. It is found in music, art, religion. You can't possibly put that into code. And you're right, you can't.

But hear me out. 

Let me tell a story showing that code can indeed be a source of influence on human behaviour - and help to improve your team's culture. And how adding the human to the mix makes this work.


Suspect a required movement

The situation in software development is often diverse. It all started with good intentions, enthusiasm, even huge ambitions. Then it all drifted into some arrangement of progress and staying alive. A drift that companies often try to cope with by organizing work into departments and increasing delivery cycles. And people get used to those cycles. Rely on them, maybe even demand them.

A culture driven by processes and rules emerges. Not too bad, not great either.

So the cycle continues: That application needs to be done in a couple of months?! We will do testing when the software is more complete. Does the CI server show a broken build? We can fix that once we need it. It is more important right now to be able to work locally. Do the integration tests do whatever they want? We can take care of those once all services are integrated.

This mental model of postponing important things in favour of urgent work will only change when the culture changes! We need to change our culture!! This is what developers are born for. This is what developers are given time for!!! Culture changes!!! Psychological work, social competence. Well, maybe not every developer is up for the task. Still, this situation needs to improve. 


Starting a movement

Copper plate with laws used by the Romans

But what are developers good at? We declare our manifestos to follow, search for patterns to implement and re-use, define principles and write guidelines. We come up with practices to repeat the next time, since they worked. With rules to follow so we don't stumble over the same thing again.

Rules - we can implement those - can't we?


Investigating the situation

Some rules are surely harder to implement than others. 

Like: Be humble to each other. Nice rule. Sounds good. Sounds important. The implementation may require some Commander Data brain. No one got time for that. 

New rules, more concrete, more useful! 

So here we go: The CI build is broken? Drop all your pencils and go fix it!

The code analyzer brought a crime to light? Drop all your pencils and go fix it! The security scanner found a new vulnerability? Drop the pencils and update that library some insane person added! The deployment failed? Drop all your pencils and fix the deployment. The service crashed miserably? Drop all your pencils and investigate!


These are all important activities, and the quality in which we perform them affects our efficiency and how much time we can spend on other things.


We don't do all this, because doing it would mean checking the CI server, the code analyzer, the security scanner, the deployment tools, the service logs and the monitoring tools - just to know what is going on, whether there is something to do at all, and whether the problem that came up is actually ours or was caused by some other poor fellow.


So back to our plan: what are developers good at? Well, I do hope at writing code! And all of the above can be expressed in code and be shown to your team in an instant. Is that a culture change? It sure is not. But we ain't there just yet. Let's keep trucking.


This overview will help us identify which task is out in the wild and should be done - or, honestly, should have always been done, but wasn't, because it was too fiddly to check everything. So we can go from dropping the pen directly to addressing the issue, taking a huge jump over all the forensics.


Once we have a collection of this information and can show a summary, we make transparent what was hidden somewhere. No more constant searching for the same information. For us and for others. The situation of our software is transparent. An obvious state in software? Isn't that beautiful.


Writing a build monitor

Well, there is one already, isn't there? Sure there is. But none that shows our tools, our state, our information the way we want it. And - to anticipate a little bit here - we want to do more than just apply rules to some state. We target culture.


So once we have written a small application that collects all the information we need (from the CI server, the code analyzer, the security scanner, the deployment information, and the application health state itself), we figure: we have quite some information at hand!


For example: is the production version newer than the version on pre-production? Is the application unhealthy because some other service is unavailable - which may just mean letting that team know about it? Does the meta-information of our application suggest any actions? Did we configure the application correctly, so all monitoring tools can work? Was a SNAPSHOT version deployed? Is the version created by the CI server the one that was deployed to the test environment, or is there a version gap? Is the test coverage anywhere near where it should be?
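To make this a bit more concrete, here is a minimal sketch of such checks expressed as code, in Kotlin. The names and thresholds are made up for illustration - the real collection and rule set will look different for every team:

// A made-up snapshot of what the monitor collected from the CI server and deployment tools.
data class PipelineState(
    val ciVersion: String,
    val testVersion: String,
    val prodVersion: String,
    val testCoverage: Double,
)

// A rule returns a finding to put on the screen, or null if all is well.
fun interface Rule {
    fun check(state: PipelineState): String?
}

val rules = listOf(
    Rule { s -> "Version gap: CI built ${s.ciVersion} but test runs ${s.testVersion}"
        .takeIf { s.ciVersion != s.testVersion } },
    Rule { s -> "A SNAPSHOT version is deployed to production!"
        .takeIf { s.prodVersion.endsWith("-SNAPSHOT") } },
    Rule { s -> "Test coverage at ${s.testCoverage}% - not anywhere near where it should be"
        .takeIf { s.testCoverage < 70.0 } },
)

fun findings(state: PipelineState): List<String> = rules.mapNotNull { it.check(state) }

fun main() {
    val state = PipelineState("1.4.2", "1.4.1", "1.3.0-SNAPSHOT", 55.0)
    findings(state).forEach(::println)  // three findings - a reddish screen
}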


The pirate codex

All kinds of things can now be checked, beyond just a state in a single tool. The state of a delivery pipeline can be observed and checked against our rules. The rules we currently need.


Some may call this IT governance. But there is one big difference: these are our rules, they represent our current focus and priorities. Ideally the two match - but let us not jump into that snakepit today.


Observe the change

Now, as every team has a reddish screen, some will take on the challenge to go greenish. And they will notice that once something comes up, it is usually caused by other teams. These teams will now either be lost and frustrated (because there is no one to lead them out of the mess), or they take on the challenge themselves. Because this is our tool and our rules. And no one wants to come in last. Some claim they don't mind - but I don't believe you. So application by application, pipeline by pipeline, team by team, the situation improves. Because people care. Because they see the state, they see the effect of their work, they notice the improvements over time.


And when people start taking care where they previously didn't: there is your culture change.


So a simple build screen? A tool that allows implementing some custom rules? Does that work? 

The laws carved into a wall in Gortyn (Crete)

It does. But not on its own. Because tools are only tools. If a team does not find a way to help themselves, or someone to turn to with the embarrassing questions, they will ignore this. Delete the transparency and keep living in the mess. You need the humans willing to go through the essentials - application by application, team by team. You need some people who care and who carry this care to others. By making the things they care for transparent - for everyone.


This worked exceptionally well for me. Because you can always find other knights willing to ride along into battle. Because they are sick of living in the mud. And this way - step by step, floor by floor - you will reach the roof under the stars. I will not argue it is a fast process. But some processes are healthier if slow. And even with 40 applications, if you can only heal 2 a month, after only two years you will have healed them all. Looking at some mess today: don't you wish you had started 24 months ago? Because today, it would be the roof and the stars!

So if it is not today: let it be tomorrow.


When we think about some mobile apps and how they changed how we meet people, how we connect and get in touch: then why would code not be able to influence and change a culture? Code certainly already did this on other occasions.


Yes! Code can change a culture, together with the people that hold on to it. I've seen it.


The tool that came out of all these steps was called "Mobitor" - because it needed a name at some point. You can give it a run yourself: https://mobiliar.bitbucket.io/mobitor/

The next steps are on www.cultureascodemanifesto.org: collecting some guidelines to help you orient yourself on the journey.



The picture of the wall with laws is from wikipedia. The Roman bronze plate picture is from Claude. The Codex of Pirates of the Caribbean is from the fandom wiki (remember: it's just guidelines).


Kotlin all the things

So, after all, it seems JetBrains is very serious about Kotlin. And I have to admit it comes with some handy features and good IDE support. But this is not about the Kotlin language; this is about where it can be used.

As we have seen in Migrating from Gradle to Gradle, writing Gradle build scripts using the new Kotlin DSL is supported. So far we have:

  • our sources in Kotlin
  • our build configuration in Kotlin
  • but not our Continuous Integration configuration
    (depending on how far you want to push it your build chain or pipeline as well)

Since TeamCity (the CI server) is from JetBrains as well, it supports storing your build configuration not only via the UI: the configuration can be stored in your VCS, in an XML format and - since around version 10, in 2017 - in a Kotlin format. The current version 2019.1 comes with even more improvements and simplifications in this area. So throw away your build config YAML file! Kotlin your build config too! Although this seems a bit weird at the beginning, there are some big advantages:

  • it is real source code
  • it compiles
  • you can share common pipeline definitions via libraries
  • since Kotlin is a typed language, there is nice support in your IDE with auto-completion
  • you can compile the code prior to pushing it - compared to the trial-and-error cycle that YAML config files come with, I'll argue it is the better method
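To give you an idea, a minimal settings.kts could look something like this. Take it as a sketch: the DSL package version in the imports differs between TeamCity releases, so best check what your server generates for you:

import jetbrains.buildServer.configs.kotlin.v2018_2.*
import jetbrains.buildServer.configs.kotlin.v2018_2.buildSteps.gradle
import jetbrains.buildServer.configs.kotlin.v2018_2.triggers.vcs

version = "2019.1"

project {
    buildType(Build)
}

object Build : BuildType({
    name = "Build"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        // run the Gradle build on every change
        gradle {
            tasks = "clean build"
        }
    }

    triggers {
        vcs { }
    }
})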

There is a very nice Blog from JetBrains on this topic that I can highly recommend to read:

And while you're at it, maybe watch the webinar on "Turbocharging TeamCity with Octopus Deploy" as well. Octopus is an additional commercial service. But distinguishing between continuous integration and deployment seems like a good split of responsibilities.

Since I consider you now to be convinced, let us test-drive it! We first need a TeamCity server, a TeamCity agent, and an example project.

To have it set up quickly for a test drive, I created a docker-compose.yaml file (yes, a YAML file - isn't it ironic): https://github.com/brontofundus/kotlin-all-the-things

Clone the repository and fire it up:

$ docker-compose pull
$ docker-compose up -d

Then point a browser to http://localhost:8111/

This will bring you to the TeamCity installer. But don't panic, it will only take a few minutes!




Proceed

  • Select PostgreSQL
  • download the driver
  • and use the same settings as used in the docker-compose file:
    • "postgres" as host
    • "teamcity" as username, password and database name

Proceed

Wait

Scroll down and agree to the license
(well read it of course - but don't tell me you are not used to selling your soul)

Create an admin account

(for simplicity use "admin" and "password" here)


There you are

As you may notice, on top it says there are 0 agents. Which is not entirely true.

But to use the one we have, it needs to be authorized first.


Go to "Agents" and "Unauthorized" and enable the agent

If you now go to the start page (click on the logo on the top left) you will be able to add a project

Builds run already

If you have a close look at the project, you will notice it contains a ".teamcity" directory with the build configuration: https://github.com/brontofundus/gradle-groovy-kotlin-dsl/tree/kts/.teamcity

The new 2019.1 format comes in a "portable" variant. Compared to earlier TeamCity versions, the number of files is reduced to only the settings.kts file and the pom.xml.

From here on no more clicking in the UI is necessary.

And even more comfort

Since you probably use IntelliJ to develop in Kotlin anyway, and you now have a running TeamCity server from the same company, even more comfort is possible!

Just install the TeamCity plugin in IntelliJ and point it via the new menu entry to your local server.

This will show the build status of your projects, but it also allows you to run your local changes remotely as a personal build! This is not a new feature, but people tend to forget about it:



Install the plugin in IntelliJ

Restart and point it to the local TeamCity server
(we used "password" as password above)

And you can now remote run builds with local changes!

Even with not yet committed changes

Personal builds are marked with a nice additional icon. They are only visible to the user that created them.

If you create an additional user and re-login, you will not see other people's personal builds.


So we have:

  • our software in Kotlin
  • our Gradle build script in Kotlin
  • our CI configuration in Kotlin - able to share and re-use our build chains 
    (read the JetBrains blog above for more details on this)
  • and free of charge with this setup: remote runs


What a Kotlin world to live in!

Gradle build scripts have been written in a Groovy-based DSL for a long time. Although flexible, IDE support was always a bit of a problem. You either knew what to type, or you searched the docs or tried to find an answer on Stack Overflow. IDEs always struggled to provide help with writing tasks or configuring them.

For some time now, a Kotlin-based DSL has been in the works, and as of Gradle 5 it is available as 1.0. So, is it any better compared to what you can do with the Groovy-based DSL?

To get started, some reading of the documentation (later, after this blog post!) helps:

If you need to learn about Gradle in general, there are free online trainings available that I can highly recommend (from getting started with Gradle to advanced topics).


The example project created for this comparison is on GitHub and contains a simple Spring Boot application, also written in Kotlin, that spits out a Docker image. The master branch uses the Groovy DSL; the kts branch uses the new Kotlin DSL but does exactly the same.

Overview of the groovy build

The Groovy-based build script uses the new plugin syntax:


new plugin syntax
plugins {
  id "com.palantir.docker" version "0.22.1"
}

Instead of the old syntax which would look like this:


old plugin syntax
buildscript {
  repositories {
    maven {
      url "https://plugins.gradle.org/m2/"
    }
  }
  dependencies {
    classpath "gradle.plugin.com.palantir.gradle.docker:gradle-docker:0.22.1"
  }
}

apply plugin: "com.palantir.docker"

This will simplify the Kotlin script migration, as the Kotlin syntax is very similar to the new one.
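For comparison, the corresponding plugins block in the Kotlin DSL (build.gradle.kts) only differs in quoting and parentheses:

plugins {
    id("com.palantir.docker") version "0.22.1"
}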

Note on the new plugin syntax

There have been some issues with this new syntax when a Maven repository proxy (like Nexus or Artifactory) is used. But the Gradle plugin repository is available as a Maven repository as well, and as of Gradle 4.4.x plugins can be loaded via a repository proxy too (previously this only worked without authentication or with direct internet access - which is unlikely in an enterprise environment). So Gradle 4.4.x comes to the rescue! You can add your repository proxy to an init.d script and use the new plugin syntax.


$HOME/.gradle/init.d/repository.gradle
apply plugin: EnterpriseRepositoryPlugin

import org.gradle.util.GradleVersion

class EnterpriseRepositoryPlugin implements Plugin<Gradle> {

    private static String NEXUS_PUBLIC_URL = "https://<nexushostname.domain>/repository/public"

    void apply(Gradle gradle) {
        gradle.allprojects { project ->
            project.repositories {
                maven {
                    name "NexusPublic"
                    url NEXUS_PUBLIC_URL
                    credentials {
                       def env = System.getenv()
                       username "$env.NEXUS_USERNAME"
                       password "$env.NEXUS_PASSWORD"
                    }
                }
            }

            project.buildscript.repositories {
                maven {
                    name "NexusPublic"
                    url NEXUS_PUBLIC_URL
                    credentials {
                        def env = System.getenv()
                        username "$env.NEXUS_USERNAME"
                        password "$env.NEXUS_PASSWORD"
                    }
                }
            }
        }

        def referenceVersion = GradleVersion.version("4.4.1")
        def currentVersion = GradleVersion.current();
        if(currentVersion >= referenceVersion) {
            gradle.settingsEvaluated { settings ->
                settings.pluginManagement {
                    repositories {
                        maven {
                            url NEXUS_PUBLIC_URL
                            name "NexusPublic"
                            credentials {
                                def env = System.getenv()
                                username "$env.NEXUS_USERNAME"
                                password "$env.NEXUS_PASSWORD"
                            }
                        }
                    }
                }
            }
        } else {
            println "Gradle version is too low! UPGRADE REQUIRED! (below " + referenceVersion + "): " + gradle.gradleVersion
        }
    }
}



Other than that, the build does not contain anything unusual. There are task configurations and a custom task. The Spring Boot dependencies are added, and the test task is configured to measure test coverage using JaCoCo. The build uses the Palantir Docker plugin to create the Docker image. And there is a task that prints some log statements to tell a TeamCity server about the coverage results (this will not hurt on Jenkins or Bamboo).

The Docker plugin uses task rules to create some of its tasks, so it's configured via the extension class. There are several ways to do this; the variant used in the example also lets IntelliJ understand what task it is, so auto-completion works.

Overview of the kotlin build

The Kotlin DSL build script (in build.gradle.kts - the build script has a new file name!) uses a plugin syntax very similar to the Groovy one. It looks almost the same. Overall, the two build scripts look very similar if you compare them.

The way tasks are referenced or created changes slightly. Once you get used to it, it's fairly easy to use, and the IDE supports what you are doing!
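Two short examples of what that looks like - a sketch, not taken verbatim from the example project:

// configuring an existing task, with its type known to the IDE
tasks.named<Test>("test") {
    useJUnitPlatform()
}

// registering a new custom task
tasks.register("coverageHint") {
    doLast {
        println("coverage report is written to build/reports/jacoco")
    }
}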

Quirks and conclusion

As IDE support is currently limited to IntelliJ, we can only look at that. But if you were used to running Gradle builds on the command line anyway: the Gradle wrapper automatically recognizes the Kotlin build script and is capable of running it by default.

An improvement you notice almost immediately: auto-completion in build scripts suddenly makes sense! For some reason it is sometimes very slow to show up, but the suggestions made are way better than what you would see using the Groovy builds.

Yet IntelliJ will sometimes mark fields of plugins as not accessible - the build will work, it's just the IDE that complains. There are workarounds for some of the warnings.

The expected variant:


tasks.test {
    extensions.configure(JacocoTaskExtension::class.java) {
        destinationFile = file(jacocoExecFileName)
    }
}

But it produces an access warning. You can switch to using the setter instead:


tasks.test {
    extensions.configure(JacocoTaskExtension::class.java) {
        setDestinationFile(jacocoExecFileName)
    }
}

Not too nice, but not a showstopper - and it's unclear whether Gradle is to blame or whether it's an IntelliJ issue.

In other cases, hinting the task type helped to get better auto-completion:


tasks.withType(Test::class.java) {
}

The documentation on the Kotlin DSL gives some hints on how to help yourself.

In every case where IntelliJ complained, a workaround could be found. But these workarounds exist only to stop IntelliJ from complaining about the build script. Not ideal. But you can reach a state where IntelliJ does not mark any line with an error or warning! Compared to the random errors-and-warnings mess in the Groovy build scripts: way better.

Comparing the length of both scripts doesn't really show a clear winner; both have about the same length and structure. The tasks are often a few lines shorter, but the type declarations add an import statement on top. Overall this simplifies the migration and keeps the readability one got used to. I wished everything were just shorter and more expressive - but that probably was just a personal wish, and actually it is a bit unfair to the Groovy DSL, which is already good. The build scripts seem to initialize more slowly, but builds run at the same speed. And the way Gradle optimizes task execution, or determines whether task configuration needs to be loaded at all, did improve with Gradle 5 - so the speed penalty might not be there for you at all. So the way it looks today: quite good :-)

I have no concerns about using the Kotlin DSL in production builds, and the IntelliJ support is in a good enough state that you will not need to flip between the IDE and the command line all the time, if you don't like doing that.

Is it a migration?

The title states this was a Gradle-to-Gradle migration. But the resulting build scripts look very similar. So is it really one? I would say yes. It took me two attempts, with a couple of hours of searching around in the documentation and experimenting (as there are not that many examples around yet). Although the result does not look like much of a change, it took some effort to get there. But effort in the sense of hours to days - surely not weeks (and I'm not the most experienced Gradle or Kotlin user). Of course this may fall apart if a lot of plugins are used, or if they don't properly interact with the Gradle API in this version (as you will probably upgrade to Gradle 5.x from a 4.x version).

Hints for the hasty

The documentation linked at the top already contains this, but just in case you are a very hasty developer, here are some useful Gradle tasks in this context:


$ gradle kotlinDslAccessorsReport

prints the Kotlin code necessary to access the model elements contributed by all the applied plugins. The report provides both names and types.

You can then find out the type of a given task by running


$ gradle help --task <taskName>


Another important statement is in the migration guide in the configuring plugins section:


Keeping build scripts declarative

To get the most benefits of the Gradle Kotlin DSL you should strive to keep your build scripts declarative. The main thing to remember here is that in order to get type-safe accessors, plugins must be applied before the body of build scripts.



So if you are programming in Kotlin anyway, and you also use TeamCity and its Kotlin build DSL, you can now use Kotlin in your builds as well. Kotlin all the things!

Kotlin certainly has more momentum today than Groovy. The typed DSL solves some crucial handling issues in Gradle build scripts. I would guess the new DSL may become the default at some point - not that I'm aware of any timeline, it's just an assumption. So don't hurry; the Groovy DSL will be around for quite some time. But if you are starting with Gradle, I would try the Kotlin DSL from the beginning.


Give it a try! 


These are some notes that were taken when watching this video: https://www.youtube.com/watch?v=lE6Hxz4yomA


One pattern of the book is "be a Hands-on Modeller" (you have to have some contact with the ground level or you won't give good advice; stay up to date, stay sharp, keep learning things you can talk about).

Every effective DDD person is a Hands-on Modeller.


A lot of things are not exactly different from the book, but the emphasis is a little different.

What is (really) essential in the book?

  1. Creating creative collaboration of domain experts & software experts → the ubiquitous language pattern
    (you're not supposed to create that model just for yourselves)
  2. Exploration and experimentation
    The first useful model you encounter is unlikely to be the best one. When there are glitches and you start working around them, you're already frozen. → "blast it wide open", explore with the domain experts
  3. Emerging models shaping and reshaping the ubiquitous language
    (say things crisply; if it needs a complicated explanation, it is not ubiquitous), explore with the domain expert
  4. Explicit context boundaries (sadly in chapter 14; today it would be chapter 2 or 3)
    A statement in a language makes no sense when it's floating around; you can only guess the context it is in.
    Draw a context map! Should be done in every project!
  5. Focus on the core domain (sadly in chapter 15)
    Find the differentiator in your software: how is your software supposed to change the situation for the business you're in (we do not mean efficiency; something significant)

These are the things to focus on.

Building Blocks (chapter 5)

Our modelling paradigm is too general, we have objects and relations – this is just too broad. We need something that structures this a little more, puts things into categories, helps communicate the nature of your choices.

Services - Entities - Value objects
Repositories – Factories

  • They are important but overemphasized
  • But let's add another one anyway, as an important ingredient: Domain Events (interesting for the domain expert):
    The kind of event (important to the domain expert) where you want to record that something important happened in your domain. They have a consistent form:
    • They tend to happen at a certain time
    • They tend to have an associated person
    • They are typically immutable (you record that it happened, and that's it)
  • Domain Events give you some architectural options, especially for distributed systems
    (record events from different locations)
  • A consistent view on an entity ("runs" in a game reported from different locations) across a distributed system → an event-oriented view
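A sketch of such an event in Kotlin, using the "runs" example (names made up):

import java.time.Instant

// A domain event: an immutable record of something that happened in the domain.
data class RunScored(
    val gameId: String,
    val runs: Int,
    val reportedBy: String,   // events tend to have an associated person
    val occurredAt: Instant,  // events tend to happen at a certain time
)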


More options:

  • Decoupling subsystems with event streams (Design Decoupling)
    • Have a core transactional system and send out a stream of domain events
      (you can change the definitions and only need to maintain the stream of events)
  • Such distributed systems are inconsistent but well characterized
  • Have multiple models in a project that are each perfect for their purpose, say reporting and trading (of course you don't have to)

Another aspect of domain events is distributed systems: enabling high-performance systems (Greg Young).

Aggregates (super important)

  • People often ask how to access what's inside. But that's not the most important question.
  • Aggregates help to enforce the real rules
  • You have something you think of as a conceptual whole which is also made up of smaller parts and you have rules that apply to the whole thing.
  • Classic example: a purchase order, having a limit, an amount, line items that add up, ...
    but with thousands of line items, object orientation gets a little stuck
  • Beware of mimic objects (that carry data around but don't do anything useful)
    • "Where is the action taking place?"
  • Sometimes it might be useful to give aggregates more privileges, so they could execute a count query themselves.
  • Aggregate: we see it as a conceptual whole and we want it to be consistent
    • Consistency boundaries
      • Transactions
      • Distribution
        (you need to define what has to be consistent when crossing the boundaries)
      • Concurrency
    • Properties
    • Invariant rules
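A minimal sketch of the purchase order example in Kotlin - the aggregate root guards the rule for the whole, so the invariant cannot be broken from outside (names made up):

import java.math.BigDecimal

class PurchaseOrder(private val limit: BigDecimal) {

    private val lineItems = mutableListOf<BigDecimal>()

    val total: BigDecimal
        get() = lineItems.fold(BigDecimal.ZERO, BigDecimal::add)

    // the invariant of the conceptual whole: line items may never exceed the limit
    fun addLineItem(amount: BigDecimal) {
        require(total + amount <= limit) {
            "Adding $amount would exceed the purchase order limit of $limit"
        }
        lineItems += amount
    }
}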

Strategic design

  1. Context mapping
  2. Distillation of the core domain
  3. Large scale structure

Large scale structures do not come up that often.


Setting the stage

  • Don’t spread modelling too thin (“you need to know why modelling is done”)
    Modelling is an intensive activity, so the more people understand it the more value you gain
  • Focus on the core domain, find out what it is. Find the need for extreme clarity.
  • Clean, bounded context
  • iterative process
  • access to a domain expert


Context mapping

Context: the setting in which a word or statement appears that determines its meaning.

Bounded context: a description of the conditions under which a particular model applies.

Partners: two teams that are mutually dependent, forced into a cooperative relationship.


“Big Ball of Mud”: http://www.laputan.org/mud/  (the most successful architecture ever applied)


How to get out? Draw a context map around the big ball of mud. Build a small, correct module inside the ball - until eventually the ball of mud captures it. But you had that time to do it right. So think about an anti-corruption layer.

If you transfer a model into a different context, use a translation map:
model in context <--> translation map <--> model in context

Explain the meaning of something. Because meaning demands context.

Strategy

  • Draw a context map
  • Define core domain with business leadership
  • Design a platform that supports the core domain