User Stories seem to be my favourite topic these days…

…the idea behind this post started with a seemingly innocent user story, like this one:

I, as an End User, want to authenticate by PIN at the MFD, so that my documents are secure.

Today, I want to focus on one particular thing and that is stakeholder value.

The value to the user, according to this story, is security, and the desired function is authentication. Long story short, I do not know many users who would require authentication! Authentication (and authorization, for that matter) is a solution which helps us achieve something else, something like data confidentiality and non-repudiation.

The real value the user desires is data confidentiality. Non-repudiation is usually the desired value of IT administrators or security officers who own the security policy of the organization in question. So what is the function of the system the user needs? Or does the user really care?

The function the user actually needs is some kind of protection which gives the user confidence that retrieving their documents is difficult or outright impossible.

So let’s start with something completely different…

I, as an End User, need the system to exhibit protection of my documents, so that I can trust their confidentiality.

What message can I take from such a user story as a developer:

  • There is a stakeholder called End User.
  • The user wants the system to exhibit protection, meaning that the system should not only protect the documents, but also demonstrate that it is protecting them.
  • The user values the trust which the system builds and maintains, and the fact that this trust can be placed in the confidentiality of their documents.

But what happened to our authentication? There are two other stakeholders who actually value authentication. As mentioned above, stakeholders internal to the customer who govern the security policy may have specific requirements on authentication. In this case, it is intentional design, as it stems from constraints imposed by the customer environment. These constraints can, and perhaps should, be challenged, but never ignored.

In that case, we work with another stakeholder and a different user story:

I, as a Security Officer, want the End Users to have to authenticate by PIN at the MFD, so that we can trace each action at the MFD to a specific End User.

Putting these two user stories together brings us to authentication and much more. The user experience we are delivering has to build trust between the user and the system.

Another option is to look at authentication (by PIN) as a solution to our security problem. There are other ways to maintain data confidentiality, so we might put data confidentiality in the position of the required function.

I, as an End User, want the system to control access to my documents in a way visible to me, so that I can trust that my documents remain confidential.

This might be one of the possible descriptions of data confidentiality in the form of a user story or rather the trust in data confidentiality. Again, no unintentional design. In this case, we are leaving more room for innovation as we are delivering on values with no design constraints.

And I will talk about constraints in one of my next posts.

Update: added information about IntelliJ IDEA 14

Java is no longer part of the Mac OS default installation. When you want to start IDEA on a new Mac OS Yosemite, you’ll get this nice message:


You have two options: install the legacy Java 6 from Apple, or install the new Java 8 from Oracle.

IntelliJ IDEA 13

In case of Java 8, just open the file “/Applications/IntelliJ IDEA” and change JVMVersion from 1.6* to 1.8*:

sudo vim "/Applications/IntelliJ IDEA"

IntelliJ IDEA 14

Open the file “/Applications/IntelliJ IDEA” and change JVMVersion from 1.6* to 1.8*:

sudo vim "/Applications/IntelliJ IDEA"


Click the IntelliJ IDEA icon and enjoy your IDE.


You can find more info at JetBrains Support forum.

Do you use git? Then you probably know the basic commands like git pull, git merge and git rebase. These are pretty common, but also complex. Over time, I have adopted a few simple rules which help me use them effectively.

Git pull considered harmful

You have probably noticed that sometimes your git pull generates automatic merge commits. They do not hold any useful information, and if your repo is busy, your history can be literally flooded with them.

I like to keep my history linear and simple. That is why I recommend setting the default git pull configuration to fast-forward only (a linear type of merge which does not create any merge commits).

With Git 2.0 and newer, you can just update your settings:

git config --global pull.ff only

Older versions do not have this option, but you can use an alias instead:

git config --global alias.up '!git remote update -p; git merge --ff-only @{u}'

“@{u}” is a shortcut which refers to the upstream of the tracked branch.


git up
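To see what the fast-forward-only setting buys you, here is a minimal sketch in a throwaway repository (all paths, file names and commit messages are made up for illustration): a branch that is strictly ahead fast-forwards without creating any commit, while diverged histories are refused instead of silently auto-merged.

```shell
# Sandbox demo: fast-forward moves the pointer; diverged histories are refused.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo
git symbolic-ref HEAD refs/heads/main   # fix the branch name regardless of git version

echo base > base.txt && git add base.txt && git commit -qm base

# A branch strictly ahead of main: --ff-only succeeds, no merge commit appears.
git checkout -qb feature
echo feat > feat.txt && git add feat.txt && git commit -qm feat
git checkout -q main
git merge -q --ff-only feature

# Now let the histories diverge: --ff-only refuses instead of auto-merging.
git checkout -q feature
echo more > more.txt && git add more.txt && git commit -qm more
git checkout -q main
echo local > local.txt && git add local.txt && git commit -qm local
if git merge -q --ff-only feature 2>/dev/null; then
  ff_refused=no
else
  ff_refused=yes
fi
echo "refused: $ff_refused"
```

The refusal is exactly the point: it forces you to decide consciously between a rebase and a real three-way merge, instead of letting git pull pick for you.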

Rebasing non-linear changes

When the fast-forward merge is not possible, the default git pull behavior would be a three-way merge. But in order to keep the history clean, rebase can be used.

You might have heard that rebase is evil (git rebase hell), but as long as your rebased commits are only local, you should be safe.

Again, I recommend an alias:

git config --global alias.upr '!git remote update -p; git rebase -p @{u}'


git upr

The git pull --rebase does a similar thing, but it currently doesn’t have an option to rebase merge commits, so you can end up with a slightly different history. git rebase -p will try to replay merge commits and preserve your history.
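A minimal sandbox sketch (temporary paths, made-up file names) of what rebasing buys you: after main and a topic branch diverge, rebasing the topic branch replays its commits on top of main, so the history stays linear with no merge commits.

```shell
# Sandbox demo: rebase replays diverged local commits, keeping history linear.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo
git symbolic-ref HEAD refs/heads/main

echo base > base.txt && git add base.txt && git commit -qm base

# Diverge: one commit on a topic branch, one on main.
git checkout -qb topic
echo topic > topic.txt && git add topic.txt && git commit -qm topic
git checkout -q main
echo main > main.txt && git add main.txt && git commit -qm mainwork

# Replay the topic commit on top of main; a merge here would have created a commit.
git checkout -q topic
git rebase -q main
git log --pretty=%s    # linear history: topic, mainwork, base
```

In a real repository you would rebase onto the upstream (the “@{u}” alias above) rather than a local main, but the effect on the history is the same.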

Clean up your commits before publishing

Before you push your commits to the repository, it’s good to revise them. You can run the git interactive rebase to squash your commits or modify commit messages.

An alias for the interactive rebase of unpushed commits:

git config --global alias.ri '!git rebase -i @{u}'


git ri

The goal is to publish only clean and relevant commits, no experiments or typo fixes. Therefore, I usually do the interactive rebase and push at the end of my programming sessions (typically at the end of the day), so I see all the commits in one place.
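Interactive rebase normally opens an editor, which is hard to show in a post, but you can script that editor to see the effect. The sketch below (throwaway repository, made-up commit messages) collapses three work-in-progress commits into one by rewriting every “pick” after the first into “fixup”; it rebases from --root instead of @{u} only because the sandbox has no upstream.

```shell
# Sandbox demo: squash three WIP commits into one with a scripted todo editor.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo
git symbolic-ref HEAD refs/heads/main

for i in 1 2 3; do
  echo "$i" >> notes.txt && git add notes.txt && git commit -qm "wip $i"
done

# Rewrite the rebase todo list: keep the first "pick", fixup the rest.
GIT_SEQUENCE_EDITOR='sed -i.bak -e "2,\$s/^pick/fixup/"' \
  git rebase -q -i --root

git log --pretty=%s    # a single commit remains: wip 1
```

“fixup” (unlike “squash”) discards the folded-in messages, so no commit-message editor opens and the demo runs unattended.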

Keep in mind that interactive rebase does not preserve merge commits (combining interactive rebase with preserved merges is not recommended), so you might want to do it before you merge in a new branch.

Three-way merge between branches

Three-way merge is a non-linear merge with conflict resolution. It should always be used when merging non-local branches:

  • Merging a feature branch into master.
  • Merging commits from master into the feature branch you are working on.

This type of merge generates a merge commit, which serves as a source of information about the merge (branches, commits, time and date, responsible user).

How to force a three-way merge:

git merge --no-ff the_other_branch

Replace “the_other_branch” with the name of the source branch you want to merge into your current branch.
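A quick sandbox sketch (temporary paths, illustrative names) of what --no-ff does: even when the feature branch is strictly ahead and a fast-forward would be possible, the merge still produces a merge commit with two parents.

```shell
# Sandbox demo: --no-ff records a merge commit even where fast-forward is possible.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo
git symbolic-ref HEAD refs/heads/main

echo base > base.txt && git add base.txt && git commit -qm base
git checkout -qb feature
echo feat > feat.txt && git add feat.txt && git commit -qm feat

# The feature branch is strictly ahead, yet --no-ff still creates a merge commit.
git checkout -q main
git merge -q --no-ff -m "merge feature" feature

git log -1 --pretty=%P    # the merge commit lists two parent hashes
```

That two-parent commit is exactly the record of the merge (branches, time, author) described above.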


To sum it up:

  • Disable automatic three-way merges with git pull.
  • If you cannot pull with a fast-forward merge and your commits are only local, run the rebase with preserved merges enabled.
  • Use a three-way merge when merging between branches or when updating your feature branch.
  • Before pushing, revise your commits and clean them up with an interactive rebase if needed.


Never rebase any pushed or pulled changes (neither branches nor commits).


In my previous post, I started the elaboration of a simple user story about Embedded Terminal Application deployment. There, we focused on the middle part of the user story: what the Administrator (the actor) wants. At the end, I started elaborating the last part, i.e. what is the benefit or, better said, the quality we want to achieve.

I sincerely hope that it does not strike you as a controversial idea that user stories are all about quality. But I have always been puzzled about how to connect such seemingly different things as user / stakeholder intents and measurable qualities. Until, one day, Tom Gilb (@imtomgilb) explained all of that.

First of all, let us repeat the user story:

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD, so that I save time.

In the previous post, I asked the question whether saving time is the quality we are really looking for. The Administrator might need to work with the system in different contexts. On one hand, we have an Administrator who needs to save time, since he takes care of a small environment and has many things to focus on at once. On the other hand, we have a team trying to prepare thousands of printers for thousands of end users, willing to trade off a little extra time for reliability, as long as they don’t have to work with one machine at a time.

…so that I save time.

So we are dealing with a complex quality here and we need to decompose it. Let us start by putting together a list of aspects of the quality the Administrators are looking for (nomenclature is not important, as long as we can agree on common naming):

  • Time or Degree of Automation per Device, i.e. the time we need to spend on each device in our fleet compared to the total time we need to prepare the environment for the end users.
  • Reliability, i.e. the probability that a particular device of the fleet fails to get prepared despite the Administrators doing everything right.
  • Robustness, i.e. the probability that the process and the tools we have provided the Administrators with work correctly, meaning they deliver the results they should while coping with whatever problems can be expected (such as device misconfigurations or differences between firmware versions), whether experienced in the past, documented, or not guaranteed by the vendor.
  • Repeatability, i.e. how difficult (in terms of manual steps) it is to repeat the process after a failure in order to fix it (such as by turning on a device which had been turned off and thus could not be prepared properly).

For each quality, we can establish three levels: goal, tolerable and past. The most important thing for us is to elaborate on tolerable, and also to prepare measurements (please note that all the qualities mentioned above can be measured) and measure the past, i.e. the current state of the art of our product.

The goal shall be elaborated as a big enough improvement over the current state, balanced against the tolerable level. Tolerable simply means that if we get below this point (such as Reliability below 70%), the user story does not exist as implemented, since we have failed to deliver on the stakeholder (Administrator in this case) value.

We have decomposed the value the Administrator needs to receive from the product, but how do we put all this into the user story?
We started with time, but it now seems that the overall quality the Administrators are looking for is connected not only with time and effort, but also with the risk of the MFD not being prepared for the end users. That sounds too general, though, as we would be dealing with all sorts of risks; the qualities we are after are all about doing the deployment quickly, being able to minimize failures, and recovering from them as fast as possible with a minimum of manual steps.

So let’s move forward with our user story…

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD, so that I save time deploying the system and recovering from failures.

Please note that we are still avoiding unintentional design as we are not saying what needs to be done or how the deployment or the recovery is done.

Our user story is far from complete… next time, I will elaborate on how to connect qualities with user stories and what is the value of tests in this matter.

This approach to quality is inspired by the Evolutionary Project Management and Competitive Engineering techniques put together by Tom and Kai Gilb. It is not easy, but it is elegant in its simplicity and beautiful.