I am trying to set up a better Go development environment and decided to give vim-go a try (which also resulted in me replacing Vundle with Pathogen, which is much more straightforward). Installing everything was a breeze and I only hit a problem when I tried to make tagbar work, because tagbar does not work with BSD ctags and requires Exuberant Ctags 5.5 or newer.

The simplest way to install Exuberant Ctags is with brew.

ondra@nb218 ~/Downloads/ctags-5.8
$ brew install ctags
Warning: You are using OS X 10.11.
We do not provide support for this pre-release version.
You may encounter build failures or other breakage.

However, brew still has problems running on 10.11 and many packages fail to build, ctags being no exception. So let’s see how we can deploy ctags into /usr/local without stepping on brew’s toes.

First of all, we need to determine the prefix to use via brew diy:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew diy --name=ctags --version=5.8
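brew diy prints the arguments to pass to the build system, most importantly a --prefix pointing into brew’s Cellar. As a minimal sketch of the convention it follows (the keg layout /usr/local/Cellar/&lt;name&gt;/&lt;version&gt; is Homebrew’s standard; this is an illustration, not brew’s actual implementation):

```shell
# Homebrew keeps each package in its own "keg": /usr/local/Cellar/<name>/<version>
NAME=ctags
VERSION=5.8
PREFIX="/usr/local/Cellar/${NAME}/${VERSION}"

# This is the prefix we will hand to ./configure below
echo "./configure --prefix=${PREFIX}"
```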

Now you can use brew fetch to download the source for ctags, or just download it from SourceForge. To get the source using brew, use:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew fetch --build-from-source ctags
==> Downloading https://downloads.sourceforge.net/ctags/ctags-5.8.tar.gz
Already downloaded: /Library/Caches/Homebrew/ctags-5.8.tar.gz
SHA1: 482da1ecd182ab39bbdc09f2f02c9fba8cd20030
SHA256: 0e44b45dcabe969e0bbbb11e30c246f81abe5d32012db37395eb57d66e9e99c7

==> Downloading https://gist.githubusercontent.com/naegelejd/9a0f3af61954ae5a77e7/raw/16d981a3d99628994ef0f73848b6beffc7
Already downloaded: /Library/Caches/Homebrew/ctags--patch-26d196a75fa73aae6a9041c1cb91aca2ad9d9c1de8192fce8cdc60e4aaadbcbb
SHA1: 24c96829dfdc58b215bfccf5445a409efba1ffe5
SHA256: 26d196a75fa73aae6a9041c1cb91aca2ad9d9c1de8192fce8cdc60e4aaadbcbb
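If you download the tarball manually instead, it is worth checking it against the SHA256 that brew printed above. A small sketch of such a check; the verify_sha256 helper and the demo file are mine, not part of brew (a real run would pass ctags-5.8.tar.gz and the digest shown above):

```shell
# Compare a file's SHA-256 digest against an expected value
verify_sha256() {
  local file="$1" expected="$2" actual
  # shasum ships with OS X; sha256sum is the Linux coreutils equivalent
  actual=$( (shasum -a 256 "$file" 2>/dev/null || sha256sum "$file") | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Demo with a throwaway file; the SHA-256 of the string "hello" is well known
printf 'hello' > /tmp/demo.txt
if verify_sha256 /tmp/demo.txt \
    2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
```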

Anyhow, once you have the source extracted, run ./configure with the prefix from brew diy, followed by make && make install:

ondra@nb218 ~/Downloads/ctags-5.8
$ ./configure --prefix=/usr/local/Cellar/ctags/5.8
Exuberant Ctags, version 5.8
Darwin 15.0.0 Darwin Kernel Version 15.0.0: Sun Jul 26 19:48:55 PDT 2015; root:xnu-3247.1.78~15/RELEASE_X86_64 x86_64
checking whether to install link to etags... no

Then link ctags into /usr/local using brew link:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew link ctags
Linking /usr/local/Cellar/ctags/5.8... 2 symlinks created

And you are done. Once brew’s build issues under 10.11 are fixed, simply brew unlink ctags and brew install as usual.

Coding some sequence processing in Clojure, I was wondering how efficient the test for sequence emptiness is. The first thing that comes to mind is:

 (when-not (empty? coll) …) 

Sometimes this leads to unreadable code, and The Joy of Clojure, for instance, recommends simply using the following pun:

 (when (seq coll) …) 

So we basically convert the collection into a sequence every time, leveraging the fact that seq returns nil for an empty collection, i.e. a falsey value.

The ancient C developer in me started screaming about the complexity of seq.

Well, let us see, what it really does:

(source seq)
(def ^{
   :arglists '(^clojure.lang.ISeq [coll])
   :doc "Returns a seq on the collection.
   If the collection is empty, returns nil.
   (seq nil) returns nil. seq also works
   on Strings, native Java arrays
   (of reference types) and any objects that
   implement Iterable."
   :tag clojure.lang.ISeq
   :added "1.0"
   :static true}
   seq (fn ^:static seq ^clojure.lang.ISeq [coll]
     (. clojure.lang.RT (seq coll))))

As with many functions in clojure.core, it is simply a wrapper over a method defined in Java. In this case, we are looking at the Java class clojure.lang.RT.

Aside from being almost a textbook example of lost type information and downcasting, this basically says that the performance depends heavily on the type of the collection we are trying to convert. In many cases, this is just a downcast, not a significant performance hit (we live in the Java world, right?). For some, the conversion seems linear (have a look at the RT.seqFrom() method). So I have written two test functions to see how big a hit the seq function is when it comes to Java arrays, for instance.

(defn hungry-sum1
  ([coll] (hungry-sum1 0 coll))
  ([s coll]
   (if (seq coll)
     (recur (+ s (first coll)) (rest coll))
     s)))

(defn hungry-sum2
  ([coll] (hungry-sum2 0 coll))
  ([s coll]
   (if (empty? coll)
     s
     (recur (+ s (first coll)) (rest coll)))))

(def test-data
(into-array (range 1000000)))

(defn test1 []
  (seq (repeatedly 1 #(hungry-sum1 test-data))))
(defn test2 []
  (seq (repeatedly 1 #(hungry-sum2 test-data))))
(println "Testing with seq for emptiness.")
(time (test1))
(println "Testing with empty? for emptiness.")
(time (test2))

When you load this into the Clojure REPL, you might get something like this:

user=> (load-file "seqloop.clj")
Testing with seq for emptiness.
"Elapsed time: 0.018768 msecs"
Testing with empty? for emptiness.
"Elapsed time: 0.01805 msecs"

This basically means the speed is the same. Definitely not something I would have expected from this code.

Let’s dig in:

(source empty?)
(defn empty?
  "Returns true if coll has no items -
   same as (not (seq coll)).
  Please use the idiom (seq x) rather
  than (not (empty? x))"
  {:added "1.0"
   :static true}
  [coll] (not (seq coll)))

Surprise!!! Well, let’s just say that this is where I should have started in the first place :-//. I am going to play with this a bit and will get back, hopefully with some faster way to test for collection emptiness. I am still not sure I like how Clojure treats sequences.

And yes, I know I should have read the documentation first ;-).

I had the honor to open the GeeCON Prague conference with a short keynote. I spent several months thinking about appropriate topics, as I wanted to express the reasons and motivation why we partnered with the GeeCON team and cooperated to make this happen. Now that the conference is over and we all feel positive about it, my colleagues have asked me to share the keynote slides with them. I feel that the slides are not very comprehensive on their own, so I am writing this short post to explain what was on my mind and what message I tried to convey.

Two years ago, when we started to look around for interesting groups, projects and events within the developer community to support and work with, we realized that there was no conference for Java developers in the Czech Republic and there hadn’t been one for at least 8 years. The last such event was probably the Java Days organized by Sun in 2006. Anyway, we set out to Krakow with a simple mission: bring GeeCON to the Czech Republic within two years. Mission accomplished. It was fun and a learning experience, I met lots of great people and I am simply happy that I had the opportunity to work with them. So let’s dig into the keynote…

GeeCON in Prague

We met for two days in Prague, with 42 speakers giving talks in 3 to 4 parallel tracks, more than a dozen partners and almost 500 participants. Two days packed with information about Java, the JVM and related tools and technologies.

In 2013, we started to look around for events, communities and organizations to cooperate with. Cooperation with the community is important for any public company and in our case, it is about several things. First of all, any such cooperation gives you the much needed perspective on yourself. It also gives you the opportunity to give something back and to bring something new to your work. For us, it is also about presenting Y Soft and showing the public what we are doing. When we started in 2013, we realized that there was no conference for serious Java developers and we set out on a mission to bring one to the Czech Republic. How it all came to this end is perhaps a topic for another post :-).

And so we were there, and I used this opportunity to think out loud about how the developer community could and perhaps should work.

Have you ever wondered why some communities work and some don’t? The key concepts are, in my opinion, contribution and a sense of ownership. You probably think that this is too obvious and trivial a thought, so let’s elaborate.

One of the key traits of Silicon Valley is the notion of paying it forward. This means that everybody tries to help others without expecting an immediate return. Help is seen as a long term investment – you do something for somebody now and somebody else will help you when you need it. The most fascinating part is that this really works, and not only in the Valley.

When you create something, you own it, but at some point you need to let it go and open it up, so that others can contribute. And whenever you do this, you transcend yourself through your work and let others share in your ownership.

All contributions do count – no matter how big or small they are. You can do something as small as attending a meetup or joining in a public discussion.

Y Soft is a proud contributor and we proudly share the responsibility for the state of the developer community here in the Czech Republic. We are also a proud contributor to GeeCON, having been a Platinum Partner in 2013 and 2014. We run a plethora of other projects, such as the Y Soft Technology Hour.

I would like you to think about your contribution. It does not matter whether you do something small or big. But it makes sense to be serious about it, because we all share the responsibility for the developer community in the Czech Republic.

The complete slides to my keynote are available at slideshare.net.


User Stories seem to be my favourite topic these days…

…the idea behind this post started with a seemingly innocent user story like this one:

I, as an End User, want to authenticate by PIN at the MFD, so that my documents are secure.

Today, I want to focus on one particular thing and that is stakeholder value.

The value to the user, according to this story, is security, and the desired function is authentication. Long story short, I do not know that many users who would require authentication! Authentication (and authorization, for that matter) is a solution which helps us achieve something else, like data confidentiality and non-repudiation.

The real value the user desires is data confidentiality. Non-repudiation is usually the desired value of IT administrators or security officers, who own the security policy of the organization in question. So what is the function of the system the user needs? Or does the user really care?

The function the user actually needs is some kind of protection which makes the user confident that it is not easy, or even possible, for others to retrieve their documents.

So let’s start with something completely different…

I, as an End User, need the system to exhibit protection of my documents, so that I can trust their confidentiality.

Here is the message I can take from such a user story as a developer:

  • There is a stakeholder called End User.
  • The user wants the system to exhibit protection, meaning that the system should not only protect the documents, but also demonstrate that it is protecting them.
  • The user values the trust which the system builds and maintains and the fact that the trust can be put in confidentiality of user documents.

But what happened to our authentication? There are two other stakeholders who actually value authentication. As mentioned above, stakeholders internal to the customer, who govern the security policy, may have specific requirements on authentication. In this case, it is intentional design, as it stems from constraints imposed by the customer environment. These constraints can and perhaps should be challenged, but never ignored.

In such a case, we work with another stakeholder and with a different user story:

I, as a Security Officer, want the End Users to have to authenticate by PIN at the MFD, so that we can trace each action at the MFD to a specific End User.

Putting these two user stories together brings us to authentication and much more. The user experience we are delivering has to build trust between the user and the system.

Another option is to look at authentication (by PIN) as a solution to our security problem. There are other ways to maintain data confidentiality, so we might put data confidentiality in the position of the required function.

I, as an End User, want the system to control access to my documents in a way visible to me, so that I can trust that my documents remain confidential.

This might be one possible description of data confidentiality, or rather of the trust in data confidentiality, in the form of a user story. Again, no unintentional design. In this case, we leave more room for innovation, as we deliver on values with no design constraints.

And I will talk about constraints in one of my next posts.

In my previous post, I started elaborating a simple user story about Embedded Terminal application deployment. There we focused on the middle part of the user story, about what the Administrator (the actor) wants. At the end, I started elaborating the last part, i.e. the benefit or, better said, the quality we want to achieve.

I sincerely hope it does not strike you as a controversial idea that user stories are all about quality. But I had always been puzzled how to connect such seemingly different things as user / stakeholder intents and measurable qualities. Until, one day, Tom Gilb (@imtomgilb) explained all of that.

First of all, let us repeat the user story:

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD, so that I save time.

In the previous post, I asked whether saving time is the quality we are really looking for. The Administrator might need to work with the system in different contexts. On one hand, we have an Administrator who needs to save time, since he takes care of a small environment and has many things to focus on at once. On the other hand, we have a team trying to prepare thousands of printers for thousands of end users, willing to trade off a little extra time for reliability, as long as they don’t have to work with one machine at a time.

…so that I save time.

So we are dealing with a complex quality here and we need to decompose it. Let us start by putting together a list of aspects of the quality the Administrators are looking for (the nomenclature is not important, as long as we can agree on common naming):

  • Time or Degree of Automation per Device, i.e. time we need to spend on each device in our fleet compared to the total time we need to prepare the environment for the end users.
  • Reliability, i.e. the probability with which a particular device of the fleet fails to get prepared despite Administrators doing everything right.
  • Robustness, i.e. the probability that the process and the tools we have provided the Administrators with work correctly, meaning they deliver the results they should while coping with whatever problems (such as device misconfigurations or differences between firmware versions) can be expected (were experienced in the past, are documented, or are not guaranteed by the vendor).
  • Repeatability, i.e. how difficult it is (in terms of manual steps) to repeat the process in case of failure, to potentially fix that failure (such as by turning on a device which had been turned off and thus could not be prepared properly).

For each quality, we can establish three levels: goal, tolerable and past. The most important thing for us is to elaborate on the tolerable level, and also to prepare measurements (please note that all the qualities mentioned above can be measured) and measure the past, i.e. the current state of the art of our product.

The goal shall be elaborated as a big enough improvement over the current state, balanced against the tolerable level. Tolerable simply means that if we get below this point (such as Reliability below 70%), the user story does not exist as implemented, since we have failed to deliver on the stakeholder (the Administrator, in this case) value.

We have decomposed the value the Administrator needs to receive from the product, but how do we put all this into the user story?
We started with time, but it now seems that the overall quality the Administrators are looking for is connected not only with time and effort, but also with the risk of the MFD not being prepared for the end users. That sounds too general, though, as we would be dealing with all sorts of risks; the qualities we are after are all about doing the deployment quickly, minimizing failures and recovering from them as fast as possible with a minimum of manual steps.

So let’s move forward with our user story…

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD, so that I save time deploying the system and recovering from failures.

Please note that we are still avoiding unintentional design, as we are not saying what needs to be done or how the deployment or the recovery is performed.

Our user story is far from complete… next time, I will elaborate on how to connect qualities with user stories and what is the value of tests in this matter.

This approach to quality is inspired by the Evolutionary Project Management and Competitive Engineering techniques put together by Tom and Kai Gilb (www.gilb.com). It is not easy, but it is elegant in its simplicity, and beautiful.


One of our internal projects at Y Soft R&D, carried out by the Lead Developers, is to prepare and maintain internal teaching materials. We have only recently started the project and our first goal is to prepare trainings and drills for the baseline level, to establish the basic skills each and every developer at Y Soft needs to have. Parts of it are also relevant for other colleagues, such as Solution Architects, since they also write and test customizations.

Our baseline level is modeled around the 4 Rules of Simple Design and we are now working on the first part, which covers unit testing. We spent a whole day preparing simple rules to help developers write good unit tests, defining what we call Unit Test Patterns.

There are four patterns we have defined:

  • Referentially Transparent Contract
  • Non-Referentially Transparent Contract
  • State Inspection
  • Side Effects Inspection

The first two are modeled after classic TDD, while the latter two are modeled after the London school of TDD (What’s the difference?).

We are now preparing the coursework, guidelines and exercises to stick to the following outline.

  1. Unit Testing Trivia. The AAA / GWT principle.
  2. Using frameworks and tools to write / run unit tests in Java and C#.
  3. Unit Test Patterns. The Decision Flow or How do I know which pattern to use?
  4. Writing Tests using the RT Contract pattern
  5. Writing Tests using the non-RT Contract pattern
  6. Writing Tests using the State Inspection Pattern
  7. Writing Tests using the Side Effects Inspection Pattern

We want to produce internal webcasts for the first three and koan-like exercises for numbers 4 – 7, essentially providing contracts to write tests against and evaluating the tests by executing them on several purposely flawed implementations.

The purpose of this is not to reinvent the wheel, but to provide a simple, easy to use framework to help design and write unit tests.

Over the years, I have accumulated several acronyms which I believe capture the essence of various disciplines done right. Here are some picks from my list…


You cannot ignore the SOLID principles if you are serious about object oriented design. While this might seem obsolete, it is now more relevant than ever. One word of advice: take SOLID as a whole, do not choose one principle over another. The principles are not meant to be applied in isolation.

So what does SOLID stand for?

  • Single Responsibility Principle
  • Open / Closed Principle (open for extension, closed for modification)
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion


GRASP is a collection of patterns used in OOD. While I find it rather artificial, I still use it to remind myself of the different aspects of OOD I should take into account.

GRASP stands for General Responsibility Assignment Software Patterns, which is somewhat lame. What it really stands for are the following patterns or notions:

  • Controller
  • Creator
  • Indirection
  • Information Expert
  • High Cohesion
  • Low Coupling
  • Polymorphism
  • Protected Variations
  • Pure Fabrication

One day, I will write about what those really mean to me.


STRIDE is a recent addition to my collection, but I quite like it. It is the threat model invented at Microsoft as part of their Secure Development Lifecycle framework.

STRIDE names the categories of threats developers should take into account when capturing potential threats:

  • Spoofing Identity
  • Tampering with Data
  • Repudiation
  • Information Disclosure
  • Denial of Service
  • Elevation of Privilege


DIE. Simply put: Duplication Is Evil.

User Stories, Epics, Themes, Agile Use Cases, Behaviors… whatever helps us capture user motivation, need and benefit is useful and an improvement over bloated analyses and specifications, which (and many developers do not notice this) just steal the creativity out of our work. I am not going to write another post about the benefits of this; instead, I am going to elaborate on one particular user story we have encountered which was not done right.

Before I do that, I need to introduce the notions of unintentional and intentional design. Intentional design stems from constraints imposed by the customer environment, market conditions, etc., and is best captured in constraints and conditions. However, sometimes it may find its way into user stories without defeating their purpose. Unintentional design is quite a different story, and while avoiding it may seem obvious, it is far from it.

Before we delve into the story, let’s provide some background. The SafeQ application has components which run on MFDs (Multi-Function Devices or Multi-Function Printers). You all know them as those rather big devices which can do copying, scanning and printing, and which these days can also run third party applications. In SafeQ, we call these applications Embedded Terminal applications. Before they can be used, they require some kind of installation or deployment.

And here comes our user story:

As an administrator, I want to automatically deploy Embedded Terminal application to the MFD, so that I save time.

Let’s delve into it…

As an administrator…

The Administrator is the person who performs the deployment and the maintenance of the system. Nothing really misleading about this yet.

…I want to automatically deploy Embedded Terminal application to the MFD…

Which is the unintentional design. Automated deployment of the ET application is part of our feature set, but it can be done terribly wrong if the developers focus on the automation and not on the essential part, which is the difficulty and the sensitivity to human error.

So, let us elaborate…

As an Administrator, I want to eliminate all manual steps required to deploy Embedded Terminal application to the MFD…

This switches our focus from automation to something more important… the number of manual steps (and the implied complexity) of deploying the Embedded Terminal application. But we still suffer from unintentional design: deployment. What is deployment? Even though we have this term defined in our glossary, it is still an unclear and ambiguous word for the developers. When does the deployment start and when does it end?

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD…

And we have come up with an intent which is free of design… the purpose of the whole activity is to enable users to access SafeQ features. By no means ideal, but much better at expressing the purpose and avoiding design.

…so that I save time.

It is difficult to understand what saving time really means. Does it mean that we need to save time when preparing the MFD, even at the cost of difficult troubleshooting later? Is saving time really the quality we are looking for?

Look at it from the perspective of an Administrator who needs to prepare thousands of MFDs for thousands of users (a scenario quite common for our customers). Does he care about his time most? From the perspective of having to deploy one machine at a time, he does. But he also cares about the readiness of the devices. Would he prefer to trade off some of his time to increase the reliability of the environment? Ask them, and they will tell you: “by all means”.

We are looking at a complex quality here… time, reliability, failing fast. So our user story is not complete yet, but I will focus on this topic in my next post.