A major feature of Gradle is its extensibility. A developer can store common logic in a custom task class.

class GreetingTask extends DefaultTask {
    String greeting = 'hello from Y Soft'

    @TaskAction
    def greet() {
        println greeting
    }
}

task hello(type: GreetingTask)

// Customize the greeting
task greeting(type: GreetingTask) {
    greeting = 'greetings from Brno'
}

This approach is not very flexible, though: all the classes and the build logic live together in one build.gradle file.

It is possible to move the classes into separate Groovy files under buildSrc. Here is a description of the transformation process.

Step 1. Create the directory buildSrc/src/main/groovy/PACKAGE, e.g. buildSrc/src/main/groovy/com/ysoft/greeting.

Step 2. Move the custom class from build.gradle to GreetingTask.groovy in the buildSrc/…/greeting directory.

Step 3. Add a package declaration and imports from the Gradle API to GreetingTask.groovy.

package com.ysoft.greeting

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class GreetingTask extends DefaultTask {
    String greeting = 'hello from Y Soft'

    @TaskAction
    def greet() {
        println greeting
    }
}

Step 4. Update build.gradle: apply the Groovy plugin and import the custom class.

apply plugin: 'groovy'

import com.ysoft.greeting.GreetingTask

task hello(type: GreetingTask)

task greeting(type: GreetingTask) {
    greeting = 'greetings from Brno'
}

Alternatively, you can use the fully qualified class name when specifying the task type; in that case you can omit the import.
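
For example, the hello task could then be declared like this:

task hello(type: com.ysoft.greeting.GreetingTask)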

Step 5. Type ‘gradle tasks’ and enjoy.

You can find examples of custom tasks on our GitHub.

Probably every programmer knows the switch-case keywords. They are often used to convert data, e.g. to map a string from another (sub)system to your enum. While working with them, I found two patterns I consider best practices.

The first one uses return outright. Its code is short and elegant:

Gender transformGender(String gender) {
    switch(gender) {
        case "M": return Gender.MALE;
        case "F": return Gender.FEMALE;
        case "I": return Gender.INTERSEX;
        default:  return Gender.UNKNOWN;
    }
}

However, some might argue it doesn’t follow the single-exit principle, and if you later need to add e.g. logging before returning, it gets complicated. Since copy-pasting the added code into every branch would defeat the purpose, a different pattern should be used. I found one:

Gender transformGender(String gender) {
    final Gender result;
    switch(gender) {
        case "M": result = Gender.MALE;
                  break;
        case "F": result = Gender.FEMALE;
                  break;
        case "I": result = Gender.INTERSEX;
                  break;
        default:  result = Gender.UNKNOWN;
    }
    return result;
}

Obviously, the price for the single exit is longer code (the breaks, storing the value). However, it still keeps one advantage of the previous pattern (one the basic Java tutorials don’t teach): thanks to the final keyword, you can be sure your result is not modified. If you fail to assign the value exactly once, the compiler warns you instantly. It also makes it hard to write messy code with a preset value and no default branch.

I was curious whether the pattern can be used in C#, but it seems it can’t. The final keyword has no equivalent in C#, and readonly does not really work the same way: readonly states that a field can only be written in a constructor or in its declaration, while final means that the field or variable can be written exactly once.
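
To illustrate the difference, here is a minimal C# sketch (the Gender enum is assumed to exist on the C# side as well; the class is purely illustrative):

enum Gender { Male, Female, Intersex, Unknown }

class GenderHolder
{
    private readonly Gender gender;        // readonly field

    public GenderHolder(Gender value)
    {
        gender = value;                    // OK: assignment in a constructor
    }

    public void Update(Gender value)
    {
        // gender = value;                 // compile error: a readonly field can only be
        //                                 // assigned in a constructor or an initializer
        // readonly Gender local = value;  // compile error: local variables cannot be readonly
    }
}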

Qt Installer Framework is a fairly new framework which is still under development. The current version contains a set of tools and utilities to create installers. Its most significant feature is that the framework itself is multiplatform: it supports Windows, Mac OS X and Linux.

Another great feature of the Qt Installer Framework is that it can download the required files from a server, so the files do not have to be shipped with the installer itself. It works with so-called “repositories”. Thanks to this it can also update the files without the user having to download the installer again, because it creates a maintenance tool which can update or uninstall the files. However, the documentation is not perfect; it is missing a lot of details.

The framework is an open source project. It is built on top of Qt (it requires a static build of Qt).

The framework is really easy to use. For the basic features, the user only needs to know how to work with XML. For advanced features, knowledge of JavaScript (QtScript) is required (C++ might be needed for the most advanced features).

Usage

The whole installer creation process starts with creating the required folder structure.


There are two main folders: the config folder and the packages folder.
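
For illustration, with a single component the structure might look roughly like this (the component name com.ysoft.safeq.client is just an example):

config/
    config.xml
packages/
    com.ysoft.safeq.client/
        data/
        meta/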

Configuration

The config folder contains the installer configuration file (config.xml) and images used in the installer. There are various things that can be configured; see Supported Configuration Settings in the documentation. This is an example of what the configuration file could look like:

<?xml version="1.0" encoding="UTF-8"?>
<Installer>
 <Name>YSoft SafeQ Mac Client</Name>
 <Version>4.3.0</Version>
 <Title>YSoft SafeQ Mac Client Installer</Title>
 <Publisher>Y Soft</Publisher>
 <ProductUrl>http://www.ysoft.com</ProductUrl>
 <TargetDir>@HomeDir@/YSoft</TargetDir>
 <AllowSpaceInPath>true</AllowSpaceInPath>
 <InstallerApplicationIcon>ysoft_96</InstallerApplicationIcon>
 <InstallerWindowIcon>ysoft_96_32x32</InstallerWindowIcon>
 <Watermark>modern-wizard.bmp</Watermark>
 <WizardStyle>Aero</WizardStyle>
</Installer>
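
As mentioned above, the installer can also download components from online repositories. A repository can be referenced from config.xml roughly like this (the URL is a placeholder; see the Supported Configuration Settings page for the exact syntax):

<RemoteRepositories>
 <Repository>
  <Url>http://your.server.com/repository</Url>
 </Repository>
</RemoteRepositories>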

Packages

The second folder is the packages folder. It contains all the components that will be installed by the installer. Every component consists of two folders, the data folder and the meta folder.

The data folder contains all the files that will be installed on the target machine. The installer archives all data into the 7z format and extracts it during installation.

The meta folder contains the configuration file (package.xml) for the component and the installation script that is called when the component is loaded.

This is an example of a component configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<Package>
 <DisplayName>CUPS Backend</DisplayName>
 <Description>CUPS Backend</Description>
 <Version>4.3.0</Version>
 <ReleaseDate>2015-01-23</ReleaseDate>
 <Default>true</Default>
 <Script>installscript.js</Script>
 <ForcedInstallation>true</ForcedInstallation>
 <RequiresAdminRights>true</RequiresAdminRights>
</Package>

and here is the full list of possible configuration values – Summary of Package Information File Settings.

The next file is the installation script (installscript.js). The script is called when the installer is executed and the component is loaded. The script can add new installer wizard pages, prompt the user for a custom installation path for the component, etc. This is an example of a script that extracts the component to the /tmp folder and moves it to /Applications. Then it adds a new item (a log out checkbox) to the final page of the installer (the page items or pages have to be designed in Qt Designer).

function Component(){
	//Connect signals to functions
	component.loaded.connect(this, componentLoaded);
	installer.finishButtonClicked.connect(this, finishClicked);
}

Component.prototype.createOperationsForArchive = function(archive){
	//Extract and move .app file
	component.addOperation("Extract", archive, "/tmp");
	component.addElevatedOperation("Execute","mv", "/tmp/YSoft\ SafeQ\ Client.app", "/Applications", 
								   "UNDOEXECUTE", "rm", "-rf", "/Applications/YSoft\ SafeQ\ Client.app");
}

componentLoaded = function(){
	//If this is installer load checkbox from .ui file
	if(installer.isInstaller()){
		installer.addWizardPageItem(component, "LogOutCheckBoxForm", QInstaller.InstallationFinished);	
	}
}

finishClicked = function(){
	if(!component.installed)
		return;
	//If the installation was successful, let the user log out
	if(installer.isInstaller() && installer.status == QInstaller.Success){
		var isLogOutChecked = component.userInterface("LogOutCheckBoxForm").LogOut.checked;
        if (isLogOutChecked) {
			//Todo - logout
		}
	}
}

Here is the documentation for the Component scripting.

The installer is created by executing the binarycreator tool from the Qt Installer Framework:

binarycreator -c config/config.xml -p packages installer

where -p is the path to the packages folder and -c is the path to the config.xml file. The last argument is the name of the generated installer.

This is what the final installer looks like on Mac OS X:

[Screenshot: the final installer running on Mac OS X]

When you work with queries that involve a LEFT JOIN on a 1:n relation, you usually want to map the parent to a collection of its child elements. Imagine a simple example: a result set with two columns, daddy and kiddo (ordered by daddy). Try solving this problem with some pseudo-code before you read any further.

Quite commonly, result sets are processed in a simple while(rs.next()) {...} cycle. However, in this scenario, you would need some state variables (e.g. lastDaddy) as well as a post-cycle operation. Even if such a contraption is error-free, it’s rather unreadable for your coworkers. I tried a different approach. In Java, it should work with any valid ResultSet implementation. For the simple example, it looks like this:

Map<String, Collection<String>> result = new HashMap<>();
boolean hadNext = rs.next();

while (hadNext) {
    String group = rs.getString("daddy");
    Collection<String> elements = new ArrayList<>();

    do {
        elements.add(rs.getString("kiddo"));
    } while ((hadNext = rs.next()) &&
             group.equals(rs.getString("daddy")));

    result.put(group, elements);
}

The main advantage of this approach is readability. The result set is only iterated in the inner cycle, so you need an initial read first: hadNext = rs.next(). The outer cycle contains the whole life cycle of a daddy (a great win for readability). The inner cycle is quite interesting: I hardly ever use the do-while cycle, but here it shines. The first child must be read before iterating the result set any further. The condition causes the inner cycle to stop both when the parent changes and when the result set ends.

In most scenarios, two levels of hierarchy are still one too many (separate queries are used instead). However, this approach can handle even multiple levels of hierarchy while staying readable. In that case, all the inner cycles use a do-while loop, matching the whole parental line of an element, and only the innermost cycle iterates the result set.
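
For illustration, a three-level variant might look roughly like this (the columns grandpa, daddy and kiddo are hypothetical, and the query is assumed to be ordered by grandpa, daddy):

Map<String, Map<String, Collection<String>>> result = new HashMap<>();
boolean hadNext = rs.next();

while (hadNext) {
    String grandpa = rs.getString("grandpa");
    Map<String, Collection<String>> daddies = new HashMap<>();

    do {
        String daddy = rs.getString("daddy");
        Collection<String> kiddos = new ArrayList<>();

        // only the innermost cycle advances the result set
        do {
            kiddos.add(rs.getString("kiddo"));
        } while ((hadNext = rs.next())
                 && grandpa.equals(rs.getString("grandpa"))
                 && daddy.equals(rs.getString("daddy")));

        daddies.put(daddy, kiddos);
    } while (hadNext && grandpa.equals(rs.getString("grandpa")));

    result.put(grandpa, daddies);
}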

Writing a .NET client for a third-party SOAP web service is a relatively simple and straightforward task. The web is full of tutorials and how-to examples which will help you if you are new to this field. The first step is generating a .NET proxy class via the WSDL.exe utility and adding it to your project. After that you can simply start using the remote resources with all of the cool stuff. Of course, you can always add a WCF service reference instead (as commonly suggested on many forums); however, sometimes this approach cannot be used due to technology limitations, or you simply don’t want to waste your time setting up a secured WCF client.

The story behind

Recently we received access to a third-party Java web service offering some awesome features we wished to implement in our .NET client application. We were provided with a WSDL web service specification and some schema definition files. We simply generated the C# proxy classes and started to play with all those new awesome features. After a while we noticed that some of the provided methods were missing from our proxy class. We sniffed around and found the following comment in the proxy class:

// CODEGEN: The operation binding 'AwesomeMethod' from namespace 'http://awesome.namespace/wsdl' was ignored. Missing soap:body input binding.

The WSDL.exe utility strangely did not create some of the proxy methods. What happened? Well, all the missing methods actually served for uploading files to the remote server via multipart/related MIME bindings. In the WSDL specification it was given as follows:
<wsdl:input name="AwesomeMethodInput">
     <mime:multipartRelated>
          <mime:part>
               <soap:body use="literal" part="awesomeMethod"/>
          </mime:part>
          <mime:part>
               <mime:content part="uploadFile" type="application/octet-stream"/>
          </mime:part>
     </mime:multipartRelated>
</wsdl:input> 
Although MIME attachments are a standard part of the SOAP specification (see http://www.w3.org/TR/SOAP-attachments), the Microsoft implementation does not support any type of multipart MIME messages (instead, Microsoft created its own standard called DIME, see http://en.wikipedia.org/wiki/Direct_Internet_Message_Encapsulation). Therefore we had to choose a different solution.

SOAP web client

The least painful option left for us was to implement the client directly on top of the System.Net library. In fact, writing a web client in C# is not especially difficult, and plenty of examples can be found all over the internet:
var request = (HttpWebRequest)WebRequest.Create(url);
request.ContentType = "text/xml; charset=utf-8";
request.Method = "POST";
request.Timeout = timeout;
request.Credentials = new NetworkCredential("username", "password");
using (var stream = request.GetRequestStream())
{
    using (var writer = new StreamWriter(stream))
    {
        writer.Write(postData);
    }
}
Using such an example one can easily connect to any HTTP-based network resource. Connecting to a SOAP web service is not difficult either – compared to a simple HTTP web client you only have to add two more things:

1) Specify the SOAP action in the request header:

request.Headers.Add("SOAPAction", @"""awesome.namespace#AwesomeMethod""");

As you can see, the action name consists of the method namespace and the method name, delimited by a hash. The value must be wrapped in quotation marks, otherwise the request will not be performed.

2) Construct a SOAP request envelope and append it to the POST body according to your WSDL specification:

var requestString = @"<?xml version=""1.0"" encoding=""utf-8"" ?>
<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"">
<soap:Body>
     <message-content/>
</soap:Body>
</soap:Envelope>";
Follow your WSDL specification carefully and you will be rewarded with a working SOAP client. Of course, you have to parse the responses manually in this case; however, that is not such an issue when you have no other way to call the web service.

How to attach a file

If you want to upload a file to a remote SOAP web service, you have to send its content as part of a multipart MIME message. There are many multipart content types which can be used; probably the most common is multipart/form-data, used for sending data from web forms. In our case, the request had to be sent as multipart/related content according to the specification, so in the following we will focus on the construction of such requests.

First of all, you have to specify the appropriate content type and boundary in your request header. The boundary is an essential part of every multipart content type: it is an arbitrary string which delimits the individual parts of the message and denotes the end of the message. The header should be specified as follows:

request.ContentType = @"multipart/related; boundary=some_boundary; charset=utf-8; type=text/xml; start=""<first-part>""";

First you can see the specification of our content type, followed by the definition of the boundary (quotation marks must not be used and no white space is allowed). The start attribute defines the first part of the message through its Content-ID; for the multipart/related content type, this attribute must be specified and wrapped in quotation marks. Once you have the header specified, constructing the message with the uploaded file is quite simple. You just have to compose the following request message and set it as your request body:

--some_boundary
Content-ID: <first-part>
Content-Type: text/xml; charset=utf-8
   
<?xml version='1.0'?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <message-content/>
</soap:Envelope>
--some_boundary
Content-ID: <second-part>
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary

...@a!+s1dw32w16qw...your-binary-file…s+61...
--some_boundary--

The specification is quite strict, so the request must be constructed exactly as you see it in the example: boundaries must be prefixed with a double dash, the closing boundary of the whole message must also be suffixed with a double dash, each header must be on a new line, headers and the content itself must be delimited with a CRLF sequence, and so on.

Note that the Content-Transfer-Encoding header specifies how the uploaded file will be transferred – for example you can specify binary, base64, utf-8, etc. This value might depend on the specification of your web service. It also determines how you treat the content of the uploaded file before you attach it to the request body. For example, in case of base64 transfer encoding you have to encode the whole file into base64. If you set the transfer encoding to binary, you should use BinaryWriter instead of StreamWriter for writing the request body data into your request stream, otherwise the uploaded file might get corrupted in your request.
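
To give a rough idea, the request body from the example above might be assembled like this (a sketch only: soapEnvelope, filePath and request are placeholders, error handling is omitted, and the System.IO, System.Net and System.Text namespaces are assumed):

const string boundary = "some_boundary";
const string crlf = "\r\n";                    // MIME headers must be separated by CRLF
byte[] fileBytes = File.ReadAllBytes(filePath);

var firstPart =
    "--" + boundary + crlf +
    "Content-ID: <first-part>" + crlf +
    "Content-Type: text/xml; charset=utf-8" + crlf + crlf +
    soapEnvelope + crlf;

var secondPartHeader =
    "--" + boundary + crlf +
    "Content-ID: <second-part>" + crlf +
    "Content-Type: application/octet-stream" + crlf +
    "Content-Transfer-Encoding: binary" + crlf + crlf;

using (var stream = request.GetRequestStream())
using (var writer = new BinaryWriter(stream))
{
    writer.Write(Encoding.UTF8.GetBytes(firstPart + secondPartHeader));
    writer.Write(fileBytes);                   // raw binary content of the uploaded file
    writer.Write(Encoding.UTF8.GetBytes(crlf + "--" + boundary + "--"));
}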

Conclusion

Now you should be able to create your own SOAP requests with multipart/related MIME attachments. We could continue with more and more examples (e.g. with references between message parts); the options are countless. If you want to know more, you can read the SOAP attachment specification at http://www.w3.org/TR/SOAP-attachments – there is a lot of text, but if you scroll down you can find plenty of great examples which may suit your problem.

User Stories seem to be my favourite topic these days…

…the idea behind this post started with a seemingly innocent user story like this:

I, as an End User, want to authenticate by PIN at the MFD, so that my documents are secure.

Today, I want to focus on one particular thing and that is stakeholder value.

The value to the user, according to this story, is security, and the desired function is authentication. Long story short, I do not know many users who would require authentication! Authentication (and authorization, for that matter) is a solution which helps us achieve something else, such as data confidentiality and non-repudiation.

The real value the user desires is data confidentiality. Non-repudiation is usually the desired value of IT administrators or security officers, who own the security policy of the organization in question. So what is the function of the system the user needs? Or does the user really care?

The function the user actually needs is some kind of protection, which creates confidence in the user that it is not easy or even possible to retrieve their documents.

So let’s start with something completely different…

I, as an End User, need the system to exhibit protection of my documents, so that I can trust their confidentiality.

What message can I take from such a user story as a developer?

  • There is a stakeholder called End User.
  • The user wants the system to exhibit protection, meaning that the system should not only protect the documents, but also demonstrate that it is protecting them.
  • The user values the trust which the system builds and maintains, and the fact that this trust can be put in the confidentiality of their documents.

But what happened to our authentication? There are two other stakeholders who actually value authentication. As mentioned above, stakeholders internal to the customer, who govern the security policy, may have specific requirements on authentication. In this case, it is intentional design, as it stems from constraints imposed by the customer environment. These constraints can, and perhaps should, be challenged, but never ignored.

In such a case, we work with another stakeholder and with a different user story:

I, as a Security Officer, want the End Users to have to authenticate by PIN at the MFD, so that we can trace each action at the MFD to a specific End User.

Putting these two user stories together brings us to authentication and much more. The user experience we are delivering has to build trust between the user and the system.

Another option is to look at authentication (by PIN) as a solution to our security problem. There are other ways to maintain data confidentiality, so we might put data confidentiality in the position of the required function.

I, as an End User, want the system to control access to my documents in a way visible to me, so that I can trust that my documents remain confidential.

This might be one of the possible descriptions of data confidentiality – or rather of the trust in data confidentiality – in the form of a user story. Again, no unintentional design. In this case, we are leaving more room for innovation, as we are delivering on values with no design constraints.

And I will talk about constraints in one of my next posts.

In my previous post, I started elaborating a simple user story about Embedded Terminal Application deployment. There we focused on the middle part of the user story, i.e. what the Administrator (the actor) wants. At the end, I started elaborating the last part, i.e. what the benefit is – or better said, the quality we want to achieve.

I sincerely hope it does not strike you as a controversial idea that user stories are all about quality. But I have always been puzzled by how to connect such seemingly different things as user / stakeholder intents and measurable qualities. Until, one day, Tom Gilb (@imtomgilb) explained all of that.

First of all, let us repeat the user story:

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD, so that I save time.

In the previous post, I asked the question whether saving time is the quality we are really looking for. The Administrator might need to work with the system in different contexts. On one hand, we have an Administrator who needs to save time, since he takes care of a small environment and has many things to focus on at once. On the other hand, we have a team trying to prepare thousands of printers for thousands of end users, willing to trade off a little extra time for reliability, as long as they don’t have to work with one machine at a time.

…so that I save time.

So we are dealing with a complex quality here and we need to decompose it. Let us start by putting together a list of aspects of the quality the Administrators are looking for (the nomenclature is not important, as long as we can agree on common naming):

  • Time or Degree of Automation per Device, i.e. the time we need to spend on each device in our fleet compared to the total time we need to prepare the environment for the end users.
  • Reliability, i.e. the probability with which a particular device of the fleet fails to get prepared despite the Administrators doing everything right.
  • Robustness, i.e. the probability that the process and the tools we have provided the Administrators with work correctly, meaning they deliver the results they should while coping with whatever problems can be expected (such as device misconfigurations or differences between firmware versions) – problems that were experienced in the past, are documented, or are not guaranteed by the vendor.
  • Repeatability, i.e. how difficult it is (in terms of manual steps) to repeat the process in case of failure in order to potentially fix it (such as by turning on a device which had been turned off and thus could not be prepared properly).

For each quality, we can establish three levels – goal, tolerable and past. The most important thing for us is to elaborate on the tolerable level, and also to prepare measurements (please note that all the qualities mentioned above can be measured) and measure the past, i.e. the current state of the art of our product.

The goal shall be elaborated as a big enough improvement over the current state and also balanced against the tolerable level. Tolerable simply means that if we get below this point (such as Reliability below 70%), the user story does not exist as implemented, since we have failed to deliver on the stakeholder (the Administrator in this case) value.

We have decomposed the value the Administrator needs to receive from the product, but how do we put all this into the user story?
We started with time, but it now seems that the overall quality the Administrators are looking for is not connected only with time and effort, but also has something to do with the risk of the MFD not being prepared for the end users. That may sound too general, since we are dealing with all sorts of risks, but the qualities we are looking at are all about doing the deployment quickly, being able to minimize failures, and recovering from them as fast as possible with a minimum of manual steps.

So let’s move forward with our user story…

As an administrator, I want to eliminate all manual steps required to perform before users can start using SafeQ features on the MFD, so that I save time deploying the system and recovering from failures.

Please note that we are still avoiding unintentional design as we are not saying what needs to be done or how the deployment or the recovery is done.

Our user story is far from complete… next time, I will elaborate on how to connect qualities with user stories and what the value of tests is in this matter.

This approach to quality is inspired by the Evolutionary Project Management and Competitive Engineering techniques put together by Tom and Kai Gilb (www.gilb.com). It is not easy, but it is elegant in its simplicity, and beautiful.