This is an introduction to the UAM project – a management server for Universal Appliances (the new TPv4, eDee and SafeQube) – and to the Vert.x framework on which it is developed.

There was a need for easy installation and upgrade of the OS and of the applications running on a UA, so we needed a system to manage UAs with as little user interaction as possible.
The main requirements are robustness (we will deal with unreliable networks and possible power outages), small executables (there is limited storage on the SafeQube) and little disk access (we don’t want to wear out our SafeQube flash disks).

The system has 2 parts:

  • UA management server – a server application which will be deployed next to CML and will contain APIs for system administration
  • UA management server proxy – an application cluster which will be deployed on SafeQubes and will cache application binaries and configuration

Both applications are developed using the Vert.x framework. The UAM server also uses a database as configuration storage, and the proxy applications use Hazelcast for clustering.

Vert.x is an application framework for programming servers (especially web servers). It is built on top of Netty so no application container like Tomcat or Jetty is necessary.

I will show several examples of working with Vert.x so you can try out writing your own applications.

Starting a web server is simple:

vertx.createHttpServer()
    .requestHandler(request -> {
        System.out.println("Received request from " + request.remoteAddress().host());
    })
    .listen(80);

This approach is fast, but you have to decide yourself which business logic to call for which endpoint, and it can get quite messy.
Fortunately, we can do better using the Vert.x web module:

Router router = Router.router(vertx);
router.route("/hello").handler(context -> {
    String name = context.request().getParam("name");
    context.response().end("Hello " + name);
});

router.get("/ping").handler(context -> {
    context.response().end("Pong!");
});

vertx.createHttpServer()
     .requestHandler(router::accept)
     .listen(80);

This way, we can easily separate business logic for our endpoints.

Do you need request logging or security? You can add more routes for the same endpoints. Routes are processed in the order in which they were added to the router, and each route can either call the next route in the chain or just end the response.

Router router = Router.router(vertx);
router.route().handler(CookieHandler.create());
router.get("/secured-resource").handler(context -> {
    if ("password".equals(context.getCookie("securityToken"))) {
        context.next();
    } else {
        context.response().setStatusCode(401).end("sorry, you are not authorized to see the secured stuff");
    }
});

router.get("/secured-resource").handler(context -> {
    context.response().end("top secret stuff");
});

Do you want more than just APIs? There are template engines for Handlebars, MVEL, Thymeleaf or Jade if you prefer to render pages on the server.

router.get().handler(TemplateHandler.create(JadeTemplateEngine.create()));

Things get more interesting when you configure Vert.x instances to form a cluster. Currently, the only production-ready cluster manager is built on top of Hazelcast, so we will use it.

VertxOptions options = new VertxOptions();
options.setClustered(true);
options.setClusterHost("192.168.1.2");

Vertx.clusteredVertx(options, result -> {
    if (result.succeeded()) {
        Vertx vertx = result.result();
        //initialize what we need
    } else {
        LOGGER.error("Failed to start vertx cluster", result.cause());
    }
});

As you can see, starting a Vert.x cluster isn’t that complicated. Just be careful to set the correct cluster host address, or you may find that the cluster doesn’t behave the way you would expect.
And what functionality is available for clustered applications? There is a clustered event bus, where you can use point-to-point or publish-subscribe messaging. Just register your consumers and send them messages to logical addresses without ever caring about which node the consumer resides on.

vertx.eventBus().consumer("/echo", message -> {
    message.reply(message.body());
});

vertx.eventBus().send("/echo", "ping");
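The snippet above only fires the message. A minimal sketch of how the sender can also consume the reply, and of publish-subscribe delivery, using the Vert.x 3 event bus API (the second message is just an illustration):

// point-to-point: send to a single consumer and handle its reply asynchronously
vertx.eventBus().send("/echo", "ping", reply -> {
    if (reply.succeeded()) {
        System.out.println("Got reply: " + reply.result().body());
    } else {
        System.out.println("No reply: " + reply.cause());
    }
});

// publish-subscribe: every consumer registered on the address receives the message
vertx.eventBus().publish("/echo", "ping for everyone");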

I hope this short introduction to Vert.x will make you curious to look at its documentation and start writing your own applications on top of it.

One of the systems our team develops is a UI for end users, where users can view and manage their print-related data.

The system is designed as a simple web application where we make AJAX calls to Spring controllers, which delegate the calls to two other systems; no database is present.

One of the requirements on the system was to support about 1000 concurrent users. Since Tomcat has 200 threads by default and the calls to the other systems may take long (fortunately, that is not the case at the moment), we decided to make use of Servlet 3.0 asynchronous processing. This way, each AJAX call from the browser occupies a Tomcat thread only while the call to the other system is being prepared. The calls themselves are handled by our asynchronous library for communication with SafeQ and by an asynchronous HTTP client for communication with the Payment System; both have their own thread pools and fill in the response when they get a reply.
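To illustrate the pattern (this is a hedged sketch, not our actual controller; EntitlementController and SafeQAsyncClient are hypothetical names), a Spring MVC handler can return a DeferredResult that is completed later by the async client’s callback:

import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class EntitlementController {

    private final SafeQAsyncClient safeQClient; // hypothetical async client with its own thread pool

    public EntitlementController(SafeQAsyncClient safeQClient) {
        this.safeQClient = safeQClient;
    }

    @RequestMapping("/entitlement")
    public DeferredResult<Object> getEntitlement(@RequestParam String userGuid,
                                                 @RequestParam Long costCenterId) {
        DeferredResult<Object> result = new DeferredResult<>();
        // the Tomcat thread only prepares the call; the callback below runs on the client's thread pool
        safeQClient.getEntitlement(new ListenableFutureCallback<Object>() {
            @Override
            public void onSuccess(Object entitlement) {
                result.setResult(entitlement);
            }

            @Override
            public void onFailure(Throwable ex) {
                result.setErrorResult(ex);
            }
        }, userGuid, costCenterId);
        return result;
    }
}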

Since we depend so much on other systems’ performance, we wanted to monitor the execution time of the requests for better tracking and debugging of production problems.

There are several endpoints for each system and more can come later, so in order to avoid duplication, we decided to leverage Spring’s support for aspect-oriented programming. We created an aspect for each business service (one for SafeQ, one for the Payment System), and it was time to implement the measurement logic.

In the synchronous scenario, things are pretty simple. Just intercept the call, note the start and end times and, if the call took too long, log it in a logfile.

@Around("remoteServiceMethod()")
public Object restTimingAspect(ProceedingJoinPoint joinPoint) throws Throwable {
    long start = System.currentTimeMillis();

    Object result = joinPoint.proceed();

    long end = System.currentTimeMillis();

    long executionInMillis = end - start;
    if (executionInMillis > remoteServiceCallDurationThresholdInMillis) {
        LOGGER.warn("Execution of {} took {} ms.", joinPoint, executionInMillis);
    }

    return result;
}
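The remoteServiceMethod() pointcut referenced in the @Around annotation is not shown in the article; a minimal sketch, assuming a hypothetical package and naming convention for the remote-facing services, could look like this:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class RemoteServiceTimingAspect {

    // matches every public method of the remote service beans; the package name is illustrative
    @Pointcut("execution(public * com.example.remoteservice..*Service.*(..))")
    public void remoteServiceMethod() {
    }
}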

This won’t work in our asynchronous case. The call to joinPoint.proceed(), which makes the call to the other system, returns immediately without waiting for a reply. The reply is processed in a callback provided to one of the async communication libraries. So we have to do a bit more.

We know the signature of our business methods. One of the arguments is always a callback, which will process the reply.

public void getEntitlement(ListenableFutureCallback callback, String userGuid, Long costCenterId)

If we want to add our monitoring logic in a transparent way, we have to create special callback implementation, which will wrap the original callback and track the total time execution.

// wraps the original callback and measures the time between the remote call and its reply
class TimingListenableFutureCallback implements ListenableFutureCallback {

    private ListenableFutureCallback delegate;
    private StopWatch timer = new StopWatch();
    private String joinPoint;

    public TimingListenableFutureCallback(ListenableFutureCallback delegate, String joinPoint) {
        this.delegate = delegate;
        this.joinPoint = joinPoint;
        timer.start();
    }
        
    @Override
    public void onSuccess(Object result) {
        logExecution(timer, joinPoint);
        delegate.onSuccess(result);
    }

    @Override
    public void onFailure(Throwable ex) {
        logExecution(timer, joinPoint);
        delegate.onFailure(ex);
    }
}

And then we have to call the target business method with properly wrapped callback argument.

@Around("remoteServiceMethod() && args(callback,..)")
public Object restTimingAspect(ProceedingJoinPoint joinPoint, ListenableFutureCallback callback) throws Throwable {
    String joinPointName = computeJoinPointName(joinPoint);
    
    Object[] wrappedArgs = Arrays.stream(joinPoint.getArgs()).map(arg -> {
        return arg instanceof ListenableFutureCallback ? wrapCallback((ListenableFutureCallback) arg, joinPointName) : arg;
    }).toArray();
    
    LOGGER.trace("Calling remote service operation {}.", joinPointName);
    
    return joinPoint.proceed(wrappedArgs);
}
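For completeness, the wrapCallback and logExecution helpers referenced above are not shown in the article. A minimal sketch, assuming TimingListenableFutureCallback is a nested class of the aspect so that it can reach logExecution and the threshold field, might look like this:

private ListenableFutureCallback wrapCallback(ListenableFutureCallback callback, String joinPointName) {
    return new TimingListenableFutureCallback(callback, joinPointName);
}

private void logExecution(StopWatch timer, String joinPoint) {
    // stop the Spring StopWatch started in the callback constructor and log slow executions
    timer.stop();
    long executionInMillis = timer.getTotalTimeMillis();
    if (executionInMillis > remoteServiceCallDurationThresholdInMillis) {
        LOGGER.warn("Execution of {} took {} ms.", joinPoint, executionInMillis);
    }
}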

There is similar implementation for the other async messaging library.

I hope this solution will help you solve similar problems in your applications in an elegant manner 🙂

We use Liquibase in our project as a DB change management tool. We use it to create our DB schema with the basic configuration our application needs to run.

This is, however, not enough for development or (unit) testing. Why? Because for each test case, we need to have the data in the database in a particular state. E.g., I need to test that the system rejects the action of a user that does not have the required quota for the action he/she wants to perform. So I create a user with a 0 quota, then try to perform the action and see whether the system allows it or rejects it. To avoid setting up our test data repeatedly, we use special Liquibase scripts that set up what we need in our test environment (such as a user with a 0 quota), so that we do not have to do this manually.

For the Payment System project, we used Liquibase to run plain SQL scripts to insert the data that we needed. It worked well enough, but this approach has some disadvantages.

Mainly, the person writing the scripts has to have knowledge of our database and the relations between various tables, so there is a steep learning curve. Another issue is that all of the scripts have to be updated when a larger DB schema change takes place.

Therefore, during our implementation of the Quota System, I took inspiration from the work of my colleague, who used a kind of high-level DSL as a convenient way to set up the Quota System, and I turned it into a production-ready feature on top of Liquibase. This solved the problem of manual execution (the scripts always run during application startup and are guaranteed to run exactly once).

For the DSL, I chose Groovy, since we already use it for our tests, and there is no interoperability issue with Java based Liquibase.

Liquibase has many extension points and implementing a custom changelog parser seemed the way to go.

The default parser for XML parses the input file, generates internal Liquibase objects representing particular changes, and then transforms them to SQL, which is executed on the DB.

I created a similar parser, which was registered to accept Groovy scripts.

The parser executes the script file, which creates the internal Liquibase objects; of course, one DSL keyword can create several insert statements. What is more, when the DB schema changes, the only place that needs to change is the DSL implementation, not all of the already created scripts.

Example Groovy script:

importFile('changelog.xml')

bw = '3' //using guid of identifier which is already present in database
color = identifier name:'color'
print = identifier name:'PRINT', guid:'100' //creating identifier with specific guid

q1 = quota guid:'q1', name:'quota1', limit:50, identifiers:[bw, print], period:weekly('MONDAY')
q2 = quota guid:'q2', name:'quota2', limit:150, identifiers:[color, print], period:monthly(1)

for (i in 1..1000)
    quotaSubject guid: 'guid' + i, name:'John' + i, quotas:[q1,q2]

This example shows that the Groovy script can reference another Liquibase script file – e.g., the default changelog file, which creates the basic DB structure and initial data.

It also shows the programmatic creation of quotaSubjects (accounts in the system), where we can use a normal Groovy for loop to create many accounts for load testing.

QuotaSubjects have assigned quotas, which are identified by quota identifiers. These can be either created automatically, or we can reference already existing ones.

The keywords identifier, quota, quotaSubject, weekly, and monthly are just normal Groovy functions that take a Map as an argument, which allows us to pass them named parameters.

Before execution, the script is concatenated with the main DSL script, where the keywords are defined.

Part of the main DSL script that processes identifiers:

QuotaIdentifier identifier(Map args) {
    assert args.name, 'QuotaIdentifier name is not specified, available params are: ' + args
    String guid = args.guid ? args.guid : nextGuid()

    addInsertChange('QUOTA_IDENTIFIER', columns('GUID', guid) << column('NAME', args.name) << column('STATUS', 'ENABLED'))
    new QuotaIdentifier(guid)
}

private def addInsertChange(String tableName, List<ColumnConfig> columns) {
    InsertDataChange change = new InsertDataChange()
    change.setTableName(tableName)
    change.setColumns(columns)

    groovyDSL_liquibaseChanges << change
}

The calls produce Liquibase objects, which are appended to variables accessible within the Groovy script. The content of the variables constitutes the final Liquibase changelog, which is created after the processing of the script is done.
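To make this more concrete, here is a minimal sketch of what such a Groovy-evaluating changelog parser could look like against the Liquibase 3.x Java API. The class name, the liquibase-dsl.groovy file, the readScript helper and the single generated change set are illustrative assumptions, not our production code:

import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import liquibase.change.Change;
import liquibase.changelog.ChangeLogParameters;
import liquibase.changelog.ChangeSet;
import liquibase.changelog.DatabaseChangeLog;
import liquibase.exception.ChangeLogParseException;
import liquibase.parser.ChangeLogParser;
import liquibase.resource.ResourceAccessor;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class GroovyDslChangeLogParser implements ChangeLogParser {

    @Override
    public boolean supports(String changeLogFile, ResourceAccessor resourceAccessor) {
        // only our Groovy test data scripts are handled by this parser
        return changeLogFile.endsWith(".groovy");
    }

    @Override
    public int getPriority() {
        return PRIORITY_DEFAULT;
    }

    @Override
    public DatabaseChangeLog parse(String physicalChangeLogLocation, ChangeLogParameters changeLogParameters,
                                   ResourceAccessor resourceAccessor) throws ChangeLogParseException {
        try {
            // concatenate the main DSL script (defining identifier, quota, quotaSubject, ...) with the test data script
            String dsl = readScript("liquibase-dsl.groovy");
            String testData = readScript(physicalChangeLogLocation);

            // the DSL keywords append the generated changes to this variable
            Binding binding = new Binding();
            binding.setVariable("groovyDSL_liquibaseChanges", new ArrayList<Change>());
            new GroovyShell(binding).evaluate(dsl + "\n" + testData);

            // wrap the collected changes in a single generated change set
            DatabaseChangeLog changeLog = new DatabaseChangeLog(physicalChangeLogLocation);
            ChangeSet changeSet = new ChangeSet("test-data", "groovy-dsl", false, false,
                    physicalChangeLogLocation, null, null, changeLog);
            @SuppressWarnings("unchecked")
            List<Change> changes = (List<Change>) binding.getVariable("groovyDSL_liquibaseChanges");
            changes.forEach(changeSet::addChange);
            changeLog.addChangeSet(changeSet);
            return changeLog;
        } catch (Exception e) {
            throw new ChangeLogParseException(e.getMessage(), e);
        }
    }

    // illustrative helper; a real implementation would load the scripts through the ResourceAccessor
    private String readScript(String location) throws Exception {
        return new String(Files.readAllBytes(Paths.get(location)));
    }
}

Such a parser can be registered at startup via ChangeLogParserFactory.getInstance().register(...) or picked up by Liquibase’s service locator.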

This way, the test data file is simple to read, write, and maintain. A small change in the changelog parser also allowed us to embed the test data scripts in our Spock test specifications so that we can see the test execution logic and test data next to each other.

@Transactional @ContextConfiguration(loader = YSoftAnnotationConfigContextLoader.class)
class QuotaSubjectAdministrationServiceTestSpec extends UberSpecification {

    // ~ test configuration overrides ==================================================================

    @Configuration @ImportResource("/testApplicationContext.xml")
    static class OverridingContext {

        @Bean
        String testDataScript() {
            """groovy: importFile('changelog.xml')

                       bw = identifier name:'bw'
                       color = identifier name:'color'
                       print = identifier name:'print'
                       a4 = identifier name:'a4'

                       p_a4_c = quota guid:'p_a4_c', name:'p_a4_c', limit:10, identifiers:[print, a4, color], period:weekly('MONDAY')
                       p_a4_bw = quota guid:'p_a4_bw', name:'p_a4_bw', limit:20, identifiers:[print, a4, bw], period:weekly('MONDAY')
                       p_a4 = quota guid:'p_a4', name:'p_a4', limit:30, identifiers:[print, a4], period:weekly('MONDAY')
                       p = quota guid:'p', name:'p', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q1 = quota guid:'q1', name:'q1', limit:40, identifiers:[print], period:weekly('MONDAY')   
                       q2 = quota guid:'q2', name:'q2', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q_to_delete = quota guid:'q_to_delete', name:'q_to_delete', limit:40, identifiers:[print], period:weekly('MONDAY')

                       quotaSubject guid: 'user1', name:'user1', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user2', name:'user2', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user3', name:'user3', quotas:[p_a4_c, p_a4_bw, p_a4, p]
                       quotaSubject guid: 'user4', name:'user4', quotas:[p_a4_c]"""
        }
    }

    // ~ instance fields ===============================================================================

    @Autowired private QuotaSubjectAdministrationService quotaSubjectAdministrationService;
    @Autowired private QuotaAdministrationService quotaAdministrationService;

    // ~ findByGuid ====================================================================================

    def "findByGuid should throw IllegalInputException for null guid"() {
        when:
        quotaSubjectAdministrationService.findByGuid(null)

        then:
        thrown(IllegalInputException)
    }

The goal of this article series is to share our experience of building a new system for processing small-scale transactions within the Y Soft product suite. This article will discuss the main system requirements and high-level architecture decisions we made. Following articles will describe how we used the chosen technologies and the challenges we faced.

Back in 2011, YSoft SafeQ (the main Y Soft product dealing with print management) contained a component that limited users’ consumption based on their account balance and printing costs. However, it was tightly coupled with YSoft SafeQ, and its maintenance, further development and customizations were too expensive to continue in this manner. So in 2012, we made the decision to create a separate system for payment transaction processing. At that time, Y Soft opened a new development branch in Prague and hired a new team to build this green field project.

Requirements

The main requirements for Payment System were the following:

  • Compatibility – it had to be able to do what the old solution did, obviously 🙂
  • Strong data consistency (among all cluster nodes) – we would be dealing with money, and many of our customers would not like it if their users continued printing for free once they hit 0 on their account, or if any money transactions were lost.
  • Deployment in the customer environment – we do not host our products ourselves; rather, our products run in the customer’s environment. This means that the environments vary from big data centers with plenty of resources (think big banks and universities) to everyday workstations (small businesses).
  • Extensibility – some of our customers have their own payment processing systems that manage monetary accounts. There are also plenty of different payment gateways in various countries for money deposits. We had to be able to integrate Payment System with all of them at a minimal cost (ideally offloading the cost to a 3rd party).
  • YSoft Payment Machines – Y Soft also develops its own hardware for depositing money, the YSoft Payment Machine, so we had to support those as well.

We also imposed some requirements of our own:

  • Fast development – the project must be simple to build and simple to run with as few steps as possible to start developing. New developers should not have to spend several days in order to build and run the solution. Nobody should be made to memorize 30-step manuals to build or run anything!
  • Simple design – everybody hates to write (and read) documentation, therefore, we wanted to design the system according to KISS principles so that we could concentrate on programming and not on explaining complex design to our co-workers.
  • Flexible UI – we have seen our share of Java web frameworks and we cannot say it has ever been a pleasant experience. Especially if something does not work as it is supposed to, or if you need just some little extra that the framework does not provide out of the box. Therefore, we wanted the UI to be simple but flexible.

How We Addressed the Requirements

Compatibility

One of the challenges was to find out what the old solution really did since there was not much documentation (hardly any at all :-)). We gathered requirements from various sources, mainly company board members and QA analysts (since we did not have product management at that time) and prepared a sizable business analysis. That way we were quite confident that we understood the requirements and that all of the participants had the same notion of the system to be delivered.

[Payment System context diagram]

The main requirements boiled down to a system that manages monetary accounts and provides the means to create 2-step and multi-step transactions on those accounts (which map to the print and copy processes of YSoft SafeQ). The system should also provide means to manipulate account balances by a cash desk operator.

We wanted to create a more general-purpose payment transaction processing system, so we analyzed existing systems like PayPal, Google Wallet and Amazon Payments to avoid reinventing the wheel regarding the naming and functionality of different operations. The result is a system that is deployed in production and tested standalone, without YSoft SafeQ, sometimes in surprising setups, e.g., machines for selling candy in Switzerland :-).

Data Consistency

Since we needed to keep the data consistent, we chose a straightforward solution – one shared database and multiple stateless application servers. There were also other options, like a fully distributed solution or a single application. But we wanted failover, so a single application was not an option. On the other hand, a fully distributed solution would be a lot more complex and would have many necessary limitations during network partition (there are plans for a different system that will be fully distributed, but for different use cases, and we will achieve consistency in a different way).

NoSQL databases are currently very popular, but they are mostly targeted at cloud provider infrastructure with reliable networks. Environments where partitions happen on a daily basis (e.g., nodes are online only when the owner of the machine is at work) are not ideal for them.

Therefore, we chose a traditional SQL database as the database solution. YSoft SafeQ supports PostgreSQL and MSSQL, so that was the minimal set of supported DBs for us as well. We delegate database failover to the chosen database installation.

Support for Multiple Database Vendors

We need to support multiple databases and, of course, we do not want to duplicate code for each vendor. Besides PostgreSQL and MSSQL, we use H2 for development, since it does not need a complex installation, it can easily be bundled with the application or used in tests, and it is very flexible (in-memory, persistent and server modes).
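For illustration, the three H2 modes differ only in the JDBC URL (the database name and paths below are made up):

import java.sql.Connection;
import java.sql.DriverManager;

public class H2Examples {
    public static void main(String[] args) throws Exception {
        // in-memory, kept alive until the JVM exits: ideal for unit tests
        Connection inMemory = DriverManager.getConnection("jdbc:h2:mem:payment;DB_CLOSE_DELAY=-1", "sa", "");
        // persistent file-based database: convenient for local development
        Connection fileBased = DriverManager.getConnection("jdbc:h2:./data/payment", "sa", "");
        // server mode: connect to a running H2 TCP server
        Connection server = DriverManager.getConnection("jdbc:h2:tcp://localhost/~/payment", "sa", "");
        inMemory.close();
        fileBased.close();
        server.close();
    }
}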

We chose tools that grant us database portability. For database structure and minimal data set insertion, we use Liquibase (more details will follow in a separate article). For querying, we use Hibernate and we have successfully avoided the need for stored procedures so far (yes, there are times when they are handy, but they come at a cost, especially when you need to support multiple DB vendors).

Deployment in the Customer Environment

The sizes and needs of our customers vary dramatically, so we provide several different deployment options. They range from a single application server with a schema on existing database installation to a clustered solution with multiple application servers behind a load balancer.

[Deployment options diagram]

We achieved this by implementing the main transaction processing logic as stateless, so you can start a transaction on one node and finish it on another.

Many customers have security policies that do not allow us very good access to the installed solution, so descriptive logs are a must. We cooperated on reviews of log messages with our colleagues from the Customer Support Services (CSS) department, who act as the first line of support. The result is simple and clear log messages (usually one descriptive message per business operation), so CSS consultants can deal with many issues without our assistance.

Extensibility

There are four kinds of extension points in our system:

  • REST API – We chose REST mainly because it is well supported in most languages and platforms and because it is simpler to use than alternatives (like traditional web services). This way, anyone can easily integrate with us. You can even create Payment System transactions from the command line if you like. This API comes in 2 flavors:
    • Merchant API, which is used by merchants to create payment transactions.
    • Administration API, which is used for account and system management.
  • Java API for external payment processing systems – this is an extension point where either developers or CSS consultants write adapters for existing customer-specific payment processing systems. In this case, Payment System acts as a proxy for SafeQ, so SafeQ does not have to deal with the specifics of these customer systems. Implementations have to be in Java (or another JVM language), but of course we count on providing Java adapters for payment processing systems that expose, for example, REST interfaces.
  • REST “API” for payment gateways – we have defined a REST contract that has to be implemented by an integration application. This decoupling made the internal Payment System’s design much simpler, since we do not have to care about all of the different workflows that different payment gateways use. The workflow is separated in the integration application, which is HTTP based, since most of the payment gateways we support have either HTTP POST or REST based APIs. Multiple payment gateway integrations can be configured at the same time.
  • User management API – this is an API for retrieving information about users. Currently, we rely on SafeQ as the provider of user-related information in our deployments. This is, however, impractical during development, so we have isolated this functionality in an API and have a simple DB-backed development version.

YSoft Payment Machines

YSoft Payment Machines (our own hardware for money deposits) have limited hardware resources – e.g., 64 MB of memory for the operating system and the device firmware. This was a big limitation when choosing a messaging solution, so a simple and straightforward approach like (RESTful) web services was out of reach.

We ended up with Google Protocol Buffers on top of TCP (we use Netty on the server side). Google Protocol Buffers have a C implementation (nanopb) as well as a Java implementation, and the memory footprint of the C implementation was low enough that we could use it on our devices (we could not even use the C++ version because there was not enough spare memory for the C++ runtime).
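As a rough sketch of the server side (the DeviceMessage protobuf class, the DeviceMessageHandler and the port are hypothetical), a Netty pipeline with the standard protobuf codecs and varint length framing could look like this:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

public class PaymentMachineServer {

    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup boss = new NioEventLoopGroup(1);
        NioEventLoopGroup workers = new NioEventLoopGroup();
        try {
            new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel channel) {
                        channel.pipeline()
                            // varint length framing, matching the framing used by the nanopb client
                            .addLast(new ProtobufVarint32FrameDecoder())
                            .addLast(new ProtobufDecoder(DeviceMessage.getDefaultInstance())) // hypothetical generated message
                            .addLast(new ProtobufVarint32LengthFieldPrepender())
                            .addLast(new ProtobufEncoder())
                            .addLast(new DeviceMessageHandler()); // hypothetical business handler
                    }
                })
                .bind(9100) // illustrative port
                .sync()
                .channel()
                .closeFuture()
                .sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}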

Fast Development

We use Maven as our build system. We were happy using Maven on previous projects and it is fully sufficient for our needs on this project as well. The main benefit of using Maven is that developers familiar with it can build and modify the build configuration of our project easily. They do not have to learn the build specifics when switching from one project to another, which reduces the risk of making an error.

Our deployables are war files bundled with the default configuration so you can run the system just by dropping them in a web container, and you can then access Payment System right away. The defaults, which make use of the H2 database, can be changed along with other configuration options in a separate configuration file, which contains overrides.

We also provide a pre-configured version of Tomcat, which we use as our production web container, and we have the Maven Cargo plugin configured so that you can start Payment System without having a container installed (more on that in a separate article).

Simple Design

The design is the usual View, Controller, Service and DAO. Everybody is familiar with it and there are no surprises there. The main point here is consistency so developers do not have to think too hard about where to put the code, and they can concentrate on solving business problems. The knowledge transfer to new co-workers about the database schema, Liquibase usage, business logic and REST APIs took less than 1 hour.

Flexible UI

We use Jade4j as our templating technology, with Twitter Bootstrap and jQuery, all on top of Spring MVC. The templates are concise and you can see right away what HTML will be generated. No JavaScript is generated on the server. To lessen the problems associated with double submits, we follow the POST-REDIRECT-GET idiom and try to avoid state in the view as much as possible (again, more on that will follow in a separate article).
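As a small illustration of the POST-REDIRECT-GET idiom in Spring MVC (the controller, endpoints and view name below are made up):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.servlet.mvc.support.RedirectAttributes;

import java.math.BigDecimal;

@Controller
public class DepositController {

    @RequestMapping(value = "/deposits", method = RequestMethod.POST)
    public String createDeposit(@RequestParam String accountGuid,
                                @RequestParam BigDecimal amount,
                                RedirectAttributes redirectAttributes) {
        // ... create the deposit ...
        redirectAttributes.addFlashAttribute("message", "Deposit created");
        // redirect, so that refreshing the result page does not resubmit the POST
        return "redirect:/deposits";
    }

    @RequestMapping(value = "/deposits", method = RequestMethod.GET)
    public String listDeposits(Model model) {
        // the view resolver renders the "deposits" Jade template
        return "deposits";
    }
}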

Conclusion

To sum it all up: by using battle-tested technologies (Hibernate, Spring, Maven, Tomcat, Liquibase, Netty) cleverly and by keeping the design and data model simple and straightforward, we achieved our goals and satisfied all of the requirements. We have a working and extensible system, proven in several installations, with more installations coming soon. But, of course, everything was not always easy and simple, so stay tuned…