What we do

We create intelligent enterprise office solutions that build smart businesses and empower employees to be more productive and creative.

Our YSoft SafeQ Workflow Solutions Platform is used by more than 14,000 corporations and SMB organizations from over 120 countries to manage, optimize and secure their print and digital processes and workflows. Our 3D print solutions are focused on the Education sector, where they provide unique workflow and cost recovery benefits.

Through YSoft Labs, we experiment with new technologies for potential new products. We accelerate the technology growth of other innovative companies through Y Soft Ventures, our in-house investment arm.

 

Our Culture

Y Soft culture is defined by our 6 attitudes. We don’t say they are right or better. We say they are ours – created by employees for employees. We are always seeking people who share similar attitudes because our culture should live on through all Y Softers.

 


Headquartered in the Czech Republic, we employ over 370 dedicated people around the world. Our R&D centers are in Brno, Prague and Ostrava, Czech Republic, but you can also meet Y Softers in North and Latin America, Dubai, Singapore, Japan or Australia. Together we have 17 offices in 16 countries.


R&D – At Y Soft it is a CRAFT

Unlike at other technology companies, where R&D departments only implement the vision of others, Y Soft employees regularly contribute ideas and help shape the direction of new products.
For us, R&D is a craft — a craft we continuously build upon to expand the R&D know-how in the Czech Republic. We believe in DevOps and have finished the first stage in building our own production-grade environment. We are also building a core competency in UI design. Our developers transform their talent into solutions with high added value that impress our customers around the world. We inspire each other with our dedication to quality, by using the most up-to-date technologies, and by designing, developing and integrating our own hardware and software. We challenge ourselves daily to change the world.

We are building large-scale connected, global systems for demanding customers (nearly 25% of the Fortune Global 500 use Y Soft’s solutions). This level of customer satisfaction is achieved through R&D’s open access to customers and the opportunity to meet with company executives to discuss and influence R&D direction.

We also work with many young startups through our in-house venture arm, which exposes you to new ideas and mentors and lets you see whether entrepreneurship is a future path for you. We offer opportunities to work with thesis students at local universities and reward employees who contribute to our interns' success.


Want more from Y Soft?

We are growing and can offer you an exciting, rewarding career with a wide range of opportunities to utilize your skills and to grow personally and professionally.

In a recent project we came across the problem of securing communication between peers that don't share a backend language. Our goal was to securely generate a shared secret between a Java server and a .NET client. Each language supports all the features needed for the key agreement, but these features aren't always compatible, due to differences in encoding and data representation.

This article is meant as a quick and easy guide to agreeing on a shared secret between such a server and client, with hands-on examples. Each subchapter contains a concrete example of how to send, receive and process the data needed for the respective operation. The authentication and signature schemes are similar to guides published all over the internet; the main topic, key agreement, is presented with our insights and comments on how to make it work.

The reason I have included all the code snippets, even those that are "straight from the internet", is that I wanted to group all these schemes in one place, so that you don't have to hunt for examples on Oracle, CodeRanch, Stack Overflow, MSDN Microsoft documentation, … Honestly, it was a little bit frustrating for me, so there you go 🙂

If you are interested just in the key agreement, feel free to skip right to the key agreement example. I would, however, like to discuss the authentication and signature schemes first, in order to prevent attacks on anonymous key agreement.

Please beware of incautious use of the published code. Your protocol might require a different configuration, and by simply copy-pasting you may introduce security issues or errors into it.

Certificate based authentication

Consider each peer to have a certificate signed by some root CA.

In my examples I'll be working mostly with byte arrays, as they are the most universal way to represent data.

In most cases the whole certificate chain is transferred to the other side. Using a single (self-signed) certificate is just a simplification of the process below.

.NET authentication

The Java peer can transform a certificate into a byte array using the Certificate.getEncoded() method. The byte arrays are then sent to the counterpart, for example as sketched below.
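A minimal sketch of exporting the local chain entry by entry (ALIAS and the keystore loading are illustrative, not part of the original code):

KeyStore keyStore = KeyStore.getInstance("PKCS12");
//... load the keystore holding our certificate chain; error handling omitted for brevity ...
List<byte[]> encodedChain = new ArrayList<>();
for (Certificate cert : keyStore.getCertificateChain(ALIAS)) {
  //DER-encode each certificate of the chain separately
  encodedChain.add(cert.getEncoded());
}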

The .NET peer parses the certificate chain data and stores the parsed certificates in a List<byte[]> to ease processing.

The code below shows the authentication process. It differs from standard validation in that we send only the certificate chain. The peer's certificate is extracted from the chain; from the rest of the received chain we create a subchain and check whether we were able to build a complete chain from the peer up to a Certificate Authority.

For simplicity, revocation checks and the certificate key usage field (OID 2.5.29.15) are ignored. In a real environment you should definitely check the revocation list and this field, but for this article it is not really necessary.

/**
* receivedCertificateChain certificate chain received as a byte array and previously
*                          split into a list of separate DER-encoded certificates
*/
public bool certificateChainValidation(List<byte[]> receivedCertificateChain)
{
  var otherSideCertificate = new X509Certificate2(receivedCertificateChain[0]);
  var chain = new X509Chain { ChainPolicy = { RevocationMode = X509RevocationMode.NoCheck, VerificationFlags = X509VerificationFlags.IgnoreWrongUsage } };
  chain.ChainPolicy.ExtraStore.AddRange(receivedCertificateChain.Skip(1).Select(c => new X509Certificate2(c)).ToArray());

  //Build returns true only if a complete, trusted chain could be constructed
  return chain.Build(otherSideCertificate);
}

The root CA certificate must be trusted by your system for the validation to succeed.

Java authentication

The .NET peer can transform each certificate into a byte array using the X509Certificate2.RawData property and send it to the counterpart.

First, the Java peer needs to define the validation parameters and the trust anchor; in this case it's the root CA.

After that, you validate the received certificate chain. If the validate(CertPath, PKIXParameters) method completes without an exception, you can consider the received certificate chain correctly validated.

/**
* certificatePath received certificate chain data from the counterpart
*/
public boolean certificateChainValidation(byte[] certificatePath) {
  try {
    //generate a certificate path from the obtained byte array
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    CertPath received = cf.generateCertPath(new ByteArrayInputStream(certificatePath));

    //get all trusted entities from the dedicated truststore
    Set<TrustAnchor> trustedCAs = getAnchors();

    //set the list of trusted entities and turn off the revocation check
    //revocation is enabled by default, so if you don't turn it off explicitly, exceptions will bloom
    PKIXParameters params = new PKIXParameters(trustedCAs);
    params.setRevocationEnabled(false);

    CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
    //if no exception is thrown during the validation process, validation is successful
    cpv.validate(received, params);
  } catch (CertificateException | NoSuchAlgorithmException | InvalidAlgorithmParameterException | CertPathValidatorException e) {
    //the chain could not be validated
    return false;
  }
  return true;
}

getAnchors() is a method which simply loads all entries from a dedicated truststore and returns them as a Set<TrustAnchor>.
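For completeness, a minimal sketch of such a method might look as follows (assuming a standard JKS truststore; TRUSTSTORE_PATH and TRUSTSTORE_PASSWORD are illustrative constants, not part of the original code):

private Set<TrustAnchor> getAnchors() throws GeneralSecurityException, IOException {
  //load the dedicated truststore holding the trusted CA certificates
  KeyStore trustStore = KeyStore.getInstance("JKS");
  try (InputStream in = new FileInputStream(TRUSTSTORE_PATH)) {
    trustStore.load(in, TRUSTSTORE_PASSWORD); //password as char[]
  }

  //wrap every certificate entry as a TrustAnchor (no name constraints)
  Set<TrustAnchor> anchors = new HashSet<>();
  Enumeration<String> aliases = trustStore.aliases();
  while (aliases.hasMoreElements()) {
    Certificate cert = trustStore.getCertificate(aliases.nextElement());
    if (cert instanceof X509Certificate) {
      anchors.add(new TrustAnchor((X509Certificate) cert, null));
    }
  }
  return anchors;
}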

Signatures

Without a correct signature scheme, the whole authentication scheme would be compromised, as any attacker would be able to replay an intercepted certificate chain. Therefore, when sending the outgoing certificate chain, each peer has to include a signature with it. It is sufficient for the signed data to be a reasonably short (resp. long) nonce. This nonce should be freshly generated by the communication counterpart and can be sent before the authentication attempt (or as an authentication challenge). The nonce could be any random value or a static counter, but it definitely must be unique.
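Generating such a nonce is a one-liner with SecureRandom (the 32-byte length below is an illustrative choice):

//fresh random nonce, generated by the counterpart and signed together with the certificate chain
SecureRandom random = new SecureRandom();
byte[] nonce = new byte[32];
random.nextBytes(nonce);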

During the key agreement phase, signatures are used to sign public keys that are exchanged between peers to ensure the integrity of data.

.NET signature generation

First you need to use a certificate with the private key included – a PKCS#12 certificate.

Flags such as PersistKeySet and Exportable have to be included when loading the certificate, as without them you would not be able to obtain the private key for signatures from the keystore. The PersistKeySet flag ensures that when importing a certificate from a .pfx or .p12 file, you import its private key with it. The Exportable flag means that you can export the imported keys (mainly the private key).

As mentioned in this comment, we need a little workaround to force the crypto provider to use SHA256 hashes.

public byte[] signData(byte[] dataToSign)
{
  X509Certificate2 cert = new X509Certificate2(PATH_TO_CERTIFICATE, PASSWORD, X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);
  RSACryptoServiceProvider privateKey = (RSACryptoServiceProvider)cert.PrivateKey;
 
  RSACryptoServiceProvider rsaClear = new RSACryptoServiceProvider();
  // Export RSA parameters from 'privateKey' and import them into 'rsaClear'
  // Workaround to force RSACryptoServiceProvider use SHA256 hash
  rsaClear.ImportParameters(privateKey.ExportParameters(true));
  return rsaClear.SignData(dataToSign, "SHA256");
}

Java signature generation

A straightforward example of the kind published all over the internet.

/**
* data       data to be signed
* privateKey private key imported from the key pair storage
*/
public byte[] signData(byte[] data, PrivateKey privateKey) throws SignatureException {
  try {
    Signature signature = Signature.getInstance("SHA256withRSA");
    signature.initSign(privateKey);
    signature.update(data);
    return signature.sign();
  } catch (NoSuchAlgorithmException | SignatureException | InvalidKeyException e) {
    throw new SignatureException("Could not create signature", e);
  }
}

.NET signature verification

The first thing to mention is that you shouldn't create X509Certificate2 objects directly from byte arrays (see the 5th tip in this article). Depending on how your protocol is used, it might result in stalling and slowing down the whole protocol, which you definitely don't want. But again, for simplicity, I'm creating the object directly. Mitigation of this problem can be found in the linked article.

/**
* signature    signature of received data
* receivedData data received from counterpart. These data were also signed by counterpart to ensure integrity
*/
public bool verifySignature(byte[] signature, byte[] receivedData)
{
  X509Certificate2 cert = new X509Certificate2(PATH_TO_CERTIFICATE);

  RSACryptoServiceProvider publicKey = (RSACryptoServiceProvider)cert.PublicKey.Key;
  return publicKey.VerifyData(receivedData, "SHA256", signature);
}

Java signature verification

A straightforward example of the kind published all over the internet.

/**
* receivedData data received from the counterpart
* signature    signature of receivedData
* publicKey    public key associated with the privateKey object from "Java signature generation"
*/
public boolean verifyData(byte[] receivedData, byte[] signature, PublicKey publicKey) throws SignatureException {
  try {
    Signature sig = Signature.getInstance("SHA256withRSA");
    sig.initVerify(publicKey);
    sig.update(receivedData);
    return sig.verify(signature);
  } catch (NoSuchAlgorithmException | SignatureException | InvalidKeyException e) {
    throw new SignatureException("Could not validate signature", e);
  }
}

 

Key pair generation

Both key pair generation and key agreement use the BouncyCastle library (BC) for the crypto operations. During the key generation phase, you need to specify the key agreement algorithm; in this project we used elliptic curve Diffie-Hellman ("ECDH") key agreement. Elliptic curves were used to generate the key pair mainly due to their advantages (performance, key size).

.NET key pair generation

The only difference from standard key generation is the encoding of the generated public key, so that Java will accept the key and be able to create a PublicKey object.

private static AsymmetricCipherKeyPair generateKeypair()
{
  var keyPairGenerator = GeneratorUtilities.GetKeyPairGenerator("ECDH");
  var ellipticCurve = SecNamedCurves.GetByName(ELLIPTIC_CURVE_NAME);
  var parameters = new ECDomainParameters(ellipticCurve.Curve, ellipticCurve.G, ellipticCurve.N, ellipticCurve.H, ellipticCurve.GetSeed());
  keyPairGenerator.Init(new ECKeyGenerationParameters(parameters, new SecureRandom()));
  return keyPairGenerator.GenerateKeyPair();
}

Now you need to send the public key to the other peer. It is necessary to format the public key in the following way, otherwise the Java peer will not be able to correctly create a PublicKey object and derive the shared secret.

AsymmetricCipherKeyPair keyPair = generateKeypair();
byte[] publicKeyData = SubjectPublicKeyInfoFactory.CreateSubjectPublicKeyInfo(keyPair.Public).GetDerEncoded();

Java key pair generation

public static KeyPair generateKeyPair() {
  try {
    Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());
 
    ECNamedCurveParameterSpec ecNamedCurveParameterSpec = ECNamedCurveTable.getParameterSpec(ELLIPTIC_CURVE_NAME);
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("ECDH", PROVIDER);
    keyPairGenerator.initialize(ecNamedCurveParameterSpec);
    return keyPairGenerator.generateKeyPair();
  } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException | NoSuchProviderException e) {
    throw new EncryptionException("Could not generate key pair", e);
  }
}
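One piece of glue the examples assume is worth spelling out: on the Java side, PublicKey.getEncoded() already produces the same DER-encoded SubjectPublicKeyInfo structure that the .NET snippet above creates, and the counterpart's key can be reconstructed from the received bytes. A sketch (using the same PROVIDER constant as above):

/**
* receivedPublicKey DER-encoded (SubjectPublicKeyInfo) public key received from the counterpart
*/
public static PublicKey decodePublicKey(byte[] receivedPublicKey) {
  try {
    KeyFactory keyFactory = KeyFactory.getInstance("ECDH", PROVIDER);
    return keyFactory.generatePublic(new X509EncodedKeySpec(receivedPublicKey));
  } catch (NoSuchAlgorithmException | NoSuchProviderException | InvalidKeySpecException e) {
    throw new EncryptionException("Could not decode public key", e);
  }
}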

 

Key agreement

.NET key agreement

At this step, you need to beware of an issue in the C# BouncyCastle library which results in a failed key agreement even if you have correct keys (correct meaning the same data, but with different leading bytes: 0x0059534F4654 is different from 0x59534F4654).

The problem is that the BC library creates the shared secret as a BigInteger. BigInteger trims all leading zeroes, and when converting it to a byte array, these zeroes aren't included in the array.

Unfortunately, as of publication day the issue still wasn't fixed in BC, so you need to check the key length yourself, for example by left-padding the derived secret back to the expected length (the EXPECTED_KEY_LENGTH constant below is illustrative):


private static byte[] DeriveKey(AsymmetricCipherKeyPair myKeyPair, byte[] otherPartyPublicKey)
{
  IBasicAgreement keyAgreement = AgreementUtilities.GetBasicAgreement(KEY_AGREEMENT_ALGORITHM);
  keyAgreement.Init(myKeyPair.Private);

  //shared secret generation; BigInteger strips leading zero bytes,
  //so the resulting array may be shorter than the expected key length
  var fullKey = keyAgreement.CalculateAgreement(PublicKeyFactory.CreateKey(otherPartyPublicKey)).ToByteArrayUnsigned();

  //left-pad the secret with zeroes back to the expected length,
  //so that both peers end up with identical bytes
  if (fullKey.Length < EXPECTED_KEY_LENGTH)
  {
    var padded = new byte[EXPECTED_KEY_LENGTH];
    Array.Copy(fullKey, 0, padded, EXPECTED_KEY_LENGTH - fullKey.Length, fullKey.Length);
    fullKey = padded;
  }
  return fullKey;
}

Java key agreement

A straightforward example of the kind published all over the internet.

public SecretKey secretDerivation(PublicKey receivedPublicKey, PrivateKey myPrivateKey) {
  try {
    KeyAgreement keyAgreement = KeyAgreement.getInstance(KEY_AGREEMENT_ALGORITHM, PROVIDER);
    keyAgreement.init(myPrivateKey);
    keyAgreement.doPhase(receivedPublicKey, true);
    return keyAgreement.generateSecret("AES");
  } catch (NoSuchAlgorithmException | InvalidKeyException | NoSuchProviderException e) {
    throw new EncryptionException("Could not generate secret", e);
  }
}

 

These examples might give you an uneasy feeling, as they are rather crude: messages aren't sent and received in any standardized way, only as byte arrays. A better approach is to exchange protocol messages in a more standardized form, although by doing so you start relying heavily on a third-party library.

A more standardized version of the protocol would use Cryptographic Message Syntax. We will have a closer look at this option in the next blog article.

This is an introduction to the UAM project – a management server for Universal Appliances (the new TPv4, eDee and SafeQube) and the Vert.x framework on which it is developed.

There was a need for easy installation and upgrade of the OS and applications running on a UA, so we needed a system to manage UAs with as little user interaction as possible.
The main requirements are robustness (we have to deal with unreliable networks and possible power outages), small executables (there is limited storage on the SafeQube) and little disk access (we don't want to wear out the SafeQube flash disks).

The system has 2 parts:

  • UA management server – server application which will be deployed next to CML and will contain APIs for system administration
  • UA management server proxy – application cluster which will be deployed on SafeQubes and will cache application binaries and configuration

Both applications are developed using the Vert.x framework. The UAM server also uses a database as configuration storage, and the proxy applications use Hazelcast for clustering.

Vert.x is an application framework for programming servers (especially web servers). It is built on top of Netty so no application container like Tomcat or Jetty is necessary.

I will show several examples of working with Vert.x so you can try out writing your own applications.

Starting a web server is simple:

vertx.createHttpServer()
    .requestHandler(request -> {
        System.out.println("Received request from " + request.remoteAddress().host());
    })
    .listen(80);

Using this approach is fast, but you have to decide yourself which business logic to call for which endpoint, and it can get quite messy.
Fortunately, we can do better using the Vert.x web module:

Router router = Router.router(vertx);
router.route("/hello").handler(context -> {
    String name = context.request().getParam("name");
    context.response().end("Hello " + name);
});

router.get("/ping").handler(context -> {
    context.response().end("Pong!");
});

vertx.createHttpServer()
     .requestHandler(router::accept)
     .listen(80);

This way, we can easily separate business logic for our endpoints.

Do you need request logging or security? You can add more routes for the same endpoints. Routes are processed in the order they were added to the router, and each route can either call the next route in the chain or just end the response.

Router router = Router.router(vertx);
router.route().handler(CookieHandler.create());
router.get("/secured-resource").handler(context -> {
    Cookie token = context.getCookie("securityToken");
    if (token != null && "password".equals(token.getValue())) {
        context.next();
    } else {
        context.response().setStatusCode(401).end("sorry, you are not authorized to see the secured stuff");
    }
});

router.get("/secured-resource").handler(context -> {
    context.response().end("top secret stuff");
});

Do you want more than just APIs? There are template engines for Handlebars, MVEL, Thymeleaf or Jade if you want to use their functionality.

router.get().handler(TemplateHandler.create(JadeTemplateEngine.create()));

Things get more interesting when you configure Vert.x instances to form a cluster. Currently, the only production-ready cluster manager is built on top of Hazelcast, so we will use it.

VertxOptions options = new VertxOptions();
options.setClustered(true);
options.setClusterHost("192.168.1.2");

Vertx.clusteredVertx(options, result -> {
    if (result.succeeded()) {
        Vertx vertx = result.result();
        //initialize what we need
    } else {
        LOGGER.error("Failed to start vertx cluster", result.cause());
    }
});

As you can see, starting a Vert.x cluster isn't that complicated. Just be careful to set the correct cluster host address, or you may find that the cluster doesn't behave how you would expect.
And what functionality is available for clustered applications? There is a clustered event bus, where you can use point-to-point or publish-subscribe messaging. Just register your consumers and send them messages to logical addresses, without ever caring on which node the consumer resides.

vertx.eventBus().consumer("/echo", message -> {
    message.reply(message.body());
});

vertx.eventBus().send("/echo", "ping");

I hope this short introduction to Vert.x has made you curious to look at its documentation and start writing your own applications on top of it.

One of the systems our team develops is a UI for end users, where users can view and manage their print-related data.

The system is designed as a simple web application: we make AJAX calls to Spring controllers, which delegate the calls to two other systems; no database is present.

One of the requirements on the system was to support about 1000 concurrent users. Since Tomcat has 200 threads by default, and the calls to the other systems may take long (fortunately that's not the case at the moment), we decided to make use of Servlet 3.0 async support. This way, each AJAX call from the browser occupies a Tomcat thread only for the preparation of a call to the other system. The calls themselves are handled by our asynchronous library for communication with SafeQ and by an asynchronous HTTP client for communication with the Payment System, both of which have their own thread pools and fill in the response when they get a reply.
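To illustrate the pattern (a sketch only, with made-up controller, client and type names, not our production code): the controller returns Spring's DeferredResult immediately, and the async client's callback completes it later.

@RestController
public class EntitlementController {

    @Autowired
    private PaymentSystemClient paymentSystemClient; //illustrative async client

    @RequestMapping("/entitlement")
    public DeferredResult<Entitlement> getEntitlement(@RequestParam String userGuid) {
        //returned immediately, which releases the Tomcat thread
        DeferredResult<Entitlement> result = new DeferredResult<>();
        paymentSystemClient.getEntitlement(new ListenableFutureCallback<Entitlement>() {
            @Override
            public void onSuccess(Entitlement entitlement) {
                result.setResult(entitlement); //completed from the client's own threadpool
            }

            @Override
            public void onFailure(Throwable ex) {
                result.setErrorResult(ex);
            }
        }, userGuid, null);
        return result;
    }
}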

Since we depend so much on other systems’ performance, we wanted to monitor the execution time of the requests for better tracking and debugging of production problems.

There are several endpoints for each system and more can come later, so to avoid duplication we decided to leverage Spring's aspect-oriented programming support. We created an aspect for each business service (one for SafeQ, one for the Payment System), and it was time to implement the measurement logic.
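The remoteServiceMethod() pointcut referenced below is not shown in this article; it might be declared roughly like this (the package and naming convention are illustrative):

@Pointcut("execution(* com.example.remote.*Service.*(..))")
public void remoteServiceMethod() {
    //pointcut marker method, intentionally empty
}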

In a synchronous scenario, things are pretty simple: just intercept the call, note the start and end times, and if the call took too long, log it to a logfile.

@Around("remoteServiceMethod()")
public Object restTimingAspect(ProceedingJoinPoint joinPoint) throws Throwable {
    long start = System.currentTimeMillis();

    Object result = joinPoint.proceed();

    long end = System.currentTimeMillis();

    long executionInMillis = end - start;
    if (executionInMillis > remoteServiceCallDurationThresholdInMillis) {
        LOGGER.warn("Execution of {} took {} ms.", joinPoint, executionInMillis);
    }

    return result;
}

This won't work in our asynchronous case. The call to joinPoint.proceed(), which makes the call to the other system, returns immediately, without waiting for a reply. The reply is processed in a callback provided to one of the async communication libraries. So we have to do a bit more.

We know the signature of our business methods. One of the arguments is always callback, which will process the reply.

public void getEntitlement(ListenableFutureCallback callback, String userGuid, Long costCenterId)

If we want to add our monitoring logic in a transparent way, we have to create a special callback implementation which wraps the original callback and tracks the total execution time.

class TimingListenableFutureCallback implements ListenableFutureCallback {

    private ListenableFutureCallback delegate;
    private StopWatch timer = new StopWatch();
    private String joinPoint;

    public TimingListenableFutureCallback(ListenableFutureCallback delegate, String joinPoint) {
        this.delegate = delegate;
        this.joinPoint = joinPoint;
        timer.start();
    }
        
    @Override
    public void onSuccess(Object result) {
        logExecution(timer, joinPoint);
        delegate.onSuccess(result);
    }

    @Override
    public void onFailure(Throwable ex) {
        logExecution(timer, joinPoint);
        delegate.onFailure(ex);
    }
}
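The logExecution helper isn't shown above; a minimal version consistent with the synchronous aspect could look like this (a sketch assuming Spring's StopWatch and the same threshold field as before):

private static void logExecution(StopWatch timer, String joinPoint) {
    timer.stop();
    long executionInMillis = timer.getTotalTimeMillis();
    if (executionInMillis > remoteServiceCallDurationThresholdInMillis) {
        LOGGER.warn("Execution of {} took {} ms.", joinPoint, executionInMillis);
    }
}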

Then we have to call the target business method with a properly wrapped callback argument.

@Around("remoteServiceMethod() && args(callback,..)")
public Object restTimingAspect(ProceedingJoinPoint joinPoint, ListenableFutureCallback callback) throws Throwable {
    String joinPointName = computeJoinPointName(joinPoint);
    
    Object[] wrappedArgs = Arrays.stream(joinPoint.getArgs()).map(arg -> {
        return arg instanceof ListenableFutureCallback ? wrapCallback((ListenableFutureCallback) arg, joinPointName) : arg;
    }).toArray();
    
    LOGGER.trace("Calling remote service operation {}.", joinPointName);
    
    return joinPoint.proceed(wrappedArgs);
}

There is a similar implementation for the other async messaging library.

I hope this solution will help you solve similar problems in your applications in an elegant manner 🙂

While choosing the right security layer for messaging via ZeroMQ, there were two main considerations: the built-in CurveZMQ security, or the usage of a more conventional TLS backend?

The first option is ZeroMQ's own security protocol, based on elliptic curve cryptography and Daniel Bernstein's NaCl cryptographic library. CurveZMQ utilizes not only Bernstein's Curve25519 elliptic curve, but also other ciphers designed by him, e.g. Salsa20 and Poly1305. NaCl's other ciphers can be found on its web page under the Public-key and Secret-key cryptography chapters.

From the beginning this option was a really hot candidate, as according to the papers on the performance of Salsa20 and the performance of Curve25519, NaCl's crypto functions are faster than standard crypto functions. The fact that it was already included in ZeroMQ was a big plus as well. Unfortunately, the NaCl library and its functions were created in 2008-2010 and aren't as widely used as, for example, AES or SHA, and therefore aren't as mature as other libraries. That brought a little bit of panic to our minds and ultimately was the reason for not using it.

The second option was the usage of a TLS backend: TLSZMQ. This is a demo project by Ian Barber to secure ZeroMQ communication using the OpenSSL library. Unfortunately, TLS isn't built into ZeroMQ, so for now this is the only option for using TLS and ZeroMQ together 🙁

Implementations comparison

To find the best approach we had a few requirements to check. Mainly it was the speed of messaging and the maturity of the library and its crypto functions, but other requirements such as support and liveliness of the implementation, licensing and code quality also weighed in the decision.

From this set of metrics, only 2 turned out to be ultimately decisive:

  • Library and crypto functions maturity
  • Speed of messaging

While the maturity of the NaCl library was a strong argument against CurveZMQ, the performance of NaCl mentioned before spoke strongly in its favor.

To finally resolve which approach to use and to determine the total overhead of each implementation, I created performance tests to find out whether TLSZMQ could be used in our environment. The test suite for each implementation was designed as a block of 20 separate tests running for 10 minutes each, to ensure reliable outcomes. Even the message sizes were chosen to reflect the size of the messages actually sent in our environment.

[Graphs: total throughput and messages per second for each implementation]

At this point, the results clearly spoke in favor of CurveZMQ. On the other hand, there was the conflict with the maturity of the crypto functions used in CurveZMQ and its use of a single elliptic curve for key generation. This could cause many problems if the curve or the other functions were compromised in the future.

Then I realized that the tests had been run with OpenSSL's default cipher suite RSA-AES256-SHA384 (OpenSSL version 1.0.1p). There is nothing wrong with this cipher suite, but we have to admit that these algorithms were unnecessarily secure (read: had high overhead) for our purposes.

Yes, I know how that sounds, but we must also consider the environment of the messaging (its encryption) and the performance of the system as a whole. Only after thorough examination should we match the ciphers to the real requirements, rather than just use cipher suites with unnecessarily high security (and overhead) as overkill.

Therefore, the tests were rerun with more appropriate (weaker, but still strong) cipher suites. The requirement was to get as close to CurveZMQ's performance as possible:

  • elliptic curves used everywhere possible (any EC could be used, not just one)
    • key exchange [ECDH]
    • authentication [ECDSA]
  • AES doesn't have to use a 256b key, as 128b keys are still considered pretty strong

From the few acceptable cipher suites, one specific suite stood out: ECDSA-AESGCM128-SHA256.
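To illustrate what such a deliberate restriction looks like in code (a Java/JSSE sketch for illustration only – TLSZMQ itself configures OpenSSL – using the standard TLS name of an ECDHE-ECDSA, AES-128-GCM, SHA-256 suite):

SSLContext context = SSLContext.getDefault();
SSLEngine engine = context.createSSLEngine();
//allow only the single, deliberately chosen cipher suite instead of the defaults
engine.setEnabledCipherSuites(new String[] { "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" });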

This rerun showed that the new cipher suite increased the messaging performance considerably (sometimes effectively doubling the speed) [see graphs].

From the results one can see that TLSZMQ isn't that much inferior to CurveZMQ, and thanks to TLS's maturity it is the better choice. The final argument is that the TLSZMQ test results were worse, but still sufficient for the YSoft SafeQ environment, so the choice of implementation was pretty straightforward.

The truth is that NaCl and Curve25519 are really fun (and much easier) to play with, and I encourage you to at least have a look at them. But in the end, a TLS backend is more flexible in terms of crypto primitives (ciphers, key generation) and doesn't bring as many security concerns. And that's what we are looking for in secured ZeroMQ messaging.

As I wrote some time ago, we use OWASP Dependency-Check to scan our product for known vulnerabilities. But when you have reports from many components of the product, you need to manage them somehow. In the previous article, I briefly mentioned our tool that processes the reports. In this article, I'll present this tool. FWIW, we have open-sourced it. You can use it, you can fork it, you can send us pull requests for it.

Vulnerable libraries view

The vulnerable libraries view summarizes vulnerabilities grouped by library. Libraries are sorted by priority, which is computed as the number of affected projects multiplied by the score of the highest-severity vulnerability (so, for example, a library whose worst vulnerability scores 9.8 and which affects 3 projects gets priority 3 × 9.8 = 29.4). This is probably the most important view. If you choose a vulnerable library, you can check all the details about its vulnerabilities, including the affected projects. When you decide to update a library to a newer version, you will probably wonder whether some known vulnerability is present in the new version. ODC Analyzer allows you to check this by simply entering the desired library version.

One might find the ordering controversial, mostly because it uses the highest-rated vulnerability. Maybe the scoring system is not perfect (and no automatic scoring system can be), but I find it reasonable. I assume that the highest-scored vulnerability is likely some remote code execution triggerable by a remote unauthenticated attacker. In such a case, does having multiple such vulnerabilities make the situation worse? Well, it might slightly increase the probability that the vulnerability will be exposed to the attacker, but having two such vulnerabilities can hardly make it twice as bad as having just one. So I wanted one highest-rated vulnerability to be the ceiling of the risk introduced by the vulnerable library to the project. Maybe we could improve the algorithm to rate multiple low-severity vulnerabilities higher than the severity of the highest-rated vulnerability. I already have an idea how to do this, but it has to be discussed first.

Vulnerabilities view

You can also list all vulnerabilities affecting one of the scanned projects. They are sorted by severity multiplied by the number of affected projects. The details available there are mostly the same as in the vulnerable libraries view.

Another interesting feature is vulnerability suppression. Currently, ODC Analyzer can generate a suppression that can be pasted into suppressions.xml, so it is taken into account the next time the vulnerability scan runs. We are considering making this smarter and moving the suppression into ODC Analyzer itself.

Filters

Unless you have just a single team, most people are probably not interested in all the vulnerabilities of all the projects; they will want to focus only on their own projects instead. ODC Analyzer allows this by filtering a single project or even a subproject. One can also define a team and filter all projects belonging to that team.

Teams are currently defined in a configuration file; there is no web interface for them yet.

E-mail notifications

One can subscribe to new vulnerabilities of a project by e-mail. This allows watching all relevant vulnerabilities without periodically polling the page.

Export to issue tracking system

We are preparing export to an issue tracking system. There is a preliminary implementation, but we might still perform some redesigns. We export one issue per vulnerability. The rationale behind this grouping is that it allows efficient discussion when multiple projects are affected by the same vulnerability. A side effect of this design is that you need an extra project in your issue tracker, and you might want to create child issues for particular projects.

Currently, only JIRA export is implemented. However, if you want to export to <insert name of your favourite issue tracker here>, you can simply implement one interface, add a few lines of code for configuration parsing and send us a pull request 🙂

Some features under consideration

Affected releases

We perform scans on releases. It would be great to add an affected releases field to vulnerabilities. We, however, have to be careful to explain its meaning. We are not going to perform these scans on outdated unsupported releases, so they will not appear there. It must therefore be clear that the list of affected releases is not exhaustive.

Discussion on vulnerabilities

We are considering adding a discussion to vulnerabilities. One might want to discuss the impact on a project. These discussions should be shared across projects, because that allows knowledge sharing across teams. However, this will probably be delegated to issue tracking systems like JIRA, as we don't want to duplicate their functionality. We want to integrate them, though.

Better branches support

If a project has two branches, it can currently be added only as two separate projects. It might be useful to take into account that multiple branches of a piece of software belong to the same project. There are some potential advantages:

  • Issues in production might have higher urgency than issues in development.
  • Watching a particular project would imply watching all the branches.

However, it is currently not clear how to implement this, and we don't want to start implementing the feature without a proper design. For example:

  • Some projects might utilize build branches in Bamboo.
  • Some projects can’t utilize build branches, because there are some significant differences across branches. For example, some projects switch from Maven to Gradle.
  • Is it useful to allow per-branch configuration (e.g., two branches belonging to different teams or watching only one branch)?
  • Should the branches be handled somewhat automatically?
  • How to handle different branching models (e.g. master + feature branches, master + devel + feature branches, …)?

Library tagging

Library tagging is useful for knowing what types of libraries are in the system. It is partially implemented: it works, but it has to be controlled through direct access to the database. There has never been a GUI for creating a tag. Once some tags exist, there is a GUI for assigning them to a library, but there is no GUI (or permission) for creating a new one.

Homepage

The project was originally designed as a rather single-page application. This did not scale, so we added some additional views. The current homepage is rather a historical leftover and maybe it should be completely redesigned.

List of all libraries

Non-vulnerable libraries are not so interesting, but one might still want to list them for some purposes. While there is a hidden page containing all the libraries (including a hidden CSV output), it is not integrated into the rest of the application. We have needed this in the past, but we are not sure how to meaningfully integrate it with the rest of the system without creating too much clutter.

Conclusion

We have implemented a tool for handling libraries with known vulnerabilities and for cooperation across teams. The tool is in active development. If you find it useful, try it. If you miss a feature, you can contribute code. If you are unsure whether your contribution is welcome, you can discuss it first.

Black Hat Europe 2015

Black Hat is a famous computer security conference held annually in the USA, Europe and Asia. Philippe Arteau, the original developer of FindSecurityBugs, was accepted as a presenter, and because of my significant contributions to the software, he suggested we present the tool together. Y Soft then supported me in taking this opportunity.

Black Hat Europe 2015 was held in November in Amsterdam, and over 1500 people from 61 countries attended the event. The conference opened with a keynote called What Got Us Here Won't Get Us There, which touched on the upcoming security apocalypse, complexity as a problem, security anti-patterns, taking wrong turns, developing bad habits, missing opportunities and re-examining old truths (using many slides). The Forum and three smaller rooms were then used for parallel briefings – usually one-hour presentations with slides and some demonstrations of the latest developments in information security. There was also a Business Hall with sponsors, free coffee and a Tool/Demo area for the sub-event called Arsenal. Here, we and mostly independent researchers showed our "weapons" – every tool had a kiosk for two hours, and people came by to see short demonstrations and ask questions.

Despite presenting during lunch time, a fair number of Black Hat attendees came to discuss and see FindSecurityBugs in action. The most common questions were about what it can analyse (Java and other JVM languages like Scala; no source code needed), which security weaknesses it can detect (the list is here; a few people were happy that there are also detectors for Android) and whether it is free or not (yes, it is open source). People seemed to like the tool, and someone was quite surprised it is not a commercial product 🙂

There were many interesting briefings during the two days. One of the most successful presentations (with applause in the middle of it) explained a reliable software-only attack to defeat full disk encryption (BitLocker) in Windows. If there was no pre-boot authentication and the attacked machine had joined a domain, it was possible to change the password at the login screen (and gain full access to the system) by setting up a mock domain controller with the user's password expired. Microsoft released a patch during the conference. Other briefings included breaking into buildings, using continuous integration tools as an attack surface, fooling and tracking self-driving cars, and delivering exploits "with style" (encoded in an image/HTML polyglot). One presentation (about flaws in banking software) was cancelled (or probably postponed to another event) because the findings were too serious to disclose at that time, but there was an interesting talk about backend-as-a-service instead – many Android applications contain hard-coded credentials to access cloud services, and researchers were able to extract 56 million sensitive user records. You can download many of the presentations (and some white papers) from the Black Hat website.

I also had time for a little sightseeing – there was a special night bus for attendees to the city centre, and I was able to see the city again before my flight home. Amsterdam has nice architecture separated by kilometres of canals and is a very interesting city in general. What surprised me most were the bicycles everywhere – I knew people ride a lot here, but I didn't expect to see such a huge number of bikes moving around and parked in the centre. Riders don't wear helmets, sometimes carry a few children in a cart and don't seem to be very careful (so I had to be a bit more careful than I'm used to when crossing streets). Walking through the red-light district De Wallen, with adult theatres and half-naked ladies behind glass doors, is a remarkable experience too (don't try to take photos of them, they'll get angry). Sex shops and "coffee shops" (selling cannabis) are quite common, and not only in this area.

Another surprise came at the airport, where the inspection was very thorough and I was forced to take everything out of my bag (which I was not very happy about, because it had taken me a long time to pack it into a single cabin baggage). Just afterwards (when I connected to the airport Wi-Fi) I realized what had happened in Paris recently. The plane was also surprisingly small, and for the first time I received special instructions for opening the emergency door (since my seat happened to be next to the right wing). Nevertheless, the whole trip was a nice experience for me and I was happy to be there – so maybe next time in London 🙂