Y Soft Applied Research and University Relations (ARUR) is a program under which we Ysofters:

  • Supervise and consult students working on their theses
  • Support student research positions at research laboratories
  • Offer student internships
  • Give lectures at universities
  • Organize and support student events

This post provides statistics, collected over the last six years, about some of the Y Soft ARUR activities mentioned above.

Theses supervision

During the last six years, 46 bachelor and master theses written in collaboration with Y Soft were successfully defended. See Figure 1 for the number of theses defended each year. In 2015, for example, 6 bachelor and 8 master theses were successfully defended. Most of the theses originated in Y Soft R&D in collaboration with the Faculty of Informatics, Masaryk University.


Figure 1. Number of defended theses

More than 75% of all theses defended during the last six years received an “A” or “B” grade. See Figure 2 for grade statistics per year. In 2015, for example, 14 theses were successfully defended, receiving eight “A” and six “B” grades (i.e., 100% of the theses defended in 2015 received an “A” or “B” grade).


Figure 2. Grades of defended theses

Since 2012, five students have received awards for their theses (see Figure 3).


Figure 3. Number of awards

Part-time positions at Y Soft

Fourteen students were given part-time jobs within the scope of ARUR during the last six years. Some of these part-timers have been working at Y Soft on their bachelor or master theses for more than a year. Ten students came from the Faculty of Informatics (Masaryk University), two from the Faculty of Electrical Engineering and Communication (Brno University of Technology), and two from the Faculty of Information Technology (Brno University of Technology).

Student positions at research laboratories

Every year, in cooperation with the Faculty of Informatics, Masaryk University, we participate in organizing a competition for talented students in the first or second year of their bachelor studies. During the last six years, we have supported nine students who won the competition; some of them were funded for more than one year. In 2015, six student positions at research laboratories were funded.

Thanks to everyone already participating in our ARUR program, and we look forward to welcoming new participants!

We are more than happy to discuss any suggestions and ideas for cooperation you might have.

Contact us: uni-rel@ysoft.com

For those who attended the Robot Framework workshop at the Test Crunch conference, you can find more details about the environment setup, source code, and books below.

Installing Robot Framework on your computer

  1. Download and install Python 2.7 (32/64 bit based on your OS) from https://www.python.org/downloads/
  2. Add the Python location to the Path environment variable (e.g. c:\Python27\;c:\Python27\Scripts\)
  3. Download and install wxPython 2.8 with unicode support (32/64 bit based on Python version) from http://sourceforge.net/projects/wxpython/files/wxPython/2.8.12.1/
  4. Install pip (package manager) by downloading get-pip.py (from https://bootstrap.pypa.io/get-pip.py), opening the command line, and running python get-pip.py
  5. Open the command line and run the following commands to install Robot Framework and the additional libraries
    pip install robotframework
    pip install robotframework-ride
    pip install robotframework-selenium2library
  6. Start RIDE by running ride.py from the command line

Workshop source code

Books and other study materials

The recording of the workshop will be available soon.

Stay tuned and enjoy Robot Framework.

Members of our robot team participated in a worldwide robot competition held on April 11-12, 2015, in Vienna, Austria. There were many competition categories, including Humanoid sumo, in which we participated. Over 600 robots were registered across all categories, 16 of them in Humanoid sumo.

Robot specifications

Each humanoid robot has to meet certain specifications, e.g. maximum dimensions and weight. The rules also require it to have a head, two legs, two arms and a name. We named our robot YSoft Ragnarök, after the great foretold battle from Norse mythology. The weight limit is 3000 g, which was quite problematic for us: at the beginning of the competition we had to reduce the robot's weight to 2997 g by removing some insignificant parts. Our strategy relies on great stability, which many other robots lack, but it demands heavy parts, especially at the bottom of the robot. A heavy body also reduces mobility and speed, so we had to develop a better solution for finding the opposing robot reliably, moving directly to it and wrecking it. For this purpose we used ultrasonic sensors with a maximum range of 2 meters, high precision and low power consumption.

Arena rules

The tournament started with qualifications and continued with single-match elimination. Each match starts with two robots in opposing corners, facing each other. The main goal is to push the other robot out of the arena or to knock it down. If a robot is pushed out of the arena, it can be placed within the arena again; however, it must be placed face down. If the robot can stand up autonomously, the match continues. A team gains 3 points for pushing the other robot out of the arena. If a robot falls in the arena, the opposing team gains 1 point; two points are awarded for knocking the opponent to the ground. A match ends if a robot is knocked out (and cannot stand up), if it does not move for a period of time, or if the time for the match runs out (the maximum is 3 minutes). The competitor with the highest score wins.

Our performance

We beat every robot in our qualification group and advanced into the semifinals without suffering a loss. We had been in this position before, only to lose the next two matches and finish in 4th place; this time we were determined not to repeat that outcome. The first semifinal match, against the Mexican robot Speedy Gonzales, was quite even, as the opponent avoided contact with us, so we only managed to knock it down once. The second match, versus another Mexican robot, Atom, was more one-sided, because it could not get up after a knockout (this match can be viewed here).

In the finals we faced our old rival from Poland, the robot DUE. In the end we beat them and won first place (video). The Polish robots DUE and UNO took 2nd and 3rd place.


In QA we use a robotic arm to autonomously operate a multifunctional device (MFD) according to a given test that is repetitive, time-consuming or not performable by a human.

How does the robot know where the screen is located? How do 2D screen coordinates transform into the robot's coordinate system?

The robot moves the end effector (stylus) in a 3D Cartesian coordinate system with the origin [0,0,0] at the center of the first servomotor. The position of the stylus is then transformed into the angles of all servomotors by inverse kinematics. It is also possible to calculate the position of the stylus given the angles of the servomotors (forward kinematics). However, the screen of the MFD is a 2D plane in 3D space with unknown origin, dimensions and rotation. The robot needs to know where the screen is located relative to its own origin in order to correctly tap any button on the screen.

The dimensions of the screen need to be measured by hand in millimeters. We use 2D coordinates with the origin at the bottom left corner to define a position on the screen. In 3D space, the position and rotation of any plane are uniquely determined by three non-collinear points. If these three points are known, a transformation matrix can be found; this matrix multiplied by a position on the plane gives the corresponding position in 3D space.
Previously we used a 'basic' calibration, where the robot is navigated through the bottom left corner [0, 0] (the origin of the screen), the top left corner [0, height] and the top right corner [width, height]. At each corner, the stylus position (in 3D space) is saved and the transformation matrix is calculated. This method of calibration requires a lot of precision, because even a slight deviation at a corner leads to a similar deviation in every tap, so the robot might not accurately hit the desired button. There is no feedback from the MFD, but sometimes there is no other way to perform the calibration.
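To illustrate the idea, here is a minimal sketch (with illustrative names, not our production code) of how the 3D position for a screen point given in millimeters can be computed directly from the three corner positions saved during calibration:

// Corners saved during calibration, as 3D vectors {x, y, z}:
// bottomLeft ~ screen [0, 0], topLeft ~ [0, height], topRight ~ [width, height]
double[] screenToRobot(double x, double y, double width, double height,
        double[] bottomLeft, double[] topLeft, double[] topRight) {
    double[] p = new double[3];
    for (int i = 0; i < 3; i++) {
        double u = topRight[i] - topLeft[i];   // direction of the screen's x axis
        double v = topLeft[i] - bottomLeft[i]; // direction of the screen's y axis
        p[i] = bottomLeft[i] + (x / width) * u + (y / height) * v;
    }
    return p;
}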


With the new semi-automatic calibration, we created a custom version of the Terminal Server (the component which handles communication with the MFD) that detects any tap on the MFD screen and sends its coordinates (X, Y in pixels) to the Robot application. The screen resolution is also required, so the Terminal Server sends that on demand. With the knowledge of the screen dimensions and resolution, the robot is able to convert the position of a tap (in pixels) to a position on the screen (in millimeters) and save the end effector's position. The semi-automatic calibration procedure is almost the same as the basic one, but the robot can be navigated to any point within a marked rectangle, not just to a specific corner. This removes the need for precision. However, a problem occurred in the form of inaccurate values on the Z axis.

For this purpose we developed an automatic recalibration. It takes the data gained from the semi-automatic calibration and automatically repeats the same procedure with the knowledge of the existing corners of the screen. It goes through the same three corners as before, but it starts higher above each point and slowly descends to measure the Z coordinate accurately. After recalibration, all data from the semi-automatic calibration are discarded and replaced with the values from the automatic calibration. This procedure eliminates any error made by an engineer during calibration and makes the robot's calibration nearly perfect.

Most systems today need to handle user authentication. This means that the password entered during user registration must be stored in the system for later comparison.

It is obvious that passwords must not be stored in plain-text form. In that case, an attacker who succeeded in getting access to the database where these passwords are stored (e.g. using SQL injection) would obtain the whole list of user names with their corresponding passwords. It would then be very simple for him to impersonate a valid user.

Hashing

However, to check whether the password entered by a user is correct, we do not need the original password. It is enough to store information that uniquely identifies it and can easily be computed from each password entering the system.

Such information is the password hash. A hash algorithm is a one-way function generating a fixed-length string from its input (in this case the given password), with no possibility of deriving the input back from the computed string. Another property of a cryptographic hash function is that a change of one input bit leads to a change of many bits in the resulting hash. When the hash function is collision-free, we can assume that identical hashes imply identical inputs.

So instead of the password itself, only its hash is stored in the system. Every time a user tries to log in, the hash of the entered password is computed and compared to the stored one.

Slow hashing

However, common cryptographic hash functions such as MD5 or SHA are not appropriate. The purpose of these functions is to calculate digests of large amounts of data to ensure their integrity, and such digests need to be computed in as short a time as possible. These hash functions are therefore designed to be fast, a property that is not desirable for password hashing.

Take the MD5 function as an example. One 2.13GHz core is able to compute about 6 million MD5 hashes per second using the Cain & Abel tool. Trying every possible 8-character lowercase alphanumeric password then takes approximately 130 hours. And that is only one core: modern computers have more of them, and with six such cores a password can be cracked in less than a day. Furthermore, we can safely assume that an attacker has much better equipment.
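The arithmetic behind the 130 hours: there are 36^8 ≈ 2.8 · 10^12 such passwords, and 2.8 · 10^12 / (6 · 10^6 hashes per second) ≈ 4.7 · 10^5 seconds, which is roughly 130 hours.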

In order to prevent an attacker from trying millions of hashes per second, we need to use a slow cryptographic hash function for password hashing. Several hash functions have been specifically designed for this purpose, including PBKDF2, bcrypt and scrypt.

Work factor parameters

These hash functions are not only slow, they also come with work factor parameters defining how expensive a hash computation will be. Although the scrypt function is the youngest one (designed in 2009), it has an advantage over the older ones: it defines not only the CPU cost but also the memory requirements. That is why scrypt is the recommended function for password storage, and this article talks mainly about it.

Scrypt uses the following work factor parameters:

  • N – number of iterations, related to both memory and CPU cost
  • r – size of the RAM block needed, related to memory cost
  • p – parallelization, defines maximum number of threads, related to CPU cost

These parameters allow us to set the memory needed and the time it takes to compute one hash. The approximate memory usage in bytes for a single hash generation can be computed from the parameters using the following formula:

memory = N · 2 · r · 64
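For example, with N = 2^14 and r = 8, a single hash computation needs 16384 · 2 · 8 · 64 bytes = 16 MiB of memory.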

The time, on the other hand, is platform-dependent. The graph below shows the dependency of the time needed for a single hash computation on the work factor parameters N and r; the parallelization parameter is set to 1 in all cases. The values in the graph were measured using CryptSharp, a C# implementation of the scrypt function, on Windows Server 2012 with four 2.2GHz cores.

[Graph: time of a single scrypt hash computation for various values of N and r]

The computation time needs to be chosen as a compromise between usability and the security provided. For example, in a system with only one login at a time where high security is needed, we can set the parameters so that the computation takes about one second. In the case of many parallel logins, however, this time needs to be set to only a few milliseconds.

We can return to the password-cracking example above. Using the scrypt function (CryptSharp implementation) with parameters N=2^10, r=4 and p=1, hashing one password takes approximately 10 ms, i.e. one 2.2GHz core is able to compute 100 hashes per second. Computing all possible 8-character lowercase alphanumeric passwords then takes 895 years.

Attacker goals

Imagine an attacker who has obtained the list of user names and corresponding password hashes. There are three goals he can have:

  • Crack a password of one specific user (e.g. admin)
  • Crack a password of any user
  • Crack passwords of a longer list of users

Attacks

In the first case the attacker has a password hash and wants to find the corresponding password it was computed from. He can use a brute force or dictionary attack, i.e. try many possible inputs to the hash function and compare the results with the obtained password hash. An effective method for trying so many hashes is the use of lookup tables. The general idea is to pre-compute hashes of possible passwords and store them in a lookup table data structure (or in rainbow tables for lower memory requirements). Comparing these pre-computed values with a given hash is much faster than computing the hash.

The second case is simpler. The only thing needed is to compute hashes of possible inputs and compare each result with all password hashes in the obtained list. Sooner or later the attacker will hit a match.

For cracking a longer list of hashes, the attacker does not need to crack one password at a time; instead, he compares each computed hash with all hashes from the list. This way, cracking a list of hashes takes approximately the same time as cracking one specific password.

Salt

The above attacks work because each password is hashed the same way; the same password always results in the same hash. The simplest way of preventing this is salting: a random string (the salt) is generated for each password and used together with it to create the hash. The salts must be unique, so they really need to be randomly generated. Any random number generator can be used; however, cryptographically secure RNGs, such as RNGCryptoServiceProvider in C# or SecureRandom in Java, are recommended.
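For illustration, a minimal sketch of salt generation in Java using the SecureRandom class mentioned above (the 32-byte length is an arbitrary choice here):

public static byte[] generateSalt() {
    byte[] salt = new byte[32]; // arbitrary length; 16-32 bytes is a common choice
    new java.security.SecureRandom().nextBytes(salt);
    return salt;
}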

The salt is a non-secret value; it needs to be stored together with the password hash so that it is available to the hash function. Thus, if someone gets access to the hashes, he automatically gets all the salts as well. However, the power of the salt lies not in its secrecy but in its randomness.

With different salts, the same passwords result in different hashes. A pre-computed hash attack becomes infeasible due to the large additional memory requirements: an attacker would need to store pre-computed hashes for each possible salt.

Cracking the password of any user is reduced to cracking the password of a specific one, since the salt for each user password is different.

Cracking a larger list of hashes is also more complicated with a different salt for each password; the attacker has no choice but to crack one password at a time.

Pepper

To increase security even more, we can use another randomly generated string: the pepper. In contrast to the salt, the pepper needs to be kept secret, as it is used as an HMAC key. HMAC is a one-way algorithm based on a hash function, generating a fixed-length string from an input message and a secret key, which in our case is the generated pepper.

Since pepper is a secret key, it needs to be generated using a cryptographically secure random number generator, such as RNGCryptoServiceProvider.

public static byte[] GeneratePepper()
{
    byte[] pepper = new byte[32];
    RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
    rng.GetBytes(pepper);
    return pepper;
}

When generated, the pepper must be stored separately from the password hashes, in a configuration file with restricted access.

Even if an attacker had enough resources to crack the hash function, he would still need this secret value to obtain the user password. And with the pepper randomly generated for each system instance, if one instance is compromised, the others remain secure.

Overall scheme

The overall hashing of the password with both salt and pepper looks as follows:

scrypt(Base64(HMAC('SHA256', password, pepper)), salt, workFactors)

And the C# implementation of this scheme using CryptSharp library:

public byte[] HashPassword(String password, byte[] pepper, byte[] salt)
{
    if (salt == null)
    {
        Console.WriteLine("Password hash not created - salt is null.");
        return null;
    }

    String encodedHmac = HmacBase64(password, pepper);
    return CryptSharp.Utility.SCrypt.ComputeDerivedKey(
        Encoding.UTF8.GetBytes(encodedHmac), salt, n, r, p, null, HASH_LENGTH);
}

private static string HmacBase64(string password, byte[] pepper)
{
    if (pepper == null)
    {
        Console.WriteLine("Password hash not created - pepper is null.");
        return null;
    }
    HMACSHA256 hmac = new HMACSHA256(pepper);
    hmac.Initialize();
    byte[] buffer = Encoding.UTF8.GetBytes(password);
    byte[] rawHmac = hmac.ComputeHash(buffer);
    return System.Convert.ToBase64String(rawHmac);
}

Conclusion

User passwords must never be stored in plain text; always compute their hash using a slow cryptographic hash function. For each password, generate a random salt and use the two together for the hash computation. For a higher level of security, generate a random secret pepper for each system instance.

Of course, the security of a user password depends on the password itself. An attacker can still try frequently used passwords such as “123456”; however, with secure storage we can prevent him from trying too many of them and from obtaining the strong ones.

As many of you may know, Android has supported native printing since Android 4.4. This means that there is a new API handling communication between the application the user prints from and the application that later sends the job to the printer.


So how does it work? First, let's have a look at the applications the user prints from.

The main responsibility of these applications is to prepare the output for print in PDF format. This includes, for example, pagination or rendering the image in landscape or portrait orientation.

The printing application then uses the system PrintManager service.

PrintManager printManager = (PrintManager) getSystemService(Context.PRINT_SERVICE);

The document output is prepared with a PrintDocumentAdapter, which is passed as the second parameter of PrintManager's print function.

printManager.print(jobName, new PrintDocumentAdapter(...), printAttributes);
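A minimal sketch of such an adapter is below; it renders a single page with PrintedPdfDocument. It is illustrative only (context is a placeholder, and a real adapter should honor the passed PrintAttributes, page ranges and cancellation):

printManager.print("print job", new PrintDocumentAdapter() {
    private PrintedPdfDocument pdf;

    @Override
    public void onLayout(PrintAttributes oldAttributes, PrintAttributes newAttributes,
            CancellationSignal signal, LayoutResultCallback callback, Bundle extras) {
        // create a PDF document matching the requested print attributes
        pdf = new PrintedPdfDocument(context, newAttributes);
        if (signal.isCanceled()) {
            callback.onLayoutCancelled();
            return;
        }
        callback.onLayoutFinished(new PrintDocumentInfo.Builder("output.pdf")
                .setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT)
                .setPageCount(1)
                .build(), !newAttributes.equals(oldAttributes));
    }

    @Override
    public void onWrite(PageRange[] pages, ParcelFileDescriptor destination,
            CancellationSignal signal, WriteResultCallback callback) {
        PdfDocument.Page page = pdf.startPage(0);
        // draw the content to be printed on page.getCanvas() here
        pdf.finishPage(page);
        try (FileOutputStream out = new FileOutputStream(destination.getFileDescriptor())) {
            pdf.writeTo(out); // hand the finished PDF over to the print framework
            callback.onWriteFinished(pages);
        } catch (IOException e) {
            callback.onWriteFailed(e.getMessage());
        } finally {
            pdf.close();
        }
    }
}, printAttributes);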

Now we move to the second part of job printing, where we have to discover printers and send them our job. This is the responsibility of a PrintService.

Printer discovery

We can either add printers manually, setting their IP address and port, or we can look for network printers in the local network.

Let's have a look at how to find printers which support Zeroconf discovery in the local network. Implementations of Zeroconf include, for example, the Avahi daemon and Bonjour.

When printer discovery in Android is started, the onCreatePrinterDiscoverySession() method in PrintService is called. Here we have to create our PrinterDiscoverySession.

The responsibilities of PrinterDiscoverySession are pretty straightforward:

  • find and add discovered printers
  • remove previously added printers that disappeared
  • update already added printers

In this example we will use NsdManager. NsdManager is the native Android solution for finding Zeroconf services. Its functionality is rather limited, but for the purpose of this demo it is sufficient; there are other, better solutions, for example JmDNS. A current limitation of NsdManager is that it cannot read the TXT records of mDNS messages.

In order to use NsdManager we have to implement two interfaces: DiscoveryListener (which handles discovery callbacks) and ResolveListener (which handles resolve callbacks). I will call them OurDiscoveryListener and OurResolveListener.

First, in the onStartPrinterDiscovery() method, we create a new instance of the DiscoveryListener and start the actual discovery.

discoveryListener = new OurDiscoveryListener(nsdManager);
nsdManager.discoverServices(SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD, discoveryListener);

This is pretty self-explanatory. We specify the discovery service type, which is either "_ipps._tcp." or "_ipp._tcp.", depending on whether or not we want to encrypt the IPP messages.

When a service is found, OurDiscoveryListener handles what happens in the individual states of the discovery.

In the following code we can see that, for example, when a service is found, we try to resolve it with NsdManager.

public void onServiceFound(NsdServiceInfo service) {
    nsdManager.resolveService(service, new OurResolveListener());
}

Resolving a service means that we try to get more information about it, such as the host IP and port. OurResolveListener then handles what should happen when resolving succeeds or fails. When resolving succeeds, we process the gained data and save it for future use.
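A minimal sketch of what OurResolveListener can look like (the discoveredPrinters map is an illustrative placeholder for wherever we save the data):

private class OurResolveListener implements NsdManager.ResolveListener {

    @Override
    public void onServiceResolved(NsdServiceInfo serviceInfo) {
        // the resolved service carries the printer's host and port
        InetAddress host = serviceInfo.getHost();
        int port = serviceInfo.getPort();
        // store the data for later use, e.g. for building PrinterInfo
        discoveredPrinters.put(serviceInfo.getServiceName(), new InetSocketAddress(host, port));
    }

    @Override
    public void onResolveFailed(NsdServiceInfo serviceInfo, int errorCode) {
        // resolving can fail e.g. when the resolver is already busy; we could retry here
    }
}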

The last part of printer discovery is finding more details about the selected printer and checking whether the printer is still available. This is handled in the onStartPrinterStateTracking() method.

Discovering details about the printer can be done, for example, with the IPP operation Get-Printer-Attributes, setting the printer information according to the received data. The second task is to keep tracking the printer state.

The following code sample just shows how to set a few printer capabilities, which should be set according to the printer attributes. It does not contain tracking of the printer state.

@Override
public void onStartPrinterStateTracking(PrinterId printerId) {
    // check for info we found when printer was discovered
    PrinterInfo printerInfo = findPrinterInfo(printerId);

    if (printerInfo != null) {
        PrinterCapabilitiesInfo capabilities = new PrinterCapabilitiesInfo.Builder(printerId)
                .setMinMargins(new PrintAttributes.Margins(200, 200, 200, 200))
                .addMediaSize(PrintAttributes.MediaSize.ISO_A4, true)
                .addMediaSize(PrintAttributes.MediaSize.ISO_A5, false)
                .addResolution(new PrintAttributes.Resolution("R1", "200x200", 200, 200), false)
                .addResolution(new PrintAttributes.Resolution("R2", "200x300", 200, 300), true)
                .setColorModes(PrintAttributes.COLOR_MODE_COLOR |
                        PrintAttributes.COLOR_MODE_MONOCHROME, PrintAttributes.COLOR_MODE_COLOR)
                .build();

        printerInfo = new PrinterInfo.Builder(printerInfo)
                .setCapabilities(capabilities).build();

        // We add printer info to system service
        List<PrinterInfo> printers = new ArrayList<>();
        printers.add(printerInfo);
        addPrinters(printers);
    }
}

When a different printer is selected, onStopPrinterStateTracking() is called, followed by onStartPrinterStateTracking() for the new printer.


Printing

Android itself doesn't contain an implementation of any printing protocol, so I created a small IPP parser. But that's a topic for another day.

Here I will only show an example of handling a queued print job.

In the following code we pick a job by its id from the saved processed jobs and set its state to printing. The PrintTask class in the example is just an Android AsyncTask which creates the IPP request and appends the job data in the background.

public void handleQueuedPrintJob(PrintJobId printJobId, PrinterId printerId) {
    final PrintJob printJob = mProcessedPrintJobs.get(printJobId);
    if (printJob == null) {
        return;
    }

    if (printJob.isQueued()) {
        printJob.start();
    }

    OurPrinter ourPrinter = ourDiscoverySession.getPrinter(printerId);

    if (ourPrinter != null) {
        AsyncTask <Void, Void, Void> printTask =
                new PrintTask(printJob, ourPrinter);
        printTask.execute();
    }
}

If we have decided to use IPPS, we also have to set the correct certificate. The next step is to create a new HttpURLConnection (or HttpsURLConnection for secure transfer).

The last thing we have to do is write our IPP message into the output stream, send it and wait for the response from the server.
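Roughly, such an exchange can look as follows (printerHost, printerPort and ippRequestBytes are placeholders for the data gathered during discovery and for the serialized IPP operation, e.g. Print-Job):

URL url = new URL("http://" + printerHost + ":" + printerPort + "/ipp/print");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.setRequestProperty("Content-Type", "application/ipp");

try (OutputStream out = connection.getOutputStream()) {
    out.write(ippRequestBytes); // IPP header + attributes + job data
}
int status = connection.getResponseCode(); // HTTP status of the exchange
try (InputStream in = connection.getInputStream()) {
    // parse the IPP response (status code, attributes) from the stream
}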

Android manifest file

We have to set the necessary permissions in the Android manifest file in order to be able to run the PrintService.
Add the android.permission.BIND_PRINT_SERVICE permission when declaring the service. Example:

...
<service android:name=".OurPrintService" 
    android:permission="android.permission.BIND_PRINT_SERVICE">

    <intent-filter>
        <action android:name="android.printservice.PrintService" />
    </intent-filter>
</service>
...

This allows the system to bind to the PrintService. Otherwise the service wouldn't be shown in the system.

Also

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.START_PRINT_SERVICE_CONFIG_ACTIVITY" />

permissions are needed for printer discovery and to access files in external storage.

We use Liquibase in our project as a DB change management tool. We use it to create our DB schema with the basic configuration our application needs to run.

This is, however, not enough for development or (unit) testing. Why? Because for each test case, we need the database to be in a particular state. E.g., I need to test that the system rejects the action of a user that does not have the required quota for the action he/she wants to perform. So I create a user with a quota of 0, try to perform the action, and see whether the system allows it or rejects it. To avoid setting up our test data repeatedly, we use special Liquibase scripts that set up what we need in our test environment (such as a user with a 0 quota), so that we do not have to do this manually.

For the Payment System project, we used Liquibase to run plain SQL scripts to insert the data that we needed. It worked well enough, but this approach has some disadvantages.

Mainly, the person writing the scripts has to have knowledge of our database and the relations between various tables, so there is a steep learning curve. Another issue is that all of the scripts have to be updated when a larger DB schema change takes place.

Therefore, during our implementation of the Quota System, I took inspiration from the work of my colleague, who used a kind of high-level DSL as a convenient way to set up the Quota System, and I turned it into a production-ready feature on top of Liquibase. This solved the problem of manual execution: the scripts always run during application startup and are guaranteed to run exactly once.

For the DSL, I chose Groovy, since we already use it for our tests, and there is no interoperability issue with the Java-based Liquibase.

Liquibase has many extension points and implementing a custom changelog parser seemed the way to go.

The default parser for XML parses the input file, generates internal Liquibase objects representing particular changes, and then transforms them to SQL, which is executed on the DB.

I created a similar parser, which is registered to accept Groovy scripts.

The parser executes the script file, which creates the internal Liquibase objects; of course, one DSL keyword can create several insert statements. What is more, when the DB schema changes, the only place that needs to change is the DSL implementation, not all of the already created scripts.
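For context, a parser registered this way can look roughly like the following sketch against the Liquibase 3.x ChangeLogParser interface (GroovyTestDataParser and runDslScript are illustrative names, not our actual implementation):

public class GroovyTestDataParser implements ChangeLogParser {

    @Override
    public boolean supports(String changeLogFile, ResourceAccessor resourceAccessor) {
        // accept our Groovy test data scripts
        return changeLogFile.endsWith(".groovy");
    }

    @Override
    public int getPriority() {
        return PRIORITY_DEFAULT;
    }

    @Override
    public DatabaseChangeLog parse(String physicalChangeLogLocation,
            ChangeLogParameters changeLogParameters, ResourceAccessor resourceAccessor)
            throws ChangeLogParseException {
        DatabaseChangeLog changeLog = new DatabaseChangeLog(physicalChangeLogLocation);
        ChangeSet changeSet = new ChangeSet("test-data", "dsl", false, false,
                physicalChangeLogLocation, null, null, changeLog);
        // runDslScript is hypothetical: it evaluates the Groovy DSL script and
        // returns the Liquibase Change objects the DSL keywords have produced
        for (Change change : runDslScript(physicalChangeLogLocation, resourceAccessor)) {
            changeSet.addChange(change);
        }
        changeLog.addChangeSet(changeSet);
        return changeLog;
    }
}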

Example Groovy script:

importFile('changelog.xml')

bw = '3' //using guid of identifier which is already present in database
color = identifier name:'color'
print = identifier name:'PRINT', guid:'100' //creating identifier with specific guid

q1 = quota guid:'q1', name:'quota1', limit:50, identifiers:[bw, print], period:weekly('MONDAY')
q2 = quota guid:'q2', name:'quota2', limit:150, identifiers:[color, print], period:monthly(1)

for (i in 1..1000)
    quotaSubject guid: 'guid' + i, name:'John' + i, quotas:[q1,q2]

This example shows that the Groovy script can reference another Liquibase script file – e.g., the default changelog file, which creates the basic DB structure and initial data.

It also shows the programmatic creation of quotaSubjects (accounts in the system), where we can use a normal Groovy for loop to simply create many accounts for load testing.

QuotaSubjects have assigned quotas, which are identified by quota identifiers. These can be either created automatically, or we can reference already existing ones.

The keywords identifier, quota, quotaSubject, weekly and monthly are just normal Groovy functions that take a Map as an argument, which allows us to pass them named parameters.

Before execution, the script is concatenated with the main DSL script, where the keywords are defined.

Part of the main DSL script that processes identifiers:

QuotaIdentifier identifier(Map args) {
    assert args.name, 'QuotaIdentifier name is not specified, available params are: ' + args
    String guid = args.guid ? args.guid : nextGuid()

    addInsertChange('QUOTA_IDENTIFIER', columns('GUID', guid) << column('NAME', args.name) << column('STATUS', 'ENABLED'))
    new QuotaIdentifier(guid)
}

private def addInsertChange(String tableName, List<ColumnConfig> columns) {
    InsertDataChange change = new InsertDataChange()
    change.setTableName(tableName)
    change.setColumns(columns)

    groovyDSL_liquibaseChanges << change
}

The calls produce Liquibase objects, which are appended to variables accessible within the Groovy script. The content of the variables constitutes the final Liquibase changelog, which is created after the processing of the script is done.

This way, the test data file is simple to read, write, and maintain. A small change in the changelog parser also allowed us to embed the test data scripts in our Spock test specifications so that we can see the test execution logic and test data next to each other.

@Transactional @ContextConfiguration(loader = YSoftAnnotationConfigContextLoader.class)
class QuotaSubjectAdministrationServiceTestSpec extends UberSpecification {

    // ~ test configuration overrides ==================================================================

    @Configuration @ImportResource("/testApplicationContext.xml")
    static class OverridingContext {

        @Bean
        String testDataScript() {
            """groovy: importFile('changelog.xml')

                       bw = identifier name:'bw'
                       color = identifier name:'color'
                       print = identifier name:'print'
                       a4 = identifier name:'a4'

                       p_a4_c = quota guid:'p_a4_c', name:'p_a4_c', limit:10, identifiers:[print, a4, color], period:weekly('MONDAY')
                       p_a4_bw = quota guid:'p_a4_bw', name:'p_a4_bw', limit:20, identifiers:[print, a4, bw], period:weekly('MONDAY')
                       p_a4 = quota guid:'p_a4', name:'p_a4', limit:30, identifiers:[print, a4], period:weekly('MONDAY')
                       p = quota guid:'p', name:'p', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q1 = quota guid:'q1', name:'q1', limit:40, identifiers:[print], period:weekly('MONDAY')   
                       q2 = quota guid:'q2', name:'q2', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q_to_delete = quota guid:'q_to_delete', name:'q_to_delete', limit:40, identifiers:[print], period:weekly('MONDAY')

                       quotaSubject guid: 'user1', name:'user1', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user2', name:'user2', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user3', name:'user3', quotas:[p_a4_c, p_a4_bw, p_a4, p]
                       quotaSubject guid: 'user4', name:'user4', quotas:[p_a4_c]"""
        }
    }

    // ~ instance fields ===============================================================================

    @Autowired private QuotaSubjectAdministrationService quotaSubjectAdministrationService;
    @Autowired private QuotaAdministrationService quotaAdministrationService;

    // ~ findByGuid ====================================================================================

    def "findByGuid should throw IllegalInputException for null guid"() {
        when:
        quotaSubjectAdministrationService.findByGuid(null)

        then:
        thrown(IllegalInputException)
    }
}

Do you want to attend the famous developer conference GeeCON in the Czech Republic? No problem! Y Soft and GeeCON's organizers are bringing it to our capital again. We can therefore proudly announce that Y Soft is the platinum sponsor, co-organizer and key partner of GeeCON 2015 in Prague, on October 22-23.


GeeCON focuses on news and hacks around Java and Java Virtual Machine based technologies. It was first organized in 2009 and has since grown into a big conference with over 80 speakers and three days of sessions, from an initial 350 participants to 2000+ attendees today. From its originally wider focus, it has crystallized into one specialized, though no less rich, topic: everything around Java technology.

GeeCON is a conference focused on Java and Java Virtual Machine based technologies, with special attention to dynamic languages like Groovy and Ruby. GeeCON is a forum for sharing experiences about modern software development methodologies, enterprise architectures, software craftsmanship, design patterns, distributed computing and more!

The fact that participants flock to GeeCON from all over Europe, with some even coming from other continents, says a lot about its quality. Traditionally, Czech developers attend in large numbers. Lectures take place in several halls in parallel, so all participants can choose exactly according to their interests. Among the speakers you will find famous names from around the world: Kevlin Henney, Milen Dyankov, Simon Brown, Grant Ingersoll and Antonio Goncalves.

We encourage you to visit GeeCON Prague at CineStar Cerny Most. All you have to do is book October 22-23 in your diary; everything else you will discover over the following weeks directly at www.geecon.cz.

After finishing the hard-coded password detector, I have focused on improving the detection of the most serious security bugs that can be found by static taint analysis. SQL injection, OS command injection and cross-site scripting (XSS) are ranked first, second and fourth in the CWE Top 25 most dangerous software errors (while the well-known buffer overflow, not applicable to Java, is ranked third). Path traversal, unvalidated redirect, XPath injection and LDAP injection are related types of weaknesses: unvalidated user input can exploit the syntax of an interpreter and cause a vulnerability. Injections in general are also the number one risk in the OWASP Top 10, so a reliable open-source static analyser for these kinds of weaknesses could really make the world a more secure place 🙂

FindBugs already has detectors for some kinds of injections, but many bugs are missed due to insufficient flow analysis, unknown taint sources and sinks, and a design targeting zero false positives (even though there are some). In contrast, the aim of the bug detectors in FindSecurityBugs is to be helpful during security code review and not to miss any vulnerability; there was some effort to reduce false positives, but before my contribution almost all taint sinks were reported in practice, and searching for a real problem among many false warnings is quite tedious. The aim of the new detection mechanism is to report more high-confidence bugs (with a minimum of false positives) than the FindBugs detectors, plus report lower-confidence bugs with decreased priority, missing no real bugs while keeping the false-positive rate much lower than FindSecurityBugs had originally.

For reliable detection, we need good data-flow analysis. I have already mentioned the OpcodeStackDetector class in previous articles, but there is a more advanced and general mechanism in FindBugs: we can create and register classes performing a custom data-flow analysis and request their results later in detectors. Methods are symbolically executed after building a control flow graph made of blocks of instructions connected by different types of edges (such as goto, ifcmp or exception handling), which are pruned of impossible flow where possible.

We have to create a class to represent the facts at different code locations. We want to remember some information (called a fact) for every reachable instruction, which can later help us decide whether a particular bug should be reported at that location. We need to model the effects of instructions and edges on facts, specify the way facts from different flow branches are merged, and make everything work together. Fortunately, there are existing classes designed for extension that make this process easier. In particular, FrameDataflowAnalysis models the values in the operand stack and local variables, so we can concentrate on the sub-facts about these values; the actual fact is then a frame of these sub-facts. This class models the effects of instructions by pushing the default sub-fact onto the modelled stack and popping the right number of stack values, and it automatically moves sub-facts between the stack and the part of the frame holding local variables.

Let's have a look at which classes had to be implemented for the taint analysis. If we want to run a custom data-flow analysis, a special class implementing IAnalysisEngineRegistrar must be created and referenced from findbugs.xml.

<!-- Registers engine for taint analysis dataflow -->
<EngineRegistrar
    class="com.h3xstream.findsecbugs.taintanalysis.EngineRegistrar"/>

This simple class (called EngineRegistrar) creates a new instance of TaintDataflowEngine and registers it with the global analysis cache.

public class EngineRegistrar implements IAnalysisEngineRegistrar {

    @Override
    public void registerAnalysisEngines(IAnalysisCache cache) {
        new TaintDataflowEngine().registerWith(cache);
    }
}

Thanks to this, at the right time the analyze method of TaintDataflowEngine (implementing IMethodAnalysisEngine) is called for each method of the analyzed code. This method requests the objects needed for the analysis, instantiates two custom classes (described in the next two sentences) and executes the analysis.

public class TaintDataflowEngine
    implements IMethodAnalysisEngine<TaintDataflow> {

    @Override
    public TaintDataflow analyze(IAnalysisCache cache)
            throws CheckedAnalysisException {
        CFG cfg = cache.getMethodAnalysis(CFG.class, descriptor);
        DepthFirstSearch dfs = cache
            .getMethodAnalysis(DepthFirstSearch.class, descriptor);
        MethodGen methodGen = cache
            .getMethodAnalysis(MethodGen.class, descriptor);
        TaintAnalysis analysis = new TaintAnalysis(
            methodGen, dfs, descriptor);
        TaintDataflow flow = new TaintDataflow(cfg, analysis);
        flow.execute();
        return flow;
    }

    @Override
    public void registerWith(IAnalysisCache iac) {
        iac.registerMethodAnalysisEngine(TaintDataflow.class, this);
    }
}

TaintDataflow (extending Dataflow) is really simple; it stores the results of the performed analysis, which are used later by the detectors.

public class TaintDataflow
        extends Dataflow<TaintFrame, TaintAnalysis> {

    public TaintDataflow(CFG cfg, TaintAnalysis analysis) {
        super(cfg, analysis);
    }
}

TaintAnalysis (extending FrameDataflowAnalysis) implements the data-flow operations on TaintFrame, but it mostly delegates them to other classes.

public class TaintAnalysis
        extends FrameDataflowAnalysis<Taint, TaintFrame> {

    private final MethodGen methodGen;
    private final TaintFrameModelingVisitor visitor;

    public TaintAnalysis(MethodGen methodGen, DepthFirstSearch dfs,
            MethodDescriptor descriptor) {
        super(dfs);
        this.methodGen = methodGen;
        this.visitor = new TaintFrameModelingVisitor(
            methodGen.getConstantPool(), descriptor);
    }

    @Override
    protected void mergeValues(TaintFrame frame, TaintFrame result,
            int i) throws DataflowAnalysisException {
        result.setValue(i, Taint.merge(
            result.getValue(i), frame.getValue(i)));
    }

    @Override
    public void transferInstruction(InstructionHandle handle,
            BasicBlock block, TaintFrame fact)
            throws DataflowAnalysisException {
        visitor.setFrameAndLocation(
            fact, new Location(handle, block));
        visitor.analyzeInstruction(handle.getInstruction());
    }

    // some other methods
}

TaintFrame is just a concrete class for abstract Frame<Taint>.

public class TaintFrame extends Frame<Taint> {

    public TaintFrame(int numLocals) {
        super(numLocals);
    }
}

The effects of instructions are modelled by TaintFrameModelingVisitor (extending AbstractFrameModelingVisitor), so we can code with the visitor pattern again.

public class TaintFrameModelingVisitor
    extends AbstractFrameModelingVisitor<Taint, TaintFrame> {

    private final MethodDescriptor methodDescriptor;

    public TaintFrameModelingVisitor(ConstantPoolGen cpg,
            MethodDescriptor method) {
        super(cpg);
        this.methodDescriptor = method;
    }

    @Override
    public Taint getDefaultValue() {
        return new Taint(Taint.State.UNKNOWN);
    }

    @Override
    public void visitACONST_NULL(ACONST_NULL obj) {
        getFrame().pushValue(new Taint(Taint.State.NULL));
    }

    // many more methods
}

The taint fact (information about a value in the frame, i.e. a stack item or a local variable) is stored in a class called simply Taint.

The most important piece of information in Taint is the taint state, represented by an enum with the values TAINTED, UNKNOWN, SAFE and NULL. TAINTED is pushed for an invoke instruction whose method call is configured to be tainted (e.g. getParameter of HttpServletRequest or readLine of BufferedReader), SAFE is stored for the ldc (load constant) instruction, NULL for aconst_null, and UNKNOWN is the default value (this description is a bit simplified). Merging of taint states is defined so that if we order them TAINTED > UNKNOWN > SAFE > NULL, the merge of two states is the greater value (e.g. TAINTED + SAFE = TAINTED).

This merging happens not only where a code block of the control flow graph has several input edges; I have also implemented a mechanism of taint-transferring methods. For example, consider calling the toLowerCase method on a String before passing it to a taint sink: instead of pushing a default value (UNKNOWN), we can copy the state of the parameter so the information is not lost. Merging is also done in more complicated cases, such as the append method of StringBuilder, where the taint state of the argument is merged with the taint state of the StringBuilder instance and returned to be pushed onto the modelled stack.
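In a simplified sketch, the merge is just the maximum in that ordering (the real Taint class also merges taint source locations and other metadata):

public enum State {
    // ordered from least to most dangerous
    NULL, SAFE, UNKNOWN, TAINTED;

    public static State merge(State a, State b) {
        // the merged state is the more dangerous of the two:
        // merge(TAINTED, SAFE) == TAINTED, merge(SAFE, NULL) == SAFE
        return a.ordinal() >= b.ordinal() ? a : b;
    }
}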

Two problems with taint state transfer had to be solved. First, the taint state must be transferred directly to mutable objects too, not only to return values (and the method can be void). We not only set the taint state for an object when it is seen for the first time in the analysed method and copy the state from then on; we also change it according to calls of instance methods. For example, a StringBuilder is safe when a new instance is created with the non-parametric constructor, but it can taint itself by a call of its append method; if only methods with safe parameters are called, the taint state of the StringBuilder object remains safe too. For this reason, the effect of load instructions is modified to record the index of the loaded local variable in the Taint instance of the corresponding stack item. Then, for specified methods of mutable classes, we can transfer the taint state to the local variable with the index stored in Taint.

Second, taint-transferring constructors (methods <init> in bytecode) must be handled specially because of the way new objects are created in Java. The instruction new is followed by dup and invokespecial, which consumes the duplicated value and initializes the object remaining at the top of the stack. Since the new object is not yet stored in any variable, we must transfer the taint value from the merged parameters to the stack top separately.

Bugs related to taint analysis are identified by TaintDetector (implementing Detector). For better performance, before the methods of a class are analyzed, its constant pool (the part of the class file format with all needed constants) is searched, and the analysis continues only if there are references to some taint sinks. Then a TaintDataflow instance is loaded for each method, and the locations of its control flow graph are iterated until a taint sink method is found. In other words, we find all invoke instructions used in the currently analysed method and check whether the called methods are related to the searched weaknesses.

Facts (instances of the Taint class) from TaintDataflow are extracted for each sink parameter of a sink method. A bug is reported with high confidence (and priority) if the taint state is TAINTED, with medium confidence for the UNKNOWN taint state, and with low confidence for SAFE and NULL (just in case the analysis is wrong; these warnings are not normally shown anywhere). The Taint class also contains references to taint source locations, which are shown in bug reports to make review easier: you should see a path between the taint sources and the taint sink. TaintDetector itself is abstract, so it must be extended to detect concrete weakness types (like command injection), with the InjectionSource interface implemented to specify the taint sinks (the name of the interface is a bit misleading) and the constant pool items marking candidate classes.
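To make the confidence levels concrete, consider a small hypothetical fragment of servlet code:

String dir = request.getParameter("dir");             // configured taint source -> TAINTED
Runtime.getRuntime().exec("cmd.exe /c dir " + dir);   // taint sink -> high-confidence bug

String fixed = "logs";                                // constant (ldc) -> SAFE
Runtime.getRuntime().exec("cmd.exe /c dir " + fixed); // low confidence, normally not shown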

public class CommandInjectionDetector extends TaintDetector {

    public CommandInjectionDetector(BugReporter bugReporter) {
        super(bugReporter);
    }

    @Override
    public InjectionSource[] getInjectionSource() {
        return new InjectionSource[] {new CommandInjectionSource()};
    }
}

CommandInjectionSource overrides the method getInjectableParameters, which returns an InjectionPoint instance containing the parameters that must not be tainted and the weakness type to report. The boolean method isCandidate searches the constant pool for the names of the taint sink classes and returns true if they are present.

TaintDetector is currently used to detect command, SQL, LDAP and script (the eval method of ScriptEngine) injections and unvalidated redirects. More bug types and taint sinks should follow soon, and test results look quite promising so far. Inter-procedural analysis (not restricted to the method scope) should be the next big improvement, which could make this analysis really helpful. Then everything should be tested on a large amount of real code to iron out the kinks. You can see the discussed classes in the taintanalysis package and try the new version of FindSecurityBugs.