Most systems today need to handle user authentication. That means the password entered during user registration must be stored in the system for later comparison.

Obviously, the passwords must not be stored in plain-text form. Otherwise, if an attacker succeeded in getting access to the database where these passwords are stored (e.g. using SQL injection), he would obtain the whole list of user names with their corresponding passwords, and impersonating a valid user would then be very simple.

Hashing

However, to check whether the password entered by the user is correct, we do not need the original password. It is enough to have information that uniquely identifies it and can be easily computed from each password entering the system.

Such information is the password hash. A hash algorithm is a one-way function that generates a fixed-length string from its input (in this case the given password), with no possibility of deriving the input back from the computed string. Another property of a cryptographic hash function is that a change of one input bit leads to a change of many bits in the resulting hash. When the hash function is collision-free, we can assume that identical hashes imply identical inputs.

So instead of the password itself, only its hash is stored in the system. Every time a user tries to log in, the hash of the entered password is computed and compared to the stored one.

Slow hashing

However, general-purpose cryptographic hash functions such as MD5 or SHA are not appropriate. The purpose of these functions is to calculate a digest of a large amount of data to ensure its integrity, and this digest needs to be computed in as short a time as possible. These hash functions are therefore designed to be fast – a property that is undesirable for password hashing.

Take the MD5 function as an example. One 2.13 GHz core is able to compute about 6 million MD5 hashes per second using the Cain & Abel tool, so trying every possible 8-character lowercase alphanumeric password takes approximately 130 hours. And that is only one core – modern computers have more of them; with six such cores a password can be cracked in less than a day. Furthermore, we can safely assume that an attacker has much better equipment.
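
For illustration, the 130-hour figure can be reproduced with a few lines of arithmetic. A minimal sketch (the 6 million hashes per second is the rate quoted above):

public class CrackingTime {
    public static void main(String[] args) {
        double keyspace = Math.pow(36, 8);   // 8 characters, 26 letters + 10 digits
        double hashesPerSecond = 6_000_000;  // one 2.13 GHz core running MD5
        double hours = keyspace / hashesPerSecond / 3600;
        System.out.printf("%.0f hours%n", hours); // prints 131 hours
    }
}

Plugging in the 100 scrypt hashes per second measured later in this article turns the same computation into roughly 895 years.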

In order to prevent an attacker from trying millions of hashes per second, we need to use a slow cryptographic hash function for password hashing. Several hash functions were designed specifically for this purpose, among them PBKDF2, bcrypt and scrypt.

Work factor parameters

These hash functions are not only slow, they also come with work factor parameters defining how expensive the hash computation will be. Although scrypt is the youngest of them (designed in 2009), it has an advantage over the older ones – it defines not only the CPU cost but also the memory requirements. That is why scrypt is the recommended function for password storage, and this article focuses mainly on it.

Scrypt uses the following work factor parameters:

  • N – the number of iterations, related to both memory and CPU cost
  • r – the size of the RAM block needed, related to memory cost
  • p – parallelization, the maximum number of threads, related to CPU cost

These parameters allow us to set the memory needed and the time it takes to compute one hash. The approximate memory usage for a single hash generation can be computed from the parameters using the following formula:

memory = N · 2 · r · 64 bytes
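
For example, with N = 2^14 and r = 8, a single hash computation needs 2^14 · 2 · 8 · 64 B = 16 MiB of memory.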

The time, on the other hand, is platform-dependent. The graph below shows how the time needed for a single hash computation depends on the work factor parameters N and r; the parallelization parameter is set to 1 in all cases. The values in the graph were measured using CryptSharp, a C# implementation of scrypt, on Windows Server 2012 with four 2.2 GHz cores.

[Graph: scrypt hash computation time for varying N and r (CryptSharp, Windows Server 2012)]

The computation time has to be chosen as a compromise between usability and the security provided. For example, if we have a system with only one login at a time and high security is needed, we can set the parameters so that the computation takes about one second. However, in the case of many parallel logins, this time needs to be reduced to just a few milliseconds.

We can return to the above example of password hash cracking. Using the scrypt function (CryptSharp implementation) with parameters N=2^10, r=4 and p=1, hashing one password takes approximately 10 ms, i.e. one 2.2 GHz core is able to compute 100 hashes per second. Computing all possible 8-character lowercase alphanumeric passwords then takes approximately 895 years.

Attacker goals

Imagine an attacker who has obtained the list of user names and corresponding password hashes. There are three goals he may now pursue:

  • Crack a password of one specific user (e.g. admin)
  • Crack a password of any user
  • Crack passwords of a longer list of users

Attacks

In the first case the attacker has a password hash and wants to find the password it was computed from. He can use a brute-force or dictionary attack, i.e. try many possible inputs to the hash function and compare the results with the obtained password hash.
An effective method for trying so many hashes is the use of lookup tables. The general idea is to pre-compute hashes of possible passwords and store them in a lookup table data structure (or in rainbow tables for lower memory requirements). Comparing these pre-computed values with a given hash is much faster than computing the hash.

The second goal is simpler to reach. The only thing needed is to compute hashes of possible inputs and compare each result with all password hashes in the obtained list. Sooner or later the attacker will hit a match.

For cracking a longer list of hashes, the attacker does not need to crack one password at a time; instead, he compares each computed hash with all hashes from the list. This way, cracking a list of hashes takes approximately the same time as cracking one specific password.

Salt

The above attacks work because each password is hashed the same way: the same password always results in the same hash. The simplest way of preventing this is salting, which means that a random string (salt) is generated for each password and used together with it to create the hash.
The salts must be unique, so they really need to be randomly generated. Any random number generator can be used; however, cryptographically secure RNGs, such as RNGCryptoServiceProvider in C# or SecureRandom in Java, are recommended.
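
A minimal sketch of salt generation, here using the SecureRandom class mentioned above (the 16-byte length is an illustrative choice):

import java.security.SecureRandom;

public static byte[] generateSalt() {
    byte[] salt = new byte[16];         // length chosen for illustration
    new SecureRandom().nextBytes(salt); // cryptographically secure RNG
    return salt;
}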

The salt is a non-secret value; it is stored together with the password hash so that it is available to the hash function. Thus, if someone gets access to the hashes, he automatically gets all the salts as well. However, the power of the salt lies not in its secrecy but in its randomness.

With different salts, the same passwords result in different hashes. A pre-computed hash attack becomes infeasible due to the large additional memory requirements – the attacker would need to store pre-computed hashes for each possible salt.

Cracking the password of any user is reduced to cracking the password of a specific one, since the salt of each user password is different.

Cracking a larger list of hashes also becomes more complicated when each password has a different salt: the attacker has no choice but to crack one password at a time.

Pepper

To increase security even further, we can use another randomly generated string – a pepper. In contrast to the salt, the pepper needs to be kept secret, as it is used as an HMAC key. HMAC is a one-way algorithm based on a hash function, generating a fixed-length string from the input message and a secret key – in our case, the generated pepper.

Since the pepper is a secret key, it needs to be generated using a cryptographically secure random number generator, such as RNGCryptoServiceProvider.

public static byte[] GeneratePepper()
{
    byte[] pepper = new byte[32];
    // dispose the RNG deterministically
    using (var rng = new RNGCryptoServiceProvider())
    {
        rng.GetBytes(pepper);
    }
    return pepper;
}

Once generated, the pepper must be stored separately, in a configuration file with restricted access.

Even if an attacker had enough resources to crack the hash function, he would still need this secret value to obtain the user password. And with the pepper randomly generated for each system instance, if one instance is compromised, the others remain secure.

Overall scheme

The overall hashing of the password with both salt and pepper looks as follows:

scrypt ( Base64 ( HMAC ( 'SHA256', password, pepper ) ), salt, workFactors )

And the C# implementation of this scheme using CryptSharp library:

public byte[] HashPassword(String password, byte[] pepper, byte[] salt)
{
    if (salt == null)
    {
        Console.WriteLine("Password hash not created - salt is null.");
        return null;
    }

    String encodedHmac = HmacBase64(password, pepper);
    return CryptSharp.Utility.SCrypt.ComputeDerivedKey(
        Encoding.UTF8.GetBytes(encodedHmac), salt, n, r, p, null, HASH_LENGTH);
}

private static string HmacBase64(string password, byte[] pepper)
{
    if (pepper == null)
    {
        Console.WriteLine("Password hash not created - pepper is null.");
        return null;
    }
    HMACSHA256 hmac = new HMACSHA256(pepper);
    hmac.Initialize();
    byte[] buffer = Encoding.UTF8.GetBytes(password);
    byte[] rawHmac = hmac.ComputeHash(buffer);
    return System.Convert.ToBase64String(rawHmac);
}

Conclusion

User passwords must never be stored as plain text; always compute their hash using a slow cryptographic hash function. For each password, generate a random salt and use this value together with the password for the hash computation. For a higher level of security, generate a random secret pepper for each system instance.

Of course, the security of a user password depends on the password itself. An attacker can still try frequently used passwords such as “123456”; however, with secure storage we can prevent him from trying too many of them and from cracking the strong ones.

As many of you may know, Android supports native printing since Android 4.4. This means that there is a new API handling the communication between the application from which the user prints and the application that later sends the job to the printer.


So how does it work? First, let's have a look at the applications from which the user prints.

The main responsibility of these applications is to prepare the output for printing in PDF format. This includes, for example, pagination or adjusting the image to landscape or portrait orientation.

The application from which the user prints then uses the system PrintManager service.

PrintManager printManager = (PrintManager) getSystemService(Context.PRINT_SERVICE);

The document output is prepared with a PrintDocumentAdapter, which is passed as the second parameter of PrintManager's print function.

printManager.print(jobName, new PrintDocumentAdapter(...), printAttributes);
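
The adapter code itself is not part of this post, but a minimal one-page adapter might look roughly like the sketch below. PrintedPdfDocument renders the page into a PDF; the document name, the drawn text and the context reference are illustrative:

PrintDocumentAdapter adapter = new PrintDocumentAdapter() {
    private PrintedPdfDocument document;

    @Override
    public void onLayout(PrintAttributes oldAttributes, PrintAttributes newAttributes,
            CancellationSignal signal, LayoutResultCallback callback, Bundle extras) {
        document = new PrintedPdfDocument(context, newAttributes); // context: the enclosing Activity
        if (signal.isCanceled()) {
            callback.onLayoutCancelled();
            return;
        }
        PrintDocumentInfo info = new PrintDocumentInfo.Builder("output.pdf")
                .setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT)
                .setPageCount(1)
                .build();
        callback.onLayoutFinished(info, true); // true: the layout has changed
    }

    @Override
    public void onWrite(PageRange[] pages, ParcelFileDescriptor destination,
            CancellationSignal signal, WriteResultCallback callback) {
        PdfDocument.Page page = document.startPage(0);
        page.getCanvas().drawText("Hello, printer!", 72, 72, new Paint());
        document.finishPage(page);
        try (FileOutputStream out = new FileOutputStream(destination.getFileDescriptor())) {
            document.writeTo(out);
            callback.onWriteFinished(new PageRange[]{PageRange.ALL_PAGES});
        } catch (IOException e) {
            callback.onWriteFailed(e.getMessage());
        } finally {
            document.close();
        }
    }
};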

Now we get to the second part of job printing, where we have to discover printers and send them our job. This is the responsibility of a PrintService.

Printer discovery

We can either add printers manually, setting their IP address and port, or we can look for network printers in the local network.

Let’s have a look at how to find printers that support Zeroconf discovery in the local network. Implementations of Zeroconf include, for example, the Avahi daemon and Bonjour.

When printer discovery is started in Android, the onCreatePrinterDiscoverySession() method of the PrintService is called. Here we have to create our PrinterDiscoverySession.

The responsibilities of a PrinterDiscoverySession are pretty straightforward:

  • find and add discovered printers
  • remove previously added printers that disappeared
  • update already added printers

In this example we will use NsdManager, the native Android solution for finding Zeroconf services. Its functionality is rather limited, but for the purposes of this demo it is sufficient – there are other, more capable solutions, for example JmDNS. One current limitation of NsdManager is that it cannot read the TXT records of mDNS messages.

In order to use NsdManager, we have to implement two interfaces: DiscoveryListener (handles discovery callbacks) and ResolveListener (handles resolving callbacks). I will call them OurDiscoveryListener and OurResolveListener.

First, in the onStartPrinterDiscovery() method, we create a new instance of DiscoveryListener and start the actual discovery.

discoveryListener = new OurDiscoveryListener(nsdManager);
nsdManager.discoverServices(SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD, discoveryListener);

This is pretty self-explanatory. We specify the discovery service type, which is either “_ipps._tcp.” or “_ipp._tcp.”, depending on whether or not we want to encrypt the IPP messages.

When a service is found, OurDiscoveryListener handles what happens in the individual states of the discovery.

In the following code we can see that, for example, when a service is found, we try to resolve it with NsdManager.

public void onServiceFound(NsdServiceInfo service) {
    nsdManager.resolveService(service, new OurResolveListener());
}

Resolving a service means that we try to get more information about it, including the host IP and port. OurResolveListener then handles what should happen when resolving succeeds or fails. When it succeeds, we process the obtained data and save it for future use.
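
A sketch of what OurResolveListener can look like; savePrinter() stands in for whatever bookkeeping our discovery session does with the resolved data:

private class OurResolveListener implements NsdManager.ResolveListener {

    @Override
    public void onServiceResolved(NsdServiceInfo serviceInfo) {
        // the host IP and port are now available for building the printer endpoint
        savePrinter(serviceInfo.getServiceName(),
                serviceInfo.getHost(), serviceInfo.getPort());
    }

    @Override
    public void onResolveFailed(NsdServiceInfo serviceInfo, int errorCode) {
        Log.w("OurPrintService", "Resolving failed with error code " + errorCode);
    }
}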

The last part of printer discovery is finding more details about the selected printer and checking whether it is still available. This is handled in the onStartPrinterStateTracking() method.

Discovering details about the printer can be done, for example, with the IPP operation Get-Printer-Attributes, setting the printer information according to the received data. The second task is to keep tracking the printer state.

The following code sample shows how to set a few printer capabilities, which should be set according to the printer attributes. It does not contain the tracking of the printer state.

@Override
public void onStartPrinterStateTracking(PrinterId printerId) {
    // check for info we found when printer was discovered
    PrinterInfo printerInfo = findPrinterInfo(printerId);

    if (printerInfo != null) {
        PrinterCapabilitiesInfo capabilities = new PrinterCapabilitiesInfo.Builder(printerId)
                .setMinMargins(new PrintAttributes.Margins(200, 200, 200, 200))
                .addMediaSize(PrintAttributes.MediaSize.ISO_A4, true)
                .addMediaSize(PrintAttributes.MediaSize.ISO_A5, false)
                .addResolution(new PrintAttributes.Resolution("R1", "200x200", 200, 200), false)
                .addResolution(new PrintAttributes.Resolution("R2", "200x300", 200, 300), true)
                .setColorModes(PrintAttributes.COLOR_MODE_COLOR |
                        PrintAttributes.COLOR_MODE_MONOCHROME, PrintAttributes.COLOR_MODE_COLOR)
                .build();

        printerInfo = new PrinterInfo.Builder(printerInfo)
                .setCapabilities(capabilities).build();

        // We add printer info to system service
        List<PrinterInfo> printers = new ArrayList<>();
        printers.add(printerInfo);
        addPrinters(printers);
    }
}

When a different printer is selected, onStopPrinterStateTracking() is called, followed by onStartPrinterStateTracking() again.

 

Printing

Android itself doesn’t contain an implementation of any printing protocol. Because of this, I created a small IPP parser, but that’s a topic for another day.

Here I will only show an example of handling a queued print job.

In the following code we pick a job by its ID from the saved processed jobs and set its state to printing. The PrintTask class in the example is just an Android AsyncTask which, in the background, creates the IPP request and appends the job data.

public void handleQueuedPrintJob(PrintJobId printJobId, PrinterId printerId) {
    final PrintJob printJob = mProcessedPrintJobs.get(printJobId);
    if (printJob == null) {
        return;
    }

    if (printJob.isQueued()) {
        printJob.start();
    }

    OurPrinter ourPrinter = ourDiscoverySession.getPrinter(printerId);

    if (ourPrinter != null) {
        AsyncTask<Void, Void, Void> printTask =
                new PrintTask(printJob, ourPrinter);
        printTask.execute();
    }
}

In case we have decided to use IPPS, we also have to set the correct certificate. The next step is to create a new HttpURLConnection (or HttpsURLConnection for secure transfer).

The last thing we have to do is write our IPP message into the output stream, send it, and wait for the response from the server.
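
Roughly, that last step can look like the following sketch, where printerHost, printerPort and the buildIppRequest() helper (standing in for the IPP parser mentioned above) are hypothetical:

URL url = new URL("http://" + printerHost + ":" + printerPort + "/ipp/print");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Type", "application/ipp");
connection.setDoOutput(true);

try (OutputStream out = connection.getOutputStream()) {
    out.write(buildIppRequest(printJob)); // serialized IPP message plus the job data
}

int status = connection.getResponseCode(); // blocks until the printer responds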

Android manifest file

We have to set the necessary permissions in the Android manifest file in order to be able to run the PrintService.
Add android.permission.BIND_PRINT_SERVICE when declaring the service. Example:

...
<service android:name=".OurPrintService" 
    android:permission="android.permission.BIND_PRINT_SERVICE">

    <intent-filter>
        <action android:name="android.printservice.PrintService" />
    </intent-filter>
</service>
...

This allows the system to bind to the PrintService; otherwise, the service wouldn’t be shown in the system.

Also

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.START_PRINT_SERVICE_CONFIG_ACTIVITY" />

permissions are needed for printer discovery and for accessing files in external storage.

We use Liquibase in our project as a DB change management tool. We use it to create our DB schema with the basic configuration our application needs to run.

This is, however, not enough for development or (unit) testing. Why? Because for each test case, we need to have the data in the database in a particular state. E.g., I need to test that the system rejects the action of a user that does not have the required quota for the action he/she wants to perform. So I create a user with a 0 quota, then try to perform the action and see whether the system allows it or rejects it. To avoid setting up our test data repeatedly, we use special Liquibase scripts that set up what we need in our test environment (such as a user with a 0 quota), so that we do not have to do this manually.

For the Payment System project, we used Liquibase to run plain SQL scripts to insert the data that we needed. It worked well enough, but this approach has some disadvantages.

Mainly, the person writing the scripts has to have knowledge of our database and the relations between various tables, so there is a steep learning curve. Another issue is that all of the scripts have to be updated when a larger DB schema change takes place.

Therefore, during our implementation of the Quota System, I took inspiration from the work of my colleague, who used a kind of high-level DSL as a convenient way to set up the Quota system, and I turned it into a production-ready feature on top of Liquibase. This solved the problem of manual execution (the scripts always run during application startup and are guaranteed to run exactly once).

For the DSL, I chose Groovy, since we already use it for our tests, and there is no interoperability issue with the Java-based Liquibase.

Liquibase has many extension points and implementing a custom changelog parser seemed the way to go.

The default parser for XML parses the input file, generates internal Liquibase objects representing particular changes, and then transforms them to SQL, which is executed on the DB.

I created a similar parser, which is registered to accept Groovy scripts.

The parser executes the script file, which creates internal Liquibase objects; of course, one DSL keyword can create multiple insert statements. What is more, when the DB schema changes, the only place that needs to change is the DSL implementation, not all of the already created scripts.
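
A minimal sketch of such a parser is shown below. The method signatures follow the Liquibase 3.x ChangeLogParser interface (they may differ between versions), and evaluateDslScript() is a hypothetical helper that runs the Groovy script and collects the produced Change objects:

public class GroovyTestDataParser implements ChangeLogParser {

    @Override
    public boolean supports(String changeLogFile, ResourceAccessor resourceAccessor) {
        return changeLogFile.endsWith(".groovy"); // accept only Groovy scripts
    }

    @Override
    public int getPriority() {
        return PRIORITY_DEFAULT;
    }

    @Override
    public DatabaseChangeLog parse(String physicalChangeLogLocation,
            ChangeLogParameters changeLogParameters, ResourceAccessor resourceAccessor)
            throws ChangeLogParseException {
        DatabaseChangeLog changeLog = new DatabaseChangeLog(physicalChangeLogLocation);
        ChangeSet changeSet = new ChangeSet("test-data", "dsl", false, false,
                physicalChangeLogLocation, null, null, changeLog);
        // e.g. the InsertDataChange objects produced by the DSL keywords
        for (Change change : evaluateDslScript(physicalChangeLogLocation, resourceAccessor)) {
            changeSet.addChange(change);
        }
        changeLog.addChangeSet(changeSet);
        return changeLog;
    }
}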

Example Groovy script:

importFile('changelog.xml')

bw = '3' //using guid of identifier which is already present in database
color = identifier name:'color'
print = identifier name:'PRINT', guid:'100' //creating identifier with specific guid

q1 = quota guid:'q1', name:'quota1', limit:50, identifiers:[bw, print], period:weekly('MONDAY')
q2 = quota guid:'q2', name:'quota2', limit:150, identifiers:[color, print], period:monthly(1)

for (i in 1..1000)
    quotaSubject guid: 'guid' + i, name:'John' + i, quotas:[q1,q2]

This example shows that the Groovy script can reference another Liquibase script file – e.g., the default changelog file, which creates the basic DB structure and initial data.

It also shows the programmatic creation of quotaSubjects (accounts in the system), where we can use a normal Groovy for loop to simply create many accounts for load testing.

QuotaSubjects have assigned quotas, which are identified by quota identifiers. These can be either created automatically, or we can reference already existing ones.

The keywords identifier, quota, quotaSubject, weekly, and monthly are just normal Groovy functions that take a Map as an argument, which allows us to pass them named parameters.

Before execution, the script is concatenated with the main DSL script, where the keywords are defined.

Part of the main DSL script that processes identifiers:

QuotaIdentifier identifier(Map args) {
    assert args.name, 'QuotaIdentifier name is not specified, available params are: ' + args
    String guid = args.guid ? args.guid : nextGuid()

    addInsertChange('QUOTA_IDENTIFIER', columns('GUID', guid) << column('NAME', args.name) << column('STATUS', 'ENABLED'))
    new QuotaIdentifier(guid)
}

private def addInsertChange(String tableName, List<ColumnConfig> columns) {
    InsertDataChange change = new InsertDataChange()
    change.setTableName(tableName)
    change.setColumns(columns)

    groovyDSL_liquibaseChanges << change
}

These calls produce Liquibase objects, which are appended to variables accessible within the Groovy script. The content of these variables constitutes the final Liquibase changelog, which is created after the processing of the script is done.

This way, the test data file is simple to read, write, and maintain. A small change in the changelog parser also allowed us to embed the test data scripts in our Spock test specifications so that we can see the test execution logic and test data next to each other.

@Transactional @ContextConfiguration(loader = YSoftAnnotationConfigContextLoader.class)
class QuotaSubjectAdministrationServiceTestSpec extends UberSpecification {

    // ~ test configuration overrides ==================================================================

    @Configuration @ImportResource("/testApplicationContext.xml")
    static class OverridingContext {

        @Bean
        String testDataScript() {
            """groovy: importFile('changelog.xml')

                       bw = identifier name:'bw'
                       color = identifier name:'color'
                       print = identifier name:'print'
                       a4 = identifier name:'a4'

                       p_a4_c = quota guid:'p_a4_c', name:'p_a4_c', limit:10, identifiers:[print, a4, color], period:weekly('MONDAY')
                       p_a4_bw = quota guid:'p_a4_bw', name:'p_a4_bw', limit:20, identifiers:[print, a4, bw], period:weekly('MONDAY')
                       p_a4 = quota guid:'p_a4', name:'p_a4', limit:30, identifiers:[print, a4], period:weekly('MONDAY')
                       p = quota guid:'p', name:'p', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q1 = quota guid:'q1', name:'q1', limit:40, identifiers:[print], period:weekly('MONDAY')   
                       q2 = quota guid:'q2', name:'q2', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q_to_delete = quota guid:'q_to_delete', name:'q_to_delete', limit:40, identifiers:[print], period:weekly('MONDAY')

                       quotaSubject guid: 'user1', name:'user1', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user2', name:'user2', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user3', name:'user3', quotas:[p_a4_c, p_a4_bw, p_a4, p]
                       quotaSubject guid: 'user4', name:'user4', quotas:[p_a4_c]"""
        }
    }

    // ~ instance fields ===============================================================================

    @Autowired private QuotaSubjectAdministrationService quotaSubjectAdministrationService;
    @Autowired private QuotaAdministrationService quotaAdministrationService;

    // ~ findByGuid ====================================================================================

    def "findByGuid should throw IllegalInputException for null guid"() {
        when:
        quotaSubjectAdministrationService.findByGuid(null)

        then:
        thrown(IllegalInputException)
    }
Do you want to attend the famous developer conference GeeCON in the Czech Republic? No problem! Y Soft and GeeCON’s organizers are bringing it to our capital again. We can therefore proudly announce that Y Soft is the platinum sponsor, co-organizer and key partner of GeeCON 2015 in Prague, on October 22–23.

 

GeeCON focuses on news and hacks all around Java and Java Virtual Machine based technologies. It was first organized in 2009 and has since grown into a big conference with over 80 speakers and sessions spanning three days, from an initial 350 participants to 2000+ attendees today. From its originally wider focus, it has crystallized into one specialized, though no less rich, topic – all about Java technology.

GeeCON is a conference focused on Java and Java Virtual Machine based technologies, with special attention to dynamic languages like Groovy and Ruby. GeeCON is a forum for sharing experiences about modern software development methodologies, enterprise architectures, software craftsmanship, design patterns, distributed computing and more!

The fact that participants literally flock to GeeCON from all over Europe – some even coming from other continents – says a lot about its quality. Traditionally, representatives of Czech developers come in large numbers. Lectures take place in several halls in parallel, so that all participants can choose exactly according to their interests. Among the speakers you will find famous names from around the world – Kevlin Henney, Milen Dyankov, Simon Brown, Grant Ingersoll or Antonio Goncalves.

We encourage you to visit GeeCON Prague at CineStar – Cerny Most. All you have to do is book the dates of October 22–23 in your diary; everything else you will discover over the following weeks directly at www.geecon.cz.

After finishing the hard-coded passwords detector, I focused on improving the detection of the most serious security bugs that can be found by static taint analysis. SQL injection, OS command injection and cross-site scripting (XSS) are placed first, second and fourth in the CWE Top 25 most dangerous software errors (while the well-known buffer overflow, not applicable to Java, is placed third). Path traversal, unvalidated redirect, XPath injection and LDAP injection are related types of weaknesses – unvalidated user input can exploit the syntax of an interpreter and cause a vulnerability. Injections in general are also the number one risk in the OWASP Top 10, so a reliable open-source static analyser for these kinds of weaknesses could really make the world a more secure place 🙂

FindBugs already has detectors for some kinds of injections, but many bugs are missed due to insufficient flow analysis, unknown taint sources and sinks, and the goal of zero false positives (even though there are some). In contrast, the aim of the bug detectors in FindSecurityBugs is to be helpful during security code review and not to miss any vulnerability – there was some effort to reduce false positives, but before my contribution almost all taint sinks were reported in practice. Unfortunately, searching for a real problem among many false warnings is quite tedious. The aim of the new detection mechanism is to report more high-confidence bugs (with a minimum of false positives) than the FindBugs detectors, plus to report lower-confidence bugs with decreased priority, not missing any real bugs while having a false positive rate much lower than FindSecurityBugs had originally.

For reliable detection, we need good data-flow analysis. I have already mentioned the OpcodeStackDetector class in previous articles, but there is a more advanced and general mechanism in FindBugs: we can create and register classes performing a custom data-flow analysis and request their results later in detectors. Methods are symbolically executed after building a control flow graph made of blocks of instructions connected by different types of edges (such as goto, ifcmp or exception handling), which the framework attempts to prune for impossible flow.

We have to create a class to represent facts at different code locations – we want to remember some information (called a fact) for every reachable instruction, which can later help us decide whether a particular bug should be reported at that location. We need to model the effects of instructions and edges on facts, specify the way facts from different flow branches are merged, and make everything work together. Fortunately, there are existing classes designed for extension that make this process easier. In particular, FrameDataflowAnalysis models values in the operand stack and local variables, so we can concentrate on the sub-facts about these values. The actual fact is then a frame of these sub-facts. This class models the effects of instructions by pushing the default sub-fact on the modelled stack and popping the right number of stack values. It also automatically moves sub-facts between the stack and the part of the frame with local variables.

Let’s have a look at which classes had to be implemented for the taint analysis. If we want to run a custom data-flow analysis, a special class implementing IAnalysisEngineRegistrar must be created and referenced from findbugs.xml.

<!-- Registers engine for taint analysis dataflow -->
<EngineRegistrar
    class="com.h3xstream.findsecbugs.taintanalysis.EngineRegistrar"/>

This simple class (called EngineRegistrar) creates a new instance of TaintDataflowEngine and registers it with the global analysis cache.

public class EngineRegistrar implements IAnalysisEngineRegistrar {

    @Override
    public void registerAnalysisEngines(IAnalysisCache cache) {
        new TaintDataflowEngine().registerWith(cache);
    }
}

Thanks to this, at the right time, the analyze method of TaintDataflowEngine (implementing IMethodAnalysisEngine) is called for each method of the analyzed code. This method requests the objects needed for the analysis, instantiates two custom classes (described below) and executes the analysis.

public class TaintDataflowEngine
    implements IMethodAnalysisEngine<TaintDataflow> {

    @Override
    public TaintDataflow analyze(IAnalysisCache cache)
            throws CheckedAnalysisException {
        CFG cfg = cache.getMethodAnalysis(CFG.class, descriptor);
        DepthFirstSearch dfs = cache
            .getMethodAnalysis(DepthFirstSearch.class, descriptor);
        MethodGen methodGen = cache
            .getMethodAnalysis(MethodGen.class, descriptor);
        TaintAnalysis analysis = new TaintAnalysis(
            methodGen, dfs, descriptor);
        TaintDataflow flow = new TaintDataflow(cfg, analysis);
        flow.execute();
        return flow;
    }

    @Override
    public void registerWith(IAnalysisCache iac) {
        iac.registerMethodAnalysisEngine(TaintDataflow.class, this);
    }
}

TaintDataflow (extending Dataflow) is really simple and is used to store the results of the performed analysis (which are later used by detectors).

public class TaintDataflow
        extends Dataflow<TaintFrame, TaintAnalysis> {

    public TaintDataflow(CFG cfg, TaintAnalysis analysis) {
        super(cfg, analysis);
    }
}

TaintAnalysis (extending FrameDataflowAnalysis) implements the data-flow operations on TaintFrame, but mostly delegates them to other classes.

public class TaintAnalysis
        extends FrameDataflowAnalysis<Taint, TaintFrame> {

    private final MethodGen methodGen;
    private final TaintFrameModelingVisitor visitor;

    public TaintAnalysis(MethodGen methodGen, DepthFirstSearch dfs,
            MethodDescriptor descriptor) {
        super(dfs);
        this.methodGen = methodGen;
        this.visitor = new TaintFrameModelingVisitor(
            methodGen.getConstantPool(), descriptor);
    }

    @Override
    protected void mergeValues(TaintFrame frame, TaintFrame result,
            int i) throws DataflowAnalysisException {
        result.setValue(i, Taint.merge(
            result.getValue(i), frame.getValue(i)));
    }

    @Override
    public void transferInstruction(InstructionHandle handle,
            BasicBlock block, TaintFrame fact)
            throws DataflowAnalysisException {
        visitor.setFrameAndLocation(
            fact, new Location(handle, block));
        visitor.analyzeInstruction(handle.getInstruction());
    }

    // some other methods
}

TaintFrame is just a concrete class for abstract Frame<Taint>.

public class TaintFrame extends Frame<Taint> {

    public TaintFrame(int numLocals) {
        super(numLocals);
    }
}

The effects of instructions are modelled by TaintFrameModelingVisitor (extending AbstractFrameModelingVisitor), so we can code with the visitor pattern again.

public class TaintFrameModelingVisitor
    extends AbstractFrameModelingVisitor<Taint, TaintFrame> {

    private final MethodDescriptor methodDescriptor;

    public TaintFrameModelingVisitor(ConstantPoolGen cpg,
            MethodDescriptor method) {
        super(cpg);
        this.methodDescriptor = method;
    }

    @Override
    public Taint getDefaultValue() {
        return new Taint(Taint.State.UNKNOWN);
    }

    @Override
    public void visitACONST_NULL(ACONST_NULL obj) {
        getFrame().pushValue(new Taint(Taint.State.NULL));
    }

    // many more methods
}

The taint fact – the information about a value in the frame (a stack item or a local variable) – is stored in a class called simply Taint.

The most important piece of information in Taint is the taint state, represented by an enum with values TAINTED, UNKNOWN, SAFE and NULL. TAINTED is pushed for an invoke instruction with a method call configured to be tainted (e.g. getParameter from HttpServletRequest or readLine from BufferedReader), SAFE is stored for the ldc (load constant) instruction, NULL for aconst_null, and UNKNOWN is the default value (this description is a bit simplified). Merging of taint states is defined such that, if we order them as TAINTED > UNKNOWN > SAFE > NULL, the merge of two states is the greater value (e.g. TAINTED + SAFE = TAINTED).

This merging is done not only where a code block of the control flow graph has several input edges; I have also implemented a mechanism of taint-transferring methods. For example, consider calling the toLowerCase method on a String before passing it to a taint sink – instead of pushing a default value (UNKNOWN), we can copy the state of the parameter so the information is not forgotten. Merging is also done in more complicated cases such as the append method of StringBuilder – the taint state of the argument is merged with the taint state of the StringBuilder instance and returned to be pushed onto the modelled stack.
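
The state part of this merge rule is tiny. A sketch (the real Taint class merges more than just the state, e.g. variable indices and source locations):

public enum State {
    NULL, SAFE, UNKNOWN, TAINTED; // ascending order of severity

    public static State merge(State a, State b) {
        // the merged state is the greater one: TAINTED > UNKNOWN > SAFE > NULL
        return a.ordinal() >= b.ordinal() ? a : b;
    }
}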

There were two problems with taint state transfer that had to be solved. First, the taint state must be transferred directly to mutable classes too, not only to their return values (the method can even be void). We not only set the taint state for an object when it is seen for the first time in the analysed method and then copy the state, but we also change it according to calls of instance methods. For example, a StringBuilder is safe when a new instance is created with the non-parametric constructor, but it can taint itself by a call of its append method; if only methods with safe parameters are called, the taint state of the StringBuilder object remains safe too. For this reason, the effect of load instructions is modified to mark the index of the loaded local variable in the Taint instance of the corresponding stack item. Then, for specified methods of mutable classes, we can transfer the taint state to the local variable with the index stored in Taint.

Second, taint-transferring constructors (methods <init> in bytecode) must be handled specially because of the way new objects are created in Java. The new instruction is followed by dup and invokespecial, which consumes the duplicated value and initializes the object remaining at the top of the stack. Since the new object is not yet stored in any variable, we must transfer the taint value from the merged parameters to the stack top separately.

Bugs related to taint analysis are identified by TaintDetector (implementing Detector). For better performance, before the methods of a class are analyzed, its constant pool (the part of the class file format with all needed constants) is searched, and the analysis continues only if there are references to some taint sinks. Then the TaintDataflow instance is loaded for each method, and the locations of its control flow graph are iterated until a taint sink method is found. This means we find all invoke instructions used in the currently analysed method and check whether the called methods are related to the searched weaknesses. Facts (instances of the Taint class) from TaintDataflow are extracted for each sink parameter of a sink method. A bug is reported with high confidence (and priority) if the taint state is TAINTED, with medium confidence for the UNKNOWN taint state, and with low confidence for SAFE and NULL (just in case of a bad analysis; these warnings are not normally shown anywhere). The Taint class also contains references to taint source locations, so these are shown in bug reports to make review easier – you should see a path between the taint sources and the taint sink. TaintDetector itself is abstract, so it must be extended to detect concrete weakness types (like command injection), and the InjectionSource interface must be implemented to specify the taint sinks (the name of the interface is a bit misleading) and the constant pool items identifying candidate classes.

public class CommandInjectionDetector extends TaintDetector {

    public CommandInjectionDetector(BugReporter bugReporter) {
        super(bugReporter);
    }

    @Override
    public InjectionSource[] getInjectionSource() {
        return new InjectionSource[] {new CommandInjectionSource()};
    }
}

CommandInjectionSource overrides the method getInjectableParameters, which returns an instance of InjectionPoint containing the parameters that must not be tainted and the weakness type to report. The boolean method isCandidate searches the constant pool for the names of taint sink classes and returns true if they are present.
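
For illustration, such an implementation might look roughly like this – the signatures are paraphrased from the description above and simplified, and Runtime.exec is an assumed example sink:

public class CommandInjectionSource implements InjectionSource {

    // parameter offset 0 = the value on top of the stack when exec is invoked
    private static final InjectionPoint EXEC_POINT =
            new InjectionPoint(new int[] {0}, "COMMAND_INJECTION");

    @Override
    public boolean isCandidate(ConstantPool constPool) {
        // analyze the class only if its constant pool mentions the sink class
        for (int i = 1; i < constPool.getLength(); i++) {
            Constant constant = constPool.getConstant(i);
            if (constant instanceof ConstantUtf8
                    && "java/lang/Runtime".equals(((ConstantUtf8) constant).getBytes())) {
                return true;
            }
        }
        return false;
    }

    @Override
    public InjectionPoint getInjectableParameters(InvokeInstruction invoke,
            ConstantPoolGen cpg, InstructionHandle handle) {
        if ("java.lang.Runtime".equals(invoke.getClassName(cpg))
                && "exec".equals(invoke.getMethodName(cpg))) {
            return EXEC_POINT;
        }
        return InjectionPoint.NONE;
    }
}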

TaintDetector is currently used to detect command, SQL, LDAP and script (the eval method of ScriptEngine) injections and unvalidated redirects. More bug types and taint sinks should follow soon, and the test results look quite promising so far. Inter-procedural analysis (not restricted to a method scope) should be the next big improvement, which could make this analysis really helpful. Then everything should be tested on a large amount of real code to iron out the kinks. You can see the discussed classes in the taintanalysis package and try the new version of FindSecurityBugs.

Yes, you heard right! Developer testing. It means testing done by developers! And yes, I’m talking about confirmation testing, which is known as “The changelog” in our R&D department. The result: an improvement from 50% to 95% of tickets closed at the end of a sprint, and all sprint goals completed on time four sprints in a row! [1]

[Graph: Percentage of work planned and really done over sprints]

Our development team has recently been trying very hard to shorten the development cycle of features and fixes. One of the biggest delays we identified was caused by tickets waiting in the “To Test” state, meaning that the implementation part has been completed and the ticket is waiting for a QA engineer to confirm its functionality by testing it. As I was the only tester for 7 developers on the team, tickets with lower severity simply had to wait, often more than a week. Moreover, the testing activities were concentrated at the end of a sprint, and a ticket reopened too late can easily cause a failure to reach the team’s sprint goals.

Generally, the literature strongly recommends not letting developers test their own work. The common reasoning is that everybody is in love with their creations and would hate to damage them. Also, some defects are approach-related and thus cannot be discovered by the same person with the same approach. Moreover, test planning, test case writing and test data definition need special skills which developers generally do not possess. Our mindset was changed by our CTO, who saw this as an opportunity to improve the efficiency of development.

In our team, we kept all of the aforementioned risks in mind and tailored the process to negate all of them. We tried several versions of the process, and in a short time we found the most important thing – we need to differentiate between tasks (new development) and defects (bug fixes). You’ll see why later.

Generally, it is much easier to write test cases for known functionality. This is the case for defects, which only fix or slightly modify an already tested feature. Experience with the existing (sub-)system, where a feature is only updated and the testing approaches are well known, helps the QA engineer define the set of all necessary test cases, edge-case scenarios and also the expected visual aspects. Therefore, based on the well-defined test cases, a developer should be able to develop the fix and test it in a way which ensures it meets the quality requirements. Later, a QA engineer interviews the developer to find out his confidence level about the testing and also asks several direct questions about the most critical areas. Based on this information and on the experience with the particular developer [2], the QA engineer then decides which defects need to be retested and which can be closed right away.

On the other hand, tasks usually represent the development of new features. Since the people writing code at Y Soft are not “coders” but developers, they have to propose solutions for many low-level issues throughout the development process. Therefore, it frequently happens that some aspects and limitations are discovered later in the sprint, making it very challenging for a QA engineer to define a complete set of test cases in advance. Also, without proper hands-on experience with the final work, it is very difficult to define requirements for visual aspects and to judge user friendliness. Therefore, tasks always have to be retested by a QA engineer.

Nevertheless, defining at least some of these tests brings certain benefits that are also common for the defects:

  • A QA engineer can discover inconsistencies between his/her understanding compared to the developer’s understanding of the work to be done. It is generally better to find them before significant time has been spent on development of something undesired and reworking it later.
  • A partial test suite can be very helpful to the developer during the development, as it can be used as a checklist to cover many possible scenarios.
  • Some other scenarios can be discovered during the development and can be added to the test suite. These test cases would otherwise probably not exist.
  • As the developer performs the tests himself, many issues are found and fixed in a much shorter time and with less effort than if they were found and reported back by the QA department (several days later). This way we can assure higher quality of the developed product in the development phase.
  • Based on the current state of work and human resources, the team can flexibly agree on the extent to which developers will test their work. Either they do extensive tests to help QA engineers, or they perform only a basic set of tests in order to move on to the next development task sooner.

These result in:

  • shorter development cycles (Open to Closed status)
  • less reopened tickets
  • better understanding of the whole solution for all members of the team

The process itself consists of the following steps:

  1. When the sprint backlog is defined:
    1. A QA Engineer creates a set of test cases for each of the tickets (pre-)planned for the sprint;
    2. The test cases are defined in a subtask of each ticket. The subtasks are named “Testing of [ticketID]”;
  2. At the sprint planning meeting:
    1. The QA engineer consults on the technical details of the solution and proposed tests with other team members and the current product manager;
    2. The effort for each ticket is estimated including the testing part;
  3. Developers have to test their work and switch the ticket to status “To Test”:
    1. All defects are tested in a standardized testing environment (ideally prepared by a QA engineer);
    2. Tasks can be tested in a development environment (e.g. running on a developer’s machine built by a development tool);
  4. When a ticket has the “To Test” status, the QA engineer:
    1. Evaluates which defects need to be tested again and which do not;
    2. Retests all tasks;

It is important to note that the aforementioned benefits have only been subjectively observed by the members of the team, as none of them has been measured in any systematic way. Doing that would require returning to the old way of working in order to make the necessary measurements. Since the members of the team are satisfied with the new process, there is no need or motivation to revert to the old method. A change from less than 50% to 95% of closed tickets from the end of Sprint 17 to the end of Sprint 18, and the 100% fulfillment of sprint goals in the last four sprints, present a sufficiently strong argument to try this process in other teams.

[1] The orange line represents the percentage of developers’ time that was planned for a sprint (the time is estimated for each ticket). The green line represents the percentage of that planned estimated time for which the tickets were closed at the end of the sprint. The first attempt to use developer testing was in Sprint 18. In Sprint 22, we resumed the process. The trend of about 70% of work planned and more than 90% finished remains to date.

[2] In order to gain experience with the developers, there has to be a period of several sprints where every defect is retested. The QA engineer needs to measure the ratio of closed and reopened defects per developer. During this period the QA engineer can also find out whether he/she is able to define all the necessary test cases beforehand.

I am trying to set up a better Go development environment and decided to give vim-go a try (which also resulted in me replacing Vundle with Pathogen, which is much more straightforward). Installing everything was a breeze, and I only encountered a problem when I tried to make tagbar work, because tagbar does not work with BSD ctags and requires Exuberant Ctags 5.5.

The simplest way to install Exuberant Ctags is with brew.

ondra@nb218 ~/Downloads/ctags-5.8
$ brew install ctags
Warning: You are using OS X 10.11.
We do not provide support for this pre-release version.
You may encounter build failures or other breakage.

However, brew is still having problems running on 10.11 and many packages fail to build, ctags being no exception. So let’s see how we can deploy ctags into /usr/local without stepping on brew’s toys.

First of all, we need to determine the prefix to use via brew diy:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew diy --name=ctags --version=5.8
--prefix=/usr/local/Cellar/ctags/5.8

Now you can use brew fetch to download the source for ctags, or just download it from SourceForge. To get the source using brew, run:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew fetch --build-from-source ctags
==> Downloading https://downloads.sourceforge.net/ctags/ctags-5.8.tar.gz
Already downloaded: /Library/Caches/Homebrew/ctags-5.8.tar.gz
SHA1: 482da1ecd182ab39bbdc09f2f02c9fba8cd20030
SHA256: 0e44b45dcabe969e0bbbb11e30c246f81abe5d32012db37395eb57d66e9e99c7

==> Downloading https://gist.githubusercontent.com/naegelejd/9a0f3af61954ae5a77e7/raw/16d981a3d99628994ef0f73848b6beffc7
Already downloaded: /Library/Caches/Homebrew/ctags--patch-26d196a75fa73aae6a9041c1cb91aca2ad9d9c1de8192fce8cdc60e4aaadbcbb
SHA1: 24c96829dfdc58b215bfccf5445a409efba1ffe5
SHA256: 26d196a75fa73aae6a9041c1cb91aca2ad9d9c1de8192fce8cdc60e4aaadbcbb

Anyhow, when you have the source extracted, run ./configure with the prefix from brew diy, followed by make && make install:

ondra@nb218 ~/Downloads/ctags-5.8
$ ./configure --prefix=/usr/local/Cellar/ctags/5.8
Exuberant Ctags, version 5.8
Darwin 15.0.0 Darwin Kernel Version 15.0.0: Sun Jul 26 19:48:55 PDT 2015; root:xnu-3247.1.78~15/RELEASE_X86_64 x86_64
checking whether to install link to etags... no

And link ctags via brew using brew link:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew link ctags
Linking /usr/local/Cellar/ctags/5.8... 2 symlinks created

And you are done. Once building under 10.11 is fixed in brew, simply unlink ctags and use brew install as usual.

[Screenshot: FindBugs GUI]

In the previous article, I described the creation of a new FindBugs detector for hard-coded passwords and cryptographic keys. I also mentioned some imperfections, and I have decided to learn more about FindBugs and improve the detection.

The Java virtual machine has a stack architecture – operands must be pushed onto the stack before a method is invoked, a given number of stack values is consumed during the invocation, and the produced return value (if any) is pushed subsequently. My detector class extends OpcodeStackDetector, which implements the abstract interpretation technique to collect approximate information about the values on the operand stack for each code location. These pieces of information (usually called facts) are kept only for those locations where the derived value of the fact does not depend on the preceding control flow (for example, the value is the same for each possible branch executed in earlier conditional statements).

One of the facts available for stack values is the actual value of a number or String (related to the constant propagation performed by compilers during optimization). We can use this to detect hard-coded values – a known constant means a hard-coded value. However, we also need to track other data types besides Strings (numbers can be ignored) to detect passwords in char arrays and hard-coded keys. In addition, there is one more issue with this approach…

Tracking concrete values is unnecessarily complicated, and the value often becomes unknown – we only need to know whether the value is constant, not which constant is on the stack. Consider a piece of code like this:

private Connection getConnection(String user) {
    String password;
    if ("root".equals(user)) {
        password = "superSecurePassword";
    } else {
        password = "differentPassword";
    }
    return DriverManager.getConnection(DB_URL, user, password);
}

The constant value in the password variable is known inside both branches, but these values are forgotten after the conditional statement, since they differ. For this reason, a weakness like this was reported neither by the previous version of the detector nor by the original FindBugs detector for constant database passwords. Even if there is only one possible constant, the analysis can fail because of null values; see this code (looking a bit unreal, but demonstrating the problem):

String pwd = null;
if (shouldConnect()) {
    pwd = "hardcoded";
}
if (pwd != null) {
    Connection connection = DriverManager.getConnection(url, user, pwd);
    // some code working with database
}

We can easily see that the password variable always has the value “hardcoded”, but the performed analysis is linear and the fact is forgotten right after the first conditional statement. The second condition cannot bring the forgotten constant back, so the weakness is again not detected.

Fortunately, these issues can be solved by setting and reading a custom user value fact, which OpcodeStackDetector allows (if the CustomUserValue annotation is added to the extending class). Our fact has only one value, marking hard-coded stack items, or it is null to indicate the unknown state (the default). We periodically check whether a value on the stack is a known constant or null and, if it is, set the user value for it; propagation is then done automatically. If the analysis later merges facts from different control flow branches with different constants (or null), the user value is the same and is not reset to the default. The custom user value is also used to mark hard-coded passwords and keys of the other password and key data types; detection of those objects remains similar to the previous version of the detector. A weakness is reported if a sink parameter has a non-null user value and the stack value is not null (null passwords are not considered to be hard-coded).

After this improvement, the detector uses proper flow analysis; however, it is restricted to a method scope, and hard-coded values in fields are reported only if they are used in the same class. Inter-method and inter-class analysis is a future challenge, but I have kept the reporting of hard-coded fields with suspicious names and an unknown sink so as not to miss important positives. In contrast to the previous version, these fields are reported with lower priority, and only if they were not already reported by the proper flow analysis technique, to prevent duplicate warnings. Moreover, all such fields are reported in a single warning per class to make possible false positives less distracting.

Another improvement is the possibility to configure more method parameters as password or key sinks. If more than one parameter is hard-coded, only a single warning is produced and the parameters are listed in the detailed message. The last important change is that hard-coded cryptographic keys are reported as a separate bug pattern, since hard-coded passwords and keys have different CWE identifiers (259 and 321) and are equally important. The decision between the reported warnings is made automatically based on the data types of the hard-coded parameters.

I have tested the detector with the Juliet Test Suite; using the proper analysis, it can reveal both types of weakness in 17 flow variants (out of 37) and all sink variants with no false positives. The original FindBugs detector reveals weaknesses in 10 flow variants and only for database passwords; other password methods and hard-coded keys are not detected in any variant.

You can see the detector class on GitHub. Happy coding with no hard-coded passwords!


FindBugs is a great open source tool for the detection of software bugs in Java. It uses static analysis to search compiled classes for hundreds of bug patterns, and even more can be found using the FindSecurityBugs and fb-contrib plugins. However, before my recent contribution, there was no general detector for hard-coded passwords and cryptographic keys. Hard-coded passwords are identical for each installation and can be easily extracted, which is likely to be exploited (see CWE-259 for more information). FindBugs and FindSecurityBugs could already detect this vulnerability, but only for constant database passwords and two very specific cases. I have created a detector (accepted into FindSecurityBugs) which is able to find hard-coded values of Strings, char and byte arrays or BigIntegers used as an input parameter of one of the configured methods, such as KeyStore.load or the KeySpec constructors.

To add a new detector, we have to create a class that implements Detector or extends a prepared class with helper functionality (I have used OpcodeStackDetector). An instance of BugReporter, passed in the constructor, and its method reportBug are used to report problems (BugInstance objects). We also need to add the class name to findbugs.xml for it to be executed, and edit messages.xml to provide information about the detections. A good start for thinking about the detection logic is to write a bunch of flawed code samples and look at their bytecode (a plugin for IDEA can be used). We can write unit tests for them by mocking BugReporter.
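
For example, one such flawed sample (a minimal sketch) passes a constant char array to KeyStore.load, one of the configured sink methods:

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class FlawedSample {
    public static KeyStore loadStore() throws Exception {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = new FileInputStream("keystore.jks")) {
            ks.load(in, "s3cr3tPassword".toCharArray()); // hard-coded password – should be reported
        }
        return ks;
    }
}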

The detector class can use the visitor design pattern (if it implements Visitor) to react to events while the analyzed class is scanned. I started by overriding the method sawOpcode, which is called every time an instruction is read. Since we are interested in invocations of specific methods, we need to check whether it is one of the invoke instructions and get the full called method name, such as java/security/KeyStore.load(Ljava/io/InputStream;[C)V, which contains the method class with its package, the argument types ([C is a char array; parameters of object types start with L and end with a semicolon) and the return type (V for void). The method name can be obtained by calling the methods getClassConstantOperand, getNameConstantOperand and getSigConstantOperand inherited from DismantleByteCode. If it is one of the problematic methods (loaded from resource files), we can create a BugInstance, add the currently analyzed line plus some info to it, and report the bug. Now we have a password usage detector, but not a hard-coded password detector, so it is time to eliminate false positives (detections that are not real bugs).

For String passwords, we can utilize OpcodeStackDetector and check the nullness of stack.getStackItem(0).getConstant() to detect the usage of a constant String as the method parameter. Unfortunately, it is not so easy for the other variable types. To detect that an array is initialized with hard-coded values, I check whether the instruction for new array creation is followed by push and array store instructions while, for example, no methods are called in between. Constant arrays are also converted from constant Strings using the methods toCharArray and getBytes. After implementing this, we can detect BigIntegers too, since they can be constructed from Strings or byte arrays.

In terms of so-called taint analysis, we are able to detect the vulnerability source (hard-coded data) and the sink (usage as a password or in a cryptographic function), but a bug should be reported only if there is a flow from the source to the sink (we cannot be sure that a hard-coded value is really a password until it is used as a password parameter). In the current implementation, no complex flow analysis is performed; we assume that a taint source followed by a taint sink of a matching type inside the same method body is always related. For this reason, false positives are easy to demonstrate, but they are quite uncommon in practice. On the other hand, local hard-coded declarations are forgotten when another method is analyzed (the visit method is overridden to reset the state), so passwords are not detected if they are passed as a parameter and used in another method.

Class fields are also taken into account – if constant data is stored in them, we remember that and consider them a taint source when they are read. Because of this, the order of the methods matters, and since the static initializer section is appended to the end of the class by the compiler, its analysis is run ‘manually’ by calling doVisitMethod when the class analysis starts. In addition, if a field stores hard-coded data and its name is suspicious (like password, secret or aesKey), the bug is reported immediately, since the bug confidence is high and, if the field were used in a different class, it would not be reported otherwise (on the other hand, it can now be reported twice).

You can see the whole code on GitHub. I have mentioned some imperfections, but I think the detector works quite well. Unfortunately, there is not much information about writing detectors, so creating them can be a matter of trial and error. If you have an idea for an improvement or a new detector, don’t hesitate to contact me or pull the code directly. 🙂