After finishing the hard-coded passwords detector, I have focused on improving the detection of the most serious security bugs that can be found by static taint analysis. SQL injection, OS command injection and cross-site scripting (XSS) rank first, second and fourth in the CWE Top 25 most dangerous software errors (while the well-known buffer overflow, not applicable to Java, is placed third). Path traversal, unvalidated redirect, XPath injection and LDAP injection are related types of weaknesses – unvalidated user input can exploit the syntax of an interpreter and cause a vulnerability. Injections in general are also the number one risk in the OWASP Top 10, so a reliable open-source static analyser for these kinds of weaknesses could really make the world a more secure place 🙂

FindBugs already has detectors for some kinds of injections, but many bugs are missed due to insufficient flow analysis, unknown taint sources and sinks, and the goal of zero false positives (even though there are some). In contrast, the aim of the bug detectors in FindSecurityBugs is to be helpful during security code review and not to miss any vulnerability – there was some effort to reduce false positives, but before my contribution almost all taint sinks were reported in practice. Unfortunately, searching for a real problem among many false warnings is quite tedious. The aim of the new detection mechanism is to report more high-confidence bugs (with a minimum of false positives) than the FindBugs detectors, plus report lower-confidence bugs with decreased priority – not missing any real bugs while having a much lower false-positive rate than FindSecurityBugs had originally.

For a reliable detection, we need a good data-flow analysis. I have already mentioned the OpcodeStackDetector class in previous articles, but there is a more advanced and general mechanism in FindBugs. We can create and register classes performing a custom data-flow analysis and request their results later in detectors. Methods are symbolically executed after building a control flow graph made of blocks of instructions connected by different types of edges (such as goto, ifcmp or exception handling), which FindBugs attempts to prune for impossible flow. We have to create a class to represent facts at different code locations – we want to remember some information (called a fact) for every reachable instruction, which can later help us decide whether a particular bug should be reported at that location. We need to model the effects of instructions and edges on facts, specify the way of merging facts from different flow branches and make everything work together. Fortunately, there are existing classes designed for extension that make this process easier. In particular, FrameDataflowAnalysis models values in the operand stack and local variables, so we can concentrate on the sub-facts about these values. The actual fact is then a frame of these sub-facts. This class models the effects of instructions by pushing the default sub-fact on the modelled stack and popping the right number of stack values. It also automatically moves sub-facts between the stack and the part of the frame with local variables.

Let's have a look at which classes had to be implemented for the taint analysis. If we want to run a custom data-flow analysis, a special class implementing IAnalysisEngineRegistrar must be created and referenced from findbugs.xml.

<!-- Registers engine for taint analysis dataflow -->
<EngineRegistrar
    class="com.h3xstream.findsecbugs.taintanalysis.EngineRegistrar"/>

This simple class (called EngineRegistrar) makes a new instance of TaintDataflowEngine and registers it with the global analysis cache.

public class EngineRegistrar implements IAnalysisEngineRegistrar {

    @Override
    public void registerAnalysisEngines(IAnalysisCache cache) {
        new TaintDataflowEngine().registerWith(cache);
    }
}

Thanks to this, at the right time, the analyze method of TaintDataflowEngine (implementing IMethodAnalysisEngine) is called for each method of the analyzed code. This method requests the objects needed for the analysis, instantiates two custom classes (described below) and executes the analysis.

public class TaintDataflowEngine
    implements IMethodAnalysisEngine<TaintDataflow> {

    @Override
    public TaintDataflow analyze(IAnalysisCache cache)
            throws CheckedAnalysisException {
        CFG cfg = cache.getMethodAnalysis(CFG.class, descriptor);
        DepthFirstSearch dfs = cache
            .getMethodAnalysis(DepthFirstSearch.class, descriptor);
        MethodGen methodGen = cache
            .getMethodAnalysis(MethodGen.class, descriptor);
        TaintAnalysis analysis = new TaintAnalysis(
            methodGen, dfs, descriptor);
        TaintDataflow flow = new TaintDataflow(cfg, analysis);
        flow.execute();
        return flow;
    }

    @Override
    public void registerWith(IAnalysisCache iac) {
        iac.registerMethodAnalysisEngine(TaintDataflow.class, this);
    }
}

TaintDataflow (extending Dataflow) is really simple and stores the results of the performed analysis (used later by detectors).

public class TaintDataflow
        extends Dataflow<TaintFrame, TaintAnalysis> {

    public TaintDataflow(CFG cfg, TaintAnalysis analysis) {
        super(cfg, analysis);
    }
}

TaintAnalysis (extending FrameDataflowAnalysis) implements data-flow operations on TaintFrame, but it mostly delegates them to other classes.

public class TaintAnalysis
        extends FrameDataflowAnalysis<Taint, TaintFrame> {

    private final MethodGen methodGen;
    private final TaintFrameModelingVisitor visitor;

    public TaintAnalysis(MethodGen methodGen, DepthFirstSearch dfs,
            MethodDescriptor descriptor) {
        super(dfs);
        this.methodGen = methodGen;
        this.visitor = new TaintFrameModelingVisitor(
            methodGen.getConstantPool(), descriptor);
    }

    @Override
    protected void mergeValues(TaintFrame frame, TaintFrame result,
            int i) throws DataflowAnalysisException {
        result.setValue(i, Taint.merge(
            result.getValue(i), frame.getValue(i)));
    }

    @Override
    public void transferInstruction(InstructionHandle handle,
            BasicBlock block, TaintFrame fact)
            throws DataflowAnalysisException {
        visitor.setFrameAndLocation(
            fact, new Location(handle, block));
        visitor.analyzeInstruction(handle.getInstruction());
    }

    // some other methods
}

TaintFrame is just a concrete class for abstract Frame<Taint>.

public class TaintFrame extends Frame<Taint> {

    public TaintFrame(int numLocals) {
        super(numLocals);
    }
}

The effects of instructions are modelled by TaintFrameModelingVisitor (extending AbstractFrameModelingVisitor), so we can code with the visitor pattern again.

public class TaintFrameModelingVisitor
    extends AbstractFrameModelingVisitor<Taint, TaintFrame> {

    private final MethodDescriptor methodDescriptor;

    public TaintFrameModelingVisitor(ConstantPoolGen cpg,
            MethodDescriptor method) {
        super(cpg);
        this.methodDescriptor = method;
    }

    @Override
    public Taint getDefaultValue() {
        return new Taint(Taint.State.UNKNOWN);
    }

    @Override
    public void visitACONST_NULL(ACONST_NULL obj) {
        getFrame().pushValue(new Taint(Taint.State.NULL));
    }

    // many more methods
}

The taint fact – the information about a value in the frame (a stack item or a local variable) – is stored in a class called simply Taint.

The most important piece of information in Taint is the taint state, represented by an enum with values TAINTED, UNKNOWN, SAFE and NULL. TAINTED is pushed for an invoke instruction whose method call is configured to be tainted (e.g. getParameter from HttpServletRequest or readLine from BufferedReader), SAFE is stored for the ldc (load constant) instruction, NULL for aconst_null, and UNKNOWN is the default value (this description is a bit simplified). Merging of taint states is defined such that if we order them as TAINTED > UNKNOWN > SAFE > NULL, the merge of two states is the greater value (e.g. TAINTED + SAFE = TAINTED). This merging is done not only where a code block of the control flow graph has more than one input edge – I have also implemented a mechanism of taint-transferring methods. For example, consider calling the toLowerCase method on a String before passing it to a taint sink – instead of pushing a default value (UNKNOWN), we can copy the state of the parameter so the information is not forgotten. Merging is also done in more complicated cases such as the append method of StringBuilder – the taint state of the argument is merged with the taint state of the StringBuilder instance and returned to be pushed on the modelled stack.
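The merge rule described above can be sketched in a few lines (a minimal sketch with an invented class name; the real Taint.merge in FindSecurityBugs also merges additional metadata such as source locations):

```java
public class TaintMergeSketch {

    // declared from most to least severe: TAINTED > UNKNOWN > SAFE > NULL
    enum State { TAINTED, UNKNOWN, SAFE, NULL }

    // merging two states keeps the "worse" one, i.e. the smaller ordinal
    static State merge(State a, State b) {
        return a.ordinal() <= b.ordinal() ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(merge(State.TAINTED, State.SAFE)); // TAINTED
        System.out.println(merge(State.SAFE, State.NULL));    // SAFE
    }
}
```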

There were two problems with taint state transfer that had to be solved. First, the taint state must be transferred directly to mutable classes too, not only to their return values (and the method can be void). Not only do we set the taint state for an object when it is first seen in the analysed method and copy the state from then on, but we also change it according to instance method calls. For example, a StringBuilder is safe when a new instance is created with the parameterless constructor, but it can taint itself by a call to its append method. If only methods with safe parameters are called, the taint state of the StringBuilder object remains safe too. For this reason, the effect of load instructions is modified to record the index of the loaded local variable in the Taint instance of the corresponding stack item. Then, for specified methods of mutable classes, we can transfer the taint state to the local variable whose index is stored in the Taint. Second, taint-transferring constructors (methods <init> in bytecode) must be handled specially because of the way new objects are created in Java. The new instruction is followed by dup and invokespecial, which consumes the duplicated value and initializes the object remaining at the top of the stack. Since the new object is not stored in any variable, we must transfer the taint value from the merged parameters to the stack top separately.
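To make the mutable-object case concrete, here is ordinary application code of the kind the analysis has to handle (buildQuery and its argument are invented for illustration; in a real program userInput would come from a taint source such as getParameter):

```java
public class MutableTaintExample {

    // sb starts SAFE (parameterless constructor), becomes tainted by
    // appending the potentially tainted userInput, and toString() then
    // returns a tainted value that could flow into a SQL sink
    static String buildQuery(String userInput) {
        StringBuilder sb = new StringBuilder();          // new instance: SAFE
        sb.append("SELECT * FROM users WHERE name = '"); // constant: still SAFE
        sb.append(userInput);                            // taints sb itself
        sb.append("'");
        return sb.toString();                            // result inherits sb's state
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("alice"));
    }
}
```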

Bugs related to taint analysis are identified by TaintDetector (implementing Detector). For better performance, before the methods of a class are analyzed, its constant pool (the part of the class file format with all needed constants) is searched, and the analysis continues only if there are references to some taint sinks. Then a TaintDataflow instance is loaded for each method, and the locations of its control flow graph are iterated until a taint sink method is found. This means we find all invoke instructions used in the currently analysed method and check whether the called methods are related to the searched weaknesses. Facts (instances of the Taint class) from TaintDataflow are extracted for each sink parameter of a sink method. A bug is reported with high confidence (and priority) if the taint state is TAINTED, with medium confidence for the UNKNOWN taint state, and with low confidence for SAFE and NULL (just in case the analysis is wrong; these warnings are not normally shown anywhere). The Taint class also contains references to taint source locations, so these are shown in bug reports to make review easier – you should see a path between the taint sources and the taint sink. TaintDetector itself is abstract, so it must be extended to detect concrete weakness types (like command injection), and the InjectionSource interface must be implemented to specify taint sinks (the name of the interface is a bit misleading) and the constant pool items that identify candidate classes.

public class CommandInjectionDetector extends TaintDetector {

    public CommandInjectionDetector(BugReporter bugReporter) {
        super(bugReporter);
    }

    @Override
    public InjectionSource[] getInjectionSource() {
        return new InjectionSource[] {new CommandInjectionSource()};
    }
}

CommandInjectionSource overrides the method getInjectableParameters, which returns an instance of InjectionPoint containing the parameters that must not be tainted and the weakness type to report. The boolean method isCandidate looks up the names of the taint sink classes in the constant pool and returns true if they are present.

TaintDetector is currently used to detect command, SQL, LDAP and script (the eval method of ScriptEngine) injections and unvalidated redirects. More bug types and taint sinks should follow soon. Test results look quite promising so far. Inter-procedural analysis (not restricted to a method scope) should be the next big improvement, which could make this analysis really helpful. Then everything should be tested with a large amount of real code to iron out the kinks. You can see the discussed classes in the taintanalysis package and try the new version of FindSecurityBugs.

Yes, you heard right! Developer testing – testing done by developers! And yes, I’m talking about confirmation testing, known as “the changelog” in our R&D department. The result: an improvement from 50 % to 95 % of tickets closed at the end of a sprint, and all sprint goals completed on time four sprints in a row! [1]

Percentage of work planned and really done over sprints

Our development team has recently been trying very hard to shorten the development cycle of features and fixes. One of the biggest delays we identified was caused by tickets waiting in the “To Test” state. It means that the implementation part has been completed and the ticket is waiting for a QA engineer to confirm its functionality by testing it. As I was the only tester for 7 developers on the team, the tickets with lower severity simply had to wait, often more than a week. Moreover, the testing activities were concentrated at the end of a sprint. A ticket reopened too late can easily be the cause of a failure to reach the team’s sprint goals.

Generally, the literature strongly recommends not letting developers test their own work. The common reasoning is that everybody is in love with their creations and would not like to damage them. Also, some defects are approach-related and thus cannot be discovered by the same person using the same approach. Moreover, test planning, test case writing and test data definition need special skills, which developers generally do not possess. Our mindset was changed by our CTO, who saw this as an opportunity to improve the efficiency of development.

In our team, we kept all of the aforementioned risks in mind and tailored the process to negate them. We tried several versions of the process. In a short time we found the most important thing – we need to differentiate between tasks (new development) and defects (bug fixes). You’ll see why later.

Generally, it is much easier to write test cases for known functionality. This is the case for defects, which only fix or slightly modify an already tested feature. Experience with an existing (sub-)system, where a feature is only updated and the testing approaches are well known, helps the QA engineer define a set of all the necessary test cases, edge-case scenarios and expected visual aspects. Therefore, based on the well-defined test cases, a developer should be able to develop the fix and test it in a way that ensures it meets the quality requirements. Later, a QA engineer interviews the developer to find out his confidence level about the testing and also asks several direct questions about the most critical areas. Based on this information and on experience with the particular developer[2], the QA engineer then decides which defects need to be retested and which can be closed right away.

On the other hand, tasks usually represent the development of new features. Since the people writing the code at Y Soft are not “coders” but developers, they have to propose solutions for many low-level issues throughout the development process. Therefore, it frequently happens that some aspects and limitations are discovered later in the sprint, making it very challenging for a QA engineer to define a complete set of test cases in advance. Also, without proper hands-on experience with the final work, it is very difficult to define requirements for visual aspects and to judge user friendliness. Therefore, tasks always have to be retested by a QA engineer.

Nevertheless, defining at least some of these tests brings certain benefits that are also common for the defects:

  • A QA engineer can discover inconsistencies between his/her understanding and the developer’s understanding of the work to be done. It is generally better to find them before significant time has been spent developing something undesired and having to rework it later.
  • A partial test suite can be very helpful to the developer during the development, as it can be used as a checklist to cover many possible scenarios.
  • Some other scenarios can be discovered during the development and can be added to the test suite. These test cases would otherwise probably not exist.
  • As the developer performs the tests himself, many issues are found and fixed in a much shorter time and with less effort than if they were found and reported back by the QA department (several days later). This way we can assure higher quality of the developed product already in the development phase.
  • Based on the current state of work and human resources, the team can flexibly agree on the extent to which developers will test their work. Either they do extensive tests to help QA engineers, or they perform only a basic set of tests in order to move on to the next development task sooner.

These result in:

  • shorter development cycles (Open to Closed status)
  • less reopened tickets
  • better understanding of the whole solution for all members of the team

The process itself consists of the following steps:

  1. When the sprint backlog is defined:
    1. A QA Engineer creates a set of test cases for each of the tickets (pre-)planned for the sprint;
    2. The test cases are defined in a subtask of each ticket. The subtasks are named “Testing of [ticketID]”;
  2. At the sprint planning meeting:
    1. The QA engineer consults on the technical details of the solution and proposed tests with other team members and the current product manager;
    2. The effort for each ticket is estimated including the testing part;
  3. Developers have to test their work and switch the ticket to status “To Test”:
    1. All defects are tested in a standardized testing environment (ideally prepared by a QA engineer);
    2. Tasks can be tested in a development environment (e.g. running on a developer’s machine built by a development tool);
  4. When a ticket has the “To Test” status, the QA engineer:
    1. Evaluates which defects need to be tested again and which do not;
    2. Retests all tasks;

It is important to note that the aforementioned benefits have only been subjectively observed by the members of the team, as none of them has been measured in any systematic way. Doing that would require returning to the old way of working in order to make the necessary measurements. Since the members of the team are satisfied with the new process, there is no need or motivation to revert to the old method. A change from less than 50 % to 95 % of closed tickets from the end of Sprint 17 to the end of Sprint 18, and the 100 % fulfillment of sprint goals in the last four sprints, presents a sufficiently strong argument to try this process in other teams.

[1] The orange line represents the percentage of developers’ time that was planned for a sprint. The time is estimated for each ticket. The green line represents the percentage of that planned estimated time for which the tickets were closed at the end of the sprint. The first attempt to use developer testing was in Sprint 18. In Sprint 22, we resumed the process. The trend of about 70 % of work planned and more than 90 % finished remains to date.

[2] In order to gain experience with the developers, there has to be a period of several sprints where every defect is retested. The QA engineer needs to measure the ratio of closed and reopened defects per developer. During this period the QA engineer can also find out whether he/she is able to define all the necessary test cases beforehand.

I am trying to set up a better Go development environment and decided to give vim-go a try (which also resulted in me replacing Vundle with Pathogen, which is much more straightforward). Installing everything was a breeze and I only encountered a problem when I tried to make tagbar work, because tagbar does not work with BSD ctags and requires Exuberant Ctags 5.5.

The simplest way to install Exuberant Ctags is with brew.

ondra@nb218 ~/Downloads/ctags-5.8
$ brew install ctags
Warning: You are using OS X 10.11.
We do not provide support for this pre-release version.
You may encounter build failures or other breakage.

However, brew is still having problems running on 10.11 and many packages fail to build, ctags being no exception. So let’s see how we can deploy ctags into /usr/local without stepping on brew’s toys.

First of all, we need to determine the prefix to use via brew diy:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew diy --name=ctags --version=5.8
--prefix=/usr/local/Cellar/ctags/5.8

Now you can use brew fetch to download the source for ctags, or just download it from SourceForge. To get the source using brew, use:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew fetch --build-from-source ctags
==> Downloading https://downloads.sourceforge.net/ctags/ctags-5.8.tar.gz
Already downloaded: /Library/Caches/Homebrew/ctags-5.8.tar.gz
SHA1: 482da1ecd182ab39bbdc09f2f02c9fba8cd20030
SHA256: 0e44b45dcabe969e0bbbb11e30c246f81abe5d32012db37395eb57d66e9e99c7

==> Downloading https://gist.githubusercontent.com/naegelejd/9a0f3af61954ae5a77e7/raw/16d981a3d99628994ef0f73848b6beffc7
Already downloaded: /Library/Caches/Homebrew/ctags--patch-26d196a75fa73aae6a9041c1cb91aca2ad9d9c1de8192fce8cdc60e4aaadbcbb
SHA1: 24c96829dfdc58b215bfccf5445a409efba1ffe5
SHA256: 26d196a75fa73aae6a9041c1cb91aca2ad9d9c1de8192fce8cdc60e4aaadbcbb

Anyhow, when you have the source extracted, run configure, make and make install with the prefix from brew diy:

ondra@nb218 ~/Downloads/ctags-5.8
$ ./configure --prefix=/usr/local/Cellar/ctags/5.8
Exuberant Ctags, version 5.8
Darwin 15.0.0 Darwin Kernel Version 15.0.0: Sun Jul 26 19:48:55 PDT 2015; root:xnu-3247.1.78~15/RELEASE_X86_64 x86_64
checking whether to install link to etags... no

And link ctags via brew using brew link:

ondra@nb218 ~/Downloads/ctags-5.8
$ brew link ctags
Linking /usr/local/Cellar/ctags/5.8... 2 symlinks created

And you are done. When building under 10.11 is fixed in brew, simply unlink ctags and use brew install as usual.

FindBugs GUI

In the previous article, I was describing the creation of a new FindBugs detector for hard-coded passwords and cryptographic keys. I also mentioned some imperfections and I have decided to learn more about FindBugs and improve the detection.

The Java virtual machine has a stack architecture – operands must be pushed on the stack before a method is invoked, a given number of stack values is consumed during the invocation, and the produced return value (if any) is pushed subsequently. My detector class extends OpcodeStackDetector, which implements the abstract interpretation technique to collect approximate information about the values on the operand stack for each code location. These pieces of information (usually called facts) are kept only for those locations where the derived value of the fact does not depend on the preceding control flow (for example, the value is the same for each possible branch executed in earlier conditional statements).

One of the facts available for stack values is the actual value of a number or String (related to the constant propagation performed by compilers during optimization). We can use this to detect hard-coded values – a known constant means a hard-coded value. However, we also need to track other data types besides Strings (numbers can be ignored) to detect passwords in a char array and hard-coded keys. In addition, there is one more issue with this approach…
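As an aside, the kinds of non-String sinks meant here can be illustrated with standard JCE/KeyStore calls (the constant values are of course invented for this sketch):

```java
import java.security.KeyStore;
import javax.crypto.spec.SecretKeySpec;

public class HardcodedSinkExamples {
    public static void main(String[] args) throws Exception {
        // password sink taking char[]: the constant travels through toCharArray(),
        // so the detector must track taint beyond plain Strings
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, "hardcodedPassword".toCharArray()); // creates an empty keystore

        // hard-coded cryptographic key (CWE-321): constant bytes as key material
        SecretKeySpec key = new SecretKeySpec("0123456789abcdef".getBytes(), "AES");
        System.out.println(key.getAlgorithm());
    }
}
```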

Tracking concrete values is unnecessarily complicated and the value often becomes unknown – we only need to know whether the value is constant, not which constant is on the stack. Consider a piece of code like this:

private Connection getConnection(String user) {
    String password;
    if ("root".equals(user)) {
        password = "superSecurePassword";
    } else {
        password = "differentPassword";
    }
    return DriverManager.getConnection(DB_URL, user, password);
}

The constant value in the password variable is known inside both branches, but these values are forgotten after the conditional statement, since they differ. For this reason, a weakness like this was reported neither by the previous version of the detector nor by the original FindBugs detector for constant database passwords. Even if there is only one possible constant, the analysis can fail because of null values – see this code (looking a bit unrealistic, but demonstrating the problem):

String pwd = null;
if (shouldConnect()) {
    pwd = "hardcoded";
}
if (pwd != null) {
    Connection connection = DriverManager.getConnection(url, user, pwd);
    // some code working with database
}

We can easily see that the pwd variable always has the value “hardcoded”, but the performed analysis is linear and the fact is forgotten right after the first conditional statement. The second condition cannot bring the forgotten constant back, so the weakness is again not detected.

Fortunately, these issues can be solved by setting and reading a user value fact, which OpcodeStackDetector allows (if the CustomUserValue annotation is added to the extending class). Our fact has only one value for hard-coded stack items, or it is null to indicate the unknown state (the default). We can periodically check whether a value on the stack is a known constant or null, and set the user value for it if it is; propagation is done automatically. If the analysis then merges facts from different control flow branches with different constants (or null), the user value is the same and is not reset to the default. The custom user value is also used to mark hard-coded passwords and keys of the other password and key data types; the detection of those objects remains similar to the previous version of the detector. A weakness is reported if a sink parameter has a non-null user value and the stack value is not null (null passwords are not considered to be hard-coded).
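The difference between merging concrete constants and merging the boolean user value can be simulated in isolation (class and method names are invented; the merge rules just mimic the behaviour described above):

```java
public class FactMergeSketch {

    // constant-value fact: survives a merge only if both branches agree
    static String mergeConstant(String a, String b) {
        return (a != null && a.equals(b)) ? a : null; // differing constants -> unknown
    }

    // user-value fact: stays "hard-coded" when both branches are hard-coded
    static boolean mergeHardcoded(boolean a, boolean b) {
        return a && b;
    }

    public static void main(String[] args) {
        // the two branches from the getConnection example above
        System.out.println(mergeConstant("superSecurePassword", "differentPassword")); // null
        System.out.println(mergeHardcoded(true, true)); // true
    }
}
```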

The detector uses proper flow analysis after this improvement; however, it is restricted to a method scope, and hard-coded values in fields are reported only if they are used in the same class. Inter-method and inter-class analysis is a future challenge, but I have kept the reporting of hard-coded fields with suspicious names and an unknown sink so as not to miss important positives. In contrast to the previous version, these fields are reported with lower priority and only if they were not reported before using the proper flow analysis technique, to prevent duplicate warnings. Moreover, all fields are reported in a single warning per class to make possible false positives less distracting.

Another improvement is the possibility to configure more method parameters as password or key sinks. If more than one parameter is hard-coded, only a single warning is produced and the parameters are mentioned in the detailed message. The last important change is that hard-coded cryptographic keys are reported as a separate bug pattern, since hard-coded passwords and keys have different CWE identifiers (259 and 321) and are equally important. The decision between the reported warnings is made automatically based on the data types of the hard-coded parameters.

I have tested the detector with the Juliet Test Suite: using the proper analysis, it can reveal both types of weakness in 17 flow variants (out of 37) and all sink variants with no false positives. The original FindBugs detector reveals weaknesses in 10 flow variants and only for database passwords; other password methods and hard-coded keys are not detected in any variant.

You can see the detector class on GitHub. Happy coding with no hard-coded passwords!