Come to Y Soft (Technická 13, Brno) on 7 November 2017 at 18:00 for the pilot instalment of the Tech Support Meetup.

Do you know what is happening on your network?
What is network sniffing?
How do you analyze encrypted communication?

If you want to know the answers, don't hesitate and come to the meetup!

During this interactive workshop you will learn the basics of Wireshark, why this information is useful, and how to use it for effective network analysis and debugging.

Admission is free. Please register here; the number of seats is limited.

The workshop will be held in Czech and will be guided by an enthusiastic cipher cracker, Lenka Bačinská.

As I wrote some time ago, we use OWASP Dependency Check to scan our product for known vulnerabilities. But when you have reports from many components of the product, you need to manage them somehow. In the previous article, I briefly mentioned our tool that processes the reports. In this article, I'll present this tool. FWIW, we have open-sourced it. You can use it, you can fork it, you can send us pull requests for it.

Vulnerable libraries view

The vulnerable libraries view summarizes vulnerabilities grouped by library. Libraries are sorted by priority, which is computed as the number of affected projects multiplied by the score of the highest-severity vulnerability. This is probably the most important view. If you choose a vulnerable library, you can check all the details about its vulnerabilities, including the affected projects. When you decide to update a library to a newer version, you will probably wonder whether some known vulnerability is present in the new version. ODC Analyzer allows you to check this simply by entering the desired library version.

One might find the ordering controversial, mostly because it uses only the highest-rated vulnerability. Maybe the scoring system is not perfect (and no automatic scoring system can be perfect), but I find it reasonable. I assume that the highest-scored vulnerability is likely some remote code execution triggerable by a remote unauthenticated attacker. In such a case, does having multiple such vulnerabilities make the situation worse? Well, it might slightly increase the probability that the vulnerability will be exposed to the attacker, but having two such vulnerabilities can hardly make it twice as bad as having just one. So I wanted a single highest-rated vulnerability to be the ceiling on the risk the vulnerable library introduces to the project. Maybe we could improve the algorithm to rate multiple low-severity vulnerabilities higher than the highest-rated vulnerability alone. I already have an idea how to do this, but it has to be discussed first.
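
For illustration, the priority described above boils down to something like the following sketch (simplified Java; the names are illustrative and this is not the actual ODC Analyzer code):

    // Simplified sketch of the priority described above:
    // priority = number of affected projects * CVSS score of the worst vulnerability.
    // A library with no known vulnerability gets priority 0.
    import java.util.Collection;

    final class LibraryPriority {
        static double priorityOf(Collection<String> affectedProjects,
                                 Collection<Double> vulnerabilityScores) {
            double worstScore = vulnerabilityScores.stream()
                    .mapToDouble(Double::doubleValue)
                    .max()
                    .orElse(0.0);
            return affectedProjects.size() * worstScore;
        }
    }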

Vulnerabilities view

You can also list all vulnerabilities affecting any of the scanned projects. They are sorted by severity multiplied by the number of affected projects. The details available there are mostly the same as in the vulnerable libraries view.

Another interesting feature is vulnerability suppression. Currently, ODC Analyzer can generate a suppression that can be pasted into suppressions.xml, so it is taken into account the next time vulnerability scans run. We are considering making this smarter and moving suppression handling to ODC Analyzer itself.

Filters

Unless you have just a single team, most people will probably not be interested in the vulnerabilities of all projects. They will want to focus on their own projects instead. ODC Analyzer allows this by filtering a single project or even a subproject. One can also define a team and filter all projects belonging to that team.

Teams are currently defined in a configuration file; there is no web interface for that yet.

E-mail notifications

One can subscribe to new vulnerabilities of a project by e-mail. This allows one to watch all relevant vulnerabilities without periodic polling of the page.

Export to issue tracking system

We are preparing export to issue tracking systems. There is a preliminary implementation, but we might still perform some redesigns. We export one issue per vulnerability. The rationale behind this grouping is that it allows efficient discussion when multiple projects are affected by the same vulnerability. A side effect of such a design is that you need an extra project in your issue tracker, and you might want to create child issues for particular projects.

Currently, only JIRA export is implemented. However, if you want to export to <insert name of your favourite issue tracker here>, you can simply implement one interface, add a few lines of code for configuration parsing and send us a pull request 🙂
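
To give a rough idea of what such an extension involves, here is a purely hypothetical sketch of what an exporter interface might look like (the names and types below are illustrative and not the actual ODC Analyzer API):

    // Hypothetical sketch only; the real exporter interface in ODC Analyzer
    // uses different names and types.
    import java.util.Collection;

    interface IssueTrackerExporter {
        // Creates one issue per vulnerability, as described above,
        // and returns the identifier of the issue in the tracker.
        String exportVulnerability(String vulnerabilityId,
                                   String description,
                                   Collection<String> affectedProjects);
    }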

Some features under consideration

Affected releases

We perform scans on releases. It would be great to add an “affected releases” field to vulnerabilities. We, however, have to be careful to explain its meaning. We are not going to perform these scans on outdated, unsupported releases, so they will not appear there. It must therefore be clear that the list of affected releases is not exhaustive in this sense.

Discussion on vulnerabilities

We are considering adding a discussion to vulnerabilities. One might want to discuss the impact on a project. These discussions should be shared across projects, because that allows knowledge sharing across teams. However, this will probably be handled in issue tracking systems like JIRA, as we don't want to duplicate their functionality. We want to integrate with them, though.

Better branches support

If a project has two branches, it can currently be added only as two separate projects. It might be useful to take into account that multiple branches of a piece of software belong to the same project. There are some potential advantages:

  • Issues in production might have higher urgency than issues in development.
  • Watching a particular project would imply watching all the branches.

However, it is currently not clear how to implement it, and we don't want to start implementing this feature without a proper design. For example:

  • Some projects might utilize build branches in Bamboo.
  • Some projects can’t utilize build branches, because there are some significant differences across branches. For example, some projects switch from Maven to Gradle.
  • Is it useful to allow per-branch configuration (e.g., two branches belonging to different teams or watching only one branch)?
  • Should the branches be handled somewhat automatically?
  • How to handle different branching models (e.g. master + feature branches, master + devel + feature branches, …)?

Library tagging

Library tagging is useful for knowing what types of libraries are in the system. It is partially implemented. It works, but it has to be managed through direct access to the database. There has never been a GUI for creating a tag. Once some tags exist, there is a GUI for adding them to a library, but there is no way to create a new tag, nor a permission for doing so.

Homepage

The project was originally designed as more or less a single page. This did not scale, so we added some additional views. The current homepage is rather a historical leftover, and maybe it should be completely redesigned.

List of all libraries

Non-vulnerable libraries are not so interesting, but one might still want to list them for some purposes. While there is a hidden page containing all the libraries (including a hidden CSV output), it is not integrated into the rest of the application. We have needed this in the past, but we are not sure how to meaningfully integrate it with the rest of the system without creating too much clutter.

Conclusion

We have implemented a tool useful for handling libraries with known vulnerabilities and for cooperation across teams. This tool is in active development. If you find it useful, you can try it. If you miss a feature, you can contribute your own code. If you are unsure whether your contribution is welcome, you can discuss it first.

Black Hat Europe 2015

Black Hat is a famous computer security conference held annually in the USA, Europe and Asia. Philippe Arteau, the original developer of FindSecurityBugs, was accepted as a presenter, and because of my significant contributions to the software, he suggested that we present the tool together. Y Soft then supported me in taking this opportunity.

Black Hat Europe 2015 was held in November in Amsterdam, and over 1500 people from 61 countries attended the event. The conference opened with a keynote called What Got Us Here Won’t Get Us There, mentioning (over many slides) the upcoming security apocalypse, complexity as a problem, security anti-patterns, taking wrong turns, developing bad habits, missing opportunities and re-examining old truths. The Forum and three smaller rooms were then used for parallel briefings – usually one-hour presentations with slides and some demonstrations of the latest developments in information security. There was also a Business Hall with sponsors, free coffee and a Tool/Demo area for the sub-event called Arsenal. There, we and mostly independent researchers showed our “weapons” – every tool had a kiosk for two hours, and people came by to see short demonstrations and to ask questions.

Although we presented during lunch time, a fair number of Black Hat attendees came to discuss and see FindSecurityBugs in action. The most common questions were about what it can analyse (Java and other JVM languages like Scala, no source code needed), which security weaknesses it can detect (the list is here; a few people were happy that there are also detectors for Android) and whether it is free or not (yes, it is open source). People seemed to like the tool, and someone was quite surprised it is not a commercial product 🙂

There were many interesting briefings during the two days. One of the most successful presentations (with applause in the middle of it) explained a reliable software-only attack defeating full disk encryption (BitLocker) in Windows. If there was no pre-boot authentication and the attacked machine had joined a domain, it was possible to change the password at the login screen (and gain full access to the system) by setting up a mock domain controller with the user's password expired. Microsoft released a patch during the conference. Other briefings covered breaking into buildings, using continuous integration tools as an attack surface, fooling and tracking self-driving cars, and delivering exploits “with style” (encoded in an image/HTML polyglot). One presentation (about flaws in banking software) was cancelled (or probably postponed to another event) because the findings were too serious to disclose at that time, but there was an interesting talk about backend-as-a-service instead – many Android applications contain hard-coded credentials for accessing cloud services, and researchers were able to extract 56 million sensitive user records. You can download many of the presentations (and some white papers) from the Black Hat website.

I also had time for a little sightseeing – there was a special night bus for attendees to the city centre, and I was able to see it again before my flight home too. Amsterdam has nice architecture separated by kilometres of canals and is a very interesting city in general. What surprised me the most were the bicycles everywhere – I knew people ride there a lot, but I didn’t expect to see such a huge number of bikes going around and parked in the centre. Cyclists don’t wear helmets, sometimes carry a few children in a cart and don’t seem to be very careful (so I had to be a bit more careful than I’m used to when crossing streets). Walking through the red-light district De Wallen, with adult theatres and half-naked ladies behind glass doors, is a remarkable experience too (don’t try to take photos of them, they’ll get angry). Sex shops and “coffee shops” (selling cannabis) are quite common, and not only in this area.

Another surprise came at the airport, where the inspection was very thorough and I was forced to take everything out of my bag (which I was not very happy about, because it had taken me a long time to pack it into a single piece of cabin baggage). Only afterwards (when I connected to the airport Wi-Fi) did I realize what had recently happened in Paris. The plane was also surprisingly small, and for the first time I received special instructions for opening the emergency door (since my seat happened to be next to the right wing). Nevertheless, the whole trip was a nice experience for me and I was happy to be there, so maybe next time in London 🙂

While caring about the security of our own code is arguably important, it is not enough for building a secure product. Vulnerabilities might also arise from a 3rd party component. Handling those vulnerable libraries is thus another essential aspect of building a secure product.

Database of vulnerabilities

A simple way of managing a large number of 3rd party libraries might be using a vulnerability database. There is the well-known National Vulnerability Database (NVD) that contains information about many vulnerabilities. This is also what OWASP Dependency Check uses. Let me introduce it.

NVD aims to contain all known vulnerabilities of publicly available software. An entry about a vulnerability has a CVE number. (CVE means “Common Vulnerabilities and Exposures”, so CVEs are not limited to vulnerabilities, but that is not so important for now.) You can look at an example of a CVE. Various details might be available, but the level of detail may depend on multiple factors. For example, a not-yet-publicly-disclosed vulnerability might have a CVE number, but its description will obviously not be very verbose.

CVE numbers are usually assigned to CPEs (“Common Platform Enumeration”). A CPE is an identifier of vulnerable software. For example, the example CVE-2005-1234 mentioned above contains information that it affects cpe:/a:phpbb_group:phpbb-auction:1.0m and cpe:/a:phpbb_group:phpbb-auction:1.2m.
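
As a small illustration of the structure of such an identifier, the sketch below splits the CPE URI shown above into its parts (it ignores escaping rules and the newer CPE 2.3 formatted strings, so it is by no means a complete parser):

    // Minimal illustration of the cpe:/a:vendor:product:version structure.
    // Real CPE parsing (including CPE 2.3) is considerably more involved.
    final class CpeExample {
        public static void main(String[] args) {
            String cpe = "cpe:/a:phpbb_group:phpbb-auction:1.0m";
            String[] parts = cpe.split(":");
            String part = parts[1];     // "/a" means an application
            String vendor = parts[2];   // "phpbb_group"
            String product = parts[3];  // "phpbb-auction"
            String version = parts[4];  // "1.0m"
            System.out.println(part + " " + vendor + " / " + product + " / " + version);
        }
    }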

How does OWASP Dependency Check work?

In a nutshell, it scans a project (JARs, POM files, ZIP files, native libraries, .NET assemblies, …) and tries to assign a CPE to each entity. Note that there is a lot of heuristics involved. Obviously, there is also a lot that can go wrong.

When CPEs are assigned, it looks for vulnerabilities (CVEs) assigned to those CPEs. The result of a scan is written to a report file. Multiple formats are supported. The two most prominent formats are HTML (for direct reading) and XML (for further processing). We use the XML output in order to process multiple reports from multiple projects and assign them to particular teams. (One team is usually responsible for multiple projects.)
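
As a rough sketch of the kind of post-processing one can do with the XML output, the snippet below reads one report and prints the CVEs found per dependency. The element names (dependency, fileName, vulnerability, name) are based on the ODC XML report format and may differ between versions, so treat this as an illustration rather than a reference:

    // Rough sketch of reading one ODC XML report; element names may differ
    // between OWASP Dependency Check versions.
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class OdcReportReader {
        public static void main(String[] args) throws Exception {
            // args[0]: path to a dependency-check XML report
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File(args[0]));
            NodeList dependencies = doc.getElementsByTagName("dependency");
            for (int i = 0; i < dependencies.getLength(); i++) {
                Element dependency = (Element) dependencies.item(i);
                String fileName = dependency.getElementsByTagName("fileName")
                        .item(0).getTextContent();
                NodeList vulnerabilities = dependency.getElementsByTagName("vulnerability");
                for (int j = 0; j < vulnerabilities.getLength(); j++) {
                    Element vulnerability = (Element) vulnerabilities.item(j);
                    // The first <name> inside <vulnerability> holds the CVE identifier.
                    String cve = vulnerability.getElementsByTagName("name")
                            .item(0).getTextContent();
                    System.out.println(fileName + ": " + cve);
                }
            }
        }
    }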

Integration with projects

Maven

OWASP Dependency Check has a mature Maven plugin. It basically works out of the box, and you can adjust probably any parameter supported by OWASP Dependency Check. We didn't have to modify the project, as it can be run by mvn org.owasp:dependency-check-maven:check and the configuration can be adjusted by passing -Dproperty=value parameters.

Gradle

There is also a plugin for Gradle. Unfortunately, it is not as mature as the Maven plugin. We had to modify it in order to make it work in our environment. I'd like to merge those changes into the original project. Even after the modifications, it is still not ideal. The most prominent issue is that it includes test dependencies. Excluding them in the Gradle plugin is not as simple as with the Maven plugin, because Gradle can have many configurations, and each of those configurations might have different dependencies. There is no simple way to distinguish the important configurations from the others. However, this has not been a really painful issue, as test dependencies are considerably less common and usually don't have any known vulnerability in NVD, since they usually aren't touched by untrusted input.

How do we scan our projects?

Running those scans manually on multiple subprojects and evaluating all the reports by hand would take too much time. So, we have developed a tool that automates some of the manual work.

First, scans are run automatically every day. There is no need to run them manually on every single subproject. The results of these scans are stored in XML format. Some adaptations specific to our environment are applied. For example, scans are able to run without any connection to the Internet. (The Maven plugin can be configured for this without any modification, while the Gradle plugin had to be modified.) Of course, we have to download the whole vulnerability database separately, outside of the scan process.

Second, our tool downloads the results from our Bamboo build server and processes them. There are some sanity checks that should warn us if something breaks. For example, we check freshness of the vulnerability database.

Third, issues related to particular subprojects are automatically assigned to corresponding teams. Issues can be prioritized by severity and number of occurrences.

While ODC is a great tool, there are some issues connected with using it. I'll discuss some of them here.

Complexity of library identifiers

How many ways of identifying a library do you know? Maybe you will suggest using groupId, artifactId and version (i.e., GAV), by which you can locate the library in a Maven repository. (Or you can pick a similar identifier used on a platform other than Java.) This is where we usually start. However, there are multiple other identifiers:

  • GAV identifier: as mentioned above.
  • file hash: ODC uses the SHA1 hash for lookups in the Maven database. Note that there might be multiple GAV identifiers for one SHA1 hash, so the relation is not 1:1. Moreover, when we consider snapshots, we theoretically get an m:n relation.
  • CPE identifier: This one is very important for ODC, as ODC can’t match vulnerabilities without that. Unfortunately, there is no exact algorithm for computation of CPE from GAV or vice versa. Moreover, multiple GAVs might be assigned to one CPE. For example, Apache Tomcat consists of many libraries, but all of them have just one CPE per version. Unfortunately, the ODC heuristic matching algorithm might also assign multiple CPEs to one GAV in some cases.
  • GA identifier: This is just some simplification of GAV identifier, which misses the version number. There is nothing much special about this identifier for ODC, but we have to work with that, too.
  • intuitive sense: For example, when you mention “Hibernate”, you probably mean multiple GAs at the same time.

Note that this complexity is not introduced by ODC. The complexity is introduced by the ecosystem that ODC uses (e.g. CPEs) and by the ecosystem ODC supports (e.g. Maven artifacts).

Bundled libraries

While überJARs might be useful in some cases, they make inspection of all the transitive dependencies harder. While ODC can detect some bundled libraries (e.g. by package names or by POM manifests, if included), this is somewhat suboptimal. Moreover, if ODC finds a bundled dependency, it might look like a false positive at first sight.

Libraries without a CPE

Some libraries don't have a CPE. So, when ODC can't assign any CPE, it might mean either that there is no CPE for the library (and hopefully no publicly known vulnerability) or that ODC has failed to assign the CPE. There is no easy way to know which is the case.

False positives

False positives are implied by the heuristics used mainly for detecting the CPE identifier. We can't get rid of all of them until a better mechanism (e.g. CPE identifiers in POMs) is developed and widely used. Especially the latter would take rather a long time.

False positives can be suppressed in ODC in two ways. One can suppress the assignment of a specific CPE to a specific library (identified by its SHA1 hash). If more fine-grained control is needed, one can suppress the assignment of a single vulnerability to a specific library (again identified by its SHA1 hash). Both of them make sense when handling false positives.

Handling a CPE collision

For example, there is the NLog library for logging in .NET. ODC assigns it cpe:/a:nlog:nlog. This might look correct, but this CPE has also been used for some Perl project, and there is a vulnerability for this CPE that is not related to the logging library.

If one suppressed matching the CPE, one could get some false negatives in future, as vulnerabilities of the NLog library might be published under the same CPE, according to a discussion post by Jeremy Long (the author of the tool).

In this case, we should not suppress those CPEs. We should just suppress CVEs for them.

Obviously mismatched CPEs

A CPE might also be obviously mismatched. While this case is similar to the previous one, the CPE name makes it obvious that it does not cover the library. In this case, we can suppress the CPE for the SHA1 hash.

Missing context

A vulnerability description usually contains a score. However, even if the score is 10 out of 10, it does not imply that the whole system is vulnerable. In general, there are multiple other aspects that might change the severity for a particular case:

  • The library might be isolated from untrusted input.
  • The untrusted input might be somehow limited.
  • The library might run in a sandboxed environment.
  • Preconditions of the vulnerability are unrealistic for the context.

None of them can be detected by ODC, and automatic evaluation of those aspects is very hard, if not impossible.

Conclusion

We have integrated OWASP Dependency Check with our environment. We have evaluated the quality of reports. The most fragile part is matching a library to a CPE identifier, which may lead to both false negatives (rarely seen) and false positives (sometimes seen). Despite those issues, we have found ODC to be useful enough and adaptable to our environment.

Nowadays, security is becoming an important aspect of almost every software system. Unfortunately, that does not necessarily mean that security is adequately considered in every piece of software. What does “adequately” mean? For me, security measures are adequate if the investment in them is less than the loss that would be caused if these measures were not implemented. I would say this explanation would be clear if it did not contain an unknown variable, i.e., the potential loss.

If a company has had no security incidents so far, does it mean that its security measures are adequate and it should not put additional investment into security? It is really hard to say yes or no. The software may change (e.g., it may introduce new features with new attack vectors), the market can change (e.g., more people may start using the software, so it may become more attractive to attackers), and the environment where the software is used may change (e.g., an intranet solution may become a public cloud-based solution).

History gives us a generous hint. No matter what, security incidents happen. The only questions are when, and what impact they will have on the customers and the company.

“Security is a process, not a product”
– Bruce Schneier

In order to create secure products, a company should regularly evolve and adapt its development processes to the changes that continuously take place around us.
Here at Y Soft, we decided to examine the Microsoft Secure Development Lifecycle (SDL), a process aiming to help build more secure software, and to try it in one of our development teams.

In this series of posts, I would like to:

  • Look at Microsoft SDL in more detail,
  • Describe its advantages/disadvantages as we perceive them here at Y Soft,
  • Describe the obstacles we meet and how we overcome them.

Secure Development Lifecycle (SDL) is a process that helps to build more secure software. Microsoft introduced its SDL in 2004 and has been successfully using and regularly updating it since. Microsoft SDL covers all phases of the development process, e.g., Requirements, Design, Implementation, Verification and Release.

Training → Requirements → Design → Implementation → Verification → Release → Response

Microsoft SDL describes tasks that should be completed at each phase and also provides tools and training materials that are available free of charge.
Key features of Microsoft SDL, as I perceive them, are:

  1. Security is considered systematically. Developers and managers can be certain, to some degree, that important security issues have not been overlooked and will not emerge at (or after) the release phase, where their resolution is often not cost-effective.
  2. Microsoft SDL is designed to be a part of the software development process. And this is probably the most important thing about it. A bullet-proof system does not exist. New attacks appear all the time, and it is important that the new defenses that follow them are in place in time.
  3. Security is considered early in the product lifecycle. The process starts from the requirements phase, which means that the design reflects requirements on security features and on features that need to be secure. Starting from the requirements phase, among other things, decreases the risk of security design flaws in the later phases and hence decreases the total development cost.
  4. The process is primarily focused on developers. Most of the tasks from the Microsoft SDL are completed by developers themselves. The tasks include, among others, threat modeling, secure design, code analysis using static analysis tools, etc. Security experts are those who consult, review and help developers to get their job done right.
  5. It can be integrated gradually. Microsoft SDL defines four maturity levels (Basic, Standardized, Advanced and Dynamic) where each level has additional tasks (that should be completed) in comparison to the previous one.
  6. It naturally fits the software development process. Tasks and activities defined in the Microsoft SDL logically follow or extend tasks that developers normally do. For example, for systematic threat analysis, a developer can use the Microsoft SDL Threat Modeling Tool. This tool uses Data Flow Diagrams (DFDs), which resemble the container/component diagrams from the C4 model used by our development teams.

Next time, we will look at the Training and Requirements phases and discuss the activities and tasks we decided to implement.