This blog post introduces Terraform, a tool we use for deploying testing environments at Y Soft. We will cover the following topics:

  • What is Terraform?
  • How does it work?
  • Example of use

What is Terraform?

Terraform is a command line tool for building and changing infrastructure in a safe and efficient manner. It can manage resources on most of the popular service providers. In essence, Terraform simply takes configuration files as input and generates an execution plan describing what needs to be done to reach the desired state. Do you need to add another server to your cluster? Just add another module to your configuration. Do you need to redeploy your production environment in a matter of minutes? Then Terraform is the right tool.

How does it work?

Infrastructure as code

Configuration files that define infrastructure are written using a high-level configuration syntax. This basically means that the blueprint of your production or testing infrastructure can be versioned and treated as you would treat any other code. In addition, since we are talking about code, the configuration can be shared and re-used.

Execution plan

Before every Terraform execution there is a planning step, in which Terraform generates an execution plan. The execution plan shows you what will happen when you run an execution (when you apply the plan). This way you avoid surprises when you manipulate your infrastructure.

Terraform state file

How does Terraform determine the current state of infrastructure? The answer is the state file. The state file keeps information about all the resources that were created by executing the given configuration file. To ensure that the information in the state file is fresh and up to date, Terraform queries the provider for any changes to the infrastructure (and modifies the state file accordingly) before running any operation. In other words, for every plan and apply, Terraform synchronizes the state file with the provider.

Sometimes this behavior can be problematic; for example, querying a large infrastructure can take a non-trivial amount of time. In these scenarios we can turn off the synchronization, which means the cached state will be treated as the record of truth.

Below you can see the picture of the whole execution process.

Example of use

In our example, we will be working with the Azure provider. The example configuration files can be used only with the Azure provider (configuration files for other providers will differ). It is also expected that Terraform and the appropriate provider endpoints have been set up on our machine beforehand.

Step 1: Write configuration file

The presented configuration file does not depend on any previously created resources and can be executed on its own.

The configuration file that we will write describes the following resources: a resource group, a virtual network with a subnet, a network interface and a virtual machine.

Now we create an empty directory on the machine where Terraform is installed, and inside it we create a file named main.tf. The contents of the main.tf file:

provider "azurerm" {
  subscription_id = "..."
  client_id       = "..."
  client_secret   = "..."
  tenant_id       = "..."
}

resource "azurerm_resource_group" "test" {
  name     = "test-rg"
  location = "West US 2"
}

resource "azurerm_virtual_network" "test" {
  name                = "test-vn"
  address_space       = ["10.0.0.0/16"]
  location            = "West US 2"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                 = "test-sbn"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_network_interface" "test" {
  name                = "test-nic"
  location            = "West US 2"
  resource_group_name = "${azurerm_resource_group.test.name}"

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "dynamic"
  }
}

resource "azurerm_virtual_machine" "test" {
  name                  = "test-vm"
  location              = "West US 2"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${azurerm_network_interface.test.id}"]
  vm_size               = "Standard_DS1_v2"

  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

 

Step 2: Planning the execution

Now we change into the directory with our main.tf file and run the command terraform init, which initializes various local settings and data that will be used by subsequent commands.

Next, we run the command terraform plan, which outputs the execution plan describing the actions Terraform will take to change the real infrastructure to match the configuration. The output format is similar to the diff format generated by tools such as Git. If terraform plan fails with an error, read the error message and fix the problem; at this stage, it is most likely a syntax error in the configuration.

Step 3: Applying the plan

If terraform plan ran successfully, we are safe to execute terraform apply. Throughout the whole apply process, Terraform informs us of its progress. Once Terraform is done, our environment is ready, and we can easily verify this by logging in to our virtual machine. Our directory now also contains a terraform.tfstate file, the state file that corresponds to our newly created infrastructure.

Conclusion

This was a very simple example to show how a configuration file might look; Terraform offers much more on top of that. Configurations can be packed into modules, self-contained packages that are managed as a group. This way we can create reusable, parameterizable components and treat these pieces of infrastructure as a black box. Besides that, Terraform can also perform provisioning of VMs and much more.

At Y Soft, we use robots to test our solutions for verification and validation: we are interested in whether the system works according to the required specifications and what the qualities of the system are. To save time and money, it is possible to use a single robot to test multiple devices. How is this done? It is very simple, so let's look at it.

When performing actions to operate a given device, the robot knows where the device is located thanks to a calibration. The calibration file contains a transformation matrix that transforms a location on the device into the robot's coordinate system. The file also contains information about the device the calibration is compatible with. How the calibration is computed is covered in this article. There is also a camera calibration that contains information about the region of interest, i.e. where exactly the device screen is located in the camera's view. All of the calibration files are stored on the hard drive.

Example of calibration files for two Terminal Professionals:

{  
   "ScreenXAngle":0.35877067027057225,
   "DeviceId":18,
   "DeviceModelId":18,
   "DeviceName":"Terminal_Professional 4, 10.0.5.182",
   "MatrixArray":[[0.728756457140,0.651809992529,-0.159500429741,75.4354297376],[-0.683749126568,0.734998176419,-0.10936964140,71.1249458777],[0.0422532822652,0.187897834122,0.981120733880,-34.923427696],[0.0,0.0,0.0,1.0]] ,
   "ScreenSize":{  
      "Width":153.0,
      "Height":91.0
   }
}
{  
   "ScreenXAngle":0.25580856461537194,
   "DeviceId":27,
   "DeviceModelId":18,
   "DeviceName":"Terminal_Professional 4, 10.0.5.112",
   "MatrixArray":[ [0.713158843830,-0.686471581194,0.220191724515,-176.983055],[0.699596463347,0.6783511825194,-0.15148794414,-71.7788394],[-0.05297850752,0.2635031531279,0.963621817536,-29.83848504],[0.0,0.0,0.0,1.0] ],
   "ScreenSize":{  
      "Width":153.0,
      "Height":91.0
   }
}
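
As an illustration of how such a calibration might be applied (a simplified sketch, not the production code): the 4×4 MatrixArray acts as a homogeneous transform that maps a point on the device into the robot's coordinate system. The sketch assumes the matrix is stored in row-major order.

/**
 * Applies a 4x4 homogeneous transformation matrix (such as MatrixArray above)
 * to a point given in the device's coordinate system.
 * Simplified sketch; assumes row-major order, the real calibration code may differ.
 */
public static double[] deviceToRobot(double[][] matrix, double x, double y, double z) {
  double[] point = {x, y, z, 1.0};   // homogeneous coordinates
  double[] result = new double[3];
  for (int row = 0; row < 3; row++) {
    double sum = 0.0;
    for (int col = 0; col < 4; col++) {
      sum += matrix[row][col] * point[col];
    }
    result[row] = sum;               // coordinate in the robot's space
  }
  return result;
}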


For the robot to operate on multiple devices, all of the devices must be within the robot's operational range, which is quite limited, so this feature is currently used only for smaller devices such as mobile phones and the Terminal Professional. It is theoretically possible to use a single robot with more devices, but for practical purposes there are usually only two. Also, all devices must be at roughly the same height, which limits testing on multifunctional devices that have varying heights and terminal placements. Space is also limited by the camera's range, so multiple cameras might be required; this is not a problem, as the camera calibration also contains the unique identifier of the camera. A robot can therefore operate on multiple devices using multiple cameras, or just a single camera if the devices are very close to each other.

Before testing begins, the robot needs to have all device calibrations available on the hard drive, and all action elements (buttons) need to be within its operational range. The test configuration contains variables such as DEVICE_ID and DEVICE2_ID, which need to contain the correct device IDs as stored in the robot's database. Which tests will run on the devices and the duration of the tests also need to be specified. The tests used for these devices are usually measurement and endurance tests, which run in iterations. There are multiple variants of these tests. For example, let's say we wish to run tests for 24 hours on two devices and each device should get an equal fraction of this time. This means the test will run for 12 hours on one device and 12 hours on the other, which is called consecutive testing. Another variant is simultaneous testing, in which the robot alternates between the devices after each iteration for a total time of 24 hours: the robot loads the calibration of the other device after each iteration and continues the test on that device. This is sometimes very useful; should one device become unresponsive, the test can continue on the second device for the remaining time. The results of each test iteration for each device are stored in a database along with other information about the test and can be viewed later.

Testing multiple devices with a single robot also makes it possible to test and compare different versions of an application or operating system (in this case on the Terminal Professional) without ending the test, reinstalling the device and running the test again. This saves a lot of time and makes the comparison more accurate.

Reaction time measurement is the process of acquiring the time span it takes for the tested device to change its state after clicking on an action element. The most common scenario is measuring the time needed to load a new screen after clicking the button that invokes the screen change. This measurement speaks directly to the user experience of the tested system.

So how does our robotic system do it?

The reaction time measurement algorithm is based on the calculation of pixel-wise differences between two consecutive frames, which simply subtracts the pixel values of one image from the other. Let's have two frames labeled fr1 and fr2; there are three main types of difference computation:

  • Changes in fr1 with respect to fr2: diff = fr1 – fr2
  • Changes in fr2 with respect to fr1: diff = fr2 – fr1
  • Changes in both directions: diff = | fr1 – fr2 |

The last of these is called the absolute difference and is the one used in our reaction time measurement algorithm. In general, the input frames are grayscale 8-bit images of the same size. Computing the difference for color images is possible, but it would only introduce more errors, because the RGB channels are more dependent on the surrounding lighting conditions. The final computed difference is just a number indicating the amount of change between two frames, which makes it well suited for detecting a screen change in a sequence of images.
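
A minimal sketch of the absolute difference for two 8-bit grayscale frames stored as byte arrays, one byte per pixel (assuming both frames have the same dimensions; the method name is illustrative):

/** Sum of absolute pixel differences between two 8-bit grayscale frames of equal size. */
public static long absoluteDifference(byte[] fr1, byte[] fr2) {
  long diff = 0;
  for (int i = 0; i < fr1.length; i++) {
    // bytes are signed in Java, so mask to get the 0-255 intensity
    diff += Math.abs((fr1[i] & 0xFF) - (fr2[i] & 0xFF));
  }
  return diff;
}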

Enough of theory, let’s make it work!

First of all, we need two images indicating the change of screen. For this purpose, I chose the following two pictures.

Imagine those are two consecutive frames in the sequence of all frames we talked about earlier.

The next step is to convert them to grayscale and make sure they have the same height and width.

After these necessary adjustments, we are ready to calculate the difference between them. As mentioned before, it is computed as the absolute subtraction of one image from the other. Almost every computer vision library implements a method for this purpose; in OpenCV, for example, it is called absdiff. The following image shows the result of subtracting the two images above. As you can see, there is a visible representation of both images. That is completely fine, because every non-zero pixel tells us how much the given pixel differs from the same pixel in the other image. If the result image were black, the difference would be zero and the images identical; a completely white image would mean the opposite.

The next step is to sum the values of all pixels in the result image. Remember that each pixel value of a grayscale 8-bit image is in the range from 0 to 255. The difference for this image:

diff = 68964041

This value by itself does not say much about the change between the two images, therefore normalization needs to be applied. The form of normalization we use transforms the computed difference into a percentage of the screen that changed, given a defined threshold. The threshold specifies how high a pixel value must be to be classified as changed, so rather than summing all the pixels in the result image, we count how many pixels are above the threshold. The normalized difference for this image:

diffnormed = 96.714% (with threshold = 10)

Compared to the previous value, this result tells us precisely how much of the screen changed between the two images. Detecting the amount of change between two images is only the first part of the whole time measurement process. In our robotic system, we have implemented two modes of reaction time measurement: Forward and Backward reaction time evaluation.
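
A sketch of the thresholded normalization described above (the threshold of 10 corresponds to the example; frames are represented as in the previous sketch):

/** Percentage of pixels whose absolute difference exceeds the given threshold. */
public static double normalizedDifference(byte[] fr1, byte[] fr2, int threshold) {
  int changed = 0;
  for (int i = 0; i < fr1.length; i++) {
    int d = Math.abs((fr1[i] & 0xFF) - (fr2[i] & 0xFF));
    if (d > threshold) {
      changed++;                     // pixel classified as changed
    }
  }
  return 100.0 * changed / fr1.length;
}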

 

Forward reaction time evaluation

Forward RTE is based on near-real-time evaluation, meaning that the algorithm obtains data from an image source and processes the frames as they arrive. The algorithm does not try to find the desired screen immediately; rather, it searches for screen changes, evaluates them and then compares them to the desired one.

The Forward RTE diagram shows the process flow of the algorithm. At the start, it sets the first frame as the reference image. Differences against this reference are then computed for incoming frames. If the computed difference is above the threshold, the frame is identified and the result is compared to the desired screen. If it does not match, the frame is set as the new reference and further differences are calculated against it. If it does match, the timestamp of the image acquisition is saved and the algorithm ends. In theory, every screen change during the measurement is identified only once; however, this strongly depends on the threshold value the user needs to set. Even though the algorithm aims to be real-time, the identification steps take so much time that true real-time processing is not yet possible.
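
The following sketch illustrates the forward loop described above. It is a simplification, not the production implementation: screen identification is abstracted into a predicate, normalizedDifference comes from the earlier sketch, and the parameter names and thresholds are illustrative.

import java.util.List;
import java.util.function.Predicate;

/** Simplified Forward RTE: returns the acquisition time of the desired screen, or -1. */
public static long forwardRte(List<byte[]> frames, List<Long> timestamps,
                              Predicate<byte[]> isDesiredScreen,
                              int pixelThreshold, double changePercent) {
  byte[] reference = frames.get(0);                  // first frame is the reference
  for (int i = 1; i < frames.size(); i++) {
    byte[] current = frames.get(i);
    if (normalizedDifference(reference, current, pixelThreshold) > changePercent) {
      if (isDesiredScreen.test(current)) {
        return timestamps.get(i);                    // desired screen found, save its timestamp
      }
      reference = current;                           // a different screen; compare against it from now on
    }
  }
  return -1;                                         // desired screen never appeared
}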


Backward reaction time evaluation

Backward RTE works pretty much the other way around. Rather than searching for the desired image from the start, it waits for all images to be acquired, identifies the last frame and sets it as the reference, and then looks for the first appearance of that reference in the sequence.

The Backward RTE diagram shows the process flow of the algorithm. First of all, it waits for all frames of the subsequence to arrive. After all frames are acquired, the last frame is identified; if it is the desired screen, the reference is set and the algorithm proceeds. If the last frame is not the desired screen, it means that the desired screen has not loaded yet or some other error has occurred. For this case, the algorithm records backup sequences to provide additional consecutive frames. If there is no desired screen in those sequences either, the algorithm is aborted.

After the reference is set, the actual search starts. It looks for the first frame that is very similar to the reference, using the difference algorithm described earlier. The found image is identified and compared to the desired screen. If the identified and desired screens match, the time of acquisition is saved and the algorithm ends. However, if they do not match, the sequence is shortened to start at the index of the falsely identified frame and the algorithm searches further. The furthest possible index is the actual end of the sequence, because the image at the end of the sequence was identified as the desired screen at the start of the RTE.
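
A corresponding sketch of the backward search, under the same simplifying assumptions (and imports) as the forward sketch; here the last frame has already been identified as the desired screen, and "shortening the sequence" after a false match simply becomes continuing the loop.

/** Simplified Backward RTE: finds the first frame very similar to the last (desired) one. */
public static long backwardRte(List<byte[]> frames, List<Long> timestamps,
                               Predicate<byte[]> isDesiredScreen,
                               int pixelThreshold, double similarityPercent) {
  byte[] reference = frames.get(frames.size() - 1);  // last frame, already identified as desired
  if (!isDesiredScreen.test(reference)) {
    return -1;                                       // desired screen never loaded; backup sequences would be checked here
  }
  for (int i = 0; i < frames.size() - 1; i++) {
    byte[] candidate = frames.get(i);
    // the first frame that is very similar to the reference is a candidate
    if (normalizedDifference(reference, candidate, pixelThreshold) < similarityPercent
        && isDesiredScreen.test(candidate)) {
      return timestamps.get(i);                      // first appearance of the desired screen
    }
    // otherwise (not similar, or falsely identified) continue searching further
  }
  return timestamps.get(timestamps.size() - 1);      // no earlier match; the end of the sequence is the desired screen
}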

Summary

This article covers difference-based measurement of reaction time. It guides you through the computation of differences between two images and describes our own two reaction time evaluation modes, which we use in practice.

 

Additional notes

To keep the description of the algorithms as readable as possible, a few adjustments have been left out. Preprocessing of the images is essential: the elimination of noise has a high impact on the stability of the whole algorithm. We have also implemented a few optimization procedures that reduce the amount of data that needs to be processed, e.g. bisection.

 

Come and visit us at Y Soft, Technická 13, Brno, on 7 November 2017 at 18:00 for the pilot episode of our Tech Support Meetup.

Do you know what is happening on your network?
What is network sniffing?
How do you analyze encrypted communication?

If you want to know the answers, don't hesitate and come to the meetup!

During this interactive workshop you will learn the basics of Wireshark, why this information is useful and how to use it for effective network analysis and debugging.

Admission is free. Please register here; the number of seats is limited.

The workshop will be held in Czech and you will be guided through it by an enthusiastic cipher breaker, Lenka Bačinská.


This step-by-step guide shows how to smoothly build a FIPS capable OpenSSL library for use in a FIPS 140-2 compliant Tomcat server on Windows machines.

What is FIPS 140-2?

The Federal Information Processing Standard 140-2 is a security standard published by the National Institute of Standards and Technology (NIST), covering the specification of security requirements for implementing cryptographic modules. A cryptographic module may be a library, a component of a product or application, or a complete product.

The specifications include, for example, a list of approved algorithms, module inputs and outputs, physical security, cryptographic key management and other areas related to secure design.

NIST manages a list of FIPS 140-1 and FIPS 140-2 validated cryptographic modules, i.e. modules tested, validated and certified under the Cryptographic Module Validation Program. The complete list can be found here: http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm.

FIPS 140-2 compliant Tomcat

Compliance (unlike FIPS validation) means that only FIPS approved algorithms and validated modules are used in the product, but the product itself has not been validated.

Apache Tomcat, an open source Java application server, can use two different implementations of the SSL/TLS protocol, and thus there are two options for achieving FIPS 140-2 compliance:

  • JSSE, the Java implementation – only FIPS validated Java cryptographic providers may be enabled, along with the correct setting of ciphers and algorithms in the Tomcat HTTPS connector
  • Apache Portable Runtime, the OpenSSL implementation – if FIPS 140-2 is supported by the linked OpenSSL library, the so-called FIPS Mode can be enabled in the Tomcat settings

OpenSSL FIPS Object Module

The OpenSSL library itself is not FIPS validated; a special software component called the OpenSSL FIPS Object Module was created instead.

OpenSSL compiled with the OpenSSL FIPS Object Module embedded inside is the so-called FIPS capable OpenSSL. It provides the standard, non-FIPS API as well as a FIPS 140-2 Approved Mode, a setting in which only FIPS 140-2 validated cryptography is used and non-FIPS approved algorithms are disabled.

The current version of the OpenSSL FIPS Object Module is 2.0, which is compatible with standard OpenSSL 1.0.1 and 1.0.2 distributions.

Step zero: Prerequisites

For the whole following building process, the Developer Command Prompt for Visual Studio is required. It is one of the optional choices offered during VS installation. When installing VS, check the following option (example for VS 2015):

  • Programming languages\Visual C++\Common Tools for Visual C++ 2015

In case Visual Studio is already installed without Developer Command Prompt, you can add this feature by program modification:

Start -> Programs and Features -> Microsoft Visual Studio 2015 -> Change -> Modify

The following window should appear. Again, check the aforementioned option.

The guide was tested using Visual Studio Professional 2015. Both the aforementioned option for installing the Developer Command Prompt for Visual Studio and the batch files needed in the following process may differ in other versions.

Step one: Getting the source codes

Download the Windows sources for Tomcat Native, the OpenSSL FIPS Object Module, OpenSSL and the Apache Portable Runtime (APR).

Unpack:

  • Tomcat Native
  • OpenSSL FIPS Object Module to a directory outside Tomcat Native.
  • OpenSSL sources to tomcat-native-X\native\srclib\openssl
  • Apache Portable Runtime sources to tomcat-native-X\native\srclib\apr

Step two: Building the OpenSSL FIPS Object Module

Prerequisites:

  • Developer Command Prompt for Visual Studio
  • Extracted OpenSSL FIPS Object Module files
  • Perl installed and location added to the PATH system variable

Compilation (64-bit version):

  1. Open Developer Command Prompt:
    Start -> Developer Command Prompt for VS2015
  2. Add variables for desired environment:
    cd vc
    vcvarsall x64
    
  3. Navigate to the extracted OpenSSL FIPS Object Module sources:
    cd openssl-fips-X\
    
  4. Set needed variables:
    Set PROCESSOR_ARCHITECTURE=AMD64
    Set FIPSDIR=absolute\path\to\Openssl-fips-X
    
  5. [Optional] In case you use Cygwin Perl, you may encounter an error (“No rule for …”) during the build process. In order to prevent this issue, open the openssl-fips-X\util\mk1mf.pl file in text editor, find the first chop; command and add the following to the next row:
    s/\s*$//;
    
  6. Build the OpenSSL FIPS Object Module
    ms\do_fips
    

The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd openssl-fips-X\
Set PROCESSOR_ARCHITECTURE=x86
Set FIPSDIR=absolute\path\to\Openssl-fips-X
ms\do_fips

Step three: Building the FIPS capable OpenSSL

Prerequisites:

  • Developer Command Prompt for Visual Studio
  • Compiled FIPS module
  • OpenSSL 1.0.1 or 1.0.2 sources extracted in the tomcat-native-X\native\srclib\openssl folder
  • Perl installed and location added to the PATH system variable (note that Cygwin Perl may have issues with backslash in addresses)
  • NASM (Netwide Assembler)  installed and location added to the PATH system variable

Compilation (64-bit version):

  1. Open Developer Command Prompt:
    Start -> Developer Command Prompt for VS2015
  2. Add variables for desired environment:
    cd vc
    vcvarsall x64
    
  3. Navigate to the extracted OpenSSL sources:
    cd native\srclib\openssl\
    
  4. Configure and make:
    perl Configure VC-WIN64A fips --with-fipsdir=absolute\path\to\Openssl-fips-X
    ms\do_win64a
    nmake -f ms\nt.mak
    

The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd native\srclib\openssl\
perl Configure VC-WIN32 fips --with-fipsdir=absolute\path\to\Openssl-fips-X
ms\do_nasm
nmake -f ms\nt.mak

Version check:

A FIPS capable OpenSSL reports this fact in its version info. Check the version of your compiled OpenSSL library (for example, by running out32\openssl.exe version); the reported version string should contain a fips suffix.

Step four: Building APR

Prerequisites:

  • Developer Command Prompt for Visual Studio
  • Apache Portable Runtime sources extracted in the tomcat-native-X\native\srclib\apr folder

Compilation (64-bit version):

  1. Open Developer Command Prompt:
    Start -> Developer Command Prompt for VS2015
  2. Add variables for desired environment:
    cd vc
    vcvarsall x64
    
  3. Navigate to the extracted APR sources:
    cd native\srclib\apr\
    
  4. Build Apache Portable Runtime:
    nmake -f NMAKEmakefile BUILD_CPU=x64 APR_DECLARE_STATIC=1
    nmake -f NMAKEmakefile BUILD_CPU=x64 APR_DECLARE_STATIC=1 install
    

The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd native\srclib\apr\
nmake -f NMAKEmakefile BUILD_CPU=x86 APR_DECLARE_STATIC=1
nmake -f NMAKEmakefile BUILD_CPU=x86 APR_DECLARE_STATIC=1 install

By default, the compiled files should appear in C:\include\ and C:\lib\ folders.

Step four and a half: Cleaning the mess

It is recommended to create an appropriate file system structure before proceeding to the compilation of the Tomcat Native library.

Create the following folders:

  • deps
  • deps\openssl
  • deps\openssl\lib
  • deps\openssl\include
  • deps\apr
  • deps\apr\lib
  • deps\apr\include

And copy the following files:

  • native\srclib\openssl\out32\openssl.exe to deps\openssl
  • native\srclib\openssl\out32\ssleay32.lib, native\srclib\openssl\out32\libeayfips32.lib and native\srclib\openssl\out32\libeaycompat32.lib to deps\openssl\lib
  • content of native\srclib\openssl\inc32\ to deps\openssl\include
  • C:\lib\apr-1.lib to deps\apr\lib
  • content of C:\include\apr-1\ to deps\apr\include

Step five: Building Tomcat Native library

Prerequisites:

  • Developer Command Prompt for Visual Studio
  • Compiled FIPS capable OpenSSL and APR
  • Java installed and the JAVA_HOME system variable pointing to its location

Compilation (64-bit version):

  1. Open Developer Command Prompt:
    Start -> Developer Command Prompt for VS2015
  2. Add variables for desired environment:
    cd vc
    vcvarsall x64
    
  3. Navigate to the extracted Tomcat Native sources:
    cd tomcat-native-X\native\
    
  4. Set needed variables:
    Set CPU=X64
    Set FIPSDIR=absolute\path\to\Openssl-fips-X
    
  5. Build FIPS capable Tomcat Native library
    nmake -f NMAKEMakefile WITH_APR=path\to\deps\apr WITH_OPENSSL=path\to\deps\openssl APR_DECLARE_STATIC=1 [ENABLE_OCSP=1] WITH_FIPS=1
    

The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd tomcat-native-X\native\
Set CPU=X86
Set FIPSDIR=absolute\path\to\Openssl-fips-X
nmake -f NMAKEMakefile WITH_APR=path\to\deps\apr WITH_OPENSSL=path\to\deps\openssl APR_DECLARE_STATIC=1 [ENABLE_OCSP=1] WITH_FIPS=1

Compiled files should appear in the tomcat-native-X\native\WINXP_X64_DLL_RELEASE or tomcat-native-X\native\WINXP_X86_DLL_RELEASE folder.

Tomcat settings

Now that we have a FIPS capable Tomcat Native library, the last step is configuring Tomcat to use the FIPS validated implementation.

  1. Copy the compiled tcnative-1.dll to your tomcat\bin folder.
  2. In the tomcat\conf\server.xml file, edit the following tags:
    Enable FIPS Mode for the APR listener:

    <Listener
        className="org.apache.catalina.core.AprLifecycleListener"
        SSLEngine="on"
        FIPSMode="on"
    />
    

    Configure the HTTPS connector to use Native (OpenSSL) implementation of SSL/TLS protocol:

    <Connector
        protocol="org.apache.coyote.http11.Http11AprProtocol"
        …
    />
    
  3. Restart the Apache Tomcat service

And that’s it! Your Tomcat is now using only FIPS approved algorithms and FIPS validated implementations.

Highest tested versions

This guide was tested with the following component versions:

  • Apache Portable Runtime 1.5.2
  • OpenSSL 1.0.2l
  • OpenSSL FIPS Object Module 2.0.16
  • Tomcat Native 1.2.12

I study at the Faculty of Information Technology at VUT in Brno, and the main aim of my bachelor thesis is research into developing cross-platform desktop applications. According to statistics collected between 2008 and 2015, an adult spends approximately 2.5 hours per day on a desktop computer. While the time spent on mobile phones is increasing, the time spent on desktop computers is not decreasing. In my opinion, there is great potential in developing a single desktop application that can run on all major operating systems. Such an application can be built with the Electron framework.

Electron

Electron is an open-source framework that allows the creation of native desktop applications for Linux, Windows and macOS with web technologies such as JavaScript, HTML and CSS. A combination of Chromium and Node.js makes this possible. An application created with Electron has all the benefits of a native desktop application, such as access to the file system or system notifications. It is recommended to use Node.js's npm package manager while developing the application. With npm, a developer can record the modules required during development in the package.json file, run the application itself, etc.

Processes

In an Electron application, there are two types of processes: the main process and the renderer process. Each has its own unique role. The process that runs the script specified in package.json as the main one is called the main process. The main process starts renderer processes and takes care of communicating with the operating system's API, which allows the application to use the OS's native GUI. Each renderer process renders the content of one web page and runs in isolation from the other renderer processes.

Developing a basic application

To develop a basic Electron application, it is necessary to create three files: first, package.json with information about the application; second, the JavaScript file which will be run first; and lastly, an HTML file that creates the GUI of the application (i.e. the web page). The folder structure of a basic application can look like this:

app/
├── package.json
├── main.js
└── index.html

To run the application, you can download the latest Electron release from the web, copy the app folder into the downloaded application as shown below, and execute the electron binary (electron.exe on Windows, etc.).

Folder structure of an Electron application on macOS

electron/Electron.app/Contents/Resources/
└── app/

Folder structure of an Electron application on Windows and Linux

electron/resources/
└── app/

Application distribution

After the application is developed, the developer can package the app folder into an asar archive instead of distributing the app's source code. In that case, the app folder containing the source code is replaced with an app.asar package.

There are three options for distribution. The first is to manually download the latest Electron release for each required platform and copy the application into the appropriate folder; the third-party command-line tool Electron-packager can also create these packages. The second option, which involves creating installer files, is to use third-party tools as well: Electron-builder or Electron-windows-store. The last option, which involves distributing the source code instead of an asar package, is to use the npm package manager.

The Electron-builder tool uses Electron-packager and creates the files needed to install the application. The installer format that Electron-builder supports for Windows is NSIS. The most widely used package formats for Linux are deb, rpm, freebsd and apk; similarly, for macOS there are dmg, pkg and mas. Electron-builder also supports automatic updates. If there are dependencies on the native operating system, compilation is required on that system; otherwise, it is possible to compile the application on a chosen operating system or use a build server such as AppVeyor for Windows and Travis for Linux and macOS.

Another tool, developed by Microsoft for compiling Electron applications into the .appx package, is called Electron-windows-store. The tool is available from Microsoft PowerShell and can be used only on Windows 10 with the Anniversary Update. The Windows Store application format is generated from the package produced by the Electron-packager tool. Requirements to compile the application:

  • A certificate to sign the application with
  • The Windows 10 SDK
  • Node.js version 4 or higher

The last option is to distribute only the source code and use the npm package manager to install dependencies and run the application. It is necessary to have all dependencies listed in the package.json file. Then the user just downloads the source code and executes these commands:

# install all dependencies
$ npm install
# run the application
$ npm start

What we do

We create intelligent enterprise office solutions that build smart business and empower employees to be more productive and creative.

Our YSoft SafeQ Workflow Solutions Platform is used by more than 14,000 corporations and SMB organizations from over 120 countries to manage, optimize and secure their print and digital processes and workflows. Our 3D print solutions are focused on the Education sector, where they provide unique workflow and cost recovery benefits.

Through YSoft Labs, we experiment with new technologies for potential new products. We accelerate the technology growth of other innovative companies through Y Soft Ventures, our in-house investment arm.

 

Our Culture

Y Soft culture is defined by our 6 attitudes. We don’t say they are right or better. We say they are ours – created by employees for employees. We are always seeking people who share similar attitudes because our culture should live on through all Y Softers.

 


Headquartered in the Czech Republic, we employ over 370 dedicated people around the world. Our R&D centers are in Brno, Prague and Ostrava, Czech Republic, but you can also meet Y Softers in North and Latin America, Dubai, Singapore, Japan or Australia. Together we have 17 offices in 16 countries.


R&D – At Y Soft it is a CRAFT

Unlike other technology companies, whose R&D departments only implement the vision of others, Y Soft employees regularly contribute ideas and help shape the direction of new products.
For us, R&D is a craft, one we continuously build upon to expand the R&D know-how in the Czech Republic. We believe in DevOps and have finished the first stage of building our own production grade environment. We are also building a core competency in UI design. Our developers transform their talent into solutions with high added value that impress our customers around the world. We inspire each other with our dedication to quality, by using the most up-to-date technologies, and by designing, developing and integrating our own hardware and software. We challenge ourselves daily to change the world.

We are building large-scale connected, global systems for demanding customers (nearly 25% of the Fortune Global 500 use Y Soft’s solutions). This level of customer satisfaction is achieved through R&D’s open access to customers and the opportunity to meet with company executives to discuss and influence R&D direction.

We also work with many young startups through our in-house venture arm, which exposes you to new ideas and mentors to see if entrepreneurship is a future path for you. We offer opportunities to work with thesis students at local universities and reward employees who contribute to our interns' success.


Want more from Y Soft?

We are growing and can offer you an exciting, rewarding career with a wide range of opportunities to utilize your skills and to grow personally and professionally.

In a recent project we came across the problem of securing communication between peers that don't share a backend language. Our goal was to securely generate a shared secret between a Java server and a .NET client. Each language supports all the features needed for the key agreement, but these features aren't always compatible, due to different encodings or data representations.

This article should serve as a quick and easy guide to agreeing on a shared secret between such a server and client, with hands-on examples. Each subchapter contains a concrete example of how to send, receive and process the data needed for the respective operation. The authentication and signature schemes are similar to guides published all over the internet; the main topic, the key agreement, is presented with our own insights and comments on how to make it work.

The reason I have included all the code snippets, even those that are "straight from the internet", is that I wanted to group all these schemes in one place, so that you don't have to hunt for examples on Oracle, CodeRanch, StackOverflow, MSDN Microsoft documentation, … Honestly, it was a little bit frustrating for me, so there you go 🙂

If you are interested just in key agreement, feel free to skip right to the key agreement example, as I would like to discuss authentication and signature schemes first in order to prevent attacks on anonymous key agreement.

Please beware of careless use of the published code. Your protocol might require a different configuration, and by simply copy-pasting you may introduce security issues or errors into it.

Certificate based authentication

Consider each peer to have a certificate signed by some root CA.

In my examples I'll be working mostly with byte arrays, as that is the most universal way to represent data.

In most cases, the whole certificate chain is transferred to the other side. Using a single (self-signed) certificate is just a simplification of the process below.

.NET authentication

The Java peer can transform a certificate into a byte array using the Certificate.getEncoded() method. The byte array is then sent to the counterpart.
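
For illustration, a minimal sketch of encoding a chain on the Java side (assuming the chain is already available as an array of X509Certificate, e.g. loaded from a KeyStore):

import java.security.cert.CertificateEncodingException;
import java.security.cert.X509Certificate;

/** DER-encodes every certificate of the chain; the resulting byte arrays are sent to the counterpart. */
public static byte[][] encodeChain(X509Certificate[] chain) throws CertificateEncodingException {
  byte[][] encoded = new byte[chain.length][];
  for (int i = 0; i < chain.length; i++) {
    encoded[i] = chain[i].getEncoded();   // DER-encoded certificate
  }
  return encoded;
}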

The .NET peer parses the certificate chain data and stores the parsed certificates in a List<byte[]> to ease processing.

The code below shows the authentication process. The process differs from standard validation in that we are sending only the certificate chain. The peer's certificate is extracted from the chain; from the rest of the received certificate chain we create a subchain and check whether we were able to build a complete chain from the peer to the Certificate Authority.

For simplicity, revocation checks and the certificate key usage field (OID 2.5.29.15) are ignored. In a real environment you should definitely check the revocation list and this field, but for this article it is not really necessary.

/**
* receivedCertificateChain certificate chain received from the counterpart, already transformed
*                          into a list of separate DER-encoded certificates
*/
public bool certificateChainValidation(List<byte[]> receivedCertificateChain)
{
  var otherSideCertificate = new X509Certificate2(receivedCertificateChain[0]);
  var chain = new X509Chain { ChainPolicy = { RevocationMode = X509RevocationMode.NoCheck, VerificationFlags = X509VerificationFlags.IgnoreWrongUsage } };
  chain.ChainPolicy.ExtraStore.AddRange(receivedCertificateChain.Skip(1).Select(c => new X509Certificate2(c)).ToArray());

  return chain.Build(otherSideCertificate);
}

The root CA certificate must be trusted by your system for the validation to succeed.

Java authentication

The .NET peer can transform each certificate into a byte array using the X509Certificate2.RawData property and send it to the counterpart.

First, the Java peer needs to define the validation parameters and the trust anchor, in this case the root CA.

After that, you validate the received certificate chain. If the validate(CertPath, PKIXParameters) method completes without any exception, you can consider the received certificate chain correctly validated.

/**
* certificatePath received data of certificate chain from counterpart 
*/
public boolean certificateChainValidation(byte[] certificatePath) {
  try {
    //generate Certificate Path from obtained byte array
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    CertPath received = cf.generateCertPath(new ByteArrayInputStream(certificatePath));

    //get all trusted entities from dedicated truststore
    Set<TrustAnchor> trustedCAs = getAnchors();

    //set the list of trusted entities and turn off the revocation check
    //revocation is by default set to true, so if you don't turn it off explicitly, exceptions will bloom
    PKIXParameters params = new PKIXParameters(trustedCAs);
    params.setRevocationEnabled(false);

    CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
    //if no exception is thrown during validation process, validation is successful
    cpv.validate(received, params);
  } catch (CertificateException | NoSuchAlgorithmException | InvalidAlgorithmParameterException | CertPathValidatorException e) {
    //could not validate the certificate chain
    return false;
  }
  return true;
}

getAnchors() is a method which simply loads all entries from a dedicated truststore and returns them as a Set<TrustAnchor>.
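
A possible sketch of such a method, assuming a JKS truststore and illustrative PATH_TO_TRUSTSTORE and TRUSTSTORE_PASSWORD constants (the latter a char[]):

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.TrustAnchor;
import java.security.cert.X509Certificate;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;

/** Loads every certificate from the dedicated truststore and wraps it as a TrustAnchor. */
public static Set<TrustAnchor> getAnchors() throws Exception {
  KeyStore trustStore = KeyStore.getInstance("JKS");
  try (FileInputStream in = new FileInputStream(PATH_TO_TRUSTSTORE)) {
    trustStore.load(in, TRUSTSTORE_PASSWORD);
  }
  Set<TrustAnchor> anchors = new HashSet<>();
  Enumeration<String> aliases = trustStore.aliases();
  while (aliases.hasMoreElements()) {
    X509Certificate cert = (X509Certificate) trustStore.getCertificate(aliases.nextElement());
    anchors.add(new TrustAnchor(cert, null));        // no name constraints
  }
  return anchors;
}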

Signatures

Without a correct signature scheme, the whole authentication scheme would be compromised, as any attacker would be able to replay an intercepted certificate chain. Therefore, when sending the outgoing certificate chain, each peer has to include a signature with it. It is sufficient for the signed data to be a reasonably long nonce. This nonce should be freshly generated by the communication counterpart and can be sent before the authentication attempt (or as an authentication challenge). The nonce can be any random value or a static counter, but it must definitely be unique.
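
A minimal sketch of generating such a challenge nonce on the Java side (the 16-byte length is illustrative):

import java.security.SecureRandom;

/** Generates a fresh random nonce which the counterpart signs together with its certificate chain. */
public static byte[] generateNonce() {
  byte[] nonce = new byte[16];          // illustrative length
  new SecureRandom().nextBytes(nonce);
  return nonce;
}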

During the key agreement phase, signatures are used to sign public keys that are exchanged between peers to ensure the integrity of data.

.NET signature generation

First, you need to use a certificate with the private key included – a PKCS#12 certificate.

Flags such as PersistKeySet and Exportable have to be included when loading the certificate, as without them you are not able to obtain the private key for signatures from the keystore. The PersistKeySet flag ensures that, while importing a certificate from a .pfx or .p12 file, you import the private key with it. The Exportable flag means that you can export the imported keys (mainly the private key).

As mentioned in this comment, we need a little workaround to force the crypto provider to use SHA256 hashes.

public byte[] signData(byte[] dataToSign)
{
  X509Certificate2 cert = new X509Certificate2(PATH_TO_CERTIFICATE, PASSWORD, X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);
  RSACryptoServiceProvider privateKey = (RSACryptoServiceProvider)cert.PrivateKey;
 
  RSACryptoServiceProvider rsaClear = new RSACryptoServiceProvider();
  // Export RSA parameters from 'privateKey' and import them into 'rsaClear'
  // Workaround to force RSACryptoServiceProvider use SHA256 hash
  rsaClear.ImportParameters(privateKey.ExportParameters(true));
  return rsaClear.SignData(dataToSign, "SHA256");
}

Java signature generation

A straightforward example, as presented all over the internet.

/**
* data       data to be signed
* privateKey private key imported from key pair storage
*/
public byte[] signData(byte[] data, PrivateKey privateKey) {
  try {
    Signature signature = Signature.getInstance(signatureType);
    signature.initSign(privateKey);
    signature.update(data);
    return signature.sign();
  } catch (NoSuchAlgorithmException | SignatureException | InvalidKeyException e) {
    //could not create the signature
    return null;
  }
}

.NET signature verification

The first thing to mention is that you shouldn't create an X509Certificate2 object directly from byte arrays (see the 5th tip in this article). Depending on how your protocol is used, it might result in stalling and slowing down the whole protocol, which you definitely don't want. But again, for simplicity, I'm creating the object directly. Mitigation of the mentioned problem can be seen in the linked article.

/**
* signature    signature of received data
* receivedData data received from counterpart. These data were also signed by counterpart to ensure integrity
*/
public bool verifySignature(byte[] signature, byte[] receivedData)
{
  X509Certificate2 cert = new X509Certificate2(PATH_TO_CERTIFICATE);

  RSACryptoServiceProvider publicKey = (RSACryptoServiceProvider)cert.PublicKey.Key;
  return publicKey.VerifyData(receivedData, "SHA256", signature);
}

Java signature verification

A straightforward example, as presented all over the internet.

/**
* receivedData data received from counterpart
* signature    signature of receivedData
* publicKey    public key associated with privateKey object from "Java signature generation"
*/
public boolean verifyData(byte[] receivedData, byte[] signature, PublicKey publicKey) {
  try {
    Signature sig = Signature.getInstance("SHA256withRSA");
    sig.initVerify(publicKey);
    sig.update(receivedData);
    return sig.verify(signature);
  } catch (NoSuchAlgorithmException | SignatureException | InvalidKeyException e) {
    //could not validate the signature
    return false;
  }
}

 

Key pair generation

Both key pair generation and key agreement use the Bouncy Castle library (BC) for the crypto operations. During the key generation phase, you need to specify the key agreement algorithm. In this project we used elliptic curve Diffie-Hellman ("ECDH") key agreement. Elliptic curves were used to generate the key pair mainly because of their advantages (performance, key size).

.NET key pair generation

The only difference from standard key generation is encoding the generated public key in a way that Java will accept, so that it can create a PublicKey object.

private static AsymmetricCipherKeyPair generateKeypair()
{
  var keyPairGenerator = GeneratorUtilities.GetKeyPairGenerator("ECDH");
  var ellipticCurve = SecNamedCurves.GetByName(ELLIPTIC_CURVE_NAME);
  var parameters = new ECDomainParameters(ellipticCurve.Curve, ellipticCurve.G, ellipticCurve.N, ellipticCurve.H, ellipticCurve.GetSeed());
  keyPairGenerator.Init(new ECKeyGenerationParameters(parameters, new SecureRandom()));
  return keyPairGenerator.GenerateKeyPair();
}

Now you need to send the public key to the other peer. Unfortunately, it is necessary to format the public key in the following way, otherwise the Java peer will not be able to correctly create a PublicKey object and derive the shared secret.

AsymmetricCipherKeyPair keyPair = generateKeypair();
byte[] publicKeyData = SubjectPublicKeyInfoFactory.CreateSubjectPublicKeyInfo(keyPair.Public).GetDerEncoded();

Java key pair generation

public static KeyPair generateKeyPair() {
  try {
    Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());
 
    ECNamedCurveParameterSpec ecNamedCurveParameterSpec = ECNamedCurveTable.getParameterSpec(ELLIPTIC_CURVE_NAME);
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("ECDH", PROVIDER);
    keyPairGenerator.initialize(ecNamedCurveParameterSpec);
    return keyPairGenerator.generateKeyPair();
  } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException | NoSuchProviderException e) {
    throw new EncryptionException("Could not generate key pair", e);
  }
}

 

Key agreement

.NET key agreement

At this step, you need to beware of an issue in the C# Bouncy Castle library which results in a failed key agreement even if you have correct keys (correct meaning the same data, but with a different length => 0x0059534F4654 is different from 0x59534F4654).

The problem is that the BC library produces the shared secret as a BigInteger. BigInteger trims all leading zeroes, so when it is converted to a byte array, those zeroes are not included in the array.

Unfortunately, as of the publication date the issue was still not fixed in BC, so you need to check the key length yourself.


private static byte[] DeriveKey(AsymmetricCipherKeyPair myKeyPair, byte[] otherPartyPublicKey)
{
  IBasicAgreement keyAgreement = AgreementUtilities.GetBasicAgreement(KEY_AGREEMENT_ALGORITHM);
  keyAgreement.Init(myKeyPair.Private);

  //check otherPartyPublicKey length

  //shared secret generation
  var fullKey = keyAgreement.CalculateAgreement(PublicKeyFactory.CreateKey(otherPartyPublicKey)).ToByteArrayUnsigned();  
  return fullKey;
}

Java key agreement

A straightforward example, as presented all over the internet.

public SecretKey secretDerivation(PublicKey receivedPublicKey, PrivateKey myPrivateKey) {
  try {
    KeyAgreement keyAgreement = KeyAgreement.getInstance(KEY_AGREEMENT_ALGORITHM, PROVIDER);
    keyAgreement.init(myPrivateKey);
    keyAgreement.doPhase(receivedPublicKey, true);
    return keyAgreement.generateSecret("AES");
  } catch (NoSuchAlgorithmException | InvalidKeyException | NoSuchProviderException e) {
    throw new EncryptionException("Could not generate secret", e);
  }
}

 

From these examples you might get a bad feeling, as they are rather crude and the messages aren't sent or received in any standard way, only as byte arrays. A better approach is to exchange protocol messages in a more standardized way, although by doing so you start relying heavily on a third-party library.

A more standardized version of the protocol would use the Cryptographic Message Syntax (CMS). We will have a closer look at this option in the next blog article.