We wish you all the best in 2018.

Here is a small game for you: find your way through the labyrinth.

Y Soft uses a robotic arm for testing multi-functional devices, but the robotic arm alone is not enough for our testing purposes. We need to interact with the device in other ways than just tapping on the touchscreen. The screen of the tested device is already captured by a camera, but we also need other feedback from the device and the ability to react to it. For that reason, we developed the Modular sensor platform, which can be easily plugged into a computer (the Web API server) over USB. Via a REST API you can read information from, or send commands to, different kinds of sensors and actuators. The following diagram illustrates how the platform is composed.

### Web API server

As the diagram shows, you can connect multiple sensors to the server via a USB-to-CAN converter. When the web server starts, it sends a discovery packet. From the responses, the server knows how many sensors are connected and of what types. After initialization, it starts listening for sensor commands from clients.

The Web API server is written using the ASP.NET Core framework. The following link points to a tutorial which shows how simple it is to create a RESTful application and which components the server is composed of.
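To give an idea of what such an endpoint looks like, here is a minimal controller sketch in ASP.NET Core; the route, the SensorsController name and the ISensorBus abstraction are illustrative assumptions, not the actual server code:

using Microsoft.AspNetCore.Mvc;

// Hypothetical abstraction over the USB-to-CAN converter.
public interface ISensorBus
{
    object Read(int sensorId);
    void Send(int sensorId, string command);
}

[Route("api/[controller]")]
public class SensorsController : Controller
{
    private readonly ISensorBus _bus;

    public SensorsController(ISensorBus bus)
    {
        _bus = bus;
    }

    // GET api/sensors/5 - read the current value of a sensor discovered at startup
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        return Ok(_bus.Read(id));
    }

    // POST api/sensors/5/command - send a command to a sensor or actuator
    [HttpPost("{id}/command")]
    public IActionResult SendCommand(int id, [FromBody] string command)
    {
        _bus.Send(id, command);
        return NoContent();
    }
}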

.NET Core is cross-platform, so the web server can run on any device running Linux, macOS or Windows.

Try to create an ASP.NET Core application based on the tutorial above, or just create a console application (see link). The created application can be built for any supported OS; for ARM, only the runtime is available, not the SDK for developing applications (see SDK support, ARM Runtime).

Building for a device is as simple as running this command

dotnet publish -r <Runtime identifier>

in the directory of the project (after the -r switch you can specify any supported platform; for more information see this link). You must also install the prerequisites on the target device (see link). Then you can copy the folder

<Project path>bin\<Configuration>\netcoreapp2.0\<Runtime identifier>\publish

to the ARM device and run the application.

### Summary

This article shows how the parts of the platform are composed, how they communicate with each other, and that the platform is not limited to a single operating system: it works on Windows, Linux and macOS, even on the ARM architecture. In the next part of this article, I will describe the development of the USB-to-CAN converter and the sensors.

Chef is an automation platform designed to help with the deployment and provisioning process during software development and in production. In cooperation with other deployment tools, Chef can transform the whole product environment into infrastructure as code.

## DSL

Chef provides a custom DSL that lets its users define the whole environment as a set of resources, together forming recipes, which can be further grouped into cookbooks. The DSL is based on Ruby, which adds a level of flexibility by offering Ruby’s language constructs to help the development. A basic example of a resource is a file with a specified content:
file 'C:\app\app.config' do
content "server_port = #{port}"
end

Upon execution, Chef will make sure the defined file exists and has the correct content. If a file with the same content already exists, Chef will finish without updating the resource, letting developers know the environment was already in the desired state before the Chef run.

## Provisioning

The resources have built-in validations ensuring that only the changes in configurations are applied to an existing environment. This lets users execute recipes repeatedly with only minor adjustments, and Chef will make the necessary changes in your environment, leaving the correctly defined resources untouched.
This is especially handy in a scenario where an environment is already deployed and developers keep updating the recipes with new resources and managing configurations of deployed components. Here, with correctly defined validations, the recipes can be executed on target machines repeatedly, always updating the environment without modifying the parts that are already up to date.
This behavior can be illustrated with the following example:
my_tool = maven 'tool.exe' do
artifact_id     'tool'
group_id        'com.ysoft'
version         '1.0.0'
dest            'C:\utils'
packaging       'exe'
end

execute 'run tool.exe' do
command "#{my_tool.dest}\\#{my_tool.name} > #{my_tool.dest}\\tool.output"
not_if { ::File.exist?("#{my_tool.dest}\\tool.output") }
end

In this example, the goal is to download an exe file and run it exactly once (only the first run of this recipe should update the environment). The maven resource internally validates whether the given artifact has already been downloaded (i.e. whether the file C:\utils\tool.exe already exists).
The problem is with the execute resource, as it has no way of checking whether it has been run before, thus potentially executing multiple times. Users can, however, define such restrictions themselves, in this case with the not_if attribute. It prevents the resource from executing again by checking for the existence of the tool's output from previous runs.

## Architecture

To enable environment provisioning, Chef operates in a client-server architecture with a pull-based model.
Chef server represents the storage of everything necessary for deployment and provisioning. It stores cookbooks, templates, data bags, policies and metadata describing each registered node.
Chef client is installed on every machine managed by Chef server. It is responsible for contacting Chef server and checking whether there are new configurations to be applied (hence the pull-based model).
ChefDK workstation is the machine from which the whole Chef infrastructure is operated. Here, the cookbooks are developed and Chef server is managed.
In this example, we can differentiate between the Chef infrastructure (blue) and the managed environment (green). The process of deployment and provisioning is as follows:
1. A developer creates/modifies a cookbook and uploads it to the Chef server.
2. Chef client requests the server for changes in the recipes.
3. If there are changes to be made, Chef server notifies the client.
4. The client initiates a Chef run with the new recipes.
Note here that in a typical Chef environment, Chef client is set to request the server for changes periodically, to automate the process of configuration propagation.
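For illustration, step 1 roughly corresponds to uploading the cookbook from the workstation with knife, and steps 2 to 4 to a chef-client run on the managed node; the cookbook name below is a made-up example and the exact invocation depends on your setup:

knife cookbook upload my_cookbook
chef-client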

## Serverless deployment

When only the deployment of the environment is necessary (e.g. for a simple installation of a product where no provisioning is required), in an offline deployment, or while testing, much of the operational overhead of Chef can be avoided by leaving out the server completely.
Chef client (with additional tools from ChefDK) can operate in a local mode. In such a case, everything necessary for the deployment, including the recipes, is stored on the Chef client, which acts as a dummy server for the duration of the Chef run.
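For example, a local-mode run over a cookbook stored on the same machine could look like the following; the cookbook name is made up and the exact options may differ between Chef versions:

chef-client --local-mode --runlist 'recipe[my_cookbook]'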

Here, you can see the architecture of a serverless deployment. The process is as follows:
1. Chef client deploys a dummy server and points it to cookbooks stored on the same machine.
2. Chef client from now on acts as the client in the example above and requests the server for changes in the recipes.
3. Chef Server notifies the client of the changes and a new Chef run is initiated.

## Conclusion

Chef is a promising tool that has the potential to help us improve not only the products we offer, but also the process of development and testing.
In combination with infrastructure deployment tools (like Terraform) that we are currently researching, automation of product deployment and provisioning can allow our developers to focus on important tasks instead of dealing with the deployment of testing environments or manually updating configuration files across multiple machines.

This blog post introduces Terraform, a tool we use for deploying testing environments at Y Soft. We will cover the following topics:

• What is Terraform?
• How does it work?
• Example of use

## What is Terraform?

Terraform is a command line tool for building and changing infrastructure in a safe and efficient manner. It can manage resources on most of the popular service providers. In essence, Terraform is simply a tool that takes configuration files as input and generates an execution plan describing what needs to be done to reach the desired state. Do you need to add another server to your cluster? Just add another module to your configuration. Or redeploy your production environment in a matter of minutes? Then Terraform is the right tool.

## How does it work?

### Infrastructure as a code

Configuration files that define infrastructure are written using a high-level configuration syntax. This basically means that the blueprint of your production or testing infrastructure can be versioned and treated as you would treat any other code. In addition, since we are talking about code, the configuration can be shared and re-used.

### Execution plan

Before every Terraform execution there is a planning step, where Terraform generates an execution plan. The execution plan shows you what will happen when you run an execution (when you apply the plan). This way you avoid surprises when you manipulate your infrastructure.

### Terraform state file

How does Terraform determine the current state of the infrastructure? The answer is the state file. The state file keeps information about all the resources that were created by executing the given configuration file. To ensure that the information in the state file is fresh and up to date, Terraform queries our provider for any changes to our infrastructure (and modifies the state file accordingly) before running any operation. In other words, for every plan and apply, Terraform synchronizes the state file with the provider.

Sometimes this behavior can be problematic; for example, querying large infrastructures can take a non-trivial amount of time. In these scenarios, we can turn off the synchronization, which means the cached state will be treated as the record of truth.
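For example, the refresh can be disabled for a single plan like this (the flag applies to the Terraform versions we used):

terraform plan -refresh=false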

Below you can see the picture of the whole execution process.

## Example of use

In our example, we will be working with the Azure provider. The example configuration files can be used only with the Azure provider (configuration files for different providers may and will differ). It is also expected that we have set up Terraform on our machine and the appropriate endpoints to the provider beforehand.

### Step 1: Write configuration file

The presented configuration file has no expectations regarding previously created resources and it can be executed on its own, without the need to create any resources in advance.

The configuration file that we will write describes the following resources:

• a resource group
• a virtual network
• a subnet
• a network interface
• a virtual machine

Now we create an empty directory on the machine where we have Terraform installed, and within it we create a file named main.tf. The contents of the main.tf file:

provider "azurerm" {
subscription_id = "..."
client_id       = "..."
client_secret   = "..."
tenant_id       = "..."
}

resource "azurerm_resource_group" "test" {
name     = "test-rg"
location = "West US 2"
}

resource "azurerm_virtual_network" "test" {
name                = "test-vn"
location            = "West US 2"
address_space       = ["10.0.0.0/16"] # required by the provider; value assumed for this example
resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
name                 = "test-sbn"
resource_group_name  = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix       = "10.0.2.0/24"
}

resource "azurerm_network_interface" "test" {
name                = "test-nic"
location            = "West US 2"
resource_group_name = "${azurerm_resource_group.test.name}"

ip_configuration {
name                          = "testconfiguration1"
subnet_id                     = "${azurerm_subnet.test.id}"
private_ip_address_allocation = "dynamic"
}
}

resource "azurerm_virtual_machine" "test" {
name                  = "test-vm"
location              = "West US 2"
resource_group_name   = "${azurerm_resource_group.test.name}"
network_interface_ids = ["${azurerm_network_interface.test.id}"]
vm_size               = "Standard_DS1_v2"

delete_os_disk_on_termination = true

storage_image_reference {
publisher = "Canonical"
offer     = "UbuntuServer"
sku       = "16.04-LTS"
version   = "latest"
}

storage_os_disk {
name              = "myosdisk1"
caching           = "ReadWrite"
create_option     = "FromImage"
managed_disk_type = "Standard_LRS"
}

os_profile {
computer_name  = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}

os_profile_linux_config {
disable_password_authentication = false
}
}

### Step 2: Planning the execution

Now we browse into the directory with our main.tf file and run the command terraform init, which initializes various local settings and data that will be used by subsequent commands. Next, we run the command terraform plan, which outputs the execution plan describing which actions Terraform will take in order to change the real infrastructure to match the configuration. The output format is similar to the diff format generated by tools such as Git. If terraform plan fails with an error, read the error message and fix the error that occurred; at this stage, it is most likely a syntax error in the configuration.

### Step 3: Applying the plan

If terraform plan ran successfully, we are safe to execute terraform apply. Throughout the whole "apply" process, Terraform informs us of the progress. Once Terraform is done, our environment is ready and we can easily check it by logging in to our virtual machine. Our directory now also contains a terraform.tfstate file, the state file that corresponds to our newly created infrastructure.

## Conclusion

This example was only a very simple one to show what a configuration file might look like. Terraform offers much more on top of that. Configurations can be packed into modules, self-contained packages that are managed as a group. This way we can create reusable, parametrizable components and treat these pieces of infrastructure as black boxes. Besides that, Terraform can also perform provisioning of VMs and much more.

At Y Soft, we use robots to test our solutions for verification and validation aspects; we are interested in whether the system works according to the required specifications and what the qualities of the system are. To save time and money, it is possible to use a single robot to test multiple devices simultaneously. How is this done? It is very simple, so let's look at it.

When performing actions to operate a given device, the robot knows where the device is located thanks to a calibration. The calibration file contains a transformation matrix that can transform a location on the device into the robot's coordinate system. The file also contains information about the device the calibration is compatible with. How the calibration is computed is covered in this article. There is also a calibration of the camera, which contains information about the region of interest, i.e. where exactly the device screen is located in the view of the camera. All of the calibration files are stored on the hard drive.
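To illustrate how such a matrix is used (a rough sketch only, not the actual robot code): a point on the device screen is extended to homogeneous coordinates and multiplied by the 4×4 matrix from the calibration file (the MatrixArray field shown in the examples below) to obtain coordinates in the robot's space.

// Sketch: transform a point from device-screen coordinates to robot coordinates
// using the 4x4 homogeneous matrix stored in the calibration file ("MatrixArray").
static double[] DeviceToRobot(double[][] matrix, double x, double y, double z = 0.0)
{
    double[] point = { x, y, z, 1.0 };   // homogeneous coordinates
    var robot = new double[3];
    for (int row = 0; row < 3; row++)
    {
        double sum = 0.0;
        for (int col = 0; col < 4; col++)
            sum += matrix[row][col] * point[col];
        robot[row] = sum;                // robot-space X, Y, Z
    }
    return robot;
}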
Example of calibration files for two Terminal Professionals:

{
"ScreenXAngle":0.35877067027057225,
"DeviceId":18,
"DeviceModelId":18,
"DeviceName":"Terminal_Professional 4, 10.0.5.182",
"MatrixArray":[[0.728756457140,0.651809992529,-0.159500429741,75.4354297376],[-0.683749126568,0.734998176419,-0.10936964140,71.1249458777],[0.0422532822652,0.187897834122,0.981120733880,-34.923427696],[0.0,0.0,0.0,1.0]],
"ScreenSize":{ "Width":153.0, "Height":91.0 }
}

{
"ScreenXAngle":0.25580856461537194,
"DeviceId":27,
"DeviceModelId":18,
"DeviceName":"Terminal_Professional 4, 10.0.5.112",
"MatrixArray":[[0.713158843830,-0.686471581194,0.220191724515,-176.983055],[0.699596463347,0.6783511825194,-0.15148794414,-71.7788394],[-0.05297850752,0.2635031531279,0.963621817536,-29.83848504],[0.0,0.0,0.0,1.0]],
"ScreenSize":{ "Width":153.0, "Height":91.0 }
}

For the robot to operate on multiple devices, all of the devices must be within the robot's operational range, which is quite limited, so this feature is currently used only for smaller devices, like mobile phones and the Terminal Professional. It is theoretically possible to use a single robot on more devices, but for practical purposes there are usually only two devices. Also, all devices must be at roughly the same height, which limits testing on multi-functional devices that have varying heights and terminal placements. Space is also limited by the camera's range, so multiple cameras might be required, but this is not a problem as the camera calibration also contains the unique identifier of the camera. Therefore a robot can operate on multiple devices using multiple cameras, or just a single camera if the devices are very close to each other.

Before testing begins, the robot needs to have all device calibrations available on the hard drive, and all action elements (buttons) need to be within its operational range. The test configuration contains variables such as DEVICE_ID and DEVICE2_ID, which need to contain the correct device IDs as stored in the robot's database. Which tests will run on the devices and the duration of the tests also need to be specified.

Tests used for these devices are usually measurement and endurance tests, which run in iterations. There are multiple variants of these tests. For example, let's say we wish to run tests for 24 hours on two devices and each device should get an equal fraction of this time. This means that the test will run for 12 hours on one device and 12 hours on the other, which is called consecutive testing. Another variant is simultaneous testing, which means that the robot alternates between the devices after each iteration for a total time of 24 hours. The robot loads the calibration of the other device after each iteration and continues with the test on that device. This is sometimes very useful: should one device become unresponsive, the test can continue on the second device for the remaining time. Results of each iteration of the test for each device are stored in a database along with other information about the test and can be viewed later.

Testing multiple devices with a single robot also makes it possible to test and compare different versions of an application or operating system (in this case on the Terminal Professional) without ending the test, reinstalling the device and running the test again. This saves a lot of time and makes the comparison more accurate.

Reaction time measurement is the process of acquiring the timespan of how long it takes for the tested device to change its state after clicking on an action element.
The most common scenario is measuring the time needed to load a new screen after clicking on the button that invokes the screen change. This measurement directly testifies about the user experience with the tested system. So how does our robotic system do it?

The algorithm of reaction time measurement is based on the calculation of pixel-wise differences between two consecutive frames, which simply subtracts the pixel values of one image from another. Let's have two frames labeled fr1 and fr2; there are three main types of difference computation:

• Changes in fr1 according to fr2: diff = fr1 – fr2
• Changes in fr2 according to fr1: diff = fr2 – fr1
• Changes in both directions: diff = | fr1 – fr2 |

The last mentioned computation is called absolute difference and is the one used in our algorithm for reaction time measurements. In general, the input frames are grayscale 8-bit images of the same size. Computing differences for color images is possible, however it would only introduce more errors in the RGB color spectrum due to more variables being dependent on the surrounding lighting conditions. The final computed difference is just a number indicating the amount of change between two frames, therefore it is perfect for detecting the change of screen in a sequence of images.

Enough of theory, let's make it work! First of all, we need two images indicating the change of screen. For this purpose, I chose the following two pictures. Imagine those are two consecutive frames in the sequence of frames we talked about earlier. The next step is to convert them to grayscale and make sure they are of the same height and width. After those necessary adjustments, we are ready to calculate the difference between them. As was said before, it is computed as an absolute subtraction of one image from another. In about every computer vision library there is a method implemented for this purpose, e.g. in OpenCV it is called AbsDiff. The following image shows the result of subtracting the two images above.

As you can see, there is a visible representation of both images. That is completely fine, because every non-zero pixel tells us how much the given pixel differs from the same pixel in the second image. If the result image were black, it would mean that the difference is zero and the images are identical, and vice versa for a white image. The next step is to sum the values of all pixels in the result image. Remember that each pixel value of a grayscale 8-bit image is in the range from 0 to 255. The difference for this image:

diff = 68964041

This value itself is not very descriptive of the change between the two images, therefore normalization needs to be applied. The form of normalization we use transforms the computed difference into a percentage representation of the change on the screen, using a defined threshold. The threshold specifies what value of a pixel is high enough to be classified as changed, so rather than computing the sum of all pixels in the result image, we count how many pixels are above the defined threshold. The normalized difference for this image:

diffnormed = 96.714 % (with threshold = 10)

This result, compared to the previous one, tells us much more precisely how much change happened between the two images. The algorithm to detect the amount of change between two images was just the first part of the whole time measurement process. In our robotic system, we have implemented two modes of reaction time measurement: Forward and Backward reaction time evaluation.
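Before moving on to the two evaluation modes, here is a small self-contained sketch of the normalized difference computation described above, written over plain byte arrays; in our system a computer vision library does this work, and the threshold value below is only illustrative:

using System;

static class FrameDifference
{
    // Percentage of pixels whose absolute difference between two grayscale
    // 8-bit frames (of the same size) exceeds the given threshold.
    public static double Normalized(byte[] fr1, byte[] fr2, int threshold)
    {
        if (fr1.Length != fr2.Length)
            throw new ArgumentException("Frames must have the same size.");

        int changed = 0;
        for (int i = 0; i < fr1.Length; i++)
        {
            int diff = Math.Abs(fr1[i] - fr2[i]);   // absolute difference per pixel
            if (diff > threshold)
                changed++;
        }
        return 100.0 * changed / fr1.Length;
    }
}

// Example usage: double diffNormed = FrameDifference.Normalized(frame1, frame2, 10);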
## Forward reaction time evaluation

Forward RTE is based on near-real-time evaluation, meaning that the algorithm obtains data from an image source procedurally and processes them as they arrive. The algorithm does not have the ambition to find the desired screen immediately; it rather searches for screen changes, evaluates them and then compares them to the desired one. The Forward RTE diagram shows the process flow of the algorithm. At the start, it sets the first frame as the reference image. Differences against this reference are then computed with incoming frames. If the computed difference is above the threshold, the frame is identified and the result is compared to the desired screen. If it does not match, the frame is set as the new reference and differences are then calculated against it. If it does match, the timestamp of the image acquirement is saved and the algorithm ends. In theory, every screen change during measuring is identified only once; however, this strongly depends on the threshold value that the user needs to set. Even though this algorithm tries to be real-time, the identification algorithms take so much time that this is not possible yet.

## Backward reaction time evaluation

Backward RTE works pretty much the other way around. Rather than searching for the desired image from the start, it waits for all images to be acquired, identifies the last frame and sets it as the reference, and after that looks for the first appearance of that reference in the sequence. The Backward RTE diagram shows the process flow of the algorithm. First of all, it waits for all frames of the subsequence of frames. After all frames are acquired, the last frame is identified, and if the last frame is the actual desired screen, the reference is set and the algorithm proceeds. If the last frame was not the desired screen, it would mean that the desired screen has not loaded yet or some other error happened. For this case, the algorithm records backup sequences to provide additional consecutive frames. If there is no desired screen in those sequences either, the algorithm is aborted. After the reference is set, the actual search starts. It looks for the first frame which is very similar to the reference one, using the difference algorithms described earlier. The found image is identified and compared to the desired one. If the identified and desired screens match, the time of acquirement is saved and the algorithm ends. However, if they do not match, the sequence is shortened to start at the index of the falsely identified frame and the algorithm searches further. The furthest index is the actual end of the sequence, because the image at the end of the sequence was identified as the desired one at the start of RTE.

## Summary

This article contains information about difference-based measurement of reaction time. It guides you through the computation of differences between two images. It also describes our very own two reaction time evaluation modes, which we use in practice.

## Additional notes

To keep the description of the algorithms as readable as possible, a few adjustments were left out. Preprocessing of the images is an essential part, where the elimination of noise has a high impact on the stability of the whole algorithm. We have also implemented a few optimization procedures that reduce the amount of data that needs to be processed, e.g. bisection.

Come on 7 November 2017 at 18:00 to Y Soft, Technická 13, Brno, for the pilot episode of the Tech Support Meetup. Do you know what is happening on your network? What is network sniffing? How do you analyze encrypted communication? If you want to know the answers, don't hesitate and come to the meetup!
During this interactive workshop you will learn the basics of Wireshark, why this information is useful and how to use it for effective network analysis and debugging. Admission is free. Please register here; the number of seats is limited. The workshop will be held in Czech and will be guided by an enthusiastic cipher solver, Lenka Bačinská.

This step-by-step guide shows the way to smoothly build a FIPS capable OpenSSL library for use in a FIPS 140-2 compliant Tomcat server on Windows machines.

## What is FIPS 140-2?

The Federal Information Processing Standard 140-2 is a security standard published by the National Institute of Standards and Technology (NIST), covering the specification of security requirements for implementing cryptographic modules. A cryptographic module may be either a library, a component of a product or application, or a complete product. The specifications include e.g. a list of approved algorithms, module inputs and outputs, physical security, cryptographic key management and other areas related to secure design. NIST manages a list of FIPS 140-1 and FIPS 140-2 validated cryptographic modules, i.e. modules tested, validated and certified under the Cryptographic Module Validation Program. The complete list can be found here: http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm.

## FIPS 140-2 compliant Tomcat

Compliance (unlike FIPS validation) means that only FIPS approved algorithms and validated modules are used in the product, but the product itself was not validated. Apache Tomcat, an open source Java application server, can use two different implementations of the SSL/TLS protocol, and thus there are two options for achieving FIPS 140-2 compliance:

• JSSE, the Java implementation – only FIPS validated Java cryptographic providers must be enabled, along with the correct setting of the ciphers and algorithms in the Tomcat HTTPS connector
• Apache Portable Runtime, the OpenSSL implementation – if FIPS 140-2 is supported by the linked OpenSSL library, the so-called FIPS Mode can be enabled in the Tomcat settings

## OpenSSL FIPS Object Module

The OpenSSL library itself is not FIPS validated. A special software component called the OpenSSL FIPS Object Module was created instead. OpenSSL compiled with the OpenSSL FIPS Object Module embedded inside is the so-called FIPS capable OpenSSL. It provides the standard, non-FIPS API as well as a FIPS 140-2 Approved Mode, a setting in products using this library in which only FIPS 140-2 validated cryptography is used and non-FIPS approved algorithms are disabled. The current version of the OpenSSL FIPS Object Module is 2.0 and it is compatible with the standard OpenSSL 1.0.1 and 1.0.2 distributions.

## Step zero: Prerequisites

For the whole following build process, the Developer Command Prompt for Visual Studio is required. It is one of the optional choices offered during VS installation. When installing VS, check the following option (example for VS 2015):

• Programming languages\Visual C++\Common Tools for Visual C++ 2015

In case Visual Studio is already installed without the Developer Command Prompt, you can add this feature by modifying the program:

Start -> Programs and Features -> Microsoft Visual Studio 2015 -> Change -> Modify

The following window should appear. Again, check the aforementioned option. The guide was tested using Visual Studio Professional 2015. Both the aforementioned option for installing the Developer Command Prompt for Visual Studio and the batch files needed in the following process may differ in other versions.
## Step one: Getting the source codes

Download the Windows sources for:

• Tomcat Native
• OpenSSL
• OpenSSL FIPS Object Module
• Apache Portable Runtime

Unpack:

• Tomcat Native
• OpenSSL FIPS Object Module to a directory outside Tomcat Native
• OpenSSL sources to tomcat-native-X\native\srclib\openssl
• Apache Portable Runtime sources to tomcat-native-X\native\srclib\apr

## Step two: Building the OpenSSL FIPS Object Module

Prerequisites:

• Developer Command Prompt for Visual Studio
• Extracted OpenSSL FIPS Object Module files
• Perl installed and location added to the PATH system variable

Compilation (64-bit version):

1. Open Developer Command Prompt:
Start -> Developer Command Prompt for VS2015
2. Add variables for desired environment:
cd vc
vcvarsall x64

3. Navigate to the extracted OpenSSL FIPS Object Module sources:
cd openssl-fips-X\

4. Set needed variables:
Set PROCESSOR_ARCHITECTURE=AMD64
Set FIPSDIR=absolute\path\to\Openssl-fips-X

5. [Optional] In case you use Cygwin Perl, you may encounter an error (“No rule for …”) during the build process. To prevent this issue, open the openssl-fips-X\util\mk1mf.pl file in a text editor, find the first chop; command and add the following on the next row:
s/\s*$//;

6. Build the OpenSSL FIPS Object Module
ms\do_fips


The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd openssl-fips-X\
Set PROCESSOR_ARCHITECTURE=x86
Set FIPSDIR=absolute\path\to\Openssl-fips-X
ms\do_fips


## Step three: Building the FIPS capable OpenSSL

Prerequisites:

• Developer Command Prompt for Visual Studio
• Compiled FIPS module
• OpenSSL 1.0.1 or 1.0.2 sources extracted in the tomcat-native-X\native\srclib\openssl folder
• Perl installed and location added to the PATH system variable (note that Cygwin Perl may have issues with backslash in addresses)
• NASM (Netwide Assembler)  installed and location added to the PATH system variable

Compilation (64-bit version):

1. Open Developer Command Prompt:
Start -> Developer Command Prompt for VS2015
2. Add variables for desired environment:
cd vc
vcvarsall x64

3. Navigate to the extracted OpenSSL sources:
cd native\srclib\openssl\

4. Configure and make:
perl Configure VC-WIN64A fips --with-fipsdir=absolute\path\to\Openssl-fips-X
ms\do_win64a
nmake -f ms\nt.mak


The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd native\srclib\openssl\
perl Configure VC-WIN32 fips --with-fipsdir=absolute\path\to\Openssl-fips-X
ms\do_nasm
nmake -f ms\nt.mak


Version check:
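A quick way to check the build is to run the freshly compiled binary from the out32 folder and verify the reported version; for a FIPS capable build, the version string should mention fips (the exact output depends on the OpenSSL version used):

out32\openssl.exe version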

## Step four: Building APR

Prerequisites:

• Developer Command Prompt for Visual Studio
• Apache Portable Runtime sources extracted in the tomcat-native-X\native\srclib\apr folder

Compilation (64-bit version):

1. Open Developer Command Prompt:
Start -> Developer Command Prompt for VS2015
2. Add variables for desired environment:
cd vc
vcvarsall x64

3. Navigate to the extracted APR sources:
cd native\srclib\apr\

4. Build Apache Portable Runtime:
nmake -f NMAKEmakefile BUILD_CPU=x64 APR_DECLARE_STATIC=1
nmake -f NMAKEmakefile BUILD_CPU=x64 APR_DECLARE_STATIC=1 install


The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd native\srclib\apr\
nmake -f NMAKEmakefile BUILD_CPU=x86 APR_DECLARE_STATIC=1
nmake -f NMAKEmakefile BUILD_CPU=x86 APR_DECLARE_STATIC=1 install


By default, the compiled files should appear in C:\include\ and C:\lib\ folders.

## Step four and a half: Cleaning the mess

It is recommended to create an appropriate file system structure before proceeding to the compilation of the Tomcat Native library.

Create the following folders:

• deps
• deps\openssl
• deps\openssl\lib
• deps\openssl\include
• deps\apr
• deps\apr\lib
• deps\apr\include

And copy the following files:

• native\srclib\openssl\out32\openssl.exe to deps\openssl
• native\srclib\openssl\out32\ssleay32.lib, native\srclib\openssl\out32\libeayfips32.lib and native\srclib\openssl\out32\libeaycompat32.lib to deps\openssl\lib
• content of native\srclib\openssl\inc32\ to deps\openssl\include
• C:\lib\apr-1.lib to deps\apr\lib
• content of C:\include\apr-1\ to deps\apr\include

## Step five: Building Tomcat Native library

Prerequisites:

• Developer Command Prompt for Visual Studio
• Compiled FIPS capable OpenSSL and APR
• Java installed and JAVA_HOME system variable leading to the location set

Compilation (64-bit version):

1. Open Developer Command Prompt:
Start -> Developer Command Prompt for VS2015
2. Add variables for desired environment:
cd vc
vcvarsall x64

3. Navigate to the extracted Tomcat Native sources:
cd tomcat-native-X\native\

4. Set needed variables:
Set CPU=X64
Set FIPSDIR=absolute\path\to\Openssl-fips-X

5. Build FIPS capable Tomcat Native library
nmake -f NMAKEMakefile WITH_APR=path\to\deps\apr WITH_OPENSSL=path\to\deps\openssl APR_DECLARE_STATIC=1 [ENABLE_OCSP=1] WITH_FIPS=1


The compilation process for the 32-bit version:

cd vc
vcvarsall x86
cd tomcat-native-X\native\
Set CPU=X86
Set FIPSDIR=absolute\path\to\Openssl-fips-X
nmake -f NMAKEMakefile WITH_APR=path\to\deps\apr WITH_OPENSSL=path\to\deps\openssl APR_DECLARE_STATIC=1 [ENABLE_OCSP=1] WITH_FIPS=1


Compiled files should appear in the tomcat-native-X\native\WINXP_X64_DLL_RELEASE or tomcat-native-X\native\WINXP_X86_DLL_RELEASE folder.

## Tomcat settings

Now that we have a FIPS capable Tomcat Native library, the last action needed is to configure Tomcat to use the FIPS validated implementation.

1. Copy the compiled tcnative-1.dll to your tomcat\bin folder.
2. In the tomcat\conf\server.xml file, edit the following tags:
Enable FIPS Mode for the APR listener:

<Listener
className="org.apache.catalina.core.AprLifecycleListener"
SSLEngine="on"
FIPSMode="on"
/>


Configure the HTTPS connector to use Native (OpenSSL) implementation of SSL/TLS protocol:

<Connector
protocol="org.apache.coyote.http11.Http11AprProtocol"
…
/>

3. Restart the Apache Tomcat service

And that’s it! Your Tomcat is now using only FIPS approved algorithms and FIPS validated implementations.

## Highest tested versions

This guide was tested with the following component versions:

• Apache Portable Runtime 1.5.2
• OpenSSL 1.0.2l
• OpenSSL FIPS Object Module 2.0.16
• Tomcat Native 1.2.12

I study at the Faculty of Information Technology at VUT in Brno, and the main aim of my bachelor thesis is research into developing cross-platform desktop applications. According to statistics collected between 2008 and 2015, an adult spends approximately 2.5 hours per day on a desktop computer. While the time spent on mobile phones is increasing, the time spent on desktop computers is not decreasing. In my opinion, this gives great potential to a single desktop application that can run on all major operating systems. Such an application can be built with the Electron framework.

# Electron

It is an open-source framework that allows the creation of native desktop applications on Linux, Windows and macOS platforms with web technologies like JavaScript, HTML, and CSS. A combination of Chromium and Node.js makes this possible. The application that is created by Electron has all the benefits of a native desktop application, such as access to the file system or system notifications. It is recommended to use the npm package manager of Node.js while developing the application. With the npm, a developer can record the necessary modules during development into the package.json file, run the application itself, etc.

## Processes

In an Electron application, there are two types of processes: the main process and the renderer process. Each one has its own unique role. The process that runs the script specified in package.json as the main one is called the main process. The main process creates renderer processes and takes care of communicating with the API of the operating system, which allows the application to use the OS's native GUI. Each renderer process renders the content of one web page. Renderer processes run in isolation from each other.

## Developing a basic application

To develop a basic Electron application, it is necessary to create three files: firstly, package.json with information about the application; secondly, a JavaScript file which will be run first; and lastly, an HTML file for creating the GUI of the application (i.e. the web page). The folder structure of a basic application can look like this:

app/
├── package.json
├── main.js
└── index.html

To run the application, you can download the latest Electron release from the web, copy the app folder into the downloaded application as shown below, and execute the electron binary (electron.exe on Windows, etc.).

Folder-structure of Electron application on macOS

electron/Electron.app/Contents/Resources/
└── app/

Folder-structure of Electron application on Windows and Linux

electron/resources/
└── app/

## Application distribution

After the application is developed, the developer can package the app folder into an asar archive instead of distributing the app's source code. In such a case, the app folder, which contains the source code, is replaced with the app.asar package.

There are three options for distribution. The first one is to manually download the latest version of Electron for each required platform and copy the application into the appropriate folder; the third-party command-line tool Electron-packager can also create these packages. The second option, which includes creating installer files, is to use other third-party tools: Electron-builder or Electron-windows-store. The last option, which means distributing the source code instead of an asar package, is to use the npm package manager.

The Electron-builder tool uses Electron-packager and creates the files needed to install the application. The installer format that Electron-builder supports for Windows is NSIS. The most widely used packages for Linux are deb, rpm, freebsd and apk. Similarly, for macOS there are dmg, pkg and mas. Electron-builder also supports automatic updates. If there are dependencies on the native operating system, compilation is required on that system; otherwise, it is possible to compile the application on a selected operating system or use a build server such as AppVeyor for Windows and Travis for Linux and macOS.

Another tool, developed by Microsoft for compiling Electron applications into the .appx package, is called Electron-windows-store. The tool is available from Microsoft PowerShell. Electron-windows-store can be used only on Windows 10 with the Anniversary Update. The Windows Store application format is generated from the package produced by the Electron-packager tool. Requirements to compile the application:

• You must have a certificate that supports the application
• The Windows 10 SDK
• Node.js with a minimum version of 4.

The last option is to distribute only the source code and use the npm package manager to install dependencies and run the application. It is necessary to have all dependencies listed in the package.json file. Then the user just downloads the source code and executes these commands:

#to install all dependencies
$npm install
#to run application
$npm start