For those who attended the Robot Framework workshop at the Test Crunch conference, you can find more details about the environment setup, source code, and books below.

Installing Robot Framework on your computer

  1. Download and install Python 2.7 (32/64 bit based on your OS) from https://www.python.org/downloads/
  2. Add the Python locations to the Path environment variable (e.g. c:\Python27\;c:\Python27\Scripts\)
  3. Download and install wxPython 2.8 with unicode support (32/64 bit based on your Python version) from http://sourceforge.net/projects/wxpython/files/wxPython/2.8.12.1/
  4. Install pip (a package manager) by opening a command line and running python get-pip.py
  5. Open a command line and run the following commands to install Robot Framework and additional libraries:
    pip install robotframework
    pip install robotframework-ride
    pip install robotframework-selenium2library
  6. Start RIDE by running ride.py from the command line

Workshop source code

Books and other study materials

A recording of the workshop will be available soon.

Stay tuned and enjoy Robot Framework.

In QA, we use a robotic arm to autonomously operate a multifunctional device (MFD) according to a given test that is repetitive, time consuming, or not performable by a human.

How does the robot know where the screen is located? How are the 2D screen coordinates transformed into the robot’s coordinate system?

The robot moves the end effector (stylus) in a 3D Cartesian coordinate system with the origin [0,0,0] at the center of the first servomotor. The position of the stylus is then transformed into the angles of all servomotors by inverse kinematics. It is also possible to calculate the position of the stylus given the angles of the servomotors (forward kinematics). However, the screen of the MFD is a 2D plane in 3D space with unknown origin, dimensions, and rotation. The robot needs to know where the screen is located relative to its origin in order to correctly tap any button on the screen.

The dimensions of the screen need to be measured by hand in millimeters. We use 2D coordinates with the origin at the bottom left corner to define a position on the screen. In 3D space, the position and rotation of any plane are uniquely determined by three non-collinear points. If these three points are known, a transformation matrix can be found. This matrix, multiplied by a position on the plane, gives the corresponding position in 3D space.
Previously we used a ‘basic’ calibration, where the robot is navigated through the bottom left corner [0, 0] (the origin of the screen), the top left corner [0, height] and the top right corner [width, height]. At each corner, the stylus’ position (in 3D space) is saved, and the transformation matrix is calculated. This method of calibration requires a lot of precision, because even a slight deviation from a corner leads to a similar deviation in every tap, so the robot might not accurately hit the desired button. There is no feedback from the MFD, but sometimes there is no other way to perform the calibration.
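To make the transformation concrete, here is a sketch of the math (our notation, not from the original article). Let $P_{00}$, $P_{0h}$ and $P_{wh}$ be the measured 3D stylus positions at the bottom left, top left and top right corners. The screen's axis vectors and the mapping of a screen point $(x, y)$ (in millimeters) into 3D space are then:

$$\vec{u} = \frac{P_{wh} - P_{0h}}{width}, \qquad \vec{v} = \frac{P_{0h} - P_{00}}{height}, \qquad P(x, y) = P_{00} + x\,\vec{u} + y\,\vec{v}$$

Written in homogeneous form, $P(x, y) = M \begin{pmatrix} x & y & 1 \end{pmatrix}^{T}$ with $M = \begin{pmatrix} \vec{u} & \vec{v} & P_{00} \end{pmatrix}$, which is exactly the transformation matrix mentioned above.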


With the new semi-automatic calibration, we created our custom version of the Terminal Server (the component which handles communication with the MFD) that detects any tap on the MFD screen and sends its coordinates (X, Y in pixels) to the robot application. The screen resolution is also required, so the Terminal Server sends that on demand. With the knowledge of the screen dimensions and screen resolution, the robot is able to convert the position of a tap (in pixels) to a position on the screen (in millimeters) and save the end effector’s position. The semi-automatic calibration procedure is almost the same as the basic one, but the robot can be navigated to any point within a marked rectangle, not just to a specific point at the corner. This nullifies the need for precision. However, a problem occurred in the form of inaccurate values on the Z axis.

For this purpose we have developed an automatic recalibration. It takes the data gained from the semi-automatic calibration and automatically repeats the procedure with the knowledge of the existing corners of the screen. It goes through the same three corners as before, but it starts higher above each point and slowly descends to accurately measure the Z coordinate. After recalibration, all data from the semi-automatic calibration are discarded and replaced with the values from the automatic calibration. This procedure eliminates any error made by an engineer during calibration and makes the robot’s calibration nearly perfect.
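The pixel-to-millimeter conversion used here is straightforward (again our notation, with $width$/$height$ in millimeters and $width_{px}$/$height_{px}$ being the screen resolution):

$$x_{mm} = x_{px} \cdot \frac{width}{width_{px}}, \qquad y_{mm} = y_{px} \cdot \frac{height}{height_{px}}$$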

As many of you may know, Android has supported native printing since Android 4.4. This means there is a new API handling the communication between the application from which the user prints and the application that later sends the job to the printer.


So how does it work? First, let’s have a look at the applications from which the user prints.

The main responsibility of these applications is to prepare the output for printing in PDF format. This includes, for example, pagination or rendering the image in landscape or portrait mode.

The application from which the user prints then uses the system PrintManager service.

PrintManager printManager = (PrintManager) getSystemService(Context.PRINT_SERVICE);

The document output is prepared with a PrintDocumentAdapter, which is passed as the second parameter of PrintManager’s print() function.

printManager.print(jobName, new PrintDocumentAdapter(...), printAttributes);
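A minimal sketch of such an adapter might look like the following (OurDocumentAdapter is our name, and the actual PDF rendering in onWrite() is elided; PrintedPdfDocument is one way to do it):

class OurDocumentAdapter extends PrintDocumentAdapter {

    @Override
    public void onLayout(PrintAttributes oldAttributes, PrintAttributes newAttributes,
                         CancellationSignal cancellationSignal, LayoutResultCallback callback,
                         Bundle extras) {
        if (cancellationSignal.isCanceled()) {
            callback.onLayoutCancelled();
            return;
        }
        // Describe the document to the print framework
        PrintDocumentInfo info = new PrintDocumentInfo.Builder("our_document.pdf")
                .setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT)
                .setPageCount(PrintDocumentInfo.PAGE_COUNT_UNKNOWN)
                .build();
        callback.onLayoutFinished(info, !newAttributes.equals(oldAttributes));
    }

    @Override
    public void onWrite(PageRange[] pages, ParcelFileDescriptor destination,
                        CancellationSignal cancellationSignal, WriteResultCallback callback) {
        // Render the requested pages into 'destination' as PDF here,
        // then report which pages were actually written
        callback.onWriteFinished(new PageRange[]{PageRange.ALL_PAGES});
    }
}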

Now we move on to the second part of printing a job, where we have to discover printers and send them our job. This is the responsibility of a PrintService.

Printer discovery

We can either add printers manually by setting their IP address and port, or we can look for network printers on the local network.

Let’s have a look at how to find printers which support Zeroconf discovery on the local network. Implementations of Zeroconf include, for example, the Avahi daemon and Bonjour.

When printer discovery in Android is started, the onCreatePrinterDiscoverySession() method of the PrintService is called. Here we have to create our PrinterDiscoverySession (see the sketch after the following list).

The responsibilities of a PrinterDiscoverySession are pretty straightforward:

  • find and add discovered printers
  • remove previously added printers that disappeared
  • update already added printers
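Creating the session itself is then a one-liner in our PrintService (OurPrinterDiscoverySession being our hypothetical subclass of PrinterDiscoverySession that implements the duties above):

@Override
protected PrinterDiscoverySession onCreatePrinterDiscoverySession() {
    // Called by the system when the user opens the print dialog
    return new OurPrinterDiscoverySession(this);
}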

In this example we will use NsdManager, the native Android solution for finding Zeroconf services. Its functionality is quite limited, but for the purpose of this demo it is satisfactory. Better solutions exist, for example JmDNS. A current limitation of NsdManager is that it cannot load the TXT records of mDNS messages.

In order to use NsdManager, we have to implement two interfaces: DiscoveryListener (which handles discovery callbacks) and ResolveListener (which handles resolving callbacks). I will call them OurDiscoveryListener and OurResolveListener.

First, in the onStartPrinterDiscovery() method, we create a new instance of the DiscoveryListener and start the actual discovery.

discoveryListener = new OurDiscoveryListener(nsdManager);
nsdManager.discoverServices(SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD, discoveryListener);

This is pretty self-explanatory. We specify the discovery service type, which is either “_ipps._tcp.” or “_ipp._tcp.”, depending on whether or not we want the IPP messages to be encrypted.

When a service is found, OurDiscoveryListener handles what happens in the individual states of discovery. In the following code we can see that, for example, when a service is found, we try to resolve it with NsdManager.

public void onServiceFound(NsdServiceInfo service) {
    nsdManager.resolveService(service, new OurResolveListener());
}

Resolving a service means that we try to get more information about it; this includes the host IP address and port. OurResolveListener then handles what should happen when resolving succeeds or fails. When resolving succeeds, we process the obtained data and save it for future use.
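A minimal sketch of that listener (the savePrinter() helper is hypothetical):

class OurResolveListener implements NsdManager.ResolveListener {

    @Override
    public void onServiceResolved(NsdServiceInfo serviceInfo) {
        // The resolved service info now contains the host address and port
        InetAddress host = serviceInfo.getHost();
        int port = serviceInfo.getPort();
        savePrinter(serviceInfo.getServiceName(), host, port); // hypothetical helper
    }

    @Override
    public void onResolveFailed(NsdServiceInfo serviceInfo, int errorCode) {
        // E.g. retry on NsdManager.FAILURE_ALREADY_ACTIVE, otherwise give up
    }
}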

The last part of printer discovery is finding more details about the selected printer and checking whether it is still available. This is handled in the onStartPrinterStateTracking() method.

Discovering details about the printer can be done, for example, with the IPP operation Get-Printer-Attributes, setting the printer information according to the received data. The method’s second function is to keep tracking the printer state.

The following code sample just shows how to set a few printer capabilities, which should be set according to the printer attributes. It doesn’t contain the tracking of the printer state.

@Override
public void onStartPrinterStateTracking(PrinterId printerId) {
    // check for info we found when printer was discovered
    PrinterInfo printerInfo = findPrinterInfo(printerId);

    if (printerInfo != null) {
        PrinterCapabilitiesInfo capabilities = new PrinterCapabilitiesInfo.Builder(printerId)
                .setMinMargins(new PrintAttributes.Margins(200, 200, 200, 200))
                .addMediaSize(PrintAttributes.MediaSize.ISO_A4, true)
                .addMediaSize(PrintAttributes.MediaSize.ISO_A5, false)
                .addResolution(new PrintAttributes.Resolution("R1", "200x200", 200, 200), false)
                .addResolution(new PrintAttributes.Resolution("R2", "200x300", 200, 300), true)
                .setColorModes(PrintAttributes.COLOR_MODE_COLOR |
                        PrintAttributes.COLOR_MODE_MONOCHROME, PrintAttributes.COLOR_MODE_COLOR)
                .build();

        printerInfo = new PrinterInfo.Builder(printerInfo)
                .setCapabilities(capabilities).build();

        // We add printer info to system service
        List<PrinterInfo> printers = new ArrayList<>();
        printers.add(printerInfo);
        addPrinters(printers);
    }
}

When a different printer is selected, onStopPrinterStateTracking() is called for the old printer, and onStartPrinterStateTracking() is called again for the new one.

 

Printing

Android itself doesn’t contain an implementation of any printing protocol. Because of this, I created a small IPP parser. But that’s a topic for another day.

Here I will only show an example of handling a queued print job.

In the following code, we pick a job by its ID from the saved processed jobs and set its state to printing. The PrintTask class in the following example is just an Android AsyncTask which, in the background, creates the IPP request and appends the job data.

public void handleQueuedPrintJob(PrintJobId printJobId, PrinterId printerId) {
    final PrintJob printJob = mProcessedPrintJobs.get(printJobId);
    if (printJob == null) {
        return;
    }

    if (printJob.isQueued()) {
        printJob.start();
    }

    OurPrinter ourPrinter = ourDiscoverySession.getPrinter(printerId);

    if (ourPrinter != null) {
        AsyncTask<Void, Void, Void> printTask =
                new PrintTask(printJob, ourPrinter);
        printTask.execute();
    }
}

In case we have decided to use IPPS, we also have to set the correct certificate. The next step is to create a new HttpURLConnection (or HttpsURLConnection for secure transfer).

The last thing we have to do is write our IPP message into the output stream, send it, and wait for the response from the server.
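A rough sketch of this last step (printerHost, printerPort, ippRequestBytes and documentData are placeholders for values we already have at this point):

// send the encoded IPP request over HTTP
URL url = new URL("http://" + printerHost + ":" + printerPort + "/ipp/print");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.setRequestProperty("Content-Type", "application/ipp");

try (OutputStream out = connection.getOutputStream()) {
    out.write(ippRequestBytes); // the IPP operation and its attributes
    out.write(documentData);    // the job data (e.g. the PDF) follows
}

int responseCode = connection.getResponseCode(); // wait for the printer’s answer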

Android manifest file

We have to set the necessary permissions in the Android manifest file in order to be able to run the PrintService.
Add android.permission.BIND_PRINT_SERVICE when declaring the service. Example:

...
<service android:name=".OurPrintService" 
    android:permission="android.permission.BIND_PRINT_SERVICE">

    <intent-filter>
        <action android:name="android.printservice.PrintService" />
    </intent-filter>
</service>
...

This allows the system to bind to the PrintService. Otherwise, the service wouldn’t be shown in the system.

Also, the following permissions are needed for printer discovery and for accessing files in external storage:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.START_PRINT_SERVICE_CONFIG_ACTIVITY" />


We use Liquibase in our project as a DB change management tool. We use it to create our DB schema with the basic configuration our application needs to run.

This is, however, not enough for development or (unit) testing. Why? Because for each test case, we need the data in the database to be in a particular state. E.g. I need to test that the system rejects the action of a user that does not have the required quota for the action he/she wants to perform. So I create a user with a zero quota, then try to perform the action and see whether the system allows it or rejects it. To avoid setting up our test data repeatedly, we use special Liquibase scripts that set up what we need in our test environment (such as a user with a zero quota), so that we do not have to do this manually.

For the Payment System project, we used Liquibase to run plain SQL scripts to insert the data that we needed. It worked well enough, but this approach has some disadvantages.

Mainly, the person writing the scripts has to have knowledge of our database and the relations between various tables, so there is a steep learning curve. Another issue is that all of the scripts have to be updated when a larger DB schema change takes place.

Therefore, during our implementation of the Quota System, I took inspiration from the work of my colleague, who used a kind of high-level DSL as a convenient way to set up the Quota System, and I turned it into a production-ready feature on top of Liquibase. This solved the problem of manual execution (the scripts always run during application startup and are guaranteed to run exactly once).

For the DSL, I chose Groovy, since we already use it for our tests, and there is no interoperability issue with the Java-based Liquibase.

Liquibase has many extension points and implementing a custom changelog parser seemed the way to go.

The default parser for XML parses the input file, generates internal Liquibase objects representing particular changes, and then transforms them to SQL, which is executed on the DB.

I created a similar parser, which was registered to accept Groovy scripts.

The parser executes the script file, which creates internal Liquibase objects; of course, one DSL keyword can create multiple insert statements. What is more, when the DB schema changes, the only place that needs to change is the DSL implementation, not all of the already created scripts.
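A skeleton of such a parser might look like this (a sketch against the Liquibase 3.x ChangeLogParser interface; the execution of the Groovy script itself is elided):

public class GroovyChangeLogParser implements ChangeLogParser {

    @Override
    public boolean supports(String changeLogFile, ResourceAccessor resourceAccessor) {
        // claim only our Groovy test data scripts
        return changeLogFile.endsWith(".groovy");
    }

    @Override
    public int getPriority() {
        return 1; // standard priority
    }

    @Override
    public DatabaseChangeLog parse(String physicalChangeLogLocation,
                                   ChangeLogParameters changeLogParameters,
                                   ResourceAccessor resourceAccessor) throws ChangeLogParseException {
        DatabaseChangeLog changeLog = new DatabaseChangeLog(physicalChangeLogLocation);
        // Execute the Groovy script here; it fills variables such as
        // groovyDSL_liquibaseChanges (see below), which we then wrap in
        // change sets and append to 'changeLog'.
        return changeLog;
    }
}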

Example Groovy script:

importFile('changelog.xml')

bw = '3' //using guid of identifier which is already present in database
color = identifier name:'color'
print = identifier name:'PRINT', guid:'100' //creating identifier with specific guid

q1 = quota guid:'q1', name:'quota1', limit:50, identifiers:[bw, print], period:weekly('MONDAY')
q2 = quota guid:'q2', name:'quota2', limit:150, identifiers:[color, print], period:monthly(1)

for (i in 1..1000)
    quotaSubject guid: 'guid' + i, name:'John' + i, quotas:[q1,q2]

This example shows that the Groovy script can reference another Liquibase script file – e.g., the default changelog file, which creates the basic DB structure and initial data.

It also shows the programmatic creation of quotaSubjects (accounts in the system), where we can use a normal Groovy for loop to simply create many accounts for load testing.

QuotaSubjects have assigned quotas, which are identified by quota identifiers. These can be either created automatically, or we can reference already existing ones.

The keywords identifier, quota, quotaSubject, weekly, and monthly are just normal Groovy functions that take a Map as an argument, which allows us to pass them named parameters.

Before execution, the script is concatenated with the main DSL script, where the keywords are defined.

Part of the main DSL script that processes identifiers:

QuotaIdentifier identifier(Map args) {
    assert args.name, 'QuotaIdentifier name is not specified, available params are: ' + args
    String guid = args.guid ? args.guid : nextGuid()

    addInsertChange('QUOTA_IDENTIFIER', columns('GUID', guid) << column('NAME', args.name) << column('STATUS', 'ENABLED'))
    new QuotaIdentifier(guid)
}

private def addInsertChange(String tableName, List<ColumnConfig> columns) {
    InsertDataChange change = new InsertDataChange()
    change.setTableName(tableName)
    change.setColumns(columns)

    groovyDSL_liquibaseChanges << change
}

The calls produce Liquibase objects, which are appended to variables accessible within the Groovy script. The content of the variables constitutes the final Liquibase changelog, which is created after the processing of the script is done.

This way, the test data file is simple to read, write, and maintain. A small change in the changelog parser also allowed us to embed the test data scripts in our Spock test specifications so that we can see the test execution logic and test data next to each other.

@Transactional @ContextConfiguration(loader = YSoftAnnotationConfigContextLoader.class)
class QuotaSubjectAdministrationServiceTestSpec extends UberSpecification {

    // ~ test configuration overrides ==================================================================

    @Configuration @ImportResource("/testApplicationContext.xml")
    static class OverridingContext {

        @Bean
        String testDataScript() {
            """groovy: importFile('changelog.xml')

                       bw = identifier name:'bw'
                       color = identifier name:'color'
                       print = identifier name:'print'
                       a4 = identifier name:'a4'

                       p_a4_c = quota guid:'p_a4_c', name:'p_a4_c', limit:10, identifiers:[print, a4, color], period:weekly('MONDAY')
                       p_a4_bw = quota guid:'p_a4_bw', name:'p_a4_bw', limit:20, identifiers:[print, a4, bw], period:weekly('MONDAY')
                       p_a4 = quota guid:'p_a4', name:'p_a4', limit:30, identifiers:[print, a4], period:weekly('MONDAY')
                       p = quota guid:'p', name:'p', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q1 = quota guid:'q1', name:'q1', limit:40, identifiers:[print], period:weekly('MONDAY')   
                       q2 = quota guid:'q2', name:'q2', limit:40, identifiers:[print], period:weekly('MONDAY')
                       q_to_delete = quota guid:'q_to_delete', name:'q_to_delete', limit:40, identifiers:[print], period:weekly('MONDAY')

                       quotaSubject guid: 'user1', name:'user1', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user2', name:'user2', quotas:[p_a4_c, p_a4_bw, p_a4]
                       quotaSubject guid: 'user3', name:'user3', quotas:[p_a4_c, p_a4_bw, p_a4, p]
                       quotaSubject guid: 'user4', name:'user4', quotas:[p_a4_c]"""
        }
    }

    // ~ instance fields ===============================================================================

    @Autowired private QuotaSubjectAdministrationService quotaSubjectAdministrationService;
    @Autowired private QuotaAdministrationService quotaAdministrationService;

    // ~ findByGuid ====================================================================================

    def "findByGuid should throw IllegalInputException for null guid"() {
        when:
        quotaSubjectAdministrationService.findByGuid(null)

        then:
        thrown(IllegalInputException)
    }

Yes, you heard right! Developer testing. It means testing done by developers! And yes, I’m talking about confirmation testing, which is known as “The changelog” in our R&D department. The result: an improvement from 50 % to 95 % of tickets closed at the end of a sprint, and all sprint goals completed on time 4 sprints in a row! [1]

Percentage of work planned and really done over sprints

Our development team has recently been trying very hard to shorten the development cycle of features and fixes. One of the biggest delays we identified was caused by tickets waiting in the “To Test” state. It means that the implementation has been completed and is waiting for a QA engineer to confirm its functionality by testing it. As I was the only tester for 7 developers on the team, tickets with lower severity simply had to wait, often for more than a week. Moreover, the testing activities were concentrated at the end of a sprint. A ticket reopened too late can easily cause a failure to reach the team’s sprint goals.

Generally, the literature strongly recommends not letting developers test their own work. The common reasoning is that everybody is in love with their creations and would not like to damage them. Also, some defects are approach-related and thus cannot be discovered by the same person using the same approach. Moreover, test planning, test case writing and test data definition need special skills, which developers generally do not possess. Our mindset was changed by our CTO, who saw this as an opportunity to improve the efficiency of development.

In our team, we kept all of the aforementioned risks in mind and tailored the process to negate them. We tried several versions of the process. Within a short time we found the most important thing: we need to differentiate between tasks (new development) and defects (bug fixes). You’ll see why later.

Generally, it is much easier to write test cases for known functionality. This is the case for defects, which only fix or slightly modify an already tested feature. Experience with an existing (sub-)system, where a feature is only updated and the testing approaches are well known, helps the QA engineer define a set of all necessary test cases, edge-case scenarios and expected visual aspects. Therefore, based on the well-defined test cases, a developer should be able to develop the fix and test it in a way which ensures it meets the quality requirements. Later, a QA engineer interviews the developer to find out their confidence level about the testing and also asks several direct questions about the most critical areas. Based on this information and on experience with the particular developer [2], the QA engineer then decides which defects need to be retested and which can be closed right away.

On the other hand, tasks usually represent the development of new features. Since the people writing code at Y Soft are not “coders” but developers, they have to propose solutions for many low-level issues throughout the development process. Therefore, it frequently happens that some aspects and limitations are discovered later in the sprint, making it very challenging for a QA engineer to define a complete set of test cases in advance. Also, without proper hands-on experience with the final work, it is very difficult to define requirements for visual aspects and to judge user friendliness. Therefore, tasks always have to be retested by a QA engineer.

Nevertheless, defining at least some of these tests in advance brings certain benefits, which also apply to defects:

  • A QA engineer can discover inconsistencies between his/her understanding and the developer’s understanding of the work to be done. It is generally better to find them before significant time has been spent on developing something undesired and reworking it later.
  • A partial test suite can be very helpful to the developer during the development, as it can be used as a checklist to cover many possible scenarios.
  • Some other scenarios can be discovered during the development and can be added to the test suite. These test cases would otherwise probably not exist.
  • As the developer performs the tests himself, many issues are found and fixed in a much shorter time and with less effort than when they are found and reported back by the QA department (several days later). This way we can assure a higher quality of the developed product already in the development phase.
  • Based on the current state of work and human resources, the team can flexibly agree on the extent to which developers will test their work. Either they do extensive tests to help QA engineers, or they perform only a basic set of tests in order to move on to the next development task sooner.

These result in:

  • shorter development cycles (Open to Closed status)
  • fewer reopened tickets
  • better understanding of the whole solution for all members of the team

The process itself consists of the following steps:

  1. When the sprint backlog is defined:
    1. A QA Engineer creates a set of test cases for each of the tickets (pre-)planned for the sprint;
    2. The test cases are defined in a subtask of each ticket. The subtasks are named “Testing of [ticketID]”;
  2. At the sprint planning meeting:
    1. The QA engineer consults on the technical details of the solution and proposed tests with other team members and the current product manager;
    2. The effort for each ticket is estimated including the testing part;
  3. Developers have to test their work and switch the ticket to status “To Test”:
    1. All defects are tested in a standardized testing environment (ideally prepared by a QA engineer);
    2. Tasks can be tested in a development environment (e.g. running on a developer’s machine, built by a development tool);
  4. When a ticket has the “To Test” status, the QA engineer:
    1. Evaluates which defects need to be tested again and which do not;
    2. Retests all tasks;

It is important to note that the aforementioned benefits are only subjectively observed by the members of the team, as none of them has been measured in any systematic way. Doing that would require returning to the old way of working in order to make the necessary measurements. Since the members of the team are satisfied with the new process, there is no need or motivation to revert to the old method. A change from less than 50 % to 95 % of closed tickets between the end of Sprint 17 and the end of Sprint 18, and the 100 % fulfillment of sprint goals in the last four sprints, presents a sufficiently strong argument to try this process in other teams.

[1] The orange line represents the percentage of developers’ time that was planned for a sprint. The time is estimated for each ticket. The green line represents the percentage of that planned estimated time for which the tickets were closed at the end of the sprint. The first attempt to use developer testing was in Sprint 18. In Sprint 22, we resumed the process. The trend of about 70 % of work planned and more than 90 % finished remains to date.

[2] In order to gain experience with the developers, there has to be a period of several sprints where every defect is retested. The QA engineer needs to measure the ratio of closed and reopened defects per developer. During this period, the QA engineer can also find out whether he/she is able to define all the necessary test cases beforehand.

I like builders. If you’ve ever seen a constructor with ten parameters, eight of which can be null, you probably like builders, too. While this pattern is quite verbose, it is elegant.

After doing some work with builders, I found myself wondering: why are getters ever used in these? From the Single Responsibility Principle point of view, a builder’s only purpose is to ease the creation of a complex object.

Giving access to its data is definitely not the purpose of a builder. It would allow the builder to be passed around instead of the (immutable) object itself. However, until built, the builder is still invalid and incomplete, and definitely not a data object.

I came up with a custom pattern of my own: the builder is an inner class, and the object has a single constructor that takes the builder as its only parameter. Judging by my experience so far, it was a good decision. It also goes well with my no-getters approach.

Using a simple immutable class Product, the object-builder duo can look like this:

class Product {
    private final int price;
    private final List<Photo> photos;

    private Product(final Builder builder) {
        this.price = builder.price;
        this.photos = builder.photos;
    }

    public Builder toBuilder() {
        return new Builder(this);
    }

    // equals, hashCode, toString, getters

    public static class Builder {
        private int price;
        private List<Photo> photos;

        public Builder() {
        }

        private Builder(final Product entity) {
            this.price = entity.price;
            this.photos = entity.photos;
        }

        public Builder withPrice(final int price) {
            this.price = price;
            return this;
        }

        public Builder withPhotos(final Collection<Photo> photos) {
            this.photos = Collections.unmodifiableList(
                    new ArrayList<>(photos));
            return this;
        }

        public Product build() {
            this.validate();
            return new Product(this);
        }

        private void validate() {
            if (photos == null) {
                throw new IllegalStateException("photos is null");
            }
        }
    }
}

As you can see, the object’s constructor is private, as it is only called from the inner class. Another nice thing about it is the fact that it only has one parameter – the builder.

To ensure the list is immutable, a defensive copy is needed. It is usually done in the constructor of the object. I chose the setter instead, as it is the only entry point where a mutable list can come from. Also, if you use the Builder to change a Product’s price, you can reuse the photos without copying.
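For illustration, building a Product and then deriving a modified copy might look like this (using the toBuilder() method from the example above; somePhotos stands for an existing photo collection):

Product phone = new Product.Builder()
        .withPrice(100)
        .withPhotos(somePhotos)
        .build();

// a cheaper variant; the immutable photo list is reused, not copied
Product discounted = phone.toBuilder()
        .withPrice(80)
        .build();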

The validate method throws an IllegalStateException whenever anything makes it impossible to build the object correctly (your conventions may differ). I used a simple if/throw in the example; however, a utility class can shorten it significantly.

Now, imagine you had an (abstract) superclass for your Product object. I’ve read that you need getters in this case, as you cannot access the (private) fields of the superclass. That would be true if you had to pass the individual parameters to the constructor. But if you pass the builder instead, it is actually easy without them:

class Product extends NamedItem { ...
    private Product(Builder builder) {
        super(builder);
        ...
    }
    ...
    public static class Builder extends NamedItem.Builder {
        ...
        protected void validate() {
            super.validate();
            ...
        }
    }
}

No getters are needed either. So I still haven’t encountered a builder problem that actually requires getters. Have you?

SonarQube is a great tool for monitoring the quality of source code.

Gradle projects can be easily integrated with SonarQube using the Sonar Runner plugin.

You can start the analysis with the command:

gradle sonarRunner

This works perfectly on the command line, but execution by the Bamboo CI server terminates with the following error:

ERROR: Error during Sonar runner execution
ERROR: Unable to execute Sonar
ERROR: Caused by: Missing commit 4a3e...237f

The problem is that Bamboo creates shallow git clones by default. You can find this option under the Repositories tab – Advanced options.

Solution: disable the “Use shallow clones” option and start the job again.

Correct configuration:

[Screenshot: repository settings with “Use shallow clones” disabled]

Result: a successful build 🙂

A major feature of Gradle is its extensibility. A developer can store common logic in a custom task class.

class GreetingTask extends DefaultTask {
    String greeting = 'hello from Y Soft'

    @TaskAction
    def greet() {
        println greeting
    }
}

task hello(type: GreetingTask)

// Customize the greeting
task greeting(type: GreetingTask) {
    greeting = 'greetings from Brno'
}

This is not a very flexible approach, though. All the classes and the build logic are stored together in one build.gradle file.

It’s possible to move the classes into separate Groovy files under buildSrc. Here is a description of the transformation process.

Step 1. Create the directory buildSrc/src/main/groovy/PACKAGE, e.g. buildSrc/src/main/groovy/com/ysoft/greeting.

Step 2. Move the custom class from build.gradle to GreetingTask.groovy in the buildSrc/…/greeting directory.

Step 3. Add the package declaration and imports from the Gradle API to GreetingTask.groovy.

package com.ysoft.greeting

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class GreetingTask extends DefaultTask {
    String greeting = 'hello from Y Soft'

    @TaskAction
    def greet() {
        println greeting
    }
}

Step 4. Update build.gradle: apply the groovy plugin and import the custom class.

apply plugin: 'groovy'

import com.ysoft.greeting.GreetingTask

task hello(type: GreetingTask)

task greeting(type: GreetingTask) {
    greeting = 'greetings from Brno'
}

Alternatively, you can use the fully qualified class name when specifying the task type. In that case, you can omit the import.

Step 5. Type ‘gradle tasks’ and enjoy.

You can find examples of custom tasks on our GitHub.