Blog from August, 2016

Apart from the command line, it's perfectly possible to run the ConsoleRunner from inside a unit test.  This means you could have different JUnit tests (or tests in some other framework) that run different Cucumber features in parallel.

You can configure all settings and pass in system properties.  Inside your unit test you can only assert on the exit code of the ConsoleRunner (only 0 means success), but all logs and reports will still be written to the configured output directory.  Just make sure you configure each instance to log to a different directory.

This can be an alternative approach to Testing Spring Boot microservices using Cucumber and CWB REST that still lets you take advantage of the parallel execution and distribution of the ConsoleRunner.  Another great advantage of this approach is that if you start your Spring Boot application from within the unit test, code coverage should work out of the box.

Example JUnit tests launching ConsoleRunner
import java.util.Properties;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class ITRunFeatures {

    ...

    @Test
    public void desktopFeatures() throws Exception {
        Properties systemProperties = new Properties();
        systemProperties.setProperty("webServerPort", webServerPort.toString());

        ConsoleRunner consoleRunner = new ConsoleRunner("target/cucumber/desktop", "src/test/resources/features/");
        consoleRunner.setProperties(systemProperties);
        consoleRunner.setTags(new String[]{"~@ignore", "~@mobile"});
        consoleRunner.setNumberOfThreads(5);

        assertEquals(0, consoleRunner.start());
    }

    @Test
    public void mobileFeatures() throws Exception {
        Properties systemProperties = new Properties();
        systemProperties.setProperty("webServerPort", webServerPort.toString());

        ConsoleRunner consoleRunner = new ConsoleRunner("target/cucumber/mobile", "src/test/resources/features/");
        consoleRunner.setProperties(systemProperties);
        consoleRunner.setTags(new String[]{"~@ignore", "@mobile"});
        consoleRunner.setNumberOfThreads(5);

        assertEquals(0, consoleRunner.start());
    }
}


Sometimes groups of features need to run sequentially, while features within a group can still run in parallel.  Until now this was mostly achieved by making separate calls to the ConsoleRunner (e.g. multiple invocations through Maven), but that creates some additional process overhead.

We've added a folders option to the ConsoleRunner command-line arguments.  If that option is specified, the initial directory is treated as a base path, and the folders option should contain a comma-separated list of subdirectories of that base path.  Each subdirectory is treated as a root of features, and a separate ConsoleRunner execution is started for each subdirectory, in the order specified.  The next execution starts only if the previous one succeeded (no failed scenarios).  In the configured output directory, a subdirectory with the same name is created for each execution; that subdirectory contains all reports for that execution.

Specifying the option with an empty string or a . (dot) as its value is the same as not specifying it at all.

This reduces both process and configuration overhead.
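Conceptually, the folders behaviour amounts to the loop below.  This is a plain-Java sketch, not the actual CWB implementation: the runGroup callback stands in for a full ConsoleRunner execution, and all names are illustrative.

```java
import java.util.function.ToIntFunction;

public class SequentialGroups {

    // Runs each subdirectory group in order; a group only starts if every
    // previous group finished with exit code 0 (no failed scenarios).
    // Returns the exit code of the last group that ran.
    static int runGroups(String basePath, String folders,
                         ToIntFunction<String> runGroup) {
        for (String group : folders.split(",")) {
            int exitCode = runGroup.applyAsInt(basePath + "/" + group.trim());
            if (exitCode != 0) {
                return exitCode; // stop here: later groups never start
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // Simulated runs: batch-one succeeds, batch-two fails,
        // so batch-three is never executed.
        int result = runGroups("/myfeatures", "batch-one,batch-two,batch-three",
                path -> {
                    System.out.println("Executing features in " + path);
                    return path.endsWith("batch-two") ? 1 : 0;
                });
        System.out.println("Overall exit code: " + result);
    }
}
```

The same stop-on-first-failure rule applies as when you chain Maven invocations by hand, but without the extra process startup per group.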

Example scenario

Assume the following command-line execution:

... --folders "batch-one,batch-two" /myfeatures

This first executes all features in folder /myfeatures/batch-one and, if they all succeed, switches to the features in /myfeatures/batch-two.

The console output might look something like the following:


main:
     [java] Starting CWB Console runner v1.2.0 for subfolder: batch-one
     [java] Location: /myfeatures/batch-one
     [java] --------------------------
     [java] Using manually configured classpath for batches (5523 characters)
     [java] --------------------------
     [java]
     [java] Waiting 1000 ms between process launches.
     [java] Executing tests in 3 threads with 1 retries.
     [java] Execution will fail if more than 10 scenarios need to be retried!
     [java] Threads will be forcefully killed if they run longer than 10000 ms.
     [java]
     [java] Executing batch: initial/batch-1 - 5 scenarios
     [java] Executing batch: initial/batch-2 - 4 scenarios
     [java] Batch finished: initial/batch-1 - 0 failed [0:00:01.581]
     [java] Executing batch: initial/batch-3 - 4 scenarios
     [java] Batch finished: initial/batch-2 - 0 failed [0:00:01.670]
     [java] Batch finished: initial/batch-3 - 0 failed [0:00:01.613]
     [java] 13 scenarios executed - 0 failed [0:00:03.625]
     [java]
     [java] Generating pretty HTML reports
     [java] HTML reports generated [0:00:00.723]
     [java]
     [java]
     [java] Starting CWB Console runner v1.2.0 for subfolder: batch-two
     [java] Location: /myfeatures/batch-two
     [java] --------------------------
     [java] Using manually configured classpath for batches (5523 characters)
     [java] --------------------------
     [java]
     [java] Waiting 1000 ms between process launches.
     [java] Executing tests in 3 threads with 1 retries.
     [java] Execution will fail if more than 10 scenarios need to be retried!
     [java] Threads will be forcefully killed if they run longer than 10000 ms.
     [java]
     [java] Executing batch: initial/batch-1 - 3 scenarios
     [java] Executing batch: initial/batch-2 - 2 scenarios
     [java] Batch finished: initial/batch-1 - 0 failed [0:00:01.409]
     [java] Executing batch: initial/batch-3 - 2 scenarios
     [java] Batch finished: initial/batch-2 - 0 failed [0:00:01.506]
     [java] Batch finished: initial/batch-3 - 0 failed [0:00:01.437]
     [java] 7 scenarios executed - 0 failed [0:00:03.441]
     [java]
     [java] Generating pretty HTML reports
     [java] HTML reports generated [0:00:00.302]
     [java]
     [java]


Test-driving the new option

The new option is available in 1.2.0-SNAPSHOT of cwb-core, which should be fully backwards compatible with 1.1.0.RELEASE.
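If you use Maven, pulling in the snapshot could look something like the fragment below.  The groupId here is illustrative only; use the actual CWB coordinates your project already depends on, and note that snapshot versions require a snapshot repository to be configured.

```xml
<dependency>
    <!-- groupId is illustrative; use the real CWB group id -->
    <groupId>com.example.cwb</groupId>
    <artifactId>cwb-core</artifactId>
    <version>1.2.0-SNAPSHOT</version>
</dependency>
```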

Remember: just as with @keepTogether, only enforce ordering when there's no way around it, as it will inevitably make your test suite run slower.  Fewer dependencies between tests means better parallelization.