Scaling test execution using the ConsoleRunner

The most up-to-date information about the ConsoleRunner can be found in the reference documentation.


With the latest update of Cucumber Web Bridge we added the ConsoleRunner class.  It is meant to execute a batch of tests from the command line and offers an alternative to cucumber.api.cli.Main for running Cucumber features.

Compared to the standard Cucumber Main class, it offers the following advantages:

  • parallel execution of features in separate processes
  • automatic retry of failed scenarios
  • default logging and html report generation

The ConsoleRunner is available in the CWB 1.1-SNAPSHOT version.

Getting started

We usually configure running the features from Maven.  The simplest pom xml might look like this:

Minimal configuration of ConsoleRunner in Maven pom
<build>
   <plugins>
      <plugin>
         <artifactId>maven-antrun-plugin</artifactId>
         <version>1.7</version>
         <executions>
            <execution>
               <phase>integration-test</phase>
               <configuration>
                  <target>
                     <java classname="com.foreach.cuke.core.cli.ConsoleRunner" fork="yes" failonerror="true">
                        <classpath refid="maven.test.classpath"/>
                        <sysproperty key="file.encoding" value="UTF-8"/>
                        <arg line="--output ${project.build.directory}/cucumber-results"/>
                        <arg line="--threads 3"/>
                        <arg line="--retries 1"/>
                        <arg line="--glue com.foreach.cuke"/>
                        <arg line="--tags ~@ignore"/>
                        <arg line="${project.basedir}/src/test/resources/features"/>
                     </java>
                  </target>
               </configuration>
               <goals>
                  <goal>run</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

The base format is:

java com.foreach.cuke.core.cli.ConsoleRunner [arguments] <directory>

The single <directory> contains all feature files that should be inspected for execution.  That directory and all of its sub-directories will be scanned for .feature files.

The arguments in the example above are:

  • output: directory where all logs and reports should be written (see below)
  • threads: number of processes that should be used for parallel execution
  • retries: number of times failing tests should be retried
  • glue: package containing all steps (more than one --glue argument is allowed)
  • tags: tags that the scenarios should match (more than one --tags argument is allowed)

More arguments are available to fine-tune the execution; for a full list, please refer to the documentation or execute the ConsoleRunner without any arguments.

Parallel execution

The ConsoleRunner allows you to run tests in parallel.  This is done by starting a separate Java process for every batch of tests you want executed.  Upon start the ConsoleRunner will scan for all scenarios that need running and divide them into batches of approximately the same size.  For every batch a separate process will be launched.  The maximum number of processes is controlled by the threads argument.
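The division step can be sketched as follows.  This is a minimal illustration, assuming a round-robin distribution; the BatchSplitter class and its strategy are my own simplification, not the actual CWB implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: divide a list of scenario identifiers into at most
// maxBatches batches of approximately equal size, using round-robin
// assignment so batch sizes differ by at most one scenario.
public class BatchSplitter {
    public static List<List<String>> split(List<String> scenarios, int maxBatches) {
        // never create more batches than there are scenarios
        int batchCount = Math.min(maxBatches, Math.max(1, scenarios.size()));
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < batchCount; i++) {
            batches.add(new ArrayList<>());
        }
        for (int i = 0; i < scenarios.size(); i++) {
            batches.get(i % batchCount).add(scenarios.get(i));
        }
        return batches;
    }
}
```

With 5 scenarios and --threads 3, this yields three batches of sizes 2, 2 and 1, each of which would get its own Java process.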

Running tests in parallel can save a lot of execution time, at the cost of system resources.  Because every batch is a separate process, there is also some overhead involved, so parallelism is only worth it for longer-running tests (taking several seconds per scenario).

Every batch process inherits the classpath of the original ConsoleRunner, but only receives the system properties that were explicitly passed as -P <propertyname>=<propertyvalue> arguments to the ConsoleRunner.

Note that properties for the features must be passed in a different fashion when using the ConsoleRunner (as -P arguments) than with the default Cucumber Main (as system properties).
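The forwarding of -P arguments could look like the following sketch; PropertyArgs is a hypothetical helper for illustration, not actual CWB code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative: convert "-P key=value" console arguments into "-Dkey=value"
// JVM flags for the child batch process command line.
public class PropertyArgs {
    public static List<String> toJvmFlags(List<String> args) {
        List<String> flags = new ArrayList<>();
        for (int i = 0; i < args.size() - 1; i++) {
            if ("-P".equals(args.get(i))) {
                // e.g. "-P browser=firefox" becomes "-Dbrowser=firefox"
                flags.add("-D" + args.get(i + 1));
            }
        }
        return flags;
    }
}
```

Any system property set on the ConsoleRunner JVM itself but not repeated as a -P argument would simply never reach the batch processes.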

Controlling scenario distribution

When dividing scenarios into batches the ConsoleRunner does so aggressively: every Scenario or Scenario Outline example counts as a separate item and might end up in a different batch.  There is no guarantee whatsoever about the order of execution, and if scenarios are not completely independent this will cause problems.

Keeping scenarios together

Even though I advise against it, it can sometimes be handy to have scenarios depend on each other.  Especially when transitioning from sequential execution to parallel execution you might end up with unforeseen problems.  You can tell the ConsoleRunner not to split scenarios by using the @keepTogether tag.

If you use it on a Scenario Outline, the examples of that outline will be executed by the same process, in their respective order.  If you use it on a Feature, all scenarios in that feature will be executed in sequence within the same process.

Be aware that tag filtering or scenario retries might exclude some scenarios or examples from running; however, the order within the feature will always be respected if @keepTogether is present.

Make sure your scenarios run completely independent of each other. Run them frequently using the ConsoleRunner and vary with the number of threads to detect possible unwanted inter-dependencies. If you really need to enforce scenario order, you can use @keepTogether on Feature or Scenario Outline to do so.
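The effect of @keepTogether on batching can be sketched as follows.  The Scenario holder and the notion of a "unit" are my own assumptions for illustration; the real CWB model will differ:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: scenarios whose feature carries @keepTogether form a
// single indivisible unit, so the batch splitter can never separate them.
public class KeepTogetherGrouper {
    public static class Scenario {
        public final String feature;
        public final String name;
        public final boolean keepTogether;
        public Scenario(String feature, String name, boolean keepTogether) {
            this.feature = feature;
            this.name = name;
            this.keepTogether = keepTogether;
        }
    }

    public static List<List<Scenario>> toUnits(List<Scenario> scenarios) {
        List<List<Scenario>> units = new ArrayList<>();
        Map<String, List<Scenario>> grouped = new LinkedHashMap<>();
        for (Scenario s : scenarios) {
            if (s.keepTogether) {
                // same feature -> same unit, preserving declaration order
                grouped.computeIfAbsent(s.feature, f -> new ArrayList<>()).add(s);
            } else {
                units.add(List.of(s)); // freely distributable: its own unit
            }
        }
        units.addAll(grouped.values());
        return units;
    }
}
```

Each unit, whether a single scenario or a whole @keepTogether feature, is then assigned to a batch as one item.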

Multiple processes and SAHI

You can use parallel execution with SAHI-based tests, but I advise you to use at least SAHI 5.0.  Note that not all configured browsers might support multiple instances on the proxy, and it is usually better to launch the browser instances in private browsing mode.

Process timeout

Sometimes a process can hang for a very long time; we see this happen once in a while, especially with SAHI tests.  In that case the process is considered timed out and is killed explicitly.  By default this happens after 900 seconds, but you can configure the number of seconds with the processTimeout argument.

You should always make sure the processTimeout is longer than your tests are allowed to take under normal circumstances.  Determine a sensible value based on the number of scenarios you have, the number of batches you want to run in parallel, and the total execution time of all tests in sequence.
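The timeout-and-kill pattern can be illustrated with a small sketch.  Here I use a Future with a bounded get to stand in for waiting on the batch; the real runner kills a hung operating-system process rather than cancelling a thread:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative: wait for a batch to finish for at most timeoutSeconds,
// then cancel it (analogous to killing a hung batch process).
public class TimeoutRunner {
    public static boolean runWithTimeout(Runnable batch, long timeoutSeconds) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> future = pool.submit(batch);
        try {
            future.get(timeoutSeconds, TimeUnit.SECONDS);
            return true;                 // batch completed in time
        } catch (TimeoutException e) {
            future.cancel(true);         // batch considered hung: kill it
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;                // batch failed or wait interrupted
        } finally {
            pool.shutdownNow();
        }
    }
}
```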

Batch threshold

Due to the overhead of a separate process, parallel execution is sometimes not worth it if you only have to run a couple of tests.  You can tune this with the batchThreshold argument: no matter how many threads are allowed, parallel execution will only happen if there are at least that many scenarios to run.  The default value for this argument is 1, meaning a separate process might be launched for just a single scenario.

Even if no additional batches are created, the tests always execute in a separate process: under all circumstances the ConsoleRunner will launch at least one additional Java process.
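The threshold rule boils down to something like the following; this is a simplified sketch of the decision, not the actual CWB implementation:

```java
// Illustrative sketch of the batchThreshold rule: below the threshold all
// scenarios go into a single batch process; at or above it, up to `threads`
// batches are created (never more batches than scenarios).
public class BatchThreshold {
    public static int batchCount(int scenarioCount, int threads, int batchThreshold) {
        if (scenarioCount < batchThreshold) {
            return 1; // too few scenarios: no parallel execution
        }
        return Math.max(1, Math.min(threads, scenarioCount));
    }
}
```

For example, with --threads 4 and a batchThreshold of 5, a run of 3 scenarios stays in one process while a run of 10 scenarios is split over 4 processes.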

Automatic retry of failing tests

Sometimes a test fails and then succeeds on a simple rerun; I see this happen especially with SAHI-based tests on complex websites.  It's possible to have the ConsoleRunner automatically retry failing tests any number of times.  After the initial execution, the application will take all failed scenarios and rerun only those.  It will then repeat this process until all tests have succeeded or the maximum number of retries has been reached.  How many times tests can be retried is controlled by the retries argument.

Though a general retry might seem like a good idea, be aware that every failing test will be executed at least the configured number of retries, even if it is not failing due to a random circumstance.  This makes the overall execution time longer.

Scenario retry limit

Automatic retries are only useful if a couple of scenarios suffer from random failures.  To guard against mass failures you can set the scenarioRetryLimit argument: if more than this number of scenarios fail, the ConsoleRunner fails immediately and no retries occur.  The default value is 10.
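Putting retries and the scenarioRetryLimit together, the control flow resembles this sketch.  RetryLoop and the runBatch callback are illustrative assumptions standing in for an actual Cucumber execution:

```java
import java.util.List;
import java.util.function.Function;

// Illustrative retry loop: rerun only the failing scenarios until all pass,
// the retries are exhausted, or more than scenarioRetryLimit scenarios fail.
public class RetryLoop {
    // runBatch takes the scenarios to execute and returns those that failed.
    public static List<String> run(List<String> scenarios,
                                   Function<List<String>, List<String>> runBatch,
                                   int retries, int scenarioRetryLimit) {
        List<String> failed = runBatch.apply(scenarios); // initial execution
        for (int attempt = 0; attempt < retries && !failed.isEmpty(); attempt++) {
            if (failed.size() > scenarioRetryLimit) {
                break; // too many failures: abort, report them as-is
            }
            failed = runBatch.apply(failed); // rerun only the failures
        }
        return failed; // scenarios still failing when the runner finishes
    }
}
```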

Report generation and logs

The ConsoleRunner writes logs and reports to the directory determined by the output argument.  After a full run you would find a structure very similar to the following:

  • /features: a copy of all features considered; used to circumvent limitations Cucumber has with filenames containing whitespace
  • /initial: all logs for the initial execution run
      • /batch-1: all logs for the first batch in the initial execution run
          • _input.txt: input file passed to Cucumber Main
          • _rerun.txt: rerun file generated by Cucumber Main
          • command.txt: command used for launching the separate batch process
          • cucumber-batch.log: System.out and System.err streams of the batch process; this usually contains the logging of the features
          • cucumber-report.json: JSON report for this batch
          • cucumber-report.xml: JUnit XML report for this batch
          • executed.txt: lists all scenarios executed, with reference to the original feature file
          • failed.txt: lists all scenarios failed, with reference to the original feature file
      • /batch-X: all logs for the Xth batch in the initial execution run
      • executed.txt: lists all scenarios executed in all batches of the initial run, with reference to the original feature file
      • failed.txt: lists all scenarios failed in the initial run, with reference to the original feature file
  • /retry-X: all logs for the Xth retry run
  • /html-reports: the pretty HTML reports generated using the cucumber-reporting library.  Note that every scenario is present in these reports: a scenario with 2 retries will appear 3 times.  In the JUnit XML report only the last attempt is present.
  • cucumber-report.json: aggregated JSON report for all features
  • cucumber-report.xml: aggregated JUnit XML report for all features
  • failed.txt: lists all scenarios that were still failing when the ConsoleRunner finished
  • retries.txt: lists all scenarios that have been retried, along with the number of times they have been retried; this file is only present if there were retries

When executing on a build server, it is often best to publish the output folder as a shared artifact so you can easily access the logs in case of test failures.

Migrating from using Cucumber Main

The previous pom.xml of CWB configured Cucumber Main and cucumber-reporting as two consecutive plugins.  The new ConsoleRunner generates the HTML reports by default, so the separate cucumber-reporting task can usually be removed.

pom.xml excerpt using Cucumber Main class
 <plugin>
   <artifactId>maven-antrun-plugin</artifactId>
   <version>1.7</version>
   <executions>
      <execution>
         <phase>integration-test</phase>
         <configuration>
            <target>
               <java classname="cucumber.api.cli.Main" fork="yes">
                  <sysproperty key="file.encoding" value="UTF-8"/>
                  <sysproperty key="environment" value="${environment}"/>
                  <sysproperty key="browser" value="${browser}"/>
                  <sysproperty key="nopause" value="${nopause}"/>
                  <sysproperty key="highlight" value="${highlight}"/>
                  <classpath refid="maven.test.classpath"/>
                  <arg line="--format com.foreach.cuke.core.formatter.ConsoleReporter"/>
                  <arg line="--format html:${project.build.directory}/cucumber/${report.output.dir}/plain-html-reports"/>
                  <arg line="--format junit:${project.build.directory}/cucumber/${report.output.dir}/cucumber-report.xml"/>
                  <arg line="--format json:${project.build.directory}/cucumber/${report.output.dir}/cucumber-report.json"/>
                  <arg line="--glue com.foreach.cuke"/>
                  <arg line="--glue mypackage"/>
                  <arg line="--tags ~@ignore"/>
                  <arg line="--tags ${tags}"/>
                  <arg line="${project.basedir}/features/${features}/"/>
               </java>
            </target>
         </configuration>
         <goals>
            <goal>run</goal>
         </goals>
      </execution>
   </executions>
</plugin>
<plugin>
   <groupId>net.masterthought</groupId>
   <artifactId>maven-cucumber-reporting</artifactId>
   <version>0.0.4</version>
   <executions>
      <execution>
         <id>cucumber-reports</id>
         <phase>integration-test</phase>
         <goals>
            <goal>generate</goal>
         </goals>
         <configuration>
            <projectName>${project.name}</projectName>
            <outputDirectory>
               ${project.build.directory}/cucumber/${report.output.dir}/pretty-html-reports
            </outputDirectory>
            <cucumberOutput>
               ${project.build.directory}/cucumber/${report.output.dir}/cucumber-report.json
            </cucumberOutput>
            <enableFlashCharts>true</enableFlashCharts>
         </configuration>
      </execution>
   </executions>
</plugin>
same configuration using ConsoleRunner
 <plugin>
   <artifactId>maven-antrun-plugin</artifactId>
   <version>1.7</version>
   <executions>
      <execution>
         <phase>integration-test</phase>
         <configuration>
            <target>
               <java classname="com.foreach.cuke.core.cli.ConsoleRunner" fork="yes" failonerror="true">
                  <classpath refid="maven.test.classpath"/>
                  <sysproperty key="file.encoding" value="UTF-8"/>
                  <arg line="--output ${project.build.directory}/cucumber/"/>
                  <arg line="--threads 1"/>
                  <arg line="--retries 0"/>
                  <arg line="--glue com.foreach.cuke"/>
                  <arg line="--glue mypackage"/>
                  <arg line="--tags ~@ignore"/>
                  <arg line="--tags ${tags}"/>
                  <arg line="-P browser=${browser}"/>
                  <arg line="-P highlight=${highlight}"/>
                  <arg line="-P environment=${environment}"/>
                  <arg line="${project.basedir}/src/test/resources/features"/>
               </java>
            </target>
         </configuration>
         <goals>
            <goal>run</goal>
         </goals>
      </execution>
   </executions>
</plugin>

Skipping the HTML reports

If you want to skip the HTML reports generation entirely, you can do so by passing the disableHtmlReports argument.

Customizing the HTML reports

There is little room for customizing the HTML reports generated by the ConsoleRunner.  If you pass a buildName system property to the ConsoleRunner application, that build name will be displayed on the HTML reports instead of the execution time.  If you need more customization, you should skip the report generation and revert to generating the reports manually using the JSON report.


Using the ConsoleRunner can help with running tests faster and limiting the impact of random failing tests. Even though I tried to explain the most important configuration options, there are some more arguments you can use to tweak the execution. For a complete overview and explanation I'd like to refer you to the documentation.

Feel free to comment on this post if you have any questions or remarks.