Tempest Run

Runs tempest tests

This command is used for running the tempest tests

Test Selection

Tempest run has several options:

  • --regex/-r: This is a selection regex like what stestr uses. It will run any tests that match on re.match() with the regex

  • --smoke/-s: Run all the tests tagged as smoke

  • --exclude-regex: Allows you to do simple test exclusion by passing a rejection/exclusion regex
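
For illustration, a few invocations using these selection options might look like the following (the regex values here are only placeholders):

# run every test whose ID matches the regex
tempest run --regex '^tempest\.api\.compute'

# run only the tests tagged as smoke
tempest run --smoke

# run everything except the tests matching the exclusion regex
tempest run --exclude-regex '^tempest\.scenario'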

There are also the --exclude-list and --include-list options, which let you pass a filepath to tempest run; the file format is one regex per line, with ‘#’ used to signify the start of a comment on a line. For example:

# Regex file
^regex1 # Match these tests
.*regex2 # Match those tests
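
Assuming a file like the one above is saved as include-list.txt (the filenames here are only examples), it can be passed to tempest run like this:

# run only the tests matching one of the regexes in the file
tempest run --include-list include-list.txt

# or, with --exclude-list, skip the tests matching the file instead
tempest run --exclude-list exclude-list.txt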

These arguments are passed straight through to stestr; refer to the stestr selection docs for more details on how they operate: http://stestr.readthedocs.io/en/latest/MANUAL.html#test-selection

You can also use the --list-tests option in conjunction with selection arguments to list which tests will be run.

You can also use the --load-list option, which lets you pass a filepath to tempest run; the file contains exact test IDs rather than regexes, one per line, in the same form as the output of the --list-tests option. You can select the target tests by removing unwanted tests from a list file generated with the --list-tests option.
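
For example, you might first generate a list of tests, trim it by hand, and then feed it back to tempest run (the filename is only an example, and the listing output may need minor cleanup before reuse):

# write the selected test IDs to a file
tempest run --regex '^tempest\.api\.compute' --list-tests > test-list.txt

# after removing any unwanted tests from test-list.txt, run only what is left
tempest run --load-list test-list.txt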

You can also use the --worker-file option, which lets you pass a filepath to a worker YAML file, allowing you to manually schedule the test run. For example, you can set up a tempest run with different concurrencies to be used with different regexes. An example worker file is shown below:

# YAML Worker file
- worker:
  # you can have more than one regex per worker
  - tempest.api.*
  - neutron_tempest_tests
- worker:
  - tempest.scenario.*

This will run tests matching ‘tempest.api.*’ and ‘neutron_tempest_tests’ under worker 1, and tests matching ‘tempest.scenario.*’ under worker 2.

You can mix manual scheduling with the standard scheduling mechanisms by setting the concurrency field on a worker. For example:

# YAML Worker file
- worker:
  # you can have more than one regex per worker
  - tempest.api.*
  - neutron_tempest_tests
  concurrency: 3
- worker:
  - tempest.scenario.*
  concurrency: 2

This will run tests matching ‘tempest.api.*’ and ‘neutron_tempest_tests’ across 3 workers, and tests matching ‘tempest.scenario.*’ across 2 workers.

This worker file is passed straight into stestr. For more details on how it operates, please refer to the stestr scheduling docs: https://stestr.readthedocs.io/en/stable/MANUAL.html#test-scheduling
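
Assuming the worker file above is saved as worker-file.yaml (the filename is only an example), such a run might be invoked like this:

# run tests using the manual schedule defined in the worker file
tempest run --worker-file worker-file.yaml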

Test Execution

There are several options to control how the tests are executed. By default, tempest will run in parallel with one worker for each CPU present on the machine. If you want to adjust the number of workers, use the --concurrency option, and if you want to run tests serially, use --serial/-t.
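
For instance, limiting the run to four workers, or forcing serial execution, might look like this:

# run with four parallel workers instead of one per CPU
tempest run --smoke --concurrency 4

# run the selected tests serially in a single worker
tempest run --smoke --serial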

Running with Workspaces

Tempest run enables you to run your tempest tests from any tempest workspace you have set up. It relies on you having created a workspace with either the tempest init or tempest workspace commands. Then, using the --workspace CLI option, you can specify which of your workspaces you want to run tempest from. With this option you don’t have to run tempest with your current working directory set to the workspace; tempest will take care of managing everything to be executed from there.
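
As a sketch, assuming a workspace named cloud-01 has been created beforehand (the name is only an example):

# create and register a workspace named cloud-01
tempest init cloud-01

# later, from any directory, run the smoke tests against that workspace
tempest run --workspace cloud-01 --smoke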

Running from Anywhere

Tempest run provides you with an option to execute tempest from anywhere on your system. In this case you are required to provide a config file with the --config-file option. When run, tempest will create a .stestr directory and a .stestr.conf file in your current working directory. This way you can use stestr commands directly to inspect the state of the previous run.
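
For example, assuming your configuration lives at /etc/tempest/tempest.conf (the path is only an example):

# run from an arbitrary directory by pointing at an explicit config file
tempest run --config-file /etc/tempest/tempest.conf --smoke

# the .stestr directory created alongside lets you inspect the run afterwards
stestr last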

Test Output

By default, tempest run’s output to STDOUT will be generated using the subunit-trace output filter. If you would prefer that a subunit v2 stream be output to STDOUT instead, use the --subunit flag.
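
For example, the raw subunit v2 stream can be redirected to a file for later processing with subunit tooling (the filename is only an example):

# save the subunit v2 stream instead of printing the subunit-trace summary
tempest run --smoke --subunit > smoke-run.subunit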

Combining Runs

There are certain situations in which you want to split a single run of tempest across two executions of tempest run (for example, to run part of the tests serially and others in parallel). To accomplish this while still treating the results as a single run, you can leverage the --combine option, which will append the current run’s results to the previous run’s results.
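
A sketch of such a split run, with placeholder regexes:

# first pass: run the API tests in parallel
tempest run --regex '^tempest\.api'

# second pass: run the scenario tests serially, appending to the previous results
tempest run --combine --serial --regex '^tempest\.scenario'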